At the Build conference this morning, Microsoft demonstrated how its intelligent agents will no longer be restricted to self-contained commands, but will be able to engage in full conversations with customers.
At the moment, virtual agents are incapable of combining skills or carrying context from one interaction to the next. Current virtual agents rely on a back-end system with a manually curated set of skills or intents in order to interpret the customer's query.
Since acquiring Semantic Machines last year, Microsoft has created breakthrough new conversational AI technology with the intention of building a "future where every organisation has their own agents with their own unique contexts, just like they have their own websites and apps today, and where those agents can seamlessly inter-operate".
The AI technology builds up a memory from turn to turn and crosses skill boundaries, connecting back-end services within Microsoft and externally. It will power a new class of multi-turn, multi-domain, and multi-agent experiences.
The new conversational engine won't just benefit companies; the technology will also be integrated into Cortana, so Windows devices will understand commands and carry them out more efficiently. It will also be made available to developers through the Bot Framework, as well as other Azure surfaces.
Check out a video of it in action below: