Most people have used natural language computing in one form or another, thanks to voice assistants from Amazon, Google, Microsoft, and others. The idea is that you can interact with your device just as if you were talking to a human. The most popular of these voice assistants is Amazon's Alexa, and even Facebook is working on a competitor. Voice assistants are here to stay, and Microsoft wants to make them better.
Systems available today can do things like put an appointment on your calendar, but they can't engage in a back-and-forth dialog with the user about, say, how to handle multiple high-priority requests. Microsoft says the next generation of intelligent assistants will be able to hold such conversations, using breakthroughs in conversational AI and machine learning that were pioneered by Semantic Machines.
Today's voice assistants rely on skills that are programmed by hand. The programmer has to anticipate every way a skill might be used and write a script to cover each case, which limits an assistant's usefulness because no programmer can cover every possibility. With the technology from Semantic Machines, the assistant instead learns to map people's words to the computational steps needed to carry out the requested task.
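To make the contrast concrete, here is a toy sketch in Python. It is purely illustrative and not Semantic Machines' or Microsoft's actual system; all function and variable names are hypothetical. The first part shows a hand-scripted skill that only works for phrasings the programmer anticipated; the second shows the alternative idea of mapping an utterance onto small composable computational steps.

```python
# Illustrative sketch only; names are hypothetical, not a real assistant API.

# --- Hand-scripted skill: every phrasing must be anticipated in advance.
SCRIPTED_RESPONSES = {
    "what's the weather": "fetch_weather()",
    "tell me the weather": "fetch_weather()",
    # Any phrasing the programmer didn't anticipate simply fails.
}

def scripted_skill(utterance: str) -> str:
    """Return a canned action for known phrasings, or give up."""
    return SCRIPTED_RESPONSES.get(utterance.lower(), "Sorry, I can't do that.")

# --- Learned mapping (mocked here with keywords): the system composes
# small computational steps, a tiny "program", from the request instead
# of matching whole phrases against a fixed script.
def plan_steps(utterance: str) -> list:
    words = utterance.lower()
    steps = []
    if "weather" in words:
        steps.append("lookup(domain='weather')")
    if "traffic" in words:
        steps.append("lookup(domain='traffic')")
    if "tomorrow" in words:
        steps.append("shift_date(+1)")
    return steps
```

The scripted version answers "What's the weather" but fails on "weather please", while the step-planning version can combine pieces it has never seen together, e.g. `plan_steps("traffic tomorrow")` yields a traffic lookup plus a date shift.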
The machine learning methods used let the system generalize from contexts it has seen to new ones, so what it learns transfers: a system that learns to fetch sports scores can apply similar steps to fetch weather or traffic reports. The system also has memory and keeps track of context across a conversation, with full-duplex capability to keep the dialog flowing. The system Microsoft is showing off at Build will add to Cortana's existing capabilities and will eventually power conversations across all of Microsoft's products and services.
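The context-tracking idea above can also be sketched. The following is a minimal, hypothetical illustration (again, not the actual product): the assistant remembers the slots from the last request, so a follow-up like "And tomorrow?" reuses the remembered topic instead of starting over.

```python
# Hypothetical sketch of conversational memory; not a real assistant API.
from dataclasses import dataclass

@dataclass
class DialogState:
    """Remembers what the conversation is currently about."""
    last_domain: str = ""
    last_date: str = "today"

def respond(state: DialogState, utterance: str) -> str:
    words = utterance.lower()
    # Update the remembered topic only when the user names one.
    if "weather" in words:
        state.last_domain = "weather"
    elif "traffic" in words:
        state.last_domain = "traffic"
    # A bare "tomorrow" refines the remembered request.
    if "tomorrow" in words:
        state.last_date = "tomorrow"
    if not state.last_domain:
        return "What would you like to know?"
    return f"{state.last_domain} for {state.last_date}"

state = DialogState()
respond(state, "What's the traffic like?")  # "traffic for today"
respond(state, "And tomorrow?")             # "traffic for tomorrow"
```

Because the state persists between turns, the second utterance never mentions traffic yet still resolves to a traffic query, which is the kind of back-and-forth context keeping the article describes.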