NVIDIA Smashes Records For Conversational AI Training That Could Turbocharge Alexa, Siri, And Google Assistant

Artificial intelligence (AI) seems to be invading every aspect of our lives, from enabling partial and full self-driving cars, to fueling the search engines we use, to more mundane tasks like running algorithms on our smartphones to optimize battery life. AI can augment many of the tasks we perform on a daily basis, and NVIDIA is hoping to lend a hand with its Tesla GPU accelerators.


One big field that's ripe for advancement is conversational AI. While today's digital assistants can "converse" with their human operators, their speech and comprehension abilities are often stilted and not exactly human-like. That's where NVIDIA comes in, with breakthroughs it has made in conversational AI using the Bidirectional Encoder Representations from Transformers (BERT) language model.

Admittedly, NVIDIA had some massive processing power at its disposal to enable state-of-the-art language understanding that can be deployed by large corporations to their customers around the globe. For BERT training, NVIDIA deployed a DGX SuperPOD, which consists of 92 NVIDIA DGX-2H systems outfitted with a total of 1,472 Tesla V100 GPUs. 
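As a quick sanity check on that scale, each DGX-2H system houses 16 Tesla V100 GPUs, so the pod's totals line up. A trivial sketch of the arithmetic:

```python
# Arithmetic on the DGX SuperPOD configuration described above.
# Each NVIDIA DGX-2H system contains 16 Tesla V100 GPUs.
dgx_systems = 92
gpus_per_system = 16

total_gpus = dgx_systems * gpus_per_system
print(total_gpus)  # 1472, matching NVIDIA's stated GPU count
```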

With this power at its disposal, NVIDIA was able to complete BERT-Large training in just 53 minutes. To put that in perspective, BERT-Large training time is usually measured in days. In addition, NVIDIA says that it was able to dramatically improve inference times as well, slashing BERT-Base inference on the SQuAD dataset from 40 milliseconds down to just 2.2 milliseconds, roughly an 18x speedup. According to NVIDIA, anything under 10 milliseconds is fast enough for real-time applications.
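To give a feel for how such a latency budget is checked in practice, here is a minimal sketch of benchmarking inference time against a real-time threshold. The `fake_inference` function is a hypothetical stand-in (it just sleeps for the reported 2.2 ms), not NVIDIA's actual benchmark harness or model:

```python
import statistics
import time

REAL_TIME_BUDGET_MS = 10.0  # NVIDIA's cited threshold for real-time use


def fake_inference():
    """Hypothetical stand-in for a real BERT forward pass."""
    time.sleep(0.0022)  # simulate the reported 2.2 ms inference time


def measure_latency_ms(fn, runs=50):
    """Time fn over several runs and return the median latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)


latency = measure_latency_ms(fake_inference)
print(f"median latency: {latency:.1f} ms, "
      f"real-time: {latency < REAL_TIME_BUDGET_MS}")
```

The median (rather than the mean) is used so that one-off scheduling hiccups don't skew the result.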

So, what does this all mean for us, the end users? Well, it means we'll have better-performing and more intelligent AI chatbots, more relevant search results when typing in a request, and digital assistants like Alexa, Siri, Cortana, and Google Assistant that can process our requests while "thinking" more like a human. Does this mean that a real-life J.A.R.V.I.S. could soon be within our grasp? We'll just have to see...

Not surprisingly, Microsoft is one of the early beneficiaries of NVIDIA's conversational AI advances. "In close collaboration with NVIDIA, Bing further optimized the inferencing of the popular natural language model BERT using NVIDIA GPUs, part of Azure AI infrastructure, which led to the largest improvement in ranking search quality Bing deployed in the last year," said Rangan Majumder, group program manager, Microsoft Bing.

"We achieved two times the latency reduction and five times throughput improvement during inference using Azure NVIDIA GPUs compared with a CPU-based platform."
