Items tagged with deep learning

Google's DeepMind has been working on some truly incredible things over the past couple of years. Just last week, we learned that DeepMind would be teaching itself how to play StarCraft II, and that's far from the first time it has had a gaming focus. Before Google acquired DeepMind a couple of years ago, its AI was used to learn and conquer Atari games, and more recently, it taught itself how to beat an expert at Go. Since then, we've seen DeepMind used to enhance AI speech generation, and even work to combat blindness. Now it's going to teach itself how to identify objects in a virtual world. As we covered last month, DeepMind's engineers have been hard at work on helping its AI teach itself, and... Read more...
We often joke about certain advances in technology leading to Skynet scenarios where machines wage war with humans, but sometimes it feels inevitable. Take, for example, what a team of researchers from Google Brain, Google's deep learning project, has discovered. In our quest to advance machine learning capabilities, neural networks are now able to devise their own encryption schemes, which in turn could allow them to communicate in secret with each other. Potential for human extinction aside, it's a rather fascinating thing. Neural networks are computer systems loosely modeled after the neural structure of the brain. Researchers Martin Abadi and David Andersen demonstrated that neural networks... Read more...
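To give a rough idea of what that research describes, here's a minimal sketch of the adversarial Alice/Bob/Eve setup in PyTorch. The layer sizes, loss weighting, and training schedule below are illustrative guesses, not the exact configuration Abadi and Andersen used:

```python
# Minimal sketch of adversarial neural cryptography, Alice/Bob/Eve style.
# Alice encrypts a plaintext with a shared key, Bob decrypts with the key,
# and Eve eavesdrops on the ciphertext alone. Sizes and the training loop
# are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

N_BITS = 16  # plaintext/key/ciphertext length

def mlp(in_dim, out_dim):
    # Small fully connected network; the original work used mixed conv layers.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N_BITS, N_BITS)   # input: plaintext + key
bob   = mlp(2 * N_BITS, N_BITS)   # input: ciphertext + key
eve   = mlp(N_BITS, N_BITS)       # input: ciphertext only

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # Random plaintexts and keys encoded as -1/+1 bits.
    return (torch.randint(0, 2, (size, N_BITS)).float() * 2 - 1,
            torch.randint(0, 2, (size, N_BITS)).float() * 2 - 1)

for step in range(5000):
    # Train Alice and Bob: Bob should recover the plaintext, Eve should not.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    # Push Eve toward chance-level error (about 1.0 for -1/+1 bits) while keeping Bob accurate.
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

    # Train Eve alone on fresh data to decrypt without the key.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    e_loss = (eve(c) - p).abs().mean()
    opt_e.zero_grad(); e_loss.backward(); opt_e.step()
```

If training settles the way the paper reports, Bob's reconstruction error drops toward zero while Eve's stays near chance, meaning Alice and Bob have effectively agreed on a key-dependent encoding Eve can't crack.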
“Anything you can do I can do better; I can do anything better than you.” That is likely Microsoft’s mantra, as its research wizards have reached a milestone in speech recognition, with a word error rate (WER) of just 5.9 percent. That figure is down from last month, when Microsoft’s speech recognition system stood at a 6.3 percent WER. “We’ve reached human parity,” said Xuedong Huang, Microsoft’s chief speech scientist. “This is an historic achievement.” But things are a little bit better than that; Microsoft says its speech recognition system actually “makes the same or fewer errors” than professional transcriptionists. “Even five years ago, I wouldn’t have thought we could have achieved... Read more...
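For context on that 5.9 percent figure, word error rate is simply the word-level edit distance between the recognizer's transcript and a reference transcript, divided by the number of reference words. Here's a quick sketch of that calculation; the sample sentences are made up, not Microsoft's test data:

```python
# Word error rate (WER): (substitutions + deletions + insertions) / reference words,
# computed here with a standard Levenshtein edit distance over word tokens.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between the first i reference words and first j hypothesis words
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    return dist[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: 1 substitution over 6 reference words -> roughly 16.7% WER.
print(wer("the cat sat on the mat", "the cat sat on a mat"))
```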
A lot has been accomplished with Google's DeepMind artificial intelligence subsidiary, and it looks like the progress train isn't about to slow down anytime soon. As a quick recap, this year alone we've seen DeepMind take down a Go champion, get started on an AI kill switch, attempt to cure blindness, improve speech generation, and even slash a mammoth power bill. What's next? Completely self-contained learning. We wouldn't be shocked if ideas of Skynet are running through your head right now. With a new hybrid system called the Differentiable Neural Computer (DNC), a neural network is paired with external storage (a memory bank the network can read from and write to), and the computer's AI is smart enough... Read more...
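For a rough feel of the "neural network plus external memory" idea behind the DNC, here's a minimal sketch of content-based addressing over a memory matrix, the basic mechanism such a system uses to look things up in its storage. The dimensions and sharpening factor are illustrative, not DeepMind's actual implementation:

```python
# Rough sketch of content-based addressing over an external memory matrix,
# the building block a DNC-style controller uses to read its storage.
import numpy as np

def cosine_similarity(memory, key):
    # memory: (slots, width), key: (width,)
    num = memory @ key
    den = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return num / den

def content_read(memory, key, beta=5.0):
    # Turn similarities into attention weights over memory slots, then
    # return the weighted average of the slots as the read vector.
    scores = beta * cosine_similarity(memory, key)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory, weights

# Hypothetical 8-slot memory of 4-wide vectors; in a real DNC the controller
# network would emit `key` (and the write operations) at every timestep.
memory = np.random.randn(8, 4)
key = memory[3] + 0.1 * np.random.randn(4)   # a noisy cue for slot 3
read_vector, weights = content_read(memory, key)
print(weights.round(2))  # attention should concentrate near slot 3
```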
We've been hearing about NVIDIA's NVLink for quite a while -- ever since the original announcement of Pascal -- but we've still not seen it put into broad use. That changes with NVIDIA's super-high-end Tesla P100, though it's still not shipping in huge quantities. What might come first is IBM's newest Linux-based servers, which also employ NVLink for accelerated AI and deep-learning research. As you might suspect with the use of NVLink, IBM's latest Linux servers are GPU-focused, and because of that, IBM says they can offer up to 80% better performance-per-dollar than solely x86-based servers. It's no secret that GPUs are highly parallel beasts, and NVIDIA itself has touted the benefits... Read more...
Today’s opening keynote at the Intel Developer Forum focused on a number of forward-looking AI, deep learning, connectivity, and networking technologies, like 5G and Silicon Photonics. But late in the address, Intel’s Vice President and General Manager of its Data Center Group (DCG), Diane Bryant, quickly dropped a few details regarding the company’s next-generation Xeon Phi processor, codenamed Knights Mill. Knights Mill is designed for high-performance machine learning and artificial intelligence workloads, and is currently slated for release sometime in 2017. According to Bryant, Knights Mill is optimized for scale-out analytics implementations and it will include architectural enhancements... Read more...
NVIDIA has just announced that its first-ever DGX-1 deep-learning server has found a home, and it couldn't be more appropriate. That new home is with OpenAI, the world's largest non-profit artificial intelligence research organization, which is based in San Francisco. If the OpenAI name sounds familiar but you can't quite place why, it's probably because we've talked about it a couple of times before. Alongside fellow industry legends, Tesla's Elon Musk helped launch OpenAI last year with a major goal: to advance AI with the mindset that it should benefit all humankind. The integration of the DGX-1 into its first lab was special enough for NVIDIA CEO Jen-Hsun Huang to show up and hand-deliver... Read more...
Intel Xeon Phi Processor Die

It’s been nearly two years since we first heard about Intel’s next-generation Xeon Phi “Knights Landing” processors, which are geared toward the High Performance Computing (HPC) segment. The processors are a big part of Intel’s Scalable System Framework (SSF) and are built on a general-purpose x86 architecture using open standards. Intel says that this offers customers greater flexibility with respect to programming languages and tools for software development — basically, anything that can run on traditional Xeon processors is available to the Xeon Phi family. Today, Intel announced that its Xeon Phi processors are finally available to customers. This comes nearly... Read more...
The concept and implementation of artificial intelligence is nothing new, but with today's computer hardware at our disposal, AI continues to advance at a rapid pace. Siri and Cortana are both effective AI bots, able to understand a great number of your queries and spit back an answer immediately. As time goes on, AI is only going to become more prevalent, and more important. That's a thought that Dave Coplin, Microsoft UK's Chief Envisioning Officer, agrees with. At an AI conference held late last week, Coplin made the bold claim that AI is the most important technology anyone is working on today. It's something that's not just going to benefit the companies... Read more...
Yesterday, during his keynote address at GTC 2016, NVIDIA CEO Jen-Hsun Huang made a number of interesting announcements and disclosures. We saw Apple co-founder Steve “Woz” Wozniak take a virtual tour of Mars and witnessed the official unveiling of NVIDIA’s Tesla P100, which is based on the company’s bleeding-edge GP100 GPU. GP100 leverages NVIDIA’s forward-looking Pascal GPU architecture and features 16GB of HBM2 memory, all built using TSMC’s 16nm FinFET manufacturing process. The GP100 is a massive chip, with a 600mm² die that's about the size of current-gen high-end Maxwell GPUs; considering it's built on 16nm process technology, it's obvious the Tesla P100's compute resources are massive,... Read more...
NVIDIA hit it out of the park with the fourth quarter earnings it reported this afternoon. The Silicon Valley graphics and system-on-a-chip titan outpaced last year’s already strong performance with revenue of $1.4 billion (a 12 percent increase from the same quarter a year ago). Net income came in at $297 million, or 35 cents per share. NVIDIA’s full fiscal year 2016 revenue was reported at $5.01 billion, a 7 percent increase over fiscal 2015. The company’s quarterly performance far outpaced analysts’ expectations, and NVIDIA was rewarded with a 7 percent share boost in after-hours trading. Looking ahead to fiscal Q1 2017, NVIDIA is expecting revenue of $1.26 billion and capital... Read more...