Items tagged with deep learning

It’s happened to us all at some point in time — you capture an image in less-than-ideal light conditions and the end result is a grainy photo filled with digital noise. While you may still be able to make out many of the details in the photograph, wouldn’t it be nice if you could somehow magically restore it to near-perfect condition, the way it was meant to be seen? That’s exactly what researchers from NVIDIA, MIT and Aalto University have been able to achieve using a deep learning AI called Noise2Noise. NVIDIA used a team of Tesla P100 GPUs along with... Read more...
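The counterintuitive idea behind Noise2Noise is that, with zero-mean noise, minimizing mean-squared error against *noisy* targets converges to the same answer as minimizing it against clean targets — so a denoiser can be trained without ever seeing a clean image. A minimal NumPy sketch of that principle (using a per-pixel estimate as a stand-in for the denoising network, and a hypothetical 1-D signal in place of an image; this is an illustration of the statistical idea, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "image": the clean signal we never show to the optimizer.
clean = np.sin(np.linspace(0, 2 * np.pi, 64))

# Fit a per-pixel estimate by gradient descent on MSE, drawing a FRESH
# noisy realization of the signal as the target at every step.
estimate = np.zeros_like(clean)
lr = 0.01
for _ in range(2000):
    noisy_target = clean + rng.normal(0.0, 0.5, size=clean.shape)
    grad = 2.0 * (estimate - noisy_target)  # d/d(estimate) of per-pixel MSE
    estimate -= lr * grad

# Because the noise is zero-mean, the estimate averages it away and
# lands close to the clean signal, despite only noisy targets being used.
rmse = float(np.sqrt(np.mean((estimate - clean) ** 2)))
```

In the paper this same argument is applied to a convolutional network mapping one noisy photo of a scene to a second, independently noisy photo of the same scene.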
It's not quite the Skynet that Tesla CEO Elon Musk has warned us about, but researchers from the Musk-backed OpenAI initiative have made a breakthrough in AI algorithms using Dota 2 as a testbed. OpenAI's achievement is remarkable due in part to its scope. Most AI versus human matches -- be it Go or a computer game -- involve a single computer against a single human (as was the case with OpenAI’s victory last year). But OpenAI has managed to train its AI to compete against humans on a five-player team. The team of five neural networks that was developed is collectively... Read more...
Anyone who has lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube -- at least without peeling the stickers off and rearranging them. It's not just challenging for humans, either. Apparently the six-sided contraption presents a special kind of challenge to modern deep learning techniques that makes it more difficult than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they... Read more...
Gigabyte today announced a couple of new 4U GPU servers for the datacenter, both packed with multiple NVIDIA Tesla GPUs to bring massive parallel computing capabilities to the sector. According to Gigabyte, its new G481-S80 and G481-HA0 offer some of the highest GPU density available in the 4U form factor—the former accommodates eight SXM2 form factor GPUs, such as NVIDIA's Volta-based Tesla V100 or Pascal-based Tesla P100, and the latter packs 10 GPUs. These new servers also feature NVIDIA's NVLink technology supporting bi-directional communication between GPUs, allowing for higher bandwidth... Read more...
Hot on the heels of the debut of its 8th gen Core series, and also its brand-new top-end Core X chips, Intel just announced Loihi. With this new chip, Intel is going all-in on artificial intelligence (AI) and self-learning. It also drops a term you may have heard recently: neuromorphic computing; in effect, neural systems simulation. In his blog post, Intel's Corporate VP and Managing Director of Intel Labs Dr. Michael Mayberry lays down a couple of great examples of how AI could benefit our lives in the future. Picture, for example, stoplight-mounted cameras being tied to an AI backend that adjusted... Read more...
Microsoft was on hand at the Hot Chips 2017 show and rolled out a new deep learning acceleration platform dubbed Project Brainwave. Microsoft's Doug Burger says that the platform is a "major leap forward in both performance and flexibility for cloud-based serving of deep learning models." Microsoft says that it designed the system for real-time AI so the system is able to process requests as fast as they are received with ultra-low latency. Microsoft says that real-time AIs are becoming increasingly important to process live data streams in a cloud infrastructure. That sort of data includes things... Read more...
Intel is expanding its reach into the deep learning field today with the launch of the Neural Compute Stick (NCS), which was developed by its Movidius subsidiary. The Movidius NCS is aimed at democratizing deep learning and artificial intelligence, with Intel billing it as “the world’s first self-contained AI accelerator in a USB format.” The Movidius NCS is powered by the Myriad 2 vision processing unit (VPU), which promises 100 gigaflops of performance while operating within a 1-watt power envelope. Given its low-power requirements, Intel is aiming the Movidius NCS at developers, research... Read more...
We talked yesterday of an example of how deep learning and artificial intelligence can be used to put words in people's mouths, creating video proof of something someone said, even if they didn't really say it. Prospects like that are downright scary, but so too are the realities of the jobs AI will be able to take away from humans. Case in point: professional photography editing. This is a bit of an odd one, as most photographers will edit their own photos, so maybe we should consider this an example of how AI could help someone get through their workflow more efficiently. And perhaps even... Read more...
Many of us have had to heed the warning of, "Don't put words in my mouth" at some point over the course of our lives, but generally speaking, no one actually means it in a literal sense. In time, though, thanks to deep learning and artificial intelligence, putting words in someone's mouth could become a legitimate reality. Complementing the upcoming SIGGRAPH conference in Los Angeles, researchers from the University of Washington have worked their magic to put words into former President Barack Obama's mouth. At least in this case, the words said actually did come from Obama's mouth, but the footage... Read more...
A board game like "Go" might not look complicated on the surface to the untrained eye, which could lead the uninformed to believe that it wouldn't be all that difficult for a computer to best a human player in a head-to-head match. We've seen many examples in the past where that hasn't been the case (IBM's Watson is a good start), and it's because despite their simple nature, the number of solutions/moves at any given time is sometimes astronomical. Last month, we wrote about Google's DeepMind and its challenge of going up against the world's best Go player, Ke Jie. Fast-forward to now, and... Read more...
NVIDIA's push into deep learning and artificial intelligence markets is paying off in a big way. As a result of its ongoing investments in data center GPUs, the company was able to rake in $1.94 billion in revenue during the first quarter of 2017, an increase of 48 percent from $1.3 billion in the same quarter a year ago. After paying the bills, NVIDIA was left with a profit of $507 million, up a whopping 144 percent from last year. "The AI revolution is moving fast and continuing to accelerate," said Jensen Huang, founder and chief executive officer of NVIDIA. "NVIDIA's GPU deep learning platform... Read more...
We are starting to sense a recurring theme from NVIDIA at the annual GPU Technology Conference (GTC). Last year at GTC NVIDIA unveiled its DGX-1, the world's first deep learning supercomputer. Featuring Tesla P100 GPUs based on its Pascal architecture, the DGX-1 made clear NVIDIA's commitment to deep learning and artificial intelligence. Building on top of that, NVIDIA announced at this year's GTC plans to train 100,000 developers through its Deep Learning Institute in 2017. That is a tenfold increase over the number of developers it trained last year. The reason NVIDIA is making such a big commitment... Read more...
You don't need to look very far to see examples of how machine- and deep-learning could enrich our lives. You might be surprised, given how often we hear about deep-learning nowadays, that most companies have only been utilizing the technology to a notable degree for the past couple of years - including Amazon. Despite that, we've seen some amazing things accomplished in a relatively short time. With Amazon specifically, we've seen the company sell the fruits of its deep-learning labor to those who need reliable predictive services, use it to improve its product reviews sections, and of course,... Read more...
Over the years, we've seen numerous examples of how AI can enrich our lives. It could brew us an awesome beer, solve serious health diagnosis cases, and of course, handle everyday problem-solving tasks in a number of other ways. We can also use AI and deep learning to simulate real-world scenarios, such as what would happen if resources became scarce, among other growing environmental concerns. Google's DeepMind team has just detailed one of the projects it's working on that revolves around that exact subject. They have also created two demos (seen in the videos below) to showcase... Read more...