Items tagged with DeepMind

The greatest chess matches are no longer played by mere mortals of flesh, bone, and blood, but by sophisticated artificial intelligence (AI) systems. Sitting at the top, at least for now, is AlphaZero, a product of Google's DeepMind division, where machine learning rules the day. It also rules the chess board: AlphaZero took down Stockfish, the sophisticated open-source chess engine that many players use to prepare for big matches. Stockfish's recent string of successes includes winning the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship. It is a formidable... Read more...
Anyone out there who shares Elon Musk's fear of a Skynet apocalypse may find Google's latest AI, dubbed AlphaGo Zero, to be frightening. The new AI is the follow-up to the original AlphaGo AI that dominated all human players in the ancient Chinese game of Go. Google chose Go because it is said to be a game of intuition, and the original AlphaGo AI was able to beat the top human player in four out of five games. AlphaGo Zero completed three days of self-learning and then challenged AlphaGo to a match. Zero decimated its predecessor, winning 100 games out of 100. "AlphaGo Zero not only rediscovered... Read more...
DeepMind is an offshoot of Google with a scientific mission in mind: developing artificial intelligence (AI) systems that have the ability to learn to solve complex problems. DeepMind says that meeting that mission requires it to design agents and then test their ability in a range of environments, some built specifically for AI testing and others originally built for humans. DeepMind has announced that it has teamed up with Blizzard Entertainment to open up the popular video game StarCraft II for AI testing and research. The two firms have released SC2LE, which is... Read more...
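For readers curious what SC2LE actually exposes to researchers, here is a minimal sketch using PySC2, the Python component of the release. It simply runs PySC2's built-in random agent on one of the published mini-game maps; the map name, screen/minimap resolutions, and loop structure are illustrative choices, and the exact constructor arguments can vary between PySC2 releases.

```python
# Minimal sketch: run PySC2's built-in random agent on an SC2LE mini-game.
# Assumes pysc2 and StarCraft II are installed; settings are illustrative.
from absl import app

from pysc2.agents import random_agent
from pysc2.env import sc2_env
from pysc2.lib import features


def main(unused_argv):
    agent = random_agent.RandomAgent()
    with sc2_env.SC2Env(
            map_name="MoveToBeacon",                       # one of the SC2LE mini-games
            players=[sc2_env.Agent(sc2_env.Race.terran)],
            agent_interface_format=features.AgentInterfaceFormat(
                feature_dimensions=features.Dimensions(screen=84, minimap=64)),
            step_mul=8,                                     # act once every 8 game steps
            visualize=False) as env:
        # Standard observe -> act loop: the agent receives feature-layer
        # observations and returns an in-game action (a function call).
        agent.setup(env.observation_spec()[0], env.action_spec()[0])
        timesteps = env.reset()
        agent.reset()
        while not timesteps[0].last():
            actions = [agent.step(timesteps[0])]
            timesteps = env.step(actions)


if __name__ == "__main__":
    app.run(main)
```

Swapping the random agent for a scripted or learned one only changes the object whose step() method maps observations to actions; the environment loop stays the same.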
Google is heavily invested in artificial intelligence, one of the hottest fields in tech right now alongside machine learning; the two are tightly intertwined. As part of its ongoing effort in AI, Google's DeepMind team has opened an AI research office in Edmonton, Canada. This is the team's first and only AI research office located outside of the UK. "It was a big decision for us to open our first non-UK research lab, and the fact we’re doing so in Edmonton is a sign of the deep admiration and respect we have for the Canadian research community," Google stated... Read more...
A board game like "Go" might not look complicated on the surface to the untrained eye, which could lead the uninformed to believe that it wouldn't be all that difficult for a computer to best a human player in a head-to-head match. We've seen many examples in the past where that hasn't been the case (IBM's Watson is a good start), and that's because, despite such games' simple appearance, the number of possible moves at any given point can be astronomical. Last month, we wrote about Google's DeepMind and its challenge of going up against the world's best Go player, Ke Jie. Fast-forward to now, and... Read more...
A year after defeating South Korea’s Go master Lee Se-dol, Google’s DeepMind AlphaGo is once again ready to challenge the world’s top players. Google, the China Go Association, and the Chinese government are planning to host the “Future of Go Summit” in Wuzhen this coming May. The “Future of Go Summit” is a five-day festival intended to bring together Google’s AI experts and China’s top Go players. There will be a number of major events during the festival. “Pair Go” will be a game between two pros; however, each pro will also have AlphaGo as a teammate. This event is supposed to “take the... Read more...
Over the years, we've seen numerous examples of how AI can enrich our lives. It could brew us an awesome beer, solve serious health diagnosis cases, and, of course, handle everyday problem-solving tasks in a number of other ways. We can also use AI and deep learning to simulate real-world scenarios, such as what would happen if resources became scarce, among other growing environmental concerns. Google's DeepMind team has just detailed one of the projects it's working on that revolves around that exact subject. It has also created two demos (seen in the videos below) to showcase... Read more...
Google's DeepMind has been working on some truly incredible things over the past couple of years. Just last week, we learned that DeepMind would be teaching itself how to play StarCraft II, which wouldn't be the first time it had a gaming focus. Before Google acquired DeepMind a couple of years ago, its AI was used to learn and conquer Atari games, and more recently, it taught itself how to beat an expert at Go. Since then, we've seen DeepMind used to enhance AI speech generation, and even work to conquer blindness. Now it's going to teach itself how to identify objects in a virtual world. As we... Read more...
Google announced at BlizzCon 2016 in Anaheim, California, that it is collaborating with Blizzard Entertainment to use StarCraft II as a training platform for artificial intelligence (AI) and machine learning research. As part of that partnership, Google will pit its DeepMind project against human players in StarCraft II and use the information obtained to tweak its AI algorithms. "DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this,... Read more...
A lot has been accomplished by Google's DeepMind artificial intelligence subsidiary, and it looks like the progress train isn't about to slow down anytime soon. As a quick recap, this year alone we've seen DeepMind take down a Go player, get started on an AI kill switch, attempt to cure blindness, improve speech generation, and even slash a mammoth power bill. What's next? Completely self-contained learning. We wouldn't be shocked if you started having ideas of Skynet running through your head right now. With a new hybrid system called the Differentiable Neural Computer (DNC), a neural network is... Read more...
Google’s powerful DeepMind AI is saving the company a lot of dough. The technology has resulted in a 15 percent improvement in power usage effectiveness (PUE). Get your calculators out. Google said it used 4,402,836 MWh of electricity in 2014, which is the equivalent of 366,903 family homes in the United States. The average American family paid between $25 and $40 USD per MWh in 2014. If we split this difference, we can theorize that Google spent at least $140,890,752 just on its 2014 electric bill. A fifteen percent improvement means a little over $21 million in savings. Most of Google’s electric... Read more...
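For anyone who wants to check that back-of-the-envelope math, here is a small Python sketch reproducing it. The $32/MWh rate is simply the rate implied by the article's own $140,890,752 figure (the exact midpoint of the quoted range, $32.50, would give a slightly higher bill), and the 15 percent is applied to the whole bill, following the article's simplification.

```python
# Reproduce the article's back-of-the-envelope savings estimate.
# The ~$32/MWh rate is inferred from the article's own numbers;
# it is not an independently verified electricity price.
energy_2014_mwh = 4_402_836      # Google's reported 2014 electricity usage
price_per_mwh = 32.0             # USD, implied by the article's bill estimate
pue_improvement = 0.15           # 15 percent improvement credited to DeepMind

estimated_bill = energy_2014_mwh * price_per_mwh
estimated_savings = estimated_bill * pue_improvement

print(f"Estimated 2014 electric bill: ${estimated_bill:,.0f}")    # ~$140,890,752
print(f"Estimated annual savings:     ${estimated_savings:,.0f}")  # ~$21,133,613
```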
Google DeepMind recently commenced a new collaboration with the United Kingdom’s National Health Service (NHS). DeepMind is working with Moorfields Eye Hospital in east London. The goal is to create a machine learning system that will be able to use digital eye scans to recognize sight-threatening conditions. DeepMind researchers will use millions of anonymized eye scans to train an algorithm to better spot the early signs of eye conditions. Professor Peng Tee Khaw, the head of Moorfields’ Ophthalmology Research Centre, remarked, “These scans are incredibly detailed,... Read more...
Wondering what's next for AlphaGo, the computer that bested South Korea's Lee Sedol, the world champion in the ancient Chinese board game Go? The artificial intelligence program developed by Google's DeepMind division will go up against Ke Jie, China's top Go grandmaster, sometime in March of next year. This is something the Chinese Go Association wanted to see happen, so it got in contact with Google and its AlphaGo team and discussed a possible match. Google returned the interest, and so the two sides will arrange the match and accompanying details this year, assuming nothing unforeseen gets... Read more...
World-renowned Go player Lee Sedol had aspirations of quickly dispatching Google’s AlphaGo computer. However, Sedol’s chances of embarrassing Google’s DeepMind AI quickly started to evaporate when AlphaGo won the first match. Then AlphaGo won the second match, and early this morning, Google’s AI finished off Sedol, taking its record to 3-0. According to Google, AlphaGo won by resignation after 176 moves had been completed. This challenge was the first time that a computer had gone up against such a highly skilled Go player, and it did so in convincing fashion. With its overall victory sealed,... Read more...