Items tagged with Deep-Mind

Artificial intelligence is all the rage these days, as it's being used in everything from our smartphones to our digital assistants to even the vehicles we drive (or rather, that are driven for us). However, as AI becomes even more powerful, there are those who say that the technology should have limits on where it can be applied. Most often, those limits are envisioned for robots that would be placed on the battlefield and would have the ability to target and potentially kill without human oversight. A group of researchers and companies that has expertise in the... Read more...
Researchers and computer scientists far and wide are leveraging artificial intelligence for all kinds of tasks, everything from weather prediction and automating dangerous tasks, to finding cures for diseases and solving complex social problems. Oh, and gaming. In fact, Google trained its DeepMind system to play Quake III, and apparently it's kicking some human butt. Why train an advanced AI system to play a computer game? There are actually practical applications, and they have nothing to do with winning an esports tournament or bragging rights. "Billions of people inhabit the planet, each with... Read more...
Google has been able to train its various DeepMind AIs to do some very cool things. The AlphaZero AI was able to destroy the highly acclaimed Stockfish chess program over a 100-game match, winning 28 of the matches outright and drawing the other 72 for a no-loss record. DeepMind certainly isn't the only AI that has performed some cool and impressive feats, however. On the pure gaming front, EA has developed an AI that is able to successfully battle human players in Battlefield 1. And yet another AI, inspired by the human visual cortex, was recently demonstrated as having... Read more...
The greatest chess matches are no longer played by mere mortals of flesh, bone, and blood, but by sophisticated artificial intelligence (AI) schemes. Sitting at the top, at least for now, is AlphaZero, a product of Google's DeepMind division, where machine learning rules the day. It also rules the chess board—AlphaZero took down Stockfish, the sophisticated open-source chess engine that many players use to prepare for big matches. Stockfish's recent string of successes includes winning the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship. It is a formidable... Read more...
Anyone out there who shares Elon Musk's fear of a Skynet apocalypse may find Google's latest AI, dubbed AlphaGo Zero, to be frightening. The new AI is the follow-up to the original AlphaGo AI that dominated all human players in the ancient Chinese game of "Go". Google chose Go because it is said to be a game of intuition, and the original AlphaGo AI was able to beat the top human player in four out of five games. AlphaGo Zero completed three days of self-learning and then challenged AlphaGo to a match. Zero decimated its predecessor, winning 100 games out of 100. "AlphaGo Zero not only rediscovered... Read more...
DeepMind is an offshoot of Google with a scientific mission in mind: developing Artificial Intelligence (AI) systems that have the ability to learn to solve complex problems. DeepMind says that meeting that mission requires it to design agents and then test their ability in a range of environments, some built specifically for testing AI and others originally built for humans. DeepMind has announced that it has teamed up with Blizzard Entertainment to open the popular video game StarCraft II for AI testing and research. The two firms have released SC2LE, which is... Read more...
Google is heavily invested in artificial intelligence, one of the hottest fields in tech right now, alongside machine learning, the two of which are tightly intertwined. As part of its ongoing effort in AI, Google's DeepMind team has opened an AI research office in Edmonton, Canada. This is the team's first AI research office located outside of the UK. "It was a big decision for us to open our first non-UK research lab, and the fact we’re doing so in Edmonton is a sign of the deep admiration and respect we have for the Canadian research community," Google stated... Read more...
A board game like "Go" might not look complicated on the surface to the untrained eye, which could lead the uninformed to believe that it wouldn't be all that difficult for a computer to best a human player in a head-to-head match. We've seen many examples in the past where that hasn't been the case (IBM's Watson is a good start), and it's because, despite the game's simple nature, the number of possible moves at any given time can be astronomical. Last month, we wrote about Google's DeepMind and its challenge of going up against the world's best Go player, Ke Jie. Fast-forward to now, and... Read more...
A year after defeating South Korea’s Go master Lee Se-dol, Google’s DeepMind AlphaGo is once again ready to challenge the world’s top players. Google, the China Go Association, and the Chinese government are planning to host the “Future of Go Summit” in Wuzhen this coming May. The “Future of Go Summit” is a five-day festival intended to bring together Google’s AI experts and China’s top Go players, with a number of major events planned. “Pair Go” will be a game between two pros, each of whom will also have AlphaGo as a teammate. This event is supposed to “take the... Read more...
Over the years, we've seen numerous examples of how AI can enrich our lives. It could brew us an awesome beer, solve serious health diagnosis cases, and of course, handle everyday problem-solving tasks in a number of other ways. We can also use AI and deep learning to simulate real-world scenarios, such as what would happen if resources became scarce, among other growing environmental concerns. Google's DeepMind team has just detailed one of the projects it's working on that revolves around that exact subject. They have also created two demos (seen in the videos below) to showcase... Read more...
Google's DeepMind has been working on some truly incredible things over the past couple of years. Just last week, we learned that DeepMind would be teaching itself how to play StarCraft II, which wouldn't be the first time it had a gaming focus. Before Google acquired DeepMind a couple of years ago, its AI was used to learn and conquer Atari games, and more recently, it taught itself how to beat an expert at Go. Since then, we've seen DeepMind used to enhance AI-speech generation, and even work to conquer blindness. Now it's going to teach itself how to identify objects in a virtual world. As we... Read more...
Google announced at BlizzCon 2016 in Anaheim, California, that it is collaborating with Blizzard Entertainment to use StarCraft II as a training platform for artificial intelligence (AI) and machine learning research. As part of that partnership, Google will pit its DeepMind project against human players in StarCraft II and use the information obtained to tweak its AI algorithms. "DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this,... Read more...
A lot has been accomplished with Google's DeepMind artificial intelligence subsidiary, and it looks like the progress train isn't about to slow down anytime soon. As a quick recap, this year alone we've seen DeepMind take down a Go player, get started on an AI kill switch, attempt to cure blindness, improve speech generation, and even slash a mammoth power bill. What's next? Completely self-contained learning. We wouldn't be shocked if you started having ideas of Skynet running through your head right now. With a new hybrid system called Differentiable Neural Computer (DNC), a neural network is... Read more...
Google’s powerful DeepMind AI is saving the company a lot of dough. The technology has resulted in a 15 percent improvement in power usage efficiency (PUE). Get your calculators out. Google said it used 4,402,836 MWh of electricity in 2014, the equivalent of 366,903 family homes in the United States. The average American family paid between $25 and $40 USD per MWh in 2014. If we split the difference, we can theorize that Google spent at least $140,890,752 just on its 2014 electric bill. A fifteen percent improvement means a little over $21 million in savings. Most of Google’s electric... Read more...
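The back-of-the-envelope math above can be reproduced in a few lines. This is just a sanity check of the article's own figures, not Google's actual accounting; the quoted bill of $140,890,752 implies an effective rate of $32 per MWh, near the middle of the quoted $25–$40 range:

```python
# Reproduce the article's estimate of Google's 2014 electric bill and the
# savings from a 15 percent efficiency improvement.
usage_mwh = 4_402_836   # Google's reported 2014 electricity usage, in MWh
price_per_mwh = 32      # effective rate implied by the article's bill figure
improvement = 0.15      # 15 percent PUE improvement credited to DeepMind

annual_bill = usage_mwh * price_per_mwh
savings = annual_bill * improvement

print(f"Estimated 2014 bill: ${annual_bill:,.0f}")   # $140,890,752
print(f"Estimated savings:   ${savings:,.0f}")       # a little over $21 million
```

At $32/MWh the bill works out to exactly the article's $140,890,752, and 15 percent of that is roughly $21.1 million, matching the "little over $21 million" claim.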