Items tagged with Deep-Mind

We’ve seen Alphabet/Google’s DeepMind AI crush human opponents in games like StarCraft II, but now the team is going back in time to tackle games from four decades ago. Dubbed Agent57, this new AI is gaining attention for learning how to play a total of – you guessed it – 57 Atari 2600 games. The DeepMind team often looks to existing games to help improve its AI routines, and it explains that they provide an “excellent testing ground for building adaptive algorithms” spanning a “rich suite of tasks” that players must develop sophisticated behavioral strategies to master. More importantly, each game has its own high... Read more...
Google's DeepMind division has been demolishing human opponents left and right in traditional games like Go (also called Baduk), and even more modern fare like Starcraft. However, one human Go player is calling it quits three years after he was defeated by DeepMind's AlphaGo AI. Lee Se-dol, a South Korean Go champion, told the Yonhap News Agency that he is officially retiring from competitive play. Lee gained worldwide recognition back in 2016 for actually defeating AlphaGo, but the sheer strength of the AI has the champion disillusioned about the future of competitive play. Although Lee was able to best AlphaGo, it was only in a single game out of five. "I rarely read comments... Read more...
Alphabet's DeepMind has spent years developing its AlphaStar AI to beat human players at Starcraft II, one of the most popular and most challenging esports titles in the world. DeepMind says that Starcraft II has posed a tougher challenge to AlphaStar than chess and other board games, in part because a Starcraft opponent's pieces are often hidden from view. Some professional gamers have conflicting emotions about AlphaStar claiming Grandmaster status in the game. Despite any mixed feelings people have about the AI's new status, DeepMind says that AlphaStar's... Read more...
Alphabet owns both DeepMind and Waymo, and the two subsidiaries are working together to train the AIs that operate Waymo's self-driving cars. The duo are training the AI with a technique DeepMind calls population-based training (PBT), which DeepMind previously developed and used to train the video game algorithms that defeated humans in StarCraft II, as well as to train AIs to play Quake III Arena. PBT takes inspiration from biological evolution and can speed up the selection of machine-learning algorithms and parameters for a particular task. PBT does this by having the AI choose the machine-learning algorithms and parameters available for a specific... Read more...
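To give a flavor of the evolution-inspired idea, here is a toy PBT loop in Python. This is a minimal sketch under stated assumptions, not DeepMind's implementation: the objective (minimizing x²), population size, exploit interval, and perturbation factors are all illustrative. Each "worker" trains with its own learning rate, and periodically the laggards copy the leader's state and mutate its hyperparameter:

```python
import random

def pbt_minimize(steps=200, pop_size=8, exploit_every=20, seed=0):
    """Toy population-based training (PBT) sketch: each worker runs
    gradient descent on f(x) = x^2 with its own learning rate, and every
    `exploit_every` steps the worst workers copy (exploit) the best
    worker's parameter and perturb (explore) its learning rate.
    All names and constants here are illustrative assumptions."""
    rng = random.Random(seed)
    # Each worker is [x (model parameter), lr (hyperparameter)].
    workers = [[rng.uniform(-10, 10), rng.uniform(0.001, 0.5)]
               for _ in range(pop_size)]
    for step in range(1, steps + 1):
        for w in workers:
            grad = 2 * w[0]       # derivative of x^2
            w[0] -= w[1] * grad   # one gradient-descent step
        if step % exploit_every == 0:
            workers.sort(key=lambda w: w[0] ** 2)  # rank by loss
            best = workers[0]
            for w in workers[pop_size // 2:]:      # bottom half
                w[0] = best[0]                               # exploit
                w[1] = best[1] * rng.choice([0.8, 1.2])      # explore
    return min(w[0] ** 2 for w in workers)
```

The key design point PBT illustrates is that hyperparameter search happens *during* training rather than in separate sequential runs, which is what makes it attractive for expensive workloads like game-playing agents.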
Google's DeepMind division has had considerable success competing against and crushing human competitors in Starcraft II. The DeepMind AlphaStar artificial intelligence (AI) trounced Grzegorz "MaNa" Komincz of Team Liquid 5-0 back in January in a professional match. However, in that matchup, MaNa knew he was going up against an AI ahead of time, so there were no surprises. Google will now enter AlphaStar into a Starcraft II European competitive ladder in the near future, where it will essentially be going undercover to battle human opponents.  "For scientific test purposes, DeepMind will be benchmarking AlphaStar’s performance by... Read more...
We often joke about the possibility of a Skynet situation playing out, but even if that never happens, there is at least one way that machines can assert their dominance over the flesh and blood race—gaming. We have already seen this play out in numerous ways, such as on Jeopardy when IBM's Watson wiped the floor with its opponents. Google's DeepMind has been very active in this space as well, and is now adding Quake III Arena to the list of games where it can play as well as an actual person. What's unique about Quake III Arena, as it relates to AI, is that its capture-the-flag mode is a team-based sport, one that seemingly requires human traits to play effectively. Attacking... Read more...
Artificial intelligence is all the rage these days, as it's being used in everything from our smartphones to our digital assistants to even the vehicles we drive (or rather, that are driven for us). However, as AI becomes even more powerful, there are those who say that the technology should have limits on where it can be applied. Most often, those limits are envisioned for robots that would be placed on the battlefield and would have the ability to target and potentially kill without human oversight. A group of researchers and companies with expertise in the AI field has come together with a pledge not to develop or participate in the development of machines... Read more...
Researchers and computer scientists far and wide are leveraging artificial intelligence for all kinds of work, everything from weather prediction and automating dangerous tasks, to finding cures for diseases and solving complex social problems. Oh, and gaming. In fact, Google trained its DeepMind system to play Quake III, and apparently it's kicking some human butt. Why train an advanced AI system to play a computer game? There are actually practical applications, and they have nothing to do with winning an esports tournament or bragging rights. "Billions of people inhabit the planet, each with their own individual goals and actions, but still capable of coming together through teams, organizations... Read more...
Google has been able to train its various DeepMind AIs to do some very cool things. The AlphaZero AI was able to destroy the highly acclaimed Stockfish chess program over a 100-game match, winning 28 games outright and drawing the other 72 for an undefeated record. DeepMind certainly isn’t the only outfit whose AI has performed some cool and impressive feats, however. In the pure gaming arena, EA has developed an AI that can successfully battle human players in Battlefield 1. And yet another AI, inspired by the human visual cortex, was recently demonstrated beating CAPTCHA prompts. Google's DeepMind AI has been at it again, and this time... Read more...
The greatest chess matches are no longer played by mere mortals of flesh, bone, and blood, but by sophisticated artificial intelligence (AI) schemes. Sitting at the top, at least for now, is AlphaZero, a product of Google's DeepMind division, where machine learning rules the day. It also rules the chess board—AlphaZero took down Stockfish, the sophisticated open-source chess engine that many players use to prepare for big matches. Stockfish's recent string of successes includes winning the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship. It is a formidable opponent by any standard, except when going up against AlphaZero. In a closed-door event, AlphaZero... Read more...
Anyone out there who shares Elon Musk's fear of a Skynet apocalypse may find Google's latest AI, dubbed AlphaGo Zero, to be frightening. The new AI is the followup to the original AlphaGo AI that dominated top human players in the ancient Chinese game of Go. Google chose Go because it is said to be a game of intuition, and the original AlphaGo AI was able to beat the top human player in four out of five games. AlphaGo Zero completed three days of self-learning and then challenged AlphaGo to a match. Zero decimated its predecessor, winning 100 games out of 100. "AlphaGo Zero not only rediscovered the common patterns and openings that humans tend to play ... it ultimately discarded them... Read more...
DeepMind is an offshoot of Google with a scientific mission in mind: developing Artificial Intelligence (AI) systems that can learn to solve complex problems. DeepMind says that meeting that mission requires it to design agents and then test their abilities in a range of environments, some built specifically for AI testing and others originally built for humans. DeepMind has announced that it has teamed up with Blizzard Entertainment to open the popular video game StarCraft II to AI testing and research. The two firms have released SC2LE, a set of tools that Blizzard and DeepMind hope will accelerate AI research using the... Read more...
Google is heavily invested in artificial intelligence, one of the hottest fields in tech right now, alongside machine learning, the two of which are tightly intertwined. As part of its ongoing effort in AI, Google's DeepMind team has opened an AI research office in Edmonton, Canada. This is the team's first AI research office located outside of the UK. "It was a big decision for us to open our first non-UK research lab, and the fact we’re doing so in Edmonton is a sign of the deep admiration and respect we have for the Canadian research community," Google stated in a blog post. "In fact, we’ve had particularly strong links with the UAlberta for many years: nearly... Read more...
A board game like "Go" might not look complicated on the surface to the untrained eye, which could lead the uninformed to believe that it wouldn't be all that difficult for a computer to best a human player in a head-to-head match. We've seen many examples in the past of how hard that actually is (IBM's Watson is a good start), because despite these games' simple appearance, the number of possible moves at any given time can be astronomical. Last month, we wrote about Google's DeepMind and its challenge of going up against the world's best Go player, Ke Jie. Fast-forward to now, and we learn that DeepMind's AlphaGo secured a win against the Grandmaster on the very first... Read more...
A year after defeating South Korea’s Go master Lee Se-dol, Google’s DeepMind AlphaGo is once again ready to challenge the world’s top players. Google, the China Go Association, and the Chinese government are planning to host the “Future of Go Summit” in Wuzhen this coming May. The “Future of Go Summit” is a five-day festival intended to bring together Google’s AI experts and China’s top Go players. There will be a number of major events during the festival. “Pair Go” will be a game between two pros, however, both pros will also have AlphaGo as their teammate. This event is supposed to “take the concept of ‘learning together’ quite literally”. “Team Go” will be a game between AlphaGo and a five-player... Read more...
Over the years, we've seen numerous examples of how AI can enrich our lives. It could brew us an awesome beer, solve serious health diagnosis cases, and of course, handle everyday problem-solving tasks in a number of other ways. We can also use AI and deep learning to simulate real-world scenarios, such as what would happen if resources became scarce, among other growing environmental concerns. Google's DeepMind team has just detailed one of the projects it's working on that revolves around that exact subject. The team has also created two demos (seen in the videos below) to showcase how the AI behaves in different scenarios. In the above demo, called "Gathering," two AI entities... Read more...
Google's DeepMind has been working on some truly incredible things over the past couple of years. Just last week, we learned that DeepMind would be teaching itself how to play StarCraft II, which wouldn't be the first time it had a gaming focus. Before Google acquired DeepMind a couple of years ago, its AI was used to learn and conquer Atari games, and more recently, it taught itself how to beat an expert at Go. Since then, we've seen DeepMind used to enhance AI-speech generation, and even work to conquer blindness. Now it's going to teach itself how to identify objects in a virtual world. As we covered last month, DeepMind's engineers have been hard at work on helping its AI teach itself, and... Read more...
Google announced at BlizzCon 2016 in Anaheim, California, that it is collaborating with Blizzard Entertainment to use StarCraft II as a training platform for artificial intelligence (AI) and machine learning research. As part of that partnership, Google will pit its DeepMind project against human players in StarCraft II and use the information obtained to tweak its AI algorithms. "DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also... Read more...
A lot has been accomplished with Google's DeepMind artificial intelligence subsidiary, and it looks like the progress train isn't about to slow down anytime soon. As a quick recap, this year alone we've seen DeepMind take down a Go player, get started on an AI kill switch, attempt to cure blindness, improve speech generation, and even slash a mammoth power bill. What's next? Completely self-contained learning. We wouldn't be shocked if you started having ideas of Skynet running through your head right now. With a new hybrid system called the Differentiable Neural Computer (DNC), a neural network is paired with external storage (essentially a large, learnable external memory bank), and the computer's AI is smart enough... Read more...
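To give a flavor of how a network can make use of external storage, here is a toy sketch (not DeepMind's code) of the content-based addressing idea a DNC uses to read from its memory matrix: a read key is compared against every memory row by cosine similarity, and a softmax over those scores produces a blended read vector. The function names and the sharpness parameter `beta` are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-8)

def content_read(memory, key, beta=5.0):
    """Content-based read: softmax over key/row similarities
    (sharpened by strength beta), then a weighted sum of memory rows."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)                        # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    width = len(memory[0])
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(width)]

# Reading with a key close to the first row returns mostly that row.
memory = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
read_vector = content_read(memory, [1.0, 0.0, 0.0])
```

Because the read weights are differentiable with respect to the key, the whole lookup can be trained end to end with gradient descent, which is the core trick that lets a neural network learn to use its memory.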
Google’s powerful DeepMind AI is saving the company a lot of dough. The technology has resulted in a 15 percent improvement in power usage effectiveness (PUE). Get your calculators out. Google said it used 4,402,836 MWh of electricity in 2014, which is the equivalent of 366,903 family homes in the United States. The average American family paid between $25 and $40 USD per MWh in 2014. If we split this difference, we can theorize that Google spent at least $140,890,752 just on its 2014 electric bill. A fifteen percent improvement means a little over $21 million in savings. Most of Google’s electric bill comes from powering its large servers. In 2014 the company said it used neural networks... Read more...
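That back-of-the-envelope math can be checked directly. The $32/MWh rate below is an assumption on our part: it is the rate the article's $140,890,752 total implies, sitting near the midpoint of the quoted $25–$40 range:

```python
# Reproduce the article's rough savings estimate.
# Assumptions: 2014 usage of 4,402,836 MWh and a $32/MWh rate
# (the figure implied by the $140,890,752 total above).
usage_mwh = 4_402_836
rate_per_mwh = 32

annual_bill = usage_mwh * rate_per_mwh   # estimated 2014 electric bill
savings = annual_bill * 0.15             # 15 percent improvement

print(f"Estimated 2014 bill: ${annual_bill:,}")
print(f"15% savings: ${savings:,.0f}")
```

Running this reproduces the article's $140,890,752 bill and a savings figure just north of $21 million.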
Google DeepMind recently commenced a new collaboration with the United Kingdom’s National Health Service (NHS). DeepMind is working with Moorfields Eye Hospital in east London; the goal is to create a machine learning system that will be able to use digital eye scans to recognize sight-threatening conditions. DeepMind researchers will use millions of anonymous eye scans to train an algorithm to better spot the early signs of eye conditions. Professor Peng Tee Khaw, the head of Moorfields’ Ophthalmology Research Centre, remarked, “These scans are incredibly detailed, more detailed than any other scan of the body we do: we can see at the cellular level.” This... Read more...
Wondering what's next for AlphaGo, the computer that bested South Korea's Lee Sedol, the world champion in the ancient Chinese board game Go? The artificial intelligence program developed by Google's DeepMind division will go up against Ke Jie, China's top Go grandmaster, sometime in March of next year. This is something the Chinese Go Association wanted to see happen, so it got in contact with Google and its AlphaGo team to discuss a possible match. Google returned the interest, and so the two sides will arrange the match and accompanying details this year, assuming nothing unforeseen gets in the way. Ke Jie is an 18-year-old prodigy with all the confidence you'd expect from a young person.... Read more...