Google Teaches Robots To Walk On Their Own, Skynet Overlords Are Pleased

There's no question that artificial intelligence and robotics technology will soon become an even more pervasive force in our daily lives. From self-driving cars to robot dogs that can empty the dishwasher and fetch us a beer, things are looking mighty good for the future. Now, the folks at Google Robotics are making huge strides in developing robots that are less dependent on their human handlers.

While today's autonomous robots typically use reinforcement-learning algorithms that learn through trial and error in a preset virtual environment (with a lot of human intervention along the way) before they can navigate obstacles and perform certain tasks, Google researchers have taken things to the next level with robots that learn basic functions we take for granted completely on their own.

For example, within a few hours of being "born," a four-legged robot was able to stand up and walk (both forward and backwards) of its own accord using deep reinforcement learning (Deep RL). "[Deep RL] can learn control policies automatically, without any prior knowledge about the robot or the environment," write the Google researchers [PDF]. "In principle, each time the robot walks on a new terrain, the same learning process can be applied to acquire an optimal controller for that environment."
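For readers curious what "learning a control policy" actually involves, the snippet below is a minimal, illustrative sketch of a reinforcement-learning loop in Python. Everything in it (the toy environment, the reward, the simple policy-gradient update) is our own stand-in rather than Google's implementation, but it captures the core idea: try actions, reward the ones that produce useful motion, and nudge the policy toward them.

```python
# Illustrative sketch only -- not Google's code. It shows the shape of a
# reinforcement-learning loop: try actions, collect a reward for useful
# motion, and push the policy toward actions that scored well.
import numpy as np

class ToyLeggedEnv:
    """Hypothetical stand-in for a real robot or physics simulator."""
    def __init__(self, n_obs=4, n_act=2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_obs, self.n_act = n_obs, n_act

    def reset(self):
        self.state = self.rng.normal(size=self.n_obs)
        return self.state

    def step(self, action):
        # Reward "forward velocity" (first action dimension), penalize flailing.
        reward = action[0] - 0.1 * np.sum(action ** 2)
        self.state = self.rng.normal(size=self.n_obs)
        done = self.rng.random() < 0.05  # episodes end occasionally
        return self.state, reward, done

env = ToyLeggedEnv()
W = np.zeros((env.n_act, env.n_obs))  # linear Gaussian policy parameters
lr, sigma = 0.01, 0.5

for episode in range(200):
    obs, done = env.reset(), False
    grads, rewards = [], []
    while not done:
        mean = W @ obs
        action = mean + sigma * np.random.randn(env.n_act)
        # Gradient of the Gaussian policy's log-probability w.r.t. W
        grads.append(np.outer((action - mean) / sigma ** 2, obs))
        obs, reward, done = env.step(action)
        rewards.append(reward)
    episode_return = sum(rewards)
    for g in grads:
        W += lr * episode_return * g  # REINFORCE-style update
```

In the real system, the "environment" is the physical robot and the world around it, and the learning algorithm needs to be far more sample-efficient, since every trial costs real wall-clock time and wear on the hardware.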

Placing the robot in an environment with no preset routines allowed it to adapt more quickly to many difficult situations. For example, prior modeling to allow a robot to handle inclines, gravel, steps, or uneven and slippery surfaces could take considerable time to compute, and lengthy trial-and-error runs would also be needed. Allowing the robot to adapt to these situations in real time, however, proved to be a much more efficient way to navigate its new environment.


Even with all of this "on the job" training going on, humans still had to intervene "hundreds of times" during the real-time training process. But researchers were eventually able to set boundaries to restrict where the robot could go. When it hit a boundary while walking forward, for example, the robot was then able to acquire a new skill: walking backwards to escape. Once these minor tweaks were in place, the robot's ability to navigate without further human intervention only increased.
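To give a rough sense of how that boundary trick might work, here's a short Python sketch of a supervisor that checks the robot's position and schedules the opposite walking task when it has wandered to the edge of the training area, so the robot brings itself back instead of waiting for a human to carry it. The function name, thresholds, and task labels are assumptions for illustration, not Google's actual interface.

```python
# Illustrative sketch only: a simple scheduler that keeps training inside a
# bounded workspace by choosing which locomotion skill to practice next.
# Names, thresholds, and task labels are assumptions, not Google's code.

def pick_next_task(position_x: float, bounds=(-1.0, 1.0)) -> str:
    """Choose the next skill to train based on where the robot currently is."""
    low, high = bounds
    if position_x >= high:
        return "walk_backward"   # at the far edge: reversing brings it back
    if position_x <= low:
        return "walk_forward"    # at the near edge: advancing brings it back
    # Inside the workspace: pick whichever task moves it toward the center,
    # so episodes rarely end with the robot stranded at a wall.
    return "walk_forward" if position_x < 0 else "walk_backward"

# Example: a robot that has drifted out to x = 1.2 m is told to walk backward.
print(pick_next_task(1.2))  # -> "walk_backward"
```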

"Removing the person from the process is really hard. By allowing robots to learn more autonomously, robots are closer to being able to learn in the real world that we live in, rather than in a lab," the researchers added.

Eventually, the Google researchers hope to use their new Deep RL algorithms to allow multiple robots to operate within the same environment, and even to expand beyond the four-legged form factor.

Brandon Hill

Brandon received his first PC, an IBM Aptiva 310, in 1994 and hasn’t looked back since. He cut his teeth on computer building/repair working at a mom and pop computer shop as a plucky teen in the mid 90s and went on to join AnandTech as the Senior News Editor in 1999. Brandon would later help to form DailyTech where he served as Editor-in-Chief from 2008 until 2014. Brandon is a tech geek at heart, and family members always know where to turn when they need free tech support. When he isn’t writing about the tech hardware or studying up on the latest in mobile gadgets, you’ll find him browsing forums that cater to his long-running passion: automobiles.
