Elon Musk, Google DeepMind, AI Researchers Sign Pact Not To Develop Skynet-Like Terminator Bots

Artificial intelligence is all the rage these days, as it's being used in everything from our smartphones to our digital assistants to even the vehicles we drive (or rather, that are driven for us). However, as AI becomes even more powerful, there are those who say that the technology should have limits on where it can be applied. Most often, those limits are envisioned for robots that would be placed on the battlefield with the ability to target and potentially kill without human oversight.

A group of researchers and companies with expertise in the AI field has come together with a pledge not to develop, or participate in the development of, machines that autonomously carry out lethal attacks on humans. "In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine," these individuals and organizations write in a joint statement.

"There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable."


The pledge, which was organized by the Future of Life Institute, has an all-star list of signatories including Alphabet's Google DeepMind, HiBot Corp, Lucid.ai, Clearpath Robotics/OTTO Motors, and Tesla Motors/SpaceX CEO Elon Musk.

The thought seems to be that if the top talent in the industry, along with the world's top tech companies dedicated to AI and robotics, refuses to participate in "killer robot" endeavors, there will be less incentive for governments to attempt developing such systems. In addition, publicly stigmatizing such development has its benefits, according to the Future of Life Institute.

"We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons," the group adds. "Stigmatizing and preventing such an arms race should be a high priority for national and global security."

Elon Musk has been especially wary of AI being used for militaristic purposes. He has warned that AI could take away human jobs, that it is more worrisome than North Korea having access to nuclear weapons, and that an AI arms race could lead to a third World War.

Bottom image courtesy Flickr/Global Panorama

Brandon Hill

Brandon Hill

Brandon received his first PC, an IBM Aptiva 310, in 1994 and hasn’t looked back since. He cut his teeth on computer building/repair working at a mom and pop computer shop as a plucky teen in the mid 90s and went on to join AnandTech as the Senior News Editor in 1999. Brandon would later help to form DailyTech where he served as Editor-in-Chief from 2008 until 2014. Brandon is a tech geek at heart, and family members always know where to turn when they need free tech support. When he isn’t writing about the tech hardware or studying up on the latest in mobile gadgets, you’ll find him browsing forums that cater to his long-running passion: automobiles.

Opinions and content posted by HotHardware contributors are their own.