The slide rule/tin foil hat set is having a "Singularity Summit" to discuss artificial intelligence, and they all seem a bit worried. Worried that information technology is getting very powerful very fast, and that this might mean machines will soon be smarter than their makers. They seem to be suffering from an overdose of Gene Roddenberry, but there are interesting questions raised by the march of Moore's Law.
"We and our world won't be us anymore," Rodney
Brooks, a robotics professor at the Massachusetts Institute of
Technology, told the audience. When it comes to computers, he said,
"who is us and who is them is going to become a different sort of
Eliezer Yudkowsky, co-founder of the Palo Alto-based Singularity Institute for Artificial Intelligence, which organized the summit, researches the development of so-called "friendly artificial intelligence." His greatest fear, he said, is that a brilliant inventor will create a self-improving but amoral artificial intelligence that turns hostile.
Ah, yes; Skynet is now active. I remember when Garry Kasparov first played chess against IBM's supercomputer. The computer finally beat him, and we heard that the end was near for our puny human brains. But after he lost, Garry Kasparov made a few phone calls, drove home, and made himself a peanut butter sandwich. The computer just sat there. How smart is something that can't make a peanut butter sandwich?