July 28, 2009
Will Machines Outsmart Humans?
Source: NY Times, original story by John Markoff.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
Earlier this year (in February), a group of scientists from the Association for the Advancement of Artificial Intelligence met at the Asilomar Conference Grounds in California to discuss the possible impacts of human-level artificial intelligence, aka “The Singularity.” A report from the conference will be released later this year… we hope. The conference focused on issues that might arise from the Singularity and from a loss of human control over cybernetic technologies. Topics included the possible effects of a “robotic takeover” leading to massive job loss, legal and ethical problems in dealing with human-like AIs, and perhaps some contingency plans in case a HAL, SHODAN, or Skynet should come online.
The Singularity Timetable. Depending on who you ask, the Singularity will arrive definitely before 2050, and possibly as soon as 2020. Even so, it may come sooner than we think: scientists say they can create a working human brain within 10 years, and more recently, Chinese scientists have reportedly been able to grow mice from skin cells. It shouldn’t be too hard to imagine human clones before long, and with them the possibilities of the Singularity. And just as another Asilomar meeting dealt with genetics in the mid-70s, this conference deals with cybernetics; specifically, how to proceed with AI research so that it benefits humanity and eliminates the possibility of a HAL, SHODAN, or Skynet.
The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?
Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.