Opinion
2:01 pm
Thu October 27, 2011

Intelligent Design: McCarthy, Myself And AI

Originally published on Thu November 3, 2011 6:24 pm

Adam Frank is an astrophysicist at the University of Rochester. He is a regular contributor to the NPR blog 13.7: Cosmos and Culture.

What is going to happen when our machines wake up? What will happen when all these computers that run our lives suddenly become intelligent and self-aware? It's a question that makes sense to ask today, as the world marks the recent passing of John McCarthy.

McCarthy was an emeritus professor at Stanford who was known for pushing the boundaries of what was possible with computers. He developed the venerable computer programming language called Lisp, using it to develop one of the first computer chess programs. He also helped spur the growth of the Internet through his early time-sharing technologies (these were like an early version of cloud computing). But it is for A.I. that he will be remembered.

Artificial intelligence has become such a standard part of culture that I can refer to it by its initials, A.I., and you know what I'm talking about. But the possible consequences of developing an intelligent self-aware machine are just as familiar. If I say "robot overlords," you will still know what I mean.

There are basically two possibilities we naturally imagine when we think about A.I. In the first, the development of machine intelligence brings us a new age of wonders. We cure cancer, solve global warming and design ships that carry us to the stars. It's a machine-human paradise. Think of the efficient feminine voice of the computer on the Starship Enterprise if you need an example.

The other possibility — the robot overlord version — tends to get more play. We build machines. The machines wake up. The machines try to wipe us out. From the errant HAL in 2001: A Space Odyssey to the machine masters in The Matrix or Battlestar Galactica, we have given fictional flesh (or perhaps steel and plastic forms) to our worst fears.

Many A.I. researchers can see both possibilities hovering somewhere in the future. While some scientists admit it's difficult to imagine why a conscious, hyper-intelligent machine would want to continue doing our bidding, others think we will understand how to avoid that fate in the very process of building a conscious machine.

Some ambitious scientists in the field speak of the race to develop "good" A.I. rather than the "bad" kind. For these researchers, the dangers of getting it wrong make global warming look like a game of checkers.

The fact is we don't know what will happen. It's even possible that the intelligence from machines will be so different from our own that entirely new forms of cooperation or antagonism will come to pass.

One thing is certain, though. For a culture that is ever more dependent on its machines, humanity will have to learn to share the world with its silicon children.

Copyright 2013 NPR. To see more, visit http://www.npr.org/.

Transcript

MELISSA BLOCK, HOST:

American history is full of great inventors: Thomas Edison, Alexander Graham Bell, Steve Jobs toiling away in his garage. Well, this week brought the death of another great inventor. While the computer scientist and mathematician John McCarthy was not a household name, he is the reason you might have heard of a little something called artificial intelligence. He coined the term. Astrophysicist Adam Frank has these thoughts on McCarthy's legacy.

ADAM FRANK: What is going to happen when our machines wake up? What will happen when all these computers, which run our lives, suddenly become intelligent and self-aware? It's a question which makes sense to ask this week as the world marks the passage of John McCarthy. McCarthy was a professor at Stanford known for pushing the boundaries of what was possible with machines. He developed the venerable computer language called LISP and used it to create one of the first chess programs.

He also helped spur the growth of the Internet through his early time-sharing technologies, which were like a version of cloud computing. But it's for A.I. that he will be remembered. Artificial intelligence has become such a standard part of culture that I can refer to it by its initials, A.I., and you know exactly what I'm talking about. But the possible consequences of developing an intelligent self-aware machine are just as familiar. If I say robot overlords, you still know what I mean.

There are basically two possibilities we naturally imagine when we think about A.I. In the first, the development of machine intelligence brings us to a new age of wonders. We cure cancer, solve global warming and design ships to carry us to the stars. It's a machine-human paradise. Think of the efficient feminine voice of the computer on the Starship Enterprise if you need an example. The other possibility, the robot overlord version, tends to get more play.

We build machines. The machines wake up. The machines try to wipe us out. From the errant HAL in "2001: A Space Odyssey" to the machine masters of "The Matrix" or "Battlestar Galactica," we have often given fictional form or perhaps steel and plastic form to our worst nightmares. Many A.I. researchers can see both possibilities hovering somewhere in the future. While some scientists admit it's difficult to imagine why a conscious hyper-intelligent machine would want to continue doing our bidding, others think we will understand how to avoid that fate in the very process of building an intelligent machine.

Some ambitious scientists even speak of a race to develop good A.I. rather than the bad kind. For these researchers, the dangers of getting it wrong make a catastrophe like the worst global warming look like a game of checkers. It's even possible that the intelligence from machines will be so different from our own that entirely new forms of cooperation or antagonism will come to pass. One thing is certain, though: for a culture that is ever more dependent on its devices, humanity will have to learn to share the world with its silicon children.

BLOCK: Astrophysicist Adam Frank is the author of "About Time: Cosmology and Culture at the Twilight of the Big Bang." Transcript provided by NPR, Copyright NPR.
