Will we be wiped out by machine overlords? Maybe we need a game plan now

JUDY WOODRUFF: Now: the fears around the development of artificial intelligence.

Computer superintelligence is a long, long way from the stuff of sci-fi movies, but several high-profile leaders and thinkers have been worrying quite publicly about what they see as the risks to come.

Our economics correspondent, Paul Solman, explores that.

It's part of his weekly series, Making Sense.

ACTOR: I want to talk to you about the greatest scientific event in the history of man.

ACTOR: Are you building an A.I.?

PAUL SOLMAN: A.I., artificial intelligence.

ACTRESS: Do you think I might be switched off?

ACTOR: It's not up to me.

ACTRESS: Why is it up to anyone?

PAUL SOLMAN: Some version of this scenario has had prominent tech luminaries and scientists worried for years.

In 2014, cosmologist Stephen Hawking told the BBC:

STEPHEN HAWKING, Scientist (through computer voice): I think the development of full artificial intelligence could spell the end of the human race.

PAUL SOLMAN: And just this week, Tesla and SpaceX entrepreneur Elon Musk told the National Governors Association:

ELON MUSK, CEO, Tesla Motors: A.I. is a fundamental existential risk for human civilization.

And I don't think people fully appreciate that.

PAUL SOLMAN: OK, but what's the economics angle?

Well, at Oxford University's Future of Humanity Institute, founding director Nick Bostrom leads a team trying to figure out how best to invest in, well, the future of humanity.

NICK BOSTROM, Oxford University: We are in this very peculiar situation of looking back at the history of our species, 100,000 years old, and now finding ourselves just before the threshold to what looks like it will be this transition to some post-human era of superintelligence that can colonize the universe, and then maybe last for billions of years.

PAUL SOLMAN: For many years, philosopher Bostrom has been perhaps the most prominent thinker about the benefits and dangers to humanity of what he calls superintelligence.

NICK BOSTROM: Once there is superintelligence, the fate of humanity may depend on what that superintelligence does.

PAUL SOLMAN: There are plenty of ways to invest in humanity, he says, giving money to anti-disease charities, for example.

But Bostrom thinks longer-term, about investing to lessen existential risks, those that threaten to wipe out the human species entirely.

Global warming might be one.

But plenty of other people are worrying about that, he says.

So, he thinks about other risks.

What are the greatest of those risks?

NICK BOSTROM: The greatest existential risks arise from certain anticipated technological breakthroughs that we might make, in particular, machine superintelligence, nanotechnology, and synthetic biology, fundamentally because we don't have the ability to uninvent anything that we invent.

We don't, as a human civilization, have the ability to put the genie back into the bottle.

Once something has been published, then we are stuck with that knowledge.

PAUL SOLMAN: So Bostrom wants money invested in how to manage A.I.

NICK BOSTROM: Specifically on the question, if and when in the future you could build machines that were really smart, maybe superintelligent, smarter than humans, how could you then ensure that you could control what those machines do, that they were beneficial, that they were aligned with human intentions?

PAUL SOLMAN: How likely is it that machines would develop basically a mind of their own, which is what you're saying, right?

NICK BOSTROM: I do think that advanced A.I., including superintelligence, is a sort of portal through which humanity will have passage, assuming we don't destroy ourselves prematurely in some other way.

Right now, the human brain is where it's at.

It's the source of almost all of the technologies we have.

PAUL SOLMAN: I'm relieved to hear that.

(LAUGHTER)

NICK BOSTROM: And the complex social organization we have.

PAUL SOLMAN: Right.

NICK BOSTROM: It's why the modern condition is so different from the way that the chimpanzees live.

It's all through the human brain's ability to discover and communicate.

But there is no reason to think that human intelligence is anywhere near the greatest possible level of intelligence that could exist, that we are sort of the smartest possible species.

I think, rather, that we are the stupidest possible species that is capable of creating technological civilization.

PAUL SOLMAN: And capable of creating technology that has begun to surpass us, first in chess, then in "Jeopardy," and now in Go, a game supposedly impossible for a machine to win.

This is just task-oriented software, some have argued, and not really intelligence at all.

Moreover, whatever you call it, there will be enormous benefits, says Bostrom.

On the other hand, if we approach real intelligence, it could also become a threat.

Think of "Ex Machina" or "The Matrix" or Elon Musk's fantasy fear this week about advanced

A.I.

ELON MUSK: Well, it could start a war by create -- by doing fake news and spoofing e-mail accounts and fake press releases, and just by, you know, manipulating information.

The pen is mightier than the sword.

PAUL SOLMAN: So, this is going to be a cat-and-mouse game between us and the intelligence?

NICK BOSTROM: That would be one model.

One line of attack is to try to leverage the A.I.'s intelligence to learn what it is that we value and what we want it to do.

PAUL SOLMAN: In order to protect ourselves from what could be a truly existential risk.

So, how do you get the greatest good for the greatest number of present and future human beings?

It might be to invest now in controlling the evolution of artificial intelligence.

For the "PBS NewsHour," this is economics correspondent Paul Solman, reporting from

Oxford, England.
