Fei-Fei Li & Yuval Noah Harari in Conversation - The Coming AI Upheaval

(audience applauding)

- Good evening, everyone.

My name is Rob Reich, I am delighted to welcome you here

to Stanford University for an evening of conversation

with Yuval Harari, Fei-Fei Li, and Nick Thompson.

I'm a professor of political science here

and the faculty director of the Stanford Center

for Ethics in Society, which is a co-sponsor

of tonight's event along with the Stanford Institute

for Human-Centered Artificial Intelligence

and the Stanford Humanities Center.

Our topic tonight is a big one.

We're going to be thinking together about the promises

and perils of artificial intelligence,

the technology quickly reshaping our economic, social,

and political worlds for better or for worse.

The questions raised by the emergence of AI

are by now familiar, at least to many people here

in Silicon Valley, but I think it's fair to say

that their importance is only growing.

What will the future of work look like

when millions of jobs can be automated?

Are we doomed or perhaps blessed to live in a world

where algorithms make decisions instead of humans?

And these are smaller questions in the big scheme of things.

What, might you ask, of the large ones?

Well, here are three.

What will become of the human species

if machine intelligence approaches

or exceeds that of an ordinary human being?

As a technology that currently relies

on massive centralized pools of data,

does AI favor authoritarian centralized governance

over more decentralized democratic governance?

And are we at the start now of an AI arms race,

and what will happen if powerful systems of AI, especially

when deployed for purposes like facial recognition,

are in the hands of authoritarian rulers?

These challenges only scratch the surface

when it comes to fully wrestling with the implications of AI

as the technology continues to improve

and its use cases continue to multiply.

I wanna mention the format of the evening event.

First, given the vast areas of expertise

that Yuval and Fei-Fei have, when you ask questions

via Slido, those questions should pertain

or be limited to the topics under discussion tonight.

So this web interface that we're using, Slido,

allows people to up vote and down vote questions,

so you can see them now if you have

an Internet communication device.

If you don't have one, you can take one of these postcards,

which hopefully you got outside, and on the back,

you can fill in a question you might have

about the evening event, and we'll collect it at the end,

and the Stanford Humanities Center

will try to foster some type of conversation

on the basis of those questions.

A couple housekeeping things.

If you didn't purchase one already, Yuval's books

are available for sale outside in the lobby after the event.

A reminder to please turn your cell phone ringers off.

And we will have 90 minutes

for our moderated conversation here

and will end sharp after 90 minutes.

Now, I'm going to leave the stage in just a minute

and allow a really amazing undergraduate student here

at Stanford to introduce our guests.

Her name is Anna-Sofia Lesiv, let me just tell you

a bit about her.

She's a junior here at Stanford, majoring in economics

with a minor in computer science.

And outside the classroom, Anna-Sofia is a journalist

whose work has been featured in The Globe and Mail,

Al Jazeera, The Mercury News, The Seattle Times,

and this campus' paper of record, The Stanford Daily.

She's currently the executive editor of the Daily,

and her Daily Magazine article from earlier

in the year called CS + Ethics examined the history

of computer science and ethics education at Stanford,

and it won the student prize for best journalism of 2018.

She continues to publish probing examinations

of the ethical challenges faced

by technologists here and elsewhere.

So ladies and gentlemen, I invite you to remember

this name, for you'll be reading about her

or reading her articles or likely both.

Please welcome Stanford junior Anna-Sofia Lesiv.

(audience applauding)

- Thank you very much for the introduction, Rob.

Well, it's my great honor now to introduce

our three guests tonight, Yuval Noah Harari,

Fei-Fei Li, and Nicholas Thompson.

Professor Yuval Noah Harari is a historian,

futurist, philosopher, and professor at Hebrew University.

The world also knows him for authoring some

of the most ambitious and influential books of our decade.

Professor Harari's internationally best selling books,

which have sold millions of copies worldwide,

have covered a dizzying array of subject matter,

from narrativizing the entire history

of the human race in Sapiens

to predicting the future awaiting humanity,

and even coining a new faith called dataism in Homo Deus.

Professor Harari has become a beloved figure

in Silicon Valley whose readings are assigned

in Stanford's classrooms and whose name is whispered through

the hallways of the comparative literature

and computer science departments alike.

His most recent book is 21 Lessons for the 21st Century,

which focuses on the technological, social,

political, and ecological challenges of the present moment.

In this work, Harari cautions that,

as technological breakthroughs continue to accelerate,

we will have less and less time to reflect upon

the meaning and consequences of the changes they bring.

And this urgency is what charges

professor Fei-Fei Li's work every day

in her role as the co-director

of Stanford's Human-Centered AI Institute.

This institute is one of the first to insist that AI

is not merely the domain of technologists,

but a fundamentally interdisciplinary

and ultimately human issue.

Her fascination with the fundamental questions

of human intelligence is what piqued her interest

in neuroscience, as she eventually became one

of the world's greatest experts in the fields

of computer vision, machine learning,

and cognitive and computational neuroscience.

She's published over 100 scientific articles

in leading journals and has had research supported

by the National Science Foundation,

Microsoft, and The Sloan Foundation.

From 2013 to 2018, professor Fei-Fei Li served

as the director of Stanford's AI lab,

and between January 2017 and September 2018,

professor Fei-Fei Li served as vice president at Google

and chief scientist of AI and machine learning

at Google Cloud.

Nicholas Thompson is the editor-in-chief

of Wired Magazine, a position he's held since January 2017.

Under Mr. Thompson's leadership,

the topic of artificial intelligence has come

to hold a special place at the magazine.

Not only has Wired assigned more feature stories

on AI than on any other subject,

but it is the only specific topic

with a full-time reporter assigned to it.

It's no wonder then that professors Harari and Li

are no strangers to its pages.

Mr. Thompson has led discussions

with the world's leaders in technology and AI,

including Mark Zuckerberg on Facebook and privacy,

French president Emmanuel Macron on France's AI strategy,

and Ray Kurzweil on the ethics and limits of AI.

Mr. Thompson is a Stanford University graduate

who earned his BA double majoring in Earth systems

and political science, and impressively,

even completed a third degree in economics.

Of course, I would be remiss if I did not mention

that Mr. Thompson cut his journalistic teeth

in the opinion section of the Stanford Daily.

So Nick, that makes both of us.

Like all our guests today, I am

at once fascinated and worried by the challenges

that artificial intelligence poses for our society.

One of my goals at Stanford has been

to write about and document the challenge

of educating a generation of students whose lives

and workplaces will eventually be transformed by AI.

Most recently, I published

an article called Complacent Valley in the Stanford Daily.

In it, I critiqued our propensity

to become overly comfortable with the technological

and financial achievements that Silicon Valley

has already reached, warning that we risk becoming complacent

and losing our ambition and momentum

to tackle the great challenges the world has in store.

Answering the fundamental questions

of what we should spend our time on,

how we should live our lives has become much more difficult,

particularly on the doorstep of the AI revolution.

I believe that the kind of crisis of agency

that author J. D. Vance wrote of in Hillbilly Elegy,

for example, is not confined to Appalachia

or the deindustrialized Midwest,

but is emerging even at elite institutions like Stanford.

So conversations like ours this evening hosting speakers

that aim to recenter the individual at the heart of AI

will show us how to take responsibility in a moment

when most decisions can seemingly be made

for us by algorithms.

There are no narratives to guide us through

a future with AI, no ancient myths or stories

that we may rely on to tell us what to do.

At a time when humanity is facing

its greatest challenge yet, somehow,

we could not be more at a loss for ideas or direction.

It's this momentous crossroads in human history

that pulls me towards journalism and writing in the future,

and it's why I'm so eager to hear

our three guests discuss exactly such a future tonight.

So please join me

in giving them a very warm welcome this evening.

(audience applauding)

- Wow.

Thank you so much, Anna-Sofia, thank you, Rob.

Thank you, Stanford, for inviting us all here,

I'm having a flashback to the last time I was on a stage

at Stanford, which was playing guitar at the CoHo.

And I didn't have either Yuval or Fei-Fei with me,

so there were about six people in the audience,

one of whom had her headphones on.

But I did meet my wife.

Isn't that sweet?

All right, so a reminder, housekeeping.

Questions are gonna come in in Slido,

you can put them in, you can vote up questions,

we've already got several thousand.

So please, vote up the ones you really like.

If someone can program an AI that can get

a really devastating question in and stump Yuval,

I will get you a free subscription to Wired.

I want this conversation to kind of have three parts.

First, lay out where we are.

Then talk about some of the choices we have to make now.

And last, talk about some advice for all

of the wonderful people in the halls.

So those are the three general areas,

I'll feed in questions as we go.

We may have a specific period for questions at the end,

but let's get cracking.

Yuval.

So the last time we talked, you said many,

many brilliant things, but one that stuck out,

it was a line where you said, we are not just

in a technological crisis, we are in a philosophical crisis.

So explain what you meant, explain how it ties to AI,

and let's get going with a note of existential angst.

- Yeah, so I think what's happening now

is that the philosophical framework of the modern world

that was established in the 17th and 18th centuries

around ideas like human agency and individual free will

is being challenged like never before.

Not by philosophical ideas, but by practical technologies.

And we see more and more questions,

which used to be, you know, the bread and butter

of the philosophy department being moved

to the engineering department.

And that's scary partly because philosophers

are extremely patient people.

They can discuss something for thousands of years

without reaching any agreement and they are fine with that.

The engineers won't wait.

And even if the engineers are willing to wait,

the investors behind the engineers won't wait.

So it means that we don't have a lot of time.

And in order to encapsulate what the crisis is,

I know that, you know, engineers, especially

in a place like Silicon Valley, they like equations.

So maybe I can try and formulate an equation

to explain what's happening.

And the equation is B times C times D equals H.

Which means biological knowledge multiplied

by computing power multiplied by data equals

the ability to hack humans.
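
A minimal rendering of the relation Yuval states verbally, using his own symbols; B, C, D, and H are conceptual labels from his phrasing, not measured quantities:

```latex
% Harari's verbal formula, written out with his own symbols (conceptual, not quantitative)
% B = biological knowledge, C = computing power, D = data,
% H = the ability to hack human beings
B \times C \times D = H
```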

And the AI revolution or crisis is not just AI,

it's also biology, it's biotech.

We haven't seen anything yet because

the link is not complete.

There is a lot of hype now around AI and computers,

but that is just half the story.

The other half is the biological knowledge

coming from brain science and biology.

And once you link that to AI, what you get

is the ability to hack humans.

And maybe I'll explain what it means,

the ability to hack humans: to create an algorithm

that understands me better than I understand myself

and can therefore manipulate me, enhance me, or replace me.

And this is something that our philosophical baggage

and all our belief in, you know, human agency

and free will and the customer is always right

and the voter knows best, this just falls apart

once you have this kind of ability.

- Once you have this kind of ability and it's used

to manipulate or replace you, not if it's used

to enhance you.

- Also when it's used to enhance you, the question is,

who decides what is a good enhancement

and what is a bad enhancement?

So our immediate fallback position is to fall back

on the traditional humanist ideas

that the customer is always right.

The customers will choose the enhancement.

Or the voter is always right, the voters will vote,

there will be a political decision about enhancement.

Or if it feels good, do it.

We'll just follow our heart, we'll just listen to ourselves.

None of this works when there is a technology

to hack humans on a large scale.

You can't trust your feelings or the voters

or the customers on that.

The easiest people to manipulate are the people

who believe in free will, because they think

they cannot be manipulated.

So how do you decide what to enhance?

This is a very deep ethical and philosophical question,

again, one that philosophers have been debating

for thousands of years: what is good,

what are the good qualities we need to enhance?

So if you can't trust the customer, if you can't trust

the voter, if you can't trust your feelings,

who do you trust, what do you go by?

- All right, Fei-Fei, you have a PhD, you have a CS degree,

you're a professor at Stanford.

Does B times C times D equal H?

Is Yuval's theory the right way

to look at where we're headed?

- Well.

What a beginning, thank you, Yuval.

One of the things, I've been reading Yuval's book

for the past couple of years and talking to you.

And I'm very envious of philosophers now,

because they can propose questions and crises,

but they don't have to answer them.

(audience laughing)

No, as an engineer and scientist, I feel like

we have to now solve the crisis.

So honestly, I think I'm very thankful, I mean,

personally, I've been reading your book for two years.

And I'm very thankful that Yuval, among other people,

has opened up this really important question for us.

And it's also quite a, when you said the AI crisis,

and I was sitting there thinking,

this is a field I loved and felt passionate about

and researched for 20 years.

And that was just a scientific curiosity

of a young scientist entering a PhD in AI.

What happened, that 20 years later,

it has become a crisis?

And it actually speaks to the evolution of AI

that got me where I am today and got my colleagues

at Stanford where we are today with human-centered AI:

this is a transformative technology,

it's a nascent technology, it's still

a budding science compared to physics, chemistry, biology.

But with the power of data, computing,

and the kind of diverse impact that AI is making,

it is, like you said, it's touching human lives

and business in broad and deep ways.

And in responding to that kind of question and crisis

that's facing humanity, I think one

of the proposed solutions, or if not a solution,

at least an attempt that Stanford is making,

is can we reframe the education, the research,

and the dialogue of AI and technology in general

in a human-centered way?

We're not necessarily gonna find a solution today.

But can we involve the humanists, the philosophers,

the historians, the political scientists, the economists,

the ethicists, the legal scholars, the neuroscientists,

the psychologists, and many more other disciplines

into the study and development of AI

in the next chapter, in the next phase?

- Don't be so certain we're not gonna get

an answer today, I've got two of the smartest people

in the world glued to their chairs and I've got Slido

for 72 minutes, so let's give it a shot.

- He said we have thousands of years.

- But let me go a little bit further in Yuval's question.

So there are a lot, or Yuval's opening statement,

there are a lot of crises about AI that people talk about.

They talk about AI becoming conscious

and what would that mean, they talk about job displacement,

they talk about biases, and Yuval has very clearly laid out

what he thinks is the most important one,

which is the combination of biology

plus computing plus data leading to hacking.

So he's laid out a very specific concern.

Is that specific concern what people who are thinking

about AI should be focused on?

- So absolutely.

So any technology humanity has created, starting from fire,

is a double-edged sword.

So it can bring improvements to life, to work,

and to society, but it can also bring perils,

and AI has its perils, you know.

I wake up every day worrying about the diversity

and inclusion issue in AI.

We worry about fairness or the lack of fairness, privacy,

the labor market, so absolutely, we need to be concerned,

and because of that, we need to expand the study,

the research, and the development of policies

and the dialogue of AI beyond just

the codes and the products into these human realms,

into these societal issues.

So I absolutely agree with you on that,

that this is the moment to open the dialogue,

to open the research in those issues.

- Okay.

- Even though I would just say that, again,

part of my fear is that the dialogue,

I don't fear AI experts talking with philosophers,

I'm fine with that, historians, good,

literary critics, wonderful.

I fear the moment you start talking with biologists.

That's my biggest fear, the moment when you and the biologists will say,

hey, we actually have a common language.

And we can do things together.

And that's when the really scary things I think will be--

- [Fei-Fei] Can you elaborate on, what is scaring you

that we talk to biologists?

- That's the moment when you can really hack human beings

not by collecting data about our search words

or our purchasing habits or where we go about town.

But you can actually start peering inside

and collect data directly from our hearts

and from our brains.

- Okay, can I be specific?

First of all, the birth of AI is AI scientists talking

to biologists, specifically neuroscientists, right,

the birth of AI is very much inspired

by what the brain does.

Fast forward 60 years later, today's AI

is making great improvements in healthcare,

there's a lot of data from our physiology

and pathology being collected, and machine learning is being used

to help us.

But I feel like you're talking about something else.

- That's part of it, I mean, if there wasn't a great promise

in the technology, there would also be no danger

because nobody would go along with that path.

I mean, obviously, there are enormously beneficial things

that AI can do for us, especially

when it is linked with biology.

We are about to get the best healthcare in the world

in history, and the cheapest, and available

for billions of people via their smartphones,

whereas today they have almost nothing.

And this is why it is almost impossible

to resist the temptation.

And with all the issues, you know, of privacy,

if you have a big battle between privacy and health,

health is likely to win hands down.

So I fully agree with that, and you know,

my job as a historian, as a philosopher,

as a social critic, is to point out the dangers in that,

because, especially in Silicon Valley,

people are very much familiar with the advantages,

but they don't like to think so much about the dangers.

And the big danger is what happens when you can hack

the brain, and that can serve not just

your healthcare provider, that can serve so many things

from a crazy dictator to--

- Let's focus on that, what it means to hack the brain.

Like what, right now, in some ways, my brain is hacked,

right, there is an allure of this device,

it wants me to check it constantly,

like my brain has been a little bit hacked,

yours hasn't because you meditate two hours a day,

but mine has, and probably most of these people have.

But what exactly is the future brain hacking going

to be that it isn't today?

- Much more of the same, but on a much larger scale.

I mean, the point when, for example,

more and more of your personal decisions in life

are being outsourced to an algorithm that

is just so much better than you.

So you know, we have two distinct dystopias

that kind of mesh together.

We have the dystopia of surveillance capitalism,

in which there is no like Big Brother dictator,

but more and more of your decisions

are being made by an algorithm.

And it's not just decisions about what to eat

or what to shop for, but decisions like where to work

and where to study and whom to date

and whom to marry and whom to vote for.

It's the same logic.

And I would be curious to hear if you think

that there is anything in humans which

is by definition unhackable.

That we can't reach a point when the algorithm

can make that decision better than me.

So that's one line of dystopia which is

a bit more familiar in this part of the world.

And then you have the full fledged dystopia

of a totalitarian regime based

on a total surveillance system.

Something like the totalitarian regimes

that we have seen in the 20th century,

but augmented with biometric sensors

and the ability to basically track each

and every individual 24 hours a day.

And you know, which in the days of, I don't know,

Stalin or Hitler was absolutely impossible

because they didn't have the technology,

but might be possible in 20 to 30 years.

So we can choose which dystopia to discuss,

but they are very close in the--

- Let's choose the liberal democracy dystopia.

Fei-Fei, do you wanna answer Yuval's specific question,

which is: in dystopia A,

the liberal democracy dystopia, is there something

endemic to humans that cannot be hacked?

- So when you asked me that question just two minutes ago,

the first word that came to my mind is love.

Is love hackable?

- Ask the Internet, I don't know.

(audience laughing)

- Dating.

- That's a defense--

- Dating is not the entirety of love, I hope.

- But the question is, which kind of love

are you referring to?

If you're referring to this, you know, I don't know,

Greek philosophical love or the loving kindness of Buddhism,

that's one question, which I think

it's much more complicated.

If you are referring to

the biological mammalian courtship rituals and,

then I think yes, I mean, why not?

Why is it different from anything else

that is happening in the body?

- But humans are humans because we are,

there is some part of us that are beyond

the mammalian courtship, right?

So is that part hackable?

- That's the question, I mean, you know,

in most science fiction books and movies,

they give you the answer.

When the extraterrestrial evil robots are about

to conquer planet Earth and nothing can resist them,

resistance is futile, at the very last moment,

humans win because the robots don't understand love.

- Last moment, there's one heroic white dude that saves us.

(audience laughing)

- Why do we do this?

- No, no, it was a joke, don't worry.

But okay, so the two dystopias, I do not have answers

to the two dystopias.

But what I wanna keep saying is, this is precisely why

this is the moment that we need to seek for solutions.

This is precisely why this is the moment

that we believe the new chapter of AI needs to be written

by cross-pollinating efforts from humanists,

social scientists to business leaders to civil society

to governments, to come to the same table,

to have that multilateral and cooperative conversation,

and I think you really bring out the urgency

and the importance and the scale of this potential crisis.

But I think in the face of that, we need to act.

- Yeah, and I agree that we need cooperation,

that we need much closer cooperation between engineers

and philosophers or engineers and historians.

And also, from a philosophical perspective,

I think there is something wonderful

about engineers philosophically.

- Thank you.

- That they really cut the bullshit.

I mean, philosophers can talk and talk, you know,

in cloudy and flowery metaphors.

And then the engineers can really focus the question.

Like I just had a discussion the other day

with an engineer from Google about this.

And he said okay, I know how to maximize people's time

on the website.

If somebody comes to me and tells me, look,

your job is to maximize time on this application,

I know how to do it because I know how to measure it.

But if somebody comes along and tells me, well,

you need to maximize human flourishing or you need

to maximize universal love, I don't know what it means.

So that's when the engineers go back to the philosophers

and ask them, what do you actually mean?

Which, you know, a lot of philosophical theories collapse

around that, because they can't really explain what,

and we need this kind of collaboration

in order to move forward. - We need an equation for that.

- But then, Yuval, is Fei-Fei right, if we can't explain

and we can't code love, can artificial intelligence

ever recreate it, or is it something intrinsic

to humans that the machines will never emulate?

- I don't think that machines will feel love,

but you don't necessarily need to feel it

in order to be able to hack it, to monitor it,

to predict it, to manipulate it.

I mean, machines don't like to play Candy Crush,

but they can still--

- This device, in some future where

it's infinitely more powerful than it is right now,

could make me fall in love with somebody in the audience?

- That's, that goes to the question

of consciousness and mind.

- We should go there.

- I don't think that we have the understanding

of what consciousness is to answer the question

whether a non-organic consciousness is possible

or is not possible, I think we just don't know.

But again, the bar for hacking humans is much lower,

the machines don't need to have consciousness of their own

in order to predict our choices and manipulate our choices.

They just need to, all right, if you accept

that something like love is, in the end,

a biological process in the body, if you think that AI

can provide us with wonderful healthcare,

by being able to monitor and predict something like

the flu or something like cancer,

what's the essential difference between flu and love

in the sense of, is this biological

and this is something else which is so separated

from the biological reality of the body that,

even if we have a machine that's capable of monitoring

and predicting flu, it still lacks something essential

in order to do the same thing with love?

- [Nick] Fei-Fei.

- So I wanna make two comments, and this is where

my engineering, you know, personality is speaking.

We're making two very important assumptions

in this part of the conversation, one is that AI

is so omnipotent that it's achieved a state

where it's beyond predicting anything physical,

it's gotten to the consciousness level, it's even gotten to

the ultimate, the love level of capability.

And I do wanna make sure that we recognize

that we're very very very far from that,

this technology is still very nascent,

part of the concern I have about today's AI

is that super hyping of its capability.

So I'm not saying that that's not a valid question,

but I think that part of this conversation

is built upon that assumption that this technology

has become that powerful, and there is,

I don't know how many decades we are from that.

The second related assumption I feel we are,

our conversation is being based on, is that

we're talking about a world or a state of the world

in which only that powerful AI exists,

or only that small group of people who have produced

the powerful AI and intend to hack humans exist.

But in fact, our human society is so complex,

there's so many of us, right.

I mean, humanity in its history has faced

so many technologies that, if we had left them in the hands

of a bad player alone without any regulation,

multinational collaboration, rules, laws, moral codes,

those technologies could have, maybe not hacked humans,

but destroyed humans or hurt humans in massive ways.

It has happened, but by and large, our society,

in a historical view, is moving to a more civilized

and controlled state.

So I think it's important to look at that greater society

and bring other players and people into this dialogue,

so we don't talk as if there is only this omnipotent AI,

you know, deciding it's gonna hack everything to the end.

And that brings me to your topic that, in addition

to hacking humans at the level that you're talking about,

there are some very immediate concerns already.

Diversity, privacy, labor, legal changes, you know,

international geopolitics.

And I think it's critical to tackle those now.

- Well, let's, I love talking to AI researchers,

because five years ago, all the AI researchers were like,

it's much more powerful than you think,

and now they're all like, it's not as powerful as you think.

(audience laughing)

All right, so I'll just, let me ask--

- It's because five years ago, you had no idea what AI was,

now you're extrapolating too much.

- I didn't say it was wrong, I just said it was the thing.

Let's, I wanna go into what you just said,

but before I do that, I wanna take one question here

from the audience, because once we move into

the second section, we won't be able to answer it.

So the question is, it's for you, Yuval.

How do we, this is from Marin Nasini,

how can we avoid the formation

of AI powered digital dictatorships?

So how do we avoid dystopia number two,

let's enter that, and then let's go, Fei-Fei,

into what we can do right now,

not what we can do in the future.

- The key issue is how to regulate the ownership of data.

Because we won't stop research in biology

and we won't stop research in computer science and AI,

so from the three components of biological knowledge,

computing power, and data, I think data is the easiest,

and it's also very difficult, but still the easiest kind

to regulate or to protect,

to place some protections there.

And there are efforts now being made.

And they are not just political efforts,

but you know, also philosophical efforts

to really conceptualize what does it mean

to own data or to regulate the ownership of data?

Because we have a fairly good understanding

what it means to own land.

We had thousands of years of experience with that.

We have a very poor understanding

of what it actually means to own data

and how to regulate it, but this is a very important front

that we need to focus on in order to prevent

the worst dystopian outcomes.

And I agree that AI is not nearly as powerful

as some people imagine, but this is why,

again, I think we need to place the bar low.

To reach a critical threshold, we don't need the AI

to know us perfectly, which will never happen.

We just need the AI to know us better

than we know ourselves, which is not so difficult

because most people don't know themselves very well

and often make huge mistakes in critical decisions.

So whether it's finance or career or love life,

to have this shift in authority from humans to algorithms,

they can still be terrible.

But as long as they are a bit less terrible than us,

the authority will shift to them.

- You, in your book, you tell a very illuminating story

about yourself and coming to terms

with who you are and how you could be manipulated.

Will you tell that story here,

about coming to terms with your sexuality

and the story you told about Coca-Cola in your book?

'Cause I think that will make it clear

what you mean here very well.

- Yeah, so I said that I only realized

that I was gay when I was 21.

And I look back at the time, and I was,

I don't know, 15, 17.

And it should've been so obvious.

How, and it's not like a stranger,

like I'm with myself 24 hours a day

and I just don't notice any of the screaming signs

that say yeah, you were gay.

And I don't know how, but the fact is, I missed it.

Now, an AI, even a very stupid AI today, will not miss it.

- I'm not so sure.

- So imagine, this is not like, you know,

like a science fiction scenario a century from now.

This can happen today, that you can write

all kinds of algorithms that, you know,

they are not perfect, but they are still better say

than the average teenager, and what does it mean

to live in a world in which you learn about,

something so important about yourself from an algorithm,

what does it mean, what happens if the algorithm

doesn't share the information with you,

but it shares the information with advertisers

or with governments?

So if you want to, and I think we should go down

from the cloudy heights of, you know, the extreme scenarios,

to the practicalities of day to day life,

this is a good example.

Because this is already happening.

- Yeah, all right, well, let's take

the elevator down to the more conceptual level

at this particular shopping mall

that we're shopping in today.

And Fei-Fei, let's talk about what we can do today

as we think about the risks of AI, the benefits of AI,

and tell us your punch list of what you think

the most important things we should be thinking

about with AI are.

- Oh boy, there are so many things we could do today.

And I cannot agree more with Yuval

that this is such an important topic.

Again, I'm gonna try to speak about all the efforts

that have been made at Stanford, because I think

this is a good representation of what we believe

are the many efforts we can make.

So in human-centered AI, which

is the overall theme we believe

the next chapter of AI should be,

we believe in three major principles.

One principle is to invest in the next generation

of AI technology that is more, that reflects more

of the kind of human intelligence we would like.

I was just thinking about your comment about AI's dependence

on data and how the policy and governance of data

should emerge in order to regulate and govern the AI impact.

On the technology side, we should be developing technology

that can explain AI.

In the technical field, we call it explainable AI

or AI interpretability studies.

We should be focusing on technology that has

a more nuanced understanding of human intelligence.

We should be investing in the development

of less data-dependent AI technology

that would take into consideration intuition,

knowledge, creativity, and other forms of human intelligence.

So that kind of human intelligence inspired AI

is one of our principles.

The second principle is to, again,

welcome in the kind of multidisciplinary study of AI,

cross-pollinating with economics, with ethics, with law,

with philosophy, with history, cognitive science and so on,

because there is so much more we need to understand

in terms of AI's social, human, anthropological,

ethical impact.

And we cannot possibly do this alone as technologists,

some of us shouldn't even be doing this,

it's the ethicists and philosophers who should participate

and work with us on these issues.

So that's the second principle.

And the third principle, and within this,

we work with policymakers.

We convene the kind of dialogues

of multilateral stakeholders.

Then the third principle, last but not least:

I think Nick, you said that at the very beginning

of this conversation, that we need to promote

the human enhancing and collaborative

and augmentative aspect of this technology.

You have a point, even there, it can become manipulative.

But we need to start with that sense of alertness,

understanding, but still promote that kind

of benevolent applications and design of this technology,

at least these are the three principles

that Stanford's Human-Centered AI Institute is based on.

And I just feel very proud that within the short few months

of the birth of this institute, there are more

than 200 faculty involved on this campus in this kind

of research, dialogue, you know, study, education.

And their number's still growing.

- Wow.

Let's, of those three principles,

let's start digging into them.

So let's go to number one, explainability,

'cause this is a really interesting debate

in artificial intelligence.

So there are some practitioners who say

you should have algorithms that can explain

what they did and the choices they made.

Sounds eminently sensible.

But how do you do that?

I make all kinds of decisions that

I can't entirely explain, like why did I hire this person,

not that person, and I can tell a story

about why I did it, but I don't know for sure.

Like we don't know ourselves well enough

to always be able to truthfully and fully explain what

we did, how can we expect a computer using AI to do that?

And if we demand that here in the West,

then there are other parts of the world

that don't demand that who may be able to move faster.

So why don't we start, why don't I ask you the first part

of that question, Yuval the second part of that question.

So the first part is, can we actually get explainability

if it's super hard even within ourselves?

- Well, it's pretty hard for me to multiply two digits,

but you know, computers can do that.

So the fact that something is hard for humans

doesn't mean we shouldn't try to get the machines to do it,

especially, you know, after all these algorithms

are based on very simple mathematical logic.

Granted, we're dealing with neural networks these days

of millions of nodes and billions of connections.

So explainability is actually tough, it's ongoing research.

But I think this is such a fertile ground

and it's so critical when it comes to healthcare decisions,

financial decisions, legal decisions.

There are so many scenarios where this technology

can be potentially positively useful,

but with that kind of explainable capability.

So we've gotta try, and I'm pretty confident,

with a lot of smart minds out there,

this is a crackable thing.

And on top--

- [Nick] Got 200 professors on it.

- Right, not all of them doing AI algorithms.

On top of that, I think you have a point that,

if we have technology that can explain

the decision making process of algorithms,

it makes it harder for it to manipulate and cheat, right.

It's a technical solution, not the entirety

of the solution, that will contribute

to the clarification of what this technology is doing.

- But because, presumably, the AI makes decisions

in a radically different way than humans,

then even if the AI explains its logic, the fear is,

it will make absolutely no sense to most humans.

Most humans, when they are asked to explain a decision,

they tell a story in a narrative form,

which may or may not reflect what

is actually happening within them.

In many cases, it doesn't reflect.

It's just a made up rationalization and not the real thing.

Now, an AI could be much better than a human

in telling me like, I applied to a bank,

to the bank for a loan, and the bank says no,

and I ask why not?

And the bank says okay, we'll ask our AI.

And the AI gives this extremely long statistical analysis

based not on one or two salient features of my life,

but on 2517 different data points,

which it took into account and gave different weights.

And why did you give this, this weight,

and why did you give, oh, there is another book about that.

And most of the data points would seem

to a human completely irrelevant.

You applied for a loan on Monday and not on Wednesday.

And the AI discovered that, for whatever reason,

it's after the weekend, whatever,

people who apply for loans on a Monday

are 0.075% less likely to repay the loan.

So it goes into the equation.

And I get this book of the real explanation, finally,

I get a real explanation.

It's not like sitting with a human banker

that just bullshits me.
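
A minimal sketch of the kind of "real explanation" Yuval is describing: a score built from many weighted data points, where each individual contribution is tiny and humanly meaningless. The feature names, weights, and numbers below are invented for illustration; they are not from any real lending model.

```python
# Hypothetical sketch: a loan decision as a weighted sum over many small features.
# All feature names, weights, and values are invented for illustration only.

applicant = {
    "applied_on_monday": 1.0,
    "years_at_current_job": 4.0,
    "prior_defaults": 0.0,
    # ... in Yuval's example there would be 2517 such data points
}

weights = {
    "applied_on_monday": -0.00075,   # a tiny, humanly meaningless contribution
    "years_at_current_job": 0.03,
    "prior_defaults": -0.8,
}

# The score is just the sum of per-feature contributions.
score = sum(weights[name] * value for name, value in applicant.items())

# The "real explanation" is this long list of contributions.
for name, value in applicant.items():
    print(f"{name}: {weights[name] * value:+.5f}")
print(f"total score: {score:.5f}")
```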

- So are you rooting for AI?

Are you saying AI is good in this case?

- In many cases, yes, I mean, I think in many cases,

it's two sides of the coin, I think that,

in many ways, the AI in this scenario

will be an improvement over the human banker.

Because for example, you can really know

what the decision is based on, presumably.

But it's based on something that I as a human being just

cannot grasp, I just don't, I know how to deal

with simple narrative stories.

I didn't give you a loan because you're gay,

that's not good.

Or because you didn't repay any of your previous loans,

okay, I can understand that.

But I don't, my mind doesn't know what to do

with the real explanation that the AI will give,

which is just this crazy statistical thing

which says nothing to me.

- Okay, so there are two layers to your comment.

One is how do you trust and be able to comprehend

the AI's explanation?

Second is, actually, can AI be used

to make humans more trustable or be more trustable humans?

On the first point, I agree with you,

if AI gives you 2000 dimensions of potential features

with probability, it's not human understandable.

But the entire history of science in human civilization

is to be able to communicate the result of science

in better and better ways, right.

Like I just had my annual physical and

a whole bunch of numbers came to my cell phone.

And well, first of all, my doctors can,

the expert can help me to explain these numbers.

Now even Wikipedia can help me

to explain some of these numbers.

But the technological improvements

of explaining these will improve.

It's our failure as AI technologists if

we just throw 2000 dimensions of probability numbers at you.

- But this is, I mean, this is the explanation,

and I think that the point you raise is very important.

But I see it differently, I think science

is getting worse and worse in explaining its theories

and findings to general public, which is the reason

for things like doubting climate change and so forth,

and it's not really even the fault of the scientists.

Because the science is just getting more

and more complicated, and reality is extremely complicated,

and the human mind wasn't adapted to understanding

the dynamics of climate change or the real reasons

for refusing to give somebody a loan.

That's the point when you have an, again,

let's put aside the whole question of manipulation

and how can I trust, let's assume the AI is benign

and let's assume it makes, that there are no hidden biases,

everything is okay.

But still, I can't understand the decisions of the AI.

- People like Nick, the storytellers, have to explain.

What I'm saying, you're right, it's very complex.

But there are people like--

- I'm gonna lose my job to a computer like next week,

but I'm happy to have your confidence with me.

- But that's the job of the society collectively,

to explain the complex science.

I'm not saying we're doing a great job at all.

But I'm saying there is hope if we try.

- But my fear is that we just really can't do it

because the human mind is not built for dealing

with these kinds of explanations and technologies.

And it's true for, I mean, it's true

for the individual customer who goes to the bank

and the bank refuses to give them a loan.

And it can even be on the level, I mean,

how many people today on Earth understand

the financial system?

How many presidents and prime ministers understand

the financial system?

- In this country, it's zero.

(audience applauding)

- What does it mean to live in a society where the people

who are supposed to be running the business,

and again, it's not the fault of a particular politician.

It's just the financial system has become so complicated,

I don't think that economists are trying on purpose

to hide something from general public.

It's just extremely complicated.

You had some of the wisest people in the world going

to the finance industry and creating

these enormously complex models and tools which objectively,

you just can't explain to most people unless,

first of all, they study economics and mathematics

for 10 years or whatever.

So I think this is a real crisis.

And this is, again, this is part

of the philosophical crisis we started with.

And the undermining of human agency is,

that's part of what's happening,

that we have these extremely intelligent tools

that are able to make perhaps better decisions

about our healthcare, about our financial system.

But we can't understand what they are doing

and why they are doing it.

And this undermines our autonomy and our authority.

And we don't know as a society how to deal with that.

- Well, ideally, Fei-Fei's institute will help that.

Before we leave this topic though,

I wanna move to a very closely related question,

which I think is one of the most interesting,

which is the question of bias in algorithms,

which is something you've spoken eloquently about,

and let's take the financial system.

So you can imagine an algorithm used by a bank

to determine whether somebody should get a loan.

And you can imagine training it on historical data,

and historical data is racist, and we don't want that.

So let's figure out how to make sure the data isn't racist

and that it gives loans to people regardless of race.

I think we probably all, everybody in this room agrees

that that is a good outcome.

But let's say that analyzing the historical data suggests

that women are more likely to repay their loans than men.

Do we strip that out or do we allow that to stay in?

If you allow it to stay in, you get

a slightly more efficient financial system.

If you strip it out, you have a little more equality

between men and women.

How do you make decisions about what biases you wanna strip

and which ones are okay to keep?
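
One concrete version of the trade-off Nick is describing: the same model trained with and without a protected attribute in the data. A minimal sketch with invented toy data; "gender" here stands in for whatever attribute we decide to strip, and real fairness work involves far more than dropping a column.

```python
# Hypothetical sketch: training the same loan model with and without a protected attribute.
# The toy data is invented; dropping a column is only the crudest form of debiasing.
from sklearn.linear_model import LogisticRegression

# Each row: [income, prior_defaults, gender], label = repaid (1) or not (0)
rows = [
    ([52.0, 0.0, 1.0], 1),
    ([31.0, 1.0, 0.0], 0),
    ([75.0, 0.0, 0.0], 1),
    ([28.0, 2.0, 1.0], 0),
]
X_full = [features for features, _ in rows]
X_stripped = [features[:-1] for features, _ in rows]  # drop the gender column
y = [label for _, label in rows]

model_keep = LogisticRegression().fit(X_full, y)       # may fit the historical data slightly better
model_strip = LogisticRegression().fit(X_stripped, y)  # cannot use gender at all
```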

- Yeah, that's an excellent question, I mean,

I'm not gonna have the answers personally,

but I think you touch on a really important question,

this, first of all, machine learning system bias

is a real thing, you know, like you said.

It starts with data, it probably starts

with the very moment of collecting data

and the type of data we're collecting,

all the way through the whole pipeline

and then all the way to the application.

But biases come in very complex ways.

At Stanford, we have machine learning scientists studying

the technical solutions of bias, like you know,

debiasing data and normalizing certain decision making.

But we also have humanists debating about

what is bias, what is fairness, when is bias good,

when is bias bad?

So I think you just opened up a perfect topic

for research and debate and conversation in this topic.

And I also wanna point out that Yuval, you already used

a very closely related example.

A machine learning algorithm has the potential

to actually expose bias, right.

You know, like one of my favorite studies

was a paper a couple of years ago analyzing Hollywood movies,

using a machine learning face recognition algorithm,

which is a very controversial technology these days,

to recognize that Hollywood systematically gives more screen time

to male actors than to female actors.

No human being can sit there

and count all the frames of faces to measure that gender bias.

And this is a perfect example of using machine learning

to expose bias.

So in general, there is a rich set

of issues we should study, and again,

bring the humanists, bring the ethicists,

bring the legal scholars, bring the gender study experts.
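
A minimal sketch of the measurement in the Hollywood study Fei-Fei describes: run a face and gender classifier over sampled frames and tally screen time per group. The function detect_faces_with_gender is a hypothetical stand-in for a real face-analysis model, not a specific library call.

```python
# Hypothetical sketch: estimating screen time by gender from sampled movie frames.
# detect_faces_with_gender is a placeholder for a real face-analysis model.

def detect_faces_with_gender(frame):
    """Placeholder: return a list of predicted genders ("male"/"female") for faces in a frame."""
    return []

def screen_time_by_gender(frames, seconds_per_frame=1.0):
    totals = {"male": 0.0, "female": 0.0}
    for frame in frames:
        for gender in detect_faces_with_gender(frame):
            if gender in totals:
                totals[gender] += seconds_per_frame
    return totals

# e.g. screen_time_by_gender(sampled_frames) lets us compare totals across
# male and female actors at a scale no human could count by hand.
```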

- Agreed, though standing up for humans, I knew Hollywood

was sexist even before that paper, but yes, agreed.

- You're a smart human.

- Yuval, on that question of the loans.

Do you strip out the racist data,

do you strip out the gender data,

what biases do you get rid of, what biases do you not?

- I don't think there is a one size fits all, I mean,

it's a question we, again, we need this day

to day collaboration between engineers and ethicists

and psychologists and political scientists.

- [Nick] But not biologists, right, but not biologists?

- And increasingly also biologists.

And you know, and it goes back to the question,

what should we do?

So we should teach ethics to coders

as part of their curriculum.

The people today in the world that most need

a background in ethics are the people

in the computer science departments.

So it should be an integral part of the curriculum.

And it's also in the big corporations

which are designing these tools,

there should be embedded within the teams people

with background in things like ethics, like politics,

that they always think in terms of what biases

might we inadvertently be building into our system,

what could be the cultural or political implications

of what we are building.

It shouldn't be a kind of afterthought

that you create this neat technical gadget,

it goes into the world, something bad happens.

And then you start thinking oh,

we didn't see this one coming, what do we do now?

From the very beginning, it should be clear

that this is part of the process.

- I do wanna give a shout out to Rob Reich,

who just introduced this whole event.

He and my colleagues, Mehran Sahami

and a few other Stanford professors

have opened this course called ethics, computation,

and sorry, Rob, I'm abusing the title of your course.

But this is exactly the kind of classes, it's,

I think this quarter, the offering has more

than 300 students signed up to that.

- Fantastic.

The course, I wish the course had existed

when I was a student here.

Let me ask an excellent question from the audience

that ties into this, this is from Eugene Lee.

How do you reconcile the inherent trade offs

between explainability and efficacy

and accuracy of algorithms?

- Quick question, this question seems to be assuming,

if you can explain it, you're less good or less accurate.

- Well, you can imagine that if you require explainability,

you lose some level of efficiency, you're adding

a little bit of complexity to the algorithm.

- So okay, first of all, I don't necessarily believe

in that, there's no mathematical logic to this assumption.

Second, let's assume there is a possibility

that an explainable algorithm suffers efficiency.

I think this is a societal decision we have to make,

you know, when we put the seatbelt in our car, driving,

that's a little bit of an efficiency loss

'cause I have to do that seatbelt movement

instead of just hopping in and driving.

But as a society, we decided we can afford

that loss of efficiency because we care more

about human safety.

So I think AI is the same kind of technology,

as we make these kind of decisions going forward

in our solutions, in our products,

we have to balance human well being

and societal well being with efficiency.

- So let me, Yuval, let me ask you

the global consequences, this is something

that a number of people have asked about

in different ways and we've touched on,

but we haven't hit head on.

There are two countries, imagine you have country A

and you have country B.

Country A says all of you AI engineers,

you have to make it explainable,

you have to take ethics classes,

you have to really think about the consequences

of what you're doing, you're gonna have dinner with biologists,

you have to think about love and you have to like read,

you know, John Locke.

That's country A.

Country B says, just go build some stuff, right.

These two countries at some point

are gonna come in conflict.

And I'm gonna guess that country B's technology

might be ahead of country A's.

Is that a concern?

- Yeah, that's always the concern with arms races,

which become a race to the bottom in the name

of efficiency and domination.

And we are in a, I mean, what is extremely problematic

or dangerous about the situation now with AI

is that more and more countries are waking up

to the realization that this could be

the technology of domination in the 21st century.

So you're not talking about just any economic competition

between the different textile industries

or even between different oil industries.

Like one country decides to, we don't care

about the environment at all, we'll just full gas ahead,

and the other country is much more environmentally aware.

The situation with AI is potentially much worse,

because it could be really the technology of domination

in the 21st century, and those left behind

could be dominated, exploited, conquered

by those who forge ahead.

So nobody wants to stay behind.

And I think the only way to prevent this kind

of catastrophic arms race to the bottom

is greater global cooperation around AI.

Now, this sounds utopian because we are now moving

in exactly the opposite direction

of more and more rivalry and competition.

But this is part of, I think, of our job,

like with the nuclear arms race, to make people

in different countries realize that

this is an arms race that, whoever wins, humanity loses.

And it's the same with AI, if AI becomes an arms race,

then this is extremely bad news for all the humans.

And it's easy for, say, people in the US

to say we are the good guys in this race,

you should be cheering for us.

But this is becoming more and more difficult

in a situation when the motto of the day is America first.

I mean, how can we trust the USA to be the leader

in AI technology if ultimately,

it will serve only American interests

and American economic and political domination?

So it's really, I think most people,

when they think arms race in AI,

they think USA versus China.

But there are almost 200 other countries in the world.

And most of them are far far behind.

And when they look at what is happening,

they are increasingly terrified, and for a very good reason.

- The historical example you've made

is a little unsettling, because if I heard your answer correctly,

it's that we need global cooperation, and if we don't,

we're gonna end up in an arms race.

In the actual nuclear arms race,

we tried for global cooperation from, I don't know,

roughly 1945 to 1950, and then we gave up.

And then we said we're going full throttle

in the United States, and then why did

the Cold War end the way it did?

Who knows, but one argument would be

that the United States, you know, built up,

and its relentless buildup of nuclear weapons helped

to keep the peace until the Soviet Union collapsed.

So if that is the parallel, then what might happen here

is we'll try for global cooperation in 2019, 2020, 2021,

and then we'll be off in an arms race.

A, is that likely, and B, if it is,

would you say, well then the US needs

to really move full throttle in AI

because it'd be better for the liberal democracies

to have artificial intelligence than totalitarian states?

- Well, I'm afraid it is very likely

that cooperation will break down and we will find ourselves

in an extreme version of an arms race.

And in a way, it's worse than the nuclear arms race,

because with nukes, at least until today,

countries developed them but never used them.

AI will be used all the time.

It's not something you have on the shelf

for some doomsday war, it will be used all the time

to create potentially total surveillance regimes

and extreme totalitarian systems in one way or the other.

And so from this perspective, I think the danger

is far greater.

You could say that the nuclear arms race

actually saved democracy and the free market and, you know,

rock and roll and Woodstock and then the hippies,

and they all owe a huge debt to nuclear weapons.

Because if nuclear weapons weren't invented, you needed,

there would've been a conventional arms race

and conventional military buildup

between the Soviet bloc and the American bloc.

And that would've meant total mobilization of society,

if the Soviets are having total mobilization,

the only way the Americans can compete is to do the same.

Now, what actually happened was that you had

an extreme totalitarian mobilized society

in the communist bloc, but thanks to nuclear weapons,

you didn't have to do it in the United States

or in West Germany or in France,

because if you relied on nukes, you didn't need millions

of conscripts in the army.

And with AI, it's going to be just the opposite.

That the technology will not only be developed,

it will be used all the time.

And that's a very scary scenario.

- Wait, can I just add one thing?

I don't know history like you do,

but you said AI is different from nuclear technology.

I do wanna point out, it is very different because,

at the same time as you're talking about

these scarier situations, this technology

has a wide international scientific collaboration basis

and is being used to make transportation better,

to improve healthcare, to improve education.

And so it's a very interesting new time

that we haven't seen before, because while

we have this kind of competition,

we also have massive international

scientific community collaboration on these benevolent uses

and democratization of this technology.

I just think it's important to see both sides of this.

- You're absolutely right, yeah.

There are some, as I said, there are also enormous benefits

to this technology.

- [Fei-Fei] And in a global collaborative way,

especially among the scientists.

- The global aspect is more complicated

because the question is, what happens

if there is a huge gap in abilities

between some countries and most of the world?

Would we have a rerun

of the 19th century industrial revolution,

when the few industrial powers conquer

and dominate and exploit the entire world,

both economically and politically?

What's to prevent that from repeating?

So even in terms of, you know,

without this scary war scenario, we might still find ourselves

with a global exploitation regime in which the benefits,

most of the benefits go to a small number of countries

at the expense of everybody else.

- Have you heard of arXiv.org?

- arXiv.org?

- So students in the audience might laugh at this,

but we are in a very different scientific research climate,

in that the globalization of technology

and technique happens in a way that the 19th century

and even the 20th century never saw before.

Any paper that is a basic science research paper

in AI today, or technique that is produced,

let's say this week at Stanford,

easily gets globally distributed through

this thing called arXiv or a GitHub repository or this--

- The information is out there, yeah.

- The globalization of this scientific technology travels

in a very different way from the 19th and 20th century.

I mean, I don't doubt there is, you know,

confined development of this technology, maybe by regimes.

But we do have to recognize that

the difference is pretty sharp now,

and we might need to take that into consideration,

that the scenario you are describing is harder.

I'm not saying impossible, but harder to happen.

- I'll just say that it's not just the scientific papers.

Yes, the scientific papers are there.

But if I live in Yemen or in Nicaragua

or in Indonesia or in Gaza, yes, I can connect

to the Internet and download the paper,

what will I do with that?

I don't have the data, I don't have the infrastructure.

I mean, you look at where the big corporations

are coming from that hold all the data of the world,

they are basically coming from just two places.

I mean, even Europe is not really in the competition.

There is no European Google or a European Amazon

or European Baidu or European Tencent.

And if you look beyond Europe,

you think about Central America,

you think about most of Africa, the Middle East,

much of Southeast Asia, it's, yes,

the basic scientific knowledge is out there,

but this is just one of the components that go

to creating something that can compete

with Amazon or with Tencent or with the abilities

of governments like the US government

or like the Chinese government.

So I agree that the dissemination of information

and basic scientific knowledge, we're at

a completely different place than in the 19th century.

- Let me ask you about that, 'cause it's something three

or four people have asked in the questions, which is,

it seems like there could be a centralizing force

of artificial intelligence, that it will make whoever

has the data and the best compute more powerful,

and that it could then accentuate income inequality,

both within countries and within the world, right,

you can imagine the countries you've just mentioned,

the United States, China, Europe lagging behind,

Canada somewhere behind, way ahead of Central America.

It could accentuate global income inequality.

A, do you think that's likely, and B,

how much does it worry you?

We've got four people who've asked a variation on that.

- Well, as I said, it's very very likely,

it's already happening.

And it's extremely dangerous, because the economic

and political consequences could be catastrophic.

We are talking about the potential collapse

of entire economies and countries.

Countries that depend say on cheap manual labor,

and they just don't have the educational capital

to compete in the world of AI.

So what are these countries going to do?

I mean, if, say, you shift back most production from,

say, Honduras or Bangladesh to the US and to Germany,

because the human salaries are no longer part

of the equation, and it's cheaper

to produce the shirt in California than in Honduras,

so what will the people there do?

And you can say okay, but there will be many more jobs

for software engineers.

But we are not teaching the kids in Honduras

to be software engineers.

So maybe a few of them could somehow immigrate to the US.

But most of them won't, and what will they do?

And we, at present, we don't have the economic answers

and the political answers to these questions.

- Fei-Fei, you wanna jump in?

- I think that's fair enough.

I think Yuval definitely has laid out

some of the critical pitfalls,

and that's why we need more people to be studying

and thinking about this.

One of the things we over and over noticed,

even in this process of building the community

of human-centered AI and also talking to people,

both internally and externally, is that

there are opportunities for business around the world

and governments around the world

to think about their data and AI strategy,

there are still many opportunities for, you know,

outside of the big players in terms of companies

and countries to really come to the realization

it's an important moment for their country,

for their region, for their business

to transform into this digital age.

And I think when you talk about these potential dangers,

the lack of data in parts of the world that

haven't really caught up with this digital transformation,

the moment is now, and we hope to, you know,

raise that kind of awareness and to encourage

that kind of transformation.

- Yeah, I think it's very urgent, I mean,

what we are seeing at the moment is, on the one hand,

what you could call some kind of data colonization.

That is, the same model that we saw in the 19th century,

where you have the imperial hub where they have

the advanced technology, they grow

the cotton in India or Egypt, they send the raw materials

to Britain, they produce the shirts,

the high tech industry of the 19th century, in Manchester,

and they send the shirts back to sell them in India

and outcompete the local producers.

And we, in a way, might be beginning to see the same thing now

with the data economy, that they harvest the data

in places also like Brazil and Indonesia,

but they don't process the data there,

the data from Brazil and Indonesia goes to California

or goes to eastern China, it's processed there,

and there they produce the wonderful new gadgets

and technologies and sell them back as finished products

to the provinces or to the colonies.

Now, it's not a one to one, it's not the same,

there are differences.

But I think we need to keep this analogy in mind.

And another thing that maybe we need to keep in mind

in this respect, I think, is the reemergence of stone walls.

You know, originally,

my specialty was medieval military history.

This is how I began my academic career,

with the crusades and castles and knights and so forth.

And now I'm doing all these cyborgs and AI stuff.

But suddenly, there is something that I know from back then,

the walls are coming back.

And I try to kind of figure out, what's happening here?

I mean, we have virtual realities, we have 3G, AI,

and suddenly, the hottest political issue

is building a stone wall.

Like the most low tech thing you can imagine.

And what is the significance of a stone wall

in a world of interconnectivity and all that?

And it really frightens me that there

is something very sinister there, the combination of data

flowing around everywhere so easily

while more and more countries build walls, and also my home country

of Israel, it's the same thing, you have the, you know,

the startup nation, and then the wall.

And what does it mean, this combination?

- Fei-Fei, you wanna answer that?

(audience laughing)

- Maybe you can look at the next question.

- You know what, let's go to the next question

which is tied to that.

And the next question is, you have the people here

at Stanford who will help build these companies,

who will either be furthering a process

of data colonization or reversing it

or who will be building, you know,

the efforts to create a virtual wall

in a world based on artificial intelligence

that are being created, funded at least,

by a Stanford graduate.

So you have all these students here in the room.

What do you want them to, how do you want them

to be thinking about artificial intelligence

and what do you want them to learn?

Let's spend the last 10 minutes of this conversation

talking about what everybody here should be doing.

- So if you're a computer science or engineering student,

take Rob's class.

If you're a humanist, take my class.

And all of you, read Yuval's books.

- Are his books on your syllabus?

- Not on my, sorry.

I teach hardcore deep learning.

His book doesn't have equations.

- I don't know, B plus C plus D equals H.

- But seriously, you know, what I meant to say

is that Stanford students, you have a great opportunity,

we have a proud history of bringing this technology

to life, Stanford was at the forefront of the birth of AI,

in fact, our very own professor John McCarthy coined

the term artificial intelligence

and came to Stanford in 1963 and started

one of the two oldest AI labs in this country.

And since then, Stanford's AI research

has been at the forefront of every wave of AI changes.

And in 2019, we're also at the forefront

of starting the human-centered AI revolution or

the writing of the new AI chapter.

And we did all this for the past 60 years for you guys,

for the people who come through the door

and who will graduate and become practitioners,

leaders, and part of the civil society.

And that's really what the bottom line is about.

Human-centered AI needs to be written

by the next generation of technologists

who have taken classes like Rob's class to think about

the ethical implications, the human wellbeing.

And it's also gonna be written by

those potential future policymakers

who came out of Stanford's humanities studies

and have been in this school, who are versed

in the details of the technology, who understand

the implications of this technology, and who have

the capability to communicate with the technologists.

That is, no matter how we agree and disagree,

that's the bottom line, is that we need

these kinds of multilingual leaders and thinkers

and practitioners, and that is

what Stanford's Human-Centered AI Institute is about.

- Yuval, how do you wanna answer that question?

- On the individual level, I think it's important

for every individual, whether in Stanford,

whether an engineer or not, to get to know yourself better.

Because you're now in a competition.

You know, the oldest advice in the book

in philosophy is know yourself.

We've heard it from Socrates, from Confucius,

from Buddha, get to know yourself.

But there is a difference, which is that now,

you have competition.

In the day of Socrates or Buddha,

if you didn't make the effort, so okay,

so you missed out on enlightenment.

But still, the king wasn't competing with you.

They didn't have the technology, now you have competition.

You're competing against these giant corporations

and governments.

If they get to know you better than you know yourself,

the game is over.

So you need to buy yourself some time,

and the first way to buy yourself some time

is to get to know yourself better,

and then they have more ground to cover.

For engineers and students, I would say,

I'll focus on engineers, maybe.

The two things that I would like to see coming out

from the laboratories and the engineering departments

is first, tools that inherently work better

in a decentralized system than in a centralized system.

I don't know how to do it,

but I hope there is something that engineers can work with.

I heard that blockchain is like the big promise

in that area, I don't know.

But whatever it is, when you start designing

a tool, part of the specification of what this tool

should be like, I would say this tool should work better

in a decentralized system than in a centralized system.

That's the best defense of democracy.

The second thing that I would like to see coming out--

- I don't wanna cut you off 'cause I want you

to get to this second thing, how do you make

a tool work better in a democracy than--

- I'm not an engineer, I don't know.

(audience laughing)

- [Nick] Okay.

All right, we'll go to part two.

Take that, someone in this room, figure that out,

'cause it's very important--

- I can think about it and then,

I can give you historical examples of tools

that work better in this way or in that way.

But I don't know how to translate it into present day--

- Go to part two, 'cause I got

a few more questions asked from the audience.

- Okay, so the other thing though

I would like to see coming is an AI sidekick

that serves me and not some corporation or government,

so, I mean, we can't stop

the progress of this kind of technology.

But I would like to see it serving me.

So yes, it can hack me, but it hacks me

in order to protect me.

Like my computer has an antivirus, but my brain doesn't.

It has a biological antivirus against the flu or whatever,

but not against hackers and trolls and so forth.

So one project to work on is to create an AI sidekick,

which I paid for, maybe a lot of money, and it belongs to me.

And it follows me and it monitors me

and what I do and my interactions.

But everything it learns, it learns in order to protect me

from manipulation by other AIs,

by other outside influencers.

So this is something that I think,

with the present day technology,

I would like to see more effort in that direction.

- Not to get into too-technical terms,

I think you would feel comforted to know that

budding efforts in this kind of research are happening,

you know, trustworthy AI, explainable AI,

security-motivated or security-aware AI.

So I'm not saying we have the solution,

but a lot of technologists around the world

are thinking along that line and trying to make that happen.

- And it's not that I want an AI that belongs to Google

or to the government that I can trust,

I want an AI that I'm its master, it's serving me.

- And it's powerful, it's more powerful than my AI,

'cause otherwise, my AI could manipulate your AI.

(audience laughing)

- It will have the inherent advantage

of knowing me very well.

So it might not be able to hack you,

but because it follows me around and it has access

to everything I do and so forth, that gives it an edge

in the specific realm of just me.

So this is a kind of counterbalance

to the danger that the people with--

- But even that would have a lot of challenges

in our society, who is accountable,

are you accountable for your actions or is your sidekick?

- This is going to be a more and more difficult question

that we will have to deal with.

- The sidekick defense.

All right.

Fei-Fei, let's go through a couple questions quickly.

We often talk, this is from Reagan Pollack,

we often talk about top down AI from big companies,

how should we design personal AI

to help accelerate our lives and careers?

The way I interpret that question is,

so much of AI is being done at the big companies.

If you wanna have AI at a small company or personally,

can you do that?

- Well, first of all, one solution

is what Yuval just said.

- Probably those things will be built by Facebook.

- So first of all, it's true, there is a lot of investment,

effort, and resource that big companies are putting

into AI research and development, but it's not

that all the AI is happening there,

I wanna say that academia continues to play a huge role

in AI research and development,

especially in the long term exploration of AI.

And what is academia?

Academia is a worldwide network of individuals,

students, and professors thinking very independently

and creatively about different ideas.

So from that point of view, it's a very grassroots kind

of effort in AI research that continues to happen.

And small businesses and independent research institutes

also have a role to play, right.

There are a lot of publicly available datasets,

and it's a global community that is very open

about sharing and disseminating knowledge and technology.

So yes, please, by all means,

we want global participation in this.

- All right, here's my favorite question,

this is from anonymous, unfortunately.

If I am in eighth grade, do I still need to study?

(audience laughing)

- As a mom, I will tell you yes.

Go back to your homework.

- All right, Fei-Fei, what do you want Yuval's next book

to be about?

- Wow, I didn't know this, I need to think about that.

- All right, well, while you think about that,

Yuval, what area of machine learning

do you want Fei-Fei to pursue next?

- The sidekick project.

- Yeah, I mean, just what I said, an AI,

can we create a kind of AI which can serve individual people

and not some kind of big network?

I mean, is that even possible or is there something about

the nature of AI which inevitably will always lead back

to some kind of networked effect

and winner takes all and so forth?

- [Nick] All right, we're gonna wrap with Fei-Fei--

- His next book is gonna be a science fiction book

between you and your sidekick.

(audience laughing)

- All right, one last question for Yuval,

'cause we've got two of the voted questions are this.

Without the belief in free will,

what gets you up in the morning?

- Without the belief in free will?

I don't think that the question, I mean,

is very interesting or very central,

it has been central in Western civilization

because of some kind of basically theological mistake

made thousands of years ago.

But it's a really, it's a misunderstanding

of the human condition.

The real question is how do you liberate yourself

from suffering?

And one of the most important steps in that direction

is to get to know yourself better, and for that,

you need to just push aside this whole, I mean, for me,

the biggest problem with the belief in free will

is that it makes people incurious about themselves

and about what is really happening inside themselves.

Because they basically say, I know everything.

I know why I make decisions, this is my free will.

And they identify with whatever thought or emotion pops up

in their mind because hey, this is my free will.

And this makes them very incurious about

what is really happening inside and what is also

the deep sources of the misery in their lives.

And so this is what makes me wake up in the morning,

to try and understand myself better,

to try and understand the human condition better.

And free will is just irrelevant for that.

- And if we lose it, your sidekick

can get you up in the morning.

Fei-Fei, 75 minutes ago, you said

we weren't gonna reach any conclusions,

do you think we got somewhere?

- Well, we opened a dialogue between the humanist

and the technologist, and I wanna see more of that.

- Great, all right, thank you so much, thank you, Fei-Fei,

thank you, Yuval Noah Harari, it was wonderful to be here,

thank you to the audience.

(audience applauding)
