Amir Husain: "Is the World Ready for the Age of AI?" | Talks at Google

[MUSIC PLAYING]

[APPLAUSE]

JAKA JAKSIC: What are some of your general observations

about how the world is prepared for the age of AI?

Is the world ready?

AMIR HUSAIN: I don't think the world is entirely ready,

both at the practical level as well as the intellectual level.

I think at the intellectual level,

there are still unaddressed concerns

about how AI impacts jobs as an example.

One of the leading areas in which

AI is being implemented at full force is in the military.

Just in the last couple of months,

there have been major advances, for example

in the testing of unmanned ground vehicles,

as well as a number of other AI-powered weapons systems that the Russians have been investing in.

And this creates further fears: at one extreme, will these systems just become sentient and kill us all? But then, even from a middle-of-the-road view of this development, do we need to have an international framework that governs the development of these systems, and so on?

And given the fact that those frameworks don't exist, the conversation doesn't really exist either. There is a UN group that is charged with discussing autonomous weapons systems, and over years and years, it hasn't been able to come to any sort of tangible conclusion on what approach to take with regard to these weapons systems.

All of this speaks to the fact that there

are many opinions, many national interests,

and those national interests and opinions have not yet

coalesced to a level that would allow preparation

in the global sense.

So I can give you many other examples,

but I would just say that even these two are

sufficient to cite in making the case

that I believe there's a lot more that needs to be

done for us to prepare well.

JAKA JAKSIC: What are some of the biggest challenges? You've already mentioned AI weapons. Is this, in your opinion, the biggest threat, or are there other things that might pose as great a challenge?

AMIR HUSAIN: I don't think that AI weapons systems are the greatest threat, partly because there are, broadly, two types of fears around AI weapons systems. One is that these systems will become like the Terminator, will develop some level of intellectual capacity, and with it some fundamental malevolence.

I don't think really we need to build

machines to come up with the most unbelievable kinds

of malevolence.

The human race is sufficient for that.

We've shown that throughout our history.

So sophisticated technology in the hands of a highly motivated, malevolent individual is plenty scary.

I don't think that the source of that concern

should be the further development

of artificial intelligence.

But the other type of concern is, well,

what if this creates a massive disparity?

You know, this is a class of weapons systems that might create some sort of race. It might create some sort of fundamental disparity between competing nations. And the study of military history shows us that stability is generally achieved when there is some modicum of equality, some parity; when one competing power is very dominant and another is very weak, that gives impetus to instability, war, confrontation, and so on.

So maintaining parity is an important theme.

And in AI, particularly AI weapons systems,

I think the risk is that on the one hand,

certain countries are approaching this with vim

and vigor and speed and making large investments,

and certain other countries are not,

and therefore, that disparity that may result

may, in fact, seed a new kind of instability.

During the nuclear age, during the Cold War,

there was this idea, sad as it was,

of mutually assured destruction.

But mutually assured destruction worked in the sense

that all the way to the end of the Cold War, what

kept the peace was parity.

What kept the peace was this idea of consequences.

And where we may see risks emanate from with this new technology is in this sort of capability gap. That capability gap cannot be allowed to exist, not because one power is inherently better than another, but because stability requires parity.

JAKA JAKSIC: What would you recommend

to mitigate this problem?

Are there any solutions in sight?

Are there any countries or any organizations that have been approaching a solution?

AMIR HUSAIN: There are many organizations that have taken the approach of, for example, regulating these weapons systems in very tight ways. There are other organizations that call for outright bans.

There are some countries which are

taking a variety of different tactics

to delay the onset of such bans.

So because it's a complex problem, with hundreds of stakeholders and more than 200 states involved in some of these discussions, one would expect that every avenue of attack, every vector of thought, has been applied to this problem.

So you see all sorts of thinking.

The issue is that this speaks to something very fundamental

about the human condition, which is

that we recognize that if there is no parity,

there is instability.

We recognize that if we gain the advantage,

then we get to drive the future to a large extent.

So at the very minimum, maintain parity, and if you can,

attain an advantage.

That fundamental drive, I don't think, can be reprogrammed or deprogrammed from humanity over the near-term period.

So then one has to be very realistic about this, which

is to say that the real threat from AI weapons systems

is that they are fundamentally software.

When you have a deal with a certain nation, and you say, look, don't invest in nuclear technology, and they create these buildings with domes, large construction, digging into the sides of mountains, then through satellites and through surveillance aircraft, you can verify whether or not this activity is going on.

The development of software, however,

cannot be verified in this way.

So this notion of "trust, but verify" fails with AI-based weapons systems, because you cannot guarantee that a certain sequence of bits exists nowhere in any computer system in an entire country. That's a fundamentally unverifiable sort of activity.

So when you get to that position, where on the one hand you aspire to parity, and on the other hand you fear someone else gaining an advantage, and you're working under conditions where verifying the presence of that advantage is a difficult, well-nigh impossible problem to solve, then you enter the realm of game theory. The assumption becomes that since you cannot verify that the competitor is doing what they tell you they're doing, which is to comply with whatever agreement they have with you, they are indeed violating that agreement. And the competitor must know that you are reasoning in that way.

So it becomes a game theoretic sort

of approach, which leads to its obvious outcomes.
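To make that game-theoretic point concrete, here is a minimal sketch of the underlying payoff logic; the payoff numbers are hypothetical, invented purely for illustration. When compliance cannot be verified, "develop" is the best response to either move the other side might make:

```python
# Hypothetical payoff matrix for two states deciding whether to honor an
# unverifiable AI-arms agreement. Entries are (row player, column player)
# payoffs; the numbers are illustrative only.
PAYOFFS = {
    ("comply",  "comply"):  (3, 3),   # mutual restraint: best joint outcome
    ("comply",  "develop"): (0, 4),   # the restrained side falls badly behind
    ("develop", "comply"):  (4, 0),   # the developer gains a decisive edge
    ("develop", "develop"): (1, 1),   # costly race, but parity is preserved
}

def best_response(opponent_action: str) -> str:
    """Return the row player's payoff-maximizing reply to a fixed opponent move."""
    return max(("comply", "develop"),
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

# Because compliance is unverifiable, each side must plan against both
# possible opponent moves, and "develop" is the best response to both.
for opponent in ("comply", "develop"):
    print(f"If the other side plays {opponent!r}, best response: {best_response(opponent)!r}")
```

Those are the "obvious outcomes": both sides develop, and the agreement is hollow unless verification becomes possible.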

In our book, "Hyperwar," we talk about this in depth also.

But this is unfortunately just the reality.

So it's one thing to take a moral position and say, look, weapons are bad, and therefore, we must ban all weapons.

Of course, weapons can cause harm,

and if we could, in fact, ban all weapons,

and there was a practical way to go about doing that,

perhaps that would be the thing to do,

but that is not practical.

And what is practical right now is to create stability.

And creating stability will require large-scale competitors to make the right sorts of investments and to create a very obvious case for parity and, therefore, stability.

JAKA JAKSIC: What about-- one of the key differences between AI weapons and nuclear weapons is that nuclear weapons are very difficult to build, whereas with AI weapons, like drones, you can build one with a few hundred dollars and, you know, some spare time. How do you see small actors coming into this picture?

AMIR HUSAIN: See, I mean, if you think about this as a military strategist, the purpose of an AI weapons system and the purpose of a nuclear weapons system are very different.

The only case in which nuclear weapons have been used

was not for a tactical victory or to clear some embankment,

or to deal with an armored column.

It was to break the will of a nation.

That is the strategic use of nuclear weapons.

AI weapons of the type that you are describing are tactical weapons.

For a few dollars, yes. In fact, in the Middle East right now, with all the confrontation that's going on there, there are rebel groups that have put together weapons of the type that you are describing. There have been assassination attempts. There have been successful strikes against airports at great distances.

There have even been cases of drones being flown into the radar arrays of anti-missile batteries, and those anti-missile batteries have been taken down for brief moments in time, while a ballistic missile, also a kind of homegrown ballistic missile, has been timed to arrive at the scene while the drone has taken out the radar of the anti-missile battery.

So these sorts of sophisticated, you know,

orchestrated attacks across multiple actions

and with multiple systems, these have actually happened now.

These are not the realm of fiction.

So the risks that emanate from that kind of technology today

are definable.

But your point is well taken, which is where will this go.

In general, if you think about weapons systems, the trend is towards precision. If you go back to World War II, and if you've seen any documentaries about World War II, where they show huge numbers of bombers flying in and dropping large numbers of bombs to take down a bridge, or to take down a single building, the reason why you needed huge numbers of bombers with large numbers of bombs was that you didn't have accurate bombsights. You didn't have any kind of geolocation on the weapons system.

That was all replaced by a single 2,000-pound bomb. You can take out a bridge with a single 2,000-pound bomb. So instead of flying in 50 or 100 bombers, losing a lot of people, and creating mass damage in the area where the bombing campaign was happening, you just drop one bomb, and you take out the bridge.

That trend of reducing the amount of kinetic firepower

and increasing the amount of precision, that trend

is generally continuing in military technology.

And with artificial intelligence,

one of the things that could happen

is that you could create far more precise systems, which

are even less kinetically powerful than what

is being used now.

And instead of taking on, for example, an armored division, you might take out 23 very specific people within that armored division, with no loss of life to anyone else, again collapsing the will of that fighting unit.

So this is-- the thinking around a lot of these concepts

is not fully developed.

But the potential for the use of artificial intelligence

to actually reduce casualties in war

also exists, even in the kinetic case.

Now, we'll see in the fullness of time

where these technologies go.

But my point here is only that this is not a black-and-white issue. This is not a 0-or-1 issue. There is much nuance in this topic. And I don't believe that those who are writing the rules, or hope to write the rules, fully understand the nuance that they need to in order to craft effective legislation or to construct treaties and so on and so forth.

JAKA JAKSIC: Yeah.

What about some of the other types of disruption

that can be caused by AI?

Like most of us work day to day on things

that seem much more benign than weapons, like you know,

recommender systems, various types of pattern recognition.

How much of a disruption do you see happening

from these benevolent systems?

AMIR HUSAIN: So Jaka, in my office, I have my computer in front of me, and on my left there's a wall, and on that wall there is a poster that shows well over 100 different cognitive biases that the human mind suffers from. These cognitive biases are actually well known. You can go and Google a list of these cognitive biases.

There are also ways in which you can analyze text, for example, and categorize people's personalities. OCEAN, the Big Five personality model, is well known.

There are many other techniques.

And so we know that we succumb to these cognitive biases. We know that the study of these cognitive biases has been formalized. We know that there are ways, at a distance, simply with automatic analysis, NLP if you will, to categorize people into various different profiles, and to gauge their susceptibility to some or all or a few of these cognitive biases.

And then, while natural language generation is being used for various other things, natural language generation can also be used to induce a cognitive bias, and to push a person in a direction where a pre-designed outcome can be realized with a high probability.
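As a rough sketch of the pipeline just described, profile a person from their text, then select bias-matched messaging, here is a toy illustration; the trait keywords, the trait-to-bias mapping, and the templates are invented placeholders, not a real system:

```python
# Toy sketch of the "mind hacking" pipeline: crude keyword-based
# personality profiling followed by bias-matched message selection.
# All keyword sets and templates are invented placeholders.
TRAIT_KEYWORDS = {
    "openness":    {"novel", "imagine", "curious", "art"},
    "neuroticism": {"worried", "afraid", "unsafe", "risk"},
}

BIAS_TEMPLATES = {
    # Hypothetical mapping from a dominant trait to a cognitive bias and
    # a message framing intended to exploit it.
    "openness":    ("novelty bias", "Be the first to try {topic}!"),
    "neuroticism": ("loss aversion", "Don't let {topic} be taken from you."),
}

def profile(text: str) -> str:
    """Return the trait whose keywords appear most often in the text."""
    words = set(text.lower().split())
    return max(TRAIT_KEYWORDS, key=lambda trait: len(TRAIT_KEYWORDS[trait] & words))

def targeted_message(text: str, topic: str) -> str:
    trait = profile(text)
    bias, template = BIAS_TEMPLATES[trait]
    return f"[exploits {bias}] " + template.format(topic=topic)

print(targeted_message("I am worried this is unsafe and a big risk", "your savings"))
```

Swap the keyword matching for real NLP models and the templates for natural language generation, and this is the loop being described.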

When you do this at scale, you're

talking about essentially the intentional reprogramming

of large numbers of people.

This is something that in "The Sentient Machine,"

I refer to as mind hacking.

It's definitely one of the greater threats.

The reason it is a great threat, and this audience will get this 100%, is that building a system like that is neither very expensive, nor impossible, nor very technically complex, because you can take a lot of the piece parts from various different sources.

And I was at the Computer History Museum, which for me is sort of like a pilgrimage every time I go there, and one of the holy artifacts at that temple is one of the original Google server racks, where you had the sort of off-the-shelf motherboards on these trays with the batteries in the back, which reduced the cost of delivering cloud-based services.

With that kind of innovative thinking, if you think about how you can inexpensively create the physical capacity to run a system like this, then one could imagine that even small groups could conduct large-scale, even nation-level, attacks of this type.

I don't know that there is a full recognition of that threat. And I don't know that the organizations tasked with protecting us against such attacks formally exist, or that, in the absence of such organizations, whatever the closest proxy organization is has the level of skill and understanding to be able to detect and defeat such threats.

So this is just one example, of course.

Another one: I've actually just started the process of writing my second book, and I'm sort of giving this away a little bit, but one of its core tenets is that we are past the age where there can be a willful disconnect from technology.

There was a time when you could say, I'll turn off my phone,

and I'll turn off my network connection,

and I'll go just be off the grid.

My submission is that there is no off the grid anymore.

As sensor technology has become more and more complex, and fused sensor analysis using machine learning becomes provably more and more successful, even you walking through a collection of air molecules at a distance, exhibiting a gait, having a conversation inside a room with a glass window, or using a mechanical typewriter, and I give all these examples in the opening chapters of the book, are completely susceptible to signature analysis, detection, and reverse engineering at a distance.

So your existence is a signature.

Your existence emanates information everywhere you are and beyond.

And the sensor complexity is such now that that signature can be extracted.

And what can be extracted can then be used to predict.

We're entering a world where disconnection may not

be possible.

So these types of threats are to be taken quite seriously.

JAKA JAKSIC: Even if you personally go off the grid,

the world around you is still profoundly

impacted by technology and AI.

AMIR HUSAIN: Absolutely.

JAKA JAKSIC: Yeah, so what are some of the approaches

that you would recommend for mitigating these risks,

either--

like would that be a regulation?

Would that be like some technology

that would self-regulate AI?

AMIR HUSAIN: I think, in that second case that we just spoke about, I do believe that in that area, AI is the best way to deal with AI, with adversarial AI. And the way you would do that is, again, this concept of AI shields, where you want to be able to detect AI-driven messages and AI-driven behaviors, and to be able to foil them.

There's some work happening now in being able to detect deepfakes, as an example.

That, to me, is very basic work, because it's essentially

an image or a sequence of images compiled together as a video,

and you're trying to detect whether this is machine-made.

When you get into more complex media with language

and other things layered on, that detection may become more

difficult. But those AI shields are,

I think, very important to develop on the one hand.

Secondly, in the area of regulation, the US Department of Commerce recently solicited comment on what we should do to regulate the export of certain types of AI.

And we're at a very basic stage.

So my submission to them was that, first of all,

instead of talking about AI, we should really

talk about the data, the algorithms, and the models,

and then, of course, the underlying hardware

infrastructure, like really sophisticated sensors, high

end computing equipment, things like that that are enablers.

So if you go down the path of regulating algorithms,

or if you go about thinking of AI as a category of software,

I think that creates many impossibilities,

because to say, I'm going to export

control AI is like saying I'm going

to export control mathematics.

It's not possible.

So the idea here is that there are already laws that govern the exportability of data. To the extent that data is export controlled, and an algorithm processes that data to produce a model, the model and the data should be export controlled.

But the AI products that embody the algorithms, those themselves are like the mathematics: saying, again, don't export differentiation to such-and-such parts of the world is very hard to do.
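As a sketch of the rule being proposed, control following the data and the trained model rather than the algorithm, here is a hypothetical taint-propagation model; the classes and flags are my own illustration, not any actual regulation:

```python
# Hypothetical illustration of "control follows the data": a model trained
# on export-controlled data inherits that control, while the algorithm
# itself (like mathematics) is never controlled.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    export_controlled: bool

@dataclass
class Model:
    name: str
    export_controlled: bool  # inherited from the training data

def train(algorithm: str, data: Dataset) -> Model:
    """Any algorithm may be applied freely, but the resulting model
    carries the control status of the data it was trained on."""
    return Model(name=f"{algorithm}-on-{data.name}",
                 export_controlled=data.export_controlled)

public = Dataset("public-imagery", export_controlled=False)
sensitive = Dataset("controlled-sensor-logs", export_controlled=True)

print(train("gradient-boosting", public))     # exportable
print(train("gradient-boosting", sensitive))  # export controlled
```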

So that's sort of this nuanced approach

of going in and saying, look, please stop talking about AI.

Let's start talking about specific things.

And then let's create security and safety around those things.

Those are two ideas that I would share with you: AI shields, the detection of AI-generated content and AI-generated campaigns, and also this kind of nuanced regulation that allows commerce by allowing the exportability of products, and doesn't leave American companies facing what we faced earlier.

We export controlled drones.

Other countries were able to sell those drones

with those capabilities to eight or nine states,

which were otherwise American customers.

We export controlled computers, personal computers, above a certain MIPS threshold, until, in the Clinton presidency, that limit was removed. That caused computers and processors built in Far East Asian countries to be exported to the rest of the world. And the consequence was that the entire PC industry shifted to that part of the world.

You know, the PC industry was really based in Texas. You had Dell. You had Compaq. You had CompuAdd. All the high-volume companies were out of Texas.

Outside of Dell, that's all gone.

And we know where it went to.

So I don't think that mistake needs to be made again.

JAKA JAKSIC: Yeah.

What are some of the countries that are best prepared

to face the future?

And is this even something that individual countries

can face themselves?

AMIR HUSAIN: You know, I was recently in the UAE.

And in the UAE--

I go there very frequently--

there's a young gentleman who's been appointed the world's

first minister for AI.

His name is Omar Sultan Al Olama.

He is the UAE's minister for artificial intelligence.

He's a very forward-looking person, and he's focused heavily on education-related activities with AI, very practical uses of AI.

They've got very specific national goals, which is that they want to incorporate autonomous, AI-driven decision making and execution within government to really power the next wave in their e-government digitization.

They are working on, again, tangible,

defined specific projects.

I think that's one approach.

It starts from, again, in the grand scheme

of things, a relatively smaller country,

but commensurate with that, a set of goals that

are realistic and achievable.

And I think they've got the right kind of people

behind that.

I've spent a lot of time talking to NATO officials, and had several conversations in Brussels.

The problem there, and that, again, is one of the examples of military implementations of AI, is that the conventional Western militaries are configured in a way where procurement is very, very difficult. It spans multiple years. And at a time when procurement spans multiple years, and you have a technology like AI which is maturing so quickly, and competitors are so incented to create a competitive differentiator, there's this danger of these organizations being left behind, whether it's NATO, or the DOD, or the MOD in the UK.

And in each one of these cases, I've

had extensive conversations.

China is moving at speed.

I've had many interactions there.

I think they have a large number now of AI startup companies.

They've got a greater availability of capital.

In general, things seem to happen faster, from inception

to launch, from launch to the first product,

from launch to getting a team of 100 people going.

All of these practical things that one

has to do to build value or to launch a project

or to get from A to B, you know, it's

just objectively measurable.

Those things are happening faster.

So I think right now, we're in a position

where the world is looking at these technologies

in various different ways.

Russia has a very military focused view.

China has a very pan view across economy and military.

Countries like the UAE have a very specific view of AI

as to what it can give them.

It's fascinating.

I think many countries are realizing what McKinsey said in their report about AI, which is that AI might be the first technology where, if you adopt it at sufficient scale, it may create advantages that are unassailable, because essentially, you use data, you create a model, you get insight.

Using that insight, you then improve your business.

You then use a better model to start

creating even better data.

And this becomes an upward spiral.
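A toy numerical sketch of that spiral, with invented functional forms and growth rates: each cycle, more data yields a better model, the model yields insight, and the improved business generates still more data, so an early start compounds.

```python
# Toy model of the AI flywheel: data -> model -> insight -> more data.
# The formulas and rates are invented; only the compounding matters.
def flywheel(data: float, cycles: int) -> float:
    for _ in range(cycles):
        model_quality = data ** 0.5       # more data, better model
        insight = 0.1 * model_quality     # better model, more insight
        data *= 1 + insight               # better business, even more data
    return data

early_starter = flywheel(100.0, cycles=10)
late_starter = flywheel(100.0, cycles=5)
print(f"early starter: {early_starter:,.0f} units of data")
print(f"late starter:  {late_starter:,.0f} units of data")
```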

And then the exponential benefits

of being higher up in that spiral

are such that if somebody starts too late,

you're running at a completely different rate of speed.

It's very difficult to catch up.

JAKA JAKSIC: What about things like job displacement in the age of AI?

Do you have any thoughts or recommendations about that?

AMIR HUSAIN: Yeah.

Look, my view has been for a long time

here that the disconnect really should only

be on one topic, which is the timing over which

the displacement will occur.

People can have different views of that,

because you can calculate that in various different ways.

There are lots of complex economic variables

that play into that.

And the end state, what percentage unemployment

will that end state result in?

You can have some disconnect on that.

But I think a logical person can't have a disconnect, fundamentally, that over some period of time, t, there will be some unemployment, e, where e is substantially greater than the present unemployment.

You know?

And if you think about this problem in this way,

then the fact that you need to come up with a solution

is obvious.

One of the ways to come up with that solution

is to say, OK, well, all this technology

and all this autonomous capability

is producing something.

It's producing something of value that's adding something

to the economy.

There are ideas around taxing that production

and sharing that wealth with the folks who

are no longer working.

There is the idea that perhaps, what you need to do

is you need to take some core resources, land in a country,

or some core resource in a country

and say, look, all of the citizens of this country

own this percentage of that core resource.

So as you build anything, you build a factory, you build whatever it is, then for the percentage of that core resource that you use, you've got to pay the citizens their share of ownership in using that resource.

There is universal basic income, which

is now being tried in various different places.

All of these things essentially lead to the same idea, which is that we will have to start paying people, without them doing the work, to make the money, net net.

And that also creates this different idea

of your relationship with money.

So conceptually, the way I look at it

is I say, OK, well, the human condition has evolved

from where 100% of your time was being spent just

trying to somehow stay alive, and then that diminished,

and you got time to do a few other things.

But at this point in time, can we not please just

get food and basic health?

With the level of technology that we have now in the world, is it too much to ask to just get food and basic health and maybe some education?

You can look at that, you can say, well,

that's a dastardly idea.

But it's no different from how other types of enablement have been supplied to humanity over time as technology has progressed.

I think it's more this notion of--

this concept gets caught up in politics.

It's really not a very political concept.

It's just that we have a bunch of stuff here. If we were to distribute that in a decent way, these basic needs could be met, and then we can talk about competition at a level of need that's higher than these basic needs. Some level of redistribution is absolutely necessary.

I don't even think that needs to be a debate.

JAKA JAKSIC: How do political leaders view these ideas?

Is that something that they're very reluctant to adopt,

or are there differences in different parts of the world?

AMIR HUSAIN: It's very interesting.

If you look at the UK, there's certain types of services

that they want to provide, where even a conservative in the UK

would want to provide a certain level of service,

say, in health, that a conservative in the US

would consider, you know, disastrous.

So on specific topics, as you bring up the conversation, OK,

do people deserve some basic health care?

Should we allow a person to live when they're ill

and they can't pay that bill?

What should we do?

Should we kick them out of the hospital?

If the society is collectively rich enough,

should we allow for that person to live?

That's the basic question.

Similar things around whether we allow them to eat and so on.

And ultimately, the one lesson that I've

learned in my study of human history

is that all people are the same in the following way.

If you take 100 or 1,000 people from Canada,

a similar number from Australia, and a similar number

from Nigeria and so on, and you subject them

to roughly the same conditions, you will over time

see the same behaviors.

Now, this doesn't apply to one person, because one person is not statistically significant. And what if that person is Jesus Christ, who always does the right thing? Or Muhammad or Krishna or whoever. But the thing is that large collections of people will generally behave the same way.

So in that sense, if all people are equal,

what brings about the worst in people is when you stress them.

You're not going to get food.

There's this lion coming.

It's going to eat you up.

There's this other group of people coming. They're going to throw stones at you or, whatever it is, take over your land.

That is when people are their ugliest.

The human condition, by the way, is still that today, in every way. There is no change.

When you create that level of stress,

we revert to what we fundamentally are.

So the solution, in my mind, is to remove

the fundamental stressors that create the greatest amount

of instability in society.

That is something that we all want.

I have three boys in school.

I fear sending them to school because of everything that's

happening.

These things do have an impact. All of us have loved ones and families. There is an impact.

So to me, this is not about some irresponsible socialism, throwing money at people who aren't worth anything.

It's actually about creating stability in society.

And that does benefit me.

And there's a modicum of that stability

that one should consider funding.

Now, you don't have to provide them opulent mansions. But yes, food, and perhaps a stay at the hospital when they're dying, that might be a good start.

JAKA JAKSIC: Do you think people themselves--

I think right now, many of us derive

a lot of our meaning from our work,

regardless of what we do almost.

Do you think this change of attitude towards work

is something that's going to happen

within, say, a generation?

AMIR HUSAIN: I think that change in attitude

is already happening.

You know, I was just actually talking to a friend of mine

here in the Valley last night, and we were talking about just

over the last 18, 20 years, when he and I started

our careers versus the attitudes that we see towards work now

with the current generation, and if you talk to-- say,

if I talk to my parents, they would tell me

that, well, between you and us, there

was a change in the attitude towards work.

The thing is that at some stage, we said work is everything we are, to the point where I'm going to just name my child Ben Farmer, right? I'm just going to make his last name what he does: Farmer, Painter, Blacksmith, Goldsmith, et cetera. You know, these are names. So the job became the identity.

And the point is that the unique characteristic of a human being is not the fact that they are a painter or a goldsmith, that they can produce economic output. That's not a unique characteristic of a human being.

The unique characteristic of a human being

is that they can discover undiscovered knowledge

and perceive it.

There is no other such thing that we

know of that can discover undiscovered knowledge

and perceive it.

Now, here's some good news, which

is that the landscape of knowledge,

what I call the idea scape in "The Sentient Machine,"

is infinite.

The amount of ideas that are out there to be discovered

are infinite.

So there's this whole notion that, oh, AI will think a billion times faster than us, and then we will become useless because our minds can't think at that rate of speed, so therefore, what's the point?

That whole idea is kind of irrelevant,

because what matters in this journey

through an infinite landscape of ideas is not speed,

because given any speed, the percentage

of that territory that you will cover tends to 0%.

You could be walking at 2 miles an hour.

You can be flying at hyper speed.

You will uncover 0% of infinity.
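In rough terms, the arithmetic behind that claim is simply that any finite speed of exploration covers a vanishing fraction of an infinite space; a one-line sketch:

```latex
% Share of an idea-space of size N explored at finite speed v over time t:
% as N grows without bound, the covered fraction vanishes for any v and t.
\[
  \lim_{N \to \infty} \frac{v \, t}{N} = 0
  \qquad \text{for all finite } v, t .
\]
```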

But what does matter is perspective.

So if you think about these perceivers seated in the landscape of ideas: you have an AI over here that, with its very high rate of speed, can bloom like this and discover ideas like this, and you have a human mind over here that, at its rate of speed, can pretty much just stay where it is, maybe cover a small distance around it.

The point is not that this AI is running faster.

The point is that because of this difference in perspective,

this human mind still has value, because it

can be at a part of that intellectual landscape that

allows it to see something that at any rate of speed,

this AI would not get to.

It is a matter of perspective, not one of speed.

The value is perceiving ideas from a point of perspective.

And in that, all humanity has value, even in the age of AI.

JAKA JAKSIC: OK, one last question, and then we're going to get to the audience questions.

What would you recommend to Google as one of the leading AI

companies to do to help the world transition

towards this new age of AI?

AMIR HUSAIN: I think we are suffering

from a collapse in trust.

People don't know how AI systems work.

People don't know how AI systems are trained.

We are all told as consumers that, you know,

AI systems devour data.

This data is being captured from everywhere.

This data is being used for everything.

There isn't a level of transparency in the industry

broadly as to how AI systems are being built.

Every day, there's a news story about somebody listening

to something, and there was a human being involved,

and accidentally, there was a private message sent

from one person to another person,

and the net result of all of this

is that there has been a tremendous collapse of trust.

So I believe that Google, of all the players, relatively,

is still at a point where it can attempt to rebuild that trust.

The first step in rebuilding that trust is transparency.

The second step in rebuilding that trust is then education.

And by education, I mean education in how this works, education in how certain things are done.

And a tremendous advantage Google has is that its scale is such that it's not as if, were one or two things that Google does known to the world, Google would suddenly be under some existential threat. There are so many competitive barriers that Google has been able to build over time as a solid business that one or two things here or there won't make an existential difference.

But what will happen is that there

will be at least a move towards the restoration of trust.

The fact that AI technology is developing,

that goes without saying.

It's developing very quickly.

But I think it needs to develop under conditions of trust

and under conditions of explainability.

Those are two places where I think Google is uniquely

qualified to contribute.

AUDIENCE: So what was your inspiration when

you were creating this book?

What were you really trying to accomplish?

What is the one message that you'd

like to get out to the audience that you're

presenting this book for?

AMIR HUSAIN: The motivation was that I like reading philosophy. I picked up a book. I think it was "The Republic." And in reading the first few chapters of "The Republic," I realized that all these questions that Plato had been thinking about a long time ago, and that I had read at a younger age and was now reading again, these are the fundamental questions.

What of us?

Where did we come from?

What's our worth?

What's good and what's bad?

And in the age of artificial intelligence,

today I don't think we've answered those questions

in any greater depth or to any greater measure of satisfaction

than at any time in the past, right?

So we haven't really made great strides

in those core questions, where technology has come a long way.

And now we are faced with this century

of artificial intelligence, this age of artificial intelligence.

So I wanted to combine those two things.

What is the advent of AI?

What does that mean to humanity?

And can the advent of AI become a mirror

in which we find ourselves, where

we ask questions, like AI is going to be so much faster,

so does that make us irrelevant?

And then we start thinking, well,

why would it make us irrelevant, or why wouldn't it?

And then we arrive at some conclusions

that perhaps answer some deep questions that we've always

been wondering since the beginning of time.

So the motivation was to combine that sort

of philosophical approach with a practical view

of where we are now in time and what

AI will soon be doing for us.

AUDIENCE: At a practical level, are there things that Google or Googlers can do to build ethical AI within our products, outside of the broader landscape of establishing trust and explainability, at more of an individual level or within certain product areas, that you would suggest?

AMIR HUSAIN: Well, I think even at an individual level, for example, if you're a product manager, or a senior architect, or a UX designer, or you have some ability to influence the design of a product, even certain small clues, certain small textual explanations, certain little configuration options that essentially break down how data is being collected and what kinds of decision making are going on, those help. I'm not saying that you solve the explainability problem in terms of black-box neural network explainability. But even some cues, some information beyond what's present today.

Because if you look at a lot of these products today

that use a lot of data, there is no one place

where you can go and just get the spiel.

What are you doing with my stuff?

That's not hard to do.

I think at an individual level, yes, indeed,

UX designers, programmers, team leads can push for that,

and say, look, what's the problem with a single place where it just says, this is what I do, and here's where you can turn things off? Not the config page where you've got four things that you can turn off, but then you find out from three articles, three months later, that there were also four other things that weren't in the software, and now there's going to be an update that will contain two more of those buttons. You know, that just creates a condition of mistrust, where it seems to the consumer that, oh, they got caught out, and then they added half of what they needed to add. Just one place that says, here's what I do, period.

Even small things like that in the design, little cues,

that is explainability of a very practical type, you know?

It's not the holy grail of AI explainability,

but at least, it explains something

to the user beyond what they know.

AUDIENCE: So I don't fear being killed by a machine, but as a gay man, I fear being killed by a human being after being recognized by a machine as a gay man, in a country where homosexuality is punishable. Is there any way that you think it will be possible in the future to prevent things like that from happening?

AMIR HUSAIN: Let me be very honest.

Artificial intelligence is not a solution for human malevolence.

However, human malevolence is magnified

in conditions of stress.

Artificial intelligence is a way to reduce stress.

You can certainly diminish the chances.

You can certainly lessen the places

where such violence is wantonly carried out.

But I'm not here to tell you that artificial intelligence

can cure the human condition.

And I use those two words very carefully.

AUDIENCE: Hi.

On one side, public trust is important

what we are doing with people's data, and on the other side,

you also mentioned parity.

And so not all countries will be working

with the same constraints, and they can probably

take bigger strides towards it.

So which one should we choose?

AMIR HUSAIN: I don't think that there's an either/or there. There are consumer systems, and there are military systems, OK? The kind of explainability that I was addressing earlier is in consumer systems.

Parity is a concept around military technology.

And in military technology, generally, I

don't know that many people will complain

if you, for example, develop visual models of an open T-72 hatch or T-90 hatch.

I don't think that's going to be a privacy issue.

But take data and start shipping it across the network when it's not clear where it's going, and people still aren't sure whether they said the name of some perfume and then suddenly saw the ad connected with that perfume. Was it listening? Was it not? I'm talking about those sorts of things.

So I think these are two different domains.

And there are different standards involved.

Parity is in the military domain,

and I don't think you're going to have consumers complaining,

again, about visual databases of military assets.

So the mind hack, again, I feel that you can't just--

it's sort of like saying a criminal is going

to commit a bad act, like theft, and you should pass a law

to say that theft is bad.

Well, that law already exists.

So the issue is not so much the passage of the law.

The issue is do you have effective technology

to prevent that?

Do you have a police force?

Do you have faster cars than the thieves have?

And so on.

There, I think you're talking about AI shields.

The idea behind mind hacking is that you have computer-generated content that's being used to divert people in ways that are wrong in some form or fashion.

You need to detect and stop that.

Neither our law enforcement networks themselves nor any of these companies or organizations currently have that level of technology. It needs to be developed and invested in.

AUDIENCE: So we're talking about kind of auxiliary effects

on people, sort of auxiliary effects of AI.

But I'm wondering, what is your view of the day-to-day impact of AI on people's lives?

AMIR HUSAIN: So I mean, obviously, that question is tied to the day-to-day use of AI in people's lives. I have a 5-and-a-half-year-old, and at one level, I think the level of interaction that he has with a lot of these personal assistants is an emotional one.

I can see that. In fact, the other day, I was asking him, I said, what would you like the personal assistant to do?

He said, well, you know, I've already

thought of the next personal assistant,

and he came up with a name for it.

He said the difference between this

and the current personal assistant

is that it'll be able to come out of the box.

And when I say clean my room, it won't just give me an answer.

It'll actually come out of the box and clean my room.

So there's a lot going on there, right?

There's a lot going on there.

He's thinking about the fact that actually this-- there

is a physical presence, but it's just that the manufacturers

of the device have not created the right door yet

from which that physical presence, you know, comes out.

So now, that's quite something.

That's quite something, because he doesn't believe in magic.

He doesn't believe in deus ex machina.

He believes that men can create things

like that, possibly the first generation that thinks that.

And their relationship with this technology

is going to be very, very different.

I think the legislation that these guys, you know, the 5-and-a-half-year-olds, will get into 20 and 30 years down the road will be very different, and a lot of the resistance, I think, will fade.

And this is also the age old story of human history.

Sometimes, people just need to age and move on before the next set comes in and runs the world the way that they want to run it.

So in that sense, that's just one very small example.

I don't want to bore everyone with very many others.

But that's profound.

AUDIENCE: So you were talking about parity in the military area. You kind of touched on this when you said that there is this report that says if certain companies don't start using AI right now, they might be out of the game forever. But I was thinking that if we apply this to different countries, that's a different problem that you probably didn't touch on that much, and it is the fact that different countries are in different stages of economic evolution. Can you comment on that? What happens if just some of the countries, and this is what is happening right now, use AI while others just cannot?

AMIR HUSAIN: Yeah.

So this has always happened across human history

and military history.

And the solution to that is called alliances,

because essentially, like, for example,

there was nuclear technology.

The Soviets had it, and the Americans had it, right?

And the rest of the world was seeking nuclear umbrellas

from one or the other.

That desire to seek nuclear umbrellas resulted

in the Warsaw Pact on the one hand

and NATO, SEATO, CENTO, Baghdad Pact on the other.

So those are the types of constructs that deal with the reality that the underlying phenomenon of parity can only be maintained between certain players.

Those players then are the dominant players,

and everybody else must be part of an ecosystem, right?

So by the way, that works in technology also.

When a platform takes off and attains a level of scale beyond a certain point, it's hard to deal with it. As a small ISV, as a small game developer, whichever platform you want to pick, what option do you have?

You want to launch an independent game.

How many people are brave enough to just go off on their own

and create their own?

You've got to deal with the platform guys, right?

Now you can choose whether platform A or platform B.

But the chances of you creating a third platform while you're essentially a game developer are next to zero.

By the way, these are patterns that

apply to everything in life.

Here's the other thing.

This is a very meta sort of a thing.

The idea of dominance being owned by a small set within a network, and then nodes coalescing around one or the other, this, fundamentally, in physics, is called gravity.

JAKA JAKSIC: Thank you all for coming, and thank you, Amir.

This was a fascinating conversation.

I wish we had more time.

AMIR HUSAIN: Thank you very much, Jaka,

and thank you very much.

[APPLAUSE]
