Vint Cerf | Talks at Google

>>>

The Greyglers is a group of approximately 200 Googlers, a little less than that, most

of us of a certain age. And this is our first big public event.

So thanks very much for coming. Just wanted to give you a really fast overview

of our primary goals. They are to raise awareness of age diversity

at Google and in our user base, to help attract experienced talent to Google, and to help

make Google culture and Google products and services welcoming to people of all ages.

Anyone who can relate to our mission is welcome to join.

You don't have to have gray hair, or even gray roots, like some of us.

[ Laughter ] >>> And today we're running a one-time special.

There's a goto address on the screen if you'd like to join.

It's goto/join-Greyglers. The first 50 people who join will get one

of these way-cool tee shirts. So don't forget to do that.

So we are honored that Vint is a Greygler, and he's also our executive advisor.

Speaking of the main attraction, Vint will be taking questions via Dory at the end of

his talk. So if you have questions, please be sure to

go and submit them. The address is up on the screen.

It's goto/Vinttalk -- one word -- hyphen questions. >>> Hi.

As a Greygler, I can remember what things were like before the Internet existed.

Everybody had a lot of free time on their hands.

[ Laughter ] >>> And no one had any information at their

fingertips. But Vint Cerf has done a lot to make that

change. With Bob Kahn, he invented the TCP/IP protocols

and the basic architecture that allow computers all over the world to link together and communicate.

And he spent most of his career providing leadership to develop the technology into

the Internet that we all know and love today. Because of his contributions, he is frequently

called one of the fathers of the Internet. And he has received quite a few awards of

recognition from lofty organizations. He received the Turing Award from the ACM.

He received the National Medal of Technology from President Clinton.

And, in a show of bipartisanship, he received the Medal of Freedom from President George

Bush. He's also --

[ Laughter ] >>> He's also been designated one of the most

intriguing people of the year 1994 by "People Magazine."

[ Laughter ] >>> So his talents span a broad spectrum.

He's been an advisor of many boards and organizations. He's received many, many awards, at least

two of which include the word "legend" in their title.

And he serves Google as our chief Internet evangelist.

So today he's going to share with us some of his insights into how the whole Internet

evolved and how we could have done it better if we had only known these things in advance.

So with no further ado, here's Vint. [ Applause ]

>>Vint Cerf: You know, I always get nervous when people clap before you say anything.

It makes me want to just sit down, because it won't get any better than that.

[ Laughter ] >>Vint Cerf: I -- really, I'm stunned at this

overwhelming turnout. I do recognize, however, that theorem 208

might apply. It says, if you feed them, they will come.

I know there's food out there. So that's part of it.

The other thing I'm very conscious of. You probably have no idea how daunting it

is for me to confront a bunch of really smart Googlers.

I don't know how many of you know about the talking dog, some guy finds a dog that can

talk and he takes it around to various places. And everybody's all excited about the fact

that this dog can talk. It doesn't matter what the dog says.

It's just the fact that he can talk is amazing. I feel like the talking dinosaur.

I've been around since the Jurassic Period. I hope I have something of interest to say.

Two things. First of all, if you get bored, I won't be

offended if you leave. And probably some of your folks around here

will appreciate it if somebody went away because of the density.

And then second, when we get into Q&A, I really invite people to raise issues, to challenge

things I might have said. The fact that you're around for a long time

does not necessarily give you vision. In fact, in some respects, if you've been

around for a long time, you can't necessarily either predict the future or see what's coming.

There is -- I forget who said this, but he said, you know, if an old and distinguished

scientist tells you you can do that, the chances are pretty good that you can.

But if that same individual says you can't do that, there's a good chance he's wrong.

So I may be wrong about some of the things I'm going to say.

And I invite some interaction about that. Let me start out by just reminding you that

the thing that you're working with started out in a predecessor program called ARPANET

with four nodes, just four computers and four routers or, actually, packet switches.

I used to be at UCLA as a graduate student. And had the responsibility for writing the

software to connect a Sigma 7 computer to the first router that was installed at UCLA

in September of 1969. The Sigma is in a museum somewhere.

And some people think I should be there along with it.

[ Laughter. ] >>Vint Cerf: If you fast forward a few years,

you see an Internet that looks like this. This was generated automatically by looking

at the global routing tables and assigning a different color to each autonomous system

and showing something about the connectivity. Bill Cheswick, formerly of AT&T Labs,

generated this. And I'm sure if we looked at the 2010 version,

it would be very much like this, maybe bigger and more colorful, but still very complex.

One of the things that it's important, I think, for you to take away from this picture is

that it does represent a very large number of autonomous systems, each one of them operated

independently. This system is not centrally controlled.

It is something that is collaborative. It's something that people voluntarily interconnect

in order to gain benefit. And so even if we argue about whether Metcalfe's

Law should be N squared or N log N, the reason this works -- yes.

>>> (inaudible) this is based on -- >>Vint Cerf: No.

The colors, the colors represent different autonomous systems.

>>> The colors maybe. >>Vint Cerf: And, of course, the connectivity

is clearly I.P. But that's an A.S. -- each color is a different

A.S. Okay.

Are we square on that? See, it's okay to interact.

[ Laughter ] >>Vint Cerf: Somebody throw the -- all right.

[ Laughter ] >>Vint Cerf: All right.

So that's -- the point here is that it is voluntarily a collaboration of global proportions.

This doesn't happen unless a lot of people want it to happen.

So an important other takeaway is even though Bob Kahn and I did do the original design

work, this wouldn't have happened if, literally, millions of people didn't decide they wanted

it to happen. Here's just kind of a picture of how quickly

the number of machines on the network has grown over time.

It started out in -- with the four nodes that I mentioned on the ARPANET.

When the network was first -- Internet was first released, it was January of 1983, there

were 400 computers on the network in January of '83.

And it was a hard problem to get everybody to switch over from the older NCP protocols

to the TCP/IP protocols. And I don't want to waste your time telling

you how we did that. But it was coercive.

It was not cooperative. It was one of those I'm turning your funding

off if you don't switch over to TCP/IP. Fortunately, I was at DARPA at the time and

could actually say that. Today you can't do that.

That's harder. The number of machines on the net now that

are publicly visible is approaching 750 million. The number of machines on the Net, in fact,

is much larger than that. We're an example, because there are lots of

machines behind firewalls that people can't see through the public DNS.

The number of users now is estimated by Internet World Stats at 1.8 billion.

Again, that's an estimate. The other thing which has been going on that

we are all very much participant in is the mobiles that have shown up in the telecom

environment. Extremely rapid innovation and penetration.

A lot of those mobiles are Internet-enabled. And, of course, we care a lot about that because

we've offered Android as an operating system, and we are wanting very much to offer and

let others offer applications that take advantage of mobile access to the network.

If we look at where the Internet is right now, or where the users are, it's actually

a fairly stunning revelation that North America is only third on the list now.

It used to be at the top, because this is where the first penetration occurred.

We're still the highest percentage penetrated. But as you can see, Asia represents the largest

single grouping. And of the 764 million people on the Net in

Asia, about half of them are in mainland China. So the largest grouping of users on the Internet

today is in China. Now, we all recognize that there have been

interactions between Google and China and there's policy questions.

But it is an inescapable fact that the largest population of users on the Net today is in

China. And that will continue to grow, because their

percentage penetration, Asian penetration, is only 20%.

So you understand when it gets to 75 or 76%, the number will be dramatically bigger.

We care a lot about these kinds of statistics because it tells us where our products and

services have to go, where they have to serve, what kinds of users, cultures, languages,

scripts, and everything else we have to be prepared to deal with.

These are just another way of displaying the same kind of information.

So I'll kind of skip through all of that. You'll notice at the bottom here that the

average penetration around the world is slightly over 26%.

As the chief Internet evangelist at Google, I have the feeling I have three-quarters of

the world to convert still. There's a lot of work to be done to get Internet

in place. Mobiles are actually helping a lot in some

respects, because their economics is so attractive. When Bob Kahn started this -- I want to emphasize,

it was Bob Kahn that started the project. He was at the Defense Advanced Research Projects

Agency. I was still at Stanford.

And he had some simple ideas about what he wanted to accomplish in the interconnection

of networks. Each network would have to stand on its own.

There wouldn't be any changes permitted to the networks that were part of the original

Internet in order to make them part of the Internet.

Which led to the need for gateways in between. Because the networks had no way of referring

to any other nets. Their addressing structure thought they were

the only network in the world. Later on, we'll see a similar situation coming

up with clouds. This is all best efforts communication.

And in a sense, that was really important. We made no guarantees at all.

If you could get a packet from point A to point B with some probability greater than

zero, that's all we were asking of the underlying system.

There would be these black boxes, which we called gateways at the time, because we didn't

know they were supposed to be called routers, and we assumed they would be able to send

objects, packets, through each of the networks that interconnected and that the Internet

packets would be encapsulated inside those packets of each individual network.

That turned out to be a very important tactic, because it allowed arbitrarily large numbers

of different kinds of packet networks to be part of the system, because we were simply

encapsulating and decapsulating the Internet packets that went end to end.

And, finally, there's no global control at the operational level.

That was important from the military point of view, because we didn't want a central

place that could be attacked and the network disabled.

So global addressing was needed, because the networks didn't have a way of referring

to other nets. Gateways were forwarding packets.

There were algorithms to recover from lost packets.

We didn't assume everything was reliable. And some of the initial networks that were

involved were actually quite unreliable. Packet radio was a mobile system.

Ethernet was not necessarily a reliable transmission scheme.

We provided pipelining so that there could be a stream of packets flowing from end to

end, which is where those of you doing TCP stuff know about windows, about how many packets

can be outstanding before you have to stop and wait.
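
To make that windowing idea concrete, here is a rough sketch in Python of a sliding-window sender -- not the actual TCP algorithm, just an illustration of keeping a fixed number of packets outstanding before stopping to wait for acknowledgments. All names are made up for illustration.

    # Illustrative sliding-window sender: at most WINDOW packets may be
    # unacknowledged ("in flight") at any moment. A real TCP adds sequence
    # numbers, timers, retransmission, and congestion control on top of this.

    WINDOW = 4  # how many packets may be outstanding before we stop and wait

    def send_with_window(packets, transmit, wait_for_ack):
        """transmit(pkt) puts a packet on the wire; wait_for_ack() returns
        the index of the next acknowledged packet."""
        base = 0          # oldest unacknowledged packet
        next_seq = 0      # next packet to transmit
        while base < len(packets):
            # Fill the window: keep sending until WINDOW packets are in flight.
            while next_seq < len(packets) and next_seq - base < WINDOW:
                transmit(packets[next_seq])
                next_seq += 1
            # Window full (or nothing left to send): stop and wait for an ack,
            # which slides the window forward.
            base = max(base, wait_for_ack() + 1)

    if __name__ == "__main__":
        from collections import deque
        in_flight = deque()

        def transmit(pkt):
            print("send", pkt)
            in_flight.append(int(pkt.split("-")[1]))

        def wait_for_ack():
            return in_flight.popleft()   # pretend the oldest packet in flight is acknowledged

        send_with_window([f"pkt-{i}" for i in range(10)], transmit, wait_for_ack)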

There were some end-to-end checksums. We assumed things would be reassembled.

We knew we needed host flow control, although we'll see that we didn't do as good a job

then as we could probably do now. And then interfacing to a variety of operating

systems and other things. There were some other secondary concerns like

efficiency, performance, and, by the way, security, although, in fact, we didn't focus

very heavily at all on security, and we might look back on that and regret it.

But I have to say that if we had tried to focus heavily on security in these early days,

we might never have even gotten anything built that we could test.

We honestly did not know whether this system would actually work.

One of the nice things about the Net is that we didn't make any assumptions about applications.

And, you know, the old expression, if you don't know where you're going, any road will

take you there, if you don't know what application you're running, any packet will take you there.

And in a very funny way, the Internet has survived and thrived because of the fact that

it wasn't designed to do any particular thing. A lot of the historical networks, like the

telephone, telegraph net, the television broadcast networks, the cable networks, were purpose-designed.

And that purpose design limited what they could do.

The Internet didn't have that limitation. And as the capacity of the Net has grown,

as the economics of electronics has dropped, it's been possible to make the packet-switched

net do a lot more than it did when we were working, you know, with 50 kilobit backbones.

I mean, the words "50 kilobit" and "backbone" in the same sentence sound like, excuse me?

That's a dialup modem today. The other thing that's interesting is, the

Internet layer, the Internet packets, have no idea what they're carrying.

They don't know how they're being carried. They don't care if it goes over a satellite

link, an optical fiber, or submarine cable. Only the edge devices that can interpret the

contents of the packets understand what it means.

There is no national indicator in the I.P. address space.

That was a deliberate decision on my part. Remember, I'm doing this at the Defense Department

or for the Defense Department. I wanted not to have any binding of the networks

to any particular geographical location. Now, we all understand that the domain name

system has a very specific potential binding, because we have, you know, country codes,

like .DE, .FR. And a lot of us have worked hard to make it

possible to associate locale with I.P. addresses because we're interested in geographic --

applications that have some knowledge of geographic location.

But when the design was done, I didn't want to have any such binding, because I didn't

want any national authority to interfere with the military's ability to deploy a network

wherever it needed to go. I didn't want some third party to say, "I'm

sorry, you can't have address space because we think you're an aggressor or an enemy."

So I didn't want that to be a problem. And, finally, our performance targets, again,

were best efforts. Now, there are some changes going on in the

Net. And we're involved in some of them very intimately.

IPv6 is one. And I'll show you a chart that motivates why

we need to have a different I.P. address space than the IPv4.

Some of the people in this room -- I saw Lorenzo here.

I don't know if Eric is here. But you have some -- a group of people here

at Google that have single-handedly pushed us to implement IPv6.

The reason that's so important is that the IPv4 address space only has 32 bits.

It's 4.3 billion terminations. I made the decision in 1977 that the address

space could be 32 bits and it would be enough. I thought we were doing an experiment.

[ Laughter ] >>Vint Cerf: I thought, you know, that we

didn't know if this thing was going to work, and surely 4.3 billion terminations is enough

for an experiment. I mean, even the Defense Department probably

couldn't use that many termination points. So I thought what would happen is if we demonstrated

this capability, then we would in fact do a production engineering job of it.

Well, the experiment never ended. Here it is 2010, we're running out of address

space, we need larger address space. We actually argued 32 bits, 128 bits, and

variable length. The variable-length guys lost, because the

programmers said it's going to chew up a bunch of computer cycles to find the fields in

the packets, and so they didn't want to do that, so that was out the door.

And finally, because nobody could make up their minds and I'm sitting there in the Defense

Department trying to get this program to move ahead, we haven't built anything, I said,

it's 32 bits. That's it.

Let's go do something. Here we are.

My fault. [ Laughter ]
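
Just to put numbers on that choice, here is the back-of-the-envelope arithmetic -- a quick illustration, not part of the talk:

    # 32-bit IPv4 versus 128-bit IPv6 address space: the arithmetic behind
    # "4.3 billion terminations."
    ipv4 = 2 ** 32
    ipv6 = 2 ** 128
    print(f"IPv4: {ipv4:,} addresses")        # 4,294,967,296 -- about 4.3 billion
    print(f"IPv6: {ipv6:.3e} addresses")      # roughly 3.4 x 10^38
    print(f"IPv6 / IPv4: {ipv6 // ipv4:,}")   # a factor of 2^96 more terminations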

>>Vint Cerf: On the other hand, you know, if I had stood up and said we need 128 bits

of address space because this is going to be the world's backbone in 25 or 30 years,

they'd look at me and say, "Boy, are you out of --"

Internationalized domain names.

Remember I showed you all the distribution of users in the Internet?

There are more users who don't use Latin character sets than there are people who use Latin character

sets. So internationalized domain names are a way

of writing domain names in scripts other than Latin characters.

So Cyrillic and Chinese and Hebrew and Arabic and so on are all scripts that are now available

for IDNs. And they're being introduced now by the Internet

Corporation for Assigned Names and Numbers. The domain name system, as I'm sure you are

all very well aware, has got a bunch of vulnerabilities in it.

One aspect of that is that it's possible to poison the cache of a DNS resolver.

It's possible to, in fact, invade the name servers.

And so getting digital signatures that you can use to confirm the binding of a domain

name and address is actually an important and useful step.

It doesn't solve all problems. But it's an important one.

That's starting to happen now. The root zone will be, in fact, signed sometime

during the summer this year. And then, finally, the routing system has

the awkward problem that it's possible to lie about what you're connected to.

And if you want to avoid having -- introducing lies into the routing tables, one way to do

that is to have tables that show which entities, which autonomous systems are allowed to announce

what connectivity they have. If it turns out that you received a routing

update which isn't confirmed by a digitally signed entry in a table that should be publicly

available, you might discard that particular update and maybe even raise a red flag and

say somebody is trying to spoof the system into directing traffic where it doesn't belong.
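
A rough sketch of that idea -- a table of which autonomous systems are allowed to originate which prefixes, and a check against it -- might look like this in Python. This is only an illustration of the concept; the real mechanisms involve digitally signed records and certificates, and the table contents here are invented.

    import ipaddress

    # Hypothetical table of authorized originations: prefix -> set of AS numbers
    # allowed to announce it. In practice this would come from digitally signed,
    # publicly verifiable records, not a hard-coded dict.
    AUTHORIZED = {
        ipaddress.ip_network("192.0.2.0/24"): {64500},
        ipaddress.ip_network("198.51.100.0/24"): {64501, 64502},
    }

    def validate_announcement(prefix, origin_as):
        """Return 'valid', 'invalid', or 'unknown' for a received routing update."""
        net = ipaddress.ip_network(prefix)
        for authorized_net, allowed in AUTHORIZED.items():
            # Does the announced prefix fall within an entry we have a record for?
            if net.subnet_of(authorized_net):
                return "valid" if origin_as in allowed else "invalid"
        return "unknown"   # no record either way; local policy decides what to do

    if __name__ == "__main__":
        print(validate_announcement("192.0.2.0/24", 64500))    # valid
        print(validate_announcement("192.0.2.0/24", 64999))    # invalid -- discard, raise a flag
        print(validate_announcement("203.0.113.0/24", 64500))  # unknown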

There are sensor networks that are coming up and being connected to the Net.

I'll give you an example. The smart grid is an activity that we are

very much involved with because we have a power meter program.

I'm personally involved in this in a variety of ways because the smart grid interoperability

panel has a governing board and I'm on that, and the whole point here is to try to find

ways of letting electricity-consuming devices be more cooperative in the event you're reaching

a peak load. One of the ways is to go through the Internet.

So they will be, increasingly, on the network. And, finally, mobiles, as you all know.

We have a bunch of problems -- I'm not going to catalogue all the problems.

I'm not going to talk you through all of them. Because you know most of them.

The thing I want to emphasize here is that as soon as you turn a system like this over

to the general public, you have to deal with the general public.

You have to deal with the motivations of the general public.

And not everyone in the general public necessarily has everyone else's best interests in mind.

And so one of the facts of life is that the network began with a fairly homogeneous, uniform

collection of academic types, research types, engineer types, who more or less shared a

common view of what it was they were trying to accomplish, and they more or less trusted

each other to do the right thing. We are now faced with the consequences of

commercialization. And spread of the Net to the general public.

Now, I was a big proponent in 1988 of commercializing the Internet.

And I know my colleagues, many of them, anyway, thought I was crazy.

Why would you let the general public, you know, this great unwashed, into our wonderful,

you know, sandbox? And the honest answer to that is that I didn't

think it would ever be available to the general public if we didn't build a commercial engine

underneath it to support it. And so I was very much in favor of trying

to find commercial business models that would allow the Internet to grow, because

the general public and industry wanted to be on it.

So, indeed, that's what happened. I'm not trying to claim any credit.

Just for having been much -- an advocate of that particular thing happening.

So the point I also want to make here is that in spite of all the vulnerabilities and everything

else -- and these are not just technical vulnerabilities -- you know, bad passwords, social engineering,

and things of that sort -- some of the worst things that happen to the Internet are mistakes

that we make. You know, half the Net falls off the Net,

or YouTube disappears because somebody in Pakistan creates a black hole that everyone

routes to, not just the routers that are in Pakistan.

It's amazing how we can screw things up without even trying.

And so one of the big problems with design of systems like this is how to make them more

resilient and more conscious of errors in configuration.

This is probably one of the hardest problems that I can think of.

It's figuring out that a configuration is wrong.

It's easy if some part of that configuration has a parameter value which is outside of

a valid range. That's the easy case.

The hard case is some ensemble of configuration parameters that turn out to add up to bad

things. Figuring out that that's the case, at least

in my experience, is very, very difficult. So that's a problem worthy of serious thought

and consideration. There are other serious security weaknesses

in the system. We have operating systems that are much too

willing to be invaded, in effect, to allow their resources to be attacked or coerced.

Browsers which are very naive about software that they download or that -- the Web pages

they download and then interpret. That's why we released Chrome, because we

think we did a better job of building sandboxes for the interpretation of downloaded Web pages.

But we are still far from having what I consider to be a completely secure environment.

We never will. But we can probably improve beyond where we

are now. We don't have very good access control practices.

I talked about improper configurations. A lot of the botnets that cause trouble, that

generate spam, do denial of service attacks and everything else are a direct consequence

of laptops and PCs and increasingly now mobiles that have been coerced and invaded and become

tools of the botnet herders. So even trying to detect and go after botnets

is -- is going after the symptom. The real issue is that we allow these resources

to be coerced by people who don't own them and have no business using the cycles.

And, of course, there are all kinds of parties, including organized crime and state-sponsored

agencies, that want to exploit these vulnerabilities in the network for their own advantage.

There are lots of privacy problems. And, again, they cover the gamut.

And I want to get -- because it's already 20 minutes into this thing, I want to get

beyond the list of problems and get to the things that we really need to do.

I do want to mention cloud collaboration. That is to say, interactions between clouds.

This is important to us because we are committed to data liberation.

We're committed to letting people get data out of the cloud.

The question is, how do they get it out? First of all, where do they put it?

And second, if they want to move it to another cloud, how do they do that?

And without going into a terribly long diatribe here, I believe that cloud interaction in

2010 is about where we were with networking in 1973.

We had networks. They were proprietary.

You could make all the IBM machines interact with each other, the Digital machines interact

with each other using DECnet. But we did not have much interaction between

networks of different manufacturers. Internet was intended as a nonproprietary

solution to that problem. Today we have proprietary clouds.

In fact, most of the clouds have no idea that other clouds exist.

The best they can see is another cloud possibly because it looks like a Web server somewhere.

But clouds have properties. And the different clouds that are built have

different properties. Some of you, if you go back into history for

a moment, will remember a protocol called TELNET.

TELNET had a very interesting property. It had a thing called a network virtual terminal.

This was a thing that never existed. The problem was that the computers at the

time had a whole variety of different terminals, and they all had different encoding schemes

and different conventions for what new line is and so on.

So the guys doing the ARPANET said why don't we invent a thing which doesn't really exist

but we'll call it a network virtual terminal. It has a set of properties.

We will map our terminals on a given host into that network virtual terminal behavior.

And what we would like to do is to say, for everyone who's out on the Net, as long as

you can handle the network virtual terminal, then you can interact with any terminal on

-- any host in the ARPANET. Maybe we need a network virtual cloud.

We need something that all clouds can pretend to be.

It won't exercise every functional capability of every cloud, but it might exercise sufficient

commonality that we can move data back and forth.
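
By analogy with the network virtual terminal, a "network virtual cloud" might be nothing more than a minimal common interface that every cloud agrees to present, with each provider mapping its own richer API onto it. A toy sketch of that idea in Python, with entirely made-up classes and no relation to any real provider's API, might look like this:

    from abc import ABC, abstractmethod

    class NetworkVirtualCloud(ABC):
        """A deliberately minimal interface every cloud could pretend to be:
        just enough commonality to move data (and its metadata) back and forth."""

        @abstractmethod
        def put(self, name: str, data: bytes, metadata: dict) -> None: ...

        @abstractmethod
        def get(self, name: str) -> tuple[bytes, dict]: ...

        @abstractmethod
        def list(self) -> list[str]: ...

    class ToyCloudA(NetworkVirtualCloud):
        """Stands in for one provider's storage; here just a dict."""
        def __init__(self):
            self._objects = {}
        def put(self, name, data, metadata):
            self._objects[name] = (data, metadata)
        def get(self, name):
            return self._objects[name]
        def list(self):
            return sorted(self._objects)

    class ToyCloudB(ToyCloudA):
        """A second, 'different' provider mapped onto the same virtual interface."""
        pass

    def migrate(src: NetworkVirtualCloud, dst: NetworkVirtualCloud) -> None:
        # Because both sides speak the virtual-cloud interface, data -- and the
        # metadata that travels with it, such as access-control labels -- can move.
        for name in src.list():
            data, metadata = src.get(name)
            dst.put(name, data, metadata)

    if __name__ == "__main__":
        a, b = ToyCloudA(), ToyCloudB()
        a.put("report.txt", b"quarterly numbers", {"acl": "owner-only"})
        migrate(a, b)
        print(b.list(), b.get("report.txt")[1])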

Maybe we're capable of actually getting processes to run in two clouds that are interacting

with each other concurrently. What about data that's in a cloud and it's

access-controlled and we want to move it to another cloud?

What metadata do we have to move to the other cloud and how will it be interpreted and will

that access control continue to be observed? These are all questions that don't yet have

answers or maybe, in some cases, too many different answers, because we have so many

different options, you know, SOAP and UDDI and so on from the Web world,

and a variety of other options. So we don't have any agreed standards on letting

clouds interact with each other. So I think there's a rich area for research

and development there. So here are the kinds of problems that we

did not solve and have not solved adequately in the Net.

One is security at all levels in the architecture. There's no one place in the Net where you

can guarantee all the security you need. We don't actually have a very good understanding

of the statistical nature of the network. In fact, it's very hard, I think, to characterize

the Internet or even pieces of it in a statistical way, because unlike the telephone system,

the dynamic range of demand on the Net is extremely broad.

You could be clicking a mouse for a little while, sending almost no data at all, and

the next thing you know, it's a 150-megabit file transfer.

Unlike the telephone system, where no matter how much you talk, you can only consume 64

kilobits per second on average, not counting compression.

So Erlang formulas for the Net that will help you design pieces are hard to come by.
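
For contrast, here is the classic Erlang B formula that telephone engineers use to size a trunk group -- the kind of closed-form traffic model that works when every call consumes a fixed 64 kilobits per second, and that has no good analogue for the Internet's wildly variable demand. A small illustration, not from the talk:

    def erlang_b(offered_load_erlangs: float, circuits: int) -> float:
        """Blocking probability for a trunk group:
        B(E, m) = (E^m / m!) / sum_{k=0..m} E^k / k!,
        computed with the standard recursion to avoid huge factorials."""
        b = 1.0
        for m in range(1, circuits + 1):
            b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
        return b

    if __name__ == "__main__":
        # 100 erlangs of voice traffic offered to 110 circuits:
        print(f"blocking probability: {erlang_b(100.0, 110):.4f}")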

There are lots of debates about whether there should be quality of service mechanisms in

the network. Right now all we have is best efforts.

And there are valid arguments that sometimes you want more than best efforts.

The problem is, you can't rely on everybody wanting to or being capable of offering more

than that. So the question is, how do you build that

in so that multiple networks can somehow participate in anything other than best effort?

I mentioned Internationalized Domain Names. Distributed algorithms.

I don't think that we have done a very, very good job in the R & D community of developing

distributed algorithms. There are some.

The routing algorithms are an example. Their stability is sometimes in question.

We do a lot of distributed computation, but it's more like parallel stuff; right?

When we do a search, it's, you know, one engine is running the search algorithm.

We don't have a lot of stuff going on in parallel except for the fact that we can do our searches

by splitting the data up into a large number of different places and letting everybody

run in parallel. There aren't very many examples of distributed

algorithms that work well. Some of ours are wonderful.

Presence, which showed up originally in chat kinds of things, could be quite rich in its

character. It's not just whether you're online or not

online, but where are you online or what are you doing online.

These are all kinds of variations in Presence that we can exploit.

We've done a rotten job of dealing with mobility. If there's anything that I think we really

messed up, it's that. The problem that happened is that the example

of mobile in the original Internet design was the packet radio network.

This was a mobile network that was operating here in the San Francisco Bay Area.

We had some radio store and forward repeaters up in the mountaintops and we had mobile devices

down on the Bayshore freeway or driving around Palo Alto and Mountain View and so on.

The network itself took care of dealing with the mobility of the devices below the Internet

Protocol layer. It's not too different from what happens with

mobiles today, because what happens in a mobile today is that the binding of the telephone

number happens below the level of the phone number.

The consequence of that is that I thought, wrongly, that we had dealt with the mobile

problem. I didn't take into account that a device,

a host on the network, might actually move physically from one place to another.

Because at the time, they were big computer centers that didn't get up and move around.

Well, now they're in your pocket or maybe they're even a chip that you've got inserted

in your arm or it's in the dog. So the problem here is that I failed to recognize

that we might actually have nodes whose I.P. addresses had to change depending on where

they were accessing the network. If we had -- I'll give you an example of possible

kinds of solutions. Multihoming, we did a terrible job of multihoming.

Because every port on the computer had a different I.P. address.

It wasn't as if we had an address of the device and then, you know, spokes pointing out.

Every one of them had a different I.P. address. That means that if you are getting service

from two different ISPs, it's not exactly clear that you can set up a TCP/IP connection

with packets flowing along both of those in order to assemble them together as one stream.

Broadcast is another rant. We did a terrible job of dealing with broadcast

media. In fact, we turned broadcast media into point-to-point

links. Think about 802.11, for example.

So I would love to see some serious effort put into, for example, a satellite system

that could rain I.P. packets down on 100 million receivers at the same time.

For information that everybody wants, that has a large number of interested recipients,

broadcast is really good stuff. And we don't do that.

We're busy, you know, with these point-to-point TCP connections.

We do that with YouTube, we do that with a lot of other applications.

And I imagine if we were trying to pump out a large number of software updates, it would

be really nice if everybody would say, okay, at 8:00, we're all going to listen for the

next update of the -- of that software, and then I only have to transmit it once.

Okay. Some people would have to say, hey, I missed

a couple of packets. Please send them to me on a unicast basis.

But the bulk of the delivery could be done very, very efficiently.
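
A bare-bones sketch of that pattern in Python -- join a multicast group for the bulk delivery, note which pieces were missed, and ask for just those pieces over unicast -- might look like the following. The group address, repair-server address, and message format are made up; real reliable-multicast protocols are far more careful about timing and about scaling the repair requests.

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5000      # illustrative multicast group and port

    def receive_update(total_chunks: int) -> dict:
        """Listen for the bulk multicast delivery; return {chunk_index: payload}."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Join the multicast group on all interfaces.
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        sock.settimeout(5.0)

        chunks = {}
        try:
            while len(chunks) < total_chunks:
                data, _ = sock.recvfrom(2048)
                index = struct.unpack("!I", data[:4])[0]   # first 4 bytes: chunk number
                chunks[index] = data[4:]
        except socket.timeout:
            pass   # sender finished (or we lost the tail); fall through to repair
        return chunks

    def request_repairs(chunks: dict, total_chunks: int, server=("198.51.100.10", 6000)):
        """Ask a hypothetical repair server, over unicast, for only the missed chunks."""
        missing = [i for i in range(total_chunks) if i not in chunks]
        print("missed", missing, "-- requesting unicast repair from", server)
        return missing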

So I want to keep going here. Another huge issue is identity, authentication,

and authorization. Those are all three different things.

We don't do a very good job of authenticating the components of the network.

We don't know whether a router over there is really one that we should be talking to

or not. There isn't any built-in, inherent ability

in the Net to authenticate various sources and sinks.

And if I were doing this over again, I would have wanted to put in tools for strong authentication.

This doesn't mean that I believe that anonymity is a bad thing or that it should be banned

or anything like that. I think anonymity is important.

But there are times when you really need to know what it is that you're interacting with

at various layers in the protocol. So if we're having a simple little chat, I

may not care exactly who you are. But if you say, "I want to borrow $50,000,"

I care a lot more about who you are and whether you can pay me back and how do I find you

again. So authenticity and identification turn out

to be really important. And I would submit to you that for Google's

own range of applications, that investing in good-quality identity and authentication

is really important, because that's the only way we will convince people that it's okay

to use the cloud, that we have a strong way of identifying the users, and that we have

a good way of preventing people from getting access to information that they shouldn't

have access to. So we need these kinds of technologies to

convince our users that it's safe to use the cloud and that we know how to protect their

information from others. Well, okay.

So multicore processors. This is another little headache.

Moore's Law is broken. The clock speed is not going up at the rate

that it used to. Now our problem is how do I use a larger number

of cores running at a fixed clock speed. I think a lot of algorithms don't know how

to take advantage of that. So this is another rich place for us to evolve

our applications. I'll come back to delay and disruption tolerance.

Let me just say that in the mobile world, one of the big problems is that you don't

have guaranteed solid connectivity or reliable connectivity.

Frequently, it's interrupted. Frequently, there's packet loss.

And most of the algorithms in the Internet are not as good as they could be and should

be in order to overcome those problems. Governance is a nontechnical problem space.

Now, if you and I think that technical decisions are hard, imagine what it takes to make political

decisions associated with who's in charge of what.

These don't necessarily have quantitative metrics.

And so you wind up having to argue, coerce, cajole, you know, demand and all these other

things in order to come to some kind of agreements. Sometimes they're international in character.

Sometimes they're national in scope. Sometimes they might be corporate.

But governance questions are policy-related questions, and they're very, very hard to

deal with. There's a bunch of them here.

I won't go into any further detail there. Finally, mobile operation again.

What I would love to see, frankly, is some ability to let a device move around from one

termination point to another in I.P. terms and still maintain its connectivity to the

higher-level protocols. There are some applications or protocols for

that purpose that are being looked at in the Internet Engineering Task Force.

This was a mistake that I made, a really bad one, that I didn't recognize at the time.

When we split I.P. from TCP, I thought it was very clever to use the I.P. address as

part of the end-point connection identifier. So TCP was bound very, very strongly to IP.

And I was patting myself on the back, thinking, "Look, I saved all these bits.

I didn't have to have another set of identifiers at the TCP or UDP layer."

And I thought, boy, I was really smart, it was very efficient, blah, blah, blah.

Turns out to have been a bad mistake. It screwed up multihoming, it screwed up mobility

where I.P. addresses were changing. Now, this leads to a different kind of a problem.

Think about it for a moment, that you're moving around and getting different I.P. addresses.

And so at some point, you have an end-point identifier distinct from the I.P. address,

and you've established, let's say, a TCP connection using those end-point identifiers.

Now you move to a different I.P. location, and you say to the other guy, hi, it's me

again. I'm just at a different I.P. address.

Well, that looks like a hijack, doesn't it? So the real question is, how do you reauthenticate

so the guy on the other end says, well, I know you said it's you, but, you know, can

you take this encrypted challenge and send it back to me properly encrypted in my public

key, blah, blah, blah. You need to have mechanisms that reestablish

authentic trust between the two parties if you're going to allow this kind of dynamics

to happen. We don't do a very good job of routing in

general. But more importantly, in the mobile world,

if topologies are changing, getting a routing algorithm that works, when the topology keeps

changing -- it's not just a question of it's a fixed topology and links are up and down;

this is a topology which is varying -- that gets to be a really hard problem.

And it hasn't been well solved in my view. We don't even have the idea of persistent

connections. I can open up a connection and things are

running, and I lose connectivity. And when I come back again, I would like to

reestablish the session that I had. We don't have a session layer in the formal

sense in the Internet architecture. And that, too, might be a useful development.
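
One way to make that "hi, it's me again from a new I.P. address" safe is a challenge-response against a key that was established when the session was first set up. Here is a small, purely illustrative Python sketch using a shared session key and an HMAC; a real design would use public keys, nonces with expiry, and protection of the stored session state itself.

    import hmac, hashlib, os, secrets

    class Session:
        """Identify a session by an endpoint identifier, not by the I.P. address,
        so the peer can move and then prove it is still the same endpoint."""
        def __init__(self, endpoint_id: str):
            self.endpoint_id = endpoint_id
            self.key = secrets.token_bytes(32)   # agreed at session establishment
            self.peer_addr = None

        def challenge(self) -> bytes:
            self._nonce = os.urandom(16)
            return self._nonce

        def verify(self, response: bytes) -> bool:
            expected = hmac.new(self.key, self._nonce, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

    def peer_response(key: bytes, nonce: bytes) -> bytes:
        """What the mobile peer computes when it reappears at a new address."""
        return hmac.new(key, nonce, hashlib.sha256).digest()

    if __name__ == "__main__":
        s = Session("endpoint-42")
        s.peer_addr = "192.0.2.7"                   # original address
        # ... the peer moves, reconnects from 203.0.113.9, and claims to be endpoint-42 ...
        nonce = s.challenge()
        if s.verify(peer_response(s.key, nonce)):   # peer proves it holds the session key
            s.peer_addr = "203.0.113.9"             # accept the new binding, resume the session
            print("re-authenticated; session resumed at", s.peer_addr)
        else:
            print("looks like a hijack; refuse the new address")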

And then, finally, in the mobile world, there is a great deal of interest in being able

to self-organize. You throw a bunch of nodes down and they're

wandering around and they discover each other. This is all wonderful until you start worrying

about the possibility that one of them isn't supposed to be there because it's the bad

guy. And if you have a self-organizing network,

the bad guy might look like a good guy because you didn't know the difference.

So he's part of your environment and all of a sudden you have a problem.

So figuring out who are the good guys and the bad guys in a self-organizing network

is -- that's a particularly interesting problem. In the military world, this is called the

overrun problem. I have a whole bunch of nodes, they were all

okay. Then the bad guys overran the good guys and

the bad guys got ahold of their devices and said, hi, we're still good guys.

And, of course, they're not, but they're taking advantage of your assets to defeat you.

So figuring out how to make that all work in a way that is reliable and trustworthy

is tricky. Performance.

Well, the original Internet design was best efforts.

And here I'm going to invoke Greg Chesson. He is back there.

Wave your hand. You don't have any hair, either.

You don't have your tee shirt on, though. So Greg --

>>> That's the -- Greglers. >>Vint Cerf: You get two points for that.

Very good. So Greg is, like a lot of our senior management,

intensely concerned with performance, particularly inside the data centers.

And so what I want to observe to you is that for a long time, performance on the Internet

was not as -- was not viewed as critical as just getting it to work at all.

And so now we're at the point where this thing has been around for a long time and every

one of us and our customers and users are relying on this thing working reliably and

quickly. One of the things I want to emphasize to you

is that I am not an I.P. bigot. I'm not a TCP bigot.

Just because I happen to have been involved in designing and building that stuff does

not mean that it is the place to stop. It means it might be the end of the beginning.

But the important thing is here that none of you should be afraid to say, maybe we should

do something different. That's not irreligious.

Or heretical. I know I'm the Internet evangelist.

But, you know, I feel like the Pope and the heretic.

Right? The real answer here is that you should not

be afraid to think differently, to branch out into new areas, to think about things

that may not even be compatible. When people think about what might come next,

they think, oh, how in the hell are we ever going to deploy this in the existing Internet?

And I would remind you that before the Internet, there was this big network called the telephone

network. And you could have said, how in the heck would

you ever deploy the Internet? We already have a global network.

It's called the telephone net. You'd never be able to do this.

Well, it did take a few decades. But it happened.

And there's no reason why it couldn't happen again with good ideas that work better.

So I want to somehow, if I can, empower you to not be afraid to think differently.

And Greg I know already is fully capable of doing that, because he's got ideas inside

our data center that work better than the conventional TCP mechanisms.

We have -- this is not an entirely internal meeting.

We have some folks from outside. So we're not going to go into any details

about that, unless Greg chooses to. But the point I want to make is that this

guy is fearless. He's looking at, you know, 80 gigabits a second

worth of traffic going in two different directions to figure out what the heck is going on inside

the TCP. And he's done a brilliant job of making this

work a lot better. Now, there is another area of real interest

here. Addresses in the Internet are bound mostly

to edge devices. And Bob Kahn and I have had this long-standing

debate about what should have an address. Why should we stop at the box?

Could we talk about the things that are inside the box?

What about logical addresses of objects that are inside the box?

Should the routing tables be able to route not to a box, but to something inside that

might move around? Well, I don't have an answer to you.

But I can tell you that thinking about addressing other than just termination at a box may turn

out to be a very rich way of dealing with some of the cloud issues that we face, because

the data in the cloud is what has to move around.

So this means, by the way, that the domain name system may not necessarily be the sole

means of binding an identifier to an I.P. address.

And this is another example of don't be afraid to think differently.

So instead of using DNS, it may very well be that some other label which has a different

meaning than the DNS might be useful and might be worthwhile implementing in addition to

or even instead of the DNS. So the DNS isn't necessarily fixed in the

universe of Internet architecture. I'm going to skip over policy considerations

in the interest of time here. I do want to bring up a couple of other major

problems, though, for us at Google. Intellectual property treatment is troublesome,

because the history has been rooted in hard copy of things.

In fact, if you read the copyright law, it talks about the fixing of a work in a physical

manifestation. So it's a CD, or it's a tape, or it's a hard

drive, or it's a memory stick, or a book, or a magazine.

And when we start dealing with digital information, we have to cope with the fact that, A, it's

easy to replicate, and, B, it's easy to distribute. And the notion of controlling copies as the

means of recovering value from intellectual property collides with this ease of distribution

and copying. In fact, if you think about the way the Web

works in particular, what does a browser do? The first thing it does is go out and copy

whatever the home page is on the Web site. It's a giant copying engine.

No wonder the people who think of intellectual property protection as controlling copies

are a little unnerved by this, you know, global spread of this giant copying machine.

It's, you know, sort of Xerox writ large. So the problem here is that the current rules

are rooted in the wrong kind of concepts. And we really need to rethink how people can

recover value from intellectual property in addition to or instead of simply controlling

copies. Now, I don't have a magic answer for you here.

I mean, the usual easy one is, well, just encrypt everything and then sell people the

keys. Yeah, that's all fine, unless somebody buys

one key and puts it up on the Net. And then everybody has a copy of it, and so

much for that. I would point out, however, even in that scenario,

that somebody who puts the key up has, morally, created a copyright violation

of the same order of magnitude as making copies of music CDs or making copies of books and

distributing them. The problem, of course, might be finding that

person in order to somehow inhibit that sort of behavior.

The semantic Web is something that Tim Berners-Lee I know has been working on very hard.

In fact, it came up in the CIO meetings that we hosted here yesterday.

This problem is that there is a lot of unstructured information on the Web right now.

But there's also a lot of structured information, except that we can't see it, because it isn't

expressed in a form that the crawler can detect.

So when we have a lot of structured data, it's kind of like dark matter in the universe.

We know it's there but we can't quite detect it.

So there's dark information in the Internet. And finding a way to make it more visible

so that our crawlers can find it and our users can discover it is a nontrivial exercise.

We have to find ways of labeling content that's consistent across a large number of different

bins full of information. This is a hard thing.

It's the data dictionary collision problem for people who speak databases.

The other two things on this slide have to do with my recurring nightmare that our digital

history will be lost in not very many years. So let me give you the long-term example.

It's the year 3000. And you've just done a Google search.

And you find a 1997 PowerPoint file. That's even -- let's pretend you're using

Windows 3000, just for the sake of argument. [ Laughter ]

>>Vint Cerf: The question is, does Windows 3000 know how to interpret a 1,000-year-old

PowerPoint file? And the chances are pretty good that it can't.

The question is even if we were using open source software, it isn't necessarily clear

that the versions of the open source software that we might be dealing with in the year

3000 know how to interpret files that are 1,000 years old.

The reason I'm so exercised about this is not the thousand-year problem.

It's the next year or the third or fourth or fifth-year problem.

I'm already seeing image formats that are not interpretable by current-day software.

I see this happen a lot between the Macs and the PCs.

I do a presentation on a Mac and then show it on a PC, and a big thing comes up saying

"We don't know how to interpret this thing." So it's a big blank in the slide.

So I am worried that our descendents 100 years from now or maybe even our kids 20 years from

now won't actually have any idea what the beginning of the 21st century was like, because

none of the digital objects that we have created and stored away are interpretable.

So the question is how to solve that problem. This is hard, because it's not just a question

of hanging on to the application software. If somebody says, "I'm not going to support

that software anymore," and it runs on that version of an operating system, is there any

way that we can hang on to not only the application, but the operating system it ran on and maybe

the emulation of the hardware that the operating system ran on that ran the application that

knew how to interpret the complex bits? The question is, what rules reasonably apply

to, let's say, not coerce, but encourage people to make sure that objects that were created

in earlier forms are still interpretable? And this gets all mixed up in motivations,

in business ethics, in business practices, and in intellectual property management.

So if we don't do something about this in a conscious way, we will end up with a lot

of rotten bits that won't be interpretable. And that can't be the right outcome.

I am particularly personally embarrassed about this because when I work with librarians and,

you know, we walk in with our fancy CDs and DVDs and everything else, they say, "How will

these last?" I'm not sure I actually have a good answer

to that. And then they kind of walk out with a 1,000-year-old

vellum manuscript which is magnificently illuminated and if you happen to read Greek or Latin,

you can still read it. And they hold this up and say, talk to me

again about these digital formats and how long you think they're going to last?

I want to shrink down and walk out the door. This is serious.

And it's serious for us at Google. Because we are offering to people to put their

stuff in our clouds. And we're offering to allow them to get it

back. And the question is, when they get it back,

in what form will it be delivered to them? And will they be able to interpret it?

Okay. I'm going to do just a couple more slides

and then I want to do some Q&A. This picture many of you have already seen.

It's my way of saying, there's some weird stuff on the Internet that I didn't expect

to be there, you know, like picture frames and refrigerators.

It was the farthest thing from my mind. The Internet-enabled surfboard is my favorite

one. Some guy in The Netherlands -- I haven't met

him -- I just have this picture, he's sitting on the water, waiting for the next wave, thinking,

if I had a laptop on my surfboard, I could be surfing the Internet while I'm waiting.

He put a Wi-Fi service in the rescue shack and he surfs the Net while waiting for the

next wave. If you're interested in that, that's the product

for you. I mentioned earlier that there are sensor

networks showing up on the Net. Some of them are being driven by smart grid

ideas, some are driven by being more thoughtful about dealing with efficient use of energy.

I have a sensor network at home. And I'm proud to say it's a commercial thing.

It's not me with a soldering gun. It's from Arch Rock.

It's running IPv6. It's sampling temperature, humidity, and light

levels in all the rooms of the house every five minutes.

And there is a server that's accumulating that stuff.

Now, one of the rooms in the house is the wine cellar.

And it's very important that the wine not get up above 60 degrees Fahrenheit and the

humidity not drop below about 50%. That particular room is alarmed so that if

the temperature is -- exceeds 60 degrees, I get an SMS on my mobile.

And it happened when I went to the -- a laboratory -- Argonne National Laboratory.

Just as I was walking in the door for a three-day visit, my mobile goes off.

It's the wine cellar calling. "You've just breached the 60 Fahrenheit barrier."

So every five minutes, I kept getting reminded my wine was warming up.

Unfortunately, my wife was away on some trip, so she couldn't go and reset the cooler.

So I called the guys at Arch Rock, and I said, "Do you make actuators so I could do some

remote actuation?" They said, "Yes, that's a little project for

the weekend." I did think, though, that I could infer some

things about what was going on when I wasn't at home.

For example, if somebody goes into the wine cellar and turns on the lights, I might be

able to detect that. Of course, I don't know what they did in there.

So the next step is to put RFID chips on all the bottles.

[ Laughter. ] >>Vint Cerf: That way, I can do a rapid inventory

to see if any bottles have left the wine cellar without my permission.

Somebody did debug this for me, though. He pointed out, you could go into the wine

cellar and drink the wine and leave the bottle. [ Laughter. ]

>>Vint Cerf: Clearly, I have some work to do to debug the architecture.

So we're going to have sensor networks everywhere. The data is going to be flowing to us.

That's simply going to be another part of our Internet universe, huge amounts of information

either flowing to consumption places or devices just talking back and forth to each other.
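
The monitoring logic in a setup like the wine-cellar one is simple enough to sketch in a few lines of Python. Everything here is hypothetical -- the sensor-reading and SMS functions are stand-ins for whatever the actual hardware and gateway provide -- and it only illustrates the sample-every-five-minutes, alert-on-threshold pattern.

    import time

    TEMP_LIMIT_F = 60.0      # wine must stay at or below 60 degrees Fahrenheit
    HUMIDITY_FLOOR = 50.0    # ...and the humidity should not drop below about 50%
    SAMPLE_SECONDS = 300     # one reading every five minutes

    def read_sensor(room: str) -> dict:
        """Stand-in for querying a real sensor node (e.g., over IPv6)."""
        raise NotImplementedError("replace with the actual sensor query")

    def send_sms(message: str) -> None:
        """Stand-in for whatever SMS gateway the alarm actually uses."""
        print("SMS:", message)

    def monitor(room: str = "wine-cellar") -> None:
        while True:
            reading = read_sensor(room)   # e.g. {'temp_f': ..., 'humidity': ..., 'light': ...}
            if reading["temp_f"] > TEMP_LIMIT_F:
                send_sms(f"{room}: temperature {reading['temp_f']:.1f} F exceeds {TEMP_LIMIT_F} F")
            if reading["humidity"] < HUMIDITY_FLOOR:
                send_sms(f"{room}: humidity {reading['humidity']:.0f}% below {HUMIDITY_FLOOR}%")
            time.sleep(SAMPLE_SECONDS)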

Okay. So last thing I want to do is quickly mention

the Interplanetary Internet. I know many of you have heard me talk about

this. I'm very excited about it.

It's not a Google project. We're not planning to take over the solar

system as part of our business model, at least not as far as I've been told.

But I am interested in the problem of being able to give richer communication for manned

and robotic space exploration. Some of this is driven by our visits to Mars

with the Rovers and others -- since 1976, actually, we've been sending equipment to

Mars. And most of the communication with the spacecraft

has been over point-to-point links. One of the things that my colleagues at the

Jet Propulsion Lab and NASA have been wanting is a richer networking environment so that

we can support much more complex missions with many different devices, lots of different

orbiters, things on the ground that are moving around, sensor networks that are self-organizing.

So this is just a view of Google Mars, which is a beautiful piece of work.

What you might know is that when the Rovers landed several years ago on Mars, the original

plan was to transmit data direct to Earth. And the radios overheated.

And so they decided that they'd have to reduce the duty cycle to keep the radios from damaging

themselves. And that reduced the data return, which is

already rather limited at 28 kilobits a second. Now, remember, these things are 35 million

to 235 million miles away. At the speed of light, it takes three and

a half minutes for a signal to go from Earth to Mars when we're closest together and 20

minutes when we're farthest apart. So my colleagues and I were thinking, okay,

why don't we use Internet protocols. It works here.

It ought to work there. And that's true.

But it doesn't work very well in between. Imagine TCP with flow control with a 40-minute

round-trip time. Then there's the other problem of celestial

motion. The planets are rotating, and we haven't figured

out how to stop that, either. [ Laughter. ]

>>Vint Cerf: The problem is, there's a thing on the surface of Mars, and it rotates around,

and you can't talk to it until it comes back around again.

And the same thing happens with orbiters. The Jet Propulsion Lab guy said, "We have

orbiters around Mars already. They were used to map the surface of Mars

to decide where the Rover should land." They reprogrammed the Rovers and the satellites

to do store-and-forward relay back to Earth at 128 kilobits a second using a different

radio. So store-and-forward is the way all the data

has been coming back from Mars. When the Phoenix lander arrived, in May of

2008, it was going to the North Pole. It didn't have any direct-to-Earth path.

So they deliberately did store-and-forward. So my colleagues and I thought, well, why

don't we just use TCP. But it doesn't work.
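
The arithmetic behind those light-time figures is easy to check -- a quick illustration, not from the talk -- and it shows why ordinary TCP flow control is hopeless at those distances.

    SPEED_OF_LIGHT_KM_S = 299_792.458
    MILES_TO_KM = 1.609344

    def one_way_delay_minutes(distance_miles: float) -> float:
        return distance_miles * MILES_TO_KM / SPEED_OF_LIGHT_KM_S / 60

    for label, miles in [("closest approach", 35e6), ("farthest separation", 235e6)]:
        d = one_way_delay_minutes(miles)
        print(f"{label}: one-way {d:.1f} min, round trip {2 * d:.1f} min")
        # roughly 3 minutes one way at closest approach, about 21 minutes at the
        # farthest -- a round trip on the order of 40 minutes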

So we had to invent a whole new suite of protocols that we call Delay and Disruption Tolerant

Networking Protocols. Those not only have been made to work at interplanetary

distances using the EPOXI spacecraft, which is now on its way to rendezvous with a comet

and will be using the new DTN protocols, but they have also been applied to military tactical

communications. The Defense Advanced Research Projects Agency that funded

the original interplanetary work and the original Internet work has adapted these Delay and

Disruption Tolerant Protocols to Tactical Com, which is a highly hostile environment.

So what we're hoping, frankly, is that the standards that we have been developing for

interplanetary communication will be adopted by all the countries around the world.

If they do, then everybody's spacecraft could potentially interwork, which means when they

have completed their initial missions, they might well be repurposed to be part of an

interplanetary backbone. Over the course of the next several decades,

assuming the standardization is adopted, we may actually see an interplanetary backbone

emerge out of the space exploration that goes on.

Okay. So I really overshot my time.

But let me stop here and invite questions. I know we've got some Dory things.

And in any event, thanks a whole bunch for letting me take up an hour of your time.

[ Applause ] >>> Let me read the first question on the

Dory. We have a lot of our distributed offices VCed

in, including the West Coast, plus Boulder, plus New York.

So let's just get to one of those, at least. Comcast recently had a court victory in their

effort to own the Internet and control traffic on it.

Murdoch has similar ambitions. And I suspect that the military will make

a similar claim. Can a free and open Internet be effectively

defended? >>Vint Cerf: Wait a minute.

I was too busy stabbing myself in the gut here.

Wow. Okay.

So I don't think that Comcast quite had the victory that's characterized here.

What they basically argued was that when the FCC told them to cease and desist interfering

with BitTorrent, the FCC didn't have a basis for making that demand.

Now, you need to have a little context. A few years ago, the Internet was considered a

telecommunications service, and it was regulated under what's called Title II of the Telecom Act,

as amended in 1996. In its infinite wisdom, the FCC, upon hearing

debates between the telcos and the cable companies on the differences in their regulation, said,

"I hear you. We realize that you are regulated differently,

but you're both offering Internet service. We are going to move Internet service into

what's called Title I of the Telecom Act." This is essentially applications.

It's an information service. And it was treated as a vertically integrated

service, so there was no notion of telecommunications. There was no notion of common carriage anymore,

because it was treated as this unitary thing. Well, within Title I, the court ruled that

the FCC did not have any authority deriving from that title that would allow them to tell

Comcast to cease and desist. Now, I want to defend Comcast just a little

bit here. What was happening to them

is that they have an asymmetric network; right? You can't push data into the cable system

as quickly as you can pull it out. So they're faced with a problem with people

doing BitTorrent, because some people who are running multiple TCP connections to do

the BitTorrent application are consuming much more bandwidth than everybody else.

So they're kind of freezing everybody out. So Comcast is saying, well, we have to do

something to make it more fair for everyone to get access to the limited resources that

we can make available. And their way of doing the network management

was, let me say, not very ept. They --

[ Laughter ] >>Vint Cerf: -- chose to do deep-packet inspections,

see which guys were doing the BitTorrent protocol and then generate fake reset messages to both

people, leaving them both thinking the other guy hung up.

That was a particularly crude way of trying to do network management.

In subsequent conversations with them, I urged them to consider an alternative, which was

to look at the bandwidth that was being consumed by these parties in a, you know, nonprotocol-centric

way and simply limit the amount of bandwidth that anybody could get if the system had gone

into congestion. And that, as I understand it, is what they

have chosen to do. So the court case basically says to the FCC

that Title I may not be a very good way of preserving fairness and choice for users.
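[ Editor's note: a rough sketch, not from the talk, of the protocol-agnostic congestion management described above -- when demand exceeds the shared link's capacity, give each subscriber a max-min fair share of bandwidth rather than inspecting or resetting particular protocols. Names and numbers are illustrative. ]

    def max_min_fair(demands_kbps, capacity_kbps):
        """Progressive filling: fully satisfy small demands, split what's left equally."""
        allocation, remaining, capacity_left = {}, dict(demands_kbps), capacity_kbps
        while remaining:
            equal_share = capacity_left / len(remaining)
            satisfied = {u: d for u, d in remaining.items() if d <= equal_share}
            if not satisfied:              # everyone left wants more than an equal share
                for user in remaining:
                    allocation[user] = equal_share
                return allocation
            for user, demand in satisfied.items():
                allocation[user] = demand  # small demands are met in full
                capacity_left -= demand
                del remaining[user]
        return allocation

    # One heavy BitTorrent user and one light web user sharing a 6 Mb/s upstream:
    print(max_min_fair({"bittorrent": 8000, "web": 500}, capacity_kbps=6000))
    # {'web': 500, 'bittorrent': 5500.0}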

What do we want? At Google, what we want is for every user to have

the ability to go anyplace on the Internet and run any application that he or she wants

to. We understand that it's not right for a user

to get more resources than they are paying for.

We understand that people have to do some network management to deal with, you know,

potential denial of service attacks, to deal with fair allocation of the resources.

But we don't want them to abuse their access to and control over a broadband conduit that

gets the consumer into Internet services. In particular, for example, if you're a cable

company that's offering television as a service and you're being paid for that, and you're

also offering Internet, broadband Internet service, and you're being paid for that, we

wouldn't want YouTube, for example, to be somehow interfered with over the Internet

part of the channel in order to favor the other application that that company offers.

Now, I'm not picking on Comcast or any other cable company.

I'm just saying that we are looking for fair and equitable access for users and for suppliers

of service on those underlying broadband facilities. The likely outcome of this particular situation

is that the FCC will choose to reclassify Internet service as a Title II kind of service.

Some of you who know anything about this could conceivably levitate skyward, because there

are a huge number of things in Title II that are all about telephony, all about universal

service and taxation and tariffing and all this stuff.

The FCC has the ability to forbear from applying some of the possible controls, remedies, and

regulations in Title II. They always have the freedom not to do something.

And so if anything does happen to reclassify Internet as Title II, my hope and expectation

is that only a small number of the terms and conditions of Title II would apply to the

Internet service. One of the things I'm very attracted to is

common carriage. Common carriage basically says you have to

apply the same rules to everybody: all customers are served under the same terms and conditions.

There is no favoritism. And I think that's important for our business.

Okay. Next Dory question.

I don't know how to -- >>> How about if we --

>>Vint Cerf: There we go. Why don't I read it.

That way, I'll pay attention. That's terrible, isn't it?

How would the Internet be different if there had been native support early on for user

identification and for content signing and for encryption?

This turned out to be hard to retrofit. Yeah, it has been hard.

I actually think it would have been very helpful and powerful not only to have users able to

authenticate themselves, but have devices able to say, yes, I'm getting control from

somebody I recognize. I will accept the exchange or demand for information.

I wish that we had done that. The problem is, the technology to do it wasn't

available at the time that the Internet standards were being finalized in 1978.

Public key crypto was invented in 1976 by Whit Diffie and Marty Hellman at Stanford.

And I knew all about what they were doing. But it was an idea at the time.

It wasn't -- RSA hadn't been implemented yet. So by the time those things matured, the Internet

had raced on into, you know, such a deployment that we couldn't force those things in.

If we were doing it over again, I would have wanted to have the advantage of that.

Okay. How long did the government want to keep TCP/IP

a secret technology? Zero milliseconds.

[ Laughter ] >>Vint Cerf: Never.

And I have to tell you, I'm really proud of this.

Bob and I, Bob Kahn and I, deliberately set about to make TCP/IP visible to the public,

to anyone in the world. We visibly wanted it to be unconstrained,

fully accessible, fully documented from the get-go.

And the reason is very simple. Our rationale was that if this was ever

going to be an international standard, if it was ever going to be adopted by the proprietary

community, there had to be zero barriers, no excuses.

You can't say, well, we couldn't get the documentation, or, no, we didn't have access to a reference

implementation. Our conclusion was, make this so available

that there are no excuses for not adopting it.

And we didn't put any patent constraints on it.

We didn't do anything other than to make it fully and widely available.

And I think that openness served us well, because, ultimately, it did get adopted as

an international standard. What's the one thing you wish

had been properly separated into two or more things from the beginning?

Well, the one thing I did mention was this separation of identifiers at the endpoints

from the IP addresses. I really wish I had done that.

By the way, we can still do that. There is absolutely nothing stopping us from

implementing protocols that have that property. And, in fact, one of the things that is interesting

about the Internet is that you can keep changing it.

You can keep adding new capabilities. And I hope that we actually get there someday

so that we have this flexibility that we don't have today.
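[ Editor's note: a minimal sketch, not from the talk, of the identifier/locator split just discussed -- connections are bound to a stable endpoint identifier, and a directory maps that identifier to whatever IP address the host currently has, so the address can change without breaking the connection. Purely illustrative; not any particular protocol's API. ]

    class LocatorDirectory:
        """Maps stable endpoint identifiers to their current IP addresses."""
        def __init__(self):
            self.locators = {}

        def register(self, endpoint_id, ip_address):
            self.locators[endpoint_id] = ip_address   # update when the host moves

        def resolve(self, endpoint_id):
            return self.locators[endpoint_id]

    directory = LocatorDirectory()
    directory.register("laptop-eid-42", "192.0.2.10")       # at home
    connection = ("laptop-eid-42", "server-eid-7")           # names endpoints, not addresses
    directory.register("laptop-eid-42", "198.51.100.99")     # moved to a coffee shop
    print(directory.resolve(connection[0]))                  # traffic now goes to 198.51.100.99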

How do I get this little thing to scroll down? There we go.

Okay. What's -- Let's see.

Something to think about at the end of the discussion: If the Internet's original design

had been different in the ways we've discussed, how would its evolution have been different?

Boy, that's a DNA question, isn't it, as opposed to a DNS question.

[ Laughter ] >>Vint Cerf: I know.

Bad -- minus 2. Well, I think probably, if we had done some

of the things I suggested, we would have had strong authentication available to make the

system a little more secure. We would have had better mobility, multihoming,

ability to use multiple routes to push data across the Net faster, taking advantage of

multiple paths at the same time. I think we might have had a good shot at making

the network more resistant to various forms of attack, because we would have put in authentication

in places where it could usefully be applied. And, again, I want to remind you that it's

not impossible to put some of those things in again.

There's an activity in the National Science Foundation called FIND, Future Internet Design.

And there are ideas that are being pursued with support from NSF, some of which touch

on the ideas I've talked about. Some are even more elaborate than that.

And there's nothing stopping us from trying some of those ideas out, either in the existing

Internet or, in fact, in something which is independent.

So my sense right now is that we are not constrained to live with the history of the network.

We have the freedom and flexibility to change that if we want to.

Let's see. What choices would you have made if you'd

known that the Internet would eventually be used for commercial purposes?

I don't think I would have made different choices, to be honest with you, since I was

a big proponent of trying to turn various parts of the network into commercially useful

things. I was very happy to see Cisco and Proteon

and others make commercial routers. Because the way we made routers before that

was to take a computer and a graduate student and wrap the graduate student around the router.

[ Laughter ] >>Vint Cerf: And we were running out of graduate

students. So seeing commercially available routers was

great. And finally came Internet services that were

commercially available. And that didn't happen until we broke a policy

logjam in 1988. I asked permission from the federal government

to allow me to connect the commercial MCI mail system to the Internet in 1988.

Up until that time, 15 years of the Internet program, nobody was allowed to put commercial

bits on the government-sponsored backbone. So I was really happy when they said, yes,

you can do that. Within a year, we had gotten the MCI mail

system connected. All the other e-mail providers said, wait

a minute, those MCI guys shouldn't have that privilege alone.

So they jumped on: Telemail, and OnTyme, and CompuServe, and so on.

An odd side effect: they hadn't been able to talk to each other before, because they

were closed e-mail systems, but they could all talk to the Internet.

In consequence, they could talk to one another through the Internet.

That was a surprise for everybody. Holy cow, I just got a message from some guy

on a different e-mail system. How did that happen?

The side effect of having allowed the commercial use of the government-sponsored backbone is

that commercial providers of service didn't have to build their own backbone.

So that meant they had a relatively low barrier for offering commercial service that could

use the government-sponsored backbones for a time.

It didn't take very long. That was 1989 when we saw the first commercial

Internet services come along. By 1995, just a few years later, the National

Science Foundation shut down the NSFNET because it didn't need it anymore.

The universities could buy commercial service more cheaply than they could get it

from a specially operated network. Sorry?

Yes. >>> I asked this question a few weeks ago

at TGIF, and I got completely shot down. But --

>>Vint Cerf: Can you use the microphone? You'll be on record, by the way, so you should

be aware of that. >>> But basically, I was just wondering what

your thoughts were on where the Internet will be in the future maybe ten, 50 years down

the road, and what our relationship with the technology will be then.

I won't hold you to it. >>Vint Cerf: That's a nice, simply-asked question

with a really easy answer; right? The short answer is, beats the hell out of

me. I wish I was eight years old so I could actually

wait around to see the answer. Honestly, I think two things.

First of all, mobility, high bandwidth. The network will disappear.

You won't see it. It will just be part of the environment in

the same way that we don't think very much about electrical power.

We just plug into the wall and expect it to be there.

We will expect networking to be part of the environment.

It will always be available. All of the devices that we use will be networkable.

And the thing that I think is the most exciting possibility is the possibility that the network

will be able to interface to our neural systems. Now, before you think I've gone all Ray Kurzweil

on you -- I'm not quite there.

But I will -- I need to finish with this comment, even though there are other questions, because

it's already way over time. But I love to tell the story, many of you

have heard this before, but I'll tell it anyway. My wife was born with normal hearing, but

she lost her hearing when she was three. She had spinal meningitis.

So she was deaf for 50 years. Then she got a cochlear implant.

Now, this is a device that interfaces to the inner ear, the cochlea.

It has electrodes that touch the auditory nerve.

The speech processor that she wears takes in sound, does a Fourier transform to figure

out what frequencies are present, then generates electrical impulses in the inner ear to fake

the brain out into thinking that the inner ear is working.
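[ Editor's note: a greatly simplified sketch, not from the talk, of the signal chain just described -- sample sound, use a Fourier transform to see which frequencies are present, and map the energy in a few bands to stimulation levels for electrode channels. Band edges and frame size are illustrative; requires NumPy. ]

    import numpy as np

    def band_levels(samples, sample_rate_hz, band_edges_hz):
        """Return the spectral energy in each band, one value per electrode channel."""
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
        levels = []
        for low, high in zip(band_edges_hz[:-1], band_edges_hz[1:]):
            in_band = (freqs >= low) & (freqs < high)
            levels.append(float(spectrum[in_band].sum()))
        return levels

    rate = 16000
    t = np.arange(rate // 50) / rate                  # one 20 ms frame of audio
    tone = np.sin(2 * np.pi * 440 * t)                # a 440 Hz test tone
    print(band_levels(tone, rate, [100, 500, 1000, 4000, 8000]))
    # most of the energy lands in the 100-500 Hz band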

Now, the implication of that is that we understand enough about sensorineural processes to imitate

them electronically. Now, I had been thinking, why not program

-- reprogram the speech processor so that it can do TCP/IP?

[ Laughter ] >>Vint Cerf: And now -- I'm not trying to

get TCP/IP stuck in her brain. We aren't there yet.

But what I am interested in is the possibility that she could ask a question, speak out loud,

the microphones that normally are hearing sound to help her hear could pick up what

she says. We could digitize that, packetize that, send

it as voice-over-IP to a speech-understanding processor.

Guess what? We make some of those; right?

So she could ask a question. The answer would come back through the Internet,

go into the speech processor and go straight up into her auditory nerve electronically

and then, you know, the brain interprets that as sound.

So we could Internet-enable my wife. That's pretty amazing.

[ Laughter ] >>Vint Cerf: So there -- I think, honestly,

that our ability to network ourselves in the sensory sense is quite likely.

Whether we ever get cognitive interaction is a whole other story.

And it's one which is at the moment still science fiction.

But sensorineural and sensorimotor possibilities are very strong.

And, finally, something that you all know about, augmented reality.

When we think about things like my wife's cochlear implant, this is remediating a problem

that she had. Otherwise, she can't hear.

But there's no reason why we can't extend the range of people's ability to sense the

environment. Why can't we see x-rays?

Why can't we hear at the frequencies that dogs can?

There's no reason why we can't augment our abilities.

So I think if we're looking 30, 40, 50 years into the future, we will see a lot of augmentation

of our abilities, some of them sensorimotor, some sensorineural.

Whether we ever get to the point where we have cognitive interactions with computers,

I don't know. But I think it's very possible that we might

have conversations with computers that are richer than the ones that we have today, which

are mostly command and control. I think if you saw the TED lecture where the

guy was wearing a projector and had a television camera, he had a headset and a microphone,

and instead of carrying a mobile around, he carried a piece of paper.

That was his display screen. And he projected stuff onto the paper, or

in one case he projected stuff on his hand. It was the telephone dialing pad.

And he dialed by just, you know, poking on his hand.

And, of course, what was going on is that there was a video receiver that was watching

the gesture and interpreting it. This is the first time I have thought about

and seen a computer-based system that was participating in the same sensory environment

that I am. It's observing what I'm observing.

It's seeing gestures and interpreting them. So that kind of -- It was a really moving

example of what happens when you let a computer participate with you in the real world.

If we see anything arrive out of that in 20 or 30 years' time, I bet it's going to be

because we've got common, shared experiences with our computers.

Okay. That's really all the time we've got.

But thanks, again, for letting me be here this afternoon.

[ Applause ]
