The Virus Safe Computing Initiative at HP Labs

DAN BORNSTEIN: I'm Dan Bornstein.

I'm hosting a series of TechTalks this month on

authorization based access control, currently or possibly

formerly known as object capabilities.

We have four speakers who are associated with HP Labs.

Each of them has a different perspective, a different story

to tell, different projects to present.

So, with that, we have Alan Karp today.

He's the head of the Virus Safe Computing

Initiative at HP Labs.

He's going to give us an overview of what his group

does and talk about a couple projects.

ALAN KARP: OK, first I got to show you this.

Every six to nine months, the branding office sends out a

new standard format for slides.

This is the new HP branded slide format.

So I thought that was pretty cool.

I have to start with a disclosure.

HP has a very active security team.

We are not part of that team.

HP Labs has a very active security team.

We are not part of that team.

We are like a skunk works off in the

corner working on security.

And the reason is, we take a very different view of

security than most security professionals do.

I'm going to tell you about some of that today, and you're

going to hear more about it in the next three weeks.

That's OK with us.

The rest of the world is crazy, and we're the only ones

who are sane.

My goal here is to disinfect you of the world's craziness

and infect you with ours.

Just put that in perspective.

In order to demonstrate the value of our way of thinking

about security, we decided to do what many people would

consider impossible.

We decided to make a virus safe computing environment for

Microsoft Windows.

You guys can stop laughing now.

We have it.

It runs.

Our beta release is out.

But the operative word is safe.

You see, we're not going to detect the virus.

We don't block the virus.

We don't change the operating system.

We don't touch the applications.

We just let the virus run, but it doesn't hurt you and you

don't spread it to others.

And if we succeed in doing that, we'll have made virus

writing uninteresting, and that's the way

you win this war.

Now this is a security talk, and so when anybody says

security, you have to ask, what do you mean by that?

I've come up with 11 meanings of security.

There are the three As of security to which I've added

access control.

There are the things that are normally done with crypto.

And then the things related to, can I

actually use my machine?

In other words, am I safe from denial of service by

sledgehammer?

Now, no security person, no security project, can be

expert in all of these, yet every security architecture

must cover them all.

So when I say security, I'm going to

tell you what we mean.

We largely focus on access control.

You'll see that we touch on some of the other things, but

our primary focus is on access control.

Here's the outline for the talk.

What's the problem?

Well, on my computer, on many of your computers, there's

this very powerful program.

This program can erase files.

It can search looking for secrets and sell them to the

competition.

In extreme cases it can even reformat your hard drive.

Yet, there are employees at my place, and I'm sure there are

employees at this place, who use this program at work every

day, some of them for hours at a time.

Would anybody care to venture a guess for what this very

powerful program is?

Any guesses?

AUDIENCE: All of them.

ALAN KARP: Sorry?

AUDIENCE: All of them.

ALAN KARP: All of them, he says.

Exactly right.

This one can do all of those things.

Now, why does Solitaire have the power to reformat your

hard drive just because you're running as Administrator?

I mean, it's a ridiculous authority to give it.

Why can it plant a Trojan horse in your start-up folder?

Well, nobody ever said that Solitaire did, but the fact

that it can is the root cause of all our-- of many of our

problems.

The reason this happens is because we use identification

or role-based access control.

You log into your computer and you get your identity.

Then every process you start runs with that identity.

That's a good way to keep you from doing something you're

not allowed to do, but it doesn't prevent a program

acting on your behalf from doing something you're allowed

to do that you don't want done.

That's what the viruses do, right?

They do stuff you're allowed to do that

you don't want done.

We think a better approach is to enforce the principle of

least authority.

Give each program only the authority it needs to do the

job the user wants done.

Then you don't have to worry.

And it really recognizes that we give privileges to people,

but we can only control access on processes.

Now, this approach has been used many times in the past,

and it's always failed.

Even the NSA gave up on it.

The reason is it's viewed as a usability nightmare.

You keep getting these pop-up dialog boxes: may I?

May I?

May I?

It's a denial of service attack against the user.

The key insight here comes from Marc Stiegler's work on

CapDesk where he noted that, when I want to edit a file,

typically I'll do something like double-click on the icon

for the file.

That's an active designation that I have to

do no matter what.

His key insight was that the system can also use that as an

active authorization.

When we do that, we find that almost all the security

business decisions disappear into the background, the

actions you normally take, anyway.

You'll see that in the demo.
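The designation-as-authorization idea can be sketched in a few lines of Python. This is an illustrative model only (the names `powerbox_open`, `FileCap`, and the in-memory `FILES` store are hypothetical, not CapDesk's actual API): the trusted shell turns the user's double-click into a single file capability, so the application never needs ambient filesystem authority.

```python
# Stand-in for the disk: a tiny in-memory file store.
FILES = {"report.txt": "quarterly numbers"}

class FileCap:
    """Capability to exactly one file: read it, write it, nothing else."""
    def __init__(self, name):
        self._name = name
    def read(self):
        return FILES[self._name]
    def write(self, text):
        FILES[self._name] = text

def powerbox_open(user_double_clicked):
    # Trusted shell code: the click both designates the file and
    # authorizes access to it, so we hand back an open capability
    # instead of granting filesystem-wide authority.
    return FileCap(user_double_clicked)

def editor(doc):
    # The editor can use only the capability it was handed; it has no
    # way to name, list, or touch any other file.
    doc.write(doc.read().upper())

editor(powerbox_open("report.txt"))
print(FILES["report.txt"])  # QUARTERLY NUMBERS
```

The security decision has disappeared into an action the user takes anyway: picking the file.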

Today's approach is, every time there's

a new kind of attack--

an Excel macro that does something bad, don't use

macros, Love Letter, well, don't use email attachments--

every time there's a new class of attack we're told not to do

something else.

When we get the next class of attacks I fully expect to see

this be the recommendation.

That's what's going on here.

This is feature starvation.

There's good reasons why we want to do that stuff.

We think that, especially Dynamic POLA, where the

authorities can change as the user wants to do different

things, is far better than feature starvation.

It lets you use the features.

There will still be some risk, but at least you get to use

the features while you limit the risk.

The group consists of five people.

You're going to hear four of us during this lecture series.

Me, Mark Miller next week, Tyler Close the week after,

and Marc Stiegler the week after that.

We've worked on or are working on a number

of different projects.

What I asked everybody on the team to do, including me, is

for each of those projects produce three slides.

Just to make it a little bit harder, no more than 50 words

total on all three.

Sometimes that's straightforward, and sometimes

it's a really hard thing.

You can ask Tyler how hard that is.

We all sweated a couple weeks over getting one of his sets

of three slides.

Now even at that limit I don't have time to go

through all of these.

What I'm going to do now is, I'm going to give you a brief

summary of each of these projects and then open it up

to the audience and show you the ones you want.

And if you don't choose, I will.

About 15 minutes before the end, we'll stop and I'll give

you the demo and wrap up.

OK?

Let's start at the top.

Mark Miller, who you're going to hear next week, has spent

the last six or seven years working on the E language.

He calls it a language, but it's far more than that.

It's a full distributed, secure, run-time system with

some very interesting ways of dealing with concurrency.

Marc Stiegler used the E language to write CapDesk,

which is a desktop environment that looks like any other

desktop with icons and menus and all that kind of stuff,

except it's inherently secure and distributed, and

underneath it is a secure distributed file system.

But just transparent.

You just work on your desktop as you would normally.

Mark Miller and Marc Stiegler formed a company that got a

DARPA contract.

DARPA wanted a browser that they could use in the Pentagon

where the rendering engine had been written

by the Chinese army.

That's quite a challenge.

These guys succeeded.

There was a security review, there were some bugs, but

overall, they succeeded in building such a browser.

That's the DARPA browser.

How many people here know about Donut--

PlanetLab?

A few people.

PlanetLab is a consortium of about 60

universities, ten companies.

They've got nearly 1000 computers

scattered around the world.

If you want to do an experiment in large-scale

distributed computing, and you're not at Google, you can

go ahead and sign up and get a piece of each one of those

1000 machines.

So it looks like you got 1000 machines to yourself.

So, for example, OceanStore is one of the projects being run

on PlanetLab.

Well, we had a security review of PlanetLab and they came in

and told us about it.

And the first thing we noticed was

there's a PlanetLab Central.

Now, security guys panic anytime they hear central.

Central point of failure, central point of

vulnerability.

Marc Stiegler, Mark Miller and Terry Stanley, about two years

ago, a little more than two years ago, took a Memorial Day

weekend, a three-day weekend, and they built essentially

most of what's--

the important part of what's in PlanetLab,

but without a center.

So they call it DonutLab, which is PlanetLab without a center.

The interesting thing is because they use E, they only

needed about 12 pages of code, and that included their test

application.

That's an impressive demonstration of what you can

do with the E language.

A number of years ago, I was involved in an HP product

called E-speak.

That's no longer being offered as a product, but we learned

some important lessons.

One of them was that you need to

consider structural security.

In a distributed system, you have certain components and

they can be plugged together in different ways.

Some of those ways actually make it possible to secure

your systems and others don't.

You need to study those.

We learned some lessons, and that's what structural

security is all about.

OK, what do you do with your passwords?

Well, it's a big mess, right?

You got them in your head, which means you only have a

few and they're low entropy.

I think one of the best things you can do is write them down

and keep them in your wallet.

Because I never heard of a pickpocket who hacked

computers or a hacker who picked pockets.

But I worried about this problem.

A few years ago I came up with a little tool that lets me

calculate my passwords rather than remember them.

So I only have to remember one secret, and I can get strong

different passwords for every place I log in.

That's the site password tool.

Phishing is a big problem.

Pharming is a problem.

What are we going to do about that?

Tyler Close realized that the essential problem was that you

were being asked to make an important decision based on

information provided by the attacker.

The attacker chooses the URL, the attacker chooses the look

and feel of the web page.

So he came up with the Petname tool, which is available for

Firefox, that lets you see information provided by you.

If you don't see that information, most likely

you've been phished or pharmed and you know to

be very, very careful.
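The petname idea can be sketched like this (hypothetical names and data, not the actual Firefox extension's code): your decision is keyed on information *you* supplied, a name you bound to a site's cryptographic identity, rather than on anything the attacker controls, like the URL spelling or the page's look and feel.

```python
# Maps a site's cryptographic identity (e.g. a certificate hash, here a
# made-up string) to the name the *user* chose for it.
petnames = {}

def bind(site_key, petname):
    petnames[site_key] = petname

def describe(site_key):
    # Shown in the browser chrome; an unfamiliar site shows no petname,
    # which is the cue to be very, very careful.
    return petnames.get(site_key, "?? unknown site ??")

bind("sha256:ab12...", "my real bank")
print(describe("sha256:ab12..."))   # my real bank
print(describe("sha256:ee99..."))   # ?? unknown site ??  (a look-alike)
```

A phishing site can copy everything about the real page except the petname, because the petname never left your machine.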

Ka-Ping Yee, who's a fifth-year grad student at Berkeley, will be finishing up soon, I hope, took

these two tools and put them together in a way that one

user said it has changed his life.

He no longer agonizes over passwords.

It's a good example of part of our philosophy, which is

improving the user experience by adding security.

Improving the user experience by adding security.

We have about 15 users at HP labs, most of whom use this

tool because it's easier.

They don't even care that they're more secure.

Ping is also an expert in interaction

design for security.

He's written a couple of book chapters on this subject.

He's come up with ten principles for secure

interaction design.

Tyler, again, looked at the problem of composing services

on the web.

Not necessarily web services in the sense of SOAP, but how

do you put things together on the web in a reasonable way so

that it's manageable and understandable and securable

and all that?

He came up with something.

We're thrashing around on the name.

The current name we're using is Web Keys.

But the basic idea is, you can compose web services with very

little or no glue code.

That's the idea behind the Web Keys.

Tyler also had a consulting business in a former life when

he was living on the beach in Anguilla.

I don't know if you know this, but the oil tankers at sea,

the oil on them changes hands several times between when

it's loaded and when it's unloaded.

Those contracts have a very complicated protocol.

Tyler, again, identified the essential thing, the key point

that was making those protocols so complicated, and

came up with the IOU protocol.

The IOU protocol is simple enough to write down on about

a page and a half of text.

So he greatly simplified it by identifying the key source of

the complexity.

Then I've been working with the Navy.

The Navy is trying to turn the DoD into a whole

bunch of web services.

They're using the services oriented architecture and the

web services standards.

I looked at what they were doing.

And I was giving this presentation to a group of

high-level managers, program managers.

These are guys who wouldn't even consider writing a check

for less than $300 million.

Little stuff like that for the government.

I told them, by the way, before I start, what you're

doing ain't going to work.

I can tell you why, and I can tell you how to fix it.

And so, over dinner, I tell them why, and I told

them how to fix it.

Two weeks later their consultants came back and

said, it's not working.

They said, why?

It was exactly what I told them.

So now I go down to Monterey every once in awhile to tell

them how to fix this stuff.

And that's what this ABAC work is all about.

Then the demo, the virus safe computing environment.

So that's the overview of the projects.

Anything that you'd like to hear about in particular?

Anybody?

Otherwise I get to choose.

AUDIENCE: [UNINTELLIGIBLE]

ALAN KARP: So NSA added code to Linux to do sandboxing.

HP has a product on that called NetTop.

This is for the multi-layer level security.

The way it works right now, you've got some

private in the Army.

Actually, I'm dealing with the Navy.

So you've got some seaman in the Navy.

And he's dealing with highly classified material, somewhat

classified material, and confidential material.

And he's got three machines on his desk on a submarine.

Right.

You know, they don't got a lot of space on a submarine.

In fact, worse than the machines on the desk are the

holes you have to drill in the bulkheads to

get the cables through.

It turns out that's a bigger problem for the Navy.

They've got very limited space there.

So the NSA developed this infrastructure based on

Virtual Machines, VMware, where you can have one machine

and run three separate compartments.

NSA has certified the code that they can't cross-talk.

That's what NetTop is.

ABAC is something different.

It's a completely different way of thinking about how to

do security.

So, since you're my host, I'm going to take his question

first, and I'll explain to you a little bit about that.

Besides, this is the work I'm doing, and I'd rather talk

about my stuff than theirs.

But I'll get to it.

Let me start with a sad story.

You can see what happens when they give you a new format.

That's a little too close to the bear, but I

have to edit my slides.

Let me tell you the sad story of Zebra Copy.

This is a true story.

Zebra Copy was a mom-and-pop copy center in Palo Alto, down

the corner of Page Mill and El Camino.

They were so small that when business was slow, they'd

dress up one of the employees in a zebra suit to stand out

and wave at cars.

When my son was little, he liked that.

Well, they had a contract with HP.

About 2000 HP people had the right to order copy jobs

through Zebra Copy.

So when an order came in, Zebra Copy would look at the

name on the order and check it against their list. If the

name was on the list, they'd do the job, right?

And when an employee changed jobs, HP would just sent a

note down to Zebra Copy and they'd update their list.

The first time I heard this I thought it was a joke.

It's the way we do things, but it's a joke.

Why is it a joke?

At that time HP had more than ten thousand small business

partners like Zebra Copy, and Zebra Copy had maybe 500 or

1000 clients in the valley like this.

They must have had a full-time person doing nothing but

updating these lists.

That was a nightmare.

Now, they're out of business now.

I don't think that extra person was the only reason

they went out of business, but it sure didn't help.

So what was going on?

So let's say Bob was one of those people at HP who could

order copy jobs from Zebra Copy.

Bob would send a request to Zebra Copy.

They'd look up his name in their policy database.

Maybe he could only order black and white jobs and not

color jobs.

Get back the result, and if he was authorized, then they'd

send the job back to Bob.

Very simple, very straightforward.

But it turns out there's a number of problems. First of

all, how does Bob's name get in the

Zebra Copy policy database?

What if there's already a Bob there?

Does he have to be Bob Q or Bob X or Bob Z?

How many identities do you have on the internet because

of such problems?

Also Bob had to be able to authenticate.

He had to learn how to log in.

That's why we have so many passwords.

Look at how it worked.

We have the problem of ambient authorities.

When Bob makes a request of Zebra Copy, they look at the

request and they look at the sea of all the things that Bob

is allowed to do.

And if there's a match, they do it.

So if Bob intended to just check his account balance and

inadvertently, due to programming error or input

error, ordered 10,000 copies, 10,000 copies get ordered.

Or if the program Bob was running had a virus, and Bob

intended to just check his account balance, the virus

could order 10,000 copies.

So there was a lot of chance for confusion.

There's no convenient way in this picture for Bob to give a

program a subset of his authorities.

Bob goes on vacation and he needs Carol to

take over for him.

How does he do that?

Well, he can tell HP to add Carol to the Zebra Copy database.

That's one way.

Of course, that takes some time, and it's awkward, and

it's a problem.

Having done that, let's say Carol does something wrong.

You've lost track of who was

responsible for Carol's actions.

Bob was the one who said she could do it.

Bob ought to share in the blame.

But you've lost track of that.

What normally happens in this case, anyway, is it's so hard

to let somebody else take over part of your job that you just

let them take over all of it and you

give them your password.

I don't know about here, but at HP, every manager has given

his or her admin his or her password.

Because you can't give part of your authority--

you need them to have part of your authority to do your job,

so you give them all your authority, and now you have to

trust them.

Very bad situation.

Bob gets back from vacation.

How's he going to do the revocation?

He tells HP he's back.

They revoke Carol.

HP sends Zebra Copy a message, says, remove

Carol from the database.

But what if Carol was already in the

database for another reason?

Oops.

How do you keep track of all this stuff?

And finally there is a very important vulnerability that

is widely overlooked, which is the confused deputy.

Let's say that Bob doesn't make the request

directly, but HP does.

It's quite likely that HP can do more things than Bob is

allowed to do.

So what if Bob asks HP to do something that HP is allowed

to do but Bob isn't?

HP might do it, right?

So how do you prevent that?

Well, HP has to check Bob's authorities before it acts as

his deputy.

But that means every service that acts on behalf of

somebody else has to build its own access control mechanisms.

And that's just not a recipe for success.
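The confused-deputy problem, and the capability-style fix, can be sketched in Python (all names hypothetical). The deputy (HP) holds more authority than its caller (Bob); with identity-based checks it would have to re-implement access control itself, but if the caller must pass in the very capability to be exercised, the deputy cannot be tricked into using its own greater authority.

```python
class Cap:
    """An unforgeable token authorizing exactly one action."""
    def __init__(self, action):
        self.action = action
    def invoke(self):
        return f"did {self.action}"

# Bob can order black-and-white jobs only; HP can also order color.
bob_caps = {"order_bw": Cap("order_bw")}
hp_caps  = {"order_bw": Cap("order_bw"), "order_color": Cap("order_color")}

def deputy(request_cap):
    # HP acts only with the capability the caller handed it; it never
    # reaches into hp_caps on the caller's behalf, so it needs no
    # access-control machinery of its own.
    return request_cap.invoke()

print(deputy(bob_caps["order_bw"]))   # did order_bw
# Bob simply holds no capability for color copies to pass in, so the
# deputy cannot be confused into exercising HP's authority for him.
```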

So here's what we propose.

Authorization based access control.

Now, it's not much of a change.

I've just moved two lines on the architecture diagram.

What's going to happen is, when HP signs the contract

with Zebra Copy, Zebra Copy is going to give HP a bunch of

authorizations, independent authorizations, one for each

action covered by the contract.

HP is going to put them in its policy database.

When Bob logs in, and he only has to log into HP, he will

get back a bundle of authorizations, all the things

he's allowed to do: within HP, with Zebra Copy, with other

business partners, just a big bundle of them.

And now when Bob wants to order copies from Zebra Copy,

he'll give that running program the exact

authorization it needs to do the job he wants done.

That's POLA: Principle of Least Authority.
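One way to picture the authorization bundle, delegation, and revocation (a sketch under assumed names, not any shipped system): each authorization is an independent token, delegation creates a derived token that records who delegated it, and revoking the derived token leaves the original intact.

```python
import itertools

revoked = set()
_ids = itertools.count()

class Authorization:
    """One independent authorization for one contract action."""
    def __init__(self, action, parent=None):
        self.id = next(_ids)
        self.action = action
        self.parent = parent   # the delegation chain preserves accountability

    def delegate(self):
        # Bob hands Carol a *derived* authorization, never his own.
        return Authorization(self.action, parent=self)

    def valid(self):
        if self.id in revoked:
            return False
        return self.parent.valid() if self.parent else True

# Contract signing: Zebra Copy issues one authorization per action to HP,
# HP's policy database hands this one to Bob.
bob_order = Authorization("order_copies")

carol_order = bob_order.delegate()   # vacation cover
assert carol_order.valid()

revoked.add(carol_order.id)          # Bob is back: revoke just Carol's copy
assert not carol_order.valid() and bob_order.valid()
```

Because Carol's token points back at Bob's, you never lose track of who was responsible for the delegation; and because it is a separate token, revoking it cannot accidentally remove authority Carol held for some other reason.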

Has a number of advantages.

Remember, there was no trust relationship between Bob and

Zebra Copy.

It was Zebra Copy to HP, HP to Bob.

This makes that explicit, and making trust relationships

explicit is always a good thing.

I'll get to delegation, but revocation is manageable.

If Bob has delegated one of his authorities, he can just

revoke it whenever he gets back from vacation.

It's a very simple thing, very easy to do.

You'll notice that Zebra Copy is now out of the business of

managing HP employees.

So we've taken this N times M problem and reduced it to an N

plus M problem.

That's another big win.

Furthermore, it has privacy implications, because now

nobody else has to know when HP has a reorg or when

somebody changes jobs or gets fired or whatever.

This is pretty important when you're talking to the military

in particular.

Oh isn't that nice?

OK.

Shall we run my backup now?

Maybe not.

It's easier to upgrade the system.

If Zebra Copy has a certain format for the authorizations,

Zebra Copy is the only one who looks at what they mean.

So if Zebra Copy wants to add a field to the authorizations

or change the names of the authorizations, they're the

only ones who look at it.

As far as HP's concerned, it's just an opaque bag of bits. They know what rights it grants.

But they don't have to know how

those rights are expressed.

So if Zebra Copy wants to do an upgrade,

they just do an upgrade.

Yeah?

AUDIENCE: You said revocation is manageable.

Let's say Bob has delegated Carol.

Carol can now do replay attacks.

ALAN KARP: So, if Bob has delegated to Carol, Carol can

now do replay attacks.

And it depends on the technology you use.

And I am not saying anything about the technology you use.

The whole point here is to get you to stop

asking, who are you?

And start asking is this request authorized?

Everything gets simpler when you do that.

And finally, distributed policy management is viewed as

a very hard problem.

I actually think with explicit authorizations, with

authorization based access control, distributed policy

management is quite straightforward because it

just follows the way we do things anyway.

HP signs the contract with Zebra Copy, gets a bundle of

authorizations.

Bob manages the Graphics Arts department, so they give him

the authorizations.

He delegates them to his people.

They delegate them to their people.

It's just the way we've managed companies

for hundreds of years.

And it just works.

And we don't need a whole lot of

infrastructure to make it work.

So that's the story on authorization

based access control.

In fact, I was just at the TIPPI workshop at Stanford.

The question there was, how do you prevent all these attacks against our authentication systems?

And the question is, why are we asking people to

authenticate so many times?

That's opening you up to phishing.

Look, when you set up a new account at a bank, you get a

web URL that you can bookmark.

Why does that URL take you to a form where you put a user ID

and password?

Why doesn't that URL just take you

directly into your account?

And the URL is just a big secret, like

a big random number?

It's the moral equivalent of a password but you don't have to

remember it.

It's just the web link.
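A minimal sketch of such a "web key" (illustrative names and URL, not a real bank's scheme): the account URL itself carries an unguessable secret, so following the bookmark *is* the authorization, and there is no login form for a phisher to imitate.

```python
import secrets

accounts = {}   # server side: secret token -> account record

def make_account_url(owner):
    # ~256 bits of entropy: the moral equivalent of a password,
    # but one you never have to remember or type.
    token = secrets.token_urlsafe(32)
    accounts[token] = {"owner": owner, "balance": 100}
    return f"https://bank.example/a/{token}"

def serve(url):
    # No user ID, no password form: possession of the link is the proof.
    token = url.rsplit("/", 1)[-1]
    return accounts.get(token)

url = make_account_url("bob")
print(serve(url)["owner"])   # bob
```

In practice, as noted above, the bookmark can also be stored encrypted and decrypted on the way out, so the raw secret never sits in the bookmarks file.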

So why not--

AUDIENCE: Will you tell us which [? tool ?]

[INAUDIBLE]

ALAN KARP: So, yes, we can go to all kinds of--

we've considered that, and we understand how

to deal with that.

There are ways to deal with that.

For example, the bookmark can be encrypted in the browser,

and you can use SOCKS proxy to do the decryption to send it

to the bank.

There are ways to deal with these things.

But the point is, I can do keyboard sniffing, I can do

phishing, I can do pharming, I can do all kinds of things, if

the bank link takes me to a login page.

The attacks, and the ways of implementing them, are much harder if I just have a bookmark.

By the way, it's easier on the user, and it can

be made more secure.

So let me just go back.

OK, what else?

AUDIENCE: Site password.

ALAN KARP: Site password.

OK.

So as I said, too many services require login.

We haven't changed the world yet.

We'll get there eventually.

We haven't changed it yet.

So either you have one or a small number of passwords or

you have a different password for each site.

If you have a different--

I just counted, I have 55 places I log in.

I'm not going to remember 55 different strong passwords.

So I mentioned the piece of paper in your wallet.

There are these password managers.

Firefox has one built in.

And you can buy these things that keep them encrypted on

your machine.

But then, how do you get your password when you're not on

your machine?

What are you going to do?

I just wrote this little tool.

You have one strong password you have to remember.

It's subject to a dictionary attack, so it needs to be a

strong password.

You have an easy to remember name for the website.

And they just get hashed together to produce a password

that you can cut and paste into the field.
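The core of the tool is small enough to sketch here. Later in the talk it's described as just an MD5 hash of the secret and the site name, so this illustration uses MD5 to match; the function name and output length are assumptions, and today you would likely prefer a stronger hash.

```python
import base64
import hashlib

def site_password(master, site_name, length=12):
    """One strong master secret + an easy-to-remember site name
    -> a distinct, strong, reproducible password for that site."""
    digest = hashlib.md5((master + ":" + site_name).encode()).digest()
    # Base64 so the result is typeable / pasteable into a password field.
    return base64.b64encode(digest).decode()[:length]

pw_2005 = site_password("correct horse battery", "Schwab 2005")
pw_2006 = site_password("correct horse battery", "Schwab 2006")
assert pw_2005 != pw_2006   # renaming the site rotates just that password
```

Because the function is deterministic, you can recompute any password anywhere; and because the site name can be public, changing one password is as easy as changing the name, exactly as described below.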

We have a version.

You can get it off the HP Labs download site.

By the way, if you're not at your own machine, you can just

go to my web page and run the program on the machine you're

sitting at using the executable sitting on the web.

So you can get your passwords wherever you are.

We've actually identified a number of other uses.

For example, role based access control.

We can use the name and the role as part of it to put them

together in different ways.

We've actually used it for symmetric key management when

Marc Stiegler first joined HP.

He actually lives in Arizona, comes in every second week.

He couldn't get through the VPN tunnel.

It turns out his ISP had some port closed.

So we just set up a symmetric key with a shared secret that

we shared when he was visiting one time.

Then we were able to change the key by just updating the

one field with the month, which could be in the clear.

Or we could just tell each other openly

about the second part.

With a slight change on the server side, it can be used to

generate one-time passwords.

One of the things left out of WEP, the Wired Equivalent Privacy protocol, was key management.

This tool can actually be used for WEP key management.

Yeah?

AUDIENCE: Is there a simple way [INAUDIBLE]?

ALAN KARP: So, is there a simple way to change your

master password without changing all of them?

No, but there's a simple way to change one password without

changing them all, which is just change the name.

So Schwab makes me change my password every year.

Just a couple months ago I switched the site name, this

thing, from being Schwab 2005 to being Schwab 2006.

That can be public information.

So I actually have all my site names and user IDs actually on

my public website.

There's no link to it, so I know where it is and you

don't, ha ha ha.

But if somebody got that, it would give them very little

security information.

Except for Schwab, where my user ID is my Social Security

number, so that's not listed.

In general, it's just there.

Yeah?

AUDIENCE: [INAUDIBLE]

ALAN KARP: So, are you more vulnerable to a keystroke

logging attack?

So, what I do is I have site password in my start-up.

So when I first log into the system, when I'm least likely to have a hook-based logger running, at least, I type in my

master password.

After that there are no keystrokes to log.

AUDIENCE: [INAUDIBLE]

ALAN KARP: So if I'm going to log in from another computer

that's always dangerous.

Take your chances.

But if I'm at a friend's house--

Look, I bought these shoes standing in the Sony store at

Stanford Shopping Center.

OK.

I was over there, my wife was shopping.

I was bored.

I said, I need my shoes.

So I went to the place where I bought them before and I

needed my password.

I'm standing at a computer in the Sony store.

What is the risk that they're going to get my login, my

master password, and know what to do with it, quite frankly,

because nobody else knows about Site Password?

I just ran the thing and got my account, got my password,

logged into the site and bought my shoes.

You've got to manage your risk.

Why are you logging in from somebody

else's machine, anyway?

There are plenty of things that-- there could be a

hardware keystroke logger there, right?

So, you know, you manage your risks.

In this case, I managed my risk.

OK?

AUDIENCE: It seems like the [INAUDIBLE]

ALAN KARP: Absolutely.

There are many ways you can deal with it

that make it easier.

There are plenty of things, once you have the idea, and

this is just hash-based passwords.

It's just an MD5 hash, and the secret and the site name.

AUDIENCE: [INAUDIBLE]

ALAN KARP: Sure.

You're going to hear more about this next

week, right, Mark?

AUDIENCE: Yeah.

Yeah, I want to spend time [? just focused on ?]

language.

[INAUDIBLE]

ALAN KARP: We've wanted to have electronic cash for a

long time, but we don't have it yet.

So it must be very hard to do.

I mean, there's a lot of money to be made if we could have

electronic cash.

I'm not talking about micro-payments, but just

electronic cash.

This is the problem.

You can see again the problem with the new

style on the slides.

I got to go back and double-check them.

This was the problem that, I think, Mark started out

to solve, which is why he got into this racket.

No?

Maybe.

Sort of.

At any rate.

So it must be very complicated.

Here's the situation.

We have Alice and Bob that have

accounts at the same bank.

Alice wants to pay Bob $10.

I'm going to show you the E code she needs to do that.

So the first thing Alice is going to do is make a purse.

Now she's got the handle to an empty purse.

And then she's going to tell--

she's going to deposit $10 into her purse.

You see, it gets removed from the bank.

Then she's going to hand the purse to Bob.

Now Bob has the handle to the purse.

And then Bob is going to deposit the

money into his account.

You would think this would have to be very complicated

with all the things that could go wrong.

Money could appear, money could disappear.

Alice could cheat the bank.

The bank could cheat Bob.

I mean, there are all kinds of things that could go on.

But here's the totality of the E code you need to do that.

Even when Alice and Bob and the bank are on three

different computers talking over the open internet.

Very simple.

Because the proper way of thinking about security was

built in the language at the very beginning.

What about the bank?

That must be a real mess, right?

If it's so simple for the clients, the bank

must be a real mess.

Here is the total code you need for that transaction.

Now it's not really a bank.

There's no auditing.

No SOX compliance, all that other stuff.

But it's all the stuff you need to make that bank work

without any chance of money disappearing or the bank being

cheated in any way.

And it fits on one slide, not a page of code, but one slide.

Again, that's because Mark Miller thought about all the

implications of the security actions that he was taking.

Basically this is ABAC at the level of

the individual object.
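
The E slides themselves aren't reproduced in this transcript, but the purse/mint pattern being described can be sketched in Python. This is a toy model, not MarkM's actual E code; the names are mine, and the private `balances` table stands in for E's branding.

```python
def make_mint():
    # Toy model of the E mint/purse pattern from the talk.  `balances`
    # is private to this mint; a purse is "legitimate" exactly when it
    # appears in that table, which stands in for E's branding.
    balances = {}

    class Purse:
        def __init__(self, amount=0):
            balances[self] = amount

        def balance(self):
            return balances[self]

        def deposit(self, amount, source):
            # Every check happens before any mutation, so a failed
            # deposit raises without moving any money: money can
            # neither appear nor disappear.
            if source not in balances:
                raise ValueError("not a purse of this mint")
            if not 0 <= amount <= balances[source]:
                raise ValueError("insufficient funds")
            balances[source] -= amount
            balances[self] += amount

    return Purse

# Alice pays Bob $10, following the steps in the talk:
Purse = make_mint()
alice, bob = Purse(100), Purse(0)   # accounts at the same bank
payment = Purse()                   # Alice makes an empty purse
payment.deposit(10, alice)          # $10 leaves her account
bob.deposit(10, payment)            # Bob banks the purse he was handed
```

Holding a purse object is the only way to touch its balance, which is the "ABAC at the level of the individual object" idea in miniature.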

Now if you think that you need to do--

learn a new language and learn a new run-time system in order

to make this work, you don't.

It turns out that what E is doing is just good object

oriented code design.

Throw away all the things that break the object oriented

model, and you'll almost be there.

And so, in fact, there's a project called Joe-E at

Berkeley that's providing a filter to verify that you're

using the proper subset of the Java language.

But since it's a subset, any Joe-E program will run as a

Java program.

It's just the subset.

And the verifier they're writing makes sure that you

don't use any of the bad stuff that you shouldn't have been

using anyway.

So, you can actually use Java to get this level of security.

Now, it's a bit more verbose, and you can see there's some

funky library calls.

You can use E for the simpler--

it's hard for me to say this without choking--

the simpler syntax.

But in this case it is.

We have syntax arguments all the time.

Yes?

AUDIENCE: What happens to the purse when an

exception is thrown?

ALAN KARP: What happens to the purse when an

exception is thrown?

We haven't shown that here.

Let's see.

What exception would be thrown here?

AUDIENCE: Could I--

ALAN KARP: Yeah.

But just tell me quickly and I'll

AUDIENCE: OK, so--

ALAN KARP: And I'll repeat it.

AUDIENCE: The [INAUDIBLE] deposit method, that's the one

who transfers money.

And the critical issue is that, if the two purses are

purses from different currencies, or one is not a

legitimate purse, or there's insufficient funds, any of

those things will cause an exception to be thrown before

money is transferred at all.

Once you've gone and cashed all those checks, now you're

in code for which there's no opportunity for [INAUDIBLE].

The only thing left is the [UNINTELLIGIBLE].

ALAN KARP: So let me repeat that.

For this simple example of one bank where there's only one

currency, if the purse is a legitimate purse, there are no

exceptions that can be thrown once you get to that point.

But if there are, then the exception code would deal with

putting the money back in the accounts.

So, for example, if Bob did some operation against the

purse that was invalid, well, Bob would get an exception,

but the purse would still be there.

But there are no exceptions.

This is why it's very careful to use things like branding

and the methods that--

the arithmetic in E is always using non-overflowing

arithmetic.

So the number of exceptions--

in this case there are no

exceptions that can be generated.

But if there's an incompatibility in the purses,

you put in dollars and I try to pull out euros, then the

exception will be thrown, but nothing bad

happens to the purse.

If the bank crashes, E is replayable

from previous states.

AUDIENCE: So the key thing is that, synchronization-wise,

the only states made persistent are the states--

There's an implicit transactional model here.

[INAUDIBLE]

persistent.

[INAUDIBLE]

transactions.

So if you crash and you come back from your last persistent

state, you're always in a persistent state.

ALAN KARP: Right.

It's inherently transactional in that sense.

So OK.

AUDIENCE: I'll be talking somewhat more

about E in my talk.

ALAN KARP: OK, so let me get to the demo.

Now, HP Labs has lots of various projects.

One of them is the nanocomputing.

Have you heard about this?

Where they're making individual circuit elements

out of molecules, one molecule thick, switching and

transistors and all that.

They built the densest memory by a factor of 30.

I've forgotten how many gigabytes per cubic millimeter

or square millimeter or whatever it is.

But then they got 64 kilobytes of it.

At any rate.

One of the things you may not have heard about is their work

on a molecule called thiotimoline.

Thiotimoline is a molecule made up of four carbon atoms.

It was discovered in the early '60s by Isaac Asimov. And what

it is, is four carbon atoms in a tetrahedron and it's been

rotated in four space so that one of the carbon atoms is 1.3

seconds in the future.

And our guys have figured out how to chain these things

together sort of like a carbon nanotube, so they can see an

arbitrary distance in the future.

And they built a spreadsheet that will calculate--

not predict, but calculate--

the value of my stock portfolio

ten years from today.

Now, OK, good.

Sometimes I give this talk to rooms full of executives and

nobody laughs.

I get really worried about my stock portfolio.

OK.

At any rate, they're physicists and chemists, and

they don't know much about computer security, so they get

a lot of viruses.

So what am I going to do?

So let me run a spreadsheet for you.

We call this spreadsheet Killer.xls because it's the

killer app, right?

This is the ultimate killer app.

So the first thing I want to do is I want to open it the

way I would open it-- the way you would open it on your

machine today.

The first thing you get, of course, is a

stupid dialog box.

Dialog boxes need some translation.

This one is saying, do you want to get your work done and

risk your whole machine or not?

And really that's all that it's saying.

We say, of course, I want to risk my whole machine.

Really, what we're saying is we want to get our work done.

$100 invested in HP ten years from today

will be worth $133,000.

Hey, I like that.

We don't know the rate of inflation.

It might not buy a double tall latte.

By then, Google will be charging for

lattes, don't worry.

How about $100 invested in Microsoft?

And I have just been bitten by a virus.

And this virus is in the process of

eating up my desktop.

Watch back here.

And because it's eating my desktop it could eat up my

whole machine.

This is very bad.

Kablooey.

This is not a good situation.

Fortunately, the virus is either very stupid or not

really malicious because it didn't destroy my stuff, it

just hid it.

So what I'm going to do is I'm going to put it back,

rearrange things here, and now what I'm going to do is I'm going

to launch it under Polaris.

I'm just going to double-click.

This is the act of designation.

We start Excel with the authority to edit no

spreadsheets and add the authority to edit the one I

just designated.

We have an error in our virus, and I'm going to have to

relaunch this.

Isn't that interesting?

The virus has a bug.

We'll see how we fix-- how we deal with those later.

It's a timing error that we haven't found since the last

update of Office. So let me try one more time.

I'm going to pause, give it time to think,

settle down, virus.

One more.

OK.

If it doesn't work on this one next time I'm

going to give up.

What will happen is it will just run-- the virus will show

me that it has no files it can edit.

The virus has chosen not to run.

Let me just-- and it's dead.

What happens is that window comes up, says I've been

bitten by a virus, and shows me no files.

The virus didn't need the authority to edit any file, so

it didn't get any authority to edit any files.

That's the problem.

Yeah?

AUDIENCE: [INAUDIBLE]

ALAN KARP: I'm sorry, what?

AUDIENCE: You mean Excel didn't get the authority.

ALAN KARP: I'm sorry, Excel didn't get the authority.

So, normally when you start Excel you start it running as

you and you give it a string representing the name of the

file you want it to edit.

Excel has to be prepared to edit any file you might name,

so it needs all your authority.

We start it with the authority to edit none and add the one

you just designated.
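
That launch rule can be modeled in a few lines of Python. This is a conceptual sketch, not Polaris code; the class and method names are mine.

```python
class Pet:
    # Toy model of a Polaris pet: an application instance that starts
    # with only its installation endowment and gains exactly the files
    # the user designates (by double-click or the File Open dialog).

    def __init__(self, endowment):
        self.authority = set(endowment)   # fonts, config, executable, ...

    def designate(self, path):
        # The user's act of designation *is* the grant of authority.
        self.authority.add(path)

    def can_edit(self, path):
        return path in self.authority

excel = Pet(endowment={"C:/Excel/fonts", "C:/Excel/excel.exe"})
excel.designate("C:/docs/Killer.xls")   # the file I double-clicked
# Excel can now edit Killer.xls, but a virus running inside the pet
# was never granted any other file, so it can't touch anything else.
```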

Sorry about that.

I ran the demo just before the session, but the

virus decided to die.

AUDIENCE: [INAUDIBLE]

ALAN KARP: Right, so that's a good question.

What about the config files and everything else?

Obviously, Excel needs its own executable.

It needs its fonts.

It needs all kinds of stuff.

But those are things it needs every time it runs, so we call

that the installation endowment.

You configure that once when you configure the application

to run under Polaris, and then it just gets those authorities

every time.

We have a little tool that configures the applications.

For most applications it's very straightforward.

We haven't run into any that it's more than one or two

exceptions.

AUDIENCE: So is it possible to [INAUDIBLE]

ALAN KARP: I'll show you that.

Since you asked, we'll give it one more chance.

No, it's not going to do it.

OK, but if I want to open another file-- now watch

closely what happens when I do this.

I'm going to click the File Open dialog box.

You see that flash?

That was the file dialog box that was running with the

permissions of Excel.

We watch for that File dialog box, hide it, and present one

that's running with my permissions.

Now I can select any other file and give this running

instance of Excel authority to that file.

And the virus doesn't have access to that window, that

full-power dialog box.

Only the user does.

So what we're finding are acts of designation that are available

to the person but not the virus.

And as soon as we have that, we can implement the function.

So we have a little service running in the background

that's watching for that dialog box.

AUDIENCE: So, well, in that particular

case it didn't matter.

The File dialog [INAUDIBLE]

ALAN KARP: Yes, it was.

It was very much--

You notice how this one is decorated with the angle

brackets Excel, so I know that that's running in what we call

a pet, because I've given it a pet name.

You see, this says Open for Excel.

It's not just Open, which is better than the standard File

dialog box.

It actually tells me which application, which running

pet, is asking for this file.

AUDIENCE: Are you going to--

Are you doing anything more than that?

Because the average user is not going to--

ALAN KARP: The average user probably won't get in trouble,

but usability is more important than small

increments in security.

AUDIENCE: [INAUDIBLE]

Let's say that the application opens the [? Trojan ?] dialog

box [INAUDIBLE]

The result is that the application has failed to

acquire the authority that the user would like to give it.

[INAUDIBLE]

ALAN KARP: So let me repeat this for the

people who are remote.

If the virus opened the file dialog box, it only has access

to the one that's running in the confined space.

The user might not. If, you know, the one we just showed

was hidden or didn't show up, the user wouldn't be able to

grant the authority. But that fails safe.

But that's not your question.

Yeah.

Right.

AUDIENCE: [INAUDIBLE]

ALAN KARP: Well, in this case, we do this

for file dialog boxes.

And we do label it.

You do the best you can.

I don't know what we can do.

Maybe we could have a sledgehammer that

says, no, no, no.

But how do we know if it was--

viruses can inject keystrokes.

How do we know if it was the user who clicked to the virus?

Only the user knows.

If a file dialog box pops up surprisingly, maybe you

shouldn't open a file.

But you probably don't know which file you want to open

anyway, so maybe you won't.

That's the best we can do.

AUDIENCE: [INAUDIBLE]

What about access to non-file things [INAUDIBLE]?

ALAN KARP: What about access to non-file things?

So one of the things we would like to do is prevent certain

applications from getting access to the internet.

Our original plan was to just use the firewall to do that.

Unfortunately, every firewall we've looked at has rules for

preventing network access by application,

but not by user account.

That's too bad, right?

Because we'd like the main user account to have full

access to everything the user wants and

limit the pets as needed.

We haven't figured out how to do that yet.

So we have several proposals, including our own SOCKS proxy

running on the machine and blah blah blah.

We haven't done that to date.

Let me go on and show you.

I've got a couple more short demos.

One of the main sources of spyware is the browser.

Here is a website that will make my disk shine like new.

I'd like to have a shiny new disk, so I'm going to go to

this website.

Of course, you get a crazy dialog box.

This one is saying, do you want a shiny new disk or not?

Because that's what you asked for, and of

course you say yes.

And now I have a shiny new disk.

Of course, I've got spyware all over my machine.

In this case it's just on my desktop.

That's because the browser is running

with all of my authority.

Now Windows Vista is going to provide some protection from

that, but not enough.

How about if I go to that same website in the browser running

under Polaris?

First thing is, I don't care about the dialog box, I can

just let it go.

So I don't have to be bothered by that.

And of course, there's no trash on my desktop.

This is actually the virus on the previous run failing.

The virus on this run failing.

That's good.

That's good that the virus failed.

Some of them fail quietly.

A third source of trouble is email attachments.

You know, right now, most corporations are filtering

most kinds of email attachments.

And yet, here I have this very important email.

It's the crucial Windows virus update.

It's very important that my viruses be up to date.

Wouldn't want an out-of-date--

well, we've seen the problems with

out-of-date viruses, right?

They don't run.

And you're not allowed to send an executable through the

firewall, so what do you do?

Well, you put it in a Zip file, right?

In fact, the instructions on the HP web site

tell you to do that.

Talk about shooting yourself in the head.

So I go to launch the virus.

I get a stupid dialog box.

I go launch the virus.

I get another stupid dialog box.

You know, these things are like teaching a pig to sing.

It's going to waste your time and it's just going to

irritate the pig.

At the end of the day, the virus is run and all my

friends and neighbors have been infected.

Now, we couldn't get our mitts in to grab the double-click

launch here, so what we did is we added a button.

We teach our users to launch attachments with this button.

This is going to run them under Polaris, so no stupid

dialog box.

When the virus runs, it doesn't have the authority to

do anything bad, and we've set up that account to have one

entry in the contacts list, which is the local sysadmin,

so your whole environment becomes a honeypot.

So those are the Polaris demos.

Sorry the spreadsheet didn't work.

AUDIENCE: [INAUDIBLE]

ALAN KARP: Right.

Right, so if I were going to attack the system, I would

write a virus that made the application look like it was

running strangely to induce the person to open it outside

of Polaris.

The only thing we can do is make sure that nothing will

happen strangely while you're inside Polaris.

When people get confidence in it, what we tell them to do

is, if it's acting strangely, is open up some other file

that does something similar outside Polaris and see if it

still acts strangely.

Then you know it's something in the file and maybe you

should be suspicious.

That's the best we can do.

But yes, that's exactly the attack I would mount, and that

would succeed some of the time.

But there are some people who answer Nigerian letters, and

there's nothing we can do about that.

So again, we do the best we can.

AUDIENCE: [INAUDIBLE]

ALAN KARP: So what's the implementation?

What layer are we doing?

We are using a variant of the Windows Run As system.

For each application we set up a separate account running as a

restricted user.

When you double-click-- we changed the file associations

so that when I double-clicked on that .xls, instead of it

running Excel directly, we did a Run As Excel in that

restricted user account.

That's it.

After that, it's just running and we're out of the picture.

At least, up until the File dialog box appears.

We didn't touch the application.

We don't change-- we're not running at any layer.

We're not intercepting system calls.

We're just using the normal Windows API and the Run As

facility that they already provide.
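
Conceptually, the changed file association does something like the following. This is a hypothetical Python sketch: the command shapes, the `icacls` grant step, and the account name are illustrative, and the real tool uses the Windows API rather than shell commands.

```python
def polaris_launch(app_exe, doc, pet_account):
    # Sketch of the two steps the launcher performs on double-click:
    # 1. grant the restricted pet account access to the designated
    #    file (illustrative icacls-style command),
    # 2. start the application in that account via RunAs.
    grant = ["icacls", doc, "/grant", f"{pet_account}:(M)"]
    launch = ["runas", f"/user:{pet_account}", f'"{app_exe}" "{doc}"']
    return grant, launch

grant, launch = polaris_launch("excel.exe", "Killer.xls", "polaris-excel")
```

After these two steps the application is just an ordinary process in a restricted account; nothing intercepts its system calls.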

So let me just wrap up now.

What you just saw was more functional because you could

use all these features.

It was easier to use because you didn't have those stupid

dialog boxes.

And it was more secure at the same time.

Most people would say you'd have to pick two out of three.

You just saw all three at the same time.

It actually blocks today's viruses.

I mean, macro viruses, all kinds of things, right?

It just blocks them.

And it's easy to use.

This presentation is being done under Polaris.

Some people have said, use a Mac because it's safer.

But the same kinds of viruses will run on Mac.

And when enough people use it, there'll be viruses for the

Mac, because the Mac uses identity-based

access control.

But with Polaris you get to use your existing tools.

You notice it's Windows, so I didn't say that

you know and love.

I just said that you know.

But you get to use them and you're safe from this whole

class of attacks.

Somebody would have to come up with a new class of attacks.

Now we don't protect against attacks against the kernel.

For example, when Blaster came out, that

was an attack against the kernel, so we would provide no

protection against that.

Microsoft's response is, here's the patch.

When Love Letter came out, there's nothing to patch.

The system was operating the way it

was designed to operate.

So what was Microsoft's response to Love Letter?

Buy a smarter user.

There's just nothing they could do about it.

This is the one that Polaris addresses, so it's

complementary, and it forces Microsoft to be responsible

for bugs in its system.

Virus scanners are fighting the last war.

They can only stop a virus that they've seen before.

Now the attack time is dropping.

Firewalls are a great idea.

I mean, I recommend them.

They're fine, except the first step in a perimeter defense is

actually having a perimeter.

Microsoft's approach, you know, the Trustworthy

Computing initiative with Quality

Assurance and patching.

That's a good thing.

But if that's all you do, you only win if you achieve

perfection, because as long as there's an exploitable flaw,

it will be exploited.

What we're doing is saying, how can we reduce the

exploitability of flaws that will already be there to limit

the damage that can be done?

That's a good complementary question to the question, how

can we prevent an attack from succeeding, which is the

conventional question asked by security experts.

Finally, here's a quote from a smart person who

happens to be very lazy.

That's me.

"The best way to solve a problem is to avoid it." If we

give processes only the rights they need to do the job the

user wants done, we can limit the damage a virus can do,

maybe to the point where we've made virus writing

uninteresting.

OK, now I got two more slides.

First of all, what's this bear?

No guesses.

Of course, it's the POLA bear.

It's the POLA bear.

Why is it Polaris?

Well, that's the Principle of Least Authority for real

internet security.

I have a Ph.D. in astronomy, and every astronomer knows

that the Big Bear points to the North Star.

So that's why we call it Polaris.

We have a tech report, which is available now.

We've just found out that we've been scheduled for the

September issue of CACM.

So if you want to read more about it, you can read the

tech report or wait for your CACM to come out.

That's the end of the first of our four presentations.

[APPLAUSE]

ALAN KARP: Any more questions?

OK.

Sounds like we covered--

AUDIENCE: [INAUDIBLE]

ALAN KARP: I'm sorry, what?

AUDIENCE: Can you download Polaris?

ALAN KARP: Can you download Polaris?

We have a beta test program.

You have to sign our beta license agreement.

You notice we didn't sign your non-disclosure agreement,

maybe you can't sign our beta license agreement.

We're actually--

two weeks ago tomorrow we will have distributed our beta to

our original test set.

We're trying to get the bugs out of that.

Within the month, we'll be looking to

expand the beta program.

Send me email if you're interested.

I'll send you a copy of the license agreement for your

lawyers to look at.

That's the big barrier, it seems.

AUDIENCE: When you're--

sometimes these macros do useful things that are going

to get prevented.

I wonder how is the user supposed to allow--

ALAN KARP: Sometimes these macros are going to be useful

things that will be prevented.

Actually this won't prevent them, because we're going to

let you run the macros in Polaris. If it fails, it

probably should have, because it's trying to use authorities

that the user didn't grant it.

AUDIENCE: How do you grant it?

ALAN KARP: How do you grant it?

Well, mostly it's through the file system.

So if the macro needs to use a file, it's going to have to

ask the user what file.

And if it asks the user what file, that'll be

a File dialog box.

Then the user can grant the authority and the virus can't.

That's the primary step.

Sometimes there will be other dialog boxes that are

non-standard, and we have a workaround for that, but it's

just a workaround.

That's the advantage of doing the beta and not the product.

AUDIENCE: Can you summarize very quickly when you said

about mapping the [INAUDIBLE].

What is the unsafe subset [INAUDIBLE]?

ALAN KARP: Primarily, it's avoid using

mutable static state.

So if there's any global state, it must be immutable.

Now, when I was learning object oriented programming, I

got myself into a lot of trouble.

I identified the source of the trouble, just for me,

personally, for my own use.

The main thing I was getting in trouble was, I was using

mutable static state.

And I realized that that was violating the object model.

Just good object oriented design seems to

be 90% of the way.
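
The distinction can be illustrated in Python (illustrative code, not Joe-E itself, which operates on a subset of Java):

```python
# Mutable static state: any code anywhere can reach and change this
# without being handed a reference -- it's ambient authority, and it's
# the pattern Joe-E's verifier rules out.
_registry = {}

def ambient_store(key, value):
    _registry[key] = value        # reaches out to global mutable state

# Object-capability style: the state is reachable only through
# references that were explicitly passed in.
def make_registry():
    registry = {}
    def store(key, value):
        registry[key] = value
    def lookup(key):
        return registry.get(key)
    return store, lookup

store, lookup = make_registry()
store("answer", 42)
```

Code that never receives `store` or `lookup` simply cannot touch that registry, which is the point.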

OK?

Thank you.

I think our time's up anyway.
