
The following content is provided under a Creative

Commons license.

Your support will help MIT OpenCourseWare continue to

offer high quality educational resources for free.

To make a donation or view additional materials from

hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: Modeling decision under uncertainty turns out to

be a critical part of what we do in economics.

And I'll spend today's lecture talking

about this set of issues.

And, let me just say, the uncertainty you face now is

nothing compared to the uncertainty that you'll face

later in life.

So you have uncertainty now about whether you should study

for the final, or carry an umbrella, or go on a date with

this person.

I've got uncertainty about whether I should refinance my

mortgage, or which college to send my kid to, or how much

life insurance I should buy.

Uncertainty only gets more and more important as

you move on in life.

This is an important issue.

Now, how do we think about uncertainty?

Well, the tool that we use to think about uncertainty is,

once again, to make simplifying assumptions which

allow us to write down sensible models, but which

capture the key elements of what we're thinking about.

And the simplifying assumption here is we move to the tools

of what we call expected utility theory.

And so, basically, the way we think about expected utility

theory is the following.

Imagine that I offered you guys in this class a choice.

And I'm just going to say right now, there's no right

answer to this.

But I do want you guys to answer me.

There's no right answer.

Here's the question.

I'm going to give you a choice.

I'm going to flip a coin.

I have a coin in my pocket, and I'm going to flip it.

And I'm going to offer you guys the

ability to make a bet.

If it comes up heads, you win $125.

If it comes up tails, you lose $100.

Heads, you win $125.

Tails, you lose $100.

There's no right answer.

How many of you would take that bet?

How many people would not take that bet?

Very good.

That's the typical set of responses I get to this.

Now, what's interesting is to think about the

parameters of that bet.

And to think about it, let's take a step back to something

we've discussed already this semester, the concept of

expected value.

What's the expected value of that gamble?

The expected value, if you remember, is the probability

of each outcome times the value of that outcome.

That is, as you remember, expected value, which we defined

before, is the probability that you lose times the value

if you lose plus the probability that you win times

the value if you win.

That's the expected value of a gamble.

So, in this context, the expected value is there's a

50% probability that you lose, so 0.5.

And if you lose, you lose $100, plus a 50% probability

that you win.

It's flipping a coin after all.

And if you win, you won $125.

So the expected value of this gamble is $12.50.

On average, if I did this enough times, you would win

$12.50 per time.

Statistically, if I did this enough times, you'd

win $12.50 per time.

So, in other words, we say that this is

more than a fair bet.

A fair bet is one with an expected value of 0.

A fair bet has an expected value of 0.

So a fair bet would be tails you lose $100,

heads you win $100.

This is a more than fair bet.

There's more than 0 expected value.
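
A quick Python sketch of that arithmetic (the variable names are illustrative, not from the lecture), using the probabilities and payoffs just described:

    # Expected value of the coin-flip bet: lose $100 on tails, win $125 on heads.
    p_win, p_lose = 0.5, 0.5
    expected_value = p_lose * (-100) + p_win * 125
    print(expected_value)  # 12.5 -- a more than fair bet, since a fair bet has expected value 0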

Yet, the majority of you would not be willing

to take this bet.

In fact, the majority of people would

not take this bet.

Why is that?

Why is it that I've dictated a bet which has a positive

expected value, and yet people won't take it?

Yeah.

AUDIENCE: But wouldn't that also depend on how much money

you have?

PROFESSOR: It will absolutely depend on how much money you

have.

AUDIENCE: Right.

So if I were a richer person, then losing $100 isn't as

important to me as the chance of getting $125.

PROFESSOR: OK.

So flesh that out.

Why is that?

Why is it that basically it would matter how much wealth

you have? Because no matter how much wealth you have, this

math is unimpeachable.

It's always a good bet.

So why is it that, in your state as college students without

much wealth, you don't want to take this bet

that's more than fair?

AUDIENCE: So, basically, for me, the risk of losing or the

state I will be in after I lose is much greater, well,

for me, a lot more than what I would be in if I win.

PROFESSOR: Exactly.

And there's two possible reasons for that.

One we're going to push off to the very end of the lecture.

The main reason we're going to focus on is because

individuals do not consider expected value, they consider

expected utility, and individuals are risk averse.

Expected utility is going to differ from expected value

when individuals are risk averse.

Expected utility is not going to be the probability times

the value if you lose.

Expected utility is going to be the probability that you

lose times the utility if you lose plus the probability that

you win times the utility if you win.

And utility is not the same as value, importantly, because

utility functions exhibit diminishing marginal utility.

Utility functions are not linear.

Utility functions are nonlinear.

And, in particular, there's diminishing marginal utility.

And with diminishing marginal utility, you're going to not

want bets where there's the chance you lose is equal to or

even a bit smaller than the value that you win.

And the basic point is that the joy of winning is smaller

than the pain of losing with diminishing marginal utility.

Yeah.

AUDIENCE: Isn't there also a statistical side to this then?

Because we don't know how many times we're going to bet.

It might just be once.

We're a lot more comfortable if, let's say, use the law of

large numbers and say, OK, it's going to eventually even

out so we'll win $12.50 a game.

But for the first, let's say 10 or so games, we might get

really unlucky and flip eight tails and two heads.

PROFESSOR: But, once again, if you weren't risk averse, you

wouldn't care about that.

Hold that thought.

I'm going to explain why that isn't true.

So just hold that thought.

So now let's imagine that your utility functions are the

typical form we've worked with before, the typical

diminishing marginal utility form we've worked with before

where utility is the square root of consumption.

You're casting your mind back to consumer theory here.

You're going to have to start integrating the course now,

both consumer and producer theory.

So remember we said the typical diminishing marginal

utility function we worked with was u equals the

square root of c.

Now, let's say you start with consumption of $100.

Imagine you consume your income.

Let's say you have consumption of $100.

Well, then utility is 10.

If you start with a consumption of $100, your

utility is 10.

Now let's calculate the expected

utility of this gamble.

The expected utility of this gamble is that there's a 50%

chance that you lose.

And, if you lose, what is your utility?

Well you lose $100.

So consumption goes to 0.

So utility is 0 plus a 50% chance that you win.

Well, what do you get if you win.

Well, if you win, you go from $100 to $225.

So your utility is the square root of $225, or 15.

So it's a half chance of ending up with utility 0 and a half

chance of ending up with utility 15.

So expected utility, if you take this gamble, is that you end up

with a utility of 7.5.

So utility falls.

You move from a utility of 10 without the gamble to a

utility of 7.5 with the gamble.

Utility is lower with the gamble, which is why people

decided they didn't want to take that gamble.

Utility is lower.

And the reason is because given a utility function of

this form, you are sadder about losing than happier

about winning.
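
Here is a small Python check of that calculation (a sketch, not part of the lecture), using the square-root utility function and the $100 starting consumption given above:

    import math

    # u(c) = sqrt(c); start with consumption of $100, so utility is 10.
    def u(c):
        return math.sqrt(c)

    utility_no_gamble = u(100)                            # 10.0
    expected_utility_gamble = 0.5 * u(0) + 0.5 * u(225)   # 0.5*0 + 0.5*15 = 7.5
    print(utility_no_gamble, expected_utility_gamble)     # the gamble lowers expected utility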

To see that, we can see that graphically in Figure 20-1.

This graphs utility against wealth-- we don't usually

graph utility, because it's not cardinal.

Remember, it's just ordinal.

But this sort of gives you a sense of the intuition.

This is a graph of utility against wealth levels.

So you start at point A. You start with $100 in wealth,

which is your consumption, and a utility of 10.

Now, I give you a choice of a gamble.

That gamble has a 50% chance of leaving you at 0 and a 50%

chance of leaving you at point B. So your expected

utility is the midpoint of that chord that runs from 0

to B, or point C. Your expected utility is lower

than your initial utility.

Why?

Because utility is concave. You are made so sad by getting

to 0 that it vastly outweighs the happiness

you feel moving to $225, because of the diminishing

marginal utility.

Because, basically, think of it this way.

Imagine it's your actual income.

Let's take the point about the size of the gamble relative to

income seriously.

Imagine, literally, I was asking you to gamble your

entire income for the year.

And if you lose, you starve to death.

And if you win, you get to eat extra nice.

Well, clearly, the disutility of starving to death vastly

outweighs the extra utility of eating well.

So, in that extreme example, if this was your entire

wealth, you can see why you would have a situation where

you wouldn't want to take that gamble.

Because if you lost, you'd die.

And, basically, risk aversion arises because, basically,

with diminishing marginal utility you're

made so much sadder.

That steepness at the bottom, you get so much sadder as you

get towards 0 that it vastly outweighs the flatter

part as you move above your initial point.

So, as you can see, you are going to end up not wanting

gambles even if they're fair.

Gambles that are more than fair, that is, with positive expected value, might

still lead to a reduction in your expected utility.

Indeed, let me go further.

You dislike this gamble so much that if I said the

following, I as your teacher am going to force you to take

this gamble-- imagine it's like 100 years ago where

teachers could beat students and stuff--

I'm going to force you to take this gamble unless you pay me,

you would actually be willing to pay me to

avoid taking this gamble.

How much would you pay me?

Imagine utilities in dollar terms. Imagine we're actually

measuring utility in dollar terms. How much would you pay

me to avoid taking this gamble?

If I said you either take the gamble, or you pay me.

You're starting with a utility of 100.

Yeah?

AUDIENCE: The difference between the two utilities.

PROFESSOR: Well, the difference

between the two utilities.

So utility is 100 here.

Here utility is 7.5 squared, so 56.25.

So you would actually pay me $43.75 to

avoid taking this gamble.

Think about that.

I've offered you a more than fair bet, a very good bet,

which, on average, will yield you a positive $12.50.

Yet you will pay me $43.75.

You will pay almost half of your entire wealth to avoid

taking that gamble.

That's pretty incredible if you think about it.

I've offered you a more than fair bet, and yet you will pay

me almost half your wealth to

avoid taking that bet.
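
As a check of that $43.75 (a sketch, not part of the lecture), it is the starting wealth minus the certainty equivalent of the gamble, the sure amount of consumption that delivers the same expected utility of 7.5:

    import math

    # Certainty equivalent of the gamble under u = sqrt(c):
    expected_utility = 0.5 * math.sqrt(0) + 0.5 * math.sqrt(225)  # 7.5
    certainty_equivalent = expected_utility ** 2                  # 56.25
    willingness_to_pay = 100 - certainty_equivalent               # 43.75
    print(willingness_to_pay)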

So another way to see this, let's look at

this another way.

How large would I have to make the positive payoff for you to

take the bet?

Let's look at it that way.

Right now I said you win $125 with heads.

How much would you have to win with heads if you were going

to take that bet?

Yeah.

And tell us how you figured that out.

AUDIENCE: Because you need to have at least the same utility

as you had before from the expected utility.

So half of the utility if he wins needs to be 10, so the utility

would be 20 if he wins.

20 squared is 400.

[INAUDIBLE PHRASE].

PROFESSOR: Right.

You'd need to win 300.

Because I'd need to take your utility to 20 if you win.

Only then would you be willing to take this gamble.

So another way to say it is that's how fair a gamble would

need to be, how more than fair it would need to be

before you take it.

You'd need me to pay off 3:1 on a 50% chance before you'd

take the bet.
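
A sketch of that calculation (not from the lecture): for the square-root utility person to take the bet, winning has to push utility up to 20, which pins down the required payoff:

    import math

    # Need 0.5 * sqrt(0) + 0.5 * sqrt(100 + W) >= sqrt(100) = 10,
    # so sqrt(100 + W) = 20, so 100 + W = 400, so W = 300.
    W = 20 ** 2 - 100
    print(W, 0.5 * math.sqrt(100 + W))   # 300, 10.0 -- roughly 3:1 odds on the coin flip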

And this is just with a typical looking utility

function of the kind we worked with earlier in the semester.

You didn't look at this earlier in the semester and

say, wow, that's a bizarre utility function.

We got sensible answers on our problems, and problem sets,

and tests, and things, examples from

square root of c.

That seemed like a sensible function.

And yet it yields these incredibly wild predictions

that you would pay people almost half of your wealth to

avoid engaging in a more than fair bet.

And that you would need the odds to be like 3:1 before you

even consider taking a bet.

That's the power of uncertainty and the power of

risk aversion.

Really, risk aversion, it's just the power of diminishing

marginal utility.

The power of diminishing marginal utility is so key to

driving our decisions.

It's the fact that that first pizza means so much more to

you than the fifth pizza, that you really hate outcomes that

don't let you get the first pizza.

And, as a result, you will pay a lot to avoid being forced into a

situation where you don't get any pizzas.

You'll need to be paid a lot in the state where you do win

to deal with the state where you don't.

Questions about that?

Now, we can change the example in some interesting ways to

understand it.

So, instead, let's talk about

some alternatives to this example and how they affect

our intuition.

First alternative, imagine your utility function instead

of being square root of c, your utility function was 0.1

times c, a linear utility function, not a non-linear

utility function.

We can now say that, in that case, you actually would take

the gamble.

There's a 50% chance of 0.

And I chose 0.1 times c, because your initial utility

is still 10 then.

I normalized this.

So starting with your bundle of 100 you still start at 10.

It gives the same starting point as

the square root function.

But now your expected utility from this gamble is 0.5 times 0

plus 0.5 times your utility if you win. If you win $125, you

have $225, so your utility is 22.5.

So your expected utility is 11.25, which is higher than

your starting utility.

So you would take this gamble.

What's changed?

AUDIENCE: No diminishing marginal utility.

PROFESSOR: No diminishing marginal utility because now

we are no longer risk averse.

We are what we call risk neutral.

A linear utility function yields risk neutrality.

And once you're risk neutral, you only care

about expected value.

Risk neutral consumers would only care

about expected value.

And so a linear utility function will lead to risk

neutrality since you don't have

diminishing marginal utility.

Then you take any bet that's fair.

You don't care.

You're indifferent between winning a dollar and losing a

dollar with this utility function.

It doesn't matter if you go up or down.

The joy you get from winning is the same as the pain you

get from losing.

Whereas with this utility function, the pain you get

from losing exceeds the joy from winning.

We can see that graphically in the next figure, Figure 20-2,

the case of risk neutrality.

Here, you start at point A. You have 100, and

your utility is 10.

Now, I've offered you a gamble where there's a 50% chance of

getting 0 and a 50% chance of getting B. Well, that yields

an outcome of point C, which is a higher utility.

So since your utility is linear, you're risk neutral,

and you'll take any fair bet.
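
The same check for the risk-neutral case (a sketch, not from the lecture), using the linear utility function u = 0.1c, normalized so that the starting utility is still 10:

    # Linear (risk-neutral) utility: u(c) = 0.1 * c, so u(100) = 10 as before.
    def u_linear(c):
        return 0.1 * c

    expected_utility = 0.5 * u_linear(0) + 0.5 * u_linear(225)   # 11.25
    print(expected_utility > u_linear(100))                      # True: take the gamble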

We can go further.

What if utility, instead, was of the form u equals c squared

over 1,000?

What if this was your utility function?

Once again, your initial utility u of 100 is 10.

It's the same starting point.

But this is a utility function which now if you do this

gamble, your expected utility is 50% times 0 plus 50% times

225 squared over 1,000, which is 25.3.

That's a huge increase in utility from this gamble.

So your expected utility with the gamble is 25.3.

It's a huge increase in utility.

And that's because this is an individual where the shape of

the utility function has changed such that they don't have

diminishing marginal utility, they have

increasing marginal utility.

We've never worked with utility

functions like this before.

These are individuals we call risk-loving.

That is, they are made happier by winning $1 than they are

made sadder by losing $1.

It's the opposite of all the intuition we developed earlier

in this course.

It's a crazy utility function.

But the notion of a risk-loving utility function

is one where literally $1 that moves you up makes you happier

than $1 that moves you down makes you sadder.

You can see that in Figure 20-3.

Here's a risk-loving utility function.

The individual starts at point A. They have a choice of a

gamble where they can have a 50% chance of landing at 0 and

a 50% chance--

Jessica, that B should be down at the

intersection of dashed lines--

a 50% chance of landing at B at the intersection of the

dashed lines.

You take the average of those two, and it's c.

Their utility is way higher with the gamble than it was

without the gamble.

In fact, we can go further.

With a risk-loving person, they would actually take an

unfair bet.

Consider the following bet.

Tails you lose $100, heads you win $75.

That's a bet with a negative expected value.

Neither the risk averse nor the risk neutral person would

take that bet.

But a risk-loving person would.

If you work out the math, that bet gives them a gain in

expected utility.

That is a bet with a negative expected value that gives them

a gain in expected utility.

Why is that?

Because it's the opposite of diminishing

marginal utility intuition.

They're made so much happier by winning that they're

willing to take a bet even if it's a

negative expected value.

Just like the risk averse person is made so much sadder

by losing that they won't take a bet even if

it's more than fair.

So you can actually develop all the opposite predictions

from a risk-loving person.

They'll even take unfair gambles.
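
A sketch of that math (not from the lecture), using the risk-loving utility function u = c squared over 1,000: both the original more-than-fair bet and the unfair lose-$100/win-$75 bet beat the starting utility of 10:

    # Risk-loving utility: u(c) = c**2 / 1000, so u(100) = 10 as before.
    def u_loving(c):
        return c ** 2 / 1000

    eu_original_bet = 0.5 * u_loving(0) + 0.5 * u_loving(225)   # about 25.3
    eu_unfair_bet = 0.5 * u_loving(0) + 0.5 * u_loving(175)     # about 15.3
    print(eu_original_bet, eu_unfair_bet)   # both exceed 10, so the risk-lover takes both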

Now, by the way, I skipped over your earlier question

about risk neutrality.

With risk neutrality, you see it doesn't matter if you do it

100 times or one time.

If you're risk neutral, you should take the bet anytime,

because the expected value is still positive.

Now, you're thinking about risk aversion, where, in

essence, you get more comfortable as

the number of plays goes up.

But if you're risk neutral, you'll take it no matter how

many times I offer you that bet.

So to extend this further, let's go to a third extension

which will develop this intuition further.

Now imagine that I offer you guys a different gamble.

And, once again, I really want you to answer honestly.

Don't try to game me.

Answer honestly.

Now the gamble is if I flip a coin, tails you lose $1, heads

you win $1.25.

Now how many of you would take that gamble?

How many would not take that gamble?

OK.

I hope you're answering honestly.

But maybe you're just thinking ahead and realizing that that

gamble is very different.

And why are people more willing to take that gamble

than they were willing to take the previous gamble, the same

risk averse people?

Yeah.

AUDIENCE: The difference in [INAUDIBLE PHRASE].

PROFESSOR: Exactly.

In particular, the utility function is locally linear.

Let's go back to Figure 20-1.

As you get closer and closer to A, you could draw,

essentially, a linear segment.

So for an infinitesimal bet, utility is linear.

So it's linear at point A.

So for small bets, you become risk neutral.

Even a risk averse person moves towards risk neutrality

as the bet is small relative to their resources.

This was the point that you were making.

Basically if you're a rich person, you'd probably be

happy to take the $100 and $125 thing.

I'd be happy to do that.

I'm a rich guy.

I'd be happy to do that.

So, basically, what determines your willingness to take a bet

is going to be about what's at stake

relative to your resources.

And what you can see is that if you solve the math here,

basically expected utility, even with the square

root of c utility function, goes up with that smaller gamble.

Because as it gets smaller relative to the $100 you start

with, you become roughly risk neutral.

And then you'll go ahead and take the gamble.
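
A sketch of that math (not from the lecture): for the $1/$1.25 bet, the square-root utility function is close enough to linear around $100 that expected utility rises slightly:

    import math

    # Small gamble: lose $1 or win $1.25, starting from $100, u = sqrt(c).
    expected_utility = 0.5 * math.sqrt(100 - 1) + 0.5 * math.sqrt(100 + 1.25)
    print(expected_utility)   # about 10.006 > 10, so even a risk-averse person takes it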

So at the end of the day what's going to determine

whether you're going to take a gamble is going to be your

level of risk aversion and the size of the risk you're taking

relative to your resources.

The more risk averse you are, and the bigger the gamble, the

less likely you are to take it at a given level of fairness.

Questions about that?

All right.

So now that we all understand expected utility theory,

we're going to go on and talk about why this matters in

the real world and how we use it.

And I want to talk, in particular, about two

applications, insurance and the lottery.

Let's start by talking about insurance and

why people have insurance.

Because, in fact, given what we had learned before this lecture,

there would be no reason for insurance.

This lecture tells us why people have insurance.

Because there's diminishing marginal utility, and you're

made so much sadder with a negative outcome, you're

willing to pay to avoid it.

Remember we talked about that you would be willing to pay

almost $44 to avoid being forced to take that bet?

That's what insurance does.

Insurance allows you to avoid taking gambles.

That's what you can think of insurance as.

It's a way to avoid taking a gamble.

You're gambling you're going to get sick.

You're gambling your house is going to burn down.

These are gambles you face that are

forced on you by nature.

What insurance does is allow you to avoid

taking those gambles.

And just like you'd pay me to avoid the $100, $125 gamble,

you're paying Aetna to avoid gambling that you might have

to go to the hospital.

So let's say there's a 25-year-old who is deciding

whether to buy health insurance.

And let's say it's a 25-year-old

guy, totally healthy.

I say guy because there's no risk they're

going to have a kid.

So he's basically totally healthy, basically zero chance

they're going to use the doctor except if they

get hit by a car.

So imagine the situation is that you've got a 25-year-old

with an income of $40,000.

And let's say that there's a 1% chance that they'll

get hit by a car.

It is Cambridge after all.

So every time you cross the street, there's a 1% chance

you get hit by a car.

And if you get hit by a car, you're going to suffer $30,000

in hospital bills.

And let's say your utility function is square root of c.

So you're a risk averse guy.

So let's say that I then come to you and say, look, each

year there's an expected cost to you of getting

hit by car of $300.

How did I calculate that?

Well, every year there's a 1% chance you get hit.

They're independent draws, let's say.

If you get hit this year, it doesn't mean

suddenly you're safer.

It's random.

It's just crazy drivers.

So there's a 1% chance you're going to get hit every year.

And if you get hit, there's a $30,000 cost. So every year

there's an expected cost to you--

the opposite of expected value is expected cost--

of $300.

So let's say I offered to sell you insurance for $300.

I offered to sell you insurance in a way where, on

average, if you lived an infinite number of years, you

would pay out in premiums what you'd get in benefits.

If you paid $300 a year and lived forever or lived for

many, many years-- the law of large numbers enough years--

then basically you would pay out in premiums what you would

collect in benefits.

You'd get hit once every 100 years.

And every 100 years you would have paid $30,000 in premiums,

and you'd collect $30,000 in benefits.

So that's what we call actuarially fair insurance.

Actuarially fair insurance is insurance where the price of

the insurance equals the probability of the bad outcome

times the cost of the bad outcome.

That's actuarially fair insurance where the price you

pay is the probability of the bad outcome times the cost of

the bad outcome.

That's fair because, over a large enough population, the

premiums that get paid in will get paid out in the form of

claims.

Now, let's ask what your utility is if you do or do not

buy insurance.

So, first, you say, I'm a 25-year-old.

Screw it.

I'm never going to get hit by a car.

I'm not going to buy insurance.

What's your utility with no insurance?

Well, if you have no insurance, there's a 1%

chance, 0.01, that you'll lose $30,000.

You'll get hit by a car and lose $30,000.

Your income is $40,000.

So there's a 1% chance that you'll end up with a utility,

which is the square root of 10,000.

And there's a 99% chance you'll end up with a utility

that's the square root of 40,000.

You work this out, and the answer is you get 199.

Utility without insurance is 199 which is pretty close to

utility just if you weren't going to get hit by the car.

Because it's so rare that you get hit by the car.

So utility is 199 without insurance.
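
A quick check of that 199 (a sketch, not from the lecture), with the numbers just given:

    import math

    # No insurance: income $40,000, 1% chance of a $30,000 hospital bill.
    eu_no_insurance = 0.01 * math.sqrt(40000 - 30000) + 0.99 * math.sqrt(40000)
    print(eu_no_insurance)   # 0.01*100 + 0.99*200 = 199.0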

Now, let's ask the question, how much would you be willing

to pay to have insurance?

How do we figure that out?

$300 is the actuarially fair premium.

But now let's do a different question.

I'm an insurance company, and I want to make money.

I don't want to just charge the actuarially fair premium.

The insurance company makes no money with

this premium of $300.

So the insurance company wants to make money.

How would we figure out how much this 25-year-old would

be willing to pay to get insurance?

How do we figure that out?

Yeah.

AUDIENCE: Maybe you could keep the utility function constant.

PROFESSOR: Keep the utility value constant.

AUDIENCE: Value, yes.

PROFESSOR: Exactly.

You'd have to ask, well, how much would I be willing to pay to

have insurance which would protect me and leave me at the

same utility level.

Obviously it would have to be a little bit higher.

But let's just set it equal.

So, in other words, if I bought insurance, my utility

with insurance, there's a 1% chance that I will

get hit by the car.

In that case, what happens to me?

Well, if I get hit by the car, I get $10,000.

I make $40,000.

I lose $30,000.

Let me actually write it out.

If I get hit by the car, what happens to me?

Well, I make $40,000.

I always make $40,000 each year.

I lose $30,000, because I get hit by the car.

But then the insurance company pays me $30,000.

They pay off my debts.

So then I gain $30,000.

So these things cancel.

But I have to pay the insurance company premium.

So I have to pay some amount x.

If I don't get hit by the car, I get my $40,000 income, but I

still have to pay the insurance company premium.

I have to pay them whether I get hit or not.

It's insurance.

I pay them either way.

So that's my utility.

So my expected utility with insurance is the

sum of these two.

And I want to set that equal to 199.

I want to say what x am I willing to pay that would

leave me at the same utility as if I was uninsured as per

the answer here?

Well, it turns out that if you solve this,

you get that x equals 399.

That is you would pay $399 for insurance that has a

value of only $300.

You'd pay $399 for insurance even though the actuarially

fair price is $300.
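
A sketch of that solution (not from the lecture): with full insurance your consumption is 40,000 minus x whether or not you get hit, so the indifference condition pins down x directly:

    # With insurance, utility is sqrt(40000 - x) in both states.
    # Setting sqrt(40000 - x) = 199 (the uninsured expected utility):
    #   40000 - x = 199**2 = 39601, so x = 399.
    x = 40000 - 199 ** 2
    print(x)   # 399 -- $99 above the actuarially fair $300 premium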

You would pay your insurance company $99 more than they

expect to pay out to you.

Why?

Because you're risk averse.

Because you're made so much sadder by being left with

$10,000 than you are by having to pay $300.

If it doesn't work out, you pay $300.

Who cares?

That's tiny compared to your income.

But if it does work out, you're safe

from having to starve.

You pay $400, I'm sorry.

You pay $399.

You're like, look, I'll be bummed if I have to pay $400.

That's a percent of my income basically.

That would be a shame to pay a percent of my income for

something that doesn't happen.

But, boy, would I be happy in that 1 in 100 chance where I

get hit by a car when I'm not out $30,000.

So you will pay $399 for insurance

that's only worth $300.

That extra $99 we call a risk premium.

We call that a risk premium.

The extra $99, we call a risk premium.

That is the amount that you are willing to pay above and

beyond the fair price, because you're risk averse.

And what you should go home and show yourself using the

same kind of mathematics is that, for example, the risk

premium will rise the bigger the loss is.

Hopefully you can see the intuition on that.

The bigger the loss is for a given level of income, the

bigger the risk premium is.

Likewise, for a given loss, the risk

premium falls with income.

So the bigger the loss is relative to income, the more

risk premium you're willing to pay.

You should also, obviously, see that the more risk averse

you are, the bigger premium you're willing to pay.

A risk neutral person would not pay a risk premium.

Only a risk averse person will.

So the more risk averse you are, and the bigger the loss is

relative to your income, the bigger the risk premium

you'll pay.
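
Here is a small sketch you could use for that at-home check (not from the lecture; the function name is just for illustration). It computes the risk premium under u = sqrt(c) as the willingness to pay for full insurance minus the actuarially fair premium:

    import math

    def risk_premium(income, loss, p):
        # Expected utility without insurance, then the certainty-equivalent
        # willingness to pay for full coverage, minus the fair premium p * loss.
        eu_uninsured = p * math.sqrt(income - loss) + (1 - p) * math.sqrt(income)
        willingness_to_pay = income - eu_uninsured ** 2
        return willingness_to_pay - p * loss

    print(risk_premium(40000, 30000, 0.01))   # about 99
    print(risk_premium(40000, 39000, 0.01))   # bigger loss -> bigger risk premium
    print(risk_premium(100000, 30000, 0.01))  # higher income -> smaller risk premium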

These are the same principles we talked about before.

So the $43.75 we were willing to pay to avoid that gamble I

was going to force on you, that was the risk premium.

You were willing to pay $44 to avoid that gamble.

Here, you're willing to pay $99 to avoid the risk of

ending up in that bad state where you get hit by the car.

And that's why people buy insurance.

And that's why insurance companies make

ungodly amounts of money.

In the US we have a health insurance industry, for

example, that earns about $800 billion a year.

Why do they make all that money?

Because people are risk averse, and they're willing to

pay to have someone else bear the risk of

their injury or illness.

Any questions about that?

Now, I don't mean by that to say, insurance is a bad thing,

and we shouldn't do it.

Risk aversion is the nature of our utility functions.

We should be willing to pay a risk premium.

It's just that you need to understand why, in fact, it

makes sense to have insurance in that case.

The second application is the lottery.

The lottery is a total ripoff.

I hope you knew this already.

The expected value of a $1 lottery

ticket is roughly $0.50.

So for every $1 you spend in the lottery, in expectation,

you get about $0.50 back.

This is an incredibly bad bet, incredibly unfair, an

incredibly unfair bet.

On average, you lose $0.50 for every $1 you bet.

So, basically, despite that, lotteries are wildly popular.

They've become a huge source of revenue for state

governments.

A lot of the money that state governments now take in is

through state lotteries.

What accounts for the fact that lotteries are so popular?

Well, there's four different theories for why lotteries are

so popular.

The first is that people are risk-loving.

We have it all wrong.

Actually people like taking risks, and the

lottery feeds that.

This, of course, we can immediately rule out.

How?

How do we know this is wrong--

that the answer is that people play the lottery because

they're risk-loving?

How do we know people aren't risk-loving?

AUDIENCE: The same people don't take [UNINTELLIGIBLE]

PROFESSOR: And they spend $800 billion a

year on health insurance.

Basically, as a society, we spend, in total, about $1.5

trillion a year on insuring various risks that face us.

We're not risk-loving.

So that's clearly not the answer.

However, there's an alternative.

People could basically alternate between risk-loving

and risk-aversion.

This is a theory due to Milton Friedman, the famous economist

from Chicago and a co-author named Savage, the

Friedman-Savage preferences, where the notion is that

basically people are risk averse over small gambles but

risk-loving over large gambles.

So to see that, go to the last figure.

This is sort of a complicated case.

Basically, the notion is if you take someone, they have a

utility function which is initially risk averse and then

becomes risk-loving.

That is in the segment between W1 and W3, that looks like a

risk averse utility function.

But once you get above W3, it looks like a risk-loving

utility function.

So the notion is that for things which can make me very

poor, I'm risk averse.

I want to insure against events which will leave me in

that bottom segment.

But once I'm going to be above W3, then great.

I'm happy to take risks.

Then I become risk-loving.

Now, this is a not crazy idea.

Graphically, what I'm showing you here, is that b* is

utility without the gamble and b is with.

So you see you're happier without the gamble when your

income is low.

Once your income is a lot higher, you're happier with

the gamble at d than you are without the gamble at d*.

That's not a crazy theory.

The notion is that once I'm rich enough, I become

risk-loving.

But when I'm poor, I don't want to take the risks.

The problem is that this is inconsistent with lottery

behavior in the following sense.

Most people who play the lottery don't

play the Mega Millions.

They play tiny scratch lotteries where you

bet $1 to win $10.

And people spend huge amounts of money on lotteries with

very, very low payoffs.

That is inconsistent with this.

Because this would say that you'd only play lotteries that

have big payoffs.

Lotteries that have small payoffs, once again, there's

no reason to play that and still buy insurance.

So if you're buying insurance against being low income, why

are you playing these small lotteries that are a ripoff?

Because those small ones are a ripoff too.

So the fact that the most popular

lotteries are actually the small lotteries is

inconsistent with this explanation.

Yeah.

AUDIENCE: So I'm confused.

Is it risk-loving on large gambles?

PROFESSOR: Yeah, risk-loving on large gambles.

It's not the size of the gamble.

You're risk-loving on gambles which leave you in a high

wealth state.

The point is that if I'm gambling over

winning Mega Millions.

Yeah, I'm a little risk averse.

But the truth is winning Mega Millions would make me so

happy that I could move into the risk-loving part of my

utility function.

But this would not explain why people ever play something

that pays off $100.

This is a fancy way of capturing the intuition you probably have.

It's that I'd think differently about something which would

completely change my life and make me a multi-billionaire,

something that would really raise me up, than the bet

I offered you guys before.

People are systematically taking terrible bets like the

kind I offered you guys before.

And that's inconsistent with these preferences.

The third explanation is entertainment.

It's that the utility function has in it the

thrill of the risk.

We only write down utility functions that are a function

of consumption, like how many pizzas you eat and movies you see.

But people have utility over lots of things.

One thing you may get utility from is the thrill of being able to

scratch the thing off and seeing if you won or not.

That would actually be consistent with the fact that

people play a lot of small lotteries.

If it's a thrill of winning that matters, if it's the

scratch off thrill that matters, then the optimal

thing to do, in fact, would be to not play one Mega Million.

It would be to play lots of little lotteries.

And that would be consistent with that behavior.

So one story that is consistent with what we see is

that people actually view this as entertainment.

On the other hand, once again, it's really expensive

entertainment.

Because you're throwing away $0.50 of every $1.

So you've got to get a lot of enjoyment out of that scratch

off relative to when you go to see a movie.

So that's another theory.

I'm going to put this in here.

It sort of inserts in here.

We talked about the fact that people can't be risk-loving

because they buy insurance.

And this alternating thing doesn't work, because they

play small lotteries.

But another theory that might fit here is a theory we call

loss aversion.

This is sort of a different version of the Friedman-Savage

preferences.

It's that people are, in general, risk averse.

But, in fact, they're really risk averse on the downside,

and they don't care so much on the upside.

So, in other words, the point is that when I initially

offered you that bet of win $125, lose $100, part of your

reaction was about the risk aversion.

But a lot of you are thinking, I'd be really

bummed if I lost $100.

It's not just that I don't have it to spare.

It's just like, god, I would kick myself.

It was one flip of the coin.

How could I possibly have been so stupid?

Whereas if you won, you'd be happy.

But then you'd go on to the next class.

The notion is that basically it's an extreme version of

risk aversion.

It's not only that you're risk averse, it goes

further than that.

Relative to the starting point, anything which is a

loss really pisses you off.

So, in fact, even that little gamble I offered you, win

$1.25 lose $1, you still might not take.

Some of you still wouldn't take it.

And the reason you wouldn't take it

can't be risk aversion.

Because it's just too small for risk aversion

to plausibly work.

It's that you'll just be bummed that you did that and

you took that chance.

You'd be made sadder by the loss than you'd be made

happier by the win.

In that case, that could explain why people spend a lot

of money to buy insurance.

Because they'll be so bummed if things go badly.

But they might play the lottery because, in fact,

around that point, they don't view the money they're

spending as a loss.

They think of it differently.

They think of the loss as being my house burning down.

That's a loss.

That would make me really sad.

But the $1 I paid to play the lottery, that's

not really a loss.

So I'm risk neutral going up and really risk

averse going down.

So I'm willing to take gambles that push me up.

It's sort of like Friedman-Savage.

I'm willing to take gambles that push me up, not gambles

that pull me down.

But, once again, that doesn't really explain the small ones.

That doesn't really explain the small ones.

That's more the entertainment theory.

Then finally, the last theory we have is

that people are stupid.

The lottery's unofficial motto is, after all,

a tax on the stupid.

And that's what it is.

It's a tax on the stupid.

Basically many of your public schools are financed by taxes

paid by stupid people.

It's sort of ironic.

But people just don't know.

You probably all had a vague sense that the lottery wasn't

a sensible thing to play.

But how many people actually knew it was as

bad a deal as I said?

That is, that it actually was a $0.50 expected payoff per dollar.

A few of you knew.

But most of you just had a vague sense it was a bad deal.

You didn't know how bad a deal it was.

This is sort of hard to figure out.

Meanwhile, you see on TV that these guys win these bazillion

dollars, and you get the thrill of scratching it off.

So, basically, if people are just stupid, then that could

explain it.

The problem is it matters a lot for government policy

which of these is right.

Because if A through C is right, if one through three

are right, then the government should go

ahead and allow lotteries.

And there's no reason why the state shouldn't run a lottery.

In fact, let's take the entertainment theory.

If this is really entertainment, and the state

can make money off of my entertainment,

then that's a win-win.

I'm happy, because I'm playing the lottery.

The state is happy, because it's financing schools.

That's a win-win.

So if these are right, you're going to want to encourage

state lotteries.

But if this one's right, we don't want to have them.

Because a terrible way to raise government revenues is

to tax stupid people.

There are much better ways to raise government revenues.

We'll talk about taxation in a couple of lectures.

But, clearly, taxing the stupid is not going to be an

optimal tax.

Yeah.

AUDIENCE: I can maybe sort of understand why people would

prefer smaller lotteries over bigger lotteries.

Because they are thinking that in smaller lotteries, they

have a much bigger chance of winning

than in bigger lotteries.

So, in that sense, their expected payoff in terms of

utility or other [INAUDIBLE PHRASE]

is a lot higher than the antes in the bigger ones, even

though the bigger ones might end up being a lot heavier--

PROFESSOR: So that's sort of an entertainment theory, which

is my utility derives from the win.

You have a theory in mind that my utility derives from the win.

Because if it's just about dollars, that

wouldn't explain it.

Because I'd win so much more from the big one that it would

compensate for the frequency at which I'd

win the little one.

But if I actually, in my utility function, have the joy

of seeing that winning thing, then that would explain it.

That's an entertainment theory.

You're saying, in my utility function, I actually get joy

from scratching off and seeing that it's a winner, and so

much joy that I'd much rather take a 10% chance at a small

win than a 1% chance at a huge win.

Because then, at least, with the first one, 1 in 10 times I

get that joy of the scratch off and seeing it's a win.

So that's sort of an explanation.

And that would say that lotteries are good.

The other way economists might think about lotteries is

they're voluntary taxes.

The public doesn't like taxes.

Here's a voluntary tax.

You never hear policy makers getting up and railing against

the horrible evils of the lottery.

Sometimes groups do.

Sometimes outside groups do and stuff.

But politicians don't.

But those same politicians will go on and on about how

terrible taxes are.

I'm going to cut your taxes.

Taxes are terrible.

Well, the lottery is a voluntary tax in that sense.

And I might say, look, there's no reason to oppose it, it's a

voluntary tax.

It's those involuntary taxes that

cause problems in society.

Well, whether we want to buy that story or not depends on

how much we think it's being played because people are

stupid or not.

OK.

Let me stop there.

So that's a great example of how a little bit of an

extension of our model can really enrich our

understanding about a lot of decisions that we make in the

real world.

We'll come back and talk about another

version like that later.

And that is the case of thinking about savings

decisions and thinking about individual decisions on how

much to save and how much to spend.
