SHAYAN OVEIS GHARAN: Hello?

Welcome to the second session.

Let me just, first, one-- say one thing.

So each talk is supposed to be 12 minutes,

if you haven't-- if you didn't know that.

Like, you have like 15 minutes.

One minute is supposed to be for questions, one minute

for like changing of speakers and so on,

but you have essentially 12 minutes to present.

We're going to have five talks in this session.

While I was looking at these talks,

I think the main theme, probably,

in common is that they essentially, almost all

of them, are introducing new models or sometimes

relaxations of classical models.

And in many cases, this leads to maybe avoiding some barriers

or sometimes improved practical algorithms

or mechanisms.

So let me start.

The first talk is Preference-Informed Fairness

by Michael Kim, Aleksandra Korolova, Guy Rothblum,

and Gal Yona.

So, basically, the idea is that we

have classical notions of fairness,

like individual fairness, which says that--

which basically looks at individual qualifications,

but not their preferences, when you allocate objects.

And, on the other hand, envy freeness

typically looks at preferences, but not the qualifications.

So they introduce a new notion of fairness.

They call it preference-informed individual fairness, which

sort of interpolates between the two,

and it gives them better outcomes.

The second talk, A New Analysis of Differential Privacy's

Generalization Guarantees, by Christopher Jung,

Katrina Ligett, Seth Neel, Aaron Roth, Saeed Sharifi,

and Moshe Shenfeld, they kind of--

they study this problem.

Say you want to-- you want to do a data analysis task,

and let's say you want to do k experiments.

In order to not do overfitting, you basically

need k batches of data.

Now, a few years ago, Dwork et al

proved this transfer theorem which shows

how you can quadratically reduce the data that you need.

Like, instead of a number of data points

linear in the number of experiments,

you only need like square root of k many data points.

But, unfortunately, that's not yet practical.

This paper gives a new proof of this transfer theorem

which is hopefully much more practical and efficient,

and hopefully we'll see applications.

The next one, the third talk, is Strategic Payments

in Financial Networks by Nils Bertschinger, Martin Hoefer,

Daniel Schmand, where they study financial markets.

So essentially, in a financial market, you have these firms.

Each firm has some assets.

It has some debts.

You show those by directed edges,

and basically what you want to do

is that you want to clear the market.

So these firms will pay their debts in some order,

in some way, depending on the amount of, you know,

assets they have so far.

And each firm wants to do this in order

to maximize their assets at the end of the day.

So they study Nash equilibrium of such a financial market,

when you have strategic firms, and only a particular set

of payments are permissible.

The fourth talk is Incentive-Compatible Active

Learning by Siddharth Prasad and Federico Echenique,

where they--

basically, I see this as a, you know, as a problem

at the intersection of learning and mechanism design.

So you want to design an incentive-compatible learner

where, let's say, in some economic application,

your learner wants to learn attitudes of individuals,

say, for-- towards risk.

So they study two specific settings,

inference of risk and belief over experiments and choice

under uncertainty, and prove very nice results.

And, lastly, we have the talk on the Implementation

in Advised Strategies--

Welfare Guarantees from Posted-Price Mechanisms

When Demand Queries are NP-Hard by Clay Thomas, Linda

Cai, and Matt Weinberg.

What they do is that they introduce

a new notion of truthfulness for combinatorial auction.

And they basically show that, with this new notion

of truthfulness, some classical distinction

between computational efficiency and communication efficiency

disappears, and you would have more interesting problems

to study.

So without further ado, let's start the first talk by--

on Preference-Informed Fairness.

And Gal will give the talk.

GAL YONA: OK, great.

So hi, everyone.

Good morning.

My name is Gal, and I want to tell you

about our paper, Preference-Informed Fairness.

This is joint with Michael Kim, who

is also in the audience, Aleksandra Korolova, and Guy

Rothblum.

OK, good, so I'll jump right into the setting.

So we have a collection of individuals,

that I'm going to call x and a collection of outcomes

that I'm going to call c.

And you can think of the outcomes kind

of as a binary classification, maybe yes or no

for receiving a loan, yes or no for receiving

a medical treatment, but kind of more

generally any outcome space, anything we want to allocate.

So maybe it's a collection of advertisements.

And the objective is going to be to map individuals

to outcomes in a way that provides strong protections

against discrimination.

OK, so kind of what do we do in this paper?

So it's not a new problem.

It's not a new setting.

So what we do is we defined a new notion of fairness

that we call Preference-Informed Individual Fairness,

and we show that it's a relaxation of two

kind of classical approaches for this problem,

individual fairness and envy freeness,

and then we go on to study this notion.

So we look at the social welfare.

We look at how do you do efficient optimization

subject to this notion.

And, also, we look at kind of, how

do you do preference-informed extensions

of other notions of fairness?

OK, and for this talk today, I want

to mainly focus on kind of giving you

a taste of this definition.

OK, good.

So I'll start with individual fairness.

This is from an ITCS paper by Dwork et al in 2012.

So, basically, the idea is we want

to formalize the concept of treating

similar individuals similarly.

OK, so we have two notions of similarity here.

The first is kind of a metric on individuals,

and the second is measuring distance between outcomes.

So the way I want to formalize-- the way I'm going to formalize

this, what they did is--

first of all, we're talking about

probabilistic allocations.

So, now, I'm looking at kind of mapping individuals

to distributions over the outcome space.

I'm going to denote this delta C.

And I'm going to say that such an allocation is individually

fair if, kind of, for every two individuals,

the distance between their outcome distributions

is upper bounded by their distance under the metric.
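To make this constraint concrete, here is a minimal Python sketch; using total variation as the distance between outcome distributions, and dictionaries for the allocation and metric, are illustrative assumptions on my part, not necessarily the paper's exact formulation.

```python
def total_variation(p, q):
    """Total variation distance between two outcome distributions (dicts)."""
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(c, 0.0) - q.get(c, 0.0)) for c in outcomes)

def is_individually_fair(allocation, metric):
    """Check D(pi(x), pi(y)) <= d(x, y) for every pair of individuals.

    allocation: dict individual -> (dict outcome -> probability)
    metric: dict (individual, individual) -> distance
    """
    people = list(allocation)
    return all(
        total_variation(allocation[x], allocation[y]) <= metric[(x, y)] + 1e-9
        for x in people for y in people if x != y
    )

# The loan example: two equally credit-worthy applicants (distance 0)
# must receive the loan with identical probabilities.
alloc = {"a": {"loan": 0.7, "no_loan": 0.3},
         "b": {"loan": 0.7, "no_loan": 0.3}}
metric = {("a", "b"): 0.0, ("b", "a"): 0.0}
```

With distance 0 between the two applicants, any gap in their loan probabilities violates the constraint.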

OK, so let's kind of see an example

for why this makes sense.

So suppose, now, that I'm allocating loans.

So I'm concerned about discrimination in this context,

and maybe a reasonable kind of notion

of what the metric should be between individuals

is to say that there's a distance of zero

between two individuals that are equally credit-worthy.

OK?

So, now, note that if I have two individuals that

are equally credit-worthy, basically those constraints

mean that they have to kind of receive

the loan with similar probabilities.

OK, so there's the question of how

do you get the proper metric?

But, really, with the proper metric,

this is a very strong kind of notion of protection.

OK, this really means that I can't

make any distinctions maybe based according to their gender

or anything else.

OK, so this is an example where this makes a lot of sense.

I want to slightly tweak this example

and show that, here, it doesn't make as much sense.

So, now, I'm allocating interview slots.

So maybe I have a bunch of graduate students,

and I want to allocate them to interviews at Google, Facebook,

or Netflix.

OK, so this is another setting where

I might be concerned about discrimination.

And, here, maybe the appropriate metric

quantifies how similarly qualified

two individuals are, say, according to their GPA.

OK, so now note that if I have two individuals that

have similar GPA, the outcome that

gives the first guy the Facebook interview

and the second girl the Netflix interview,

this allocation is not individually fair, right,

because they're similar, but they're

getting very different outcome distributions.

OK, note that this is also the case when,

kind of, his favorite interview is

Facebook and her favorite interview is Netflix.

So kind of even when every individual

is receiving their favorite outcome,

individual fairness rules this out.

And, really, the kind of observation

is this notion of similar or treated similarly

is really overly restrictive when individuals

can have kind of different preferences over the outcome

space.

Again, in the loan example we didn't have this.

We kind of assumed everyone prefers the loan

over not getting the loan.

But, really, there are other applications

where this is not a kind of reasonable assumption to make.

OK, so I've mentioned the notion of preferences,

so just to say how we kind of model it.

So for this talk, just assume that every individual

has this reflexive binary relation

over this entire outcome space.

So I'm just using this notation to say that kind of individual

I prefers outcome p over outcome q.

OK, and once I've given you kind of this notion of preferences,

a very natural notion of fairness is envy-freeness.

So this is from the literature on fair division,

so basically the kind of setting is, I have a cake,

and now I want to divide it fairly between individuals.

And what this notion says is kind of every individual

prefers their piece of the cake.

OK, so formally, I'm going to, again, talk

about probabilistic allocations and say that an allocation is

envy free if for every individual with respect

to every other individual j, i prefers their allocation, pi

of i over what j receives, pi of j.
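As a quick sketch of this check (my own encoding, with deterministic allocations and a hypothetical rank table standing in for the preference relation), envy-freeness looks like this:

```python
def is_envy_free(allocation, prefers):
    """allocation: dict individual -> outcome.
    prefers(i, p, q): True iff individual i weakly prefers outcome p to q.
    Envy-free: every i weakly prefers their own outcome to everyone else's."""
    people = list(allocation)
    return all(
        prefers(i, allocation[i], allocation[j])
        for i in people for j in people if i != j
    )

# The interview example: a higher rank means a more preferred company.
ranks = {"him": {"facebook": 2, "netflix": 1, "google": 0},
         "her": {"netflix": 2, "facebook": 1, "google": 0}}
prefers = lambda i, p, q: ranks[i][p] >= ranks[i][q]
favorite = {"him": "facebook", "her": "netflix"}  # everyone's favorite outcome
```

Giving each person their favorite passes the check; swapping the two interviews makes both envious.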

OK, so going back to kind of this example

that we said was not individually fair from before,

note that this is the prototypical envy-free

allocation.

Right?

Every individual receives their favorite outcome,

so of course there is no envy between individuals.

So this is great.

But note that when I kind of did this exercise,

I didn't use the fact that these two individuals are similar

at all.

OK, so now I can tweak this example

and, say, have two new individuals

which are not similar.

They don't have the same GPA.

And, now, note that there is envy, right?

The person that's on the right that's getting the Facebook

interview envies the one that gets the Netflix interview.

So this is not an envy-free allocation.

But kind of intuitively, these individuals

are different, right?

She is less qualified, so if we don't

have as many kind of Netflix interviews

that we can give out, it makes sense

to make some distinctions.

OK.

So, really, the intuition is that individual fairness, kind

of the main contribution, was making this distinction

between outcome distributions that are kind of a bummer

to, maybe, some of the individuals versus ones

that are unfair.

And that's what the metric gives you,

and we don't have this with envy-freeness.

OK, so kind of to reiterate, the fact

is that kind of envy-freeness is also overly restrictive,

this time, when the individuals differ in their qualifications

for the task.

So just graphically, we can look at kind of-- we

have the set of all the possible allocations.

Within it, we have the set of all the Individually Fair

allocations, but we've kind of shown

that the allocation in which everyone

receives their favorite outcome is not in this set.

OK, so the fact that individual fairness

doesn't look at preferences means

that, from the individual's perspective,

it can be too restrictive.

And then we have envy-freeness, which is a natural definition.

In particular, we have this--

everyone receives their favorite outcome inside it.

But, now, the fact that it doesn't consider the metric

means that it can be overly restrictive from the decision

maker's perspective.

OK.

So, really, what we're looking at

is we want to try to define this set of all

the fair allocations, the ones that aren't overly restrictive.

And the perspective that we give in this work

is to kind of start with individual fairness

as our notion of fairness, and then use information

about individuals' preferences to relax it into this new set.

And once I'm talking about relaxing individual fairness,

I have to make some distinctions between kind of deviations

from individual fairness that are discriminatory

versus deviations that are not discriminatory.

OK, so the guiding principle that we would formulate

in our definitions is that I'm going to allow deviations

from individual fairness as long as they are kind of in line

with individuals' preferences.

OK, so this would be the kind of intuition

behind the definition.

And I'll go back to this example to demonstrate.

So, remember, we had this case where

it seems like a reasonable outcome distribution,

but it's not individually fair.

So, now, I'm going to do, again, the same kind

of interpersonal relationships, so I'm going to compare him

with her and her with him.

So when I'm comparing him to her,

I'm going to kind of ask him this kind

of counterfactual question, which is,

"OK, under individual fairness, because you're similar to her,

you would be receiving the Netflix interview.

So, now, do you prefer your current outcome

over this possible allocation?"

And note that kind of the answer is yes.

And, again, I can do the same argument for her.

So I'm going to say, "OK, under individual fairness,

you would be receiving the Facebook interview.

Do you prefer your current outcome?"

And, again, the answer is yes.

OK, so this is kind of the idea behind

preference-informed fairness.

So this is just to formalize exactly this idea.

So I'm going to say that an allocation is

preference-informed individually fair,

so now it's with respect to kind of the two

metrics from individual fairness and the set of preferences.

And it's going to be fair if, for every individual,

with respect to every other individual,

there exists this kind of fictitious allocation p of i

with respect to j such that the following two things hold.

So the first is that the pij is individually fair with respect

to what j is receiving.

OK, so this is the first constraint.

And then the second is that i prefers

what they're currently receiving over this allocation.
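In a small discrete setting, the "there exists" in this definition can be checked by brute force. The sketch below is my own simplification, assuming deterministic (point-mass) allocations, so the fictitious allocation ranges over single outcomes and the distance between two point masses is 0 or 1; the names and numbers are illustrative.

```python
def is_piif(allocation, metric, prefers, outcomes):
    """Brute-force PIIF check for deterministic allocations.

    For every ordered pair (i, j), look for a fictitious outcome c that
    (1) is individually fair against what j receives: the point-mass
        distance between c and allocation[j] is at most metric[(i, j)], and
    (2) i weakly prefers their actual outcome over c.
    """
    def dist(c1, c2):  # total variation between two point masses
        return 0.0 if c1 == c2 else 1.0

    people = list(allocation)
    for i in people:
        for j in people:
            if i == j:
                continue
            if not any(dist(c, allocation[j]) <= metric[(i, j)] + 1e-9
                       and prefers(i, allocation[i], c)
                       for c in outcomes):
                return False
    return True

# The interview example: two similar students (distance 0), each getting
# their favorite company, satisfy PIIF even though the allocation is not
# individually fair.
outcomes = ["google", "facebook", "netflix"]
ranks = {"him": {"facebook": 2, "netflix": 1, "google": 0},
         "her": {"netflix": 2, "facebook": 1, "google": 0}}
prefers = lambda i, p, q: ranks[i][p] >= ranks[i][q]
metric = {("him", "her"): 0.0, ("her", "him"): 0.0}
```

If he instead received his least-favorite outcome, the counterfactual question would get the answer "no" and the check would fail.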

OK, and kind of just going back to this slide,

it's a very simple argument to see

that this notion is an immediate relaxation

of both individual fairness and envy freeness,

so this is really how kind of the-- graphically things

look like.

And we also show that it's a meaningful

relaxation in the sense

that individual fairness and envy freeness really

capture two different things.

OK, so kind of there are problems where, you know,

you can't cast individual fairness

as envy freeness for any set of preferences,

and in the other direction.

OK, so I don't have time to talk about the rest of the things.

If you want to hear more about, kind of, how

do you optimize something subject to these constraints

or kind of what can we say about social welfare,

then you're welcome to talk to us.

I'll end with kind of this takeaway message, which

is we got to this problem by kind

of thinking about targeted advertising

and wanting to actually implement and look

at how things look like in practice.

And then we started playing around

with individual fairness, and we realized that, OK,

it's not even a reasonable notion of fairness

for this task.

So I think I'll end by saying that if we want to incorporate

fairness into more of these like real-world systems,

I think there is still room for thinking and kind of doing

theory about what are these--

what are actually the right notions of fairness.

And, in this case, we kind of showed

that by not modeling preferences, maybe

you're being suboptimal.

Maybe there are other things that we

should bring into our model.

That's all.

Thanks.

[APPLAUSE]

SHAYAN OVEIS GHARAN: First.

AUDIENCE: Yeah, so what happens if you go the other direction?

Do you get the same outcome?

So you started in one model, and then you deformed to another.

GAL YONA: Right, so--

AUDIENCE: What if you started in the other, and then deformed?

GAL YONA: Exactly.

So I didn't talk about it to not give you

like a fourth definition, but you can do exactly that.

We call the-- what you get, we call it metric envy freeness.

So you can think of it as kind of you

would have the envy freeness constraint,

but that would be up to the metric.

So if the metric says we're similar,

then I shouldn't envy you.

And it's a very similar notion.

We kind of-- we discuss them both in the paper.

They give--

AUDIENCE: But aren't they the same [INAUDIBLE]?

GAL YONA: So they're-- yeah, we prove that up to some

assumptions on the kind of all these parameters,

they are the same.

Yeah.

AUDIENCE: So the definition of event--

of individually fair for kind of the different companies,

does it somehow imply that some of the companies

are better to work at than others?

And if it does, is there a way to kind of view

PIIF as some sort of stable matching between the employees

and the companies?

GAL YONA: So, OK, so definitely what I can say

is that individual fairness, as stated,

it only makes sense if kind of everyone

agrees on what are the good outcomes.

We haven't thought of it from like the matching perspective.

So there might be something there.

SHAYAN OVEIS GHARAN: Any other questions?

OK.

GAL YONA: OK.

SHAYAN OVEIS GHARAN: So.

[APPLAUSE]

And the next talk is by Saeed Sharifi

on A New Analysis of Differential Privacy's

Generalization Guarantees.

SAEED SHARIFI: Hi, everyone.

My name is Saeed, and I'm going to be talking

about differential privacy and its connections

to generalization bounds in adaptive data analysis.

So you probably have heard about the reproducibility crisis, which

is basically empirical findings that are not reproducible,

because they're the result of overfitting

to the data set at hand.

And to avoid overfitting, what classical statistics suggest

is that you fix your queries, you fix the hypothesis that you

want to test, before you even look at the data set.

But what happens, usually, in practice

is that you have the data set first.

You may do some exploration.

You may look at some histograms.

And then you come up with hypotheses

that you want to test.

And, on top of this, you--

after you test the first set of hypotheses,

you may go back and come up with a new set of hypotheses.

You may not be satisfied with the first batch of results.

You update your queries, you update your hypotheses,

and you get the answers to those queries.

And this is the adaptivity that invalidates

most statistical results, including confidence intervals,

p values, central limit theorems,

and this brings us to this field, adaptive data analysis.

So the question that we're trying

to answer in this field is that, how

do we perform a valid statistical inference

in the adaptive model?

And when we say inference, we mean confidence intervals,

generalization bounds, or out-of-sample accuracy bounds.

These are all the same thing.

And there is one solution that one might come up with,

which is the so-called sample splitting where

if you want to answer k adaptive queries,

you partition your data into k parts.

And every time you want to answer a new query,

you get a fresh batch of data points, and you run your test.

You get the answer on that fresh batch of data points.

And this approach is straightforward.

It's quite simple.

But as you can imagine, this is not the best that we can do.

In fact, this approach requires at least linearly many data

points, linear in the number of queries k,

to get non-trivial confidence intervals.
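A minimal sketch of the sample-splitting baseline (my own illustration, not code from the paper): each of the k queries is answered on its own disjoint batch, which is why n must grow linearly with k.

```python
import numpy as np

def sample_splitting(data, queries):
    """Answer k adaptive queries by giving each one a fresh batch of data.

    data: 1-D array of n samples; queries: list of k functions, each
    mapping a batch of samples to a real number.  Each query sees a
    disjoint slice of n // k points, so it cannot overfit to data that
    earlier queries already looked at.
    """
    k = len(queries)
    batch = len(data) // k
    return [q(data[t * batch:(t + 1) * batch]) for t, q in enumerate(queries)]

# Two mean queries over ten points: the first sees 0..4, the second 5..9.
answers = sample_splitting(np.arange(10.0), [np.mean, np.mean])
```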

And there is a recent line of work,

starting with the work of Dwork et al in 2015

that shows how we can improve on this sample splitting approach

using differential privacy.

And the major results of these works and similar works

is the so-called transfer theorem

which says if you have algorithms that

are accurate in sample and simultaneously

differentially private, then you get out-of-sample accuracy.

And the confidence intervals that

are implied by the transfer theorems

achieve the optimal rate of n, the sample data

set size, growing with--

growing in the square root of the number of queries k.

And one important thing to note here

is that although the rates in adaptive data analysis

are already optimal, the constants are not.

And, in fact, these constants are

so big that they make the sample splitting approach favorable,

in most cases, compared to the differential privacy approach.

And in this talk, I'm going to be talking

about a new proof of the transfer theorem which has

several appealing properties.

One is that this proof is much simpler

than the previous proof proposed for the transfer theorem.

And it can give some new structural insights

that we believe can be used in other places.

And the confidence bounds which are implied by our transfer

theorem are substantially better in terms of the constants.

And just to quickly show you how our bounds compare

with other bounds, let's look at this plot here.

On this plot, on the y-axis, we have the number of queries k.

And on the x-axis, we have data set size n.

The blue curve, here, corresponds to the bounds

implied by our transfer theorem, while the rest

are either the sample splitting baseline or the bounds implied

by the work of Dwork et al or Bassily et al.

And ignoring the details, you can

see that, for each data set size n,

our bound corresponds to answering much more

queries compared to the other bounds.

And before I get into the details

of our theorem or our model, let's just briefly go over

differential privacy.

Differential privacy is a property

of a randomized algorithm.

We say an algorithm is differentially private

if the output distribution of the algorithm

does not change by much if you change one data

point in your data set.

And when we say does not change by much,

this is measured formally through

these parameters epsilon and delta in this form.

So, here, d and d prime are two data sets that

differ in only one data point.

And, most of the time, like usually differential privacy

is achieved by adding noise to computations.

So if you want-- for example, if you

want to compute an average in a differentially private fashion,

you can just add a little bit of Laplace noise to it,

and you're good to go.
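A standard sketch of such a noisy average (my illustration, not the paper's code); clipping values to a known range [lo, hi] is an assumption needed to bound how much one data point can move the mean.

```python
import numpy as np

def private_mean(data, epsilon, lo=0.0, hi=1.0, rng=None):
    """(epsilon, 0)-differentially private mean of values clipped to [lo, hi].

    Changing one of the n points moves the clipped mean by at most
    (hi - lo) / n, so adding Laplace noise with scale sensitivity / epsilon
    satisfies the differential privacy definition above.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.clip(np.asarray(data, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(x)
    return x.mean() + rng.laplace(0.0, sensitivity / epsilon)
```

With 1000 points the noise scale is only 0.001, so the private answer stays very close to the true mean.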

And besides this privacy point of view on this definition,

here, you can view differential privacy as a stability notion.

Because what this definition tells

us is that a differentially private mechanism is not

too sensitive to individual data points,

and that's sort of a stability notion.

Now, let's get into the model for adaptive data analysis.

So we have an algorithm M which has direct access to a data set

S, where each entry of this data set is sampled [INAUDIBLE]

from a distribution P.

And you, as a data analyst, are asking queries q1 through qk

and getting answers back a1 through ak from this mechanism

M.

And just to make it formal, these queries

are functions mapping data sets to real numbers.

You can imagine, for example, like one query

can be the average of the second feature in the data set, OK.

And q of S is the value of the query on the data set,

and q of P is the value of the query on the distribution

P, which is the expected value of the query when you sample

S from the distribution.

And the answers are just real numbers,

given by this algorithm here.

And we denote the transcripts of interaction

between the analyst and the algorithm

by pi, which is basically all the queries and all

the answers in a vector.

And what we want, as the algorithm designer here,

is that the answers given by this algorithm

be close to the value of the query on the distribution P.

Or, in other words, you want to quantify

a bound on this difference here, the difference of the answers

and the values of the query and the distribution.

And note that there are a pair of random variables here.

We have a data set S, and we also

have this transcript of interaction pi.

And the key quantity that appears in our analysis

is the conditional distribution of data

set S given the transcript pi,

and we denote this by Q of pi.

And now, let's look at our transfer theorem.

If M, the algorithm, is epsilon,delta-differentially

private, and alpha, beta-sample accurate,

which means with probability 1 minus beta,

all the answers given by the algorithm are within alpha

of the value of the query on the data set S.

This is the definition of sample accuracy.

Then we get alpha prime, beta prime out

of sample accuracy, which means, again, with probability

1 minus beta prime, the answers given by the algorithm

are within alpha prime of the value

of the query on the distribution.

So this is sort of a confidence bound, a generalization bound.

And this is how alpha prime and beta prime relate

to the parameters epsilon,delta, alpha,beta.

And there are some, like, free parameters,

like constants c and d here,

which correspond to a Markov-like concentration.

In the width of the confidence interval, we have c and d,

and in the confidence parameter, we have beta over c and delta

over d, so there is some sort of a Markov-like concentration

here.

And now, let's quickly go over the sketch

of the proof for this theorem.

So there are two steps to this proof.

The first step is that the alpha,beta-sample accuracy

on its own, without using differential privacy,

guarantees that the answers given by this algorithm M are

close to the value of the query on the conditional distribution

of the data.

So this is basically the expected value of the query

when you sample the data from the conditional distribution

of the data sets.

So this is-- this was sort of surprising to us,

and the proof is very simple.

It just uses Markov inequality in it.

And the next step, here, is that epsilon,delta-differential

privacy, again, on its own, implies that the value

of the query on the conditional distribution is close

to the value of the query on the real distribution P. Again,

this is only using the definition of differential

privacy and some clever trick, which is not too complicated.

The proof is, again, short, and the rest

is just a triangle inequality.

We have answers close to this guy,

and then this guy close to the value

of the query on the distribution, so via a triangle

inequality and a union bound.

The proof is complete.

So here is the proof sketch.

And now, let's look at the conclusions and some summary

and future steps.

So we have given a new proof of the transfer theorem

that has, like, appealing properties.

One is that the proof is simpler, much simpler

than the previous proofs.

And it implies better confidence bounds in terms of constants,

as we saw in the plots earlier in the talk.

And we hope that the bounds implied by our theorem

can be used in practice.

And we saw that the key quantity that appeared in our proof

was the conditional distribution of the data set.

And we believe that this can be used in other places as well.

Wherever you have two sources of randomness--

one is the data set and the other is things like the noise added

by differential privacy--

then you can sort of think about the conditional distribution

of the data set given all the other sources of randomness,

and it might be helpful.

It can be helpful to think about that.

And there is-- if you remember from our theorem,

there is-- there was some Markov-like concentration.

And the question is if we can improve

this Markov-like concentration to

a Chernoff-like concentration, and the answer is yes.

For epsilon,zero-differential privacy,

we get Chernoff-like concentration which is

in the form of root of 1 over n.

But for epsilon,delta-differential

privacy, we don't know if this answer is yes,

so that's an open question.

And thank you very much for your attention.

The paper is on arXiv, and I'm happy to take questions now.

[APPLAUSE]

SHAYAN OVEIS GHARAN: Questions?

It was simply done.

All right.

[APPLAUSE]

So--

AUDIENCE: Sorry.

SHAYAN OVEIS GHARAN: OK, so the third talk

is on the Strategic Payments in Financial Networks,

and Martin Hoefer will give the talk.

MARTIN HOEFER: Why isn't it showing?

SHAYAN OVEIS GHARAN: Oh.

I think we have to--

ooh, don't click it.

MARTIN HOEFER: Yes.

SHAYAN OVEIS GHARAN: Great.

MARTIN HOEFER: Good.

Yeah, so thank you for the introduction.

And, yeah, thanks for coming to my talk.

So this is joint work with Nils Bertschinger and Daniel

Schmand.

Dan is my postdoc, and Nils Bertschinger

is a professor in Frankfurt on systemic risk.

And so this paper, essentially, evolved

when I was talking to him because I have

no idea of financial networks.

By now, I have some idea after talking to him.

So OK, so the main point in this area of systemic risk

is that you have financial entities

like banks, investment funds.

They have monetary liabilities and dependencies, and so much

of the work that is happening right now

started in the financial crisis in 2008.

We saw that there was sort of bankruptcy dangers

and potential cascading defaults that

were threatening the financial system

and also threatening sort of jobs and the economy and so on.

And the ongoing challenge in this area

is that there's sort of this instability.

There's still the danger of crashes and stuff.

So debt is the major source of risk in financial systems,

and one of the main goals is to understand

the effects of these debt relations

and design suitable measures for regulation

to avoid cascading insolvency.

OK, so far, so good.

So, now, I talked to Nils, and he said, OK,

so one of the major--

or one of the main models in financial systems

is the following model.

It's a simple one, and actually predates the financial crisis.

Interesting.

So it's by Eisenberg and Noe, and it's a graph-based model.

So you have a set of financial institutions or firms,

and they have debt relations.

And so firms are modeled as nodes.

Relations are modeled as edges.

And so each edge has a value, and the value

is the debt that the source owes to the target.

All right?

So, here, this node on the bottom-right, he owes like 20,

let's say 20 million to this firm on the top.

Right, so now each institution has external assets

which come from things that are not part of this debt relation,

for instance, real estate or maybe gold

or something like this.

And now the question is, in this system, who's bankrupt?

Right, so there's like tons of relations going on.

So who's bankrupt?

Who's still solvent?

Who can pay all his debt, and who cannot?

So, for this, the idea is to design a money flow

to determine what's going on in this network.

OK.

So the goal is to find, in a way,

out sort of how this network resolves,

how this network clears if, for instance, there's

a shock and people start paying their debt.

And for simplicity, here, we're going

to assume that these debt values are integral values,

but they're not really necessary for computation,

although it will be helpful.

OK, so how much money does a firm have,

and which ones are bankrupt?

So, to determine this, we're going to say each institution

has total assets, and these total assets

are the sum of the external assets and the internal assets.

And the internal assets are the money that

is being paid by others, right.

And there's total liabilities, which

is the sum of value of outgoing edges.

I didn't do anything, so (LAUGHING) what's happening?

SHAYAN OVEIS GHARAN: This [INAUDIBLE].

MARTIN HOEFER: Does it come back by itself?

SHAYAN OVEIS GHARAN: Yeah, usually.

MARTIN HOEFER: It's coming back, nice.

OK.

So, for instance, down-- no.

Maybe you should stand here, and then--

[LAUGHTER]

SHAYAN OVEIS GHARAN: I want to.

MARTIN HOEFER: OK, good.

So consider this firm on the bottom-right.

It pays forward to the firm on top,

so now the firm has the seven external assets

plus four internal assets, pays all its assets

to the firm on top.

On the bottom-left, now, things get

a little bit more interesting.

Because in this paper by Eisenberg and Noe,

they assumed that debts are paid pro-rata,

and pro-rata simply means proportional.

So if I have 11 million, let's say,

and I have these kinds of debts, then

I will have sort of one third of debt

to the top firm and 2/3 of debt to the bottom-right firm.

So I pay in this one-third, 2/3 relation.

So this is how I pay my debt.

All right, so this is the assumption that they make.

And then the money flow is a maximum flow

under this pro-rata constraint.

Because this pro-rata constraint essentially

is like a fixed-point constraint, right?

So you have this incoming money, and you distribute it

to the outgoing edges, and then these outgoing neighbors

have incoming money that depend on your payments.

And your payments depend on your incoming money,

which depends on their payments, and so on and so forth.

Right, so this is like a fixed-point relation.

And they choose the fixed point that sort

of maximizes the flow under this pro-rata constraint

such that everybody pays its debts in this way.

And then an institution is insolvent if it cannot pay all its liabilities, and solvent otherwise.

So if you resolve the system like this, you would see that these top two firms are-- sorry, the bottom two firms are insolvent. The rest of them are solvent.
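To make the pro-rata clearing concrete, here is a small sketch (my own illustration, not code from the paper): iterate the pro-rata payment map downward from full payment, which by monotonicity converges to the greatest clearing vector in the spirit of Eisenberg and Noe. The three-firm network is made up.

```python
def pro_rata_clearing(external, debts, tol=1e-9, rounds=10_000):
    """Greatest clearing vector under pro-rata payments.

    external[i]: external assets of firm i
    debts[i][j]: face value that firm i owes firm j
    """
    firms = list(external)
    liab = {i: sum(debts.get(i, {}).values()) for i in firms}
    pay = dict(liab)  # start from full payment and iterate downward
    for _ in range(rounds):
        new_pay = {}
        for i in firms:
            # money i receives: each debtor j pays i its pro-rata share
            incoming = sum(pay[j] * owed / liab[j]
                           for j in firms
                           for k, owed in debts.get(j, {}).items()
                           if k == i and liab[j] > 0)
            new_pay[i] = min(liab[i], external[i] + incoming)
        if all(abs(new_pay[i] - pay[i]) <= tol for i in firms):
            break
        pay = new_pay
    insolvent = [i for i in firms if pay[i] < liab[i] - 1e-6]
    return pay, insolvent

# made-up example: A has 5 external, owes 10 to B; B owes 10 to C
external = {"A": 5, "B": 0, "C": 0}
debts = {"A": {"B": 10}, "B": {"C": 10}, "C": {}}
pay, insolvent = pro_rata_clearing(external, debts)
```

Here A can only pay 5 of its 10 debt, so B in turn can only pay 5, and both A and B end up insolvent while C is fine.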

Right.

So now the question we were asking, or I was asking,

so what about incentives?

Right, so I'm an algorithmic [INAUDIBLE] person.

I'm interested in incentives.

Like, what are the incentives?

Have they been studied?

Is there any work on this?

And, interestingly, there's very little work

on incentives in this domain.

Right, so the question, here, that we want to study

is, what are strategic incentives in how to pay debts,

and what effect does this have on money flow

and solvency of firms?

So essentially, we're taking this pro-rata mechanism out

and replacing it by something strategic.

So we assume that these firms are strategic entities trying

to maximize the money they have in the end,

and they try to pay strategically.

OK, some properties of the system, any solvent institution

clears all its debt.

OK, any insolvent institution must spend all assets.

So there's no Cayman Islands where you deposit some money.

No, if you still have debt and you still have money,

you're supposed to pay that.

And no institution is able to spend more than its assets,

so there's no magic money coming out of nowhere.

So then the question is, OK, well,

if you say strategic incentives, then what is the strategy?

So how can an insolvent institution

distribute its assets?

And the solution that we took for this was seniorities, and seniorities are nothing else than rankings over the liabilities of institutions.

So it's a priority order in which debts will be cleared.

So we're using this notion of seniorities

with priority order, which has also been studied before.

So we assumed that these institutions choose

the seniority strategically.

And now, there's still different ways in which you can do it.

One is, for instance, edge ranking,

where you simply say I define a ranking

over the outgoing edges.

I'm going to pay my debts in this order.

So I'm first going to pay this debt contract and then

this one and then this one and then this one.

Another one, a little bit more flexible, would be, say, OK,

I'm sort of taking a ranking over integral flow units

in the sense that my first dollar I pay here,

and the second dollar I pay there,

and the third dollar I pay here, and so on and so forth.

So it's like coin rankings.

I'm taking coins and I'm defining a ranking over coins.

Or, more generally, you could think

of any sort of flow function that takes, as input,

the assets you have and, as output, the flow

that you spend on the outgoing edges.

And if you think of it a little bit,

then this coin ranking is actually

equivalent to a flow function that is monotone.

So the more assets you have, the more you spend.

So it's not that, sort of, if I have five assets, I'll pay all five here, and if I have seven assets, I'll pay all seven there. It's a monotone flow function, where more assets means monotonically increasing payments on all edges.

Right.

So, OK, given these strategies, we still

have to define, now, who has how much,

right, so we have to define these clearing states.

And, OK, so the utility or the target of these firms

is to pick strategies in a way to maximize

the total assets, to maximize the money they have available.

Then given a strategy profile, how

to define-- determine these assets,

how to determine the clearing state,

how to determine who's bankrupt, who is not, what is the money

flow.

And it turns out that for monotone strategies, there is an infinite lattice of possible clearing states, analogous to what Eisenberg and Noe proved in their paper.

So there's a unique clearing state

that maximizes the total flow in the system,

and we're going to use this as the clearing

state in the system.

And it turns out if we have a suitable representation

of these--

of the graph, we can actually compute this clearing state,

this unique maximal clearing state in polynomial time,

at least for edge-ranking games. For coin-ranking games, it's a little bit different.

It depends on how you represent the monotone flow function.

But for edge-ranking games, it's simple.

You have this ranking over edges.

And then it's very simple.

You simply resolve paths and cycles

until you're done, right?

So if you have this network, for instance,

and you assume players have these preferences,

so these are the edges that they first resolve.

There's always going to be either a cycle or a path

from a node which still has external assets.

So, here, there's a cycle, so you essentially

fill up this cycle until one edge goes tight.

Here is this edge number 10 going tight.

So at this point-- you can't see it again--

but there is an edge going tight.

The edge drops out.

It's being paid for.

Now, then, the firm essentially goes to the next higher ranked

edge, so there will be a new cycle

or there will be a new path evolving.

So, here, this edge is tight, so it drops out of the system.

So, now, we resolve a path that stems from the top nodes

to the bottom-right node via 5.

And then, so in this way, you continue.

This edge is paid for, so this middle firm

has a second priority edge.

There's a new cycle evolving.

You resolve the cycle.

Edge goes tight.

You resolve a path, edge goes tight, and so on and so forth,

until the system is cleared.
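The path-and-cycle procedure above is the efficient way to do this. As a sanity-check sketch (my own, not the paper's algorithm), one can also reach the maximal clearing state for edge-ranking strategies by iterating the rank-order payment map downward from full payment; the integral debt values guarantee termination. The network below is invented.

```python
def edge_ranking_clearing(external, debts):
    """Greatest clearing state under edge-ranking strategies.

    external[u]: integral external assets of firm u
    debts[u]: list of (creditor, integral amount), highest priority first
    """
    pay = {u: [amt for _, amt in debts.get(u, [])] for u in external}
    while True:
        incoming = {u: 0 for u in external}
        for u in debts:
            for (v, _), p in zip(debts[u], pay[u]):
                incoming[v] += p
        new_pay = {}
        for u in external:
            cash = external[u] + incoming[u]
            out = []
            for _, amt in debts.get(u, []):
                x = min(amt, cash)  # clear debts strictly in rank order
                out.append(x)
                cash -= x
            new_pay[u] = out
        if new_pay == pay:   # fixed point reached
            return pay
        pay = new_pay        # payments only decrease, so this terminates

# invented example: A ranks its debt to B above its debt to C,
# and B and C owe each other in a cycle
external = {"A": 5, "B": 0, "C": 0}
debts = {"A": [("B", 3), ("C", 10)],
         "B": [("C", 4)],
         "C": [("B", 4)]}
pay = edge_ranking_clearing(external, debts)
```

Starting from full payment (rather than zero) matters: it lets the B-C debt cycle clear fully by circulation, which is exactly the maximal clearing state the talk selects.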

So, here, we can see, with this strategy profile,

where the bottom firms actually prefer the lower edges instead

of the horizontal edges, you get this one insolvent firm.

Now, this is also a Nash equilibrium.

I'm not going to prove this to you,

but I'm going to-- simply going to say,

suppose this bottom firm deviates and picks

a different ranking, so prefers top edge

instead of the horizontal edge.

Then it turns out to be worse, right: more institutions go bankrupt, the assets are less, and so on and so forth.

Right, so this is the idea.

So, some results.

For edge-ranking, things are actually pretty bad.

Right, so edge-ranking games sort of have bad properties,

in the worst case.

Equilibria might not exist, so there

might be inherently cycling dynamics

in these strategic incentives.

Deciding the existence is NP-hard.

Even when they exist, computing them is NP-hard.

Computing the social optimum, which

maximizes the total assets available to everyone,

is NP-hard.

Computing a best response for a single firm is NP-hard.

And in terms of social welfare, where we simply measure the total money that is available to everyone, the price of anarchy for pure Nash equilibria can be unbounded, arbitrarily far away from the social optimum. For equilibrium concepts that consider coalitional deviations, it's still a factor of n, so it's pretty bad.

Now, we have this alternative which

is coin-ranking, which is essentially monotone flow

functions.

And these coin-ranking games, they're

related to quite a number of games in the literature.

So I'm not going to talk about it.

If you want to know about it, come talk to me later.

There's different flow games that

have been studied in the literature which

are very related, but not the same, surprisingly.

And for coin-ranking, things are much better.

So, for coin-ranking, the social optimum

is actually a strong equilibrium.

So if you take the flow that maximizes the total circulation in the network, we can define strategies that will turn this into a strong equilibrium.

Yeah, I'm not doing anything, so it's just

something that works by itself.

These strong equilibria can be computed

in strongly polynomial time.

Computing a best response, though, remains NP-hard in general.

And these results extend to general flow strategies,

so even if you don't have the monotonicity constraint,

you can still turn the social optimum

into a strong equilibrium.

The problem with general flow strategies, though, is that they don't necessarily have these clearing states. Right, remember the lattice of clearing states; it doesn't necessarily exist for flow strategies.

Also, quality of equilibria, OK, so I'm not going to--

I'm not going to expect you to read this.

There are bounds on the quality of equilibria that we show.

So even sort of worst-case equilibria

have a bounded deterioration of social welfare.

Anyway, so there's also some policy advice

that Nils asked me to include.

So the idea is that, in general, ranking on a per-contract basis

leads to extremely bad properties.

If, instead, you allow people to rank on a coin basis,

to have monotone flow functions, then you

can have this optimal, socially optimal, clearing state, which no coalition of firms wants to deviate from.

But even in this scenario, arbitrary Nash equilibria

can be bad, so it sort of calls for a benevolent regulator

who defines who pays what, and in which way.

If you let the people, on their own,

decide how to pay their debts, then actually the system

can end up in a very bad stable state.

And that's it, mostly.

So we have this classic model of credit networks.

We define strategic variants where people prioritize clearing certain debts.

For this coin-ranking variant, we

have optimal strong equilibria that are

computable in polynomial time.

For the edge-ranking, we have mostly negative results.

And, of course, there's a lot of further issues you can study.

There's a lot of things going on in this area of systemic risk

which is not well understood from an incentive

and from a game-theoretic perspective.

And I'll leave you with that.

Thank you very much.

[APPLAUSE]

SHAYAN OVEIS GHARAN: Questions?

AUDIENCE: So if you--

hi.

So if you take a network and, let's say, you do--

two firms decide to lend each other $100 million.

So I guess that wouldn't change the social optimum because they

could-- it just, you're adding a little.

But could it affect some of these, you know, other methods?

In other words, could you take something

where, you know, by adding in, say, $100 million both ways, all of a sudden, you get a lot more insolvency through the other methods?

MARTIN HOEFER: OK that's actually

an interesting question because this is what

people study very recently.

It's called netting, and netting is

to take out these cycles, these two

like directed cycles between two firms lending each other money.

So, in our model, I think these effects

are very limited because we have the flexibility of having

these flow strategies.

But if you go back to this pro-rata payment model,

there's a lot of counter-intuitive effects

that can happen by taking out these two cycles.

And then people are-- that's actually

a current topic of interest.

Very recently, people are starting

to analyze like what are the effects.

And can you decide this, and can you

decide that, and characterisations

and so on and so forth of this netting,

not only the two-by-two netting, but then also

like larger cycles, if you take them out?

I mean, essentially our algorithm is netting.

All right, so this algorithm that derives the clearing state is netting.

So, in this way, we are sort of--

in this way, you're independent, in a sense,

of this netting effect.

But if you go for the pro-rata payment,

there's a lot of interesting things you can say about that.

SHAYAN OVEIS GHARAN: Any other questions?

Did you also study some other notion of, like, welfare? Like, let's say you want to minimize the number of insolvent firms?

MARTIN HOEFER: Yeah, that's actually,

we got something-- that's a comment we got in the reviews

saying that, OK, your notion of social welfare

is the total amount of assets.

You sum the amount of assets.

If you go to, say, minimizing the number of insolvent firms, it's hard.

So many of the things we study here can be adapted.

And then, OK, so this optimal strong equilibrium proof,

you can adapt this to quite a number

of notions of social welfare.

But the polytime computation goes away

if you want to minimize the number of insolvent firms.

SHAYAN OVEIS GHARAN: All right.

Thank you.

[APPLAUSE]

So, yeah, the next talk is on Incentive Compatible Active

Learning, and Siddharth is giving the talk.

CREW: I just want to see if everything's tight.

Just go ahead.

SIDDHARTH PRASAD: Can I use the HDMI?

CREW: Yeah.

I think I [INAUDIBLE].

[SIDE CONVERSATION]

SIDDHARTH PRASAD: OK, I'll be talking on our paper, Incentive

Compatible Active Learning, and this

is joint work with Federico Echenique at Caltech.

So the main motivation is to come up

with a model of active learning that

takes into account incentive issues, mainly motivated

for experimental design and economics.

So in economic experiments, you have

sort of an interaction between a learner and an agent,

and the learner's trying to learn some parameters that

might govern agents' choices over some abstract outcome set.

And such experiments are always incentivized,

which just means that there's some payoff to the agent

for participating that depends on the history

of the interaction that took place.

And this paper deals with active learning,

so I'll just quickly distinguish between passive and active

learning.

The passive learning setup is this sort of familiar setup

where we get a bunch of labeled points produced

by some unknown distribution.

And then sort of the question that you

want answer is, how many samples do you

need to see to learn something?

In active learning, you give the learner

sort of complete control over the data points on which

he wants to see a label.

And the question you ask is, well,

how many labels do you need to learn something?

And in the traditional active learning literature,

this is known as the membership queries model.

But whenever I talk about active learning,

I just mean this model where he has full control over what

data points he learns from.

And there is a bunch of work that's

been done in recent years, both on the economic applications

of sort of passive PAC learning as well as learning

in the presence of, like, incentives

and strategic agents.

So this is the basic setup of preference elicitation

that I'll use.

There is an abstract type space, Theta,

so agents have some types that are drawn from this type space.

It's going to be a metric space that's bounded

with respect to the metric.

There's some abstract outcome space O,

and an agent of type theta enjoys

the utility of u of theta comma o

when presented with outcome o.

And you can also talk about this in the language of preference

relations.

So type theta is going to induce a preference relation in a natural way, where an agent of type theta prefers o to o prime if he gets more utility from it.

OK, so let's see formally what it

means for, like, a learner and an agent to interact

and for this learning process to take place.

So we have a learner, and we have an agent of type theta,

and he's going to answer questions, for now truthfully.

So the learner asks him to make a choice from this outcome set, o1 or o1 prime.

And maybe he says he likes o1 better.

And then this interaction continues

for some number of rounds.

And since he's a truthful agent, he answered truthfully

according to his utility.

The learner learned something after this interaction,

some hypothesis, theta hat, for the agent's true type.

And, finally, he pays the agent off

based on the last question that was asked.

And, in particular, he's going to implement

the outcome the agent chose on the very last question.

So then, what does it mean for such an interaction

to satisfy a learning property?

Well, it just means that the learner is implementing

some algorithm such that regardless

of what the agent's true type is, as long as he answers

truthfully, then this algorithm is

going to come up with a good hypothesis

with high probability.

And the minimum number of rounds that the learner

has to execute this interaction for,

well, we're going to call the learning complexity.

OK, and so now I want to talk about incentive compatibility.

And to do that, we need to talk about what

it means for an agent to be strategic in this interaction.

So this is a diagram for a similar interaction, but maybe the agent actually wants to play according to a strategy, because he thinks he can sort of steer the learner to offer him outcomes that maybe he thinks he's going to get higher utility from.

So we assume, like, the agent sort of understands how the learner's algorithm works.

So, here, this is just a strategy sigma.

It's just some mapping from the history of the interaction

so far into maybe some randomized response

that comes next.

And incentive compatibility just says that, you know,

for all types, it should hold with high probability that he

cannot get more than a negligible

advantage from trying to manipulate the interaction

in such a way.

And, similarly, we can define something called the incentive-compatible complexity. It's just the minimum number of rounds this algorithm needs to be run for to, you know, achieve the desired advantage and confidence parameters.

And finally, you can just combine the two definitions

to demand both properties and call

that the IC learning complexity of the learner's algorithm.

OK, so here's a sort of simple, a very simple method

of achieving these definitions, although the algorithm that

arises might be inefficient.

So suppose we have this sort of assignment function--

and if anyone has seen the theory of scoring rules,

this is called an effective scoring rule--

and all it does is assign to every type an outcome

such that, if your true type is theta, how much you like the assignment for some other type theta prime is sort of proportional to how far away theta prime is from your true type.

So let's look at a concrete example of such a function.

So let's look at expected utility

preferences over r to n.

So the story, here, is that there are n states of the world. There's some uncertainty, like maybe it's the weather tomorrow. And an agent's type is just a probability distribution over the n states. It's his belief about, you know, how likely each state is to occur.

Learner offers outcomes, and these

are just sort of rewards in the event that every state occurs.

And agent's utility is just the expectation

of this reward vector.

Then this thing called a spherical scoring rule, which

just takes a probability vector and projects it onto the unit sphere, is going to satisfy this effectiveness

condition.
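As a quick numeric illustration (with a made-up two-state belief), the spherical scoring rule pays q_i / ||q|| in state i, and the truthful report maximizes the expected score:

```python
import math

def spherical_score(report, state):
    # spherical scoring rule: payoff in `state` is report[state] / ||report||
    norm = math.sqrt(sum(x * x for x in report))
    return report[state] / norm

def expected_score(belief, report):
    # expected payoff under the agent's true belief
    return sum(p * spherical_score(report, i) for i, p in enumerate(belief))

belief = (0.7, 0.3)  # made-up true belief
grid = [(a / 10, (10 - a) / 10) for a in range(1, 10)]
best = max(grid, key=lambda q: expected_score(belief, q))
# the truthful report (0.7, 0.3) wins on this grid
```

The expected score is the inner product of belief and report divided by the report's norm, so it is maximized exactly when the report points in the same direction as the belief.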

And I'll just gloss over this very quickly,

but if you are able to construct such a function s

in your environment, then there's

a very simple, almost trivial, way of achieving incentive compatibility, where you just cut up the type space very finely and sort of iterate over the entire type space

to find the point in the cover of the type space

that is closest to the agent's true type.

And this is sort of very trivially incentive compatible.

It's a deterministic algorithm that

is going to learn the true type, and it's strictly

incentive compatible in that, at any point,

the agent has no incentive to lie.

The problem is it can be very inefficient.

For example, an epsilon cover

of the probability simplex is already

like exponential in the dimension.
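A rough way to see that blow-up (a sketch: a grid with step 1/k is a crude cover of the simplex) is to count the grid points, which is C(k + n - 1, n - 1), exponential in the dimension n:

```python
from itertools import product
from math import comb

def simplex_grid_size(n, k):
    # number of probability vectors over n states whose entries
    # are multiples of 1/k -- a crude cover of the simplex
    return sum(1 for p in product(range(k + 1), repeat=n) if sum(p) == k)
```

For step 1/10, three states already give 66 grid points and six states give 3003, matching the binomial formula.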

Nevertheless, let's look at, like, a sufficient condition

under which we can actually construct such a net.

So, here, I said, well, if there's an s, we can do this.

That's sort of an abstract condition.

So here's a nice sufficient condition

for when you can actually do this.

So, again, let's work with Euclidean preferences,

so the outcome space is r to the n,

and we'll talk in the language of preference relations.

So one piece of notation: for an outcome x in R to the n, the upper contour set at x

is just the set of outcomes that are at least as

good as x for that agent.

OK, then a theorem is that, as long as, in this environment, for all agents, all upper contour sets satisfy some nice properties,

then you can kind of run the procedure on the previous slide

and get this learnability result.

And so one of the nice conditions that we pose

is something called hyperplane uniqueness. This just says that if you have two different agents, then at any outcome point, if you look at the two upper contour sets of those two agents at that point, the supporting hyperplanes at that outcome point for the two agents should be distinct.

So, in particular, you can't have anything

like the picture on the right.

OK, so now let's look at a sort of situation in which you

can actually do a lot better, and it's going to be

the setting that I discussed previously of expected utility

preferences.

So just recall the setting: again, there are n uncertain states of the world, and the type is just a probability distribution over the n states.

And I should remark that there is a very trivial learning

algorithm in this case in which you can sort of elicit

the belief in every single state by doing like a binary search

over some fine-enough cover of the [0, 1] interval. So you can do that with, like, logarithmically many

questions, but that's going to be very easily

manipulable by the agent to sort of get you to maybe offer him

some sort of better outcome at the very end,

regardless of what his type is.
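That trivial per-state binary search can be sketched like this (a hypothetical interface, assuming the agent answers truthfully, which is exactly the assumption a strategic agent breaks):

```python
def elicit_one_belief(prefers_bet, lo=0.0, hi=1.0, tol=1e-4):
    # binary search for the probability the agent assigns to one state:
    # "do you prefer $c for sure, or $1 if this state occurs?"
    # a truthful expected-utility agent's indifference point is their belief
    while hi - lo > tol:
        c = (lo + hi) / 2
        if prefers_bet(c):   # still prefers the bet => belief is above c
            lo = c
        else:
            hi = c
    return (lo + hi) / 2

# truthful agent with a made-up belief of 0.37 in this state
answer = elicit_one_belief(lambda c: 0.37 > c)
```

Each answer halves the interval, so roughly log(1/tol) questions suffice per state.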

But we're able to get an incentive-compatible algorithm that achieves similar query complexity, differing only by some small sub-polynomial factors.

So this is going to be based on this so-called

disagreement-based algorithm for learning linear separators.

So, here, the agent holds a normal vector, and we're trying to learn the half space that the normal vector defines.

So this is a pretty simple algorithm to define,

so you start off with your hypothesis as being,

you know, the entire unit sphere,

so all unit-length vectors.

And you keep iterating the following procedure.

So maybe there's some unknown distribution

giving you unlabeled points.

And anytime there is a disagreement

in your current hypothesis set, that's

when you query the label of the point.

So, in particular, if my current hypothesis

set was this, like, blue-green region,

and I received the red point, well,

everything in my hypothesis set is

going to classify the red point in the same way,

so I would not ask for its label.

But if I received a green point, then there's some disagreement,

so I asked for the label.

And maybe I update my hypothesis set to the green sector

in the next step.

Right.
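Here is a tiny sketch of that disagreement-based loop for 2D halfspaces, with the version space kept as a finite set of candidate directions; the candidate grid, the point stream, and the true normal are all made up for illustration:

```python
import math
import random

def sign(t):
    return 1.0 if t >= 0 else -1.0

def cal_halfspace(candidates, points, oracle):
    # query a label only on points where surviving hypotheses disagree
    version_space = list(candidates)
    queries = 0
    for x in points:
        labels = {sign(w[0] * x[0] + w[1] * x[1]) for w in version_space}
        if len(labels) > 1:                 # disagreement region: ask
            y = oracle(x)
            queries += 1
            version_space = [w for w in version_space
                             if sign(w[0] * x[0] + w[1] * x[1]) == y]
    return version_space, queries

random.seed(0)
candidates = [(math.cos(2 * math.pi * i / 100),
               math.sin(2 * math.pi * i / 100)) for i in range(100)]
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
oracle = lambda x: sign(x[0])               # true normal is (1, 0)
vs, queries = cal_halfspace(candidates, points, oracle)
```

The true direction always survives the filtering, and once the version space has narrowed, most points fall outside the disagreement region and cost no label, which is the source of the savings over passive sampling.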

So this enjoys a nice sort of label complexity guarantee

that it sort of depends linearly on the VC dimension

of your hypothesis class.

And it's also an exponential improvement

in sort of the PAC learning sample complexity that you

get when you don't do this sort of intelligent discarding

of things that you already know.

And this theta thing is called the disagreement coefficient.

[COMPUTER CHIMING]

Right.

It's called the disagreement coefficient,

and that just depends on the distribution that's

giving you the points and sort of measures how quickly you

can sort of throw away points in your hypothesis space.

So our algorithm, here, is going to be a pretty simple modification

to this disagreement-based algorithm.

And we're basically going to use this spherical scoring

rule to sort of learn the same thing at each step,

but ensure that the questions we ask

are done so in an incentive-compatible manner.

So, in fact, the first step, now, our hypothesis set

is the probability simplex.

And we keep doing this thing where

we sample a uniformly random vector from the unit sphere.

Again, if there's no disagreement, you re-sample.

And, at every step, we're going to basically find

two other probabilities in our simplex,

project those two probabilities onto the sphere, such that asking the agent to choose between those two projections is exactly the same question as asking, like, OK, on which side of your half space does the vector v fall?

And then, finally, after however many rounds we need,

we pay the agent off based on his decision

on the last question.

And basically, this algorithm is going

to enjoy the same learning guarantee as the disagreement

algorithm, and it's also going to satisfy

this sort of incentive-compatibility

guarantee.

And, in fact, you can show more. The theorem has this, like, advantage parameter in the incentive-compatibility definition,

but it turns out that you can just

run this for sufficiently many rounds

such that any best responding agent will end up

reporting something that's in a small ball

around his true type.

And that's pretty much it, but I'll

conclude with what I think is an interesting question.

And that's we gave some sufficient conditions for when

you can do this incentive-compatible learning,

but, you know, you can ask, is there

some sort of precise combinatorial characterization

sort of like VC dimension for PAC learning, like Littlestone dimension for things like online

learning and so on?

So that's it.

Thanks.

[APPLAUSE]

SHAYAN OVEIS GHARAN: Questions?

AUDIENCE: You have an n to the 3/2 in the exponent there.

Do-- how-- what do you know about the--

how close to optimal that could be?

Or--

SIDDHARTH PRASAD: So I took the guarantees from the disagreement algorithm here. As far as I know, there aren't any, like, lower bounds.

And this is more or less just an instantiation of that.

And the extra terms come from, like--

in our learning algorithm, like, we use total variation distance, so there's some extra terms you gather when you move between different metrics.

But, yeah, so as far as I know, I'm not sure how tight that is.

But, yeah.

SHAYAN OVEIS GHARAN: Great, thanks.

[APPLAUSE]

So the last talk of the session is

on Implementation in Advised Strategies: Welfare Guarantees from Posted-Price Mechanisms when Demand Queries are NP-Hard.

Linda Cai will give the talk.

LINDA CAI: Is that?

Just show me.

SHAYAN OVEIS GHARAN: It should come.

Let us wait one second.

It should.

Hold on.

This is not connected to anything.

LINDA CAI: No.

SHAYAN OVEIS GHARAN: Hold on.

CREW: Well, there we go.

LINDA CAI: Great.

CREW: That will help her.

LINDA CAI: Cool.

Hi, I'm Linda, and this is joint work with Clay Thomas and Matt

Weinberg.

So we're concerned with combinatorial auctions

where there are n bidders and m items, each bidder i has a valuation function vi, and the auctioneer awards disjoint sets of items to each bidder while charging some prices.

So the goal of the mechanism design

here is to maximize welfare.

A central question for mechanism design in combinatorial auctions is what kind of welfare

can the mechanism guarantee considering that the agents are

self-interested and strategic?

So the most common way to address this problem

is to use truthful mechanisms, which means that it is always

in a bidder's interest to be truthful,

to report truthfully, regardless of what others do.

However, we will observe, here, that the limitations on what truthful mechanisms can achieve do not always translate into overall limitations on welfare guarantees.

So there are usually two types of mechanisms that are studied.

One, what we call computationally efficient,

is when we assume the auctioneer and the bidders

to be computationally bounded and can only

compute functions in P. The other is communication

efficient.

This is when the auctioneer and the bidders

are computationally unbounded and can compute anything,

but can only communicate polynomially many bits.

Here, we study two very commonly considered classes

of valuations, submodular and XOS.

As you can see, here, there is a big gap

between what communication-efficient truthful mechanisms and computationally-efficient truthful mechanisms can achieve. One is log log m to the cube. The other cannot do better than the square root of m.

So we wondered, why is there such a big separation?

And, first, we considered the case for XOS.

In the communication model, the state-of-the-art truthful

mechanism, the price-learning mechanism from Assadi and Singla, 2019, has some learning components and preprocessing, but it's, at its core, a posted-price mechanism: it visits bidders one at a time, gives each bidder some price vector, and then asks the bidder to compute, what is your favorite set, the one that maximizes your utility?

This is actually called the demand query,

which is NP-hard to compute both for XOS

and submodular classes.
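For intuition, a demand query at prices p asks for the bundle maximizing v(S) minus p(S). A brute-force sketch (fine for a handful of items, but enumerating all 2^m bundles, which is the crux) with a made-up additive valuation:

```python
from itertools import combinations

def demand_query(items, value, prices):
    # utility-maximizing bundle at the posted prices; this enumerates
    # every bundle, which poly-time bidders cannot do in general for
    # submodular/XOS valuations
    best, best_util = frozenset(), 0.0
    for r in range(len(items) + 1):
        for bundle in combinations(items, r):
            s = frozenset(bundle)
            util = value(s) - sum(prices[i] for i in s)
            if util > best_util:
                best, best_util = s, util
    return best, best_util

# made-up additive valuation and price vector
vals = {"a": 4, "b": 3, "c": 1}
value = lambda s: sum(vals[i] for i in s)
prices = {"a": 2, "b": 5, "c": 0}
bundle, util = demand_query(list(vals), value, prices)
```

With an additive valuation the answer is easy (take each item priced below its value); the hardness in the talk comes from richer valuation classes, where no such shortcut exists.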

And this is the main reason why the price-learning mechanism is not computationally efficient. Everything else is poly-time.

So in a computational model, we observe

that it is NP-hard for a truthful mechanism

to achieve anything better than a square root m approximation.

However, this fact is not very surprising because even if we

don't consider incentives, any approximation algorithm cannot

achieve better than the square root of m.

Now, when we look at the submodular case, things are a little different.

So the state-of-the-art truthful mechanism for the communication model, as well as the lower bound in the computational model, is exactly the same.

However, we could do much better with approximation algorithms

without considering incentives.

In fact, we could achieve a constant approximation.

So something strange is happening here,

where the approximation algorithm has some improvement,

but somehow none of that has translated into the case where

we consider incentives.

In order to see what is happening here, let's

draw a parallel to a similar case, where

there is only one buyer and m items,

but the buyer can only receive k out of the m items.

So finding the maximum-value set of k items is NP-hard.

Here, you could see that the strategic concern

is very simple.

Basically, the auctioneer-- both the auctioneer and the buyer

want the buyer to find a set that's as good as possible.

Indeed, in the communication model,

we could achieve optimal welfare by simply asking the bidder,

what is your favorite set?

And when we consider an approximation algorithm,

things are also not very bad.

There exists a constant approximation algorithm.

But when we look at truthful mechanisms

in the computationally-bounded model,

now, it is NP-hard to achieve anything better

than square root of m.

But is it really true that there is no mechanism that

can guarantee better than a square root of m approximation

when bidders are strategic?

Consider a bidder who knows about this approximation

algorithm and participates in Set-For-Free.

It seems reasonable to expect that the welfare will be as good

as what the approximation algorithm gives,

instead of only being root m.

But truthfulness does not exactly

describe the kind of guarantee Set-For-Free gives,

in this case.

So we consider a different solution concept, which

involves a certain advice.

We define advice as a procedure that takes a tentative strategy

and outputs either the original strategy or a strategy

that dominates it.

To make sure that you can't just keep

improving over and over again with this advice,

we also assume the advice is idempotent.

So, here, such advice can only improve your strategy.

It seems reasonable to assume that a rational bidder, given

this oracle, will first use this oracle

to try to improve their strategy,

and then play that strategy.

We call this following the advice.

And now, we say that a polynomial-time mechanism

guarantees an alpha approximation in implementation

in advised strategies if there exists a poly-time advice such

that an alpha approximation is

achieved whenever all of the players follow the advice.

We observed that this definition is

equivalent to what is called algorithmic implementation

in undominated strategies, by Babaioff, Lavi, and Pavlov, 2009.

I will not go into the details why they're equivalent,

but if you're interested, you can talk to me after.

So, now, we can see that we have a way

to describe the sort of guarantee Set-For-Free gives.

We consider the advice that computes the approximation

algorithm's set T and returns the better set out

of the user's input and T.

And we now say that Set-For-Free guarantees an e

over e minus 1 approximation in implementation

in advised strategies with advice A.
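
As a hedged sketch of this kind of advice (function names and the coverage-style valuation below are illustrative, not from the paper): run the standard greedy algorithm for monotone submodular maximization under a cardinality constraint, which yields an e/(e-1) approximation, and return the better of the bidder's tentative set and the greedy set. Returning the maximum of the two makes the advice idempotent.

```python
def greedy_cardinality(valuation, m, k):
    """Standard greedy for monotone submodular maximization under a
    cardinality constraint: repeatedly add the item with the largest
    marginal value. Achieves a (1 - 1/e) fraction of the optimum."""
    chosen = set()
    for _ in range(k):
        best_item, best_gain = None, 0.0
        for i in range(m):
            if i not in chosen:
                gain = valuation(chosen | {i}) - valuation(chosen)
                if gain > best_gain:
                    best_item, best_gain = i, gain
        if best_item is None:  # no item adds positive value
            break
        chosen.add(best_item)
    return chosen

def advice(valuation, m, k, tentative_set):
    """Advice operator: return the better of the bidder's tentative
    set and the greedy set. It can only (weakly) improve the strategy,
    and applying it twice changes nothing, so it is idempotent."""
    greedy_set = greedy_cardinality(valuation, m, k)
    if valuation(greedy_set) > valuation(tentative_set):
        return greedy_set
    return tentative_set
```

With this advice, a bidder who would otherwise report a poor set ends up reporting at least the greedy set, which is where the e/(e-1) guarantee comes from.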

So let's go back to our main topic of submodular welfare

maximization.

The question we now consider is,

can the price-learning mechanism be modified into a polynomial-time

mechanism under this new

solution concept as well?

And we answer it in the affirmative.

So we have said that the main part that

is not computationally efficient is the demand query.

So we first try to find some notion of approximate demand

query for submodular bidders, and then we

can use such an approximate demand query as advice

in conjunction with the price-learning

mechanism.

The first, most natural attempt is simply

to define a c-approximate demand oracle, where the oracle returns

a set that gives you a c fraction of the utility,

the maximum utility under a fixed price vector P. However,

it turns out that you cannot design a very good demand

oracle in this sense.

So, instead, we considered a bicriteria approximate demand

oracle, which returns a set that c-approximates

the maximum utility, not under the original price vector,

but when the prices are blown up by a factor of 1 over d.

And to see why this is interesting,

here is our main theorem:

we proved that if V is a subclass of XOS valuations,

and we are given a (c, d)-approximate demand oracle,

then, in fact, the price-learning mechanism with this oracle

can achieve a max of 1 over c, 1 over d, times

log log m cubed approximation guarantee in implementation

in advised strategies.

Therefore, if we can find a constant bicriteria

approximate demand oracle for submodular bidders,

then we have proven that the price-learning mechanism

with this oracle will give a good welfare guarantee.

And, indeed, we find such a (1/2, 1/2)-approximate demand

oracle, and it's actually

a quite simple algorithm, where we simply

go through all the items in order,

and we see if the marginal value an item provides

is greater than two times its price.

If so, we put it into the set.

Otherwise, we do not.

And that concludes the proof.
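
A minimal sketch of this oracle (the interface and names are assumptions; access to the valuation is simplified to a value oracle): scan the items in a fixed order and keep an item whenever its marginal value, given what has already been taken, exceeds twice its price.

```python
def half_half_demand_oracle(valuation, prices):
    """Sketch of the (1/2, 1/2)-approximate demand oracle described
    above: greedily take item i if its marginal value on top of the
    current set exceeds 2 * prices[i]."""
    chosen = set()
    for i in range(len(prices)):
        marginal = valuation(chosen | {i}) - valuation(chosen)
        if marginal > 2 * prices[i]:
            chosen.add(i)
    return chosen
```

Note the bicriteria flavor: the doubled price threshold corresponds to comparing against the optimum at prices blown up by a factor of 2.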

So, in conclusion, we used the solution concept of implementation

in advised strategies to show that the price-learning mechanism

for submodular welfare maximization

maintains its approximation guarantee when buyers follow

the advice recommended by this (1/2, 1/2)-approximate demand

oracle.

We, again, observe here that implementation

in advised strategies is equivalent to the concept of

algorithmic implementation proposed in 2009.

And to the best of our knowledge,

this is the first application of such a concept since 2009.

They applied it in a different context.

So we also wonder, here, maybe for future work,

if there are more applications.

Thank you.

Thank you for listening.

[APPLAUSE]

SHAYAN OVEIS GHARAN: Questions?

OK.

Well, thank you.

[APPLAUSE]

So we're going to have lunch, now.

And then we're going to start business meeting at 1:00.

It's all good, so let's [INAUDIBLE].

[INTERPOSING VOICES]

AUDIENCE: Joe, wait for them.

[SIDE CONVERSATION]

AUDIENCE: That's a better [INAUDIBLE].