

The Roots, 2006

A game is ‘any interaction between agents that is governed by a set of rules specifying the possible moves for each participant and a set of outcomes for each possible combination of moves’ (Hargreaves Heap and Varoufakis, 2004, p. 3)

Aim: describe rational behaviour in social interactions

Wouldn’t it be cool if we had a way of saying, for any situation, how interacting rational agents would act?
Suppose we could specify a general recipe which would tell us,
for any situation at all, which actions rational agents would
perform. Wouldn’t that be useful for understanding social interactions?

‘we wish to find the mathematically complete principles which define “rational behavior” for the participants in a social economy, and to derive from them the general characteristics of that behavior’

von Neumann & Morgenstern, 1953, p. 31

Here is a very simple situation in which you face a choice.
Assuming you prefer £10 to £0, I can predict which box you will open.

Alternatively, observing which box you open will tell me whether you prefer
to have £10 or £0.
(significance [for later]: Revealed Preference interpretation)

                         me
                  put £10 in box A   put £10 in box B
you   open box A     £10, £0             £0, £0
      open box B     £0, £0              £10, £0

(cells show: your payoff, my payoff)

But now consider this situation.
Here it is not just your choice that determines the outcome, but also mine.
But you don’t have any information about what I will do.
So now there’s nothing for you to do but pick a box at random.

The tacit assumption is that I am getting nothing no matter what I do.
But suppose we change the game slightly ... suppose you offer me
£2 to put the money in box A.
Then your situation changes ...

                         me
                  put £10 in box A   put £10 in box B
you   open box A     £8, £2              £0, £0
      open box B     -£2, £2             £10, £0

(cells show: your payoff, my payoff)

If I prefer £2 to £0[, and if I am rational, and ...] then I will put the money in box A.
If you know all this, you can predict my action.
And if you can predict my action, you can
rationally choose to open box A.

Let me pause over this.
Suppose that I don’t care about your reward, only my own.
Suppose also that I prefer £2 to £0.
Then regardless of what you do, I should put the money into box A

This is a relatively simple interaction: the outcome my actions bring about for me does not
depend at all on what you do.

By contrast, which outcomes your actions bring about depends on what actions I select.

Note also that it is rational for you to choose box A even if your preferred outcome would be to get £10 rather than £8.
As a rational agent, you want to best satisfy your preferences.
But of course you can’t just follow the money: instead you have to take into account how I am likely to act.
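This chain of reasoning can be sketched in code. In the following (the action names, payoff encoding, and helper functions are mine, not the talk’s), we check that putting the money in box A strictly dominates for me, then compute your best response to that predicted action:

```python
# A minimal sketch of the reasoning above for the modified boxes game.
# payoffs[(your action, my action)] = (your payoff, my payoff) in pounds.
payoffs = {
    ("open A", "put in A"): (8, 2),
    ("open A", "put in B"): (0, 0),
    ("open B", "put in A"): (-2, 2),
    ("open B", "put in B"): (10, 0),
}

my_actions = ["put in A", "put in B"]
your_actions = ["open A", "open B"]

def my_payoff(yours, mine):
    return payoffs[(yours, mine)][1]

def your_payoff(yours, mine):
    return payoffs[(yours, mine)][0]

# An action of mine is strictly dominant if it pays me more than every
# alternative no matter which box you open.
dominant = [m for m in my_actions
            if all(my_payoff(y, m) > my_payoff(y, other)
                   for y in your_actions
                   for other in my_actions if other != m)]
print(dominant)  # ['put in A']

# Predicting my dominant action, your best response is to open box A,
# even though £8 is less than the £10 you would most prefer.
predicted = dominant[0]
best = max(your_actions, key=lambda y: your_payoff(y, predicted))
print(best)  # 'open A'
```

Note that the best response follows the prediction, not the biggest number in the table: you settle for £8 because the £10 cell is unreachable given how I will rationally act.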

How you act

is a function of two things:

your preferences

and your beliefs about how others will act.


A Nash equilibrium for a game

is a profile of actions

from which no agent can unilaterally profitably deviate.

see Osborne & Rubinstein, 1994, p. 14; Dixit et al., 2014, p. 95

Let’s see another example

                          Prisoner X
                      resist    confess
Prisoner Y  resist     3, 3      0, 4
            confess    4, 0      1, 1

Consider this profile of actions ...
... you might think that these are the most rational actions to perform
since they give each Prisoner what she most prefers. But note that:

Prisoner X can improve her outcome by unilaterally deviating from this profile ...

So ⟨confess, confess⟩ is the only Nash equilibrium.
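The deviation check in the definition can be run mechanically. Here is a hedged sketch (the function and encoding are mine): enumerate every action profile of a two-player game and keep those from which neither player can profitably deviate on her own.

```python
# Enumerate pure-strategy Nash equilibria by brute force: a profile
# (r, c) is an equilibrium if neither player gains by unilaterally
# switching to another of her own actions.

def pure_nash(row_actions, col_actions, payoff):
    """payoff[(r, c)] = (row player's payoff, column player's payoff)."""
    equilibria = []
    for r in row_actions:
        for c in col_actions:
            row_ok = all(payoff[(r, c)][0] >= payoff[(r2, c)][0]
                         for r2 in row_actions)
            col_ok = all(payoff[(r, c)][1] >= payoff[(r, c2)][1]
                         for c2 in col_actions)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# The Prisoner's Dilemma from the table above
# (Prisoner Y chooses the row, Prisoner X the column).
pd = {
    ("resist", "resist"): (3, 3), ("resist", "confess"): (0, 4),
    ("confess", "resist"): (4, 0), ("confess", "confess"): (1, 1),
}
print(pure_nash(["resist", "confess"], ["resist", "confess"], pd))
# [('confess', 'confess')]
```

From ⟨resist, resist⟩ each Prisoner can profitably deviate to confess, so only ⟨confess, confess⟩ survives the check.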

                          Gangster X
                      back off   fight
Gangster Y  back off    3, 3      1, 4
            fight       4, 1      0, 0
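Running the same unilateral-deviation check on the gangster game shows a contrast with the Prisoner’s Dilemma. A self-contained sketch (encoding mine, not the talk’s):

```python
# Check every action profile of the gangster game above for profitable
# unilateral deviations.
# payoff[(y, x)] = (Gangster Y's payoff, Gangster X's payoff).
payoff = {
    ("back off", "back off"): (3, 3), ("back off", "fight"): (1, 4),
    ("fight", "back off"): (4, 1), ("fight", "fight"): (0, 0),
}
actions = ["back off", "fight"]

equilibria = [
    (y, x)
    for y in actions for x in actions
    if all(payoff[(y, x)][0] >= payoff[(y2, x)][0] for y2 in actions)
    and all(payoff[(y, x)][1] >= payoff[(y, x2)][1] for x2 in actions)
]
print(equilibria)  # [('back off', 'fight'), ('fight', 'back off')]
```

Unlike the Prisoner’s Dilemma, this game has two pure-strategy Nash equilibria, one in which each gangster backs off while the other fights; the equilibrium concept alone does not tell the players which one to coordinate on.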

Game Theory

Aim: describe rational behaviour in social interactions.

An action is rational

in a noncooperative game

if it is a member of a Nash equilibrium?

Why is the notion of a Nash equilibrium so cool? Consider:

How you act

is a function of two things:

your preferences

and your beliefs about how others will act.

Your beliefs about how others will act

are a function of two things:

your beliefs about their preferences

and your beliefs about how they believe others will act.

Your beliefs about how they believe others will act ...

Consider all this complexity.
The notion of a Nash equilibrium cuts through it.
It allows us to identify rationally optimal actions in a way that doesn’t
involve working through how these beliefs might be formed.
Or does it?