Eco514 - Game Theory
Lecture 10: Extensive Games with (Almost) Perfect Information
Marciano Siniscalchi
October 19, 1999
Introduction
Beginning with this lecture, we focus our attention on dynamic games. The majority of games
of economic interest feature some dynamic component, and most often payoff uncertainty as
well.
The analysis of extensive games is challenging in several ways. At the most basic level,
describing the possible sequences of events (choices) which define a particular game form is
not problematic per se; yet, different formal definitions have been proposed, each with its
pros and cons.
Representing the players’ information as the play unfolds is nontrivial: to some extent,
research on this topic may still be said to be in progress.
The focus of this course will be on solution concepts; in this area, subtle and unexpected
difficulties arise, even in simple games. The very representation of players' beliefs as the play
unfolds is problematic, at least in games with three or more players. There has been a fierce
debate on the "right" notion of rationality for extensive games, but no consensus seems to
have emerged among theorists.
We shall investigate these issues in due course. Today we begin by analyzing a particu-
larly simple class of games, characterized by a natural multistage structure. I should point
out that, perhaps partly due to its simplicity, this class encompasses the vast majority of
extensive games of economic interest, especially if one allows for payoff uncertainty. We shall
return to this point in the next lecture.
Games with Perfect Information
Following OR, we begin with the simplest possible extensive-form game. The basic idea is
as follows: play proceeds in stages, and at each stage one (and only one) player chooses an
action. Sequences of actions are called histories; some histories are terminal, i.e. no further
actions are taken, and players receive their payoffs. Moreover, at each stage every player
gets to observe all previous actions.
Definition 1 An extensive-form game with perfect information is a tuple $\Gamma = (N, A, H, P, Z, U)$ where:
$N$ is a set of players;
$A$ is a set of actions;
$H$ is a collection of finite and countable sequences of elements of $A$, such that:
(i) $\emptyset \in H$;
(ii) $(a^1, \ldots, a^k) \in H$ implies $(a^1, \ldots, a^\ell) \in H$ for all $\ell < k$;
(iii) if $h = (a^1, \ldots, a^k, \ldots)$ is an infinite sequence such that $(a^1, \ldots, a^k) \in H$ for all $k \geq 1$, then $h \in H$.
$Z$ is the set of terminal histories: that is, $(a^1, \ldots, a^k) \in Z$ if and only if $(a^1, \ldots, a^k) \in H$ and $(a^1, \ldots, a^k, a) \notin H$ for all $a \in A$. Also let $X = H \setminus Z$. All infinite histories are terminal.
$P : X \to N$ is the player function, associating with each non-terminal history $h \in X$ the player $P(h)$ on the move after history $h$.
$U = (U_i)_{i \in N} : Z \to \mathbb{R}^N$ is the payoff function, associating a vector of payoffs with every terminal history.
I differ from OR in two respects: first, I find it useful to specify the set of actions in
the definition of an extensive-form game. Second, at the expense of some (but not much!)
generality, I represent preferences among terminal nodes by means of a vN-M utility function.
Interpreting Definition 1
A few comments on formal aspects are in order. First, actions are best thought of as move
labels; what really defines the game is the set $H$ of sequences. If one wishes, one can think of
$A$ as a product set (i.e. every player gets her own set of move labels), but this is inessential.
Histories encode all possible partial and complete plays of the game. Indeed, it is
precisely by spelling out what the possible plays are that we fully describe the game under
consideration!
Thus, consider the following game: $N = \{1, 2\}$; $A = \{a_1, d_1, a_2, d_2, A, D\}$; $H = \{\emptyset, (d_1), (a_1), (a_1, D), (a_1, A), (a_1, A, d_2), (a_1, A, a_2)\}$;
thus, $Z = \{(d_1), (a_1, D), (a_1, A, d_2), (a_1, A, a_2)\}$ and $X = \{\emptyset, (a_1), (a_1, A)\}$; finally, $P(\emptyset) =
P((a_1, A)) = 1$, $P((a_1)) = 2$, and $U((d_1)) = (2, 2)$, $U((a_1, D)) = (1, 1)$, $U((a_1, A, d_2)) = (0, 0)$,
$U((a_1, A, a_2)) = (3, 3)$. Then $\Gamma = (N, A, H, P, Z, U)$ is the game in Figure 1.
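As a sanity check, the example above can be encoded directly. The following is an illustrative Python sketch, not something from OR: the names mirror Definition 1, and the derivation of $Z$ and $X$ from $H$ follows the terminality condition.

```python
# Illustrative sketch: the game of Figure 1 encoded as in Definition 1.
# Histories are tuples of action labels; () is the empty history.
A = {"a1", "d1", "a2", "d2", "A", "D"}
H = {(), ("d1",), ("a1",), ("a1", "D"), ("a1", "A"),
     ("a1", "A", "d2"), ("a1", "A", "a2")}

# Terminal histories: those that no action in A extends within H.
Z = {h for h in H if not any(h + (a,) in H for a in A)}
X = H - Z                                    # non-terminal histories

P = {(): 1, ("a1",): 2, ("a1", "A"): 1}      # player function on X
U = {("d1",): (2, 2), ("a1", "D"): (1, 1),   # payoffs at terminal histories
     ("a1", "A", "d2"): (0, 0), ("a1", "A", "a2"): (3, 3)}
```

Note that $Z$ and $X$ need not be specified separately: they are determined by $H$.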
The empty history $\emptyset$ is always an element of $H$, and denotes the initial point of the game.
Part (ii) in the definition of $H$ says that every sub-history of a history $h$ is itself a history in
its own right. Part (iii) is a "limit" definition of infinite histories. Note that infinite histories
are logically required to be terminal.
A key assumption is that, whenever a history $h$ occurs, all players (in particular, Player
$P(h)$) get to observe it.
[Figure 1: A perfect-information game. Player 1 moves first: $d_1$ ends the game with payoffs (2,2); after $a_1$, Player 2 chooses between $D$, with payoffs (1,1), and $A$; after $A$, Player 1 chooses between $d_2$, with payoffs (0,0), and $a_2$, with payoffs (3,3).]
Strategies and normal form(s)
Definition 1 is arguably a "natural" way of describing a dynamic game, and one that is at
least implicit in most applications of the theory.
According to our formulation, actions are the primitive objects of choice. However, the
notion of a strategy, i.e. a history-contingent plan, is also relevant:
Definition 2 Fix an extensive-form game with perfect information $\Gamma$. For every history
$h \in X$, let $A(h) = \{a \in A : (h, a) \in H\}$ be the set of actions available at $h$. Then, for
every player $i \in N$, a strategy is a function $s_i : P^{-1}(i) \to A$ such that, for every $h$ such that
$P(h) = i$, $s_i(h) \in A(h)$. Denote by $S_i$ and $S$ the set of strategies of Player $i$ and the set of
all strategy profiles, respectively.
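To make the definition concrete, here is a hedged Python sketch enumerating the strategies in the game of Figure 1; the helper names `A_of` and `strategies` are mine, not OR's.

```python
from itertools import product

# Game data for Figure 1 (see Definition 1).
A = {"a1", "d1", "a2", "d2", "A", "D"}
H = {(), ("d1",), ("a1",), ("a1", "D"), ("a1", "A"),
     ("a1", "A", "d2"), ("a1", "A", "a2")}
P = {(): 1, ("a1",): 2, ("a1", "A"): 1}

def A_of(h):
    """Actions available at non-terminal history h."""
    return [a for a in sorted(A) if h + (a,) in H]

def strategies(i):
    """All strategies of player i: one available action per history in P^{-1}(i)."""
    hs = [h for h in P if P[h] == i]
    return [dict(zip(hs, choice)) for choice in product(*(A_of(h) for h in hs))]

# Player 1 moves at () and at ("a1", "A"): 2 x 2 = 4 strategies; Player 2 has 2.
```

Observe that the enumeration ranges over all histories where a player moves, including those her own earlier choices rule out; this point is taken up below.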
Armed with this definition (to which we shall need to return momentarily) we are ready
to extend the notion of Nash equilibrium to extensive games.
Definition 3 Fix an extensive-form game with perfect information $\Gamma$. The outcome function
$O$ is the map $O : S \to Z$ defined by: $O(s)$ is the history $h = (a^1, \ldots, a^k) \in Z$ such that
$$\forall \ell < k : \quad a^{\ell+1} = s_{P((a^1, \ldots, a^\ell))}((a^1, \ldots, a^\ell)).$$
The normal form of the game $\Gamma$ is $G_\Gamma = (N, (S_i, u_i)_{i \in N})$, where $u_i(s) = U_i(O(s))$.
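In computational terms, the outcome map just follows the profile's prescriptions from the empty history onward. A hedged Python sketch for the Figure 1 game (function names are mine):

```python
# Figure 1 data: player function on non-terminal histories, payoffs at terminal ones.
P = {(): 1, ("a1",): 2, ("a1", "A"): 1}
U = {("d1",): (2, 2), ("a1", "D"): (1, 1),
     ("a1", "A", "d2"): (0, 0), ("a1", "A", "a2"): (3, 3)}

def outcome(s):
    """Trace the terminal history generated by profile s = {player: strategy}."""
    h = ()
    while h in P:                 # h is non-terminal exactly when P(h) is defined
        h = h + (s[P[h]][h],)     # append the action prescribed by the mover
    return h

def u(i, s):
    """Normal-form payoff u_i(s) = U_i(O(s)); players indexed 1 and 2."""
    return U[outcome(s)][i - 1]

profile = {1: {(): "a1", ("a1", "A"): "a2"}, 2: {("a1",): "A"}}
```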
The outcome function simply traces out the history generated by a strategy profile. The
normal-form payoff function $u_i$ is then derived from $U_i$ and $O$ in the natural way. Finally:
Definition 4 Fix an extensive-form game with perfect information $\Gamma$. A pure-strategy Nash
equilibrium of $\Gamma$ is a profile of strategies $s \in S$ which constitutes a Nash equilibrium of its
normal form $G_\Gamma$; a mixed-strategy Nash equilibrium of $\Gamma$ is a Nash equilibrium of the mixed
extension of $G_\Gamma$.
Thus, in the game of Figure 1, both $(a_1a_2, A)$ and $(d_1d_2, D)$ are Nash equilibria.
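This claim can be verified by brute force over the normal form. The following Python sketch checks every unilateral deviation; all helper names are mine, not OR's.

```python
from itertools import product

# Figure 1 data (see Definition 1).
A = {"a1", "d1", "a2", "d2", "A", "D"}
H = {(), ("d1",), ("a1",), ("a1", "D"), ("a1", "A"),
     ("a1", "A", "d2"), ("a1", "A", "a2")}
P = {(): 1, ("a1",): 2, ("a1", "A"): 1}
U = {("d1",): (2, 2), ("a1", "D"): (1, 1),
     ("a1", "A", "d2"): (0, 0), ("a1", "A", "a2"): (3, 3)}

def strategies(i):
    hs = [h for h in P if P[h] == i]
    av = [[a for a in sorted(A) if h + (a,) in H] for h in hs]
    return [dict(zip(hs, c)) for c in product(*av)]

def outcome(s):
    h = ()
    while h in P:
        h = h + (s[P[h]][h],)
    return h

def is_nash(s):
    """No player gains by a unilateral deviation in the normal form."""
    for i in (1, 2):
        for si in strategies(i):
            dev = {**s, i: si}
            if U[outcome(dev)][i - 1] > U[outcome(s)][i - 1]:
                return False
    return True

a1a2_A = {1: {(): "a1", ("a1", "A"): "a2"}, 2: {("a1",): "A"}}
d1d2_D = {1: {(): "d1", ("a1", "A"): "d2"}, 2: {("a1",): "D"}}
```

By contrast, a profile such as $(a_1a_2, D)$ fails the test: Player 2 profitably deviates to $A$.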
Observe that a strategy indicates choices even at histories which previous choices dictated
by the same strategy prevent from obtaining. In the game of Figure 1, for instance, $d_1d_2$ is
a strategy of Player 1, although the history $(a_1, A)$ cannot obtain if Player 1 chooses $d_1$ at $\emptyset$.
It stands to reason that $d_2$ in the strategy $d_1d_2$ cannot really be a description of Player
1's action: she will never actually play $d_2$!
We shall return to this point in the next lecture. For the time being, let us provisionally
say that $d_2$ in the context of the equilibrium $(d_1d_2, D)$ represents only Player 2's beliefs about
Player 1's action in the counterfactual event that she chooses $a_1$ at $\emptyset$, and Player 2 follows
it with $A$.
The key observation here is that this belief is crucial in sustaining $(d_1d_2, D)$ as a Nash
equilibrium.
Games with observable actions and chance moves
The beauty of the OR notation becomes manifest once one adds the possibility that more
than one player might choose an action simultaneously at a given history. The resulting game
is no longer one of perfect information, because there is some degree of strategic uncertainty.
Yet, we maintain the assumption that histories are observable: that is, every player on the
move at a history $h$ observes all previous actions and action profiles which comprise $h$.
The OR definition is a bit vague, so let me provide a rigorous, inductive one. I also add
the possibility of chance moves, i.e. exogenous uncertainty.
Definition 5 An extensive-form game with observable actions and chance moves is a tuple
$\Gamma = (N, A, H, P, Z, U, f_c)$ where:
$N$ is a set of players; Chance, denoted by $c$, is regarded as an additional player, so $c \notin N$.
$A$ is a set of actions;
$H$ is a set of sequences whose elements are points in $\prod_{i \in J} A$ for some $J \subseteq N \cup \{c\}$;
$Z$ and $X$ are as in Definition 1;
$P$ is the player correspondence $P : X \rightrightarrows N \cup \{c\}$;
$U : Z \to \mathbb{R}^N$ is as in Definition 1;
$H$ satisfies the conditions in Definition 1. Moreover, for every $k \geq 1$, $(a^1, \ldots, a^k) \in H$
implies that $(a^1, \ldots, a^{k-1}) \in H$ and $a^k \in \prod_{i \in P((a^1, \ldots, a^{k-1}))} A$.
For every $i \in N \cup \{c\}$, let $A_i(h) = \{a_i \in A : \exists a_{-i} \in \prod_{j \in P(h) \setminus \{i\}} A \text{ s.t. } (h, (a_i, a_{-i})) \in H\}$.
Then $f_c : \{h : c \in P(h)\} \to \Delta(A)$ indicates the probability of each chance move, and
$f_c(h)(A_c(h)) = 1$ for all $h$ such that $c \in P(h)$.
The definition is apparently complicated, but the underlying construction is rather natural:
at each stage, we allow more than one player (including Chance) to pick an action; the
chosen profile then becomes publicly observable. We quite simply replace individual actions
with action profiles in the definition of a history, and adapt the notation accordingly.
Remark 0.1 Let $A(h) = \{a \in \prod_{i \in P(h)} A : (h, a) \in H\}$. Then $A(h) = \prod_{i \in P(h)} A_i(h)$.
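The remark can be illustrated with a toy one-stage simultaneous-move game; the stage game and all names in this Python sketch are mine, chosen only for illustration.

```python
from itertools import product

# One stage at which players 1 and 2 move simultaneously: each element of a
# (non-empty) history is an action profile, here a pair (a_1, a_2).
A1, A2 = ["T", "B"], ["L", "R"]              # individual action labels
H = {()} | {(p,) for p in product(A1, A2)}   # empty history plus one stage

# A(h) at the empty history: the set of available profiles...
A_empty = {p for (p,) in H - {()}}

# ...which, as in Remark 0.1, is exactly the product of the individual A_i(h).
assert A_empty == set(product(A1, A2))
```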
The definition of a strategy needs minimal modifications:
Definition 6 Fix an extensive-form game with observable actions and chance moves $\Gamma$.
Then, for every player $i \in N \cup \{c\}$, a strategy is a function $s_i : \{h : i \in P(h)\} \to A$ such
that, for every $h$ such that $i \in P(h)$, $s_i(h) \in A_i(h)$. Denote by $S_i$ and $S$ the set of strategies
of Player $i$ and the set of all strategy profiles.
In the absence of chance moves, Definition 4 applies verbatim to the new setting. You can
think about how to generalize it with chance moves (we do not really wish to treat Chance
as an additional player in a normal-form game, so we need to redefine the payoff functions
in the natural way). Finally, the definition of Nash equilibrium requires no change.
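The redefinition of payoff functions alluded to in the parenthetical can be sketched under strong simplifying assumptions: a single initial chance move, with play deterministic given the players' profile thereafter. All names and numbers in this Python sketch are illustrative.

```python
# f_c: distribution over chance's move at the (single) history where c moves.
f_c = {"rain": 0.3, "sun": 0.7}

# Payoff vectors indexed by (chance's move, players' pure profile); players 0, 1.
U = {("rain", "go"): (1.0, 0.0), ("sun", "go"): (2.0, 1.0),
     ("rain", "stay"): (0.0, 0.0), ("sun", "stay"): (0.0, 2.0)}

def expected_payoff(i, s):
    """u_i(s): average U_i over chance's moves, holding the players' profile fixed."""
    return sum(p * U[(c, s)][i] for c, p in f_c.items())
```

Nash equilibrium is then defined exactly as before, but with respect to these expected payoffs.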
For those of you who are used to the traditional, tree-based definition of an extensive
game, note that you need to use information sets in order to describe games without perfect
information, but with observable actions. That is, you need to use the full expressive power
of the tree-based notation in order to describe what is a slight and rather natural extension
of perfect-information games.¹
Most games of economic interest are games with observable actions, albeit possibly with
payoff uncertainty; hence, the OR notation is sufficient to deal with most applied problems
(payoff uncertainty is easily added to the basic framework, as we shall see).
¹On the other hand, the OR notation is equivalent to the standard one for games with perfect information:
just call histories "nodes", actions "arcs", terminal histories "leaves" and $\emptyset$ "root".