Eco514 - Game Theory
Lecture 14: General Extensive Games
Marciano Siniscalchi
November 10, 1999
Introduction
[By and large, I will follow OR, Chapters 11 and 12, so I will keep these notes to a minimum.]
Games with observed actions and payoff uncertainty
Not all dynamic models of strategic interaction fit within the category of games with observed actions we have developed in the previous lectures. In particular, no allowance was made for payoff uncertainty.
On the other hand, it is very easy to augment the basic model to allow for it. I am going to follow OR very closely here:
Definition 1 An extensive-form game with observed actions and incomplete information (OR: a Bayesian extensive game with observed actions) is a tuple $\Gamma = (N, A, H, P, Z, (\Theta_i, p_i, U_i)_{i \in N})$ where:

$N$ is a set of players;

$A$ is a set of actions;

$H$ is a collection of finite and countable sequences of vectors of elements of $A$, such that:
(i) $\emptyset \in H$;
(ii) $(a^1, \ldots, a^k) \in H$ implies $(a^1, \ldots, a^\ell) \in H$ for all $\ell < k$;
(iii) if $h = (a^1, \ldots, a^k, \ldots)$ and $(a^1, \ldots, a^k) \in H$ for all $k \geq 1$, then $h \in H$.

$Z$ is the set of terminal histories; that is, $(a^1, \ldots, a^k) \in Z$ iff $(a^1, \ldots, a^k) \in H$ and $(a^1, \ldots, a^k, a) \notin H$ for all $a \in A$. Also let $X = H \setminus Z$. All infinite histories are terminal.

$P : X \Rightarrow N$ is the player correspondence.

$\Theta_i$ is a set of payoff types; write $\Theta = \prod_{i \in N} \Theta_i$.

$p_i$ is a probability distribution over $\Theta_i$; for $i \neq j$, $p_i$ and $p_j$ are independent.

$U = (U_i)_{i \in N} : Z \times \prod_{i \in N} \Theta_i \to \mathbb{R}^N$ is the payoff function, associating a vector of payoffs to every terminal history and every profile of payoff types.
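To fix ideas, here is a minimal data-structure sketch of this tuple for a finite game; the class name, field names, and string encodings are illustrative assumptions, not part of the formal model.

```python
# A minimal sketch of the tuple in Definition 1, assuming a finite game.
# Histories are tuples of action vectors; all names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

History = Tuple[Tuple[str, ...], ...]  # a sequence of vectors of actions

@dataclass(frozen=True)
class BayesianExtensiveGame:
    players: FrozenSet[str]                                  # N
    histories: FrozenSet[History]                            # H, with () in H
    player_corr: Callable[[History], FrozenSet[str]]         # P : X => N
    types: Dict[str, FrozenSet[str]]                         # Theta_i for each i
    priors: Dict[str, Dict[str, float]]                      # p_i over Theta_i
    payoffs: Callable[[History, Dict[str, str]], Dict[str, float]]  # U(z, theta)

    def is_terminal(self, h: History) -> bool:
        """z is in Z iff no history in H extends z by one more action vector."""
        return not any(len(g) == len(h) + 1 and g[:-1] == h
                       for g in self.histories)
```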
Thus, the key innovations are the sets $\Theta_i$ and the measures $p_i$. The former enter as parameters in the payoff functions, and thus capture payoff uncertainty; note that Player $j$'s payoffs may depend on Player $i$'s payoff type (i.e. values may be interdependent, using our auction-theoretic terminology).
The probabilities $p_i$ determine a common prior over $\Theta$. This is not yet apparent, but it will be when we define a solution concept that applies to such games.
The implicit assumption one makes is that, before the game begins, payoff types are drawn according to the probabilities $p_1, \ldots, p_N$; each player $i \in N$ learns her own payoff type $\theta_i$, and the product measure $p_{-i} \in \Delta(\Theta_{-i})$ defined by
$$p_{-i}(\theta_{-i}) = \prod_{j \neq i} p_j(\theta_j) \qquad \forall\, \theta_{-i} = (\theta_j)_{j \neq i} \in \Theta_{-i}$$
serves as her prior belief about her opponents' payoff types.
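For concreteness, here is a small sketch of the product formula, assuming finitely many payoff types; the player labels and dictionaries are illustrative.

```python
# Sketch: p_{-i} as the product of the opponents' independent priors,
# assuming finite type sets. All names and labels are illustrative.
from itertools import product

def opponent_prior(priors, i):
    """priors: dict player -> (dict type -> prob). Returns p_{-i} as a dict
    mapping opponent type profiles ((j, theta_j) pairs) to probabilities."""
    others = sorted(j for j in priors if j != i)
    p_minus_i = {}
    for profile in product(*(sorted(priors[j]) for j in others)):
        prob = 1.0
        for j, theta_j in zip(others, profile):
            prob *= priors[j][theta_j]      # independence across players
        p_minus_i[tuple(zip(others, profile))] = prob
    return p_minus_i

# Example: from Player 3's viewpoint, the types of 1 and 2 are independent.
priors = {"1": {"H": 0.5, "L": 0.5}, "2": {"H": 0.3, "L": 0.7}, "3": {"x": 1.0}}
# p_{-3}((1,H),(2,L)) = 0.5 * 0.7 = 0.35
assert abs(opponent_prior(priors, "3")[(("1", "H"), ("2", "L"))] - 0.35) < 1e-12
```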
Two important comments are in order. First of all, one might visualize the selection of payoff types as a chance move. Clearly, to make things interesting, it must be the case that this initial move by chance is not perfectly observed by all players. Hence, one needs a model of extensive games without observed actions; this is precisely the model we are going to develop in the next section.
Indeed, one might even say that games with incomplete information are the main reason why the theory has traditionally focused on extensive games without observable actions. My point here (shared by OR, as well as Fudenberg and Tirole) is that the general notion is perhaps more than is strictly required to model dynamic games of economic interest.
I should also add that reducing a situation with incomplete information/payoff uncertainty to one with imperfect information (in particular, unobserved actions) is not an entirely harmless assumption, chiefly because doing so requires imposing the common-prior assumption (can you see why?).
Second, note that I am deviating somewhat from our basic framework for interactive decision making. First, I am incorporating probabilities in the description of the model, and I am making the players' priors over payoff type profiles common. This is why I am using the term "incomplete information" as opposed to "payoff uncertainty", in keeping with the literature.
Second, for simultaneous-move games, the setup does not immediately reduce to the one we adopt for static Bayesian games. However, to relate the two constructions, it is sufficient to note that $\Theta$ is the counterpart of the state space $\Omega$ in our static Bayesian setting, and each $\Theta_i$ defines a partition of $\Theta$ whose cells are of the form $\{\theta_i\} \times \Theta_{-i}$.
While the static Bayesian model is more general than this derived formulation (think about a setting in which all players' payoffs depend on a single random draw which no player observes), the two can be reconciled by introducing a dummy player in Definition 1. This is not a particularly clean solution, of course, but it works.
I will return to Bayesian extensive games with observed actions when I discuss Perfect Bayesian equilibrium, the "modal" solution concept for such games. For the time being, let me point out that, while the model implicitly defines each player $i$'s prior on $\Theta_{-i}$, it stands to reason that, as the game progresses, Player $i$ will update her prior beliefs about her opponents' payoff types based on her observations (as well as the conjectured equilibrium). These updated beliefs must somehow be part of the description of the equilibrium; also, they must be subject to some sort of consistency condition: Bayes' rule, for instance. We shall see that identifying such conditions is the key problem in the formulation of extensive-form solution concepts.
General Games with Incomplete Information
With the above discussion as a motivation of sorts, we are led to consider a notion of extensive games which allows for partial observability of actions.
The basic idea is to start with a game with perfect information (i.e. without simultaneous moves) and enrich it with a description of what players actually "know" when a given history obtains.
Player $i$'s knowledge is modelled via a partition $\mathcal{I}_i$ of the set of histories at which she is called upon to choose an action, i.e. $P^{-1}(i)$. This is exactly as in our model of payoff uncertainty in static games: in the interim stage, Player $i$ learns, hence knows, that the true state of the world $\omega$ is an element of the cell $t_i(\omega)$, but need not know that it is exactly $\omega$. Similarly, at a history $h$, a player learns that the actual history belongs to some set $I_i(h) \subseteq P^{-1}(i)$, but not necessarily that it is $h$.
Definition 2 An extensive-form game with chance moves is a tuple $\Gamma = (N, A, H, P, Z, (U_i, \mathcal{I}_i)_{i \in N}, f_c)$ where:

$N$ is a set of players; Chance, denoted by $c$, is regarded as an additional player, so $c \notin N$.

$A$ is a set of actions;

$H$ is a set of sequences whose elements are elements of $A$, such that:
(i) $\emptyset \in H$;
(ii) $(a^1, \ldots, a^k) \in H$ implies $(a^1, \ldots, a^\ell) \in H$ for all $\ell < k$;
(iii) if $h = (a^1, \ldots, a^k, \ldots)$ and $(a^1, \ldots, a^k) \in H$ for all $k \geq 1$, then $h \in H$;

$Z$ and $X$ are the sets of terminal and non-terminal histories, respectively;

$P : X \to N \cup \{c\}$ is the player function;

$U : Z \to \mathbb{R}^N$ is the payoff vector function;

$\mathcal{I}_i$ is a partition of $P^{-1}(i)$.

For every $h \in X$, let $A(h) = \{a \in A : (h, a) \in H\}$. Then $f_c : \{h : P(h) = c\} \to \Delta(A)$ indicates the probability of each chance move, and $f_c(h)(A(h)) = 1$ for all $h$ such that $P(h) = c$.
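As a quick illustration of the feasibility condition on $f_c$, here is a sketch, again assuming a finite $H$; the histories and action labels are purely illustrative.

```python
# Sketch of A(h) and of a chance rule f_c, assuming a finite H whose
# elements are tuples of single actions. All names are illustrative.
def available_actions(H, h):
    """A(h) = { a in A : (h, a) in H }."""
    return {g[-1] for g in H if len(g) == len(h) + 1 and g[:len(h)] == h}

# f_c assigns a distribution over actions to each history where chance moves;
# the requirement f_c(h)(A(h)) = 1 says chance never picks an infeasible action.
H = {(), ("heads",), ("tails",), ("heads", "quit"), ("heads", "play")}
f_c = {(): {"heads": 0.5, "tails": 0.5}}
assert abs(sum(f_c[()].get(a, 0.0) for a in available_actions(H, ())) - 1.0) < 1e-12
```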
Note that we are not allowing for simultaneous moves at a given history. This could easily be fixed, but it is not necessary, because simultaneous moves can be modelled by exploiting the fact that moves may be unobservable.
The cells of each partition $\mathcal{I}_i$ are called information sets.
Perfect Recall
Note that, in general, it may be the case that, for some sequences of actions $h, h'$, both $h$ and $(h, h')$ belong to the same information set. This represents a situation in which a player forgets her own previous choices.
More generally, a player may forget that her opponents have chosen certain actions in the past.
Definition 2 allows for both possibilities. However, it is conventional (and expedient) to assume that players have perfect recall: they forget neither their own actions, nor their previous observations.
To formulate this assumption rigorously, we first define, for each player $i \in N$, a collection of sets $(X_i(h))_{h \in X}$ such that, for every $h \in X$, $X_i(h)$ includes all the actions Player $i$ has chosen and all the information sets in $\mathcal{I}_i$ she has visited in the partial history $h$.
Formally, let $X_i(\emptyset) = \emptyset$; next, assuming that $X_i(h)$ has been defined for all histories of length $\ell$, for every $a \in A(h)$, let:

$X_i(h, a) = X_i(h) \cup \{a\} \cup \{I_i(h, a)\}$ if $P(h) = P(h, a) = i$;
$X_i(h, a) = X_i(h) \cup \{a\}$ if $P(h) = i \neq P(h, a)$;
$X_i(h, a) = X_i(h) \cup \{I_i(h, a)\}$ if $P(h, a) = i \neq P(h)$; and
$X_i(h, a) = X_i(h)$ if $P(h) \neq i$ and $P(h, a) \neq i$,

where $I_i(h, a)$ denotes the cell of $\mathcal{I}_i$ containing $(h, a)$.
Then we say that an extensive game has perfect recall iff, for every player $i \in N$, information set $I_i \in \mathcal{I}_i$, and histories $h, h' \in I_i$, $X_i(h) = X_i(h')$. Observe that $X_i$ is defined at every history, but the notion of perfect recall imposes direct restrictions only at histories where Player $i$ moves.
Note that, in particular, perfect recall implies that it is never the case that, for some history $h$ and sequence of actions $h'$, both $h$ and $(h, h')$ belong to the same information set.
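The recursion above is easy to mechanize. The following sketch, for a finite tree, assumes two hypothetical helpers: `player(h)`, returning the mover at a non-terminal $h$ (and `None` at terminal histories), and `info(i, h)`, returning the cell of $\mathcal{I}_i$ containing $h$ as a frozenset.

```python
# Sketch: X_i(h) via the recursion above, and the perfect-recall test.
def experience(i, h, player, info):
    """X_i(h): i's past actions and the information sets she has visited."""
    if len(h) == 0:
        return frozenset()                  # X_i(empty history) = empty set
    prefix = h[:-1]
    x = set(experience(i, prefix, player, info))
    if player(prefix) == i:
        x.add(h[-1])                        # i chose the last action herself
    if player(h) == i:
        x.add(info(i, h))                   # i now observes the cell I_i(h)
    return frozenset(x)

def has_perfect_recall(i, histories, player, info):
    """Perfect recall: X_i is constant on every information set of Player i."""
    records = {}
    for h in histories:
        if player(h) == i:
            records.setdefault(info(i, h), set()).add(
                experience(i, h, player, info))
    return all(len(r) == 1 for r in records.values())
```

On a game where a player forgets her own earlier move, two histories in one of her cells generate different records, so `has_perfect_recall` returns `False`, mirroring the definition.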
Strategies
Pure and mixed strategies in general extensive games are easily defined:
Definition 3 Fix an extensive game $\Gamma$ and a player $i \in N$. A pure strategy for Player $i$ is a map $s_i : \mathcal{I}_i \to A$ such that, for all $I_i \in \mathcal{I}_i$, $s_i(I_i) \in A(I_i)$; here $A(I_i)$ denotes the common value of $A(h)$ for $h \in I_i$, which is implicitly assumed to be well defined.
Denote by $S_i$, $S_{-i}$ and $S$ the set of strategies of Player $i$, the set of strategy profiles of $i$'s opponents, and the set of complete strategy profiles, respectively. A mixed strategy for Player $i$ is a probability distribution $\sigma_i \in \Delta(S_i)$.
The notion of randomization captured in Definition 3 is as follows: at the beginning of the game, players spin a roulette wheel to choose a pure strategy, and stick to that strategy thereafter.
However, players could also randomize at each information set. This gives rise to a different notion:
Definition 4 Fix an extensive game $\Gamma$ and a player $i \in N$. A behavioral strategy for Player $i$ is a map $\beta_i : \mathcal{I}_i \to \Delta(A)$ such that, for all $I_i \in \mathcal{I}_i$, $\beta_i(I_i)(A(I_i)) = 1$.
Denote by $B_i$, $B_{-i}$ and $B$ the sets of $i$'s behavioral strategies, of her opponents' behavioral strategy profiles, and of complete behavioral strategy profiles.
If this reminds you of our definition of chance moves, it should: the function $f_c$ is simply Chance's behavioral strategy!
It is natural to ask whether the two notions of randomization are related. The answer is that, of course, they are. To be precise about this, we first define (with some abuse of notation) the outcome functions $O : B \to \Delta(Z)$ and $O : \prod_{i \in N} \Delta(S_i) \to \Delta(Z)$ to be the maps which associate with each profile of behavioral or mixed strategies a distribution over terminal nodes (you should provide a formal definition).
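As a hint, here is a computational sketch of $O$ on behavioral profiles for a finite game; `player`, `info`, `terminal`, the dictionary encodings of $\beta$ and $f_c$, and the label "c" for Chance are all illustrative assumptions.

```python
# Sketch of the outcome function O : B -> Delta(Z) for a finite game.
# beta[i][I] is a dict action -> prob; f_c(h) likewise; names illustrative.
def outcome_dist(beta, player, info, f_c, terminal, h=(), prob=1.0, dist=None):
    """Walk the tree, multiplying move probabilities along each history."""
    if dist is None:
        dist = {}
    if terminal(h):
        dist[h] = dist.get(h, 0.0) + prob   # mass reaching terminal history h
        return dist
    i = player(h)
    rule = f_c(h) if i == "c" else beta[i][info(i, h)]
    for a, q in rule.items():
        if q > 0.0:
            outcome_dist(beta, player, info, f_c, terminal,
                         h + (a,), prob * q, dist)
    return dist
```

The mixed-strategy version is analogous: draw a pure strategy profile, which induces a unique terminal history, and sum the product probabilities over profiles.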
We then ask two distinct questions. First, given a behavioral strategy profile $\beta \in B$, can we find an outcome-equivalent mixed strategy profile, i.e. some $\sigma \in \prod_{i \in N} \Delta(S_i)$ such that $O(\sigma) = O(\beta)$?
The answer is affirmative for all games which satisfy the following condition: for every player $i \in N$, information set $I_i \in \mathcal{I}_i$, and histories $h, h' \in I_i$, it is never the case that $h$ is a subhistory of $h'$ or vice versa. Hence, in particular, the answer is affirmative for all games with perfect recall. OR provides an example (p. 214) where this condition fails.
Second, given a mixed strategy profile, can we find an outcome-equivalent behavioral strategy profile? The answer is again affirmative if we restrict ourselves to games with perfect recall. This is Proposition 214.1 in OR, also known as Kuhn's Theorem.
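To see why perfect recall matters here, consider the usual construction (a sketch, not OR's proof): derive $\beta_i$ from $\sigma_i$ by conditioning on reaching each information set. The helper `consistent(s, I)`, which checks that the pure strategy $s$ does not preclude reaching $I$, is hypothetical; perfect recall makes it well defined, since all histories in $I$ agree on Player $i$'s own past actions.

```python
# Sketch of the mixed-to-behavioral direction of Kuhn's Theorem:
# beta_i(I)(a) = sigma_i( s plays a at I | s consistent with reaching I ).
def to_behavioral(sigma_i, info_sets, consistent):
    """sigma_i: list of (pure strategy, probability) pairs, where a pure
    strategy is a dict mapping information sets to actions."""
    beta_i = {}
    for I in info_sets:
        reach = sum(p for s, p in sigma_i if consistent(s, I))
        if reach == 0.0:
            continue  # I unreachable under sigma_i: any distribution works
        beta_i[I] = {}
        for s, p in sigma_i:
            if p > 0.0 and consistent(s, I):
                a = s[I]  # the action s prescribes at I
                beta_i[I][a] = beta_i[I].get(a, 0.0) + p / reach
    return beta_i
```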
I conclude by noting that a Nash equilibrium of an extensive game is simply a Nash equilibrium of (the mixed extension of) its normal form; you can think of a formal definition, using outcome functions.