Eco514: Game Theory
Lecture 5: Games with Payoff Uncertainty (2)
Marciano Siniscalchi
September 30, 1999
Introduction
This lecture continues our analysis of games with payoff uncertainty. The three main objectives are: (1) to illustrate the flexibility of the Harsanyi framework (or our version thereof); (2) to highlight the assumptions implicit in the conventional usage of the framework, and the possible departures; (3) to discuss its potential problems, as well as some solutions to the latter.
Cournot Revisited
Recall our Cournot model with payoff uncertainty. Firm 2's cost is known to be zero; Firm 1's cost is uncertain, and will be denoted by $c \in \{0, \frac{1}{2}\}$. Demand is given by $P(Q) = 2 - Q$ and each firm can produce $q_i \in [0, 1]$.
We represent the situation as a game with payoff uncertainty as follows: let $\Omega = \{0, \frac{1}{2}\}$, $T_1 = \{\{0\}, \{\frac{1}{2}\}\}$, $T_2 = \{\Omega\}$ and $p_2(0) = \pi$. It is easy to see that specifying $p_1$ is not relevant for the purposes of Bayesian Nash equilibrium analysis: what matters there are the beliefs conditional on each $t_1 \in T_1$, but these will obviously be degenerate.
The following equalities define a Bayesian Nash equilibrium (do you see where these come from?):
$$q_1(0) = 1 - \tfrac{1}{2}\,q_2$$
$$q_1(\tfrac{1}{2}) = \tfrac{3}{4} - \tfrac{1}{2}\,q_2$$
$$q_2 = 1 - \tfrac{1}{2}\left[\pi\,q_1(0) + (1 - \pi)\,q_1(\tfrac{1}{2})\right]$$
For $\pi = \frac{1}{2}$, we get
$$q_1(0) = 0.625, \qquad q_1(\tfrac{1}{2}) = 0.375, \qquad q_2 = 0.75$$
(by comparison, $\frac{2}{3}$ is the equilibrium quantity for both firms if Firm 1's cost is always $c = 0$.)
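To see where these equalities come from, here is a reconstruction of the standard argument (mine, not spelled out in the notes; the capacity bound $q_i \leq 1$ does not bind at the solution). Firm 1 knows her cost $c$ and faces the single quantity $q_2$, so she solves
$$\max_{q_1}\; q_1\,(2 - q_1 - q_2) - c\,q_1 \;\;\Longrightarrow\;\; q_1(c) = 1 - \tfrac{c}{2} - \tfrac{1}{2}\,q_2,$$
which yields the first two equations for $c = 0$ and $c = \frac{1}{2}$. Firm 2, who does not observe $c$, maximizes $q_2\,(2 - q_2 - \mathrm{E}[q_1])$, so
$$q_2 = 1 - \tfrac{1}{2}\,\mathrm{E}[q_1] = 1 - \tfrac{1}{2}\left[\pi\,q_1(0) + (1 - \pi)\,q_1(\tfrac{1}{2})\right].$$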
Textbook analysis
Recall that, for any probability measure $q \in \Delta(\Omega)$ and player $i \in N$, we defined the event $[q]_i = \{\omega : p_i(\cdot \,|\, t_i(\omega)) = q\}$. In this game, $[p_2]_2 = \Omega$: that is, at any state of the world, Player 2's beliefs are given by $p_2$. By way of comparison, it is easy to see that it cannot be the case that $[p_1]_1 = \Omega$ (why?), regardless of how we specify $p_1$.
For notational ease (and also as a "sneak preview" of our forthcoming treatment of interactive epistemology), we introduce the belief operator. Recall that, for every $i \in N$ and $\omega \in \Omega$, $t_i(\omega)$ denotes the cell of the partition $T_i$ containing $\omega$.
Definition 1 Given a game with payoff uncertainty $G = (N, \Omega, (A_i, u_i, T_i)_{i \in N})$, Player $i$'s belief operator is the map $B_i : 2^\Omega \to 2^\Omega$ defined by
$$\forall E \subseteq \Omega, \qquad B_i(E) = \{\omega \in \Omega : p_i(E \,|\, t_i(\omega)) = 1\}.$$
A more appropriate name for $B_i(\cdot)$ would perhaps be certainty operator, but we shall follow traditional usage. If $\omega \in B_i(E)$, we shall say that "at $\omega$, Player $i$ is certain that (or believes that) $E$ is true." Certainty is thus taken to denote probability-one belief.
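To make the definition concrete, here is a minimal computational sketch (my own addition, not part of the notes; the data structures and function names are mine) of the belief operator on the finite "textbook" Cournot model with $\pi = \frac{1}{2}$:

```python
# Belief operator B_i(E) = {w : p_i(E | t_i(w)) = 1} on a finite model.
def belief_operator(event, partition, cond_prob):
    """partition: dict mapping each state w to its cell t_i(w) (a frozenset);
    cond_prob: dict mapping (w, cell) to p_i(w | cell).
    The == 1 test is safe here because the probabilities are exact binary
    fractions; in general one would compare with a tolerance."""
    return {w for w, cell in partition.items()
            if sum(cond_prob[(v, cell)] for v in cell if v in event) == 1}

omega = {0.0, 0.5}                                  # Omega = {0, 1/2}
T1 = {w: frozenset({w}) for w in omega}             # Firm 1 learns her cost
T2 = {w: frozenset(omega) for w in omega}           # Firm 2 learns nothing
p1 = {(w, frozenset({w})): 1.0 for w in omega}      # degenerate conditionals
p2 = {(w, frozenset(omega)): 0.5 for w in omega}    # pi = 1/2 on each cost

low_cost = {0.0}                                    # the event "c = 0"
print(belief_operator(low_cost, T1, p1))  # {0.0}: Firm 1 is certain of it only at w = 0
print(belief_operator(low_cost, T2, p2))  # set(): Firm 2 is never certain of it
```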
Now return to the Cournot game and take the point of view of Player 1. Since $[p_2]_2 = \Omega$, it is trivially true that $B_1([p_2]_2) = \Omega$; in words, at any state $\omega \in \Omega$, Player 1 is certain that Player 2's beliefs are given by $p_2$ (i.e. by $\pi$). By the exact same argument, at any state $\omega$, Player 2 is certain that Player 1 is certain that Player 2's beliefs are given by $p_2$: that is, $B_2(B_1([p_2]_2)) = \Omega$. And so on and so forth.
The key point is that
Harsanyi's model of games with payoff uncertainty, together with a specification of the players' priors, easily generates infinite hierarchies of interactive beliefs, that is, "beliefs about beliefs..."
Although you may not immediately "see" these hierarchies, they are there, and they are easily retrieved.
We can summarize the situation as follows: in the setup under consideration, there is uncertainty concerning Firm 1's payoffs, but no uncertainty about Firm 2's beliefs.
There is also uncertainty about Firm 1's beliefs, but in a degenerate sense: there is a one-to-one relationship between Player 1's conditional beliefs at any $\omega \in \Omega$ and her cost.
At first blush, this makes sense: after all, payoff uncertainty is about Player 1's cost, so as soon as Firm 1 learns the value of $c$, her uncertainty is resolved. Similarly, since Firm 2's cost is known to be zero, there is no payoff uncertainty as far as the latter is concerned.
How about the absence of uncertainty about Firm 2's beliefs? This is a legitimate assumption, of course. The point is, it is only an assumption: it is not a necessary consequence of rationality, of the Bayesian approach, or, indeed, a necessary feature of Harsanyi's model of incomplete information.
"Unconventional" (but legit) use of Harsanyi's approach
Indeed, it is very easy to enrich the model to allow for uncertainty (on Firm 1's part) about Firm 2's beliefs.
Let us consider the following alternative model for our Cournot game. First, $\Omega = \{\omega_{cxy} : c \in \{0, \frac{1}{2}\},\ x, y \in \{1, 2\}\}$. The interpretation is that in state $\omega_{cxy}$, Firm 1's cost is $c$, Firm 1's "belief state" is $x$, and Firm 2's "belief state" is $y$. This terminology is nonstandard and merely suggestive; the exact meaning will be clear momentarily.
Next, let $T_1 = \{\{\omega_{cx1}, \omega_{cx2}\} : c \in \{0, \frac{1}{2}\},\ x \in \{1, 2\}\} = \{t_1^{cx} : c \in \{0, \frac{1}{2}\},\ x \in \{1, 2\}\}$ and $T_2 = \{\{\omega_{01y}, \omega_{02y}, \omega_{(1/2)1y}, \omega_{(1/2)2y}\} : y \in \{1, 2\}\} = \{t_2^y : y \in \{1, 2\}\}$. Thus, at each state $\omega$, Firm 1 learns her cost and her "belief state," and Firm 2 learns his "belief state."
We can get a lot of action from this simple extension. Let us define conditional probabilities as follows:
$$p_2(\omega_{011} \,|\, t_2^1) = p_2(\omega_{(1/2)21} \,|\, t_2^1) = 0.5,$$
i.e. type $t_2^1$ of Firm 2 is certain that Firm 1 is in belief state 1 whenever her cost is 0, and in belief state 2 whenever her cost is $\frac{1}{2}$; moreover, the two combinations of belief states and costs are equally likely. Next,
$$p_2(\omega_{022} \,|\, t_2^2) = 1 - p_2(\omega_{(1/2)22} \,|\, t_2^2) = 0.75,$$
i.e. $t_2^2$ is certain that Firm 1's belief state, regardless of cost, is $x = 2$; moreover, he has a different marginal on $c$ than type $t_2^1$. Finally,
$$p_1(\omega_{011} \,|\, t_1^{01}) = p_1(\omega_{(1/2)11} \,|\, t_1^{(1/2)1}) = 1 \quad\text{and}\quad p_1(\omega_{021} \,|\, t_1^{02}) = p_1(\omega_{(1/2)21} \,|\, t_1^{(1/2)2}) = \tfrac{1}{2},$$
that is, regardless of her cost, in belief state 1 Firm 1 is certain that she is facing type $t_2^1$, whereas in belief state 2 she considers both types of Firm 2 to be equally likely. As I noted last time, this is really not relevant (also see the "Priors and Common Priors" section below).
To complete the specification of our priors, we simply assume that players regard the cells of their respective type partitions as being equally likely.
The following equalities define a BNE:
$$q_1(t_1^{01}) = 1 - \tfrac{1}{2}\,q_2(t_2^1)$$
$$q_1(t_1^{02}) = 1 - \tfrac{1}{2}\left[\tfrac{1}{2}\,q_2(t_2^1) + \tfrac{1}{2}\,q_2(t_2^2)\right]$$
$$q_1(t_1^{(1/2)1}) = \tfrac{3}{4} - \tfrac{1}{2}\,q_2(t_2^1)$$
$$q_1(t_1^{(1/2)2}) = \tfrac{3}{4} - \tfrac{1}{2}\left[\tfrac{1}{2}\,q_2(t_2^1) + \tfrac{1}{2}\,q_2(t_2^2)\right]$$
$$q_2(t_2^1) = 1 - \tfrac{1}{2}\left[\tfrac{1}{2}\,q_1(t_1^{01}) + \tfrac{1}{2}\,q_1(t_1^{(1/2)2})\right]$$
$$q_2(t_2^2) = 1 - \tfrac{1}{2}\left[\tfrac{3}{4}\,q_1(t_1^{02}) + \tfrac{1}{4}\,q_1(t_1^{(1/2)2})\right]$$
You should be able to see where the above equalities come from by inspecting the definitions of $p_1$ and $p_2$.
With the help of a numerical linear equation package we get
$$q_1(t_1^{01}) = 0.62638, \quad q_1(t_1^{02}) = 0.63472, \quad q_1(t_1^{(1/2)1}) = 0.37638, \quad q_1(t_1^{(1/2)2}) = 0.38472,$$
$$q_2(t_2^1) = 0.7472, \quad q_2(t_2^2) = 0.7138.$$
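As a sanity check, here is a short script (my own addition, not part of the notes) that solves the six best-reply equations above as a linear system and reproduces these values:

```python
# Unknown order:
#   x = [q1(t1^01), q1(t1^02), q1(t1^{1/2,1}), q1(t1^{1/2,2}), q2(t2^1), q2(t2^2)]
import numpy as np

A = np.array([
    [1,    0,     0, 0,     0.5,  0.0 ],  # q1(t1^01)       + (1/2) q2(t2^1)                   = 1
    [0,    1,     0, 0,     0.25, 0.25],  # q1(t1^02)       + (1/4) q2(t2^1) + (1/4) q2(t2^2)  = 1
    [0,    0,     1, 0,     0.5,  0.0 ],  # q1(t1^{1/2,1})  + (1/2) q2(t2^1)                   = 3/4
    [0,    0,     0, 1,     0.25, 0.25],  # q1(t1^{1/2,2})  + (1/4) q2(t2^1) + (1/4) q2(t2^2)  = 3/4
    [0.25, 0,     0, 0.25,  1,    0   ],  # (1/4) q1(t1^01) + (1/4) q1(t1^{1/2,2}) + q2(t2^1)  = 1
    [0,    0.375, 0, 0.125, 0,    1   ],  # (3/8) q1(t1^02) + (1/8) q1(t1^{1/2,2}) + q2(t2^2)  = 1
], dtype=float)
b = np.array([1, 1, 0.75, 0.75, 1, 1], dtype=float)

print(np.linalg.solve(A, b).round(5))
# -> [0.62639 0.63472 0.37639 0.38472 0.74722 0.71389], matching the values above
#    (up to rounding in the text).
```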
Note that we have simply applied OR's definition of Bayesian Nash equilibrium. That is, we are still on familiar ground: we have only deviated from "tradition" in that our model is more elaborate than the "textbook" variant.
Consider state $\omega_{011}$. Observe that $\omega_{011} \in B_1(t_2^1)$: that is, in this state Firm 1 is certain that Firm 2's marginal on $c$ is $(\frac{1}{2}, \frac{1}{2})$, and indeed this belief is correct. Moreover, Firm 2 is certain that, if Firm 1 has low cost, she (Firm 1) holds correct beliefs about his (Firm 2's) marginal on $c$; this belief, too, is correct. However, Firm 2 thinks that, if Firm 1 has high cost, she (Firm 1) may be mistaken about his (Firm 2's) marginal on $c$ with probability $\frac{1}{2}$.
Thus, there seem to be "minimal" deviations from the textbook treatment given above; in particular, Firm 2's first-order beliefs about $c$ are the same in both cases. Yet the equilibrium outcome in state $\omega_{011}$ is different from the "textbook" prediction. Indeed, there is no state in which Bayesian Nash equilibrium predicts the same outcome as in the "textbook" treatment.
The bottom line is that (i) assumptions about higher-order beliefs do influence equilibrium outcomes, and (ii) it is very easy to analyze deviations from the "textbook" assumptions about higher-order beliefs in the framework of standard Bayesian Nash equilibrium analysis.
Priors and Common Priors
The other buzzword that is often heard in connection with games with incomplete information is "common prior."
Simply stated, this is the assumption that $p_i = p$ for all $i \in N$. Note that, strictly speaking, the common prior assumption (CPA for short) is part of the "textbook" definition of a "game with incomplete information"; our slightly nonstandard terminology emphasizes that (1) we do not wish to treat priors (common or private) as part of the description of the model, but rather as part of the solution concept; and that (2) we certainly do not wish to impose the CPA in all circumstances.
But is the CPA at all reasonable? The answer is: well, it depends. One often hears the following argument:
Prior beliefs reflect prior information. We assume that players approach the game with a common heritage, i.e. with the same prior information. Therefore, their prior beliefs should be the same.
On priors as summary of previously acquired information
Let us first take this argument at face value. From a Bayesian standpoint, it makes sense only if we assume that players approach the game after having observed the same events for a very long time; only in this case, in fact, will their beliefs converge.
But perhaps we do not really want to be Bayesians; perhaps "prior information" is "no information," and we wish to invoke some variant of the principle of insufficient reason. I personally do not find this argument all that convincing, but you may differ.
On priors and interactive beliefs
However, the real problem with this sort of justification of the CPA is that, as we have illustrated above, the set $\Omega$ actually conveys information about both payoff uncertainty and the players' infinite hierarchies of interactive beliefs. Therefore, it is not clear how players' beliefs about infinite hierarchies of beliefs can "converge due to a long period of common observations." How do I "learn" your beliefs?[1]
The bottom line is that the only way to assess the validity of the CPA is via its implications for the infinite hierarchies of beliefs it generates.
[1] More precisely, perhaps I can make inferences about your beliefs, with the aid of some auxiliary assumptions; but I can never observe your beliefs!
We now know that the CPA is equivalent to the assumption that, at any state $\omega \in \Omega$, players are not willing to engage in bets over the realization of $\omega$-contingent random variables. Thus, it entails a very strong notion of agreement.
For instance, in our elaboration of the Cournot example, denote by $p^{cx}$ the prior probability of cell $t_1^{cx}$ of $T_1$, and by $q$ the prior probability of cell $t_2^1$ of $T_2$. Then the conditional probabilities indicated above imply that the priors $p_1$ and $p_2$ must satisfy the following equalities:
State ω         p_1(ω)             p_2(ω)          X(ω)
ω_{011}         p^{01}             (1/2)q           1
ω_{(1/2)11}     p^{(1/2)1}         0                1
ω_{021}         (1/2)p^{02}        0                3
ω_{(1/2)21}     (1/2)p^{(1/2)2}    (1/2)q          -2
ω_{012}         0                  0                0
ω_{(1/2)12}     0                  0                0
ω_{022}         (1/2)p^{02}        (3/4)(1-q)      -2
ω_{(1/2)22}     (1/2)p^{(1/2)2}    (1/4)(1-q)       3

Table 1: The priors in the elaborated Cournot model.
Disregarding the last column for the time being, the table yields a set of necessary conditions for the existence of a common prior. First, from the line corresponding to $\omega_{(1/2)21}$, $p^{(1/2)2} = q$. Hence, from the line corresponding to $\omega_{(1/2)22}$, we must have $\frac{1}{2}q = \frac{1}{4}(1-q)$, or $2q = 1 - q$, so $q = \frac{1}{3} = p^{(1/2)2}$.
However, from the line corresponding to $\omega_{022}$, $p^{02} = \frac{3}{2} \cdot \frac{2}{3} = 1$. This is impossible, because the $p^{cx}$'s must add up to one (and we have just seen that $p^{(1/2)2} = \frac{1}{3} > 0$). Thus, there can be no common prior.
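As a cross-check (this snippet is my own addition, not part of the lecture notes; the variable names are mine, with "h" standing for the high cost 1/2), one can feed all of Table 1's state-by-state equalities, plus the requirement that the $p^{cx}$ sum to one, to a symbolic solver and confirm that the system has no solution:

```python
import sympy as sp

p01, ph1, p02, ph2, q = sp.symbols('p01 ph1 p02 ph2 q')
common_prior_conditions = [
    sp.Eq(p01, q / 2),                            # omega_{011}
    sp.Eq(ph1, 0),                                # omega_{1/2,11}
    sp.Eq(p02 / 2, 0),                            # omega_{021}
    sp.Eq(ph2 / 2, q / 2),                        # omega_{1/2,21}
    sp.Eq(p02 / 2, sp.Rational(3, 4) * (1 - q)),  # omega_{022}
    sp.Eq(ph2 / 2, sp.Rational(1, 4) * (1 - q)),  # omega_{1/2,22}
    sp.Eq(p01 + ph1 + p02 + ph2, 1),              # the p^{cx} sum to one
]
print(sp.solve(common_prior_conditions, [p01, ph1, p02, ph2, q]))  # -> [] : no common prior
```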
The rightmost column of Table 1 illustrates a bet $X$ which, in accordance with the results cited above, has positive expected value conditional on each of Player 1's types $t_1(\omega)$, and negative expected value conditional on each of Player 2's types $t_2(\omega)$. Hence, at each state $\omega$ of the model, Player 1 would want to offer a bet stating that, at each state $\omega'$, Player 2 would pay her $X(\omega')$ dollars, and Player 2, who expects on net to receive money, would readily accept it.
To see this, note that $E[X \,|\, t_1^{01}] = 1$, $E[X \,|\, t_1^{(1/2)1}] = 1$, $E[X \,|\, t_1^{02}] = \frac{1}{2}$ and $E[X \,|\, t_1^{(1/2)2}] = \frac{1}{2}$; on the other hand, $E[X \,|\, t_2^1] = -\frac{1}{2}$ and $E[X \,|\, t_2^2] = -\frac{3}{4}$.
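These conditional expectations can be verified directly (the snippet below is my own addition; states are labelled by their indices (c, x, y) as in the text, and the beliefs are the conditionals defined above):

```python
X = {(0, 1, 1):  1, (0.5, 1, 1): 1, (0, 2, 1):  3, (0.5, 2, 1): -2,
     (0, 1, 2):  0, (0.5, 1, 2): 0, (0, 2, 2): -2, (0.5, 2, 2):  3}

conditional_beliefs = {
    # Firm 1's types
    't1^01':     {(0, 1, 1): 1.0},
    't1^{1/2}1': {(0.5, 1, 1): 1.0},
    't1^02':     {(0, 2, 1): 0.5, (0, 2, 2): 0.5},
    't1^{1/2}2': {(0.5, 2, 1): 0.5, (0.5, 2, 2): 0.5},
    # Firm 2's types
    't2^1':      {(0, 1, 1): 0.5, (0.5, 2, 1): 0.5},
    't2^2':      {(0, 2, 2): 0.75, (0.5, 2, 2): 0.25},
}

for name, belief in conditional_beliefs.items():
    print(name, sum(prob * X[state] for state, prob in belief.items()))
# Firm 1's types: 1.0, 1.0, 0.5, 0.5 (all positive);
# Firm 2's types: -0.5, -0.75 (both negative).
```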
If players' conditional beliefs were consistent with a common prior, no such mutually acceptable bet would exist. This statement is also known as the no-trade theorem. Indeed, the results cited above state that the converse is also true: if no such mutually acceptable bet exists, then players' conditional beliefs are consistent with a common prior. For instance, in the simpler "textbook" treatment of the Cournot model (in which the set of states is the set of possible costs for Firm 1), for a bet $X : \{0, \frac{1}{2}\} \to \mathbb{R}$ to be profitable to Firm 1 conditional on every $t_1(\omega) = \{\omega\}$, it must be the case that $X(0) > 0$ and $X(\frac{1}{2}) > 0$; but then no such bet can be acceptable to Firm 2.
Justifying Harsanyi Models
There are two related problems with the models of payoff uncertainty we have proposed.
The first has to do with generality. We have seen that we can construct rather involved models of interactive beliefs in the presence of payoff uncertainty. In particular, our models generate complicated hierarchies of beliefs rather easily.
However, let us ask the reverse question: given some collection of hierarchies of beliefs, can we always exhibit a model which generates it in some state?
The second question has to do with one implicit assumption of the model. In order to make statements about Player 1's beliefs concerning Player 2's beliefs, Player 1 must be assumed to "know" $p_2$ and $T_2$. In order to make statements about Player 1's beliefs about Player 2's beliefs about Player 1's beliefs, we must assume that Player 2 "knows" $p_1$ and $T_1$, and that Player 1 "knows" this.
More concisely, the model itself must be "common knowledge." But we cannot even formulate this assumption!
Both issues are addressed in a brilliant paper by Mertens and Zamir (Int. J. Game Theory, 1985). The basic idea is as follows.
Let us focus on two-player games. First, let us fix a collection $\Theta_0$ of payoff parameters. This set is meant to capture "physical uncertainty," or more generally any kind of uncertainty that is not related to players' beliefs.
A player's beliefs about $\Theta_0$ are by definition represented by points in $\Delta(\Theta_0)$. Now, here's the idea: if we wish to represent a player's beliefs about (1) $\Theta_0$ and (2) her opponent's beliefs about $\Theta_0$, it is natural to consider the set
$$\Theta_1 = \Theta_0 \times \Delta(\Theta_0)$$
and describe a player's beliefs about (1) and (2) above as points in $\Delta(\Theta_1)$. The idea readily generalizes: suppose we have constructed a set $\Theta_k$ as above; then
$$\Theta_{k+1} = \Theta_k \times \Delta(\Theta_k)$$
and we represent beliefs about (1) $\Theta_k$ and (2) the opponent's beliefs about $\Theta_k$ as points in $\Delta(\Theta_{k+1})$. Of course these are complicated spaces (even if $\Theta_0$ is finite), but in any case the definitions are straightforward.
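For concreteness, here is the construction spelled out for the Cournot payoff space (this worked example is my own addition): take $\Theta_0 = \{0, \frac{1}{2}\}$, the possible values of Firm 1's cost. Then
$$\Delta(\Theta_0) \cong [0,1] \ni \pi, \qquad \Theta_1 = \Theta_0 \times \Delta(\Theta_0), \qquad \Theta_2 = \Theta_1 \times \Delta(\Theta_1), \ \ldots$$
A first-order belief is just a probability $\pi$ assigned to the low cost; a second-order belief is a measure over pairs (cost, opponent's $\pi$). For instance, in the elaborated model above, type $t_1^{02}$'s second-order belief assigns probability $\frac{1}{2}$ to $(0, \pi = \frac{1}{2})$ (facing $t_2^1$) and probability $\frac{1}{2}$ to $(0, \pi = \frac{3}{4})$ (facing $t_2^2$).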
We wish to describe a player's beliefs by an infinite sequence $e = (p^1, p^2, \ldots)$ of probability measures such that, for each $k \geq 1$, $p^k \in \Delta(\Theta_{k-1})$. That is, each player's beliefs are described by a measure on $\Theta_0$, a measure on $\Theta_0 \times \Delta(\Theta_0)$, a measure on $\Theta_0 \times \Delta(\Theta_0) \times \Delta(\Theta_0 \times \Delta(\Theta_0))$, and so on. Let $E^0 = \prod_{k \geq 0} \Delta(\Theta_k)$ denote the set of such descriptions; the reason for the superscript will be clear momentarily.
You may immediately see a problem with this representation of beliefs. Fix $e = (p^1, p^2, \ldots) \in E^0$. Observe that, for every $k \geq 2$, $p^k \in \Delta(\Theta_{k-1}) = \Delta(\Theta_{k-2} \times \Delta(\Theta_{k-2}))$. Thus, the marginal of $p^k$ on $\Theta_{k-2}$ conveys information about a player's beliefs about $\Theta_{k-2}$. However, by definition, so does $p^{k-1} \in \Delta(\Theta_{k-2})$!
We obviously want the information conveyed by $p^k$ and $p^{k-1}$ about our player's beliefs concerning $\Theta_{k-2}$ to be consistent. We thus concentrate on the subset of $E^0$ defined by
$$E^1 = \{e \in E^0 : \forall k \geq 2,\ \mathrm{marg}_{\Theta_{k-2}} p^k = p^{k-1}\}.$$
Mertens and Zamir prove that, under regularity conditions on $\Theta_0$, the set $E^1$ is homeomorphic to the set $\Delta(\Theta_0 \times E^0)$; that is, there exists a one-to-one, onto function $f : E^1 \to \Delta(\Theta_0 \times E^0)$ (continuous, and with a continuous inverse) such that (1) every sequence $e \in E^1$ corresponds to a unique measure $f(e)$ on the product space $\Theta_0 \times E^0$, and (2) every measure $\mu \in \Delta(\Theta_0 \times E^0)$ can be mapped back to a unique $e = f^{-1}(\mu) \in E^1$. Moreover, as you would expect, if $e = (p^1, p^2, \ldots)$, then
$$\mathrm{marg}_{\Theta_{k-1}} f(e) = p^k$$
(recall the definition of $E^0$).
In plain English, this says that, if we are interested in describing a player's beliefs about (1) $\Theta_0$ and (2) her opponent's full hierarchy of beliefs about $\Theta_0$, then we should look no further than $E^1$. That is, we can regard elements of $E^1$ as our player's types.
But there is still something wrong with the construction so far. The reason is that, while each sequence $e \in E^1$ satisfies the consistency requirement, its elements may assign positive probability to inconsistent (sub)sequences of measures. That is, a player may hold consistent beliefs, but may believe that her opponent does not.
On the other hand, it is easy to see that, in any state $\omega$ of a model of payoff uncertainty as we have defined it, players' beliefs are consistent. Hence, at any state, players necessarily believe that their opponents hold consistent beliefs, that their opponents believe that their own beliefs are consistent, and so on. That is, there is common certainty of consistency. We wish to impose the same restriction on the sequences of probability measures we are considering.
It is easy to do so inductively. Assume we have defined $E^k$; then
$$E^{k+1} = \{e \in E^1 : f(e)(\Theta_0 \times E^k) = 1\}.$$
This makes sense: $f(e)$ is a measure on $\Theta_0 \times E^0$, so we can use it to read off the probability of the event $\Theta_0 \times E^k$. Note well what we are doing: the restriction is stated in terms of the probability measure on $\Theta_0 \times E^0$ induced by $e \in E^1$, but (via the function $f$) this entails a restriction on the elements of the sequence $e = (p^1, p^2, \ldots)$.
Finally, let $E = \bigcap_{k \geq 1} E^k$ (how do we know the intersection is nonempty?). The main result is as follows:
Theorem 0.1 (Mertens and Zamir, 1985) Under regularity conditions, there exists a homeomorphism $g : E \to \Delta(\Theta_0 \times E)$ such that, for all $k \geq 1$ and $e = (p^1, p^2, \ldots) \in E$,
$$\mathrm{marg}_{\Theta_{k-1}} g(e) = p^k.$$
This is really what we were after. Define $\Omega = \Theta_0 \times E \times E$ (we treat the first copy of $E$ as referring to Player 1's hierarchical beliefs, and the second as referring to Player 2's). Next, for every $\omega = (\theta_0, e_1, e_2)$, let
$$t_i(\omega) = \{\omega' = (\theta_0', e_1', e_2') : e_i' = e_i\}$$
for each $i = 1, 2$; that is, in state $\omega$, Player $i$ "learns" only her own beliefs. Finally, let
$$T_i = \{\{\omega' = (\theta_0', e_1', e_2') : e_i' = e_i\} : e_i \in E\},$$
i.e. the set of possible types corresponds to $E$. This is just a big huge model of payoff uncertainty, but it is a model according to our definition. The key points are:
(1) Every possible "reasonable" hierarchical belief is represented in this big huge model.
(2) Type partitions arise naturally: they correspond to "reasonable" hierarchical beliefs.
Thus, both difficulties with models of payoff uncertainty can be overcome.