Eco514 - Game Theory
Lecture 6: Interactive Epistemology (1)
Marciano Siniscalchi
October 5, 1999
Introduction
This lecture focuses on the interpretation of solution concepts for normal-form games. You
will recall that, when we introduced Nash equilibrium and Rationalizability, we mentioned
numerous reasons why these solution concepts could be regarded as yielding plausible restrictions on rational play, or perhaps providing a consistency check for our predictions about
it.
However, in doing so, we had to appeal to intuition, by and large. Even a simple assumption such as "Player 1 believes that Player 2 is rational" involves objects that are not part of the standard description of a game with complete information. In particular, recall that Bayesian rationality is a condition which relates behavior and beliefs: a player is "rational" if and only if she chooses an action which is a best reply given her beliefs. But then, to say that Player 1 believes that Player 2 is rational implies that Player 1 holds a conjecture on both Player 2's actions and her beliefs.
The standard model for games with complete information does not contain enough structure for us to formalize this sort of assumption: players' beliefs are simply probability distributions on their opponents' action profiles.
But, of course, the model we have developed (following Harsanyi) for games with payoff uncertainty does allow us to generate beliefs about beliefs, and indeed infinite hierarchies of mutual beliefs.
The objective of this lecture is to present a model of interactive beliefs based on Harsanyi's ideas, with minimal modifications to our setting for games with payoff uncertainty. We shall then begin our investigation of "interactive epistemology" in normal-form games by looking at correlated equilibrium.
The basic idea
Recall that, in order to represent payoff uncertainty, we introduced a set $\Omega$ of states of the world, and made the players' payoff functions depend on the realization $\omega \in \Omega$, as well as on the profile $(a_i)_{i \in N} \in \prod_{i \in N} A_i$ of actions chosen by the players.
This allowed us to represent hierarchical beliefs about the state of the world; however, we are still unable to describe hierarchical beliefs about actions (at least not without introducing additional information, such as a specification of equilibrium actions for each type of each player).
Thus, a natural extension suggests itself. For simplicity, I will consider games without payoff uncertainty, but the extension should be obvious.
Definition 1 Consider a simultaneous game $G = (N, (A_i, u_i)_{i \in N})$ (without payoff uncertainty). A frame for $G$ is a tuple $F = (\Omega, (T_i, a_i)_{i \in N})$ such that, for every player $i \in N$, $T_i$ is a partition of $\Omega$, and $a_i$ is a map $a_i : \Omega \to A_i$ such that, for every $\bar{a}_i \in A_i$,
$$a_i^{-1}(\bar{a}_i) \neq \emptyset \;\Rightarrow\; a_i^{-1}(\bar{a}_i) \in T_i.$$
Continue to denote by $t_i(\omega)$ the cell of the partition $T_i$ containing $\omega$. Finally, a model for $G$ is a tuple $M = (F, (p_i)_{i \in N})$, where $F$ is a frame for $G$ and each $p_i$ is a probability distribution on $\Omega$.
I distinguish between frames and models to emphasize that probabilistic beliefs convey additional information, which we wish to relate to solution concepts. The distinction is also often made in the literature.
The main innovation is the introduction of the functions $a_i(\cdot)$. This is not so far-fetched: after all, uncertainty about opponents' actions is clearly a form of payoff uncertainty, one that arises in any strategic situation. However, by making players' choices part of the state of the world, it is possible to discuss the players' hierarchical beliefs about them. Ultimately, we wish to relate solution concepts to precisely such assumptions.
There is one technical issue which deserves to be noted. We are assuming that "actions be measurable with respect to types," to use a conventional expression; that is, whenever $\omega, \omega' \in t_i \in T_i$, the action chosen by Player $i$ at $\omega$ has to be the same as the action she chooses at $\omega'$. This is natural: after all, in any given state, a player only knows her type, so it would be impossible for her to implement a contingent action plan which specifies different choices at different states consistent with her type. Our definition of a frame captures this.
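As a minimal illustration, the sketch below checks this measurability requirement on a computer. The encoding (plain Python dictionaries, with a hypothetical two-state example) is mine and purely illustrative, not part of the notes' formalism.

```python
# A sketch of the measurability requirement: the action map a_i must be
# constant on every cell of the type partition T_i. The tiny two-state
# example is hypothetical, chosen only to exercise the check.

def is_measurable(partition, action_map):
    """True iff action_map assigns a single action to every cell of partition."""
    return all(len({action_map[w] for w in cell}) == 1 for cell in partition)

partition_i = [frozenset({"w1"}), frozenset({"w2"})]   # T_i: player i can tell w1 from w2
action_i = {"w1": "X", "w2": "Y"}                      # a_i : Omega -> A_i

print(is_measurable(partition_i, action_i))            # True

# A violation: with the coarser partition {w1, w2}, the same type would have
# to play two different actions, which Definition 1 rules out.
print(is_measurable([frozenset({"w1", "w2"})], action_i))   # False
```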
Putting the model to work
Let us consider one concrete example to fix ideas. Figure 1 exhibits a game and a model for
it.
             L       R
    T       1,1     0,0
    B       0,0     1,1

  $\omega$      $t_1(\omega)$   $a_1(\omega)$   $p_1(\omega)$   $t_2(\omega)$   $a_2(\omega)$   $p_2(\omega)$
  $\omega_1$    $t_1^1$         T               0               $t_2^1$         R               0.4
  $\omega_2$    $t_1^1$         T               0.5             $t_2^2$         L               0.5
  $\omega_3$    $t_1^2$         B               0.5             $t_2^2$         L               0.1

Figure 1: A game and a model for it
The second table includes all the information required by Definition 1. In particular, note that it implicitly defines the partitions $T_i$, $i = 1, 2$, via the possibility correspondences $t_i : \Omega \Rightarrow \Omega$.
As previously advertised, at each state $\omega \in \Omega$ in a model, players' actions and beliefs are completely specified. For instance, at $\omega_1$, the profile (T,R) is played, Player 1 is certain that Player 2 chooses L (note that this belief is incorrect), and Player 2 is certain that Player 1 chooses T (which is a correct belief). Thus, given their beliefs, Player 1 is rational (T is a best reply to L) and Player 2 is not (R is not a best reply to T).
Moreover, note that, at $\omega_2$, Player 2 believes that the state is $\omega_2$ (hence, that Player 1 chooses T) with probability $\frac{0.5}{0.5 + 0.1} = \frac{5}{6}$, and that it is $\omega_3$ (hence, that Player 1 chooses B) with probability $\frac{1}{6}$. At $\omega_2$ Player 2 chooses L, which is her unique best reply given her beliefs.
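A quick numerical check of these claims (a sketch; the payoffs are read off the game table in Figure 1):

```python
# Player 2's posterior at omega_2: condition p_2 on the cell t_2^2 = {omega_2, omega_3}.
p2 = {"w2": 0.5, "w3": 0.1}
total = sum(p2.values())                     # p_2(t_2^2) = 0.6
posterior = {w: p / total for w, p in p2.items()}
print(posterior)                             # {'w2': 0.833..., 'w3': 0.166...}, i.e. 5/6 and 1/6

# Expected payoffs for Player 2, given that Player 1 plays T at omega_2 and B at omega_3.
u2 = {("T", "L"): 1, ("T", "R"): 0, ("B", "L"): 0, ("B", "R"): 1}
eu_L = posterior["w2"] * u2[("T", "L")] + posterior["w3"] * u2[("B", "L")]   # 5/6
eu_R = posterior["w2"] * u2[("T", "R")] + posterior["w3"] * u2[("B", "R")]   # 1/6
print(eu_L > eu_R)                           # True: L is indeed Player 2's unique best reply
```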
Thus, we can also say that at $\omega_1$ Player 1 assigns probability one to the event that the state is really $\omega_2$, and hence that (i) Player 2's beliefs about Player 1's actions are given by $(\frac{5}{6}, T;\ \frac{1}{6}, B)$; and that (ii) Player 2 chooses L. Thus, at $\omega_1$ Player 1 is "certain" that Player 2 is rational. Of course, note that at $\omega_1$ Player 2 is really not rational!
We can push this quite a bit further. For instance, type $t_2^2$ of Player 2 assigns probability $\frac{1}{6}$ to $\omega_3$, a state in which Player 1 is not rational (she is certain that the state is $\omega_3$, hence that Player 2 chooses L, but she plays B). Hence, at $\omega_1$, Player 1 is "certain" that Player 2 assigns probability $\frac{1}{6}$ to the "event" that she (i) believes that 2 chooses L, and (ii) plays B; hence, she is not rational. This is a statement involving three orders of beliefs. It also corresponds to an incorrect belief: at $\omega_1$, Player 2 is certain that Player 1 chooses T and is of type $t_1^1$, hence that she is rational!
We are ready for formal definitions of "rationality" and "certainty." Recall that, given any belief $\alpha_{-i} \in \Delta(A_{-i})$ for Player $i$, $r_i(\alpha_{-i})$ denotes the set of best replies for $i$ given $\alpha_{-i}$.
First, a preliminary notion:
Definition 2 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and a model $M = (\Omega, (T_i, a_i, p_i)_{i \in N})$ for $G$. The first-order beliefs function $\alpha_i : \Omega \to \Delta(A_{-i})$ for Player $i$ is defined by
$$\forall \omega \in \Omega,\ a_{-i} \in A_{-i}:\quad \alpha_i(\omega)(a_{-i}) = p_i\big(\{\omega' : \forall j \neq i,\ a_j(\omega') = a_j\} \,\big|\, t_i(\omega)\big).$$
That is, the probability of a profile $a_{-i} \in A_{-i}$ is given by the (conditional) probability of all states where that profile is played. Notice that the function $\alpha_i(\cdot)$ is $T_i$-measurable, just like $a_i(\cdot)$. Also note that this is a belief about players $j \neq i$, held by player $i$.
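For concreteness, the first-order beliefs function can be computed mechanically in the Figure 1 model. The sketch below uses an ad hoc encoding of that model (labels w1, w2, w3 for the states; plain dictionaries for types, action functions and priors), which is illustrative and not part of the formal apparatus.

```python
# Figure 1 model: Omega = {w1, w2, w3}; type partitions, action maps and priors.
types   = {1: {"w1": ("w1", "w2"), "w2": ("w1", "w2"), "w3": ("w3",)},
           2: {"w1": ("w1",), "w2": ("w2", "w3"), "w3": ("w2", "w3")}}
actions = {1: {"w1": "T", "w2": "T", "w3": "B"},
           2: {"w1": "R", "w2": "L", "w3": "L"}}
priors  = {1: {"w1": 0.0, "w2": 0.5, "w3": 0.5},
           2: {"w1": 0.4, "w2": 0.5, "w3": 0.1}}

def first_order_belief(i, w):
    """alpha_i(w): probability of the opponent's action, conditional on t_i(w)."""
    j = 2 if i == 1 else 1
    cell = types[i][w]
    denom = sum(priors[i][x] for x in cell)
    belief = {}
    for x in cell:
        belief[actions[j][x]] = belief.get(actions[j][x], 0.0) + priors[i][x] / denom
    return belief

print(first_order_belief(1, "w1"))   # {'R': 0.0, 'L': 1.0}: Player 1 is certain that 2 plays L
print(first_order_belief(2, "w2"))   # {'T': 0.833..., 'B': 0.166...}: the (5/6, 1/6) belief above
```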
Definition 3 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and a model $M = (\Omega, (T_i, a_i, p_i)_{i \in N})$ for $G$. A player $i \in N$ is deemed rational at state $\omega \in \Omega$ iff $a_i(\omega) \in r_i(\alpha_i(\omega))$. Define the event "Player $i$ is rational" by
$$R_i = \{\omega \in \Omega : a_i(\omega) \in r_i(\alpha_i(\omega))\}$$
and the event "Every player is rational" by $R = \bigcap_{i \in N} R_i$.
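Continuing the same illustrative encoding of the Figure 1 model, the events $R_1$, $R_2$ and $R$ can be computed directly from this definition (a sketch; the helper best_replies is my own naming):

```python
states  = ("w1", "w2", "w3")
types   = {1: {"w1": ("w1", "w2"), "w2": ("w1", "w2"), "w3": ("w3",)},
           2: {"w1": ("w1",), "w2": ("w2", "w3"), "w3": ("w2", "w3")}}
actions = {1: {"w1": "T", "w2": "T", "w3": "B"},
           2: {"w1": "R", "w2": "L", "w3": "L"}}
priors  = {1: {"w1": 0.0, "w2": 0.5, "w3": 0.5},
           2: {"w1": 0.4, "w2": 0.5, "w3": 0.1}}
u       = {1: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1},
           2: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1}}
acts    = {1: ("T", "B"), 2: ("L", "R")}

def first_order_belief(i, w):
    j = 2 if i == 1 else 1
    cell = types[i][w]
    denom = sum(priors[i][x] for x in cell)
    belief = {}
    for x in cell:
        belief[actions[j][x]] = belief.get(actions[j][x], 0.0) + priors[i][x] / denom
    return belief

def best_replies(i, belief):
    """r_i(belief): the actions of player i that maximize expected payoff against belief."""
    def eu(ai):
        return sum(q * (u[1][(ai, aj)] if i == 1 else u[2][(aj, ai)]) for aj, q in belief.items())
    top = max(eu(ai) for ai in acts[i])
    return {ai for ai in acts[i] if abs(eu(ai) - top) < 1e-12}

R = {i: {w for w in states if actions[i][w] in best_replies(i, first_order_belief(i, w))}
     for i in (1, 2)}
print(R[1], R[2], R[1] & R[2])   # R_1 = {w1, w2}, R_2 = {w2, w3}, R = {w2}
```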
Definition 3 is quite straightforward. Finally, adapting the definition we gave last time:
Definition 4 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and a model $M = (\Omega, (T_i, a_i, p_i)_{i \in N})$ for $G$. Player $i$'s belief operator is the map $B_i : 2^\Omega \to 2^\Omega$ defined by
$$\forall E \subseteq \Omega:\quad B_i(E) = \{\omega \in \Omega : p_i(E \mid t_i(\omega)) = 1\}.$$
Also define the event "Everybody is certain that $E$ is true" by $B(E) = \bigcap_{i \in N} B_i(E)$.
The following shorthand definitions are also convenient:
$$\forall i \in N,\ q \in \Delta(A_{-i}):\quad [\alpha_i = q] = \{\omega : \alpha_i(\omega) = q\},$$
which extends our previous notation, and
$$\forall i \in N,\ \bar{a}_i \in A_i:\quad [a_i = \bar{a}_i] = \{\omega : a_i(\omega) = \bar{a}_i\}.$$
We now have a rather powerful and concise language to describe strategic reasoning in games. For instance, the following relations summarize our discussion of Figure 1:
$$\omega_1 \in B_1([a_2 = L]) \cap B_2([a_1 = T]),$$
and also, more interestingly:
$$\omega_1 \in R_1, \quad \omega_2 \in R_2, \quad \omega_1 \in B_1(R_2).$$
In fact:
$$\omega_2 \in \Omega \setminus B_2(R_1), \quad \omega_1 \in B_1(\Omega \setminus B_2(R_1)).$$
Notice that we are finally able to give formal content to statements such as "Player 1 is certain that Player 2 is rational". These correspond to events in a given model, which in turn represents well-defined hierarchies of beliefs.
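These relations can also be verified mechanically. The sketch below implements the belief operator of Definition 4 on the same ad hoc encoding of the Figure 1 model, using the rationality events computed in the previous sketch; all cells have positive prior probability here, so the conditional probabilities are well defined.

```python
states = ("w1", "w2", "w3")
types  = {1: {"w1": ("w1", "w2"), "w2": ("w1", "w2"), "w3": ("w3",)},
          2: {"w1": ("w1",), "w2": ("w2", "w3"), "w3": ("w2", "w3")}}
priors = {1: {"w1": 0.0, "w2": 0.5, "w3": 0.5},
          2: {"w1": 0.4, "w2": 0.5, "w3": 0.1}}

def B(i, event):
    """B_i(event): states at which player i assigns conditional probability one to event."""
    out = set()
    for w in states:
        cell = types[i][w]
        if sum(priors[i][x] for x in cell if x in event) == sum(priors[i][x] for x in cell):
            out.add(w)
    return out

R1, R2 = {"w1", "w2"}, {"w2", "w3"}            # rationality events from the previous sketch
a2_is_L, a1_is_T = {"w2", "w3"}, {"w1", "w2"}  # the events [a_2 = L] and [a_1 = T]

print("w1" in B(1, a2_is_L) and "w1" in B(2, a1_is_T))               # True
print("w1" in B(1, R2))                                              # True
print("w2" not in B(2, R1), "w1" in B(1, set(states) - B(2, R1)))    # True True
```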
I conclude by noting a few properties of belief operators.
Proposition 0.1 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and a model $M = (\Omega, (T_i, a_i, p_i)_{i \in N})$ for $G$. Then, for every $i \in N$:
(1) $t_i = B_i(t_i)$ for every $t_i \in T_i$;
(2) $E \subseteq F$ implies $B_i(E) \subseteq B_i(F)$;
(3) $B_i(E \cap F) = B_i(E) \cap B_i(F)$;
(4) $B_i(E) \subseteq B_i(B_i(E))$ and $\Omega \setminus B_i(E) \subseteq B_i(\Omega \setminus B_i(E))$;
(5) $R_i \subseteq B_i(R_i)$.
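In the three-state model of Figure 1 these properties can be confirmed by brute force, simply enumerating all events $E, F \subseteq \Omega$ (a sketch, reusing the ad hoc encoding from above):

```python
from itertools import chain, combinations

states = ("w1", "w2", "w3")
types  = {1: {"w1": ("w1", "w2"), "w2": ("w1", "w2"), "w3": ("w3",)},
          2: {"w1": ("w1",), "w2": ("w2", "w3"), "w3": ("w2", "w3")}}
priors = {1: {"w1": 0.0, "w2": 0.5, "w3": 0.5},
          2: {"w1": 0.4, "w2": 0.5, "w3": 0.1}}
R = {1: {"w1", "w2"}, 2: {"w2", "w3"}}       # rationality events from the earlier sketch

def B(i, event):
    return {w for w in states
            if sum(priors[i][x] for x in types[i][w] if x in event)
            == sum(priors[i][x] for x in types[i][w])}

Omega = set(states)
events = [set(E) for E in chain.from_iterable(combinations(states, k) for k in range(4))]
for i in (1, 2):
    cells = {frozenset(c) for c in types[i].values()}
    assert all(B(i, set(c)) == set(c) for c in cells)                                # (1)
    assert all(B(i, E) <= B(i, F) for E in events for F in events if E <= F)         # (2)
    assert all(B(i, E & F) == B(i, E) & B(i, F) for E in events for F in events)     # (3)
    assert all(B(i, E) <= B(i, B(i, E)) and
               Omega - B(i, E) <= B(i, Omega - B(i, E)) for E in events)             # (4)
    assert R[i] <= B(i, R[i])                                                        # (5)
print("Properties (1)-(5) hold in the Figure 1 model.")
```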
Correlated Equilibrium
As a first application of this formalism, I will provide a characterization of the notion of
correlated equilibrium, due to R. Aumann.
I have already argued that the fact that players choose their actions independently of
each other does not imply that beliefs should necessarily be stochastically independent (recall
the "betting on coordination" game). Correlated equilibrium provides a way to allow for
correlated beliefs that is consistent with the equilibrium approach.
Definition 5 Fix a game $G = (N, (A_i, u_i)_{i \in N})$. A correlated equilibrium of $G$ is a probability distribution $\mu \in \Delta(A)$ such that, for every player $i \in N$ and every function $d_i : A_i \to A_i$,
$$\sum_{(a_i, a_{-i}) \in A} u_i(a_i, a_{-i})\,\mu(a_i, a_{-i}) \;\geq\; \sum_{(a_i, a_{-i}) \in A} u_i(d_i(a_i), a_{-i})\,\mu(a_i, a_{-i}).$$
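To make the quantification over deviation maps $d_i$ concrete, the sketch below checks this condition for the coordination game of Figure 1 and a candidate distribution of my own choosing, namely a fair public coin toss between the two pure equilibria (T, L) and (B, R); neither the distribution nor the function names come from the text.

```python
from itertools import product

# Coordination game from Figure 1.
A1, A2 = ("T", "B"), ("L", "R")
u = {1: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1},
     2: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1}}

# Candidate mu: a fair coin selecting between the two pure equilibria (illustrative choice).
mu = {("T","L"): 0.5, ("B","R"): 0.5, ("T","R"): 0.0, ("B","L"): 0.0}

def deviation_maps(Ai):
    """All functions d_i : A_i -> A_i."""
    return [dict(zip(Ai, image)) for image in product(Ai, repeat=len(Ai))]

def is_correlated_eq(mu):
    for i, Ai in ((1, A1), (2, A2)):
        for d in deviation_maps(Ai):
            lhs = rhs = 0.0
            for (a1, a2), prob in mu.items():
                ai = a1 if i == 1 else a2
                deviated = (d[ai], a2) if i == 1 else (a1, d[ai])
                lhs += u[i][(a1, a2)] * prob          # expected payoff from obeying
                rhs += u[i][deviated] * prob          # expected payoff from the plan d_i
            if lhs < rhs - 1e-12:
                return False
    return True

print(is_correlated_eq(mu))   # True: the coin toss over (T,L) and (B,R) is a correlated equilibrium
```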
The above is the standard definition of correlated equilibrium. However:
Proposition 0.2 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and a probability distribution $\mu \in \Delta(A)$. Then $\mu$ is a correlated equilibrium of $G$ if and only if, for any player $i \in N$ and action $a_i \in A_i$ such that $\mu(\{a_i\} \times A_{-i}) > 0$, and for all $a_i' \in A_i$,
$$\sum_{a_{-i} \in A_{-i}} u_i(a_i, a_{-i})\,\mu(a_{-i} \mid a_i) \;\geq\; \sum_{a_{-i} \in A_{-i}} u_i(a_i', a_{-i})\,\mu(a_{-i} \mid a_i),$$
where $\mu(a_{-i} \mid a_i) = \mu(\{(a_i, a_{-i})\} \mid \{a_i\} \times A_{-i})$.
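The conditional form is just as easy to check; applied to the same illustrative distribution as above, the sketch below reaches the same verdict, as Proposition 0.2 promises.

```python
A1, A2 = ("T", "B"), ("L", "R")
u = {1: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1},
     2: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1}}
mu = {("T","L"): 0.5, ("B","R"): 0.5, ("T","R"): 0.0, ("B","L"): 0.0}

def is_correlated_eq_conditional(mu):
    for i, (Ai, Aj) in ((1, (A1, A2)), (2, (A2, A1))):
        for ai in Ai:
            marg = sum(mu[(ai, aj) if i == 1 else (aj, ai)] for aj in Aj)
            if marg == 0:
                continue   # the condition only binds for actions with positive probability
            cond = {aj: mu[(ai, aj) if i == 1 else (aj, ai)] / marg for aj in Aj}  # mu( . | a_i)
            def eu(b):
                return sum(q * u[i][(b, aj) if i == 1 else (aj, b)] for aj, q in cond.items())
            if any(eu(b) > eu(ai) + 1e-12 for b in Ai):
                return False
    return True

print(is_correlated_eq_conditional(mu))   # True, agreeing with the ex-ante check above
```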
Proof: Fix a player $i \in N$. Observe first that, for any function $f : A_i \to A_i$,
$$\sum_{(a_i, a_{-i}) \in A} u_i(f(a_i), a_{-i})\,\mu(a_i, a_{-i}) = \sum_{a_i \in A_i} \sum_{a_{-i} \in A_{-i}} u_i(f(a_i), a_{-i})\,\mu(a_i, a_{-i}) = \sum_{a_i :\, \mu(\{a_i\} \times A_{-i}) > 0} \mu(\{a_i\} \times A_{-i}) \sum_{a_{-i} \in A_{-i}} u_i(f(a_i), a_{-i})\,\mu(a_{-i} \mid a_i).$$
Suppose first that there exist actions $\bar{a}_i \in A_i$ with $\mu(\{\bar{a}_i\} \times A_{-i}) > 0$ and $a_i' \in A_i$ such that $\sum_{a_{-i} \in A_{-i}} u_i(\bar{a}_i, a_{-i})\,\mu(a_{-i} \mid \bar{a}_i) < \sum_{a_{-i} \in A_{-i}} u_i(a_i', a_{-i})\,\mu(a_{-i} \mid \bar{a}_i)$. Then the function $d_i : A_i \to A_i$ defined by $d_i(\bar{a}_i) = a_i'$ and $d_i(a_i) = a_i$ for all $a_i \neq \bar{a}_i$ constitutes a profitable ex-ante deviation (see the above observation), so $\mu$ cannot be a correlated equilibrium.
Conversely, suppose that the above inequality holds for all $a_i$ and $a_i'$ as in the claim. Now consider any function $d_i : A_i \to A_i$: by assumption, $\sum_{a_{-i} \in A_{-i}} u_i(a_i, a_{-i})\,\mu(a_{-i} \mid a_i) \geq \sum_{a_{-i} \in A_{-i}} u_i(d_i(a_i), a_{-i})\,\mu(a_{-i} \mid a_i)$ for any $a_i$ such that $\mu(\{a_i\} \times A_{-i}) > 0$. The claim follows from our initial observation. $\blacksquare$
Proposition 0.2 draws a connection between the notions of Nash and correlated equilibrium: recall that, in the former, an action receives positive probability iff it is a best response to the equilibrium belief. Note also that, if $\mu \in \Delta(A)$ is an independent probability distribution, then $\mu(a_{-i} \mid a_i) = \mu_{-i}(a_{-i})$, the marginal of $\mu$ on $A_{-i}$, for all $a_i \in A_i$. Thus, every Nash equilibrium is a correlated equilibrium.
Moreover, Proposition 0.2 reinforces our interpretation of correlated equilibrium as an
attempt to depart from independence of beliefs, while remaining firmly within an equilibrium
setting.
The first step in the epistemic characterization of correlated equilibrium is actually prompted by a more basic question. The story that is most often heard to justify correlated equilibrium runs along the following lines: the players bring in an outside observer who randomizes according to the distribution $\mu$ and prescribes an action to each player. Definition 5 then essentially requires that the players find it profitable ex-ante to follow the prescription rather than adopt any alternative prescription-contingent plan (i.e. "if the observer tells me to do X, I shall do Y instead"). Proposition 0.2 shows that this is equivalent to assuming that, upon receiving a prescription, players do not gain by deviating to any other action.
The basic question that should have occurred to you is whether a richer "communication structure" allows for more coordination opportunities, i.e. whether there exist expected payoff vectors which may be achieved using a richer structure, but may not be achieved when messages are limited to action prescriptions.
The answer to this question is actually negative, as follows from a simple application
of the Revelation principle. However, the point is that frames may also be used to define
correlated equilibria in this extended sense.
Definition 6 Fix a game $G = (N, (A_i, u_i)_{i \in N})$. An extended correlated equilibrium is a tuple $(F, \rho)$ where:
(1) $F = (\Omega, (T_i, a_i)_{i \in N})$ is a frame for $G$;
(2) $\rho \in \Delta(\Omega)$ is a probability over $\Omega$ such that, for all $i \in N$ and $t_i \in T_i$, $\rho(t_i) > 0$;
(3) for every player $i \in N$ and $t_i \in T_i$,
$$\sum_{\omega \in \Omega} u_i\big(a_i(\omega), (a_j(\omega))_{j \neq i}\big)\,\rho(\omega \mid t_i) \;\geq\; \sum_{\omega \in \Omega} u_i\big(a_i', (a_j(\omega))_{j \neq i}\big)\,\rho(\omega \mid t_i)$$
for all $a_i' \in A_i$.
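The sketch below spells out what verifying condition (3) involves, on an illustrative correlating device of my own (not one discussed in the text): a fair public coin that selects one of the two pure equilibria of the coordination game.

```python
# Extended correlated equilibrium: checking condition (3) of Definition 6.
# Device: a fair coin, observed by both players; at w_TL the intended profile is (T, L),
# at w_BR it is (B, R).
rho     = {"w_TL": 0.5, "w_BR": 0.5}
types   = {1: [("w_TL",), ("w_BR",)], 2: [("w_TL",), ("w_BR",)]}   # each player observes the coin
actions = {1: {"w_TL": "T", "w_BR": "B"}, 2: {"w_TL": "L", "w_BR": "R"}}
acts    = {1: ("T", "B"), 2: ("L", "R")}
u = {1: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1},
     2: {("T","L"): 1, ("T","R"): 0, ("B","L"): 0, ("B","R"): 1}}

def is_extended_corr_eq(rho, types, actions):
    for i in (1, 2):
        for cell in types[i]:
            mass = sum(rho[w] for w in cell)            # condition (2): rho(t_i) > 0
            cond = {w: rho[w] / mass for w in cell}     # rho( . | t_i)
            def eu(choice):                             # expected payoff when i plays choice(w) at w
                total = 0.0
                for w, q in cond.items():
                    a1 = choice(w) if i == 1 else actions[1][w]
                    a2 = choice(w) if i == 2 else actions[2][w]
                    total += q * u[i][(a1, a2)]
                return total
            obeying = eu(lambda w: actions[i][w])
            if any(eu(lambda w, b=b: b) > obeying + 1e-12 for b in acts[i]):
                return False                            # some constant deviation a_i' is profitable
    return True

print(is_extended_corr_eq(rho, types, actions))         # True
```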
The similarity between (3) and Proposition 0.2 should be obvious.
Note that the formal definition of a frame in an extended correlated equilibrium is as in Definition 1. However, the standard interpretation is different: the cells $t_i \in T_i$ represent possible messages that the observer may send to Player $i$; since the action functions $a_i(\cdot)$ are $T_i$-measurable by the definition of a frame, they represent message-contingent action plans; finally, $\Omega$ and $\rho$ model an abstract, general randomizing (or "correlating") device. The idea is that, upon observing $\omega \in \Omega$, the outside observer sends the message $t_i(\omega)$ to every Player $i \in N$.
It is clear that every correlated equilibrium $\mu$ according to Definition 5 can be interpreted as an extended correlated equilibrium as per Definition 6: let $\Omega = \mathrm{supp}\,\mu$, i.e. the set of action profiles that get played in equilibrium; then define the type partitions indirectly, via the possibility correspondence, assuming that, at each state $\omega = (a_i, a_{-i}) \in \Omega$, Player $i$ is told what her action must be:
$$t_i(a_i, a_{-i}) = \{a_i\} \times \{a_{-i}' : (a_i, a_{-i}') \in \mathrm{supp}\,\mu\}.$$
Since $t_i(a_i, a_{-i})$ actually depends on $a_i$ only, I denote this type by $t_i^{a_i}$. Finally, let $a_i(a_i, a_{-i}) = a_i$ and $\rho = \mu$. With these definitions, note that, for every $a_i \in A_i$, and for every $\omega = (a_i, a_{-i}) \in t_i^{a_i}$, $a_i(\omega) = a_i$ and $\rho(\omega \mid t_i^{a_i}) = \mu(a_{-i} \mid a_i)$. This implies that (3) in Definition 6 must hold.
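This construction is mechanical enough to write down. The sketch below builds $\Omega = \mathrm{supp}\,\mu$, the type partitions $t_i^{a_i}$ and $\rho = \mu$ from the illustrative distribution used earlier; the resulting objects are in the shape expected by the condition-(3) checker sketched after Definition 6.

```python
# From a correlated equilibrium mu on A to an extended correlated equilibrium:
# Omega = supp(mu), each player is told her own action, rho = mu.
mu = {("T","L"): 0.5, ("B","R"): 0.5, ("T","R"): 0.0, ("B","L"): 0.0}

support = [a for a, p in mu.items() if p > 0]      # Omega = supp(mu)
rho     = {a: mu[a] for a in support}              # rho = mu restricted to its support

def cell(i, state):
    """t_i^{a_i}: all states in the support that share player i's action."""
    return tuple(sorted(w for w in support if w[i - 1] == state[i - 1]))

types   = {i: sorted({cell(i, w) for w in support}) for i in (1, 2)}
actions = {i: {w: w[i - 1] for w in support} for i in (1, 2)}   # a_i(a_i, a_{-i}) = a_i

print(types[1])   # [(('B', 'R'),), (('T', 'L'),)]: each type of Player 1 reveals her own action
print(rho)        # {('T', 'L'): 0.5, ('B', 'R'): 0.5}
```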
By the Revelation principle, the converse is also true. Intuitively, instead of sending the message $t_i(\omega)$ to Player $i$ whenever $\omega$ is realized, the outside observer could simply instruct Player $i$ to play $a_i(\omega)$. If it was unprofitable to deviate from $a_i(\omega)$ in the original messaging setting, then it must be unprofitable to do so in the simplified game as well.
Formally, given an extended correlated equilibrium $(F, \rho)$, define
$$\mu(a) = \rho(\{\omega : \forall i \in N,\ a_i(\omega) = a_i\});$$
now observe that, for any $a_i \in A_i$, $\mu(\{a_i\} \times A_{-i}) > 0$ iff there exists a (maximal) collection of types $t_i^1, \ldots, t_i^K \in T_i$ such that, at all states $\omega \in \bigcup_{k=1}^K t_i^k$, $a_i(\omega) = a_i$. Now (3) in Definition 6 implies that
$$\sum_{\omega \in \Omega} u_i\big(a_i, (a_j(\omega))_{j \neq i}\big)\,\rho\Big(\omega \,\Big|\, \bigcup_{k=1}^K t_i^k\Big) \;\geq\; \sum_{\omega \in \Omega} u_i\big(a_i', (a_j(\omega))_{j \neq i}\big)\,\rho\Big(\omega \,\Big|\, \bigcup_{k=1}^K t_i^k\Big)$$
for all $a_i' \in A_i$, because all types have positive probability: the inequality conditional on $\bigcup_{k=1}^K t_i^k$ is a convex combination, with weights $\rho(t_i^k)/\rho(\bigcup_{k=1}^K t_i^k)$, of the inequalities in (3) for the individual cells $t_i^k$. We can clearly rewrite the above summations as follows:
$$\sum_{a_{-i} \in A_{-i}} u_i(a_i, a_{-i})\,\rho\Big(\bigcap_{j \neq i} [a_j = a_j] \,\Big|\, \bigcup_{k=1}^K t_i^k\Big) \;\geq\; \sum_{a_{-i} \in A_{-i}} u_i(a_i', a_{-i})\,\rho\Big(\bigcap_{j \neq i} [a_j = a_j] \,\Big|\, \bigcup_{k=1}^K t_i^k\Big),$$
and since $\rho\big(\bigcap_{j \neq i} [a_j = a_j] \,\big|\, \bigcup_{k=1}^K t_i^k\big) = \mu(a_{-i} \mid a_i)$ by construction, $\mu$ is a correlated equilibrium according to Proposition 0.2.
The bottom line is that we can actually take Definition 6 as our basic notion of correlated
equilibrium (OR does this, for example): there is no added generality.
But then we get an epistemic characterization of correlated equilibrium almost for free. Observe that Condition (3) in Definition 6 implies that, at each state $\omega \in t_i$, Player $i$'s action $a_i(\omega)$ is a best response to her first-order beliefs, given by $\alpha_i(\omega)(a_{-i}) = \rho(\{\omega' : \forall j \neq i,\ a_j(\omega') = a_j\} \mid t_i(\omega))$. Hence, if we reinterpret an extended correlated equilibrium $(F, \rho)$ as a model $M = (F, (p_i)_{i \in N})$ in which $p_i = \rho$ for all $i \in N$, we get:
Proposition 0.3 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and a model $M = (\Omega, (T_i, a_i, p_i)_{i \in N})$ for $G$. If there exists $\rho \in \Delta(\Omega)$ such that $p_i = \rho$ for all $i \in N$, and $R = \Omega$, then $((\Omega, (T_i, a_i)_{i \in N}), \rho)$ is an extended correlated equilibrium of $G$.
Conversely, if $\mu$ is a correlated equilibrium of $G$, there exists a model $M = (\Omega, (T_i, a_i, p_i)_{i \in N})$ for $G$ in which $\Omega = R$.
The model alluded to in the second part of the Proposition is of course the one constructed above. Observe that in that model $\mu$ is indeed a common prior.
You may feel a bit cheated by this result. After all, it seems all we have done is change
our interpretation of the relevant objects. This is of course entirely correct!
It is certainly the case that Proposition 0.3 characterizes correlated equilibrium beliefs.
More precisely, a distribution over action profiles is a correlated equilibrium belief if it is a
common prior in a model with the feature that every player is rational at every state.
As I have mentioned several times, this may or may not have any behavioral implications,
but at least Proposition 0.3 provides an arguably more palatable rationale for correlated
equilibrium beliefs than the \outside observer" story.