Eco514 - Game Theory
The Trembling Hand: Normal-Form Analysis and Extensive-Form Implications
Marciano Siniscalchi
January 10, 2000
Introduction: Invariance
In their seminal contribution, Von Neumann and Morgenstern argue that the normal form of a game contains all "strategically relevant" information. This view, note well, does not invalidate or trivialize extensive-form analysis; rather, it leads those who embrace it to be suspicious of extensive-form solution concepts which yield different predictions in distinct extensive games sharing the same normal (or reduced normal) form. Solution concepts which do not yield different predictions for such games are called invariant.
The supposed "strategic sufficiency of the normal form" also motivated the search for normal-form solution concepts which exhibit "nice" properties in every extensive form associated with a given normal-form game. The main proponent of this line of research is J.F. Mertens.
In my opinion, whether or not the normal form contains all "strategically relevant" information depends crucially on the solution concept one wishes to apply. This is actually a rather trivial point, but I am afraid it was overlooked in the debate on the sufficiency of the normal form. For instance, in order to compute the minmax value of a game, one only needs to look at strategies and the payoffs associated with strategy profiles; the information conveyed by the extensive form of the game (if such information is at all provided) is irrelevant as far as the minmax value calculation is concerned. Since Von Neumann and Morgenstern were mostly concerned with minmax values, the normal form was indeed sufficient for their purposes. The argument readily extends to Nash equilibrium analysis.
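To make this concrete, here is a toy illustration (the game and all names below are my own, not from the notes): pure-action minmax and maxmin values can be computed from a bare payoff table, with no reference to information sets.

```python
# Illustration (my own toy example): computing minmax and maxmin values
# requires only the normal form -- a table of payoffs indexed by strategy
# profiles. No extensive-form information is needed.

u1 = {  # Player 1's payoffs, indexed by (1's action, 2's action)
    ("T", "L"): 3, ("T", "R"): 0,
    ("B", "L"): 1, ("B", "R"): 2,
}
A1, A2 = ["T", "B"], ["L", "R"]

# Pure-action minmax value for player 1: the most the opponent can hold
# player 1 down to, anticipating 1's best response.
minmax1 = min(max(u1[(a1, a2)] for a1 in A1) for a2 in A2)

# Pure-action maxmin value for player 1: the payoff she can guarantee
# with a pure action, whatever the opponent does.
maxmin1 = max(min(u1[(a1, a2)] for a2 in A2) for a1 in A1)

print(minmax1, maxmin1)  # 2 1
```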
However, as soon as one wishes to restrict attention to sequential equilibria, it is clear that the normal form is not sufficient to carry out the analysis. Quite simply, the formal notion of "normal-form game" does not include a specification of information sets!
This point is more subtle than it appears. You will remember that, given an extensive game Γ and its normal form G, we defined, for each information set I, the collection of strategy profiles which reach I, S(I). Now, the latter is a normal-form object: it is simply a set of strategy profiles. The key point is that, in order to define it, we used the information contained in the definition of the extensive game Γ: the set S(I) is not part of the formal description of the normal-form game G!
[Note for the interested reader: Mailath, Samuelson and Swinkels (1993) characterize the sets of strategy profiles which can correspond to some information set in some extensive game with a given normal form. Their characterization only relies on the properties of the normal-form payoff functions, and is thus purely "normal-form" in nature and inspiration. However, a more sophisticated version of the argument given above applies: even granting that a given normal form contains enough information about all potential information sets, extensive games derived from that game differ in the actual information sets that the players have to take into account in their strategic reasoning. Clearly, this is "strategically relevant" information!]
As usual, I am not going to ask you to subscribe to my point of view. And, in any case, even if one does not "believe" in the sufficiency of the normal form, it may still be interesting to investigate the extensive-form implications of normal-form solution concepts. This is what we shall do in these notes.
Before that, we will review Thompson's and Dalkey's results concerning "inessential transformations" of extensive games; I refer you to OR for a formal treatment.
Inessential Transformations
As you will remember, Thompson and Dalkey propose four transformations of extensive games which, in their opinion, do not change the strategic problem faced by the players. As may be expected, these transformations are prima facie harmless, but, on closer inspection, at least some of them should not be accepted easily.
The result Thompson and Dalkey prove is striking: if game Γ1 can be transformed into game Γ2 by means of a sequence of inessential transformations, then Γ1 and Γ2 have the same (reduced) normal form. "Corollary": the normal form contains all strategically relevant information!
Of course, the result is correct, but the "Corollary" is not a formal statement: we can only accept it if we accept the transformations proposed by Thompson and Dalkey as irrelevant.
Let me emphasize a few key points. First, you will recall that I introduced a fifth transformation which entails replacing a non-terminal history where Chance moves, followed by terminal histories only, with a single terminal history; the corresponding payoffs are lotteries over the payoffs attached to the original terminal nodes, with probabilities given by the relevant Chance move probabilities. I mention this only for completeness: in my opinion, once we accept that players are Bayesian, this transformation is harmless.
These remarks are written under the assumption that you already know the formalities.
If you don’t, please review OR before proceeding.
Splitting-Coalescing. This transformation is rather harmless, in my opinion. It only becomes questionable if we assume that the "agents" of a given player assigned to distinct information sets really have a mind of their own. But since agents are a fiction to begin with, I feel comfortable with splitting/coalescing.
However, it must be noted that sequential equilibrium is not invariant to this transformation.
[Figure 1: Splitting matters. Player 1 chooses X, with payoffs (2,3), or one of T and B; Player 2, without observing whether T or B was chosen, then picks u or d. Payoffs: (T,u) = (1,0); (T,d) = (4,1); (B,u) = (0,1); (B,d) = (3,0).]
In the game of Figure 1, (X,u), together with the out-of-equilibrium belief µ({T,B})(B) = 1, is a sequential equilibrium: given the threat of u after T or B, 1 does well to choose X, and given the belief that 1 actually played B, 2 does well to play u. Note that µ is really unconstrained here.
However, suppose that 1's choice is split into two histories: first, at ∅, 1 chooses between X and, say, Y; then, after (Y), 1 chooses between T and B. The history (Y,T) corresponds to (T) in the old game, (Y,B) corresponds to (B), and so on.
Now the unique sequential equilibrium of the game is (YT,d): after Y, Player 1 faces a simultaneous-moves subgame in which T strictly dominates B. Hence, in any sequential equilibrium, β1(Y)(T) = 1. By consistency, this implies that µ({(Y,T),(Y,B)})((Y,T)) = 1, so that β2({(Y,T),(Y,B)})(d) = 1. But then β1(∅)(Y) = 1.
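The dominance argument for the split game can be checked mechanically. The sketch below (the payoffs are those of Figure 1; the scaffolding is my own) verifies that T strictly dominates B in the subgame, that 2's best reply to T is d, and that Y then beats X at the root.

```python
# Sketch (my own, not from the notes): brute-force check of the argument
# for the split game. After Y, players 1 and 2 move simultaneously.
u = {  # (1's choice, 2's choice) -> (payoff to 1, payoff to 2)
    ("T", "u"): (1, 0), ("T", "d"): (4, 1),
    ("B", "u"): (0, 1), ("B", "d"): (3, 0),
}

# T strictly dominates B for Player 1 in the subgame:
assert all(u[("T", a2)][0] > u[("B", a2)][0] for a2 in ("u", "d"))

# Hence Player 2, expecting T, best-responds with d:
best_for_2 = max(("u", "d"), key=lambda a2: u[("T", a2)][1])
print(best_for_2)  # d

# At the root, Y (followed by (T, d), worth 4 to Player 1) beats X (worth 2):
print(u[("T", "d")][0] > 2)  # True
```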
In my opinion, this example highlights a shortcoming of sequential equilibrium. However, note that, even in the original game, the equilibrium (X,u) fails a "reasonableness" test based on forward induction. Here's the idea: upon being reached, Player 2 must conclude that it is not the case that both (i) Player 1 is rational, and (ii) Player 1 expects Player 2 to choose u (for otherwise Player 1 would have chosen X); however, at least T would be a rational choice for 1, if 1 expected 2 to play d and not u (i.e., (i) may still be true, although (ii) is false); on the other hand, B would never be a rational choice. Thus, if we assume (note well: "assume") that Player 2 believes that Player 1 is rational as long as this is possible, we can conclude that Player 2 will interpret a deviation from X as a "signal" that (ii) is false, but that (i) is still true; in particular, Player 2 will believe that 1 has played T. But then he will best-respond with d, which breaks the original equilibrium.
Since we can break the (X,u) equilibrium without using a "transformation" of the game, I suggest that we accept the latter, and think about the possibility of refining away "unreasonable" sequential equilibria. We shall return to this point in the notes on forward induction.
Addition/Deletion of a Superfluous Move. In my opinion, this is the crucial transformation, one that I would certainly not call "inessential." Recall that a move of Player i is superfluous if, roughly speaking, (i) it does not influence payoffs; (ii) Player i does not know that the move she is making is superfluous; (iii) no opponent of Player i observes the superfluous move. Thus, adding such a move sounds pretty harmless.
However, the mere existence of a superfluous move may change the strategic problem faced by Player i. Here is an extreme example: consider the Entry Deterrence game from Lecture 11 (Fig. 1 in the notes); now add two superfluous moves, labelled f and a, after the Entrant's choice N; both moves lead to the terminal payoff corresponding to N in the original game; finally, let (E) and (N) belong to the same information set of the Incumbent. You can check the definition in OR to verify that the moves f and a added after N are indeed superfluous.
The game one obtains is essentially the normal form of the Entry Deterrence game. Clearly, we do not consider the two games equivalent!
To put it differently, the transformation applied to the Entry Deterrence game has changed the nature of the strategic problem faced by the Incumbent to a dramatic extent. In the original formulation, if the Incumbent was called upon to move, he knew that the Entrant had entered; hence, he was certain that a was the better course of action. After the modification, however, the Incumbent does not observe the choice of the Entrant; thus, we can construct an equilibrium in which the Incumbent threatens to play f, and the Entrant stays out. [If we want to push the "entry" story a bit harder, we can even say that this threat is credible. If the Entrant chooses E, the Incumbent does not observe this, and continues to believe that, as specified by the equilibrium, the Entrant has actually chosen N. Since the Incumbent is indifferent between f and a after N, f is a best reply.]
This argument at least suggests that addition/deletion of a superfluous move may not be an "inessential" transformation.
Perfection and Properness
Perfection
The basic idea behind perfect equilibrium is that equilibria should be robust to small "trembles" of the opponents away from the predicted play. This idea applies equally well to the normal and the extensive form.
OR covers perfection and proves a key result: equilibria that are perfect in the extensive
form can be extended to sequential equilibria. I will add alternative characterizations of
perfection.
We begin by de ning perturbations:
Definition 1 Fix a game $G = (N, (A_i, u_i)_{i \in N})$. A perturbation vector is a point $\varepsilon = (\varepsilon_i(a_i))_{i \in N, a_i \in A_i} \in \mathbb{R}_{++}^{\sum_{i \in N} |A_i|}$ such that, for all $i \in N$, $\sum_{a_i \in A_i} \varepsilon_i(a_i) < 1$. The $\varepsilon$-perturbation of $G$ is the game $G(\varepsilon) = (N, (A_i^\varepsilon, U_i)_{i \in N})$, where
$$A_i^\varepsilon = \{\alpha_i \in \Delta(A_i) : \forall a_i \in A_i,\ \alpha_i(a_i) \geq \varepsilon_i(a_i)\}$$
and each payoff function $U_i$ is the usual extension of $u_i$ from $\prod_i A_i$ to $\prod_i \Delta(A_i)$.
That is, in the game $G(\varepsilon)$, the minimum probability with which $a_i \in A_i$ is played is $\varepsilon_i(a_i)$. Note that $\varepsilon$-perturbations have convex and compact strategy sets, and payoff functions are continuous; therefore, every $\varepsilon$-perturbation of $G$ has a Nash equilibrium.
I use the notation $A^\varepsilon = \prod_{i \in N} A_i^\varepsilon$, $A_{-i}^\varepsilon = \prod_{j \neq i} A_j^\varepsilon$.
Remark 0.1 An action profile $(\alpha_i)_{i \in N} \in A^\varepsilon$ is a Nash equilibrium of $G(\varepsilon)$ iff, for every $i \in N$ and $a_i \in A_i$,
$$a_i \notin r_i(\alpha_{-i}) \Rightarrow \alpha_i(a_i) = \varepsilon_i(a_i)$$
(where $r_i(\cdot)$ is the best-reply correspondence of the original game).
Definition 2 Fix a game $G = (N, (A_i, u_i)_{i \in N})$. A (mixed-action) equilibrium $\alpha$ of $G$ is perfect iff there exist sequences $\alpha^n \to \alpha$ and $\varepsilon^n \to 0$ such that, for each $n$, $\alpha^n$ is a Nash equilibrium of $G(\varepsilon^n)$.
The intuition is that $\alpha$ is perfect iff it is the limit of equilibria of perturbed games in which every action gets played with positive (but vanishing) probability.
We now provide two alternative characterizations, one of which is the version you are
most familiar with.
Definition 3 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and $\varepsilon > 0$. A mixed action profile $\alpha$ is an $\varepsilon$-perfect equilibrium of $G$ iff for all $i \in N$ and $a_i \in A_i$: (i) $\alpha_i(a_i) > 0$; and (ii) $a_i \notin r_i(\alpha_{-i}) \Rightarrow \alpha_i(a_i) \leq \varepsilon$.
That is, in an $\varepsilon$-perfect equilibrium, actions that are not best replies receive "vanishingly small" probability. Note that an $\varepsilon$-perfect equilibrium need not be a Nash equilibrium.
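As a sketch of how Definition 3 can be operationalized (the game, the profile, and all helper names below are my own, not from the notes), here is a check of conditions (i) and (ii) for one player of a two-player game:

```python
# Sketch of Definition 3 for one player of a two-player game.
# sigma1, sigma2: dicts mapping actions to probabilities.

def expected_u1(u1, a1, sigma2):
    """Player 1's expected payoff from pure action a1 against sigma2."""
    return sum(p * u1[(a1, a2)] for a2, p in sigma2.items())

def is_eps_perfect_for_1(u1, sigma1, sigma2, eps):
    """Check conditions (i) and (ii) of Definition 3 for player 1."""
    payoffs = {a1: expected_u1(u1, a1, sigma2) for a1 in sigma1}
    best = max(payoffs.values())
    return all(
        p > 0 and (payoffs[a1] == best or p <= eps)  # (i) and (ii)
        for a1, p in sigma1.items()
    )

# Made-up 2x2 example: B is strictly dominated for player 1, so it may
# receive at most the tremble probability eps.
u1 = {("T", "L"): 2, ("T", "R"): 2, ("B", "L"): 0, ("B", "R"): 1}
eps = 0.01
sigma1 = {"T": 1 - eps, "B": eps}
sigma2 = {"L": 1 - eps, "R": eps}
print(is_eps_perfect_for_1(u1, sigma1, sigma2, eps))  # True
```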
I conclude with the main characterization result of this subsection.
Proposition 0.1 Fix a game $G = (N, (A_i, u_i)_{i \in N})$. Then the following statements are equivalent.
(i) $\alpha$ is a perfect equilibrium of $G$.
(ii) There exist sequences $\alpha^n \to \alpha$ and $\varepsilon^n \to 0$ such that, for each $n$, $\alpha^n$ is an $\varepsilon^n$-perfect equilibrium of $G$.
(iii) There exists a sequence $\alpha^n \to \alpha$ such that: (a) for every $n$, $i \in N$ and $a_i \in A_i$, $\alpha_i^n(a_i) > 0$; (b) for every $n$, $i \in N$ and $a_i \in A_i$ such that $\alpha_i(a_i) > 0$, $a_i \in r_i(\alpha_{-i}^n)$.
You will recognize that (iii) is the familiar characterization of perfect equilibria. Condition (b) states formally that $\alpha_i$ is a best reply to each $\alpha_{-i}^n$.
I advise you to try and reconstruct the proof of this result from your notes.
Properness
In an $\varepsilon$-perfect equilibrium, informally speaking, "right choices" are infinitely more likely than mistakes. However, mistakes can be more or less costly: some mistakes entail a larger loss of utility compared with a best reply.
Hence Myerson's idea: let us assume that more costly mistakes are infinitely less likely. We are led to
Definition 4 Fix a game $G = (N, (A_i, u_i)_{i \in N})$ and $\varepsilon > 0$. An $\varepsilon$-proper equilibrium of $G$ is a profile $\alpha$ such that, for all $i \in N$: (i) for every $a_i \in A_i$, $\alpha_i(a_i) > 0$; and (ii) for every pair $a_i, a'_i \in A_i$, $u_i(a_i, \alpha_{-i}) < u_i(a'_i, \alpha_{-i}) \Rightarrow \alpha_i(a_i) \leq \varepsilon\, \alpha_i(a'_i)$. A profile $\alpha$ is a proper equilibrium of $G$ iff there exist sequences $\alpha^n \to \alpha$ and $\varepsilon^n \to 0$ such that, for each $n$, $\alpha^n$ is an $\varepsilon^n$-proper equilibrium of $G$.
Clearly, every proper equilibrium is perfect, but not vice-versa.
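Condition (ii) of Definition 4 can also be checked mechanically. The sketch below (the game, the numbers, and the function names are my own) verifies it for one player: when one action does strictly worse than another, its probability must be at most $\varepsilon$ times the other's.

```python
from itertools import product

# Sketch of condition (ii) in Definition 4 for one player of a two-player
# game: costlier mistakes must be infinitely less likely, i.e. if a1 does
# strictly worse than b1 against sigma2, then sigma1(a1) <= eps * sigma1(b1).

def is_eps_proper_for_1(u1, sigma1, sigma2, eps):
    payoff = {a1: sum(p * u1[(a1, a2)] for a2, p in sigma2.items())
              for a1 in sigma1}
    return all(sigma1[a1] <= eps * sigma1[b1]
               for a1, b1 in product(sigma1, repeat=2)
               if payoff[a1] < payoff[b1])

# Made-up example with three ranked actions for player 1: T beats M beats B,
# so probabilities must fall by (at least) a factor eps at each step.
u1 = {("T", "L"): 3, ("M", "L"): 2, ("B", "L"): 1}
sigma2 = {"L": 1.0}
eps = 0.1
sigma1 = {"T": 0.902, "M": 0.09, "B": 0.008}  # roughly 1 : eps : eps^2
print(is_eps_proper_for_1(u1, sigma1, sigma2, eps))  # True
```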
The key result about proper equilibria is stated below:
Proposition 0.2 Let Γ be an extensive-form game and let G be its normal form. Then every proper equilibrium of G can be extended to a sequential equilibrium of Γ.
Again, you should try to reconstruct the proof of this result from your class notes.
Observe that, by construction, proper equilibria are invariant to the addition or deletion of actions which yield payoff vectors which can be duplicated by existing actions. Thus, proper equilibria of a normal-form game are also proper equilibria of its reduced normal form.
Here, then, is the tie-in with our preceding discussion of invariance: every proper equilibrium of a reduced normal-form game G induces payoff-equivalent sequential equilibria in every extensive game having G as its (reduced) normal form. We have identified a normal-form solution concept which exhibits "nice" properties in every "extensive-form presentation" of a given game. Not bad!
For those of you who are (still!) interested, let me point out that, once we start eliminating duplicate actions from a game, it is relatively natural to think about eliminating actions that can be duplicated by a mixture of other actions. Unfortunately, proper equilibrium does not survive the addition/deletion of such "mixed-duplicated" actions. Correspondingly, it is not always possible to find a sequential equilibrium of an extensive game which survives the addition/deletion of a mixed-duplicated action: Kohlberg and Mertens have a beautiful example of this in their 1986 paper on strategic stability.