Ch. 13 Difference Equations
1 First-Order Difference Equations
Suppose we are given a dynamic equation relating the value Y takes on at date t to another variable $W_t$ and to the value Y took in the previous period:
$$Y_t = \phi Y_{t-1} + W_t, \qquad (1)$$
where $\phi$ is a constant. Equation (1) is a linear first-order difference equation. A difference equation is an expression relating a variable $Y_t$ to its previous values. This is a first-order difference equation because only the first lag of the variable ($Y_{t-1}$) appears in the equation. Note that it expresses $Y_t$ as a linear function of $Y_{t-1}$ and $W_t$.
In Chapter 14 the input variable $W_t$ will be regarded as a random variable, and the implications of (1) for the statistical properties of the output variable $Y_t$ will be explored. In preparation for this discussion, it is necessary first to understand the mechanics of difference equations. For the discussion in Chapter 13, the values for the input variable $\{W_1, W_2, \ldots\}$ will simply be regarded as a sequence of deterministic numbers. Our goal is to answer the following question: if a dynamic system is described by (1), what are the effects on Y of changes in the value of W?
1.1 Solving a Difference Equation by Recursive Substitution
The presumption is that the dynamic equation (1) governs the behavior of Y for all dates t, that is,
$$Y_t = \phi Y_{t-1} + W_t, \qquad t \in \mathcal{T}.$$
We first consider the index set $\mathcal{T} = \{0, 1, 2, 3, \ldots\}$. By direct substitution,
$$\begin{aligned}
Y_t &= \phi Y_{t-1} + W_t \\
&= \phi(\phi Y_{t-2} + W_{t-1}) + W_t \\
&= \phi^2 Y_{t-2} + \phi W_{t-1} + W_t \\
&= \phi^2 (\phi Y_{t-3} + W_{t-2}) + \phi W_{t-1} + W_t \\
&= \cdots
\end{aligned}$$
Assume the value of Y for date $t = -1$ is known ($Y_{-1}$ here is an "initial value"); then we can express (1) by repeated substitution in the form
$$Y_t = \phi^{t+1} Y_{-1} + \phi^t W_0 + \phi^{t-1} W_1 + \phi^{t-2} W_2 + \cdots + \phi W_{t-1} + W_t. \qquad (2)$$
This procedure is known as solving the difference equation (1) by recursive substitution.
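As a quick numerical sanity check, the recursion and the closed form (2) can be compared directly. The sketch below (Python; the function names, the coefficient $\phi = 0.7$, and the random input sequence are our own illustrative choices, not part of the text):

```python
import numpy as np

def simulate(phi, w, y_init):
    """Iterate Y_t = phi*Y_{t-1} + W_t forward from the initial value Y_{-1}."""
    y = []
    prev = y_init
    for w_t in w:
        prev = phi * prev + w_t
        y.append(prev)
    return np.array(y)

def closed_form(phi, w, y_init, t):
    """Equation (2): Y_t = phi^{t+1} Y_{-1} + sum_{k=0}^{t} phi^{t-k} W_k."""
    return phi ** (t + 1) * y_init + sum(phi ** (t - k) * w[k] for k in range(t + 1))

rng = np.random.default_rng(0)
w = rng.normal(size=20)
y = simulate(0.7, w, y_init=1.0)
# The iterated solution and the closed form (2) agree at every date.
assert abs(y[10] - closed_form(0.7, w, 1.0, 10)) < 1e-12
```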
1.2 Dynamic Multipliers
Note that (2) expresses $Y_t$ as a linear function of the initial value $Y_{-1}$ and the historical values of W. This makes it very easy to calculate the effect of $W_0$ (say) on $Y_t$. If $W_0$ were to change with $Y_{-1}$ and $W_1, W_2, \ldots, W_t$ taken as unaffected (this is the reason we need the error term to be a white noise sequence in the ARMA models of subsequent chapters), the effect on $Y_t$ would be given by
$$\frac{\partial Y_t}{\partial W_0} = \phi^t \quad \text{(backward)}.$$
Note that the calculation would be exactly the same if the dynamic simulation were started at date t (taking $Y_{t-1}$ as given); then $Y_{t+j}$ can be described as a function of $Y_{t-1}$ and $W_t, W_{t+1}, \ldots, W_{t+j}$:
$$Y_{t+j} = \phi^{j+1} Y_{t-1} + \phi^j W_t + \phi^{j-1} W_{t+1} + \phi^{j-2} W_{t+2} + \cdots + \phi W_{t+j-1} + W_{t+j}. \qquad (3)$$
The effect of $W_t$ on $Y_{t+j}$ is given by
$$\frac{\partial Y_{t+j}}{\partial W_t} = \phi^j \quad \text{(forward)}. \qquad (4)$$
Thus the dynamic multiplier (also referred to as the impulse-response function) (4) depends only on j, the length of time separating the disturbance to the input variable $W_t$ and the observed value of the output $Y_{t+j}$. The multiplier does not depend on t; that is, it does not depend on the dates of the observations themselves. This is true for any difference equation.
Different values of $\phi$ in (1) can produce a variety of dynamic responses of Y to W. If $0 < \phi < 1$, the multiplier $\partial Y_{t+j}/\partial W_t$ in (4) decays geometrically toward zero. If $-1 < \phi < 0$, the absolute value of the multiplier $\partial Y_{t+j}/\partial W_t$ in (4) also decays geometrically toward zero, with the sign alternating. If $\phi > 1$, the dynamic multiplier increases exponentially over time, and if $\phi < -1$, the multiplier exhibits explosive oscillations.
Thus, if $|\phi| < 1$, the system is stable; the consequences of a given change in $W_t$ will eventually die out. If $|\phi| > 1$, the system is explosive. An interesting possibility is the borderline case $|\phi| = 1$. When $\phi = 1$, the solution (3) becomes
$$Y_{t+j} = Y_{t-1} + W_t + W_{t+1} + W_{t+2} + \cdots + W_{t+j-1} + W_{t+j}.$$
Here the output variable Y is the sum of the historical inputs W. A one-unit increase in W will cause a permanent one-unit increase in Y:
$$\frac{\partial Y_{t+j}}{\partial W_t} = 1 \quad \text{for } j = 0, 1, \ldots \quad \text{(unit root)}.$$
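The three regimes can be illustrated with a minimal numerical sketch (Python; the function name and the particular values of $\phi$ are our own illustrative choices):

```python
# Dynamic multiplier of Y_t = phi*Y_{t-1} + W_t: dY_{t+j}/dW_t = phi**j, from (4).
def multiplier(phi, j):
    return phi ** j

# |phi| < 1: stable, the effect of a shock eventually dies out.
assert abs(multiplier(0.8, 40)) < 1e-3
# phi = 1: unit root, a shock raises Y permanently by one unit.
assert multiplier(1.0, 40) == 1.0
# |phi| > 1: explosive.
assert multiplier(1.1, 100) > 1e4
```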
2 pth-Order Difference Equations
Let us now generalize the dynamic system (1) by allowing the value of Y at date t to depend on p of its own lags along with the current value of the input variable $W_t$:
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + W_t, \qquad t \in \mathcal{T}. \qquad (5)$$
Equation (5) is a linear pth-order difference equation.
It is often convenient to rewrite the pth-order difference equation (5) in the scalar $Y_t$ as a first-order difference equation in a vector $\xi_t$. Define the $(p \times 1)$ vector $\xi_t$ by
$$\xi_t \equiv \begin{bmatrix} Y_t \\ Y_{t-1} \\ Y_{t-2} \\ \vdots \\ Y_{t-p+1} \end{bmatrix},$$
the $(p \times p)$ matrix F by
$$F \equiv \begin{bmatrix}
\phi_1 & \phi_2 & \phi_3 & \cdots & \phi_{p-1} & \phi_p \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix},$$
and the $(p \times 1)$ vector $v_t$ by
$$v_t \equiv \begin{bmatrix} W_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$
Consider the following first-order vector difference equation:
$$\xi_t = F \xi_{t-1} + v_t, \qquad (6)$$
or
$$\begin{bmatrix} Y_t \\ Y_{t-1} \\ Y_{t-2} \\ \vdots \\ Y_{t-p+1} \end{bmatrix}
= \begin{bmatrix}
\phi_1 & \phi_2 & \phi_3 & \cdots & \phi_{p-1} & \phi_p \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} Y_{t-1} \\ Y_{t-2} \\ Y_{t-3} \\ \vdots \\ Y_{t-p} \end{bmatrix}
+ \begin{bmatrix} W_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$
This is a system of p equations. The first equation in this system is identical to equation (5). Each of the remaining $p - 1$ equations is simply the identity
$$Y_{t-j} = Y_{t-j}, \qquad j = 1, 2, \ldots, p-1.$$
Thus, the first-order vector system (6) is simply an alternative representation of the pth-order scalar system (5). The advantage of rewriting the pth-order system (5) in the form of a first-order system (6) is that first-order systems are often easier to work with than pth-order systems.
A dynamic multiplier for (5) can be found in exactly the same way as was done for the first-order scalar system of Section 1. If we knew the value of $\xi_{-1}$, then proceeding recursively in this fashion as in the scalar first-order difference equation produces a generalization of (2):
$$\xi_t = F^{t+1} \xi_{-1} + F^t v_0 + F^{t-1} v_1 + F^{t-2} v_2 + \cdots + F v_{t-1} + v_t. \qquad (7)$$
Writing this out in terms of the definitions of $\xi_t$ and $v_t$,
$$\begin{bmatrix} Y_t \\ Y_{t-1} \\ Y_{t-2} \\ \vdots \\ Y_{t-p+1} \end{bmatrix}
= F^{t+1} \begin{bmatrix} Y_{-1} \\ Y_{-2} \\ Y_{-3} \\ \vdots \\ Y_{-p} \end{bmatrix}
+ F^t \begin{bmatrix} W_0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ F^{t-1} \begin{bmatrix} W_1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ \cdots
+ F^1 \begin{bmatrix} W_{t-1} \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
+ \begin{bmatrix} W_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \qquad (8)$$
Consider the first equation of this system, which characterizes the value of $Y_t$. Let $f^{(t)}_{11}$ denote the (1,1) element of $F^t$, $f^{(t)}_{12}$ the (1,2) element of $F^t$, and so on. Then the first equation of (8) states that
$$Y_t = f^{(t+1)}_{11} Y_{-1} + f^{(t+1)}_{12} Y_{-2} + \cdots + f^{(t+1)}_{1p} Y_{-p} + f^{(t)}_{11} W_0 + f^{(t-1)}_{11} W_1 + \cdots + f^{(1)}_{11} W_{t-1} + W_t. \qquad (9)$$
This describes the value of Y at date t as a linear function of p initial values of Y ($Y_{-1}, Y_{-2}, \ldots, Y_{-p}$) and the history of the input variable W since date 0 ($W_0, W_1, \ldots, W_t$). Note that whereas only one initial value for Y was needed in the case of a first-order difference equation, p initial values for Y are needed in the case of a pth-order difference equation.
The obvious generalization of (3) is
$$\xi_{t+j} = F^{j+1} \xi_{t-1} + F^j v_t + F^{j-1} v_{t+1} + F^{j-2} v_{t+2} + \cdots + F v_{t+j-1} + v_{t+j}, \qquad (10)$$
from which
$$Y_{t+j} = f^{(j+1)}_{11} Y_{t-1} + f^{(j+1)}_{12} Y_{t-2} + \cdots + f^{(j+1)}_{1p} Y_{t-p} + f^{(j)}_{11} W_t + f^{(j-1)}_{11} W_{t+1} + \cdots + f^{(1)}_{11} W_{t+j-1} + W_{t+j}. \qquad (11)$$
Thus, for a pth-order difference equation, the dynamic multiplier is given by
$$\frac{\partial Y_{t+j}}{\partial W_t} = f^{(j)}_{11},$$
where $f^{(j)}_{11}$ denotes the (1,1) element of $F^j$.
Example:
The (1,1) element of $F^1$ is $\phi_1$, and the (1,1) element of $F^2$ (the product of the first row of F, $[\phi_1, \phi_2, \ldots, \phi_p]$, with its first column, $[\phi_1, 1, 0, \ldots, 0]'$) is $\phi_1^2 + \phi_2$. Thus,
$$\frac{\partial Y_{t+1}}{\partial W_t} = \phi_1 \quad \text{and} \quad \frac{\partial Y_{t+2}}{\partial W_t} = \phi_1^2 + \phi_2$$
in a pth-order system.
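The claims above are easy to verify numerically. A short sketch (Python; the helper name and the coefficients are illustrative assumptions of ours):

```python
import numpy as np

def companion(phis):
    """Companion matrix F for Y_t = phi_1 Y_{t-1} + ... + phi_p Y_{t-p} + W_t."""
    p = len(phis)
    F = np.zeros((p, p))
    F[0, :] = phis          # first row carries the coefficients phi_1, ..., phi_p
    F[1:, :-1] = np.eye(p - 1)   # identity block shifts the lags down
    return F

phis = [0.6, 0.2, -0.1]     # illustrative coefficients, p = 3
F = companion(phis)
# Dynamic multipliers: dY_{t+j}/dW_t is the (1,1) element of F^j.
assert np.isclose(np.linalg.matrix_power(F, 1)[0, 0], phis[0])
assert np.isclose(np.linalg.matrix_power(F, 2)[0, 0], phis[0]**2 + phis[1])
```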
For larger values of j, an easy way to obtain a numerical value for the dynamic multiplier $\partial Y_{t+j}/\partial W_t$ is in terms of the eigenvalues of the matrix F. Recall that the eigenvalues of a matrix F are those numbers $\lambda$ for which
$$|F - \lambda I_p| = 0. \qquad (12)$$
For example, for $p = 2$ the eigenvalues are the solutions to
$$\begin{vmatrix} \phi_1 - \lambda & \phi_2 \\ 1 & -\lambda \end{vmatrix} = 0$$
or
$$(\phi_1 - \lambda)(-\lambda) - \phi_2 = \lambda^2 - \phi_1 \lambda - \phi_2 = 0. \qquad (13)$$
For a general pth-order system, the determinant in (12) is a pth-order polynomial in $\lambda$ whose p solutions characterize the p eigenvalues of F. This polynomial turns out to take a form very similar to (13).
Proposition 1:
The eigenvalues of the matrix F in equation (12) are the values of $\lambda$ that satisfy
$$\lambda^p - \phi_1 \lambda^{p-1} - \phi_2 \lambda^{p-2} - \cdots - \phi_{p-1} \lambda - \phi_p = 0.$$
2.1 General Solution of a pth-Order Difference Equation with Distinct Eigenvalues
Recall that if the eigenvalues of a $(p \times p)$ matrix F are distinct, there exists a nonsingular $(p \times p)$ matrix T such that
$$F = T \Lambda T^{-1},$$
where $T = [x_1, x_2, \ldots, x_p]$, the $x_i$, $i = 1, 2, \ldots, p$, are the eigenvectors of F corresponding to its eigenvalues $\lambda_i$, and $\Lambda$ is the $(p \times p)$ diagonal matrix
$$\Lambda = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_p
\end{bmatrix}.$$
This enables us to characterize the dynamic multiplier (the (1,1) element of $F^j$) very easily. In general, we have
$$F^j = T \Lambda T^{-1} \, T \Lambda T^{-1} \cdots T \Lambda T^{-1} \qquad (14)$$
$$= T \Lambda^j T^{-1}, \qquad (15)$$
where
$$\Lambda^j = \begin{bmatrix}
\lambda_1^j & 0 & \cdots & 0 \\
0 & \lambda_2^j & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_p^j
\end{bmatrix}.$$
Let $t_{ij}$ denote the row i, column j element of T and let $t^{ij}$ denote the row i, column j element of $T^{-1}$. Equation (15) written out explicitly becomes
$$F^j = \begin{bmatrix}
t_{11} & t_{12} & \cdots & t_{1p} \\
t_{21} & t_{22} & \cdots & t_{2p} \\
\vdots & \vdots & & \vdots \\
t_{p1} & t_{p2} & \cdots & t_{pp}
\end{bmatrix}
\begin{bmatrix}
\lambda_1^j & 0 & \cdots & 0 \\
0 & \lambda_2^j & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_p^j
\end{bmatrix}
\begin{bmatrix}
t^{11} & t^{12} & \cdots & t^{1p} \\
t^{21} & t^{22} & \cdots & t^{2p} \\
\vdots & \vdots & & \vdots \\
t^{p1} & t^{p2} & \cdots & t^{pp}
\end{bmatrix}$$
$$= \begin{bmatrix}
t_{11}\lambda_1^j & t_{12}\lambda_2^j & \cdots & t_{1p}\lambda_p^j \\
t_{21}\lambda_1^j & t_{22}\lambda_2^j & \cdots & t_{2p}\lambda_p^j \\
\vdots & \vdots & & \vdots \\
t_{p1}\lambda_1^j & t_{p2}\lambda_2^j & \cdots & t_{pp}\lambda_p^j
\end{bmatrix}
\begin{bmatrix}
t^{11} & t^{12} & \cdots & t^{1p} \\
t^{21} & t^{22} & \cdots & t^{2p} \\
\vdots & \vdots & & \vdots \\
t^{p1} & t^{p2} & \cdots & t^{pp}
\end{bmatrix},$$
from which the (1,1) element of $F^j$ is given by
$$f^{(j)}_{11} = c_1 \lambda_1^j + c_2 \lambda_2^j + \cdots + c_p \lambda_p^j,$$
where
$$c_i = t_{1i} t^{i1}$$
and
$$c_1 + c_2 + \cdots + c_p = t_{11}t^{11} + t_{12}t^{21} + \cdots + t_{1p}t^{p1} = 1.$$
Therefore the dynamic multiplier of a pth-order difference equation is
$$\frac{\partial Y_{t+j}}{\partial W_t} = f^{(j)}_{11} = c_1 \lambda_1^j + c_2 \lambda_2^j + \cdots + c_p \lambda_p^j;$$
that is, the dynamic multiplier is a weighted average of each of the p eigenvalues raised to the jth power.
The following result provides a closed-form expression for the constants $c_1, c_2, \ldots, c_p$.
Proposition 2:
If the eigenvalues $(\lambda_1, \lambda_2, \ldots, \lambda_p)$ of the matrix F are distinct, then the magnitude $c_i$ can be written as
$$c_i = \frac{\lambda_i^{p-1}}{\prod_{k=1, k \neq i}^{p} (\lambda_i - \lambda_k)}.$$
Example:
In the case $p = 2$, we have
$$c_1 = \frac{\lambda_1^{2-1}}{\lambda_1 - \lambda_2} = \frac{\lambda_1}{\lambda_1 - \lambda_2}, \qquad
c_2 = \frac{\lambda_2^{2-1}}{\lambda_2 - \lambda_1} = \frac{\lambda_2}{\lambda_2 - \lambda_1}.$$
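Proposition 2 can be checked numerically against a direct computation of $F^j$. A sketch (Python; the coefficients $\phi_1 = 0.6$, $\phi_2 = 0.2$ are our own illustrative choice):

```python
import numpy as np

# Verify that the (1,1) element of F^j equals c_1*lam_1^j + c_2*lam_2^j,
# with the weights c_i given by Proposition 2, for a p = 2 example.
phi1, phi2 = 0.6, 0.2
F = np.array([[phi1, phi2],
              [1.0,  0.0]])
lam = np.linalg.eigvals(F)
# Proposition 2: c_i = lam_i^{p-1} / prod_{k != i} (lam_i - lam_k), here p = 2.
c = [lam[i] / np.prod([lam[i] - lam[k] for k in range(2) if k != i])
     for i in range(2)]
assert np.isclose(sum(c), 1.0)               # the weights sum to one
for j in range(1, 8):
    direct = np.linalg.matrix_power(F, j)[0, 0]
    via_eigs = sum(c[i] * lam[i]**j for i in range(2))
    assert np.isclose(direct, via_eigs.real)
```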
2.1.1 Real Roots
Suppose first that all the eigenvalues of F are real and all these real eigenvalues are less than one in absolute value. Then the system is stable, and its dynamics are represented as a weighted average of terms that decay exponentially, possibly oscillating in sign.
Example:
Consider the following second-order difference equation:
$$Y_t = 0.6 Y_{t-1} + 0.2 Y_{t-2} + W_t.$$
The eigenvalues are the solutions of the polynomial
$$\lambda^2 - 0.6\lambda - 0.2 = 0,$$
which are
$$\lambda_1 = \frac{0.6 + \sqrt{(0.6)^2 + 4(0.2)}}{2} = 0.84, \qquad
\lambda_2 = \frac{0.6 - \sqrt{(0.6)^2 + 4(0.2)}}{2} = -0.24.$$
The dynamic multiplier for this system,
$$\frac{\partial Y_{t+j}}{\partial W_t} = c_1 \lambda_1^j + c_2 \lambda_2^j = c_1 (0.84)^j + c_2 (-0.24)^j,$$
is geometrically decaying and is plotted as a function of j in panel (a) of Hamilton, p. 15. Note that as j becomes larger, the pattern is dominated by the larger eigenvalue ($\lambda_1$), approximating a simple geometric decay at rate $\lambda_1$.
If the eigenvalues are all real but at least one is greater than one in absolute value, the system is explosive. If $\lambda_1$ denotes the eigenvalue that is largest in absolute value, the dynamic multiplier is eventually dominated by an exponential function of that eigenvalue:
$$\lim_{j \to \infty} \frac{\partial Y_{t+j}}{\partial W_t} \cdot \frac{1}{\lambda_1^j} = c_1.$$
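The stable example above can be checked numerically, including the claim that for large j the multiplier decays approximately at rate $\lambda_1$. A sketch (Python; the dates j = 20, 21 are an arbitrary illustrative choice):

```python
import numpy as np

# Eigenvalues of the companion matrix for Y_t = 0.6 Y_{t-1} + 0.2 Y_{t-2} + W_t.
F = np.array([[0.6, 0.2],
              [1.0, 0.0]])
lam = sorted(np.linalg.eigvals(F), reverse=True)
assert np.isclose(lam[0], (0.6 + np.sqrt(0.36 + 0.8)) / 2)   # about 0.84
assert np.isclose(lam[1], (0.6 - np.sqrt(0.36 + 0.8)) / 2)   # about -0.24
# For large j the multiplier f^(j)_11 decays roughly at rate lam[0]:
f20 = np.linalg.matrix_power(F, 20)[0, 0]
f21 = np.linalg.matrix_power(F, 21)[0, 0]
assert abs(f21 / f20 - lam[0]) < 1e-3
```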
2.1.2 Complex Roots
It is possible that the eigenvalues of F are complex (since F is not symmetric; for a symmetric matrix, the eigenvalues are all real). Whenever this is the case, they appear as complex conjugates. For example, if $p = 2$ and $\phi_1^2 + 4\phi_2 < 0$, then the solutions $\lambda_1$ and $\lambda_2$ are complex conjugates. Suppose that $\lambda_1$ and $\lambda_2$ are complex conjugates, written as
$$\lambda_1 = a + bi, \qquad \lambda_2 = a - bi.$$
By rewriting the definitions of the sine and cosine functions we have
$$a = R \cos(\theta) \quad \text{and} \quad b = R \sin(\theta),$$
where for given a and b the angle $\theta$ and R are defined by
$$R = \sqrt{a^2 + b^2}, \qquad \cos(\theta) = a/R, \qquad \sin(\theta) = b/R.$$
Therefore, we have
$$\lambda_1 = R[\cos\theta + i \sin\theta], \qquad \lambda_2 = R[\cos\theta - i \sin\theta].$$
By the Euler relations (see, for example, Chiang, A.C. (1984), p. 520) we further have
$$\lambda_1 = R[\cos\theta + i \sin\theta] = R e^{i\theta}, \qquad \lambda_2 = R[\cos\theta - i \sin\theta] = R e^{-i\theta},$$
and when they are raised to the jth power,
$$\lambda_1^j = R^j[\cos(\theta j) + i \sin(\theta j)] = R^j e^{i\theta j}, \qquad
\lambda_2^j = R^j[\cos(\theta j) - i \sin(\theta j)] = R^j e^{-i\theta j}.$$
The contribution of the complex conjugates to the dynamic multiplier $\partial Y_{t+j}/\partial W_t$ is
$$c_1 \lambda_1^j + c_2 \lambda_2^j = c_1 R^j[\cos(\theta j) + i \sin(\theta j)] + c_2 R^j[\cos(\theta j) - i \sin(\theta j)]$$
$$= (c_1 + c_2) R^j \cos(\theta j) + i(c_1 - c_2) R^j \sin(\theta j).$$
From Proposition 2 we know that if $\lambda_1$ and $\lambda_2$ are complex conjugates, then $c_1$ and $c_2$ are also complex conjugates; that is, they can be written as
$$c_1 = \alpha + \beta i, \qquad c_2 = \alpha - \beta i,$$
for some real numbers $\alpha$ and $\beta$. Therefore, the dynamic multiplier $\partial Y_{t+j}/\partial W_t$ can further be expressed as
$$c_1 \lambda_1^j + c_2 \lambda_2^j = [(\alpha + \beta i) + (\alpha - \beta i)] R^j \cos(\theta j) + i[(\alpha + \beta i) - (\alpha - \beta i)] R^j \sin(\theta j)$$
$$= 2\alpha R^j \cos(\theta j) + i(2\beta i) R^j \sin(\theta j)$$
$$= 2\alpha R^j \cos(\theta j) - 2\beta R^j \sin(\theta j).$$
Thus, when some of the (distinct) eigenvalues are complex:
1. If $R = 1$, that is, the complex eigenvalues have unit modulus, the multipliers are periodic sine and cosine functions of j;
2. If $R < 1$, that is, the complex eigenvalues are less than one in modulus, the impulse again follows a sinusoidal pattern, though its amplitude decays at the rate $R^j$;
3. If $R > 1$, that is, the complex eigenvalues are greater than one in modulus, the amplitude of the sinusoids explodes at the rate $R^j$.
Example:
Consider the following second-order difference equation:
$$Y_t = 0.5 Y_{t-1} - 0.8 Y_{t-2} + W_t.$$
The eigenvalues are the solutions of the polynomial
$$\lambda^2 - 0.5\lambda + 0.8 = 0,$$
which are
$$\lambda_1 = \frac{0.5 + \sqrt{(0.5)^2 - 4(0.8)}}{2} = 0.25 + 0.86i, \qquad
\lambda_2 = \frac{0.5 - \sqrt{(0.5)^2 - 4(0.8)}}{2} = 0.25 - 0.86i,$$
with modulus
$$R = \sqrt{(0.25)^2 + (0.86)^2} \approx 0.9.$$
Since $R < 1$, the dynamic multiplier follows a pattern of damped oscillation, as plotted in panel (b) of Figure 1.4 of Hamilton, p. 15.
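The trigonometric form of the multiplier can be verified against a direct matrix-power computation for this example. A sketch (Python; the range of j checked is an arbitrary choice of ours):

```python
import cmath
import numpy as np

# Complex-conjugate eigenvalues for Y_t = 0.5 Y_{t-1} - 0.8 Y_{t-2} + W_t.
F = np.array([[0.5, -0.8],
              [1.0,  0.0]])
lam1, lam2 = np.linalg.eigvals(F)
R = abs(lam1)                          # modulus, roughly 0.89
theta = cmath.phase(lam1)
assert np.isclose(lam1, np.conj(lam2))
assert R < 1                           # damped oscillation
# Multiplier f^(j)_11 equals 2*alpha*R^j*cos(theta*j) - 2*beta*R^j*sin(theta*j),
# where c_1 = alpha + beta*i comes from Proposition 2 with p = 2.
c1 = lam1 / (lam1 - lam2)
alpha, beta = c1.real, c1.imag
for j in range(1, 10):
    direct = np.linalg.matrix_power(F, j)[0, 0]
    trig = 2 * alpha * R**j * np.cos(theta * j) - 2 * beta * R**j * np.sin(theta * j)
    assert np.isclose(direct, trig)
```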
2.2 General Solution of a pth-Order Difference Equation with Repeated Eigenvalues
When some eigenvalues of F are repeated, the diagonalization above is no longer available; the analogous characterization is obtained from the Jordan decomposition of F.
3 Lag Operators
3.1 Introduction
As we have defined, a stochastic process (or time series) is a sequence of random variables denoted by $\{X_t, t \in \mathcal{T}\}$. A time series operator transforms one time series into a new time series. It accepts as input a sequence such as $\{X_t, t \in \mathcal{T}\}$ and has as output a new sequence $\{Y_t, t \in \mathcal{T}\}$.
An example of a time series operator is the multiplication operator, represented as
$$Y_t = \beta X_t. \qquad (16)$$
Although it is written exactly the same way as simple scalar multiplication, equation (16) is actually shorthand for an infinite sequence of multiplications, one for each date t. The operator multiplies whatever value x the random variable X takes on at any date t by some constant $\beta$ to generate the value y for that date. Therefore, it is important to keep in mind that equation (16) is better read as
$$\{Y_t = \beta X_t, \; t \in \mathcal{T}\}. \qquad (17)$$
A highly useful operator is the lag operator. Suppose that we start with a time series $\{X_t, t \in \mathcal{T}\}$ and generate a new sequence $\{Y_t, t \in \mathcal{T}\}$, where the value of y for date t is equal to the value x took on at date $t - 1$:
$$y_t = x_{t-1}.$$
This is described as applying the lag operator to $\{X_t\}$. The operator is represented by the symbol L:
$$Y_t = X_{t-1} = L X_t.$$
Consider the result of applying the lag operator twice to a series:
$$L(L X_t) = L(X_{t-1}) = X_{t-2}.$$
Such a double application of the lag operator is indicated by "$L^2$":
$$L^2 X_t = X_{t-2}.$$
In general, for any integer k,
$$L^k X_t = X_{t-k}.$$
Notice that if we first apply the multiplication operator and then the lag operator, as in
$$X_t \to \beta X_t \to \beta X_{t-1},$$
the result will be exactly the same as if we had applied the lag operator first and then the multiplication operator:
$$X_t \to X_{t-1} \to \beta X_{t-1}.$$
Thus the lag operator and multiplication operator are commutative:
$$L(\beta X_t) = \beta L X_t.$$
Similarly, if we first add two series and then apply the lag operator to the result,
$$(X_t, W_t) \to X_t + W_t \to X_{t-1} + W_{t-1},$$
the result is the same as if we had applied the lag operator before adding:
$$(X_t, W_t) \to (X_{t-1}, W_{t-1}) \to X_{t-1} + W_{t-1}.$$
Thus, the lag operator is distributive over the addition operator:
$$L(X_t + W_t) = L X_t + L W_t.$$
We thus see that the lag operator follows exactly the same algebraic rules as the multiplication operator. For this reason, it is tempting to use the expression "multiply $Y_t$ by L" rather than "operate on $\{Y_t, t \in \mathcal{T}\}$ by L". Faced with a time series defined in terms of compound operators, we are free to use the standard commutative, associative, and distributive algebraic laws for multiplication and addition to express the compound operator in an alternative form. For example, the process defined by
$$Y_t = (a + bL) L X_t$$
is exactly the same as
$$Y_t = (aL + bL^2) X_t = a X_{t-1} + b X_{t-2}.$$
To take another example,
$$\begin{aligned}
(1 - \lambda_1 L)(1 - \lambda_2 L) X_t &= (1 - \lambda_1 L - \lambda_2 L + \lambda_1 \lambda_2 L^2) X_t \\
&= (1 - [\lambda_1 + \lambda_2] L + \lambda_1 \lambda_2 L^2) X_t \\
&= X_t - (\lambda_1 + \lambda_2) X_{t-1} + (\lambda_1 \lambda_2) X_{t-2}.
\end{aligned}$$
An expression such as $(aL + bL^2)$ is referred to as a polynomial in the lag operator. It is algebraically similar to a simple polynomial $(az + bz^2)$ where z is a scalar. The difference is that the simple polynomial $(az + bz^2)$ refers to a particular number, whereas a polynomial in the lag operator $(aL + bL^2)$ refers to an operator that would be applied to one time series $\{X_t, t \in \mathcal{T}\}$ to produce a new time series $\{Y_t, t \in \mathcal{T}\}$.
3.2 Solving First-Order Difference Equations by Lag Operators
Let us now return to the first-order difference equation of Section 1:
$$Y_t = \phi Y_{t-1} + W_t,$$
which can be rewritten using the lag operator as
$$Y_t = \phi L Y_t + W_t.$$
This equation, in turn, can be rearranged using standard algebra:
$$Y_t - \phi L Y_t = W_t$$
or
$$(1 - \phi L) Y_t = W_t. \qquad (18)$$
3.2.1 The Case $\mathcal{T} = \{0, 1, 2, \ldots\}$
We first consider "multiplying" both sides of (18) by the following operator:
$$(1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t);$$
the result would be
$$(1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t)(1 - \phi L) Y_t = (1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t) W_t$$
or
$$(1 - \phi^{t+1} L^{t+1}) Y_t = (1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t) W_t. \qquad (19)$$
Writing (19) out explicitly produces
$$Y_t - \phi^{t+1} Y_{t-(t+1)} = W_t + \phi W_{t-1} + \phi^2 W_{t-2} + \cdots + \phi^t W_{t-t}$$
or
$$Y_t = \phi^{t+1} Y_{-1} + W_t + \phi W_{t-1} + \phi^2 W_{t-2} + \cdots + \phi^t W_0. \qquad (20)$$
Notice that equation (20) is identical to equation (2). Applying the lag operator performs the same set of recursive substitutions that were employed in the previous section.
3.2.2 The Case $\mathcal{T} = \{\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots\}$
It is interesting to reflect on the nature of the operator as t becomes large. We saw that
$$(1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t)(1 - \phi L) Y_t = Y_t - \phi^{t+1} Y_{-1}.$$
That is, $(1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t)(1 - \phi L) Y_t$ differs from $Y_t$ by the term $\phi^{t+1} Y_{-1}$. If $|\phi| < 1$ and if $Y_{-1}$ is a finite number, this residual $\phi^{t+1} Y_{-1}$ will become negligible as t becomes large:
$$(1 + \phi L + \phi^2 L^2 + \cdots + \phi^t L^t)(1 - \phi L) Y_t \approx Y_t \quad \text{for } t \text{ large}.$$
Definition:
A sequence $\{X_t, t \in \mathcal{T} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}\}$ is said to be bounded if there exists a number $\bar{X}$ such that
$$|X_t| < \bar{X} \quad \text{for all } t.$$
Thus, when $|\phi| < 1$ and when we are considering applying an operator to a bounded sequence, we can think of
$$(1 + \phi L + \phi^2 L^2 + \cdots + \phi^j L^j)$$
as approximating the inverse of the operator $(1 - \phi L)$, with this approximation made arbitrarily accurate by choosing j sufficiently large:
$$(1 - \phi L)^{-1} = \lim_{j \to \infty} (1 + \phi L + \phi^2 L^2 + \cdots + \phi^j L^j).$$
This operator $(1 - \phi L)^{-1}$ has the property
$$(1 - \phi L)^{-1} (1 - \phi L) = 1,$$
where "1" denotes the identity operator:
$$1 \cdot Y_t = Y_t.$$
Provided that $|\phi| < 1$ and we restrict ourselves to bounded sequences, we have
$$(1 - \phi L)^{-1} (1 - \phi L) Y_t = (1 - \phi L)^{-1} W_t$$
or
$$Y_t = (1 - \phi L)^{-1} W_t,$$
that is,
$$Y_t = W_t + \phi W_{t-1} + \phi^2 W_{t-2} + \phi^3 W_{t-3} + \cdots.$$
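The truncated geometric-series inverse can be checked against a direct forward iteration. A sketch (Python; the value $\phi = 0.7$, the truncation length, and the zero initial condition are illustrative assumptions of ours):

```python
import numpy as np

# For |phi| < 1, the truncated operator 1 + phi*L + ... + phi^j L^j applied
# to W approximates the solution of (1 - phi*L) Y_t = W_t.
phi = 0.7
rng = np.random.default_rng(1)
w = rng.normal(size=500)              # a bounded input sequence

# Exact solution by forward iteration, starting from Y_{-1} = 0.
y = np.zeros_like(w)
y[0] = w[0]
for t in range(1, len(w)):
    y[t] = phi * y[t - 1] + w[t]

# Truncated geometric-series inverse evaluated at a late date.
t, j = 499, 60
approx = sum(phi**k * w[t - k] for k in range(j + 1))
assert abs(y[t] - approx) < 1e-8      # truncation error is negligible
```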
3.3 Solving Second-Order Difference Equations by Lag Operators
Consider next a second-order difference equation:
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + W_t.$$
Rewriting this in lag operator form produces
$$(1 - \phi_1 L - \phi_2 L^2) Y_t = W_t. \qquad (21)$$
The left side of (21) contains a second-order polynomial in the lag operator L. Suppose we factor this polynomial and find its roots, that is, find numbers $\lambda_1$ and $\lambda_2$ such that
$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L) = 0;$$
we obtain $L_1 = \lambda_1^{-1}$ and $L_2 = \lambda_2^{-1}$. Substituting both roots back into the equation, we get the identities
$$1 - \phi_1 \lambda_1^{-1} - \phi_2 \lambda_1^{-2} = 0$$
and
$$1 - \phi_1 \lambda_2^{-1} - \phi_2 \lambda_2^{-2} = 0.$$
Equivalently,
$$\lambda_1^2 - \phi_1 \lambda_1 - \phi_2 = 0$$
and
$$\lambda_2^2 - \phi_1 \lambda_2 - \phi_2 = 0;$$
these two equations involve the same calculation as finding the eigenvalues of F in Proposition 1 (that is, $\lambda_i$, $i = 1, 2$, are the eigenvalues). This finding is summarized in the following proposition.
Proposition 3:
Factoring the polynomial
$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L)$$
is the same calculation as finding the eigenvalues of the matrix F in (12). The eigenvalues $\lambda_1$ and $\lambda_2$ of F are the same as the parameters $\lambda_1$ and $\lambda_2$ in (13).
There is one source of possible semantic confusion about which we have to be careful. Recall from Section 1 that the system is stable if both $\lambda_1$ and $\lambda_2$ are less than 1 in modulus and explosive if either $\lambda_1$ or $\lambda_2$ is greater than 1 in modulus. Sometimes this is described as the requirement that the roots of
$$\lambda^2 - \phi_1 \lambda - \phi_2 = 0 \qquad (22)$$
lie inside the unit circle.
The possible confusion is that it is often convenient to work directly with the polynomial in the lag operator, in which case the roots appear in
$$1 - \phi_1 L - \phi_2 L^2 = 0, \qquad (23)$$
whose roots are $L = \lambda^{-1}$.
Thus, we could say with equal accuracy that the difference equation is stable whenever the roots of (22) lie inside the unit circle, or that the difference equation is stable whenever the roots of (23) lie outside the unit circle. The two statements mean exactly the same thing. These notes will follow the convention of using the term "eigenvalues" to refer to the roots of (22). Whenever the term "roots" is used, we will indicate explicitly the equation whose roots are being described.
From now on in this section, it is assumed that the second-order difference equation is stable, with the eigenvalues $\lambda_1$ and $\lambda_2$ distinct and both inside the unit circle. When this is the case, the inverses
$$(1 - \lambda_1 L)^{-1} = 1 + \lambda_1 L + \lambda_1^2 L^2 + \lambda_1^3 L^3 + \cdots$$
$$(1 - \lambda_2 L)^{-1} = 1 + \lambda_2 L + \lambda_2^2 L^2 + \lambda_2^3 L^3 + \cdots$$
are well defined for bounded sequences. Write the second-order difference equation in factored form:
$$(1 - \lambda_1 L)(1 - \lambda_2 L) Y_t = W_t$$
and operate on both sides by $(1 - \lambda_1 L)^{-1} (1 - \lambda_2 L)^{-1}$:
$$Y_t = (1 - \lambda_1 L)^{-1} (1 - \lambda_2 L)^{-1} W_t. \qquad (24)$$
Notice that an alternative way of writing this operator is
$$(\lambda_1 - \lambda_2)^{-1} \left[ \frac{\lambda_1}{1 - \lambda_1 L} - \frac{\lambda_2}{1 - \lambda_2 L} \right]
= (\lambda_1 - \lambda_2)^{-1} \, \frac{\lambda_1 (1 - \lambda_2 L) - \lambda_2 (1 - \lambda_1 L)}{(1 - \lambda_1 L)(1 - \lambda_2 L)}
= \frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)}.$$
Thus, equation (24) can be written as
$$Y_t = (\lambda_1 - \lambda_2)^{-1} \left[ \frac{\lambda_1}{1 - \lambda_1 L} - \frac{\lambda_2}{1 - \lambda_2 L} \right] W_t$$
$$= \left\{ \frac{\lambda_1}{\lambda_1 - \lambda_2} [1 + \lambda_1 L + \lambda_1^2 L^2 + \lambda_1^3 L^3 + \cdots]
- \frac{\lambda_2}{\lambda_1 - \lambda_2} [1 + \lambda_2 L + \lambda_2^2 L^2 + \lambda_2^3 L^3 + \cdots] \right\} W_t$$
or
$$Y_t = (c_1 + c_2) W_t + (c_1 \lambda_1 + c_2 \lambda_2) W_{t-1} + (c_1 \lambda_1^2 + c_2 \lambda_2^2) W_{t-2} + \cdots, \qquad (25)$$
where
$$c_1 = \lambda_1 / (\lambda_1 - \lambda_2)$$
$$c_2 = -\lambda_2 / (\lambda_1 - \lambda_2).$$
From (25) the dynamic multiplier can be read off directly as
$$\frac{\partial Y_{t+j}}{\partial W_t} = c_1 \lambda_1^j + c_2 \lambda_2^j,$$
the same result arrived at in previous sections.
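The moving-average weights in (25) can be checked against a direct simulation of the impulse response. A sketch (Python; the coefficients $\phi_1 = 0.6$, $\phi_2 = 0.2$ and the horizon are illustrative choices of ours):

```python
import numpy as np

# Check that the MA weights in (25), c1*lam1^j + c2*lam2^j, reproduce the
# impulse response of Y_t = phi1*Y_{t-1} + phi2*Y_{t-2} + W_t.
phi1, phi2 = 0.6, 0.2
lam1, lam2 = np.roots([1.0, -phi1, -phi2])   # roots of lam^2 - phi1*lam - phi2
c1 = lam1 / (lam1 - lam2)
c2 = -lam2 / (lam1 - lam2)

# Impulse response by direct simulation: W_0 = 1, all other W_t = 0.
T = 15
y = np.zeros(T)
for t in range(T):
    w_t = 1.0 if t == 0 else 0.0
    y[t] = (phi1 * (y[t-1] if t >= 1 else 0.0)
            + phi2 * (y[t-2] if t >= 2 else 0.0)
            + w_t)

for j in range(T):
    assert np.isclose(y[j], c1 * lam1**j + c2 * lam2**j)
```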
3.4 Solving pth-Order Difference Equations by Lag Operators
These techniques generalize in a straightforward way to a pth-order difference equation of the form
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + W_t,$$
which can be written in terms of the lag operator as
$$(1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p) Y_t = W_t.$$
Factoring the polynomial in the lag operator as
$$(1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p) = (1 - \lambda_1 L)(1 - \lambda_2 L) \cdots (1 - \lambda_p L),$$
we obtain the roots of this polynomial as $L_i = \lambda_i^{-1}$, $i = 1, 2, \ldots, p$. From the identity that holds for any root of the polynomial,
$$1 - \phi_1 \lambda_i^{-1} - \phi_2 \lambda_i^{-2} - \cdots - \phi_p \lambda_i^{-p} = 0$$
$$\Rightarrow \lambda_i^p - \phi_1 \lambda_i^{p-1} - \cdots - \phi_{p-1} \lambda_i - \phi_p = 0,$$
where the $\lambda_i$ are the eigenvalues of F as defined before. Thus, Proposition 3 readily generalizes.
Proposition 4:
Factoring the polynomial
$$(1 - \phi_1 L - \phi_2 L^2 - \cdots - \phi_p L^p) = (1 - \lambda_1 L)(1 - \lambda_2 L) \cdots (1 - \lambda_p L)$$
is the same calculation as finding the eigenvalues of the matrix F in (12). The eigenvalues $(\lambda_1, \lambda_2, \ldots, \lambda_p)$ of F are the same as the parameters $(\lambda_1, \lambda_2, \ldots, \lambda_p)$ in the factorization.
Assuming that the eigenvalues are inside the unit circle and that we restrict ourselves to bounded sequences, the inverses $(1 - \lambda_1 L)^{-1}, (1 - \lambda_2 L)^{-1}, \ldots, (1 - \lambda_p L)^{-1}$ all exist, permitting the difference equation
$$(1 - \lambda_1 L)(1 - \lambda_2 L) \cdots (1 - \lambda_p L) Y_t = W_t$$
to be written as
$$Y_t = (1 - \lambda_1 L)^{-1} (1 - \lambda_2 L)^{-1} \cdots (1 - \lambda_p L)^{-1} W_t.$$
Expanding the inverses by partial fractions as in the second-order case, the dynamic multiplier can be read off directly to be
$$\frac{\partial Y_{t+j}}{\partial W_t} = c_1 \lambda_1^j + c_2 \lambda_2^j + \cdots + c_p \lambda_p^j.$$
3.5 Unbounded Sequences
As we have shown, given a first-order difference equation in lag operator form,
$$(1 - \phi L) Y_t = W_t,$$
when $|\phi| < 1$ it is advisable to solve the equation "backward" using
$$(1 - \phi L)^{-1} = 1 + \phi L + \phi^2 L^2 + \cdots.$$
However, when $|\phi| > 1$, Sargent (1987) advises solving the equation "forward" using
$$(1 - \phi L)^{-1} = \frac{-\phi^{-1} L^{-1}}{1 - \phi^{-1} L^{-1}} = -\phi^{-1} L^{-1} (1 + \phi^{-1} L^{-1} + \phi^{-2} L^{-2} + \cdots),$$
where, as defined, $L^{-k} Y_t = Y_{t+k}$. In an economy with agents holding (rational) expectations, for example, the price at date t may depend on the (expected) future price at date $t + k$, $k > 0$.
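The forward solution can be verified numerically: with $|\phi| > 1$ and a bounded input, $Y_t = -\sum_{k \geq 1} \phi^{-k} W_{t+k}$ satisfies $(1 - \phi L) Y_t = W_t$. A sketch (Python; the value $\phi = 2$, the cosine input, and the truncation horizon are illustrative choices of ours):

```python
import numpy as np

# Sketch of the "forward" solution for |phi| > 1: with a bounded input,
# Y_t = -sum_{k>=1} phi^{-k} W_{t+k} solves (1 - phi*L) Y_t = W_t.
phi = 2.0
T = 200
w = np.cos(0.3 * np.arange(T))     # a bounded input sequence

def y_forward(t, horizon=80):
    """Truncated forward sum -sum_{k=1}^{horizon} phi^{-k} W_{t+k}."""
    return -sum(phi**(-k) * w[t + k] for k in range(1, horizon + 1))

# Check Y_t - phi*Y_{t-1} = W_t at an interior date (truncation error ~ phi^-80).
t = 50
assert abs(y_forward(t) - phi * y_forward(t - 1) - w[t]) < 1e-8
```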
Notation:
$Y_t$: a random variable,
$y_t$: the value the random variable $Y_t$ takes,
$\mathbf{y}_t$: a random vector,
$\mathbf{Y}_t$: a random matrix.