Chapter 6 Large Sample Inference and Prediction
6.1 Large sample inference
Consider the null hypothesis
\[
H_0 : R\beta = r,
\]
where $R$ is a $J \times K$ matrix. The Wald test statistic for this null hypothesis is
\[
W = (Rb - r)' \left[ \hat{\sigma}^2 R (X'X)^{-1} R' \right]^{-1} (Rb - r).
\]
As $n \to \infty$,
\[
W \xrightarrow{d} \chi^2(J).
\]
This result does NOT require a normality assumption. It follows from:
1. $\sqrt{n}(Rb - r) \xrightarrow{d} N(0, \sigma^2 R Q^{-1} R')$.
2. $\hat{\sigma}^2 \xrightarrow{p} \sigma^2$.
3. $\left( \dfrac{X'X}{n} \right)^{-1} \xrightarrow{p} Q^{-1}$.
Writing
\[
W = \sqrt{n}(Rb - r)' \left[ \hat{\sigma}^2 R \left( \frac{X'X}{n} \right)^{-1} R' \right]^{-1} \sqrt{n}(Rb - r)
\]
and applying these results, we have
\[
W \xrightarrow{d} N(0, \sigma^2 R Q^{-1} R')' \left[ \sigma^2 R Q^{-1} R' \right]^{-1} N(0, \sigma^2 R Q^{-1} R')
= N(0, I_J)' N(0, I_J) = \chi^2(J).
\]
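The Wald statistic above can be computed directly from an OLS fit. A minimal numpy sketch on simulated data (the design matrix, $R$, and $r$ below are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated regression y = X beta + eps with beta = (1, 2, 2)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, 2.0])
y = X @ beta + rng.normal(size=n)

# OLS estimate b and a consistent estimator of sigma^2
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
sigma2_hat = e @ e / n  # consistency is all the asymptotics require

# H0: R beta = r, here beta_1 - beta_2 = 0 (true under the simulation)
R = np.array([[0.0, 1.0, -1.0]])
r = np.array([0.0])

diff = R @ b - r
W = float(diff @ np.linalg.inv(sigma2_hat * R @ XtX_inv @ R.T) @ diff)
# Under H0, W is asymptotically chi-squared with J = 1 degree of freedom
```

Since the null is true in the simulation, $W$ should be an unremarkable draw from (approximately) $\chi^2(1)$.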
Consider
\[
H_0 : \beta_k = \beta_k^0.
\]
For this null hypothesis, we have
\[
t = \frac{b_k - \beta_k^0}{\sqrt{s^2 (X'X)^{-1}_{kk}}}.
\]
Writing
\[
t = \frac{\sqrt{n}\left( b_k - \beta_k^0 \right)}{\sqrt{s^2 \left( \frac{X'X}{n} \right)^{-1}_{kk}}},
\]
we find that
\[
t \xrightarrow{d} \frac{N\left(0, \sigma^2 Q^{-1}_{kk}\right)}{\sqrt{\sigma^2 Q^{-1}_{kk}}} = N(0,1).
\]
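The single-coefficient $t$ statistic is a one-liner once the OLS pieces are in hand. A sketch on simulated data (the design and the tested value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 400, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
s2 = e @ e / (n - K)  # usual unbiased residual variance

# Test H0: beta_k = 1 for k = 1 (true under the simulation)
k, beta0_k = 1, 1.0
t = float((b[k] - beta0_k) / np.sqrt(s2 * XtX_inv[k, k]))
# Asymptotically N(0,1) under H0, with or without normal errors
```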
CHAPTER 6 LARGE SAMPLE INFERENCE AND PREDICTION 2
6.2 Testing nonlinear restrictions
\[
H_0 : c(\beta) = q.
\]
Since
\[
c(\hat{\beta}) \approx c(\beta) + \left( \frac{\partial c(\beta)}{\partial \beta} \right)' \left( \hat{\beta} - \beta \right)
\]
and therefore
\[
\operatorname{Var}\left( c(\hat{\beta}) \right) \approx \left( \frac{\partial c(\beta)}{\partial \beta} \right)' \operatorname{Var}(\hat{\beta}) \left( \frac{\partial c(\beta)}{\partial \beta} \right),
\]
the test statistic we use is
\[
Z = \frac{c(\hat{\beta}) - q}
{\sqrt{\left. \left( \frac{\partial c(\beta)}{\partial \beta} \right)' \right|_{\beta=\hat{\beta}}
\widehat{\operatorname{Var}}(\hat{\beta})
\left. \left( \frac{\partial c(\beta)}{\partial \beta} \right) \right|_{\beta=\hat{\beta}}}}.
\]
As $n \to \infty$,
\[
Z \xrightarrow{d} N(0,1).
\]
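When the gradient $\partial c(\beta)/\partial \beta$ is tedious to derive, the delta-method standard error can be computed with a numerical gradient. A sketch (the function `c`, the estimates, and the covariance matrix below are illustrative, not from the text):

```python
import numpy as np

def delta_method_se(c, beta_hat, vcov, h=1e-6):
    """Standard error of c(beta_hat) via the delta method,
    using a central-difference approximation to the gradient."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    g = np.empty_like(beta_hat)
    for j in range(beta_hat.size):
        step = np.zeros_like(beta_hat)
        step[j] = h
        g[j] = (c(beta_hat + step) - c(beta_hat - step)) / (2 * h)
    return float(np.sqrt(g @ vcov @ g))

# Example: c(beta) = beta_1 / beta_2 with a hypothetical covariance matrix
beta_hat = np.array([2.0, 4.0])
vcov = np.array([[0.04, 0.01],
                 [0.01, 0.09]])
se = delta_method_se(lambda bb: bb[0] / bb[1], beta_hat, vcov)
z = (beta_hat[0] / beta_hat[1] - 0.5) / se  # Z for H0: beta_1/beta_2 = 0.5
```

For this ratio the analytic gradient is $(1/\beta_2,\, -\beta_1/\beta_2^2) = (0.25, -0.125)$, giving variance $0.25^2(0.04) - 2(0.25)(0.125)(0.01) + 0.125^2(0.09) \approx 0.00328$, which the numerical version reproduces.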
Example 1 (A Long-Run MPC). The estimated consumption function is
\[
\widehat{\ln C_t} = \underset{(0.01055)}{0.003142} + \underset{(0.02873)}{0.07495} \ln Y_t + \underset{(0.02859)}{0.9246} \ln C_{t-1},
\]
with $R^2 = 0.999712$, $s = 0.00874$, and standard errors in parentheses. The estimated asymptotic covariance of $b$ and $c$ is $\operatorname{Est.Asy.Cov}[b,c] = -0.0003298$. The hypothesis concerns the long-run marginal propensity to consume:
\[
H_0 : \delta = \frac{\beta}{1-\gamma} = 1.
\]
The estimate is
\[
d = \frac{b}{1-c} = \frac{0.07495}{1-0.9246} = 0.99403,
\]
with derivatives
\[
g_b = \frac{\partial d}{\partial b} = \frac{1}{1-c} = 13.2626, \qquad
g_c = \frac{\partial d}{\partial c} = \frac{b}{(1-c)^2} = 13.1834.
\]
The estimated asymptotic variance of $d$ is
\begin{align*}
& g_b^2\,\operatorname{Est.Asy.Var}[b] + g_c^2\,\operatorname{Est.Asy.Var}[c] + 2 g_b g_c\,\operatorname{Est.Asy.Cov}[b,c] \\
&\quad = 13.2626^2 \times 0.02873^2 + 13.1834^2 \times 0.02859^2 + 2(13.2626)(13.1834)(-0.0003298) \\
&\quad = 0.17192.
\end{align*}
Thus,
\[
Z = \frac{0.99403 - 1}{\sqrt{0.17192}} = -0.0144,
\]
so the null hypothesis cannot be rejected.
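The arithmetic in Example 1 can be checked directly:

```python
import numpy as np

# Estimates and standard errors from the fitted consumption function
b, c = 0.07495, 0.9246
se_b, se_c = 0.02873, 0.02859
cov_bc = -0.0003298

d = b / (1 - c)            # estimated long-run MPC
g_b = 1 / (1 - c)          # derivative of d with respect to b
g_c = b / (1 - c) ** 2     # derivative of d with respect to c

# Delta-method estimated asymptotic variance of d
var_d = (g_b ** 2 * se_b ** 2 + g_c ** 2 * se_c ** 2
         + 2 * g_b * g_c * cov_bc)
Z = (d - 1) / np.sqrt(var_d)  # about -0.0144: far from rejecting H0
```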
6.3 Prediction
Suppose that we want to predict
\[
y^0 = X^{0\prime} \beta + \varepsilon^0.
\]
The minimum variance linear unbiased estimator of $E(y^0|X^0) = X^{0\prime}\beta$ is
\[
\hat{y}^0 = X^{0\prime} b.
\]
The forecast error is
\[
e^0 = y^0 - \hat{y}^0 = (\beta - b)' X^0 + \varepsilon^0.
\]
The prediction variance is
\[
\operatorname{Var}\left( e^0 \,\middle|\, X, X^0 \right)
= \sigma^2 + \operatorname{Var}\left[ (\beta - b)' X^0 \,\middle|\, X, X^0 \right]
= \sigma^2 + X^{0\prime} \left[ \sigma^2 (X'X)^{-1} \right] X^0.
\]
The prediction variance can be estimated by using $s^2$ in place of $\sigma^2$. A confidence interval for $y^0$ would be formed using
\[
\hat{y}^0 \pm t_{\lambda/2} \cdot \operatorname{se}\left( e^0 \right).
\]
This formula is based on the assumption of normality for the regression errors.
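A sketch of the point forecast and interval on simulated data. In keeping with the large-sample theme of the chapter, the normal critical value 1.96 is used in place of $t_{\lambda/2}$ (an approximation; the data and $X^0$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
s2 = e @ e / (n - K)

x0 = np.array([1.0, 0.5])  # new regressor vector X^0
y0_hat = float(x0 @ b)     # point forecast

# Estimated prediction variance: s^2 + x0' [s^2 (X'X)^{-1}] x0
se_e0 = float(np.sqrt(s2 + x0 @ (s2 * XtX_inv) @ x0))
ci = (y0_hat - 1.96 * se_e0, y0_hat + 1.96 * se_e0)
```

Note that the interval is wider than an interval for $E(y^0|X^0)$ alone, because the variance of $\varepsilon^0$ enters the forecast error.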
This formula is based on the assumption of normality for the regression errors.
Measures for assessing the predictive accuracy of forecasting models are:
1. Root mean squared error
\[
\text{RMSE} = \sqrt{\frac{1}{n^0} \sum_i (y_i - \hat{y}_i)^2},
\]
where $n^0$ is the number of periods being forecasted.
2. Mean absolute error
\[
\text{MAE} = \frac{1}{n^0} \sum_i |y_i - \hat{y}_i|.
\]
3. Theil $U$-statistic
\[
U = \sqrt{\frac{\frac{1}{n^0} \sum_i (y_i - \hat{y}_i)^2}{\frac{1}{n^0} \sum_i y_i^2}}.
\]
Computed in terms of the changes $\Delta y_i$, the statistic becomes
\[
U^{\Delta} = \sqrt{\frac{\frac{1}{n^0} \sum_i (\Delta y_i - \Delta\hat{y}_i)^2}{\frac{1}{n^0} \sum_i (\Delta y_i)^2}}.
\]
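The three accuracy measures in levels form are straightforward to implement. A sketch (the sample forecasts at the bottom are made up for illustration):

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean squared forecast error."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mae(y, y_hat):
    """Mean absolute forecast error."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.mean(np.abs(y - y_hat)))

def theil_u(y, y_hat):
    """Theil U statistic, levels form: RMSE scaled by the
    root mean square of the actuals."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2) / np.mean(y ** 2)))

# Illustrative actuals and forecasts over n^0 = 4 periods
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.2, 3.8])
```

The changes-based version follows by applying `theil_u` to `np.diff(y)` and `np.diff(y_hat)`.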