16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde
Lecture 20
Last time: Completed solution to the optimum linear filter in real-time operation
Semi-free configuration:
$$
H_0(s)=\frac{1}{F_L(s)\,\bar F_L(s)\,S_{ii,L}(s)}\;\frac{1}{2\pi j}\int_0^{\infty}dt\,e^{-st}\int_{-j\infty}^{j\infty}dp\,e^{pt}\,
\underbrace{\frac{D(p)\,\bar F(p)\,S_{is}(p)}{F_R(p)\,\bar F_R(p)\,S_{ii,R}(p)}}_{[\;](p)}
$$
Special case: $[\;](p)$ is rational.
In this solution formula we can carry out the indicated integrations in literal form in the case in which $[\;](p)$ is rational.
In our work, we deal in a practical way only with rational $F$, $S_{is}$, and $S_{ii}$, so this function will be rational if $D(p)$ is rational. This will be true of every desired operation except a predictor. Thus, except in the case of prediction, the above function, which will be symbolized as $[\;]$, can be expanded into
$$
[\;](p)=[\;]_L(p)+[\;]_R(p)
$$
where $[\;]_L$ has poles only in the LHP and $[\;]_R$ has poles only in the RHP. The zeroes may be anywhere.
For rational $[\;]$, this expansion is made by expanding into partial fractions, then adding together the terms defining LHP poles to form $[\;]_L$ and adding together the terms defining RHP poles to form $[\;]_R$. Actually, only $[\;]_L$ will be required.
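As a concrete illustration of this splitting (a sketch, not part of the original notes), the following sympy snippet expands a made-up rational function of $p$ into partial fractions and groups the terms by pole location; the function G below is chosen arbitrarily for the example.

import sympy as sp

p = sp.symbols('p')

# A made-up rational function with one LHP pole (p = -1) and one RHP pole (p = +2).
G = 1 / ((p + 1) * (2 - p))

# Partial-fraction expansion, then group terms by the half-plane of their poles.
terms = sp.Add.make_args(sp.apart(G, p))
G_L = sum(t for t in terms if all(sp.re(r) < 0 for r in sp.roots(sp.denom(t), p)))
G_R = sum(t for t in terms if all(sp.re(r) > 0 for r in sp.roots(sp.denom(t), p)))

print(G_L)                              # the [ ]_L piece (LHP pole only)
print(G_R)                              # the [ ]_R piece (RHP pole only)
print(sp.simplify(G_L + G_R - G))       # 0: the two pieces reconstruct [ ]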
$$
\frac{1}{2\pi j}\int_0^{\infty}dt\,e^{-st}\int_{-j\infty}^{j\infty}dp\,\Big\{[\;]_L(p)+[\;]_R(p)\Big\}\,e^{pt}
=\int_0^{\infty}f_L(t)\,e^{-st}\,dt+\int_0^{\infty}f_R(t)\,e^{-st}\,dt
$$
where
$$
f_L(t)=\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}[\;]_L(p)\,e^{pt}\,dp=0,\quad t<0
$$
$$
f_R(t)=\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}[\;]_R(p)\,e^{pt}\,dp=0,\quad t>0
$$
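As a quick sanity check on this one-sided behavior (an illustration only, not part of the notes), a short sympy computation confirms that a right-sided exponential transforms into a term whose pole lies in the LHP, while a left-sided exponential transforms into a term whose pole lies in the RHP; the symbols a and b are assumed positive and the transform is taken in its strip of convergence.

import sympy as sp

t, s = sp.symbols('t s')
a, b = sp.symbols('a b', positive=True)

# Right-sided exponential e^{-a t} (t > 0): its transform 1/(s + a) has its pole at s = -a (LHP).
right = sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True)
print(right)                                  # 1/(a + s)

# Left-sided exponential e^{+b t} (t < 0): time reversal t -> -t maps it to e^{-b t} (t > 0)
# and maps s -> -s in the transform, giving 1/(b - s), whose pole sits at s = +b (RHP).
left = sp.laplace_transform(sp.exp(-b*t), t, s, noconds=True).subs(s, -s)
print(sp.simplify(left))                      # equals 1/(b - s)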
Note that $f_R(t)$ is the inverse transform of a function which is analytic in the LHP; thus $f_R(t)=0$ for $t>0$ and
$$
\int_0^{\infty}f_R(t)\,e^{-st}\,dt=0
$$
Also, $f_L(t)$ is the inverse transform of a function which is analytic in the RHP; thus $f_L(t)=0$ for $t<0$. Thus
$$
\int_0^{\infty}f_L(t)\,e^{-st}\,dt=\int_{-\infty}^{\infty}f_L(t)\,e^{-st}\,dt=[\;]_L(s)
$$
Thus finally,
$$
H_0(s)=\frac{\left[\dfrac{D(s)\,\bar F(s)\,S_{is}(s)}{F_R(s)\,\bar F_R(s)\,S_{ii,R}(s)}\right]_L}{F_L(s)\,\bar F_L(s)\,S_{ii,L}(s)}
$$
In the usual case, $F(s)$ is a stable, minimum-phase function. In that case $F_L(s)=F(s)$ and $F_R(s)=1$; that is, all the poles and zeroes of $F(s)$ are in the LHP. Similarly, $\bar F_L(s)=1$ and $\bar F_R(s)=\bar F(s)$. Then
$$
H_0(s)=\frac{\left[\dfrac{D(s)\,S_{is}(s)}{S_{ii,R}(s)}\right]_L}{F(s)\,S_{ii,L}(s)}
$$
Thus in this case the optimum transfer function from input to output is
$$
F(s)H_0(s)=\frac{\left[\dfrac{D(s)\,S_{is}(s)}{S_{ii,R}(s)}\right]_L}{S_{ii,L}(s)}
$$
and the optimum function to be cascaded with the fixed part is obtained from this by division by $F(s)$, so that the fixed part is compensated out by cancellation.
Free configuration problem:
$$
H_0(s)=\frac{\left[\dfrac{D(s)\,S_{is}(s)}{S_{ii,R}(s)}\right]_L}{S_{ii,L}(s)}
$$
Optimum free configuration filter: (block diagram)
We started with a closed-loop configuration:
$$
H(s)=\frac{C(s)}{1+C(s)F(s)B(s)}
$$
$$
C(s)=\frac{H(s)}{1-H(s)F(s)B(s)}
$$
The loop will be stable, but $C(s)$ may be unstable.
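To make this back-solution for $C(s)$ concrete, here is a minimal sympy sketch (the particular $H$, $F$, and $B$ below are made-up illustrations, not taken from the notes) that recovers $C(s)$ from $H(s)$ and shows that $C(s)$ itself can have an RHP pole even though $H(s)$ is stable.

import sympy as sp

s = sp.symbols('s')

# Made-up illustration: a stable compensator transfer function H(s),
# a fixed part F(s), and unity feedback B(s) = 1.
H = 2 / (s + 3)
F = 1 / (s * (s + 1))
B = 1

# Back-solve for the forward-path element C(s) that realizes H(s) inside the loop.
C = sp.simplify(H / (1 - H * F * B))

# Check: closing the loop around C reproduces H(s).
assert sp.simplify(C / (1 + C * F * B) - H) == 0

# H(s) is stable here, but C(s) itself need not be: one of its poles lies in the RHP.
den = sp.fraction(sp.together(C))[1]
print(C)
print(sp.Poly(den, s).nroots())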
Special comments about the application of these formulae:
a) Unstable $F(s)$ cannot be treated because the Fourier transform of $w_F(t)$ does not converge in that case. To treat this system, first close a feedback loop around $F(s)$ to create a stable "fixed" part and work with this stable feedback system as $F(s)$. When the optimum compensation is found, it can be collected with the original compensation if desired.
b) An $F(s)$ which has poles on the $j\omega$ axis is the limiting case of functions for which the Fourier transform converges. You can move the poles just into the LHP by adding a small real part $-\epsilon$ to the pole locations (equivalently, replacing a denominator factor $s$ by $s+\epsilon$). Solve the problem with this $\epsilon$ and at the end set it to zero.
Zeroes of $F(s)$ on the $j\omega$ axis can be included in either factor and the result will be the same. This will permit cancellation compensation of poles of $F(s)$ on the $j\omega$ axis, including poles at the origin.
c) In factoring $S_{ii}(s)$ into $S_{ii,L}(s)\,S_{ii,R}(s)$, any constant factor in $S_{ii}(s)$ can be divided between $S_{ii,L}(s)$ and $S_{ii,R}(s)$ in any convenient way. The same is true of $F(s)$ and $\bar F(s)$.
d) Problems should be well-posed in the first place. Avoid combinations of $D(s)$ and $S_{ss}(\omega)$ which imply infinite $\overline{d^2}$, because that may imply infinite $\overline{e^2}$ for any realizable filter; an example is a differentiator acting on a signal whose spectrum falls off only as $\dfrac{1}{\omega^2}$.
e) The point at $t=0$ was left hanging in several steps of the derivation of the solution formula. Don't bother checking the individual steps; just check the final solution to see if it satisfies the necessary conditions.
The Wiener-Hopf equation requires $l(\tau_1)=0$ for $\tau_1\ge 0$. Thus $L(s)$ should be analytic in the LHP and go to zero at least as fast as $\dfrac{1}{s}$ for large $s$.
$$
L(s)=\bar F(s)H_0(s)F(s)S_{ii}(s)-\bar F(s)D(s)S_{is}(s)
$$
We have solved the problem of the optimum linear filter under the least mean
squared error criterion.
Further analysis shows that if the inputs, signal and noise, are Gaussian, the result we have is the optimum filter. That is, there is no filter, linear or nonlinear, which will yield a smaller mean squared error.
If the inputs are not both Gaussian, it is almost sure that some nonlinear filters
can do better than the Wiener filter. But theory for this is only beginning to be
developed on an approximate basis.
Note that if we only know the second order statistics of the inputs, the optimum
linear filter is the best we can do. To take advantage of nonlinear filtering we
must know the distributions of the inputs.
Example: Free configuration predictor (real time)
$$
S_{ss}(s)=\frac{A}{a^2-s^2},\qquad S_{nn}(s)=S_n
$$
The $s$, $n$ are uncorrelated.
$$
D(s)=e^{sT}
$$
Use the solution form
$$
\begin{aligned}
H_0(s)&=\frac{1}{S_{ii,L}(s)}\,\frac{1}{2\pi j}\int_0^{\infty}dt\,e^{-st}\int_{-j\infty}^{j\infty}dp\,e^{pt}\,\frac{D(p)\,S_{is}(p)}{S_{ii,R}(p)}\\
&=\frac{1}{S_{ii,L}(s)}\,\frac{1}{2\pi j}\int_0^{\infty}dt\,e^{-st}\int_{-j\infty}^{j\infty}dp\,e^{p(t+T)}\,\frac{S_{is}(p)}{S_{ii,R}(p)}
\end{aligned}
$$
$$
\begin{aligned}
S_{ii}(s)&=S_{ss}(s)+S_{nn}(s)\\
&=\frac{A}{a^2-s^2}+S_n\\
&=S_n\,\frac{\dfrac{A}{S_n}+a^2-s^2}{a^2-s^2}\\
&=S_n\,\frac{b^2-s^2}{a^2-s^2}\\
&=S_n\,\frac{(b+s)(b-s)}{(a+s)(a-s)}
\end{aligned}
$$
where $b^2=a^2+\dfrac{A}{S_n}$. Putting the constant $S_n$ entirely in the left factor, take
$$
S_{ii,L}(s)=S_n\,\frac{b+s}{a+s},\qquad S_{ii,R}(s)=\frac{b-s}{a-s}
$$
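As a quick check of this spectral factorization (an illustration, not part of the notes), the following sympy snippet confirms that with $b^2=a^2+A/S_n$ the chosen left and right factors reproduce $S_{ss}+S_{nn}$; the symbol names mirror the ones above.

import sympy as sp

s = sp.symbols('s')
A, a, Sn = sp.symbols('A a S_n', positive=True)
b = sp.sqrt(a**2 + A/Sn)

S_ss = A / (a**2 - s**2)
S_ii = S_ss + Sn                          # S_ii = S_ss + S_nn, with white noise level S_n

S_ii_L = Sn * (b + s) / (a + s)           # left factor: pole and zero in the LHP
S_ii_R = (b - s) / (a - s)                # right factor: pole and zero in the RHP

print(sp.simplify(S_ii_L * S_ii_R - S_ii))   # 0: the factorization reproduces S_ii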
$$
S_{is}(p)=S_{ss}(p)=\frac{A}{(a+p)(a-p)}
$$
$$
\begin{aligned}
\frac{S_{is}(p)}{S_{ii,R}(p)}&=\frac{A\,(a-p)}{(a+p)(a-p)(b-p)}\\
&=\frac{A}{(a+p)(b-p)}\\
&=\frac{A}{(b+a)(a+p)}+\frac{A}{(a+b)(b-p)}
\end{aligned}
$$
Using the integral form,
$$
\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\frac{A}{(a+b)(p+a)}\,e^{p(t+T)}\,dp=
\begin{cases}\dfrac{A}{a+b}\,e^{-a(t+T)}, & t>-T\\[4pt] 0, & \text{otherwise}\end{cases}
$$
$$
\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\frac{A}{(a+b)(b-p)}\,e^{p(t+T)}\,dp=
\begin{cases}\dfrac{A}{a+b}\,e^{b(t+T)}, & t<-T\\[4pt] 0, & \text{otherwise}\end{cases}
$$
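As an aside (not part of the notes), the first of these inverse transforms can be checked by a residue computation: for $t>-T$ the Bromwich contour is closed to the left and the only enclosed pole is $p=-a$. A minimal sympy sketch, with symbol names mirroring those above:

import sympy as sp

p, t = sp.symbols('p t')
A, a, b, T = sp.symbols('A a b T', positive=True)

integrand = A / ((a + b) * (p + a)) * sp.exp(p * (t + T))

# For t > -T the contour closes to the left, enclosing only the pole at p = -a,
# so the inverse transform equals the residue there.
res = sp.residue(integrand, p, -a)
print(sp.simplify(res))    # A/(a + b) * exp(-a*(t + T))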
Since the outer integration runs over $t\ge 0$ and $T>0$, only the first term contributes, and
$$
\begin{aligned}
\int_0^{\infty}e^{-st}\,\frac{A}{a+b}\,e^{-a(t+T)}\,dt&=\frac{A}{a+b}\,e^{-aT}\int_0^{\infty}e^{-(s+a)t}\,dt\\
&=\frac{A}{a+b}\,\frac{e^{-aT}}{s+a}
\end{aligned}
$$
$$
H_0(s)=\frac{1}{S_{ii,L}(s)}\,\frac{A\,e^{-aT}}{(a+b)(s+a)}
=\frac{A\,e^{-aT}\,(s+a)}{(a+b)(s+a)\,S_n\,(s+b)}
=\frac{A\,e^{-aT}}{S_n\,(a+b)(s+b)}
$$
where $b^2=a^2+\dfrac{A}{S_n}$.
Note the bandwidth of this filter and the gain $\sim e^{-aT}$, which is the normalized correlation between $s(t)$ and $s(t+T)$.
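A quick check of this result against the necessary condition stated above (an illustration only, not part of the notes): for this free-configuration case $F(s)=1$, so $L(s)=H_0(s)S_{ii}(s)-D(s)S_{is}(s)$ must be analytic in the LHP. The sympy sketch below verifies that the two candidate LHP poles of $L(s)$, at $s=-a$ and $s=-b$, are both cancelled by the numerator.

import sympy as sp

s = sp.symbols('s')
A, a, Sn, T = sp.symbols('A a S_n T', positive=True)
b = sp.sqrt(a**2 + A/Sn)

H0 = A * sp.exp(-a*T) / (Sn * (a + b) * (s + b))   # the optimum predictor found above

S_is = A / ((a + s) * (a - s))                     # = S_ss, since signal and noise are uncorrelated
S_ii = S_is + Sn
D = sp.exp(s*T)

# Necessary condition with F(s) = 1: L(s) = H0(s) S_ii(s) - D(s) S_is(s) analytic in the LHP.
L = H0 * S_ii - D * S_is
num, den = sp.fraction(sp.together(L))

# The common denominator contributes candidate LHP poles at s = -a and s = -b;
# the numerator vanishes at both, so they cancel and L(s) has poles only in the RHP.
print(sp.simplify(num.subs(s, -a)))    # 0
print(sp.simplify(num.subs(s, -b)))    # 0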
Example: Semi-free problem with non-minimum-phase $F$. Optimum compensator:
$$
F(s)=\frac{K(c-s)}{s(d+s)},\qquad S_{ss}(s)=\frac{A}{a^2-s^2},\qquad S_{nn}(s)=S_n
$$
The $s$, $n$ are uncorrelated.
Servo example where we'd like the output to track the input, so the desired operator is $D(s)=1$.
$$
S_{ii}(s)=S_n\,\frac{b^2-s^2}{a^2-s^2},\qquad
S_{ii,L}(s)=S_n\,\frac{b+s}{a+s},\qquad
S_{ii,R}(s)=\frac{b-s}{a-s}
$$
where, as before, $b^2=a^2+\dfrac{A}{S_n}$.
With the pole of $F(s)$ at the origin moved just into the LHP (comment (b) above),
$$
F_L(s)=\frac{K}{(s+\epsilon)(d+s)},\qquad F_R(s)=c-s
$$
$$
\bar F(s)=\frac{K(c+s)}{(\epsilon-s)(d-s)},\qquad \bar F_L(s)=c+s,\qquad \bar F_R(s)=\frac{K}{(\epsilon-s)(d-s)}
$$
$$
\frac{D(s)\,\bar F(s)\,S_{is}(s)}{F_R(s)\,\bar F_R(s)\,S_{ii,R}(s)}
=\frac{(c+s)\,A\,(a-s)}{(a+s)(a-s)(c-s)(b-s)}
=\frac{A\,(c+s)}{(a+s)(c-s)(b-s)}
$$
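As a quick algebra check of this simplification (an illustration, not part of the notes), the following sympy snippet forms the bracketed function for this example, with the denominator factors taken from the solution formula as reconstructed above, and confirms the result; $\epsilon$ is kept symbolic and cancels.

import sympy as sp

s = sp.symbols('s')
K, c, d, A, a, b, eps = sp.symbols('K c d A a b epsilon', positive=True)

D      = 1                                       # desired operator for the servo example
Fbar   = K * (c + s) / ((eps - s) * (d - s))     # F(-s), with the origin pole moved to -eps
S_is   = A / ((a + s) * (a - s))                 # = S_ss
F_R    = c - s                                   # RHP factor of F(s)
Fbar_R = K / ((eps - s) * (d - s))               # RHP factor of F(-s)
S_ii_R = (b - s) / (a - s)                       # RHP factor of S_ii(s)

bracket = D * Fbar * S_is / (F_R * Fbar_R * S_ii_R)
target  = A * (c + s) / ((a + s) * (c - s) * (b - s))
print(sp.simplify(bracket - target))             # 0: matches the simplified form above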