Real-time Parameter Estimation
1. Introduction
2. Least squares and regression
3. Dynamical systems
4. Experimental conditions
5. Examples
6. Conclusions
System Identification
• How to get models?
  – Physics (white-box)
  – Experiments (black-box)
  – Combination (grey-box)
• Experiment planning
• Choice of model structure
  – Transfer functions
  – Impulse response
  – State models
• Parameter estimation
  – Statistics
  – Inverse problems
• Validation
Identification Techniques
• Nonparametric methods
  – Frequency response
  – Transient response
  – Correlation analysis
  – Spectral analysis
• Parametric methods
  – Least squares (LS)
  – Maximum likelihood (ML)
• Identification for control
• Related areas
  – Statistics
  – Numerical analysis
  – Econometrics
  – Many applications
Real-time Parameter Estimation
1. Introduction
2. Least squares and regression
3. Dynamical systems
4. Experimental conditions
5. Examples
6. Conclusions
Least Squares and Regression
• Introduction
• The LS problem
• Interpretation
  – Geometric
  – Statistical
• Recursive calculations
• Continuous-time models
Good Methods are Adopted by Everybody
• Mathematics
• Statistics
• Numerical analysis
• Physics
• Economics
• Biology
• Medicine
• Control
• Signal processing
The Least Squares Method
The problem: the orbit of Ceres.
The problem solver: Karl Friedrich Gauss.
The principle: "Therefore, that will be the most probable system of values of the unknown quantities, in which the sum of the squares of the differences between the observed and computed values, multiplied by numbers that measure the degree of precision, is a minimum."
In conclusion, the principle that the sum of the squares of the differences between the observed and computed quantities must be a minimum may be considered independently of the calculus of probabilities.
An observation: other criteria could be used. "But of all these principles ours is the most simple; by the others we should be led into the most complicated calculations."
Mathematical Formulation
The regression model
$$y(t) = \varphi_1(t)\theta_1 + \varphi_2(t)\theta_2 + \cdots + \varphi_n(t)\theta_n = \varphi^T(t)\,\theta$$
$y$ – observed data
$\theta_i$ – unknown parameters
$\varphi_i$ – known functions (regression variables)
Some notation:
$$\varphi^T(t) = \begin{bmatrix} \varphi_1(t) & \varphi_2(t) & \dots & \varphi_n(t) \end{bmatrix} \qquad \theta^T = \begin{bmatrix} \theta_1 & \theta_2 & \dots & \theta_n \end{bmatrix}$$
$$Y(t) = \begin{bmatrix} y(1) & y(2) & \dots & y(t) \end{bmatrix}^T \qquad E(t) = \begin{bmatrix} \varepsilon(1) & \varepsilon(2) & \dots & \varepsilon(t) \end{bmatrix}^T$$
$$\Phi(t) = \begin{pmatrix} \varphi^T(1) \\ \vdots \\ \varphi^T(t) \end{pmatrix}$$
$$P(t) = \left( \sum_{i=1}^{t} \varphi(i)\varphi^T(i) \right)^{-1} = \left( \Phi^T(t)\Phi(t) \right)^{-1}$$
$$\varepsilon(i) = y(i) - \hat{y}(i) = y(i) - \varphi^T(i)\theta$$
Solving the LS Problem
Minimize with respect to $\theta$
$$V(\theta, t) = \frac{1}{2}\sum_{i=1}^{t} \varepsilon(i)^2 = \frac{1}{2}\sum_{i=1}^{t} \left( y(i) - \varphi^T(i)\theta \right)^2 = \frac{1}{2}\theta^T A\theta - b^T\theta + \frac{1}{2}c$$
where
$$A = \sum_{i=1}^{t} \varphi(i)\varphi^T(i), \qquad b = \sum_{i=1}^{t} \varphi(i)y(i), \qquad c = \sum_{i=1}^{t} y^2(i)$$
The parameter $\hat{\theta}$ that minimizes the loss function is given by the normal equations
$$A\hat{\theta} = b$$
If the matrix $A$ is nonsingular, the minimum is unique and given by
$$\hat{\theta} = A^{-1}b = Pb$$
How to construct the equations!
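
A minimal numerical sketch of the normal-equation solution above (NumPy assumed; Phi stacks the rows φᵀ(i) exactly as in the definition of Φ(t)):

```python
import numpy as np

def ls_estimate(Phi, y):
    """Batch least squares: solve A theta = b with A = Phi^T Phi, b = Phi^T y."""
    A = Phi.T @ Phi                      # sum of phi(i) phi(i)^T
    b = Phi.T @ y                        # sum of phi(i) y(i)
    theta_hat = np.linalg.solve(A, b)    # unique if A is nonsingular
    P = np.linalg.inv(A)                 # P = A^{-1}, reused by the recursive form later
    return theta_hat, P
```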
An Example
$$y(t) = b_0 + b_1 u(t) + b_2 u^2(t) + e(t), \qquad \sigma = 0.1$$
$$\varphi^T(t) = \begin{bmatrix} 1 & u(t) & u^2(t) \end{bmatrix}, \qquad \theta^T = \begin{bmatrix} b_0 & b_1 & b_2 \end{bmatrix}$$
Estimated models:
Model 1: $y(t) = b_0$
Model 2: $y(t) = b_0 + b_1 u$
Model 3: $y(t) = b_0 + b_1 u + b_2 u^2$
Model 4: $y(t) = b_0 + b_1 u + b_2 u^2 + b_3 u^3$
Example Continued

Model   b̂₀      b̂₁      b̂₂      b̂₃       V
1       3.85     –        –        –        34.46
2       0.57     1.09     –        –        1.01
3       1.11     0.45     0.11     –        0.031
4       1.13     0.37     0.14     −0.003   0.027
[Figure: measured input–output data together with the fits of Models 1–4, one panel per model; Input 0–6 on the horizontal axis, Output 0–8 on the vertical axis.]
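
A sketch of how fits like those above could be reproduced; the number of data points and the "true" coefficients (chosen roughly consistent with the Model 3 estimates in the table) are assumptions, since the slide only states σ = 0.1:

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.linspace(0, 6, 10)               # input range matching the plots (point count assumed)
b0, b1, b2 = 1.0, 0.5, 0.1              # assumed "true" values, not given on the slide
y = b0 + b1 * u + b2 * u**2 + 0.1 * rng.standard_normal(u.size)   # sigma = 0.1

# Models 1-4: polynomials in u of increasing degree
for degree in range(4):
    Phi = np.vander(u, degree + 1, increasing=True)    # columns 1, u, u^2, ...
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    V = 0.5 * np.sum((y - Phi @ theta) ** 2)           # loss V(theta, t)
    print(f"Model {degree + 1}: theta = {np.round(theta, 2)}, V = {V:.3f}")
```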
Geometric Interpretation

[Figure: the data vector Y, its projection Ŷ onto the space spanned by the regressor vectors φ₁ and φ₂, and the error E between them.]

$$E = Y - \varphi_1\theta_1 - \varphi_2\theta_2 - \cdots - \varphi_n\theta_n$$
When is $E$ as small as possible? When $E$ is orthogonal to every regressor vector:
$$\varphi_i^T\left( Y - \theta_1\varphi_1 - \theta_2\varphi_2 - \cdots - \theta_n\varphi_n \right) = 0, \qquad i = 1, \dots, n$$
The normal equations!
Statistical Interpretation
$$y(t) = \varphi^T(t)\theta_0 + e(t)$$
$\theta_0$ – "true" parameters
$e(t)$ – independent random variables with zero mean and variance $\sigma^2$
If $\Phi^T\Phi$ is nonsingular, then
$$\operatorname{E}\hat{\theta} = \theta_0, \qquad \operatorname{cov}\hat{\theta} = \sigma^2\left(\Phi^T\Phi\right)^{-1} = \sigma^2 P$$
$$s^2 = 2V(\hat{\theta}, t)/(t - n)$$
is an unbiased estimate of $\sigma^2$.
$n$ – number of parameters in $\theta_0$
$t$ – number of data
Recursive Least Squares
Idea:
• Avoid repeating all calculations when new data arrive recursively.
• Does there exist a recursive formula that expresses $\hat{\theta}(t)$ in terms of $\hat{\theta}(t-1)$?
Recursive Least Squares
The LS estimate is given by
$$\hat{\theta}(t) = P(t)\left(\sum_{i=1}^{t-1}\varphi(i)y(i) + \varphi(t)y(t)\right), \qquad P(t) = \left(\sum_{i=1}^{t}\varphi(i)\varphi^T(i)\right)^{-1}$$
$$P(t)^{-1} = P(t-1)^{-1} + \varphi(t)\varphi^T(t)$$
But
$$\sum_{i=1}^{t-1}\varphi(i)y(i) = P(t-1)^{-1}\hat{\theta}(t-1) = P(t)^{-1}\hat{\theta}(t-1) - \varphi(t)\varphi^T(t)\hat{\theta}(t-1)$$
The estimate at time $t$ can now be written as
$$\begin{aligned}
\hat{\theta}(t) &= \hat{\theta}(t-1) - P(t)\varphi(t)\varphi^T(t)\hat{\theta}(t-1) + P(t)\varphi(t)y(t) \\
&= \hat{\theta}(t-1) + P(t)\varphi(t)\left(y(t) - \varphi^T(t)\hat{\theta}(t-1)\right) \\
&= \hat{\theta}(t-1) + K(t)\varepsilon(t)
\end{aligned}$$
We want a recursive equation for $P(t)$, not for $P(t)^{-1}$.
The Matrix Inversion Lemma
Let $A$, $C$, and $(C^{-1} + DA^{-1}B)$ be nonsingular square matrices. Then
$$(A + BCD)^{-1} = A^{-1} - A^{-1}B\left(C^{-1} + DA^{-1}B\right)^{-1}DA^{-1}$$
Prove by direct substitution.
Given $A^{-1}$, we can compute the inverse on the left-hand side.
What about the inverse on the right-hand side?
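
A quick numerical check of the identity (the particular matrices are illustrative and chosen so that all the required inverses exist):

```python
import numpy as np

A = np.diag([4.0, 3.0, 2.0, 1.0])
B = np.ones((4, 1))
C = np.array([[2.0]])
D = np.arange(1.0, 5.0).reshape(1, 4)

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ainv - Ainv @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ainv @ B) @ D @ Ainv
print(np.allclose(lhs, rhs))   # True
```

When the lemma is applied to the RLS recursion below, the term $C^{-1} + DA^{-1}B$ has only the dimension of the measurement, a scalar for single-output systems, so no large matrix inverse is needed.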
Recursion for P(t)
The matrix inversion lemma gives
$$\begin{aligned}
P(t) &= \left(\sum_{i=1}^{t}\varphi(i)\varphi^T(i)\right)^{-1} = \left(\sum_{i=1}^{t-1}\varphi(i)\varphi^T(i) + \varphi(t)\varphi^T(t)\right)^{-1} = \left(P(t-1)^{-1} + \varphi(t)\varphi^T(t)\right)^{-1} \\
&= P(t-1) - P(t-1)\varphi(t)\left(I + \varphi^T(t)P(t-1)\varphi(t)\right)^{-1}\varphi^T(t)P(t-1)
\end{aligned}$$
Hence
$$K(t) = P(t)\varphi(t) = P(t-1)\varphi(t)\left(I + \varphi^T(t)P(t-1)\varphi(t)\right)^{-1}$$
Recursive Least-Squares (RLS)
$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \varphi^T(t)\hat{\theta}(t-1)\right)$$
$$K(t) = P(t)\varphi(t) = P(t-1)\varphi(t)\left(I + \varphi^T(t)P(t-1)\varphi(t)\right)^{-1}$$
$$P(t) = P(t-1) - P(t-1)\varphi(t)\left(I + \varphi^T(t)P(t-1)\varphi(t)\right)^{-1}\varphi^T(t)P(t-1) = \left(I - K(t)\varphi^T(t)\right)P(t-1)$$
• Intuitive interpretation
• Kalman filter
• Interpretation of $\theta$ and $P$
• Initial values ($P(0) = r \cdot I$)
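
A minimal sketch of one RLS step implementing the equations above for a scalar measurement, so the matrix inverse reduces to a division (NumPy assumed):

```python
import numpy as np

def rls_update(theta, P, phi, y):
    """One recursive least-squares step for a scalar measurement y.

    theta : current estimate theta_hat(t-1), shape (n,)
    P     : current P(t-1), shape (n, n)
    phi   : regressor phi(t), shape (n,)
    """
    denom = 1.0 + phi @ P @ phi          # I + phi^T P phi (a scalar here)
    K = (P @ phi) / denom                # gain K(t)
    eps = y - phi @ theta                # prediction error
    theta = theta + K * eps              # theta_hat(t)
    P = P - np.outer(K, phi @ P)         # P(t) = (I - K phi^T) P(t-1)
    return theta, P

# Typical initialization: theta_hat(0) = 0 and P(0) = r*I with r large.
n, r = 2, 100.0
theta, P = np.zeros(n), r * np.eye(n)
```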
Time-varying Parameters
Loss function with discounting
$$V(\theta, t) = \frac{1}{2}\sum_{i=1}^{t}\lambda^{t-i}\left(y(i) - \varphi^T(i)\theta\right)^2$$
The LS estimate then becomes
$$\hat{\theta}(t) = \hat{\theta}(t-1) + K(t)\left(y(t) - \varphi^T(t)\hat{\theta}(t-1)\right)$$
$$K(t) = P(t)\varphi(t) = P(t-1)\varphi(t)\left(\lambda I + \varphi^T(t)P(t-1)\varphi(t)\right)^{-1}$$
$$P(t) = \left(I - K(t)\varphi^T(t)\right)P(t-1)/\lambda$$
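
The same update sketch with the forgetting factor inserted where the equations above differ from plain RLS (setting lam = 1 recovers the ordinary algorithm):

```python
import numpy as np

def rls_update_forgetting(theta, P, phi, y, lam=0.99):
    """RLS step with exponential forgetting; lam is the forgetting factor lambda."""
    denom = lam + phi @ P @ phi              # lambda I + phi^T P phi (scalar case)
    K = (P @ phi) / denom
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam     # P(t) = (I - K phi^T) P(t-1) / lambda
    return theta, P
```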
Forgetting Factor
$$0 < \lambda \leq 1$$
Equivalent time constant
$$e^{-h/T} = \lambda$$
Hence
$$T = -\frac{h}{\log\lambda} \approx \frac{h}{1-\lambda}$$
Rule of thumb: the memory decays to 10% after
$$N = \frac{2}{1-\lambda}$$
samples.
• + Forgets old information
• + Adapts quickly when the process changes
• − The estimates become noise sensitive
• − The P matrix may grow (wind-up)
Continuous Time Models
Regression model
$$y(t) = \varphi_1(t)\theta_1 + \varphi_2(t)\theta_2 + \cdots + \varphi_n(t)\theta_n = \varphi^T(t)\,\theta$$
Loss function with forgetting
$$V(\theta) = \int_0^t e^{-\alpha(t-s)}\left(y(s) - \varphi^T(s)\theta\right)^2 ds$$
The normal equations
$$\int_0^t e^{-\alpha(t-s)}\varphi(s)\varphi^T(s)\,ds\;\hat{\theta}(t) = \int_0^t e^{-\alpha(t-s)}\varphi(s)y(s)\,ds$$
The estimate is unique if the matrix
$$R(t) = \int_0^t e^{-\alpha(t-s)}\varphi(s)\varphi^T(s)\,ds$$
is positive definite.
Recursive Equations for Continuous Time Models
Regression model
$$y(t) = \varphi^T(t)\,\theta$$
Recursive least-squares equations
$$\frac{d\hat{\theta}}{dt} = P(t)\varphi(t)e(t), \qquad e(t) = y(t) - \varphi^T(t)\hat{\theta}(t)$$
$$\frac{dP(t)}{dt} = \alpha P(t) - P(t)\varphi(t)\varphi^T(t)P(t)$$
$$\frac{dR}{dt} = -\alpha R + \varphi\varphi^T$$
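
A sketch of how these differential equations could be advanced with a simple forward-Euler step (the step length dt and the driving signals are assumptions for illustration; in practice a proper ODE solver would be used):

```python
import numpy as np

def ct_rls_step(theta, P, phi, y, alpha, dt):
    """One forward-Euler step of the continuous-time RLS equations."""
    e = y - phi @ theta                              # e(t) = y(t) - phi^T(t) theta_hat(t)
    dtheta = P @ phi * e                             # d theta_hat / dt = P phi e
    dP = alpha * P - P @ np.outer(phi, phi) @ P      # dP/dt = alpha P - P phi phi^T P
    return theta + dt * dtheta, P + dt * dP
```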
Real-time Parameter Estimation
1. Introduction
2. Least squares and regression
3. Dynamical systems
4. Experimental conditions
5. Examples
6. Conclusions
Estimating Parameters in Dynamical Systems
Basic idea: rewrite the equations as a regression model!
• Dynamical systems
  – FIR models
  – ARMA models
  – Continuous time models
  – Nonlinear models
• Experimental conditions
  – Excitation
  – Closed loop identification
Finite Impulse Response (FIR) Models
$$y(t) = b_1 u(t-1) + b_2 u(t-2) + \cdots + b_n u(t-n)$$
or
$$y(t) = \varphi^T(t-1)\,\theta$$
where
$$\theta^T = \begin{bmatrix} b_1 & \dots & b_n \end{bmatrix}, \qquad \varphi^T(t-1) = \begin{bmatrix} u(t-1) & \dots & u(t-n) \end{bmatrix}$$
A regression model!
$$\hat{y}(t) = \hat{b}_1(t-1)u(t-1) + \cdots + \hat{b}_n(t-1)u(t-n)$$
[Block diagram: the input u drives both the process S and an adjustable FIR filter; the error between the process output y and the filter output ŷ drives the adjustment mechanism that updates the filter parameters.]
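
A sketch of the FIR regressor and a batch least-squares fit of the impulse-response coefficients (illustrative helpers, NumPy assumed):

```python
import numpy as np

def fir_regressor(u, n, t):
    """phi(t-1) = [u(t-1), ..., u(t-n)] for the FIR model (0-based arrays)."""
    return np.array([u[t - k] for k in range(1, n + 1)])

def fit_fir(u, y, n):
    """Estimate b_1, ..., b_n from input/output data by batch least squares."""
    Phi = np.array([fir_regressor(u, n, t) for t in range(n, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta              # [b1_hat, ..., bn_hat]
```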
Pulse Transfer Function Models
$$y(t) + a_1 y(t-1) + \cdots + a_n y(t-n) = b_1 u(t-1) + \cdots + b_n u(t-n)$$
Write as
$$y(t) = \varphi^T(t-1)\,\theta$$
where
$$\varphi(t-1) = \begin{bmatrix} -y(t-1) & \dots & -y(t-n) & u(t-1) & \dots & u(t-n) \end{bmatrix}^T$$
$$\theta = \begin{bmatrix} a_1 & \dots & a_n & b_1 & \dots & b_n \end{bmatrix}^T$$
• Autoregression!
• Equation error
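
The corresponding regressor in code: past outputs enter with a negative sign and past inputs as they are, and the resulting rows can be fed to the same batch or recursive estimators sketched earlier (illustrative helper):

```python
import numpy as np

def arx_regressor(y, u, n, t):
    """phi(t-1) = [-y(t-1), ..., -y(t-n), u(t-1), ..., u(t-n)] (0-based arrays)."""
    past_y = [-y[t - k] for k in range(1, n + 1)]
    past_u = [u[t - k] for k in range(1, n + 1)]
    return np.array(past_y + past_u)
```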
Transfer Function Models
Write the model
$$\frac{d^n y}{dt^n} + a_1\frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_n y = b_1\frac{d^{n-1} u}{dt^{n-1}} + \cdots + b_n u$$
as
$$A(p)y(t) = B(p)u(t)$$
Introduce
$$A(p)y_f(t) = B(p)u_f(t)$$
where
$$y_f(t) = F(p)y(t), \qquad u_f(t) = F(p)u(t)$$
and $F(p)$ has pole excess greater than $n$.
$$\theta = \begin{bmatrix} a_1 & \dots & a_n & b_1 & \dots & b_n \end{bmatrix}^T$$
$$\varphi_f^T(t) = \begin{bmatrix} -p^{n-1}y_f & \dots & -y_f & p^{n-1}u_f & \dots & u_f \end{bmatrix} = \begin{bmatrix} -p^{n-1}F(p)y & \dots & -F(p)y & p^{n-1}F(p)u & \dots & F(p)u \end{bmatrix}$$
Hence
$$y_f(t) = \varphi_f^T(t)\,\theta$$
A regression model!
Nonlinear Models
Consider the model
$$y(t) + a y(t-1) = b_1 u(t-1) + b_2 u^2(t-1)$$
Introduce
$$\theta = \begin{bmatrix} a & b_1 & b_2 \end{bmatrix}^T$$
and
$$\varphi^T(t) = \begin{bmatrix} -y(t) & u(t) & u^2(t) \end{bmatrix}$$
Hence
$$y(t) = \varphi^T(t-1)\,\theta$$
Autoregression!
Linearity in the parameters.
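
The same recipe in code: the model is nonlinear in the input u but linear in θ, so the regressor below plugs directly into the batch and recursive estimators sketched earlier (illustrative helper):

```python
import numpy as np

def nonlinear_regressor(y, u, t):
    """phi(t-1) = [-y(t-1), u(t-1), u(t-1)**2] for y(t) + a y(t-1) = b1 u(t-1) + b2 u(t-1)^2."""
    return np.array([-y[t - 1], u[t - 1], u[t - 1] ** 2])
```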
Real-time Parameter Estimation
1. Introduction
2. Least squares and regression
3. Dynamical systems
4. Experimental conditions
5. Examples
6. Conclusions
Experimental Conditions
• Excitation
• Closed loop identification
• Model structure
Persistent Excitation
The matrix $\sum_{k=n+1}^{t}\varphi(k)\varphi^T(k)$ is given by
$$\begin{pmatrix}
\sum_{n+1}^{t} u^2(k-1) & \cdots & \sum_{n+1}^{t} u(k-1)u(k-n) \\
\vdots & & \vdots \\
\sum_{n+1}^{t} u(k-1)u(k-n) & \cdots & \sum_{n+1}^{t} u^2(k-n)
\end{pmatrix}$$
Define
$$C_n = \lim_{t\to\infty}\frac{1}{t}\,\Phi^T\Phi = \begin{pmatrix}
c(0) & c(1) & \cdots & c(n-1) \\
c(1) & c(0) & \cdots & c(n-2) \\
\vdots & & & \vdots \\
c(n-1) & c(n-2) & \cdots & c(0)
\end{pmatrix}$$
$$c(k) = \lim_{t\to\infty}\frac{1}{t}\sum_{i=1}^{t} u(i)u(i-k)$$
A signal u is called persistently exciting (PE) of order n if the matrix $C_n$ is positive definite.
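
A sketch of an empirical check of this condition: build C_n from sample covariances c(k) and test positive definiteness via the smallest eigenvalue (NumPy assumed; with finite data the limits are only approximated, so the threshold tol is a crude, illustrative choice):

```python
import numpy as np

def is_pe(u, n, tol=1e-2):
    """Approximate test that the signal u is persistently exciting of order n."""
    t = len(u)
    c = [np.dot(u[k:], u[:t - k]) / t for k in range(n)]     # c(k) ~ (1/t) sum u(i) u(i-k)
    Cn = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])  # Toeplitz C_n
    return np.linalg.eigvalsh(Cn).min() > tol

rng = np.random.default_rng(0)
step = np.ones(1000)
print(is_pe(step, 1), is_pe(step, 2))         # True, False: a step is PE of order 1 only
print(is_pe(rng.standard_normal(1000), 4))    # True: white noise is PE of any order
```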
Another Characterization
A signal u is persistently exciting of order n if and only if
$$U = \lim_{t\to\infty}\frac{1}{t}\sum_{k=1}^{t}\left(A(q)u(k)\right)^2 > 0$$
for all nonzero polynomials A of degree n−1 or less.
Proof. Let the polynomial A be
$$A(q) = a_0 q^{n-1} + a_1 q^{n-2} + \cdots + a_{n-1}$$
A straightforward calculation gives
$$U = \lim_{t\to\infty}\frac{1}{t}\sum_{k=1}^{t}\left(a_0 u(k+n-1) + \cdots + a_{n-1}u(k)\right)^2 = a^T C_n a$$
Examples
A signal u is called persistently exciting (PE) of order n if the matrix $C_n$ is positive definite.
An equivalent condition: no nonzero polynomial of degree less than n annihilates u (previous slide).
• A step is PE of order 1, but not of order 2, since it is annihilated by a first-degree polynomial:
$$(q-1)u(t) = 0$$
• A sinusoid is PE of order 2, but not of order 3, since
$$(q^2 - 2q\cos\omega h + 1)u(t) = 0$$
• White noise
• PRBS
• Physical meaning
• Mathematical meaning
Loss of Identifiability due to Feedback
$$y(t) = a y(t-1) + b u(t-1) + e(t), \qquad u(t) = -k y(t)$$
Multiply the feedback law by $\alpha$ and add it to the model; then
$$y(t) = (a + \alpha k)y(t-1) + (b + \alpha)u(t-1) + e(t)$$
The same input–output relation holds for all $\hat{a}$ and $\hat{b}$ such that
$$\hat{a} = a + \alpha k, \qquad \hat{b} = b + \alpha$$
[Figure: in the (â, b̂) plane the possible estimates lie on a straight line through the true value (a, b), with slope determined by the feedback gain k.]
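
A sketch of this loss of identifiability in practice: under pure proportional feedback the two components of the regressor are proportional, so the matrix of the normal equations becomes singular and the batch LS estimate is not unique (parameter values and the feedback gain are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, k, N = 0.9, 0.5, 0.2, 500           # illustrative values; closed loop is stable
y = np.zeros(N)
for t in range(1, N):
    u_prev = -k * y[t - 1]                # feedback u(t-1) = -k y(t-1)
    y[t] = a * y[t - 1] + b * u_prev + 0.5 * rng.standard_normal()

# Regressor rows [y(t-1), u(t-1)] for estimating (a, b)
Phi = np.column_stack([y[:-1], -k * y[:-1]])
print(np.linalg.matrix_rank(Phi))         # 1: the two columns are proportional
print(np.linalg.cond(Phi.T @ Phi))        # enormous: the normal equations are singular
```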
Real-time Parameter Estimation
1. Introduction
2. Least squares and regression
3. Dynamical systems
4. Experimental conditions
5. Examples
6. Conclusions
Examples
• Excitation
• Model structure
• Closed loop estimation
• Forgetting old data
• Gradient or least squares
Examples
Model
$$y(t) + a y(t-1) = b u(t-1) + e(t)$$
Parameters
$$a = -0.9, \qquad b = 0.5, \qquad \sigma = 0.5$$
$$\hat{\theta}(0) = 0, \qquad P(0) = 100\,I$$
$$\hat{\theta} = \begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix}, \qquad \varphi(t-1) = \begin{pmatrix} -y(t-1) & u(t-1) \end{pmatrix}$$
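
A sketch of this simulation with the square-wave input from the next slide and the RLS update written out in the loop (the seed and the wave construction are illustrative details):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma, N = -0.9, 0.5, 0.5, 1000
u = 1.0 - 2.0 * ((np.arange(N) // 50) % 2)          # square wave, unit amplitude, period 100
y, e = np.zeros(N), sigma * rng.standard_normal(N)

theta, P = np.zeros(2), 100.0 * np.eye(2)           # theta_hat(0) = 0, P(0) = 100 I
for t in range(1, N):
    y[t] = -a * y[t - 1] + b * u[t - 1] + e[t]      # y(t) + a y(t-1) = b u(t-1) + e(t)
    phi = np.array([-y[t - 1], u[t - 1]])           # phi(t-1) = (-y(t-1), u(t-1))
    K = P @ phi / (1.0 + phi @ P @ phi)             # RLS gain
    theta = theta + K * (y[t] - phi @ theta)
    P = P - np.outer(K, phi @ P)

print(theta)    # should approach (a, b) = (-0.9, 0.5)
```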
Excitation
Input:
• Unit pulse at t = 50
• Square wave of unit amplitude and period 100
[Figure: estimates â and b̂ versus time (0–1000); panel (a) for the unit-pulse input, panel (b) for the square-wave input.]
Effect of Feedback
Case 1: $u(t) = -0.2\,y(t)$
[Figure: estimates â and b̂ for Case 1.]
Case 2: $u(t) = -0.32\,y(t-1)$
[Figure: estimates â and b̂ for Case 2.]
Forgetting Factor
Recall
$$T \approx \frac{h}{1-\lambda}, \qquad N = \frac{2}{1-\lambda}$$
[Figure: estimates â and b̂ versus time (0–1000) in four panels (a)–(d).]
Parameters: $\lambda = 1$, $\lambda = 0.999$, $\lambda = 0.99$, $\lambda = 0.95$.
Colored Noise
Process model
$$y(t) - 0.8y(t-1) = 0.5u(t-1) + e(t) - 0.5e(t-1)$$
Model used in the estimator
$$y(t) + a y(t-1) = b u(t-1) + e(t)$$
[Figure: estimates versus time (0–1000); panel (a) shows â and b̂, panel (b) shows â, b̂, and ĉ.]
Real-time Parameter Estimation
1. Introduction
2. Least squares and regression
3. Dynamical systems
4. Experimental conditions
5. Examples
6. Conclusions
Conclusions
What you should remember:
• The least squares method
• The normal equations
• The recursive equations
• The matrix inversion lemma
What you should master:
• The recursive equations
• The role of excitation
• An intuitive understanding
Role in adaptive control:
• Recursive estimation is a key part of adaptive control
• Recursive least squares is a useful method
© K. J. Åström and B. Wittenmark