Stochastic Self-Tuners
1. Introduction
2. Minimum variance control
3. Estimation of noise models
4. Stochastic self-tuners
5. Feedforward control
6. Predictive control
7. Conclusions
Introduction
Same idea as before
[Block diagram: the self-tuning regulator. An estimator determines the process parameters from the process input and output; a controller-design block uses them, together with the specification, to update the controller parameters; the controller acts on the process, driven by the reference signal.]
But now use
• Design based on stochastic control theory
• Some very interesting results
• Here is where it started
Minimum Variance and Moving Average Control
• Motivation
• An Example
• The General Case
An Example
Process dynamics
y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)
If the parameters are known, the control law is
u(t) = -θ y(t) = -((c - a)/b) y(t)
The output then becomes
y(t) = e(t)
Notice
• Output is white noise
• Prediction is very simple with this model
• Innovations representation
• Importance of |c| < 1
© K. J. Åström and B. Wittenmark
The Model
Process dynamics
x(t) = (B1(q)/A1(q)) u(t)
Disturbances
v(t) = (C1(q)/A2(q)) e(t)
Output
y(t) = x(t) + v(t) = (B1(q)/A1(q)) u(t) + (C1(q)/A2(q)) e(t)
We can write this as
A(q) y(t) = B(q) u(t) + C(q) e(t)
The standard model!
The C-polynomial
Example
C(z) = z + 2
Spectral density
Φ(e^{iωh}) = (1/2π) C(e^{iωh}) C(e^{-iωh})
But
C(z) C(z^-1) = (z + 2)(z^-1 + 2) = 4 (z + 0.5)(z^-1 + 0.5)
The disturbance
y(t) = e(t) + 2 e(t-1)
with E e² = 1 can thus be represented as
y(t) = ε(t) + 0.5 ε(t-1)
with E ε² = 4
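The equivalence of the two representations can be checked on their covariances (a quick numerical sketch):

```python
import numpy as np

# Two moving-average representations of the same disturbance:
#   y(t) = e(t) + 2 e(t-1),     Var e = 1
#   y(t) = eps(t) + 0.5 eps(t-1), Var eps = 4
r0_a = 1.0 * (1 + 2**2)        # variance of the first form: 5
r1_a = 1.0 * 2                 # lag-1 covariance: 2
r0_b = 4.0 * (1 + 0.5**2)      # variance of the second form: 5
r1_b = 4.0 * 0.5               # lag-1 covariance: 2
print(r0_a, r1_a, r0_b, r1_b)

# Same check by simulation
rng = np.random.default_rng(1)
e = rng.standard_normal(100000)
ya = e[1:] + 2 * e[:-1]
eps = 2 * rng.standard_normal(100000)   # Var eps = 4
yb = eps[1:] + 0.5 * eps[:-1]
print(np.var(ya), np.var(yb))           # both close to 5
```

Both processes have identical second-order statistics; only the second has the stable (invertible) C-polynomial.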
The General Case
• Process model
  A(q) y(k) = B(q) u(k) + C(q) e(k)
  deg A - deg B = d, deg C = n
  C stable
  SISO, innovations model
• Design criterion: minimize
  E(y² + ρ u²)
  under the condition that the closed-loop system is stable
• May assume any causal nonlinear controller
Prediction
• Model C stable
  y(k+m) = (C(q)/A(q)) e(k+m) = (C*(q^-1)/A*(q^-1)) e(k+m)
         = F*(q^-1) e(k+m) + q^-m (G*(q^-1)/A*(q^-1)) e(k+m)
• Predictor
  ŷ(k+m | k) = (G*(q^-1)/C*(q^-1)) y(k) = (q G(q)/C(q)) y(k)
• Prediction error
  ỹ(k+m | k) = F*(q^-1) e(k+m)
• Optimal predictor dynamics C(q)
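The split C* = A* F* + q^-m G* is ordinary polynomial long division: F* collects the first m terms of the power series of C*/A* in q^-1, and G* is the remainder. A sketch (the helper name and the first-order example values are assumptions):

```python
import numpy as np

# Solve C*(q^-1) = A*(q^-1) F*(q^-1) + q^-m G*(q^-1) by long division.
# Coefficient lists are in ascending powers of q^-1.
def predictor_polys(c, a, m):
    c = np.asarray(c, float)
    a = np.asarray(a, float)
    f = np.zeros(m)
    rem = np.zeros(max(len(c), m + len(a)))
    rem[:len(c)] = c
    for i in range(m):
        f[i] = rem[i] / a[0]               # next power-series coefficient
        rem[i:i + len(a)] -= f[i] * a      # subtract f_i * A* shifted by q^-i
    g = rem[m:m + len(a) - 1]              # remainder: deg G* = deg A* - 1
    return f, g

# Example: A* = 1 - 0.9 q^-1, C* = 1 - 0.3 q^-1, one-step prediction (m = 1)
f, g = predictor_polys([1.0, -0.3], [1.0, -0.9], 1)
print(f)   # F* = 1
print(g)   # G* = c - a = 0.6
```

For m = 1 this reproduces the first-order example: the predictor is ŷ(k+1 | k) = (G*/C*) y(k) with G* = c - a.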
Minimum variance control = Prediction
[Figure: output y(t) and input u(t) known up to time t; the input u(t) = ? is to be chosen so that the predicted output ŷ(t + d0 | t) is driven to zero at time t + d0.]
Choose d0 = d and u(k) such that
ŷ(k+d | k) = 0!
Minimum Variance Control
System with stable inverse
y(k) = (B(q)/A(q)) u(k) + (C(q)/A(q)) e(k)
     = (B*(q^-1)/A*(q^-1)) q^-d u(k) + (C*(q^-1)/A*(q^-1)) e(k)
Predict the output
y(k+d) = (C*(q^-1)/A*(q^-1)) e(k+d) + (B*(q^-1)/A*(q^-1)) u(k)
       = F*(q^-1) e(k+d) + (G*(q^-1)/A*(q^-1)) e(k) + (B*(q^-1)/A*(q^-1)) u(k)
Compute old innovations
e(k) = (A*/C*) y(k) - q^-d (B*/C*) u(k)
Minimum Variance Control Cont'd
y(k+d) = F* e(k+d) + (G*/C*) y(k) - q^-d (B*G*/(A*C*)) u(k) + (B*/A*) u(k)
       = F* e(k+d) + (G*/C*) y(k) + (B*F*/C*) u(k)
u(k) is a function of y(k), y(k-1), ... and u(k-1), u(k-2), ... Then
E y²(k+d) = E(F* e(k+d))² + E((G*/C*) y(k) + (B*F*/C*) u(k))²
It follows that
E y²(k+d) ≥ (1 + f1² + ··· + f_{d-1}²) σ²
Equality is obtained for
u(k) = -(G*(q^-1)/(B*(q^-1) F*(q^-1))) y(k) = -(G(q)/(B(q) F(q))) y(k)
the minimum variance controller.
Pole Placement Interpretation
Controller
u = -(G*/(B*F*)) y
Process
A* y = q^-d B* u + C* e
Closed loop system
(A*B*F* + q^-d B*G*) y = B*F*C* e
Hence
R* = B*F*,  S* = G*
We have
C*(z^-1)/A*(z^-1) = F*(z^-1) + z^-m G*(z^-1)/A*(z^-1)
We can also write this as
A*(z^-1) F*(z^-1) + z^-m G*(z^-1) = C*(z^-1)
where deg F* = n-1. Hence
A(z) F(z) + z^{n-m} G(z) = z^{n-1} C(z)
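The reciprocal-polynomial identity above is a coefficient convolution, so it is easy to verify numerically. A sketch for an assumed first-order case (F* and G* below are the solution of the prediction identity for this example, stated here rather than derived):

```python
import numpy as np

# Check A*(z^-1) F*(z^-1) + z^-m G*(z^-1) = C*(z^-1) for
# A* = 1 - 0.9 z^-1, C* = 1 - 0.3 z^-1, m = 2,
# with F* = 1 + 0.6 z^-1 and G* = 0.54.
a_star = np.array([1.0, -0.9])
c_star = np.array([1.0, -0.3])
f_star = np.array([1.0, 0.6])
g_star = np.array([0.54])
m = 2

lhs = np.convolve(a_star, f_star)          # A* F*, ascending powers of z^-1
lhs = np.pad(lhs, (0, max(0, m + len(g_star) - len(lhs))))
lhs[m:m + len(g_star)] += g_star           # add z^-m G*
print(lhs)                                 # equals C* padded with zeros
```

The same check, multiplied through by z^{n+m-1}, gives the forward-operator form of the identity.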
Minimum Variance Self-tuners
• Simple in principle
• How to estimate? A y = B u + C e
• Cheating!!
• An Example
• Surprised!!
• A simple case
• A general result
Adaptive Control
Process dynamics
y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)
Estimate parameters in the model
y(t+1) = θ y(t) + u(t)
The least squares estimate is
θ̂(t) = (Σ_{k=0}^{t-1} y(k) (y(k+1) - u(k))) / (Σ_{k=0}^{t-1} y²(k))
Control law
u(t) = -θ̂(t) y(t)
How to Estimate Noise Models
An example
y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)
A regression model
y(t+1) = -a y(t) + b u(t) + c e(t) + e(t+1)
We do not know e(t), but we can approximate it with the residual ε(t). Hence
θ = (a, b, c)
φ(t) = (-y(t), u(t), ε(t))
ε(t) = y(t) - φᵀ(t-1) θ(t-1)
θ(t) = θ(t-1) + K(t) ε(t)
α = λ + φᵀ(t-1) P(t-1) φ(t-1)
K(t) = P(t-1) φ(t-1)/α
P(t) = (I - K(t) φᵀ(t-1)) P(t-1)
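The recursion above is extended least squares: the unknown innovation in the regressor is replaced by the residual. A runnable sketch under assumed conditions (open-loop white-noise input, forgetting factor λ = 1, illustrative parameter values):

```python
import numpy as np

# Extended least squares for y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t).
rng = np.random.default_rng(2)
a, b, c = -0.9, 3.0, -0.3
lam = 1.0                                      # forgetting factor (assumed 1)
N = 5000

e = rng.standard_normal(N + 1)
u = rng.standard_normal(N)                     # exciting open-loop input
y = np.zeros(N + 1)
for t in range(N):
    y[t + 1] = -a * y[t] + b * u[t] + e[t + 1] + c * e[t]

theta = np.zeros(3)                            # estimates of (a, b, c)
P = 100.0 * np.eye(3)
eps_prev = 0.0
for t in range(1, N + 1):
    phi = np.array([-y[t - 1], u[t - 1], eps_prev])
    eps = y[t] - phi @ theta                   # residual eps(t)
    alpha = lam + phi @ P @ phi
    K = P @ phi / alpha
    theta = theta + K * eps
    P = (np.eye(3) - np.outer(K, phi)) @ P / lam
    eps_prev = eps

print(theta)    # approaches (a, b, c) = (-0.9, 3, -0.3)
```

Replacing e(t) by ε(t) makes the regression pseudo-linear; convergence then depends on a positive-realness condition on C, which this small |c| satisfies.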
An Example
Consider
y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)
with a = -0.9, b = 3, and c = -0.3. The minimum variance controller is u(t) = -0.2 y(t). Initial estimates: â(0) = ĉ(0) = 0 and b̂(0) = 1.
[Figure: output y and input u over 0-500 time units, and accumulated loss over time for self-tuning control versus minimum variance control.]
A Direct Self-tuner
Process
y(t+1) + a y(t) = b u(t) + e(t+1) + c e(t)
Parameters: a = -0.9, b = 3, and c = -0.3
Direct self-tuner based on
y(t+1) = r0 u(t) + s0 y(t)
with fixed r0 = 1. Control law
u(t) = -(ŝ0/r̂0) y(t)
[Figure: the ratio ŝ0/r̂0 over 0-500 time units, and accumulated loss over time for self-tuning control versus minimum variance control.]
Explanation
Explain the surprising result
θ̂(t) = (Σ_{k=0}^{t-1} y(k) (y(k+1) - u(k))) / (Σ_{k=0}^{t-1} y²(k))
Control law
u(t) = -θ̂(t) y(t)
Properties
(1/t) Σ_{k=0}^{t-1} y(k+1) y(k) = (1/t) Σ_{k=0}^{t-1} (θ̂(t) y²(k) + u(k) y(k))
                                = (1/t) Σ_{k=0}^{t-1} (θ̂(t) - θ̂(k)) y²(k)
r̂_y(1) = lim_{t→∞} (1/t) Σ_{k=0}^{t-1} y(k+1) y(k) = 0
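The argument can be tried out in simulation (a sketch, not the authors' code; the regularized start of the denominator sum is an assumption): the simple self-tuner drives the lag-1 covariance of the output toward zero, even though the estimated model ignores both the noise structure and the true gain b.

```python
import numpy as np

# Simple self-tuner: estimate theta in y(t+1) = theta y(t) + u(t),
# use u(t) = -theta_hat(t) y(t); true process has b = 3, c = -0.3.
rng = np.random.default_rng(3)
a, b, c = -0.9, 3.0, -0.3
N = 50000

e = rng.standard_normal(N + 1)
y = np.zeros(N + 1)
num, den = 0.0, 10.0      # den starts positive as a small regularization
theta_hat = 0.0
for t in range(N):
    u = -theta_hat * y[t]
    y[t + 1] = -a * y[t] + b * u + e[t + 1] + c * e[t]
    num += y[t] * (y[t + 1] - u)
    den += y[t] ** 2
    theta_hat = num / den

tail = y[N // 2:]                                     # discard the transient
r1 = np.mean(tail[1:] * tail[:-1]) / np.mean(tail**2)
print(theta_hat)   # near (c - a)/b = 0.2, the minimum variance gain
print(r1)          # normalized lag-1 covariance, near 0
```

Vanishing r̂_y(1) is exactly the property of minimum variance control for this first-order example, which is why the tuner can find the right gain without estimating c.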
The Direct Self-tuner
Estimate parameters in
y(t+d) = R*(q^-1) u_f(t) + S*(q^-1) y_f(t)
R*(q^-1) = r0 + r1 q^-1 + ··· + r_k q^-k
S*(q^-1) = s0 + s1 q^-1 + ··· + s_l q^-l
u_f(t) = (Q*(q^-1)/P*(q^-1)) u(t)
y_f(t) = (Q*(q^-1)/P*(q^-1)) y(t)
with least squares. Use the control law
R*(q^-1) u(t) = -S*(q^-1) y(t)
Notice that d and the sampling period are key design parameters.
Direct Self-tuners
Use the direct self-tuner with Q* = P* = 1. The parameter r0 = b0 is either fixed or estimated.
Property 1: If the regression vectors are bounded, the closed-loop system has the properties that the time averages of
y(t+τ) y(t) vanish for τ = d, d+1, ..., d+l
y(t+τ) u(t) vanish for τ = d, d+1, ..., d+k
where k = deg R* and l = deg S*.
Property 2: If the process is described by
A(q) y(t) = B(q) u(t) + C(q) e(t)
and if min(k, l) ≥ n - 1, then the time average of
y(t+τ) y(t) vanishes for τ = d, d+1, ...
If the parameters converge, we will thus obtain moving average control!
Integrator with Time Delay
A(q) = q(q - 1)
B(q) = (h - τ) q + τ = (h - τ)(q + τ/(h - τ))
C(q) = q(q + c)
Minimum phase if τ < h/2. Controller with d = 1; τ changed from 0.4 to 0.6 at time 100.
[Figure (a): output y and input u over 0-400 time units.]
Controller with d = 2
[Figure (b): output y and input u over 0-400 time units.]
Feedforward
Easy to include feedforward!
Estimate parameters in
y(t+d) = R*(q^-1) u_f(t) + S*(q^-1) y_f(t) + S*_ff(q^-1) v_f(t)
where v_f is the filtered feedforward signal.
Control law
R̂*(q^-1) u(t) = -Ŝ*(q^-1) y(t) - Ŝ*_ff(q^-1) v(t)
Feedforward has proven very useful in applications!
Discuss why!
Command signals can also be included:
R̂*(q^-1) u(t) = T*(q^-1) u_c(t) - Ŝ*(q^-1) y(t)
where u_c is the command signal (set point, reference signal).
Command signals and feedforward can be combined.
Observations
• Indirect self-tuners require estimation of the C-polynomial
• Direct self-tuners have unexpectedly nice properties
• Self-tuners drive covariances to zero
• Compare PI control
• The number of covariances depends on the parameters
• With sufficiently many parameters we obtain moving average control
• The parameters do not necessarily converge
• Design parameters are the prediction horizon d, the sampling period, and the number of parameters in the R* and S* polynomials
• It is easy to include feedforward
• Easy to check in operation
• Performance assessment