Principles of Information Science
Chapter 8
Information Execution
Control Theory
§ 1 Fundamentals of Control Theory
[Figure: basic control loop between the Controller and the Object.]
N. Wiener: Control theory in engineering, whether it is concerned with man, animal or machine, can only be regarded as a part of the theory of information.
[Figure: control loop with goal information I(G), object/environment information I(O,E), and the control action F(S(I)) applied to the object.]
§ 2.1 Description of Controlled Object
Object Description: the states and the ways of describing them
[Figure: state-transition diagram over the states s1, s2, s3, s4.]
State-transition probability matrix:
P = ( P(11) … P(14)
        …   …   …
      P(41) … P(44) )
[Table T: columns s1, …, sN; rows t1, …, tM; with inputs x1, …, xM and outputs y1, …, yN.]
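As an illustration of this description, the sketch below (with made-up transition probabilities) simulates a four-state object governed by a transition probability matrix P:

import numpy as np

# Hypothetical example: a controlled object with four states s1..s4,
# described by a state-transition probability matrix P, where P[i, j]
# is the probability of moving from state s(i+1) to state s(j+1).
P = np.array([
    [0.7, 0.2, 0.1, 0.0],
    [0.1, 0.6, 0.2, 0.1],
    [0.0, 0.3, 0.5, 0.2],
    [0.1, 0.0, 0.2, 0.7],
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution

rng = np.random.default_rng(0)
state = 0                                # start in s1
trajectory = [state]
for _ in range(10):                      # simulate 10 state transitions
    state = rng.choice(4, p=P[state])
    trajectory.append(state)
print("visited states:", [f"s{i + 1}" for i in trajectory])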
§ 2.2 Description of Goal and Effect
The initial condition s of the controlled object and the final condition g are two states of the object in the state space of the control problem. A path connecting the states from s to g is one of the possible solutions of the control problem.
In an N-dimensional state space, the states and the control effect can be described as
s = {s1, …, sN},  g = {g1, …, gN}
ε = [ Σ_n (g_n – g'_n)² ]^(1/2)
where g' = {g'1, …, g'N} is the state actually reached (the control effect).
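As a small numerical illustration (the values are invented), the deviation between the goal g and the state actually reached can be computed directly from the formula above:

import numpy as np

# Illustrative values only: the deviation between the goal state g and the
# state g_prime actually reached, per the formula above.
g       = np.array([1.0, 0.5, -2.0])    # goal state g = {g1, ..., gN}
g_prime = np.array([0.9, 0.6, -1.8])    # state actually reached (control effect)

error = np.sqrt(np.sum((g - g_prime) ** 2))   # same as np.linalg.norm(g - g_prime)
print(f"control error = {error:.4f}")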
§ 3 The Mechanism of Control
Mechanism of Control: from Information to Action
[Figure: information → strategy → execution → action, applied to the object.]
§ 3.1 The Categories of Control
Open-Loop & Closed-Loop
[Figures: open-loop control (controlling → execution → object, disturbed by noise) and closed-loop control (the effect is fed back from the object and compared with the goal).]
§ 3.2 Control Strategy Producing
Mathematical Programming: Strategy Producing
X – an N-dimensional column vector of system states
f(X) – the dependence relationship between the goal and the system states
g(X) – the environment constraints of the system
The optimum control strategy can be produced by maximizing (minimizing) the goal function under the given constraints:
Max (Min) f(X)
subject to {g(X)}
An Example: Linear Programming
Goal function: f = 5 x1 + x2
Constraints: (1) x1 ≥ 0; (2) x2 ≥ 0; (3) x1 + x2 ≤ 6;
(4) 3 x1 + x2 ≤ 12; (5) x1 - 2 x2 ≤ 2.
Solution (graphical):
[Figure: the feasible region is the polygon OABCD bounded by the constraint lines (1) to (5).]
The maximum is reached at vertex C:
x1 = 26/7, x2 = 6/7, f = 136/7 (maximum).
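The same example can be checked numerically. The sketch below assumes NumPy and SciPy are available; scipy.optimize.linprog minimizes, so the goal function is negated:

from scipy.optimize import linprog

# Linear-programming example above, solved numerically (maximize by
# minimizing the negated goal function f = 5*x1 + x2).
c = [-5.0, -1.0]
A_ub = [[1.0,  1.0],     # x1 +  x2 <= 6
        [3.0,  1.0],     # 3x1 + x2 <= 12
        [1.0, -2.0]]     # x1 - 2x2 <= 2
b_ub = [6.0, 12.0, 2.0]
bounds = [(0, None), (0, None)]   # x1 >= 0, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("x1, x2 =", res.x)          # ~ [3.7143, 0.8571] = [26/7, 6/7]
print("f max  =", -res.fun)       # ~ 19.4286 = 136/7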
Other Approaches:
Non-Linear Programming
Integer Programming
One-dimensional Search
Higher-dimensional Search
Dynamic Programming, etc.
§ 3.3 The Stability Issue
Typical solution – the Lyapunov criteria
If the state equation of a system is
dx_n/dt = Σ_{m=1..N} a_nm x_m ,   n ∈ (1, N),
then its eigen-equation is
|A – λI| = 0.
(1) If the real parts of all roots of the eigen-equation are negative, then the undisturbed motion is always asymptotically stable.
(2) If there exists at least one root with a positive real part, then the undisturbed motion is always unstable.
(3) The direct Lyapunov criterion (see references).
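A minimal numerical check of criteria (1) and (2), using an illustrative matrix A (the values are assumptions, not from the text):

import numpy as np

# Illustrative matrix A: check the sign of the real parts of the roots of
# |A - lambda*I| = 0, i.e. the eigenvalues of A.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

eigenvalues = np.linalg.eigvals(A)
print("eigenvalues:", eigenvalues)

if np.all(eigenvalues.real < 0):
    print("all real parts negative -> asymptotically stable")
elif np.any(eigenvalues.real > 0):
    print("a root with positive real part -> unstable")
else:
    print("borderline case: use the direct Lyapunov criterion")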
§ 3.4.1 Adaptation & Learning: Terms
Terms & Definitions
If E changes while P stays within a prescribed range, S is stable.
If S changes while P stays within a prescribed range, S is reliable.
If E and the structure of S change while P is still satisfactory, S is said to be adaptive.
If E changes and P meets the requirements after T, S is said to be a learning system.
If the structure changes yet P meets the requirements after T, S is said to be a self-repairing system.
(E – environment; P – performance; S – system; T – time span.)
§ 3.4.2 Adaptation Model
[Figure: adaptation model. Blocks: Variable Controller, Object, Measure, Performance Judgment, Adaptive Algorithm. Signals: input V(t), control u(t), object response X(t), measurement m(t), performance P(t), controller parameters W(t).]
It is required that X(t) respond properly to an unknown V(t): m(t) is compared with V(t), and the controller is then adjusted on the basis of W(t), m(t) and V(t).
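A minimal sketch of such an adaptation loop. The slide does not name a particular adaptive algorithm, so an LMS-style gradient adjustment of a single controller gain is assumed, and the object is modelled as an unknown static gain:

import numpy as np

# Assumed setup: the object is an unknown static gain, and the adjustable
# controller has one parameter W(t), adapted so that the measured response
# m(t) follows the unknown input V(t).
rng = np.random.default_rng(1)
plant_gain = 2.5                 # property of the object, unknown to the controller
W = 0.0                          # adjustable controller parameter W(t)
mu = 0.02                        # adaptation step size

for t in range(300):
    V = rng.uniform(-1.0, 1.0)   # unknown input V(t)
    u = W * V                    # controller output u(t)
    X = plant_gain * u           # object response X(t)
    m = X                        # measurement m(t) of X(t)
    error = V - m                # compare m(t) with V(t)
    W += mu * error * V          # adjust the controller from W(t), m(t), V(t)

print("adapted controller gain:", round(W, 3), "(ideal value 1/plant_gain = 0.4)")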
§ 4.1 Information Threshold
[Figure: control loop between Controller and Object, with signals X and Y and noise N.]
The goal of control: H(X|Y) = 0.
From I(X;Y) = H(X) – H(X|Y) and I(X;Y) = H(Y) – H(Y|X) we have
H(X|Y) = H(X) – [H(Y) – H(Y|X)]
       = H(Y|X) + H(X) – H(Y).
Since H(Y|X) ≥ 0, in order to make H(X|Y) = 0 there must be
H(Y) ≥ H(X) = H(N)
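The identity used above can be checked numerically on any joint distribution p(x, y); the distribution below is an arbitrary illustration:

import numpy as np

# Arbitrary illustrative joint distribution p(x, y); verifies the identity
# H(X|Y) = H(Y|X) + H(X) - H(Y) used above (entropies in bits).
def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_xy = np.array([[0.3, 0.1],
                 [0.1, 0.5]])            # joint distribution of (X, Y)
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

H_X, H_Y, H_XY = H(p_x), H(p_y), H(p_xy)
H_X_given_Y = H_XY - H_Y                 # chain rule: H(X,Y) = H(Y) + H(X|Y)
H_Y_given_X = H_XY - H_X

print("H(X|Y)               =", round(H_X_given_Y, 4))
print("H(Y|X) + H(X) - H(Y) =", round(H_Y_given_X + H_X - H_Y, 4))   # identical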
§ 4.2 Information Driven Search
Assume that among the M possible states in the search space, the present state is X_m0 and the goal state is X_mg.
If I(X_m(k+1)) > I(X_mk) > … > I(X_m0), the search continues; otherwise, the search stops.
The optimum solution can be obtained by the calculation
I(X_mk0) = Max_k I(X_mk)
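A minimal sketch of an information-driven search under assumed conditions (states 0 … M-1 on a line, with an illustrative information measure that peaks at the goal state): the search moves while the information increases and stops where it is maximal.

# Assumed toy search space: states 0 .. M-1 on a line, with an illustrative
# information measure I() that is largest at the goal state.
M = 50
GOAL = 37

def I(m):
    return -abs(m - GOAL)          # illustrative: more information closer to the goal

def information_driven_search(start):
    current = start
    while True:
        neighbours = [m for m in (current - 1, current + 1) if 0 <= m < M]
        best = max(neighbours, key=I)
        if I(best) > I(current):   # information still increasing: continue
            current = best
        else:                      # no further gain: the search stops
            return current         # state of maximal information

print("search stops at state:", information_driven_search(start=5))   # -> 37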
Partial Summary
Acquisition: Syntactic – S → H(X)
Transferring: Syntactic – l_ave ≥ H(X);
  R ≤ C = Max I(X;Y);
  R ≥ R(D) = Min I(X;Y);
  I(E;M) = 0
Cognition: Formal (Induction) – Features → I(C);
  Utility → I(U);
  Content → I(T)
Decision-Making: a(k′) ← Max_k I(a_k)
Execution: H(Y) ≥ H(X) = H(N);
  I(X_mk0) = Max_k I(X_mk)