Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS
by A. Megretski
Lecture 3: Continuous Dependence On Parameters
(Version of September 12, 2003)
Arguments based on continuity of functions are common in dynamical system analysis.
They rarely apply to quantitative statements, instead being used mostly for proofs of
existence of certain objects (equilibria, open or closed invariant sets, etc.). Alternatively,
continuity arguments can be used to show that certain qualitative conditions cannot be
satisfied for a class of systems.
3.1 Uniqueness Of Solutions
In this section our main objective is to establish sufficient conditions under which solutions
of an ODE with given initial conditions are unique.
3.1.1 A counterexample
Continuity of the function $a : \mathbb{R}^n \to \mathbb{R}^n$ on the right side of the ODE
$$\dot x(t) = a(x(t)), \qquad x(t_0) = \bar x_0, \tag{3.1}$$
does not guarantee uniqueness of solutions.
Example 3.1 The ODE
$$\dot x(t) = 3|x(t)|^{2/3}, \qquad x(0) = 0,$$
has solutions $x(t) \equiv 0$ and $x(t) \equiv t^3$ (actually, there are infinitely many solutions in this
case).
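To see the non-uniqueness concretely, here is a minimal numerical sketch (an illustration, not part of the original notes): it checks that both candidate solutions satisfy the ODE, and shows that a simple forward-Euler integration started exactly at $0$ stays on the zero solution, while an arbitrarily small perturbation of the initial state selects the cubic one.

```python
# Illustration only (not from the original notes): both x(t) = 0 and
# x(t) = t**3 solve x'(t) = 3*|x(t)|**(2/3) with x(0) = 0, and a forward-Euler
# integration picks one of them depending on how the initial state breaks the tie.
import numpy as np

def a(x):
    """Right-hand side of the ODE: continuous, but not Lipschitz at x = 0."""
    return 3.0 * np.abs(x) ** (2.0 / 3.0)

t = np.linspace(0.0, 2.0, 2001)
assert np.allclose(3.0 * t**2, a(t**3))   # d/dt (t**3) = 3 t**2 = a(t**3)
assert a(0.0) == 0.0                      # the zero function is also a solution

def euler(x0, h=1e-3, T=2.0):
    x = x0
    for _ in range(int(T / h)):
        x = x + h * a(x)
    return x

print(euler(0.0))    # 0.0: stays on the solution x(t) = 0
print(euler(1e-12))  # roughly t**3 at t = 2 (about 8): a tiny perturbation
                     # of the initial state follows a different solution branch
```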
3.1.2 A general uniqueness theorem
The key issue for uniqueness of solutions turns out to be the maximal slope of $a = a(x)$:
to guarantee uniqueness on a time interval $T = [t_0, t_f]$, it is sufficient to require the existence
of a constant $M$ such that
$$|a(\bar x_1) - a(\bar x_2)| \le M\,|\bar x_1 - \bar x_2|$$
for all $\bar x_1, \bar x_2$ from a neighborhood of a solution $x : [t_0, t_f] \to \mathbb{R}^n$ of (3.1). The proof of both
existence and uniqueness is so simple in this case that we will formulate the statement
for a much more general class of integral equations.
Theorem 3.1 Let $X$ be a subset of $\mathbb{R}^n$ containing a ball
$$B_r(\bar x_0) = \{\bar x \in \mathbb{R}^n : |\bar x - \bar x_0| \le r\}$$
of radius $r > 0$, and let $t_1 > t_0$ be real numbers. Assume that the function $a : X \times [t_0, t_1] \times [t_0, t_1] \to \mathbb{R}^n$ is such that there exist constants $M, K$ satisfying
$$|a(\bar x_1, \tau, t) - a(\bar x_2, \tau, t)| \le K\,|\bar x_1 - \bar x_2| \quad \forall\ \bar x_1, \bar x_2 \in B_r(\bar x_0),\ t_0 \le \tau \le t \le t_1, \tag{3.2}$$
and
$$|a(\bar x, \tau, t)| \le M \quad \forall\ \bar x \in B_r(\bar x_0),\ t_0 \le \tau \le t \le t_1. \tag{3.3}$$
Then, for a sufficiently small $t_f > t_0$, there exists a unique function $x : [t_0, t_f] \to X$
satisfying
$$x(t) = \bar x_0 + \int_{t_0}^{t} a(x(\tau), \tau, t)\, d\tau \quad \forall\ t \in [t_0, t_f]. \tag{3.4}$$
A proof of the theorem is given in the next section. When $a$ does not depend on the
third argument, we have the standard ODE case
$$\dot x(t) = a(x(t), t).$$
In general, Theorem 3.1 covers a variety of nonlinear systems with an infinite dimensional
state space, such as feedback interconnections of convolution operators and memoryless
nonlinear transformations. For example, to prove well-posedness of a feedback system in
which the forward loop is an LTI system with input $v$, output $w$, and transfer function
$$G(s) = \frac{e^{-s} - 1}{s},$$
and the feedback loop is defined by $v(t) = \sin(w(t))$, one can apply Theorem 3.1 with
$$a(\bar x, \tau, t) = \begin{cases} \sin(\bar x) + h(t), & t - 1 \le \tau \le t, \\ h(t), & \text{otherwise}, \end{cases}$$
where $h = h(t)$ is a given continuous function depending on the initial conditions.
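For illustration only (this simulation is not in the original notes; the discretization, the zero input history on $[-1, 0]$, and the sign convention are my assumptions), the closed loop above can be simulated directly from its sliding-window integral form. Up to the sign in $G$, the forward loop maps $v$ to $w(t) = \int_{t-1}^{t} v(\tau)\, d\tau$, so with zero history the loop satisfies $w(t) = \int_{\max(t-1,\,0)}^{t} \sin(w(\tau))\, d\tau$.

```python
# A small simulation sketch of the feedback example (illustration only).
# The forward loop is taken to be the sliding-window integrator
#     w(t) = integral_{t-1}^{t} v(tau) dtau
# (the impulse response of (1 - e^{-s})/s; the sign in G only flips w), and the
# feedback loop is v(t) = sin(w(t)). With zero input history on [-1, 0], the
# closed loop satisfies w(t) = integral_{max(t-1,0)}^{t} sin(w(tau)) dtau.
import numpy as np

dt = 1e-3
N = int(1.0 / dt)            # grid points in one window length
steps = int(5.0 / dt)        # simulate on [0, 5]
v_hist = np.zeros(N)         # sin(w(tau)) over the most recent unit window
w = np.zeros(steps + 1)      # w[0] = 0 because the history is zero

for n in range(steps):
    v_now = np.sin(w[n])
    # w(t+dt) - w(t) = (integral over [t, t+dt]) - (integral over [t-1, t-1+dt])
    w[n + 1] = w[n] + dt * (v_now - v_hist[0])
    v_hist = np.append(v_hist[1:], v_now)   # slide the window forward

print(w[::N])   # samples of w at t = 0, 1, ..., 5; the response stays bounded (|w| <= 1)
```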
3.1.3 Proof of Theorem 3.1.
First we prove existence. Choose $t_f \in (t_0, t_1]$ such that $t_f - t_0 \le r/M$ and $t_f - t_0 \le 1/(2K)$.
Define functions $x_k : [t_0, t_f] \to X$ by
$$x_0(t) \equiv \bar x_0, \qquad x_{k+1}(t) = \bar x_0 + \int_{t_0}^{t} a(x_k(\tau), \tau, t)\, d\tau.$$
By (3.3) and by $t_f - t_0 \le r/M$ we have $x_k(t) \in B_r(\bar x_0)$ for all $t \in [t_0, t_f]$. Hence by (3.2)
and by $t_f - t_0 \le 1/(2K)$ we have
$$
|x_{k+1}(t) - x_k(t)| \le \int_{t_0}^{t} |a(x_k(\tau), \tau, t) - a(x_{k-1}(\tau), \tau, t)|\, d\tau
\le \int_{t_0}^{t} K\,|x_k(\tau) - x_{k-1}(\tau)|\, d\tau
\le 0.5 \max_{t \in [t_0, t_f]} \{|x_k(t) - x_{k-1}(t)|\}.
$$
Therefore one can conclude that
$$\max_{t \in [t_0, t_f]} \{|x_{k+1}(t) - x_k(t)|\} \le 0.5 \max_{t \in [t_0, t_f]} \{|x_k(t) - x_{k-1}(t)|\}.$$
Hence $x_k(t)$ converges exponentially to a limit $x(t)$ which, due to continuity of $a$ with
respect to the first argument, is the desired solution of (3.4).
Now let us prove uniqueness. Note that, due to $t_f - t_0 \le r/M$, all solutions of (3.4)
must satisfy $x(t) \in B_r(\bar x_0)$ for $t \in [t_0, t_f]$. If $x_a$ and $x_b$ are two such solutions then
$$
|x_a(t) - x_b(t)| \le \int_{t_0}^{t} |a(x_a(\tau), \tau, t) - a(x_b(\tau), \tau, t)|\, d\tau
\le \int_{t_0}^{t} K\,|x_a(\tau) - x_b(\tau)|\, d\tau
\le 0.5 \max_{t \in [t_0, t_f]} \{|x_a(t) - x_b(t)|\},
$$
which immediately implies
$$\max_{t \in [t_0, t_f]} \{|x_a(t) - x_b(t)|\} = 0.$$
The proof is complete now. Note that the same proof applies when (3.2), (3.3) are
replaced by the weaker conditions
$$|a(\bar x_1, \tau, t) - a(\bar x_2, \tau, t)| \le K(\tau)\,|\bar x_1 - \bar x_2| \quad \forall\ \bar x_1, \bar x_2 \in B_r(\bar x_0),\ t_0 \le \tau \le t \le t_1,$$
and
$$|a(\bar x, \tau, t)| \le M(\tau) \quad \forall\ \bar x \in B_r(\bar x_0),\ t_0 \le \tau \le t \le t_1,$$
where the functions $K(\cdot)$ and $M(\cdot)$ are integrable over $[t_0, t_1]$.
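The successive-approximation construction used in the existence part of the proof is easy to try numerically. The following sketch (an illustration, not from the notes) applies the iterates $x_{k+1}(t) = \bar x_0 + \int_{t_0}^{t} a(x_k(\tau), \tau, t)\, d\tau$ to the scalar ODE $\dot x = x$, $x(0) = 1$, on an interval short enough that $t_f - t_0 \le 1/(2K)$ with $K = 1$; the iterates are the Taylor partial sums of $e^t$ and the successive gaps contract as the proof predicts.

```python
# Illustration only (not from the notes): the successive approximations from
# the proof, applied to x'(t) = x(t), x(0) = 1, i.e. a(xbar, tau, t) = xbar
# in (3.4). With K = 1 and t_f - t_0 = 0.4 <= 1/(2K), the proof guarantees
# that successive iterates contract by at least a factor of 0.5.
import numpy as np

t = np.linspace(0.0, 0.4, 401)
dt = t[1] - t[0]
x = np.ones_like(t)                       # x_0(t) = xbar_0 = 1

for k in range(8):
    # x_{k+1}(t) = xbar_0 + integral_0^t x_k(tau) dtau (trapezoidal quadrature)
    integral = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * 0.5) * dt))
    x_next = 1.0 + integral
    print(k, np.max(np.abs(x_next - x)))  # gaps shrink (faster than 0.5**k here)
    x = x_next

print(np.max(np.abs(x - np.exp(t))))      # the limit is the solution exp(t)
```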
3.2 Continuous Dependence On Parameters
In this section our main objective is to establish sufficient conditions under which solutions
of an ODE depend continuously on initial conditions and other parameters.
Consider the parameterized integral equation
$$x(t, q) = \bar x_0(q) + \int_{t_0}^{t} a(x(\tau, q), \tau, t, q)\, d\tau, \qquad t \in [t_0, t_1], \tag{3.5}$$
where $q \in \mathbb{R}$ is a parameter. For every fixed value of $q$, integral equation (3.5) has the
form of (3.4).
Theorem 3.2 Let $x_0 : [t_0, t_f] \to \mathbb{R}^n$ be a solution of (3.5) with $q = q_0$. For some $d > 0$
let
$$X_d = \{\bar x \in \mathbb{R}^n :\ \exists\ t \in [t_0, t_f] :\ |\bar x - x_0(t)| < d\}$$
be the $d$-neighborhood of the solution. Assume that

(a) there exists $K \in \mathbb{R}$ such that
$$|a(\bar x_1, \tau, t, q) - a(\bar x_2, \tau, t, q)| \le K\,|\bar x_1 - \bar x_2| \quad \forall\ \bar x_1, \bar x_2 \in X_d,\ t_0 \le \tau \le t \le t_f,\ q \in (q_0 - d, q_0 + d); \tag{3.6}$$

(b) there exists $M \in \mathbb{R}$ such that
$$|a(\bar x, \tau, t, q)| \le M \quad \forall\ \bar x \in X_d,\ t_0 \le \tau \le t \le t_f,\ q \in (q_0 - d, q_0 + d); \tag{3.7}$$

(c) for every $\epsilon > 0$ there exists $\delta > 0$ such that
$$|\bar x_0(q_1) - \bar x_0(q_2)| \le \epsilon \quad \forall\ q_1, q_2 \in (q_0 - d, q_0 + d) :\ |q_1 - q_2| < \delta,$$
$$|a(\bar x, \tau, t, q_1) - a(\bar x, \tau, t, q_2)| \le \epsilon \quad \forall\ q_1, q_2 \in (q_0 - d, q_0 + d) :\ |q_1 - q_2| < \delta,\ \bar x \in X_d.$$

Then there exists $d_1 \in (0, d)$ such that the solution $x(t, q)$ of (3.5) is continuous on
$\{(t, q) :\ t \in [t_0, t_f],\ q \in (q_0 - d_1, q_0 + d_1)\}$.
Condition (a) of Theorem 3.2 is the familiar Lipschitz continuity requirement on the
dependence of $a = a(\bar x, \tau, t, q)$ on $\bar x$ in a neighborhood of the trajectory of $x_0$. Condition
(b) simply bounds $a$ uniformly. Finally, condition (c) means continuous dependence of the
equations and initial conditions on the parameter $q$.
The proof of Theorem 3.2 is similar to that of Theorem 3.1.
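As a numerical illustration of the statement (not from the notes; the particular right-hand side below is an assumed example chosen so that conditions (3.6), (3.7) and (c) hold near the nominal trajectory), the following sketch integrates $\dot x = -x + q\sin(x)$, $x(0) = 1$, for parameter values near $q_0$ and measures how far the solution moves from the nominal one; the gap shrinks as $q \to q_0$, as Theorem 3.2 predicts.

```python
# Illustration only (not from the notes): continuous dependence on a parameter
# for x'(t) = -x(t) + q*sin(x(t)), x(0) = 1. The right-hand side is Lipschitz
# in x (constant <= 1 + |q|), bounded near the nominal trajectory, and depends
# continuously on q, so Theorem 3.2 predicts max_t |x(t,q) - x(t,q0)| -> 0.
import numpy as np

def solve(q, x0=1.0, T=5.0, h=1e-3):
    """Fixed-step 4th-order Runge-Kutta integration of x' = -x + q*sin(x)."""
    f = lambda x: -x + q * np.sin(x)
    xs = [x0]
    for _ in range(int(T / h)):
        x = xs[-1]
        k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
        xs.append(x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4))
    return np.array(xs)

q0 = 0.5
x_nom = solve(q0)
for dq in (1e-1, 1e-2, 1e-3, 1e-4):
    gap = np.max(np.abs(solve(q0 + dq) - x_nom))
    print(dq, gap)   # the gap shrinks roughly in proportion to dq
```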
3.3 Implications of continuous dependence on parameters
This section contains some examples showing how the general continuous dependence
of solutions on parameters allows one to derive qualitative statements about nonlinear
systems.
3.3.1 Differential flow
Consider a time-invariant autonomous ODE
$$\dot x(t) = a(x(t)), \tag{3.8}$$
where $a : \mathbb{R}^n \to \mathbb{R}^n$ satisfies the Lipschitz constraint
$$|a(\bar x_1) - a(\bar x_2)| \le M\,|\bar x_1 - \bar x_2| \tag{3.9}$$
on every bounded subset of $\mathbb{R}^n$. According to Theorem 3.1, this implies existence and
uniqueness of a maximal solution $x : (t_-, t_+) \to \mathbb{R}^n$ of (3.8) subject to given initial
conditions $x(t_0) = \bar x_0$ (by this definition, $t_- < t_0 < t_+$, and it is possible that $t_- = -\infty$
and/or $t_+ = \infty$). To specify the dependence of this solution on the initial conditions,
we will write $x(t) = x(t, t_0, \bar x_0)$. Due to the time-invariance of (3.8), this notation can
be further simplified to $x(t) = x(t - t_0, \bar x_0)$, where $x(t, \bar x)$ means "the value $x(t)$ of the
solution of (3.8) with initial condition $x(0) = \bar x$". Remember that this definition makes
sense only when uniqueness of solutions is guaranteed, and that $x(t, \bar x)$ may be undefined
when $|t|$ is large, in which case we will write $x(t, \bar x) = \emptyset$.
According to Theorem 3.2, $x : \Omega \to \mathbb{R}^n$ is a continuous function defined on an open
subset $\Omega \subset \mathbb{R} \times \mathbb{R}^n$. With $\bar x$ considered a parameter, $t \mapsto x(t, \bar x)$ defines a family of
smooth curves in $\mathbb{R}^n$. When $t$ is fixed, $\bar x \mapsto x(t, \bar x)$ defines a continuous map from an open
subset of $\mathbb{R}^n$ into $\mathbb{R}^n$. Note that $x(t_1, x(t_2, \bar x)) = x(t_1 + t_2, \bar x)$ whenever
$x(t_2, \bar x) \ne \emptyset$. The function $x : \Omega \to \mathbb{R}^n$ is sometimes called the "differential flow" defined
by (3.8).
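The flow property $x(t_1, x(t_2, \bar x)) = x(t_1 + t_2, \bar x)$ can be checked numerically; below is a small sketch (illustration only, not from the notes; the damped-pendulum right-hand side is an assumed example satisfying (3.9) on bounded sets). It approximates $x(t, \bar x)$ with a fixed-step Runge-Kutta integrator and compares the composed and direct flows.

```python
# Illustration only: numerical check of x(t1, x(t2, xbar)) = x(t1 + t2, xbar)
# for the time-invariant system x1' = x2, x2' = -sin(x1) - 0.1*x2 (a damped
# pendulum), whose right-hand side is Lipschitz on bounded sets as in (3.9).
import numpy as np

def a(x):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])

def flow(t, xbar, h=1e-4):
    """Approximate x(t, xbar) by fixed-step RK4 integration of (3.8)."""
    x = np.array(xbar, dtype=float)
    for _ in range(int(round(t / h))):
        k1 = a(x); k2 = a(x + 0.5*h*k1); k3 = a(x + 0.5*h*k2); k4 = a(x + h*k3)
        x = x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return x

xbar = [2.0, 0.0]
lhs = flow(1.5, flow(2.5, xbar))   # x(1.5, x(2.5, xbar))
rhs = flow(4.0, xbar)              # x(1.5 + 2.5, xbar)
# For the fixed-step scheme the two computations perform the same steps, so the
# difference is essentially zero; for the exact flow the identity follows from
# time invariance and uniqueness of solutions.
print(lhs, rhs, np.max(np.abs(lhs - rhs)))
```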
3.3.2 Attractors of asymptotically stable equilibria
A point $\bar x_0 \in \mathbb{R}^n$ is called an equilibrium of (3.8) when $a(\bar x_0) = 0$, i.e. $x(t, \bar x_0) \equiv \bar x_0$ is a
constant solution of (3.8).
Definition An equilibrium $\bar x_0$ of (3.8) is called asymptotically stable if the following two
conditions are satisfied:

(a) there exists $d > 0$ such that $x(t, \bar x) \to \bar x_0$ as $t \to \infty$ for all $\bar x$ satisfying $|\bar x - \bar x_0| < d$;

(b) for every $\epsilon > 0$ there exists $\delta > 0$ such that $|x(t, \bar x) - \bar x_0| < \epsilon$ whenever $t \ge 0$ and
$|\bar x - \bar x_0| < \delta$.
In other words, all solutions starting sufficiently close to an asymptotically stable
equilibrium $\bar x_0$ converge to it as $t \to \infty$, and no such solution can escape far away
before finally converging to $\bar x_0$.
Theorem 3.3 Let $\bar x_0 \in \mathbb{R}^n$ be an asymptotically stable equilibrium of (3.8). The set
$A = A(\bar x_0)$ of all $\bar x \in \mathbb{R}^n$ such that $x(t, \bar x) \to \bar x_0$ as $t \to \infty$ is an open subset of $\mathbb{R}^n$, and
its boundary is invariant under the transformations $\bar x \mapsto x(t, \bar x)$.
The proof of the theorem follows easily from the continuity of x(·, ·).
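The content of Theorem 3.3 can be visualized on a simple assumed example (a sketch, not from the notes): for $\dot x = x(1 - x^2)$ the equilibrium $\bar x_0 = 1$ is asymptotically stable, its attractor is the open set $A(1) = (0, +\infty)$, and the boundary point $0$ is itself an equilibrium, hence invariant under $\bar x \mapsto x(t, \bar x)$.

```python
# Illustration only (not from the notes): for x' = x*(1 - x**2) the equilibrium
# xbar0 = 1 is asymptotically stable, A(1) = (0, +inf) is open, and the
# boundary point 0 is an equilibrium (so it is invariant under the flow).
import numpy as np

def a(x):
    return x * (1.0 - x**2)

def flow(t, xbar, h=1e-3):
    x = float(xbar)
    for _ in range(int(t / h)):
        x = x + h * a(x)          # forward Euler is enough for a sketch
    return x

for xbar in (-0.5, -1e-3, 0.0, 1e-3, 0.5, 1.0, 3.0):
    # prints roughly -1 for xbar < 0, 0 for xbar = 0, +1 for xbar > 0,
    # so every strictly positive initial state is attracted to xbar0 = 1
    print(xbar, flow(30.0, xbar))
```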
3.3.3 Limit points of a trajectory
For a fixed $\bar x_0 \in \mathbb{R}^n$, the set of all possible limits $\bar x$ of sequences $x(t_k, \bar x_0) \to \bar x$ as $k \to \infty$, where
the sequence $\{t_k\}$ also converges to infinity, is called the limit set of the "trajectory"
$t \mapsto x(t, \bar x_0)$.
Theorem 3.4 The limit set of a given trajectory is always closed and invariant under
the transformations $\bar x \mapsto x(t, \bar x)$.
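As an assumed illustration (not from the notes), the Van der Pol oscillator below has trajectories that approach a limit cycle, so the limit set of $t \mapsto x(t, \bar x_0)$ is that closed curve, which is invariant under $\bar x \mapsto x(t, \bar x)$, consistent with Theorem 3.4.

```python
# Illustration only (not from the notes): for the Van der Pol oscillator
#     x1' = x2,  x2' = (1 - x1**2)*x2 - x1,
# trajectories starting away from the origin approach a limit cycle; sampling
# the trajectory at times t_k -> infinity therefore produces points that
# cluster on that closed, invariant curve.
import numpy as np

def a(x):
    return np.array([x[1], (1.0 - x[0]**2) * x[1] - x[0]])

def flow(t, xbar, h=1e-3):
    x = np.array(xbar, dtype=float)
    for _ in range(int(round(t / h))):
        k1 = a(x); k2 = a(x + 0.5*h*k1); k3 = a(x + 0.5*h*k2); k4 = a(x + h*k3)
        x = x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return x

p = flow(50.0, [0.1, 0.0])     # integrate well past the transient
samples = []
for _ in range(10):
    p = flow(1.0, p)           # sample at t_k = 51, 52, ..., 60
    samples.append(p.copy())
print(np.array(samples))       # the samples stay on (near) the limit cycle
```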