Szidarovszky, F., Bahill, A.T. “Stability Analysis”
The Electrical Engineering Handbook
Ed. Richard C. Dorf
Boca Raton: CRC Press LLC, 2000
12
Stability Analysis
12.1 Introduction
12.2 Using the State of the System to Determine Stability
12.3 Lyapunov Stability Theory
12.4 Stability of Time-Invariant Linear Systems
Stability Analysis with State-Space Notation • The Transfer Function Approach
12.5 BIBO Stability
12.6 Physical Examples
12.1 Introduction
In this chapter, which is based on Szidarovszky and Bahill [1992], we first discuss stability in general and then
present four techniques for assessing the stability of a system: (1) Lyapunov functions, (2) finding the eigenvalues
for state-space notation, (3) finding the location in the complex frequency plane of the poles of the closed-
loop transfer function, and (4) proving bounded outputs for all bounded inputs. Proving stability with
Lyapunov functions is very general: it works for nonlinear and time-varying systems. It is also good for doing
proofs. Proving the stability of a system with Lyapunov functions is difficult, however, and failure to find a
Lyapunov function that proves a system is stable does not prove that the system is unstable. The next techniques
we present, finding the eigenvalues or the poles of the transfer function, are sometimes difficult, because they
require factoring high-degree polynomials. Many commercial software packages are now available for this task,
however. We think most engineers would benefit by having one of these computer programs. Jamshidi et al.
[1992] and advertisements in technical publications such as the IEEE Control Systems Magazine and IEEE
Spectrum describe many appropriate software packages. The last technique we present, bounded-input,
bounded-output stability, is also quite general.
Let us begin our discussion of stability and instability of systems informally. In an unstable system the state
can have large variations, and small inputs or small changes in the initial state may produce large variations in
the output. A common example of an unstable system is illustrated by someone pointing the microphone of
a public address (PA) system at a speaker; a loud high-pitched tone results. Often instabilities are caused by
too much gain, so to quiet the PA system, decrease the gain by pointing the microphone away from the speaker.
Discrete systems can also be unstable. A friend of ours once provided an example. She was sitting in a chair
reading and she got cold. So she went over and turned up the thermostat on the heater. The house warmed
up. She got hot, so she got up and turned down the thermostat. The house cooled off. She got cold and turned
up the thermostat. This process continued until someone finally suggested that she put on a sweater (reducing
the gain of her heat loss system). She did, and was much more comfortable. We modeled this as a discrete
system, because she seemed to sample the environment and produce outputs at discrete intervals about 15
minutes apart.
Ferenc Szidarovszky
University of Arizona
A. Terry Bahill
University of Arizona
© 2000 by CRC Press LLC
12.2 Using the State of the System to Determine Stability
The stability of a system is defined with respect to a given equilibrium point in state space. If the initial state x₀ is selected at an equilibrium state x̄ of the system, then the state will remain at x̄ for all future time. When the initial state is selected close to an equilibrium state, the system might remain close to the equilibrium state or it might move away. In this section we introduce conditions that guarantee that whenever the system starts near an equilibrium state, it remains near it, perhaps even converging to the equilibrium state as time increases. For simplicity, only time-invariant systems are considered in this section. Time-variant systems are discussed in Section 12.5.
Continuous, time-invariant systems have the form

    ẋ(t) = f(x(t))    (12.1)

and discrete, time-invariant systems are modeled by the difference equation

    x(t + 1) = f(x(t))    (12.2)
Here we assume that f: X → Rⁿ, where X ⊆ Rⁿ is the state space. We also assume that the function f is continuous; furthermore, for an arbitrary initial state x₀ ∈ X, there is a unique solution of the corresponding initial value problem x(t₀) = x₀, and the entire trajectory x(t) is in X. Assume furthermore that t₀ denotes the initial time period of the system.
It is also known that a vector x̄ ∈ X is an equilibrium state of the continuous system, Eq. (12.1), if and only if f(x̄) = 0, and it is an equilibrium state of the discrete system, Eq. (12.2), if and only if x̄ = f(x̄). In this chapter the equilibrium of a system will always mean the equilibrium state, if it is not specified otherwise. In analyzing the dependence of the state trajectory x(t) on the selection of the initial state x₀ near the equilibrium, the following stability types are considered.
Definition 12.1
1. An equilibrium state x̄ is stable if there is an ε₀ > 0 with the following property: for all ε₁, 0 < ε₁ < ε₀, there is an ε > 0 such that if ||x̄ − x₀|| < ε, then ||x̄ − x(t)|| < ε₁ for all t > t₀.
2. An equilibrium state x̄ is asymptotically stable if it is stable and there is an ε > 0 such that whenever ||x̄ − x₀|| < ε, then x(t) → x̄ as t → ∞.
3. An equilibrium state x̄ is globally asymptotically stable if it is stable and, with arbitrary initial state x₀ ∈ X, x(t) → x̄ as t → ∞.
The first definition says an equilibrium state x̄ is stable if the entire trajectory x(t) remains closer to the equilibrium state than any small ε₁, provided the initial state x₀ is selected close enough to the equilibrium state. For asymptotic stability, in addition, x(t) has to converge to the equilibrium state as t → ∞. If an equilibrium state is globally asymptotically stable, then x(t) converges to the equilibrium state regardless of how the initial state x₀ is selected.
These stability concepts are called internal, because they represent properties of the state of the system. They are illustrated in Fig. 12.1.
In the electrical engineering literature, our stability definition is sometimes called marginal stability, and our asymptotic stability is called stability.
FIGURE 12.1 Stability concepts. (Source: F. Szidarovszky and A.T. Bahill, Linear Systems Theory, Boca Raton, Fla.: CRC Press, 1992, p. 168. With permission.)
12.3 Lyapunov Stability Theory
Assume that x̄ is an equilibrium state of a continuous or discrete system, and let Ω denote a subset of the state space X such that x̄ ∈ Ω.
Definition 12.2
A real-valued function V defined on Ω is called a Lyapunov function if
1. V is continuous;
2. V has a unique global minimum at x̄ with respect to all other points in Ω;
3. for any state trajectory x(t) contained in Ω, V(x(t)) is nonincreasing in t.
The Lyapunov function can be interpreted as a generalization of the energy function of an electrical system. The first requirement simply means that the graph of V has no discontinuities. The second requirement means that the graph of V has its lowest point at the equilibrium, and the third requirement generalizes the well-known fact that the energy in a free electrical system with resistance always decreases unless the system is at rest.
Theorem 12.1
Assume that there exists a Lyapunov function V on the spherical region

    Ω = { x : ||x − x̄|| < ε₀ }    (12.3)

where ε₀ > 0 is given and Ω ⊆ X. Then the equilibrium state is stable.
Theorem 12.2
Assume that in addition to the conditions of Theorem 12.1, the Lyapunov function V(x(t)) is strictly decreasing in t unless x(t) = x̄. Then the equilibrium state is asymptotically stable.
Theorem 12.3
Assume that the Lyapunov function is defined on the entire state space X, that V(x(t)) is strictly decreasing in t unless x(t) = x̄, and furthermore that V(x) tends to infinity as any component of x gets arbitrarily large in magnitude. Then the equilibrium state is globally asymptotically stable.
Example 12.1
Consider the differential equation

    ẋ = [  0  ω ] x + [ 0 ]
        [ −ω  0 ]     [ 1 ]

The stability of the equilibrium state x̄ = (1/ω, 0)ᵀ can be verified directly by using Theorem 12.1 without computing the solution. Select the Lyapunov function

    V(x) = (x − x̄)ᵀ(x − x̄) = ||x − x̄||₂²

where the Euclidean norm is used. This is continuous in x; furthermore, it has its minimal (zero) value at x = x̄. Therefore, to establish the stability of the equilibrium state we have to show only that V(x(t)) is nonincreasing. Simple differentiation shows that

    (d/dt) V(x(t)) = 2(x − x̄)ᵀ ẋ = 2(x − x̄)ᵀ(Ax + b)
with

    A = [  0  ω ]   and   b = [ 0 ]
        [ −ω  0 ]             [ 1 ]

That is, with x = (x₁, x₂)ᵀ,

    (d/dt) V(x(t)) = 2[ (x₁ − 1/ω)·ωx₂ + x₂·(−ωx₁ + 1) ] = 2( ωx₁x₂ − x₂ − ωx₁x₂ + x₂ ) = 0
Therefore, the function V(x(t)) is constant, which is nonincreasing (though not strictly decreasing). That is, all conditions of Theorem 12.1 are satisfied, which implies the stability of the equilibrium state.
Theorems 12.1, 12.2, and 12.3 guarantee, respectively, the stability, asymptotic stability, and global asymptotic
stability of the equilibrium state, if a Lyapunov function is found. Failure to find such a Lyapunov function
does not mean that the system is unstable or that the stability is not asymptotic or globally asymptotic. It only
means that you were not clever enough to find a Lyapunov function that proved stability.
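The conclusion of Example 12.1 is also easy to check numerically by integrating the system and evaluating V along the trajectory. A minimal sketch in Python with NumPy; the value ω = 2, the step size, and the initial state are arbitrary choices for illustration:

```python
import numpy as np

omega = 2.0                            # arbitrary choice for illustration
A = np.array([[0.0, omega], [-omega, 0.0]])
b = np.array([0.0, 1.0])
x_bar = np.array([1.0 / omega, 0.0])   # equilibrium state: A @ x_bar + b = 0

def V(x):
    """Lyapunov function V(x) = ||x - x_bar||^2."""
    return float((x - x_bar) @ (x - x_bar))

def rk4_step(x, dt):
    """One classical Runge-Kutta step for x' = Ax + b."""
    f = lambda x: A @ x + b
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 1.0])               # arbitrary initial state
v0 = V(x)
drift = 0.0
for _ in range(10000):                 # integrate over t in [0, 10]
    x = rk4_step(x, 1e-3)
    drift = max(drift, abs(V(x) - v0))

print(drift)                           # stays near zero: V is constant along trajectories
```

The recorded drift stays near machine precision, consistent with V being exactly constant along every trajectory of this system.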
12.4 Stability of Time-Invariant Linear Systems
This section is divided into two subsections. In the first subsection the stability of linear time-invariant systems
given in state-space notation is analyzed. In the second subsection, methods based on transfer functions are
discussed.
Stability Analysis with State-Space Notation
Consider the time-invariant continuous linear system

    ẋ = Ax + b    (12.4)

and the time-invariant discrete linear system

    x(t + 1) = Ax(t) + b    (12.5)
Assume that x̄ is an equilibrium state, and let φ(t, t₀) denote the fundamental matrix.

Theorem 12.4
1. The equilibrium state x̄ is stable if and only if φ(t, t₀) is bounded for t ≥ t₀.
2. The equilibrium state x̄ is asymptotically stable if and only if φ(t, t₀) is bounded and tends to zero as t → ∞.
We use the symbol s to denote complex frequency, i.e., s = σ + jω. For specific values of s, such as eigenvalues and poles, we use the symbol λ.
Theorem 12.5
1. If for at least one eigenvalue of A, Re λᵢ > 0 (or |λᵢ| > 1 for discrete systems), then the system is unstable.
2. Assume that for all eigenvalues λᵢ of A, Re λᵢ ≤ 0 in the continuous case (or |λᵢ| ≤ 1 in the discrete case), and that all eigenvalues with the property Re λᵢ = 0 (or |λᵢ| = 1) have single multiplicity; then the equilibrium state is stable.
3. The stability is asymptotic if and only if for all i, Re λᵢ < 0 (or |λᵢ| < 1).
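Theorem 12.5 translates directly into a numerical test. A sketch in Python with NumPy; the multiplicity check below compares rounded eigenvalues, a simplification that deliberately ignores the Jordan-block subtlety discussed in Remark 2:

```python
import numpy as np
from collections import Counter

def classify_continuous(A, tol=1e-9):
    """Classify the equilibrium of x' = Ax + b by the eigenvalues of A (Theorem 12.5)."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    if np.any(lam.real > tol):
        return "unstable"
    if np.all(lam.real < -tol):
        return "asymptotically stable"
    # eigenvalues on the imaginary axis must have single multiplicity
    boundary = [complex(round(l.real, 6), round(l.imag, 6))
                for l in lam if abs(l.real) <= tol]
    if all(count == 1 for count in Counter(boundary).values()):
        return "stable"
    return "inconclusive"   # Part 2 does not apply; see Remark 2

print(classify_continuous([[0, 2], [-2, 0]]))    # eigenvalues +-2j
print(classify_continuous([[0, 1], [-2, -3]]))   # eigenvalues -1 and -2
print(classify_continuous([[1, 0], [0, -1]]))    # one eigenvalue at +1
```

The discrete-time test is identical in shape, with |λᵢ| compared against 1 instead of Re λᵢ against 0.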
Remark 1. Note that Part 2 gives only sufficient conditions for the stability of the equilibrium state. As the following example shows, these conditions are not necessary.

Example 12.2
Consider first the continuous system ẋ = Ox, where O is the zero matrix. Note that all constant functions x(t) ≡ x are solutions and also equilibrium states. Since

    φ(t, t₀) = e^{O(t − t₀)} = I

is bounded (being independent of t), all equilibrium states are stable, but O has only one eigenvalue, λ₁ = 0, with zero real part and multiplicity n, where n is the order of the system.
Consider next the discrete system x(t + 1) = Ix(t), where all constant functions x(t) ≡ x are again solutions and equilibrium states. Furthermore,

    φ(t, t₀) = A^{t − t₀} = I^{t − t₀} = I

which is obviously bounded. Therefore, all equilibrium states are stable, but the condition of Part 2 of the theorem is again violated, since λ₁ = 1 has unit absolute value and multiplicity n.
Remark 2. The following extension of Theorem 12.5 can be proven. The equilibrium state is stable if and only if for all eigenvalues of A, Re λᵢ ≤ 0 (or |λᵢ| ≤ 1), and if λᵢ is a repeated eigenvalue of A such that Re λᵢ = 0 (or |λᵢ| = 1), then the size of each block containing λᵢ in the Jordan canonical form of A is 1 × 1.
Remark 3. The equilibrium states of inhomogeneous equations are stable or asymptotically stable if and only if the same holds for the equilibrium states of the corresponding homogeneous equations.
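Remark 2 can be seen concretely with A = [[0, 1], [0, 0]], an illustrative matrix chosen here: its only eigenvalue is λ₁ = 0 with multiplicity 2, but that eigenvalue sits in a single 2 × 2 Jordan block, and the fundamental matrix e^{At} = I + At (exact here, since A² = O) is unbounded. A sketch in Python with NumPy:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # eigenvalue 0 repeated, one 2x2 Jordan block

def phi(t):
    """Fundamental matrix e^{At}; the series terminates because A @ A = O."""
    return np.eye(2) + A * t

# ||phi(t)|| grows linearly in t, so the equilibrium is NOT stable,
# even though every eigenvalue satisfies Re(lambda) <= 0.
norms = [np.linalg.norm(phi(t)) for t in (1.0, 10.0, 100.0)]
print(norms)
```

By Theorem 12.4, an unbounded fundamental matrix rules out stability, exactly as the Jordan-block condition predicts.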
Example 12.3
Consider again the continuous system

    ẋ = [  0  ω ] x + [ 0 ]
        [ −ω  0 ]     [ 1 ]

the stability of which was analyzed earlier in Example 12.1 by using the Lyapunov function method. The characteristic polynomial of the coefficient matrix is

    φ(s) = det [  s  −ω ] = s² + ω²
               [  ω   s ]

therefore, the eigenvalues are λ₁ = jω and λ₂ = −jω. Both eigenvalues have single multiplicity, and Re λ₁ = Re λ₂ = 0. Hence the conditions of Part 2 are satisfied, and therefore the equilibrium state is stable. The conditions of Part 3 do not hold; consequently, the system is not asymptotically stable.
If a time-invariant system is nonlinear, then the Lyapunov method is the most popular choice for stability analysis. If the system is linear, then direct application of Theorem 12.5 is more attractive, since the eigenvalues of the coefficient matrix A can be obtained by standard methods. In addition, several conditions are known from the literature that guarantee the asymptotic stability of time-invariant discrete and continuous systems even without computing the eigenvalues. For examining asymptotic stability, linearization is an alternative approach to the Lyapunov method, as shown next. Consider the time-invariant continuous and discrete systems

    ẋ(t) = f(x(t))
and

    x(t + 1) = f(x(t))

Let J(x) denote the Jacobian of f(x), and let x̄ be an equilibrium state of the system. It is known that linearization around the equilibrium state results in the time-invariant linear systems

    ẋ_d(t) = J(x̄) x_d(t)

and

    x_d(t + 1) = J(x̄) x_d(t)

where x_d(t) = x(t) − x̄. It is also known from the theory of ordinary differential equations that asymptotic stability of the zero vector in the linearized system implies asymptotic stability of the equilibrium state x̄ in the original nonlinear system.
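As a sketch of the linearization recipe, consider a damped pendulum ẋ₁ = x₂, ẋ₂ = −sin x₁ − cx₂ (a standard illustration, not an example from this chapter; the damping value c = 0.5 is arbitrary). The Jacobian at the equilibrium x̄ = (0, 0)ᵀ has eigenvalues with negative real parts, so the equilibrium of the nonlinear system is asymptotically stable:

```python
import numpy as np

c = 0.5  # damping coefficient, an arbitrary positive value for illustration

def f(x):
    """Damped pendulum: x1' = x2, x2' = -sin(x1) - c*x2."""
    return np.array([x[1], -np.sin(x[0]) - c * x[1]])

def jacobian(x):
    """Jacobian J(x) of f, computed by hand for this particular f."""
    return np.array([[0.0, 1.0],
                     [-np.cos(x[0]), -c]])

x_bar = np.array([0.0, 0.0])              # f(x_bar) = 0: an equilibrium state
lam = np.linalg.eigvals(jacobian(x_bar))  # eigenvalues of the linearized system
print(lam.real)                           # all negative -> asymptotically stable
```

By contrast, at the inverted equilibrium (π, 0)ᵀ the Jacobian has a positive eigenvalue, and linearization correctly flags that equilibrium as unstable.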
For continuous systems the following result has a special importance.
Theorem 12.6
The equilibrium state of a continuous system [Eq. (12.4)] is asymptotically stable if and only if the equation

    AᵀQ + QA = −M    (12.6)

has a positive definite solution Q for some positive definite matrix M.
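Equation (12.6) is linear in the entries of Q, so it can be solved directly by rewriting it with Kronecker products: vec(AᵀQ) + vec(QA) = (I ⊗ Aᵀ + Aᵀ ⊗ I) vec(Q) = −vec(M). A sketch in Python with NumPy, for a matrix A chosen here purely as an example (in practice scipy.linalg.solve_continuous_lyapunov performs the same task):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # example matrix; eigenvalues -1 and -2
M = np.eye(2)                             # the customary choice M = I
n = A.shape[0]

# A^T Q + Q A = -M   <=>   (I kron A^T + A^T kron I) vec(Q) = -vec(M)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
q = np.linalg.solve(K, -M.flatten(order="F"))
Q = q.reshape((n, n), order="F")

residual = np.linalg.norm(A.T @ Q + Q @ A + M)
eigs = np.linalg.eigvalsh(Q)              # Q is symmetric; check positive definiteness
print(Q, residual, eigs)
```

All eigenvalues of Q come out positive, so Theorem 12.6 confirms that this A is asymptotically stable, in agreement with its eigenvalues −1 and −2.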
We note that in practical applications the identity matrix is almost always selected for M. An initial stability
check is provided by the following result.
Theorem 12.7
Let φ(λ) = λⁿ + p_{n−1}λ^{n−1} + ⋯ + p₁λ + p₀ be the characteristic polynomial of matrix A. Assume that all eigenvalues of matrix A have negative real parts. Then pᵢ > 0 (i = 0, 1, ..., n − 1).
Corollary. If any of the coefficients pᵢ is negative or zero, the equilibrium state of the system with coefficient matrix A cannot be asymptotically stable. However, the converse is false: positivity of all the coefficients does not imply that the eigenvalues of A have negative real parts.
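The corollary is strictly a one-way test, and a small numeric example (chosen here for illustration) shows why: every coefficient of s³ + s² + s + 2 is positive, yet a complex pair of its roots lies in the right half-plane.

```python
import numpy as np

# All coefficients of s^3 + s^2 + s + 2 are positive...
roots = np.roots([1.0, 1.0, 1.0, 2.0])
print(roots)
# ...but the complex root pair has positive real part, so a system with this
# characteristic polynomial is unstable despite passing the sign check.
print(max(roots.real))
```

A Routh array on the same coefficients exposes the sign change that the raw coefficient test misses.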
Example 12.4
For the matrix

    A = [  0  ω ]
        [ −ω  0 ]

the characteristic polynomial is φ(s) = s² + ω². Since the coefficient of s¹ is zero, the system of Example 12.3 is not asymptotically stable.
The Transfer Function Approach
The transfer function of the time-invariant linear continuous system

    ẋ = Ax + Bu
    y = Cx            (12.7)
and that of the time-invariant linear discrete system

    x(t + 1) = Ax(t) + Bu(t)
    y(t) = Cx(t)      (12.8)

have the common form

    TF(s) = C(sI − A)⁻¹B

If both the input and the output are single, then

    TF(s) = Y(s)/U(s)

or, in the familiar electrical engineering notation,

    TF(s) = KG(s) / [1 + KG(s)H(s)]    (12.9)

where K is the gain term in the forward loop, G(s) represents the dynamics of the forward loop, or the plant, and H(s) models the dynamics in the feedback loop. We note that in the case of continuous systems s is the variable of the transfer function, and for discrete systems the variable is denoted by z.
After the Second World War systems and control theory flourished. The transfer function representation was
the most popular representation for systems. To determine the stability of a system we merely had to factor the
denominator of the transfer function (12.9) and see if all of the poles were in the left half of the complex
frequency plane. However, with manual techniques, factoring polynomials of large order is difficult. So engi-
neers, being naturally lazy people, developed several ways to determine the stability of a system without factoring
the polynomials [Dorf, 1992]. First, we have the methods of Routh and Hurwitz, developed a century ago, that
looked at the coefficients of the characteristic polynomial. These methods showed whether the system was
stable or not, but they did not show how close the system was to being stable.
What we want to know is for what value of gain, K, and at what frequency, w, will the denominator of the
transfer function (12.9) become zero. Or, when will KGH = –1, meaning, when will the magnitude of KGH
equal 1 with a phase angle of –180 degrees? These parameters can be determined easily with a Bode diagram.
Construct a Bode diagram for KGH of the system, look at the frequency where the phase angle equals –180
degrees, and look up at the magnitude plot. If it is smaller than 1.0, then the system is stable. If it is larger than
1.0, then the system is unstable. Bode diagram techniques are discussed in Chapter 11.
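The Bode recipe above is easy to mechanize with a frequency sweep. A sketch in Python with NumPy for the loop transfer function KGH = K / [s(s + 1)(s + 2)], a textbook loop chosen here purely for illustration: the phase reaches −180 degrees at ω = √2 rad/s, where |GH| = 1/6, so the closed loop is stable for K < 6 and unstable for K > 6.

```python
import numpy as np

def loop_gain(w, K=1.0):
    """KGH(jw) for the illustrative loop KGH(s) = K / [s (s+1) (s+2)]."""
    s = 1j * w
    return K / (s * (s + 1) * (s + 2))

w = np.linspace(0.5, 3.0, 200001)        # frequency sweep, rad/s
H = loop_gain(w)
phase = np.unwrap(np.angle(H))           # continuous phase along the sweep

# frequency where the phase crosses -180 degrees
i = np.argmin(np.abs(phase + np.pi))
w_pc, mag_pc = w[i], abs(H[i])
print(w_pc, mag_pc)
```

The sweep recovers the phase-crossover frequency near √2 and a magnitude near 1/6 there, matching the hand calculation.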
The quantity KG(s)H(s) is called the open-loop transfer function of the system, because it is the effect that
would be encountered by a signal in one loop around the system if the feedback loop were artificially opened
[Bahill, 1981].
To gain some intuition, think of a closed-loop negative feedback system. Apply a small sinusoid at frequency
w to the input. Assume that the gain around the loop, KGH, is 1 or more, and that the phase lag is 180 degrees.
The summing junction will flip over the fed back signal and add it to the original signal. The result is a signal
that is bigger than what came in. This signal will circulate around this loop, getting bigger and bigger until the
real system no longer matches the model. This is what we call instability.
The question of stability can also be answered with Nyquist diagrams. They are related to Bode diagrams,
but they give more information. A simple way to construct a Nyquist diagram is to make a polar plot on the
complex frequency plane of the Bode diagram. Simply stated, if this contour encircles the –1 point in the
complex frequency plane, then the system is unstable. The two advantages of the Nyquist technique are (1) in
addition to the information on Bode diagrams, there are about a dozen rules that can be used to help construct
Nyquist diagrams, and (2) Nyquist diagrams handle bizarre systems better, as is shown in the following rigorous
statement of the Nyquist stability criterion. The number of clockwise encirclements minus the number of
counterclockwise encirclements of the point s = −1 + j0 by the Nyquist plot of KG(s)H(s) is equal to the number of poles of Y(s)/U(s) minus the number of poles of KG(s)H(s) in the right half of the s-plane.
The root-locus technique was another popular technique for assessing stability. It furthermore allowed the
engineer to see the effects of small changes in the gain, K, on the stability of the system. The root-locus diagram
shows the location in the s-plane of the poles of the closed-loop transfer function, Y(s)/U(s). All branches of
the root-locus diagram start on poles of the open-loop transfer function, KGH, and end either on zeros of the
open-loop transfer function, KGH, or at infinity. There are about a dozen rules to help draw these trajectories.
The root-locus technique is discussed in Chapter 93.4.
We consider all these techniques to be old fashioned. They were developed to help answer the question of
stability without factoring the characteristic polynomial. However, many computer programs are currently
available that factor polynomials. We recommend that engineers merely buy one of these computer packages
and find the roots of the closed-loop transfer function to assess the stability of a system.
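That root-finding approach takes only a few lines in practice. For the same illustrative loop KGH = K / [s(s + 1)(s + 2)] used above, the closed-loop characteristic equation 1 + KGH = 0 becomes s³ + 3s² + 2s + K = 0, and the roots tell the story directly:

```python
import numpy as np

def closed_loop_poles(K):
    """Roots of s^3 + 3 s^2 + 2 s + K = 0, the closed-loop characteristic equation."""
    return np.roots([1.0, 3.0, 2.0, K])

for K in (5.0, 6.0, 7.0):
    poles = closed_loop_poles(K)
    print(K, max(poles.real))
# K = 5: all poles in the left half-plane (asymptotically stable)
# K = 6: poles at -3 and +-j*sqrt(2) (marginally stable)
# K = 7: a pole pair crosses into the right half-plane (unstable)
```

The gain K = 6 at which the pole pair sits on the imaginary axis agrees with the 1/6 loop magnitude found at the phase crossover.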
The poles of a system are defined as all values of s such that sI – A is singular. The poles of a closed-loop
transfer function are exactly the same as the eigenvalues of the system: engineers prefer the term poles and the
symbol s, and mathematicians prefer the term eigenvalues and the symbol l. We will use s for complex frequency
and l for specific values of s.
Sometimes some poles are canceled in the rational function form of TF(s), so that they are not explicitly shown. However, even if some poles are canceled by zeros, we still have to consider all poles in the following criterion, which is a restatement of Theorem 12.5. The equilibrium state of the continuous system
[Eq. (12.7)] with constant input is unstable if at least one pole has a positive real part, and is stable if all poles
of TF(s) have nonpositive real parts and all poles with zero real parts are single. The equilibrium state is
asymptotically stable if and only if all poles of TF(s) have negative real parts; that is, all poles are in the left
half of the s-plane. Similarly, the equilibrium state of the discrete system [Eq. (12.8)] with constant input is
unstable if the absolute value of at least one pole is greater than one, and is stable if all poles of TF(z) have
absolute values less than or equal to one and all poles with unit absolute values are single. The equilibrium
state is asymptotically stable if and only if all poles of TF(z) have absolute values less than one; that is, the poles
are all inside the unit circle of the z-plane.
Example 12.5
Consider again the system

    ẋ = [  0  ω ] x + [ 0 ]
        [ −ω  0 ]     [ 1 ]

which was discussed earlier. Assume that the output equation has the form

    y = (1, 1) x

Then

    TF(s) = (s + ω) / (s² + ω²)

The poles are jω and −jω, which have zero real parts; that is, they are on the imaginary axis of the s-plane. Since they are single poles, the equilibrium state is stable but not asymptotically stable. A system such as this would produce constant-amplitude sinusoids at frequency ω. So it seems natural to assume that such systems would be used to build sinusoidal signal generators and to model oscillating systems. However, this is not the case, because (1) zero-resistance circuits are hard to make; therefore, most function generators use other
techniques to produce sinusoids; and (2) such systems are not good models for oscillating systems, because most real-world oscillating systems (e.g., biological systems) have energy-dissipation elements in them.
More generally, real-world function generators are seldom made from closed-loop feedback control systems
with 180 degrees of phase shift, because (1) it would be difficult to get a broad range of frequencies and several
waveforms from such systems, (2) precise frequency selection would require expensive high-precision compo-
nents, and (3) it would be difficult to maintain a constant frequency in such circuits in the face of changing
temperatures and power supply variations. Likewise, closed-loop feedback control systems with 180 degrees of
phase shift are not good models for oscillating biological systems, because most biological systems oscillate
because of nonlinear network properties.
A special stability criterion for single-input, single-output time-invariant continuous systems will be introduced next. Consider the system

    ẋ = Ax + bu   and   y = cᵀx    (12.10)

where A is an n × n constant matrix, and b and c are constant n-dimensional vectors. The transfer function of this system is

    TF₁(s) = cᵀ(sI − A)⁻¹b

which is obviously a rational function of s. Now let us add negative feedback around this system so that u = ky, where k is a constant. The resulting system can be described by the differential equation

    ẋ = Ax + kb cᵀx = (A + kbcᵀ)x    (12.11)

The transfer function of this feedback system is

    TF(s) = TF₁(s) / [1 − k TF₁(s)]    (12.12)

To help show the connection between the asymptotic stability of systems (12.10) and (12.11), we introduce the following definition.

Definition 12.3
Let r(s) be a rational function of s. Then the locus of points

    L(r) = { a + jb : a = Re r(jv), b = Im r(jv), v ∈ R }

is called the response diagram of r. Note that L(r) is the image of the imaginary line Re(s) = 0 under the mapping r. We shall assume that L(r) is bounded, which is the case if and only if the degree of the denominator is not less than that of the numerator and r has no poles on the line Re(s) = 0.
Theorem 12.8
The Nyquist stability criterion. Assume that TF₁ has a bounded response diagram L(TF₁). If TF₁ has n poles in the right half of the s-plane, where Re(s) > 0, then TF has r + n poles in the right half of the s-plane, where Re(s) > 0, if the point 1/k + j·0 is not on L(TF₁) and L(TF₁) encircles the point 1/k + j·0 r times in the clockwise sense.
Corollary. Assume that system (12.10) is asymptotically stable with constant input and that L(TF₁) is bounded, is traversed in the direction of increasing v, and has the point 1/k + j·0 on its left. Then the feedback system (12.11) is also asymptotically stable.
This result has many applications, since feedback systems play a crucial role in constructing stabilizers, observers, and filters for given systems. Figure 12.2 illustrates the conditions of the corollary. The application of this result is especially convenient if system (12.10) is given and only appropriate values k of the feedback are to be determined. In such cases the locus L(TF₁) has to be computed first, and then the region of all appropriate k values can be determined easily from the graph of L(TF₁).
This analysis has dealt with the closed-loop transfer function, whereas the techniques of Bode, root locus, etc. use the open-loop transfer function. This should cause little confusion as long as the distinction is kept in mind.
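The encirclement count in Theorem 12.8 can be approximated numerically by sampling TF₁(jv) over a large symmetric range of v and accumulating the change in angle around the test point 1/k + j·0. A sketch in Python with NumPy for TF₁(s) = 1/(s + 1), a transfer function chosen here only for illustration: its response diagram is the circle through 0 and 1, so it encircles 1/k exactly when k > 1, matching the fact that the closed-loop pole of (12.11) for this TF₁ is s = k − 1.

```python
import numpy as np

def windings(k, v_max=1e3, n=200001):
    """Approximate clockwise encirclements of 1/k by L(TF1) for TF1(s) = 1/(s+1)."""
    v = np.linspace(-v_max, v_max, n)
    curve = 1.0 / (1j * v + 1.0) - 1.0 / k   # response diagram relative to the test point
    dtheta = np.diff(np.unwrap(np.angle(curve)))
    # total angle swept / 2*pi; minus sign converts to a clockwise count
    return -round(float(np.sum(dtheta)) / (2 * np.pi))

print(windings(2.0))   # 1/k = 0.5 lies inside the circle: one clockwise encirclement
print(windings(0.5))   # 1/k = 2.0 lies outside the circle: no encirclement
```

With TF₁ stable (no right-half-plane poles, n = 0), one clockwise encirclement (r = 1) predicts r + n = 1 unstable closed-loop pole for k = 2, and indeed s = k − 1 = 1.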
12.5 BIBO Stability
In the previous sections, internal stability of time-invariant systems was examined, i.e., the stability of the state
was investigated. In this section the external stability of systems is discussed; this is usually called the BIBO
(bounded-input, bounded-output) stability. Here we drop the simplifying assumption of the previous section
that the system is time-invariant: we will examine time-variant systems.
Definition 12.4
A system is called BIBO stable if for zero initial conditions, a bounded input always evokes a bounded output.
For continuous systems a necessary and sufficient condition for BIBO stability can be formulated as follows.
Theorem 12.9
Let T(t, τ) = (tᵢⱼ(t, τ)) be the weighting pattern, C(t)φ(t, τ)B(t), of the system. Then the continuous time-variant linear system is BIBO stable if and only if the integral

    ∫_{t₀}^{t} | tᵢⱼ(t, τ) | dτ    (12.13)

is bounded for all t > t₀ and all i and j.
Corollary. Integrals (12.13) are all bounded if and only if

    I(t) = ∫_{t₀}^{t} Σᵢ Σⱼ | tᵢⱼ(t, τ) | dτ    (12.14)

is bounded for t ≥ t₀. Therefore, it is sufficient to show the boundedness of only this one integral in order to establish BIBO stability.

FIGURE 12.2 Illustration of Nyquist stability criteria. (Source: F. Szidarovszky and A. T. Bahill, Linear Systems Theory, Boca Raton, Fla.: CRC Press, 1992, p. 184. With permission.)
The discrete counterpart of this theorem can be given in the following way.
Theorem 12.10
Let T(t, τ) = (tᵢⱼ(t, τ)) be the weighting pattern of the discrete linear system. Then it is BIBO stable if and only if the sum

    Σ_{τ=t₀}^{t−1} | tᵢⱼ(t, τ) |    (12.15)

is bounded for all t > t₀ and all i and j.
Corollary. The sums (12.15) are all bounded if and only if

    I(t) = Σ_{τ=t₀}^{t−1} Σᵢ Σⱼ | tᵢⱼ(t, τ) |    (12.16)

is bounded. Therefore, it is sufficient to verify the boundedness of only this one sum in order to establish BIBO stability.
Consider next the time-invariant case, when A(t) ≡ A, B(t) ≡ B, and C(t) ≡ C. From the foregoing theorems and the definition of T(t, τ) we immediately have the following sufficient condition.

Theorem 12.11
Assume that for all eigenvalues λᵢ of A, Re λᵢ < 0 (or |λᵢ| < 1). Then the time-invariant linear continuous (or discrete) system is BIBO stable.
Finally, we note that BIBO stability is not implied by an observation that a certain bounded input generates
bounded output. All bounded inputs must generate bounded outputs in order to guarantee BIBO stability.
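This warning can be made concrete with the undamped oscillator of Example 12.5, which is internally stable but not BIBO stable: its poles lie on the imaginary axis, and the bounded input u(t) = sin(ωt) drives the state without bound. A quick numerical sketch in Python (RK4 integration; ω = 1 and the time span are arbitrary choices for illustration):

```python
import numpy as np

omega = 1.0
A = np.array([[0.0, omega], [-omega, 0.0]])
b = np.array([0.0, 1.0])

def f(t, x):
    """Oscillator driven by the bounded input u(t) = sin(omega * t)."""
    return A @ x + b * np.sin(omega * t)

def rk4(x, t, dt):
    k1 = f(t, x); k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2); k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x, dt = np.zeros(2), 1e-2
early, late = 0.0, 0.0
for step in range(10000):                 # t in [0, 100]
    x = rk4(x, step * dt, dt)
    size = np.linalg.norm(x)
    if step < 1000:                       # t < 10
        early = max(early, size)
    else:
        late = max(late, size)

print(early, late)   # the response keeps growing: bounded input, unbounded output
```

The resonant response grows roughly linearly in t, so no bound on the output exists even though the input never exceeds 1.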
Adaptive-control systems are time-varying systems. Therefore, it is usually difficult to prove that they are
stable. Szidarovszky et al. [1990], however, show a technique for doing this. This new result gives a necessary
and sufficient condition for the existence of an asymptotically stable model-following adaptive-control system
based on the solvability of a system of nonlinear algebraic equations, and in the case of the existence of such
systems they present an algorithm for finding the appropriate feedback parameters.
12.6 Physical Examples
In this section we show some examples of stability analysis of physical systems.
1. Consider a simple harmonic oscillator constructed of a mass and an ideal spring. Its dynamic response is summarized by

    ẋ = [  0  ω ] x + [ 0 ] u
        [ −ω  0 ]     [ 1 ]

In Example 12.3 we showed that this system is stable but not asymptotically stable. This means that if we leave it alone in its equilibrium state, it will remain stationary, but if we jerk on the mass it will oscillate forever. There is no damping term to remove the energy, so the energy will be transferred back and forth between potential energy in the spring and kinetic energy in the moving mass. A good approximation of such a harmonic oscillator is a pendulum clock. The more expensive it is (i.e., the smaller the damping), the less often we have to wind it (i.e., add energy).
2. A linear second-order electrical system composed of a series connection of an input voltage source, an inductor, a resistor, and a capacitor, with the output defined as the voltage across the capacitor, can be characterized by the second-order equation

    V_out / V_in = 1 / (LCs² + RCs + 1)

For convenience, let us define

    ωₙ = 1/√(LC)   and   ζ = (R/2)√(C/L)

and assume that ζ < 1. With these parameters the transfer function becomes

    V_out / V_in = ωₙ² / (s² + 2ζωₙs + ωₙ²)

Is this system stable? The roots of the characteristic equation are

    λ_{1,2} = −ζωₙ ± jωₙ√(1 − ζ²)

If ζ > 0, the poles are in the left half of the s-plane, and therefore the system is asymptotically stable. If ζ = 0, as in the previous example, the poles are on the imaginary axis; therefore, the system is stable but not asymptotically stable. If ζ < 0, the poles are in the right half of the s-plane and the system is unstable.
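For concreteness, the formulas above can be cross-checked numerically for one arbitrary set of component values (L = 1 H, C = 1 F, R = 0.5 Ω, an illustrative choice giving ωₙ = 1 and ζ = 0.25):

```python
import numpy as np

L, C, R = 1.0, 1.0, 0.5            # illustrative component values
wn = 1.0 / np.sqrt(L * C)          # natural frequency
zeta = (R / 2.0) * np.sqrt(C / L)  # damping ratio

# poles from the closed-form formula ...
poles_formula = np.array([-zeta * wn + 1j * wn * np.sqrt(1 - zeta**2),
                          -zeta * wn - 1j * wn * np.sqrt(1 - zeta**2)])
# ... and from the denominator L C s^2 + R C s + 1 directly
poles_roots = np.roots([L * C, R * C, 1.0])

print(poles_formula)
print(poles_roots)   # both give s = -0.25 +- 0.968j: left half-plane, asymptotically stable
```

Any positive R puts both poles strictly in the left half-plane, matching the ζ > 0 case above.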
3. An electrical system is shown in Fig. 12.3. Simple calculation shows that by defining the state variables

    x₁ = i_L,   x₂ = v_C,   and   u = v_s

the system can be described by the differential equations

    ẋ₁ = −(R₁/L)x₁ − (1/L)x₂ + (1/L)u
    ẋ₂ = (1/C)x₁ − (1/(CR₂))x₂
FIGURE 12.3A simple electrical system. (Source: F. Szidarovszky and A. T. Bahill, Linear Systems Theory, Boca Raton, Fla.:
CRC Press, 1992, p. 125. With permission.)
The characteristic equation has the form

    ( s + R₁/L )( s + 1/(CR₂) ) + 1/(LC) = 0

which simplifies to

    s² + ( R₁/L + 1/(CR₂) ) s + ( R₁/(LCR₂) + 1/(LC) ) = 0

Since R₁, R₂, L, and C are positive numbers, the coefficients of this equation are all positive. The constant term equals λ₁λ₂, and the coefficient of s¹ is −(λ₁ + λ₂). Therefore

    λ₁ + λ₂ < 0   and   λ₁λ₂ > 0

If the eigenvalues are real, then these relations hold if and only if both eigenvalues are negative. If they were both positive, then λ₁ + λ₂ > 0. If they had different signs, then λ₁λ₂ < 0. Furthermore, if at least one eigenvalue were zero, then λ₁λ₂ = 0. Assume next that the eigenvalues are complex conjugates:

    λ_{1,2} = α ± jβ

Then

    λ₁ + λ₂ = 2α

and

    λ₁λ₂ = α² + β²

Hence λ₁ + λ₂ < 0 if and only if the real part α < 0.
In summary, the system is asymptotically stable, since in both the real and complex cases the eigenvalues have negative values or negative real parts, respectively.
4. The classical stick balancing problem is shown in Fig. 12.4. Simple analysis shows that y(t) satisfies the
second-order equation
If one selects L = 1, then the characteristic equation has the form
So, the eigenvalues are
--
?
è
?
?
?
÷
--
?
è
?
?
?
÷
+=s
R
L
s
CR LC
1
2
11
0
ss
R
LCR
R
LCR LC
2 1
2
1
2
11
0++
?
è
?
?
?
÷
++
?
è
?
?
?
÷
=
ll ll
12 12
00+< > and
l
12,
=±Res jIms
ll
12
2+=Res
ll
12
22
=+()()Res Ims
˙˙
()y
g
L
yu=-
sg
2
0-=
l
12,
=±g
One is in the right half of the s-plane and the other is in the left half of the s-plane, so the system is unstable.
This instability is understandable, since without an intelligent input to control the system, if the stick is not
upright with zero velocity, it will fall over.
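This conclusion can be reproduced numerically by putting the stick equation (with L = 1) into state-space form. A minimal sketch, assuming g = 9.81 m/s²:

```python
import numpy as np

g = 9.81                       # gravitational acceleration; L = 1 as in the text
# State-space form of y'' - g*y = u with states (y, y'); the input is
# irrelevant to the eigenvalue analysis and is omitted.
A = np.array([[0.0, 1.0],
              [g,   0.0]])
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)                    # approximately [-sqrt(g), +sqrt(g)]
unstable = any(e > 0 for e in eigs)
print(unstable)                # True: one eigenvalue in the right half-plane
```

The positive eigenvalue +√g makes the upright equilibrium unstable for every g > 0, exactly as argued above.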
5. A simple transistor circuit can be modeled as shown in Fig. 12.5. The state variables are related to the input
and output of the circuit: the base current, i_b, is x_1 and the output voltage, v_out, is x_2. Therefore,

$$\dot{x} = \begin{pmatrix} -\dfrac{h_{ie}}{L} & 0 \\ \dfrac{h_{fe}}{C} & 0 \end{pmatrix} x + \begin{pmatrix} \dfrac{1}{L} \\ 0 \end{pmatrix} e_s \quad\text{and}\quad c = (0 \quad 1)^T$$

The A matrix looks strange with a column of all zeros, and indeed the circuit does exhibit odd behavior. For
example, as we will show, there is no equilibrium state for a unit step input of e_s. This is reasonable, however,

FIGURE 12.4 Stick balancing. (Source: F. Szidarovszky and A. T. Bahill, Linear Systems Theory, Boca Raton, Fla.: CRC
Press, 1992, p. 127. With permission.)

FIGURE 12.5 A model for a simple transistor circuit. (Source: F. Szidarovszky and A. T. Bahill, Linear Systems Theory, Boca
Raton, Fla.: CRC Press, 1992, p. 127. With permission.)
because the model is for mid-frequencies, and a unit step does not qualify. In response to a unit step the output
voltage will increase linearly until the model is no longer valid. If e_s is considered to be the input, then the
system is

$$\dot{x} = \begin{pmatrix} -\dfrac{h_{ie}}{L} & 0 \\ \dfrac{h_{fe}}{C} & 0 \end{pmatrix} x + \begin{pmatrix} \dfrac{1}{L} \\ 0 \end{pmatrix} u$$

If u(t) ≡ 1, then at the equilibrium state:

$$\begin{pmatrix} -\dfrac{h_{ie}}{L} & 0 \\ \dfrac{h_{fe}}{C} & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} \dfrac{1}{L} \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

That is,

$$-\frac{h_{ie}}{L}x_1 + \frac{1}{L} = 0$$

$$\frac{h_{fe}}{C}x_1 = 0$$

Since h_fe/C ≠ 0, the second equation implies that x_1 = 0, and by substituting this value into the first equation
we get the obvious contradiction 1/L = 0. Hence, with nonzero constant input no equilibrium state exists.

Let us now investigate the stability of this system. First let x̃(t) denote a fixed trajectory of this system, and
let x(t) be an arbitrary solution. Then the difference x_d(t) = x(t) - x̃(t) satisfies the homogeneous equation

$$\dot{x}_d = \begin{pmatrix} -\dfrac{h_{ie}}{L} & 0 \\ \dfrac{h_{fe}}{C} & 0 \end{pmatrix} x_d$$

This system has an equilibrium x_d(t) = 0. Next, the stability of this equilibrium is examined by solving for the
poles of the closed-loop transfer function. The characteristic equation is

$$\det \begin{pmatrix} -\dfrac{h_{ie}}{L} - s & 0 \\ \dfrac{h_{fe}}{C} & -s \end{pmatrix} = 0$$

which can be simplified as

$$s^2 + \frac{h_{ie}}{L}s = 0$$
The roots are

$$\lambda_1 = 0 \quad\text{and}\quad \lambda_2 = -\frac{h_{ie}}{L}$$

Therefore, the system is stable but not asymptotically stable. This stability means that for small changes in the
initial state the entire trajectory x(t) remains close to x̃(t).
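Both conclusions, the eigenvalues {0, -h_ie/L} and the linearly growing step response, can be reproduced numerically. The parameter values below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative values; h_ie, h_fe, L, C are assumptions for this sketch.
h_ie, h_fe, L, C = 1.0e3, 50.0, 1.0e-3, 1.0e-6

A = np.array([[-h_ie / L, 0.0],
              [h_fe / C,  0.0]])
b = np.array([1.0 / L, 0.0])

# Eigenvalues are 0 and -h_ie/L: stable but not asymptotically stable.
eigs = np.linalg.eigvals(A)
print(np.sort(eigs.real))

# Euler-integrate the step response (e_s = 1): x2 = v_out grows steadily,
# confirming that no equilibrium state exists for a constant nonzero input.
x = np.zeros(2)
dt = 1.0e-9
for _ in range(100_000):
    x = x + dt * (A @ x + b)
print(x[1])   # keeps increasing as more steps are taken
```

After x_1 settles to its steady value h_ie⁻¹·... more precisely to 1/h_ie, the second state integrates a constant, which is the linearly increasing output voltage described above.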
Defining Terms

Asymptotic stability: An equilibrium state x of a system is asymptotically stable if, in addition to being stable,
there is an ε > 0 such that whenever ||x – x_0|| < ε, then x(t) → x as t → ∞. A system is asymptotically
stable if all the poles of the closed-loop transfer function are in the left half of the s-plane (inside the
unit circle of the z-plane for discrete systems). This is sometimes called stability.

BIBO stability: A system is BIBO stable if for zero initial conditions a bounded input always evokes a bounded
output.

External stability: Stability concepts related to the input-output behavior of the system.

Global asymptotic stability: An equilibrium state x of a system is globally asymptotically stable if it is stable
and with arbitrary initial state x_0 ∈ X, x(t) → x as t → ∞.

Internal stability: Stability concepts related to the state of the system.

Instability: An equilibrium state of a system is unstable if it is not stable. A system is unstable if at least one
pole of the closed-loop transfer function is in the right half of the s-plane (outside the unit circle of the
z-plane for discrete systems).

Stability: An equilibrium state x of a system is stable if there is an ε_0 > 0 with the following property: for all
ε_1, 0 < ε_1 < ε_0, there is an ε > 0 such that if ||x – x_0|| < ε, then ||x – x(t)|| < ε_1 for all t > t_0. A system is
stable if the poles of its closed-loop transfer function are (1) in the left half of the complex frequency
plane, called the s-plane (inside the unit circle of the z-plane for discrete systems), or (2) on the imaginary
axis, and all of the poles on the imaginary axis are single (on the unit circle and all such poles are single
for discrete systems). Stability for a system with repeated poles on the jω axis (the unit circle) is
complicated and is examined in the discussion after Theorem 12.5. In the electrical engineering literature,
this definition of stability is sometimes called marginal stability and sometimes stability in the sense of
Lyapunov.
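The s-plane tests in these definitions can be collected into a small classification routine. The following sketch is our own (the function name and tolerance are assumptions); it reports repeated imaginary-axis poles as unstable, although the text notes that this case requires a closer analysis:

```python
from collections import Counter

def classify(poles, tol=1e-9):
    """Classify a continuous-time system from its closed-loop pole list,
    following the s-plane tests in the definitions above."""
    if all(p.real < -tol for p in poles):
        return "asymptotically stable"
    if any(p.real > tol for p in poles):
        return "unstable"
    # Remaining poles lie on the imaginary axis: they must all be single.
    axis = [complex(round(p.real, 9), round(p.imag, 9))
            for p in poles if abs(p.real) <= tol]
    if all(count == 1 for count in Counter(axis).values()):
        return "stable (marginally)"
    return "unstable"

print(classify([-1, -2 + 3j, -2 - 3j]))  # asymptotically stable
print(classify([1j, -1j, -1]))           # stable (marginally)
print(classify([2, -1]))                 # unstable
```

The three calls correspond to the three cases met in the physical examples: all poles in the left half-plane, single poles on the imaginary axis, and at least one pole in the right half-plane.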
Related Topics

6.2 Applications • 7.2 State Equations in Normal Form • 100.2 Dynamic Response • 100.7 Nonlinear
Control Systems
References

A. T. Bahill, Bioengineering: Biomedical, Medical and Clinical Engineering, Englewood Cliffs, N.J.: Prentice-Hall,
1981, pp. 214–215, 250–252.
R. C. Dorf, Modern Control Systems, 7th ed., Reading, Mass.: Addison-Wesley, 1996.
M. Jamshidi, M. Tarokh, and B. Shafai, Computer-Aided Analysis and Design of Linear Control Systems,
Englewood Cliffs, N.J.: Prentice-Hall, 1992.
F. Szidarovszky and A. T. Bahill, Linear Systems Theory, Boca Raton, Fla.: CRC Press, 1992.
F. Szidarovszky, A. T. Bahill, and S. Molnar, "On stable adaptive control systems," Pure Math. and Appl., vol. 1,
ser. B, no. 2–3, pp. 115–121, 1990.
Further Information
For further information consult the textbooks Modern Control Systems by Dorf [1996] or Linear Systems Theory
by Szidarovszky and Bahill [1992].