EQUILIBRIUM, CONTROL, AND STABILITY
Equilibrium denotes a steady state of the system, one in which all the state variables are constant in time. The “motion” corresponding to equilibrium is represented by a point in the state space. The nonautonomous inputs associated with equilibrium must be zero or constant, the zero case preferably corresponding to the equilibrium point at the origin. The usual way of changing the equilibrium state, i.e., of exercising control over the system, is by means of the nonautonomous inputs; the appropriate subset of these can hence be termed the control vector, and the associated space the control space. The result of applying control is to cause the equilibrium point to move away from the origin in state space, and the locus of all its possible positions defines a region that is a map of the domain of the control vector in control space. The control is adequate only if this region contains all the desired operating states of the system (e.g., the orientation angles and speeds of a flight vehicle).
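For a linear system the map from control space to the equilibrium locus can be computed directly. The following sketch (all matrices are invented for illustration, not taken from the text) shows that for x′ = Ax + Bu each constant control u places the equilibrium at x_e = −A⁻¹Bu, so the equilibrium locus is the image of the control domain under the linear map −A⁻¹B:

```python
# Illustrative sketch (matrices A and B are arbitrary choices): for the
# linear system x' = A x + B u, a constant control u shifts the
# equilibrium point to x_e = -A^{-1} B u.

A = [[-1.0, 2.0],
     [0.0, -3.0]]
B = [[1.0],
     [1.0]]

def equilibrium(u):
    """Solve A x = -B u for the 2x2 case by Cramer's rule."""
    b = [-B[0][0] * u, -B[1][0] * u]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return x1, x2

print(equilibrium(0.0))  # (0.0, 0.0): zero control keeps the equilibrium at the origin
print(equilibrium(3.0))  # (5.0, 1.0): nonzero control moves it away from the origin
```

Sweeping u over its admissible domain traces out the region of attainable equilibrium states described above.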
Stability embraces a class of concepts that, while readily appreciated intuitively, are not easily defined in a universal way. In the past, a common view of system stability has been that it is a property of the equilibrium state, as follows. Let a system be in equilibrium, and for convenience let the equilibrium point be chosen as the origin O of state space. Now let the initial state for the autonomous system be at a point P (see Fig. 3.3a) in the immediate neighborhood of O. Three possibilities exist for the subsequent motion, illustrated by the three trajectories a, b, and c in the figure.
(a) The state point moves back to the origin.
(b) It remains finite but greater than zero for all subsequent time.
(c) It goes off to infinity.
The trajectory will of course, for a given system, depend on the direction of OP in state space. For example, Fig. 3.3b shows the equilibrium of a ball on a saddle surface. It is evident that displacement in the x direction leads to a type (c) trajectory, and displacement in the y direction (in the presence of damping) to one of type (a). In this view of stability, the equilibrium point would be said to be stable if only type (a) trajectories could occur regardless of the direction of OP, and unstable if type (c) trajectories could occur. The saddle point is therefore an unstable equilibrium. The question of the magnitude of OP must be considered as well. If the system is linear, the conclusion about stability is independent of the magnitude of OP; but if it is not, the size of the initial disturbance (i.e., of OP) does matter. It may well be that the system is stable for small disturbances but unstable for large ones, as illustrated in Fig. 3.3c. The initial states for which the origin is stable in such a case lie within some region ℛ of the state space, as illustrated in Fig. 3.3c, and this is the “region of stability of O.”
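The saddle-point behavior can be checked numerically. This sketch (an illustrative model, not from the text) integrates the linearized motion of a damped particle on the saddle surface z = (x² − y²)/2: along x the restoring force is reversed, giving a type (c) trajectory, while along y the damped motion is of type (a):

```python
# Numerical sketch (illustrative constants): a damped particle on the
# saddle z = (x^2 - y^2)/2. Linearized about the origin, the x motion
# obeys q'' = +q - c q' (divergent, type (c)) and the y motion obeys
# q'' = -q - c q' (decays with damping, type (a)).

def simulate(sign, q0=0.01, v0=0.0, c=0.2, dt=1e-3, t_end=10.0):
    """Integrate q'' = sign*q - c*q' by semi-implicit Euler; return final q."""
    q, v = q0, v0
    for _ in range(int(t_end / dt)):
        v += (sign * q - c * v) * dt
        q += v * dt
    return q

x_final = simulate(+1.0)   # displacement along x
y_final = simulate(-1.0)   # displacement along y

print(abs(x_final) > 1.0)   # True: the x disturbance grows without bound
print(abs(y_final) < 0.01)  # True: the y disturbance decays below its initial size
```

The same small initial displacement thus leads to qualitatively opposite outcomes depending on its direction, which is exactly why the saddle point is an unstable equilibrium.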
More recently, the rediscovery of the work on stability by Lyapunov (ref. 3.2) (see also Sec. 3.5) has had a great influence on this subject. In the Lyapunov viewpoint, we speak not of the stability of a system, but of the stability of a particular solution of a system of equations. The solution may be quite general, for example the forced motion of a nonlinear time-varying system with particular initial conditions. Equilibrium is a special case of such a solution. In this special case the Lyapunov definition is as follows. Let δ and ε be the radii of two hyperspheres S₁ and S₂ in state space with centers at the equilibrium point, symbolically represented in two dimensions in Fig. 3.3d. These surfaces are such that for all initial states lying inside S₁ the subsequent solution lies for all time inside S₂. Then the origin is a stable point if there exists a δ > 0 for every ε > 0, no matter how small ε becomes. That is, the solution can be made arbitrarily small by choosing the initial conditions small enough. If the solution tends ultimately to zero, then the origin is asymptotically stable. If, when O is asymptotically stable, there exists a region ℛ such that all trajectories that originate within it decay to the origin, then ℛ is a finite region of stability. This notion is identical with that previously described. If ℛ is an infinite sphere, then the origin is globally stable. Note that if a linear system is asymptotically stable it is also globally stable. This fact is somewhat academic, since in nature “linear” systems always become nonlinear for “very large” state vectors.
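The ε–δ definition paraphrased above can be restated compactly in the standard Lyapunov form (a conventional formalization, not verbatim from the text):

```latex
% The origin x = 0 is stable in the sense of Lyapunov if
\forall\, \varepsilon > 0 \;\; \exists\, \delta(\varepsilon) > 0 : \quad
\|\mathbf{x}(0)\| < \delta \;\Longrightarrow\;
\|\mathbf{x}(t)\| < \varepsilon \quad \text{for all } t \ge 0 .
% It is asymptotically stable if, in addition,
\lim_{t \to \infty} \mathbf{x}(t) = \mathbf{0}
% for all x(0) in some neighborhood of the origin.
```

Here the sphere of radius δ plays the role of S₁ and the sphere of radius ε the role of S₂ in Fig. 3.3d.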
The Lyapunov condition for a region of stability ℛ will be met whenever the solution is a “well-behaved” function of the initial conditions, that is, if ∂xᵢ(T)/∂xⱼ(0) is finite in ℛ for all i, j, and T, where x is the state vector. In particular this must hold in the limit as T → ∞.
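The sensitivities ∂xᵢ(T)/∂xⱼ(0) can be estimated directly by finite differences. This sketch (the system and all constants are illustrative choices, not from the text) does so for a damped oscillator, for which the sensitivities remain finite, indeed decay, as T grows:

```python
# Sketch (illustrative system): estimate dx_i(T)/dx_j(0) by finite
# differences for the damped oscillator x1' = x2, x2' = -x1 - 0.5*x2.
# Because this system is asymptotically stable, the sensitivity matrix
# stays bounded as T -> infinity, satisfying the Lyapunov condition.

def flow(x1, x2, T, dt=1e-3):
    """Integrate the system forward by time T (semi-implicit Euler)."""
    for _ in range(int(T / dt)):
        x2 += (-x1 - 0.5 * x2) * dt
        x1 += x2 * dt
    return x1, x2

def sensitivity(T, eps=1e-6):
    """Finite-difference estimate of the 2x2 matrix dx_i(T)/dx_j(0) near the origin."""
    rows = []
    for j in range(2):
        x0 = [0.0, 0.0]
        x0[j] = eps            # perturb only the j-th initial coordinate
        xT = flow(x0[0], x0[1], T)
        rows.append([xT[i] / eps for i in range(2)])   # rows[j][i] = dx_i(T)/dx_j(0)
    return rows

S = sensitivity(T=20.0)
print(max(abs(v) for row in S for v in row) < 1.0)  # True: bounded dependence on x(0)
```

For an unstable system the same estimate would grow without bound as T increases, signalling failure of the condition.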
A striking illustration of this point of view is afforded by the unstable

Fig. 3.3 Stability of equilibrium. (a) Trajectories in state space. (b) Saddle point. (c) Finite region of stability. (d) Lyapunov definition of stability. (e) Illustrating discontinuity in solutions. (f) Limit cycle.
system of Fig. 3.3e, in which a particle is free to slide without friction along a horizontal pointed ridge. The sides are infinite in the x and y directions. One solution, of course, is uniform rectilinear motion at speed U on the ridge (trajectory a). If a small initial tangential velocity v in the downhill direction be added, the motion is a trajectory such as b. In the limit as v → 0, the limiting trajectory is one like c, tangent to Ox at the origin. Thus there is a gap between a and c that contains no solutions at all for the given U, even for finite times. If the top of the ridge were rounded off instead of pointed, the solutions for all finite t would be continuous in v. However, even in that case, as t → ∞ the limit of y/v as v → 0 tends to infinity, so that y(∞) is not a continuous function of v, and hence the basic solution a is unstable.
When the solution to be investigated is not the simple one discussed above, i.e., equilibrium, the stability criterion is still that of continuity, as above.
Alternatively, the general case can be reduced to the particular case as follows. Let the system equation be

ẋ = f(x, t)     (3.1,1)

and let the particular solution be x₀(t). Now let the variation from x₀ associated with a change in the initial condition only be

y(t) = x(t) − x₀(t)     (3.1,2)

Then

ẏ = ẋ(t) − ẋ₀(t)
  = f(x, t) − f(x₀, t)

or

ẏ = f(y + x₀(t), t) − f(x₀(t), t)     (3.1,3)

Since x₀(t) is presumed known, (3.1,3) is an equation of the form

ẏ = g(y, t)     (3.1,4)

for which y = 0 is the solution corresponding to x(t) = x₀(t). Thus (3.1,4) defines a system that has an equilibrium point at the origin, and the discussion of its stability has already been given. In this way the stability of any transient solution is reduced to that of stability of equilibrium.
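The construction of g from f and the known reference solution is mechanical. The following sketch (the particular f and x₀ are invented examples, not from the text) carries it out for a forced scalar system and confirms that y = 0 is an equilibrium of the reduced equation:

```python
# Sketch of the reduction (3.1,2)-(3.1,4), with illustrative choices of
# f and x0: given x' = f(x, t) and a known particular solution x0(t),
# the perturbation y = x - x0 satisfies
#   y' = g(y, t) = f(y + x0(t), t) - f(x0(t), t),
# which has y = 0 as an equilibrium point.

import math

def f(x, t):
    # example scalar system: x' = -x + sin(t)
    return -x + math.sin(t)

def x0(t):
    # its steady forced solution: x0(t) = (sin t - cos t) / 2
    return 0.5 * (math.sin(t) - math.cos(t))

def g(y, t):
    """Right side of the reduced equation y' = g(y, t)."""
    return f(y + x0(t), t) - f(x0(t), t)

print(g(0.0, 1.7))  # 0.0: y = 0 is an equilibrium of the reduced system
```

For this linear example g(y, t) reduces to −y, so the perturbation decays and the particular solution x₀(t) is asymptotically stable, exactly the reduction the text describes.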
A particular kind of solution that is of interest is the limit cycle, illustrated, again in two dimensions, in Fig. 3.3f by the closed curve C. It may be orbitally stable, in which case neighboring trajectories such as (b) are asymptotic to it, or unstable, in which case neighboring trajectories such as (a), starting arbitrarily close to C, never come back to it.
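Orbital stability can be demonstrated numerically. The text's Fig. 3.3f is generic; the sketch below uses the Van der Pol oscillator, a standard example of an orbitally stable limit cycle (the equation and constants are this example's, not the text's), and shows that trajectories started inside and outside the closed orbit settle onto the same cycle:

```python
# Numerical sketch: the Van der Pol oscillator
#   x'' - mu*(1 - x^2)*x' + x = 0
# has an orbitally stable limit cycle. Trajectories launched well inside
# and well outside the cycle both approach the same closed orbit.

def vdp_amplitude(x0, mu=1.0, dt=1e-3, t_end=60.0):
    """Integrate from (x0, 0) and return the late-time peak |x|."""
    x, v = x0, 0.0
    peak = 0.0
    steps = int(t_end / dt)
    for k in range(steps):
        v += (mu * (1.0 - x * x) * v - x) * dt
        x += v * dt
        if k > steps // 2:          # measure only after transients decay
            peak = max(peak, abs(x))
    return peak

inner = vdp_amplitude(0.1)   # starts well inside the limit cycle
outer = vdp_amplitude(4.0)   # starts well outside it
print(abs(inner - outer) < 0.1)  # True: both settle on the same cycle
```

Both runs converge to an amplitude near 2, illustrating trajectories of type (b) in Fig. 3.3f; an orbitally unstable cycle would instead repel both.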
Finally, we should remark that Lyapunov’s definition is concerned only with variations in the initial conditions of a solution. Clearly there are two other important practical cases: (1) stability with respect to perturbations in the input, and (2) stability with respect to system or environmental parameters. Stability with respect to perturbations in the input or the system parameters can be defined in a manner quite analogous to that with respect to the initial conditions.