Closed Loop Control

8.1 General Remarks

The development of closed loop control has been one of the major technological achievements of the twentieth century. This technology is a vital ingredient in countless industrial, commercial, and even domestic products. It is a central feature of aircraft, spacecraft, and all robotics. Perhaps the earliest known example of this kind of control is the fly-ball governor that James Watt used in his steam engine in 1784 to regulate the speed of the engine. This was followed by automatic control of torpedoes in the nineteenth century (Bollay, 1951), and later by the dramatic demonstration of the gyroscopic autopilot by Sperry in 1910, highly relevant in the present context. Still later, and the precursor to the development of a general theoretical approach, was the application of negative feedback to improve radio amplifiers in the 1930s. The art of automatic control was quite advanced by the time of the landmark fourteenth Wright Brothers lecture (Bollay, 1951).¹ Most of what is now known as "classical" control theory—the work of Routh, Nyquist, Bode, Evans, and others—was described in that lecture. From that time on the marriage of control concepts with analogue and digital computation led to explosive growth in the sophistication of the technology and the ubiquity of its applications.

¹In 1951 most aeronautical engineers were using slide rules and had not heard of a transfer function!

Although open-loop responses of aircraft, of the kind studied in some depth in Chap. 7, are very revealing in bringing out inherent vehicle dynamics, they do not in themselves usually represent real operating conditions. Every phase of the flight of an airplane can be regarded as the accomplishment of a set task—that is, flight on a specified trajectory. That trajectory may simply be a straight horizontal line traversed at constant speed, or it may be a turn, a transition from one symmetric flight path to another, a landing flare, following an ILS or navigation radio beacon, homing on a moving target, etc. All of these situations are characterized by a common feature, namely, the presence of a desired state, steady or transient, and of departures from it that are designated as errors. These errors are of course a consequence of the unsteady nature of the real environment and of the imperfect nature of the physical system comprising the vehicle, its instruments, its controls, and its guidance system (whether human or automatic). The correction of errors implies a knowledge of them, that is, of error-measuring (or state-measuring) devices, and the consequent actuation of the controls in such a manner as to reduce them. This is the case whether control is by human or by automatic pilot. In the former case—the human pilot—the state information sensed is a complicated blend of visual and motion cues, and instrument readings. The logic by which this information is converted into control action is only imperfectly understood, but our knowledge of the physiological "mechanism" that intervenes between logical output and control actuation is somewhat better. In the latter case—the automatic control—the sensed information, the control logic, and the dynamics of the control components are usually well known, so that system performance is in principle quite predictable. The process of using state information to govern the control inputs is known as closing the loop, and the resulting system as a closed-loop control or feedback control. The terms regulator and servomechanism describe particular applications of the feedback principle. Figure 8.1 shows a general block diagram describing the feedback situation in a flight control system. This diagram models a linear invariant system, which is of course an approximation to real nonlinear time-varying systems. The approximation is a very useful one, however, and is used extensively in the design and analysis of flight control systems. In the diagram the arrows show the direction of information flow; the lowercase symbols are vectors (i.e., column matrices), all functions of time; and the uppercase symbols are matrices (in general rectangular). The vectors have the following meanings:

r: reference, input, or command signal, dimensions (p × 1)
z: feedback signal, dimensions (p × 1)
e: error, or actuating, signal, dimensions (p × 1)
c: control signal, dimensions (m × 1)
g: gust vector (describing atmospheric disturbances), dimensions (l × 1)
x: airplane state vector, dimensions (n × 1)
y: output vector, dimensions (q × 1)
n: sensor noise vector, dimensions (q × 1)

Of the above, x and c are the same state and control vectors used in previous chapters. r is the system input, which might come from the pilot's controller, from an external navigation or fire control system, or from some other source. It is the command that the airplane is required to follow. The signal e drives the system to make z follow r. It will be zero when z = r. The makeup of the output vector y is arbitrary, constructed to suit the requirements of the particular control objective. It could be as simple as just one element of x, for example. The feedback signal z is also at the discretion of the designer via the choice of feedback transfer function H(s). The choices made for D(s), E(s), and H(s) collectively determine how much the feedback signal differs from the state. With certain choices z can be made to be simply a subset of x, and it is then the state that is commanded to follow r.

The vector g describes the local state of motion of the atmosphere. This state may consist of either or both discrete gusts and random turbulence. It is three-dimensional and varies both in space and time. Its description is inevitably complex, and to go into it in depth here would take us beyond the scope of this text. For a more complete discussion of g and its closely coupled companion G' the student should consult Etkin (1972) and Etkin (1981).

In real physical systems the state has to be measured by devices (sensors) such as, for example, gyroscopes and Pitot tubes, which are inevitably imperfect. This imperfection is commonly modeled by the noise vector n, usually treated as a random function of time.

The equations that correspond to the diagram are (recall that overbars represent Laplace transforms):

e = r − z                  (a)
c = J(s)e                  (b)
x = G(s)c + G'(s)g         (c)        (8.1,1)
y = Dx + Ec                (d)
z = H(s)(y + n)            (e)

In the time domain (8.1,1c) appears as

ẋ = Ax + Bc + Tg        (8.1,2)

It follows that

G(s) = (sI − A)⁻¹B  and  G'(s) = (sI − A)⁻¹T        (8.1,3)
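Equation (8.1,3) can be evaluated numerically at any point s by solving a linear system rather than forming the inverse explicitly. The sketch below does this with NumPy for a small illustrative state model; the matrices A and B are arbitrary assumptions, not taken from any airplane model in this text.

```python
import numpy as np

# Illustrative 2x2 state matrix and 2x1 control matrix (assumed values,
# not from any airplane model in the text).
A = np.array([[-0.5,  1.0],
              [-2.0, -1.5]])
B = np.array([[0.0],
              [1.0]])

def G(s, A, B):
    """Evaluate G(s) = (sI - A)^(-1) B by solving (sI - A) X = B."""
    n = A.shape[0]
    return np.linalg.solve(s * np.eye(n) - A, B)

# Frequency response at s = j*omega for omega = 1 rad/s:
print(G(1j, A, B))
```

The same routine evaluates G'(s) by passing the gust-distribution matrix T in place of B.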

The feedback matrix H(s) represents any analytical operations performed on the output signal. The transfer function matrix J(s) represents any operations performed on the error signal e, as well as the servo actuators that drive the aerodynamic control surfaces, including the inertial and aerodynamic forces (hinge moments) that act on them. The servo actuators might be hydraulic jacks, electric motors, or other devices. This matrix will be a significant element of the system whenever there are power-assisted controls or when the aircraft has a fly-by-wire or fly-by-light AFCS.

From (8.1,1) we can derive expressions for the three main transfer function matrices. By eliminating x, e, c, and z we get

[I + (DG + E)JH]y = (DG + E)Jr − (DG + E)JHn + DG'g        (8.1,4)

from which the desired transfer functions are

Gyr = [I + (DG + E)JH]⁻¹(DG + E)J         (a)
Gyn = −[I + (DG + E)JH]⁻¹(DG + E)JH       (b)        (8.1,5)
Gyg = [I + (DG + E)JH]⁻¹DG'               (c)
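As a numerical sanity check, (8.1,5a) can be verified directly against the loop equations (8.1,1) with g = n = 0 by treating the transfer matrices as constant matrices evaluated at one fixed s. All dimensions and matrix values below are arbitrary assumptions, chosen only to exercise the algebra.

```python
import numpy as np

rng = np.random.default_rng(0)
n_, m_, p_, q_ = 4, 2, 2, 3        # assumed dimensions

# Small random stand-ins for D, G, E, J, H at one fixed s (scaled so
# that I + FH stays well conditioned).
D = 0.3 * rng.standard_normal((q_, n_))
G = 0.3 * rng.standard_normal((n_, m_))
E = 0.3 * rng.standard_normal((q_, m_))
J = 0.3 * rng.standard_normal((m_, p_))
H = 0.3 * rng.standard_normal((p_, q_))

F = (D @ G + E) @ J                             # forward path, (q x p)
Gyr = np.linalg.solve(np.eye(q_) + F @ H, F)    # (8.1,5a)

# Close the loop by hand and confirm y = Gyr r satisfies (8.1,1):
r = rng.standard_normal((p_, 1))
y = Gyr @ r
e = r - H @ y          # (8.1,1a) with n = 0, so z = Hy
c = J @ e              # (8.1,1b)
x = G @ c              # (8.1,1c) with g = 0
assert np.allclose(y, D @ x + E @ c)            # (8.1,1d) holds
```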

The matrices that appear in (8.1,5) have the following dimensions:

D (q × n);  G (n × m);  E (q × m);  J (m × p);  H (p × q);  G' (n × l)

The forward-path transfer function, from e to y, is

F(s) = (DG + E)J,  dimensions (q × p)

so the preceding transfer functions can be rewritten as

Gyr = (I + FH)⁻¹F
Gyn = −(I + FH)⁻¹FH
Gyg = (I + FH)⁻¹DG'

Note that F and H are both scalars for a single-input, single-output system.
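In the scalar case these relations reduce to the familiar forms Gyr = F/(1 + FH) and Gyn = −FH/(1 + FH), which can be evaluated pointwise in s. In the sketch below F is an assumed first-order lag with unity feedback; the numbers are illustrative only, not taken from the text.

```python
def F(s):
    """Assumed forward-path transfer function: a first-order lag."""
    return 10.0 / (s + 1.0)

def H(s):
    """Unity feedback (assumed)."""
    return 1.0

def G_yr(s):
    """Closed-loop response of y to the command r: F / (1 + FH)."""
    return F(s) / (1.0 + F(s) * H(s))

def G_yn(s):
    """Closed-loop response of y to sensor noise n: -FH / (1 + FH)."""
    return -F(s) * H(s) / (1.0 + F(s) * H(s))

# Steady-state command following: G_yr(0) = 10/11, so a small residual
# error remains with finite loop gain.
print(G_yr(0))
```

With H = 1 the two responses have equal magnitude, G_yn(s) = −G_yr(s): whatever commands the loop follows in r, it follows sensor noise n equally well, which is the basic design tension in choosing loop gain.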

When the linear system model is being formulated in state space, instead of in Laplace transforms, then one procedure that can be used (see Sec. 8.8) is to generate an augmented form of (8.1,2). In general this is done by writing time domain equations for J and H, adding new variables to x, and augmenting the matrices A and B accordingly. An alternative technique for using differential equations is illustrated in Sec. 8.5. There is a major advantage to formulating the system model as a set of differential equations. Not only can they be used to determine transfer functions, but when they are integrated numerically it is possible, indeed frequently easy, to add a wide variety of nonlinearities. These include second degree inertia terms, dead bands and control limits (see Sec. 8.5), Coulomb friction, and nonlinear aerodynamics given as analytic functions or as lookup tables.
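As a minimal illustration of that closing point, the sketch below integrates a scalar analogue of (8.1,2) by explicit Euler steps with a control-limit (saturation) nonlinearity in the feedback path. The plant constants and gains are assumptions chosen for the example, not from any model in the text.

```python
def saturate(u, limit):
    """Control limit: clip the commanded control to +/- limit."""
    return max(-limit, min(limit, u))

def simulate(r, t_end=10.0, dt=1e-3):
    """Integrate x_dot = a*x + b*c with proportional feedback
    c = saturate(k*(r - x), limit), using explicit Euler steps."""
    a, b, k, limit = -1.0, 1.0, 5.0, 2.0   # assumed plant and gains
    x, t = 0.0, 0.0
    while t < t_end:
        c = saturate(k * (r - x), limit)   # feedback law with control limit
        x += dt * (a * x + b * c)          # Euler step
        t += dt
    return x

print(simulate(1.0))   # settles near the linear equilibrium 5/6
```

Because the commanded control saturates early in the transient (c is held at 2 until the error falls below 0.4), the initial response is slower than the linear model predicts, but the final value still settles to the linear equilibrium x = 5/6. Dead bands, Coulomb friction, or tabulated aerodynamics drop into the same loop just as easily.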