Introduction to Control Theory
PRENTICE-HALL INTERNATIONAL SERIES IN THE PHYSICAL AND CHEMICAL ENGINEERING SCIENCES. NEAL R. AMUNDSON, EDITOR, University of Minnesota
ADVISORY EDITORS: ANDREAS ACRIVOS, Stanford University; JOHN DAHLER, University of Minnesota; THOMAS J. HANRATTY, University of Illinois; JOHN M. PRAUSNITZ, University of California; L. E. SCRIVEN, University of Minnesota
AMUNDSON Mathematical Methods in Chemical Engineering. ARIS Elementary Chemical Reactor Analysis. ARIS Introduction to the Analysis of Chemical Reactors. ARIS Vectors, Tensors, and the Basic Equations of Fluid Mechanics. BOUDART Kinetics of Chemical Processes. FREDRICKSON Principles and Applications of Rheology. HAPPEL AND BRENNER Low Reynolds Number Hydrodynamics. HIMMELBLAU Basic Principles and Calculations in Chemical Engineering, 2nd ed. HOLLAND Multicomponent Distillation. HOLLAND Unsteady State Processes with Applications in Multicomponent Distillation. KOPPEL Introduction to Control Theory: With Applications to Process Control. LEVICH Physicochemical Hydrodynamics. PETERSEN Chemical Reaction Analysis. PRAUSNITZ AND CHUEH Computer Calculations for High-Pressure Vapor-Liquid Equilibria. PRAUSNITZ, ECKERT, ORYE, O'CONNELL Computer Calculations for Multicomponent Vapor-Liquid Equilibria. WHITAKER Introduction to Fluid Mechanics. WILDE Optimum Seeking Methods.
PRENTICE-HALL, INC. PRENTICE-HALL INTERNATIONAL, INC., UNITED KINGDOM AND EIRE. PRENTICE-HALL OF CANADA, LTD., CANADA.
Introduction to Control Theory with Applications to Process Control

LOWELL B. KOPPEL
Professor of Chemical Engineering
Purdue University

PRENTICE-HALL, INC.
Englewood Cliffs, N.J.

PRENTICE-HALL INTERNATIONAL, INC., London
PRENTICE-HALL OF AUSTRALIA, PTY. LTD., Sydney
PRENTICE-HALL OF CANADA, LTD., Toronto
PRENTICE-HALL OF INDIA PRIVATE LTD., New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo
© 1968 by Prentice-Hall, Inc., Englewood Cliffs, N.J.

All rights reserved. No part of this book may be reproduced in any form or by any means without permission in writing from the publisher.

Current printing (last digit):
10 9 8 7 6 5 4 3 2 1

Library of Congress Catalog Card Number 68-9803

Printed in the United States of America
Preface
This book is written as an introduction to the basic concepts of modern control theory and as an indication of possible application of these concepts to process control. It is assumed that the reader is familiar with the classical control theory covered in most introductory texts on feedback control. In addition, a knowledge of matrix algebra is assumed. Problems are provided at the end of most chapters so that the book can be used as text material for a graduate course on automatic control. A certain bias toward chemical and mechanical engineering aspects of control theory will no doubt be detected. Examples of physical systems are primarily heat exchangers and chemical reactors. However, a primary purpose for writing this book is to present a sufficiently rigorous and general account of control theory to prepare the reader for the rapidly expanding research and development literature on automatic control. Comparison of the recent control literature from any of the several engineering disciplines (electrical, mechanical, chemical, science, etc.) with that of a decade ago shows considerable difference in the background and level of mathematical sophistication assumed of the reader. It is hoped that the material selected for this book will provide some of this background. Automatic control has become a mathematically demanding discipline, perhaps more so than any other normally encountered by engineers. If there is little escape from these demands in applying present-day control theory
to problems of technology, there will be less in the future. Concepts such as existence and uniqueness, sufficiency and necessity, once considered the sole province of mathematicians, now appear in the control engineering literature as essential features of the problem solution. Therefore, I have attempted not to gloss over these aspects. The book begins with a review and then treats, for both continuous and discrete systems, the subjects of state variables, Lyapunov stability, and optimization. It concludes with some specific, suggestive illustrations on practical application to process control. The treatment of optimization is in no way competitive with that presented in recent texts devoted exclusively to this subject. I have presented only an incomplete treatment of Pontryagin's minimum principle and Bellman's dynamic programming, as they apply to problems already formulated in the state transition framework of modern control theory. Omitted are all the other interesting techniques for extremizing general functions of several variables. More attention has been given to the minimum principle than to dynamic programming, for optimization in continuous control systems, because it is my opinion that the former is a more useful computational tool for this class of problems. Chapter 6 contains a rather detailed account of minimum integral-square-error and minimum time control of linear systems. The purpose is to indicate how the complete description of dynamic behavior available for linear systems, in the form of the transition matrix, enables a relatively complete solution to the optimal control problem. Also, these problems illustrate the mathematical care required to definitively solve a problem in control theory. The examples, particularly on stability and optimization, are designed to be as simple as possible, generally not requiring a computer for solution.
I believe that the disadvantages of simple examples are outweighed by their major advantage: They enable complete involvement and verification by the reader, while illustrating the essential concept. Where necessary, computer-solved examples are presented, or referred to in the literature. No mention has been made of stochastic control systems, because the space required to treat this subject does not appear justified in view of the dearth of presently established applications of statistical control theory to process control. The rather long Appendix C on sampled-data systems is included because chemical engineers generally have no introduction to this basic aspect of control theory. The book can be read without this background, but at some disadvantage. In fact, all material on discrete systems may be omitted without loss of continuity. I wish to acknowledge and thank my colleagues D. R. Coughanowr, H. C. Lim, Y. P. Shih, P. R. Latour, and H. A. Mosler for their contributions. Purdue Research Foundation generously provided me with a summer grant to assist my writing efforts. The Purdue School of Chemical Engineering allowed use of early drafts as the text material for its graduate course in process control. I am grateful for all this assistance.

LOWELL B. KOPPEL
Lafayette, Indiana
Contents
1. Review of Classical Methods for Continuous Linear Systems  1
Linear systems analysis, 1. Classical control techniques, 11. Minimization of integral square error, 32.

2. Review of Classical Methods for Discrete Linear Systems  43
General linear transfer function, 44. Pulse transfer function, 45. Delta-function response, 48. Stability of linear discrete systems, 49. Compensation of linear discrete systems, 50.

3. State Variables for Continuous Systems  56
Basic state concepts, 56. Ordinary differential equations, 59. Linear systems with constant coefficients, 62. Linear systems with time-varying coefficients, 73. Controllability and observability, 77. Distributed-parameter systems, 86. Nonlinear systems, 88.

4. State Variables for Discrete Systems  93
Basic state concepts, 93. Difference equations, 95. Linear systems with constant coefficients, 96. Linear systems with time-varying coefficients, 100. Controllability and observability, 105. Sampled-data systems, 106.

5. Lyapunov Stability Theory  113
Nonlinear spring-mass-damper system, 114. Free, forced, and autonomous systems, 116. Definitions of stability, 117. Basic stability theorems, 119. Proofs of stability theorems, 130. Stability of linear systems, 133. Linear approximation theorems, 135. Krasovskii's theorem, 143. Estimation of transients, 147. Parameter optimization using Lyapunov functions, 152. Use of Lyapunov functions to design controllers, 156. Stability of discrete systems, 161.

6. Continuous Systems Optimization  171
An introductory problem: variational methods, 171. The minimum principle, 182. General results for linear systems, 210. Optimal control of linear systems with quadratic performance criteria, 211. Time-optimal control of linear, stationary systems, 226. Time-optimal control of nonlinear systems, 254. Numerical procedures for optimization by the minimum principle, 258. Dynamic programming, 270.

7. Optimization of Discrete Control Systems  291
Dynamic programming, 292. Discrete minimum principle, 299. Time-optimal control of linear sampled-data systems, 307.

8. Optimal Control of Distributed-Parameter Systems  315
Systems with pure delay, 315. Minimum principle for distributed-parameter systems, 319. Optimal exit control of distributed-parameter systems, 328.

9. Chemical Process Control  332

10. Time-Optimal Process Control  339
Analysis of programmed time-optimal control, 340. Feedback time-optimal control, 360.

11. Design of Digital Process Controllers  366

Appendices

A. Existence and Uniqueness Theorems for Equation (3-4)  383

B. Vectors and Matrices  392

C. Classical Methods for Sampled-Data Systems  398
Basic considerations of sampling, 399. Z-transform methods, 418. Continuous closed-loop control with sampled signals, 432. Direct digital control of processes, 437.

D. Controllability and Observability  447

E. Filtering, Estimation, Differentiation, and Prediction  451
Single-exponential smoothing, 453. Continuous analog to single-exponential smoothing, 455. Ramp response, 456. Double-exponential smoothing, 457. Continuous version of double-exponential smoothing, 459.

Index  463
Introduction to Control Theory
1

Review of Classical Methods for Continuous Linear Systems
Linear Systems Analysis

In this chapter we will review briefly some important aspects of what may be called classical continuous control theory. It will be assumed that the reader has some general acquaintance with these topics, although the treatment will be self-contained. From the vast body of knowledge which might justifiably be included in the term classical control theory, we select only those topics which are important for perspective. Therefore, detailed expositions of design methods and system compensation techniques, although of considerable importance to one wishing to solve problems in automatic control, are omitted here because their inclusion would make the major goal (perspective) considerably more difficult to attain. Furthermore, they have been well documented in standard references on the subject of automatic control.

Definition of system
For the purposes of the book, a system will be defined as any physical device which, upon application of an externally selected function m(t), produces an observable function c(t). It is further assumed that the system
is deterministic; that is, the application of a given function m(t) always produces the same function c(t). This precludes a stochastic system from our analysis. For convenience we define the applied function m(t) as the input signal and the observed function c(t) as the output signal. Normally, the independent variable t will denote time. This definition serves to describe the single-variable system. When there is more than one input signal, denoted by m1(t), m2(t), m3(t), ..., and possibly more than one output signal, denoted by c1(t), c2(t), c3(t), ..., we are dealing with a multivariable system. In either event, the essential nature of the system concept is the cause-and-effect relationship between input and output signals. In this chapter, we further restrict attention to systems in which input and output signals are available continuously, i.e., at all values of t. In the next chapter, classical methods will be presented for analyzing systems whose signals are available intermittently. The distinction between the classical control theory to be studied in this chapter and what is sometimes called modern control theory, to be studied subsequently, is to some extent artificial. However, a basic factor distinguishing these two branches of control theory is that only the input and output signals are considered in classical control theory. This is incomplete because, even if complete knowledge of the cause-and-effect relationship between input and output signals is available, specification of the output signal c(t0) at some arbitrary instant of time t0, and of the complete behavior of the input signal m(t) for all future time (i.e., for all t > t0), is insufficient to completely determine the future behavior of the system. If the system relationship is a differential equation of order higher than one, it is necessary also to specify the values of derivatives of the output signal at t0 (or equivalent information) in order to predict future behavior.
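A small numerical sketch of this point (the system, input, and all numerical values below are hypothetical, chosen only for illustration, and do not come from the text): two simulations of a second-order system d²c/dt² + c = m(t) that agree on the output value c(0) and on the entire future input m(t) = 1, but differ in the unspecified initial slope dc/dt(0), produce different future outputs.

```python
# Hypothetical second-order system: d2c/dt2 + c = m(t), with m(t) = 1.
# Both runs share c(0) = 0 and the same input, but differ in dc/dt(0),
# so knowing c(t0) and m(t) alone does not determine the future output.

def simulate(c0, dc0, dt=1e-3, t_end=1.0):
    """Forward-Euler integration of d2c/dt2 = m(t) - c with m(t) = 1."""
    c, dc = c0, dc0
    for _ in range(int(t_end / dt)):
        ddc = 1.0 - c                     # m(t) = 1, so d2c/dt2 = 1 - c
        c, dc = c + dt * dc, dc + dt * ddc
    return c

c_a = simulate(c0=0.0, dc0=0.0)           # zero initial slope
c_b = simulate(c0=0.0, dc0=1.0)           # unit initial slope
# c_a and c_b differ substantially at t = 1, despite identical c(0) and m(t).
```

Analytically (for this illustrative system) the two futures are 1 - cos t and 1 - cos t + sin t, so the gap at t = 1 is sin 1 ≈ 0.84.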
This defect is eliminated in modern control theory by the introduction of the concept of state, as will be discussed in Chapter 3. A second factor distinguishing classical and modern control theory is chronology. Classical control theory is generally regarded as having begun in the 1920's, while modern control theory largely consists of developments which took place after 1950. A third factor is the compensation concept, to be discussed later in this chapter. To specialize further various aspects of our treatment, it will be necessary to introduce more adjectives. A linear system will be defined as one which obeys the principle of superposition with respect to the input signal. The superposition principle states that the response of the system to a linear combination of two input signals, a1m1(t) + a2m2(t), is the same linear combination of the responses to the two input signals. That is, the response is a1c1(t) + a2c2(t), where c1(t) and c2(t) are the responses to m1(t) and m2(t), respectively. A lumped-parameter system is one for which the cause-and-effect relationship between input signal and output signal consists of ordinary differential equations only, in contrast with a distributed-parameter system for which this relationship is described by partial differential equations. Note that we are not exercising particular care in distinguishing between the system and the mathematical relations describing its cause-and-effect behavior. This distinction is necessary to develop a mathematically rigorous description of these concepts,1 but it will not be necessary for our purposes. It will be obvious when we are referring to the physical process and when to the mathematical relationship used to express the cause-and-effect behavior between its input signal and output signal.

Analysis of linear, lumped-parameter systems
Consider the single-variable system shown in Fig. 1-1. This figure is drawn in the usual notational convention for analog diagrams. The triangularly shaped components integrate or sum, depending upon the symbol placed inside the triangle; the circular elements multiply by the constant written in the circle. Thus, the input signal m(t) is summed with a linear combination of the signals labeled x1(t), x2(t), ..., xn(t), to form the signal x0(t). This latter signal is then integrated to give x1(t) which is, in turn, integrated to give x2(t), etc. The signals x0(t), x1(t), ..., xn(t) are combined linearly to form the output signal c(t). We assert that the cause-and-effect relationship for the linear system of Fig. 1-1 is given by the linear differential equation

$$\frac{d^n c}{dt^n} + a_{n-1}\frac{d^{n-1} c}{dt^{n-1}} + \cdots + a_1\frac{dc}{dt} + a_0 c = b_n\frac{d^n m}{dt^n} + b_{n-1}\frac{d^{n-1} m}{dt^{n-1}} + \cdots + b_1\frac{dm}{dt} + b_0 m \qquad (1\text{-}1)$$

Figure 1-1  Analog simulation diagram for nth-order, linear, lumped-parameter system.
and we shall prove this below. The xi(t) variables will later be recognized (Chapter 3) as a set of state variables, and therefore the output signal c(t) is merely a linear combination of these state variables. Figure 1-1 has the advantage of making clear that Eq. (1-1), which is no doubt familiar to the reader, results from a system that integrates, and not from one that differentiates. In fact, there is no real physical system which can differentiate, although the differential operation may be closely approached by some physical systems.

EXAMPLE 1-1
Consider a cylindrical tank of constant cross-sectional area A being filled with a liquid at a variable rate m(t) volumes per time. As output variable c(t), we choose the instantaneous level of fluid in the tank, and we further suppose the tank loses liquid through an opening in its bottom at a rate k c(t) volumes per time, where k is a constant. Then we may write a differential mass balance as

$$A\,\frac{dc}{dt} = m(t) - k\,c(t)$$

which is merely a special case of Eq. (1-1) with n = 1, a0 = k/A, b1 = 0, b0 = 1/A. In this form we have stated that the instantaneous rate of accumulation is equal to the instantaneous rate of input less the instantaneous rate of output. However, we may also state that the total volume in the tank at any time t is given by the total volume which existed at a previous time t0, plus the integral since that time of the inlet rate minus the outlet rate:

$$A\,c(t) = A\,c(t_0) + \int_{t_0}^{t} \left[m(\theta) - k\,c(\theta)\right]\,d\theta$$
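A numerical sketch of this equivalence (the values A = 2.0, k = 0.5, and the constant inflow m(t) = 1.0 are illustrative assumptions, not from the text): stepping the differential form and accumulating the integral form produce the same level history.

```python
# Tank of Example 1-1 with assumed values A = 2.0, k = 0.5, m(t) = 1.0.
A, k = 2.0, 0.5
def m(t):
    return 1.0          # constant inflow, volumes per time (illustrative)

dt, t_end = 1e-3, 5.0
steps = int(t_end / dt)

# Differential form: A dc/dt = m(t) - k c(t), advanced by Euler steps.
c = 0.0
for i in range(steps):
    c += dt * (m(i * dt) - k * c) / A

# Integral form: A c(t) = A c(t0) + integral of [m - k c] d(theta) --
# the tank as an integrator, accumulating the same integrand explicitly.
c_int, acc = 0.0, 0.0
for i in range(steps):
    acc += dt * (m(i * dt) - k * c_int)
    c_int = acc / A

level_ode, level_integral = c, c_int
```

Both approach the steady level m/k = 2; at t = 5 the analytic solution 2(1 − e^(−1.25)) ≈ 1.427 is reproduced by either form.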
Thus, the tank physically integrates the difference between inlet and outlet stream rates to arrive at some current inventory. Of course, these equations are easily converted, one into the other, by differentiation or integration. The differential equation form is most often used because there are systematic procedures for its solution.

EXAMPLE 1-2
Suppose a system were described by the input-output relationship

$$c(t) = \frac{dm(t)}{dt}$$

This is a perfect differentiator. Physically, there is no difficulty in generating an input signal arbitrarily close to a perfect unit-step function, m(t) = S(t - t0), where

$$S(t - t_0) = \begin{cases} 0 & t < t_0 \\ 1 & t > t_0 \end{cases}$$
Then the perfect differentiator must produce an output signal arbitrarily close to an impulse function δ(t - t0), defined by

$$\delta(t - t_0) = 0 \qquad t \neq t_0$$

$$\int_{-\infty}^{\infty} \delta(t - t_0)\,dt = 1$$

so that δ(0) must be infinite. All physical systems are bounded in some sense and cannot produce arbitrarily large output signals. Hence, no physical system is a perfect differentiator. This reasoning may be extended to explain why the coefficient of the term dⁿc/dtⁿ in Eq. (1-1) cannot be zero, while the coefficients of all other terms may be zero. This example further explains why Eq. (1-1) results from a system which integrates.

To prove that Eq. (1-1) actually describes the system shown in Fig. 1-1, we note that the input to each of the integrators is the derivative of its output, and hence

$$x_k = \frac{d^{n-k} x_n}{dt^{n-k}} \qquad k = 0, 1, 2, \ldots, n \qquad (1\text{-}2)$$

Furthermore, since $x_0 = m - a_{n-1}x_1 - a_{n-2}x_2 - \cdots - a_0 x_n$, we obtain from substitution of Eq. (1-2)
$$\frac{d^n x_n}{dt^n} + a_{n-1}\frac{d^{n-1} x_n}{dt^{n-1}} + \cdots + a_1\frac{dx_n}{dt} + a_0 x_n = m \qquad (1\text{-}3)$$
Differentiating Eq. (1-3) k times yields

$$\frac{d^{n+k} x_n}{dt^{n+k}} + a_{n-1}\frac{d^{n+k-1} x_n}{dt^{n+k-1}} + \cdots + a_1\frac{d^{k+1} x_n}{dt^{k+1}} + a_0\frac{d^k x_n}{dt^k} = \frac{d^k m}{dt^k}$$

which, upon substitution of Eq. (1-2), results in, for k = 0, 1, 2, ..., n,

$$\frac{d^n x_{n-k}}{dt^n} + a_{n-1}\frac{d^{n-1} x_{n-k}}{dt^{n-1}} + \cdots + a_0 x_{n-k} = \frac{d^k m}{dt^k} \qquad (1\text{-}4)$$

Finally, Fig. 1-1 shows that

$$c(t) = b_0 x_n + b_1 x_{n-1} + \cdots + b_n x_0 \qquad (1\text{-}5)$$
Equation (1-1) follows immediately from Eqs. (1-4) and (1-5) when we take derivatives of c(t) and combine as required in Eq. (1-1). There are n integrators in Fig. 1-1, and the order of the system equation, Eq. (1-1), is also n. In general, an nth-order system will be one which performs n integrations. Furthermore, while it is possible to choose any of the ai or bi as zero, there is no choice which will result in an equation in which the input signal m(t) is differentiated more times than
is c(t). Therefore, any equation in which m(t) is differentiated more times than is c(t) cannot result from a system which integrates and will be physically unrealizable.

Transfer function
We shall define the Laplace transform more carefully below. For now it is assumed that the reader is familiar with the basic operational properties of this transform. Taking the Laplace transform of both sides of Eq. (1-1) and rearranging give the result
$$\frac{C(s)}{M(s)} = \frac{b_n s^n + b_{n-1} s^{n-1} + \cdots + b_1 s + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0} \triangleq G(s) \qquad (1\text{-}6)$$

which has been used as shown to define a transfer function G(s) as the ratio of two polynomials. Equation (1-6) is, of course, equivalent to

$$C(s) = G(s)\,M(s) \qquad (1\text{-}7)$$
which is the basic cause-and-effect relationship between the Laplace transforms of the input and output signals, M(s) and C(s). To derive Eq. (1-6) directly from Eq. (1-1) in this manner requires the assumption that the initial value and first (n - 1) initial derivatives of both c(t) and m(t) are zero. Actually, such a restriction is not required to obtain this result, which follows from only the assumption that all initial values of the variables x1, x2, ..., xn are zero. This derivation is the subject of one of the problems at the end of this chapter. The significance of this is that the transfer function relation of Eq. (1-7) is valid only for the so-called zero-state condition. Without yet defining the state concept, we can describe this zero-state condition as one in which the output variable c(t) will remain at zero if no input is applied to the system, i.e., if m(t) = 0. With reference to Fig. 1-1, if all initial conditions on the integrators are set to zero, and if m(t) is also held at zero, clearly c(t) = 0. Also note that the initial conditions which appear as a result of applying the Laplace transform operator to a differential expression are evaluated shortly after time zero, usually denoted by t = 0+. These initial conditions are on the variables c(t) and m(t) and need not be zero to have the transfer function of Eq. (1-6) apply. (See Problem 1-1.) Provided that the initial conditions on the variables x1, x2, ..., xn are zero, the transfer function relation is valid. Note that Fig. 1-1 provides a way to construct a computer system to simulate the behavior of any transfer function of the form of Eq. (1-6).

Impulse response
In what follows, we shall invariably restrict our attention to events occurring after a time arbitrarily designated as t = 0. We suppose that some arbitrary signal m(t) is applied as the input to a linear system, initially in the
zero state. Such a signal is sketched in Fig. 1-2. As shown in the figure, this signal may be approximated by a sequence of square pulses. The (k + 1)st pulse begins at a time t = kT, where T is the period of the pulsing and k is an integer; it ends at t = (k + 1)T and assumes a height equal to the value of the function at the beginning of the pulse, m(kT). By making T arbitrarily small, we may represent the signal m(t) as closely as desired by this pulse sequence.

Figure 1-2  Approximation of input signal by pulse sequence.

Because we are dealing with a linear system, we may apply the principle of superposition. The total response of the system to the sequence of pulses may be obtained by summing its response to each of the individual pulses. Consider the typical pulse occurring at time t = kT. This pulse may be regarded as the application of a step function at time t = kT with magnitude m(kT), followed by the application of a negative step function of the same magnitude at a time t = (k + 1)T. Let us denote by H(t - t0) the response of the linear system to a step of unit magnitude applied at time t = t0, S(t - t0). Then, by the principle of superposition, the response ck(t) due only to this square pulse is

$$c_k(t) = m(kT)\left\{H(t - kT) - H[t - (k+1)T]\right\}$$

At any time t in the interval kT < t < (k + 1)T, we may represent the total response as the sum of the responses caused by each of the previous pulses

$$c(t) = \sum_{j=0}^{k} m(jT)\left\{H(t - jT) - H[t - (j+1)T]\right\} \qquad (1\text{-}8)$$
We now propose to let T go to zero and k become infinite in such a manner that the product kT remains constant, kT = t. Furthermore, if we define jT = θ, it follows that

$$T = (j+1)T - jT = \Delta\theta$$

Then by definition, the summation in Eq. (1-8) approaches an integral, and we obtain

$$c(t) = \int_0^t m(\theta)\,\frac{dH}{d\theta}(t - \theta)\,d\theta \qquad (1\text{-}9)$$

We now define the derivative of the response to a unit-step function as the impulse response g(t - t0), that is

$$g(t - t_0) = \frac{d}{dt}\,H(t - t_0)$$
The term "impulse response" derives from the fact that g(t - t0) is also the response of the system when subjected to an impulse input, δ(t - t0). For physical reasons, an impulse input may never be obtained because it requires infinite values. Therefore, to make the definition physically acceptable, it is desirable to define g(t - t0) as we have shown. Then, Eq. (1-9) becomes

$$c(t) = \int_0^t m(\theta)\,g(t - \theta)\,d\theta \qquad (1\text{-}10)$$

A change in the variable of integration transforms this to

$$c(t) = \int_0^t m(t - \theta)\,g(\theta)\,d\theta \qquad (1\text{-}11)$$
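The convolution integral can be checked numerically for a concrete case. The sketch below assumes a first-order system with impulse response g(t) = e^(−t) (the inverse transform of 1/(s + 1)); the quadrature scheme and step counts are arbitrary illustrative choices, not from the text. For a unit-step input the convolution should reproduce the familiar step response 1 − e^(−t).

```python
import math

# Discrete version of c(t) = integral of m(t - theta) g(theta) d(theta),
# for an assumed first-order system with g(t) = exp(-t).
def g(t):
    return math.exp(-t) if t >= 0 else 0.0   # g = 0 for negative argument

def m(t):
    return 1.0 if t >= 0 else 0.0            # unit-step input

def convolve(t, n=2000):
    """Midpoint-rule approximation of the convolution integral at time t."""
    d = t / n
    return sum(m(t - (i + 0.5) * d) * g((i + 0.5) * d) for i in range(n)) * d

c_of_2 = convolve(2.0)
exact = 1.0 - math.exp(-2.0)   # known step response of G(s) = 1/(s + 1)
```

The numerical convolution agrees with the analytic step response to well within the quadrature error, illustrating that knowledge of g alone determines the output for an arbitrary input.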
In deriving Eq. (1-11) it is important to realize that, both by definition and on physical grounds, g(t - t0) = 0 for all time t satisfying t < t0. Applying the convolution theorem2 to either Eq. (1-10) or Eq. (1-11) results in Eq. (1-7); note that the unit-impulse response is the inverse transform of the transfer function. This follows from the fact that the Laplace transform of the unit impulse is unity. Equations (1-10) and (1-11) provide a complete description of the time behavior of the linear system. Once the function g(t - t0) is known, the output signal resulting from an arbitrary input signal may be computed. Once again, we have assumed in the derivation that the system is in the zero state before application of m(t), which accounts for the absence of any terms other than those which appear in Eq. (1-8).

Fourier and Laplace transforms
Either Eq. (1-10) or Eq. (1-11) completely characterizes the linear system in the time domain. There is an alternate way of expressing the cause-and-effect relation for linear systems, which involves the frequency domain. To develop this concept we will first have to establish the Fourier and Laplace transforms on more rigorous grounds. A periodic function f(t) may be represented as a linear combination of sine and cosine waves through use of the Fourier series3

$$f(t) = \frac{1}{T}\int_{-T/2}^{T/2} f(\theta)\,d\theta + \frac{2}{T}\sum_{n=1}^{\infty} \cos\frac{2n\pi t}{T}\int_{-T/2}^{T/2} f(\theta)\cos\frac{2n\pi\theta}{T}\,d\theta + \frac{2}{T}\sum_{n=1}^{\infty} \sin\frac{2n\pi t}{T}\int_{-T/2}^{T/2} f(\theta)\sin\frac{2n\pi\theta}{T}\,d\theta$$
where T is the period of the function f(t). We define the frequency ω0 = 2π/T and use the complex identities

$$\cos n\omega_0 t = \frac{\exp(jn\omega_0 t) + \exp(-jn\omega_0 t)}{2}$$
$$\sin n\omega_0 t = \frac{\exp(jn\omega_0 t) - \exp(-jn\omega_0 t)}{2j}$$

to convert the Fourier series to the equivalent exponential form

$$f(t) = \frac{1}{T}\sum_{n=-\infty}^{\infty} e^{jn\omega_0 t}\int_{-T/2}^{T/2} f(\theta)\,e^{-jn\omega_0\theta}\,d\theta$$

Now an arbitrary function f(t) may be regarded as a periodic function whose period becomes infinite. Thus, defining
$$\omega = n\omega_0 = \frac{2n\pi}{T}$$

it follows that

$$\frac{1}{T} = \frac{\omega_0}{2\pi} = \frac{(n+1)\omega_0 - n\omega_0}{2\pi} = \frac{\Delta\omega}{2\pi}$$

so that the Fourier series may be written in the form

$$f(t) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{j\omega t}\,\Delta\omega\int_{-T/2}^{T/2} f(\theta)\,e^{-j\omega\theta}\,d\theta$$
Allowing T → ∞ and Δω → 0 converts the summation to an integral. This integral relationship may be written as two integrals

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(j\omega)\,e^{j\omega t}\,d\omega \qquad (1\text{-}12)$$

$$F(j\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt \qquad (1\text{-}13)$$
where the Fourier transform F(jω) is defined by Eq. (1-13), and Eq. (1-12) gives a formula for inverting the Fourier transform to recalculate f(t). The Fourier transform F(jω) is the frequency representation of the time signal f(t). It is a continuous function of the frequency ω. This function describes an aperiodic signal f(t), just as the discrete set of coefficients in the Fourier series describes a periodic signal. In general, F(jω) is complex and therefore possesses a magnitude and argument. A typical way of representing its frequency content is to plot the magnitude |F(jω)| as a function of frequency. Problem 1-2 illustrates this concept. Mathematically speaking, however, the Fourier transform of a function is not guaranteed to exist unless that function is absolutely integrable,4 that is, unless

$$\int_{-\infty}^{\infty} |f(t)|\,dt < \infty$$

Many functions of interest, such as the step function, do not satisfy this condition. However, suppose there exists a real constant σ > 0 such that

$$\int_{-\infty}^{\infty} e^{-\sigma t}\,|f(t)|\,dt < \infty$$

If this integral converges for some σ = σ0, it will be true for all σ > σ0. Then σ0 is called the abscissa of convergence if it is the smallest constant for which the integral converges for all σ > σ0. We further insist that either f(t) = 0 for all t < 0, or else that any behavior of f(t) prior to time zero is of no significance. This restriction introduces no difficulties in practice. Then the Laplace transform is defined as the Fourier transform of e^(-σt) f(t) for all σ > σ0 and, as such, is guaranteed to exist. The equation for this Fourier transform is
$$F(\sigma + j\omega) = \int_0^{\infty} \left[e^{-\sigma t} f(t)\right] e^{-j\omega t}\,dt = \int_0^{\infty} f(t)\,e^{-(\sigma + j\omega)t}\,dt$$

Defining the complex variable s = σ + jω gives the familiar form of the Laplace transform

$$F(s) = \int_0^{\infty} f(t)\,e^{-st}\,dt$$

For the unit-step function, this gives F(s) = 1/s, which converges for all σ > 0. Since σ can be taken arbitrarily close to zero, we may take F(s) arbitrarily close to 1/jω, which is simply the result of replacing s by jω in the Laplace transform to obtain the Fourier transform. Another way of looking at this is that we are effectively approximating the step function with an exponential function which decays at an arbitrarily slow rate. The replacement of s in the Laplace transform by jω, to obtain the Fourier transform, is generally an acceptable practice; infrequently, however, it may lead to an erroneous result. An indication of this may be seen if the inverse Fourier transform of F(jω) = 1/jω is calculated using Eq. (1-12). The result is

$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{j\omega t}}{j\omega}\,d\omega = \begin{cases} -\tfrac{1}{2} & t < 0 \\ \tfrac{1}{2} & t > 0 \end{cases}$$
which is not the unit-step function. However, the inverse of F(jω) = 1/(a + jω) is f(t) = exp(-at) for all a > 0. Thus, the usual use of F(jω) = 1/jω may be justified in a practical sense, but not in a strictly mathematical sense. For the step function, the magnitude of F(jω) may therefore be written as 1/ω. This implies that the step function has decreasing frequency content at higher frequencies. As is well known, the use of a step input to experimentally test a process will yield information only about low-frequency behavior. Phenomena such as high-frequency resonance will go undetected. Pulse functions have higher content at the higher frequencies and, accordingly, are more desirable as experimental test functions. We can now exhibit the duality between the time and frequency domains. To do this we once again note that assuming a linear system allows application of the principle of superposition. In the time domain, we decomposed an arbitrary input m(t) into a sum of pulses. In the frequency domain, we decompose the input into a sum of complex exponentials (sine waves) by rewriting Eq. (1-12) for m(t) as a summation. The response of the linear system to this sum will be given by the sum of the responses to each term in the sum. Consider the typical term
$$\frac{\Delta\omega}{2\pi}\,M(j\omega)\,e^{j\omega t} \qquad (1\text{-}16)$$

The response of Eq. (1-1) to this forcing function will consist of a particular solution and a complementary solution. For now, we ignore the complementary solution since it will be evident when we obtain the final result that
this solution must disappear from our final expression. The particular solution will be of the form

$$c_\omega(t) = \phi(j\omega)\,\frac{\Delta\omega}{2\pi}\,M(j\omega)\,e^{j\omega t} \qquad (1\text{-}17)$$

where φ(jω) is the constant to be evaluated by the usual method of undetermined coefficients. Substituting Eqs. (1-16) and (1-17) into Eq. (1-1) yields the result

$$\phi(j\omega) = \frac{b_n (j\omega)^n + b_{n-1} (j\omega)^{n-1} + \cdots + b_1 (j\omega) + b_0}{(j\omega)^n + a_{n-1} (j\omega)^{n-1} + \cdots + a_1 (j\omega) + a_0}$$

which, upon comparison with Eq. (1-6), gives φ(jω) = G(jω).

Minimization of integral square error

Let f(t) be a function whose Laplace transform F(s) has an abscissa of convergence σ0 < 0; then,
$$\int_0^{\infty} f^2(t)\,dt = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} F(s)\,F(-s)\,ds \qquad (1\text{-}37)$$
where F(s) is the Laplace transform of f(t). The value of Eq. (1-37) is that we need only calculate the Laplace transform E(s) of the signal e(t), using the techniques already developed for overall transfer function analysis. Then Eq. (1-37) may be used directly to calculate the value of the performance function J without inversion. We outline a proof of this theorem in the following paragraphs. First, write the inversion integral for one of the f(t) terms in Eq. (1-37)
f(t) = (1/2πj) ∫_{−j∞}^{j∞} e^{st} F(s) ds

The conditions of the theorem guarantee that zero is an acceptable value of σ. Then it follows that

∫₀^∞ f²(t) dt = ∫₀^∞ f(t) [(1/2πj) ∫_{−j∞}^{j∞} e^{st} F(s) ds] dt

The conditions of the theorem also guarantee that the order of integration may be interchanged (see Churchill¹⁶) so that

∫₀^∞ f²(t) dt = (1/2πj) ∫_{−j∞}^{j∞} F(s) [∫₀^∞ e^{st} f(t) dt] ds = (1/2πj) ∫_{−j∞}^{j∞} F(s) F(−s) ds

since the inner integral is F(−s), and the theorem is proved. When F(s) may be expressed as the ratio of two polynomials, that is, when F(s) is rational, the value of this complex integral has been tabulated in terms of the polynomial coefficients by Newton et al.,¹⁷ who are apparently the original developers of the design technique. Let

F(s) = [c_{n−1}s^{n−1} + c_{n−2}s^{n−2} + ··· + c₁s + c₀] / [d_n sⁿ + d_{n−1}s^{n−1} + ··· + d₁s + d₀]
Then the value of J is

J = [c₂²d₀d₁ + (c₁² − 2c₀c₂)d₀d₃ + c₀²d₂d₃] / [2d₀d₃(d₁d₂ − d₀d₃)]    n = 3

J = [c₃²(d₀d₁d₂ − d₀²d₃) + (c₂² − 2c₁c₃)d₀d₁d₄ + (c₁² − 2c₀c₂)d₀d₃d₄ + c₀²(d₂d₃d₄ − d₁d₄²)] / [2d₀d₄(d₁d₂d₃ − d₀d₃² − d₁²d₄)]    n = 4    (1-38)
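As a sketch (not part of the text), the n = 3 and n = 4 table entries can be coded directly and checked against cases whose integrals are known in closed form; for example, F(s) = 1/(s + 1)⁴ corresponds to f(t) = t³e^{−t}/6, whose square integrates to 6!/(2⁷ · 36) = 0.15625.

```python
def J3(c, d):
    # n = 3 entry of the table; c = (c0, c1, c2), d = (d0, d1, d2, d3)
    c0, c1, c2 = c
    d0, d1, d2, d3 = d
    num = c2**2 * d0*d1 + (c1**2 - 2*c0*c2) * d0*d3 + c0**2 * d2*d3
    return num / (2 * d0 * d3 * (d1*d2 - d0*d3))

def J4(c, d):
    # n = 4 entry; c = (c0, c1, c2, c3), d = (d0, d1, d2, d3, d4)
    c0, c1, c2, c3 = c
    d0, d1, d2, d3, d4 = d
    num = (c3**2 * (d0*d1*d2 - d0**2*d3)
           + (c2**2 - 2*c1*c3) * d0*d1*d4
           + (c1**2 - 2*c0*c2) * d0*d3*d4
           + c0**2 * (d2*d3*d4 - d1*d4**2))
    return num / (2 * d0 * d4 * (d1*d2*d3 - d0*d3**2 - d1**2*d4))

# F(s) = 1/(s+1)^3: f(t) = t^2 e^{-t}/2, integral of f^2 = 3/16
print(J3((1, 0, 0), (1, 3, 3, 1)))        # 0.1875
# F(s) = 1/(s+1)^4: f(t) = t^3 e^{-t}/6, integral of f^2 = 20/128
print(J4((1, 0, 0, 0), (1, 4, 6, 4, 1)))  # 0.15625
```

This avoids ever inverting F(s), which is the whole point of Eq. (1-37).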
For values of n higher than four, the forms become very complex. A more complete table, up to n = 10, is available elsewhere.¹⁷ To illustrate the technique, we restrict our attention to the case of parameter optimization, in which the form of the controller transfer function is specified, and only its parameters are to be chosen to achieve a minimum value of J. As discussed by Chang,¹⁸ the method can, in fact, be used to pick the best linear controller transfer function Gc(s), applying a technique referred to as spectral factorization. Parameter optimization is a much less powerful approach to optimization than spectral factorization, because choice of the form of the controller offers more flexibility than does choice of only the parameters. However, in Chapter 6 we shall present more general optimization techniques for minimization of the J in Eq. (1-36); so we confine our present treatment to the following example of parameter optimization.
EXAMPLE 1-8

Consider the block diagram of Fig. 1-3, with

Gp(s) = 1/(s + 1)³
Gu(s) = 1/(s + 1)
Gc(s) = K(s + b₀)/s

The parameters K and b₀ are to be chosen to give a minimum value of

J = ∫₀^∞ e²(t) dt

when u(t) = S(t). This is the same system considered in Examples 1-4 and 1-6. Using ordinary block-diagram algebra, we find

E(s) = (s + 1)² / (s⁴ + 3s³ + 3s² + ps + q)
Chap. 1
Minimization of Integral Square Error
35
where

p ≡ 1 + K
q ≡ Kb₀

Obviously, the conditions e(t) < ∞ for all t, and e(t) = 0 for t ≤ 0, of Parseval's theorem are satisfied. Since¹⁹ e(∞) = 0 (the final-value theorem gives this immediately), we use Eq. (1-38) for n = 4 to obtain
J = (pq + 6q + 9 − p) / [2q(9p − 9q − p²)]

To minimize, we set ∂J/∂p = ∂J/∂q = 0 to obtain the simultaneous relations

(9 − p)(9p − 9q − p²) = 9q(pq + 6q − p + 9)
(9 − p)(2p − 9) = 9q(1 − q)
The solution must be found by trial, and it is

p = 4.75
q = 0.625

This may be checked as a true local minimum point of J by the calculation of second derivatives. To prove it is a global minimum requires more numerical effort, which was not done. This is one of the difficulties of the method. In terms of the control parameters, the solution is

K = 3.75
b₀ = 0.167

The response using these settings is shown in Fig. E1-8-1. The reader may find it interesting to simulate the control system on an analog computer and observe the change in response which occurs when the control settings are changed from the optimum values. The identical problem was studied by Jackson, and the results of this study are reported by Harriott.²⁰ In this study, J was plotted for various combinations of K and b₀, each value of J being calculated by actual computation of the response e(t). The minimum point of J was estimated by graphical interpolation to yield results close to those obtained here. Although Example 1-8 was carefully chosen to avoid this situation, it may happen that the minimum value of J results from the choice of infinite controller gain. In Example 1-8 this would mean K = ∞. Because the tech-
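The trial solution of Example 1-8 can be checked numerically. The sketch below (an assumption: a brute-force grid search, not the book's method) minimizes J(p, q) over the stable region and recovers settings close to p = 4.75, q = 0.625.

```python
# J(p, q) from Example 1-8; the denominator must be positive for stability
def J(p, q):
    den = 2.0 * q * (9.0*p - 9.0*q - p*p)
    return (p*q + 6.0*q + 9.0 - p) / den if den > 0 else float("inf")

best = min(((J(3.0 + 0.01*i, 0.30 + 0.005*j), 3.0 + 0.01*i, 0.30 + 0.005*j)
            for i in range(301) for j in range(181)))
Jmin, p, q = best
print(p, q)                    # close to the trial solution (4.75, 0.625)
print(p - 1.0, q / (p - 1.0))  # K = p - 1 and b0 = q/K
```

The surface is quite flat near the optimum, which is consistent with the remark that the solution had to be located by trial.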
[Fig. E1-8-1. Closed-loop response e(t) of the system of Example 1-8 for various control settings; figure not reproduced.]

c(t) = c{x(t₀), m(t₀, t]},   t > t₀    (3-1)
State properties
Equation (3-1) states that the future output behavior can be determined from a knowledge of (or equivalently, is a function of) the present state, and a specification of the future input signal. It is possible to show that, if certain care is taken in the mathematical definition of state, then

x(t) = x{x(t₀), m(t₀, t]},   t > t₀    (3-2)
which means the future state behavior of the system also depends only on the present state and the future input. Note that the state of the system is taken as a function of time. We can give a heuristic justification of Eq. (3-2) by postulating a basic state property, which must obviously be satisfied if Eq. (3-1) is to be useful:

c{x(t₀), m(t₀, t]} = c{x(t₁), m(t₁, t]} = c(t)

where x(t₁) is the state produced at t₁ by m(t₀, t₁], beginning with the system in state x(t₀) at t₀. This equality will be true for any t₁ in the interval t₀ < t₁ < t. Allowing t₁ → t in principle yields the relationship of Eq. (3-2). In words, the equality guarantees a form of uniqueness of the output. The same output must be reached from the state x(t₁) as is reached from x(t₀), when x(t₁) and x(t₀) are related in the manner described. It follows immediately from Eq. (3-2) that

x{x(t₀), m(t₀, t]} = x(x{x(t₀), m(t₀, t₁]}, m(t₁, t])    (3-3)
which is the second basic state property. In words, Eq. (3-3) may be interpreted as follows: The state that results from beginning the system in state x(t₀) and applying the input vector m(t₀, t] is identical to the state reached
58
State Variables for Continuous Systems
Chap. 3
by starting the system in state x(t₁) and applying the input signal m(t₁, t], provided that the state x(t₁) is that which would have been reached by starting from x(t₀) and applying m(t₀, t₁]. Thus, it is a uniqueness property similar to that placed on the output. Clearly, if the state concept is to be a useful description of physical systems, this sort of uniqueness must be guaranteed. We illustrate these concepts with a simple example.

EXAMPLE 3-1

Consider the single-variable system described by the input-output relation
c(t) = x(t₀) + ∫_{t₀}^t m(τ) dτ + m(t)

It follows that x(t₀) qualifies as a state at t₀ according to the definition given in Eq. (3-1). Further, the output uniqueness condition requires

x(t₁) = x(t₀) + ∫_{t₀}^{t₁} m(τ) dτ    for t₀ < t₁ < t

For example, with x(t₀) = 1 and m(t) = t − t₀,

x(t) = 1 + (t − t₀)²/2

gives the future state behavior, and

c(t) = 1 + (t − t₀)²/2 + (t − t₀),   t > t₀

gives the future output behavior.
Ordinary Differential Equations
A broad class of continuous physical systems of practical interest may be described by a state vector x{t) which satisfies a system of ordinary differential equations
dx
dt = f [x(t), m(t), t]
(34)
Here, f is a vector of n functions, so that Eq. (3-4) is a vector abbreviation for a system of n ordinary differential equations, where n is the dimension of the state vector.

EXAMPLE 3-2

The system diagrammed in Fig. 1-1 is described by the n ordinary differential equations

dx₁/dt = m(t) − a_{n−1}x₁ − a_{n−2}x₂ − ··· − a₁x_{n−1} − a₀x_n
dx₂/dt = x₁
dx₃/dt = x₂
⋮
dx_n/dt = x_{n−1}

Clearly, the vector

x(t) = (x₁(t), x₂(t), …, x_n(t))ᵀ

qualifies as a state vector because knowledge of x(t₀) (the initial conditions on all integrators) and m(t₀, t] completely determines the behavior of the system for t > t₀. The vector of functions corresponding to f in Eq. (3-4) is

f[x(t), m(t), t] = (f₁[x(t), m(t), t], f₂[x(t), m(t), t], …, f_n[x(t), m(t), t])ᵀ
60
State Variables for Continuous Systems
Chap. 3
where

f₁ = m(t) − a_{n−1}x₁ − a_{n−2}x₂ − ··· − a₁x_{n−1} − a₀x_n
f_i = x_{i−1},   i = 2, 3, …, n

Also, for this system the output is related to the state and input as follows:

c(t) = b_n m(t) + (b_{n−1} − b_n a_{n−1})x₁(t) + (b_{n−2} − b_n a_{n−2})x₂(t) + ··· + (b₁ − b_n a₁)x_{n−1}(t) + (b₀ − b_n a₀)x_n(t)    (E3-2-1)

As in Examples 3-1 and 3-2, the output vector is usually expressed as a function of the present state vector and input signal vector,

c(t) = c[x(t), m(t)]    (3-5)

rather than in the form of Eq. (3-1). Equations (3-2) and (3-5) imply Eq. (3-1). It is shown in Appendix A that, for a given specification of initial state vector x(t₀), Eq. (3-4) has a unique solution, provided certain restrictions are placed on f and m(t). Therefore, state vectors x(t) resulting from the solution of Eq. (3-4) also obey the property expressed in Eq. (3-3). A major objective of this chapter will be to study from a state-vector viewpoint the behavior of systems which can be described by Eq. (3-4). Obviously, such systems are restricted to those whose states (at any instant of time) can be expressed as vectors of real numbers. Although Example 3-2 showed that linear, lumped-parameter systems do indeed meet this restriction, there are many systems which do not. Distributed-parameter systems, to be considered later in the chapter, are an example of those whose states cannot be described by vectors of real numbers. However, because the class of systems whose states can be so described is of such great practical interest, we justifiably devote the majority of this chapter (and the next) to a discussion of their dynamic state behavior. In only a few special cases can the solution x(t) to Eq. (3-4) be written analytically. Most of these special cases belong to the class for which f is a linear function, and this class will be discussed in detail. Again, this is so because a great number of practical applications involve systems which are approximately linear and lumped-parameter in nature. When the function f is nonlinear, an analytical solution is extremely rare. However, solution of Eq. (3-4) by either analog or digital computer is relatively straightforward when x(t₀) is specified. This is because, when all components of the state vector x(t) are known at the initial instant of time t₀, we have a simple initial-value problem.
Any nth-order, ordinary, nonlinear, nonhomogeneous differential equation for an unknown function e(t) may, in principle, be rewritten in the form

dⁿe/dtⁿ = g(t, e, de/dt, d²e/dt², …, d^{n−1}e/dt^{n−1}, m(t))

where g is some function, by solving for the highest derivative. We regard this as a system with input m(t) and output e(t). A state vector x(t) may be defined for this system if we choose as components

x_i = d^{i−1}e/dt^{i−1},   i = 1, 2, …, n

Then it follows that

dx_i/dt = d^i e/dt^i = x_{i+1},   i = 1, 2, …, n − 1
dx_n/dt = g(t, x₁, x₂, …, x_n, m) = g(t, x, m)

so that we obtain Eq. (3-4), with the components of the vector function f defined by the relations

f_i = x_{i+1},   i = 1, 2, …, n − 1
f_n = g

This discussion shows how general Eq. (3-4) actually is. Note that the relation between output and state is e(t) = x₁(t), and that knowledge of e(t₀) and m(t₀, t] will not enable prediction of e(t). The initial derivatives, which are the other components of x(t₀), are also required.

EXAMPLE 3-3

Consider the van der Pol equation
d²e/dt² + μ(e² − 1) de/dt + e = 0

where μ is a positive constant. To write this in the form of Eq. (3-4), let

x₁ = e   and   x₂ = de/dt

Then,

dx₁/dt = x₂
dx₂/dt = −x₁ − μ(x₁² − 1)x₂

The state vector so defined is

x(t) = (x₁(t), x₂(t))ᵀ

and the output is given by e(t) = x₁(t). It is known that the state behavior for this equation is ultimately periodic,² but the solution cannot be written analytically.
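The periodic behavior can be observed numerically. The sketch below (parameter and step values assumed for illustration) integrates the state equations with a fixed-step fourth-order Runge–Kutta scheme and measures the output amplitude after the transient has died out; for μ = 1 the limit-cycle amplitude is close to 2.

```python
def f(x, mu=1.0):
    # van der Pol state equations: dx1/dt = x2, dx2/dt = -x1 - mu(x1^2 - 1)x2
    x1, x2 = x
    return (x2, -x1 - mu * (x1*x1 - 1.0) * x2)

def rk4_step(x, h):
    k1 = f(x)
    k2 = f((x[0] + h/2*k1[0], x[1] + h/2*k1[1]))
    k3 = f((x[0] + h/2*k2[0], x[1] + h/2*k2[1]))
    k4 = f((x[0] + h*k3[0], x[1] + h*k3[1]))
    return (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, h = (0.1, 0.0), 0.01
tail = []
for i in range(20000):        # integrate to t = 200
    x = rk4_step(x, h)
    if i >= 15000:            # keep only the tail, after transients decay
        tail.append(abs(x[0]))
amp = max(tail)
print(amp)   # limit-cycle amplitude, close to 2 for mu = 1
```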
It should be emphasized that there is no unique state vector to describe a given system. That is, for any system there may be a number of different ways to define a vector x(t) which will satisfy the state properties given in the beginning of this chapter. We shall see examples of this later. In the next sections we devote considerable attention to the special case of Eq. (3-4) where the function f is linear. These systems are extremely important for applications, and a relatively complete analysis can be conducted.
Linear Systems with Constant Coefficients

In Chapter 1 we studied linear, stationary, multivariable systems from the transfer function approach. The general state-variable equations for these systems are

dx/dt = Ax(t) + Bm(t)    (3-6)

c(t) = Gx(t) + Hm(t)    (3-7)

If there are r input signals m₁(t), …, m_r(t), p output signals c₁(t), …, c_p(t), and n state variables x₁(t), …, x_n(t), then the constant matrix A has dimension n × n, B has dimension n × r, G has dimension p × n, and H has dimension p × r. These are special forms of Eqs. (3-4) and (3-5). Because t does not appear as an argument of f, except through x(t) and m(t), Eq. (3-6) is termed stationary or autonomous.

EXAMPLE 3-4

(a) For the system discussed in Example 3-2, Eq. (3-6) results if we define the matrices:
A = [ −a_{n−1}  −a_{n−2}  −a_{n−3}  ···  −a₁  −a₀ ]
    [    1         0         0     ···    0    0  ]
    [    0         1         0     ···    0    0  ]
    [    ⋮         ⋮         ⋮           ⋮    ⋮  ]
    [    0         0         0     ···    1    0  ]

B = (1, 0, 0, …, 0)ᵀ

G = ([b_{n−1} − b_n a_{n−1}]  [b_{n−2} − b_n a_{n−2}]  ···  [b₀ − b_n a₀])

H = b_n,   m(t) = m(t)

This is a single-variable case of Eqs. (3-6) and (3-7). Thus, these equations can describe the system in Fig. 1-1.

(b) The multivariable linear system of Eq. (1-24), with a transfer function matrix G(s) whose elements are of the form 1/(s + aᵢ), may be represented by Eqs. (3-6) and (3-7) with

A = diag(−a₁, −a₂, −a₃, −a₄)

and with B, G, and H chosen accordingly. These examples show the importance and generality of Eqs. (3-6) and (3-7), to which we will now devote considerable attention.

Solution of the homogeneous case
Before solving Eq. (3-6) in its most general form, it is convenient to consider the homogeneous case (m = 0)

dx/dt = Ax    (3-8)

Equation (3-8) is also called the equation of the free or unforced system. The solution to Eq. (3-8) is

x(t) = e^{A(t−t₀)} x(t₀)    (3-9)

where the exponential matrix is defined by the infinite series

e^{At} ≜ I + At + A²t²/2! + A³t³/3! + ···    (3-10)
Equation (3-9) is easily verified as the solution to Eq. (3-8) by substitution and termwise differentiation, using the definition in Eq. (3-10). The existence of the infinite series follows from the Sylvester expansion theorem discussed later. Also note that Eq. (3-9) is correct at t = t₀. We define the state transition matrix for Eq. (3-8) by

Φ(t − t₀) = e^{A(t−t₀)}    (3-11)

so that

x(t) = Φ(t − t₀) x(t₀)    (3-12)

The Sylvester expansion, Eq. (3-13), expresses e^{A(t−t₀)} as a finite sum over the distinct eigenvalues λᵢ of A, with f(λᵢ) = e^{λᵢ(t−t₀)}. The advantage of Eq. (3-13) over Eq. (3-10) is that the former requires evaluation of only a finite number of terms. The basic reason why Eq. (3-10), an infinite series, may be reduced to Eq. (3-13), a finite series, is the Cayley–Hamilton theorem, which may be interpreted to state that only the first n − 1 powers of an n × n matrix are linearly independent; in other words, all higher powers of A may be expressed in terms of I, A, A², …, A^{n−1}.

EXAMPLE 3-5
Solve the differential equation

d²c/dt² − β²c = 0

using the transition matrix approach.
Using the technique illustrated in Example 3-3, this equation is equivalent to Eqs. (3-6) and (3-7) with

A = [ 0   1 ]
    [ β²  0 ]

B = 0,   G = (1  0),   H = 0

The state vector x(t) is defined as

x(t) = ( c(t), dc/dt )ᵀ

To find the eigenvalues λ₁ and λ₂ of A, set the determinant of A − λI to zero

|A − λI| = 0

which leads to

λ² − β² = 0

Hence the eigenvalues are λ₁ = β, λ₂ = −β. We now use Eq. (3-13):

e^{A(t−t₀)} = e^{β(t−t₀)} (A + βI)/(2β) − e^{−β(t−t₀)} (A − βI)/(2β)
            = (A/β) sinh β(t − t₀) + I cosh β(t − t₀)

Thus,

Φ(t − t₀) = [ cosh β(t − t₀)      (1/β) sinh β(t − t₀) ]
            [ β sinh β(t − t₀)    cosh β(t − t₀)       ]

This knowledge of Φ(t − t₀) enables us to write the general solution to the original differential equation through use of Eqs. (3-7) and (3-12) and the matrix G:

c(t) = x₁(t₀) cosh β(t − t₀) + (x₂(t₀)/β) sinh β(t − t₀)

From this [as well as from the definition of the state vector x(t)] it follows that

c(t₀) = x₁(t₀)
dc/dt (t₀) = x₂(t₀)

so that the solution can be expressed in terms of the initial conditions on c(t) as well as those on x(t).
A second method for calculating Φ(t − t₀) involves the use of the Laplace transform. Application of the Laplace transform to Eq. (3-8) yields

sX(s) − x(0+) = AX(s)

where X(s) is the vector of Laplace transforms of the xᵢ(t). It is necessary to use t₀ = 0 because of the definition of the Laplace transform, which leads to the appearance of the quantity x(0+) in the transform of the derivative. Solving this equation for X(s) yields

X(s) = (sI − A)⁻¹ x(0+)

If we take the inverse transform of this expression, we obtain an equation for x(t):

x(t) = L⁻¹{(sI − A)⁻¹} x(0+)

where L⁻¹{ } denotes the inverse Laplace transform operation on every element of the matrix. Comparison of this with Eq. (3-12) shows that

Φ(t) = L⁻¹{(sI − A)⁻¹}    (3-14)

and Φ(t − t₀) may be obtained by substitution of t − t₀ for t in the inverse transform.
EXAMPLE 3-6

Rework Example 3-5 using Eq. (3-14). We obtain

sI − A = [ s     −1 ]
         [ −β²    s ]

Inverting this matrix, we have

(sI − A)⁻¹ = 1/(s² − β²) [ s    1 ]
                          [ β²   s ]

Taking the inverse transform of each element gives

Φ(t) = [ cosh βt      (1/β) sinh βt ]
       [ β sinh βt    cosh βt       ]

and substituting t − t₀ for t gives the identical result obtained in Example 3-5. A third method for calculating Φ(t − t₀) is derived when we observe that Φ(t − t₀) also satisfies Eq. (3-8), that is

dΦ/dt = AΦ    (3-15)
as we may verify by substituting Eq. (3-11) into Eq. (3-8). Equation (3-15) may be interpreted by columns. That is, Φ(t − t₀) may be written as a row vector of columns

Φ = (φ⁽¹⁾  φ⁽²⁾  ···  φ⁽ⁿ⁾)

where φ⁽ⁱ⁾ is the column vector whose components make up the ith column of the matrix Φ(t − t₀). Then, Eq. (3-15) implies the n vector equations

dφ⁽ⁱ⁾/dt = Aφ⁽ⁱ⁾,   i = 1, 2, …, n    (3-16)

Furthermore, since Eq. (3-11) shows that

Φ(0) = I

it must be true that φ⁽ⁱ⁾(0) is the ith column of the unit matrix.
y(t) = x(t₀) + ∫_{t₀}^t Φ⁻¹(τ − t₀) Bm(τ) dτ    (3-19)

y(t) = Φ⁻¹(t − t₀) x(t)    (3-20)

so that Eq. (3-19) may be rewritten in terms of x(t):

Φ⁻¹(t − t₀) x(t) = x(t₀) + ∫_{t₀}^t Φ⁻¹(τ − t₀) Bm(τ) dτ

Multiplying this by the transition matrix Φ(t − t₀) results in

x(t) = Φ(t − t₀) x(t₀) + Φ(t − t₀) ∫_{t₀}^t Φ⁻¹(τ − t₀) Bm(τ) dτ    (3-21)

From Eq. (3-11) and the properties of the exponential matrix, it may be shown that

Φ⁻¹(t − t₀) = e^{−A(t−t₀)} = Φ(t₀ − t)

This expression will be proved later in conjunction with Eq. (3-34). Accepting it for now as an intuitive property of the exponential matrix, and also accepting Eq. (3-36), we can reduce Eq. (3-21) to

x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^t e^{A(t−τ)} Bm(τ) dτ
     = Φ(t − t₀) x(t₀) + ∫_{t₀}^t Φ(t − τ) Bm(τ) dτ    (3-22)
which is the general solution to Eq. (3-6). The integral in Eq. (3-22) is the contribution of the nonhomogeneous term in Eq. (3-6) to the general solution. The first term on the right-hand side of Eq. (3-22) is already recognized as the zero-input response. The integral term is called the zero-state response, since it is the response which results when x(t₀) = 0 is chosen as the initial state. In our discussion of Fig. 1-1, we asserted that the transfer function of Eq. (1-6) describes the zero-state behavior of this linear system. (See Problem 1-1 for a proof.) We may now relate this to Eq. (3-22). From Example 3-4 we know the matrices G and H necessary to have Eq. (3-7) describe the output behavior of the system. We simply set x(t₀) = 0 and substitute Eq. (3-22) for x(t) into Eq. (3-7). The resulting expression for c(t) must be identical to that obtained by inverting the transfer function expression in Eq. (1-6), because setting x(t₀) = 0 in Eq. (3-22) implies that m(t) = 0 will produce a state behavior x(t) = 0 which, according to Eq. (3-7), produces an output behavior c(t) = 0. Thus, we see the relation between the transfer function and the more general relation of Eq. (3-22). The transfer function gives only the zero-state response. Note that Eq. (3-22) is a specific form of the more general relation given in Eq. (3-2). That is, Eq. (3-22) expresses the state-transition behavior for
the state vector described by the differential equation, Eq. (3-6). The expression for the output vector c(t) follows when we substitute Eq. (3-22) into Eq. (3-7). Without writing out the relationship explicitly, it is clear that Eq. (3-1) is satisfied, and that we have a valid state-transition relationship, one which enables prediction of the entire future output behavior from a specification of an initial state and an input signal behavior. Equations (3-22) and (3-7) therefore combine to produce a key result, one which will be used frequently in the sequel, because Eqs. (3-6) and (3-7) are so important in applications.

EXAMPLE 3-8

Find the general solution of

d²c/dt² = β²c + m(t)

From Example 3-5, it follows that a state description for this equation is obtained from Eqs. (3-6) and (3-7) using A, G, and H as before, and changing B to

B = (0, 1)ᵀ

Then from Example 3-5,

e^{A(t−τ)} = [ cosh β(t − τ)      (1/β) sinh β(t − τ) ]
             [ β sinh β(t − τ)    cosh β(t − τ)       ]

Hence Eq. (3-22) yields

x(t) = e^{A(t−t₀)} x(t₀) + ∫_{t₀}^t ( (1/β) sinh β(t − τ) ) m(τ) dτ
                                    (    cosh β(t − τ)   )

and, combining with Eq. (3-7), we obtain

c(t) = x₁(t) = x₁(t₀) cosh β(t − t₀) + (x₂(t₀)/β) sinh β(t − t₀) + (1/β) ∫_{t₀}^t m(τ) sinh β(t − τ) dτ

which is the general solution. This is of the general form expressed in Eq. (3-1). The relations between initial conditions are x₁(t₀) = c(t₀), x₂(t₀) = c′(t₀).

Diagonalization
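The general solution of Example 3-8 can be verified numerically: evaluate the convolution integral by quadrature and compare with direct integration of the differential equation. The forcing function, parameter values, and initial state below are assumptions for illustration.

```python
import math

beta = 0.5
m = lambda t: math.sin(t)        # assumed forcing, for illustration
x10, x20, T = 1.0, 0.2, 2.0      # assumed initial state and horizon

# closed-form solution of Example 3-8, convolution by the trapezoidal rule
N = 2000
h = T / N
conv = sum(m(i*h) * math.sinh(beta*(T - i*h)) * (h if 0 < i < N else h/2)
           for i in range(N + 1))
c_closed = (x10*math.cosh(beta*T) + (x20/beta)*math.sinh(beta*T)
            + conv/beta)

# direct RK4 integration of d^2c/dt^2 = beta^2 c + m(t)
def f(t, x):
    return (x[1], beta*beta*x[0] + m(t))

x, t = (x10, x20), 0.0
for _ in range(N):
    k1 = f(t, x)
    k2 = f(t + h/2, (x[0] + h/2*k1[0], x[1] + h/2*k1[1]))
    k3 = f(t + h/2, (x[0] + h/2*k2[0], x[1] + h/2*k2[1]))
    k4 = f(t + h, (x[0] + h*k3[0], x[1] + h*k3[1]))
    x = (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
         x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    t += h

print(c_closed, x[0])   # the two values agree
```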
We consider again Eq. (3-6). If the matrix A has distinct eigenvalues, then it is known that there exists a (not necessarily unique) nonsingular matrix P such that

P⁻¹AP = Λ = [ λ₁  0   0   ···  0   ]
            [ 0   λ₂  0   ···  0   ]
            [ 0   0   λ₃  ···  0   ]
            [ ⋮               ⋮   ]
            [ 0   0   0   ···  λ_n ]

where the λᵢ are the eigenvalues of A. The matrix A is said to be diagonalized by the similarity transformation P⁻¹AP. A suitable diagonalizing matrix P has columns which are the eigenvectors of A.³ We can take advantage of this diagonalization property to uncouple the system of equations expressed in Eq. (3-8). To do this, we define a new vector y(t) by the relation

x(t) = P y(t)

Then, differentiating,

dx/dt = P dy/dt = Ax = APy

Solving for dy/dt yields

dy/dt = P⁻¹APy = Λy    (3-23)

The advantage of Eq. (3-23) over Eq. (3-8) is that the diagonal form of Eq. (3-23) completely eliminates coupling between the equations describing the individual components of the transformed state vector y(t). (At this point it would be worthwhile to restate that there is no unique state vector to describe a given dynamic system. Many possible choices of state vector exist, and any vector which satisfies all the properties of a state may be used.) The vector y(t) therefore has the particularly convenient expression
y(t) = [ e^{λ₁(t−t₀)}   0             ···  0             ]
       [ 0             e^{λ₂(t−t₀)}   ···  0             ]
       [ ⋮                                 ⋮             ] y(t₀)    (3-24)
       [ 0             0             ···  e^{λ_n(t−t₀)}  ]

which describes its transition behavior. The difficulty in using the state vector y(t) is that one must first solve for the characteristic values λᵢ. In addition, the matrix P is required if one wishes to transform back to the state vector x(t). That is, in terms of x(t), Eq. (3-24) is

x(t) = P e^{Λ(t−t₀)} P⁻¹ x(t₀)    (3-25)
where e^{Λ(t−t₀)} is identical to the diagonal matrix in Eq. (3-24) by virtue of Eq. (3-10). Note that Eq. (3-25) gives an alternative expression for calculating the transition matrix Φ(t − t₀), by comparison with Eq. (3-12):

Φ(t − t₀) = P e^{Λ(t−t₀)} P⁻¹    (3-26)

x(t) = Φ(t; t₀) x(t₀)    (3-30)
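A numerical sketch of Eq. (3-26) (using NumPy, an assumed tool): compute P from the eigenvectors, form P e^{Λt} P⁻¹, and compare with the series of Eq. (3-10), here for the A of Example 3-5 with β = 0.5.

```python
import math
import numpy as np

A = np.array([[0.0, 1.0], [0.25, 0.0]])   # beta = 0.5; eigenvalues +-0.5
lam, P = np.linalg.eig(A)                  # columns of P are eigenvectors
t = 1.3

# Eq. (3-26): transition matrix via diagonalization
Phi_diag = np.real(P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P))

# Eq. (3-10): transition matrix via the defining series
Phi_series = sum(np.linalg.matrix_power(A * t, k) / math.factorial(k)
                 for k in range(30))

print(np.max(np.abs(Phi_diag - Phi_series)))   # essentially zero
```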
satisfies Eq. (3-27). Therefore, the quantity Φ(t; t₀) is a transition matrix for the system of Eq. (3-27). An expression for the transition matrix Φ(t; t₀) can be found by repeated integration of Eq. (3-27). Thus, integration of both sides of Eq. (3-27) yields

x(t) = x(t₀) + ∫_{t₀}^t A(τ₁) x(τ₁) dτ₁    (3-31)

There are two ways to proceed from Eq. (3-31). The first involves substitution of Eq. (3-31) into the right-hand side of Eq. (3-27), and then integrating the result to obtain

x(t) = x(t₀) + ∫_{t₀}^t A(τ₁) [ x(t₀) + ∫_{t₀}^{τ₁} A(τ₂) x(τ₂) dτ₂ ] dτ₁    (3-32)

Equation (3-32) is then substituted into the right-hand side of Eq. (3-27), and the result is integrated. If the process is repeated indefinitely and the result compared with Eq. (3-30), the following infinite series is obtained:

Φ(t; t₀) = I + ∫_{t₀}^t A(τ₁) dτ₁ + ∫_{t₀}^t A(τ₁) [∫_{t₀}^{τ₁} A(τ₂) dτ₂] dτ₁ + ···    (3-33)

The series in Eq. (3-33) for the transition matrix Φ(t; t₀) is called the matrizant. Amundson⁶ proves that, if the elements of A(t) remain bounded between t₀ and t, the series converges absolutely and uniformly to a solution of Eq. (3-28). An alternate method for arriving at Eq. (3-33) is to consider Eq. (3-31) as an integral equation for x(t). The integral equation is solved by repeated substitution of the right-hand side into the integral. In addition to Eq. (3-29), the following must be satisfied by Φ(t; t₀):

Φ(t₂; t₀) = Φ(t₂; t₁) Φ(t₁; t₀)    (3-34)

Φ(t; t₀) = Φ⁻¹(t₀; t)    (3-35)
to) = cl>{t2; t 1) ~t 1; to)
cl>(t; t 0 ) =
c~> 1 {t 0 ;
t)
{334) (335)
The property given in Eq. (334) follows directly from the definition of state. Thus, since
x(t2) =
=
~t2;
to) x(to) cl>( t 2; t 1) x( t 1)
= cl>( t 2; t 1) ~ t 1; to) x( to)
Eq. (334) follows immediately. Equation (335) follows from Eqs. (329) and (334) if we set t 2 = t 0 and t 1 = t. A proof that the transition matrix is nonsingular, guaranteeing that c~> 1 {t 0 ; t) exists for all t, may be found in Athans and Falb. 7 The reader should similarly verify that, for the stationary case considered in previous sections,
cl>(t  t 0 ) = C'll(t  'T) W(T  to) from which it follows (using t 0
(336)
= t, 'T = 0) that
c~> 1 (t) =
ell{ t)
(337)
Chap. 3
75
Linear Systems with TimeVarying Coefficients
This verifies our earlier assertion regarding the inverse of exp(At). Note that there are two arguments, t and t₀, of the transition matrix in the time-varying case, while there is only one argument t − t₀ in the stationary case.

EXAMPLE 3-10
The transition matrix for Eq. (327) with 0 A(t)
==
1
2
4
.
IS
Φ(t; t₀) on the right, and then add to get

d/dt [Ψ(t; t₀) Φ(t; t₀)] = 0

which has the solution ΨΦ = constant. From Eqs. (3-39) and (3-29) it follows that the constant is the unit matrix I; hence

Ψ(t; t₀) = Φ⁻¹(t; t₀)    (3-40)

This can be used with Eq. (3-35) to obtain

Φ(t₀; t) = Ψ(t; t₀)    (3-41)

Equation (3-41) shows that solution of the adjoint equation gives the variation of Φ with its second argument. The importance of this information will be demonstrated in the next section. Equation (3-38) can be transformed into a vector equation as follows. We take the transpose of both sides to obtain:

dΨᵀ(t; t₀)/dt = −Aᵀ(t) Ψᵀ(t; t₀)    (3-42)
Examining Eqs. (3-27), (3-28), and (3-42), we can see that Ψᵀ(t; t₀) is the transition matrix for a dynamic system with the homogeneous state-vector equation

dx/dt = −Aᵀ(t) x(t)    (3-43)

Equation (3-43) is called the adjoint of Eq. (3-27). The entire discussion of adjoints also applies to the special case A(t) = A, a constant matrix.

EXAMPLE 3-11
In Example 310, interchange t 0 and t to obtain
:J  (:0 rJ ; [UJ :0 rJ UJ :J L)  (:J
2(
3 
t [(
2
2
(
3 
(
2
as the transition matrix for the adjoint system of equations

dx/dt = −Aᵀ(t) x(t)

The transition matrix Ψ(t; t₀) can be shown, by direct substitution, to satisfy Eqs. (3-29), (3-34), (3-35), and (3-43).

Nonhomogeneous case
Consider the general, time-varying, linear state equations

dx/dt = A(t) x(t) + B(t) m(t)    (3-44)

c(t) = G(t) x(t) + H(t) m(t)    (3-45)

We proceed to solve Eq. (3-44) exactly as in the case where the matrices A and B are constant. Thus, we define

x(t) = Φ(t; t₀) y(t)

where Φ(t; t₀) is the transition matrix for the homogeneous case, Eq. (3-27), and we proceed as in the constant-coefficient case to obtain the following equivalent of Eq. (3-21)

x(t) = Φ(t; t₀) x(t₀) + Φ(t; t₀) ∫_{t₀}^t Φ⁻¹(τ; t₀) B(τ) m(τ) dτ

Equations (3-35) and (3-34) may be used to transform this to

x(t) = Φ(t; t₀) x(t₀) + ∫_{t₀}^t Φ(t; τ) B(τ) m(τ) dτ    (3-46)
Chap. 3
77
Controllability and Observability
Equation (3-46) is the general solution to Eq. (3-44). From Eq. (3-46) it is clear that the variation of the transition matrix Φ(t; τ) with its second argument is required.
!2)
=
c~» 1 (1 2
fl>(to  t1)b
=

t 0)
=
=
(A 2) 1
(:=!)•
(:=! t )b = (!)
(A  1 ) 2
~to 
t) ! =
Chap. 4
!)
2
Therefore the system is not controllable. Compare this result with that for Example 3-12. If we change B to a suitable vector b, the corresponding calculation yields a nonzero result, and the system is therefore controllable.
Sampled-Data Systems

General theory

A system which is continuous, but whose output is examined intermittently to determine a suitable input, is called a sampled-data system. We consider here a class of these systems which is most readily analyzed by the discrete-system theory just developed. The linear, stationary system state behavior is described by Eq. (3-6)

dx/dt = Ax + Bm    (3-6)

The output c(t) of this system is sampled at the equally spaced sampling instants t₀, t₁, t₂, …, and based on the sample at t_k, for example, a constant m(t_k) is computed and applied over the entire next sampling interval. In equation form, the input is the stepped signal
m(t) = m(t_k),   t_k ≤ t < t_{k+1}
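Because the input is constant over each sampling interval, Eq. (3-22) gives an exact discrete recursion x(t_{k+1}) = e^{AT} x(t_k) + (∫₀^T e^{Aτ} dτ · B) m(t_k), where T is the sampling interval. The sketch below (a hypothetical example, the double integrator A = [[0,1],[0,0]], B = (0,1)ᵀ, for which both matrices are available in closed form) steps this recursion forward.

```python
T = 0.1   # sampling interval (assumed)

def step(x, m):
    # For A = [[0,1],[0,0]], B = (0,1)^T:
    #   e^{AT} = [[1, T], [0, 1]],   integral of e^{A tau} d tau . B = (T^2/2, T)^T
    x1, x2 = x
    return (x1 + T*x2 + 0.5*T*T*m, x2 + T*m)

x = (0.0, 0.0)
for k in range(10):       # hold m = 1 over ten sampling intervals (t = 1)
    x = step(x, 1.0)
print(x)   # (0.5, 1.0): position t^2/2 and velocity t at t = 1, exactly
```

The recursion is exact (not an approximation) precisely because m(t) is held constant between samples.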
a₁₁ + a₂₂ < 0
a₁₁a₂₂ − a₁₂a₂₁ > 0

which are the conditions for stability. To find the eigenvalues, we write the characteristic equation of the matrix A

| a₁₁ − λ     a₁₂     |
| a₂₁         a₂₂ − λ | = 0
Chap. 5
135
Linear Approximation Theorems
which yields

λ² − (a₁₁ + a₂₂)λ + (a₁₁a₂₂ − a₁₂a₂₁) = 0
The conditions required for the roots λ to have negative real parts are clearly the same as those above. Finally, consider the system

d²c/dt² + a dc/dt + bc = 0

A state-variable representation of this is obtained, as usual, with

A = [  0    1 ]
    [ −b   −a ]

in Eq. (3-8). Then the stability conditions derived before require

b > 0
a > 0

which are the same conditions obtained by classical stability analysis.
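The trace and determinant conditions can be checked against the eigenvalues directly. The sketch below (test matrices are arbitrary assumed values) confirms that the two criteria agree for a few 2×2 cases.

```python
import cmath

def stable_by_conditions(a11, a12, a21, a22):
    # trace < 0 and determinant > 0
    return (a11 + a22 < 0) and (a11*a22 - a12*a21 > 0)

def stable_by_eigenvalues(a11, a12, a21, a22):
    # roots of lambda^2 - (trace) lambda + (det) = 0
    tr, det = a11 + a22, a11*a22 - a12*a21
    disc = cmath.sqrt(tr*tr - 4*det)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return lam1.real < 0 and lam2.real < 0

cases = [(-1, 2, -3, -4), (0, 1, -2, -1), (1, 0, 0, -3), (-1, 5, 1, -1)]
for A in cases:
    print(A, stable_by_conditions(*A), stable_by_eigenvalues(*A))
```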
Linear Approximation Theorems

An important practical technique used in analyzing nonlinear systems is that of linearization. The analyst represents the system by a suitable linear approximation and then investigates the stability of the linear approximation using the methods presented in the previous section. The information which must be obtained from the theory is the extent to which the stability of the linear approximation determines the stability of the original nonlinear system.

Stability theorem

The following stability theorem is available: Consider the system of Eq. (5-11). Define the constant matrix A and the function g(x) by

f(x) = Ax + g(x)

If

dx/dt = Ax

is asymptotically stable in-the-large, and if

lim_{‖x‖→0} ‖g(x)‖ / ‖x‖ = 0    (5-18)

then Eq. (5-11) is asymptotically stable at the origin.
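The linearization recipe can be sketched numerically: estimate the Jacobian at the origin by finite differences, then check that the remainder g(x) = f(x) − Ax is of higher than first order near the origin. The system below is an assumption for illustration (the diagonal linear part matches the Example 5-9 discussion later).

```python
def f(x):
    # assumed nonlinear system: dx1/dt = -x1 + x2^2,  dx2/dt = -x2
    return (-x[0] + x[1]**2, -x[1])

eps = 1e-6
f0 = f((0.0, 0.0))
# finite-difference Jacobian at the origin, Eq. (5-20)
A = [[(f((eps if j == 0 else 0.0, eps if j == 1 else 0.0))[i] - f0[i]) / eps
      for j in range(2)] for i in range(2)]
print(A)   # approximately [[-1, 0], [0, -1]]

ratios = []
for r in (1e-1, 1e-2, 1e-3):
    gx1 = f((0.0, r))[0] - A[0][1] * r   # first component of g(x) = f(x) - Ax
    ratios.append(abs(gx1) / r)
print(ratios)   # shrinks like r itself, so ||g(x)||/||x|| -> 0
```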
136
Lyapunov Stability Theory
Chap. 5
Two things should be noted about this theorem. First, only asymptotic stability, and not asymptotic stability in-the-large, is guaranteed for the nonlinear system. Second, the usual technique for defining A and g(x) when f(x) is analytic (infinitely differentiable) is linearization in a Taylor series expansion about the origin:

A = [∂f/∂x]₀    (5-19)

where ∂f/∂x is the n × n Jacobian matrix

∂f/∂x = [ ∂f₁/∂x₁   ∂f₁/∂x₂   ···  ∂f₁/∂x_n ]
        [ ∂f₂/∂x₁   ∂f₂/∂x₂   ···  ∂f₂/∂x_n ]
        [    ⋮         ⋮              ⋮     ]
        [ ∂f_n/∂x₁  ∂f_n/∂x₂  ···  ∂f_n/∂x_n ]    (5-20)

and the subscript on Eq. (5-19) implies that all partial derivatives are to be evaluated at the origin. Then g(x) follows immediately from g(x) = f(x) − Ax. Clearly, g(x) contains only second- or higher-order terms in the components of x, and hence Eq. (5-18) will be satisfied. The proof of the theorem is somewhat involved, but it is presented because the theorem is so important. However, the proof may be omitted without loss in continuity. We utilize the properties of the norm ‖A‖ of a matrix A as described in Appendix B. We may write the solution to Eq. (5-11) using Eq. (3-22)
x(t) = Φ(t − t₀) x(t₀) + ∫_{t₀}^t Φ(t − τ) g(x(τ)) dτ    (5-21)

where the transition matrix is given by

Φ(t − t₀) = e^{A(t−t₀)}

Since we are given that the linear approximation is asymptotically stable, we know ‖Φ(t − t₀)‖ will be composed of negative exponentials (see Eq. (3-13) and ensuing examples); that is, the eigenvalues have negative real parts. Let −β be the largest of the real parts of the eigenvalues, so that β > 0. Then, clearly, there exists an α > 0 such that

‖Φ(t − t₀)‖ ≤ αe^{−β(t−t₀)},   t ≥ t₀    (5-22)

Other values of β > 0 may also be used to guarantee this inequality. Note that, to have this inequality hold at t = t₀, we must at least choose α > n for the norm (B-1), α > √n for the norm (B-3), and α > 1 for the
norm (B-2), where n is the order of the system. Then, from Eq. (5-21)
‖x(t)‖ ≤ αe^{−β(t−t₀)}‖x(t₀)‖ + α ∫_{t₀}^t e^{−β(t−τ)}‖g(x(τ))‖ dτ

from which it follows that ‖x(t)‖ decays for all initial states sufficiently near the origin and for all t > t₀, and therefore the origin is stable. Thus we have demonstrated asymptotic stability.

EXAMPLE 5-9
We here apply the linear approximation theorem for stability to the system of Example 5-7. Using Eq. (5-20) we obtain
$$A = \begin{pmatrix} -1 & -2x_2 \\ 0 & -1 \end{pmatrix}_{x=0} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$$

Then

$$g = \begin{pmatrix} -x_1 - x_2^2 \\ -x_2 \end{pmatrix} - Ax = \begin{pmatrix} -x_2^2 \\ 0 \end{pmatrix}$$

The transition matrix corresponding to A, exp A(t − t₀), is easily found by the methods of Chapter 3 to be

$$\Phi(t - t_0) = \begin{pmatrix} e^{-(t - t_0)} & 0 \\ 0 & e^{-(t - t_0)} \end{pmatrix} \qquad \|\Phi(t - t_0)\| = 2e^{-(t - t_0)}$$
Hence we choose α = 2 and β = 1. It is clear that no larger value of β will guarantee inequality (5-22) for all t ≥ t₀. Further, the largest value of β is desired so that the inequality ||g(x)|| < βk||x||/αM for all ||x|| < η can be met with the largest possible η, and hence, the stability region ||x(t₀)|| < η/αM will be as large as possible. Similar reasoning shows that the smallest possible value of α is desired (which explains the choice α = 2) and that we should choose k = 1 − ε, where ε > 0 is very small. Since
$$\|g\| = x_2^2$$

we must find the largest value of η guaranteeing

$$x_2^2 \le \frac{\beta k}{\alpha M}\,\|x\| \qquad \text{for all } \|x\| < \eta$$
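The stability conclusion of Example 5-9 can be checked numerically. This is a minimal sketch: the right-hand side below is the Example 5-7 system as reconstructed here (dx₁/dt = −x₁ − x₂², dx₂/dt = −x₂), which is an assumption; the linearization A = diag(−1, −1) has eigenvalues with negative real parts, so trajectories starting near the origin should decay.

```python
import math

# Hedged sketch: simulate the (assumed) Example 5-7 system with a simple RK4
# integrator and verify that ||x(t)|| decays toward zero, as the linear
# approximation theorem predicts for initial states in the stability region.

def f(x):
    x1, x2 = x
    return (-x1 - x2**2, -x2)

def rk4_step(x, h):
    k1 = f(x)
    k2 = f((x[0] + 0.5*h*k1[0], x[1] + 0.5*h*k1[1]))
    k3 = f((x[0] + 0.5*h*k2[0], x[1] + 0.5*h*k2[1]))
    k4 = f((x[0] + h*k3[0], x[1] + h*k3[1]))
    return (x[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            x[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)

def norm(x):
    return math.hypot(x[0], x[1])

x = (0.3, 0.8)           # illustrative initial state with |x2| < 1
h, n_steps = 0.01, 2000  # integrate to t = 20
history = [norm(x)]
for _ in range(n_steps):
    x = rk4_step(x, h)
    history.append(norm(x))

print(history[0], history[-1])  # the norm decays essentially to zero
```

The initial state and step size here are illustrative choices, not values from the text.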
$$2 + \frac{\partial G}{\partial x_1}(0, 0) + H'(0) - \frac{\partial G}{\partial x_2}(0, 0) > 0$$

$$\left[1 + \frac{\partial G}{\partial x_1}(0, 0)\right]\left[1 + H'(0) - \frac{\partial G}{\partial x_2}(0, 0)\right] + \frac{\partial G}{\partial x_1}(0, 0)\,\frac{\partial G}{\partial x_2}(0, 0) > 0$$
As an example, Aris and Amundson¹¹ studied the following constants:

$$a = \tfrac{1}{4} \qquad \frac{kV}{F} = e^{25}$$

and the following heat removal function:

$$h(y_2) = \left(y_2 - \tfrac{7}{4}\right)\left[1 + K(y_2 - 2)\right]$$
This heat removal function is generated by proportional control on the flow rate of cooling water, which is at an average dimensionless temperature of 7/4, using a proportional gain K. For these constants

$$g(y_1, y_2) = y_1 \exp\left[50\left(\frac{1}{2} - \frac{1}{y_2}\right)\right]$$

Then, for any K, the point y₁ₛ = 1/2, y₂ₛ = 2 is an equilibrium state. For this equilibrium state
$$x_1 = y_1 - \tfrac{1}{2} \qquad x_2 = y_2 - 2$$

$$G(x_1, x_2) = \left(x_1 + \tfrac{1}{2}\right)\exp\left(\frac{25x_2}{x_2 + 2}\right) - \tfrac{1}{2}$$

$$H(x_2) = \left(x_2 + \tfrac{1}{4}\right)(1 + Kx_2) - \tfrac{1}{4}$$
$$\frac{\partial G}{\partial x_1}(0, 0) = 1 \qquad \frac{\partial G}{\partial x_2}(0, 0) = \frac{25}{4} \qquad H'(0) = 1 + \frac{K}{4}$$
Therefore, the origin is asymptotically stable if

$$K > \tfrac{9}{2} \qquad \text{and} \qquad K > 9$$

which clearly require K > 9. On the other hand, if K < 9, the system is unstable at the origin. If K = 9, no information is available.
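The K > 9 boundary can be confirmed with a few lines of arithmetic. This sketch applies the standard 2 × 2 stability test (trace negative, determinant positive, which is equivalent to both eigenvalues having negative real parts) to the reactor linearization, using the partial derivatives quoted in the text; the Jacobian layout assumes the system form dx₁/dt = −x₁ − G, dx₂/dt = −x₂ + G − H of Example 5-10.

```python
# Hedged sketch: linear-approximation stability of the reactor equilibrium
# as a function of the proportional gain K, using dG/dx1(0,0) = 1,
# dG/dx2(0,0) = 25/4, and H'(0) = 1 + K/4 from the text.

def reactor_is_stable(K):
    g1, g2, hp = 1.0, 25.0/4.0, 1.0 + K/4.0
    # Jacobian of (-x1 - G, -x2 + G - H) at the origin
    a11, a12 = -1.0 - g1, -g2
    a21, a22 = g1, -1.0 + g2 - hp
    trace = a11 + a22
    det = a11*a22 - a12*a21
    # a 2x2 matrix has both eigenvalues in the left half plane
    # iff trace < 0 and det > 0
    return trace < 0.0 and det > 0.0

print(reactor_is_stable(10.0), reactor_is_stable(8.0))  # True False
```

The trace condition reproduces K > 9 and the determinant condition K > 9/2, so K > 9 governs, in agreement with the text.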
Gura and Perlmutter¹² have used the technique illustrated in Example 5-9 to calculate stability regions for this exothermic reactor.
Krasovskii's Theorem

Perhaps the most general stability theorem available for nonlinear systems is the theorem due to Krasovskii.¹³ We first state and prove a limited form of this theorem, presented by Kalman and Bertram.³ The nonlinear system
$$\frac{dx}{dt} = f(x) \qquad f(0) = 0$$

is asymptotically stable at the origin if the matrix

$$F(x) = \left(\frac{\partial f}{\partial x}\right)^T + \frac{\partial f}{\partial x} \tag{5-26}$$
is negative-definite for all x; furthermore,
$$V(x) = [f(x)]^T f(x) \tag{5-27}$$
is a Lyapunov function. Note that the Jacobian matrix ∂f/∂x is defined as in Eq. (5-20), and F(x) is the sum of the Jacobian and its transpose. Note also that we shall assume f has continuous first partial derivatives. The proof consists of demonstrating that V(x) satisfies the conditions of Theorem 5-2. Thus, we compute
$$\frac{dV}{dt} = \left(\frac{\partial V}{\partial x}\right)^T f(x) = 2\left[\left(\frac{\partial f}{\partial x}\right)^T f(x)\right]^T f(x) = \left[f^T(x)\left(\frac{\partial f}{\partial x}\right)^T f(x)\right]^T + f^T(x)\,\frac{\partial f}{\partial x}\,f(x)$$

But, since the transpose of a scalar is the same as the scalar, we obtain

$$\frac{dV}{dt} = f^T(x)\,F(x)\,f(x) \tag{5-28}$$
\Vhich is negative by hypothesis. Clearly, V(O) == 0. It remains to show that V(x) > 0 for x 0. Equation (527) indicates that this is satisfied unless f(x) == 0 for some x :f::. 0, i.e., unless the system has another equilibrium state. To investigate this possibility, \Ve first prove a vector identity for f(x). Consider d  f(ax) da
*
where α is a scalar and x is a fixed vector. Using the chain rule for differentiation, we obtain
$$\frac{d}{d\alpha}\,f(\alpha x) = \sum_{j=1}^{n} x_j\,\frac{\partial f}{\partial x_j}(\alpha x) \tag{5-29}$$
When we integrate both sides of this relation with respect to α, from α = 0 to α = 1, the result is

$$f(x) = \int_0^1 \sum_{j=1}^{n} x_j\,\frac{\partial f}{\partial x_j}(\alpha x)\,d\alpha \tag{5-30}$$
Now consider a vector x ≠ 0 for which f(x) = 0. Then

$$0 = x^T f(x) = \frac{1}{2}\int_0^1 x^T\,F(\alpha x)\,x\,d\alpha \tag{5-31}$$
However, by hypothesis, the integrand is negative for every value of α, and the integral must therefore be negative and cannot equal zero. This contradiction shows that a system for which F(x), defined in Eq. (5-26), is negative-definite for all x cannot have an equilibrium state other than the origin. Thus, all conditions of Theorem 5-2 are satisfied, and the system is asymptotically stable at the origin. Asymptotic stability in-the-large requires that F(x) be uniformly bounded from above by a constant, negative-definite matrix. A proof of this is given by Kalman and Bertram.³ Application of the general Krasovskii theorem is frequently hindered by the difficulty of satisfying the negative-definite condition on F(x) for all x. For example, consider again the system of Example 5-7, wherein the Lyapunov function

$$V(x) = (x_1 + x_2^2)^2 + x_2^2$$
is precisely that given by Eq. (5-27). For this system

$$\frac{\partial f}{\partial x} = \begin{pmatrix} -1 & -2x_2 \\ 0 & -1 \end{pmatrix}$$

so that

$$F(x) = -2\begin{pmatrix} 1 & x_2 \\ x_2 & 1 \end{pmatrix}$$
The condition that F(x) be negative-definite is identical to the condition that −F(x) be positive-definite, which requires 1 − x₂² > 0 or −1 < x₂ < 1. These are the same conditions noted in Example 5-7. Thus, the conditions of Krasovskii's theorem are not satisfied because F(x) is not negative-definite
for all x. However, the theorem is useful when combined with Theorem 5-6 because, as in Example 5-7, it generates a possible Lyapunov function and conditions guaranteeing that dV/dt, given by Eq. (5-28), will be negative. If these conditions include a region R₁, Theorem 5-6 guarantees asymptotic stability in this region. Berger and Perlmutter¹⁴ used this technique to study the stability of the nonlinear reactor of Example 5-10. We illustrate their results in the next example.

EXAMPLE 5-11
Consider again the equations (E5-10-3) and (E5-10-4) of the exothermic chemical reactor of Example 5-10:

$$\frac{dx_1}{dt} = -x_1 - G(x_1, x_2)$$

$$\frac{dx_2}{dt} = -x_2 + G(x_1, x_2) - H(x_2)$$
Then,

$$\frac{\partial f}{\partial x} = \begin{pmatrix} -1 - \dfrac{\partial G}{\partial x_1} & -\dfrac{\partial G}{\partial x_2} \\ \dfrac{\partial G}{\partial x_1} & -1 + \dfrac{\partial G}{\partial x_2} - \dfrac{\partial H}{\partial x_2} \end{pmatrix}$$

and

$$F = \begin{pmatrix} -2\left(1 + \dfrac{\partial G}{\partial x_1}\right) & \dfrac{\partial G}{\partial x_1} - \dfrac{\partial G}{\partial x_2} \\ \dfrac{\partial G}{\partial x_1} - \dfrac{\partial G}{\partial x_2} & -2\left(1 - \dfrac{\partial G}{\partial x_2} + \dfrac{\partial H}{\partial x_2}\right) \end{pmatrix}$$
Hence

$$V(x) = [x_1 + G(x_1, x_2)]^2 + [x_2 - G(x_1, x_2) + H(x_2)]^2$$
will be a Lyapunov function provided the inequalities

$$1 + \frac{\partial G}{\partial x_1} > 0$$

$$4\left(1 + \frac{\partial G}{\partial x_1}\right)\left(1 - \frac{\partial G}{\partial x_2} + \frac{\partial H}{\partial x_2}\right) - \left(\frac{\partial G}{\partial x_2} - \frac{\partial G}{\partial x_1}\right)^2 > 0$$
are satisfied. Berger and Perlmutter examined specific numerical examples for G(x₁, x₂) and H(x₂) and used these inequalities, together with contours V(x) = k, to construct regions in which the reactor is guaranteed to be stable, exactly as was done in Example 5-7. Later, Luecke and McGuire¹⁵ pointed out that these inequalities are sufficient, but are necessary only if one wishes to guarantee F negative-definite for all x. Since we require that F be negative-definite only throughout
the region inside V(x) = k, these inequalities are too restrictive, and the region of asymptotic stability may be extended. The more general form of Krasovskii's theorem is stated as follows: The nonlinear system
$$\frac{dx}{dt} = f(x) \qquad f(0) = 0$$
is asymptotically stable in-the-large if there exist constant, symmetric, positive-definite matrices P and Q such that the matrix

$$F(x) = \left(\frac{\partial f}{\partial x}\right)^T P + P\,\frac{\partial f}{\partial x} + Q \tag{5-32}$$

is negative-definite for all x; furthermore,

$$V(x) = f^T P\,f \tag{5-33}$$
is a Lyapunov function. Note the similarity between this theorem and the theorem on stability of linear systems utilizing Eqs. (5-15) and (5-16). The proof is similar to that of the more restricted form. Thus,
$$\frac{dV}{dt} = 2\left(f^T P\,\frac{\partial f}{\partial x}\right)f(x) = f^T P\,\frac{\partial f}{\partial x}\,f + \left[f^T\left(\frac{\partial f}{\partial x}\right)^T P\,f\right]^T = f^T F\,f - f^T Q\,f$$

Since F is negative-definite and Q positive-definite by hypothesis, it is clear that dV/dt is negative-definite. The identity in Eq. (5-29) may be rewritten in vector notation
$$\frac{d}{d\alpha}\,f(\alpha x) = \frac{\partial f(\alpha x)}{\partial x}\,x$$

Integrating this over α from α = 0 to α = 1 gives

$$f(x) = \int_0^1 \frac{\partial f(\alpha x)}{\partial x}\,x\,d\alpha$$

Consider some fixed x ≠ 0 for which f(x) = 0. Then

$$0 = x^T P\,f(x) = \int_0^1 x^T P\,\frac{\partial f(\alpha x)}{\partial x}\,x\,d\alpha = \frac{1}{2}\int_0^1 x^T F(\alpha x)\,x\,d\alpha - \frac{1}{2}\,x^T Q\,x$$

which is clearly a contradiction since F(αx) is negative-definite and Q is positive-definite. Hence f(x) cannot be zero unless x = 0, and V(x) > 0
for x ≠ 0. Now suppose we integrate Eq. (5-29) from α = 0 to α = β, where β is a positive constant. The result is

$$f(\beta x) = \int_0^{\beta} \frac{\partial f(\alpha x)}{\partial x}\,x\,d\alpha$$

where, as before, we assume x ≠ 0. Therefore,

$$x^T P\,f(\beta x) = \frac{1}{2}\int_0^{\beta} x^T F(\alpha x)\,x\,d\alpha - \frac{\beta}{2}\,x^T Q\,x$$
But this shows that xᵀP f(βx) → −∞ as β → ∞. Since x and P are fixed, this can happen only if ||f(βx)|| → ∞ as β → ∞ which, in turn, implies ||f(x)|| → ∞ as ||x|| → ∞. Finally, this guarantees that V(x) in Eq. (5-33) satisfies the additional condition of Theorem 5-3 and demonstrates asymptotic stability in-the-large. Luecke and McGuire¹⁵ show that astute choice of the matrix P, when using Krasovskii's theorem with the method of Example 5-7, can significantly enlarge the resulting stability region. In view of this theorem, it is not difficult to see why the linear approximation theorem should be valid.
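The effect of choosing P, noted by Luecke and McGuire, can be illustrated numerically. This is a hedged sketch: the system is again the Example 5-7 form assumed above (dx₁/dt = −x₁ − x₂², dx₂/dt = −x₂), and the particular P and Q below are illustrative choices, not values from the text.

```python
# Hedged sketch: Krasovskii's generalized criterion F = J^T P + P J + Q for
# the (assumed) Example 5-7 system, with J = [[-1, -2*x2], [0, -1]].
# P = diag(p1, p2) and Q = q*I are chosen for illustration.

def F_matrix(x2, P=(1.0, 2.0), q=0.1):
    p1, p2 = P
    # J^T P + P J + q*I for the diagonal P above (symmetric 2x2 result)
    f11 = -2.0*p1 + q
    f12 = -2.0*x2*p1
    f22 = -2.0*p2 + q
    return f11, f12, f22

def negative_definite(f11, f12, f22):
    # a symmetric 2x2 matrix is negative-definite iff f11 < 0 and det > 0
    return f11 < 0.0 and f11*f22 - f12*f12 > 0.0

# With the restricted form (P = I, Q = 0) the test fails for |x2| >= 1,
# but P = diag(1, 2) with a small Q extends the region past x2 = 1.2.
print(negative_definite(*F_matrix(1.2)))                        # True
print(negative_definite(*F_matrix(1.2, P=(1.0, 1.0), q=0.0)))   # False
```

With P = I the negative-definiteness region is |x₂| < 1, exactly as found above; the alternative P enlarges it, which is the point Luecke and McGuire make.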
Estimation of Transients
The Lyapunov function may be used to estimate the speed with which the state of an asymptotically stable system approaches the origin. Thus, for a given Lyapunov function, we define

$$\eta = \min_{x} \left\{\frac{-dV/dt}{V}\right\} \tag{5-34}$$

where the minimization is carried out over all values of x ≠ 0 in the region for which the system is asymptotically stable, and where we assume the minimum exists. Clearly, η > 0, and it follows that

$$\frac{dV(x(t))}{dt} \le -\eta\,V(x(t))$$
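The minimization in Eq. (5-34) can be approximated on a grid. This is a hedged sketch: both the system (the Example 5-7 form assumed earlier) and the Lyapunov function V = x₁² + x₂² are illustrative choices, and the search region |x₁|, |x₂| ≤ 0.5 is chosen well inside the stability region found above.

```python
# Hedged sketch: grid estimate of eta = min(-dV/dt / V) for
# dx1/dt = -x1 - x2**2, dx2/dt = -x2 with the illustrative V = x1^2 + x2^2.
# V(x(t)) then decays at least as fast as V(x(0)) * exp(-eta * t).

def f(x1, x2):
    return (-x1 - x2**2, -x2)

def ratio(x1, x2):
    f1, f2 = f(x1, x2)
    V = x1*x1 + x2*x2
    dVdt = 2.0*x1*f1 + 2.0*x2*f2   # dV/dt = (grad V)^T f
    return -dVdt / V

eta = min(ratio(0.01*i, 0.01*j)
          for i in range(-50, 51) for j in range(-50, 51)
          if not (i == 0 and j == 0))
print(eta)
```

For this system and region the minimum occurs at a corner of the grid, giving η = 1.5, so the transient bound is V(x(t)) ≤ V(x(0)) e^(−1.5 t) within the region.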
0 such that |o(ε)/ε| < h (or, equivalently, |o(ε)| < |ε| h) for all ε satisfying |ε| < k. Now, Eq. (6-25) may be written

$$\delta x_2(t_f) = \epsilon I + o(\epsilon) \tag{6-42}$$

where I is the quantity defined by Eq. (6-43). We assert that it is necessary to have I(t) ≡ 0 to avoid a negative δx₂(t_f). To prove this, suppose I(t) ≠ 0. Then we choose k > 0 such that |o(ε)| < |ε| · |I(t)|/2 for all |ε| < k, and we choose

$$\epsilon = -k \operatorname{sgn} I = \begin{cases} -k & \quad I > 0 \\ \phantom{-}k & \quad I < 0 \end{cases}$$

Then it follows from Eq. (6-42) that

$$\delta x_2(t_f) = -k|I| + o(\epsilon) < -\frac{k|I|}{2} < 0$$

a negative value; hence we must have I(t) ≡ 0.

The value of b can be varied as required, without invalidating Eq. (6-72). Therefore, the problem is solved with b as a parameter, and then b is chosen to satisfy

$$q[x(t_f)] = 0$$

Note that this gives as many relations as there are components of b, since q and b have the same number of components. An analogous remark applies to the determination of a in Eq. (6-66). Thus, for any set of initial and final conditions and constraints on x(t), we can choose conditions on λ*(t) which make Eq. (6-72) valid, and which provide a sufficient number of conditions to solve Eqs. (6-54) and (6-62) simultaneously. Now define the Hamiltonian function
$$H(x, m, \lambda) = \lambda^T f(x, m) \tag{6-73}$$

Then,

$$\frac{\partial H}{\partial m} = \left(\frac{\partial f}{\partial m}\right)^T \lambda$$
and Eq. (6-72) becomes

$$\delta S(x(t_f)) = \int_{t_0}^{t_f} \left(\frac{\partial H}{\partial m}\right)^T \delta m\,d\tau \tag{6-74}$$
which gives the variation in the performance criterion caused by a variation in the control. Now we know from the example in the previous section that, if the final state x(t_f) is not completely free, the control variations are not arbitrary, but must satisfy a generalized version of Eq. (6-41). Nevertheless, we saw that computationally equivalent conclusions regarding the integrand of Eq. (6-25) were reached in both free and fixed cases. This equivalence is not easily demonstrated for Eq. (6-74), but it can be demonstrated.⁴ For our purposes, we assume it without proof. Then, if we regard δm(t) as arbitrary (which also implies m(t) is not constrained), it follows that

$$\frac{\partial H}{\partial m} = 0 \tag{6-75}$$

to avoid a negative value of δS(x(t_f)). This is analogous to Eq. (6-27). Equation (6-75) states that the optimal control m*(t) must be a stationary point of the function H(x*, m, λ*). If we consider the control to be constrained
$$\alpha_i \le m_i(t) \le \beta_i \qquad i = 1, 2, \ldots, r \tag{6-76}$$
where αᵢ and βᵢ are the low and high constraints on the components mᵢ(t), then δm(t) may not be arbitrary, and Eq. (6-75) does not hold. Thus, if the ith optimal control component is on its low constraint, we can only have δmᵢ(t) ≥ 0, in which case Eq. (6-74) requires
$$\frac{\partial H}{\partial m_i} \ge 0 \qquad \text{at} \quad m_i^* = \alpha_i \tag{6-77}$$

Similarly, if the ith optimal control component is on its high constraint, we can only have δmᵢ(t) ≤ 0, so that ∂H/∂mᵢ ≤ 0 at mᵢ* = βᵢ.

EXAMPLE 6-1

Find the control m(t) which minimizes

$$J(m) = \int_{t_0}^{t_f} \left[x_1^2 + \sigma x_2^2 + \rho m^2\right] dt \qquad \sigma \ge 0, \quad \rho > 0$$
for the inertial system

$$\frac{dx_1}{dt} = x_2 \qquad \frac{dx_2}{dt} = m$$
under the following different sets of conditions:

(a) x(t₀) = (x₁₀, 0)ᵀ; x(t_f) free
(b) x(t_f) = 0
(c) x₂(t_f) = 0; x₁(t_f) free
(d) no conditions on x(t₀) or x(t_f)
(e) αx₁(t_f) + βx₂(t_f) = 1; α, β constants
Solution: We define the additional state variable

$$x_3 = \int_{t_0}^{t} \left[x_1^2 + \sigma x_2^2 + \rho m^2\right] dt$$

Then

$$\frac{dx_3}{dt} = x_1^2 + \sigma x_2^2 + \rho m^2$$

so that

$$f(x, m) = \begin{pmatrix} x_2 \\ m \\ x_1^2 + \sigma x_2^2 + \rho m^2 \end{pmatrix}$$
Then, from Eq. (6-82),

$$H(x^*, m, \lambda^*) = \lambda_1^* x_2^* + \lambda_2^* m + \lambda_3^*\left(x_1^{*2} + \sigma x_2^{*2} + \rho m^2\right)$$
and from Eq. (6-81), λ* must satisfy

$$\frac{d\lambda_1^*}{dt} = -2\lambda_3^* x_1^* \qquad \frac{d\lambda_2^*}{dt} = -\lambda_1^* - 2\lambda_3^*\sigma x_2^* \qquad \frac{d\lambda_3^*}{dt} = 0$$

For all conditions (a)-(e),

$$S(x(t_f)) = x_3(t_f)$$

so that in Eq. (6-64a)

$$c = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

and hence, from Eq. (6-70), λ₃*(t_f) = 1. This and the differential equation for dλ₃*/dt imply λ₃*(t) = 1 for all t₀ ≤ t ≤ t_f. To minimize H with respect to m, we differentiate and set ∂H/∂m = 0, since m is unconstrained. In other words, without constraints on m, the absolute minimum of H must be a stationary point (or else must occur for infinite m). Therefore
$$\frac{\partial H}{\partial m} = \lambda_2^* + 2\rho m^* = 0$$

which yields

$$m^*(t) = -\frac{1}{2\rho}\,\lambda_2^*(t)$$
Note that ∂²H/∂m² = 2ρ > 0, so that m*(t) is, in fact, the absolute minimum of H. The equations for the optimal system may now be reduced to

$$\frac{dx_1^*}{dt} = x_2^* \qquad \frac{dx_2^*}{dt} = -\frac{1}{2\rho}\,\lambda_2^* \qquad \frac{d\lambda_1^*}{dt} = -2x_1^* \qquad \frac{d\lambda_2^*}{dt} = -\lambda_1^* - 2\sigma x_2^* \tag{E6-1-1}$$
(a) For this case, the terminal condition x*(t_f) is free. Hence we
must have λ*(t_f) = 0. Therefore, we obtain the optimal response by solving Eqs. (E6-1-1) with the conditions

$$x_1(t_0) = x_{10} \qquad x_2(t_0) = 0 \qquad \lambda_1(t_f) = \lambda_{1f} = 0 \qquad \lambda_2(t_f) = \lambda_{2f} = 0$$
Now Eqs. (E6-1-1) are of the form of Eq. (3-8) with

$$A = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -\dfrac{1}{2\rho} \\ -2 & 0 & 0 & 0 \\ 0 & -2\sigma & -1 & 0 \end{pmatrix}$$
so that the solution may be written

$$\begin{pmatrix} x_1 \\ x_2 \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = e^{A(t - t_0)} \begin{pmatrix} x_{10} \\ 0 \\ \lambda_{10} \\ \lambda_{20} \end{pmatrix}$$
If we knew the initial conditions λ₁₀ and λ₂₀ on λ, there would be no difficulty in solving the problem. However, with values of x specified at t₀ and values of λ specified at t_f, the problem is much more difficult to solve. We shall consider more systematic procedures for solving such problems later in the chapter. For now, we proceed by guessing λ₁₀ and λ₂₀ and observing λ₁f and λ₂f. If these are not zero, we correct λ₁₀ and λ₂₀ and continue correcting until λ₁f and λ₂f are zero. Note that, for a given guess of λ₁₀ and λ₂₀, the solution curves are easily generated since all values are known at t₀. The procedure used to generate the solution curves in Figs. E6-1-1 through E6-1-8 is as follows: The system of Eqs. (E6-1-1) was programmed on an electronic analog computer. For illustrative purposes, the parameters ρ = 1, σ = 0, and initial state x₁₀ = 0.14 and x₂₀ = 0 were chosen. Also, we take t₀ = 0 for convenience. High-speed repetitive operation was used to quickly find a range of values of λ₁₀ and λ₂₀ generating solutions with λ₁(t) and λ₂(t) curves which approached zero reasonably simultaneously. As may be seen from Figs. E6-1-1 through E6-1-8, this range is narrow. (It is easy to show that the four eigenvalues of the matrix A satisfy the equation

$$p^4 - \frac{\sigma}{\rho}\,p^2 + \frac{1}{\rho} = 0$$
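The trial-and-error search on λ₁₀ and λ₂₀ described here can also be sketched digitally. Because Eqs. (E6-1-1) are linear, the map from (λ₁₀, λ₂₀) to (λ₁(t_f), λ₂(t_f)) is affine, so the shooting problem of part (a) can be solved exactly with three integrations instead of open-ended guessing. The values ρ = 1, σ = 0, x₁₀ = 0.14 are from the text; t_f = 2.0 is an illustrative choice.

```python
# Hedged sketch: shooting solution of the two-point boundary-value problem
# of Example 6-1(a) by exploiting linearity of Eqs. (E6-1-1).

RHO = 1.0

def deriv(z):
    x1, x2, l1, l2 = z
    # Eqs. (E6-1-1) with sigma = 0
    return (x2, -l2/(2.0*RHO), -2.0*x1, -l1)

def integrate(z, h=1e-3, t=2.0):
    for _ in range(int(round(t/h))):
        k1 = deriv(z)
        k2 = deriv(tuple(z[i] + 0.5*h*k1[i] for i in range(4)))
        k3 = deriv(tuple(z[i] + 0.5*h*k2[i] for i in range(4)))
        k4 = deriv(tuple(z[i] + h*k3[i] for i in range(4)))
        z = tuple(z[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0
                  for i in range(4))
    return z

def final_adjoints(l10, l20):
    zf = integrate((0.14, 0.0, l10, l20))
    return zf[2], zf[3]

# (lam1f, lam2f) = A_map (l10, l20) + b: build the affine map, then solve
# for lambda_1(tf) = lambda_2(tf) = 0 directly.
b1, b2 = final_adjoints(0.0, 0.0)
a11, a21 = [v - b for v, b in zip(final_adjoints(1.0, 0.0), (b1, b2))]
a12, a22 = [v - b for v, b in zip(final_adjoints(0.0, 1.0), (b1, b2))]
det = a11*a22 - a12*a21
l10 = (-b1*a22 + b2*a12)/det
l20 = (-b2*a11 + b1*a21)/det
r1, r2 = final_adjoints(l10, l20)
print(l10, l20, r1, r2)   # residuals r1, r2 should be near zero
```

On an analog computer this linearity is not available as directly, which is why the text's repetitive-operation search was used; digitally it removes the sensitivity problem for fixed t_f.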
[Figure E6-1-1: optimal solution curves for ρ = 1, σ = 0, t_f = 1.55, x(t_f) free]
[Figure E6-1-2: optimal solution curves for ρ = 1, σ = 0, t_f = 2.04, x(t_f) free]
[Figure E6-1-3: optimal solution curves for ρ = 1, σ = 0, t_f = 2.3, x(t_f) free]
[Figure E6-1-4: optimal solution curves for ρ = 1, σ = 0, t_f = 3.07, x(t_f) free]
[Figure E6-1-5: optimal solution curves for ρ = 1, σ = 0, t_f = 3.60, x(t_f) free]
[Figure E6-1-6: optimal solution curves for ρ = 1, σ = 0, t_f = 5.1, x(t_f) free]
[Figure E6-1-7: optimal solution curves for ρ = 1, σ = 0, t_f = 6.0, x(t_f) free]
[Figure E6-1-8: optimal solution curves for ρ = 1, σ = 0, t_f → ∞, x(t_f) free]
and therefore, that the solution is always unstable, as evidenced by the eventual behavior of the solutions in Figs. E6-1-1 through E6-1-8. This explains why the range of values of λ₁₀ and λ₂₀ is so small. Carelessly chosen initial values result in solutions which quickly diverge.) Returning to real-time operation, the value of λ₂₀, which is proportional to the initial value of m, was set near the low end of the thus-determined range. Then, λ₁₀ was varied until an intersection between λ₁(t) and λ₂(t) occurred on the time axis. The time at the intersection is then taken as t_f, and the resulting solution curves are necessarily optimal for this final time. The value of λ₂₀ was then increased within the range, and the procedure repeated. In this way, a series of optimal solutions, for various values of t_f, may be obtained. (The alternate procedure, fixing t_f and varying λ₁₀ and λ₂₀, is much more difficult.) Figures E6-1-1 through E6-1-8 show how the optimal solution changes with t_f. Since m* = −λ₂*/2, the λ₂ curves allow easy visualization of the optimal control. For small t_f, the control is always negative, increasing to zero at t_f. As t_f increases slightly, the control stays negative throughout the transient, but at t = t_f, the control approaches zero as an inflection point rather than as a minimum point. The actual t_f at which the inflection point occurs is difficult to find, but Fig. E6-1-3 indicates it to be in the range 2.1-2.5. For t_f less than this value, the final value x₁(t_f) is positive, decreasing with increasing t_f as expected. At the critical t_f the value of x₁(t_f) is zero. As t_f is increased above the critical value, the control is positive during the later portions of the transient, and the value of x₁(t_f) is negative, as shown in Figs. E6-1-4 through E6-1-7. The reader will undoubtedly notice certain inaccuracies in these figures, such as nonzero final values, etc. The author points out, however, that only minute changes in λ₁₀ and λ₂₀, particularly in λ₂₀, are involved in going from one t_f value to another. The most gentle pressure on the potentiometers fixing λ₁₀ and λ₂₀ causes significant changes in the nature of the solution curves. This must be recognized as a computational drawback of use of the minimum principle for optimization. As t_f → ∞, the response in Fig. E6-1-8 is approached. This response was generated by other methods, to be discussed later in the chapter. It is virtually impossible to find the optimal solution for t_f → ∞ by trial-and-error search on initial conditions because of the enormous sensitivity to λ₁₀ and λ₂₀. The effect of weighting the response velocity, by selecting ρ = σ = 1, is shown in Figs. E6-1-9 through E6-1-12. Comparison of these with Figs. E6-1-1 through E6-1-8 shows that the penalty in J on excess velocity results in an optimal x₁(t) which falls more slowly, as expected. Figure E6-1-12 suggests that, for this case, the optimal x₁(t) never passes through negative values.
(b) This case differs from the last in that now x(t_f) is fixed (at 0) and λ(t_f) is free. The procedure is identical to that of part (a), except that the initial values λ₁₀ and λ₂₀ are varied until x₁f = x₂f = 0. Typical results for ρ = 1, σ = 0 are shown in Figs. E6-1-13 through E6-1-15. As t_f → ∞, free and fixed terminal-state problems become identical, since the former must obviously have x₁(t_f) → 0 and x₂(t_f) → 0 for the optimal solutions. The optimal response in Fig. E6-1-15 for t_f = 4.5 is already close to those of Figs. E6-1-6 through E6-1-8.

(c) For this case we have x₁(t_f) free, and hence λ₁(t_f) = 0. The only difference, therefore, is that we seek λ₁₀ and λ₂₀ to give λ₁(t_f) = x₂(t_f) = 0. This was not done computationally, since the results are so similar to the previous two cases, (a) and (b).
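The extreme sensitivity to λ₁₀ and λ₂₀ noted above is explained by the eigenvalues of A. The following short check computes the roots of the characteristic equation p⁴ − (σ/ρ)p² + 1/ρ = 0 for the case ρ = 1, σ = 0 used in the figures; two roots have positive real parts, so forward integration of Eqs. (E6-1-1) is inherently unstable.

```python
import cmath

# For rho = 1, sigma = 0 the characteristic equation is p**4 + 1 = 0,
# whose roots are the fourth roots of -1.
roots = [cmath.exp(1j*cmath.pi*(2*k + 1)/4) for k in range(4)]
max_real = max(r.real for r in roots)
print([complex(round(r.real, 4), round(r.imag, 4)) for r in roots], max_real)
```

The largest real part is √2/2 ≈ 0.707, so errors in the guessed initial adjoints grow roughly like e^(0.707 t), consistent with the divergence visible in the figure captions.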
[Figure E6-1-9: optimal solution curves for ρ = σ = 1, t_f = 2.0, x(t_f) free]
[Figure E6-1-10: optimal solution curves for ρ = σ = 1, t_f = 2.7, x(t_f) free]
[Figure E6-1-11: optimal solution curves for ρ = σ = 1, t_f = 4.1, x(t_f) free]
[Figure E6-1-12: optimal solution curves for ρ = σ = 1, t_f → ∞, x(t_f) free]
(d) The required conditions on λ in the absence of conditions on x(t₀) and x(t_f) are λ(t₀) = λ(t_f) = 0. Hence, we must search for appropriate initial values x(t₀) which, together with λ(t₀) = 0, will give λ(t_f) = 0. It is not hard to deduce that the appropriate initial values are x(t₀) = 0, which yield m*(t) = x*(t) = λ*(t) = 0 and hence J = 0, obviously the minimum value. Thus, the minimum principle
[Figure E6-1-13: optimal solution curves for ρ = 1, σ = 0, t_f = 2.6, x(t_f) = 0]
gives results agreeing with intuition; if the initial value of x is at our discretion, we should clearly choose x(t₀) = 0 on physical grounds.

(e) The constraint may be written in the form of Eq. (6-68) as

$$q[x_1(t_f), x_2(t_f)] = \alpha x_1(t_f) + \beta x_2(t_f) - 1 = 0$$
Then, Eq. (6-69) yields

$$\lambda_1(t_f) = b\alpha \qquad \lambda_2(t_f) = b\beta$$
Thus, the final adjoint vector must lie on the line λ₁(t_f) = (α/β)λ₂(t_f), which is of course perpendicular to the surface represented by q = 0. The exact position of λ(t_f) on this line is determined by the parameter b, which is adjusted to yield q = 0, i.e., to cause x(t_f) to lie on the designated line. This means a double trial-and-error. For each value
[Figure E6-1-14: optimal solution curves for ρ = 1, σ = 0, t_f = 3.0, x(t_f) = 0]

[Figure E6-1-15: optimal solution curves for ρ = 1, σ = 0, t_f = 4.5, x(t_f) = 0]
of b, λ₁₀ and λ₂₀ are sought such that λ(t_f) lies on the line λ₁(t_f) = (α/β)λ₂(t_f); then b is varied until x(t_f) is on q = 0.

Observations on computational aspects of the minimum principle
Example 6-1 points out some features of the minimum principle which are general. The system equations and adjoint equations must be solved with a mixture of initial and final conditions. Such problems are called two-point boundary-value problems, and they usually require trial-and-error approaches for their solution. In general, this will require a search for n numbers such that n conditions are satisfied, where n is the order of the system equations. Thus, the minimum principle has the computational effect of reducing a problem, which is originally a search for a vector of functions m(t), to a search for the n-dimensional constant vector which is the solution of n simultaneous implicit relations. Clearly, this is a drastic reduction in the complexity of the problem, as Example 6-1 shows. Search for a vector of functions is impossible, while search for a vector of numbers is quite feasible. The minimum principle provides necessary, but not sufficient, conditions for minimizing S(x(t_f)). Therefore, if there are several solutions which satisfy the conditions of the minimum principle, we have no recourse but to evaluate S(x(t_f)) for each such solution to find the minimum. The potentially dangerous aspect of this, and of the previous observation on searches, is that the existence of more than one solution satisfying the necessary conditions may go undetected during a trial-and-error search over boundary conditions. All that can be done about this, for the general class of systems described by nonlinear equations of the form of Eq. (6-54), is to offer the reader fair warning. For linear system equations, Eq. (3-6), there is a bright note. When the objective is to minimize an integral of a quadratic function of x and m, as in Example 6-1, or to minimize the time required to drive the state to the origin x = 0, under reasonable restrictions the necessary conditions cannot have more than one solution; thus the problem does not arise in such cases.
These linear problems are discussed in more detail in later sections.

Free final time
In some problems, the final time t_f will be unspecified. An illustration of this is given in Example 6-2. Therefore, in considering variations in S(x(t_f)), we must allow variations in t_f. In other words, we consider the effect on S of varying t_f from the optimal value. Again, we give a heuristic derivation,
ignoring small terms and assuming requisite differentiability conditions, etc. A rigorous derivation is given by Pontryagin et al.⁴ Clearly, except for second-order terms,

$$\delta x(t_f) = \left.\frac{dx^*}{dt}\right|_{t_f^*}\,\delta t_f$$

where δt_f = t_f − t_f*, and t_f* is the optimal final time. Then

$$\delta S(x(t_f)) = \lambda^T(t_f)\,\delta x(t_f) = \lambda^T(t_f)\left.\frac{dx^*}{dt}\right|_{t_f^*}\,\delta t_f$$

From the definition of H, Eq. (6-73), this may be rewritten

$$\delta S(x(t_f)) = H^*\big|_{t_f^*}\,\delta t_f$$
Since δt_f, the variation in final time, may be positive or negative, this relation clearly implies that a negative δS(x(t_f)) can be avoided only if

$$H^*\big|_{t_f^*} = 0$$
which says that a necessary condition satisfied by the optimal response is that the final value of the Hamiltonian is zero. However, we have already stated that the Hamiltonian is constant along the optimal response. Hence, we conclude that when t_f is unspecified,

$$H^* = 0 \tag{6-83}$$
everywhere on the optimal response. Effectively, Eq. (6-83) gives us one more condition to be used for solving optimal control problems. Clearly, an additional condition is necessary since we have lost the condition given by a specified final time.

Performance criteria and Lagrange multipliers
In some applications, the performance criterion to be minimized is of the form

$$J(m) = \int_{t_0}^{t_f} F(x, m)\,dt + G(x(t_f)) \tag{6-84}$$
where F and G are suitably differentiable scalar functions. The minimum principle stated earlier may be used if we define an additional state variable,

$$x_{n+1}(t) = \int_{t_0}^{t} F(x, m)\,dt \tag{6-85}$$

an augmented state vector,

$$x_a = \begin{pmatrix} x \\ x_{n+1} \end{pmatrix} \tag{6-86}$$
and an augmented system equation, Eq. (6-54), with

$$f_a(x, m) = \begin{pmatrix} f \\ F \end{pmatrix} \qquad \frac{dx_a}{dt} = f_a(x, m) \tag{6-87}$$
This augmentation has been illustrated in the foregoing examples. We must also define a modified performance criterion

$$S(x_a(t_f)) = x_{n+1}(t_f) + G(x(t_f)) \tag{6-88}$$

which changes nothing but the final conditions λ(t_f). Now suppose we are required to minimize the performance criterion subject to the constraint
$$\int_{t_0}^{t_f} g(x, m)\,dt = \gamma \tag{6-89}$$
where g and γ are given vectors of functions and constants, respectively. Then the method of Lagrange multipliers may be used: Let p be a constant vector of Lagrange multipliers, having the same number of components as γ and g. Define J_p(m) by

$$J_p(m) = J(m) + p^T \int_{t_0}^{t_f} g(x, m)\,dt \tag{6-90}$$
where J(m) is given by Eq. (6-84). Then, if m*(t) is the control which minimizes J_p(m) and satisfies Eq. (6-89), it minimizes J(m) subject to the constraint of Eq. (6-89); i.e., it is the solution to the original problem. The proof of this assertion follows easily. Suppose there exists an m(t) satisfying Eq. (6-89) such that J(m) < J(m*). Since both controls satisfy Eq. (6-89), the integral term in Eq. (6-90) is the same constant pᵀγ for each, so that J_p(m) < J_p(m*), contradicting the assumption that m*(t) minimizes J_p(m).

EXAMPLE 6-2

Find the control m(t) which drives the system

$$\frac{dx_1}{dt} = x_1 + m$$

to a specified final state in minimum time, subject to the constraint

$$\int_{t_0}^{t_f} m^2\,dt = C \tag{E6-2-2}$$
This is an example in which the final time is unspecified and, in fact, is to be minimized. Corresponding to Eq. (6-84), the performance criterion to be minimized may be written

$$J(m) = \int_{t_0}^{t_f} dt = t_f - t_0$$
We define

$$J_p(m) = \int_{t_0}^{t_f} (1 + pm^2)\,dt$$

and, using the Lagrange multiplier principle, we recognize that minimization of J_p(m) and choice of p to satisfy Eq. (E6-2-2) constitute a solution to the original problem. We next define the additional state variable

$$x_2 = \int_{t_0}^{t} (1 + pm^2)\,dt$$

and hence

$$\frac{dx_2}{dt} = 1 + pm^2$$
Thus,

$$H = \lambda_1(x_1 + m) + \lambda_2(1 + pm^2)$$

from which it follows (since ∂H/∂x₂ = 0 and λ₂(t_f) = 1) that

$$\lambda_2(t) \equiv 1$$
Differentiation yields

$$\frac{\partial H}{\partial m} = \lambda_1 + 2pm = 0 \tag{E6-2-3}$$
and therefore

$$m = -\frac{\lambda_1}{2p}$$

We have, as stated before, omitted the asterisk notation, since it is understood that this relation gives the optimal m*(t). The solution to the differential equation for λ₁ is clearly

$$\lambda_1 = \lambda_1(t_0)\,e^{-(t - t_0)}$$

Optimal Control of Linear Systems with Quadratic Performance Criteria

By partitioning the transition matrix of Eq. (6-111) into the n × n submatrices Φ₁₁(t; t₀), Φ₁₂(t; t₀), Φ₂₁(t; t₀), and Φ₂₂(t; t₀), we may rewrite Eq. (6-111) as two separate relations for x and λ. We do this replacing t₀ by t and t by t_f, so that the final state is expressed as a function of the present state. This is clearly in accord with the state property presented in Chapter 3.

$$x(t_f) = \Phi_{11}(t_f; t)\,x(t) + \Phi_{12}(t_f; t)\,\lambda(t) + \int_t^{t_f} \Phi_{12}(t_f; \tau)\,W(\tau)\,r(\tau)\,d\tau \tag{6-113}$$

$$\lambda(t_f) = \Phi_{21}(t_f; t)\,x(t) + \Phi_{22}(t_f; t)\,\lambda(t) + \int_t^{t_f} \Phi_{22}(t_f; \tau)\,W(\tau)\,r(\tau)\,d\tau \tag{6-114}$$
These equations may be solved simultaneously for λ(t), by means of the final condition λ(t_f) given in Eq. (6-105), to obtain

$$\lambda(t) = K(t)\,x(t) - \mu(t) \tag{6-115}$$

where
$$K(t) = M(t)\left[G^T(t_f)\,P\,G(t_f)\,\Phi_{11}(t_f; t) - \Phi_{21}(t_f; t)\right] \tag{6-116}$$

$$\mu(t) = M(t)\left[G^T(t_f)\,P\,r(t_f) - G^T(t_f)\,P\,G(t_f)\int_t^{t_f} \Phi_{12}(t_f; \tau)\,W(\tau)\,r(\tau)\,d\tau + \int_t^{t_f} \Phi_{22}(t_f; \tau)\,W(\tau)\,r(\tau)\,d\tau\right] \tag{6-117}$$

$$M(t) = \left[\Phi_{22}(t_f; t) - G^T(t_f)\,P\,G(t_f)\,\Phi_{12}(t_f; t)\right]^{-1} \tag{6-118}$$
and we assume the matrix inverse denoted by M(t) exists. (A proof is given by Kalman.⁵ This reference is apparently the original source for much of the general theory on optimal control of linear systems with quadratic criteria.) The time-varying matrix K(t) has dimension n × n, and the time-varying vector μ(t) has dimension n × 1.

Structure of the optimal control
At this point, we should assess what has been accomplished. The general theory has shown in Eq. (6-107) that the optimal control is a linear function of the adjoint variables. This result leads to Eq. (6-109), a system of 2n linear differential equations in x and λ, with initial conditions on x and final conditions on λ. As we saw in Example 6-1, such two-point boundary-value problems are not easy to solve. However, by using the general results of Chapter 3 on transition matrices, we have been able to deduce that the solution must be of the form of Eq. (6-115). This is very important since, if we can evaluate K(t) and μ(t), we can obtain the optimal control as a function of the state vector x, by combining Eqs. (6-107) and (6-115) to obtain

$$m^*(t) = -R^{-1}(t)\,B^T(t)\left[K(t)\,x(t) - \mu(t)\right] \tag{6-119}$$
This will enable us to construct a feedback control system for the optimal control, as illustrated in Fig. 6-3. This system requires measurement of the state vector rather than of the output vector. There is obviously a great practical advantage to use of a feedback system rather than a programmed system in which the optimal control m(t) is simply computed and applied to the actual process without regard to behavior of the state vector during the transient. Slight deviations of the process dynamics from Eq. (3-44) will always cause deviations of the physical x(t) from the optimal x*(t), which is why the asterisk is omitted from x(t) in Eq. (6-119). To further distinguish between the feedback and programmed systems, we term a control m(t), which is predetermined as a function of time, a control function, whereas a control m(x(t), t), which is computed as a function of the actual state, is
[Figure 6-3: Optimal feedback control of linear system with quadratic criterion. The signals μ(t) and K(t)x(t) are differenced and multiplied by −R⁻¹(t)Bᵀ(t) to form m(t), which drives dx/dt = A(t)x(t) + B(t)m(t); the output is c(t) = G(t)x(t), and K(t) closes the feedback loop on x(t).]
termed a control law. Note that a control law may also be written m(xₐ(t)) and, therefore, depends only on the augmented state vector. The gains in the loop of Fig. 6-3 are time-varying. This does not result from consideration of time-varying matrices A, B, G, Q, and R, but is true even when all these matrices are constant, as seen from Eq. (6-37) and from the present derivation. However, Eq. (6-36) shows for the first-order system with r(t) = 0 that the gain becomes constant as t_f → ∞, and we will indicate later that this is a general result. The powerful results of Chapter 3 enabled us to write Eqs. (6-115), (6-116), (6-117), and (6-118), which show that the optimal gain is independent of the initial state, an important practical observation. The loop of Fig. 6-3 shows that the state vector, and not the output vector, must be known to generate the optimal control law. Therefore, the system must be observable if the feedback loop is to be constructed. In other words, by definition, the output vector, and not the state vector, is available by direct process measurement. Unless we can compute the state vector x from the output vector c, i.e., unless the system is observable, we cannot construct the control law, although of course the control function may still be applied. The signal μ(t) acts as a kind of setpoint vector for the loop of Fig. 6-3. We see immediately from Eq. (6-117) that, for r(t) = 0, μ(t) = 0. This case is called the regulator or load problem, since Eqs. (6-97) and (6-98) show that when r = 0 the objective is to regulate the output c near zero. The problem with r(t) ≠ 0 is called the servomechanism or setpoint problem, since the objective is to have the output signal follow closely a given input signal. A major problem remaining is evaluation of K(t) and μ(t). Equations (6-116) and (6-117) are not computationally convenient.
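For a constant-coefficient scalar problem, however, the transition-matrix route of Eqs. (6-116) and (6-118) can be carried out in closed form. This is a hedged sketch: the regulator dx/dt = m with integrand x² + m² is an illustrative choice (so that A = 0, U = BR⁻¹Bᵀ = 1, V = 1, r = 0, terminal weighting P = 0, with U and V playing the roles they have in the canonical equations (6-109)).

```python
import math

# Hedged sketch: K(t) from Eqs. (6-116) and (6-118) for a scalar regulator.
# The Hamiltonian system (6-109) is dx/dt = -lambda, dlambda/dt = -x, whose
# transition matrix over s = tf - t is
#   phi11 = cosh s, phi12 = -sinh s, phi21 = -sinh s, phi22 = cosh s.

def K_of(t, tf):
    s = tf - t
    phi11, phi12 = math.cosh(s), -math.sinh(s)
    phi21, phi22 = -math.sinh(s), math.cosh(s)
    GPG = 0.0                       # G^T P G with P = 0
    M = 1.0/(phi22 - GPG*phi12)     # Eq. (6-118)
    return M*(GPG*phi11 - phi21)    # Eq. (6-116)

# For this problem K(t) = tanh(tf - t): the optimal gain approaches the
# constant value 1 as the horizon tf - t grows, as noted in the text.
print(K_of(0.0, 10.0), math.tanh(10.0))
```

The closed-form answer, K(t) = tanh(t_f − t), also illustrates the claim that the gain becomes constant far from the terminal time.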
There are no simple methods for evaluating transition matrices such as Φ(t; t₀) for linear time-varying systems, so these expressions cannot be regarded as useful solutions. In the next section we develop a computer-based method for evaluating K(t) and μ(t) which is computationally more convenient than Eqs. (6-116) and (6-117).

Evaluation of K(t) and μ(t)
The approach to this problem is a natural one. We substitute Eq. (6-115) into Eq. (6-109) and determine conditions which K(t) and μ(t) must satisfy. Differentiation of Eq. (6-115) yields

dλ/dt = (dK/dt)x + K(dx/dt) − dμ/dt
      = (dK/dt)x + K(Ax − Uλ) − dμ/dt
      = (dK/dt)x + KAx − KUKx + KUμ − dμ/dt
Chap. 6
Optimal Control of Linear Systems with Quadratic Performance Criteria
217
which may be equated to the expression for dλ/dt obtained from Eq. (6-109),

dλ/dt = −Vx − Aᵀλ + Wr = −Vx − AᵀKx + Aᵀμ + Wr
These expressions for dλ/dt must be identical for all x. Therefore, by equating the coefficients of x and by equating the remaining terms, we obtain the results

dK/dt = −(KA + AᵀK) − V + KUK    (6-120)

dμ/dt = (KU − Aᵀ)μ − Wr    (6-121)

The boundary conditions on K and μ are obtained from Eqs. (6-116) to (6-118). First note that, since Φ(t; t₀) is a transition matrix, Φ(t₀; t₀) = I, and therefore, from Eq. (6-112), Φ₁₁(t₀; t₀) = Φ₂₂(t₀; t₀) = I and Φ₁₂(t₀; t₀) = Φ₂₁(t₀; t₀) = 0. Then, evaluation of Eqs. (6-116) to (6-118) at t = t_f yields

K(t_f) = Gᵀ(t_f) P G(t_f)    (6-122)

μ(t_f) = Gᵀ(t_f) P r(t_f)    (6-123)
The important result of Eqs. (6-119) to (6-123) is that, by starting at t_f and integrating backward in time to t₀, we can evaluate K(t) and μ(t) without trial-and-error. Then, knowledge of K(t) and μ(t) gives m*(t) from Eq. (6-119). The two-point boundary-value problem no longer exists, because the problem has been separated into two one-point boundary-value problems. The first one-point problem is Eqs. (6-120) and (6-121), for which the final conditions are known, and the second is Eqs. (3-44) and (6-119), for which the initial conditions are known. The solution of the first problem for K(t) and μ(t) is used in the solution of the second for the optimal response x*(t). It should be noted that Eq. (6-120) is nonlinear because of the last term. This set of simultaneous differential equations is of the Riccati form, and therefore Eq. (6-120) will be called the Riccati equation.

We can now prove that K(t) must be symmetric. First, the existence and uniqueness theorem given in Appendix A guarantees that the solution to Eq. (6-120) with the final condition given by Eq. (6-122) is unique. Therefore, if we show that Kᵀ(t) also satisfies the same equation and final condition, then we must have Kᵀ(t) = K(t). But Kᵀ(t) is easily shown to satisfy the Riccati equation and final condition if we take the transpose of Eqs. (6-120) and (6-122). Therefore, K(t) is symmetric and has only n(n + 1)/2 unknown elements, rather than n² unknown elements. Further, the Riccati equation contains only n(n + 1)/2 simultaneous nonlinear differential equations. The reader should use Eq. (6-113) to show that the fixed terminal point problem results in K(t_f) → ∞.
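The backward integration called for here is straightforward to carry out on a computer. The following is a minimal sketch (matrices, horizon, and step count are illustrative choices); it integrates Eq. (6-120) backward from K(t_f) and checks the result against the closed-form gain of the first-order example treated next:

```python
import numpy as np

def riccati_backward(A, V, U, K_f, t0, tf, n=20000):
    """Integrate the Riccati equation (6-120),
       dK/dt = -(K A + A^T K) - V + K U K,
    backward in time from K(tf) = K_f using RK4 steps; returns K(t0)."""
    dt = (tf - t0) / n
    f = lambda K: -(K @ A + A.T @ K) - V + K @ U @ K
    K = K_f.copy()
    for _ in range(n):
        k1 = f(K)
        k2 = f(K - 0.5 * dt * k1)
        k3 = f(K - 0.5 * dt * k2)
        k4 = f(K - dt * k3)
        K = K - (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return K

# First-order example: A = -1, V = 2, U = 1/(2 rho), K(tf) = 0
rho, t0, tf = 0.5, 0.0, 2.0
K0 = riccati_backward(np.array([[-1.0]]), np.array([[2.0]]),
                      np.array([[1.0 / (2 * rho)]]), np.zeros((1, 1)), t0, tf)
alpha = np.sqrt(1 + 1 / rho)
k_exact = 2.0 / (1 + alpha / np.tanh(alpha * (tf - t0)))   # closed form
print(K0[0, 0], k_exact)        # the two values agree
```

The same routine applies unchanged to the matrix case, with K_f = GᵀPG from Eq. (6-122).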
EXAMPLE 6-5

At this point we rework the problem given at the beginning of the chapter, using the general theory developed here. Specifically, for the system

dx₁/dt = −x₁ + m

we wish to minimize

J(m) = ∫ from t₀ to t_f of [x₁²(t) + ρm²(t)] dt

The appropriate matrices and vectors to describe this problem in terms of the general theory are

A = −1,  B = 1,  G = 1,  H = 0
P = 0,  Q = 2,  R = 2ρ,  r(t) = 0

Therefore,

U = 1/(2ρ),  V = 2,  μ(t) = 0

and the Riccati equation is
dk₁/dt = 2k₁ − 2 + k₁²/(2ρ)

with final condition

k₁(t_f) = 0

The solution is found by direct integration,

t_f − t = ∫ from 0 to k₁ of ds / [2 − 2s − s²/(2ρ)]

which yields

k₁ = 2 / [1 + α coth α(t_f − t)]

where α = √(1 + ρ⁻¹). This is the identical result obtained in Eq. (6-37), since the factor R⁻¹B = 1/(2ρ) is included in the gain k of Eq. (6-37).

The differential equations for μ(t), Eq. (6-121), are linear. They must be integrated backward in time from t_f to t₀, because only the final condition μ(t_f) is known. But this implies that at t₀, when this backward integration must be performed, we know the setpoint signal r(t) for all t₀ ≤ t < t_f. In other
t₀. Use of m = 0 for t > T guarantees c = 0 for all t > T. Therefore, there is a control which gives a finite value of J(m), and the optimal value of J(m) must be finite. If we did not assume controllability, we could not be sure that an optimal control exists, since it might happen that every control results in an infinite value of J(m). These considerations also show that the optimal control will surely have c(t) → 0 as t → ∞, and therefore we can take P = 0.

Kalman⁵ considered the stationary case where the matrices A, B, Q, R, and G are constant, and where r(t) = 0, and he showed that

lim (t_f → ∞) K(t) = K_s

where K_s is a constant matrix which satisfies the quadratic equation

0 = −(K_s A + Aᵀ K_s) − V + K_s U K_s    (6-124)

In other words, K_s is the "steady-state" solution to Eq. (6-120), obtained by allowing dK/dt → 0. The practical significance of this result is that the optimal controller for the stationary regulator problem, with t_f → ∞, is a simple feedback with constant gains,

m*(t) = −R⁻¹Bᵀ K_s x(t)    (6-125)

The gain in Eq. (6-36) is a specific example of this result. This control is particularly easy to implement. It should be noted again that the linear stationary system must be both controllable and observable if Eq. (6-125) is to be used.
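The steady-state gain is also easy to obtain numerically, by integrating Eq. (6-120) backward until dK/dt vanishes. A minimal sketch (the first-order regulator of the preceding example, for which K_s = 2/(1 + α) in closed form, is used as a check):

```python
import numpy as np

def steady_riccati(A, V, U, tol=1e-10, dt=1e-3, max_steps=10**6):
    """Solve 0 = -(K A + A^T K) - V + K U K, Eq. (6-124), by integrating
    the Riccati equation (6-120) backward in time until dK/dt vanishes."""
    K = np.zeros_like(A)
    for _ in range(max_steps):
        dK = -(K @ A + A.T @ K) - V + K @ U @ K
        K = K - dt * dK               # one Euler step backward in time
        if np.max(np.abs(dK)) < tol:
            return K
    raise RuntimeError("no convergence")

# First-order regulator of the preceding example: A = -1, V = 2, U = 1/(2 rho)
rho = 0.5
Ks = steady_riccati(np.array([[-1.0]]), np.array([[2.0]]),
                    np.array([[1.0 / (2 * rho)]]))
alpha = np.sqrt(1 + 1 / rho)
print(Ks[0, 0], 2.0 / (1 + alpha))    # both approximately 0.732
```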
The feedback controller is illustrated by the relatively general result for a second-order system, obtained in the next example.

EXAMPLE 6-6

We consider control of the second-order system

d²c/dt² + 2ζ(dc/dt) + c = m(t)

The objective is to minimize the performance criterion

J(m) = ∫ from t₀ to ∞ of [c²(t) + ρm²(t)] dt

with a given initial condition c(t₀) = c₀, dc(t₀)/dt = 0. Physically, the problem really is identical to setpoint control of a second-order process. The output variable c(t) is defined as a deviation around the final steady state, so that m = 0 is ultimately required to hold the process at the desired steady state, r(t) = 0. This is similar to the observations made in the discussion following Eq. (6-36). Thus, certain setpoint problems can be treated as regulator problems if this redefinition of output variable is made. A discussion of the problem which arises when this redefinition is not made, and the problem is treated as a servomechanism problem, is given in the next example. Define as state variables x₁ = c, x₂ = dc/dt. Then, this problem is described by
A = (0  1; −1  −2ζ),    B = (0; 1)
R = ρ,  Q = I,  G = (1  0),  H = 0,  r(t) = 0,  P = 0

A check shows that the system is both controllable and observable. From Eq. (6-110),

U = (1/ρ)(0  0; 0  1),    V = (1  0; 0  0)

and Eq. (6-124) becomes the set of quadratic equations

2k₁₂ − 1 + k₁₂²/ρ = 0
−k₁₁ + 2ζk₁₂ + k₂₂ + k₁₂k₂₂/ρ = 0    (E6-6-1)
−2k₁₂ + 4ζk₂₂ + k₂₂²/ρ = 0
Note that we have used the fact that K_s is symmetric:

K_s = (k₁₁  k₁₂; k₁₂  k₂₂)

We shall demonstrate later (see Example 6-18) that K must be positive-definite. This fact enables us to select, from among the multiple roots which satisfy the system (E6-6-1) of quadratic equations, those which also satisfy k₁₁ > 0, k₂₂ > 0, k₁₁k₂₂ > k₁₂². These roots are

k₁₁ = ρ[α√(4ζ² + 2(α − 1)) − 2ζ]
k₁₂ = ρ(α − 1)
k₂₂ = ρ[√(4ζ² + 2(α − 1)) − 2ζ]

where α = √(1 + ρ⁻¹). From Eq. (6-119) the optimal control is

m(t) = −(1/ρ)[k₁₂x₁(t) + k₂₂x₂(t)]
     = −[(α − 1)x₁ + (√(4ζ² + 2(α − 1)) − 2ζ)x₂]
     = −[(α − 1)c + (√(4ζ² + 2(α − 1)) − 2ζ)(dc/dt)]

A block diagram for the optimal control is shown in Fig. E6-6-1.

Figure E6-6-1  Optimal regulation of output of second-order system.
Note that the feedback control element is unrealizable and must be approximated by, for example,

[(k₂₂/ρ)s + k₁₂/ρ] / (ηs + 1)

with η very small. The characteristic equation of this closed loop is

s² + s√(4ζ² + 2(α − 1)) + α = 0

Therefore, the closed loop has natural frequency ωₙ and damping factor ζ_c given by⁶

ωₙ = √α,    ζ_c = (1/2)√[(4ζ² + 2(α − 1))/α]
As ρ → 0, and therefore α → ∞, less penalty is placed on m(t); the gains k₁₂/ρ and k₂₂/ρ become large, and the closed-loop damping factor approaches 0.707, a very reasonable figure. Note that the control is independent of the initial conditions. More general results on this problem are presented by Kalman.⁷
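These results are easily checked numerically; the sketch below (ζ and ρ are illustrative values) verifies the gains against the quadratic set (E6-6-1) and exhibits the limiting damping factor:

```python
import numpy as np

def lq_gains(zeta, rho):
    """Riccati gains and closed-loop damping for the second-order
    regulator of Example 6-6."""
    alpha = np.sqrt(1 + 1 / rho)
    beta = np.sqrt(4 * zeta**2 + 2 * (alpha - 1))
    k11 = rho * (alpha * beta - 2 * zeta)
    k12 = rho * (alpha - 1)
    k22 = rho * (beta - 2 * zeta)
    zeta_c = beta / (2 * np.sqrt(alpha))      # closed-loop damping factor
    return k11, k12, k22, zeta_c

zeta, rho = 0.3, 1e-4
k11, k12, k22, zeta_c = lq_gains(zeta, rho)

# Residuals of the algebraic Riccati equations (E6-6-1) -- all near zero:
r1 = 2 * k12 - 1 + k12**2 / rho
r2 = -k11 + 2 * zeta * k12 + k22 + k12 * k22 / rho
r3 = -2 * k12 + 4 * zeta * k22 + k22**2 / rho
print(r1, r2, r3)
print(zeta_c)       # approaches 0.707 as rho -> 0
```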
EXAMPLE 6-7

We consider here the servomechanism problem for the first-order system

dc/dt + c = m

for which the regulator control has been derived in Example 6-5. The performance index is now

J(m) = ∫ from t₀ to t_f of {[r(t) − c(t)]² + ρm²(t)} dt
The values of A, B, G, H, P, Q, and R are identical to those used in Example 6-5. Therefore, the value of K(t) is unchanged. Note that K(t) is the same for both servomechanism and regulator problems, because Eq. (6-120) is the same for both. To solve the servomechanism problem, we write Eq. (6-121) as

dμ/dt = {[ρ(1 + α coth α(t_f − t))]⁻¹ + 1}μ − 2r(t)

since W = 2 from Eq. (6-110). This must be solved with the final condition μ(t_f) = 0. An integrating factor for this equation is

sinh α(t_f − t) + α cosh α(t_f − t)

The solution can then be written

μ(t) = {2 / [sinh α(t_f − t) + α cosh α(t_f − t)]} ∫ from t to t_f of [sinh α(t_f − τ) + α cosh α(t_f − τ)] r(τ) dτ    (E6-7-1)

This equation shows clearly that computation of μ at any time t requires knowledge of the entire future input r(t). Consider first the case of a unit-step function, r(t) = S(t). Then,

μ(t) = (2/α)[cosh α(t_f − t) + α sinh α(t_f − t) − 1] / [sinh α(t_f − t) + α cosh α(t_f − t)]

and Eq. (6-119) yields the control. Combining this with the system equation yields

dc/dt + {1 + [ρ(1 + α coth α(t_f − t))]⁻¹}c = (1/ρα)[cosh α(t_f − t) + α sinh α(t_f − t) − 1] / [sinh α(t_f − t) + α cosh α(t_f − t)]
An integrating factor for this equation is just the inverse of that for the equation for μ(t). The solution is

c(t) = (c₀/η)[α cosh α(t_f − t) + sinh α(t_f − t)]
       + [1/(1 + ρ)]{1 − [α cosh α(t_f − t) + sinh α(t_f − t) + sinh α(t − t₀)]/η}

where

η = α cosh α(t_f − t₀) + sinh α(t_f − t₀)

and c₀ = c(t₀). In a typical case, c₀ = 0, and only the second term is of interest. Let us examine this second term for large values of α(t_f − t₀). Then, for almost all time during the transient, except near t₀ and near t_f, we have that cosh α(t_f − t), sinh α(t_f − t), and sinh α(t − t₀) are much smaller than cosh α(t_f − t₀) and sinh α(t_f − t₀); therefore, we have

c(t) ≈ 1/(1 + ρ)

Of course, at t₀, c = 0, while at t_f,

c(t_f) = α[cosh α(t_f − t₀) − 1] / [η(1 + ρ)] ≈ α / [(1 + α)(1 + ρ)]
Therefore, the output rises at t₀ to a value 1/(1 + ρ), less than the desired value of unity. It remains at this "steady" value during almost all of the transient (for large α(t_f − t₀)), and just before the end of the transient the output falls to α/(1 + α) times the "steady" value. If we let ρ → 0, then α → ∞, and the performance is good; the output closely follows the unit-step input. However, quite large values of m(t) are required near t₀ if ρ → 0. A sketch of a typical transient for small ρ is shown in Fig. E6-7-1.
Figure E6-7-1  Sketch of optimal servomechanism response for first-order system, large α(t_f − t₀).
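The plateau at 1/(1 + ρ) is easy to confirm by direct integration of the closed-loop equation above; a minimal numerical sketch (ρ, t₀, and t_f are illustrative choices):

```python
import numpy as np

rho, t0, tf = 0.1, 0.0, 10.0
a = np.sqrt(1 + 1 / rho)          # the alpha of the text

def coeff(t):                     # {1 + [rho(1 + a coth a(tf - t))]^-1}
    x = a * (tf - t)
    return 1.0 + 1.0 / (rho * (1 + a / np.tanh(x)))

def forcing(t):                   # right-hand side of the closed-loop equation
    x = a * (tf - t)
    return (np.cosh(x) + a * np.sinh(x) - 1) / \
           (rho * a * (np.sinh(x) + a * np.cosh(x)))

dt, c, t = 1e-4, 0.0, t0
while t < 5.0:                    # stop in mid-transient
    c += dt * (forcing(t) - coeff(t) * c)
    t += dt
print(c, 1 / (1 + rho))           # both approximately 0.909
```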
The reader may well wonder why the performance of the optimal servomechanism is so poor in comparison with that of the optimal regulator. Probably, the answer is that the problem is not well-formulated. The value of m(t) is required to remain near unity to maintain c(t) near unity. Therefore, it probably does not make sense to penalize m(t) in the criterion function. That is why small values of ρ are required to obtain acceptable performance. Conversely, small values of ρ cause large values of m near t₀, and the physical system may saturate. A time-varying ρ, such as ρ₀ exp(t₀ − t), is certainly one possible solution. For t_f → ∞, the optimal control for this problem does not exist, because J(m) → ∞. The problem is better treated if we redefine the output variable c(t) and control signal m(t) as deviations around the desired final value, as in Example 6-5. Then, the stationary controller for t_f → ∞ can be obtained in place of the time-varying controller which results from the present analysis.

Now consider a problem which is well-formulated as a servomechanism problem. We ask that the controlled output follow the curve

r(t) = c₀e^(−βt)

where β > 0 represents the inverse of the desired time constant. Then, from Eq. (E6-7-1), μ(t) may be evaluated in closed form.

Qⱼ = (bⱼ  Abⱼ  ⋯  Aⁿ⁻¹bⱼ)    (6-141)
Now, λ₀ cannot be zero, since otherwise Eqs. (6-137) and (6-133) imply H = 1 at t = t₀, and this contradicts the requirement that H = 0 everywhere on the optimal trajectory. Therefore, Eq. (6-140) requires that Qⱼ be singular. In other words, if we find that Qⱼ is not singular for any j, j = 1, 2, …, r, then we can be assured that there are no finite intervals on which pⱼ(t) = 0 for any j = 1, 2, …, r, and therefore that Eq. (6-134) will completely define the time-optimal control (if the optimal control exists; we touch on the existence question later). A system for which no Qⱼ is singular is called normal; a controllable system which is not normal is called singular. A normal system is thus one which cannot have any pⱼ(t) ≡ 0 over a finite interval of time. Comparison of Eq. (6-141) with Eq. (3-49) shows that a normal system is controllable with respect to each and every component of the control vector m(t). In other words, all but one of the components of m(t) can be held at zero, and the system will still be controllable with respect to the remaining free control component. The reader is cautioned that it is incorrect to conclude that the time-optimal control does not exist for a singular system. All that singularity implies is that the minimum principle does not explicitly define the time-optimal control through Eq. (6-134). In other words, the necessary conditions may not provide enough information to find the time-optimal control for a singular system.

EXAMPLE 6-10
Consider the system of Eq. (36) with
A=(=~ ~).
B
=G
~)
238
Continuous Systems Optimization
Chap. 6
This system is controllable, since Eq. (349) gives
K
A
(B AB)
==(1
0 2 1) 1 1 2 0
which clearly has the required two linearly independent columns. However,
Since Q 1 is singular, the system is not normal. EXAMPLE 611
The following simple system dx I m 1 dt 
dx 2 dt = m2
will serve to demonstrate that the timeoptimal control may exist for a system which is not normal. Here
A== 0,
B =I
so that
(B AB)
== (I
0)
has the required two independent columns, and the system is controllable. However,
and
are both singular, so the system is not normal. From Eq. (6138) we obtain
for all t
>
t 0 • Then, Eq. (6136) gives H == 1
+ "A1om 1 + A2om2 == 0
(£6111)
Chap. 6
TimeOptimal Control of Linear, Stationary Systems
239
Further, Eq. (6134) gives Ato
m1 == {11
Ato
1
A2o
m2
= (
1
A2o
< > < >
0 0 0 0
(E6112)
while any values of m 1 or m 2 are acceptable if A1 o = 0 or A. 2 o = 0, respectively. Now assume A, 10 =1= 0 and A, 20 =1= 0. Then the timeoptimal control must be one of the four actions
m1 =
1:
1
= 1 m1 == 1 m 2 == 1 m2
II:
m 1 = 1 m2 = 1 m 1 = 1 m 2 = 1
III:
IV:
for all t > t 0 (until the origin is reached) since neither m 1 nor m 2 can change sign, according to Eq. (E6112). However, if I X1oI =I= IX2o I, the state of the system cannot pass through the origin on such a trajectory. Therefore, unless IX 10 I == IX 20 I, we must have either A1o = 0 or A20 == 0. It is easy to decide which condition must be true. Suppose Ix 10 I > I X 20 1. Then, x 2(t) will reach zero before x 1(t). To be specific, assume X 10 > X 20 > 0. Then intuitively, we should have initially m 1 = m 2 = 1. The system state will follow X1 X2
=
(t to) = X2o. (t to) X1o 
At t === t 0 + x 20 , x 2 === 0 but x 1 =t= 0. Therefore we want to continue with m 1 === 1, m 2 = 0 for t 0 + X2o < t < t 0 + X 10 at which time x = 0 and m 1 = m 2 == 0 will maintain x = 0. In other words, m 2 must switch to zero at t == t 0 + X 20 • The only choice of Ao which will allow this and still satisfy Eq. (E6lll), while producing the absolute minimum value of H, is
This illustrates that p 2(t) === 0 and the system is singular. Continuing
240
Continuous Systems Optimization
Chap. 6
the argument in this manner shows that the correct choices of initial conditions are _ (sgn X 1o) Ao0
0 ) sgn X2o
Ao
=
Ao
x to) = 2I (sgn IX1o I = IX2o I sgn x 20
(
For the specific case considered above, a timeoptimal response is illustrated in Fig. E6ll1. However, it should be clear that this timeoptimal response is not unique. Any behavior of m 2(t) which satisfies
J
lo+X•o
m 2('r) dr
=
 X 20
lo
will satisfy all the necessary conditions for this case. That is, H will be minimized and equal to zero, and the system will reach the origin at the same minimum time t 0 + x 10 • An alternate timeoptimal trajectory, using initially m 1 == 1, m 2 = a for some value of a satisfying X 20 /X 10 0, and let {3j designate the jth component of the column vector P 1b. To obtain Yi == 0 for some t1 , we require a control such that t,
Yio == {3i J
eAJTm(T) dT
lo
which follows from Eq. (322). The quantity YJo represents the initial value Yi(t 0 ). Because of the constraint Im(t) I < 1, we obtain (6144) which may be rewritten eA;to 
e'A.Jt/
> 
A. y jO ) {3j
(6145)
Clearly, Eq. (6145) cannot be satisfied if
ly ;o. 1 > tReference 4, pp. 127135.
I
f3i le'A.Jto
A·)
(6146)
242
Continuous Systems Optimization
Chap. 6
and we cannot force Yi to the origin. In physical terms Eq. (6146) shows that, if the initial condition is too large, the constraint on m(t) will prevent us from driving the unstable system to the origin, despite the fact that the system is controllable. This shows why stability of A is required to guarantee the existence of a timeoptimal control. (Note that if Xi< 0, the direction of the inequality in Eq. (6145) is reversed.) Uniqueness
We now prove the following uniqueness theorem: If the system of Eq. (36) is normal, and if a timeoptimal control exists, then it is unique. The proof follows by the assumption of two different timeoptimal controls, mi(t) and mi(t), which drive the state from X 0 to 0 at the same minimum time tj. Then,
xi(t) = eAxo
+ Itto eABmi('T) d'T
xi(t) = eAxo
+ Itto eABmi('T) d'T
Using the fact that xi(tj)
= xi(tj) = 0, we obtain
Jto eATBmi(T) d'T = Jto eATBmi(T) dT t•
t•
1
1
which may be multiplied by A~, the initial condition on the adjoint vector for the timeoptimal control mi(t), to obtain the scalar equality
Ito A~TeATBmi(T) dT t•
1
t•
=
Jto A~TeATBmi(T) d'T 1
(6147)
The system is assumed to be normal. Therefore, the optimal controls mi(t) and mi(t) and trajectories xi(t) and xi(t) are each uniquely defined by the optimal initial adjoint vectors A~ and A~, by virtue of Eqs. (6134) and (6138). Furthermore, because each of these optimal controls must absolutely minimize the Hamiltonian, it follows from Eq. (6130) that
AiTBmi(t)
(tf)
i === 1, 2, ... , n
e' (to)
A' >(to)
i === 1, 2, ... , n
d'i>(t 1 ) === 22 (t 1 ; to) e(to)
i === 1, 2, ... , n
0
=== A'i>(to) 
0
(6I76)
Then from Eq. (6175)
(6177)
Let D(t1 ) be the n x n matrix whose ith column is then x 1 vector d'i>(t 1 ), and let E(t 0 ) be the n X n matrix whose ith column is eCi>(t 0 ). Then, Eq. (6I77) may be written D(t 1) === 22(t1; to) E(t o) which may be solved to yield
2 2( t t ; to) === D( t 1 ) E 1(to)
(6178)
This assumes E 1(t 0 ) exists, which only requires that we choose the (n + 1) vectors e(to) === ( 0.275 '
(I)
_
A (to) 
(0.446) 0.338 '
Ac 2 >( ) to
==
(0.395) 0.278
and the corresponding values of the final adjoint vectors are (0.036) A (t,  0.005 ' (0)
)
(I)( ) === ( 0.138) 0.068 ' A t,

Ac2 >(
t,
)
=== (
0. 061) 0.040
Then, D(t 1 } =
E
(_~:~~~
) _ (0.081 (to  0.063
0.025) 0.045 0.030) 0.003
Equation (6180) then yields 0.365) (0.53 ( A(to) === 0.275  0.95
0.38) (0.036) 0.46 0.005
= (0.348) 0.239
which is very close to the value A(to)
==
0.346) ( 0.236
obtained by trial in Fig. E611. Control signal iteration
Equation (674) is also true for perturbations about an arbitrary (not necessarily optimal) trajectory, when written without the asterisk notation OS(x(t!}) =
J::(~!r OmdT
(6181)
as is easily verified by repeating the derivation. This gives the variation in the performance criterion caused by a variation in the control signal. If om is always chosen to make oS negative, or at least nonpositive, then we
266
Continuous Systems Optimization
Chap. 6
should approach a locally minimum value of Sand, hence, a trajectory satisfying the minimum principle. This is the basis for control signal iteration. The typical optimal control problem requires simultaneous solution of Eqs. (654), (662), (6166), and (6167), with initial conditions on x and final conditions on A. To use control signal iteration, we simply assume a control signal m(t), so that Eq. (662) can be integrated backv.rard from the known condition on A at t 1 , to t 0 , to obtain A 0 >(t). Finally, knowledge of x 0 >(t) and A< u(t) enables computation of
fli.l>(t)
A0 >T(t) f[x
W(T)
>
om dr < 0
(6183)
to
which shows that m< 2 > will produce a lower (or equal) value of Sand is therefore closer to a solution satisfying the minimum principle. Of course, Eq. (6181) is correct only for small om. Large changes in m may violate Eq. (6183) and must therefore be avoided. The new m< 2 >(t) is now used as the guess, and the entire procedure is repeated to yield a o< 2 >m(t), etc. The procedure is discontinued when S cannot be reduced further. We next consider how to specify the weighting matrix W(t). We give a method based on steepest descent. The object of this method is to change min such a manner as to give the largest value of oS(x(t1 )) for a given size of change, om. To determine the size of change om, we define a distance between two signals m(t) as
os
(os) 2 ==
f
tr
(om)TG(r) om dT
(6184)
lo
Here G(t) is a symmetric, positivedefinite matrix used to define the distance, and we use only the positive root so that >: 0. This is simply a generalization of the norm defined in Eq. (A3) to a timevarying vector. The matrix G(t) is specified to describe distance adequately. For example, we may be able to estimate m(t) near t 1 very accurately, as in a free endpoint problem where it is known in advance that m(t 1 ) === 0. (See, for example, Eq. (6107) with A(t1 ) === 0.) Then, we would choose the components of G(t) to be large near t 1 , so that an attempt to vary m near t 1 will result in a large distance penalty. If we consider infinitesimal changes in m:
os
Chap. 6
267
Numerical Procedures for Optimization by the Minimum Principle
I
(dm) ds dT =
(dm)r ds G(r)
t, t,
(6185)
1
The corresponding infinitesimal change in S follows from Eq. (6181) as a ratio to ds: dS(x(t f)) ds
==
It' (?Ji)T (dm) om
to
dT
(6186)
ds
Our objective is steepest descent. Therefore, for a given arbitrarily small os, we wish to minimize oS(x(t 1 )). To do this we choose dmfds to minimize dSfds in Eq. (6186) subject to the constraint in Eq. (6185). The constraint really defines the given distance ds and restricts the values of to those which give a new control that lies the given distance os from the old control. The minimization of Eq. (6186) subject to Eq. (6185) is solved by means of the minimum principle. As in Eq. (690), the Lagrange multiplier p is introduced, and we seek to minimize
om
I
t,
Sv =
, 1
[(oH)r dm Om ds
dm] + p (dm)r ds G(T) ds dT
We define the state variable y(t)
==
It
to
+ P(dm)r G(T) dm] dT ds ds
[(oH)T dm om ds
Then the state equation is dy dt
== (oH)r dm om
ds
+
(dm)r G(t) dm p ds ds
and we seek to minimize y(t1 ). The Hamiltonian HP for this problem is
H == v [(oH)T dm p om ds
+ p (dm)T ds
G(t)
dm] ds
and the adjoint variable v(t) satisfies dv _ _ oHp _ 0 dtoy 
Further, since v(t 1 ) == 1, we must have v(t) == 1. The optimum dmfds is obtained by minimization of HP with respect to dmfds. Thus we take
oHP
_ aH
o(dm/ds) om
+
dm _ 2 pG(t) ds  O
which yields dm == _ _!_ GI(t) oH ds 2p om
(6187)
If we take the second derivative, as was done in Eq. (6108), we obtain 2pG(t). Since we shall presently show that we take p > 0, this guarantees
268
Continuous Systems Optimization
Chap. 6
that Eq. (6187) provides a minimum. (In fact, since dmfds is presumably unconstrained, this must be the absolute minimum and, therefore, gives the absolute minimum of dSfds.) Substitution of Eq. (6187) into Eq. (6185) and solution of the result for p yields (6188) where we take the positive root. (Taking the negative root will also satisfy Eq. (6185). However, Eq. (6187) will then yield a maximum of H. In fact, this merely gives the direction of steepest ascent as the opposite of the direction of steepest descent.) Combination of Eqs. (6182), (6187), and (6188) for finite om shows that the weighting matrix W(t) will give steepest descent if it is chosen as
W(t)
=
[f' (OH)~~~:Os OH 
to
om
G ('r)dT
]1/2
(6189)
om
os
is the finite distance that m is to be moved. Equation (6189) simply where says that W{t) should be proportional to the inverse of the normdetermining matrix. The proportionality constant is computed only to determine the distance represented by the change om. The direction of steepest descent is dependent only on G•(t)(oHfom). If Eq. (6182) results in a new m(t) for which some or all of the components violate constraints, each such component should be set equal to its constraint value and be allowed to remain on the constraint through succeeding iterations, until either the indicated om moves the component back to the allowable region, or else until the procedure is terminated because S cannot be reduced further. This is in accord with the principles set forth in Eqs. (677) and (678). EXAMPLE 617
Consider minimization of the performance criterion J(m)
Jt' (xi +
==
pm 2) dt
to
for the simple system dx 1 dt == m
(E6171)
with x 1(t 0 ) == X 10 • This is the problem considered in Example 64 with a== 0. The additional state equation, as in Example 64, is
"Jr2 =
xi
+
pm2
Chap. 6
Numerical Procedures for Optimization by the Minimum Principle
Then,
== "A1m
H
269
+ "A2(xi + pm 2)
and the adjoint equations are
d;;, = 2x."A2 0
dA,2
dt 
The final condition is
A(tt)=(~)
(E6172)
which gives "A 2(t) == 1. Then,
an om == "A + 2pm
(E6173)
1
and (E6174) Although this problem can be solved exactly as in Example 64, we solve it here by control signal iteration. We choose as the initial guess the simple control m( t)
==
X 10
to
t ,  to
(t)fxto
m*(t)/x 10
0 0.25 0.5 1.0 1.5 2.0
1 0.765 0.562 0.250 0.062 0
0.965 0.742 0.566 0.313 0.138 0
Further iterations will steadily reduce the differences, provided the step sizes are kept small. After each step to find a new m(t), it is a good policy to use this m to compute the new S. This requires computation of x, which is needed for the next iteration, in any case. If the newS is not less than the previous value, them should be discarded, and a smaller step size used with the previously computed to find a new m. When the step size required to obtain a reduction inS becomes too small, the procedure is terminated, and the last m(t) is taken as optimal.
om
Dynamic Programming The preceding material of this chapter on continuous systems optimization has been based on Pontryagin's methods. We now present a different approach called dynamic programming. The originator and foremost proponent of this technique is Bellman. 12• 13 As before, we give an introductory, heuristic development of the technique. Principle of optimality
The essence of dynamic programming lies in the principle of optimality: 12 hAn optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal
Chap. 6
271
Dynamic Programming
policy with regard to the state resulting from the first decision." (Here, policy and decision should be regarded as referring to the manipulated signal m(1); policy to the entire choice of m(1) on the interval to < t < t1 , and decision to each instantaneous value m(1).) To explain this principle, we consider choosing m(1) to minimize the performance criterion fo) == J
tl
J 0(m; x(t0 ),
F[x(1), m(1), 1] d1
(6190)
to
for a system described by
dx
d1
== f(x, m, t)
(6191)
with a specified initial state x(t0). Here F is a scalar function, such as that in Eq. (697). We denote by m[t0 , t1 ] the entire vector function m(1) during the interval t0 < t < 11 . Note that the initial state and time are included as arguments of ] 0 • As before, an asterisk will indicate the optimal value; m*[t0 , 11 ] is the optimal control and x*(t) is the optimal trajectory. Let 11 be a time such that t 0 < t 1 < t 1 . Then x*(t 1) is the state reached at time 11 by application of the control m*[t0 , t 1) because of the uniqueness and continuity of the solution of Eq. (6191) (see Appendix A). Here, m*[t0 , t 1) represents the entire vector function m*(t) for to< t < t 1• The principle of optimality states that the control which minimizes == J F[x(t), m(1), 1] d1 tr
lo(m; x*(t 1), t 1)
(6192)
t.
is m*[th t1 ]. The proof of this follows easily by contradiction. Suppose the optimal control is not m*[1 1, t1 ] but, rather, is some other control m[th t1 ]. Then we reconsider optimization of Eq. (6190) using the control m+[t0 , 11 ] given by m+( 1)
== {m*(1) m(1)
2
since the control is time optimal. To use Ztransform methods based on output, we must redefine the output variable in terms of its deviation about the initial steady
Chap. 7
TimeOptimal Control of linear SampledData Systems
311
state. Thus, in terms of an output variable c(t) defined about the initial steady state, the response is
c(O)
===
0
c(T) == 1 
c( iT)
1
a
+a
=== 1
1
+a
== I , i > 2
The Ztransform of this is (z + a) + a)z(z 1)
C(z)  (I
Since the input is a unitstep change, R(z) ===
z z 1
so that the desired transmission ratio is T(z) _ C(z) _ (z + a)  R(z)  (1 + a)z 2
The system transfer function may be calculated from the state equations
These are equivalent to d 2c
dt
2
+ 3dc + 2 c === m dt
where c === x 1 the output variable. Hence, C(s) Gp(s) === M(s) == (s
+
1 IXs
+
2)
We obtain the impulse response by inverting this to g(t)
==
et 
ezt
Applying Eq. (240), we obtain 2(kl)
g'(tk to)==
(1  a)  a
ak 1
2
(1  a 2 )
as the deltafunction response. The Ztransform of this, Z{g'(t. t )} k
0
== (1  a)2(z
+ a)
2(z  a )(z  a) 2
312
Optimization of Discrete Control Systems
Chap. 7
is used as Gp(z) in Eq. (233) with the T(z) already computed, to obtain the desired digital compensator 2(z  a 2 Xz  a)
D(z) === (I 
a)2(l
==
+ a)(z + 1 ~ J82
+ 1)
10
ltooo
lso6
142 153 90 93 107 >79
156 170 145 97 115 >80
= first time at which c(t) = c( oo) or e(/7 ) = 0. = time after which c(t) remains within ±0.1 [c( oo)  c(O)] of c( oo ). = time after which c(t) remains within ±0.05 [(c( oo)  c(O)] or c( oo ).
teristics for transients from the process I
Gp(s) =(lOs+ I)1o Definitions of the terms overshoot, rise time In and response time 15q6, 11096 , may be found in Coughanowr and Koppel. 1 Because the system has such a long effective delay, switching must occur before c(t) leaves zero. This makes it difficult to guess switching times. To obtain initial guesses, the step response of this system, shown in Fig. 105a, was modeled graphically 2 to obtain _ exp (55s) Gm(s) (40s + 1)(7s +I)'
~~
== 20.1'
~~
== 22.9
The transient resulting from these switching times, shown in Fig. 105b, was fitted digitally to obtain the model _ exp (51s) Gm(s) (27s 1)(24s
+
+ I)'
~~
== 25.4,
~~
== 3I.2
The transient for these switching times, shown in Fig. I05c, is considerably improved over the first two curves, with improved rise and response times (Table 101 ). Figures 105d and 105e are given for further illustration. They show reductions in overshoot at the expense of longer rise times. The switching times in Figs. I 05d and 105e were selected based upon judgments
[Figure 10-5. Step response (a) and bang-bang transients (b)–(e) for the tenth-order process; strip-chart recordings not recoverable from the scan.]
Chap. 10
351
Analysis of Programmed Time-Optimal Control
from Figs. 10-5a, 10-5b, and 10-5c, to reduce overshoot. Figure 10-5c represents a reasonable compromise between overshoot and rise time that can be achieved with second-order switching. Notice that the switching times for Fig. 10-5e are 35% less than those for Fig. 10-5d, yet the responses are very similar. The true time-optimal response for this system in theory requires nine switching reversals to bring the output precisely to rest. Although we cannot calculate this response, we can specify some bounds on its characteristics which will serve as useful comparisons. The true optimal response cannot reach e(t) = 0 more rapidly than the transient resulting from application of full forcing m = K until (or beyond) e(t) = 0. The rise time for such a curve (tr = 82) is listed in Table 10-1 as a lower bound for the optimal response. If all higher derivatives were zero at time tr (which they are not), the response would remain at rest with no overshoot. In such a case, the response time, t10% or t5%, would be less than tr. These times are listed in Table 10-1 as lower bounds for the optimal response. The suboptimal response in Fig. 10-5c obtained from the modeling procedure is a good compromise, with rise and 10% response times which do not differ greatly from the true optimum, by virtue of the lower bounds presented.
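The switching-time selection described above can be sketched numerically for the fitted second-order model (dead time only delays the whole schedule, so it is omitted here). The forcing bound M and the unit setpoint below are illustrative assumptions, not values from the text; the idea is to choose the reversal so that the output arrives at the setpoint with zero slope, after which the steady input holds it at rest.

```python
# Bang-bang switching for the fitted model 1/((27s+1)(24s+1)).
# M and TARGET are illustrative assumptions, not values from the text.

TAU1, TAU2 = 27.0, 24.0
M, TARGET = 2.0, 1.0
DT = 0.02

def step(x1, x2, m):
    """One RK4 step of tau1*x1' = m - x1, tau2*x2' = x1 - x2 (output y = x2)."""
    def f(a, b):
        return (m - a) / TAU1, (a - b) / TAU2
    k1a, k1b = f(x1, x2)
    k2a, k2b = f(x1 + 0.5 * DT * k1a, x2 + 0.5 * DT * k1b)
    k3a, k3b = f(x1 + 0.5 * DT * k2a, x2 + 0.5 * DT * k2b)
    k4a, k4b = f(x1 + DT * k3a, x2 + DT * k3b)
    return (x1 + DT * (k1a + 2 * k2a + 2 * k3a + k4a) / 6,
            x2 + DT * (k1b + 2 * k2b + 2 * k3b + k4b) / 6)

def run_to_reversal(t1):
    """Apply +M until t1, then -M until x1 falls back to TARGET.

    At that instant dy/dt = 0 would hold if y = TARGET as well, so the
    output value there measures the miss.  Returns (x1, y, t2)."""
    x1 = x2 = t = 0.0
    while t < t1:
        x1, x2 = step(x1, x2, +M)
        t += DT
    while x1 > TARGET and t < 1000.0:
        x1, x2 = step(x1, x2, -M)
        t += DT
    return x1, x2, t

# Bisection on t1: a later reversal stores more energy and raises the
# output level reached when the deceleration ends.
lo, hi = 0.7 * TAU1, 3.0 * TAU1
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if run_to_reversal(mid)[1] < TARGET:
        lo = mid
    else:
        hi = mid
t1 = 0.5 * (lo + hi)
x1_sw, y_sw, t2 = run_to_reversal(t1)
print(f"t1 = {t1:.1f}, t2 = {t2:.1f}, output at rest point = {y_sw:.3f}")
```

Changing the assumed bound M moves both switching times, which is why the fitted model and the actual forcing limits both matter in the design.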
Nonlinear Exothermic Reactor. To further study the utility and limitations of the design procedure, we next consider a highly nonlinear, exothermic reactor simulation. Orent⁴ simulated and controlled a modified version of the Aris-Amundson,⁵ Grethlein-Lapidus⁶ continuous stirred-tank reactor, for the irreversible exothermic reaction A → B with first-order kinetics. The modification involved the addition of cooling-coil dynamics. The reaction rate constant is
k = k0 exp(−E/RT)

where

k = Arrhenius reaction rate constant; sec⁻¹
k0 = frequency factor; 7.86 × 10¹² sec⁻¹
E = activation energy; 28,000 cal/mol
R = gas constant; 1.987 cal/mol·°K
T = absolute temperature of reactor; °K

The mass balance on the reactor contents, assuming uniform mixing, is
V dE/dt = F E0 − F E − V k E

where

V = material volume; 1000 cc
E = concentration of A in exit; mol/cc
E0 = concentration of A in inlet; 6.5 × 10⁻³ mol/cc
F = volumetric flow; 10 cc/sec
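With these numbers, the Arrhenius expression and the steady-state form of the mass balance, F(E0 − E) = VkE, can be checked directly. The 460°K level used below is the high-temperature steady state quoted later in this section; the variable names are ours.

```python
import math

k0, E_act, R = 7.86e12, 28000.0, 1.987    # sec^-1, cal/mol, cal/(mol K)
V, F, E0 = 1000.0, 10.0, 6.5e-3           # cc, cc/sec, mol/cc

def k_arrhenius(T):
    """Reaction rate constant k = k0 * exp(-E/(R*T)), sec^-1."""
    return k0 * math.exp(-E_act / (R * T))

# Steady-state mass balance F(E0 - E) = V k E  =>  E = F E0 / (F + V k)
T = 460.0
k = k_arrhenius(T)
E_ss = F * E0 / (F + V * k)
print(f"k({T:.0f} K) = {k:.3f} 1/sec, exit concentration = {E_ss:.3e} mol/cc")
```

This reproduces the exit concentration Es = 0.162 × 10⁻³ mol/cc quoted for the 460°K steady state below.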
If no heat transfer occurs with the surroundings, and if the physical properties of inlet and outlet streams are identical, the energy balance is
Vρu dT/dt = Fρu(T0 − T) + ΔH V k E − UA(Tc − Tc0) / ln [(T − Tc0)/(T − Tc)]

where

ρ = density; 1 gm/cc
u = heat capacity; 1 cal/gm·°K
T = temperature of reactor and exit stream; °K
T0 = temperature of inlet stream; 350°K
ΔH = exothermic heat of reaction; 27,000 cal/mol
UA = overall heat-transfer coefficient times cooling-coil area; 7 cal/sec·°K
Tc0 = inlet coolant temperature; 300°K
Tc = exit coolant temperature; °K

An approximate lumped energy balance for the cooling coil is
(Vc ρc uc / 2) dTc/dt = UA(Tc − Tc0) / ln [(T − Tc0)/(T − Tc)] − Fc ρc uc (Tc − Tc0)

with

Vc = coil volume; 100 cc
ρc = coolant density; 1 gm/cc
uc = coolant heat capacity; 1 cal/gm·°K
Fc = coolant flow; 0 < Fc < 20 cc/sec

This cooling-coil equation is added to impart higher-order dynamic effects. The manipulated variable is the bounded coolant flow rate. The output temperature is also delayed to introduce an additional six-second dead time, representative of the higher-order lags which would occur in the physical system. Only control about the stable, high-temperature steady state

θs = 460°K,   θcs = 419°K,   Es = 0.162 × 10⁻³ mol/cc,   Fcs = 5.13 cc/sec

is considered here. Orent⁴ reported responses to ±1 cc/sec step changes in Fc. He modeled these to the transfer function

Δθ(s)/ΔFc(s) = −6.1 exp(−11s) / (70s + 1)   °K per cc/sec
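As a consistency check, the right-hand sides of the three balances, as reconstructed above, can be evaluated at the quoted steady state; they should be nearly zero there. This is our own sketch of the model equations, not Orent's simulation.

```python
import math

# Parameter values from the text (cgs units, temperatures in deg K);
# variable names are ours.
k0, E_act, R = 7.86e12, 28000.0, 1.987
V, F, E0, T0 = 1000.0, 10.0, 6.5e-3, 350.0
rho, u, dH, UA = 1.0, 1.0, 27000.0, 7.0
Tc0, Vc, rhoc, uc = 300.0, 100.0, 1.0, 1.0

def derivs(E, T, Tc, Fc):
    """Right-hand sides dE/dt, dT/dt, dTc/dt of the three balances."""
    k = k0 * math.exp(-E_act / (R * T))
    # heat removed through the coil, log-mean driving force
    Q = UA * (Tc - Tc0) / math.log((T - Tc0) / (T - Tc))
    dE = (F / V) * (E0 - E) - k * E
    dT = (F / V) * (T0 - T) + (dH * k * E - Q / V) / (rho * u)
    dTc = 2.0 / (Vc * rhoc * uc) * (Q - Fc * rhoc * uc * (Tc - Tc0))
    return dE, dT, dTc

# Evaluate at the quoted high-temperature steady state
dE, dT, dTc = derivs(0.162e-3, 460.0, 419.0, 5.13)
print(f"dE/dt = {dE:.2e}, dT/dt = {dT:.4f}, dTc/dt = {dTc:.4f}")
```

The residuals are small compared to the scale of each balance, which supports the reconstruction of the energy terms (heat generation +ΔHVkE against log-mean coil removal).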
The analysis in Example 6-14 suggests that the true time-optimal control of this process is bang-bang. The response to a step in Fc from 5.13 to 4.13 is shown in Fig. 10-6a. A bang-bang transient from guessed switching times is shown in Fig. 10-6b,
TABLE 10-2. MODELING NONLINEAR REACTOR

Figure   Model                                                    Applied t1', t2'   Revised t1', t2'
10-6a    step decrease in Fc:
         5.1(0.1s + 1) exp(−5.7s) / [(25.7s + 1)(6.9s + 1)]            —                  —
10-6b    5.7(0.3s + 1) exp(−7.0s) / [(33.0s + 1)(4.6s + 1)]        10.6, 12.6         11.1, 12.3
10-6c    slight overshoot                                          10.9, 12.1         11.3, 12.3
10-6d    —                                                         12.1, 12.7             —
10-7a    step increase in Fc:
         4.4(−0.5s + 1) exp(−4.0s) / [(126.4s + 1)(6.2s + 1)]          —                  —
10-7b    5.0(0.7s + 1) exp(−5.1s) / [(141.2s + 1)(3.7s + 1)]       14.7, 15.8         12.6, 20.1
10-7c    —                                                         12.6, 20.2         12.0, 16.8
10-7d    probably nearly optimum                                   12.5, 17.9             —
² For increased modeling accuracy, a numerator time constant was allowed. However, switching times were based entirely on the denominator (see Chapter 6, p. 251), since these times will still, in theory, drive the output to rest at zero.
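The direction dependence discussed below can be seen by simulating the two fitted models of Table 10-2 side by side: the decrease-direction model has a major time constant of 25.7 sec against 126.4 sec for the increase direction. The state-space realization is our own sketch; dead time is omitted since it only shifts the curves.

```python
# Step responses of two Table 10-2 models (dead time omitted).

def t63(K, c, tau1, tau2, dt=0.01, t_max=600.0):
    """Time for K(c*s + 1)/((tau1*s + 1)(tau2*s + 1)) to reach 63% of
    its final value after a unit step (Euler integration)."""
    x1 = x2 = t = 0.0
    while t < t_max:
        y = K * (x2 + c * (x1 - x2) / tau2)   # output including the numerator zero
        if y >= 0.63 * K:
            return t
        x1 += dt * (1.0 - x1) / tau1
        x2 += dt * (x1 - x2) / tau2
        t += dt
    return t_max

t_dec = t63(5.1, 0.1, 25.7, 6.9)      # model for a step decrease in Fc
t_inc = t63(4.4, -0.5, 126.4, 6.2)    # model for a step increase in Fc
print(f"63% response time: decrease {t_dec:.0f} sec, increase {t_inc:.0f} sec")
```

The several-fold difference in response time is exactly the direction dependence that limits a design restricted to fixed model parameters.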
and the modeling results are listed in Table 10-2. The predicted switching times were duplicated closely for the improved response in Fig. 10-6c, which was fitted by a somewhat different transfer function but yielded essentially the same revised switching times. This demonstrates that modeling nearly optimum transients gives repetition of switching times; in other words, modeling convergence is maintained. The undershoot of Fig. 10-6c prompted the slight increase in t1' and t2' for Fig. 10-6d. In view of this transient and the accuracy of the time measurements, the model times, 11.3 and 12.3, give an excellent response. Notice that the model major time constant is less than half that obtained by Orent⁴ for the simple step response. The response to a returning step, Fc from 4.13 to 5.13, is shown in Fig. 10-7a. A bang-bang transient from guessed switching times is shown in Fig. 10-7b, and the modeling results are in Table 10-2. The predicted switching times gave the improved response in Fig. 10-7c which, in turn, was modeled. The revised model gave predicted switching times 12.0 and 16.8. In view of Fig. 10-7d, obtained by trial, these times are further improvements. Notice that the model major time constant now is twice that for the simple step response. This nonlinear reactor can have three possible steady states (one is unstable) for a single Fc. Bang-bang forcing between full cooling, Fc = 20, and adiabatic operation, Fc = 0, is very severe. Increasing temperature responses requiring Fc = 0 initially are faster than the returning responses requiring Fc = 20 initially. This illustrates a limitation of the method if design is restricted to fixed model parameters and the system is highly nonlinear, since the parameters depend upon direction. Nevertheless, the bang-bang response
[Figure 10-8. Experimental apparatus (piping diagram not recoverable from the scan): supply tank, dead-time section with flush line and drain, bayonet heater, load heater and valve, rotameter, manifold, thermocouples TC1, TC2, and TC3, power controller and power meters, 110- and 220-volt supplies, and analog computer.]
(11-43)

The response is desired in terms of c(tk). Therefore, we use Eqs. (11-4) and (11-40) to obtain

(11-44)
Chap. 11
373
Design of Digital Process Controllers
By use of Eqs. (11-5), (11-41), and (11-43) and the definition of y, and with the assumption that the system is initially at rest (dc/dt = 0) at c0, Eq. (11-44) yields for the output at sampling instants

c(tk) = [(α − β)(bξ + θ) / ((αθ + βξ)(1 − b))] λ^(k−1) c0        (11-45)

One can show that |λ| < 1; hence, the optimal response is always bounded at sampling instants.
Implementation of the algorithm
The optimal control in terms of y follows from Eqs. (11-4) and (11-38) as

m(tk) = −(1/Kp)[k1 y1(tk) + k2 τ y2(tk)]        (11-46)

where

k1 = −1 + (θ + ξ) / (αθ + βξ)        (11-47)

k2 = [(α + 1)θ + b(β + 1)ξ] / (αθ + βξ)        (11-48)
The response in terms of y is obtained from Eqs. (11-9) and (11-4) as

y(tk+1) = P y(tk) + p m(tk)

which expands to

y1(tk+1) = p11 y1(tk) + p12 y2(tk) − Kp(p11 − 1) m(tk)        (11-49)

y2(tk+1) = p21 y1(tk) + p22 y2(tk) − Kp p21 m(tk)        (11-50)

where

p11 = 1 + (α − bβ)/(1 − b)        (11-51)

p12 = bτ(α − β)/(1 − b) = bτ² p21        (11-52)

p22 = 1 + (β − bα)/(1 − b)        (11-53)
Since it is not expected that y2(tk), the derivative of the process output, will be measured, we use Eqs. (11-49) and (11-50) to compute it from the output c. This computation utilizes the fact that the system is observable, so that the state can be computed from the output. From Eq. (11-49),

c(tk) = p11 y1(tk−1) + p12 y2(tk−1) − Kp(p11 − 1) m(tk−1)

and c(tk−1) = y1(tk−1). Combining these gives

y2(tk−1) = [c(tk) − p11 c(tk−1) + Kp(p11 − 1) m(tk−1)] / p12

Then, using this with Eq. (11-50), we obtain

y2(tk) = [p22 c(tk) − D c(tk−1) + Kp(D − p22) m(tk−1)] / p12        (11-54)
where D = p11 p22 − p12 p21. In essence, Eq. (11-54) uses the process dynamics, with values of the present and previous output and the previous input, to compute the present derivative. Combining Eqs. (11-46) and (11-54) gives the basic implementable algorithm

m(tk) = −(1/Kp)(k1 + τk2 p22/p12) c(tk) + [τk2 D/(Kp p12)] c(tk−1) − [τk2(D − p22)/p12] m(tk−1)        (11-55)

Using Eq. (11-55), we can compute the optimal m(tk) from past and present output and past input. Equation (11-55) is written in terms of variables m and c defined as deviations about the desired steady-state condition. Equation (11-1) is, of course, written in terms of deviation variables and results from a differential equation originally written in terms of absolute values, ma and ca, of input and output signals

(11-56)

where m0 is a nonhomogeneous term which corresponds to the value of m yielding a zero output at steady state. The steady-state relation between absolute values of input and output is
cs = Kp(ms − m0)        (11-57)

and the deviation variables in Eq. (11-1) are defined as

c(t) = ca(t) − cs
m(t) = ma(t) − ms = ma(t) − m0 − cs/Kp        (11-58)

Use of Eq. (11-58) in Eq. (11-55) converts the control algorithm to absolute values of input and output

ma'(tk) = −(1/Kp)(k1 + τk2 p22/p12) ca(tk) + [τk2 D/(Kp p12)] ca(tk−1) − [τk2(D − p22)/p12] ma'(tk−1) + (1 + k1) cs/Kp        (11-59)

where

ma'(t) = ma(t) − m0        (11-60)
The resulting reset algorithm, Eq. (11-64), is valid for k > 2, i.e., after two sampling intervals beyond the introduction of a setpoint change. The summation is eliminated from Eq. (11-64) if we write

mR(tk) − mR(tk−1) = ma'(tk) − ma'(tk−1) + [T/(Kp TR)][e(tk) − e*(tk)]

which, upon use of Eq. (11-59), gives the working algorithm:

v(tk) = A1 e(tk) − A2 e(tk−1) + A3 e(tk−2) − B(D − p22) v(tk−1) − [T λ^(k−1)/(Kp TR)] e(t2)        (11-65)

where

A3 = A2 − A1 + T/(Kp TR)
B = τ k2 / p12
v(tk) = mR(tk) − mR(tk−1)

Equation (11-65) is used for k > 2; m(t0) and m(t1) are computed from Eq. (11-59). Whenever a setpoint change (new value of cs) is requested, k is reset to zero. This means the last term in Eq. (11-65) will be significant only during the first few sampling intervals following a setpoint change. Equation (11-55) shows that, if c has been constant at c̄ and m at c̄/Kp, then m(t0) = k1 c̄/Kp. Therefore, (1 + k1) is the closed-loop gain of the controller.

Inclusion of delay time

If the plant contains delay time as in Eq. (11-1), we showed in Chapter 8 that a physically plausible redefinition of the criterion function results in an optimal control which is identical to that for the same plant without delay time. However, the optimal response is now delayed by aτ compared to the optimal response of the undelayed system. Therefore, to have a feedback realization, it is necessary to modify Eq. (11-46) to

m(tk) = −(1/Kp)[k1 y1(tk + aτ) + k2 τ y2(tk + aτ)]        (11-66)

The prediction of the future values, y1(tk + aτ) and y2(tk + aτ), is accomplished through use of the process model. Current values y1(tk) and y2(tk), together with the known behavior of m(t) over the interval tk − aτ < t < tk, are sufficient to estimate y1(tk + aτ) and y2(tk + aτ) from the response of Eq. (11-1). Let

aτ = (j + v)T        (11-67)

where j is an integer, j = 0, 1, 2, ..., and v is a fraction, 0 ≤ v < 1. The uncoupled state equations analogous to Eq. (11-6) are

dx/dt = Ax + d m(t − jT − vT)        (11-68)
Using the same procedure which leads to Eq. (11-9), it follows from Eq. (11-68) that

x(tk+1+v) = G x(tk+v) + h m(tk−j)        (11-69)

which leads by induction to

x(tk+v+j) = G^j x(tk+v) + Σ (i = 1 to j) G^(i−1) h m(tk−i)        (11-70)

Similarly,

(11-71)

where

h_v = [ (1 + α)^v − 1 ]
      [ (1 + β)^v − 1 ]        (11-72)
Combining Eqs. (11-66), (11-70), and (11-71) and converting from x to y, we obtain the optimal feedback control for the delayed case. However, as before, we wish to compute y2(tk) rather than measure it. Therefore, we use the same procedures as above to derive
(11-73)

Converting to y and proceeding exactly as in the derivation of Eq. (11-54), we obtain an expression for y2(tk) in terms of c(tk), c(tk−1), ma'(tk−j−1), and ma'(tk−j−2). Using this for y2(tk) results in a physically realizable feedback control

ma'(tk) = −(1/Kp)(k1 p11^(j+v) + k2 τ p21^(j+v)) ca(tk)
  − [(k1 p12^(j+v) + k2 τ p22^(j+v)) / (Kp p12)] {p22 ca(tk) − D ca(tk−1) + Kp(D^(1−v) − p22) ma'(tk−j−1) + Kp(D − D^(1−v)) ma'(tk−j−2)}
  − Σ (i = 1 to j) [k1(p11^(i+v−1) − p11^(i+v)) + k2 τ(p21^(i+v−1) − p21^(i+v))] ma'(tk−i)
  − [k1(1 − p11^v) − k2 τ p21^v] ma'(tk−j−1)
  + (1 + k1) cs / Kp        (11-74)
where

p11^i = [(α + 1)^i − b(β + 1)^i] / (1 − b)

p22^i = [(β + 1)^i − b(α + 1)^i] / (1 − b)

p12^i = τb[(α + 1)^i − (β + 1)^i] / (1 − b) = bτ² p21^i

D_i = p11^i p22^i − p21^i p12^i        (11-75)
Note that p11^i is not p11 raised to the ith power; that the term in braces in Eq. (11-74) results from the estimate of y2(tk); that the variables m and c have been rewritten in terms of absolute values using the substitutions of Eq. (11-58), so that Eq. (11-74) is analogous to Eq. (11-59); and that if j = 0 it is understood that the summation over i is identically zero. Since p11^0 = p22^0 = 1, p12^0 = p21^0 = 0, and D_1 = D, Eq. (11-74) reduces to Eq. (11-59) for j = v = 0. As was true for Eq. (11-59), Eq. (11-74) contains no reset action. To incorporate this we first derive the result analogous to Eq. (11-61) which, for the delayed case, is

(11-76)

This is derived as follows: The exponential decay result of Eq. (11-44) may also be shown to apply between sampling instants, so that in the undelayed case, for k > 1, c(tk+v) = λ^(k−1) c(t1+v). If this response is delayed, the first sampling instant at which the value of c can serve as a base for the exponential decay is t2. Equation (11-76) merely delays this by jT to account for the delay. Then, defining e(tk) as in Eq. (11-62) and v(tk) as in Eq. (11-65), and proceeding as in the derivation of Eq. (11-65), we obtain the algorithm with reset action
v(tk) = A1^(j+v) e(tk) − A2^(j+v) e(tk−1) + A3^(j+v) e(tk−2)
  − Σ (i = 1 to j) [k1(p11^(i+v−1) − p11^(i+v)) + k2 τ(p21^(i+v−1) − p21^(i+v))] v(tk−i)
  − B^(j+v) [(D − D^(1−v)) v(tk−j−2) + (D^(1−v) − p22) v(tk−j−1)]
  − B^v v(tk−j−1) − [T λ^(k−j−2) / (Kp TR)] e(t(j+2))        (11-77)
where

A1^(j+v) = (1/Kp)[k1(p11^(j+v) + (p22/p12) p12^(j+v)) + k2 τ(p21^(j+v) + (p22/p12) p22^(j+v)) + T/TR]

A2^(j+v) = (1/Kp){k1[p11^(j+v) + (p12^(j+v)/p12)(p22 + D)] + k2 τ[p21^(j+v) + (p22^(j+v)/p12)(p22 + D)]}

A3^(j+v) = A2^(j+v) − A1^(j+v) + T/(Kp TR)

B^v = k1(1 − p11^v) − k2 τ p21^v

B^(j+v) = [k1 p12^(j+v) + k2 τ p22^(j+v)] / p12

At each setpoint change, the index k is reset to zero. The correct control is computed from Eq. (11-74) for t0, t1, ..., tj+2, and from Eq. (11-77) thereafter, until the next setpoint change.
Design procedure
We summarize here the design procedure for the case with delay in the model.

1. Obtain from dynamic testing, or otherwise, the parameters Kp, τ, a, and b which describe the process dynamics.
2. Select a sampling period T.
3. Calculate α and β from Eqs. (11-12) and (11-13).
4. Choose a value of p, 0 ≤ p < 1. In general, p corresponds to the closed-loop gain; a high value of p corresponds to high gain, and vice versa. This will be evident from the responses to be discussed later.
5. Calculate the constants of Eqs. (11-18) and (11-19).
6. Calculate r from Eq. (11-34).
7. Calculate θ and ξ from Eqs. (11-36) and (11-37).
8. Calculate k1 from Eq. (11-47). The quantity (1 + k1) is effectively the dimensionless closed-loop gain. It is the initial change in control for a unit-step change in setpoint. If it is too high, choose a lower value of p, and vice versa. Repeat steps 4 through 8 until a satisfactory gain is achieved. Then calculate k2 from Eq. (11-48).
9. Calculate λ from Eq. (11-42). This represents the complement of the fraction by which the setpoint response decays each sampling period.
10. Calculate the theoretical setpoint response at sampling instants from Eq. (11-45). This relation must be delayed by aτ if there is delay time, but the shape of the response is identical to Eq. (11-45). If this response is too fast or too slow, try a different sampling period T.
11. Choose a reset time. Experience with the particular plant is the only guide to selection of this parameter, since it is used only to compensate for our ignorance of exact plant dynamics and load changes.
12. Calculate j and v from Eq. (11-67). Calculate the coefficients in Eq. (11-74) using the definitions given following Eq. (11-74). This equation is the control algorithm for t = t0, t1, ..., tj+2. Note that tk = kT. The index k is reset to zero each time a setpoint change occurs.
The values ma and ca represent absolute values of the manipulated and output variables, and cs represents the desired value of the output signal. A setpoint change implies a change in the value of cs.

13. Calculate the coefficients in Eq. (11-77) using the definitions given following Eq. (11-77). This equation is the control algorithm for t = tj+3, tj+4, ..., until the next setpoint change occurs, at which time control reverts to Eq. (11-74). The value v represents the change in the manipulated variable, and e represents the error, the difference between desired and actual values of plant output. If T/TR ≠ 0,
[Figure E-2. Response of smoothing techniques to an exponential signal contaminated with noise: the curves compare the signal estimated by single smoothing with that estimated by double smoothing, for T/τ = 0.1; abscissa t/τ from 0 to 1.6.]
of lines is drawn through the smoothed points in Fig. E-2 for graphical clarity, this signal should really be represented as a sequence of horizontal lines, as is done in Fig. E-1, owing to the sampling process). Thus, it may be seen that the choice of a requires a compromise between noise rejection and speed of signal tracking. The lower the smoothing constant a, the better the noise rejection, but the more slowly the smoothed signal follows the original. Returning to Eq. (E-1), if we follow the process suggested by this equation for a typical sequence of values of cn, we find that its effect is to weight the value of the signal taken n sampling intervals ago by the factor a(1 − a)^n in computing the current smoothed value. This shows the exponential nature of the smoothing process: the older the value, the less heavily it is weighted in the current smoothed value. The advantage of this smoothing process is that only one previous number, c̄n−1, need be stored in order to achieve this successive weighting of all past values of the signal. To gain further insight into the smoothing process of Eq. (E-1), we study its "response" to an input signal which is a unit-step function. To do this, we utilize the sampled-data theory developed in Appendix C. If the Z-transforms of cn and c̄n are denoted by C(z) and C̄(z), respectively, then the Z-transform of Eq. (E-1) is

C̄(z)/C(z) = az / [z − (1 − a)]        (E-2)
Appendix E
455
Continuous Analog to Single-Exponential Smoothing
Assuming a step change in the signal, C(z) = z/(z − 1), and inverting Eq. (E-2) by the method of partial fractions, we obtain the equation

c̄n = 1 − (1 − a)^(n+1)        (E-3)

Equation (E-3) shows that, after a sufficient number of sampling intervals, the smoothed error signal will eventually rise to the current true value of the unsmoothed error signal. In fact, the response is so similar to that of a continuous first-order system that we are justifiably tempted to assume an analogy between Eq. (E-1) in the discrete process and a first-order system in the continuous process. This analogy will be developed further in the next section. Also note from Eq. (E-3) that the higher the value of a, the more rapidly the smoothed signal reaches the input signal, as anticipated.
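The recursion of Eq. (E-1) and the closed-form step response of Eq. (E-3) can be checked in a few lines; the smoothing constant below is arbitrary.

```python
def smooth(signal, a, cbar=0.0):
    """Single-exponential smoothing, Eq. (E-1): cbar_n = a*c_n + (1 - a)*cbar_(n-1)."""
    out = []
    for c in signal:
        cbar = a * c + (1.0 - a) * cbar
        out.append(cbar)
    return out

a = 0.3                                   # arbitrary smoothing constant
step_resp = smooth([1.0] * 20, a)         # unit-step input, zero initial value
closed = [1.0 - (1.0 - a) ** (n + 1) for n in range(20)]
print(step_resp[0], closed[0])
```

Only the single stored value cbar is carried between samples, which is the storage advantage noted above.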
Continuous Analog to Single-Exponential Smoothing

Consider Fig. E-3, in which the unsmoothed signal c(t) is shown entering a first-order system with time constant τs. (Figure E-3 illustrates the analogy between single-exponential smoothing and a first-order system.) In terms of the signal-frequency discussion of Chapter 1, statistical fluctuations in the error signal, if introduced by the measurement process, would tend to be at higher frequencies than those contained in the actual process output. Therefore, appropriate choice of the time constant of the first-order filter in Fig. E-3, to cause significant signal attenuation only in the range of frequencies where the noise may be considered to be dominant, should lead to a smoothed version of the original signal. To put these arguments in more quantitative form, we note that, from Fig. E-3,

c̄n = (1/τs) ∫ (0 to nT) e^(−(nT−θ)/τs) c(θ) dθ        (E-4)
If the integral in Eq. (E-4) is divided into two parts, the first from zero to (n − 1)T and the second from (n − 1)T to nT, and if the theorem of the mean is used in the latter integral, the result is

c̄n = (1 − e^(−T/τs)) c(t*) + e^(−T/τs) c̄n−1

where the time t* lies in the interval (n − 1)T < t* < nT, from the relation

∫ ((n−1)T to nT) e^(−(nT−θ)/τs) c(θ) dθ = c(t*) ∫ ((n−1)T to nT) e^(−(nT−θ)/τs) dθ