Introduction to control theory


English Pages 466 [487] Year 1968



Introduction to Control Theory

PRENTICE-HALL INTERNATIONAL SERIES IN THE PHYSICAL AND CHEMICAL ENGINEERING SCIENCES
NEAL R. AMUNDSON, EDITOR, University of Minnesota

ADVISORY EDITORS
ANDREAS ACRIVOS, Stanford University
JOHN DAHLER, University of Minnesota
THOMAS J. HANRATTY, University of Illinois
JOHN M. PRAUSNITZ, University of California
L. E. SCRIVEN, University of Minnesota

AMUNDSON  Mathematical Methods in Chemical Engineering
ARIS  Elementary Chemical Reactor Analysis
ARIS  Introduction to the Analysis of Chemical Reactors
ARIS  Vectors, Tensors, and the Basic Equations of Fluid Mechanics
BOUDART  Kinetics of Chemical Processes
FREDRICKSON  Principles and Applications of Rheology
HAPPEL AND BRENNER  Low Reynolds Number Hydrodynamics
HIMMELBLAU  Basic Principles and Calculations in Chemical Engineering, 2nd ed.
HOLLAND  Multicomponent Distillation
HOLLAND  Unsteady State Processes with Applications in Multicomponent Distillation
KOPPEL  Introduction to Control Theory: With Applications to Process Control
LEVICH  Physicochemical Hydrodynamics
PETERSEN  Chemical Reaction Analysis
PRAUSNITZ AND CHUEH  Computer Calculations for High-Pressure Vapor-Liquid Equilibria
PRAUSNITZ, ECKERT, ORYE, O'CONNELL  Computer Calculations for Multicomponent Vapor-Liquid Equilibria
WHITAKER  Introduction to Fluid Mechanics
WILDE  Optimum Seeking Methods

PRENTICE-HALL, INC.
PRENTICE-HALL INTERNATIONAL, INC., UNITED KINGDOM AND EIRE
PRENTICE-HALL OF CANADA, LTD., CANADA

Introduction to Control Theory with Applications to Process Control

LOWELL B. KOPPEL
Professor of Chemical Engineering
Purdue University

PRENTICE-HALL, INC., Englewood Cliffs, N.J.

PRENTICE-HALL INTERNATIONAL, INC., London
PRENTICE-HALL OF AUSTRALIA, PTY. LTD., Sydney
PRENTICE-HALL OF CANADA, LTD., Toronto
PRENTICE-HALL OF INDIA PRIVATE LTD., New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo

© 1968 by Prentice-Hall, Inc., Englewood Cliffs, N.J.

All rights reserved. No part of this book may be reproduced in any form or by any means without permission in writing from the publisher.

Current printing (last digit): 10 9 8 7 6 5 4 3 2 1

Library of Congress Catalog Card Number 68-9803

Printed in the United States of America

Preface

This book is written as an introduction to the basic concepts of modern control theory and as an indication of possible application of these concepts to process control. It is assumed that the reader is familiar with the classical control theory covered in most introductory texts on feedback control. In addition, a knowledge of matrix algebra is assumed. Problems are provided at the end of most chapters so that the book can be used as text material for a graduate course on automatic control.

A certain bias toward chemical and mechanical engineering aspects of control theory will no doubt be detected. Examples of physical systems are primarily heat exchangers and chemical reactors. However, a primary purpose for writing this book is to present a sufficiently rigorous and general account of control theory to prepare the reader for the rapidly expanding research and development literature on automatic control. Comparison of the recent control literature from any of the several engineering disciplines (electrical, mechanical, chemical, science, etc.) with that of a decade ago shows considerable difference in the background and level of mathematical sophistication assumed of the reader. It is hoped that the material selected for this book will provide some of this background.

Automatic control has become a mathematically demanding discipline, perhaps more so than any other normally encountered by engineers. If there is little escape from these demands in applying present-day control theory to problems of technology, there will be less in the future. Concepts such as existence and uniqueness, sufficiency and necessity, once considered the sole province of mathematicians, now appear in the control engineering literature as essential features of the problem solution. Therefore, I have attempted not to gloss over these aspects.

The book begins with a review and then treats, for both continuous and discrete systems, the subjects of state variables, Lyapunov stability, and optimization. It concludes with some specific, suggestive illustrations on practical application to process control. The treatment of optimization is in no way competitive with that presented in recent texts devoted exclusively to this subject. I have presented only an incomplete treatment of Pontryagin's minimum principle and Bellman's dynamic programming, as they apply to problems already formulated in the state transition framework of modern control theory. Omitted are all the other interesting techniques for extremizing general functions of several variables. More attention has been given to the minimum principle than to dynamic programming, for optimization in continuous control systems, because it is my opinion that the former is a more useful computational tool for this class of problems. Chapter 6 contains a rather detailed account of minimum integral-square-error and minimum time control of linear systems. The purpose is to indicate how the complete description of dynamic behavior available for linear systems, in the form of the transition matrix, enables a relatively complete solution to the optimal control problem. Also, these problems illustrate the mathematical care required to definitively solve a problem in control theory. The examples, particularly on stability and optimization, are designed to be as simple as possible, generally not requiring a computer for solution.
I believe that the disadvantages of simple examples are outweighed by their major advantage: They enable complete involvement and verification by the reader, while illustrating the essential concept. Where necessary, computer-solved examples are presented, or referred to in the literature. No mention has been made of stochastic control systems, because the space required to treat this subject does not appear justified in view of the dearth of presently established applications of statistical control theory to process control.

The rather long Appendix C on sampled-data systems is included because chemical engineers generally have no introduction to this basic aspect of control theory. The book can be read without this background, but at some disadvantage. In fact, all material on discrete systems may be omitted without loss of continuity.

I wish to acknowledge and thank my colleagues D. R. Coughanowr, H. C. Lim, Y. P. Shih, P. R. Latour, and H. A. Mosler for their contributions. Purdue Research Foundation generously provided me with a summer grant to assist my writing efforts. The Purdue School of Chemical Engineering allowed use of early drafts as the text material for its graduate course in process control. I am grateful for all this assistance.

LOWELL B. KOPPEL

Lafayette, Indiana

Contents

1. Review of Classical Methods for Continuous Linear Systems, 1
   Linear systems analysis, 1. Classical control techniques, 11. Minimization of integral square error, 32.

2. Review of Classical Methods for Discrete Linear Systems, 43
   General linear transfer function, 44. Pulse transfer function, 45. Delta-function response, 48. Stability of linear discrete systems, 49. Compensation of linear discrete systems, 50.

3. State Variables for Continuous Systems, 56
   Basic state concepts, 56. Ordinary differential equations, 59. Linear systems with constant coefficients, 62. Linear systems with time-varying coefficients, 73. Controllability and observability, 77. Distributed-parameter systems, 86. Nonlinear systems, 88.

4. State Variables for Discrete Systems, 93
   Basic state concepts, 93. Difference equations, 95. Linear systems with constant coefficients, 96. Linear systems with time-varying coefficients, 100. Controllability and observability, 105. Sampled-data systems, 106.

5. Lyapunov Stability Theory, 113
   Nonlinear spring-mass-damper system, 114. Free, forced, and autonomous systems, 116. Definitions of stability, 117. Basic stability theorems, 119. Proofs of stability theorems, 130. Stability of linear systems, 133. Linear approximation theorems, 135. Krasovskii's theorem, 143. Estimation of transients, 147. Parameter optimization using Lyapunov functions, 152. Use of Lyapunov functions to design controllers, 156. Stability of discrete systems, 161.

6. Continuous Systems Optimization, 171
   An introductory problem: variational methods, 171. The minimum principle, 182. General results for linear systems, 210. Optimal control of linear systems with quadratic performance criteria, 211. Time-optimal control of linear, stationary systems, 226. Time-optimal control of nonlinear systems, 254. Numerical procedures for optimization by the minimum principle, 258. Dynamic programming, 270.

7. Optimization of Discrete Control Systems, 291
   Dynamic programming, 292. Discrete minimum principle, 299. Time-optimal control of linear sampled-data systems, 307.

8. Optimal Control of Distributed-Parameter Systems, 315
   Systems with pure delay, 315. Minimum principle for distributed-parameter systems, 319. Optimal exit control of distributed-parameter systems, 328.

9. Chemical Process Control, 332

10. Time-Optimal Process Control, 339
    Analysis of programmed time-optimal control, 340. Feedback time-optimal control, 360.

11. Design of Digital Process Controllers, 366

Appendices

A. Existence and Uniqueness Theorems for Equation (3-4), 383

B. Vectors and Matrices, 392

C. Classical Methods for Sampled-Data Systems, 398
   Basic considerations of sampling, 399. Z-transform methods, 418. Continuous closed-loop control with sampled signals, 432. Direct digital control of processes, 437.

D. Controllability and Observability, 447

E. Filtering, Estimation, Differentiation, and Prediction, 451
   Single-exponential smoothing, 453. Continuous analog to single-exponential smoothing, 455. Ramp response, 456. Double-exponential smoothing, 457. Continuous version of double-exponential smoothing, 459.

Index, 463


1
Review of Classical Methods for Continuous Linear Systems

Linear Systems Analysis

In this chapter we will review briefly some important aspects of what may be called classical continuous control theory. It will be assumed that the reader has some general acquaintance with these topics, although the treatment will be self-contained. From the vast body of knowledge which might justifiably be included in the term classical control theory, we select only those topics which are important for perspective. Therefore, detailed expositions of design methods and system compensation techniques, although of considerable importance to one wishing to solve problems in automatic control, are omitted here because their inclusion would make the major goal (perspective) considerably more difficult to attain. Furthermore, they have been well-documented in standard references on the subject of automatic control.

Definition of system

For the purposes of the book, a system will be defined as any physical device which, upon application of an externally selected function m(t), produces an observable function c(t). It is further assumed that the system is deterministic, that is, the application of a given function m(t) always produces the same function c(t). This precludes a stochastic system from our analysis. For convenience we define the applied function m(t) as the input signal and the observed function c(t) as the output signal. Normally, the independent variable t will denote time. This definition serves to describe the single-variable system. When there is more than one input signal, denoted by m_1(t), m_2(t), m_3(t), ..., and possibly more than one output signal, denoted by c_1(t), c_2(t), c_3(t), ..., we are dealing with a multivariable system. In either event, the essential nature of the system concept is the cause-and-effect relationship between input and output signals.

In this chapter, we further restrict attention to systems in which input and output signals are available continuously, i.e., at all values of t. In the next chapter, classical methods will be presented for analyzing systems whose signals are available intermittently.

The distinction between classical control theory to be studied in this chapter and what is sometimes called modern control theory to be studied subsequently is to some extent artificial. However, a basic factor distinguishing these two branches of control theory is the fact that only the input and output signals are considered in classical control theory. This is incomplete because, even if complete knowledge of the cause-and-effect relationship between input and output signals is available, specification of the output signal c(t_0) at some arbitrary instant of time t_0, and of the complete behavior of the input signal m(t) for all future time (i.e., for all t > t_0), is insufficient to completely determine the future behavior of the system.
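This insufficiency is easy to exhibit numerically. A minimal sketch, assuming an illustrative second-order system c'' + 3c' + 2c = m (the system and all values here are chosen for illustration, not taken from the text): two simulations that agree on the output value c(0) and on the entire input m(t), but differ in the unspecified initial derivative c'(0), soon produce different outputs.

```python
# Two responses of the illustrative system c'' + 3 c' + 2 c = m, with the
# same unit-step input and the same initial output c(0) = 0, but different
# (unspecified) initial derivatives c'(0).
def simulate(dc0, dt=1e-4, t_end=1.0):
    c, dc, m = 0.0, dc0, 1.0
    for _ in range(int(round(t_end / dt))):
        ddc = m - 3.0 * dc - 2.0 * c   # solve the ODE for c''
        dc += dt * ddc                 # integrate c'' to get c'
        c += dt * dc                   # integrate c' to get c
    return c

# Knowing c(0) and the whole future input does not fix the future output:
assert abs(simulate(0.0) - simulate(1.0)) > 0.1
```

Any nonzero difference confirms that c(t_0) together with the future input underdetermines the response; the state concept of Chapter 3 supplies exactly the missing information.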
If the system relationship is a differential equation of order higher than one, it is necessary also to specify the values of derivatives of the output signal at t_0 (or equivalent information) in order to predict future behavior. This defect is eliminated in modern control theory by the introduction of the concept of state, as will be discussed in Chapter 3. A second factor in distinguishing classical and modern control theory is chronology. Classical control theory is generally regarded as having begun in the 1920's, while modern control theory largely consists of developments which took place after 1950. A third factor is the compensation concept, to be discussed later in this chapter.

To specialize further various aspects of our treatment, it will be necessary to introduce more adjectives. A linear system will be defined as one which obeys the principle of superposition with respect to the input signal. The superposition principle states that the response of the system to a linear combination of two input signals, a_1 m_1(t) + a_2 m_2(t), is the same linear combination of the responses to the two input signals. That is, the response is a_1 c_1(t) + a_2 c_2(t), where c_1(t) and c_2(t) are the responses to m_1(t) and m_2(t), respectively.

A lumped-parameter system is one for which the cause-and-effect relationship between input signal and output signal consists of ordinary differential equations only, in contrast with a distributed-parameter system for which this relationship is described by partial differential equations. Note that we are not exercising particular care in distinguishing between the system and the mathematical relations describing its cause-and-effect behavior. This distinction is necessary to develop a mathematically rigorous description of these concepts,¹ but it will not be necessary for our purposes. It will be obvious when we are referring to the physical process and when to the mathematical relationship used to express the cause-and-effect behavior between its input signal and output signal.

Analysis of linear, lumped-parameter systems

Consider the single-variable system shown in Fig. 1-1. This figure is drawn in the usual notational convention for analog diagrams. The triangularly shaped components integrate or sum, depending upon the symbol placed inside the triangle; the circular elements multiply by the constant written in the circle. Thus, the input signal m(t) is summed with a linear combination of the signals labeled x_1(t), x_2(t), ..., x_n(t), to form the signal x_0(t). This latter signal is then integrated to give x_1(t) which is, in turn, integrated to give x_2(t), etc. The signals x_0(t), x_1(t), ..., x_n(t) are combined linearly to form the output signal c(t). We assert that the cause-and-effect relationship for the linear system of Fig. 1-1 is given by the linear differential equation

    d^n c/dt^n + a_{n-1} d^{n-1}c/dt^{n-1} + ... + a_1 dc/dt + a_0 c
        = b_n d^n m/dt^n + b_{n-1} d^{n-1}m/dt^{n-1} + ... + b_1 dm/dt + b_0 m        (1-1)

[Figure 1-1: Analog simulation diagram for nth-order, linear, lumped-parameter system.]


and we shall prove this below. The x_i(t) variables will later be recognized (Chapter 3) as a set of state variables, and therefore the output signal c(t) is merely a linear combination of these state variables. Figure 1-1 has the advantage of making clear that Eq. (1-1), which is no doubt familiar to the reader, results from a system that integrates, and not from one that differentiates. In fact, there is no real physical system which can differentiate, although the differential operation may be closely approached by some physical systems.

EXAMPLE 1-1

Consider a cylindrical tank of constant cross-sectional area A being filled with a liquid at a variable rate m(t) volumes per time. As output variable c(t), we choose the instantaneous level of fluid in the tank, and we further suppose the tank loses liquid through an opening in its bottom at a rate k c(t) volumes per time, where k is a constant. Then we may write a differential mass balance as

    A dc/dt = m(t) - k c(t)

which is merely a special case of Eq. (1-1) with n = 1, a_0 = k/A, b_1 = 0, b_0 = 1/A. In this form we have stated that the instantaneous rate of accumulation is equal to the instantaneous rate of input less the instantaneous rate of output. However, we may also state that the total volume in the tank at any time t is given by the total volume which existed at a previous time t_0, plus the integral since that time of the inlet rate minus the outlet rate:

    A c(t) = A c(t_0) + ∫_{t_0}^{t} [m(θ) - k c(θ)] dθ
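The differential and integral forms of the balance can be cross-checked numerically. A minimal sketch (Euler integration; the values A = 2, k = 0.5, and the constant inflow m = 1 are illustrative, not from the text):

```python
# Numerical check of Example 1-1: A dc/dt = m(t) - k c(t).
# A, k, and m are illustrative values, not from the text.
A, k = 2.0, 0.5
m = 1.0            # constant inflow rate (volumes per time)
c = 0.0            # tank starts empty (zero state)
dt, t_end = 1e-3, 40.0
t = 0.0
while t < t_end:
    c += dt * (m - k * c) / A   # one Euler step of the mass balance
    t += dt
# After many time constants the level settles at the steady state m/k.
assert abs(c - m / k) < 1e-3
```

Either form of the balance predicts the same steady level m/k; the loop above is just the integral form evaluated one small step at a time.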

Thus, the tank physically integrates the difference between inlet and outlet stream rates to arrive at some current inventory. Of course, these equations are easily converted, one into the other, by differentiation or integration. The differential equation form is most often used because there are systematic procedures for its solution.

EXAMPLE 1-2

Suppose a system were described by the input-output relationship

    c(t) = dm(t)/dt

This is a perfect differentiator. Physically, there is no difficulty in generating an input signal arbitrarily close to a perfect unit-step function, m(t) = S(t - t_0), where

    S(t - t_0) = 0 for t < t_0,   1 for t > t_0

Then the perfect differentiator must produce an output signal arbitrarily close to an impulse function δ(t - t_0), defined by

    δ(t - t_0) = 0 for t ≠ t_0,   ∫_{-∞}^{∞} δ(t - t_0) dt = 1

so that δ(0) must be infinite. All physical systems are bounded in some sense and cannot produce arbitrarily large output signals. Hence, no physical system is a perfect differentiator. This reasoning may be extended to explain why the coefficient of the term d^n c/dt^n in Eq. (1-1) cannot be zero, while the coefficients of all other terms may be zero. This example further explains why Eq. (1-1) results from a system which integrates.

To prove that Eq. (1-1) actually describes the system shown in Fig. 1-1, we note that the input to each of the integrators is the derivative of its output, and hence

    x_k = d^{n-k} x_n / dt^{n-k},    k = 0, 1, 2, ..., n        (1-2)

Furthermore, since the summing junction of Fig. 1-1 gives x_0 = m - a_{n-1} x_1 - a_{n-2} x_2 - ... - a_0 x_n, we obtain from substitution of Eq. (1-2)

    d^n x_n/dt^n + a_{n-1} d^{n-1} x_n/dt^{n-1} + ... + a_1 dx_n/dt + a_0 x_n = m        (1-3)

Differentiating Eq. (1-3) k times yields

    d^{n+k} x_n/dt^{n+k} + a_{n-1} d^{n+k-1} x_n/dt^{n+k-1} + ... + a_1 d^{k+1} x_n/dt^{k+1} + a_0 d^k x_n/dt^k = d^k m/dt^k

which, upon substitution of Eq. (1-2), results in, for k = 0, 1, 2, ..., n,

    d^n x_{n-k}/dt^n + a_{n-1} d^{n-1} x_{n-k}/dt^{n-1} + ... + a_1 dx_{n-k}/dt + a_0 x_{n-k} = d^k m/dt^k        (1-4)

Finally, Fig. 1-1 shows that

    c(t) = b_0 x_n + b_1 x_{n-1} + ... + b_n x_0        (1-5)

Equation (1-1) follows immediately from Eqs. (1-4) and (1-5) when we take derivatives of c(t) and combine as required in Eq. (1-1). There are n integrators in Fig. 1-1, and the order of the system equation, Eq. (1-1), is also n. In general, an nth-order system will be one which performs n integrations. Furthermore, while it is possible to choose any of the a_i or b_i as zero, there is no choice which will result in an equation in which the input signal m(t) is differentiated more times than is the output signal c(t). Therefore, any equation in which m(t) is differentiated more times than is c(t) cannot result from a system which integrates and will be physically unrealizable.

Transfer function

We shall define the Laplace transform more carefully below. For now it is assumed that the reader is familiar with the basic operational properties of this transform. Taking the Laplace transform of both sides of Eq. (1-1) and rearranging give the result

    C(s)/M(s) = (b_n s^n + b_{n-1} s^{n-1} + ... + b_1 s + b_0) / (s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0) ≜ G(s)        (1-6)

which has been used as shown to define a transfer function G(s) as the ratio of two polynomials. Equation (1-6) is, of course, equivalent to

    C(s) = G(s) M(s)        (1-7)

which is the basic cause-and-effect relationship between the Laplace transforms of the input and output signals, M(s) and C(s). To derive Eq. (1-6) directly from Eq. (1-1) in this manner requires the assumption that the initial value and first (n - 1) initial derivatives of both c(t) and m(t) are zero. Actually, such a restriction is not required to obtain this result, which follows from only the assumption that all initial values of the variables x_1, x_2, ..., x_n are zero. This derivation is the subject of one of the problems at the end of this chapter. The significance of this is that the transfer function relation of Eq. (1-7) is valid only for the so-called zero-state condition. Without yet defining the state concept, we can describe this zero-state condition as one in which the output variable c(t) will remain at zero if no input is applied to the system, i.e., if m(t) = 0. With reference to Fig. 1-1, if all initial conditions on the integrators are set to zero, and if m(t) is also held at zero, clearly c(t) = 0. Also note that the initial conditions which appear as a result of applying the Laplace transform operator to a differential expression are evaluated shortly after time zero, usually denoted by t = 0+. These initial conditions are on the variables c(t) and m(t) and need not be zero to have the transfer function of Eq. (1-6) apply. (See Problem 1-1.) Provided that the initial conditions on the variables x_1, x_2, ..., x_n are zero, the transfer function relation is valid. Note that Fig. 1-1 provides a way to construct a computer system to simulate the behavior of any transfer function of the form of Eq. (1-6).

Impulse response

In what follows, we shall invariably restrict our attention to events occurring after a time arbitrarily designated as t = 0. We suppose that some arbitrary signal m(t) is applied as the input to a linear system, initially in the zero state. Such a signal is sketched in Fig. 1-2. As shown in the figure, this signal may be approximated by a sequence of square pulses. The (k + 1)st pulse begins at a time t = kT, where T is the period of the pulsing and k is an integer; it ends at t = (k + 1)T and assumes a height equal to the value of the function at the beginning of the pulse, m(kT). By making T arbitrarily small, we may represent the signal m(t) as closely as desired by this pulse sequence.

[Figure 1-2: Approximation of input signal by pulse sequence.]

Because we are dealing with a linear system, we may apply the principle of superposition. The total response of the system to the sequence of pulses may be obtained by summing its response to each of the individual pulses. Consider the typical pulse occurring at time t = kT. This pulse may be regarded as the application of a step function at time t = kT with magnitude m(kT), followed by the application of a negative step function of the same magnitude at a time t = (k + 1)T. Let us denote by H(t - t_0) the response of the linear system to a step of unit magnitude applied at time t = t_0, S(t - t_0). Then, by the principle of superposition, the response c_k(t) due only to this square pulse is

    c_k(t) = m(kT){H(t - kT) - H[t - (k + 1)T]}

At any time t in the interval kT < t < (k + 1)T, we may represent the total response as the sum of the responses caused by each of the previous pulses:

    c(t) = Σ_{j=0}^{k} m(jT){H(t - jT) - H[t - (j + 1)T]}        (1-8)

We now propose to let T go to zero and k become infinite in such a manner that the product kT remains constant, kT = t. Furthermore, if we define jT = θ, it follows that

    T = (j + 1)T - jT = Δθ

Then by definition, the summation in Eq. (1-8) approaches an integral, and we obtain

    c(t) = ∫_0^t m(θ) (dH/dθ)(t - θ) dθ        (1-9)
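The limiting step from Eq. (1-8) to Eq. (1-9) can be checked numerically. A sketch, assuming an illustrative first-order system with unit-step response H(t) = 1 - e^(-t) and a ramp input m(t) = t, whose exact response is t - 1 + e^(-t):

```python
import math

# Evaluate the pulse-superposition sum, Eq. (1-8), for a small pulse period T
# and compare with the exact response.  The first-order step response
# H(t) = 1 - exp(-t) and the ramp input m(t) = t are illustrative choices.
def H(t):
    return 1.0 - math.exp(-t) if t > 0.0 else 0.0

t, T = 2.0, 1e-3
k = int(round(t / T))
c_sum = sum(j * T * (H(t - j * T) - H(t - (j + 1) * T))   # m(jT) = jT
            for j in range(k + 1))
c_exact = t - 1.0 + math.exp(-t)                          # exact ramp response
assert abs(c_sum - c_exact) < 1e-2
```

Shrinking T further drives the sum toward the integral of Eq. (1-9), as the derivation asserts.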

We now define the derivative of the response to a unit-step function as the impulse response g(t - t_0), that is

    g(t - t_0) = (d/dt) H(t - t_0)

The term "impulse response" derives from the fact that g(t - t_0) is also the response of the system when subjected to an impulse input, δ(t - t_0). For physical reasons, an impulse input may never be obtained because it requires infinite values. Therefore, to make the definition physically acceptable, it is desirable to define g(t - t_0) as we have shown. Then, Eq. (1-9) becomes

    c(t) = ∫_0^t m(θ) g(t - θ) dθ        (1-10)

A change in the variable of integration transforms this to

    c(t) = ∫_0^t m(t - θ) g(θ) dθ        (1-11)
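Equation (1-11) lends itself to a direct numerical check as a discrete convolution sum. A sketch, again assuming an illustrative first-order system with impulse response g(θ) = e^(-θ) and a unit-step input, for which the exact output is 1 - e^(-t):

```python
import math

# Discrete form of Eq. (1-11): c(t) ~= sum over theta of m(t - theta) g(theta) dtheta.
# g(theta) = exp(-theta) and the unit-step input are illustrative choices.
dtheta = 1e-3
t = 2.0
n = int(round(t / dtheta))
# For a unit step, m(t - theta) = 1 for every theta in [0, t).
c_approx = sum(math.exp(-k * dtheta) * dtheta for k in range(n))
c_exact = 1.0 - math.exp(-t)        # known step response of this system
assert abs(c_approx - c_exact) < 1e-3
```

Once g is known, any input can be handled the same way, which is precisely the "complete description" claimed for Eqs. (1-10) and (1-11).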

In deriving Eq. (1-11) it is important to realize that, both by definition and on physical grounds, g(t - t_0) = 0 for all time t satisfying t < t_0. Applying the convolution theorem² to either Eq. (1-10) or Eq. (1-11) results in Eq. (1-7); note that the unit-impulse response is the inverse transform of the transfer function. This follows from the fact that the Laplace transform of the unit impulse is unity. Equations (1-10) and (1-11) provide a complete description of the time behavior of the linear system. Once the function g(t - t_0) is known, the output signal resulting from an arbitrary input signal may be computed. Once again, we have assumed in the derivation that the system is in the zero state before application of m(t), which accounts for the absence of any terms other than those which appear in Eq. (1-8).

Fourier and Laplace transforms
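As a numerical preview of this section's machinery: the frequency representation of the decaying exponential f(t) = e^(-at) can be computed by directly integrating f(t)e^(-jωt) and compared with the closed form 1/(a + jω) quoted later in the section. The values of a and ω below are illustrative.

```python
import cmath

# Numerically evaluate the integral of f(t) exp(-j w t) dt for f(t) = exp(-a t),
# t >= 0, and compare with the closed form 1/(a + j w).  a and w are illustrative.
a, w = 0.1, 2.0
dt, t_end = 1e-3, 200.0     # t_end >> 1/a, so the truncated tail is negligible
n = int(round(t_end / dt))
samples = [cmath.exp(-(a + 1j * w) * k * dt) for k in range(n + 1)]
F = (sum(samples) - 0.5 * (samples[0] + samples[-1])) * dt   # trapezoidal rule
exact = 1.0 / (a + 1j * w)
assert abs(F - exact) < 1e-4
```

As a is made small, the magnitude of this transform approaches 1/ω, the frequency content attributed to the step function later in the section.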

Either Eq. (1-10) or Eq. (1-11) completely characterizes the linear system in the time domain. There is an alternate way of expressing the cause-and-effect relation for linear systems, which involves the frequency domain. To develop this concept we will first have to establish the Fourier and Laplace transforms on more rigorous grounds. A periodic function f(t) may be represented as a linear combination of sine and cosine waves through use of the Fourier series³

    f(t) = (1/T) ∫_{-T/2}^{T/2} f(θ) dθ
         + (2/T) Σ_{n=1}^{∞} cos(2nπt/T) ∫_{-T/2}^{T/2} f(θ) cos(2nπθ/T) dθ
         + (2/T) Σ_{n=1}^{∞} sin(2nπt/T) ∫_{-T/2}^{T/2} f(θ) sin(2nπθ/T) dθ

where T is the period of the function f(t). We define the frequency ω_0 = 2π/T and use the complex identities

    cos nω_0 t = [exp(jnω_0 t) + exp(-jnω_0 t)] / 2
    sin nω_0 t = [exp(jnω_0 t) - exp(-jnω_0 t)] / 2j

to convert the Fourier series to the equivalent exponential form

    f(t) = Σ_{n=-∞}^{∞} (1/T) exp(jnω_0 t) ∫_{-T/2}^{T/2} f(θ) exp(-jnω_0 θ) dθ

Now an arbitrary function f(t) may be regarded as a periodic function whose period becomes infinite. Thus, defining

    ω = nω_0 = 2nπ/T

it follows that

    1/T = ω_0/2π = [(n + 1)ω_0 - nω_0]/2π = Δω/2π

so that the Fourier series may be written in the form

    f(t) = (1/2π) Σ_{n=-∞}^{∞} exp(jωt) Δω ∫_{-T/2}^{T/2} f(θ) exp(-jωθ) dθ

Allowing T → ∞ and Δω → 0 converts the summation to an integral. This integral relationship may be written as two integrals

    f(t) = (1/2π) ∫_{-∞}^{∞} F(jω) exp(jωt) dω        (1-12)

    F(jω) = ∫_{-∞}^{∞} f(t) exp(-jωt) dt        (1-13)

where the Fourier transform F(jω) is defined by Eq. (1-13), and Eq. (1-12) gives a formula for inverting the Fourier transform to recalculate f(t). The Fourier transform F(jω) is the frequency representation of the time signal f(t). It is a continuous function of the frequency ω. This function describes an aperiodic signal f(t), just as the discrete set of coefficients in the Fourier series describes a periodic signal. In general, F(jω) is complex and therefore possesses a magnitude and argument. A typical way of representing its frequency content is to plot the magnitude |F(jω)| as a function of frequency. Problem 1-2 illustrates this concept.

Mathematically speaking, however, the Fourier transform of a function is not guaranteed to exist unless that function is absolutely integrable,⁴ that is, unless

    ∫_{-∞}^{∞} |f(t)| dt < ∞

This difficulty may be circumvented by considering instead the function e^{-σt} f(t). If the integral

    ∫ e^{-σt} |f(t)| dt

converges for some σ = σ_0 > 0, it will be true for all σ > σ_0. Then σ_0 is called the abscissa of convergence if it is the smallest constant for which the integral converges for all σ > σ_0. We further insist that either f(t) = 0 for all t < 0, or else that any behavior of f(t) prior to time zero is of no significance. This restriction introduces no difficulties in practice. Then the Laplace transform is defined as the Fourier transform of e^{-σt} f(t) for all σ > σ_0 and, as such, is guaranteed to exist. The equation for this Fourier transform is

F(\sigma + j\omega) = \int_0^{\infty} \left[ e^{-\sigma t} f(t) \right] e^{-j\omega t}\, dt = \int_0^{\infty} f(t)\, e^{-(\sigma + j\omega)t}\, dt

which, with the substitution s = σ + jω, is the familiar defining integral of the Laplace transform F(s). For the unit-step function, F(s) = 1/s, valid for all σ > 0. Since σ can be taken arbitrarily close to zero, we may take F(s) arbitrarily close to 1/jω, which is simply the result of replacing s by jω in the Laplace transform to obtain the Fourier transform. Another way of looking at this is that we are effectively approximating the step function with an exponential function which decays at an arbitrarily slow rate. The replacement of s in the Laplace transform by jω, to obtain the Fourier transform, is generally an acceptable practice; infrequently, however, it may lead to an erroneous result. An indication of this may be seen if the inverse Fourier transform of F(jω) = 1/jω is calculated using Eq. (1-12). The result is

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{e^{j\omega t}}{j\omega}\, d\omega =
\begin{cases} -\tfrac{1}{2} & t < 0 \\ \phantom{-}\tfrac{1}{2} & t > 0 \end{cases}

which is not the unit-step function. However, the inverse of F(jω) = 1/(a + jω) is f(t) = exp(−at) for all a > 0. Thus, the usual use of F(jω) = 1/jω may be justified in a practical sense, but not in a strictly mathematical sense. For the step function, the magnitude of F(jω) may therefore be written as 1/ω. This implies that the step function has decreasing frequency content at higher frequencies. As is well known, the use of a step input to experimentally test a process will yield information only about low-frequency behavior. Phenomena such as high-frequency resonance will go undetected. Pulse functions have higher content at the higher frequencies and, accordingly, are more desirable as experimental test functions. We can now exhibit the duality between the time and frequency domains. To do this we once again note that assuming a linear system allows application of the principle of superposition. In the time domain, we decomposed an arbitrary input m(t) into a sum of pulses. In the frequency domain, we decompose the input into a sum of complex exponentials (sine waves) by rewriting Eq. (1-12) for m(t) as a summation. The response of the linear system to this sum will be given by the sum of the responses to each term in the sum. Consider the typical term

e^{j\omega t}\, M(j\omega)\, \frac{\Delta\omega}{2\pi}    (1-16)

The response of Eq. (1-1) to this forcing function will consist of a particular solution and a complementary solution. For now, we ignore the complementary solution since it will be evident when we obtain the final result that this solution must disappear from our final expression.

(Review of Classical Methods for Continuous Linear Systems, Chap. 1)

The particular solution will be of the form

c_p(t) = \phi(j\omega)\, M(j\omega)\, e^{j\omega t}\, \frac{\Delta\omega}{2\pi}    (1-17)

where φ(jω) is the constant to be evaluated by the usual method of undetermined coefficients. Substituting Eqs. (1-16) and (1-17) into Eq. (1-1) yields the result

\phi(j\omega) = \frac{b_n (j\omega)^n + b_{n-1}(j\omega)^{n-1} + \cdots + b_1 (j\omega) + b_0}{(j\omega)^n + a_{n-1}(j\omega)^{n-1} + \cdots + a_1 (j\omega) + a_0}

which, upon comparison with Eq. (1-6), gives φ(jω) = G(jω). Now let f(t) be a signal that is bounded for all t and for which f(t) → 0 as t → ∞; then

\int_0^{\infty} f^2(t)\, dt = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} F(s)\, F(-s)\, ds    (1-37)

where F(s) is the Laplace transform of f(t). The value of Eq. (1-37) is that we need only calculate the Laplace transform E(s) of the signal e(t), using the techniques already developed for overall transfer function analysis. Then Eq. (1-37) may be used directly to calculate the value of the performance function J without inversion. We outline a proof of this theorem in the following paragraphs. First, write the inversion integral for one of the f(t) terms in Eq. (1-37)

f(t) = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} e^{st}\, F(s)\, ds

The conditions of the theorem guarantee that zero is an acceptable value of σ. Then it follows that

\int_0^{\infty} f^2(t)\, dt = \int_0^{\infty} f(t) \left[ \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} e^{st}\, F(s)\, ds \right] dt

The conditions of the theorem also guarantee that the order of integration may be interchanged (see Churchill¹⁶) so that

\int_0^{\infty} f^2(t)\, dt = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} F(s)\, ds \int_0^{\infty} e^{st}\, f(t)\, dt = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} F(s)\, F(-s)\, ds

since the inner time integral is, by definition, simply F(−s), and the theorem is proved. When F(s) may be expressed as the ratio of two polynomials, that is, when F(s) is rational, the value of this complex integral has been tabulated in terms of the polynomial coefficients by Newton et al.,¹⁷ who are apparently the original developers of the design technique. Let

F(s) = \frac{c_{n-1} s^{n-1} + c_{n-2} s^{n-2} + \cdots + c_1 s + c_0}{d_n s^n + d_{n-1} s^{n-1} + \cdots + d_1 s + d_0}

Then the value of J is

J = \frac{c_2^2 d_0 d_1 + (c_1^2 - 2 c_0 c_2)\, d_0 d_3 + c_0^2 d_2 d_3}{2 d_0 d_3 (d_1 d_2 - d_0 d_3)} \qquad (n = 3)

J = \frac{c_3^2 d_0 (d_1 d_2 - d_0 d_3) + (c_2^2 - 2 c_1 c_3)\, d_0 d_1 d_4 + (c_1^2 - 2 c_0 c_2)\, d_0 d_3 d_4 + c_0^2 d_4 (d_2 d_3 - d_1 d_4)}{2 d_0 d_4 (d_1 d_2 d_3 - d_0 d_3^2 - d_1^2 d_4)} \qquad (n = 4)    (1-38)

For values of n higher than four, the forms become very complex. A more complete table, up to n = 10, is available elsewhere.¹⁷ To illustrate the technique, we restrict our attention to the case of parameter optimization, in which the form of the controller transfer function is specified, and only its parameters are to be chosen to achieve a minimum value of J. As discussed by Chang,¹⁸ the method can, in fact, be used to pick the best linear controller transfer function G_c(s), applying a technique referred to as spectral factorization. Parameter optimization is a much less powerful approach to optimization than spectral factorization, because choice of the form of controller offers more flexibility than does choice of only the parameters. However, in Chapter 6 we shall present more general optimization techniques for minimization of the J in Eq. (1-36); so we confine our present treatment to the following example of parameter optimization.
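The tabulated forms of Eq. (1-38) are easy to mechanize. The following sketch (not from the text) implements the table for n ≤ 4, with coefficients indexed low order first; the printed check uses a transform whose integral can be done by hand.

```python
# Sketch (not from the text): the tabulated integral-square values, Eq. (1-38),
# for rational F(s) with denominator degree n <= 4 (Newton et al. table).

def ise(c, d):
    """J = (1/2*pi*j) * integral of F(s)F(-s) ds for
    F(s) = (c[n-1] s^(n-1) + ... + c[0]) / (d[n] s^n + ... + d[0]).
    Coefficients are indexed low order first; n = len(d) - 1 <= 4."""
    n = len(d) - 1
    c = list(c) + [0.0] * (n - len(c))      # pad numerator with zeros
    if n == 1:
        (c0,), (d0, d1) = c, d
        return c0**2 / (2*d0*d1)
    if n == 2:
        (c0, c1), (d0, d1, d2) = c, d
        return (c1**2*d0 + c0**2*d2) / (2*d0*d1*d2)
    if n == 3:
        (c0, c1, c2), (d0, d1, d2, d3) = c, d
        num = c2**2*d0*d1 + (c1**2 - 2*c0*c2)*d0*d3 + c0**2*d2*d3
        return num / (2*d0*d3*(d1*d2 - d0*d3))
    if n == 4:
        (c0, c1, c2, c3), (d0, d1, d2, d3, d4) = c, d
        num = (c3**2*d0*(d1*d2 - d0*d3) + (c2**2 - 2*c1*c3)*d0*d1*d4
               + (c1**2 - 2*c0*c2)*d0*d3*d4 + c0**2*d4*(d2*d3 - d1*d4))
        return num / (2*d0*d4*(d1*d2*d3 - d0*d3**2 - d1**2*d4))
    raise ValueError("table implemented only for n <= 4")

# F(s) = 1/((s+1)(s+2)) gives f(t) = e^{-t} - e^{-2t}, and by direct
# integration J = 1/2 - 2/3 + 1/4 = 1/12.
print(ise([1], [2, 3, 1]))
```

The hand-integrable cases F(s) = 1/(s+1) and F(s) = 1/((s+1)(s+2)(s+3)), with J = 1/2 and 1/120 respectively, exercise the n = 1 and n = 3 branches the same way.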

EXAMPLE 1-8
Consider the block diagram of Fig. 1-3, with

G_p(s) = \frac{1}{(s+1)^3} \qquad G_u(s) = \frac{1}{s+1} \qquad G_c(s) = \frac{K(s + b_0)}{s}

The parameters K and b₀ are to be chosen to give a minimum value of

J = \int_0^{\infty} e^2(t)\, dt

when u(t) = S(t). This is the same system considered in Examples 1-4 and 1-6. Using ordinary block-diagram algebra, we find

E(s) = \frac{-(s+1)^2}{s^4 + 3s^3 + 3s^2 + ps + q}

(Minimization of Integral Square Error, Chap. 1)

where

p \equiv 1 + K \qquad q \equiv K b_0

Obviously, the conditions e(t) < ∞ for all t, and e(t) → 0 for t → ∞, of Parseval's theorem are satisfied; the final-value theorem gives the latter immediately. Hence we use Eq. (1-38) for n = 4 to obtain

J = \frac{pq + 6q + 9 - p}{2q(9p - 9q - p^2)}

To minimize, we set ∂J/∂p = ∂J/∂q = 0 to obtain the simultaneous relations

(9 - p)(9p - 9q - p^2) = 9q(pq + 6q - p + 9)

(9 - p)(2p - 9) = 9q(1 - q)

The solution must be found by trial, and it is

p = 4.75 \qquad q = 0.625

This may be checked as a true local minimum point of J by the calculation of second derivatives. To prove it is a global minimum requires more numerical effort, which was not done. This is one of the difficulties of the method. In terms of the control parameters, the solution is

K = 3.75 \qquad b_0 = 0.167

The response using these settings is shown in Fig. E1-8-1. The reader may find it interesting to simulate the control system on an analog computer and observe the change in response which occurs when the control settings are changed from the optimum values. The identical problem was studied by Jackson, and the results of this study are reported by Harriott.²⁰ In this study, J was plotted for various combinations of K and b₀, each value of J being calculated by actual computation of the response e(t). The minimum point of J was estimated by graphical interpolation to yield results close to those obtained here. Although Example 1-8 was carefully chosen to avoid this situation, it may happen that the minimum value of J results from the choice of infinite controller gain. In Example 1-8 this would mean K = ∞.
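As a numerical cross-check (a sketch, not from the text), J(p, q) can be minimized directly over a grid restricted to the region q > 0, 9p − 9q − p² > 0, where the closed loop is stable and the integral exists:

```python
import numpy as np

# Grid minimization (not from the text) of the Example 1-8 cost
#   J(p, q) = (p q + 6 q + 9 - p) / (2 q (9 p - 9 q - p^2)),
# keeping only stable points, 9p - 9q - p^2 > 0.

p = np.arange(3.0, 7.0, 0.01)
q = np.arange(0.3, 1.2, 0.005)
P, Q = np.meshgrid(p, q)
D = 9*P - 9*Q - P**2
J = np.where(D > 0, (P*Q + 6*Q + 9 - P) / (2*Q*D), np.inf)

k = np.unravel_index(np.argmin(J), J.shape)
p_opt, q_opt, J_opt = P[k], Q[k], J[k]
print(p_opt, q_opt, J_opt)          # close to the trial solution p = 4.75, q = 0.625
```

The surface is quite flat near the optimum, which is consistent with the graphical interpolation reported by Harriott.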

[Fig. E1-8-1. Response of the control system.]

(State Variables for Continuous Systems, Chap. 3)

c(t) = c\{x(t_0),\, m(t_0, t]\} \qquad t > t_0    (3-1)

State properties

Equation (3-1) states that the future output behavior can be determined from a knowledge of (or equivalently, is a function of) the present state, and a specification of the future input signal. It is possible to show that, if certain care is taken in the mathematical definition of state, then

x(t) = x\{x(t_0),\, m(t_0, t]\} \qquad t > t_0    (3-2)

which means the future state behavior of the system also depends only on the present state and the future input. Note that the state of the system is taken as a function of time. We can give a heuristic justification of Eq. (3-2) by postulating a basic state property, which must obviously be satisfied if Eq. (3-1) is to be useful:

c\{x(t_0),\, m(t_0, t]\} = c\{x(t_1),\, m(t_1, t]\} = c(t)

where x(t₁) is the state produced at t₁ by m(t₀, t₁], beginning with the system in state x(t₀) at t₀. This equality will be true for any t₁ in the interval t₀ < t₁ < t. Allowing t₁ → t in principle yields the relationship of Eq. (3-2). In words, the equality guarantees a form of uniqueness of the output. The same output must be reached from the state x(t₁) as is reached from x(t₀), when x(t₁) and x(t₀) are related in the manner described. It follows immediately from Eq. (3-2) that

x\{x(t_0),\, m(t_0, t]\} = x(x\{x(t_0),\, m(t_0, t_1]\},\, m(t_1, t])    (3-3)

which is the second basic state property. In words, Eq. (3-3) may be interpreted as follows: The state that results from beginning the system in state x(t₀) and applying the input vector m(t₀, t] is identical to the state reached

by starting the system in state x(t₁) and applying the input signal m(t₁, t], provided that the state x(t₁) is that which would have been reached by starting from x(t₀) and applying m(t₀, t₁]. Thus, it is a uniqueness property similar to that placed on the output. Clearly, if the state concept is to be a useful description of physical systems, this sort of uniqueness must be guaranteed. We illustrate these concepts with a simple example.

EXAMPLE 3-1
Consider the single-variable system described by the input-output relation

c(t) = x(t_0) + \int_{t_0}^{t} m(\tau)\, d\tau + m(t)

It follows that x(t₀) qualifies as a state at t₀ according to the definition given in Eq. (3-1). Further, for t₀ < t₁ < t the output uniqueness condition requires

x(t_1) = x(t_0) + \int_{t_0}^{t_1} m(\tau)\, d\tau

which gives the future state behavior, and

c(t) = x(t_0) + \int_{t_0}^{t} m(\tau)\, d\tau + m(t) \qquad t > t_0

gives the future output behavior.
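The composition property of Eq. (3-3) is easy to exhibit numerically for the integrator system of Example 3-1. The sketch below assumes the input m(t) = sin t purely for illustration (the example above did not fix a particular input):

```python
import math

# Sketch (input m(t) = sin t assumed for illustration): a numerical check of
# the composition property, Eq. (3-3), for the integrator system of
# Example 3-1, whose state obeys x(t1) = x(t0) + integral of m over (t0, t1].

def m(t):
    return math.sin(t)

def next_state(x0, t0, t1, steps=20000):
    """Propagate x(t1) = x(t0) + integral from t0 to t1 of m (trapezoidal rule)."""
    h = (t1 - t0) / steps
    s = 0.5 * (m(t0) + m(t1)) + sum(m(t0 + k*h) for k in range(1, steps))
    return x0 + h * s

x0, t0, t1, t2 = 1.0, 0.0, 0.7, 2.0
direct = next_state(x0, t0, t2)                        # x(t2) reached directly from x(t0)
via_t1 = next_state(next_state(x0, t0, t1), t1, t2)    # x(t2) reached through x(t1)
print(direct, via_t1)   # the two agree, as Eq. (3-3) requires
```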


Ordinary Differential Equations

A broad class of continuous physical systems of practical interest may be described by a state vector x(t) which satisfies a system of ordinary differential equations

\frac{dx}{dt} = f[x(t), m(t), t]    (3-4)

Here, f is a vector of n functions, so that Eq. (3-4) is a vector abbreviation for a system of n ordinary differential equations, where n is the dimension of the state vector.

EXAMPLE 3-2
The system diagrammed in Fig. 1-1 is described by the n ordinary differential equations

\frac{dx_1}{dt} = m(t) - a_{n-1} x_1 - a_{n-2} x_2 - \cdots - a_1 x_{n-1} - a_0 x_n

\frac{dx_2}{dt} = x_1 \qquad \frac{dx_3}{dt} = x_2 \qquad \cdots \qquad \frac{dx_n}{dt} = x_{n-1}

Clearly, the vector

x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}

qualifies as a state vector because knowledge of x(t₀) (the initial conditions on all integrators) and m(t₀, t] completely determines the behavior of the system for t > t₀. The vector of functions corresponding to f in Eq. (3-4) is

f[x(t), m(t), t] = \begin{pmatrix} f_1[x(t), m(t), t] \\ f_2[x(t), m(t), t] \\ \vdots \\ f_n[x(t), m(t), t] \end{pmatrix}


where

f_1 = m(t) - a_{n-1} x_1 - a_{n-2} x_2 - \cdots - a_1 x_{n-1} - a_0 x_n

f_i = x_{i-1} \qquad i = 2, 3, \ldots, n

Also, for this system the output is related to the state and input as follows:

c(t) = b_n m(t) + (b_{n-1} - b_n a_{n-1}) x_1(t) + (b_{n-2} - b_n a_{n-2}) x_2(t) + \cdots + (b_1 - b_n a_1) x_{n-1}(t) + (b_0 - b_n a_0) x_n(t)    (E3-2-1)

As in Examples 3-1 and 3-2, the output vector is usually expressed as a function of the present state vector and input signal vector,

c(t) = c[x(t), m(t)]    (3-5)

rather than in the form of Eq. (3-1). Equations (3-2) and (3-5) imply Eq. (3-1). It is shown in Appendix A that, for a given specification of initial state vector x(t₀), Eq. (3-4) has a unique solution, provided certain restrictions are placed on f and m(t). Therefore, state vectors x(t) resulting from the solution of Eq. (3-4) also obey the property expressed in Eq. (3-3). A major objective of this chapter will be to study from a state-vector viewpoint the behavior of systems which can be described by Eq. (3-4). Obviously, such systems are restricted to those whose states (at any instant of time) can be expressed as vectors of real numbers. Although Example 3-2 showed that linear, lumped-parameter systems do indeed meet this restriction, there are many systems which do not. Distributed-parameter systems, to be considered later in the chapter, are an example of those whose states cannot be described by vectors of real numbers. However, because the class of systems whose states can be so described is of such great practical interest, we justifiably devote the majority of this chapter (and the next) to a discussion of their dynamic state behavior. In only a few special cases can the solution x(t) to Eq. (3-4) be written analytically. Most of these special cases belong to the class for which f is a linear function, and this class will be discussed in detail. Again, this is so because a great number of practical applications involve systems which are approximately linear and lumped-parameter in nature. When the function f is nonlinear, an analytical solution is extremely rare. However, solution of Eq. (3-4) by either analog or digital computer is relatively straightforward when x(t₀) is specified. This is because, when all components of the state vector x(t) are known at the initial instant of time t₀, we have a simple initial-value problem.
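The initial-value marching just described can be sketched in a few lines (not from the text). Here a fourth-order Runge-Kutta step is applied to the scalar nonlinear equation dx/dt = −x², x(0) = 1, whose exact solution x(t) = 1/(1 + t) is available for comparison:

```python
# Sketch (not from the text): once the full initial state is given, Eq. (3-4)
# is a simple initial-value problem that can be marched forward in time.

def rk4(f, x, t, h):
    """One classical Runge-Kutta step for dx/dt = f(x, t)."""
    k1 = f(x, t)
    k2 = f(x + h*k1/2, t + h/2)
    k3 = f(x + h*k2/2, t + h/2)
    k4 = f(x + h*k3, t + h)
    return x + h*(k1 + 2*k2 + 2*k3 + k4)/6

f = lambda x, t: -x*x          # dx/dt = -x^2, exact solution 1/(1 + t)
x, t, h = 1.0, 0.0, 0.01
while t < 2.0 - 1e-12:
    x, t = rk4(f, x, t, h), t + h
print(x)   # exact value at t = 2 is 1/3
```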


Any nth-order, ordinary, nonlinear, nonhomogeneous differential equation for an unknown function c(t) may, in principle, be rewritten in the form

\frac{d^n c}{dt^n} = g\!\left(t,\, c,\, \frac{dc}{dt},\, \frac{d^2 c}{dt^2},\, \ldots,\, \frac{d^{n-1} c}{dt^{n-1}},\, m(t)\right)

where g is some function, by solving for the highest derivative. We regard this as a system with input m(t) and output c(t). A state vector x(t) may be defined for this system if we choose as components

x_i = \frac{d^{i-1} c}{dt^{i-1}} \qquad i = 1, 2, \ldots, n

Then it follows that

\frac{dx_i}{dt} = \frac{d^i c}{dt^i} = x_{i+1} \qquad i = 1, 2, \ldots, n-1

\frac{dx_n}{dt} = g(t, x_1, x_2, \ldots, x_n, m) = g(t, x, m)

so that we obtain Eq. (3-4), with the components of the vector function f defined by the relations

f_i = x_{i+1} \qquad i = 1, 2, \ldots, n-1

f_n = g

This discussion shows how general Eq. (3-4) actually is. Note that the relation between output and state is c(t) = x₁(t), and that knowledge of c(t₀) and m(t₀, t] will not enable prediction of c(t). The initial derivatives, which are the other components of x(t₀), are also required.

EXAMPLE 3-3
Consider the van der Pol equation

\frac{d^2 c}{dt^2} + \mu (c^2 - 1) \frac{dc}{dt} + c = 0

where μ is a positive constant. To write this in the form of Eq. (3-4), let

x_1 = c \qquad \text{and} \qquad x_2 = \frac{dc}{dt}

Then,

\frac{dx_1}{dt} = x_2 \qquad \text{and} \qquad \frac{dx_2}{dt} = -x_1 - \mu (x_1^2 - 1)\, x_2

The state vector so defined is

x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}

and the output is given by c(t) = x₁(t). It is known that the state behavior for this equation is ultimately periodic,² but the solution cannot be written analytically.
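Although no analytical solution exists, the state form just derived integrates readily. The sketch below (the value μ = 1 is an assumption, not from the text) shows the ultimately periodic behavior: the trajectory settles onto a limit cycle whose amplitude is close to 2.

```python
# Sketch (mu = 1 assumed): integrating the van der Pol state equations of
# Example 3-3 exhibits the ultimately periodic (limit-cycle) behavior.

def f(x1, x2, mu=1.0):
    return x2, -x1 - mu*(x1*x1 - 1.0)*x2

def step(x1, x2, h):
    """One fourth-order Runge-Kutta step for the two state equations."""
    a1, b1 = f(x1, x2)
    a2, b2 = f(x1 + h*a1/2, x2 + h*b1/2)
    a3, b3 = f(x1 + h*a2/2, x2 + h*b2/2)
    a4, b4 = f(x1 + h*a3, x2 + h*b3)
    return (x1 + h*(a1 + 2*a2 + 2*a3 + a4)/6,
            x2 + h*(b1 + 2*b2 + 2*b3 + b4)/6)

x1, x2, h = 0.1, 0.0, 0.01       # start near the unstable equilibrium
peaks = []
for k in range(6000):             # 60 time units; the transient dies out early
    x1, x2 = step(x1, x2, h)
    if k >= 4000:                 # keep only the settled portion
        peaks.append(abs(x1))
print(max(peaks))                 # limit-cycle amplitude, close to 2
```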


It should be emphasized that there is no unique state vector to describe a given system. That is, for any system there may be a number of different ways to define a vector x(t) which will satisfy the state properties given in the beginning of this chapter. We shall see examples of this later. In the next sections we devote considerable attention to the special case of Eq. (3-4) where the function f is linear. These systems are extremely important for applications, and a relatively complete analysis can be conducted.

Linear Systems with Constant Coefficients

In Chapter 1 we studied linear, stationary, multivariable systems from the transfer function approach. The general state-variable equations for these systems are

\frac{dx}{dt} = A x(t) + B m(t)    (3-6)

c(t) = G x(t) + H m(t)    (3-7)

If there are r input signals m₁(t), …, m_r(t), p output signals c₁(t), …, c_p(t), and n state variables x₁(t), …, x_n(t), then the constant matrix A has dimension n × n, B has dimension n × r, G has dimension p × n, and H has dimension p × r. These are special forms of Eqs. (3-4) and (3-5). Because t does not appear as an argument of f, except through x(t) and m(t), Eq. (3-6) is termed stationary or autonomous.

EXAMPLE 3-4
(a) For the system discussed in Example 3-2, Eq. (3-6) results if we define the matrices:

A = \begin{pmatrix}
-a_{n-1} & -a_{n-2} & -a_{n-3} & \cdots & -a_1 & -a_0 \\
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & & & & & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix}
\qquad
B = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}

G = \left( [b_{n-1} - b_n a_{n-1}] \;\; [b_{n-2} - b_n a_{n-2}] \; \cdots \; [b_0 - b_n a_0] \right) \qquad H = b_n \qquad m(t) = m(t)

This is a single-variable case of Eqs. (3-6) and (3-7). Thus, these equations can describe the system in Fig. 1-1.

(b) The multivariable linear system of Eq. (1-24), with transfer function matrix G(s) whose elements are of the form 1/(s + aᵢ), may be represented by Eqs. (3-6) and (3-7) with

A = \begin{pmatrix}
-a_1 & 0 & 0 & 0 \\
0 & -a_2 & 0 & 0 \\
0 & 0 & -a_3 & 0 \\
0 & 0 & 0 & -a_4
\end{pmatrix}

and appropriate constant matrices B, G, and H. These examples show the importance and generality of Eqs. (3-6) and (3-7), to which we will now devote considerable attention.

Solution of the homogeneous case

Before solving Eq. (3-6) in its most general form, it is convenient to consider the homogeneous case (m = 0)

\frac{dx}{dt} = A x    (3-8)

Equation (3-8) is also called the equation of the free or unforced system. The solution to Eq. (3-8) is

x(t) = e^{A(t - t_0)}\, x(t_0)    (3-9)

where the exponential matrix is defined by the infinite series

e^{At} \equiv I + At + A^2 \frac{t^2}{2!} + A^3 \frac{t^3}{3!} + \cdots    (3-10)


Equation (3-9) is easily verified as the solution to Eq. (3-8) by substitution and termwise differentiation, using the definition in Eq. (3-10). The existence of the infinite series follows from the Sylvester expansion theorem discussed later. Also note that Eq. (3-9) is correct at t = t₀. We define the state transition matrix for Eq. (3-8) by

\Phi(t - t_0) \equiv e^{A(t - t_0)}    (3-11)

so that the solution Eq. (3-9) reads

x(t) = \Phi(t - t_0)\, x(t_0)    (3-12)

When the n eigenvalues λᵢ of A are distinct, the Sylvester expansion theorem gives, for a function f expressible as a power series,

f(A) = \sum_{i=1}^{n} f(\lambda_i) \prod_{\substack{j=1 \\ j \ne i}}^{n} \frac{A - \lambda_j I}{\lambda_i - \lambda_j}    (3-13)

where here f(A) = Φ(t − t₀) = e^{A(t−t₀)} and f(λᵢ) = e^{λᵢ(t−t₀)}. The advantage of Eq. (3-13) over Eq. (3-10) is that the former requires evaluation of only a finite number of terms. The basic reason why Eq. (3-10), an infinite series, may be reduced to Eq. (3-13), a finite series, is the Cayley-Hamilton theorem, which may be interpreted to state that only the first n − 1 powers of an n × n matrix are linearly independent; in other words, all higher powers of A may be expressed in terms of I, A, A², …, A^{n−1}.

EXAMPLE 3-5

Solve the differential equation

\frac{d^2 c}{dt^2} - \beta^2 c = 0

using the transition matrix approach.

Using the technique illustrated in Example 3-3, this equation is equivalent to Eqs. (3-6) and (3-7) with

A = \begin{pmatrix} 0 & 1 \\ \beta^2 & 0 \end{pmatrix} \qquad B = 0 \qquad G = (1 \;\; 0) \qquad H = 0

The state vector x(t) is defined as

x(t) = \begin{pmatrix} c(t) \\[2pt] \dfrac{dc}{dt} \end{pmatrix}

To find the eigenvalues λ₁ and λ₂ of A, set the determinant of A − λI to zero,

|A - \lambda I| = 0

which leads to λ² − β² = 0. Hence the eigenvalues are λ₁ = β, λ₂ = −β. We now use Eq. (3-13):

e^{A(t - t_0)} = e^{\beta(t - t_0)}\, \frac{A + \beta I}{2\beta} - e^{-\beta(t - t_0)}\, \frac{A - \beta I}{2\beta} = \frac{A}{\beta} \sinh \beta(t - t_0) + I \cosh \beta(t - t_0)

Thus,

\Phi(t - t_0) = \begin{pmatrix} \cosh \beta(t - t_0) & \dfrac{\sinh \beta(t - t_0)}{\beta} \\[6pt] \beta \sinh \beta(t - t_0) & \cosh \beta(t - t_0) \end{pmatrix}

This knowledge of Φ(t − t₀) enables us to write the general solution to the original differential equation through use of Eqs. (3-7) and (3-12) and the matrix G:

c(t) = x_1(t_0) \cosh \beta(t - t_0) + \frac{x_2(t_0)}{\beta} \sinh \beta(t - t_0)

From this [as well as from the definition of the state vector x(t)] it follows that

c(t_0) = x_1(t_0) \qquad \frac{dc}{dt}(t_0) = x_2(t_0)

so that the solution can be expressed in terms of the initial conditions on c(t) as well as those on x(t).
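As a numerical cross-check of Example 3-5 (the value β = 0.5 is an arbitrary assumption, not from the text), the closed form can be compared against the defining series of Eq. (3-10):

```python
import numpy as np

# Check (beta = 0.5 assumed) that the closed form of Example 3-5 agrees with
# the defining series e^{At} = I + At + (At)^2/2! + ... of Eq. (3-10).

beta, t = 0.5, 1.3
A = np.array([[0.0, 1.0], [beta**2, 0.0]])

expA = np.eye(2)                     # partial sums of the series
term = np.eye(2)
for k in range(1, 30):               # terms vanish rapidly; 30 is ample here
    term = term @ (A * t) / k
    expA = expA + term

phi = np.array([[np.cosh(beta*t),       np.sinh(beta*t)/beta],
                [beta*np.sinh(beta*t),  np.cosh(beta*t)]])
print(np.abs(expA - phi).max())      # agreement to round-off
```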


A second method for calculating Φ(t − t₀) involves the use of the Laplace transform. Application of the Laplace transform to Eq. (3-8) yields

s X(s) - x(0^+) = A X(s)

where X(s) is the vector of Laplace transforms of the xᵢ(t). It is necessary to use t₀ = 0 because of the definition of the Laplace transform, which leads to the appearance of the quantity x(0⁺) in the transform of the derivative. Solving this equation for X(s) yields

X(s) = (sI - A)^{-1}\, x(0^+)

If we take the inverse transform of this expression, we obtain an equation for x(t):

x(t) = L^{-1}\{(sI - A)^{-1}\}\, x(0^+)

where L⁻¹{ } denotes the inverse Laplace transform operation on every element of the matrix. Comparison of this with Eq. (3-12) shows that

\Phi(t) = L^{-1}\{(sI - A)^{-1}\}    (3-14)

and Φ(t − t₀) may be obtained by substitution of t − t₀ for t in the inverse transform.

EXAMPLE 3-6
Rework Example 3-5 using Eq. (3-14). We obtain

(sI - A) = \begin{pmatrix} s & -1 \\ -\beta^2 & s \end{pmatrix}

Inverting this matrix, we have

(sI - A)^{-1} = \frac{1}{s^2 - \beta^2} \begin{pmatrix} s & 1 \\ \beta^2 & s \end{pmatrix}

Taking the inverse transform of each element gives

\Phi(t) = \begin{pmatrix} \cosh \beta t & \dfrac{\sinh \beta t}{\beta} \\[6pt] \beta \sinh \beta t & \cosh \beta t \end{pmatrix}

and substituting t − t₀ for t gives the identical result obtained in Example 3-5.

A third method for calculating Φ(t − t₀) is derived when we observe that Φ(t − t₀) also satisfies Eq. (3-8), that is

\frac{d\Phi}{dt} = A \Phi    (3-15)


as we may verify by substituting Eq. (3-11) into Eq. (3-8). Equation (3-15) may be interpreted by columns. That is, Φ(t − t₀) may be written as a row vector of columns

\Phi = \left( \phi^{(1)} \;\; \phi^{(2)} \; \cdots \; \phi^{(n)} \right)

where φ⁽ⁱ⁾ is the column vector whose components make up the ith column of the matrix Φ(t − t₀). Then, Eq. (3-15) implies the n vector equations

\frac{d\phi^{(i)}}{dt} = A \phi^{(i)} \qquad i = 1, 2, \ldots, n    (3-16)

Furthermore, since Eq. (3-11) shows that

\Phi(0) = I

it must be true that φ⁽ⁱ⁾(0) is the ith column of the unit matrix. Thus the columns of the transition matrix may be computed by solving Eq. (3-16) n times, once for each of these initial conditions.

For the nonhomogeneous equation (3-6), we seek a solution of the form x(t) = Φ(t − t₀) y(t). Substitution into Eq. (3-6) and integration give

y(t) = x(t_0) + \int_{t_0}^{t} \Phi^{-1}(\tau - t_0)\, B m(\tau)\, d\tau    (3-19)

while the definition of y(t) gives

y(t) = \Phi^{-1}(t - t_0)\, x(t)    (3-20)

so that Eq. (3-19) may be rewritten in terms of x(t):

\Phi^{-1}(t - t_0)\, x(t) = x(t_0) + \int_{t_0}^{t} \Phi^{-1}(\tau - t_0)\, B m(\tau)\, d\tau

Multiplying this by the transition matrix Φ(t − t₀) results in

x(t) = \Phi(t - t_0)\, x(t_0) + \Phi(t - t_0) \int_{t_0}^{t} \Phi^{-1}(\tau - t_0)\, B m(\tau)\, d\tau    (3-21)

From Eq. (3-11) and the properties of the exponential matrix, it may be shown that

\Phi^{-1}(t - t_0) = e^{-A(t - t_0)} = \Phi(t_0 - t)

This expression will be proved later in conjunction with Eq. (3-34). Accepting it for now as an intuitive property of the exponential matrix, and also accepting Eq. (3-36), we can reduce Eq. (3-21) to

x(t) = e^{A(t - t_0)}\, x(t_0) + \int_{t_0}^{t} e^{A(t - \tau)}\, B m(\tau)\, d\tau = \Phi(t - t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t - \tau)\, B m(\tau)\, d\tau    (3-22)

which is the general solution to Eq. (3-6). The integral in Eq. (3-22) is the contribution of the nonhomogeneous term in Eq. (3-6) to the general solution. The first term on the right-hand side of Eq. (3-22) is already recognized as the zero-input response. The integral term is called the zero-state response, since it is the response which results when x(t₀) = 0 is chosen as the initial state. In our discussion of Fig. 1-1, we asserted that the transfer function of Eq. (1-6) describes the zero-state behavior of this linear system. (See Problem 1-1 for a proof.) We may now relate this to Eq. (3-22). From Example 3-4 we know the matrices G and H necessary to have Eq. (3-7) describe the output behavior of the system. We simply set x(t₀) = 0 and substitute Eq. (3-22) for x(t) into Eq. (3-7). The resulting expression for c(t) must be identical to that obtained by inverting the transfer function expression in Eq. (1-6), because setting x(t₀) = 0 in Eq. (3-22) implies that m(t) = 0 will produce a state behavior x(t) = 0 which, according to Eq. (3-7), produces an output behavior c(t) = 0. Thus, we see the relation between the transfer function and the more general relation of Eq. (3-22). The transfer function gives only the zero-state response. Note that Eq. (3-22) is a specific form of the more general relation given in Eq. (3-2). That is, Eq. (3-22) expresses the state-transition behavior for the state vector described by the differential equation, Eq. (3-6). The expression for the output vector c(t) follows when we substitute Eq. (3-22) into Eq. (3-7). Without writing out the relationship explicitly, it is clear that Eq. (3-1) is satisfied, and that we have a valid state-transition relationship, one which enables prediction of the entire future output behavior from a specification of an initial state and an input signal behavior. Equations (3-22) and (3-7) therefore combine to produce a key result, one which will be used frequently in the sequel, because Eqs. (3-6) and (3-7) are so important in applications.

EXAMPLE 3-8
Find the general solution of

\frac{d^2 c}{dt^2} = \beta^2 c + m(t)

From Example 3-5, it follows that a state description for this equation is obtained from Eqs. (3-6) and (3-7) using A, G, and H as before, and changing B to

B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

Then from Example 3-5,

e^{A(t - \tau)} = \begin{pmatrix} \cosh \beta(t - \tau) & \dfrac{\sinh \beta(t - \tau)}{\beta} \\[6pt] \beta \sinh \beta(t - \tau) & \cosh \beta(t - \tau) \end{pmatrix}

Hence Eq. (3-22) yields

x(t) = e^{A(t - t_0)}\, x(t_0) + \int_{t_0}^{t} \begin{pmatrix} \dfrac{\sinh \beta(t - \tau)}{\beta} \\[6pt] \cosh \beta(t - \tau) \end{pmatrix} m(\tau)\, d\tau

and, combining with Eq. (3-7), we obtain

c(t) = x_1(t) = x_1(t_0) \cosh \beta(t - t_0) + \frac{x_2(t_0)}{\beta} \sinh \beta(t - t_0) + \frac{1}{\beta} \int_{t_0}^{t} m(\tau) \sinh \beta(t - \tau)\, d\tau

which is the general solution. This is of the general form expressed in Eq. (3-1). The relations between initial conditions are x₁(t₀) = c(t₀), x₂(t₀) = c′(t₀).

Diagonalization

We consider again Eq. (3-6). If the matrix A has distinct eigenvalues, then it is known that there exists a (not necessarily unique) nonsingular matrix P such that

P^{-1} A P = \Lambda = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix}

where λᵢ are the eigenvalues of A. The matrix A is said to be diagonalized by the similarity transformation P⁻¹AP. A suitable diagonalizing matrix P has columns which are the eigenvectors of A.³ We can take advantage of this diagonalization property to uncouple the system of equations expressed in Eq. (3-8). To do this, we define a new vector y(t) by the relation

x(t) = P\, y(t)

Then, differentiating,

\frac{dx}{dt} = P \frac{dy}{dt} = A x = A P y

Solving for dy/dt yields

\frac{dy}{dt} = P^{-1} A P\, y = \Lambda\, y    (3-23)

The advantage of Eq. (3-23) over Eq. (3-8) is that the diagonal form of Eq. (3-23) completely eliminates coupling between the equations describing the individual components of the transformed state vector y(t). (At this point it would be worthwhile to restate that there is no unique state vector to describe a given dynamic system. Many possible choices of state vector exist, and any vector which satisfies all the properties of a state may be used.) The vector y(t) therefore has the particularly convenient expression

y(t) = \begin{pmatrix}
e^{\lambda_1 (t - t_0)} & 0 & \cdots & 0 \\
0 & e^{\lambda_2 (t - t_0)} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & e^{\lambda_n (t - t_0)}
\end{pmatrix} y(t_0)    (3-24)

which describes its transition behavior. The difficulty in using the state vector y(t) is that one must first solve for the characteristic values λᵢ. In addition, the matrix P is required if one wishes to transform back to the state vector x(t). That is, in terms of x(t), Eq. (3-24) is

x(t) = P\, e^{\Lambda(t - t_0)}\, P^{-1}\, x(t_0)    (3-25)

where e^{Λ(t−t₀)} is identical to the diagonal matrix in Eq. (3-24) by virtue of Eq. (3-10). Note that Eq. (3-25) gives an alternative expression for calculating the transition matrix Φ(t − t₀), by comparison with Eq. (3-12):

\Phi(t - t_0) = P\, e^{\Lambda(t - t_0)}\, P^{-1}    (3-26)

For linear systems whose coefficients vary with time, the homogeneous state equation becomes

\frac{dx}{dt} = A(t)\, x(t)    (3-27)

The transition matrix, now written with two arguments as Φ(t; t₀), is defined as the solution of

\frac{d\Phi}{dt} = A(t)\, \Phi    (3-28)

with the initial condition

\Phi(t_0; t_0) = I    (3-29)

It follows that the vector

x(t) = \Phi(t; t_0)\, x(t_0)    (3-30)


satisfies Eq. (3-27). Therefore, the quantity Φ(t; t₀) is a transition matrix for the system of Eq. (3-27). An expression for the transition matrix Φ(t; t₀) can be found by repeated integration of Eq. (3-27). Thus, integration of both sides of Eq. (3-27) yields

x(t) = x(t_0) + \int_{t_0}^{t} A(\tau_1)\, x(\tau_1)\, d\tau_1    (3-31)

There are two ways to proceed from Eq. (3-31). The first involves substitution of Eq. (3-31) into the right-hand side of Eq. (3-27), and then integrating the result to obtain

x(t) = x(t_0) + \int_{t_0}^{t} A(\tau_1) \left[ x(t_0) + \int_{t_0}^{\tau_1} A(\tau_2)\, x(\tau_2)\, d\tau_2 \right] d\tau_1    (3-32)

Equation (3-32) is then substituted into the right-hand side of Eq. (3-27), and the result is integrated. If the process is repeated indefinitely and the result compared with Eq. (3-30), the following infinite series is obtained:

\Phi(t; t_0) = I + \int_{t_0}^{t} A(\tau_1)\, d\tau_1 + \int_{t_0}^{t} A(\tau_1)\, d\tau_1 \int_{t_0}^{\tau_1} A(\tau_2)\, d\tau_2 + \cdots    (3-33)

The series in Eq. (3-33) for the transition matrix Φ(t; t₀) is called the matrizant. Amundson⁶ proves that, if the elements of A(t) remain bounded between t₀ and t, the series converges absolutely and uniformly to a solution of Eq. (3-28). An alternate method for arriving at Eq. (3-33) is to consider Eq. (3-31) as an integral equation for x(t). The integral equation is solved by repeated substitution of the right-hand side into the integral. In addition to Eq. (3-29), the following must be satisfied by Φ(t; t₀):

\Phi(t_2; t_0) = \Phi(t_2; t_1)\, \Phi(t_1; t_0)    (3-34)

\Phi(t; t_0) = \Phi^{-1}(t_0; t)    (3-35)

The property given in Eq. (3-34) follows directly from the definition of state. Thus, since

x(t_2) = \Phi(t_2; t_0)\, x(t_0) = \Phi(t_2; t_1)\, x(t_1) = \Phi(t_2; t_1)\, \Phi(t_1; t_0)\, x(t_0)

Eq. (3-34) follows immediately. Equation (3-35) follows from Eqs. (3-29) and (3-34) if we set t₂ = t₀ and t₁ = t. A proof that the transition matrix is nonsingular, guaranteeing that Φ⁻¹(t₀; t) exists for all t, may be found in Athans and Falb.⁷ The reader should similarly verify that, for the stationary case considered in previous sections,

\Phi(t - t_0) = \Phi(t - \tau)\, \Phi(\tau - t_0)    (3-36)

from which it follows (using t₀ = t, τ = 0) that

\Phi^{-1}(t) = \Phi(-t)    (3-37)


This verifies our earlier assertion regarding the inverse of exp(At). Note that there are two arguments, t and t₀, of the transition matrix in the time-varying case, while there is only one argument t − t₀ in the stationary case.

EXAMPLE 3-10
The transition matrix for Eq. (3-27) with

A(t) = \begin{pmatrix} 0 & 1 \\[4pt] -\dfrac{6}{t^2} & \dfrac{4}{t} \end{pmatrix}

is

\Phi(t; t_0) = \begin{pmatrix}
3\left(\dfrac{t}{t_0}\right)^2 - 2\left(\dfrac{t}{t_0}\right)^3 & t_0 \left[ \left(\dfrac{t}{t_0}\right)^3 - \left(\dfrac{t}{t_0}\right)^2 \right] \\[8pt]
\dfrac{6}{t_0} \left[ \dfrac{t}{t_0} - \left(\dfrac{t}{t_0}\right)^2 \right] & 3\left(\dfrac{t}{t_0}\right)^2 - 2\,\dfrac{t}{t_0}
\end{pmatrix}

as may be verified by substitution into Eqs. (3-28) and (3-29).
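A numerical check (a sketch, not from the text): integrate Eq. (3-28) with the A(t) of Example 3-10, starting from the initial condition of Eq. (3-29), and compare with the closed-form transition matrix.

```python
import numpy as np

# Check that the Example 3-10 transition matrix satisfies
# dPhi/dt = A(t) Phi, Phi(t0; t0) = I, by Runge-Kutta integration.

def A(t):
    return np.array([[0.0, 1.0], [-6.0/t**2, 4.0/t]])

def phi_closed(t, t0):
    r = t / t0
    return np.array([[3*r**2 - 2*r**3,      t0*(r**3 - r**2)],
                     [(6.0/t0)*(r - r**2),  3*r**2 - 2*r]])

t0, t1, n = 1.0, 2.0, 20000
h = (t1 - t0) / n
Phi, t = np.eye(2), t0                   # Phi(t0; t0) = I, Eq. (3-29)
for _ in range(n):                       # classical RK4 on the matrix equation
    k1 = A(t) @ Phi
    k2 = A(t + h/2) @ (Phi + h*k1/2)
    k3 = A(t + h/2) @ (Phi + h*k2/2)
    k4 = A(t + h) @ (Phi + h*k3)
    Phi = Phi + h*(k1 + 2*k2 + 2*k3 + k4)/6
    t += h
print(np.abs(Phi - phi_closed(t1, t0)).max())   # near round-off
```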

The adjoint system. Define the adjoint matrix Ψ(t; t₀) as the solution of

\frac{d\Psi(t; t_0)}{dt} = -\Psi(t; t_0)\, A(t)    (3-38)

with

\Psi(t_0; t_0) = I    (3-39)

Multiply Eq. (3-38) by Φ(t; t₀) on the right, multiply Eq. (3-28) by Ψ(t; t₀) on the left, and then add to get

\frac{d}{dt}\left[ \Psi(t; t_0)\, \Phi(t; t_0) \right] = 0

which has the solution ΨΦ = constant. From Eqs. (3-39) and (3-29) it follows that the constant is the unit matrix I; hence

\Psi(t; t_0) = \Phi^{-1}(t; t_0)    (3-40)

This can be used with Eq. (3-35) to obtain

\Phi(t_0; t) = \Psi(t; t_0)    (3-41)

Equation (3-41) shows that solution of the adjoint equation gives the variation of Φ with its second argument. The importance of this information will be demonstrated in the next section. Equation (3-38) can be transformed into a vector equation as follows. We take the transpose of both sides to obtain:

\frac{d\Psi^T(t; t_0)}{dt} = -A^T(t)\, \Psi^T(t; t_0)    (3-42)


Examining Eqs. (3-27), (3-28), and (3-42), we can see that Ψᵀ(t; t₀) is the transition matrix for a dynamic system with the homogeneous state-vector equation

\frac{dx}{dt} = -A^T(t)\, x(t)    (3-43)

Equation (3-43) is called the adjoint of Eq. (3-27). The entire discussion of adjoints also applies to the special case A(t) = A, a constant matrix.

EXAMPLE 3-11

In Example 3-10, interchange t₀ and t to obtain

\Phi(t_0; t) = \begin{pmatrix}
3\left(\dfrac{t_0}{t}\right)^2 - 2\left(\dfrac{t_0}{t}\right)^3 & t \left[ \left(\dfrac{t_0}{t}\right)^3 - \left(\dfrac{t_0}{t}\right)^2 \right] \\[8pt]
\dfrac{6}{t} \left[ \dfrac{t_0}{t} - \left(\dfrac{t_0}{t}\right)^2 \right] & 3\left(\dfrac{t_0}{t}\right)^2 - 2\,\dfrac{t_0}{t}
\end{pmatrix}

whose transpose, Ψᵀ(t; t₀) = Φᵀ(t₀; t), is the transition matrix for the adjoint system of equations

\frac{dx}{dt} = \begin{pmatrix} 0 & \dfrac{6}{t^2} \\[4pt] -1 & -\dfrac{4}{t} \end{pmatrix} x(t)

The transition matrix Ψ(t; t₀) can be shown, by direct substitution, to satisfy Eqs. (3-29), (3-34), (3-35), and (3-43).

Nonhomogeneous case

Consider the general, time-varying, linear state equations

\frac{dx}{dt} = A(t)\, x(t) + B(t)\, m(t)    (3-44)

c(t) = G(t)\, x(t) + H(t)\, m(t)    (3-45)

We proceed to solve Eq. (3-44) exactly as in the case where the matrices A and B are constant. Thus, we define

x(t) = \Phi(t; t_0)\, y(t)

where Φ(t; t₀) is the transition matrix for the homogeneous case, Eq. (3-27), and we proceed as in the constant-coefficient case to obtain the following equivalent of Eq. (3-21)

x(t) = \Phi(t; t_0)\, x(t_0) + \Phi(t; t_0) \int_{t_0}^{t} \Phi^{-1}(\tau; t_0)\, B(\tau)\, m(\tau)\, d\tau

Equations (3-35) and (3-34) may be used to transform this to

x(t) = \Phi(t; t_0)\, x(t_0) + \int_{t_0}^{t} \Phi(t; \tau)\, B(\tau)\, m(\tau)\, d\tau    (3-46)

Controllability and Observability

Equation (3-46) is the general solution to Eq. (3-44).

Φ(t₀ − t₂) = Φ⁻¹(t₂ − t₀) = (A²)⁻¹ = (A⁻¹)²

Therefore the system is not controllable. Compare this result with that for Example 3-12. If we change B to a different single-column matrix b, the resulting vectors Φ(t₀ − t₁)b and Φ(t₀ − t₂)b are linearly independent, and this system is therefore controllable.
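The independence test being applied above can be sketched numerically. The matrices below are illustrative stand-ins (the specific matrices of Example 3-12 do not survive in this text), but they show the mechanism: a single-input second-order pair (A, b) is controllable exactly when b and Ab are linearly independent, and it fails when b is an eigenvector of A:

```python
# Controllability test for a single-input 2nd-order system: the columns
# b and A b must be linearly independent.  Matrices are illustrative
# stand-ins, not the ones from Example 3-12.

def controllable_2x2(A, b):
    # columns of the controllability matrix [b  Ab]
    ab = [A[0][0] * b[0] + A[0][1] * b[1],
          A[1][0] * b[0] + A[1][1] * b[1]]
    det = b[0] * ab[1] - b[1] * ab[0]   # zero det => dependent columns
    return abs(det) > 1e-12

A = [[0.0, 1.0], [-2.0, -3.0]]
b_bad = [1.0, -1.0]    # an eigenvector of A: the input never leaves span{b}
b_good = [0.0, 1.0]

# controllable_2x2(A, b_bad) is False, controllable_2x2(A, b_good) is True
```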

Sampled-Data Systems

General theory

A system which is continuous, but whose output is examined intermittently to determine a suitable input, is called a sampled-data system. We consider here a class of these systems which is most readily analyzed by the discrete-system theory just developed. The linear, stationary system state behavior is described by Eq. (3-6)

dx/dt = Ax + Bm    (3-6)

The output c(t) of this system is sampled at the equally spaced sampling instants t₀, t₁, t₂, …, and based on the sample at t_k, for example, a constant m(t_k) is computed and applied over the entire next sampling interval. In equation form, the input is the stepped signal

m(t) = m(t_k)    t_k ≤ t < t_{k+1}

a₁₁ + a₂₂ < 0

a₁₁a₂₂ − a₁₂a₂₁ > 0
which are the conditions for stability. To find the eigenvalues, we write the characteristic equation of the matrix A

| a₁₁ − λ      a₁₂    |
|                     |  =  0
|  a₂₁      a₂₂ − λ   |


which yields

λ² − (a₁₁ + a₂₂)λ + (a₁₁a₂₂ − a₁₂a₂₁) = 0

The conditions required for the roots λ to have negative real parts are clearly the same as those above. Finally, consider the system

d²c/dt² + a (dc/dt) + b c = 0

A state-variable representation of this is obtained, as usual, with

A = (  0    1 )
    ( −b   −a )

in Eq. (3-8). Then the stability conditions derived before require

b > 0
a > 0

which are the same conditions obtained by classical stability analysis.
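The trace and determinant conditions can be checked directly against the eigenvalues. The sketch below does this for the companion matrix of the second-order equation above (the sample (a, b) pairs are chosen for illustration):

```python
import cmath

# For d2c/dt2 + a*dc/dt + b*c = 0, the companion matrix is
# A = [[0, 1], [-b, -a]]; stability requires trace(A) < 0 and det(A) > 0,
# i.e. a > 0 and b > 0.  We confirm against the actual eigenvalues.

def stable_by_trace_det(a, b):
    trace, det = -a, b
    return trace < 0 and det > 0

def stable_by_eigenvalues(a, b):
    # roots of lambda**2 + a*lambda + b = 0
    disc = cmath.sqrt(a * a - 4 * b)
    roots = ((-a + disc) / 2, (-a - disc) / 2)
    return all(r.real < 0 for r in roots)

cases = [(1.0, 2.0), (3.0, 0.5), (-1.0, 2.0), (1.0, -2.0)]
agree = all(stable_by_trace_det(a, b) == stable_by_eigenvalues(a, b)
            for a, b in cases)
# agree is True: both tests classify every case identically
```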

Linear Approximation Theorems

An important practical technique used in analyzing nonlinear systems is that of linearization. The analyst represents the system by a suitable linear approximation and then investigates the stability of the linear approximation using the methods presented in the previous section. The information which must be obtained from the theory is the extent to which the stability of the linear approximation determines the stability of the original nonlinear system.

Stability theorem

The following stability theorem is available: Consider the system of Eq. (5-11). Define the constant matrix A and the function g(x) by

f(x) = A x + g(x)

If

dx/dt = A x

is asymptotically stable in-the-large, and if

lim_{‖x‖→0}  ‖g(x)‖ / ‖x‖ = 0    (5-18)

then Eq. (5-11) is asymptotically stable at the origin.
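The hypothesis (5-18) says that g contains no first-order terms. A quick numerical illustration of the limit, using g(x) = (−x₂², 0) (this particular g arises later, in Example 5-9; here it simply demonstrates the behavior of the ratio):

```python
# Check that ||g(x)|| / ||x|| -> 0 as ||x|| -> 0 for g(x) = (-x2**2, 0).

def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

def g(x):
    return (-x[1] ** 2, 0.0)

ratios = []
x = (1.0, 1.0)
for _ in range(6):
    ratios.append(norm(g(x)) / norm(x))
    x = (x[0] / 10, x[1] / 10)   # shrink ||x|| by a factor of 10 each step

# each ratio is 10 times smaller than the previous one, so the limit is 0
```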


Two things should be noted about this theorem. First, only asymptotic stability, and not asymptotic stability in-the-large, is guaranteed for the nonlinear system. Second, the usual technique for defining A and g(x) when f(x) is analytic (infinitely differentiable) is linearization in a Taylor series expansion about the origin:

A = [∂f/∂x]_{x=0}    (5-19)

where

∂f/∂x = ( ∂f₁/∂x₁   ∂f₁/∂x₂   ⋯   ∂f₁/∂xₙ )
        ( ∂f₂/∂x₁   ∂f₂/∂x₂   ⋯   ∂f₂/∂xₙ )    (5-20)
        (    ⋮          ⋮               ⋮   )
        ( ∂fₙ/∂x₁   ∂fₙ/∂x₂   ⋯   ∂fₙ/∂xₙ )

and the subscript on Eq. (5-19) implies that all partial derivatives are to be evaluated at the origin. Then g(x) follows immediately from g(x) = f(x) − Ax. Clearly, g(x) contains only second- or higher-order terms in the components of x, and hence Eq. (5-18) will be satisfied. The proof of the theorem is somewhat involved, but it is presented because the theorem is so important. However, the proof may be omitted without loss in continuity. We utilize the properties of the norm ‖A‖ of a matrix A as described in Appendix B. We may write the solution to Eq. (5-11) using Eq. (3-22)

x(t) = Φ(t − t₀) x(t₀) + ∫_{t₀}^{t} Φ(t − τ) g(x(τ)) dτ    (5-21)

where the transition matrix is given by

Φ(t − t₀) = e^{A(t−t₀)}

Since we are given that the linear approximation is asymptotically stable, we know ‖Φ(t − t₀)‖ will be composed of negative exponentials (see Eq. (3-13) and ensuing examples); that is, the eigenvalues have negative real parts. Let −β be the largest of the real parts of the eigenvalues. Then, clearly, there exists an α > 0 such that

‖Φ(t − t₀)‖ < α e^{−β(t−t₀)}    t > t₀    (5-22)

Other values of β > 0 may also be used to guarantee this inequality. Note that, to have this inequality hold at t = t₀, we must at least choose α > n for the norm (B-1), α > √n for the norm (B-3), and α > 1 for the


norm (B-2), where n is the order of the system. Then, from Eq. (5-21), ‖x(t)‖ can be bounded by a decaying exponential for all t > t₀, and therefore the origin is stable. Thus we have demonstrated asymptotic stability.

EXAMPLE 5-9

We here apply the linear approximation theorem for stability to the system of Example 5-7. Using Eq. (5-20) we obtain

A = ( −1   −2x₂ )         = ( −1    0 )
    (  0    −1  )_{x=0}     (  0   −1 )

Then

g = ( −x₁ − x₂² ) − Ax = ( −x₂² )
    (    −x₂    )        (   0  )

The transition matrix corresponding to A, exp A(t − t₀), is easily found by the methods of Chapter 3 to be

Φ(t − t₀) = ( e^{−(t−t₀)}       0       )
            (      0       e^{−(t−t₀)} )

so that

‖Φ(t − t₀)‖ < 2e^{−(t−t₀)}

Hence we choose α = 2 and β = 1. It is clear that no larger value of β will guarantee inequality (5-22) for all t > t₀. Further, the largest value of β is desired so that the inequality ‖g(x)‖ < βk‖x‖/αM for all ‖x‖ < η can be met with the largest possible η, and hence, the stability region ‖x(t₀)‖ < η/αM will be as large as possible. Similar reasoning shows that the smallest possible value of α is desired (which explains the choice α = 2) and that we should choose k = 1 − ε where ε > 0 is very small. Since

‖g‖ = x₂²

we must find the largest value of η guaranteeing

x₂² < βk‖x‖/αM throughout ‖x‖ < η

H′(0) and the partial derivatives ∂G/∂x₁(0, 0) and ∂G/∂x₂(0, 0) must satisfy the trace and determinant conditions derived earlier. As an example, Aris and Amundson¹¹ studied a particular set of numerical constants and the following heat removal function:

h(y₂) = (y₂ − 1)[1 + K(y₂ − 2)]

This heat removal function is generated by proportional control on the flow rate of cooling water, which is at an average dimensionless temperature of 1, using a proportional gain K. For these constants

g(y₁, y₂) = y₁ exp [50(½ − 1/y₂)]

Then, for any K, the point y₁ₛ = ½, y₂ₛ = 2 is an equilibrium state. For this equilibrium state

x₁ = y₁ − ½
x₂ = y₂ − 2

G(x₁, x₂) = (x₁ + ½) exp [25x₂/(x₂ + 2)] − ½

H(x₂) = (x₂ + ¼)(1 + Kx₂) − ¼

∂G/∂x₁ (0, 0) = 1

∂G/∂x₂ (0, 0) = 25/4

H′(0) = 1 + K/4

Therefore, the origin is asymptotically stable if

K > 9/2
K > 9

which clearly require K > 9. On the other hand, if K < 9, the system is unstable at the origin. If K = 9, no information is available.
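The linearize-then-test procedure used in these examples can be automated by finite differences. The system below is the one examined in Example 5-7, f(x) = (−x₁ − x₂², −x₂); the central-difference scheme is an illustrative numerical stand-in for the analytic Jacobian of Eq. (5-20):

```python
# Numerical linearization at the origin, then the trace/det stability test.

def f(x):
    return [-x[0] - x[1] ** 2, -x[1]]

def jacobian_at_origin(f, n=2, h=1e-6):
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = [0.0] * n; xp[j] = h
        xm = [0.0] * n; xm[j] = -h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)   # central difference
    return J

A = jacobian_at_origin(f)       # approximately [[-1, 0], [0, -1]]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
stable = trace < 0 and det > 0  # True: the origin is asymptotically stable
```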


Gura and Perlmutter 12 have used the technique illustrated in Example 5-9 to calculate stability regions for this exothermic reactor.

Krasovskii's Theorem

Perhaps the most general stability theorem available for nonlinear systems is the theorem due to Krasovskii.¹³ We first state and prove a limited form of this theorem, presented by Kalman and Bertram.³ The nonlinear system

dx/dt = f(x)    f(0) = 0

is asymptotically stable at the origin if the matrix

F(x) = ∂f/∂x + (∂f/∂x)ᵀ    (5-26)

is negative-definite for all x; furthermore,

V(x) = [f(x)]ᵀ f(x)    (5-27)

is a Lyapunov function. Note that the Jacobian matrix ∂f/∂x is defined as in Eq. (5-20), and F(x) is the sum of the Jacobian and its transpose. Note also that we shall assume f has continuous first partial derivatives. The proof consists of demonstrating that V(x) satisfies the conditions of Theorem 5-2. Thus, we compute

dV/dt = (∂V/∂x)ᵀ f(x) = 2[(∂f/∂x)ᵀ f(x)]ᵀ f(x) = [fᵀ(x)(∂f/∂x) f(x)]ᵀ + fᵀ(x)(∂f/∂x) f(x)

But, since the transpose of a scalar is the same as the scalar, we obtain

dV/dt = fᵀ(x) F(x) f(x)    (5-28)

which is negative by hypothesis. Clearly, V(0) = 0. It remains to show that V(x) > 0 for x ≠ 0. Equation (5-27) indicates that this is satisfied unless f(x) = 0 for some x ≠ 0, i.e., unless the system has another equilibrium state. To investigate this possibility, we first prove a vector identity for f(x). Consider

(d/dα) f(αx)


where α is a scalar and x is a fixed vector. Using the chain rule for differentiation, we obtain

(d/dα) f(αx) = Σ_{j=1}^{n} x_j (∂f/∂x_j)(αx)    (5-29)

When we integrate both sides of this relation with respect to α, from α = 0 to α = 1, the result is

f(x) = ∫₀¹ dα Σ_{j=1}^{n} x_j (∂f/∂x_j)(αx)    (5-30)

Now consider a vector x ≠ 0 for which f(x) = 0. Then

0 = 2xᵀ f(x) = ∫₀¹ xᵀ F(αx) x dα    (5-31)

However, by hypothesis, the integrand is negative for every value of α, and the integral must therefore be negative and cannot equal zero. This contradiction shows that a system for which F(x) defined in Eq. (5-26) is negative-definite for all x cannot have an equilibrium state other than the origin. Thus, all conditions of Theorem 5-2 are satisfied, and the system is asymptotically stable at the origin. Asymptotic stability in-the-large requires that F(x) be uniformly bounded from above by a constant, negative-definite matrix. A proof of this is given by Kalman and Bertram.³ Application of the general Krasovskii theorem is frequently hindered by the difficulty of satisfying the negative-definite condition on F(x) for all x. For example, consider again the system of Example 5-7 wherein the Lyapunov function

V(x) = (x₁ + x₂²)² + x₂²

is precisely that given by Eq. (5-27). For this system

∂f/∂x = ( −1   −2x₂ )
        (  0    −1  )

so that

F(x) = −2 ( 1    x₂ )
          ( x₂    1 )

The condition that F(x) be negative-definite is identical to the condition that −F(x) be positive-definite, which requires 1 − x₂² > 0 or −1 < x₂ < 1. These are the same conditions noted in Example 5-7. Thus, the conditions of Krasovskii's theorem are not satisfied because F(x) is not negative-definite


for all x. However, the theorem is useful when combined with Theorem 5-6 because, as in Example 5-7, it generates a possible Lyapunov function and conditions guaranteeing that dV/dt given by Eq. (5-28) will be negative. If these conditions include a region R₁, Theorem 5-6 guarantees asymptotic stability in this region. Berger and Perlmutter¹⁴ used this technique to study the stability of the nonlinear reactor of Example 5-10. We illustrate their results in the next example.

EXAMPLE 5-11

Consider again the equations (E5-10-3) and (E5-10-4) of the exothermic chemical reactor of Example 5-10:

dx₁/dt = −x₁ − G(x₁, x₂)
dx₂/dt = −x₂ + G(x₁, x₂) − H(x₂)

Then,

∂f/∂x = ( −1 − ∂G/∂x₁              −∂G/∂x₂            )
        (    ∂G/∂x₁         −1 + ∂G/∂x₂ − ∂H/∂x₂      )

and

−F = (    2(1 + ∂G/∂x₁)           ∂G/∂x₂ − ∂G/∂x₁          )
     (  ∂G/∂x₂ − ∂G/∂x₁       2(1 − ∂G/∂x₂ + ∂H/∂x₂)       )

Hence

V(x) = [x₁ + G(x₁, x₂)]² + [x₂ − G(x₁, x₂) + H(x₂)]²

will be a Lyapunov function provided the inequalities

1 + ∂G/∂x₁ > 0

4(1 + ∂G/∂x₁)(1 − ∂G/∂x₂ + ∂H/∂x₂) − (∂G/∂x₂ − ∂G/∂x₁)² > 0

are satisfied. Berger and Perlmutter examined specific numerical examples for G(x₁, x₂) and H(x₂) and used these inequalities together with contours V(x) = k to construct regions in which the reactor is guaranteed to be stable, exactly as was done in Example 5-7. Later, Luecke and McGuire¹⁵ pointed out that these inequalities are sufficient, but are necessary only if one wishes to guarantee F negative-definite for all x. Since we require that F be negative-definite only throughout


the region inside V(x) = k, these inequalities are too restrictive, and the region of asymptotic stability may be extended. The more general form of Krasovskii's theorem is stated as follows: The nonlinear system

dx/dt = f(x)    f(0) = 0

is asymptotically stable in-the-large if there exist constant, symmetric, positive-definite matrices P and Q such that the matrix

F(x) = (∂f/∂x)ᵀ P + P (∂f/∂x) + Q    (5-32)

is negative-definite for all x; furthermore,

V(x) = fᵀ P f    (5-33)

is a Lyapunov function. Note the similarity between this theorem and the theorem on stability of linear systems utilizing Eqs. (5-15) and (5-16). The proof is similar to that of the more restricted form. Thus,

dV/dt = 2 (fᵀ P ∂f/∂x) f(x) = fᵀ P (∂f/∂x) f + [fᵀ P (∂f/∂x) f]ᵀ = fᵀ F f − fᵀ Q f

Since F is negative-definite and Q positive-definite by hypothesis, it is clear that dV/dt is negative-definite. The identity in Eq. (5-29) may be rewritten in vector notation

(d/dα) f(αx) = (∂f(αx)/∂x) x

Integrating this over α from α = 0 to α = 1 gives

f(x) = ∫₀¹ (∂f(αx)/∂x) x dα

Consider some fixed x ≠ 0 for which f(x) = 0. Then

0 = xᵀ P f(x) = ∫₀¹ xᵀ P (∂f(αx)/∂x) x dα = ½ ∫₀¹ xᵀ F(αx) x dα − ½ xᵀ Q x

which is clearly a contradiction since F(αx) is negative-definite and Q is positive-definite. Hence f(x) cannot be zero unless x = 0, and V(x) > 0


for x ≠ 0. Now suppose we integrate Eq. (5-29) from α = 0 to α = β, where β is a positive constant. The result is

f(βx) = ∫₀^β (∂f(αx)/∂x) x dα

where, as before, we assume x ≠ 0. Therefore,

xᵀ P f(βx) = ½ ∫₀^β xᵀ F(αx) x dα − (β/2) xᵀ Q x

But this shows that xᵀP f(βx) → −∞ as β → ∞. Since x and P are fixed, this can happen only if ‖f(βx)‖ → ∞ as β → ∞ which, in turn, implies ‖f(x)‖ → ∞ as ‖x‖ → ∞. Finally, this guarantees that V(x) in Eq. (5-33) satisfies the additional condition of Theorem 5-3 and demonstrates asymptotic stability in-the-large. Luecke and McGuire¹⁵ show that astute choice of the matrix P, when using Krasovskii's theorem with the method of Example 5-7, can significantly enlarge the resulting stability region. In view of this theorem, it is not difficult to see why the linear approximation theorem should be valid.
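Krasovskii's condition can be checked pointwise: F(x) = J(x) + Jᵀ(x) is negative-definite exactly when −F(x) passes Sylvester's positive-definiteness test. For the Example 5-7 system f(x) = (−x₁ − x₂², −x₂), whose Jacobian is [[−1, −2x₂], [0, −1]], the sketch below recovers the region −1 < x₂ < 1 found in the text:

```python
# Pointwise negative-definiteness test for F(x) = J + J^T of Example 5-7.

def F(x2):
    J = [[-1.0, -2.0 * x2], [0.0, -1.0]]
    return [[J[i][j] + J[j][i] for j in range(2)] for i in range(2)]

def negative_definite(M):
    # M is negative-definite iff -M is positive-definite (Sylvester test)
    m = [[-M[i][j] for j in range(2)] for i in range(2)]
    return m[0][0] > 0 and (m[0][0] * m[1][1] - m[0][1] * m[1][0]) > 0

inside = all(negative_definite(F(x2)) for x2 in (-0.9, 0.0, 0.5, 0.9))
outside = any(negative_definite(F(x2)) for x2 in (-1.0, 1.0, 1.5))
# inside is True, outside is False: the condition holds only for |x2| < 1
```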

Estimation of Transients

The Lyapunov function may be used to estimate the speed with which the state of an asymptotically stable system approaches the origin. Thus, for a given Lyapunov function, we define

η = min { −(dV/dt) / V }    (5-34)

where the minimization is carried out over all values of x ≠ 0 in the region for which the system is asymptotically stable and where we assume the minimum exists. Clearly, η > 0, and it follows that

dV(x(t))/dt ≤ −η V(x(t))

For every h > 0 there exists k > 0 such that |o(ε)/ε| < h (or, equivalently, |o(ε)| < |ε| h) for all ε satisfying |ε| < k. Now, Eq. (6-25) may be written

(6-42)

where

(6-43)

We assert that it is necessary to have I(t) = 0, to avoid a negative δx₂(t_f). To prove this, suppose I(t) ≠ 0. Then we choose k > 0 such that |o(ε)| < |ε| · |I(t)|/2 for all |ε| < k, and we choose

ε = −k sgn I = −k if I > 0,  +k if I < 0

Then it follows from Eq. (6-42) that

δx₂(t_f) = −k|I| + o(ε)

The value of b can be varied as required, without invalidating Eq. (6-72). Therefore, the problem is solved with b as a parameter, and then b is chosen to satisfy

q[x(t_f)] = 0

Note that this gives as many relations as there are components of b, since q and b have the same number of components. An analogous remark applies to the determination of a in Eq. (6-66). Thus, for any set of initial and final conditions and constraints on x(t), we can choose conditions on λ*(t) which make Eq. (6-72) valid, and which provide a sufficient number of conditions to solve Eqs. (6-54) and (6-62) simultaneously. Now define the Hamiltonian function

H(x, m, λ) = λᵀ f(x, m)    (6-73)

Then,

∂H/∂m = (∂f/∂m)ᵀ λ


and Eq. (6-72) becomes

δS(x(t_f)) = ∫_{t₀}^{t_f} (∂H/∂m)ᵀ δm dτ    (6-74)

which gives the variation in the performance criterion caused by a variation in the control. Now we know from the example in the previous section that, if the final state x(t_f) is not completely free, the control variations are not arbitrary, but must satisfy a generalized version of Eq. (6-41). Nevertheless, we saw that computationally equivalent conclusions regarding the integrand of Eq. (6-25) were reached in both free and fixed cases. This equivalence is not easily demonstrated for Eq. (6-74), but it can be demonstrated.⁴ For our purposes, we assume it without proof. Then, if we regard δm(t) as arbitrary (which also implies m(t) is not constrained), it follows that

∂H/∂m = 0    (6-75)

to avoid a negative value of δS(x(t_f)). This is analogous to Eq. (6-27). Equation (6-75) states that the optimal control m*(t) must be a stationary point of the function H(x*, m, λ*). If we consider the control to be constrained

−αᵢ ≤ mᵢ(t) ≤ βᵢ    i = 1, 2, …, r    (6-76)

where −αᵢ and βᵢ are the low and high constraints on the components mᵢ(t), then δm(t) may not be arbitrary, and Eq. (6-75) does not hold. Thus, if the ith optimal control component is on its low constraint, we can only have δmᵢ(t) ≥ 0, in which case Eq. (6-74) requires

∂H/∂mᵢ ≥ 0    mᵢ* = −αᵢ    (6-77)

with the corresponding condition ∂H/∂mᵢ ≤ 0 when mᵢ* = βᵢ.

EXAMPLE 6-1

Minimize

J(m) = ∫_{t₀}^{t_f} (x₁² + σx₂² + ρm²) dt    ρ > 0,  σ > 0

for the inertial system

dx₁/dt = x₂
dx₂/dt = m

under the following different sets of conditions:

(a) x(t₀) = (x₁₀, 0)ᵀ;  x(t_f) free
(b) x(t_f) = 0
(c) x₂(t_f) = 0;  x₁(t_f) free
(d) No conditions on x(t₀) or x(t_f)
(e) αx₁(t_f) + βx₂(t_f) = 1;  α, β constants

Solution: We define the additional state variable

x₃ = ∫_{t₀}^{t} (x₁² + σx₂² + ρm²) dt

Then

dx₃/dt = x₁² + σx₂² + ρm²

so that

f(x, m) = ( x₂                 )
          ( m                  )
          ( x₁² + σx₂² + ρm²   )


Then, from Eq. (6-82),

H(x*, m, λ*) = λ₁*x₂* + λ₂*m + λ₃*(x₁*² + σx₂*² + ρm²)

and from Eq. (6-81), λ* must satisfy

dλ₁*/dt = −2λ₃*x₁*

dλ₂*/dt = −λ₁* − 2λ₃*σx₂*

dλ₃*/dt = 0

For all conditions (a)-(e),

S(x(t_f)) = x₃(t_f)

so that in Eq. (6-64a)

c = ( 0 )
    ( 0 )
    ( 1 )

and hence, from Eq. (6-70), λ₃*(t_f) = 1. This and the differential equation for dλ₃*/dt imply λ₃*(t) = 1 for all t₀ < t < t_f. To minimize H with respect to m, we differentiate and set ∂H/∂m = 0, since m is unconstrained. In other words, without constraints on m, the absolute minimum of H must be a stationary point (or else must occur for infinite m). Therefore

∂H/∂m = λ₂* + 2ρm* = 0

which yields

m*(t) = −λ₂*(t)/(2ρ)

Note that ∂²H/∂m² = 2ρ > 0 so that m*(t) is, in fact, the absolute minimum of H. The equations for the optimal system may now be reduced to

dx₁*/dt = x₂*

dx₂*/dt = −(1/2ρ)λ₂*

dλ₁*/dt = −2x₁*    (E6-1-1)

dλ₂*/dt = −λ₁* − 2σx₂*

(a) For this case, the terminal condition x*(t_f) is free. Hence we


must have λ*(t_f) = 0. Therefore, we obtain the optimal response by solving Eqs. (E6-1-1) with the conditions

x₁(t₀) = x₁₀    x₂(t₀) = 0

λ₁(t_f) = λ₁f = 0    λ₂(t_f) = λ₂f = 0

Now Eqs. (E6-1-1) are of the form of Eq. (3-8) with

A = (  0     1     0       0     )
    (  0     0     0    −1/(2ρ)  )
    ( −2     0     0       0     )
    (  0   −2σ    −1       0     )

so that the solution may be written

( x(t), λ(t) )ᵀ = e^{A(t−t₀)} ( x(t₀), λ(t₀) )ᵀ

If we knew the initial conditions λ₁₀ and λ₂₀ on λ, there would be no difficulty in solving the problem. However, with values of x specified at t₀ and values of λ specified at t_f, the problem is much more difficult to solve. We shall consider more systematic procedures for solving such problems later in the chapter. For now, we proceed by guessing λ₁₀ and λ₂₀ and observing λ₁f and λ₂f. If these are not zero, we correct λ₁₀ and λ₂₀ and continue correcting until λ₁f and λ₂f are zero. Note that, for a given guess of λ₁₀ and λ₂₀, the solution curves are easily generated since all values are known at t₀. The procedure used to generate the solution curves in Figs. E6-1-1 through E6-1-8 is as follows: The system of Eqs. (E6-1-1) was programmed on an electronic analog computer. For illustrative purposes, the parameters ρ = 1, σ = 0, and initial state x₁₀ = 0.14 and x₂₀ = 0 were chosen. Also, we take t₀ = 0 for convenience. High-speed repetitive operation was used to quickly find a range of values of λ₁₀ and λ₂₀, generating solutions giving λ₁(t) and λ₂(t) curves which approached zero reasonably simultaneously. As may be seen from Figs. E6-1-1 through E6-1-8, this range is narrow. (It is easy to show that the four eigenvalues of the matrix A satisfy the equation

p⁴ − (σ/ρ) p² + 1/ρ = 0


[Figures E6-1-1 through E6-1-8: optimal solution curves (λ₁, λ₂, x₁, x₂ versus t) for ρ = 1, σ = 0, x(t_f) free, with t_f = 1.55, 2.04, 2.3, 3.07, 3.60, 5.1, 6.0, and t_f → ∞, respectively.]

and therefore, that the solution is always unstable, as evidenced by the eventual behavior of the solutions in Figs. E6-1-1 through E6-1-8. This explains why the range of values of λ₁₀ and λ₂₀ is so small. Carelessly chosen initial values result in solutions which quickly diverge.) Returning to real-time operation, the value of λ₂₀, which is proportional to the initial value of m, was set near the low end of the thus determined range. Then, λ₁₀ was varied until an intersection between λ₁(t) and λ₂(t) occurred on the time axis. The time at the intersection is then taken as t_f, and the resulting solution curves are necessarily optimal for this final time. The value of λ₂₀ was then increased within the range, and the procedure repeated. In this way, a series of optimal solutions, for various values of t_f, may be obtained. (The alternate procedure, fixing t_f and varying λ₁₀ and λ₂₀, is much more difficult.) Figures E6-1-1 through E6-1-8 show how the optimal solution changes with t_f. Since m* = −λ₂/2, the λ₂ curves allow easy visualization of the optimal control. For small t_f, the control is always negative, increasing to zero at t_f. As t_f increases slightly, the control stays negative throughout the transient, but at t = t_f, the control approaches zero as an inflection point rather than as a minimum point. The actual t_f at which the inflection point occurs is difficult to find, but Fig. E6-1-3 indicates it to be in the range 2.1-2.5. For t_f less than this value, the final value x₁(t_f) is positive, decreasing with increasing t_f as expected. At the critical t_f the value of x₁(t_f) is zero. As t_f is increased above the critical value, the control is positive during the later portions of the transient, and the value of x₁(t_f) is negative, as shown in Figs. E6-1-4 through E6-1-7. The reader will undoubtedly notice certain inaccuracies in these figures, such as nonzero final values, etc. The author points out, however, that only minute changes in λ₁₀ and λ₂₀, particularly in λ₂₀, are involved in going from one t_f value to another. The most gentle pressure on the potentiometers fixing λ₁₀ and λ₂₀ causes significant changes in the nature of the solution curves. This must be recognized as a computational drawback of use of the minimum principle for optimization. As t_f → ∞, the response in Fig. E6-1-8 is approached. This response was generated by other methods, to be discussed later in the chapter. It is virtually impossible to find the optimal solution for t_f → ∞ by trial-and-error search on initial conditions because of the enormous sensitivity to λ₁₀ and λ₂₀. The effect of weighting the response velocity, by selecting ρ = σ = 1, is shown in Figs. E6-1-9 through E6-1-12. Comparison of these with Figs. E6-1-1 through E6-1-8 shows that the penalty in J on excess velocity results in an optimal x₁(t) which falls more slowly, as expected. Figure E6-1-12 suggests that, for this case, the optimal x₁(t) never passes through negative values.
(b) This case differs from the last in that now x(t_f) is fixed (at 0) and λ(t_f) is free. The procedure is identical to that of part (a), except that the initial values λ₁₀ and λ₂₀ are varied until x₁f = x₂f = 0. Typical results for ρ = 1, σ = 0 are shown in Figs. E6-1-13 through E6-1-15. As t_f → ∞, free and fixed terminal state problems become identical, since the former must obviously have x₁(t_f) → 0 and x₂(t_f) → 0 for the optimal solutions. The optimal response in Fig. E6-1-15 for t_f = 4.5 is already close to those of Figs. E6-1-6 through E6-1-8. (c) For this case we have x₁(t_f) free, and hence λ₁(t_f) = 0. The only difference, therefore, is that we seek λ₁₀ and λ₂₀ to give λ₁(t_f) = x₂(t_f) = 0. This was not done computationally, since the results are so similar to the previous two cases, (a) and (b).
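The trial-and-error search described above can also be carried out digitally. The sketch below is an illustrative single-shooting scheme (not the book's analog procedure): it integrates Eqs. (E6-1-1) with ρ = 1, σ = 0, x₁₀ = 0.14, x₂₀ = 0 and a fixed t_f = 2 chosen for illustration, whereas the text lets t_f fall out of the intersection condition. Because the equations are linear, the terminal residual (λ₁(t_f), λ₂(t_f)) is affine in the guessed (λ₁₀, λ₂₀), so three trial integrations determine the correct initial costates exactly:

```python
# Single shooting for the TPBVP (E6-1-1) with rho = 1, sigma = 0:
# x1' = x2, x2' = -lam2/2, lam1' = -2*x1, lam2' = -lam1, and we seek
# (lam10, lam20) such that lam1(tf) = lam2(tf) = 0 (case (a), x(tf) free).

def simulate(lam10, lam20, tf=2.0, n=2000, x10=0.14, x20=0.0):
    y = [x10, x20, lam10, lam20]
    h = tf / n
    deriv = lambda y: [y[1], -y[3] / 2.0, -2.0 * y[0], -y[2]]
    for _ in range(n):                      # RK4 integration
        k1 = deriv(y)
        k2 = deriv([y[i] + h / 2 * k1[i] for i in range(4)])
        k3 = deriv([y[i] + h / 2 * k2[i] for i in range(4)])
        k4 = deriv([y[i] + h * k3[i] for i in range(4)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(4)]
    return y

def residual(lam10, lam20):
    y = simulate(lam10, lam20)
    return [y[2], y[3]]                     # lam1(tf), lam2(tf)

# residual is affine in (lam10, lam20): r = r0 + M [lam10, lam20]^T
r0 = residual(0.0, 0.0)
c1 = [residual(1.0, 0.0)[i] - r0[i] for i in range(2)]   # column 1 of M
c2 = [residual(0.0, 1.0)[i] - r0[i] for i in range(2)]   # column 2 of M
det = c1[0] * c2[1] - c1[1] * c2[0]
lam10 = (-r0[0] * c2[1] + r0[1] * c2[0]) / det           # Cramer's rule
lam20 = (-c1[0] * r0[1] + c1[1] * r0[0]) / det

# with these initial costates, the terminal costates vanish
final = residual(lam10, lam20)
```

For a nonlinear system the same structure applies, but the affine solve must be replaced by an iterative (e.g. Newton) correction of the guessed initial costates, which is exactly the repeated-correction loop the text describes.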

[Figures E6-1-9 through E6-1-12: optimal solution curves for ρ = σ = 1, x(t_f) free, with t_f = 2.0, 2.7, 4.1, and t_f → ∞, respectively.]

(d) The required conditions on λ in the absence of conditions on x(t₀) and x(t_f) are λ(t₀) = λ(t_f) = 0. Hence, we must search for appropriate initial values x(t₀) which, together with λ(t₀) = 0, will give λ(t_f) = 0. It is not hard to deduce that the appropriate initial values are x(t₀) = 0, which yield m*(t) = x*(t) = λ*(t) = 0 and hence J = 0, obviously the minimum value. Thus, the minimum principle


[Figure E6-1-13: optimal solution curves for ρ = 1, σ = 0, t_f = 2.6, x(t_f) = 0.]

gives results agreeing with intuition; if the initial value of x is at our discretion, we should clearly choose x(t₀) = 0 on physical grounds. (e) The constraint may be written in the form of Eq. (6-68) as

q[x₁(t_f), x₂(t_f)] = αx₁(t_f) + βx₂(t_f) − 1 = 0

Then, Eq. (6-69) yields

λ₁(t_f) = bα
λ₂(t_f) = bβ

Thus, the final adjoint vector must lie on the line λ₁(t_f) = (α/β)λ₂(t_f), which is of course perpendicular to the surface represented by q = 0. The exact position of λ(t_f) on this line is determined by the parameter b, which is adjusted to yield q = 0, i.e., to cause x(t_f) to lie on the designated line. This means a double trial-and-error. For each value


[Figures E6-1-14 and E6-1-15: optimal solution curves for ρ = 1, σ = 0, x(t_f) = 0, with t_f = 3.0 and t_f = 4.5, respectively.]


of b, λ₁₀ and λ₂₀ are sought such that λ(t_f) lies on the line λ₁(t_f) = (α/β)λ₂(t_f); then b is varied until x(t_f) is on q = 0.

Observations on computational aspects of the minimum principle

Example 6-1 points out some features of the minimum principle which are general. The system equations and adjoint equations must be solved with a mixture of initial and final conditions. Such problems are called two-point boundary value problems, and they usually require trial-and-error approaches for their solution. In general, this will require a search for n numbers such that n conditions are satisfied, where n is the order of the system equations. Thus, the minimum principle has the computational effect of reducing a problem, which is originally a search for a vector of functions m(t), to a search for the n-dimensional constant vector which is the solution of n simultaneous implicit relations. Clearly, this is a drastic reduction in the complexity of the problem, as Example 6-1 shows. Search for a vector of functions is impossible, while search for a vector of numbers is quite feasible. The minimum principle provides necessary, but not sufficient, conditions for minimizing S(x(t_f)). Therefore, if there are several solutions which satisfy the conditions of the minimum principle, we have no recourse but to evaluate S(x(t_f)) for each such solution to find the minimum. The potentially dangerous aspect of this, and of the previous observation on searches, is that the existence of more than one solution satisfying the necessary conditions may go undetected during a trial-and-error search over boundary conditions. All that can be done about this, for the general class of systems described by nonlinear equations of the form of Eq. (6-54), is to offer the reader fair warning. For linear system equations, Eq. (3-6), there is a bright note. When the objective is to minimize an integral of a quadratic function of x and m, as in Example 6-1, or to minimize the time required to drive the state to the origin x = 0, under reasonable restrictions the necessary conditions cannot have more than one solution; thus the problem does not arise in such cases.
These linear problems are discussed in more detail in later sections.

Free final time

In some problems, the final time t_f will be unspecified. An illustration of this is given in Example 6-2. Therefore, in considering variations in S(x(t_f)), we must allow variations in t_f. In other words, we consider the effect on S of varying t_f from the optimal value. Again, we give a heuristic derivation,


ignoring small terms and assuming requisite differentiability conditions, etc. A rigorous derivation is given by Pontryagin et al.⁴ Clearly, except for second-order terms,

δx(t_f) = (dx*/dt)|_{t_f*} δt_f

where δt_f = t_f − t_f*, and t_f* is the optimal final time. Then

δS(x(t_f)) = λᵀ(t_f) δx(t_f) = λᵀ(t_f) (dx*/dt)|_{t_f*} δt_f

From the definition of H, Eq. (6-73), this may be rewritten

δS(x(t_f)) = H|_{t_f*} δt_f

Since δt_f, the variation in final time, may be positive or negative, this relation clearly implies that a negative δS(x(t_f)) can be avoided only if

H*|_{t_f*} = 0

which says that a necessary condition satisfied by the optimal response is that the final value of the Hamiltonian is zero. However, we have already stated that the Hamiltonian is constant along the optimal response. Hence, we conclude that when t_f is unspecified,

H* = 0    (6-83)

everywhere on the optimal response. Effectively, Eq. (6-83) gives us one more condition to be used for solving optimal control problems. Clearly, an additional condition is necessary since we have lost the condition given by a specified final time.

Performance criteria and Lagrange multipliers

In some applications, the performance criterion to be minimized is of the form

J(m) = ∫_{t₀}^{t_f} F(x, m) dt + G(x(t_f))    (6-84)

where F and G are suitably differentiable scalar functions. The minimum principle stated earlier may be used if we define an additional state variable,

x_{n+1}(t) = ∫_{t₀}^{t} F(x, m) dt    (6-85)

an augmented state vector,

x_a = ( x       )    (6-86)
      ( x_{n+1} )


and an augmented system equation, Eq. (6-54), with

f_a(x, m) = ( f )    (6-87)
            ( F )

dx_a/dt = f_a(x, m)

This augmentation has been illustrated in the foregoing examples. We must also define a modified performance criterion

(6-88)

which changes nothing but the final conditions λ(t_f). Now suppose we are required to minimize the performance criterion subject to the constraint

∫_{t₀}^{t_f} g(x, m) dt = γ    (6-89)

where g and γ are given vectors of functions and constants, respectively. Then the method of Lagrange multipliers may be used: Let p be a constant vector of Lagrange multipliers, having the same number of components as γ and g. Define J_p(m) by

J_p(m) = J(m) + pᵀ ∫_{t₀}^{t_f} g(x, m) dt    (6-90)

where J(m) is given by Eq. (6-84). Then if m*(t) is the control which minimizes J_p(m) and satisfies Eq. (6-89), it minimizes J(m) subject to the constraint of Eq. (6-89), i.e., it is the solution to the original problem. The proof of this assertion follows easily. Suppose there exists an m(t) satisfying Eq. (6-89) such that J(m) < J(m*); since both controls satisfy Eq. (6-89), adding pᵀγ to each side gives J_p(m) < J_p(m*), which contradicts the assumption that m*(t) minimizes J_p(m).

EXAMPLE 6-2

∫_{t₀}^{t_f} m² dt = C > 0    (E6-2-2)

This is an example in which the final time is unspecified and, in fact, is to be minimized. Corresponding to Eq. (6-84), the performance criterion to be minimized may be written

J(m) = ∫_{t₀}^{t_f} dt = t_f − t₀

We define

J_p(m) = ∫_{t₀}^{t_f} (1 + pm²) dt

and using the Lagrange multiplier principle, we recognize that minimization of J_p(m) and choice of p to satisfy Eq. (E6-2-2) constitute a solution to the original problem. We next define the additional state variable

x₂ = ∫_{t₀}^{t} (1 + pm²) dt

and hence

dx₂/dt = 1 + pm²

Thus,

H = λ₁(−x₁ + m) + λ₂(1 + pm²)

from which it follows that λ₂(t) ≡ 1. Differentiation yields

∂H/∂m = λ₁ + 2pm = 0    (E6-2-3)


and therefore

m = -\frac{\lambda_1}{2p}

We have, as stated before, omitted the asterisk notation, since it is understood that this relation gives the optimal m*(t). The solution to the differential equation for \lambda_1 is clearly

\lambda_1 = \lambda_{10}\, e^{t - t_0}

With the transition matrix partitioned as in Eq. (6-112) into the blocks \Phi_{11}(t; t_0), \Phi_{12}(t; t_0), \Phi_{21}(t; t_0), \Phi_{22}(t; t_0), we may rewrite Eq. (6-111) as two separate relations for x and \lambda. We do this replacing t_0 by t and t by t_f, so that the final state is expressed as a function of the present state. This is clearly in accord with the state property presented in Chapter 3.

x(t_f) = \Phi_{11}(t_f; t)\, x(t) + \Phi_{12}(t_f; t)\, \lambda(t) + \int_t^{t_f} \Phi_{12}(t_f; \tau)\, W(\tau)\, r(\tau)\, d\tau        (6-113)

\lambda(t_f) = \Phi_{21}(t_f; t)\, x(t) + \Phi_{22}(t_f; t)\, \lambda(t) + \int_t^{t_f} \Phi_{22}(t_f; \tau)\, W(\tau)\, r(\tau)\, d\tau        (6-114)

These equations may be solved simultaneously for \lambda(t), by means of the final condition \lambda(t_f) given in Eq. (6-105), to obtain

\lambda(t) = K(t)\, x(t) - \mu(t)        (6-115)

where

K(t) = M(t)\left[G^T(t_f)\, P\, G(t_f)\, \Phi_{11}(t_f; t) - \Phi_{21}(t_f; t)\right]        (6-116)

\mu(t) = M(t)\left[G^T(t_f)\, P\, r(t_f) - G^T(t_f)\, P\, G(t_f)\int_t^{t_f} \Phi_{12}(t_f; \tau)\, W(\tau)\, r(\tau)\, d\tau + \int_t^{t_f} \Phi_{22}(t_f; \tau)\, W(\tau)\, r(\tau)\, d\tau\right]        (6-117)

M(t) = \left[\Phi_{22}(t_f; t) - G^T(t_f)\, P\, G(t_f)\, \Phi_{12}(t_f; t)\right]^{-1}        (6-118)

Chap. 6

Optimal Control of Linear Systems with Quadratic Performance Criteria

215

and we assume the matrix inverse denoted by M(t) exists. (A proof is given by Kalman.⁵ This reference is apparently the original source for much of the general theory on optimal control of linear systems with quadratic criteria.) The time-varying matrix K(t) has dimension n × n, and the time-varying vector μ(t) has dimension n × 1.

Structure of the optimal control

At this point, we should assess what has been accomplished. The general theory has shown in Eq. (6-107) that the optimal control is a linear function of the adjoint variables. This result leads to Eq. (6-109), a system of 2n linear differential equations in x and λ, with initial conditions on x and final conditions on λ. As we saw in Example 6-1, such two-point boundary-value problems are not easy to solve. However, by using the general results of Chapter 3 on transition matrices, we have been able to deduce that the solution must be of the form of Eq. (6-115). This is very important since, if we can evaluate K(t) and μ(t), we can obtain the optimal control as a function of the state vector x, by combining Eqs. (6-107) and (6-115) to obtain

m^*(t) = -R^{-1}(t)\, B^T(t)\left[K(t)\, x(t) - \mu(t)\right]        (6-119)

This will enable us to construct a feedback control system for the optimal control, as illustrated in Fig. 6-3. This system requires measurement of the state vector rather than of the output vector. There is obviously a great practical advantage to use of a feedback system rather than a programmed system in which the optimal control m(t) is simply computed and applied to the actual process without regard to behavior of the state vector during the transient. Slight deviations of the process dynamics from Eq. (3-44) will always cause deviations of the physical x(t) from the optimal x*(t), which is why the asterisk is omitted from x(t) in Eq. (6-119). To further distinguish between the feedback and programmed systems, we term a control m(t), which is predetermined as a function of time, a control function, whereas a control m(x(t), t), which is computed as a function of the actual state, is

Figure 6-3. Optimal feedback control of linear system with quadratic criterion. (Block diagram: μ(t) enters a summing junction; the gain R⁻¹(t)Bᵀ(t) produces m(t); the plant dx/dt = A(t)x(t) + B(t)m(t) yields x(t); the output is c(t) = G(t)x(t); x(t) is fed back through K(t) to the summing junction.)


termed a control law. Note that a control law may also be written m(x_a(t)) and, therefore, depends only on the augmented state vector.

The gains in the loop of Fig. 6-3 are time-varying. This does not result from consideration of time-varying matrices A, B, G, Q, and R, but is true even when all these matrices are constant, as seen from Eq. (6-37) and from the present derivation. However, Eq. (6-36) shows for the first-order system with r(t) = 0 that the gain becomes constant as t_f → ∞, and we will indicate later that this is a general result. The powerful results of Chapter 3 enabled us to write Eqs. (6-115), (6-116), (6-117), and (6-118), which show that the optimal gain is independent of the initial state, an important practical observation.

The loop of Fig. 6-3 shows that the state vector, and not the output vector, must be known to generate the optimal control law. Therefore, the system must be observable if the feedback loop is to be constructed. In other words, by definition, the output vector, and not the state vector, is available by direct process measurement. Unless we can compute the state vector x from the output vector c, i.e., unless the system is observable, we cannot construct the control law, although of course the control function may still be applied.

The signal μ(t) acts as a kind of set-point vector for the loop of Fig. 6-3. We see immediately from Eq. (6-117) that, for r(t) = 0, μ(t) = 0. This case is called the regulator or load problem, since Eqs. (6-97) and (6-98) show that when r = 0 the objective is to regulate the output c near zero. The problem with r(t) ≠ 0 is called the servomechanism or set-point problem, since the objective is to have the output signal follow closely a given input signal.

A major problem remaining is evaluation of K(t) and μ(t). Equations (6-116) and (6-117) are not computationally convenient. There are no simple methods for evaluating transition matrices such as \Phi(t; t_0) for linear time-varying systems, so these expressions cannot be regarded as useful solutions. In the next section we develop a computer-based method for evaluating K(t) and μ(t) which is computationally more convenient than Eqs. (6-116) and (6-117).

Evaluation of K(t) and μ(t)

The approach to this problem is a natural one. We substitute Eq. (6-115) into Eq. (6-109) and determine conditions on K(t) and μ(t) which must be satisfied. Differentiation of Eq. (6-115) yields

\frac{d\lambda}{dt} = \frac{dK}{dt}\, x + K\, \frac{dx}{dt} - \frac{d\mu}{dt}
= \frac{dK}{dt}\, x + K(Ax - U\lambda) - \frac{d\mu}{dt}
= \frac{dK}{dt}\, x + KAx - KUKx + KU\mu - \frac{d\mu}{dt}


which may be equated to the expression for dλ/dt obtained from Eq. (6-109)

\frac{d\lambda}{dt} = -Vx - A^T\lambda + Wr = -Vx - A^T K x + A^T \mu + Wr

These expressions for dλ/dt must be identical for all x. Therefore, by equating the coefficients of x and by equating the remaining terms, we obtain the results

\frac{dK}{dt} = -(KA + A^T K) - V + KUK        (6-120)

\frac{d\mu}{dt} = (KU - A^T)\mu - Wr        (6-121)

The boundary conditions on K and μ are obtained from Eqs. (6-116) to (6-118). First note that, since \Phi(t; t_0) is a transition matrix, \Phi(t_0; t_0) = I, and therefore, from Eq. (6-112), that \Phi_{11}(t_0; t_0) = \Phi_{22}(t_0; t_0) = I and \Phi_{12}(t_0; t_0) = \Phi_{21}(t_0; t_0) = 0. Then, evaluation of Eqs. (6-116) to (6-118) at t = t_f yields

K(t_f) = G^T(t_f)\, P\, G(t_f)        (6-122)

\mu(t_f) = G^T(t_f)\, P\, r(t_f)        (6-123)

The important result of Eqs. (6-119) to (6-123) is that, by starting at t_f and integrating backward in time to t_0, we can evaluate K(t) and μ(t) without trial-and-error. Then, knowledge of K(t) and μ(t) gives m*(t) from Eq. (6-119). The two-point boundary-value problem no longer exists because the problem has been separated into two one-point boundary-value problems. The first one-point problem is Eqs. (6-120) and (6-121), for which the final conditions are known, and the second is Eqs. (3-44) and (6-119), for which the initial conditions are known. The solution of the first problem for K(t) and μ(t) is used in the solution of the second for the optimal response x*(t).

It should be noted that Eq. (6-120) is nonlinear because of the last term. This set of simultaneous differential equations is of the Riccati form, and therefore Eq. (6-120) will be called the Riccati equation. We can now prove that K(t) must be symmetric. First, the existence and uniqueness theorem given in Appendix A guarantees that the solution to Eq. (6-120) with the final condition given by Eq. (6-122) is unique. Therefore, if we show that K^T(t) also satisfies the same equation and final condition, then we must have K^T(t) = K(t). But K^T(t) is easily shown to satisfy the Riccati equation and final condition if we take the transpose of Eqs. (6-120) and (6-122). Therefore, K(t) is symmetric and has only n(n + 1)/2 unknown elements, rather than n² unknown elements. Further, the Riccati equation contains only n(n + 1)/2 simultaneous nonlinear differential equations. The reader should use Eq. (6-113) to show that the fixed terminal point problem results in K(t_f) → ∞.
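Because Eqs. (6-120)–(6-123) start from known final conditions, they are well suited to backward numerical integration. The following sketch is my own illustration (not from the text): it integrates the matrix Riccati equation backward with classical RK4, using NumPy; the constant matrices in the check are illustrative.

```python
import numpy as np

def riccati_backward(A, U, V, K_tf, tf, t0, n=2000):
    """Integrate dK/dt = -(K A + A^T K) - V + K U K (Eq. 6-120)
    backward in time from the final condition K(tf); returns K(t0)."""
    h = (tf - t0) / n
    def f(K):
        return -(K @ A + A.T @ K) - V + K @ U @ K
    K = K_tf.copy()
    for _ in range(n):                 # step from tf down to t0
        k1 = f(K)
        k2 = f(K - 0.5 * h * k1)
        k3 = f(K - 0.5 * h * k2)
        k4 = f(K - h * k3)
        K = K - (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return K

# Scalar check: A = -1, U = 1/(2p), V = 2 reproduces the first-order
# regulator solved in Example 6-5 below.
p = 0.5
K0 = riccati_backward(np.array([[-1.0]]), np.array([[1 / (2 * p)]]),
                      np.array([[2.0]]), np.zeros((1, 1)), 2.0, 0.0)
print(K0[0, 0])
```

The same routine handles any state dimension, since only matrix products appear in the right-hand side.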


EXAMPLE 6-5

At this point we rework the problem given at the beginning of the chapter, using the general theory developed here. Specifically, for the system

\frac{dx_1}{dt} = -x_1 + m

we wish to minimize

J(m) = \int_{t_0}^{t_f} \left[x_1^2(t) + p\, m^2(t)\right] dt

The appropriate matrices and vectors to describe this problem in terms of the general theory are

A = -1,\quad B = 1,\quad G = 1,\quad H = 0
P = 0,\quad Q = 2,\quad R = 2p,\quad r(t) = 0

Therefore,

U = \frac{1}{2p},\quad V = 2,\quad \mu(t) = 0

and the Riccati equation is

\frac{dk_1}{dt} = 2k_1 - 2 + \frac{k_1^2}{2p}

with final condition k_1(t_f) = 0

The solution to this is found by direct integration

t_f - t = \int_{k_1}^{0} \frac{ds}{2s - 2 + s^2/2p}

which yields

k_1 = \frac{2}{1 + a \coth a(t_f - t)}

where a = \sqrt{1 + p^{-1}}. This is the identical result obtained in Eq. (6-37), since the factor R^{-1}B = 1/(2p) is included in the gain k of Eq. (6-37).

The differential equations for μ(t), Eq. (6-121), are linear. They must be integrated backward in time from t_f to t_0, because only the final condition μ(t_f) is known. But this implies that at t_0, when this backward integration must be performed, we know the set-point signal r(t) for all t_0 ≤ t < t_f. In other
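As a quick sanity check on the closed form just obtained (my own verification, not from the text; plain Python), k_1 can be substituted back into the Riccati equation and the residual evaluated with a central-difference derivative:

```python
import math

# Verify that k1 = 2/(1 + a coth a(tf - t)), with a = sqrt(1 + 1/p),
# satisfies dk1/dt = 2 k1 - 2 + k1^2/(2p); p and tf are arbitrary.
p, tf = 0.5, 2.0
a = math.sqrt(1 + 1 / p)

def k1(t):
    return 2 / (1 + a / math.tanh(a * (tf - t)))

t, h = 0.7, 1e-6
dk = (k1(t + h) - k1(t - h)) / (2 * h)          # central difference
residual = dk - (2 * k1(t) - 2 + k1(t) ** 2 / (2 * p))
print(abs(residual))
```

The residual is zero to within the accuracy of the finite difference, for any interior t and any p > 0.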


t_0. Use of m = 0 for t > T guarantees c = 0 for all t > T. Therefore, there is a control which gives a finite value of J(m), and the optimal value of J(m) must be finite. If we did not assume controllability, we could not be sure that an optimal control exists, since it might happen that every control results in an infinite value of J(m). These considerations also show that the optimal control will surely have c(t) → 0 as t → ∞, and therefore we can take P = 0.

Kalman⁵ considered the stationary case where the matrices A, B, Q, R, and G are constant, and where r(t) = 0, and he showed that

\lim_{t_f \to \infty} K(t) = K_s

where K_s is a constant matrix which satisfies the quadratic equation

0 = -(K_s A + A^T K_s) - V + K_s U K_s        (6-124)

In other words, K_s is the "steady-state" solution to Eq. (6-120), obtained by allowing dK/dt → 0. The practical significance of this result is that the optimal controller for the stationary regulator problem, with t_f → ∞, is a simple feedback with constant gains,

m^*(t) = -R^{-1} B^T K_s\, x(t)        (6-125)

The gain in Eq. (6-36) is a specific example of this result. This control is particularly easy to implement. It should be noted again that the linear stationary system must be both controllable and observable if Eq. (6-125) is to be used.


The feedback controller is illustrated by the relatively general result for a second-order system, obtained in the next example.

EXAMPLE 6-6

We consider control of the second-order system

\frac{d^2 c}{dt^2} + 2\zeta \frac{dc}{dt} + c = m(t)

The objective is to minimize the performance criterion

J(m) = \int_{t_0}^{\infty} \left[c^2(t) + p\, m^2(t)\right] dt

with a given initial condition c(t_0) = c_0, dc(t_0)/dt = 0. Physically, the problem really is identical to set-point control of a second-order process. The output variable c(t) is defined as a deviation around the final steady state, so that m = 0 is ultimately required to hold the process at the desired steady state, r(t) = 0. This is similar to the observations made in the discussion following Eq. (6-36). Thus, certain set-point problems can be treated as regulator problems if this redefinition of output variable is made. A discussion of the problem which arises when this redefinition is not made, and the problem is treated as a servomechanism problem, is given in the next example. Define as state variables x_1 = c, x_2 = dc/dt. Then, this problem is described by

A = \begin{pmatrix} 0 & 1 \\ -1 & -2\zeta \end{pmatrix},\quad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}

G = (1 \;\; 0),\quad H = 0,\quad R = p,\quad Q = 1,\quad P = 0,\quad r(t) = 0

A check shows that the system is both controllable and observable. From Eq. (6-110),

U = \frac{1}{p}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},\quad V = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}

and Eq. (6-124) becomes

2k_{12} - 1 + \frac{k_{12}^2}{p} = 0
k_{11} - 2\zeta k_{12} - k_{22} - \frac{k_{12} k_{22}}{p} = 0
2k_{12} - 4\zeta k_{22} - \frac{k_{22}^2}{p} = 0        (E6-6-1)


Note that we have used the fact that K_s is symmetric:

K_s = \begin{pmatrix} k_{11} & k_{12} \\ k_{12} & k_{22} \end{pmatrix}

We shall demonstrate later (see Example 6-18) that K must be positive-definite. This fact enables us to select, from among the multiple roots which satisfy the system (E6-6-1) of quadratic equations, those which also satisfy k_{11} > 0, k_{22} > 0, k_{11}k_{22} > k_{12}^2. These roots are

k_{11} = p\left[a\sqrt{4\zeta^2 + 2(a - 1)} - 2\zeta\right]
k_{12} = p(a - 1)
k_{22} = p\left[\sqrt{4\zeta^2 + 2(a - 1)} - 2\zeta\right]

where a = \sqrt{1 + p^{-1}}. From Eq. (6-119) the optimal control is

m(t) = -\frac{1}{p}\left[k_{12}\, x_1(t) + k_{22}\, x_2(t)\right]
     = -\left[(a - 1)x_1 + \left(\sqrt{4\zeta^2 + 2(a - 1)} - 2\zeta\right)x_2\right]
     = -\left[(a - 1)c + \left(\sqrt{4\zeta^2 + 2(a - 1)} - 2\zeta\right)\frac{dc}{dt}\right]

A block diagram for the optimal control is shown in Fig. E6-6-1.

Figure E6-6-1. Optimal regulation of output of second-order system.

Note that the feedback control element is unrealizable and must be approximated by, for example,

\frac{(k_{22}/p)s + k_{12}/p}{\eta s + 1}

with η very small. The characteristic equation of this closed loop is

s^2 + s\sqrt{4\zeta^2 + 2(a - 1)} + a = 0

Therefore, the closed loop has natural frequency \omega_c given by⁶

\omega_c = \sqrt{a}

and damping factor

\zeta_c = \frac{1}{2}\sqrt{\frac{4\zeta^2 + 2(a - 1)}{a}}


As p → 0, and therefore a → ∞, less penalty is placed on m(t), the gains k_{12}/p and k_{22}/p become large, and the closed-loop damping factor approaches 0.707, a very reasonable figure. Note that the control is independent of the initial conditions. More general results on this problem are presented by Kalman.⁷
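The stationary gains above are easy to check by substituting them back into the algebraic Riccati equation, Eq. (6-124). This verification script is my own construction (NumPy assumed; ζ and p values are illustrative):

```python
import numpy as np

def gains(zeta, p):
    # closed-form stationary Riccati entries derived in this example
    a = np.sqrt(1 + 1 / p)
    s = np.sqrt(4 * zeta**2 + 2 * (a - 1))
    return p * (a * s - 2 * zeta), p * (a - 1), p * (s - 2 * zeta)

def are_residual(zeta, p):
    # residual of 0 = -(Ks A + A^T Ks) - V + Ks U Ks, Eq. (6-124)
    k11, k12, k22 = gains(zeta, p)
    Ks = np.array([[k11, k12], [k12, k22]])
    A = np.array([[0.0, 1.0], [-1.0, -2 * zeta]])
    V = np.array([[1.0, 0.0], [0.0, 0.0]])
    U = np.array([[0.0, 0.0], [0.0, 1 / p]])
    return -(Ks @ A + A.T @ Ks) - V + Ks @ U @ Ks

res = are_residual(0.5, 0.1)
print(np.abs(res).max())
```

The maximum residual entry is zero to machine precision, confirming that the selected roots solve the system (E6-6-1).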

EXAMPLE 6-7

We consider here the servomechanism problem for the first-order system,

\frac{dc}{dt} + c = m

for which the regulator control has been derived in Example 6-5. The performance index is now

J(m) = \int_{t_0}^{t_f} \left\{[r(t) - c(t)]^2 + p\, m^2(t)\right\} dt

The values of A, B, G, H, P, Q, and R are identical to those used in Example 6-5. Therefore, the value of K(t) is unchanged. Note that K(t) is the same for both servomechanism and regulator problems because Eq. (6-120) is the same for both. To solve the servomechanism problem, we write Eq. (6-121) as

\frac{d\mu}{dt} = \left\{\frac{1}{p[1 + a \coth a(t_f - t)]} + 1\right\}\mu - 2r(t)

since W = 2 from Eq. (6-110). This must be solved with the final condition μ(t_f) = 0. An integrating factor for this equation is

\sinh a(t_f - t) + a \cosh a(t_f - t)

The solution can then be written

\mu(t) = \frac{2}{\sinh a(t_f - t) + a \cosh a(t_f - t)} \int_t^{t_f} \left[\sinh a(t_f - \tau) + a \cosh a(t_f - \tau)\right] r(\tau)\, d\tau        (E6-7-1)

This equation shows clearly that to compute μ at any time t requires knowledge of the entire future input r(t). Consider first the case for a unit-step function r(t) = S(t). Then,

\mu(t) = \frac{2}{a} \cdot \frac{\cosh a(t_f - t) + a \sinh a(t_f - t) - 1}{\sinh a(t_f - t) + a \cosh a(t_f - t)}

and Eq. (6-119) yields the control. Combining this with the system equation yields

\frac{dc}{dt} + \left\{1 + \frac{1}{p[1 + a \coth a(t_f - t)]}\right\}c = \frac{1}{pa} \cdot \frac{\cosh a(t_f - t) + a \sinh a(t_f - t) - 1}{\sinh a(t_f - t) + a \cosh a(t_f - t)}


An integrating factor for this equation is just the inverse of that for the equation for μ(t). The solution is

c(t) = \frac{c_0}{\eta}\left[a \cosh a(t_f - t) + \sinh a(t_f - t)\right] + \frac{1}{1 + p}\left[1 - \frac{a \cosh a(t_f - t) + \sinh a(t_f - t) + \sinh a(t - t_0)}{\eta}\right]

where

\eta = a \cosh a(t_f - t_0) + \sinh a(t_f - t_0)

and c_0 = c(t_0). In a typical case, c_0 = 0, and only the second term is of interest. Let us examine this second term for large values of a(t_f - t_0). Then, for almost all time during the transient, except near t_0 and near t_f, we have that cosh a(t_f - t), sinh a(t_f - t), and sinh a(t - t_0) are much smaller than cosh a(t_f - t_0) and sinh a(t_f - t_0); therefore, we have that

c(t) \approx \frac{1}{1 + p}

Of course at t_0, c = 0, while at t_f,

c(t_f) = \frac{a\left[\cosh a(t_f - t_0) - 1\right]}{\eta(1 + p)} \approx \frac{a}{(1 + a)(1 + p)}

Therefore, the output rises at t_0 to a value 1/(1 + p), less than the desired value of unity. It remains at this "steady" value during almost all of the transient (for large a(t_f - t_0)), and just before the end of the transient, the output falls to a/(1 + a) times the "steady" value. If we let p → 0, then a → ∞, and the performance is good; the output does closely follow the unit-step input. However, quite large values of m(t) are required near t_0 if p → 0. A sketch of a typical transient for small p is shown in Fig. E6-7-1.

Figure E6-7-1. Sketch of optimal servomechanism response for first-order system, large a(t_f - t_0).
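As a numerical check on the unit-step algebra above (my own verification, plain Python; the parameter values are arbitrary), the closed-form μ(t) should satisfy the instance of Eq. (6-121) written at the start of this example:

```python
import math

p, tf = 0.2, 3.0
a = math.sqrt(1 + 1 / p)

def mu(t):
    # closed-form mu(t) for the unit-step input r(t) = S(t)
    s, c = math.sinh(a * (tf - t)), math.cosh(a * (tf - t))
    return (2 / a) * (c + a * s - 1) / (s + a * c)

def rhs(t):
    # right side of d(mu)/dt = {1/(p[1 + a coth a(tf - t)]) + 1} mu - 2
    g = 1 / (p * (1 + a / math.tanh(a * (tf - t))))
    return (g + 1) * mu(t) - 2

t, h = 1.0, 1e-6
dmu = (mu(t + h) - mu(t - h)) / (2 * h)   # central-difference derivative
print(abs(dmu - rhs(t)))
```

The difference vanishes to finite-difference accuracy for any interior t, confirming the integrating-factor computation.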


The reader may well wonder why the performance of the optimal servomechanism is so poor in comparison to that of the optimal regulator. Probably, the answer is that the problem is not well-formulated. The value of m(t) is required to remain near unity to maintain c(t) near unity. Therefore, it probably does not make sense to penalize m(t) in the criterion function. That is why small values of p are required to obtain acceptable performance. Conversely, small values of p cause large values of m near t_0, and the physical system may saturate. A time-varying p such as p_0 exp(t_0 - t) is certainly one possible solution. For t_f → ∞, the optimal control for this problem does not exist because J(m) → ∞. The problem is better treated if we redefine the output variable c(t) and control signal m(t) as deviations around the desired final value, as in Example 6-5. Then, the stationary controller for t_f → ∞ can be obtained in place of the time-varying controller which results from the present analysis.

Now consider a problem which is well-formulated as a servomechanism problem. We ask that the controlled output follow the curve

r(t) = c_0 e^{-\beta(t - t_0)}

where β > 0 represents the inverse of the desired time constant. Then, from Eq. (E6-7-1),

\mu(t) = 2 c_0 e^{-\beta(t - t_0)} \cdots

Q_j = \left(b^{(j)} \;\; A b^{(j)} \;\; \cdots \;\; A^{n-1} b^{(j)}\right)        (6-141)

Now, λ_0 cannot be zero, since otherwise Eqs. (6-137) and (6-133) imply H = 1 at t = t_0, and this contradicts the requirement that H = 0 everywhere on the optimal trajectory. Therefore, Eq. (6-140) requires that Q_j be singular. In other words, if we find that Q_j is not singular for any j, j = 1, 2, ..., r, then we can be assured that there are no finite intervals on which p_j(t) = 0 for any j = 1, 2, ..., r, and therefore that Eq. (6-134) will completely define the time-optimal control (if the optimal control exists; we touch on the existence question later). A system for which Q_j is not singular for any j is called normal; a controllable system which is not normal is called singular. A normal system is defined as one which cannot have any p_j(t) = 0 for a finite interval of time. Comparison of Eq. (6-141) with Eq. (3-49) shows that a normal system is controllable with respect to each and every component of the control vector m(t). In other words, all but one of the components of m(t) can be held at zero, and the system will still be controllable with respect to the remaining free control component.

The reader is cautioned that it is incorrect to conclude that the time-optimal control does not exist for a singular system. All that singularity implies is that the minimum principle does not explicitly define the time-optimal control through Eq. (6-134). In other words, the necessary conditions may not provide enough information to find the time-optimal control for a singular system.

EXAMPLE 6-10

Consider the system of Eq. (3-6) with

A = \begin{pmatrix} -3 & 1 \\ -2 & 0 \end{pmatrix},\quad B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}


This system is controllable, since Eq. (3-49) gives

K = (B \;\; AB) = \begin{pmatrix} 1 & 0 & -2 & 1 \\ 1 & 1 & -2 & 0 \end{pmatrix}

which clearly has the required two linearly independent columns. However,

Q_1 = \left(b^{(1)} \;\; A b^{(1)}\right) = \begin{pmatrix} 1 & -2 \\ 1 & -2 \end{pmatrix}

Since Q_1 is singular, the system is not normal.

EXAMPLE 6-11

The following simple system

\frac{dx_1}{dt} = m_1, \qquad \frac{dx_2}{dt} = m_2

will serve to demonstrate that the time-optimal control may exist for a system which is not normal. Here A = 0, B = I, so that

(B \;\; AB) = (I \;\; 0)

has the required two independent columns, and the system is controllable. However,

Q_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \qquad \text{and} \qquad Q_2 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}

are both singular, so the system is not normal. From Eq. (6-138) we obtain

\lambda(t) = \lambda_0

for all t > t_0. Then, Eq. (6-136) gives

H = 1 + \lambda_{10} m_1 + \lambda_{20} m_2 = 0        (E6-11-1)
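Before continuing the derivation, note that the controllability and normality tests used in these two examples are easy to mechanize. This small sketch is my own (NumPy assumed); it classifies the present system:

```python
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)          # (B  AB  ...  A^{n-1} B)

def is_controllable(A, B):
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

def is_normal(A, B):
    # normal: Q_j = (b_j, A b_j, ..., A^{n-1} b_j) nonsingular for every
    # column b_j of B, i.e., controllable through each component alone
    return all(is_controllable(A, B[:, [j]]) for j in range(B.shape[1]))

A = np.zeros((2, 2))                  # the system of this example
B = np.eye(2)
print(is_controllable(A, B), is_normal(A, B))
```

The run reports that the system is controllable but not normal, agreeing with the singularity of Q_1 and Q_2 above.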

Chap. 6

Time-Optimal Control of Linear, Stationary Systems

239

Further, Eq. (6-134) gives

m_1 = \begin{cases} 1, & \lambda_{10} < 0 \\ -1, & \lambda_{10} > 0 \end{cases} \qquad m_2 = \begin{cases} 1, & \lambda_{20} < 0 \\ -1, & \lambda_{20} > 0 \end{cases}        (E6-11-2)

while any values of m_1 or m_2 are acceptable if λ_{10} = 0 or λ_{20} = 0, respectively. Now assume λ_{10} ≠ 0 and λ_{20} ≠ 0. Then the time-optimal control must be one of the four actions

I:   m_1 = 1,  m_2 = 1
II:  m_1 = 1,  m_2 = -1
III: m_1 = -1, m_2 = 1
IV:  m_1 = -1, m_2 = -1

for all t > t_0 (until the origin is reached), since neither m_1 nor m_2 can change sign, according to Eq. (E6-11-2). However, if |x_{10}| ≠ |x_{20}|, the state of the system cannot pass through the origin on such a trajectory. Therefore, unless |x_{10}| = |x_{20}|, we must have either λ_{10} = 0 or λ_{20} = 0. It is easy to decide which condition must be true. Suppose |x_{10}| > |x_{20}|. Then, x_2(t) will reach zero before x_1(t). To be specific, assume x_{10} > x_{20} > 0. Then intuitively, we should have initially m_1 = m_2 = -1. The system state will follow

x_1 = x_{10} - (t - t_0)
x_2 = x_{20} - (t - t_0)

At t = t_0 + x_{20}, x_2 = 0 but x_1 ≠ 0. Therefore we want to continue with m_1 = -1, m_2 = 0 for t_0 + x_{20} < t < t_0 + x_{10}, at which time x = 0 and m_1 = m_2 = 0 will maintain x = 0. In other words, m_2 must switch to zero at t = t_0 + x_{20}. The only choice of λ_0 which will allow this and still satisfy Eq. (E6-11-1), while producing the absolute minimum value of H, is

\lambda_0 = \begin{pmatrix} \mathrm{sgn}\, x_{10} \\ 0 \end{pmatrix}

This illustrates that p_2(t) \equiv 0 and the system is singular. Continuing


the argument in this manner shows that the correct choices of initial conditions are

\lambda_0 = \begin{pmatrix} \mathrm{sgn}\, x_{10} \\ 0 \end{pmatrix}, \quad |x_{10}| > |x_{20}|

\lambda_0 = \begin{pmatrix} 0 \\ \mathrm{sgn}\, x_{20} \end{pmatrix}, \quad |x_{10}| < |x_{20}|

\lambda_0 = \frac{1}{2}\begin{pmatrix} \mathrm{sgn}\, x_{10} \\ \mathrm{sgn}\, x_{20} \end{pmatrix}, \quad |x_{10}| = |x_{20}|
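The case x_{10} > x_{20} > 0 argued above can be simulated directly; this sketch is my own (plain Python, Euler stepping), applying full negative effort to each coordinate until it reaches zero:

```python
def time_to_origin(x10, x20, dt=1e-4):
    # bang-bang policy for x10 > x20 > 0: m_j = -1 while x_j > 0,
    # then m_j = 0 to hold x_j at zero
    t, x1, x2 = 0.0, x10, x20
    while x1 > 0.0 or x2 > 0.0:
        if x1 > 0.0:
            x1 -= dt          # m1 = -1
        if x2 > 0.0:
            x2 -= dt          # m2 = -1
        t += dt
    return t

print(time_to_origin(1.0, 0.4))
```

The origin is reached at t_0 + x_{10}, the minimum time predicted by the argument.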

For the specific case considered above, a time-optimal response is illustrated in Fig. E6-11-1. However, it should be clear that this time-optimal response is not unique. Any behavior of m_2(t) which satisfies

\int_{t_0}^{t_0 + x_{10}} m_2(\tau)\, d\tau = -x_{20}

will satisfy all the necessary conditions for this case. That is, H will be minimized and equal to zero, and the system will reach the origin at the same minimum time t_0 + x_{10}. An alternate time-optimal trajectory uses initially m_1 = -1, m_2 = -a for some value of a satisfying x_{20}/x_{10} ≤ a ≤ 1.

Suppose λ_j > 0, and let β_j designate the jth component of the column vector P^{-1}b. To obtain y_j = 0 for some t_f, we require a control such that

y_{j0} = -\beta_j \int_{t_0}^{t_f} e^{-\lambda_j \tau}\, m(\tau)\, d\tau

which follows from Eq. (3-22). The quantity y_{j0} represents the initial value y_j(t_0). Because of the constraint |m(t)| ≤ 1, we obtain

|y_{j0}| \leq |\beta_j| \int_{t_0}^{t_f} e^{-\lambda_j \tau}\, d\tau = \left|\frac{\beta_j}{\lambda_j}\right|\left(e^{-\lambda_j t_0} - e^{-\lambda_j t_f}\right)        (6-144)

which may be rewritten

e^{-\lambda_j t_0} - e^{-\lambda_j t_f} \geq \left|\frac{\lambda_j\, y_{j0}}{\beta_j}\right|        (6-145)

Clearly, Eq. (6-145) cannot be satisfied if

|y_{j0}| > \left|\frac{\beta_j}{\lambda_j}\right| e^{-\lambda_j t_0}        (6-146)

†Reference 4, pp. 127-135.


and we cannot force y_j to the origin. In physical terms, Eq. (6-146) shows that, if the initial condition is too large, the constraint on m(t) will prevent us from driving the unstable system to the origin, despite the fact that the system is controllable. This shows why stability of A is required to guarantee the existence of a time-optimal control. (Note that if λ_j < 0, the direction of the inequality in Eq. (6-145) is reversed.)

Uniqueness

We now prove the following uniqueness theorem: If the system of Eq. (3-6) is normal, and if a time-optimal control exists, then it is unique. The proof follows by the assumption of two different time-optimal controls, m^i(t) and m^j(t), which drive the state from x_0 to 0 at the same minimum time t_f^*. Then,

x^i(t) = e^{A(t - t_0)}\, x_0 + \int_{t_0}^{t} e^{A(t - \tau)}\, B\, m^i(\tau)\, d\tau

x^j(t) = e^{A(t - t_0)}\, x_0 + \int_{t_0}^{t} e^{A(t - \tau)}\, B\, m^j(\tau)\, d\tau

Using the fact that x^i(t_f^*) = x^j(t_f^*) = 0, we obtain

\int_{t_0}^{t_f^*} e^{-A\tau}\, B\, m^i(\tau)\, d\tau = \int_{t_0}^{t_f^*} e^{-A\tau}\, B\, m^j(\tau)\, d\tau

which may be multiplied by \lambda_0^{iT}, the initial condition on the adjoint vector for the time-optimal control m^i(t), to obtain the scalar equality

\int_{t_0}^{t_f^*} \lambda_0^{iT}\, e^{-A\tau}\, B\, m^i(\tau)\, d\tau = \int_{t_0}^{t_f^*} \lambda_0^{iT}\, e^{-A\tau}\, B\, m^j(\tau)\, d\tau        (6-147)

The system is assumed to be normal. Therefore, the optimal controls m^i(t) and m^j(t) and trajectories x^i(t) and x^j(t) are each uniquely defined by the optimal initial adjoint vectors \lambda_0^i and \lambda_0^j, by virtue of Eqs. (6-134) and (6-138). Furthermore, because each of these optimal controls must absolutely minimize the Hamiltonian, it follows from Eq. (6-130) that

\lambda^{iT}(t)\, B\, m^i(t) \leq \lambda^{iT}(t)\, B\, m^j(t)


d^{(i)}(t_f) = \lambda^{(i)}(t_f) - \lambda^{(0)}(t_f), \qquad i = 1, 2, \ldots, n

e^{(i)}(t_0) = \lambda^{(i)}(t_0) - \lambda^{(0)}(t_0), \qquad i = 1, 2, \ldots, n        (6-176)

Then from Eq. (6-175),

d^{(i)}(t_f) = \Phi_{22}(t_f; t_0)\, e^{(i)}(t_0), \qquad i = 1, 2, \ldots, n        (6-177)

Let D(t_f) be the n × n matrix whose ith column is the n × 1 vector d^{(i)}(t_f), and let E(t_0) be the n × n matrix whose ith column is e^{(i)}(t_0). Then, Eq. (6-177) may be written

D(t_f) = \Phi_{22}(t_f; t_0)\, E(t_0)

which may be solved to yield

\Phi_{22}(t_f; t_0) = D(t_f)\, E^{-1}(t_0)        (6-178)

This assumes E^{-1}(t_0) exists, which only requires that we choose the (n + 1) vectors λ^{(i)}(t_0) so that the e^{(i)}(t_0) are linearly independent.

\lambda^{(0)}(t_0) = \begin{pmatrix} 0.365 \\ 0.275 \end{pmatrix}, \quad \lambda^{(1)}(t_0) = \begin{pmatrix} 0.446 \\ 0.338 \end{pmatrix}, \quad \lambda^{(2)}(t_0) = \begin{pmatrix} 0.395 \\ 0.278 \end{pmatrix}

and the corresponding values of the final adjoint vectors are

\lambda^{(0)}(t_f) = \begin{pmatrix} 0.036 \\ 0.005 \end{pmatrix}, \quad \lambda^{(1)}(t_f) = \begin{pmatrix} 0.138 \\ -0.068 \end{pmatrix}, \quad \lambda^{(2)}(t_f) = \begin{pmatrix} 0.061 \\ -0.040 \end{pmatrix}

Then,

D(t_f) = \begin{pmatrix} 0.102 & 0.025 \\ -0.073 & -0.045 \end{pmatrix}, \qquad E(t_0) = \begin{pmatrix} 0.081 & 0.030 \\ 0.063 & 0.003 \end{pmatrix}

Equation (6-180) then yields

\lambda(t_0) = \begin{pmatrix} 0.365 \\ 0.275 \end{pmatrix} - \begin{pmatrix} 0.53 & -0.38 \\ 0.95 & 0.46 \end{pmatrix}\begin{pmatrix} 0.036 \\ 0.005 \end{pmatrix} = \begin{pmatrix} 0.348 \\ 0.239 \end{pmatrix}

which is very close to the value

\lambda(t_0) = \begin{pmatrix} 0.346 \\ 0.236 \end{pmatrix}

obtained by trial in Fig. E6-1-1.

Control signal iteration
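The arithmetic of this correction step is easy to reproduce. The script below is my own (NumPy assumed); it builds E and D from the perturbation definitions of Eqs. (6-176) and (6-177) and applies the correction of Eq. (6-180), using the numbers quoted in the text:

```python
import numpy as np

lam0_t0 = np.array([0.365, 0.275])          # nominal initial adjoint
lam_t0 = [np.array([0.446, 0.338]),         # perturbed initial adjoints
          np.array([0.395, 0.278])]
lam0_tf = np.array([0.036, 0.005])          # resulting final adjoints
lam_tf = [np.array([0.138, -0.068]),
          np.array([0.061, -0.040])]

E = np.column_stack([v - lam0_t0 for v in lam_t0])   # columns e^(i)(t0)
D = np.column_stack([v - lam0_tf for v in lam_tf])   # columns d^(i)(tf)

# corrected initial adjoint: lam(t0) = lam0(t0) - E D^{-1} lam0(tf)
lam_corr = lam0_t0 - E @ np.linalg.solve(D, lam0_tf)
print(np.round(lam_corr, 3))
```

Solving the linear system with `np.linalg.solve` avoids forming D^{-1} explicitly; the result matches the corrected value quoted above.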

Equation (6-74) is also true for perturbations about an arbitrary (not necessarily optimal) trajectory, when written without the asterisk notation

\delta S(x(t_f)) = \int_{t_0}^{t_f} \left(\frac{\partial H}{\partial m}\right)^T \delta m\, d\tau        (6-181)

as is easily verified by repeating the derivation. This gives the variation in the performance criterion caused by a variation in the control signal. If δm is always chosen to make δS negative, or at least nonpositive, then we


should approach a locally minimum value of S and, hence, a trajectory satisfying the minimum principle. This is the basis for control signal iteration. The typical optimal control problem requires simultaneous solution of Eqs. (6-54), (6-62), (6-166), and (6-167), with initial conditions on x and final conditions on λ. To use control signal iteration, we simply assume a control signal m^{(1)}(t), so that Eq. (6-54) can be integrated forward from the known initial condition on x to obtain x^{(1)}(t), and Eq. (6-62) can be integrated backward from the known condition on λ at t_f, to t_0, to obtain λ^{(1)}(t). Finally, knowledge of x^{(1)}(t) and λ^{(1)}(t) enables computation of (\partial H/\partial m)^{(1)}(t). A new control is then computed from

m^{(2)}(t) = m^{(1)}(t) - W(t)\left(\frac{\partial H}{\partial m}\right)^{(1)}        (6-182)

where W(t) is a positive-semidefinite weighting matrix, so that the variation is \delta m = -W(t)(\partial H/\partial m)^{(1)} and Eq. (6-181) gives

\delta S = -\int_{t_0}^{t_f} \left(\frac{\partial H}{\partial m}\right)^{(1)T} W(\tau) \left(\frac{\partial H}{\partial m}\right)^{(1)} d\tau \leq 0        (6-183)

which shows that m^{(2)} will produce a lower (or equal) value of S and is therefore closer to a solution satisfying the minimum principle. Of course, Eq. (6-181) is correct only for small δm. Large changes in m may violate Eq. (6-183) and must therefore be avoided. The new m^{(2)}(t) is now used as the guess, and the entire procedure is repeated to yield a δ^{(2)}m(t), etc. The procedure is discontinued when S cannot be reduced further.

We next consider how to specify the weighting matrix W(t). We give a method based on steepest descent. The object of this method is to change m in such a manner as to give the largest value of -δS(x(t_f)) for a given size of change, δm. To determine the size of change δm, we define a distance between two signals m(t) as

(\delta s)^2 = \int_{t_0}^{t_f} (\delta m)^T G(\tau)\, \delta m\, d\tau        (6-184)

Here G(t) is a symmetric, positive-definite matrix used to define the distance, and we use only the positive root so that δs ≥ 0. This is simply a generalization of the norm defined in Eq. (A-3) to a time-varying vector. The matrix G(t) is specified to describe distance adequately. For example, we may be able to estimate m(t) near t_f very accurately, as in a free end-point problem where it is known in advance that m(t_f) = 0. (See, for example, Eq. (6-107) with λ(t_f) = 0.) Then, we would choose the components of G(t) to be large near t_f, so that an attempt to vary m near t_f will result in a large distance penalty. If we consider infinitesimal changes in m:

Chap. 6

267

Numerical Procedures for Optimization by the Minimum Principle

\int_{t_0}^{t_f} \left(\frac{dm}{ds}\right)^T G(\tau) \left(\frac{dm}{ds}\right) d\tau = 1        (6-185)

The corresponding infinitesimal change in S follows from Eq. (6-181) as a ratio to ds:

\frac{dS(x(t_f))}{ds} = \int_{t_0}^{t_f} \left(\frac{\partial H}{\partial m}\right)^T \left(\frac{dm}{ds}\right) d\tau        (6-186)

Our objective is steepest descent. Therefore, for a given arbitrarily small δs, we wish to minimize δS(x(t_f)). To do this we choose dm/ds to minimize dS/ds in Eq. (6-186) subject to the constraint in Eq. (6-185). The constraint really defines the given distance δs and restricts the values of δm to those which give a new control that lies the given distance δs from the old control. The minimization of Eq. (6-186) subject to Eq. (6-185) is solved by means of the minimum principle. As in Eq. (6-90), the Lagrange multiplier p is introduced, and we seek to minimize

S_p = \int_{t_0}^{t_f} \left[\left(\frac{\partial H}{\partial m}\right)^T \frac{dm}{ds} + p \left(\frac{dm}{ds}\right)^T G(\tau)\, \frac{dm}{ds}\right] d\tau

We define the state variable

y(t) = \int_{t_0}^{t} \left[\left(\frac{\partial H}{\partial m}\right)^T \frac{dm}{ds} + p \left(\frac{dm}{ds}\right)^T G(\tau)\, \frac{dm}{ds}\right] d\tau

Then the state equation is

\frac{dy}{dt} = \left(\frac{\partial H}{\partial m}\right)^T \frac{dm}{ds} + p \left(\frac{dm}{ds}\right)^T G(t)\, \frac{dm}{ds}

and we seek to minimize y(t_f). The Hamiltonian H_p for this problem is

H_p = \nu\left[\left(\frac{\partial H}{\partial m}\right)^T \frac{dm}{ds} + p \left(\frac{dm}{ds}\right)^T G(t)\, \frac{dm}{ds}\right]

and the adjoint variable ν(t) satisfies

\frac{d\nu}{dt} = -\frac{\partial H_p}{\partial y} = 0

Further, since ν(t_f) = 1, we must have ν(t) ≡ 1. The optimum dm/ds is obtained by minimization of H_p with respect to dm/ds. Thus we take

\frac{\partial H_p}{\partial (dm/ds)} = \frac{\partial H}{\partial m} + 2p\, G(t)\, \frac{dm}{ds} = 0

which yields

\frac{dm}{ds} = -\frac{1}{2p}\, G^{-1}(t)\, \frac{\partial H}{\partial m}        (6-187)

If we take the second derivative, as was done in Eq. (6-108), we obtain 2pG(t). Since we shall presently show that we take p > 0, this guarantees


that Eq. (6-187) provides a minimum. (In fact, since dm/ds is presumably unconstrained, this must be the absolute minimum and, therefore, gives the absolute minimum of dS/ds.) Substitution of Eq. (6-187) into Eq. (6-185) and solution of the result for p yields

p = \frac{1}{2}\left[\int_{t_0}^{t_f} \left(\frac{\partial H}{\partial m}\right)^T G^{-1}(\tau)\, \frac{\partial H}{\partial m}\, d\tau\right]^{1/2}        (6-188)

where we take the positive root. (Taking the negative root will also satisfy Eq. (6-185). However, Eq. (6-187) will then yield a maximum of H. In fact, this merely gives the direction of steepest ascent as the opposite of the direction of steepest descent.) Combination of Eqs. (6-182), (6-187), and (6-188) for finite δm shows that the weighting matrix W(t) will give steepest descent if it is chosen as

W(t) = \delta s \left[\int_{t_0}^{t_f} \left(\frac{\partial H}{\partial m}\right)^T G^{-1}(\tau)\, \frac{\partial H}{\partial m}\, d\tau\right]^{-1/2} G^{-1}(t)        (6-189)

where δs is the finite distance that m is to be moved. Equation (6-189) simply says that W(t) should be proportional to the inverse of the norm-determining matrix. The proportionality constant is computed only to determine the distance represented by the change δm. The direction of steepest descent is dependent only on G^{-1}(t)(\partial H/\partial m).

If Eq. (6-182) results in a new m(t) for which some or all of the components violate constraints, each such component should be set equal to its constraint value and be allowed to remain on the constraint through succeeding iterations, until either the indicated δm moves the component back to the allowable region, or else until the procedure is terminated because S cannot be reduced further. This is in accord with the principles set forth in Eqs. (6-77) and (6-78).

EXAMPLE 6-17

Consider minimization of the performance criterion

J(m) = ∫_{t₀}^{t_f} (x₁² + ρm²) dt

for the simple system

dx₁/dt = m    (E6-17-1)

with x₁(t₀) = x₁₀. This is the problem considered in Example 6-4 with a = 0. The additional state equation, as in Example 6-4, is

dx₂/dt = x₁² + ρm²

Then,

H = λ₁m + λ₂(x₁² + ρm²)

and the adjoint equations are

dλ₁/dt = −2x₁λ₂,    dλ₂/dt = 0

The final condition is

λ(t_f) = (0, 1)ᵀ    (E6-17-2)

which gives λ₂(t) ≡ 1. Then,

∂H/∂m = λ₁ + 2ρm    (E6-17-3)

and

(E6-17-4)

Although this problem can be solved exactly as in Example 6-4, we solve it here by control signal iteration. We choose as the initial guess the simple control

m(t) = −x₁₀ (t_f − t)² / (t_f − t₀)²

      t      −m(t)/x₁₀    −m*(t)/x₁₀
      0        1            0.965
      0.25     0.765        0.742
      0.5      0.562        0.566
      1.0      0.250        0.313
      1.5      0.062        0.138
      2.0      0            0

Further iterations will steadily reduce the differences, provided the step sizes are kept small. After each step to find a new m(t), it is a good policy to use this m to compute the new S. This requires computation of x, which is needed for the next iteration in any case. If the new S is not less than the previous value, the m should be discarded, and a smaller step size used with the previously computed δm to find a new m. When the step size required to obtain a reduction in S becomes too small, the procedure is terminated, and the last m(t) is taken as optimal.
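The control-signal iteration just described can be sketched numerically. The following is a minimal illustration of the procedure for Example 6-17 (it is not the book's program); the grid size, step size, and iteration count are illustrative choices, and the step-size halving on failure follows the policy stated above.

```python
import numpy as np

# Steepest-descent control iteration for Example 6-17:
# minimize J = integral of (x1^2 + rho*m^2) subject to dx1/dt = m, x1(0) = x10.
rho, x10, tf, N = 0.1, 1.0, 2.0, 200
dt = tf / N

def cost_and_gradient(m):
    # Forward Euler integration of the state equation dx1/dt = m.
    x = np.empty(N + 1)
    x[0] = x10
    for k in range(N):
        x[k + 1] = x[k] + dt * m[k]
    J = np.sum(x[:-1] ** 2 + rho * m[:-1] ** 2) * dt
    # Backward integration of the adjoint: dlam/dt = -2*x1, lam(tf) = 0.
    lam = np.empty(N + 1)
    lam[-1] = 0.0
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] + dt * 2.0 * x[k]
    return J, lam + 2.0 * rho * m      # gradient density dH/dm = lam + 2*rho*m

m = -x10 * np.ones(N + 1)              # crude initial guess
J, g = cost_and_gradient(m)
eps = 0.2
for _ in range(300):
    m_new = m - eps * g
    J_new, g_new = cost_and_gradient(m_new)
    if J_new < J:                      # accept the step only if J decreases,
        m, g, J = m_new, g_new, J_new
    else:                              # otherwise discard it and shrink eps
        eps *= 0.5
```

Each pass integrates the state forward, the adjoint backward, and moves m against the gradient of H, exactly the loop described in the text.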

Dynamic Programming

The preceding material of this chapter on continuous systems optimization has been based on Pontryagin's methods. We now present a different approach called dynamic programming. The originator and foremost proponent of this technique is Bellman.12,13 As before, we give an introductory, heuristic development of the technique.

Principle of optimality

The essence of dynamic programming lies in the principle of optimality:12 "An optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal

policy with regard to the state resulting from the first decision." (Here, policy and decision should be regarded as referring to the manipulated signal m(t); policy to the entire choice of m(t) on the interval t₀ ≤ t ≤ t_f, and decision to each instantaneous value m(t).) To explain this principle, we consider choosing m(t) to minimize the performance criterion

J₀(m; x(t₀), t₀) = ∫_{t₀}^{t_f} F[x(t), m(t), t] dt    (6-190)

for a system described by

dx/dt = f(x, m, t)    (6-191)

with a specified initial state x(t₀). Here F is a scalar function, such as that in Eq. (6-97). We denote by m[t₀, t_f] the entire vector function m(t) during the interval t₀ ≤ t ≤ t_f. Note that the initial state and time are included as arguments of J₀. As before, an asterisk will indicate the optimal value; m*[t₀, t_f] is the optimal control and x*(t) is the optimal trajectory. Let t₁ be a time such that t₀ < t₁ < t_f. Then x*(t₁) is the state reached at time t₁ by application of the control m*[t₀, t₁) because of the uniqueness and continuity of the solution of Eq. (6-191) (see Appendix A). Here, m*[t₀, t₁) represents the entire vector function m*(t) for t₀ ≤ t < t₁. The principle of optimality states that the control which minimizes

J₁(m; x*(t₁), t₁) = ∫_{t₁}^{t_f} F[x(t), m(t), t] dt    (6-192)

is m*[t₁, t_f]. The proof of this follows easily by contradiction. Suppose the optimal control is not m*[t₁, t_f] but, rather, is some other control m̄[t₁, t_f]. Then we reconsider optimization of Eq. (6-190) using the control m⁺[t₀, t_f] given by

m⁺(t) = m*(t),  t₀ ≤ t < t₁
      = m̄(t),   t₁ ≤ t ≤ t_f
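The principle of optimality is easiest to see in a discrete setting, where it becomes backward induction: the tail of an optimal rollout is itself optimal for the tail problem. The sketch below uses an illustrative discrete analogue (stage cost x² + m², clipped dynamics x' = x + m); all numbers are my choices, not from the text.

```python
# Backward induction: V[k][x] is the minimal cost-to-go from state x at stage k.
N = 5
STATES = range(-3, 4)
MOVES = (-1, 0, 1)

def step(x, m):                        # clipped dynamics x' = x + m
    return max(-3, min(3, x + m))

V = {N: {x: 0.0 for x in STATES}}      # terminal cost is zero
policy = {}
for k in range(N - 1, -1, -1):
    V[k], policy[k] = {}, {}
    for x in STATES:
        cost, move = min(
            (x * x + m * m + V[k + 1][step(x, m)], m) for m in MOVES)
        V[k][x], policy[k][x] = cost, move

def rollout(x, k0=0):
    """Accumulated cost of applying the optimal policy from (k0, x)."""
    total = 0.0
    for k in range(k0, N):
        m = policy[k][x]
        total += x * x + m * m
        x = step(x, m)
    return total
```

The rollout cost from the initial state equals V[0][x], and after the first decision the remaining cost equals V[1][x₁]: the remaining decisions are an optimal policy for the state resulting from the first decision.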


since the control is time optimal. To use Z-transform methods based on output, we must redefine the output variable in terms of its deviation about the initial steady


state. Thus, in terms of an output variable c(t) defined about the initial steady state, the response is

c(0) = 0
c(T) = 1 − a/(1 + a) = 1/(1 + a)
c(iT) = 1,    i ≥ 2

The Z-transform of this is

C(z) = (z + a) / [(1 + a)z(z − 1)]

Since the input is a unit-step change,

R(z) = z/(z − 1)

so that the desired transmission ratio is

T(z) = C(z)/R(z) = (z + a) / [(1 + a)z²]

The system transfer function may be calculated from the state equations. These are equivalent to

d²c/dt² + 3 dc/dt + 2c = m

where c = x₁ is the output variable. Hence,

G_p(s) = C(s)/M(s) = 1/[(s + 1)(s + 2)]

We obtain the impulse response by inverting this to

g(t) = e⁻ᵗ − e⁻²ᵗ

Applying Eq. (2-40), we obtain

g'(t_k − t₀) = (1 − a)a^{k−1} − ½(1 − a²)a^{2(k−1)}

as the delta-function response. The Z-transform of this,

Z{g'(t_k − t₀)} = (1 − a)²(z + a) / [2(z − a)(z − a²)]


is used as G_p(z) in Eq. (2-33) with the T(z) already computed, to obtain the desired digital compensator

D(z) = 2(z − a²)(z − a) / [(1 − a)²(1 + a)(z + a/(1 + a))(z − 1)]
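The algebra above can be checked numerically. The sketch below assumes that Eq. (2-33) is the usual unity-feedback synthesis relation D = T/[G_p(1 − T)], so that the closed loop D·G_p/(1 + D·G_p) must reproduce T(z) at every point of the z-plane; T = 1 sec is an arbitrary sampling period for the check.

```python
import math

T_s = 1.0
a = math.exp(-T_s)                    # a = e^{-T}

def Gp(z):                            # Z-transform of the delta-function response
    return (1 - a) ** 2 * (z + a) / (2 * (z - a) * (z - a * a))

def Tz(z):                            # desired transmission ratio
    return (z + a) / ((1 + a) * z * z)

def D(z):                             # compensator from D = T / [Gp (1 - T)]
    return 2 * (z - a * a) * (z - a) / (
        (1 - a) ** 2 * (1 + a) * (z + a / (1 + a)) * (z - 1))

def closed_loop(z):
    L = D(z) * Gp(z)
    return L / (1 + L)
```

Evaluating `closed_loop` at any test point away from the poles confirms that the compensator yields the desired T(z).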

TABLE 10-1. CHARACTERISTICS FOR TRANSIENTS FROM THE PROCESS G_p(s) = 1/(10s + 1)¹⁰

            10-5a   10-5b   10-5c   10-5d   10-5e   optimal (bound)
t_10%        142     153      90      93     107        >79
t_5%         156     170     145      97     115        >80

t_r   = first time at which c(t) = c(∞), or e(t_r) = 0.
t_10% = time after which c(t) remains within ±0.1[c(∞) − c(0)] of c(∞).
t_5%  = time after which c(t) remains within ±0.05[c(∞) − c(0)] of c(∞).

Definitions of the terms overshoot, rise time t_r, and response times t_5%, t_10% may be found in Coughanowr and Koppel.1 Because the system has such a long effective delay, switching must occur before c(t) leaves zero. This makes it difficult to guess switching times. To obtain initial guesses, the step response of this system, shown in Fig. 10-5a, was modeled graphically2 to obtain

G_m(s) = exp(−55s) / [(40s + 1)(7s + 1)],    t₁* = 20.1,    t₂* = 22.9

The transient resulting from these switching times, shown in Fig. 10-5b, was fitted digitally to obtain the model

G_m(s) = exp(−51s) / [(27s + 1)(24s + 1)],    t₁* = 25.4,    t₂* = 31.2

The transient for these switching times, shown in Fig. 10-5c, is considerably improved over the first two curves, with improved rise and response times (Table 10-1). Figures 10-5d and 10-5e are given for further illustration. They show reductions in overshoot at the expense of longer rise times. The switching times in Figs. 10-5d and 10-5e were selected based upon judgments

[Figures 10-5a–e: recorder traces of the step and switched transients for G_p(s) = 1/(10s + 1)¹⁰; see Table 10-1.]

Analysis of Programmed Time-Optimal Control

from Figs. 10-5a, 10-5b, and 10-5c, to reduce overshoot. Figure 10-5c represents a reasonable compromise between overshoot and rise time that can be achieved with second-order switching. Notice that the switching times for Fig. 10-5e are 35% less than those for Fig. 10-5d, yet the responses are very similar. The true time-optimal response for this system in theory requires nine switching reversals to bring the output precisely to rest. Although we cannot calculate this response, we can specify some bounds on its characteristics which will serve as useful comparisons. The true optimal response cannot reach e(t) = 0 more rapidly than the transient resulting from application of full forcing m = K until (or beyond) e(t) = 0. The rise time for such a curve (t_r = 82) is listed in Table 10-1 as a lower bound for the optimal response. If all higher derivatives were zero at time t_r (which they are not), the response would remain at rest with no overshoot. In such a case, the response time, t_10% or t_5%, would be less than t_r. These times are listed in Table 10-1 as lower bounds for the optimal response. The suboptimal response in Fig. 10-5c obtained from the modeling procedure is a good compromise, with rise and 10%-response times which do not differ greatly from the true optimum, by virtue of the lower bounds presented.
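The long effective delay of G_p(s) = 1/(10s + 1)¹⁰ can be reproduced by simulating the plant as ten cascaded first-order lags. The sketch below (my construction; step size, horizon, and the crude half-rise measure are illustrative choices) generates the step response whose slow initial movement is what makes the switching times so hard to guess.

```python
# Euler simulation of Gp(s) = 1/(10s + 1)^10 as ten cascaded first-order lags.
def simulate(m_of_t, t_end, dt=0.05, n_lags=10, tau=10.0):
    x = [0.0] * n_lags
    t, out = 0.0, []
    while t <= t_end:
        u = m_of_t(t)
        for i in range(n_lags):
            inp = u if i == 0 else x[i - 1]
            x[i] += dt * (inp - x[i]) / tau   # dx_i/dt = (input - x_i)/tau
        t += dt
        out.append((t, x[-1]))
    return out

step = simulate(lambda t: 1.0, 400.0)          # unit-step forcing
t50 = next(t for t, c in step if c >= 0.5)     # crude half-rise time
```

A bang-bang trial is obtained by replacing the constant forcing with a lambda that switches sign at guessed times t₁* and t₂*, which is how the transients of Figs. 10-5b–e were generated on the analog equipment.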

Nonlinear Exothermic Reactor. To further study the utility and limitations of the design procedure, we next consider a highly nonlinear, exothermic reactor simulation. Orent4 simulated and controlled a modified version of the Aris-Amundson,5 Grethlein-Lapidus6 continuous stirred-tank reactor, for the irreversible exothermic reaction A → B with first-order kinetics. The modification involved the addition of cooling coil dynamics. The reaction rate constant is

k = k₀ exp(−E/RT)

where

k  = Arrhenius reaction rate constant; sec⁻¹
k₀ = frequency factor; 7.86 × 10¹² sec⁻¹
E  = activation energy; 28,000 cal/mol
R  = gas constant; 1.987 cal/mol-°K
T  = absolute temperature of reactor; °K

The mass balance on the reactor contents, assuming uniform mixing, is

V dE/dt = FE₀ − FE − VkE

where

V  = material volume; 1000 cc
E  = concentration of A in exit; mol/cc
E₀ = concentration of A in inlet; 6.5 × 10⁻³ mol/cc
F  = volumetric flow; 10 cc/sec


If no heat transfer occurs with the surroundings, and if the physical properties of inlet and outlet streams are identical, the energy balance is

Vρu dT/dt = Fρu(T₀ − T) − ΔH·VkE − UA(T_c − T_c0) / ln[(T − T_c0)/(T − T_c)]

where

ρ    = density; 1 gm/cc
u    = heat capacity; 1 cal/gm-°K
T    = temperature of reactor and exit stream; °K
T₀   = temperature of inlet stream; 350°K
−ΔH  = exothermic heat of reaction; 27,000 cal/mol
UA   = overall heat transfer coefficient times cooling coil area; 7 cal/sec-°K
T_c0 = inlet coolant temperature; 300°K
T_c  = exit coolant temperature; °K

An approximate lumped energy balance for the cooling coil is

(V_c ρ_c u_c / 2) dT_c/dt = UA(T_c − T_c0) / ln[(T − T_c0)/(T − T_c)] − F_c ρ_c u_c (T_c − T_c0)

with

V_c = coil volume; 100 cc
ρ_c = coolant density; 1 gm/cc
u_c = coolant heat capacity; 1 cal/gm-°K
F_c = coolant flow; 0 ≤ F_c ≤ 20 cc/sec

This cooling coil equation is added to impart higher-order dynamic effects. The manipulated variable is the bounded coolant flow rate. The output temperature is also delayed to introduce an additional six-second dead time, representative of the higher-order lags which would occur in the physical system. Only control about the stable, high-temperature steady state

T_s = 460°K,    T_cs = 419°K,    E_s = 0.162 × 10⁻³ mol/cc,    F_cs = 5.13 cc/sec

is considered here. Orent4 reported responses to ±1-cc/sec step changes in F_c. He modeled these to the transfer function

ΔT(s)/ΔF_c(s) = −6.1 exp(−11s) / (70s + 1)    °K per cc/sec
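The reported steady state can be used to check the reactor balances for consistency. The sketch below (my construction, not a dynamic simulation) evaluates the residuals of the mass, energy, and coil balances at T = 460°K, T_c = 419°K, F_c = 5.13 cc/sec, with ρu = ρ_c u_c = 1 so those factors drop out.

```python
import math

# Reactor parameters from the text.
k0, E_act, R = 7.86e12, 28000.0, 1.987
V, F, E0 = 1000.0, 10.0, 6.5e-3        # cc, cc/sec, mol/cc
T0, dH, UA, Tc0 = 350.0, -27000.0, 7.0, 300.0

def k(T):                              # Arrhenius rate constant, 1/sec
    return k0 * math.exp(-E_act / (R * T))

T, Tc, Fc = 460.0, 419.0, 5.13
E = F * E0 / (F + V * k(T))            # steady-state mass balance for E

# Log-mean coil heat removal, cal/sec.
q_coil = UA * (Tc - Tc0) / math.log((T - Tc0) / (T - Tc))
r_energy = F * (T0 - T) - dH * V * k(T) * E - q_coil   # reactor energy residual
r_coil = q_coil - Fc * (Tc - Tc0)                      # coil energy residual
```

The computed E agrees with the reported 0.162 × 10⁻³ mol/cc, and both energy residuals are small compared with the ~1700 cal/sec heat-generation term, which supports the balances as written.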

The analysis in Example 6-14 suggests that the true time-optimal control of this process is bang-bang. The response to a step in Fe from 5.13 to 4.13 is shown in Fig. 10-6a. A bang-bang transient from guessed switching times is shown in Fig. 10-6b,


TABLE 10-2. MODELING NONLINEAR REACTOR

                                                      Applied values   Revised values
Figure   Model                                          t₁*    t₂*       t₁*    t₂*
10-6a    step decrease in F_c
10-6b    −5.1(0.1s + 1) exp(−5.7s)
          /[(25.7s + 1)(6.9s + 1)]                     10.6   12.6      11.1   12.3
10-6c    −5.7(0.3s + 1) exp(−7.0s)
          /[(33.0s + 1)(4.6s + 1)]                     10.9   12.1      11.3   12.3
10-6d    slight overshoot                              12.1   12.7
10-7a    step increase in F_c
10-7b    −4.4(−0.5s + 1) exp(−4.0s)
          /[(126.4s + 1)(6.2s + 1)]                    14.7   15.8      12.6   20.1
10-7c    −5.0(−0.7s + 1) exp(−5.1s)
          /[(141.2s + 1)(3.7s + 1)]                    12.6   20.2      12.0   16.8
10-7d    probably nearly optimum                       12.5   17.9

For increased modeling accuracy, a numerator time constant was allowed. However, switching times were based entirely on the denominator (see Chapter 6, p. 251), since these times will still, in theory, drive the output to rest at zero.2

and the modeling results are listed in Table 10-2. The predicted switching times were duplicated closely for the improved response in Fig. 10-6c, which was fitted by a somewhat different transfer function but yielded essentially the same revised switching times. This demonstrates that modeling nearly optimum transients gives repetition of switching times; in other words, modeling convergence is maintained. The undershoot of Fig. 10-6c prompted the slight increase in t₁* and t₂* for Fig. 10-6d. In view of this transient and the accuracy of the time measurements, the model times, 11.3 and 12.3, give an excellent response. Notice that the model major time constant is less than half that obtained by Orent4 for the simple step response. The response to a returning step, F_c from 4.13 to 5.13, is shown in Fig. 10-7a. A bang-bang transient from guessed switching times is shown in Fig. 10-7b, and the modeling results are in Table 10-2. The predicted switching times gave the improved response in Fig. 10-7c which, in turn, was modeled. The revised model gave predicted switching times 12.0 and 16.8. In view of Fig. 10-7d, obtained by trial, these times are further improvements. Notice that the model major time constant now is twice that for the simple step response. This nonlinear reactor can have three possible steady states (one is unstable) for a single F_c. Bang-bang forcing between full cooling, F_c = 20, and adiabatic operation, F_c = 0, is very severe. Increasing temperature responses requiring F_c = 0 initially are faster than the returning responses requiring F_c = 20 initially. This illustrates a limitation of the method if design is restricted to fixed model parameters and the system is highly nonlinear, since the parameters depend upon direction. Nevertheless, the bang-bang response

[Figure 10-8: experimental apparatus — supply tank, deadtime flush line and deadtime sections, rotameter, power controller, analog computer, load heater and valve, bayonet heater, power meters, manifold, and thermocouples TC1, TC2, TC3.]

(11-43)

The response is desired in terms of c(tk). Therefore, we use Eqs. (11-4) and (11-40) to obtain (11-44)

Chap. 11


Design of Digital Process Controllers

By use of Eqs. (11-5), (11-41), and (11-43) and the definition of y, and with the assumption that the system is initially at rest (dc/dt = 0) at c₀, Eq. (11-44) yields for the output at sampling instants

c(t_k) = −(α − β)(bξ + θ)λ^{k−1} c₀ / [(αθ + βξ)(1 − b)]    (11-45)

One can show that |λ| < 1; hence, the optimal response is always bounded at sampling instants.
(11-46) where

+ a()0++ ~/3~

-

-k1 - I

-k _(a+ 2-

1)0

(11-47)

+ b(/3 +

a()+

/3~

1)~

(11-48)

The response in terms of y is obtained from Eqs. (11-9) and (11-4) as y(tk+t)

= p-tGPy(tk) + p-tbm(tk)

which expands to Yt(tk+t) ~ P11Yt(tk)

+ Pt2Y2(tk) -

Kp(Ptt - 1)m(tk)

(11-49)

Y2(tk+t)

+ P22Y2(tk) -

KpP21m(tk)

(11-50)

:.::=

P21Yt(tk)

where _ ba- {3 Ptt - 1 - 1 - b

-bT

P12

= 1 _ b(a - /3) = -hT 2 P21 _

P22-

1

+

a- b/3 l _ b

(11-51) (11-52) (11-53)

Since it is not expected that y 2(tk), the derivative of the process output, will be measured, we use Eqs. (11-49) and (11-50) to compute it from the output c. This computation utilizes the fact that the system is observable to compute the state from the output. From Eq. (11-49) c(tk)

= PuYt(tk-t) + P12Y2(tk-1) - Kp(Pu - 1)m(tk-t)

374

Design of Digital Process Controllers

Chap. 11

and Combining these gives y 2(tk_ 1 )

= c(tk)- Pttc(tk-1) + KrlP11 -

1)m(tk_ 1)

P12

Then, using this with Eq. (11-50), we obtain

y 2(tk) == P22c(tk)- Dc(tk-t)

+ Kp(D- P2 2)m(tk_ P12

1)

(11-54)

where D == p 11 p 22 - p 12 p 2•. In essence, Eq. (11-54) uses the process dynamics, with values of the present and previous output and the previous input, to compute the present derivative. Combining Eqs. (11-46) and (11-54) gives the basic implementable algorithm

m(tk) = -

K1 (k. + 'Tk2P22)c(tk) + Tk2% c(tk-l)- 'Tk2(D- P22)m(tk-t) P12 P12 P12 P

P

(11-55) Using Eq. (11-55), we can compute the optimal m(tk) from past and present output and past input. Equation (11-55) is written in terms of variables m and c defined as deviations about the desired steady-state condition. Equation (11-1) is, of course, written in terms of deviation variables and results from a differential equation originally written in terms of absolute values, ma and ca, of input and output signals (11-56) where m 0 is a nonhomogeneous term which corresponds to the value of m yielding a zero output at steady state. The steady-state relation between absolute values of input and output is

Cs == Kp(ms - mo)

(11-57)

and the deviation variables in Eq. ( 11-1) are defined as

c(t) == Ca(t) -- Cs m(t) == ma(t) -- ms = ma(t) - mo -

f(

(11-58)

p

Use of Eq. (11-58) in Eq. (11-55) converts the control algorithm to absolute values of input and output

ma, ( tk ) == --K1 (k 1

+ 'Tk2P22) Ca( tk) --f- 'Tk2D K Ca ( tk-1 )

P12 P _ 'Tk2(D -- P22) '(t ) + (1 -+- k.)cs ma k-t K P

P12

(11-59)

P

P12

where m~(t)

== ma(t) -- mo

(11-60)

376

Design of Digital Process Controllers

Chop. 11

where

A3 == A2 - A1 B

+ K PTT R

== 'Tk2 PI2

v(tk)

==

111 R(tk)

- mR(tk-l)

Equation (11-65) is used fork> 2; n1(t0) and n1(t 1) are computed from Eq. (11-59). Whenever a set-point change (new value of cs) is requested, k is reset to zero. This means the last term in Eq. (11-65) will be significant only during the first few sampling intervals following a set-point change. Equation (11-55) shows that, if c has been constant at c and m at £/KP, then 1n(t0) == -k 10Kw Therefore, (1 + k 1 ) is the closed-loop gain of the controller. Inclusion of delay time

If the plant contains delay time as in Eq. (11-1), we showed in Chapter 8 that a physically plausible redefinition of the criterion function results in an optimal control which is identical to that for the same plant without delay time. However, the optimal response is now delayed by a'T compared to the optimal response of the undelayed system. Therefore, to have a feedback realization, it is necessary to modify Eq. ( 11-46) to (11-66) The prediction of the future values, y 1(tk + a'T) and y2(tk + a'T), is accomplished through use of the process model. Current values y 1(tk) and Y2(tk), together with the known behavior of 111(1) over the interval tk - a'T < t < tb are sufficient to estimate y 1(tk + aT) and y 2(tk + a'T) from the response of Eq. (11-1 ). Let a'T == (j

+ v)T

where j is an integer, j == 0, 1, 2, ... , and v is a fraction, 0 < v the uncoupled state equations analogous to Eq. (11-6) are

~~=Ax+

dm(t - j T - vT)

(11-67)


2, i.e., after two sampling intervals beyond the introduction of a set-point change. The summation is eliminated from Eq. (11-64) if we write

mR(tk)- mR(tk-t) =

m~(tk)

-

m~(tk-I) + K 'j p

[e(tk)- e*(tk)]

R

which, upon use of Eq. (11-59), gives the working algorithm:

v(tk) == A1e(tk) - A2e(tk-1)

-+

TA.k-1 A3e(tk-2) - B(D- P22)v(tk-1) - K T e(t.) p

R

(11-65)


Using the same procedure which leads to Eq. (11-9), it follows from Eq. (11-68) that

x(t_{k+1+v}) = G x(t_{k+v}) + h m(t_{k−j})    (11-69)

which leads by induction to

x(t_{k+v+j}) = G^j x(t_{k+v}) + Σ_{i=1}^{j} G^{i−1} h m(t_{k−i})    (11-70)

Similarly,

x(t_{k+v}) = G^v x(t_k) + h_v m(t_{k−j−1})    (11-71)

where

h_v = ( (1 + α)^v − 1 )
      ( (1 + β)^v − 1 )    (11-72)

Combining Eqs. (11-66), (11-70), and (11-71) and converting from x to y, we obtain the optimal feedback control for the delayed case. However, as before, we wish to compute y₂(t_k) rather than measure it. Therefore, we use the same procedures as above to derive

(11-73)

Converting to y and proceeding exactly as in the derivation of Eq. (11-54), we obtain an expression for y₂(t_k) in terms of c(t_k), c(t_{k−1}), m(t_{k−j−1}), and m(t_{k−j−2}). Using this for y₂(t_k) results in a physically realizable feedback control

m_a'(t_k) = −(1/K_p)(k₁p₁₁^{j+v} + k₂T p₂₁^{j+v}) c_a(t_k)

  − [(k₁p₁₂^{j+v} + k₂T p₂₂^{j+v})/(p₁₂K_p)] {p₂₂c_a(t_k) − D^{1−v}c_a(t_{k−1}) + K_p(D^{1−v} − p₂₂)m_a'(t_{k−j−1}) + K_p(D − D^{1−v})m_a'(t_{k−j−2})}

  − Σ_{i=1}^{j} [k₁(p₁₁^{i+v−1} − p₁₁^{i+v}) + Tk₂(p₂₁^{i+v−1} − p₂₁^{i+v})] m_a'(t_{k−i})

  − [k₁(1 − p₁₁^v) − k₂T p₂₁^v] m_a'(t_{k−j−1}) + (1 + k₁)c_s/K_p    (11-74)

where

p₁₁^i = [(β + 1)^i − b(α + 1)^i]/(1 − b)

p₂₂^i = [(α + 1)^i − b(β + 1)^i]/(1 − b)

p₁₂^i = Tb[(α + 1)^i − (β + 1)^i]/(1 − b) = −bT² p₂₁^i

D^i = p₁₁^i p₂₂^i − p₂₁^i p₁₂^i    (11-75)


Note that p₁₁^i is not p₁₁ raised to the ith power; that the term in braces in Eq. (11-74) results from the estimate of y₂(t_k); that the variables m and c have been rewritten in terms of absolute values using the substitutions of Eq. (11-58), and Eq. (11-74) is therefore analogous to Eq. (11-59); and that if j = 0 it is understood that the summation over i is identically zero. Since p₁₁⁰ = p₂₂⁰ = 1, p₁₂⁰ = p₂₁⁰ = 0, and D¹ = D, Eq. (11-74) reduces to Eq. (11-59) for j = v = 0. As was true for Eq. (11-59), Eq. (11-74) contains no reset action. To incorporate this we first derive the result analogous to Eq. (11-61) which, for the delayed case, is

(11-76)

This is derived as follows: The exponential decay result of Eq. (11-44) may also be shown to apply between sampling instants, so that in the undelayed case for k > 1, c(t_{k+v}) = λ^{k−1}c(t_{1+v}). If this response is delayed, the first sampling instant at which the value of c can serve as a base for the exponential decay is t₂. Equation (11-76) merely delays this by jT to account for the delay. Then, defining e(t_k) as in Eq. (11-62) and v(t_k) as in Eq. (11-65), and proceeding as in the derivation of Eq. (11-65), we obtain the algorithm with reset action

v(t_k) = A₁^{j+v}e(t_k) − A₂^{j+v}e(t_{k−1}) + A₃^{j+v}e(t_{k−2})

  − Σ_{i=1}^{j} R^{i+v} v(t_{k−i}) − B^{j+v}[(D − D^{1−v})v(t_{k−j−2}) + (D^{1−v} − p₂₂)v(t_{k−j−1})]

  − P^v v(t_{k−j−1}) − [Tλ^{k−j−2}/(K_p T_R)] e(t_{j+2})    (11-77)

where

A₁^{j+v} = (1/K_p)[k₁(p₁₁^{j+v} + p₂₂p₁₂^{j+v}/p₁₂) + k₂T(p₂₁^{j+v} + p₂₂p₂₂^{j+v}/p₁₂) + T/T_R]

A₂^{j+v} = (1/K_p){k₁[p₁₁^{j+v} + (p₁₂^{j+v}/p₁₂)(p₂₂ + D)] + k₂T[p₂₁^{j+v} + (p₂₂^{j+v}/p₁₂)(D + p₂₂)]}

A₃^{j+v} = A₂^{j+v} − A₁^{j+v} + T/(K_p T_R)

P^v = k₁(1 − p₁₁^v) − k₂T p₂₁^v

B^{j+v} = (k₁p₁₂^{j+v} + k₂T p₂₂^{j+v})/p₁₂

R^{i+v} = k₁(p₁₁^{i+v−1} − p₁₁^{i+v}) + k₂T(p₂₁^{i+v−1} − p₂₁^{i+v})

At each set-point change, the index k is reset to zero. The correct control is computed from Eq. (11-74) for t₀, t₁, …, t_{j+2}, and from Eq. (11-77) thereafter, until the next set-point change.
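The run-time logic just stated — reset the sample index at every set-point change, use one equation during start-up and the other thereafter — can be sketched as a driver loop. Here `alg_74` and `alg_77` are hypothetical callables standing in for Eqs. (11-74) and (11-77); their actual formulas are given in the text.

```python
# Control-flow sketch of the switching between Eq. (11-74) and Eq. (11-77).
def run_controller(setpoints, outputs, j, alg_74, alg_77):
    moves, k, cs = [], 0, setpoints[0]
    for sp, c in zip(setpoints, outputs):
        if sp != cs:                   # set-point change: reset index k to zero
            cs, k = sp, 0
        if k <= j + 2:                 # start-up samples, Eq. (11-74)
            moves.append(alg_74(k, c, cs))
        else:                          # reset-action algorithm, Eq. (11-77)
            moves.append(alg_77(k, c, cs))
        k += 1
    return moves
```

With j = 0, the first three samples after each set-point change use Eq. (11-74), and control then reverts to it again at the next change.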


Design procedure

We summarize here the design procedure for the case with delay in the model.

1. Obtain from dynamic testing, or otherwise, the parameters K_p, τ, a, and b which describe the process dynamics.
2. Select a sampling period T.
3. Calculate α and β from Eqs. (11-12) and (11-13).
4. Choose a value of p, 0 ≤ p < 1. In general, p corresponds to the closed-loop gain; a high value of p corresponds to high gain, and vice versa. This will be evident from the responses to be discussed later.
5. Calculate the parameters given by Eqs. (11-18) and (11-19).
6. Calculate r from Eq. (11-34).
7. Calculate θ and ξ from Eqs. (11-36) and (11-37).
8. Calculate k₁ from Eq. (11-47). The quantity (1 + k₁) is effectively the dimensionless closed-loop gain. It is the initial change in control for a unit-step change in set point. If it is too high, choose a lower value of p, and vice versa. Repeat steps 4 through 8 until a satisfactory gain is achieved. Then calculate k₂ from Eq. (11-48).
9. Calculate λ from Eq. (11-42). This represents the complement of the fraction by which the set-point response decays each sampling period.
10. Calculate the theoretical set-point response at sampling instants from Eq. (11-45). This relation must be delayed by aT if there is delay time, but the shape of the response is identical to Eq. (11-45). If this response is too fast or too slow, try a different sampling period T.
11. Choose a reset time. Experience with the particular plant is the only guide to selection of this parameter, since it is used only to compensate for our ignorance of exact plant dynamics and load changes.
12. Calculate j and v from Eq. (11-67). Calculate the coefficients in Eq. (11-74) using the definitions given following Eq. (11-74). This equation is the control algorithm for t = t₀, t₁, …, t_{j+2}. Note that t_k = kT. The index k is reset to zero each time a set-point change occurs. The values m_a and c_a represent absolute values of the manipulated and output variables, and c_s represents the desired value of the output signal. A set-point change implies a change in the value of c_s.
13. Calculate the coefficients in Eq. (11-77) using the definitions given following Eq. (11-77). This equation is the control algorithm for t = t_{j+3}, t_{j+4}, …, until the next set-point change occurs, at which time control reverts to Eq. (11-74). The value v represents change in the manipulated variable, and e represents error, the difference between desired and actual values of plant output. If T/T_R ≠ 0,




[Figure E-2: Response of smoothing techniques to an exponential signal contaminated with noise; smoothed estimates by single and double smoothing are shown for T/τ = 0.1 and α = 0.5.]

of lines is drawn through the smoothed points in Fig. E-2 for graphical clarity, this signal should really be represented as a sequence of horizontal lines as is done in Fig. E-1, owing to the sampling process). Thus, it may be seen that the choice of α requires a compromise between noise rejection and speed of signal tracking. The lower the smoothing constant α, the better the noise rejection, but the slower the smoothed signal compared to the original. Returning to Eq. (E-1), if we follow the process suggested by this equation for a typical sequence of values of c_n, we find that its effect is to weight the values of the signal, taken n sampling intervals ago, by the factor α(1 − α)^{n−1} in computing the current smoothed value. This shows the exponential nature of the smoothing process. The older the value, the less heavily it is weighted in the current smoothed value. The advantage of this smoothing process is that only one previous number, c̄_{n−1}, need be stored in order to achieve this successive weighting of all past values of the signal. To gain further insight into the smoothing process of Eq. (E-1), we study its "response" to an input signal which is a unit-step function. To do this, we utilize the sampled-data theory developed in Appendix C. If the Z-transforms of c_n and c̄_n are denoted by C(z) and C̄(z), respectively, then the Z-transform of Eq. (E-1) is

C̄(z)/C(z) = αz / [z − (1 − α)]

(E-2)


Assuming a step change in the signal, C(z) = z/(z − 1), and inverting Eq. (E-2) by the method of partial fractions, we obtain the equation

c̄_n = 1 − (1 − α)^{n+1}    (E-3)

Equation (E-3) shows that, after a sufficient number of sampling intervals, the smoothed error signal will eventually rise to the current true value of the unsmoothed error signal. In fact, the response is so similar to that of a continuous first-order system that we are justifiably tempted to assume an analogy between Eq. (E-1) in the discrete process and a first-order system in the continuous process. This analogy will be developed further in the next section. Also note from Eq. (E-3) that the higher the value of α, the more rapidly the smoothed signal reaches the input signal, as anticipated.
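The recursion and its step response are easy to exercise directly. The sketch below implements Eq. (E-1), c̄_n = αc_n + (1 − α)c̄_{n−1}, and feeds it a unit step; the resulting sequence should follow Eq. (E-3) term by term, taking the smoothed value before the step as zero.

```python
# Single-exponential smoothing, Eq. (E-1), applied to a unit-step input.
def smooth(samples, a, prev=0.0):
    out = []
    for c in samples:
        prev = a * c + (1.0 - a) * prev   # cbar_n = a*c_n + (1-a)*cbar_{n-1}
        out.append(prev)
    return out

a = 0.3
stepped = smooth([1.0] * 10, a)           # unit-step input
```

Only the single stored value `prev` is needed, which is the storage advantage noted above.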

Continuous Analog to Single-Exponential Smoothing

Consider Fig. E-3, in which the unsmoothed signal c(t) is shown entering a first-order system with time constant τ_s. In terms of the signal-frequency discussion of Chapter 1, statistical fluctuations in the error signal, if introduced by the measurement process, would tend to be at higher frequencies than those contained in the actual process output. Therefore, appropriate choice of the time constant of the first-order filter in Fig. E-3, to cause significant signal attenuation only in the range of frequencies where the noise may be considered to be dominant, should lead to a smoothed version of the original signal.

Figure E-3: Illustrating the analogy between single-exponential smoothing and a first-order system.

To put these arguments in more quantitative form, we note that from Fig. E-3,

c̄_n = (1/τ_s) ∫₀^{nT} e^{−(nT−θ)/τ_s} c(θ) dθ    (E-4)

If the integral in Eq. (E-4) is divided into two parts, the first from zero to (n − 1)T and the second from (n − 1)T to nT, and if the theorem of the mean is used in the latter integral, the result is

c̄_n = (1 − e^{−T/τ_s}) c(t*) + e^{−T/τ_s} c̄_{n−1}

where the time t* lies in the interval (n − 1)T < t* < nT and is defined by the relation

∫_{(n−1)T}^{nT} e^{θ/τ_s} c(θ) dθ = c(t*) ∫_{(n−1)T}^{nT} e^{θ/τ_s} dθ