
MODERN CONTROL THEORIES
NONLINEAR, OPTIMAL AND ADAPTIVE SYSTEMS

by

Prof. Dr. FRIGYES CSÁKI
Corresponding Member of the Hungarian Academy of Sciences

AKADÉMIAI KIADÓ, BUDAPEST
1972

The original, "Korszerű szabályozáselmélet. Nemlineáris, optimális és adaptív rendszerek", was published by Akadémiai Kiadó, Budapest, 1970.

Translated by P. Szőke
Translation revised by B. Balkay

© Akadémiai Kiadó, Budapest 1972

Printed in Hungary

CONTENTS

Symbols
Preface
References

PART 1
1. Introduction
1.1 Characteristic features of nonlinear systems
1.1.1 Importance of nonlinear systems
1.1.2 Fundamental equations
1.1.3 State and phase equations
1.1.4 Some special features of nonlinear systems
1.1.5 Classification of nonlinearities
1.1.6 Analysis of nonlinear systems
References

PART 2
2. Methods of linearization
2.1 Linearization about an operating point; tangential approximation
2.1.1 Determination of the linearized coefficients by expansion
2.1.2 An alternate method of linearization
2.1.3 The most usual ways of algebraic linearization
2.1.4 Linearization of characteristic curves
2.1.5 Determination of linearized coefficients by least-square approximation
2.1.6 The first test method of Lyapunov
2.1.7 Summary
References
2.2 Harmonic linearization
2.2.1 Fundamental assumptions of the describing function method
2.2.2 Fundamental relations
2.2.3 Generalized describing functions
2.2.4 Describing functions of some simple nonlinearities
2.2.5 Approximate determination of the describing function
2.2.6 An alternate approximate method for the determination of describing functions
2.2.7 Stability test
2.2.8 Examples for the uses of describing functions
2.2.9 The root locus of a nonlinear system
2.2.10 Drawbacks of the describing-function method
2.2.11 Compensation in nonlinear systems
2.2.12 The harmonic-balance method
2.2.13 Harmonic linearization of state equations
2.2.14 Comparison of the three methods
2.2.15 Other describing functions
2.2.16 The inverse problem
References
2.3 Statistical linearization
2.3.1 Fundamental relations
2.3.2 Statistical linearization of nonlinear characteristics
2.3.3 Expressions of linearized gains
2.3.4 Examples of statistical linearization
2.3.5 A variant of linearized-gain calculation
2.3.6 Statistical linearization of some typical nonlinearities
2.3.7 Statistical systems analysis
References
2.4 Combined describing functions
2.4.1 Dual-input describing functions
2.4.2 Incremental locus
2.4.3 Approximate dual-input describing function
2.4.4 Combined harmonic and random linearization
2.4.5 A review of combined linearization formulae
2.4.6 Approximate combined linearization
References

PART 3
3. Transient processes
3.1 Graphical methods
3.1.1 First-order linear system
3.1.2 Linear integrator
3.1.3 Single energy-storage nonlinear system
3.1.4 Feedback control systems
3.1.5 Supplementary comments
3.1.6 The secant method
3.1.7 Supplementary comments to the secant method
3.1.8 The tangent method
References
3.2 Numerical methods
3.2.1 Taylor series expansion
3.2.2 The Euler method
3.2.3 The modified Euler method
3.2.4 The Adams method
3.2.5 The Milne method
3.2.6 The Runge-Kutta method
3.2.7 Predictor-corrector methods
3.2.8 Checking the numerical methods
3.2.9 Least-square fitting
3.2.10 The use of z-forms in the evaluation of transient processes
3.2.11 Naumov's grapho-analytical method
References
3.3 Analytical methods
3.3.1 Variation of parameters
3.3.2 Expansion according to a small parameter
3.3.3 A special case of expansion according to a small parameter
3.3.4 Finding of periodic solutions by the perturbation method
3.3.5 The method of reversion
3.3.6 The method of Lighthill and Temple
3.3.7 The method of collocation
3.3.8 The Galerkin method
3.3.9 The Ritz-Galerkin method
3.3.10 The Lie series method
3.3.11 The asymptotic series method
3.3.12 The Taylor-Cauchy transformation
3.3.13 The method of recurrence relations
3.3.14 The method of complex convolution
3.3.15 The method of successive integrations
3.3.16 Method of Lalesco's nonlinear integral equations
References
3.4 Closed-form solutions for transient processes
3.4.1 Directly integrable differential equations
3.4.2 Linear differential equations of the first order
3.4.3 Separable differential equations
3.4.4 Introduction of a homogeneous variable
3.4.5 Solution of equations derived from a total differential
3.4.6 Introduction of an integrating factor
3.4.7 Introduction of a new variable
3.4.8 Incomplete second-order differential equations
3.4.9 The Bernoulli equation
3.4.10 The Riccati equation
3.4.11 The Euler-Cauchy equation
3.4.12 Solution of variable-coefficient linear differential equations
3.4.13 Elliptic functions
3.4.14 Hyperelliptic functions
References

PART 4
4. The state-plane and phase-plane method
4.1 The state-plane and phase-plane method
4.1.1 Writing up the phase equations
4.1.2 Determination of the phase portrait by computation
4.1.3 Graphical methods for constructing phase trajectories
4.1.4 The evolute methods
4.1.5 Time calibration of phase trajectories
4.1.6 The Poincaré analysis of singular points
4.1.7 Some comments on the Poincaré method
4.1.8 Energy relationships and the phase portrait
4.1.9 Phase trajectory construction by energy considerations
4.1.10 Limit-cycle examination
References
4.2 Piecewise linear systems
4.2.1 Saturation or limitation
4.2.2 Dead band or threshold
4.2.3 Variable gain
4.2.4 Backlash or hysteresis
4.2.5 Adhesion and Coulomb friction
4.2.6 Variable damping
4.2.7 Examination of piecewise linear systems
References
4.3 On-off control systems
4.3.1 The block diagram of on-off control
4.3.2 Analytical determination of phase trajectories
4.3.3 Examples of on-off control
4.3.4 The method of characteristic curves
4.3.5 The method of point transformation
4.3.6 Limit cycle calculation in relay systems
4.3.7 Optimal relay control system
4.3.8 Minimum-time systems
References

PART 5
5. Stability of nonlinear systems
5.1 Lyapunov's second or direct method
5.1.1 The fundamental ideas of stability
5.1.2 The notion of sign-definiteness
5.1.3 Lyapunov functions
5.1.4 The Lyapunov theorems
5.1.5 Examples for the application of the Lyapunov method to autonomous systems
5.1.6 Proof of the Lyapunov method
5.1.7 Application of the Lyapunov method to nonautonomous systems
5.1.8 Practical stability
5.1.9 Eventual stability
References
5.2 Determination of the Lyapunov functions
5.2.1 Determination of Lyapunov functions for autonomous linear systems
5.2.2 Determination of Lyapunov functions for autonomous nonlinear systems
5.2.3 The Krasovskii method
5.2.4 The Aizerman method
5.2.5 The variable-gradient method for generating Lyapunov functions
5.2.6 The Zubov method
5.2.7 Determination of Lyapunov functions for nonautonomous systems
References
5.3 Canonical forms and transformations
5.3.1 The basic equations of direct control
5.3.2 The fundamental equations of indirect control
5.3.3 Closed formulae of canonical transformation for direct control
5.3.4 Closed formulae of canonical transformation for indirect control
5.3.5 Methods for constructing the Lyapunov function. Introduction
5.3.6 Construction of the Lyapunov function for indirect control
5.3.7 Construction of the Lyapunov function for direct control
5.3.8 Lur'e's polynomial transformation
5.3.9 Special cases of Lur'e's polynomial transformation and zero shifting
5.3.10 Simplified stability criteria
5.3.11 Pole shifting
References
5.4 Synthesis by the Lyapunov method
5.4.1 Synthesis on the basis of an integral criterion
5.4.2 Synthesis of linear excited systems on the basis of the integral criterion
5.4.3 Synthesis of closed-loop control systems
5.4.4 Synthesis of an excited nonlinear system
5.4.5 The parameter-identification method
5.4.6 Synthesis of nonlinear adaptive control systems
5.4.7 Synthesis of asymptotically stable optimal nonlinear systems
5.4.8 Estimation of the damping rate of a transient process
References
5.5 Sampled-data systems
5.5.1 Stability definitions
5.5.2 Stability theorems
5.5.3 The relation between the Routh-Hurwitz criterion and the Lyapunov method
5.5.4 The Krasovskii method for discrete-data systems
5.5.5 Synthesis of discrete-data systems
5.5.6 Estimating the transient process
References
5.6 Absolute stability
5.6.1 Definition of absolute stability
5.6.2 The Popov criterion and its proof
5.6.3 Geometric interpretation of Popov's criterion
5.6.4 Extension of the Popov criterion
5.6.5 Application examples of Popov's criterion
5.6.6 Absolute stability of the control process in nonlinear systems
5.6.7 The stability degree of nonlinear systems
5.6.8 The integral criterion for nonlinear systems
5.6.9 Relationship between the Popov and Lyapunov methods
References
5.7 Absolute stability of nonlinear discrete-data systems
5.7.1 Absolute stability in discrete-data systems
5.7.2 Interpretation of the Popov criterion
5.7.3 Generalization of the stability criterion
5.7.4 The necessary and sufficient conditions of absolute stability
5.7.5 Estimation of the degree of stability
5.7.6 Quadratic estimation
5.7.7 Relationship between the Popov criterion and the Lyapunov method in discrete-data control systems
References
5.8 Generalization of the frequency method
5.8.1 Absolute stability of systems including a nonlinearity of limited slope
5.8.2 Modified stability criteria in the frequency domain
5.8.3 Absolute stability of multivariable systems
5.8.4 Generalization for time-variable nonlinearities
References

PART 6
6. Optimal systems
6.1 Application of the calculus of variations to the solution of optimal control problems
6.1.1 The principal theorems of the classical calculus of variations
6.1.2 Variants of the optimal control problems
6.1.3 Optimal control of time-invariant systems
6.1.4 Linear optimization problems
References
6.2 Pontryagin's principle
6.2.1 Pontryagin's maximum principle
6.2.2 Some examples involving the maximum principle
6.2.3 The Pontryagin minimum principle
6.2.4 Optimal control of linear time-invariant controlled plants
6.2.5 Some features of optimal systems
6.2.6 Synthesis of minimum-time systems
6.2.7 Design of fuel-optimal systems
6.2.8 Design of energy-optimal control
6.2.9 Optimal control with a hypersphere-type constraint
6.2.10 Optimal control of systems with transportation lags
References
6.3 Dynamic programming
6.3.1 Fundamentals
6.3.2 Connection between dynamic programming and the minimum (maximum) principle
6.3.3 Connection between dynamic programming and the calculus of variations
6.3.4 Connection between the Lyapunov functions and dynamic programming
References
6.4 Functional analysis in the solution of optimal control problems
6.4.1 Optimal control of single-variable plants
6.4.2 Optimal control of multivariable controlled plants
6.4.3 Optimal control of time-variable multivariable controlled plants
6.4.4 Complementary comments
6.4.5 Some numerical examples
References

PART 7
7. Adaptive control systems
7.1 Variants of adaptive control systems
7.1.1 Passive adaptation
7.1.2 Input-variable adaptation
7.1.3 Extremal or optimizing systems
7.1.4 System-variable adaptation
7.1.5 System-characteristic adaptation
7.1.6 Supplementary comments
References
7.2 Some examples of adaptive systems
7.2.1 High-gain adaptive systems
7.2.2 Adaptive systems with a prescribed damping factor
7.2.3 Adaptive missile acceleration control
7.2.4 Input signal self-adaptation of a tracking servo
7.2.5 Model-reference adaptive systems
References
7.3 Optimizing methods
7.3.1 Fundamental concepts of optimizing systems
7.3.2 Some types of optimizing systems
7.3.3 Analysis of quasi-stationary processes
7.3.4 Methods of searching in complicated optimizing systems
References
7.4 The theoretical bases of adaptation, learning, and optimizing
7.4.1 The criteria of optimality
7.4.2 The adaptation process and its algorithm
7.4.3 Adaptation under constraints
7.4.4 Pattern recognition
7.4.5 Identification
7.4.6 Adaptive filters
7.4.7 Adaptive (dual) control
References

PART 8
8. Appendix
8.1 Some fundamental principles of matrix calculus and vector analysis
8.1.1 Some fundamental theorems of matrix algebra
8.1.2 Bilinear and quadratic forms
8.1.3 Norms
8.1.4 Fundamentals of vector analysis
8.1.5 Some rules of differentiation
References
8.2 State variables, state equations
8.2.1 Deduction of the transfer matrix of a controlled plant from the state equations
8.2.2 Deduction of the state equations from the transfer function or transfer matrix
8.2.3 State equations of feedback systems
8.2.4 Normal plants
8.2.5 Canonical form
8.2.6 Determination of the phase-variable form
References
8.3 Solution of the state differential equations
8.3.1 Solution of a time-invariant linear homogeneous vector differential equation
8.3.2 Determination of the fundamental matrix
8.3.3 Determination of the fundamental matrix in cases with multiple eigenvalues
8.3.4 Solution of the time-invariant inhomogeneous state equations
References
8.4 Variable-coefficient differential equations
8.4.1 Solution of time-variable homogeneous state equations
8.4.2 Solution of the time-variable inhomogeneous differential equation
8.4.3 The adjoint system
8.4.4 Determination of the transition matrix
References
8.5 Reachable states, controllability, observability
8.5.1 Reachable states
8.5.2 Definition of controllability and observability
8.5.3 Controllability of linear time-invariant systems
8.5.4 Observability of linear time-invariant systems
8.5.5 Normal plants
References
8.6 State and phase equations of sampled-data systems
8.6.1 Determination of homogeneous phase equations
8.6.2 Determination of the phase-variable form from the pulsed-data transfer function
8.6.3 General state equations of sampled-data systems
8.6.4 Solution of linear state equations
8.6.5 z-transformation
8.6.6 Determination of the transition matrix
8.6.7 Determination of the transition matrix of a time-variable plant
8.6.8 Controllability and observability
References
8.7 Some connections with theoretical mechanics
8.7.1 Fundamental concepts and connections
8.7.2 The Lagrange equation
References

List of references
Index

SYMBOLS

[Symbol list: the glyph column was lost in extraction. The definitions given include: is equivalent to; does not equal; equals by definition (denotes); equals identically; is not identically equal to; is greater than; is less than; is greater than or equal to; is less than or equal to; the factorial symbol; for all (universal quantifier); there exists (existential quantifier); the signum function sgn x = +1 for x > 0, -1 for x < 0; the column-vector notation x = [x_1, x_2, ..., x_n]^T and the componentwise vector signum sgn x = [sgn x_1, ..., sgn x_n]^T.]

2.1.1 DETERMINATION OF THE LINEARIZED COEFFICIENTS BY EXPANSION

y_1 = g_1(x_1, x_2, ..., x_n),
y_2 = g_2(x_1, x_2, ..., x_n),
...
y_q = g_q(x_1, x_2, ..., x_n).    (2.1.1-14)

This nonlinear system of equations can be written as a single vector function with a vector argument:

y = g(x).

Let us assume that the arbitrary nonlinear functions g_1, ..., g_q can be expanded into Taylor series around the operating point X = [X_1, X_2, ..., X_n]^T. Then, with the expansion completed and the higher-order terms neglected,

y = g(x) ≈ g(X) + Σ_{i=1}^{n} (∂g/∂x_i)|_M (x_i − X_i).    (2.1.1-15)

Here (∂g/∂x_i)|_M is a vector of q components, that is, a q × 1 column matrix. If treatment is restricted to small variations around the operating point, then

Δy = y − g(X) ≈ Σ_{i=1}^{n} (∂g/∂x_i)|_M Δx_i.    (2.1.1-16)

The above expression establishes a linear relationship between the variations Δy_j and Δx_i, resp. Δy and Δx. The proportionality factors (that is, the linearized coefficients) are furnished by the partial derivatives at the point of operation. With the variations Δx_i considered as input variables, and the variations Δy_j as output variables, one can plot the block diagram illustrated by Fig. 2.1.1-2, where

K_{ji} = (∂g_j/∂x_i)|_M    (i = 1, 2, ..., n;  j = 1, 2, ..., q)    (2.1.1-17)

represents the gains, readily determined from the partial derivatives. For small variations around the operating point M, the equation system of the linear model of the multivariable transfer element is

Δy_1 ≈ K_{11} Δx_1 + K_{12} Δx_2 + ... + K_{1n} Δx_n,
Δy_2 ≈ K_{21} Δx_1 + K_{22} Δx_2 + ... + K_{2n} Δx_n,
...
Δy_q ≈ K_{q1} Δx_1 + K_{q2} Δx_2 + ... + K_{qn} Δx_n,    (2.1.1-18)

or, written in a vector-equation form,

Δy ≈ K Δx = A Δx,    (2.1.1-19)

where Δy is a q × 1 and Δx an n × 1 column vector (column matrix), and K = [K_{ji}] (j = 1, 2, ..., q; i = 1, 2, ..., n) is the matrix of the gains.
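To make the computation of the gain matrix K in (2.1.1-17) concrete, the following Python sketch estimates the partial derivatives numerically by central differences at a chosen operating point. It is a minimal illustration only: the sample nonlinearity g, the operating point X and the step size h are assumptions of this sketch, not values taken from the book.

```python
import numpy as np

def linearized_gains(g, X, h=1e-6):
    """Estimate K[j, i] = dg_j/dx_i at the operating point X (cf. eq. 2.1.1-17)
    by central differences; g maps an n-vector to a q-vector."""
    X = np.asarray(X, dtype=float)
    n = X.size
    q = np.atleast_1d(g(X)).size
    K = np.zeros((q, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        K[:, i] = (np.atleast_1d(g(X + e)) - np.atleast_1d(g(X - e))) / (2.0 * h)
    return K

# Hypothetical two-input, one-output nonlinearity y = g(x1, x2)
g = lambda x: np.array([x[0] ** 2 + np.sin(x[1])])
X = np.array([1.0, 0.5])
K = linearized_gains(g, X)           # tangential (Taylor) linearization
dy = K @ np.array([0.01, -0.02])     # Delta y ~ K Delta x, cf. eq. (2.1.1-19)
print(K, dy)
```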

2.1.5 DETERMINATION OF LINEARIZED COEFFICIENTS BY LEAST-SQUARE APPROXIMATION

Let the nonlinear relation again be

y = g(x_1, x_2, ..., x_n).    (2.1.5-1)

The dependent variable at the point of operation M is

Y = g(X_1, X_2, ..., X_n).    (2.1.5-2)

This value is known. For m arbitrary sampling points around the operating point M,

Y_k = g(X_{1k}, X_{2k}, ..., X_{nk})    (k = 1, 2, ..., m).    (2.1.5-3)

The differences between the operating point M and the sampling points are expressed as

X_{1k} − X_1 = Δx_{1k}, ..., X_{nk} − X_n = Δx_{nk}    (k = 1, 2, ..., m)    (2.1.5-4)

and

Y_k − Y = Δy_k    (k = 1, 2, ..., m).    (2.1.5-5)

The linear model is again assumed to have the form

Δy ≈ Σ_{i=1}^{n} K_i Δx_i = K_1 Δx_1 + ... + K_n Δx_n.    (2.1.5-6)

Now let us form the sum of the squared deviations for the sampling points, using the coefficients of the linearized model and the outputs of the model and the nonlinear element:

E = Σ_{k=1}^{m} (K_1 Δx_{1k} + K_2 Δx_{2k} + ... + K_n Δx_{nk} − Δy_k)^2.    (2.1.5-7)

The necessary condition for the minimum of the quadratic sum E is that all first partial derivatives be zero:

∂E/∂K_i = 0    (i = 1, 2, ..., n),    (2.1.5-8)

that is,

∂E/∂K_i = 2 Σ_{k=1}^{m} (K_1 Δx_{1k} + K_2 Δx_{2k} + ... + K_n Δx_{nk} − Δy_k) Δx_{ik} = 0,    (2.1.5-9)

or, divided by 2 and rearranged:

K_1 Σ_{k=1}^{m} Δx_{1k} Δx_{ik} + K_2 Σ_{k=1}^{m} Δx_{2k} Δx_{ik} + ... + K_n Σ_{k=1}^{m} Δx_{nk} Δx_{ik} = Σ_{k=1}^{m} Δy_k Δx_{ik}    (i = 1, 2, ..., n).    (2.1.5-10)

The linearized coefficients can be determined from the system of n equations (2.1.5-10) if the sampling has been correct (that is, if the determinant of the equation system does not equal zero).

For a single-variable function y = g(x), the equation of the linear model will be

Δy ≈ K Δx,    (2.1.5-11)

where the linearized coefficient or equivalent gain is

K = [Σ_{k=1}^{m} Δy_k Δx_k] / [Σ_{k=1}^{m} (Δx_k)^2].    (2.1.5-12)

For a function of two independent variables, y = g(x_1, x_2), the linearized model equation is

Δy ≈ K_1 Δx_1 + K_2 Δx_2,

where

K_1 = [(Σ_k Δy_k Δx_{1k}) Σ_k (Δx_{2k})^2 − (Σ_k Δy_k Δx_{2k}) Σ_k Δx_{1k} Δx_{2k}] /
      [Σ_k (Δx_{1k})^2 Σ_k (Δx_{2k})^2 − (Σ_k Δx_{1k} Δx_{2k})^2],    (2.1.5-13)

K_2 = [(Σ_k Δy_k Δx_{2k}) Σ_k (Δx_{1k})^2 − (Σ_k Δy_k Δx_{1k}) Σ_k Δx_{1k} Δx_{2k}] /
      [Σ_k (Δx_{1k})^2 Σ_k (Δx_{2k})^2 − (Σ_k Δx_{1k} Δx_{2k})^2].    (2.1.5-14)

If the nonlinear relation can be expressed analytically, the sums may be replaced by integrals [T20], e.g. in the case of the single-variable nonlinearity y = g(x):

K = [∫_{−ΔX}^{ΔX} Δg(Δx) Δx d(Δx)] / [∫_{−ΔX}^{ΔX} (Δx)^2 d(Δx)].    (2.1.5-15)

The same procedure can be followed for two or more variables. The method of least squares is much more complicated than the tangential approximation and is therefore seldom resorted to in practice.
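As a concrete illustration of equations (2.1.5-4) to (2.1.5-10), the short Python sketch below forms the least-squares problem from sampled deviations and solves it for the gains K_i. The sample nonlinearity, the operating point and the sampling pattern are illustrative assumptions of the sketch.

```python
import numpy as np

def least_square_gains(dX, dy):
    """Solve the normal equations (2.1.5-10) in least-squares form:
    dX is an m x n matrix of deviations Delta x_ik, dy an m-vector Delta y_k."""
    K, *_ = np.linalg.lstsq(dX, dy, rcond=None)
    return K

# Hypothetical nonlinearity y = g(x1, x2) sampled around the operating point X
g = lambda x1, x2: x1 ** 2 + x1 * x2
X = np.array([2.0, 1.0])
Y = g(*X)

rng = np.random.default_rng(0)
dX = rng.uniform(-0.2, 0.2, size=(50, 2))        # Delta x_ik, eq. (2.1.5-4)
dy = np.array([g(*(X + d)) for d in dX]) - Y      # Delta y_k,  eq. (2.1.5-5)

K = least_square_gains(dX, dy)
print(K)   # close to the tangential gains [2*X1 + X2, X1] = [5, 2]
```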

Example 2.1.5.1

Let us determine the transfer function of a flyweight tachometer [T20] by the least-square method. As has been pointed out before, the reduced force is

f_r = M_c R ω^2.

For small variations about the point of operation,

F_r = M_c R (Ω_0 + Δω)^2,

where F_R = M_c R Ω_0^2. Subtracting this expression yields

Δf_r = M_c R (2 Ω_0 Δω + (Δω)^2).

We wish the linearized model equation to assume the form

Δf_r ≈ K Δω.

The gain can be determined by equation (2.1.5-15):

K = [∫_{−ΔΩ_0}^{ΔΩ_0} M_c R (2 Ω_0 Δω + (Δω)^2) Δω d(Δω)] / [∫_{−ΔΩ_0}^{ΔΩ_0} (Δω)^2 d(Δω)].

Integrated, this becomes

K = [M_c R Ω_0 (4/3)(ΔΩ_0)^3] / [(2/3)(ΔΩ_0)^3] = 2 M_c R Ω_0.

Thus the method of least squares yields in this case a result identical with that obtained by the tangential approximation, since the force is a quadratic function of the angular velocity.

Example 2.1.5.2

Let us linearize the relation

y = g(x) = x^n

at an operating point X ≠ 0. Tangential approximation gives

K = (dy/dx)|_{x=X} = n x^{n−1}|_{x=X} = n X^{n−1}

and hence

Δy ≈ n X^{n−1} Δx.

Now the method of least squares yields

Y + Δy = (X + Δx)^n,

that is,

Δy = (X + Δx)^n − X^n.

Hence, by (2.1.5-15),

K = [∫_{−ΔX}^{ΔX} ((X + Δx)^n − X^n) Δx d(Δx)] / [∫_{−ΔX}^{ΔX} (Δx)^2 d(Δx)].

Integration now yields

K = 3 Σ_{h=1}^{[(n+1)/2]} (1/(2h+1)) C(n, 2h−1) X^{n−2h+1} (ΔX)^{2h−2},

where the symbol [ ] refers here to the integer part of the mixed fraction. If n = 1, then K = 1; if, on the other hand, n = 2, then K = 2X. Thus the methods of least squares and of tangential approximation give the same result. If n = 3, then the gain

K = 3 X^2 + (3/5)(ΔX)^2

depends, besides X, also on the variation ΔX and therefore differs from that furnished by the tangential approximation. Similarly, if n = 4,

K = 4 X^3 + (12/5) X (ΔX)^2.
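As a quick numerical cross-check of Example 2.1.5.2, the sketch below evaluates the least-square gain (2.1.5-15) for y = x^n by numerical integration and compares it with the closed-form sum quoted above and with the tangential gain n X^(n-1). The particular values X = 1.5 and ΔX = 0.5 are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from math import comb

def K_least_squares(n, X, dX):
    """Equivalent gain (2.1.5-15) for y = x**n about X over [-dX, dX]."""
    num, _ = quad(lambda u: ((X + u) ** n - X ** n) * u, -dX, dX)
    den, _ = quad(lambda u: u ** 2, -dX, dX)
    return num / den

def K_closed_form(n, X, dX):
    """Closed-form result of Example 2.1.5.2."""
    return 3 * sum(comb(n, 2 * h - 1) * X ** (n - 2 * h + 1) * dX ** (2 * h - 2) / (2 * h + 1)
                   for h in range(1, (n + 1) // 2 + 1))

n, X, dX = 3, 1.5, 0.5
print(K_least_squares(n, X, dX))   # 6.90
print(K_closed_form(n, X, dX))     # 3*X**2 + 0.6*dX**2 = 6.90
print(n * X ** (n - 1))            # tangential gain: 6.75
```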
2.1.6 THE FIRST TEST METHOD OF LYAPUNOV

It is well known (see Section 1.1.3) that, by the introduction of state or phase variables, any n-th order nonlinear differential equation

x^(n) = f(x, x', ..., x^(n−1))    (2.1.6-1)

may be reduced to a simultaneous set of nonlinear first-order differential equations (2.1.6-2), or to the nonlinear vector differential equation

x' = f(x).    (2.1.6-3)

Let us assume that the nonlinearity can be expressed by a single-valued analytic function; then, at a given equilibrium operating point M,

X = [X_1, X_2, ..., X_n]^T,

and thereabout, the first partial derivatives of the nonlinear functions are finite, continuous, and unique. The nonlinear functions

f_1 = f_1(x_1, x_2, ..., x_n), ..., f_n = f_n(x_1, x_2, ..., x_n)    (2.1.6-4)

or, written in vector form, the nonlinear vector function

f = f(x)    (2.1.6-5)

can therefore be linearized by making use of the first two terms of the Taylor series, the so-called first approximation. With the higher-order terms neglected, the approximate linear-model equation of the nonlinear system becomes

Δx' ≈ A Δx,    (2.1.6-6)

that is,

Δf ≈ A Δx,    (2.1.6-7)

where the elements of the matrix A are A_{ji} = ∂f_j/∂x_i evaluated at M. According to Lyapunov's first stability theorem [L2, L5, L8], if the nonlinear system is substituted by its linear model of first approximation, and the characteristic equation of the differential equation

Δx^(n) = Δf(Δx, Δx', ..., Δx^(n−1))    (2.1.6-8)

thus obtained, or of the vector differential equation

Δx' = A Δx,    (2.1.6-9)

has roots whose real parts are non-zero, then the stability of the nonlinear system can always be decided on the basis of the linear approximation (in other words, the higher-order terms do not influence stability). Thus, if the roots of the characteristic equation of the scalar differential equation (or of the differential equation system) or of the vector differential equation furnished by the linear approximation are negative, or have negative real parts, then the nonlinear system is stable, that is, if deflected from its equilibrium operating point, it will return thereto. On the other hand, if some of the roots are positive or have positive real parts, the nonlinear system will be unstable and, if deflected from its point of operation, will never return to equilibrium.
If the real part of any one of the roots is zero, this criterion cannot be applied. In such cases, stability may depend, for example, on the direction of displacement. In control engineering, roots with a real part close to zero are undesirable (since then the control system is liable to exhibit excessive hunting) and, therefore, the exclusion of roots with zero real part is not an uncomfortable restriction in practice. It should be emphasized that the Lyapunov stability theorem applies only to small deviations about the point of equilibrium.

The characteristic roots of the linear differential equation system

Δx_1' = A_{11} Δx_1 + ... + A_{1n} Δx_n,
...
Δx_n' = A_{n1} Δx_1 + ... + A_{nn} Δx_n,    (2.1.6-10)

that is, of the linear differential equation (2.1.6-9) obtained by the linearization of the nonlinear vector differential equation (2.1.6-3) and of the nonlinear differential equation system (2.1.6-2), can be determined from the characteristic equation

K(s) = D(s) = |A − sI| = 0,  or  |sI − A| = 0.    (2.1.6-11)

The characteristic roots are, therefore, the eigenvalues of the matrix A. Writing out the determinant in detail gives

D(s) = det [ A_{11}−s  ...  A_{1n} ;  ...  ;  A_{n1}  ...  A_{nn}−s ] = 0.    (2.1.6-12)

Now this determinant expands to

D(s) = (−s)^n + a_{n−1}(−s)^{n−1} + ... + a_1(−s) + a_0 = 0.    (2.1.6-13)

Here the coefficients

a_{n−1} = A_{11} + A_{22} + ... + A_{nn},
a_{n−2} = det [ A_{11} A_{12} ; A_{21} A_{22} ] + ... + det [ A_{n−1,n−1} A_{n−1,n} ; A_{n,n−1} A_{nn} ],
...,
a_0 = |A|    (2.1.6-14)

are the sums of the principal minors of the determinant |A| of the first, second, etc. order (a_0 is the determinant itself).
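To illustrate (2.1.6-13) and (2.1.6-14), the sketch below computes the coefficients a_k as sums of principal minors of A and checks them against the characteristic polynomial obtained directly from A. The particular 3 x 3 matrix used is an arbitrary illustrative assumption.

```python
import numpy as np
from itertools import combinations

def principal_minor_sums(A):
    """Return [a_{n-1}, ..., a_1, a_0] of D(s) = (-s)^n + a_{n-1}(-s)^{n-1} + ... + a_0,
    where a_{n-k} is the sum of all k x k principal minors of A (eq. 2.1.6-14)."""
    n = A.shape[0]
    return [sum(np.linalg.det(A[np.ix_(idx, idx)]) for idx in combinations(range(n), k))
            for k in range(1, n + 1)]

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])      # arbitrary stable companion matrix

print(principal_minor_sums(A))           # [-6.0, 11.0, -6.0]: trace, 2x2 minors, det
print(np.poly(A))                        # [1, 6, 11, 6]: same polynomial, sign convention of det(sI - A)
print(np.linalg.eigvals(A))              # all real parts negative: stable by the first method
```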
Example 2.1.6.1

Let us examine whether the nonlinear system described by the differential equation

d^2x/dt^2 + D dx/dt + C sin x = U

is stable (such a differential equation describes, for example, a synchronous motor or generator, or an oscillating system consisting of a nonlinear spring, damper and mass). By the way, the output is y = x. In the equilibrium state, the time derivatives are zero, and the equation giving the coordinates of the operating point is

C sin X = U.

Let us assume that the system is slightly displaced from its operating point: x = X + Δx. Since

d^2X/dt^2 = 0,  dX/dt = 0

and

sin x = sin X cos Δx + cos X sin Δx ≈ sin X + cos X · Δx,

the linearized differential equation of the variations Δx will be

d^2Δx/dt^2 + D dΔx/dt + C cos X · Δx = 0.

The characteristic equation is then

K(s) = s^2 + D s + C cos X = 0.

By the Routh-Hurwitz criterion, the condition of stability of this linear differential equation is

D > 0  and  C cos X > 0,

and this is the stability criterion also for the original nonlinear equation.

Example 2.1.6.2
Let us examine the nonlinear differential equation of the above example by means of phase variables. Introducing x_1 = x, x_2 = x', the phase equations become

x_1' = x_2,  x_2' = −C sin x_1 − D x_2 + U;

therefore,

f_1(x_1, x_2) = x_2,  f_2(x_1, x_2) = −C sin x_1 − D x_2 + U.

The equilibrium state of the system can obviously be determined from the equations

x_1' = 0,  x_2' = 0.

Hence, the coordinates of the point of operation are

X_1 = sin^{−1}(U/C),  X_2 = 0.

Now let us examine the stability of the system for slight displacements about the point of equilibrium. The Jacobian matrix is

J(f, x) = (∂f/∂x^T)|_M = [ 0  1 ; −C cos X_1  −D ],

and the linearized system of differential equations can be written as

Δx_1' = Δx_2,  Δx_2' = −C cos X_1 · Δx_1 − D Δx_2.

The characteristic equation determined from equation (2.1.6-12) is

K(s) = D(s) = det [ −s  1 ; −C cos X_1  −D−s ] = s^2 + D s + C cos X_1 = 0;

it is, of course, identical with the one in the previous example, just as are the stability conditions.
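Following Example 2.1.6.2, here is a minimal numerical sketch of the first method: form the Jacobian of the phase equations at the equilibrium point and inspect its eigenvalues. The parameter values C, D and U below are illustrative assumptions, not values from the book.

```python
import numpy as np

# Phase equations of Example 2.1.6.2: x1' = x2, x2' = -C sin(x1) - D x2 + U
C, D, U = 2.0, 0.5, 1.0               # assumed parameter values

X1 = np.arcsin(U / C)                 # equilibrium: C sin(X1) = U, X2 = 0
A = np.array([[0.0, 1.0],
              [-C * np.cos(X1), -D]]) # Jacobian at the operating point

eig = np.linalg.eigvals(A)
print(eig)
# Lyapunov's first method: asymptotically stable if every eigenvalue has a
# strictly negative real part (here D > 0 and C cos X1 > 0).
print(np.all(eig.real < 0))
```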
Example 2.1.6.3

Let us examine the stability of the simple control system illustrated in Fig. 2.1.6-1, for slight deviations about the state of equilibrium r(t) = 0, if the nonlinear element is characterized by u_e = e g(e). The analysis along the lines of the previous examples leads to stability conditions of the form a > 0 and K > 0, together with an upper bound on the gain K.
Problem 2.1.6.1

Determine the conditions of stability of a nonlinear system characterized by the Van der Pol differential equation

d^2x/dt^2 − ε(1 − x^2) dx/dt + x = 0.
2.2 HARMONIC LINEARIZATION
Owing to the difficulties of their treatment, nonlinear systems are usually reduced to linear ones, if possible. One solution (linearization in the time domain) was discussed in the foregoing section. Another possibility is the so-called harmonic linearization in the frequency domain. Here the nonlinear transfer element is substituted by a linear one, equivalent as far as the fundamental component is concerned. Linearization in the time domain and in the frequency domain are not competitive but complementary procedures. Linearization about a point of operation involves continuous characteristic curves, easy to differentiate, and small deviations. Harmonic linearization, on the other hand, can be successfully employed also in cases of discontinuous curves and substantial deviations, but presupposes the possibility of quasi-stationary oscillations. In the present chapter, the various methods of harmonic linearization will be discussed. In addition to a detailed treatment of the so-called describing function method, most frequently used in control engineering, the so-called harmonic-balance method and the harmonic linearization of the state-equation coefficients will also be considered [G6, G7, G16, G28, O3, P14, P27, T8, W10].
2.2.1 FUNDAMENTAL ASSUMPTIONS OF THE DESCRIBING FUNCTION METHOD
If a nonlinear transfer element is excited by a sine-wave input signal, then the output signal will contain, in addition to the fundamental, also some harmonics. The describing function expresses the amplitude ratio and phase shift of the output fundamental as compared to the sine-wave input. For a multivalued nonlinearity, the describing function is a complex quantity composed of the amplitude ratio (the absolute value) and the phase shift (an angle, or argument); that is, it will describe not only the magnitude ratio but also the phase-shift conditions. For a single-valued nonlinearity, on the other hand, the describing function furnishes only the magnitude ratio of the output fundamental to the harmonic input. Owing to the presence of a nonlinearity, the describing function depends on the magnitude and, sometimes, on the frequency of the pure sine-wave input [1-7]. Thus the nonlinear element is linearized by the describing function, and the subsequent analysis or synthesis is based on the magnitude- or, possibly, frequency-dependent linearized element. Thus, by making use of the describing function, the nonlinear system can be readily studied in the frequency domain with the frequency-response methods developed for linear systems, and both analysis and synthesis are fairly easy to perform. The most important advantage of the method is the theoretically unlimited number of time constants in the linear part. From the viewpoint of accuracy, the greater the number of time constants, the better.

The method of describing functions is based on the following assumptions [1-7]:

1° The output is periodic, and the frequency of its fundamental is identical to that of the sine-wave input. This means that subharmonic generation is excluded. It is generally assumed, furthermore, that the nonlinearity is symmetrical and, consequently, the output wave has no constant component of zero frequency (the method can be generalized, however, to cover asymmetrical nonlinearities as well).

2° Only the fundamental of the output must be taken into account, as the higher harmonics will be attenuated by the linear transfer elements of the system to such a degree as to become negligible (the linear part of the system is, in the frequency domain considered, of a low-pass character).

3° The nonlinear element does not vary with time. Time-variable (nonautonomous) elements have no describing function, as they may have no quasi-stationary periodic output either. A frequency-dependent element, on the other hand, does have a describing function, which is both amplitude- and frequency-dependent.

4° Only one nonlinear element is permissible in the control system; the other elements must all be linear. If the system has two or more nonlinearities, it is most reasonable to combine them into a single one and determine the describing function for this combined element (although the describing-function method can be extended to cover several nonlinearities too).

The method of describing functions has also its limitations [1-7]:

1° No adequate technique has as yet been devised for checking the accuracy of the result.

2° The method will yield quantitative and qualitative estimates only. Thus the stability or instability of the system can be assessed, and so can the existence, the magnitude and the frequency of the limit cycle. An analysis performed in the frequency domain will not, however, furnish even an approximate indication of the dynamic behaviour of the system in the time domain, that is, on overshoot, transient processes, settling time, etc.

At the same time, it should be emphasized that the describing-function method is the only simple practical procedure for the treatment of nonlinear systems higher than second order. The method of describing functions is particularly useful if the describing function itself is not oversensitive to curve-shape variations.

2.2.2 FUNDAMENTAL RELATIONS
Figure 2.2.2-1 is the block diagram of two simple single-loop control systems. The nonlinearities have been combined into a single nonlinear element. Let the sine-wave input of the nonlinear element be [C16, L41]

u(t) = B sin ωt.    (2.2.2-1)

Fig. 2.2.2-1. Simple single-loop nonlinear control systems

The periodical output variable of the nonlinear element can usually be expanded into a Fourier series:

y(t) = A_0 + Σ_{n=1}^{∞} (A_n cos nωt + B_n sin nωt).    (2.2.2-2)

Assuming a symmetrical nonlinearity, the coefficients can be determined from the following equations:

A_0 = 0,

A_n = (1/π) ∫_0^{2π} y(t) cos nωt d(ωt),    (2.2.2-3)

B_n = (1/π) ∫_0^{2π} y(t) sin nωt d(ωt).    (2.2.2-4)

Generally, A_n = A_n(B, jω) and B_n = B_n(B, jω), that is, the coefficients are both amplitude- and frequency-dependent.

When using the describing-function method, only the fundamental of the output signal is taken into consideration:

y(t) ≈ y_1(t) = B_1 sin ωt + A_1 cos ωt.    (2.2.2-5)

Introducing the expressions C_1 = (A_1^2 + B_1^2)^{1/2} and φ_1 = tan^{−1}(A_1/B_1), the fundamental may also be written as y_1(t) = C_1 sin(ωt + φ_1), and the describing function of the nonlinear element becomes

N(B, jω) = (B_1 + j A_1)/B = q(B, ω) + j q'(B, ω).    (2.2.2-16)

If the symmetrical nonlinearity is single-valued, then A_1 = 0, φ_1 = 0, therefore q' = 0, and hence

N(B, jω) = q(B, jω) = B_1/B.    (2.2.2-17)

The above expressions show that the describing function N(B, jω) of the nonlinear element can be treated hereafter in the same way as the frequency function G(jω) of a linear transfer element, except that the dependence on the amplitude B of the input variable must also be taken into account. It should be repeatedly emphasized that the harmonics are not reckoned with because a strong damping by the linear transfer elements is assumed. Consequently, the harmonic content of the feedback variable may be considered as negligible.
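The describing function defined by (2.2.2-16) and (2.2.2-17) is easy to evaluate numerically: drive the nonlinearity with one period of a sine wave and project the output onto sin ωt and cos ωt. The sketch below does this for an ideal relay and compares the result with the well-known closed form 4k/(πB); the relay level k and the amplitude grid are illustrative assumptions.

```python
import numpy as np

def describing_function(nonlin, B, n_points=4096):
    """Complex describing function N(B) of a static nonlinearity (eq. 2.2.2-16):
    N = (B1 + j*A1)/B, with A1, B1 the first Fourier coefficients (2.2.2-3, -4)."""
    wt = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    y = nonlin(B * np.sin(wt))
    A1 = (2.0 / n_points) * np.sum(y * np.cos(wt))   # discretized (1/pi) integral
    B1 = (2.0 / n_points) * np.sum(y * np.sin(wt))
    return (B1 + 1j * A1) / B

k = 1.0                                  # relay output level (assumed)
relay = lambda u: k * np.sign(u)         # ideal relay y = k sgn u

for B in (0.5, 1.0, 2.0):
    print(B, describing_function(relay, B).real, 4.0 * k / (np.pi * B))
```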
Example 2.2.2.1

Let us determine the describing function of a nonlinearity resembling a simple limiter or a sudden saturation (Fig. 2.2.2-2) [11, 12]. The characteristics of electronic amplifiers are approximately of this type or, with a considerable simplification, so are the magnetization curves of iron cores. Let the linear section have a slope K_N or, in other words, let it have a gain K_N. A sine-wave input of amplitude B > b gives rise to a truncated sine-wave output:

y = K_N u = K_N B sin ωt  while |u| ≤ b,
y = K_N b  (respectively −K_N b)  beyond the saturation level,

where the switching angle is

α = sin^{−1}(b/B).

Carrying out the integration (2.2.2-4) for this truncated sine wave furnishes the describing function of the saturation element, given below as equation (2.2.4-1). A limiting case is the ideal relay, characterized by

y = g(u) = k sgn u,

whose describing function is

N(B) = 4k/(πB).

Example 2.2.2.2
Let us determine the describing function of a backlash or hysteresis type nonlinearity [8, 17, 23, 35, 37]. The nonlinear characteristic and the input and output variables are presented in Fig. 2.2.2-3. When changing the sign of the input variable, the output will remain unchanged for a period whose length is governed by the backlash h. Thus the output is, for B > h,

y = K_N (B sin ωt − h)  on the rising portion of the input,
y = K_N (B − h)  while the input, after its maximum, has decreased by less than 2h,
y = K_N (B sin ωt + h)  on the falling portion,

and so on over the period.

Fig. 2.2.2-3. Establishing the describing function of a hysteresis type nonlinearity

[Table: describing-function constructions for relay-type characteristics (three-position relay with dead zone d, relay with hysteresis, negative deficiency, and compound characteristics); the formulas involve switching angles of the form α = sin^{−1}(d/B) and β = sin^{−1}(e/B).]
The describing function of the saturation element is real, as we have seen before. Figure 2.2.4-1 represents the describing function divided by the slope, that is, the ratio N(B)/K_N, versus the reciprocal relative amplitude b/B. Incidentally, the expression N(B)/K_N is often called the normalized describing function.

Fig. 2.2.4-1. Describing function of saturation (limitation)
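For the saturation (limitation) characteristic of Fig. 2.2.4-1, the normalized describing function has the standard closed form N(B)/K_N = (2/π)[sin^{−1}(b/B) + (b/B)(1 − (b/B)^2)^{1/2}] for B ≥ b; this expression is quoted here from the general describing-function literature rather than from a particular equation of this excerpt. The sketch below evaluates it and checks it against the direct Fourier computation used earlier.

```python
import numpy as np

def df_saturation_closed(B, b, Kn=1.0):
    """Describing function of saturation for B >= b (standard closed form)."""
    r = b / B
    return Kn * (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

def df_numeric(nonlin, B, n=4096):
    """First-harmonic gain B1/B obtained by direct Fourier projection."""
    wt = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = nonlin(B * np.sin(wt))
    return (2.0 / n) * np.sum(y * np.sin(wt)) / B

b, Kn = 1.0, 1.0                              # assumed saturation level and slope
sat = lambda u: Kn * np.clip(u, -b, b)        # ideal saturation characteristic

for B in (1.0, 2.0, 5.0):
    print(B, df_saturation_closed(B, b, Kn), df_numeric(sat, B))
```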
Example 2. 2. 5.1 The equation V of a nonlinear element (2.2.5-12), (2.2.5-13),

?

= g(n)

u,

u>

0

u

,

0

0)

2 . 2 6— 1 ) .

and the

AN ALTERNATE APPROXIMATE METHOD

minus sign to the quadrants II and III (when d (u/B)

mind that

= -

dVl-(w/5) a r

1

(

^L

Yl-

91


0, u > 0; by 2 and by g t if u < 0, u > 0 (Fig.

Let, furthermore, function g(u) be denoted if

u>

0,

w

< 0;

by g z

,

if

< 0, u < 0;

u

then gx g x B is

=—

A,

=—

,

= j-

- g2

f ( 9l tiB J

)

(2.2.6-10)

d«.

0

Naturally,

if g(u) is single- valued

Example 2.2.6. In Example

2. 2. 2. 2,

there

the value of

A

is

x

no hysteresis

loop,

and A x

=

0.

was -

A

x

=- KnB

71

On

the other hand,

by equation

'A]

_u

(2.2.6-10)

and

— Kn (B —

A which

'(h) 2

is

h)

Fig. 2. 2. 2-3,

2h

the same result. 2.2.7

STABILITY TEST

of the describing function method, the approximate stability examination of nonlinear systems can readily be performed, the approximate conditions of limit cycle generation can be stated, and the amplitude and frequency of the oscillation at the input of the nonlinear element can also be defined [G6, G7, G16, G28, P14]. Figure 2. 2. 7-1 represents the block diagram of two versions of a simple single-loop control system. The nonlinear element is substituted by the describing function N(B, jco). The frequency responses of the two closed-

By means

loop systems will be, approximately,

W

a {jco)

N(B,jco)G x (ja>)

= 1

+

N(B,

jco)

(2.2. 7-1)

Gx (jco) G2 (jco)

and

W

b (jo>)

=

[jw)

(2. 2. 7-2)

l+N(B,jco)Gx (j(o)G.2 (jco) undamped, that

respectively. The closed-loop control system will be at the limit of stability if 1

+

N(B,jco) Gx {jco)

G2 (jco)

=0

.

is,

(2.2.7-3)

HARMONIC LINEARIZATION

94

a)

b)

Fig. 2.2.7—1. Block diagrams of single-loop control systems

If this condition is satisfied, the system will have a permanent hunting, that is, a limit cycle will be produced. Oscillation amplitude and frequency at the input of the nonlinear element can be determined on the basis of equation (2 2.7-3). Multiloop control systems are to be reduced to a simple single-loop type [29]. The simplification of a block diagram is shown in Fig. 2.2.7—2. Now let us return to Fig. 2.2. 7-1. At the input of the nonlinear element, the expression u(t) sin cot Im U(B,jco)eJ at (2.2.7-4)

=B

=

=

was assumed to

=

arise (in part (a) of the figure: e u, in part (b): y u). After the opening of the control loop before the nonlinear element, the fundamental component of the return signal will assume the form Vi(t)

=C

x

sin(co£

+

)=H(B,ja>) where

=G

,

(2. 2. 7-6)

G(jco) has the same role 1 (ja>)G2 (jco). The function H(B,ja>) in nonlinear systems as the frequency response G(jco) in linear ones, but

STABILITY TEST

Fig.

2. 2.7-2.

Simplification of the block diagram of a multiloop control

system including a single nonlinear element

95

HABMONIC LINEABIZATION

96

H

a two-variable function of frequency co and input amplitude B, wherefore a set of curves will be obtained instead of a single Nyquist diagram [P27]. Figure 2.2.7-3 shows three| curves of the set. ^"0 is passed on the right when running Bs the point 1 1 If B through the respective Nyquist plot in the direction of increasing frequencies, Bj, which indicates the stability of the system. At an amplitude of (

B,

jco) is

in this case,

=

— =— +

,

B=

Fig. 2. 2.7-3. Frequency-response diagram of a control system containing a nonlinear element



passed on the left, thus indicating the instability of the sysnonB L) the system is in a neutral state, that is, the linear system will generate a limit cycle, with an amplitude L and a frequency co co L at the input of the nonlinear element. The foregoing considerations apply to minimum-phase and stable in itself H{B, jco) functions. In other words, the function H(B,jco) is supposed to have zeros and poles in the upper half of the complex co plane only (which means that the function H(B, s) has only left-hand-plane zeros and poles). Similarly to the common Nyquist criterion, the method outlined above can readily be extended to cases where the limitations referred to are absent. Hence, on the basis of the generalized amplitude and frequencydependent functions H(B,jco) or characteristics, the following statements may be made: If the equation

the point 1 tem. Finally,

is

if

=B

B=B

=

l+H{BL ,ja>L = )

0

H(BL ,jco L

or

BL

)

= -1

(2.2.7-7)

then one (or more ) limit cycle( s may be generated in the nonlinear system. This limit cycle (or limit cycles) will be either convergent (permanent) or divergent (dissolving). These are often called stable and unstable limit cycles, respectively. If the condition of limit cycle generation is satisfied and, for the adjacent 0 of the diagram G(BL -f- bB, jco) produced by a slight increase bB amplitude BL the system appears as stable, then the limit cycle will prove to is satisfied for

a given pair (or pairs)

,

co L ,

>

,

STABILITY TEST

97

B

be convergent as, in such cases, the amplitude will decrease to L ConverseSB, joo) indicates an unstable system, the limit cycle is ly, if G{B l .

+

divergent. system, that Finally, if for values min H{B, jco) indicates a stable 1 is to the left of the plot H{B, jco) in the case of a minimumis, the point phase stable loop, then any finite initial conditions will give rise to bounded output variables in the system. On the other hand, if stability sets in for amplitudes m ax then the system will exhibit a stable be0. This, however, does not invariably mean a stable haviour about point of equilibrium since the system may oscillate at very high frequencies

B>B



B=

B), and indeed passes 1 at any amplitudes does not pass through the point this point on the right, then the system is stable under any finite initial conditions. This means a case of global stability. Instead of the so-called single-locus procedure outlined above, the socalled double-locus technique is often resorted to (rather exceptional in the case of linear systems, this technique is quite usual for the nonlinear cases) [30-33]. There is no need to plot the entire set of curves, but one amplitude-dependent diagram and one frequency-response curve will generally suffice. The stability test may be performed by using either the Nyquist, the inverse Nyquist, or the Nichols diagram.



B

of the Nyquist diagram transfer function of the linear part resultant the If in equation (2. 2. 7-3), of the open loop is Gi(joo)G2 (jco) —G(joo), then the limit condition of stability may be written as 2. 2.7.1 Stability

check by

means

G{'jco)

=

JL

(2. 2. 7-8)

~N{B,ja>)

stability test, the polar sponse locus, is first plotted in the

For the

diagram G(jco), called the frequency recomplex plane by the usual method; the

locus of the negative reciprocal of the describing function

is

plotted after-

wards. In order to simplify the procedure, frequency-independent describing lfN(B) is functions N(B) are at first assumed (Fig. 2.2.7—4). Generally, 1/N(B) will a complex quantity. For a real describing function N(B), the curves on arrows The plane. complex the follow the negative real axis of respectively. amplitudes, point in the direction of increasing frequencies and For minimum -phase stable transfer functions G(s), the criterion of





closed-loop system stability

is

as follows:



1/N(B) does not intersect the The system is stable if and only if the locus polar diagram G(jco) and running through the main branch of the latter in the direction of increasing frequencies the former will in its full extent be passed ,

on

the right.

7

HARMONIC LINEARIZATION

98

The example illustrated by Fig. 2. 2. 7-4 refers to a stable system since the locus 1/N{B) is situated on the stable side (8) of the locus G(jco). I indicates the unstable domain. The Nyquist criterion is clearly a special case relating to linear systems of the criterion described above. The describing function of the proportional linear element is a constant independent of amplitude 1/N(B) is reduced to a certain point of the negative and hence, real axis.





B

Fig.

2. 2. 7 -4.

by the Nyquist diagram

means

of the inverse Nyquist diagram the initial equation (2. 2. 7-3) in the form

2.2.7 .2 Stability test by

Let us write up

Stability testing

_L

— N(B)

(2. 2. 7-9)

G(jco)

and

plot, in the

complex plane, the inverse frequency -response locus

as well as the negative describing-function locus

Fig. 2.2.7 -5. Stability testing

by the

—N(B)

l/G(jco),

(Fig. 2. 2. 7-5).

inverse Nyquist diagram

STABILITY TEST

99

Now

the stability criterion can be defined as follows: is stable if and only if the negative describing-junction locus does not intersect the inverse Nyquist locus for various amplitudes,

The system

l/G(jco) but is situated completely within the S domain at the left of the latter one, when the HG(jco) locus is traversed in the direction of increasing frequencies. Of course, this criterion applies again to such systems only where the transfer function G(s) of the open loop has no right-half-plane poles or zeros.

apparent that the condition defined above is the generalization of the common inverse Nyquist stability criterion. Figure 2. 2. 7-5 illustrates a stable example. It

is

2.2.7 .3 Stability test by the Nichols diagram Stability can be checked also on the basis of the Nichols (log gain vs. phase angle or phase margin) diagram. Plotting the frequency-response

and the negative function describing 1/N(B) on the basis of absolute values expressed in decibels, and phase angles (or phase margins) (Fig. 2.2. 7-6), we get the following criterion of stability: When running through the frequency-response plot G(jco) in the direction of increasing frequencies, we must pass the am1/N(B) plitude-dependent locus on its left (i.e. the latter has to lie on the right-hand side S of the former). As an explanation, let us add diagram

G(j(o)

reciprocal





=

G that the origin 20 log 0, (corresponddiagram the of 0 q> of the 1 ing to the point complex plane) is similarly to the right of the Nichols plot of a Fig. 2.2.7 -6. Stability testing by the Nichols diagram stable system. The reference direction is reversed here because and the Nichols InCr ]n |6r cp 7i on the horizontal diagram has 20 log G on the vertical and

0 and

-1 whereas -1 yield co 2 sec c -1 The frequencies thus obtained are in -1 gives oo 4.9 sec 25 sec H l good agreement with the results farther above.

For example,

K =

TM —

Problem 2.2.9. Figure 2. 2. 9-6

1

sec

=

and

Kh =

=

5 sec

,

.

the block diagram of a simple control system. In addition to the nonlinear device (ideal relay), there is also a dead- time element, besides the integrator with delay. Investigate the stability of the closedloop system by both the root-locus method and other methods. is

Fig. 2.2. 9-6. Block diagram of a control system

2.2.10

(cf.

Problem

2. 2. 9.1)

DRAWBACKS OF THE DESCRIBING-FUNCTION METHOD

of describing functions is approximate in that it neglects the higher harmonics arising at the output of the nonlinear element, that is,

The method

assumes a pure sine- wave output variable. This approximate method provides good enough results if the higher harmonics neglected do not in effect deteriorate the control process [G9, G28, T20]. As an example, let us consider the describing function of saturation, whose formula is given by equation (2. 2.4-1). Let us express the amplitude ratio of the third harmonic and the input sine wave; that is, let us plot a describ-

it

DRAWBACKS OF THE DESCRIBING-FUNCTION METHOD

115

ing function for the third harmonic:

N (B) = S

4 _

_

3n

Kn -

2 "| 3/2

B

N

The normaHzed describing-function diagrams for N(B)/Kn and 3 (B)/Kn respectively, are shown in Fig. 2.2.10—1. It is seen that, for weak input ,

the third harmonic may be neglected whereas stronger input signals, for which the output variable approximates a square-wave shape, result in a third harmonic of an order of magnitude identical with that of the fundamental. (Low -pass linear elements will of course reduce the third harmonic much more, than the fundamental.) The same applies to the signals,

fifth

harmonic.

Fig. 2.2.10-1. Describing functions of saturation-type nonlinearities

Figure 2.2.10-2 shows the diagrams of the fundamental and of the third and fifth harmonic for the characteristic of an ideal relay (or dry friction) Fig. 2.2.10-3 shows the same for a dead band. These figures warrant conclusions similar to those above. The method is of an uncertain accuracy also if the two curves are shaped as in Fig. 2.2.10-4, illustrating the describing functions of backlash and its third harmonic. If the control system has a convergent limit cycle of small amplitude (one of the points of the range of weak inputs in figure), then the system may or may not be acceptable depending on the permissible control deviation. Figure 2.2.10-4 reveals, however, that the neglected third harmonic is of a significant value just in this signal amplitude range, and if the filtering provided by the linear part is insufficient the result 8*

DRAWBACKS furnished by the

OB’

THE DESCRIBING-FUNCTION METHOD

117

method will not approximate the real system closely enough.

As an example, the describing function may indicate stability for a certain nonlinear control system which is in effect unstable. The method may yield an inaccurate result even if the frequency response locus of the linear

Fig. 2.2.10-4. Fundamental and third -harmonic development for backlash vs. the input variable

plant and the curve of the negative reciprocal describing function intersect at a small angle. As an illustration, Fig. 2.2.10-5 presents four Nichols diagrams: (a) the point of intersection determining the limit cycle is clearly defined, (b) the point of intersection is acceptable, (c) the intersection is uncertain at a small angle, and (d) the two curves run parallel without intersection. The two latter examples show with dashed fines where the curves corresponding to an actual nonlinearity wouldrun (it must be noted here, however, that the dashed curve is only of a theoretical interest, as it would necessitate the taking into consideration of all the higher harmonics and also of the linear system as a filter). In case (c), where the intersection of the approximate curve 1/N(B) with the locus G(jco) indicates the existence of a convergent limit cycle, there is no limit cycle in the actual system and, in case (d), where the two



HARMONIC LINEARIZATION

118

cj

Fig. 2.2.10-5. Limit cycle

d)

examination by the Nichols diagram

intersection or limit plots run parallel, the approximation indicates no oscillations. permanent cycle whereas, in reality the system will exhibit To summarize, it may be stated that in higher-order systems, where the zeros, that is, linear part of the open-loop system has several poles and functions describing filtering effect is satisfactory, the method of

where the

secondacceptable, and the results obtained will be correct. In first-, not alwill part linear the by and possibly third-order systems filtration of demethod the ways be satisfactory and, therefore, results provided scribing functions should be checked by other methods.

is

COMPENSATION IN NONLINEAR SYSTEMS 2.2.11

119

COMPENSATION IN NONLINEAR SYSTEMS

In nonlinear systems, there are two theoretically possible ways of compensation. First, we may deliberately introduce into the linear system a nonlinear compensator unit to improve dynamic behaviour and, on the other hand, the instability of a nonlinear system may be eliminated by the introduction of linear compensators [34-37].

Fig 2.2.11-1 Block diagram of nonlinear feedback compensation .

Compensation in nonlinear systems is best checked by means of an analog computer as the entire dynamic behaviour has to be studied. For preliminary investigations, aimed at the elimination of instability or of the limit cycle, the method of describing functions will be found useful. By applying linear signal compensators, the frequency response of the linear part may be modified and intersection with the amplitude-dependent locus can be avoided while the loop gain is held constant or indeed increased.

In nonlinear control systems, nonlinear cascade or feedback compensation

is

also often used.

Figure 2.2.11-1 illustrates the internal feedback compensation of a nonlinear system. Its characteristic equation is 1

if

+ G HN h {B) + GxG H N(B) = 2

x

the nonlinear signal compensator

N(B) then the

critical condition

N(B) the other hand,

if

0.

(2.2.11-1)

chosen so that

= N h {B),

(2.2.11-2)

of system stability v 1

On

is

0

ill

be

= Gx(G H, + H). 2

the linear compensator

H=GH 2

0

is

(2.2.11-3)

chosen so that (2.2.11-4)

HARMONIC LINEARIZATION

120

then the

critical

condition of stability

may

1

be written as

= G G Ho. x

N(B)Tn^)

2

(2.2.11-5)

In the third case, when both conditions (2.2.11-2) and (2.2.11-4) are fied, the critical condition is given by

= G-jGnHn 2

2N(B) The most

interesting case is

when

N„(B) is satisfied.

.

(

satis-

2 2 11 — 6 ') .

.

the condition

= 1 - N(B)

(2.2.11-7)

Equation (2.2.11-5) then takes the form

—1 — GG H X

2

0

,

that is, the nonlinear system can be linearized by means of an appropriate nonlinear compensation satisfying (2.2.11-7). The type of nonlinear compensator realizing this aim may be pinpointed by means of the inverse describing function method. By means of equations (2.2.11-3) or (2.2.11-5), and (2.2.11-6) or (2.2.11-7), respectively the effect of the signal compensator on system stability is relatively easy to check.

2.2.12 THE HARMONIC-BALANCE METHOD

Of all the harmonic linearization methods, that of the describing functions is of greatest importance in control engineering. Sometimes, however, an old method, the method of harmonic balance, can also be used [1, 64], [K1, K3, P27]. This method is suitable only for demonstrating the possible existence of a limit cycle and will not indicate the convergence or divergence of the limit cycle. The method consists, essentially, in substituting the expressions u = B sin ωt, u̇ = ωB cos ωt, ü = -ω²B sin ωt, etc., into the homogeneous nonlinear differential equation and, neglecting higher harmonics, making the coefficients of the sine and cosine terms separately equal to zero. It is these conditions that determine the amplitude B_L and frequency ω_L of the limit cycle. Divergence or convergence of the limit cycle has to be decided separately.

Example 2.2.12.1

In Example 2.2.2.3, the assumption u = B sin ωt leads to the harmonically linearized equation

(1 - ω²) B sin ωt - ε(1 - B²/4) ωB cos ωt = 0.


The condition of harmonic balance is

(1 - ω²) B = 0,    ε(1 - B²/4) ωB = 0.

Hence, the limit-cycle data are B_L = 2, ω_L = 1.

Incidentally, since u = B sin ωt, the harmonically linearized differential equation may be written also in the form

ü - ε(1 - B²/4) u̇ + u = 0.
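The balance conditions above can also be produced numerically, without hand integration, by projecting the underlying nonlinear equation onto sin ωt and cos ωt. The sketch below assumes the Van der Pol-type equation ü - ε(1 - u²)u̇ + u = 0 (whose harmonic linearization gives the form just shown) and an assumed value ε = 0.5; it recovers B_L ≈ 2 and ω_L ≈ 1.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import quad

eps = 0.5  # assumed value, only for illustration

def residual(p):
    B, w = p
    # residual of  u'' - eps*(1 - u**2)*u' + u  for u = B*sin(w*t),
    # projected onto sin(w*t) and cos(w*t) over one period (harmonic balance)
    def f(t):
        u, du, ddu = B * np.sin(w * t), B * w * np.cos(w * t), -B * w * w * np.sin(w * t)
        return ddu - eps * (1 - u**2) * du + u
    T = 2 * np.pi / w
    rs = quad(lambda t: f(t) * np.sin(w * t), 0, T)[0]
    rc = quad(lambda t: f(t) * np.cos(w * t), 0, T)[0]
    return [rs, rc]

B_L, w_L = fsolve(residual, [1.5, 1.2])
print(B_L, w_L)   # approximately 2.0 and 1.0
```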

2.2.13 HARMONIC LINEARIZATION OF STATE EQUATIONS

The harmonic linearization method described in the above section can be generalized to the linearization of state equations as well. Such equations are often encountered in control engineering and, consequently, their harmonic linearization is an important problem [P27]. Let us start from the homogeneous nonlinear vector differential equation

ẋ = Ax + g(x).    (2.2.13-1)

It is assumed that g(x) is an odd nonlinear function, i.e. that each of its components is represented by an odd function:

g(0) = 0;    g(-x) = -g(x);    (2.2.13-2)

g_i(0, 0, ..., 0) = 0;    g_i(-x_1, -x_2, ..., -x_n) = -g_i(x_1, x_2, ..., x_n)    (i = 1, 2, ..., n).    (2.2.13-3)

Let us find a linearized vector equation

ẋ = Hx + Kẋ,    (2.2.13-4)

whose components H_ij(b, ω) and K_ij(b, ω) are functions of the amplitudes and of the frequency, that is, of the amplitude vector b = [B_1, B_2, ..., B_n]^T and of ω, respectively. The components of vector x are substituted sequentially by the values

x_i = B_i sin(ωt + ψ_i)    (i = 1, 2, ..., n),    ψ_1 = 0.    (2.2.13-5)

Thus, by substituting into the nonlinear function g(x), neglecting higher harmonics, and determining the vector resultants of the individual components, we get the linearized coefficients

H_ij = (1/(π B_j)) ∫_0^{2π} g_ij(B_j sin(ωt + ψ_j)) sin(ωt + ψ_j) d(ωt),    (2.2.13-14)

K_ij = (1/(π ω B_j)) ∫_0^{2π} g_ij(B_j sin(ωt + ψ_j)) cos(ωt + ψ_j) d(ωt).    (2.2.13-15)
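The integrals (2.2.13-14) and (2.2.13-15) are easily evaluated by numerical quadrature. The sketch below is a minimal single-variable check using an assumed cubic nonlinearity g(x) = x³ (not taken from the text), for which the analytic values are H = 3B²/4 and K = 0.

```python
import numpy as np

def harmonic_gains(g, B, omega=1.0, n=4000):
    # H and K of (2.2.13-14)/(2.2.13-15) for a single-variable odd nonlinearity g,
    # evaluated along x = B*sin(wt); periodic rectangle-rule quadrature.
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)    # th = w*t over one period
    gv = g(B * np.sin(th))
    H = 2.0 * np.mean(gv * np.sin(th)) / B
    K = 2.0 * np.mean(gv * np.cos(th)) / (omega * B)
    return H, K

# check against the analytic value for g(x) = x**3: H = 3*B**2/4, K = 0
for B in (0.5, 1.0, 2.0):
    H, K = harmonic_gains(lambda x: x**3, B)
    print(B, H, 3 * B**2 / 4, K)
```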

The stability of a harmonically linearized system can be determined from the roots of the characteristic equation

D(s) = |H + sK - sI| = 0.    (2.2.13-16)

In possession of the Routh-Hurwitz criterion, the roots need not be calculated.
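If numerical values of the matrices are available for given amplitudes and frequency, the roots of (2.2.13-16) can also be obtained directly as the eigenvalues of (I - K)⁻¹H. The sketch below uses purely hypothetical 2×2 matrices, standing in for H_ij(b, ω) and K_ij(b, ω), just to show the computation.

```python
import numpy as np

def characteristic_roots(H, K):
    # Roots of D(s) = |H + s*K - s*I| = 0, i.e. eigenvalues of (I - K)^{-1} H.
    n = H.shape[0]
    return np.linalg.eigvals(np.linalg.solve(np.eye(n) - K, H))

def is_stable(H, K, tol=1e-9):
    return bool(np.all(characteristic_roots(H, K).real < -tol))

# Hypothetical numerical matrices (illustration only):
H = np.array([[0.0, 1.0],
              [-2.0, -0.4]])
K = np.array([[0.0, 0.0],
              [0.3, 0.0]])
print(characteristic_roots(H, K), is_stable(H, K))
```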

In the case in hand, the Hurwitz determinants depend on the amplitudes B_j. In linear systems, the last Hurwitz determinant but one, H_{n-1}, usually assumes zero value and hence the characteristic equation has two conjugate complex roots. In the case in hand, therefore, it is once more best to plot the surface H_{n-1} = 0 in the parameter space of the coefficients H_ij and K_ij, to determine whether the curve obtained by varying the amplitudes B_j in the domain 0 < B_j < ∞ crosses this surface.

Consider, as an example, the state equations

ẋ_1 = x_2,
ẋ_2 = -x_1 + ε x_2 - ε x_1² x_2.

By equation (2.2.13-13), the only nonlinearity, g_21(x_1, x_2) = -ε x_1² x_2, is of the multivariable type. Thus, by the multivariable version of (2.2.13-14),

H_21 = (1/(π B_1)) ∫_0^{2π} (-ε B_1² sin²ωt · B_2 sin(ωt + ψ_2)) sin ωt d(ωt),

or

H_21 = -(3 ε B_1 B_2 / 4) cos ψ_2,

and from the multivariable version of (2.2.13-15),

K_21 = (1/(π ω B_1)) ∫_0^{2π} (-ε B_1² sin²ωt · B_2 sin(ωt + ψ_2)) cos ωt d(ωt),

or

K_21 = -(ε B_1 B_2 / (4ω)) sin ψ_2.

Since x_2 = ẋ_1, we have B_2 = ωB_1 and ψ_2 = π/2; hence cos ψ_2 = 0 and sin ψ_2 = 1, so that H_21 = 0, K_21 = -ε B_1²/4, and the harmonically linearized equations become

ẋ_1 = x_2,
ẋ_2 = -x_1 + ε(1 - B_1²/4) x_2.
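The coefficient K_21 = -εB_1²/4 obtained above can be checked numerically by evaluating the multivariable quadrature along the trial solution x_1 = B_1 sin ωt, x_2 = ωB_1 cos ωt. The values ε = 0.5 and ω = 1 below are assumed, purely for illustration.

```python
import numpy as np

eps = 0.5           # assumed value of the parameter
w = 1.0             # assumed frequency of the trial solution

def gains_21(B1, n=20000):
    # Quadrature of (2.2.13-14)/(2.2.13-15) for g_21 = -eps*x1**2*x2,
    # evaluated along x1 = B1*sin(wt), x2 = dx1/dt = w*B1*cos(wt).
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)     # th = w*t
    x1, x2 = B1 * np.sin(th), w * B1 * np.cos(th)
    g21 = -eps * x1**2 * x2
    H21 = 2.0 * np.mean(g21 * np.sin(th)) / B1
    K21 = 2.0 * np.mean(g21 * np.cos(th)) / (w * B1)
    return H21, K21

for B1 in (1.0, 2.0, 3.0):
    H21, K21 = gains_21(B1)
    print(B1, H21, K21, -eps * B1**2 / 4.0)   # K21 should match -eps*B1^2/4, H21 ~ 0
```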



2.2.14 COMPARISON OF THE THREE METHODS

The three methods of harmonic linearization are equivalent, but their application involves more or less computation. This will be shown by an example: that of the control system with backlash discussed in Example 2.2.8.1.

Example 2.2.8.1 illustrates the use of the describing-function method. The same problem was studied in Examples 2.2.9.2 and 2.2.9.3 by the root-locus method based on a describing function. Now two further alternatives will be presented: the harmonic-balance method, and the harmonic linearization of the state equations.

Example 2.2.14.1

Let us solve Example 2.2.8.1 by the harmonic-balance method. If the reference command r(t) is zero, the differential equation of the nonlinear control system becomes

T_M ü + u̇ + K_L g(u) = 0,

where K_L = K_E K_M. If u = B sin ωt, and the nonlinearity is substituted by the describing function, then, taking the coefficients A_1 and B_1 from Example 2.2.2.2 and assuming B > h, we obtain

T_M ü + u̇ + (K_L K_N/π)[π/2 + sin⁻¹(1 - 2h/B) + 2(1 - 2h/B)√(h/B - h²/B²)] u - (4 K_L K_N/(πω))(h/B - h²/B²) u̇ = 0.

The condition of existence of a limit cycle is (since the sine and cosine, or real and imaginary, terms of the equation obtained by substituting u = B sin ωt or u = B e^{jωt} have to become equal to zero):

-ω²T_M + (K_L K_N/π)[π/2 + sin⁻¹(1 - 2h/B) + 2(1 - 2h/B)√(h/B - h²/B²)] = 0

and

1 = (4 K_L K_N/(πω))(h/B - h²/B²).

With a time constant of T_M = 1 sec, the convergent limit-cycle data for a gain of K_L = 25 sec⁻¹ are h/B_H = 0.18 and ω_H = 4.69, whereas for a gain of K_L = 5 sec⁻¹ they are h/B_C = 0.50 and ω_C = 1.59, in good agreement with our earlier results. (The divergent limit-cycle data are also obtained.) Limit-cycle convergence itself must be decided on the basis of other considerations.
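The two balance conditions depend on the amplitude only through the ratio h/B, so they can be solved numerically for h/B and ω. The minimal sketch below uses the describing-function expressions written above and reproduces the cited pairs (0.18, 4.69) and (0.50, 1.59); the bracketing interval of the root search is an assumption covering the convergent branch only.

```python
import numpy as np
from scipy.optimize import brentq

K_N, T_M = 1.0, 1.0      # h enters only through the ratio x = h/B

def conditions(x, K_L):
    # real and imaginary balance conditions, expressed as two values of omega
    re = (K_L * K_N / np.pi) * (np.pi / 2 + np.arcsin(1 - 2 * x)
                                + 2 * (1 - 2 * x) * np.sqrt(x * (1 - x)))
    w_re = np.sqrt(re / T_M)                        # from  w^2*T_M = K_L*N_P(B)
    w_im = (4 * K_L * K_N / np.pi) * x * (1 - x)    # from the quadrature condition
    return w_re, w_im

for K_L in (25.0, 5.0):
    f = lambda x: np.subtract(*conditions(x, K_L))
    x_sol = brentq(f, 1e-3, 0.6)                    # convergent branch
    w_sol = conditions(x_sol, K_L)[1]
    print(K_L, round(x_sol, 2), round(w_sol, 2))    # ~ (0.18, 4.69) and (0.50, 1.59)
```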

Example 2.2.14.2

Let us solve Example 2.2.8.1 by harmonic linearization of the state equations. On substitution of x_1 = u and x_2 = u̇, the phase-variable equations will become

ẋ_1 = x_2,    ẋ_2 = -(1/T_M) x_2 - (K_L/T_M) g(x_1);

now the coefficients H_21 and K_21 of the harmonically linearized equations

ẋ_1 = x_2,    ẋ_2 = H_21 x_1 + K_21 ẋ_1 - (1/T_M) x_2

are to be found.

v̄ = ∫_{-∞}^{+∞} g(u) p_1(u) du = g_0(m_u, σ_u) = K_S0(m_u, σ_u) m_u,

R_vv(τ) = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} g(u_1) g(u_2) p_2(u_1, u_2; m_u, σ_u, τ) du_1 du_2.    (2.3.7-4)

For zero shifting time (τ = 0), the correlation function provides the mean-square value of the variable or, if m_u = 0, its variance:

σ_u² = (1/2π) ∫_{-∞}^{+∞} Φ_uu(ω) dω.    (2.3.7-6)

Between the power-density spectra of the output and input of the linear system illustrated in Fig. 2.3.7-1, there is the relationship

Φ_yy(s) = W(-s) W(s) Φ_uu(s).    (2.3.7-7)

Fig. 2.3.7-1. Power-density spectra of the linear system

This is the so-called index-changing rule [C26]. The output-signal variance will be

σ_y² = ȳ²(t) = (1/2π) ∫_{-∞}^{+∞} Φ_yy(jω) dω.    (2.3.7-8)
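Relations (2.3.7-7) and (2.3.7-8) are easy to evaluate numerically. The sketch below uses an assumed first-order spectrum and transfer function (not those of the example that follows) so that the result can be compared with the closed-form value σ_y² = aN²/(2(a + 1)).

```python
import numpy as np

def output_variance(Phi_uu, W, w_max=200.0, n=400001):
    # sigma_y^2 = (1/2pi) * integral of |W(jw)|^2 * Phi_uu(w) over the whole w axis
    w = np.linspace(-w_max, w_max, n)
    Phi_yy = np.abs(W(1j * w))**2 * Phi_uu(w)
    return np.trapz(Phi_yy, w) / (2.0 * np.pi)

# Assumed first-order illustration: Phi_uu = N^2/(1 + w^2), W(s) = a/(s + a)
N, a = 1.0, 2.0
sigma2 = output_variance(lambda w: N**2 / (1 + w**2), lambda s: a / (s + a))
print(sigma2, a * N**2 / (2 * (a + 1)))   # numerical value vs analytic value 1/3
```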


As seen above, one of the statistical linearization methods creates a connection between the standard deviations at the input and output of the nonlinear static element:

σ_v = K_S σ_u,    (2.3.7-9)

whence statistical analysis may be extended to cover nonlinear systems too. If the transfer functions and characteristics of the open-loop system are known, the aim of the analysis is to determine statistical characteristics such as standard deviations at zero mean value of the variables arising at different points of the closed loop.
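For a limiter-type element with a zero-mean Gaussian input, one common form of the statistical gain K_S in (2.3.7-9) (the minimum-mean-square-error gain) has the closed form K_S = k·erf(δ/(√2 σ_u)). The sketch below checks that expression by Monte Carlo; the slope k and breakpoint δ are generic placeholder parameters, not necessarily the book's k and b.

```python
import numpy as np
from math import erf, sqrt

def K_S_saturation(sigma, k=1.0, delta=2.0):
    # Statistical (random-component) gain of an ideal saturation of slope k and
    # breakpoint delta, for a zero-mean Gaussian input of standard deviation sigma.
    return k * erf(delta / (sqrt(2.0) * sigma))

def K_S_monte_carlo(sigma, k=1.0, delta=2.0, n=200000, seed=0):
    u = np.random.default_rng(seed).normal(0.0, sigma, n)
    v = np.clip(k * u, -k * delta, k * delta)     # limiter output
    return np.mean(v * u) / sigma**2              # E[g(u)*u] / sigma_u^2

for sigma in (0.5, 1.0, 2.0, 4.0):
    print(sigma, K_S_saturation(sigma), K_S_monte_carlo(sigma))
```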

Example 2.3.7.1

The nonlinearity in the control system illustrated in Fig. 2.3.7-2 is a limiter with K_N = 1, k = b = 2.

Fig. 2.3.7-2a. Block diagram of a control system (cf. Example 2.3.7.1)

Fig. 2.3.7-2b. Block diagram of the same system, with the statistical parameters of the variables and the describing functions

The complete input signal r(t) of the closed-loop system falls into two parts:

r(t) = m_r + n_r(t),

where m_r is the expected value of the complete input variable, and n_r(t) the random noise component of zero expectation. Since the process is stationary, the expectation is independent of time. The distribution of the complete input is a normal distribution of mean value m_r (Fig. 2.3.7-3). Let the power-density spectrum of the noise component of the complete input signal be

Φ_nn(jω) = N_1²/(1 + ω²) = [N_1/(1 + jω)] · [N_1/(1 - jω)].


Let us study first the effect of the expected value of the complete input signal on the closed-loop control system. The transfer function defining the input of the limiter is

W_a(s) = K_E s/(s + K_E K_M)

for the useful-signal component, and

W_b(s) = K_E s/(s + K_E K_S)

for the noise component, since the nonlinearity may be considered as a statistical element of gain K_M and K_S for the useful-signal and noise component, respectively.

The expected value of the input signal u(t) of the nonlinear element is zero. This is easy to see, since the control system is of Type 1 and therefore the steady-state value of the actuating error is zero. Hence the nonlinearity transfers no useful signal in the steady state.

Now let us turn to the effect of the standard deviation of the complete input signal. Taking into consideration the power-density spectrum and the transfer function of the noise component, and since s = jω, the power-density spectrum at the input of the nonlinearity will be

Φ_uu(s) = W_b(-s) W_b(s) Φ_nn(s) = [(-K_E N_1 s)/(s² - (1 + K_E K_S) s + K_E K_S)] · [(K_E N_1 s)/(s² + (1 + K_E K_S) s + K_E K_S)].
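Since W_b(s) contains the statistical gain K_S, which itself depends on σ_u, the closed-loop analysis ultimately requires K_S and σ_u to satisfy each other simultaneously. The sketch below shows that structure as a simple fixed-point iteration; K_E, N_1 and the limiter parameters are assumed placeholder values, not the data of the example.

```python
import numpy as np
from math import erf, sqrt

# Assumed numbers, only to make the iteration concrete:
K_E, N1, k, delta = 4.0, 1.0, 1.0, 2.0

def K_S_of_sigma(sigma):
    # statistical gain of the limiter for a zero-mean Gaussian input
    return k * erf(delta / (sqrt(2.0) * sigma))

def sigma_u_of_KS(K_S, w_max=2000.0, n=400001):
    # sigma_u^2 = (1/2pi) * integral |W_b(jw)|^2 * Phi_nn(w) dw,
    # with W_b(s) = K_E*s/(s + K_E*K_S) and Phi_nn = N1^2/(1 + w^2)
    w = np.linspace(-w_max, w_max, n)
    Wb2 = (K_E * w)**2 / (w**2 + (K_E * K_S)**2)
    Phi_uu = Wb2 * N1**2 / (1 + w**2)
    return sqrt(np.trapz(Phi_uu, w) / (2.0 * np.pi))

# fixed-point iteration: sigma_u determines K_S, and K_S determines sigma_u
sigma = 1.0
for _ in range(30):
    sigma = sigma_u_of_KS(K_S_of_sigma(sigma))
print(sigma, K_S_of_sigma(sigma))
```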