AL-FARABI KAZAKH NATIONAL UNIVERSITY
S. Aisagaliev
LECTURES ON QUALITATIVE THEORY OF DIFFERENTIAL EQUATIONS Educational manual
Almaty «Qazaq University» 2018
UDC 51 (075) LBC 22.1 я 73 A 28 Recommended for publication by the decision of the Academic Council of the Faculty of Mechanics and Mathematics, Editorial and Publishing Council of Al-Farabi Kazakh National University (Protocol №6 dated 04.04.2018)
Reviewers: Doctor of Physical and Mathematical Sciences, Academician T.Sh. Kalmenov Doctor of Physical and Mathematical Sciences, Professor M.T. Jenaliev Doctor of Physical and Mathematical Sciences, Professor S.Ya. Serovaysky
A 28
Aisagaliev S. Lectures on qualitative theory of differential equations: educational manual / S. Aisagaliev. ‒ Almaty: Qazaq University, 2018. ‒ 196 p. ISBN 978-601-04-3485-1 The book is written on the basis of lectures delivered at the Mechanics and Mathematics Faculty of KazNU named after al-Farabi, as well as scientific works on the qualitative theory of differential equations. It describes the solvability and construction of solutions of integral equations, boundary-value problems of ordinary differential equations with phase and integral constraints, and a constructive method for solving the boundary-value problem of a linear integro-differential equation. The book is intended for undergraduates and doctoral students in the specialty «Mathematics». Published in authorial release.
UDC 51 (075) LBC 22.1 я 73 ISBN 978-601-04-3485-1
© Aisagaliev S., 2018 © Al-Farabi KazNU, 2018
CONTENTS
Preface ................................................................. 5
Chapter I. Integral equations ........................................... 8
Lecture 1. Integral equations solvable for all right hand sides ......... 8
Lecture 2. Solvability of an integral equation with fixed right hand side ... 12
Lecture 3. Solvability of the first kind Fredholm integral equation ..... 18
Lecture 4. An approximate solution of the first kind Fredholm integral equation ... 23
Lecture 5. Integral equation with a parameter ........................... 28
Lecture 6. Integral equation for a function of several variables ........ 32
Comments ................................................................ 37
Chapter II. Boundary value problems of linear ordinary differential equations ... 40
Lecture 7. Two-point boundary value problem. A necessary and sufficient condition for existence of a solution ... 41
Lecture 8. Construction of a solution to two-point boundary value problem №1 ... 46
Lecture 9. Boundary value problems with phase constraints ............... 55
Lecture 10. Boundary value problems with phase and integral constraints ... 59
Comments ................................................................ 69
Chapter III. Boundary value problems for nonlinear ordinary differential equations ... 71
Lecture 11. Two-point boundary value problem ............................ 72
Lecture 12. Boundary value problems with state variable constraints ..... 81
Lecture 13. Boundary value problems with state variable constraint and integral constraints ... 85
Comments ................................................................ 93
Chapter IV. Boundary value problem with a parameter for ordinary differential equations ... 95
Lecture 14. Statement of the problem. Imbedding principle ............... 95
Lecture 15. Optimization problem ........................................ 102
Lecture 16. Solution to the Sturm-Liouville problem ..................... 105
Lecture 17. Boundary value problems for linear systems with a parameter ... 112
Lecture 18. Boundary value problems for nonlinear systems with a parameter ... 121
Lecture 19. Boundary value problems with a parameter with pure state variable constraints ... 126
Comments ................................................................ 129
Chapter V. Periodic solutions of autonomous dynamical systems ........... 131
Lecture 20. Statement of the problem .................................... 131
Lecture 21. A periodic solution of a nonlinear autonomous dynamical system. Transformation ... 135
Lecture 22. A necessary and sufficient condition for existence of a periodic solution ... 140
Lecture 23. An optimization problem ..................................... 145
Lecture 24. Construction of periodic solutions .......................... 148
Lecture 25. Periodic solutions to linear autonomous dynamical systems ... 156
Lecture 26. The case without constraints ................................ 161
Lecture 27. The case with state variable constraints .................... 168
Comments ................................................................ 171
Chapter VI. Constructive method for solving boundary value problems of linear integro-differential equations ... 174
Lecture 28. Statement of the problem. Transformation .................... 174
Lecture 29. Linear controllable system. Optimization problem ............ 177
Lecture 30. Gradient of the functional. Minimizing sequence ............. 181
Lecture 31. Existence of a solution. Constructing a solution to the boundary value problem ... 185
Comments ................................................................ 194
PREFACE

These notes build upon a course I taught at the Faculty of Mechanics and Mathematics of al-Farabi Kazakh National University in recent years, as well as upon the author's research on the qualitative theory of differential equations. The course is intended for undergraduates and doctoral students in the specialty "Mathematics" who specialize in mathematical control theory.

Mathematical control theory, which emerged at the junction of differential equations, the calculus of variations, and analytical mechanics, is a new direction in the qualitative theory of differential equations. Topical problems of the natural sciences, technology, economics, ecology, and other fields require new mathematical methods for solving complex boundary value problems. Mathematical models of the control of nuclear and chemical reactors, power systems, robotic systems, economic processes, and the like lead to complex boundary value problems for ordinary differential equations and integro-differential equations. A boundary value problem is said to be complex if, in addition to the boundary conditions, phase constraints and integral constraints are imposed on the state variables of the system. The main problems are to provide a necessary and sufficient condition for existence of a solution to such a boundary value problem and to develop methods for constructing its solutions.

The existing system of training mathematicians (bachelor's, master's, and doctoral programs) requires new textbooks and teaching aids for each level of education. Graduate and doctoral students in mathematical control theory must have fundamental knowledge of the theory of integral equations, controllability theory, the theory of extremal problems, and the theory of boundary value problems for differential and integro-differential equations. Hence there is a need for textbooks in these areas that incorporate the results of new fundamental research.
The book contains results on the solvability and construction of solutions of integral equations, boundary value problems for ordinary differential equations with phase and integral constraints, and a constructive method for solving boundary value problems for linear integro-differential equations.

Chapter I presents the solvability and construction of solutions of integral equations, both for an arbitrary right hand side and for a fixed right hand side. A necessary and sufficient condition for solvability is obtained and a construction of the general solution is presented. A new method for studying the solvability of, and constructing a solution to, the first kind Fredholm integral equation is developed. The first kind Fredholm integral equation is considered separately for unknown functions of one variable and of several variables (Lectures 1-6).

Chapter II covers research results on two-point boundary value problems, boundary value problems with phase constraints, and boundary value problems with phase and integral constraints for linear systems of ordinary differential equations. A necessary and sufficient condition for solvability is obtained for these problems and methods for solving them are developed (Lectures 7-10).

Chapter III deals with boundary value problems for nonlinear systems of ordinary differential equations. Boundary value problems with boundary conditions on given convex closed sets, as well as problems with phase and integral constraints, are considered. A necessary and sufficient condition for existence of a solution is derived and constructions of solutions are presented (Lectures 11-13).

Chapter IV considers a method for studying the solvability of, and constructing a solution to, a boundary value problem with a parameter in the presence of phase and integral constraints. The basics of the imbedding principle, the reduction to a free end point optimal control problem, and solutions to the Sturm-Liouville problem are presented (Lectures 14-19).

Chapter V treats the existence of periodic solutions of autonomous dynamical systems and methods of constructing them for linear and nonlinear systems. A periodic solution of the Duffing equation is considered (Lectures 20-27).

Chapter VI studies the solvability and construction of solutions of linear integro-differential equations with phase and integral constraints. A necessary and sufficient condition for solvability is derived, and solutions are constructed by generating minimizing sequences. The proposed method of solving the boundary value problem is based on the imbedding principle (Lectures 28-31).
This book presumes that the reader has mastered functional analysis and optimal control to the extent of the author's book «Lectures on Optimal Control» (Almaty: Kazakh universiteti, 2007, 278 p., in Russian).

The author is grateful to the reviewers: academician T.Sh. Kalmenov, doctor of phys.-math. sciences, professor M.T. Dzhenaliev, and doctor of phys.-math. sciences, professor S.Ya. Serovajsky. The author expresses deep gratitude to associate professors Zh.Kh. Zhunusova and A.A. Kabidoldanova, candidates of phys.-math. sciences, for translating this book into English, and to scientific employee A.A. Ayazbaeva for assisting in the preparation of the manuscript for publication. The author also thanks the staff of the Department of Differential Equations and Control Theory of the Faculty of Mechanics and Mathematics of al-Farabi Kazakh National University for help in preparing the manuscript for publication, and would be grateful to all who send their feedback and comments on this book.

Aisagaliev S.
Chapter I INTEGRAL EQUATIONS
Research results on the solvability and construction of solutions of integral equations, for unknown functions of one variable and of several variables, are presented in this chapter. The first kind Fredholm equation and other types of integral equations are considered.

Lecture 1. Integral equations solvable for all right hand sides

Solving controllability problems and optimal control problems for processes described by ordinary differential equations, as well as boundary value problems of ordinary differential equations with state variable constraints and integral constraints, involves the study of the integral equation

$$Ku \equiv \int_a^b K(t,\tau)u(\tau)\,d\tau = f(t), \quad t \in [t_0, t_1]. \quad (1.1)$$

A special case of (1.1) is the integral equation

$$K_1 w \equiv \int_a^b K(t_*,\tau)w(\tau)\,d\tau = \beta, \quad t_* \in [t_0, t_1], \quad (1.2)$$

where $K(t_*,\tau) = K(\tau) = \|K_{ij}(\tau)\|$, $i = \overline{1,n}$, $j = \overline{1,m}$, is a given matrix with elements from $L_2$, $t_* \in [t_0, t_1]$ is a fixed point, $K_{ij}(\tau) \in L_2(I_1, R^1)$, $w(\tau) \in L_2(I_1, R^m)$ is an unknown function, and $\beta \in R^n$.

Problem 1. Provide a necessary and sufficient condition for existence of a solution to the integral equation (1.2) for any $\beta \in R^n$.

Problem 2. Find the general solution of the integral equation (1.2) for any $\beta \in R^n$.

The following theorem provides a necessary and sufficient condition for existence of a solution to the integral equation (1.2).

Theorem 1. A necessary and sufficient condition for existence of a solution to the integral equation (1.2) for any $\beta \in R^n$ is that the $n \times n$ matrix

$$C = \int_a^b K(\tau)K^*(\tau)\,d\tau \quad (1.3)$$

be positive definite, where $b > a$ and the superscript $(*)$ denotes transposition.
Proof. Sufficiency. Let the matrix $C$ be positive definite. We show that the integral equation (1.2) has a solution for any $\beta \in R^n$. Choose $w(\tau) = K^*(\tau)C^{-1}\beta$, $\tau \in I_1 = [a,b]$. Then

$$K_1 w = \int_a^b K(\tau)K^*(\tau)\,d\tau\, C^{-1}\beta = \beta.$$

Consequently, in the case $C > 0$ the integral equation (1.2) has at least one solution $w(\tau) = K^*(\tau)C^{-1}\beta$, $\tau \in I_1$, where $\beta \in R^n$ is an arbitrary vector. This concludes the sufficiency.

Necessity. Assume that the integral equation (1.2) has a solution for any fixed $\beta \in R^n$. We show that $C > 0$. Since $C \ge 0$, it is sufficient to show that the matrix $C$ is nonsingular. Assume the converse: the matrix $C$ is singular. Then there exists a vector $c \in R^n$, $c \ne 0$, such that $c^* C c = 0$. Define the function $v(\tau) = K^*(\tau)c$, $\tau \in I_1$, $v(\cdot) \in L_2(I_1, R^m)$. Note that

$$\int_a^b v^*(\tau)v(\tau)\,d\tau = c^* \int_a^b K(\tau)K^*(\tau)\,d\tau\, c = c^* C c = 0.$$

This means that $v(\tau) = 0$ for all $\tau \in I_1$. Since the integral equation (1.2) has a solution for any $\beta \in R^n$, in particular for $\beta = c$ there exists a function $\bar w(\cdot) \in L_2(I_1, R^m)$ such that

$$\int_a^b K(\tau)\bar w(\tau)\,d\tau = c.$$

Then we have

$$0 = \int_a^b v^*(\tau)\bar w(\tau)\,d\tau = c^* \int_a^b K(\tau)\bar w(\tau)\,d\tau = c^* c.$$

This contradicts the fact that $c \ne 0$ and finishes the proof of necessity. The proof of the theorem is complete.

The following theorem provides the general solution of the integral equation (1.2).

Theorem 2. Let the matrix $C$ defined by (1.3) be positive definite. Then for any $\beta \in R^n$ the function

$$w(\tau) = K^*(\tau)C^{-1}\beta + p(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta, \quad \tau \in I_1 = [a,b], \quad (1.4)$$
is the general solution of the integral equation (1.2), where $p(\cdot) \in L_2(I_1, R^m)$ is an arbitrary function and $\beta \in R^n$ is an arbitrary vector.

Proof. Let us introduce the sets

$$W = \Big\{w(\cdot) \in L_2(I_1, R^m)\ \Big/\ \int_a^b K(\tau)w(\tau)\,d\tau = \beta\Big\}, \quad (1.5)$$

$$Q = \Big\{w(\cdot) \in L_2(I_1, R^m)\ \Big/\ w(\tau) = K^*(\tau)C^{-1}\beta + p(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta,\ \forall p(\cdot) \in L_2(I_1, R^m)\Big\}. \quad (1.6)$$

The set $W$ contains all solutions of the integral equation (1.2) under the condition $C > 0$. The theorem states that a function $w(\cdot) \in L_2(I_1, R^m)$ belongs to the set $W$ if and only if it is contained in $Q$, i.e. $W = Q$. To show that $W = Q$ it is sufficient to prove the inclusions $Q \subset W$ and $W \subset Q$.

Show that $Q \subset W$. Indeed, if $w(\tau) \in Q$, then, as follows from (1.6),

$$\int_a^b K(\tau)w(\tau)\,d\tau = \int_a^b K(\tau)K^*(\tau)\,d\tau\,C^{-1}\beta + \int_a^b K(\tau)p(\tau)\,d\tau - \int_a^b K(\tau)K^*(\tau)\,d\tau\,C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta = \beta + \int_a^b K(\tau)p(\tau)\,d\tau - \int_a^b K(\eta)p(\eta)\,d\eta = \beta.$$

This implies that $w(\tau) \in W$.

Show that $W \subset Q$. Let $w_*(\tau) \in W$, i.e. the following equality holds (see (1.5)):

$$\int_a^b K(\tau)w_*(\tau)\,d\tau = \beta.$$

The function $p(\cdot) \in L_2(I_1, R^m)$ in the relation (1.4) is arbitrary; in particular, we can choose $p(\tau) = w_*(\tau)$, $\tau \in I_1$. Then the function $w(\tau) \in Q$ can be rewritten in the form

$$w(\tau) = K^*(\tau)C^{-1}\beta + w_*(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)w_*(\eta)\,d\eta = w_*(\tau) + K^*(\tau)C^{-1}\Big[\beta - \int_a^b K(\eta)w_*(\eta)\,d\eta\Big] = w_*(\tau), \quad \tau \in I_1.$$

Consequently $w_*(\tau) = w(\tau) \in Q$, which yields $W \subset Q$. From the inclusions $Q \subset W$ and $W \subset Q$ it follows that $W = Q$. The theorem is proved.

The main properties of the solutions of the integral equation (1.2) are the following.

1. The function $w(\tau)$, $\tau \in I_1$, can be represented in the form $w(\tau) = w_1(\tau) + w_2(\tau)$, where $w_1(\tau) = K^*(\tau)C^{-1}\beta$ is a particular solution of the integral equation (1.2) and

$$w_2(\tau) = p(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta, \quad \tau \in I_1,$$

is a solution of the homogeneous integral equation $\int_a^b K(\tau)w_2(\tau)\,d\tau = 0$; here $p(\cdot) \in L_2(I_1, R^m)$ is an arbitrary function. Indeed,

$$\int_a^b K(\tau)w_1(\tau)\,d\tau = \int_a^b K(\tau)K^*(\tau)C^{-1}\beta\,d\tau = \beta, \quad \forall \beta \in R^n,$$

$$\int_a^b K(\tau)w_2(\tau)\,d\tau = \int_a^b K(\tau)p(\tau)\,d\tau - \int_a^b K(\tau)K^*(\tau)\,d\tau\,C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta = 0.$$

2. The functions $w_1(\tau) \in L_2(I_1, R^m)$, $w_2(\tau) \in L_2(I_1, R^m)$ are orthogonal in $L_2$, i.e. $w_1 \perp w_2$. Indeed,

$$\langle w_1, w_2\rangle_{L_2} = \int_a^b w_1^*(\tau)w_2(\tau)\,d\tau = \int_a^b \beta^* C^{-1}K(\tau)p(\tau)\,d\tau - \int_a^b \beta^* C^{-1}K(\tau)K^*(\tau)\,d\tau\, C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta = \beta^* C^{-1}\int_a^b K(\tau)p(\tau)\,d\tau - \beta^* C^{-1}\int_a^b K(\eta)p(\eta)\,d\eta = 0.$$

3. The function $w_1(\tau) = K^*(\tau)C^{-1}\beta$, $\tau \in I_1$, is the solution of the integral equation (1.2) with minimal norm in $L_2(I_1, R^m)$. Indeed, $\|w\|^2 = \|w_1\|^2 + \|w_2\|^2$, hence $\|w\|^2 \ge \|w_1\|^2$. If $p(\tau) \equiv 0$, $\tau \in I_1$, then $w_2(\tau) \equiv 0$, so that $\|w\| = \|w_1\|$ and $w(\tau) = w_1(\tau)$, $\tau \in I_1$.

4. The solution set of the integral equation (1.2) is convex. As follows from the proof of Theorem 2, the set of all solutions of the equation (1.2) is $Q$. Show that $Q$ is a convex set. Let

$$\bar w(\tau) = K^*(\tau)C^{-1}\beta + \bar p(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)\bar p(\eta)\,d\eta,$$

$$\hat w(\tau) = K^*(\tau)C^{-1}\beta + \hat p(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)\hat p(\eta)\,d\eta$$

be arbitrary elements of the set $Q$. Then for any $\alpha \in [0,1]$ the function

$$w_\alpha(\tau) = \alpha\bar w(\tau) + (1-\alpha)\hat w(\tau) = K^*(\tau)C^{-1}\beta + p_\alpha(\tau) - K^*(\tau)C^{-1}\int_a^b K(\eta)p_\alpha(\eta)\,d\eta \in Q,$$

where $p_\alpha(\tau) = \alpha\bar p(\tau) + (1-\alpha)\hat p(\tau) \in L_2(I_1, R^m)$.
Example 1. Consider the integral equation $K_1 w \equiv \int_0^1 w(\tau)\,d\tau = \beta$, where $K(\tau) \equiv 1$, $w(\cdot) \in L_2(I_1, R^1)$, $I_1 = [0,1]$. For this example $C = \int_0^1 d\tau = 1 > 0$. Consequently this integral equation has a solution for any $\beta \in R^1$. By the formula (1.4) the general solution is

$$w(\tau) = \beta + p(\tau) - \int_0^1 p(\eta)\,d\eta, \quad \tau \in I_1,$$

where $p(\cdot) \in L_2(I_1, R^1)$ is an arbitrary function. The particular solution is $w_1(\tau) \equiv \beta$; the solution of the homogeneous integral equation $\int_0^1 w_2(\tau)\,d\tau = 0$ is $w_2(\tau) = p(\tau) - \int_0^1 p(\eta)\,d\eta$, $\tau \in I_1 = [0,1]$, for any $p(\cdot) \in L_2(I_1, R^1)$.

The results described above hold true for integral equations with respect to an unknown function of several variables. In particular, for the integral equation
$$Kw \equiv \int_a^b\int_c^d K(t,\tau)w(t,\tau)\,d\tau\,dt = \beta, \quad \beta \in R^n, \quad (1.7)$$

we have the following theorems. Here $K(t,\tau) = \|K_{ij}(t,\tau)\|$, $i = \overline{1,n}$, $j = \overline{1,m}$, is a given $n \times m$ matrix, $K_{ij}(t,\tau) \in L_2(G, R^1)$, $w(t,\tau) \in L_2(G, R^m)$ is an unknown function, $G = \{(t,\tau)\ /\ a \le t \le b,\ c \le \tau \le d\}$,

$$\int_a^b\int_c^d |K_{ij}(t,\tau)|^2\,d\tau\,dt < \infty, \quad K: L_2(G, R^m) \to R^n.$$

Theorem 3. A necessary and sufficient condition for existence of a solution to the integral equation (1.7) for any $\beta \in R^n$ is that the $n \times n$ matrix

$$T(a,b,c,d) = \int_a^b\int_c^d K(t,\tau)K^*(t,\tau)\,d\tau\,dt \quad (1.8)$$

be positive definite.

Theorem 4. Let the matrix $T(a,b,c,d)$ defined by (1.8) be positive definite. Then for any $\beta \in R^n$ the function

$$w(t,\tau) = v(t,\tau) + K^*(t,\tau)T^{-1}(a,b,c,d)\beta - K^*(t,\tau)T^{-1}(a,b,c,d)\int_a^b\int_c^d K(\eta,\xi)v(\eta,\xi)\,d\eta\,d\xi \quad (1.9)$$

is the general solution of the integral equation (1.7), where $v(t,\tau) \in L_2(G, R^m)$ is an arbitrary function and $\beta \in R^n$ is an arbitrary vector.

Proofs of Theorems 3 and 4 can be found in § 1.7.

The main properties of the solution. The general solution (1.9) of the equation (1.7) has the following properties:

1. $w(t,\tau) = w_1(t,\tau) + w_2(t,\tau)$, $(t,\tau) \in G$, where $w_1(t,\tau) = K^*(t,\tau)T^{-1}(a,b,c,d)\beta \in L_2(G, R^m)$ is a particular solution of the integral equation (1.7), and the function $w_2(t,\tau)$ is a solution of the homogeneous integral equation

$$\int_a^b\int_c^d K(t,\tau)w_2(t,\tau)\,d\tau\,dt = 0.$$

2. The functions $w_1(t,\tau) \in L_2(G, R^m)$ and $w_2(t,\tau) \in L_2(G, R^m)$ are orthogonal: $w_1 \perp w_2$.
3. The function $w_1(t,\tau) \in L_2(G, R^m)$ is the solution with minimal norm for the integral equation (1.7).

4. The solution set for the integral equation (1.7) is convex.

Lecture 2. Solvability of an integral equation with fixed right hand side

The question naturally arises: if the matrix $C$ is not positive definite, does the integral equation (1.2) have a solution? The answer is unambiguous: in this case the integral equation (1.2) can have a solution, but not for every vector $\beta \in R^n$. The condition $C > 0$ is a rigid requirement on the kernel of the integral equation. The analogue of this condition is the existence of the inverse matrix $A^{-1}$ for the linear algebraic equation $Ax = b$, which provides existence of a solution for any $b \in R^n$. The algebraic equation $Ax = b$ can have a solution also when the inverse matrix does not exist, but then not for every vector $b \in R^n$ ($\operatorname{rang} A = \operatorname{rang}(A, b)$, by the Kronecker-Capelli theorem).

Solutions to the following problems are presented below.

Problem 1. Provide a necessary and sufficient condition for existence of a solution to the integral equation (1.2) for a given $\beta \in R^n$.

Problem 2. Find the general solution of the integral equation (1.2) for a given $\beta \in R^n$.

Consider problems 1 and 2. Their solution requires the study of the extremal problem

$$J(w) = \Big|\beta - \int_a^b K(\tau)w(\tau)\,d\tau\Big|^2 \to \inf \quad (1.10)$$

subject to

$$w(\cdot) \in L_2(I_1, R^m). \quad (1.11)$$
Here $\beta \in R^n$ is a given vector.

Theorem 5. Let the kernel of the operator, $K(\tau)$, be measurable and belong to the class $L_2$. Then:

1) the functional (1.10) under the condition (1.11) is continuously Fréchet differentiable; the gradient of the functional, $J'(w) \in L_2(I_1, R^m)$, at any point $w(\cdot) \in L_2(I_1, R^m)$ is defined by

$$J'(w) = -2K^*(\tau)\beta + 2\int_a^b K^*(\tau)K(\sigma)w(\sigma)\,d\sigma, \quad \tau \in I_1; \quad (1.12)$$

2) the gradient of the functional satisfies the Lipschitz condition

$$\|J'(w+h) - J'(w)\| \le l\|h\|, \quad \forall w,\ w+h \in L_2(I_1, R^m); \quad (1.13)$$

3) the functional (1.10) under the condition (1.11) is convex, i.e.

$$J(\alpha w + (1-\alpha)u) \le \alpha J(w) + (1-\alpha)J(u), \quad \forall w, u \in L_2(I_1, R^m), \quad \forall \alpha \in [0,1]; \quad (1.14)$$

4) the second Fréchet derivative is defined by

$$J''(w) = 2K^*(\sigma)K(\tau), \quad \sigma, \tau \in I_1; \quad (1.15)$$

5) if the inequality

$$\int_a^b\int_a^b \xi^*(\sigma)K^*(\sigma)K(\tau)\xi(\tau)\,d\tau\,d\sigma = \Big|\int_a^b K(\tau)\xi(\tau)\,d\tau\Big|^2 \ge \mu\int_a^b |\xi(\tau)|^2\,d\tau, \quad \forall \xi(\cdot) \in L_2(I_1, R^m), \quad (1.16)$$

holds, then the functional (1.10) under the condition (1.11) is strongly convex.

Proof. As follows from (1.10), the functional

$$J(w) = \beta^*\beta - 2\beta^*\int_a^b K(\sigma)w(\sigma)\,d\sigma + \int_a^b\int_a^b w^*(\tau)K^*(\tau)K(\sigma)w(\sigma)\,d\sigma\,d\tau.$$
Then the increment of the functional $(w,\ w+h \in L_2(I_1, R^m))$ is

$$\Delta J = J(w+h) - J(w) = \int_a^b \Big\langle -2K^*(\sigma)\beta + 2\int_a^b K^*(\sigma)K(\tau)w(\tau)\,d\tau,\ h(\sigma)\Big\rangle\,d\sigma + \int_a^b\int_a^b h^*(\tau)K^*(\tau)K(\sigma)h(\sigma)\,d\sigma\,d\tau = \langle J'(w), h\rangle_{L_2} + o(h), \quad (1.17)$$

where

$$|o(h)| = \Big|\int_a^b\int_a^b h^*(\tau)K^*(\tau)K(\sigma)h(\sigma)\,d\sigma\,d\tau\Big| \le c_1\|h\|^2_{L_2}.$$

It follows from (1.17) that $J'(w)$ is defined by (1.12). As

$$J'(w+h) - J'(w) = 2K^*(\tau)\int_a^b K(\sigma)h(\sigma)\,d\sigma,$$

we have

$$|J'(w+h) - J'(w)| \le 2\|K^*(\tau)\|\int_a^b \|K(\sigma)\|\,|h(\sigma)|\,d\sigma \le c_2(\tau)\|h\|_{L_2}, \quad \tau \in I_1.$$

Hence

$$\|J'(w+h) - J'(w)\| = \Big(\int_a^b |J'(w+h) - J'(w)|^2\,d\tau\Big)^{1/2} \le l\|h\|$$

for any $w,\ w+h \in L_2(I_1, R^m)$. This implies the inequality (1.13).

Show that the functional (1.10) is convex. Since the functional $J(w) \in C^{1,1}(L_2(I_1, R^m))$, for the functional (1.10) to be convex it is necessary and sufficient that

$$\langle J'(w_1) - J'(w_2), w_1 - w_2\rangle_{L_2} = \Big\langle 2\int_a^b K^*(\tau)K(\sigma)[w_1(\sigma) - w_2(\sigma)]\,d\sigma,\ w_1(\tau) - w_2(\tau)\Big\rangle_{L_2} = 2\int_a^b\int_a^b [w_1(\tau) - w_2(\tau)]^* K^*(\tau)K(\sigma)[w_1(\sigma) - w_2(\sigma)]\,d\sigma\,d\tau \ge 0.$$

This means that the functional (1.10) is convex, i.e. the inequality (1.14) holds. As follows from (1.12), the increment of the gradient is

$$J'(w+h) - J'(w) = \langle J''(w), h\rangle_{L_2} = \langle 2K^*(\sigma)K(\tau), h(\tau)\rangle_{L_2} = 2\int_a^b K^*(\tau)K(\sigma)h(\sigma)\,d\sigma.$$

Consequently, $J''(w)$ is defined by (1.15). It follows from (1.15), (1.16) that

$$\langle J''(w)\xi, \xi\rangle_{L_2} \ge \mu\|\xi\|^2, \quad \forall w \in L_2(I_1, R^m), \quad \forall \xi \in L_2(I_1, R^m).$$

This means that the functional $J(w)$ is strongly convex in $L_2(I_1, R^m)$. The theorem is proved.

Theorem 6. Let the sequence $\{w_n(\tau)\} \subset L_2(I_1, R^m)$ be constructed for the extremal problem (1.10), (1.11) by the rule

$$w_{n+1}(\tau) = w_n(\tau) - \alpha_n J'(w_n), \quad g_n(\alpha_n) = \min_{\alpha > 0} g_n(\alpha), \quad g_n(\alpha) = J(w_n - \alpha J'(w_n)), \quad n = 0, 1, 2, \dots$$
Then the numerical sequence $\{J(w_n)\}$ decreases monotonically and $\lim_{n\to\infty} J'(w_n) = 0$.

If, in addition, the set $M(w_0) = \{w(\tau) \in L_2(I_1, R^m)\ /\ J(w) \le J(w_0)\}$ is bounded, then:

1) the sequence $\{w_n(\tau)\}$ is minimizing, i.e.

$$\lim_{n\to\infty} J(w_n) = J_* = \inf J(w),\ w(\cdot) \in L_2(I_1, R^m), \qquad w_n \rightharpoonup w_* \ \text{(weakly) as}\ n \to \infty, \quad (1.18)$$

where

$$w_* = w_*(\tau) \in W_* = \Big\{w_*(\tau) \in L_2(I_1, R^m)\ /\ J(w_*) = \min_{w \in M(w_0)} J(w) = J_* = \inf_{w \in L_2(I_1, R^m)} J(w)\Big\};$$

2) the following convergence rate estimate holds:

$$0 \le J(w_n) - J(w_*) \le \frac{m_0}{n}, \quad m_0 = \mathrm{const} > 0, \quad n = 1, 2, \dots; \quad (1.19)$$

3) the integral equation (1.2) has a solution if and only if $J(w_*) = 0$, $w_* \in W_*$; in this case $w_* \in W_*$ is a solution of the integral equation (1.2);

4) if $J(w_*) > 0$, then the integral equation (1.2) has no solution;

5) if the inequality (1.16) holds, then $\|w_n - w_*\| \to 0$ as $n \to \infty$.

Proof. Minimization methods in Hilbert space [19] can be applied to prove the theorem. The conditions $g_n(\alpha_n) \le g_n(\alpha)$, $J(w) \in C^{1,1}(L_2(I_1, R^m))$ imply that

$$J(w_n) - J(w_n - \alpha J'(w_n)) \ge \alpha\Big(1 - \frac{\alpha l}{2}\Big)\|J'(w_n)\|^2, \quad n = 0, 1, 2, \dots,$$

where $l = \mathrm{const} > 0$ is the Lipschitz constant from (1.13). Then

$$J(w_n) - J(w_{n+1}) \ge \frac{1}{2l}\|J'(w_n)\|^2 > 0.$$

This yields that $\lim_{n\to\infty} J'(w_n) = 0$ and the numerical sequence $\{J(w_n)\}$ decreases monotonically. The first statement of the theorem is proved.

As the functional $J(w)$ is convex, the set $M(w_0)$ is convex. Then

$$0 \le J(w_n) - J(w_*) \le \langle J'(w_n), w_n - w_*\rangle \le \|J'(w_n)\|\,\|w_n - w_*\| \le D\|J'(w_n)\|,$$

where $D$ is the diameter of the set $M(w_0)$. Since $M(w_0)$ is weakly bicompact and the functional $J(w)$ is weakly lower semicontinuous, it follows that the set $W_* \ne \emptyset$, $W_* \subset M(w_0)$, $\{w_n\} \subset M(w_0)$, $w_* \in M(w_0)$. Note that (see (1.18))

$$0 \le \lim_{n\to\infty}[J(w_n) - J(w_*)] \le D\lim_{n\to\infty}\|J'(w_n)\| = 0, \quad \lim_{n\to\infty} J(w_n) = J(w_*) = J_*.$$

Consequently the sequence $\{w_n\} \subset M(w_0)$ is minimizing. The estimate (1.19), where $m_0 = 2D^2 l$, follows from the inequalities

$$J(w_n) - J(w_{n+1}) \ge \frac{1}{2l}\|J'(w_n)\|^2, \quad 0 \le J(w_n) - J(w_*) \le D\|J'(w_n)\|,$$

and $w_n \rightharpoonup w_*$ as $n \to \infty$. The second statement of the theorem is proved.

It follows from (1.10) that $J(w) \ge 0$ for all $w \in L_2(I_1, R^m)$. The sequence $\{w_n\} \subset L_2(I_1, R^m)$ is minimizing for any initial guess $w_0 = w_0(\tau) \in L_2(I_1, R^m)$, i.e. $J(w_*) = \min J(w) = J_* = \inf J(w)$. If $J(w_*) = 0$, then

$$\beta = \int_a^b K(\tau)w_*(\tau)\,d\tau.$$

Therefore the integral equation (1.2) has a solution if and only if $J(w_*) = 0$, where $w_* = w_*(\tau) \in L_2(I_1, R^m)$ is a solution of the integral equation (1.2). If $J(w_*) > 0$, then $w_* = w_*(\tau)$, $\tau \in I_1$, is not a solution of the integral equation (1.2); in other words, whenever $J(w_*) > 0$, the integral equation (1.2) has no solution for the given $\beta \in R^n$. Thus the statements 3 and 4 are proved.

If the inequality (1.16) holds, then the functional (1.10) under the condition (1.11) is strongly convex. Whence

$$J(w_n) - J(w_*) \le \langle J'(w_n), w_n - w_*\rangle - \frac{\mu}{2}\|w_n - w_*\|^2 \le \frac{1}{2\mu}\|J'(w_n)\|^2, \quad J(w_n) - J(w_{n+1}) \ge \frac{1}{2l}\|J'(w_n)\|^2, \quad n = 0, 1, 2, \dots$$

Hence $a_n - a_{n+1} \ge \frac{\mu}{l}a_n$, where $a_n = J(w_n) - J(w_*)$. Consequently

$$0 \le a_{n+1} \le a_n\Big(1 - \frac{\mu}{l}\Big) = qa_n, \quad q = 1 - \frac{\mu}{l}, \quad q < 1.$$

Then $a_n \le qa_{n-1} \le q^2 a_{n-2} \le \dots \le q^n a_0$. This implies

$$0 \le J(w_n) - J(w_*) \le [J(w_0) - J(w_*)]q^n, \quad q = 1 - \frac{\mu}{l}, \quad 0 \le q < 1, \quad \mu > 0.$$

It can be shown that the estimate

$$\|w_n - w_*\|^2 \le \frac{2}{\mu}[J(w_0) - J(w_*)]q^n, \quad n = 0, 1, 2, \dots,$$

holds for any strongly convex functional. Then $\|w_n - w_*\| \to 0$ as $n \to \infty$. The theorem is proved.

Consider more general problems.

Problem 3. Provide a necessary and sufficient condition for existence of a solution to the integral equation (1.2) with a given $\beta \in R^n$ and an unknown function $w(\tau) \in W(\tau) \subset L_2(I_1, R^m)$.

In particular, the set $W(\tau)$ is defined either by

$$W(\tau) = \{w(\cdot) \in L_2(I_1, R^m)\ /\ \alpha_i(\tau) \le w_i(\tau) \le \beta_i(\tau),\ i = \overline{1,m},\ \text{a.e.}\ \tau \in I_1\}$$

or by

$$W(\tau) = \{w(\cdot) \in L_2(I_1, R^m)\ /\ \|w\|^2_{L_2} \le R^2\},$$

where $\alpha(\tau) = (\alpha_1(\tau), \dots, \alpha_m(\tau))$, $\beta(\tau) = (\beta_1(\tau), \dots, \beta_m(\tau))$, $\tau \in I_1$, are given continuous functions and $R > 0$ is a given number.

Problem 4. Find a solution to the integral equation (1.2) with a given $\beta \in R^n$ and $w(\tau) \in W(\tau) \subset L_2(I_1, R^m)$.

Solving problems 3 and 4 is reduced to the study of the extremal problem

$$J_1(w, u) = \Big|\beta - \int_a^b K(\tau)w(\tau)\,d\tau\Big|^2 + \|w - u\|^2_{L_2} \to \inf \quad (1.20)$$

subject to

$$w(\cdot) \in L_2(I_1, R^m), \quad u(\tau) \in W(\tau), \quad \tau \in I_1. \quad (1.21)$$
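Before passing to the constrained problem, the minimizing-sequence construction of Theorem 6 (steepest descent with exact one-dimensional minimization of $g_n(\alpha)$) can be sketched in discretized form. The kernel $K(\tau) = (1, \tau)^T$ and the vector $\beta$ below are assumed test data; since the functional (1.10) is quadratic, $g_n(\alpha)$ is minimized in closed form, so the step $\alpha_n$ is computed exactly.

```python
import numpy as np

# Steepest descent with exact line search for J(w) = |beta - int K w|^2,
# discretized on I1 = [0, 1]; the kernel K(tau) = (1, tau)^T and the vector
# beta are assumed test data (n = 2, m = 1).
tau = np.linspace(0.0, 1.0, 1001)
wts = np.full_like(tau, tau[1] - tau[0])
wts[[0, -1]] *= 0.5                       # trapezoid quadrature weights
K = np.stack([np.ones_like(tau), tau])    # K[:, i] = K(tau_i)
beta = np.array([1.0, -2.0])

w = np.zeros_like(tau)                    # initial guess w_0 = 0
for n in range(300):
    r = beta - K @ (wts * w)              # beta - int_a^b K(tau) w_n(tau) dtau
    g = -2.0 * K.T @ r                    # gradient (1.12) at w_n
    s = K @ (wts * g)                     # change of the integral along g
    if s @ s < 1e-30:
        break                             # gradient vanished: minimum reached
    alpha = -(r @ s) / (s @ s)            # exact minimizer of g_n(alpha)
    w = w - alpha * g                     # w_{n+1} = w_n - alpha_n J'(w_n)

r = beta - K @ (wts * w)
J_star = r @ r
# J(w_*) is (numerically) zero, so by Theorem 6 the discretized equation is
# solvable and the final w approximates a solution of (1.2)
assert J_star < 1e-12
```

Each iterate stays in the span of the rows of $K$, so the method in fact converges to the minimal-norm solution $w_1(\tau) = K^*(\tau)C^{-1}\beta$ of Lecture 1.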
Theorem 7. Let a kernel of the operator K (W ) be mesurable and belong to Then: L2 . 1) the functional (1.20) under the condition (1.21) is continuously Frechet differentiable, the gradient J1c( w, u ) ( J1cw ( w, u ), J1cu ( w, u )) L2 ( I1 , R m ) u L2 ( I1 , R m )
for any point ( w, u) L2 ( I1, Rm ) u W (W ) is defined by b
J1cw (w, u) 2K * (W )E 2³ K * (W ) K (V )w(V )dV 2(w u ) L2 ( I1 , R m ),
(1.22)
a
J1cu ( w, u ) 2( w u ) L2 ( I1 , R m );
(1.23) 2) the gradient of the functional J1c( w, u) satisfies the Lipchitz condition || J1c( w h, u h1 ) J1c( w, u ) || d l1 (|| h || || h1 ||),
(1.24)
( w, u ), ( w h, u h1 ) L2 ( I1 , R m ) u L2 ( I1 , R m );
3) the functional (1.20) under the condition (1.21) is convex. A proof the theorem is similar to theorem 5’s proof. Theorem 8. Let for the extremal problem (1.20), (1.21) the sequences be constructed by wn1 (W ) wn (W ) D n J1cw ( wn , un ), n 0,1, 2, ..., un1 (W ) PW [un (W ) D n J1cu ( wn , un )], n 0,1, 2, ...
where PW [] is a projection of a point onto the set W , H0 d Dn d
l1
2 , H 0 ! 0, H 1 ! 0, n l1 2H 1
0,1, 2,...,
is the Lipchitz constant from (1.24), in the case H1
l1 , H0 2
Dn
1 , l1
J1cw (wn , un ), J1cu (wn , un ) are defined by (1.22), (1.23) respectively. Then the numerical
sequence {J1 (wn , un )} is monotone decreasing, the limits lim || wn wn1 ||o 0, lim || un un1 || 0. nof
nof
If in addition the set M (w0 , u0 ) {( w, u) L2 u W / J1 (w, u) d J1 (w0 , u0 )} bounded, then: 1) the sequence {wn , un } M (w0 , u0 ) is minimizing, i.e. lim J ( wn , un ) nof
is
inf J1 ( w, u ), (w, u) L2 uW ;
J1*
2) the sequence {wn , un } M (w0 , u0 ) weakly converges to the set X * {( w* , u* ) L2 uW / J1 (w* , u* ) J1* inf J1 (w, u) min J1 (w, u), (w, u) L2 uW}.
3) a necessary and sufficient condition for integral equation (1.2) under the condition wW to have a solution is that J1 (w* , u* ) J1* 0. A proof of theorem is similar to the proof of theorem 6.
17
Lecture 3. Solvability of the first kind Fredholm integral equation

Consider the integral equation
$$Ku=\int_a^b K(t,\tau)u(\tau)\,d\tau=f(t),\quad t\in I=[t_0,t_1], \eqno(1.25)$$
where $K(t,\tau)=\|K_{ij}(t,\tau)\|$, $i=\overline{1,n}$, $j=\overline{1,m}$, is a given $n\times m$ matrix whose elements, the functions $K_{ij}(t,\tau)$, are measurable and belong to $L_2$ on the set $S_1=\{(t,\tau)\in R^2\ /\ t_0\le t\le t_1,\ a\le\tau\le b\}$:
$$\int_a^b\int_{t_0}^{t_1}|K_{ij}(t,\tau)|^2\,dt\,d\tau<\infty;$$
the function $f(t)\in L_2(I,R^n)$ is given, $u(\tau)\in L_2(I_1,R^m)$, $I_1=[a,b]$, is an unknown function, the values $t_0$, $t_1$, $a$, $b$ are fixed, $t_1>t_0$, $b>a$, and $K: L_2(I_1,R^m)\to L_2(I,R^n)$.

The following problems are posed:

Problem 1. Provide a necessary and sufficient condition for the existence of a solution to the integral equation (1.25) with a given $f(t)\in L_2(I,R^n)$.

Problem 2. Find a solution to the integral equation (1.25) with a given $f(t)\in L_2(I,R^n)$.
Consider Problems 1 and 2 for the integral equation (1.25). To solve them, study the extremal problem
$$J(u)=\int_{t_0}^{t_1}\Bigl|f(t)-\int_a^b K(t,\tau)u(\tau)\,d\tau\Bigr|^2dt\to\inf \eqno(1.26)$$
under the condition
$$u(\tau)\in L_2(I_1,R^m), \eqno(1.27)$$
where $f(t)\in L_2(I,R^n)$ is a given function and the symbol $|\cdot|$ denotes the Euclidean norm.

Theorem 9. Let the kernel $K(t,\tau)$ of the operator be measurable and belong to the class $L_2$ in the rectangle $S_1=\{(t,\tau)\ /\ t\in I=[t_0,t_1],\ \tau\in I_1=[a,b]\}$. Then:

1) the functional (1.26) under the condition (1.27) is continuously Fréchet differentiable, and the gradient $J'(u)\in L_2(I_1,R^m)$ at any point $u(\cdot)\in L_2(I_1,R^m)$ is defined by the formula
$$J'(u)=-2\int_{t_0}^{t_1}K^*(t,\tau)f(t)\,dt+2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,dt\in L_2(I_1,R^m); \eqno(1.28)$$

2) the gradient satisfies the Lipschitz condition
$$\|J'(u+h)-J'(u)\|\le l\|h\|,\quad \forall u,\ u+h\in L_2(I_1,R^m); \eqno(1.29)$$

3) the functional (1.26) under the condition (1.27) is convex, i.e.
$$J(\alpha u+(1-\alpha)v)\le\alpha J(u)+(1-\alpha)J(v),\quad \forall u,v\in L_2(I_1,R^m),\ \forall\alpha\in[0,1]; \eqno(1.30)$$

4) the second Fréchet derivative is defined by
$$J''(u)=2\int_{t_0}^{t_1}K^*(t,\sigma)K(t,\tau)\,dt; \eqno(1.31)$$

5) if the inequality
$$\int_a^b\int_a^b\xi^*(\sigma)\Bigl[\int_{t_0}^{t_1}K^*(t,\sigma)K(t,\tau)\,dt\Bigr]\xi(\tau)\,d\tau\,d\sigma=\int_{t_0}^{t_1}\Bigl|\int_a^b K(t,\tau)\xi(\tau)\,d\tau\Bigr|^2dt\ge\mu\int_a^b|\xi(\tau)|^2d\tau,\quad \mu>0,\ \forall\xi\in L_2(I_1,R^m), \eqno(1.32)$$
holds, then the functional (1.26) under the condition (1.27) is strongly convex.

Proof. As follows from (1.26), the functional
$$J(u)=\int_{t_0}^{t_1}\Bigl[f^*(t)f(t)-2f^*(t)\int_a^b K(t,\tau)u(\tau)\,d\tau+\int_a^b\int_a^b u^*(\tau)K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,d\tau\Bigr]dt.$$
Then the increment of the functional is
$$\Delta J=J(u+h)-J(u)=\int_a^b\Bigl\langle -2\int_{t_0}^{t_1}K^*(t,\sigma)f(t)\,dt,\ h(\sigma)\Bigr\rangle d\sigma+\int_a^b\Bigl\langle 2\int_{t_0}^{t_1}\int_a^b K^*(t,\sigma)K(t,\tau)u(\tau)\,d\tau\,dt,\ h(\sigma)\Bigr\rangle d\sigma$$
$$+\int_{t_0}^{t_1}\int_a^b\int_a^b h^*(\tau)K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,d\tau\,dt=\langle J'(u),h\rangle_{L_2}+o(h), \eqno(1.33)$$
where
$$|o(h)|=\int_{t_0}^{t_1}\Bigl[\int_a^b\int_a^b h^*(\tau)K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,d\tau\Bigr]dt\le c_1\|h\|^2_{L_2}.$$
From (1.33) it follows that $J'(u)$ is defined by the formula (1.28). Since
$$J'(u+h)-J'(u)=2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,dt,$$
we have
$$|J'(u+h)-J'(u)|\le 2\int_{t_0}^{t_1}\int_a^b\|K^*(t,\tau)\|\,\|K(t,\sigma)\|\,|h(\sigma)|\,d\sigma\,dt\le c_2(\tau)\|h\|_{L_2},\quad c_2(\tau)>0,\ \tau\in I_1.$$
Then
$$\|J'(u+h)-J'(u)\|_{L_2}=\Bigl(\int_a^b|J'(u+h)-J'(u)|^2d\tau\Bigr)^{1/2}\le l\|h\|_{L_2}$$
for any $u,\ u+h\in L_2(I_1,R^m)$. This implies the inequality (1.29).

Let us show that the functional (1.26) under the condition (1.27) is convex. In fact, for every $u,w\in L_2(I_1,R^m)$ the following relation holds:
$$\langle J'(u)-J'(w),\,u-w\rangle_{L_2}=\Bigl\langle 2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)[u(\sigma)-w(\sigma)]\,d\sigma\,dt,\ u-w\Bigr\rangle_{L_2}=2\int_{t_0}^{t_1}\Bigl|\int_a^b K(t,\sigma)[u(\sigma)-w(\sigma)]\,d\sigma\Bigr|^2dt\ge 0.$$
This means that the functional (1.26) is convex, i.e. the inequality (1.30) holds.

As follows from (1.28),
$$J'(u+h)-J'(u)=J''(u)h=2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)h(\sigma)\,d\sigma\,dt,$$
consequently the second derivative $J''(u)$ is defined by the formula (1.31). It follows from (1.31), (1.32) that
$$\langle J''(u)\xi,\xi\rangle_{L_2}\ge\mu\|\xi\|^2,\quad \forall u\in L_2(I_1,R^m),\ \forall\xi\in L_2(I_1,R^m).$$
This means that the functional $J(u)$ is strongly convex in $L_2(I_1,R^m)$. The theorem is proved.

Theorem 10. Let the sequence $\{u_n(\tau)\}\subset L_2(I_1,R^m)$ be constructed for the extremal problem (1.26), (1.27) by the algorithm [5]
$$u_{n+1}(\tau)=u_n(\tau)-\alpha_n J'(u_n),\quad g_n(\alpha_n)=\min_{\alpha\ge 0}g_n(\alpha),\quad g_n(\alpha)=J(u_n-\alpha J'(u_n)),\quad n=0,1,2,\dots.$$
Then the numerical sequence $\{J(u_n)\}$ decreases monotonically and $\lim_{n\to\infty}\|J'(u_n)\|=0$.

If, in addition, the set $M(u_0)=\{u(\cdot)\in L_2(I_1,R^m)\ /\ J(u)\le J(u_0)\}$ is bounded, then:

1) the sequence $\{u_n(\tau)\}\subset M(u_0)$ is minimizing, i.e. $\lim_{n\to\infty}J(u_n)=J_*=\inf J(u)$, $u\in L_2(I_1,R^m)$;

2) the sequence $\{u_n\}$ converges weakly to the set
$$U_*=\Bigl\{u_*(\tau)\in L_2(I_1,R^m)\ /\ J(u_*)=\min_{u\in M(u_0)}J(u)=J_*=\inf_{u\in L_2(I_1,R^m)}J(u)\Bigr\},\qquad u_n\overset{\text{weak}}{\longrightarrow}u_*\ \text{as}\ n\to\infty;$$

3) the following estimate of the convergence rate holds:
$$0\le J(u_n)-J(u_*)\le\frac{m_0}{n},\quad m_0=\mathrm{const}>0,\ n=1,2,\dots; \eqno(1.34)$$

4) if the inequality (1.32) holds, then the sequence $\{u_n\}\subset L_2(I_1,R^m)$ converges strongly to the point $u_*\in U_*$ and the following estimates hold:
$$0\le J(u_n)-J(u_*)\le[J(u_0)-J_*]q^n,\qquad \|u_n-u_*\|^2\le\frac{2}{\mu}[J(u_0)-J(u_*)]q^n,$$
$$q=1-\frac{\mu}{l},\quad 0\le q<1,\ \mu>0,\quad n=0,1,2,\dots,\quad J_*=J(u_*); \eqno(1.35)$$
5) a necessary and sufficient condition for the first kind Fredholm integral equation (1.25) to have a solution is that $J(u_*)=0$, $u_*\in U_*$; in this case the function $u_*(\tau)\in L_2(I_1,R^m)$ is a solution of the integral equation (1.25);

6) if the value $J(u_*)>0$, then the integral equation (1.25) has no solution for the given $f(t)\in L_2(I,R^n)$.

Proof. Since $g_n(\alpha_n)\le g_n(\alpha)$, we get
$$J(u_n)-J(u_{n+1})\ge J(u_n)-J(u_n-\alpha J'(u_n)),\quad \alpha\ge 0,\ n=0,1,2,\dots.$$
On the other hand, from the inclusion $J(u)\in C^{1,1}(L_2(I_1,R^m))$ it follows that
$$J(u_n)-J(u_n-\alpha J'(u_n))\ge\alpha\Bigl(1-\frac{\alpha l}{2}\Bigr)\|J'(u_n)\|^2,\quad \alpha\ge 0,\ n=0,1,2,\dots.$$
Then
$$J(u_n)-J(u_{n+1})\ge\frac{1}{2l}\|J'(u_n)\|^2>0.$$
It follows that the numerical sequence $\{J(u_n)\}$ decreases monotonically and $\lim_{n\to\infty}\|J'(u_n)\|=0$. The first statement of the theorem is proved.

Since the functional $J(u)$ is convex on $L_2$, the set $M(u_0)$ is convex. Then
$$0\le J(u_n)-J(u_*)\le\langle J'(u_n),u_n-u_*\rangle_{L_2}\le\|J'(u_n)\|\,\|u_n-u_*\|\le D\|J'(u_n)\|,\quad u_n\in M(u_0),\ u_*\in M(u_0),$$
where $D$ is the diameter of the set $M(u_0)$. Since $M(u_0)$ is a bounded convex closed set in $L_2$, it is weakly bicompact. Any convex continuously differentiable functional $J(u)$ is weakly lower semicontinuous. Then the set $U_*\ne\emptyset$ is nonempty, $U_*\subset M(u_0)$, $\{u_n\}\subset M(u_0)$, $u_*\in M(u_0)$. Note that
$$0\le\lim_{n\to\infty}[J(u_n)-J(u_*)]\le D\lim_{n\to\infty}\|J'(u_n)\|=0,\qquad \lim_{n\to\infty}J(u_n)=J(u_*)=J_*.$$
Consequently, on the set $M(u_0)$ the lower bound of the functional $J(u)$ is attained at the point $u_*\in U_*$, and the sequence $\{u_n\}\subset M(u_0)$ is minimizing. Thus the second statement of the theorem is proved. The third statement of the theorem follows from the inclusion $\{u_n\}\subset M(u_0)$, from the weak bicompactness of the set $M(u_0)$, and from $J(u_*)=\min J(u)=J_*=\inf J(u)$, $u_*\in M(u_0)$; therefore $u_n\overset{\text{weak}}{\longrightarrow}u_*$ as $n\to\infty$.

From the inequalities
$$J(u_n)-J(u_{n+1})\ge\frac{1}{2l}\|J'(u_n)\|^2,\qquad 0\le J(u_n)-J(u_*)\le D\|J'(u_n)\|$$
the estimate (1.34) follows, where $m_0=2D^2l$. The fourth statement of the theorem is proved.

If the inequality (1.32) holds, then the functional (1.26) under the condition (1.27) is strongly convex. Then
$$J(u_n)-J(u_*)\le\langle J'(u_n),u_n-u_*\rangle-\frac{\mu}{2}\|u_n-u_*\|^2\le\frac{1}{2\mu}\|J'(u_n)\|^2,\quad n=0,1,2,\dots,$$
$$J(u_n)-J(u_{n+1})\ge\frac{1}{2l}\|J'(u_n)\|^2,\quad n=0,1,2,\dots.$$
Hence $a_n-a_{n+1}\ge\frac{\mu}{l}a_n$, where $a_n=J(u_n)-J(u_*)$. Therefore $0\le a_{n+1}\le a_n\bigl(1-\frac{\mu}{l}\bigr)=qa_n$. Then $a_n\le qa_{n-1}\le q^2a_{n-2}\le\dots\le q^na_0$, where $a_0=J(u_0)-J(u_*)$. The estimates (1.35) follow from this. The fifth statement of the theorem is proved.

As follows from (1.26), $J(u)\ge 0$ for all $u\in L_2(I_1,R^m)$. The sequence $\{u_n\}\subset L_2(I_1,R^m)$ is minimizing for any starting point $u_0=u_0(\tau)\in L_2(I_1,R^m)$, i.e.
$$J(u_*)=\min_{u\in L_2(I_1,R^m)}J(u)=J_*=\inf_{u\in L_2(I_1,R^m)}J(u).$$
If $J(u_*)=0$, then $f(t)=\int_a^b K(t,\tau)u_*(\tau)\,d\tau$. Thus the integral equation (1.25) has a solution if and only if $J(u_*)=0$, where $u_*=u_*(\tau)\in L_2(I_1,R^m)$ is a solution of the integral equation (1.25). If $J(u_*)>0$, then $f(t)\ne\int_a^b K(t,\tau)u_*(\tau)\,d\tau$, consequently $u_*=u_*(\tau)$, $\tau\in I_1$, is not a solution of the integral equation (1.25). The theorem is proved.

Consider the case when the unknown function $u(\tau)\in U(\tau)\subset L_2(I_1,R^m)$, where, in particular, either
$$U(\tau)=\{u(\cdot)\in L_2(I_1,R^m)\ /\ \alpha(\tau)\le u(\tau)\le\beta(\tau)\ \text{a.e.}\ \tau\in I_1\}$$
or
$$U(\tau)=\{u(\cdot)\in L_2(I_1,R^m)\ /\ \|u\|^2\le R^2\}.$$

Example 2. Consider the integral equation
$$Ku=\int_0^1 e^{(t+1)\tau}u(\tau)\,d\tau=f(t),\qquad f(t)=\frac{1}{t+2}\bigl(e^{t+2}-1\bigr),\quad t\in I=[0,1]. \eqno(1.36)$$
For this example the optimization problem (1.26), (1.27) has the following form:
$$J(u)=\int_0^1\Bigl[\frac{1}{t+2}\bigl(e^{t+2}-1\bigr)-\int_0^1 e^{(t+1)\tau}u(\tau)\,d\tau\Bigr]^2dt\to\inf,\quad u(\cdot)\in L_2(I_1,R^1),\ I_1=[0,1].$$
The gradient of the functional is
$$J'(u)=-2\int_0^1 e^{(t+1)\tau}\,\frac{e^{t+2}-1}{t+2}\,dt+2\int_0^1 e^{(t+1)\tau}\int_0^1 e^{(t+1)\sigma}u(\sigma)\,d\sigma\,dt,$$
and the Lipschitz constant satisfies
$$l\le 2\int_0^1\int_0^1 e^{2(t+1)\tau}\,d\tau\,dt.$$
The sequence is constructed by
$$u_{n+1}(\tau)=u_n(\tau)-\alpha_n J'(u_n),\quad g_n(\alpha_n)=\min_{\alpha\ge 0}J(u_n-\alpha J'(u_n)),\quad n=0,1,2,\dots.$$
The sequence $\{u_n\}$ converges to $u_*(\tau)=e^{\tau}$, $\tau\in[0,1]$. The value $J(u_*)=0$, so the solution of the integral equation (1.36) is $u_*(\tau)=e^{\tau}$, $\tau\in[0,1]$. It is easily checked that $J'(u_*)=0$.

Let us consider more general problems.

Problem 3. Find necessary and sufficient conditions for the existence of a solution to the integral equation (1.25) for a given $f(t)\in L_2(I,R^n)$ and an unknown function $u(\tau)\in U(\tau)\subset L_2(I_1,R^m)$, where, in particular, either
$$U(\tau)=\{u(\cdot)\in L_2(I_1,R^m)\ /\ \alpha(\tau)\le u(\tau)\le\beta(\tau)\ \text{a.e.}\ \tau\in I_1\}$$
or
$$U(\tau)=\{u(\cdot)\in L_2(I_1,R^m)\ /\ \|u\|^2\le R^2\}.$$

Problem 4. Find a solution to the integral equation (1.25) for a given $f(t)\in L_2(I,R^n)$, where $u(\tau)\in U(\tau)\subset L_2(I_1,R^m)$.

Solutions to Problems 3 and 4 can be constructed by solving the optimization problem
$$J_1(u,v)=\int_{t_0}^{t_1}\Bigl|f(t)-\int_a^b K(t,\tau)u(\tau)\,d\tau\Bigr|^2dt+\|u-v\|^2_{L_2}\to\inf \eqno(1.37)$$
under the conditions
$$u(\cdot)\in L_2(I_1,R^m),\quad v(\tau)\in U(\tau)\subset L_2(I_1,R^m),\ \tau\in I_1,\quad f(t)\in L_2(I,R^n). \eqno(1.38)$$
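The iteration of Example 2 can be reproduced numerically. In the sketch below the grid size, midpoint-rule discretization, and iteration count are my choices; the method itself is the steepest descent with exact line search of Theorem 10 applied to the discretized problem (1.26), and it recovers an approximation of $u_*(\tau)=e^{\tau}$.

```python
import numpy as np

# Discretized Example 2 (grids and iteration count are my choices):
# solve  int_0^1 e^{(t+1)tau} u(tau) dtau = (e^{t+2} - 1)/(t + 2),  t in [0, 1],
# by steepest descent with exact line search, as in Theorem 10.
m = 100
t = (np.arange(m) + 0.5) / m                 # midpoint grid in t
tau = (np.arange(m) + 0.5) / m               # midpoint grid in tau
h = 1.0 / m
A = np.exp(np.outer(t + 1.0, tau)) * h       # quadrature discretization of K(t, tau)
f = (np.exp(t + 2.0) - 1.0) / (t + 2.0)      # right-hand side of (1.36)

u = np.zeros(m)                              # starting point u_0 = 0
for _ in range(10000):
    r = f - A @ u                            # residual f - Ku
    g = -2.0 * A.T @ r                       # discrete analogue of the gradient (1.28)
    Ag = A @ g
    denom = Ag @ Ag
    if denom < 1e-300:                       # gradient numerically zero
        break
    alpha = -(r @ Ag) / denom                # exact line search: min_a ||r + a*A g||^2
    u = u - alpha * g

J = np.sum((f - A @ u) ** 2) * h             # discrete value of the functional (1.26)
# J is driven to (numerical) zero, so the discretized equation is solvable,
# and u approximates e^tau on the grid.
```

Because the problem is ill-conditioned, the iterate approaches $e^{\tau}$ only slowly in the directions associated with small singular values, while the residual $J(u_n)$ decays quickly; it is the latter quantity that the solvability test of statement 5) of Theorem 10 uses.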
Theorem 2.8. Let the kernel $K(t,\tau)$ of the operator be measurable and belong to $L_2$ in the rectangle $S_1=\{(t,\tau)\in R^2\ /\ t\in I,\ \tau\in I_1\}$. Then:

1) the functional (1.37) under the condition (1.38) is continuously Fréchet differentiable, and the gradient
$$J_1'(u,v)=(J_{1u}'(u,v),\,J_{1v}'(u,v))\in L_2(I_1,R^m)\times L_2(I_1,R^m)$$
at any point $(u,v)\in L_2(I_1,R^m)\times L_2(I_1,R^m)$ is defined by the formulas
$$J_{1u}'(u,v)=-2\int_{t_0}^{t_1}K^*(t,\tau)f(t)\,dt+2\int_{t_0}^{t_1}\int_a^b K^*(t,\tau)K(t,\sigma)u(\sigma)\,d\sigma\,dt+2(u-v)\in L_2(I_1,R^m), \eqno(1.39)$$
$$J_{1v}'(u,v)=-2(u-v)\in L_2(I_1,R^m);$$

2) the gradient of the functional $J_1'(u,v)$ satisfies the Lipschitz condition
$$\|J_1'(u+h,v+h_1)-J_1'(u,v)\|\le l_2(\|h\|+\|h_1\|),\quad \forall(u,v),\ (u+h,v+h_1)\in L_2(I_1,R^m)\times L_2(I_1,R^m);$$
3) the functional (1.37) under the condition (1.38) is convex.

The proof of the theorem is similar to the proof of Theorem 2.6.

Theorem 2.9. Let the sequences
$$u_{n+1}=u_n-\alpha_n J_{1u}'(u_n,v_n),\qquad v_{n+1}=P_U[v_n-\alpha_n J_{1v}'(u_n,v_n)],\quad n=0,1,2,\dots,$$
$$\varepsilon_0\le\alpha_n\le\frac{2}{l_2+2\varepsilon_1},\quad \varepsilon_0>0,\ \varepsilon_1>0,\ n=0,1,2,\dots,$$
be constructed (see (1.39)) for the optimization problem (1.37), (1.38). Then the numerical sequence $\{J_1(u_n,v_n)\}$ decreases monotonically and $\lim_{n\to\infty}\|u_n-u_{n+1}\|=0$, $\lim_{n\to\infty}\|v_n-v_{n+1}\|=0$. If, in addition, the set $M(u_0,v_0)=\{(u,v)\in L_2\times U\ /\ J_1(u,v)\le J_1(u_0,v_0)\}$ is bounded, then:

1) the sequence $\{u_n,v_n\}\subset M(u_0,v_0)$ is minimizing, i.e. $\lim_{n\to\infty}J_1(u_n,v_n)=J_1^*=\inf J_1(u,v)$, $(u,v)\in L_2\times U$;

2) $u_n\overset{\text{weak}}{\longrightarrow}u_*$, $v_n\overset{\text{weak}}{\longrightarrow}v_*$ as $n\to\infty$, where
$$(u_*,v_*)\in U_*=\{(u_*,v_*)\in L_2\times U\ /\ J_1(u_*,v_*)=\min J_1(u,v)=J_1^*=\inf J_1(u,v),\ (u,v)\in L_2\times U\};$$

3) a necessary and sufficient condition for the integral equation (1.25) to have a solution under the condition $u(\tau)\in U$ is $J_1(u_*,v_*)=0$.

The proof of the theorem is similar to the proof of Theorem 2.7.

Lecture 4. An approximate solution of the first kind Fredholm integral equation

Consider the integral equation of the form
$$Ku=\int_a^b K(t,\tau)u(\tau)\,d\tau=f(t),\quad t\in I=[t_0,t_1]. \eqno(1.40)$$

Problem 1. Find an approximate solution to the integral equation (1.40).
³ §¨© ³ K t1
b
t0
a
ij
b b t (k ) (t ,W )u j (W )dW ·¸M k (t )dt = ³ §¨ ³ 1K ij (t ,W )M k (t )dt ·¸u j (W )dW = Lij (W )u j (W )dW , a © t0 a ¹ ¹ i = 1, n, j = 1, m, k = 1,2,,
³
³
t1
t0
f i (t )Mk (t )dt = aik ,
i = 1, n, k = 1,2,,
where K (t,W ) =|| K ij (t,W ) ||, i = 1, n, j = 1, m, f (t ) = ( f1 (t ), , f n (t )), t I , W I1, L(ijk ) (W ) t1
³ K (t,W )M (t )dt .
denotes
ij
t0
k
Then [3]
³ §¨© ³ K (t,W )u(W )dW ·¸¹M t1
b
t0
a
k
(t )dt =
§ b§ t1K (t ,W )M (t )dt ·u (W )dW b§ t1K (t ,W )M (t )dt ·u (W )dW · ¸ m ¸ 1 ¨ ³a ¨ ³t 0 11 ¸ k k ³a ¨© ³t 0 1m ¹ ¹ ¨ © ¸ ¨ b t ¸ b t ¨ ³ §¨ ³ 1K n1 (t ,W )M k (t )dt ·¸u1 (W )dW ³ §¨ ³ 1K nm (t ,W )M k (t )dt ·¸u m (W )dW ¸ a © t0 ¹ ¹ ¨¨ a © t 0 ¸¸ © ¹ § bL( k ) (W )u (W )dW bL( k ) (W )u (W )dW · 1 ¨ ³a 11 ¸ ³a 1m m b ¨ ¸ L( k ) (W )u(W )dW , k = 1,2,, ¸ ¨ b (k ) b (k ) a ¨ ³a Ln1 (W )u1 (W )dW ³a Lnm (W )u m (W )dW ¸ ¨ ¸ © ¹ t1 § f (t )M (t )dt · § a 1( k ) · k ¨ ³t 0 1 ¸ ¨ ¸ ¨ ¸ ¨ ¸ t1 (k ) a = ³ f (t )M k (t )dt = ¨ t k = 1,2, ¸ = ¨ ( k ) ¸, 1 t0 ¨ ³t 0 f n (t )M k (t )dt ¸ ¨ a n ¸ ¸ ¨ ¸ ¨ ¹ © ¹ © Now, for each index k we get
³
b
³L a
(k )
(k )
(W )u(W )dW = a ,
(1.41)
k = 1,2,, (k )
where L( k ) (W ) is a matrix of order n u m , a Rn , § L(1k ) (W ) · ¸ ¨ ¨ ¸ (k ) L (W ) = ¨ ( k ) ¸, L (W ) ¸ ¨ n ¸ ¨ ¹ ©
k) L(jk ) (W ) = L(jk1) (W ), , L(jm (W ) ,
Let us denote § L(1) (W ) · ¨ (2) ¸ ¨ L (W ) ¸ , L(W ) = ¨ ¸ ¸ ¨ ¸ ¨ ¹ © 24
§ a (1) · ¨ (2) ¸ ¨a ¸ a=¨ ¸. ¨ ¸ ¨ ¸ © ¹
k = 1,2,
Then the relations (1.41) are rewritten in the form b
³ L(W )u(W )dW = a,
(1.42)
a
where L(W ) is a matrix of order Nn u m , N = f . (k ) It should be noted that if for some k = k* , L j * (W ) = 0 and corresponding (k )
a j * = 0 , then the relations b (k ) * j
³L a
(k )
(W )u(W )dW = a j *
should be excepted from the system (1.42) (k ) (k ) Note that if L j * (W ) = 0, but a j * z 0, then the integral equation (1.40) has no solution. Theorem 3.1. Let the matrix b
CN = ³ LN (W ) L*N (W )dW a
of order nN u nN be positive definite. Then the general solution of the integral equation (1.40) is determined by the formula b (1.43) uN (W ) = L*N (W )CN1 a N pN (W ) L*N (W )CN1 ³ LN (K ) pN (K )dK, W I1, a
where pN (W ) L2 ( I1, Rm ) is an arbitrary function. The proof for finite N can be found in [3]. Example 3. Consider the integral equation 1
$$\int_0^1 u(\tau)\,d\tau=t,\quad t\in[-1,1],\ u(\cdot)\in L_2(I_1,R^1),$$
where $a=0$, $b=1$, $t_0=-1$, $t_1=1$, $K(t,\tau)\equiv 1$, $f(t)=t$. As follows from (1.18), this integral equation has no solution.

The system $1,t,t^2,\dots$ is linearly independent in $L_2(I,R^1)$, where $I=[-1,1]$. The corresponding complete orthonormal system $\{\varphi_k(t)\}_{k=1}^{\infty}$ consists of the normalized Legendre polynomials
$$\varphi_1(t)=\frac{1}{\sqrt 2},\quad \varphi_2(t)=\sqrt{\frac{3}{2}}\,t,\quad \varphi_3(t)=\sqrt{\frac{5}{8}}\,(3t^2-1),\ \dots.$$
For the considered example ($n=1$, $m=1$) we have
$$L^{(1)}(\tau)=\int_{-1}^1 K(t,\tau)\varphi_1(t)\,dt=\int_{-1}^1\frac{1}{\sqrt 2}\,dt=\sqrt 2,\qquad a^{(1)}=\int_{-1}^1 t\,\varphi_1(t)\,dt=0;$$
$$L^{(2)}(\tau)=\int_{-1}^1\varphi_2(t)\,dt=\sqrt{\frac{3}{2}}\,\frac{t^2}{2}\Bigl|_{-1}^{1}=0,\qquad a^{(2)}=\int_{-1}^1 t\,\varphi_2(t)\,dt=\sqrt{\frac{2}{3}};$$
$$L^{(k)}(\tau)=0,\quad a^{(k)}=0,\quad k>2.$$
Then
$$L(\tau)=\begin{pmatrix}\sqrt 2\\ 0\\ \vdots\end{pmatrix},\qquad a=\begin{pmatrix}0\\ \sqrt{2/3}\\ \vdots\end{pmatrix}.$$
For this example $L^{(2)}(\tau)=0$ but $a^{(2)}=\sqrt{2/3}\ne 0$. Consequently, this integral equation has no solution.

As follows from (1.41), the truncated system for $k=1,2,\dots,N$ has the form
$$\int_a^b L_N(\tau)u_N(\tau)\,d\tau=a_N, \eqno(1.44)$$
where
$$L_N(\tau)=\begin{pmatrix}L^{(1)}(\tau)\\ \vdots\\ L^{(N)}(\tau)\end{pmatrix},\qquad a_N=\begin{pmatrix}a^{(1)}\\ \vdots\\ a^{(N)}\end{pmatrix}.$$
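The moment computation of Example 3 can be checked numerically. In the sketch below the quadrature grid is my choice; the quantities $L^{(k)}$ and $a^{(k)}$ of (1.41) are formed for the first two normalized Legendre polynomials, and the outcome $L^{(2)}\equiv 0$ with $a^{(2)}\ne 0$ exhibits the inconsistency.

```python
import numpy as np

# Moment system of Example 3 (quadrature grid is my choice): for the equation
# int_0^1 u(tau) dtau = t on t in [-1, 1] with kernel K(t, tau) == 1,
# L^(k)(tau) = int phi_k(t) dt is constant in tau and a^(k) = int t phi_k(t) dt.
M = 2000
t = -1.0 + 2.0 * (np.arange(M) + 0.5) / M     # midpoint grid on [-1, 1]
ht = 2.0 / M
phi1 = np.full(M, 1.0 / np.sqrt(2.0))         # normalized Legendre polynomials
phi2 = np.sqrt(1.5) * t
f = t                                          # right-hand side f(t) = t

L1 = np.sum(phi1) * ht          # L^(1) = sqrt(2)
L2 = np.sum(phi2) * ht          # L^(2) = 0
a1 = np.sum(f * phi1) * ht      # a^(1) = 0
a2 = np.sum(f * phi2) * ht      # a^(2) = sqrt(2/3) != 0
# L^(2) == 0 while a^(2) != 0: the moment system (1.41) is inconsistent,
# so the integral equation of Example 3 has no solution.
```

The same loop over further $\varphi_k$ assembles the truncated system (1.44) for kernels that actually depend on $\tau$.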
Theorems 1, 2 and Theorems 5, 6 can be applied for solving the integral equation (1.44).

Theorem 13. Let the $nN\times nN$ matrix
$$C_N=\int_a^b L_N(\tau)L^*_N(\tau)\,d\tau$$
be positive definite. Then a solution to the integral equation (1.44) is defined by
$$u_N(\tau)=L^*_N(\tau)C_N^{-1}a_N+p_N(\tau)-L^*_N(\tau)C_N^{-1}\int_a^b L_N(\eta)p_N(\eta)\,d\eta,\quad \tau\in I_1, \eqno(1.45)$$
where $p_N(\tau)\in L_2(I_1,R^m)$ is an arbitrary function. The proof is based on Theorems 1, 2.

Define the function $f_{1N}(t)$, $t\in I$, as
$$f_{1N}(t)=\int_a^b K(t,\tau)u_N(\tau)\,d\tau,\quad t\in I=[t_0,t_1],$$
where $u_N(\tau)$, $\tau\in I_1$, is defined by (1.45). Then the difference is
$$f(t)-f_{1N}(t)=f(t)-\int_a^b K(t,\tau)u_N(\tau)\,d\tau,\quad t\in I,$$
where $f(t)$, $t\in I$, is the given function. The value $\|f-f_{1N}\|_{L_2}$ can be used for determining $N$.

The integral equation (1.44) can have a solution also in the case when the matrix $C_N$ is not positive definite. A necessary and sufficient condition for the solvability of the integral equation (1.44) can be derived by applying Theorems 5, 6. Theorem 5 implies that in the extremal problem
$$J(u_N)=\Bigl|a_N-\int_a^b L_N(\tau)u_N(\tau)\,d\tau\Bigr|^2\to\inf,\quad u_N(\cdot)\in L_2(I_1,R^m), \eqno(1.46)$$
the Fréchet derivative is defined by (see (1.46))
$$J'(u_N)=-2L^*_N(\tau)a_N+2\int_a^b L^*_N(\tau)L_N(\sigma)u_N(\sigma)\,d\sigma. \eqno(1.47)$$
The gradient $J'(u_N)$ satisfies the Lipschitz condition
$$\|J'(u_N+h_N)-J'(u_N)\|\le l_N\|h_N\|,\quad \forall u_N,\ u_N+h_N\in L_2(I_1,R^m). \eqno(1.48)$$
Given $u_N^{(n)}(\tau)\in L_2(I_1,R^m)$, compute the next iterations as follows:
$$u_N^{(n+1)}(\tau)=u_N^{(n)}(\tau)-\alpha_n J'(u_N^{(n)}),\quad g_n(\alpha_n)=\min_{\alpha\ge 0}g_n(\alpha),\quad g_n(\alpha)=J\bigl(u_N^{(n)}-\alpha J'(u_N^{(n)})\bigr),\quad n=0,1,2,\dots, \eqno(1.49)$$
where $J'(u_N)$ is defined by (1.47) and the inequality (1.48) holds.

Theorem 14. Let the sequence $\{u_N^{(n)}\}\subset L_2(I_1,R^m)$ be defined by (1.49) and let the set $M(u_N^{(0)})=\{u_N(\tau)\in L_2(I_1,R^m)\ /\ J(u_N)\le J(u_N^{(0)})\}$ be bounded. Then:

1) $\lim_{n\to\infty}J(u_N^{(n)})=J_*=\inf J(u_N)$, $u_N\in L_2(I_1,R^m)$;

2) $u_N^{(n)}\overset{\text{weak}}{\longrightarrow}u_N^*$ as $n\to\infty$, where $J(u_N^*)=\min J(u_N)=J_*=\inf J(u_N)$, $u_N\in L_2(I_1,R^m)$;

3) $0\le J(u_N^{(n)})-J(u_N^*)\le\dfrac{m_N}{n}$, $m_N=\mathrm{const}>0$, $n=1,2,\dots$;

4) a necessary and sufficient condition for the integral equation (1.44) to have a solution is $J(u_N^*)=0$; in this case $u_N^*=u_N^*(\tau)$, $\tau\in I_1$, is a solution to the integral equation (1.44).

The proof is based on Theorems 5, 6.

Define the function $f_2(t)$, $t\in I$, as
$$f_2(t)=\int_a^b K(t,\tau)u_N^*(\tau)\,d\tau,\quad t\in I=[t_0,t_1].$$
Then the difference is
$$f(t)-f_2(t)=f(t)-\int_a^b K(t,\tau)u_N^*(\tau)\,d\tau,\quad t\in I.$$
The value $\|f-f_2\|$ is an estimate for determining $N$.

Example 4. Consider the integral equation
$$\int_0^1 t\,e^{t\tau}u(\tau)\,d\tau=e^t-1,\quad t\in I=[-1,1]. \eqno(1.50)$$
Find an approximate solution to the equation (1.50). A complete orthonormal system $\{\varphi_k(t)\}_{k=1}^{\infty}$ was presented in Example 3. Since $\varphi_1(t)=\frac{1}{\sqrt 2}$, we have
$$\int_{-1}^1 t\,e^{t\tau}\varphi_1(t)\,dt=\frac{1}{\sqrt 2}\Bigl[\frac{\tau-1}{\tau^2}\,e^{\tau}+\frac{\tau+1}{\tau^2}\,e^{-\tau}\Bigr],\quad \tau\in I_1=[0,1],$$
$$\int_{-1}^1(e^t-1)\varphi_1(t)\,dt=\frac{1}{\sqrt 2}\bigl(e-e^{-1}-2\bigr).$$
Therefore $L^{(1)}(\tau)=\frac{1}{\sqrt 2}\bigl[\frac{\tau-1}{\tau^2}e^{\tau}+\frac{\tau+1}{\tau^2}e^{-\tau}\bigr]$, and the integral equation for $\varphi_1$, after cancelling the common factor $\frac{1}{\sqrt 2}$, is
$$\int_0^1\Bigl(\frac{\tau-1}{\tau^2}\,e^{\tau}+\frac{\tau+1}{\tau^2}\,e^{-\tau}\Bigr)u_N(\tau)\,d\tau=e-e^{-1}-2.$$
For $\varphi_2(t)=\sqrt{\frac{3}{2}}\,t$ we get
$$\int_{-1}^1 t\,e^{t\tau}\varphi_2(t)\,dt=\sqrt{\frac{3}{2}}\Bigl[e^{\tau}\Bigl(\frac{1}{\tau}-\frac{2}{\tau^2}+\frac{2}{\tau^3}\Bigr)-e^{-\tau}\Bigl(\frac{1}{\tau}+\frac{2}{\tau^2}+\frac{2}{\tau^3}\Bigr)\Bigr],$$
$$\int_{-1}^1(e^t-1)\varphi_2(t)\,dt=\sqrt{\frac{3}{2}}\,2e^{-1},$$
and the corresponding equation, after cancelling $\sqrt{3/2}$, is
$$\int_0^1\Bigl(\frac{\tau^2-2\tau+2}{\tau^3}\,e^{\tau}-\frac{\tau^2+2\tau+2}{\tau^3}\,e^{-\tau}\Bigr)u_N(\tau)\,d\tau=2e^{-1}.$$
Similarly, for $\varphi_3(t)=\sqrt{\frac{5}{8}}\,(3t^2-1)$ we have
$$\int_{-1}^1 t\,e^{t\tau}\varphi_3(t)\,dt=\sqrt{\frac{5}{8}}\Bigl\{\frac{3}{\tau^4}\bigl[e^{\tau}(\tau^3-3\tau^2+6\tau-6)+e^{-\tau}(\tau^3+3\tau^2+6\tau+6)\bigr]-\frac{1}{\tau^2}\bigl[e^{\tau}(\tau-1)+e^{-\tau}(\tau+1)\bigr]\Bigr\},$$
$$\int_{-1}^1(e^t-1)\varphi_3(t)\,dt=\sqrt{\frac{5}{8}}\bigl(2e-14e^{-1}\bigr),$$
and the corresponding equation, after cancelling $\sqrt{5/8}$, is
$$\int_0^1\Bigl\{\frac{3}{\tau^4}\bigl[e^{\tau}(\tau^3-3\tau^2+6\tau-6)+e^{-\tau}(\tau^3+3\tau^2+6\tau+6)\bigr]-\frac{1}{\tau^2}\bigl[e^{\tau}(\tau-1)+e^{-\tau}(\tau+1)\bigr]\Bigr\}u_N(\tau)\,d\tau=2e-14e^{-1}.$$
Then
$$L_3(\tau)=\begin{pmatrix}L^{(1)}(\tau)\\ L^{(2)}(\tau)\\ L^{(3)}(\tau)\end{pmatrix},\qquad a_3=\begin{pmatrix}\frac{1}{\sqrt 2}\,(e-e^{-1}-2)\\ \sqrt{3/2}\,2e^{-1}\\ \sqrt{5/8}\,(2e-14e^{-1})\end{pmatrix},\qquad \int_0^1 L_3(\tau)u_N(\tau)\,d\tau=a_3.$$
The matrix is
$$C_3=\int_0^1 L_3(\tau)L^*_3(\tau)\,d\tau,$$
and the approximate solution is
$$u_3(\tau)=L^*_3(\tau)C_3^{-1}a_3+p_3(\tau)-L^*_3(\tau)C_3^{-1}\int_0^1 L_3(\eta)p_3(\eta)\,d\eta,$$
where $p_3(\tau)\in L_2(I_1,R^1)$, $I_1=[0,1]$, is an arbitrary function.

Lecture 5. Integral equation with a parameter

Consider the integral equation
$$K_1v=\int_a^b K(t,\tau)v(t,\tau)\,d\tau=\mu(t),\quad t\in I=[t_0,t_1], \eqno(1.51)$$
where $K(t,\tau)=\|K_{ij}(t,\tau)\|$, $i=\overline{1,n}$, $j=\overline{1,m}$, is a given matrix with elements from $L_2$, the function $v(t,\tau)\in L_2(S_1,R^m)$ is unknown, $t$ is a parameter, and $\mu(t)\in L_2(I,R^n)$.
Theorem 15. A necessary and sufficient condition for the integral equation (1.51) to have a solution for any $\mu(t)\in L_2(I,R^n)$ is that the $n\times n$ matrix
$$C(t)=\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau,\quad t\in I, \eqno(1.52)$$
be positive definite for all $t\in I$, where $(*)$ denotes transposition.

Proof. Sufficiency. Let the matrix $C(t)>0$ for all $t\in I$, i.e. the quadratic form $c^*C(t)c>0$ for all $c\in R^n$, $c\ne 0$, $t\in I$. Let us show that the integral equation (1.51) has a solution for any $\mu(t)\in L_2(I,R^n)$. Indeed, since $C(t)>0$, $t\in I$, the inverse matrix $C^{-1}(t)$, $t\in I$, exists. Choose
$$v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t),\quad t\in I,\ \tau\in I_1,$$
where $\mu(t)\in L_2(I,R^n)$ is the given function. Then
$$K_1v=\int_a^b K(t,\tau)v(t,\tau)\,d\tau=\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau\,C^{-1}(t)\mu(t)=C(t)C^{-1}(t)\mu(t)=\mu(t),\quad t\in I.$$
Therefore, in the case $C(t)>0$, $t\in I$, the integral equation (1.51) has at least one solution $v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)$ for any function $\mu(t)\in L_2(I,R^n)$. This proves the sufficiency.

Necessity. Let the integral equation (1.51) have a solution for any given $\mu(t)\in L_2(I,R^n)$. Let us show that the matrix $C(t)>0$ for all $t\in I$. Note that the quadratic form $c^*C(t)c\ge 0$ for all $c\in R^n$ and all $t\in I$. Indeed, as follows from (1.52),
$$c^*C(t)c=\int_a^b c^*K(t,\tau)K^*(t,\tau)c\,d\tau=\int_a^b\langle K^*(t,\tau)c,\,K^*(t,\tau)c\rangle\,d\tau\ge 0,$$
where $\langle\cdot,\cdot\rangle$ denotes the scalar product of vectors. Consequently, to prove $C(t)>0$ for all $t\in I$ it suffices to show that the matrix $C(t)$, $t\in I$, is nonsingular. Assume the converse: let the matrix $C(t)$ be singular for some $t\in I$. Then there exist $t=\xi\in I$ and a vector $c=c(\xi)\in R^n$, $c\ne 0$, such that $c^*C(\xi)c=0$. Define the vector function $v(\xi,\tau)=K^*(\xi,\tau)c$, $\xi\in I$, $\tau\in I_1$, $v(\xi,\tau)\in L_2(S_1,R^m)$. Note that
$$\int_a^b v^*(\xi,\tau)v(\xi,\tau)\,d\tau=c^*\int_a^b K(\xi,\tau)K^*(\xi,\tau)\,d\tau\,c=c^*C(\xi)c=0. \eqno(1.53)$$
Then $v(\xi,\tau)\equiv 0$, $\tau\in I_1$. Since the integral equation (1.51) has a solution for any $\mu(t)\in L_2(I,R^n)$, in particular for $\mu(\xi)=c$ there exists a function $u(\xi,\tau)\in L_2(S_1,R^m)$ such that
$$\int_a^b K(\xi,\tau)u(\xi,\tau)\,d\tau=c=c(\xi). \eqno(1.54)$$
As follows from (1.53), (1.54),
$$0=\int_a^b v^*(\xi,\tau)u(\xi,\tau)\,d\tau=c^*\int_a^b K(\xi,\tau)u(\xi,\tau)\,d\tau=c^*c.$$
This contradicts $c=c(\xi)\ne 0$. The contradiction arose from the assumption that the matrix $C(t)$ is singular. Hence $C(t)>0$ for all $t\in I$. This concludes the proof.

The following theorem is the key to constructing the general solution of the integral equation (1.51) for any $\mu(t)\in L_2(I,R^n)$.

Theorem 16. Let the matrix $C(t)$, $t\in I$, defined by (1.52) be positive definite. Then the general solution of the integral equation (1.51) for any $\mu(t)\in L_2(I,R^n)$ is defined by
$$v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)+\gamma(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma,\quad t\in I,\ \tau\in I_1, \eqno(1.55)$$
where $\gamma(t,\tau)\in L_2(S_1,R^m)$ is an arbitrary function and $\mu(t)\in L_2(I,R^n)$.

Proof. Let us introduce the sets
$$V=\Bigl\{v(t,\tau)\in L_2(S_1,R^m)\ /\ \int_a^b K(t,\tau)v(t,\tau)\,d\tau=\mu(t),\ t\in I\Bigr\}, \eqno(1.56)$$
$$Q=\Bigl\{v(t,\tau)\in L_2(S_1,R^m)\ /\ v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)+\gamma(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma,\ \gamma(t,\tau)\in L_2(S_1,R^m)\Bigr\}, \eqno(1.57)$$
where the set $V$ contains all the solutions of the integral equation (1.51). The theorem states that a function $v(t,\tau)\in L_2(S_1,R^m)$ belongs to the set $V$ if and only if it is contained in $Q$, i.e. $V=Q$. To prove that $V=Q$ it suffices to show that $V\subset Q$ and $Q\subset V$.

Let us prove that $Q\subset V$. Indeed, if $v(t,\tau)\in Q$, then, as follows from (1.57),
$$\int_a^b K(t,\tau)v(t,\tau)\,d\tau=\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau\,C^{-1}(t)\mu(t)+\int_a^b K(t,\tau)\gamma(t,\tau)\,d\tau-\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau\,C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma=\mu(t),\quad t\in I.$$
Whence $v(t,\tau)\in V$, and therefore $Q\subset V$.

Let us show that $V\subset Q$. Let $v_*(t,\tau)\in V$, i.e. for the function $v_*(t,\tau)$ it holds (see (1.56))
$$\int_a^b K(t,\tau)v_*(t,\tau)\,d\tau=\mu(t),\quad t\in I.$$
Note that the function $\gamma(t,\tau)\in L_2(S_1,R^m)$ in (1.57) is arbitrary; in particular, one can choose $\gamma(t,\tau)=v_*(t,\tau)$, $(t,\tau)\in S_1$. Then the function $v(t,\tau)\in Q$ can be rewritten as
$$v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)+v_*(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)v_*(t,\sigma)\,d\sigma$$
$$=K^*(t,\tau)C^{-1}(t)\Bigl[\mu(t)-\int_a^b K(t,\sigma)v_*(t,\sigma)\,d\sigma\Bigr]+v_*(t,\tau)=v_*(t,\tau),\quad (t,\tau)\in S_1.$$
Therefore $v_*(t,\tau)=v(t,\tau)\in Q$, whence $V\subset Q$. From the inclusions $Q\subset V$, $V\subset Q$ it follows that $V=Q$. This concludes the proof.

The main properties of the solutions of the integral equation (1.51) are the following.
1. Any function $v(t,\tau)\in Q=V$ of the form (1.55) can be represented as $v(t,\tau)=v_1(t,\tau)+v_2(t,\tau)$, where $v_1(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)$ is a particular solution of the integral equation (1.51) and
$$v_2(t,\tau)=\gamma(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma$$
is a solution of the homogeneous integral equation $\int_a^b K(t,\tau)v_2(t,\tau)\,d\tau=0$, where $\gamma(t,\tau)\in L_2(S_1,R^m)$ is an arbitrary function. Indeed,
$$\int_a^b K(t,\tau)v_1(t,\tau)\,d\tau=\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau\,C^{-1}(t)\mu(t)=\mu(t),\quad t\in I,$$
$$\int_a^b K(t,\tau)v_2(t,\tau)\,d\tau=\int_a^b K(t,\tau)\gamma(t,\tau)\,d\tau-\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau\,C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma=0.$$

2. The functions $v_1(t,\tau)\in L_2(S_1,R^m)$ and $v_2(t,\tau)\in L_2(S_1,R^m)$ are orthogonal in $L_2(S_1,R^m)$, i.e. $v_1\perp v_2$. Indeed,
$$\langle v_1,v_2\rangle_{L_2}=\int_a^b v_1^*(t,\tau)v_2(t,\tau)\,d\tau=\mu^*(t)C^{-1}(t)\int_a^b K(t,\tau)\Bigl[\gamma(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma\Bigr]d\tau$$
$$=\mu^*(t)C^{-1}(t)\int_a^b K(t,\tau)\gamma(t,\tau)\,d\tau-\mu^*(t)C^{-1}(t)\int_a^b K(t,\tau)K^*(t,\tau)\,d\tau\,C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma\equiv 0.$$

3. The function $v_1(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)$, $(t,\tau)\in S_1$, is the solution of the integral equation (1.51) with minimal norm in $L_2(S_1,R^m)$. Indeed, by orthogonality $\|v\|^2=\|v_1\|^2+\|v_2\|^2$, hence $\|v\|^2\ge\|v_1\|^2$. If $\gamma(t,\tau)\equiv 0$, $(t,\tau)\in S_1$, then $v_2(t,\tau)\equiv 0$, so that $v(t,\tau)=v_1(t,\tau)$ and $\|v\|=\|v_1\|$.

4. The solution set of the integral equation (1.51) is convex. As follows from the proof of Theorem 16, the set of all solutions of the equation (1.51) is $Q$. Let us show that $Q$ is a convex set. Let
$$v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)+\gamma(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\gamma(t,\sigma)\,d\sigma,$$
$$\bar v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)+\bar\gamma(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\bar\gamma(t,\sigma)\,d\sigma$$
be arbitrary elements of the set $Q$. Then the function
$$v_{\alpha}(t,\tau)=\alpha v(t,\tau)+(1-\alpha)\bar v(t,\tau)=K^*(t,\tau)C^{-1}(t)\mu(t)+\gamma_{\alpha}(t,\tau)-K^*(t,\tau)C^{-1}(t)\int_a^b K(t,\sigma)\gamma_{\alpha}(t,\sigma)\,d\sigma\in Q,\quad \forall\alpha\in[0,1],$$
where $\gamma_{\alpha}(t,\tau)=\alpha\gamma(t,\tau)+(1-\alpha)\bar\gamma(t,\tau)\in L_2(S_1,R^m)$.

Example 5. Consider the integral equation
$$K_1v=\int_0^1 e^{t\tau}v(t,\tau)\,d\tau=\sin t,\quad t\in[1,2],\ \tau\in[0,1].$$
For this example $K(t,\tau)=e^{t\tau}$. Then
$$C(t)=\int_0^1 e^{2t\tau}\,d\tau=\frac{1}{2t}\bigl(e^{2t}-1\bigr)>0,\quad \forall t\in[1,2].$$
Consequently, this integral equation has a solution, and for any $\gamma(t,\tau)\in L_2(S_1,R^1)$
$$v(t,\tau)=\frac{2t}{e^{2t}-1}\,e^{t\tau}\sin t+\gamma(t,\tau)-\frac{2t}{e^{2t}-1}\,e^{t\tau}\int_0^1 e^{t\sigma}\gamma(t,\sigma)\,d\sigma,$$
where $C^{-1}(t)=\dfrac{2t}{e^{2t}-1}$, $t\in[1,2]$, and $S_1=\{(t,\tau)\ /\ 1\le t\le 2,\ 0\le\tau\le 1\}$. The function $v(t,\tau)=v_1(t,\tau)+v_2(t,\tau)$, where
$$v_1(t,\tau)=\frac{2t}{e^{2t}-1}\,e^{t\tau}\sin t,\quad (t,\tau)\in S_1,$$
is a particular solution and
$$v_2(t,\tau)=\gamma(t,\tau)-\frac{2t}{e^{2t}-1}\,e^{t\tau}\int_0^1 e^{t\sigma}\gamma(t,\sigma)\,d\sigma$$
is a solution of the homogeneous integral equation $\int_0^1 e^{t\tau}v_2(t,\tau)\,d\tau=0$. It is easily shown that $\langle v_1,v_2\rangle_{L_2}=0$ for all $t\in[1,2]$.
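The decomposition of Example 5 can be verified numerically. In the sketch below the grid size and the sample choices of $t$ and $\gamma$ are mine; the check confirms that $v_1$ reproduces $\sin t$, that $v_2$ solves the homogeneous equation, and that $v_1\perp v_2$.

```python
import numpy as np

# Numerical check of Example 5 (grid and the sample gamma are my choices):
# v1(t, tau) = (2t/(e^{2t} - 1)) e^{t*tau} sin t, and for a chosen gamma
# v2 = gamma - (2t/(e^{2t} - 1)) e^{t*tau} * int e^{t*s} gamma(s) ds.
M = 4000
tau = (np.arange(M) + 0.5) / M
h = 1.0 / M
for t in (1.0, 1.5, 2.0):                     # sample parameter values in [1, 2]
    k = np.exp(t * tau)                       # kernel K(t, .) = e^{t*tau}
    C = (np.exp(2.0 * t) - 1.0) / (2.0 * t)   # C(t) = int_0^1 e^{2 t tau} dtau
    v1 = k / C * np.sin(t)                    # particular solution K* C^{-1} mu
    gamma = tau ** 2                          # one admissible gamma(t, tau)
    v2 = gamma - k / C * np.sum(k * gamma) * h
    lhs1 = np.sum(k * v1) * h                 # int K v1 dtau  -> sin t
    lhs2 = np.sum(k * v2) * h                 # int K v2 dtau  -> 0
    ortho = np.sum(v1 * v2) * h               # <v1, v2>       -> 0
    assert abs(lhs1 - np.sin(t)) < 1e-4       # v1 solves (1.51)
    assert abs(lhs2) < 1e-4                   # v2 solves the homogeneous equation
    assert abs(ortho) < 1e-4                  # orthogonality v1 ⊥ v2
```

Any other square-integrable `gamma` passes the same checks, which is exactly the content of Theorem 16: the family (1.55) sweeps out the whole solution set.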
Lecture 6. Integral equation for a function of several variables

In this lecture methods for solving integral equations with respect to a function of several variables are developed, conditions for the existence of a solution are obtained, and an approximate method for solving the first kind Fredholm integral equation for a function of several variables is constructed.

Consider the integral equation
$$\Lambda u=\int_a^b\int_c^d\Lambda(t,\xi,\tau)u(\xi,\tau)\,d\xi\,d\tau=f(t),\quad t\in[t_0,t_1], \eqno(1.58)$$
where $\Lambda(t,\xi,\tau)=\|\lambda_{ij}(t,\xi,\tau)\|$, $i=\overline{1,n}$, $j=\overline{1,m}$, is a given $n\times m$ matrix whose elements $\lambda_{ij}(t,\xi,\tau)$ are measurable and belong to $L_2$ on the set
$$\Omega=\{(t,\xi,\tau)\in R^3\ /\ t_0\le t\le t_1,\ a\le\tau\le b,\ c\le\xi\le d\},$$
$$\int_{t_0}^{t_1}\int_a^b\int_c^d|\lambda_{ij}(t,\xi,\tau)|^2\,d\xi\,d\tau\,dt<\infty;$$
the function $f(t)\in L_2(I,R^n)$ is given, $I=[t_0,t_1]$; $u(\xi,\tau)\in L_2(Q,R^m)$ is an unknown function, $Q=\{(\xi,\tau)\ /\ c\le\xi\le d,\ a\le\tau\le b\}$; the values $t_0$, $t_1$, $a$, $b$, $c$, $d$ are fixed; the operator $\Lambda: L_2(Q,R^m)\to L_2(I,R^n)$, $\Lambda u=f$.

A method is proposed for solving the integral equation (1.58) by reducing it to the integral equation
$$Ku=\int_a^b\int_c^d K(t,\tau)u(t,\tau)\,d\tau\,dt=a,\quad a\in R^n, \eqno(1.59)$$
where $K(t,\tau)=\|K_{ij}(t,\tau)\|$, $i=\overline{1,n}$, $j=\overline{1,m}$, is a given $n\times m$ matrix whose elements, the functions $K_{ij}(t,\tau)$, are measurable and belong to $L_2$ on the rectangle $Q=\{(t,\tau)\ /\ a\le t\le b,\ c\le\tau\le d\}$,
$$\int_a^b\int_c^d|K_{ij}(t,\tau)|^2\,d\tau\,dt<\infty;$$
$u(t,\tau)\in L_2(Q,R^m)$ is an unknown function, $a\in R^n$ is a given vector, the values $a$, $b$, $c$, $d$ are given, and the operator $K: L_2(Q,R^m)\to R^n$, $Ku=a$.

The following problems are posed:

Problem 1. Find a necessary and sufficient condition for the existence of a solution to the integral equation (1.59) for any $a\in R^n$.

Problem 2. Find the general solution to the integral equation (1.59) for any $a\in R^n$.

Problem 3. Let the elements $\lambda_{ij}(t,\xi,\tau)\in L_2(\Omega,R^1)$, $(t,\xi,\tau)\in\Omega$, of the matrix $\Lambda(t,\xi,\tau)$ have traces $\lambda_{ij}(\cdot,\xi,\tau)\in L_2(I,R^1)$ that are continuous with respect to $(\xi,\tau)$ in the metric of $L_2(I,R^1)$, i.e.
$$\lim_{\xi\to\xi_*,\ \tau\to\tau_*}\int_{t_0}^{t_1}\bigl|\lambda_{ij}(t,\xi,\tau)-\lambda_{ij}(t,\xi_*,\tau_*)\bigr|^2\,dt=0$$
for all $(\xi_*,\tau_*)\in Q$. Find an approximate solution $u(\xi,\tau)\in L_2(Q,R^m)$ of the equation (1.58).

Consider a solution of Problem 1 for the integral equation (1.59).

Theorem 17. A necessary and sufficient condition for the integral equation (1.59) to have a solution for any $a\in R^n$ is that the $n\times n$ matrix
$$T(a,b,c,d)=\int_a^b\int_c^d K(t,\tau)K^*(t,\tau)\,d\tau\,dt \eqno(1.60)$$
be positive definite.

Proof. Necessity. Let the integral equation (1.59) have a solution. Let us show that the matrix $T(a,b,c,d)>0$. As follows from (1.60), the quadratic form
$$y^*T(a,b,c,d)y=\int_a^b\int_c^d y^*K(t,\tau)K^*(t,\tau)y\,d\tau\,dt=\int_a^b\int_c^d\langle K^*(t,\tau)y,\,K^*(t,\tau)y\rangle\,d\tau\,dt\ge 0$$
for any vector $y\in R^n$. Hence $T(a,b,c,d)\ge 0$. Then, to prove $T(a,b,c,d)>0$, it remains to show that the matrix $T(a,b,c,d)$ is nonsingular.
* ³³ w (t,W )w(t,W )dW dt
c
*
a c
b d
³³ K (t,W ) K
*
*
(t ,W )dW dt c c T (a, b, c, d )c 0
(1.61)
a c
It follows from this that w(t ,W ) { 0 , (t ,W ) Q . Since the integral equation (1.59) has a solution for any vector a R n , in particular for a c R n there exists a vector-function v(t,W ) L2 (Q, R m ) such that b d
³ ³ K (t,W )v(t,W )dW dt
c, c z 0.
a c
From (1.61) we have b d
³³ w (t,W )v(t,W )dW dt *
c
*
a c
b d
³³ K (t,W )v(t,W )dW dt
*
c c 0.
a c
contradicts the condition c z 0 . Consequently, the matrix T (a, b, c, d ) ! 0 . This proves a necessity. Sufficiency. Let the matrix T (a, b, c, d ) ! 0 (see (1.60)). Let us show that the integral equation (1.59) has a solution. As T (a, b, c, d ) ! 0 , there exists an inverse matrix T 1 (a, b, c, d ) . Let the vector-function u(t,W ) K * (t,W )T 1 (a, b, c, d )a , (t , W ) Q , u(t ,W ) L2 (Q, R m ) . Then This
b d
Ku
³ ³ K (t,W )u(t,W )dW dt a c
b d
³ ³ K (t,W )K
*
(t ,W )dW dtT 1 (a, b, c, d )a
a c
T (a, b, c, d )T 1 (a, b, c, d )a a .
Therefore, in the case when the matrix T (a, b, c, d ) ! 0 , the integral equation (1.59) has at least one solution u(t ,W ) K * (t ,W )T 1 (a, b, c, d )a , a R n . A sufficiency is proved. The theorem is proved. Consider a solution of problem 2 for the integral equation (1.59). Theorem 18. Let the matrix T (a, b, c, d ) ! 0 . Then the general solution to the integral equation (1.59) is defined by u (t , W )
v (t ,W ) K * (t ,W )T 1 ( a, b, c, d )a b d
K * (t ,W )T 1 ( a, b, c, d ) u ³ ³ K (t ,W )v (t ,W )dW dt ,
(1.62)
a c
where v(t,W ) L2 (Q, R m ) is an arbitrary function, a R n is an arbitrary vector. Proof. Let us introduce the sets (см. (1.62)) b d ½ m ®u(t ,W ) L2 (Q, R ) / ³ ³ K (t ,W )u(t ,W )dW dt a ¾ , a c ¯ ¿ * 1 m {u(t ,W ) L2 (Q, R ) / u(t ,W ) v (t ,W ) K (t ,W )T ( a, b, c, d )a
W
U
b d
K * (t , W )T 1 ( a, b, c, d ) ³ ³ K (t , W )u(t ,W )dW dt , v, v (t ,W ) L2 (Q, R m )} a c
34
(1.63) (1.64)
The set $W$ contains all the solutions to the integral equation (1.59). The theorem states that a function $u(t, \tau) \in L_2(Q, R^m)$ belongs to the set $W$ if and only if it belongs to $U$, i.e. $W = U$. To show that $W = U$ it is sufficient to prove that: a) $U \subset W$; b) $W \subset U$.

Show that $U \subset W$. Indeed, if $u(t, \tau) \in U$, then, as follows from (1.64),

$$\int_a^b \int_c^d K(t, \tau) u(t, \tau)\, d\tau\, dt = \int_a^b \int_c^d K(t, \tau) v(t, \tau)\, d\tau\, dt + \int_a^b \int_c^d K(t, \tau) K^*(t, \tau)\, d\tau\, dt\; T^{-1}(a, b, c, d) a$$

$$- \int_a^b \int_c^d K(t, \tau) K^*(t, \tau)\, d\tau\, dt\; T^{-1}(a, b, c, d) \int_a^b \int_c^d K(t, \tau) v(t, \tau)\, d\tau\, dt = a.$$

Whence $u(t, \tau) \in W$. Therefore $U \subset W$.

Prove that $W \subset U$. Let $u_*(t, \tau) \in W$, i.e. for the function $u_*(t, \tau)$ it holds (see (1.63))

$$\int_a^b \int_c^d K(t, \tau) u_*(t, \tau)\, d\tau\, dt = a.$$

Note that in (1.64) the function $v(t, \tau) \in L_2(Q, R^m)$ is arbitrary; choose, in particular, $v(t, \tau) = u_*(t, \tau)$, $(t, \tau) \in Q$. Then the function $u(t, \tau) \in U$ takes the form

$$u(t, \tau) = u_*(t, \tau) + K^*(t, \tau) T^{-1}(a, b, c, d) a - K^*(t, \tau) T^{-1}(a, b, c, d) \int_a^b \int_c^d K(t, \tau) u_*(t, \tau)\, d\tau\, dt = u_*(t, \tau), \quad (t, \tau) \in Q.$$
Consequently, $u_*(t, \tau) = u(t, \tau) \in U$, whence $W \subset U$. From the inclusions $U \subset W$, $W \subset U$ it follows that $W = U$. This concludes the proof.

Consider a solution of problem 3 for the integral equation (1.58). Let a complete system be given in $L_2$, in particular $1, t, t^2, \ldots$, and let $\varphi_1(t), \varphi_2(t), \ldots$ be the corresponding complete orthogonal system. As the assumptions of Fubini's theorem (on the change of the order of integration) hold, we have

$$\int_{t_0}^{t_1} \Big( \int_a^b \int_c^d \lambda_{ij}(t, \xi, \tau) u_j(\xi, \tau)\, d\xi\, d\tau \Big) \varphi_k(t)\, dt = \int_a^b \int_c^d \Big( \int_{t_0}^{t_1} \lambda_{ij}(t, \xi, \tau) \varphi_k(t)\, dt \Big) u_j(\xi, \tau)\, d\xi\, d\tau = \int_a^b \int_c^d l_{ijk}(\xi, \tau) u_j(\xi, \tau)\, d\xi\, d\tau,$$

$$i = \overline{1, n}, \quad j = \overline{1, m}, \quad k = 1, 2, \ldots,$$

$$\int_{t_0}^{t_1} f_i(t) \varphi_k(t)\, dt = a_{ik}, \quad i = \overline{1, n}, \quad k = 1, 2, \ldots,$$

where $f_i(t) \in L_2(I, R^1)$, $i = \overline{1, n}$. Then

$$\int_{t_0}^{t_1} \Big( \int_a^b \int_c^d \Lambda(t, \xi, \tau) u(\xi, \tau)\, d\xi\, d\tau \Big) \varphi_k(t)\, dt = \begin{pmatrix} \int_a^b \int_c^d l_{11k}(\xi, \tau) u_1(\xi, \tau)\, d\xi\, d\tau + \ldots + \int_a^b \int_c^d l_{1mk}(\xi, \tau) u_m(\xi, \tau)\, d\xi\, d\tau \\ \vdots \\ \int_a^b \int_c^d l_{n1k}(\xi, \tau) u_1(\xi, \tau)\, d\xi\, d\tau + \ldots + \int_a^b \int_c^d l_{nmk}(\xi, \tau) u_m(\xi, \tau)\, d\xi\, d\tau \end{pmatrix} = \int_a^b \int_c^d K_k(\xi, \tau) u(\xi, \tau)\, d\xi\, d\tau, \quad k = 1, 2, \ldots,$$

$$a_k = \int_{t_0}^{t_1} f(t) \varphi_k(t)\, dt = \big( a_{1k}, \ldots, a_{nk} \big)^*, \quad k = 1, 2, \ldots.$$

Now for every index $k$ we have

$$\int_a^b \int_c^d K_k(\xi, \tau) u(\xi, \tau)\, d\xi\, d\tau = a_k, \quad k = 1, 2, \ldots,$$

where $K_k(\xi, \tau)$ is an $n \times m$ matrix and $a_k \in R^n$. The truncated equation for $k = \overline{1, N}$ is

$$\int_a^b \int_c^d \bar K(\xi, \tau) u(\xi, \tau)\, d\xi\, d\tau = \bar a, \qquad (1.65)$$

where

$$\bar K(\xi, \tau) = \begin{pmatrix} K_1(\xi, \tau) \\ \vdots \\ K_N(\xi, \tau) \end{pmatrix}, \qquad \bar a = \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix},$$

and $u(\xi, \tau) \in L_2(Q, R^m)$ is a solution to the integral equation (1.65).

Theorem 19. Let the $Nn \times Nn$ matrix

$$T_1(a, b, c, d) = \int_a^b \int_c^d \bar K(\xi, \tau) \bar K^*(\xi, \tau)\, d\xi\, d\tau$$

be positive definite. Then the general solution to the integral equation (1.65) is defined by

$$u(\xi, \tau) = \bar K^*(\xi, \tau) T_1^{-1}(a, b, c, d) \bar a + w_1(\xi, \tau) - \bar K^*(\xi, \tau) T_1^{-1}(a, b, c, d) \int_a^b \int_c^d \bar K(\xi, \tau) w_1(\xi, \tau)\, d\xi\, d\tau, \qquad (1.66)$$

where $w_1(\xi, \tau) \in L_2(Q, R^m)$ is an arbitrary function. The proof is based on theorems 17 and 18.
where Z1 [ ,W L2 Q, R m is an arbitrary function. The proof is based on theorems 17, 18. Let us calculate the function bd
f 1 (t )
³ ³ /(t, [ ,W )u ([ ,W )d[dW ,
tI
[t0 , t1 ],
ac
where u ([ ,W ) is defined by (1.66). Then the difference bd
f (t ) ³ ³ /(t, [ ,W )u ([ ,W )d[dW , t I .
f (t ) f 1 (t )
ac
The value of the norm || f f1 || is an estimate for determining N. Comments Solutions of controllability problem for dynamical systems [1-3], mathematical theory for optimal proccesses [4-6], boundary value problems for differential equations with state variable and integral constraints [7] are reduced to solvability and construction a general solution for the first kind Fredholm integral equation t1
Ku = ³ K (t , W )u(W )dW = f (t ),
(1.66)
t0
where K (t ,W ) is a measurable function on the set S0 = {( t ,W ) R 2 / t0 d t d t1 , t0 d W d t1} and there exists the integral 2
P =
t1 t1
³ ³ | K (t ,W ) |
2
dtdW < f,
t0 t0
the function f (t ) L2 ( I , R1 ). The problem is to find a solution u(W ) L2 ( I , R1 ), where I = [t0 , t1 ]. Solvability and construction a solution of the first kind Fredholm integral equation is one of the little studied problems in mathematics. As it follows from [8], the norm || K || d P, the operator K with a kernel from L2 ( S 0 ) is a completely continuous operator, which transforms every weakly converging sequence to strongly converging one. An inverse operator is not bounded [9], the equation Ku = f can not solvable for all f L2 . This leads to the fact that a small inaccuracy in f leads to arbitrarily large error in a solution to the equation (1.66). Known results on solvability of the equation (1.66) are related to the case when K (t ,W ) = K (W , t ) i.e. to the equation (1.66) with a symmetric kernel. One of the main results on solvability of the equation (1.66) is the Picard theorem [10]. But to apply this theorem it is needed to prove a completeness of eigenfunctions of a symmetric kernel. 37
Thus, solvability and construction a solution of the integral equation (1.66) is a difficult and a little studied incorrect problem. The following methods for solving incorrect problems should be noted: 1) Regularization method [11], based on reducing an original problem to a correct problem. Given data of the problem have to satisfy a priori requirements for regularization. Methods for solving incorrect problem after regularization have been proposed in [12, 13]. Unfortuntely additional requirements imposed on given data of the problem are satisfied not always and methods for solving correct problem are hard; 2) The method of successive approximations [14] for solving the equation (1.66) is applicable, in the case when K (t,W ) has a symmetric positive kernel in L2 and determination of the least eigenvalue is required; 3) Method of undetermined coefficients [15]. The search for a solution to the equation (1.66) in the form of a series is proposed. But in general case, it is extremely difficult to determine coefficients of the series. As follows from the above the study of solvability and construction a solution to the equation (1.66) is topical. The main results presented in lectures 1 – 6 are the following: – A class of the first kind integral equations with respect to a one variable function as well as to several variable functions solvable for any right hand side is singled out. For this class of integral equations necessary and sufficient conditions of solution existence have been derived and their general solutions are found. Necessary and sufficient conditions for existence of a solution to the mentioned equations with a given right hand side have been obtained for all the cases. – Solvability of the first kind Fredholm integral equation is studied. 
Necessary and sufficient conditions for existence of a solution to the equations with a given right hand side are derived for two cases: unknown function belongs to the space L2 ; unknown function belongs to a given set in L2 . Solvability conditions and method for construction of approximate solution to the first kind integral equation on the base of properties of solutions to the first kind integral equations of a singled class. ‒ Integral equations with respect to both one variable unknown function and several variable unknown function are one of little studied problems of integral equations. Equations with thermal and diffusion processes are reduced to thr first kind Fredholm equation with respect to several variables unknown function. Controllability problem and optimal control problems for processes described by ordinary differential equations are reduced to the first kind Fredholm integral equations with respect to one variable unknown function. For this reasons the obtained results are great importance for the solution of topical problems of qualitative theory of differential equations. Chapter contains the results of fundamental research presented in [1-7], [16-21]. 38
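The unboundedness of the inverse operator mentioned above can be seen numerically: for a smooth kernel, a high-frequency component $w$ of unit size in the solution contributes almost nothing to the right hand side, so small data errors correspond to large solution errors. A sketch with an assumed illustrative kernel $k(t,\tau) = e^{-(t-\tau)^2}$ (not from the book), discretized by a Riemann sum:

```python
import math

# Riemann-sum discretization of (Ku)(t) = integral_0^1 k(t,tau) u(tau) dtau
# with the smooth (illustrative) kernel k(t,tau) = exp(-(t-tau)^2).
N = 400
h = 1.0 / N
ts = [(i + 0.5) * h for i in range(N)]

def apply_K(u):
    return [sum(math.exp(-(t - tau) ** 2) * u[j] * h
                for j, tau in enumerate(ts)) for t in ts]

# high-frequency perturbation of the solution, of unit amplitude
w = [math.sin(200 * math.pi * tau) for tau in ts]
Kw = apply_K(w)

norm_w = math.sqrt(sum(x * x for x in w) * h)
norm_Kw = math.sqrt(sum(x * x for x in Kw) * h)

# ||Kw|| is much smaller than ||w||: a tiny change of the right hand
# side f corresponds to an O(1) change of the solution u, i.e. the
# inverse operator is unbounded and the problem is ill-posed
assert norm_w > 0.5
assert norm_Kw < 0.05 * norm_w
```

The smoother the kernel and the higher the frequency of $w$, the stronger the damping, which is exactly why regularization or the constructive conditions of this chapter are needed.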
References:
1. S.A. Aisagaliev. Controllability of a differential equation system // Differential Equations. – 1991. – Vol. 27, No 9. – P. 1037-1045.
2. S.A. Aisagaliev, A.P. Belogurov. Controllability and speed of the process described by a parabolic equation with bounded control // Siberian Mathematical Journal. – 2012. – Vol. 53, No 1. – P. 13-28.
3. S.A. Aisagaliev. Controllability theory of dynamical systems. – Almaty: Kazakh universiteti, 2014. – 158 p. (in Russian).
4. S.A. Aisagaliev. Controllability and Optimal Control in Nonlinear Systems // Journal of Computer and Systems Sciences International. – 1994. – No 32(1.5). – P. 73-80.
5. S.A. Aisagaliev, A.A. Kabidoldanova. Optimal control of dynamical systems. – Saarbrucken: Palmarium Academic Publishing, 2012. – 288 p. (in Russian).
6. S.A. Aisagaliev, A.A. Kabidoldanova. On the Optimal Control of Linear Systems with Linear Performance Criterion and Constraints // Differential Equations. – 2012. – Vol. 48, No 6. – P. 832-844.
7. S.A. Aisagaliev, T.S. Aisagaliev. Methods for solving boundary value problems. – Almaty: Kazakh University publishing house, 2002. – 348 p. (in Russian).
8. V.I. Smirnov. A course of higher mathematics. Vol. IV, part 1. – Moscow, 1974. – 336 p. (in Russian).
9. A.N. Kolmogorov, S.V. Fomin. Elements of the theory of functions and functional analysis. – Moscow: Nauka, 1989. – 624 p. (in Russian).
10. M.L. Krasnov. Integral equations. – Moscow: Nauka, 1975. – 304 p. (in Russian).
11. A.N. Tihonov, V.Ya. Arsenin. Methods for solving ill-posed problems. – Moscow: Nauka, 1986. – 288 p. (in Russian).
12. M.M. Lavrentev. On some ill-posed problems of mathematical physics. – Novosibirsk: Izd-vo SO AN SSSR, 1962. – 305 p. (in Russian).
13. V.K. Ivanov. On the first kind Fredholm integral equations // Differential Equations. – 1967. – Vol. 3, No 3. – P. 21-32. (in Russian).
14. V.M. Fridman. The method of successive approximations for the first kind Fredholm integral equation // UMN. – 1956. – Vol. XI, issue 1. – P. 56-85. (in Russian).
15. F.M. Mors, G. Feshbah. Methods of mathematical physics. Vols. I, II. – Moscow: IL, 1958. – 536 p. (in Russian).
16. S.A. Aisagaliev. A general solution to one class of integral equations // Mathematical journal. – 2005. – Vol. 5, No 4 (1.20). – P. 17-34. (in Russian).
17. S.A. Aisagaliev. Constructive theory of boundary value problems of optimal control. – Almaty: Kazakh University publishing house, 2007. – 328 p. (in Russian).
18. S.A. Aisagaliev, A.P. Belogurov, I.V. Sevryugin. To solving the first kind Fredholm integral equation for multivariable functions // Vestnik KazNU, ser. math., mech., inf. – 2011. – No 1(68). – P. 3-16. (in Russian).
19. S.A. Aisagaliev. Lectures on optimal control. – Almaty: Kazakh University publishing house, 2007. – 278 p. (in Russian).
20. S.A. Aisagaliev, Zh.Kh. Zhunussova. Solvability and constructing a solution to the first kind Fredholm integral equation // Vestnik KazNU, ser. math., mech., inf. – 2016. – No 1(88). – P. 3-16. (in Russian).
21. S.A. Aisagaliev, S.S. Aisagalieva, A.A. Kabidoldanova. Solvability and construction of solution of integral equation // Bulletin math., mech., comp. science series. – 2016. – No 2(89). – P. 3-18.
Chapter II BOUNDARY VALUE PROBLEMS OF LINEAR ORDINARY DIFFERENTIAL EQUATIONS
Boundary value problems with boundary conditions on given convex closed sets, boundary value problems with state variable constraints and integral constraints are considered. Necessary and sufficient conditions for the existence of a solution to these problems are derived.

Consider the boundary value problem

$$\dot x = A(t) x + \mu(t), \quad t \in I = [t_0, t_1], \qquad (2.1)$$

$$(x(t_0) = x_0,\ x(t_1) = x_1) \in S \subset R^{2n}, \qquad (2.2)$$

with the state variable constraints

$$x(t) \in G(t); \quad G(t) = \{ x \in R^n :\ \omega(t) \le L(t) x \le M(t),\ t \in I \}, \qquad (2.3)$$

and the integral constraints

$$g_j(x) \le c_j, \quad j = \overline{1, m_1}, \qquad (2.4)$$

$$g_j(x) = c_j, \quad j = \overline{m_1 + 1, m_2}, \qquad (2.5)$$

$$g_j(x) = \int_{t_0}^{t_1} \langle a_j(t), x \rangle\, dt, \quad j = \overline{1, m_2}.$$

Here $A(t)$, $L(t)$ are matrices of orders $n \times n$, $s \times n$, respectively, with piecewise continuous elements, $S$ is a given closed set, $\omega(t)$, $M(t)$, $t \in I$, are $s \times 1$ continuous vector valued functions, $a_j(t)$, $j = \overline{1, m_2}$, are given $n \times 1$ piecewise continuous vector valued functions, $c_j$, $j = \overline{1, m_2}$, are known constants, $t_0$, $t_1$ are fixed time moments, $\mu(t) = (\mu_1(t), \ldots, \mu_n(t))$ is a given piecewise continuous function, and $\langle \cdot, \cdot \rangle$ denotes the scalar product.

The following problems are posed:

Problem 1. Find necessary and sufficient conditions for the existence of a solution and construct a solution to the equation (2.1) with the boundary conditions (2.2).

Problem 2. Find necessary and sufficient conditions for the existence of a solution and construct a solution to the equation (2.1) with the boundary conditions (2.2) and the state variable constraints (2.3).

Problem 3. Find necessary and sufficient conditions for the existence of a solution and construct a solution to the equation (2.1) with the boundary conditions (2.2) subject to the state variable constraints (2.3) and the integral constraints (2.4), (2.5).

In particular, $S = S_0 \times S_1$, $x_0 \in S_0$, $x_1 \in S_1$, where $S_0 \subset R^n$, $S_1 \subset R^n$ are given sets. For example, $S_0 = \{ x_0 \in R^n :\ C_1 x_0 \le b_0 \}$, $S_1 = \{ x_1 \in R^n :\ D_1 x_1 \le b_1 \}$, where $C_1$, $D_1$ are given $m \times n$, $(n - m) \times n$ matrices respectively, $b_0 \in R^m$, $b_1 \in R^{n - m}$. The cases $b_0 = 0$, $b_1 = 0$, or $b_0 = 0$, $b_1 \ne 0$, or $b_0 \ne 0$, $b_1 = 0$ are not excluded. In practice the set $S = \{ (x_0, x_1) \in R^{2n} :\ C x_0 + D x_1 = b \}$ is often met, where $C$, $D$ are $k \times n$ constant matrices, $k \le n$, $b \in R^k$. In general, $S$ is a given convex closed set. A special case of the set $S$ is

$$S = \{ (x_0, x_1) \in R^{2n} :\ H_j(x_0, x_1) \le 0,\ j = \overline{1, s_1};\ H_j(x_0, x_1) = \langle d_j, x_0 \rangle + \langle e_j, x_1 \rangle - \alpha_j = 0,\ j = \overline{s_1 + 1, p_1} \},$$

where $H_j(x_0, x_1)$, $j = \overline{1, s_1}$, are convex functions with respect to $(x_0, x_1)$, $x_0 = x(t_0)$, $x_1 = x(t_1)$, $d_j \in R^n$, $e_j \in R^n$, and $\alpha_j$, $j = \overline{s_1 + 1, p_1}$, are given numbers.

Lecture 7. Two-point boundary value problem. A necessary and sufficient condition for existence of a solution

Consider the solution of problem 1. The problems are to find a necessary and sufficient condition for the existence of a solution to the boundary value problem

$$\dot x = A(t) x + \mu(t), \quad t \in I = [t_0, t_1], \qquad (2.6)$$

$$(x(t_0) = x_0,\ x(t_1) = x_1) \in S \subset R^{2n}, \qquad (2.7)$$

and to construct a solution to the linear system (2.6) with the boundary conditions (2.7).

Let us represent the $n \times n$ matrix $A(t)$ with piecewise continuous elements as the sum $A(t) = A_1(t) + B(t)$, $t \in I$, so that the $n \times n$ matrix

$$W(t_0, t_1) = \int_{t_0}^{t_1} \Phi(t_0, t) B(t) B^*(t) \Phi^*(t_0, t)\, dt$$

is positive definite, where $\Phi(t, \tau) = \theta(t) \theta^{-1}(\tau)$ and $\theta(t)$ is a fundamental matrix of solutions of the linear homogeneous system $\dot\xi = A_1(t) \xi$. Note that the matrix $\theta(t)$ is a solution of the equation $\dot\theta(t) = A_1(t) \theta(t)$, $\theta(t_0) = I_n$, where $I_n$ is the $n \times n$ identity matrix. There are many ways to represent the matrix $A(t)$ as the sum $A(t) = A_1(t) + B(t)$, $t \in I$:

1. The matrix $A_1(t)$ can be chosen as a constant $n \times n$ matrix $A_1$. In this case $\theta(t) = e^{A_1 (t - t_0)}$, $t \in I$;

2. The matrix $B(t)$ can be chosen in the form $B(t) = B_1(t) P$, where $B_1(t)$ is a matrix of order $n \times m$ and $P$ is an $m \times n$ constant matrix; in particular $P = (I_m, 0_{m, n-m})$, where $I_m$ is the $m \times m$ identity matrix and $0_{m, n-m}$ is the rectangular zero matrix of order $m \times (n - m)$.

Since $A(t) = A_1(t) + B(t)$, $t \in I$, the equation (2.6) can be rewritten in the form

$$\dot x = A_1(t) x + B(t) x + \mu(t), \quad t \in I = [t_0, t_1]. \qquad (2.8)$$

In the case $B(t) = B_1(t) P$ the equation (2.8) takes the form

$$\dot x = A_1(t) x + B_1(t) P x + \mu(t), \quad t \in I, \qquad (2.9)$$

where $B_1(t)$ is a matrix of order $n \times m$ and $P x$ is an $m \times 1$ vector valued function. If $m = n$, then $P = I_n$, $B(t) = B_1(t)$, $t \in I$. Without loss of generality we assume in what follows that the equation (2.8) is represented in the form (2.9), and introduce the matrix

$$W_1(t_0, t_1) = \int_{t_0}^{t_1} \Phi(t_0, t) B_1(t) B_1^*(t) \Phi^*(t_0, t)\, dt. \qquad (2.10)$$
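The suitability of a chosen decomposition $A = A_1 + B_1 P$ is checked in practice by computing the matrix (2.10) and testing its positive definiteness. A minimal Python sketch under assumed data (a double integrator, not an example from the book): $A_1 = \big(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\big)$, $B_1 = (0, 1)^*$, $t_0 = 0$, $t_1 = 1$; since $A_1$ is nilpotent, $\Phi(t_0, t) = e^{-A_1 (t - t_0)} = I - A_1 (t - t_0)$ exactly.

```python
# Gramian W1(t0,t1) of (2.10) for the assumed double-integrator data
# A1 = [[0,1],[0,0]], B1 = (0,1)^T on [0, 1].
# Phi(0,t) @ B1 = [[1,-t],[0,1]] @ [[0],[1]] = (-t, 1)^T.

def phi_B(t):
    return [-t, 1.0]

N = 1000                            # midpoint-rule grid
h = 1.0 / N
W1 = [[0.0, 0.0], [0.0, 0.0]]
for i in range(N):
    g = phi_B((i + 0.5) * h)
    for r in range(2):
        for c in range(2):
            W1[r][c] += g[r] * g[c] * h

# exact Gramian is [[1/3, -1/2], [-1/2, 1]] with det = 1/12 > 0,
# so W1(t0,t1) is positive definite and Theorem 1 below applies
det = W1[0][0] * W1[1][1] - W1[0][1] * W1[1][0]
assert W1[0][0] > 0 and det > 0
assert abs(det - 1.0 / 12.0) < 1e-4
```

For a general $A_1(t)$ one would first integrate $\dot\theta = A_1(t)\theta$, $\theta(t_0) = I_n$, numerically and form $\Phi(t_0, t) = \theta(t_0)\theta^{-1}(t)$; the closed form above is a convenience of the nilpotent example.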
Together with (2.9), consider the linear control system

$$\dot y = A_1(t) y + B_1(t) u(t) + \mu(t), \quad t \in I, \qquad (2.11)$$

$$(y(t_0) = x_0,\ y(t_1) = x_1) \in S \subset R^{2n}, \qquad (2.12)$$

$$u(\cdot) \in L_2(I, R^m). \qquad (2.13)$$

Note that if $u(t) = P x(t)$, $t \in I$, then the system (2.11)-(2.13) coincides with the original system (2.6), (2.7).

Theorem 1. Let the $n \times n$ matrix $W_1(t_0, t_1)$ be positive definite. Then a control $u(\cdot) \in L_2(I, R^m)$ brings the trajectory of the system (2.11) from an initial point $y(t_0) = x_0 \in R^n$ to a terminal state $y(t_1) = x_1 \in R^n$ if and only if

$$u(t) \in U = \{ u(\cdot) \in L_2(I, R^m) :\ u(t) = v(t) + \lambda_1(t, x_0, x_1) + N_1(t) z(t_1, v),\ t \in I,\ \forall v(\cdot) \in L_2(I, R^m) \}, \qquad (2.14)$$

where

$$\lambda_1(t, x_0, x_1) = B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1) a, \quad a = \Phi(t_0, t_1) x_1 - x_0 - \int_{t_0}^{t_1} \Phi(t_0, t) \mu(t)\, dt,$$

$$N_1(t) = -B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1) \Phi(t_0, t_1),$$

and the function $z(t, v)$, $t \in I$, is a solution of the differential equation

$$\dot z = A_1(t) z + B_1(t) v(t), \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m). \qquad (2.15)$$

The solution of the differential equation (2.11) corresponding to a control $u(t) \in U$ is defined by the formula

$$y(t) = z(t, v) + \lambda_2(t, x_0, x_1) + N_2(t) z(t_1, v), \quad t \in I, \qquad (2.16)$$

where

$$\lambda_2(t, x_0, x_1) = \Phi(t, t_0) W_1(t, t_1) W_1^{-1}(t_0, t_1) x_0 + \Phi(t, t_0) W_1(t_0, t) W_1^{-1}(t_0, t_1) \Phi(t_0, t_1) x_1 + \int_{t_0}^{t} \Phi(t, \tau) \mu(\tau)\, d\tau - \Phi(t, t_0) W_1(t_0, t) W_1^{-1}(t_0, t_1) \int_{t_0}^{t_1} \Phi(t_0, t) \mu(t)\, dt,$$

$$N_2(t) = -\Phi(t, t_0) W_1(t_0, t) W_1^{-1}(t_0, t_1) \Phi(t_0, t_1),$$

$$W_1(t_0, t) = \int_{t_0}^{t} \Phi(t_0, \tau) B_1(\tau) B_1^*(\tau) \Phi^*(t_0, \tau)\, d\tau, \quad W_1(t, t_1) = W_1(t_0, t_1) - W_1(t_0, t).$$
Proof. The solution of the system (2.11) has the form

$$y(t) = \Phi(t, t_0) x_0 + \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) u(\tau)\, d\tau + \int_{t_0}^{t} \Phi(t, \tau) \mu(\tau)\, d\tau, \quad t \in I.$$

Then a control which transfers the trajectory of the system (2.11) from the initial state $x_0 \in R^n$ to the state $x_1 \in R^n$ satisfies the condition

$$y(t_1) = x_1 = \Phi(t_1, t_0) x_0 + \int_{t_0}^{t_1} \Phi(t_1, t) B_1(t) u(t)\, dt + \int_{t_0}^{t_1} \Phi(t_1, t) \mu(t)\, dt.$$

This implies

$$\int_{t_0}^{t_1} \Phi(t_1, t) B_1(t) u(t)\, dt = x_1 - \Phi(t_1, t_0) x_0 - \int_{t_0}^{t_1} \Phi(t_1, t) \mu(t)\, dt. \qquad (2.17)$$

Since $\Phi(t_1, t) = \Phi(t_1, t_0) \Phi(t_0, t)$ and $\Phi^{-1}(t_1, t_0) = \Phi(t_0, t_1)$, the expression (2.17) can be rewritten in the form

$$\int_{t_0}^{t_1} \Phi(t_0, t) B_1(t) u(t)\, dt = \Phi(t_0, t_1) x_1 - x_0 - \int_{t_0}^{t_1} \Phi(t_0, t) \mu(t)\, dt = a. \qquad (2.18)$$

Thus the unknown control function $u(\cdot) \in L_2(I, R^m)$ is a solution of the integral equation (2.18). The integral equation (2.18) can be represented as (see (2.10))

$$Ku = \int_{t_0}^{t_1} K(t_0, t) u(t)\, dt = a, \quad K(t_0, t) = \Phi(t_0, t) B_1(t), \quad t \in I.$$

It is known (see Chapter I, Lecture 1) that the integral equation (2.18) has a solution if and only if the $n \times n$ matrix

$$C(t_0, t_1) = \int_{t_0}^{t_1} K(t_0, t) K^*(t_0, t)\, dt = \int_{t_0}^{t_1} \Phi(t_0, t) B_1(t) B_1^*(t) \Phi^*(t_0, t)\, dt = W_1(t_0, t_1)$$

is positive definite. Consequently, the set $U \ne \emptyset$ if and only if $W_1(t_0, t_1) > 0$, where $\emptyset$ is the empty set. This means that the system (2.11)-(2.13) is controllable. It follows from theorem 2 (see Lecture 1) that the general solution of the integral equation (2.18) has the form

$$u(t) = K^*(t_0, t) C^{-1}(t_0, t_1) a + v(t) - K^*(t_0, t) C^{-1}(t_0, t_1) \int_{t_0}^{t_1} K(t_0, t) v(t)\, dt,$$

where $K(t_0, t) = \Phi(t_0, t) B_1(t)$, $C(t_0, t_1) = W_1(t_0, t_1)$. This implies

$$u(t) = B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1) a + v(t) - B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1) \int_{t_0}^{t_1} \Phi(t_0, t) B_1(t) v(t)\, dt, \quad t \in I, \qquad (2.19)$$

where $v(\cdot) \in L_2(I, R^m)$ is an arbitrary function. Note that the solution of the differential equation (2.15) has the form

$$z(t) = z(t, v) = \Phi(t, t_0) z(t_0) + \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) v(\tau)\, d\tau = \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) v(\tau)\, d\tau, \qquad (2.20)$$

where $z(t_0) = 0$. Consequently,

$$z(t_1) = z(t_1, v) = \int_{t_0}^{t_1} \Phi(t_1, t) B_1(t) v(t)\, dt = \Phi(t_1, t_0) \int_{t_0}^{t_1} \Phi(t_0, t) B_1(t) v(t)\, dt.$$

It follows from (2.19), (2.20) that the sought control is

$$u(t) = v(t) + \lambda_1(t, x_0, x_1) + N_1(t) z(t_1, v), \quad t \in I, \quad \forall v(\cdot) \in L_2(I, R^m). \qquad (2.21)$$
This implies the statement of the theorem that $u(t) \in U$. Thus the inclusion (2.14) is proved.

Let $u(t) \in U$. Then the solution of the differential equation (2.11) has the form

$$y(t) = \Phi(t, t_0) x_0 + \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) [ v(\tau) + \lambda_1(\tau, x_0, x_1) + N_1(\tau) z(t_1, v) ]\, d\tau + \int_{t_0}^{t} \Phi(t, \tau) \mu(\tau)\, d\tau$$

$$= \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) v(\tau)\, d\tau + \Phi(t, t_0) x_0 + \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) B_1^*(\tau) \Phi^*(t_0, \tau)\, d\tau\; W_1^{-1}(t_0, t_1) \Big[ \Phi(t_0, t_1) x_1 - x_0 - \int_{t_0}^{t_1} \Phi(t_0, t) \mu(t)\, dt \Big]$$

$$+ \int_{t_0}^{t} \Phi(t, \tau) \mu(\tau)\, d\tau - \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) B_1^*(\tau) \Phi^*(t_0, \tau)\, d\tau\; W_1^{-1}(t_0, t_1) \Phi(t_0, t_1) z(t_1, v)$$

$$= z(t, v) + \lambda_2(t, x_0, x_1) + N_2(t) z(t_1, v), \quad t \in I,$$

i.e. the solution of the system (2.11) can be represented in the form (2.16). The theorem is proved. It is easily verified that

$$y(t_0) = z(t_0, v) + \lambda_2(t_0, x_0, x_1) + N_2(t_0) z(t_1, v) = x_0, \quad y(t_1) = z(t_1, v) + \lambda_2(t_1, x_0, x_1) + N_2(t_1) z(t_1, v) = x_1.$$

Since the statement of the theorem is valid for any $x_0 \in R^n$, $x_1 \in R^n$, it holds in particular when $(x_0, x_1) \in S \subset R^{2n}$.

Lemma 1. Let the matrix $W_1(t_0, t_1)$ be positive definite. Then the boundary value problem (2.6), (2.7) is equivalent to the following problem:

$$v(t) + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1, v) = P y(t), \quad t \in I, \qquad (2.22)$$

$$\dot z = A_1(t) z + B_1(t) v(t), \quad z(t_0) = 0, \quad t \in I, \quad v(\cdot) \in L_2(I, R^m), \qquad (2.23)$$

$$(x_0, x_1) \in S, \qquad (2.24)$$

where

$$T_1(t) = -B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1), \quad T_2(t) = B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1) \Phi(t_0, t_1),$$

$$\bar\mu(t) = -B_1^*(t) \Phi^*(t_0, t) W_1^{-1}(t_0, t_1) \int_{t_0}^{t_1} \Phi(t_0, t) \mu(t)\, dt,$$

so that $\lambda_1(t, x_0, x_1) = T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t)$, and

$$y(t) = z(t, v) + C_1(t) x_0 + C_2(t) x_1 + \bar f(t) + N_2(t) z(t_1, v), \qquad (2.25)$$

$$C_1(t) = \Phi(t, t_0) W_1(t, t_1) W_1^{-1}(t_0, t_1), \quad C_2(t) = \Phi(t, t_0) W_1(t_0, t) W_1^{-1}(t_0, t_1) \Phi(t_0, t_1),$$

$$\bar f(t) = \int_{t_0}^{t} \Phi(t, \tau) \mu(\tau)\, d\tau - \Phi(t, t_0) W_1(t_0, t) W_1^{-1}(t_0, t_1) \int_{t_0}^{t_1} \Phi(t_0, t) \mu(t)\, dt.$$

A proof of the lemma follows from theorem 1 and the equality $y(t) = x(t)$, $t \in I$, for $u(t) \in U$, $u(t) = P y(t)$, $t \in I$. It is easily shown that the relations (2.22)-(2.24) are equivalent to (2.6), (2.7) under the condition $W_1(t_0, t_1) > 0$.

Consider the optimal control problem

$$I(v, x_0, x_1) = \int_{t_0}^{t_1} | v(t) + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1, v) - P y(t) |^2\, dt \to \inf \qquad (2.26)$$
under the conditions

$$\dot z = A_1(t) z + B_1(t) v(t), \quad z(t_0) = 0, \quad t \in I, \qquad (2.27)$$

$$v(\cdot) \in L_2(I, R^m), \quad (x_0, x_1) \in S, \qquad (2.28)$$

where $y(t) = y(t, v, x_0, x_1)$, $t \in I$, is defined by the formula (2.25). Note that:

1. The value of the functional $I(v, x_0, x_1) \ge 0$. Consequently, the functional $I(v, x_0, x_1)$ is bounded below on the set $X = L_2(I, R^m) \times S$, where $(v, x_0, x_1) \in X \subset H$, $H = L_2(I, R^m) \times R^{2n}$.

2. If $I(v_*, x_{0*}, x_{1*}) = 0$, where $(v_*, x_{0*}, x_{1*}) \in X$ is a solution to the optimization problem (2.26)-(2.28), then

$$x_*(t) = y_*(t, v_*, x_{0*}, x_{1*}) = z(t, v_*) + C_1(t) x_{0*} + C_2(t) x_{1*} + \bar f(t) + N_2(t) z(t_1, v_*), \quad t \in I,$$

is a solution of the boundary value problem (2.6), (2.7).

Theorem 2. Let the matrix $W_1(t_0, t_1)$ be positive definite. A necessary and sufficient condition for the boundary value problem (2.6), (2.7) to have a solution is that $I(v_*, x_{0*}, x_{1*}) = 0$, where $(v_*, x_{0*}, x_{1*}) \in X$ is a solution to the optimization problem (2.26)-(2.28).

Proof. Necessity. Let the boundary value problem (2.6), (2.7) have a solution. We show that $I(v_*, x_{0*}, x_{1*}) = 0$. Let $x(t; t_0, x_{0*}, x_{1*})$, $t \in I$, $(x_{0*}, x_{1*}) \in S$, be a solution of the differential equation (2.6). As follows from lemma 1, the boundary value problem (2.6), (2.7) is equivalent to the problem (2.22)-(2.24). Hence

$$v_*(t) + T_1(t) x_{0*} + T_2(t) x_{1*} + \bar\mu(t) + N_1(t) z(t_1, v_*) = P y_*(t), \quad t \in I, \qquad (2.29)$$

$$\dot z(t, v_*) = A_1(t) z(t, v_*) + B_1(t) v_*(t), \quad z(t_0) = 0, \quad t \in I, \quad v_*(\cdot) \in L_2(I, R^m), \qquad (2.30)$$

$$y_*(t) = z(t, v_*) + C_1(t) x_{0*} + C_2(t) x_{1*} + \bar f(t) + N_2(t) z(t_1, v_*), \quad t \in I,$$

where $(x_{0*}, x_{1*}) \in S$, $u_*(t) \in U$, $u_*(t) = P y_*(t)$, $y_*(t) = x(t; t_0, x_{0*}, x_{1*})$, $t \in I$. Then

$$I(v_*, x_{0*}, x_{1*}) = \int_{t_0}^{t_1} | v_*(t) + T_1(t) x_{0*} + T_2(t) x_{1*} + \bar\mu(t) + N_1(t) z(t_1, v_*) - P y_*(t) |^2\, dt = 0$$

by the identities (2.29), (2.30). Necessity is proved.

Sufficiency. Let $I(v_*, x_{0*}, x_{1*}) = 0$. We show that the boundary value problem (2.6), (2.7) has a solution. In fact, $I(v_*, x_{0*}, x_{1*}) = 0$ if and only if

$$v_*(t) + \lambda_1(t, x_{0*}, x_{1*}) + N_1(t) z(t_1, v_*) = P y(t, v_*, x_{0*}, x_{1*}),$$

where $y(t, v_*, x_{0*}, x_{1*}) = z(t, v_*) + \lambda_2(t, x_{0*}, x_{1*}) + N_2(t) z(t_1, v_*)$, $t \in I$. The function $y(t, v_*, x_{0*}, x_{1*})$, $t \in I$, is a solution of the differential equation (2.11) under the conditions (2.12), (2.13). Consequently,

$$\dot y(t, v_*, x_{0*}, x_{1*}) = A_1(t) y(t, v_*, x_{0*}, x_{1*}) + B_1(t) u_*(t) + \mu(t) = A_1(t) y(t, v_*, x_{0*}, x_{1*}) + B_1(t) P y(t, v_*, x_{0*}, x_{1*}) + \mu(t),$$

where $u_*(t) = v_*(t) + \lambda_1(t, x_{0*}, x_{1*}) + N_1(t) z(t_1, v_*)$, $u_*(t) \in U$, $y(t_0) = x_{0*}$, $y(t_1) = x_{1*}$, $(x_{0*}, x_{1*}) \in S$. This implies that $y(t, v_*, x_{0*}, x_{1*}) = x(t; t_0, x_{0*}, x_{1*})$, $t \in I$, is a solution of the boundary value problem (2.6), (2.7). Sufficiency is proved. The theorem is proved.
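To illustrate Theorem 1, the control $u(t) = \lambda_1(t, x_0, x_1)$ (the choice $v \equiv 0$, so that $z(t, v) \equiv 0$) can be built numerically and the resulting trajectory integrated to check that it reaches $x_1$. A sketch under assumed double-integrator data (not an example from the book): $A_1 = \big(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\big)$, $B_1 = (0, 1)^*$, $\mu \equiv 0$, $t_0 = 0$, $t_1 = 1$, $x_0 = (0, 0)$, $x_1 = (1, 0)$:

```python
# Theorem 1 with v = 0, mu = 0 on the assumed double-integrator data:
# u(t) = B1* Phi*(0,t) W1^{-1} a, a = Phi(0,1) x1 - x0,
# Phi(0,t) = [[1,-t],[0,1]] (A1 is nilpotent, so this is exact).

def phi_B(t):                       # (Phi(0,t) B1) as a vector
    return [-t, 1.0]

# Gramian W1(0,1) by the midpoint rule
N = 2000
h = 1.0 / N
W1 = [[0.0, 0.0], [0.0, 0.0]]
for i in range(N):
    g = phi_B((i + 0.5) * h)
    for r in range(2):
        for c in range(2):
            W1[r][c] += g[r] * g[c] * h

det = W1[0][0] * W1[1][1] - W1[0][1] * W1[1][0]
Winv = [[ W1[1][1] / det, -W1[0][1] / det],
        [-W1[1][0] / det,  W1[0][0] / det]]

x0, x1 = [0.0, 0.0], [1.0, 0.0]
a = [x1[0] - x1[1] - x0[0], x1[1] - x0[1]]   # Phi(0,1) x1 - x0
Wa = [Winv[0][0] * a[0] + Winv[0][1] * a[1],
      Winv[1][0] * a[0] + Winv[1][1] * a[1]]

def u(t):                           # u(t) = (Phi(0,t) B1)* W1^{-1} a
    g = phi_B(t)
    return g[0] * Wa[0] + g[1] * Wa[1]

def f(t, y):                        # y' = A1 y + B1 u(t)
    return [y[1], u(t)]

y, steps = list(x0), 1000           # RK4 integration from y(0) = x0
dt = 1.0 / steps
for i in range(steps):
    t = i * dt
    k1 = f(t, y)
    k2 = f(t + dt/2, [y[j] + dt/2 * k1[j] for j in range(2)])
    k3 = f(t + dt/2, [y[j] + dt/2 * k2[j] for j in range(2)])
    k4 = f(t + dt,   [y[j] + dt * k3[j] for j in range(2)])
    y = [y[j] + dt/6 * (k1[j] + 2*k2[j] + 2*k3[j] + k4[j]) for j in range(2)]

assert abs(y[0] - x1[0]) < 1e-3 and abs(y[1] - x1[1]) < 1e-3
```

Any other $v \ne 0$ produces another control from the set $U$ of (2.14) steering between the same endpoints, which is exactly the freedom exploited by the optimization problem (2.26)-(2.28).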
Lecture 8. Construction of a solution to two-point boundary value problem

As follows from theorem 2, if the value $I(v_*, x_{0*}, x_{1*}) > 0$, then the boundary value problem (2.6), (2.7) has no solution. Thus, to construct a solution of the boundary value problem (2.6), (2.7) it is necessary to solve the optimization problem (2.26)-(2.28).

Lemma 2. Let the matrix $W_1(t_0, t_1)$ be positive definite, and let

$$F_0(q, t) = | v + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1) - P y(t, v, x_0, x_1) |^2, \qquad (2.31)$$

where $y(t, v, x_0, x_1) = z + C_1(t) x_0 + C_2(t) x_1 + \bar f(t) + N_2(t) z(t_1)$ and $q = (v, x_0, x_1, z, z(t_1)) \in R^m \times R^n \times R^n \times R^n \times R^n$. Then the partial derivatives are

$$\frac{\partial F_0(q, t)}{\partial v} = 2 [ v + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1) - P y ], \qquad (2.32)$$

$$\frac{\partial F_0(q, t)}{\partial x_0} = 2 [ T_1^*(t) - C_1^*(t) P^* ] [ v + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1) - P y ], \qquad (2.33)$$

$$\frac{\partial F_0(q, t)}{\partial x_1} = 2 [ T_2^*(t) - C_2^*(t) P^* ] [ v + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1) - P y ], \qquad (2.34)$$

$$\frac{\partial F_0(q, t)}{\partial z} = -2 P^* [ v + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1) - P y ], \qquad (2.35)$$

$$\frac{\partial F_0(q, t)}{\partial z(t_1)} = 2 [ N_1^*(t) - N_2^*(t) P^* ] [ v + T_1(t) x_0 + T_2(t) x_1 + \bar\mu(t) + N_1(t) z(t_1) - P y ]. \qquad (2.36)$$

The formulas (2.32)-(2.36) are obtained by directly differentiating the function $F_0(q, t)$ with respect to $q$.

Lemma 3. Let the matrix $W_1(t_0, t_1)$ be positive definite and the set $S$ be convex. Then:

1) the functional (2.26) under the conditions (2.27), (2.28) is convex;

2) the derivative

$$\frac{\partial F_0(q, t)}{\partial q} = \Big( \frac{\partial F_0(q, t)}{\partial v}, \frac{\partial F_0(q, t)}{\partial x_0}, \frac{\partial F_0(q, t)}{\partial x_1}, \frac{\partial F_0(q, t)}{\partial z}, \frac{\partial F_0(q, t)}{\partial z(t_1)} \Big)$$

satisfies the Lipschitz condition

$$\Big| \frac{\partial F_0(q + \Delta q, t)}{\partial q} - \frac{\partial F_0(q, t)}{\partial q} \Big| \le L |\Delta q|, \quad \forall q,\ q + \Delta q \in R^{m + 4n},$$

where $L = \mathrm{const} > 0$.

Proof. It is easily shown that

$$F_0(q, t) = q^* E^*(t) E(t) q + 2 q^* E^*(t) [ \bar\mu(t) - P \bar f(t) ] + [ \bar\mu(t) - P \bar f(t) ]^* [ \bar\mu(t) - P \bar f(t) ], \quad t \in I,$$

where $E(t) = ( I_m,\ T_1(t) - P C_1(t),\ T_2(t) - P C_2(t),\ -P,\ N_1(t) - P N_2(t) )$ is a matrix of order $m \times (m + 4n)$, so that the residual in (2.31) equals $E(t) q + \bar\mu(t) - P \bar f(t)$. Then

$$\frac{\partial^2 F_0(q, t)}{\partial q^2} = 2 E^*(t) E(t) \ge 0 \quad \text{for any } t \in I.$$

Consequently, the function $F_0(q, t)$ is convex with respect to $q$, i.e.
$$F_0(\alpha q^1 + (1 - \alpha) q^2, t) \le \alpha F_0(q^1, t) + (1 - \alpha) F_0(q^2, t), \quad \forall q^1, q^2 \in R^{m + 4n}, \quad \forall \alpha \in [0, 1]. \qquad (2.37)$$

For any $v_1(\cdot), v_2(\cdot) \in L_2(I, R^m)$ and for all $\alpha \in [0, 1]$, the solution of the differential equation (2.27) corresponding to $v_\alpha(t) = \alpha v_1(t) + (1 - \alpha) v_2(t)$, $t \in I$, can be represented in the form

$$z(t, v_\alpha) = \alpha z(t, v_1) + (1 - \alpha) z(t, v_2), \quad t \in I. \qquad (2.38)$$

Indeed,

$$z(t, v_\alpha) = \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) v_\alpha(\tau)\, d\tau = \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) [ \alpha v_1(\tau) + (1 - \alpha) v_2(\tau) ]\, d\tau = \alpha \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) v_1(\tau)\, d\tau + (1 - \alpha) \int_{t_0}^{t} \Phi(t, \tau) B_1(\tau) v_2(\tau)\, d\tau = \alpha z(t, v_1) + (1 - \alpha) z(t, v_2), \quad t \in I.$$

Let $\xi_1 = (v_1(t), x_0^1, x_1^1) \in X$, $\xi_2 = (v_2(t), x_0^2, x_1^2) \in X$. Then the point

$$\xi_\alpha = \alpha \xi_1 + (1 - \alpha) \xi_2 = ( \alpha v_1 + (1 - \alpha) v_2,\ \alpha x_0^1 + (1 - \alpha) x_0^2,\ \alpha x_1^1 + (1 - \alpha) x_1^2 ) \in X.$$

The value of the functional is

$$I(\xi_\alpha) = \int_{t_0}^{t_1} F_0\big( \alpha v_1 + (1 - \alpha) v_2,\ \alpha x_0^1 + (1 - \alpha) x_0^2,\ \alpha x_1^1 + (1 - \alpha) x_1^2,\ z(t, \alpha v_1 + (1 - \alpha) v_2),\ z(t_1, \alpha v_1 + (1 - \alpha) v_2),\ t \big)\, dt$$

$$= \int_{t_0}^{t_1} F_0( \alpha q^1(t) + (1 - \alpha) q^2(t), t )\, dt \le \alpha \int_{t_0}^{t_1} F_0(q^1(t), t)\, dt + (1 - \alpha) \int_{t_0}^{t_1} F_0(q^2(t), t)\, dt = \alpha I(\xi_1) + (1 - \alpha) I(\xi_2), \quad \forall \xi_1, \xi_2 \in X,$$

by (2.37), (2.38), where $q^1(t) = (v_1(t), x_0^1, x_1^1, z(t, v_1), z(t_1, v_1))$, $q^2(t) = (v_2(t), x_0^2, x_1^2, z(t, v_2), z(t_1, v_2))$. This implies the first statement of the lemma.

Since the derivative

$$\frac{\partial F_0(q, t)}{\partial q} = 2 E^*(t) E(t) q + 2 E^*(t) [ \bar\mu(t) - P \bar f(t) ],$$

the difference

$$\frac{\partial F_0(q + \Delta q, t)}{\partial q} - \frac{\partial F_0(q, t)}{\partial q} = 2 E^*(t) E(t) \Delta q,$$

where $\Delta q = (\Delta v, \Delta x_0, \Delta x_1, \Delta z, \Delta z(t_1))$ and $E^*(t) E(t)$ is an $(m + 4n) \times (m + 4n)$ matrix with piecewise continuous elements. Then

$$\Big| \frac{\partial F_0(q + \Delta q, t)}{\partial q} - \frac{\partial F_0(q, t)}{\partial q} \Big| \le L |\Delta q|, \quad L = 2 \sup_{t_0 \le t \le t_1} | E^*(t) E(t) | > 0.$$

The lemma is proved.
Theorem 3. Let the matrix $W_1(t_0, t_1)$ be positive definite. Then the functional (2.26) under the conditions (2.27), (2.28) is continuously Frechet differentiable, and its gradient

$$I'(v, x_0, x_1) = ( I'_v(v, x_0, x_1), I'_{x_0}(v, x_0, x_1), I'_{x_1}(v, x_0, x_1) ) \in H$$

at any point $(v, x_0, x_1) \in X$ is calculated by

$$I'_v(v, x_0, x_1) = \frac{\partial F_0(q(t), t)}{\partial v} + B_1^*(t) \psi(t) \in L_2(I, R^m),$$

$$I'_{x_0}(v, x_0, x_1) = \int_{t_0}^{t_1} \frac{\partial F_0(q(t), t)}{\partial x_0}\, dt \in R^n, \quad I'_{x_1}(v, x_0, x_1) = \int_{t_0}^{t_1} \frac{\partial F_0(q(t), t)}{\partial x_1}\, dt \in R^n, \qquad (2.39)$$

where the partial derivatives are defined by the formulas (2.32)-(2.36), $q(t) = (v(t), x_0, x_1, z(t), z(t_1, v))$, the function $z(t)$, $t \in I$, is a solution of the differential equation (2.27) for $v = v(t)$, and the function $\psi(t)$, $t \in I$, is a solution of the adjoint system

$$\dot\psi = -\frac{\partial F_0(q(t), t)}{\partial z} - A_1^*(t) \psi, \quad \psi(t_1) = \int_{t_0}^{t_1} \frac{\partial F_0(q(t), t)}{\partial z(t_1)}\, dt. \qquad (2.40)$$

Moreover, the gradient $I'(\xi)$, $\xi = (v, x_0, x_1) \in X$, satisfies the Lipschitz condition

$$\| I'(\xi_1) - I'(\xi_2) \| \le K \| \xi_1 - \xi_2 \|, \quad \forall \xi_1, \xi_2 \in X, \qquad (2.41)$$
where K const ! 0 is the Lipschitz constant. Proof. Let [ (t ) (v, x0 , x1 ) X , [ (t ) '[ (t ) v(t ) h(t ) , ( x0 'x0 , x1 'x1 ) X . Then the functional increment 'I
t1
³ >F (q(t ) 'q(t ), t ) F (q(t ), t )@dt ,
I ([ '[ ) I ([ )
0
(2.42)
0
t0
where q(t ) 'q(t ) ([ (t ) '[ (t ), z(t ) 'z(t ), z(t1 ) 'z(t1 )) , t1
t1
t0
t0
'z (t ) d ³ Ф(t ,W ) B1 (W ) h(W ) d (W ) d C1 ³ h(t ) dt d C 2 h C1
sup Ф(t ,W ) B1 (W ) , t 0 d t1 , W d t1 , C2
L2
, t I ,
(2.43)
C1 t1 t 0 .
Since the function F0 (q, t ) has continuous derivatives with respect to q , (see (2.42)) the increment 'I
t1
³ [h (t ) F *
0v
( q(t ), t ) 'x0* F0 x0 ( q(t ), t ) 'x1* F0 x1 ( q(t ), t )
(2.44)
t0
5
'z * (t ) F0 z ( q(t ), t ) 'z * (t1 ) F0 z ( t1 ) ( q(t ), t )]dt ¦ Ri i 1
where t1
t1
³ 'x >F
* ³ h (t )>F0v (q(t ) 'q(t ), t ) F0v (q(t ), t )@dt , R2
R1
t0
t1
R3
>
@
* ³ 'x1 F0 x1 (q 'q, t ) F0 x1 (q, t ) dt , R4
t0
t1
R5
³ 'z
*
>
* 0
0 x0
@
(q 'q, t ) F0 x0 (q, t ) dt ,
t0
t1
³ 'z (t )>F *
0z
(q 'q, t ) F0 z (q, t )@dt ,
t0
@
(t1 ) F0 z (t1 ) (q 'q, t ) F0 z (t1 ) (q, t ) dt .
t0
48
From (2.44) taking into account (2.27), (2.40) t1
* ³ 'z (t1 )F0 z (t1 ) (q(t ), t )dt
'z * (t1 )[ \ (t1 )]
t0
t1
t1
³ 'z* (t )\ (t )dt
t1
³ ['z * (t )\ (t )]dt
t0
* ³ 'z (t )[ F0 z (q(t ), t ) A1 (t )\ (t )]dt *
t0
d ['z * (t )\ (t )]dt dt t0
>
@
³ 'z * (t ) A1* (t ) h* (t ) B1 (t )\ (t )dt
t0
t1
t1
³
t0
t1
t1
t0
t0
³ h * (t ) B1* (t )\ (t )dt ³ 'z * (t ) F0 z ( q(t ), t )dt ,
we get t1
5
t0
i 1
³ {h * (t )[ F0v (q(t ), t ) B1* (t )\ (t )] 'x0* F0 x0 (q(t ), t ) 'x1* F0 x1 (q(t ), t )}dt ¦ R i .
'I
As it follows from Lemma 3, the partial derivatives $F_{0v} = \dfrac{\partial F_0}{\partial v}$, $F_{0x_0} = \dfrac{\partial F_0}{\partial x_0}$, $F_{0x_1} = \dfrac{\partial F_0}{\partial x_1}$, $F_{0z} = \dfrac{\partial F_0}{\partial z}$, $F_{0z(t_1)} = \dfrac{\partial F_0}{\partial z(t_1)}$ satisfy the Lipschitz condition; then (see (2.43))

$$|F_{0v}(q+\Delta q,t) - F_{0v}(q,t)| \le L_1|\Delta q|, \quad |F_{0x_0}(q+\Delta q,t) - F_{0x_0}(q,t)| \le L_2|\Delta q|,$$
$$|F_{0x_1}(q+\Delta q,t) - F_{0x_1}(q,t)| \le L_3|\Delta q|, \quad |F_{0z}(q+\Delta q,t) - F_{0z}(q,t)| \le L_4|\Delta q|,$$
$$|F_{0z(t_1)}(q+\Delta q,t) - F_{0z(t_1)}(q,t)| \le L_5|\Delta q|,$$

where $|\Delta q(t)| \le |h(t)| + |\Delta x_0| + |\Delta x_1| + |\Delta z(t)| + |\Delta z(t_1)|$, $|\Delta z(t)| \le C_2\|h\|_{L_2}$, $|\Delta z(t_1)| \le C_2\|h\|_{L_2}$. Then

$$|R_1| \le L_1 C_3\|\Delta\xi\|^2, \quad |R_2| \le L_2 C_4\|\Delta\xi\|^2, \quad |R_3| \le L_3 C_5\|\Delta\xi\|^2, \quad |R_4| \le L_4 C_6\|\Delta\xi\|^2, \quad |R_5| \le L_5 C_7\|\Delta\xi\|^2,$$

where $\|\Delta\xi\| = \big(\|h\|^2 + |\Delta x_0|^2 + |\Delta x_1|^2\big)^{1/2}$. Therefore

$$|R| = \Big|\sum_{i=1}^{5} R_i\Big| \le \sum_{i=1}^{5}|R_i| \le C_8\|\Delta\xi\|^2.$$
It follows from (2.45) that the gradient $I'(\xi) = I'(v, x_0, x_1)$ is defined by the formula (2.39). Let $\xi_1 = \xi + \Delta\xi$, $\xi_2 = \xi$. Then (2.39) yields

$$I'_v(\xi_1) - I'_v(\xi_2) = F_{0v}(q(t)+\Delta q(t),t) - F_{0v}(q(t),t) - B_1^*(t)\Delta\psi(t), \quad t \in I,$$
$$I'_{x_0}(\xi_1) - I'_{x_0}(\xi_2) = \int_{t_0}^{t_1}\big[F_{0x_0}(q(t)+\Delta q(t),t) - F_{0x_0}(q(t),t)\big]\,dt, \quad I'_{x_1}(\xi_1) - I'_{x_1}(\xi_2) = \int_{t_0}^{t_1}\big[F_{0x_1}(q(t)+\Delta q(t),t) - F_{0x_1}(q(t),t)\big]\,dt.$$

Consequently,

$$\|I'(\xi_1) - I'(\xi_2)\|^2 = \int_{t_0}^{t_1} |I'(\xi_1) - I'(\xi_2)|^2\,dt \le L_9\|\Delta q\|^2 + L_{10}\int_{t_0}^{t_1} |\Delta\psi(t)|^2\,dt. \quad (2.46)$$

Since

$$\Delta\dot\psi(t) = \big[F_{0z}(q(t)+\Delta q(t),t) - F_{0z}(q(t),t)\big] - A_1^*(t)\Delta\psi(t), \quad \Delta\psi(t_1) = -\int_{t_0}^{t_1}\big[F_{0z(t_1)}(q(t)+\Delta q(t),t) - F_{0z(t_1)}(q(t),t)\big]\,dt,$$

the function

$$\Delta\psi(t) = \Delta\psi(t_1) - \int_{t}^{t_1}\Big\{\big[F_{0z}(q(\tau)+\Delta q(\tau),\tau) - F_{0z}(q(\tau),\tau)\big] - A_1^*(\tau)\Delta\psi(\tau)\Big\}\,d\tau. \quad (2.47)$$

The formula (2.47) yields

$$|\Delta\psi(t)| \le |\Delta\psi(t_1)| + \int_{t_0}^{t_1}\big|F_{0z}(q(\tau)+\Delta q(\tau),\tau) - F_{0z}(q(\tau),\tau)\big|\,d\tau + \|A_1^*\|_{\max}\int_{t}^{t_1}|\Delta\psi(\tau)|\,d\tau$$
$$\le \big(L_5\sqrt{t_1 - t_0}\,C_9 + L_4\sqrt{t_1 - t_0}\,C_9\big)\|\Delta\xi\| + \|A_1^*\|_{\max}\int_{t}^{t_1}|\Delta\psi(\tau)|\,d\tau, \quad \|\Delta q\| \le C_9\|\Delta\xi\|.$$

Hence, taking into account Gronwall's lemma, we get

$$|\Delta\psi(t)| \le \sqrt{t_1 - t_0}\,C_9(L_4 + L_5)\,e^{\|A_1^*\|_{\max}(t_1 - t)}\|\Delta\xi\|, \quad t \in I. \quad (2.48)$$

The estimate $\|I'(\xi_1) - I'(\xi_2)\| \le K\|\xi_1 - \xi_2\|$, $\forall \xi_1, \xi_2 \in X$, follows from (2.46), (2.48). The theorem is proved.

On the basis of the formulas (2.39)–(2.41) we construct the sequences

$$v_{n+1} = v_n - \alpha_n I'_v(\xi_n), \quad x_0^{n+1} = P_S[x_0^n - \alpha_n I'_{x_0}(\xi_n)], \quad x_1^{n+1} = P_S[x_1^n - \alpha_n I'_{x_1}(\xi_n)], \quad n = 0, 1, 2, \ldots, \quad (2.49)$$

where $\alpha_n > 0$, $n = 0, 1, 2, \ldots$, is chosen from the condition $0 < \varepsilon_0 \le \alpha_n \le \dfrac{2}{K + 2\varepsilon_1}$, $\varepsilon_1 > 0$, $K > 0$ is the Lipschitz constant from (2.41), $\xi_n = (v_n, x_0^n, x_1^n) \in X$, and $P_S[\cdot]$ is the
projection of a point onto the set $S$.

Theorem 4. Let the matrix $W_1(t_0,t_1)$ be positive definite, the set $S$ be convex and closed, and the sequence $\{\xi_n\} = \{(v_n, x_0^n, x_1^n)\} \subset X$ be defined by (2.49). Then:
1) the numerical sequence $\{I(\xi_n)\}$ decreases strictly;
2) $\|v_n - v_{n+1}\| \to 0$, $|x_0^n - x_0^{n+1}| \to 0$, $|x_1^n - x_1^{n+1}| \to 0$ as $n \to \infty$.
If, in addition, the set $M(\xi_0) = \{\xi \in X \,/\, I(\xi) \le I(\xi_0)\}$ is bounded, then:
3) the sequence $\{\xi_n\} \subset X$ is minimizing, i.e. $\lim_{n\to\infty} I(\xi_n) = I_* = \inf_{\xi\in X} I(\xi)$;
4) the set $X_* = \{\xi_* \in X \,/\, I(\xi_*) = I_* = \inf_{\xi\in X} I(\xi)\} \ne \emptyset$, where $\emptyset$ denotes the empty set;
5) the sequence $\{\xi_n\} \subset X$ weakly converges to the set $X_*$;
6) the following estimate of the convergence rate holds: $0 \le I(\xi_n) - I_* \le \dfrac{m_0}{n}$, $n = 1, 2, \ldots$, $m_0 = \mathrm{const} > 0$;
7) the boundary value problem (2.6), (2.7) has a solution if and only if $I(\xi_*) = 0$.

Proof. It follows from (2.49) and the properties of the projection that

$$\langle v_{n+1} - v_n + \alpha_n I'_v(\xi_n),\ v - v_{n+1}\rangle_{L_2} \ge 0, \quad \forall v,\ v \in L_2(I, R^m);$$
$$\langle x_0^{n+1} - x_0^n + \alpha_n I'_{x_0}(\xi_n),\ x_0 - x_0^{n+1}\rangle_{R^n} \ge 0, \quad \forall x_0,\ x_0 \in S; \quad (2.50)$$
$$\langle x_1^{n+1} - x_1^n + \alpha_n I'_{x_1}(\xi_n),\ x_1 - x_1^{n+1}\rangle_{R^n} \ge 0, \quad \forall x_1,\ x_1 \in S.$$

From (2.50) we have

$$\langle I'(\xi_n),\ \xi - \xi_{n+1}\rangle \ge \frac{1}{\alpha_n}\langle \xi_n - \xi_{n+1},\ \xi - \xi_{n+1}\rangle, \quad \forall \xi,\ \xi \in X, \quad (2.51)$$

where $I'(\xi_n) = (I'_v(\xi_n), I'_{x_0}(\xi_n), I'_{x_1}(\xi_n))$, $\xi = (v, x_0, x_1)$, $\xi_n = (v_n, x_0^n, x_1^n)$, $\xi_{n+1} = (v_{n+1}, x_0^{n+1}, x_1^{n+1})$.

As it follows from Lemma 3 and Theorem 5, the functional $I(\xi) \in C^{1,1}(X)$ is convex. Consequently, the inequality

$$I(\xi_1) - I(\xi_2) \ge \langle I'(\xi_1),\ \xi_1 - \xi_2\rangle - \frac{K}{2}\|\xi_1 - \xi_2\|^2, \quad \forall \xi_1, \xi_2 \in X$$

holds. Whence at $\xi_1 = \xi_n$, $\xi_2 = \xi_{n+1}$ we have

$$I(\xi_n) - I(\xi_{n+1}) \ge \langle I'(\xi_n),\ \xi_n - \xi_{n+1}\rangle - \frac{K}{2}\|\xi_n - \xi_{n+1}\|^2, \quad \xi_n, \xi_{n+1} \in X. \quad (2.52)$$

From the expressions (2.51) at $\xi = \xi_n$ and (2.52), taking into account the inequality $0 < \varepsilon_0 \le \alpha_n \le \dfrac{2}{K + 2\varepsilon_1}$, $\varepsilon_1 > 0$, we get

$$I(\xi_n) - I(\xi_{n+1}) \ge \Big(\frac{1}{\alpha_n} - \frac{K}{2}\Big)\|\xi_n - \xi_{n+1}\|^2 \ge \varepsilon_1\|\xi_n - \xi_{n+1}\|^2, \quad n = 0, 1, 2, \ldots. \quad (2.53)$$

It follows from (2.53) that the numerical sequence $\{I(\xi_n)\}$ decreases strictly. Since the functional $I(\xi)$, $\xi \in X$, is bounded below, the limit $\lim_{n\to\infty} I(\xi_n)$ exists, and $\lim_{n\to\infty}[I(\xi_n) - I(\xi_{n+1})] = 0$. Letting $n \to \infty$ in (2.53), we get $\|\xi_n - \xi_{n+1}\| \to 0$.

By the assumption of the theorem, the set $M(\xi_0) = \{\xi \in X \,/\, I(\xi) \le I(\xi_0)\}$ is bounded, where $\xi_0 = (v_0(t), x_0^0, x_1^0)$ is the initial guess of the sequence $\{\xi_n\}$. Since the functional $I(\xi) \in C^{1,1}(X)$ is convex, the set $M(\xi_0)$ is convex and closed. Consequently, $M(\xi_0)$ is a weakly bicompact set in $H$. The convex functional $I(\xi) \in C^{1,1}(X)$ is weakly lower semicontinuous; hence the functional $I(\xi)$ attains its lower bound on the set $M(\xi_0)$. Since $X_* \subset M(\xi_0)$, the set $X_* \ne \emptyset$. Since the value of the functional decreases strictly along the sequence $\{\xi_n\}$, we have $\{\xi_n\} \subset M(\xi_0)$.

Let us show that the sequence $\{\xi_n\} \subset M(\xi_0)$ is minimizing. It follows from the convexity of the functional $I(\xi) \in C^{1,1}(M(\xi_0))$ that

$$I(\xi_n) - I(\xi_*) \le \langle I'(\xi_n),\ \xi_n - \xi_*\rangle = \langle I'(\xi_n),\ \xi_n - \xi_{n+1}\rangle + \langle I'(\xi_n),\ \xi_{n+1} - \xi_*\rangle.$$

Hence, taking into account the inequality (2.51) at $\xi = \xi_*$, we obtain

$$I(\xi_n) - I(\xi_*) \le \Big[\|I'(\xi_n)\| + \frac{1}{\alpha_n}\|\xi_* - \xi_{n+1}\|\Big]\|\xi_n - \xi_{n+1}\| \le \Big[\sup_{M(\xi_0)}\|I'(\xi_n)\| + \frac{D}{\varepsilon_0}\Big]\|\xi_n - \xi_{n+1}\| = C_0\|\xi_n - \xi_{n+1}\|, \quad (2.54)$$

where $C_0 = \sup_{M(\xi_0)}\|I'(\xi_n)\| + \dfrac{D}{\varepsilon_0}$, $D = \sup_{\xi,\xi' \in M(\xi_0)}\|\xi - \xi'\|$, $\dfrac{1}{\alpha_n} \le \dfrac{1}{\varepsilon_0}$, $\varepsilon_0 \le \alpha_n$.

The inequality (2.54) implies that $\lim_{n\to\infty}[I(\xi_n) - I(\xi_*)] \le C_0\lim_{n\to\infty}\|\xi_n - \xi_{n+1}\| = 0$. Then $\lim_{n\to\infty} I(\xi_n) = I(\xi_*)$. This means that the sequence $\{\xi_n\} \subset M(\xi_0)$ is minimizing. Since $\{\xi_n\} \subset M(\xi_0)$ and the set $M(\xi_0)$ is weakly bicompact, all weak limit points of the minimizing sequence $\{\xi_n\}$ belong to the set $X_* \subset M(\xi_0)$.

Denote $a_n = I(\xi_n) - I(\xi_*)$. As it follows from (2.53), (2.54),

$$a_n - a_{n+1} \ge \varepsilon_1\|\xi_n - \xi_{n+1}\|^2, \quad a_n \le C_0\|\xi_n - \xi_{n+1}\|.$$

Consequently,

$$a_n - a_{n+1} \ge \frac{\varepsilon_1}{C_0^2}a_n^2, \quad a_n > 0, \quad n = 1, 2, \ldots.$$

This implies the estimate

$$0 \le I(\xi_n) - I(\xi_*) \le \frac{m_0}{n}, \quad m_0 = \frac{C_0^2}{\varepsilon_1} > 0.$$
The last statement of the theorem follows from Theorem 2. The theorem is proved.

Example 1. The equation of motion has the form

$$\ddot x + x = \cos t, \quad t \in I = [0, \pi], \quad x(0) = 0, \quad x(\pi) = 0. \quad (2.55)$$

Let us denote $x = x_1$, $\dot x_1 = x_2$, and rewrite the equation (2.55) in the vector form

$$\dot x = A_1 x + Bx + \mu(t), \quad x(0) \in S_0, \quad x(\pi) \in S_1, \quad (2.56)$$

where

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}, \quad \mu(t) = \begin{pmatrix} 0 \\ \cos t \end{pmatrix},$$
$$x(0) \in S_0 = \{(x_1(0), x_2(0)) \in R^2 \,/\, x_1(0) = 0,\ x_2(0) = x_{20} \in R^1\},$$
$$x(\pi) \in S_1 = \{(x_1(\pi), x_2(\pi)) \in R^2 \,/\, x_1(\pi) = 0,\ x_2(\pi) = x_{21} \in R^1\}.$$

The matrix $B$ is represented in the form $B = B_1 P$, where

$$B_1 = \begin{pmatrix} 0 \\ -1 \end{pmatrix}, \quad P = (1, 0).$$

Now the equation (2.56) is rewritten as

$$\dot x = A_1 x + B_1 P x + \mu(t), \quad t \in I, \quad x(0) \in S_0, \quad x(\pi) \in S_1. \quad (2.57)$$

Let $\theta(t)$ be a fundamental matrix of solutions of the linear homogeneous system $\dot\xi = A_1\xi$, where $\dot\theta(t) = A_1\theta(t)$, $\theta(0) = I_2$, $I_2$ is the identity matrix of order $2\times 2$. It can easily be checked that

$$\theta(t) = e^{A_1 t} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}, \quad \theta^{-1}(t) = e^{-A_1 t} = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}, \quad \Phi(t,\tau) = \theta(t)\theta^{-1}(\tau).$$

The matrices (see (2.57))

$$W_1(0,\pi) = \int_0^\pi \Phi(0,t)B_1 B_1^*\Phi^*(0,t)\,dt = \int_0^\pi e^{-A_1 t}B_1 B_1^* e^{-A_1^* t}\,dt = \begin{pmatrix} \pi^3/3 & -\pi^2/2 \\ -\pi^2/2 & \pi \end{pmatrix} > 0,$$
$$W_1^{-1}(0,\pi) = \begin{pmatrix} 12/\pi^3 & 6/\pi^2 \\ 6/\pi^2 & 4/\pi \end{pmatrix}, \quad W_1(0,t) = \begin{pmatrix} t^3/3 & -t^2/2 \\ -t^2/2 & t \end{pmatrix}, \quad W_1(t,\pi) = \begin{pmatrix} (\pi^3 - t^3)/3 & -(\pi^2 - t^2)/2 \\ -(\pi^2 - t^2)/2 & \pi - t \end{pmatrix}.$$

The vector

$$a = \Phi(0,\pi)x(\pi) - x(0) - \int_0^\pi \Phi(0,t)\mu(t)\,dt = \begin{pmatrix} -\pi x_{21} - 2 \\ x_{21} - x_{20} \end{pmatrix}.$$

The linear controllable system (2.11)–(2.13) has the form

$$\dot y = A_1 y + B_1 u(t) + \mu(t), \quad t \in I, \quad y(0) = x(0) \in S_0, \quad y(\pi) = x(\pi) \in S_1, \quad u(\cdot) \in L_2(I, R^1). \quad (2.58)$$
As it follows from Theorem 3, a control $u(t) \in U$ is defined by (2.14), where

$$\lambda_1(t, x(0), x(\pi)) = B_1^*\Phi^*(0,t)W_1^{-1}(0,\pi)a = -\frac{12t - 6\pi}{\pi^3}(\pi x_{21} + 2) + \frac{6t - 4\pi}{\pi^2}(x_{21} - x_{20}),$$
$$N_1(t) = -B_1^*\Phi^*(0,t)W_1^{-1}(0,\pi)\Phi(0,\pi) = \Big(-\frac{12t - 6\pi}{\pi^3},\ \frac{6t - 2\pi}{\pi^2}\Big).$$

Then the control

$$u(t) = v(t) - \frac{12t - 6\pi}{\pi^3}(\pi x_{21} + 2) + \frac{6t - 4\pi}{\pi^2}(x_{21} - x_{20}) - \frac{12t - 6\pi}{\pi^3}z_1(\pi,v) + \frac{6t - 2\pi}{\pi^2}z_2(\pi,v), \quad t \in I = [0, \pi], \quad (2.59)$$

where $z(t,v)$, $t \in I$, is a solution of the differential equation

$$\dot z = A_1 z + B_1 v(t), \quad z(0) = 0, \quad v(\cdot) \in L_2(I, R^1). \quad (2.60)$$

The solution of the differential equation (2.58), corresponding to the control (2.59), is defined by

$$y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} = z(t,v) + \lambda_2(t, x(0), x(\pi)) + N_2(t)z(\pi,v) = z(t,v) + C_1(t)x(0) + C_2(t)x(\pi) + f(t) + N_2(t)z(\pi,v),$$

where

$$C_1(t)x(0) = \Phi(t,0)W_1(t,\pi)W_1^{-1}(0,\pi)x(0) = x_{20}\begin{pmatrix} t - 2t^2/\pi + t^3/\pi^2 \\ 1 - 4t/\pi + 3t^2/\pi^2 \end{pmatrix},$$
$$C_2(t) = \Phi(t,0)W_1(0,t)W_1^{-1}(0,\pi)\Phi(0,\pi) = \begin{pmatrix} -2t^3/\pi^3 + 3t^2/\pi^2 & t^3/\pi^2 - t^2/\pi \\ -6t^2/\pi^3 + 6t/\pi^2 & 3t^2/\pi^2 - 2t/\pi \end{pmatrix}, \quad N_2(t) = -C_2(t),$$
$$f(t) = \int_0^t \Phi(t,\tau)\mu(\tau)\,d\tau - C_2(t)\int_0^\pi \Phi(\pi,t)\mu(t)\,dt = \begin{pmatrix} 1 - \cos t + 4t^3/\pi^3 - 6t^2/\pi^2 \\ \sin t + 12t^2/\pi^3 - 12t/\pi^2 \end{pmatrix},$$
$$\int_0^t \Phi(t,\tau)\mu(\tau)\,d\tau = \begin{pmatrix} 1 - \cos t \\ \sin t \end{pmatrix}, \quad C_2(t)x(\pi) = x_{21}\begin{pmatrix} t^3/\pi^2 - t^2/\pi \\ 3t^2/\pi^2 - 2t/\pi \end{pmatrix}.$$

Then

$$y_1(t) = z_1(t,v) + x_{20}\Big(t - \frac{2t^2}{\pi} + \frac{t^3}{\pi^2}\Big) + x_{21}\Big(\frac{t^3}{\pi^2} - \frac{t^2}{\pi}\Big) + 1 - \cos t + \frac{4t^3}{\pi^3} - \frac{6t^2}{\pi^2} + \Big(\frac{2t^3}{\pi^3} - \frac{3t^2}{\pi^2}\Big)z_1(\pi,v) + \Big(\frac{t^2}{\pi} - \frac{t^3}{\pi^2}\Big)z_2(\pi,v), \quad (2.61)$$

$$y_2(t) = z_2(t,v) + x_{20}\Big(1 - \frac{4t}{\pi} + \frac{3t^2}{\pi^2}\Big) + x_{21}\Big(\frac{3t^2}{\pi^2} - \frac{2t}{\pi}\Big) + \sin t + \frac{12t^2}{\pi^3} - \frac{12t}{\pi^2} + \Big(\frac{6t^2}{\pi^3} - \frac{6t}{\pi^2}\Big)z_1(\pi,v) + \Big(\frac{2t}{\pi} - \frac{3t^2}{\pi^2}\Big)z_2(\pi,v), \quad t \in I. \quad (2.62)$$
Note that $y_1(0) = 0$, $y_2(0) = x_{20}$, $y_1(\pi) = 0$, $y_2(\pi) = x_{21}$. The optimization problem (2.26)–(2.28) for this example is written as

$$I(v, x_{20}, x_{21}) = \int_0^\pi |u(t) - y_1(t)|^2\,dt = \int_0^\pi F_0(v(t), x_{20}, x_{21}, z(t,v), z(\pi,v), t)\,dt \to \inf$$

under the conditions (2.60), where $u(t)$, $t \in I$, is defined by (2.59), and the function $y_1(t)$, $t \in I$, is given by (2.61). The partial derivatives are

$$\frac{\partial F_0(q(t),t)}{\partial v} = 2[u(t) - y_1(t)];$$
$$\frac{\partial F_0(q(t),t)}{\partial x_{20}} = 2[u(t) - y_1(t)]\Big[-\frac{6t - 4\pi}{\pi^2} - \Big(t - \frac{2t^2}{\pi} + \frac{t^3}{\pi^2}\Big)\Big];$$
$$\frac{\partial F_0(q(t),t)}{\partial x_{21}} = 2[u(t) - y_1(t)]\Big[-\frac{\pi(12t - 6\pi)}{\pi^3} + \frac{6t - 4\pi}{\pi^2} - \Big(\frac{t^3}{\pi^2} - \frac{t^2}{\pi}\Big)\Big];$$
$$\frac{\partial F_0(q(t),t)}{\partial z_1} = -2[u(t) - y_1(t)]; \quad \frac{\partial F_0(q(t),t)}{\partial z_2} = 0;$$
$$\frac{\partial F_0(q(t),t)}{\partial z_1(\pi)} = 2[u(t) - y_1(t)]\Big[-\frac{12t - 6\pi}{\pi^3} - \Big(\frac{2t^3}{\pi^3} - \frac{3t^2}{\pi^2}\Big)\Big];$$
$$\frac{\partial F_0(q(t),t)}{\partial z_2(\pi)} = 2[u(t) - y_1(t)]\Big[\frac{6t - 2\pi}{\pi^2} - \Big(\frac{t^2}{\pi} - \frac{t^3}{\pi^2}\Big)\Big];$$
$$q(t) = (v(t), x_{20}, x_{21}, z_1(t,v), z_2(t,v), z_1(\pi,v), z_2(\pi,v)).$$

The minimizing sequences $\{v_n(t)\}$, $\{x_{20}^n\}$, $\{x_{21}^n\}$ are generated by

$$v_{n+1}(t) = v_n(t) - \alpha_n\Big[\frac{\partial F_0(q_n(t),t)}{\partial v} - B_1^*\psi_n(t)\Big], \quad x_{20}^{n+1} = x_{20}^n - \alpha_n\int_0^\pi \frac{\partial F_0(q_n(t),t)}{\partial x_{20}}\,dt, \quad x_{21}^{n+1} = x_{21}^n - \alpha_n\int_0^\pi \frac{\partial F_0(q_n(t),t)}{\partial x_{21}}\,dt,$$
$$q_n(t) = (v_n(t), x_{20}^n, x_{21}^n, z(t,v_n), z(\pi,v_n)), \quad n = 0, 1, 2, \ldots,$$

where $\psi_n(t)$, $t \in I$, is a solution of the adjoint system

$$\dot\psi_n = \frac{\partial F_0(q_n(t),t)}{\partial z} - A_1^*\psi_n, \quad \psi_n(\pi) = -\int_0^\pi \frac{\partial F_0(q_n(t),t)}{\partial z(\pi)}\,dt.$$

The following results have been obtained for the initial guess $\xi_0 = \big(v_0(t) \equiv 1,\ x_{20}^0 = -\pi/8,\ x_{21}^0 = -\pi/8\big)$, $\alpha_n = 0.01 = \mathrm{const} > 0$:
$$v_n(t) \to \bar v(t) = \Big(\frac{t}{2} - \frac{\pi}{4}\Big)\sin t, \quad x_{20}^n \to \bar x_{20} = -\frac{\pi}{4}, \quad x_{21}^n \to \bar x_{21} = -\frac{\pi}{4} \quad \text{as } n \to \infty.$$

The functions (see (2.60))

$$z_1(t, \bar v) = \cos t - 1 + \frac{1}{2}t\sin t - \frac{\pi}{4}\sin t + \frac{\pi}{4}t, \quad t \in I = [0, \pi],$$
$$z_2(t, \bar v) = -\frac{1}{2}(\sin t - t\cos t) + \frac{\pi}{4} - \frac{\pi}{4}\cos t, \quad t \in I.$$

The values $z_1(\pi, \bar v) = \dfrac{\pi^2}{4} - 2$, $z_2(\pi, \bar v) = 0$. The solution to the original problem is

$$x_1(t) = y_1(t) = \frac{1}{2}t\sin t - \frac{\pi}{4}\sin t, \quad t \in I,$$
$$x_2(t) = y_2(t) = \frac{1}{2}\sin t + \frac{1}{2}t\cos t - \frac{\pi}{4}\cos t, \quad t \in I,$$
$$(x_1(0), x_2(0)) = \Big(0;\ -\frac{\pi}{4}\Big), \quad (x_1(\pi), x_2(\pi)) = \Big(0;\ -\frac{\pi}{4}\Big),$$

where $y_1(t)$ and $y_2(t)$, $t \in I$, are defined by (2.61), (2.62).
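The result of Example 1 can be verified directly. The following numerical check (an illustration, not part of the original text) confirms that $x_1(t) = (t/2 - \pi/4)\sin t$ satisfies $\ddot x_1 + x_1 = \cos t$ together with the boundary conditions $x_1(0) = x_1(\pi) = 0$, $x_2(0) = x_2(\pi) = -\pi/4$:

```python
import numpy as np

# Direct check of the solution of Example 1:
#   x1(t) = (t/2 - pi/4) sin t  should satisfy  x1'' + x1 = cos t
# on [0, pi], with x1(0) = x1(pi) = 0 and x2 = x1', x2(0) = x2(pi) = -pi/4.
t = np.linspace(0.0, np.pi, 2001)
x1 = (t / 2 - np.pi / 4) * np.sin(t)

# x2 = x1' and x1'' computed analytically from the same formula
x2 = 0.5 * np.sin(t) + (t / 2 - np.pi / 4) * np.cos(t)
x1_dd = np.cos(t) - (t / 2 - np.pi / 4) * np.sin(t)

residual = np.max(np.abs(x1_dd + x1 - np.cos(t)))
print(residual, x1[0], x1[-1], x2[0], x2[-1])
```

The residual of the differential equation vanishes identically, and the boundary values reproduce $(0; -\pi/4)$ at both ends.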
Lecture 9. Boundary value problems with state variable constraints

Consider the solution to problem 2 for the equation of motion

$$\dot x = A(t)x + \mu(t), \quad t \in I = [t_0, t_1], \quad (2.63)$$

with the boundary conditions

$$(x(t_0) = x_0,\ x(t_1) = x_1) \in S \subset R^{2n}, \quad (2.64)$$

and the state variable constraints

$$x(t) \in G(t): \quad G(t) = \{x \in R^n \,/\, \omega(t) \le L(t)x \le \varphi(t)\}, \quad t \in I. \quad (2.65)$$

The problem is to find a necessary and sufficient condition for the existence of a solution to the boundary value problem (2.63)–(2.65) and to construct its solution. Let us represent the matrix $A(t)$ in the form $A(t) = A_1(t) + B_1(t)P$; therefore,

$$\dot x = A_1(t)x + B_1(t)Px + \mu(t), \quad t \in I. \quad (2.66)$$

The corresponding linear control system has the form

$$\dot y = A_1(t)y + B_1(t)u(t) + \mu(t), \quad t \in I, \quad (2.67)$$
$$(y(t_0) = x_0,\ y(t_1) = x_1) \in S \subset R^{2n}, \quad (2.68)$$
$$u(\cdot) \in L_2(I, R^m). \quad (2.69)$$

The statement of Theorem 3 holds true for the linear control system (2.67)–(2.69). Then

$$u(t) = v(t) + \lambda_1(t, x_0, x_1) + N_1(t)z(t_1, v) \in U, \quad \forall v,\ v(\cdot) \in L_2(I, R^m), \quad (2.70)$$
$$y(t) = z(t,v) + \lambda_2(t, x_0, x_1) + N_2(t)z(t_1, v), \quad t \in I, \quad (2.71)$$

where

$$\dot z = A_1(t)z + B_1(t)v(t), \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m). \quad (2.72)$$
Lemma 4. Let the matrix $W_1(t_0,t_1)$ be positive definite. Then the boundary value problem (2.63)–(2.65) (or (2.64)–(2.66)) is equivalent to the following problem:

$$u(t) = v(t) + \lambda_1(t, x_0, x_1) + N_1(t)z(t_1, v) = Py(t), \quad t \in I, \quad (2.73)$$
$$\dot z = A_1(t)z + B_1(t)v, \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m), \quad (2.74)$$
$$y(t) \in G(t), \quad (x_0, x_1) \in S, \quad (2.75)$$

where the function $y(t)$, $t \in I$, is defined by (2.71). A proof of the lemma follows from (2.70)–(2.75) and (2.67)–(2.69).

Consider the following optimal control problem:

$$I(v, x_0, x_1, w) = \int_{t_0}^{t_1}\Big[|v(t) + T_1(t)x_0 + T_2(t)x_1 + \mu(t) + N_1(t)z(t_1,v) - Py(t)|^2 + |w(t) - L(t)y(t)|^2\Big]\,dt = \int_{t_0}^{t_1} F_1(q(t),t)\,dt \to \inf \quad (2.76)$$

under the conditions

$$\dot z = A_1(t)z + B_1(t)v(t), \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m), \quad (2.77)$$
$$w(t) \in W = \{w(\cdot) \in L_2(I, R^s) \,/\, \omega(t) \le w(t) \le \varphi(t)\}, \quad t \in I, \quad (2.78)$$
$$(x_0, x_1) \in S, \quad (2.79)$$

where

$$F_1(q(t),t) = |v(t) + \lambda_1(t, x_0, x_1) + N_1(t)z(t_1,v) - Py(t)|^2 + |w(t) - L(t)y(t)|^2,$$
$$\lambda_1(t, x_0, x_1) = T_1(t)x_0 + T_2(t)x_1 + \mu(t), \quad q(t) = (v(t), x_0, x_1, w(t), z(t,v), z(t_1,v)).$$
Theorem 5. Let the matrix $W_1(t_0,t_1)$ be positive definite. A necessary and sufficient condition for the boundary value problem (2.63)–(2.65) to have a solution is $I(v_*, x_0^*, x_1^*, w_*) = 0$, where $(v_*, x_0^*, x_1^*, w_*) \in X = L_2(I, R^m)\times S\times W \subset H$, $H = L_2(I, R^m)\times R^n\times R^n\times L_2(I, R^s)$, is a solution to the optimization problem (2.76)–(2.79).

A proof of the theorem is similar to the proof of Theorem 2. Note that if the value $I(v_*, x_0^*, x_1^*, w_*) > 0$, then the boundary value problem (2.63)–(2.65) has no solution.

Lemma 5. Let the matrix $W_1(t_0,t_1)$ be positive definite. Then the partial derivatives are

$$\frac{\partial F_1(q,t)}{\partial v} = 2V(q,t); \quad \frac{\partial F_1(q,t)}{\partial w} = 2\Delta(q,t);$$
$$\frac{\partial F_1(q,t)}{\partial x_0} = \big[2T_1^*(t) - 2C_1^*(t)P^*\big]V(q,t) - 2C_1^*(t)L^*(t)\Delta(q,t);$$
$$\frac{\partial F_1(q,t)}{\partial x_1} = \big[2T_2^*(t) - 2C_2^*(t)P^*\big]V(q,t) - 2C_2^*(t)L^*(t)\Delta(q,t);$$
$$\frac{\partial F_1(q,t)}{\partial z} = -2P^*V(q,t) - 2L^*(t)\Delta(q,t);$$
$$\frac{\partial F_1(q,t)}{\partial z(t_1)} = \big[2N_1^*(t) - 2N_2^*(t)P^*\big]V(q,t) - 2N_2^*(t)L^*(t)\Delta(q,t), \quad (2.80)$$

where

$$V(q,t) = v + T_1(t)x_0 + T_2(t)x_1 + \mu(t) + N_1(t)z(t_1) - Py(t), \quad \Delta(q,t) = w(t) - L(t)y(t),$$
$$y(t) = z(t) + C_1(t)x_0 + C_2(t)x_1 + f(t) + N_2(t)z(t_1).$$

Lemma 6. Let the matrix $W_1(t_0,t_1)$ be positive definite and the set $S$ be convex. Then: 1) the functional (2.76) under the conditions (2.77)–(2.79) is convex; 2) the derivative

$$\frac{\partial F_1(q,t)}{\partial q} = \Big(\frac{\partial F_1(q,t)}{\partial v}, \frac{\partial F_1(q,t)}{\partial x_0}, \frac{\partial F_1(q,t)}{\partial x_1}, \frac{\partial F_1(q,t)}{\partial w}, \frac{\partial F_1(q,t)}{\partial z}, \frac{\partial F_1(q,t)}{\partial z(t_1)}\Big)$$

satisfies the Lipschitz condition

$$\Big\|\frac{\partial F_1(q + \Delta q, t)}{\partial q} - \frac{\partial F_1(q,t)}{\partial q}\Big\| \le L_1|\Delta q|, \quad \forall q,\ q + \Delta q \in R^{m+s+4n},$$

where $L_1 = \mathrm{const} > 0$, $\Delta q = (\Delta v, \Delta x_0, \Delta x_1, \Delta w, \Delta z, \Delta z(t_1)) \in R^{m+s+4n}$. A proof of the lemma is similar to the proof of Lemma 3.

Theorem 6. Let the matrix $W_1(t_0,t_1)$ be positive definite. Then the functional (2.76) under the conditions (2.77)–(2.79) is continuously Fréchet differentiable, and the gradient

$$I'(\xi) = I'(v, x_0, x_1, w) = (I'_v(\xi), I'_{x_0}(\xi), I'_{x_1}(\xi), I'_w(\xi)) \in H$$

at any point $\xi = (v, x_0, x_1, w) \in X = L_2(I, R^m)\times S\times W$ is computed by the formulas

$$I'_v(\xi) = \frac{\partial F_1(q(t),t)}{\partial v} - B_1^*(t)\psi(t) \in L_2(I, R^m),$$
$$I'_{x_0}(\xi) = \int_{t_0}^{t_1}\frac{\partial F_1(q(t),t)}{\partial x_0}\,dt \in R^n, \quad I'_{x_1}(\xi) = \int_{t_0}^{t_1}\frac{\partial F_1(q(t),t)}{\partial x_1}\,dt \in R^n, \quad (2.81)$$
$$I'_w(\xi) = \frac{\partial F_1(q(t),t)}{\partial w} \in L_2(I, R^s),$$

where the partial derivatives are defined by the formulas (2.80), $q(t) = (v(t), x_0, x_1, w(t), z(t,v), z(t_1,v))$, the function $z(t)$, $t \in I$, is a solution of the differential equation (2.77) at $v = v(t)$, and the function $\psi(t)$, $t \in I$, is a solution of the conjugate system

$$\dot\psi = \frac{\partial F_1(q(t),t)}{\partial z} - A_1^*(t)\psi, \quad \psi(t_1) = -\int_{t_0}^{t_1}\frac{\partial F_1(q(t),t)}{\partial z(t_1)}\,dt. \quad (2.82)$$

Moreover, the gradient $I'(\xi)$, $\xi \in X$, satisfies the Lipschitz condition

$$\|I'(\xi_1) - I'(\xi_2)\| \le K_1\|\xi_1 - \xi_2\|, \quad \forall \xi_1, \xi_2 \in X, \quad (2.83)$$

where $K_1 = \mathrm{const} > 0$ is the Lipschitz constant. A proof of the theorem is similar to the proof of Theorem 3.

On the basis of the formulas (2.81)–(2.83) we construct the sequences

$$v_{n+1} = v_n - \alpha_n I'_v(\xi_n), \quad x_0^{n+1} = P_S[x_0^n - \alpha_n I'_{x_0}(\xi_n)], \quad x_1^{n+1} = P_S[x_1^n - \alpha_n I'_{x_1}(\xi_n)],$$
$$w_{n+1} = P_W[w_n - \alpha_n I'_w(\xi_n)], \quad n = 0, 1, 2, \ldots, \quad (2.84)$$

where $0 < \varepsilon_0 \le \alpha_n \le \dfrac{2}{K_1 + 2\varepsilon_1}$, $\varepsilon_1 > 0$; in particular, $\alpha_n = \dfrac{1}{K_1} = \mathrm{const} > 0$, $K_1 > 0$ is the Lipschitz constant from (2.83), $\xi_n = (v_n, x_0^n, x_1^n, w_n) \in X$.

Note that:

1. If the set $S = \{(x_0, x_1) \in R^{2n} \,/\, Cx_0 + Dx_1 = b\}$, where $C$, $D$ are constant matrices of order $n\times n$, then the projection of a point $e \in R^{2n}$ onto the set $S$ is defined as

$$P_S[e] = (P_S[e_0], P_S[e_1]) = e - (C, D)^*\big[(C, D)(C, D)^*\big]^{-1}(Ce_0 + De_1 - b), \quad e = (e_0, e_1), \quad e_0 \in R^n, \quad e_1 \in R^n.$$

2. If the sets $S_0 = \{x_0 \in R^n \,/\, C_1 x_0 = b_0\}$, $S_1 = \{x_1 \in R^n \,/\, D_1 x_1 = b_1\}$, where $C_1$, $D_1$ are constant matrices of orders $m\times n$ and $(n-m)\times n$ respectively, then the projections of a point $e \in R^n$ onto $S_0$ and $S_1$ are defined as

$$P_{S_0}[e] = e - C_1^*(C_1 C_1^*)^{-1}(C_1 e - b_0), \quad P_{S_1}[e] = e - D_1^*(D_1 D_1^*)^{-1}(D_1 e - b_1).$$

3. The projection of a point $u(\cdot) \in L_2(I, R^s)$ onto the set $W = \{w(\cdot) \in L_2(I, R^s) \,/\, \omega(t) \le w(t) \le \varphi(t),\ t \in I\}$ is defined as

$$P_W[u] = \begin{cases} \omega_i(t), & \text{if } u_i(t) \le \omega_i(t), \\ u_i(t), & \text{if } \omega_i(t) \le u_i(t) \le \varphi_i(t), \\ \varphi_i(t), & \text{if } u_i(t) \ge \varphi_i(t), \end{cases} \qquad i = \overline{1, s}, \quad t \in I.$$
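The projection formulas of items 1 and 3 can be sketched in code. Below is an illustrative sketch (the matrices `C`, `D`, the vector `b`, and the sample points are arbitrary choices, not data from the text): `project_affine` implements $P_S[e]$ for the affine set $Cx_0 + Dx_1 = b$, and `project_box` implements the componentwise clipping $P_W[u]$.

```python
import numpy as np

def project_affine(e0, e1, C, D, b):
    # P_S[e] = e - (C,D)^T [(C,D)(C,D)^T]^{-1} (C e0 + D e1 - b)
    CD = np.hstack([C, D])
    e = np.concatenate([e0, e1])
    corr = CD.T @ np.linalg.solve(CD @ CD.T, C @ e0 + D @ e1 - b)
    p = e - corr
    return p[:len(e0)], p[len(e0):]

def project_box(u, omega, phi):
    # P_W[u]_i = omega_i if u_i below, phi_i if u_i above, else u_i
    return np.minimum(np.maximum(u, omega), phi)

# Illustration: S = {(x0, x1) : x0 = x1}, i.e. C = I, D = -I, b = 0.
C = np.eye(2)
D = -np.eye(2)
b = np.zeros(2)
p0, p1 = project_affine(np.array([1.0, 0.0]), np.array([0.0, 2.0]), C, D, b)
print(p0, p1)  # projection of ((1,0),(0,2)) lands on the midpoint (0.5, 1)
```

For this choice of $S$ the projection of $(e_0, e_1)$ is the pair with both components equal to $(e_0 + e_1)/2$, which the formula reproduces.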
Theorem 7. Let the matrix $W_1(t_0,t_1)$ be positive definite, the set $S$ be convex and closed, and the sequence $\{\xi_n\} = \{(v_n, x_0^n, x_1^n, w_n)\} \subset X$ be defined by the formula (2.84). Then:
1) the numerical sequence $\{I(\xi_n)\}$ decreases strictly;
2) $\|v_n - v_{n+1}\| \to 0$, $|x_0^n - x_0^{n+1}| \to 0$, $|x_1^n - x_1^{n+1}| \to 0$, $\|w_n - w_{n+1}\| \to 0$ as $n \to \infty$.
If, in addition, the set $M(\xi_0) = \{\xi \in X \,/\, I(\xi) \le I(\xi_0)\}$ is bounded, then:
3) the sequence $\{\xi_n\} \subset X$ is minimizing, i.e. $\lim_{n\to\infty} I(\xi_n) = I_* = \inf_{\xi\in X} I(\xi)$;
4) the set $X_* = \{\xi_* \in X \,/\, I(\xi_*) = I_* = \inf_{\xi\in X} I(\xi)\} \ne \emptyset$;
5) the sequence $\{\xi_n\} \subset X$ weakly converges to the set $X_*$;
6) the following estimate of the convergence rate holds: $0 \le I(\xi_n) - I_* \le \dfrac{m_1}{n}$, $n = 1, 2, \ldots$, $m_1 = \mathrm{const} > 0$;
7) the boundary value problem (2.63)–(2.65) has a solution if and only if the value $I(\xi_*) = 0$.
A proof of the theorem is similar to the proof of Theorem 4.

Example 2. Consider the boundary value problem (2.55), (2.56) from Example 1 with the additional constraint

$$\frac{0.37t}{\pi} - 0.37 \le x_1(t) \le \frac{0.44t}{\pi}, \quad t \in I = [0, \pi]. \quad (2.85)$$

For this example the optimization problem (2.76)–(2.79) has the form

$$I(v, x_{20}, x_{21}, w) = \int_0^\pi\Big[|u(t) - y_1(t)|^2 + |w(t) - y_1(t)|^2\Big]\,dt = \int_0^\pi F_1(q(t),t)\,dt \to \inf \quad (2.86)$$

under the conditions (see (2.85))

$$\dot z = A_1 z + B_1 v(t), \quad z(0) = 0, \quad v(\cdot) \in L_2(I, R^1), \quad (2.87)$$
$$w(t) \in W = \Big\{w(\cdot) \in L_2(I, R^1) \,/\, \frac{0.37t}{\pi} - 0.37 \le w(t) \le \frac{0.44t}{\pi},\ t \in I\Big\}, \quad (2.88)$$
$$x_{20} \in R^1, \quad x_{21} \in R^1. \quad (2.89)$$
The partial derivatives are

$$F_{1v}(q,t) = F_{0v}, \quad F_{1w}(q,t) = 2[w(t) - y_1(t)],$$
$$F_{1x_{20}}(q,t) = F_{0x_{20}} - 2[w(t) - y_1(t)]\Big(t - \frac{2t^2}{\pi} + \frac{t^3}{\pi^2}\Big),$$
$$F_{1x_{21}}(q,t) = F_{0x_{21}} - 2[w(t) - y_1(t)]\Big(\frac{t^3}{\pi^2} - \frac{t^2}{\pi}\Big),$$
$$F_{1z_1}(q,t) = F_{0z_1} - 2[w(t) - y_1(t)], \quad F_{1z_2}(q,t) = 0,$$
$$F_{1z_1(\pi)}(q,t) = F_{0z_1(\pi)} - 2[w(t) - y_1(t)]\Big(\frac{2t^3}{\pi^3} - \frac{3t^2}{\pi^2}\Big),$$
$$F_{1z_2(\pi)}(q,t) = F_{0z_2(\pi)} - 2[w(t) - y_1(t)]\Big(\frac{t^2}{\pi} - \frac{t^3}{\pi^2}\Big),$$

where $F_{0v}$, $F_{0x_{20}}$, $F_{0x_{21}}$, $F_{0z_1}$, $F_{0z_1(\pi)}$, $F_{0z_2(\pi)}$ are the partial derivatives from Example 1. The minimizing sequences are

$$v_{n+1}(t) = v_n(t) - \alpha_n\big[F_{1v}(q_n(t),t) - B_1^*\psi_n(t)\big],$$
$$x_{20}^{n+1} = x_{20}^n - \alpha_n\int_0^\pi F_{1x_{20}}(q_n(t),t)\,dt, \quad x_{21}^{n+1} = x_{21}^n - \alpha_n\int_0^\pi F_{1x_{21}}(q_n(t),t)\,dt,$$
$$w_{n+1}(t) = P_W\big[w_n(t) - \alpha_n F_{1w}(q_n(t),t)\big], \quad n = 0, 1, 2, \ldots,$$
$$\dot\psi_n = F_{1z}(q_n(t),t) - A_1^*\psi_n, \quad \psi_n(\pi) = -\int_0^\pi F_{1z(\pi)}(q_n(t),t)\,dt.$$

Since the functional (2.86) under the conditions (2.87)–(2.89) is convex, the sequence $\{\xi_n\} = \{(v_n, x_{20}^n, x_{21}^n, w_n)\}$ is minimizing. It can be shown that

$$v_n(t) \to v_*(t) = \Big(\frac{t}{2} - \frac{\pi}{4}\Big)\sin t, \quad w_n(t) \to w_*(t) = \Big(\frac{t}{2} - \frac{\pi}{4}\Big)\sin t, \quad t \in I,$$
$$x_{20}^n \to x_{20}^* = -\frac{\pi}{4}, \quad x_{21}^n \to x_{21}^* = -\frac{\pi}{4} \quad \text{as } n \to \infty.$$
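The gradient projection iterations of Examples 1 and 2 can be sketched numerically. Below is a minimal illustrative sketch (not from the original text): a finite-dimensional model functional $I(\xi) = |A\xi - b|^2$ is minimized, with the first component unconstrained (playing the role of $v$) and the remaining components projected onto a box (playing the role of the sets $S$, $W$); the matrix `A`, vector `b`, and box bounds are arbitrary choices.

```python
import numpy as np

# Model problem: minimize I(xi) = |A xi - b|^2 by gradient projection.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, -1.0],
              [1.0, 1.0, 0.0]])
b = np.array([1.0, 0.0, 2.0])

def I_val(xi):
    r = A @ xi - b
    return float(r @ r)

def I_grad(xi):
    return 2.0 * A.T @ (A @ xi - b)

def project_box(x, lo=-1.0, hi=1.0):  # projection onto the box [lo, hi]^k
    return np.clip(x, lo, hi)

K = 2.0 * np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient
alpha = 1.0 / K                       # satisfies 0 < alpha <= 2/(K + 2*eps1)

xi = np.zeros(3)
history = [I_val(xi)]
for n in range(2000):
    g = I_grad(xi)
    xi[0] = xi[0] - alpha * g[0]                  # unconstrained step (v)
    xi[1:] = project_box(xi[1:] - alpha * g[1:])  # projected step (x0, x1)
    history.append(I_val(xi))

print(history[0], history[-1])
```

With the step size chosen from $0 < \alpha_n \le 2/(K + 2\varepsilon_1)$, the functional value decreases monotonically along the iterations, mirroring statement 1) of Theorems 4 and 7.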
Lecture 10. Boundary value problems with state variable and integral constraints

As it follows from the statement of problem 3, the equation has the form

$$\dot x = A(t)x + \mu(t), \quad t \in I, \quad (2.90)$$

with the boundary conditions

$$(x(t_0) = x_0,\ x(t_1) = x_1) \in S \subset R^{2n}, \quad (2.91)$$

the state variable constraints

$$x(t) \in G(t): \quad G(t) = \{x \in R^n \,/\, \omega(t) \le L(t)x \le \varphi(t),\ t \in I\}, \quad (2.92)$$

and the integral constraints

$$g_j(x) \le c_j, \quad j = \overline{1, m_1}; \qquad g_j(x) = c_j, \quad j = \overline{m_1+1, m_2}, \quad (2.93)$$
$$g_j(x) = \int_{t_0}^{t_1}\langle a_j(t), x(t)\rangle\,dt, \quad j = \overline{1, m_2}. \quad (2.94)$$

Transformation. By introducing the vector function $\eta(t) = (\eta_1(t), \ldots, \eta_{m_2}(t))$, $t \in I$, and the vector $d = (d_1, \ldots, d_{m_1}) \in R^{m_1}$, the constraints (2.93) can be rewritten in the form

$$g_j(x) = \int_{t_0}^{t_1}\langle a_j(t), x(t)\rangle\,dt = c_j - d_j, \quad j = \overline{1, m_1}, \quad d \in \Gamma = \{d \in R^{m_1} \,/\, d \ge 0\},$$
$$\eta_j(t) = \int_{t_0}^{t}\langle a_j(\tau), x(\tau)\rangle\,d\tau, \quad j = \overline{1, m_2}, \quad t \in I.$$

Then

$$\dot\eta = \bar A(t)x(t), \quad \eta(t_0) = 0, \quad \eta(t_1) = \bar c,$$

where

$$\bar A(t) = \begin{pmatrix} a_1^*(t) \\ \ldots \\ a_{m_2}^*(t) \end{pmatrix}, \quad a_j^*(t) = (a_{1j}(t), \ldots, a_{nj}(t)),$$
$$\bar c = (\bar c_1, \ldots, \bar c_{m_2}), \quad \bar c_j = c_j - d_j, \ j = \overline{1, m_1}, \quad \bar c_j = c_j, \ j = \overline{m_1+1, m_2}.$$
As in the previous section, we represent the matrix $A(t) = A_1(t) + B_1(t)P$; therefore $\dot x = A_1(t)x + B_1(t)Px + \mu(t)$, $t \in I$. Now the original boundary value problem (2.90)–(2.94) is written in the form

$$\dot x = A_1(t)x + B_1(t)Px + \mu(t), \quad t \in I, \quad (2.95)$$
$$\dot\eta = \bar A(t)x, \quad \eta(t_0) = 0, \quad \eta(t_1) = \bar c, \quad (2.96)$$
$$(x_0, x_1) \in S, \quad x(t) \in G(t),\ t \in I, \quad d \in \Gamma. \quad (2.97)$$

Represent the boundary value problem (2.95)–(2.97) in the form

$$\dot\xi = A_2(t)\xi + \bar B_2(t)\xi + B_3\mu(t), \quad (2.98)$$
$$\xi(t_0) = \xi_0 = (x_0, 0_{m_2}), \quad \xi(t_1) = \xi_1 = (x_1, \bar c), \quad (2.99)$$
$$(x_0, x_1) \in S, \quad d \in \Gamma, \quad P_1\xi(t) \in G(t), \quad t \in I, \quad (2.100)$$

where

$$\xi = \begin{pmatrix} x \\ \eta \end{pmatrix}, \quad A_2(t) = \begin{pmatrix} A_1(t) & O_{n,m_2} \\ \bar A(t) & O_{m_2,m_2} \end{pmatrix}, \quad \bar B_2(t) = \begin{pmatrix} B_1(t)P & O_{n,m_2} \\ O_{m_2,n} & O_{m_2,m_2} \end{pmatrix}, \quad B_3 = \begin{pmatrix} I_n \\ O_{m_2,n} \end{pmatrix}, \quad P_1 = (I_n, O_{n,m_2}), \quad P_1\xi = x,$$

where $O_{j,k}$ is a zero matrix of order $j\times k$ and $O_q \in R^q$ is a $q\times 1$ zero vector.

Linear control system. As in the previous sections, the corresponding linear control system has the form (see (2.98)–(2.100))

$$\dot y = A_2(t)y + B_2(t)u + B_3\mu(t), \quad t \in I, \quad (2.101)$$
$$y(t_0) = \xi_0 = (x_0, 0_{m_2}), \quad y(t_1) = \xi_1 = (x_1, \bar c), \quad (2.102)$$
$$(x_0, x_1) \in S, \quad d \in \Gamma, \quad u(\cdot) \in L_2(I, R^m), \quad (2.103)$$

where the matrix $B_2(t) = \begin{pmatrix} B_1(t) \\ O_{m_2,m} \end{pmatrix}$ and the term $\bar B_2(t)\xi$ is replaced by $B_2(t)u$, $t \in I$.
Let $\Sigma(t,\tau) = V(t)V^{-1}(\tau)$, where $V(t)$ is a fundamental matrix of solutions of the linear homogeneous system $\dot\lambda = A_2(t)\lambda$. The $(n+m_2)\times m$ matrix $B_2(t)$, $t \in I$, is such that the $(n+m_2)\times(n+m_2)$ matrix

$$W_2(t_0, t_1) = \int_{t_0}^{t_1}\Sigma(t_0,t)B_2(t)B_2^*(t)\Sigma^*(t_0,t)\,dt$$

is positive definite.

Theorem 8. Let the matrix $W_2(t_0,t_1)$ be positive definite. Then a control $u(\cdot) \in L_2(I, R^m)$ transfers a trajectory of the system (2.101) from any initial point $\xi_0 \in R^{n+m_2}$ to any final state $\xi_1 \in R^{n+m_2}$ if and only if

$$u(t) \in U = \{u(\cdot) \in L_2(I, R^m) \,/\, u(t) = v(t) + \lambda_1(t, \xi_0, \xi_1) + N_1(t)z(t_1, v),\ t \in I,\ \forall v,\ v(\cdot) \in L_2(I, R^m)\}, \quad (2.104)$$

where

$$\lambda_1(t, \xi_0, \xi_1) = B_2^*(t)\Sigma^*(t_0,t)W_2^{-1}(t_0,t_1)a, \quad a = \Sigma(t_0,t_1)\xi_1 - \xi_0 - \int_{t_0}^{t_1}\Sigma(t_0,t)B_3\mu(t)\,dt,$$
$$N_1(t) = -B_2^*(t)\Sigma^*(t_0,t)W_2^{-1}(t_0,t_1)\Sigma(t_0,t_1),$$

and the function $z(t,v)$, $t \in I$, is a solution of the differential equation

$$\dot z = A_2(t)z + B_2(t)v(t), \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m). \quad (2.105)$$

A solution of the differential equation (2.101), corresponding to the control $u(t) \in U$, is defined by the formula (see (2.102), (2.103))

$$y(t) = z(t,v) + \lambda_2(t, \xi_0, \xi_1) + N_2(t)z(t_1, v), \quad t \in I, \quad (2.106)$$

where

$$\lambda_2(t, \xi_0, \xi_1) = \Sigma(t,t_0)W_2(t,t_1)W_2^{-1}(t_0,t_1)\xi_0 + \Sigma(t,t_0)W_2(t_0,t)W_2^{-1}(t_0,t_1)\Sigma(t_0,t_1)\xi_1$$
$$+ \int_{t_0}^{t}\Sigma(t,\tau)B_3\mu(\tau)\,d\tau - \Sigma(t,t_0)W_2(t_0,t)W_2^{-1}(t_0,t_1)\int_{t_0}^{t_1}\Sigma(t_0,t)B_3\mu(t)\,dt,$$
$$N_2(t) = -\Sigma(t,t_0)W_2(t_0,t)W_2^{-1}(t_0,t_1)\Sigma(t_0,t_1), \quad W_2(t_0,t) = \int_{t_0}^{t}\Sigma(t_0,\tau)B_2(\tau)B_2^*(\tau)\Sigma^*(t_0,\tau)\,d\tau,$$
$$W_2(t,t_1) = W_2(t_0,t_1) - W_2(t_0,t).$$

A proof of the theorem is similar to the proof of Theorem 1.

Lemma 7. Let the matrix $W_2(t_0,t_1)$ be positive definite. Then the boundary value problem (2.90)–(2.94) is equivalent to the problem:

$$u(t) = v(t) + T_1(t)\xi_0 + T_2(t)\xi_1 + r(t) + N_1(t)z(t_1, v) = PP_1 y(t), \quad t \in I, \quad (2.107)$$
$$\dot z = A_2(t)z + B_2(t)v(t), \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m), \quad (2.108)$$
$$\xi_0 = (x_0, 0_{m_2}), \quad \xi_1 = (x_1, \bar c), \quad (x_0, x_1) \in S, \quad d \in \Gamma, \quad (2.109)$$
$$P_1 y(t) \in G(t), \quad P_2 = (O_{m_2,n}, I_{m_2}), \quad P_1 = (I_n, O_{n,m_2}), \quad (2.110)$$

where

$$T_1(t) = -B_2^*(t)\Sigma^*(t_0,t)W_2^{-1}(t_0,t_1), \quad T_2(t) = B_2^*(t)\Sigma^*(t_0,t)W_2^{-1}(t_0,t_1)\Sigma(t_0,t_1),$$
$$r(t) = -B_2^*(t)\Sigma^*(t_0,t)W_2^{-1}(t_0,t_1)\int_{t_0}^{t_1}\Sigma(t_0,t)B_3\mu(t)\,dt,$$

and the function $y(t)$, $t \in I$, is defined by the formula (2.106).
A proof of the lemma follows from the equalities $P_1 y(t) = x(t)$, $P_2 y(t) = \eta(t)$, $t \in I$, and Theorem 8 (see (2.101)–(2.103), (2.98)–(2.100), (2.104)–(2.106)).

Consider the optimal control problem (see (2.107)–(2.110))

$$J(v, x_0, x_1, d, w) = \int_{t_0}^{t_1}\Big[|v(t) + T_1(t)\xi_0 + T_2(t)\xi_1 + r(t) + N_1(t)z(t_1,v) - PP_1 y(t)|^2 + |w(t) - L(t)P_1 y(t)|^2\Big]\,dt \to \inf \quad (2.111)$$

under the conditions

$$\dot z = A_2(t)z + B_2(t)v(t), \quad z(t_0) = 0, \quad v(\cdot) \in L_2(I, R^m), \quad (2.112)$$
$$(x_0, x_1) \in S, \quad d \in \Gamma, \quad \xi_0 = (x_0, 0_{m_2}), \quad \xi_1 = (x_1, \bar c), \quad (2.113)$$
$$w(t) \in W = \{w(\cdot) \in L_2(I, R^s) \,/\, \omega(t) \le w(t) \le \varphi(t),\ t \in I\}. \quad (2.114)$$

Theorem 9. Let the matrix $W_2(t_0,t_1)$ be positive definite. A necessary and sufficient condition for the boundary value problem (2.90)–(2.94) to have a solution is $J(v_*, x_0^*, x_1^*, d_*, w_*) = 0$, where $(v_*, x_0^*, x_1^*, d_*, w_*) \in X = L_2(I, R^m)\times S\times\Gamma\times W \subset H$, $H = L_2(I, R^m)\times R^n\times R^n\times R^{m_1}\times L_2(I, R^s)$, is a solution to the optimization problem (2.111)–(2.114).

A proof of the theorem follows from Lemma 7 and Theorem 8. Note that

$$u(t) = v(t) + T_1(t)\xi_0 + T_2(t)\xi_1 + r(t) + N_1(t)z(t_1,v) = v(t) + T_{11}(t)x_0 + T_{21}(t)x_1 + T_{22}(t)d + r_1(t) + N_1(t)z(t_1,v), \quad (2.115)$$
$$y(t) = z(t,v) + \lambda_2(t, \xi_0, \xi_1) + N_2(t)z(t_1,v) = z(t,v) + C_{11}(t)x_0 + C_{21}(t)x_1 + C_{22}(t)d + f(t) + N_2(t)z(t_1,v), \quad (2.116)$$

as $\xi_0 = (x_0, O_{m_2})$, $\xi_1 = (x_1, \bar c)$, $\bar c = (\bar c_1, \ldots, \bar c_{m_2})$, $\bar c_j = c_j - d_j$, $j = \overline{1, m_1}$, $\bar c_j = c_j$, $j = \overline{m_1+1, m_2}$. Let us introduce the notation

$$F_2(q(t),t) = |v(t) + T_1(t)\xi_0 + T_2(t)\xi_1 + r(t) + N_1(t)z(t_1,v) - PP_1 y(t)|^2 + |w(t) - L(t)P_1 y(t)|^2,$$
2V 1 ( q, t ), F2 w ( q, t )
2'1 ( q, t ),
F2 d ( q, t )
>2T (t ) 2C >2T (t ) 2C >2T (t ) 2C
F2 z ( q, t )
2 P1*V 1 ( q, t ) 2 P * L* (t ) '1 ( q, t ),
F2 x0 ( q, t ) F2 x1 ( q, t )
F2 z ( t1 ) ( q, t )
* 11
* 11
* 21
* 21
* 22
* 22
>2 N
* 1
@ (t ) P @V ( q, t ) 2C (t ) P @V ( q, t ) 2C
(t ) P V 1 ( q, t ) 2C11* (t ) P * L* (t ) '1 ( q, t ), * 1
* 1
1
* 21
(t ) P * L* (t ) '1 ( q, t ),
* 1
1
* 22
(t ) P * L* (t ) '1 ( q, t ),
(2.117)
@
(t ) 2 N 2* (t ) P1* V 1 ( q, t ) 2 N 2* (t ) P * L* (t )'1 ( q, t ).
where V 1 ( q, t ) v(t ) T11 (t ) x0 T21* (t ) x1 T22 (t )d r1 (t ) N1 (t ) z (t1 , v ) P1 [ z(t , v ) C11 (t ) x0 C21 (t ) x1 C22 (t )d f (t ) N 2 (t ) z (t1 , v )] , '1 ( q, t )
w(t ) L(t ) P1 [ z(t , v ) C11 (t ) x0 C21 (t ) x1 C22 (t )d f (t ) N 2 (t ) z (t , v )] . 62
Lemma 9. Let the matrix W2 t 0 ,t1 be positive definite, the set S be convex. Then: 1) the functional (2.111) under the conditions (2.112)- (2.114) is convex; 2) the derivative F2 q (q, t ) F2v , F2 x , F2 x , F2 d , F2 w , F2 z , F2 z (t ) satisfies the Lipschitz condition || F2 q (q 'q, t ) F2 q (q, t ) ||d K2 | 'q |, q, q 'q R msm 2n2( m n ) , where K2 const ! 0 , 'q ('v, 'x0 , 'x1 , 'd , 'w, 'z, 'z(t1 )) . Theorem 10. Let the matrix W2 t 0 ,t1 be positive definite. Then the functional (2.111) under the conditions (2.112)- (2.114) is continuously Freshet differentiated, the gradient 0
1
1
1
I c( v, x0 , x1 , d , w)
I c(T )
at any point T v , x0 , x1 , d , w X I vc (T )
2
I c (T ), I c (T ), I c (T ), I c (T ), I c (T ) H v
x0
x1
d
w
L2 ( I , R ) u S u Г u W is computed by the formula m2
F2 v ( q(t ), t ) B2* (t )\ (t ) L2 ( I , R m2 ), I xc0 (T )
t1
³F
2 x0
( q(t ), t )dt R n ,
t0
I xc1 (T )
t1
³F
2 x1
( q(t ), t )dt R n , I dc (T )
t1
³F
2d
( q(t ), t )dt R m1 ,
(2.118)
t0
t0
I wc (T )
F2 w ( q(t ), t ) L2 ( I , R S ),
where partial derivatives are defined by the formula (2.117), the function z (t ) z (t , v) , t I t I is a solution of the differential equation (2.112), and the function \ (t ) , t I is a solution of the conjugate system \
t1
F2 z (q(t ), t ) A2*\ , \ (t1 ) ³ F2 z (t1 ) (q(t ), t )dt .
(2.119)
t0
Moreover, the gradient I c(T ) , T X satisfies the Lipschitz condition I c(T1 ) I c(T 2 ) d K 3 T1 T 2 , T1 , T 2 X , (2.120) where K 3 const ! 0 is the Lipschitz constant. On the basis of the formulas (2.118) – (2.120)) construct the sequences vn 1 x1n 1
vn D n I vc (T n ), x0n 1
PS [ x0n D n I xc0 (T n )]
PS [ x1n D n I xc1 (T n )], d n 1
PГ [d n D n I dc (T n )],
PW [ wn D n I cw(T n )], n 0,1,2,... 2 where 0 H 0 d D n d , H 1 ! 0 , in particular, D n K 3 2H 1
(2.121)
wn 1
1 K3
const ! 0 , K 3 ! 0
is the Lipschitz constant from (2.120), T n (vn , x0n , x1n , d n wn ) X , S , *, W are convex and closed sets. Theorem 11. Let the matrix W2 t 0 ,t1 be positive definite, the set S be convex and closed, the sequence ^T n ` ^vn , x0n , x1n , d n , wn ` X be defined by the formula (2.121). Then: 1) The numerical sequence ^I (T n )` decreases strictly; 2) T n T n1 o 0 as n o f ; 63
If, in addition, the set $M(\theta_0) = \{\theta \in X \,/\, I(\theta) \le I(\theta_0)\}$ is bounded, then:
3) the sequence $\{\theta_n\} \subset X$ is minimizing, i.e. $\lim_{n\to\infty} I(\theta_n) = I_* = \inf_{\theta\in X} I(\theta)$;
4) the set $X_* = \{\theta_* \in X \,/\, I(\theta_*) = I_* = \inf_{\theta\in X} I(\theta)\} \ne \emptyset$;
5) the sequence $\{\theta_n\} \subset X$ weakly converges to the set $X_*$;
6) the following estimate of the convergence rate holds: $0 \le I(\theta_n) - I_* \le \dfrac{m_2}{n}$, $n = 1, 2, \ldots$, $m_2 = \mathrm{const} > 0$;
7) the boundary value problem (2.90)–(2.94) has a solution if and only if the value $I(\theta_*) = 0$.

Example 3. Consider the boundary value problem (2.55), (2.56) from Example 1 with the state variable constraint (2.85) and the integral constraint

$$\int_0^\pi x_1(t)\,dt \le 1.$$

For this example $\dot x = A_1 x + Bx + \mu(t)$, $t \in I$, where

$$A_1 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix}, \quad \mu(t) = \begin{pmatrix} 0 \\ \cos t \end{pmatrix},$$
$$x_0 \in S_0 = \{(x_1(0), x_2(0)) \in R^2 \,/\, x_1(0) = 0,\ x_2(0) = x_{20} \in R^1\},$$
$$x_1 \in S_1 = \{(x_1(\pi), x_2(\pi)) \in R^2 \,/\, x_1(\pi) = 0,\ x_2(\pi) = x_{21} \in R^1\},$$
$$\omega(t) = \frac{0.37t}{\pi} - 0.37, \quad \varphi(t) = \frac{0.44t}{\pi}, \quad m_1 = 1, \quad m_2 = 1, \quad g_1(x_1) = \int_0^\pi x_1(t)\,dt, \quad c_1 = 1,$$
$$L(t) \equiv (1, 0), \quad a_1(t) = (1, 0), \quad t \in I = [0, \pi].$$

Transformation. The function $\eta(t) = \eta_1(t)$, $t \in I$, where

$$\eta(t) = \int_0^t x_1(\tau)\,d\tau, \quad \eta(0) = 0, \quad \eta(\pi) = 1 - d_1, \quad d_1 \ge 0.$$

The set $\Gamma = \{d_1 \in R^1 \,/\, d_1 \ge 0\}$. Let $\xi(t) = (\xi_1(t), \xi_2(t), \xi_3(t))$, where $\xi_1(t) = x_1(t)$, $\xi_2(t) = x_2(t)$, $\xi_3(t) = \eta(t)$. Then

$$\dot\xi = A_2\xi + \bar B_2\xi + \mu_1(t), \quad t \in I = [0, \pi], \quad (2.122)$$
$$\xi(0) = \xi_0 = \begin{pmatrix} x_1(0) \\ x_2(0) \\ \eta(0) \end{pmatrix} = \begin{pmatrix} 0 \\ x_{20} \\ 0 \end{pmatrix}, \quad \xi(\pi) = \xi_1 = \begin{pmatrix} x_1(\pi) \\ x_2(\pi) \\ \eta(\pi) \end{pmatrix} = \begin{pmatrix} 0 \\ x_{21} \\ 1 - d_1 \end{pmatrix}, \quad (2.123)$$

where

$$A_2 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad \bar B_2 = \begin{pmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \mu_1(t) = \begin{pmatrix} 0 \\ \cos t \\ 0 \end{pmatrix}, \quad P_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$

The state variable constraint is rewritten in the form

$$\frac{0.37t}{\pi} - 0.37 \le \xi_1(t) \le \frac{0.44t}{\pi}, \quad t \in I = [0, \pi]. \quad (2.124)$$

Here $x_{20} \in R^1$, $x_{21} \in R^1$, $d_1 \in \Gamma = \{d_1 \in R^1 \,/\, d_1 \ge 0\}$.
The linear controllable system (2.101)–(2.103) has the form

$$\dot y = A_2 y + B_2 u(t) + \mu_1(t), \quad t \in I = [0, \pi], \quad (2.125)$$
$$y(0) = \xi_0, \quad y(\pi) = \xi_1, \quad (x_{20}, x_{21}) \in R^2, \quad u(\cdot) \in L_2(I, R^1), \quad d_1 \in \Gamma,$$

where the matrix $B_2^* = (0, -1, 0)$, and the linear homogeneous system is $\dot\lambda = A_2\lambda$ ($\dot\lambda_1 = \lambda_2$, $\dot\lambda_2 = 0$, $\dot\lambda_3 = \lambda_1$). The fundamental matrix of the linear homogeneous system $\dot\lambda = A_2\lambda$ is defined by the formula

$$V(t) = e^{A_2 t} = \begin{pmatrix} 1 & t & 0 \\ 0 & 1 & 0 \\ t & t^2/2 & 1 \end{pmatrix}, \quad V^{-1}(t) = \begin{pmatrix} 1 & -t & 0 \\ 0 & 1 & 0 \\ -t & t^2/2 & 1 \end{pmatrix}, \quad \Sigma(t,\tau) = V(t)V^{-1}(\tau).$$
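Since $A_2$ here is nilpotent ($A_2^3 = 0$), the matrix exponential terminates: $e^{A_2 t} = I + A_2 t + A_2^2 t^2/2$, which gives exactly the $V(t)$ above. A quick numerical verification (an illustration, not part of the original text):

```python
import numpy as np

# A2 from Example 3 is nilpotent, so e^{A2 t} = I + A2 t + A2^2 t^2 / 2.
A2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0]])

t = 0.7
V = np.eye(3) + A2 * t + A2 @ A2 * t**2 / 2
V_expected = np.array([[1.0, t, 0.0],
                       [0.0, 1.0, 0.0],
                       [t, t**2 / 2, 1.0]])
Vinv = np.eye(3) - A2 * t + A2 @ A2 * t**2 / 2  # e^{-A2 t}

print(np.allclose(V, V_expected),
      np.allclose(V @ Vinv, np.eye(3)),
      np.allclose(A2 @ A2 @ A2, 0))
```

The check also confirms that $V^{-1}(t) = e^{-A_2 t}$ has the stated form, so $\Sigma(t,\tau) = V(t)V^{-1}(\tau)$ is computed from two closed-form matrices.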
The vector S
a
6(0, t )[1 [ 0 ³ 6(0, t ) P1 (t )dt
(Sx21 2; x21 x20 ;S 2 x21 / 2 1 d1 S ) .
0
The matrices W2 (0, S )
W21 (0, S )
§ S 3 / 3 S 2 / 2 S 4 / 8· ¸ ¨ * * 2 S 3 /6 ¸ ! 0 , ³0 6(0, t ) B2 B2 6 (0, t )dt ¨¨ S 4 / 2 S3 ¸ 5 © S / 8 S / 6 S / 20 ¹
S
§ 192 / S 3 ¨ ¨ 36 / S 2 ¨ 360 / S 4 ©
36 / S 2 9/S 60 / S 3
W2 (S , t )
360 / S 4 · ¸ 60 / S 3 ¸ , W2 (0, t ) 720 / S 5 ¸¹
§ t 3 / 3 t 2 / 2 t 4 / 8· ¸ ¨ 2 t t3 / 6 ¸ ! 0 , ¨ t / 2 ¨ t 4 / 8 t 3 / 6 t 5 / 20 ¸ ¹ ©
§ (S 3 t 3 ) / 3 (t 2 S 2 ) / 2 (t 4 S 4 ) / 8 · ¸ ¨ 2 S t (S 3 t 3 ) / 6 ¸ . ¨ (t S 2 ) / 2 ¨ (t 4 S 4 ) / 8 (S 3 t 3 ) / 6 (S 5 t 5 ) / 20 ¸ ¹ ©
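These entries are easy to cross-check numerically. The sketch below (an illustration, not part of the original text) approximates the Gramian $W_2(0,\pi)$ by Simpson quadrature, using $\Phi(0,t)\bar B_2 = (-t,\,1,\,t^2/2)^*$, and multiplies the closed-form matrix by the stated inverse:

```python
import math

def phi_b(t):
    # Φ(0,t)·B̄2 = (-t, 1, t²/2)ᵀ for A2 = [[0,1,0],[0,0,0],[1,0,0]], B̄2 = (0,1,0)ᵀ
    return (-t, 1.0, t * t / 2.0)

def gramian(T, n=2000):
    """Simpson approximation of W2(0,T) = ∫_0^T (Φ(0,t)B̄2)(Φ(0,t)B̄2)ᵀ dt."""
    h = T / n
    W = [[0.0] * 3 for _ in range(3)]
    for k in range(n + 1):
        w = 1 if k in (0, n) else (4 if k % 2 else 2)   # Simpson weights
        p = phi_b(k * h)
        for i in range(3):
            for j in range(3):
                W[i][j] += w * p[i] * p[j]
    return [[x * h / 3.0 for x in row] for row in W]

pi = math.pi
W_exact = [[pi**3/3, -pi**2/2, -pi**4/8],
           [-pi**2/2, pi,       pi**3/6],
           [-pi**4/8, pi**3/6,  pi**5/20]]
W_inv = [[192/pi**3, 36/pi**2, 360/pi**4],
         [36/pi**2,  9/pi,     60/pi**3],
         [360/pi**4, 60/pi**3, 720/pi**5]]
W_num = gramian(pi)
# W_exact · W_inv should be the 3×3 identity matrix
prod = [[sum(W_exact[i][k] * W_inv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
```

Simpson's rule is essentially exact here because every integrand entry is a polynomial of degree at most four.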
Then (see (2.122)–(2.124))

$$\lambda_1(t,\xi_0,\xi_1) = \bar B_2^*\Phi^*(0,t)W_2^{-1}(0,\pi)a = -(\pi x_{21} + 2)\frac{180t^2 - 192\pi t + 36\pi^2}{\pi^4} + (x_{21} - x_{20})\frac{30t^2 - 36\pi t + 9\pi^2}{\pi^3} + \left(\frac{\pi^2 x_{21}}{2} + 1 - d_1 + \pi\right)\frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5},$$

and $\lambda_2(t,\xi_0,\xi_1) = \Phi(t,0)W_2(t,\pi)W_2^{-1}(0,\pi)\xi_0 + \Phi(t,0)W_2(0,t)W_2^{-1}(0,\pi)\Phi(0,\pi)\xi_1 + \bar\mu_1(t)$, where

$$\Phi(t,0)W_2(t,\pi)W_2^{-1}(0,\pi)\xi_0 = x_{20}\begin{pmatrix} t - 9t^2/(2\pi) + 6t^3/\pi^2 - 5t^4/(2\pi^3) \\ 1 - 9t/\pi + 18t^2/\pi^2 - 10t^3/\pi^3 \\ t^2/2 - 3t^3/(2\pi) + 3t^4/(2\pi^2) - t^5/(2\pi^3) \end{pmatrix},$$

$$\Phi(t,0)W_2(0,t)W_2^{-1}(0,\pi)\Phi(0,\pi)\xi_1 = \begin{pmatrix} x_{21}(-8\pi t^3 + 3\pi^2 t^2 + 5t^4)/2\pi^3 + (1-d_1)(-60\pi t^3 + 30\pi^2 t^2 + 30t^4)/\pi^5 \\ x_{21}(-12\pi t^2 + 3\pi^2 t + 10t^3)/\pi^3 + (1-d_1)(-180\pi t^2 + 60\pi^2 t + 120t^3)/\pi^5 \\ x_{21}(-2\pi t^4 + \pi^2 t^3 + t^5)/2\pi^3 + (1-d_1)(-15\pi t^4 + 10\pi^2 t^3 + 6t^5)/\pi^5 \end{pmatrix},$$

$$\bar\mu_1(t) = \int_0^t\Phi(t,\tau)\mu_1(\tau)\,d\tau - \Phi(t,0)W_2(0,t)W_2^{-1}(0,\pi)\int_0^{\pi}\Phi(\pi,t)\mu_1(t)\,dt = \begin{pmatrix} 1 - \cos t + (4\pi t^3 - 6\pi^2 t^2)/\pi^4 \\ \sin t + (12\pi t^2 - 12\pi^2 t)/\pi^4 \\ t - \sin t + (\pi t^4 - 2\pi^2 t^3)/\pi^4 \end{pmatrix}.$$

Here

$$\bar B_2^*\Phi^*(0,t)W_2^{-1}(0,\pi) = \left(\frac{180t^2 - 192\pi t + 36\pi^2}{\pi^4},\ \frac{30t^2 - 36\pi t + 9\pi^2}{\pi^3},\ \frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5}\right),$$

$$\Phi(t,0)W_2(0,t)W_2^{-1}(0,\pi)\Phi(0,\pi) = \begin{pmatrix} (28\pi t^3 - 12\pi^2 t^2 - 15t^4)/\pi^4 & (-8\pi t^3 + 3\pi^2 t^2 + 5t^4)/2\pi^3 & (-60\pi t^3 + 30\pi^2 t^2 + 30t^4)/\pi^5 \\ (84\pi t^2 - 24\pi^2 t - 60t^3)/\pi^4 & (-12\pi t^2 + 3\pi^2 t + 10t^3)/\pi^3 & (-180\pi t^2 + 60\pi^2 t + 120t^3)/\pi^5 \\ (7\pi t^4 - 4\pi^2 t^3 - 3t^5)/\pi^4 & (-2\pi t^4 + \pi^2 t^3 + t^5)/2\pi^3 & (-15\pi t^4 + 10\pi^2 t^3 + 6t^5)/\pi^5 \end{pmatrix},$$

$$N_1(t) = -\bar B_2^*\Phi^*(0,t)W_2^{-1}(0,\pi)\Phi(0,\pi) = -\left(\frac{180t^2 - 168\pi t + 24\pi^2}{\pi^4},\ \frac{30t^2 - 24\pi t + 3\pi^2}{\pi^3},\ \frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5}\right),$$

$$N_2(t) = -\Phi(t,0)W_2(0,t)W_2^{-1}(0,\pi)\Phi(0,\pi).$$
As it follows from Theorem 10, the control

$$u(t) = v(t) + \lambda_1(t,\xi_0,\xi_1) + N_1(t)z(\pi,v) = v(t) - (\pi x_{21} + 2)\frac{180t^2 - 192\pi t + 36\pi^2}{\pi^4} + (x_{21} - x_{20})\frac{30t^2 - 36\pi t + 9\pi^2}{\pi^3} + \left(\frac{\pi^2 x_{21}}{2} + 1 - d_1 + \pi\right)\frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5} - \frac{180t^2 - 168\pi t + 24\pi^2}{\pi^4}z_1(\pi,v) - \frac{30t^2 - 24\pi t + 3\pi^2}{\pi^3}z_2(\pi,v) - \frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5}z_3(\pi,v), \quad t \in I = [0,\pi], \eqno(2.126)$$

where $z(t,v)$, $t \in I$, is a solution of the differential equation

$$\dot z = A_2 z + \bar B_2 v(t), \quad z(0) = 0, \quad v(\cdot) \in L_2(I,R^1). \eqno(2.127)$$

The solution of the differential equation (2.125) corresponding to the control (2.126) is $y(t) = (y_1(t), y_2(t), y_3(t)) = z(t,v) + \lambda_2(t,\xi_0,\xi_1) + N_2(t)z(\pi,v)$, $t \in I = [0,\pi]$, where

$$y_1(t) = z_1(t,v) + x_{20}\left[t - \frac{9t^2}{2\pi} + \frac{6t^3}{\pi^2} - \frac{5t^4}{2\pi^3}\right] + x_{21}\frac{-8\pi t^3 + 3\pi^2 t^2 + 5t^4}{2\pi^3} + (1-d_1)\frac{-60\pi t^3 + 30\pi^2 t^2 + 30t^4}{\pi^5} + 1 - \cos t + \frac{4\pi t^3 - 6\pi^2 t^2}{\pi^4} - \frac{28\pi t^3 - 12\pi^2 t^2 - 15t^4}{\pi^4}z_1(\pi,v) - \frac{-8\pi t^3 + 3\pi^2 t^2 + 5t^4}{2\pi^3}z_2(\pi,v) - \frac{-60\pi t^3 + 30\pi^2 t^2 + 30t^4}{\pi^5}z_3(\pi,v), \eqno(2.128)$$

$$y_2(t) = z_2(t,v) + x_{20}\left[1 - \frac{9t}{\pi} + \frac{18t^2}{\pi^2} - \frac{10t^3}{\pi^3}\right] + x_{21}\frac{-12\pi t^2 + 3\pi^2 t + 10t^3}{\pi^3} + (1-d_1)\frac{-180\pi t^2 + 60\pi^2 t + 120t^3}{\pi^5} + \sin t + \frac{12\pi t^2 - 12\pi^2 t}{\pi^4} - \frac{84\pi t^2 - 24\pi^2 t - 60t^3}{\pi^4}z_1(\pi,v) - \frac{-12\pi t^2 + 3\pi^2 t + 10t^3}{\pi^3}z_2(\pi,v) - \frac{-180\pi t^2 + 60\pi^2 t + 120t^3}{\pi^5}z_3(\pi,v), \eqno(2.129)$$

$$y_3(t) = z_3(t,v) + x_{20}\left[\frac{t^2}{2} - \frac{3t^3}{2\pi} + \frac{3t^4}{2\pi^2} - \frac{t^5}{2\pi^3}\right] + x_{21}\frac{-2\pi t^4 + \pi^2 t^3 + t^5}{2\pi^3} + (1-d_1)\frac{-15\pi t^4 + 10\pi^2 t^3 + 6t^5}{\pi^5} + t - \sin t + \frac{\pi t^4 - 2\pi^2 t^3}{\pi^4} - \frac{7\pi t^4 - 4\pi^2 t^3 - 3t^5}{\pi^4}z_1(\pi,v) - \frac{-2\pi t^4 + \pi^2 t^3 + t^5}{2\pi^3}z_2(\pi,v) - \frac{-15\pi t^4 + 10\pi^2 t^3 + 6t^5}{\pi^5}z_3(\pi,v), \quad t \in I. \eqno(2.130)$$
Note that $y_1(0) = 0$, $y_2(0) = x_{20}$, $y_3(0) = 0$, $y_1(\pi) = 0$, $y_2(\pi) = x_{21}$, $y_3(\pi) = 1 - d_1$. The optimization problem (2.111)–(2.114) is written in the form

$$J(v, x_{20}, x_{21}, d_1, w) = \int_0^{\pi}\left[|u(t) + y_1(t)|^2 + |w(t) - y_1(t)|^2\right]dt \to \inf \eqno(2.131)$$
under the conditions (2.127), where

$$v(\cdot) \in L_2(I,R^1), \quad x_{20} \in R^1, \quad x_{21} \in R^1, \quad w(t) \in W(t), \quad d_1 \in \Gamma. \eqno(2.132)$$

The function $u(t)$, $t \in I$, is defined by formula (2.126), the function $y_1(t)$, $t \in I$, is defined by (2.128), and

$$W(t) = \left\{w(\cdot) \in L_2(I,R^1) \,/\, \frac{0.37t}{\pi} - 0.37 \le w(t) \le \frac{0.44t}{\pi} \ \text{a.e.}\ t \in I\right\}. \eqno(2.133)$$

The function

$$F_2(q(t),t) = |u(t) + y_1(t)|^2 + |w(t) - y_1(t)|^2, \quad q(t) = (v(t), x_{20}, x_{21}, d_1, w(t), z(t,v), z(\pi,v)), \ t \in I.$$

The partial derivatives are

$$F_{2v}(q,t) = 2[u(t) + y_1(t)], \qquad F_{2w}(q,t) = 2[w(t) - y_1(t)],$$

and, for each of the remaining arguments $s \in \{x_{20}, x_{21}, d_1, z_1(\pi), z_2(\pi), z_3(\pi)\}$,

$$F_{2s}(q,t) = 2[u(t) + y_1(t)]\left[\frac{\partial u}{\partial s} + \frac{\partial y_1}{\partial s}\right] - 2[w(t) - y_1(t)]\frac{\partial y_1}{\partial s},$$

where (see (2.126), (2.128))

$$\frac{\partial u}{\partial x_{20}} = -\frac{30t^2 - 36\pi t + 9\pi^2}{\pi^3}, \quad \frac{\partial y_1}{\partial x_{20}} = t - \frac{9t^2}{2\pi} + \frac{6t^3}{\pi^2} - \frac{5t^4}{2\pi^3};$$
$$\frac{\partial u}{\partial x_{21}} = \frac{30t^2 - 24\pi t + 3\pi^2}{\pi^3}, \quad \frac{\partial y_1}{\partial x_{21}} = \frac{-8\pi t^3 + 3\pi^2 t^2 + 5t^4}{2\pi^3};$$
$$\frac{\partial u}{\partial d_1} = -\frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5}, \quad \frac{\partial y_1}{\partial d_1} = \frac{60\pi t^3 - 30\pi^2 t^2 - 30t^4}{\pi^5};$$
$$\frac{\partial u}{\partial z_1(\pi)} = -\frac{180t^2 - 168\pi t + 24\pi^2}{\pi^4}, \quad \frac{\partial y_1}{\partial z_1(\pi)} = -\frac{28\pi t^3 - 12\pi^2 t^2 - 15t^4}{\pi^4};$$
$$\frac{\partial u}{\partial z_2(\pi)} = -\frac{30t^2 - 24\pi t + 3\pi^2}{\pi^3}, \quad \frac{\partial y_1}{\partial z_2(\pi)} = \frac{8\pi t^3 - 3\pi^2 t^2 - 5t^4}{2\pi^3};$$
$$\frac{\partial u}{\partial z_3(\pi)} = -\frac{360t^2 - 360\pi t + 60\pi^2}{\pi^5}, \quad \frac{\partial y_1}{\partial z_3(\pi)} = \frac{60\pi t^3 - 30\pi^2 t^2 - 30t^4}{\pi^5}.$$

Finally, since $y_1$ depends on $z(t)$ only through $z_1(t)$,

$$F_{2z_1}(q,t) = 2[u(t) + y_1(t)] - 2[w(t) - y_1(t)], \quad F_{2z_2}(q,t) = 0, \quad F_{2z_3}(q,t) = 0.$$
It is easily proved that the functional (2.131) under the conditions (2.127), (2.132), (2.133) is convex. Therefore, the sequences constructed below are minimizing. For this problem the control is $\theta = (v, x_{20}, x_{21}, d_1, w) \in X$. Choose an initial guess $\theta^0 = (v_0(t), x_{20}^0, x_{21}^0, d_1^0, w_0(t)) \in X$, where $v_0(\cdot) \in L_2(I,R^1)$, $x_{20}^0 \in R^1$, $x_{21}^0 \in R^1$, $d_1^0 \in \Gamma$, $w_0(t) \in W(t)$. In particular, take $v_0(t) \equiv 1$, $w_0(t) = 0.405t/\pi - 0.185 \in W(t)$, $d_1^0 = 0.5$, $x_{20}^0 = \pi/8$, $x_{21}^0 = \pi/8$. Calculate the solution of the differential equations $\dot z_{10} = z_{20}$, $\dot z_{20} = v_0(t)$, $\dot z_{30} = z_{10}$, $t \in [0,\pi]$, with $z_{10}(0) = z_{20}(0) = z_{30}(0) = 0$. Thus $z_{10}(t)$, $z_{20}(t)$, $z_{30}(t)$, $t \in [0,\pi]$, are known. Then $q_0 = (v_0, x_{20}^0, x_{21}^0, d_1^0, w_0, z_0(t), z_0(\pi))$, where $z_0(t) = (z_{10}(t), z_{20}(t), z_{30}(t))$, $t \in [0,\pi]$. Calculate the partial derivatives $F_{2v}(q_0,t)$, $F_{2x_{20}}(q_0,t)$, $F_{2x_{21}}(q_0,t)$, $F_{2d_1}(q_0,t)$, $F_{2w}(q_0,t)$, $F_{2z}(q_0,t)$, $F_{2z(\pi)}(q_0,t)$.
The next approximation is

$$v_1 = v_0 - \alpha_0 I'_v(\theta^0), \quad w_1 = P_W[w_0 - \alpha_0 I'_w(\theta^0)], \quad d_1^1 = P_\Gamma[d_1^0 - \alpha_0 I'_{d_1}(\theta^0)],$$
$$x_{20}^1 = x_{20}^0 - \alpha_0 I'_{x_{20}}(\theta^0), \quad x_{21}^1 = x_{21}^0 - \alpha_0 I'_{x_{21}}(\theta^0),$$

where

$$I'_v(\theta^0) = F_{2v}(q_0(t),t) - \bar B_2^*\psi_0(t), \quad I'_w(\theta^0) = F_{2w}(q_0(t),t),$$
$$I'_{x_{20}}(\theta^0) = \int_0^{\pi}F_{2x_{20}}(q_0(t),t)\,dt, \quad I'_{x_{21}}(\theta^0) = \int_0^{\pi}F_{2x_{21}}(q_0(t),t)\,dt, \quad I'_{d_1}(\theta^0) = \int_0^{\pi}F_{2d_1}(q_0(t),t)\,dt.$$
Here $\psi_0(t) = (\psi_{10}(t), \psi_{20}(t), \psi_{30}(t))$, $t \in I$, is a solution to the adjoint system

$$\dot\psi_{10} = F_{2z_1}(q_0(t),t) - \psi_{30}, \quad \dot\psi_{20} = -\psi_{10}, \quad \dot\psi_{30} = 0,$$
$$\psi_{10}(\pi) = -\int_0^{\pi}F_{2z_1(\pi)}(q_0(t),t)\,dt, \quad \psi_{20}(\pi) = -\int_0^{\pi}F_{2z_2(\pi)}(q_0(t),t)\,dt, \quad \psi_{30}(\pi) = -\int_0^{\pi}F_{2z_3(\pi)}(q_0(t),t)\,dt.$$

The step is taken constant, $\alpha_0 = \mathrm{const} = 0.1$. As a result $\theta^1 = (v_1, x_{20}^1, x_{21}^1, d_1^1, w_1)$ is found.
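The update rule above — a gradient step followed by projection onto the admissible set — can be illustrated on a small finite-dimensional analogue. In the sketch below the matrix, target vector and box are hypothetical illustration data, not quantities from this example; the point is only the structure $\theta^{n+1} = P[\theta^n - \alpha\,I'(\theta^n)]$:

```python
def project_box(x, lo, hi):
    """Componentwise projection onto the box [lo, hi] (analogue of P_W, P_Γ)."""
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def residual(theta, A, b):
    return [sum(A[i][j] * theta[j] for j in range(len(theta))) - b[i]
            for i in range(len(b))]

def grad(theta, A, b):
    # gradient of J(θ) = |Aθ - b|² is 2 Aᵀ(Aθ - b)
    r = residual(theta, A, b)
    return [2 * sum(A[i][j] * r[i] for i in range(len(b)))
            for j in range(len(theta))]

def J(theta, A, b):
    return sum(v * v for v in residual(theta, A, b))

A = [[1.0, 0.0], [0.0, 2.0]]   # hypothetical data
b = [0.5, 1.0]
lo, hi = [0.0, 0.0], [1.0, 1.0]
theta = [1.0, 0.0]             # initial guess θ⁰ inside the box
alpha = 0.1                    # constant step, as in the text
for _ in range(200):
    g = grad(theta, A, b)
    theta = project_box([t - alpha * gi for t, gi in zip(theta, g)], lo, hi)
```

For these data the iterates converge to the unconstrained minimizer $(0.5, 0.5)$, which happens to lie in the box, so $J(\theta^n) \to 0$.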
Further, the construction of the sequence $\{\theta^n\}$ is repeated from the point $\theta^1$ with the step $\alpha_n = 0.1$, $n = 0, 1, 2, \ldots$. It can be shown that, as $n \to \infty$,

$$v_n \to v_*(t) = \frac{\pi\sin t - 2t\sin t}{4}, \quad w_n \to w_*(t) = \frac{2t\sin t - \pi\sin t}{4}, \quad d_1^n \to d_{1*} = 1, \quad x_{20}^n \to x_{20*} = -\frac{\pi}{4}, \quad x_{21}^n \to x_{21*} = -\frac{\pi}{4}.$$

The functions

$$z_1(t,v_*) = \cos t - 1 + \frac{2t\sin t - \pi\sin t + \pi t}{4}, \quad z_2(t,v_*) = \frac{t\cos t - \sin t}{2} + \frac{\pi(1 - \cos t)}{4},$$
$$z_3(t,v_*) = \sin t - t + \frac{\sin t - t\cos t}{2} + \frac{\pi(\cos t - 1)}{4} + \frac{\pi t^2}{8}, \quad t \in I,$$

where $z_1(\pi,v_*) = \pi^2/4 - 2$, $z_2(\pi,v_*) = 0$, $z_3(\pi,v_*) = \pi^3/8 - \pi$. Then (see (2.128)–(2.130))

$$y_{1*}(t) = \frac{2t\sin t - \pi\sin t}{4} = x_{1*}(t), \quad y_{2*}(t) = x_{2*}(t) = \frac{2\sin t + 2t\cos t - \pi\cos t}{4}, \quad y_{3*}(t) = \frac{2\sin t - 2t\cos t + \pi\cos t - \pi}{4}, \quad t \in I = [0,\pi].$$

The solution to the original problem is $x_{1*}(t) = y_{1*}(t)$, $x_{2*}(t) = y_{2*}(t)$, $t \in I$, with

$$x_*(0) = \left(0, -\frac{\pi}{4}\right), \quad x_*(\pi) = \left(0, -\frac{\pi}{4}\right), \quad \frac{0.37t}{\pi} - 0.37 \le x_{1*}(t) \le \frac{0.44t}{\pi}, \ t \in I, \quad \int_0^{\pi}x_{1*}(t)\,dt = 0 \le 1.$$
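The limit solution can be checked numerically. The sketch below (an illustration; the sign pattern of $x_{1*}$ is reconstructed from the boundary data) verifies the boundary values and the integral constraint by Simpson quadrature, and measures how close $x_{1*}$ comes to the state bounds; with the two-digit coefficients $0.37$ and $0.44$ the bounds are respected only to within about $10^{-3}$:

```python
import math

def x1_star(t):
    # limit state component x1*(t) = (2t·sin t − π·sin t)/4
    return (2 * t - math.pi) * math.sin(t) / 4.0

def gamma(t):   # lower state bound γ(t)
    return 0.37 * t / math.pi - 0.37

def delta(t):   # upper state bound δ(t)
    return 0.44 * t / math.pi

n = 2000
h = math.pi / n
integral = 0.0      # Simpson approximation of ∫ x1*(t) dt
violation = 0.0     # worst overshoot of the state bounds (0 if fully feasible)
for k in range(n + 1):
    t = k * h
    v = x1_star(t)
    violation = max(violation, gamma(t) - v, v - delta(t))
    integral += (1 if k in (0, n) else (4 if k % 2 else 2)) * v
integral *= h / 3.0
```

The integral constraint holds with a large margin ($\int x_{1*}\,dt = 0 \le 1$), while the lower bound is nearly active around $t \approx 0.7$–$0.9$, which is what makes this example nontrivial.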
Comments

The results of this chapter solve the following previously open problems:
– a necessary and sufficient condition for the existence of a solution to a boundary value problem for linear ordinary differential equations with boundary conditions in a given convex closed set; a method for constructing a solution by embedding the original problem into a special optimal control problem has been developed;
– a necessary and sufficient condition for the existence of a solution to a boundary value problem for linear ordinary differential equations with boundary conditions in a given set and state variable constraints; a method for constructing a solution by building minimizing sequences in a functional space has been proposed, estimates of the convergence rate of the minimizing sequences are obtained, an algorithm for solving the problem has been developed, and the constructiveness of the proposed method is shown by an example;
– a necessary and sufficient condition for the existence of a solution to a boundary value problem for linear ordinary differential equations with boundary conditions, state variable constraints and integral constraints; the integral constraints are reduced to differential equations with boundary conditions by introducing fictitious control functions, and the original problem is reduced to a special free end point optimal control problem with a nonstandard functional; the Frechet derivatives of the functional with respect to the control function are computed, minimizing sequences in a functional space are constructed, and their convergence is studied; the main points of the proposed method are illustrated by an example.

The derived results are a significant contribution to the theory of ordinary differential equations.
A constructive theory of boundary value problems for linear ordinary differential equations, which has no analogues in the Republic of Kazakhstan or abroad, has been developed. It has been shown that boundary value problems for linear systems of ordinary differential equations can be reduced to free end point optimal control problems. By solving a free end point optimal control problem one can obtain solutions to boundary value problems with constraints, boundary value problems with a parameter, and problems of constructing periodic solutions of autonomous systems. The basis of the proposed methods for solving boundary value problems with various constraints is the possibility of reducing these problems to a class of linear Fredholm integral equations of the first kind. The first kind Fredholm integral equation is one of the little-studied problems of mathematics, and for this reason fundamental research on integral equations, and the solution of boundary value problems for linear ordinary differential equations on its basis, form a new and promising direction in mathematics.

References
S.A. Aisagaliev. A general solution to one class of integral equations // Mathematical journal, 2005, Vol. 5, № 4 (1.20), p. 17-34. (in russian). S.A. Aisagaliev, Zh.Kh. Zhunussova, M.N. Kalimoldayev. The imbedding principle for boundary value problems of ordinary differential equations // Mathematical journal. – 2012. – Vol. 12. № 2(44). – p. 5-22. (in russian). S.A. Aisagaliev, M.N. Kalimoldayev, E.M. Pozdeyeva. To the boundary value problem of ordinary differential equations // Vestnik KazNU, ser. math., mech., inf. 2013, № 1(76), p. 5-30. (in russian). S.A. Aisagaliev, M.N. Kalimoldayev. Constructive method for solving boundary value problems of ordinary differential equations // Differential Equations. – 2014, Vol. 50, № 8. – p. 901-916. (in russian). S.A. Aisagaliev, T.S. Aisagaliev. Methods for solving boundary value problems. – Almaty, «Кazakh University» publishing house, 2002. – 348 p. (in russian). S.A. Aisagaliev. Constructive theory of boundary value problems of ordinary differential equations. – Almaty, «Кazakh University» publishing house, 2015. – 207 p. (in russian). S.A. Aisagaliev. The problems of qualitative theory of differential equations. – Almaty, «Кazakh University» publishing house, 2016. – 397 p. (in russian).
70
Chapter III BOUNDARY VALUE PROBLEMS FOR NONLINEAR ORDINARY DIFFERENTIAL EQUATIONS
Methods for solving boundary value problems with constraints are proposed. Boundary value problems with boundary conditions imposed on convex closed sets, boundary value problems with state variable constraints and integral constraints are considered. Solvability conditions for the mentioned boundary value problems are derived and methods for solving them are developed. Consider the boundary value problem x
A(t ) x B(t ) f ( x, t ) P (t ),
( x(t 0 )
x (t ) G (t ) : G ( t )
[t 0 , t1 ] ,
(3.1) (3.2)
x1 ) S R ,
x0 , x(t1 )
with the state variable constraints
tI 2n
^x R
n
J (t ) d F ( x, t ) d G (t ), t I `,
(3.3)
and the integral constraints j 1, m1 ,
g j ( x) d c j ,
g j ( x)
t1
g j ( x)
³f
0j
( x(t ), t )dt ,
cj,
j
m1 1, m2 ,
(3.4)
j 1, m2 .
(3.5)
t0
Here A(t ), B(t ) are given n u n, n u m matrices respectively, P (t ), t I is a given n u1 vector valued function with piecewise continuous elements, the given m u 1 vector valued function f ( x, u , t ) is defined and continuous with respect to the set of variables ( x, t ) R n u I and satisfies the conditions: f ( x, t ) f ( y, t ) d l (t ) x y , f ( x, t ) d c0 x c1 (t ),
( x, t ), ( y, t ) R n u I , c0
const t 0,
l
const ! 0,
c1 (t ) L1 ( I , R1 ) ,
S is a given convex closed set. The r u 1 vector valued function F ( x, t ) ( F1 ( x, t ),..., Fr ( x, t )), t I is continuous with respect to a set of arguments,
J (t ) (J 1 (t ),..., J r (t )) , G (t ) (G1 (t ),..., G r (t )), t I are given continuous functions.
The values c j , j 1, m2 are given constants, f 0 j ( x, t ), j 1, m2 are given functions continuous with respect to a set of arguments, satisfying the conditions f 0 j ( x, t ) f 0 j ( y, t ) d l j x y , ( x, t ), ( y, t ) R n u I , j 1, m2 ; f 0 j ( x, t ) d c0 j x c1 j (t ),
c0 j
const t 0,
71
c1 j (t ) L1 ( I , R1 ),
j
1, m2 .
Note that: 1) if A(t ) { 0, m rewritten in the form x
f ( x, t ) P (t )
n, B(t ) f ( x, t ),
I n , then the equation (3.1) is
tI,
(3.6)
where I n is identity matrix of the order n u n . Therefore the presented below results hold true for equation of the form (3.6) under the conditions (3.2) – (3.5); 2) if f ( x, t ) x P1 (t ) (or f ( x, t ) C(t ) x P1 (t ) ), then the equation (3.1) can be rewritten in the form x A(t ) x B(t ) x B(t ) P1 (t ) P (t ) A(t ) x P (t ), t I (3.7) where A(t ) A(t ) B(t ), P (t ) B(t ) P1 (t ) P (t ) . Hence the equation (3.7) is a special case of the equation (3.1). In particular, the set S is defined by S ^( x0 , x1 ) R 2 n / H j ( x0 , x1 ) d 0, j 1, p; a j , x0 ! b j , x1 ! d j 0, j p 1, s`, where H j ( x0 , x1 ), j 1, p are convex function with respect to the variables ( x0 , x1 ), x0
x(t 0 ), x1
x(t1 ), a j R n , b j R n , d j R1 , j
p 1, s are given vectors and
numbers, the symbol , ! denotes a scalar product. Problem 1. Provide a necessary and sufficient condition for existence of a solution to the boundary value problem (3.1), (3.2). Construct a solution to the boundary value problem (3.1), (3.2). Problem 2. Provide a necessary and sufficient condition for existence of a solution to the boundary value problem (3.1) – (3.3). Construct a solution to the boundary value problem (3.1) – (3.3). Problem 3. Provide a necessary and sufficient condition for existence of a solution to the boundary value problem (3.1) – (3.5). Construct a solution to the boundary value problem (3.1) – (3.5). The novelty and peculiarity of the proposed methods for solving boundary value problems is that in the first phase of the study by introducing dummy control functions the original problems are immersed in the problems of controllability. Further, checking the existence of solutions of original boundary value problems and constructing their solutions are implemented by solving appropriate optimal control problems of a special form. In this approach, the necessary and sufficient conditions for the existence of the solution of the boundary value problem can be obtained from the conditions for achieving the lower bound of the functional on a given set, and solutions to the original boundary problems are limit points of minimizing sequences. Lecture 11. Two-point boundary value problem Consider the solving of problem 1. The boundary value problem is posed: x A(t ) x B(t ) f ( x, t ) P (t ), t I [t0 , t1 ] , (3.8) 2n ( x(t 0 ) x0 , x(t1 ) x1 ) S R . (3.9) 72
Problem A. Find necessary and sufficient conditions for existence of a solution to the boundary value problem (3.8), (3.9). Problem B. Construct a solution to the boundary value problem (3.8) – (3.9). As it follows from the problem statement one needs to proof an existence of a pair ( x0 , x1 ) S such that a solution to the system (3.8), starting from the point x 0 at time t 0 , passes through the point x1 , at time t1 . In particular, the set S is defined by S* {( x0 , x1 ) R 2n / H j ( x0 , x1 ) d 0, j 1, p; a j , x0 ! b j , x1 ! d j 0, j p 1, s1} , where H j ( x0 , x1 ), j 1, p are convex functions with respect to the variables ( x0 , x1 ), x0
x(t1 ); a j R n , b j R n , d j R1 , j
x(t 0 ), x1
p 1, s1
are given vectors
and numbers, the symbol , ! denotes a scalar product. Consider the linear controllable system together with the differential equation (3.8) under the boundary conditions (3.9) y A(t ) y B(t )u (t ) P (t ), t I , (3.10) y (t 0 ) x0 , y (t1 ) x1 , ( x 0 , x1 ) S , (3.11) (3.12) u() L2 ( I , R m ) . m It can be easily checked that a control u() L2 ( I , R ) that moves the state of the system (3.10) starting at arbitrary point x 0 to the point x1 , is a solution to the integral equation t1
³ )(t , t ) B(t )u(t )dt 0
(3.13)
a,
t0
where )(t ,W ) T (t )T 1 (W ), T (t ) is a fundamental matrix solution to the linear homogeneous system K A(t )K , t1
a
a ( x0 , x1 ) )(t0 , t1 ) x1 x0 ³ )(t0 , t ) P (t )dt. t0
Let us introduce the following notations: O1 (t , x0 , x1 ) C (t )a, C (t ) t1
W (t 0 , t1 )
³ )(t
0
B * (t )) * (t 0 , t )W 1 (t 0 , t1 )
, t ) B(t ) B * (t )) * (t 0 , t )dt , W (t , t1 ) W (t 0 , t1 ) W (t 0 , t ) ,
t0
O2 (t , x0 , x1 ) C1 (t ) x0 C 2 (t ) x1 P1 (t ), C1 (t ) )(t , t 0 )W (t , t1 )W 1 (t 0 , t1 ) , C 2 (t )
)(t , t 0 )W (t 0 , t )W 1 (t 0 , t1 ))(t 0 , t1 ), P1 (t )
t1
t
)(t , t 0 ) ³ )(t 0 ,W ) P (t )dW C 2 (t ) ³ )(t1 , t ) P (t )dt , t0
N1 (t )
C (t ))(t 0 , t1 ), N 2 (t )
t0
C 2 (t ), t I
Theorem 1. Let the matrix W t 0 ,t1 be positive definite. Then a control u() L2 ( I , R m ) brings the trajectory of the system (3.10) from an initial point x0 R n into a terminal state x1 R n if and only if
73
^u() L ( I , R
u ( t ) U
m
2
) / u(t ) v(t ) O1 (t, x0 , x1 )
(3.14)
N1 (t ) z(t1 , v ), t I , v() L2 ( I , R m )`, where the function z (t ) z (t , v), t I , is a solution of the differential equation z A(t ) z B(t )v(t ), z (t 0 ) 0, v() L2 ( I , R m ). (3.15)
The solution of the differential equation (3.10), corresponding to the control u (t ) U , has the form (see (3.11) – (3.13)) (3.16) y(t ) z (t , v) O2 (t , x0 , x1 ) N 2 (t ) z (t1 , v), t I , A proof of the theorem is presented in Chapter II (see theorem 1). Lemma 1. Let the matrix W t 0 ,t1 be positive definite. Then the boundary value problem (3.8), (3.9) is equivalent to the following problem (3.17) u(t ) v(t ) O1 (t , x0 , x1 ) N1 (t ) z (t1 , v) f ( y(t ), t ), t I m (3.18) z A(t ) z B(t )v(t ), z (t0 ) 0, v() L2 ( I , R ) , m where y(t ), t I is defined by (3.16), v() L2 ( I , R ) is an arbitrary function. A proof of the lemma follows from (3.8), (3.9) the equalities (3.14) – (3.18). Consider the optimal control problem t1
³ v(t ) O (t , x , x ) N (t ) z(t , v) f ( y(t ), t )
J (v, x0 , x1 )
1
0
1
1
2
1
dt
t0
(3.19)
t1
³ F (v(t ), x , x , z(t , v), z(t , v), t )dt 0
0
1
1
t0
under the conditions z
A(t ) z B(t )v(t ), z (t 0 )
v () L2 ( I , R ), m
0,
(3.20) (3.21)
I,
( x 0 , x1 ) S .
where y(t ), t I is defined by (3.16). Let us denote & J*
L2 ( I , R m ) u S H
inf J ([ ), [ [ &
L2 ( I , R m ) u R n u R n
(v, x0 , x1 ) &, &*
{[* & / J ([* )
J * }.
Theorem 2. Let the matrix W t 0 ,t1 be positive definite, the set &* z , here the symbol denotes an empty set. A necessary and sufficient condition for the boundary value problem (3.8), (3.9) to have a solution is J ([* ) 0 , where [ * (v* , x0* , x1* ) & is an optimal control for the problem (3.19) – (3.21). If J * J ([* ) 0 , then the function x* (t ) z (t , v* ) O2 (t , x0* , x1* ) N 2 (t ) z (t , v* ), t I , (3.22) is a solution to the boundary value problem (3.8), (3.9). If J * ! 0 , then the boundary value problem (3.8), (3.9) has not a solution. Proof. Necessity. Let the boundary value problem (3.8), (3.9) has a solution. Then as it follows from lemma 1, the equality u* (t ) f ( y* (t ), t ) , t I holds, where u (t ), t I is defined by (3.11). Consequently, J ([* ) 0 , y* (t ) x* (t ), u* (t ) v* (t ) 74
O1 (t , x0* , x1* ) N1 (t ) z (t1 , v* ), t I . Note that the value J ([ )
J (v, x0 , x1 ) t 0, [ & . This
concludes the necessity. Sufficiency. Let the value J ([* ) 0 . This holds if and only if u* (t ) f ( y* (t ), t ) , t I . Therefore (see (3.22)) v* (t ) O1 (t , x0* , x1* ) N1 (t ) z (t1 , v* ) f ( y (t , v* ), x0* , x1* ), t I , where y (t ) y (t , v* , x0* , x1* ) z (t , v* ) O2 (t , x0* , x1* ) N 2 (t ) z (t1 , v* ), t I . The function y* (t ), t I is a solution to the differential equation (3.10). Then y * (t )
A(t ) y* (t ) B(t )u* (t ) P (t )
A(t ) y* (t ) B(t ) f ( y* (t ), t ) P (t ),
.
y* (t 0 ) x , y* (t1 ) x1* , ( x0* , x1* ) S x* (t ) z (t , v* ) O2 (t , x0* , x1* ) N 2 (t ) z (t , v* ), t I . This completes * 0
Whence y* (t ) the sufficiency. The proof of Theorem 2 is complete. Lemma 2. Let the matrix W t 0 ,t1 be positive definite, the function F0 (q, t )
v O1 (t , x0 , x1 ) N1 (t ) z (t1 ) f ( y(t ), t )
2
v T1 (t ) x0 T2 (t ) x1 P (t ) N1 (t ) z (t1 ) f ( z C1 (t ) x0 C 2 (t ) x1 P1 (t ) N 2 (t ) z (t1 ), t )
2
t1
where T1 (t ) С(t ), T2 (t ) C (t ))(t 0 , t1 ), P (t ) C (t ) ³ )(t 0 , t )P (t )dt . Let besides, the t0
function F0 (q, t ) be defined and continuously differentiable with respect to q ([ , z, z (t1 )) (v, x0 , x1 , z, z (t1 )) .Then the partial derivatives (3.23) F0 (q, t ) 2>v T1 (t ) x0 T2 (t ) x1 P (t ) N1 (t ) z(t1 ) f ( y(t ), t )@ 2w1 (q, t ) , * * * (3.24) F0 x (q, t ) 2T1 (t )w1 (q, t ) 2C1 (t ) f x ( y, t )w1 (q, t ) , (3.25) F0 x (q, t ) 2T2* (t )w1 (q, t ) 2C2* (t ) f x* ( y, t )w1 (q, t ) , * F0 z (q, t ) 2 f x ( y, t ) w1 (q, t ) , (3.26) * * * (3.27) F0 z (t ) (q, t ) 2N1 (t )w1 (q, t ) 2N 2 (t ) f x ( y, t )w1 (q, t ) , where w1 (q, t ) v T1 (t ) x0 T2 (t ) x1 P (t ) N1 (t ) z(t1 ) f ( y(t ), t ) , y y(t ) z C1 (t ) x0 C2 (t ) x1 P1 (t ) N 2 (t ) z (t1 ), t I . The formulas (3.23) – (3.27) can be obtained by direct differentiation of the function F0 (q, t ) with respect to q . Let us denote F0q (q, t ) ( F0v (q, t ), F0 x (q, t ), F0 x (q, t ), F0 z (q, t ), F0 z (t ) (q, t )), (q, t ) R m4n u I . Lemma 3. Let the assumptions of lemma 2 hold, the set S be convex and the inequality F0 q (q1 , t ) F0 q (q 2 , t ), q1 q 2 t 0, q1 , q 2 R k m 2 n . (3.28) hold. Then the functional (3.19) under the conditions (3.20), (3.21) is convex. Proof. The inequality (3.28) is a necessary and sufficient condition for convexity of the smooth function F0 (q, t ) with respect to q for any fixed t I . 0
1
1
0
1
1
F0 (Dq1 (1 D )q2 , t ) d DF0 ( q1 , t ) (1 D ) F0 ( q2 , t ), q1 , q2 R m4 n , 75
D , D [0,1].
(3.29)
Since the differential equation (3.20) is linear, for any v1 (), v2 () L2 ( I , R m ) the value z(t,Dv1 (1 D )v2 ) Dz(t, v1 ) (1 D ) z(t, v2 ) at all D [0,1] , t I . Then J (Dv1 (1 D )v2 , Dx01 (1 D ) x02 , Dx11 (1 D ) x12 )
t1
³ F (Dq 0
1
(1 D )q 2 , t )dt d
t0 t1
t1
d D ³ F0 (q1 , t )dt (1 D ) ³ F0 (q 2 , t )dt t0
t0
where z2
D J (v1 , x01 , x11 ) (1 D ) J (X 2 , x02 , x12 ),
q1
z(t, v2 ), z1 (t1 )
z(t1 , v1 ), z2 (t1 )
(v1 , x01 , x11 , z1 , z1 (t1 )), q2
(v2 , x02 , x12 , z2 , z2 (t1 )), z1
z (t , v1 ),
z(t1 , v2 ) by the inequality (3.29). The proof of the
lemma is complete. Definition 1. We say that the derivative F0 q (q, t ) satisfies the Lipschitz condition with respect to q in R N , N m 4n , if F0 v ( q 'q, t ) F0 v ( q, t ) d L1 'q ,
F0 x0 ( q 'q, t ) F0 x0 ( q, t ) d L2 'q ,
F0 x1 ( q 'q, t ) F0 x1 ( q, t ) d L3 'q , F0 z ( q 'q, t ) F0 z ( q, t ) d L4 'q , F0 z (t1 ) (q 'q, t ) F0 z (t1 ) (q, t ) d L5 'q ,
where Li const ! 0, i 1,5 , the norm 'q ('v, 'x0 , 'x1 , 'z, 'z(t1 )) . Theorem 3. Let the assumptions of lemma 1 hold, the derivative F0q (q, t ), q R N , t I satisfy the Lipschitz condition. Then the functional (3.19) under the conditions (3.20), (3.21) is Frechet differentiable, the gradient J c(v, x0 , x1 )
( J vc ([ ), J xc0 ([ ), J x' 1 ([ )) H , [
(v, x0 , x1 )
at any point [ X is defined by J vc ([ )
t1
³F
F0v ( q(t ), t ) B* (t )\ (t ), J xc0 ([ )
0 x0
( q(t ), t )dt,
t0
J xc1 ([ )
(3.30)
t1
³F
0 x1
( q(t ), t )dt
t0
where q(t ) (v(t ), x0 , x1 , z(t, v), z(t1 , v)) , z (t , v ) , t I , is a solution of the differential equation (3.20), and the function \ (t ) , t I , is a solution of the adjoint system \
F0 z (q(t ), t ) A* (t )\ ,
\ (t1 )
t1
³ F0 z (t1 ) (q(t ), t )dt , t I .
(3.31)
t0
Moreover, the gradient J c([ ) H satisfies the Lipschitz condition (3.32) J c([1 ) J c([ 2 ) H d l1 [1 [ 2 X , [1 , [ 2 X , Proof. The functional (3.19) does not belong to a class of known functionals: neither the Lagrange functional, nor the Mayer functional, nor the Bolza one. Therefore we need to prove every statement of the theorem. Let [ (t ) (v(t ), x0 , x1 ) X , [ (t ) '[ (t ) (v(t ) h(t ), x0 'x0 , x1 'x1 ) X . Then 'J
J ([ '[ ) J ([ )
t1
³ >F (q(t ) 'q(t ), t ) F (q(t ), t )@dt 0
0
t0
t1
³ >h (t ) F *
0v
@
5
( q(t ), t ) 'x0* F0 x0 ( q(t ), t ) 'x1* F0 x1 ( q(t ), t ) 'z * (t ) F0 z ( q(t ), t ) dt ¦ Ri , i 1
t0
76
where t1
³ h (t )>F *
R1
0v
(q(t ) 'q(t ), t ) F0v (q(t ), t )@dt ,
t0 t1
³ 'x >F * 0
R2
@
(q(t ) 'q(t ), t ) F0 x0 (q(t ), t ) dt ,
0 x0
t0
t1
³ 'x >F * 1
R3
0 x1
@
(q(t ) 'q(t ), t ) F0 x1 (q(t ), t ) dt ,
t0
t1
³ 'z (t )>F *
R4
0z
(q(t ) 'q(t ), t ) F0 z (q(t ), t )@dt .
t0 t1
³ 'z
R5
*
>
@
(t1 ) F0 z (t1 ) (q(t ) 'q(t ), t ) F0 z (t1 ) (q(t ), t ) dt
t0
Further, similar to the proof of theorem 5, Chapter II we have 'z (t ) d c2 h L , 2
'J
t1
³ ^h (t )>F *
0v
@
`
5
5
i 1
i 1
( q(t ), t ) B* (t )\ (t ) 'x0* F0 x 0 ( q(t ), t ) 'x1* F0 x1 ( q(t ), t ) dt ¦ Ri , ¦ Ri d c3 '[ .
t0
2
This implies the formula (3.30). Let [1 [ '[ , [ 2 [ . Then 2
J c([1 ) J c([ 2 ) d C 4 'q(t ) C 5 '\ (t ) C 6 q , ,
J c([1 ) J c([ 2 )
t1
³ J c([ ) J c([
2
1
2
2
2
) dt d l12 [1 [ 2 , [1 , [ 2 X .
t0
The proof of the theorem is complete. Note that the proof of theorem 3 is similar to the proof of theorem 3 from Chapter II. So only the main ideas of the proof has been presented here. On the base of the formulas (3.30)–(3.32) construct the sequence ^[ n ` ^vn , x0n , x1n ` X by the rule vn1 n 1 1
x
where 0 D n d
vn D n J cv ([n ), x0n1
>
@
>
@
PS x0n D n J xc0 ([n ) ,
(3.33)
PS x D n J xc1 ([n ) , n 0,1,2,..., n 1
2 , H ! 0 . In particular, for H l1 2H
l1 , the value D n 2
1 l1
const ! 0 .
Theorem 4. Let the assumptions of theorem 3 hold, the set S be convex and closed, the sequence {[ n } X be defined by (3.33). Then 1) The numerical sequence {J ([ n )} strictly decreases; 2) vn vn1 o 0, x0n x0n1 o 0, x1n x1n1 o 0 as n o f . If besides, the inequality (3.28) holds, the set M ([ 0 ) ^[ X / J ([ ) d J ([ 0 )`, [ 0 (v0 , x00 , x10 ) X is bounded, then 3) {[ n } is a minimizing sequence, i.e. lim J ([ n ) J * inf J ([ ); ; [ X
nof
77
4) the sequence {[ n } weakly converges to the set X * , [ n o [* as n o f, [* X * ;
5) the set X * ^[* X / J ([* ) J * inf J ([ )` , here the symbol denotes an empty set; 6) The convergence rate can be estimated as 0 d J ([ n ) J * d
m0 , n 1,2,..., n
7) The boundary value problem (3.8), (3.9) has a solution if and only if J ([* )
0.
A proof of the theorem is similar to the proof of theorem 6 from Chapter 1. Example 1. Consider the equation 2( x sin t ) x 2 S t I [0; ], 2 x cos t cos 2t 2 §S · Let us denote x1 (0) x0 , x¨ ¸ x1 . The boundary conditions ©2¹ ( x0 , x1 ) S ( x0 , x1 ) R 2 / 0 d x0 d 1,1 d x1 d 2 S0 u S1 , x
x
S0
^
^x
R / 0 d x0 d 1`, S1 1
0
^x R
1
1
`
/ 1 d x1 d 2`.
The given data for this example are 2( x sin t ) x 2 , P (t ) { 0, t I , 2 x cos t cos 2t ( x0 , x1 ) R 2 / 0 d x0 d 1,1 d x1 d 2 S 0 u S1 ,
1, B 1, f ( x, t )
A
( x0 , x1 ) S
^
`
where S , S 0 , S1 are convex closed sets. 1. The linear controllable system (see (3.10)-(3.12)) y
A y B u(t ),
y (0)
tI
§S · x0 , y ¨ ¸ ©2¹
ª Sº «¬0; 2 »¼
,
x1 , ( x0 , x1 ) S , u() L2 ( I , R1 )
S S Since T (t ) e t , )(t ,W ) e (t W ) , )(t ,0) e t , )(0, t ) e t , )§¨ t , ·¸ e t e 2 ,
© 2¹
the matrix t
W (0, t )
³ e BB e dW W
0
§ S· W ¨ 0, ¸ © 2¹
* W
1 2t (e 1), t I 2
§ S· § S· W ¨ t , ¸ W ¨ 0, ¸ W (0, t ) © 2¹ © 2¹
1 S (e 1) ! 0, 2
Then a
a( x0 , x1 )
>0,S 2@,
S
e 2 x1 x0 , C (t )
O1 (t , x0 , x1 ) T1 (t ) x0 T2 (t ) x1
78
1 S (e e 2t ), t I . 2
2e t 2 ,W 1 0, S , S S 2 e 1 e 1 2e t 2e t S 2 S x0 S e x1 , e 1 e 1
(3.34)
(3.35)
e t (e S e 2 t ) e t (e 2t 1) x0 x1 , S e 1 eS 1 2e t S 2 e t (e 2t 1) S 2 S e , N 2 (t ) e . e 1 eS 1
O2 (t , x0 , x1 ) C1 (t ) x0 C 2 (t ) x1 N1 (t )
The function y (t )
z (t , v) O2 (t , x0 , x1 ) N 2 (t ) z (t1 , v) S
z (t , v)
e t (e S e 2 t ) x0 eS 1
S
,
(3.36)
e t (e 2t 1)e 2 § S · x1 z¨ , v ¸, t I eS 1 ©2 ¹ §S · Note that y (0) x0 , y¨ ¸ x1 . The function ©2¹ 2e t 2e t S 2 2e t S 2 § S · w1 (q(t ), t ) v(t ) S x0 S e x1 S e z¨ , v ¸ f ( y(t ), t ), t I (3.37) e 1 e 1 e 1 ©2 ¹ Where the function y (t ), t I is defined by (3.36). e t (e 2t 1)e eS 1
2
2. Now the optimization problem (3.19) – (3.21) corresponding to the boundary value problem (3.34) – (3.35) is written in the form S 2
J ([ )
J (v, x0 , x1 )
³ | w (q(t ), t ) |
2
1
(3.38)
dt o inf
0
under the conditions z
z v,
z1 (0)
0, z 2 (0)
0,
>0,S 2@,
tI
(3.39)
v() L2 ( I , R1 ), ( x0 , x1 ) S ,
(3.40)
where w1 (q(t ), t ), t I is defined by (3.37). Let us denote 2
F0 (q(t ), t )
w1 (q(t ), t )
2
§ §S · · w1 ¨¨ v(t ), x0 , x1 , z (t , v), z¨ , v ¸, t ¸¸ . ©2 ¹ ¹ ©
3. Calculate a gradient of the functional (3.38) under the conditions (3.39), (3.40) at arbitrary point [ 0 (v0 (t ), x00 , x10 ) X L2 ( I , R1 ) u S . In particular, choose v0 (t ) sin t , x00 0, x10 1, ( x00 , x10 ) u S . a. solve the differential equation z0
z0 v0 (t ), z0 (0)
0, t I
ª Sº «¬0, 2 »¼ .
The result is the function t
z0 (t ) e t ³ eW sin WdW 0
1 1 (sin t cos t ) e t , t I . 2 2
1 S (1 e 2 ) . 2 b. By (3.36) the function y0 (t ), t I has the form
This yields that z0 S 2
y0 (t )
1 e t (e 2t 1)e (sin t cos t e t ) 2 eS 1 79
S
2
e t (e 2t 1)e eS 1
S
2
1 S (1 e 2 ), t I 2
c. Calculate w10 (q(t ), t ) using (3.37): w10 ( q(t ), t ) sin t
2e t S 2 § 1 · 2( y0 (t ) sin t ) y02 (t ) S 2 e ( 1 e ) ,t I . ¨ ¸ eS 1 © 2 ¹ 2 y0 (t ) cos t cos 2t
d. Compute partial derivatives of the function F0 ( q, t ) at the point [0 X according to (3.23) – (3.27), where S
S
2e t 2e t e 2 e t (eS e 2t ) e t (e 2t 1)e 2 T1 (t ) S , T2 (t ) , C1 (t ) , C2 ( t ) S S e 1 e 1 e 1 eS 1 8 y03 (t ) cos t 2 y02 sin 2t 6 y02 (t ) cos 2t 4 y0 (t ) sin t cos 2t . f x ( y0 (t ), t ) [2 y0 (t ) cos t cos 2t ]2
e. Solve the conjugate system (see (3.31)) \ 0
S
2 wF (q(t ), t ) §S · \ 0¨ ¸ ³ 0 dt. wz S 2 ©2¹ 0
wF0 (q(t ), t ) \ 0 (t ), wz [0
[0
As a result we have \ 0 (t ), t I . f. Using (3.30) calculate the gradient J c([0 ) of the functional (3.38) under the conditions (3.39), (3.40) at the point [ 0 : J vc ([ 0 )
S
wF0 (q(t ), t ) \ 0 (t ) L2 ( I , R1 ), J xc0 [ 0 wv [0 S
wF0 (q(t ), t ) wx1 0
J xc1 [ 0
2
³
wF0 (q(t ), t ) wx0 0 2
³
dt R1 , [0
dt R1 . [0
Consequently, the next iteration ξ1 = (v1(t), x0¹, x1¹) ∈ X is defined as

v1(t) = v0(t) − α0 J′_v(ξ0) ∈ L2(I, R¹),

x0¹ = P_{S0}[x0⁰ − α0 J′_{x0}(ξ0)], x1¹ = P_{S1}[x1⁰ − α0 J′_{x1}(ξ0)],

where α0 > 0 is chosen from the condition J(ξ1) < J(ξ0), and the value

J(ξ0) = ∫_0^{π/2} |w10(q(t), t)|² dt.
4. It is not hard to prove that an optimal control ξ* = (v*(t), x0*, x1*) ∈ X* for the boundary value problem (3.34), (3.35) is defined by the weak limits

vn(t) → v*(t) = 2 cos t in L2(I, R¹), x0ⁿ → x0* = 1 ∈ S0, x1ⁿ → x1* = 1 ∈ S1 as n → ∞.

Indeed, the function z*(t) = z(t, v*) is a solution to the differential equation

ż* = −z* + 2 cos t, z*(0) = 0, t ∈ I = [0, π/2].

Hence z*(t) = cos t + sin t − e^{−t}, t ∈ I, and z*(π/2) = 1 − e^{−π/2}. The function

y*(t) = z*(t) + λ2(t, x0*, x1*) − N2(t)z*(π/2) = sin t + cos t, t ∈ I.

Since f(y*(t), t) = 2 cos t, the function

w1(q*(t), t) = v*(t) + λ1(t, x0*, x1*) − N1(t)z*(π/2) − f(y*(t), t)
= 2 cos t − [2e^t/(e^π − 1)]x0* + [2e^t e^{π/2}/(e^π − 1)]x1* − [2e^t e^{π/2}/(e^π − 1)]z*(π/2) − 2 cos t = 0, t ∈ I,

where x0* = 1, x1* = 1, z*(π/2) = 1 − e^{−π/2}, f(y*(t), t) = 2 cos t.
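The key identity f(y*(t), t) = 2 cos t (with f(x, t) = 2(x − sin t)x²/(2x cos t − cos 2t) as read from (3.34)) is easy to confirm numerically, together with the resulting residual of the differential equation for x*(t) = sin t + cos t. A Python sketch:

```python
import numpy as np

# For y*(t) = sin t + cos t the denominator equals (sin t + cos t)^2, so
# f(y*(t), t) = 2 cos t, and x*(t) = y*(t) satisfies x' = -x + f(x, t).

t = np.linspace(0.0, np.pi / 2, 400)
x = np.sin(t) + np.cos(t)                  # candidate solution
f = 2 * (x - np.sin(t)) * x**2 / (2 * x * np.cos(t) - np.cos(2 * t))
identity_err = np.max(np.abs(f - 2 * np.cos(t)))
# exact derivative of x* is cos t - sin t
ode_residual = np.max(np.abs((np.cos(t) - np.sin(t)) - (-x + f)))
print(identity_err, ode_residual)          # both ~ machine precision
```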
Therefore the value J(ξ*) = 0. Then by theorem 2 the function x*(t) = y*(t) = sin t + cos t, t ∈ I, is a solution to the boundary value problem (3.34), (3.35). We see that (x0* = 1, x1* = 1) ∈ S.

Lecture 12. Boundary value problems with state variable constraints

Consider the solving of problem 2. The boundary value problem

ẋ = A(t)x + B(t)f(x, t) + μ(t), t ∈ I = [t0, t1],   (3.41)

with the conditions on the boundaries

(x(t0) = x0, x(t1) = x1) ∈ S ⊂ R^{2n},   (3.42)

and the state variable constraints

x(t) ∈ G(t): G(t) = {x ∈ R^n / γ(t) ≤ F(x, t) ≤ δ(t), t ∈ I}   (3.43)

is given.

Problem A. Provide a necessary and sufficient condition for existence of a solution to the boundary value problem (3.41)–(3.43).

Problem B. Construct a solution to the boundary value problem (3.41)–(3.43).

The corresponding linear controllable system has the form

ẏ = A(t)y + B(t)u(t) + μ(t), t ∈ I,   (3.44)

(y(t0) = x0, y(t1) = x1) ∈ S ⊂ R^{2n},   (3.45)

u(·) ∈ L2(I, R^m).   (3.46)

The assertion of theorem 1 holds true for the linear controllable system (3.44)–(3.46). Then

u(t) = v(t) + λ1(t, x0, x1) − N1(t)z(t1, v), v(·) ∈ L2(I, R^m),   (3.47)

y(t) = z(t, v) + λ2(t, x0, x1) − N2(t)z(t1, v), t ∈ I,   (3.48)

ż = A(t)z + B(t)v(t), z(t0) = 0, v(·) ∈ L2(I, R^m).   (3.49)

Lemma 4. Let the matrix W(t0, t1) be positive definite. Then the boundary value problem (3.41)–(3.43) is equivalent to the following problem:

u(t) = v(t) + λ1(t, x0, x1) − N1(t)z(t1, v) = f(y(t), t), t ∈ I,   (3.50)

ż = A(t)z + B(t)v(t), z(t0) = 0, v(·) ∈ L2(I, R^m),   (3.51)

y(t) ∈ G(t), (x0, x1) ∈ S,   (3.52)

where y(t), t ∈ I, is defined by (3.48). A proof of the lemma follows from theorem 1 and the relations (3.47)–(3.49).
Consider the optimal control problem (see (3.50)–(3.52))

J(v, x0, x1, w) = ∫_{t0}^{t1} [ |v(t) + T1(t)x0 + T2(t)x1 + μ(t) − N1(t)z(t1, v) − f(y(t), t)|² + |w(t) − F(y(t), t)|² ] dt = ∫_{t0}^{t1} F1(q(t), t) dt → inf   (3.53)

under the conditions

ż = A(t)z + B(t)v(t), z(t0) = 0, v(·) ∈ L2(I, R^m),   (3.54)

w(t) ∈ W = {w(·) ∈ L2(I, R^r) / γ(t) ≤ w(t) ≤ δ(t), t ∈ I},   (3.55)

(x0, x1) ∈ S,   (3.56)

where

F1(q(t), t) = |v(t) + T1(t)x0 + T2(t)x1 + μ(t) − N1(t)z(t1, v) − f(y(t), t)|² + |w(t) − F(y(t), t)|²,
q(t) = (v(t), x0, x1, w(t), z(t, v), z(t1, v)),

and the function y(t), t ∈ I, is defined by (3.48).

Theorem 5. Let the matrix W(t0, t1) be positive definite. A necessary and sufficient condition for the boundary value problem (3.41)–(3.43) to have a solution is J(v*, x0*, x1*, w*) = 0, where (v*, x0*, x1*, w*) ∈ X is a solution to the optimization problem (3.53)–(3.56), X = L2(I, R^m) × S × W ⊂ H, H = L2(I, R^m) × R^n × R^n × L2(I, R^r). The proof of the theorem is similar to the proof of theorem 2.

Lemma 5. Let the matrix W(t0, t1) be positive definite, and let the functions f(x, t), F(x, t), t ∈ I, be continuously differentiable with respect to x. Then the partial derivatives are

F1v(q, t) = 2V(q, t), V(q, t) = v + T1(t)x0 + T2(t)x1 + μ(t) − N1(t)z(t1) − f(y, t);

F1w(q, t) = 2Δ(q, t), Δ(q, t) = w − F(y, t);

F1x0(q, t) = 2T1*(t)V(q, t) − 2C1*(t)f_x*(y, t)V(q, t) − 2C1*(t)F_x*(y, t)Δ(q, t);

F1x1(q, t) = 2T2*(t)V(q, t) − 2C2*(t)f_x*(y, t)V(q, t) − 2C2*(t)F_x*(y, t)Δ(q, t);

F1z(q, t) = −2f_x*(y, t)V(q, t) − 2F_x*(y, t)Δ(q, t);

F1z(t1)(q, t) = −2N1*(t)V(q, t) + 2N2*(t)f_x*(y, t)V(q, t) + 2N2*(t)F_x*(y, t)Δ(q, t),

where y = y(x0, x1, z, z(t1), t) = z + C1(t)x0 + C2(t)x1 + μ1(t) − N2(t)z(t1).

Let us denote F1q(q, t) = (F1v, F1w, F1x0, F1x1, F1z, F1z(t1)), (q, t) ∈ R^{r+m+4n} × I.

Lemma 6. Let the assumptions of lemma 5 hold, the set S be convex, and the inequality

⟨F1q(q¹, t) − F1q(q², t), q¹ − q²⟩ ≥ 0 for all q¹, q² ∈ R^{r+m+4n}   (3.57)

hold. Then the functional (3.53) under the conditions (3.54)–(3.56) is convex. The proof of the lemma is similar to the proof of lemma 3.

Definition 2. We say that the derivative F1q(q, t) satisfies the Lipschitz condition with respect to q in R^{r+m+4n} if
|F1v(q + Δq, t) − F1v(q, t)| ≤ L1|Δq|, |F1w(q + Δq, t) − F1w(q, t)| ≤ L2|Δq|,
|F1x0(q + Δq, t) − F1x0(q, t)| ≤ L3|Δq|, |F1x1(q + Δq, t) − F1x1(q, t)| ≤ L4|Δq|,
|F1z(q + Δq, t) − F1z(q, t)| ≤ L5|Δq|, |F1z(t1)(q + Δq, t) − F1z(t1)(q, t)| ≤ L6|Δq|,

where Li = const > 0, i = 1,…,6, and Δq = (Δv, Δx0, Δx1, Δw, Δz, Δz(t1)).

Theorem 6. Let the assumptions of lemma 5 hold and let the derivative F1q(q, t), q ∈ R^{r+m+4n}, t ∈ I, satisfy the Lipschitz condition. Then the functional (3.53) under the conditions (3.54)–(3.56) is Frechet differentiable, and the gradient

J′(ξ) = J′(v, x0, x1, w) = (J′_v(ξ), J′_{x0}(ξ), J′_{x1}(ξ), J′_w(ξ)) ∈ H

at any point ξ = (v, x0, x1, w) ∈ X = L2(I, R^m) × S × W is defined by

J′_v(ξ) = F1v(q(t), t) − B*(t)ψ(t) ∈ L2(I, R^m),

J′_{x0}(ξ) = ∫_{t0}^{t1} F1x0(q(t), t) dt ∈ R^n, J′_{x1}(ξ) = ∫_{t0}^{t1} F1x1(q(t), t) dt ∈ R^n,   (3.58)

J′_w(ξ) = F1w(q(t), t) ∈ L2(I, R^r),

where q(t) = (v(t), x0, x1, w(t), z(t, v), z(t1, v)), z(t, v), t ∈ I, is a solution of the differential equation (3.54), and the function ψ(t), t ∈ I, is a solution of the adjoint system

ψ̇ = F1z(q(t), t) − A*(t)ψ, ψ(t1) = −∫_{t0}^{t1} F1z(t1)(q(t), t) dt, t ∈ I.   (3.59)

Moreover, the gradient J′(ξ) ∈ H satisfies the Lipschitz condition

‖J′(ξ1) − J′(ξ2)‖ ≤ l2 ‖ξ1 − ξ2‖ for all ξ1, ξ2 ∈ X.   (3.60)
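The scheme (3.58)–(3.59) — integrate the state forward, integrate the adjoint backward from its terminal value, then assemble the gradient — can be illustrated on a toy scalar problem. The sketch below is our own simplified instance (F does not depend on z(t1), so ψ(T) = 0, A = −1, B = 1); it checks the adjoint gradient of J(v) = ∫ (z − r)² dt, ż = −z + v, z(0) = 0, against a finite difference:

```python
import numpy as np

# Adjoint system here: psi' = F_z - A*psi = 2(z - r) + psi, psi(T) = 0,
# and the gradient is J'_v(t) = F_v - B*psi = -psi(t).

T, N = 1.0, 4000
t = np.linspace(0.0, T, N + 1)
h = T / N
r = np.sin(t)                      # reference trajectory
v = np.cos(t)                      # current control iterate

def solve_state(v):
    z = np.zeros(N + 1)
    for k in range(N):             # explicit Euler for z' = -z + v
        z[k + 1] = z[k] + h * (-z[k] + v[k])
    return z

def J(v):
    z = solve_state(v)
    return h * np.sum((z - r) ** 2)

def gradient(v):
    z = solve_state(v)
    psi = np.zeros(N + 1)          # psi(T) = 0, integrate backwards
    for k in range(N, 0, -1):      # psi' = 2(z - r) + psi
        psi[k - 1] = psi[k] - h * (2.0 * (z[k] - r[k]) + psi[k])
    return -psi                    # J'_v = -psi

# finite-difference check of the directional derivative
d = np.sin(3 * t)
eps = 1e-5
fd = (J(v + eps * d) - J(v - eps * d)) / (2 * eps)
ad = h * np.sum(gradient(v) * d)
print(fd, ad)                      # should agree up to O(h)
```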
The proof of the theorem is similar to the proof of theorem 3. On the basis of the results (3.58)–(3.60), construct the sequence {ξn} = {vn(t), x0ⁿ, x1ⁿ, wn(t)} by the rule

vn+1 = vn − αn J′_v(ξn), x0ⁿ⁺¹ = P_S[x0ⁿ − αn J′_{x0}(ξn)],
x1ⁿ⁺¹ = P_S[x1ⁿ − αn J′_{x1}(ξn)], wn+1 = P_W[wn − αn J′_w(ξn)], n = 0, 1, …,   (3.61)

where 0 < αn ≤ 2/(l2 + 2ε), ε > 0. In particular, for ε = l2/2 one may take αn = 1/l2 = const > 0.
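The iteration (3.61) is an ordinary projection-gradient method: a gradient step in each variable followed by the metric projection onto the corresponding convex set. A minimal self-contained illustration (our own sketch: the quadratic J below merely stands in for (3.53), and for a box the projection is a coordinatewise clip):

```python
import numpy as np

# Minimize J(x) = |x - c|^2 over a box by x_{n+1} = P_box[x_n - a_n J'(x_n)],
# with the constant step a_n = 1/l2 (here J' = 2(x - c), so l2 = 2).

c = np.array([1.5, -0.7, 0.2])
lo, hi = np.zeros(3), np.ones(3)           # the box plays the role of S (or W)

def project(x):                             # projection onto a box is a clip
    return np.clip(x, lo, hi)

def J(x):
    return float(np.sum((x - c) ** 2))

alpha = 0.5                                 # = 1/l2
x = np.array([0.9, 0.9, 0.9])               # initial guess xi_0
values = [J(x)]
for n in range(50):
    x = project(x - alpha * 2.0 * (x - c))  # gradient step + projection
    values.append(J(x))                     # J(x_n) is nonincreasing

print(x)  # -> the projection of c onto the box: [1.0, 0.0, 0.2]
```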
Theorem 7. Let the assumptions of theorem 6 hold, the set S be convex and closed, and the sequence {ξn} ⊂ X be defined by (3.61). Then:

1) the numerical sequence {J(ξn)} strictly decreases;
2) ‖vn − vn+1‖ → 0, |x0ⁿ − x0ⁿ⁺¹| → 0, |x1ⁿ − x1ⁿ⁺¹| → 0, ‖wn − wn+1‖ → 0 as n → ∞.

If, in addition, the inequality (3.57) holds and the set M(ξ0) = {ξ ∈ X / J(ξ) ≤ J(ξ0)} is bounded, where ξ0 = (v0, x0⁰, x1⁰, w0) ∈ X is an initial guess, then:

3) {ξn} ⊂ X is a minimizing sequence, i.e. lim_{n→∞} J(ξn) = lim_{n→∞} J(vn, x0ⁿ, x1ⁿ, wn) = J* = inf_{ξ∈X} J(ξ);
4) the set X* = {ξ* ∈ X / J(ξ*) = J* = inf_{ξ∈X} J(ξ)} ≠ ∅;
5) the sequence {ξn} converges weakly to the set X*: vn → v*, x0ⁿ → x0*, x1ⁿ → x1*, wn → w* weakly as n → ∞;
6) the convergence rate can be estimated as 0 ≤ J(ξn) − J* ≤ c1/n, n = 1, 2, …, c1 = const > 0;
7) the boundary value problem (3.41)–(3.43) has a solution if and only if J(ξ*) = 0.

A proof of the theorem is similar to the proof of theorem 4 from Chapter II.

Example 2. Consider the boundary value problem (3.34), (3.35) from example 1 in the presence of the state variable constraint
x(t) ∈ G(t) = {x(·) ∈ AC(I, R¹) / (2/π)t ≤ x²(t) ≤ 2, t ∈ I = [0, π/2]}.

For this example the optimization problem (3.53)–(3.56) has the following form:

J(v, x0, x1, w) = ∫_0^{π/2} [ |w1(q(t), t)|² + |w2(q(t), t)|² ] dt → inf

under the conditions

ż = −z + v(t), z(0) = 0, I = [0, π/2], v(·) ∈ L2(I, R¹), (x0, x1) ∈ S,

w(t) ∈ W = {w(·) ∈ L2(I, R¹) / (2/π)t ≤ w(t) ≤ 2, t ∈ I},

where w1(q(t), t) is defined by (3.37) and w2(q(t), t) = w(t) − y²(t), t ∈ I, the function y(t), t ∈ I, being given by (3.36).

The partial derivatives are

F1v(q(t), t) = 2w1(q(t), t); F1w(q(t), t) = 2w2(q(t), t);

F1x0(q(t), t) = 2T1(t)w1(q(t), t) − 2C1(t)f_x(y(t), t)w1(q(t), t) − 2C1(t)F_x(y(t), t)w2(q(t), t);

F1x1(q(t), t) = 2T2(t)w1(q(t), t) − 2C2(t)f_x(y(t), t)w1(q(t), t) − 2C2(t)F_x(y(t), t)w2(q(t), t);

F1z(q(t), t) = −2f_x(y(t), t)w1(q(t), t) − 2F_x(y(t), t)w2(q(t), t);

F1z(t1)(q(t), t) = −2N1(t)w1(q(t), t) + 2N2(t)f_x(y(t), t)w1(q(t), t) + 2N2(t)F_x(y(t), t)w2(q(t), t),

where T1(t), T2(t), C1(t), C2(t) are defined as in example 1, and the derivatives are F_x(y(t), t) = 2y(t),

f_x(y(t), t) = [8y³(t) cos t − 2y²(t) sin 2t − 6y²(t) cos 2t + 4y(t) sin t cos 2t] / [2y(t) cos t − cos 2t]².
The gradient of the functional is J′(ξ) = (J′_v(ξ), J′_{x0}(ξ), J′_{x1}(ξ), J′_w(ξ)), where

J′_v(ξ) = F1v(q(t), t) − ψ(t), J′_{x0}(ξ) = ∫_0^{π/2} F1x0(q(t), t) dt,
J′_{x1}(ξ) = ∫_0^{π/2} F1x1(q(t), t) dt, J′_w(ξ) = F1w(q(t), t).

The sequence {ξn} = {vn(t), x0ⁿ, x1ⁿ, wn(t)} is constructed by

vn+1 = vn − αn J′_v(ξn), x0ⁿ⁺¹ = P_{S0}[x0ⁿ − αn J′_{x0}(ξn)],
x1ⁿ⁺¹ = P_{S1}[x1ⁿ − αn J′_{x1}(ξn)], wn+1 = P_W[wn − αn J′_w(ξn)],

where αn > 0 is chosen from the condition J(ξn+1) < J(ξn).
It can be shown that, weakly as n → ∞,

vn(t) → v*(t) = 2 cos t, x0ⁿ → x0* = 1, x1ⁿ → x1* = 1, wn(t) → w*(t) = 1 + sin 2t, t ∈ [0, π/2],

and lim_{n→∞} z(t, vn) = z*(t) = cos t + sin t − e^{−t}, x*(t) = y*(t) = sin t + cos t, t ∈ I.
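These limit functions are mutually consistent: w*(t) = 1 + sin 2t is exactly F(x*(t), t) = x*²(t) for x*(t) = sin t + cos t, and it stays inside the corridor 2t/π ≤ w ≤ 2. A numerical confirmation (Python sketch):

```python
import numpy as np

# Check that w*(t) = 1 + sin 2t equals x*^2(t) for x*(t) = sin t + cos t
# and satisfies the state constraint 2t/pi <= w*(t) <= 2 on [0, pi/2].

t = np.linspace(0.0, np.pi / 2, 300)
x_star = np.sin(t) + np.cos(t)
w_star = 1.0 + np.sin(2 * t)
gap = np.max(np.abs(w_star - x_star ** 2))
ok = bool(np.all((2 * t / np.pi <= w_star + 1e-12) & (w_star <= 2.0 + 1e-12)))
print(gap, ok)  # -> ~0 and True
```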
Lecture 13. Boundary value problems with state variable constraint and integral constraints

Consider the solving of problem 3. The boundary value problem

ẋ = A(t)x + B(t)f(x, t) + μ(t), t ∈ I = [t0, t1],   (3.62)

(x(t0) = x0, x(t1) = x1) ∈ S ⊂ R^{2n},   (3.63)

x(t) ∈ G(t): G(t) = {x ∈ R^n / γ(t) ≤ F(x, t) ≤ δ(t), t ∈ I},   (3.64)

g_j(x) ≤ c_j, j = 1,…,m1; g_j(x) = c_j, j = m1+1,…,m2,   (3.65)

g_j(x) = ∫_{t0}^{t1} f_{0j}(x(t), t) dt, j = 1,…,m2,   (3.66)

is given.

Problem A. Find necessary and sufficient conditions for existence of a solution to the boundary value problem (3.62)–(3.66).

Problem B. Construct a solution to the boundary value problem (3.62)–(3.66).

Transformation. Let the vector f0 = (f01,…,f0m2). Introduce the vector valued function x̄(t) = (x̄1(t),…,x̄m2(t)) by the expression

x̄(t) = ∫_{t0}^{t} f0(x(τ), τ) dτ, t ∈ I.

Hence

x̄̇ = f0(x(t), t), t ∈ I, x̄(t0) = 0, x̄(t1) = c̄ ∈ Q,

Q = {c̄ ∈ R^{m2} / c̄_j = c_j − d_j, d_j ≥ 0, j = 1,…,m1; c̄_j = c_j, j = m1+1,…,m2},

where c = (c1,…,cm2), d = (d1,…,dm1), so that g_j(x) = c_j − d_j, j = 1,…,m1, and d ≥ 0 is an unknown vector. Now the original boundary value problem (3.62)–(3.66) is rewritten as

ẋ = A(t)x + B(t)f(x(t), t) + μ(t), t ∈ I,   (3.67)

x̄̇ = f0(x(t), t), x̄(t0) = 0, x̄(t1) ∈ Q,   (3.68)

(x0, x1) ∈ S, x(t) ∈ G(t), t ∈ I.   (3.69)

Let us introduce the vectors and matrices

η = (x, x̄), A1(t) = [A(t), O_{n×m2}; O_{m2×n}, O_{m2×m2}], B1(t) = [B(t); O_{m2×m}], B2 = [O_{n×m2}; I_{m2}],

where O_{k×l} is a k×l zero matrix and I_{m2} is the m2×m2 identity matrix.
Let P1 = [I_n, O_{n×m2}], P2 = [O_{m2×n}, I_{m2}], where I_n is the n×n identity matrix. Now the system (3.67)–(3.69) is rewritten in the matrix form

η̇ = A1(t)η + B1(t)f(P1η, t) + B2 f0(P1η, t) + μ1(t),   (3.70)

(P1η(t0), P1η(t1)) ∈ S, P2η(t0) = 0, P2η(t1) = c̄ ∈ Q,   (3.71)

P1η(t) ∈ G(t), t ∈ I.   (3.72)

Linear controllable system. Consider the linear controllable system corresponding to the system (3.70)–(3.72):

ẏ = A1(t)y + B1(t)w1(t) + B2w2(t) + μ1(t), t ∈ I = [t0, t1],   (3.73)

y(t0) = η(t0) = η0 = (x0, x̄(t0)) = (x0, O_{m2×1}), y(t1) = η(t1) = η1 = (x1, c̄),   (3.74)

w1(·) ∈ L2(I, R^m), w2(·) ∈ L2(I, R^{m2}),   (3.75)

where η0 ∈ S0 × O_{m2×1}, η1 ∈ S1 × Q, S = S0 × S1, P1η(t0) = x0, P1η(t1) = x1, P2η(t0) = x̄(t0), P2η(t1) = c̄ = x̄(t1) ∈ Q, and μ1(t) = (μ(t), O_{m2×1}).

Let E(t) = (B1(t), B2) be the (n+m2)×(m+m2) matrix, let w(t) = (w1(t), w2(t)) ∈ L2(I, R^{m+m2}), and let Λ(t, τ) = F(t)F^{−1}(τ), where F(t) is a fundamental matrix of solutions of the linear homogeneous system ζ̇ = A1(t)ζ. Define the following matrices and vectors from the given data of the system (3.73)–(3.75):

a = Λ(t0, t1)η1 − η0 − ∫_{t0}^{t1} Λ(t0, t)μ1(t) dt, W1(t0, t1) = ∫_{t0}^{t1} Λ(t0, t)E(t)E*(t)Λ*(t0, t) dt,

W1(t0, t) = ∫_{t0}^{t} Λ(t0, τ)E(τ)E*(τ)Λ*(t0, τ) dτ, W1(t, t1) = W1(t0, t1) − W1(t0, t),

Γ1(t, η0, η1) = E*(t)Λ*(t0, t)W1^{−1}(t0, t1)a,

M1(t) = E*(t)Λ*(t0, t)W1^{−1}(t0, t1)Λ(t0, t1) = (M11(t); M12(t)),
M11(t) = B1*(t)Λ*(t0, t)W1^{−1}(t0, t1)Λ(t0, t1), M12(t) = B2*Λ*(t0, t)W1^{−1}(t0, t1)Λ(t0, t1),

Γ2(t, η0, η1) = Λ(t, t0)W1(t, t1)W1^{−1}(t0, t1)η0 + Λ(t, t0)W1(t0, t)W1^{−1}(t0, t1)Λ(t0, t1)η1 + μ2(t),

μ2(t) = Λ(t, t0)∫_{t0}^{t} Λ(t0, τ)μ1(τ) dτ − Λ(t, t0)W1(t0, t)W1^{−1}(t0, t1)Λ(t0, t1)∫_{t0}^{t1} Λ(t1, t)μ1(t) dt,

M2(t) = Λ(t, t0)W1(t0, t)W1^{−1}(t0, t1)Λ(t0, t1), t ∈ I.

Theorem 8. Let the matrix W1(t0, t1) be positive definite. Then a control w(·) ∈ L2(I, R^{m+m2}) brings the trajectory of the system (3.73)–(3.75) from an initial point η0 ∈ R^{n+m2} to a terminal state η1 ∈ R^{n+m2} if and only if

w(t) ∈ Ω = {w(·) ∈ L2(I, R^{m+m2}) / w(t) = v(t) + Γ1(t, η0, η1) − M1(t)z(t1, v), v(·) ∈ L2(I, R^{m+m2}), t ∈ I},   (3.76)
where v(·) ∈ L2(I, R^{m+m2}) is an arbitrary function and z(t) = z(t, v), t ∈ I, is a solution of the differential equation

ż = A1(t)z + E(t)v(t), z(t0) = 0, t ∈ I.   (3.77)

The solution of the differential equation (3.73) corresponding to a control w(t) ∈ Ω has the form

y(t) = z(t, v) + Γ2(t, η0, η1) − M2(t)z(t1, v), t ∈ I.   (3.78)

A proof of the theorem is similar to the proofs of the theorems presented in Chapter I. Note that the components of the vector function w(t) ∈ Ω from (3.76) are defined by the expressions

w1(t) = v1(t) + B1*(t)Λ*(t0, t)W1^{−1}(t0, t1)a − M11(t)z(t1, v), t ∈ I,   (3.79)

w2(t) = v2(t) + B2*Λ*(t0, t)W1^{−1}(t0, t1)a − M12(t)z(t1, v), t ∈ I,   (3.80)

where v(t) = (v1(t), v2(t)), v1(·) ∈ L2(I, R^m), v2(·) ∈ L2(I, R^{m2}). Substituting the values of a, η0, η1 into (3.79), (3.80), we get

w1(t) = v1(t) + T11(t)x0 + T21(t)x1 + T22(t)d + r1(t) − M11(t)z(t1, v), t ∈ I,   (3.81)

w2(t) = v2(t) + C11(t)x0 + C21(t)x1 + C22(t)d + r2(t) − M12(t)z(t1, v), t ∈ I.   (3.82)

In the same way the function y(t), t ∈ I, from (3.78) can be represented in the form

y(t) = z(t, v) + D11(t)x0 + D21(t)x1 + D22(t)d + f̄(t) − M2(t)z(t1, v), t ∈ I.   (3.83)

The differential equation (3.77) is rewritten in the form

ż = A1(t)z + B1(t)v1(t) + B2v2(t), z(t0) = 0, t ∈ I, v1(·) ∈ L2(I, R^m), v2(·) ∈ L2(I, R^{m2}).   (3.84)

Lemma 7. Let the matrix W1(t0, t1) be positive definite. Then the boundary value problem (3.62)–(3.66) (or (3.70)–(3.72)) is equivalent to the following problem:

w1(t) = v1(t) + T11(t)x0 + T21(t)x1 + T22(t)d + r1(t) − M11(t)z(t1, v) = f(P1y(t), t), t ∈ I,   (3.85)

w2(t) = v2(t) + C11(t)x0 + C21(t)x1 + C22(t)d + r2(t) − M12(t)z(t1, v) = f0(P1y(t), t), t ∈ I,   (3.86)

ż = A1(t)z + B1(t)v1(t) + B2v2(t), z(t0) = 0, t ∈ I,   (3.87)

v1(·) ∈ L2(I, R^m), v2(·) ∈ L2(I, R^{m2}),   (3.88)

(x0, x1) ∈ S, d ∈ D = {d ∈ R^{m1} / d ≥ 0}, P1y(t) ∈ G(t), t ∈ I.   (3.89)

A proof of the lemma follows from theorem 8 and the equalities (3.79)–(3.84).

Optimization problem. Consider the optimal control problem (see (3.85)–(3.89))

J(v1, v2, x0, x1, d, p) = ∫_{t0}^{t1} [ |w1(t) − f(P1y(t), t)|² + |w2(t) − f0(P1y(t), t)|² + |p(t) − F(P1y(t), t)|² ] dt → inf   (3.90)

under the conditions

ż = A1(t)z + B1(t)v1(t) + B2v2(t), z(t0) = 0, t ∈ I,   (3.91)

v1(·) ∈ L2(I, R^m), v2(·) ∈ L2(I, R^{m2}),   (3.92)

(x0, x1) ∈ S, d ∈ D,   (3.93)
p(t) ∈ R(t) = {p(·) ∈ L2(I, R^r) / γ(t) ≤ p(t) ≤ δ(t), t ∈ I},   (3.94)

where the functions w1(t), w2(t), y(t), t ∈ I, are defined by (3.81)–(3.83), respectively, and the function z(t), t ∈ I, is a solution to the differential equation (3.84). Let us introduce the notations

ξ = (v1(t), v2(t), x0, x1, d, p(t)) ∈ X = L2(I, R^m) × L2(I, R^{m2}) × S × D × R(t) ⊂ H,

H = L2(I, R^m) × L2(I, R^{m2}) × R^n × R^n × R^{m1} × L2(I, R^r),

X* = {ξ* ∈ X / J(ξ*) = J* = inf_{ξ∈X} J(ξ)}.

Theorem 9. Let the matrix W1(t0, t1) be positive definite. A necessary and sufficient condition for the boundary value problem (3.62)–(3.66) to have a solution is J(ξ*) = J(v1*, v2*, x0*, x1*, d*, p*) = 0, where ξ* = (v1*, v2*, x0*, x1*, d*, p*) ∈ X is a solution to the optimization problem (3.90)–(3.94).

Lemma 8. Let the matrix W1(t0, t1) be positive definite, let the functions f(x, t), F(x, t), f0(x, t) be continuously differentiable with respect to x, and let

F2(q(t), t) = |w1(t) − f(P1y(t), t)|² + |w2(t) − f0(P1y(t), t)|² + |p(t) − F(P1y(t), t)|²,
q(t) = (v1(t), v2(t), x0, x1, d, p(t), z(t), z(t1)).
Then the partial derivatives of the function F2(q, t) are:

F2v1(q, t) = 2[w1(t) − f(P1y, t)], F2v2(q, t) = 2[w2(t) − f0(P1y, t)];

F2x0(q, t) = [2T11*(t) − 2D11*(t)P1*f_x*(P1y, t)][w1 − f(P1y, t)] + [2C11*(t) − 2D11*(t)P1*f0x*(P1y, t)][w2 − f0(P1y, t)] − 2D11*(t)P1*F_x*(P1y, t)[p − F(P1y, t)];

F2x1(q, t) = [2T21*(t) − 2D21*(t)P1*f_x*(P1y, t)][w1 − f(P1y, t)] + [2C21*(t) − 2D21*(t)P1*f0x*(P1y, t)][w2 − f0(P1y, t)] − 2D21*(t)P1*F_x*(P1y, t)[p − F(P1y, t)];

F2d(q, t) = [2T22*(t) − 2D22*(t)P1*f_x*(P1y, t)][w1 − f(P1y, t)] + [2C22*(t) − 2D22*(t)P1*f0x*(P1y, t)][w2 − f0(P1y, t)] − 2D22*(t)P1*F_x*(P1y, t)[p − F(P1y, t)];

F2p(q, t) = 2[p − F(P1y, t)];

F2z(q, t) = −2P1*f_x*(P1y, t)[w1 − f(P1y, t)] − 2P1*f0x*(P1y, t)[w2 − f0(P1y, t)] − 2P1*F_x*(P1y, t)[p − F(P1y, t)];

F2z(t1)(q, t) = [−2M11*(t) + 2M2*(t)P1*f_x*(P1y, t)][w1 − f(P1y, t)] + [−2M12*(t) + 2M2*(t)P1*f0x*(P1y, t)][w2 − f0(P1y, t)] + 2M2*(t)P1*F_x*(P1y, t)[p − F(P1y, t)],

where y(t), t ∈ I, is defined by (3.83). Let F2q(q, t) denote the derivative

F2q(q, t) = (F2v1, F2v2, F2x0, F2x1, F2d, F2p, F2z, F2z(t1)).

Lemma 9. Let the assumptions of lemma 8 hold, the set S be convex, and the inequality

⟨F2q(q¹, t) − F2q(q², t), q¹ − q²⟩ ≥ 0 for all q¹, q² ∈ R^N, N = m + m2 + 2n + m1 + r + 2(n + m2) = m + r + 3m2 + 4n + m1,   (3.95)

hold. Then the functional (3.90) under the conditions (3.91)–(3.94) is convex. Note that the sets D and R(t) are convex and closed.
Definition 3. We say that the derivative F2q(q, t) satisfies the Lipschitz condition with respect to q ∈ R^N if

|F2v1(q + Δq, t) − F2v1(q, t)| ≤ L1|Δq|, |F2v2(q + Δq, t) − F2v2(q, t)| ≤ L2|Δq|,
|F2x0(q + Δq, t) − F2x0(q, t)| ≤ L3|Δq|, |F2x1(q + Δq, t) − F2x1(q, t)| ≤ L4|Δq|,
|F2d(q + Δq, t) − F2d(q, t)| ≤ L5|Δq|, |F2p(q + Δq, t) − F2p(q, t)| ≤ L6|Δq|,
|F2z(q + Δq, t) − F2z(q, t)| ≤ L7|Δq|, |F2z(t1)(q + Δq, t) − F2z(t1)(q, t)| ≤ L8|Δq|,

where Li = const > 0, i = 1,…,8, and Δq = (Δv1, Δv2, Δx0, Δx1, Δd, Δp, Δz, Δz(t1)).

Theorem 10. Let the assumptions of lemma 8 hold and let the derivative F2q(q, t) satisfy the Lipschitz condition. Then the functional (3.90) under the conditions (3.91)–(3.94) is Frechet differentiable, and the gradient

J′(ξ) = J′(v1, v2, x0, x1, d, p) = (J′_{v1}(ξ), J′_{v2}(ξ), J′_{x0}(ξ), J′_{x1}(ξ), J′_d(ξ), J′_p(ξ)) ∈ H

at any point ξ = (v1, v2, x0, x1, d, p) ∈ X = L2(I, R^m) × L2(I, R^{m2}) × S × D × R(t) is defined by

J′_{v1}(ξ) = F2v1(q(t), t) − B1*(t)ψ(t) ∈ L2(I, R^m),
J′_{v2}(ξ) = F2v2(q(t), t) − B2*ψ(t) ∈ L2(I, R^{m2}),
J′_{x0}(ξ) = ∫_{t0}^{t1} F2x0(q(t), t) dt ∈ R^n, J′_{x1}(ξ) = ∫_{t0}^{t1} F2x1(q(t), t) dt ∈ R^n,   (3.96)
J′_d(ξ) = ∫_{t0}^{t1} F2d(q(t), t) dt ∈ R^{m1}, J′_p(ξ) = F2p(q(t), t) ∈ L2(I, R^r),

where q(t) = (v1(t), v2(t), x0, x1, d, p(t), z(t, v), z(t1, v)), z(t, v), t ∈ I, is a solution of the differential equation (3.91), and the function ψ(t), t ∈ I, is a solution of the adjoint system

ψ̇ = F2z(q(t), t) − A1*(t)ψ, ψ(t1) = −∫_{t0}^{t1} F2z(t1)(q(t), t) dt, t ∈ I.   (3.97)

Moreover, the gradient J′(ξ) ∈ H satisfies the Lipschitz condition

‖J′(ξ1) − J′(ξ2)‖ ≤ l3 ‖ξ1 − ξ2‖ for all ξ1, ξ2 ∈ X.   (3.98)

On the basis of the formulas (3.96)–(3.98), construct the sequence {ξn} = {v1ⁿ(t), v2ⁿ(t), x0ⁿ, x1ⁿ, dⁿ, pⁿ(t)} ⊂ X by the rule

v1ⁿ⁺¹ = v1ⁿ − αn J′_{v1}(ξn), v2ⁿ⁺¹ = v2ⁿ − αn J′_{v2}(ξn),
x0ⁿ⁺¹ = P_S[x0ⁿ − αn J′_{x0}(ξn)], x1ⁿ⁺¹ = P_S[x1ⁿ − αn J′_{x1}(ξn)],   (3.99)
dⁿ⁺¹ = P_D[dⁿ − αn J′_d(ξn)], pⁿ⁺¹ = P_{R(t)}[pⁿ − αn J′_p(ξn)], n = 0, 1, …,

where 0 < αn ≤ 2/(l3 + 2ε), ε > 0. In particular, for ε = l3/2 one may take αn = 1/l3 = const > 0.

Theorem 11. Let the assumptions of theorem 10 hold, the set S be convex and closed, and the sequence {ξn} ⊂ X be defined by (3.99). Then:

1) the numerical sequence {J(ξn)} strictly decreases;
2) ‖v1ⁿ − v1ⁿ⁺¹‖ → 0, ‖v2ⁿ − v2ⁿ⁺¹‖ → 0, |x0ⁿ − x0ⁿ⁺¹| → 0, |x1ⁿ − x1ⁿ⁺¹| → 0, |dⁿ − dⁿ⁺¹| → 0, ‖pⁿ − pⁿ⁺¹‖ → 0 as n → ∞.

If, in addition, the inequality (3.95) holds and the set M(ξ0) = {ξ ∈ X / J(ξ) ≤ J(ξ0)} is bounded, then:

3) {ξn} ⊂ X is a minimizing sequence, i.e. lim_{n→∞} J(ξn) = J* = inf_{ξ∈X} J(ξ);
4) the set X* = {ξ* ∈ X / J(ξ*) = J* = inf_{ξ∈X} J(ξ)} ≠ ∅;
5) the sequence {ξn} converges weakly to the set X*: v1ⁿ → v1*, v2ⁿ → v2*, x0ⁿ → x0*, x1ⁿ → x1*, dⁿ → d*, pⁿ → p* weakly as n → ∞, where ξ* = (v1*, v2*, x0*, x1*, d*, p*) ∈ X*;
6) the convergence rate can be estimated as 0 ≤ J(ξn) − J* ≤ c2/n, n = 1, 2, …, c2 = const > 0;
7) the boundary value problem (3.62)–(3.66) has a solution if and only if J(ξ*) = 0.
Example 3. Let us demonstrate an application of the results presented above. The boundary value problem (see examples 1, 2)

ẋ = −x + 2(x − sin t)x²/(2x cos t − cos 2t), t ∈ I = [0, π/2],   (3.100)

(x(0) = x0, x(π/2) = x1) ∈ S = {(x0, x1) ∈ R² / 0 ≤ x0 ≤ 1, 1 ≤ x1 ≤ 2},   (3.101)

x(t) ∈ G(t); G(t) = {x ∈ R¹ / (2/π)t ≤ x² ≤ 2, t ∈ I = [0, π/2]},   (3.102)

g1(x) = ∫_0^{π/2} x³ dt ≤ 4   (3.103)

is given. For this example

A(t) = −1, B = 1, f(x, t) = 2(x − sin t)x²/(2x cos t − cos 2t), F(x, t) = x², f01(x, t) = x³, t0 = 0, t1 = π/2, μ(t) ≡ 0.

Transformation. Introduce the function x̄(t) = ∫_0^t x³(τ) dτ, t ∈ [0, π/2]. Then

x̄̇ = x³(t), x̄(0) = 0, x̄(π/2) = 4 − d, d ∈ D = {d ∈ R¹ / d ≥ 0}.
For this example the system (3.70)–(3.72) is written as

η̇ = A1η + B1 f(P1η, t) + B2 f01(P1η, t), t ∈ I = [0, π/2],   (3.104)

(P1η(0) = x0, P1η(π/2) = x1) ∈ S, P2η(0) = 0, P2η(π/2) = 4 − d,   (3.105)

P1η(t) ∈ G(t), t ∈ [0, π/2],   (3.106)

where

A1 = diag(−1, 0), B1 = (1, 0)ᵀ, B2 = (0, 1)ᵀ, η = (x, x̄)ᵀ,
P1 = (1, 0), P2 = (0, 1), P1η = x, P2η = x̄, f(P1η, t) = f(x, t), f01(P1η, t) = x³.

The linear controllable system corresponding to the system (3.104)–(3.106) is

ẏ = A1y + B1w1(t) + B2w2(t), t ∈ I = [0, π/2],   (3.107)

y(0) = η(0) = η0 = (x0, 0)ᵀ, y(π/2) = η(π/2) = η1 = (x1, 4 − d)ᵀ,   (3.108)

w1(·) ∈ L2(I, R¹), w2(·) ∈ L2(I, R¹).   (3.109)
Since F(t) = e^{A1 t} = diag(e^{−t}, 1), Λ(t, τ) = diag(e^{−(t−τ)}, 1), Λ(0, t) = diag(e^t, 1), Λ(0, π/2) = diag(e^{π/2}, 1), and E(t) = (B1, B2) = I2 (the 2×2 identity), the given data of the system (3.107)–(3.109) yield

W1(0, π/2) = ∫_0^{π/2} Λ(0, t)EE*Λ*(0, t) dt = diag((e^π − 1)/2, π/2) > 0,

W1^{−1}(0, π/2) = diag(2/(e^π − 1), 2/π), W1(0, t) = diag((e^{2t} − 1)/2, t),

W1(t, π/2) = diag((e^π − e^{2t})/2, π/2 − t), a = Λ(0, π/2)η1 − η0 = (e^{π/2}x1 − x0, 4 − d)ᵀ,

Γ1(t, η0, η1) = E*Λ*(0, t)W1^{−1}(0, π/2)a = ( [2e^t/(e^π − 1)](e^{π/2}x1 − x0), (2/π)(4 − d) )ᵀ,

M1(t) = E*Λ*(0, t)W1^{−1}(0, π/2)Λ(0, π/2) = diag(2e^t e^{π/2}/(e^π − 1), 2/π),

Γ2(t, η0, η1) = Λ(t, 0)W1(t, π/2)W1^{−1}(0, π/2)η0 + Λ(t, 0)W1(0, t)W1^{−1}(0, π/2)Λ(0, π/2)η1
= ( [e^{−t}(e^π − e^{2t})/(e^π − 1)]x0 + [e^{−t}e^{π/2}(e^{2t} − 1)/(e^π − 1)]x1, (2t/π)(4 − d) )ᵀ,

M2(t) = Λ(t, 0)W1(0, t)W1^{−1}(0, π/2)Λ(0, π/2) = diag(e^{−t}e^{π/2}(e^{2t} − 1)/(e^π − 1), 2t/π).
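The Gramian above is easy to confirm by quadrature: with Λ(0, t) = diag(e^t, 1) and E = I2 the integrand Λ(0, t)EE*Λ*(0, t) = diag(e^{2t}, 1) is diagonal, so W1(0, π/2) = diag((e^π − 1)/2, π/2). A Python sketch (trapezoidal rule; the grid size is arbitrary):

```python
import numpy as np

# Quadrature check of W1(0, pi/2) = diag((e^pi - 1)/2, pi/2).

t = np.linspace(0.0, np.pi / 2, 20001)
diag1 = np.exp(2 * t)          # first diagonal entry of the integrand
diag2 = np.ones_like(t)        # second diagonal entry

def trapez(f, t):
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

W_num = np.array([trapez(diag1, t), trapez(diag2, t)])
W_exact = np.array([(np.exp(np.pi) - 1) / 2, np.pi / 2])
err = np.max(np.abs(W_num - W_exact))
print(W_num, err)
```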
Then

w1(t) = v1(t) − [2e^t/(e^π − 1)]x0 + [2e^t e^{π/2}/(e^π − 1)]x1 − [2e^t e^{π/2}/(e^π − 1)]z1(π/2, v),   (3.110)

w2(t) = v2(t) + (2/π)(4 − d) − (2/π)z2(π/2, v),   (3.111)

y1(t) = z1(t, v) + [e^{−t}(e^π − e^{2t})/(e^π − 1)]x0 + [e^{−t}e^{π/2}(e^{2t} − 1)/(e^π − 1)]x1 − [e^{−t}e^{π/2}(e^{2t} − 1)/(e^π − 1)]z1(π/2, v),   (3.112)

y2(t) = z2(t, v) + (2t/π)(4 − d) − (2t/π)z2(π/2, v),   (3.113)

ż1 = −z1 + v1, z1(0) = 0, v1(·) ∈ L2(I, R¹),   (3.114)

ż2 = v2, z2(0) = 0, v2(·) ∈ L2(I, R¹).   (3.115)
For this example the optimization problem (3.90)–(3.94) is written as

J(v1, v2, x0, x1, d, p) = ∫_0^{π/2} [ |w1(t) − 2(y1(t) − sin t)y1²(t)/(2y1(t) cos t − cos 2t)|² + |w2(t) − y1³(t)|² + |p(t) − y1²(t)|² ] dt → inf

under the conditions

ż1 = −z1 + v1(t), z1(0) = 0, ż2 = v2, z2(0) = 0, I = [0, π/2], v1(·), v2(·) ∈ L2(I, R¹),

x0 ∈ S0 = {x0 ∈ R¹ / 0 ≤ x0 ≤ 1}, x1 ∈ S1 = {x1 ∈ R¹ / 1 ≤ x1 ≤ 2},

d ∈ D, p ∈ R(t) = {p(·) ∈ L2(I, R¹) / (2/π)t ≤ p(t) ≤ 2, t ∈ I},

where w1(t), w2(t), y1(t), t ∈ I, are defined by (3.110)–(3.112), respectively, z1(t), z2(t), t ∈ I, is a solution to the system (3.114), (3.115), and the function y2(t), t ∈ I, is defined by (3.113).
Since

T11(t) = −2e^t/(e^π − 1), T21(t) = 2e^t e^{π/2}/(e^π − 1), T22(t) = 0, r1(t) = 0, M11(t) = 2e^t e^{π/2}/(e^π − 1),

C11(t) = 0, C21(t) = 0, C22(t) = −2/π, r2(t) = 8/π, M12(t) = 2/π,

D11(t) = (e^{−t}(e^π − e^{2t})/(e^π − 1), 0)ᵀ, D21(t) = (e^{−t}e^{π/2}(e^{2t} − 1)/(e^π − 1), 0)ᵀ,

D22(t) = (0, −2t/π)ᵀ, f̄(t) = (0, 8t/π)ᵀ, M2(t) = diag(e^{−t}e^{π/2}(e^{2t} − 1)/(e^π − 1), 2t/π),

one can calculate the partial derivatives and construct the sequence

v1ⁿ⁺¹ = v1ⁿ − αn J′_{v1}(ξn), v2ⁿ⁺¹ = v2ⁿ − αn J′_{v2}(ξn),
x0ⁿ⁺¹ = P_{S0}[x0ⁿ − αn J′_{x0}(ξn)], x1ⁿ⁺¹ = P_{S1}[x1ⁿ − αn J′_{x1}(ξn)],
dⁿ⁺¹ = P_D[dⁿ − αn J′_d(ξn)], pⁿ⁺¹ = P_{R(t)}[pⁿ − αn J′_p(ξn)], n = 0, 1, …, αn = const = 0.01.
It can be shown that J(ξ*) = 0, ξ* = (v1*, v2*, x0*, x1*, d*, p*) ∈ X*, where, weakly as n → ∞,

v1ⁿ → v1* = 2 cos t, v2ⁿ → v2* = sin²t (sin t + 3 cos t) + cos²t (cos t + 3 sin t) = (sin t + cos t)³,

x0ⁿ → x0* = 1, x1ⁿ → x1* = 1, dⁿ → d* = 2/3, pⁿ → p*(t) = 1 + sin 2t.
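These limit values are consistent with the integral constraint (3.103): for x*(t) = sin t + cos t one has g1(x*) = ∫_0^{π/2} x*³ dt = 10/3 ≤ 4, so the slack is d* = 4 − 10/3 = 2/3, matching the limit of dⁿ. A numerical check (Python sketch):

```python
import numpy as np

# Trapezoidal evaluation of g1(x*) = integral of (sin t + cos t)^3 over [0, pi/2].

t = np.linspace(0.0, np.pi / 2, 20001)
x3 = (np.sin(t) + np.cos(t)) ** 3
g1 = np.sum(0.5 * (x3[1:] + x3[:-1]) * np.diff(t))
print(g1, 4 - g1)  # -> ~3.3333 (= 10/3) and ~0.6667 (= d*)
```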
Then

z1*(t, v1*, v2*) = cos t + sin t − e^{−t}, z2*(t, v1*, v2*) = −(2/3)cos³t − cos t + sin t + (2/3)sin³t + 5/3,

x*(t) = y1*(t) = sin t + cos t, x̄*(t) = y2*(t) = −(2/3)cos³t − cos t + sin t + (2/3)sin³t + 5/3, t ∈ [0, π/2].

It can easily be checked that x0* ∈ S0, x1* ∈ S1, x*(t) ∈ G(t), ∫_0^{π/2} x*³(t) dt = 10/3 ≤ 4 and (2/π)t ≤ x*²(t) ≤ 2, t ∈ [0, π/2]. Thus a solution to the boundary value problem (3.100)–(3.103) has been found.

Comments

As follows from the problem statement, it is necessary to prove the existence of a pair (x0, x1) ∈ S such that the solution to the system (3.1), starting from the point x0 at time t0, passes through the point x1 at time t1, while the trajectory of the system (3.1) satisfies the state variable constraint (3.3) at all time instants and the integral constraints (3.4), (3.5). In many cases the studied process is described by equations of the form (3.1) only in the region of the phase space defined by the state variable constraint (3.3); outside this region the process is described by entirely different equations, or does not exist at all. In particular, such phenomena occur in studies of the dynamics of nuclear and chemical reactors (outside the region (3.3) a reactor does not exist). Integral constraints of the form (3.4) characterize the total load endured by elements and nodes of the system (for example, the total overload of astronauts), which must not exceed predetermined values, and the equalities in the integral constraints correspond to exact total constraints imposed on the system (for example, a fuel consumption equal to a specified value). The Lagrange principle in the calculus of variations and the L.S. Pontryagin maximum principle in optimal control are, in the end, reduced to boundary value problems of the form (3.1)–(3.5). The constructiveness of the proposed method consists in the fact that the solvability question and the construction of a solution to the boundary value problem are treated together, by building minimizing sequences oriented toward the use of computer technology.
To check solvability and to construct a solution of the boundary value problem, one has to solve the optimization problem (3.90)–(3.94): the solvability condition is lim_{n→∞} J(θn) = inf_{θ∈X} J(θ) = 0, and a solution to the boundary value problem is determined by the limit points {θ*} of the sequence {θn}. This is the fundamental difference of the proposed method from the known ones [6, 7, 8]. Proofs of theorems 1, 2 can be found in [1]. Chapter III has been presented on the basis of [2–4].

In the general case the optimization problem (3.90)–(3.94) may have an infinite number of solutions {θ*} for which J(θ*) = 0. Depending on the initial guess, the minimizing sequences converge to some element of the set {θ*}. Let θ* = (v1*, v2*, p*, d*, x0*, x1*), with J(θ*) = 0, be one such solution. Here x0* = x(t0), x1* = x(t1), (x0*, x1*) ∈ S, where x0* is the initial state of the system. The problem statement lists conditions on the right hand side of the differential equation (3.1) under which the Cauchy problem has a unique solution. Therefore the differential equation (3.1) with the initial condition x(t0) = x0* has a unique solution for t ∈ [t0, t1]; moreover, x(t1) = x1* and the constraints (3.2)–(3.5) hold. Thus, from θ* = (v1*, v2*, p*, d*, x0*, x1*) such that J(θ*) = 0 we construct a solution to the boundary value problem (3.1)–(3.5).

References:
1. S.A. Aisagaliev. A general solution to one class of integral equations // Mathematical Journal. – 2005. – Vol. 5, № 4 (1.20). – P. 17-34. (in Russian)
2. S.A. Aisagaliev, Zh.Kh. Zhunussova, M.N. Kalimoldayev. The imbedding principle for boundary value problems of ordinary differential equations // Mathematical Journal. – 2012. – Vol. 12, № 2(44). – P. 5-22. (in Russian)
3. S.A. Aisagaliev, M.N. Kalimoldayev, E.M. Pozdeyeva. To the boundary value problem of ordinary differential equations // Vestnik KazNU, ser. math., mech., inf. – 2013. – № 1(76). – P. 5-30. (in Russian)
4. S.A. Aisagaliev, M.N. Kalimoldayev. Constructive method for solving boundary value problems of ordinary differential equations // Differential Equations. – 2014. – Vol. 50, № 8. – P. 901-916. (in Russian)
5. S.A. Aisagaliev, T.S. Aisagaliev. Methods for solving boundary value problems. – Almaty: «Kazakh University» publishing house, 2002. – 348 p. (in Russian)
6. N.I. Vasilyev, Eu.A. Klokov. The basics of the theory of boundary value problems for ordinary differential equations. – Riga: Zinatne, 1978. – 307 p. (in Russian)
7. Eu.A. Klokov. On some boundary value problems for second order systems // Differential Equations. – 2010. – Vol. 48, № 10. – P. 1368-1373. (in Russian)
8. A.N. Tikhonov, A.B. Vasilyeva, A.G. Sveshnikov. Differential Equations. – M.: Nauka, 1985. – 231 p. (in Russian)
9. S.A. Aisagaliev. Constructive theory of boundary value problems of ordinary differential equations. – Almaty: «Kazakh University» publishing house, 2015. – 207 p. (in Russian)
10. S.A. Aisagaliev. The problems of qualitative theory of differential equations. – Almaty: «Kazakh University» publishing house, 2016. – 397 p. (in Russian)
Chapter IV

BOUNDARY VALUE PROBLEM WITH A PARAMETER FOR ORDINARY DIFFERENTIAL EQUATIONS
A method for solving a boundary value problem with a parameter under a state variable constraint and integral constraints is presented. A necessary and sufficient condition for existence of a solution to the boundary value problem with a parameter for ordinary differential equations is obtained. A method for constructing a solution to the boundary value problem with a parameter and constraints is developed by constructing minimizing sequences.

Lecture 14. Statement of the problem. Imbedding principle

Consider the boundary value problem with a parameter

ẋ = A(t)x + B(t)f(x, λ, t) + μ(t), t ∈ I = [t0, t1],   (4.1)

(x(t0) = x0, x(t1) = x1) ∈ S ⊂ R^{2n},   (4.2)

with the state variable constraints

x(t) ∈ G(t); G(t) = {x ∈ R^n / ω(t) ≤ F(x, λ, t) ≤ φ(t), t ∈ I},   (4.3)

the integral constraints

g_j(x0, x1, λ) ≤ c_j, j = 1,…,m1; g_j(x0, x1, λ) = c_j, j = m1+1,…,m2,   (4.4)

g_j(x0, x1, λ) = ∫_{t0}^{t1} f_{0j}(x(t), x0, x1, λ, t) dt, j = 1,…,m2,   (4.5)

and the parameter

λ ∈ Λ ⊂ R^s, λ = (λ1,…,λs).   (4.6)

Here A(t), B(t) are given n×n and n×m matrices with piecewise continuous elements, respectively; the vector valued function f(x, λ, t) = (f1(x, λ, t),…,fm(x, λ, t)) is continuous with respect to the set of variables (x, λ, t) ∈ R^n × R^s × I and satisfies the Lipschitz condition with respect to x, i.e.

|f(x, λ, t) − f(y, λ, t)| ≤ l(t)|x − y| for all (x, λ, t), (y, λ, t) ∈ R^n × R^s × I,   (4.7)

as well as the growth condition

|f(x, λ, t)| ≤ c0(|x| + |λ|²) + c1(t), (x, λ, t) ∈ R^n × R^s × I,   (4.8)

where l(t) ≥ 0, l(t) ∈ L1(I, R¹), c0 = const > 0, c1(t) ≥ 0, c1(t) ∈ L1(I, R¹). Note that under the conditions (4.7), (4.8) the differential equation (4.1) has a unique solution for fixed x0 = x(t0) ∈ R^n, λ ∈ R^s and t ∈ I.
The vector-valued function F(x, \lambda, t) = (F_1(x, \lambda, t), \ldots, F_r(x, \lambda, t)) is continuous with respect to the set of variables (x, \lambda, t) \in R^n \times R^s \times I. The function f_0(x(t), x_0, x_1, \lambda, t) = (f_{01}(x, x_0, x_1, \lambda, t), \ldots, f_{0m_2}(x, x_0, x_1, \lambda, t)) is continuous with respect to the set of variables and satisfies the condition

|f_0(x, x_0, x_1, \lambda, t)| \le c_2(|x|^2 + |x_0|^2 + |x_1|^2 + |\lambda|^2) + c_3(t), \quad \forall (x, x_0, x_1, \lambda, t) \in R^n \times R^n \times R^n \times R^s \times I,

where c_2 = const \ge 0, c_3(t) \ge 0, c_3(t) \in L_1(I, R^1). Further, \omega(t), \varphi(t), t \in I, are given r-dimensional continuous functions, S is a given bounded convex closed set in R^{2n}, \Lambda is a given bounded convex closed set in R^s, and the time instants t_0, t_1 are fixed, t_1 > t_0.

Note that if A(t) \equiv 0, m = n, B(t) = I_n, where I_n is the n \times n identity matrix, then equation (4.1) takes the form

\dot{x} = f(x, \lambda, t) + \mu(t), \quad t \in I.   (4.9)

For this reason the results presented below hold true for an equation of the form (4.9) under the conditions (4.2)–(4.6). In particular, the set S may be given by

S = \{(x_0, x_1) \in R^{2n} : H_j(x_0, x_1) \le 0, \ j = \overline{1, p}; \ \langle a_j, x_0 \rangle + \langle b_j, x_1 \rangle = e_j, \ j = \overline{p_1 + 1, s}\},

where H_j(x_0, x_1), j = \overline{1, p}, are convex functions of the variables (x_0, x_1), x_0 = x(t_0), x_1 = x(t_1); a_j \in R^n, b_j \in R^n, e_j \in R^1, j = \overline{p_1 + 1, s}, are given vectors and numbers, and \langle \cdot, \cdot \rangle denotes the scalar product. Similarly, the set

\Lambda = \{\lambda \in R^s : h_j(\lambda) \le 0, \ j = \overline{1, p_1}; \ \langle a_j, \lambda \rangle = e_j, \ j = \overline{p_1 + 1, s_1}\},

where h_j(\lambda), j = \overline{1, p_1}, are convex functions of \lambda; a_j \in R^s, e_j \in R^1, j = \overline{p_1 + 1, s_1}, are given vectors and numbers.

The following problems are posed:

Problem 1. Find necessary and sufficient conditions for the existence of a solution to the boundary value problem (4.1)–(4.6).

Problem 2. Construct a solution to the boundary value problem (4.1)–(4.6).

As follows from the problem statement, it is necessary to prove the existence of a pair (x_0, x_1) \in S and a parameter \lambda \in \Lambda such that the solution x(t) = x(t; x_0, t_0, \lambda), t \in I, of the system (4.1) starting from the point x_0 at time t_0 passes through the point x_1 at time t_1, satisfies the boundary conditions x(t_0) = x_0, x(t_1) = x_1 and the state variable constraint (4.3), and the integrals (4.5) satisfy the condition (4.4).

In particular, the boundary value problem (4.1)–(4.5) without the state variable and integral constraints contains the Sturm–Liouville problem. The application of the Fourier method to problems of mathematical physics leads to the following problem [4]: find the values of a parameter \lambda for which there exists a nonzero solution on the finite segment [t_0, t_1] of the homogeneous equation

L[y] + \lambda r(t)y(t) \equiv 0,   (4.10)
satisfying the conditions

\alpha_1 y(t_0) + \alpha_2 \dot{y}(t_0) = 0, \quad \beta_1 y(t_1) + \beta_2 \dot{y}(t_1) = 0,   (4.11)

where L[y] = \frac{d}{dt}[p(t)\dot{y}(t)] - q(t)y(t), p(t) > 0, t \in [t_0, t_1]. Denoting y(t) = x_1(t), \dot{x}_1(t) = x_2(t), t \in [t_0, t_1], the equation (4.10) can be represented in the form

\dot{x} = A(t)x + B(t)f(x_1, \lambda, t), \quad t \in I = [t_0, t_1],   (4.12)

where

A(t) = \begin{pmatrix} 0 & 1 \\ q(t)/p(t) & -\dot{p}(t)/p(t) \end{pmatrix}, \quad B(t) = \begin{pmatrix} 0 & 0 \\ 0 & -r(t)/p(t) \end{pmatrix}, \quad f(x_1, \lambda, t) = \begin{pmatrix} 0 \\ \lambda x_1 \end{pmatrix}.

The boundary conditions (4.11) are rewritten in the form

\alpha_1 x_{10} + \alpha_2 x_{20} = 0, \quad \beta_1 x_{11} + \beta_2 x_{21} = 0,   (4.13)

where x(t_0) = (x_{10}, x_{20}), x(t_1) = (x_{11}, x_{21}), and the parameter \lambda \in R^1. The equation (4.12), the boundary conditions (4.13) and \lambda \in R^1 are special cases of (4.1), (4.2), (4.6), respectively.

As is known from [5], solving the Sturm–Liouville problem reduces to the second kind Fredholm integral equation

y(t) = \lambda \int_{t_0}^{t_1} G(t, \xi) r(\xi) y(\xi)\,d\xi,   (4.14)

where G(t, \xi) is the Green function. Note that constructing the Green function G(t, \xi) and solving the integral equation (4.14) are quite difficult tasks. It is therefore of interest to develop new methods of investigation for the boundary value problems (4.1)–(4.6).

In [1] sufficient conditions of solvability for a two-point homogeneous boundary value problem for a system of two second order nonlinear differential equations are proposed and a priori estimates of solutions are obtained. In [2] eigenvalue and eigenfunction problems for a second order quasilinear differential equation are considered, and the requirements on the nonlinearity under which the problem has multiple eigenvalues are studied. The paper [3] is devoted to nonlinear eigenvalue problems for the Sturm–Liouville operator; for the problem in which the boundary conditions at both ends of the interval depend on a spectral parameter, the existence of a system of eigenfunctions forming a basis in L_p(0, 1), p > 1, is established.

The development of a general theory of boundary value problems with a parameter for ordinary differential equations of arbitrary order with complicated boundary conditions, state variable constraints and integral constraints is a topical problem. In many cases in practice the process under study is described by an equation of the form (4.1) only in a region of the phase space defined by a state variable constraint of the form (4.3). Outside of this region the process is described by absolutely
different equations, or the studied process does not exist at all. In particular, such phenomena occur in studies of the dynamics of nuclear and chemical reactors (outside of the region (4.3) the reactor does not exist). The integral constraints of the form (4.4) with inequality signs characterize the total load endured by elements and nodes of the system (for example, the total overload of astronauts), which must not exceed predetermined values, while the equalities in (4.4) correspond to total constraints imposed on the system (for example, fuel consumption equal to a specified value).

The basis of the proposed method for solving the boundary value problem with a parameter is the imbedding principle. The essence of the imbedding principle is that the original boundary value problem with constraints is reduced to a free end point optimal control problem. This approach is possible due to the existence of a general solution to one class of first kind Fredholm integral equations. The verification of the existence of a solution to the original problem and the construction of its solution are then realized by solving an optimal control problem of a special kind. In this approach the necessary and sufficient conditions for the existence of a solution to the boundary value problem (4.1)–(4.6) are obtained from the conditions for reaching the lower bound of a functional on a given set, and a solution to the original boundary value problem is determined by the limit points of minimizing sequences. This is the fundamental difference between the proposed method and the known methods.

Let us consider the integral constraints (4.4), (4.5). By introducing auxiliary variables d = (d_1, \ldots, d_{m_1}) \in R^{m_1}, d \ge 0, the relations (4.4), (4.5) are represented in the form

g_j(x_0, x_1, \lambda) = \int_{t_0}^{t_1} f_{0j}(x(t), x_0, x_1, \lambda, t)\,dt = c_j - d_j, \quad j = \overline{1, m_1}, \quad d \in \Gamma = \{d \in R^{m_1} : d \ge 0\}.

Let \bar{c} = (\bar{c}_1, \ldots, \bar{c}_{m_2}) be the vector with \bar{c}_j = c_j - d_j, j = \overline{1, m_1}, and \bar{c}_j = c_j, j = \overline{m_1 + 1, m_2}. Introduce the vector-valued function \eta(t) = (\eta_1(t), \ldots, \eta_{m_2}(t)), t \in I, where

\eta(t) = \int_{t_0}^{t} f_0(x(\tau), x_0, x_1, \lambda, \tau)\,d\tau, \quad t \in I = [t_0, t_1].

Then

\dot{\eta} = f_0(x(t), x_0, x_1, \lambda, t), \quad t \in I, \quad \eta(t_0) = 0, \quad \eta(t_1) = \bar{c}, \quad d \in \Gamma.

Now the boundary value problem (4.1)–(4.6) is rewritten in the form

\dot{\xi} = A_1(t)\xi + B_1(t)f(P\xi, \lambda, t) + B_2 f_0(P\xi, x_0, x_1, \lambda, t) + B_3\mu(t), \quad t \in I,   (4.15)

\xi(t_0) = \xi_0 = (x_0, O_{m_2}), \quad \xi(t_1) = \xi_1 = (x_1, \bar{c}), \quad (x_0, x_1) \in S, \quad d \in \Gamma,   (4.16)

P\xi(t) \in G(t), \quad t \in I, \quad \lambda \in \Lambda,   (4.17)

where

\xi(t) = \begin{pmatrix} x(t) \\ \eta(t) \end{pmatrix}, \quad A_1(t) = \begin{pmatrix} A(t) & O_{n, m_2} \\ O_{m_2, n} & O_{m_2, m_2} \end{pmatrix}, \quad B_1(t) = \begin{pmatrix} B(t) \\ O_{m_2, m} \end{pmatrix}, \quad B_2 = \begin{pmatrix} O_{n, m_2} \\ I_{m_2} \end{pmatrix}, \quad B_3 = \begin{pmatrix} I_n \\ O_{m_2, n} \end{pmatrix}, \quad P = (I_n, O_{n, m_2}), \quad P\xi = x,

O_{j,k} is the j \times k zero matrix, O_q \in R^q is the q \times 1 zero vector, and \xi = (\xi_1, \ldots, \xi_n, \xi_{n+1}, \ldots, \xi_{n+m_2}).
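The augmentation above turns the integral constraint into two extra boundary conditions on the auxiliary state \eta(t). A minimal numerical sketch of this idea follows; the scalar system \dot{x} = -x with f_0 = x^2 is an assumed illustration, not an example from the text:

```python
import numpy as np

# Augmented state xi = (x, eta): eta accumulates the integrand f0, so that
# eta(t1) equals the integral entering the constraint (4.4).
def f(x, t):      # right-hand side of the state equation (assumed example)
    return -x

def f0(x, t):     # integrand of the integral constraint (assumed example)
    return x ** 2

t0, t1, n = 0.0, 1.0, 20000
h = (t1 - t0) / n
x, eta = 1.0, 0.0
for k in range(n):
    t = t0 + k * h
    # one explicit Euler step for the augmented system (4.15): xi' = (f, f0)
    x, eta = x + h * f(x, t), eta + h * f0(x, t)

# For x' = -x, x(0) = 1: x(t) = e^{-t}, hence eta(1) = (1 - e^{-2}) / 2
exact = (1.0 - np.exp(-2.0)) / 2.0
print(x, eta, exact)
```

With \eta carried as part of the state, the constraint \eta(t_1) = \bar{c} is checked exactly like a boundary condition.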
The basis of the proposed method for solving Problems 1 and 2 is the following pair of theorems on the properties of solutions of the first kind Fredholm integral equation

Ku = \int_{t_0}^{t_1} K(t_0, t)u(t)\,dt = a,   (4.18)

where K(t_0, t) = \|K_{ij}(t_0, t)\|, i = \overline{1, n_1}, j = \overline{1, m_1}, is a known n_1 \times m_1 matrix with piecewise continuous elements with respect to t for fixed t_0, u(\cdot) \in L_2(I, R^{m_1}) is the unknown function, I = [t_0, t_1], and a \in R^{n_1} is a given vector.

Theorem 1. The integral equation (4.18) has a solution for any fixed a \in R^{n_1} if and only if the n_1 \times n_1 matrix

C(t_0, t_1) = \int_{t_0}^{t_1} K(t_0, t)K^*(t_0, t)\,dt   (4.19)

is positive definite, where (*) denotes transposition.

Theorem 2. Let the matrix C(t_0, t_1) defined by (4.19) be positive definite. Then the general solution of the integral equation (4.18) is given by

u(t) = K^*(t_0, t)C^{-1}(t_0, t_1)a + v(t) - K^*(t_0, t)C^{-1}(t_0, t_1)\int_{t_0}^{t_1} K(t_0, t)v(t)\,dt, \quad t \in I,   (4.20)

where v(\cdot) \in L_2(I, R^{m_1}) is an arbitrary function and a \in R^{n_1} is an arbitrary vector. For the proofs of Theorems 1 and 2 we refer the reader to [8]. Applications of these theorems to controllability and optimal control problems are described in [7–9], and their application to boundary value problems of ordinary differential equations is presented in [7].
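Theorems 1 and 2 are easy to check numerically. In the sketch below the kernel K(t_0, t) = (1, t) (so n_1 = 1, m_1 = 2) and the function v are assumed examples; when every integral is computed with one and the same quadrature rule, the control (4.20) satisfies \int K u\,dt = a for any choice of v:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)

def trap(y):                         # trapezoidal quadrature on the grid t
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(t)))

k = np.vstack([np.ones_like(t), t])  # components of K*(t0, t) = (1, t)^T
a = 2.0                              # arbitrary right-hand side (n1 = 1)
C = trap((k * k).sum(axis=0))        # C(t0,t1) = int_0^1 (1 + t^2) dt = 4/3

v = np.vstack([np.sin(t), np.cos(t)])  # arbitrary v(.) in (4.20)
Kv = trap((k * v).sum(axis=0))       # int K(t0,t) v(t) dt
u = k * (a - Kv) / C + v             # general solution (4.20)
Ku = trap((k * u).sum(axis=0))       # int K(t0,t) u(t) dt -> equals a
print(C, Ku)
```

The identity Ku = a holds here up to rounding because the same linear quadrature is used for C, \int Kv\,dt and \int Ku\,dt, mirroring the exact cancellation in (4.20).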
Together with the differential equation (4.15), under the boundary conditions (4.16), consider the linear controllable system

\dot{y} = A_1(t)y + B_1(t)w_1(t) + B_2 w_2(t) + \mu_1(t), \quad t \in I,   (4.21)

y(t_0) = \xi_0 = (x_0, O_{m_2}), \quad y(t_1) = \xi_1 = (x_1, \bar{c}),   (4.22)

w_1(\cdot) \in L_2(I, R^m), \quad w_2(\cdot) \in L_2(I, R^{m_2}),   (4.23)

(x_0, x_1) \in S, \quad d \in \Gamma,   (4.24)

where \mu_1(t) = B_3\mu(t), t \in I. Let \bar{B}(t) = (B_1(t), B_2) be the (n + m_2) \times (m + m_2) matrix and w(t) = (w_1(t), w_2(t)) \in L_2(I, R^{m+m_2}). It is not hard to show that a control w(\cdot) \in L_2(I, R^{m+m_2}) which brings the trajectory of the system (4.21) from the initial point \xi_0 to the terminal state \xi_1 is a solution of the integral equation

\int_{t_0}^{t_1} \Phi(t_0, t)\bar{B}(t)w(t)\,dt = a,   (4.25)

where \Phi(t, \tau) = \theta(t)\theta^{-1}(\tau), \theta(t) is a fundamental matrix solution of the linear homogeneous system \dot{\zeta} = A_1(t)\zeta, and (see (4.22)–(4.25))

a = a(\xi_0, \xi_1) = \Phi(t_0, t_1)[\xi_1 - \Phi(t_1, t_0)\xi_0] - \int_{t_0}^{t_1} \Phi(t_0, t)\mu_1(t)\,dt.

As follows from (4.15), (4.25), the integral equation (4.25) coincides with (4.18) if K(t_0, t) = \Phi(t_0, t)\bar{B}(t). Let us introduce the notations

W(t_0, t_1) = \int_{t_0}^{t_1} \Phi(t_0, t)\bar{B}(t)\bar{B}^*(t)\Phi^*(t_0, t)\,dt, \quad W(t_0, t) = \int_{t_0}^{t} \Phi(t_0, \tau)\bar{B}(\tau)\bar{B}^*(\tau)\Phi^*(t_0, \tau)\,d\tau,

W(t, t_1) = W(t_0, t_1) - W(t_0, t), \quad E(t) = \bar{B}^*(t)\Phi^*(t_0, t)W^{-1}(t_0, t_1),

\mu_2(t) = -E(t)\int_{t_0}^{t_1} \Phi(t_0, t)\mu_1(t)\,dt, \quad E_1(t) = \Phi(t, t_0)W(t, t_1)W^{-1}(t_0, t_1),

E_2(t) = \Phi(t, t_0)W(t_0, t)W^{-1}(t_0, t_1)\Phi(t_0, t_1), \quad \mu_3(t) = \Phi(t, t_0)\int_{t_0}^{t} \Phi(t_0, \tau)\mu_1(\tau)\,d\tau - E_2(t)\int_{t_0}^{t_1} \Phi(t_1, t)\mu_1(t)\,dt,

and calculate the functions \lambda_1(t, \xi_0, \xi_1), \lambda_2(t, \xi_0, \xi_1), N_1(t), N_2(t) by

\lambda_1(t, \xi_0, \xi_1) = E(t)a = -E(t)\xi_0 + E(t)\Phi(t_0, t_1)\xi_1 + \mu_2(t), \quad \lambda_2(t, \xi_0, \xi_1) = E_1(t)\xi_0 + E_2(t)\xi_1 + \mu_3(t),

N_1(t) = -E(t)\Phi(t_0, t_1), \quad N_2(t) = -E_2(t), \quad t \in I.

Theorem 3. Let the matrix W(t_0, t_1) be positive definite. Then a control w(\cdot) \in L_2(I, R^{m+m_2}) brings the trajectory of the system (4.21) from an initial point \xi_0 \in R^{n+m_2} to any terminal state \xi_1 \in R^{n+m_2} if and only if

w(t) \in \bar{W} = \{w(\cdot) \in L_2(I, R^{m+m_2}) : w(t) = v(t) + \lambda_1(t, \xi_0, \xi_1) + N_1(t)z(t_1, v), \ t \in I, \ v(\cdot) \in L_2(I, R^{m+m_2})\},   (4.26)

where the function z(t) = z(t, v), t \in I, is a solution of the differential equation

\dot{z} = A_1(t)z + \bar{B}(t)v(t), \quad z(t_0) = 0, \quad t \in I, \quad v(\cdot) \in L_2(I, R^{m+m_2}).   (4.27)

The solution of the differential equation (4.21) corresponding to a control w(t) \in \bar{W} is given by

y(t) = z(t, v) + \lambda_2(t, \xi_0, \xi_1) + N_2(t)z(t_1, v), \quad t \in I.   (4.28)

Proof. As follows from Theorem 1, for the existence of a solution to the integral equation (4.25) it is necessary and sufficient that W(t_0, t_1) = C(t_0, t_1) > 0, where K(t_0, t) = \Phi(t_0, t)\bar{B}(t). The relation (4.20) then takes the form (4.26). The solution of the system (4.21) corresponding to the control (4.26) is given by (4.28), where z(t) = z(t, v), t \in I, is a solution of the differential equation (4.27). This concludes the proof.

Lemma 1. Let the matrix W(t_0, t_1) > 0. Then the boundary value problem (4.1)–(4.6) is equivalent to the following relations:

w(t) = (w_1(t), w_2(t)) \in \bar{W}, \quad w_1(t) = f(Py(t), \lambda, t), \quad w_2(t) = f_0(Py(t), x_0, x_1, \lambda, t),   (4.29)

\dot{z} = A_1(t)z + B_1(t)v_1(t) + B_2 v_2(t), \quad z(t_0) = 0, \quad t \in I,   (4.30)

v(t) = (v_1(t), v_2(t)), \quad v_1(\cdot) \in L_2(I, R^m), \quad v_2(\cdot) \in L_2(I, R^{m_2}),   (4.31)

(x_0, x_1) \in S, \quad \lambda \in \Lambda, \quad d \in \Gamma, \quad Py(t) \in G(t), \quad t \in I,   (4.32)

where v(\cdot) = (v_1(\cdot), v_2(\cdot)) \in L_2(I, R^{m+m_2}) is an arbitrary function and y(t), t \in I, is defined by (4.28).

Proof. When the relations (4.29)–(4.32) hold, the function y(t) = \xi(t), t \in I, Py(t) = P\xi(t) \in G(t), t \in I, and w(t) = (w_1(t), w_2(t)) \in \bar{W}. This completes the proof.
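The construction of Theorem 3 can be illustrated numerically. The double integrator below is an assumed toy system, not one from the text; taking v \equiv 0 in (4.26) gives the control w(t) = \bar{B}^*\Phi^*(t_0, t)W^{-1}(t_0, t_1)a, which steers \xi_0 to \xi_1:

```python
import numpy as np

# Toy system (assumed): xi' = A xi + B w, A = [[0,1],[0,0]], B = (0,1)^T, t in [0,1].
# Phi(0,t) = [[1,-t],[0,1]], Phi(0,t) B = (-t, 1)^T, so the Gramian is
# W(0,1) = int_0^1 [[t^2,-t],[-t,1]] dt = [[1/3,-1/2],[-1/2,1]].
W = np.array([[1/3, -1/2], [-1/2, 1.0]])
xi0, xi1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
Phi01 = np.array([[1.0, -1.0], [0.0, 1.0]])      # Phi(0, 1)
a = Phi01 @ xi1 - xi0                            # a = Phi(0,t1) xi1 - xi0 (mu = 0)
c = np.linalg.solve(W, a)                        # W^{-1}(0,1) a

def w(s):                                        # w(s) = B* Phi*(0,s) W^{-1} a
    return np.array([-s, 1.0]) @ c

A = np.array([[0.0, 1.0], [0.0, 0.0]]); B = np.array([0.0, 1.0])

def rhs(s, y):
    return A @ y + B * w(s)

xi, n = xi0.copy(), 1000; h = 1.0 / n            # RK4 integration of (4.21)
for k in range(n):
    s = k * h
    k1 = rhs(s, xi); k2 = rhs(s + h/2, xi + h/2 * k1)
    k3 = rhs(s + h/2, xi + h/2 * k2); k4 = rhs(s + h, xi + h * k3)
    xi = xi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
print(xi)   # endpoint close to xi1 = (1, 0)
```

Here the resulting control is w(t) = 6 - 12t, the classical minimum-energy transfer for this toy system; any other admissible transfer differs from it by a term generated through v and z(t_1, v) in (4.26).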
Consider the optimization problem (see (4.20)–(4.32))

J(v_1, v_2, p, d, x_0, x_1, \lambda) = \int_{t_0}^{t_1} [\,|w_1(t) - f(Py(t), \lambda, t)|^2 + |w_2(t) - f_0(Py(t), x_0, x_1, \lambda, t)|^2 + |p(t) - F(Py(t), \lambda, t)|^2\,]\,dt = \int_{t_0}^{t_1} F_0(t, v_1(t), v_2(t), p(t), d, x_0, x_1, \lambda, z(t), z(t_1))\,dt \to \inf   (4.33)

under the conditions

\dot{z} = A_1(t)z + B_1(t)v_1(t) + B_2 v_2(t), \quad z(t_0) = 0, \quad t \in I,   (4.34)

v_1(\cdot) \in L_2(I, R^m), \quad v_2(\cdot) \in L_2(I, R^{m_2}), \quad (x_0, x_1) \in S, \quad d \in \Gamma, \quad \lambda \in \Lambda,   (4.35)

p(t) \in V(t) = \{p(\cdot) \in L_2(I, R^r) : \omega(t) \le p(t) \le \varphi(t), \ t \in I\},   (4.36)

where

w_1(t) = v_1(t) + \lambda_{11}(t, \xi_0, \xi_1) + N_{11}(t)z(t_1, v), \quad w_2(t) = v_2(t) + \lambda_{12}(t, \xi_0, \xi_1) + N_{12}(t)z(t_1, v), \quad t \in I,   (4.37)

N_1(t) = (N_{11}(t), N_{12}(t)), \quad \lambda_1(t, \xi_0, \xi_1) = (\lambda_{11}(t, \xi_0, \xi_1), \lambda_{12}(t, \xi_0, \xi_1)).   (4.38)

Denote

X = L_2(I, R^m) \times L_2(I, R^{m_2}) \times V(t) \times \Gamma \times S \times \Lambda \subset H = L_2(I, R^m) \times L_2(I, R^{m_2}) \times L_2(I, R^r) \times R^{m_1} \times R^n \times R^n \times R^s,

\theta = (v_1, v_2, p, d, x_0, x_1, \lambda) \in X, \quad J_* = \inf_{\theta \in X} J(\theta), \quad X_* = \{\theta_* \in X : J(\theta_*) = 0\}.
Theorem 4. Let the matrix W(t_0, t_1) be positive definite and X_* \ne \emptyset, where \emptyset denotes the empty set. The boundary value problem (4.1)–(4.6) has a solution if and only if J(\theta_*) = 0 = J_*, where \theta_* = (v_{1*}, v_{2*}, p_*, d_*, x_{0*}, x_{1*}, \lambda_*) \in X is an optimal control for the problem (4.33)–(4.38). If J_* = J(\theta_*) = 0, then the function

x_*(t) = P[z(t, v_{1*}, v_{2*}) + \lambda_2(t, \xi_{0*}, \xi_{1*}) + N_2(t)z(t_1, v_{1*}, v_{2*})], \quad t \in I,

is a solution of the boundary value problem (4.1)–(4.6). If J_* > 0, then the boundary value problem (4.1)–(4.6) has no solution.

Proof. Necessity. Let the boundary value problem (4.1)–(4.6) have a solution. Then, as follows from Lemma 1, the equalities w_{1*}(t) = f(Py_*(t), \lambda_*, t), w_{2*}(t) = f_0(Py_*(t), x_{0*}, x_{1*}, \lambda_*, t) hold, where w_*(t) = (w_{1*}(t), w_{2*}(t)) \in \bar{W} and y_*(t), t \in I, is defined by (4.28) with \xi_{0*} = (x_{0*}, O_{m_2}), \xi_{1*} = (x_{1*}, \bar{c}_*), \bar{c}_{j*} = c_j - d_{j*}, j = \overline{1, m_1}; \bar{c}_{j*} = c_j, j = \overline{m_1 + 1, m_2}. The inclusion Py_*(t) \in G(t), t \in I, is equivalent to p_*(t) = F(Py_*(t), \lambda_*, t), t \in I, where \omega(t) \le p_*(t) = F(Py_*(t), \lambda_*, t) \le \varphi(t), t \in I. Therefore J(\theta_*) = 0. This concludes the proof of the necessity.

Sufficiency. Let J(\theta_*) = 0. This holds if and only if w_{1*}(t) = f(Py_*(t), \lambda_*, t), w_{2*}(t) = f_0(Py_*(t), x_{0*}, x_{1*}, \lambda_*, t), p_*(t) = F(Py_*(t), \lambda_*, t), (x_{0*}, x_{1*}) \in S, v_{1*}(\cdot) \in L_2(I, R^m), v_{2*}(\cdot) \in L_2(I, R^{m_2}). By Lemma 1 this means that the boundary value problem (4.1)–(4.6) has a solution. This completes the proof.
Reducing the boundary value problem (4.1)–(4.6) to the problem (4.33)–(4.38) is called the imbedding principle.

Lecture 15. Optimization problem

Consider the optimization problem (4.33)–(4.38). Note that the integrand can be written as

F_0(t, v_1, v_2, p, d, x_0, x_1, \lambda) = |w_1 - f(Py, \lambda, t)|^2 + |w_2 - f_0(Py, x_0, x_1, \lambda, t)|^2 + |p - F(Py, \lambda, t)|^2 = F_0(t, q), \quad q = (\theta, z(t), z(t_1)),

where w_1, w_2 are defined by (4.37), (4.38), y = z + \lambda_2(t, \xi_0, \xi_1) + N_2(t)z(t_1), and Py = x.

Theorem 5. Let the matrix W(t_0, t_1) be positive definite, let the function F_0(t, q) be defined and continuously differentiable with respect to q = (\theta, z, \bar{z}), and let the following conditions hold:

|F_{0z}(t, \theta + \Delta\theta, z + \Delta z, \bar{z} + \Delta\bar{z}) - F_{0z}(t, \theta, z, \bar{z})| \le L(|\Delta z| + |\Delta\bar{z}| + |\Delta\theta|),
|F_{0\bar{z}}(t, \theta + \Delta\theta, z + \Delta z, \bar{z} + \Delta\bar{z}) - F_{0\bar{z}}(t, \theta, z, \bar{z})| \le L(|\Delta z| + |\Delta\bar{z}| + |\Delta\theta|),
|F_{0\theta}(t, \theta + \Delta\theta, z + \Delta z, \bar{z} + \Delta\bar{z}) - F_{0\theta}(t, \theta, z, \bar{z})| \le L(|\Delta z| + |\Delta\bar{z}| + |\Delta\theta|),
\forall \theta \in R^{m + m_2 + r + m_1 + 2n + s}, \ z \in R^{n + m_2}, \ \bar{z} \in R^{n + m_2}.

Then the functional (4.33) under the conditions (4.34)–(4.36) is continuous and Frechet differentiable at any point \theta \in X, its gradient

J'(\theta) = (J'_{v_1}(\theta), J'_{v_2}(\theta), J'_p(\theta), J'_d(\theta), J'_{x_0}(\theta), J'_{x_1}(\theta), J'_\lambda(\theta)) \in H

is given by the formulas

J'_{v_1}(\theta) = F_{0v_1}(t, q) - B_1^*(t)\psi(t), \quad J'_{v_2}(\theta) = F_{0v_2}(t, q) - B_2^*\psi(t), \quad J'_p(\theta) = F_{0p}(t, q),

J'_d(\theta) = \int_{t_0}^{t_1} F_{0d}(t, q)\,dt, \quad J'_{x_0}(\theta) = \int_{t_0}^{t_1} F_{0x_0}(t, q)\,dt, \quad J'_{x_1}(\theta) = \int_{t_0}^{t_1} F_{0x_1}(t, q)\,dt, \quad J'_\lambda(\theta) = \int_{t_0}^{t_1} F_{0\lambda}(t, q)\,dt,   (4.39)

where q = (\theta, z(t), z(t_1)), the function z(t), t \in I, is a solution of the differential equation (4.34) corresponding to v_1(\cdot) \in L_2(I, R^m), v_2(\cdot) \in L_2(I, R^{m_2}), and the function \psi(t), t \in I, is a solution of the conjugate system

\dot{\psi} = F_{0z}(t, q(t)) - A_1^*(t)\psi, \quad \psi(t_1) = -\int_{t_0}^{t_1} F_{0z(t_1)}(t, q(t))\,dt.   (4.40)

Moreover, the gradient J'(\theta), \theta \in X, satisfies the Lipschitz condition

\|J'(\theta_1) - J'(\theta_2)\| \le K\|\theta_1 - \theta_2\|, \quad \forall \theta_1, \theta_2 \in X,   (4.41)
where K > 0 is the Lipschitz constant.

Proof. Let \theta, \theta + \Delta\theta \in X, where \Delta\theta = (\Delta v_1, \Delta v_2, \Delta p, \Delta d, \Delta x_0, \Delta x_1, \Delta\lambda). It is easily shown that

\Delta\dot{z} = A_1(t)\Delta z + B_1(t)\Delta v_1 + B_2\Delta v_2, \quad \Delta z(t_0) = 0,

and the increment of the functional is

\Delta J = J(\theta + \Delta\theta) - J(\theta) = \langle J'_{v_1}(\theta), \Delta v_1 \rangle_{L_2} + \langle J'_{v_2}(\theta), \Delta v_2 \rangle_{L_2} + \langle J'_p(\theta), \Delta p \rangle_{L_2} + \langle J'_d(\theta), \Delta d \rangle_{R^{m_1}} + \langle J'_{x_0}(\theta), \Delta x_0 \rangle_{R^n} + \langle J'_{x_1}(\theta), \Delta x_1 \rangle_{R^n} + \langle J'_\lambda(\theta), \Delta\lambda \rangle_{R^s} + R, \quad R = \sum_{i=1}^{7} R_i,

where |R| \le c_*\|\Delta\theta\|_X^2, |R|/\|\Delta\theta\|_X \to 0 as \|\Delta\theta\|_X \to 0, c_* = const > 0. This implies (4.39), where \psi(t), t \in I, is a solution of the equation (4.40).

Let \theta_1 = (v_1 + \Delta v_1, v_2 + \Delta v_2, p + \Delta p, d + \Delta d, x_0 + \Delta x_0, x_1 + \Delta x_1, \lambda + \Delta\lambda) \in X and \theta_2 = (v_1, v_2, p, d, x_0, x_1, \lambda) \in X. Since

|J'(\theta_1) - J'(\theta_2)| \le c_0|\Delta q(t)| + c_1|\Delta\psi(t)| + c_2|\Delta\theta|,

\Delta\dot{\psi} = [F_{0z}(t, q + \Delta q) - F_{0z}(t, q)] - A_1^*(t)\Delta\psi, \quad \Delta\psi(t_1) = -\int_{t_0}^{t_1} [F_{0z(t_1)}(t, q + \Delta q) - F_{0z(t_1)}(t, q)]\,dt,

the estimates \|\Delta q\| \le c_3\|\Delta\theta\|, |\Delta\psi(t)| \le c_4\|\Delta\theta\| hold. Then

\|J'(\theta_1) - J'(\theta_2)\|^2 = \int_{t_0}^{t_1} |J'(\theta_1) - J'(\theta_2)|^2\,dt \le K^2\|\Delta\theta\|^2.
This completes the proof.

On the basis of (4.39)–(4.41), construct the sequence \{\theta_n\} = \{v_1^n, v_2^n, p_n, d_n, x_0^n, x_1^n, \lambda_n\} \subset X by the rule

v_1^{n+1} = v_1^n - \alpha_n J'_{v_1}(\theta_n), \quad v_2^{n+1} = v_2^n - \alpha_n J'_{v_2}(\theta_n),
p_{n+1} = P_V[p_n - \alpha_n J'_p(\theta_n)], \quad d_{n+1} = P_\Gamma[d_n - \alpha_n J'_d(\theta_n)],
x_0^{n+1} = P_S[x_0^n - \alpha_n J'_{x_0}(\theta_n)], \quad x_1^{n+1} = P_S[x_1^n - \alpha_n J'_{x_1}(\theta_n)],
\lambda_{n+1} = P_\Lambda[\lambda_n - \alpha_n J'_\lambda(\theta_n)], \quad n = 0, 1, 2, \ldots,   (4.42)

where 0 < \alpha_n \le \frac{2}{K + 2\varepsilon}, \varepsilon > 0, and K > 0 is the Lipschitz constant from (4.41). Denote

M_0 = \{\theta \in X : J(\theta) \le J(\theta_0)\}, \quad X_{**} = \{\theta_{**} \in X : J(\theta_{**}) = \inf_{\theta \in X} J(\theta)\},

where \theta_0 = (v_1^0, v_2^0, p_0, d_0, x_0^0, x_1^0, \lambda_0) \in X is the starting point of the iterative process (4.42).

Theorem 6. Let the assumptions of Theorem 5 hold, let the functional J(\theta), \theta \in X, be bounded from below, and let the sequence \{\theta_n\} \subset X be defined by (4.42). Then the following assertions hold:

1) J(\theta_n) - J(\theta_{n+1}) \ge \varepsilon\|\theta_n - \theta_{n+1}\|^2, \quad n = 0, 1, 2, \ldots;   (4.43)

2) \lim_{n \to \infty} \|\theta_n - \theta_{n+1}\| = 0.   (4.44)
Proof. Since \theta_{n+1} is the projection of the point \theta_n - \alpha_n J'(\theta_n) onto the set X, the inequality \langle \theta_n - \alpha_n J'(\theta_n) - \theta_{n+1}, \theta_n - \theta_{n+1} \rangle \le 0 holds. Hence, taking into account that J(\theta) \in C^{1,1}(X), we obtain

J(\theta_n) - J(\theta_{n+1}) \ge \left(\frac{1}{\alpha_n} - \frac{K}{2}\right)\|\theta_n - \theta_{n+1}\|^2 \ge \varepsilon\|\theta_n - \theta_{n+1}\|^2.

Consequently the numerical sequence \{J(\theta_n)\} strictly decreases and the inequality (4.43) holds. The boundedness from below of the functional J(\theta), \theta \in X, yields (4.44); note that J(\theta) \ge 0, \forall \theta \in X. This concludes the proof.

Theorem 7. Let the assumptions of Theorem 5 hold, let the set M_0 be bounded, and let the inequality

\langle F_{0q}(t, q_1) - F_{0q}(t, q_2), q_1 - q_2 \rangle_{R^N} \ge 0, \quad \forall q_1, q_2 \in R^N, \quad N = m + m_2 + 2n + s + m_1 + r + 2(n + m_2),   (4.45)

hold. Then the following assertions are true:

1) the set M_0 is weakly bicompact and X_{**} \ne \emptyset, where \emptyset denotes the empty set;

2) the sequence \{\theta_n\} is minimizing, i.e. \lim_{n \to \infty} J(\theta_n) = J_* = \inf_{\theta \in X} J(\theta);

3) the sequence \{\theta_n\} \subset M_0 converges weakly to a point \theta_{**} \in X_{**};

4) the convergence rate can be estimated as 0 \le J(\theta_n) - J_* \le \frac{c_1}{n}, c_1 = const > 0, n = 1, 2, \ldots;

5) the boundary value problem (4.1)–(4.6) has a solution if and only if \lim_{n \to \infty} J(\theta_n) = J_* = \inf_{\theta \in X} J(\theta) = J(\theta_{**}) = 0;

6) if J(\theta_{**}) = 0, where \theta_{**} = \theta_* = (v_{1*}, v_{2*}, p_*, d_*, x_{0*}, x_{1*}, \lambda_*) \in X_*, then the solution of the boundary value problem (4.1)–(4.6) is given by

x_*(t) = Py_*(t), \quad y_*(t) = z(t, v_{1*}, v_{2*}) + \lambda_2(t, \xi_{0*}, \xi_{1*}) + N_2(t)z(t_1; v_{1*}, v_{2*}), \quad t \in I;

7) if J(\theta_{**}) > 0, then the boundary value problem (4.1)–(4.6) has no solution.

Proof. It follows from (4.45) that the functional J(\theta) \in C^{1,1}(X) is convex. The first assertion follows from the fact that M_0 is a bounded convex closed set of the reflexive Banach space H and from the weak lower semicontinuity of the functional J(\theta) on the weakly bicompact set M_0. The second assertion follows from the inequality J(\theta_n) - J(\theta_{n+1}) \ge \varepsilon\|\theta_n - \theta_{n+1}\|^2, n = 0, 1, 2, \ldots, whence J(\theta_{n+1}) < J(\theta_n) and \|\theta_n - \theta_{n+1}\| \to 0 for \{\theta_n\} \subset M_0; together with the convexity of the functional J(\theta), \theta \in M_0, this implies that the sequence \{\theta_n\} is minimizing. The third assertion follows from the weak bicompactness of the set M_0, \{\theta_n\} \subset M_0. The estimate of the convergence rate follows from J(\theta_n) - J(\theta_{**}) \le c_1\|\theta_n - \theta_{n+1}\|. Assertions 5), 6) follow from Theorem 4. This concludes the proof.
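The iteration (4.42) is a projected gradient method in H. A self-contained sketch on an assumed finite-dimensional convex quadratic (not the functional J of the text) exhibits the monotone decrease (4.43) and the convergence of the values:

```python
import numpy as np

# Minimize J(theta) = |A theta - b|^2 over the box X = [0,1]^3 by the rule (4.42):
#   theta_{n+1} = P_X[theta_n - alpha * J'(theta_n)],  0 < alpha <= 2/(K + 2 eps).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3)); b = rng.normal(size=5)

def J(th):
    return float(np.sum((A @ th - b) ** 2))

def grad(th):
    return 2.0 * A.T @ (A @ th - b)

K = 2.0 * np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
alpha = 1.0 / K                        # admissible step size
theta = np.zeros(3)
values = [J(theta)]
for n in range(500):
    theta = np.clip(theta - alpha * grad(theta), 0.0, 1.0)  # projection P_X
    values.append(J(theta))

dec = np.diff(np.array(values))
print(values[0], values[-1])
```

With step \alpha = 1/K the decrease J(\theta_n) - J(\theta_{n+1}) \ge (K/2)\|\theta_n - \theta_{n+1}\|^2 holds at every iteration, which is exactly the mechanism behind (4.43), (4.44).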
Note that if f(x, \lambda, t), f_{0j}(x, x_0, x_1, \lambda, t), j = \overline{1, m_2}, F(x, \lambda) are linear functions with respect to (x, x_0, x_1, \lambda), then the functional J(\theta) is convex.

Lecture 16. Solution of the Sturm–Liouville problem

Consider the Sturm–Liouville problem (4.10), (4.11). Let us introduce the notations

a_{21}(t) = \frac{q(t)}{p(t)}, \quad a_{22}(t) = -\frac{\dot{p}(t)}{p(t)}, \quad b_{22}(t) = -\frac{r(t)}{p(t)}, \quad \alpha = (\alpha_1, \alpha_2), \quad \beta = (\beta_1, \beta_2).

Now the equation (4.10) and the boundary conditions (4.11) are rewritten in the form \dot{x}_1 = x_2, \dot{x}_2 = a_{21}(t)x_1 + a_{22}(t)x_2 + b_{22}(t)\lambda x_1, \alpha x_0 = 0, \beta x_1 = 0, t \in I = [t_0, t_1]. In matrix form the problem (4.10), (4.11) is represented as

\dot{x} = A(t)x + B(t)\lambda x_1, \quad t \in I = [t_0, t_1], \quad \alpha x_0 = 0, \quad \beta x_1 = 0,   (4.46)

where

A(t) = \begin{pmatrix} 0 & 1 \\ a_{21}(t) & a_{22}(t) \end{pmatrix}, \quad B(t) = \begin{pmatrix} 0 \\ b_{22}(t) \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad x_0 = \begin{pmatrix} x_{10} \\ x_{20} \end{pmatrix}, \quad x_1 = \begin{pmatrix} x_{11} \\ x_{21} \end{pmatrix}.

We seek the solution of the equation (4.46) for the case \lambda \in \Lambda = \{\lambda \in R^1 : \gamma \le \lambda \le \delta\}, where \gamma, \delta are given numbers, and the time instants t_0, t_1, t_1 > t_0, are fixed.

For this example \mu(t) \equiv 0, f(x, \lambda, t) \equiv \lambda x_1, f_0 \equiv 0, F \equiv 0, t \in I. Then \xi(t) = x(t), t \in I, A_1(t) = A(t), B_1(t) = B(t), B_2 = 0, B_3 = 0. Therefore the linear controllable system (4.21) takes the form

\dot{y} = A(t)y + B(t)w_1(t), \quad y(t_0) = x_0, \quad y(t_1) = x_1, \quad w_1(\cdot) \in L_2(I, R^1).   (4.47)

The solution of the differential equation (4.47) has the form

y(t) = \Phi(t, t_0)y(t_0) + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)w_1(\tau)\,d\tau, \quad t \in I.

Since y(t_1) = x_1, y(t_0) = x_0, the integral equation (4.25) for this example takes the form

\int_{t_0}^{t_1} \Phi(t_0, t)B(t)w_1(t)\,dt = a = \Phi(t_0, t_1)[x_1 - \Phi(t_1, t_0)x_0],   (4.48)

where \Phi(t, \tau) = \theta(t)\theta^{-1}(\tau) and \theta(t) is a fundamental matrix solution of the differential equation \dot{\zeta} = A(t)\zeta. Since \dot{\theta} = A(t)\theta, t \in I, \theta(t_0) = I_2, where I_2 is the 2 \times 2 identity matrix, the elements of the matrix \theta(t), t \in I, are solutions of the differential equations

\dot{\theta}_{11}(t) = \theta_{21}(t), \quad \theta_{11}(t_0) = 1; \quad \dot{\theta}_{12}(t) = \theta_{22}(t), \quad \theta_{12}(t_0) = 0;
\dot{\theta}_{21}(t) = a_{21}(t)\theta_{11}(t) + a_{22}(t)\theta_{21}(t), \quad \theta_{21}(t_0) = 0; \quad \dot{\theta}_{22}(t) = a_{21}(t)\theta_{12}(t) + a_{22}(t)\theta_{22}(t), \quad \theta_{22}(t_0) = 1.   (4.49)
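The system (4.49) is easy to integrate numerically. For the assumed illustrative data p(t) \equiv 1, q(t) \equiv 1 (so a_{21} = 1, a_{22} = 0, t_0 = 0) the fundamental matrix is known in closed form, \theta(t) = \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix}, which the sketch below reproduces:

```python
import numpy as np

# Integrate theta' = A(t) theta, theta(0) = I, i.e. the system (4.49),
# for the assumed coefficients a21 = 1, a22 = 0 (p = q = 1).
def A(t):
    return np.array([[0.0, 1.0], [1.0, 0.0]])

theta = np.eye(2); n = 2000; h = 1.0 / n
for k in range(n):
    t = k * h                            # classical RK4 step for the matrix ODE
    k1 = A(t) @ theta
    k2 = A(t + h/2) @ (theta + h/2 * k1)
    k3 = A(t + h/2) @ (theta + h/2 * k2)
    k4 = A(t + h) @ (theta + h * k3)
    theta = theta + h/6 * (k1 + 2*k2 + 2*k3 + k4)

exact = np.array([[np.cosh(1.0), np.sinh(1.0)],
                  [np.sinh(1.0), np.cosh(1.0)]])
print(theta)
```

For variable coefficients a_{21}(t), a_{22}(t) the same loop applies unchanged; only the matrix A(t) is evaluated at the intermediate RK4 nodes.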
By solving the differential equations (4.49), the elements of the matrix \theta(t) = \|\theta_{ij}(t)\|, i, j = 1, 2, can be found. The inverse matrix is
\theta^{-1}(\tau) = \frac{1}{\Delta(\tau)}\begin{pmatrix} \theta_{22}(\tau) & -\theta_{12}(\tau) \\ -\theta_{21}(\tau) & \theta_{11}(\tau) \end{pmatrix}, \quad \Delta(\tau) = \theta_{11}(\tau)\theta_{22}(\tau) - \theta_{12}(\tau)\theta_{21}(\tau).

Note that \Phi(t_0, t) = \theta(t_0)\theta^{-1}(t) = \theta^{-1}(t). Since

\Phi(t_0, t)B(t) = \frac{1}{\Delta(t)}\begin{pmatrix} -\theta_{12}(t)b_{22}(t) \\ \theta_{11}(t)b_{22}(t) \end{pmatrix},

the matrix

W(t_0, t_1) = \int_{t_0}^{t_1} \Phi(t_0, t)B(t)B^*(t)\Phi^*(t_0, t)\,dt = \begin{pmatrix} W_{11}(t_0, t_1) & W_{12}(t_0, t_1) \\ W_{12}(t_0, t_1) & W_{22}(t_0, t_1) \end{pmatrix},

where

W_{11}(t_0, t_1) = \int_{t_0}^{t_1} \frac{\theta_{12}^2(t)b_{22}^2(t)}{\Delta^2(t)}\,dt, \quad W_{12}(t_0, t_1) = -\int_{t_0}^{t_1} \frac{\theta_{11}(t)\theta_{12}(t)b_{22}^2(t)}{\Delta^2(t)}\,dt,

W_{22}(t_0, t_1) = \int_{t_0}^{t_1} \frac{\theta_{11}^2(t)b_{22}^2(t)}{\Delta^2(t)}\,dt, \quad W_{21}(t_0, t_1) = W_{12}(t_0, t_1).

The inverse matrix

W^{-1}(t_0, t_1) = \frac{1}{\Delta_1(t_0, t_1)}\begin{pmatrix} W_{22}(t_0, t_1) & -W_{12}(t_0, t_1) \\ -W_{12}(t_0, t_1) & W_{11}(t_0, t_1) \end{pmatrix} = \begin{pmatrix} \bar{W}_{11}(t_0, t_1) & \bar{W}_{12}(t_0, t_1) \\ \bar{W}_{12}(t_0, t_1) & \bar{W}_{22}(t_0, t_1) \end{pmatrix},

where \Delta_1(t_0, t_1) = W_{11}(t_0, t_1)W_{22}(t_0, t_1) - W_{12}^2(t_0, t_1), \bar{W}_{11} = W_{22}/\Delta_1, \bar{W}_{12} = -W_{12}/\Delta_1, \bar{W}_{22} = W_{11}/\Delta_1. The matrices W(t, t_1), W(t_0, t) are defined analogously, with W(t, t_1) = W(t_0, t_1) - W(t_0, t). It can easily be shown that W(t_0, t_1) = W^*(t_0, t_1) > 0.

The vector a and the matrix E(t) (see (4.48)) are given by

a = \Phi(t_0, t_1)x_1 - x_0 = \theta^{-1}(t_1)x_1 - x_0 = \begin{pmatrix} \frac{1}{\Delta(t_1)}[\theta_{22}(t_1)x_{11} - \theta_{12}(t_1)x_{21}] - x_{10} \\ \frac{1}{\Delta(t_1)}[-\theta_{21}(t_1)x_{11} + \theta_{11}(t_1)x_{21}] - x_{20} \end{pmatrix},

E(t) = B^*(t)\Phi^*(t_0, t)W^{-1}(t_0, t_1) = \frac{b_{22}(t)}{\Delta(t)}\big(-\bar{W}_{11}\theta_{12}(t) + \bar{W}_{12}\theta_{11}(t), \ -\bar{W}_{12}\theta_{12}(t) + \bar{W}_{22}\theta_{11}(t)\big) = (E^1(t), E^2(t)).
Then the vector-valued function

\lambda_1(t, x_0, x_1) = E(t)a = \bar{T}_{11}(t)x_{10} + \bar{T}_{12}(t)x_{20} + \bar{T}_{21}(t)x_{11} + \bar{T}_{22}(t)x_{21},   (4.50)

where

\bar{T}_{11}(t) = -E^1(t), \quad \bar{T}_{12}(t) = -E^2(t),
\bar{T}_{21}(t) = \frac{1}{\Delta(t_1)}[\theta_{22}(t_1)E^1(t) - \theta_{21}(t_1)E^2(t)], \quad \bar{T}_{22}(t) = \frac{1}{\Delta(t_1)}[-\theta_{12}(t_1)E^1(t) + \theta_{11}(t_1)E^2(t)].

The 1 \times 2 matrix

N_1(t) = -E(t)\Phi(t_0, t_1) = -E(t)\theta^{-1}(t_1) = (N_{11}(t), N_{12}(t)) = -(\bar{T}_{21}(t), \bar{T}_{22}(t)), \quad t \in I.   (4.51)

Then the function (see (4.50), (4.51))

w_1(t) = v_1(t) + \lambda_1(t, x_0, x_1) + N_1(t)z(t_1, v_1) = v_1(t) + \bar{T}_{11}(t)x_{10} + \bar{T}_{12}(t)x_{20} + \bar{T}_{21}(t)x_{11} + \bar{T}_{22}(t)x_{21} + N_{11}(t)z_1(t_1, v_1) + N_{12}(t)z_2(t_1, v_1), \quad t \in I,   (4.52)

where the function z(t) = z(t, v_1), t \in I, is a solution of the differential equation

\dot{z} = A(t)z + B(t)v_1(t), \quad z(t_0) = 0, \quad t \in I, \quad v_1(\cdot) \in L_2(I, R^1).   (4.53)

In a similar way the matrices E_1(t), E_2(t), N_2(t) are computed:

E_1(t) = \Phi(t, t_0)W(t, t_1)W^{-1}(t_0, t_1) = \theta(t)W(t, t_1)W^{-1}(t_0, t_1) = \begin{pmatrix} e_{11}(t) & e_{12}(t) \\ e_{21}(t) & e_{22}(t) \end{pmatrix},

E_2(t) = \Phi(t, t_0)W(t_0, t)W^{-1}(t_0, t_1)\Phi(t_0, t_1) = \theta(t)W(t_0, t)W^{-1}(t_0, t_1)\theta^{-1}(t_1) = \begin{pmatrix} c_{11}(t) & c_{12}(t) \\ c_{21}(t) & c_{22}(t) \end{pmatrix}, \quad N_2(t) = -\begin{pmatrix} c_{11}(t) & c_{12}(t) \\ c_{21}(t) & c_{22}(t) \end{pmatrix}.

Then the function

y(t) = \begin{pmatrix} y_1(t) \\ y_2(t) \end{pmatrix} = \begin{pmatrix} z_1(t, v_1) + e_{11}(t)x_{10} + e_{12}(t)x_{20} + c_{11}(t)x_{11} + c_{12}(t)x_{21} - c_{11}(t)z_1(t_1, v_1) - c_{12}(t)z_2(t_1, v_1) \\ z_2(t, v_1) + e_{21}(t)x_{10} + e_{22}(t)x_{20} + c_{21}(t)x_{11} + c_{22}(t)x_{21} - c_{21}(t)z_1(t_1, v_1) - c_{22}(t)z_2(t_1, v_1) \end{pmatrix}.   (4.54)
Now the optimization problem (4.33) for this example is written as

J(v_1, x_{10}, x_{20}, x_{11}, x_{21}, \lambda) = \int_{t_0}^{t_1} |w_1(t) - \lambda y_1(t)|^2\,dt = \int_{t_0}^{t_1} |v_1(t) + \bar{T}_1(t)x_0 + \bar{T}_2(t)x_1 + N_1(t)z(t_1, v_1) - \lambda[z_1(t, v_1) + e_1(t)x_0 + c_1(t)x_1 + N_{21}(t)z(t_1, v_1)]|^2\,dt = \int_{t_0}^{t_1} F_0(t, q)\,dt \to \inf   (4.55)

under the conditions

\dot{z} = A(t)z + B(t)v_1(t), \quad z(t_0) = 0, \quad t \in I,   (4.56)

v_1(\cdot) \in L_2(I, R^1), \quad x_0 \in S_0, \quad x_1 \in S_1, \quad \lambda \in \Lambda,   (4.57)

where \bar{T}_1(t) = (\bar{T}_{11}(t), \bar{T}_{12}(t)), \bar{T}_2(t) = (\bar{T}_{21}(t), \bar{T}_{22}(t)), e_1(t) = (e_{11}(t), e_{12}(t)), c_1(t) = (c_{11}(t), c_{12}(t)), N_{21}(t) = (-c_{11}(t), -c_{12}(t)),

S_0 = \{x_0 \in R^2 : \alpha x_0 = 0\}, \quad S_1 = \{x_1 \in R^2 : \beta x_1 = 0\}, \quad S_0 \times S_1 = S \subset R^4, \quad q = (v_1, x_0, x_1, \lambda, z(t), z(t_1)).

The functions w_1(t), y_1(t), t \in I, are defined by (4.52), (4.54), and the function z(t), t \in I, is a solution of the differential equation (4.53). Note that

\theta = (v_1(t), x_0, x_1, \lambda) \in X = L_2(I, R^1) \times S_0 \times S_1 \times \Lambda \subset H = L_2(I, R^1) \times R^2 \times R^2 \times R^1,

and the function

F_0(t, q) = |v_1 + \bar{T}_1(t)x_0 + \bar{T}_2(t)x_1 + N_1(t)z(t_1, v_1) - \lambda[z_1(t) + e_1(t)x_0 + c_1(t)x_1 + N_{21}(t)z(t_1, v_1)]|^2.

The partial derivatives are

F_{0v_1}(t, q) = 2\bar{w}_1(t), \quad \bar{w}_1(t) = v_1(t) + \bar{T}_1(t)x_0 + \bar{T}_2(t)x_1 + N_1(t)z(t_1, v_1) - \lambda[z_1(t) + e_1(t)x_0 + c_1(t)x_1 + N_{21}(t)z(t_1, v_1)],

F_{0x_0}(t, q) = [2\bar{T}_1^*(t) - 2\lambda e_1^*(t)]\bar{w}_1(t), \quad F_{0x_1}(t, q) = [2\bar{T}_2^*(t) - 2\lambda c_1^*(t)]\bar{w}_1(t),

F_{0z_1}(t, q) = -2\lambda\bar{w}_1(t), \quad F_{0z_2}(t, q) = 0, \quad F_{0z(t_1)}(t, q) = [2N_1^*(t) - 2\lambda N_{21}^*(t)]\bar{w}_1(t),

F_{0z}(t, q) = \begin{pmatrix} -2\lambda\bar{w}_1(t) \\ 0 \end{pmatrix}, \quad F_{0\lambda}(t, q) = -2[z_1(t) + e_1(t)x_0 + c_1(t)x_1 + N_{21}(t)z(t_1, v_1)]\bar{w}_1(t).
where J vc1 (T )
t1
³F
F0 v1 (t , q) B* (t )\ (t ), J xc0 (T )
0 x0
(t , q)dt ,
t0
J xc1 (T )
t1
t1
t0
t0
³ F0 x1 (t, q)dt, J Oc (T )
³ F O (t, q)dt, 0
The function z(t, v1 ), t I is a solution of the differential equations (4.56), and the function \ (t ), t I is a solution to the conjugate system \
F0 z (t , q) A* (t )\ , \ (t1 )
t1
³ F0 z ( t1 ) (t , q)dt. t0
108
(4.58)
For this example the equation (4.56) has the form z2 , z2 (t ) a21(t ) z1 a22 (t ) z2 b22 (t )v1 (t ), z2 (t0 ) 0, z2 (t0 ) 0.
z1
The equation (4.58) is written as \1
2Ow1 (t ) a21 (t )\ 2 , \ 2
§\ 1 (t1 ) · ¸¸ ¨¨ ©\ 2 (t1 ) ¹
\ 1 a22 (t )\ 2 , \ (t1 )
t1
³ F0 z ( t1 ) (t , q)dt. t0
As it follows from (4.42), for this example the sequence {Tn } {v1n , x0n , x1n , On } X , where v1n 1
v1n Dn J vc1 (Tn ), x0n 1
PS 0 [ x0n Dn J xc0 (Tn )], x1n 1
On 1
P/ [On Dn J Oc (Tn )], n 0,1, 2,..., Dn ! 0.
PS1 [ x1n Dn J xc1 (Tn )],
(4.59)
Note that D *D [ x0n Dn J xc (Tn )] | D |2 E *E x1n 1 PS [ x1n Dn J xc (Tn )] [ x1n Dn J xc (Tn )] [ x1n Dn J xc (Tn )] | E |2 J , если On D n J Oc (Tn ) J , ° On 1 P/ [On D n J Oc (Tn )] ®On D n J Oc (Tn ), если J d On D n J Oc (Tn ) d G , °G , если O D J c (T ) ! G . n n O n ¯ Along the sequence (T n ) X , defined by (4.59) the numerical sequence x0n 1
PS 0 [ x0n Dn J xc0 (Tn )] [ x0n Dn J xc0 (Tn )] 1
1
0
1
1
weakly
{J (Tn )} strictly decreases, J (T ) t 0, T , T X . If v1n o v1* ,
On o O* as n o f and the value J (T* ) 0 , where T*
x0 n o x0* ,
x1n o x1* ,
( v1* , x0* , x1* , O* ) , then a solution to
the Sturm-Liouville is the function x* (t )
z (t , v1* ) E1 (t ) x0* E2 (t ) x1* N 2 (t ) z(t1, v1* ), t I
y* (t )
corresponding to the value $\lambda_* \in \Lambda$.

Example 1. The boundary value problem
$$\dot x = \frac{\lambda^2 x^2 + \lambda}{1 + t^2}, \quad t \in I = [0, 1],$$
$x(0) = x_0 \in S_0 = [-1, 0]$, $x(1) = x_1 \in S_1 = [0, 1]$, $\lambda \in \Lambda = [0, 1]$,
$x(t) \in G(t) = G(t, \lambda)$: $G(t, \lambda) = \{x \in R^1 \,/\, t \le \lambda x^2 + t \le 2, \ t \in I\}$,
$$g(x, \lambda) = \int_0^1 [(\lambda^2 + t^2)x^3(t) + x(t)]\,dt \le 1$$
is given.

Transformation. Introduce the function
$$\eta(t) = \int_0^t [(\lambda^2 + \tau^2)x^3(\tau) + x(\tau)]\,d\tau, \quad t \in [0, 1].$$
Then $\dot\eta(t) = x(t) + (\lambda^2 + t^2)x^3(t)$, $\eta(0) = 0$, $\eta(1) = 1 - d$, $d \in D = \{d \in R^1 \,/\, d \ge 0\}$.
The system (4.15)-(4.17) for the given example is written as
$\dot\xi = A_1\xi + B_1 f(P_1\xi, \lambda, t) + B_2 f_0(P_1\xi, \lambda, t)$, $t \in [0, 1]$,
$(P_1\xi(0) = x_0, \ P_1\xi(1) = x_1) \in S$, $P_2\xi(0) = 0$, $P_2\xi(1) = 1 - d$,
$P_1\xi(t) \in G(t, \lambda)$, $\lambda \in \Lambda$, $t \in I$,
where
$A_1 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$, $B_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, $B_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, $\xi = \begin{pmatrix} x \\ \eta \end{pmatrix}$, $P_1 = (1, 0)$, $P_2 = (0, 1)$, $P_1\xi = x$, $P_2\xi = \eta$,
$f(P_1\xi, \lambda, t) = \dfrac{\lambda^2 x^2 + \lambda}{1 + t^2}$, $f_0(P_1\xi, \lambda, t) = (\lambda^2 + t^2)x^3(t)$, $t \in [0, 1]$.
The linear controllable system has the form (see (4.21)-(4.24))
$\dot y = A_1 y + B_1 w_1(t) + B_2 w_2(t)$, $t \in I$,
$y(0) = \xi(0) = \begin{pmatrix} x_0 \\ 0 \end{pmatrix}$, $y(1) = \xi(1) = \begin{pmatrix} x_1 \\ 1 - d \end{pmatrix}$, $w_1(\cdot) \in L_2(I, R^1)$, $w_2(\cdot) \in L_2(I, R^1)$.

Since
$\theta(t) = e^{A_1 t} = \begin{pmatrix} 1 & 0 \\ t & 1 \end{pmatrix}$, $\Phi(t, \tau) = \theta(t)\theta^{-1}(\tau) = e^{A_1(t - \tau)} = \begin{pmatrix} 1 & 0 \\ t - \tau & 1 \end{pmatrix}$, $B = (B_1\ B_2) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$,
$\Phi(0, t) = \begin{pmatrix} 1 & 0 \\ -t & 1 \end{pmatrix}$, $\Phi^*(0, t) = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}$,
the matrices
$W(0, 1) = \int_0^1 \Phi(0, t)BB^*\Phi^*(0, t)\,dt = \begin{pmatrix} 1 & -\frac12 \\ -\frac12 & \frac43 \end{pmatrix} > 0$, $W^{-1}(0, 1) = \begin{pmatrix} \frac{16}{13} & \frac{6}{13} \\ \frac{6}{13} & \frac{12}{13} \end{pmatrix}$,
$W_1(0, t) = \begin{pmatrix} t & -\frac{t^2}{2} \\ -\frac{t^2}{2} & t + \frac{t^3}{3} \end{pmatrix}$, $W_1(t, 1) = \begin{pmatrix} 1 - t & -\frac12(1 - t^2) \\ -\frac12(1 - t^2) & \frac13(4 - 3t - t^3) \end{pmatrix}$.

Then
$a = \Phi(0, 1)\xi_1 - \xi_0 = \begin{pmatrix} x_1 - x_0 \\ -x_1 + 1 - d \end{pmatrix}$,
$\lambda_1(t, \xi_0, \xi_1) = B^*\Phi^*(0, t)W^{-1}(0, 1)a$, $N_1(t) = -B^*\Phi^*(0, t)W^{-1}(0, 1)\Phi(0, 1)$,
$\lambda_2(t, \eta_0, \eta_1) = \Phi(t, 0)W_1(t, 1)W^{-1}(0, 1)\xi_0 + \Phi(t, 0)W_1(0, t)W^{-1}(0, 1)\Phi(0, 1)\xi_1$,
$N_2(t) = -\Phi(t, 0)W_1(0, t)W^{-1}(0, 1)\Phi(0, 1)$;
carrying out these matrix products yields explicit entries that are polynomials in $t$ with denominator 13, which are used below.
Consequently, the control functions are
$w_1(t) = v_1(t) + \lambda_{11}(t, \eta_0, \eta_1) + N_{11}(t)z(1, v)$, $w_2(t) = v_2(t) + \lambda_{12}(t, \eta_0, \eta_1) + N_{12}(t)z(1, v)$,
and the components of the corresponding trajectory are
$y_1(t) = [z(t, v) + \lambda_2(t, \eta_0, \eta_1) + N_2(t)z(1, v)]_1$, $y_2(t) = [z(t, v) + \lambda_2(t, \eta_0, \eta_1) + N_2(t)z(1, v)]_2$, $t \in I$;
substituting the explicit entries of $\lambda_1$, $N_1$, $\lambda_2$, $N_2$ expresses $w_1$, $w_2$, $y_1$, $y_2$ through $v_1$, $v_2$, $x_0$, $x_1$, $d$, $z_1(1, v)$, $z_2(1, v)$ with polynomial coefficients in $t$ over the denominator 13.
The optimization problem (4.33)-(4.38) for the given example is written as
$$I(v_1, v_2, x_0, x_1, \lambda, d, w) = \int_{t_0}^{t_1}\Big[\Big|w_1(t) - \frac{\lambda^2 y_1^2 + \lambda}{1 + t^2}\Big|^2 + |w_2(t) - (\lambda^2 + t^2)y_1^3(t)|^2 + |w(t) - [\lambda y_1^2(t) + t]|^2\Big]dt \to \inf$$
under the conditions
$\dot z_1 = v_1$, $\dot z_2 = z_1 + v_2$, $z_1(0) = 0$, $z_2(0) = 0$, $v_1(\cdot) \in L_2(I, R^1)$, $v_2(\cdot) \in L_2(I, R^1)$,
$x_0 \in S_0$, $x_1 \in S_1$, $\lambda \in \Lambda$, $d \in D$, $w(t) \in W = \{w(\cdot) \in L_2(I, R^1) \,/\, t \le w(t) \le 2, \ t \in I\}$,
where $w_1(t)$, $w_2(t)$, $y_1(t)$, $t \in I$, are defined by the expressions presented above. The function
$F_0(t, q) = \Big|v_1 + T_{11}(t)x_0 + T_{21}(t)x_1 + T_{22}(t)d + r_1(t) + M_{11}(t)z(1, v) - \dfrac{\lambda^2 y_1^2 + \lambda}{1 + t^2}\Big|^2 + |v_2 + C_{11}(t)x_0 + C_{21}(t)x_1 + C_{22}(t)d + r_2(t) + M_{12}(t)z(1, v) - (\lambda^2 + t^2)y_1^3|^2 + |w - \lambda y_1^2 - t|^2$,
where the coefficients $T_{11}(t)$, $T_{21}(t)$, $T_{22}(t)$, $r_1(t)$, $M_{11}(t)$, $C_{11}(t)$, $C_{21}(t)$, $C_{22}(t)$, $r_2$, $M_{12}(t)$, $D_{11}(t)$, $D_{21}(t)$, $D_{22}(t)$, $f(t)$ are the explicit polynomial expressions in $t$ (with denominator 13) obtained from the matrix products above, and $q = (v_1, v_2, x_0, x_1, \lambda, d, w, z, z(1)) \in R^n$.
Define the Frechet derivatives of the functional $I'_{v_1}(\xi)$, $I'_{v_2}(\xi)$, $I'_{x_0}(\xi)$, $I'_{x_1}(\xi)$, $I'_{\lambda}(\xi)$, $I'_d(\xi)$, $I'_w(\xi)$ by (4.39). Construct the sequences $\{v_{1n}\}$, $\{v_{2n}\}$, $\{x_{0n}\}$, $\{x_{1n}\}$, $\{\lambda_n\}$, $\{d_n\}$, $\{p_n\}$ by (4.42). It can be shown that $I(\xi_*) = 0$, where $\xi_* = (v_{1*}(t), v_{2*}(t), x_{0*}, x_{1*}, \lambda_*, d_*, p_*(t)) \in X_*$ and
$v_{1n}(t) \to v_{1*}(t) = 1$, $v_{2n}(t) \to v_{2*}(t) = t^3 + t^5$, $x_{0n} \to x_{0*} = 0$, $x_{1n} \to x_{1*} = 1$, $\lambda_n \to \lambda_* = 1$, $d_n \to d_* = \frac{1}{12}$, $p_n(t) \to p_*(t) = t^2 + t$ as $n \to \infty$. Indeed,
$$I(\xi_*) = \int_0^1 [|w_1(q_*(t), t)|^2 + |w_2(q_*(t), t)|^2 + |w_3(q_*(t), t)|^2]\,dt,$$
where
$w_1(q_*(t), t) = w_{1*}(t) - \dfrac{\lambda_*^2 y_{1*}^2 + \lambda_*}{1 + t^2}$, $w_2(q_*(t), t) = w_{2*}(t) - (\lambda_*^2 + t^2)y_{1*}^3(t)$, $w_3(q_*(t), t) = w_*(t) - [\lambda_* y_{1*}^2(t) + t]$.

As $\dot z_{1*} = v_{1*} = 1$, $z_{1*}(0) = 0$, the function $z_{1*}(t) = t$. Then $\dot z_{2*} = z_{1*} + v_{2*}$, and substitution of $x_{0*}$, $x_{1*}$, $d_*$, $z_{1*}$ into the expression for $y_{1*}$ gives $y_{1*}(t) = t$, $t \in [0, 1]$, where $z_{1*}(1) = 1$, $z_{2*}(1) = \frac{11}{12}$. Since $w_1(q_*, t) \equiv 0$, $w_2(q_*, t) \equiv 0$, $w_3(q_*, t) \equiv 0$, the value $I(\xi_*) = 0$. Note that
$x_*(0) = x_{0*} = 0 \in S_0$, $x_*(1) = x_{1*} = 1 \in S_1$, $\lambda_* = 1 \in \Lambda$, $x_*(t) \in G(t, \lambda_*)$. The value of the functional is
$$g(x_*, \lambda_*) = \int_0^1 [(\lambda_*^2 + t^2)x_*^3(t) + x_*(t)]\,dt = \int_0^1 [(1 + t^2)t^3 + t]\,dt = \frac12 + \frac14 + \frac16 = \frac{11}{12} \le 1; \quad d_* = 1 - \frac{11}{12} = \frac{1}{12}.$$
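Because every ingredient of Example 1 is explicit, the candidate solution $x_*(t) = t$, $\lambda_* = 1$ can be verified numerically (an illustrative NumPy sketch, not part of the original text; the grid size is arbitrary):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal quadrature, avoiding version-specific numpy helpers
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 1.0, 2001)
lam, x = 1.0, t.copy()          # candidate solution: x(t) = t, lambda = 1

# ODE residual dx/dt - (lam^2 x^2 + lam)/(1 + t^2); identically 0 for x = t
residual = np.gradient(x, t) - (lam**2 * x**2 + lam) / (1.0 + t**2)
assert np.max(np.abs(residual)) < 1e-8

# integral constraint g(x, lam) <= 1; the exact value is 11/12
g = trapezoid((lam**2 + t**2) * x**3 + x, t)
assert abs(g - 11.0 / 12.0) < 1e-6

# state constraint lam*x^2 + t <= 2 and the boundary conditions
assert np.all(lam * x**2 + t <= 2.0) and x[0] == 0.0 and x[-1] == 1.0
print("all checks passed")
```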
So, the solution to the original boundary value problem is the function $x_*(t) = t$, $t \in [0, 1]$, with the parameter $\lambda_* = 1$.

Lecture 17. Boundary value problems for linear systems with a parameter
Consider the following boundary value problem
$\dot x = A(t)x + B(t)P(\lambda)x + \mu(t)$, $t \in I = [t_0, t_1]$,   (4.60)
$(x(t_0) = x_0, \ x(t_1) = x_1) \in S \subset R^{2n}$, $\lambda \in R^1$.   (4.61)
Two cases are to be considered: a) $\lambda = 0$; b) $\lambda \ne 0$. In the case $\lambda = 0$ we have the boundary value problem
$\dot x = A(t)x + \mu(t)$, $t \in I = [t_0, t_1]$,   (4.62)
$(x(t_0) = x_0, \ x(t_1) = x_1) \in S \subset R^{2n}$.   (4.63)
The solution of the boundary value problem (4.62), (4.63) is presented in § 1.3. Further suppose that $\lambda \ne 0$. Let $\theta_0(t)$, $t \in I$, be a fundamental matrix of solutions of the linear homogeneous system $\dot\xi = A(t)\xi$. Consider the linear controllable system
$\dot y = A(t)y + B(t)u(t) + \mu(t)$, $t \in I$,   (4.64)
$(y(t_0) = x_0, \ y(t_1) = x_1) \in S \subset R^{2n}$,   (4.65)
$u(\cdot) \in L_2(I, R^m)$.   (4.66)
Let the $n \times n$ matrix
$$W_0(t_0, t_1) = \int_{t_0}^{t_1} \Phi_0(t_0, t)B(t)B^*(t)\Phi_0^*(t_0, t)\,dt$$   (4.67)
be positive definite. Note that if $u(t) = P(\lambda)x(t)$, $\lambda \ne 0$, then the system (4.64)-(4.66) coincides with (4.60), (4.61). Here $\Phi_0(t, \tau) = \theta_0(t)\theta_0^{-1}(\tau)$, $t \in I$, $\tau \in I$. The boundary value problem (4.64)-(4.66) differs from the boundary value problem (4.21)-(4.23) in that $A(t) \ne A_1(t)$, $B(t) \ne B_1(t)$, $t \in I$. The assertion of Theorem 3 (Chapter I) holds true for the boundary value problem (4.64)-(4.66) after replacing $A_1(t)$, $B_1(t)$ by $A(t)$, $B(t)$, $t \in I$.

Theorem 8. Let the $n \times n$ matrix $W_0(t_0, t_1)$ given by (4.67) be positive definite. Then a control $u(\cdot) \in L_2(I, R^m)$ brings the trajectory of the system (4.64) from an initial point $y(t_0) = x_0 \in R^n$ to a terminal state $y(t_1) = x_1 \in R^n$ if and only if
$u(t) \in U = \{u(\cdot) \in L_2(I, R^m) \,/\, u(t) = v(t) + \lambda_{10}(t, x_0, x_1) + N_{10}(t)z(t_1, v), \ t \in I, \ \forall v, \ v(\cdot) \in L_2(I, R^m)\}$,   (4.68)
where
$\lambda_{10}(t, x_0, x_1) = B^*(t)\Phi_0^*(t_0, t)W_0^{-1}(t_0, t_1)a_0$, $a_0 = \Phi_0(t_0, t_1)x_1 - x_0 - \int_{t_0}^{t_1} \Phi_0(t_0, t)\mu(t)\,dt$,
$N_{10}(t) = -B^*(t)\Phi_0^*(t_0, t)W_0^{-1}(t_0, t_1)\Phi_0(t_0, t_1)$.
The function $z(t, v)$, $t \in I$, is a solution to the differential equation
$\dot z = A(t)z + B(t)v(t)$, $z(t_0) = 0$, $v(\cdot) \in L_2(I, R^m)$.   (4.69)
The solution of the differential equation (4.64) corresponding to the control $u(t) \in U$ is defined by
$y(t) = z(t, v) + \lambda_{20}(t, x_0, x_1) + N_{20}(t)z(t_1, v)$, $t \in I$,   (4.70)
where
$\lambda_{20}(t, x_0, x_1) = \Phi_0(t, t_0)W_0(t, t_1)W_0^{-1}(t_0, t_1)x_0 + \Phi_0(t, t_0)W_0(t_0, t)W_0^{-1}(t_0, t_1) \times \Phi_0(t_0, t_1)x_1 + \int_{t_0}^{t} \Phi_0(t, \tau)\mu(\tau)\,d\tau - \Phi_0(t, t_0)W_0(t_0, t)W_0^{-1}(t_0, t_1)\int_{t_0}^{t_1} \Phi_0(t_0, t)\mu(t)\,dt$,
$N_{20}(t) = -\Phi_0(t, t_0)W_0(t_0, t)W_0^{-1}(t_0, t_1)\Phi_0(t_0, t_1)$, $W_0(t_0, t) = \int_{t_0}^{t} \Phi_0(t_0, \tau)B(\tau)B^*(\tau)\Phi_0^*(t_0, \tau)\,d\tau$, $W_0(t, t_1) = W_0(t_0, t_1) - W_0(t_0, t)$.
Lemma 2. Let the matrix $W_0(t_0, t_1)$ be positive definite. Then the boundary value problem (4.60), (4.61) is equivalent to the problem:
$v(t) + T_{10}(t)x_0 + T_{20}(t)x_1 + \mu_0(t) + N_{10}(t)z(t_1, v) = P(\lambda)y(t)$, $t \in I$,   (4.71)
$\dot z = A(t)z + B(t)v(t)$, $z(t_0) = 0$, $t \in I$, $v(\cdot) \in L_2(I, R^m)$,   (4.72)
$(x_0, x_1) \in S$,   (4.73)
where
$T_{10}(t) = -B^*(t)\Phi_0^*(t_0, t)W_0^{-1}(t_0, t_1)$, $T_{20}(t) = B^*(t)\Phi_0^*(t_0, t)W_0^{-1}(t_0, t_1)\Phi_0(t_0, t_1)$,
$\mu_0(t) = -B^*(t)\Phi_0^*(t_0, t)W_0^{-1}(t_0, t_1)\int_{t_0}^{t_1} \Phi_0(t_0, t)\mu(t)\,dt$,
$y(t) = z(t, v) + C_{10}(t)x_0 + C_{20}(t)x_1 + f_0(t) + N_{20}(t)z(t_1, v)$, $t \in I$,   (4.74)
$C_{10}(t) = \Phi_0(t, t_0)W_0(t, t_1)W_0^{-1}(t_0, t_1)$, $C_{20}(t) = \Phi_0(t, t_0)W_0(t_0, t)W_0^{-1}(t_0, t_1)\Phi_0(t_0, t_1)$,
$f_0(t) = \int_{t_0}^{t} \Phi_0(t, \tau)\mu(\tau)\,d\tau - \Phi_0(t, t_0)W_0(t_0, t)W_0^{-1}(t_0, t_1)\int_{t_0}^{t_1} \Phi_0(t_0, t)\mu(t)\,dt$.
Consider the optimal control problem (see (4.67) – (4.74)) J (v, x0 , x1 , O )
2
t1
³ v (t ) T
10
(t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z(t1 , v ) P(O ) y (t ) dt o inf
(4.75)
t0
under the conditions z
A(t ) z B(t )v(t ) , z (t0 )
0 , v() L2 ( I , R m ) , t I ,
( x0 , x1 ) S , O R , O z 0 , y(t , v, x0 , x1 ) , t I is defined by (4.74). 1
where y(t ) Note that 1. the value H
I (v, x0, x1,O ) t 0 ,
(v, x0, x1,O ) X
(4.76) (4.77)
L2 ( I , Rm ) u S u R1 H ,
L2 ( I , R m ) u u R n u R n u R1 ; 2. If I (v , x0 , x1 , O ) 0, where (v , x0 , x1 , O ) X is a solution of the opti-
mization problem (4.75) – (4.76), then a solution to the boundary value problem (4.60), (4.61) is the function x (t )
y (t , v , x0 , x1 , O )
z(t , v ) C10 (t ) x0 C20 (t ) x1 f 0 (t ) N 20 (t ) z(t1 , v ) , t I .
Theorem 9. Let the matrix W0 (t 0 , t1 ) be positive definite. A necessary and sufficient condition for the boundary value problem with a parameter (4.60), (4.61) to have a solution is I (v , x0 , x1 , O ) 0 , where (v , x0 , x1 , O ) X is a solution to the optimization problem (4.75)-(4.77). Lemma 3. Let the matrix W0 (t 0 , t1 ) be positive definite, the function Q(q, t ) | v T10 (t ) x0 T20 (t ) x1 P 0 (t ) N10 (t ) z (t1 ) P(Oy ) |2 , where y z C10 (t ) x0 C20 (t ) x1 f 0 (t ) N 20 (t ) z (t1 ) , q
(v, x0 , x1 , O , z, z (t1 )) R m u R n u R n u R1 u R n u R n
Then the partial derivatives are computed by
Qv ( q, t ) 2[v T10 (t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z(t1 ) P(O ) y ] ;
Qx0 (q, t ) 2[T10 (t ) C10 (t ) P O ][v T10 (t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z(t1 ) P(O ) y] ; 114
Qx1 (q, t ) 2[T20 (t ) C20 (t ) P O ][v T10 (t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z(t1 ) P(O ) y] ;
QO ( q, t ) 2[v T10 (t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z (t1 ) P(O ) y ] Py ; Qz ( q, t ) 2 P (t )[ v T10 (t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z (t1 ) P(O ) y ]O ;
Qz ( t1 ) (q, t ) 2[ N1 (t ) N 2 (t ) P O ][v T10 (t ) x0 T20 (t ) x1 P0 (t ) N10 (t ) z(t1 ) P(O ) y] . (4.78)
Denote
wQ( q, t ) wq
Qq ( q, t )
(Qv ( q, t ), Qx0 ( q, t ), Qx1 ( q, t ), QO ( q, t ), Qz ( q, t ), Qz ( t1 ) ( q, t )) .
Lemma 4. Let the matrix W0 (t 0 , t1 ) be positive definite and the inequality Qq (q1 , t ) Qq (q 2 , t ), q1 q 2 !t 0 , q1 , q2 R m14 n , t I . (4.79) holds. Then the functional (4.75) under the conditions (4.76), (4.77) is convex. We say that the derivative Qq (q, t ) satisfies the Lipchitz condition with respect to q in R N , N m 1 4n , if | Qv (q 'q, t ) Qv (q, t ) |d L1 | 'q | , | Qx (q 'q, t ) Q x (q, t ) |d L2 | 'q , | Q x (q 'q, t ) Q x (q, t ) |d L3 | 'q | , QO (q 'q, t ) Q O (q, t ) |d L4 | 'q | , | Qz (q 'q, t ) Qz (q, t ) |d L5 | 'q | , | Q z (t ) (q 'q, t ) Qz (t ) (q, t ) |d L6 | 'q | , where 'q ('v, 'x0 , 'x1, 'O , 'z, 'z(t1 )) . Theorem 10. Let the matrix W0 (t 0 , t1 ) be positive definite, the partial derivative Qq (q, t ) satisfy the Lipchitz condition. Then the functional (4.75) under the conditions (4.76), (4.77) is Frechet differentiable, the gradient I c(Z ) ( I vc (Z ), I xc (Z ), I xc (Z ), I Oc (Z )) H , at any point Z (v, x0 , x1 , O ) X is computed by 0
1
0
1
1
0
I vc (Z )
1
1
t1
³Q
Qv (q(t ), t ) B* (t )\ (t ) , I xc0 (Z )
x0
(q(t ), t )dt ,
t0 t1
³ Qx1 (q(t ), t )dt , I Oc (Z )
I xc1 (Z )
t0
t1
³ QO (q(t ), t )dt ,
(4.80)
t0
where the partial derivatives are defined by (4.78), q(t ) (v(t ), x0 , x1, O , z(t , v), z(t1, v)) , the function z (t ) , t I is a solution to the differential equation (4.76) at v v(t ) , and the function \ (t ) , t I is a solution to the conjugate system \
t1
Qz (q(t ), t ) A (t )\ , \ (t1 ) ³ Qz (t ) (q(t ), t )dt , *
1
(4.81)
t0
I c(Z ) , Z X satisfies the Lipchitz condition c (4.82) || I (Z1 ) I c(Z2 ) ||d K 4 || Z1 Z2 || , Z1 , Z2 X . As it follows from theorem 10 the functional I (Z ) C 1,1 ( X ) . Generate the sequence {Zn } {vn , x0n , x1n , On } X by the rule:
Besides, the gradient
vn1
x1n1
vn D n I vc (Zn ) , x0n1
PS [ x1n D n I xc1 (Zn )] , On1
115
PS [ x0n D n I xc0 (Zn )] ,
On D n I Oc (Z n ) , n 0,1,2,.. ,
(4.83)
where the Frechet derivatives I vc (Zn ) , I xc (Z n ) , I xc (Z n ) , I Oc (Zn ) , Zn (vn , x0n , x1n , On ) are defined by (4.80), (4.81), the value D n ! 0 is chosen from the condition 0
1
2 , H 1 ! 0 , K 4 ! 0 is the Lipchitz constant from (4.82). K 4 2H 1 1 1 , if H 1 , H0 Dn ! 0 . In particular, D n K4 K4
0 H0 d Dn d
Note that, for a fixed value of O R1 the functional I (Z ) C 1,1 ( X ) is convex, and it to check the condition (4.79). Theorem 11. Let the matrix W0 (t 0 , t1 ) be positive definite, the set S is convex and closed, the sequence ^Zn ` X is defined by (4.83). Then 1) the numerical sequence ^I Z n ` strictly decreases; 2) Zn Zn 1 o 0 as n o f ; If besides, the set M (Z0 ) {Z X | I (Z ) d I (Z0 )} is bounded and the inequality (4.79) holds, then the following assertions hold. 3) The sequence ^Zn ` X is minimizing, i.e. lim I (Z n ) I * inf I (Z ) ; n of ZX 4) The set X {Z X / I (Z ) I ZinfX I (Z )} z ;
5) The sequence ^Zn ` X weakly converges to the set X * ; 6) The convergence rate can be estimated as 0 d I (Z n ) I * d
c0 , n 1,2,... , c0 n
const ! 0 ;
7) The boundary value problem with a parameter (4.60), (4.61) has a solution if and only if I (Z* ) 0 . Proofs of theorems 9-11 and lemmas 2-4 run as proofs of the similar theorems and lemmas presented in the previous lectures. In applied problems often need to find the value O on the given segment / [a, b] . In this case the sequence is generated by the rule On1 P/ [On D n I Oc (Zn )] , n 0,1,2... , where P/ [x] is a projection of the point x onto the set / , a, if x a, ° P/ [ x] ®b, if x ! b, ° x, if a d x d b. ¯
Example 2. Let us give an example to illustrate the main results of this lecture. Consider the equation U OU 0 , t I [0,1] . (4.84) Denoting U x1 , U x1 x2 , U x2 , rewrite the equation (4.84) in the form x
§x · Ax BP (Ox) , x(0) ¨¨ 10 ¸¸ , x(1) © x20 ¹
where 116
§ x11 · ¨¨ ¸¸ , ( x(0), x(1)) S , © x21 ¹
§ x1 · At ¨¨ ¸¸ , e © x2 ¹ §1 t · ¨¨ ¸¸ , ©0 1 ¹
§0 1· §0· ¸¸ , B ¨¨ ¸¸ , P(O ) (O , 0) , x A ¨¨ © 0 0¹ © 1¹ §1 t · ¸¸ , ) 0 (t ,W ) e At e AW , ) 0 (0, t ) e At ¨¨ 0 1 © ¹
§1 t · ¨¨ ¸¸ T0 (t ) , © 0 1¹ § 1 1· ¸¸ ) 0 (0,1) ¨¨ ©0 1 ¹
The corresponding linaer controllable system (4.64) – (4.66) has the form Ay Bu (t ), t I , y (0)
y
( y (0)
x(0), y (1)
x(1)
(4.85)
x(1)) S , u() L2 ( I , R1 )
x(0), y (1)
where S is a given convex closed set. Different forms of the set S are presented below. Since 1
³ Ф (0, t ) BB ) *
W0 (0,1)
0
* 0
(0, t )dt
0
Ф0 (0,1) x(1) x(0)
a0
B *Ф0* (0, t )W01 (0,1)a0 N10 (t )
§ 1/ 3 1/ 2 · ¨¨ ¸ ! 0 , W01 (0,1) 1 ¸¹ © 1/ 2 § x11 x21 x10 · ¨¨ ¸¸ , O10 (t , x(0), x(1)) © x21 x20 ¹
§12 6 · ¨¨ ¸¸ , © 6 4¹
(6 12t ) x10 (4 6t ) x 20 (12t 6) x11 (6t 2) x 21 , (12t 6,6t 2) ,
B Ф0* (0, t )W01 (0,1)Ф0 (0,1) *
the control u(t) U defined in (4.68) is written as u (t )
v(t ) (6 12t ) x0 (4 6t ) x20 (12t 6) x11 ( 6t 2) x21 ( 12t 6) z1 (1, v ) (6t 2) z2 (1, v ), t I .
(4.86)
As (see (4.85)) O20 (t , x(0), x(1)) Ф0 (t ,0)W0 (t ,1)W01 (0,1) x(0) Ф0 (t ,0)W0 (0, t ) W01 (0,1)Ф0 (0,1) x(1) § (2t 3 3t 2 1) x10 (t 3 2t 2 t ) x20 (2t 3 3t 2 ) x11 (t 3 t 2 ) x21 · ¨ ¸, ¨ (6t 2 6t ) x (3t 2 4t 1) x (6t 2 6t ) x (3t 2 2t ) x ¸ 10 20 11 21 © ¹ 3 2 3 2 § 2t 3t t t · ¸, N 20 (t ) Ф0 (t ,0)W0 (0, t )W01 (0,1)Ф0 (0,1) ¨¨ 2 2 ¸ © 6t 6t 3t 2t ¹
by (4.70) we obtain ( y(t ) ( y1 (t ), y2 (t )) , where y1 (t )
z1 (t, v) (2t 3 3t 2 1) x10 (t 3 2t 2 t ) x20 ( 2t 3 3t 2 ) x11
y 2 (t )
(t 3 t 2 ) x21 (2t 3 3t 2 ) z1 (1, v ) ( t 3 t 2 ) z2 (1, v ) z2 (t, v ) (6t 2 6t ) x10 (3t 2 4t 1) x20 ( 6t 2 6t ) x11 (3t 2 2t ) x21 (6t 2 6t ) z1 (1, v) ( 3t 2 2t ) z2 (1, v ), t I .
(4.87) (4.88)
Now the optimization problem (4.75)–(4.77) is written as I (v, x(0), x(1), O )
1
³ | u(t ) Oy (t ) |
2
1
(4.89)
dt o inf
0
under the conditions Az Bv , z (0) 0 , v() L2 ( I , R1 ) , t I , ( x(0), x(1)) S , O R 1 , O z 0 , z
where u (t ) , y1 (t ) , t I are defined by (4.86), (4.87) respectively. Case I. Let the set S be given by S
{( x(0), x(1) R 4 / x10
0, x 20 R1 , x11
117
0, x 21
R1 } R 4 .
(4.90) (4.91)
For this case we have from (4.86) – (4.88) v(t ) (4 6t ) x20 (6t 2) x21 (12t 6) z1 (1, v) (6t 2) z2 (1, v) ,
(4.92) (4.93) y1 (t ) z1 (t , v) (t 2t t ) x20 (t t ) x21 (2t 3t ) z1 (1, v) (t t ) z2 (1, v) , 2 2 2 2 (4.94) y2 (t ) z2 (t , v) (3t 4t 1) x20 (3t 2t ) x21 (6t 6t ) z1 (1, v) (3t 2t ) z2 (1, v) . The corresponding optimization problem is defined by (4.89) – (4.91), where u (t ) , y1 (t ) , t I are defined by (4.92), (4.93) respectively, where u(t )
3
2
3
2
3
2
3
2
§ 0 t 3 2t 2 t · §0 t3 t 2 · § 0 · ¸ ¸ , C10 (t ) ¨ ¨¨ ¸¸ , C20 (t ) ¨¨ 2 ¸ ¨ 0 3t 2 4t 1¸ . © 6t 2 ¹ © 0 3t 2t ¹ © ¹ Q Q Q The partial derivatives Qv , x0 , x1 , QO , Qz , z (1) are defined by (4.78),
T10 (t )
§ 0 · ¨¨ ¸¸ , T20 (t ) © 4 6t ¹
(v, x20 , x21, O , z1 , z2 , z1 (1), z2 (1)) .
Q(q, t ) | u Oy1 | 2 , q
The sequence {Zn } {vn , x , x , O } X is generated by: n 20
v n 1 (t ) n 1 x20
n 21
n
v n (t ) D n I vc (Z n )
v n (t ) D n [Qv (q n (t ), t ) \ 2n (t )] ,
1
n n 1 x20 D n ³ Qx20 (q n (t ), t )dt , x21 0
On1
1
n x21 D n ³ Qx21 (qn (t ), t )dt , 0
1
On D n ³ QO (qn (t ), t )dt , n 0,1,2,... , 0
where qn (t ) (vn (t ), x20n , x21n , On , z1 (t , vn ), z2 (t , vn ), z1 (1, vn ), z2 (1, vn )) , zn Azn Bvn , z n (0) 0 , t I , zn z n (t ) ( z1 (t , vn ), z2 (t , vn )) , \ n
Qz (qn (t ), t ) A \ n , \ n (1)
1
³ Qz (1) (q n (t ), t )dt , 0
\ n \ n (t ) (\ 1n (t ),\ 2n (t )) , t I .
As a result of solving the optimization problem (4.89)–(4.91) for this example we have vn (t ) o v ( m ) (t ) m 2S 2 sin nSt (4 6t )mS (6t 2)mS cos mS , m 1,2,... ; n
( m ) n
( m ) x20 o x20 mS , m 1,2,... , x21 o x21 mS cos mS , m 1,2,... ; ( m) 2 2 On o O m S , m 1,2,... , as n o f ; The functions (see (4.93), (4.94)) z1 (t , v ( m) ) sin mSt mSt mS (2t 2 t 3 ) (t 3 t 2 )mS cos mS , m 1,2,... ; z2 (t , v ( m) ) mS cos mSt mS mS (4t 3t 2 ) (3t 2 2t )mS cos mS , m 1,2,... ; z1 (1, v ( m) ) 0 , z 2 (1, v ( m) ) 0 ; y1 (t ) x1 (t ) sin mSt , m 1,2,... , y2 (t ) x2 (t ) mS cos mSt , m 1,2,... , t I . *( m ) *(m ) , x21 , O*m ) 0 for m 1,2,... . The values I (v ( m ) , x20 Case II. Let the set S be given by S {( x(0), x(1)) R 4 / x10 0, x 20 R1 , x11 R1 , x 21 0} . For this case it follows from (4.86) – (4.88) that u (t )
v(t ) (4 6t ) x20 (12t 6) x11 ( 12t 6) z1 (1, v ) (6t 2) z2 (1, v ), t I ,
118
(4.95)
z1 (t, v ) (t 3 2t 2 t ) x20 ( 2t 3 3t 2 ) x11
y1 (t )
(4.96)
(2t 3 3t 2 ) z1 (1, v ) ( t 3 t 2 ) z2 (1, v ), z2 (t, v) (3t 2 4t 1) x20 ( 6t 2 6t ) x11
y 2 (t )
(4.97)
(6t 2 6t ) z1 (1, v ) ( 3t 2 2t ) z2 (1, v ).
For this case the optimization problem (4.89)–(4.91) has the form 1
³ u(t ) Oy (t )
I (v, x20 , x11, O )
2
1
1
³ Q (q(t ), t )dt o inf
dt
1
0
,
(4.98)
0
under the conditions 1 0 , v() L2 ( I , R ) ,
Az Bv , z (0)
z
, x11 R , O R , O z 0 , where u (t ) , y1 (t ) , t I are defined by (4.95), (4.96) respectively, x 20 R
1
1
*
T10 (t ) Q1 (q, t )
1
(4.99) (4.100)
*
§12t 6 · ¨¨ ¸¸ , C10 (t ) © 0 ¹
§ 0 t 3 2t 2 t · § 2t 3 3t 2 0 · ¨ ¸ ¸ ¨ , C 20 ¨ 6t 2 6t 0 ¸ , ¨ 0 3t 2 4t 1¸ ¹ © © ¹ 2 Q Q Q u Oy1 , the partial derivatives Q1v , 1x2 0 , 1x1 1 , Q1O , Q1z , 1z 1 are § 0 · ¨¨ ¸¸ , T20 (t ) © 4 6t ¹
computed by (4.78). The sequences 1
n x20 D n [ ³ Q1x20 (qn (t ), t )dt ] ,
n 1 v n D n [Q1v (q n (t ), t ) \ 2n (t )] , x20
v n 1
0
x11n1
1
1
x11n D n [³ Q1x11 (qn (t ), t )dt ] , On1
On D n [ ³ Q1O (qn (t ), t )dt ] ,
0
n
\ n
0
n n , x11 , On , z1 (t , v n ), z 2 (t , v n )) , t I , 0,1,2, , q n (t ) (v n (t ), x 20 z n Az n Bv n , z n (0) 0 , z n ( z1 (t , vn ), z 2 (t , vn )) , t I ,
1
³ Q1z (1) (q n (t ), t )dt , \ n
Q1z (q n (t ), t ) A*\ n , \ n (1)
(\ 1n (t ),\ 2n (t )) , t I .
0
The solution to the optimization problem (4.98) – (4.100) is vn (t ) o v*( m ) (t )
(2m 1) 2
S2 4
n *( m ) x 20 o x 20
n *(m ) x11 o x11
On o O*( m )
(2m 1) 2
The functions (см. (4.97)) z1 (t , v*( m ) )
z 2 (t , v*( m ) )
sin( 2m 1)
(2m 1)
S 2
S
2
t (2m 1)
cos( 2m 1)
S
S
S
t (4 6t )( 2m 1) (12t 6) sin( 2m 1) , t I 2 2 2 as n o f , m 1,2, ; S (2m 1) при n o f , m 1,2, ; 2 S sin( 2m 1) при n o f , m 1,2, ; 2
sin( 2m 1)
S
S2 4
при n o f ,
t (2m 1)
S
2
t (2m 1)
S
2
119
1,2, .
(2t 2 t 3 ) (2t 3 3t 2 ) sin( 2m 1)
2 2 ( m) m 1,2, , z1 (1, v* )
S
m
0; S
(2m 1)
2
S 2
,
(4t 3t 2 ) (6t 2 6t ) sin( 2m 1)
S 2
,
m
y1* (t )
x1* (t )
sin( 2m 1)
S 2
( m) 1,2, , z 2 (1, v* )
t , y 2* (t )
*(m ) *(m ) The values I (v*( m ) , x 20 , x11 , O*( m ) ) 0 , Case III. Let the set
m
(2m 1)
S
2
0; cos( 2m 1)
S 2
t , t I , m 1,2, .
1,2, .
S{( x(0), x(1)) R 4 / x10
0, x 21 R 1 ; x11 hx 21
0} ,
where x(0)
§ x10 · ¨¨ ¸¸ R 2 , x(1) © x20 ¹
§ x11 · ¨¨ ¸¸ R 2 . © x21 ¹
For case we have from (4.86) – (4.88), u(t )
v(t ) (4 6t ) x20 (12t 6) x11 ( 6t 2) x21 ( 12t 6) z1 (1, v ) (6t 2) z2 (1, v ), t I
y1 (t )
z1 (t, v ) (t 3 2t 2 t ) x20 ( 2t 3 3t 2 ) x11 (t 3 t 2 ) x21
y 2 (t )
(2t 3 3t 2 ) z1 (1, v ) ( t 3 t 2 ) z2 (1, v ), z2 (t, v) (3t 2 4t 1) x20 ( 6t 2 6t ) x11 (3t 2 2t ) x21
(4.101)
[0,1],
(4.102) (4.103)
(6t 2 6t ) z1 (1, v ) ( 3t 2 2t ) z2 (1, v ), t I .
The optimization problem (4.89) – (4.91) is written in the form I (v, x20 , x11 , x21 , O )
1
³ u(t ) Oy1 (t ) dt 2
0
1
³ Q (q(t ), t )dt o inf ,
(4.104)
2
0
under the conditions z Az Bv , z (0) 0 , v() L2 ( I , R1 ) , ( x(0), x(1)) S 0 u S1 S , O R 1 , O z 0 ,
where
u (t ) , y1 (t ) , t I
are computed by (4.101), (4.102) respectively, *
T10 (t )
Q2 ( q , t )
*
§ 0 t 3 2t 2 1· § 12t 6 · ¸, ¨¨ ¸¸ , C10 (t ) ¨¨ 2 ¸ © 6t 2 ¹ © 0 3t 4t 1¹ § 2t 3 3t 2 t 3 t 2 · ¸, C 20 (t ) ¨¨ 2 2 ¸ © 6t 6t 3t 2t ¹ (v, x20 , x11 , x21 , O , z1 , z 2 , z1 (1), z 2 (1)) , the partial derievatives Q2 v ,
§ 0 · ¨¨ ¸¸ , T20 (t ) © 4 6t ¹
2
u Oy1 , q
(4.105) (4.106)
Q 2 x2 0 , Q2 x1 1 , Q2 x2 1 , Q2 O , Q2 z1 , Q 2 z 2 , Q2 z1 1 , Q2 z2 1 ,
are calculated by (4.78).
Note that the sets S0
{x(0)
( x10 , x 20 ) R 2 / x10
0, x 20 R1 }
S1
{x(1)
( x11 , x21 ) R 2 / x11 hx21
0} .
Let D1 (1, h) R . Then the projection of the point e (e1 , e2 ) R onto the set S1 is computed by PS (e) e D1* ( D1 D1* ) D1e. Since D1 D1* 1 h 2 , the projection 2
2
1
PS1 (e)
1 · § ¨ e1 2 (e1 he2 ) ¸ h ¸. ¨ ¨ e 1 (e he ) ¸ ¨ 2 1 2 ¸ h ¹ ©
The sequences 120
n 1 v n D n [Q2v (q n (t ), t ) \ 2n (t )] , x20
v n 1
1
n x20 D n [ ³ Q2 x20 (qn (t ), t )dt ] , 0
x11n1
1
1
PS1 [ x11n D n ³ Q2 x11 (qn (t ), t )dt ] x11n D n ³ Q2 x11 (qn (t ), t )dt 0
0
1
1
1 n [ x11 D n ³ Q2 x11 (qn (t ), t )dt h( x21 D n ³ Q2 x21 (qn (t ), t )dt )] , h2 0 0
n 1 x21
1
n PS1 [ x21 D n ³ Q2 x21 (qn (t ), t )dt ] 0
1
n x21 D n ³ Q2 x21 (qn (t ), t )dt 0
1
1
1 [ x11n D n ³ Q2 x11 (qn (t ), t )dt h( x21 D n ³ Q2 x21 (qn (t ), t )dt )] , h 0 0
On1
1
On D n [ ³ Q2O (qn (t ), t )dt ] , n 0,1,2, , 0
z n
Az n Bv n , z n (0)
0 , zn
( z1 (t , vn ) , z 2 (t , vn )) , t I ,
Q2 z (q n (t ), t ) A*\ n , \ n (1)
\ n
1
³ Q2 z (1) (qn (t ), t )dt , 0
where q n (t ) (vn (t ), x , x , x , On , z1 (t , vn ), z 2 (t , vn ), z1 (1, vn ), z 2 (1, vn )) . The solution to the optimization problem (4.104)–(4.106) is: vn (t ) o v* (t ) P 2 sin Pt (4 6t ) P (12t 6) sin P (6t 2) P cos P , as n o f ; * n * n * sin P , x21 o x21 P cos P , as n o f ; x 20 o x 20 P , x11n o x11 n 2 3 3 2 z1 (t , v ) o z1* (t ) sin Pt Pt (2t t )P (2t 3t ) sin P (t 3 t 2 )P cos P as n o f ; z2 (t , vn ) o z2* (t ) P cos Pt P (4t 3t 2 )P (6t 2 6t ) sin P (3t 2 2t )P cos P as n o f ; * * x11 hx21 sin P hP cos P 0 , On o O* P 2 . The functions (see (4.102), (4.103)) y1* (t ) x1* (t ) sin Pt , * * * I (v* , x 20 , x11 , x 21 , O* ) 0 , The values y2* (t ) x2* (t ) P cos Pt , P , t I . sin P hP cos P 0 . n 20
n 11
n 21
Lecture 18. Boundary value problems for nonlinear systems with a parameter Consider the boundary value problem with a parameter x A(t ) x B(t ) f ( x, O , t ) P (t ), t I [t 0 , t1 ] , ( x(t 0 ) x0 , x(t1 ) x1 ) S R 2 n , O / Rs ,
(4.107) (4.108) (4.109)
where / is a given convex closed set. Problem А. Provide a necessary and sufficient condition for existence of a solution to the boundary value problem (4.107) – (4.109). Problem B. Construct a solution to the boundary value problem (4.107) – (4.109). 121
The linear controllable system for the boundary value problem (4.107) – (4.109) has the form (4.110) y A(t ) y B(t )u (t ) P (t ), t I , y(t 0 ) x0 , y(t1 ) x1 , ( x0 , x1 ) S , (4.111) m u() L2 ( I , R ). (4.112) In the notation of Chapter III we have u(t ) v(t ) O1 (t , x0 , x1 ) N1 (t ) z (t1 , v) U , (4.113) m z A(t ) z B(t )v(t ), z (t 0 ) 0, v() L2 ( I , R ), (4.114) y(t ) z (t , v) O2 (t , x0 , x1 ) N 2 (t ) z (t1 , v), t I . (4.115) W ( t , t ) Lemma 5. Let the matrix 1 0 1 be positive definite. Then the boundary value problem (4.107)–(4.109) is equivalent to the following u(t ) v(t ) O1 (t , x0 , x1 ) N1 (t ) z (t1 , v) f ( x, O , t ), t I , (4.116) m z A(t ) z B(t )v(t ), z (t 0 ) 0, v() L2 ( I , R ), (4.117) ( x0 , x1 ) S , O / R S ,
where the function y (t ), t I is defined by (4.115), v() L2 ( I , R m ) is an arbitrary function. Proof of the lemma follows from (4.110)–(4.115). The optimization problem corresponding to the boundary value problem (4.107)–(4.109) has the form
³ >v(t ) O (t, x , x ) N (t ) z(t , v) f ( y(t ), O , t ) t1
J (v, x0 , x1 , O )
1
0
1
1
1
2
dt o inf,
(4.118)
t0
under the conditions z Az B(t )v(t ), z (t 0 ) 0, t I , v() L2 ( I , R m ), ( x0 , x1 ) S , O /,
(4.119) (4.120)
where the function y (t ), t I is defined by (4.115). Let us introduce the notations (see (4.46), (4.117)) X
L2 ( I , R m ) u S u / H L2 ( I , R m ) u R n u R n u R S , J * inf J ([ ), [ (v, x0 , x1 , O ) X ,
^[ X / J ([ ) [X
X*
*
*
J*
`
inf J ([ ) . [X
Theorem 12. Let the matrix W1 (t 0 , t1 ) be positive definite, the set X * z . A necessary and sufficient condition for the boundary value problem (4.107) – (4.109) to have a solution is J ([* ) 0 , where [* (v* , x0* , x1* , O* ) X is an optimal control for the problem (4.118) – (4.120). If J * J ([* ) 0 , then the function x* (t )
z (t , v* ) O1 (t , x0* , x1* ) N 2 (t ) z (t1 , v* ), t I
is a solution to the boundary value problem (4.107) – (4.109). If J * ! 0 , then the boundary value problem (4.107)–(4.109) has not a solution. Proof of the theorem follows from lemma 10 and the relations (4.118) – (4.120). Note that v* (t ) v* (t , O* ), x0* x0* (O* ), x1* x1* (O* ) . 122
Lemma 6. Let the matrix W1 (t 0 , t1 ) be positive definite, the function v O1 (t , x0 , x1 ) N1 (t ) z (t1 ) f ( y, O , t )
F3 ( q, t )
2
v T1 (t ) x0 T2 (t ) x1 P (t ) N1 (t ) z (t1 ) f ( z C1 (t ) x0 C2 (t ) x1 ) P1 (t ) N 2 (t ) z (t1 )
be
defined
and
continuously
differebtiable
2
with
q(t ) ([ , z, z (t1 )) (v, x0 , x1 , O , z, z (t1 )). Then the partial derivatives F3v (q, t ) 2w1 (q, t ), F3x0 (q, t )
F3 x1 (q, t )
>2T (t) 2C (t) f * 2
* 2
* x
@
>2T (t) 2C (t) f * 1
* 1
@
( y, O, t ) w1 (q, t ), F3O (q, t ) 2 f O ( y, O , t ) w1 (q, t ),
F3 z (q, t ) 2 f ( y, O , t ) w1 (q, t ), F3z (t1 ) (q, t )
>2N
* 1
to
( y, O, t ) w1 (q, t ), *
* x
where
* x
respect
@
(4.121)
2N f ( y, O, t ) w1 (q, t ). * * 2 x
w1 (q, t ) v T1 (t ) x0 T2 (t ) x1 P (t ) N1 (t ) z (t1 ) f ( y, O , t ), y z C1 (t ) x0 C2 (t ) x1 P1 (t ) N 21 (t ) z (t1 ), t I .
Formulas from (4.121) can be derived directly by differentiation the function F3 (q, t ) with respect to q . Denote F3q (q, t ) ( F3v (q, t ) F3 x (q, t ), F3 x (q, t ), F3O (q, t ), F3 z (q, t ), F3 z (t ) (q, t )), q R m s 4n , t I . Lemma 7. Let the assumptions of lemma 6 hold, the sets S , / be convex and the inequality F3q (q1 , t ) F3q (q2 , t ), q1 q 2 !t 0 , q1 , q2 R m s 4n , t I . (4.122) hold. Then the functional (4.118) under the conditions (4.119), (4.120) is convex. Proof of the lemma is similar to that of lemma 3. Definition 1. We say that the derivative F3q (q, t ) satisfies the Lipchitz condition with respect to q in R m s 4 n , if 0
1
1
F3v (q 'q, t ) F3v (q, t ) d L1 'q , F3 x (q 'q, t ) F3 x (q, t ) d L4 'q , 0
0
F3 x1 (q 'q, t ) F3 x1 (q, t ) d L3 'q , F3O (q 'q, t ) F3O (q, t ) d L4 'q ,
F3z (q 'q, t ) F3 z (q, t ) d L5 'q , F3 z (t ) (q 'q, t ) F3 z (t ) (q, t ) d L6 'q , 1
1
where Li const ! 0, i 1,6 , 'q ('v, 'x0 , 'x1 , 'O, 'z, 'z(t1 ) . Theorem 13. Let the assumptions of lemma 6 hold, the derivative F3q (q, t ) , s m 4 n qR , t I satisfies the Lipchitz condition. Then the functional (4.118) under the conditions (4.119) – (4.120) is Frechet differentiable, the gradient J c(v, x0 , x1 , O ) ( J vc ([ ), J xc ([ ), J xc ([ ), J Oc ([ )) H , [ (v, x0 , x1 , O ) at any point [ X is computed by 0
J vc ([ )
1
t1
³F
F3v (q(t ), t ) B * (t )\ (t ), J xc0 ([ )
3 x0
(q(t ), t )dt ,
t0
J xc1 ([ )
t1
³ F3x1 (q(t ), t )dt , J Oc ([ )
t0
123
t1
³ F O (q(t ), t )dt , 3
t0
(4.123)
where q(t ) (v(t ), x0 , x1 , O, z(t , v), z(t1 , v)), z(t , v), t I is a solution to the differential equations (4.119), the function \ (t ), t I is a solution to the conjugate system t1
F3 z (q(t ), t ) A* (t )\ , \ (t1 ) ³ F3 z (t1 ) (q(t ), t )dt , t I .
\
(4.124)
t0
Moreover, the gradient J c([ ) H satisfies the Lipchitz condition J c([1 ) J c([ 2 ) d l4 [1 [ 2 , [1 , [ 2 X . (4.125) Proof of the theorem is similar to that of theorem 3. Unlike the previous cases one of components of the vector q is a parameter O / . By theorem 13, on the base of (4.123)–(4.125) construct the sequences x1n1
where 0 D n d
>
vn D n J vc ([ n ), x0n1
vn1
>
@
@
PS x0n D n J xc0 ([ n ) ,
P/ >On D n J Oc ([ n )@, n
PS x1n D n J xc1 ([ n ) , On1
2 , H ! 0. In particular for H l 4 2H
l4
2
1 l4
,D n
0,1,2...,
(4.126)
const ! 0.
Theorem 14. Let the assumptions of theorem 13 hold, the sets S , / are convex and closed, the sequence {[ n } ^vn , x0n , x1n , On ` X is defined by (4.126). Then 1) the numerical sequence ^J ([ n )` strictly decreases; 2) vn vn1 o 0 , x0n x0n1 o 0, x1n x1n1 o 0, On On1 o 0, as n o f ; If besides, the inequality (4.122) holds, the set M ([ 0 ) ^[ X J ([ ) d J ([ 0 )` is bounded, where [ 0 (v0 , x00 , x10 , O0 ) X ,then the following assertions hold. 3) The sequence {[ n } X is minimizing, i.e. lim J ([ n ) J * inf J ([ ) ;
^[ X / J ([ )
[ X
nof
4) the set X *
*
*
`
inf J ([ ) z ;
J*
[ X
5) the sequence {[ n } X weakly converges to the set X * as n o f; weakly
weakly
weakly
v1n o v1* , v 2n o v 2* , x0n o x0* , x1n o x1* , d n o d * , p n o p* as n o f ,
[*
(v1* , v2* , x0* , x1* , d * , p* ) X * ;
6) The rate of convergence can be estimated as 0 d J ([ n ) J * d
m3 , n 1,2,..., m3 n
const ! 0 ;
7) The boundary value problem of controllability (4.107) – (4.109) has a solution if and only if J ([* ) 0. Proofs of similar theorems are presented above. Example 3. Consider the boundary value problem with a parameter
x(0)
O2 x 2 O
, t I [0,1], 1 t2 x0 S 0 [1,0], x(1) x1 S1 [0,1], O / [0,1] . x
For the problem (4.127), (4.128) we have A 0, B 1, f ( x, O , t )
O2 x 2 O
124
1 t 2
, t0
0, t1 1.
(4.127) (4.128)
Since
T (t ) 1, )(t ,W ) 1, W (0,1) 1, W 1 (0,1) 1, W (0, t ) t , W (t ,1) 1 t , T1 (t ) 1, T2 (t ) 1, C1 (t ) 1 t,
the functions
C2 (t )
1,
t , N1 (t )
N 2 (t )
t ,
w1 (q, t ) v(t ) x0 x1 z (1, v) f ( x, O , t ), t [0,1], y(t ) z (t , v) (1 t ) x0 tx1 tz(1, v), t [0,1].
For this example the optimization problem (4.118)–(4.120) is written as
J(v, x₀, x₁, λ) = ∫₀¹ |v(t) − x₀ + x₁ − z(1, v) − f(y(t), λ, t)|² dt → inf
under the conditions
ż = v, z(0) = 0, t ∈ I, v(·) ∈ L₂(I, R¹), x₀ ∈ S₀, x₁ ∈ S₁, λ ∈ Λ,
where the function
F₃(q, t) = |w₁(q, t)|², q = (v, x₀, x₁, λ, z, z(1)) ∈ R⁶.
The partial derivatives are
F₃v(q, t) = 2w₁(q, t), F₃x₀(q, t) = [−2 − 2(1 − t)f_x(y, λ, t)]w₁(q, t),
F₃x₁(q, t) = [2 − 2tf_x(y, λ, t)]w₁(q, t), F₃λ(q, t) = −2f_λ(y, λ, t)w₁(q, t),
F₃z(q, t) = −2f_x(y, λ, t)w₁(q, t), F₃z(1)(q, t) = [−2 + 2tf_x(y, λ, t)]w₁(q, t),
where
f_x(y, λ, t) = 2λ²y/(1 + t²) = 2λ²[z + (1 − t)x₀ + tx₁ − tz(1)]/(1 + t²),
f_λ(y, λ, t) = (2λy² + 1)/(1 + t²) = (2λ[z + (1 − t)x₀ + tx₁ − tz(1)]² + 1)/(1 + t²).
The gradient of the functional is
J′_v(ξ) = F₃v(q(t), t) − ψ(t), J′_{x₀}(ξ) = ∫₀¹ F₃x₀(q(t), t)dt,
J′_{x₁}(ξ) = ∫₀¹ F₃x₁(q(t), t)dt, J′_λ(ξ) = ∫₀¹ F₃λ(q(t), t)dt,
where ψ(t), t ∈ I, is a solution to the conjugate system
ψ̇ = −F₃z(q(t), t), ψ(1) = −∫₀¹ F₃z(1)(q(t), t)dt.
The minimizing sequences are
v_{n+1} = vₙ − αₙJ′_v(ξₙ), x₀ⁿ⁺¹ = P_{S₀}[x₀ⁿ − αₙJ′_{x₀}(ξₙ)],
x₁ⁿ⁺¹ = P_{S₁}[x₁ⁿ − αₙJ′_{x₁}(ξₙ)], λ_{n+1} = P_Λ[λₙ − αₙJ′_λ(ξₙ)], n = 0, 1, 2, …,
where αₙ = 1/l₄, ξₙ = (vₙ(t), x₀ⁿ, x₁ⁿ, λₙ), q(t) = (v(t), x₀, x₁, λ, z(t, v), z(1, v)).
It can be shown that the solution to this optimization problem is vₙ(t) → v_*(t) = 1, x₀ⁿ → x₀* = 0, x₁ⁿ → x₁* = 1, λₙ → λ_* = 1 as n → ∞. The function z(t, v_*) = t. Then y_*(t) = z(t, v_*) + (1 − t)x₀* + tx₁* − tz(1, v_*) = t, t ∈ [0, 1]. It is not hard to check that J(ξ_*) = 0, where ξ_* = (v_*(t), x₀*, x₁*, λ_*) = (1; 0, 1, 1). Indeed,
w₁(q_*(t), t) = v_*(t) − x₀* + x₁* − z(1, v_*) − f(y_*(t), λ_*, t) = 1 − 0 + 1 − 1 − (t² + 1)/(1 + t²) ≡ 0, t ∈ [0, 1].
Therefore J(ξ_*) = 0. Then x_*(t) = y_*(t) = t, t ∈ I, is a solution of the boundary value problem: ẋ_* = (λ_*²y_*²(t) + λ_*)/(1 + t²) = (t² + 1)/(1 + t²) = 1.
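The limit element can be checked directly. A small numerical sketch (the grid size is an illustrative choice) verifies that along x_*(t) = t with λ_* = 1 the residual of the differential equation (4.127), and hence the integrand of J, vanishes.

```python
# Numerical check of Example 3: with lambda* = 1 and x*(t) = t on [0, 1],
# the right hand side f(x, lam, t) = (lam^2 x^2 + lam) / (1 + t^2) must
# equal xdot*(t) = 1 identically, so the residual below is zero.

def f(x, lam, t):
    return (lam**2 * x**2 + lam) / (1.0 + t**2)

N = 1000
grid = [i / N for i in range(N + 1)]
# |xdot*(t) - f(x*(t), lambda*, t)| on the grid
residual = max(abs(1.0 - f(t, 1.0, t)) for t in grid)
```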
Lecture 19. Boundary value problems with a parameter with pure state variable constraints

Consider problem 5: the boundary value problem
ẋ = A(t)x + B(t)f(x, λ, t) + μ(t), t ∈ [t₀, t₁], (4.129)
(x(t₀) = x₀, x(t₁) = x₁) ∈ S ⊂ R²ⁿ, λ ∈ Λ, (4.130)
x(t) ∈ G(t): G(t) = {x ∈ Rⁿ | γ(t) ≤ F(x, λ, t) ≤ δ(t), t ∈ I} (4.131)
is given.
Problem A. Find a necessary and sufficient condition for existence of a solution to the boundary value problem (4.129)–(4.131).
Problem B. Construct a solution to the boundary value problem (4.129)–(4.131).
The linear controllable system for the boundary value problem (4.129)–(4.131) has the form (2.129)–(2.131), where u(t) ∈ U; the function y(t), t ∈ I, is defined by (2.132), (2.134), respectively, and the function z(t, v), t ∈ I, is a solution to the differential equation (2.133).
Lemma 8. Let the matrix W(t₀, t₁) be positive definite. Then the boundary value problem (4.129)–(4.131) is equivalent to the problem
u(t) = v(t) + λ₁(t, x₀, x₁) + N₁(t)z(t₁, v) = f(x, λ, t), t ∈ I, (4.132)
y(t) = z(t, v) + λ₂(t, x₀, x₁) + N₂(t)z(t₁, v) ∈ G(t) = G(t, λ), t ∈ I, (4.133)
ż = A(t)z + B(t)v(t), z(t₀) = 0, v(·) ∈ L₂(I, Rᵐ), (4.134)
(x₀, x₁) ∈ S, λ ∈ Λ. (4.135)
The optimization problem corresponding to the boundary value problem (4.129)–(4.131) has the form
I(v, x₀, x₁, λ, w) = ∫_{t₀}^{t₁} [|v(t) + λ₁(t, x₀, x₁) + N₁(t)z(t₁, v) − f(y(t), λ, t)|² + |w(t) − F(y(t), λ, t)|²]dt = ∫_{t₀}^{t₁} F₄(q(t), t)dt → inf (4.136)
under the conditions
ż = A(t)z + B(t)v(t), z(t₀) = 0, v(·) ∈ L₂(I, Rᵐ), (4.137)
w(t) ∈ W(t) = {w(·) ∈ L₂(I, Rʳ) | γ(t) ≤ w(t) ≤ δ(t), t ∈ I}, (4.138)
(x₀, x₁) ∈ S, λ ∈ Λ, (4.139)
where
F₄(q(t), t) = |v(t) + λ₁(t, x₀, x₁) + N₁(t)z(t₁, v) − f(y, λ, t)|² + |w(t) − F(y(t), λ, t)|²,
q(t) = (v(t), x₀, x₁, λ, w(t), z(t, v), z(t₁, v)).
Denote
X = L₂(I, Rᵐ) × S × Λ × W ⊂ H = L₂(I, Rᵐ) × Rⁿ × Rⁿ × Rˢ × L₂(I, Rʳ),
I_* = inf_{ξ∈X} I(ξ), ξ = (v(t), x₀, x₁, λ, w(t)) ∈ X, X_* = {ξ_* ∈ X | I(ξ_*) = I_* = inf_{ξ∈X} I(ξ)}.
Theorem 15. Let the matrix W(t₀, t₁) be positive definite and the set X_* ≠ ∅. A necessary and sufficient condition for the boundary value problem (4.129)–(4.131) to have a solution is I(ξ_*) = 0, where ξ_* = (v_*, x₀*, x₁*, λ_*, w_*) ∈ X is an optimal control for the problem (4.136)–(4.139). If I_* = I(ξ_*) = 0, then the function
x_*(t) = z(t, v_*) + λ₂(t, x₀*, x₁*) + N₂(t)z(t₁, v_*), t ∈ I,
is a solution to the boundary value problem (4.129)–(4.131). If I_* > 0, then the boundary value problem (4.129)–(4.131) has no solution. Note that the condition I_* = I(ξ_*) = 0 is equivalent to (4.132)–(4.135).
Lemma 9. Let the matrix W(t₀, t₁) be positive definite and the function F₄(q, t) be defined and continuously differentiable with respect to q = (v, x₀, x₁, λ, w, z, z(t₁)). Then the partial derivatives are
F₄v(q, t) = 2w₁(q, t),
F₄x₀(q, t) = [2θ₁*(t) − 2C₁*(t)f_x*(y, λ, t)]w₁(q, t) − 2C₁*(t)F_x*(y, λ, t)w₂(q, t),
F₄x₁(q, t) = [2θ₂*(t) − 2C₂*(t)f_x*(y, λ, t)]w₁(q, t) − 2C₂*(t)F_x*(y, λ, t)w₂(q, t),
F₄λ(q, t) = −2f_λ*(y, λ, t)w₁(q, t) − 2F_λ*(y, λ, t)w₂(q, t),
F₄w(q, t) = 2w₂(q, t),
F₄z(q, t) = −2f_x*(y, λ, t)w₁(q, t) − 2F_x*(y, λ, t)w₂(q, t),
F₄z(t₁)(q, t) = [2N₁*(t) − 2N₂*(t)f_x*(y, λ, t)]w₁(q, t) − 2N₂*(t)F_x*(y, λ, t)w₂(q, t),
where
w₁(q, t) = v + λ₁(t, x₀, x₁) + N₁(t)z(t₁, v) − f(y, λ, t), w₂(q, t) = w(t) − F(y, λ, t), t ∈ I,
y = z + λ₂(t, x₀, x₁) + N₂(t)z(t₁), t ∈ I.
Lemma 10. Let the assumptions of lemma 8 hold, the sets S, Λ be convex, and the inequality
⟨F₄q(q₁, t) − F₄q(q₂, t), q₁ − q₂⟩ ≥ 0, ∀q₁, q₂ ∈ R^{m+s+r+4n}, t ∈ I, (4.140)
hold, where F₄q(q, t) = (F₄v, F₄x₀, F₄x₁, F₄λ, F₄w, F₄z, F₄z(t₁)). Then the functional (4.136) under the conditions (4.137)–(4.139) is convex.
Definition 2. We say that the derivative F₄q(q, t) satisfies the Lipschitz condition with respect to q in R^{m+s+r+4n} if
|F₄v(q + Δq, t) − F₄v(q, t)| ≤ L₁|Δq|, |F₄x₀(q + Δq, t) − F₄x₀(q, t)| ≤ L₂|Δq|,
|F₄x₁(q + Δq, t) − F₄x₁(q, t)| ≤ L₃|Δq|, |F₄λ(q + Δq, t) − F₄λ(q, t)| ≤ L₄|Δq|,
|F₄w(q + Δq, t) − F₄w(q, t)| ≤ L₅|Δq|, |F₄z(q + Δq, t) − F₄z(q, t)| ≤ L₆|Δq|,
|F₄z(t₁)(q + Δq, t) − F₄z(t₁)(q, t)| ≤ L₇|Δq|,
Lᵢ = const > 0, i = 1, …, 7, |Δq| = |(Δv, Δx₀, Δx₁, Δλ, Δw, Δz, Δz(t₁))|.
Theorem 16.
Let the assumptions of lemma 8 hold and the derivative F₄q(q, t), q ∈ R^{m+s+r+4n}, t ∈ I, satisfy the Lipschitz condition. Then the functional (4.136) under the conditions (4.137)–(4.139) is Frechet differentiable, and the gradient
I′(v, x₀, x₁, λ, w) = (I′_v(ξ), I′_{x₀}(ξ), I′_{x₁}(ξ), I′_λ(ξ), I′_w(ξ)) ∈ H
at any point ξ ∈ X is calculated by
I′_v(ξ) = F₄v(q(t), t) − B*(t)ψ(t), I′_{x₀}(ξ) = ∫_{t₀}^{t₁} F₄x₀(q(t), t)dt,
I′_{x₁}(ξ) = ∫_{t₀}^{t₁} F₄x₁(q(t), t)dt, I′_λ(ξ) = ∫_{t₀}^{t₁} F₄λ(q(t), t)dt,
I′_w(ξ) = F₄w(q(t), t), t ∈ I, (4.141)
where q(t) = (v(t), x₀, x₁, λ, w(t), z(t, v), z(t₁, v)); z(t, v), t ∈ I, is a solution to the differential equation (4.137), and the function ψ(t), t ∈ I, is a solution to the conjugate system
ψ̇ = −F₄z(q(t), t) − A*(t)ψ, ψ(t₁) = −∫_{t₀}^{t₁} F₄z(t₁)(q(t), t)dt. (4.142)
Moreover, the gradient I′(ξ) ∈ H satisfies the Lipschitz condition
‖I′(ξ₁) − I′(ξ₂)‖ ≤ l₅‖ξ₁ − ξ₂‖, ∀ξ₁, ξ₂ ∈ X. (4.143)
On the basis of the formulas (4.141)–(4.143) construct the sequences
v_{n+1} = vₙ − αₙI′_v(ξₙ), x₀ⁿ⁺¹ = P_S[x₀ⁿ − αₙI′_{x₀}(ξₙ)], x₁ⁿ⁺¹ = P_S[x₁ⁿ − αₙI′_{x₁}(ξₙ)],
λ_{n+1} = P_Λ[λₙ − αₙI′_λ(ξₙ)], w_{n+1} = P_W[wₙ − αₙI′_w(ξₙ)], n = 0, 1, 2, …, (4.144)
where 0 < αₙ ≤ 2/(l₅ + 2ε), ε > 0. In particular, for ε = l₅/2, αₙ = 1/l₅ = const > 0.
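Every projection in (4.144) acts on an interval, a box, or a pointwise tube, so each reduces to elementwise clipping. A short sketch of P_W on a grid, with the illustrative bounds γ(t) = t and δ(t) = 2 (an assumption of this sketch, not a prescription of the text):

```python
# Pointwise projection onto W = {w in L2(I) | gamma(t) <= w(t) <= delta(t)}.
# On a time grid this is elementwise clipping.

def clip(x, lo, hi):
    return min(max(x, lo), hi)

def project_W(w_vals, ts, gamma, delta):
    return [clip(w, gamma(t), delta(t)) for w, t in zip(w_vals, ts)]

ts = [i / 100 for i in range(101)]           # grid on I = [0, 1]
gamma = lambda t: t
delta = lambda t: 2.0

w_feasible = [t * t + t for t in ts]         # already inside the tube
w_violating = [3.0 for _ in ts]              # above the upper bound

p1 = project_W(w_feasible, ts, gamma, delta)   # left unchanged
p2 = project_W(w_violating, ts, gamma, delta)  # clipped to delta = 2
```

A feasible function is a fixed point of the projection; an infeasible one is mapped onto the nearest boundary of the tube.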
Theorem 17. Let the assumptions of theorem 16 hold, the sets S, Λ be convex and closed, and the sequence {ξₙ} = {vₙ, x₀ⁿ, x₁ⁿ, λₙ, wₙ} ⊂ X be defined by (4.144). Then:
1) the numerical sequence {I(ξₙ)} strictly decreases;
2) ‖vₙ − v_{n+1}‖ → 0, |x₀ⁿ − x₀ⁿ⁺¹| → 0, |x₁ⁿ − x₁ⁿ⁺¹| → 0, |λₙ − λ_{n+1}| → 0, ‖wₙ − w_{n+1}‖ → 0 as n → ∞.
If, in addition, the inequality (4.140) holds and the set M(ξ₀) = {ξ ∈ X | I(ξ) ≤ I(ξ₀)} is bounded, then the following assertions hold:
3) the sequence {ξₙ} ⊂ X is minimizing, i.e. lim_{n→∞} I(ξₙ) = I_* = inf_{ξ∈X} I(ξ);
4) the set X_* = {ξ_* ∈ X | I(ξ_*) = I_* = inf_{ξ∈X} I(ξ)} is not empty;
5) the sequence {ξₙ} ⊂ X weakly converges to the set X_* as n → ∞;
6) the rate of convergence is estimated as 0 ≤ I(ξₙ) − I_* ≤ m₄/n, n = 1, 2, …, m₄ = const > 0;
7) the boundary value problem (4.129)–(4.131) has a solution if and only if I(ξ_*) = 0.

Example 4. Consider the following boundary value problem (see example 2):
ẋ = (λ²x² + λ)/(1 + t²), t ∈ I = [0, 1], (4.145)
x(0) = x₀ ∈ S₀ = [−1, 0], x(1) = x₁ ∈ S₁ = [0, 1], λ ∈ Λ = [0, 1], (4.146)
x(t) ∈ G(t) = G(t, λ): G(t, λ) = {x ∈ R¹ | t ≤ λx² + t ≤ 2, t ∈ I}. (4.147)
Note that all the original data are the same as in example 2, with the additional pure state variable constraint (4.147). For this example the optimization problem (4.136)–(4.139) has the form (see (4.145)–(4.147))
I(v, x₀, x₁, λ, w) = ∫₀¹ [|v(t) + λ₁(t, x₀, x₁) + N₁(t)z(t₁, v) − (λ²y² + λ)/(1 + t²)|² + |w(t) − (λy²(t) + t)|²]dt → inf
under the conditions
ż = v, z(0) = 0, t ∈ I, v(·) ∈ L₂(I, R¹), x₀ ∈ S₀, x₁ ∈ S₁, λ ∈ Λ,
w(t) ∈ W = {w(·) ∈ L₂(I, R¹) | 0 ≤ w(t) ≤ 2, t ∈ [0, 1]}.
Partial derivatives are computed by the formulas given in lemma 9, where F_x(y, λ, t) = 2λy(t), F_λ(y, λ, t) = y²(t), t ∈ I. The minimizing sequences are
v_{n+1}(t) = vₙ(t) − αₙ[F₄v(qₙ(t), t) − ψₙ(t)],
x₀ⁿ⁺¹ = P_{S₀}[x₀ⁿ − αₙ∫₀¹ F₄x₀(qₙ(t), t)dt], x₁ⁿ⁺¹ = P_{S₁}[x₁ⁿ − αₙ∫₀¹ F₄x₁(qₙ(t), t)dt],
λ_{n+1} = P_Λ[λₙ − αₙ∫₀¹ F₄λ(qₙ(t), t)dt], w_{n+1} = P_W[wₙ − αₙF₄w(qₙ(t), t)],
where
ψ̇ₙ = −F₄z(qₙ(t), t), ψₙ(1) = −∫₀¹ F₄z(1)(qₙ(t), t)dt,
qₙ(t) = (vₙ(t), x₀ⁿ, x₁ⁿ, λₙ, wₙ(t), z(t, vₙ), z(1, vₙ)), t ∈ I.
It can be shown that I(ξ_*) = 0, ξ_* = (v_*(t), x₀*, x₁*, λ_*, w_*(t)) = (1, 0, 1, 1, t² + t). Then x_*(t) = t, t ∈ [0, 1], x₀* = 0 ∈ S₀, x₁* = 1 ∈ S₁, λ_* = 1 ∈ Λ, w_*(t) = t² + t, 0 ≤ w_*(t) ≤ 2, w_*(t) ∈ W, x_*(t) ∈ G(t, λ_*).

Comments

A method for solving a boundary value problem with a parameter for ordinary differential equations under pure state variable constraints and integral constraints is developed. The imbedding principle is the basis of the proposed method. Its essence is that the original boundary value problem with a parameter under state variable and integral constraints is reduced to an equivalent free end point optimal control problem. This approach became possible due to finding a general solution of a class of the first kind Fredholm integral equations. The question of the existence of a solution to the boundary value problem with a parameter and constraints is reduced to constructing minimizing sequences and determining the infimum of the objective functional.
As an example, a solution of the Sturm–Liouville problem is presented. In the general case the optimization problem (4.33)–(4.36) may have infinitely many solutions {θ_*} ⊂ X such that J({θ_*}) = 0. Depending on the choice of the initial approximation, the minimizing sequences converge to some element of the set {θ_*}. Let θ_* = (v_*, x₀*, x₁*, λ_*), where J(θ_*) = 0, be some solution. Here x₀* = x(t₀), x₁* = x(t₁), (x₀*, x₁*) ∈ S₀ × S₁ = S, λ_* ∈ Λ, where x₀* is the initial state of the system. In the statement of the problem, the requirements (4.7), (4.8) imposed on the right hand side of the differential equation (4.1), under which the Cauchy problem has a solution, are presented. Consequently, the differential equation (4.1) with the initial state x(t₀) = x₀* at λ = λ_* ∈ Λ has a unique solution for t ∈ [t₀, t₁]. Moreover, x(t₁) = x₁* and the constraints (4.2)–(4.6) are satisfied. Irrespective of which solution is found by the iterative procedure, in the case J(θ_*) = 0 we find the corresponding solution to the boundary value problem (4.1)–(4.6).
The fundamental difference of the proposed method lies in the fact that solvability and construction of a solution to the boundary value problem with a parameter and constraints are treated together, by constructing minimizing sequences focused on applying computer technology. To check solvability and to construct a solution of the boundary value problem one has to solve the optimization problem (4.33)–(4.36), where lim_{n→∞} J(θₙ) = inf_{θ∈X} J(θ) = 0 is the solvability condition, and by the limit points θ_* of the sequence {θₙ} a solution to the boundary value problem is determined.
References:
1. Eu.A. Klokov. On some boundary value problems for second order systems // Differential Equations, 2010, Vol. 48, № 10, p. 1368–1373 (in Russian).
2. D.P. Kolmogorov, B.A. Sheika. The problem about multiple and positive eigenfunctions for a second order homogeneous quasilinear equation // Differential Equations, 2012, Vol. 48, № 8, p. 1096–1104 (in Russian).
3. A.S. Malkin, G.V. Tompson. About decompositions in eigenfunctions of the Sturm–Liouville nonlinear operator with boundary conditions depending on a spectral parameter // Differential Equations, 2012, Vol. 48, № 2, p. 171–182 (in Russian).
4. V.I. Smirnov. A course of higher mathematics. Vol. 4, P. II (6th edition). – M.: Nauka, 1981. 550 p. (in Russian).
5. A.N. Tikhonov, A.B. Vasilyeva, A.G. Sveshnikova. Differential equations. – M.: Nauka, 1985. 231 p. (in Russian).
6. B.A. Trenogin. Functional analysis. – M.: Nauka, 1980. 495 p. (in Russian).
7. S.A. Aisagaliev. Constructive theory of boundary value problems of ordinary differential equations. – Almaty: «Kazakh University» publishing house, 2015. 207 p. (in Russian).
8. S.A. Aisagaliev. The problems of qualitative theory of differential equations. – Almaty: «Kazakh University» publishing house, 2016. 397 p. (in Russian).
9. S.A. Aisagaliev, Zh.Kh. Zhunussova. To solving the boundary value problem with a parameter for ordinary differential equations // Ufimskyi matematicheskyi jurnal (ISSN 2074-1863), Vol. 8, № 2 (2016), p. 3–13 (in Russian).
Chapter V
PERIODIC SOLUTIONS OF AUTONOMOUS DYNAMICAL SYSTEMS
A method for studying periodic solutions of autonomous dynamical systems described by ordinary differential equations with state variable constraints and integral constraints is proposed. A general problem of a periodic solution is formulated as a boundary value problem with constraints. By introducing a fictitious control function, the boundary value problem is reduced to a controllability problem for dynamical systems with state variable constraints and integral constraints. Solving the controllability problem is reduced to a first kind Fredholm integral equation. A necessary and sufficient condition for the existence of a periodic solution is obtained, and an algorithm for constructing a periodic solution by limit points of minimizing sequences is developed. The scientific novelty of the obtained results is a brand new approach to the study of periodic solutions of nonlinear systems, focused on the use of modern informatics tools. The existence of a periodic solution and the construction of the solution are treated together.

Lecture 20. Statement of the problem

Consider the nonlinear autonomous system
ẋ = Ax + Bf(x), t ∈ I_* = [0, T_*], (5.1)
x(0) = x(T_*) = x₀ ∈ S ⊂ Rⁿ, (5.2)
under the pure state variable constraints
x(t) ∈ G: G = {x ∈ Rⁿ | a ≤ F(x) ≤ b, t ∈ I_*}, (5.3)
and the integral constraints
g_j(x) ≤ c_j, j = 1, …, m₁; g_j(x) = c_j, j = m₁ + 1, …, m₂, (5.4)
g_j(x) = ∫₀^{T_*} f₀ⱼ(x(t))dt, j = 1, …, m₂. (5.5)
Here A, B are n × n and n × m constant matrices, respectively; the m-dimensional vector valued function f(x) is defined and continuous with respect to x ∈ D and satisfies the conditions
|f(x) − f(y)| ≤ l|x − y|, ∀x, y ∈ D, l = const > 0,
|f(x)| ≤ c₀, ∀x ∈ D, c₀ = const > 0, G ⊂ D,
where D ⊂ Rⁿ is a bounded closed set and S is a given convex closed set. The function F(x) = (F₁(x), …, F_s(x)) is continuous with respect to x ∈ D, and the vectors a ∈ Rˢ, b ∈ Rˢ are given. T_* is a period. The values c_j, j = 1, …, m₂, are given constants, and f₀ⱼ(x), j = 1, …, m₂, are given continuous functions satisfying the conditions
|f₀ⱼ(x) − f₀ⱼ(y)| ≤ lⱼ|x − y|, ∀x, y ∈ D, |f₀ⱼ(x)| ≤ c₀ⱼ, c₀ⱼ = const > 0, j = 1, …, m₂.
The following problems are posed:
Problem 1. Find necessary and sufficient conditions for the existence of a periodic solution to the system (5.1)–(5.5).
Problem 2. Construct a periodic solution to the system (5.1)–(5.5). Find the period T_*.
Note that:
1) if the matrices A = 0, B = Iₙ, then
ẋ = f(x), t ∈ I_*, I_* = [0, T_*], (5.6)
where Iₙ is the n × n identity matrix. Therefore the results presented below hold true for the equation (5.6) under the conditions (5.2)–(5.5);
2) if the matrix A and the vector valued function Bf(x) are defined as
A = [0 −λ 0 … 0; λ 0 0 … 0; 0 0 b₃₃ … b₃ₙ; …; 0 0 bₙ₃ … bₙₙ],
Bf(x) = (X(x₁, …, xₙ); Y(x₁, …, xₙ); Z₃(x₁, …, xₙ); …; Zₙ(x₁, …, xₙ)),
then the equation (5.1) is rewritten in the form
ẋ₁ = −λx₂ + X(x₁, …, xₙ), ẋ₂ = λx₁ + Y(x₁, …, xₙ),
ẋⱼ = Σ_{s=3}^{n} bⱼₛxₛ + Zⱼ(x₁, …, xₙ), j = 3, …, n. (5.7)
In the case when the equation Δ(λ) = |λIₙ − A| = 0 has the simple roots ±iλ, there are no roots of the form ±ipλ, where p is an arbitrary integer, and X, Y, Zⱼ, j = 3, …, n, are analytic functions whose expansions begin with terms of the second order, the equation (5.1) is the Lyapunov system of the form (5.7);
3) if the matrix A and the vector valued function Bf(x) are given as
A = [0 1; −ω² 0], Bf(x) = (0; εF(x₁, ẋ₁)),
then the equation (5.1) is rewritten in the form
ẍ₁ + ω²x₁ = εF(x₁, ẋ₁), (5.8)
where ω is some real number and ε is a small parameter. The equation (5.8) is the Van der Pol equation;
4) if the matrix A and the vector valued function Bf(x) are given as
A = diag([0 1; −ω₁² 0], [0 1; −ω₂² 0], …, [0 1; −ω_{n/2}² 0]),
Bf(x) = (0; εF₁(x₁, …, xₙ, ẋ₁, …, ẋₙ); 0; εF₂(x₁, …, xₙ, ẋ₁, …, ẋₙ); …; 0; εF_{n/2}(x₁, …, xₙ, ẋ₁, …, ẋₙ)),
then the equation (5.1) is rewritten in the form
ẍ₁ + ω₁²x₁ = εF₁(x₁, …, xₙ, ẋ₁, …, ẋₙ), ẋ₁ = x₂,
ẍ₃ + ω₂²x₃ = εF₂(x₁, …, xₙ, ẋ₁, …, ẋₙ), ẋ₃ = x₄,
…,
ẍ_{n−1} + ω_{n/2}²x_{n−1} = εF_{n/2}(x₁, …, xₙ, ẋ₁, …, ẋₙ), ẋ_{n−1} = xₙ, (5.9)
where ε is a small parameter. The equation (5.9) is the Poincare system. In other words, from the equation (5.1) one can obtain, in particular, the Lyapunov system of equations, the Poincare system of equations, and the Van der Pol equation.
When no pure state variable constraints and integral constraints (5.3)–(5.5) are imposed, problems 1, 2 are posed as:
Problem 3. Find necessary and sufficient conditions for the existence of a periodic solution to the equation (5.1).
Problem 4. Construct a periodic solution to the equation (5.1). Find the period T_*.
Solutions to problems 3, 4 follow from solutions to problems 1, 2. The known results give only approximate solutions to problems 3, 4. As follows from the problem statement, one needs a periodic solution x(t) = x(t + T_*), ∀t, t ≥ 0, satisfying the conditions x(t) = x(t + T_*) ∈ G, ∀t, t ≥ 0, and such that the relations (5.4), (5.5) hold.
The main methods for studying periodic solutions of processes described by ordinary differential equations are the small parameter method, the asymptotic method of separation of motions, the phase space method, the method of point mappings, and the method of harmonic linearization. The small parameter method originates from the works of A. Poincare [1, 2] and A.M. Lyapunov [3]. The Poincare–Lyapunov theory is based on the analyticity of the right hand sides of the differential equations, but in many applied problems the equations of motion of the systems do not possess this property. A rather effective way of solving problems of nonlinear vibrations of systems with one degree of freedom was proposed by the Dutch engineer Van der Pol [4]. The technique of Van der Pol does not require analyticity of the right hand sides of the differential equations, and it can be used both for the study of steady motions and for the study of transient processes. However, this method has a purely intuitive character and is not mathematically justified.
In the 1930s a general approach to the study of equations with a small parameter was proposed by N.M. Krylov and N.N. Bogolyubov [5]. Its main content is a change of variables which allows one to separate the "fast" variables from the "slow" ones. This change of variables allows one to represent the solution as an asymptotic series whose first term coincides with the solution obtained by the Van der Pol method. The Krylov–Bogolyubov method with an addition, namely the study of systems with "slowly" varying parameters, is described in [6]. The idea of Krylov and Bogolyubov was developed in the works of E.P. Popov [7, 8], who in the 1950s developed the method of harmonic linearization, which has wide practical application. This method, more than any other, has been able to capture, in the simplest possible way, the most important specific properties of nonlinear processes in systems. The small parameter method, the method of separation of motions, and the method of harmonic linearization are approximate methods. Their advantages are versatility and simplicity, and their disadvantage is the impossibility, in many cases, of predicting or estimating the amount of error. The methods of phase space and point mappings originate from the works of M.G. Le´autey and A. Pfarr. A significant contribution to the development of this method was made by A. Poincare [2], G.D. Lloyd [9], A.A. Andronov [10], Eu.I. Neumark [11], and R.A. Nelepin [12]. It is highly effective if the order of the autonomous system is n = 2. This method was extended to nonlinear systems of high order (n ≥ 3) by R.A. Nelepin [3]. The method of cross-sections of parameter space proposed by him allows one to study a nonlinear system of high order almost as simply and almost as completely as is done via the phase plane in the case of second order systems.
The disadvantage of the method is that, firstly, the behavior of the high order system is studied in specially chosen cross-sections of the parameter space of the system; secondly, the number of revealing sections may be very small, or absent in the region of the parameter space of interest. The Lyapunov–Poincare method received development in the works of G.V. Kamenkov and B.G. Malkin. In more detail the Poincare method, the Lyapunov method, the Van der Pol method, the Krylov–Bogolyubov method, the method of harmonic linearization, and the method of cross-sections of parameter space are presented in [13]. The concepts of a limit cycle and an existence criterion for a limit cycle are contained in [14, 15]. Currently, periodic motions in linear and quasilinear systems, and especially in nonlinear systems containing a small positive parameter, are studied most fully. Unfortunately, the problems of the existence and of methods for constructing periodic solutions of ordinary differential equations still remain a little-studied area of the qualitative theory. For this reason, the development of methods for constructing periodic solutions of ordinary differential equations in general form is a topical problem.
Lecture 21. A periodic solution of a nonlinear autonomous dynamical system. Transformation

Consider the integral constraints (5.4), (5.5). Introduce the vector valued function η(t) = (η₁(t), …, η_{m₂}(t)), t ∈ I_*, by the expression
ηⱼ(t) = ∫₀ᵗ f₀ⱼ(x(τ))dτ, j = 1, …, m₂, t ∈ I_*. (5.10)
It follows from (5.10) that
η̇ = f₀(x), f₀(x) = (f₀₁(x), …, f₀ₘ₂(x)), (5.11)
where η = (η₁, …, η_{m₂}),
η(0) = 0, η(T_*) = c ∈ Q = {c̃ ∈ R^{m₂} | c̃ = (c̃₁, …, c̃_{m₂}), c̃ⱼ = cⱼ − dⱼ, j = 1, …, m₁; c̃ⱼ = cⱼ, j = m₁ + 1, …, m₂; dⱼ ≥ 0, j = 1, …, m₁}, (5.12)
where cⱼ are the constants from (5.4) and dⱼ ≥ 0 are slack variables for the inequality constraints.
Introduce the following vectors and matrices:
ξ = (x; η), A₁ = [A O_{n×m₂}; O_{m₂×n} O_{m₂×m₂}], B₁ = [B; O_{m₂×m}], B₂ = [O_{n×m₂}; I_{m₂}],
P₁ = (Iₙ, O_{n×m₂}), P₂ = (O_{m₂×n}, I_{m₂}).
Now the system of equations (5.1), (5.11) is represented in the matrix form
ξ̇ = A₁ξ + B₁f(P₁ξ) + B₂f₀(P₁ξ), t ∈ I_*, (5.13)
with the boundary conditions (see (5.12))
P₁ξ(0) = P₁ξ(T_*) = x₀ ∈ S, P₂ξ(0) = 0, P₂ξ(T_*) = c ∈ Q, (5.14)
P₁ξ(t) ∈ G, P₁ξ = x, P₂ξ = η. (5.15)
Note that the relations (5.1)–(5.5) are equivalent to (5.13)–(5.15). Then problems 1, 2 are equivalent to the problems:
Problem 1′. Find necessary and sufficient conditions for the existence of a solution to the boundary value problem (5.13)–(5.15).
Problem 2′. Construct a solution to the boundary value problem (5.13)–(5.15).
For solving the boundary value problem (5.13)–(5.15) we need the theorems about properties of the first kind Fredholm integral equation from [13, 17]. Consider the integral equation of the form
Ku ≡ ∫_{t₀}^{t₁} K(t₀, t)u(t)dt = a, t ∈ I = [t₀, t₁], (5.16)
where K(t₀, t) = ‖K_{ij}(t₀, t)‖, i = 1, …, n, j = 1, …, m, is a given n × m matrix with piecewise continuous (with respect to t) elements at fixed t₀, t₁; u(·) ∈ L₂(I, Rᵐ) is an unknown function, and a ∈ Rⁿ is a given vector.
Theorem 1. The integral equation (5.16) has a solution for any fixed a ∈ Rⁿ if and only if the n × n matrix
C(t₀, t₁) = ∫_{t₀}^{t₁} K(t₀, t)K*(t₀, t)dt (5.17)
is positive definite, where the symbol (*) denotes transposition, t₁ > t₀.
Theorem 2. Let the matrix C(t₀, t₁) defined by (5.17) be positive definite. Then the general solution to the integral equation (5.16) is defined by
u(t) = v(t) + K*(t₀, t)C⁻¹(t₀, t₁)a − K*(t₀, t)C⁻¹(t₀, t₁)∫_{t₀}^{t₁} K(t₀, t)v(t)dt, t ∈ I,
where v(·) ∈ L₂(I, Rᵐ) is an arbitrary function and a ∈ Rⁿ is an arbitrary vector.
Together with the differential equation (5.13) with the boundary conditions (5.14), consider the linear controllable system
ẏ = A₁y + B₁w₁(t) + B₂w₂(t), t ∈ I_* = [0, T_*], (5.18)
y(0) = ξ(0) = ξ₀ = (x₀; O_{m₂,1}), y(T_*) = ξ(T_*) = ξ₁ = (x₀; c), (5.19)
w₁(·) ∈ L₂(I_*, Rᵐ), w₂(·) ∈ L₂(I_*, R^{m₂}), (5.20)
where
ξ(0) = ξ₀ = (x(0); η(0)) = (x₀; O_{m₂,1}), ξ(T_*) = ξ₁ = (x(T_*); η(T_*)) = (x₀; c),
ξ₀ ∈ Rⁿ × O_{m₂,1}, ξ₁ ∈ Rⁿ × Q, P₁ξ₀ = x(0) = x₀, P₂ξ₀ = η(0) = O_{m₂,1},
P₁ξ₁ = x(T_*) = x₀, P₂ξ₁ = η(T_*) = c, x₀ ∈ S, c ∈ Q.
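The representation in theorem 2 can be checked numerically: if the matrix C and all integrals are discretized with one and the same quadrature, the identity Ku = a holds for the constructed u to machine precision, for any v and any a. A sketch with an illustrative kernel (the kernel, a and v below are assumptions of this sketch, not data from the text):

```python
import numpy as np

# Discretize I = [0, 1] with the trapezoidal rule; K(t0, t) is an
# illustrative 2x2 matrix kernel, a and v(t) are arbitrary.
N = 401
ts, h = np.linspace(0.0, 1.0, N, retstep=True)
wq = np.full(N, h)
wq[0] = wq[-1] = h / 2.0                           # quadrature weights

def K(t):                                          # n = m = 2
    return np.array([[1.0, np.sin(t)], [t, 1.0 + t * t]])

a = np.array([1.0, -2.0])
v = lambda t: np.array([np.cos(t), t])

# C(t0, t1) = int K K* dt and int K v dt, with the same quadrature
C = sum(w * K(t) @ K(t).T for w, t in zip(wq, ts))
Kv = sum(w * K(t) @ v(t) for w, t in zip(wq, ts))
Cinv = np.linalg.inv(C)

def u(t):                                          # general solution of Ku = a
    return v(t) + K(t).T @ Cinv @ a - K(t).T @ Cinv @ Kv

Ku = sum(w * K(t) @ u(t) for w, t in zip(wq, ts))  # equals a algebraically
err = np.linalg.norm(Ku - a)
```

The identity is exact at the discrete level: substituting u gives Kv + C·C⁻¹a − C·C⁻¹·Kv = a, independently of the chosen v.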
Note that the equation (5.18) is obtained from (5.13) by replacing f(P₁ξ), f₀(P₁ξ) with w₁(t), w₂(t), respectively. Let E = (B₁, B₂) denote the (n + m₂) × (m + m₂) matrix and w(t) = (w₁(t), w₂(t)) ∈ L₂(I_*, R^{m+m₂}).

The leading principal minors of the matrix W(1, 2) are
Δ₁ = 1/2 > 0, Δ₂ = |1/2 1/4; 1/4 29/24| = 13/24 > 0, Δ₃ > 0, Δ₄ = |W(1, 2)| = 61/17280 > 0.
Consequently, the matrix W(1, 2) is positive definite. The inverse matrix W⁻¹(1, 2) = (1/Δ)M, Δ = Δ₄ = 61/17280, with a symmetric 4 × 4 matrix M, as well as the matrices
W(1, t) = ∫₁ᵗ θ⁻¹(τ)B(τ)B*(τ)θ*⁻¹(τ)dτ, W(t, 2) = W(1, 2) − W(1, t), t ∈ I = [1, 2],
are computed entrywise in closed form; their entries are rational functions of t.
By the known matrices θ(t), θ(1) = θ⁻¹(1) = I₄, θ⁻¹(t), θ⁻¹(τ), W(1, 2), W⁻¹(1, 2), W(1, t), W(t, 2) define
a = θ⁻¹(2)ξ₁ − ξ₀ − ∫₁² θ⁻¹(t)μ(t)dt, λ₁(t, ξ₀, ξ₁) = B*(t)θ*⁻¹(t)W⁻¹(1, 2)a,
N₁(t) = −B*(t)θ*⁻¹(t)W⁻¹(1, 2)θ⁻¹(2),
λ₂(t, ξ₀, ξ₁) = θ(t)W(t, 2)W⁻¹(1, 2)ξ₀ + θ(t)W(1, t)W⁻¹(1, 2)θ⁻¹(t₁)ξ₁ + ∫₁ᵗ θ(t)θ⁻¹(τ)μ(τ)dτ − θ(t)W(1, t)W⁻¹(1, 2)∫₁² θ⁻¹(τ)μ(τ)dτ,
N₂(t) = −θ(t)W(1, t)W⁻¹(1, 2)θ⁻¹(t₁),
and the functions
w(t) = v(t) + λ₁(t, ξ₀, ξ₁) + N₁(t)z(2, v), y(t) = z(t) + λ₂(t, ξ₀, ξ₁) + N₂(t)z(2, v), t ∈ I = [1, 2],
where z(t) = z(t, v), t ∈ I, is a solution to the differential equation
ż = A(t)z + B(t)v(t), z(1) = 0, v(·) ∈ L₂(I, R²).
Note that
z(t) = ∫₁ᵗ θ(t)θ⁻¹(τ)B(τ)v(τ)dτ, z(τ) = ∫₁^τ θ(τ)θ⁻¹(η)B(η)v(η)dη.
Optimization problem. For this example the optimization problem (6.16)–(6.20) has the form
J(v, u, p, x₀, x₁, d) = ∫₁² {|w₁(t) − u₁(t)|² + |w₂(t) − u₂(t)|² + |p₁(t) − y₁(t)|² + |p₂(t) − y₂(t)|² + |w₁(t) − ∫₁² e^{tτ}y₂(τ)dτ|² + |w₂(t) − ∫₁² e^{t²τ}y₁(τ)dτ|²}dt → inf
under the conditions
ż = A(t)z + B(t)v(t), z(1) = 0, v(·) = (v₁(·), v₂(·)) ∈ L₂(I, R²),
u₁(t) ∈ U₁ = {u₁(·) ∈ L₂(I, R¹) | e²/2 ≤ u₁(t) ≤ (3e⁴ + e²)/8, t ∈ I},
u₂(t) ∈ U₂ = {u₂(·) ∈ L₂(I, R¹) | e² ≤ u₂(t) ≤ (e⁴/16)(7e⁴ − 3), t ∈ I},
p₁(t) ∈ V₁ = {p₁(·) ∈ L₂(I, R¹) | 1 ≤ p₁(t) ≤ 2, t ∈ I},
p₂(t) ∈ V₂ = {p₂(·) ∈ L₂(I, R¹) | 0 ≤ p₂(t) ≤ 3/2, t ∈ I},
Ex₀ + Fx₁ = e, d ∈ {d ∈ R¹ | d ≥ 0} = D,
where
w(t) = (w₁(t), w₂(t)), w(·) ∈ L₂(I, R²), v(t) = (v₁(t), v₂(t)), v(·) ∈ L₂(I, R²),
S = {(x₀, x₁) ∈ R⁴ | Ex₀ + Fx₁ = e}.
Set
X = L₂(I, R¹) × L₂(I, R¹) × U₁ × U₂ × V₁ × V₂ × S × D,
H = L₂(I, R¹) × L₂(I, R¹) × L₂(I, R¹) × L₂(I, R¹) × L₂(I, R¹) × L₂(I, R¹) × R⁴ × R¹, m₁ = 1, n = 2.
Since the functions
w₁(t) = v₁(t) + θ₁₁(t)x₀ + θ₂₁(t)x₁ + θ₃₁(t)d + μ₁₁(t) + N₁₁(t)z(2, v₁, v₂), t ∈ I,
w₂(t) = v₂(t) + θ₁₂(t)x₀ + θ₂₂(t)x₁ + θ₃₂(t)d + μ₁₂(t) + N₁₂(t)z(2, v₁, v₂),
y₁(t) = z₁(t, v₁, v₂) + E₁₁(t)x₀ + E₂₁(t)x₁ + E₃₁(t)d + μ₃₁(t) + N₂₁(t)z(2, v), t ∈ I,
y₂(t) = z₂(t, v₁, v₂) + E₁₂(t)x₀ + E₂₂(t)x₁ + E₃₂(t)d + μ₃₂(t) + N₂₂(t)z(2, v), t ∈ I,
y₃(t) = z₃(t, v) + E₁₃(t)x₀ + E₂₃(t)x₁ + E₃₃(t)d + μ₃₃(t) + N₂₃(t)z(2, v), t ∈ I,
y₄(t) = z₄(t, v) + E₁₄(t)x₀ + E₂₄(t)x₁ + E₃₄(t)d + μ₃₄(t) + N₂₄(t)z(2, v), t ∈ I,
we have
F₀(q(t), t) = |w₁ − u₁|² + |w₂ − u₂|² + |p₁ − y₁|² + |p₂ − y₂|²,
F₁(q, t) = |w₁ − ∫₁² e^{tτ}y₂(τ)dτ|² + |w₂ − ∫₁² e^{t²τ}y₁(τ)dτ|²,
where
θ̄₁₁(t) = θ₁₁(t) − ∫₁² e^{tτ}E₁₂(τ)dτ, θ̄₂₁(t) = θ₂₁(t) − ∫₁² e^{tτ}E₂₂(τ)dτ,
θ̄₃₁(t) = θ₃₁(t) − ∫₁² e^{tτ}E₃₂(τ)dτ, μ̄₂₁(t) = μ₁₁(t) − ∫₁² e^{tτ}μ₃₂(τ)dτ,
N̄₁₁(t) = N₁₁(t) − ∫₁² e^{tτ}N₂₂(τ)dτ,
w̄₁(t) = v₁(t) + θ̄₁₁(t)x₀ + θ̄₂₁(t)x₁ + θ̄₃₁(t)d + μ̄₂₁(t) + N̄₁₁(t)z(2, v),
w₁(t) − ∫₁² e^{tτ}y₂(τ)dτ = w̄₁(t) − ∫₁² e^{tτ}z₂(τ, v)dτ,
w₂(t) − ∫₁² e^{t²τ}y₁(τ)dτ = w̄₂(t) − ∫₁² e^{t²τ}z₁(τ, v)dτ.
Partial derivatives of the function F(q, t) = F₀(q, t) + F₁(q, t) are computed by (6.22), and the sequence {θₙ} ⊂ X is generated by the algorithm (6.30):
v₁ⁿ⁺¹ = v₁ⁿ − αₙJ′_{v₁}(θₙ), v₂ⁿ⁺¹ = v₂ⁿ − αₙJ′_{v₂}(θₙ),
u₁ⁿ⁺¹ = P_{U₁}[u₁ⁿ − αₙJ′_{u₁}(θₙ)], u₂ⁿ⁺¹ = P_{U₂}[u₂ⁿ − αₙJ′_{u₂}(θₙ)],
p₁ⁿ⁺¹ = P_{V₁}[p₁ⁿ − αₙJ′_{p₁}(θₙ)], p₂ⁿ⁺¹ = P_{V₂}[p₂ⁿ − αₙJ′_{p₂}(θₙ)],
x₀ⁿ⁺¹ = P_S[x₀ⁿ − αₙJ′_{x₀}(θₙ)], x₁ⁿ⁺¹ = P_S[x₁ⁿ − αₙJ′_{x₁}(θₙ)],
d_{n+1} = P_D[dₙ − αₙJ′_d(θₙ)], n = 0, 1, 2, …, αₙ = 1/K = const > 0,
where K is the Lipschitz constant of the gradient.
Note that the projection onto the affine set S is computed explicitly:
(x₀ⁿ⁺¹, x₁ⁿ⁺¹) = (x₀ⁿ − αₙJ′_{x₀}(θₙ); x₁ⁿ − αₙJ′_{x₁}(θₙ)) − (E; F)*{(E, F)(E; F)*}⁻¹[(E, F)(x₀ⁿ − αₙJ′_{x₀}(θₙ); x₁ⁿ − αₙJ′_{x₁}(θₙ)) − e],
where
(E, F)(E; F)* = [4 3; 3 15], {(E, F)(E; F)*}⁻¹ = [15/51 −3/51; −3/51 4/51].

Constructing a minimizing sequence.
1. Choose an initial guess θ₀ = (v₀¹, v₀², u₀¹, u₀², p₀¹, p₀², x₀⁰, x₁⁰, d₀) ∈ X. In particular,
v₀¹(t) = sin t, v₀²(t) = cos t,
u₀¹(t) = [e²/2 + (3e⁴ + e²)/8]/2, u₀²(t) = [e² + (e⁴/16)(7e⁴ − 3)]/2,
p₀¹(t) = 3/2, p₀²(t) = 3/4,
x₀⁰ = (x₁(1) = 3, x₂(1) = 1), x₁⁰ = (x₁(2) = 1/2, x₂(2) = 0), (x₀⁰, x₁⁰) ∈ S.
2. Find a solution to the differential equation $\dot z_0 = A(t)z_0 + B(t)v_0(t)$, $z_0(1) = 0$, $v_0(t) = (v_0^1(t), v_0^2(t))$. As a result we have $z_0(t) = z_0(t, v_0^1, v_0^2)$, $t \in I = [1; 2]$.
3. Compute the value
$$
\psi_0(2) = \int_1^2 \frac{\partial F(q_0(t), t)}{\partial z_0(t_1)}\,dt, \qquad
q_0(t) = (v_0^1, v_0^2, u_0^1, u_0^2, p_0^1, p_0^2, x_0^0, x_1^0, d_0, z_0(t), z_0(t_1)).
$$
Solve the differential equation
$$
\dot\psi_0(t) = \frac{\partial F(q_0(t), t)}{\partial z_0} - A^*(t)\psi_0(t), \qquad
\psi_0(2) = \int_1^2 \frac{\partial F(q_0(t), t)}{\partial z_0(t_1)}\,dt,
$$
and define the function $\psi_0(t)$, $t \in [1, 2]$.
4. Calculate the partial derivatives for the initial guess $\theta_0 \in X$. After that the values
$$
\frac{\partial F(q_0(t), t)}{\partial v}, \quad
\frac{\partial F(q_0(t), t)}{\partial u}, \quad
\frac{\partial F(q_0(t), t)}{\partial p}, \quad
\int_1^2 \frac{\partial F(q_0(t), t)}{\partial x_0}\,dt, \quad
\int_1^2 \frac{\partial F(q_0(t), t)}{\partial x_1}\,dt, \quad
\int_{t_0}^{t_1} \frac{\partial F(q_0(t), t)}{\partial d}\,dt
$$
are known.
5. Define
$$
\begin{aligned}
v_1^1 &= P_V[v_0^1 - \alpha_0 J'_{v_1}(\theta_0)], & v_1^2 &= P_V[v_0^2 - \alpha_0 J'_{v_2}(\theta_0)],\\
u_1^1 &= P_U[u_0^1 - \alpha_0 J'_{u_1}(\theta_0)], & u_1^2 &= P_U[u_0^2 - \alpha_0 J'_{u_2}(\theta_0)],\\
p_1^1 &= P_{V_1}[p_0^1 - \alpha_0 J'_{p_1}(\theta_0)], & p_1^2 &= P_{V_2}[p_0^2 - \alpha_0 J'_{p_2}(\theta_0)],\\
x_0^1 &= P_S[x_0^0 - \alpha_0 J'_{x_0}(\theta_0)], & x_1^1 &= P_S[x_1^0 - \alpha_0 J'_{x_1}(\theta_0)],\\
d_1 &= P_D[d_0 - \alpha_0 J'_d(\theta_0)], & \alpha_0 &= \mathrm{const} > 0.
\end{aligned}
$$
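Step 2 amounts to integrating a linear system driven by the current control $v_n(t)$. A minimal sketch, assuming placeholder matrices `A(t)` and `B(t)` (the actual matrices of example (4.40)–(4.43) are not restated in this section); only the integrator itself is generic:

```python
import numpy as np

# Placeholder system matrices: A(t) = 0, B(t) = I are assumptions for the
# sketch, not the data of example (4.40)-(4.43).
def A(t):
    return np.zeros((2, 2))

def B(t):
    return np.eye(2)

def v0(t):
    # initial guess from step 1: v_0(t) = (sin t, cos t)
    return np.array([np.sin(t), np.cos(t)])

def solve_linear_ode(t0, t1, n_steps=1000):
    """Classical RK4 integration of z' = A(t) z + B(t) v0(t), z(t0) = 0."""
    h = (t1 - t0) / n_steps
    t, z = t0, np.zeros(2)
    f = lambda t, z: A(t) @ z + B(t) @ v0(t)
    for _ in range(n_steps):
        k1 = f(t, z)
        k2 = f(t + h / 2, z + h / 2 * k1)
        k3 = f(t + h / 2, z + h / 2 * k2)
        k4 = f(t + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

z_end = solve_linear_ode(1.0, 2.0)
print(z_end)   # with A = 0, B = I this is the integral of v0 over [1, 2]
```

With the placeholder $A \equiv 0$, $B \equiv I$ the result is simply $\int_1^2 v_0(t)\,dt = (\cos 1 - \cos 2,\ \sin 2 - \sin 1)$, which makes the sketch easy to verify.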
6. Repeat steps 2–5. As follows from Theorem 4, the constructed sequence is minimizing, i.e.
$$
\lim_{n \to \infty} J(\theta_n) = J_* = \inf_{\theta \in X} J(\theta) = J(\theta_*),
$$
where $\theta_* = (v_1^*(t), v_2^*(t), u_1^*(t), u_2^*(t), p_1^*(t), p_2^*(t), x_0^*, x_1^*, d_*) \in X$ is a solution to the optimization problem. If $J(\theta_*) = 0$, then $y_1^*(t) = x_1^*(t)$, $y_2^*(t) = x_2^*(t)$, $t \in [1; 2]$, is a solution to the boundary value problem (4.40)–(4.43). For this example the following results are found:
$$
x_1^*(t) = t, \quad x_2^*(t) = \frac{t^2}{2} - \frac{1}{2}, \quad t \in [1; 2], \quad
x_0^* = (1; 0), \quad x_1^* = (2; 3/2), \quad d_* = 1/2,
$$
$$
u_1^*(t) = \frac{1}{2}\Big[ \frac{e^t}{t}\big(3t^2 - 4t + 2\big) - \frac{e^t}{t^2}\big(2t - 2\big) \Big], \qquad
u_2^*(t) = \frac{1}{t^4}\Big[ \big(2t^2 - 1\big)e^t + t^2 - 1 \Big], \quad t \in [1; 2].
$$
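The overall scheme of steps 1–6 is gradient projection. A self-contained toy illustration of the iteration $\theta_{n+1} = P_X[\theta_n - \alpha_n J'(\theta_n)]$ follows; the quadratic cost and box constraint set are illustrative assumptions, not the book's functional $J$, whose derivatives come from (6.22):

```python
import numpy as np

# Toy problem: minimize J(theta) = |theta - c|^2 over the box [0,1]^2.
# The unconstrained minimizer c lies outside the box, so the projected
# iteration must converge to the projection of c onto the box.
c = np.array([1.5, -0.3])

def grad_J(theta):
    return 2.0 * (theta - c)

def P_box(theta):
    # projection onto the admissible set X = [0,1]^2
    return np.clip(theta, 0.0, 1.0)

theta = np.array([0.5, 0.5])     # step 1: initial guess theta_0 in X
alpha = 0.1                      # constant step, alpha_n = const > 0
for n in range(200):             # steps 2-6: iterate the projection scheme
    theta = P_box(theta - alpha * grad_J(theta))

print(theta)                     # -> approx [1. 0.], i.e. P_box(c)
```

Because the toy cost is strongly convex, the sequence $\{J(\theta_n)\}$ is minimizing and $\theta_n$ converges to the constrained minimizer, mirroring the conclusion $\lim_{n\to\infty} J(\theta_n) = J(\theta_*)$ above.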
Comments
A necessary and sufficient condition for solvability of the boundary value problem for linear integro-differential equations with state variable constraints and integral constraints has been obtained. A method for constructing a solution to the boundary value problem with constraints by generating minimizing sequences has been developed. The method is based on the imbedding principle, which has been developed by constructing a general solution to the first kind Fredholm integral equation. The fundamental difference of the proposed method is that the original boundary value problem is first embedded into a controllability problem with a fictitious control function from a functional space, and then reduced to a free end point optimal control problem. Solvability and construction of a solution to the boundary value problem are thus treated together, by solving a single optimization problem. The development of a general theory of boundary value problems for linear integro-differential equations with complicated boundary conditions, state variable constraints and integral constraints is a topical problem with numerous applications in the natural sciences, economics and ecology.
References:
1. Ya.V. Bykov. On some problems of integro-differential equations. Frunze, 1957. – 400 p. (in Russian).
2. A.M. Samoilenko, O.A. Boichuk, S.A. Krivosheya. Boundary value problems of linear integro-differential equations with a singular // Ukr. mat. zhurn. 1996. Vol. 48, № 11. P. 1576–1579. (in Ukrainian).
3. D.S. Dzhumabayev, E.A. Bakirova. About the signs of unique solvability of the linear two-point boundary value problem for systems of integro-differential equations // Differential Equations. 2013. Vol. 49, № 9. P. 1125–1140. (in Russian).
4. S.A. Aisagaliev. The constructive theory of boundary value problems of integro-differential equations with state variable constraints // Trudy Mezhdunarodnoi nauchnoi konferencii «Problemy matematiki i informatiki v XXI veke». Vestnik KGNU. Ser. 3, issue 4, 2000. P. 127–133. (in Russian).
5. S.A. Aisagaliev, T.S. Aisagaliev. Methods for solving boundary value problems. – Almaty: «Kazakh University» publishing house, 2002. – 348 p. (in Russian).
6. S.A. Aisagaliev. Controllability of some system of differential equations // Differential Equations. 1991. Vol. 27, № 9. P. 1475–1486.
7. S.A. Aisagaliev, A.P. Belogurov. Controllability and speed of the process described by a parabolic equation with bounded control // Siberian Mathematical Journal. 2011. Vol. 53, № 1. P. 20–36.
8. S.A. Aisagaliev. A general solution to a class of integral equations // Mathematical Journal. 2005. Vol. 5, № 4. P. 17–34. (in Russian).
9. S.A. Aisagaliev, A.A. Kabidoldanova. On the optimal control of linear systems with linear performance criterion and constraints // Differential Equations. 2012. Vol. 48, № 6. P. 832–844.
10. S.A. Aisagaliev, M.N. Kalimoldayev. Constructive method for solving boundary value problems of ordinary differential equations // Differential Equations. 2015. Vol. 51, № 2. P. 147–160. (in Russian).
11. S.A. Aisagaliev. Constructive theory of boundary value problems of ordinary differential equations. – Almaty: «Kazakh University» publishing house, 2015. – 207 p. (in Russian).
12. S.A. Aisagaliev. The problems of qualitative theory of differential equations. – Almaty: «Kazakh University» publishing house, 2016. – 397 p. (in Russian).
Educational issue

Aisagaliev Serikbai
LECTURES ON QUALITATIVE THEORY OF DIFFERENTIAL EQUATIONS
Educational manual

Cover design: G. Kaliyeva
Cover design used photos from the site www.depositphotos_22141915-stock-photo-mathematics-abstraction.com

IB №12080
Signed for publishing 13.06.2018. Format 70x100 1/12. Offset paper. Digital printing. Volume 16,3 printer's sheet. 80 copies. Order №3815.
Publishing house «Qazaq University», Al-Farabi Kazakh National University, 71 Al-Farabi, 050040, Almaty.
Printed in the printing office of the «Qazaq University» publishing house.