Ill-Posed Problems In Natural Sciences
Ill-Posed Problems in Natural Sciences Proceedings of the International Conference Held in Moscow - August 19-25, 1991
Editor-in-Chief Andrei N. Tikhonov
VSP
UTRECHT, THE NETHERLANDS    TOKYO, JAPAN
Editors: A.S. Leonov, A.I. Prilepko, I.A. Vasin, V.A. Vatutin, A.G. Yagola
TVP
SCIENCE PUBLISHERS, MOSCOW, RUSSIA
VSP BV, P.O. Box 346, 3700 AH Zeist, The Netherlands
TVP Sci. Publ., Vavilov st. 42, 117966 Moscow GSP-1, Russia
© 1992 VSP BV / TVP Sci. Publ.
First published in 1992 ISBN 90-6764-141-3
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
CIP-DATA KONINKLIJKE BIBLIOTHEEK, DEN HAAG. Ill-posed problems in natural sciences: Proceedings of the International Conference / eds.: A.N. Tikhonov et al. - Utrecht: VSP / Moscow: TVP Sci. Publ. ISBN 90-6764-141-3 bound. NUGI8011. Subject heading: natural sciences
Printed in Russia by Contact, Moscow
CONTENTS Preface
xi
Part 1. THEORY AND METHODS OF SOLVING ILL-POSED PROBLEMS Generalized maximum likelihood method and its application for solving ill-posed problems V.Ya. Arsenin and A.V. Krianev
3
Iterative method for the solution of nonlinear ill-posed problems and their applications A.B. Bakushinskii
13
Uniform regularization of first kind operator equations in Hilbert space A.V. Cherepakhin
18
On the discretization error in regularized projection methods with parameter choice by discrepancy principle U. Hamarik
24
Regularization and uniqueness of solutions of systems of Volterra nonlinear integral equations of the first kind with two arguments M.I. Imanaliev and A.A. Asanov
29
Pointwise residual method for solving systems of linear algebraic equations and inequalities A.Yu. Ivanitskii, F.P. Vasil'ev, and V.A. Morozov
33
Optimality region of the m-step Lavrentiev method T. Kiho
44
Multilevel iterative methods for ill-posed problems J.T. King
48
The method of minimal pseudoinversed matrix. Basic statements A.S. Leonov
57
Variational algorithms with a posteriori choice of the regularization parameter for solving ill-posed extremal problems A.S. Leonov
63
Tikhonov's approach for constructing regularizing algorithms A.S. Leonov and A.G. Yagola
71
Regularization algorithms for solving ill-posed criterion problems A.M. Levin
84
On the numerical solution of the ill-posed problems of construction of approximate solutions of singular integral equations of second kind A.F. Matveev
88
Regularization methods for ill-posed boundary problems I.V. Mel'nikova
93
Modified Tikhonov regularization for nonlinear ill-posed problems leading to optimal convergence rates A. Neubauer
104
On dynamical restoration of parameters of elliptic systems Yu.S. Osipov and A.I. Korotkii
108
Extreme value estimation, a method for regularizing ill-posed inversion problems P. Paatero
118
Two iterative schemes for solving linear non-necessarily well-posed problems in Banach spaces R. Plato
134
On the regularization parameter choice for ill-posed problems with a quasisolution T. Raus
144
An operator method of regularization of nonlinear monotone ill-posed problems I. Rjazanceva
149
On the numerical inversion of the Laplace transform with boundary constraints G. Rodriguez and S. Seatzu
155
Regularization of difference schemes A.A. Samarskii and P.N. Vabishchevich
166
Ill-posed problems of on-line conditionally optimal filtering I.N. Sinitsyn
174
The estimation of the error of the regularization method for ill-posed problems with noninjective operator V.P. Tanana and T.N. Rudakova
184
Some numerical methods of parameter identification in nonlinear models O. Vaarmann
191
Identification of filtration coefficient G. Vainikko
202
Ill-posed problems and iterative approximation of fixed points of pseudo-contractive mappings V.V. Vasin
214
Part 2. INVERSE PROBLEMS IN MATHEMATICAL PHYSICS Nonlinear inverse problems of acoustic potential G.V. Alekseyev and A.Yu. Chebotarev
227
A measure theoretic approach for studying inverse problems of the heat equation and the wave equation G. Anger
233
New methods and results in multidimensional inverse problems for kinetic equations Yu.E. Anikonov
244
An inverse problem originating from magnetohydrodynamics. Some numerical experiments E. Beretta, E. Fischer, and M. Vogelius
254
Inverse conductivity problem in the two-dimensional case V.G. Cherednichenko and G.V. Veryovkina
270
Inverse problems for a system of nonlinear partial differential equations A.M. Denisov
277
An inverse problem in three-dimensional linear thermoviscoelasticity of Boltzmann type M. Grasselli
284
An inverse boundary value problem for the equation of the plate in the Love-Kirchhoff theory M. Ikehata
300
Inverse problems for the nonstationary kinetic transport equation A.L. Ivankov
307
Optimization methods of solving inverse problems of geoelectrics S.I. Kabanikhin and A.L. Karchevsky
312
On passage to the limit in the inverse problems for nondivergence parabolic equations V.L. Kamynin
326
One method of solution of the inverse problems for layered media M.M. Lavrent'ev, Jr.
333
Identification problems for integrodifferential equations A. Lorenzi
342
The Fredholm solvability of inverse problems for abstract differential equations D.G. Orlovskii
367
Regularized traces of singular differential operators of higher orders A.S. Pechentsov
375
Inverse problems for evolution equations A.I. Prilepko, A.B. Kostin, and I.V. Tikhonov
379
Inverse problems in mathematical physics A.I. Prilepko, D.G. Orlovskii, and I.A. Vasin
390
Stability estimates for inverse problems of geoelectrics V.G. Romanov
408
Inverse problems for elliptic differential operators V.A. Sadovnichii, V.V. Dubrovskii, and A.V. Nagorny
417
Inverse boundary value problems in viscous fluid dynamics I.A. Vasin
423
On some inverse problems for the time-dependent transport equation N.P. Volkov
431
Determination of constant parameters in some semilinear parabolic equations M. Yamamoto
439
Part 3. APPLICATIONS A numerical method for solving inverse problem of deep magnetotelluric quasi-layered media sounding I.S. Barashkov and V.I. Dmitriev
449
On numerical methods of solving inverse problems of electrical prospecting S.M. Bersenev
454
The solution stability and restrictions on the space scatterer spectrum in the two-dimensional monochromatic inverse scattering problem V.A. Burov and O.D. Rumiantseva
463
Inverse problems of astrophysics A.M. Cherepashchuk, A.V. Goncharskii, and A.G. Yagola
472
Methods of solving the ion exchange inverse problem in counter-current columns A.M. Denisov, S.R. Taikina, and A.V. Chanov
482
Numerical methods for optical thin films at the National Research Council of Canada J.A. Dobrowolski
487
Inverse scattering and synthesis problems in the diffraction theory Ju.A. Eremin and A.G. Sveshnikov
504
The construction of numerical algorithms for ill-posed problems with random errors in initial data A.M. Fedotov
515
A unified approach to the creation of knowledgebased systems for identification of some classes of nonstationary processes V.Ya. Galkin and V.A. Karpenko
525
Inverse problems in vibrational spectroscopy I.V. Kochikov, G.M. Kuramshina, Yu.A. Pentin, and A.G. Yagola
535
Inverse problems in electroencephalography and their numerical solving Yu.M. Koptelov and E.V. Zakharov
543
A numerical method of crack determination by the boundary integral equation method N. Nishimura
553
Regularization method in nonstationary inverse scattering problem L. Nizhnik and R. Romanenko
563
Numerical methods for inverse scattering problems in two-dimensional scalar field K. Onishi
568
On the hypersingular first kind integral equations for problems of electromagnetic scattering from screen surfaces E.V. Zakharov and I.V. Halejeva
584
List of Contributors
595
National Organizing Committee
596
PREFACE
This volume contains the Proceedings of the First International Conference "Ill-Posed Problems in Natural Sciences", which was held in Moscow during the week of August 19-25, 1991. The conference was organized by Moscow State University and the Keldysh Institute of Applied Mathematics and sponsored by several institutes from Moscow, Novosibirsk, Ekaterinburg, and Krasnoyarsk. Great interest in ill-posed problems and their applications led to a large number of participants: USSR: 210, Japan: 8, USA: 4, Italy: 3, Germany: 2, Australia: 1, Austria: 1, Canada: 1, Finland: 1, France: 1, Sweden: 1, Czechoslovakia: 1.
The Conference had five plenary sessions with the following plenary lectures:
Monday, August 19: A.N. Tikhonov (USSR), "Problems with approximate information"; M.M. Lavrentiev (USSR), "Mathematical problems of tomography"; A.A. Samarsky, P.N. Vabishchevich (USSR), "Regularization of difference schemes".
Tuesday, August 20: V.G. Romanov (USSR), "Stability in inverse problems of geoelectrics"; A.V. Goncharsky (USSR), "Computational diagnostics and image processing"; A.G. Ramm (USA), "Property C and inverse scattering problems".
Wednesday, August 21: A.S. Leonov, A.G. Yagola (USSR), "Tikhonov's approach for constructing regularizing algorithms"; H.W. Engl, A. Neubauer (Austria), "Convergence rates for Tikhonov regularization of linear and nonlinear ill-posed problems"; A.B. Bakushinsky (USSR), "Iterative methods for solving nonlinear ill-posed problems and applications"; A.V. Goncharsky, A.M. Cherepashchuk, A.G. Yagola (USSR), "Inverse problems of astrophysics".
Thursday, August 22: G. Vainikko (Estonia), "Discretization and regularization of ill-posed problems with non-compact operators"; A.I. Prilepko (USSR), "Inverse nonlocal problems and prediction-controllable systems of equations in mathematical physics"; G. Anger (Germany), "A measure theoretic approach for studying inverse problems relative to the wave equation and the heat equation"; A. Lorenzi (Italy), "Identification problems for integrodifferential equations".
Friday, August 23: M.Z. Nashed (USA), "Ill-posed problems and signal processing"; A.M. Fedotov (USSR), "Constructing the numerical algorithms for ill-posed problems"; V.V. Vasin (USSR), "Ill-posed problems and iterative approximation of fixed points of pseudocontractive mappings".
In addition to the plenary sessions, 14 sections were organized:
Section 1. Inverse problems in mathematical physics. (Co-chairmen: M.M. Lavrentiev, A.I. Prilepko).
Section 2. Inverse problems for differential equations. (Co-chairmen: Yu.E. Anikonov, V.G. Romanov).
Section 3. Inverse spectral problems and related topics. (Co-chairmen: A.M. Denisov, V.G. Cherednichenko).
Section 4. Statistical methods. (Co-chairmen: A.V. Krianev, A.M. Fedotov).
Section 5. Iterative methods. (Co-chairmen: A.B. Bakushinsky, G.M. Vainikko).
Section 6. Variational methods. (Co-chairmen: J.T. King, V.P. Tanana, F.P. Vasiliev).
Section 7. Numerical methods. (Co-chairmen: L. Elden, A.S. Leonov, A.A. Samarsky).
Section 8. Special problems of the regularization theory. (Co-chairmen: A.D. Iskenderov, V.A. Morozov, M.Z. Nashed).
Section 9. Inverse problems in the control theory. (Co-chairmen: Yu.S. Osipov, V.V. Vasin).
Section 10. Tomography and computational diagnostics. (Co-chairmen: A.V. Goncharsky, V.P. Palamodov).
Section 11. Inverse problems of electro-, hydro-, and gas dynamics. (Co-chairmen: E.V. Zakharov, A.G. Sveshnikov).
Section 12. Inverse problems in geophysics and astrophysics. (Co-chairmen: V.I. Dmitriev, V.N. Strakhov, A.M. Cherepashchuk).
Section 13. Inverse heat transfer problems. (Co-chairmen: O.M. Alifanov, E.A. Artyukhin).
Section 14. Inverse problems in optics and spectroscopy. (Co-chairmen: A.V. Tikhonravov, A.G. Yagola).
Some of the lectures and talks presented at the conference are included in this book. Regretfully, not all plenary lecturers prepared their lectures for publication, and space limitations did not allow all authors of communications to be represented in this volume. We wish to thank all contributors and participants, who made the conference a success, and TVP Science Publishers for making the publication of this volume possible. We greatly appreciated the presence of so many distinguished scientists in Moscow. We hope that such conferences will become regular.
Professor A.N. Tikhonov, Chairman of the National Organizing Committee
Professor A.G. Yagola, Secretary of the National Organizing Committee
Part One
THEORY AND METHODS OF SOLVING ILL-POSED PROBLEMS
Ill-Posed Problems in Natural Sciences, pp. 3-12. A. Tikhonov (Ed.). 1992 VSP/TVP
GENERALIZED MAXIMUM LIKELIHOOD METHOD AND ITS APPLICATION FOR SOLVING ILL-POSED PROBLEMS
V.Ya. ARSENIN and A.V. KRIANEV
Moscow Engineering Physics Institute, Kashirskoe shosse, 31, 115409 Moscow, Russia
ABSTRACT When solving applied problems one can often use certain prior information on the desired solution. Two classical, commonly used methods allowing one to employ additional prior information are the minimax method and the Bayes method. In this paper, one more method for using prior information is proposed which is a generalization of the maximum likelihood method. This method, named the "generalized maximum likelihood method" (GMLM), enables one to make use of prior information on the solution in stochastic and deterministic forms. The GMLM can be applied for the estimation of solutions of systems of algebraic equations (linear and nonlinear). Additional prior information on the desired vector, besides the main system of equations, is supposed to be present; this additional prior information can be given in various forms. The main problem to be solved with the aid of the GMLM is to find optimal estimators of the desired vector taking into account the additional prior information. A great many applied problems are reducible to systems of algebraic equations, among them problems of processing and interpretation of complicated physical experiments. Some of these problems, for example, inverse problems of mathematical physics, are ill-posed and therefore unstable even with respect to small errors in the measured quantity. Effective methods and algorithms based on the GMLM are proposed which yield optimal approximate solutions of input systems; these solutions are stable with respect to deviations of input data and prior assumptions. The efficiency of the GMLM is illustrated with examples of certain applied problems. A robust method for solving unstable problems, including computerized tomography problems, based on the GMLM, is proposed.
1. INTRODUCTION
Approximate solution of many applied problems, including the problems of mathematical physics and the interpretation of results of a physical experiment, reduces to systems of equations of the form
Au = f_T,   (1)
where A is a linear or nonlinear operator acting from a Euclidean space E^m into another Euclidean space E^n and u is an unknown vector. When solving certain problems, a measured vector f = f_T + ξ is known instead of the exact vector f_T, where ξ is an error vector, and the problem is to find an approximate solution of system (1). In many problems the vector ξ is random and its distribution law, or some characteristics of this law, are known. Under these conditions it is natural to use the stochastic approach for finding an approximate solution of the system. The stochastic approach takes into account the existing information on the distribution law of the error vector and enables one to get an approximate solution with minimum possible deviation from the desired exact solution. The stochastic approach for solving ill-posed problems and, among them, system (1), was used in (Tikhonov and Arsenin, 1986; Lavrentiev and Vasiliev, 1966; Turchin et al., 1970; Jukovsky, 1972; Muravieva, 1973; Meleshko, 1980; Kuks and Olman, 1972; Lavrentiev and Fedotov, 1982; Fedotov, 1982; Fedotov, 1988; Hocking et al., 1976; Hoerl, 1962; Hoerl and Kennard, 1970; Marquardt, 1970; Rao, 1976) and many other papers. Different methods for approximately solving ill-posed problems by the stochastic approach employ different kinds of prior information about the desired solution. Basically, there are two methods taking into account the prior information when solving ill-posed problems using the stochastic approach: the Bayes method and the minimax method. The Bayes method treats the desired solution u as a random vector, and the prior information is represented by the probability density function of the random vector u. As an approximate solution of problem (1) (the Bayes estimator u_B), one of the characteristics of the posterior distribution (mean, mode, etc.) is taken. If the system (1) is linear (A is an n x m matrix) and both the distribution of ξ and the prior distribution are normal (N(0, σ_ξ² K_ξ) and N(u_0, σ_a² K_a), respectively), then the Bayes estimator is linear (when the mode of the posterior distribution is taken):
u_B = (A^T K_ξ^{-1} A + (σ_ξ²/σ_a²) K_a^{-1})^{-1} (A^T K_ξ^{-1} f + (σ_ξ²/σ_a²) K_a^{-1} u_0).   (2)
Thus, the Bayes method makes use of the prior information about the desired solution and, moreover, interprets this solution as a random vector. To our mind, this restricts the applicability of the Bayes method to solving applied problems. The Bayes method has been used for solving ill-posed problems in (Lavrentiev and Vasiliev, 1966; Turchin et al., 1970; Jukovsky, 1972; Muravieva, 1973; Meleshko, 1980).
The minimax method allows one to utilize prior information in a deterministic form, given as the assumption that the desired solution belongs to a prior set: u ∈ R_a. The minimax estimator u_M is given by
u_M = arg min_{û∈W} max_{u∈R_a} E ||û - u||,   (3)
where E is the expectation and W is a given class of estimators (usually the class of linear estimators is chosen). The "narrower" the prior set, the "more extensive" the prior information and the "closer" the minimax estimator u_M to the desired solution u. The minimax method gives the best estimator in the class W under the assumption that the desired solution u is the worst possible element of the prior set R_a (when the norm on the right-hand side of (3) is fixed). The minimax method has been used for solving ill-posed problems in (Kuks and Olman, 1972; Lavrentiev and Fedotov, 1982; Fedotov, 1982; Fedotov, 1988).
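For the linear Gaussian case, estimator (2) is straightforward to evaluate numerically. The sketch below is only an illustration of formula (2); the matrix A, the covariances K_ξ, K_a, the variances σ_ξ, σ_a, the prior mean u_0 and the data f are invented test quantities, not taken from the paper.

    import numpy as np

    def bayes_estimator(A, f, K_xi, K_a, sigma_xi, sigma_a, u0):
        """Linear Bayes estimator (2): noise N(0, sigma_xi^2 K_xi),
        prior N(u0, sigma_a^2 K_a)."""
        t = (sigma_xi / sigma_a) ** 2            # ratio sigma_xi^2 / sigma_a^2
        Kxi_inv = np.linalg.inv(K_xi)
        Ka_inv = np.linalg.inv(K_a)
        lhs = A.T @ Kxi_inv @ A + t * Ka_inv     # normal-equations matrix
        rhs = A.T @ Kxi_inv @ f + t * Ka_inv @ u0
        return np.linalg.solve(lhs, rhs)

    # invented ill-conditioned 2 x 2 test system
    A = np.array([[1.0, 1.0], [1.0, 1.0001]])
    u_true = np.array([1.0, 2.0])
    f = A @ u_true + 1e-3 * np.array([1.0, -1.0])    # perturbed right-hand side
    u_B = bayes_estimator(A, f, np.eye(2), np.eye(2),
                          sigma_xi=1e-3, sigma_a=1.0, u0=np.zeros(2))
    print(u_B)

With u_0 = 0 and K_a = I the same computation gives the familiar ridge-type (Tikhonov) estimate, which makes the connection to the regularization methods discussed later in this volume.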
2. STOCHASTIC FORM OF PRIOR INFORMATION
Now we will consider another method of taking into account the prior information. This method, based on the stochastic approach, is the generalized maximum likelihood method (GMLM); it has been developed in (Krianev, 1989; Arsenin and Krianev, 1991). The GMLM allows one to exploit prior information in both the stochastic and the deterministic form. First we will study a general scheme of the GMLM which takes into account the prior information in the stochastic form. Assume that, in addition to system (1), we are given a realization
Y_1, ..., Y_r   (4)
of a sequence of random variables η_1, ..., η_r which do not depend on the random variable ξ. The joint probability density p_a(y_1, ..., y_r; u) of the sequence η_1, ..., η_r is supposed to depend on the solution u we seek. A realization Y_1, ..., Y_r is called a prior sample. For solving practical problems a prior sample can be obtained as a result of additional direct or indirect measurements with respect to the desired solution u. Let p(x, y_1, ..., y_r; u) denote the joint probability density of the random variables ξ, η_1, ..., η_r. From what was said above it follows that
p(x, y_1, ..., y_r; u) = p_ξ(x) p_a(y_1, ..., y_r; u),   (5)
where p_ξ(x) is the probability density of the random variable ξ. The functions L_g(u) = p(f - Au, Y_1, ..., Y_r; u), L(u) = p_ξ(f - Au), L_a(u) = p_a(Y_1, ..., Y_r; u) are called, respectively, the generalized likelihood function, the likelihood function, and the prior function. Assuming that the functions L(u) and L_a(u) are twice continuously differentiable, we introduce matrices I_g(u), I(u), I_a(u) having the elements
(I_g(u))_{ij} = E[-∂² ln L_g(u)/∂u_i ∂u_j], (I(u))_{ij} = E[-∂² ln L(u)/∂u_i ∂u_j], (I_a(u))_{ij} = E[-∂² ln L_a(u)/∂u_i ∂u_j], i, j = 1, ..., m.
I_g(u), I(u), and I_a(u) are called, respectively, the generalized informative matrix, the informative matrix, and the prior informative matrix. Each of these matrices is symmetric and nonnegative. From (5) we obtain
I_g(u) = I(u) + I_a(u).   (6)
From (6) it follows that the generalized informative matrix unites the information about the desired solution u contained in the input system (1) and in the prior sample (4). According to the GMLM, we choose the estimator
u_g = arg max_u L_g(u)   (7)
as an approximate solution of system (1) on the basis of the prior information about the desired solution determined by the prior sample (4). It is often more convenient to determine estimator (7) from the equality
u_g = arg min_u (-ln L(u) - ln L_a(u)).   (8)
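To make (8) concrete in the simplest setting, suppose (purely for illustration; this particular prior sample is an assumption made here, not the general scheme of the paper) that ξ ~ N(0, σ_ξ² I) and that the prior sample consists of independent direct measurements Y_k of the components u_{i_k} with noise N(0, σ_a²). Then -ln L(u) - ln L_a(u) is, up to a constant, a penalized least-squares functional and (8) reduces to a linear problem:

    import numpy as np

    def gmlm_gaussian(A, f, sigma_xi, prior_idx, Y, sigma_a):
        """Estimator (8) for Gaussian noise and a prior sample of direct noisy
        observations Y[k] of the components u[prior_idx[k]]; minimizing
        ||A u - f||^2 / sigma_xi^2 + sum_k (u[prior_idx[k]] - Y[k])^2 / sigma_a^2
        is an ordinary linear least-squares problem."""
        n = A.shape[1]
        S = np.zeros((len(prior_idx), n))
        S[np.arange(len(prior_idx)), prior_idx] = 1.0     # selection matrix
        lhs = A.T @ A / sigma_xi**2 + S.T @ S / sigma_a**2
        rhs = A.T @ f / sigma_xi**2 + S.T @ Y / sigma_a**2
        return np.linalg.solve(lhs, rhs)

    # invented example: the prior sample observes the first component twice
    A = np.array([[1.0, 1.0], [1.0, 1.0001]])
    f = np.array([3.0, 3.0002])
    u_g = gmlm_gaussian(A, f, 1e-3, prior_idx=[0, 0],
                        Y=np.array([0.9, 1.1]), sigma_a=0.5)
    print(u_g)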
If the matrix I_g(u) is nonsingular, then under some additional generalized regularity conditions estimators (7) and (8) exist and are unique (see (Krianev, 1989)). Nonsingularity of the matrix I_g(u) means that the input system (1) and the prior sample (4) contain a sufficient amount of information about the desired solution to get an estimator having a finite covariance matrix. Moreover, under the same conditions, for any asymptotically unbiased estimator û found from system (1) (with the measured value f) and the prior sample (4), the generalized Kramer-Rao inequality
K_û ≥ I_g^{-1}(u)   (9)
holds, where K_û is the covariance matrix of the estimator û and I_g^{-1} is the inverse matrix of I_g(u). If we have equality in (9), then the estimators û are said to be jointly effective. Let system (1) be linear, ξ ~ N(0, K_ξ), ...
...   (18)
is the Huber function, α is the regularization parameter, and Y = ... Suppose that more than half of the s realizations of each component of the vector f are normally distributed random variables with dispersion ...

... is continuous. Theorem 1 allows one to obtain, for the equations of this class, a very useful sufficient condition for a function class M ⊂ L_2 to belong to UC(A).
THEOREM 2. A convex set M ⊂ L_2 belongs to UC(A) if for any ε > 0 there exist a > 0, η > 0 such that for any x_1 ∈ M, x_2 ∈ M satisfying the inequality
||I_a(x̂_1(ω) - x̂_2(ω))|| < η
the relation
||(I - I_a)(x̂_1(ω) - x̂_2(ω))|| < ε
is valid also. Here x̂_1(ω), x̂_2(ω) are the Fourier transforms of x_1(t) and x_2(t), and I_a is defined by
I_a(ω) = 1 if |ω| ≤ a, I_a(ω) = 0 if |ω| > a.
I pointwise in H, F respectively, | | A ( / - P f t ) | | - > 0, | | ( / - Q f t ) A | | - > 0 (h 0). The projection method for (1) has the form A^Uh = Qhfs, Ah = QhAPh, Uh € TZ(Pk)- We regularize this equation using a Borel measurable function gr: [0, a] —* R (r > 0) with the following properties for r > 0: 1)
^ 7r
(0 < A < a, 7 = const),
2 ) A |l - A 9 r (A)| < 7 p r~P p
3) r
(0 < A < a, 0 < p < p0, p0 > 0,
gr(A) is continuous and there holds
' [
9i9 a X))
I I
s=r
7jp
= const),
< 7'/9 r (A)(l -
\gr(\)),
7' = const, where /? r (A) = 1 f o r p 0 = 00, /3 r (A) = (1 — Ag I .(A)) 1 / J ' 0 f o r p 0 < 00, 4 ) r —> |1 — A^ r (A)| is decreasing for any A > 0. Let uo £ H and let u* be the solution of (1), nearest to ug. We find
uh>r = ( J - gr(AlAh)A*hAh)Phu0
© T V P Sei. Publ. 1992
+ gr(A*kAh)A*hfe,
(2)
Regularized
Projection
25
Methods
assuming that ||A||2 < a. In the special case F = H,
A = A* > 0,
Qh = Ph
(3)
we assume that || A|| < a and use the regularized Ritz-Galerkin method wfc.r = (I ~ gr(Ah)Ah)Phu0
(4)
+ gr(Ah)Phfs.
Examples of regularization methods of the form (2), (4) with the corresponding functions gr, satisfying l ) - 4 ) are the ordinary Lavrentiev and Tikhonov methods (po = 1)> their iterated versions of order m (p0 = m), explicit and implicit iteration methods (p0 = oo) (see (Plato and Vainikko, 1989, 1990; Gfrerer, 1987)). 2. R U L E S F O R CHOOSING r Let Bhir
= B'h
= I if p0 = oo, BKr
r
the case (3) define B'h
r
= (l — gr(Ah)Ah)1^P°
1. Let b2 > 6j > 1, r = 0. Otherwise choose
RULE
choose
(|A| = (AM) 1 / 2 ) such ||BKr{AhuhtT
such
if po < oo.
< 0 < 1. If ||Bfc,r(Afctto - Qhfs)|| < hS, r < r : = s u p ^ g ) 1 / » | | ( I - Pa)|A|?||-2/«} «>0
0
0
b,6.
that (5) holds,
choose
(5)
r = f or r = int(r) + 1.
Rule 2 = Rule 1, where Bh,r is replaced by I. If (3) holds, then we choose in r approximation (4) according to Rules 1', 2' which we obtain from Rules 1, 2 by using the substitutions Bh, r —• BJ, r , f —> (f) 1 / 2 , Qh -»Ph• 3. C O N V E R G E N C E AND R A T E OF C O N V E R G E N C E THEOREM
h -
1. a) Let r in (2) be chosen
by Rule
1. Then
Uh,r - » « » ( £ - » 0,
0). If u. = |A|'z,
with p < 2p0,
\\z\\0 + 1
Po
1
2po 2p0+l
2p,-l 2po
l 2
P0 2p0+l
2po-l 4po
OO
2po
2po-l
A4 oo
P
Po - 1
Po
5. A P P L I C A T I O N TO I N T E G R A L EQUATIONS Let Au(t) = f^ fC(t, s)u(s) ds, i i
a
0 0
I dhIC{t,s) ds'i
II i
dt ds < oo,
I
0 0
dt1'
dt ds < oo,
A: L 2 [0,1] —» L 2 [0,1]. Let 1Z(Phl), /R-{Qh2) be spline spaces of degrees fcj — 1, — 1 and discretization step sizes hi, h.2, respectively. Then (I - Phl)\A\>\\ =
min{jWi ,ki} 0(hl
(9)
Regularized
Projection
Methods
27
| | ( I - Q h j m 1 | = Winr - u.ll < +
P
(11) \\A(i -
p
h
) \ r
{ p
'
i }
+
P
\ \ ( i -
Q
f c
)4
m i n { , ,
'
2 }
}
was obtained and, for (4), estimate (11) was stated without the last term under the assumption that (3) and (6) hold. Note that estimates (7) and (11) for the regularized least squares method (the case Q^APh = APh) hold without the last term and for the regularized least error method (the case QhAPh — QhA) without the second term. Corresponding estimate (11) for the first of these methods was earlier given in (Groetsch et al., 1982; Groetsch, 1984; Gfrerer, 1987) and for the second method in (King and Neubauer, 1988; Neubauer, 1988). Note also that estimate (7) is always not worse that (11), but there Eire examples when ¡ ( I - P i O I A I ' l l = 0(/imax i/p, (i = 1, 2). It is worth noting that rule (8) with A = fj, = 1 (independently from p0) was proposed for choosing h in (Plato and Vainikko, 1990). REFERENCES Gfrerer, H. (1987). An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Math. Comput. 49, 507-522. Groetsch, C.W., King, J.T., and Murio, D. (1982). Asymptotic analysis of a finite element method for Fredholm equations of the first kind. In: Treatment of Integral Equations by Numerical Methods. Ed. by C.T.H. Baker and G.F. Miller. Academic Press, London, pp. 1-11. G r o e t s c h , C . W . ( 1 9 8 2 ) . The
Theory
of Tikhonov
Regularization
for Fredholm
Equations
of the
First Kind. Pitman, Boston. Hamarik, U. (1991). Quasioptimal error estimate for the regularized Ritz-Galerkin method with the a posteriori choice of the parameter. Acta et Comment. Univ. Tartuensis 937, 63-76. King J . T . and Neubauer, A. (1988). A variant of finite-dimensional Tikhonov regularization with a posteriori parameter choice. Computing 40, 91-109.
28
U. Hämarik
Neubauer, A. (1988).
An a posteriori parameter choice for Tikhonov regularization in the
presence of modeling error.
Appl. Numer.
Math. 4, 507-519.
Plato, R. and Vainikko, G. (1990). On the regularization of projection methods for solving illposed problems.
Numer.
Math. 57, 63-79.
Plato, R. and Vainikko, G. (1989). On the regularization of the Ritz-Galerkin method for solving ill-posed problems.
Uch. Zap. Tartu Univ. 863, 3-17.
Ill-Posed Problems in Natural Sciences, pp. 29-32. A. Tikhonov (Ed.). 1992 VSP/TVP
REGULARIZATION AND UNIQUENESS OF SOLUTIONS OF SYSTEMS OF VOLTERRA NONLINEAR INTEGRAL EQUATIONS OF THE FIRST KIND WITH TWO ARGUMENTS
M.I. IMANALIEV and A.A. ASANOV
Institute of Mathematics, Kirgizstan Academy of Sciences, Leninskii Prospekt 265a, Bishkek, Kirgizstan
ABSTRACT Systems of nonlinear integral equations with a nondifferentiable vector kernel are considered. Volterra regularization operators are constructed in various spaces and theorems on uniqueness are proved. For linear systems an existence theorem is proved.
Consider a system of nonlinear integral equations of the form
∫_{t_0}^{t} ∫_{x_0}^{x} K(t, x, s, y, u(s, y)) dy ds = f(t, x),   (t, x) ∈ Ḡ,   (1)
where K, f, and u are n-dimensional vector functions, K and f being known and u unknown, and Ḡ is the closure of the set G = (t_0, T) x (x_0, X). Throughout we assume that
K(t, x, s, y, u) = Q(t, x, s, y)u + K_1(t, x, s, y, u),   (2)
Q(t, x, t, x) = A(t)B(x)K_0(t, x),   (3)
where Q, A, and B are n x n matrix-valued measurable functions, K_1 is an n-dimensional measurable vector function, K_0 is an n x n matrix-valued continuous function in Ḡ, and det K_0(t, x) ≠ 0 in Ḡ. Without loss of generality we assume that K_0(t, x) is the unit n x n matrix E_n everywhere in Ḡ. Systems of one-dimensional and two-dimensional Volterra integral equations of the first kind were considered in (Asanov, 1980; Imanaliev and Asanov, 1988; Imanaliev and Asanov, 1989; Ten Men Jan, 1975). Here we do not need to assume the vector functions K and f to be smooth with respect to t and x. Moreover, det[A(t)B(x)] may vanish in Ḡ.
where Q,A, and B are n x n matrix valued measurable functions, K\ is an ndimensional measurable vector function one, KQ is a n X n matrix valued continuous function in G and det iiToC^i / 0 in G. Without loss of generality we assume that K0(t,x) is the unit n x n matrix En every where in G. Systems of one-dimensional and two-dimensional Volterra integral equations of the first kind were considered in (Asanov, 1980; Imanaliev and Asanov, 1988; Imanaliev and Asanov, 1989; Ten Men Jan, 1975). Here we do not need to assume the vector-functions K and / to be smooth with respect to t and x. Moreover, det[A(i)5(x)] may vanish in G. ©
T V P Sci. P u b l . 1992
M.I. Imanaliev and A.A. Aaanov
30
Remark . In the sequel we use the following notation. 1) For any n x n matrix A and any n-dimensional vector u respectively, ||A|| and ||u|| mean, respectively, their norms. 2) C ^ ' ^ ^ G ) , 0 < 7, < 1, i = l , 2 , is the set of n-dimensiond vector functions u(t, x) satisfying the inequalities IK*,*) ||u(*,x) -
u(s,x) - u(t,y) + u(a,y)||
U(S,x)||
0 for
ii = {(i, x, s, y): t0 < s v and for every (i, x, 3, y, Uj), (T,v,s,y,UI),
(t,u,s,y,u2),
(t,x,s,y,u2), ( r , ® , s , y , u i ) , ( r , x , s , y , u 2 ) , (r, v , s , y , « i ) , (r, 1/, s, y, u 2 ) S ft X R " the following inequalities axe fulfilled: \\K!(t,x,s,y,U2)
- Ki(t,v,s,y,u2)
-
Ki(t,x,s,y,vi)
X
+K1(t,v,s,y,u1)\\
< C'0\(s)fi(y)[
J p{v) dv} ||ui - u 2 | | , V
\\Ki(t,x,s,y,u2)
- Ki(r,x,s,y,u2)
-
Ki(t,x,s,y,ui)
t
+K1(r,x,s,y,ui)\\
) —
sup
VR 1 ( 2R I) ANCI V32"1(JSR2) are the corresponding
)» V 2 " a ( ^ 2 ) ) n .
inverse functions,
z G [0, lj - bij\ < Aij,
\di-di\=i
n
- d, 0; the monotony of these variables: Uj < u J + 1 etc.). We shall denote D the set, taking in to account the a priori information about the solutions of system (1). T h e n it is naturally to consider instead of (5) the following problem: ||Lu||7
inf,
ueUD
= Un
D.
(6)
If D = R " , then this problem coincides with (5). Here in (6) and everywhere below we a s s u m e that the set D is known precisely, so we a s s u m e that the input d a t a determining this set, are given precisely. Below we will show that any solution of problem (6) can b e taken as an "approxim a t i o n " to a solution of (1) with a suitable choice of the matrix L. Moreover, the is a set U»(A) of solutions of problem a = ( { A j j } , { A / j } , { 0; { / * ( A ) will converge in some sense to U t , where is the set Ut of solutions of the following problem: ||Z>u||/ —»inf,
ueUD
=
UnD.
(7)
In particular the unit matrix E 6 R" x " can be always taken as L. Then the set U, consists of the so called normal solutions of problem (1), which have the minimal norm. Other possibilities of the choice of matrix L will be considered below. We shall call the suggested selection algorithm (5) or (6) of the approximate solutions of problem (1) under the pointwise representation of errors (2a), (2b) a pointwise residual method. It is ail analogue of the well known residual method for the solving of the unstable equations (see (Morozov, 1984b; Tikhonov and Arsenin, 1986)). The pointwise residual method for systems of linear equations and some particular classes of inequalities was also investigated in (Morozov, 1984b; Ivanitskii, 1989). 2. WEIERSTRASS THEOREM. CONDITION OF A D D I T I O N A L L Y We investigate problem (7), assuming that UD ^ 0 and the following condition, of additionally is fulfilled there exist such constants 0; a* > 0; i = 1,2,3; a\ + 0, v £ D}, Ko be of closure of KQ. Analogomly to the proof of Theorem 1 in (Ivanitskii and Fedotov, 1989) one can prove THEOREM 2. The condition of additionality (8) and the following condition kerj4 n ker£ fl {it € R n : Bu < 0} n KD = {0}
(9)
36
are equivalent. We remark that in (Ivanitskii and Fedotov, 1989) Theorem 2 was proved for the infinite dimensional spaces when B — 0, d = 0. It follows from Theorem 2 that if kerL = {0} for the matrix L, then the condition of additionality (8) is fulfilled for arbitrary matrices A € R m x n , B G R i > x n and for any closed sets D C R". In this case set £ , = { » £ R": ||Lu||i < y} is compact for any y > 0 and the function ||£u||i is a stabilizer (Tikhonov and Arsenin, 1986; Vasil'ev, 1988a) for problems (1), (7). We give an example showing that condition (8) can be fulfilled in the case when ||£u||i is not a stabilizer. Example.
3
Let ||u||/ = ||t>||2 = ( 2
v
j)
|
try
be the Euclidean norm at R 3 ,
3=1
3
D = {u e R : u3 = 0},
Au = (u_1, 0, 0)^T,
Lu = (0, u_2, 0)^T
for Vu € D3.
It can be verified easily that condition (9) and also condition (8) are satisfied with any matrix Lebeg's B € R p x 3 . However set Cy = {« € R 3 : ||£u||2 = ^ y} is unbounded here, and hence ||£u||2 is not a stabilizer. Thus the condition of additionality (8) simultaneously universely connects the four main objects of problem (7), namely: the matrices A, B, L and the set D, and it contains the idea of compensation of "bad" properties of some objects by "good" properties of the others. It allows to consider sufficiently wide class of unstable problems and methods of their regularization with relaxed requirements on matrices A, B, L and on the set D as compared with the traditional cases, where a priory ||£u||i is considered as a stabilizer. 3. C O N V E R G E N C E OF T H E P O I N T W I S E R E S I D U A L M E T H O D T H E O R E M 3. Let the conditions of Theorem 1, (2a), (2b) be fulfilled. Then U, = {u € Up: ||Z,u||i — ¡x„ = inf ||Lu||i} is the set solutions of problem (6)
well be nonempty and compact if {A tJ }, {A^} are sufficiently small and any minimizing sequence {it*} for the function ||Lu||i on the set Up converges to (/„. The proof of Theorem 3 is similar to the proof of Theorem 1. Remark. If D is bounded or a2 = ctz = 0, then the condition that {A^-} and {A,j } are small is superfluous (Ivanitskii, 1988). It is not necessary to solve problem (6) exactly in the numerical realizing of the pointwise residual method. It is sufficient to determine the vector u = u(e, a) from the condition: u e UD: ||Itx||i < £ „ + e , e > 0. (10) We shall denote by U*(6,e) the set of vectors sutisfying conditions (10). The convergence of the method under consideration is established by T H E O R E M 4. Let the conditions of Theorem f3(U*(a, e), Ut) —» 0 as S —* 0 and e —* 0, where
β(V, W) = sup_{u∈V} inf_{w∈W} ||u - w||.
3
be fulfilled.
Then
Proof. We consider arbitrary sequences 0 as k —• oo. According to the definition of the upper bound there exists such uk
= u{ak,ek)
p(uk,U,)
G Ut( 0 depending only on the elements of the matrices L, A, B, A1 and B1 that p(u,U») < M m a x { max ((Lu), — fi*)+, max (—(Lu)s — /x*)+, ||^-/||00,||(5U-d)+||00,||A1«-/1||00,||(B1U-d1)+||00},
Vu£R".
(16)
In particular this inequality is correct for all the points u( inf, v£V = {v£ R'. Av = / } (25) in the space of variables u , = (u*|£*)T € V». We denote by V* the set of solutions for this problem. It is clear that if v, = (u»|£»)T € V», then u* £ U, and conversly, if u* G Ut, then u, = (w„|f«) T G V«, where = Au« — /. Problem (25) is of the same type as problem (7) if B = 0, d = 0, DC. R n . We shall verify the correctness of the condition of additionally for problem (25) to use the theory developed in §1-4. We shall use the following Theorem. THEOREM 6. If condition of additionality (22) is fulfilled for matrices A and L, then an analogous condition is fulfilled for matrices A and L, i.e. « ¿ H I ||i + < 4 P H k
v«eRJ,
(26)
where constants aj, > 0, a'j > 0 , ot'2 > 0, 0, p > 0, u0 (E H; e) we use the m-step iterated Lavrentiev's method to find an approximate solution: ua = umiQ\ Uj,a = (al + A) _ 1 (auj_i l C t + /«), u 0,a = "0,
j = 1, ...,m;
where the regularization parameter a > 0. Recall that if it is possible to choose the parameter a so that sup ||u - ua|| < c p m p ^ 6 ^ «6M„„0,/seH: ||A«-M| Wj the orthogonal projection onto Wj. Set yj =: \\K*(I — Qj)|| and we assume limm_>oo 7 m = 0. Our approximation u6m x to u is given by u6m x = : K*w6m where (K'wtmiX,K'*)
+ \(wtmiX,4) = (gs,4),
G Wm = (Qmg6, 0 is fixed. We do not address the selection of the regularization parameter A here, but see (King and Neubauer, 1988) for an optimal a posteriori choice. Rather we want to consider the solution of (2) for a given A by preconditioned 0
iterative methods using a multilevel approach. We point out that u6m A minimizes, the Tikhonov functional over Vm =: K*(Wm) F(v)=\\Kv-gs\\l
+ X\\v\\l
MULTILEVEL OPERATORS To describe the multilevel operators and iterative methods we find it convenient to introduce the symmetric bilinear form a(w,z)
= (K*w,K*z)
+ A(w,z),
tu,z €
and the operators Ajty. Wj —* Wj given by a{w, ) = (AjtXw, ), e Wj Then (2) is equivalent to AmAWSm,x = Qmg6.
(3)
For later reference we denote the minimal eigenvalue of Ajto by fij and remark that n j < 7|_iThe multilevel iterative methods for the solution of (3) axe described as follows. Let By. Wj —• Wj denote a symmetric positive definite operator that is (in a sense to be made clear later) an approximate inverse of Ajt\. Let tij denote the number of iterations on level j , that is on Wj. On Wi set B\ = \ and define g6j = Qjgs for each j, 1 < j < m. Full Multilevel Algorithm. Define w"1 = Bxg{. Suppose w^Li1 € Wj-1 has been defined and define w* 6 Wj by w) = w*-1 + Bj(gsj - Aj^w*-1),
(4)
where w_j^0 = w_{j-1}^{n_{j-1}}, and with n_j determined by the least index k such that ||w_j^k - w_j^{k-1}||_2 ≤ TOL ||w_j^k||_2. The sequence is terminated at level j when ||A_{j,0} w_j^k - g_j^δ||_2 ≤ RTOL. The tolerances TOL and RTOL are specified so that one does not iterate too often per level and so that the residual is comparable to the (Morozov, 1966) discrepancy principle, say RTOL = Cδ for some constant C. We note that (4) is simply a preconditioned version of Landweber-Fridman iteration applied to A_{j,λ} w_{j,λ} = Q_j g_δ. In the context of finite differences or finite elements for partial differential equations this sort of iteration is called the full multigrid algorithm (see (McCormick, 1987)). To completely specify the procedure described above we need to define the operators B_j.
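A schematic matrix version of the Full Multilevel Algorithm may help fix the control flow: iterate on level j until the correction is small relative to the iterate or the residual reaches RTOL, then prolong to level j+1. In this sketch the levels are nested coordinate subspaces, B_j is replaced by a scalar step β (so the inner iteration is plain Landweber-Fridman rather than the preconditioned form defined below), and the kernel, data, and tolerances are invented.

    import numpy as np

    def full_multilevel(K, g_delta, lam, dims, beta, tol=1e-3, rtol=1e-2, max_it=500):
        """Nested-subspace sketch of the Full Multilevel Algorithm: on level j solve
        A_j w = Q_j g_delta with A_j = Q_j (K K^T + lam I) Q_j by damped
        Landweber-Fridman iteration, then prolong the iterate to the next level.
        dims[-1] must equal len(g_delta)."""
        G = K @ K.T + lam * np.eye(K.shape[0])      # matrix of the bilinear form a(.,.)
        w = np.zeros(dims[0])
        for j, n in enumerate(dims):
            if j > 0:                               # prolongation: pad with zeros
                w = np.concatenate([w, np.zeros(n - dims[j - 1])])
            A_j, g_j = G[:n, :n], g_delta[:n]
            for _ in range(max_it):
                corr = beta * (g_j - A_j @ w)
                w = w + corr
                if np.linalg.norm(corr) <= tol * np.linalg.norm(w):
                    break                           # inner tolerance reached
                if np.linalg.norm(A_j @ w - g_j) <= rtol:
                    break                           # residual at the discrepancy level
        u = K.T @ w                                 # approximation u = K* w
        return u, w

    # invented data: a smoothing convolution kernel and noisy observations
    rng = np.random.default_rng(1)
    N = 64
    x = np.linspace(0.0, 1.0, N)
    K = np.exp(-(x[:, None] - x[None, :])**2 / 0.01) / N
    g = K @ np.sin(2 * np.pi * x) + 1e-3 * rng.standard_normal(N)
    u_approx, _ = full_multilevel(K, g, lam=1e-4, dims=[8, 16, 32, 64], beta=0.5)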
One can also use the multilevel operators to precondition the iterative solution of A_{j,0} w_j = Q_j g_δ. In this case (4) is replaced by
w_j^k = w_j^{k-1} + B_j(Q_j g_δ - A_{j,0} w_j^{k-1}).   (5)
Remark. In this paper we have restricted our attention to the preconditioning of Landweber-Fridman iteration, but the operators B_j can be used to precondition conjugate gradients (see (King, 1989; King, 1991)) or steepest descent, etc. For example, to precondition the conjugate gradient method for (3) one applies conjugate gradients to the equivalent problem
B_m A_{m,λ} w_{m,λ}^δ = B_m Q_m g_δ
with respect to the inner product ( B , •). Of course this presumes Bm is symmetric positive definite on Wm. To motivate the definition of Bj consider the operator Tj = I — ocj(I — Qj-i)Aj \ defined on Wj. Then Tj is a symmetric positive definite operator and is a contraction on the orthogonal complement of Wj-1 for suitable otj . LEMMA 1. For 0 < ctj < (A + spectrum of Tj,cr(Tj), satisfies a(Tj) Wj: a(w, ) = 0,e Wj-1}, then a(TjW,w)
, T j is symmetric in a ( - , - ) and the C (0,1]. Moreover, if w £ Wji.j = {to g
to,ti>) < (Bjw,w)
),
where r = m i n { l , / x m A - 1 } .
Prom the Corollary it follows that the iterative method satisfies for u>® = 0
wkj=w)^+Bj{g]-Aj,0wkj-')
IKO-KXII^p'KoIII, where p = 1 — r ( l — e) < 1. Note that A in the definition of Bj can be chosen independently of the choice of A in Ajt\. E X A M P L E S AND NUMERICAL E X P E R I M E N T S We consider an integral operator of the form b Ku(s)
= J
k(s,t)u(t)dt
a
defined on L2[a, 6] with square integrable kernel on [a, 6] x [a, 6]. We also assume that d2k/dt2 £ L2[a, 6]. Our choice of finite dimensional subspaces consists of linear splines on a sequence of grids. Let N > 0 be an integer and set h\ = (b—a)/N and t] = a + ihx for 0 t.
Then it follows that Ku = g on LjfO, 1] is equivalent to the inverse heat conduction problem: Wf — wzz
w(l,t)
= 0,
0 < x < oo, 0 < i < 1,
= g(t),w(x,t)
bounded as
iu(z,0) = 0,
x —> o o ,
wx(0,t) = - u ( i ) ,
where t is time and x is the distance to the heated surface. Thus one wants to determine the heat flux at x = 0 based on the measured temperature at x — 1. For numerical methods for this and the related problem of recovering id(0, 0 and hence there is no convergence rate predicted by the theory (see e.g. Groetsch, 1984) of Tikhonov regularization for A = A( 0. In Table 2 we list the resulting L2 errors for the multilevel iterative method find direct solution of (3) for several noise levels and for the parameters specified above. The L2 errors are comparable except for the largest noise level where the iterative method required only one iteration on W j . We used TOL = 1 0 - 3 and RTOL = ¿|| 0} such that M
M
M
fc=l Here 9(p) =
fc=l
k=1
for p ± 0; 0 for p = 0}.
T h e o r e m 2 (Leonov, 1990). i) Problem (4) is solvable for each h > 0. ii) Under the conditions: 0 < h < ho(A), ||AJ,|| > h problem (4) has a unique solution and this solution is given by s — J PkXk[ r. Here xjt(a) is the unique solution of the equation 4 O A x — x = apk Pk
satisfying the inequality x > 1, and a(h) > 0 is the unique solution of the r
e(a) = £#[**(«)-
equation
M
1]2+
f>l = h>
with continuous and increasing function e(a). The following two theorems the interconnection between problems (2) and (4).
clarify
THEOREM 3. Let the condition ||A_h|| > h hold for a given h and let (ρ̄_1, ..., ρ̄_M) be a solution of (4). Then the matrix Ā_h = U_h D̄_h V_h^T, defined by the matrix D̄_h = diag(ρ̄_1, ..., ρ̄_M) ∈ 𝔄 and by the matrices U_h, V_h of the SVD for A_h, is a solution of problem (2).
THEOREM 4. If the conditions 0 < h < h_0(A), ||A_h|| > h are satisfied, then any solution of (2) has the same singular values ρ̄_1, ..., ρ̄_M.
It is clear from Theorem 3 that the element z_η = V_h D̄_h^+ U_h^T u_δ can be taken as an MPM approximation to z̄. Here D̄_h^+ = diag(θ(ρ̄_1), ..., θ(ρ̄_M)).
Remark 1. The numerical implementation of the MPM method does not require knowledge of the exact SVD of A_h. This is very important, since the algorithms for the calculation of the SVD used in practice give an approximate singular value decomposition which has some accuracy κ. That is, by using A_h, we can calculate the orthogonal matrices U_{hκ} ∈ R^{m x m}, V_{hκ} ∈ R^{n x n}, and D_{hκ} = diag(ρ_1^κ, ..., ρ_M^κ) ∈ 𝔄 such that ||U_{hκ} D_{hκ} V_{hκ}^T - A_h|| ≤ ...
... δ > 0, determining the level of the approximation error, are known.
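Computationally, the MPM approximation z_η = V_h D̄_h^+ U_h^T u_δ of Theorem 3 is an SVD-type solution in which the singular values of A_h are first replaced by the modified values ρ̄_1, ..., ρ̄_M solving problem (4), and these are then pseudo-inverted by θ(ρ̄) = 1/ρ̄ for ρ̄ ≠ 0, θ(0) = 0. The sketch below shows only this final step: the modified values are taken as given (here they are produced by a crude truncation that merely stands in for the actual solution of (4)), and all matrices and data are invented.

    import numpy as np

    def mpm_solution(Ah, u_delta, rho_bar):
        """MPM approximation z = V_h D_h^+ U_h^T u_delta, where D_h^+ applies
        theta(rho) = 1/rho for rho != 0 and theta(0) = 0 to the modified
        singular values rho_bar (assumed already obtained from problem (4))."""
        U, s, Vt = np.linalg.svd(Ah, full_matrices=False)
        theta = np.zeros_like(rho_bar)
        nz = rho_bar != 0.0
        theta[nz] = 1.0 / rho_bar[nz]
        return Vt.T @ (theta * (U.T @ u_delta))

    # invented data; the truncation below is only a placeholder for solving (4)
    rng = np.random.default_rng(2)
    Ah = rng.standard_normal((8, 5)) @ np.diag([1.0, 0.5, 0.1, 1e-3, 1e-5])
    u_delta = rng.standard_normal(8)
    h = 1e-2
    s = np.linalg.svd(Ah, compute_uv=False)
    rho_bar = np.where(s > h, s, 0.0)
    print(mpm_solution(Ah, u_delta, rho_bar))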
We consider the following Basic problem: find an element zs = z(Jg, S, "to) G D such that z$ converges r-sequentially to the set Z of fi-optimal solutions of (1), (2) as 6 —• 0. To solve this problem one must generally use a computer. Therefore, it is necessary to construct a finite-dimensional approximation of the problem. In many cases the d a t a of the Basic Problem are directly given in a finite-dimensional form. In view of that we formulate the problem of finite-dimensional approximation of (1), (2). Let Zn be an iV-dimensional normed space with elements z^. We introduce operators Pj\r: Z —• Zjv, Pn'- Zn —* Z. T h e essential properties of these operators are given below. We define a set Dn = {¿n- Pn%n € D}. Sets of this kind will represent finite-dimensional analogues of D. In the sequel we sometimes omit the index N when it is fixed (e.g., zn = z, Dn = D, Pn = P, and so on). We assume t h a t , instead of the functional Je(z), its "finite-dimensional approximation" is given, i.e., a finite-dimensional functional J(z) = Jn(z), rj = (£, 1 /N), with a domain Dn, which satisfies an approximation condition I J„{PNz)
- J6(z)\
J* for each T) and p.,, —> J* as 77 —> 0. In order to formulate the algorithms with the a posteriori choice of a for the BFDP we introduce some auxiliary functions and functionals: 7(a) = n(r),
p(a) = J(za),
(¿(a) = A?[z°]
Vza e Za
(6a)
and *(«) = * (77,7(0)) = 0 ( 5 " ) , p(a) = /3(a) - Tr(a) - £„ = P(za), e(a) = 0 the set Za can contain more than one element. LEMMA 1. The function (a) is single-valued and continuous for all a > 0. The other functions in (6a) and (6b) are single-valued and continuous everywhere for a > 0 except, possibly, for not more than a countable number of common points of multiple-valuedness. If a > 0 is such a point of multiple-valuedness, then the set Za contains at least two elements, z" and zZ, such that
7(a±0) =
fi(z|),
p(a±0) = P(5|),
£(a ± 0) =
The functions ¡3, tp, p, £ are monotonically nondecreasing, and 7, tt are monotonically nonincreasing. The algorithm of finite-dimensional generalized discrepancy principle (FDGDP) for solving extremal problems develops the approaches due to (Morozov, 1968; Goncharskii et. al., 1973; Leonov, 1979) and consists in the following (see (Leonov, 1982a; 1986)). i) The regularization parameter a , > 0 is chosen as a generalized solution of an equation with monotonic function: p(a) = 0.
(7)
Here and in what follows we say that a v is a generalized solution of equation x(a) = 0 with a monotonically nondecreasing function x, if x ( a ) ( Q — a ri) > 0, Va € (0,+oo). ii) An approximate solution za'1 is selected by a certain rule. Let C > 1, q > 1 be given constants of the algorithm, and let (*i = a^/q, «2 = oivq. We consider any two auxiliary extremals of (4), zai and ¿™2, associated with a = a i and a = «2» respectively. If (8)
Variational
Algorithms
for Extremal
67
Problems
for the given 77, then we take an element zai in Za* such that P(zai) < 0 as the desired approximate solution. For instance, we can put za,> = . Otherwise, if J ( * a s ) < c n ( z ° i ) + £,,. (9) we select an zai £ Zai satisfying the condition P(zai) we can take 2 " ' = z+ .
> 0. For example,
The algorithm of finite-dimensional generalized quasisolution principle (FDGQP) is a generalization of the methods due to (Ivanov, 1966; Tikhonov and Vasil'ev, 1976). It is possible to use this method if, in addition to the data (J, , , 6, N, Zn), we know an estimate wJJ > fIV77, Qri —* fi as 77 —• 0. The algorithm consists of the following steps.
for ft possessing the properties:
i) Choosing a v > 0 as a generalized solution of the equation 7 ( a ) - w„ = 0.
(10)
ii) Choosing an element za'> according to the following rule. If (8) holds for a given 77, then we select zai g Za* such that 0,{zai) > Qv (e.g., za" = zt"). If (9) holds, then we take z a * G Z a i satisfying the inequality Q ( z a i ) < Q v (e.g., z"' = z?). The algorithm of the finite-dimensional generalized principle of a smoothing functional (FDGSFP) is an advanced version of the algorithm suggested by Morozov and Liskovets (Morozov, 1966; Liskovets, 1976). For general extreme value problems it has the form: i) Choosing a v > 0 as a generalized solution of ¿(a) = 0.
(11)
ii) The selection rule: we take an arbitrary element in Zai such that E(zai) > 0. We note that the second step can be omitted if a , > 0 is an ordinary solution if (7), or (10), or (11). In this case any element z " ' 6 can be taken for constructing the approximate solution z = P h z 0 * . A widely known example of such a situation is the case where problem (4) has a unique solution for each a > 0. In this case, functions (6a), (6b) are continuous and equations (7), (10), and (11) have ordinary solutions. Let a , , za'> and zv — P¡^za'> be the values which axe found by one of the algorithms formulated above. THEOREM 1. Suppose that the inequality Cl(z*) > ii* holds for any solution z* £ D of problem (1). Then: a) equations (7), (10), and (11) have positive generalized solutions at least for "small" values 6 and 1 /TV; b ) equation (11) has a unique generalized solution which converges to 0 as r) —> 0;
A.S. Leonov
68
c) equations (7), (10) have also unique generalized solutions if problem (4) has the property: Za' H Za" = 0 whenever a' > a" > 0; d) for any sequence rjn = (Sn, 1 /Nn) which converges to 0 as n —> oo the corresponding sequence zn = znn converges in the sense that is such that zn—*Z,
ii(z n ) —*
fl
as
n —> oo.
Thus, the algorithms of generalized principles which have been formulated above solve the Basic finite-dimensional problem.
4. ON THE NUMERICAL TREATMENT OF THE GENERALIZED PRINCIPLES The algorithms considered axe based on solving the extreme value problem (4). Methods for solving such problems are well known (see (Fiacco and McCormick, 1968; Polak, 1976; Vasil'ev, 1980)) and used in many applications (Tikhonov and Arsenin, 1979). We assume in what follows that there is a method which finds one of the solutions of (4) for an arbitrary a > 0. Since we usually cannot find all the solutions of (4) for a given a, we must modify the algorithms of §3 in order to use only one extremal za for every a > 0. The first constituent of each of the algorithms in question is solving an equation of the form x ( a ) = 0 ( s e e (7)> (10), and (11)). Because of possible discontinuity of the monotonic function x(a) w e cannot use "feist" methods of the Newton type for solving this equation. However, this equation can be solved by simple bisection method (or by combinations of the bisection method and some "fast" methods) (see (Bakhvalov, 1975)). It is easy to prove the following statement. THEOREM 2. Let a function x(a) be defined for a > 0 and be monotonically nondecreasing. Suppose that the equation x(a) = 0 has a generalized solution on a given segment [ao, &o]> such that ao > 0, and x(°o) < 0, x(^o) > 0. Then the sequence a„, which is constructed by the bisection method, converges to a generalized solution of this equation. It is sufficient to use only one extremal of (4) to implement the bisection method. Now we consider a modification of selection rules (§3), which use also one extremal of (4) for every a . Since the generalized solution a n > 0 of (7) (or (10), or (11)) is found approximately, we suppose, that there is a sequence a „ = a„(ij) such that: av Vn, an —• av + 0 as n —> oo. Such a sequence can be determined by the bisection method. Then there exists a number n(ij) satisfying the condition: a„v /q < av. Choosing such a n(rj), we fix the following values of the regularization parameter: a = a i " alli «2 = aq (q = const > 1), said find arbitrary extremals z", z" 1 , z" 2 of (4) for these a. Let us formulate analogues of algorithms described in §3 which use these values. a) The generalized discrepancy principle. If (12)
Variational
Algorithms
for Extremal
Problems
69
holds, then we take a(r?) = a\ as the regularization parameter and the element ZV = PNZ"1 as the approximate solution of (1), (2). In the opposite case, where + (13) w e p u t a(rj)
= a,
= PNZ* •
b) The generalized quasisolution principle is constructed in a similar way, by using (10) instead of (7). c) The generalized principle of a smoothing functional. We take the number log2{(&o — ao)/[ao(i — 1)]}- Here [do, 60], a o > 0, is an initial localization segment for the solution of (7) (or (10), or (11)). The algorithms of generalized principles have optimal order of precision (see (Leonov, 1990)). The finite-dimensional algorithms constructed above can be used for finding stable solutions of operator equations by the variational method (Leonov, 1979). Examples of special formulations of problem (1), (2) in various functional spaces as well as examples of applications of the algorithms are given in (Tikhonov et al., 1992). REFERENCES Bakhvalov, N.C. (1975). Numerical
Methods.
Nauka, Moscow (in Russian).
Fiacco, A.V. a n d McCormick, G.P. (1968). Nonlinear Minimization
Techniques.
Programming:
Sequential
Unconstrained
J o h n Wiley, New York.
Goncharskii, A.V., Leonov, A.S., and Yagola, A.G. (1973). Generalized discrepancy principle. USSR
Comput.
Math,
and Math. Phys. 13, 294-302 (in Russian).
Ivanov, V.K. (1966). O n approximate solving of operator equations of t h e first kind. Comput.
Math,
and Math.
USSR
Phys. 6, 1089-1094 (in Russian).
Leonov, A.S. (1979). O n algorithms of a p p r o x i m a t e solving nonlinear p r o b l e m s w i t h o p e r a t o r approximately d e t e r m i n e d . Dokl. Akad.
Nauk SSSR.
245, 300-304 (in Russian).
Leonov, A.S. (1982a). O n an application of t h e generalized residual principle for solving ill-posed e x t r e m a l problems. Dokl. Akad. Leonov, A.S. (1982b).
Nauk SSSR.
262, 1306-1310 (in Russian).
O n a relation between t h e m e t h o d of generalized discrepancy a n d gen-
eralized discrepancy principle for nonlinear ill-posed problems. USSR Math.
Comput.
Math,
Leonov, A.S. (1986). O n some algorithms for solving ill-posed extremal problems. Math. Sbornik.
and
Phys. 22, 783-790 (in Russian). USSR
129(171), 218-231 (in Russian).
Leonov, A.S. (1990).
T h e optimality of precision order for s o m e algorithms of solving ill-posed
e x t r e m a l problems. Izv. Vyssh.
Ucheb. Zaved. Mat. N o 6, 30-38 ( in Russian).
Liscovets, O.A. (1976). A choice of regularization p a r a m e t e r in solving nonlinear ill-posed problems. Dokl. Akad.
Nauk SSSR.
229, 292-295 (in Russian).
70
A.S.
Leonov
Morozov, V . A . (1966). On regularization of ill-posed problems and on the choice of regularization parameter. USSR Comput.
Math,
and Math. Phys. 6, 170-175 (in Russian).
Morozov, V.A. (1968). On discrepancy principle in solving operator equations by regularization method. USSR Comput. Polak, E. (1971).
Math,
Computational
and Math. Phys. 8, 295-309 (in Russian). Methods
in Optimization.
A Unified Approach.
Academy
Press, New York, London. Tikhonov, A.N. (1966). On a stability of the problems of functional's optimization. USSR put. Math,
and Math. Phys.
Com-
6, 631-634 (in Russian).
Tikhonov, A.N. and Vasil'ev, F . P . (1976). T h e methods for solving ill-posed extremal problems. Banach
Center
Publ. 297-342.
Tikhonov, A.N. and Arsenin, V.Ya. (1979). Methods
of Solving
Ill-Posed
Problems.
Nauka,
Problems.
Nauka,
Moscow (in Russian). Tikhonov, A.N., Leonov, A.S., and Yagola, A.G. (1992). Nonlinear
Ill-Posed
Moscow (in Russian). Vasil'ev, F.P. (1980). Numerical Russian).
Methods
for
Solving
Extremal
Problems.
Nauka, Moscow (in
Ill-Posed Problems in Natural Sciences, pp. 71 - 83 A. Tikhonov ( E d . ) 1992 V S P / T V P
TIKHONOV'S APPROACH FOR CONSTRUCTING REGULARIZING ALGORITHMS A.S. LEONOV and A.G. YAGOLA Moscow Institute of Engineering Physics, Kashirskoe shosse, 31, 115409 Moscow, Russia Moscow State University, Len. Gory, 119899 Moscow, Russia ABSTRACT A general scheme for constructing regularizing algorithms on the base of Tikhonov variational approach is considered. It is used for solving linear and nonlinear ill-posed problems. The algorithms with a priori and a posteriori choice of the regularization parameter are reviewed. The optimality of the order of accuracy is studied for a posteriori parameter choice. 1. F O R M U L A T I O N OF T H E P R O B L E M Let (Z, T) be a topological space with a topology r and let D be a nonempty set in Z and U be a metric space with metric p. We assume A to be a class of operators from D into U. Let us fix an operator A £ A and an element u £ U and let us consider the so-called quasisolution problem for the operator equation Az = u
(1)
on D: find a z* £ D for which p{Az*,u)
= inf {p(Az, u): z e D) = fi0.
(2)
In the case D = Z problem (2) gives the pseudosolutions of (1). If the measure of incompatibility fiQ is equal zero, then the solutions of (2) are solutions of (1) on D. The quasisolution problem (2) may be ill-posed. Namely, problem (2) is not solvable for some equations of the form (1). The solution of (2) may be not unique and unstable in (Z, r ) with respect to perturbations of the data {A, u). We assume that to some element u = u € U there corresponds the nonempty set Z* C D of quasisolutions and that Z* may consist of more than one element. Furthermore, we suppose that a functional Q(z) is defined on D and bounded below: ft(>) > Q * = inf z e D ) > 0.
©
T V P Sci. Publ. 1992
A.S. Leonov and A.G. Yagola
72
The ft-optimal quasisolution problem for (1) is as follows: find a z 6 Z* such that ft(z) = inf {ft(z): 2 € Z* } =
ft.
(3)
We denote the set of all ft-optimal quasisolutions of (1) by Z. If D = Z, then Z is the set of ii-optimal pseudosolutions of (1). We suppose that, instead of the unknown exact data (A, u), we are given approximate data (A^, us) which satisfy the following conditions ue € U,
p(u,us)
< 6", AheA,
p(Az,Ahz)
0 of the "closeness" of (A^, ««) to (A, u). The main problem is to construct from the approximate data (Ah, ug, ip, h, 6) in (1) an element z, = zn(Ah, us, ip, h, 6) £ D which r-sequentially converges to the set Z of ii-optimal solutions as rj = (/i, 6) —• 0. We will consider also the problem of the stable estimation of the measure of incompatibility for (1) on D: using the data (Aj,, u$, ip, h, 6), find a number fin such that fj,v —y fi0 as tj —»• 0. Let us formulate our basic assumptions. 1. The class A consists of the operators A from D into U r-sequentially continuous: for any zq € D and any sequence {z„} C D which r-converges to Zo the sequence {p(Azn,Az0)} converges to 0 as n - t oo. 2. The functional fi(z) is T-sequentially lower semicontinuous on D: Vzq e D, V{z„} C D: z „ ^ z 0 => lim inf fi(z„) > i2(z 0 ). n—•oo 3. If K is an arbitrary number such that K > fi*, then the set Q,K = {z € D: Sl(z) < K] is r-sequentially compact in Z. 4. The measure of approximation ip(h,£l) is assumed to be defined for h > 0, ft > i2*, to depend continuously on all its arguments, to be monotonically increasing with respect to ft for any h > 0, and to satisfy the equality V>(0,ft) = 0, Vft > ft*. Conditions 1—3 guarantee that Z ^ 0 . 2. TIKHONOV'S A P P R O A C H An operator 7Z is called a regularizing algorithm for problem (1) — (3) if it gives a solution zv = K(Ah, us, ip, h,S) £ D of the main problem. This notion was introduced by A.N.Tikhonov in basic papers (Tikhonov, 1963a,b). These papers proposed also an approach for constructing the regularizing algorithms named later as Tikhonov's (or variational) scheme. Preparatory to considering this scheme, we first illustrate its applications by two examples.
Tikhonov's
Approach
73
Example 1. Let A be a linear continuous operator from a Hilbert space Z into a normed space U. The operator A is also assumed to be injective over Z. Suppose that (1) is compatible for some ug = u G U and suppose t h a t z € Z is the corresponding solution. The operator A is known exactly but instead of u we are given an approximation us G U: ||w — Uj|| < 6. The scheme of constructing an approximation to the element z from the data (A, us, 6) in (1) is as follows. We introduce the parametrical functional (Tikhonov's, or smoothing, functional) Ma[z]=a\\zW2
+ \\Az-us\\2,
z € Z,
a > 0
(4)
and consider the variational problem: for a fixed a > 0, find an element za 6 Z such that Ma[za]
= inf {Ma[z}:
z € Z}.
(5)
Problem (5) has an unique minimizer (extremal) for each a > 0. As an approximation to z, the element za^ € Z is used, i.e., the solution of (5) for a specially choosed regularization parameter a = a(8). It is the usage of the special choice of regularization parameter that provides the convergence as 6 —+ 0. Two approaches exist to choosing the regularization parameter: 1) the a priori choice, where a is chosen as a function of the "noise" level 0, 62/a(6) —> 0 as 6 —• 0. A widely known a posteriori choice of regularization parameter is the discrepancy principle (Ivanov, 1966; Morozov, 1967), where a( 0 is chosen such that p(a)
= \\Az°
-us\\=6
(6)
holds. Since /3(a) is continuous and monotonically increasing, the solution a( 0. Then problem (5) has the solution za = {zo for 0 < a < Qo!0,zo for a = ao! 0 for a > ao}- Hence, /3(a) = {0 for 0 < a
0, find an element z £ D such that a
Ma[za]
= i n f {Ma[z\.
z 6 £>}.
(8)
Here f(x) is an auxiliary function. A common choice is f(x) = xm, m >2. We denote the set of extremals of (8) which correspond to a given a > 0 by Za. Conditions 1—3 imply that Za ^ 0 . The scheme of constructing an approximation to the set Z includes: (i) the choice of the regularization parameter av — av(Ak, us, h, £); (ii) the fixation of a set Zai, corresponding to an, and a special selection of an element zar> in this set. We take the element zai chosen in this way as a solution of main problem. Procedures (i), (ii) must be accomplished so as to guarantee the convergence z a ' A Z as tj —» 0. Thus, the Tikhonov's regularizing algorithms differ from each other by the method of choosing a n and by the method of selecting za". Sometimes, the latter procedure is not necessary, and one can take an arbitrary element z a ' from Z"* as an approximate solution. In this case we say that the corresponding Tikhonov's algorithm is simple. 3. TIKHONOV'S ALGORITHMS F O R LINEAR P R O B L E M S Suppose that A is a set of the linear bounded operators acting from a locally and uniformly convex reflexive Banach space Z into a normed space U. We assume that a set D is convex and strongly closed in Z. As a topology r we take the topology of weak convergence in Z. Let O(ir) =