Volume 14, Number 1 ISSN:1521-1398 PRINT,1572-9206 ONLINE
January 2012
Journal of Computational Analysis and Applications EUDOXUS PRESS,LLC
Journal of Computational Analysis and Applications ISSNno.’s:1521-1398 PRINT,1572-9206 ONLINE SCOPE OF THE JOURNAL An international publication of Eudoxus Press, LLC(seven times annually) Editor in Chief: George Anastassiou Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152-3240, U.S.A [email protected] http://www.msci.memphis.edu/~ganastss/jocaaa The main purpose of "J.Computational Analysis and Applications" is to publish high quality research articles from all subareas of Computational Mathematical Analysis and its many potential applications and connections to other areas of Mathematical Sciences. Any paper whose approach and proofs are computational,using methods from Mathematical Analysis in the broadest sense is suitable and welcome for consideration in our journal, except from Applied Numerical Analysis articles. Also plain word articles without formulas and proofs are excluded. The list of possibly connected mathematical areas with this publication includes, but is not restricted to: Applied Analysis, Applied Functional Analysis, Approximation Theory, Asymptotic Analysis, Difference Equations, Differential Equations, Partial Differential Equations, Fourier Analysis, Fractals, Fuzzy Sets, Harmonic Analysis, Inequalities, Integral Equations, Measure Theory, Moment Theory, Neural Networks, Numerical Functional Analysis, Potential Theory, Probability Theory, Real and Complex Analysis, Signal Analysis, Special Functions, Splines, Stochastic Analysis, Stochastic Processes, Summability, Tomography, Wavelets, any combination of the above, e.t.c. "J.Computational Analysis and Applications" is a peer-reviewed Journal. See the instructions for preparation and submission of articles to JoCAAA. Webmaster:Ray Clapsadle. Editor’s Assistant:Dr.Razvan Mezei,Lander University,SC 29649, USA.
Journal of Computational Analysis and Applications(JoCAAA) is published by EUDOXUS PRESS,LLC,1424 Beaver Trail Drive,Cordova,TN38016,USA,[email protected]
http://www.eudoxuspress.com. Annual Subscription Prices: For USA and Canada, Institutional: Print $470, Electronic $300, Print and Electronic $500. Individual: Print $150, Electronic $100, Print & Electronic $200. For any other part of the world add $50 more to the above prices for Print. No credit card payments. Copyright © 2012 by Eudoxus Press, LLC. All rights reserved. JoCAAA is printed in the USA. JoCAAA is reviewed and abstracted by AMS Mathematical Reviews, MATHSCI, and Zentralblatt MATH. Reproduction or transmission of any part of JoCAAA, in any form and by any means, without the written permission of the publisher is strictly prohibited. Only educators are permitted to photocopy articles, and only for educational purposes. The publisher assumes no responsibility for the content of published papers.
Editorial Board Associate Editors 1) George A. Anastassiou Department of Mathematical Sciences The University of Memphis Memphis,TN 38152,U.S.A Tel.901-678-3144 e-mail: [email protected] Approximation Theory,Real Analysis, Wavelets, Neural Networks,Probability, Inequalities. 2) J. Marshall Ash Department of Mathematics De Paul University 2219 North Kenmore Ave. Chicago,IL 60614-3504 773-325-4216 e-mail: [email protected] Real and Harmonic Analysis 3) Mark J.Balas Department Head and Professor Electrical and Computer Engineering Dept. College of Engineering University of Wyoming 1000 E. University Ave. Laramie, WY 82071 307-766-5599 e-mail: [email protected] Control Theory,Nonlinear Systems, Neural Networks,Ordinary and Partial Differential Equations, Functional Analysis and Operator Theory 4) Carlo Bardaro Dipartimento di Matematica e Informatica Universita di Perugia Via Vanvitelli 1 06123 Perugia, ITALY TEL+390755853822 +390755855034 FAX+390755855024 E-mail [email protected] Web site: http://www.unipg.it/~bardaro/ Functional Analysis and Approximation Theory, Signal Analysis, Measure Theory, Real Analysis.
20) Burkhard Lenze Fachbereich Informatik Fachhochschule Dortmund University of Applied Sciences Postfach 105018 D-44047 Dortmund, Germany e-mail: [email protected] Real Networks, Fourier Analysis,Approximation Theory 21) Hrushikesh N.Mhaskar Department Of Mathematics California State University Los Angeles,CA 90032 626-914-7002 e-mail: [email protected] Orthogonal Polynomials, Approximation Theory,Splines, Wavelets, Neural Networks 22) M.Zuhair Nashed Department Of Mathematics University of Central Florida PO Box 161364 Orlando, FL 32816-1364 e-mail: [email protected] Inverse and Ill-Posed problems, Numerical Functional Analysis, Integral Equations,Optimization, Signal Analysis 23) Mubenga N.Nkashama Department OF Mathematics University of Alabama at Birmingham Birmingham, AL 35294-1170 205-934-2154 e-mail: [email protected] Ordinary Differential Equations, Partial Differential Equations 24) Charles E.M.Pearce Applied Mathematics Department University of Adelaide Adelaide 5005, Australia e-mail: [email protected] Stochastic Processes, Probability Theory,Harmonic Analysis, Measure Theory, Special Functions, Inequalities
5) Martin Bohner Department of Mathematics and Statistics Missouri S&T Rolla, MO 65409-0020, USA [email protected] web.mst.edu/~bohner Difference equations, differential equations, dynamic equations on time scale, applications in economics, finance, biology. 6) Jerry L.Bona Department of Mathematics The University of Illinois at Chicago 851 S. Morgan St. CS 249 Chicago, IL 60601 e-mail:[email protected] Partial Differential Equations, Fluid Dynamics 7) Luis A.Caffarelli Department of Mathematics The University of Texas at Austin Austin,Texas 78712-1082 512-471-3160 e-mail: [email protected] Partial Differential Equations 8) George Cybenko Thayer School of Engineering Dartmouth College 8000 Cummings Hall, Hanover,NH 03755-8000 603-646-3843 (X 3546 Secr.) e-mail: [email protected] Approximation Theory and Neural Networks 9) Ding-Xuan Zhou Department Of Mathematics City University of Hong Kong 83 Tat Chee Avenue Kowloon,Hong Kong 852-2788 9708,Fax:852-2788 8561 e-mail: [email protected] Approximation Theory, Spline functions,Wavelets 10) Sever S.Dragomir School of Computer Science and Mathematics, Victoria University, PO Box 14428, Melbourne City, MC 8001,AUSTRALIA Tel. +61 3 9688 4437
25) Svetlozar T.Rachev Department of Statistics and Applied Probability University of California at Santa Barbara, Santa Barbara,CA 93106-3110 805-893-4869 e-mail: [email protected] and Chair of Econometrics,Statistics and Mathematical Finance School of Economics and Business Engineering University of Karlsruhe Kollegium am Schloss, Bau II,20.12, R210 Postfach 6980, D-76128, Karlsruhe,GERMANY. Tel +49-721-608-7535, +49-721-608-2042(s) Fax +49-721-608-3811 [email protected] Probability,Stochastic Processes and Statistics,Financial Mathematics, Mathematical Economics. 26) Alexander G. Ramm Mathematics Department Kansas State University Manhattan, KS 66506-2602 e-mail: [email protected] Inverse and Ill-posed Problems, Scattering Theory, Operator Theory, Theoretical Numerical Analysis, Wave Propagation, Signal Processing and Tomography 27) Ervin Y.Rodin Department of Systems Science and Applied Mathematics Washington University,Campus Box 1040 One Brookings Dr.,St.Louis,MO 631304899 314-935-6007 e-mail: [email protected] Systems Theory, Semantic Control, Partial Differential Equations, Calculus of Variations, Optimization and Artificial Intelligence, Operations Research, Math.Programming 28) T. E. Simos Department of Computer Science and Technology Faculty of Sciences and Technology University of Peloponnese
Fax +61 3 9688 4050 [email protected] Inequalities,Functional Analysis, Numerical Analysis, Approximations, Information Theory, Stochastics. 11) Saber N.Elaydi Department Of Mathematics Trinity University 715 Stadium Dr. San Antonio,TX 78212-7200 210-736-8246 e-mail: [email protected] Ordinary Differential Equations, Difference Equations 12) Augustine O.Esogbue School of Industrial and Systems Engineering Georgia Institute of Technology Atlanta,GA 30332 404-894-2323 e-mail: [email protected] Control Theory,Fuzzy sets, Mathematical Programming, Dynamic Programming,Optimization
GR-221 00 Tripolis, Greece Postal Address: 26 Menelaou St. Anfithea - Paleon Faliron GR-175 64 Athens, Greece [email protected] Numerical Analysis 29) I. P. Stavroulakis Department of Mathematics University of Ioannina 451-10 Ioannina, Greece [email protected] Differential Equations Phone +3 0651098283 30) Manfred Tasche Department of Mathematics University of Rostock D-18051 Rostock,Germany [email protected] Numerical Fourier Analysis, Fourier Analysis,Harmonic Analysis, Signal Analysis, Spectral Methods, Wavelets, Splines, Approximation Theory
13) Christodoulos A.Floudas Department of Chemical Engineering Princeton University Princeton,NJ 08544-5263 609-258-4595(x4619 assistant) e-mail: [email protected] OptimizationTheory&Applications, Global Optimization
31) Gilbert G.Walter Department Of Mathematical Sciences University of Wisconsin-Milwaukee,Box 413, Milwaukee,WI 53201-0413 414-229-5077 e-mail: [email protected] Distribution Functions, Generalised Functions, Wavelets
14) J.A.Goldstein Department of Mathematical Sciences The University of Memphis Memphis,TN 38152 901-678-3130 e-mail:[email protected] Partial Differential Equations, Semigroups of Operators
32) Halbert White Department of Economics University of California at San Diego La Jolla,CA 92093-0508 619-534-3502 e-mail: [email protected] Econometric Theory, Approximation Theory,Neural Networks
15) H.H.Gonska Department of Mathematics University of Duisburg Duisburg, D-47048 Germany 011-49-203-379-3542 e-mail:[email protected] Approximation Theory, Computer Aided Geometric Design
33) Xin-long Zhou Fachbereich Mathematik, Fachgebiet Informatik Gerhard-Mercator-Universitat Duisburg Lotharstr.65,D-47048 Duisburg,Germany e-mail:[email protected] Fourier Analysis,Computer-Aided Geometric Design, Computational Complexity, Multivariate
16) John R. Graef Department of Mathematics University of Tennessee at Chattanooga Chattanooga, TN 37304 USA [email protected] Ordinary and functional differential equations, difference equations, impulsive systems, differential inclusions, dynamic equations on time scales , control theory and their applications 17) Weimin Han Department of Mathematics University of Iowa Iowa City, IA 52242-1419 319-335-0770 e-mail: [email protected] Numerical analysis, Finite element method, Numerical PDE, Variational inequalities, Computational mechanics 18) Christian Houdre School of Mathematics Georgia Institute of Technology Atlanta,Georgia 30332 404-894-4398 e-mail: [email protected] Probability, Mathematical Statistics, Wavelets 19) V. Lakshmikantham Department of Mathematical Sciences Florida Institute of Technology Melbourne, FL 32901 e-mail: [email protected] Ordinary and Partial Differential Equations, Hybrid Systems, Nonlinear Analysis
Approximation Theory, Approximation and Interpolation Theory 34) Xiang Ming Yu Department of Mathematical Sciences Southwest Missouri State University Springfield,MO 65804-0094 417-836-5931 e-mail: [email protected] Classical Approximation Theory, Wavelets 35) Lotfi A. Zadeh Professor in the Graduate School and Director, Computer Initiative, Soft Computing (BISC) Computer Science Division University of California at Berkeley Berkeley, CA 94720 Office: 510-642-4959 Sec: 510-642-8271 Home: 510-526-2569 FAX: 510-642-1712 e-mail: [email protected] Fuzzyness, Artificial Intelligence, Natural language processing, Fuzzy logic 36) Ahmed I. Zayed Department Of Mathematical Sciences DePaul University 2320 N. Kenmore Ave. Chicago, IL 60614-3250 773-325-7808 e-mail: [email protected] Shannon sampling theory, Harmonic analysis and wavelets, Special functions and orthogonal polynomials, Integral transforms
Instructions to Contributors Journal of Computational Analysis and Applications A quarterly international publication of Eudoxus Press, LLC, of TN.
Editor in Chief: George Anastassiou Department of Mathematical Sciences University of Memphis Memphis, TN 38152-3240, U.S.A.
1. Manuscripts files in Latex and PDF and in English, should be submitted via email to the Editor-in-Chief: Prof.George A. Anastassiou Department of Mathematical Sciences The University of Memphis Memphis,TN 38152, USA. Tel. 901.678.3144 e-mail: [email protected] Authors may want to recommend an associate editor the most related to the submission to possibly handle it. Also authors may want to submit a list of six possible referees, to be used in case we cannot find related referees by ourselves.
2. Manuscripts should be typed using any of TEX,LaTEX,AMS-TEX,or AMS-LaTEX and according to EUDOXUS PRESS, LLC. LATEX STYLE FILE. (Click HERE to save a copy of the style file.)They should be carefully prepared in all respects. Submitted articles should be brightly typed (not dot-matrix), double spaced, in ten point type size and in 8(1/2)x11 inch area per page. Manuscripts should have generous margins on all sides and should not exceed 24 pages. 3. Submission is a representation that the manuscript has not been published previously in this or any other similar form and is not currently under consideration for publication elsewhere. A statement transferring from the authors(or their employers,if they hold the copyright) to Eudoxus Press, LLC, will be required before the manuscript can be accepted for publication.The Editor-in-Chief will supply the necessary forms for this transfer.Such a written transfer of copyright,which previously was assumed to be implicit in the act of submitting a manuscript,is necessary under the U.S.Copyright Law in order for the publisher to carry through the dissemination of research results and reviews as widely and effective as possible.
4. The paper starts with the title of the article, author's name(s) (no titles or degrees), author's affiliation(s) and e-mail addresses. The affiliation should comprise the department, institution (usually university or company), city, state (and/or nation) and mail code. The following items, 5 and 6, should be on page no. 1 of the paper. 5. An abstract is to be provided, preferably no longer than 150 words. 6. A list of 5 key words is to be provided directly below the abstract. Key words should express the precise content of the manuscript, as they are used for indexing purposes. The main body of the paper should begin on page no. 1, if possible. 7. All sections should be numbered with Arabic numerals (such as: 1. INTRODUCTION) . Subsections should be identified with section and subsection numbers (such as 6.1. Second-Value Subheading). If applicable, an independent single-number system (one for each category) should be used to label all theorems, lemmas, propositions, corollaries, definitions, remarks, examples, etc. The label (such as Lemma 7) should be typed with paragraph indentation, followed by a period and the lemma itself. 8. Mathematical notation must be typeset. Equations should be numbered consecutively with Arabic numerals in parentheses placed flush right, and should be thusly referred to in the text [such as Eqs.(2) and (5)]. The running title must be placed at the top of even numbered pages and the first author's name, et al., must be placed at the top of the odd numbed pages. 9. Illustrations (photographs, drawings, diagrams, and charts) are to be numbered in one consecutive series of Arabic numerals. The captions for illustrations should be typed double space. All illustrations, charts, tables, etc., must be embedded in the body of the manuscript in proper, final, print position. In particular, manuscript, source, and PDF file version must be at camera ready stage for publication or they cannot be considered. Tables are to be numbered (with Roman numerals) and referred to by number in the text. Center the title above the table, and type explanatory footnotes (indicated by superscript lowercase letters) below the table. 10. List references alphabetically at the end of the paper and number them consecutively. Each must be cited in the text by the appropriate Arabic numeral in square brackets on the baseline. References should include (in the following order): initials of first and middle name, last name of author(s) title of article,
name of publication, volume number, inclusive pages, and year of publication. Authors should follow these examples: Journal Article 1. H.H.Gonska,Degree of simultaneous approximation of bivariate functions by Gordon operators, (journal name in italics) J. Approx. Theory, 62,170-191(1990).
Book 2. G.G.Lorentz, (title of book in italics) Bernstein Polynomials (2nd ed.), Chelsea,New York,1986.
Contribution to a Book 3. M.K.Khan, Approximation properties of beta operators,in(title of book in italics) Progress in Approximation Theory (P.Nevai and A.Pinkus,eds.), Academic Press, New York,1991,pp.483-495.
11. All acknowledgements (including those for a grant and financial support) should occur in one paragraph that directly precedes the References section.
12. Footnotes should be avoided. When their use is absolutely necessary, footnotes should be numbered consecutively using Arabic numerals and should be typed at the bottom of the page to which they refer. Place a line above the footnote, so that it is set off from the text. Use the appropriate superscript numeral for citation in the text. 13. After each revision is made please again submit via email Latex and PDF files of the revised manuscript, including the final one. 14. Effective 1 Nov. 2009 for current journal page charges, contact the Editor in Chief. Upon acceptance of the paper an invoice will be sent to the contact author. The fee payment will be due one month from the invoice date. The article will proceed to publication only after the fee is paid. The charges are to be sent, by money order or certified check, in US dollars, payable to Eudoxus Press, LLC, to the address shown on the Eudoxus homepage. No galleys will be sent and the contact author will receive one (1) electronic copy of the journal issue in which the article appears.
15. This journal will consider for publication only papers that contain proofs for their listed results.
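For orientation, the layout rules in items 4-8 above can be illustrated by a minimal LaTeX skeleton. This is only a sketch assembled from the instructions; it is not the official EUDOXUS PRESS style file, and all names shown are placeholders.

\documentclass[10pt]{article}
\begin{document}
\title{Title of the Article}
\author{First Author\\ Department, Institution, City, State, Mail Code\\ \texttt{[email protected]}}
\maketitle
\begin{abstract} Preferably no longer than 150 words. \end{abstract}
\noindent\textbf{Key words:} five key words placed directly below the abstract.
\section{INTRODUCTION}  % sections numbered with Arabic numerals
\subsection{Second-Level Subheading}  % subsections numbered 1.1, 1.2, ...
\textbf{Lemma 1.} Statement of the lemma, labeled with an independent single-number system.
\begin{equation} a^2 + b^2 = c^2 \end{equation}  % equation numbers flush right, cited as Eq.(1)
\section*{Acknowledgements}  % one paragraph directly preceding the References
\begin{thebibliography}{9}
\bibitem{G90} H.H. Gonska, Degree of simultaneous approximation of bivariate functions by Gordon operators, \textit{J. Approx. Theory}, 62, 170-191(1990).
\end{thebibliography}
\end{document}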
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1,10-18, 2012, COPYRIGHT 2012 EUDOXUS PRESS, 10 LLC
NEW RESULTS ON THE CONVERGENCE OF SOLUTIONS FOR A CERTAIN FOURTH ORDER NONLINEAR DIFFERENTIAL EQUATION OLUFEMI ADEYINKA ADESINA AND BABATUNDE SUNDAY OGUNDARE
Abstract. This paper is concerned with the convergence of solutions to the fourth order nonlinear differential equation
x^{(iv)} + φ(\dddot{x}) + f(\ddot{x}) + g(\dot{x}) + h(x) = p(t, x, \dot{x}, \ddot{x}, \dddot{x}),
where the nonlinear functions φ, f, g, h and p are continuous. By constructing a complete Lyapunov function, we answer the question of whether convergence results, which do not appear to have received any attention in the literature, can be proved for the above equation.
1. Introduction Fourth order nonlinear differential equations have been investigated by several authors in order to study their qualitative behaviour of solutions. See for instance ([1]-[23]). Significant results in this direction have been obtained by Abou-el-ela and Sadek [1], Sadek [15], Sadek and AL-Elaiw [16] on asymptotic stability. Also, Chin [9], Ogundare [11], Ogundare and Okecha [13], Tiryaki and Tunc [19], Tunc [20, 21, 22] and Wu and Xiong [23] made contributions on boundedness and stability properties of solutions. An interesting work on the existence of limiting regime in the sense of Demidovich can be found in Afuwape [6]. All these were done with the aid of Lyapunov’s second method. By employing the frequency domain method, Adesina [2] and Afuwape [4, 7, 8] obtained criteria that ensured the existence of periodic, almost periodic, exponential stability and dissipative solutions. It should be noted that the above mentioned papers contain rich bibliography. However, with respect to our observation in relevant literature, comparatively little have been done on convergence of solutions to fourth order nonlinear differential equations (see for instance Afuwape [3, 5]). The convergence of solutions to nonlinear differential equations is important both theoretically and in practice, since small perturbations from the equilibrium point imply that the trajectory will return to it when time goes to infinity. It is well known that the problem of the qualitative behaviour of solutions of the equation ... ... x(iv) + φ( x) + f (¨ x) + g(x) ˙ + h(x) = p(t, x, x, ˙ x ¨, x) (1.1) in which the nonlinear functions φ, f, g, h and p are continuous and depend (at most) only on the arguments displayed explicitly, has so far remained intractable due to (i) the number of the nonlinear terms φ, f, g and h simultaneously involved 2000 Mathematics Subject Classification. 34C11, 34C25, 34C27. Key words and phrases. Convergence of solutions; nonlinear fourth order equations; RouthHurwitz interval; Lyapunov functions. 1
and (ii) the form of the functions φ and f (see for instance Tejumola [17]). Here and elsewhere, all the solutions considered and all the functions which appear are supposed real. The dots indicate differentiation with respect to t. Furthermore, we shall say that two solutions x_1(t), x_2(t) of the equation (1.1) converge (to each other) if x_2 − x_1 → 0, \dot{x}_2 − \dot{x}_1 → 0, \ddot{x}_2 − \ddot{x}_1 → 0, \dddot{x}_2 − \dddot{x}_1 → 0 as t → ∞. An interesting but simple case of the problem arises if p = 0 and φ, f, g, h are such that the equation (1.1) is linear, i.e. if (1.1) is of the form
x^{(iv)} + a\dddot{x} + b\ddot{x} + c\dot{x} + dx = 0,    (1.2)
where a, b, c and d are constants. In this case, all solutions tend to the trivial solution as t → ∞, provided that the Routh-Hurwitz criteria
a > 0,  ab − c > 0,  (ab − c)c − a^2 d > 0,  d > 0    (1.3)
are satisfied.
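As a quick numerical illustration (our own check, not part of the paper): for a = 4, b = 6, c = 4, d = 1, the coefficients of (D + 1)^4 x = 0 whose characteristic roots are all −1, one has a > 0, ab − c = 24 − 4 = 20 > 0, (ab − c)c − a^2 d = 80 − 16 = 64 > 0 and d > 0, so (1.3) holds and indeed every solution tends to zero.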
Generalization of this result, as well as of conditions (1.3), to quite different nonlinear equations in which the functions are assumed differentiable, at times twice differentiable, and in some cases multiplied by signum functions, has been obtained in the literature; see for instance Reissig et al. [14]. Unfortunately, the corresponding situation when all four nonlinear functions are not necessarily differentiable has not been as fully investigated, and all available results in this direction have been obtained for equations that are special forms of the equation (1.1). Specifically, in this regard, the equation (1.1) has not attracted the interest of any author. Also, most of the existing results are characterized by incomplete Lyapunov functions; these we find too weak. On the one hand, the purpose of this paper is to investigate whether convergence of solutions to the fourth order nonlinear differential equation (1.1) is provable under non-differentiability conditions on all the nonlinear functions involved. On the other hand, by constructing a complete Lyapunov function, which is a difficult task, the paper circumvents the usual heavy restrictions on φ, f, f′, g, g′, h and h′ which dominate the results in some of the above mentioned works. Our results generalize previous results of Afuwape [3, 5] and Ogundare and Okecha [12]. They also complement and improve existing results on fourth order nonlinear differential equations in the literature. This paper is organized as follows. In Section 2, we present basic assumptions and main results. Section 3 is devoted to some preliminary results, while in Section 4 we give the proofs of the main results.

2. Assumptions and Main Results

Assumptions:
(1) The function p(t, x, \dot{x}, \ddot{x}, \dddot{x}) is equal to q(t) + r(t, x, \dot{x}, \ddot{x}, \dddot{x}) with r(t, 0, 0, 0, 0) = 0 for all t;
(2) for some positive constants a, b, c, d, (ab − c)c − a^2 d > 0 and (ab − c) > 0;
(3) for some positive constants a, b, c, d, D, ∆_0, ∆_1, K_0 and K_1, the intervals
I_0 ≡ \Big[ ∆_0, K_0 \frac{(ab − c)c}{a^2} \Big],    (2.1)
I_1 ≡ \Big[ ∆_1, K_1 \frac{a^2 d + c^2}{ac} \Big]    (2.2)
are in the Routh-Hurwitz interval.
The following results are proved.
Theorem 2.1. In addition to the basic assumptions and 1-3 above, suppose that φ(0) = f(0) = g(0) = h(0) = 0 and that
(i) there are positive constants a, a_0, b, b_0, c and c_0 such that
a ≤ \frac{φ(w_2) − φ(w_1)}{w_2 − w_1} ≤ a_0,  w_2 ≠ w_1,    (2.3)
b ≤ \frac{f(z_2) − f(z_1)}{z_2 − z_1} ≤ b_0,  z_2 ≠ z_1,    (2.4)
c ≤ \frac{g(y_2) − g(y_1)}{y_2 − y_1} ≤ c_0,  y_2 ≠ y_1;    (2.5)
(ii) for any ζ, η (η ≠ 0), the increment ratios for h and g satisfy
\frac{h(ζ + η) − h(ζ)}{η} ∈ I_0,    (2.6)
\frac{g(ζ + η) − g(ζ)}{η} ∈ I_1;    (2.7)
(iii) there is a continuous function ϑ(t) such that
|r(t, x_2, y_2, z_2, w_2) − r(t, x_1, y_1, z_1, w_1)| ≤ ϑ(t)(|x_2 − x_1| + |y_2 − y_1| + |z_2 − z_1| + |w_2 − w_1|)    (2.8)
holds for arbitrary t, x_1, y_1, z_1, w_1, x_2, y_2, z_2, w_2. Then there exists a constant D_1 > 0 such that if
\int_0^t ϑ^ϱ(τ)\,dτ ≤ D_1    (2.9)
for some ϱ with 1 ≤ ϱ ≤ 2, then all solutions of (1.1) converge.

Theorem 2.2. Suppose that all the conditions of Theorem 2.1 are satisfied. Let x_1(t), x_2(t) be any two solutions of the equation (1.1). Then for each fixed ϱ in the range 1 ≤ ϱ ≤ 2, there exist positive constants D_2, D_3 and D_4 such that for t_2 ≥ t_1,
S(t_2) ≤ D_2 S(t_1) \exp\Big( −D_3(t_2 − t_1) + D_4 \int_{t_1}^{t_2} ϑ^ϱ(τ)\,dτ \Big),    (2.10)
where
S(t) = (x_2(t) − x_1(t))^2 + (\dot{x}_2(t) − \dot{x}_1(t))^2 + (\ddot{x}_2(t) − \ddot{x}_1(t))^2 + (\dddot{x}_2(t) − \dddot{x}_1(t))^2.    (2.11)
Remark 2.3. If p = 0 and assumption 3 with hypotheses (i) and (ii) of Theorem 2.1 hold for arbitrary η ≠ 0, then the trivial solution of equation (1.1) is exponentially stable.

Remark 2.4. If p = 0 and assumption 3 with hypotheses (i) and (ii) of Theorem 2.1 hold for arbitrary η ≠ 0 and ζ = 0, then there exists a constant D_5 > 0 such that every solution x(t) of (1.1) satisfies
|x(t)| ≤ D_5,  |\dot{x}(t)| ≤ D_5,  |\ddot{x}(t)| ≤ D_5,  |\dddot{x}(t)| ≤ D_5.
For the rest of this article, D1 , D2 , D3 , . . . stand for positive constants. Their identities are preserved throughout this paper.
3. Preliminary Result

Let Q(t) = \int_0^t q(τ)\,dτ. By setting \dot{x} = y, \dot{y} = z, \dot{z} = w, it is convenient to consider the equation (1.1) as the equivalent system
\dot{x} = y,  \dot{y} = z,  \dot{z} = w,
\dot{w} = −φ(w) − f(z) − g(y) − h(x) + r(t, x, y, z, w) + Q(t).    (3.1)
Let x_i(t), y_i(t), z_i(t), w_i(t) (i = 1, 2) be two solutions of the system (3.1) such that inequalities (2.3)-(2.4) and
∆_0 ≤ \frac{h(x_2) − h(x_1)}{x_2 − x_1} ≤ K_0 \frac{(ab − c)c}{a^2},    (3.2)
∆_1 ≤ \frac{g(y_2) − g(y_1)}{y_2 − y_1} ≤ K_1 \frac{a^2 d + c^2}{ac}    (3.3)
are satisfied, where the constants remain as defined in Section 2. The main tool in the proofs of the convergence theorems will be the following function:
2V = [β(1 − ε)x + γy + δz + w]^2 + [(1 − ε)D − 1](δz + w)^2 + βD[ε + (1 − ε)D − 1]y^2 + γ(D − 1)z^2 + Dw^2 + β^2(1 − ε)x^2 + 2γδ[(1 − ε)^2 D − 1]yz,    (3.4)
where 0 < ε < 1 − ε < 1, γδ/β > 1 − ε, β, γ, δ are positive real numbers and D = 1 + \frac{β(1 − ε)[γδ − β(1 − ε)]}{γδ − β}, with D > \frac{1}{(1 − ε)^2} always.
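For orientation, a short check of our own (not part of the authors' argument): since D > 1/(1 − ε)^2 and 0 < 1 − ε < 1, we have (1 − ε)^2 D > 1 and also (1 − ε)D > 1/(1 − ε) > 1, so the bracketed coefficients (1 − ε)D − 1 and (1 − ε)^2 D − 1 appearing in (3.4) are positive.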
Lemma 3.1. The function V defined in equation (3.4) is positive definite; moreover, there exist constants D_6 and D_7 such that
D_6(x^2 + y^2 + z^2 + w^2) ≤ V ≤ D_7(x^2 + y^2 + z^2 + w^2).

Proof. Indeed, we can rearrange 2V as
2V = 2V_1 + 2V_2,    (3.5)
where
2V_1 = [β(1 − ε)x + γy + δz + w]^2 + [(1 − ε)D − 1](δz + w)^2 + Dw^2 + β^2(1 − ε)x^2 + βDεy^2;
2V_2 = βD[(1 − ε)D − 1]y^2 + 2γδ[(1 − ε)^2 D − 1]yz + γ(D − 1)z^2.
Since 0 < ε < 1, γδ/β > 1 − ε, β, γ, δ > 0 and D can be chosen such that D > 1/(1 − ε), it follows that V_1 is positive definite. Also V_2, regarded as a quadratic form in y and z, is always positive since the conditions on all the parameters involved are satisfied. Hence V is positive definite. Therefore, a constant D_6 > 0 can be found such that
D_6(x^2 + y^2 + z^2 + w^2) ≤ V.    (3.6)
Furthermore, by using the Schwarz inequality |y||z| ≤ \frac{1}{2}(y^2 + z^2), it follows that |2V_2| ≤ D^*(y^2 + z^2) for some D^* = D^*(β, γ, δ, D, ε). Thus there exists D_7 such that
V ≤ D_7(x^2 + y^2 + z^2 + w^2).    (3.7)
On combining inequalities (3.6) and (3.7), we have
D_6(x^2 + y^2 + z^2 + w^2) ≤ V ≤ D_7(x^2 + y^2 + z^2 + w^2).    (3.8)
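For completeness, a one-line justification of the elementary inequality used above (our own addition): 2|y||z| ≤ y^2 + z^2 follows from (|y| − |z|)^2 ≥ 0.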
Lemma 3.2. Let the hypotheses of Theorem 2.1 hold, and let the function W(t) = W(x_2 − x_1, y_2 − y_1, z_2 − z_1, w_2 − w_1) be defined by
2W = [β(1 − ε)(x_2 − x_1) + γ(y_2 − y_1) + δ(z_2 − z_1) + (w_2 − w_1)]^2 + [(1 − ε)D − 1](δ(z_2 − z_1) + (w_2 − w_1))^2 + βD[ε + (1 − ε)D − 1](y_2 − y_1)^2 + γ(D − 1)(z_2 − z_1)^2 + D(w_2 − w_1)^2 + β^2(1 − ε)(x_2 − x_1)^2 + 2γδ[(1 − ε)^2 D − 1](y_2 − y_1)(z_2 − z_1).
Then there exist positive constants D_8 and D_9 such that
\frac{dW}{dt} ≤ −2D_8 S + D_9 S^{1/2} |θ|,    (3.9)
where θ = r(t, x_2, y_2, z_2, w_2) − r(t, x_1, y_1, z_1, w_1).

Proof. On using the system (3.1), a direct computation of \frac{dW}{dt} gives, after simplifications,
\frac{dW}{dt} = −U_1 + U_2,    (3.10)
where
U_1 = β(1 − ε)H(x_2, x_1)(x_2 − x_1)^2 + γ[G(y_2, y_1) − β(1 − ε)](y_2 − y_1)^2 + Dδ(1 − ε)[F(z_2, z_1) − γ(1 − ε)](z_2 − z_1)^2 + D[Φ(w_2, w_1) − δ(1 − ε)](w_2 − w_1)^2 − β(1 − ε)[G(y_2, y_1) − β](x_2 − x_1)(y_2 − y_1) − β(1 − ε)[F(z_2, z_1) − γ](x_2 − x_1)(z_2 − z_1) − β(1 − ε)[Φ(w_2, w_1) − δ](x_2 − x_1)(w_2 − w_1) − [Dδ(1 − ε)G(y_2, y_1) + γ(F(z_2, z_1) − γ) − Dβγ](y_2 − y_1)(z_2 − z_1) − [DG(y_2, y_1) + γΦ(w_2, w_1) − β(1 − ε) − Dγδ(1 − ε)^2](y_2 − y_1)(w_2 − w_1) − [DF(z_2, z_1) + DδΦ(w_2, w_1) + D(δ^2(1 − ε) + γ)](z_2 − z_1)(w_2 − w_1)    (3.11)
and
U_2 = θ[β(1 − ε)(x_2 − x_1) + γ(y_2 − y_1) + Dδ(1 − ε)(z_2 − z_1) + D(w_2 − w_1)],    (3.12)
with
H(x_2, x_1) = \frac{h(x_2) − h(x_1)}{x_2 − x_1}, x_2 ≠ x_1;  G(y_2, y_1) = \frac{g(y_2) − g(y_1)}{y_2 − y_1}, y_2 ≠ y_1;  F(z_2, z_1) = \frac{f(z_2) − f(z_1)}{z_2 − z_1}, z_2 ≠ z_1;  Φ(w_2, w_1) = \frac{φ(w_2) − φ(w_1)}{w_2 − w_1}, w_2 ≠ w_1.
Furthermore, let χ_1 = G(y_2, y_1) − β (y_2 ≠ y_1), χ_2 = F(z_2, z_1) − γ (z_2 ≠ z_1) and χ_3 = Φ(w_2, w_1) − δ (w_2 ≠ w_1). Also let λ_1 = G(y_2, y_1) − β(1 − ε) (y_2 ≠ y_1), λ_2 = F(z_2, z_1) − γ(1 − ε) (z_2 ≠ z_1) and λ_3 = Φ(w_2, w_1) − δ(1 − ε) (w_2 ≠ w_1). Define
\sum_{i=1}^{3} µ_i = 1;  \sum_{i=1}^{7} ν_i = 1;  \sum_{i=1}^{7} κ_i = 1;  \sum_{i=1}^{7} τ_i = 1,
with µ_i > 0, ν_i > 0, κ_i > 0 and τ_i > 0. Then we can rearrange U_1 as
U_1 = W_1 + W_2 + W_3 + W_4 + W_5 + W_6 + W_7 + W_8 + W_9 + W_{10} + W_{11} + W_{12},
where µ1 β(1 − )H(x2 , x1 )(x2 − x1 )2 + β(1 − )χ1 (x2 − x1 )(y2 − y1 ) + ν1 γλ1 (y2 − y1 )2 ; µ1 β(1 − )H(x2 , x1 )(x2 − x1 )2 + β(1 − )χ2 (x2 − x1 )(z2 − z1 ) +κ1 Dδ(1 − )λ2 (z2 − z1 )2 ; W3 = µ1 β(1 − )H(x2 , x1 )(x2 − x1 )2 + β(1 − )χ3 (x2 − x1 )(w2 − w1 ) + τ1 Dλ3 (w2 − w1 )2 ; W4 = ν2 γλ1 (y2 − y1 )2 + Dδ(1 − )λ1 (y2 − y1 )(z2 − z1 ) + κ2 Dδ(1 − )λ2 (z2 − z1 )2 ; W5 = ν3 γλ1 (y2 − y1 )2 + γχ2 (y2 − y1 )(z2 − z1 ) + κ3 Dδ(1 − )λ2 (z2 − z1 )2 ; W6 = ν4 γλ1 (y2 − y1 )2 + Dβδ((1 − )2 − 1)(y2 − y1 )(z2 − z1 ) + κ4 Dδ(1 − )λ2 (z2 − z1 )2 ; W7 = ν5 γλ1 (y2 − y1 )2 + Dχ1 (y2 − y1 )(w2 − w1 ) + τ2 Dλ3 (w2 − w1 )2 ; W8 = ν6 γλ1 (y2 − y1 )2 + γχ3 (y2 − y1 )(w2 − w1 ) + τ3 Dλ3 (w2 − w1 )2 ; W9 = ν7 γλ1 (y2 − y1 )2 + [D(β − γδ(1 − )2 ) + γδ − β(1 − )](y2 − y1 )(w2 − w1 ) +τ4 Dλ3 (w2 − w1 )2 ; W10 = κ5 Dδ(1 − )λ2 (z2 − z1 )2 + Dχ2 (z2 − z1 )(w2 − w1 ) + τ5 Dλ3 (w2 − w1 )2 ; W11 = κ6 Dδ(1 − )λ2 (z2 − z1 )2 + Dδχ3 (z2 − z1 )(w2 − w1 ) + τ6 Dλ3 (w2 − w1 )2 ; W12 = κ7 Dδ(1 − )λ2 (z2 − z1 )2 + D[δ 2 (1 − ) + 2γ + δ 2 ](z2 − z1 )(w2 − w1 ) +τ7 Dλ3 (w2 − w1 )2 .
W1 = W2 =
It is not difficult to see that each of W1 , W2 , ..., W12 is quadratic in its respective variables. Thus we can use the fact that any quadratic of the form AX 2 + BXY + CY 2 is non negative if 4AC − B 2 ≥ 0 to obtain the following inequalities: W1 ≥ 0, W2 W3 W4 W5 W6 W7 W8
≥ 0, ≥ 0, ≥ 0, ≥ 0, ≥ 0, ≥ 0, ≥ 0,
4µ1 ν1 γH(x2 ,x1 )λ1 , for x2 6= x1 ; β(1−) 4Dµ2 κ1 δH(x2 ,x1 )λ2 2 χ2 ≤ , for x2 6= x1 ; β 1 H(x2 ,x1 )λ3 , for x2 6= x1 ; χ3 2 ≤ 4Dµ3 τβ(1−) 2 κ2 γλ2 λ1 ≤ 4νβ(1−) , for z2 6= z1 χ2 2 ≤ 4ν3 κ3 Dδ(1 − )λ1 λ2 , for y2 6= y1 , z2 6= z1 ; 2 ((1−)2 −1)2 λ2 ≥ Dδβ 4ν4 γκ4 (1−)λ1 , for y2 6= y1 , z2 6= z1 2 λ1 λ3 χ1 2 ≤ 4ν5 τD , for y2 6= y1 , w2 6= w1 ; 4Dν6 τ3 λ1 λ3 2 χ3 ≤ , for y2 6= y1 , w2 6= w1 ; γ [D(β−γδ(1−)2 )+γδ−β(1−)]2 λ3 ≥ , for y2 6= y1 , w2 6= 4ν7 γτ4 Dλ1 χ2 2 ≤ 4τ5 κ5 δ(1 − )λ2 λ3 , for z2 6= z1 , w2 6= w1 ; 2 λ3 , for z2 6= z1 , w2 6= w1 ; χ3 2 ≤ 4κ6 τ6 (1−)λ γ 2 2 [δ (1−) )+2γ+δ 2 ]2 λ3 ≥ 4κ7 τ7 δ(1−)λ2 , for z2 6= z1 , w2 6= w1 .
if χ1 2 ≤ if if if if if if if
W9 ≥ 0, W10 ≥ 0, W11 ≥ 0,
if if if
W12 ≥ 0,
if
w1 ;
Thus U1 ≥ W1 , provided that 2 ,x1 )λ1 ν5 τ2 λ1 λ3 ; ; 0 ≤ χ1 2 ≤ 4 min µ1 ν1 γH(x β(1−) D 2 ,x1 )λ2 0 ≤ χ2 2 ≤ 4 min Dµ2 κ1 δH(x ; ν κ Dδ(1 − )λ1 λ2 ; τ5 κ5 δ(1 − )λ2 λ3 ; 3 3 β 1 H(x2 ,x1 )λ3 4Dν6 τ3 λ1 λ3 4κ6 τ6 (1−)λ2 λ3 0 ≤ χ3 2 ≤ 4 min 4Dµ3 τβ(1−) ; ; ; γ γ 4ν2 κ2 γλ2 λ1 ≤ β(1−) ; 2
0 ≤ λ3 2 ≤
2
2
((1−) −1) λ2 ≥ Dδβ 4ν4 γκ4 (1−)λ1 ; [D(β−γδ(1−) 2 )+γδ−β(1−)]2 [δ 2 (1−)2 )+2γ+δ 2 ]2 ; 4κ7 τ7 δ(1−)λ2 , max 4ν7 γτ4 Dλ1
where H and G lie respectively in h [(ab − c)c] i I0 ≡ ∆0 , K0 , a2
h [(a2 d + c2 )] i I1 ≡ ∆1 , K1 ac with K0 < 1, K1 < 1, ∆0 > 0 and ∆1 > 0 constants. By choosing 2D8 = min β(1 − )∆0 ; γλ1 ; Dδ(1 − )λ2 ; Dλ3 , it follows that U1 ≥ W1 ≥ 2D8 S.
(3.13)
D9 = max β(1 − ; γ; Dδ(1 − ); D(w2 − w1 ) ,
(3.14)
Furthermore, if we choose we have 1
U2 ≤ D9 S 2 |θ|. On combining (3.13) and (3.14) in (3.10), we obtain (3.9). This completes the proof of Lemma 2. 4. Proof of Main Results Proof of Theorem 2.2. We shall first give the proof of the Theorem 2.2. For this, let % be any constant in the range 1 ≤ % ≤ 2 and set σ = 1 − %2 so that 0 ≤ σ ≤ 21 . We re-write inequality (3.9) in the form dW + D8 S ≤ D9 S 1/2 |θ| − D8 S, dt from which
(4.1)
dW + D8 S ≡ D10 S σ W ∗ , dt
where 1
(4.2) W ∗ = S ( 2 −σ) [|θ| − D11 S 1/2 ], D8 1/2 ∗ with D11 = D10 . Considering the two cases (i) |θ| ≤ D11 S , then W < 0; and (ii) |θ| ≥ D11 S 1/2 , then the definition of W ∗ in the equation (4.2) gives at least 1
W ∗ ≤ S ( 2 −σ) |θ| and also S 1/2 ≤
|θ| D11 .
Thus 1
S 2 (1−2σ) ≤ [
|θ| (1−2σ) ] , D11
and from this together with W ∗ follows W ∗ ≤ D12 |θ|
2(1−σ)
,
where D12 = D11 (σ−1) . On using the estimate on W ∗ in inequality (4.1), we obtain dW 2(1−σ) + D8 S ≤ D10 D12 S σ |θ| ≤ D13 S σ ϑ2(1−σ) S (1−σ) dt which follows from |r(t, x2 , y2 , z2 , w2 ) − r(t, x1 , y1 , z1 , w1 )| ≤ ϑ(t)(|x2 − x1 | + |y2 − y1 | + |z2 − z1 | + |w2 − w1 |). In view of the fact that % = 2(1 − σ), we obtain dW ≤ −D8 S + D13 ϑ% S, dt
(4.3)
and on using inequalities (3.9), we have
for some constants D14 (t1 ≤ t2 ), we obtain
dW + [D14 − D15 ϑ% (t)]W ≤ 0, (4.4) dt and D15 . On integrating the estimate (4.4) from t1 to t2 Z
t2
W (t2 ) ≤ W (t1 ) exp − D14 (t2 − t1 ) + D15
ϑ% (τ )dτ .
(4.5)
t1
On using Lemma 3.1, we obtain inequality (2.10), with D2 = D4 = D15 . This completes the proof of the Theorem 2.2.
D7 D6 ;
D3 = D14 and
Proof of Theorem 2.1. The proof follows from the inequality (2.10) and the condiD3 tion (2.9) on ϑ. On choosing D2 = D in inequality (2.10), then as t = (t2 − t1 ) → 4 ∞, S(t) → 0 which proves that x2 − x1 → 0, y2 − y1 → 0, z2 − z1 → 0, w2 − w1 → 0, as t → ∞. This completes the proof of Theorem 2.1. References [1] A.M.A. Abou-El-Ela, A.I. Sadek, On the asymptotic behaviour of solutions of some differential equations of the fourth-order, Ann. of Diff. Eqs., 8(1), 1-12(1992). MR1166444 (93c:34110). [2] O.A. Adesina, On the qualitative behaviour of solutions of some-fourth order non-linear differential equations, Ordinary Differential equations, Proc. Natl. Math. Cent. Abuja, Vol.1 No.1, 87–93(2000), (Ezeilo, J.O.C., Tejumola, H.O. and Afuwape, A.U. Eds.), MR1991406. [3] A.U. Afuwape, On the convergence of solutions of certain fourth order differential equations, An. S ¸ tiint¸. Univ. Al. I. Cuza Ia¸si. Mat.,, 27, no. 1, 133–138(1981). MR0618718 (82j:34032). [4] A.U. Afuwape, On some properties of solutions for certain fourth-order nonlinear differential equations, Analysis, 5, no. 1–2, 175-183(1985). MR0791498 (87a:34037). ... [5] A.U. Afuwape, Convergence of the solutions for the equation x(iv) + a x + b¨ x + g(x) ˙ + ... h(x) = p(t, x, x, ˙ x ¨, x ), Internat. J. Math. Math. Sci., 11, no. 4, 727–733(1988). MR0959453 (89i:34071). [6] A.U. Afuwape, On the existence of a limiting regime in the sense of Demidovic for a certain fourth-order nonlinear differential equation, J. Math. Anal. Appl., 129, no. 2 389–393(1988). MR0924297 (89a:34016). [7] A.U. Afuwape, Uniform dissipativity of some fourth-order nonlinear differential equations, An. S ¸ tiint¸. Univ. Al. I. Cuza Ia¸si. Mat., no 5, 35, 9–16(1989). MR1049162. [8] A.U. Afuwape,Uniform dissipative solutions for some fourth-order nonlinear differential equations, An. S ¸ tiint¸. Univ. Al. I. Cuza Ia¸si. Mat., no 4, 37, 431–445(1991). MR1249344 (94m:34087). [9] P.S.M. Chin, Stability results for the solutions of certain fourth-order autonomous differential equations, Internat. J. Control., 49, no. 4, 1163–1173(1989). MR0995162 (90d:34106) ... [10] J.O.C. Ezeilo, New properties of the equation x +a¨ x + bx˙ + h(x) = p(t, x, x, ˙ x ¨) for certain −1 special values of the incrementary ratio y {h(x + y) − h(x)}, Equations differentielles et fonctionnelles non linires (Actes Conference Internat ”Equa-Diff 73”, Brussels/Louvain-laNeuve), Hermann, Paris, 447–462(1973) . MR0430413 (55 #3418). [11] B.S. Ogundare, Boundedness of solutions to fourth order differential equations with oscillatory restoring and forcing terms, Electron. J. Differential Equations, no 6, 6pp(2006). MR2198919 (2007m:34090). [12] B.S. Ogundare, G.E. Okecha, Convergence of solutions of certain fourth order nonlinear differential equations, Int. J. Math. Sci., Art. ID 12536, 13pp(2007). MR2320772 (2008b:34089). [13] B.S. Ogundare, G.E. Okecha, Boundedness and stability properties of solution to certain fourth order nonlinear differential equation, Nonlinear Stud., 15, no 1, 61–70(2008). MR2391313 (2009b:34161). [14] R. Reissig, G. Sansone, R. Conti, Non-linear differential equations of higher order. Translated from German. Noordhoff International Publishing, Leyden, (1974). MR0344556 (49 #9295).
ADESINA, OGUNDARE: NONLINEAR DIFFERENTIAL EQUATION
18
9
[15] A.I. Sadek, On the asymptotic behaviour of solutions of certain fourth-order ordinary differential equations, Math. Japon., 45, no. 3, 527-540(1997). MR1456383 (98e:34069). [16] A.I. Sadek, A.S. Al-Elaiw, Asymptotic behaviour of the solutions of a certain fourth-order differential equation, Ann. Differential Equations, 20, no. 3, 221-234(2004). MR2099957. [17] H.O. Tejumola, Periodic boundary value problems for some fifth, fourth and third order ordinary differential equations, J. Nigerian Math. Soc., 21, 37–46(2006). MR2376824. [18] A. Tiryaki, C. Tun¸c, Construction Lyapunov functions for certain fourth-order autonomous differential equations, Indian J. Pure Appl. Math., 26, no. 3, 225-292(1995). MR1326483 (96a:34111). [19] A. Tiryaki, C. Tun¸c, Boundedness and stability properties of solutions of certain fourth order differential equations via the intrinsic method, Analysis, 16, no. 4, 325–334(1996). MR1429457 (97m:34095). [20] C. Tun¸c, A note on the stability and boundedness results of solutions of certain fourth order differential equations, Appl. Math. Comput, 155, no. 3, 837–843(2004). MR2078694 (2005e:34151). [21] C. Tun¸c, Some stability and boundedness results for the solutions of certain fourth order differential equations, Acta Univ. Palacki. Olomuc., Fac. rer. nat., Mathematica, 44, 161– 171(2005). MR2218575 (2006k:34147). [22] C. Tun¸c, About stability and boundedness of solutions of certain fourth order differential equations, Nonlinear Phenom. Complex Syst., 9, no. 4, 380–387(2006). MR2352613 (2008f:34124). [23] X. Wu, K. Xiong, Remarks on stability results for the solutions of certain fourth-order autonomous differential equations, Internat. J. Control, 69, no. 2, 353–360(1998). MR1684744 (2000d:34104). Olufemi Adeyinka Adesina Department of Mathematics, Obafemi Awolowo University, Ile-Ife, Nigeria E-mail address: [email protected] Babatunde Sunday Ogundare Department of Mathematics, Obafemi Awolowo University, Ile-Ife, Nigeria E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1,19-31 , 2012, COPYRIGHT 2012 EUDOXUS PRESS, 19 LLC
Applications of an Ostrowski-type Inequality Heiner Gonska1 , Ioan Ra¸sa2 , Maria-Daniela Rusu3 1
University of Duisburg-Essen, Faculty of Mathematics, Forsthausweg 2, D-47057 Duisburg, Germany [email protected] 2
Technical University, Department of Mathematics,
Str. C. Daicoviciu 15, RO-400020 Cluj-Napoca, Romania [email protected] 3
University of Duisburg-Essen, Faculty of Mathematics, Forsthausweg 2, D-47057 Duisburg, Germany [email protected] January 11, 2012
We extend an Ostrowski-type inequality established by the first author and A. Acu. This extension is used in order to investigate the limit of the iterates of some positive linear operators. The rate of convergence of the iterates is described in terms of ω e , the least concave majorant of the first order modulus of continuity. The same ω e is used in order to estimate the difference of some classical positive linear operators. 2000 Mathematics Subject Classification: 26A15, 41A36, 41A25 Key words and phrases: Ostrowski-type inequality, positive linear operators, iterates, rate of convergence
1. Introduction
One of Ostrowski's classical inequalities deals with the most primitive form of a quadrature rule. It was published in 1938 in Switzerland (see [17]); in its original form (rendered here in English translation) it reads as follows.
Theorem 1.1. Let h(x) be continuous and differentiable in the interval J : a < x < b, and let |h′(x)| ≤ m, m > 0, hold throughout J. Then for every x in J:
\Big| h(x) − \frac{1}{b − a}\int_a^b h(x)\,dx \Big| ≤ \Big( \frac{1}{4} + \frac{(x − \frac{a+b}{2})^2}{(b − a)^2} \Big)(b − a)m.
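As a quick illustration (our own check, not part of [17]): take h(x) = x on J = (a, b), so m = 1. Writing u = x − (a + b)/2, the inequality becomes |u| ≤ (b − a)/4 + u^2/(b − a), which holds because the right-hand side is at least 2\sqrt{u^2/4} = |u| by the AM-GM inequality.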
A simplified form can be found, for example, in G. Anastassiou's 1995 article [5] and reads as follows.

Theorem 1.2. Let f be in C^1[a, b], x ∈ [a, b]. Then
\Big| f(x) − \frac{1}{b − a}\int_a^b f(t)\,dt \Big| ≤ \frac{(x − a)^2 + (b − x)^2}{2(b − a)} \cdot \|f′\|_∞.

The characteristic feature of Ostrowski's approach is thus to approximate an integral by a single value of the function in question and to estimate the difference assuming differentiability of the function in question. The latter is a dispensable assumption, as was observed by A. Acu and the first author in the next result (see [1]).

Theorem 1.3. Let L : C[a, b] → C[a, b] be non-zero, linear and bounded, and such that L : C^1[a, b] → C^1[a, b] with \|(Lg)′\| ≤ c_L \cdot \|g′\| for all g ∈ C^1[a, b]. Then for all f ∈ C[a, b] and x ∈ [a, b] we have
\Big| Lf(x) − \frac{1}{b − a}\int_a^b Lf(t)\,dt \Big| ≤ \|L\| \cdot \tilde{ω}\Big( f; \frac{c_L}{\|L\|} \cdot \frac{(x − a)^2 + (b − x)^2}{2(b − a)} \Big).

The right-hand side of the latter inequality is given in terms of the least concave majorant of the first order modulus of continuity of an arbitrary f ∈ C[a, b] and thus generalizes Ostrowski's inequality in the form given by Anastassiou. For the reader's convenience we recall the following (see [1]).

Definition 1.4. Let f ∈ C[a, b]. If for t ∈ [0, ∞) the quantity
ω(f; t) = sup{ |f(x) − f(y)| : |x − y| ≤ t }
is the usual modulus of continuity, its least concave majorant is given by
\tilde{ω}(f; t) = \sup\Big\{ \frac{(t − x)ω(f; y) + (y − t)ω(f; x)}{y − x} : 0 ≤ x ≤ t ≤ y ≤ b − a, x ≠ y \Big\}.
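To make the definition concrete, a small worked example of our own (using only the definition above): for f(x) = |x − (a+b)/2| on [a, b] one has ω(f; t) = \min\{t, (b − a)/2\}, which is already concave in t, so \tilde{ω}(f; t) = ω(f; t) = \min\{t, (b − a)/2\}. In general ω(f; t) ≤ \tilde{ω}(f; t) ≤ 2ω(f; t).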
It is important to note that for all f ∈ C[a, b] the following equality holds:
\inf_{g ∈ C^1[a,b]} \Big\{ \|f − g\|_∞ + \frac{t}{2}\|g′\|_∞ \Big\} = \frac{1}{2}\tilde{ω}(f; t),  t ≥ 0.
The latter representation can be found in an article of Mitjagin and Semenov [22]. We remark that, to our knowledge, B. and I. Gavrea in [7] were the first to observe the possibility of using "omega-tilde" in this context. Ostrowski-type inequalities have attracted a most remarkable amount of attention in the past. The reader should consult Ch. XV on "Integral inequalities involving functions with bounded derivatives" in the book by D.S. Mitrinović et al. [16] and Chapters 2-9 in the recent monograph of G. Anastassiou [4]. In our present note we generalize the above-mentioned result of Acu et al. to integrals w.r.t. probability measures λ and apply the new estimates to iterates of certain positive linear operators and to differences of such mappings.
2. On Ostrowski's inequality
We first consider the space
Lip[0, 1] := \Big\{ f ∈ C[0, 1] : |f|_{Lip} := \sup_{x ≠ y} \frac{|f(x) − f(y)|}{|x − y|} < ∞ \Big\}.
For a probability Borel measure λ ∈ M_1^+[0, 1] put w_λ(x) := \int_0^1 |t − x|\,dλ(t), x ∈ [0, 1]; then, for every g ∈ Lip[0, 1],
\Big| g(x) − \int_0^1 g(t)\,dλ(t) \Big| ≤ |g|_{Lip} w_λ(x).    (2.1)

Theorem 3.1. Let L : C[0, 1] → C[0, 1] be a non-zero bounded linear operator with L(Lip[0, 1]) ⊂ Lip[0, 1], and suppose that there exists c_L > 0 such that |Lg|_{Lip} ≤ c_L |g|_{Lip} for all g ∈ Lip[0, 1]. Then for all f ∈ C[0, 1], λ ∈ M_1^+[0, 1] and x ∈ [0, 1] we have
\Big| Lf(x) − \int_0^1 Lf(t)\,dλ(t) \Big| ≤ \|L\|\, \tilde{ω}\Big( f, \frac{c_L}{\|L\|} w_λ(x) \Big).    (3.1)

Proof. Let A_x : C[0, 1] → ℝ be defined by
A_x(f) := f(x) − \int_0^1 f(t)\,dλ(t).
Then A_x is a bounded linear functional with \|A_x\| ≤ 2. We have
|A_x(Lf)| ≤ |Lf(x)| + \int_0^1 |Lf(t)|\,dλ(t) ≤ 2\|L\|\,\|f\|_∞, for f ∈ C[0, 1].
Let g ∈ Lip[0, 1]. By using (2.1) we get
|A_x(Lg)| = \Big| Lg(x) − \int_0^1 Lg(t)\,dλ(t) \Big| ≤ |Lg|_{Lip} w_λ(x) ≤ c_L |g|_{Lip} w_λ(x).
Consequently,
|A_x(Lf)| = |(A_x ∘ L)(f − g + g)| ≤ |(A_x ∘ L)(f − g)| + |A_x(Lg)| ≤ 2\|L\|\,\|f − g\|_∞ + c_L |g|_{Lip} w_λ(x).
Passing to the infimum over g ∈ Lip[0, 1] we get
|A_x(Lf)| ≤ 2\|L\| \inf_{g ∈ Lip[0,1]} \Big\{ \|f − g\|_∞ + \frac{c_L w_λ(x)}{2\|L\|} |g|_{Lip} \Big\} = \|L\|\, \tilde{ω}\Big( f, \frac{c_L w_λ(x)}{\|L\|} \Big).
Corollary 3.2. In the setting of Theorem 3.1 suppose that, moreover, L is a positive linear operator reproducing the constant functions. Then
\Big| Lf(x) − \int_0^1 Lf(t)\,dλ(t) \Big| ≤ \tilde{ω}(f, c_L w_λ(x))    (3.2)
holds for all f ∈ C[0, 1], λ ∈ M_1^+[0, 1] and x ∈ [0, 1].

Let e_0(x) := 1 for x ∈ [0, 1]. It is well known (see, e.g., [13], p. 178) that if L is a positive linear operator and Le_0 = e_0, then L has at least one invariant measure µ, i.e., there exists µ ∈ M_1^+[0, 1] such that
\int_0^1 Lf(t)\,dµ(t) = \int_0^1 f(t)\,dµ(t),    (3.3)
for f ∈ C[0, 1]. Now from Corollary 3.2 we obtain

Corollary 3.3. Let L : C[0, 1] → C[0, 1] be a positive linear operator with Le_0 = e_0, and µ an invariant measure for L. Suppose that L(Lip[0, 1]) ⊂ Lip[0, 1] and there exists c_L > 0 such that |Lg|_{Lip} ≤ c_L |g|_{Lip}, g ∈ Lip[0, 1]. Then the inequality
\Big| Lf(x) − \int_0^1 f(t)\,dµ(t) \Big| ≤ \tilde{ω}(f, c_L w_µ(x))    (3.4)
holds for all f ∈ C[0, 1], x ∈ [0, 1].

Under the hypotheses of Corollary 3.3, let m ≥ 1 be an integer. Then L^m e_0 = e_0, |L^m g|_{Lip} ≤ c_L^m |g|_{Lip} for g ∈ Lip[0, 1], and µ is an invariant measure for the iterate L^m. Consequently, we can state the following result.

Corollary 3.4. In the setting of Corollary 3.3 we have
\Big| L^m f(x) − \int_0^1 f(t)\,dµ(t) \Big| ≤ \tilde{ω}(f, c_L^m w_µ(x)),    (3.5)
for all f ∈ C[0, 1], x ∈ [0, 1], m ≥ 1. Moreover, if c_L < 1, then
\lim_{m→∞} L^m f = \Big( \int_0^1 f(t)\,dµ(t) \Big) e_0, uniformly on [0, 1],    (3.6)
and, consequently, L has exactly one invariant measure µ ∈ M_1^+[0, 1].

Related results, in a more general context, can be found in [3].
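A brief hedged check of how (3.6) follows from (3.5) (our own remark): since µ is a probability measure, w_µ(x) = \int_0^1 |t − x|\,dµ(t) ≤ 1 for every x ∈ [0, 1], so (3.5) gives \|L^m f − (\int_0^1 f\,dµ)\,e_0\|_∞ ≤ \tilde{ω}(f, c_L^m); as f is uniformly continuous on [0, 1], \tilde{ω}(f, c_L^m) → 0 when c_L < 1 and m → ∞.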
4. Examples involving iterates of positive linear operators
Example 4.1. Let n ≥ 1, p = [n/2], 0 ≤ k ≤ p, 0 ≤ x ≤ 1. Consider the polynomials
w_{n,k}(x) := \frac{n + 1 − 2p + 2k}{(n + 1)2^{n+1}} \binom{n+1}{p−k} \big[ (1 − x)^{p−k}(1 + x)^{n+1−p+k} − (1 − x)^{n+1−p+k}(1 + x)^{p−k} \big].
The operators β_n : C[0, 1] → C[0, 1], defined by
β_n f(x) := \sum_{k=0}^{p} w_{n,k}(x)\, f\Big( \frac{n − 2p + 2k}{n} \Big),
were introduced in [23] (see also [18], [19]). They are positive linear operators with β_n e_0 = e_0. According to the results of [18], c_{β_n} = \frac{n−1}{n} and the probability measure concentrated on 1 is invariant for β_n. Now Corollary 3.4 entails
|β_n^m f(x) − f(1)| ≤ \tilde{ω}\Big( f, \Big(\frac{n−1}{n}\Big)^m (1 − x) \Big),    (4.1)
for all m, n > 1, f ∈ C[0, 1], x ∈ [0, 1]. This result supplements the qualitative results presented in ([20], Ex. 5.7).

Example 4.2. For n ≥ 1 and j ∈ {0, 1, . . . , n} let
b_{nj}(x) := \binom{n}{j} x^j (1 − x)^{n−j},  x ∈ [0, 1].    (4.2)
Let 0 ≤ α ≤ β, β > 0. Consider the Stancu operators L_n : C[0, 1] → C[0, 1] given by
L_n f(x) := \sum_{j=0}^{n} b_{nj}(x)\, f\Big( \frac{j + α}{n + β} \Big).
It is easy to verify that c_{L_n} = \frac{n}{n + β} < 1. According to Corollary 3.4, L_n has a unique invariant measure µ_n ∈ M_1^+[0, 1]; in fact, µ_n was already determined in [11] and [21]. The quantitative result derived from (3.5) accompanies the qualitative results of [11] and [21]. In particular, we see that the rate of convergence, generally expressed by c_L^m, is expressed here by \big( \frac{n}{n + β} \big)^m.

Example 4.3.
Consider the Bernstein-Durrmeyer operators with Jacobi weights α, β > −1, defined by
M_n^{α,β} f(x) := \sum_{j=0}^{n} b_{nj}(x) \int_0^1 t^{j+α}(1 − t)^{n−j+β} f(t)\,dt \Big/ \int_0^1 t^{j+α}(1 − t)^{n−j+β}\,dt,
for f ∈ C[0, 1], x ∈ [0, 1], n ≥ 1. According to the results of [3],
c_{M_n^{α,β}} = \frac{n}{n + α + β + 2} < 1,
and the invariant measure µ is described by
\int_0^1 f(t)\,dµ(t) = \int_0^1 t^α(1 − t)^β f(t)\,dt \Big/ \int_0^1 t^α(1 − t)^β\,dt,  with f ∈ C[0, 1].
Using the Cauchy-Schwarz inequality we get
w_µ(x) = \int_0^1 |t − x|\,dµ(t) ≤ \Big( \int_0^1 (t − x)^2\,dµ(t) \Big)^{1/2} = \Big( \Big( x − \frac{α + 1}{α + β + 2} \Big)^2 + \frac{(α + 1)(β + 1)}{(α + β + 2)^2(α + β + 3)} \Big)^{1/2}.
Now Corollary 3.4 entails
\Big| (M_n^{α,β})^m f(x) − \int_0^1 t^α(1 − t)^β f(t)\,dt \Big/ \int_0^1 t^α(1 − t)^β\,dt \Big| ≤ \tilde{ω}\Big( f, \Big( \frac{n}{n + α + β + 2} \Big)^m \Big( \Big( x − \frac{α + 1}{α + β + 2} \Big)^2 + \frac{(α + 1)(β + 1)}{(α + β + 2)^2(α + β + 3)} \Big)^{1/2} \Big).
This is a quantitative companion to the results of ([3], Section 3.2).

Example 4.4. For each n ≥ 1, let ϑ_n ∈ L^1[0, 1], ϑ_n ≥ 0, be a periodic function with period \frac{1}{n+1},
such that
\int_0^{1/(n+1)} ϑ_n(t)\,dt = 1.
Consider the generalized Kantorovich operators K_n : C[0, 1] → C[0, 1] defined by
K_n f(x) := \sum_{j=0}^{n} b_{nj}(x) \int_{j/(n+1)}^{(j+1)/(n+1)} f(t) ϑ_n(t)\,dt.
It is easy to verify that K_n(Lip[0, 1]) ⊂ Lip[0, 1] and c_{K_n} = \frac{n}{n+1}, n ≥ 1.
We want to determine the invariant measure. Consider the matrix
T_n := \Big( \int_{j/(n+1)}^{(j+1)/(n+1)} b_{ni}(t) ϑ_n(t)\,dt \Big)_{i,j = 0,1,...,n}
and let a = (a_0, a_1, . . . , a_n)^t ∈ ℝ^{n+1}. T_n is the transpose of a regular stochastic matrix, so that the system T_n a = a has a unique solution with a_i ≥ 0, i = 0, 1, . . . , n, and a_0 + . . . + a_n = 1. For this solution we have
a_i = \sum_{j=0}^{n} a_j \int_{j/(n+1)}^{(j+1)/(n+1)} b_{ni}(t) ϑ_n(t)\,dt,
for i = 0, . . . , n. Let ϕ_n(t) := a_j ϑ_n(t) for t ∈ \big[ \frac{j}{n+1}, \frac{j+1}{n+1} \big], j = 0, . . . , n. Define µ_n ∈ M_1^+[0, 1] by dµ_n(t) = ϕ_n(t)\,dt. Then µ_n is the invariant measure of K_n.
Indeed,
\int_0^1 K_n f(t)\,dµ_n(t) = \sum_{i=0}^{n} \int_0^1 b_{ni}(t)\,dµ_n(t) \cdot \int_{i/(n+1)}^{(i+1)/(n+1)} f(t) ϑ_n(t)\,dt
= \sum_{i=0}^{n} \Big( \sum_{j=0}^{n} a_j \int_{j/(n+1)}^{(j+1)/(n+1)} b_{ni}(t) ϑ_n(t)\,dt \Big) \int_{i/(n+1)}^{(i+1)/(n+1)} f(t) ϑ_n(t)\,dt
= \sum_{i=0}^{n} a_i \int_{i/(n+1)}^{(i+1)/(n+1)} f(t) ϑ_n(t)\,dt = \sum_{i=0}^{n} \int_{i/(n+1)}^{(i+1)/(n+1)} f(t) ϕ_n(t)\,dt = \int_0^1 f(t)\,dµ_n(t).
i) As a particular case, let α, β > −1 and
ϑ_n(t) = \Big( t − \frac{j}{n+1} \Big)^α \Big( \frac{j+1}{n+1} − t \Big)^β \Big/ \int_{j/(n+1)}^{(j+1)/(n+1)} \Big( s − \frac{j}{n+1} \Big)^α \Big( \frac{j+1}{n+1} − s \Big)^β ds,
for all t ∈ \big[ \frac{j}{n+1}, \frac{j+1}{n+1} \big], j = 0, 1, . . . , n. Denote the corresponding operators K_n by K_n^{α,β}; it is easy to see that they can be expressed also as
K_n^{α,β} f(x) = \sum_{j=0}^{n} b_{nj}(x) \frac{1}{B(α + 1, β + 1)} \int_0^1 s^α(1 − s)^β f\Big( \frac{s + j}{n+1} \Big) ds.
In fact, these are the operators introduced in ([15], (1.5)).
ii) More particularly,
K_n^{0,0} f(x) = (n + 1) \sum_{j=0}^{n} b_{nj}(x) \int_{j/(n+1)}^{(j+1)/(n+1)} f(t)\,dt = \sum_{j=0}^{n} b_{nj}(x) \int_0^1 f\Big( \frac{s + j}{n+1} \Big) ds
are the classical Kantorovich operators. For them, as in the above general case, the parameter c_{K_n} is \frac{n}{n+1}; moreover, the invariant measure µ is the Lebesgue measure on [0, 1]. Thus
w_µ(x) = \int_0^1 |t − x|\,dt = \Big( x − \frac{1}{2} \Big)^2 + \frac{1}{4}.
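For completeness, the elementary computation behind the last identity (our own check): \int_0^1 |t − x|\,dt = \int_0^x (x − t)\,dt + \int_x^1 (t − x)\,dt = \frac{x^2}{2} + \frac{(1 − x)^2}{2} = x^2 − x + \frac{1}{2} = (x − \frac{1}{2})^2 + \frac{1}{4}.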
From Corollary 3.4 we infer
\Big| (K_n^{0,0})^m f(x) − \int_0^1 f(t)\,dt \Big| ≤ \tilde{ω}\Big( f, \Big( \frac{n}{n+1} \Big)^m \Big( \Big( x − \frac{1}{2} \Big)^2 + \frac{1}{4} \Big) \Big).
Related results, for multivariate Kantorovich operators, can be found in ([3], Section 3.1).

Remark 4.1. Results of this type are especially significant in the case of sequences (L_n) for which the strong limit
T(t) := \lim_{n→∞} L_n^{[nt]}
exists for all t ≥ 0. In such a case (T (t))t≥0 is a C0 -semigroup of operators and the above results can be used in order to study its asymptotic behaviour from a quantitative point of view. Details can be found in [3].
5. Examples involving differences of positive linear operators
In the preceding section the main tool was Corollary 3.4 which makes use of the invariant measure. Now we present applications of Corollary 3.2. Let again L : C[0, 1] → C[0, 1] be a positive linear operator with Le0 = e0 . Suppose that L(Lip[0, 1]) ⊂ Lip[0, 1] and there exists cL > 0 such that |Lg|Lip ≤ cL |g|Lip , for all g ∈ Lip[0, 1]. Let A : C[0, 1] → C[0, 1] be a positive linear 9
operator with Ae_0 = e_0. For each x ∈ [0, 1] consider the measure λ_x ∈ M_1^+[0, 1] defined by
\int_0^1 f(t)\,dλ_x(t) = Af(x),  f ∈ C[0, 1].
Then we have
\int_0^1 Lf(t)\,dλ_x(t) = A(Lf)(x) = (A ∘ L)f(x),  f ∈ C[0, 1].
Now Corollary 3.2 implies

Proposition 5.1. With the above notation we have
|(A ∘ L)f(x) − Lf(x)| ≤ \tilde{ω}(f, c_L A(|t − x|, x)) ≤ \tilde{ω}\big( f, c_L (A((t − x)^2, x))^{1/2} \big),
for all f ∈ C[0, 1] and x ∈ [0, 1].

Example 5.2. Let L = \mathbb{B}_n, where (see [14], [2] and the references therein)
\mathbb{B}_n f(x) := f(0) if x = 0;  f(1) if x = 1;  \int_0^1 t^{nx−1}(1 − t)^{n(1−x)−1} f(t)\,dt \Big/ B(nx, n(1 − x)) if 0 < x < 1,
for all f ∈ C[0, 1], x ∈ [0, 1]. Then \mathbb{B}_n e_0 = e_0. In [2] it was proved, using probabilistic methods, that \mathbb{B}_n f is increasing whenever f is increasing.

Remark 5.2. This shape-preserving property can be proved, as in ([6], Ex. 3.1), using analytical tools involving total positivity, not only for \mathbb{B}_n but also for the Beta operators \mathbb{B}_n^{α,β} which will be described in Example 5.4.

Let g ∈ Lip[0, 1]. Then |g|_{Lip} e_1 ± g are increasing functions, so that |g|_{Lip} e_1 ± \mathbb{B}_n g are also increasing. It follows that |\mathbb{B}_n g|_{Lip} ≤ |g|_{Lip}, and so c_{\mathbb{B}_n} = 1. Take A = B_n, the classical Bernstein operator. Then A ∘ L = U_n, the genuine Bernstein-Durrmeyer operator (see, e.g., [8]). From Proposition 5.1 we get
|U_n f(x) − \mathbb{B}_n f(x)| ≤ \tilde{ω}\Big( f, \Big( \frac{x(1 − x)}{n} \Big)^{1/2} \Big).
Example 5.3. Let L = B_n and A = \mathbb{B}_n. Then A ∘ L = S_n is a Stancu-type operator investigated in [14]. We infer that
|S_n f(x) − B_n f(x)| ≤ \tilde{ω}\Big( f, \Big( \frac{x(1 − x)}{n + 1} \Big)^{1/2} \Big).
Example 5.4. For α, β > −1, let \mathbb{B}_n^{α,β} : C[0, 1] → C[0, 1] be defined by
\mathbb{B}_n^{α,β} f(x) := \int_0^1 t^{nx+α}(1 − t)^{n(1−x)+β} f(t)\,dt \Big/ \int_0^1 t^{nx+α}(1 − t)^{n(1−x)+β}\,dt.
As in Example 5.2 (see also Remark 5.2), it can be proved that the corresponding parameter c_{\mathbb{B}_n^{α,β}} is \frac{n}{n + α + β + 2}. If we take L = \mathbb{B}_n^{α,β} and A = B_n, then A ∘ L = M_n^{α,β}, the Bernstein-Durrmeyer operator with Jacobi weights discussed in Example 4.3. From Proposition 5.1 we obtain
\big| M_n^{α,β} f(x) − \mathbb{B}_n^{α,β} f(x) \big| ≤ \tilde{ω}\Big( f, \frac{n}{n + α + β + 2} \Big( \frac{x(1 − x)}{n} \Big)^{1/2} \Big).
In particular, we can see what happens when α → ∞ and/or β → ∞.

Example 5.5. Let L = \mathbb{B}_{n+1} and A = B_n. Then A ∘ L = D_n, an operator which was investigated in [8]. In this case we have
|D_n f(x) − \mathbb{B}_{n+1} f(x)| ≤ \tilde{ω}\Big( f, \Big( \frac{x(1 − x)}{n} \Big)^{1/2} \Big).
Remark 5.3. Other kind of results concerning differences of positive linear operators can be found in [8], [9], [10], [12] and the references therein.
References [1] A. ACU, H. GONSKA, Ostrowski inequalities and moduli of smoothness, Result.Math., 53 (2009), 217–228. ´ BAD´IA, J. DE LA CAL, Beta-type operators [2] J.A. ADELL, F. GERMAN preserve shape properties, Stoch. Proc. Appl., 48 (1993), 1–8. 11
[3] F. ALTOMARE, I. RAS¸A, Lipschitz contractions, unique ergodicity and asymptotics of Markov semigroups, to appear in Boll. Unione Mat. Ital. [4] G.A. ANASTASSIOU, Advanced Inequalities. World Scientific Publishing Co., Hackensack, NJ (2011). [5] G.A. ANASTASSIOU, Ostrowski type inequalities, Proc. Amer. Math. Soc., 123 (1995), 3775-3781. [6] A. ATTALIENTI, I. RAS¸A, Total positivity: an application to positive linear operators and to their limiting semigroups, Anal. Numer. Theor. Approx., 36 (2007), 51–66. [7] B. GAVREA, I. GAVREA, Ostrowski type inequalities from a linear functional point of view, JIPAM. J. Inequal. Pure Appl. Math., 1 (2000), article 11. ´ I. RAS¸A, On genuine Bernstein-Durrmeyer op[8] H. GONSKA, D. KACSO, erators, Result. Math., 50 (2007), 213–225. [9] H. GONSKA, P. PIT ¸ UL, I. RAS¸A, On differences of positive linear operators, Carpathian J. Math., 22 (2006), No. 1–2, 65–78. [10] H. GONSKA, P. PIT ¸ UL, I. RAS¸A, On Peano’s form of the Taylor remainder, Voronovskaja’s theorem and the commutator of positive linear operators, Proceedings of the International Conference on NUMERICAL ANALYSIS AND APPROXIMATION THEORY, Cluj-Napoca, Romania, July 5–8 (2006), 55–80. [11] H. GONSKA, P. PIT ¸ UL, I. RAS¸A, Over-iterates of Bernstein-Stancu operators, Calcolo, 44 (2007), 117–125. [12] H. GONSKA, I. RAS¸A, Differences of positive linear operators and the second order modulus, Carpathian J. Math., 24 (2008), 332–340. [13] U. KRENGEL, Ergodic Theorems, W. de Gruyter, (1985), Berlin-New York. [14] L. LUPAS ¸ , A. LUPAS¸, Polynomials of binomial type and approximation operators, Studia Univ. Babe¸s-Bolyai, Mathematica, 32 (1987), No. 4, 61– 69. 12
[15] D.H. MACHE, D.X. ZHOU, Characterization theorems for the approximation by a family of operators, Journal of Approximation Theory, 84 (1996), 145–161. ´ J.E. PECARI ˇ ´ A.M. FINK, Inequalities involving [16] D.S. MITRINOVIC, C, Functions and their Integrals and Derivatives. Kluwer Academic Publishers, Dordrecht (1991). ¨ [17] A. OSTROWSKI, Uber die Absolutabweichung einer differentiierbaren Funktion von ihrem Integralmittelwert, Comment. Math. Helv., 10 (1938), 226–227. [18] I. RAS ¸ A, On Soardi’s Bernstein operators of second kind, Anal. Numer. Theor. Approx., 29 (2000), 191–199. [19] I. RAS ¸ A, Classes of convex functions associated with Bernstein operators of second kind, Math. Inequal. Appl., 9 (2006), 599–605. [20] I. RAS ¸ A, C0 semigroups and iterates of positive linear operators: asymptotic behaviour, Rend. Circ. Mat. Palermo, Serie II, Suppl. 82 (2010), 123– 142. [21] I.A. RUS, Iterates of Stancu operators (via fixed points principles) revisited, Fixed Point Theory, 11 (2010), 369–374. [22] E.M. SEMENOV, B.S. MITJAGIN, Lack of interpolation of linear operators in spaces of smooth functions, Math. USSR-Izv., 11 (1977), 1229–1266. [23] P. SOARDI, Bernstein polynomials and random walks on hypergroups, in: Probability Measures on Groups, X(Oberwolfach) (1990), Plenum, New York, (1991), 387–393.
13
31
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 32-41 ,2012, COPYRIGHT 2012 EUDOXUS PRESS, 32 LLC
COMPACT DIFFERENCES OF WEIGHTED COMPOSITION OPERATORS ON H ∞ (BN ) CE-ZHONG TONG∗ DEPARTMENT OF MATHEMATICS, TIANJIN UNIVERSITY TIANJIN 300072, P.R. CHINA E-MAIL: [email protected] Abstract. In this paper, the compactness of differences of two weighted composition operators on the space of bounded analytic functions on the open unit ball in the uniform operator topology is characterized. A sufficient condition when two weighted composition operators lie in the same component of the space of nonzero weighted composition operators on H ∞ (BN ), which is denoted by Cω (H ∞ (BN )), is established as well.
1. Introduction Let Ω be a domain in CN and H(Ω) a space of analytic functions on Ω. Let φ(z) = (φ1 (z), · · · , φn (z)) be an analytic self-map of Ω and u ∈ H(Ω). The composition operator Cφ induced by φ is defined by (Cφ f )(z) = f (φ(z)); the multiplication operator induced by u is defined by Mu f (z) = u(z)f (z); and the weighted composition operator uCφ induced by φ and u is defined by (uCφ f )(z) = u(z)f (φ(z)) for z ∈ Ω and f ∈ H(Ω). If let u ≡ 1, then uCφ = Cφ ; if let φ be an identity mapping, then uCφ = Mu . So we can regard weighted composition operator as a generalization of a multiplication operator and a composition operator. These operators are linear. By H ∞ (Ω) denote the Banach space of bounded analytic functions on Ω. 2000 Mathematics Subject Classification. Primary 47B35; Secondary 47B38, 32A36. Key words and phrases. Weighted composition operators, Compact differences, Weakly convergent, Topological structure, Connected component. ∗ Supported in part by the National Natural Science Foundation of China (Grand Nos. 10971153, 10671141). 1
33
2
C.Z.TONG
When X is a Banach space of analytic functions, we write C(X) for the space of composition operators on X under the operator norm topology. The investigation of the topological structure of C(H 2 ) was initiated by Berkson [3] in 1981, who focused attention on topological structure with his isolation results on composition operators acting on the Hardy space. Readers interested in this topic can refer to the following papers: [1, 4, 6, 9, 10, 11, 12, 13, 15, 18, 19, 20, 21, 25, 26, 27]. Building on these results, this paper answers the question of compact difference for two weighted composition operators acting on H ∞ (BN ), and gives a sufficient condition when two weighted composition operators lie in the same component of the space of nonzero weighted composition operators on H ∞ (BN ). 2. Some preliminary lemmas and definitions Recall that H ∞ (BN ) be the set of all bounded analytic functions on the open unit ball BN of CN . Then H ∞ (BN ) is the Banach algebra with the supremum norm kf k∞ := sup |f (z)| : z ∈ CN . We denote by B(H ∞ (BN )) the closed unit ball of H ∞ (BN ). For z, w ∈ BN , we denote by Φa the involution which interchanges the origin and point a. The pseudohyperbolic distance between z and w is given by β(z, w) := sup {|f (z)| : kf k∞ = 1, f (w) = 0} . The induced distance between z and w is defined as d∞ (z, w) := sup |f (z) − f (w)|. kf k∞ =1
It was shown that in [2] (2.1)
p 2 − 2 1 − β(z, w)2 d∞ (z, w) = . β(z, w) √
2
Put γ(t) = 2(1− t 1−t ) , 0 ≤ t ≤ 1, then d∞ (z, w) = γ(β(z, w)). It is easy to check that γ is an increasing function on [0, 1] and 0 ≤ γ ≤ 2. We state a lemma which was given in [26]. Lemma 2.1. For any z in BN , we have (a) β(z, w) = |Φz (w)| for any w ∈ BN ; (b) {w : β(z, w) < λ} = Φz (λBN ). Let S(BN ) denote the set of holomorphic maps from BN to BN , and C(H ∞ (BN )) the space of composition operators on H ∞ (BN ). Let uCφ be a weighted composition operator on H ∞ (BN ). Put Tt = tuCφ for
34
DIFFERENCES OF WEIGHTED COMPOSITION OPERATORS
3
0 ≤ t ≤ 1. Then we can easily obtain that uCφ and 0 are in the same path component. So it is enough for us to consider only the space of nonzero weighted composition operators on H ∞ (BN ), which is denoted by Cω (H ∞ (BN )). There are several useful lemmas which are used to get the main results of this paper. The following Lemma is important, which was presented by Berndtsson in [5]. Lemma 2.2. Let {xi } be a sequence in the ball BN satisfying Y (2.2) |Φxj (xk )| ≥ d > 0 for any k. j:j6=k
Then there exists a number M = M (d) < ∞ and a sequence of functions hk ∈ H ∞ (BN ) such that X (2.3) (a) hk (xj ) = δkj ; (b) |hk (z)| ≤ M for |z| < 1. k
(The symbol δkj is equal to 1 if k = j and 0 otherwise.) Next two lemmas were proved by Carl Toews in [26]. Lemma 2.3. Let {zn } ⊂ BN be a sequence with |zn | → 1 as n → ∞. Then for any given d ∈ (0, 1) there is a subsequence such that {xi } := {zni } satisfies (2.2). From this lemma, we can find that there is a subsequence which satisfying (2.2) for every sequence converging to the boundary of the ball, and Lemma 2.2 holds for this subsequence. Lemma 2.4. Let {hk } be a sequence of H ∞ (BN ) functions such that P |hk (z)| ≤ M < ∞ for all z ∈ BN . Then hk → 0 weakly. k
Let T be a bounded linear operator on a Banach space. Recall that T is said to be compact if T maps every bounded set into relatively compact one, and that T is said to be completely continuous if T maps every weakly convergent sequence into a norm convergent one. In general, every compact operators is completely continuous, but the converse is not always true. Lemma 2.5. (Proposition 3.11 of [8]) Let φ, ψ ∈ S(BN ), and u, v ∈ H ∞ (BN ), then uCφ −vCψ is compact on H ∞ (BN ) if and only if k(uCφ − vCψ )fn k∞ → 0 for every bounded sequence {fn } in H ∞ (Bn ) such that fn → 0 uniformly on every compact subset of BN .
35
4
C.Z.TONG
Let H(BN , CN ) be the collection of all the holomorphic maps from BN to CN , and H ∞ (BN , CN ) the subspace defined by H ∞ (BN , CN ) := {F = (f1 , ..., fN ) : fj ∈ H ∞ (BN ), j = 1, ..., N } . It is clear that H ∞ (BN , CN ) equipped with kF kH ∞ (BN ,CN ) = k(f1 , ..., fN )kH ∞ (BN ,CN ) :=
N X j=1
sup |fj (z)| z∈BN
as its norm becomes a Banach space. Now, if F (z) = (f1 (z), ..., fN (z)) ∈ H ∞ (BN , CN ), u(z) ∈ H ∞ (BN ) and φ ∈ S(BN ), we define u · F := (uf1 , ..., ufN ), and uCφ (F ) := (uf1 ◦ φ, ..., ufN ◦ φ). Thus every weighted composition operator acting on H ∞ (BN , CN ) is bounded. 3. Compact differences of two weighted composition operators In this section, we shall state a necessary and sufficient condition for the difference of two weighted composition operators to be compact on H ∞ (BN ), and give its proof in detail. Theorem 3.1. Let uCφ , vCψ ∈ Cω (H ∞ (BN )), then uCφ − vCψ is compact on H ∞ (BN ) if and only if the following three conditions hold: (a) If {zn } ⊂ BN , |φ(zn )| → 1 and lim inf β(φ(zn ), ψ(zn )) > 0, then n→∞
u(zn ) → 0; (b) If {zn } ⊂ BN , |ψ(zn )| → 1 and lim inf β(φ(zn ), ψ(zn )) > 0, then n→∞
v(zn ) → 0; (c) If {zn } ⊂ BN , |φ(zn )| → 1, |ψ(zn )| → 1, then u(zn ) − v(zn ) → 0. Proof. The proof of the necessity. Suppose uCφ − vCψ is compact on H ∞ (BN ). Let {zn } ⊂ BN with |φ(zn )| → 1, and (3.1)
lim inf β(φ(zn ), ψ(zn )) = σ > 0. n→∞
To prove (a). Suppose not, then we may assume that lim sup |u(zn )| = n→∞
δ0 > 0. So we can assume that there exists a δ ∈ (0, δ0 ] and a subsequence znj ⊂ {zn } such that lim |u(znj )| = δ. To simplify the sign, n→∞ we now write the subsequence {zn } for znj . So (3.2)
lim |u(zn )| = δ > 0.
n→∞
Then we conclude from Lemma 2.3 that there is a subsequence of φ(zn ) such that for a given d ∈ (0, 1), {xj } := φ(znj ) satisfies
36
DIFFERENCES OF WEIGHTED COMPOSITION OPERATORS
Q
5
|Φxj (xk )| ≥ d > 0 for all fixed k. So by Lemma 2.2, there exists
j:j6=k
a number M = M (d) < ∞ and a sequence of functions fk ∈ H ∞ (BN ) such that ∞ X (3.3) (i) fk (φ(znj )) = δkj ; (ii) |fk (φ(z))| ≤ M for |z| < 1. k=1
It follows from Lemma 2.4 that fn → 0 weakly in H ∞ (BN ). Set gn (z) = Φψ(zn ) (z)fn (z), then gn (z) ∈ H ∞ (BN , CN ), and v u N uX |gn (z)| = t |gni (z)|2 = |Φψ(zn ) (z)||fn (z)| ≤ |fn (z)|, i=1
where gni (z) denote the ith coordinate component of gn (z). So X X |fn (z)| ≤ M. |gni (z)| ≤ n
n
By Lemma 2.4, it follows that → 0 weakly in H ∞ (BN ) for every 1 ≤ i ≤ N . Because uCφ −vCψ is compact operator acting on H ∞ (BN ), it takes every weakly convergent sequence into a norm convergent one, and then N X (3.4) k(uCφ − vCψ )gn kH ∞ (BN ,CN ) = k(uCφ − vCψ )gni k∞ → 0. gni
i=1
On the other hand, k(uCφ − vCψ )gn kH ∞ (BN ,CN ) ≥ |u(zn )gn (φ(zn )) − v(zn )gn (ψ(zn ))| = |u(zn )||Φψ(zn ) (φ(zn ))||fn (φ(zn ))| = |u(zn )|β(φ(zn ), ψ(zn )). By (3.1) and (3.2), lim k(uCφ − vCψ )gn kH ∞ (BN ,CN ) ≥ δσ > 0. This n→∞
contradicts (3.4). The proof of (b) is the same as that of (a). Next, let {zn } ⊂ BN with |φ(zn )| → 1, |ψ(zn )| → 1 as n → ∞. To prove (c), similarly as the analysis in the proof of (a), we may assume that (3.5)
u(zn ) − v(zn ) → α 6= 0.
If (3.6)
β(φ(zn ), ψ(zn )) → 0,
by Lemma 2.4, we may also find a sequence of function {fk } for {φ(zn )} with the properties (3.3). From Lemma 2.5, the compactness of uCφ −
37
6
C.Z.TONG
vCψ implies k(uCφ − vCψ )fk k∞ → 0. Hence (3.7)
u(zn )fn (φ(zn )) − v(zn )fn (ψ(zn )) → 0 as n → ∞.
On the other hand, by (2.1), (3.3) and (3.6), fn f n ≤ γ(β(φ(zn ), ψ(zn ))), (φ(z )) − (ψ(z )) n n kfn k∞ kfn k∞ so there is a M > 0 such that |fn (φ(zn )) − fn (ψ(zn ))| ≤ M γ(β(φ(zn ), ψ(zn ))) → 0. Note that fn (φ(zn )) = 1, it follows that fn (ψ(zn )) → 1 as n → ∞. Then by (3.7), we have |u(zn ) − v(zn )| = |u(zn )fn (φ(zn )) − v(zn )fn (ψ(zn )) + v(zn )(fn (ψ(zn )) − 1)| ≤ |u(zn )fn (φ(zn )) − v(zn )fn (ψ(zn ))| + |v(zn )|fn (ψ(zn )) − 1| → 0. It contradicts (3.5). Now we should discuss the situation when lim sup β(φ(zn ), ψ(zn )) = η0 > 0. If there is an arbitrary η ∈ (0, η0] such that lim β(φ(znj ), ψ(znj )) = η for any arbitrary subsequence of znj ⊂ {zn }, then by (a) and (b), it follows that u(znj ) → 0, v(znj ) → 0, so u(znj ) − v(znj ) → 0. If not, then there is subsequence {znj } of {zn } such that β(φ(znj ), ψ(znj )) → 0. This case is already discussed above, so u(znj ) − v(znj ) → 0, and it also contradicts (3.5). The necessity is proved. The proof of the sufficiency. Let {fn } be an arbitrary sequence of H ∞ (BN ) such that kfn k∞ ≤ 1 and fn → 0 uniformly on every compact subset of BN . By Lemma 2.5, it is enough to prove kufn ◦ φ − vfn ◦ ψk∞ → 0. Suppose not, we may assume that for some ε > 0, kufn ◦ φ − vfn ◦ ψk∞ > ε for every n. Then there exists a sequence {zn }n ⊂ BN such that (3.8)
|u(zn )fn (φ(zn )) − v(zn )fn (ψ(zn ))| > ε
for any n. From which we claim that max {|φ(zn )|, |ψ(zn )|} → 1 as n → ∞. In fact, if not, then we may assume that φ(zn ) → ξ1 , ψ(zn ) → ξ2 , where both ξ1 and ξ2 are inner point of BN . Since fn → 0 on every compact subset of BN , we have fn (φ(zn )) → 0, fn (ψ(zn )) → 0. This contradicts (3.8).
38
DIFFERENCES OF WEIGHTED COMPOSITION OPERATORS
7
So we may assume that |φ(zn )| → 1 and ψ(zn ) → ω0 for some ω0 such that |ω0 | ≤ 1. Moreover we assume (3.9)
β(φ(zn ), ψ(zn )) → r
as n → ∞.
Suppose r > 0. When |ω0 | = 1, we conclude from (a) and (b) that u(zn ) → 0, v(zn ) → 0, which contradicts (3.8). When |ω0 | < 1, since fn → 0 uniformly on every compact subset of BN , (fn ◦ ψ)(zn ) → 0. Note that u(zn ) → 0 as the consequence of (a), these also contradict (3.8). Hence we have r = 0. Then by (3.9), we have |ψ(zn )| → 1, and by (2.1) it follows that |fn (φ(zn )) − fn (ψ(zn ))| ≤ γ(β(φ(zn ), ψ(zn ))) → 0. This shows that fn (φ(zn )) − fn (ψ(zn )) → 0. By (c), we also have u(zn ) − v(zn ) → 0. So u(zn )fn (φ(zn )) − v(zn )fn (ψ(zn )) → 0, it also contradicts (3.8). This completes the proof of the theorem. 4. Path connected subset in Cω (H ∞ (BN )) For uCφ , vCψ ∈ Cω (H ∞ (BN )), we write uCφ ∼ vCψ if uCφ and vCψ are in the some path component of Cω (H ∞ (BN )). The following theorem gives a sufficient condition when two weighted composition operators lies in the same component of Cω (H ∞ (BN )). The similar sufficient and necessary condition of composition operator without weights appears in [26] ( that is, Theorem 1 in section 3, [26]). Theorem 4.1. Let φ and ψ be analytic self maps of BN with dβ (φ, ψ) < 1, and nonzero functions u(z), v(z) ∈ H ∞ (BN ), then uCφ ∼ vCψ . We postpone the proof until we have established a number rather technical lemmas. Lemma 4.2. Let nonzero functions u, v ∈ H ∞ (BN ), and φ ∈ S(BN ). Then uCφ ∼ vCφ . Proof. For every t, 0 ≤ t ≤ 1, put Tt = (tu + (1 − t)v)Cφ . Then T0 = vCφ , T1 = uCφ , Tt 6= 0, and kTt − Tt0 k ≤ |t − t0 |kCφ kku − vk∞ . Hence the mapping t 7→ Tt ∈ Cω (H ∞ (BN )), 0 ≤ t ≤ 1, is continuous. Let φ, ψ ∈ S(BN ), and dβ (φ, ψ) := sup β(φ(z), ψ(z)). This defines z∈BN
a [0, 1]-valued metric on S(BN ). Denote the ensuing topological space by S(BN , dβ ). The next lemma, whose composition operator without weights appears in [26], shows that there is certain connection between S(BN ) and C(H ∞ (BN )). And that may be useful for us to find a path connected subset in Cω (H ∞ (BN )).
39
8
C.Z.TONG
Lemma 4.3. Let φ, ψ : BN → BN and nonzero u(z) ∈ H ∞ (BN ). Then q 2 − 2 1 − dβ (φ, ψ)2 kuCφ − uCψ k ≤ kuk∞ . dβ (φ, ψ) Proof. kuCφ − uCψ k =
sup
sup |u(z)f (φ(z)) − u(z)f (ψ(z))|
kf k∞ =1 z∈BN
≤ kuk∞ sup
sup |f (φ(z)) − f (ψ(z))|
kf k∞ =1 z∈BN
= kuk∞ sup
sup |f (φ(z)) − f (ψ(z))|
z∈BN kf k∞ =1
= kuk∞ sup d∞ (φ(z), ψ(z)) z∈BN q 2 − 2 1 − β(φ(z), ψ(z))2 = kuk∞ sup β(φ(z), ψ(z)) z∈BN q 2 − 2 1 − dβ (φ, ψ)2 = kuk∞ . dβ (φ, ψ) The former lemma follows from the observation that the function √ 2 − 2 1 − x2 f (x) := x maps (0, 1] continuously onto (0, 2] and increases monotonically in x. Now we begin by introducing some abusive but convenient notation. Given two points z and w in BN define (4.1)
zt := (1 − t)z + tw.
Analogously, given two self maps of the ball φ and ψ, we define (4.2)
φt (z) := (1 − t)φ(z) + tψ(z).
Note that z0 = z, z1 = w, and zt lies on the straight line connecting z and w; in particular, any convex set that contains z and w also contains zt . Next lemma was proved by Carl Toews in [26]: Lemma 4.4. Let φ and ψ be analytic self maps of BN satisfying dβ (φ, ψ) ≤ λ < 1, and let φt be as in (4.2). Then, for t ∈ [0, 1] and ε such that t + ε ∈ [0, 1], we have lim dβ (φt , φt+ε ) = 0.
|ε|→0
40
DIFFERENCES OF WEIGHTED COMPOSITION OPERATORS
9
Now using the previous lemmas, we turn our attention back to the proof of Theorem 4.1. Proof. Given φ and ψ in S(BN ) with dβ (φ, ψ) < 1, define φt as in (4.2). It follows from Lemma 4.4 that the map t 7→ φt is a continuous path from φ to ψ in S(BN , dβ ) and from Lemma 4.3 that the map t 7→ uCφt is continuous from uCφ to uCψ in Cω (H ∞ (BN )). And by Lemma 4.2 we know uCφ and vCψ are in the same path component in Cω (H ∞ (BN )). It follows from Lemma 2 in [26] that dβ (φ, ψ) < 1 if and only if kCφ − Cψ k < 2, so we have an equivalent condition. Theorem 4.5. Let φ and ψ be analytic self maps of BN with kCφ − Cψ k < 2, and nonzero functions u(z), v(z) ∈ H ∞ (BN ), then uCφ ∼ vCψ . Remark. Carl Toews [26] gave a sufficient and necessary condition when two composition operator without weights lie in the same component. But it is somewhat different from the main theorem of this section, because equation in Lemma 4.3 is not equal and the method of [26] may be invalid. References [1] R. Aron, P. Galindo and M. Lindsr¨om, Connected components in the space of composition operators on H ∞ functions of many variables, Integral Equations Operator Theory, vol. 45, pp. 1-14, 2003. [2] H.Bear, Lectures on Gleason parts, Lecture Note in Mathematics 121, Springer-Verlag, Berlin and New York, 1970. [3] E. Berkson, Composition opertors isolated in the uniform operator topology, Proc. Amer. Math. Soc., vol. 81, pp. 230-232, 1981. [4] P. Bourdon, Components of linear fractional composition operators, J. Math. Anal. Appl., vol. 279, pp. 228-245, 2003. [5] B. Berndtsson, Interpolating sequences for H ∞ in the ball, Math. Indag. 47 (1985), 1-10; Proc. Kon. Nederl. Akad. Wetens., vol. 88A, pp. 1-10, 1985. [6] J. Bonet, M. Lindstr¨ om and E. Wolf, Differences of composition operators between weighted Banach spaces of holomorphic functions, Journal of the Australian Mathematical Society, vol. 84, pp. 9-20, 2008. [7] M. D. Contreras and S. D´az-Madrigal, Compact-type operators defined on H ∞ , Contemporary Math., vol. 232, pp. 111-118, 1999. [8] C. C. Cowen and B. D. MacCluer, Composition Operators on Spaces of Analytic Functions, CRC Press, Boca Raton, 1995. [9] Z. S. Fang and Z. H. Zhou, Differences of composition operators on the space of bounded analytic functions in the Polydisc, Abstract and Applied Analysis, vol. 2008, Article ID 983132, pp. 1-10, 2008.
41
10
C.Z.TONG
[10] Z. S. Fang and Z. H. Zhou, Differences of composition operators on the Bloch space in the Polydisc, Bulletin of the Australian Mathematical Society, 2009, preprint. [11] E. A. Gallardo-Guti´ errez, M. J. Gonz´ alez, P. J. Nieminen and E. Saksman, On the connected component of compact composition operators on the Hardy space, Advances in Mathematics, vol. 219, pp. 986-1001, 2008. [12] P. Gorkin, R. Mortini, D. Suarez, Homotopic Composition Operators on H ∞ (B n ), Function Spaces, Edwardsville, vol. IL, pp. 177C188, 2002. Contemporary Mathematics, vol. 328, Amer. Math. Soc., Providence, RI, 2003. [13] Takuya Hosokawa, Keiji Izuchi and Shˆ uichi Ohno, Topological structure of the space of weighted composition operators on H ∞ , Integral Equations Operator Theory, vol. 53, pp. 509-526, 2005. [14] C. Hammond and B.D. MacCluer, Isolation and Component Structure in Spaces of Composition Operators, Integral Equations Operator Theory, vol. 53, pp. 269-285, 2005. [15] T. Hosokawa and S. Ohno, Topological structures of the sets of composition operators on the Bloch spaces, J. Math. Anal. Appl., vol. 314, pp. 736-748, 2006. [16] T. Hosokawa and S. Ohno, Differences of composition operators on the bloch spaces, J. Operator Theory, vol. 57, pp. 229-242, 2007. [17] T. Kriete and J. Moorhouse, Linear relations in the Calkin algebra for composition operators, Trans. Amer. Math. Soc., vol. 359, pp. 2915-2944, 2007. [18] M. Lindstr¨ om and E. Wolf, Essential norm of the difference of weighted composition operators, Monatsh. Math., vol. 153, pp. 133-143, 2008. [19] B. D. MacCluer, Components in the space of composition operators, Integral Equations Operator Theory, vol. 12, pp. 725-738, 1989. [20] J. Moorhouse, Compact differences of composition operators, Journal of Functional Analysis, vol. 219, pp. 70-92, 2005. [21] B. MacCluer, S. Ohno and R. Zhao, Topological structure of the space of composition operators on H ∞ , Integral Equations Operator Theory, vol. 40, no. 4, pp. 481-494, 2001. [22] P. J. Nieminen and E. Saksman, On compactness of the difference of composition operators,J. Math. Anal. Appl., vol. 298, pp. 502-522, 2004. [23] S. Ohno and H. Takagi, Some properties of weighted composition operators on algebras of analytic functions, J. Nonlinear Conv. Anal., vol. 3, pp. 872-884, 2001. [24] J. H. Shapiro, Composition Operators and Classical Function Theory, Springer Verlag, New York, 1993. [25] J. H. Shapiro and C. Sundberg, Isolation amongst the composition operators, Pacific J. Math., vol. 145, pp. 117-152, 1990. [26] Carl Toews, Topological components of the set of composition operators on H ∞ (BN ), Integral Equations Operator Theory, vol. 48, pp. 265-280, 2004. [27] E. Wolf, Differences of composition operators between weighted Banach spaces of holomorphic functions on the unit polydisk, Result. Math., vol. 51, pp. 361372, 2008.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 42-53,2012, COPYRIGHT 2012 EUDOXUS PRESS, 42 LLC
RIEMANN-STIELTJES OPERATOR BETWEEN ITERATED LOGARITHMIC BLOCH SPACE AND MIXED-NORM SPACE ON THE UNIT BALL YU-XIA LIANG AND DAI-JUN TIAN∗
Abstract. In this paper, we discuss the boundedness and compactness of Riemann-Stieltjes operator between iterated logarithmic Bloch space and mixednorm space on the unit ball of Cn .
1. Introduction Let B be the unit ball of the complex space Cn , and S = ∂B the boundary of the unit ball. Denote by H(B) the class of all holomorphic functions on B and S(B) the collection of all the holpmorphic self mappings of B. dν(z) is the normalized Lebesgue volume measure on B and dσ(ζ) is the normalized surface measure P on S. Let z = (z1 , ..., zn ) and w = (w1 , ..., wn ) be the points in Cn and n hz, wi = k=1 zk w¯k . For f ∈ H(B), let
< q k 1 q k u(1) n n+k 1 > :1 q
Using this we get Ln(2) (e2 ; q; x)
x2 u(2) n
2
m k
(q; q)k
n k (q ; q)k
m=2 k=2 k 2
(2) un
n
(q ; q)m k (q; q)k
n
k
(q ; q)k
2
(q; q)m
(2) (u(1) n x; q)n (un x; q)n
1 X
n
(q ; q)m
k (q; q)k
(n;n) 2;q
gm
xm
1
9 > = 2
xm
2
9 > = 1
xm
1
> ;
1 X m X
m=1 k=1 k 1 (2) un
9 > = 1 > ;
1
1 X m X
(q; q)m
(1) (2) +xu(2) n (un x; q)n (un x; q)n
k
(q n ; q)m
n
k
(q ; q)k
> ;
1
(2) u(1) xm n ; un
2
m=2
+xu(2) n
1 X 1 q (1) (n;n) (2) (2) (u x; q)n (un x; q)n gm 1;q u(1) xm n ; un 1 qn n m=1
= x2 u(2) n
2
+ xu(2) n
1 q 1 qn
;
or Ln(2) (e2 ; q; x)
(2.2)
e2 (x)
x2
u(2) n
2
1 + xu(2) n
1 q 1 qn
:
On the other hand, since 0
Ln(2) ((y
x)2 ; q; x) = Ln(2) (e2 ; q; x)
2xLn(2) (e1 ; q; x) + x2 ;
it follows from Lemmas 2.1 and 2.2 that Ln(2) (e2 ; q; x)
(2.3)
e2 (x)
2x2 u(2) n
1 :
Combining (2.2) with (2.3) we immediately have Ln(2) (e2 ; q; x) whence the result.
e2 (x)
2x2 1
u(2) n
2
+ xu(2) n
1 q 1 qn
;
1
:
:
72
6
ESRA ERKU S ¸ -DUM AN
Now let A = (ajn ) be a non-negative regular summability matrix. Then replacing q in the de…nition (1.3) by a sequence (qn )n2N lying in the interval (0; 1) so that (2.4)
stA
lim n
1 1
qn =0 qnn
hold. Indeed, one can construct a sequence (qn )n2N satisfying (2.4). For example, take A = C1 , the Cesáro matrix of order one, and de…ne the sequence (qn )n2N by (2.5)
1=2; if n = m2 ; (m = 1; 2; 3; :::) 1 1 n ; if n 6= m2 :
qn =
Observe that this sequence satis…es the conditions in (2.4), however it is nonconvergent in the ordinary sequence. We are now ready to give a Korovkin type approximation result for the operators (2) Ln by means of A-statistical convergence. Theorem 2.4. Let A = (ajn ) be a non-negative regular summability matrix and let (qn )n2N be a sequence satisfying (2:4). Then, for all f 2 C [0; 1] ; (2.6)
lim Ln(2) (f ; qn ; :)
stA
f = 0:
n
if and only if (2.7)
stA
lim u(2) n =1 n
holds. Proof. Assume …rst that (2.6) holds for all f 2 C [0; 1]. Then, in particular, since e1 2 C[0; 1]; we may write that lim Ln(2) (e1 ; qn ; :)
stA
e1 = 0:
n
By Lemma 2.2, the proof of the implication (2.6))(2.7) is clear. Now assume that u(2) is a sequence satisfying (2.7). So, by Lemma 2.1, it is not hard to see that (2.8)
lim Ln(2) (e0 ; qn ; :)
stA
e0 = 0
n
since every convergent sequence is also A-statistically convergent provided that A is a non-negative regular summability matrix. Also, it immediately follows from Lemma 2.2 and condition (2.7) that (2.9)
lim Ln(2) (e1 ; qn ; :)
stA
e1 = 0
n
holds. Furthermore, we claim that (2.10)
lim Ln(2) (e2 ; qn ; :)
stA
e2 = 0:
n
To see this, for a given " > 0; de…ne the following sets: n D : = n : Ln(2) (e2 ; qn ; :) e2 D1
:
=
n:1
D2
:
=
n : u(2) n
u(2) n 1 1
2
qn qnn
" 4
o " ;
;
" 2
:
73
STATISTICAL APPROXIM ATING OPERATORS
7
So, by Lemma 2.3, we easily see that D D1 [ D2 , which implies, for each j 2 N, that X X X (2.11) ajn ajn + ajn : n2D
n2D1
n2D2
Then, observe that (2.7) implies stA
lim 1 n
u(2) n
2
= 0 and stA
lim u(2) n n
1 1
qn qnn
= 0:
Using these facts and also taking limit as n ! 1 in (2.11) we conclude that X lim ajn = 0; j
n2D
which gives (2.10). Now, combining (2.8), (2.9), (2.10) and using the Korovkin type approximation theorem in statistical sense proved by Gadjiev and Orhan [13] (see also [7, 8, 9]) the proof is completed. In a similar manner one can extend Theorem 2.4 to the r dimensional case for (r) the operators Ln (f ; qn ; x) given by (1.3) as follows: Theorem 2.5. Let A = (ajn ) be a non-negative regular summability matrix and let (qn )n2N be a sequence satisfying (2:4). Then, for all f 2 C [0; 1], stA
lim Ln(r) (f ; qn ; :) n
f =0
if and only if stA
lim u(r) n = 1: n
If one replaces the non-negative regular matrix A = (ajn ) in Theorem 2.5 with the identity matrix, then the following result is obtained, which is a classical case of Theorem 2.5. Theorem 2.6. Let (qn )n2N be a sequence of real number such that 0 < qn < 1 for each n 2 N and limn 11 qqnn = 0: Then, for all f 2 C [0; 1], the sequence n o n (r) Ln (f ; qn ; :) is uniformly convergent to f on the interval [0; 1] if and only if (r)
lim un = 1 holds (in the ordinary sense). n
However, the following example shows that our Theorem 2.5 is stronger than its classical case (Theorem 2.6). Example. Consider A = C1 := (cjn ); the Cesáro matrix of order one. Note that, under this choice, A-statistical convergence reduces to the statistical convergence. In this case we use the notation of stC1 : Let (r) := (u(1) ; u(2) ; :::; u(r) ) n st instead o (r) is de…ned by such that the sequence u(r) := un n2N
(2.12)
u(r) n
8 1 > if n = m2 (m = 1; 2; :::) > < 2; = > > : n ; otherwise. n+r
74
8
ESRA ERKU S ¸ -DUM AN (r)
Also consider the sequence (qn )n2N given by (2.5). Then observe that 0 < un < 1 and 0 < qn < 1 for each n 2 N and also that 1 qn st lim u(r) =0 lim n = 1 and st n n 1 qnn Therefore, by Theorem 2.5 we obtain, for all f 2 C [0; 1] ; that st
lim Ln(r) (f ; qn ; :)
f = 0:
n
However, since neither the sequence u(r) in (2.12) nor (qn )n2N in (2.5) are convergent (in the ordinary sense), it is easy to check that Theorem 2.6 does not work. 3. Rates of A-statistical convergence In this section, we study the rates of A-statistical convergence in Theorem 2.5 by the means of the modulus of continuity, the elements of Lipschitz class, Peetre’s K-functional and the Lipschitz type maximal functions, respectively. As usual, the modulus of continuity of a function f 2 C[0; 1] is de…ned by w(f; ) =
sup jy xj
jf (y)
; x;y2[0;1]
f (x)j ( > 0):
Now we have the following result. Theorem 3.1. Let (qn ) be a sequence such that 0 < qn < 1 for each n. For each n 2 N and for all f 2 C[0; 1], we have Ln(r) (f ; qn ; :)
(3.1)
f
2w(f;
n );
where (3.2)
n
:=
4 1
u(r) n
2
+ u(r) n
1 1
qn qnn
1=2
:
Proof. By the de…nition of the modulus of continuity, it is easy to see, for any x 2 [0; 1] and n 2 N, that Ln(r) (f ; qn ; x)
f (x)
Ln(r) (jf (y)
> 0;
f (x)j ; qn ; x)
1 w (f; ) 1 + Ln(r) (jy
xj ; qn ; x) :
Now, applying the Cauchy-Schwarz inequality for positive linear operators we have Ln(r) (f ; qn ; x)
f (x)
w (f; ) 1 +
1
Ln(r) ((y
x)2 ; qn ; x)
1 w (f; ) 1 + ( Ln(r) (e2 ; qn ; x) +2x Ln(r) (e1 ; qn ; x) which gives Ln(r) (f ; qn ; )
f
1
e1 (x) ) 2
1=2
e2 (x) o
1 w (f; ) 1 + ( Ln(r) (e2 ; qn ; ) o 1 +2 Ln(r) (e1 ; qn ; ) e1 ) 2
;
e2
75
STATISTICAL APPROXIM ATING OPERATORS
Then it follows from Lemmas 2.2 and 2.3 that (3.3) ( 1 (r) w (f; ) 1 + Ln (f ; qn ; ) f 4 1 Hence, choosing once.
:=
2
u(r) n
+
9
u(r) n
1 1
1=2
qn qnn
)
given as (3.2), the inequality (3.1) follows from (3.3) at
n
We will now study the rate of convergence of the positive linear operators (r) Ln (f ; qn ; x) given by (1.3) by means of the elements of the Lipschitz class LipM ( ), for 0 < 1 and M > 0: We recall that a function f 2 C[0; 1] belongs to LipM ( ) if the inequality (3.4)
jf (y)
holds.
f (x)j
M jy
xj ; (x; y 2 [0; 1])
Theorem 3.2. Let (qn ) be a sequence such that 0 < qn < 1 for each n. Then, for each n 2 N and for all f 2 LipM ( ), we have Ln(r) (f ; qn ; :)
(3.5) where
n
f
M
n;
is the same as in (3:2):
Proof. Let f 2 LipM ( ) and x 2 [0; 1]. From (3.4), we obtain Ln(r) (f ; qn ; x)
Ln(r) (jf (y)
f (x)
M Ln(r) (jy
f (x)j ; qn ; x) xj ; qn ; x) : 2
2
we get Now, applying the Hölder inequality with p = ; q = 2 n o (3.6) Ln(r) (f ; qn ; x) f (x) M Ln(r) ((y x)2 ; qn ; x)
=2
:
As in the proof of Theorem 3.1, we can write from (3.6) that (3.7)
Ln(r) (f ; qn ; )
If we take
n
f
M
4 1
u(r) n
2
+ u(r) n
1 1
qn qnn
=2
:
as in (3.2), then (3.5) is obtained from (3.7).
Now we recall the following space and norm: C 2 [0; 1] ; the space of all functions f such that f , f 0 , f 00 2 C [0; 1]. The norm on the space C 2 [0; 1] can be de…ned as kf kC 2 [0;1] := kf k + kf 0 k + kf 00 k :
We consider the following Peetre’s K-functional (similarly as in [4]) n o K(f ; ) := inf kf gk + kgk : 2 [0;1] C 2 g2C [0;1]
It is clear that if f 2 C [0; 1], then we have lim !0 K(f ; ) = 0. Some further results on the Peetre’s K-functional may be found in [6]. Theorem 3.3. Let (qn ) be a sequence such that 0 < qn < 1 for each n. Then for all f 2 C [0; 1], we have Ln(r) (f ; qn ; :)
(3.8) where
n
is the same as in (3:2):
f
2K(f ;
2 n );
76
10
ESRA ERKU S ¸ -DUM AN
Proof. Let g 2 C 2 [0; 1] and x 2 [0; 1]: From the Taylor expansion we have Ln(r) (g; qn ; x)
x; qn ; x) jg 0 j
Ln(r) (y
g(x)
1 L (r) (y 2 n Then, by Lemmas 2.1-2.3, we obtain that
2
+
Ln(r) (g; qn ; :)
kg 0 k u(r) n
1
g
+
jg 00 j :
x) ; qn ; x
1 2
u(r) n
4 1 u(r) n
4 1
2
2
+ u(r) n
+ u(r) n
1 1
1 qn kg 00 k 1 qnn qn kgkC 2 [0;1] : qnn
Since for all f 2 C[0; 1] Ln(r) (f ; qn ; :)
f
2 kf
we can write that Ln(r) (f ; qn ; :)
(3.9)
f
gk + Ln(r) (g; qn ; :)
n 2 kf
gk +
2 n
g ;
o kgkC 2 [0;1] ;
where n is the same as in (3.2). Now, by taking in…mum over g 2 C 2 [0; 1] on both sides of (3.9) we obtain (3.8). Finally, by using the similar technique as in Theorem 3.9 of Agratini [2], we (r) may estimate the following rates of the operators Ln (f ; qn ; x) by means of the Lipschitz type maximal function of order : Here we recall that the Lipschitz type maximal function of order introduced by B. Lenze in [18] as follows: s
w (f ) :=
jf (y) f (x)j ; jy xj x;y2[0;1]
sup y6=x;
2 (0; 1] :
s
Of course, the boundedness of w (f ) is equivalent to f 2 LipM ( ) : The we have the following Theorem 3.4. Let (qn ) be a sequence such that 0 < qn < 1 for each n. Then, we have s Ln(r) (f ; qn ; x) f (x) n w (f ) ; where
n
is the same as in (3:2):
Concluding remarks. Let A = (ajn ) be a non-negative regular summability matrix. If the conditions 1 qn stA lim u(r) lim =0 n = 1 and stA n 1 qnn hold, then observe that stA limn n = 0; where n is given by (3.2). In this case, we get stA lim w(f; n ) = stA lim K(f; 2n ) = 0 n
for all f 2 C[0; 1]: So, Theorems 3.1-3.3 give us the rates of A-statistical convergence in Theorem 2.5. Of course, if we replace A = (ajn ) by the identity matrix, then we get the ordinary rates of convergence.
77
STATISTICAL APPROXIM ATING OPERATORS
11
References [1] G.E. Andrews, R. Askey and R. Roy, Special Functions, Cambridge Univ. Press, Cambridge, 1999. [2] O. Agratini, Korovkin type error estimates for Meyer-König and Zeller operators, Math. Inequal. Appl. 4 (2001) 119-126. [3] A. Alt¬n, E. Erku¸s and F. Ta¸sdelen, The q-Lagrange polynomials in several variables, Taiwanese J. Math. 10 (2006) 1131-1137. [4] G. Bleimann, P.L. Butzer and L. Hahn, A Bernstein-type operator approximating continuous functions on the semi-axis, Indag. Math. 42 (1980) 255-262. [5] W.C.C. Chan, C.J. Chyan and H.M. Srivastava, The Lagrange polynomials in several variables, Integral Transform. Spec. Funct. 12 (2001) 139-148. [6] Z. Ditzian and V. Totik, Moduli of Smoothness, Springer Ser. Comput. Math., Vol. 9, Springer-Verlag, Berlin; 1987. [7] O. Duman, M.K. Khan and C. Orhan, A-statistical convergence of approximating operators, Math. Inequal. Appl. 6 (2003) 689-699. [8] E. Erku¸s and O. Duman, A Korovkin type approximation theorem in statistical sense, Studia Sci. Math. Hungarica 43 (2006) 285-294. [9] E. Erku¸s, O. Duman and H.M. Srivastava, Statistical approximation of certain positive linear operators constructed by means of the Chan-Chyan-Srivastava polynomials, Appl. Math. Comput. 182 (2006) 213-222. [10] H. Fast, Sur la convergence statistique (in French), Colloq. Math. 2 (1951) 241-244. [11] A.R. Freedman and J.J. Sember, Densities and summability, Paci…c J. Math. 95 (1981) 293-305. [12] J.A. Fridy, On statistical convergence, Analysis 5 (1985) 301-313. [13] A.D. Gadjiev and C. Orhan, Some approximation theorems via statistical convergence, Rocky Mountain J. Math. 32 (2002) 129-138. [14] V. Kac and P. Cheung, Quantum Calculus, Springer, Berlin, 2002. [15] E. Kolk, Matrix summability of statistically convergent sequences, Analysis 13 (1993) 77-83. [16] A. Kundu, Origin of quantum group and its application in integrable systems, Chaos, Solitons & Fractals 12 (1995) 2329-2344. [17] A. Lavagno and P.N. Swamy, Non-extensive entropy in q-deformed quantum groups, Chaos,Solitons & Fractals 13 (2002) 437-444. [18] B. Lenze, On Lipschitz type maximal functions and their smoothness spaces, Proc. Netherland Acad. Sci. A 91 (1998) 53-63. [19] S. Majid, Foundations of Quantum Group Theory, Cambridge Univ. Press, Cambridge, 2000. [20] H.M. Srivastava and H.L. Manocha, A Treatise on Generating Functions. Halsted Press (Ellis Horwood Limited, Chichester), John Wiley and Sons, New York, 1984.
Esra Erku¸ s-Duman Gazi University, Faculty of Sciences and Arts, Department of Mathematics, Teknikokullar TR-06500, Ankara, Turkey. E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 78-89 , 2012, COPYRIGHT 2012 EUDOXUS PRESS, 78 LLC
A bivariate blending interpolator and the properties Fangxun Bao1,∗ Qinghua Sun1
Jianxun Pan2 Qi Duan1
1 School of Mathematics, Shandong University, Jinan, 250100, China 2 Woman’s Academy at Shandong, Jinan, 250300, China
Abstract In this paper a new weighted bivariate blending interpolator is constructed based on function values and partial derivatives of a function. The interpolator has a simple and explicit mathematical representation, and is C 1 for any positive parameters in the whole interpolating region. Also, some properties of the interpolator are derived, such as the properties of the basis function, the bounded property and the error estimates. Keywords: Rational spline; bivariate blending interpolation; weighted interpolation; error estimates; computer-aided geometric design.
1
Introduction
The construction method of curve and surface and the mathematical description of them is a key issue in computer-aided geometric design. There are many ways to tackle this problem [1-3,9,1214,17,18], for example, the polynomial spline method, the NURBS method and the B´ezier method. These methods are effective and applied widely in shape design of industrial products. Generally speaking, most of the polynomial spline methods are the interpolating methods. However, one of the disadvantages of the polynomial spline method is that it can not be used to modify the local shape of the interpolating surfaces for unchanged given data. The NURBS and B´ezier methods are the so-called ”no-interpolating type” methods; this means that the constructed curve and surface do not pass through the given data, and the given points play the role of the control points. In recent years, Motivated by the univariate rational spline interpolation [4,5,10,11,15], the bivariate rational interpolation with parameters which has simple and explicit mathematical representation has been studied [6-8]. Since the parameters in the interpolator are selective according to the control need, the constrained control of the shape becomes possible. However, for this kind of bivariate rational interpolator, only when the parameters of y-direction satisfy certain condition can it be C 1 in the interpolating region [6,7]. It is inconvenient to modify the shape of surfaces. To overcome the problem, in this paper, a new weighted bivariate blending interpolator will be constructed based on function values and derivatives of a function, and its properties is also discussed. This paper is arranged as follows. In Section 2, a weighted bivariate blending rational interpolator is constructed. In Section 3, the basis of this interpolator is derived. Section 4 is about ∗
Corresponding author: [email protected]
1
BAO ET AL: BIVARIATE INTERPOLATOR
79
some properties of the interpolator, including the properties of the basis function and the bounded property. Sections 5 deals with the error estimates of the interpolator. Some examples are given in Section 6, which show that this interpolator is with good approximation to the interpolated function.
2
Interpolation
Let Ω : [a, b; c, d] be the plane region, and {(xi , yi , fi,j , ei,j , di,j ), i = 1, 2, · · · , n; j = 1, 2, · · · , m} be a given set of data points, where a = x1 < x2 < · · · < xn = b and c = y1 < y2 < · · · < ym = d are the (x,y) ∂f (x,y) knot spacings, fi,j , ei,j , di,j represent f (xi , yj ), ∂f∂x , ∂y at the point (xi , yj ) respectively. Let hi = xi+1 − xi , and lj = yj+1 − yj , and for any point (x, y) ∈ [xi , xi+1 ; yj , yj+1 ] in the xy-plane, Let θ = (x − xi )/hi and η = (y − yj )/lj . First, for each y = yj , j = 1, 2, · · · , m, construct the x-direction interpolating curve [5]; this is given by p∗i,j (x) =
∗ + θ 2 (1 − θ)W ∗ + θ 3 f (1 − θ)3 αi,j fi,j + θ(1 − θ)2 Vi,j i+1,j i,j , i = 1, 2, · · · , n − 1, (1 − θ)αi,j + θ
(1)
where ∗ Vi,j = (2αi,j + 1)fi,j + αi,j hi ei,j , ∗ Wi,j = (αi,j + 2)fi+1,j − hi ei+1,j ,
with αi,j > 0. This interpolation is called the rational cubic interpolator based on function values and derivatives which satisfies p∗i,j (xi ) = fi,j , p∗i,j (xi+1 ) = fi+1,j , p∗i,j 0 (xi ) = ei,j , p∗i,j 0 (xi+1 ) = ei+1,j . For each pair of (i, j), i = 1, 2, · · · , n − 1 and j = 1, 2, · · · , m − 1, using the x-direction interpolation p∗i,j (x), define the interpolation function P ∗ (x, y) on [xi , xi+1 ; yj , yj+1 ] as follows: ∗ Pi,j (x, y) = (1 − η)3 p∗i,j (x) + η(1 − η)2 Vi,j + η 2 (1 − η)Wi,j + η 3 p∗i,j+1 (x),
i = 1, 2, · · · , n − 1; j = 1, 2, · · · , m − 1,
(2)
where Vi,j = 3p∗i,j (x) + lj ((1 − θ)2 (3θ + 1)di,j + θ2 (2 − θ)di+1,j ), Wi,j = 3p∗i,j+1 (x) − lj ((1 − θ)2 (3θ + 1)di,j+1 + θ2 (2 − θ)di+1,j+1 ). ∗ (x, y) is called the bivariate blending rational Hermite interpolation based on function The term Pi,j values and partial derivatives which satisfies ∗ Pi,j (xr , ys ) = f (xr , ys ),
∗ (x , y ) ∗ (x , y ) ∂Pi,j ∂Pi,j r s r s = er,s , = dr,s , r = i, i + 1, s = j, j + 1. ∂x ∂y
If the knots are equally spaced for variable x, namely, hi = (b − a)/n, it is easy to test that the ∗ (x, y) is C 1 in the whole interpolating region [x , x ; y , y ], no matter interpolation function Pi,j 1 n 1 m what the parameters αi,j , αi,j+1 might be [6]. Furthermore, when αi,j = αi,j+1 = 1, the interpolator is a bivariate standard Hermite interpolation. 2
BAO ET AL: BIVARIATE INTERPOLATOR
80
The interpolating scheme above begins in x-direction first. Now, let the interpolation begins with y-direction first. For each x = xi , i = 1, 2, · · · , n, denote the y-direction interpolation in [yj , yj+1 ] by ∗ qi,j (y) =
∗ + η 2 (1 − η)U ∗ + η 3 f (1 − η)3 βi,j fi,j + η(1 − η)2 Ti,j i,j+1 i,j , i = 1, 2, · · · , m − 1, (1 − η)βi,j + η
(3)
where ∗ Ti,j = (2βi,j + 1)fi,j + βi,j lj di,j , ∗ Ui,j = (βi,j + 2)fi,j+1 − lj di,j+1 ,
with βi,j > 0. For each pair (i, j), i = 1, 2, · · · , n − 1 and j = 1, 2, · · · , m − 1, using the y-direction interpolation ∗ (y), define the interpolation function Q∗ (x, y) on [x , x qi,j i i+1 ; yj , yj+1 ] as follows: i,j ∗ ∗ Q∗i,j (x, y) = (1 − θ)3 qi,j (y) + θ(1 − θ)2 Ti,j + θ2 (1 − θ)Ui,j + θ3 qi+1,j (y),
i = 1, 2, · · · , n − 1; j = 1, 2, · · · , m − 1,
(4)
where ∗ Ti,j = 3qi,j (y) + hi ((1 − η)2 (3η + 1)ei,j + η 2 (2 − η)ei,j+1 ), ∗ Ui,j = 3qi+1,j (y) − hi ((1 − η)2 (3η + 1)ei+1,j + η 2 (2 − η)ei+1,j+1 ).
The interpolation function Q∗i,j (x, y) satisfies Q∗i,j (xr , ys ) = f (xr , ys ),
∂Q∗i,j (xr , ys ) ∂Q∗i,j (xr , ys ) = er,s , = dr,s , r = i, i + 1, s = j, j + 1. ∂x ∂y
Similarly, if the knots are equally spaced for variable y, namely, lj = (d − c)/m, the interpolation Q∗i,j (x, y) is C 1 in the whole interpolating region [x1 , xn ; y1 , ym ], no matter what the parameters βi,j , βi+1,j might be. The weighted bivariate blending rational interpolator will be constructed by using the two kinds of interpolator described above. Let ∗ Pi,j (x, y) = λi,j Pi,j (x, y) + (1 − λi,j )Q∗i,j (x, y),
(5)
with the weight coefficient λi,j ∈ [0, 1], then Pi,j (x, y) satisfies Pi,j (xr , ys ) = f (xr , ys ),
∂Pi,j (xr , ys ) ∂Pi,j (xr , ys ) = er,s , = dr,s , r = i, i + 1, s = j, j + 1, ∂x ∂y
and it is C 1 in the whole interpolating region [x1 , xn ; y1 , ym ] if the knots are equally spaced for variable x and y, no matter what the parameters αi,j , αi,j+1 and βi,j , βi+1,j might be. In the following of this paper, consider the interpolator defined in (5).
3
BAO ET AL: BIVARIATE INTERPOLATOR
3
81
The basis of the interpolation
From (1)-(5), the interpolation function Pi,j (x, y) defined in (5) can be written as follows: Pi,j (x, y) =
i+1 j+1 X X
[ar,s (θ, η)fr,s + br,s (θ, η)hi er,s + cr,s (θ, η)lj dr,s ],
(6)
r=i s=j
where (θ + (1 + θ)αi,j )(1 + 2η)λi,j (1 + 2θ)(η + (1 + η)βi,j )(1 − λi,j ) + ], (1 − θ)αi,j + θ (1 − η)βi,j + η (θ + (1 + θ)αi,j+1 )(3 − 2η)λi,j (1 + 2θ)(2 − η + (1 − η)βi,j )(1 − λi,j ) ai,j+1 (θ, η) = (1 − θ)2 η 2 [ + ], (1 − θ)αi,j+1 + θ (1 − η)βi,j + η (2 − θ + (1 − θ)αi,j )(1 + 2η)λi,j (3 − 2θ)(η + (1 + η)βi+1,j )(1 − λi,j ) ai+1,j (θ, η) = θ2 (1 − η)2 [ + ], (1 − θ)αi,j + θ (1 − η)βi+1,j + η (2 − θ + (1 − θ)αi,j+1 )(3 − 2η)λi,j ai+1,j+1 (θ, η) = θ2 η 2 [ (1 − θ)αi,j+1 + θ (3 − 2θ)(2 − η + (1 − η)βi+1,j )(1 − λi,j ) ], + (1 − η)βi+1,j + η (1 + 2η)λi,j αi,j + (1 + 3η)(1 − λi,j )], bi,j (θ, η) = θ(1 − θ)2 (1 − η)2 [ (1 − θ)αi,j + θ (3 − 2η)λi,j αi,j+1 bi,j+1 (θ, η) = θ(1 − θ)2 η 2 [ + (2 − η)(1 − λi,j )], (1 − θ)αi,j+1 + θ (1 + 2η)λi,j + (1 + 3η)(1 − λi,j )], bi+1,j (θ, η) = −θ2 (1 − θ)(1 − η)2 [ (1 − θ)αi,j + θ (3 − 2η)λi,j bi+1,j+1 (θ, η) = −θ2 (1 − θ)η 2 [ + (2 − η)(1 − λi,j )], (1 − θ)αi,j+1 + θ (1 + 2θ)(1 − λi,j )βi,j ci,j (θ, η) = (1 − θ)2 η(1 − η)2 [ + (1 + 3θ)λi,j ], (1 − η)βi,j + η (1 + 2θ)(1 − λi,j ) ci,j+1 (θ, η) = −(1 − θ)2 η 2 (1 − η)[ + (1 + 3θ)λi,j ], (1 − η)βi,j + η (3 − 2θ)(1 − λi,j )βi+1,j ci+1,j (θ, η) = θ2 η(1 − η)2 [ + (2 − θ)λi,j ], (1 − η)βi+1,j + η (3 − 2θ)(1 − λi,j ) ci+1,j+1 (θ, η) = −θ2 η 2 (1 − η)[ + (2 − θ)λi,j ]. (1 − η)βi+1,j + η ai,j (θ, η) = (1 − θ)2 (1 − θ)2 [
Terms ar,s (θ, η), br,s (θ, η), cr,s (θ, η), (r =, i, i + 1; s = j, j + 1) are called the basis of the bivariate interpolator defined by (5), and satisfy following equations and inequalities: ai,j (θ, η) + ai,j+1 (θ, η) + ai+1,j (θ, η) + ai+1,j+1 (θ, η) = 1,
(7)
bi,j (θ, η) + bi,j+1 (θ, η) − bi+1,j (θ, η) − bi+1,j+1 (θ, η) = θ(1 − θ)(1 + η(1 − 3η + 2η 2 )(1 − λi,j )),
(8)
ci,j (θ, η) − ci,j+1 (θ, η) + ci+1,j (θ, η) − ci+1,j+1 (θ, η) = η(1 − η)(1 + θ(1 − 3θ + 2θ2 )λi,j ), 4
(9)
BAO ET AL: BIVARIATE INTERPOLATOR
4
82
ar,s (θ, η) > 0, r = i, i + 1; s = j, j + 1,
(10)
bi,s (θ, η) > 0, bi+1,s (θ, η) < 0, s = j, j + 1,
(11)
cr,j (θ, η) > 0, cr,j+1 (θ, η) < 0, r = i, i + 1.
(12)
Some properties of the interpolation
For the interpolator defined by (5), we use Eq.(7) to arrive at the following unity property. Property 1. Let f (x, y) ≡ 1, ∀(x, y) ∈ Ω, and Pi,j (x, y) is its interpolation function over [xi , xi+1 ; yj , yj+1 ] defined in (5). No matter what positive number the parameters αi,s , βr,j take, the unity property holds, namely Z Z
D
Pi,j (x, y)dxdy = hi lj ,
where D denotes the subregion [xi , xi+1 ; yj , yj+1 ]. Use (6) to define Z Z D
Pi,j (x, y)dxdy = hi lj
i+1 j+1 X X
(a∗r,s fr,s + b∗r,s hi er,s + c∗r,s lj dr,s ),
r=i s=j
where
Z Z
a∗r,s
= Z Z
b∗r,s =
[0,1;0,1]
[0,1;0,1]
ar,s (θ, η)dθdη,
r = i, i + 1, s = j, j + 1,
br,s (θ, η)dθdη,
r = i, i + 1, s = j, j + 1,
cr,s (θ, η)dθdη,
r = i, i + 1, s = j, j + 1.
Z Z
c∗r,s =
[0,1;0,1]
a∗r,s , b∗r,s and c∗r,s are called the integral weights coefficients of the interpolator defined in (5). Thus, we use Eq.(7)-(9) to yield the following property. Property 2. Let Pi,j (x, y) is the interpolation function over [xi , xi+1 ; yj , yj+1 ] defined in (5). No matter what positive number the parameters αi,s and βr,j take, the integral weights coefficients of interpolation Pi,j (x, y) satisfy the following equations: 1 X 1 X
a∗i+r,j+s = 1,
r=0 s=0 1 X 1 X
1 (−1)r b∗i+r,j+s = , 6 r=0 s=0 1 X 1 X
1 (−1)s c∗i+r,j+s = . 6 r=0 s=0 From (10)-(12), it is easy to see that the following inequalities hold: a∗r,s > 0, r = i, i + 1, s = j, j + 1,
(13)
b∗i,s c∗r,j
s = j, j + 1,
(14)
r = i, i + 1.
(15)
> 0, > 0,
b∗i+1,s < 0, c∗r,j+1 < 0, 5
BAO ET AL: BIVARIATE INTERPOLATOR
83
From Property 2 and the inequalities (13)-(15), the following bounded property can be obtained. Property 3. Let Pi,j (x, y) be the interpolation function over [xi , xi+1 ; yj , yj+1 ] defined in (5). Denoting M1 = max{|fr,s |, r = i, i + 1; s = j, j + 1} M2 = max{hi |er,s |, r = i, i + 1; s = j, j + 1}, M3 = max{lj |dr,s |, r = i, i + 1; s = j, j + 1}. No matter what positive number the parameters αi,s and βr,j take, the integral of interpolation function Pi,j (x, y) satisfy the following inequality: Z Z
|
D
Pi,j (x, y)dxdy| ≤ (M1 +
M2 + M3 )hi lj , 6
where D denotes the subregion [xi , xi+1 ; yj , yj+1 ]. ∗ (x, y) defined in Furthermore, when λi,j = 1, the interpolation function Pi,j (x, y) becomes Pi,j (2). Denoting Z
ur,s = Z
vr,s =
1
0 1
0
Z
wr,s =
0
ar,s (θ, η)dη, r = i, i + 1, s = j, j + 1, br,s (θ, η)dη, r = i, i + 1, s = j, j + 1,
1
cr,s (θ, η)dη, r = i, i + 1, s = j, j + 1.
Then the following conclusions hold for any parameters αi,s > 0, 1 0 ≤ ur,s ≤ , r = i, i + 1, s = j, j + 1; 2 1 1 0 ≤ vi,s ≤ , − ≤ vi+1,s ≤ 0, s = j, j + 1; 8 8 (1 − θ)2 (1 + 3θ) (1 − θ)2 (1 + 3θ) wi,j = , wi,j+1 = − , 12 12 θ2 (2 − θ) θ2 (2 − θ) wi+1,j = , wi+1,j+1 = − . 12 12 Thus, the following property can be concluded. ∗ (x, y) defined in (2), the integral Property 4. For the bivariate blending interpolation Pi,j weights coefficients of the interpolation satisfy: 1 0 < a∗r,s < , r = i, i + 1, s = j, j + 1; 2 1 1 0 < b∗i,s < , − < b∗i+1,s < 0, s = j, j + 1; 8 8 7 5 5 7 ∗ ∗ , c =− , c∗ = , c∗ =− . ci,j = 144 i,j+1 144 i+1,j 144 i+1,j+1 144 Similarly, for the interpolation function Q∗i,j (x, y) defined in (4), there is the following property. 6
BAO ET AL: BIVARIATE INTERPOLATOR
84
Property 5. For the bivariate blending interpolation Q∗i,j (x, y) defined in (4), the integral weights coefficients of interpolation satisfy: 1 0 < a∗r,s < , r = i, i + 1, s = j, j + 1; 2 7 5 7 5 ∗ bi,j = , b∗i,j+1 = , b∗i+1,j = − , b∗i+1,j = − ; 144 144 144 144 1 1 0 < c∗r,j < , − < c∗r,j+1 < 0, r = i, i + 1. 8 8 From (5) and Property 4 and Property 5, it is easy to derive the following property. Property 6. For the interpolation Pi,j (x, y) defined in (5), no matter what positive number the parameters αi,s , βr,j take, the integral weights coefficients of interpolation satisfy: 1 0 < a∗r,s < , r = i, i + 1, s = j, j + 1, 2 1 1 0 < b∗i,s < , − < b∗i+1,s < 0, s = j, j + 1, 8 8 1 1 ∗ 0 < cr,j < , − < c∗r,j+1 < 0, r = i, i + 1. 8 8
5
Error estimates of the interpolation
For the error estimation of the interpolator defined in (5), note that the interpolator is local, without loss of generality, it is only necessary to consider the interpolating region [xi , xi+1 ; yj , yj+1 ] in order to process its error estimates. Let f (x, y) ∈ C 2 be the interpolated function, and Pi,j (x, y) is the interpolation function defined by (5) over [xi , xi+1 ; yj , yj+1 ]. Denoting k
∂f (x, y) ∂P ∂Pi,j (x, y) ∂f k = max | |, k k = max | |, ∂y ∂y ∂y ∂y (x,y)∈D (x,y)∈D
where D = [xi , xi+1 ; yj , yj+1 ]. By the Taylor expansion and the Peano-Kernel Theorem [16] gives the following: |f (x, y) − Pi,j (x, y)| ≤ |f (x, y) − f (x, yj )| + |Pi,j (x, yj ) − Pi,j (x, y)| + |f (x, yj ) − Pi,j (x, yj )| Z xi +1 2 ∂f ∂P ∂ f (τ, yj ) ≤ lj (k k + k k) + | Rx [(x − τ )+ ]dτ | ∂y ∂y ∂x2 xi Z xi+1 ∂f ∂P ∂ 2 f (x, yj ) ≤ lj (k k + k k) + k k |Rx [(x − τ )+ ]|dτ, (16) ∂y ∂y ∂x2 xi where k
∂ 2 f (x,yj ) k ∂x2
= maxx∈[xi ,xi+1 ] | (
Rx [(x − τ )+ ] = (
=
∂ 2 f (x,yj ) |, ∂x2
and
(x − τ ) − ai+1,j (θ, 0)(xi+1 − τ ) − bi+1,j (θ, 0)hi , xi < τ < x; −ai+1,j (θ, 0)(xi+1 − τ ) − bi+1,j (θ, 0)hi , x < τ < xi+1 , r(τ ), t(τ ),
xi < τ < x; x < τ < xi+1 . 7
BAO ET AL: BIVARIATE INTERPOLATOR
85
Thus, by simple integral calculation, it can be derived that Z
xi+1
xi
|Rx [(x − τ )+ ]|dτ = h2i
B1 (θ, αi,j , λi,j ) = h2i B(θ, αi,j , λi,j ), B2 (θ, αi,j , λi,j )
(17)
where B1 (θ, αi,j , λi,j ) = θ2 (1 − θ)2 [2θ + (1 − 2θ)λi,j + αi,j (2 − 2θ − (1 − 2θ)λi,j )]2 , B2 (θ, αi,j , λi,j ) = [θ(3 − 2θ) + 2(1 − θ)2 λi,j + αi,j (1 − θ)(3 − 2θ − 2(1 − θ)λi,j )] [θ(1 + 2θ − 2θλi,j ) + αi,j (1 + θ − 2θ2 + 2θ2 λi,j )]. For the fixed αi,j and λi,j , let (x)
Bi,j = max B(θ, αi,j , λi,j ).
(18)
θ∈[0,1]
This leads to the following theorem. Theorem 1. Let f (x, y) ∈ C 2 be the interpolated function, and Pi,j (x, y) be its interpolator defined by (5) in [xi , xi+1 ; yj , yj+1 ]. Whatever the positive values of the parameters αi,s , βr,j might be, the error of the interpolation satisfies |f (x, y) − Pi,j (x, y)| ≤ lj (k
∂f ∂P ∂ 2 f (x, yj ) (x) k+k k) + h2i k kBi,j , ∂y ∂y ∂x2
(x)
where Bi,j defined by (18). ∂ 2 f (x,y
∂ 2 f (x,y
)
)
j+1 j+1 k = maxx∈[xi ,xi+1 ] | |, then the following theorem holds. Similarly, denoting k ∂x2 ∂x2 2 Theorem 2. Let f (x, y) ∈ C be the interpolated function, and Pi,j (x, y) be its interpolation function defined by (5) in [xi , xi+1 ; yj , yj+1 ]. Whatever the positive values of the parameters αi,s , βr,j might be, the error of the interpolation satisfies
|f (x, y) − Pi,j (x, y)| ≤ lj (k
∂f ∂P ∂ 2 f (x, yj+1 ) (x) k+k k) + h2i k kBi,j+1 , ∂y ∂y ∂x2
(x)
where Bi,j+1 = maxθ∈[0,1] B(θ, αi,j+1 , λi,j ), and B(θ, αi,j , λi,j ) defined by (17). Note that the interpolator defined by (5) is symmetric about two variables x and y, we denote 2 (x ,y) 2 (x ,y) ∂Pi,j (x,y) (x,y) ∂f r r k ∂x k = max(x,y)∈D | ∂f∂x |, k ∂P |, and k ∂ f∂y k = maxy∈[yj ,yj+1 ] | ∂ f∂y |, 2 2 ∂x k = max(x,y)∈D | ∂x r = 1, 2, then the following theorem can be obtained. Theorem 3. Let f (x, y) ∈ C 2 be the interpolated function, and Pi,j (x, y) be its interpolator defined by (5) in [xi , xi+1 ; yj , yj+1 ]. Whatever the positive values of the parameters αi,s , βr,j might be, the error of the interpolation satisfies |f (x, y) − Pi,j (x, y)| ≤ hi (k
∂f ∂P ∂ 2 f (xr , y) (y) k+k k) + lj2 k kBr,j , r = 1, 2, ∂x ∂x ∂y 2
(y)
where Br,j = maxη∈[0,1] B(η, βr,j , λi,j ), and B(θ, αi,j , λi,j ) defined by (17). (x)
Furthermore, for Bi,j , when the parameter αi,j > 0 and the weight coefficient λi,j ∈ [0, 1], B2 (θ, αi,j , λi,j ) ≥ θ(1 − θ)[2θ + (1 − 2θ)λi,j + αi,j (2 − 2θ − (1 − 2θ)λi,j )]2 , 8
BAO ET AL: BIVARIATE INTERPOLATOR
86
so it is easy to derive that 1 (x) 0 ≤ Bi,j ≤ . 4 Thus, we can conclude the following theorem. (x)
Theorem 4. For any positive parameters αi,s , βr,j and weight coefficient λi,j ∈ [0, 1], Bi,s and (y)
Br,j are bounded, and 1 (x) (y) 0 ≤ Bi,s = Br,j ≤ , r = 1, 2; j = 1, 2. 4
6
Numerical examples
Example 1. Let the interpolated function be f (x, y) = sin(5(x2 − y)/2), (x, y) ∈ [0, 0.75; 0, 0.75], and let hi = lj = 0.25, then xi = 0.25i, yj = 0.25j, i = 0, 1, 2, 3, j = 0, 1, 2, 3. Also let αi,j = 0.9 + 0.2i − 0.2j, βi,j = 1.1 − 0.2i + 0.3j, λi,j = 0.8, i = 0, 1, 2, 3, j = 0, 1, 2, 3. Figure 1 shows the graph of the interpolated function f (x, y). Figure 2 shows the graph of the interpolation function P (x, y) defined by (5). Figure 3 shows the surface of the error f (x, y) − P (x, y). From Figure 3, it is easy to see that the interpolator defined by (5) gives a good approximation to the interpolated function.
1
0.5
0
−0.5
−1 0.8 0.8
0.6 0.6
0.4 0.4 0.2
0.2 0
0
Figure 1: Graph of surface f (x, y). Furthermore, for the same parameters αi,j and βi,j , the interpolation function Pi,j (x, y) defined in (5) can give a better approximation to the interpolated function by selecting suitable weight coefficient than the interpolator discussed in [6]. In order to show the case, an example is given in the following. 2 Example 2. Let the interpolated function be f (x, y) = ex −y . Note that the interpolator is local, we only consider a small interpolating region: [0.5, 0.7; 0.5, 0.7]. Let hi = lj = 0.2. 9
BAO ET AL: BIVARIATE INTERPOLATOR
87
1
0.5
0
−0.5
−1
−1.5 0.8 0.8
0.6 0.6
0.4 0.4 0.2
0.2 0
0
Figure 2: Graph of surface P (x, y). Pi,j (x, y) is the interpolation function defined by (5), Qi,j (x, y) is the interpolation function defined in [6], and denote R[f, Q] = |f (x, y) − Qi,j (x, y)|, R[f, P ] = |f (x, y) − Pi,j (x, y)|. For the same parameters αi,s = 1.15 and βr,j = 0.99, we take λi,j = 0.95. Table 1 gives some values of f (x, y), Pi,j (x, y), Qi,j (x, y), R[f, Q] and R[f, P ]. It is easy to see that the values of Pi,j (x, y) are closer to f (x, y) than that of Qi,j (x, y) in the interpolating region [0.5, 0.7; 0.5, 0.7]. Table 1. Values of f (x, y), Qi,j (x, y), Pi,j (x, y), R[f, Q] and R[f, P ]. x .55 .55 .55 .55 .55 .65 .65 .65 .65 .65
7
y .50 .55 .60 .65 .70 .50 .55 .60 .65 .70
f (x, y) .82078 .78075 .74267 .70645 .67200 .92543 .88029 .83736 .79652 .75768
Qi,j (x, y) .82077 .77964 .74243 .70719 .67199 .92541 .88172 .83765 .79553 .75766
Pi,j (x, y) .82077 .77974 .74244 .70710 .67199 .92541 .88146 .83761 .79571 .75766
R[f, Q] .00001 .00111 .00024 .00074 .00001 .00002 .00143 .00029 .00099 .00002
R[f, P ] .00001 .00101 .00023 .00065 .00001 .00002 .00117 .00025 .00081 .00002
Concluding remarks • This paper gives the explicit expression of a weighted bivariate blending rational interpolator. In the interpolator, there are four parameters αi,j , αi,j+1 , βi,j , βi+1,j and a weight coefficient λi,j , and it is C 1 in the whole interpolating region [x1 , xn ; y1 , ym ] for any parameters αi,j , βi,j and weight coefficient λi,j ∈ [0, 1]. Positive parameters and weight coefficient can be freely selected according to the needs of practical design. • For each pitch of the interpolating surface, the value of the interpolation function depends on 10
BAO ET AL: BIVARIATE INTERPOLATOR
88
−3
x 10 8 6 4 2 0 −2 −4 −6 −8 −10 0.8
0.8
0.6 0.6
0.4 0.4 0.2
0.2 0
0
Figure 3: Graph of surface f (x, y) − P (x, y). the interpolating data. Property 3 shows that the interpolation is stable. Properties 4–6 show that the function values and the partial derivatives play not the same roles in the interpolator. That is, the magnitude of the integral weight coefficient values describe that of ”function” of the interpolating knots and data. Also, the error estimation formula of this interpolator is obtained. Acknowledgment The support of the National Nature Science Foundation of China(No.61070096) and the Nature Science Foundation of the Shandong Province of China are gratefully acknowledged.
References [1] P.E. B´ezier, The Mathematical Basis of The UNISURF CAD System, Butterworth, London, 1986. [2] C.K. Chui, Multivariate Splines, SIAM, 1988. [3] P. Dierck, B. Tytgat, Generating the B´ezier points of BETA-spline curve, Comput. Aided Geom. Des. 6(1989)279-291. [4] Q. Duan, F. Bao, S. Du, E.H. Twizell, Twizell EH. Local control of interpolating rational cubic spline curves, Comput. Aided Des. 41(2009)825-829. [5] Q. Duan, K. Djidjeli, W.G. Price, E.H. Twizell, The approximation properties of some rational cubic splines, Int. J. Comput. Math. 72(1999)155-166. [6] Q. Duan, S. Li, F. Bao, E.H. Twizell, Hermite interpolation by piecewise rational surface, Appl. Math. Comput. 198(2008)59-72.
11
BAO ET AL: BIVARIATE INTERPOLATOR
89
[7] Q. Duan, L. Wang, E.H. Twizell, A new bivariate rational interpolation based on function values, Information Sciences 166(2004)181-191. [8] Q. Duan, Y. Zhang, E.H. Twizell, A bivariate rational interpoaltion and the properties, Appl. Math. Comput. 179(2006)190-199. [9] G. Farin, Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide (fourth ed.), Academic press, New York, 1997. [10] J.A. Gregory, M. Sarfraz, P.K. Yuen, Interactive curve design using C 2 rational splines, Comput. Graph. 18(1994)153–159. [11] M.Z. Hussain, M. Sarfraz, Positivity-preserving interpolation of positive data by rational cubics, J. Comput. Appl. Math. 218(2008)446-458. [12] K. Konno, H. Chiyokura, An approach of designing and controlling free-form surfaces by using NURBS boundary Gregory patches, Comput. Aided Geom. Des. 13(1996)825-849. [13] R. M¨ uller, Universal parametrization and interpolation on cubic surfaces, Comput. Aided Geom. Des. 19(2002)479-502. [14] L. Piegl, On NURBS: A survey, IEEE Comput. Graph. Appl. 11(1991)55-71. [15] M. Sarfraz, A C 2 rational cubic spline which has linear denominator and shape control, Ann. Univ. Sci. Budapest 37(1994)53-62. [16] M.H. Schultz, Spline Analysis, Prentice-Hall, Englewood Cliffs, New Jersey, 1973. [17] J. Tan, S. Tang, Composite schemes for multivariate blending rational interpolation, J. Comput. Appl. Math. 144(2002)263-275 [18] R. Wang, Multivariate Spline Functions and Their Applications, Kluwer Academic Publishers, Beijing/New York/Dordrecht/Boston/London, 2001.
12
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 90-100 , 2012, COPYRIGHT 2012 EUDOXUS PRESS, 90 LLC
A blending rational spline for value control of curves with minimal energy

Fangxun Bao^a,∗, Qinghua Sun^a, Gengwen Yang^b, Qi Duan^a

^a School of Mathematics, Shandong University, Jinan, 250100, China
^b Luoyang Institute of Science and Technology, Luoyang, 471023, China
Abstract A weighted blending rational spline is constructed using only the function values of a function. The interpolation can approximate the interpolated function very well by selecting a suitable parameter and weight coefficient. Moreover, a new method of value control is employed to control the shape of curves, and the optimal solution with minimal strain energy for the value control equation is derived. Also, when the weight coefficient is in [0, 1], the error estimation formula of this interpolation function is obtained.
Key words: Rational spline; weighted blending interpolation; minimal strain energy; value control; error estimation
1 Introduction
Spline interpolation is a useful and powerful tool in curve and surface design, for example for the outer shape of a ship, car or aeroplane. Many authors have studied several kinds of splines for curve and surface control [2-4,9,14,15,19]. In recent years, the rational spline and its application to shape preserving and shape control have received attention. For the univariate rational spline interpolation with parameters, positivity-preserving, monotonicity-preserving and convexity-preserving properties of the interpolating curves have been discussed in References [13,16,17]. In Reference [11], a method has been developed to construct conic blending arcs from constraints using a rational parametric representation that combines the separate cases of blending parallel and non-parallel edges. In References [6-8], the region control and convexity control of the interpolating curves have been studied. Since the parameters in the interpolation function can be selected according to the control need, the constrained control of the shape becomes possible. In fact, some practical designs in CAGD need only local control, but only a few methods for local control have been reported. In Reference [10], a local control method of interval tension using weighted splines was given. In References [1,5], a local control method of the rational cubic spline with parameters based on function values was discussed, including value control, inflection-point control and convexity control. However, for the local shape control of the interpolating curve at a given point, only when a certain condition is satisfied can the shape of the interpolating curve at that point be modified.
∗ Corresponding author: [email protected]
That is to say, there are still some cases in which no parameters can be used to control the shape of the curve. To overcome this problem, in this paper a weighted blending rational interpolator is constructed using a rational cubic spline and a polynomial spline. Moreover, a method to control the value of the interpolation function at a prescribed parameter is also developed. This paper is structured as follows. In Section 2, a C¹-continuous weighted blending interpolation function is constructed using only function values. Section 3 deals with a method to control the value of the interpolation function at a prescribed parameter; moreover, the minimal strain energy spline interpolating an additional point is computed and numerical examples are presented to show the performance of the method. In Section 4, when the interpolated function f(t) is in C²[a, b] and the weight coefficient λi is in [0, 1], the error estimation formula of the interpolation is obtained.
2 Interpolation
Let {(t_i, f_i); i = 1, ..., n} be a given set of data points, with a = t_1 < ... < t_n = b the knot sequence and f_i the values of the interpolated function f(t) at the knots t_i (i = 1, 2, ..., n). Denote h_i = t_{i+1} − t_i, θ = (t − t_i)/h_i, and let α_i be a positive parameter. Then a C¹-continuous, piecewise rational cubic spline with linear denominator is defined on an interpolating interval as follows:

P^*(t)|_{[t_i,t_{i+1}]} = \frac{p_i(t)}{q_i(t)}, \quad i = 1, 2, ..., n − 1,   (1)

where

p_i(t) = (1 − θ)^3 α_i f_i + θ(1 − θ)^2 V_i + θ^2 (1 − θ) W_i + θ^3 f_{i+1},
q_i(t) = (1 − θ) α_i + θ,
V_i = (2α_i + 1) f_i + α_i h_i d_i, \quad W_i = (α_i + 2) f_{i+1} − h_i d_{i+1},

with d_1 = Δ_1, d_n = Δ_{n−1}, d_i = (h_i Δ_{i−1} + h_{i−1} Δ_i)/(h_{i−1} + h_i), i = 2, 3, ..., n − 1, and Δ_i = (f_{i+1} − f_i)/h_i. This rational cubic spline P^*(t) satisfies

P^*(t_i) = f_i, \quad P^{*\prime}(t_i) = d_i, \quad i = 1, ..., n,

and it exists and is unique for the given data and the positive parameter α_i.

We consider another interpolation function: the cubic polynomial spline based on function values and given by

P_*(t)|_{[t_i,t_{i+1}]} = (1 − θ)^3 f_i + θ(1 − θ)^2 (3f_i + h_i d_i) + θ^2 (1 − θ)(3f_{i+1} − h_i d_{i+1}) + θ^3 f_{i+1},   (2)

where d_i, i = 1, 2, ..., n, are defined as in (1). Then P_*(t) satisfies

P_*(t_i) = f_i, \quad P_*'(t_i) = d_i, \quad i = 1, ..., n.

A weighted blending rational interpolation will be constructed using the two kinds of interpolation described above. Let

P(t)|_{[t_i,t_{i+1}]} = λ_i P^*(t)|_{[t_i,t_{i+1}]} + (1 − λ_i) P_*(t)|_{[t_i,t_{i+1}]}, \quad i = 1, 2, ..., n − 1,   (3)

with the weight coefficient λ_i ∈ R. The interpolation function based on function values in [a, b] defined by (3) is denoted by P(t). It is obvious that P(t) is a C¹-continuous, piecewise rational interpolation, called the weighted blending rational spline, and it satisfies

P(t_i) = f_i, \quad P'(t_i) = d_i, \quad i = 1, 2, ..., n.
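For readers who wish to experiment with the construction, the following Python sketch (ours, not part of the paper; the function names are illustrative) evaluates the rational spline (1), the polynomial spline (2) and the blend (3) from the data, using the derivative estimates d_i defined after (1). The usage section reproduces the setting of Example 1 below.

```python
import numpy as np

def derivative_estimates(t, f):
    """Three-point derivative estimates d_i used in (1)-(3):
    d_1 = Delta_1, d_n = Delta_{n-1},
    d_i = (h_i*Delta_{i-1} + h_{i-1}*Delta_i) / (h_{i-1} + h_i)."""
    t, f = np.asarray(t, float), np.asarray(f, float)
    h = np.diff(t)
    delta = np.diff(f) / h
    d = np.empty_like(f)
    d[0], d[-1] = delta[0], delta[-1]
    d[1:-1] = (h[1:] * delta[:-1] + h[:-1] * delta[1:]) / (h[:-1] + h[1:])
    return d

def blended_spline(t, f, alpha, lam):
    """Return a callable evaluating the weighted blending rational spline P of (3);
    alpha[i] and lam[i] are the per-interval parameter and weight coefficient."""
    t, f = np.asarray(t, float), np.asarray(f, float)
    d = derivative_estimates(t, f)

    def P(x):
        i = min(max(np.searchsorted(t, x, side='right') - 1, 0), len(t) - 2)
        h = t[i + 1] - t[i]
        th = (x - t[i]) / h
        a, l = alpha[i], lam[i]
        # rational cubic spline (1)
        V = (2 * a + 1) * f[i] + a * h * d[i]
        W = (a + 2) * f[i + 1] - h * d[i + 1]
        p = (1 - th)**3 * a * f[i] + th * (1 - th)**2 * V \
            + th**2 * (1 - th) * W + th**3 * f[i + 1]
        q = (1 - th) * a + th
        P_star = p / q
        # cubic polynomial spline (2)
        P_low = (1 - th)**3 * f[i] + th * (1 - th)**2 * (3 * f[i] + h * d[i]) \
            + th**2 * (1 - th) * (3 * f[i + 1] - h * d[i + 1]) + th**3 * f[i + 1]
        # weighted blend (3)
        return l * P_star + (1 - l) * P_low

    return P

if __name__ == "__main__":
    # Data of Example 1: f(t) = ln(1 + 8t) sampled at t = 0, 0.5, 1.0, 1.5
    knots = [0.0, 0.5, 1.0, 1.5]
    vals = [np.log(1 + 8 * s) for s in knots]
    P = blended_spline(knots, vals, alpha=[0.6] * 3, lam=[1.8] * 3)
    for x in (0.6, 0.75, 0.9):
        print(x, P(x), np.log(1 + 8 * x))
```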
Example 1. Let the interpolated function be f(t) = ln(1 + 8t), t ∈ [0, 1.5], with interpolating knots at t = ih, i = 0, 1, 2, 3, h = 0.5. Denote the corresponding C¹-continuous interpolation functions defined by (1)–(3) on [0, 1.5] by P^*(t), P_*(t) and P(t), respectively. Consider the section of the interpolation functions corresponding to the interval [0.5, 1.0] with α_i = 0.6 and λ_i = 1.8. Fig. 1 shows the graphs of the curves f(t), P^*(t), P_*(t) and P(t) in the interval [0.5, 1.0]. It is easy to see that P(t) gives a better approximation to f(t) than the others in the interval [0.5, 1.0].
Figure 1: Graphs of the curves f (t), P ∗ (t), P∗ (t) and P (t).
3 Value control with minimal strain energy
The shape of interpolating curves on an interpolating interval depends on the interpolating data. Generally speaking, when the interpolating data are given, the shape of the interpolating curve is fixed. Note, however, that (3) contains the independent parameter α_i and the weight coefficient λ_i; by varying them, the interpolation function can be changed while the interpolating data remain unchanged. Thus, the shape of the interpolating curve can be modified by selecting the parameter and the weight coefficient appropriately. In what follows, we consider the case of equally spaced knots, namely h_i = h_j for all i, j ∈ {1, 2, ..., n − 1}.

Consider the following constrained control problem. For a point t^* in the interpolating interval [t_i, t_{i+1}], let θ^* = (t^* − t_i)/h_i be its local co-ordinate. Suppose the practical design requires the function value of the interpolation at the point t^* to be equal to a real number M, with M ∈ (min{f_i, f_{i+1}}, max{f_i, f_{i+1}}); how can this be achieved? This kind of control is called the value control of the interpolation at a point. Write

M = u f_i + (1 − u) f_{i+1},   (4)
with u ∈ (0, 1). Then the equation P(t^*) = M can be written as

λ_i p_i(t^*) + (1 − λ_i) q_i(t^*) P_*(t^*) − [u f_i + (1 − u) f_{i+1}] q_i(t^*) = 0.   (5)

Eq. (5) is equivalent to

B_1 α_i + B_2 λ_i (α_i − 1) + B_3 = 0,   (6)

where B_1 = (1 − θ^*) B, B_2 = θ^{*2} (1 − θ^*)^2 [2(f_i − f_{i+1}) + (d_i + d_{i+1}) h_i], B_3 = θ^* B, with

B = [(1 − θ^*)^2 (1 + 2θ^*) − u] f_i + [θ^{*2} (3 − 2θ^*) − 1 + u] f_{i+1} + θ^* (1 − θ^*) [(1 − θ^*) d_i − θ^* d_{i+1}] h_i.

From Eq. (6) it is easy to see that there must exist a positive parameter α_i and a weight coefficient λ_i satisfying Eq. (5). Therefore we have the following theorem.

Theorem 1. For the given interpolation data {(t_i, f_i); i = 1, ..., n}, where the knots are equally spaced, let P(t) be the interpolation function over [t_i, t_{i+1}] defined by (3), let t^* be a point in [t_i, t_{i+1}], and let M be a real number with M ∈ (min{f_i, f_{i+1}}, max{f_i, f_{i+1}}). Then there exist a positive parameter α_i and a weight coefficient λ_i such that P(t^*) = M.

Theorem 1 shows that the value control problem can always be achieved. However, from Eq. (6) it is evident that the value control equation (5) has many solutions, and each solution yields a different interpolating curve. This means that the uniqueness of the interpolating curve for the value control problem cannot be guaranteed. Hence an important problem is how to choose the best solution among all solutions. In the following we present a solution to this problem.

First of all, we consider the strain energy of the interpolating curve. The strain energy of a (piecewise) C²-continuous curve f(t) defined on [a, b] is

\int_a^b [f''(t)]^2 \, dt,

where f''(t) is the second derivative of f(t). Since the interpolation function P(t) defined in (3) is local, without loss of generality it is only necessary to consider the strain energy of the interpolating curve P(t) in the subinterval [t_i, t_{i+1}]. Denoting this strain energy by E(α_i, λ_i), we have

E(α_i, λ_i) = \int_{t_i}^{t_{i+1}} [P''(t)]^2 \, dt = \frac{A_1 α_i + A_2 (1 − α_i)^2 λ_i^2}{α_i},   (7)

where

A_1 = \frac{4}{h_i^3} [3(f_i − f_{i+1})^2 + 3 h_i (d_i + d_{i+1})(f_i − f_{i+1}) + (d_i^2 + d_i d_{i+1} + d_{i+1}^2) h_i^2],
A_2 = \frac{4}{5 h_i^3} [2(f_i − f_{i+1}) + (d_i + d_{i+1}) h_i]^2.
It is evident from (7) that the strain energy E(α_i, λ_i) of the interpolating curve P(t) defined by (3) in the subinterval [t_i, t_{i+1}] depends only on the parameter α_i and the weight coefficient λ_i. From (7), when λ_i = 0 or α_i = 1 the strain energy of the interpolation curve P(t) defined by (3) is minimal; in this case the interpolating curve is the polynomial curve defined in (2), and its strain energy is A_1.

Definition. The solution (α_i, λ_i) of Eq. (5) is called the optimal solution if (α_i, λ_i) is the minimum point of the strain energy E(α_i, λ_i) defined in (7) under the constraint condition (6). If the optimal solution (α_i, λ_i) exists, the value control is called value control with minimal strain energy.

In order to obtain the optimal solution for the value control, we let

L(α_i, λ_i, µ) = E(α_i, λ_i) − µ (B_1 α_i + B_2 λ_i (α_i − 1) + B_3)
            = \frac{A_1 α_i + A_2 (1 − α_i)^2 λ_i^2}{α_i} − µ (B_1 α_i + B_2 λ_i (α_i − 1) + B_3).   (8)

From the following system of equations

∂L/∂α_i = A_2 λ_i^2 (α_i^2 − 1)/α_i^2 − B_1 µ − B_2 λ_i µ = 0,
∂L/∂λ_i = 2 A_2 λ_i (1 − α_i)^2 / α_i − B_2 (α_i − 1) µ = 0,
∂L/∂µ = −B_1 α_i − B_2 (α_i − 1) λ_i − B_3 = 0,

it is easy to conclude that the stationary point (α_i, λ_i, µ) of the function L(α_i, λ_i, µ) defined by (8) is given by

α_i = \frac{θ^*}{1 − θ^*}, \quad λ_i = \frac{2 B_1 B_3}{B_2 (B_1 − B_3)}, \quad µ = −\frac{4 A_2 B_1}{B_2^2},   (9)

where B_1, B_2, B_3 are defined in Eq. (6) and A_2 is defined in (7). It is easy to prove that (α_i, λ_i) is indeed the optimal solution for the value control. This can be stated in the following theorem for value control with minimal strain energy.

Theorem 2. For the given interpolation data {(t_i, f_i); i = 1, ..., n}, where the knots are equally spaced, let P(t) be the interpolation function over [t_i, t_{i+1}] defined by (3), let t^* be a point in [t_i, t_{i+1}], and let M be a real number with M ∈ (min{f_i, f_{i+1}}, max{f_i, f_{i+1}}). Then the optimal solution of Eq. (5) is given by

α_i = \frac{θ^*}{1 − θ^*}, \quad λ_i = \frac{2 B_1 B_3}{B_2 (B_1 − B_3)}.
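To make the optimal solution (9) concrete, here is a short Python sketch (ours, not from the paper; names are illustrative) that computes B, B_1, B_2, B_3, the optimal pair (α_i, λ_i) of (9) and the corresponding strain energy (7) from the local data of one subinterval. The usage data follow Example 2 below; the derivative estimates d_i = 2 and d_{i+1} = −4/7 are obtained from Table 1 with the formula for d_i in Section 2.

```python
from fractions import Fraction as F

def optimal_value_control(fi, fi1, di, di1, hi, theta_star, u):
    """Optimal (alpha_i, lambda_i) of (9) for the value control P(t*) = M,
    with M = u*fi + (1-u)*fi1 and theta_star = (t* - t_i)/h_i."""
    # B, B1, B2, B3 as defined after (6)
    B = ((1 - theta_star)**2 * (1 + 2*theta_star) - u) * fi \
        + (theta_star**2 * (3 - 2*theta_star) - 1 + u) * fi1 \
        + theta_star * (1 - theta_star) * ((1 - theta_star)*di - theta_star*di1) * hi
    B1, B3 = (1 - theta_star) * B, theta_star * B
    B2 = theta_star**2 * (1 - theta_star)**2 * (2*(fi - fi1) + (di + di1)*hi)
    alpha = theta_star / (1 - theta_star)          # first component of (9)
    lam = 2*B1*B3 / (B2 * (B1 - B3))               # second component of (9)
    # strain energy (7) at the optimal pair
    A1 = 4/hi**3 * (3*(fi - fi1)**2 + 3*hi*(di + di1)*(fi - fi1)
                    + (di**2 + di*di1 + di1**2)*hi**2)
    A2 = 4/(5*hi**3) * (2*(fi - fi1) + (di + di1)*hi)**2
    E = (A1*alpha + A2*(1 - alpha)**2 * lam**2) / alpha
    return alpha, lam, E

if __name__ == "__main__":
    # Example 2 on [1, 2]: f(1) = 5, f(2) = 3, d_i = 2, d_{i+1} = -4/7,
    # t* = 1.4 (theta* = 2/5) and the requirement P(1.4) = 4.6, i.e. u = 4/5.
    alpha, lam, E = optimal_value_control(F(5), F(3), F(2), F(-4, 7),
                                          F(1), F(2, 5), F(4, 5))
    print(alpha, lam, float(E))   # expected: 2/3 and 17/57
```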
Remark: In the optimal solution (α_i, λ_i), when θ^* = 1/2 we have α_i = 1. In this case the interpolation function P(t) defined in (3) is the polynomial interpolation defined by (2); its shape is fixed, so the function value cannot be modified with minimal strain energy.

Example 2. Given the interpolating data shown in Table 1, let P(t) be the interpolation function defined by (3) in the interpolating interval [0.5, 4]. Since the interpolation function is local, it is possible to compute its expression directly in the interval [1, 2] only.
Table 1. The interpolating data

t      0.5  1.0  2.0  2.4  3.0  4.0
f(t)   3.0  5.0  3.0  3.0  4.0  6.5
This example shows how value control with minimal strain energy of the interpolating curves can be obtained by selecting an optimal solution while the interpolating data remain unchanged. Taking α_i = 2 and λ_i = 0.4, the interpolation function in the interval [1, 2] is given by

P_1(t) = \frac{941 − 3403t + 3012t^2 − 1014t^3 + 114t^4}{35t − 105}.   (10)

It can be shown from (10) that P_1(1.4) = 20637/4375 ≈ 4.71703. For the given interpolating data, if the design requires P(1.4) = 4.6, then u = 0.8 by (4). Thus α_i = 2/3, λ_i = 17/57 by (9); that is to say, (2/3, 17/57) is the optimal solution of the value control problem. We denote the resulting interpolation function in [1, 2] by P_2(t):

P_2(t) = \frac{−385 + 939t − 202t^2 − 222t^3 + 80t^4}{21t + 21}.

Furthermore, if the design requires P(1.4) = 4.8, then u = 0.9 by (4). Thus α_i = 2/3, λ_i = −47/38 by (9), so (2/3, −47/38) is the optimal solution of the value control problem. We denote the corresponding interpolation function in [1, 2] by P_3(t):

P_3(t) = \frac{105 − 387t + 691t^2 − 424t^3 + 85t^4}{7t + 7}.
Figure 2: Graphs of the curves P_1(t), P_2(t) and P_3(t).

Fig. 2 shows the graphs of the curves P_1(t), P_2(t) and P_3(t); it can be verified that P_2(1.4) = 4.6 and P_3(1.4) = 4.8.
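A quick numerical check of these values (our own snippet, evaluating the rational expressions (10) and the two displayed formulas for P_2 and P_3):

```python
def rational(coeffs, den):
    """Build t -> (c0 + c1*t + ...) / (a + b*t) from numerator coefficients
    (ascending powers) and a denominator pair (a, b)."""
    a, b = den
    return lambda t: sum(c * t**k for k, c in enumerate(coeffs)) / (a + b * t)

P1 = rational([941, -3403, 3012, -1014, 114], (-105, 35))
P2 = rational([-385, 939, -202, -222, 80], (21, 21))
P3 = rational([105, -387, 691, -424, 85], (7, 7))

print(P1(1.4), P2(1.4), P3(1.4))   # approx 4.71703, 4.6, 4.8
```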
4 Error estimation
To derive the error estimation of the weighted blending rational interpolation function defined in (3), we assume that the weight coefficient λ_i is in [0, 1] and that the knots are equally spaced, namely h_i = h_j for all i, j ∈ {1, 2, ..., n − 1}. Since the interpolation is local, it is sufficient to consider just the subinterval [t_i, t_{i+1}]. Assuming that f(t) ∈ C²[a, b] and that P(t) is the rational cubic interpolation function of f(t) in [t_i, t_{i+1}], we discuss the following three cases.

Case 1. i = 2, 3, ..., n − 2. In this case, after some rearrangement, (3) can be rewritten as

P(t) = ω_1(θ, α_i, λ_i) f_{i−1} + ω_2(θ, α_i, λ_i) f_i + ω_3(θ, α_i, λ_i) f_{i+1} + ω_4(θ, α_i, λ_i) f_{i+2},   (11)

where

ω_1(θ, α_i, λ_i) = −\frac{θ(1 − θ)^2 [α_i + θ(1 − α_i)(1 − λ_i)]}{2((1 − θ)α_i + θ)},
ω_2(θ, α_i, λ_i) = \frac{(1 − θ)[θ(2 − θ) + 2(1 − θ^2)α_i + 3θ^2(1 − θ)(1 − α_i)(1 − λ_i)]}{2((1 − θ)α_i + θ)},
ω_3(θ, α_i, λ_i) = \frac{θ[2θ(2 − θ) + (1 − θ^2)α_i − 3θ(1 − 2θ + 3θ^2)(1 − α_i)(1 − λ_i)]}{2((1 − θ)α_i + θ)},
ω_4(θ, α_i, λ_i) = −\frac{θ^2(1 − θ)[θ + (1 − θ)((1 − λ_i)α_i + λ_i)]}{2((1 − θ)α_i + θ)}.
Using the Peano–Kernel Theorem [18] we obtain

R[f] = f(t) − P(t) = \int_{t_i}^{t_{i+2}} f''(τ) R_t[(t − τ)_+] \, dτ,   (12)

where

R_t[(t − τ)_+] =
  p(τ) = (t − τ) + \frac{θ^2(1 − θ)[θ + (1 − θ)((1 − λ_i)α_i + λ_i)](t_{i+2} − τ)}{2((1 − θ)α_i + θ)} − \frac{θ[2θ(2 − θ) + (1 − θ^2)α_i − 3θ(1 − 2θ + 3θ^2)(1 − α_i)(1 − λ_i)](t_{i+1} − τ)}{2((1 − θ)α_i + θ)}, \quad t_i < τ < t;
  q(τ) = −\frac{θ[2θ(2 − θ) + (1 − θ^2)α_i − 3θ(1 − 2θ + 3θ^2)(1 − α_i)(1 − λ_i)](t_{i+1} − τ)}{2((1 − θ)α_i + θ)} + \frac{θ^2(1 − θ)[θ + (1 − θ)((1 − λ_i)α_i + λ_i)](t_{i+2} − τ)}{2((1 − θ)α_i + θ)}, \quad t < τ < t_{i+1};
  r(τ) = \frac{θ^2(1 − θ)[θ + (1 − θ)((1 − λ_i)α_i + λ_i)](t_{i+2} − τ)}{2((1 − θ)α_i + θ)}, \quad t_{i+1} < τ < t_{i+2}.

Now we consider the properties of the kernel function R_t[(t − τ)_+] of the variable τ. It is easy to see that r(τ) ≥ 0 for all τ ∈ [t_{i+1}, t_{i+2}]. For q(τ), since

q(t) = \frac{θ(1 − θ)[(1 − θ)(1 + θ + θ(1 − λ_i)(1 − 2θ))α_i + θ(1 + (1 − θ)λ_i + 2θ(1 − θ)(1 − λ_i))] h_i}{−2((1 − θ)α_i + θ)} ≤ 0,
and

q(t_{i+1}) = \frac{θ^2(1 − θ)[θ + (1 − θ)((1 − λ_i)α_i + λ_i)] h_i}{2((1 − θ)α_i + θ)} ≥ 0,

the root τ^* of q(τ) is

τ^* = t_{i+1} − \frac{θ(1 − θ)[θ + (1 − θ)((1 − λ_i)α_i + λ_i)] h_i}{(1 − θ)(1 + θ(3 − 2λ_i) − 2θ^2(1 − λ_i))α_i + θ(1 + 3θ − 2θ^2 + 2(1 − θ)^2 λ_i)}.

Hence q(τ) ≤ 0 for t < τ < τ^* and q(τ) ≥ 0 for τ^* < τ < t_{i+1}. Also, since

p(t_i) = \frac{θ(1 − θ)^2 [α_i + θ(1 − α_i)(1 − λ_i)] h_i}{2((1 − θ)α_i + θ)} ≥ 0, \qquad p(t) = q(t) ≤ 0,

the root τ_* of p(τ) is

τ_* = t_i + \frac{θ(1 − θ)[α_i + θ(1 − α_i)(1 − λ_i)] h_i}{(2 + θ − 2θ^2(1 − λ_i))(α_i(1 − θ) + θ) − 2θ^2 λ_i}.

Hence p(τ) ≥ 0 for t_i < τ < τ_* and p(τ) ≤ 0 for τ_* < τ < t. Thus, we use (12) to arrive at
‖R[f]‖ = ‖f(t) − P(t)‖ = \left| \int_{t_i}^{t_{i+2}} f''(τ) R_t[(t − τ)_+] \, dτ \right|
 ≤ ‖f''(t)‖ \left[ \int_{t_i}^{τ_*} p(τ)\,dτ − \int_{τ_*}^{t} p(τ)\,dτ − \int_{t}^{τ^*} q(τ)\,dτ + \int_{τ^*}^{t_{i+1}} q(τ)\,dτ + \int_{t_{i+1}}^{t_{i+2}} r(τ)\,dτ \right]
 = h_i^2 ‖f''(t)‖ U_i(θ, α_i, λ_i),   (13)
where U_i(θ, α_i, λ_i) = w_i(θ, α_i, λ_i)/v_i(θ, α_i, λ_i), with

w_i(θ, α_i, λ_i) = θ^3[2 + 11θ + 10θ^2 − 27θ^3 + 4θ^4 + 4θ^5 + (6 + 9θ − 58θ^2 + 47θ^3 + 8θ^4 − 12θ^5)λ_i + 4(1 − θ)^3(2 − 2θ − 3θ^2)λ_i^2 − 4θ(1 − θ)^4 λ_i^3]
 + α_i θ^2(1 − θ)[12θ^3(1 − θ)^3 λ_i^3 + 4(1 − θ)^2(2 − 8θ + 3θ^2 + 9θ^3)λ_i^2 + (12 + 12θ − 131θ^2 + 137θ^3 + 12θ^4 − 36θ^5)λ_i + 6 + 33θ + 3θ^2 − 81θ^3 + 12θ^4 + 12θ^5]
 + α_i^2 θ(1 − θ)^2[4θ(1 − θ)(4 − 10θ + 9θ^3)λ_i^2 + (6 − 3θ − 88θ^2 + 133θ^3 − 36θ^5)λ_i + 6 + 33θ + 30θ^2 − 81θ^3 + 12θ^4 + 12θ^5 − 12θ^3(1 − θ)^2 λ_i^3]
 + α_i^3 (1 − θ)^3[2 + 11θ + 10θ^2 − 27θ^3 + 4θ^4 + 5θ^5 + 4θ^4(1 − θ)λ_i^3 + 4θ^2(2 − 4θ − θ^2 + 3θ^3)λ_i^2 − θ(6 + 15θ − 43θ^2 + 4θ^3 + 12θ^4)λ_i],

v_i(θ, α_i, λ_i) = 4[(1 − θ)(1 + θ(3 − 2λ_i) − 2θ^2(1 − λ_i))α_i + θ(1 + 3θ − 2θ^2 + 2(1 − θ)^2 λ_i)] [(1 − θ)α_i + θ] [(2 + θ − 2θ^2(1 − λ_i))(α_i(1 − θ) + θ) − 2θ^2 λ_i].
Case 2. i = 1. In this case, similarly to Case 1, we obtain

‖R[f]‖ = \left| \int_{t_1}^{t_3} f''(τ) R_t[(t − τ)_+] \, dτ \right| ≤ h_i^2 ‖f''(t)‖ U_1(θ, α_1, λ_1),   (14)

where U_1(θ, α_1, λ_1) = w_1(θ, α_1, λ_1)/v_1(θ, α_1, λ_1), with

w_1(θ, α_1, λ_1) = θ(1 − θ)[θ^2(2 + θ − θ^3 + (1 − θ)^2(1 + 2θ)λ_1 + (1 − θ)^3 λ_1^2) + α_1 θ(1 − θ)(4 + 2θ − 2θ^2 + (1 − θ − 4θ^2 + 4θ^3)λ_1 − 2θ(1 − θ)^2 λ_1^2) + α_1^2 (1 − θ)^2(2 + θ − θ^3 − θ(1 + θ − 2θ^2)λ_1 + θ^2(1 − θ)λ_1^2)],
v_1(θ, α_1, λ_1) = 2(α_1(1 − θ) + θ)[(2 + θ(1 − θ)(1 − λ_1))(α_1(1 − θ) + θ) + θ(1 − θ)λ_1].

Case 3. When i = n − 1, we conclude

‖R[f]‖ = \left| \int_{t_{n−2}}^{t_n} f''(τ) R_t[(t − τ)_+] \, dτ \right| ≤ h_i^2 ‖f''(t)‖ U_{n−1}(θ, α_{n−1}, λ_{n−1}),   (15)

where U_{n−1}(θ, α_{n−1}, λ_{n−1}) = w_{n−1}(θ, α_{n−1}, λ_{n−1})/v_{n−1}(θ, α_{n−1}, λ_{n−1}), with

w_{n−1}(θ, α_{n−1}, λ_{n−1}) = θ(1 − θ)[θ^2(2 + 5θ − 4θ^2 + θ^3 + 2λ_{n−1}(1 − θ)^3 + θ(1 − θ)^2 λ_{n−1}^2) + 2α_{n−1} θ(1 − θ)(2 + 5θ − 4θ^2 + θ^3 + λ_{n−1}(1 − θ)^2(1 − 2θ) − θ^2(1 − θ)λ_{n−1}^2) + α_{n−1}^2 (1 − θ)^2(2 + 5θ − 4θ^2 + θ^3 − 2θ(1 − θ)^2 λ_{n−1} + θ^3 λ_{n−1}^2)],
v_{n−1}(θ, α_{n−1}, λ_{n−1}) = 4[2(θ + α_{n−1}) − α_{n−1} θ(1 − θ) + θ^2(1 − θ)(1 − α_{n−1})(1 − λ_{n−1})] (α_{n−1}(1 − θ) + θ).

For fixed parameter α_i and weight coefficient λ_i, let

c_i = \max_{0 ≤ θ ≤ 1} U_i(θ, α_i, λ_i), \quad i = 1, 2, ..., n − 1.   (16)
This leads to the following theorem.

Theorem 3. Let f(t) ∈ C²[a, b], and let P(t) be the weighted blending rational interpolation function of f(t) in [t_i, t_{i+1}] defined by (3). When the knots are equally spaced, for the parameter α_i and the weight coefficient λ_i ∈ [0, 1],

‖R[f]‖ ≤ h_i^2 ‖f''(t)‖ c_i,

where c_i is defined by (16).

It is obvious that the optimal error constant c_i does not depend on the subinterval [t_i, t_{i+1}], but only on the parameter α_i and the weight coefficient λ_i, as shown in (16). Some values of c_i for different α_i and λ_i are given in Table 2.
Table 2. Values c_i for various values of α_i and λ_i.

α_1      λ_1   c_1       α_i      λ_i   c_i       α_{n−1}   λ_{n−1}  c_{n−1}
1.0      0.88  0.13203   1.0      0.95  0.11128   1.0       0.92     0.10205
1.1      0.88  0.13149   1.1      0.95  0.10996   1.1       0.92     0.10128
1.2      0.88  0.13102   1.2      0.95  0.10877   1.2       0.92     0.10059
1.5      0.88  0.12988   1.5      0.95  0.10585   1.5       0.92     0.09890
2.0      0.88  0.12864   2.0      0.95  0.10244   2.0       0.92     0.09695
3.0      0.88  0.12732   3.0      0.95  0.09844   3.0       0.92     0.09469
5.0      0.88  0.12628   5.0      0.95  0.09483   5.0       0.92     0.09264
10.0     0.88  0.12559   10.0     0.95  0.09194   10.0      0.92     0.09097
20.0     0.88  0.12562   20.0     0.95  0.09051   20.0      0.92     0.09011
50.0     0.88  0.12518   50.0     0.95  0.08968   50.0      0.92     0.08958
100.0    0.88  0.12514   100.0    0.95  0.08942   100.0     0.92     0.08940
1000.0   0.88  0.12511   1000.0   0.95  0.08914   1000.0    0.92     0.08924

5 Conclusion
In this paper, a C¹-continuous, piecewise weighted blending interpolation based only on function values is constructed. This interpolation function has a simple and explicit mathematical representation, and it approximates the interpolated function very well when a suitable parameter and weight coefficient are selected. A value control method for interpolating curves is also developed. When the given data are unchanged, this method not only guarantees the uniqueness of the solution of the value control equation, but also achieves value control with minimal strain energy, simply by selecting a suitable parameter α_i and weight coefficient λ_i.

Acknowledgments

The support of the National Natural Science Foundation of China (No. 61070096) and the Natural Science Foundation of Shandong Province of China is gratefully acknowledged.
References [1] F. Bao, Q. Sun, Q. Duan, Point control of the interpolating curve with a rational cubic spline, J Vis Commun Image R. 20, 275-280(2009). [2] C.K. Chui, Multivariate spline. SIAM, 1988. [3] C.A. de Boor, practical guide to splines, Revised Edition. New York: Springer-Verlag, 2001. [4] P. Dierck, B. Tytgat, Generating the B`ezier points of BETA-spline curve, Comput. Aided Geom. Des. 6,279-291(1989). [5] Q. Duan, F. Bao, S. Du, E.H. Twizell, Local control of interpolating rational cubic spline curves, Comput. Aided Des. 41,825-829(2009). [6] Q. Duan, K. Djidjeli, W.G. Price, E.H. Twizell, A rational cubic spline based on function values, Comput. & Graph. 22, 479-486(1998).
[7] Q. Duan, L. Wang, E.H. Twizell, A new C 2 rational interpolation based on function values and constrained control of the interpolant curves, Appl. Math. Comput. 161, 311-322(2005). [8] Q. Duan, K. Djidjeli, W.G. Price, E.H. Twizell, Constrained control and approximation properties of a rational interpolating curve, Inform. Sci. 152, 181-194(2003). [9] G. Farin, Curves and surfaces for computer aided geometric design: A practical guide. Academic press, 1988. [10] T.A. Foley, Local control of interval tension using weighted splines, Comp. Aided Geom. Des. 3, 281-294(1986). [11] I. Fudos, C.M. Hoffmann, Constraint-based parametric conics for CAD, Comput. Aided Des. 28, 91-100(1996). [12] J.A. Gregory, M. Sarfraz, P.K. Yuen, Interactive curve design using C 2 rational splines, Comput. & Graph. 18, 153-15(1994). [13] M.Z. Hussain, M. Sarfraz, Positivity-preserving interpolation of positive data by rational cubics, J. Comput. Appl. Math. 218, 446-458(2008). [14] A. Lahtinen, Monotone interpolation with application to estimation of taper curves, Annals Numer. Math. 3, 151-161(1996). [15] G.M. Nielson, Rectangular ν-splines, IEEE comp. Graph. Appl. 6, 35-40(1986). [16] M. Sarfraz, M.Z. Hussain, Data visualization using rational spline interpolation, J. Comput. Appl. Math. 189, 513-525(2006). [17] J.W. Schmidt, W. Hess, Positive interpolation with rational quadratic spline, Computing. 38, 261-267(1987). [18] M.H. Schultz, Spline Analysis, Prentice-Hall, Englewood Cliffs, New Jersey, 1973. [19] R.H. Wang, Multivariate Spline Functions and Their Applications, Kluwer Academic Publishers, Beijing/New York/Dordrecht/Boston/London, 2001.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 101-112, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
The Construction and Approximation of a Class of Neural Networks Operators with Ramp Functions∗

Zhixiang Chen¹, Feilong Cao²†

1. Department of Mathematics, Shaoxing University, Shaoxing 312000, Zhejiang Province, P R China.
2. Department of Mathematics, China Jiliang University, Hangzhou 310018, Zhejiang Province, P R China.
Abstract Single hidden layer feedforward neural networks with ramp sigmoidal activation functions are constructed to approximate two-variable functions defined on a compact interval. With the help of the modulus of continuity and analysis techniques, two Jackson-type theorems are given, respectively, for the cases where the objective functions are continuous and where they have continuous partial derivatives of order N.
Keywords neural network operators; modulus of continuity; order of approximation
AMS(2000) Subject Classification 41A20; 41A25
1 Introduction
Essentially, artificial neural networks are nonlinear parametric expressions representing multivariate numerical functions. In connection with such paradigms there arise mainly three problems: the density, the complexity and the algorithm. The density deals with the question of which functions can be approximated in a suitable sense; this problem has been satisfactorily solved (see [8, 9, 12, 13, 14, 15, 16]). The complexity, which deals with the relationship between the size of an expression (i.e., the number of neurons) and its approximation capacity, is a key issue. Usually, to solve the complexity problem some neural network operators are constructed and used (for example, see [1, 2, 3, 4, 5, 7, 10]).

A function s : R → R is called sigmoidal if

\lim_{x → +∞} s(x) = 1, \qquad \lim_{x → −∞} s(x) = 0.

Cheang Gerald [6] introduced the following so-called ramp sigmoidal function

ϕ_v(x) = \begin{cases} 0, & x ∈ (−∞, 0), \\ vx, & x ∈ [0, 1/v], \\ 1, & x ∈ (1/v, +∞), \end{cases}

∗ This research was supported by the National Natural Science Foundation of China (Nos. 10871226, 90818020).
† Corresponding author: Feilong Cao, E-mail: [email protected]
and investigated the approximation by single hidden layer feedforward neural networks with the ramp sigmoidal activation function. In this paper we will study the approximation of 2-dimensional continuous functions defined on [−1, 1] × [−1, 1] by constructing single hidden layer feedforward neural network operators with the ramp sigmoidal activation function.

Now we introduce some notations. Let ϕ̃_v(x) := ϕ_v(x + 1/v) − ϕ_v(x), and ϕ(x) := ϕ̃_v(x/v). Obviously ϕ(x) has support set [−1, 1], and it is nondecreasing on [−1, 0] and nonincreasing on [0, 1]. Furthermore, we assume that \int_{−1}^{1} ϕ(x) dx = 1. Let b(x_1, x_2) := ϕ(x_1) ϕ(x_2); then b(x_1, x_2) has support [−1, 1] × [−1, 1] and satisfies

\int_{−1}^{1} \int_{−1}^{1} b(x_1, x_2) \, dx_1 dx_2 = 1,

which is called a bell-shaped function (see [3]). For the continuous function f(x_1, x_2) defined on [−1, 1] × [−1, 1] := [−1, 1]^2, we modify the multivariate neural network operators introduced by [3, 5] as follows:

F_n(f)(x_1, x_2) = \frac{1}{n^{2α}} \sum_{k_1=⌈−n−n^α⌉}^{[n+n^α]} \sum_{k_2=⌈−n−n^α⌉}^{[n+n^α]} f\!\left(\frac{k_1}{n+n^α}, \frac{k_2}{n+n^α}\right) b\!\left(n^{1−α}\Big(x_1 − \frac{k_1}{n}\Big), n^{1−α}\Big(x_2 − \frac{k_2}{n}\Big)\right),

where 0 < α < 1, [a] denotes the integral part of a number a, and ⌈a⌉ its ceiling. Clearly, F_n are single hidden layer feedforward neural network operators. We will give some estimations based on the procedures of [3].
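The following Python sketch (ours, not from the paper; names are illustrative) implements ϕ_v, the triangular ϕ, the bell-shaped b and the operator F_n, and evaluates F_n(f) for a smooth test function so the reader can observe the behaviour as n grows.

```python
import math

def phi_v(x, v):
    """Ramp sigmoidal function introduced by Cheang Gerald [6]."""
    if x < 0.0:
        return 0.0
    if x <= 1.0 / v:
        return v * x
    return 1.0

def phi(x, v=2.0):
    """phi(x) = phi~_v(x/v) with phi~_v(x) = phi_v(x + 1/v) - phi_v(x):
    a triangular bump supported on [-1, 1] with unit integral."""
    y = x / v
    return phi_v(y + 1.0 / v, v) - phi_v(y, v)

def b(x1, x2):
    """Bell-shaped function b(x1, x2) = phi(x1) * phi(x2)."""
    return phi(x1) * phi(x2)

def F_n(f, x1, x2, n, alpha=0.5):
    """Neural network operator F_n(f)(x1, x2) as defined in the text."""
    lo = math.ceil(-n - n**alpha)
    hi = math.floor(n + n**alpha)
    s = 0.0
    for k1 in range(lo, hi + 1):
        for k2 in range(lo, hi + 1):
            w = b(n**(1 - alpha) * (x1 - k1 / n),
                  n**(1 - alpha) * (x2 - k2 / n))
            if w:  # b has compact support, so most terms vanish
                s += f(k1 / (n + n**alpha), k2 / (n + n**alpha)) * w
    return s / n**(2 * alpha)

if __name__ == "__main__":
    f = lambda x1, x2: math.sin(x1) * math.cos(x2)
    for n in (8, 16, 32, 64):
        print(n, abs(F_n(f, 0.3, -0.2, n) - f(0.3, -0.2)))
```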
2 Some Lemmas
To prove our results, we shall give some lemmas.

Lemma 1 (1) If t_i ≤ t'_i ≤ 0 (i = 1, 2), then b(t_1, t_2) ≤ b(t'_1, t'_2). (2) If 0 ≤ t_i ≤ t'_i (i = 1, 2), then b(t_1, t_2) ≥ b(t'_1, t'_2).

Proof Noting that b(t_1, t_2) ≤ b(t'_1, t_2) ≤ b(t'_1, t'_2), the proof of (1) can be finished easily. The proof of (2) is similar.

Lemma 2 In the integrals below, b(·) stands for b(n^{1−α}(x_1 − t_1/n), n^{1−α}(x_2 − t_2/n)), and in the sums for b(n^{1−α}(x_1 − k_1/n), n^{1−α}(x_2 − k_2/n)).

(i) For k_i ∈ Z with ⌈nx_i⌉ + 1 ≤ k_i ≤ [nx_i + n^α], i = 1, 2, we have

\int_{⌈nx_1⌉+1}^{[nx_1+n^α]+1} \int_{⌈nx_2⌉+1}^{[nx_2+n^α]+1} b(·)\, dt_1 dt_2 ≤ \sum_{k_1=⌈nx_1⌉+1}^{[nx_1+n^α]} \sum_{k_2=⌈nx_2⌉+1}^{[nx_2+n^α]} b(·) ≤ \int_{⌈nx_1⌉}^{[nx_1+n^α]} \int_{⌈nx_2⌉}^{[nx_2+n^α]} b(·)\, dt_1 dt_2.

(ii) For k_i ∈ Z with ⌈nx_i − n^α⌉ ≤ k_i ≤ [nx_i] − 1, i = 1, 2, we have

\int_{⌈nx_1−n^α⌉−1}^{[nx_1]−1} \int_{⌈nx_2−n^α⌉−1}^{[nx_2]−1} b(·)\, dt_1 dt_2 ≤ \sum_{k_1=⌈nx_1−n^α⌉}^{[nx_1]−1} \sum_{k_2=⌈nx_2−n^α⌉}^{[nx_2]−1} b(·) ≤ \int_{⌈nx_1−n^α⌉}^{[nx_1]} \int_{⌈nx_2−n^α⌉}^{[nx_2]} b(·)\, dt_1 dt_2.

(iii) For ⌈nx_1⌉ + 1 ≤ k_1 ≤ [nx_1 + n^α] and ⌈nx_2 − n^α⌉ ≤ k_2 ≤ [nx_2] − 1, we have

\int_{⌈nx_1⌉+1}^{[nx_1+n^α]+1} \int_{⌈nx_2−n^α⌉−1}^{[nx_2]−1} b(·)\, dt_1 dt_2 ≤ \sum_{k_1=⌈nx_1⌉+1}^{[nx_1+n^α]} \sum_{k_2=⌈nx_2−n^α⌉}^{[nx_2]−1} b(·) ≤ \int_{⌈nx_1⌉}^{[nx_1+n^α]} \int_{⌈nx_2−n^α⌉}^{[nx_2]} b(·)\, dt_1 dt_2.

(iv) For ⌈nx_1 − n^α⌉ ≤ k_1 ≤ ⌈nx_1⌉ − 1 and ⌈nx_2⌉ + 1 ≤ k_2 ≤ [nx_2 + n^α], we have

\int_{⌈nx_1−n^α⌉−1}^{[nx_1]−1} \int_{⌈nx_2⌉+1}^{[nx_2+n^α]+1} b(·)\, dt_1 dt_2 ≤ \sum_{k_1=⌈nx_1−n^α⌉}^{[nx_1]−1} \sum_{k_2=⌈nx_2⌉+1}^{[nx_2+n^α]} b(·) ≤ \int_{⌈nx_1−n^α⌉}^{[nx_1]} \int_{⌈nx_2⌉}^{[nx_2+n^α]} b(·)\, dt_1 dt_2.

Proof
(i) Let ti satisfy nxi ≤ ki − 1 ≤ ti ≤ ki . Then xi −
ki ti ki − 1 ≤ xi − ≤ xi − ≤ 0, n n n
By Lemma 1 we know ( ) k1 k2 1−α 1−α b n (x1 − ), (n (x2 − ) ≤ n n ≤
i = 1, 2.
(
) t1 t2 1−α b n (x1 − ), n (x2 − ) n n ) ( k2 − 1 k − 1 1 1−α 1−α ), n (x2 − ) . b n (x1 − n n 1−α
Hence ( ) ∫ k1 ∫ k2 ( ) k1 k2 t1 t2 n1−α (x1 − ), n1−α (x2 − ) ≤ b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 . n n n n k1 −1 k2 −1 Let ti satisfy nxi < ki ≤ ti ≤ ki + 1. Then xi −
ki + 1 ti ki ≤ xi − ≤ xi − ≤ 0, n n n
i = 1, 2.
Again, by Lemma 1 we obtain ) ( ) ( k2 + 1 t1 t2 k1 + 1 ), n1−α (x2 − ) ≤ b n1−α (x1 − ), n1−α (x2 − ) b n1−α (x1 − n n n n ( ) k k2 1 1−α 1−α ≤ b n (x1 − ), n (x2 − ) . n n
Consequently ( ) t1 t2 b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 n n k1 k2 ( ) k1 k2 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n
∫
k1 +1
∫
k2 +1
Therefore, ) ( t2 t1 1−α 1−α (x2 − ) dt1 dt2 b n (x1 − ), n n n k1 k2 ( ) k k 1 2 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n ) ∫ k1 ∫ k2 ( t1 t2 ≤ b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 . n n k1 −1 k2 −1 ∫
k1 +1
∫
k2 +1
We thus get ( ) t1 t2 b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 n n ⌈nx1 ⌉+1 ⌈nx2 ⌉+1 α α ) ( [nx1 +n ] [nx2 +n ] ∑ ∑ k2 k1 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n ⌈nx1 ⌉+1 ⌈nx2 ⌉+1 ) ∫ [nx1 +nα ] ∫ [nx2 +nα ] ( t1 t2 1−α 1−α ≤ b n (x1 − ), n (x2 − ) dt1 dt2 . n n ⌈nx1 ⌉ ⌈nx2 ⌉ ∫
[nx1 +nα ]+1
∫
[nx2 +nα ]+1
(ii) If ti satisfy ki − 1 ≤ ti ≤ ki ≤ nxi , then xi −
ki − 1 ti ki ≥ xi − ≥ xi − ≥ 0. n n n
By Lemma 1, we obtain that ( ) ( ) k1 − 1 k2 − 1 t1 t2 1−α 1−α 1−α 1−α b n (x1 − ), n (x2 − ) ≤ b n (x − ), n (x − ) n n n n ( ) k k2 1 1−α 1−α ≤ b n (x1 − ), n (x2 − ) . n n Consequently ( ) t1 t2 b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 n n k1 −1 k2 −1 ( ) k1 k2 ≤ b n1−α (x1 − ), n1−α (x2 − ) . n n ∫
k1
∫
k2
Now let ti satisfy ki ≤ ti ≤ ki + 1 ≤ nxi , then xi −
ki ti ki + 1 ≥ xi − ≥ xi − ≥ 0. n n n
So from Lemma 1, we obtain ( ) k1 k2 b n1−α (x1 − ), n1−α (x2 − ) n n
( ) t1 t2 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n ( ) k + 1 k2 + 1 1 ≤ b n1−α (x1 − ), n1−α (x2 − ) , n n 4
and ( ) ∫ k1 +1 ∫ k2 +1 ( ) k1 k2 t1 t2 1−α 1−α 1−α 1−α b n (x1 − ), n (x2 − ) ≤ b n (x1 − ), n (x2 − ) dt1 dt2 . n n n n k1 k2 We have established that ∫ k1 ∫
k2
which shows that ∫
∫
( ) t1 t2 b n1−α (x1 − ), n1−α (x1 − ) dt1 dt2 n n k1 −1 k2 −1 ( ) k1 k2 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n ) ∫ k1 +1 ∫ k2 +1 ( t2 t1 1−α 1−α (x2 − ) dt1 dt2 ≤ b n (x1 − ), n n n k1 k2
( ) t2 t1 1−α 1−α (x2 − ) dt1 dt2 b n (x1 − ), n n n ⌈nx1 −nα ⌉−1 ⌈nx2 −nα ⌉−1 ) ( [nx1 ]−1 [nx2 ]−1 ∑ ∑ k2 k1 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ ( ) ∫ [nx1 ] ∫ [nx2 ] t1 t2 1−α 1−α ≤ b n (x1 − ), n (x2 − ) dt1 dt2 . n n ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ [nx1 ]−1
[nx2 ]−1
(iii) First, it is easy to see that when ti ≤ t′i ≤ 0, b(t1 , t2 ) ≤ b(t′1 , t2 ) and b(t1 , t2 ) ≤ b(t1 , t′2 ), and when t′i ≥ ti ≥ 0, b(t1 , t2 ) ≥ b(t1 , t′2 ) and b(t1 , t2 ) ≤ b(t′1 , t2 ). Therefore, ) ∫ k1 +1 ∫ k2 ( t1 t2 b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 n n k1 k2 −1 ) ] ∫ k1 +1 [∫ k2 ( t1 k2 1−α 1−α ≤ b n (x1 − ), n (x2 − ) dt2 dt1 n n k1 k2 −1 ) ∫ k1 +1 ( t1 k2 = b n1−α (x1 − ), n1−α (x2 − ) dt1 n n k1 ) ∫ k1 +1 ( k1 k2 1−α 1−α ≤ b n (x1 − ), n (x2 − ) dt1 n n k1 ( ) k1 k2 = b n1−α (x1 − ), n1−α (x2 − ) n n ) ∫ k2 +1 ( k1 t2 ≤ b n1−α (x1 − ), n1−α (x2 − ) dt2 n n k2 ) ∫ k1 ∫ k2 +1 ( t2 t1 1−α 1−α (x2 − ) dt1 dt2 . ≤ b n (x1 − ), n n n k1 −1 k2 Hence
( ) t1 t2 b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 n n ⌈nx1 ⌉+1 ⌈nx2 −nα ⌉−1 α ( ) [nx1 +n ] [nx2 ]−1 ∑ ∑ k1 k2 ≤ b n1−α (x1 − ), n1−α (x2 − ) n n ⌈nx1 ⌉+1 ⌈nx2 −nα ⌉ ( ) ∫ [nx1 +nα ] ∫ [nx2 ] t1 t2 1−α 1−α (x1 − ), n ≤ b n (x2 − ) dt1 dt2 . n n ⌈nx1 ⌉ ⌈nx2 −nα ⌉ ∫
[nx1 +nα ]+1
∫
[nx2 ]−1
5
The proof of (iv) is similar to (iii).

Lemma 3 We have

(i) \left| \frac{1}{n^{2α}} \sum_{k_1=⌈nx_1⌉+1}^{[nx_1+n^α]} \sum_{k_2=⌈nx_2⌉+1}^{[nx_2+n^α]} b\!\left(n^{1−α}\Big(x_1 − \frac{k_1}{n}\Big), n^{1−α}\Big(x_2 − \frac{k_2}{n}\Big)\right) − \int_{−1}^{0}\!\int_{−1}^{0} b(x_1, x_2)\, dx_1 dx_2 \right| ≤ \frac{4}{n^α};

(ii) \left| \frac{1}{n^{2α}} \sum_{k_1=⌈nx_1−n^α⌉}^{[nx_1]−1} \sum_{k_2=⌈nx_2−n^α⌉}^{[nx_2]−1} b\!\left(n^{1−α}\Big(x_1 − \frac{k_1}{n}\Big), n^{1−α}\Big(x_2 − \frac{k_2}{n}\Big)\right) − \int_{0}^{1}\!\int_{0}^{1} b(x_1, x_2)\, dx_1 dx_2 \right| ≤ \frac{4}{n^α};

(iii) \left| \frac{1}{n^{2α}} \sum_{k_1=⌈nx_1⌉+1}^{[nx_1+n^α]} \sum_{k_2=⌈nx_2−n^α⌉}^{[nx_2]−1} b\!\left(n^{1−α}\Big(x_1 − \frac{k_1}{n}\Big), n^{1−α}\Big(x_2 − \frac{k_2}{n}\Big)\right) − \int_{−1}^{0}\!\int_{0}^{1} b(x_1, x_2)\, dx_1 dx_2 \right| ≤ \frac{4}{n^α};

(iv) \left| \frac{1}{n^{2α}} \sum_{k_1=⌈nx_1−n^α⌉}^{[nx_1]−1} \sum_{k_2=⌈nx_2⌉+1}^{[nx_2+n^α]} b\!\left(n^{1−α}\Big(x_1 − \frac{k_1}{n}\Big), n^{1−α}\Big(x_2 − \frac{k_2}{n}\Big)\right) − \int_{0}^{1}\!\int_{−1}^{0} b(x_1, x_2)\, dx_1 dx_2 \right| ≤ \frac{4}{n^α}.

Proof
(i) Since ( ) t1 t2 b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 n n ⌈nx1 ⌉+1 ⌈nx2 ⌉+1 ) [nx1 +nα ] [nx2 +nα ] ( ∑ ∑ k1 k2 b n1−α (x1 − ), n1−α (x2 − ) ≤ n n ⌈nx1 ⌉+1 ⌈nx2 ⌉+1 ) ∫ [nx1 +nα ] ∫ [nx2 +nα ] ( t1 t2 1−α 1−α ≤ b n (x1 − ), n (x2 − ) dt1 dt2 n n ⌈nx1 ⌉ ⌈nx2 ⌉ ∫
[nx1 +nα ]+1
∫
[nx2 +nα ]+1
we have ∫
∫
nx1 −⌈nx1 ⌉−1 nα nx1 −[nx1 +nα ]−1 nα
1 ≤ 2α n ∫
nx2 −⌈nx2 ⌉−1 nα nx2 −[nx2 +nα ]−1 nα
[nx1 +nα ] [nx2 +nα ]
∑
∑
⌈nx1 ⌉+1 ⌈nx2 ⌉+1
nx1 −⌈nx1 ⌉ nα nx1 −[nx1 +nα ] nα
∫
∫ b(x1 , x2 )dx1 dx2 −
0
∫
−1
0
−1
b(x1 , x2 )dx1 dx2
( ) ∫ 0∫ 0 k1 k2 1−α 1−α b n (x1 − ), n (x2 − ) dt1 dt2 − b(x1 , x2 )dx1 dx2 n n −1 −1
nx2 −⌈nx2 ⌉ nα nx2 −[nx2 +nα ] nα
∫ b(x1 , x2 )dx1 dx2 −
0 −1
∫
0
−1
b(x1 , x2 )dx1 dx2 .
Writing the left side of above inequality as △1 , and letting −1 < nx − ⌈nx⌉ ≤ 0, −nα ≤ nx − [nx + nα ] < −nα + 1, one has ∫ nx1 −⌈nx1 ⌉−1 ∫ nx2 −⌈nx2 ⌉−1 ∫ 0∫ 0 nα nα |△1 | = b(x , x )dx dx − b(x , x )dx dx 1 2 1 2 1 2 1 2 nx1 −[nx1α+nα ]−1 nx2 −[nx2α+nα ]−1 −1 −1 n n ∫ ∫ 0 ∫ 0∫ 0 0 4 ≤ b(x1 , x2 )dx1 dx2 + b(x1 , x2 )dx1 dx2 ≤ α . nx2 −⌈nx2 ⌉−1 nx1 −⌈nx 1 ⌉−1 n −1 −1 α α n
n
6
CHEN, CAO: NEURAL NETWORKS
107
We write the right side as △2 and get ∫ nx1 −⌈nx1 ⌉ ∫ nx2 −⌈nx2 ⌉ ∫ 0∫ 0 nα nα |△2 | = b(x , x )dx dx − b(x , x )dx dx 1 2 1 2 1 2 1 2 nx1 −[nxα1 +nα ] nx2 −[nxα2 +nα ] −1 −1 n n ∫ ∫ nx2 −[nx2 +nα ] ∫ 0∫ 0 0 nα ≤ b(x1 , x2 )dx1 dx2 + b(x1 , x2 )dx1 dx2 nx2 −⌈nx2 ⌉ −1 −1 −1 nα ∫ nx1 −[nx1 +nα ] ∫ ∫ 0 ∫ 0 0 nα 4 b(x1 , x2 )dx1 dx2 ≤ α . b(x1 , x2 )dx1 dx2 + + nx −⌈nx ⌉ n −1 1 1 −1 −1 nα Therefore, 1 n2α
[nx1 +nα ] [nx2 +nα ]
∑
∑
⌈nx1 ⌉+1 ⌈nx2 ⌉+1
) ∫ 0 ∫ 0 k k 2 1 ≤ 4 . b n1−α (x1 − ), n1−α (x2 − ) dt1 dt2 − nα n n −1 −1 (
Similar calculations to (i), we use Lemma 2 and obtain that ( ) ∫ 1∫ 1 [nx2 ]−1 1 ]−1 ∑ 1 [nx∑ k k 1 2 1−α 1−α b n (x1 − ), n (x2 − ) − b(x1 , x2 )dx1 dx2 n2α n n 0 0 α α ⌈nx1 −n ⌉ ⌈nx2 −n ⌉
≤ max {|△3 |, |△4 |} ≤ where
4 , nα
∫ nx1 −⌈nx1 −nα ⌉ ∫ nx2 −⌈nx2 −nα ⌉ ∫ 1∫ 1 nα nα |△3 | = b(x1 , x2 )dx1 dx2 − b(x1 , x2 )dx1 dx2 , nx2 −[nx2 ] nx1 −[nx 1] 0 0 nα nα ∫ nx1 −⌈nx1 −nα ⌉+1 ∫ nx2 −⌈nx2 −nα ⌉+1 ∫ 1∫ 1 nα nα |△4 | = b(x1 , x2 )dx1 dx2 − b(x1 , x2 )dx1 dx2 . nx2 −[nx2 ]+1 nx1 −[nx 1 ]+1 0 0 α α n
n
This finishes the proof of (ii). Next we prove (iii) and (iV). It is clear that α ( ) ∫ 0∫ 1 1 +n ] [nx2 ]−1 ∑ 1 [nx∑ k1 k2 4 1−α 1−α b n (x − ), n (x − ) − b(x , x )dx dx 1 2 1 2 1 2 ≤ α n2α n n n −1 0 α ⌈nx1 ⌉+1 ⌈nx2 −n ⌉
and
1 n2α
[nx1 ]−1 [nx2 +nα ]
∑
∑
⌈nx1 −nα ⌉ ⌈nx2 ⌉+1
) ∫ 1∫ 0 k1 k2 4 1−α 1−α b n b(x1 , x2 )dx1 dx2 ≤ α . (x1 − ), n (x2 − ) − n n n 0 −1 (
The proof of Lemma 3 is completed.
3 The Main Results
We shall give the following Jackson-type theorems.

Theorem 1 Let f ∈ C([−1, 1]^2), 0 < α < 1. Then

|F_n(f)(x_1, x_2) − f(x_1, x_2)| ≤ 9\, ω_1\!\left(f, \frac{2}{n^{1−α}}\right) + \frac{52\, \|f\|_∞}{n^α}

for any (x_1, x_2) ∈ [0, 1]^2, where ω_1(f, h) is the modulus of continuity of the function f (see [17]), and

\|f\|_∞ = \sup_{(x_1, x_2) ∈ [−1,1]^2} |f(x_1, x_2)|.
Proof Since |n1−α (xi − kni )| ≤ 1(i = 1, 2) iff nxi − nα ≤ ki ≤ nxi + nα (i = 1, 2), when ( ) ki > [nxi + nα ] or ki < ⌈nxi − nα ⌉, we have b n1−α (x1 − kn1 ), n1−α (x2 − kn2 ) = 0. Therefore, a simple calculation implies |Fn (f )(x1 , x2 ) − f (x1 , x2 )| α α [n+n ( ) k2 k1 ∑ ] f ( n+n ∑ ] [n+n k k α , n+nα ) 1 2 1−α 1−α = b n (x1 − ), (n (x2 − ) − f (x1 , x2 ) 2α n n n ⌈−n−nα ⌉ ⌈−n−nα ⌉ α α [nx∑ ( ) 2 +n ] k2 k1 1 +n ] [nx∑ f ( n+n k k α , n+nα ) 1 2 1−α 1−α b n (x1 − ), n (x2 − ) − f (x1 , x2 ) = 2α n n n ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ α α [nx∑ ( ) 2 +n ] k2 k1 1 +n ] [nx∑ f ( n+n k k α , n+nα ) − f (x1 , x2 ) 1 2 ≤ b n1−α (x1 − ), n1−α (x2 − ) 2α n n n ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ [nx1 +nα ] [nx2 +nα ] ( ) ∑ ∑ k1 k2 1 1−α 1−α b n (x1 − ), n (x2 − ) − 1 +|f (x1 , x2 )| 2α n n ⌈nx1 −nα ⌉ [nx2 −nα ] n ( ) 1 k1 k2 1−α 1−α b n (x1 − ), n (x2 − ) ≤ ω1 (f, 1−α ) n n2α n n ⌈nx1 −nα ⌉ [nx2 −nα ] α α [nx∑ ( ) 2 +n ] 1 +n ] [nx∑ k k 1 1 2 1−α 1−α +∥f ∥∞ b n (x1 − ), n (x2 − ) − 1 2α n n n ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ 2
≤ 9ω1 (f, where
[nx1 +nα ] [nx2 +nα ]
∑
∑
2 ) + ∥f ∥∞ En , n1−α
α α [nx∑ ( ) 2 +n ] 1 +n ] [nx∑ 1 k1 k2 1−α 1−α . En = b n (x − ), n (x − ) − 1 1 2 2α n n ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ n
Now we estimate En . α α [nx∑ ( ) ∫ 1∫ 1 2 +n ] 1 +n ] [nx∑ k k 1 1 2 1−α 1−α b n (x1 − ), n (x2 − ) − En = b(x1 , x2 )dx1 dx2 2α n n −1 −1 ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ n [nx∑ ( ) ∫ 1∫ 1 [nx2 ]−1 1 ]−1 ∑ k1 k2 1 1−α 1−α b n (x1 − ), n (x2 − ) − b(x1 , x2 )dx1 dx2 ≤ 2α n n n 0 0 ⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ α α [nx∑ ) ∫ 0∫ 0 ( 2 +n ] 1 +n ] [nx∑ 1 k1 k2 1−α 1−α + b n (x1 − ), n (x2 − ) − b(x1 , x2 )dx1 dx2 2α n n n −1 −1 ⌈nx1 ⌉+1 ⌈nx2 ⌉+1 α [nx∑ ( ) ∫ 0∫ 1 2 ]−1 1 +n ] [nx∑ 1 k k 2 1 1−α 1−α + (x2 − ) − b(x1 , x2 )dx1 dx2 b n (x1 − ), n 2α n n n −1 0 ⌈nx1 ⌉+1 ⌈nx2 −nα ⌉
8
CHEN, CAO: NEURAL NETWORKS
+
109
α [nx∑ ( ) ∫ 1∫ 0 1 ]−1 [nx2 +n ] ∑ k1 k2 1 1−α 1−α b n (x − ), n (x − ) − b(x , x )dx dx 1 2 1 2 1 2 2α n n 0 −1 ⌈nx1 −nα ⌉ ⌈nx2 ⌉+1 n [nx2 +nα ]
∑
+
⌈nx2 −nα ⌉ [nx2 +nα ]
∑
+
⌈nx2 −nα ⌉ [nx1 +nα ]
∑
+
⌈nx1 −nα ⌉ [nx1 +nα ]
∑
+
⌈nx2 −nα ⌉
( ) [nx1 ] k2 1 1−α 1−α b n (x − ), n (x − ) 1 2 n2α n n ( ) 1 ⌈nx1 ⌉ k2 1−α 1−α b n (x1 − ), n (x2 − ) n2α n n ( ) k1 [nx2 ] 1 1−α 1−α b n (x − ), n (x − ) 1 2 n2α n n ( ) k1 ⌈nx2 ⌉ 1 1−α 1−α b n (x − ), n (x − ) 1 2 n2α n n
:= I1 + I2 + · · · + I8 . By Lemma 3, we have I1 + I2 + I3 + I4 ≤ Thus, we obtain
16 nα .
On the other hand, we have I5 + I6 + I7 + I8 ≤
( |Fn (f )(x1 , x2 ) − f (x1 , x2 )| ≤ 9ω1 (f,
36 nα .
) 52 ) + α ∥f ∥∞ . n1−α n 2
This finishes the proof of Theorem 1. Theorem 2 Let f ∈ C N ([−1, 1]2 ) (the set of function f having continuous partial derivatives of order N on [−1, 1]2 ), 0 < α < 1. Then ( ) 52 2 36M 9 · 4N |Fn (f )(x1 , x2 ) − f (x1 , x2 )| ≤ ∥f ∥∞ + 1−α + max ω1 (fβ⃗ , 1−α ) . ⃗ nα n n N !nN (1−α) |β|=N for any (x1 , x2 ) ∈ [0, 1]2 , where ⃗
fβ⃗ = Proof
Put g ⃗k (t) = f
∂β f ∂xβ1 1 ∂xβ2 2
( x1 + t(
n
then
{ (j) g ⃗k (t) n
=
2 ∑ ( ( i=1
⃗ = (β1 , β2 ), |β| ⃗ = β1 + β2 . , β
) k1 k2 − x ), x + t( − x ) , 0 ≤ t ≤ 1, 1 2 2 n + nα n + nα
∂ j ki − xi ) ) f n + nα ∂xi
}( x1 + t(
) k2 k1 − x ), x + t( − x ) , 1 2 2 n + nα n + nα
and g ⃗k (0) = f (x1 , x2 ). By Taylor’s formula we obtain n
f(
k1 k2 , ) α n + n n + nα
= g ⃗k (1) n
∫
(j)
=
N g ⃗ (0) ∑ k n
j!
j=0
=
N ∑ j=0
(j) g ⃗k (0) n
j!
1
∫
+ 0
t1
∫ ···
0
⃗k + RN ( , 0). n 9
0
tN −1
(N )
(N )
n
n
(g ⃗k (tN ) − g ⃗k (0))dtN · · · dt1
CHEN, CAO: NEURAL NETWORKS
110
Therefore, Fn (f )(x1 , x2 ) − f (x1 , x2 ) [nx1 +nα ] [nx2 +nα ]
=
∑
∑
( ) k1 k2 1−α 1−α b n (x1 − ), n (x2 − ) − f (x1 , x2 ) n n [nx2 +nα ] k1 k2 1−α 1−α ∑ b(n (x − ), n (x − )) 1 2 (j) n n + R∗ , g ⃗k (0) n2α n α
k2 k1 f ( n+n α , n+nα )
n2α
⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉
[nx1 +nα ] N ∑ 1 ∑ = −f (x1 , x2 ) + j! α j=0
⌈nx1 −n ⌉ ⌈nx2 −n ⌉
where
[nx1 +nα ] [nx2 +nα ]
∑
∗
R =
∑
b(n1−α (x1 −
⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉
k1 1−α (x2 n ), n 2α n
−
k2 n ))
⃗k RN ( , 0). n
Thus Fn (f )(x1 , x2 ) − f (x1 , x2 ) [nx1 +nα ] [nx2 +nα ] ∑ ∑ 1 k k 1 2 = f (x1 , x2 ) b(n1−α (x1 − ), n1−α (x2 − )) − 1 n2α n n α α ⌈nx1 −n ⌉ ⌈nx2 −n ⌉ α α [nx N 1 +n ] [nx2 +n ] k1 k2 1−α 1−α ∑ ∑ b(n (x − ), n (x − )) 1 ∑ 1 2 (j) n n + R∗ , g ⃗k (0) + 2α j! n n α α j=1 ⌈nx1 −n ⌉ ⌈nx2 −n ⌉
= I9 + I10 + R∗ . In view of Theorem 1, we have |I9 | ≤ Next we estimate I10 . 1 n2α
|I10 | ≤
1 n2α
≤
52 nα ∥f ∥∞ .
[nx1 +nα ] [nx2 +nα ]
∑
∑
⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉ [nx1 +nα ] [nx2 +nα ]
∑
∑
N ∑
(j)
|g ⃗k (0)| n
j!
j=1
⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉
N ∑ j=1
∂f ∂f j (| |+| |) ∂x2 j!n(1−α)j ∂x1 2j
j
∂f j ∂ f where | ∂x | means | ∂x j |. Therefore, 1 1
|I10 | ≤ 9
N ∑ j=1
2j
∂f j ∂f |+| |) . (| ∂x2 j!n(1−α)j ∂x1
{ } ∂f ∂f ∂f | N ∂f |+| |, · · · , (| |+| |) M = max | , ∂x1 ∂x2 ∂x1 ∂x2
Let
then
36M . n1−α Finally, we estimate R∗ . We observe (0 ≤ tN ≤ 1) (N ) g (tN ) − g (N ) (0) ⃗k ⃗k |I10 | ≤
n
n
10
CHEN, CAO: NEURAL NETWORKS
111
2 2 ∑ ∑ ⃗k ki ∂ N ki ∂ N = ( ( − x ) ) f (⃗ x + t ( − ⃗ x )) − ( ( − x ) ) f (⃗ x ) i N i n + nα ∂xi n + nα n + nα ∂xi ≤
i=1 N
i=1
4
nN (1−α)
Hence
max ω1 (fβ⃗ ,
⃗ |β|=N
⃗k RN ( , 0) ≤ n ≤
2 n1−α ∫
1
).
∫
0
t1
∫ ···
0
tN −1
0
N g (tN ) − g N (0) dt1 dt2 · · · dtN ⃗k ⃗nk n
N
4 2 max ω1 (fβ⃗ , 1−α ), ⃗ n N !nN (1−α) |β|=N
Consequently [nx1 +nα ] [nx2 +nα ] ∗
|R | ≤
∑
∑
b(n1−α (x1 −
⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉
≤ ≤
1 n2α
[nx1 +nα ] [nx2 +nα ]
∑
∑
⌈nx1 −nα ⌉ ⌈nx2 −nα ⌉
[nx1 ] 1−α (x2 n ), n 2α n
−
k2 n ))
⃗k RN ( , 0) n
⃗k RN ( , 0) n
2 4N 9 max ω1 (fβ⃗ , 1−α ). ⃗ n N !nN (1−α) |β|=N
Therefore, ( |Fn (f )(x1 , x2 ) − f (x1 , x2 )| ≤
) 2 52 36M 9 · 4N ∥f ∥∞ + 1−α + max ω1 (fβ⃗ , 1−α ) . ⃗ nα n n N !nN (1−α) |β|=N
This finishes the proof of Theorem 2.
References [1] G. A. Anastassiou, Rate of convergence of some neural network operators to the unitunivariate case, J. Math. Anal. Appli., 212 (1997), 237-262. [2] G. A. Anastassiou, Rate of convergence of some multivaiate neural network operators to the unit, Computers and Mathematics with Applications, 40 (2000), 1-19. [3] G. A. Anastassiou, Quantitative approximation, New York, Chapman Hall/CRC, 2001. [4] F. L. Cao and Z. B. Xu, Substantial approximation order for neural networks, Science in China, Series A, 34 (2004), 65-74. [5] P. Cardaliaguet and G. Euvrard, Approximation of a function and its derivative with a neural network, Neural Networks, 5 (1992), 207-220. [6] H. L. Cheang Gerald, Approximation with neural networks activated by ramp sigmoids, J. Approx. Theory, 162 (2010), 1450-1465. [7] D. B. Chen, Degree of approximation by superpositions of a sigmoidal function, Approx. Theory Appl., 9 (1993), 17-28. 11
CHEN, CAO: NEURAL NETWORKS
112
[8] T. P. Chen, Approximation problems in system identification with neural networks, Science in China, Series A, 24 (1994), 1-7. [9] T. P. Chen, H. Chen and R. W. Liu, Approximation capability in C(Rn ) by multilayer feedforward networks and related problems, IEEE Trans. Neural Networks, 6(1995), 25-30. [10] Z. X. Chen and F. L. Cao, The approximation operators with sigmoidal functions, Computers and Math. with Appl., 58 (2009), 758-765. [11] W. Cheney and W. Light, A Course in Approximation Theory, Brooks/Cole, 2000. [12] G. Cybenko, Approximation by superpositions of a sigmoidal function, Mathematics of Control, Signals and Systems, 2(1989), 303-314. [13] K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks, 4(1991), 251-257. [14] K. Hornik, Some new results on neural network approximation, Neural Networks, 6 (1993), 1069-1072. [15] L. K. Jones, Constructive approximation for neural networks by sigmoid functions, Proceedings of the IEEE, 78(1990), 1586-1589. [16] M. Leshno, V. Y. Lin, A. Pinkus and S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Networks, 6(1993), 861-867. [17] S. M. Nikolski˘ı, Approximation of functions of several variables and imbedding theorems, Springer-Verlag, 1975. [18] Z. B. Xu and F. L. Cao, Simultaneous Lp approximation order for neural networks, Neural Networks, 18 (2005), 914-923.
12
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 113-131, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
A New Stability Analysis for Solving Biharmonic Equations by Finite Difference Methods∗ Guang Zenga,b,†, Jin Huanga , Pan Chenga,c , Zi-Cai Lid a
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, PR China b
School of Mathematics and Information Science, East China Institute of Technology, Fuzhou, 344000, PR China c
College of Science, Chongqing Jiaotong University, Chongqing, 400074, PR China
d
Department of Applied Mathematics, National Sun
Yat-sen University, Kaohsiung 80424, Taiwan, ROC
Abstract The paper presents a new stability analysis for the finite difference method (FDM) for biharmonic equations, based on the effective condition number. Upper bounds of the effective condition number and of the simplest effective condition number are derived. In general cases the bound of the effective condition number is O(h^{−3.5}), which is smaller than Cond. = O(h^{−4}), where h is the minimal meshspacing of the difference grids. Surprisingly, the bounds of the effective condition number are only O(1) for homogeneous boundary conditions. Hence this new stability analysis is more informative than the traditional one. Numerical experiments are provided to verify the stability analysis.
Keywords: Stability analysis; Condition number; Finite difference method; Biharmonic equation; Effective condition number.
1
Introduction
The biharmonic equation ([9], [16], [19]), which is one of the most important partial differential equations, is applied in all subject areas of fundamental importance in the engineering sciences, such as elasticity, the mechanics of elastic plates and fracture mechanics. In general, we can use the finite difference method,

∗ The work is supported by the National Natural Science Foundation of China (10871034).
† E-mail address: [email protected] (G. Zeng); [email protected] (J. Huang); cheng−[email protected] (P. Cheng); [email protected] (Z.C. Li).
1
114
radial basis function methods and the method of fundamental solutions to solve the biharmonic equation with mixed boundary value problems. The numerical solutions are of high accuracy, while the traditional condition number (Cond.) is very large, which suggests a poor or even invalid numerical stability. Fortunately, in practical applications we deal with only a certain vector b, and the true relative errors may be much smaller than the worst-case bound given by Cond.¹; this observation leads to the effective condition number of Chan and Foulser [4] and Christiansen and Hansen [5]. For this reason, the new stability analysis in this paper by means of the effective condition number may lead to a re-evaluation of existing numerical methods. Recently, new computational formulae for the effective condition number have been studied and applied to the symmetric and positive definite matrices obtained from Poisson's equation by the finite difference method (FDM) in [11].

Let the matrix A ∈ R^{n×n} be symmetric and positive definite. The eigenvalues are arranged in descending order, λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n > 0, and the eigenvectors u_i satisfy A u_i = λ_i u_i, where the u_i are orthogonal. Since the eigenvectors {u_1, u_2, ..., u_n} are complete, b can be written as b = \sum_{i=1}^{n} β_i u_i with the coefficients β_i = u_i^T b. The effective condition number is defined as [11]

Cond_eff = \frac{\|b\|}{λ_n \|x\|} = \frac{\|b\|}{λ_n \sqrt{\sum_{i=1}^{n} β_i^2 / λ_i^2}},   (1)

which can be simplified to

Cond_E = \frac{\|b\|}{\sqrt{(\|b\|^2 − β_n^2)\, λ_n^2/λ_1^2 + β_n^2}}   (2)

when all λ_i (i < n) are taken equal to λ_1. In particular, when β_n ≠ 0, the simplest effective condition number obtained from (2) is

Cond_EE = \frac{\|b\|}{|β_n|}, \qquad β_n ≠ 0.   (3)
In this paper, the effective condition number is applied to the standard 13point finite difference equation for biharmonic equations on the unit square, and the bounds of effective condition number are derived. When the mixed types of the clamped and simply support boundary conditions are subjected to the solution boundary, it is proved in this paper that the bounds of the effective condition number are O(h−3.5 ) while in general cases they are smaller than O(h−4 ), where h is the minimal meshspace of the difference grids. Surprisingly, the bounds of the effective condition number are only O(1) for homogeneous boundary conditions. Since the stability is a severe issue for numerical biharmonic equations, the reduction of effective condition number is significant. So 1 The definition of the traditional condition number was given in Wilkinson [1] and then used in many books and papers ([2], [3], [8], [18], [19], [20]).
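To make definitions (1)–(3) concrete, here is a small Python/NumPy sketch (ours, not from the paper) that computes the traditional condition number and the three effective condition numbers of a symmetric positive definite system Ax = b. The 1D discrete Laplacian is used only as a convenient illustrative example.

```python
import numpy as np

def condition_numbers(A, b):
    """Traditional and effective condition numbers (1)-(3) of the SPD system Ax = b."""
    lam, U = np.linalg.eigh(A)          # ascending eigenvalues, orthonormal eigenvectors
    lam_min, lam_max = lam[0], lam[-1]  # paper's lambda_n and lambda_1
    beta = U.T @ b                      # beta_i = u_i^T b
    x = U @ (beta / lam)                # solution via the eigen-decomposition
    nb = np.linalg.norm(b)
    cond = lam_max / lam_min            # traditional Cond.
    cond_eff = nb / (lam_min * np.linalg.norm(x))                      # (1)
    beta_n = beta[0]                    # coefficient along the eigenvector of lambda_n
    cond_E = nb / np.sqrt((nb**2 - beta_n**2) * lam_min**2 / lam_max**2
                          + beta_n**2)                                 # (2)
    cond_EE = nb / abs(beta_n) if beta_n != 0 else np.inf              # (3)
    return cond, cond_eff, cond_E, cond_EE

if __name__ == "__main__":
    m = 50
    A = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # SPD example matrix
    b = np.ones(m)                                       # a smooth right-hand side
    print(condition_numbers(A, b))
```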
2
115
Y
Γ
B
ΓD
S
ΓD
D
ΓD
C
N
X A
Figure 1: The unit square domain with the mixed boundary conditions. we may employ a transformation to convert the biharmonic equation with nonhomogeneous boundary conditions to that with homogeneous boundary conditions. Hence, the O(1) effective condition number may be achieved, and we yield an excellent stability of numerical biharmonic equations. This paper is organized as follows. In the next section, upper bounds of the effective condition number and the simplest effective condition number are derived for biharmonic equations in the case of two dimension (2D). In Section 3, a uniform view is given for the assumptions needed for deriving the bounds of the effective condition number. Finally, in Section 4, some numerical examples are provided to illustrate the features of the methods discussed in this paper and a few remarks are made.
2
A New Stability analysis of numerical biharmonic Equations in 2D
In this paper, we consider the biharmonic equation on the unit square [0,1]2 with the mixed type of the clamped and support boundary conditions, see Figure 1 uxxxx + 2uxxyy + uyyyy = f, in S,
(4)
u = g, on Γ,
(5)
∂u ∂2u = g ∗ on ΓD , 2 = g ∗∗ on ΓN , ∂n ∂n
(6)
3
116
where Γ = ∂S = AB ∪ BC ∪ CD ∪ DA, ΓD = AB ∪ CD ∪ DA and ΓN = BC. The functions in (4)-(6) are supposed to be bounded ||f ||0,S ≤ C, ||g||0,Γ ≤ C, ||g ∗ ||0,ΓD ≤ C, ||g ∗∗ ||0,ΓN ≤ C, where C is a bounded constant and sZ Z
sZ
f 2 ds, ||g||0,Γ =
||f ||0,S =
(7)
s
g 2 dl.
(8)
Γ
Divide S by the uniform difference grids xi = ih, yj = jh with h = 1/N . Denote ui,j = u(xi , yj ) = u(ih, jh). We use the standard 13-point difference equation (DE) to solve (4-6). This approach yields the following linear system Ax = b, 2
(9)
2
where the matrix A ∈ R(N −1) ×(N −1) is symmetric and positive definite, and the unknown vector x consists of ui,j , 1 ≤ i, j ≤ N − 1, b is the known vector.
2.1
Preliminary Lemmas
Denote b = bf + bg + bg∗ + bg∗∗ ,
(10) ∗∗
where bf , bg , bg∗ and bg∗∗ result from the functions f, g, g∗ and g , respectively. From (9), we have bf = {· · · , h4 fi,j , · · · },
(11)
∗ ∗ bg∗ = − 2h{h· · · , gi,0 , · · · i, 0, · · · , 0, h· · · , gN,j , · · · i, 0, · · · , 0, ∗ h· · · , g0,j , · · · i, 0, · · · , 0}T ,
(12)
∗∗ bg∗∗ = −h2 {0, · · · , 0, h· · · , gi,n , · · · i, 0, · · · , 0}T ,
(13)
∗∗ , gi,n ,···i
∗∗ ∗∗ hg1,n , g2,n ,···
∗∗ , gN −1,n i.
where h· · · = The vector bg = (bg )i,j is more complicated, whose entries are given as follows (bg )i,j = 0, 2 ≤ i, j ≤ N − 2, (bg )N −2,j = −gn,j , 1 ≤ j ≤ n − 1, (bg )N −1,j = 8gN,j − 2(gn,j+1 + gN,j−1 ), 1 ≤ j ≤ n − 1, (bg )2,j = −g0,j , 1 ≤ j ≤ n − 1, (bg )1,j = 8g0,j − 2(g0,j+1 + g0,j−1 ), 1 ≤ j ≤ n − 1, (bg )i,2 = −gi,0 , 1 ≤ i ≤ n − 1, (bg )i,1 = 8gi,0 − 2(gi+1,0 + gi−1,0 ), 1 ≤ i ≤ n − 1, (bg )i,N −2 = −gi,n , 1 ≤ i ≤ n − 1, (bg )i,N −1 = 6gi,N − 2(gi+1,N + gi−1,N ), 1 ≤ i ≤ n − 1.
4
(14)
117
∗ Lemma 2.1. Suppose thatq||f ||0,S ≤ C, ||g||0,Γ ≤ C, qR||g ||0,ΓD ≤ C and R R f 2 ds and ||g||0,Γ = g 2 dl. Then there ||g ∗∗ ||0,ΓN ≤ C with ||f ||0,S = s Γ exist the bounds,
||b|| ≤ C{h3 ||f ||0,S + h−0.5 ||g||0,Γ + h0.5 ||g ∗ ||0,ΓD + h1.5 ||g ∗∗ ||0,ΓN },
(15)
where C is a bounded constant independent of h. Proof. From (11) we have v v un−1 n−1 un−1 n−1 uX X uX X 3t t 4 2 ||bf || = (h fij ) = h (hfij )2 i=1 i=1
i=1 i=1
sZ Z
≤ Ch3
f 2 ds = Ch3 ||f ||0,S ,
(16)
s
where C is a bounded constant. Next, from (12) and (13), there exist the bounds ||bg∗ || = {
n−1 X
∗ 2 (2hgi,0 ) +
n−1 X
∗ 2 ∗ [(2hg0,j ) + (2hgN,j )2 ]}1/2
i=1
i=1
= 2h1/2 {
n−1 X
∗ 2 h(gi,0 ) +
∗ 2 ∗ h[(g0,j ) + (gN,j )2 ]}1/2
i=1
i=1
sZ ≤ Ch1/2
Ab∪CD∪DA
and
n−1 X
(g ∗ )2 dl = Ch1/2 ||g ∗ ||0,ΓD ,
(17)
v v un−1 un−1 uX uX ∗∗ )2 = h3/2 t ∗∗ )2 (h2 gi,N h(gi,N ||bg∗∗ || = t i=1
i=1
sZ
≤ Ch3/2 BC
(g ∗∗ )2 dl = Ch3/2 ||g ∗∗ ||0,ΓN .
(18)
Finally, we obtain from (14) ||bg ||2 =
n−1 X
{(gn,j )2 + (8gN,j − 2(gn,j+1 + gN,j−1 ))2 }
i=1
+
n−1 X
{(g0,j )2 + (8g0,j − 2(g0,j+1 + g0,j−1 ))2 }
i=1
+
n−1 X
{(gi,0 )2 + (8gi,0 − 2(gi+1,0 + gi−1,0 ))2 }
i=1
+
n−1 X
{(gi,N )2 + (6gi,N − 2(gi+1,n + gi−1,N ))2 }.
i=1
5
(19)
ZENG ET AL: BIHARMONIC EQUATIONS
118
Since (8gN,j − 2(gn,j+1 + gN,j−1 ))2 ≤ 3[(8gN,j )2 + (2gn,j+1 )2 + (2gN,j−1 )2 ], we have n−1 X
{(gn,j )2 + (8gN,j − 2(gn,j+1 + gN,j−1 ))2 }
i=1
≤ 193
n−1 X
(gn,j )2 + 12
i=1
≤ 217
n−1 X
{(gn,j+1 )2 + (gN,j−1 )2 }
i=1
n−1 X
(gn,j )2 + 12{(gn,0 )2 + (gN,N )2 }
i=1 n−1 X
≤ 217h−1 { Z ≤ Ch
h(gn,j )2 + h/2((gn,0 )2 + (gN,N )2 )}
i=1
−1
g 2 dl.
(20)
AB
Similarly, we have n−1 X
Z {(g0,j )2 + (8g0,j − 2(g0,j+1 + g0,j−1 ))2 } ≤ Ch−1
g 2 dl,
(22)
g 2 dl.
(23)
Z {(gi,0 )2 + (8gi,0 − 2(gi+1,0 + gi−1,0 ))2 } ≤ Ch−1 AD
i=1 n−1 X
(21)
CD
i=1 n−1 X
g 2 dl,
Z 2
2
{(gi,N ) + (6gi,N − 2(gi+1,N + gi−1,N )) } ≤ Ch
−1 BC
i=1
Hence we obtain from (19)-(23) sZ ||bg || ≤ Ch
−1/2
g 2 dl = Ch−1/2 ||g||0,Γ .
(24)
Γ
Combining (16)-(18) and (24), we have ||b|| ≤ ||bf || + ||bg || + ||bg∗ || + ||bg∗∗ || ≤ C{h3 ||f ||0,S + h−1/2 ||g||0,Γ + h1/2 ||g ∗ ||0,ΓD + h3/2 ||g ∗∗ ||0,ΓN }.
(25)
This is the desired result (15) and the proof of Lemma 2.1 is completed. ¤ Let us consider the corresponding eigenvalue problems of (4)-(6) wxxxx + 2wxxyy + wyyyy = µw, in S,
(26)
w = 0, on Γ,
(27)
6
ZENG ET AL: BIHARMONIC EQUATIONS
119
∂w ∂2w = 0 on ΓN . (28) = 0 on ΓD , ∂n ∂n2 Also the linear algebraic eigenvalue problem of (9) is given by Ay = λy. Denote h the leading eigenpair of (26)-(28) by (µmin , wmin ). Hence we have λhmin ' h4 µmin h and wmin ' wmin . Since the total number of (i, j) is (N − 1)2 , the orthogonal h vector of wmin is given by h yn = Ch{· · · , (wmin )i,j , · · · }T ,
(29)
where C > 0 is a bounded constant independent of h. Below, let us derive the bounds of βn = ynT b = ynT (bf + bg + bg∗ + bg∗∗ ). We have the following lemma. Lemma 2.2. There exist the following approximations, Z Z ynT bf ' Ch3 f wmin ds, (30) s
Z ynT bg∗ ' −Ch3
g∗ Z
ynT bg∗∗ ' Ch3 Z ynT bg ' Ch3
g( Γ
ΓD
∂ 2 wmin dl, ∂n2
(31)
∂wmin dl, ∂n
(32)
g ∗∗ ΓN
∂ 3 wmin ∂ 3 wmin + 2 )dl, ∂n3 ∂n∂s2
(33)
where C is the constant given in (29), and n and s are the exterior normal and tangent directions of Γ = ∂S, respectively. Proof. We shall prove (30)-(33) one by one. First we have from (11) ynT bf = Ch
n−1 X X n−1
h h4 fij (wmin )i,j = Ch3
i=1 j=1
h h2 fij (wmin )i,j
i=1 j=1
Z Z
' Ch3
n−1 X X n−1
f wmin ds,
(34)
s
where h is small. Also we have from (12) ynT bg∗ = −2Ch2
n−1 X
∗ h ∗ h ∗ h {gi,0 (wmin )i,1 + gN,i (wmin )N −1,i + g0,i (wmin )1,i }.
(35)
i=1 h h h Since (wmin )i,0 = 0 and (wmin )i,1 ' (wmin )i,−1 , we obtain h h h h ∂ 2 wmin (wmin )i,−1 − 2(wmin )i,0 + (wmin 2(wmin )i,1 )i,1 ' = . 2 2 ∂n h h2
Then −2Ch2
n−1 X i=1
∗ h gi,0 (wmin )i,1 ' −Ch4
n−1 X
∗ gi,0
i=1
7
∂ 2 wmin ' −Ch3 ∂n2
Z g∗ AB
∂ 2 wmin dl. ∂n2 (36)
ZENG ET AL: BIHARMONIC EQUATIONS
120
Hence we have from (35) Z ynT bg∗ ' −Ch3
g∗ ΓD
∂ 2 wmin dl. ∂n2
(37)
h This is the second desired result (31). Next, since (wmin )i,N = 0, we have from (13)
ynT bg∗∗ ' −Ch3 Z
n−1 X
∗∗ h gi,N (wmin )i,N −1 ' Ch4
i=1
' Ch3 ΓN
n−1 X
∗∗ gi,N
i=1
∂wmin |i,N −1 ∂n
∂wmin g ∗∗ dl. ∂n
(38)
This is the third desired result (32). Below, let us prove (33), which is nontrivial. Let bg = bg |AB + bg |BC + bg |CD + bg |DA ,
(39)
where the components of bg |AB , bg |BC , bg |CD and bg |DA are given in (14). Then ynT bg = ynT (bg |AB + bg |BC + bg |CD + bg |DA ). (40) For the first term on the right hand of the above equation, we have ynT bg |AB = Ch
n−1 X
h [8gN,j − 2(gn,j+1 + gN,j−1 )](wmin )N −1,j
i=1
− Ch
n−1 X
h gN,j (wmin )N −2,j
i=1
= Ch
n−1 X
h h [4(wmin )N −1,j − (wmin )N −2,j ]gN,j
i=1
+ Ch
n−1 X
h [4gN,j − 2(gn,j+1 + gN,j−1 )](wmin )N −1,j .
(41)
i=1 h h h Assume that (wmin )N,j = 0 and (wmin )N +1,j ' (wmin )N −1,j according to the boundary approximation, we obtain
(
∂ 3 wmin h h h h )N,j = h−3 {(wmin )N +1,j − 3(wmin )N,j + 3(wmin )N −1,j − (wmin )N −2,j } ∂n3 h h ' h−3 {4(wmin )N −1,j − (wmin )N −2,j }. (42)
Hence we have the approximation h h 4(wmin )N −1,j − (wmin )N −2,j ' h3 (
8
∂ 3 wmin )N,j , ∂n3
(43)
ZENG ET AL: BIHARMONIC EQUATIONS
121
which indicates

Ch Σ_{j=1}^{N−1} [4(w^h_min)_{N−1,j} − (w^h_min)_{N−2,j}] g_{N,j} ≈ Ch³ Σ_{j=1}^{N−1} h (∂³w_min/∂n³)_{N,j} g_{N,j} ≈ Ch³ ∫_{AB} g (∂³w_min/∂n³) dl.   (44)

Next, we have

Ch Σ_{j=1}^{N−1} [4g_{N,j} − 2(g_{N,j+1} + g_{N,j−1})](w^h_min)_{N−1,j} = Ch Σ_{j=1}^{N−1} {4(w^h_min)_{N−1,j} − 2[(w^h_min)_{N−1,j+1} + (w^h_min)_{N−1,j−1}]} g_{N,j},   (45)

by noting (w^h_min)_{N−1,0} = (w^h_min)_{N−1,N} = 0. Moreover, assume

(∂²w_min/∂y²)|_{N−1,j} ≈ −h^{−2}{2(w^h_min)_{N−1,j} − [(w^h_min)_{N−1,j+1} + (w^h_min)_{N−1,j−1}]}.   (46)

Since (∂²w_min/∂y²)|_{AB} = 0 from w_min|_{AB} = 0, we have from (45),

Ch Σ_{j=1}^{N−1} [4g_{N,j} − 2(g_{N,j+1} + g_{N,j−1})](w^h_min)_{N−1,j}
  ≈ −2Ch Σ_{j=1}^{N−1} h² (∂²w_min/∂y²)_{N−1,j} g_{N,j} ≈ 2Ch³ Σ_{j=1}^{N−1} h (∂³w_min/∂y²∂n)_{N,j} g_{N,j}
  ≈ 2Ch³ ∫_{AB} g (∂³w_min/∂y²∂n) dl ≈ 2Ch³ ∫_{AB} g (∂³w_min/∂s²∂n) dl.   (47)

Hence we obtain from (41), (44) and (47),

y_n^T b_g|_{AB} ≈ Ch³ ∫_{AB} g (∂³w_min/∂n³ + 2 ∂³w_min/∂n∂s²) dl.   (48)

Similarly, we have

y_n^T b_g|_{DA} ≈ Ch³ ∫_{DA} g (∂³w_min/∂n³ + 2 ∂³w_min/∂n∂s²) dl,   (49)

y_n^T b_g|_{CD} ≈ Ch³ ∫_{DC} g (∂³w_min/∂n³ + 2 ∂³w_min/∂n∂s²) dl.   (50)
Finally, let us consider y_n^T b_g|_{BC} from (14),

y_n^T b_g|_{BC} = Ch Σ_{i=1}^{N−1} [6g_{i,N} − 2(g_{i+1,N} + g_{i−1,N})](w^h_min)_{i,N−1} − Ch Σ_{i=1}^{N−1} g_{i,N} (w^h_min)_{i,N−2}
             = Ch Σ_{i=1}^{N−1} [2(w^h_min)_{i,N−1} − (w^h_min)_{i,N−2}] g_{i,N} + Ch Σ_{i=1}^{N−1} [4g_{i,N} − 2(g_{i+1,N} + g_{i−1,N})](w^h_min)_{i,N−1}.   (51)

For the (i, j) near BC, since (w^h_min)_{i,N} = 0, we have (w^h_min)_{i,N+1} ≈ −(w^h_min)_{i,N−1}, and

(∂³w_min/∂n³)_{i,N} = (∂³w_min/∂y³)_{i,N} ≈ h^{−3}{(w^h_min)_{i,N+1} − 3(w^h_min)_{i,N} + 3(w^h_min)_{i,N−1} − (w^h_min)_{i,N−2}} ≈ h^{−3}{2(w^h_min)_{i,N−1} − (w^h_min)_{i,N−2}}.   (52)
Hence, we obtain

Ch Σ_{i=1}^{N−1} [2(w^h_min)_{i,N−1} − (w^h_min)_{i,N−2}] g_{i,N} ≈ Ch³ Σ_{i=1}^{N−1} h g_{i,N} (∂³w_min/∂n³)_{i,N} ≈ Ch³ ∫_{BC} g (∂³w_min/∂n³) dl.   (53)

Moreover, because ∂²w_min/∂x²|_{CB} = 0, from the second term on the far right-hand side of (51) we have

Ch Σ_{i=1}^{N−1} [4g_{i,N} − 2(g_{i+1,N} + g_{i−1,N})](w^h_min)_{i,N−1}
  ≈ Ch Σ_{i=1}^{N−1} {4(w^h_min)_{i,N−1} − 2[(w^h_min)_{i+1,N−1} + (w^h_min)_{i−1,N−1}]} g_{i,N}
  ≈ −2Ch Σ_{i=1}^{N−1} h² (∂²w_min/∂x²)_{i,N−1} g_{i,N} ≈ 2Ch³ Σ_{i=1}^{N−1} h g_{i,N} (∂³w_min/∂x²∂n)_{i,N}
  ≈ 2Ch³ ∫_{BC} g (∂³w_min/∂n∂s²) dl.   (54)

Combining (51), (53) and (54), we have

y_n^T b_g|_{CB} ≈ Ch³ ∫_{BC} g (∂³w_min/∂n³ + 2 ∂³w_min/∂n∂s²) dl.   (55)
Hence from (48)-(50) and (55), we obtain

y_n^T b_g ≈ Ch³ ∫_Γ g (∂³w_min/∂n³ + 2 ∂³w_min/∂n∂s²) dl.   (56)
This is the last desired result (33) and the proof of Lemma 2.2 is completed. □
2.2  Bounds of Cond_EE

We are ready to derive the bounds of Cond_EE. We have from (40) and Lemma 2.2,

β_n = y_n^T b ≈ Ch³ { ∫∫_S f w_min ds − ∫_{Γ_D} g* (∂²w_min/∂n²) dl + ∫_{Γ_N} g** (∂w_min/∂n) dl + ∫_Γ g (∂³w_min/∂n³ + 2 ∂³w_min/∂n∂s²) dl }.   (57)

Denote p(v) = ∂³v/∂n³ + 2 ∂³v/∂n∂s² and assume

A1:  T₁ = ∫∫_S f w_min ds − ∫_{Γ_D} g* (∂²w_min/∂n²) dl + ∫_{Γ_N} g** (∂w_min/∂n) dl + ∫_Γ g p(w_min) dl = O(1).   (58)

We obtain the simplest effective condition number from Lemma 2.1 and (58),

Cond_EE = ||b|| / |β_n| ≤ C{||f||_{0,S} + h^{−3.5} ||g||_{0,Γ} + h^{−2.5} ||g*||_{0,Γ_D} + h^{−1.5} ||g**||_{0,Γ_N}}.   (59)

We state this result as a theorem.

Theorem 2.1. Let A1 and all conditions in Lemma 2.1 hold. Then the bounds (59) of Cond_EE hold.

From Theorem 2.1 we have the following corollary.

Corollary 2.1. Let all conditions in Theorem 2.1 hold. For the homogeneous boundary conditions g = g* = g** = 0, the simplest effective condition number has the bounds

Cond_EE = O(1).   (60)

From Theorem 2.1, the worst effect on Cond_EE results from u|_Γ = g. Actually, we can use a transformation v = u − ū, where ū|_Γ = g, ∂ū/∂n|_{Γ_D} = g*, and ∂²ū/∂n²|_{Γ_N} = g**, to obtain a new biharmonic equation with homogeneous boundary conditions: Δ²v = f − Δ²ū in S, v = 0 on Γ, ∂v/∂n = 0 on Γ_D, ∂²v/∂n² = 0 on Γ_N. Hence we may achieve the small bound (60) of the effective condition number, which offers excellent stability for numerical biharmonic equations.
2.3  Bounds of Cond_eff
Below, we will derive the bounds of the effective condition number in (1),

Cond_eff = ||b|| / (λ_n ||x||).   (61)

First, we give a lemma.

Lemma 2.3. For the stiffness matrix A in (9), there exists the bound

λ_min = λ_min(A) ≥ c h⁴,   (62)

where c is a constant independent of h.

Proof. First, consider the simply supported boundary condition on the entire boundary ∂S, and denote by A* the corresponding stiffness matrix. Since the eigenvectors are known as

w_{p,q} = 2{..., sin(pπih) sin(qπjh), ...}/N,  1 ≤ p, q ≤ N−1,

we obtain the eigenvalues

λ_{p,q}(A*) = 16{sin²(pπh/2) + sin²(qπh/2)}²,

to give the minimal eigenvalue

λ_min(A*) = λ_{1,1}(A*) = 64 sin⁴(πh/2) ≈ 4π⁴h⁴.   (63)

Next, consider the mixed type of clamped and simply supported boundary conditions in (5)-(7). The key difference is that the first coefficient in the difference equation increases from 19 to 21. Hence, the only difference between the matrices A and A* is that the diagonal entries of A are larger than those of A*. Then we have A = A* + D, where D = Diag{d_{1,1}, ..., d_{n,n}} is a diagonal matrix with d_{i,i} ≥ 0 and n = (N−1)². Hence, we obtain

λ_min(A) = min_{(x,x)≠0} (Ax, x)/(x, x) = min_{(x,x)≠0} [(A*x, x) + (Dx, x)]/(x, x) ≥ min_{(x,x)≠0} (A*x, x)/(x, x) = λ_min(A*).   (64)

From (63) and (64) we have

λ_min(A) ≥ λ_min(A*) ≥ c h⁴.   (65)

This is the desired result (62), and completes the proof of Lemma 2.3. □

Since f ∈ C(S), u ∈ C(S) is bounded, so that

||x|| = sqrt( Σ_{i,j=1}^{N−1} u²_{i,j} ) ≤ CN = O(h^{−1}).   (66)
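Before moving on, the eigenvalue formula (63) in the proof of Lemma 2.3 can be spot-checked numerically. The sketch below is my own illustration (not part of the paper): it assembles the matrix A* for the fully simply supported square as the square of the 5-point Laplacian stencil matrix and compares its smallest eigenvalue with 64 sin⁴(πh/2) ≈ 4π⁴h⁴.

```python
import numpy as np

def biharmonic_simply_supported(N):
    """A* for the fully simply supported unit square: A* = L @ L, where L is the
    (N-1)^2 x (N-1)^2 matrix of the 5-point Laplacian stencil (4 on the diagonal,
    -1 for neighbours), i.e. the h^4-scaled 13-point biharmonic stencil."""
    h = 1.0 / N
    m = N - 1
    T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # 1-D stencil [-1, 2, -1]
    I = np.eye(m)
    L = np.kron(I, T) + np.kron(T, I)                        # 5-point Laplacian stencil
    return L @ L, h

for N in (8, 16, 32):
    A_star, h = biharmonic_simply_supported(N)
    lam_min = np.linalg.eigvalsh(A_star)[0]
    print(N, lam_min, 64.0 * np.sin(np.pi * h / 2.0) ** 4, 4.0 * np.pi ** 4 * h ** 4)
```

The computed minimum matches 64 sin⁴(πh/2) to rounding error, and both tend to 4π⁴h⁴ as h → 0, which is the lower-bound behaviour used in (65).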
Hence, we may assume that the majority of the u_{i,j} are O(1), and make assumption

A2:  ||x|| ≥ c h^{−1},   (67)

where c is a constant independent of h. Based on (61), (67), A2 and Lemmas 2.1 and 2.3, we have the following theorem.

Theorem 2.2. Let A2 and all conditions in Lemma 2.1 hold. Then there exist the bounds

Cond_eff ≤ C{||f||_{0,S} + h^{−3.5} ||g||_{0,Γ} + h^{−2.5} ||g*||_{0,Γ_D} + h^{−1.5} ||g**||_{0,Γ_N}},   (68)

where C is a constant independent of h.

Note that the bounds of the effective condition number in Theorems 2.1 and 2.2 are exactly the same, but based on different assumptions, A1 and A2. A linkage between A1 and A2 is provided in the next section.
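To make the quantities compared in Theorems 2.1 and 2.2 concrete, the following sketch (my own illustration, using Cond = λ_max/λ_min, Cond_eff = ||b||/(λ_min||x||) as in (61), and the simplest effective condition number Cond_EE = ||b||/|β_n| as in (59)) computes all three for a symmetric positive definite system Ax = b. The toy matrix and right-hand side are arbitrary assumptions chosen only to show how a smooth b makes the effective condition numbers far smaller than Cond.

```python
import numpy as np

def condition_numbers(A, b):
    """Traditional and effective condition numbers of an SPD system A x = b:
    Cond = lam_max/lam_min, Cond_eff = ||b||/(lam_min*||x||), and
    Cond_EE = ||b||/|beta_n| with beta_n the component of b along the
    eigenvector of the smallest eigenvalue."""
    lam, U = np.linalg.eigh(A)            # ascending eigenvalues
    lam_min, lam_max = lam[0], lam[-1]
    x = np.linalg.solve(A, b)
    beta_n = U[:, 0] @ b
    cond = lam_max / lam_min
    cond_eff = np.linalg.norm(b) / (lam_min * np.linalg.norm(x))
    cond_ee = np.linalg.norm(b) / abs(beta_n)
    return cond, cond_eff, cond_ee

# toy example: Cond is 1e4 while both effective condition numbers stay O(10)
n = 50
A = np.diag(np.linspace(1e-4, 1.0, n))
b = np.ones(n)
print(condition_numbers(A, b))
```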
3  A View on Assumption A1

We cite the Green formula in Li et al. [14],

∫∫_S Λ_σ(u, v) = ∫∫_S v Δ²u − ∫_{∂S} m(u) v_n − ∫_{∂S} p(u) v + 2(1 − σ)([u_xy v]₃⁴ − [u_xy v]₁²),   (69)

where Δu = u_xx + u_yy, [v]₁² = v₂ − v₁, and 1, 2, 3, 4 are the four corners of S. The notations are

Λ_σ(u, v) = Δu Δv + (1 − σ)(2u_xy v_xy − u_xx v_yy − u_yy v_xx),   (70)

m(u) = −u_nn − σ u_ss,  p(u) = u_nnn + (2 − σ) u_ssn,   (71)

where 0 ≤ σ < 1 and n and s are the normal and tangent directions along the boundary ∂S of S, respectively. The notations in (70) and (71) are u_xy = ∂²u/∂x∂y, u_xx = ∂²u/∂x², u_nn = ∂²u/∂n², u_ss = ∂²u/∂s², etc. For the clamped and simply supported boundary conditions, v|_Γ = 0, there is no corner-effect term, because 2(1 − σ)([u_xy v]₃⁴ − [u_xy v]₁²) = 0. Choosing σ = 0, we have from (69)

∫∫_S Λ_0(u, w) = ∫∫_S (u_xx w_xx + u_yy w_yy + 2u_xy w_xy) = ∫∫_S w Δ²u − ∫_Γ m(u) w_n − ∫_Γ p(u) w,   (72)

where m(u) = −u_nn and p(u) = u_nnn + 2u_ssn. Let w = w_min satisfy

Δ²w = µw,  w|_Γ = 0,  ∂w/∂n|_{Γ_D} = 0,  ∂²w/∂n²|_{Γ_N} = 0.
We obtain from (72)

∫∫_S Λ_0(u, w) = ∫∫_S w Δ²u + ∫_Γ (∂²u/∂n²)(∂w/∂n) − ∫_Γ p(u) w = ∫∫_S f w + ∫_{Γ_N} g** (∂w/∂n).   (73)

On the other hand, we have from (72),

∫∫_S Λ_0(u, w) = µ ∫∫_S u w + ∫_{Γ_D} g* (∂²w/∂n²) − ∫_Γ p(w) g.   (74)

Combining (73) and (74), we have

∫∫_S f w − ∫_{Γ_D} g* (∂²w/∂n²) + ∫_{Γ_N} g** (∂w/∂n) + ∫_Γ p(w) g = µ ∫∫_S u w.   (75)

Hence, we may restate assumption A1 in (58) as

A1*:  ∫∫_S u w_min = O(1),   (76)

by noting µ_min = O(1). Hence Theorem 2.1 may be rewritten as follows.

Theorem 3.1. Let A1* and all conditions in Lemma 2.1 hold. Then the bounds (60) of Cond_EE hold.

To close this section, let us explore a linkage between assumptions A1 (i.e., A1*) and A2. In fact, assumption A1* leads to A2. We confirm this conclusion by contradiction. Suppose that A2 does not hold; then we have

||x|| = o(h^{−1}),  as h → 0.   (77)
Since the function w_min denotes the displacement of the vibrating plate corresponding to the minimal frequency (see Courant and Hilbert [9]), we conclude that w_min > 0 in S and that the majority of the values w_min(x_i, y_j) are O(1). Hence we obtain

∫∫_S u w_min ≈ h² Σ_{i,j} w_min(x_i, y_j) u(x_i, y_j) = h² Σ_{i,j} w_min(x_i, y_j) u_{i,j}
            ≤ h² sqrt(Σ_{i,j} w²_min(x_i, y_j)) · sqrt(Σ_{i,j} u²_{i,j}) = h² · O(N) · o(h^{−1}) = o(1).   (78)

This gives

∫∫_S u w_min = o(1),   (79)

which contradicts assumption A1*.
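The two elementary facts used in (78) — the Riemann-sum approximation of the double integral by h² times the nodal sum, and the discrete Cauchy–Schwarz inequality — can be illustrated directly. The sketch below is an added example with an arbitrary smooth test pair, not taken from the paper.

```python
import numpy as np

N = 64
h = 1.0 / N
xi = np.arange(1, N) * h
X, Y = np.meshgrid(xi, xi, indexing="ij")

u = np.sin(np.pi * X) * np.sin(np.pi * Y)   # stand-in for the nodal values u_{i,j}
w = np.sin(np.pi * X) * np.sin(np.pi * Y)   # stand-in for w_min(x_i, y_j)

riemann = h ** 2 * np.sum(u * w)            # h^2 * sum u_{ij} w_{ij} ~ double integral of u*w
exact = 0.25                                # integral of sin^2(pi x) sin^2(pi y) over the unit square
cauchy_schwarz = h ** 2 * np.sqrt(np.sum(w ** 2)) * np.sqrt(np.sum(u ** 2))
print(riemann, exact, riemann <= cauchy_schwarz)
```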
4  Numerical experiments
The uniform rectangles □_{ij} are chosen with h = 1/N, where N is the division number along AB (see Figure 1). We choose the biharmonic equation with three different boundary conditions, called Models I, II and III.

Model I. Choose the exact solution u(x, y) = [x(1 − x) sin(πx)][y(1 − y)² sin(πy)] to satisfy

Δ²u = f in S,  u|_Γ = 0,  ∂u/∂n|_{AB∪AD∪CD} = 0,  ∂²u/∂n²|_{BC} = 0,

where S = [0, 1] × [0, 1] and Γ = ∂S = AB ∪ BC ∪ CD ∪ DA.

Model II. Choose the exact solution u(x, y) = [x(1 − x) sin(πx)][y(1 − y) sin(πy)] to satisfy

Δ²u = f in S,  u|_Γ = 0,  ∂u/∂n|_{AB∪AD∪CD} = 0,  ∂²u/∂n²|_{BC} = 2πx(1 − x) sin(πx).

Model III. Choose the exact solution u(x, y) = [x(1 − x) sin(πx/2)][y(1 − y)² sin(πy)] to satisfy

Δ²u = f in S,  u|_Γ = 0,  ∂u/∂n|_{AD∪CD} = 0,  ∂u/∂n|_{AB} = −y(1 − y)² sin(πy),  ∂²u/∂n²|_{BC} = 0.

The difference equations are established as in Section 2.1 and the numerical solutions are obtained by Gaussian elimination. The numerical results are listed in Tables 1, 2 and 3 for Models I, II and III, respectively. Interestingly, the computed values of Cond_E and Cond_EE agree to three significant digits. Let us examine the numerical asymptotes of Cond_E and Cond_EE as N grows. First, for Model I with the homogeneous boundary conditions g = g* = g** = 0 in (5) and (6), the rates

Cond_E = Cond_EE = O(1),   (80)

are observed from Table 1, verifying Theorem 2.1 and Corollary 2.1 perfectly. Next, for Model II with the boundary conditions (5) and (6) with

g = g* = 0,  g** = 2πx(1 − x) sin(πx),   (81)

we can see numerically from Table 2,

Cond_EE|_{N=64} / Cond_EE|_{N=32} = 21.0641 / 7.8290 = 2.6905 ≈ 2^{1.5} = 2.83,   (82)
Table 1: The condition numbers and effective condition numbers for Model I N λmax (A) λmin (A) Cond. ||b||2 βn Cond− E Cond− EE
8 59.38 0.1841 0.322(3) 0.542(-1) 0.230(-1) 2.3565 2.3565
16 62.79 0.124(-1) 0.506(4) 0.712(-2) 0.282(-2) 2.5248 2.5248
32 63.69 0.800(-3) 0.796(5) 0.920(-3) 0.351(-3) 2.6211 2.6211
64 63.92 0.603(-4) 0.106(7) 0.117(-3) 0.439(-4) 2.6651 2.6651
Table 2: The condition numbers and effective condition numbers for Model II N λmax (A) λmin (A) Cond. ||b||2 βn Cond− E Cond− EE
8 59.38 0.1841 0.322(3) 0.906(-1) 0.478(-1) 1.8954 1.8954
16 62.79 0.124(-1) 0.506(4) 0.198(-1) 0.590(-2) 3.3559 3.3559
32 63.69 0.800(-3) 0.796(5) 0.577(-2) 0.737(-3) 7.8290 7.8290
64 63.92 0.603(-4) 0.106(7) 0.194(-2) 0.921(-4) 21.0641 21.0641
to indicate the following rates:

Cond_E = Cond_EE = O(h^{−1.5}),   (83)
which also verify Theorem 2.1. Lastly, for Model III with the following boundary conditions in (5) and (6),

g = 0,  g*|_{AB} = −y(1 − y)² sin(πy),  g*|_{AD∪CD} = 0,  g** = 0,   (84)

we can see numerically from Table 3,

Cond_EE|_{N=64} / Cond_EE|_{N=32} = 590.116 / 104.727 = 5.6348 ≈ 2^{2.5} = 5.66,
to indicate the following rates, Cond− E = Cond− EE=O(h−2.5 ),
(85)
which also validate Theorem 2.1.

Table 3: The condition numbers and effective condition numbers for Model III
N   λmax(A)   λmin(A)   Cond.   ||b||2   βn   Cond_E   Cond_EE
8 59.38 0.1841 0.322(3) 0.767(-1) 0.172(-1) 4.459 4.459
16 62.79 0.124(-1) 0.506(4) 0.417(-1) 0.219(-2) 19.041 19.041
32 63.69 0.800(-3) 0.796(5) 0.288(-1) 0.275(-3) 104.727 104.727
64 63.92 0.603(-4) 0.106(7) 0.203(-1) 0.344(-4) 590.116 590.116
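The growth rates quoted in (80), (82)-(85) can be read off the tables by comparing successive N. The small script below is my own addition: it estimates the exponent p in Cond_EE = O(h^{−p}) = O(N^p) from the N = 32 and N = 64 entries of the Cond_EE columns of Tables 1-3.

```python
import numpy as np

# Cond_EE values at N = 32 and N = 64, copied from Tables 1-3.
cond_ee = {"Model I": (2.6211, 2.6651),
           "Model II": (7.8290, 21.0641),
           "Model III": (104.727, 590.116)}

for model, (c32, c64) in cond_ee.items():
    p = np.log2(c64 / c32)   # Cond_EE ~ N^p  =>  p = log2(ratio) when N doubles
    print(f"{model}: ratio = {c64 / c32:.4f}, estimated order p = {p:.2f}")
```

The output gives p ≈ 0, 1.4 and 2.5 for Models I, II and III respectively, consistent with the rates O(1), O(h^{−1.5}) and O(h^{−2.5}) stated above.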
To close this paper, let us make a few concluding remarks.

1. The bounds of the effective condition number are smaller, or even much smaller, than those of the traditional Cond. This paper explores a new stability analysis based on the effective condition number.

2. The effective condition number is applied to the standard 13-point difference equation for biharmonic equations. Its bounds are derived, showing Cond_EE = Cond_E = O(h^{−3.5}), which is smaller than Cond. = O(h^{−4}) when h is small. However, for special cases, the effective condition number may be much smaller. For instance, Cond_EE = Cond_E = O(1) is achieved for homogeneous boundary conditions. This result is striking compared with Cond. = O(h^{−4}). The small bounds of the effective condition number are significant for the stability of numerical biharmonic equations, because their instability is severe compared with that of the numerical Poisson equation.

3. To reduce the bounds of the effective condition number, we may employ a transformation v = u − ū, where the function ū satisfies the non-homogeneous boundary conditions. Then we obtain Δ²v = f − Δ²ū with homogeneous boundary conditions, and the small effective condition number O(1) is achieved.

4. Numerical experiments are carried out for the mixed type of clamped and simply supported boundary conditions, in which the computed results coincide with the new stability analysis perfectly.
References [1] Wilkinson J.H., The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965. [2] Atkinson K.E., An Introduction to Numerical Analysis (Sec. Ed.), John Wiley & Sons, New York, 1989. [3] Atkinson K.E. and Han W., Theoretical Numerical Analysis, A Functional Analysis Framework, Springer, Berlin, 2001. [4] Chan T.F. and Foulser D.E., Effectively well-conditioned linear systems, SIAM J. Sci. Stat. Comput., Vol. 9, pp. 963–969, 1988. [5] Christiansen S. and Hansen P.C., The effective condition number applied to error analysis of certain boundary collocation methods, J. of Comp. and Applied Math., Vol. 54(1), pp. 15–36, 1994. [6] Frayss´ e V. and Toumazou V., A note on the normwise perturbation theory for the regular generalized eigenproblem, Numerical Linear Algebra with Application, Vol. 5, pp. 1–10, 1998. [7] Gulliksson M. and Wedin P., Perturbation theory for generalized and constrained linear least squares, Numerical Linear Algebra with Application, Vol. 7, pp. 181–195, 2000. [8] Golub G.H. and van Loan C.F., Matrix Computations (2nd Edition), The Johns Hopkins, Baltimore and London, 1989. [9] Courant R. and Hilbert D., Methods of Mathematical Physics, Vol. I. WileyInterscience Publishers, New York, 1953. [10] Huang H. T. and Li Z.C., Effective condition number and superconvergence of the Trefftz method coupled with high order FEM for singularity problems, accepted by Boundary Analysis with Boundary Elements, 2005. [11] Li Z.C., Chien CS, Huang HT, Effective condition number for finite difference method, J. Comp. and Appl. Math., Vol 198, pp. 208-235, 2007. [12] Li Z.C., Boundary penalty finite element methods for blending surfaces, II. Biharmonic equations, J. Comp. and Appl. Math., Vol 110, pp. 155-176, 1999. 18
[13] Li Z.C. and Yan N., New error estimates of bi-cubic Hermite finite element methods for biharmonic equations, J. Comp. and Appl. Math., Vol 142, pp. 251-285, 2002. [14] Li Z.C., Lu T.T. and Hu H.Y., The collocation Trefftz method for biharmonic equations with crack singularities, Engineering Analysis with Boundary Elements, Vol. 28, pp. 79–96, 2004. [15] Li Z.C., Huang J., and Huang H.T., Effective condition number of the Hermite finite element methods for biharmonic equations, Technical report, Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung, Taiwan, 2005. [16] Lu T., Zhou G.F and Lin Q., High order difference methods for the biharmonic equation, Acta Math. Sci., Vol. 6, pp. 223-230, 1986. [17] Luding E., He C. and Volker M., Minimization of the norm, the norm of the inverse and the condition number of a matrix by completion, Numerical Linear Algebra with Application, Vol. 2(2), pp. 155–171, 1995. [18] Parlett B.N., The Symmetric Eigenvalue Problem, SIAM Publishers, Philadelphia, 1998. [19] Quarteroni A. and Valli A., Numerical approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. [20] Schwarz H.R., Numerical Analysis, A Comprehensive Introduction, John Wiley & Sons, Chichester, New York, 1989.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 132-142 , 2012, COPYRIGHT 2012 EUDOXUS132 PRESS, LLC
STABILITY PROBLEMS OF CUBIC MAPPINGS WITH THE FIXED POINT ALTERNATIVE IN GOO CHO1∗ , DONGSEUNG KANG2 , AND HEEJEONG KOH2
Abstract. We investigate the general solution of the cubic functional equation 3f (2x + y) + f (2x − y) + 6f (y) = 8f (x + y) + 24f (x) on the real vector spaces and then we prove the generalized Hyers-Ulam-Rassias stability in quasi-β-normed spaces, the stability by using a subadditive function and then the stability by using the alternative fixed point method.
1. Introduction The stability problem for functional equations is originated by S.M. Ulam [32] concerning the stability of group homomorphisms. The first result concerning the stability of functional equations was presented by Hyers [8] on the assumption that the spaces are Banach spaces. The famous Hyers stability result that appeared in [8] was generalized by T. Aoki [1] for the stability of the additive mapping involving a sum of powers of p-norms and by Th.M. Rassias [24] for the stability of the linear mapping by considering the Cauchy difference to be unbounded. The result of Th.M. Rassias lead mathematicians working in stability of functional equations to establish what is known today as Hyers-Ulam-Rassias stability or Cauchy-Rassias stability as well as to introduce new definitions of stability concepts. In [18] and [19], J.M. Rassias investigated a similar stability theorem for linear and nonlinear mappings with the unbounded Cauchy difference. During the last three decades, several stability problems of a large variety of functional equations have been extensively studied and generalized by a number of authors [2], [5],[6],[7],[9],[25],[10],[26],[27], and [28]. The quadratic function f (x) = cx2 (c ∈ R) satisfies the functional equation (1.1)
f (x + y) + f (x − y) = 2f (x) + 2f (y) .
This question is called the quadratic functional equation, and every solution of the equation (1.1) is called a quadratic function. In fact, a function f : X → Y is a solution of the equation (1.1) if and only if there exists a bilinear function B : X × X → Y such that f (x) = B(x, x) for all x ∈ X . A Hyers-Ulam stability theorem for the quadratic functional equation (1.1) was first proved by Skof [31] for functions f : X → Y , where X is a normed space and Y is a Banach space. 2000 Mathematics Subject Classification. 39B52. Key words and phrases. Hyers-Ulam-Rassias stability, functional equation, cubic mapping, quasi-β-normed space, subadditive function, fixed point. * Corresponding author. 1
The cubic function f (x) = cx3 (c ∈ R) satisfies the functional equation (1.2)
f (2x + y) + f (2x − y) = 2f (x + y) + 2f (x − y) + 12f (x) .
The equation (1.2) was solved by Jun and Kim [12]. Similar to a quadratic functional equation, actually, they proved that a function f : X → Y is a solution of the equation (1.2) if and only if there exists a function F : X × X × X → Y such that f (x) = F (x, x, x) for all x ∈ X , and F is symmetric for each fixed one variable and is additive for fixed two variables; see [12]. We promise that by a cubic function we mean every solution of the equation (1.2) is called a cubic function. Also, Chu and Kang [15, Lemma 2.1] proved that the equation (1.2) is equivalent to the following equation: (1.3)
f (x + 2y) + f (x − 2y) + f (2x) = 2f (x) + 4f (x + y) + 4f (x − y) .
Let β be a real number with 0 < β ≤ 1 and K be either R or C . We will consider the definition and some preliminary results of a quasi-β-norm on a linear space. Definition 1.1. Let X be a linear space over a field K . A quasi-β-norm || · || is a real-valued function on X satisfying the followings: (1) ||x|| ≥ 0 for all x ∈ X and ||x|| = 0 if and only if x = 0 . (2) ||λx|| = |λ|β · ||x|| for all λ ∈ K and all x ∈ X . (3) There is a constant K ≥ 1 such that ||x+y|| ≤ K(||x||+||y||) for all x, y ∈ X . The pair (X, ||·||) is called a quasi-β-normed space if ||·|| is a quasi-β-norm on X . The smallest possible K is called the modulus of concavity of || · || . A quasi-Banach space is a complete quasi-β-normed space. A quasi-β-norm || · || is called a (β, p)-norm (0 < p ≤ 1) if ||x + y||p ≤ ||x||p + ||y||p , for all x, y ∈ X . In this case, a quasi-β-Banach space is called a (β, p)-Banach space; see [3], [29] and [23]. In this paper, we consider the following the cubic functional equation: (1.4)
3f (2x + y) + f (2x − y) + 6f (y) = 8f (x + y) + 24f (x)
for all x, y ∈ X . We investigate the generalized Hyers-Ulam-Rassias stability problem in quasi-β-normed spaces and then the stability by using a subadditive function and then the stability by using the alternative fixed point method for the cubic function f : X → Y satisfying the equation (1.4). 2. Cubic functional equations Lemma 2.1. Let X and Y be real vector spaces. A function f : X → Y satisfies the functional equation (1.3) for all x , y ∈ X if and only if f is cubic. Proof. See [15, Lemma 2.1].
Lemma 2.2. Let X and Y be real vector spaces. A function f : X → Y satisfies the equation (1.4) if and only if f is cubic. Proof. Lemma 2.1 implies that it is enough to show that (1.4) and (1.3) are equivalent. Letting x = y = 0 in the equation (1.4), we have f (0) = 0 . Also putting y = 0 in the equation (1.4), we get 4f (2x) = 32f (x) , that is, f (2x) = 8f (x) , for all x ∈ X . Now letting x = x−y 2 in the equation (1.4),
3f (x) + f (x − 2y) + 6f (y) = f (x + y) + 3f (x − y) ,
(2.1)
for all x , y ∈ X . Also, letting y = 2y in the equation (1.4), we have 3f (2x + 2y) + f (2x − 2y) + 48f (y) = 8f (x + 2y) + 24f (x) , or f (x + 2y) + 3f (x) = 3f (x + y) + f (x − y) + 6f (y) ,
(2.2)
for all x , y ∈ X . Adding equations (2.1) and (2.2), f (x + 2y) + 3f (x) + 3f (x) + f (x − 2y) + 6f (y) = f (x + y) + 3f (x − y) + 3f (x + y) + f (x − y) + 6f (y) , i.e., f (x + 2y) + f (x − 2y) + f (2x) = 2f (x) + 4f (x + y) + 4f (x − y) , for all x , y ∈ X . Hence (1.4) implies (1.3). Conversely, it is easy to show that f (0) = 0 , f (−x) = −f (x) , and f (2x) = 8f (x) , for all x ∈ X . Letting x = 2x in the equation (1.3), we have f (2x + y) + f (2x − y) = 2f (x + y) + 2f (x − y) + 12f (x) ,
(2.3)
for all x , y ∈ X . Now switching x and y in the equation (1.3), we get f (2x + y) − f (2x − y) + f (2y) = 2f (y) + 4f (x + y) − 4f (x − y) ,
(2.4)
for all x , y ∈ X . By (2.3) and (2.4), 3f (2x + y) + f (2x − y) + f (2y) = 2f (y) + 8f (x + y) + 24f (x) , for all x , y ∈ X . Since f (2y) = 8f (y) , we have the desired result.
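The equivalence just proved can be spot-checked numerically: any f(x) = cx³ on the reals must make both sides of (1.3) and (1.4) agree. The snippet below is an added illustration (the constant c and the sampled arguments are arbitrary).

```python
import random

def f(x, c=2.0):
    return c * x ** 3

random.seed(0)
for _ in range(5):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # residuals of (1.3) and (1.4); both vanish for a cubic function
    eq13 = f(x + 2*y) + f(x - 2*y) + f(2*x) - (2*f(x) + 4*f(x + y) + 4*f(x - y))
    eq14 = 3*f(2*x + y) + f(2*x - y) + 6*f(y) - (8*f(x + y) + 24*f(x))
    print(round(eq13, 9), round(eq14, 9))   # zero up to floating-point rounding
```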
3. Stabilities Throughout this section, let X be a quasi-β-normed space and let Y be a quasiβ-Banach space with a quasi-β-norm || · ||Y . Let K be the modulus of concavity of || · ||Y . We will investigate the generalized Hyers-Ulam-Rassias stability problem for the functional equation (1.4). After then we will study the stability by using a subadditive function and alternative fixed point method. For a given mapping f : X → Y , let Df (x, y) = 3f (2x + y) + f (2x − y) − 8f (x + y) − 24f (x) + 6f (y) , x, y ∈ X . Theorem 3.1. Suppose that there exists a mapping φ : X 2 → R+ := [0, ∞) for which a mapping f : X → Y satisfies f (0) = 0 , (3.1)
||Df (x, y)||Y ≤ φ(x, y)
and the series Σ_{j=0}^∞ (K/2^{3β})^j φ(2^j x, 2^j y) converges for all x, y ∈ X. Then there exists a unique cubic mapping C : X → Y which satisfies the equation (1.4) and the inequality

(3.2)  ||f(x) − C(x)||_Y ≤ (K/32^β) Σ_{j=0}^∞ (K/2^{3β})^j φ(2^j x, 0),

for all x ∈ X.

Proof. By letting y = 0 in the inequality (3.1), since f(0) = 0, we have ||Df(x, 0)||_Y = ||4f(2x) − 32f(x)||_Y ≤ φ(x, 0) for all x ∈ X. Then

(3.3)  ||f(x) − (1/2³) f(2x)||_Y ≤ (1/32^β) φ(x, 0),

for all x ∈ X. Now, replacing x by 2x and dividing by 2^{3β} in the inequality (3.3), we get

(3.4)  ||(1/2³) f(2x) − (1/2³)² f(2²x)||_Y ≤ (1/32^β)(1/2^{3β}) φ(2x, 0),

for all x ∈ X. Combining the two inequalities (3.3) and (3.4), we have

||f(x) − (1/2³)² f(2²x)||_Y ≤ K(||f(x) − (1/2³) f(2x)||_Y + ||(1/2³) f(2x) − (1/2³)² f(2²x)||_Y) ≤ (K/32^β)(φ(x, 0) + (1/2^{3β}) φ(2x, 0)),

for all x ∈ X. Inductively, continuing in this way,

||f(x) − (1/2³)^n f(2^n x)||_Y ≤ (K/32^β) Σ_{j=0}^{n−1} (K/2^{3β})^j φ(2^j x, 0),

for all x ∈ X and all n ∈ N. Let n, m ∈ N with n < m. Replacing x by 2^n x and multiplying by (1/2^{3β})^n in the inequality (3.3), we get

||(1/2³)^n f(2^n x) − (1/2³)^{n+1} f(2^{n+1} x)||_Y ≤ (1/2^{3β})^n (1/32^β) φ(2^n x, 0),

for all x ∈ X. Also, inductively,

(3.5)  ||(1/2³)^n f(2^n x) − (1/2³)^m f(2^m x)||_Y ≤ (K/32^β) Σ_{j=n}^{m−1} (K/2^{3β})^j φ(2^j x, 0),

for all x ∈ X. As m → ∞, the right-hand side of the inequality (3.5) tends to 0. Hence {(1/2³)^n f(2^n x)} is a Cauchy sequence in the quasi-β-Banach space Y. Thus we can define a mapping C : X → Y by

C(x) = lim_{n→∞} (1/2³)^n f(2^n x),

for all x ∈ X. Hence we have

||f(x) − C(x)||_Y ≤ (K/32^β) Σ_{j=0}^∞ (K/2^{3β})^j φ(2^j x, 0),

for all x ∈ X. Now, we will show that the mapping C : X → Y satisfies the equation (1.4) and that it is unique.
1 n ||Df (2n x, 2n y)||Y 23β 1 n = 3β ||3f (2n (2x + y)) + f (2n (2x − y)) − 8f (2n (x + y)) − 24f (2n x) + 6f (2n y)||Y 2 K n ≤ 3β ||3f (2n (2x + y)) + f (2n (2x − y)) − 8f (2n (x + y)) − 24f (2n x) + 6f (2n y)||Y 2 K n ≤ 3β φ(2n x, 2n y) , 2 for all x, y ∈ X . By taking n → ∞ , we know that the mapping C : X → Y is cubic. Now, assume that there exists a cubic mapping T : X → Y satisfying the inequalities (3.1) and (3.2). Then 1 n ||T (x) − C(x)||Y = ||T (2n x) − C(2n x)||Y 23β 1 n K ||T (2n x) − f (2n x)||Y + ||f (2n x) − C(2n x)||Y ≤ 3β 2 ∞ 2K 2 X K n+j φ(2n+j x, 0) , ≤ 32β j=0 23β for all x ∈ X . By letting n → ∞ , we immediately have the uniqueness of C : X → Y. Theorem 3.2. Suppose that there exists a mapping φ : X 2 → R+ := [0, ∞) for which a mapping f : X → Y satisfies f (0) = 0 , ||Df (x, y)||Y ≤ φ(x, y)
(3.6)
j
P∞ and the series j=1 23β K φ(2−j x, 2−j y) converges for all x, y ∈ X . Then there exists a unique cubic mapping C : X → Y which satisfies the equation (1.4) and the inequality ∞ 1 X 3β j (3.7) ||f (x) − C(x)||Y ≤ β 2 K φ(2−j x, 0) , 4 j=1 for all x ∈ X . Proof. Letting y = 0 in the inequality (3.6), ||Df (x, 0)||Y = ||4f (2x) − 32f (x)||Y ≤ φ(x, 0) , for all x ∈ X . Now, replacing x by x2 , we have ||4f (x) − 32f ( x2 )||Y ≤ φ( x2 , 0) , that is, ||f (x) − 23 f ( x2 )||Y ≤ 41β φ( x2 , 0) , for all x ∈ X . Let n , m ∈ N with n < m . Similar to the inequality (3.5) of the proof Theorem 3.1, we get ||23n f (
m−1 x x 1 X 3β j 3m ) − 2 f ( )|| ≤ (2 K) φ(2−j x, 0) , Y 2n 2m 4β j=n
for all x ∈ X. The remaining parts of the proof follow from the proof of Theorem 3.1.
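The constructions in Theorems 3.1 and 3.2 — C(x) = lim_n f(2^n x)/2^{3n}, respectively 2^{3n} f(x/2^n) — can be watched converging on a toy example. The sketch below is my own illustration on the real line with the ordinary absolute value (so β = 1) and a bounded disturbance, which corresponds to a constant control function φ; the iterates recover the cubic part of f.

```python
import math

def f(x):
    # a 'perturbed cubic': exact cubic part 2*x**3 plus a bounded disturbance (|sin| <= 1)
    return 2.0 * x ** 3 + math.sin(7.0 * x)

x = 1.3
for n in range(8):
    approx = f(2 ** n * x) / 8 ** n      # the Theorem 3.1 iterates f(2^n x) / 2^{3n}
    print(n, approx)
print("cubic part 2*x**3 =", 2.0 * x ** 3)
```

Since f(2^n x)/8^n = 2x³ + sin(7·2^n x)/8^n, the error decays like 8^{−n}, mirroring the geometric series bound (3.2).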
Now we will recall a subadditive function and then investigate the stability under the condition that the space Y is a (β , p)-Banach space. The basic definitions of subadditive functions follow from the reference [23]. A function φ : A → B having a domain A and a codomain (B, ≤) that are both closed under addition is called (1) a subadditive function if φ(x + y) ≤ φ(x) + φ(y) , (2) a contractively subadditive function if there exists a constant L with 0 < L < 1 such that φ(x + y) ≤ L(φ(x) + φ(y)) , (3) a expansively superadditive function if there exists a constant L with 0 < L < 1 such that φ(x + y) ≥ L1 (φ(x) + φ(y)) , for all x, y ∈ A . Theorem 3.3. Suppose that there exists a mapping φ : X 2 → R+ := [0, ∞) for which a mapping f : X → Y satisfies f (0) = 0 , ||Df (x, y)||Y ≤ φ(x, y)
(3.8)
for all x, y ∈ X and the map φ is contractively subadditive with a constant L such that 21−3β L < 1 . Then there exists a unique cubic mapping C : X → Y which satisfies the equation (1.4) and the inequality ||f (x) − C(x)||Y ≤
(3.9)
4β
φ(x, 0) p , p 23βp − (2L)p
for all x ∈ X . Proof. By the inequalities (3.3) and (3.5) of the proof of Theorem 3.1, we have 1 1 f (2n x) − 3m f (2m x)||pY 23n 2 m−1 X 1 jp 1 ≤ ||f (2j x) − 3 f (2j+1 x)||pY 3β 2 2 j=n ||
≤
m−1 1 X 1 jp φ(2j x, 0)p 32βp j=n 23β
≤
m−1 1 X 1 jp (2L)jp φ(x, 0)p 32βp j=n 23β
=
m−1 φ(x, 0)p X 1−3β jp 2 L , 32βp j=n
that is, (3.10)
m−1 1 m 1 n φ(x, 0)p X 1−3β jp f (2m x)||pY ≤ 2 L , || 3 f (2n x) − 3 2 2 32βp j=n
1 for all x ∈ X , and for all n and m with n < m . Hence { 23n f (2n x)} is a Cauchy sequence in the space Y . Thus we may define a mapping C : X → Y by
C(x) = lim
n→∞
1 f (2n x) , 23n
for all x ∈ X . Now, we will show that the map C : X → Y is a cubic mapping. Indeed, ||DC(x, y)|| =
||Df (2n x, 2n y)||pY n→∞ 23βpn φ(2n x, 2n y)p ≤ lim n→∞ 23βpn ≤ lim φ(x, y)p (21−3β L)pn = 0 , lim
n→∞
for all x ∈ X . Hence the mapping C : X → Y is a cubic mapping. Note that the inequality (3.10) implies the inequality (3.9) by letting n = 0 and taking m → ∞ . Assume that there exists another mapping T : X → Y satisfying (1.4) and (3.9). Then 1 pn 1 n ||T (2n x) − f (2n x)||pY ||T (x) − 3 f (2n x)||pY = 2 23β 1 pn φ(2n x, 0)p ≤ 23β 4βp (23βp − (2L)p ) pn φ(x, 0)p ≤ 21−3β L , 4βp (23βp − (2L)p ) that is, ||T (x) −
1 n n φ(x, 0) n 1−3β p f (2 x)|| ≤ 2 L , Y 3 p βp 2 4 (23βp − (2L)p )
for all x ∈ X . By letting n → ∞ , we immediately have the uniqueness of C : X → Y. Theorem 3.4. Suppose that there exists a mapping φ : X 2 → R+ := [0, ∞) for which a mapping f : X → Y satisfies f (0) = 0 , ||Df (x, y)||Y ≤ φ(x, y)
(3.11)
for all x, y ∈ X and the map φ is expansively superadditive with a constant L such that 23β−1 L < 1 . Then there exists a unique cubic mapping C : X → Y which satisfies the equation (1.4) and the inequality (3.12)
||f (x) − C(x)||Y ≤
4β L
φ(x, 0) p , p 2p − (23β L)p
for all x ∈ X . Proof. By letting y = 0 in the equation (3.11), we have ||4f (2x) − 323 f (x)||Y ≤ φ(x, 0) , and then replacing x by
x 2,
x 1 x ||f (x) − 23 f ( )||Y ≤ β φ( , 0) , 2 4 2 for all x ∈ X . For all n and m with n < m , inductively we have m−1 φ(x, 0)p X 3β−1 jp x x (3.14) ||23n f ( n ) − 23m f ( m )||pY ≤ βp 2 L , 2 2 4 (2L)p j=n
(3.13)
for all x ∈ X . The remains follow from the proof of Theorem 3.3.
Now, we will investigate the stability of the given cubic functional equation (1.4) using the alternative fixed point method. Before proceeding to the proofs, we state the fixed point alternative theorem.

Theorem 3.5 (The alternative of fixed point [22], [30]). Suppose that we are given a complete generalized metric space (Ω, d) and a strictly contractive mapping T : Ω → Ω with Lipschitz constant L. Then for each given x ∈ Ω, either d(T^n x, T^{n+1} x) = ∞ for all n ≥ 0, or there exists a natural number n₀ such that
(1) d(T^n x, T^{n+1} x) < ∞ for all n ≥ n₀;
(2) the sequence (T^n x) is convergent to a fixed point y* of T;
(3) y* is the unique fixed point of T in the set Δ = {y ∈ Ω | d(T^{n₀} x, y) < ∞};
(4) d(y, y*) ≤ (1/(1 − L)) d(y, Ty) for all y ∈ Δ.
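Theorem 3.5 is the engine behind Theorems 3.6 and 3.7 below. For intuition, the following sketch (an added toy example on the real line, not tied to the cubic setting) iterates a strictly contractive map and checks the a priori bound d(y, y*) ≤ d(y, Ty)/(1 − L) from item (4).

```python
def T(y):
    # strictly contractive self-map of R with Lipschitz constant L = 0.5
    return 0.5 * y + 1.0            # unique fixed point y* = 2.0

L = 0.5
y0 = 10.0
y = y0
for _ in range(60):                 # Picard iteration converges to the fixed point
    y = T(y)
y_star = y

bound = abs(y0 - T(y0)) / (1.0 - L)
print(y_star, abs(y0 - y_star), bound, abs(y0 - y_star) <= bound + 1e-12)
```

Here the bound is attained exactly (8 ≤ 8), which is the same mechanism that yields the constants 1/(32^β(1 − L)) and L/(32^β(1 − L)) in Theorems 3.6 and 3.7.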
Theorem 3.6. Let f : X → Y be a function with f(0) = 0 for which there exist a function φ : X² → [0, ∞) and a constant L, 0 < L < 1, satisfying the inequalities

(3.15)  ||Df(x, y)||_Y ≤ φ(x, y),  φ(2x, 2y) ≤ 8^β L φ(x, y),

for all x, y ∈ X. Then there exists a unique cubic function C : X → Y, defined by C(x) = lim_{n→∞} f(2^n x)/2^{3n}, such that

(3.16)  ||f(x) − C(x)||_Y ≤ (1/(32^β(1 − L))) φ(x, 0),
for all x ∈ X . Proof. Consider the set Ω = {g|g : X → Y , g(0) = 0} and introduce the generalized metric on Ω , d(g, h) = inf {K ∈ (0, ∞) | k g(x) − h(x) kY ≤ Kφ(x, 0) , x ∈ X} . It is easy to show that (Ω, d) is complete. Now we define a function T : Ω → Ω by 1 T g(x) = g(2x) , g ∈ Ω 8 for all x ∈ X . Note that for all g, h ∈ Ω , let K ∈ (0, ∞) be an arbitrary constant with d(g, h) ≤ K . Then k g(x) − h(x) kY ≤ Kφ(x, 0) , for all x ∈ X , 1 1 1 ⇒ k g(2x) − h(2x) kY ≤ β Kφ(2x, 0) , for all x ∈ X , 8 8 8 1 1 ⇒ k g(2x) − h(2x) kY ≤ LKφ(x, 0) , for all x ∈ X , 8 8 ⇒ d(T g, T h) ≤ LK . Hence we have that d(T g, T h) ≤ L d(g, h) ,
for all g, h ∈ Ω , that is, T is a strictly self-mapping of Ω with the Lipschitz constant L . By setting y = 0 in the inequality (3.15) and dividing both sides by 32β , then we have 1 1 k f (x) − f (2x) kY ≤ β φ(x, 0) , 8 32 for all x ∈ X , that is, d(T f, f ) ≤ 321β < ∞ . We can apply the fixed point alternative and since limr→∞ d(T r f, C) = 0 , there exists a fixed point C of T in Ω such that f (2x) , 23n for all x ∈ X . Letting x = 2n x and y = 2 y in the equation (3.15) and dividing by 23nβ , k Df (2n x, 2n y) kY k DC(x, y) kY = lim n→∞ 23nβ 1 ≤ lim 3nβ φ(2n x, 2n y) n→∞ 2 ≤ lim Ln φ(x, y) = 0 , (3.17)
C(x) = lim
n→∞ n
n→∞
for all x, y ∈ X ; that is it satisfies the equation (1.4). By Lemma 2.2, the C is cubic. Also, the fixed point alternative guarantees that such a C is the unique function. Again using the fixed point alternative, we have 1 d(f, C) ≤ d(T f, f ) . 1−L Hence we may conclude that 1 1 d(T f, f ) ≤ β , d(f, C) ≤ 1−L 32 (1 − L) which implies the equation (3.16).
Theorem 3.7. Let f : X → Y be a function with f (0) = 0 for which there exists a function φ : X 2 → [0, ∞) such that there exists a constant L , 0 < L < 1 , satisfying the inequalities k Df (x, y) kY ≤ φ(x, y) L φ(x, y) ≤ β φ(2x, 2y) , 8 for all x, y ∈ X . Then there exists a unique cubic function C : X → Y defined by limn→∞ 23n f ( 21n x) = C(x) such that
(3.18)
(3.19)
k f (x) − C(x) kY ≤
L φ(x, 0) , 32β (1 − L)
for all x ∈ X . Proof. We will use the same notation for Ω and d as in the proof of Theorem 3.6 and then we define a function T : Ω → Ω by x T g(x) = 8 g( ) , g ∈ Ω 2 for all x ∈ X . Then x x ||T g(x) − T h(x)||Y = 8β ||g( ) − h( )||Y 2 2 x ≤ 8β Kφ( , 0) ≤ LKφ(x, 0) , 2
for all x ∈ X , that is, d(T g, T h) ≤ LK . Hence d(T g, T h) ≤ Ld(g, h) , for any g, h ∈ Ω . Thus T is a strictly self-mapping of Ω with the Lipschitz constant L . By letting x = x2 and y = 0 in the inequality (3.18) and dividing both sides by 4β , then x 1 x L ||f (x) − 8f ( )||Y ≤ β φ( , 0) ≤ β φ(x, 0) , 2 4 2 32 for all x ∈ X . Hence d(T f, f ) ≤ 32Lβ < ∞ . The remains follows from the proof of Theorem 3.6. Acknowledgement This work was supported by the University of Incheon Research Grant in 2010. References [1] T. Aoki, On the stability of the linear transformation in Banach spaces, J. Math. Soc. Japan 2 (1950) 64–66. [2] J.-H. Bae and W.-G. Park, On the generalized Hyers-Ulam-Rassias stability in Banach modules over a C ∗ −algebra, J. Math. Anal. Appl. 294(2004), 196–205. [3] Y. Benyamini and J. Lindenstrauss, Geometric Nonlinear Functional Analysis, vol. 1, Colloq. Publ., vol. 48, Amer. Math. Soc., Providence, (2000). [4] J. K. Chung and P. K. Sahoo, On the general solution of a quartic functional equation, Bulletin of the Korean Mathematical Society, 40 no. 4 (2003), 565–576. [5] S. Czerwik, On the stability of the quadratic mapping in normed spaces, Abh. Math. Sem. Univ. Hamburg 62 (1992), 59–64. [6] S. Cherwik, Functional Equations and Inequalities in Several Variables, World Scientific Publ.Co., New Jersey, London, Singapore, Hong Kong, (2002). [7] Z. Gajda, On the stability of additive mappings, Internat. J. Math. Math. Sci., 14 (1991), 431–434. [8] D. H. Hyers, On the stability of the linear equation, Proc. Nat. Acad. Sci. U.S.A. 27 (1941), 222–224. [9] D.H.Hyers and Th.M.Rassias, Approximate homomorphisms, Aequationes Mathematicae, 44 (1992),125–153. [10] D.H.Hyers, G.Isac and Th.M.Rassias, Stability of Functional Equations in Several Variables, Birkhauser, Boston, Basel, Berlin, (1998). [11] S.-M.Jung, Hyers-Ulam-Rassias Stability of Functional Equations in Mathematical Analysis, Hadronic Press,Inc., Florida, (2001). [12] K.-W. Jun and H.-M. Kim, The generalized Hyer-Ulam-Rassias stability of a cubic functional equation, J. Math. Anal. Appl. 274 (2002), 867–878. [13] K.-W. Jun and H.-M. Kim, On the stability of Euler-Lagrange type cubic functional equations in quasi-Banach spaces, J. Math. Anal. Appl. 332 (2007), 1335–1350. [14] K.-W. Jun and H.-M. Kim, Solution of Ulam stability problem for approximately biquadratic mappings and functional inequalities, J. Inequal. Appl., in press [15] H. -Y. Chu and D. Kang, On the stability of an n-dimensional cubic functional equation, J. Math. Anal. Appl. 325 (2007), 595–607. [16] H.-M. Kim,On the stability problem for a mixed type of quartic and quadratic functional equation, J. Math. Anal. Appl. 324 (2006), 358–372. [17] Y.-S. Lee and S.-Y. Chung, Stability of quartic functional equations in the spaces of generalized functions, Adv. Diff. Equa. (2009), to appear. [18] J. M. Rassias, On approximation of approximately linear mappings by linear mappings,Bulletin des Sciences Mathematiques, 108 no. 4 (1984), 445–446. [19] J. M. Rassias, On the stability of the non-linear Euler-Lagrange functional equation in real normed linear spaces, Journal of Mathematical and Physical Sciences, 28 no. 5 (1994),231– 235. [20] J. M. Rassias, On the stability of the Euler-Lagrange functional equation, Chinese J. Math., 20 (1992) 185–190.
[21] J. M. Rassias, Solution of the Ulam stability problem for quartic mappings, Glasnik Matematicki Series III, 34 no. 2 (1999) 243–252. [22] B. Margolis and J.B. Diaz, A fixed point theorem of the alternative for contractions on a generalized complete metric space, Bull. Amer. Math. Soc. 126, 74(1968), 305–309. [23] J. M. Rassias, H.-M. Kim Generalized Hyers.Ulam stability for general additive functional equations in quasi-β-normed spaces, J. Math. Anal. Appl. 356 (2009), 302–309. [24] Th. M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297–300. [25] Th.M.Rassias, On the stability of functional equations and a problem of Ulam, Acta Applicandae Mathematicae, 62 (2000),23–130. [26] Th. M. Rassias, On the stability of functional equations in Banach spaces, J. Math. Anal. Appl. 251 (2000), 264–284. ˇ [27] Th. M. Rassias, P. Semrl On the Hyers-Ulam stability of linear mappings, J. Math. Anal. Appl. 173 (1993), 325–338. [28] Th. M. Rassias, K. Shibata, Variational problem of some quadratic functions in complex analysis, J. Math. Anal. Appl. 228 (1998), 234–253. [29] S. Rolewicz, Metric Linear Spaces, Reidel/PWN-Polish Sci. Publ., Dordrecht, (1984). [30] I.A. Rus, Principles and Appications of Fixed Point Theory, Ed. Dacia, Cluj-Napoca, 1979 (in Romanian). [31] F. Skof, Propriet` a locali e approssimazione di operatori,Rend. Semin. Mat. Fis. Milano 53 (1983) 113–129. [32] S. M. Ulam, Problems in Morden Mathematics, Wiley, New York (1960). 1 Faculty of Liberal Education, University of Incheon, 12-1, Songdo, Yeonsu, Incheon, South Korea 406-772 E-mail address: [email protected] (I. G. Cho) 2 Department of Mathematical Education, Dankook University, 126, Jukjeon, Suji, Yongin, Gyeonggi, South Korea 448-701 , Korea E-mail address: [email protected] (D. Kang) E-mail address: [email protected] (H. Koh)
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.1, 143-164 , 2012, COPYRIGHT 2012 EUDOXUS143 PRESS, LLC
A New Representation For The Fuzzy Systems In Terms Of Some Additive And Multiplicative Subsystem Inferences Iuliana F Iatan Department of Mathematics and Informatics, Technical University of Civil Engineering, Bucharest, Romania, e-mail [email protected] Abstract A new representation for fuzzy systems in terms of additive and multiplicative subsystem inferences of single variable is proposed. This representation enables an approximate functional characterization of the inferred output. The form of the approximating function is dictated by the choice of polynomial, sinusoidal, or other designs of subsystem inferences. The paper is organized as follows. The first section gives a brief review on product sum fuzzy inference and introduces the concepts of additive and multiplicative decomposable systems. The second section presents the proposed subsystem inference representation. The third section focuses on subsystem inference of single variable.The sections 4 to 6 discuss the cases of polynomial, sinusoidal, orthonormal and other designs of subsystem inferences, and their applications. The section 7 presents some conclusions. The paper one finishes with its references. AMS Subject Classification: 62-xx, 62Lxx, 62L10 Keywords: subsystem inference representation, fuzzy systems, membership function, approximating function, universal approximators.
1  Introduction
Consider a fuzzy system of n input variables a, b, . . . , y, z with input membership functions Ai , i = 1, ma , Bj , j = 1, mb , . . . , Yh , h = 1, my , Zr , r = 1, mz and the ma · mb · . . . · my · mz fuzzy rules: 0,
u(0) = 1,  u′(0) = 2. The exact solution of this initial value problem for α = 2, the ODE case, is

u(x) = (9/4) ( (3/2) e^{0.5x} + (1/6) e^{−0.5x} − 1 )².
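The closed form above can be checked against the uExact column of Table 3. The short verification sketch below is my own addition and not part of the original experiments.

```python
import math

def u_exact(x):
    # u(x) = (9/4) * ( (3/2) e^{0.5x} + (1/6) e^{-0.5x} - 1 )^2
    return 2.25 * (1.5 * math.exp(0.5 * x) + math.exp(-0.5 * x) / 6.0 - 1.0) ** 2

for x in (0.0, 0.5, 1.0):
    print(x, round(u_exact(x), 8))   # 1.0, 2.50828744 and 5.57552764, matching Table 3
```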
The numerical results of the solution for n = 10 and α = 1.5, 1.75, 2, compared with the HPM, are given in Table 3. Also, the nonlinear term in the problem is expanded as the following series:

√u ≈ 1 + (1/2)(u − 1) − (1/8)(u − 1)² + (1/16)(u − 1)³ − (5/128)(u − 1)⁴ + ··· .

Table 3: Comparison of the solutions of HPM and OTM with the exact solution for different α and x of experiment 6.3
x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
α = 1.5 uHP M uOT M 1.00000000 1.00000000 1.28245453 1.25127654 1.65028248 1.59134824 2.09444288 2.01182972 2.61939160 2.51346262 3.23340806 3.09921740 3.94743003 3.76140179 4.77490977 4.44407982 5.73195406 4.94458363 6.83756778 4.69087327 8.11394079 2.29381291
α = 1.75 uHP M uOT M 1.00000000 1.00000000 1.23780724 1.22934593 1.53420555 1.51587669 1.88844768 1.86064977 2.30514239 2.26766722 2.79109495 2.74266841 3.35490799 3.29186910 4.00703217 3.91920598 4.75997836 4.61955254 5.62861104 5.36359893 6.63049227 6.06756297
uHP M 1.00000000 1.21697814 1.47098992 1.76704394 2.11071067 2.50817057 2.96626225 3.49253054 4.09527467 4.78359634 5.56744792
α=2 uOT M 1.00000000 1.21697814 1.47099037 1.76704923 2.11074085 2.50828744 2.96661653 3.49343764 4.09732725 4.78782294 5.57552751
uExact 1.00000000 1.21697814 1.47099037 1.76704923 2.11074085 2.50828744 2.96661653 3.49343764 4.09732726 4.78782298 5.57552764
Since the exact solution for α = 1.5 and α = 1.75 does not exist, we are in doubt as to which of the HPM or OTM solutions is more accurate, but this doubt is soon removed by examining the results obtained for α = 2. There one can easily see that the OTM is more accurate and agrees well with the exact solution. Thus we prefer using OTM instead of HPM to solve nonlinear FODEs.

Experiment 6.4 Consider the following linear time-fractional convective-diffusion equation,

∂^α u/∂t^α + x ∂u/∂x + ∂²u/∂x² = 2(1 + t + x²),  t > 0, x ∈ R, 0 < α ≤ 1,
with the initial condition: u(x, 0) = x2 . We have solved this problem for n = 8 and α = 0.75, 0.85, 0.95 and compared it with HPM [22] to show the efficiency of the OTM. The results are given in Table 4.
Table 4: Comparison of the HPM and OTM for different t and x of experiment 6.4
t/x t=0.1 0.00 0.25 0.50 0.75 1.00 t=0.2 0.00 0.25 0.50 0.75 1.00 t=0.3 0.00 0.25 0.50 0.75 1.00 t=0.4 0.00 0.25 0.50 0.75 1.00 t=0.5 0.00 0.25 0.50 0.75 1.00
α = 0.75
uHP M α = 0.85
α = 0.95
α = 0.75
uOT M α = 0.85
α = 0.95
0.017325 0.079825 0.267326 0.579826 1.017326
0.014486 0.076986 0.264486 0.576986 1.014486
0.011452 0.073952 0.261452 0.573952 1.011453
0.025199 0.087699 0.275199 0.587699 1.025199
0.017920 0.080420 0.267920 0.580420 1.017920
0.012299 0.074799 0.026229 0.574799 1.012299
0.067432 0.129932 0.317432 0.629932 1.067432
0.056444 0.118944 0.306444 0.618944 1.056444
0.045243 0.107743 0.295243 0.607743 1.045243
0.077557 0.140057 0.327557 0.640057 1.077558
0.060178 0.122678 0.310178 0.622678 1.060179
0.046034 0.108534 0.296034 0.608534 1.046035
0.142801 0.205301 0.392801 0.705301 1.142801
0.121285 0.183785 0.371285 0.683785 1.121285
0.099954 0.162454 0.349954 0.662454 1.099955
0.154136 0.216636 0.404136 0.716636 1.154136
0.125161 0.187661 0.375161 0.687661 1.125162
0.100719 0.163219 0.350719 0.663219 1.100719
0.241393 0.303893 0.491393 0.803893 1.241394
0.207880 0.270380 0.457880 0.770380 1.207880
0.175242 0.237742 0.425242 0.737742 1.175243
0.252852 0.315352 0.502852 0.815352 1.252852
0.211689 0.274189 0.461689 0.774189 1.211688
0.175989 0.238489 0.425989 0.738489 1.175989
0.361591 0.424091 0.611591 0.924091 1.361591
0.315341 0.377841 0.565341 0.877841 1.315341
0.270835 0.333335 0.520835 0.833335 1.270835
0.372193 0.434693 0.622193 0.934693 1.372194
0.318879 0.381379 0.568879 0.881379 1.318879
0.271561 0.334061 0.521561 0.834061 1.271561
The results in Table 4 demonstrate that the approximate solution obtained using the OTM is in good agreement with that obtained using the HPM for all values of t and x.

Experiment 6.5 Consider the following nonlinear space-fractional Fisher's equation,

∂u/∂t − ∂^{1.5}u/∂x^{1.5} − u(x, t)(1 − u(x, t)) = x²,  x > 0,
with the initial condition: u(x, 0) = x. We have obtained the approximate solution for n = 7 with different t and x. Comparison of the OTM with VIM [35] and generalized differential transform method (GDTM) [36] are given in Table 5.
Table 5: Comparison of the solutions of GDTM, VIM and OTM for different t and x of experiment 6.5
x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
uGDT M 0.0 0.110197 0.220333 0.330252 0.439954 0.549439 0.658706 0.767755 0.876585 0.985196 1.093587 uGDT M 0.0 0.127035 0.256895 0.384480 0.509772 0.632755 0.753415 0.871740 0.987717 1.101336 1.212587
t=0.1 uV IM 0.0 0.110123 0.220243 0.330158 0.439870 0.549383 0.658701 0.767825 0.876761 0.985512 1.094081 t=0.3 uV IM 0.0 0.126129 0.255777 0.383473 0.509305 0.633359 0.755727 0.876498 0.995769 1.113633 1.230189
uOT M 0.0 0.110123 0.220239 0.330153 0.439869 0.549389 0.658716 0.767854 0.876806 0.985576 1.094166
uGDT M 0.0 0.119817 0.240073 0.359392 0.477771 0.595204 0.711688 0.827220 0.941795 1.055411 1.168066
uOT M 0.0 0.125536 0.255291 0.383307 0.509667 0.634452 0.757743 0.879621 1.000160 1.119438 1.237524
uGDT M 0.0 0.129536 0.267899 0.401893 0.531477 0.656611 0.777260 0.893392 1.004976 1.111987 1.214400
t=0.2 uV IM 0.0 0.119386 0.239548 0.588663 0.477364 0.595069 0.712008 0.828207 0.943694 1.058498 1.172648 t=0.4 uV IM 0.0 0.128692 0.266790 0.401330 0.532516 0.660556 0.785664 0.908056 1.027956 1.145590 1.261189
uOT M 0.0 0.119302 0.239453 0.358809 0.477393 0.595229 0.712343 0.828759 0.944504 1.059601 1.174077 uOT M 0.0 0.126286 0.265133 0.401004 0.534090 0.664574 0.792632 0.918431 1.042129 1.163878 1.283822
From the above comparison, a good agreement of OTM with the other methods is observable.
7. Conclusion Nowadays, most of the real physical world problems can be best modeled by the FODEs and FPDEs. Besides modeling, the solution techniques and their use are most important to obtain high accurate solutions. The main idea of the proposed method (OTM) is to convert the problem including linear and nonlinear terms to an algebraic system to simplify the computations. Several experiments of FODEs and FPDEs were numerically solved and compared with the most powerful methods such as HPM, VIM, FDM, GDTM and ADM. From the results, it can be easily seen that OTM obtains results as accurate as possible. So, OTM is a powerful tool which enables us to handle even nonlinear problems. It is seen that, for the case where the exact solution is known, the numerical results demonstrate the high accuracy of the scheme. These results supported the confidence in applying this method to FODEs and FPDEs in which the theoretical solution is not known.
References [1] E.L. Ortiz and H. Samara, An operational approach to the Tau method for the numerical solution of nonlinear differential equations, Computing 27 (1981) 15-25. [2] C. Lanczos, Trigonometric interpolation of empirical and analytical functions, J. Math. Phys. 17 (1938) 123-199. [3] K.M. Liu and E.L. Ortiz, Eigenvalue problems for singularly perturbed differential equations, in: J.J.H. Miller (Ed.), Proceedings of the BAIL II Conference, Boole Press, Dublin, (1982) 324-329. [4] K.M. Liu and E.L. Ortiz, Approximation of eigenvalues defined by ordinary differential equations with the Tau method, in: B. Ka gestrm, A. Ruhe (Eds.), Matrix Pencils, Springer, Berlin, (1983) 90-102. [5] K.M. Liu and E.L. Ortiz, Tau method approximation of differential eigenvalue problems where the spectral parameter enters nonlinearly, J. Comput. Phys. 72 (1987) 299-310. [6] K.M. Liu and E.L. Ortiz, Numerical solution of ordinary and partial functional-differential eigenvalue problems with the Tau method, Computing 41 (1989) 205-217. [7] E.L. Ortiz and H. Samara, Numerical solution of differential eigenvalue problems with an operational approach to the Tau method, Computing 31 (1983) 95-103. [8] K.M. Liu and E.L. Ortiz, Numerical solution of eigenvalue problems for partial differential equations with the Tau-lines method, Comp. Math. Appl. B 12 (5/6) (1986) 1153-1168. [9] K.M. Liu, E.L. Ortiz and K.S. Pun, Numerical solution of Steklov’s partial differential equation eigenvalue problem, in: J.J.H. Miller (Ed.), Computational and Asymptotic Methods for Boundary and Interior Layers III, Boole Press, Dublin, (1984) 244-249. [10] E.L. Ortiz and K.S. Pun, Numerical solution of nonlinear partial differential equations with Tau method, J. Comp. Appl. Math. 12/13 (1985) 511-516. [11] E.L. Ortiz and K.S. Pun, A bi-dimensional Tau-elements method for the numerical solution of nonlinear partial differential equations with an application to Burgers’ equation, Comp. Math. Appl. B 12 (5/6) (1986) 1225-1240. [12] E.L. Ortiz and H. Samara, Numerical solution of partial differential equations with variable coefficients with an operational approach to the Tau method, Comp. Math. Appl. 10 (1) (1984) 5-13. [13] M.K. EL-Daou and H.G. Khajah, Iterated solutions of linear operator equations with the Tau method, Math. Comput. 66 (217) (1997) 207-213. [14] S.M. Hosseini and S. Shahmorad, Numerical solution of a class of integro-differential equations by the Tau method with an error estimation, Appl. Math. Comput. 136 (2003) 559-570. [15] I. Podlubny, Fractional Differential Equations, Academic Press, New York, 1999.
[16] F. Mainardi, Fractals and Fractional Calculus Continuum Mechanics, Springer Verlag, (1997) 291-348. [17] K. Diethelm and A.D. Freed, On the solution of nonlinear fractional order differential equations used in the modelling of viscoplasticity, in: Scientific Computing in Chemical Engineering II: Computational Fluid Dynamics, Reaction Engineering and Molecular Properties, Springer Verlag, Heidelberg, (1999) 217-224. [18] W.M. Ahmad and R. El-Khazali, Fractional-order dynamical models of love. Chaos, Solitons & Fractals, 33 (2007) 1367-1375. [19] F. Huang and F. Liu, The time-fractional diffusion equation and fractional advectiondispersion equation, ANZIAM J. 46 (2005) 1-14. [20] H. Beyer and S. Kempfle, Definition of physically consistent damping laws with fractional derivatives, Z. Angew. Math. Mech. 75 (1995) 623-635. [21] K.S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley, New York, 1993. [22] S. Momani and Z. Odibat, Comparison between the homotopy perturbation method and the variational iteration method for linear fractional partial differential equations, Computers and Mathematics with Applications 54 (2007) 910-919. [23] H. Jafari and S. Seifi, Homotopy analysis method for solving linear and nonlinear fractional diffusion-wave equation, Commun. Nonlinear Sci. Numer. Simulat. 14 (2009) 2006-2012 [24] A.M.A. El-Sayed and M. Gaber, The Adomian decomposition method for solving partial differential equations of fractal order in finite domains, Physics Letters A 359 (2006) 175182. [25] H. Jafari and V. Daftardar-Gejj, Solving a system of nonlinear fractional differential equations using Adomian decomposition, Journal of Computational and Applied Mathematics 196 (2006) 644-651. [26] S. Momani and Z. Odibat, Analytical solution of a time-fractional Navier-Stokes equation by Adomian decomposition method, Appl. Math. Comput. 177 (2) (2006) 488-494. [27] S. Momani and Z. Odibat, Numerical comparison of methods for solving linear differential equations of fractional order, Chaos, Solitons & Fractals, 31 (2007) 1248-1255. [28] A. Ghorbani, Toward a new analytical method for solving nonlinear fractional differential equations,Comput. Methods Appl. Mech. Engrg. 197 (2008) 4173-4179. [29] G. Jumarie, Fractional Brownian motions via random walk in the complex plane and via fractional derivative. Comparison and further results on their Fokker-Planck equations. Chaos, Solitons & Fractals, 22 (2004) 907-25. [30] S. Samko, A. Kilbas and O. Marichev, Fractional Integrals and Derivatives, Gordon and Breach, Yverdon, 1993. [31] S.M. Hosseini and S. Shahmorad, Tau numerical solution of Fredholm integro-differential equations with arbitrary polynomial bases, J. Appl. Math. Modell. 27 (2003) 145-154. 20
[32] J.P. Boyd, Chebyshev and Fourier Spectral Methods, Springer-Verlag, New York, (1989). [33] A. Arikoglu and I. Ozkol, Solution of fractional differential equations by using differential transform method, Chaos, Solitons and Fractals 34 (2007) 1473-1481. [34] O. Abdulaziz, I. Hashim and S. Momani, Application of homotopy perturbation method to fractional IVPs, Journal of Computational and Applied Mathematics 216 (2008) 574 - 584. [35] Z. Odibat and S. Momani, A reliable treatment of homotopy perturbation method for Klein-Gordon equations, Phys. Lett. A 365 (2007) 351-357. [36] S. Momani and Z. Odibat, A novel method for nonlinear fractional partial differential equations: Combination of DTM and generalized Taylor’s formula, Journal of Computational and Applied Mathematics 220 (2008) 85-95.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 630-647 , 2012, COPYRIGHT 2012 EUDOXUS630 PRESS, LLC
Generalized induced linguistic harmonic mean operators based approach to multiple attribute group decision making Jin Han Park, Min Gwi Gwak Department of Applied Mathematics, Pukyong National University, Pusan 608-737, South Korea Young Chel Kwun∗ Department of Mathematics, Don-A University, Busan 608-714, South Korea February 26, 2011 Abstract Two generalized induced linguistic aggregation operator called the generalized induced linguistic ordered weighted harmonic mean (GILOWHM) operator and generalized induced uncertain linguistic ordered weighted harmonic mean (GIULOWHM) operator is defined. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. Based on the GILOWHM and GIULOWHM operators respectively, we develop two procedures to solve the multiple attribute group decision making problems where all attribute values are expressed in linguistic variables or uncertain linguistic variables. Finally, an example is used to illustrate the developed procedures. Keywords: Group decision making, linguistic variable, generalized induced linguistic ordered weighted harmonic mean (GILOWHM) operator, generalized induced uncertain linguistic ordered weighted harmonic mean (GIULOWHM) operator, operational laws. 2000 AMS Subject Classifications: 90B50, 91B06, 90C29
1  Introduction
Information aggregation is essential process of gathering relevant information from multiple sources. Many techniques have been developed to aggregate data ∗ Corresponding
author. E-mail: [email protected](J.H. Park), [email protected](M.G. Gwak), [email protected](Y.C. Kwun)
information [12,13,20-23,27]. Yager and Filev [26] introduced an induced aggregation operator called the induced ordered weighted averaging (IOWA) operator, which takes as its argument pairs, called OWA pairs, in which one component is used to induce an ordering over the second components which are exact numerical values and then aggregated. Later, some new induced aggregation operators have been developed, including the induced ordered weighted geometric (IOWG) operator [21], induced fuzzy integral aggregation (IFIA) operator [24] and induced Choquet ordered averaging (ICOA) operator [25]. Xu and Da [21] introduced two more general aggregation techniques called generalized IOWA (GIOWA) and generalized IOWG (GIOWG) operators, and proved that the OWA and IOWA operators are the special cases of the GIOWA operator, and that the OWG and IOWG operators are the special cases of the GIOWG operator. All this operators have been used in situations in which the input arguments are numerical values. In some situations, however, the input arguments take the form of linguistic variables or uncertain linguistic variables rather than numerical ones because of time pressure, lack of knowledge, and the decision maker’s limit attention and information processing capabilities [1-10,14-19,28-30]. Recently, Xu [19] developed various generalized induced linguistic aggregation operators, such as the generalized induced linguistic ordered weighted averaging (GILOWA) and generalized induced linguistic ordered weighted geometric (GILOWG) operator, both of which can be used to deal with the linguistic information, and generalized induced uncertain linguistic ordered weighted averaging (GIULOWA) operator and generalized induced uncertain linguistic ordered weighted geometric (GIULOWG) operator, both of which can be used to deal with the uncertain linguistic information. In this paper, we shall develop two new aggregation operators called generalized induced linguistic ordered weighted harmonic mean (GILOWHM) operator and generalized induced uncertain linguistic ordered weighted harmonic mean (GIULOWHM) operator, which can be used to deal with linguistic information or uncertain linguistic information, and study some of their desirable properties. Each object processed by these operators consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables or uncertain linguistic variables and then aggregated. It is shown that the induced linguistic ordered weighted harmonic mean (ILOWHM) [11] operator and linguistic ordered weighted harmonic mean (LOWHM) [11] operator are the special cases of the GILOWHM operator and that the induced uncertain linguistic ordered weighted harmonic mean (IULOWHM) operator and uncertain linguistic ordered weighted harmonic mean (ULOWHM) operator are the special cases of the GIULOWHM operator. Two procedures based on the GILOWHM and GIULOWHM operators respectively, are developed to solve the multiple attribute decision making (MADM) problems where all decision information about attribute values take the forms of linguistic variables or uncertain linguistic variables. Finally, an illustrative example is pointed out.
2
Preliminaries
The linguistic approach is approximate technique, which represents qualitative aspects as linguistic values by means of linguistic variables [28-30]. 2
631
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
632
Let S = {si : i = 1, 2, . . . , t} be a finite and totally ordered discrete label set, whose cardinality value is odd. Any label, si , represents a possible value for a linguistic variable, and it must have the following characteristics [6]: (1) the set is ordered: si ≥ sj if i ≥ j, and (2) there is the negation operator: neg(si ) = st−i+1 . We call this linguistic label set S the additive linguistic scale. For example, S can be defined as: S = {s1 = extremely poor, s2 = very poor, s3 = poor, s4 = slightly poor, s5 = fair, s6 = slightly good, s7 = good, s8 = very good, s9 = extremely good}. To preserve all the given information, we extend the discrete linguistic label set S to a continuous linguistic label set S¯ = {sα : α ∈ [1, q]}, where q (q > t) is a sufficiently large positive integer. If sα ∈ S, then Xu [16] called sα the original linguistic label, otherwise, Xu [16] called sα the virtual linguistic label. The decision maker, in general, uses the original linguistic labels to evaluate alternatives, and the virtual linguistic labels can only appear in operations [16]. ¯ sα and sβ are the lower and upper Let s˜ = [sα , sβ ], where sα , sβ ∈ S, limits, respectively, then Xu [15] called s˜ the uncertain linguistic variables. For convenience, we let S˜ be the set of all the uncertain linguistic variables. ¯ and any three uncerConsider any three linguistic variables sλ , sλ1 , sλ2 ∈ S, tain linguistic variables s˜ = [sα , sβ ], s˜1 = [sα1 , sβ1 ] and s˜2 = [sα2 , sβ2 ], and let µ ∈ [0, 1], then we define their operations as follows: (1) sλ1 ⊕ sλ2 = sλ1 +λ2 ; (2) µsλ = sµλ ; (3) s1λ = s λ1 ; (4) s˜1 ⊕ s˜2 = [sα1 , sβ1 ] ⊕ [sα2 , sβ2 ] = [sα1 ⊕ sα2 , sβ1 ⊕ sβ2 ]; (5) λ˜ s = λ[sα , sβ ] = [λsα , λsβ ]; (6) 1s˜ = [sα1,sβ ] = [ s1β , s1α ]. In order to compare uncertain linguistic variables, Xu [17] provided the following definition: Definition 2.1 Let s˜1 = [sα1 , sβ1 ] and s˜2 = [sα2 , sβ2 ] be two uncertain linguistic variables, and let len(˜ s1 ) = β1 − α1 and len(˜ s2 ) = β2 − α2 , then the degree of possibility of s˜1 ≥ s˜2 is defined as p(˜ s1 ≥ s˜2 ) =
max{0, len(˜ s1 ) + len(˜ s2 ) − max(β2 − α1 , 0)} . len(˜ s1 ) + len(˜ s2 )
From Definition 2.1, we can easily get the following results: (1) 0 ≤ p(˜ s1 ≥ s˜2 ) ≤ 1, 0 ≤ p(˜ s2 ≥ s˜1 ) ≤ 1; (2) p(˜ s1 ≥ s˜2 ) + p(˜ s2 ≥ s˜1 ) = 1. Especially, p(˜ s1 ≥ s˜1 ) = p(˜ s2 ≥ s˜2 ) = 12 .
3
(1)
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
3 3.1
633
Generalized induced linguistic aggregation operators The GILOWHM and GIULOWHM operators
¯ if Definition 3.1 [11] Let LWHM : S¯n → S, LWHMw (sα1 , sα2 , . . . , sαn ) =
w1 sα 1
⊕
w2 sα 2
1 ⊕ ··· ⊕
,
wn s αn
(2)
T where Pn w = (w1 , w2 , .¯. . , wn ) is the weight vector of the sαi with wi ∈ [0, 1], w = 1, s ∈ S, then LWHM is called the linguistic weighted harmonic αi i=1 i mean (LWHM) operator. Especially, if wi = 1 and wj = 0, j 6= i, then LWHM(sα1 , sα2 , . . . , sαn ) = sαi ; if w = ( n1 , n1 , . . . , n1 )T , then LWHM operator is called the linguistic harmonic mean (LHM) operator, i.e.,
n LHM(sα1 , sα2 , . . . , sαn ) = P s n
.
(3)
1 i=1 αi
The fundamental aspect of the LWHM operator is that it compute an aggregated value taking into the importance of the sources of information. Definition 3.2 [11] A LOWHM operator of dimension n is a mapping LOWHM : ¯ which has an associated vector w = (w1 , w2 , . . . , wn )T with wi ∈ [0, 1] S¯n → PS, n and i=1 wi = 1, such that LOWHMw (sα1 , sα2 , . . . , sαn ) =
w1 sβ1
⊕
w2 sβ2
1 ⊕ ··· ⊕
wn sβn
,
(4)
where sβi is the ith largest of the sαi . The weighted vector w = (w1 , w2 , . . . , wn )T can be determined by using some weight determining methods like the normal distribution based method. Definition 3.3 An ULOWHM operator of dimension n is a mapping ULOWHM : ˜ which has an associated vector w = (w1 , w2 , . . . , wn )T with wi ∈ [0, 1] S˜n → PS, n and i=1 wi = 1, such that ULOWHMw (˜ s1 , s˜2 , . . . , s˜n ) =
w1 s˜β1
⊕
w2 s˜β2
1 ⊕ ··· ⊕
wn s˜βn
,
(5)
where s˜βi is the ith largest of the s˜i . The fundamental aspect of the LOWHM and ULOWHM operators is reordering of the arguments to be aggregated, based on their values. When using the ULOWHM operator, we need to rank the uncertain linguistic arguments s˜i (i = 1, 2, . . . , n). To do so, we first compare each argument s˜i with all arguments s˜j (j = 1, 2, . . . , n) by using (1), and let pij = p(˜ si ≥ s˜j ). Then we can contract a complementary matrix P = (pij )n×n where: pij ≥ 0, pij + pji = 1, pii = 4
1 , i, j = 1, 2, . . . , n. 2
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
634
Pn Summing all elements in each line of matrix P, we have pi = j=1 pij , i = 1, 2, . . . , n. Then, in accordance with the values of pi (i = 1, 2, . . . , n), we can rank the arguments s˜i (i = 1, 2, . . . , n) in descending order in accordance with the values of pi (i = 1, 2, . . . , n). Definition 3.4 [11] An induced LOWHM (ILOWHM) operator is defined as follows: ILOWHMw (hu1 , sα1 i, hu2 , sα2 i, . . . , hun , sαn i) =
w1 sβ1
⊕
w2 sβ2
1 ⊕ ··· ⊕
wn sβn
(6)
where w =P (w1 , w2 , . . . , wn )T is a weighting vector, such that wi ∈ [0, 1], i = n 1, 2, . . . , n, i=1 wi = 1, sβi is the sαi value of the LOWHM pair hui , sαi i having the ith largest ui , and ui in hui , sαi i is referred to as the order inducing variable and sαi as the linguistic argument variable. Especially, if w = ( n1 , n1 , . . . , n1 )T , then ILOWHM is reduced to the LHM operator; if ui = sαi , for all i, then ILOWHM is reduced to the LOWHM operator; if ui =No. i, for all i, where No. i is the ordered position of the ai , then ILOWHM is the LHM operator. However, if there is a tie between hui , sαi i, huj , sαj i with respect to orderinducing variables, in this case, we can follow the policy presented by Yager and Filov [26] - to replace the arguments of the tied objects by the mean of the arguments of the tied objects (i.e., we replace the argument component of each of hui , sαi i and huj , sαj i by their average (sαi ⊕ sαj )/2). If k items are tied, we replace these by k replicas of their average. In the following, we shall give example to specify the special cases with respect to the inducing variables. Example 3.5 Consider the following collection of LOWHM pairs: hs4 , s3 i, hs6 , s7 i, hs3 , s1 i, hs5 , s4 i. Performing the ordering the LOWHM pairs with respect to the first component, we have hs6 , s7 i, hs5 , s4 i, hs4 , s3 i, hs3 , s1 i. This ordering induces the ordered linguistic arguments sβ1 = s7 , sβ2 = s4 , sβ3 = s3 , sβ4 = s1 . If the weighting vector w = (0.3, 0.1, 0.4, 0.2)T , then we get an aggregated value: ILOWHMw (hs4 , s3 i, hs5 , s7 i, hs3 , s1 i, hs6 , s4 i) 1 = 0.3 0.1 0.4 0.2 = s2.49 . s7 ⊕ s 4 ⊕ s 3 ⊕ s1 Definition 3.6 An induced uncertain LOWHM (IULOWHM) operator is defined as follows: IULOWHMw (hu1 , s˜1 i, hu2 , s˜2 i, . . . , hun , s˜n i) =
5
w1 s˜β1
⊕
w2 s˜β2
1 ⊕ ··· ⊕
wn s˜βn
(7)
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
635
T where w = (wP is a weighting vector, such that wi ∈ [0, 1], 1 , w2 , . . . , wn ) n i = 1, 2, . . . , n, i=1 wi = 1, s˜βi is the s˜i value of the ULOWHM pair hui , s˜i i having the ith largest ui , and ui in hui , s˜i i is referred to as the order inducing variable and s˜i as the uncertain linguistic argument variable. Especially, if w = ( n1 , n1 , . . . , n1 )T , then IULOWHM is reduced to the ULHM operator; if ui = s˜i , for all i, then IULOWHM is reduced to the ULOWHM operator; if ui =No. i, for all i, where No. i is the ordered position of the ai , then IULOWHM is the ULHM operator.
However, if there is a tie between hui , s˜i i, huj , s˜j i with respect to orderinducing variables. In this case, we can replace the argument component of each of hui , s˜i i and huj , s˜j i by their average (˜ si ⊕ s˜j )/2). If k items are tied, we replace these by k replicas of their average. Example 3.7 Consider the following collection of ULOWHM pairs: h0.5, [s3 , s4 ]i, h0.3, [s6 , s7 ]i, h0.7, [s2 , s3 ]i, h0.4, [s2 , s4 ]i. Performing the ordering the ULOWHM pairs with respect to the first component, we have h0.7, [s2 , s3 ]i, h0.5, [s3 , s4 ]i, h0.4, [s2 , s4 ]i, h0.3, [s6 , s7 ]i. This ordering induces the ordered linguistic arguments s˜β1 = [s2 , s3 ], s˜β2 = [s3 , s4 ], s˜β3 = [s2 , s4 ], s˜β4 = [s6 , s7 ]. If the weighting vector w = (0.3, 0.1, 0.4, 0.2)T , then we get an aggregated value: ILOWHMw (h0.5, [s3 , s4 ]i, h0.3, [s6 , s7 ]i, h0.7, [s2 , s3 ]i, h0.4, [s2 , s4 ]i) = [s2.40 , s3.94 ]. An important feature of the ILOWHM operator is that the argument ordering process is guided by a variable called the order inducing value. This operator essentially aggregate objects, which are pairs, and provide a very general family of aggregations operators. In some situations, however, when we need to provide more information about the objects, i.e. each object may consist of three components, a direct locator, an indirect locator and a prescribed value, it is unsuitable to use this induced aggregation operator as an aggregation tool. In following we shall present some more general linguistic aggregation technique. Definition 3.8 A generalized induced LOWHM (GILOWHM) operator is given by GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) 1 = w1 w2 ⊕ ⊕ · · · ⊕ swβn sβ sβ 1
(8)
n
2
where = (w1 , w2 , . . . , wn )T is the associated weighting vector with wi ∈ [0, 1] Pw n and i=1 wi = 1, the object hvi , ui , sαi i consists of three components, where the first component vi represents the importance degree or character of second 6
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
636
component ui , and the second component ui is used to induce an ordering, through the first component vi , over the third component sαi which are then aggregated. Here, sβj is the sαi value of the object having the jth largest vi . In discussing the object hvi , ui , sαi i, because of its role we shall refer to the vi as the direct order inducing variable, the ui as the indirect inducing variable, and sαi as the linguistic argument variable. Especially, if vi = ui , for all i, then the GILOWHM operator is reduced to the ILOWHM operator; if vi = sαi , for all i, then the GILOWHM operator is reduced to the LOWHM operator; if vi = No. i, for all i, where No. i is the ordered position of the sαi , then the GILOWHM operator is reduced to the LWHM operator; if w = ( n1 , n1 , . . . , n1 )T , then the GILOWHM operator is reduced to the LHM operator. Example 3.9 Consider the collection of the objects hNo. 3, Kim, s1 i, hNo. 1, Park, s7 i, hNo. 2, Lee, s2 i, hNo. 4, Jung, s5 i. By the first component, we get the ordered objects hNo. 1, Park, s7 i, hNo. 2, Lee, s2 i, hNo. 3, Kim, s1 i, hNo. 4, Jung, s5 i. The ordering induces the ordered arguments sβ1 = s7 , sβ2 = s2 , sβ3 = s1 , sβ4 = s5 . If the weighting vector for this aggregation is w = (0.3, 0.1, 0.2, 0.4)T , then we get GILOWHMw (hNo.3, Kim, s1 i, hNo.1, Park, s7 i, hNo.2, Lee, s2 i, hNo.4, Jung, s5 i) 1 = 0.3 0.1 0.2 0.4 = s2.70 . ⊕ ⊕ s7 s2 s1 ⊕ s5 However, if we replace the objects in Example 3.9 with hNo. 3, Kim, s1 i, hNo. 1, Park, s7 i, hNo. 2, Lee, s2 i, hNo. 3, Jung, s5 i, then there is a tie between hNo. 3, Kim, s1 i and hNo. 3, Jung, s5 i with respect to order direct inducing variable, in this case, we can follow the policy: we replace the linguistic argument component of each of hNo. 3, Kim, s1 i and hNo. 3, Jung, s5 i by their average (s1 ⊕ s5 )/2 = s3 . This substitution gives us ordered arguments sβ1 = s7 , sβ2 = s2 , sβ3 = s3 , sβ4 = s3 . Thus GILOWHMw (hNo.3, Kim, s1 i, hNo.1, Park, s7 i, hNo.2, Lee, s2 i, hNo.3, Jung, s5 i) 1 = 0.3 0.1 0.2 0.4 = s3.44 . ⊕ ⊕ s7 s2 s3 ⊕ s3 If we replace (8) with GIULOWHMw (hv1 , u1 , s˜1 i, hv2 , u2 , s˜2 i, . . . , hvn , un , s˜n i) 1 = w1 w2 ⊕ ⊕ · · · ⊕ s˜wβn s˜β s˜β 1
n
2
7
(9)
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
then by Definition 3.8, we get a GIULOWHM operator. Especially, if vi = ui , for all i, then the GIULOWHM operator is reduced to the IULOWGM operator; if vi = s˜i , for all i, then the GIULOWHM operator is reduced to the ULOWHM operator; if vi = No. i, for all i, where No. i is ordered position of the s˜i , then the GIULOWHM operator is reduced to the ULWHM operator; if w = ( n1 , n1 , . . . , n1 )T , then the GIULOWHM operator is reduced to the ULHM operator. Example 3.10 Consider a collection of the objects h0.3, Kim, [s1 , s3 ]i, h0.1, Park, [s7 , s8 ]i, h0.2, Lee, [s2 , s3 ]i. Performing the ordering of the objects with respect to the first component, we get the ordered objects h0.3, Kim, [s1 , s3 ]i, h0.2, Lee, [s2 , s3 ]i, h0.1, Park, [s7 , s8 ]i. The ordering induces the ordered uncertain linguistic arguments s˜β1 = [s1 , s3 ], s˜β2 = [s2 , s3 ], sβ3 = [s7 , s8 ]. If the weighting vector for this aggregation is w = (0, 2, 0.6, 0.2)T , then we have GIULOWHMw (h0.3, Kim, [s1 , s3 ]i, h0.1, Park, [s7 , s8 ]i, h0.2, Lee, [s2 , s3 ]i) = [s2.33 , s3.42 ]. If the direct order inducing variables vi (i = 1, 2, . . . , n) take the form of uncertain linguistic variables s˜0i (i = 1, 2, . . . , n), then we shall use, to rank these uncertain linguistic variables, the procedure for ranking uncertain linguistic arguments when using the ULOWHM operator. Example 3.11 Consider a collection of the objects h[s1 , s2 ], Kim, [s2 , s4 ]i, h[s4 , s5 ], Park, [s7 , s8 ]i, h[s3 , s5 ], Lee, [s2 , s3 ]i. To rank the first components vi (i = 1, 2, 3) of the objects, we first compare each vi with all these first components vi (i = 1, 2, 3) by using (1), and then construct a complementary matrix ! 0.500 0.000 0.000 1.000 0.500 0.667 . P= 1.000 0.333 0.500 Summing all elements in each line of matrix P, we have p1 = 0.500, p2 = 2.167, p3 = 1.833. Then we rank all the variables vi (i = 1, 2, 3) in descending order in accordance with the values of pi (i = 1, 2, 3) v2 = [s4 , s5 ], v3 = [s3 , s5 ], v1 = [s1 , s2 ]. Performing the ordering of the objects with respect to the first component, we get the ordered objects h[s4 , s5 ], Park, [s7 , s8 ]i, h[s3 , s5 ], Lee, [s2 , s3 ]i, h[s1 , s2 ], Kim, [s2 , s4 ]i. 8
637
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
638
The ordering induces the ordered uncertain linguistic arguments s˜β1 = [s7 , s8 ], s˜β2 = [s2 , s3 ], sβ3 = [s2 , s4 ]. If the weighting vector for this aggregation is w = (0, 2, 0.6, 0.2)T , then we have GIULOWHMw h[s1 , s2 ], Kim, [s2 , s4 ]i, h[s4 , s5 ], Park, [s7 , s8 ]i, h[s3 , s5 ], Lee, [s2 , s3 ]i = [s2.33 , s3.64 ].
3.2
Some properties of the GILOWHM operator
In the following we shall make an investigation on some desirable properties of the GILOWHM operator. Theorem 3.12 (Commutativity) If (hv10 , u01 , s0α1 i, hv10 , u01 , s0α1 i, . . . , hvn0 , u0n , s0αn i) is any permutation of (hv1 , u1 , sα1 i, hv1 , u1 , sα1 i, . . . , hvn , un , sαn i), then GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) = GILOWHMw (hv10 , u01 , s0α1 i, hv20 , u02 , s0α2 i, . . . , hvn0 , u0n , s0αn i). Proof Let GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) = GILOWHMw (hv10 , u01 , s0α1 i, hv20 , u02 , s0α2 i, . . . , hvn0 , u0n , s0αn i) =
w1 sβ1 w1 s0β
1
w2 sβ2
⊕ ⊕
w2 s0β
1 ⊕ ··· ⊕ 1 ⊕ ··· ⊕
2
wn sβn wn s0β
.
n
Since (hv10 , u01 , s0α1 i, hv10 , u01 , s0α1 i, . . . , hvn0 , u0n , s0αn i) is a permutation of (hv1 , u1 , sα1 i, hv1 , u1 , sα1 i, . . . , hvn , un , sαn i), we have sβj = s0βj (j = 1, 2, . . . , n), then GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) = GILOWHMw (hv10 , u01 , s0α1 i, hv20 , u02 , s0α2 i, . . . , hvn0 , u0n , s0αn i). Theorem 3.13 (Idempotency) If sαi = sα , for all i, then GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) = sα . Proof Since sαi = sα , for all i, we have GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) 1 1 = w1 w2 wn = w 1 w2 sβ1 ⊕ sβ2 ⊕ · · · ⊕ sβn sα ⊕ sα ⊕ · · · ⊕ = sα .
wn sα
Theorem 3.14 (Monotonicity) If sαi ≤ s0αi , for all i, then GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) ≤ GILOWHMw (hv1 , u1 , s0α1 i, hv2 , u2 , s0α2 i, . . . , hvn , un , s0αn i). 9
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
639
Proof Let GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) = GILOWHMw (hv1 , u1 , s0α1 i, hv2 , u2 , s0α2 i, . . . , hvn , un , s0αn i) =
w1 sβ1 w1 s0β
1
⊕ ⊕
w2 sβ2 w2 s0β
1 ⊕ ··· ⊕ 1 ⊕ ··· ⊕
wn sβn wn s0β
n
2
Since sαi ≤ s0αi , for all i, it follows that sβi ≤ s0βi , then GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) ≤ GILOWHMw (hv1 , u1 , s0α1 i, hv2 , u2 , s0α2 i, . . . , hvn , un , s0αn i). Theorem 3.15 (Boundedness) min(sαi ) ≤ GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) ≤ max(sαi ). i
i
Proof Let maxi (sαi ) = sβ and mini (sαi ) = sα , then GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) 1 1 = w1 = sβ , w2 w n ≤ w1 w2 ⊕ ⊕ · · · ⊕ ⊕ ⊕ · · · ⊕ wsβn sβ sβ sβ sβ sβ 1
n
2
GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) 1 1 = sα . = w1 w2 w2 w n ≥ w1 ⊕ ⊕ · · · ⊕ wsαn ⊕ ⊕ · · · ⊕ sβ sβ sβ sα sα 1
n
2
Hence we have min(sαi ) ≤ GILOWHMw (hv1 , u1 , sα1 i, hv2 , u2 , sα2 i, . . . , hvn , un , sαn i) ≤ max(sαi ). i
i
Similarly, we can prove that GIULOWHM operator also has the desirable properties above.
4
An approach to group decision making
For a group decision making with linguistic information, let X = {x1 , x2 , . . . , xn } be a set of alternatives, and G = {G1 , G2 , . . . , Gm } be the set of attributes, and ω = (ω1 , ω2 ,P . . . , ωm )T be the weight vector of attributes, where ωi ≥ 0, m i = 1, 2, . . . , m, i=1 ωi = 1. Let U = {u1 , u2 , . . . , ul } be a set of decision makers, and V = {v1 , v2 , . . . , vl } be the set of importance degrees or charac(k) ters of decision makers uk (k = 1, 2 . . . , l). Suppose that A(k) = (aij )m×n is (k) the linguistic decision matrix, where aij ∈ S¯ is preference value, which takes the form of linguistic variables, given by the decision maker uk ∈ U , for the alternative xj ∈ X with respect to the attribute Gi ∈ G, for all k = 1, 2, . . . , l; i = 1, 2, . . . , m; j = 1, 2, . . . , n.
10
.
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
640
In the following, we apply the GILOWHM operator (whose exponential Pl weighting vector w = (w1 , w2 , . . . , wl )T , wk ≥ 0, k = 1, 2, . . . l, k=1 wk = 1) and the LWHM operator to group decision making with linguistic information: Procedure I. Step 1: Utilize the GILOWHM operator (1)
(2)
(l)
aij = GILOWHMw (hv1 , u1 , aij i, hv2 , u2 , aij i, . . . , hvl , ul , aij i), 1 = w1 wl , i = 1, 2, . . . , m; j = 1, 2, . . . , n w2 ⊕ (1) (2) ⊕ · · · ⊕ (l) aij
aij
aij
to aggregate all the decision matrices A(k) (k = 1, 2, . . . , l) into a collective decision matrix A = (aij )m×n , where vk (k = 1, 2, . . . l) are direct order inducing variables and uk (k = 1, 2, . . . l) are indirect order inducing variables. Step 2: Utilize the decision information given in matrix A, and the LWHM operator aj = LWHMω (a1j , a2j , . . . , amj ) 1 = ω1 ω2 ωm , j = 1, 2, . . . , n a1j ⊕ a2j ⊕ · · · ⊕ amj to derive the collective overall preference values aj of the alternative xj , where ω = (ω1 , ω2 , . . . , ωm )T be the weight vector of attributes. Step 3: Rank all the alternatives xj (j = 1, 2, . . . , n) and select the best one(s) in accordance with the collective overall preference values aj (j = 1, 2, , . . . , n). Step 4: End. Now we consider the group decision making problems under interval uncertainty where all the attribute values are expressed in uncertain linguistic variables. The following notations are used to depict the considered problems: Let X, G, ω, U and V be presented as above-mentioned, and let A˜(k) = (k) (k) (˜ aij )n×m be an uncertain linguistic decision matrix, where a ˜ij ∈ S˜ is preference value, which takes the form of uncertain linguistic variables, given by the decision maker uk ∈ U , for the alternative xj ∈ X with respect to the attribute Gi ∈ G, for all k = 1, 2, . . . , l; i = 1, 2, . . . , m; j = 1, 2, . . . , n. Similar to the Procedure I, a procedure for solving the above problems can be described as follows: Procedure II. Step 1: Utilize the GIULOWHM operator (1)
(2)
(l)
aij = GIULOWHMw (hv1 , u1 , a ˜ij i, hv2 , u2 , a ˜ij i, . . . , hvl , ul , a ˜ij i), 1 = w1 wl , i = 1, 2, . . . , m; j = 1, 2, . . . , n w2 ⊕ (1) (2) ⊕ · · · ⊕ (l) a ˜ij
a ˜ij
a ˜ij
11
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
to aggregate all the decision matrices A˜(k) (k = 1, 2, . . . , l) into a collective decision matrix A˜ = (˜ aij )m×n , where vk (k = 1, 2, . . . l) are direct order inducing variables and uk (k = 1, 2, . . . l) are indirect order inducing variables. ˜ and the ULWHM Step 2: Utilize the decision information given in matrix A, operator a ˜j = ULWHMω (˜ a1j , a ˜2j , . . . , a ˜mj ) 1 = ω1 , j = 1, 2, . . . , n ω2 m ⊕ ⊕ · · · ⊕ a˜ωmj a ˜1j a ˜2j to derive the collective overall preference values a ˜j of the alternative xj , where ω = (ω1 , ω2 , . . . , ωm )T be the weight vector of attributes. Step 3: To rank these collective attribute values a ˜i (i = 1, 2, . . . , n), we first compare each a ˜i with all a ˜j (j = 1, 2, . . . , n) by using (1). For simplicity, we let pij = p(˜ ai ≥ a ˜j ), then we develop a complementary matrix as P = (pij )n×n , where: pij ≥ 0, pij + pji = 1, pii = 0.5, i, j = 1, 2, . . . , n. Summing all elements in each line of matrix P, we have pi =
n X
pij , i = 1, 2, . . . , n.
j=1
Then we rank the a ˜i (i = 1, 2, . . . , n) in descending order in accordance with the values of pi (i = 1, 2, . . . , n). Step 4: Rank all the alternatives xi (i = 1, 2, . . . , n) and select the best one(s) in accordance with the a ˜i (i = 1, 2, . . . n). Step 5: End.
5
Illustrative example
Let us suppose an investment company, which wants to invest a sum of money in the best option (adapted by Herrera et al. [5]). There is a panel with five possible alternatives in which to invest the money: (1) x1 is a car industry; (2) x2 is a food company; (3) x3 is a computer company; (4) x4 is an arms company; (5) x5 is a TV company. The investment company must make a decision according to the following four attributes (suppose that the weight vector of four attributes is ω = (0.3, 0.4, 0.2, 0.1)T ): (1) G1 is the risk analysis; (2) G2 is the growth analysis; (3) G3 is the social-political impact analysis; (4) G4 is the environmental impact analysis. There is three decision makers uk (k = 1, 2, 3) to evaluate five alternatives as follows: u1 is Anderson; u2 is Smith; and u3 is Brown, where v1 = No. 3, v2 = No. 2 and v3 = No. 1 are order positions of relative importance of decision
12
641
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
642
makers uk (k = 1, 2, 3), respectively. The five possible alternatives xj (j = 1, 2, 3, 4, 5) are evaluated using the linguistic scale: S = {s1 = extremely poor, s2 = very poor, s3 = poor, s4 = slightly poor, s5 = fair, s6 = slightly good, s7 = good, s8 = very good, s9 = extremely good}. by three decision makers under the above four attributes Gi (i = 1, 2, 3, 4), and (k) construct, respectively, the decision matrices A(k) = (aij )4×5 (k = 1, 2, 3) as listed in Tables 1-3. Table 1: Linguistic decision matrix A(1) G1 G2 G3 G4
x1 s6 s3 s7 s2
x2 s9 s7 s4 s4
x3 s4 s8 s6 s6
x4 s3 s8 s8 s7
x5 s6 s4 s7 s8
Table 2: Linguistic decision matrix A(2) G1 G2 G3 G4
x1 s6 s3 s7 s2
x2 s8 s6 s4 s3
x3 s4 s8 s6 s4
x4 s7 s8 s7 s6
x5 s3 s4 s9 s8
Table 3: Linguistic decision matrix A(3) G1 G2 G3 G4
x1 s6 s4 s7 s3
x2 s8 s6 s3 s4
x3 s4 s8 s7 s4
x4 s7 s7 s9 s7
x5 s2 s4 s8 s7
Now we utilize the proposed procedure I to prioritize these alternatives: Step 1: Utilize the GILOWHM operator (whose weight vector is w = (0.3, 0.4, 0.3)T ) (1)
(2)
(3)
aij = GILOWHMw (hv1 , u1 , aij i, hv2 , u2 , aij i, hv3 , u3 , aij i), i = 1, 2, 3, 4; j = 1, 2, 3, 4, 5 to aggregate all the decision matrices A(k) (k = 1, 2, 3) into a collective decision matrix A = (aij )4×5 (Table 4). Step 2: Utilize the decision information given in matrix A, and the LWHM operator aj = LWHMω (a1j , a2j , a3j , a4j ) 1 = ω1 ω2 ω3 ω4 , j = 1, 2, 3, 4, 5 a1j ⊕ a2j ⊕ a3j ⊕ a4j 13
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
643
Table 4: Collective linguistic decision matrix A G1 G2 G3 G4
x1 s6.0 s3.2 s7.0 s2.2
x2 s8.3 s6.3 s3.6 s3.5
x3 s4.0 s8.0 s6.3 s4.4
x4 s5.0 s7.7 s7.8 s6.6
x5 s3.0 s4.0 s8.0 s7.7
to derive the collective overall preference values aj of the alternative xj : a1 = s4.02 , a2 = s5.44 , a3 = s5.57 , a4 = s6.55 , a5 = s4.20 . Step 3: Rank all the alternatives xj (j = 1, 2, 3, 4, 5) and select the best one(s) in accordance with the collective overall preference values aj (j = 1, 2, 3, 4, 5): x4 x3 x2 x1 x5 thus the best alternative is x4 . If three decision makers evaluate the performance of five companies xj (j = 1, 2, 3, 4, 5) according to attributes Gi (i = 1, 2, 3, 4) by using the uncertain linguistic terms in the set S˜ and constructs, respectively, the uncertain linguistic decision matrices A˜(k) (k = 1, 2, 3) as listed in Tables 5-7. In such case, we can utilize the proposed procedure II to prioritize these alternatives as follows. Step 1: Utilize the GIULOWHM operator (whose weight vector w = (0.3, 0.4, 0.3)T ) (1)
(2)
(3)
a ˜ij = GIULOWHMw (hv1 , u1 , a ˜ij i, hv2 , u2 , a ˜ij i, hv3 , u3 , a ˜ij i), i = 1, 2, 3, 4; j = 1, 2, 3, 4, 5 to aggregate all the uncertain linguistic decision matrices A˜(k) (k = 1, 2, 3) into a collective uncertain linguistic decision matrix A˜ = (aij )4×5 (Table 8). ˜ and the ULWHM Step 2: Utilize the decision information given in matrix A, operator a ˜j
=
ULWHMω (˜ a1j , a ˜2j , a ˜3j , a ˜4j ) 1 = ω1 ω2 ω3 ω4 , j = 1, 2, 3, 4, 5 a ˜1j ⊕ a ˜2j ⊕ a ˜3j ⊕ a ˜4j
to derive the collective overall preference values a ˜j of the alternative xj : a ˜1 = [s2.49 , s3.98 ], a ˜2 = [s2.67 , s4.59 ], a ˜3 = [s1.78 , s3.48 ], a ˜4 = [s2.36 , s4.09 ], a ˜5 = [s2.77 , s4.61 ]. Step 3: To rank these collective overall preference values a ˜j (j = 1, 2, 3, 4, 5), we first compare each a ˜j with all a ˜i (i = 1, 2, 3, 4, 5) by using (1), and develop 14
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
644
Table 5: Uncertain linguistic decision matrix A˜(1) G1 G2 G3 G4
x1 [s5 , s7 ] [s2 , s3 ] [s2 , s4 ] [s3 , s4 ]
x2 [s7 , s9 ] [s6 , s7 ] [s5 , s6 ] [s2 , s3 ]
x3 [s2 , s4 ] [s7 , s9 ] [s1 , s3 ] [s3 , s5 ]
x4 [s3 , s5 ] [s3 , s5 ] [s6 , s7 ] [s2 , s3 ]
x5 [s4 , s6 ] [s4 , s6 ] [s4 , s5 ] [s3 , s4 ]
Table 6: Uncertain linguistic decision matrix A˜(2) G1 G2 G3 G4
x1 [s6 , s7 ] [s2 , s4 ] [s1 , s2 ] [s3 , s5 ]
x2 [s8 , s9 ] [s2 , s3 ] [s2 , s3 ] [s4 , s6 ]
x3 [s1 , s2 ] [s3 , s5 ] [s1 , s2 ] [s2 , s3 ]
x4 [s3 , s5 ] [s2 , s4 ] [s2 , s4 ] [s1 , s3 ]
x5 [s1 , s3 ] [s4 , s5 ] [s5 , s6 ] [s4 , s6 ]
Table 7: Uncertain linguistic decision matrix A˜(3) G1 G2 G3 G4
x1 [s6 , s8 ] [s3 , s4 ] [s1 , s3 ] [s2 , s3 ]
x2 [s6 , s8 ] [s1 , s3 ] [s3 , s5 ] [s2 , s4 ]
x3 [s1 , s3 ] [s4 , s5 ] [s2 , s3 ] [s4 , s5 ]
x4 [s2 , s3 ] [s3 , s4 ] [s4 , s5 ] [s1 , s2 ]
x5 [s4 , s5 ] [s3 , s4 ] [s3 , s4 ] [s2 , s4 ]
Table 8: Collective uncertain linguistic decision matrix A˜ G1 G2 G3 G4
x1 [s5.66 , s7.27 ] [s2.22 , s3.64 ] [s1.54 , s2.67 ] [s2.61 , s3.92 ]
x2 [s7.00 , s8.67 ] [s1.82 , s3.62 ] [s2.78 , s4.11 ] [s2.50 , s4.14 ]
a complementary matrix: 0.500 0.616 P = 0.310 0.497 0.637
0.384 0.500 0.224 0.389 0.516
x3 [s1.18 , s2.67 ] [s3.98 , s5.77 ] [s1.18 , s2.50 ] [s2.67 , s3.95 ]
0.690 0.776 0.500 0.673 0.799
0.503 0.611 0.327 0.500 0.630
x4 [s2.61 , s4.17 ] [s2.50 , s4.26 ] [s3.08 , s4.93 ] [s1.18 , s2.61 ]
0.363 0.484 0.201 0.370 0.500
x5 [s1.82 , s4.11 ] [s3.64 , s4.88 ] [s3.92 , s4.96 ] [s2.86 , s4.62 ]
.
Summing all elements in each line of the matrix P, we have p1 = 2.440, p2 = 2.987, p3 = 1.562, p4 = 2.429, p5 = 3.082 and then we rank a ˜j (j = 1, 2, 3, 4, 5) in descending order in accordance with the values of pj (j = 1, 2, 3, 4, 5): a ˜5 > a ˜2 > a ˜1 > a ˜4 > a ˜3 . 15
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
Step 4. Rank all alternatives xj (j = 1, 2, 3, 4, 5) by the ranking a ˜j (j = 1, 2, 3, 4, 5): x5 x2 x1 x4 x3 and thus the most desirable alternative is x5 .
6
Conclusions
In this paper, we have defined the GILOWHM and GIULOWHM operators, by which each object processed consists of three components, where the first component represents the importance degree or character of the second component, and the second component is used to induce an ordering, through the first component, over the third components which are linguistic variables (or uncertain linguistic variables) and then aggregated. We have also shown that the ILOWHM operator and LOWHM operator are the special cases of the GILOWHM operator, and that the IULOWHM operator and the ULOWHM operator are the special cases of the GIULOWHM operator. In the process of aggregating information, these operators can avoid losing the original linguistic or uncertain linguistic information and thus ensure exactness and rationality of the aggregated results. Moreover, based on the GILOWHM and GIULOWHM operators respectively, we have developed two procedures for solving the MADM problems where all decision information about attribute values take the forms of linguistic variables or uncertain linguistic variables. To verify the effectiveness and practicality of the developed procedures, we have given an illustrative example.
Acknowledgments This study was supported by research funds from Dong-A University.
References [1] M. Delgado, J.L. Verdegay, M.A. Vila, Linguistic decision making models, Int. J. Intell. Syst. 8, 351–370 (1993) [2] F. Herrera, E. Herrera-Viedma, Aggregation operators for linguistic weighted information, IEEE Trans. Syst. Man. Cybern. 27, 646–656 (1997). [3] F. Herrera, E. Herrera-Viedma, Choice functions and mechanisms for linguistic preference relations, Eur. J. Oper. Res. 120, 144–161 (2000). [4] F. Herrera, E. Herrera-Viedma, Linguistic decision analysis: steps for solving decision problems under linguistic information, Fuzzy Sets Syst. 115, 67–82 (2000). [5] F. Herrera, E. Herrera-Viedma, L. Martinez, A fusion approach for managing multi-granularity linguistic term sets in decision making, Fuzzy Sets Syst. 114, 43–58 (2000). 16
645
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
[6] F. Herrera, E. Herrera-Viedma, J.L. Verdegay, A model of consensus in group decision making under linguistic assessments, Fuzzy Sets Syst. 78, 73–87 (1996). [7] F. Herrera, E. Herrera-Viedma, J.L. Verdegay, Direct approach process in group decision making using linguistic OWA operators, Fuzzy Sets Syst. 79, 175–190 (1996). [8] F. Herrera, E. Herrera-Viedma, J.L. Verdegay, A rational consensus model in group decision making linguistic assessments, Fuzzy Sets Syst. 88, 31–49 (1997). [9] F. Herrera, L. Martinez, A 2-tuple fuzzy linguistic representation model for computing with words, IEEE Trans. Fuzzy Syst. 8, 746–752 (2000). [10] F. Herrera, J.L. Verdegay, Linguistic assessments in group decision, Proceedings of 11th European Congress of Fuzzy Intelligent Technology, pp 941–948 (1993). [11] J.H. Park, B.Y. Lee, M.J. Son, An approach based on the LOWHM and induced LOWHM operators to group decision making under linguistic information, Int. J. Fuzzy. Logic Intell. Syst. 20, 285–291 (2010). [12] V. Torra, Y. Narukawa, Information fusion and aggregating information, Berlin, Springer (2007). [13] Z.S. Xu, Uncertain Multiple Attribute Decision Making: Methods and Applications, Tsinghua University Press, Beijing (2004). [14] Z.S. Xu, A method based on linguistic aggregation operators for group decision making with linguistic preference relations, Inf. Sci. 166, 19–30 (2004). [15] Z.S. Xu, Uncertian linguistic aggregation operators based approach to multiple attribute group decision making under uncertain linguistic environment, Inf. Sci. 168, 171-184 (2004). [16] Z.S. Xu, Deviation measures of linguistic preference relations in group decision making, Omega 33, 249–254 (2005). [17] Z.S. Xu, An approach based on the uncertain LOWG and induced uncertain LOWG operators to group decision making with uncertain multiplicative linguistic preference relations, Decis. Support Syst. 41, 488–499 (2006). [18] Z.S. Xu, Induced uncertain linguistic OWA operators applied to group decision making, Inf. Fusion 7, 231–238 (2006). [19] Z.S. Xu, On generalized induced linguistic aggregation operators, Int. J. Intell. Syst. 35, 17–28 (2006). [20] Z.S. Xu, Q.L. Da, The uncertain OWA operators, Int. J. Intell. Syst. 17, 569–575 (2002).
17
646
PARK ET AL: LINGUISTIC HARMONIC MEAN OPERATORS
[21] Z.S. Xu, Q.L. Da, An overview of operators for aggregationg information, Int. J. Intell. Syst. 18, 953–969 (2003). [22] R.R. Yager, On ordered weighted averaging aggregation operators in multicriteria decision making, IEEE Trans. Syst. Man Cybern. 18, 183–190 (1988). [23] R.R. Yager, Families and extension of OWA aggregation, Fuzzy Sets Syst. 59, 125–148 (1993). [24] R.R. Yager, The induced fuzzy integral aggregation operator, Int. J. Intell. Syst. 17, 1049–1065 (2002). [25] R.R. Yager, Induced aggregation operators, Fuzzy Sets Syst. 137, 59–69 (2003). [26] R.R. Yager, D.F. Filev, Induced ordered weighted averaging operators, IEEE Trans. Syst. Man Cybern. 29, 141–150 (1999). [27] R.R. Yager, J. Kacprzyk, The ordered weighted averaging operator: Theory and application, Boston, Kluwer (1997). [28] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-I, Inform. Sci. 8, 199–249 (1975). [29] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-II, Inform. Sci. 8, 301–357 (1975). [30] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-III, Inform. Sci. 9, 43–80 (1976).
18
647
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 648-658 , 2012, COPYRIGHT 2012 EUDOXUS PRESS, 648 LLC
A GENERALIZED RANDOM NONLINEAR VARIATIONAL INCLUSION FOR MULTI-VALUED RANDOM OPERATORS IN A UNIFORMLY SMOOTH BANACH SPACE∗ NAWITCHA ONJAI-UEA AND POOM KUMAM‡
Abstract. In this paper, we introduce and study the random nonlinear variational inclusion problem with multi-valued random operators. We define a random iterative algorithm for finding the approximate solutions of class of random variational inclusions and establish the convergence of random iterative sequence generated by proposed algorithm in a uniformly smooth Banach space. Our result in this paper improves and generalizes some know corresponding results in the literature.
1. Introduction Variational inequalities and variational inclusions are among the most interesting and important mathematical problems and have been studied intensively in the past years since they have wide applications in the optimization and control, economics and transportation equilibrium, engineering science and physical sciences. For these reasons, many existence result and iterative algorithms for various variational inclusion have been studied extensively many authors (see, for example, [10]). Very recently, Lan [23] first introduced a new concept of (A, η)-monotone operators, which generalizes the (H, η)monotonicity, A-monotonicity and other existing monotone operators as special cases, and studied some properties of (A, η)-monotone operators and defined resolvent operators associated with (A, η)monotone operators. Lan et al. [24] introduced a new class of general nonlinear random multi-valued operator equations involving generalized m-accretive mappings in Banach spaces. By using the Chang’s lemma and the resolvent operator technique for generalized m-accretive mapping due to Huang et al [18], they also prove the existence theorems of the solution and convergence theorems of the generalized random iterative procedures with errors for this nonlinear random multi-valued operator equations in q-uniformly smooth Banach spaces. In the past years, many existence results and random iterative algorithms for various random variational inequality and random variational inclusion problems have been studied. For more details, see [8, 11, 12, 13, 22, 21, 30] and the references therein. Motivated and inspired by the recent research works in this fascinating area, the purpose of this paper is to introduce and study a new class of general nonlinear multi-valued random operators with generalized m-accretive operators in Banach spaces, give the notion of proximal random operators associated with the A-monotone random operators. An iterative algorithm is defined to compute the approximate solutions of random variational inclusion problems. The convergence of the iterative sequences generated by the proposed algorithm is also shown. 2. Preliminaries Throughout this paper, let (Ω, A, µ) be a complete σ-finite measure space and E be a separable real Banach space. We denote by B(E), (·, ·) and ∥ · ∥ the class of Borel σ-fields in E, the inner product and the norm on E, respectively. In the sequel, we denote 2E , CB(E) and H by 2E = {A : A ∈ E}, CB(E) = {A ⊂ E : A is nonempty, bounded and closed} and the Hausdorff metric on CB(E) respectively. Next, we will use the following definitions and lemmas. 2000 Mathematics Subject Classification. : 47N10, 47J20, 47J22, 46N10. Key words and phrases. Generalized m-accretive mapping; random variational inclusion;random multi-valued operator; monotone accretive operator . ∗ This research was partially supported by the Centre of Excellence in Mathematics, under the Commission on Higher Education, Ministry of Education, Thailand. ‡ Corresponding author email: [email protected] (P. Kumam). 1
649
2
N. ONJAI-UEA AND P. KUMAM
Definition 2.1. A operator x : Ω −→ E is said to be measurable if, for any B ∈ B(E), {t ∈ Ω : x(t) ∈ B} ∈ A. Definition 2.2. A operator F : Ω×E −→ E is called a random operator if for any x ∈ E, F (t, x) = y(t) is measurable. A random operator F is said to be continuous (resp. linear, bounded) if for any t ∈ Ω, the operator F (t, ·) : E −→ E is continuous (resp. linear, bounded). Similarly, we can define a random operator a : Ω × E × E −→ E. We shall write Ft (x) = F (t, x(t)) and at (x, y) = a(t, x(t), y(t)) for all t ∈ Ω and x(t), y(t) ∈ E. It is well known that a measurable operator is necessarily a random operator. Definition 2.3. A multi-valued operator G : Ω −→ 2E is said to be measurable if, for any B ∈ B(E), G−1 (B) = {t ∈ Ω : G(t) ∩ B ̸= ∅} ∈ A. Definition 2.4. A operator u : Ω −→ E is called a measurable selection of a multi-valued measurable operator Γ : Ω −→ 2E if u is measurable and for any t ∈ Ω, u(t) ∈ Γ(t). Lemma 2.5. [4] Let M : Ω × E −→ CB(E) be a H-continuous random multi-valued operator. Then, for any measurable operator x : Ω −→ E, the multi-valued operator M (·, x(·)) : Ω −→ CB(E) is measurable. Lemma 2.6. [4] Let M, V : Ω × E −→ CB(E) be two measurable multi-valued operators, ϵ > 0 be a constant and x : Ω −→ E be a measurable selection of M . Then there exists a measurable selection y : Ω −→ E of V such that, for any t ∈ Ω, ∥x(t) − y(t)∥ ≤ (1 + ϵ)H(M (t), V (t)). Definition 2.7. A multi-valued operator F : Ω × E −→ 2E is called a random multi-valued operator if, for any x ∈ E, F (·, x) is measurable. A random multi-valued operator F : Ω × E −→ CB(E) is said to be H-continuous if, for any t ∈ Ω, F (t, ·) is continuous in H(·, ·), where H(·, ·) is the Hausdorff metric on CB(E) defined as follows: for any given A, B ∈ CB(E), { } H(A, B) = max sup inf d(x, y), sup inf d(x, y) . x∈A y∈B
y∈B x∈A
Suppose that f, g, P, S : Ω × E → E and M : Ω × E × E → 2E with Im(g) ∩ dom(M (t, ·, s)) ̸= ∅, K, T, G : Ω × E → 2E are three multi-valued operators. Let N : Ω × E × E × E −→ E be single-valued. Now, we consider the following problem: Find measurable mappings x, u, v, w : Ω → E such that u(t) ∈ K(t, x(t)), v(t) ∈ T (t, x(t)), w(t) ∈ G(t, x(t)) and (2.1) 0 ∈ M (t, g(t, x(t), w(t)) + N (t, S(t, x(t)), u(t), v(t)) + P (t, u(t)) − {f (t, v(t)) − g(t, x(t))}. The inequality (2.1) is called random variational inclusion problem with random multi-valued operators. The set of measurable mappings (x, u, v, w) is called a random solution of (2.1). Some special cases of the problem (2.1): (1) If P ≡ 0 and f (t, v(t)) = g(t, x(t)) for all t ∈ Ω, then problem (2.1) is equivalent to finding x, v, w : Ω −→ E such that v(t) ∈ T (t, x(t)), w(t) ∈ G(t, x(t)) and (2.2)
0 ∈ N (t, S(t, x(t)), u(t), v(t)) + M (t, g(t, x(t)), w(t))
for all t ∈ Ω and u(t) ∈ M (t, x(t)). Problem (2.2) is introduced and studied by Lan et al, [24]. If G is a single-valued operator, g ≡ I, the identity mapping and N (t, x, y, z) = fe(t, z) + ge(t, x, y) for all t ∈ Ω and x, y, z ∈ E, then problem (2.1) is equivalent to finding x, v : Ω −→ E such that v(t) ∈ T (t, x(t)) and (2.3)
0 ∈ fe(t, v(t)) + ge(t, S(t, x(t)), u(t)) + M (t, x(t), G(t, x(t))),
for all t ∈ Ω and u ∈ M (t, x(t)). The problem (2.3) was considered and studied by Agarwal et al [2]. when G ≡ I. If M (t, x, s) = M (t, x) for all t ∈ Ω, x, s ∈ E and, for all t ∈ Ω, M (t, ·) : E −→ 2E is a generalized m-accretive mapping, then the problem (2.1) reduces to the following generalized nonlinear random multi-valued operator equation involving generalized m-accretive mapping in Banach spaces:
650
A GENERALIZED RANDOM VARIATIONAL INCLUSION
3
Find x, v : Ω −→ E such that v(t) ∈ T (t, x(t)) and (2.4)
0 ∈ N (t, S(t, x(t)), u(t), v(t)) + M (t, g(t, x(t)))
for all t ∈ Ω and u(t) ∈ M (t, x(t)). (2) If E = H, N (t, S(t, x(t)), u(t), v(t)) = 0 and for all t ∈ Ω, then the problem (2.1) reduces to the following random variational inclusion problem in Hilbert space was considered and studied by Ahmad and Farajzadeh [1]. ∗ The generalized duality mapping Jq : E −→ 2E is defined by Jq (x) = {f ∗ ∈ E ∗ : ⟨x, f ∗ ⟩ = ∥x∥q and ∥f ∗ ∥ = ∥x∥q−1 } for all x ∈ E,where q > 1 is a constant. In particular, J2 is the usual normalized duality mapping. It is well known that, in general, Jq (x) = ∥x∥q−2 J2 (x) for all x ̸= 0 and Jq is single-valued if E ∗ is strictly convex (see, for example, [31]). If E = H is a Hilbert space, then J2 becomes the identity mapping of H. In what follows we shall denote the single-valued generalized duality mapping by jq . The modules of smoothness of E is the function ρE : [0, ∞) −→ [0, ∞) defined by 1 ρE (t) = sup{ ∥x + y∥ + ∥x − y∥ − 1 : ∥x∥ ≤ 1, ∥y∥ ≤ t}. 2 A Banach space E is called uniformly smooth if lim
t−→0 q
ρE (t) t
= 0 and E is called q-uniformly smooth if
there exists a constant c > 0 such that ρE(t) ≤ ct , where q > 1 is a real number. It is well known that Hilbert spaces, Lp (or lp ) spaces, 1 < p < ∞ and the Sobolev spaces W m,p , 1 < p < ∞, are all q-uniformly smooth. In the study of characteristic inequalities in q-uniformly smooth Banach spaces, Xu [33] proved the following result. Lemma 2.8. Let q > 1 be a given real number and E be a real uniformly smooth Banach space. Then E is q-uniformly smooth if and only if there exists a constant cq > 0 such that, for all x, y ∈ E and jq (x) ∈ Jq (x), the following inequality holds: ∥x + y∥q ≤ ∥x∥q + q⟨y, jq (x)⟩ + cq ∥y∥q . Definition 2.9. A random operator g : Ω × E −→ E is said to be: (a) α-strongly accretive if there exists j2 (x(t) − y(t)) ∈ J2 (x(t) − y(t)) such that ⟨gt (x) − gt (y), j2 (x(t) − y(t))⟩ ≥ α(t)∥x(t) − y(t)∥2 for all x(t), y(t) ∈ E and t ∈ Ω, where α(t) > 0 is a real-valued random variable; (b) β-Lipschitz continuous if there exists a real-valued random variable β(t) > 0 such that ∥gt (x) − gt (y)∥ ≤ β(t)∥x(t) − y(t)∥ for all x(t), y(t) ∈ E and t ∈ Ω. Definition 2.10. Let S : Ω × E −→ E be a random operator. A operator N : Ω × E × E × E −→ E is said to be: (a) ϱ-strongly accretive with respect to S in the first argument if there exists j2 (x(t) − y(t)) ∈ J2 (x(t) − y(t)) such that ⟨Nt (St (x), ·, ·) − Nt (St (y), ·, ·), j2 (x(t) − y(t))⟩ ≥ ϱ(t)∥x(t) − y(t)∥2 for all x(t), y(t) ∈ E and t ∈ Ω, where ϱ(t) > 0 is a real-valued random variable; (b) ϵ-Lipschitz continuous in the first argument if there exists a real-valued random variable ϵ(t) > 0 such that ∥Nt (x, ·, ·) − Nt (y, ·, ·)∥ ≤ ϵ(t)∥x(t) − y(t)∥ for all x(t), y(t) ∈ E and t ∈ Ω. Similarly, we can define the Lipschitz continuity in the second argument and third argument of N (·, ·, ·).
651
4
N. ONJAI-UEA AND P. KUMAM
Definition 2.11. Let η : Ω × E × E −→ E ∗ be a random operator A : Ω × E −→ E be a random operator and M : Ω × E −→ 2E be a random multi-valued operator. Then M is said to be: (a) η-accretive if ⟨u(t) − v(t), ηt (x, y)⟩ ≥ 0 for all x(t), y(t) ∈ E, u(t) ∈ Mt (x) and v(t) ∈ Mt (y) where Mt (z) = M (t, z(t)),∀t ∈ Ω; (b) strictly η-accretive if ⟨u(t) − v(t), ηt (x, y)⟩ ≥ 0 for all x(t), y(t) ∈ E, u(t) ∈ Mt (x), v(t) ∈ Mt (y) and t ∈ Ω and the equality holds if and only if u(t) = v(t) for all t ∈ Ω; (c) strongly η-accretive if there exists a real-valued random variable r(t) > 0 such that, for any t ∈ Ω, ⟨u(t) − v(t), ηt (x, y)⟩ ≥ r(t)∥x(t) − y(t)∥2 for all x(t), y(t) ∈ E, u(t) ∈ Mt (x), v(t) ∈ Mt (y) and t ∈ Ω; (d) generalized m-accretive if M is η-accretive and (At + ρ(t)M (t, ·))(E) = E for all t ∈ Ω and ρ(t) > 0. Remark 2.12. If E = E ∗ = H is a Hilbert space, then (a)-(d) of Definition 2.11 reduce to the definition of η-monotonicity, strict η-monotonicity, strong η-monotonicity and maximal η-monotonicity respectively; if E is uniformly smooth and η(x, y) = j2 (x−y) ∈ J2 (x−y), then (a)-(d) of Definition 2.11 reduce to the definitions of accretive, strictly accretive, strongly accretive and m-accretive operators in uniformly smooth Banach spaces, respectively. Definition 2.13. The operator η : Ω × E × E −→ E ∗ is said to be: (a) monotone if ⟨x(t) − y(t), ηt (x, y)⟩ ≥ 0 for all x(t), y(t) ∈ E and t ∈ Ω; (b) strictly monotone if ⟨x(t) − y(t), ηt (x, y)⟩ ≥ 0 for all x(t), y(t) ∈ E and t ∈ Ω and the equality holds if and only if x(t) = y(t) for all t ∈ Ω; (c) δ-strongly monotone if there exists a measurable function δ : Ω −→ (0, ∞) such that ⟨x(t) − y(t), ηt (x, y)⟩ ≥ δ(t)∥x(t) − y(t)∥2 for all x(t), y(t) ∈ E and t ∈ Ω; (d) τ -Lipschitz continuous if there exists a real-valued random variable τ (t) > 0 such that ∥ηt (x, y)∥ ≤ τ (t)∥x(t) − y(t)∥ for all x(t), y(t) ∈ E and t ∈ Ω. Definition 2.14. A multi-valued measurable operator T : Ω × E −→ CB(E) is said to be γ-HLipschitz continuous if there exists a measurable function γ : Ω −→ (0, +∞) such that, for any t ∈ Ω, H(Tt (x), Tt (y)) ≤ γ(t)∥x(t) − y(t)∥, for all x(t), y(t) ∈ E. Definition 2.15. A random operator A : Ω × E → E is said to be (i) monotone, if ⟨A(t, x1 (t)) − A(t, x2 (t)), x1 (t) − x2 (t)⟩ ≥ 0, ∀x1 (t), x2 (t) ∈ E, t ∈ Ω, (ii) r-strongly monotone, if there exists a measurable function r : Ω → (0, ∞) such that ⟨A(t, x1 (t)) − A(t, x2 (t)), x1 (t) − x2 (t)⟩ ≥ r(t)∥x1 (t) − x2 (t)∥2 ∀x1 (t), x2 (t) ∈ E, t ∈ Ω, Definition 2.16. Let M : Ω × E × E −→ 2E be a generalized m-accretive random operator and ρ(t),A A : Ω × E → E be r-strongly monotone random operator. Then the proximal operator JMt(·,x)t is defined as follows: JMt(·,x)t (z) = (At + ρ(t)Mt )−1 (z) ρ(t),A
for all t ∈ Ω and z ∈ E, where ρ : Ω −→ (0, ∞) is a measurable function and η : Ω × E × E −→ E ∗ is a strictly monotone operator.
652
A GENERALIZED RANDOM VARIATIONAL INCLUSION
5
Lemma 2.17. [1] Suppose that E is a reflexive Banach space with the dual space E ∗ . Let A : Ω × E → ∗ E ∗ be a r-strongly monotone mapping such that for each fixed t ∈ Ω and M : Ω × E → 2E be a Aρ(t),A 1 , monotone mapping, then JM (t) t (x) : Ω × E ∗ → E is Lipschitz continuous with constant r(t)−ρ(t)m(t) i.e., ρ(t),A
ρ(t),A
∥JM (t) t (x) − JM (t) t (y)∥ ≤
1 ∥x − y∥, ∀x, y ∈ E ∗ , r(t) − ρ(t)m(t)
( ) r(t) where ρ(t) ∈ 0, m(t) is a real valued random variable.
3. Random Iterative Algorithms In this section, we suggest and analyze a new class of iterative methods and construct some new random iterative algorithms for solving the problems (2.1). Lemma 3.1. The set of measurable mapping x, u, v, w : Ω → E a random solution of problem (2.1) if and only if for all t ∈ Ω, and gt (x) =
ρ(t),A
JMt(·,w)t [At (gt (x)) − ρ(t){Pt (u) − (ft (v) − gt (x)) + Nt (St (x), u, v)}]. ρ(t),A
Proof. The proof directly follows from the definition of JMt(·,w)t as follows: gt (x)
=
ρ(t),A
JMt(·,w)t [At (gt (x)) − ρ(t){Pt (u) − (ft (v) − gt (x)) + Nt (St (x), u, v)}]
⇔ gt (x) = (At + ρ(t)Mt )−1 [At (gt (x)) − ρ(t){Pt (u) − (ft (v) − gt (x)) + Nt (St (x), u, v)}] ⇔ (At + ρ(t)Mt )gt (x) = At (gt (x)) − ρ(t){Pt (u) − (ft (v) − gt (x)) + Nt (St (x), u, v)} ⇔ ρ(t)Mt (gt (x), w) = −ρ(t){Pt (u) − (ft (v) − gt (x)} + Nt (St (x), u, v)} ⇔ 0 ∈ Mt (gt (x), w) + Pt (u) − (ft (v) − gt (x)) + Nt (St (x), u, v). Algorithm 3.2. Let M : Ω × E × E −→ 2E be a random multi-valued operator such that for each fixed∩t ∈ Ω and s ∈ E, M (t, ·, s) : E −→ 2E be a generalized m-accretive mapping and Range(g) domM (t, ·, s) ̸= ∅. Let S, g : Ω×E −→ E, η : Ω×E×E −→ E and N : Ω×E×E×E −→ E be single-valued operators, K, T, G : Ω × E −→ 2E be multi-valued operators and λ : Ω −→ (0, 1] be a measurable step size function. Then, by Lemma 2.5 and Himmelberg [14], we know that, for given x0 (·) ∈ E, the multi-valued operators K(·, x0 (·)), T (·, x0 (·)) and G(·, x0 (·)) are measurable and there exist measurable selections u0 (·) ∈ K(·, x0 (·)), v0 (·) ∈ T (·, x0 (·)) and w0 (·) ∈ G(·, x0 (·)). Set { ]} ρ(t),A x1 (t) = x0 (t)−λ(t) gt (x0 )−JMt(·,wt ) [At (gt (x0 ))−ρ(t){Pt (u0 )−(ft (v0 )−gt (x0 ))+Nt (St (x0 ), u0 , v0 )} 0
where ρ and A are the same as in Lemma 3.1. Then it is easy to know that x1 : Ω −→ E is measurable. Since u0 (t) ∈ Kt (x0 ) ∈ CB(E), v0 (t) ∈ Tt (x0 ) ∈ CB(E) and w0 (t) ∈ Gt (x0 ) ∈ CB(E), by Lemma 2.6, there exist measurable selections u1 (t) ∈ Kt (x1 ), v1 (t) ∈ Tt (x1 ) and w1 (t) ∈ Gt (x1 ) such that for all t ∈ Ω, ∥u0 (t) − u1 (t)∥ ≤ (1 + 11 )H(Kt (x0 ), Kt (x1 )), ∥v0 (t) − v1 (t)∥ ≤ (1 + 11 )H(Tt (x0 ), Tt (x1 )), ∥w0 (t) − w1 (t)∥ ≤ (1 + 11 )H(Gt (x0 ), Gt (x1 )). By induction, we can define the sequences {xn (t)}, {un (t)}, {vn (t)} and {wn (t)} inductively satisfying { ]} ρ(t),At x (t) = x (t) − λ(t) g (x ) − J [A (g (x )) − ρ(t){P (u ) − (f (v ) − g (x )) + N (S (x ), u , v )} n+1 n t n t t n t n t n t n t t n n n Mt(·,wn ) 1 )H(Kt (xn ), Kt (xn+1 )), un (t) ∈ Mt (xn ), ∥un (t) − un+1 (t)∥ ≤ (1 + n+1 1 v (t) ∈ T (x ), ∥v (t) − v (t)∥ ≤ (1 + )H(T n t n n n+1 t (xn ), Tt (xn+1 )), n+1 1 )H(Gt (xn ), Gt (xn+1 )). wn (t) ∈ Gt (xn ), ∥wn (t) − wn+1 (t)∥ ≤ (1 + n+1 From Algorithm 3.2, we can get the following algorithms.
653
6
N. ONJAI-UEA AND P. KUMAM
Algorithm 3.3. Suppose that E, M , g, η, S, N , K, T , G and λ be the same as in Algorithm 3.2 and setting P ≡ 0, ft (vn ) = gt (xn ) and A ≡ I for all t ∈ Ω. Then, for given measurable x0 : Ω −→ E, we have ρ(t) xn+1 (t) = xn (t) − λ(t){gt (xn ) − JM [gt (xn ) − ρ(t)Nt (St (xn ), un , vn )]} t (·,wn ) 1 un (t) ∈ Kt (xn ), ∥un (t) − un+1 (t)∥ ≤ (1 + n+1 )H(Kt (xn ), Kt (xn+1 )), (3.1) 1 v (t) ∈ T (x ), ∥v (t) − v (t)∥ ≤ (1 + )H(T n t n n n+1 t (xn ), Tt (xn+1 )), n+1 w (t) ∈ G (x ), ∥w (t) − w 1 n t n n n+1 (t)∥ ≤ (1 + n+1 )H(Gt (xn ), Gt (xn+1 )). Algorithm 3.4. Suppose that E, M , η, S, K, T and λ are the same as in Algorithm 3.2 and setting P ≡ 0, ft (vn ) = gt (xn ) and g, A ≡ I for all t ∈ Ω. Let G : Ω × E −→ E be a random single-valued operator and N (t, x, y, z) = fe(t, z) + ge(t, x, y) for all t ∈ Ω and x, y, z ∈ E. Then, for given measurable x0 : Ω −→ E, we have ρ(t) xn+1 (t) = (1 − λ(t))xn (t) + λ(t)JMt (·,Gt (xn ) {xn (t) − ρ(t)[fet (vn ) + get (St (xn ), un )]} 1 un (t) ∈ Kt (xn ), ∥un (t) − un+1 (t)∥ ≤ (1 + n+1 )H(Kt (xn ), Kt (xn+1 )), v (t) ∈ T (x ), ∥v (t) − v 1 n t n n n+1 (t)∥ ≤ (1 + n+1 )H(Tt (xn ), Tt (xn+1 )), such that for each fixed Algorithm 3.5. Let M : Ω × E −→ 2E be a random multi-valued operator ∩ t ∈ Ω, M (t, ·) : E −→ 2E is a generalized m-accretive mapping and Range(g) domM (t, ·) ̸= ∅. If S, g, η, N , K, T and λ are the same as in Algorithm 2.1 and P ≡ 0, ft (vn ) = gt (xn ) and A ≡ I for all t ∈ Ω. Then for given measurable x0 : Ω −→ E, we have ρ(t) xn+1 (t) = xn (t) − λ(t){gt (xn ) − JMt (·) [gt (xn ) − ρ(t)Nt (St (xn ), un , vn )]} 1 un (t) ∈ Kt (xn ), ∥un (t) − un+1 (t)∥ ≤ (1 + n+1 )H(Kt (xn ), Kt (xn+1 )), v (t) ∈ T (x ), ∥v (t) − v 1 n t n n n+1 (t)∥ ≤ (1 + n+1 )H(Tt (xn ), Tt (xn+1 )). 4. Main Results Theorem 4.1. Let E be a q-uniformly smooth and separable Banach space, M : Ω × E × E → 2E be a random multi-valued mapping such that for each fixed t ∈ Ω and s ∈ E, M (t, ·, s) : E → 2E is a generalized m-accretive mapping g : Ω × E → E is α-strongly accretive and µg (t)-Lipschitz continuous and Range(g) ∩ dom(M (t, ·, s)) ̸= ∅, for all t ∈ Ω. Let η : Ω × E × E → E be a δ-strongly monotone and τ -Lipschitz continuous and S : Ω × E → E be a σ-Lipschitz continuous random operator. Let N : Ω × E × E × E → E be a ϱ-strongly accretive with respect to S and ϵ-Lipschitz continuous in the first argument, and µ-Lipschitz continuous in the second argument, ν-Lipschitz continuous in the third argument, respectively. Suppose that A : Ω × E → E is r-strongly monotone and µA (t)-Lipschitz continuous. Let f, P : Ω × E → E be Lipschitz continuous random mappings with constants ψf (t) and ψp (t), respectively. Let K, T, G : Ω × E → CB(E) be H-Lipschitz continuous with constants µK (t), µT (t) and µG (t) respectively. If there exist real-valued random variables ρ(t) > 0 and π(t) > 0 such that, for any t ∈ Ω, x, y, z ∈ E, (4.1)
ρ(t),A
ρ(t),A
∥JMt(·,x)t (z) − JMt(·,y)t (z)∥ ≤ π(t)∥x − y∥
and the following conditions hold: ( ) 1 µA ψg ρ(t) k(t) = 1 + (1 − qα(t) + cq β(t)q ) q + π(t)µHG (t) + r(t)−ρ(t)m(t) < 1, r(t)−ρ(t)m(t) ( ) (4.2) ψp (t)µK (t) + ψf (t)µT (t) + (1 − qρ(t)ϱ(t) + cq ϵ(t)q σ(t)q ) q1 < (r(t)−ρ(t)m(t))(1−k(t)) ρ(t) where cq is the same as in Lemma 2.8 for any t ∈ Ω. If there exist real-valued random variables λ(t), ρ(t) exist )x∗ (t) ∈ E, u∗ (t) ∈ Kt (x∗ ), v ∗ (t) ∈ Tt (x∗ ) and w∗ (t) ∈ Gt (x∗ ) such that ( ∗ > 0 ∗then there ∗ x (t), u (t), v (t), w∗ (t) is solution of the problem (2.1) and xn (t) → x∗ (t), un (t) → u∗ (t), vn (t) → v ∗ (t), wn (t) → w∗ (t) as n → ∞, where {xn (t)}, {un (t)}, {vn (t)} and {wn (t)} are iterative sequences generated by Algorithm 3.2. Proof. From Algorithm 3.2, Lemma 2.17 and (4.1), we compute ∥xn+1 (t) − xn (t)∥
ρ(t),A
= ∥xn (t) − λ(t){gt (xn ) − JMt(·,wt ) [At (gt (xn )) − ρ(t){Pt (un ) − (ft (vn ) − gt (xn )} n
654
A GENERALIZED RANDOM VARIATIONAL INCLUSION
7
+ Nt (St (xn ), un , vn )]} ρ(t),A
− xn−1 (t) − λ(t){gt (xn−1 ) − JMt(·,wt
n−1 )
[At (gt (xn−1 )) − ρ(t){Pt (un−1 )
− (ft (vn−1 ) − gt (xn−1 )} + Nt (St (xn−1 ), un−1 , vn−1 )]}∥ ≤ ∥xn (t) − xn−1 (t) − λ(t)(gt (xn ) − gt (xn−1 ))∥ ρ(t),A
+ λ(t)∥JMt(·,wt ) [At (gt (xn )) − ρ(t){Pt (un ) − (ft (vn ) − gt (xn ) n
+ Nt (St (xn ), un , vn )] ρ(t),A
− JMt(·,wt
n−1 )
[At (gt (xn−1 )) − ρ(t){Pt (un−1 ) − (ft (vn−1 ) − gt (xn−1 )}
+ Nt (St (xn−1 ), un−1 , vn−1 )]∥ ≤ ∥xn (t) − xn−1 (t) − λ(t)(gt (xn ) − gt (xn−1 ))∥ ρ(t),A
+ λ(t)∥JMt(·,wt ) [At (gt (xn )) − ρ(t){Pt (un ) − (ft (vn ) − gt (xn ) n
+ Nt (St (xn ), un , vn )] ρ(t),A
− JMt(·,wt ) [At (gt (xn−1 )) − ρ(t){Pt (un−1 ) − (ft (vn−1 ) − gt (xn−1 )) n
+ Nt (St (xn−1 ), un−1 , vn−1 )]∥ ρ(t),A
+ λ(t)∥JMt(·,wt ) [At (gt (xn−1 )) − ρ(t){Pt (un−1 ) − (ft (vn−1 ) − gt (xn−1 ) n
+ Nt (St (xn−1 ), un−1 , vn−1 )] ρ(t),A
− JMt(·,wt
n−1 )
[At (gt (xn−1 )) − ρ(t){Pt (un−1 ) − (ft (vn−1 ) − gt (xn−1 )}
+ Nt (St (xn−1 ), un−1 , vn−1 )]∥ ≤ ∥xn (t) − xn−1 (t) − λ(t)(gt (xn ) − gt (xn−1 ))∥ λ(t) + ∥[At (gt (xn )) − ρ(t){Pt (un ) − (ft (vn ) − gt (xn )) + Nt (St (xn ), un , vn )}] r(t) − ρ(t)m(t) − [At (gt (xn−1 )) − ρ(t){Pt (un−1 ) − (ft (vn−1 ) − gt (xn−1 )) + Nt (St (xn−1 ), un−1 , vn−1 )]}∥ + λ(t)π(t)∥wn − wn−1 ∥ ≤ ∥xn (t) − xn−1 (t) − λ(t)(gt (xn ) − gt (xn−1 ))∥ λ(t)ρ(t) + ∥(gt (xn ) − gt (xn−1 )) + Nt (St (xn ), un , vn ) − Nt (St (xn−1 ), un−1 , vn−1 )∥ r(t) − ρ(t)m(t) λ(t) + ∥At (gt (xn )) − At (gt (xn−1 ))∥ r(t) − ρ(t)m(t) λ(t)ρ(t) + ∥Pt (un ) − Pt (un−1 )∥ r(t) − ρ(t)m(t) λ(t)ρ(t) + ∥ft (vn ) − ft (vn−1 )∥ r(t) − ρ(t)m(t) + λ(t)π(t)∥wn − wn−1 ∥ ≤ ∥xn (t) − xn−1 (t) − λ(t)(gt (xn ) − gt (xn−1 ))∥ { λ(t)ρ(t) + ∥xn (t) − xn−1 (t) − (gt (xn ) − gt (xn−1 ))∥ r(t) − ρ(t)m(t) ( ) + ∥xn (t) − xn−1 (t) + Nt (St (xn ), un , vn ) − Nt (St (xn−1 ), un , vn ) ∥ + ∥Nt (St (xn−1 ), un , vn ) − Nt (St (xn−1 ), un−1 , vn )∥
} + ∥Nt (St (xn−1 ), un−1 , vn ) − Nt (St (xn−1 ), un−1 , vn−1 )
λ(t) ∥At (gt (xn )) − At (gt (xn−1 ))∥ r(t) − ρ(t)m(t) λ(t)ρ(t) + ∥Pt (un ) − Pt (un−1 ∥ r(t) − ρ(t)m(t) λ(t)ρ(t) + ∥ft (vn ) − ft (vn−1 )∥ r(t) − ρ(t)m(t) +
655
8
N. ONJAI-UEA AND P. KUMAM
+ λ(t)π(t)∥wn − wn−1 ∥ ≤ (1 − λ(t))∥xn (t) − xn−1 (t)∥ ( ) ρ(t) + λ(t) 1 + ∥xn (t) − xn−1 (t) − (gt (xn ) − gt (xn−1 ))∥ r(t) − ρ(t)m(t) { ( ) λ(t)ρ(t) + ∥xn (t) − xn−1 (t) + Nt (St (xn ), un , vn ) − Nt (St (xn−1 ), un , vn ) ∥ r(t) − ρ(t)m(t) + ∥Nt (St (xn−1 ), un , vn ) − Nt (St (xn−1 ), un−1 , vn )∥ } + ∥Nt (St (xn−1 ), un−1 , vn ) − Nt (St (xn−1 ), un−1 , vn−1 )∥ λ(t) ∥At (gt (xn )) − At (gt (xn−1 ))∥ r(t) − ρ(t)m(t) λ(t)ρ(t) + ∥Pt (un ) − Pt (un−1 )∥ r(t) − ρ(t)m(t) λ(t)ρ(t) + ∥ft (vn ) − ft (vn−1 )∥ r(t) − ρ(t)m(t) + λ(t)π(t)∥wn − wn−1 ∥. +
(4.3)
By using gt is a strongly accretive and Lipschitz continuous, we have ∥xn (t) − xn−1 (t) − (gt (xn ) − gt (xn−1 ))∥q ≤ ∥xn (t) − xn−1 (t)∥q − q⟨(gt (xn ) − gt (xn−1 )), jq (xn (t) − xn−1 (t))⟩ +cq ∥gt (xn ) − gt (xn−1 )∥q ≤ (1 − qα(t) + cq β(t)q )∥xn (t) − xn−1 (t)∥q , that is ∥xn (t) − xn−1 (t) − (gt (xn ) − gt (xn−1 ))∥ (4.4)
1
≤ (1 − qα(t) + cq β(t)q ) q ∥xn (t) − xn−1 (t)∥,
where cq is the same as in Lemma 2.8. Also from the strongly accretivity of N with respect to S and the Lipschitz continuity of N in the first argument, we obtain ∥xn (t) − xn−1 (t) − (Nt (St (xn ), un , vn ) − Nt (St (xn−1 ), un , vn ))∥ (4.5)
1
≤ (1 − qϱ(t) + cq ϵ(t)q σ(t)q ) q ∥xn (t) − xn−1 (t)∥.
By Lipschitz continuity of N in the second and third argument and since K, T, G are H-Lipschitz continuous and f, g, A and P are Lipschitz continuous, we have ∥Nt (St (xn−1 ), un , vn )) − Nt (St (xn−1 ), un−1 , vn )∥
≤ ≤ ≤
(4.6) ∥Nt (St (xn−1 ), un−1 , vn ) − Nt (St (xn−1 ), un−1 , vn−1 )∥
≤ ≤ ≤
(4.7) ∥wn − wn−1 ∥
≤ ≤
(4.8) ∥Pt (un ) − Pt (un−1 (t))∥
≤ ≤
(4.9) ∥ft (vn (t)) − ft (vn−1 (t))∥
≤
µ(t)∥un − un−1 ∥ 1 µ(t)(1 + )H(Kt (xn−1 ), Kt (xn )) n 1 (1 + )µ(t)µK (t)∥xn (t) − xn−1 (t)∥, n ν(t)∥vn − vn−1 ∥ 1 ν(t)(1 + )H(Tt (xn−1 ), Tt (xn )) n 1 (1 + )ν(t)µT (t)∥xn (t) − xn−1 (t)∥, n 1 (1 + )H(Gt (xn−1 ), Gt (xn )) n 1 (1 + )µG (t)∥xn (t) − xn−1 (t)∥, n ψp (t)∥un (t) − un−1 (t)∥ 1 ψp (t)µK (t)(1 + )∥xn (t) − xn−1 (t)∥, n ψf (t)∥vn (t) − vn−1 (t)∥
656
A GENERALIZED RANDOM VARIATIONAL INCLUSION
9
1 )∥xn (t) − xn−1 (t)∥, n ≤ µA ∥gt (xn ) − gt (xn−1 )∥ ≤ ψf (t)µT (t)(1 +
(4.10) ∥At (gt (xn )) − At (gt (xn−1 ))∥
≤ µA ψg ∥xn (t) − xn−1 (t)∥.
(4.11) Using (4.4)-(4.11) in (4.3), we obtain for all t ∈ Ω, ∥xn+1 (t) − xn (t)∥
(4.12)
≤ θn (t)∥xn (t) − xn−1 (t)∥
where θn (t)
=
κn (t)
=
1 − λ(t) + λ(t)κn (t), ( ) 1 ρ(t) 1+ (1 − qα(t) + cq β(t)q ) q r(t) − ρ(t)m(t) 1 ρ(t) µA ψ g + (1 − qϱ(t) + cq ϵ(t)q σ(t)q ) q + r(t) − ρ(t)m(t) r(t) − ρ(t)m(t) ( ) 1 1 ρ(t) + (1 + ) ψp (t)µK (t) + ψf (t)µT (t) + (1 + )π(t)µG (t) n r(t) − ρ(t)m(t) n
Letting (
) 1 ρ(t) µA ψg (1 − qα(t) + cq β(t)q ) q + π(t)µHG (t) + r(t) − ρ(t)m(t) r(t) − ρ(t)m(t) ( ) 1 ρ(t) + ψp (t)µK (t) + ψf (t)µT (t) + (1 − qϱ(t) + cq ϵ(t)q σ(t)q ) q , r(t) − ρ(t)m(t) 1 − λ(t) + λ(t)κ(t). 1+
κ(t) =
θ(t) =
Thus κn (t) −→ κ(t), θn (t) −→ θ(t) as n −→ ∞. From the condition (4.2), we know that 0 < θ(t) < 1, for all t ∈ Ω. Using the same arguments as those used in the proof in [Lan et al, Theorem 3.1, pp 14, [24]] it follow that {xn (t)}, {un (t)}, {vn (t)} and {wn (t)} are Cauchy sequences. Thus by the completeness of E, there exist u∗ (t), v ∗ (t), w∗ (t) ∈ E such that un (t) → u∗ (t), vn (t) → v ∗ (t), wn (t) → w∗ (t) as n → ∞. Next, we show that u∗ (t) ∈ Kt (x∗ ), we have d(u∗ (t), Kt (x∗ ))
= inf{∥u∗ (t) − y∥ : y ∈ Kt (x∗ )} ≤ ∥u∗ (t) − un (t)∥ + d(un (t), Kt (x∗ )) ≤ ∥u∗ (t) − un (t)∥ + H(Kt (xn ), Kt (x∗ )) ≤ ∥u∗ (t) − un (t)∥ + µK (t)∥xn (t) − x∗ (t)∥ −→ 0.
This implies that u∗ (t) ∈ Kt (x∗ ). Similarly, we can get v ∗ (t) ∈ Tt (x∗ ) and w∗ (t) ∈ Gt (x∗ ) for all t ∈ Ω. ρ(t) Therefore, from algorithm 3.2 and the continuity of JAt (·,w) , P, N and S, we obtain gt (x)
ρ(t),A
= JMt(·,w)t [At (gt (x)) − ρ(t){Pt (u) − (ft (v) − gt (x)) + Nt (St (x), u, v)}]
By Lemma 3.1, we know that (x∗ (t), u∗ (t), v ∗ (t), w∗ (t)) is a solution of the problem (2.1). This completes the proof. If P ≡ 0, f (t, v(t)) = g(t, x(t)) and A ≡ I, the identity mapping, in Theorem 4.1, then we can get the following theorems. Theorem 4.2. Let E, M, η, g, N, S, K, T, G and λ be the same as in Theorem 4.1. Assume that M : Ω × E × E −→ 2E is a random multi-valued operator such that for each fixed t ∈ Ω and s ∈ E, M (t, ·, s) : E −→ 2E is a generalized m-accretive mapping. If there exist real-valued random variables (t) 1 ρ(t) > 0, π(t) > 0 and τδ(t) = r(t)−ρ(t)m(t) such that (4.1) holds and ) ( 1 k(t) = 1 + τ (t)δ(t)−1 (1 − qα(t) + cq β(t)q ) q + π(t)µHG (t) < 1, ( ) δ(t)(1 − k(t)) 1 (4.13) ρ(t)(ψp (t)µK (t) + ψf (t)µT (t)) + (1 − qρ(t)ϱ(t) + cq ϵ(t)q σ(t)q ) q < , τ (t)
657
10
N. ONJAI-UEA AND P. KUMAM
where cq is the same as in Lemma 2.8 for any t ∈ Ω. the iterative sequences {xn (t)}, {un (t)} and {vn (t)} defined by Algorithm 3.3 converge strongly to the solution (x∗ (t), u∗ (t), v ∗ (t)) of the problem (2.2). Theorem 4.3. Let E, η, M , S, K, T and λ be the same as in Theorem 3.1. Assume that M : Ω × E × E −→ 2E is a random multi-valued operator such that, for each fixed t ∈ Ω and s ∈ E, M (t, ·, s) : E −→ 2E is a generalized m-accretive mapping. Let fe : Ω × E −→ E be ν-Lipschitz continuous, S : Ω × E −→ E be a σ-Lipschitz continuous random operator, G : Ω × E −→ E be ζLipschitz continuous and ge : Ω × E × E −→ E be ϱ-strongly accretive with respect to S and ϵ-Lipschitz continuous in the first argument and µ-Lipschitz continuous in the second argument, respectively. If there exist real-valued random variables ρ(t) > 0 and π(t) > 0 such that (4.1) holds and 1
ρ(t)(ψp (t)µK (t) + ψf (t)µT (t)) + (1 − qρ(t)ϱ(t) + cq ρ(t)q ϵ(t)q σ(t)q ) q
0, ∀ x ∈ R, P∞ ii) k=−∞ Φ (x − k) = 1, ∀ x ∈ R, P∞ iii) k=−∞ Φ (nx − k) = 1, ∀ x ∈ R; n ∈ N, R∞ iv) −∞ Φ (x) dx = 1, v) Φ is a density function, vi) Φ is even: Φ (−x) = Φ (x), x ≥ 0. We see that ([8]) e−x e2 − 1 = Φ (x) = 2e (1 + e−x−1 ) (1 + e−x+1 ) 2 e −1 1 , 2e2 (1 + ex−1 ) (1 + e−x−1 )
3
661
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
662
and
0
Φ (x) =
e2 − 1 2e2
" −
#
(ex − e−x ) 2
2
e (1 + ex−1 ) (1 + e−x−1 )
≤ 0, x ≥ 0.
Hence vii) Φ is decreasing on R+ , and increasing on R− . Let 0 < α < 1, n ∈ N. We observe the following ∞ X :
∞ X
Φ (nx − k) =
k = −∞ |nx − k| > n1−α
:
Φ (|nx − k|) ≤
k = −∞ |nx − k| > n1−α
Z ∞ e2 − 1 1 dx ≤ x−1 2e2 (1 + e ) (1 + e−x−1 ) (n1−α −1) Z ∞ 2 2 e − 1 −(n1−α −1) e −1 −x e dx = e 2e 2e (n1−α −1) 2 (1−α) e − 1 −n(1−α) = e = 3.1992e−n . 2
We have found that: viii) for n ∈ N, 0 < α < 1, we get ∞ X :
Φ (nx − k)
n1−α
Denote by d·e the ceiling of a number, and by b·c the integral part of a number. Consider x ∈ [a, b] ⊂ R and n ∈ N such that dnae ≤ bnbc. We observe that 1=
∞ X
bnbc
Φ (nx − k) >
k=−∞
X
Φ (nx − k) =
k=dnae
bnbc
X
Φ (|nx − k|) > Φ (|nx − k0 |) ,
k=dnae
for any k0 ∈ [dnae , bnbc] ∩ Z. Here we can choose k0 ∈ [dnae , bnbc] ∩ Z such that |nx − k0 | < 1. 4
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
663
Therefore Φ (|nx − k0 |) > Φ (1) = 0.19046485. Consequently, bnbc
X
Φ (nx − k) > Φ (1) = 0.19046485.
k=dnae
Therefore we obtain ix)
Pbnbc
1 Φ(nx−k)
Φ (nb − bnbc − 1) (call ε := nb − bnbc, 0 ≤ ε < 1) = Φ (ε − 1) = Φ (1 − ε) ≥ Φ (1) > 0. Pbnbc Therefore lim 1 − k=dnae Φ (nb − k) > 0. n→∞ Similarly, bnbc
1−
X
dnae−1
X
Φ (na − k) =
k=−∞
k=dnae
∞ X
Φ (na − k) +
Φ (na − k)
k=bnbc+1
> Φ (na − dnae + 1) (call η := dnae − na, 0 ≤ η < 1) = Φ (1 − η) ≥ Φ (1) > 0. Pbnbc Therefore again lim 1 − k=dnae Φ (na − k) > 0. n→∞ Therefore we derive that
x) lim
n→∞
Pbnbc k=dnae
Φ (nx − k) 6= 1, for at least some x ∈ [a, b].
Let f ∈ C ([a, b]) and n ∈ N such that dnae ≤ bnbc. We introduce and define the positive linear neural network operator Pbnbc Gn (f, x) :=
k k=dnae f n Φ (nx − Pbnbc k=dnae Φ (nx − k)
5
k)
, x ∈ [a.b] .
(1)
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
664
For large enough n we always obtain dnae ≤ bnbc. Also a ≤ nk ≤ b, iff dnae ≤ k ≤ bnbc. We study here the pointwise convergence of Gn (f, x) to f (x) with rates. For convinience we call bnbc
G∗n (f, x) :=
X
f
k=dnae
that is
k Φ (nx − k) , n
G∗ (f, x) Gn (f, x) := Pbnbc n . k=dnae Φ (nx − k)
Thus,
G∗ (f, x) Gn (f, x) − f (x) = Pbnbc n − f (x) k=dnae Φ (nx − k) Pbnbc G∗n (f, x) − f (x) k=dnae Φ (nx − k) = . Pbnbc k=dnae Φ (nx − k)
(2)
(3)
(4)
Consequently we derive bnbc X 1 ∗ |Gn (f, x) − f (x)| ≤ Gn (f, x) − f (x) Φ (nx − k) . Φ (1)
(5)
k=dnae
That is bnbc X k − f (x) Φ (nx − k) . (6) |Gn (f, x) − f (x)| ≤ (5.250312578) f n k=dnae We will estimate the right hand side of (6). For that we need, for f ∈ C ([a, b]) the first modulus of continuity ω1 (f, h) :=
sup |f (x) − f (y)| , h > 0. x, y ∈ [a, b] |x − y| ≤ h
(7)
Similarly it is defined for f ∈ CB (R) (continuous and bounded on R). We have that lim ω1 (f, h) = 0. h→0
When f ∈ CB (R) we define, (see also [8]) ∞ X k Φ (nx − k) , n ∈ N, x ∈ R, Gn (f, x) := f n
(8)
k=−∞
the quasi-interpolation neural network operator. By [3] we derive the following three theorems on extended Taylor formula. 6
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
Theorem 1 Let N ∈ N, 0 < ε < x, y ∈ − π2 + ε, π2 − ε . Then f (x) = f (y) +
π 2
small, and f ∈ C N
665
− π2 + ε, π2 − ε ;
(k) N X (sin y) f ◦ sin−1 k (sin x − sin y) + KN (y, x) , k!
(9)
1 · (N − 1)!
(10)
k=1
where KN (y, x) = x
Z
N −1
(sin x − sin s)
f ◦ sin−1
(N )
(sin s) − f ◦ sin−1
(N )
(sin y) cos s ds.
y
Theorem 2 Let f ∈ C N ([ε, π − ε]), N ∈ N, ε > 0 small; x, y ∈ [ε, π − ε]. Then (k) N X f ◦ cos−1 (cos y) k ∗ f (x) = f (y) + (cos x − cos y) + KN (y, x) , (11) k! k=1
where ∗ KN (y, x) = −
Z
x
N −1
(cos x − cos s)
h
f ◦ cos−1
(N )
1 · (N − 1)!
(12)
(cos s) − f ◦ cos−1
(N )
i (cos y) sin s ds.
y
Theorem 3 Let f ∈ C N ([a, b]) (or f ∈ C N (R)), N ∈ N; x, y ∈ [a, b] (or x, y ∈ R). Then (k) N f ◦ ln 1e (e−y ) X
f (x) = f (y) +
k!
k=1
where
e−x − e−y
k
+ K N (y, x) ,
(13)
1 · (14) (N − 1)! (N ) −s −s −y e − f ◦ ln 1e e e ds.
K N (y, x) = − Z
x
e
−x
−e
−s N −1
y
f ◦ ln 1e
(N )
Remark 1 Using the mean value theorem we get |sin x − sin y| ≤ |x − y| , |cos x − cos y| ≤ |x − y| ,
(15) ∀ x, y ∈ R,
furthermore we have |sin x − sin y| ≤ 2, |cos x − cos y| ≤ 2,
7
(16) ∀ x, y ∈ R.
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
666
Similarly we get −x e − e−y ≤ e−a |x − y| ,
(17)
and −x e − e−y ≤ e−a − e−b ,
∀ x, y ∈ [a, b] .
(18)
Let g (x) = ln 1e x, sin−1 x, cos−1 x and assume f (j) (x0 ) = 0, k = 1, ..., N . (j) Then, by [3], we get f ◦ g −1 (g (x0 )) = 0, j = 1, ..., N. x m Remark 2 It is well knownl that m e > x , m ∈ N, for large x > 0. Let fixed α, β > 0, then α β ∈ N, and for large x > 0 we have α α ex > xd β e ≥ x β .
So for suitable very large x > 0 we get β
e x > xβ
αβ
= xα .
We proved for large x > 0 and α, β > 0 that β
ex > xα .
(19)
Therefore for large n ∈ N and fixed α, β > 0, we have β
en > nα .
(20)
e−n < n−α , for large n ∈ N.
(21)
That is β
So for 0 < α < 1 we get (1−α)
e−n
< n−α .
(22) (1−α)
Thus be given fixed A, B > 0, for the linear combination An−α + Be−n
the (dominant) rate of convergence to zero is n−α . The closer α is to 1 we get faster and better rate of convergence to zero.
3
Real Neural network Approximations
Here we present a series of neural network approximations to a function given with rates. We first give.
8
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
Theorem 4 Let f ∈ C ([a, b]) , 0 < α < 1, n ∈ N, x ∈ [a, b]. Then i) (1−α) 1 |Gn (f, x) − f (x)| ≤ (5.250312578) ω1 f, α + 6.3984 kf k∞ e−n =: λ, n (23) and ii) kGn (f ) − f k∞ ≤ λ, (24) where k·k∞ is the supremum norm. Proof. We see that bnbc X k ≤ − f (x) Φ (nx − k) f n k=dnae bnbc
X k f − f (x) Φ (nx − k) = n
k=dnae bnbc
f k − f (x) Φ (nx − k) + n
X
k = dnae − x ≤ n1α
k n
bnbc
f k − f (x) Φ (nx − k) ≤ n
X
k = dnae − x > n1α
k n
bnbc
X
k = dnae − x ≤ n1α
k ω1 f, − x Φ (nx − k) + n
k n
bnbc
X
2 kf k∞ |k
Φ (nx − k) ≤
k = dnae − nx| > n1−α
1 ω1 f, α n
∞ X
k = −∞ 1 − x ≤ nα
k n
9
Φ (nx − k) +
667
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
∞ X
2 kf k∞
Φ (nx − k)
668
≤ (by (viii))
k = −∞ − nx| > n1−α (1−α) 1 ω1 f, α + 2 kf k∞ (3.1992) e−n n (1−α) 1 . = ω1 f, α + 6.3984 kf k∞ e−n n |k
That is bnbc X (1−α) k 1 f − f (x) Φ (nx − k) ≤ ω1 f, α + 6.3984 kf k∞ e−n . n n k=dnae Using (6) we prove the claim. Theorem 4 improves a lot the model of neural network approximation of [8], see Theorem 3 there. Next we present Theorem 5 Let f ∈ CB (R), 0 < α < 1, n ∈ N, x ∈ R. Then i) Gn (f, x) − f (x) ≤ ω1 f, 1 + 6.3984 kf k e−n(1−α) =: µ, ∞ nα
(25)
and ii)
Gn (f ) − f ≤ µ. ∞
(26)
Proof. We observe that ∞ ∞ X X k Gn (f, x) − f (x) = f Φ (nx − k) − f (x) Φ (nx − k) = n k=−∞
k=−∞
∞ ∞ X X k k f − f (x) Φ (nx − k) ≤ f − f (x) Φ (nx − k) = n n k=−∞
k=−∞
∞ X
k = −∞ 1 − x ≤ nα
f k − f (x) Φ (nx − k) + n
k n
∞ X
k = −∞ 1 − x > nα
f k − f (x) Φ (nx − k) ≤ n
k n
10
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
∞ X
669
k ω1 f, − x Φ (nx − k) + n
k = −∞ 1 − x ≤ nα
k n
∞ X
2 kf k∞
Φ (nx − k) ≤
k = −∞ 1 − x > nα ∞ X
k n
ω1 f,
1 nα
Φ (nx − k) +
k = −∞ 1 − x ≤ nα
k n ∞ X
2 kf k∞
Φ (nx − k)
≤ (by (viii))
k = −∞ − nx| > n1−α (1−α) 1 ω1 f, α + 2 kf k∞ (3.1992) e−n n (1−α) 1 = ω1 f, α + 6.3984 kf k∞ e−n , n |k
proving the claim. Theorem 5 improves Theorem 4 of [8]. In the next we discuss high order of approximation by using the smoothness of f . Theorem 6 Let f ∈ C N ([a, b]), n, N ∈ N, 0 < α < 1, x ∈ [a, b]. Then i) |Gn (f, x) − f (x)| ≤ (5.250312578) · N (j) X f (x) 1 j −n(1−α) + (3.1992) (b − a) e + j! nαj j=1 "
ω1 f
(N )
1 , α n
(27)
#)
N (6.3984) f (N ) ∞ (b − a) −n(1−α) + e , nαN N ! N! 1
ii) assume further f (j) (x0 ) = 0, j = 1, ..., N , for some x0 ∈ [a, b], it holds |Gn (f, x0 ) − f (x0 )| ≤ (5.250312578) · " #
N (6.3984) f (N ) ∞ (b − a) −n(1−α) 1 (N ) 1 + e , ω1 f , α n nαN N ! N! 11
(28)
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
notice here the extremely high rate of convergence at n−(N +1)α , iii) kGn (f ) − f k∞ ≤ (5.250312578) ·
N (j) X f 1 j −n(1−α) ∞ + (3.1992) (b − a) e + j! nαj j=1 "
ω1 f
(N )
1 , α n
670
(29)
#)
N (6.3984) f (N ) ∞ (b − a) −n(1−α) + e . nαN N ! N! 1
Proof. Next we apply Taylor’s formula with integral remainder. We have (here nk , x ∈ [a, b]) f
X j Z k N k − tN −1 n k f (j) (x) k = −x + f (N ) (t) − f (N ) (x) n dt. n j! n (N − 1)! x j=0
Then f
j N X f (j) (x) k k Φ (nx − k) = Φ (nx − k) −x + n j! n j=0 k n
Z Φ (nx − k)
f
(N )
(t) − f
N −1 −t dt. (N − 1)!
k n
(x)
(N )
x
Hence bnbc
X
bnbc X k Φ (nx − k) − f (x) Φ (nx − k) = n
f
k=dnae
k=dnae
bnbc N X f (j) (x) X
j!
j=1 bnbc
k=dnae
Z
X
Φ (nx − k)
Φ (nx − k)
k n
f
(N )
(t) − f
(N )
k −x n
(x)
x
k=dnae
j +
N −1 −t dt. (N − 1)!
k n
Thus
bnbc
G∗n (f, x) − f (x)
X
Φ (nx − k) =
N X f (j) (x) j=1
k=dnae
j!
j G∗n (· − x) + Λn (x) ,
where bnbc
Λn (x) :=
X k=dnae
Z Φ (nx − k)
k n
f
(N )
x
12
(t) − f
(N )
(x)
N −1 −t dt. (N − 1)!
k n
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
We assume that bl− a > n1α ,mwhich is always the case for large enough n ∈ N, −1 that is when n > (b − a) α . Thus k − x ≤ 1α or k − x > 1α . n
n
n
n
As in [2], pp. 72-73 for k n
Z γ :=
f (N ) (t) − f (N ) (x)
x
in case of nk − x ≤
1 nα ,
(for x ≤ nk or x ≥ nk ). Notice also for x ≤
N −1 −t dt, (N − 1)!
k n
we find that 1 (N ) 1 |γ| ≤ ω1 f , α αN n n N!
k n
that
Z k n k − tN −1 dt ≤ f (N ) (t) − f (N ) (x) n x (N − 1)! Z
k n
x k n
Z
2 f (N )
∞
x
Next assume
k n
N −1
−t
dt = 2 f (N ) (N − 1)! ∞
k n
N −1 −t dt ≤ (N − 1)! N
(b − a)N k
n −x ≤ 2 f (N ) . N! N! ∞
k n
(N ) (t) − f (N ) (x) f
≤ x, then Z k n k − tN −1 (N ) (N ) n f (t) − f (x) dt = x (N − 1)! Z x k − tN −1 dt ≤ f (N ) (t) − f (N ) (x) n k (N − 1)! n t − k N −1 (N ) (N ) n (t) − f (x) dt ≤ f k (N − 1)! n N −1 N
(b − a)N t − nk x − nk
(N )
dt = 2 f ≤ 2 f (N ) . (N − 1)! N! N! ∞ ∞ Z
2 f (N )
∞
Z
x k n
x
Thus
|γ| ≤ 2 f (N )
∞
in all two cases.
13
N
(b − a) , N!
671
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
672
Therefore bnbc
bnbc
X
Λn (x) =
X
Φ (nx − k) γ +
k=dnae | nk −x|≤ n1α
Φ (nx − k) γ.
k=dnae | nk −x|> n1α
Hence bnbc
1 (N ) 1 + Φ (nx − k) ω1 f , α n N !nN α
X
|Λn (x)| ≤
k=dnae
| nk −x|≤ n1α
bnbc
(b − a)N
Φ (nx − k) 2 f (N ) ≤ N! ∞
X k=dnae
| nk −x|> n1α N
(1−α) 1
(N ) (b − a) (N ) 1 ω1 f , α + 2 (3.1992) e−n .
f
n N !nN α N! ∞ Consequently we have
(N )
f (b − a)N (1−α) 1 (N ) 1 ∞ + (6.3984) e−n . |Λn (x)| ≤ ω1 f , α αN n n N! N! We further see that bnbc
G∗n
j
(· − x)
(x) =
X
Φ (nx − k)
k=dnae
k −x n
j .
Therefore j bnbc X k ∗ j Φ (nx − k) − x = Gn (· − x) (x) ≤ n k=dnae
bnbc
X
k = dnae − x ≤ n1α
j k Φ (nx − k) − x + n
k n
1 nαj
bnbc
X k = dnae − x > n1α
k n
bnbc
bnbc
X
j k Φ (nx − k) − x ≤ n
X
j
Φ (nx − k) + (b − a)
k = dnae − x ≤ n1α
k n
≤
|k
k = dnae − nx| > n1−α
(1−α) 1 j + (b − a) (3.1992) e−n . αj n
14
Φ (nx − k)
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
673
Hence
(1−α) 1 ∗ j j , Gn (· − x) (x) ≤ αj + (b − a) (3.1992) e−n n for j = 1, ..., N. Putting things together we have proved X bnbc N (j) X ∗ f (x) 1 j −n(1−α) Gn (f, x) − f (x) ≤ Φ (nx − k) + (3.1992) (b − a) e + j! nαj j=1 k=dnae
"
ω1 f
(N )
1 , α n
#
N (6.3984) f (N ) ∞ (b − a) −n(1−α) + e , nαN N ! N! 1
that is establishing theorem. We make Remark 3 We notice that Gn (f, x) −
N X f (j) (x) j=1
G∗n (f, x) P
bnbc k=dnae
j!
j Gn (· − x) (x) − f (x) =
1
N X f (j) (x)
− P bnbc Φ (nx − k) Φ (nx − k) j=1 k=dnae −f (x) = P bnbc
j!
j
G∗n (· − x)
(x)
1
· Φ (nx − k) bnbc N (j) X X f (x) j G∗n (f, x) − G∗n (· − x) (x) − Φ (nx − k) f (x) . j! j=1 k=dnae
k=dnae
Therefore we get N (j) X f (x) j Gn (f, x) − ≤ (5.250312578) · G (· − x) (x) − f (x) n j! j=1 bnbc N (j) X X ∗ f (x) j ∗ Gn (f, x) − Gn (· − x) (x) − Φ (nx − k) f (x) , j! j=1 k=dnae (30) ∀ x ∈ [a, b] . In the next three Theorems 7-9 we present more general and flexible upper bounds to our error quantities. We give 15
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
674
Theorem 7 Let f ∈ C N ([a, b]), n, N ∈ N, 0 < α < 1, x ∈ [a, b]. Then 1) (j) −x N 1 f ◦ ln (e ) X e −· −x j Gn e − e , x − f (x) ≤ Gn (f, x) − j! j=1 (N ) e−a e−aN 1 ω f ◦ ln , α + 1 e N !nN α n #
N (N )
(6.3984) e−a − e−b
f ◦ ln 1
e−n(1−α) ,
e N! ∞
(5.250312578)
(31)
2) |Gn (f, x) − f (x)| ≤ (5.250312578) · (32) (j) (e−x ) N f ◦ ln 1 X e 1 j −n(1−α) e−aj + (3.1992) (b − a) e + j! nαj j=1 (N ) e−a e−aN , α + ω1 f ◦ ln 1e N !nαN n #)
N (N )
−n(1−α)
(6.3984) e−a − e−b
e
f ◦ ln 1 ,
e N! ∞
3) If f (j) (x0 ) = 0, j = 1, ..., N , it holds |Gn (f, x0 ) − f (x0 )| ≤ (5.250312578) · −aN (N ) e−a e ω1 f ◦ ln 1e , α + N !nN α n # N (N )
−n(1−α) (6.3984) e−a − e−b
e
f ◦ ln 1 .
e N! ∞ Observe here the speed of convergence is extremely high at
1 . n(N +1)α
Proof. Call F := f ◦ ln 1e . Let x, nk ∈ [a, b]. Then f
N j X k F (j) (e−x ) − k k − f (x) = e n − e−x + K N x, , n j! n j=1
where
KN
x,
k n
:= −
16
1 · (N − 1)!
(33)
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
k n
Z
k
e− n − e−s
N −1 h
675
i F (N ) e−s − F (N ) e−x e−s ds.
x
Thus bnbc
X k=dnae
bnbc X k − f (x) Φ (nx − k) = Φ (nx − k) f n k=dnae
bnbc N X F (j) (e−x ) X
j!
j=1
bnbc j k X Φ (nx − k) e− n − e−x + Φ (nx − k)
k=dnae
Z
k n
k=dnae
k
e− n − e−s
N −1 h
1 · (N − 1)!
i F (N ) e−s − F (N ) e−x de−s .
x
Therefore
bnbc
G∗n (f, x) − f (x)
X
Φ (nx − k) =
k=dnae N X F (j) (e−x )
j!
j=1
G∗n
e−· − e−x
j
, x + Un (x) ,
where bnbc
X
Un (x) :=
Φ (nx − k) µ,
k=dnae
with Z e− nk k N −1 h i 1 µ := e− n − w F (N ) (w) − F (N ) e−x dw. (N − 1)! e−x Case of nk − x ≤ n1α . k i) Subcase of x ≥ nk . I.e. e− n ≥ e−x . k
Z
1 |µ| ≤ (N − 1)!
e− n
k
e− n − w
e−x
N −1 (N ) (w) − F (N ) e−x dw ≤ F
k
1 (N − 1)!
Z
e− n
k
e− n − w
N −1
e−x
ω1 F (N ) , w − e−x dw ≤
k Z e− nk k N −1 1 −x (N ) − n ω1 F , e −e e− n − w dw ≤ (N − 1)! e−x k N e− n − e−x k (N ) −a ω1 F , e x − ≤ N! n
17
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
e
Hence when x ≥
k n
x − k N
k ω1 F (N ) , e−a x − ≤ N! n −aN −a e e ω1 F (N ) , α . αN N !n n
−aN
n
we get |µ| ≤
ii) Subcase of
k n
−a e−aN (N ) e ω F , . 1 N !nαN nα k
≥ x. Then e− n ≤ e−x and Z
1 |µ| ≤ (N − 1)! 1 (N − 1)!
676
e−x k
e− n
Z
e−x k
e− n
k
w − e− n
N −1 (N ) (w) − F (N ) e−x dw ≤ F
N −1 k w − e− n ω1 F (N ) , w − e−x dw ≤
k 1 ω1 F (N ) , e−x − e− n (N − 1)!
Z
e−x k
k
w − e− n
N −1
dw ≤
e− n
e−x − e− nk N 1 k ω1 F (N ) , e−a x − ≤ (N − 1)! n N N −a 1 k −aN (N ) e ω1 F , α e x − n ≤ N! n −aN −a 1 e (N ) e ω1 F , , α N! n nαN i.e.
−a e−aN (N ) e ω F , , 1 N !nαN nα ≥ x. So in general when nk − x ≤ n1α we proved that −a e−aN (N ) e |µ| ≤ ω . F , 1 N !nαN nα |µ| ≤
when
k n
Also we observe: i)’ When nk ≤ x, we get Z e− nk
N −1 k 1
|µ| ≤ e− n − w dw 2 F (N ) (N − 1)! ∞ −x e k N
(N )
−n −x
e − e
N 2 F 2 F (N ) ∞ −a ∞ = ≤ e − e−b . (N − 1)! N N! 18
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
ii)’ When
k n
677
≥ x, we obtain Z
1 |µ| ≤ (N − 1)!
e−x k
k −n
w−e
N −1
!
dw 2 F (N )
∞
e− n
N N 2 F (N ) ∞ −x 2 F (N ) ∞ −a k −n = e −e ≤ e − e−b . N! N! We proved always true that
N 2 e−a − e−b F (N ) ∞ . |µ| ≤ N! Consequently we get bnbc
X
|Un (x)| ≤
bnbc
k = dnae − x ≤ n1α
k n
X
Φ (nx − k) |µ| +
Φ (nx − k) |µ| ≤
k = dnae − x > n1α
k n
−a e−aN (N ) e ω F , 1 N !nαN nα
! N 2 e−a − e−b F (N ) ∞ N!
∞ X
! Φ (nx − k) +
k=−∞
∞ X
Φ (nx − k) ≤
k = −∞ − nx| > n1−α
! N 2 e−a − e−b F (N ) ∞ (1−α) (3.1992) e−n . N! |k
−a e−aN (N ) e ω F , + 1 N !nαN nα So we have proved that
N −a (6.3984) e−a − e−b F (N ) ∞ −n(1−α) e−aN (N ) e |Un (x)| ≤ ω1 F , α + . e N !nαN n N! We also notice that j j ∗ Gn e−· − e−x , x ≤ G∗n e−· − e−x , x ≤ e−aj G∗n
j bnbc X k |· − x| , x = e−aj Φ (nx − k) − x = e−aj · n j
k=dnae
19
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
678
bnbc
X k =k dnae − n ≤ n1α
j k Φ (nx − k) − x + n
x
bnbc
X k =k dnae − n > n1α
j k Φ (nx − k) − x ≤ n
x
1 j e−aj nαj + (b − a)
bnbc
X k =k dnae − n > n1α
Φ (nx − k) ≤
x
e−aj
1 j −n(1−α) + (3.1992) (b − a) e . nαj
Thus we have proved 1 ∗ j −n(1−α) −· −x j −aj ,x ≤ e + (3.1992) (b − a) e , Gn e − e nαj for j = 1, ..., N , and the theorem. We continue with Theorem 8 Let f ∈ C N − π2 + ε, π2 − ε , n, N ∈ N, 0 < ε < π2 , ε small, π x ∈ − 2 + ε, π2 − ε , 0 < α < 1. Then 1) N −1 (j) X f ◦ sin (sin x) j Gn (f, x) − Gn (sin · − sin x) , x − f (x) ≤ (34) j! j=1
ω1
f ◦ sin−1
(5.250312578)
(N )
nαN N !
, n1α
+
(N )
(3.1992) 2N +1 f ◦ sin−1
(1−α) ∞ −n , e N!
2) |Gn (f, x) − f (x)| ≤ (5.250312578) · (j) N f ◦ sin−1 X (sin x) 1 j −n(1−α) + (3.1992) (π − 2ε) e + j! nαj j=1
20
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
ω1
f ◦ sin−1
(N )
, n1α
+
N !nαN
(1−α) (3.1992) 2N +1 (N )
, e−n
f ◦ sin−1 N! ∞
(35) 3) assume further f (j) (x0 ) = 0, j = 1, ..., N for some x0 ∈ − π2 + ε, π2 − ε , it holds |Gn (f, x0 ) − f (x0 )| ≤ (5.250312578) · (36)
(N ) (N )
(3.1992) 2N +1 f ◦ sin−1 ω1 f ◦ sin−1 , n1α
(1−α) ∞ −n . e + nαN N ! N! Notice in the last the high speed of convergence of order n−α(N +1) . Proof. Call F := f ◦ sin−1 and let
k n, x
∈ − π2 + ε, π2 − ε . Then
j N X k k F (j) (sin x) f − f (x) = sin − sin x + n j! n j=1 1 (N − 1)!
k n
Z
sin
x
N −1 k − sin s F (N ) (sin s) − F (N ) (sin x) d sin s. n
Hence bnbc X k Φ (nx − k) = Φ (nx − k) − f (x) n
bnbc
X
f
k=dnae
k=dnae
bnbc N X F (j) (sin x) X j=1
j!
k=dnae
Z x
k n
j bnbc X k 1 Φ (nx − k) sin − sin x + Φ (nx − k) · n (N − 1)! k=dnae
N −1 k sin − sin s F (N ) (sin s) − F (N ) (sin x) d sin s. n
Set here a = − π2 + ε, b =
π 2
− ε. Thus bnbc
G∗n
(f, x) − f (x)
X
Φ (nx − k) =
k=dnae N X F (j) (sin x) j=1
j!
j G∗n (sin · − sin x) , x + Mn (x) ,
where bnbc
Mn (x) :=
X k=dnae
21
Φ (nx − k) ρ,
679
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
with N −1 Z nk 1 k sin − sin s F (N ) (sin s) − F (N ) (sin x) d sin s. (N − 1)! x n k Case of n − x ≤ n1α . i) Subcase of nk ≥ x. The function sin is increasing on [a, b] , i.e. sin nk ≥ sin x. Then N −1 Z nk 1 k (N ) (sin s) − F (N ) (sin x) d sin s = |ρ| ≤ sin − sin s F (N − 1)! x n ρ :=
k n
N −1 k sin − w F (N ) (w) − F (N ) (sin x) dw ≤ n sin x N −1 Z sin nk 1 k ω1 F (N ) , |w − sin x| dw ≤ sin − w (N − 1)! sin x n N sin nk − sin x k (N ) ω1 F , sin − sin x ≤ n N! N k k 1 1 n −x ≤ ω1 F (N ) , α . ω1 F (N ) , − x n N! n nαN N !
1 (N − 1)!
So if
k n
sin
Z
≥ x, then 1 1 |ρ| ≤ ω1 F (N ) , α . n N !nαN
ii) Subcase of 1 |ρ| ≤ (N − 1)!
k n
Z
≤ x, then sin nk ≤ sin x. Hence x
k n
k sin s − sin n
N −1 (N ) (sin s) − F (N ) (sin x) d sin s = F
N −1 k (N ) (w) − F (N ) (sin x) dw ≤ w − sin F k n sin n N −1 Z sin x 1 k w − sin ω1 F (N ) , |w − sin x| dw ≤ (N − 1)! sin nk n Z sin x N −1 k 1 k (N ) ω1 F , sin x − sin w − sin dw ≤ k (N − 1)! n n sin n N 1 k sin x − sin kk (N ) ω1 F , x − ≤ (N − 1)! n N N 1 1 k 1 1 ω1 F (N ) , α x − ≤ αN ω1 F (N ) , α . N! n n n N! n 1 (N − 1)!
Z
sin x
22
680
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
k n
We proved for
681
≤ x that 1 1 |ρ| ≤ ω1 F (N ) , α . n nαN N !
So in both cases we got that 1 1 |ρ| ≤ ω1 F (N ) , α , αN n n N! when nk − x ≤ n1α . Also in general ( nk ≥ x case)
1 (N − 1)! 1 N! Also (case of
k n
k n
Z
1 |ρ| ≤ (N − 1)!
x k n
sin
Z
! N −1
k
sin − sin s d sin s 2 F (N ) = n ∞
sin x
k sin − w n
N −1
!
dw 2 F (N )
∞
=
N
2N +1 k
(N ) sin − sin x 2 F (N ) ≤
F
. n N! ∞ ∞
≤ x) we get
1 (N − 1)!
x
Z
1 |ρ| ≤ (N − 1)!
k n
Z
sin x
sin
k n
k sin s − sin n
N −1
!
d sin s 2 F (N )
∞
=
N −1 !
k
dw 2 F (N ) = w − sin n ∞
sin x − sin nk 1 (N − 1)! N
N
2 F (N )
∞
≤
2N +1
(N )
F
. N! ∞
So we proved in general that |ρ| ≤
2N +1
(N )
F
. N! ∞
Therefore we derive bnbc
|Mn (x)| ≤
bnbc
X
Φ (nx − k) |ρ| +
k=dnae (k:|x− nk |≤ n1α )
ω1 F (N ) , n1α N !nαN
X
Φ (nx − k) |ρ| ≤
k=dnae (k:|x− nk |> n1α )
! +
(3.1992) 2N +1
(N ) −n(1−α) .
F
e N! ∞ 23
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
So that
ω1 F (N ) , n1α (3.1992) 2N +1
(N ) −n(1−α) . + |Mn (x)| ≤
e
F N !nαN N! ∞ Next we estimate ∗ j j Gn (sin · − sin x) , x ≤ G∗n |sin · − sin x| , x ≤ j bnbc X k j G∗n |· − x| , x = Φ (nx − k) − x ≤ n k=dnae
(work as before) (1−α) 1 j + (3.1992) (π − 2ε) e−n . nαj
Therefore (1−α) 1 ∗ j j , Gn (sin · − sin x) , x ≤ αj + (3.1992) (π − 2ε) e−n n j = 1, ..., N. The theorem is proved. We finally give Theorem 9 Let f ∈ C N ([ε, π − ε]), n, N ∈ N, ε > 0 small, x ∈ [ε, π − ε], 0 < α < 1. Then 1) N −1 (j) X f ◦ cos (cos x) j Gn (f, x) − Gn (cos · − cos x) , x − f (x) ≤ (37) j! j=1
ω1
f ◦ cos−1
(5.250312578)
(N )
nαN N !
, n1α
+
(N )
(3.1992) 2N +1 f ◦ cos−1
(1−α) ∞ −n , e N!
2) |Gn (f, x) − f (x)| ≤ (5.250312578) · (j) N f ◦ cos−1 X (cos x) 1 j −n(1−α) + (3.1992) (π − 2ε) e + j! nαj j=1
24
682
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
ω1
f ◦ cos−1
(N )
, n1α
+
N !nαN
683
(1−α) (3.1992) 2N +1 (N )
, e−n
f ◦ cos−1 N! ∞
(38) 3) assume further f (j) (x0 ) = 0, j = 1, ..., N for some x0 ∈ [ε, π − ε], it holds |Gn (f, x0 ) − f (x0 )| ≤ (5.250312578) · (39)
(N ) (N ) 1
(3.1992) 2N +1 f ◦ cos−1 ω1 f ◦ cos−1 , nα
(1−α) ∞ −n . e + nαN N ! N! Notice in the last the high speed of convergence of order n−α(N +1) . Proof. Call F := f ◦ cos−1 and let
k n, x
∈ [ε, π − ε]. Then
j N X k k F (j) (cos x) f − f (x) = cos − cos x + n j! n j=1 1 (N − 1)!
k n
Z x
N −1 k cos − cos s F (N ) (cos s) − F (N ) (cos x) d cos s. n
Hence bnbc
X
f
k=dnae
bnbc X k Φ (nx − k) − f (x) Φ (nx − k) = n k=dnae
bnbc N X F (j) (cos x) X j=1
j! Z x
k=dnae k n
k Φ (nx − k) cos − cos x n
j
bnbc X 1 + Φ (nx − k) · (N − 1)! k=dnae
N −1 k cos − cos s F (N ) (cos s) − F (N ) (cos x) d cos s. n
Set here a = ε, b = π − ε. Thus bnbc
G∗n
(f, x) − f (x)
X
Φ (nx − k) =
k=dnae N X F (j) (cos x) j=1
j!
j G∗n (cos · − cos x) , x + Θn (x) ,
where bnbc
Θn (x) :=
X k=dnae
25
Φ (nx − k) λ,
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
with λ :=
1 (N − 1)!
Z
k n
N −1 k cos − cos s F (N ) (cos s) − F (N ) (cos x) d cos s = n
x
N −1 Z cos nk 1 k cos − w F (N ) (w) − F (N ) (cos x) dw. (N − 1)! cos x n k Case of n − x ≤ n1α . i) Subcase of nk ≥ x. The function cos ine is decreasing on [a, b] , i.e. cos nk ≤ cos x. Then N −1 Z cos x 1 k |λ| ≤ w − cos | F (N ) (w) − F (N ) (cos x) |dw ≤ (N − 1)! cos nk n 1 (N − 1)!
cos x
Z
cos
k n
k w − cos n
N −1
ω1 F (N ) , |w − cos x| dw ≤
N cos x − cos nk k (N ) ≤ ω1 F , cos x − cos n N! N k x − nk 1 1 ω1 F (N ) , x − ≤ ω1 F (N ) , α . n N! n nαN N ! So if
k n
≥ x, then |λ| ≤ ω1 F
ii) Subcase of
k n
1 |λ| ≤ (N − 1)! 1 (N − 1)!
(N )
1 , α n
1 . nαN N !
≤ x, then cos nk ≥ cos x. Hence Z
cos
k n
cos x
Z
cos
k n
N −1 k (N ) (w) − F (N ) (cos x) dw ≤ cos − w F n cos
cos x
k −w n
N −1
ω1 F (N ) , w − cos x dw ≤
Z cos k N −1 n 1 k k (N ) ω1 F dw ≤ , cos − cos x cos − w (N − 1)! n n cos x N cos nk − cos x 1 (N ) k ω1 F , − x ≤ (N − 1)! n N N k 1 1 1 (N ) k (N ) 1 ω1 F , − x − x ≤ ω1 F , α . N! n n N! n nαN
26
684
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
We proved for
k n
685
≤ x that 1 1 . |λ| ≤ ω1 F (N ) , α n N !nαN
So in both cases we got that 1 1 |λ| ≤ ω1 F (N ) , α , n nαN N ! when nk − x ≤ n1α . Also in general ( nk ≥ x case) 1 |λ| ≤ (N − 1)! 1 N! Also (case of
k n
cos x
Z
cos
k n
k w − cos n
N −1
!
dw 2 F (N )
∞
≤
N
2N +1 k
(N ) cos x − cos 2 F (N ) ≤
.
F n N! ∞ ∞
≤ x) we get
1 |λ| ≤ (N − 1)! 1 N!
Z
cos
k n
cos x
N −1 !
k
cos − w dw 2 F (N ) = n ∞
N
k 2N +1
(N ) 2 F (N ) ≤ cos − cos x
F
. n N! ∞ ∞
So we proved in general that |λ| ≤
2N +1
(N )
F
. N! ∞
Therefore we derive bnbc
bnbc
X
|Θn (x)| ≤
k=dnae (k:|x− nk |≤ n1α )
ω1 F
(N )
1 , α n
X
Φ (nx − k) |λ| +
Φ (nx − k) |λ| ≤
k=dnae (k:|x− nk |> n1α )
1 nαN N !
+ (3.1992)
2N +1
(N ) −n(1−α) .
F
e N! ∞
So that
ω1 F (N ) , n1α 2N +1
(N ) −n(1−α) + (3.1992) |Θn (x)| ≤ .
F
e nαN N ! N! ∞ Next we estimate ∗ j j Gn (cos · − cos x) , x ≤ G∗n |cos · − cos x| , x ≤ 27
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
G∗n
686
j bnbc X k Φ (nx − k) − x ≤ |· − x| , x = n j
k=dnae
(work as before) (1−α) 1 j + (3.1992) (π − 2ε) e−n . αj n
Therefore (1−α) 1 ∗ j j , Gn (cos · − cos x) , x ≤ αj + (3.1992) (π − 2ε) e−n n j = 1, ..., N. The theorem is proved.
4
Complex Neural network Approximations
We make Remark 4 Let X := [a, b], R and f : X → C with real and imaginary parts √ f1 , f2 : f = f1 + if2 , i = −1. Clearly f is continuous iff f1 and f2 are continuous. Also it holds (j) (j) f (j) (x) = f1 (x) + if2 (x) , (40) for all j = 1, ..., N , given that f1 , f2 ∈ C N (X), N ∈ N. We denote by CB (R, C) the space of continuous and bounded functions f : R → C. Clearly f is bounded, iff both f1 , f2 are bounded from R into R, where f = f1 + if2 . Here we define Gn (f, x) := Gn (f1 , x) + iGn (f2 , x) ,
(41)
Gn (f, x) := Gn (f1 , x) + iGn (f2 , x) .
(42)
and We observe here that |Gn (f, x) − f (x)| ≤ |Gn (f1 , x) − f1 (x)| + |Gn (f2 , x) − f2 (x)| ,
(43)
and kGn (f ) − f k∞ ≤ kGn (f1 ) − f1 k∞ + kGn (f2 ) − f2 k∞ . Similarly we get Gn (f, x) − f (x) ≤ Gn (f1 , x) − f1 (x) + Gn (f2 , x) − f2 (x) ,
(44)
(45)
and
Gn (f ) − f ≤ Gn (f1 ) − f1 + Gn (f2 ) − f2 . ∞ ∞ ∞ 28
(46)
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
687
We present Theorem 10 Let f ∈ C ([a, b] , C) , f = f1 + if2 , 0 < α < 1, n ∈ N, x ∈ [a, b]. Then i) |Gn (f, x) − f (x)| ≤ (5.250312578) · (47) (1−α) 1 1 ω1 f1 , α + ω1 f2 , α + (6.3984) (kf1 k∞ + kf2 k∞ ) e−n =: ψ1 , n n and ii) kGn (f ) − f k∞ ≤ ψ1 .
(48)
Proof. Based on Remark 4 and Theorem 4. We give Theorem 11 Let f ∈ CB (R, C), f = f1 + if2 , 0 < α < 1, n ∈ N, x ∈ R. Then i) Gn (f, x) − f (x) ≤ ω1 f1 , 1 + ω1 f2 , 1 + (49) nα nα (1−α)
(6.3984) (kf1 k∞ + kf2 k∞ ) e−n
=: ψ2 ,
ii)
Gn (f ) − f ≤ ψ2 . ∞
(50)
Proof. Based on Remark 4 and Theorem 5. Next we present a result of high order complex neural network approximation. Theorem 12 Let f : [a, b] → C, [a, b] ⊂ R, such that f = f1 + if2. Assume f1 , f2 ∈ C N ([a, b]) , n, N ∈ N, 0 < α < 1, x ∈ [a, b]. Then i) |Gn (f, x) − f (x)| ≤ (5.250312578) · (51) N ( f (j) (x) + f (j) (x) ) X 1 2 1 j −n(1−α) + (3.1992) (b − a) e + j! nαj j=1
(N ) (N ) (ω1 f1 , n1α + ω1 f2 , n1α )
nαN N !
(N ) (6.3984) f1
+
(N ) N + f2 (b − a) (1−α) ∞ ∞ e−n , N!
29
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
(j)
688
(j)
ii) assume further f1 (x0 ) = f2 (x0 ) = 0, j = 1, ..., N , for some x0 ∈ [a, b], it holds |Gn (f, x0 ) − f (x0 )| ≤ (5.250312578) · (52) (N ) (N ) (ω1 f1 , n1α + ω1 f2 , n1α ) + nαN N !
(N ) N (b − a) + f2 (1−α) ∞ ∞ e−n , N!
(N ) (6.3984) f1
notice here the extremely high rate of convergence at n−(N +1)α , iii) kGn (f ) − f k∞ ≤ (5.250312578) ·
(j)
(j) N X
f1 + f2 1 j −n(1−α) ∞ ∞ + (3.1992) (b − a) e + j! nαj j=1
(N ) (N ) ω1 f1 , n1α + ω1 f2 , n1α
(
nαN N !
(N ) (6.3984) f1
∞
(N ) N + f2 (b − a) ∞
N!
+
(1−α) . )e−n
(53)
Proof. Based on Remark 4 and Theorem 6.
References [1] G.A. Anastassiou, Rate of convergence of some neural network operators to the unit-univariate case, J. Math. Anal. Appli. 212 (1997), 237-262. [2] G.A. Anastassiou, Quantitative Approximations, Chapman&Hall/CRC, Boca Raton, New York, 2001. [3] G.A. Anastassiou, Basic Inequalities, Revisited, Mathematica Balkanica, New Series, Vol. 24, Fasc. 1-2 (2010), 59-84. [4] A.R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Trans. Inform. Theory, 39 (1993), 930-945. [5] F. Brauer and C. Castillo-Chavez, Mathematical models in population biology and epidemiology, Springer-Verlag, New York, pp. 8-9, 2001.
30
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
[6] F.L. Cao, T.F. Xie and Z.B. Xu, The estimate for approximation error of neural networks: a constructive approach, Neurocomputing, 71 (2008), 626-630. [7] T.P. Chen and H. Chen, Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its applications to a dynamic system, IEEE Trans. Neural Networks, 6 (1995), 911-917. [8] Z. Chen and F. Cao, The approximation operators with sigmoidal functions, Computers and Mathematics with Applications, 58 (2009), 758-765. [9] C.K. Chui and X. Li, Approximation by ridge functions and neural networks with one hidden layer, J. Approx. Theory, 70 (1992), 131-141. [10] G. Cybenko, Approximation by superpositions of sigmoidal function, Math. of Control Signals and System, 2 (1989), 303-314. [11] S. Ferrari and R.F. Stengel, Smooth function approximation using neural networks, IEEE Trans. Neural Networks, 16 (2005), 24-38. [12] K.I. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Networks, 2 (1989), 183-192. [13] N. Hahm and B.I. Hong, An approximation by neural networks with a fixed weight, Computers & Math. with Appli., 47 (2004), 1897-1903. [14] K. Hornik, M. Stinchombe and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2 (1989), 359-366. [15] K. Hornik, M. Stinchombe and H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural Networks, 3 (1990), 551-560. [16] N. Hritonenko and Y. Yatsenko, Mathematical modeling in economics, ecology and the environment, Reprint, Science Press, Beijing, pp. 92-93, 2006. [17] M. Leshno, V.Y. Lin, A. Pinks and S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Networks, 6 (1993), 861-867. [18] G.G. Lorentz, Approximation of Functions, Rinehart and Winston, New York, 1966. [19] V. Maiorov and R.S. Meir, Approximation bounds for smooth functions in C Rd by neural and mixture networks, IEEE Trans. Neural Networks, 9 (1998), 969-978.
31
689
ANASTASSIOU: NEURAL NETWORK APPROXIMATION
[20] Y. Makovoz, Uniform approximation by neural networks, J. Approx. Theory, 95 (1998), 215-228. [21] H.N. Mhaskar and C.A. Micchelli, Approximation by superposition of a sigmoidal function, Adv. Applied Math., 13 (1992), 350-373. [22] H.N. Mhaskar and C.A. Micchelli, Degree of approximation by neural networks with a single hidden layer, Adv. Applied Math., 16 (1995), 151-183. [23] S. Suzuki, Constructive function approximation by three-layer artificial neural networks, Neural Networks, 11 (1998), 1049-1058. [24] T.F. Xie and S.P. Zhou, Approximation Theory of Real Functions, Hangzhou University Press, Hangzhou, 1998. [25] Z.B. Xu and F.L. Cao, The essential order of approximation for neural networks, Science in China (Ser. F), 47 (2004), 97-112.
32
690
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 691-705 , 2012, COPYRIGHT 2012 EUDOXUS691 PRESS, LLC
Hardy inequality in block weighted spaces A. R. Moazzen, R. Lashkaripour Department of Mathematics, University of Sistan and Baluchestan Zahedan, Iran e-mail: [email protected] e-mail:[email protected]
Abstract In this study, we prove the classical Hardy inequality in block weighted sequence space. Also, by a non-negative function K = K(x, y) we define corresponding continuous case of the block sequence space. Indeed, by a non-negative Lebesgue measurable function K on (0, ∞) × (0, ∞), we define Z ∞Z ∞ p p LK := f : K(x, y)|f (x)|dx dy < ∞ . 0
The space
LpK
0
equipped by the norm kf kp,K =
R∞R∞ 0
0
p p1 K(x, y)|f (x)|dx dy is a
normed linear space. Finally, for K which is homogeneous of degree θ we prove that Z ∞ p1 kf kp,K ≤ C(p, q, θ) xp(1+θ) |f p (x)|dx . 0
Keywords: Integral inequality; Hardy’s inequality; Block weighted sequence space; N¨ urlond’s operator. Msc 2000: 26D15
1
Introduction
The classical Hardy inequality reads: Z
R ∞ x f (t)dt p 0
x
0
p p dx < p−1
Z
∞
f p (x) dx,
(p > 1)
(1 − 1)
0
where, f is nonnegative function such that f ∈ Lp (R+ ) and R+ = (0, ∞). The almost dramatic period of research for at least 10 years until G. H. Hardy [2] stated and proved (1-1) was described in details in [4]. Another important inequality is the following: If p > 1 and f is a nonnegative function such that f ∈ Lp (R+ ), then Z 0
∞ Z ∞ 0
f (x) p π dx dy < x+y sin( πp ) 1
Z 0
∞
f p (y)dy.
(1 − 2)
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
692
It was early known that these inequalities are in fact equivalent. Moreover, (1-2) is sometimes called Hilbert’s inequality even if Hilbert himself only considered the case p = 2 (Lp spaces were not defined at that time). 1 Rx We also note that (1-1) can be interpreted as the Hardy operator H : (Hf )(x) := 0 f (t)dt, x p p−1
maps Lp into Lp with the operator norm q =
since, it is known that
p p p−1
is the sharp con-
stant in (1-1) . Similarly, (1-2) may be interpreted as also the operator A : (Af )(y) := π maps Lp into lp with the operator norm .
R∞ 0
f (x) x+y dx
π p
sin
In 1928 Hardy [4] proved an estimate for some integral operators as a generalization of the Schur test, from which the first ”weighted” modification of Hardy’s inequality (1-1) followed, namely the inequality Z 0
R ∞ x f (t)dt p α 0
x
x dx
1)
(1 − 3)
0
valid , with p > 1 and α < all measurable non-negative functions f (see [3] Theorem p − 1,for p p is the best possible. The prehistory of (1-1) up to the time 330), where the constant p−α−1 when Hardy finally proved (1-1) in 1925 in [3] can be found in [6]. After that the inequality has been developed and applied in almost unbelievable ways. See for instance the books [6] , [7] devoted to this subject and also the historical article [5] and references given therein. In this study, at first we prove a weighted Hardy inequality in discrete form and then verify Hardy inequality in weighted block sequence space. Finally, by a non-negative and homogeneous function K = K(x, y) in definition 2.1 we obtain some Hardy-type inequalities.
2
New results
p Suppose that w = (wn )∞ n=1 is a non-negative sequence of real numbers . For p ∈ R\{0}, let lw ∞ denotes the space of all real sequences x = {xk }k=1 , such that ∞ X
kxkw,p :=
wk |xk |p
1
p
< ∞.
k=1
For the continuous case, Lpw denotes the space of all functions f , such that kf kw,p :=
Z
∞
1
w(x)f p (x)dx
p
< ∞.
0
Lashkaripour and Foroutannia in [8], defined the weighted block sequence space as follows. Assume that F = (Fn ) is a partition of positive integers where each Fn is a finite interval of N and max Fn < min Fn+1 (n = 1, 2, ...).
2
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
693
p The weighted block sequence space lw,F is defined as ∞ X
n
p lw,F := x = (xn ) :
o
wn | < x, Fn > |p < ∞ ,
n=1
where, < x, Fn >=
P
j∈Fn
p xj . The norm on lw,F is denoted by k.kp,w,F and is defined by
kxkp,w,F :=
∞ X
wn | < x, Fn > |p
1
p
.
n=1 p Note that with the above-mentioned definition lw,F is not a normed sequence space. Indeed, one may consider x = (1, −1, 0, 0, ...), F1 = {1, 2}, F2 = {3, 4}, ... and wn = 1 then, kxkp,w,F = 0 whereas x 6= 0. We reform the above-mentioned definition as p lw,F
∞ X
:= x = (xn ) :
wn
X
n=1
and kxkw,F :=
X ∞
p
|xj |
0, defined the sequence space n
lA(p) = x = (xn ) :
XX n
p
an,k |xk |
o
1 and K = K(x, y) be non-negative Lebesgue measurable function in (0, ∞) × (0, ∞), we define
LpK := f
∞Z ∞
Z
: 0
p
K(x, y)|f (x)|dx dy < ∞
0
The space LpK equipped by the norm kf kp,K =
R∞R∞ 0
0
p
K(x, y)|f (x)|dx dy
1
p
is a normed
linear space. In special case by taking K(x, y) = 1, for x = y and 0 in otherwise, LpK is reduced to the usual space Lp . Also, by considering K(x, y) = w(y), for x = y and 0 in otherwise, the weighted space LPw is obtained. By taking K in above-mentioned definition as: (
K(x, y) =
β
yp 0
−1
(x ≤ y), (x > y),
we obtain in Theorem 2.12 below that kf kpp,K
1 and ab ≤
1 p
+
1 q
= 1, then
ap bq + p q
for all a, b > 0. In the following, inspired by Elliot’s and Ingham’s proof for the classical Hardy’s inequality we prove a weighted Hardy inequality(see [6, p. 158]). Theorem 2.3. Let an ≥ 0 and 0 < wn ≤ 1. If p > 1, then ∞ X An p n=1
n
wn < q p
∞ X
apn wn ,
n=1
Pn
where, An = k=1 ak . proof. Let αn = Ann and α0 = 0, then we have
αnp − qαnp−1 an = αnp − qαnp−1 nαn − (n − 1)αn−1 4
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
695
= αnp (1 − nq) + (n − 1)qαnp−1 αn−1 . Now, by applying Lemma 2.2 one may obtain αnp − qαnp−1 an ≤ αnp (1 − nq) + (n − 1)q
αp
n
q
+
p αn−1 p
p = (q − 1) (n − 1)αn−1 − nαnp .
Since, 0 < wn ≤ 1 we have
p αnp wn − qαnp−1 an wn ≤ (q − 1) (n − 1)αn−1 − nαnp .
Summing from 1 to N yields N X An p
n
n=1
wn − q
N X An p−1
n
n=1
p an wn ≤ (1 − q)N αN ≤ 0,
so from H¨older’s inequality we infer that N X An p n=1
n
wn ≤ q
X N n=1
An n
p
wn
1 X ∞ q
apn wn
1
p
.
n=1
Division by the last factor completes the proof. Proposition 2.4. (Schur, see [1] Proposition 7.1) Fix p, 1 < p < ∞, and suppose that A is a matrix with non-negative entries. If sup n
∞ X
an,k = R < ∞
k=1
and sup
∞ X
an,k = C < ∞,
k n=1 1
1
then A maps lp into lp and kAxkp ≤ R p∗ C p kxkp . In the following, we generalize the above-mentioned proposition to the weighted sequence spaces for upper triangular matrices. Lemma 2.5. Suppose that 1 < p < ∞, that w = (wn ) is a non-negative increasing sequence and let B = (bn,k ) be an upper triangular matrix with non-negative entries. If we take sup
∞ X
bi,j = RB < ∞,
inf
j≥1
i≥1 j=1
j X
bi,j = CB < ∞,
i=1
then for x ≥ 0, we have 1 ∗
1
k x kw,B(p) ≤ RBp CBp k x kw,p . 5
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
696
Proof. By applying H¨older’s inequality, we have k x kpw,B(p) =
∞ X
wi
∞ X
i=1
=
≤
∞ X
bi,j xj
p
j=1
wi
∞ X
bi,j xj
i=1
j=i
∞ X
X ∞
wi
i=1 p−1 ≤ RB
bi,j
p
∞ p−1 X
j=i ∞ X
wi
i=1
bi,j xpj
j=i
∞ X
bi,j xpj .
j=i
Now, since (wn ) is increasing, one may obtain p−1 k x kpw,B(p) ≤ RB
∞ X
wj
j=1
j X
bi,j xpj
i=1
p−1 ≤ RB CB k x kw,p .
Theorem 2.6. Let an ≥ 0 and (wn ) be an increasing sequence satisfying 0 < wn ≤ 1. If p > 1 and B = (bn,k ) is an upper triangular matrix with non-negative entries, then 1 ∗
1
khan kw,B(p) ≤ RBp CBp qkan kp,w , Pn
a
k where, han = k=1 is the discrete Hardy operator. n Proof. By using Lemma 2.5 and Theorem 2.3 the assertion is proved.
Corollary 2.7. Let an ≥ 0 and 0 < wn ≤ 1 be an increasing sequence. If p > 1 and F = (Fn ) is a partition of natural numbers then 1
khan kp,w,F ≤ N p∗ qkan kp,w , Pn
where, han =
k=1
n
ak
and N is the largest cardinal number of Fn ’s.
Note that for the case of weighted block sequence space the right-hand side constant necessarily, is not the best possible. Indeed, it depends on the weight of that space. For instance, in the following we prove a special case. Theorem 2.8. Let an ≥ 0 and wn = n1α , α > 0. If p > 1 and F = (Fn ) is a partition of natural numbers such that, Fn = {nN − N + 1, nN − N + 2, ..., nN } then 1
khan kp,w,F ≤ N p∗
6
+α p
qkan kp,w ,
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
Pn
697
a
k where, han = k=1 . The constant factor is the best possible. n Proof. By applying H¨older’s inequality we have
khan kpp,w,F
∞ X 1 X
=
nα
n=1
n=1
n
p
j∈Fn
∞ X 1
≤
haj
p
N p∗ α
X
(haj )p .
j∈Fn
Since, for j ∈ Fn , we have j ≤ nN so p
∞ X X 1
+α
khan kpp,w,F ≤ N p∗
jα
n=1 j∈Fn p
≤ N p∗ p
= N p∗
+α
∞ X 1
nα n=1
+α p
≤ q p N p∗
(haj )p
(han )p
khan kpp,w
+α
kan kpp,w .
Fix n, for j ∈ Fn , take ak = 1 for 1 ≤ k ≤ j and ak = 0, otherwise. Then ∞ X 1 X
khan kpp,w,F =
n=1
=
nα
haj
p
j∈Fn
Np . nα
On the other hand khan kpw,p ≤ q p kan kpw,p,F X 1
≤ qp
k∈Fn
≤ qp So
1
khan kp,w,F ≥ qN p∗
+α p
kα
N 1−α . (n − 1)α
n − 1α p
n
khan kw,p .
By tending n to infinity one may obtain 1
khan kp,w,F ≥ qN p∗
+α p
khan kw,p .
Theorem 2.9. Let p > 1 and K = K(x, y) be non-negative Lebesgue measurable function on (0, ∞) × (0, ∞). Suppose that K is homogeneous of degree θ. Then Lpxp(1+θ) ⊆ LpK . 7
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
698
Moreover, ∞
Z
kf kp,K ≤ C(p, q, θ)
1
xp(1+θ) |f p (x)|dx
p
,
0
where
∞
Z
C(p, q, θ) =
K(t, 1)t
− p1
dt
1 Z q
∞
K(1, t)t
1
(p−1)(1+θ)− 1q
p
dt
.
0
0
Providing all integrals be finite. Proof. By applying H¨older’s inequality we have ∞Z ∞
Z
kf kpp,K
= 0
0
Z ∞Z
∞
= 0
x
1 p
K (x, y)|f (x)|
∞Z ∞
x1
K(x, y)|f p (x)|
q
y
0
0
Note that
∞
Z
1 pq
y 1
p
K(x, y)
x
0
y
0
Z
≤
p
K(x, y)|f (x)|dx dy
dx
y
1 q
K (x, y)
∞
− p1
dx =
p
dx
y 1
p
K(x, y)
x
0
Z
x
∞
Z
1 pq
K(yt, y)t
dy dx
p−1
dy.
ydt
0
= y 1+θ
∞
Z
K(t, 1)t
− p1
dt.
0
So kf kpp,K ≤ =
∞
Z
K(t, 1)t
− p1
dt
0 − p1
K(t, 1)t
p−1
Z
p−1
Z
= =
Z
∞
K(t, 1)t
− p1
dt
|f p (x)|
− p1
K(t, 1)t
dt
x1
K(x, y)|f p (x)|
K(x, y)y (p−1)(1+θ)
0
∞
Z
|f p (x)|
0
0 ∞
∞
Z
0
∞
∞
Z 0
dt
0
Z
y (p−1)(1+θ)
0
∞
Z
∞
p−1 Z
∞
y
x1 q
y
K(x, xt)(xt)(p−1)(1+θ) t
dx dy
dy dx
− 1q
xdt dx
0 ∞
p−1 Z
K(1, t)t
(p−1)(1+θ)− 1q
dt
Z
0
0
q
∞
xp(1+θ) |f p (x)|dx .
0
Corollary 2.10. Suppose that p > 1, that K(x, y) is non-negative and homogeneous of degree -1, also assume that K(x, y) = K(y, x). Then Z
p
∞Z ∞
dy ≤ C
K(x, y)|f (x)|dx 0
Z
0
∞
|f (x)|p dx,
0
where
Z
C=
∞
K(t, 1)t
− p1
dt.
0
Proof. By assumptions on K we have Z
∞
− p1
K(t, 1)t
∞
Z
dt =
0
t−1 K(t, 1)t
1− p1
0
Z
= 0
8
∞
1
K(1, t−1 )t q dt.
dt
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
∞
Z
K(1, z)z
=
−1 q
699
dz.
0
Note that Corollary 2.10 corresponds to Theorem 319 in [3]. Also, B. Yang in [10] by defining the integral operator T as: ∞
Z
(T f )(y) :=
y ∈ (0, ∞),
K(x, y)f (x)dx, 0
and assuming some conditions on K, found the norm of operator T . He supposed that K(x, y) is continuous in (0, ∞) × (0, ∞), satisfying K(x, y) = K(y, x) > 0. Then, for (≥ 0) small enough f (r, x) as and x > 0, Yang by setting K 1+
∞
Z f (r, x) := K
K(x, t) 0
x t
r
dt
(r = p, q),
proved the following assertions: f0 (r, x) = (i) if K
R∞ 0
1
K(x, t)
r
x t
dt = K0 (p)(r = p, q; x > 0), and K0 (p) is a constant indepen-
dent of x, then T ∈ B(Lr (0, ∞) → Lr (0, ∞)) and kT kr ≤ K0 (p)(r = p, q); f (r, x) = K (p)(r = p, q; x > 0) is independent of x, and K (p) = K0 (p) + o(1)( → 0+ ), (ii) if K then kT kr = K0 (p)(r = p, q). Remark 2.11. Note that similar to discrete case, constant factor necessarily, is not the best possible. Indeed, it depends on K. For instance, in the following we give a new proof for inequality (1-3), which the constant factor is the best possible. Theorem 2.12. Let p > 1 and β < p − 1. Also, suppose that β
(
yp 0
K(x, y) = and
−1
Z
∞
(x ≤ y), (x > y), xβ |f p (x)|dx < ∞.
0
Then kf kpp,K
0 and An = a1 +...an .Two matrices are associated with the sequence (an ). The first is called the N¨orlund matrix Na = (an , k)n,k≥0 is defined as follows: ( a n−k k ≤ n, An an,k = 0 k > n, and the second is called the weighted mean matrix Ma = (an,k )n,k≥0 which is defined as the following: ( ak k ≤ n, An an,k = 0 k > n. 10
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
701
In light of above-mentioned definitions, by a weight function w(x) = xα (α 6= −1) on (0, ∞) we define continuous N¨orlund and continuous weighted mean operators as the following: ∞
Z
(TN f )(y) :=
N (x, y)f (x)dx,
y ∈ (0, ∞),
M (x, y)f (x)dx,
y ∈ (0, ∞),
0
and
∞
Z
(TM f )(y) := 0
where N (x, y) =
α R(x−y) x 0
y ≤ x,
z α dz
0
M (x, y) =
y > x,
α R xy 0
y ≤ x,
z α dz
0
y > x.
−1 p .
Corollary 2.13. Suppose that p > 1 and α > kf kp,M ≤
Then
p(1 + α) kf kp , 1 + pα
(2 − 1)
1 kf kp,N ≤ (1 + α)β(1 + α, )kf kp , p
(2 − 2)
where β(., .) is beta function. Proof. Note that M = M (x, y) and N = N (x, y) are non-negative and homogenous function of degree -1. Now, by applying Theorem 2.9 we have kf kp,M ≤ C(p, q, −1)kf kp , where C(p, q, −1) =
∞
Z
M (t, 1)t
− p1
dt
q
=
M (1, t)t
∞
(1 + α)t
−1−α− p1
dt
1
1 Z
dt
q
1
=
− 1q
1
p
.
0
0
Z
∞
1 Z
(1 + α)t
α− 1q
1
dt
p
0
p(1 + α) . 1 + pα
For the second inequality we have C(p, q, −1) =
Z
∞
− p1
N (t, 1)t
1 Z
dt
q
0
∞
N (1, t)t
− 1q
dt
1
p
0
Z
= (1 + α) 1
∞
(t − 1)α t
1+α+ p1
11
1
dt
q
Z
(1 + α) 0
1
1 α −q
(1 − t) t
1
dt
p
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
702
1 = (1 + α)β(1 + α, ). p Remark 2.14. Inequalities mentioned in the above corollary mean that the operators TM and TN are bounded in Lp (0, ∞) and p(1 + α) 1 + pα
kTM kLp →Lp ≤ and
1 kTN kLp →Lp ≤ (1 + α)β(1 + α, ). p 1 x+y
Remark 2.15. By taking K(x, y) =
in Theorem 2.9 we have
C(p, q, −1) =
t− p1
∞
Z 0
1+t
dt
1 1 = β( , ) p q π . = sin( πp ) So, Theorem 2.9 asserts that Z
∞ Z ∞
0
0
f (x) p π dx dy < x+y sin( πp )
Z
∞
f p (y)dy.
0
This means that, Theorem 2.9 is a generalization of Hilbert’s inequality. Indeed, Hardy et al. in [3] prove the following statements: p , and that K(x, y) has the following properties: Suppose that p > 1, p0 = p−1 (i) K is non-negative, and homogeneous of degree -1, Z
(ii)
∞
− p1
K(x, 1)x
Z
0
∞
K(1, y)y
dx =
1 − p0
dx = k,
0 −1
−1
and either (iii) K(x, 1)x p is a strictly decreasing function of x, and K(1, y)y p0 of y, or more −1 generally, (iii0) K(x, 1)x p decreases from x = 1 onwards, while the interval (0,1) can be divided into two parts, (0, ξ) and (ξ, 1), of which one may be null, in the first of which it decreases and in −1 the second of which it increases, and K(1, y)y p0 has similar properties. Finally suppose that, when only the less stringent condition (iii0) is satisfied, (iv)
K(x, x) = 0.
Then XX
K(m, n)am bn < k
X
apm
1 X p
bp0 n
unless (am ) or (bn ) is null, XX n
p
K(m, n)am
m
< kp
X m
12
apm ,
1
p0
,
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
unless (am ) is null, XX m
p0
K(m, n)bn
X
< k p0
n
703
bp0 n,
n
unless (bn ) is null (see [3 Theorem 318]). For the corresponding theorem for integrals see [3 Theorem 319]. Note that by applying H¨older’s inequality one may easily prove that the abovementioned three inequalities are equivalent. They extend the above-mentioned assertions as follows: Suppose that K(x, y) is a strictly decreasing function of x and y, and also satisfies conditions (i) and (ii), that λm > 0, µn > 0, Λm = λ1 + λ2 + ... + λm ,
Mn = µ1 + µ2 + ... + µn ,
and that p > 1. Then 1
1
XX
p0 µnp am bn < k K(Λm , Mn )λm
X
apm
1 X p
1
bp0 n
p0
,
unless (am ) or (bn ) is null. 1 If K(x, y) = x+y , the following inequality is obtained (see [3, Theorem 321]): 1
XX
1
p0 1 X 1 X λm µnp π p0 ( apm p bp0 . am bn < n π Λ m + Mn sin p
In light of above considerations, we shall prove the following corollary of Theorem 2.9. Rx
Corollary 2.16. By taking w(x) = xα (α 6= −1) and defining W (x) = Z 0
∞Z ∞ 0
α
α
xq yp π f (x)g(y) < W (x) + W (y) sin πp
Proof. By taking
α
Z
∞
0
1 Z
p
p
f (x)dx
∞
w(t)dt, then g q (y)dy
0
0
α
α
α
xq yp xq yp K(x, y) = = (1 + α) 1+α , W (x) + W (y) x + y 1+α in Theorem 2.9, we have C(p, q, −1) =
∞
Z
− p1
K(t, 1)t
∞
1 Z
dt
0
q
K(1, t)t
dt
1
p
0
Z
α − p1 q
∞
t dt 1 + t1+α
= (1 + α) 0
1 Z q
Now, by substituting z = t1+α and noting that Z
β(a, b) = 0
we obtain
− 1q
∞
z a−1 dz, (1 + z)a+b
1 1 C(p, q, −1) = β( , ) p q 13
0
∞
α
−1
1
tp q dt 1 + t1+α
p
.
1 q
.
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
704
π
= sin
. π p
This completes the proof. wk Suppose that (wn ) and (vm ) are two non-negative sequences. Also, assume that Mw = ( W ) n n,k≥1 vk and Mv = ( Vn )n,k≥1 are the corresponding their weighted mean matrices, respectively. In this case, let Mv,w = (ai,j )i,j≥1 denotes the product of Mw and Mv . Then
ai,j =
i vj X wk . Wi k=j Vk
Now, we define the weighted mean operator and the n¨ urlond’s operator of two functions w(t) and v(t) as follows: (
Mvw (x, y)
=
Nvw (x, y) =
v(y) R x w(t) W (x) y V (t) dt
y ≤ x, y > x,
0
Rx y w(x−t)v(t−y)dt
y ≤ x, y > x.
W (x)V (y)
0
Corollary 2.17. Suppose that w(t) = tα and v(t) = tγ , that γ, α > ∞Z ∞
Z 0
0
where CM (p, q, −1) =
. Then
1
p
Mvw (x, y)|f (x)|dx
−1 p
dy
p
≤ CM (p, q, −1)kf kp ,
p(1 + α)(1 + γ) . (pα + 1)(pγ + 1)
Proof. The assertion is a consequence of Theorem 2.9. Corollary 2.18. Suppose that w(t) = tα and v(t) = tγ , that 1 < p < 2 then Z 0
where
∞Z ∞ 0
−1. If
1
p
Nvw (x, y)|f (x)|dx
−1 p
dy
p
≤ CN (p, q, −1)kf kp ,
1 CN (p, q, −1) = (1 + α)(1 + γ)β(1 + α, 1 + γ)β(−γ − , α + γ + 2). q
Proof. The assertion is a consequence of Theorem 2.9 and that Z
β(a, b) =
1
xa−1 (1 − x)b−1 dx
0
14
a, b > 0.
MOAZZEN, LASHKARIPOUR: HARDY INEQUALITY
705
References [1] G. Bennett, Factorizing the classical inequalities, Mem. Amer. Soc. (120) 576(1996), 1-130, Zbl. 0857. 26009. [2] G.H. Hardy, Notes on some points in the integral calculus LX: An inequality between integrals (60), Messenger of math., 54(1925), 150-156. [3] G.H. Hardy, J. E. Littlewood and G. Polya, Inequalities, Cambrdge University Press, Cambridge,1952. [4] G.H. Hardy, Note on some points in the integral calculus, LXIV, Further inequalities between integrals, Messenger of math., 57 (1927), 12-16. [5] A. Kufner, L. Maligranda, and L.-E. persson, The prehistory of the Hardy inequality, Amer. math. monthly, 113 8 (2006), 715-732. [6] A. Kufner, L. Maligranda, and L.-E. persson, The Hardy Inequality- About Its History and Some Related Results, Vydavatelsky Servis Publishing House, Pilsen, 2007. [7] A. Kufner, L.-E. persson, Weighted Inequalities of Hardy type, World Scientific Publishing Co., Singapore, 2003. [8] R. Lashkaripour, D. Foroutannia, Norm and lower bounds for matrices on block weighted sequence spaces I, Czechoslovak Math. J., 59 (134) (2009), 81-94. [9] D. S. MItrinovi´c, Analytic inequalities, Springer Verlag, 1970. [10] B. Yang, On the norm of an integral operator and applications, J. Math. Anal. Appl., 321 (2006), 182-192.
15
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 706-713 , 2012, COPYRIGHT 2012 EUDOXUS PRESS, 706 LLC
Univariate mixed Caputo fractional Ostrowski inequalities George A.Anastassiou Department of Mathematical Science, University of Memphis Memphis, TN 38152 U.S.A., [email protected] Abstract Very general univariate mixed Caputo fractional Ostrowski inequalities are presented. One of them is proved sharp and attained. Estimates are with respect to k·kp , 1 ≤ p ≤ ∞. 2010 AMS Matematics Subject Classification: 26A33, 26D10, 26D15. Keywords and Phrases: Ostrowski inequalities, right and left Caputo fractional derivatives.
1
Introduction
In 1938, A. Ostrowski [8] proved the following important inequality: Theorem 1 Let f : [a, b] → R be continuous on [a, b] and differentiable on (a, b) whose derivative f 0 : (a, b) → R is bounded on (a, b), i.e., kf 0 k∞ := supt∈(a,b) |f 0 (t)| < +∞. Then " # 1 Z b 2 1 (x − a+b 2 ) + · (b − a) kf 0 k∞ , f (t)dt − f (x) ≤ a − b a 4 (b − a)2 for any x ∈ [a, b].The constant
1 4
(1)
is the best possible.
Since then there has been a lot of activity around these inequalities with important applications to Numerical Analysis and Probability. This paper is greatly motivated and inspired also by the following result. Theorem 2 (see [1]). Let f ∈ C n+1 ([a, b]), n ∈ N and x ∈ [a, b] be fixed, such that f (k) (x) = 0, k = 1, ..., n. Then it holds 1 Z b f (n+1)
(x − a)n+2 + (b − x)n+2 ∞ f (y)dy − f (x) ≤ · . (2) b − a a (n + 2)! b−a
1
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
707
Inequality (2) is sharp. In particular, when n is odd is attained by f ∗ (y) := (y − x)n+1 · (b − a), while when n is even the optimal function is f (y) := |y − x|
n+α
(b − a),
α > 1.
Clearly inequality (2) generalizes inequality (1) for higher order derivatives of f. Also in [2], see Chapters 24-26, we presented a complete theory of left fractional Ostrowski inequalities. Here we combine both right and left Caputo fractional derivatives and produce Ostrowski inequalities. For the concepts of right and left Caputo fractional Calculus we use here, we refer to [3]-[7], and [9].
2
Main Results
We make Remark 3 Let [a, b] ⊂ R, α > 0, m = dαe . Let f ∈ AC m ([a, b]), x0 ∈ [a, b] (i.e. f (m−1) ∈ AC([a, b])). Thus f ∈ AC m ([a, x0 ]), and f ∈ AC m ([x0 , b]). Consequently, by [4, p.40], Z x m−1 X f (k) (x0 ) 1 α (x − x0 )k + (x − J)α−1 D∗x f (J)dJ, ∀x ∈ [x0 , b]. f (x) = 0 k! Γ(α) x0 k=0
And by [3], f (x) = Here α.
m−1 X
f (k) (x0 ) 1 (x − x0 )k + k! Γ(α)
k=0 α D∗x0 f,
Dxα0 − f
Z
x0
x
(J − x)α−1 Dxα0 − f (J)dJ, ∀x ∈ [a, x0 ].
are the left and right Caputo fractional derivatives of order
Assume f (k) (x0 ) = 0, k = 1, ..., m − 1. Then Z x 1 α f (x) − f (x0 ) = (x − J)α−1 D∗x f (J)dJ, ∀x ∈ [x0 , b] 0 Γ(α) x0
(3)
and 1 f (x) − f (x0 ) = Γ(α)
Z
x0
x
(J − x)α−1 Dxα0 − f (J)dJ, ∀x ∈ [a, x0 ].
(4)
Hence Z x α 1 (x − J)α−1 D∗x f (J) dJ 0 Γ(α) x0 Z x
α 1 ≤ (x − J)α−1 dJ D∗x f ∞,[x ,b] 0 0 Γ(α) x0 α
1 (x − x0 ) α = D∗x0 f ∞,[x0 ,b] . Γ(α) α
|f (x) − f (x0 )| ≤
2
(5)
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
Hence |f (x) − f (x0 )| ≤
α
D∗x f 0 ∞,[x
0 ,b]
Γ(α + 1)
(x − x0 )α , ∀x ∈ [x0 , b].
708
(6)
Similarly it holds Z x0
1 α−1 (J − x) |f (x) − f (x0 )| ≤ dJ Dxα0 − f ∞,[a,x0 ] Γ(α) x
1 (x0 − x)α
Dxα − f = 0 ∞,[a,x0 ] Γ(α) α
α
Dx − f 0 ∞,[α,x0 ] = (x0 − x)α , Γ(α + 1) that is |f (x) − f (x0 )| ≤
α
Dx − f 0 ∞,[a,x
0]
Γ(α + 1)
(x0 − x)α , ∀x ∈ [a, x0 ].
(7)
Next we see that Z 1 Z b 1 b f (x)dx − f (x0 ) = (f (x) − f (x0 ))dx b−a a b − a a Z b 1 ≤ |f (x) − f (x0 )| dx (8) b−a a (Z ) Z b x0 1 = |f (x) − f (x0 )| dx + |f (x) − f (x0 )| dx b−a a x0 Z x0
α
1
Dx0 − f ∞,[a,x ] (x0 − x)α dx ≤ 0 (b − a)Γ(α + 1) a ) Z b
α α + D∗x (x − x0 ) dx 0
∞,[x0 ,b]
x0
! α+1 n
(x0 − a) 1 α
Dx0 − f ∞,[a,x0 ] = (b − a)Γ(α + 1) α+1 α+1
α (b − x0 ) + D∗x f ∞,[x0 ,b] 0 α+1 n
1
Dxα − f = (x0 − a)α+1 0 ∞,[a,x0 ] (b − a)Γ(α + 2) (9) o
α + D∗x0 f ∞,[x0 ,b] (b − x0 )α+1 . We have proved that
3
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
709
Theorem 4 Let [a, b] ⊂ R, α > 0, m = dαe , f ∈ AC m ([a, b]), and Dxα0 − f ∞,[a,x0 ] ,
α
D∗x < ∞, x0 ∈ [a, b]. Assume f (k) (x0 ) = 0, k = 1, ..., m − 1. Then 0 ∞,[x ,b] 0
1 Z b n
1
Dxα − f f (x)dx − f (x0 ) ≤ (x0 − a)α+1 0 ∞,[a,x0 ] b − a a (b − a)Γ(α + 2) o
α (10) + D∗x f ∞,[x0, b] (b − x0 )α+1 0 n o
1 α ≤ max Dxα0 − f ∞,[a,x0 ] , D∗x f ∞,[x0 ,b] (b − a)α . 0 Γ(α + 2) We also make Remark 5 As before we have (α ≥ 1) Z x α 1 (x − J)α−1 D∗x f (J) dJ |f (x) − f (x0 )| ≤ 0 Γ(α) x0 Z (x − x0 )α−1 x α D∗x0 f (J) dJ ≤ Γ(α) x0
(x − x0 )α−1 α
D∗x . ≤ 0 L1([x ,b]) 0 Γ(α) That is we get |f (x) − f (x0 )| ≤
(x − x0 )α−1 α
D∗x f L , ∀x ∈ [x0 , b]. 0 1([x0 ,b]) Γ(α)
(11)
Similarly we derive (α ≥ 1) Z x0 1 |f (x) − f (x0 )| ≤ (J − x)α−1 Dxα0 − f (J) dJ Γ(α) x Z x0 α 1 α−1 Dx − f (J) dJ (x0 − x) ≤ 0 Γ(α) x
1 α−1 α ≤ (x0 − x) Dx0 − f L . 1([a,x0 ]) Γ(α) That is |f (x) − f (x0 )| ≤
(x0 − x)α−1
Dxα − f , ∀x ∈ [a, x0 ]. 0 L1 ([a,x0 ]) Γ(α)
4
(12)
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
710
As before we have Z b 1 Z b 1 f (x)dx − f (x0 ) ≤ |f (x) − f (x0 )| dx b − a a b−a a ) (Z Z b x0 1 |f (x) − f (x0 )| dx = |f (x) − f (x0 )| dx + b−a x0 a (13)
1 ≤ (x0 − x)α−1 dx Dxα0 − f L1 ([a,x0 ]) (b − a)Γ(α) a ! ) Z b
α α−1
(x − x0 ) D∗x0 f L1 ([x0 ,b]) + dx Z
x0
x0
=
n
1 (x0 − a)α Dxα0 − f L1 ([a,x0 ]) (b − a)Γ(α + 1) (14)
o
α
+ (b − x0 )α D∗x f . 0 L1 ([x0 ,b]) We have proved Theorem 6 Let α ≥ 1, m = dαe , and f ∈ AC m ([a, b]). Assume that f (k) (x0 ) = α f ∈ L1 ([x0 , b]). 0, k = 1, ..., m − 1, x0 ∈ [a, b] and Dxα0 − f ∈ L1 ([a, x0 ]), D∗x 0 Then 1 Z b n
1 f (x)dx − f (x0 ) ≤ (x0 − a)α Dxα0 − f L1 ([a,x0 ]) b − a a (b − a)Γ(α + 1) o
α +(b − x0 )α D∗x f (15) 0
≤
1 Γ(α + 1)
L1 ([x0 ,b])
n o
α f L1 ([x0 ,b]) max Dxα0 − f L1 ([a,x0 ]) , D∗x 0
· (b − a)α−1 . We further make Remark 7 Let p, q > 1 : p1 + 1q = 1 and α > 1 − p1 . Working as before on (3) and (4) and using H¨ older’s inequality we obtain 1
1 (x − x0 )α−1+ p α
D∗x |f (x) − f (x0 )| ≤ f Lq ([x0 ,b]) , ∀x ∈ [x0 , b]. (16) 0 Γ(α) (p(α − 1) + 1)1/p And also it holds 1
1 (x0 − x)α−1+ p
Dxα − f |f (x) − f (x0 )| ≤ , ∀x ∈ [a, x0 ]. (17) 0 Lq ([a,x0 ]) Γ(α) (p(α − 1) + 1)1/p
5
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
711
We see that ) (Z Z b 1 Z b x0 1 f (x)dx − f (x0 ) ≤ |f (x) − f (x0 )| dx + |f (x) − f (x0 )| dx b − a a b−a a x0 Z x0 1 1 α−1+ p ≤ (x0 − x) dx (b − a)Γ(α)(p(α − 1) + 1)1/p a ! ) Z b
1
α + (x − x0 )α−1+ p dx kD∗x0 f kLq ([x0 ,b])
Dx0− f Lq ([a,x0 ])
x0
(18) 1 (19) (b − a)Γ(α)(p(α − 1) + 1)1/p (α + p1 ) n o
1 1 (x0 − a)α+ p Dxα0 − f Lq ([a,x0 ]) + (b − x0 )α+ p kD∗x0 f kLq ([x0 ,b]) . =
We have proved Theorem 8 Let p, q > 1 : p1 + 1q = 1, α > 1 − p1 , m = dαe , α > 0, and f ∈ AC m ([a, b]). Assume that f (k) (x0 ) = 0, k = 1, ..., m − 1, x0 ∈ [a, b]. α f ∈ Lq ([x0 , b]). Then Assume Dxα0 − f ∈ Lq ([a, x0 ]), and D∗x 0 1 Z b 1 f (x)dx − f (x0 ) ≤ b − a a (b − a)Γ(α)(p(α − 1) + 1)1/p (α + p1 ) n
1 (20) (x0 − a)α+ p Dxα0 − f Lq ([a,x0 ]) o
1 α + (b − x0 )α+ p D∗x f Lq ([x0 ,b]) 0 1 Γ(α)(p(α − 1) + 1)1/p (α + p1 ) n
max Dxα0 − f L ([a,x ]) , q 0 o
α
D∗x f · (b − a)α−1/q . 0 L ([x ,b]) ≤
q
(21)
0
Corollary 9 Let p = q = 2, α > 21 , m = dαe , α > 0, and f ∈ AC m ([a, b]). Assume f (k) (x0 ) = 0, k = 1, ..., m−1, x0 ∈ [a, b]. Assume Dxα0 − f ∈ L2 ([a, x0 ]), and
6
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
712
α D∗x f ∈ L2 ([x0 , b]). Then 0 1 Z b 1 . f (x)dx − f (x0 ) ≤ b − a a (b − a)Γ(α)(2α − 1)1/2 (α + 12 ) n
1 (x0 − a)α+ 2 Dxα0 − f L 2([a,x0 ])
1
+ (b − x0 )α+ 2 D∗αx0 f
(22)
L2 ([x0 ,b])
1 √ ≤ · Γ(α) 2α − 1(α + 12 ) n
α max Dxα0 − f L ([a,x ]) , D∗x f L 0 2
0
o 2 ([x0 ,b])
1
· (b − a)α− 2 .
We further make Remark 10 Here α > 0, a ≤ x0 ≤ b. Let (x − x0 )α , x ∈ [x0 , b], f (x) := (x0 − x)α , x ∈ [a, x0 ].
(23)
See f is in AC m ([x0 , b]), and in AC m ([a, x0 ]). (k)
(k)
See that f − (x0 ) = f + (x0 ) = 0, k = 0, 1, ..., m − 1. (m−1)
(m−1)
Hence there exists f at x0 , also f ∈ AC[a, b]. That is f ∈ AC m [a, b]. We find that
α
Dx − f = Γ(α + 1), 0 ∞,[a,x0 ]
(24)
and
α
D∗x f = Γ(α + 1). 0 ∞,[x0 ,b]
(25)
Consequently Γ(α + 1)(α + 1) (x0 − a)α+1 + (b − x0 )α+1 (b − a)Γ(α + 2)(α + 1) 1 (x0 − α)α+1 + (b − x0 )α+1 = (b − a)(α + 1)
R.H.S.(10) =
Also we see that 1 L.H.S(10) = b−a
(Z
x0
(x0 − x)α dx +
a
Z
)
b
(x − x0 )α dx
x0
(x0 − a)α+1 (b − x0 )α+1 + α+1 α+1 1 = (x0 − a)α+1 + (b − x0 )α+1 . (b − a)(α + 1) =
1 b − a)
Therefore inequality (10) is sharp and attained. 7
(26)
ANASTASSIOU: FRACTIONAL OSTROWSKI INEQUALITIES
713
We have proved Proposition 11 Inequality (10) is sharp, in particular it is attained.
References [1] G. A. Anastassiou, Ostrowski type inequalities, Proc. AMS 123 (1995), 37753781. [2] G. A. Anastassiou, Functional Differentiation Inequalities, Research Monograph, Springer, New York, 2009. [3] G. A. Anastassiou, On Right Fractional Calculus, “Chaos, Solitons and Fractals”, 42 (2009), 365-376. [4] K. Diethelm, Fractional Differential Equations, http://www.tubs.de/˜ diethelm/lehre/f-dg102/fde-skript.ps.gz.
online:
[5] A. M. A. El-Sayed and M. Gaber, On the finite Caputo and finite Riesz derivatives, Electronic Journal of Theoretical Physics, Vol. 3, No. 12 (1006), 81-95. [6] G. S. Frederico and D. F. M. Torres, Fractional optimal control in the sense of Caputo and the fractional Noether’s theorem, International Mathematical Forum, Vol. 3, No. 10 (2008), 479-493. [7] R. Gorenflo and F. Mainardi, Essentials of Fractional Calculus, 2000, Maphysto Center, http://www.maphysto.dk/oldpages/events/LevyCAC2000/MainardiNotes/fm2k0a.ps ¨ [8] A. Ostrowski, (1938) Uber die Absolutabweichung einer differtentiebaren Funktion von ihrem Integralmittelwert, Comment. Math. Helv. 10, 226-227. [9] S. G. Samko, A. A. Kilbas and O. I.Marichev, Fractional Integrals and Derivatives Theory and Applications, (Gordon and Breach, Amsterdam, 1993) [English translation from the Russian, Integrals and Derivatives of Fractional Order and Some of Their Applications (Nauka i Tekhnika, Minsk, 1987)].
8
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 714-723 , 2012, COPYRIGHT 2012 EUDOXUS714 PRESS, LLC
Fixed point theorems for τ − ϕ−concave-convex operators and applications ∗ Jiandong Yin†, Ting Guo Institute of Mathematics, Nanchang University, Nanchang 330031, PR China.
Abstract In this paper, we introduce the concept of a τ − ϕ−concave–convex operator. Meanwhile by the properties of cone and monotone iterative technique, we prove some new existence and uniqueness theorems of fixed points for mixed monotone operators under more extensive conditions in ordered Banach spaces. Finally, we apple the results to a nonlinear Hammerstein integral equation. The results given in this work generalize and extend some known results (e.g., [Z. Q. Zhao, Existence and uniqueness of fixed points for some mixed monotone operators. Nonlinear Anal., 73 (2010)1481–1490.]). Key words: τ − ϕ−concave-convex operator; mixed monotone operator; fixed point; integral equation 2000 Mathematics Subject Classification: 47H10
1
Preliminaries
It is well known that mixed monotone operators are a class of important operators, which are extensively used in nonlinear differential and integral equations (see [1–3]). In the past several decades, many authors investigated the existence and uniqueness of mixed monotone operators and obtained a few fixed point theorems for mixed monotone operators in ordered Banach spaces, (see [4–9] and the references therein). Recently, some authors focused on mixed monotone operators with certain concavity and convexity (see [10–13]). In 2004, Xu [14] introduced the φ− concave(−ϕ) –convex operator and found a general method to deal with such a class of operators with certain concavity and convexity. In 2010, Zhao in [15] introduced the e-concave-convex operator and obtained some new ∗ †
Foundation item: The Natural Science Foundation from the Education Department(GJJ11295). Corresponding author: Jiandong Yin, Man, E-mail: [email protected].
1
YIN, GUO: CONVEX-CONCAVE OPERATORS
715
existence and uniqueness theorems of fixed points for mixed monotone operators without any compact or continuous assumptions or the assumption for the coupled-upper-lower solutions in ordered Banach spaces. The main difference between a φ− concave (−ϕ)– convex operator and an e-concave-convex operator is that , for a φ− concave (−ϕ)–convex operator A(x, y), there is just one variation (x or y) with a coefficient t, but for the econcave-convex operator A1 (x, y), not only the variation x has a coefficient t, but also y has a coefficient 1t , where t ∈ (0, 1). Hence they are different concepts. Moreover, the introduction of an e-concave-convex operator initiates a new research field of the fixed pint theory for binary operators. Now, a natural problem arises: can we copy with such a class of binary operators as the e-concave-convex operator with certain concavity and convexity by a general method as well as being done in [14]? In this work, in order to solve this problem, we present the concept of a τ −ϕ−concaveconvex operator which includes the e-concave-convex operator as an especial case. Without demanding the assumptions of the existence of coupled upper-lower solutions or compactness or continuity, we prove the existence and uniqueness of fixed points for τ − ϕ−concave-convex operators. Our results in essence extend and generalize the corresponding ones in [15]. As an application, we apply our main fixed point theorem to a class of nonlinear Hammerstein integral equations. For the convenience of readers and the discussion of the following sections, we state here some definitions, notations and suggest ones refer to [1,2,3, 14, 15]. Suppose E is a real Banach space with norm k · k. A nonempty convex closed set P is called a cone if it satisfies the following conditions: (i) x ∈ P , λ ≥ 0 implies λx ∈ P ; (ii) x ∈ P and −x ∈ P implies x = θ, where θ denotes the zero element of E. A partial order in E is given by u ≤ v iff v − u ∈ P . If u ≤ v and u 6= v, then we denote u < v. P is called normal if there exists a constant N such that, for all x, y ∈ E, θ ≤ x ≤ y implies kxk ≤ N k y k, the smallest constant N satisfying the above inequality is called the normality constant of P . P is called a solid cone if int(P)( the set of interior of P ) is non-empty. Assume that D ⊆ E. We say that an operator A : D × D → E is mixed monotone if A(u, v) is non-decreasing in its first argument u and is non-increasing in its second argument v, i.e., u1 ≤ u2 (u1 , u2 ∈ D) implies A(u1 , v) ≤ A(u2 , v) for any v ∈ D and v1 ≤ v2 (v1 , v2 ∈ D) implies A(u, v1 ) ≥ A(u, v2 ) for any u ∈ D. (u, v) ∈ D×D is said to be a coupled upper- lower solution of A if u ≤ v, u ≤ A(u, v), A(v, u) ≤ v. (u∗ , v ∗ ) ∈ D × D is said to be a coupled quasi- fixed point of A if A(u∗ , v ∗ ) = u∗ and A(v ∗ , u∗ ) = v ∗ . x∗ ∈ D is called a fixed point of A if A(x∗ , x∗ ) = x∗ .
2
YIN, GUO: CONVEX-CONCAVE OPERATORS
716
Definition 1.1. (see [14]) Let E be a real Banach space and D ⊂ E. A : D × D → E is said to be φ− concave(−ϕ)− convex, if there exist a function φ : (0, 1]×D → (0, ∞) and a function ϕ : (0, 1] × D → (0, ∞) such that (t, x) ∈ (0, 1] × D implies t < φ(t, x)ϕ(t, x) ≤ 1, and also A satisfies the following two conditions: (H1 ) A(tx, y) ≥ φ(t, x)A(x, y), ∀t ∈ (0, 1), ∀(x, y) ∈ D × D; 1 (H2 ) A(x, T y) ≤ ϕ(t,x) A(x, y), ∀t ∈ (0, 1), ∀(x, y) ∈ D × D. Let P be a cone in E. For e ∈ P + = P \ {θ}, set Pe = {x ∈ E| there exist positive numbers α, β such that αe ≤ x ≤ βe}. In [15], an e− concave-convex operator is introduced as follows. Definition 1.2. (see [15]) Suppose that A : Pe × Pe → Pe and e ∈ P + . Suppose that there exists an η(u, v, t) > 0 such that A(tu, t−1 v) ≥ t(1 + η(u, v, t))A(u, v), ∀u, v ∈ Ce and 0 < t < 1. Then, A is called an e− concave-convex operator. Now, we give the definition of τ − ϕ−concave–convex operators. Definition 1.3. Let E be a real Banach space, P be a cone in E. We say an operator A : P × P → P is τ − ϕ−concave–convex if there exist a function τ (t) defined on an interval (a, b) and a map ϕ : (a, b) × P × P → (0, +∞) such that (C1 ) τ :³(a, b) → (0,´1) is a surjection; 1 y ≥ ϕ(t, x, y)A(x, y), ∀t ∈ (a, b), ∀(x, y) ∈ P × P ; (C2 ) A τ (t)x, τ (t) (C3 ) ϕ(t, x, y) > τ (t), ∀t ∈ (a, b), ∀(x, y) ∈ P × P . When τ (t) = t, we say that A is a ϕ− concave– convex operator. Remark 1.1. The τ − ϕ−concave–convex operator as well as the ϕ− concave– convex operator is a generalization of the e− concave- convex operator which is introduced in [15]. Remark 1.2. Condition (C2 ) in Definition 1.3 implies that ¶ µ 1 1 ´ A(x, y), ∀t ∈ (a, b), ∀(x, y) ∈ P × P. A x, τ (t)y ≤ ³ 1 τ (t) ϕ t, τ (t) x, τ (t)y
3
(1.1)
YIN, GUO: CONVEX-CONCAVE OPERATORS
2
717
Main results In this section, we present our main results.
Theorem 2.1. Let E be a real Banach space and P be a normal cone in E. Suppose that an operator A : P × P → P is mixed monotone and τ − ϕ−concave-convex. In addition, suppose that there exists h ∈ P + such that A(h, h) ∈ Ph and for any t ∈ (a, b) and x, y ∈ Ph , ϕ(t, x, y) ≥ ϕ(t, h, h). Then operator A has a unique fixed point x∗ in Ph and for any initial x0 , y0 ∈ Ph , constructing successively the sequences xn = A(xn−1 , yn−1 ), yn = A(yn−1 , xn−1 ), n = 1, 2, · · · , we have kxn − x∗ k → 0 and kyn − x∗ k → 0(n → ∞). Proof. Since A(h, h) ∈ Ph , we can choose a sufficiently small number λ0 ∈ (0, 1) such that λ0 h ≤ A(h, h) ≤
1 h. λ0
It follows from (C1 ) that there exists t0 ∈ (a, b) such that τ (t0 ) = λ0 , hence τ (t0 )h ≤ A(h, h) ≤
1 h. τ (t0 )
(2.1)
From (C3 ), we know that ϕ(t0 , h, h) > τ (t0 ). Thus, ϕ(t0 , h, h)/τ (t0 ) > 1 and we can take a positive integer k such that µ
which implies
µ
ϕ(t0 , h, h) τ (t0 )
¶k
τ (t0 ) ϕ(t0 , h, h)
1 , τ (t0 )
(2.2)
≤ τ (t0 ).
(2.3)
≥ ¶k
Set u0 = [τ (t0 )]k h, v0 = 1/[τ (t0 )]k h. Clearly, u0 , v0 ∈ Ph and u0 = [τ (t0 )]2k v0 < v0 . Take any r ∈ [0, [τ (t0 )]2k ], then r ∈ (0, 1) and u0 ≥ rv0 . By the mixed monotonicity of A, we get A(u0 , v0 ) ≤ A(v0 , u0 ). Combining condition (C2 ) with (2.1) and (2.2), we obtain ¶ µ ¶ µ 1 1 1 k k−1 h = A τ (t0 ) · [τ (t0 )] h, · h A(u0 , v0 ) = A [τ (t0 )] h, [τ (t0 )]k τ (t0 ) [τ (t0 )]k−1 µ ¶ µ ¶ 1 1 k−1 k−1 ≥ ϕ t0 , [τ (t0 )] h, h A [τ (t0 )] h, h [τ (t0 )]k−1 [τ (t0 )]k−1 ¶ µ ¶ µ 1 1 k−1 ≥ ϕ t0 , [τ (t0 )] h, h · · · ϕ t0 , τ (t0 )h, h ϕ(t0 , h, h)A(h, h) [τ (t0 )]k−1 τ (t0 ) ≥ [ϕ(t0 , h, h)]k · τ (t0 )h ≥ [τ (t0 )]k h = u0 . 4
YIN, GUO: CONVEX-CONCAVE OPERATORS
718
For t ∈ (a, b), from (1.1) and (2.3), we have µ ¶ ¶ µ 1 1 1 k−1 k A(v0 , u0 ) = A · h, [τ (t0 )] h = A h, τ (t0 ) · [τ (t0 )] h [τ (t0 )]k τ (t0 ) [τ (t0 )]k−1 µ ¶ 1 1 k−1 ´A ³ h, [τ (t0 )] h ≤ [τ (t0 )]k−1 ϕ t0 , [τ (t01)]k−1 h, [τ (t0 )]k−1 h ≤
1 1 ´ ··· ³ ´· A(h, h) ϕ(t0 , h, h) ϕ t0 , [τ (t01)]k−1 h, [τ (t0 )]k−1 h ϕ t0 , τ (t10 ) h, τ (t0 )h
≤
1 1 1 h≤ · h = v0 . k [ϕ(t0 , h, h)] τ (t0 ) [τ (t0 )]k
³
1
Hence we get u0 ≤ A(u0 , v0 ) ≤ A(v0 , u0 ) ≤ v0 .
(2.4)
Construct successively the sequences un = A(un−1 , vn−1 ), vn = A(vn−1 , un−1 ), n = 1, 2, · · · . By the mixed monotonicity of A, we have u1 = A(u0 , v0 ) ≤ A(v0 , u0 ) = v1 . By induction, we get un ≤ vn , n = 1, 2 · · · . Then from (2.4) and the mixed monotonicity of A, we have u0 ≤ u 1 ≤ · · · ≤ u n ≤ · · · v n ≤ · · · v 1 ≤ v 0 . (2.5) Noting that u0 ≥ rv0 , we get un ≥ u0 ≥ rv0 ≥ rvn , n = 1, 2, · · · . Set rn = sup{r > 0|un ≥ rvn }, n = 1, 2, · · · , then we have un ≥ rn vn , n = 1, 2, · · · and un+1 ≥ un ≥ rn vn ≥ rn vn+1 , n = 1, 2, · · · . So rn+1 ≥ rn , namely, {rn } is increasing with {rn } ⊂ (0, 1]. Assume rn → r∗ as n → ∞. Next we prove r∗ = 1. Indeed, suppose to the contrary that 0 < r∗ < 1. From (C1 ), there is t1 ∈ (a, b) such that τ (t1 ) = r∗ . Distinguish two cases: Case one: There exists a positive integer N such that rN = r∗ . In this case, we get rn = r∗ for all n ≥ N . Therefore, for all n ≥ N , we have ¶ µ ¶ µ 1 1 ∗ un un+1 = A(un , vn ) ≥ A r vn , ∗ un = A τ (t1 )vn , r τ (t1 ) ≥ ϕ(t1 , vn , un )A(vn , un ) ≥ ϕ(t1 , h, h)A(vn , un ) = ϕ(t1 , h, h)vn+1 . 5
YIN, GUO: CONVEX-CONCAVE OPERATORS
719
By the definition of rn , we obtain rn+1 = r∗ ≥ ϕ(t1 , h, h) > τ (t1 ) = r∗ , which is a contradiction. Case two: For all positive integer n, rn < r∗ . Then we get 0 < rn /r∗ < 1. By (C1 ), there exist sn ∈ (a, b) such that τ (sn ) = rn /r∗ . So we have µ n ¶ r ∗ 1 r∗ 1 un+1 = A(un , vn ) ≥ A(rn vn , un ) = A r v n , n ∗ un rn r∗ r r µ ¶ 1 1 1 1 · ∗ un ≥ ϕ(sn , r∗ vn , ∗ un )A(r∗ vn , ∗ un ) = A τ (sn ) · r∗ vn , τ (sn ) r r r ¶ µ 1 1 un ≥ ϕ(sn , h, h)A(r∗ vn , ∗ un ) = ϕ(sn , h, h)A τ (t1 )vn , r τ (t1 ) ≥ ϕ(sn , h, h)ϕ(t1 , h, h)A(vn , un ) = ϕ(sn , h, h)ϕ(t1 , h, h)vn+1 . By the definition of rn , we have rn+1 ≥ ϕ(sn , h, h)ϕ(t1 , h, h) > τ (sn )ϕ(t1 , h, h) =
rn ϕ(t1 , h, h). r∗
Let n → ∞, we obtain r∗ ≥ (r∗ /r∗ )ϕ(t1 , h, h) > τ (t1 ) = r∗ , which is also a contradiction. Hence limn→∞ rn = 1. For any natural number m, we have θ ≤ un+m − un ≤ vn − un ≤ vn − rn vn ≤ (1 − rn )vn ≤ (1 − rn )v0 ; θ ≤ vn − vn+m ≤ vn − un ≤ (1 − rn )v0 . By the normality of P , we get, as n → ∞, k un+m − un k≤ N k (1 − rn )v0 k→ 0; k vn − vn+m − k≤ N k (1 − rn )v0 k→ 0. Here N is the normality constant of P . So {un } and {vn } are Cauchy sequences. Since E is complete, there exist u∗ and v ∗ such that un → u∗ , vn → v ∗ as n → ∞. From (2.5), we have u∗ , v ∗ ∈ [u0 , v0 ] and for any n, un ≤ u∗ ≤ v ∗ ≤ vn . Therefore, θ ≤ v ∗ − u∗ ≤ vn − un ≤ (1 − rn )v0 . Since P is normal, k v ∗ − u∗ k≤ N k (1 − rn )v0 k→ 0 as n → ∞. Thus u∗ = v ∗ . Let x∗ = u∗ = v ∗ , we have, by the mixed monotonicity of A, un+1 = A(un , vn ) ≤ A(x∗ , x∗ ) ≤ A(vn , un ) = vn+1 . Take n → ∞, the normality of P implies x∗ = A(x∗ , x∗ ). Namely, x∗ is a fixed point of A. 6
YIN, GUO: CONVEX-CONCAVE OPERATORS
720
In the following, we prove that x∗ is the unique fixed point of A in Ph . In fact, assume x e is also a fixed point of A in Ph . By the definition of Ph , there exist positive numbers λ1 , λ2 , η1 , η2 such that λ1 h ≤ x∗ ≤ λ2 h, η1 h ≤ x e ≤ η2 h. Then,
η1 η1 λ2 h ≥ x ∗ . λ2 λ2 λ λ1 1 x ∗ ≥ λ1 h ≥ η2 h ≥ x e. η2 η2 Put γ1 = sup{γ > 0 : x e ≥ γx∗ , x∗ ≥ γe x}. Clearly, 0 < γ1 < +∞. Furthermore, we prove γ1 ≥ 1. Suppose to the contrary that 0 < γ1 < 1, by (C1 ), there exists t2 ∈ (a, b) such that τ (t2 ) = γ1 . Thus, ¶ µ ¶ µ 1 ∗ ∗ ∗ 1 ∗ x x e = A(e x, x e) ≥ A γ1 x , x = A τ (t2 )x , γ1 τ (t2 ) ≥ ϕ(t2 , x∗ , x∗ )A(x∗ , x∗ ) = ϕ(t2 , x∗ , x∗ )x∗ . x e ≥ η1 h ≥
Because ϕ(t2 , x∗ , x∗ ) > τ (t2 ) = γ1 , this is in contradiction with the definition of γ1 . Thus γ1 ≥ 1 and then we have x e ≥ x∗ and x∗ ≥ x e. Hence x e = x∗ , that is A has a unique fixed point in Ph . For any initial x0 , y0 ∈ Ph , we can choose a small number e1 , e2 ∈ (0, 1) such that e1 h ≤ x0 ≤
1 1 h, e2 h ≤ y0 ≤ h. e1 e2
By (C1 ) again, there exist t3 , t4 ∈ (a, b) such that τ (t3 ) = e1 , τ (t4 ) = e2 . Hence, τ (t3 )h ≤ x0 ≤
1 1 h, τ (t4 )h ≤ y0 ≤ h. τ (t3 ) τ (t4 )
We can choose a sufficiently large positive integer m such that · · ¸m ¸m ϕ(t4 , h, h) ϕ(t3 , h, h) 1 1 ≥ , ≥ . τ (t3 ) τ (t3 ) τ (t4 ) τ (t4 ) Let α = min{τ (t3 ), τ (t4 )}, then α ∈ (0, 1). By (C1 ), there exists µ1 ∈ (a, b) such that τ (µ1 ) = α. Put u0 = [τ (µ1 )]m h, v 0 = (1/[τ (µ1 )]m )h. Then u0 , v 0 ∈ Ph and u0 < x0 , y0 < v 0 . Let un = A(un−1 , v n−1 ), v n = A(v n−1 , un−1 ) and xn = A(xn−1 , yn−1 ), yn = A(yn−1 , xn−1 ). By a proof similar to that of the existence of x∗ , we can prove that there exists y ∗ ∈ Ph such that limn→∞ un = limn→∞ v n = y ∗ and A(y ∗ , y ∗ ) = y ∗ . The uniqueness of fixed point of A implies y ∗ = x∗ . By induction, we can get un ≤ xn , yn ≤ v n n = 1, 2, · · · . Since P is normal, we have limn→∞ xn = x∗ and limn→∞ yn = x∗ . 7
YIN, GUO: CONVEX-CONCAVE OPERATORS
721
Remark 2.1. If the hypothesis (C3 ) in Definition1.3 is replaced by (C30 ) ϕ(t, x, y) is non-decreasing in x and is non-increasing in y, and the other conditions are unchangeable in Theorem 2.1, then the conclusions of Theorem 2.1 are also true. Remark 2.2. If we suppose that operator A : Ph × Ph → Ph , then A(h, h) ∈ Ph is clear. So we have the following corollaries. Corollary 2.1. Let E be a real Banach space and P be a normal cone in E. Suppose h > θ and that an operator A : Ph × Ph → Ph is mixed monotone and τ − ϕ−concaveconvex. In addition, suppose for any t ∈ (a, b) and x, y ∈ Ph , ϕ(t, x, y) ≥ ϕ(t, h, h). Then operator A has a unique fixed point x∗ in Ph and for any initial x0 , y0 ∈ Ph , constructing successively the sequences xn = A(xn−1 , yn−1 ), yn = A(yn−1 , xn−1 ), n = 1, 2, · · · , we have kxn − x∗ k → 0 and kyn − x∗ k → 0(n → ∞). Corollary 2.2. (see [15]) Let E be a real Banach space and P be a normal cone in E. Suppose h > θ and that A : Ph × Ph → Ph is a mixed monotone operator. Suppose that there exists an η(t, u, v) > 0 such that ¶ µ 1 (2.6) A tu, v ≥ t(1 + η(t, u, v))A(u, v), ∀ u, v ∈ Ph , 0 < t < 1. t Then A has a unique fixed point x∗ in Ph and for any initial x0 , y0 ∈ Ph , constructing successively the sequences xn = A(xn−1 , yn−1 ), yn = A(yn−1 , xn−1 ), n = 1, 2, · · · , we have kxn − x∗ k → 0 and kyn − x∗ k → 0(n → ∞). Proof. In Theorem 2.1, put τ (t) = t and ϕ(t, u, v) = t(1+η(t, u, v)), then all the conditions in Theorem 2.1 are satisfied. Thus, from Theorem 2.1, we prove Corollary 2.2. Remark 2.3. The corresponding results in [15] turn out to be special cases of our main results, (see Theorem 3.1 and 3.2, Corollary 4.1 and 4.2 in [15]). Remark 2.4. In our results, as well as in [15], we need not assume that the operator A is compact and continuous or that the condition of the coupled upper–lower solutions holds. Thus our results in essence improve and extend relevant results in [11-13].
8
YIN, GUO: CONVEX-CONCAVE OPERATORS
3
722
Applications
In this section, we investigate the existence of solutions for a class of nonlinear Hammerstein integral equations on unbounded regions by using some of the fixed-point theorems given above. Example : Considering the following nonlinear integral equation: Z £ ¤ x(t) = (Ax)(t) = K(t, s) xα (s) + x−β (s) ds, (3.1) Rn
where α, β ∈ (0, 1) are positive constants. Conclusion: Suppose that K : Rn × Rn → R1 is a positive and continuous function. Then equation (3.1) has a unique positive solution x∗ (t). Moreover, constructing ∞ successively the sequences {xn }∞ n=1 and {yn }n=1 with Z h i −β α xn (t) = K(t, s) xn−1 (s) + yn−1 (s) ds, Rn
Z yn (t) = Rn
h i α K(t, s) yn−1 (s) + x−β (s) ds n−1
for any positive continuous functions x0 (t), y0 (t) defined on Rn , we have, as n → ∞, sup |xn (t) − x∗ (t)| → 0 and
t∈Rn
sup |yn (t) − x∗ (t)| → 0.
t∈Rn
Proof. Let E = CB (Rn ) be the space of continuous functions defined on Rn . For any x ∈ E, define kxk = supt∈Rn |x(t)|, then E is a Banach space. Denote by P = CB+ (Rn ) the set of all nonnegative functions on CB (Rn ), then P is a normal solid cone in E. Hence, giving a positive continuous function h(t) on Rn , we know that there exist positive constants λ1 , λ2 ∈ (0, 1) such that, for any t ∈ Rn , λ1 h(t) ≤ x0 (t) ≤ (1/λ1 )h(t) and λ2 h(t) ≤ y0 (t) ≤ (1/λ2 )h(t). Clearly, equation (3.1) can be denoted by x = A(x, x), where A(x, y) = A1 (x) + A2 (y), R R A1 (x) = Rn K(t, s)xα (s)ds, A2 (y) = Rn K(t, s)y −β (s)ds. Then A : P × P → P . In following, we prove that A satisfies all the conditions of Theorem 2.1. Assume that P defines a partial order ≤ in E, then A : P × P → P is a mixed monotone operator. Set Ph = {x ∈ E : there exist λ, µ > 0 such that λh(t) ≤ x(t) ≤ µh(t), ∀t ∈ Rn }. Since K : Rn × Rn → R1 is a positive and continuous function, A(h, h) ∈ Ph . Let α0 = min{α, β}, then α0 ∈ (0, 1). For any map τ (t) : (a, b) → (0, 1), where a < b, let ϕ(t) = τ (t)α0 , then A is a τ − ϕ− concave-convex operator. Thus, by Theorem 2.1, we know that equation (3.1) has a unique positive solution x∗ (t). 9
YIN, GUO: CONVEX-CONCAVE OPERATORS
723
References [1] D. J. Guo, V. Lakshmikantham, Coupled fixed points of nonlinear operators with applications, Nonlinear Anal. 11 (1987) 623–632. [2] K. Deimling, Nonlinear Functional Analysis, Springer-Verlag, New York, Berlin, 1985. [3] D. J. Guo, V. Lakshmikantham, Nonlinear Problems in Abstract Cones, Academic Press, Boston, 1988. [4] D. J. Guo, Existence and uniqueness of positive fixed point for mixed monotone operators with applications, Appl. Anal. 46 (1992) 91–100. [5] D. J. Guo, Existence and uniqueness of positive fixed point for mixed monotone operators with applications, Appl. Anal. 34 (1988) 215–224. [6] Z. T. Zhang, New fixed point theorems of mixed monotone operators and applications, J. Math. Anal. Appl. 204 (1996) 307–319. [7] Z. T. Zhang, Fixed-point theorems of mixed monotone operators and applications, Acta Math. Sinica 41 (1998) 1121–1126 (in Chinese). [8] Y. S. Wu, G. Z. Li, On the fixed point existence and uniqueness theorems of mixed monotone operators and applications, Acta Math. Sinica 46 (2003) 161–166 (in Chinese). [9] X. G. Lian,Y. J. Li, Fixed point theorems for a class of mixed monotone operators with applications, Nonlinear Anal. 67 (2007) 2752–2762. [10] Z. Drici, F.A. McRae, J. Vasundhara Devi, Fixed point theorems for mixed monotone operators with PPF dependence, Nonlinear Anal. 69 (2008) 632–636. [11] Z. Q. Zhao, Xinsheng Du, Fixed points of generalized e-concave (generalized e-convex) operators and their applications, J. Math. Anal. Appl. 334 (2007) 1426–1438. [12] Y. X. Wu, Z. D. Liang, Existence and uniqueness of fixed points for mixed monotone operators with applications, Nonlinear Anal. 65 (2006) 1913–1924. [13] M. Y. Zhang, Fixed point theorems of φ−convex ϕ−concave mixed monotone operators and applications, J. Math. Anal. Appl. 339 (2008) 970–981. [14] S. Y. Xu, B. G. Jia, C. X. Zhu,Fixed-point theorems of φ−convex ϕ−concave mixed monotone operators and applications, J. Math. Anal. Appl. 295 (2004) 645–657. [15] Z. Q. Zhao, Existence and uniqueness of fixed points for some mixed monotone operators, Nonlinear, Anal. 73(2010) 1481–1490.
10
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 724-732 , 2012, COPYRIGHT 2012 EUDOXUS724 PRESS, LLC
Expressions and Iterative Methods for the Weighted Group Inverses of Linear Operators on Banach Space 1
Xiaoji Liu 1∗ Chunmei Hu 1 College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning 530006, P.R. China March 1, 2011
Abstract In this note, some expressions and characterizations for the weighted group inverses A♯W of operator A by using the technique of block operator matrix are given, three iterative methods for computing A♯W are established, and the necessary and sufficient conditions for iterative convergence to A♯W are discussed. 2000 Mathematics Subject Classification: 47A05, 15A09 Key words: weighted group inverse, operator matrix, expression, iterative method
1
Introduction
The Drazin inverse and the W -weighted Drazin inverse are applied to varions fields, for example, applications in singular differential and difference equations, Markov chains, iterative methods and numerical analysis. Recently, a new weighted group inverse of the rectangular matrix had been defined in [1], the weighted group inverse A♯W is not the special case of the W -weighted Drazin inverse, which was defined by Cline and Greville in [2]. Some properties, expressions and the uniqueness have been proved by Cen in [1]. A constructive perturbation bound of the Drazin inverse of a square matrix was derived in [3]. The algebra perturbation and analytical perturbation of the weighted group inverse A♯W were studied in [5]. several approximate methods for the W -weighted Drazin inverse of bounded linear operators on Banach space were presented in [6]. In this note, we mainly consider the weighted group inverse of operators on Banach space. For any operator A ∈ L(X , Y), we denote its range, null space and spectral ∗
corresponding author E-mail: [email protected]. Tel. +86-0771-3264782.
1
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
725
radius by R(A), N (A) and ρ(A), respectively. Let A ∈ L(X , Y), if there exists some B ∈ L(Y, X ), such that ABA = A holds, then B is an inner generalized inverse of A, and the operator A is called relatively regular. Definition 1.1. [1] Let A ∈ L(X , Y) and W ∈ L(Y, X ). If X ∈ L(X , Y) satisfies the system of operator equations (W -system of operator equations) (W 1)AW XW A = A, (W 2)XW AW X = X, (W 3)AW X = XW A Then X is called the weighted group inverse of A with W , and denoted by A♯W . If A♯W exists, then it is unique. Definition 1.2. [4] Let A ∈ L(X , Y). If there exist: a Banach space Z, and operators P ∈ L(Z, Y) and Q ∈ L(X , Z), such that P is left invertible, Q is right invertible and A = PQ
(1.1)
Then (1.1) is the full-rank decomposition of A. It is easy to see that an operator A ∈ L(X , Y) has the full-rank decomposition, if and only if A is relatively regular. The following lemma is needed in what follows. Lemma 1.1. Let A ∈ L(X ) and A♯ exists. Then X = R(A) ⊕ N (A), and ( ) ( ) ( ) A1 0 R(A) R(A) A= : → . 0 0 N (A) N (A)
(1.2)
where A1 is invertible. Moreover,
( ♯
A =
A−1 0 1 0 0
) ( ) ( ) R(A) R(A) : → . N (A) N (A)
(1.3)
Proof. The result can be proved similarly as in [4].
2
Existence conditions, expressions and the characterization of the weighted group inverse A♯W
In this section, we will give some existence conditions for the weighted group inverse A♯W , and also give several expressions for the inverse A♯W by using operator matrix blocks. Theorem 2.1. Let A ∈ L(X , Y), W ∈ L(Y, X ), and A = P Q is the full-rank decomposition of A, where P ∈ L(Z, Y) and Q ∈ L(X , Z). then the following statements are equivalent: (i) A♯W exists; (ii) (AW )♯ and (W A)♯ exist, and R(AW ) = R(A), N (W A) = N (A); (iii) R(W A) ⊕ N (A) = X, N (AW ) ⊕ R(A) = Y ; (iv) QW P is invertible. 2
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
726
Proof. (i) ⇒ (ii): If A♯W exists, then we set X0 is the solution to the W -system of operator equations. From (W 1), AW X0 W A = A, we get R(AW ) = R(A), N (W A) = N (A). From (W 1) and (W 3), (AW )2 X0 = A, X0 (W A)2 = A, hence, R((AW )2 ) = R(A) = R(AW ), N ((W A)2 ) = N (A) = N (W A). From (W 1), we have AW X0 W AW = AW, W AW X0 W A = W A, combining (W 3), we get X0 W (AW )2 = AW, (W A)2 W X0 = W A, therefore, N ((AW )2 ) = N (AW ), R((W A)2 ) = R(W A). Hence ind(AW ) = ind(W A) = 1, i.e., (AW )♯ and (W A)♯ exist. (ii) ⇒ (iii): (AW )♯ exists is equivalent to R(AW ) ⊕ N (AW ) = Y , (W A)♯ exists is equivalent to R(W A) ⊕ N (W A) = X, since R(AW ) = R(A), N (W A) = N (A), then R(W A) ⊕ N (A) = X, N (AW ) ⊕ R(A) = Y . (iii) ⇒ (iv): Consider an arbitrary x ∈ Z satisfying QW P x = 0. then P QW P x = 0, which implies AW P x = 0, then P x ∈ N (AW ). Since P x = P QQr x = AQr x ∈ R(A), where Qr is the right inverse of Q. So P x ∈ N (AW ) ∩ R(A) = 0. Since P is left invertible, hence x = 0 and QW P is invertible. (iv) ⇒ (i): If QW P is invertible, let X0 = P (QW P )−2 Q, it is easy to show that X0 is a solution to the W -system of operator equations, then A♯W exists. By Theorem 2.1, and From Lemma 1.1,we have the following result: Theorem 2.2. Let A ∈ L(X , Y) is relatively regular, W ∈ L(Y, X ), If A♯W exists, then: (i)A♯W = [(AW )♯ ]2 A; (ii)A♯W = A[(W A)♯ ]2 ; (iii)A♯W = (AW )♯ A(W A)♯ ; (iv)(AW )♯ = A♯W W ; (v)(W A)♯ = W A♯W ; (1,2) (vi)A♯W = (W AW )R(A),N (A) . Next, we present the characterizations of the weighted group inverse A♯W by using the above expressions of A♯W . Theorem 2.3. Let A ∈ L(X , Y) is relatively regular, W ∈ L(Y, X ), If A♯W exists. Then there exist a unique operator X ∈ L(Y) such that AW X = 0, XAW = 0, X 2 = X, AW + X is invertible
(2.1)
and a unique operator Y ∈ L(X ) such that W AY = 0, Y W A = 0, Y 2 = Y, W A + Y is invertible
(2.2)
Further, we have X = I − A♯W W AW 3
(2.3)
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
727
and Y = I − W AW A♯W
(2.4)
Proof. By Theorem 2.2, we get ( I−
A♯W W AW
=
0 0 0 I
)
It is easy to verify that X = I − A♯W W AW satisfies (2.1), which shows the existence of such operator X. To show the uniqueness, let X0 be an operator which satisfies (2.1) and let X0 be partitioned as ( ) X11 X12 X0 = X21 X22 with respect to the space decomposition N (AW ) ⊕ R(A) = Y . By Equation (2.1), ( )( ) ( ) A1 0 X11 X12 A1 X11 A1 X12 0 = AW X0 = = 0 0 X21 X22 0 0 ( 0 = X0 AW =
X11 X12 X21 X22
)(
A1 0 0 0
)
( =
X11 A1 0 X21 A1 0
)
so that X11 = 0, X12 = 0 and X21 = 0. Since AW + X0 is invertible, we get that X22 is invertible. From the assumption 2 = X , that is, (X − I)X that X 2 = X, we have X22 22 22 22 = 0. Thus, X22 − I = 0, i.e., X22 = I. Finally, we get ( X0 =
0 0 0 I
)
Thus, we obtain X0 = X = I − A♯W W AW . This shows that X unique exists. The second statement is proved in a similar manner.
3
The iterative methods for computing A♯W
In this section, we construct three iterative methods for computing A♯W , and prove the convergence by using operator matrix blocks.
4
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
728
Lemma 3.1. Let A ∈ L(X , Y) is relatively regular, W ∈ L(Y, X ), If A♯W exists, then (1)P1 = W A(W A)♯ = (W A)♯ W A = PR(W A),N (A) ; (2)P2 = AW (AW )♯ = (AW )♯ AW = PR(A),N (AW ) . Theorem 3.1. Let A ∈ L(X , Y) is relatively regular, W ∈ L(Y, X ), If A♯W exists. Define the sequence (Xk )k in B(X , Y) in the following way: { Rk = P1 − W AW Xk , (3.1) Xk+1 = Xk (I + Rk ), k = 0, 1, 2, . . . . Then the iteration (3.1) converges to A♯W if and only if R(X0 ) = R(A), ρ(R0 ) < 1. Proof. By Theorem 2.3, we have A♯W P1 = A♯W PR(W A),N (A) = A♯W , and ( P1 = W A(W A)♯ =
I 0 0 0
) ( ) ( ) R(W A) R(W A) : → N (A) N (A)
Since, P1 Rk = P1 (P1 − W AW Xk ) = P1 − P1 W AW Xk = P1 − PR(W A),N (A) W AW Xk = P1 − W AW Xk = Rk . Hence, Rk = P1 − W AW Xk = P1 − W AW Xk−1 (I + Rk−1 ) = P1 − W AW Xk−1 − W AW Xk−1 Rk−1 = Rk−1 − W AW Xk−1 Rk−1 = P1 Rk−1 − W AW Xk−1 Rk−1 = (P1 − W AW Xk−1 )Rk−1 k
2 = Rk−1 = · · · = R02 .
and Xk+1 = Xk (I + Rk ) = Xk−1 (I + Rk−1 )(I + Rk ) = · · · = X0 (I + R0 )(I + R1 ) · · · (I + Rk ) k
= X0 (I + R0 )(I + R02 ) · · · (I + R02 ) = X0 (I + R0 + R02 + · · · + R02
k+1 −1
)
Then, k+1
Xk+1 (I − R0 ) = X0 (I − R02 5
)
(3.2)
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
k
SUFFICIENCY If ρ(R0 ) < 1, then R02 → 0 (k → ∞) and I − R0 is invertible. From (3.2), we obtain X∞ = X0 (I − R0 )−1 . From R(X0 ) = R(A), by Theorem 2.3, we can set ( ) X11 X12 X0 = 0 0 Then, ( ) ( )( )( ) I 0 W11 0 A1 0 X11 X12 R0 = P1 − W AW X0 = − 0 0 0 W22 0 0 0 0 ( ) I − W11 A1 X11 −W11 A1 X12 . = 0 0 Hence
( I − R0 =
W11 A1 X11 W11 A1 X12 0 I
) .
Since I − R0 is invertible, then W11 A1 X11 is invertible, which implies that X11 is invertible. Now ( −1 −1 −1 ) −1 X11 A1 W11 −X11 X12 −1 (I − R0 ) = . 0 I It is easy to verify −1
X∞ = X0 (I − R0 ) ( −1 −1 A1 W11 = 0
(
) ( −1 −1 −1 ) −1 X11 X12 X11 A1 W11 −X11 X12 = 0 0 0 I ) ( −2 ) 0 A1 A11 0 = = A♯W . 0 0 0
NECESSITY If Xk converges to A♯W , then Rk → P1 − W AW A♯W = P1 − k W AW (AW )♯ A(W A)♯ = P1 − W A(W A)♯ = 0(k → 0). Hence, we have R02 → 0(k → 0) and then ρ(R0 ) < 1 and I − R0 is invertible. From (3.2), we obtain A♯W (I − R0 ) = X0 . So R(X0 ) = R(A♯W ) = R(A). The iterative method defined in Theorem 3.1 is the Newton-Raphson method for computing A♯W . We now turn to study the pth-order iterative method for computing A♯W , where p ≥ 2 is an integer. Theorem 3.2. Let A ∈ L(X , Y) is relatively regular, W ∈ L(Y, X ), If A♯W exists. Define the sequence (Xk )k in B(X , Y) in the following way: { Rk = P1 − W AW Xk , (3.3) Xk+1 = Xk (I + Rk + · · · + Rkp−1 ), p ≥ 2, k = 0, 1, 2, . . . . Then the iteration (3.3) converges to A♯W if and only if R(X0 ) = R(A), ρ(R0 ) < 1. 6
729
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
730
Proof. Since A♯W P1 = A♯W , P1 Rk = Rk , (k ≥ 0). and Rk+1 = P1 − W AW Xk+1 = P1 − W AW Xk (I + Rk + · · · + Rkp−1 ) = P1 − W AW Xk − W AW Xk Rk (I + Rk + · · · + Rkp−2 ) = Rk − W AW Xk Rk − W AW Xk Rk2 (I + Rk + · · · + Rkp−3 ) = (P1 − W AW Xk )Rk − W AW Xk Rk2 (I + Rk + · · · + Rkp−3 ) = Rk2 − W AW Xk Rk2 (I + Rk + · · · + Rkp−3 ) k+1
= · · · = Rkp = · · · = R0p
.
Furthermore, Xk+1 = Xk (I + Rk + · · · + Rkp−1 ) = X0 (I + R0 + · · · + R0p−1 )(I + R1 + · · · + R1p−1 ) · · · (I + Rk + ... + Rkp−1 ) k
k
= X0 (I + R0 + · · · + R0p−1 )(I + R0p + · · · + (R0p )p−1 ) · · · (I + R0p + · · · + (R0p )p−1 ) k+1 −1
= X0 (I + R0 + · · · + R0p
).
Hence, Xk+1 (I − R0 ) = X0 (I − R0p
k+1
)
(3.4)
k
SUFFICIENCY If ρ(R0 ) < 1, then R0p → 0 (k → ∞) and I − R0 is invertible. From (3.4), we obtain X∞ = X0 (I − R0 )−1 . Similarly to Theorem 3.1, we have ( −2 ) A1 A11 0 −1 X∞ = X0 (I − R0 ) = = A♯W 0 0 NECESSITY
Similar to the prove of the necessity of Theorem 3.1.
The iterative method defined in Theorem 3.2 is the hyper power method for computing A♯W . when p = 2, it is the Newton-Raphson method. Similar to the prove of the of Theorem 3.1, we have the Euler-Knopp method: Theorem 3.3. Let A ∈ L(X , Y) is relatively regular, W ∈ L(Y, X ), If A♯W exists. Define the sequence (Xk )k in B(X , Y) in the following way: Xk = X0
k ∑
(P1 − W AW X0 )j , k = 0, 1, 2, . . . .
(3.5)
j=0
Then the iteration (3.5) converges to A♯W if and only if R(X0 ) = R(A), ρ(R0 ) < 1. 7
LIU, HU: LINEAR OPERATORS IN BANACH SPACES
4
731
Example (
)
(
) 2 1 Example: A = , W = 2 1 , AW = , W A = 5, in order to meet 6 3 R(X0 ) = R(A), ρ(P1 − W AW X0 ) < 1, we can set X0 = αA, where α satisfies 2 0 α, z ∈ U, ξ ∈ U , the class of convex functions of order α, where α < 1 and let K ∗ = K ∗ (0). In order to prove our main results we use the following definitions and lemmas. Definition 1.1 [5] Let H(z, ξ), f (z, ξ) be analytic in U × U . The function f (z, ξ) is said to be strongly subordinate to H(z, ξ), or H(z, ξ) is said to be strongly superordinate to f (z, ξ), if there exists a function w analytic in U , with w(0) = 0 and |w(z)| < 1 such that f (z, ξ) = H(w(z), ξ), for all ξ ∈ U . In such a case we write f (z, ξ) ≺≺ H(z, ξ), z ∈ U , ξ ∈ U . Remark 1.1 (i) Since f (z, ξ) is analytic in U ×U , for all ξ ∈ U and univalent in U , for all ξ ∈ U , Definition 1.1 is equivalent to H(0, ξ) = f (0, ξ), for all ξ ∈ U and f (U × U ) ⊂ H(U × U ). (ii) If H(z, ξ) ≡ H(z) and f (z, ξ) ≡ f (z) then the strong subordination becomes the usual notion of subordination. Definition 1.2 [5] We denote by Q the set of functions q(·, ξ) that are analytic and injective, as function of z on U \ E(q) where E(q) = {ζ ∈ ∂U : lim q(z, ξ) = ∞} z→ζ
and are such that q 0 (ζ, ξ) 6= 0 for ζ ∈ ∂U \ E(q), ξ ∈ U . The subclass of Q for which q(0, ξ) = a is denoted by Q(a). Definition 1.3 [5] Let Ωξ be a set in C, q(·, ξ) ∈ Q and n be a positive integer. The class of admissible functions Ψn [Ωξ , q(·, ξ)] consists of those functions ψ : C3 ×U ×U → C that satisfy the admissibility condition:
whenever r = q(ζ, ξ), s = mζq 0 (ζ, ξ), Re m ≥ n.
t s
ψ(r, s, t; z, ξ) 6∈ Ωξ , (A) h 00 i (ζ,ξ) + 1 ≥ mRe ζqq0 (ζ,ξ) + 1 , z ∈ U, ζ ∈ ∂U \ E(q), ξ ∈ U and
1
OROS: STRONG DIFFERENTIAL SUBORDINATION
734
We write Ψ1 [Ωξ , q(·, ξ)] as Ψ[Ωξ , q(·, ξ)]. In the special case when Ωξ is a simply connected domain, Ωξ 6= C, and h(·, ξ) is a conformal mapping of U × U onto Ωξ we denote this class by Ψn [h(·, ξ), q(·, ξ)]. If ψ : C2 × U × U → C, then the admissibility condition (A) reduces to ψ(r, s; z.ξ) 6∈ Ωξ ,
(A’)
whenever r = q(ζ, ξ), s = ζq 0 (ζ, ξ), z ∈ U, ζ ∈ ∂U \ E(q), ξ ∈ U ,and m ≥ n. If ψ : C × U × U → C, then the admissibility condition (A) reduces to ψ(r; z, ξ) 6∈ Ωξ
(A”)
whenever r = q(ζ, ξ), z ∈ U, ξ ∈ U , ζ ∈ ∂U \ E(q). Lemma 1.1 [5, Th. 3] Let h(·, ξ) ∈ Hu (U ), for all ξ ∈ U and q(, ξ) ∈ Hu (U ), for all ξ ∈ U , with q(0, ξ) = a, and set qρ (z, ξ) = q(ρz, ξ). Let ψ : C 3 × U × U → C satisfy one of the following conditions: (i) ψ ∈ Ψn [h(·, ξ), qρ (·, ξ)], for some ρ ∈ (0, 1), or (ii) there exists ρ0 ∈ (0, 1) such that ψ ∈ Ψn [hρ (·, ξ), qρ (·, ξ)] for all ρ ∈ (ρ0 , 1). If p(·, ξ) ∈ H ∗ [a, n, ξ], ψ(p(z, ξ), zp02 p00 (z, ξ); z, ξ) is analytic in U × U and ψ(p(z, ξ), zp0 (z, ζ) , z 2 p00 (z, ξ); z, ξ) ≺≺ h(z, ξ), then p(z, ξ) ≺≺ q(z, ξ),
z ∈ U, ξ ∈ U .
Lemma 1.2 [5, Th. 5] Let h(·, ξ) ∈ Hu (U ), for all ξ ∈ U and let ψ : C 3 × U × U → C. Suppose that the differential equation ψ(q(z, ξ), nzq 0 (z, ξ), n(n − 1)zq 0 (z, ζ) + n2 z 2n q 00 (z, ξ); z, ξ) = h(z, ξ), z ∈ U, ξ ∈ U , has a solution q(·, ξ), with q(0, ξ) = a, and one of the following conditions is satisfied: (i) q(·, ξ) ∈ Q and ψ ∈ Ψn [h(·, ξ), q(·, ξ)], (ii) q(·, ξ) ∈ Hu (U ), for all ξ ∈ U and ψ ∈ Ψn [h(·, ξ), qρ (·, ξ)] (iii) q(·, ξ) ∈ Hu (U ), for all ξ ∈ U and there exists ρ0 ∈ (0, 1) such that ψ ∈ Ψn [hρ (·, ξ), qρ (·, ξ)] for all ρ ∈ (ρ0 , 1). If p(·, ξ) ∈ H ∗ [a, n, ξ], ψ(p(z, ξ), zp0 (z, ζ) , z 2 p00 (z, ξ); z, ξ) is analytic in U × U and p(·, ξ) satisfies ψ(p(z, ξ), zp0 (z, ζ) , z 2 p00 (z, ξ); z, ξ) ≺≺ h(z, ξ),
z ∈ U, ξ ∈ U ,
then p(z, ξ) ≺≺ q(z, ξ),
z ∈ U, ξ ∈ U ,
and q(·, ξ) is the best dominant.
2
Main results
Definition 2.1 Let β, γ ∈ C, β 6= 0, the functions h(·, ξ) ∈ Hu (U ), for all ξ ∈ U , with h(0, ξ) = a, and let p ∈ H∗ [a, n, ξ] satisfy zp0 (z, ξ) p(z, ξ) + ≺≺ h(z, ξ), z ∈ U, ξ ∈ U . (1) βp(z, ξ) + γ This first-order strong differential subordination is called Briot-Bouquet strong differential subordination. The name derives from the fact that a differential equation of the form q(z, ξ) +
zq 0 (z, ξ) = h(z, ξ), βq(z, ξ) + γ
is called a differential equation of Briot-Bouquet type.
2
z ∈ U, ξ ∈ U ,
(2)
OROS: STRONG DIFFERENTIAL SUBORDINATION
735
Remark 2.1 (i) Strong differential subordination (1) is equivalent to: Let β, γ ∈ C, β 6= 0, and let Ωξ be any set in the complex plane C, the function p(·, ξ) ∈ H∗ [a, n, ξ] and h(·, ξ) ∈ Hu (U ) for all ξ ∈ U satisfies zp0 (z, ξ) (1’) p(z, ξ) + ⊂ Ωξ , z ∈ U, ξ ∈ U , βp(z, ξ) + γ where h(U × U ) = Ωξ . (ii) More general, let p(·, ξ) ∈ H∗ [a, n, ξ] and h(·, ξ) ∈ H∗ [a, n, ξ]. p(z, ξ) +
zp0 (z, ξ) ≺≺ h(z, ξ) ⇒ p(z, ξ) ≺≺ q(z, ξ), z ∈ U, ξ ∈ U . βp(z, ξ) + γ
(1”)
Related to the strong differential subordination (1) we can formulate the next two problems. Problem 2.1 Given h(·, ξ) ∈ Hu (U ), for all ξ ∈ U with h(0, ξ) = p(0, ξ) what are the conditions for the function q(·, ξ) ∈ Hu (U ), for all ξ ∈ U , with q(0, ξ) = h(0, ξ), for all ξ ∈ U and find this function such that implication (1”) holds. Moreover, find the best dominant q(·, ξ), for all ξ ∈ U of this strong differential subordination. Problem 2.2 Given q(·, ξ) ∈ Hu (U ), for all ξ ∈ U with q(0, ξ) = p(0, ξ) find the function h(·, ξ) ∈ Hu (U ), for all ξ ∈ U with h(0, ξ) = q(0, ξ), such that implication (1”) holds. This paper provides certain results which respond to the above formulated problems. Theorem 2.3 Let β, γ ∈ C, β 6= 0, and let convex function h(·, ξ) ∈ H ∗ [a, n, ξ] satisfies Re [βh(z, ξ) + γ] > 0,
z ∈ U, ξ ∈ U .
(3)
If p(·, ξ) is analytic in U × U , with p(0, ξ) = h(0, ξ), for all ξ ∈ U then p(z, ξ) +
zp0 (z, ξ) ≺≺ h(z, ξ), βp(z, ξ) + γ
z ∈ U, ξ ∈ U ,
(4)
implies p(z, ξ) ≺≺ h(z, ξ),
z ∈ U, ξ ∈ U .
Proof. Let ψ : C2 × U × U → C, r = p(z, ξ), s = zp0 (z, ξ) then ψ(r, s; z, ξ) = r + differential subordination (4) becomes
s βr+γ ,
and strong
ψ(p(z, ξ), zp0 (z, ξ); z, ξ) ≺≺ h(z, ξ),
(5)
implies p(z, ξ) ≺≺ q(z, ξ), z ∈ U, ξ ∈ U . We will use part (ii) Lemma 1.1, with q(z, ξ) = h(z, ξ) and hρ (z, ξ) = h(ρz, ξ) to prove this result. We only need to show that ψ ∈ Ψ[hρ (·, ξ), hρ (·, ξ)] for 0 < ρ < 1. In this case, admissibility condition (A) reduces to showing ψ0 ≡ ψ(hρ (ζ, ξ), mζh0ρ (ζ, ξ)) = hρ (ζ, ξ) +
mζh0ρ (ζ, ξ) 6∈ hρ (U × U ), βhρ (ζ, ξ) + γ
(6)
when |ζ| = 1, ξ ∈ U and m ≥ n. Equality (6) is equivalent to ψ0 − hρ (ζ, ξ) m = , ζh0ρ (ζ, ξ) βhρ (z, ξ) + γ when |ζ| = 1, ξ ∈ U , and m ≥ 1. Let λ=
ψ0 − hρ (ζ, ξ) m = . ζh0ρ (ζ, ξ) βhρ (z, ξ) + γ
From Theorem 2.3 we have Re λ = Re arg ζh0ρ (ζ, ξ)|
(7)
m βhρ (z,ξ)+γ
> 0, i.e. | arg λ|
0 and Re 4 1 + 4ξ z + 2 > 0. If
p(·, ξ) is analytic in U × U with p(0, ξ) = h(0, ξ), for all ξ ∈ U then p(z, ξ) +
zp0 (z, ξ) ξ ≺≺ 1 + z, βp(z, ξ) + γ 4
z ∈ U, ξ ∈ U ,
implies ξ p(z, ξ) ≺≺ 1 + z, 4
z ∈ U, ξ ∈ U .
Theorem 2.4 Let β, γ ∈ C, β 6= 0 and let convex function h(·, ξ) ∈ H ∗ [a, n, ξ]. Suppose that the BriotBouquet differential equation q(z, ξ) +
nzq 0 (z, ξ) = h(z, ξ), βq(z, ξ) + γ
[q(0, ξ) = h(0, ξ) = a]
has a solution q with q(0, ξ) = a that satisfy q(z, ξ) ≺≺ h(z, ξ), z ∈ U, ξ ∈ U . If p ∈ H ∗ [a, n, ξ] then zp0 (z, ξ) p(z, ξ) + ≺≺ h(z, ξ) βp(z, ξ) + γ
(9)
(10)
implies p(z, ξ) ≺≺ q(z, ξ),
z ∈ U, ξ ∈ U ,
and q(·, ξ) is the best dominant. Proof. We will use part (iii) of Lemma 1.2 to prove this result. Let ψ : C2 × U × U → C, ψ(r, s; z, t) = s r + βr+γ , r = p(z, ξ), s = zp0 (z, ξ), then strong differential subordination (10) becomes ψ(p(z, ξ), zp0 (z, ξ); z, ξ) ≺≺ h(z, ξ),
z ∈ U, ξ ∈ U ,
(11)
and we only need to show that ψ ∈ Ψn [hρ (·, ξ), qρ (·, ξ)], for 0 < ρ < 1. In this case, admissibility condition (A) reduces to showing that ψ0 ≡ ψ(qρ (ζ, ξ), mζqρ0 (ζ, ξ); z, ξ) = ζq 0 (ζ,ξ)
ρ qρ (ζ, ξ) + m βhρ (ζ,ξ)+γ 6∈ hρ (U × U ), when |ζ| = 1, ξ ∈ U , and m ≥ n. Using (9) we obtain ψ0 =
qρ (ζ, ξ) + m n [hρ (ζ, ξ) − qρ (ζ, ξ)], |ζ| = 1, ξ ∈ U . From q(z, ξ) ≺ h(z, ξ) we obtain qρ (ζ, ξ) ∈ hρ (U × U ). Using this together with the fact that hρ (U × U ) is a convex domain and m n ≥ 1, we deduce that ψ0 6∈ hρ (U × U ). Therefore ψ ∈ Ψn [hρ (·, ξ), qρ (·, ξ)], and by Lemma 1.2 we conclude that p(z, ξ) ≺≺ q(z, ξ), and that q(·, ξ) is the best dominant. We can also use Theorem 2.4 to obtain a strong subordination relation between univalent solutions of Briot-Bouquet equation of the form (9). Theorem 2.5 Let h(·, ξ), ξ ∈ U convex, with h(0, ξ) = a and let m and n be positive integers. Let qm (·, ξ) and qn (·, ξ), ξ ∈ U be univalent solutions of differential equation q(z, ξ) +
nzq 0 (z, ξ) = h(z, ξ), βq(z, ξ) + γ
z ∈ U, ξ ∈ U , β, γ ∈ C, β 6= 0,
(12)
for n = m and n respectively, with qn (z, ξ) ≺ h(z, ξ), z ∈ U , ξ ∈ U . If m > n, then qm (z, ξ) ≺≺ qn (z, ξ). Proof. By hypothesis qm (z, ξ) +
0 mzqm (z,ξ) βqm (z,ξ)+γ
= h(z, ξ), z ∈ U, ξ ∈ U . Let
p(z, ξ) = qm (z m , ξ), 0
0
mz m q (z m ,ξ)
(13)
zp (z,ξ) = qm (z m , ξ) + βqm (zmm ,ξ)+γ = h(z m , ξ) ≺≺ h(z, ξ), z ∈ U, ξ ∈ U . we obtain p(z, ξ) + βp(z,ξ)+γ Since p(., ξ) ∈ H∗ [a, m, ξ] ⊂ H∗ [a, n, ξ], qn (z, ξ) ≺≺ h(z, ξ) and qn (·, ξ) satisfies (12) we can apply Theorem 2.4 to obtain p(z, ξ) ≺≺ qn (z, ξ), and from (13), we have qm (z m , ξ) ≺≺ p(z, ξ), which implies qm (z, ξ) ≺≺ qn (z, ξ), z ∈ U, ξ ∈ U . We can combine Theorem 2.3 and 2.4 and we obtain the following theorem.
4
OROS: STRONG DIFFERENTIAL SUBORDINATION
737
Theorem 2.6 Let h(·, ξ) be convex in U , with h(0, ξ) = a and Re [βh(z, ξ) + γ] > 0, z ∈ U, ξ ∈ U , β, γ ∈ C, β 6= 0. Let n be a positive integer and suppose that differential equation q(z, ξ) +
nzq 0 (z, ξ) = h(z, ξ), βq(z, ξ) + γ
z ∈ U, ξ ∈ U ,
(14)
has a univalent solution q(·, ξ). If p ∈ H ∗ [a, n, ξ] satisfies differential subordination p(z, ξ) +
zp0 (z, ξ) ≺≺ h(z, ξ), βp(z, ξ) + γ
z ∈ U, ξ ∈ U ,
(15)
then p(z, ξ) ≺≺ q(z, ξ) ≺≺ h(z, ξ),
z ∈ U, ξ ∈ U ,
and q(·, ξ) is the best dominant. Proof. If we replace z by z n in (14) and let qe(z, ξ) = q(z n , ξ), we obtain qe(z, ξ) +
ze q 0 (z,ξ) βe q (z,ξ)+γ
=
∗
n
h(z , ξ) ≺≺ h(z, ξ), z ∈ U, ξ ∈ U , then the function qe ∈ H [a, n, ξ] satisfies strong differential subordination (15). Since qe(·, ξ) satisfies the conditions of Theorem 2.3, we deduce that qe(z, ξ) ≺≺ h(z, ξ) or equivalently q(z n , ξ) ≺≺ h(z, ξ). Since this implies q(z, ξ) ≺≺ h(z, ξ), we can combine this result with Theorem 2.4 to obtain p(z, ξ) ≺≺ q(z, ξ) ≺≺ h(z, ξ), and q is the best dominant.
References [1] J.A. Antonino and S. Romaguera, Strong differential subordination to Briot-Bouquet differential equations, Journal of Differential Equations, 114(1994), 101-105. [2] S.S. Miller and P.T. Mocanu, Differential subordinations and univalent functions, Michig. Math. J., 28(1981), 157-171. [3] S.S. Miller and P.T. Mocanu, Differential subordinations. Theory and applications, Pure and Applied Mathematics, Marcel Dekker, Inc., New York, 2000. [4] Georgia Irina Oros, Gheorghe Oros, Strong differential subordination, Turkish Journal of Mathematics, 33(2009), 249-257. [5] Georgia Irina Oros, On a new strong differential subordination (to appear).
5
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 738-744 , 2012, COPYRIGHT 2012 EUDOXUS738 PRESS, LLC
Lp-left Caputo fractional Landau inequalities George A. Anastassiou Department of Mathematical Sciences University of Memphis Memphis, TN 38152, U.S.A. [email protected] Abstract Here we establish left Caputo fractional Lp -Landau type inequalities. We give applications on R+ at the end.
2010 Mathematics Subject Classification: 26A33, 26D10, 26D15. Keywords and Phrases: Landau inequality, Caputo fractional derivative.
1
Introduction
Let p ∈ [1, ∞] , I = R+ or I = R, and f : I → R is twice differentiable with f, f 00 ∈ Lp (I), then f 0 ∈ Lp (I). Moreover, there exists a constant Cp (I) > 0 independent of f , such that 1
1
2 2 kf 0 kp,I ≤ Cp (I) kf kp,I kf 00 kp,I ,
(1)
where k·kp,I is the p-norm on the interval I, see [1], [3]. The research on these inequalities started by E. Landau [9] in 1914. For the case of p = ∞ he proved that √ (2) C∞ (R+ ) = 2 and C∞ (R) = 2, are the best constants in (1). In 1932, G.H. Hardy and J.E. Littlewood [6] proved (1) for p = 2, with the best constants √ (3) C2 (R+ ) = 2, and C2 (R) = 1. In 1935, G.H. Hardy, E. Landau and J.E. Littlewood [7] showed that the best constant Cp (R+ ) in (1) satisfies the estimate Cp (R+ ) ≤ 2, for p ∈ [1, ∞) , 1
(4)
ANASTASSIOU: FRACTIONAL LANDAU INEQUALITIES
739
which yields Cp (R) ≤ 2 for p ∈ [1, ∞). Infact in [5] and [8], was shown that √ Cp (R) ≤ 2. In this article we prove fractional Landau inequalities with respect to k·kp , p > 1, involving the left Caputo fractional derivative. We need Definition 1 ([4], p. 38) Let ν ≥ 0, n = dνe (d·e the ceiling of the number), f ∈ AC n ([A, B]) (i.e. f (n−1) ∈ AC ([A, B]) , A, B ∈ R). We define the left Caputo fractional derivative Z x 1 n−ν−1 (n) ν D∗A f (x) := (x − t) f (t) dt, (5) Γ (n − ν) A ν ∀ x ∈ [A, B], and D∗A f (x) := 0, if x < A. Here Γ is the gamma function.
Definition 2 Let 0 < ν ≤ 1, f ∈ AC ([A, B]), we define Z x 1 −ν ν D∗A f (x) = (x − t) f 0 (t) dt, Γ (1 − ν) A
(6)
∀ x ∈ [A, B] . n f (x) = f (n) , for n ∈ N. Notice D∗A We make Remark 3 Let f 0 ∈ AC ([A, B]), 0 < ν ≤ 1, then dν + 1e = 2, and Z x 1 −ν ν D∗A f 0 (x) = (x − t) f 00 (t) dt = Γ (1 − ν) A Z x 1 2−(ν+1)−1 00 ν+1 (x − t) f (t) dt = D∗A f (x) . Γ (2 − (ν + 1)) A Hence it holds ν+1 ν D∗A f 0 (x) = D∗A f (x) .
2
(7)
Main results
We use Theorem 4 ([2], p. 620) Let p, q > 1 : p1 + 1q = 1, 1 − p1 < ν ≤ 1, f ∈ ν AC ([A, B]), A, B ∈ R, A < B. Assume D∗A f ∈ Lq ([A, B]). Then ν 1 Z B kD∗A f kLq ([A,B]) 1 (B − A)ν−1+ p . f (x) dx − f (A) ≤ 1 B − A A Γ (ν) (p (ν − 1) + 1) p ν + 1 p
(8) 2
ANASTASSIOU: FRACTIONAL LANDAU INEQUALITIES
740
We make Remark 5 Let p, q > 1 : p1 + 1q = 1, 1 − p1 < ν ≤ 1, f 0 ∈ AC ([A, B]), A, B ∈ R, ν A < B. Assume D∗A f 0 ∈ Lq ([A, B]). Then ν 1 Z B kD∗A f 0 kLq ([A,B]) 1 (B − A)ν−1+ p . f 0 (x) dx − f 0 (A) ≤ B − A A Γ (ν) (p (ν − 1) + 1) p1 ν + 1 p
(9) Equivalently we have: For p, q > 1 : p1 + 1q = 1, 1 − p1 < ν ≤ 1, f ∈ AC 2 ([A, B]), ν+1 A, B ∈ R, A < B. Assume D∗A f ∈ Lq ([A, B]). Then 1 0 (10) B − A (f (B) − f (A)) − f (A) ≤
ν+1
D
1 ∗A f Lq ([A,B]) (B − A)ν−1+ p . 1 1 Γ (ν) (p (ν − 1) + 1) p ν + p Therefore it holds
ν+1
D 1 ∗A f Lq ([A,B]) 1 (B − A)ν−1+ p . |f (A)|− |f (B) − f (A)| ≤ 1 B−A Γ (ν) (p (ν − 1) + 1) p ν + p1 (11) 0
We also make Remark 6 Let 1 − p1 < ν ≤ 1; p, q > 1 : p1 + 1q = 1, f ∈ AC 2 ([A, b]), ∀b > A. ν+1 ν+1 Assume D∗A f ∈ Lq ([A, +∞)) (thus D∗A ∈ Lq ([A, b])). Let A < a < b. Then ν+1 ν+1 2 f ∈ AC ([a, b]) and D∗A f ∈ Lq ([a, +∞)) and D∗A ∈ Lq ([a, b]). Here Z x 1 −ν ν+1 D∗a f (x) = (x − t) f 00 (t) dt. (12) Γ (1 − ν) a If f 00 (t) ≥ 0 a.e., then ν+1 ν+1 D∗A f (x) ≥ D∗a f (x) ≥ 0, a.e., for x ≥ a.
Thus
ν+1
ν+1
ν+1 ∞ > D∗A f q,[A,+∞) ≥ D∗A f q,[a,+∞) ≥ D∗a f q,[a,+∞) . So it is not strange to assume that
ν+1
ν+1
D∗a f ≤ D∗A f q,[A,+∞) , q,[a,+∞) ∀ a ≥ A (it is obvious when ν ∈ N). 3
(13)
ANASTASSIOU: FRACTIONAL LANDAU INEQUALITIES
741
We need Remark 7 Let a, b ∈ [A, +∞), a < b. Then we get
ν+1
D∗a f 1 1 Lq ([a,b]) (b − a)ν−1+ p . |f 0 (a)| ≤ |f (b) − f (a)| + 1 b−a Γ (ν) (p (ν − 1) + 1) p ν + 1 p
(14) We also assume that kf k∞,[A,+∞) < ∞. Therefore
ν+1
D
2 kf k 1 ∗A f q,[A,+∞) ∞,[A,+∞) (b − a)ν−1+ p , (15) |f 0 (a)| ≤ + 1 1 b−a Γ (ν) (p (ν − 1) + 1) p ν + p ∀ a, b ∈ [A, +∞), a < b. The R.H.S.(15) depends only on b − a. Therefore
ν+1
D
2 kf k 1 ∗A f q,[A,+∞) ∞,[A,+∞) (b − a)ν−1+ p . kf 0 k∞,[A,+∞) ≤ + 1 1 b−a Γ (ν) (p (ν − 1) + 1) p ν + p (16) We may call t = b − a > 0. Thus
ν+1
D
2 kf k∞,[A,+∞) 1 ∗A f q,[A,+∞) 0 tν−1+ p , (17) kf k∞,[A,+∞) ≤ + 1 t Γ (ν) (p (ν − 1) + 1) p ν + p1 ∀ t ∈ (0, ∞) . Notice that 0 < ν − 1 +
1 p
< 1. Call µ e := 2 kf k∞,[A,+∞) ,
θe :=
(18)
ν+1
D
∗A f q,[A,+∞) , 1 Γ (ν) (p (ν − 1) + 1) p ν + p1
both are positive; and 1 ν ∈ (0, 1) ). νe := ν − 1 + , (e p
(19)
We consider the function ye (t) =
µ e e νe + θt , t
4
t ∈ (0, ∞) .
(20)
ANASTASSIOU: FRACTIONAL LANDAU INEQUALITIES
742
The only critical number here is te0 =
µ e νeθe
1 ν+1 e
,
(21)
and ye has a global minimum at te0 , which is 1 νe ν+1 ν e ). eµ e (e e ν + 1) νe−( ν+1 ye te0 = θe
(22)
Consequently, we get 11
ν+1 ν+
p D f ∗A q,[A,+∞) e ye t0 = · 1 Γ (ν) (p (ν − 1) + 1) p ν + p1
2 kf k∞,[A,+∞)
ν−1+ 1 p ν+ 1 p
− 1 1 ν+ ν−1+ p p
ν−1+ 1 p ν+ 1 p
.
(23)
We have proved that
kf 0 k∞,[A,+∞)
2 ν + p1 ≤ ν − 1 + p1
1
(p (ν − 1) + 1)
1 (pν+1)
· kf k∞,[A,+∞)
ν−1+ 1 p ν+ 1 p
ν−1+ 1 p ν+ 1 p
1
·
1
·
(24)
1 (Γ (ν)) (ν+ p )
11
ν+1 · D∗A f q,[A,+∞) (ν+ p ) .
We have established the following Lq result, left Caputo fractional Landau Lq inequality: Theorem 8 Let p, q > 1 : p1 + 1q = 1, 1 − p1 < ν ≤ 1, f ∈ AC 2 ([A, b]), ∀b > A. ν+1 Assume D∗A f ∈ Lq ([A, +∞)) , kf k∞,[A,+∞) < ∞, and
ν+1
ν+1
D∗a f ≤ D∗A f q,[A,+∞) , (25) q,[a,+∞) ∀ a ≥ A. Then
kf 0 k∞,[A,+∞)
2 ν + p1 ≤ ν − 1 + p1
ν−1+ 1 p ν+ 1 p
1 1
(Γ (ν)) (
kf k∞,[A,+∞)
ν+ 1 p
·
·
1 ) (p (ν − 1) + 1) (pν+1)
ν−1+ 1 p ν+ 1 p
11
ν+1 · D∗A f q,[A,+∞) (ν+ p ) . 5
(26)
ANASTASSIOU: FRACTIONAL LANDAU INEQUALITIES
743
We give Corollary 9 Let p, q > 1 : p1 + 1q = 1, 1 − p1 < ν ≤ 1, f ∈ AC 2 ([0, b]), ∀b > 0. ν+1 Assume D∗0 f ∈ Lq ([0, +∞)) , kf k∞,R+ < ∞, and
ν+1
ν+1
D∗a f ≤ D∗0 f q,R , (27) q,[a,+∞) +
∀ a ≥ 0. Then
kf 0 k∞,R+
2 ν + p1 ≤ ν − 1 + p1
ν−1+ 1 p ν+ 1 p
·
1
·
1
(Γ (ν)) (
kf k∞,R+
ν+ 1 p
(28)
1 ) (p (ν − 1) + 1) (pν+1)
ν−1+ 1 p ν+ 1 p
11
ν+1 · D∗0 f q,R+ (ν+ p ) .
We mention the case of p = q = 2. ν+1 Corollary 10 Let 21 < ν ≤ 1, f ∈ AC 2 ([0, b]), ∀b > 0. Assume D∗0 f ∈ L2 (R+ ) , f ∈ L∞ (R+ ) , and
ν+1
ν+1
D∗a f (29) f 2,R+ , ≤ D∗0 2,[a,+∞)
∀ a ≥ 0. Then
kf 0 k∞,R+ ≤
2ν + 1 ν − 12 1
ν− 1 2 ν+ 1 2
·
(30)
· 1 1 ν+ 1 ) ( (2ν+1) 2 (Γ (ν)) (2ν − 1) 11 ν− 121
ν+ ν+1 2 f 2,R+ (ν+ 2 ) . · D∗0 kf k∞,R+ When ν = 1 we get: Corollary 11 Let p, q > 1 : p1 + 1q = 1, f ∈ AC 2 ([0, b]), ∀b > 0. Assume f 00 ∈ Lq (R+ ) , f ∈ L∞ (R+ ) . Then 1
kf 0 k∞,R+ ≤ (2 (p + 1)) p+1 · p 1 p+1 p+1 · kf 00 kq,R+ . kf k∞,R+
(31)
We finish with Corollary 12 Let f ∈ AC 2 ([0, b]), ∀b > 0. Assume f 00 ∈ L2 (R+ ) , f ∈ L∞ (R+ ) . Then 13 23 √ 3 kf 0 k∞,R+ ≤ 6 · kf k∞,R+ · kf 00 k2,R+ . (32) 6
ANASTASSIOU: FRACTIONAL LANDAU INEQUALITIES
References [1] A. Agli´c Aljinovic, Lj. Marangunic and J. Pecari´c, On Landau type inequalities via Ostrowski inequalities, Nonlinear Funct. Anal. & Appl., Vol. 10, No. 4 (2005), pp. 565-579. [2] G.A. Anastassiou, Fractional Differentiation Inequalities, Springer, New York, 2009. [3] N.S. Barnett and S.S. Dragomir, Some Landau type inequalities for functions whose derivatives are of locally bounded variation, Tamkang Journal of Mathematics, Vol. 37, No. 4, 301-308, winter 2006. [4] K. Diethelm, Fractional Differential Equations, On line: http://www.tubs.de/˜diethelm/lehre/f-dgl02/fde-skript.ps.gz, 2003. [5] Z. Ditzian, Remarks, questions and conjections on Landau-Kolmogorov-type inequalities, Math. Ineq. Appl. 3 (2000), 15-24. [6] G.H. Hardy and J.E. Littlewood, Some integral inequalities connected with the calculus of variations, Quart. J. Math. Oxford Ser. 3 (1932), 241-252. [7] G.H. Hardy, E. Landau and J.E. Littlewood, Some inequalities satisfied by the integrals or derivatives of real or analytic functions, Math. Z. 39 (1935), 677-695. 2
[8] R.R. Kallman and G.-C. Rota, On the inequality kf 0 k ≤ 4 kf k · kf 00 k, in ”Inequalities”, vol. II (O. Shisha, Ed), 187-192, Academic Press, New York, 1970. [9] E. Landau, Einige Ungleichungen f¨ ur zweimal differentzierban funktionen, Proc. London Math. Soc. 13 (1913), 43-49.
7
744
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 745-751 , 2012, COPYRIGHT 2012 EUDOXUS745 PRESS, LLC
Superconvergence of Tensor-Product Quadratic Pentahedral Elements for Variable Coefficient Elliptic Equations∗ Jinghong Liu
(Department of Fundamental Courses, Ningbo Institute of Technology, Zhejiang University, Ningbo 315100, China) E-mail : [email protected]
Abstract For a second-order variable coefficient elliptic boundary value problem in 3D, we derive the weak estimate of the first type for tensor-product quadratic pentahedral finite elements. Combined with the estimate for the discrete derivative Green's function, this shows that the derivatives of the finite element approximation and of the corresponding interpolant are superclose in the sense of the L∞-norm. Mathematics subject classification: 65N30. Key words: tensor-product quadratic pentahedral elements, superconvergence, discrete derivative Green's function, variable coefficient elliptic equation
I. INTRODUCTION AND PRELIMINARIES

There have been many studies concerned with superconvergence of finite element methods in 3D; books and survey papers have been published. For the literature, we refer to [1–12] and the references therein. As for pentahedral elements, superconvergence results are obtained in [3, 6, 7, 9, 12]. Among them, [7, 12] show how to locate superconvergent points of the pentahedral finite element approximation. Recently, we obtained supercloseness properties of pentahedral elements in the pointwise sense of the L∞-norm for the Poisson equation (see [9]). The present article discusses maximum norm error estimates for derivatives of tensor-product quadratic pentahedral elements for a second-order variable coefficient elliptic equation. Throughout, the letter C denotes a generic constant which may differ at each occurrence, and we use the standard notation for Sobolev spaces and their norms. The model problem considered in this article is
$$Lu \equiv -\sum_{i,j=1}^{3} \partial_j (a_{ij}\,\partial_i u) + \sum_{i=1}^{3} a_i\,\partial_i u + a_0 u = f \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \tag{1.1}$$
∗ Supported by the Natural Science Foundation of Zhejiang Province (No. Y6090131), and the Natural Science Foundation of Ningbo City (No. 2010A610101).
Here $\Omega = [0,1]^2 \times [0,1] = \Omega_{xy} \times \Omega_z \subset \mathbb{R}^3$ is a rectangular block with boundary $\partial\Omega$ consisting of faces parallel to the x-, y-, and z-axes. We also assume that the given functions satisfy $a_{ij}, a_i \in W^{1,\infty}(\Omega)$, $a_0 \in L^\infty(\Omega)$, and $f \in L^2(\Omega)$. In addition, we write $\partial_1 u = \partial u/\partial x$, $\partial_2 u = \partial u/\partial y$, and $\partial_3 u = \partial u/\partial z$, the usual partial derivatives. To discretize the problem, one proceeds as follows. The domain $\Omega$ is first partitioned into subcubes of side $h$, and each of these is then subdivided into two pentahedra (triangular prisms). We denote by $\{T^h\}$ a uniform family of pentahedral partitions obtained in this way; thus $\bar\Omega = \cup_{e\in T^h}\bar e$. Obviously, we can write $e = D \times L$ (see Fig. 1), where $D$ and $L$ represent a triangle parallel to the xy-plane and a one-dimensional interval parallel to the z-axis, respectively.
FIG. 1. Pentahedral elements.

We introduce the tensor-product quadratic polynomial space, denoted by $P$; that is, every $q \in P$ has the form
$$q(x, y, z) = \sum_{(i,j,k)\in I} a_{ijk}\, x^i y^j z^k, \qquad a_{ijk} \in \mathbb{R},$$
where $P = P_{xy} \otimes P_z$, $P_{xy}$ stands for the quadratic polynomial space with respect to $(x, y)$, and $P_z$ for the quadratic polynomial space with respect to $z$. The indexing set $I$ satisfies $I = \{(i,j,k)\,|\, i, j, k \ge 0,\ i+j \le 2,\ k \le 2\}$. Let $\Pi_{xy}^e$ be the quadratic interpolation operator with respect to $(x, y) \in D$, and $\Pi_z^e$ the quadratic interpolation operator with respect to $z \in L$. Thus we may define the tensor-product quadratic interpolation operator $\Pi^e : H_0^1(e) \to P(e)$. Obviously, $\Pi^e = \Pi_{xy}^e \otimes \Pi_z^e = \Pi_z^e \otimes \Pi_{xy}^e$. The corresponding weak form of the problem (1.1) is $a(u, v) = (f, v)$ for all $v \in H_0^1(\Omega)$, where
$$a(u, v) \equiv \int_\Omega \Big(\sum_{i,j=1}^{3} a_{ij}\,\partial_i u\,\partial_j v + \sum_{i=1}^{3} a_i\,\partial_i u\, v + a_0 u v\Big)\, dxdydz = \sum_{i,j=1}^{3} (a_{ij}\partial_i u, \partial_j v) + \sum_{i=1}^{3} (a_i \partial_i u, v) + (a_0 u, v),$$
and $(f, v) = \int_\Omega f v \, dxdydz$.
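To make the structure of the space $P$ concrete, the small sketch below (purely illustrative, not from the paper) enumerates the indexing set $I = \{(i,j,k): i,j,k \ge 0,\ i+j \le 2,\ k \le 2\}$ defined above and counts the resulting monomial basis; the $6 \times 3 = 18$ monomials reflect the tensor-product structure $P = P_{xy} \otimes P_z$.

```python
# Illustrative only: enumerate the indexing set I of the tensor-product
# quadratic space P = P_xy (x) P_z, i.e. i + j <= 2 (quadratic in (x, y))
# and k <= 2 (quadratic in z).
from itertools import product

I = [(i, j, k) for i, j, k in product(range(3), repeat=3) if i + j <= 2 and k <= 2]

monomials = [f"x^{i} y^{j} z^{k}" for (i, j, k) in I]
print(len(I), "basis monomials")      # 6 * 3 = 18
print(monomials)
```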
Define the tensor-product quadratic pentahedral finite element space by
$$S_0^h(\Omega) = \left\{ v \in H_0^1(\Omega) : v|_e \in P(e)\ \ \forall\, e \in T^h \right\}. \tag{1.2}$$
Thus, the finite element method is to find $u_h \in S_0^h(\Omega)$ such that $a(u_h, v) = (f, v)$ for all $v \in S_0^h(\Omega)$. Obviously, there is the following Galerkin orthogonality relation:
$$a(u - u_h, v) = 0 \quad \forall\, v \in S_0^h(\Omega). \tag{1.3}$$
Additionally, from the definitions of $\Pi^e$ and $S_0^h(\Omega)$, we can define the global tensor-product quadratic interpolation operator $\Pi : H_0^1(\Omega) \to S_0^h(\Omega)$ by $(\Pi u)|_e = \Pi^e u$. In the next section, we will bound the term
$$a(u - \Pi u, v) = \sum_{i,j=1}^{3} (a_{ij}\,\partial_i(u - \Pi u), \partial_j v) + \sum_{i=1}^{3} (a_i\,\partial_i(u - \Pi u), v) + (a_0 (u - \Pi u), v). \tag{1.4}$$
II. A WEAK ESTIMATE FOR THE TENSOR-PRODUCT QUADRATIC INTERPOLATION OPERATOR

In this section, we give the following weak estimate for the interpolation operator $\Pi$.

Lemma 2.1. Let $\{T^h\}$ be a uniform family of pentahedral partitions of $\Omega$, $u \in W^{4,\infty}(\Omega) \cap H_0^1(\Omega)$, and $v \in S_0^h(\Omega)$. Then the tensor-product quadratic interpolation operator $\Pi$ satisfies the following weak estimate:
$$|a(u - \Pi u, v)| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.1}$$
Proof. Obviously, the remainder of the interpolation splits as
$$u - \Pi u = (u - \Pi_{xy} u) + (u - \Pi_z u) + \big(\Pi_{xy}(u - \Pi_z u) - (u - \Pi_z u)\big) \equiv R_{xy} + R_z + R^*, \tag{2.2}$$
where $(\Pi_{xy} u)|_e = \Pi_{xy}^e u$, $(\Pi_z u)|_e = \Pi_z^e u$, and $R^*$ is a higher-order term. Thus, we only need to analyze $R_{xy}$ and $R_z$. We first bound the term
$$\begin{aligned} a(R_{xy}, v) &= \sum_{i,j=1}^{3} (a_{ij}\partial_i R_{xy}, \partial_j v) + \sum_{i=1}^{3} (a_i \partial_i R_{xy}, v) + (a_0 R_{xy}, v) \\ &= \Big(\sum_{i,j=1}^{2} (a_{ij}\partial_i R_{xy}, \partial_j v) + \sum_{i=1}^{2} (a_i \partial_i R_{xy}, v)\Big) + \Big(\sum_{j=1}^{3} (a_{3j}\partial_3 R_{xy}, \partial_j v) + \sum_{i=1}^{2} (a_{i3}\partial_i R_{xy}, \partial_3 v)\Big) \\ &\quad + (a_3 \partial_3 R_{xy} + a_0 R_{xy}, v) \equiv J_1 + J_2 + J_3. \end{aligned} \tag{2.3}$$
By the weak estimate for quadratic triangular elements (see [13]),
$$|J_1| \le \int_{\Omega_z} \Big| \int_{\Omega_{xy}} \Big(\sum_{i,j=1}^{2} a_{ij}\,\partial_i R_{xy}\,\partial_j v + \sum_{i=1}^{2} a_i\,\partial_i R_{xy}\, v\Big)\, dxdy \Big|\, dz \le C h^3 \int_{\Omega_z} \|u\|_{4,\infty,\Omega_{xy}} |v|_{1,1,\Omega_{xy}}\, dz \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega},$$
namely,
$$|J_1| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.4}$$
Consider the integral
$$I_{3j} \equiv \int_\Omega a_{3j}\,\partial_3 R_{xy}\,\partial_j v\, dxdydz, \qquad j = 1, 2, 3.$$
Obviously, by the interpolation error estimate, we have
$$|I_{3j}| \le C \int_\Omega |\partial_3 u - \Pi_{xy}(\partial_3 u)|\,|\partial_j v|\, dxdydz \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.5}$$
As for the integral
$$I_{i3} \equiv \int_\Omega a_{i3}\,\partial_i R_{xy}\,\partial_3 v\, dxdydz, \qquad i = 1, 2,$$
since $v \in S_0^h(\Omega)$, integration by parts yields
$$I_{i3} = -\int_\Omega v\,\partial_3 (a_{i3}\,\partial_i R_{xy})\, dxdydz = -\int_\Omega v\,\partial_3 a_{i3}\,\partial_i R_{xy}\, dxdydz - \int_\Omega v\, a_{i3}\,\partial_3\partial_i R_{xy}\, dxdydz.$$
Applying integration by parts again, we have
$$I_{i3} = \int_\Omega R_{xy}\,\partial_i (v\,\partial_3 a_{i3})\, dxdydz + \int_\Omega \partial_3 R_{xy}\,\partial_i (v\, a_{i3})\, dxdydz.$$
By the interpolation error estimate and the Poincaré inequality, we derive
$$|I_{i3}| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.6}$$
From (2.5) and (2.6),
$$|J_2| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.7}$$
Applying the interpolation error estimate and the Poincaré inequality directly gives
$$|J_3| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.8}$$
From (2.3), (2.4), (2.7), and (2.8),
$$|a(R_{xy}, v)| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.9}$$
Next we bound the term
$$\begin{aligned} a(R_z, v) &= \sum_{i,j=1}^{3} (a_{ij}\partial_i R_z, \partial_j v) + \sum_{i=1}^{3} (a_i \partial_i R_z, v) + (a_0 R_z, v) \\ &= \sum_{i=1}^{2}\sum_{j=1}^{3} (a_{ij}\partial_i R_z, \partial_j v) + \sum_{j=1}^{2} (a_{3j}\partial_3 R_z, \partial_j v) + (a_{33}\partial_3 R_z, \partial_3 v) \\ &\quad + \Big(\sum_{i=1}^{2} (a_i \partial_i R_z, v) + (a_0 R_z, v)\Big) + (a_3 \partial_3 R_z, v) \equiv K_1 + K_2 + K_3 + K_4 + K_5. \end{aligned} \tag{2.10}$$
Applying the interpolation error estimate directly gives
$$|K_1| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.11}$$
Integration by parts yields
$$K_2 = \sum_{j=1}^{2} (R_z, \partial_3 (v\,\partial_j a_{3j})) + \sum_{j=1}^{2} (\partial_j R_z, \partial_3 (v\, a_{3j})).$$
Applying the interpolation error estimate and the Poincaré inequality gives
$$|K_2| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.12}$$
To bound the term $K_3$, we need the expansion $a_{33}(Q) = a_{33}(Q_0) + O(h_e) \equiv a_{33}^0 + O(h_e)$ for all $Q = (x, y, z) \in e$, where $h_e = \mathrm{diam}(e)$ and $Q_0 \in e$ is a fixed point. Thus,
$$K_3 = \sum_e \left( \int_e a_{33}^0\,\partial_3 R_z\,\partial_3 v\, dxdydz + \int_e O(h_e)\,\partial_3 R_z\,\partial_3 v\, dxdydz \right).$$
It is easy to see that
$$\left| \int_e O(h_e)\,\partial_3 R_z\,\partial_3 v\, dxdydz \right| \le C h^3 \|u\|_{3,\infty,\Omega} |v|_{1,1,e},$$
whereas (see [9])
$$\int_e a_{33}^0\,\partial_3 R_z\,\partial_3 v\, dxdydz = 0.$$
Summing over all elements yields
$$|K_3| \le C h^3 \|u\|_{3,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.13}$$
Applying the interpolation error estimate and the Poincaré inequality gives
$$|K_4| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.14}$$
As for the term $K_5$, integration by parts gives $K_5 = -(R_z, \partial_3(a_3 v))$. Thus, applying the interpolation error estimate and the Poincaré inequality gives
$$|K_5| \le C h^3 \|u\|_{3,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.15}$$
From (2.10)–(2.15),
$$|a(R_z, v)| \le C h^3 \|u\|_{4,\infty,\Omega} |v|_{1,1,\Omega}. \tag{2.16}$$
Thus, combining (2.9) and (2.16) completes the proof of the result (2.1).

III. SUPERCONVERGENCE OF THE TENSOR-PRODUCT QUADRATIC PENTAHEDRAL ELEMENT

To derive superconvergence estimates, for every $Z \in \Omega$ we need the discrete derivative Green's function $\partial_{Z,\ell} G_Z^h \in S_0^h(\Omega)$, defined by
$$a(v, \partial_{Z,\ell} G_Z^h) = \partial_\ell v(Z) \quad \forall\, v \in S_0^h(\Omega), \tag{3.1}$$
where $\ell \in \mathbb{R}^3$ and $|\ell| = 1$. Here $\partial_\ell v(Z)$ stands for the one-sided directional derivative
$$\partial_\ell v(Z) = \lim_{|\Delta Z| \to 0} \frac{v(Z + \Delta Z) - v(Z)}{|\Delta Z|}, \qquad \Delta Z = |\Delta Z|\,\ell.$$
Lemma 3.1. For the discrete derivative Green's function $\partial_{Z,\ell} G_Z^h$ defined by (3.1), we have the following estimate:
$$\left| \partial_{Z,\ell} G_Z^h \right|_{1,1,\Omega} \le C |\ln h|^{4/3}. \tag{3.2}$$
The proof of the result (3.2) may be found in [10]. Finally, we give the following maximum norm supercloseness property.

Theorem 3.1. Let $\{T^h\}$ be a uniform family of pentahedral partitions of $\Omega$ and $u \in W^{4,\infty}(\Omega) \cap H_0^1(\Omega)$. Let $u_h$ be the tensor-product quadratic pentahedral finite element approximation and $\Pi u$ the corresponding interpolant of $u$. Then we have the following superconvergence estimate:
$$|u_h - \Pi u|_{1,\infty,\Omega} \le C h^3 |\ln h|^{4/3} \|u\|_{4,\infty,\Omega}. \tag{3.3}$$
Proof. For every $Z \in \Omega$ and unit vector $\ell \in \mathbb{R}^3$, from (1.3) and (3.1),
$$\partial_\ell (u_h - \Pi u)(Z) = a(u_h - \Pi u, \partial_{Z,\ell} G_Z^h) = a(u - \Pi u, \partial_{Z,\ell} G_Z^h).$$
Hence, from (2.1),
$$|\partial_\ell (u_h - \Pi u)(Z)| \le C h^3 \|u\|_{4,\infty,\Omega} \left| \partial_{Z,\ell} G_Z^h \right|_{1,1,\Omega}. \tag{3.4}$$
Combining (3.2) and (3.4), we immediately obtain the result (3.3).
References
1. J. H. Brandts and M. Křížek, History and future of superconvergence in three dimensional finite element methods, Proceedings of the Conference on Finite Element Methods: Three-dimensional Problems, GAKUTO International Series Mathematics Science Application, Gakkotosho, Tokyo, 15 (2001), 22–33.
2. J. H. Brandts and M. Křížek, Superconvergence of tetrahedral quadratic finite elements, J. Comput. Math. 23 (2005), 27–36.
3. C. M. Chen, Construction theory of superconvergence of finite elements (in Chinese), Hunan Science and Technology Press, Changsha, China, 2001.
4. L. Chen, Superconvergence of tetrahedral linear finite elements, Internat. J. Numer. Anal. Model. 3 (2006), 273–282.
5. G. Goodsell, Pointwise superconvergence of the gradient for the linear tetrahedral element, Numer. Meth. Part. Differ. Equa. 10 (1994), 651–666.
6. A. Hannukainen, S. Korotov, and M. Křížek, Nodal O(h^4)-superconvergence in 3D by averaging piecewise linear, bilinear, and trilinear FE approximations, J. Comput. Math. 28 (2010), 1–10.
7. R. C. Lin and Z. M. Zhang, Natural superconvergent points in 3D finite elements, SIAM J. Numer. Anal. 46 (2008), 1281–1297.
8. J. H. Liu and Q. D. Zhu, Pointwise supercloseness of tensor-product block finite elements, Numer. Meth. Part. Differ. Equa. 25 (2009), 990–1008.
9. J. H. Liu and Q. D. Zhu, Pointwise supercloseness of pentahedral finite elements, Numer. Meth. Part. Differ. Equa. 26 (2010), 1572–1580.
10. J. H. Liu and Q. D. Zhu, The estimate for the W^{1,1}-seminorm of discrete derivative Green's function in three dimensions (in Chinese), J. Hunan Univ. Arts Sci. 16 (2004), 1–3.
11. A. H. Schatz, I. H. Sloan, and L. B. Wahlbin, Superconvergence in finite element methods and meshes that are locally symmetric with respect to a point, SIAM J. Numer. Anal. 33 (1996), 505–521.
12. Z. M. Zhang and R. C. Lin, Locating natural superconvergent points of finite element methods in 3D, Internat. J. Numer. Anal. Model. 2 (2005), 19–30.
13. Q. D. Zhu and Q. Lin, Superconvergence theory of the finite element methods (in Chinese), Hunan Science and Technology Press, Changsha, China, 1989.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 752-766, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
ON SUPPORTS OF EQUILIBRIUM MEASURES WITH CONCAVE SIGNED EQUILIBRIA

D. BENKO†, S. B. DAMELIN‡ AND P. D. DRAGNEV§
Abstract. Let Σ ⊂ R be compact. In this paper, we study the support of the equilibrium measure for a class of external fields Q : Σ → R, whose associated signed equilibrium measure has a positive part with concave density supported on at most two intervals. We prove that the support of the equilibrium measure is at most two intervals. Our proof uses the iterated balayage algorithm. As a corollary we obtain by a constructive method that the equilibrium measure of any two intervals has convex density. A non-trivial counterpart of the results to the unit circle is also presented. Key words. Logarithmic potential theory, external fields, equilibrium measure, equilibrium support AMS subject classifications. 31A15, 30C15, 78A30
1. Introduction. In recent years minimal energy problems with external fields have found many applications in a variety of areas, including orthogonal polynomials, weighted Fekete points, numerical conformal mappings, weighted polynomial approximation, rational and Padé approximation, integrable systems, random matrix theory and random permutations. Let $\Sigma \subset \mathbb{C}$ be closed. An essential step towards the solution of such minimal energy problems is the determination of the nature of the support of the equilibrium measure $\mu := \mu_Q$ associated with a given external field $Q : \Sigma \to (-\infty, \infty]$. As described by Deift [8, Chapter 6], information that the support consists of finitely many intervals allows one to set up a system of equations for the endpoints, from which the endpoints may be calculated, and thus the equilibrium measure may be obtained from a Riemann-Hilbert problem or, equivalently, a singular integral equation. It is for this reason that it is important to have a priori conditions ensuring that the support S of µ is the union of a finite number of intervals. For more on the various applications of minimal energy problems we refer the reader to the references [1, 2, 3, 6, 7, 8, 9, 15, 17, 18, 19, 20] and those listed therein. In this paper we establish a sufficient condition that the equilibrium support consists of at most two intervals. The method utilizes the Iterated Balayage Algorithm, introduced and first used in the papers [15, 6, 7]. As a result of our analysis we also establish that the equilibrium measure of any two intervals has convex density. In order to formulate our results we introduce some needed definitions and notation from potential theory.

1.1. Potential-Theoretical Preliminaries. Let $\Sigma = \cup_{i=1}^n [a_i, b_i]$ or let Σ be finitely many closed arcs on the unit circle. Let $w = \exp(-Q)$ be a function (called a weight), where $Q : \Sigma \to (-\infty, \infty]$ is continuous in an extended sense but not identically infinity. Given a Borel probability measure ν supported on Σ, its logarithmic potential and logarithmic energy are respectively given by
$$U^\nu(z) := \int \log\frac{1}{|z-t|}\, d\nu(t), \qquad I(\nu) := \iint \log\frac{1}{|s-t|}\, d\nu(s)\, d\nu(t).$$
The weighted energy of ν associated with the external field Q (or weight w) is defined as
$$I_w(\nu) := I(\nu) + 2\int Q(x)\, d\nu(x).$$
It is well known (see [11], [19, Theorem 1.3]) that under the above assumptions there exists a unique equilibrium measure $\mu := \mu_w$ associated with Q, which solves the minimal energy problem
$$I_w(\mu) = E_w := \min_{\nu \in \mathcal{P}(\Sigma)} I_w(\nu), \tag{1.1}$$

† Department of Mathematics and Statistics, ILB 325, University of South Alabama, AL 36688 ([email protected])
‡ Department of Mathematical Sciences, Georgia Southern University, P.O. Box 8093, Statesboro, GA 30460; and School of Computational and Applied Mathematics, University of the Witwatersrand, Private Bag 3, Wits, 2050, South Africa ([email protected])
§ Department of Mathematical Sciences, Indiana-Purdue University, Fort Wayne, IN 46805 ([email protected])
where $\mathcal{P}(\Sigma)$ denotes the class of all Borel probability measures supported on Σ. The support of the equilibrium measure is denoted by $S_\mu$ or $S_Q$. This measure is completely characterized by the Gauss variational conditions
$$U^\mu(x) + Q(x) = F, \quad x \in S_Q, \qquad U^\mu(x) + Q(x) \ge F, \quad x \in \Sigma, \tag{1.2}$$
with some constant F. We note that under the assumptions we have made, the logarithmic potential $U^\mu(z)$ is continuous in $\mathbb{C}$. In the particular case when $Q \equiv 0$ the unique minimizer $\mu_\Sigma$ of (1.1) is called the equilibrium measure of Σ. Next we introduce the notion of signed equilibrium (see [4]).

DEFINITION 1. Given a compact subset $E \subset \mathbb{C}$ and an external field Q, we call a signed measure $\eta_E$ supported on E, and of total mass $\eta_E(E) = 1$, a signed equilibrium on E associated with Q, if
$$U^{\eta_E}(x) + Q(x) = F_E \quad \text{for all } x \in E. \tag{1.3}$$
The choice of the normalization $\eta_E(E) = 1$ is just for convenience in the applications here. The next subsection is a brief summary of an essential tool in our analysis, called the Iterated Balayage Algorithm (see [15, 6, 7]). For the rest of the paper we will assume that Σ is a union of finitely many intervals and that the external field Q has δ-Hölder continuous first derivative for some δ > 0.

1.2. The Iterated Balayage Algorithm (IBA). We recall the notion of balayage onto a compact set (see [16, Chapter IV]). Let M be a compact subset of the complex plane with positive logarithmic capacity and such that the complement $\mathbb{C} \setminus M$ is regular. Then, if ν is any finite positive Borel measure on $\mathbb{C}$ with compact support, there exists a unique measure $\hat\nu$ supported on M such that $\|\nu\| = \|\hat\nu\|$ and, for some constant C,
$$U^{\hat\nu}(z) = U^\nu(z) + C, \qquad z \in M.$$
The measure $\hat\nu$ is called the balayage of ν onto M, and we denote it by $\mathrm{Bal}(\nu; M)$. For a signed measure $\sigma = \sigma^+ - \sigma^-$, the balayage is defined as $\mathrm{Bal}(\sigma; M) := \mathrm{Bal}(\sigma^+; M) - \mathrm{Bal}(\sigma^-; M)$. The Iterated Balayage Algorithm (IBA) presents an iterative method to solve the variational problem (1.2). Given an external field Q on $\Sigma_0 := \Sigma$, one proceeds as follows. The first step is to solve the integral equation
$$\int_{\Sigma_0} \log|x-t|\, v_0(t)\, dt = Q(x) - F_0, \qquad x \in \Sigma_0, \tag{1.4}$$
subject to the condition
$$\int_{\Sigma_0} v_0(t)\, dt = 1. \tag{1.5}$$
Here, $F_0$ is a fixed constant. Since $Q \in C^{1+\delta}(\Sigma_0)$ (recall that $\Sigma_0$ consists of finitely many intervals), it can be shown that (1.4)–(1.5) has a unique solution $v_0(t)$, given by the Sokhotski-Plemelj formula (see [12, p. 425]). In the simplest case when $\Sigma_0 = [a, b]$ the formula for $v_0$ is (see [12, p. 428])
$$v(x) = v_0(x) := \frac{1}{\pi\sqrt{(b-x)(x-a)}}\left[1 + \frac{1}{\pi}\, \mathrm{P.V.}\!\int_a^b \frac{Q'(t)}{t-x}\sqrt{(b-t)(t-a)}\, dt\right], \qquad a < x < b, \tag{1.6}$$
where the above integral is a Cauchy principal value integral. In view of Definition 1, $v_0$ is the density of the signed equilibrium on $\Sigma_0$. If $v_0$ happens to be non-negative on $\Sigma_0$ then it is the density of the equilibrium measure with external field Q and we are done. If not, then we put $d\sigma_0(t) := v_0(t)\, dt$,
so that $\sigma_0$ is a signed measure on $\Sigma_0$. Clearly,
$$U^{\sigma_0}(x) + Q(x) = F_0 \quad \text{for all } x \in \Sigma_0.$$
Let $\sigma_0 = \sigma_0^+ - \sigma_0^-$ be the Jordan decomposition of $\sigma_0$ and let $\Sigma_1 := \mathrm{supp}(\sigma_0^+)$.
From [15, Lemma 3], $\mu \le \sigma_0^+$ and $\mathrm{supp}(\mu) \subset \Sigma_1$, so that in determining µ and its support we may restrict ourselves in (1.1) to $\Sigma = \Sigma_1$. The next step is to determine the signed equilibrium associated with Q on $\Sigma_1$, which is to solve the singular equation
$$\int_{\Sigma_1} \log|x-t|\, d\sigma_1(t) = Q(x) - F_1, \qquad x \in \Sigma_1, \tag{1.7}$$
subject to the condition
$$\int_{\Sigma_1} d\sigma_1 = 1. \tag{1.8}$$
Alternatively, using balayage one derives that the signed measure $\sigma_1 := \sigma_0^+ - \mathrm{Bal}(\sigma_0^-, \Sigma_1)$ satisfies
$$U^{\sigma_1}(x) + Q(x) = F_1 \quad \text{for all } x \in \Sigma_1,$$
provided $\Sigma_1$ is regular (which will be the case in our applications), and by uniqueness it is the solution to (1.7)-(1.8). If $\Sigma_1$ is not regular the equality holds q.e. and uniqueness still holds, but we will not elaborate on this generality here. To describe this process, an operator J was introduced in [15, 6, 7] on all finite signed measures σ on [a, b] with $\int d\sigma = 1$ and $\mathrm{cap}(\mathrm{supp}(\sigma^+)) > 0$, as follows:
$$J(\sigma) := \sigma^+ - \mathrm{Bal}(\sigma^-; \mathrm{supp}(\sigma^+)) = \mathrm{Bal}(\sigma; \mathrm{supp}(\sigma^+)).$$
Since the operator J sweeps the negative part of the measure σ onto the support of the positive part, we have that $J(\sigma)^+ \le \sigma^+$. The IBA scheme $I(\sigma_0) = \{(\Sigma_k, \sigma_k)\}_{k=0}^\infty$ is obtained as the iterates of the operator J applied to a signed (equilibrium) measure $\sigma_0$ supported on the set $\Sigma_0$, i.e.,
$$\Sigma_k := \mathrm{supp}(\sigma_{k-1}^+), \qquad \sigma_k := J(\sigma_{k-1}) = J^k(\sigma_0), \qquad k = 1, 2, \ldots. \tag{1.9}$$
The measures $\sigma_k$ are signed measures which have a Jordan decomposition $\sigma_k = \sigma_k^+ - \sigma_k^-$. It follows that
$$\sigma_0^+ \ge \sigma_1^+ \ge \cdots \ge \mu, \tag{1.10}$$
and that
$$\Sigma_0 \supset \Sigma_1 \supset \Sigma_2 \supset \cdots \supset S. \tag{1.11}$$
755
4
D. BENKO, S. B. DAMELIN AND P. D. DRAGNEV
1.3. Balayage and Kelvin Transform. There is a natural relationship between balayage measures and equilib¯ →C ¯ with center z0 and radius R is given as the inversion rium measures. Let us recall that the Kelvin transform K : C ∗ with respect to the circle {|z − z0 | = R}, namely if z = K(z), then z ∗ lies on the ray stemming from z0 passing through z and distances satisfy |z − z0 ||z ∗ − z0 | = R2 . The distance distortion is given by |z ∗ − x∗ | =
R2 |z − x| . |z − z0 ||x − z0 |
(1.12)
To any measure ν we associate its Kelvin transform ν ∗ = K(ν) as dν ∗ (x∗ ) = dν(x). Observe, that both the point and measure conversion are self-inverse. Given a compact set A and a point z0 6∈ A we find the balayage δc z0 of the Dirac-delta measure δz0 using Riesz’s approach [16, Chapter IV, §5], namely if A∗ = K(A), then the following relation holds δc z0 = K(µA∗ ),
(1.13)
where µA∗ is the equilibrium measure of A∗ . 2. Results and examples. Following is our main result. T HEOREM 2. Let Σ ⊆ R and w : Σ → [0, ∞), w = exp(−Q) as described in Section 1.1. Suppose that the signed equilibrium on Σ associated with Q exists and is denoted by σ0 . If supp(σ0+ ) consists of at most two intervals A1 , A2 , and σ0+ has a concave density on each subinterval, then the equilibrium support supp(µ) consists of at most two intervals B1 , B2 , with B1 ⊂ A1 , B2 ⊂ A2 , and the equilibrium density is concave on each of these intervals. We remark that the theorem remains valid for any Σ ⊆ R compact set as long as σ0+ has a support consisting of at most two intervals and σ0+ has a concave density. Also, the δ-H¨older continuous first derivative condition on Q must be satisfied only on supp(σ0+ ). The proof of this theorem relies on a lemma concerning the balayage of a measure onto one or two intervals, which we deem important by itself and include in this section. L EMMA 3. Let A be an interval or a union of two intervals, and let ν be a positive measure with compact support in R, such that ν(A) = 0. Then the balayage νˆ = Bal(ν, A) of ν onto A is absolutely continuous with convex density on every subinterval. If we consider ν = δs , where δs is the Dirac-delta measure with point mass at s, and let s → ∞, then it is well known that densities of the balayage measures δˆs converge to the density of the equilibrium measure µA . Therefore, we obtain as a byproduct of our analysis the following corollary. C OROLLARY 4. The density of the equilibrium measure of the union of any two intervals is convex on every subinterval. R 3π E XAMPLE 5. Let Σ = [0, 3π] and Q(x) := 0.5 0 ln |x − t| sin t dt. We can verify that the signed equilibrium is dη = 0.5 sin t dt. Since it is positive and concave on [0, π] ∪ [2π, 3π], we conclude that the equilibrium measure associated with Q(x) is supported on at most two intervals I1 ∪ I2 , where I1 ⊂ [0, π] and I2 ⊂ [2π, 3π], and that the equilibrium density is concave. We remark that by symmetry, in fact the support will consist of two intervals which are symmetric to the point 1.5π. We also remark that this external field is not weak convex in the sense defined in [2, Definition 9], therefore it is an essentially new example in the literature. λ E XAMPLE 6. (Freud weights example) The following classical Freud weights example w(x) = e−|x| (see [19, Example IV.1.15]) provides a nice illustration of our results. For simplicity of the computations, we shall assume that λ = 2, or equivalently Q(x) = x2 , and Σ = R. General theory yields that the support of the equilibrium measure associated with Q(x) is compact, so we may restrict ourselves to solving the minimal energy problem on the interval [−β, β] for some large enough β. Using (1.6) we easily find that the density of the signed equilibrium in this case is given by vβ (x) =
β2 − 1 2p 2 . β − x2 − p π π β 2 − x2
(2.1)
For β > 1 the function vβ (x) is clearly concave, and thus by Theorem 2 the equilibrium support is one interval and the equilibrium √ measure associated with Q(x) has concave density. Indeed, it is known that supp(µQ ) = [−1, 1] and dµQ (x) = (2 1 − x2 /π) dx. Below we provide an alternative argument to this fact.
756
SUPPORT OF EQUILIBRIUM MEASURE
5
Observe that since vβ (1) > 0 for all β > 1, the IBA scheme is the collection of nested intervals supp(σk ) =: [−βk , βk ] ⊃ [−1, 1]. It can be calculated that the sequence {βk } satisfies the recurrence relation r βk+1 =
βk2 + 1 , 2
β0 = β.
by βk . One can easily show that βk & 1 as k → ∞, The density of σk is simply given by (2.1) with β replaced √ implying that supp(µQ ) = [−1, 1] and dµQ (x) = (2 1 − x2 /π) dx. The unit circle counterpart of Theorem 2 is not a trivial consequence of the result on the real line and its proof is essentially different, so we formulate it as a separate theorem. T HEOREM 7. Let Σ ⊆ T and w : Σ → [0, ∞), w = exp(−Q) as described in Section 1.1. Suppose that the signed equilibrium on Σ associated with Q exists and is denoted by σ0 . If supp(σ0+ ) consists of at most two arcs A1 , A2 , and dσ0+ = g(θ) dθ has a concave density g(θ) on each of the arcs, then the equilibrium support supp(µ) consists of at most two arcs B1 , B2 , with B1 ⊂ A1 , B2 ⊂ A2 , and the equilibrium density is concave on each of these arcs. Here dθ indicates the arclength Lebesgue measure on T. We remark that the theorem remains valid for any Σ ⊆ T compact set as long as σ0+ has a support consisting of at most two arcs and dσ0+ = g(θ) dθ has a concave density. Also, the δ-H¨older continuous first derivative condition on Q must be satisfied only on supp(σ0+ ). The key to the proof of the unit circle case is an analog of Lemma 3. L EMMA 8. Let A be an arc or a union of two arcs, and let ν be a positive measure with compact support on T, such that ν(A) = 0. Then the balayage νˆ := Bal(ν, A) of ν onto A is absolutely continuous measure with respect to the Lebesgue arclength measure dθ and has convex density on every subarc. R EMARK 9. It is possible to derive Lemma 3 from Lemma 8 through a limiting process, which we illustrate briefly. It is enough to consider only the point mass balayage case. Let s ∈ R be fixed, A = [a, b] ∪ [c, d] ⊂ R, and let AR be the preimage of A under the inversion with center s + iR and radius R. Observe, that AR consists of two arcs on the circle |x − s − iR/2| = R/2, and hence Bal(δs , AR ) has convex density with respect to the Lebesgue arclength on every subarc. As R → ∞ one can show that the density of Bal(δs , AR ) approaches that of Bal(δs , A), which would imply the convexity of the limit. However, instead of following this route, we prefer to prove Lemma 3 directly, as its proof is simpler than that of Lemma 8, as well as illustrative and constructive. The next theorem establishes another condition which guarantees that the support of the equilibrium measure is an interval. We say that a function h(x) is strictly increasing (decreasing) a.e. on [−1, 1], if there exists a set G ⊂ [−1, 1] such that G has measure 1 and h(x) is strictly increasing (decreasing) on G. T HEOREM 10. Let Q be an external field on [−1, 1] as described in Section 1.1. Then the following hold: (a) Let f (x) := (1 − x2 )Q0 (x). If f is convex on [−1, 1], then the support S is an interval [b, 1], where b ∈ [−1, 1). If f is concave on [−1, 1], then the support S is an interval [−1, b], where b ∈ (−1, 1]. √ 0 2 , and let g(x) = 0 otherwise. Assume that g(x) is absolutely 1 − x (b) For x ∈ [−1, 1] let g(x) := Q (x) R R R continuous, R R |(g 0 (x + u) − g 0 (x))/u|dudx < ∞, and x 7→ R (g 0 (x + u) − g 0 (x))/udu is a continuous function on (−1, 1). √ If 1 − x2 g 0 (x) is strictly increasing a.e. on [−1, 1], then the support S is an interval [b, 1], where b ∈ [−1, √ 1). If 1 − x2 g 0 (x) is a strictly decreasing a.e. on [−1, 1], then the support S is an interval [−1, b], where b ∈ (−1, 1]. E XAMPLE 11. Let Q(x) = c arcsin(x/a), x ∈ [−1, 1], a > 1, c 6= 0. We find that f (x) = c
¸ ·p a2 − 1 . a2 − x2 − √ a2 − x2
757
6
D. BENKO, S. B. DAMELIN AND P. D. DRAGNEV
Thus, for c > 0 the function f is concave and the equilibrium support SQ = [−1, b]. For c < 0 the function f (x) is convex and SQ = [b, 1]. Observe that the external field Q is weakly-convex in the sense of [2], so this is an independent verification of the result there. The remainder of this paper is devoted to the proofs of our results. 3. Proof of the real line case. We will repeatedly make use of the Chebyshev’s Integral Inequality (see [13, p. 1092], [14, pp. 43-44]), which we formulate as a separate lemma and provide a short proof for completeness. L EMMA 12. (Chebyshev, 1882) Let f, g, h be integrable functions on [a, b] and let f ≥ 0 on [a, b]. (a) Suppose that both g and h are monotone increasing (decreasing). Then Z
Z
b
Z
b
(f g)(x) dx
(f h)(x) dx ≤
a
a
Z
b
b
f (x) dx a
(f gh)(x)dx,
(3.1)
(f gh)(x)dx,
(3.2)
a
provided all integrals exist. (b) If h increases and g decreases (h decreases and g increases), then Z
Z
b a
Z
b
(f g)(x) dx
(f h)(x) dx ≥ a
Z
b
b
f (x) dx a
a
provided all integrals exist. Proof. Clearly (b) follows from (a) by substituting g with (−g), so let h, g be both monotone increasing (decreasing). Then f (x)f (y)[g(x) − g(y)][h(x) − h(y)] ≥ 0 for any x, y ∈ [a, b]. Integrating the inequality yields Z
b
Z
b
f (x)f (y)[g(x) − g(y)][h(x) − h(y)] dx dy ≥ 0, a
a
which implies (3.1). R EMARK 13. In the particular case when h(x) = x, Lemma 12 has an interesting geometric interpretation. Suppose f, g, are integrable non-negative functions on [a, b] and g is an increasing function on [a, b]. Then (3.1) becomes R R xf (x)g(x) dx xf (x) dx R ≤ R . (3.3) f (x) dx f (x)g(x) dx Imagine that the [a, b] interval is a wire with density f (x). The above inequality says that the center of mass will move to the right if we multiply the density by an increasing non-negative function g(x). We now continue with the proof of Lemma 3. Proof of Lemma 3. It is enough to prove the lemma only for Dirac delta measures δs , s 6∈ A, because in general we have the representation dˆ ν = dt
Z supp(ν)
dδˆs dν(s) =: dt
Z f (t, s) dν(s),
(3.4)
supp(ν)
where f (t, s) := dδˆs /dt. Then the convexity of νˆ can be easily derived from that of f (t, s) by integrating the inequality f (αt + (1 − α)y, s) ≤ αf (t, s) + (1 − α)f (y, s), with respect to dν(s) and using (3.4) (recall that ν(A) = 0).
0 ≤ α ≤ 1,
t, y ∈ A
758
7
SUPPORT OF EQUILIBRIUM MEASURE
We now consider two cases. Case 1: The set A consists of one interval, for simplicity A = [−1, 1]. From [19, Chapter II, Corollary 4.12] we have that √ s2 − 1 1 √ f (t, s) = π |s − t| 1 − t2
(3.5)
We claim that ftt (t, s) > 0 for all t ∈ (−1, 1). Without loss of generality let s > 1. Then we find that √ ¶ µ s+t 3t2 2 s2 − 1 √ + , + ftt (t, s) = π (s − t)2 (1 − t2 )3/2 (s − t)(1 − t2 )5/2 (s − t)3 1 − t2 which verifies the claim, since s + t > 0 and s − t > 0. Case 2: The set A consists of two intervals, i.e., A = [a, b] ∪ [c, d]. Assume first that s ∈ (b, c). Applying Kelvin transform K with center s and radius 1 we obtain A∗ := K(A) = [b∗ , a∗ ] ∪ [d∗ , c∗ ] (see Section 1.3). The density of the equilibrium measure of A∗ (see [21, Lemma 4.4.1]) is given by |t∗ − y ∗ | dµA∗ , = p ∗ dt π |t∗ − a∗ ||t∗ − b∗ ||t∗ − c∗ ||t∗ − d∗ |
(3.6)
where y ∗ ∈ (a∗ , d∗ ) is determined from the equation Z
d∗
a∗
t∗ − y ∗ p dt∗ = 0. π |t∗ − a∗ ||t∗ − b∗ ||t∗ − c∗ ||t∗ − d∗ |
(3.7)
Using the distance distortion formula (1.12) and the balayage/equilibrium relation (1.13) we derive the balayage density as p |s − a||s − b||s − c||s − d| |t − y| dδˆs p , t ∈ A. = f (t, s) := (3.8) dt π|s − y| |t − s| |t − a||t − b||t − c||t − d| Let à φ(t) := log(f (t, s)) = log
|t − y|
|t − s|
!
p |t − a||t − b||t − c||t − d|
+ c(s).
We shall prove that for fixed s ∈ (b, c), we have φ00 (x) > 0 for all x ∈ (a, b) ∪ (c, d). This amounts to the inequality ½ ¾ 1 1 1 1 1 1 1 < + + + + , for all x ∈ A. (3.9) (x − y)2 (x − s)2 2 (x − a)2 (x − b)2 (x − c)2 (x − d)2 (Observe that if y ∗ = s, then y = ∞ and the factors |t − y| and |s − y| in (3.8) are omitted and (3.9) obviously holds.) If y ∗ ∈ (a∗ , d∗ ) \ {s}, then y ∈ (−∞, a) ∪ (d, ∞). Without loss of generality we may assume y ∈ (−∞, a). Clearly, for x ∈ [c, d] (3.9) holds. So, let us fix x ∈ [a, b]. From (3.7) we derive that Z
d∗
a∗ d∗
y ∗ − a∗ = Z
a∗
t∗ − a∗ p dt∗ |t∗ − a∗ ||t∗ − b∗ ||t∗ − c∗ ||t∗ − d∗ | 1
∗
p dt |t∗ − a∗ ||t∗ − b∗ ||t∗ − c∗ ||t∗ − d∗ |
(3.10)
759
8
D. BENKO, S. B. DAMELIN AND P. D. DRAGNEV
Since
p |t∗ − c∗ | is decreasing on [a∗ , d∗ ] we can apply Lemma 12 (see also Remark 13) to estimate that Z y ∗ − a∗ ≥ Z
d∗ a∗
t∗ − a∗
p dt |t∗ − a∗ ||t∗ − b∗ ||t∗ − d∗ |
d∗ a∗
1
p dt |t∗ − a∗ ||t∗ − b∗ ||t∗ − d∗ |
∗
∗
Z √ a∗ − b∗ ≥ √ ∗ ·Z d − b∗
a∗
d∗
a∗
d∗
r
t∗ − a ∗ ∗ dt d ∗ − t∗ 1
(3.11) ∗
p dt (t∗ − a∗ )(d∗ − t∗ )
The second fraction evaluates to (d∗ − a∗ )/2, thus reducing (3.11) to p |a∗ − b∗ | d∗ − a∗ ∗ ∗ , y −a ≥ p 2 |d∗ − b∗ |
(3.12)
which after the Kelvin transformation becomes p |d − s||a − b| |d − a| |y − a| ≥p . |y − s| |a − s||d − b| 2|d − s|
(3.13)
p p 2 |y − a||a − s| |d − s||d − b| 1 1 p ≤ ≤p , · ·p |y − s| |d − a| |y − a| |a − b| |a − b|
(3.14)
Rewriting (3.13) yields 1
where each of the two fractions above is less than 1. On the other hand, one easily derives that ½ ¾ 1 1 4 1 + = . min 2 2 (a − x) (b − x) (b − a)2 x∈[a,b] 2 So, the inequality (3.14) implies (3.9) at least with a factor of four in excess. ½ ¾ 1 1 1 1 1 1 ≤ ≤ ≤ + . (y − x)2 (y − a)2 (a − b)2 8 (a − x)2 (b − x)2 This establishes the lemma when s ∈ (b, c), and hence for any measure ν with supp(ν) ⊂ [b, c] with ν(A) = 0. If we assume that s ∈ (−∞, a)∪(d, ∞), then the balayage µ = Bal(δs , [a, d]) has a convex density by Case 1. However, from the properties of balayage measure we have that Bal(δs , [a, b] ∪ [c, d]) = Bal(µ, [a, b] ∪ [c, d]) = µ|A + Bal(µ|[b,c] , A). Both measures on the right have convex densities on A, the first, as a restriction of a measure with a convex density on the entire interval [a, d], and the second by what we just proved. Hence, δˆs has convex density and the lemma is proved. We are now ready for the Proof of Theorem 2. The proof is based on the iterated balayage algorithm discussed in Section 1.2. Let Σ0 := Σ and σ0 denote the signed equilibrium associated with Q. By the assumption of the theorem the latter exists, and if σ0 = σ0+ − σ0− is the Jordan decomposition of σ0 , the measure σ0+ has concave density and its support Σ1 := supp(σ0+ ) consists of at most two intervals. Then from Lemma 3, Bal(σ0− , Σ1 ) is convex and thus, σ1 = J(σ0 ) = σ0+ − Bal(σ0− , Σ1 ) has concave density on Σ1 . Therefore, Σ2 := supp(σ1+ ) consists of at most two intervals (nested in at most two intervals that make up Σ1 ). Continuing, in this way we obtain an IBA sequence of nested compact sets Σ0 ⊃ Σ1 ⊃ . . . Σn ⊃ · · · ⊃ supp(µQ ) ,
760
9
SUPPORT OF EQUILIBRIUM MEASURE
each made of at most two intervals. Now we show that the density of σn , which we will denote by vn (x), is converging to the density of the equilibrium measure µQ associated with the external field Q(x). Let us assume that for all n we have Σn = [an , bn ] ∪ [cn , dn ], where [an , bn ] and [cn , dn ] are two non-trivial disjoint intervals. If it was not the case, the proof would be the same. (We remark that even if Σ0 consisted of two intervals, we may “lose” one in the IBA if at one step vn ≤ 0 on [an , bn ] or on [cn , dn ].) Let lim an = a, lim bn = b, lim cn = c, lim dn = d. We have Z Q(x) − Fn = log |t − x|vn (t) dt Σ ! ÃZn Z =
+
log |t − x|vn (t)dt
Σn+1
(3.15)
Σn \Σn+1
ÃZ
!
Z
=
+ [an+1 ,bn+1 ]∪[cn+1 ,dn+1 ]
log |t − x|vn (t)dt. [an ,an+1 ]∪[bn+1 ,bn ]∪[cn ,cn+1 ]∪[dn+1 ,dn ]
We claim that Z vn (t)dt → 0.
(3.16)
[an ,an+1 ]∪[bn+1 ,bn ]∪[cn ,cn+1 ]∪[dn+1 ,dn ]
If not, then we can chose a subsequence, denoted by n for simplicity, and mi ≤ 0, i = 1, . . . , 4, such that Z Z Z Z X vn (t)dt → m1 , vn (t)dt → m2 , vn (t)dt → m3 , vn (t)dt → m4 , mi < 0. [an ,an+1 ]
¯ ¯ Since vn ¯
[bn+1 ,bn ]
[a,b]∪[c,d]
[cn ,cn+1 ]
[dn+1 ,dn ]
is a bounded decreasing sequence, letting v := lim vn , we can use the dominated convergence
theorem and mean value theorem to derive that for any x ∈ (a, b) ∪ (c, d) Z lim(Q(x)−Fn ) = log |t−x|v(t)dt+m1 log |a−x|+m2 log |b−x|+m3 log |c−x|+m4 log |d−x|. (3.17) [a,b]∪[c,d]
This shows that Fn has a finite limit. We also see that we must have m1 = m2 = m3 = m4 = 0. For example, if m1 < 0, let x = a in (3.15) and let n → ∞. The right-hand side of (3.15) is approaching positive infinity while the left hand-side has finite limit, which is a contradiction. Let F := lim Fn . From (3.15) we get that for any x ∈ (a, b) ∪ (c, d) Z Q(x) − F = log |t − x|v(t)dt. [a,b]∪[c,d]
From (3.16) it is also clear that v(t) is a probability density function. The equilibrium measure µQ minimizes the weighted energy on Σ and therefore on [a, b] ∪ [c, d], too. Also, the support of µQ is a subset of [a, b] ∪ [c, d]. It follows that v(x) is the density of the equilibrium measure (see [19, Theorem I.3.3]). If a = b then supp(µQ ) = [c, d]. If c = d then supp(µQ ) = [a, b]. Finally, if a < b and c < d then supp(µQ ) = [a, b] ∪ [c, d]. 4. Proof of the unit circle case. We proceed first with the proof of Lemma 8. Proof of Lemma 8. As in the proof of Lemma 3 above, it is enough to verify the Lemma for case of a point mass, so without loss of generality we assume that ν = δs , where s ∈ T \ A. Given two points on the unit circle eiφ and eiθ we denote by [eiφ , eiθ ] the closed arc that connects the points counterclockwise. In this notation, its complement relative to the unit circle would be the open arc (eiθ , eiφ ). Since the case of an arc easily follows from the case of two arcs as we deform one of the arcs to a point, we set A =
761
10
D. BENKO, S. B. DAMELIN AND P. D. DRAGNEV
F IG . 4.1. Circle case
[eiα , eiβ ] ∪ [eiγ , eiδ ], where eiα , eiβ , eiγ , eiδ are points on the unit circle ordered counterclockwise. Without loss of generality we assume that s = i ∈ (eiδ , eiα ) (see Fig. 4.1). For simplicity assume also that all angles below are given in the interval [π/2, 5π/2), meaning in particular that π/2 < α < β < γ < δ < 5π/2. √ To find the balayage νb, we observe that after a Kelvin ¡transform centered at s with a¢ radius 2 the ¢ ¡ ¢ ¡ ¡ unit ¢∗ circle is ∗ ∗ ∗ sent to the real line and n A∗ = [a, b] ∪ [c, d], where a = eiα , b = eiβ , c = eiγ , and d = eiδ (see Fig. 4.2). Using Riesz’s approach (Section 1.3) we find νb = K(µA∗ ). Recall that (see (3.6)) dµA∗ =
|t∗ − y ∗ | p dt∗ , t∗ ∈ [a, b] ∪ [c, d], π |t∗ − a||t∗ − b||t∗ − c||t∗ − d|
(4.1)
¡ ¢∗ where t∗ = eiθ ∈ A and y ∗ ∈ (b, c) is determined from the equation Z b
c
x∗ − y ∗ p dx∗ = 0. π |x∗ − a||x∗ − b||x∗ − c||x∗ − d|
(4.2)
Since the relationship between the Lebesgue measures on R and T is given by |d t| dθ dt∗ = = iθ , ∗ |t − s| |t − s| |e − s| we find that dt =
2dθ . |eiθ − s|2
(4.3)
´ ³ |, we conclude Let y := eiφ := (y ∗ )∗ with φ ∈ (β, γ). Using (1.12) and the formula |eiξ − eiζ | = 2| sin ξ−ζ 2 that (recall the order of the angles) ´ ³ | dθ | sin θ−φ 2 r , θ ∈ [α, β] ∪ [γ, δ], (4.4) db ν=C ´ ³ ´ ³ ´ ³ ¡ θ−δ ¢ ¡ θ−α ¢ θ−π/2 θ−β θ−γ sin | sin 2 sin 2 sin 2 sin 2 | 2
762
SUPPORT OF EQUILIBRIUM MEASURE
11
F IG . 4.2. Kelvin transformation - circle case
where r ³ ´ ³ ´ ³ ´ ³ ´ sin β−π/2 sin γ−π/2 sin δ−π/2 sin α−π/2 2 2 2 2 ´ ³ . C= 2π sin φ−π/2 2 ¡ ¢∗ If x∗ = eiψ , then ³ x∗ − y ∗ =
³ 2 sin
sin φ−π/2 2
ψ−φ 2
´
sin
´ ³
ψ−π/2 2
´ , x∗ , y ∗ ∈ [b, c].
´ ³ are the same Indeed, the equality holds with absolute values by (1.12) and since the signs of x∗ − y ∗ and sin ψ−φ 2 we can remove the absolute value. Therefore, (4.2) implies that φ is determined uniquely by ´ ³ Z γ sin ψ−φ 2 (4.5) ´r ´ ³ ´ ³ ´ ³ ´ dψ = 0, ³ ³ β ψ−π/2 ψ−β ψ−γ ψ−δ ψ−α sin sin sin 2 | sin | sin 2 2 2 2 which is equivalent to Z
µ
γ
cot µ cot
φ−θ 2
β
¶ =
Z
γ
β
for any convenient choice of θ.
³ ´ sin ψ−θ 2 ´r ´ ³ ´ ³ ´ ³ ´ dψ ³ ³ ψ−π/2 ψ−β ψ−γ ψ−δ ψ−α sin sin sin 2 | sin | sin 2 2 2 2 ´ ³ sin ψ−θ 2 ´r ´ ³ ´ ³ ´ ³ ´ dψ ³ ³ ψ−π/2 ψ−β ψ−γ ψ−δ ψ−α sin sin sin 2 | sin | sin 2 2 2 2
ψ−θ 2
¶
(4.6)
763
12
D. BENKO, S. B. DAMELIN AND P. D. DRAGNEV
Recall that our goal is to show that for any π/2 < α < β < γ < δ < 5π/2, the balayage density function ´ ³ | | sin θ−φ 2 r , θ ∈ [α, β] ∪ [γ, δ] N (θ) := ´ ³ ´ ³ ´ ³ ¡ θ−δ ¢ ¡ θ−α ¢ θ−π/2 θ−β θ−γ sin | sin 2 sin 2 sin 2 sin 2 | 2 00
is convex, provided φ satisfies (4.6). We will actually show more, namely that (log(N (θ))) > 0. The latter is equivalent to verifying for all θ ∈ [α, β] ∪ [γ, δ] the inequality ¶ µ ¶ · µ ¶ ¶ ¶ ¶¸ µ µ µ µ θ − π/2 1 θ−α θ−β θ−γ θ−δ θ−φ ≤ csc2 + csc2 + csc2 + csc2 + csc2 . csc2 2 2 2 2 2 2 2 (4.7) Let us fix θ ∈ [α, β] (the case θ ∈ [γ, δ] is considered similarly). If φ − θ ≥ π, then ¶ ¶ µ µ 4 4 θ − π/2 θ−φ 2 2 = iφ < iπ/2 = csc csc 2 |e − eiθ | 2 |e − eiθ | and (4.7) holds trivially. Henceforth, assume that φ − θ < π. We now perform several perturbations to simplify the problem. First, we study the effect of substituting δ with 5π/2. Let φ1 ∈ (β, γ) be uniquely determined by ³ ´ ψ−θ µ ¶ Z γ sin 2 ψ−θ cot ´r ´ ³ ´ ³ ´ ³ ´ dψ ³ ³ 2 β ψ−5π/2 ψ−π/2 ψ−β ψ−γ ψ−α ¶ µ sin sin sin | sin | sin 2 2 2 2 2 φ1 − θ ´ ³ . (4.8) = cot 2 Z γ sin ψ−θ 2 ´r ´ ³ ´ ³ ´ ³ ´ dψ ³ ³ β ψ−5π/2 ψ−π/2 ψ−β ψ−γ ψ−α sin sin sin | sin | sin 2 2 2 2 2 Observe that the monotonicity of cot((ψ − θ)/2) guarantees the existence and uniqueness of φ1 . Applying Lemma 12 with v ´ ³ u s ¶ ¶ µ ¶ µ ¶ µ µ u sin δ−ψ 2 5π/2 − ψ 5π/2 − δ 5π/2 − δ ψ−θ u ´ = cos ³ , g(ψ) = t − cot sin h(ψ) = cot 2 2 2 2 sin 5π/2−ψ 2 and µusing ¶that both µ h(ψ) ¶and g(ψ) are monotone decreasing functions on (β, γ), we conclude that φ1 − θ φ−θ < cot , or that φ1 < φ. cot 2 2 Similarly, if φ2 ∈ (β, γ) denotes the unique solution obtained when α = θ in (4.8), we conclude that φ2 < φ1 . Indeed, if r ³ ´ µ ¶ Z γ sin ψ−θ 2 ψ−θ r ³ cot ´´ ´ ³ ´ dψ ³ ³ 3/2 2 β ψ−π/2 γ−ψ ψ−β ¶ µ sin sin sin 2 2 2 φ2 − θ r ³ , (4.9) = cot ´ 2 ψ−θ Z γ sin 2 ´´3/2 r ³ ´ ³ ´ dψ ³ ³ β ψ−π/2 γ−ψ ψ−β sin sin sin 2 2 2
764
SUPPORT OF EQUILIBRIUM MEASURE
13
we again apply Lemma 12, this time with v ´ s ³ u ¶ ¶ µ ¶ µ µ u sin ψ−α 2 θ−α θ−α ψ−θ u ´ = cos ³ + cot sin , g(ψ) = t 2 2 2 sin ψ−θ 2 which is still decreasing function for ψ ∈ (β, γ). Finally, let φ3 ∈ (β, γ) be derived from (4.9) with θ instead of π/2. Then we have µ ¶ Z γ ψ−θ 1 r ³ cot ´ ´ ³ ´ dψ ³ 2 β γ−ψ ψ−θ ψ−β ¶ µ sin sin 2 sin 2 2 φ3 − θ Z γ . = cot 1 2 r dψ ´ ´ ³ ´ ³ ³ β γ−ψ ψ−β sin sin ψ−θ sin 2 2 2
(4.10)
Applying Lemma 12 with the decreasing function ´ 3/2 ³ µ µ ¶ ¶ µ ¶¶3/2 µ sin ψ−π/2 2 θ − π/2 θ − π/2 ψ−θ ´ ³ = cos + cot sin g(ψ) = 2 2 2 sin ψ−θ 2 we obtain φ3 < φ2 . In Lemma 14 below we will show for the so derived φ3 the inequality ¶ · µ ¶ ¶¸ µ µ θ−β 1 θ−γ φ3 − θ 2 2 2 ≤2+ csc + csc , csc 2 2 2 2
(4.11)
which coupled with θ < β < φ3 < φ < γ and φ − θ < π proves (4.7) so the proof of Lemma 8 will be complete. We now prove that (4.11) holds provided φ3 satisfies (4.10). Since we find the inequality interesting in its own right we formulate it as a separate lemma. Without loss of generality we could assume that β = 0 < γ < θ < 2π. L EMMA 14. Let 0 < γ < θ < 2π. Then ¢ ¡ 2 Z γ dt cot θ−t 2 ¢q ¡t¢ ¡γ ¢ ¡ t 0 sin θ−t sin − sin 1 1 1 2 2 2 2 Z ´ . ³ ≤ 1 + 2 ¡θ¢ + γ dt 2 sin 2 sin2 θ−γ q 2 ¢ ¡ ¢ ¡ ¡ ¢ 0 sin θ−t sin 2t sin γ2 − 2t 2
(4.12)
Proof. First we compute: Z I1 = 0
γ
dt ¡ ¢ ¡ θ−t ¢ q ¡ t ¢ sin 2 sin 2 sin γ2 − 2t
Making a substitution y = (cos(t − γ/2) − cos(γ/2))/(1 − cos(γ/2)) in (4.13) leads to ´ ´ ³ ³ θ−γ/2 Z 1 4 sin 4 sin θ−γ/2 2 2 1 π dy p p = I1 = 1 − cos( γ2 ) 0 y + c y(1 − y) 1 − cos( γ2 ) c(c + 1) √ 2 2π =p , cos(γ/2) − cos(θ − γ/2)
(4.13)
(4.14)
765
14
D. BENKO, S. B. DAMELIN AND P. D. DRAGNEV
cos(γ/2) − cos(θ − γ/2) > 0. 1 − cos(γ/2) Next, we compute similarly,
where c = c(θ, γ) :=
Z
γ
dt cos((θ − t)/2) p 2 sin ((θ − t)/2) sin(t/2) sin(γ/2 − t/2) 0 ³ ´ ´ ³ θ−γ/2 θ−γ/2 cos − u/2 + y/2 √ Z γ/2 cos 2 2 du ´+ ´ p ³ ³ = 2 . θ−γ/2 θ−γ/2 2 2 cos(u) − cos(γ/2) 0 − u/2 + u/2 sin sin I2 :=
2
2
Using the identity cos(α + α0 ) 8 cos(α) cos(α0 )(1 − 1/2(cos(2α) + cos(2α0 ))) cos(α − α0 ) + = 2 2 (cos(2α0 ) − cos(2α))2 sin (α − α0 ) sin (α + α0 ) that holds for any real α and α0 , and the substitution y = (cos u − cos(γ/2))/(1 − cos(γ/2)), we get # " Z 1 Z 1 dy dy 1 4 cos((θ − γ/2)) 2(1 − cos(θ − γ/2)) 1 p p − . I2 = 2 1 − cos(γ/2) 1 − cos(γ/2) y(1 − y) y(1 − y) 0 y+c 0 (y + c) Since Z
1
0
d dy 1 p =− (y + c)2 y(1 − y) dc
ÃZ
1 0
1 dy p y + c y(1 − y)
! =−
(4.15)
d π p , dc c(1 + c)
we obtain Z 0
1
(y + c)2
π(2c + 1) dy p = p . y(1 − y) 2 c(c + 1)c(c + 1)
Substituting in (4.15) we find that I2 =
√ 2 2 sin(θ − γ/2)π (cos(γ/2) − cos(θ − γ/2))
3/2
.
(4.16)
Thus, we see from (4.14) and (4.16) that 2
(I2 /I1 ) = (1/4)(cot(θ/2) + cot((θ − γ)/2))2 ≤ (1/2)(cot2 (θ/2) + cot2 ((θ − γ)/2))) < (1/2)(csc2 (θ/2) + csc2 ((θ − γ)/2)) < 1 + 1/2(csc2 (θ/2) + csc2 ((θ − γ)/2)), which concludes the proof. Proof of Theorem 7. The proof of this theorem follows word for word the argument in Theorem 2, where instead of Lemma 3 we use Lemma 8. Proof of Theorem 10. From (1.6) with [a, b] = [−1, 1] we can write · ¸ Z 1 1 1 f (t) − f (x) √ 1+ dt . v(x) = √ π −1 (t − x) 1 − t2 π 1 − x2 Here we used the fact that Z
1
P.V. −1
1 √ dt = 0, (t − x) 1 − t2
(4.17)
which follows from differentiation of the equilibrium potential on [−1, 1] Z
1
log −1
dt 1 √ = log 2, x ∈ [−1, 1]. |x − t| π 1 − t2
Proof of part a). If f is identically zero, then Q is constant and S = [−1, 1]. Note that f cannot be a linear function (unless f ≡ 0) because of the δ-H¨older continuity assumption on Q0 . So let f be convex but not a linear function. Let gt (x) = (f (t) − f (x))/(t − x), and x1 < x2 . Then gt (x1 ) ≤ gt (x2 ) for all t ∈ [−1, 1] and√gt (x1 ) < gt (x2 ) holds on a set of positive measure. Integrating over [−1, 1] with respect to t, we thus derive that 1 − x2 v(x) is strictly increasing. The claim now √ follows from [15, Theorem 2]. The proof is similar when f is concave. √ Proof of part b). Let 1 − x2 g 0 (x) be strictly increasing (strictly decreasing) a.e. on [−1, 1]. Note that 1 − x2 v(x) is strictly increasing (strictly decreasing) function a.e. if the following is positive (negative): √ √ √ Z 1 √ Z 1 √ Z 1 √ ( 1 − t2 Q0 (t))0 1 − t2 Q0 (t) 1 − t2 ( 1 − t2 Q0 (t))0 − 1 − x2 ( 1 − x2 Q0 (x))0 d √ dt. dt = dt = dx −1 t−x t−x (t − x) 1 − t2 −1 −1 We used that Z 1 Z Z 0 Z 1 0 g(t) d g(x + u) − g(x) g (x + u) − g 0 (x) g (t) d dt = du = du = du. dx −1 t − x dx R u u t R −1 − x We could differentiate inside the parametric integral because of [1, Lemma 13]. The claim of part b) now follows from [15, Theorem 2]. REFERENCES [1] D. Benko, Approximation by weighted polynomials, J. Approx. Theory, 120, no. 1, 153–182 (2003). , The support of the equilibrium measure, Acta Sci. Math. (Szeged), 70, no. 1-2, 35–55 (2004). [2] [3] D. Benko, S. B. Damelin and P. Dragnev, On the support of the equilibrium measure for arcs of the unit circle and extended real intervals, ETNA, 25, 27–40 (2006). [4] J. S. Brauchart, P. D. Dragnev, and E. B. Saff, On an energy problem with Riesz external field, Oberwolfach reports, Volume 4, Issue 2, 1027–1072 (2007). , Riesz extremal measures on the sphere for axis-supported external fields, JMAA, 356, no. 2, 769–792 (2009). [5] [6] S. B. Damelin and A. B. Kuijlaars, The support of the extremal measure for monomial external fields on [−1, 1]., Trans. Amer. Math. Soc., 351, 4561–4584 (1999). [7] S. B. Damelin, P. Dragnev and A. B. Kuijlaars, The support of the equilibrium measure for a class of external fields on a finite interval, Pacific J. Math., 199, no. 2, 303–321 (2001). [8] P. Deift, Orthogonal Polynomials and Random Matrices: a Riemann–Hilbert approach, Courant Lecture Notes in Mathematics, Courant Institute, New York, 1999. [9] P. Deift, T. Kriecherbauer and K. T-R McLaughlin, New results on the equilibrium measure for logarithmic potentials in the presence of an external field, J. Approx. Theory, 95, 388–475 (1998). [10] P. D. Dragnev and E. B. Saff, Constrained energy problems with applications to orthogonal polynomials of a discrete variable, J. Anal. Math., 72, 223–259 (1997). [11] O. Frostman, La m´ethode de variation de Gauss et les fonctions sousharmoniques, Acta. Sci. Math., 8, 149–159 (1936). [12] F. D. Gakhov, Boundary Value Problems, Pergamon Press, Oxford, 1966. [13] I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series, and Products, 6th ed. San Diego, CA: Academic Press, 2000. [14] G. H. Hardy, J. E. Littlewood, and G. P`olya, Inequalities, 2nd ed. Cambridge, England: Cambridge University Press, 1988. [15] A. B. J. Kuijlaars and P. D. Dragnev, Equilibrium problems associated with fast decreasing polynomials, Proc. Amer. Math. Soc., 127, 1065–1074 (1999). [16] N. S. Landkof, Foundations of Modern Potential Theory, Grundlehren der mathematischen Wissenschaften, Springer-Verlag, Berlin, 1972. [17] H. N. Mhaskar and E. B. 
Saff, Where does the sup norm of a weighted polynomial live? A Generalization of Incomplete Polynomials, Constr. Approx. 1, no. 1, 71–91 (1985). [18] E. A. Rakhmanov, Equilibrium measure and the distribution of zeroes of the extremal polynomials of a discrete variable, Sbornik: Mathematics, 187, no. 8, 1213–1228 (1996); Matematicheskii Sbornik, 187, no. 8, 109–124 (1996). [19] E. B. Saff and V. Totik, Logarithmic Potentials with External Fields, Springer-Verlag, New York, 1997. [20] P. Simeonov, A minimal weighted energy problem for a class of admissible weights, Houston J. Math., 31, no. 4, 1245–1260 (2005). [21] H. Stahl and V. Totik, General Orthogonal polynomials, Cambridge University Press, Cambridge, 1992. [22] V. Totik, Polynomial inverse images and polynomial inequalities, Acta Math., 187, 139–160 (2001).
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 767-784, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
An algorithm for symmetric indefinite linear systems∗ Dandan Chen†, Ting-Zhu Huang‡, Liang Li School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, P. R. China
Abstract In this paper, a new pivoting strategy, named boundedly partial pivoting, is presented for the incomplete LDLT factorization, building on maximum weighted matchings in the preprocessing phase of solving a symmetric indefinite linear system. The new pivoting method is obtained by modifying the tri-diagonal pivoting algorithm, and no permutations of rows and columns are needed. Meanwhile, the lower triangular factor L is bounded. Numerical results demonstrate that the newly proposed pivoting strategy extends the scope of applicability of the LDLT preconditioner compared with tri-diagonal pivoting. Moreover, when both tri-diagonal pivoting and boundedly partial pivoting are applicable to the incomplete LDLT factorization, the two either give almost the same results or boundedly partial pivoting is more efficient with respect to iteration speed. Key words: Symmetric indefinite linear system; Weighted matching; Pivoting; Incomplete LDLT factorization; Boundedly partial pivoting AMSC(2010): 65F10; 65F08; 65F05
1 Introduction
We consider an efficient algorithm to solve symmetric indefinite linear systems Ax = b. In [1], Hagemann and Schenk proposed a preconditioning algorithm based on a combination of symmetric weighted matchings and the tri-diagonal pivoting technique. In this paper, the preprocessing used is similar to the preprocessing in [1], and during the incomplete LDLT factorization we put forward a new
∗ This research was supported by NSFC (60973015).
† E-mail: [email protected]
‡ E-mail: [email protected]. Fax: +86-28-61831608
pivoting algorithm based on the tri-diagonal pivoting. In brief, our preconditioning algorithm is based on a combination of symmetric weighted matchings and the boundedly partial pivoting technique. Compared to tri-diagonal pivoting, the boundedness of the factor L is ensured by the newly proposed pivoting strategy, and therefore the stability of the incomplete factorization is improved. The whole process of the preconditioned iterative solution of a symmetric indefinite linear system is as follows. Firstly, the preprocessing technique is applied to the original system Ax = b, giving the new system $\hat{A}\hat{x} = \hat{b}$, which is also symmetric indefinite. Secondly, the incomplete LDLT technique is applied to factorize $\hat{A}$ in order to construct the preconditioner M, which is also indefinite. Namely, the incomplete factorization is
$$M = LDL^T = \hat{A} - E, \tag{1}$$
where E is the error due to the incompleteness of the L factor, and $\hat{A}$ is the scaled and reordered original matrix:
$$\hat{A} = P_Q Q D_Q A D_Q Q^T P_Q^T. \tag{2}$$
Lastly, the SQMR iterative method solves a system of the form
$$M^{-1}\hat{A}\hat{x} = M^{-1}\hat{b}. \tag{3}$$
Sparse symmetric indefinite linear systems mainly arise in linear and nonlinear optimization, incompressible flow computations, finite element analysis, electromagnetic scattering, shift-invert eigensolvers, and so on. The preconditioned iterative solution of indefinite linear systems is an active research area, and a variety of approaches have been proposed. Preconditioning is maturing, and the choice of a good preconditioner helps accelerate the iterative solution of indefinite linear systems. In [2], Benzi summarizes the main preconditioning techniques, incomplete factorization methods and sparse approximate inverses, which are applied to accelerate the iterative solution of sparse symmetric indefinite linear systems. Many preconditioning techniques exploit the inherent block structure of the given problem type. In particular, block and approximate Schur complement approaches [7-10] and indefinite block preconditioners [9, 11-13] have been proposed for saddle-point systems. Direct solvers are mostly chosen to solve general indefinite systems because of the poor properties of indefinite problems. Since the LU and LDLT factorizations of indefinite matrices may break down or yield bad results because of small or zero diagonal elements, a number of techniques try to avoid or alleviate these problems; pivoting techniques, perturbations of diagonal elements, and block factorizations are the
three basic approaches. A classic pivoting approach for symmetric dense matrices is the Bunch-Kaufman pivoting [6], which searches for 2×2 diagonal blocks during the factorization. In [6], Duff and Reid introduce the pivoting method which is adapted to sparse matrices. Benzi, Hawa and Tuma [14] study non-symmetric permutations based on weighted matchings as a preprocessing step for various incomplete or approximate factorizations. Duff and Gilbert [4] consider weighted matchings as an approximation of the pivoting order in the context of sparse direct solution methods for symmetric indefinite linear systems. This work is elaborated by Duff and Parlet [3] and Schenk and Gärtner [5]. In [1], the symmetric weighted matchings technique is first applied to the preconditioning technique (the incomplete factorization) by Hagemann and Schenk. Section 2 gives an overview of our preprocessing step, which is similar to the preprocessing in [1]. In section 3, we describe the incomplete LDLT factorization with our pivoting algorithm, named boundedly partial pivoting, which is based on tri-diagonal pivoting, and prove that the newly proposed pivoting can bound the elements of the L factor. Numerical results and comparisons with other techniques are presented in section 4. In closing, we draw a conclusion.
2 Preprocessing
Our preprocessing step is similar to the preprocessing step in [1]. The only difference lies in the reordering: the reordering approach we use is symrcm. The objective of this preprocessing is to restrict the pivoting during the factorization phase by obtaining a good pivoting order through the matching in the preprocessing step. Symmetric weighted matchings, symmetric scaling and reordering are involved in the preprocessing. The original system Ax = b is processed by the preprocessing technique, and the new system Âx̂ = b̂ is obtained, with the matched and large entries in or near the 1×1 and 2×2 diagonal blocks of the matrix Â. For the details of the preprocessing, the interested reader should consult [1]. The algorithm is given as follows:

Algorithm 2.1. (Pseudo-code for the preprocessing step)
1. (P_M, D_r, D_c) = build_mps_matching(A)
2. match_cycles = get_cycle_repr(P_M)
3. split_cycles_M̃ = split_cycle_repr(match_cycles)
4. (Q, marker_1x1) = build_diag22_reordering(split_cycles_M̃)
5. (A_comp, rev_map) = compress_matrix_2x2(A, Q, marker_1x1)
6. P_comp = build_symrcm_reordering(A_comp)
7. P_Q = expand_reordering_2x2(P_comp, marker_1x1, rev_map)
8. D_Q = sqrt(D_r · D_c)
9. Â = P_Q Q D_Q A D_Q Q^T P_Q^T

Now we briefly explain the meaning of each line of the pseudo-code.
1. Using the MC64 program package, obtain the permutation matrix P_M, which corresponds to the maximum weighted matching, together with the row and column scaling matrices D_r and D_c.
2. Obtain the cycle representation of the permutation matrix P_M.
3. Split the cycle representation of P_M into the cycle representation M̃, which contains only 1-cycles and 2-cycles.
4. From the split cycle representation M̃, obtain the permutation matrix Q, which corresponds to the symmetric weighted matching.
5. Obtain the compressed matrix A_comp by compressing the matrix QAQ^T according to the symmetric weighted matching.
6. Reorder the compressed matrix A_comp with the reordering approach symrcm, and obtain the permutation matrix P_comp.
7. Expand the permutation matrix P_comp to the corresponding permutation matrix P_Q, which is applied to the matrix QAQ^T.
8. Obtain the symmetric scaling matrix D_Q from the two-sided scaling matrices D_r and D_c.
9. Obtain the preprocessed matrix Â through the symmetric scaling matrix D_Q, the permutation matrix Q corresponding to the symmetric weighted matching, and the permutation matrix P_Q corresponding to the reordering, all acting on the original matrix A.

Compared with the preprocessing step in [1], the only difference is line 6 of Algorithm 2.1; a small code sketch of steps 2-7 is given below.
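To make the flow of Algorithm 2.1 concrete, the following is a minimal Python/SciPy sketch of steps 2-7 (cycle splitting, 2×2 compression and the symrcm reordering). It assumes the maximum-weighted-matching permutation of step 1 has already been computed elsewhere (e.g., by MC64) and omits the scalings D_r, D_c; all function names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def permutation_cycles(perm):
    # Cycle representation of the matching permutation i -> perm[i].
    seen, cycles = np.zeros(len(perm), bool), []
    for i in range(len(perm)):
        if not seen[i]:
            c, j = [], i
            while not seen[j]:
                seen[j] = True
                c.append(j)
                j = perm[j]
            cycles.append(c)
    return cycles

def split_cycles(cycles):
    # Keep only 1-cycles and 2-cycles, breaking longer cycles into consecutive pairs.
    out = []
    for c in cycles:
        out += [c[k:k + 2] for k in range(0, len(c) - 1, 2)]
        if len(c) % 2:
            out.append([c[-1]])
    return out

def preprocessing_order(A, matching_perm):
    # Steps 2-7 of Algorithm 2.1 (sketch): build the block order from the matching,
    # compress each 1x1/2x2 block to one node, reorder with reverse Cuthill-McKee
    # (the symrcm analogue), then expand back to matrix indices.
    blocks = split_cycles(permutation_cycles(matching_perm))
    block_of = np.empty(A.shape[0], int)
    for b, blk in enumerate(blocks):
        block_of[blk] = b
    A = coo_matrix(A)
    Ac = coo_matrix((np.ones_like(A.data),
                     (block_of[A.row], block_of[A.col])),
                    shape=(len(blocks), len(blocks))).tocsr()
    p_comp = reverse_cuthill_mckee(Ac, symmetric_mode=True)
    order = [i for b in p_comp for i in blocks[b]]
    return np.array(order)   # row/column order of the preprocessed matrix
```

With a scaling D already applied, the preprocessed matrix would then be obtained, for example, as `(D @ A @ D)[np.ix_(order, order)]`.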
3 Incomplete LDL^T factorization with boundedly partial pivoting

3.1 Algorithm
The incomplete LDL^T factorization is applied since the systems are symmetric. The largest block pivots we consider are 2×2 pivots, and therefore the factor D is a block diagonal matrix with 1×1 and 2×2 blocks. In the incomplete factorization, the JKI version of Gaussian elimination is employed, so the factor L is constructed column by column and the dropping strategy can be implemented easily. In [1], Hagemann and Schenk adopt the tri-diagonal pivoting in the incomplete LDL^T factorization since the preprocessed matrix has a heavy diagonal. We note that the cost of row and column exchanges in the factorization with the tri-diagonal pivoting is avoided, but the boundedness of the factor L cannot be proved.
That is, the stability of the factorization cannot be guaranteed. We consider a pivoting approach that likewise involves no row or column exchanges and at the same time guarantees that the factor L is bounded, so that the stability of the factorization is assured. Thereby, our approach extends the scope of applications of the LDL^T preconditioner.
3.2 Tri-diagonal pivoting
In [15], Bunch proposes the tri-diagonal pivoting, aimed at tri-diagonal matrices. We now describe this pivoting algorithm. Let B be the symmetric tri-diagonal matrix with diagonal entries α_1, ..., α_n and off-diagonal entries β_2, ..., β_n, i.e. B = tridiag(β_i; α_i; β_{i+1}). The tri-diagonal pivoting in the first elimination step of B is given as follows.

Algorithm 3.1 (Bunch, [15]). Tri-diagonal Pivoting (TP method)
    σ = max{ |α_i|, |β_j| : i, j = 2 : n },
    α = (√5 − 1)/2 ≈ 0.62.
    If |α_1| σ ≥ α β_2^2
        S_B = 1  (use 1×1 pivot)
    Else
        S_B = 2  (use 2×2 pivot)
    End

Hagemann and Schenk use a tri-diagonal pivoting that discards the variable σ of Algorithm 3.1. Namely, the constant α = (√5 − 1)/2 is kept, and a 2×2 pivot is chosen in step k if

    |a_{kk}^{(k)}| ≤ α (a_{k+1,k}^{(k)})^2,    (4)

where a^{(k)} denotes the entries of the reduced matrix after step k. The constant α is a weight that controls the choice of the pivot.
3.3 Boundedly partial pivoting
We devise a new pivoting approach, based on the tri-diagonal pivoting in [1], that involves no row or column exchanges and for which the boundedness of the factor L is guaranteed. This will be proved below and establishes the stability of the new pivoting approach. The new pivoting approach is named boundedly partial pivoting. In addition, we note that the boundedly partial pivoting is proposed for the incomplete factorization under the condition that the system has been preprocessed according to Algorithm 2.1. The algorithm of boundedly partial pivoting is given as follows; a code sketch of the pivot choice is given at the end of this subsection.

Algorithm 3.2. Boundedly Partial Pivoting (BPP method)
    α = (√5 − 1)/2,   σ_k = max(|a_{k:n,k}^{(k)}|).
    If |a_{kk}^{(k)}| ≥ α σ_k^2
        If |a_{kk}^{(k)}| ≤ ε_piv, then a_{kk}^{(k)} = sign(a_{kk}^{(k)}) · ε_piv  End
        S = 1  (choose a_{kk}^{(k)} as 1×1 pivot)
    Else If det(A_{k:k+1,k:k+1}^{(k)}) ≤ δ
        S = 1  (let a_{kk}^{(k)} = α σ_k^2 and choose a_{kk}^{(k)} as 1×1 pivot)
    Else
        S = 2  (choose [ a_{kk}^{(k)}  a_{k,k+1}^{(k)} ; a_{k+1,k}^{(k)}  a_{k+1,k+1}^{(k)} ] as 2×2 pivot)
    End

There are two parameters, ε_piv and δ, in Algorithm 3.2; in the subsequent experiments we commonly set ε_piv = 10^{-8} and δ = 10^{-3}. We also find that changing the value of δ can change the iterative solution rate. The incomplete factorization with the boundedly partial pivoting may cost more time and memory than that with the tri-diagonal pivoting, because more comparisons of elements are needed in the pivoting process. Like the tri-diagonal pivoting, this pivoting approach involves no exchanges of rows and columns. It may not be better than a pivoting strategy with row and column exchanges, but preferable results can be obtained by combining it with the preprocessing introduced in Section 2, while avoiding the cost of the exchanges. Besides, we also exploit the property of the preprocessed matrix that the larger entries mostly concentrate on or around the tri-diagonal part of
the matrix. In a word, the LDL^T factorization with boundedly partial pivoting is backward stable since the factor L is bounded, and therefore the subsequent preconditioned SQMR iterative solution is feasible.
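As announced above, the following Python sketch implements the pivot choice of Algorithm 3.2 on a dense working matrix. The values of ALPHA, eps_piv and delta follow the paper; the handling of a zero diagonal entry (sign(0)) and the use of the signed 2×2 determinant, exactly as written in the algorithm, are noted in the comments.

```python
import numpy as np

ALPHA = (np.sqrt(5.0) - 1.0) / 2.0      # (sqrt(5)-1)/2 ~ 0.62, as in Algorithms 3.1/3.2

def choose_pivot_bpp(A, k, eps_piv=1e-8, delta=1e-3):
    """Pivot choice of Algorithm 3.2 at step k of the reduced matrix A (dense sketch).

    Returns the pivot size (1 or 2); may modify A[k, k] in place as in cases 1 and 2.
    """
    sigma_k = np.max(np.abs(A[k:, k]))
    if abs(A[k, k]) >= ALPHA * sigma_k ** 2:            # case 1: 1x1 pivot
        if abs(A[k, k]) <= eps_piv:
            # guard an (almost) zero pivot; mapping sign(0) to +1 is a choice of this sketch
            A[k, k] = (np.sign(A[k, k]) or 1.0) * eps_piv
        return 1
    # the paper compares the signed 2x2 determinant with delta
    det22 = A[k, k] * A[k + 1, k + 1] - A[k + 1, k] ** 2
    if det22 <= delta:                                   # case 2: boost a_kk, take 1x1 pivot
        A[k, k] = ALPHA * sigma_k ** 2
        return 1
    return 2                                             # case 3: 2x2 pivot
```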
3.4 Dropping strategy
The choice of an appropriate dropping strategy is an important part of the incomplete factorization. We use the same dropping strategy as in [1], based on the average number of elements per column of the matrix. Each column k of the factor may contain at most γ_k elements, where

    γ_k = γ · nnz / n,    (5)

where nnz indicates the number of non-zeros in A and n indicates the dimension of A. Here γ is the only parameter controlling the amount of fill-in in the factors. In our experiments, we also use γ = 8.
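A small illustrative Python sketch of the dropping rule (5): compute γ_k (rounded up here, a choice of the sketch) and keep only the γ_k entries of largest magnitude in the current column. This is not the authors' implementation.

```python
import numpy as np

def gamma_k(gamma, nnz, n):
    # fill-in bound (5): at most gamma * nnz / n entries are kept per column
    return int(np.ceil(gamma * nnz / n))

def keep_largest(values, rows, max_keep):
    # keep only the max_keep entries of largest magnitude in the current column
    if len(values) <= max_keep:
        return values, rows
    keep = np.argsort(np.abs(values))[-max_keep:]
    return values[keep], rows[keep]
```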
3.5 Summary of the incomplete LDL^T factorization method
The following pseudo-code provides a detailed overview of the incomplete factorization phase; a simplified code sketch follows the pseudo-code.

Algorithm 3.3. Incomplete LDL^T factorization with boundedly partial pivoting (ILDLT-BPP method).
1.  Function (L, D) = ildlt_bpp_factorization(Â)
2.  For k = 1 : n
3.    α = (√5 − 1)/2;  σ_k = max(|a_{k:n,k}^{(k)}|)
4.    If |a_{kk}^{(k)}| ≥ α σ_k^2   /* use 1×1 pivot */ (case 1)
5.      keep the γ_k largest elements
6.      If |a_{kk}^{(k)}| ≤ ε_piv, then a_{kk}^{(k)} = sign(a_{kk}^{(k)}) · ε_piv  End
7.      d_{kk} = a_{kk}^{(k)};  L_{k+1:n,k} = d_{kk}^{-1} a_{k+1:n,k}^{(k)}
8.      update a_{k+1:n,k+1:n}^{(k+1)} from a_{k+1:n,k+1:n}^{(k)}
9.    Else
10.     If det(A_{k:k+1,k:k+1}^{(k)}) ≤ δ   /* use 1×1 pivot */ (case 2)
11.       a_{kk}^{(k)} = α σ_k^2;  break (restart step k)
12.     Else   /* use 2×2 pivot */ (case 3)
13.       keep the γ_k largest 1×2 elements
14.       D_k = a_{k:k+1,k:k+1}^{(k)};  L_{k+1:n,k:k+1} = a_{k+1:n,k:k+1}^{(k)} · D_k^{-1}
15.       update a_{k+2:n,k+2:n}^{(k+2)} from a_{k+2:n,k+2:n}^{(k)}
16.     End
17.   End
18. End
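Putting the pieces together, the following dense Python sketch mirrors the structure of Algorithm 3.3. It reuses choose_pivot_bpp from the sketch in Section 3.3, works on a dense copy of Â, and omits the sparse data structures and the dropping of fill-in, so it illustrates the pivoting logic only (it is not the authors' code).

```python
import numpy as np

def ildlt_bpp(A_hat, eps_piv=1e-8, delta=1e-3):
    # Dense sketch of Algorithm 3.3 without dropping; choose_pivot_bpp is the
    # pivot-selection sketch given in Section 3.3.
    A = np.array(A_hat, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    D = np.zeros((n, n))
    k = 0
    while k < n:
        s = 1 if k == n - 1 else choose_pivot_bpp(A, k, eps_piv, delta)
        if s == 1:
            d = A[k, k]                                   # possibly boosted by cases 1/2
            D[k, k] = d
            L[k + 1:, k] = A[k + 1:, k] / d
            A[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], A[k, k + 1:])
        else:
            Dk = A[k:k + 2, k:k + 2].copy()
            D[k:k + 2, k:k + 2] = Dk
            L[k + 2:, k:k + 2] = A[k + 2:, k:k + 2] @ np.linalg.inv(Dk)
            A[k + 2:, k + 2:] -= L[k + 2:, k:k + 2] @ A[k:k + 2, k + 2:]
        k += s
    return L, D
```

Applying the resulting preconditioner M^{-1} = (L D L^T)^{-1} to a vector then amounts to one forward solve with L, a block-diagonal solve with D, and one backward solve with L^T.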
Theorem 3.1. The factor L is bounded when Algorithm 3.3, the incomplete LDL^T factorization with boundedly partial pivoting, is used.

Proof. Let L denote the lower triangular factor of the incomplete LDL^T factorization of Â, namely

    Â = LDL^T − E,

where D is a block diagonal matrix with 1×1 and 2×2 blocks, and let l_{ij} denote the entries of L. For ease of reading we write A instead of Â. If a 1×1 pivot elimination is performed we obtain A^{(k+1)} from A^{(k)}; otherwise we obtain A^{(k+2)} through a 2×2 pivot elimination. The superscript k indicates the reduced matrix, of dimension n − k + 1, after the 1×1 and 2×2 pivot eliminations, and its entries are denoted by a_{ij}^{(k)}. Symmetry is preserved by both 1×1 and 2×2 pivot eliminations, so A^{(k+1)} and A^{(k+2)} are both symmetric.

First consider the case of a 1×1 pivot. Through the elimination, the relation between A^{(k+1)} and A^{(k)} is

    a_{ij}^{(k+1)} = a_{ij}^{(k)} − a_{ik}^{(k)} a_{kj}^{(k)} / a_{kk}^{(k)},   i > k, j > k,

and the entries of L are

    l_{ik} = a_{ik}^{(k)} / a_{kk}^{(k)},   i > k.

Next consider the case of a 2×2 pivot. Through the elimination, the relation between A^{(k+2)} and A^{(k)} is

    a_{ij}^{(k+2)} = a_{ij}^{(k)} − [ a_{ik}^{(k)}  a_{i,k+1}^{(k)} ] [ a_{kk}^{(k)}  a_{k,k+1}^{(k)} ; a_{k+1,k}^{(k)}  a_{k+1,k+1}^{(k)} ]^{-1} [ a_{kj}^{(k)} ; a_{k+1,j}^{(k)} ]
                   = a_{ij}^{(k)} − ( a_{kk}^{(k)} a_{i,k+1}^{(k)} a_{k+1,j}^{(k)} − a_{k+1,k}^{(k)} ( a_{ik}^{(k)} a_{k+1,j}^{(k)} + a_{i,k+1}^{(k)} a_{kj}^{(k)} ) + a_{k+1,k+1}^{(k)} a_{ik}^{(k)} a_{kj}^{(k)} ) / ( a_{kk}^{(k)} a_{k+1,k+1}^{(k)} − (a_{k+1,k}^{(k)})^2 ),
    (i > k + 1, j > k + 1),

and the entries of L are

    l_{ik} = ( a_{ik}^{(k)} a_{k+1,k+1}^{(k)} − a_{i,k+1}^{(k)} a_{k+1,k}^{(k)} ) / ( a_{kk}^{(k)} a_{k+1,k+1}^{(k)} − (a_{k+1,k}^{(k)})^2 ),   i > k + 1,
    l_{i,k+1} = ( a_{i,k+1}^{(k)} a_{kk}^{(k)} − a_{ik}^{(k)} a_{k+1,k}^{(k)} ) / ( a_{kk}^{(k)} a_{k+1,k+1}^{(k)} − (a_{k+1,k}^{(k)})^2 ),   i > k + 1,
    l_{k+1,k} = 0.

Let µ be the magnitude of the largest entry in A^{(k)} and let µ′ be the maximum magnitude of any entry in the new reduced matrix (A^{(k+1)} or A^{(k+2)}).

We first prove that σ_1, ..., σ_n are all finite nonnegative real numbers; clearly σ_k ≥ 0 since σ_k = max(|a_{k:n,k}^{(k)}|), k = 1, ..., n.

Case 1: |a_{kk}^{(k)}| ≥ α σ_k^2, and a_{kk}^{(k)} is chosen as 1×1 pivot. Then

    |a_{ij}^{(k+1)}| ≤ |a_{ij}^{(k)}| + |a_{ik}^{(k)} a_{kj}^{(k)} / a_{kk}^{(k)}| ≤ |a_{ij}^{(k)}| + σ_k |a_{kj}^{(k)}| / |a_{kk}^{(k)}| = |a_{ij}^{(k)}| + σ_k |a_{jk}^{(k)}| / |a_{kk}^{(k)}|,

so that

    µ′ ≤ µ + σ_k · σ_k / |a_{kk}^{(k)}| ≤ µ + 1/α.

Hence σ_{k+1} and σ_{k+2} are both finite nonnegative real numbers.

Case 2: |a_{kk}^{(k)}| < α σ_k^2 and det(A_{k:k+1,k:k+1}^{(k)}) ≤ δ. We set a_{kk}^{(k)} = α σ_k^2, restart the step, and choose a_{kk}^{(k)} as 1×1 pivot; this is then the same as Case 1.

Case 3: |a_{kk}^{(k)}| < α σ_k^2 and det(A_{k:k+1,k:k+1}^{(k)}) > δ, so [ a_{kk}^{(k)}  a_{k,k+1}^{(k)} ; a_{k+1,k}^{(k)}  a_{k+1,k+1}^{(k)} ] is chosen as 2×2 pivot. From det(A_{k:k+1,k:k+1}^{(k)}) > δ we get

    a_{kk}^{(k)} a_{k+1,k+1}^{(k)} − (a_{k+1,k}^{(k)})^2 > δ.

For i > k + 1, j > k + 1,

    |a_{ij}^{(k+2)}| ≤ |a_{ij}^{(k)}| + | a_{kk}^{(k)} a_{i,k+1}^{(k)} a_{k+1,j}^{(k)} − a_{k+1,k}^{(k)} ( a_{ik}^{(k)} a_{k+1,j}^{(k)} + a_{i,k+1}^{(k)} a_{kj}^{(k)} ) + a_{k+1,k+1}^{(k)} a_{ik}^{(k)} a_{kj}^{(k)} | / ( a_{kk}^{(k)} a_{k+1,k+1}^{(k)} − (a_{k+1,k}^{(k)})^2 )
                      < µ + ( σ_k σ_{k+1} |a_{k+1,j}^{(k)}| + σ_k ( σ_k |a_{k+1,j}^{(k)}| + σ_{k+1} |a_{kj}^{(k)}| ) + σ_{k+1} σ_k |a_{kj}^{(k)}| ) / δ,

hence

    µ′ < µ + ( σ_k σ_{k+1} σ_{k+1} + σ_k ( σ_k σ_{k+1} + σ_{k+1} σ_k ) + σ_{k+1} σ_k σ_k ) / δ = µ + ( σ_k σ_{k+1}^2 + 3 σ_{k+1} σ_k^2 ) / δ.

So σ_{k+2} and σ_{k+3} are both finite nonnegative real numbers. In summary, σ_1, ..., σ_n are all finite nonnegative real numbers.

Finally, we prove that l_{ik} and l_{i,k+1} are bounded.

In Case 1 and Case 2 a 1×1 pivot is chosen, and

    l_{ik} = a_{ik}^{(k)} / a_{kk}^{(k)}   ⇒   |l_{ik}| ≤ 1/(α σ_k)   (for i > k and σ_k ≠ 0).

When σ_k = 0 we have a_{kk}^{(k)} = 0, so |a_{kk}^{(k)}| ≤ ε_piv and we set a_{kk}^{(k)} = sign(a_{kk}^{(k)}) · ε_piv; in this situation |l_{ik}| = 0 for i > k. So l_{ik} (i > k) is bounded in Case 1 and Case 2.

In Case 3 a 2×2 pivot is chosen, and with the expressions for l_{ik} and l_{i,k+1} above,

    |l_{ik}| ≤ ( σ_{k+1} |a_{ik}^{(k)}| + σ_k |a_{i,k+1}^{(k)}| ) / δ ≤ ( σ_{k+1} σ_k + σ_k σ_{k+1} ) / δ = 2 σ_k σ_{k+1} / δ,   i > k + 1,
    |l_{i,k+1}| ≤ ( σ_k |a_{i,k+1}^{(k)}| + σ_{k+1} |a_{ik}^{(k)}| ) / δ ≤ 2 σ_k σ_{k+1} / δ,   i > k + 1.

So l_{ik} and l_{i,k+1} (i > k + 1) are bounded. In summary, l_{ik} and l_{i,k+1} are bounded. The proof is completed. □
4 Numerical experiments
We consider symmetric indefinite matrices which partly come from the Matrix Market and partly are produced randomly by functions in MATLAB. In this section, the main objective is to compare the boundedly partial pivoting with the tri-diagonal pivoting, to contrast their results, and to support the conclusion that our pivoting approach is preferable.
4.1 Iterative solution and termination criterion
There are three Krylov subspace methods, MINRES, SYMMLQ and SQMR [16], for solving symmetric indefinite systems. Since both the original system and the preconditioner are indefinite, we cannot use methods like SYMMLQ or MINRES, which require a positive definite preconditioner. Therefore, we adopt the SQMR method for the iterative solution of the preconditioned system. The SQMR method has the benefit of allowing us to solve preconditioned systems Ã, with

    Ã = M_1^{-1} A M_2^{-1},
(6)
where A is symmetric, and M_1 and M_2 can be arbitrary matrices. The iterative method uses the following stopping criterion: the current iterate x̂_n is considered good enough if its residual with respect to the preconditioned system is less than 10^{-6}:

    ||M^{-1} b̂ − M^{-1} Â x̂_n|| ≤ 10^{-6} ||M^{-1} b̂||.    (7)

When using the SQMR iterative method to solve the preconditioned system M^{-1} Â x̂ = M^{-1} b̂, all matrices were solved with an artificial right-hand side b̂, with b̂ = Â · 1, where 1 denotes the n-dimensional column vector whose entries are all equal to one. The iterations were started with a zero initial guess.
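As an illustration of the stopping rule (7) and the artificial right-hand side, here is a small Python sketch. The function apply_prec stands for applying M^{-1} (the triangular and block-diagonal solves with the computed L and D) and is assumed to be supplied by the user; it is not part of the paper.

```python
import numpy as np

def preconditioned_residual(A_hat, b_hat, x, apply_prec):
    # Relative residual used in (7): apply_prec(z) applies M^{-1} to z.
    r = apply_prec(b_hat - A_hat @ x)
    return np.linalg.norm(r) / np.linalg.norm(apply_prec(b_hat))

# Setup used in the experiments: artificial right-hand side b_hat = A_hat @ np.ones(n),
# zero initial guess, and convergence declared once the value above drops below 1e-6.
```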
4.2 Experimental content and numerical results
In this subsection, we mainly introduce our experimental content and give the numerical results obtained by running the corresponding programs. Firstly, we introduce the tested matrices and the meanings of the abbreviations. The information on the tested matrices is presented in Table 1, and the meanings of the abbreviations are shown in Table 2. Note:
• n denotes the dimension of the matrix and nnz denotes the number of nonzero entries.
• FLG = 0 denotes success of the iterative solution, FLG = 1 denotes mis-convergence of the iterative solution, and FLG = 2 denotes failure of the iterative solution.
Matrix name   n      nnz     Matrix type                          Source
bcsstk19      817    6853    Sparse symmetric indefinite matrix   Matrix Market
bcsstk22      138    696     Sparse symmetric indefinite matrix   Matrix Market
bcsstm27      1224   56126   Sparse symmetric indefinite matrix   Matrix Market
randmat1      50     463     Sparse symmetric indefinite matrix   Randomly produced by MATLAB
randmat2      100    2583    Sparse symmetric indefinite matrix   Randomly produced by MATLAB
randmat3      20     142     Sparse symmetric indefinite matrix   Randomly produced by MATLAB

Table 1: Information of the tested matrices
Simple mark   Meaning
NPREC         Iterative solution with no preconditioning
NPREP         No preprocessing
ILDLT(0)      Iterative solution with preconditioning combining the zero fill-in incomplete LDL^T factorization and the preprocessing
ILDLT-TP      Iterative solution with preconditioning combining the incomplete LDL^T factorization with tri-diagonal pivoting and the preprocessing
ILDLT-BPP     Iterative solution with preconditioning combining the incomplete LDL^T factorization with boundedly partial pivoting and the preprocessing
PIVOT22       Number of 2×2 pivots chosen in the incomplete LDL^T factorization
ITERS         Iteration number
FLG           Result mark of the iterative solution

Table 2: The meanings of the simple marks
We will analyze the results from the following three aspects:
(1) Compare the solution results of the NPREC method, the ILDLT(0) method, the ILDLT-TP method and the ILDLT-BPP method (the meanings of these four methods are given in Table 2). The default values of the parameters γ, ε_piv and δ in Table 3 are 8, 10^{-8} and 10^{-3}, respectively. We run the corresponding programs in MATLAB, and the experimental results obtained are given in Table 3.
(2) We use MATLAB's function sprandsym(n, dense) to randomly produce the sparse symmetric indefinite matrices, where n determines the dimension of the produced sparse symmetric matrix and the parameter dense, valued in the interval [0.2, 0.4], controls the sparsity of the matrix.
Matrix name bcsstk19
Parameter NPREC FLG 0 ITERS 909
bcsstk22
PIVOT22 FLG ITERS
— 0 113
bcsstm27
PIVOT22 FLG ITERS
— 1 > 10000
PIVOT22 FLG ITERS PIVOT22 FLG ITERS PIVOT22 FLG ITERS
— 0 56 — 0 142 — 0 20
— 2 — — 2 — — 2 —
PIVOT22
—
—
randmat1
randmat2
randmat3
ILDLT(0) ILDLT-TP 0 0 1203 3 γ = 1.7 — 0 0 0 25 4 γ=1 — 0 1 0 > 10000 3 γ = 1.1 1 2 — — 2 — — 0 10 γ=1 7
ILDLT-BPP 0 3 γ = 1.7 0 0 4 γ=1 0 0 7 3 γ = 1.1 γ = 1.1 δ = 10−4 58 62 0 4 23 0 4 48 0 8 γ=1 9
Table 3: Solution results of the four methods
Matrix name   Solution method    FLG   ITERS   PIVOT22
randmat1      ILDLT-BPP-NPREP    0     14      18
randmat1      ILDLT-BPP          0     4       23
randmat2      ILDLT-BPP-NPREP    0     11      45
randmat2      ILDLT-BPP          0     4       48
randmat3      ILDLT-BPP-NPREP    0     20      3
randmat3      ILDLT-BPP          0     8       9

Table 4: Solution results with and without the preprocessing
We then judge whether a matrix is indefinite by checking that it has both positive and negative eigenvalues; randmat1, randmat2 and randmat3 are all indefinite since they each have both positive and negative eigenvalues. Commonly, the matrices produced by the above technique are indefinite. The numerical results of the comparison between the ILDLT-BPP method and the ILDLT-BPP-NPREP method (the meanings of these two methods are in Table 2) for the systems corresponding to the three random sparse symmetric indefinite matrices are given in Table 4.
(3) The residual curves are presented in order to observe and contrast the solution methods. The residual at step k is

    res_k = ||M^{-1} b̂ − M^{-1} Â x̂_k|| / ||M^{-1} b̂||.    (8)

On the one hand, the residual curves of the iterative solution of the system corresponding to the matrix bcsstk22 are given in Fig. 1 and Fig. 2; on the other hand, the residual curves of the iterative solution of the system corresponding to the matrix randmat2 are given in Fig. 3.
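Returning to point (2), the paper generates the random test matrices with MATLAB's sprandsym; the following Python/SciPy sketch produces an analogous sparse symmetric matrix and performs the indefiniteness check via the signs of the eigenvalues. The function name and the normally distributed entries are choices of this sketch, not of the paper.

```python
import numpy as np
from scipy import sparse

def random_symmetric_indefinite(n, density, seed=0):
    # Rough analogue of sprandsym(n, density), followed by the indefiniteness check.
    rng = np.random.default_rng(seed)
    A = sparse.random(n, n, density=density, random_state=seed, format='csr')
    A.data = rng.standard_normal(A.nnz)        # sign-symmetric entries
    A = (A + A.T) * 0.5                        # symmetrize
    w = np.linalg.eigvalsh(A.toarray())        # n is small, dense eigenvalues are fine
    assert w.min() < 0 < w.max(), "matrix is not indefinite; draw again"
    return A
```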
Figure 1: Residual changes curves of ILDLT-BPP and ILDLT-TP for bcsstk22
4.3 Analysis of numerical results
To prepare for the final conclusion, the main work in this subsection is to analyze the data in Table 3 and Table 4 and the residual curves in Fig. 1, Fig. 2 and Fig. 3. First, from Table 3 we can clearly see that the ILDLT-TP method and the ILDLT-BPP method are preferable to the NPREC method and the ILDLT(0) method with respect to the iteration number. Moreover, comparing the last two columns, the ILDLT-TP method and the ILDLT-BPP method have the same iteration number for the matrices bcsstk19 and bcsstk22.
Figure 2: Residual changes curves of NPREC, ILDLT(0) and ILDLT-BPP for bcsstk22
Figure 3: Residual changes curves of ILDLT-BPP and ILDLT-BPP-NPREP for randmat2
The iteration number of the ILDLT-TP method for the system corresponding to the matrix bcsstm27 is smaller under the default parameters; however, the ILDLT-BPP method is able to reach the same iteration number as the ILDLT-TP method by altering the value of the parameter δ. In addition, the iteration number of our ILDLT-BPP method is smaller for the matrix randmat3. Second, we notice that the systems corresponding to the matrices randmat1 and randmat2 cannot be solved by the ILDLT-TP method, whereas our ILDLT-BPP method solves them with a rather small iteration number. The reason is that the factor L produced by the incomplete LDL^T factorization with boundedly partial pivoting is bounded, as proved in Section 3, so the numerical stability of the factorization is guaranteed. Next, observe Table 4, in which the iterative solution results of the ILDLT-BPP-NPREP method and the ILDLT-BPP method are presented; the iteration number of the method without preprocessing is clearly larger. Finally, observe and analyze the three figures. In Fig. 1 the two curves almost completely coincide, which shows that the iterative solution results of the ILDLT-BPP method for the matrix bcsstk22 are the same as those of the ILDLT-TP method. In Fig. 2 and Fig. 3 we look at the iteration number at the intersection of each curve with the line log10(res) = −6: the smaller this iteration number, the faster the iterative solution. From Fig. 2, the relation among the ILDLT(0), NPREC and ILDLT-BPP methods is easily seen: the ILDLT-BPP method is faster than the ILDLT(0) method, which in turn is faster than the NPREC method with respect to the iterative solution rate. From Fig. 3, the iterative solution rate of the ILDLT-BPP method is considerably faster than that of the ILDLT-BPP-NPREP method. The iterative solution rate is measured by the iteration number: the smaller the iteration number, the faster the iterative solution, and vice versa. We note that the iteration number is, of course, an integer.
4.4 Experimental conclusions
From the previous two subsections on experimental content and analysis of numerical results, we can draw the following conclusions:
(1) Within the scope of the systems which can be solved by the ILDLT-TP method, our ILDLT-BPP method is also feasible. For these systems the results of the two methods are often the same, or the iterative solution rate of the ILDLT-TP method may be faster under the default parameters, but the same results can then be recovered by altering the value of the parameter δ; in other cases the iterative solution rate of the ILDLT-BPP method is faster.
Therefore, in general, the iterative solution rates of the two methods are basically the same, or our ILDLT-BPP method is indeed preferable to the ILDLT-TP method for some special symmetric indefinite linear systems.
(2) The numerical stability of the incomplete factorization cannot be confirmed for the ILDLT-TP method, since its tri-diagonal pivoting cannot ensure the boundedness of the factor L. On the contrary, the boundedly partial pivoting in the incomplete factorization of our ILDLT-BPP method guarantees the boundedness of the factor L, and thus the factorization is numerically stable. This avoids the failure of the subsequent preconditioned SQMR iteration caused by an unbounded L. Consequently, our ILDLT-BPP method extends the scope of the symmetric indefinite linear systems that can be solved.
(3) Within the scope of certain symmetric indefinite linear systems, our ILDLT-BPP method is preferable to the NPREC method and the ILDLT(0) method. At the same time, the preprocessing in the ILDLT-BPP method plays a significant role in the iterative solution rate.
5 Conclusions
The main work of this paper is to devise an efficient algorithm for the solution of symmetric indefinite linear systems. The preconditioning that combines the preprocessing, which is similar to the preprocessing in [1], with the boundedly partial pivoting in the incomplete factorization is proposed, and better results are obtained. The experimental conclusions in Section 4 indicate that, compared with the preconditioning combining the preprocessing with the tri-diagonal pivoting in the incomplete factorization, our method extends the scope of the symmetric indefinite linear systems that can be solved, since the boundedness of the factor L warrants the stability of the numerical factorization. For the systems which can be solved by both methods, the same iterative solution rate is obtained, or our method is more efficient for the solution of some special symmetric indefinite linear systems.
References [1] M. Hagemann and O. Schenk, Weighted matchings for the preconditioning of symmetric indefinite linear systems, SIAM J. Sci. Comput., 2006, 28(2): 403-420. [2] M. Benzi, Preconditioning symmetric indefinite linear systems, XV Householder Symposium Peebles, Scotland, June 17-21, 2002. [3] I. S. Duff and S. Pralet, Strategies for scaling and pivoting for sparse symmetric indefinite problems, SIAM J. Matrix Analysis Appl., 2005, 27(2): 313-340.
[4] I. S. Duff and J. R. Gilbert, Maximum-weighted matching and block pivoting for symmetric indefinite systems, in Abstract book of Householder Symposium XV, June 17-21 2002, pp. 889-901.
[5] O. Schenk and K. Gärtner, On fast factorization pivoting methods for symmetric indefinite systems, Electronic Trans. Numerical Analysis, 2006, Vol. 23, pp. 158-179.
[6] C. Ashcraft, R. G. Grimes and J. G. Lewis, Accurate symmetric indefinite linear equation solvers, SIAM J. Matrix Anal. Appl., 1998, 20(2): 513-561.
[7] J. C. Haws and C. D. Meyer, Preconditioning KKT systems, Technical Report M&CT-TECH-02-002, Boeing, 2002.
[8] A. Klawonn, Block-triangular preconditioners for saddle point problems with a penalty term, SIAM J. Scientific Computing, 19 (1998): 172-184.
[9] I. Perugia and V. Simoncini, Block-diagonal and indefinite symmetric preconditioners for mixed finite element formulations, Numerical Linear Alg. Appl., 7 (2000): 585-616.
[10] T. Rusten and R. Winther, A preconditioned iterative method for saddle point problems, SIAM J. Matrix Anal. Appl., 13 (1992): 887-902.
[11] R. Ewing, R. Lazarov, P. Lu, and P. Vassilevski, Preconditioning indefinite systems arising from mixed finite element discretization of second-order elliptic problems, Lecture Notes in Mathematics, 1457 (1990): 28-43.
[12] C. Keller, N. I. M. Gould, and A. J. Wathen, Constraint preconditioning for indefinite linear systems, SIAM J. Matrix Anal. Appl., 21 (2000): 1300-1317.
[13] L. Lukšan and J. Vlček, Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems, Numerical Linear Alg. Appl., 5 (1998): 219-247.
[14] M. Benzi, J. C. Haws, and M. Tuma, Preconditioning highly indefinite and nonsymmetric matrices, SIAM J. Scientific Computing, 22 (2000): 1333-1353.
[15] J. R. Bunch, Partial pivoting strategies for symmetric matrices, SIAM J. Numerical Analysis, 11 (1974): 521-528.
[16] R. W. Freund, Preconditioning of symmetric, but highly indefinite linear systems, in 15th IMACS World Congress on Scientific Computation, Modelling and Applied Mathematics, Vol. 2, Numerical Mathematics, Wissenschaft und Technik Verlag, 1997, pp. 551-556.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.4, 785-791, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
ON THE SYMMETRIC PROPERTIES FOR THE SECOND KIND (h, q)-EULER POLYNOMIALS
C. S. RYOO
Department of Mathematics, Hannam University, Daejeon 306-791, Korea

Abstract: In [7], we studied the second kind Euler numbers and polynomials. By using these numbers and polynomials, we investigated the alternating sums of powers of consecutive odd integers (see [8]). By applying the symmetry of the fermionic p-adic q-integral on Z_p, which was defined by Kim [2], we give recurrence identities for the second kind (h, q)-Euler polynomials and for the alternating sums of powers of consecutive (h, q)-odd integers.

Key words: the second kind Euler numbers and polynomials, the second kind q-Euler numbers and polynomials, (h, q)-Euler numbers and polynomials, alternating sums

1. Introduction

Euler numbers, Euler polynomials, q-Euler numbers, q-Euler polynomials, the second kind Euler numbers and the second kind Euler polynomials have been studied by many authors (see for details [19]). Euler numbers and polynomials possess many interesting properties and arise in many areas of mathematics and physics. In [8], by using the second kind Euler numbers E_j and polynomials E_j(x), we investigated the q-analogue of the alternating sums of powers of consecutive odd integers. Let k be a positive integer. Then we obtain

    T_j(k − 1) = \sum_{n=0}^{k−1} (−1)^n (2n + 1)^j = ( (−1)^{k+1} E_j(2k) + E_j ) / 2.

In [9], we introduced the second kind (h, q)-Euler numbers E^{(h)}_{n,q} and polynomials E^{(h)}_{n,q}(x). The outline of this paper is as follows. We introduce the second kind (h, q)-Euler numbers E^{(h)}_{n,q} and polynomials E^{(h)}_{n,q}(x), and give some properties of these numbers and polynomials. In Section 2, we obtain the alternating sums of powers of consecutive (h, q)-odd integers. Finally, we give recurrence identities for the second kind (h, q)-Euler polynomials and the alternating sums of powers of consecutive (h, q)-odd integers.

Throughout this paper, we always make use of the following notations: N = {1, 2, 3, · · · } denotes the set of natural numbers, R denotes the set of real numbers, C denotes the set of complex numbers, Z_p denotes the ring of p-adic rational integers, Q_p denotes the field of p-adic rational numbers, and C_p denotes the completion of the algebraic closure of Q_p. Let ν_p be the normalized exponential valuation of C_p with |p|_p = p^{−ν_p(p)} = p^{−1}. When one talks of a q-extension, q is considered in many ways, such as an indeterminate, a complex number q ∈ C, or a p-adic number q ∈ C_p. If q ∈ C one normally assumes that |q| < 1. If q ∈ C_p, we normally assume that |q − 1|_p < p^{−1/(p−1)}, so that q^x = exp(x log q) for |x|_p ≤ 1. We write

    [x]_q = [x : q] = (1 − q^x)/(1 − q),   cf. [1, 2, 3, 5].

Hence, lim_{q→1} [x]_q = x for any x with |x|_p ≤ 1 in the present p-adic case. Let d be a fixed integer and let p be a fixed odd prime number. For any positive integer N, we set

    X = lim_{←N} (Z/dp^N Z),   X* = ∪_{0<a<dp, (a,p)=1} (a + dp Z_p),
0 0: Hence as Proof. It follows
1
d
(t)
Thus k (x)k1 =
Z
1
Z
1 1 1
Z
1
Rn (0; t; x)d
(t)
jRn (0; t; x)j d Z
1
1
(t)
jtj
(jtj w)n (n 1)!
1
jtj
(jtj w)n (n 1)!
1
0
(30) !
j (sign(t) w; x)jdw d
Z
1
1
Z
1 1
(jtj w)n (n 1)!
0
Z
= =
Z
Z
1
jtjn 1 (n 1)!
j (sign(t)w; x)jdw
1
Z
1 1
1 (n
1)!
1 1 1
Z
1 (n
1)!
1 1
Z Z
1 (n
jtjn 1 (n 1)!
1)!
Z
jtj
0
Z
jtj
0
1
1 1 1
Z
j (sign(t)w; x)jdw d
0
Z
jtj
Z
1
! r (f
!
j (sign(t)w; x)jdx dw jtjn
1
(n)
0
!
n 1
; w)1 dw jtj
d
Z
1 1)!
Z
1 1
jtj
! r (f
(n)
0
k (x)k1
(n
1)!
! r (f
(n)
! r (f (n) ; )1 (n 1)! (r + 1)
Z
; )1 Z
1 1
1 1
Z
1+
jtj
n 1
; w)1 dw jtj
1+
0
jtj
r+1
!
d
(t)
(t) :
!
w
1
(t)
!
Consequently we have 1
(33)
!
d
!
r
!
(32)
(t) dx
0
jtj
(31)
!
! ! Z jtj jtjn 1 j (sign(t)w; x)jdw dx d (t) (n 1)! 0 ! ! Z 1 Z jtj j (sign(t)w; x)jdw dx jtjn 1 d
1
(n
(t) dx =: ( )
j (sign(t)w; x)jdw:
!
That is we get k (x)k1
!
j (sign(t)w; x)jdw d
1
1
(t):
!
Therefore it holds
=
> 0;
j (x)jdx
1
Z
( )
;
! 0 we obtain k (x)k1 ! 0.
Z
0
n 1
1 jtj
1
j (x)j =
But we see that Z jtj
!
r+1
jtj
1+
5
(t) :
n 1
dw jtj n 1
1 jtj
d
d !
(34)
!
(t)
(t) ;
(35)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
1072
George A. Anastassiou, Razvan A. Mezei
6
proving the claim. The case n = 0 is met next. 1 p
Proposition 3. Let p; q > 1 such that Z
:=
+
1
1 q
= 1 and the rest as above. Assume that jtj
! r (f; )p
Z
r;
(f )
f kp
Additionally assume that the Lp norm, p > 1. Proof. We notice that
;
f (x) =
Hence j
(f ; x)
r;
1
Z Z
f (x)j
(t) < 1:
1+
1 p
rp
jtj
d
(t)
:
(36)
1
> 0; 8 > 0; then as
(f ; x)
r;
d
1
Then k
rp
1+
1
! 0 we obtain
(
r t f ) (x)d
j(
r t f ) (x)j d
r;
! unit operator I in
(t):
(37)
1 1 1
(t):
(38)
We next estimate Z
Z =
Z
Z =
Z
1 1 1
j
r;
Z
1 1 1 1 1
1 1 1
Z
1
j(
p r t f ) (x)j
d
j(
p r t f ) (x)j
dx d
! r (f; jtj)pp d
1
jtj
! r f;
1 p
! r (f; )p
Z
Z
f (x)jp dx
(f ; x)
p
d
p 1
(t) !
1+
Z
1 1
p
1 1
j(
r t f ) (x)j d
(t)
(t) dx
dx (39)
(t)
(t) rp
jtj
d
(40)
(t);
1
proving the claim. We also give Proposition 4. Assume
Z
1
1+
r
jtj
d
Then k
r;
(f )
Additionally assuming that
8 > 0; we get as
! 0 that
f k1 Z r;
1
(t) < 1:
1
! r (f; )1
1+
jtj
Z
1
1+
jtj
r
1
r
d
(t)
1
! I in the L1 norm.
;
> 0;
d
(t) :
(41)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
George A. Anastassiou, Razvan A. Mezei
1073
7
Proof. We do have again j
r;
(f ; x)
f (x)j
Z
1 1
r t f (x)j d
j
(t):
(42)
We estimate Z
1 1
j
r;
(f ; x)
Z
f (x)j dx =
Z
1 1
Z
1 1
Z =
Z
1
1 1
Z
1 1 1 1
r t f (x)j d
j
r t f (x)j dx
j
! r (f; jtj)1 d ! r (f;
1
! r (f; )1
jtj Z
)1 d
1
(t) dx d
(43)
(t)
(t) (t) 1+
jtj
(44) r
d
(t) ;
(45)
1
proving the claim.
3 Applications to General Trigonometric Singular Operators
We make
Remark 5. We need the following preliminary result. Let p and m be integers with 1 ≤ p ≤ m. We define the integral

    I(m, p) := \int_{-\infty}^{\infty} \frac{(\sin x)^{2m}}{x^{2p}}\, dx = 2 \int_{0}^{\infty} \frac{(\sin x)^{2m}}{x^{2p}}\, dx.    (46)

This is an (absolutely) convergent integral. According to [11], page 210, item 1033, we obtain

    I(m, p) = \frac{(-1)^{p} \pi (2m)!}{4^{m-p} (2p-1)!} \sum_{k=1}^{m} \frac{(-1)^{k} k^{2p-1}}{(m-k)!(m+k)!}.    (47)

In particular, for p = m the above formula becomes

    \int_{0}^{\infty} \frac{(\sin x)^{2m}}{x^{2m}}\, dx = (-1)^{m} \pi m \sum_{k=1}^{m} \frac{(-1)^{k} k^{2m-1}}{(m-k)!(m+k)!}.    (48)
In this section we apply the general theory of this article to the trigonometric smooth general singular integral operators T_{r,ξ}(f; x), defined as follows. Let ξ > 0, let f ∈ C^n(R) with f^{(n)} ∈ L_p(R), 1 ≤ p < ∞, and let β ∈ N; we define for x ∈ R the integral

    T_{r,\xi}(f; x) := \frac{1}{W} \int_{-\infty}^{\infty} \Big( \sum_{j=0}^{r} \alpha_j f(x + jt) \Big) \Big( \frac{\sin(t/\xi)}{t} \Big)^{2\beta} dt,    (49)

where

    W = \int_{-\infty}^{\infty} \Big( \frac{\sin(t/\xi)}{t} \Big)^{2\beta} dt = 2 \xi^{1-2\beta} \int_{0}^{\infty} \Big( \frac{\sin t}{t} \Big)^{2\beta} dt
      \overset{(48)}{=} 2 \xi^{1-2\beta} (-1)^{\beta} \pi \beta \sum_{k=1}^{\beta} \frac{(-1)^{k} k^{2\beta-1}}{(\beta-k)!(\beta+k)!}.    (50)
The T_{r,ξ} operators are not positive operators, see [9]. Let ⌈·⌉ denote the ceiling of a real number. Before the results, a small numerical sketch of (49)-(50) is given below; we then present the first result of this section.
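The following rough Python sketch evaluates (49)-(50) by quadrature. The weights α_0, ..., α_r are passed in as given, since their definition is fixed earlier in the paper and is not reproduced here; the function name and parameters are illustrative only, and f is assumed to grow slowly enough for the integral to converge.

```python
import numpy as np
from scipy.integrate import quad

def T_r_xi(f, x, xi, beta, alphas):
    # Quadrature sketch of (49)-(50); alphas = (alpha_0, ..., alpha_r) are assumed given.
    # sin(u)/u is written via np.sinc to avoid the removable singularity at t = 0.
    kernel = lambda t: (np.sinc(t / (xi * np.pi)) / xi) ** (2 * beta)
    W = 2.0 * quad(kernel, 0.0, np.inf, limit=500)[0]            # normalizing constant (50)
    integrand = lambda t: sum(a * f(x + j * t) for j, a in enumerate(alphas)) * kernel(t)
    return quad(integrand, -np.inf, np.inf, limit=500)[0] / W    # the integral in (49)
```

The quadrature limit is raised because the kernel oscillates; larger β makes the kernel decay faster and the numerical integration easier.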
Theorem 6. Let p; q > 1 such that Then
1 q
+
= 1; n;
2 N;
and the rest as above.
1
k (x)kp
((n 2
4R
1)!)(q(n 1 sin t 2 t
1 0
(51)
1
1
1) + 1) q (rp + 1) p " drpe+1 Z 1 X tnp 1+j dt j=1 0
Moreover, as ! 0 we get that k (x)kp ! 0. Proof. It is well known that lim sint t = 1: Therefore, lim t!0
function tj
drpe+np+1 2
>
sin t 2 t
t!0
2
sin t t
sin t 2 t
#
3 p1
dt5
n
! r (f (n) ; )p :
= 1: Hence, on (0; 1]; the following
is continuous and bounded. Therefore Z
1
0
For j 2 R such that 2
2
sin t t
tj
dt =
Z
1
2
sin t t
tj
0+
dt < 1:
(52)
j > 1 we have Z
1
1
tj
sin t t
Z
2
dt
1
1
=
Z
1
tj
2
1 t
tj 2
dt
dt
1
=
2 < 1:
1 j
1 (53)
Relations (52) and (53) imply Z
0
1
tj
sin t t
2
dt =
Z
1
tj
0
Z
1
0+
< 1:
t
j
sin t t
2
sin t t
2
dt +
Z
1
tj
sin t t
tj
1 t
1
dt +
Z
1
1
2
dt
2
dt (54)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
1075
George A. Anastassiou, Razvan A. Mezei
By (54) we obtain, for
>
1 M : = W =
2
drpe+np+1 ; 2
Z
1
Z
np 2
=
2
Z
np 2
rp+1
1 tnp
(1 + t)
1
np 2
drpe+1 Z 1 X
np 2
W
0
j=1
< 1:
"
t
2
dt
2
#
2
sin t t
np 1+j
dt
dt
sin t t 3
drpe+1
2
2
sin t t
1
(1 + t) 1 tnp 1 0 20 1 Z 1 drpe+1 X sin t 4@ tj A tnp 1 t 0 j=1
W 2
1
sin (t= ) t
np 1
1 jtj
0
W =
1+
!
rp+1
jtj
1
W 2
9
5 dt
dt
(55)
Using Theorem 1 we obtain 1
k (x)kp
((n
1)!)(q(n
Using (55) we get 2
M
np 2
W
1 1 q
1) + 1) (rp + 1)
drpe+1 Z 1 X 0
j=1
"
tnp
1+j
1 p
Mp
1 p
2
sin t t
! r (f (n) ; )p :
(56)
#
(57)
dt
therefore, k (x)kp
1 ((n 2
4R
1)!)(q(n 1 1 0
sin t 2 t
1
1
1) + 1) q (rp + 1) p " drpe+1 Z 1 X tnp 1+j 0 dt j=1
sin t t
2
proving the claim of the theorem. The counterpart of Theorem 6 follows, case of p = 1: Theorem 7. Let f 2 C n (R) and f (n) 2 L1 (R); n 2 N; 2 N; k (x)k1 Hence as
(r + 1) (n
1)!
1 hR 1 0
sin t 2 t
! 0 we obtain k (x)k1 ! 0.
dt
i
r+1 Z X j=1
0
1
"
t
n 1+j
>
#
3 p1
dt5
r+1+n . 2
sin t t
2
n
! r (f (n) ; )p
(58)
Then #
!
dt
n
! r (f (n) ; )1 : (59)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
1076
George A. Anastassiou, Razvan A. Mezei
10
Proof. As in the proof of Theorem 6, inequality (54), for > r+1+n we have, 2 ! Z 1 r+1 2 jtj sin (t= ) 1 n 1 1+ 1 jtj dt W t 1 Z 1 2 2 n 2 sin t r+1 = (1 + t) 1 tn 1 dt W t 0 " # r+1 Z 2 2 n 2 X 1 n 1+j sin t t dt = W j=1 0 t
(60)
< 1:
Then, by Theorem 2, we obtain 1 (r + 1) (n
k (x)k1
1 = (r + 1) (n
1)!
1)!
(50)
=
(r + 1) (n
1)!
Z
1 W 0
@2
1 1
n 2
W
1 hR 1 0
1+ r+1 Z X
i dt
sin t 2 t
poving the claim of the theorem.
The case n = 0 is met next. Proposition 8. Let p; q > 1 such that
kTr; (f )
f kp
1
0
j=1
1 p
+
0
r+1
jtj "
1
tn
0
j=1
1 q
1
= 1;
sin t 2 t
0
jtj
2
t
dt
i
>
drpe+1 2
drpe " X Z
1
tj
0
j=0
2
sin t t
n 1+j
2 N;
1
! r (f; )p @ hR 1
"
! 2 sin (t= ) dt ! r (f (n) ; )1 t # 1 dtA ! r (f (n) ; )1
n 1
sin t t
1+j
r+1 Z X
!
#
dt
=
2
1 W 1 2
W 2
1 2
W =
2
1 2
W
< 1:
Z
1
1+
jtj
rp
sin t t
1
Z
1
0
Z
1
drpe
(1 + t)
0
drpe " X Z
j=0
0
1
tj
dt
sin t t
sin t t
dt
2
sin t t
rp
(1 + t)
2
sin (t= ) t
2
dt
2
dt
#
n
! r (f (n) ; )1 ;
and the rest as above. Then 2
Also as ! 0 we obtain Tr; ! unit operator I in the Lp norm, p > 1. Proof. As in the proof of Theorem 6, inequality (54), for > drpe+1 we have 2 : =
!
#1 p1 dt A :
(61)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
1077
George A. Anastassiou, Razvan A. Mezei
11
By use of Proposition 3 we obtain
kTr; (f )
0
! r (f; )p @
f kp
2
W
0
(50)
drpe " X Z
1 2
1
1
= ! r (f; )p @ hR 1
sin t 2 t
0
dt
proving the claim of the proposition. We also give Proposition 9. For
2 N;
kTr; (f )
r+1 2 ;
>
i
drpe " X Z
Z
= =
2
! r (f; )1 1 0
sin t 2 t
1
1 2
W 2
tj
#1 p1 dt A
2
sin t t
0
j=0
i dt
"Z r X
1
j=0
1 2
W
1
Z
1
0 r X j=0
< 1:
r
jtj
1+
tj
r+1 2
we have
sin t t
tj
0
(62)
dt
2
sin t t
r
dt :
2
sin (t= ) t
(1 + t) "Z 1
>
#
2
sin t t
0
Moreover as ! 0 we get that Tr; ! I in the L1 norm. Proof. As in the proof of Theorem 6, inequality (54), for 1 W
1
#1 p1 dt A
we have
hR
f k1
sin t t
tj
0
j=0
2
dt #
2
dt
By Proposition 4 we obtain kTr; (f )
f k1
! r (f; )1
Z
! r (f; )1
(50)
= hR 1 0
sin t 2 t
which proves the claim of the proposition.
4
1 W
dt
1
1+
r
jtj
1
i
"Z r X j=0
1
0
t
j
sin (t= ) t sin t t
2
2
dt #
!
dt ;
Applications to Particular Trigonometric Singular Operators
In this section we work on the approximation results given in the previous section, for some particular values of r; n; p and : Case = 2: We have the following results Corollary 10. Let f 2 C 1 (R) and f 0 2 L1 (R). Then kT1; (f ; x)
f (x)k1
3 2
ln 2 +
4
! 1 (f 0 ; )1 :
(63)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
George A. Anastassiou, Razvan A. Mezei
1078
12
Hence as ! 0 we obtain kT1; (f ; x) f (x)k1 ! 0. Proof. By Theorem 7, with r = n = 1; = 2. Corollary 11. It holds kT1; (f )
f k1:5
2 3
7 3 + ln 2 4
! 1 (f; )1:5
:
(64)
Also as ! 0 we obtain T1; ! unit operator I in the L1:5 norm. Proof. By Proposition 8, with p = 1:5; r = 1; = 2. Corollary 12. We have
r
3 7 + ln 2: 4 Also as ! 0 we obtain T1; ! unit operator I in the L2 norm. Proof. By Proposition 8, with p = 2; r = 1; = 2. kT1; (f )
f k2
! 1 (f; )2
kT1; (f )
f k1
! 1 (f; )1
(65)
Corollary 13. It holds 3 ln 2
+1 :
(66)
7 3 + ln 2 : 4
(67)
Moreover as ! 0 we get that T1; ! I in the L1 norm. Proof. By Proposition 9, with r = 1; = 2. Corollary 14. We have kT2; (f )
f k1
! 2 (f; )1
Moreover as ! 0 we get that T2; ! I in the L1 norm. Proof. By Proposition 9, with r = 2; = 2. Case = 3: We have the following results Corollary 15. It holds r kT1; (f ; x)
f (x)k2
5 256 25 + ln 22 27 66
!
! 1 (f 0 ; )p :
(68)
!
! 1 (f 0 ; )1 :
(69)
Moreover, as ! 0 we get that kT1; (f ; x) f (x)k2 ! 0. Proof. By Theorem 6, with p = 2; r = 1; n = 1; = 3. Corollary 16. Let f 2 C 1 (R) and f 0 2 L1 (R): Then kT1; (f ; x)
f (x)k1
20 ln 11
27
3 16 4
!
5 + 22
Hence as ! 0 we obtain kT1; (f ; x) f (x)k1 ! 0. Proof. By Theorem 7, with r = 1; n = 1; = 3. Corollary 17. Let f 2 C 1 (R) and f 0 2 L1 (R). Then ! ! 27 40 3 16 5 5 256 kT2; (f ; x) f (x)k1 ln + + ln ! 2 (f 0 ; )1 : 33 4 33 22 27
(70)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
George A. Anastassiou, Razvan A. Mezei
1079
13
Hence as ! 0 we obtain kT2; (f ; x) f (x)k1 ! 0. Proof. By Theorem 7, with r = 2; n = 1; = 3. Corollary 18. Let f 2 C 2 (R) and f 00 2 L1 (R): Then kT2; (f ; x)
f (x)
f 00 (x) 2
2 c2;
25 5 256 + ln 66 22 27
k1
Hence as ! 0 we obtain kT2; (f ; x) f (x) f Proof. By Theorem 7, with r = 2; n = 2; = 3.
00
(x) 2 c2; 2
2
! 2 (f 00 ; )1 :
(71)
k1 ! 0.
Corollary 19. It holds
kT2; (f )
f k2
v u u 40 ln ! 2 (f; )2 t 11
3 16 4
v u u 40 4 ! 1 (f; )4 t ln 11
3 16 4
v u u 40 3 ln ! 1 (f; )3 t 11
3 16 4
27
!
+
15 256 47 ln + : 22 27 22
(72)
Also as ! 0 we obtain Tr; ! unit operator I in the L2 norm. Proof. By Proposition 8, with p = r = 2; = 3. Corollary 20. It holds
kT1; (f )
f k4
27
!
+
15 256 47 ln + : 22 27 22
(73)
Also as ! 0 we obtain Tr; ! unit operator I in the Lp norm, p > 1. Proof. By Proposition 8, with p = 4; r = 1; = 3. Corollary 21. It holds
kT1; (f )
f k3
27
!
+
16 15 256 + ln : 11 22 27
(74)
Also as ! 0 we obtain Tr; ! unit operator I in the Lp norm, p > 1. Proof. By Proposition 8, with p = 3; r = 1; = 3. Corollary 22. We have kT1; (f )
f k1
! 1 (f; )1
40 ln 1+ 11
27
3 16 4
!!
:
(75)
Moreover as ! 0 we get that T1; ! I in the L1 norm. Proof. By Proposition 9, with r = 1; = 3. Corollary 23. We have kT2; (f )
f k1
! 2 (f; )1
40 ln 11
Moreover as ! 0 we get that T2; ! I in the L1 norm. Proof. By Proposition 9, with r = 2; = 3.
27
3 16 4
!
16 + 11
!
:
(76)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
1080
George A. Anastassiou, Razvan A. Mezei
14
Corollary 24. We have kT3; (f )
f k1
27
40 ln 11
! 3 (f; )1
3 16 4
!
16 15 256 + + ln 11 22 27
!
:
(77)
Moreover as ! 0 we get that T3; ! I in the L1 norm. Proof. By Proposition 9, with r = 3; = 3. Case = 4: We have the following results Corollary 25. It holds s 126 327=8 kT2; (f ; x) f (x)k2 ln 151 32
!
651 567 4 + + ln 2416 604 3
! 2 (f 0 ; )2 :
(78)
Moreover, as ! 0 we get that kT2; (f ; x) f (x))k2 ! 0. Proof. By Theorem 6, with p = r = 2; n = 1; = 4. Corollary 26. It holds kT1; (f ; x)
s
f (x)k2
210 35 + ln 151 151
327=8 32
!
! 1 (f 0 ; )2 :
(79)
Moreover, as ! 0 we get that kT1; (f ; x) f (x)k2 ! 0. Proof. By Theorem 6, with p = 2; r = n = 1; = 4. Corollary 27. It holds kT2; (f ; x)
f (x)
f 00 (x) 2
2 c2;
k1:5
1 630 39=4 2415 ln 11=4 + 4 151 2416 2 00
Moreover, as ! 0 we get that kT2; (f ; x) f (x) f 2(x) Proof. By Theorem 6, with p = 1:5; r = 2; n = 2; = 4.
2 c2;
2 3
2
! 2 (f 00 ; )1:5 :
(80)
! 2 (f 0 ; )1 :
(81)
k1:5 ! 0.
Corollary 28. Let f 2 C 1 (R) and f 0 2 L1 (R). Then kT2; (f ; x)
210 ln 151
f (x)k1
2104=15 381=20
+
35 210 + ln 302 151
327=8 32
Hence as ! 0 we obtain kT2; (f ; x) f (x)k1 ! 0. Proof. By Theorem 7, with r = 2; n = 1; = 4. Corollary 29. Let f 2 C 1 (R) and f 0 2 L1 (R). Then kT5; (f ; x)
f (x)k1
105 229=15 945 4 1085 ln 27=40 + ln + 151 1208 3 4832 3
! 5 (f 0 ; )1 :
(82)
! 2 (f 000 ; )1 :
(83)
Hence as ! 0 we obtain kT5; (f ; x) f (x)k1 ! 0. Proof. By Theorem 7, with r = 5; n = 1; = 4. Corollary 30. Let f 2 C n (R) and f (n) 2 L1 (R). Then kT2; (f ; x)
f (x)
f 00 (x) 2
2 c2;
k1
105 39=4 35 ln 11=4 + 151 604 2
3
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
George A. Anastassiou, Razvan A. Mezei
1081
15
Hence as ! 0 we obtain kT2; (f ; x) f (x) f Proof. By Theorem 7, with r = 2; n = 3; = 4.
00
(x) 2 c2; 2
k1 ! 0.
Corollary 31. Let f 2 C n (R) and f (n) 2 L1 (R). Then kT3; (f ; x)
f (x)
f 00 (x) 2
2 c2;
315 39=4 2415 ln 11=4 + 604 19328 2
k1
Hence as ! 0 we obtain kT3; (f ; x) f (x) f Proof. By Theorem 7, with r = 3; n = 3; = 4.
00
(x) 2 c2; 2
3
! 3 (f 000 ; )1 :
(84)
k1 ! 0.
Corollary 32. It holds kT2; (f )
f k2
! 2 (f; )2
r
630 229=15 256 ln 27=40 + : 151 151 3
(85)
Also as ! 0 we obtain T2; ! unit operator I in the L2 norm. Proof. By Proposition 8, with p = r = 2; = 4. Corollary 33. It holds kT2; (f )
f k3
! 2 (f; )3
r 3
2251=60 630 5671 ln 9=5 + : 151 2416 3
(86)
Also as ! 0 we obtain T2; ! unit operator I in the L3 norm. Proof. By Proposition 8, with p = 3; r = 2; = 4. Corollary 34. It holds kT3; (f )
f k2
! 3 (f; )2
r
630 5671 2251=60 ln 9=5 + : 151 2416 3
(87)
Also as ! 0 we obtain T3; ! unit operator I in the L2 norm. Proof. By Proposition 8, with p = 2; r = 3; = 4. Corollary 35. It holds kT1; (f )
f k2
! 1 (f; )2
r
630 2104=15 407 ln 81=20 + : 151 302 3
(88)
Also as ! 0 we obtain T1; ! unit operator I in the L2 norm. Proof. By Proposition 8, with p = 2; r = 1; = 4. Corollary 36. We have kT1; (f )
f k1
! 1 (f; )1 1 +
630 2104=15 ln 81=20 151 3
:
(89)
Moreover as ! 0 we get that T1; ! I in the L1 norm. Proof. By Proposition 9, with r = 1; = 4. Corollary 37. We have kT2; (f )
f k1
! 2 (f; )1
630 2104=15 407 ln 81=20 + 151 302 3
:
(90)
ANASTASSIOU, MEZEI: GENERAL SINGULAR INTEGRALS
George A. Anastassiou, Razvan A. Mezei
1082
16
Moreover as ! 0 we get that T2; ! I in the L1 norm. Proof. By Proposition 9, with r = 2; = 4. Corollary 38. We have kT3; (f )
f k1
! 3 (f; )1
630 229=15 407 ln 27=40 + 151 302 3
:
(91)
Moreover as ! 0 we get that T3; ! I in the L1 norm. Proof. By Proposition 9, with r = 3; = 4. Corollary 39. We have kT6; (f )
f k1
! 6 (f; )1
5671 630 2251=60 ln 9=5 + 151 2416 3
:
(92)
Moreover as ! 0 we get that T6; ! I in the L1 norm. Proof. By Proposition 9, with r = 6; = 4.
References
[1] G.A. Anastassiou, Rate of convergence of non-positive linear convolution type operators. A sharp inequality, J. Math. Anal. and Appl., 142 (1989), 441-451.
[2] G.A. Anastassiou, Sharp inequalities for convolution type operators, Journal of Approximation Theory, 58 (1989), 259-266.
[3] G.A. Anastassiou, Moments in Probability and Approximation Theory, Pitman Research Notes in Math., Vol. 287, Longman Sci. & Tech., Harlow, U.K., 1993.
[4] G.A. Anastassiou, Quantitative Approximations, Chapman & Hall/CRC, Boca Raton, New York, 2001.
[5] G.A. Anastassiou, Basic convergence with rates of smooth Picard singular integral operators, J. of Computational Analysis and Applications, Vol. 8, No. 4 (2006), 313-334.
[6] G.A. Anastassiou, Lp convergence with rates of smooth Picard singular operators, Differential & difference equations and applications, Hindawi Publ. Corp., New York, (2006), 31-45.
[7] G.A. Anastassiou and S. Gal, Convergence of generalized singular integrals to the unit, univariate case, Math. Inequalities & Applications, 3, No. 4 (2000), 511-518.
[8] G.A. Anastassiou and S. Gal, Convergence of generalized singular integrals to the unit, multivariate case, Applied Math. Rev., Vol. 1, World Sci. Publ. Co., Singapore, 2000, pp. 1-8.
[9] G.A. Anastassiou and R. A. Mezei, Uniform Convergence With Rates of General Singular Operators, submitted 2010.
[10] R.A. DeVore and G.G. Lorentz, Constructive Approximation, Springer-Verlag, Vol. 303, Berlin, New York, 1993.
[11] Joseph Edwards, A treatise on the integral calculus, Vol. II, Chelsea, New York, 1954.
[12] S.G. Gal, Remark on the degree of approximation of continuous functions by singular integrals, Math. Nachr., 164 (1993), 197-199.
[13] S.G. Gal, Degree of approximation of continuous functions by some singular integrals, Rev. Anal. Numér, Théor. Approx., (Cluj), Tome XXVII, No. 2 (1998), 251–261. [14] R.N. Mohapatra and R.S. Rodriguez, On the rate of convergence of singular integrals for Hölder continuous functions, Math. Nachr. 149 (1990), 117–124.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.6, 1084-1095, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
CONVERGENCE THEOREMS OF ITERATIVE SCHEMES FOR A FINITE FAMILY OF ASYMPTOTICALLY QUASI-NONEXPANSIVE TYPE MAPPINGS IN METRIC SPACES Jong Kyu Kim1 and Chu Hwan Kim2 1
Department of Mathematics Education, Kyungnam University Masan, Kyungnam, 631-701, Korea e-mail: [email protected] 2
Department of Mathematics, Kyungnam University Masan, Kyungnam, 631-701, Korea
Abstract. The purpose of this paper is to introduce and study the convergence problem of an implicit iterative scheme with errors for a finite family of asymptotically quasi-nonexpansive type mappings in convex metric spaces.
1. Introduction In 1953, Mann [20] introduced the general iterative schemes and proved the following convergence theorem of the iterative schemes: Let K be a nonempty subset of a Banach space E and T : K → K be a mapping. For any given x0 ∈ K, the scheme {xn } defined by xn+1 = (1 − αn )xn + αn T xn
(n ≥ 0)
(1.1)
is called the Mann iterative scheme, where {αn } is a real sequence in [0, 1] satisfying appropriate conditions. Further, in 1974, Ishikawa [12] proved the following theorem by introducing the Ishikawa iterative scheme, that is, for any given x0 ∈ K, the scheme {xn } defined by ½
xn+1 = (1 − αn )xn + αn T yn , yn = (1 − βn )xn + βn T xn (n ≥ 0),
(1.2)
where {αn} and {βn} are two real sequences in [0, 1] satisfying appropriate conditions. In particular, if βn = 0 for all n ≥ 0 in (1.2), then the Ishikawa iterative scheme becomes the Mann iterative scheme (1.1). These two iterative schemes have been studied extensively by many authors for approximating either fixed points of nonlinear operators or solutions of nonlinear operator equations in Banach spaces. The convergence problems of the Mann and Ishikawa iterative schemes have also been studied extensively by many authors, see [11], [12], [20], [30] and [31].
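For readers who prefer to see the schemes operationally, here is a small Python sketch of the Mann scheme (1.1) and the Ishikawa scheme (1.2) for a self-map T of a subset of a normed space. It is only an illustration; the paper itself works in general convex metric spaces, and T, x0 and the parameter sequences are supplied by the user.

```python
def mann(T, x0, alphas, num_steps):
    # Mann scheme (1.1): x_{n+1} = (1 - a_n) x_n + a_n T(x_n).
    x = x0
    for n in range(num_steps):
        x = (1 - alphas[n]) * x + alphas[n] * T(x)
    return x

def ishikawa(T, x0, alphas, betas, num_steps):
    # Ishikawa scheme (1.2); with betas identically zero it reduces to (1.1).
    x = x0
    for n in range(num_steps):
        y = (1 - betas[n]) * x + betas[n] * T(x)
        x = (1 - alphas[n]) * x + alphas[n] * T(y)
    return x
```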
2000 Mathematics Subject Classification: 47H09, 47H10. The corresponding author: J. K. Kim ([email protected]). Keywords: a finite family of asymptotically quasi-nonexpansive type mappings, implicit iterative scheme with errors, complete convex metric space. This work was supported by the Kyungnam University Foundation Grant 2010.
In 2001, Xu and Ori [29] introduced an implicit iterative scheme for a finite family of nonexpansive mappings. Let T_1, T_2, · · · , T_N be N self-mappings of C and suppose that F = ∩_{i=1}^{N} F(T_i) ≠ ∅, the set of common fixed points of T_i,
i = 1, 2, · · · , N. Hereafter, we will denote the index set {1, 2, · · · , N } by I. An implicit iterative scheme for a finite family of nonexpansive mappings is defined as follows, with {tn } a real sequence in (0, 1), x0 ∈ C : x1 = t1 x0 + (1 − t1 )T1 x1 , x2 = t2 x1 + (1 − t2 )T2 x2 , ··· xN = tN xN −1 + (1 − tN )TN xN , xN +1 = tN +1 xN + (1 − tN +1 )T1 xN +1 , ··· which can be written in the following compact form : xn = tn xn−1 + (1 − tn )Tn xn ,
n ≥ 1,
(1.3)
where T_n = T_{n mod N} (here the mod N function takes values in I). They proved the weak convergence of the process (1.3) to a common fixed point in the setting of a Hilbert space. In 2003, Sun [23] extended the scheme (1.3) to a process for a finite family of asymptotically quasi-nonexpansive mappings, with {αn} a real sequence in (0, 1) and an initial point x0 ∈ C, defined as follows: x1 = α1 x0 + (1 − α1)T1 x1, · · · , xN = αN xN−1 + (1 − αN)TN xN, xN+1 = αN+1 xN + (1 − αN+1)T1^2 xN+1, · · · , x2N = α2N x2N−1 + (1 − α2N)TN^2 x2N, x2N+1 = α2N+1 x2N + (1 − α2N+1)T1^3 x2N+1, · · · , which can be written in the following compact form: xn = αn xn−1 + (1 − αn)T_i^k xn,
n ≥ 1,
(1.4)
where n = (k − 1)N + i, i ∈ I. Sun [23] proved that the modified implicit iterative scheme (1.4) for a finite family of asymptotically quasi-nonexpansive mappings converges strongly to a common fixed point of the family in a uniformly convex Banach space, requiring one member T of the family to be semi-compact. The results presented by Sun [23] generalized and extended the corresponding main results of Wittmann [28] and Xu-Ori [29].
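The schemes (1.3) and (1.4) are implicit: the new iterate appears on both sides of each step. The Python sketch below approximates each implicit step by an inner Picard loop; this inner loop is a device of the sketch only and is not part of the schemes in [29] or [23], and the mappings Ts and the parameter sequence are assumed to be supplied by the user.

```python
def implicit_scheme(Ts, x0, alphas, num_steps, inner_iters=50):
    # Sketch of scheme (1.4): at step n (with n = (k-1)N + i) the iterate solves
    # x_n = a_n x_{n-1} + (1 - a_n) T_i^k x_n.
    N = len(Ts)
    x_prev = x0
    for n in range(1, num_steps + 1):
        k_minus_1, i = divmod(n - 1, N)      # cycle index k-1 and mapping index i-1

        def TiK(z, reps=k_minus_1 + 1, T=Ts[i]):
            for _ in range(reps):            # apply T_i a total of k times
                z = T(z)
            return z

        x = x_prev
        for _ in range(inner_iters):         # solve the implicit equation approximately
            x = alphas[n - 1] * x_prev + (1 - alphas[n - 1]) * TiK(x)
        x_prev = x
    return x_prev
```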
It is very interesting to consider the convergence theorems of the iterative schemes for nonlinear operators in metric spaces. In 1970, Takahashi [24] introduced the concept of convexity in a metric space: Let (X, d) be a metric space and I = [0, 1]. A mapping W : X × X × I → X is said to be a convex structure on X if for each (x, y, λ) ∈ X × X × I and u ∈ X, d(u, W (x, y, λ)) ≤ λd(u, x) + (1 − λ)d(u, y). X together with a convex structure W is called a convex metric space, denoted it by (X, d, W ). A nonempty subset K of (X, d, W ) is said to be convex if W (x, y, λ) ∈ K for all (x, y, λ) ∈ K × K × I. Recall that a function Φ : [0, ∞) → [0, ∞) is said to satisfy the condition (CΦ ) if it is nondecreasing continuous from right, Φ(t) < t for all t > 0 and Φ(0) = 0. Let (X, d) be a complete metric space, T : X → X be a mapping. (1) T is said to be quasi-contractive if there exists a constant k ∈ [0, 1) such that d(T x, T y) n o ≤ k max d(x, y), d(x, T x), d(y, T y), d(x, T y), d(y, T x) ,
(1.5)
for all x, y ∈ X. (2) T is said to be generalized quasi-contractive if there exists a function Φ : [0, ∞) → [0, ∞) satisfying the condition (CΦ ) such that d(T x, T y) ³ n o´ ≤ Φ max d(x, y), d(x, T x), d(y, T y), d(x, T y), d(y, T x) ,
(1.6)
for all x, y ∈ X. In 1988, Ding [8] established the following result for Ishikawa type iterative schemes of quasi-contractive mappings in convex metric spaces.

Theorem 1.1. Let K be a nonempty closed convex subset of a complete convex metric space (X, d, W) and let T : K → K be a quasi-contractive mapping (1.5). Suppose that {xn} is the Ishikawa type iterative scheme defined by x0 ∈ K, xn+1 = W(T yn, xn, αn), yn = W(T xn, xn, βn), n ≥ 0, where {αn} and {βn} satisfy 0 ≤ αn, βn ≤ 1 and Σ_{n=0}^{∞} αn = ∞. Then {xn} converges to a unique fixed point of T in K.

Recently, in 2003, Chang and Kim [3] proved convergence theorems of the Ishikawa type iterative schemes with errors for generalized quasi-contractive mappings (1.6) in convex metric spaces, which improved and extended the corresponding results in [4], [8] and others.
Theorem 1.2. Let (X, d, W ) be a complete convex metric space with a convex structure W : X 3 × I 3 → X of X, T be a generalized quasi-contractive mapping satisfying condition (1.6) and {xn } be the Ishikawa type iterative scheme with errors of T defined by ½ xn+1 = W (xn , T yn , un ; αn , βn , γn ), yn = W (xn , T xn , vn ; ξn , ηn , δn ), ∀ n ≥ 0, where {un }, {vn } are two sequences in X. Then the scheme {xn } converges to a unique fixed point of T in X. And, Chang-Kim-Jin [2] proved convergence theorems of the Ishikawa type iterative schemes with errors for asymptotically quasi-nonexpansive type mappings in convex metric spaces. Theorem 1.3. Let (X, d, W ) be a complete convex metric space and T : X → X be an asymptotically quasi-nonexpansive type mapping satisfying the following conditions : there exist constants L > 0 and α > 0 such that d(T x, p) ≤ L · d(x, p)α ,
∀ x ∈ X, ∀ p ∈ F (T ).
For any given x0 ∈ X, let {xn } be the iterative scheme with errors defined by ½ xn+1 = W (xn , T n yn , un ; αn , βn , γn ), yn = W (xn , T n xn , vn ; αn0 , βn0 , γn0 ), ∀ n ≥ 0, where {un }, {vn } are bounded sequences in X. If the sequences {βn } and {γn } satisfying the following conditions : ∞ X
βn < ∞,
n=0
∞ X
γn < ∞.
n=0
Then {xn } converges strongly to a fixed point of T in X if and only if lim inf Dd (xn , F (T )) = 0, n→∞
where, Dd (x, A) = inf a∈A d(x, a) for any set A. Very recently, in 2006, Kim-Kim-Kim [15] proved convergence theorems of an implicit iterative schemes with errors for a finite family of asymptotically quasinonexpansive mappings in convex metric spaces. Theorem 1.4. Let (X, d, W ) be a complete convex metric space. Let {Ti : i ∈ I} be a finite family of asymptotically quasi-nonexpansive mappings from X into X, that is, d(Tin x, pi ) ≤ (1 + hn(i) )d(x, pi ) T for all x ∈ X, pi ∈ F (Ti ), i ∈ I. Suppose that F = N and that x0 ∈ i=1 F (Ti ) 6= ∅ P P∞ 1 X, {βn } ⊂ [s, 1 − s] for some s ∈ (0, 2 ), n=1 hn(i) < ∞ (i ∈ I), ∞ n=1 γn < ∞ and {un } is arbitrary bounded sequence in X. Then the implicit iterative scheme with error {xn } generated by (2.2) converges to a common fixed point in F if and only if lim inf Dd (xn , F ) = 0, n→∞
where Dd (x, F ) denotes the distance from x to the set F, that is, Dd (x, F ) = inf y∈F d(x, y). The purpose of this paper is to introduce and study the convergence problem of an implicit iterative sequences with errors for a finite family of asymptotically quasi-nonexpansive type mappings in convex metric spaces. And also we point out that the results of this paper not only generalize and improve the corresponding results of Chang et al. [1], Liu [17], [18], Petryshyn-Williamson [21], Tan-Xu [25], Ghosh-Debnath [10] and Chang [4]-[6] but also contain the corresponding results in [4]-[6], [10], [15]-[19], [21], [24] as its special cases. 2. Preliminaries Throughout this paper, we assume that F (Ti ) (i ∈ I) is the set of fixed points of Ti respectively, that is, F (Ti ) = {x ∈ X : Ti x = x}. The set of common fixed N T points of Ti (i ∈ I) denotes by F, that is, F = F (Ti ). i=1
Definition 2.1. Let E be a metric space and T : D(T) ⊂ E → E be a mapping.
(1) T is said to be nonexpansive if d(Tx, Ty) ≤ d(x, y) for all x, y ∈ D(T).
(2) T is said to be quasi-nonexpansive if F(T) ≠ ∅ and d(Tx, p) ≤ d(x, p) for all x ∈ D(T) and p ∈ F(T).
(3) T is said to be asymptotically nonexpansive if there exists a sequence h_n ∈ [1, ∞) with lim_{n→∞} h_n = 1 such that d(T^n x, T^n y) ≤ h_n d(x, y) for all x, y ∈ D(T) and n ≥ 1.
(4) T is said to be asymptotically quasi-nonexpansive if F(T) ≠ ∅ and there exists a sequence h_n ∈ [1, ∞) with lim_{n→∞} h_n = 1 such that d(T^n x, p) ≤ h_n d(x, p) for all x ∈ D(T), p ∈ F(T) and n ≥ 1.
(5) T is said to be of asymptotically nonexpansive type if
lim sup_{n→∞} { sup_{x,y∈D(T)} { d(T^n x, T^n y) − d(x, y) } } ≤ 0.
(6) T is said to be of asymptotically quasi-nonexpansive type if F(T) ≠ ∅ and
lim sup_{n→∞} { sup_{x∈D(T)} { d(T^n x, p) − d(x, p) } } ≤ 0    (2.1)
for all p ∈ F(T).
Remark 2.2. We know that the following implications hold: (1) ⟹ (3) ⟹ (5) and (2) ⟹ (4) ⟹ (6); moreover, when F(T) ≠ ∅, we also have (1) ⟹ (2), (3) ⟹ (4) and (5) ⟹ (6).
Definition 2.3. ([2]) Let (X, d) be a metric space and I = [0, 1]. A mapping W : X³ × I³ → X is said to be a convex structure on X if it satisfies the following conditions: for all u, x, y, z ∈ X and for all α, β, γ ∈ I with α + β + γ = 1,
(1) W(x, y, z; α, 0, 0) = x,
(2) d(u, W(x, y, z; α, β, γ)) ≤ αd(u, x) + βd(u, y) + γd(u, z).
If (X, d) is a metric space with a convex structure W, then (X, d) is called a convex metric space and is denoted by (X, d, W).

Remark 2.4. Every linear normed space is a convex metric space with the convex structure W(x, y, z; α, β, γ) = αx + βy + γz, for all x, y, z ∈ X and α, β, γ ∈ I with α + β + γ = 1. In fact,
d(u, W(x, y, z; α, β, γ)) = ‖u − (αx + βy + γz)‖ ≤ α‖u − x‖ + β‖u − y‖ + γ‖u − z‖ = αd(u, x) + βd(u, y) + γd(u, z), ∀ u ∈ X.
However, there exist convex metric spaces which cannot be embedded into any linear normed space.

Example 2.5. Let X = {(x_1, x_2, x_3) ∈ R³ : x_1 > 0, x_2 > 0, x_3 > 0}. For x = (x_1, x_2, x_3), y = (y_1, y_2, y_3), z = (z_1, z_2, z_3) ∈ X and α, β, γ ∈ I with α + β + γ = 1, define a mapping W : X³ × I³ → X by
W(x, y, z; α, β, γ) = (αx_1 + βy_1 + γz_1, αx_2 + βy_2 + γz_2, αx_3 + βy_3 + γz_3)
and define a metric d : X × X → [0, ∞) by
d(x, y) = |x_1 − y_1| + |x_2 − y_2| + |x_3 − y_3|.
Then one can show that (X, d, W) is a convex metric space, but it is not a linear normed space.
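Remark 2.4 and Example 2.5 can be checked numerically. The following Python sketch (an illustration added here, not part of the original paper) samples random points and verifies condition (2) of Definition 2.3 for the coordinatewise convex combination W and the metric of Example 2.5; the sample size and tolerance are arbitrary choices.

```python
import random

def W(x, y, z, a, b, c):
    # coordinatewise convex combination, as in Remark 2.4 / Example 2.5
    return tuple(a * xi + b * yi + c * zi for xi, yi, zi in zip(x, y, z))

def d(x, y):
    # the metric of Example 2.5 (sum of coordinatewise distances)
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

random.seed(0)
for _ in range(1000):
    u, x, y, z = [tuple(random.uniform(0.1, 5.0) for _ in range(3)) for _ in range(4)]
    a, b = random.random(), random.random()
    a, b = a / (a + b + 1.0), b / (a + b + 1.0)
    c = 1.0 - a - b            # so that a + b + c = 1 with a, b, c in [0, 1]
    lhs = d(u, W(x, y, z, a, b, c))
    rhs = a * d(u, x) + b * d(u, y) + c * d(u, z)
    assert lhs <= rhs + 1e-12  # condition (2) of Definition 2.3
print("condition (2) verified on all samples")
```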
Definition 2.6. Let (X, d, W ) be a convex metric space with a convex structure W and let Ti : X → X (i ∈ I) be asymptotically quasi-nonexpansive type
mappings. For any given x_0 ∈ X, the following iterative scheme {x_n} is defined by
x_1 = W(x_0, T_1 x_1, u_1; α_1, β_1, γ_1),
···
x_N = W(x_{N−1}, T_N x_N, u_N; α_N, β_N, γ_N),
x_{N+1} = W(x_N, T_1² x_{N+1}, u_{N+1}; α_{N+1}, β_{N+1}, γ_{N+1}),
···
x_{2N} = W(x_{2N−1}, T_N² x_{2N}, u_{2N}; α_{2N}, β_{2N}, γ_{2N}),
x_{2N+1} = W(x_{2N}, T_1³ x_{2N+1}, u_{2N+1}; α_{2N+1}, β_{2N+1}, γ_{2N+1}),
···
which can be written in the following compact form:
x_n = W(x_{n−1}, T_i^k x_n, u_n; α_n, β_n, γ_n), n ≥ 1,    (2.2)
where n = (k − 1)N + i, i ∈ I, {u_n} is a bounded sequence in X, and {α_n}, {β_n}, {γ_n} are three sequences in [0, 1] such that α_n + β_n + γ_n = 1 for n = 1, 2, 3, ···. Scheme (2.2) is called the implicit iterative scheme with errors for the finite family of mappings T_i (i = 1, 2, ···, N). If u_n = 0 in (2.2), then
x_n = W(x_{n−1}, T_i^k x_n; α_n, β_n), n ≥ 1,    (2.3)
where n = (k − 1)N + i, i ∈ I, and {α_n}, {β_n} are two sequences in [0, 1] such that α_n + β_n = 1 for n = 1, 2, 3, ···. Scheme (2.3) is called the implicit iterative scheme for the finite family of mappings T_i (i = 1, 2, ···, N).

3. Main Results

In order to prove the main theorems of this paper, we need the following lemma:

Lemma 3.1. ([26]) Let {a_n} and {b_n} be two nonnegative sequences satisfying
a_{n+1} ≤ a_n + b_n (n ≥ n_0),
where ∑_{n=1}^∞ b_n < ∞ and n_0 is some positive integer. Then lim_{n→∞} a_n exists.
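Before stating the main theorem, here is a small numerical sketch of the implicit scheme (2.2) on the real line (an illustration added here, not part of the paper): the convex structure is W(x, y, z; a, b, c) = ax + by + cz, the family consists of a single contraction T(x) = 0.5x + 1 with fixed point 2, and the coefficient sequences, error terms and inner solver are ad hoc choices for this sketch only (they are not tuned to the summability conditions of the theorem).

```python
def T(x, power):
    # T^power for the illustrative contraction T(x) = 0.5*x + 1 (fixed point 2)
    for _ in range(power):
        x = 0.5 * x + 1.0
    return x

def implicit_step(x_prev, k, a, b, c, u, inner_iters=60):
    # solve x = a*x_prev + b*T^k(x) + c*u by successive substitution
    x = x_prev
    for _ in range(inner_iters):
        x = a * x_prev + b * T(x, k) + c * u
    return x

x, N = 10.0, 1
for n in range(1, 60):
    k = (n - 1) // N + 1          # n = (k - 1)*N + i with a single mapping (i = 1)
    b = 0.4                        # beta_n kept inside [s, 1 - s]
    c = 1.0 / (n + 1) ** 2         # gamma_n summable
    a = 1.0 - b - c
    x = implicit_step(x, k, a, b, c, u=1.0)
print(x)   # approaches the fixed point 2
```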
Theorem 3.2. Let (E, d, W) be a complete convex metric space. Let {T_i : i ∈ I} be a finite family of asymptotically quasi-nonexpansive type mappings from E into E satisfying the following condition: there exist constants L_i > 0 and a_i > 0 such that
d(T_i x, p_i) ≤ L_i d(x, p_i)^{a_i}    (3.1)
for all x ∈ E, p_i ∈ F(T_i), i ∈ I. Suppose that F = ∩_{i=1}^N F(T_i) ≠ ∅, let x_0 ∈ E be any given point, and let {x_n} be defined by the implicit iterative scheme with errors (2.2) satisfying the following conditions:
(i) s ≤ β_n ≤ 1 − s for some s ∈ (0, 1/2),
(ii) ∑_{n=0}^∞ β_n < ∞,
(iii) ∑_{n=0}^∞ γ_n < ∞.
Then {x_n} converges strongly to a common fixed point in F if and only if
lim inf_{n→∞} D_d(x_n, F) = 0,    (3.2)
where D_d(x, F) = inf_{y∈F} d(x, y).

In order to prove Theorem 3.2, we first prove the following important lemma.

Lemma 3.3. Suppose that all assumptions in Theorem 3.2 are satisfied. Then for any given ε > 0, there exist a positive integer n_0 and M > 0 such that
(1) d(x_n, p) ≤ d(x_{n−1}, p) + M(β_n + γ_n/s),
(2) d(x_{n+m}, p) ≤ d(x_n, p) + M(∑_{l=n+1}^{n+m} β_l + (1/s)∑_{l=n+1}^{n+m} γ_l),
(3) lim_{n→∞} D_d(x_n, F) exists,
for any p ∈ F and n ≥ n_0, where M = max{ ε/(5s), sup_{n≥0} d(u_n, p) }.

Proof. (1) For any p ∈ F, from (2.2), where n = (k − 1)N + i and T_n = T_{n (mod N)} = T_i, i ∈ I,
d(x_n, p) = d(W(x_{n−1}, T_i^k x_n, u_n; α_n, β_n, γ_n), p)
≤ α_n d(x_{n−1}, p) + β_n d(T_i^k x_n, p) + γ_n d(u_n, p)
≤ α_n d(x_{n−1}, p) + β_n {d(T_i^k x_n, p) − d(x_n, p)} + β_n d(x_n, p) + γ_n d(u_n, p).    (3.3)
From (2.1), for any given ε > 0, there exists a positive integer n_0 such that, for n ≥ n_0 with k > n_0/N + 1, we have
sup_{x∈E} {d(T_i^n x, p) − d(x, p)} ≤ ε/6, ∀ p ∈ F.    (3.4)
Since {x_n} ⊂ E,
d(T_i^n x_n, p) − d(x_n, p) ≤ ε/5 (n ≥ n_0) (i ∈ I).
Substituting (3.4) into (3.3) and simplifying, we have
d(x_n, p) ≤ α_n d(x_{n−1}, p) + (ε/5)·β_n + β_n d(x_n, p) + γ_n d(u_n, p),
for all n ≥ n_0, p ∈ F. Then we have
d(x_n, p) ≤ (α_n/(1 − β_n)) d(x_{n−1}, p) + (ε/5)·(β_n/(1 − β_n)) + (γ_n/(1 − β_n)) d(u_n, p)
≤ d(x_{n−1}, p) + (ε/(5s))·β_n + (γ_n/s) d(u_n, p),    (3.5)
for p ∈ F and for any n ≥ n_0. Here, let M = max{ ε/(5s), sup_{n≥0} d(u_n, p) }. Then we have
d(x_n, p) ≤ d(x_{n−1}, p) + M(β_n + γ_n/s).

(2) It follows from conclusion (1) that for any m ≥ 1,
d(x_{n+m}, p) ≤ d(x_{n+m−1}, p) + M(β_{n+m} + γ_{n+m}/s)
≤ d(x_{n+m−2}, p) + M(β_{n+m−1} + γ_{n+m−1}/s) + M(β_{n+m} + γ_{n+m}/s)
···
≤ d(x_n, p) + M(∑_{l=n+1}^{n+m} β_l + (1/s)∑_{l=n+1}^{n+m} γ_l),
for any p ∈ F and n ≥ n_0.

(3) Again, from conclusion (1), it follows that
inf_{q∈F} d(x_n, q) ≤ inf_{q∈F} d(x_{n−1}, q) + Mβ_n + (γ_n/s) d(u_n, p)
for any p ∈ F and n ≥ n_0. Therefore we have
D_d(x_n, F) ≤ D_d(x_{n−1}, F) + Mβ_n + (γ_n/s) d(u_n, p)
for any p ∈ F and n ≥ n_0. Since {u_n} is a bounded sequence in E, {d(u_n, p)} is also bounded in [0, ∞), and by conditions (ii) and (iii) in Theorem 3.2,
∑_{n=0}^∞ ( Mβ_n + (γ_n/s) d(u_n, p) ) < ∞.
Hence, from Lemma 3.1, we know that lim_{n→∞} D_d(x_n, F) exists. □
Proof of Theorem 3.2. The necessity is obvious, so we prove the sufficiency. Suppose that condition (3.2) is satisfied. Then, from Lemma 3.3 (3), we have
lim_{n→∞} D_d(x_n, F) = 0.    (3.6)
First, we prove that {x_n} is Cauchy in E. In fact, by conditions (ii), (iii) and (3.6), for given ε > 0 there exists a positive integer n_1 ≥ n_0 (where n_0 is the positive integer appearing in Lemma 3.3) such that, for any n ≥ n_1,
D_d(x_n, F) < ε/7,    (3.7)
∑_{i=1}^N ∑_{k=[(n_1+1)/N]}^∞ β_{ki} < ε/(3M)    (3.8)
and
∑_{i=1}^N ∑_{k=[(n_1+1)/N]}^∞ γ_{ki} < εs/(3M).    (3.9)
From (3.7), for each n ≥ n_1 there exists p* ∈ F such that d(x_n, p*) < ε/6. Hence, by Lemma 3.3 (2), for any m > 0 and for any n ≥ n_1 ≥ n_0, we have
d(x_{n+m}, x_n) ≤ d(x_{n+m}, p*) + d(x_n, p*)
≤ 2d(x_n, p*) + M ∑_{i=1}^N ∑_{k=[(n_1+1)/N]}^∞ β_{ki} + (M/s) ∑_{i=1}^N ∑_{k=[(n_1+1)/N]}^∞ γ_{ki},    (3.10)
for any m ≥ 1, where M is the positive constant appearing in Lemma 3.3. Therefore, from (3.7)-(3.9), for any n ≥ n_1 ≥ n_0, we have
d(x_{n+m}, x_n) < ε/3 + ε/3 + ε/3 = ε (m ≥ 1).
This implies that {x_n} is Cauchy in E. Since E is a complete metric space, there exists q ∈ E such that x_n → q as n → ∞.

Now, we prove that q is a common fixed point of the T_i in E. Since lim_{n→∞} x_n = q and lim_{n→∞} D_d(x_n, F) = 0, for given ε > 0 there exists a positive integer n_2 ≥ n_1 ≥ n_0 such that, for n ≥ n_2,
d(x_n, q) < ε/5 and D_d(x_n, F) < ε/7.    (3.11)
It follows from the second inequality in (3.11) that there exists q* ∈ F such that
d(x_{n_2}, q*) < ε/5.    (3.12)
Moreover, it follows from (3.4) that for any n ≥ n_2 we have
d(T_i^n q, q*) − d(q, q*) ≤ ε/5.    (3.13)
Therefore, from (3.11)-(3.13), for any n ≥ n_2,
d(T_i^n q, q) ≤ d(T_i^n q, q*) + d(q, q*) + d(q, q*) − d(q, q*)
≤ d(T_i^n q, q*) − d(q, q*) + 2{d(q, x_{n_2}) + d(x_{n_2}, q*)}
≤ ε/5 + 2(ε/5 + ε/5) = ε.
This implies that T_i^n q → q as n → ∞. Again, since
d(T_i^n q, T_i q) ≤ (d(T_i^n q, q*) − d(q, q*)) + d(q, q*) + d(T_i q, q*)
for any n ≥ n_2, by condition (3.1) and (3.11)-(3.13) we have
d(T_i^n q, T_i q) ≤ ε/5 + d(q, q*) + L_i d(q, q*)^{a_i}
≤ ε/5 + d(q, x_{n_2}) + d(x_{n_2}, q*) + L_i (d(q, x_{n_2}) + d(x_{n_2}, q*))^{a_i}
< 3ε/5 + L_i (2ε/5)^{a_i}, ∀ i ∈ I.
This shows that T_i^n q → T_i q as n → ∞. By the uniqueness of limits, we have q = T_i q, that is, q is a common fixed point of T_i for all i ∈ I. □

If u_n = 0 in Theorem 3.2, we easily obtain the following theorem.

Theorem 3.4. Let (E, d, W) be a complete convex metric space. Let {T_i : i ∈ I} be a finite family of asymptotically quasi-nonexpansive type mappings from E into E satisfying the following condition: there exist constants L_i > 0 and a_i > 0 such that d(T_i x, p_i) ≤ L_i d(x, p_i)^{a_i} for all x ∈ E, p_i ∈ F(T_i), i ∈ I. Suppose that F ≠ ∅ and that x_0 ∈ E, {β_n} ⊂ (s, 1 − s) for some s ∈ (0, 1/2) and ∑_{n=0}^∞ β_n < ∞. Then the implicit iterative scheme {x_n} generated by (2.3) converges to a common fixed point in F if and only if
lim inf_{n→∞} D_d(x_n, F) = 0.
Remark 3.5. We note the following:
(1) Theorem 3.2 extends the main theorem of Kim-Kim-Kim [15] to the case of asymptotically quasi-nonexpansive type mappings.
(2) Theorem 3.2 not only generalizes and improves the corresponding results of Chang et al. [1], Liu [17]-[18], Petryshyn-Williamson [21], Tan-Xu [25], Ghosh-Debnath [10] and Chang [4]-[6], but also contains the corresponding results in [4]-[6], [10], [15]-[19], [21], [24] as special cases.

References
[1] S. S. Chang, J. K. Kim and S. M. Kang, Approximating fixed points of asymptotically quasi-nonexpansive type mappings by the Ishikawa iterative sequences with mixed errors, Dynamic Systems and Appl., 13 (2004), 179–186.
[2] S. S. Chang, J. K. Kim and D. S. Jin, Iterative sequences with errors for asymptotically quasi-nonexpansive type mappings in convex metric spaces, Archives Ineq. and Appl., 2 (2004), 365–374.
[3] S. S. Chang and J. K. Kim, Convergence theorems of the Ishikawa type iterative sequences with errors for generalized quasi-contractive mappings in convex metric spaces, Appl. Math. Letters, 16 (2003), 535–542.
[4] S. S. Chang, Some results for asymptotically pseudo-contractive mappings and asymptotically nonexpansive mappings, Proc. Amer. Math. Soc., 129 (2001), 845–853.
[5] S. S. Chang, Iterative approximation problem of fixed points for asymptotically nonexpansive mappings in Banach spaces, Acta Math. Appl., 24 (2001), 236–241.
[6] S. S. Chang, On the approximating problem of fixed points for asymptotically nonexpansive mappings, Indian J. Pure and Appl., 32 (2001), 1–11.
[7] S. S. Chang, Y. J. Cho, J. K. Kim and K. H. Kim, Iterative approximation of fixed points for asymptotically nonexpansive type mappings in Banach spaces, PanAmer. Math. J., 11 (2001), 53–63.
[8] X. P. Ding, Iterative processes for nonlinear mappings in convex metric spaces, J. Math. Anal. Appl., 132 (1988), 114–122.
[9] W. G. Dotson, Jr., On the Mann iterative process, Trans. Amer. Math. Soc., 149 (1970), 65–73.
[10] M. K. Ghosh and L. Debnath, Convergence of Ishikawa iterates of quasi-nonexpansive mappings, J. Math. Anal. Appl., 207 (1997), 96–103.
[11] S. Ishikawa, Fixed point and iteration of a nonexpansive mapping in a Banach spaces, Proc. Amer. Math. Soc., 73 (1976), 65–71. [12] S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc., 44 (1974), 147–150. [13] J. K. Kim, K. H. Kim and K. S. Kim, Convergence theorems of modified three-step iterative sequences with mixed errors for asymptotically quasi-nonexpansive mappings in Banach spaces, PanAmer. Math. Jour., 14 (2004), 45–54. [14] J. K. Kim, K. H. Kim and K. S. Kim, Three-step iterative sequences with errors for asymptotically quasi-nonexpansive mappings in convex metric spaces, Nonlinear Anal. and Convex Anal., Research Institute for Mathematical Sciences Kyoto University, Kyoto, Japan, 1365 (2004), 156–165. [15] J. K. Kim, S. M. Kim and K. S. Kim, Convergence theorems of implicit iteration process for a finite family of asymptotically quasi-nonexpansive mappings in convex metric spaces, Nonlinear Anal. and Convex Anal., Research Institute for Mathematical Sciences Kyoto University, Kyoto, Japan, 1484 (2006), 40–51. [16] J. Li, J. K. Kim and N. J. Huang, Iteration scheme for a pair of simultaneously asymptotically quasi-nonexpansive type mappings in Banach spaces, Taiwanese J. of Math., 10 (2006), 1419-1429. [17] Q. H. Liu, Iteration sequences for asymptotically quasi-nonexpansive mappings with error member of uniformly convex Banach spaces, J. Math. Anal. Appl., 266 (2002), 468–471. [18] Q. H. Liu, Iterative sequences for asymptotically quasi-nonexpansive mappings, J. Math. Anal. Appl., 259 (2001), 1–7. [19] Q. H. Liu, Iterative sequences for asymptotically quasi-nonexpansive mappings with error member, J. Math. Anal. Appl., 259 (2001), 18–24. [20] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 4 (1953), 506–510. [21] W. V. Petryshyn and T. E. Williamson, Strong and weak convergence of the sequence of successive approximations for asymptotically quasi-nonexpansive mappings, J. Math. Anal. Appl., 43 (1973), 459–497. [22] J. Schu, Iterative construction of fixed points of asymptotically nonexpansive mappings, J. Math. Anal. Appl., 158 (1991), 407–413. [23] Z. H. Sun, Strong convergence of an implicit iteration process for a finite family of asymptotically quasi-nonexpansive mappings, J. Math. Anal. Appl., 286 (2003), 351–358. [24] W. Takahashi, A convexity in metric space and nonexpansive mappings I, Kodai Math. Sem. Rep., 22 (1970), 142–149. [25] K. K. Tan and H. K. Xu, Fixed point iteration processes for asymptotically nonexpansive mappings, Proc. Amer. Math. Soc., 122 (1994), 733–739. [26] K. K. Tan and H. K. Xu, Iterative solutions to nonlinear equations of strongly accretive operators in Banach spaces, J. Math. Anal. Appl., 178 (1993), 9–21. [27] K. K. Tan and H. K. Xu, Approximating fixed point of nonexpansive mappings by the Ishikawa iterative process, J. Math. Anal. Appl., 178 (1993), 301–308. [28] R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math., 58 (1992), 486–491. [29] H. K. Xu and R. G. Ori, An implicit iteration process for nonexpansive mappings, Numer. Funct. Anal. Optim., 22 (2001), 767–773. [30] H. K. Xu, A note on the Ishikawa iteration scheme, J. Math. Anal. Appl., 167 (1992), 582–587. [31] L. C. Zeng, A note on approximating fixed points of nonexpansive mapping by the Ishikawa iterative processes, J. Math. Anal. Appl., 226 (1998), 245–250.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 1096-1102, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Operational Formula For Jacobi-Pineiro Polynomials
Cem Kaanoğlu
Cyprus International University, Lefkoşa, Mersin 10, Turkey. E-Mail: [email protected]

Abstract. The main purpose of this paper is to develop certain operational formulas for Jacobi-Pineiro polynomials and to find some properties of these polynomials with the help of operational formulas.

Key words: Jacobi-Pineiro polynomials, Generating functions, Operational formula, Recurrence relation.
2000 Mathematics Subject Classification. 33C45, 42C05

1 Introduction

The Jacobi-Pineiro polynomials P_n^{(α_0, α)} of degree |n| = n_1 + ··· + n_r, for the multi-index n = (n_1, ..., n_r) ∈ N^r and α = (α_1, ..., α_r), are the classical multiple orthogonal polynomials associated with an AT system consisting of Jacobi weights on [0, 1]. The polynomials P_n^{(α_0, α)} are defined by the orthogonality conditions [1]
∫_0^1 P_n^{(α_0, α)}(x) x^{α_i} (1 − x)^{α_0} x^k dx = 0, k = 0, 1, ..., n_i − 1, i = 1, 2, ..., r,
and the Rodrigues formula for the Jacobi-Pineiro polynomials P_n^{(α_0, α)} is given by
(−1)^{|n|} ∏_{j=1}^r (|n| + α_0 + α_j + 1)_{n_j} P_n^{(α_0, α)}(x) = (1 − x)^{−α_0} ∏_{j=1}^r [ x^{−α_j} D^{n_j} x^{n_j + α_j} ] (1 − x)^{α_0 + |n|}.
In [1], various properties of these polynomials have been investigated via the Rodrigues formula. There are several studies related to the generalization of known special functions via the Rodrigues formula [2], [3], [4], [5], [8], [9]. In the recent papers [6] and [7], Rodrigues type extensions of multiple Laguerre polynomials and multiple Hermite polynomials were introduced, and operational formulas, generating functions and some interesting recurrence relations were proved. The main purpose of this paper is to develop certain operational formulas and a generating function for the extended Jacobi-Pineiro polynomials R_{n,m}^{(α_0, α_1, α_2)}(x; p) and to find some properties of these polynomials with the help of operational formulas.

The following definition provides a Rodrigues type generalization of the Jacobi-Pineiro polynomials P_{n,m}^{(α_0, α_1, α_2)}(x):
(−1)^{n+m} (n + m + α_0 + α_1 + 1)_n (n + m + α_0 + α_2 + 1)_m R_{n,m}^{(α_0, α_1, α_2)}(x; p)
= (1 − x)^{−α_0} x^{−α_1} e^{−px} D^n [ x^{α_1 − α_2 + n} D^m [ x^{α_2 + m} (1 − x)^{α_0 + n + m} e^{px} ] ],    (1)
where α_0, α_1, α_2 > −1, α_1 − α_2 ∉ Z and D = d/dx. It should be noticed that, in the particular case p = 0, equation (1) gives the Rodrigues formula of the Jacobi-Pineiro polynomials P_{n,m}^{(α_0, α_1, α_2)}(x) of degree n + m.
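As a quick consistency check of the Rodrigues-type definition (1) as reconstructed above, the following Python/SymPy sketch (an illustration added here; the function name, parameter values and normalisation are assumptions of this sketch, not taken from the paper) evaluates the right-hand side of (1) symbolically. For p = 0 the result should simplify to a polynomial of degree n + m.

```python
import sympy as sp

x = sp.symbols('x')

def R(n, m, a0, a1, a2, p=0):
    """Right-hand side of the Rodrigues-type formula (1), divided by the Pochhammer
    prefactors of the left-hand side (a sketch based on the reconstruction above)."""
    inner = x**(a2 + m) * (1 - x)**(a0 + n + m) * sp.exp(p * x)
    expr = sp.diff(x**(a1 - a2 + n) * sp.diff(inner, x, m), x, n)
    expr = (1 - x)**(-a0) * x**(-a1) * sp.exp(-p * x) * expr
    norm = (-1)**(n + m) * sp.rf(n + m + a0 + a1 + 1, n) * sp.rf(n + m + a0 + a2 + 1, m)
    return sp.simplify(expr / norm)

# For p = 0 this should reduce to the Jacobi-Pineiro polynomial of degree n + m = 2.
print(sp.expand(R(1, 1, sp.Rational(1, 2), sp.Rational(1, 3), sp.Rational(1, 4))))
```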
Main Results
In this section we derive the explicit formula and some operational formulas for the extended Jacobi( 0; 1; 2) Pineiro polynomials Rn;m (x; p). From the Rodrigues formula (1) and Leibniz’formula we get ( 1)n+m (n + m +
+
0
= (1
x)
0
x
1
e
px
= (1
x)
0
x
1
e
px
+ 1)n (n + m +
1 n
D x
1
Dn x
1
2 +n
m
0
+
( 0; + 1)m Rn;m
2
2 +m
1;
2)
(x; p)
0 +n+m px
D x (1 x) e m X m 2 +n Dk (x 2 +m epx )Dm k
k
(1
0 +n+m
x)
k=0
= m!(1
x)
0
x
1
px
e
m X k X
+m j
2
k=0 j=0
D
n
x
1 +m+n
j px
e (1
x)
0 +n+k
0
+n+m pk j ( 1)m m k (k j)!
k
:
Applying the Leibniz’ formula one more time to the last part of the above equation gives the explicit ( 0; 1; 2) (x; p) as follows: formula for the polynomials Rn;m ( 1)n+m (n + m + = m!n!
0
+
1
m X k X n X i X
2
k=0 j=0 i=0 l=0
p (k
k+i j l
j)!(i
l)!
( 1)m
+ 1)n (n + m + +m j
0
x
;
+
2
( 0; + 1)m Rn;m
+n+m m k
k+n i m+n j l
(
0
(1
;
0
1;
+n+k n i
2)
1
(x; p)
(2)
+n+m l
j
x)k+i : (
)
;
;
)
0 1 2 (x; 0) = Pn;m0 1 2 (x); in particular case p = 0 the Remark 1 It should be noticed that, since Rn;m explicit form (2) represents the explicit representation of Jacobi-Pineiro polynomials [1].
(
;
0 In order to …nd an operational formula for the polynomials Rn;m operational formula.
1;
2)
(x; p); we will prove the following
Lemma 2 For the su¢ ciently di¤ erentiable function Y of x, we have n Y
x2 )D
[(x
( + + 2j)x + p(x
x2 ) + + j]Y = (1
x)
x
e
px
Dn [x
+n
(1
x)
+n px
e Y ]: (3)
j=1
Proof. To prove this identity we use the method of induction. It is obvious that for n = 1 the identity (3) is correct. Let for n it is true. If we consider x2 )D
Y = [(x
( +
+ 2n + 2)x + p(x
x2 ) +
+ n + 1]Y
in equation (3), we get n Y
x2 )D
[(x
( +
+ 2j)x + p(x
x2 ) +
+ j]
(4)
j=1
[(x
= (1
x2 )D x)
( +
x
e
px
x2 ) +
+ 2n + 2)x + p(x n
D [x
+n
(1
x)
+n px
+ n + 1]Y 2
( +
+n 1 px
x2 )D
e [(x
x )D
+ 2n + 2)x + p(x
x2 ) +
+ n + 1]Y ]:
Using the identity D[x
+n
(1
x)
+n px
e Y ] = (1
x)
+n 1
x
e [(x 2
( +
+ 2n)x + p(x
x2 ) +
+ n]Y
(4) becomes n+1 Y
x2 )D
[(x
x2 ) + + j]Y = (1
( + + 2j)x + p(x
x)
x
px
e
Dn+1 [x
+n+1
(1
x)
+n+1 px
e Y ]:
j=1
Then the proof is completed. ( 0; Now, we can …nd an operational formula for the polynomials Rn;m (3). For convenience, we will use the notation (x) = (1
x)
0
x
1
px
e
Dn x
Taking (3) into the consideration with n = m; (x) = (1
x) m Y
0
x
1
px
e
x2 )D
[(x
=
Dn [x
(
2 +n
1
+ n;
0 1 +n
(1
+n+
0
Dm x
2
2 +m
=
(1
2)
x)
(x; p) by using the identity
0 +n+m
epx Y:
we get
2
0 +n
x)
1;
epx
(5) x2 ) +
+ 2j)x + p(x
2
+ j]Y ]
j=1
and then applying (3) in (5) one more time we get …rst equality as follows: (x) =
n Y
[(x
x2 )D
(
0
+
[(x
x2 )D
(
0
+n+
k=1 m Y
1
x2 ) +
+ 2k)x + p(x
2
1
+ k]
x2 ) +
+ 2j)x + p(x
2
(6)
+ j]Y:
j=1
On the other hand, with the help of Leibniz’formula the second equality can be found in the following form: (x) = m!n!
(7) r2 X m m k X n nX i X Xr1 X
+m j
2
r1 =0 k=0 j=0 r2 =0 i=0 l=0
pk+i j (k j)!(i
l
l)!
( 1)m
0
m
r1 k+n r2 i m+n j l
x
+n+m r1 k x)k+i
(1
0
+ n + r1 + k n r2 i
1
(
(
;
0 Theorem 3 For the extended Jacobi-Pineiro polynomials Rn;m operational formula:
r1 =0 r2 =0
m r1
n ( 1)n+m r2
(n + m + 0 + n Y = [(x x2 )D k=1 m Y
[(x
x2 )D
2
r1 r2 r1 +r2
x
+ r1 + 1)m
(
0
+
(
0
+n+
1
(n + m +
( 0 +r1 +r2 ; r1 Rn r2 ;m r1
+ 2k)x + p(x
2
2)
;
1
where Y is su¢ ciently di¤ erentiable function with respect to x:
1;
2)
(x; p) is obtained
(x; p) we have the following general
+
0
1
+ r1 + r2 + 1)n
2 +r1 )
2
r2
(x; p)Dr1 +r2 (Y )
+ k]
x2 ) +
j=1
3
1;
1 +r1 +r2 ;
x2 ) +
+ 2j)x + p(x
j
Dr1 +r2 Y : r1 !r2 !
0 The general operational formula for the extended Jacobi-Pineiro polynomials Rn;m by combining (2), (6) and (7).
m X n X
+n+m l
+ j] Y;
(8)
Setting p = 0 in Theorem 3 we get: (
;
1;
Theorem 4 For the Jacobi-Pineiro polynomial Pn;m0 formula: m X n X
m r1
r1 =0 r2 =0
n ( 1)n+m r2
(n + m + 0 + n Y = [(x x2 )D
2
r1 r2 r1 +r2
x
( 0 +r1 +r2 ; r1 Pn r2 ;m r1
+ r1 + 1)m
(
0
+
1
(n + m +
+ 2k)x +
1
+ k]
2)
0
+
1 +r1 +r2 ;
m Y
we have the following general operational
+ r1 + r2 + 1)n
1
2 +r1 )
(x)Dr1 +r2 (Y )
x2 )D
[(x
(9)
r2
(
0
+n+
+ 2j)x +
2
2
+ j] Y;
j=1
k=1
where Y is su¢ ciently di¤ erentiable function with respect to x: Setting Y = 1 in (8), we obtain the following theorem. (
;
1;
0 Theorem 5 For the extended Jacobi-Pineiro polynomials Rn;m tional formula:
( 1)n+m (n + m + 0 + 1 + 1)n (n + m + n Y [(x x2 )D ( 0 + 1 + 2k)x + p(x = k=1 m Y
x2 )D
[(x
(
0
+n+
2
+
0
2)
(x; p) we have the following opera-
( 0; + 1)m Rn;m
2
x2 ) +
1
2)
(x; p)
(10)
+ k]
x2 ) +
+ 2j)x + p(x
1;
2
+ j]:1 :
j=1
Moreover for each suitable choice of the su¢ ciently di¤erentiable function Y; various operational ( 0; 1; 2) (x; p): formulas can be proven for the polynomials Rn;m
3
Applications of Operational Formula (
;
0 In this section we will derive some recurrence relations for the polynomials Rn;m If we consider the operational formula (8) we get
n Y
x2 )D
[(x
k=1 m Y
(
x2 )D
[(x
0
+
(
0
1
+ 2k)x + p(x
+n+
2
x2 ) +
1
2)
(x; p).
+ k]
(11)
x2 ) +
+ 2j)x + p(x
1;
2
+ j] x
j=1
= ( 1)n+m x(n + m +
+ ( 1)n+m
1
n+m 1
+ ( 1)
0
+
m!x(n + m + n!x(n + m +
1 0 0
+ 1)n (n + m + +
+
1 1
0
+
+ 2)n (n + m +
+ 2)n
1 (n
( 0; + 1)m Rn;m
2 0
+m+
+ 0
+
2
2)
(x; p)
( 0 +1; 2)m 1 Rn;m 1
+
2
1;
+
( +1; 1)m Rn 01;m
1 +1; 1 +1;
2 +1) 2)
(x; p)
(x; p):
By the equation (6) we can easily obtain the following two relations: n Y
[(x
k=1 m Y
[(x
x2 )D x2 )D
(
0
+
(
0
1
x2 ) +
+ 2k)x + p(x
+n+
2
+ 2j)x + p(x
1
+ k]
x2 ) +
(12)
2
+ j] x
j=1
= ( 1)n+m x(n + m +
0
+
1
+ 2)n (n + m + 4
0
+
2
( 0; + 2)m Rn;m
1 +1;
2 +1)
(x; p);
and n Y
x2 )D
[(x
k=1 m Y
(
x2 )D
[(x
0
+
(
0
1
x2 ) +
+ 2k)x + p(x
+n+
2
+ 2j)x + p(x
1
+ k]
x2 ) +
(13)
2
+ j] (1
x)
j=1
= ( 1)n+m (1
x)(n + m +
0
+
1
+ 2)n (n + m +
0
+
2
( 0 +1; + 2)m Rn;m
1;
2)
(x; p);
respectively. Equating (11) and (12) we have the following relation: ( 0; Rn;m
1;
2)
(x; p)
(n + m + 0 + 1 + n + 1) ( 0; [(n + m + 0 + 2 + m + 1)Rn;m (n + m + 0 + 1 + 1)(n + m + 0 + 2 + 1) (n + m + 0 + 2 + 1) ( 0 +1; 1 +1; 2 +1) ( +1; +1; 2 ) + m!Rn;m (x; p) + n!Rn 01;m 1 (x; p)]: 1 (n + m + 0 + 1 + n + 1) =
1 +1;
2 +1)
(x; p)
Also, adding (11) and (13), we have the following recurrence relation (1
( 0; x)Rn;m
1;
2)
(x; p)
(n + m + 0 + 1 + n + 1) ( 0 +1; = [(n + m + 0 + 2 + m + 1)(1 x)Rn;m (n + m + 0 + 1 + 1)(n + m + 0 + 2 + 1) (n + m + 0 + 2 + 1) ( 0 +1; 1 +1; 2 +1) ( +1; +1; 2 ) m!xRn;m (x; p) n!xRn 01;m 1 (x; p)]: 1 (n + m + 0 + 1 + n + 1)
1;
2)
(x; p)
Furthermore, combining (12) and (13) gives ( 0; Rn;m
1;
2)
(x; p) =
( 0; xRn;m
1 +1;
(n + m + 0 + (n + m +
2 +1)
(x; p) + (1
+ n + 1)(n + m + + 1 + 1)(n + m +
1 0
( 0 +1; x)Rn;m
1;
2)
0 0
+ +
+ m + 1) + 1)
2 2
(x; p) :
Remark 6 Setting p = 0 in the above recurrence relations we can get the corresponding relations for the ( ; ; ) Jacobi-Pineiro polynomial Pn;m0 1 2 :
4
Generating Function (
;
0 In this section we obtain generating function for the polynomials Rn;m following theorem.
(
;
0 Theorem 7 Let the polynomials Rn;m
1 X 1 X
( 1)n+m (
0
+
1
1;
+ 1)n (
2)
0
1;
2)
(x; p). We start with the
(x; p) be de…ned by (1). Then we have
+
2
( 0 + 1)m Rn;m
n m;
1;
2)
(x; p)
t1 1 x
(1
(1 x) t1 )1+ 1 (1
x t1 )(1
0
t2 )1+
2
1
(1
0
t2 )
e
px(1 (1 t1 )(1 t2 )) (1 t1 )(1 t2 )
m
t2 1 x
n!
n=0 m=0
=
n
(14)
m!
:
Proof. Considering
=
1 X 1 X
( 1)n+m (
0
+
1
+ 1)n (
0
+
2
( 0 + 1)m Rn;m
n=0 m=0
5
n m;
1;
2)
(x; p)
t1 1 x
n!
n
t2 1 x
m!
m
then =
1 X 1 X
(1
x)
0
x
1
px
e
Dn x
n=0 m=0
= (1
x)
0
x
1
e
1 X
px
Dn x
1
x)
0
x
1
e
1 X
px
Dn x
1
=
(1
x)
0
1
x t2
1
e
2 +m
x) 0 epx
(1
tn1 tm 2 n! m!
1 1 Z m X 2 +m 0 pz t tn m! z (1 z) e 2 +n @ dz 2 A 1 m+1 2 i m=0 (z x) m! n! C1 0 1 I 1 m 2 0 pz X 1 z (1 z) e zt tn 2 2 +n @ dz A 1 2 i z x z x n! m=0
n=0 1 px X
Dm x
0
n=0
= (1
2 +n
1
C1
Dn x
2 +n
1
(
n=0
x
1
t2
) 2 (1
x
1
t2
) 0e1
px t2
tn1 : n!
Applying Cauchy integral gives =
(1
x)
0
1
x t2
1
px
e
1 2 i
I
z
1
2
( 1 zt2 ) 2 (1 z
z 1 t2 )
0
e1
pz t2
x
n=0
C2
=
(1
(1 x) t1 )1+ 1 (1
t2 )1+
2
1
0
x t1 )(1
0
(1
t2 )
1 X
e
zt1 z x
n
dz
px(1 (1 t1 )(1 t2 )) (1 t1 )(1 t2 )
where C1 is a circle in the complex z plane along the negative real axis, (centered at z = 1 xt2 ) with radius > 0; which is described in the positive direction and the closed contour C2 is a circle in the complex plane, cut along the negative real axis and it is a circle (centered at = 1 xt1 ) of su¢ ciently small radius. (
;
Remark 8 Letting p = 0 in (14), we get the generating function for Jacobi-Pineiro polynomials Pn;m0 Furthermore, using the generating relation (14) gives the following recurrence relation. (
;
0 Theorem 9 For the polynomials Rn;m
( 1)n+m ( 0 + 0 + 1 + 1 + 2)n ( n X m X n m = ( 1)n+m ( 0 + r l r=0 l=0 p ( r ( n r m l; 1 ; 2 ) Rn 0r;m l (x; )Rr;l0 2
1;
2)
(x; p); we have
0
+
1
+ 1)n
l;
1; 2)
0
+
2
+
r( 0
p (x; ): 2
6
+
2
( 0+ + 2)m Rn;m 2
+ 1)m l (
0
0
+
n m;
1
1 + 1 +1;
+ 1)r (
0
+
2 + 2 +1)
2
(x; p)
+ 1)l
1;
2)
.
Proof. Taking (14) into consideration we have 1 X 1 X
( 1)n+m (
+
0
0
+
1
+
2+
2 +1)
+ 2)n (
1
0
+
0
+
2
+
+ 2)m
2
n=0 m=0 ( 0+ Rn;m
= =
=
n m;
0
(1 )2+
1+
x) 1+
(1
t1
(1
(1 x) t1 )1+ 1 (1
(1
(1 x) t1 )1+ 1 (1
1 1 X X
1 +1;
1
0
(1
(x; p)
t2
)2+
2+
1
(1
(1 x t1 )(1
(1
x t1 )(1
2
t2 )1+
1
2
0
1
t2 )1+ 0
2
+
1
+ 1)n (
0
n
+
2
m
t2 1 x
n!
m! 0+ 0
x t1 )(1
0
0
( 1)n+m (
t1 1 x
e
t2 ) 0
e
t2 ) 0
e
t2 )
p x(1 (1 t1 )(1 t2 )) 2 (1 t1 )(1 t2 )
p x(1 (1 t1 )(1 t2 )) 2 (1 t1 )(1 t2 )
( 0 + 1)m Rn;m
n m;
1;
n=0 m=0 1 X 1 X
( 1)r+l (
0+
1 + 1)r (
0+
(
r l;
0 2 + 1)l Rr;l
1; 2)
r=0 l=0
Taking n ! n =
r and m ! m
1 X 1 X n X m X
px(1 (1 t1 )(1 t2 )) (1 t1 )(1 t2 )
p (x; ) 2
0
+
1
t1 1 x
r!
0
+
m
m!
t2 1 x
l!
l
:
l; we get
( 1)n+m (
+ 1)r (
t2 1 x
n! r
0
+
1
+ 1)n
r( 0
+
(
2
+ 1)m l Rn
(n r) (m l); r;m l
0
n=0 m=0 r=0 l=0
(
n
t1 1 x
p 2) (x; ) 2
(
2
+ 1)l Rr;l0
r l;
1;
t1 x
1 p 2) (x; ) 2 r!(n
n
t2 1 x
r)! l!(m
1;
2)
p (x; ) 2
m
l)!
:
Hence the result.
References
[1] W. V. Assche, E. Coussement, Some classical multiple orthogonal polynomials, J. Comput. Appl. Math. 127 (2001) 317-347.
[2] S. K. Chatterjea, A generalization of Laguerre polynomials, Collect. Math. 15 (1963) 285-292.
[3] S. K. Chatterjea, On a generalization of Laguerre polynomials, Rend. Semin. Mat. Univ. Padova 34 (1964) 180-190.
[4] S. K. Chatterjea, H. M. Srivastava, A unified presentation of certain operational formulas for the Jacobi and related polynomials, Appl. Math. Comp. 58 (1993) 77-95.
[5] H. W. Gould, A. T. Hopper, Operational formulas connected with two generalizations of Hermite polynomials, Duke Math. J. 29 (1962) 51-64.
[6] C. Kaanoğlu, M. A. Özarslan, Some properties of generalized multiple Hermite polynomials, J. Comp. Appl. Math., in press.
[7] M. A. Özarslan, C. Kaanoğlu, Some generalization of multiple Laguerre polynomials via Rodrigues formula, Ars Combinatoria, in press.
[8] J. P. Singhal, C. M. Joshi, On the unification of generalized Hermite and Laguerre polynomials, Indian J. Pure Appl. Math. 13(8) (1982) 904-906.
[9] H. M. Srivastava, J. P. Singhal, A class of polynomials defined by generalized Rodrigues' formula, Ann. Mat. Pura Appl. 90 (1971) 75-85.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 1103-1111, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
FIXED POINTS AND STABILITY OF ADDITIVE FUNCTIONAL EQUATIONS ON THE BANACH ALGEBRAS
YEOL JE CHO, JUNG IM KANG, AND REZA SAADATI

Abstract. Using fixed point methods, we prove the generalized Hyers-Ulam stability of homomorphisms in Banach algebras and of derivations on Banach algebras for the additive functional equation
∑_{i=1}^m f( mx_i + ∑_{j=1, j≠i}^m x_j ) + f( ∑_{i=1}^m x_i ) = 2f( ∑_{i=1}^m mx_i )
for all m ∈ N with m ≥ 2.
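As a quick sanity check (an illustration added here, not part of the paper), any additive map f(x) = cx on the reals satisfies the functional equation above; the following Python snippet verifies this numerically for one arbitrary choice of m, c and x_1, ..., x_m.

```python
# Additive maps solve the equation: both sides reduce to 2*m*c*(x_1 + ... + x_m).
c, m = 2.5, 3
xs = [0.7, -1.2, 3.4]

def f(t):
    return c * t

lhs = sum(f(m * xs[i] + sum(xs[j] for j in range(m) if j != i)) for i in range(m)) + f(sum(xs))
rhs = 2 * f(m * sum(xs))
print(abs(lhs - rhs) < 1e-12)   # expected: True
```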
1. Introduction and preliminaries The stability problem of functional equations originated from a question of Ulam [13] concerning the stability of group homomorphisms: Let (G1 , ∗) be a group and (G2 , , d) be a metric group with the metric d(·, ·). For any > 0, does there exist a δ() > 0 such that, if a mapping h : G1 → G2 satisfies the inequality d(h(x ∗ y), h(x) h(y)) < δ for all x, y ∈ G1 , then there is a homomorphism H : G1 → G2 with d(h(x), H(x)) < for all x ∈ G1 ? If the answer is affirmative, we would say that the equation of homomorphism H(x∗y) = H(x)H(y) is stable. The concept of stability for a functional equation arises when we replace the functional equation by an inequality which acts as a perturbation of the equation. Thus the stability question of functional equations is that how do the solutions of the inequality differ from those of the given functional equation? Hyers [4] gave a first affirmative answer to the question of Ulam for Banach spaces as follows: Let X and Y be Banach spaces. Assume that f : X → Y satisfies kf (x + y) − f (x) − f (y)k ≤ ε for all x, y ∈ X and some ε ≥ 0. Then there exists a unique additive mapping T : X → Y such that kf (x) − T (x)k ≤ ε for all x ∈ X. Rassias [9] provided a generalization of Hyers’ Theorem which allows the Cauchy difference to be unbounded in the following: 2000 Mathematics Subject Classification. Primary 39A10, 39B72; Secondary 47H10, 46B03. Key words and phrases. Additive functional equation, fixed point, homomorphism in Banach algebra, generalized Hyers-Ulam stability, derivation on Banach algebra. 0 The corresponding author: [email protected] (Jung Im Kang). 0 This work was supported by the Korea Research Foundation(KRF) grant funded by the Korea government(MEST) (No. 2009-0075850). 1
Theorem 1.1. Let f : E → E′ be a mapping from a normed vector space E into a Banach space E′ subject to the inequality
‖f(x + y) − f(x) − f(y)‖ ≤ ε(‖x‖^p + ‖y‖^p)    (1.1)
for all x, y ∈ E, where ε and p are constants with ε > 0 and p < 1. Then the limit
L(x) = lim_{n→∞} f(2^n x)/2^n
exists for all x ∈ E and L : E → E′ is the unique additive mapping which satisfies
‖f(x) − L(x)‖ ≤ (2ε/(2 − 2^p)) ‖x‖^p
for all x ∈ E. Also, if for each x ∈ E the function f(tx) is continuous in t ∈ R, then L is R-linear.

The above inequality (1.1) has provided a lot of influence in the development of what is now known as the generalized Hyers-Ulam stability of functional equations. Beginning around the year 1980, the topic of approximate homomorphisms, or the stability of the equation of homomorphism, was studied by a number of mathematicians. Găvruta [3] generalized Rassias' result. The stability problems of several functional equations have been extensively investigated by a number of authors and there are many interesting results concerning this problem (see [5], [10]-[12]).

Theorem 1.2. [6, 7, 8] Let X be a real normed linear space and Y a real complete normed linear space. Assume that f : X → Y is an approximately additive mapping for which there exist constants θ ≥ 0 and p ∈ R − {1} such that f satisfies the inequality
‖f(x + y) − f(x) − f(y)‖ ≤ θ · ‖x‖^{p/2} · ‖y‖^{p/2}
for all x, y ∈ X. Then there exists a unique additive mapping L : X → Y satisfying
‖f(x) − L(x)‖ ≤ (θ/|2^p − 2|) ‖x‖^p
for all x ∈ X. If, in addition, f : X → Y is a mapping such that the transformation t → f(tx) is continuous in t ∈ R for each fixed x ∈ X, then L is an R-linear mapping.

We recall a fundamental result in fixed point theory for our main results. Let X be a set. A function d : X × X → [0, ∞] is called a generalized metric on X if d satisfies
(1) d(x, y) = 0 if and only if x = y;
(2) d(x, y) = d(y, x) for all x, y ∈ X;
(3) d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X.

Theorem 1.3. [2] Let (X, d) be a complete generalized metric space and J : X → X be a strictly contractive mapping with Lipschitz constant L < 1. Then, for any element x ∈ X, either d(J^n x, J^{n+1} x) = ∞ for all nonnegative integers n or there exists a positive integer n_0 such that
(1) d(J^n x, J^{n+1} x) < ∞ for all n ≥ n_0;
(2) the sequence {J^n x} converges to a fixed point y* of J;
(3) y* is the unique fixed point of J in the set Y = {y ∈ X | d(J^{n_0} x, y) < ∞};
(4) d(y, y*) ≤ (1/(1 − L)) d(y, Jy) for all y ∈ Y.
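To make the direct-method construction in Theorem 1.1 concrete, here is a small numerical sketch (added for illustration; the particular f, ε and p below are arbitrary choices, not taken from the paper). The perturbed map f(x) = 3x + ε|x|^p satisfies (1.1) on R, the limit L(x) = lim_{n→∞} f(2^n x)/2^n recovers the additive part 3x, and the stated bound holds.

```python
eps, p = 0.5, 0.5

def f(x):
    # additive map 3x plus a perturbation satisfying (1.1) for 0 <= p < 1
    return 3.0 * x + eps * abs(x) ** p

def L(x, n=40):
    # direct-method limit from Theorem 1.1 (n large enough for double precision)
    return f(2.0 ** n * x) / 2.0 ** n

for x in [0.3, 1.0, 7.5]:
    bound = 2 * eps / (2 - 2 ** p) * abs(x) ** p
    print(x, abs(f(x) - L(x)) <= bound + 1e-9)   # expected: True for each x
```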
This paper is organized as follows: In Section 2, using the fixed point method, we prove the generalized Hyers-Ulam stability of homomorphisms in Banach algebras for the Cauchy functional equation. In Section 3, using the fixed point method, we prove the generalized Hyers-Ulam stability of derivations on Banach algebras for the Cauchy functional equation.

2. Stability of homomorphisms in Banach algebras

Throughout this paper, assume that A is a complex Banach algebra with norm ‖·‖_A and B is a complex Banach algebra with norm ‖·‖_B. For a given mapping f : A → B, we define
D_µf(x_1, ···, x_m) := ∑_{i=1}^m µf( mx_i + ∑_{j=1, j≠i}^m x_j ) + µf( ∑_{i=1}^m x_i ) − 2f( µ ∑_{i=1}^m mx_i )
for all µ ∈ T^1 := {ν ∈ C : |ν| = 1} and x_1, ···, x_m ∈ A.

Note that a C-linear mapping H : A → B is called a homomorphism in Banach algebras if H satisfies H(xy) = H(x)H(y) for all x, y ∈ A. We prove the generalized Hyers-Ulam stability of homomorphisms in Banach algebras for the functional equation D_µf(x_1, ···, x_m) = 0.

Theorem 2.1. Let f : A → B be a mapping for which there are functions ϕ : A^m → [0, ∞) and ψ : A² → [0, ∞) such that
lim_{j→∞} m^{−j} ϕ(m^j x_1, ···, m^j x_m) = 0,    (2.1)
‖D_µf(x_1, ···, x_m)‖_B ≤ ϕ(x_1, ···, x_m),    (2.2)
‖f(xy) − f(x)f(y)‖_B ≤ ψ(x, y),    (2.3)
lim_{j→∞} m^{−2j} ψ(m^j x, m^j y) = 0    (2.4)
for all µ ∈ T^1 and x_1, ···, x_m, x, y ∈ A. If there exists L < 1 such that ϕ(mx, 0, ···, 0) ≤ mLϕ(x, 0, ···, 0) for all x ∈ A, then there exists a unique homomorphism H : A → B such that
‖f(x) − H(x)‖_B ≤ (1/(m − mL)) ϕ(x, 0, ···, 0)    (2.5)
for all x ∈ A.

Proof. Consider the set X := {g : A → B} and introduce the generalized metric on X as follows:
d(g, h) = inf{C ∈ R⁺ : ‖g(x) − h(x)‖_B ≤ Cϕ(x, 0, ···, 0), ∀x ∈ A}.
It is easy to show that (X, d) is complete. Now, we consider the linear mapping J : X → X such that Jg(x) := (1/m) g(mx) for all x ∈ A. By Theorem 3.1 of [1], d(Jg, Jh) ≤ Ld(g, h) for all g, h ∈ X. Letting µ = 1,
x = x_1 and x_2 = ··· = x_m = 0 in (2.2), we get
‖f(mx) − mf(x)‖_B ≤ ϕ(x, 0, ···, 0)    (2.6)
for all x ∈ A, and so
‖f(x) − (1/m) f(mx)‖_B ≤ (1/m) ϕ(x, 0, ···, 0)
for all x ∈ A. Hence d(f, Jf) ≤ 1/m. Thus, by Theorem 1.3, there exists a mapping H : A → B such that
(1) H is a fixed point of J, i.e.,
H(mx) = mH(x)    (2.7)
for all x ∈ A. The mapping H is a unique fixed point of J in the set Y = {g ∈ X : d(f, g) < ∞}. This implies that H is a unique mapping satisfying (2.6) such that there exists C ∈ (0, ∞) satisfying ‖H(x) − f(x)‖_B ≤ Cϕ(x, 0, ···, 0) for all x ∈ A;
(2) d(J^n f, H) → 0 as n → ∞. This implies the equality
lim_{n→∞} f(m^n x)/m^n = H(x)    (2.8)
for all x ∈ A;
(3) d(f, H) ≤ (1/(1 − L)) d(f, Jf), which implies the inequality d(f, H) ≤ 1/(m − mL). This implies that the inequality (2.5) holds.
It follows from (2.1), (2.2) and (2.7) that
‖ ∑_{i=1}^m H( mx_i + ∑_{j=1, j≠i}^m x_j ) + H( ∑_{i=1}^m x_i ) − 2H( ∑_{i=1}^m mx_i ) ‖_B
= lim_{n→∞} (1/m^n) ‖ ∑_{i=1}^m f( m^{n+1} x_i + ∑_{j=1, j≠i}^m m^n x_j ) + f( ∑_{i=1}^m m^n x_i ) − 2f( ∑_{i=1}^m m^{n+1} x_i ) ‖_B
≤ lim_{n→∞} (1/m^n) ϕ(m^n x_1, ···, m^n x_m) = 0
for all x_1, ···, x_m ∈ A, and so
∑_{i=1}^m H( mx_i + ∑_{j=1, j≠i}^m x_j ) + H( ∑_{i=1}^m x_i ) = 2H( ∑_{i=1}^m mx_i )    (2.9)
for all x1 , · · · , xm ∈ A. By a similar method to above, we get µH(mx) = H(mµx) for all µ ∈ T1 and x ∈ A. Thus one can show that the mapping H : A → B is C-linear.
Finally, it follows from (2.3) that
‖H(xy) − H(x)H(y)‖_B = lim_{n→∞} (1/m^{2n}) ‖f(m^{2n} xy) − f(m^n x)f(m^n y)‖_B ≤ lim_{n→∞} (1/m^{2n}) ψ(m^n x, m^n y) = 0
for all x, y ∈ A, and so H(xy) = H(x)H(y) for all x, y ∈ A. Thus H : A → B is a homomorphism satisfying (2.5). This completes the proof.
Corollary 2.2. Let r < 1 and θ be nonnegative real numbers and let f : A → B be a mapping such that
‖D_µf(x_1, ···, x_m)‖_B ≤ θ · (‖x_1‖_A^r + ‖x_2‖_A^r + ··· + ‖x_m‖_A^r),    (2.10)
‖f(xy) − f(x)f(y)‖_B ≤ θ · (‖x‖_A^r · ‖y‖_A^r)    (2.11)
for all µ ∈ T^1 and x_1, ···, x_m, x, y ∈ A. Then there exists a unique homomorphism H : A → B such that
‖f(x) − H(x)‖_B ≤ (θ/(m − m^r)) ‖x‖_A^r
for all x ∈ A.

Proof. The proof follows from Theorem 2.1 by taking
ϕ(x_1, ···, x_m) := θ · (‖x_1‖_A^r + ‖x_2‖_A^r + ··· + ‖x_m‖_A^r), ψ(x, y) := θ · (‖x‖_A^r · ‖y‖_A^r)
for all x_1, ···, x_m, x, y ∈ A and L = m^{r−1}.
Theorem 2.3. Let f : A → B be a mapping for which there are functions ϕ : A^m → [0, ∞) and ψ : A² → [0, ∞) such that
lim_{j→∞} m^j ϕ(m^{−j} x_1, ···, m^{−j} x_m) = 0,    (2.12)
‖D_µf(x_1, ···, x_m)‖_B ≤ ϕ(x_1, ···, x_m),    (2.13)
‖f(xy) − f(x)f(y)‖_B ≤ ψ(x, y),    (2.14)
lim_{j→∞} m^{2j} ψ(m^{−j} x, m^{−j} y) = 0    (2.15)
for all µ ∈ T^1 and x_1, ···, x_m, x, y ∈ A. If there exists an L < 1 such that ϕ(x, 0, ···, 0) ≤ (L/m) ϕ(mx, 0, ···, 0) for all x ∈ A, then there exists a unique homomorphism H : A → B such that
‖f(x) − H(x)‖_B ≤ (L/(m − mL)) ϕ(x, 0, ···, 0)    (2.16)
for all x ∈ A.
Proof. We consider the linear mapping J : X → X such that x Jg(x) := mg m for all x ∈ A. It follows from (2.6) that
x
f (x) − mf ( )
m
≤ϕ
B
x L , 0, · · · , 0 ≤ ϕ(x, 0, · · · , 0) m m
L . for all x ∈ A. Hence d(f, Jf ) ≤ m By Theorem 1.3, there exists a mapping H : A → B such that (1) H is a fixed point of J, i.e.,
H(mx) = mH(x)
(2.17)
for all x ∈ A. The mapping H is a unique fixed point of J in the set Y = {g ∈ X : d(f, g) < ∞}. This implies that H is a unique mapping satisfying (2.13) such that there exists C ∈ (0, ∞) satisfying kH(x) − f (x)kB ≤ Cϕ(x, 0, · · · , 0) for all x ∈ A; (2) d(J n f, H) → 0 as n → ∞. This implies the equality x lim mn f ( n ) = H(x) n→∞ m for all x ∈ A; 1 (3) d(f, H) ≤ 1−L d(f, Jf ), which implies the inequality L , m − mL which implies that the inequality (2.16) holds. The rest of the proof is similar to the proof of Theorem 2.1. This completes the proof. d(f, H) ≤
Corollary 2.4. Let r > 1 and θ be nonnegative real numbers and f : A → B be a mapping such that kDµ f (x1 , ..., xm )kB ≤ θ · (kx1 krA + kx2 krA + · · · + kxm krA ), kf (xy) − f (x)f (y)kB ≤ θ · (kxkrA · kykrA )
(2.18) (2.19)
for all µ ∈ T1 and x1 , · · · , xm , x, y ∈ A. Then there exists a unique homomorphism H : A → B such that θ kxkrA kf (x) − H(x)kB ≤ r m −m for all x ∈ A.
Proof. The proof follows from Theorem 2.1 by taking ϕ(x1 , · · · , xm ) = θ · (kx1 krA + kx2 krA + · · · + kxm krA ), ψ(x, y) := θ · (kxkrA · kykrA ) for all x1 , · · · , xm , x, y ∈ A and L = m1−r .
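The proofs in this section all hinge on iterating a strictly contractive operator on function space, either Jg(x) = g(mx)/m (Theorem 2.1) or Jg(x) = m·g(x/m) (Theorem 2.3), starting from the given approximate solution f. The following small numerical sketch of this convergence on R is an illustration added here (the choice of f, m and the perturbation is arbitrary, not from the paper).

```python
# f is additive (x -> 3x) up to a bounded perturbation, so phi can be taken constant and
# the contraction constant is L = 1/m; the iterates (J^n f)(x) = f(m^n x)/m^n then converge
# to the additive map H(x) = 3x at the geometric rate L**n.
import math

m = 2

def f(x):
    return 3.0 * x + math.sin(x)

def Jn_f(x, n):
    return f(m ** n * x) / m ** n

xs = [0.5, 1.7, -4.2]
for n in (1, 5, 10, 20):
    err = max(abs(Jn_f(x, n) - 3.0 * x) for x in xs)
    print(n, err)   # err <= (1/m)**n, shrinking geometrically
```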
3. Stability of derivations on Banach algebras Note that a C-linear mapping δ : A → A is called a derivation on A if δ satisfies δ(xy) = δ(x)y + xδ(y) for all x, y ∈ A. We prove the generalized Hyers-Ulam stability of derivations on Banach algebras for the functional equation Dµ f (x1 , · · · , xm ) = 0. Theorem 3.1. Let f : A → A be a mapping for which there are functions ϕ : Am → [0, ∞) and ψ : A2 → [0, ∞) such that lim m−j ϕ(mj x1 , · · · , mj xm ) = 0,
j→∞
kDµ f (x1 , · · · , xm )kA ≤ ϕ(x1 , · · · , xm ), kf (xy) − f (x)y − xf (y)kA ≤ ψ(x, y), lim m−2j ψ(mj x, mj y) = 0 j→∞
(3.1) (3.2) (3.3) (3.4)
1
for all µ ∈ T and x1 , · · · , xm , x, y ∈ A. If there exists an L < 1 such that ϕ(mx, 0, ..., 0) ≤ mLϕ(x, 0, · · · , 0) for all x ∈ A. Then there exists a unique derivation δ : A → A such that 1 kf (x) − δ(x)kA ≤ ϕ(x, 0, · · · , 0) (3.5) m − mL for all x ∈ A. Proof. By the same reasoning in as the proof of Theorem 2.1, there exists a unique C-linear mapping δ : A → A satisfying (3.3). The mapping δ : A → A is given by f (mn x) n→∞ mn for all x ∈ A. It follows from (3.3), (3.4) and (3.6) that δ(x) = lim
(3.6)
kδ(xy) − δ(x)y − xδ(y)kA 1 = lim 2n kf (m2n xy) − f (mn x) · mn y − mn xf (mn y)kA n→∞ m 1 ≤ lim 2n ψ(mn x, mn y) = 0 n→∞ m for all x, y ∈ A and so δ(xy) = δ(x)y + xδ(y) for all x, y ∈ A. Thus δ : A → A is a derivation satisfying (3.5). This completes the proof.
Corollary 3.2. Let r < 1 and θ be nonnegative real numbers and f : A → A be a mapping such that kDµ f (x1 , · · · , xm )kB ≤ θ · (kx1 krA + · · · kxm krA ), kf (xy) − f (x)y − xf (y)kA ≤ θ · (kxkrA · kykrA )
(3.7) (3.8)
for all µ ∈ T1 and x1 , · · · , xm , x, y ∈ A. Then there exists a unique derivation δ : A → A such that θ kf (x) − δ(x)kA ≤ kxkrA m − mr for all x ∈ A. Proof. The proof follows from Theorem 3.1 by taking ϕ(x1 , · · · , xm ) := θ · (kx1 krA + · · · kxm krA ) and ψ(x, y) := θ · (kxkrA · kykrA ), for all x1 , · · · , xm , x, y ∈ A and L = mr−1 .
Theorem 3.3. Let f : A → B be a mapping for which there are functions ϕ : Am → [0, ∞) and ψ : A2 → [0, ∞) such that lim mj ϕ(m−j x1 , · · · , m−j xm ) = 0,
j→∞
kDµ f (x1 , · · · , xm )kB ≤ ϕ(x1 , · · · , xm ), kf (xy) − f (x)y − xf (y)kB ≤ ψ(x, y), lim m2j ψ(m−j x, m−j y) = 0 j→∞
(3.9) (3.10) (3.11) (3.12)
1
for all µ ∈ T and x1 , · · · , xm , x, y ∈ A. If there exists an L < 1 such that ϕ(mx, 0, · · · , 0) ≤ L ϕ(x, 0, · · · , 0) for all x ∈ A. Then there exists a unique derivation δ : A → A such m that L kf (x) − δ(x)kB ≤ ϕ(x, 0, · · · , 0) (3.13) m − mL for all x ∈ A. Proof. The proof is similar to the proofs of Theorems 2.3 and 3.1.
Corollary 3.4. Let r > 1, θ be nonnegative real numbers and f : A → A be a mapping such that kDµ f (x1 , · · · , xm )kB ≤ θ · (kx1 krA + · · · kxm krA ), kf (xy) − f (x)y − xf (y)kA ≤ θ · (kxkrA · kykrA )
(3.14) (3.15)
for all µ ∈ T1 and x1 , · · · , xm , x, y ∈ A. Then there exists a unique derivation δ : A → A such that θ kf (x) − δ(x)kA ≤ r kxkrA m −m
for all x ∈ A. Proof. The proof follows from Theorem 3.3 by taking ϕ(x1 , · · · , xm ) := θ · (kx1 krA + · · · kxm krA ) and ψ(x, y) := θ · (kxkrA · kykrA ), for all x1 , · · · , xm , x, y ∈ A and L = m1−r .
References [1] L. C˘adariu and V. Radu, Fixed points and the stability of Jensen’s functional equation, J. Inequal. Pure Appl. Math. 4, no. 1, Article 4 (2003). [2] J. Diaz and B. Margolis, A fixed point theorem of the alternative for contractions on a generalized complete metric space, Bull. Amer. Math. Soc. 74 (1968), 305–309. [3] P. Gˇavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184 (1994), 431–436. [4] D.H. Hyers, On the stability of the linear functional equation, Proc. Nat. Acad. Sci. USA 27 (1941), 222–224. [5] M.S. Moslehian, On the orthogonal stability of the Pexiderized quadratic equation, J. Differ. Equ. Appl. 11 (2005), 999–1004. [6] J.M. Rassias, On approximation of approximately linear mappings by linear mappings, J. Funct. Anal. 46 (1982), 126–130. [7] J.M. Rassias, On approximation of approximately linear mappings by linear mappings, Bull. Sci. Math. 108 (1984), 445–446. [8] J.M. Rassias, Solution of a problem of Ulam, J. Approx. Theory 57 (1989), 268–273. [9] Th.M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297–300. [10] Th.M. Rassias, Problem 16; 2, Report of the 27th Internat. Symp. on Funct. Equat., Aequat. Math. 39 (1990), 292–293; 309. [11] Th.M. Rassias, The problem of S.M. Ulam for approximately multiplicative mappings, J. Math. Anal. Appl. 246 (2000), 352–378. [12] Th.M. Rassias, On the stability of functional equations in Banach spaces, J. Math. Anal. Appl. 251 (2000), 264–284. [13] S.M. Ulam, A Collection of the Mathematical Problems, Interscience Publ. New York, 1960. Yeol Je Cho Department of Mathematics Education and the RINS, Gyeongsang National University, Chinju 660-701, Korea E-mail address: [email protected] Jung Im Kang National Institute for Mathematical Sciences, KT Daeduk 2 Research Center, 463-1 Jeonmin-dong, Yuseong-gu, Daejeon 305-811, Korea E-mail address: [email protected] R. Saadati, Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran, I.R. Iran E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 1112-1117, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Oscillation of second order nonlinear neutral differential equation
M. Tamer Şenel¹ and T. Candan²
¹ Department of Mathematics, Faculty of Science, Erciyes University, Kayseri, Turkey, [email protected]
² Department of Mathematics, Faculty of Art and Science, Niğde University, Niğde, 51200, Turkey, [email protected]

Abstract. In this article, we present some criteria for the oscillation of the second order nonlinear neutral differential equation (r(t)([x(t) + p(t)x(τ(t))]′)^γ)′ + q(t)x^γ(σ(t)) = 0. We use oscillation results for first order equations to establish those for the second order equation.
Keywords: Oscillation, neutral differential equation, second-order.
1
Introduction
In this article, we consider the second order neutral differential equation
(r(t)([x(t) + p(t)x(τ(t))]′)^γ)′ + q(t)x^γ(σ(t)) = 0,    (1)
where q(t) ∈ C([t_0, ∞), (0, ∞)), r(t), τ(t) and σ(t) ∈ C¹([t_0, ∞), (0, ∞)) with τ(t), σ(t) → ∞ as t → ∞, p(t) ∈ C¹([t_0, ∞), [0, ∞)) with p(t) ≤ p_0 < ∞, and γ ≥ 1 is a ratio of odd positive integers. In addition, suppose that the following conditions are satisfied:
(H1) τ′(t) ≥ τ_0 > 0, τ∘σ = σ∘τ,
(H2) R(t) = ∫_{t_0}^t ds / r^{1/γ}(s) → ∞ as t → ∞.
By a solution of (1) we mean a nontrivial real valued function which has the properties x(t) + p(t)x(τ(t)) ∈ C¹([t_1, ∞), R), r(t)([x(t) + p(t)x(τ(t))]′)^γ ∈ C¹([t_1, ∞), R) and which satisfies (1) for t ≥ t_1 ≥ t_0. A solution of (1) is said to be oscillatory if it has arbitrarily large zeros; otherwise, it is said to be nonoscillatory. Equation (1) is called oscillatory if all of its solutions are oscillatory. Numerous oscillation results have been obtained for second order neutral delay differential equations. Grammatikopoulos et al. [1] considered the second order linear neutral delay equation
(y(t) + p(t)y(t − τ))″ + q(t)y(t − δ) = 0,
t > t0
(2)
and they proved that if q(t) > 0, 0 6 p(t) < 1, and Z ∞ q(s)(1 − p(s − δ))ds = ∞, t0
then every solution of (2) oscillates. Graef et al.[2] considered the second order nonlinear neutral delay equation (y(t) + p(t)y(t − τ ))00 + q(t)f (y(t − δ)) = 0,
t > t0 ,
(3)
which is more general than (2), and they proved that if q(t) > 0, 0 6 p(t) < 1, and Z ∞ q(s)f ((1 − p(s − δ))c)ds = ∞, c > 0 t0
then every solution of (3) oscillates. Later Grace and Lalli [3] studied more general equation of the form (r(t)[x(t) + p(t)x(t − τ )]0 )0 + q(t)f (x(t − σ)) = 0, R ∞ ds under the conditions f (x) > k, = ∞, and x r(s) Z
∞
ρ(s)q(s)(1 − p(s − σ)) −
(4)
(ρ0 (s))2 r(s − σ) ds = ∞, 4kρ(s)
they proved that (4) is oscillatory. Bacul´ıkov´a and Dˇzurina [4] studied neutral differential equation (r(t)[x(t) + p(t)x(τ (t))]0 )0 + q(t)x(σ(t)) = 0,
(5)
R t ds →∞ under the condition 0 6 p(t) 6 p0 < ∞, τ 0 (t) > τ0 > 0, τ oσ = σoτ and R(t) = t0 r(s) as t → ∞. Our results originated from results of Bacul´ıkov´a and Dˇzurina [4]. We note that when γ = 1 in (1) we get (5) therefore our results improve and generalize that of in [4]. For some other oscillation results we refer the reader to the papers [5-7] and the references cited therein and related books [8-10].
2
Main Results
Let us use the following notations for convenience: y(t) = x(t) + p(t)x(τ (t)) and Q(t) = min{q(t), q(τ (t))}. Lemma 1. Assume that (H2 ) holds. Let x(t) be an eventually positive solution of (1), then there exists a t1 such that y(t) > 0, r(t)(y 0 (t))γ > 0, (r(t)(y 0 (t))γ )0 < 0 for t > t1 . 2
(6)
Proof. Since x(t) is an eventually positive solution of (1), in view of property of σ(t) there exists a t1 > t0 such that x(σ(t)) > 0 for t > t1 . Then it follows from (1) that (r(t)(y 0 (t))γ )0 = −q(t)xγ (σ(t)).
(7)
Because of postitive nature of q(t), from (7) r(t)(y 0 (t))γ is strictly decreasing for t > t1 . Thus we can have either y 0 (t) > 0 for t > t1 or y 0 (t) < 0 for t > t2 > t1 . Suppose y 0 (t) < 0 for t > t2 > t1 , then there exists a c > 0 such that r(t)(y 0 (t))γ < −c < 0.
(8)
Dividing (8) by r(t), taking γ root of both sides and integrating from t2 to t, respectively, we obtain Z t ds 1/γ → −∞ as t → ∞, y(t) 6 y(t2 ) − c 1 t2 r γ (s) which contradicts the positive nature of y(t) and therefore y 0 (t) > 0 for t > t1 . Thus the proof is complete. Theorem 1. Assume that the first order neutral differential inequality 0 Q(t) p0 γ w(τ (t)) + γ−1 (R(σ(t)) − R(t1 ))γ w(σ(t)) 6 0 w(t) + τ0 2
(9)
has no positive solution, then (1) is oscillatory. Proof. Assume that x(t) > 0 is a solution of (1). Then y(t) satisfies [y(σ(t))]γ = [x(σ(t)) + p(σ(t))x(τ (σ(t)))]γ 6 [x(σ(t)) + p0 x(σ(τ (t)))]γ γ−1
6 2
[xγ (σ(t)) + p0 γ xγ (σ(τ (t)))].
(10)
On the other hand, using (1) and (H1 ), we have (r(t)(y 0 (t))γ )0 + q(t)xγ (σ(t)) = 0,
(11)
or p0 γ (r(τ (t))(y 0 (τ (t)))γ )0 + p0 γ q(τ (t))xγ (σ(τ (t))) τ 0 (t) p0 γ > (r(τ (t))(y 0 (τ (t)))γ )0 + p0 γ q(τ (t))xγ (σ(τ (t))). τ0
0 =
(12)
Combining (11) and (12), we see that (r(t)(y 0 (t))γ )0 +
p0 γ (r(τ (t))(y 0 (τ (t)))γ )0 + p0 γ q(τ (t))xγ (σ(τ (t))) + q(t)xγ (σ(t)) 6 0. τ0
(13)
Taking (10) into account and using (13), we obtain (r(t)(y 0 (t))γ )0 +
Q(t) p0 γ (r(τ (t))(y 0 (τ (t)))γ )0 + γ−1 y γ (σ(t)) 6 0. τ0 2 3
(14)
Moreover, since w(t) = r(t)(y 0 (t))γ > 0 is decreasing from Lemma 1, we obtain the following inequality Z t Z t 1 1 1/γ 0 1/γ y(t) > (r (s)y (s))ds > w (t) ds 1/γ 1/γ (s) (s) t1 r t1 r = w1/γ (t)(R(t) − R(t1 )).
(15)
Using (15) into (14) we see that w(t) is a positive solution of (9) which contradicts our assumption and completes the proof. Theorem 2. Assume that τ (t) > t. If the first order differential inequality τ0 Q(t) 0 γ z (t) + γ−1 (R(σ(t)) − R(t1 )) z(σ(t)) 6 0 γ τ0 + p 0 2
(16)
has no positive solution, then (1) is oscillatory. Proof. Assume that x(t) > 0 is a solution of (1). We know from Lemma 1 and the proof of Theorem 1 that w(t) = r(t)(y 0 (t))γ > 0 is decreasing and it satisfies (9). Let us denote γ z(t) = w(t) + pτ00 w(τ (t)). Since τ (t) > t, we have p0 γ z(t) 6 w(t) 1 + . τ0 Substituting these terms into (9), we see that z(t) is a positive solution of (16) and therefore we have a contradiction which completes the proof. Corollary 1. Assume that τ (t) > t and σ(t) 6 t. If Z t γ−1 2 (τ0 + p0 γ ) γ Q(s)R (σ(s))ds > lim inf t→∞ τ0 e σ(t)
(17)
then (1) is oscillatory. Proof. It is well known that when (17) holds (16) has no eventually positive solutions and therefore (1) is oscillatory. Theorem 3. Assume that τ (t) 6 t. If the first order differential inequality τ0 Q(t) 0 z (t) + (R(σ(t)) − R(t1 ))γ z(τ −1 (σ(t))) 6 0, τ0 + p0 γ 2γ−1
(18)
where τ −1 (t) is inverse function of τ (t), has no positive solution, then (1) is oscillatory. Proof. Assume that x(t) > 0 is a solution of (1). As in the proof of Theorem 1 we see that γ w(t) = r(t)(y 0 (t))γ > 0 is decreasing and it satisfies (9). Let us denote z(t) = w(t)+ pτ00 w(τ (t)). Since τ (t) 6 t, we have p0 γ z(t) 6 w(τ (t)) 1 + . τ0 Using the last inequality in (9), we see that z(t) is a positive solution of (18) and therefore we have a contradiction which completes the proof. 4
Corollary 2. Assume that σ(t) 6 τ (t) 6 t. If Z t→∞
γ−1
t
lim inf
Q(s)Rγ (σ(s))ds >
τ −1 (σ(t))
2
(τ0 + p0 γ ) τ0 e
then (1) is oscillatory.

Proof. The proof is similar to that of Corollary 1 and is therefore omitted.

Example 1. Consider the nonlinear neutral differential equation
( t^{7/6} ( [x(t) + (1/t) x(2t)]′ )^{5/3} )′ + (2^{29/6}/t^{3/2}) x^{5/3}(t/2) = 0, t ≥ 1,    (19)
where γ = 5/3, τ(t) = 2t with τ′(t) = 2 > 1 = τ_0, and p(t) = 1/t ≤ 1 = p_0 for t ≥ 1. We will apply Corollary 1 to (19). Here
Q(t) = min{q(t), q(τ(t))} = q(2t) = 2^{10/3}/t^{3/2}
and
R(t) = ∫_0^t ds/r^{1/γ}(s) = ∫_0^t ds/(s^{7/6})^{3/5} = (10/3) t^{3/10}.
Then we get
lim inf_{t→∞} ∫_{σ(t)}^t Q(s) R^γ(σ(s)) ds = lim inf_{t→∞} ∫_{t/2}^t (2^{10/3}/s^{3/2}) ( (10/3)(s/2)^{3/10} )^{5/3} ds
= lim inf_{t→∞} (10/3)^{5/3} 2^{17/6} ∫_{t/2}^t (1/s) ds = (10/3)^{5/3} 2^{17/6} ln 2 > 2^{γ−1} (τ_0 + p_0^γ)/(τ_0 e) = 2^{5/3}/e,
which guarantees the oscillation of (19).
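A quick numerical comparison of the two constants appearing in this example (an illustration added here; the values follow the computation above, with γ = 5/3, τ_0 = 1 and p_0 = 1):

```python
import math

lhs = (10 / 3) ** (5 / 3) * 2 ** (17 / 6) * math.log(2)   # value of the liminf
rhs = 2 ** (5 / 3) / math.e                                # threshold from Corollary 1
print(lhs, rhs, lhs > rhs)   # roughly 36.8 vs 1.17 -> True, so (19) oscillates
```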
References [1] M. K. Grammatikopoulos, G. Ladas and A. Meimaridou, Oscillation of second order neutral delay differential equations, Rad. Mat. 1 (1985) 267-274. [2] J. R. Graef, M. K. Grammatikopoulos and P. W. Spikes, Asymptotic properties of solutions of nonlinear neutral delay differential equations of the second order, Rad. Mat. 4 (1988) 133-149. [3] S. R. Grace and B. S. Lalli, Oscillation of nonlinear second order neutral delay differential equations, Rad. Mat. 3 (1987) 77-84. [4] B. Bacul´ıkov´a and J. Dˇzurina, Oscillation theorems for second order neutral differential equations, Comput. Math. with Appl. 61 (2011) 94-99.
[5] R. Xu and Y. Xia, A note on the oscillation of second-order nonlinear neutral functional differential equations, J. Contemp. Math. Sci. 3 (2008) 1441-1450. [6] R. P. Agarwal and S. R. Grace, Oscillation theorems for certain neutral functional differential equations, Comput. Math. Appl. 38 (1999) 1-11. [7] B. Bacul´ıkov´a, Oscillation criteria for second order nonlinear differential equations, Arch. Math. 42 (2006) 141-149. [8] L. H. Erbe, Q. Kong and B. G. Zhang, Oscillation Theory for Functional Differential Equations, Marcel Dekker, New York, 1994. [9] G. S. Ladde, V. Lakshmikantham and B. G. Zhang, Oscillation Theory of Differential Equations with Deviating Arguments, Marcel Dekker, New York, 1987. [10] D. D. Bainov, D. P. Mishev, Oscillation Theory for Nonlinear Differential Equations with Delay, Adam Hilger, Bristol, Philadelphia, New York, 1991.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 1118-1129, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
On the fuzzy stability of a generalized Jensen quadratic functional equation
Hassan Azadi Kenary, Choonkil Park and Sung Jin Lee*

Abstract. Using the fixed point method, we prove the Hyers-Ulam stability of the following generalized Jensen quadratic functional equation
f((x + y)/r + sz) + f((x + y)/r − sz) + f((x − y)/r + sz) + f((x − y)/r − sz) = (4/r²) f(x) + (4/r²) f(y) + 4s² f(z)
in fuzzy Banach spaces.
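As a quick sanity check (added here for illustration, not part of the paper), the prototype quadratic function f(t) = t² satisfies the equation above; the following Python snippet verifies this numerically for one arbitrary choice of x, y, z, r and s.

```python
def f(t):
    return t * t

def lhs(x, y, z, r, s):
    return (f((x + y) / r + s * z) + f((x + y) / r - s * z)
            + f((x - y) / r + s * z) + f((x - y) / r - s * z))

def rhs(x, y, z, r, s):
    return 4 / r**2 * f(x) + 4 / r**2 * f(y) + 4 * s**2 * f(z)

print(abs(lhs(1.3, -0.7, 2.1, 3.0, 0.5) - rhs(1.3, -0.7, 2.1, 3.0, 0.5)) < 1e-12)  # True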
1. Introduction The stability problem of functional equations originated from a question of Ulam [28] concerning the stability of group homomorphisms. Hyers [10] gave a first affirmative partial answer to the question of Ulam for Banach spaces. Hyers’ Theorem was generalized by Th. M. Rassias [23] for linear mappings by considering an unbounded Cauchy difference. Theorem 1.1. (Th.M. Rassias) Let f : E → E ′ be a mapping from a normed vector space E into a Banach space E ′ subject to the inequality ∥f (x + y) − f (x) − f (y)∥ ≤ ϵ(∥x∥p + ∥y∥p ) for all x, y ∈ E, where ϵ and p are constants with ϵ > 0 and 0 ≤ p < 1. Then the limit f (2n x) n→∞ 2n exists for all x ∈ E and L : E → E ′ is the unique additive mapping which satisfies L(x) = lim
∥f (x) − L(x)∥ ≤
2ϵ ∥x∥p 2 − 2p
for all x ∈ E. Also, if for each x ∈ E the function f (tx) is continuous in t ∈ R, then L is R-linear. The functional equation f (x + y) + f (x − y) = 2f (x) + 2f (y) 0
2010 Mathematics Subject Classification: 39B52, 46S40, 34K36, 47S40, 26E50, 47H10, 39B82. Keywords: Hyers-Ulam stability, fuzzy Banach space, generalized Jensen functional equation, fixed point method. 0∗ Corresponding author 0
is called a quadratic functional equation. In particular, every solution of the quadratic functional equation is said to be a quadratic mapping. The Hyers-Ulam stability of the quadratic functional equation was proved by Skof [27] for mappings $f:X\to Y$, where $X$ is a normed space and $Y$ is a Banach space. Cholewa [5] noticed that the theorem of Skof is still true if the relevant domain $X$ is replaced by an Abelian group. Czerwik [6] proved the Hyers-Ulam stability of the quadratic functional equation.
In this paper, we consider the following generalized Jensen functional equation
$$f\Big(\frac{x+y}{r}+sz\Big)+f\Big(\frac{x+y}{r}-sz\Big)+f\Big(\frac{x-y}{r}+sz\Big)+f\Big(\frac{x-y}{r}-sz\Big)=\frac{4}{r^2}f(x)+\frac{4}{r^2}f(y)+4s^2f(z)\tag{1.1}$$
and prove the Hyers-Ulam stability of the functional equation (1.1) in fuzzy Banach spaces. Jun and Cho [11] proved that a mapping $f:X\to Y$ satisfies the functional equation (1.1) if and only if the mapping $f:X\to Y$ is quadratic. Moreover, they proved the Hyers-Ulam stability of the functional equation (1.1) in Banach spaces. The stability problems of several functional equations have been extensively investigated by a number of authors, and there are many interesting results concerning this problem (see [9, 12], [19]–[21], [24]–[26]).
Katsaras [13] defined a fuzzy norm on a vector space to construct a fuzzy vector topological structure on the space. Some mathematicians have defined fuzzy norms on a vector space from various points of view (see [8, 15, 22]). In particular, Bag and Samanta [1], following Cheng and Mordeson [4], gave an idea of a fuzzy norm in such a manner that the corresponding fuzzy metric is of Kramosil and Michalek type [14]. They established a decomposition theorem of a fuzzy norm into a family of crisp norms and investigated some properties of fuzzy normed spaces [2].
2. Preliminaries
Definition 2.1. (Bag and Samanta [1]) Let $X$ be a real vector space. A function $N:X\times\mathbb R\to[0,1]$ is called a fuzzy norm on $X$ if for all $x,y\in X$ and all $s,t\in\mathbb R$,
(N1) $N(x,t)=0$ for $t\le 0$;
(N2) $x=0$ if and only if $N(x,t)=1$ for all $t>0$;
(N3) $N(cx,t)=N\big(x,\tfrac{t}{|c|}\big)$ if $c\ne 0$;
(N4) $N(x+y,s+t)\ge\min\{N(x,s),N(y,t)\}$;
(N5) $N(x,\cdot)$ is a non-decreasing function on $\mathbb R$ and $\lim_{t\to\infty}N(x,t)=1$;
(N6) for $x\ne 0$, $N(x,\cdot)$ is continuous on $\mathbb R$.
The pair $(X,N)$ is called a fuzzy normed vector space. The properties of fuzzy normed vector spaces and examples of fuzzy norms are given in [17, 18].
Example 2.1. Let $(X,\|\cdot\|)$ be a normed linear space and $\alpha,\beta>0$. Then
$$N(x,t)=\begin{cases}\dfrac{\alpha t}{\alpha t+\beta\|x\|}, & t>0,\ x\in X,\\[1mm] 0, & t\le 0,\ x\in X,\end{cases}$$
is a fuzzy norm on $X$.
Definition 2.2. (Bag and Samanta [1]) Let $(X,N)$ be a fuzzy normed vector space. A sequence $\{x_n\}$ in $X$ is said to be convergent or converge if there exists an $x\in X$ such that $\lim_{n\to\infty}N(x_n-x,t)=1$ for all $t>0$. In this case, $x$ is called the limit of the sequence $\{x_n\}$ in $X$ and we denote it by $N\text{-}\lim_{n\to\infty}x_n=x$.
Definition 2.3. (Bag and Samanta [1]) Let $(X,N)$ be a fuzzy normed vector space. A sequence $\{x_n\}$ in $X$ is called Cauchy if for each $\varepsilon>0$ and each $t>0$ there exists an $n_0\in\mathbb N$ such that for all $n\ge n_0$ and all $p>0$, we have $N(x_{n+p}-x_n,t)>1-\varepsilon$.
It is well known that every convergent sequence in a fuzzy normed vector space is Cauchy. If each Cauchy sequence is convergent, then the fuzzy norm is said to be complete and the fuzzy normed vector space is called a fuzzy Banach space.
We say that a mapping $f:X\to Y$ between fuzzy normed vector spaces $X$ and $Y$ is continuous at a point $x_0\in X$ if for each sequence $\{x_n\}$ converging to $x_0\in X$, the sequence $\{f(x_n)\}$ converges to $f(x_0)$. If $f:X\to Y$ is continuous at each $x\in X$, then $f:X\to Y$ is said to be continuous on $X$ (see [2]).
Throughout this paper, assume that $X$ is a vector space and that $(Y,N)$ is a fuzzy Banach space.
Definition 2.4. Let $X$ be a set. A function $d:X\times X\to[0,\infty]$ is called a generalized metric on $X$ if $d$ satisfies the following conditions:
(1) $d(x,y)=0$ if and only if $x=y$ for all $x,y\in X$;
(2) $d(x,y)=d(y,x)$ for all $x,y\in X$;
(3) $d(x,z)\le d(x,y)+d(y,z)$ for all $x,y,z\in X$.
Theorem 2.1. ([3, 7]) Let $(X,d)$ be a complete generalized metric space and $J:X\to X$ be a strictly contractive mapping with Lipschitz constant $L<1$. Then, for all $x\in X$, either $d(J^n x,J^{n+1}x)=\infty$ for all nonnegative integers $n$ or there exists a positive integer $n_0$ such that
(1) $d(J^n x,J^{n+1}x)<\infty$ for all $n\ge n_0$;
(2) the sequence $\{J^n x\}$ converges to a fixed point $y^*$ of $J$;
(3) $y^*$ is the unique fixed point of $J$ in the set $Y=\{y\in X: d(J^{n_0}x,y)<\infty\}$;
(4) $d(y,y^*)\le\frac{1}{1-L}\,d(y,Jy)$ for all $y\in Y$.
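Before turning to the stability results, here is a quick numerical illustration of the fuzzy norm in Example 2.1 (a hypothetical Python sketch, not part of the original paper); the constants $\alpha,\beta$ and the sampling ranges are arbitrary choices:

```python
# Sketch: numerically spot-check axiom (N4) for the fuzzy norm of Example 2.1 on X = R.
# alpha, beta and the sample ranges are illustrative choices, not values from the paper.
import random

alpha, beta = 1.0, 2.0

def N(x, t):
    """Fuzzy norm of Example 2.1 with X = R and |x| as the crisp norm."""
    return alpha * t / (alpha * t + beta * abs(x)) if t > 0 else 0.0

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    s, t = random.uniform(0.01, 5), random.uniform(0.01, 5)
    # (N4): N(x + y, s + t) >= min{N(x, s), N(y, t)}
    assert N(x + y, s + t) >= min(N(x, s), N(y, t)) - 1e-12
print("axiom (N4) held on all sampled points")
```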
Lemma 2.1. Let $X$ and $Y$ be vector spaces. If a mapping $f:X\to Y$ satisfies $f(0)=0$ and (1.1) for all $x,y,z\in X$, then the mapping $f$ is quadratic.
Proof. Letting $x=y$ in (1.1), we get
$$f\Big(\frac{2x}{r}+sz\Big)+f\Big(\frac{2x}{r}-sz\Big)+f(sz)+f(-sz)=\frac{8}{r^2}f(x)+4s^2 f(z)\tag{2.1}$$
for all $x,z\in X$. Letting $x=0$ in (2.1), we get
$$2f(sz)+2f(-sz)=4s^2 f(z).$$
Setting $y=-x$ in (1.1), we obtain
$$f\Big(\frac{2x}{r}+sz\Big)+f\Big(\frac{2x}{r}-sz\Big)+f(sz)+f(-sz)=\frac{4}{r^2}f(x)+\frac{4}{r^2}f(-x)+4s^2 f(z).\tag{2.2}$$
By (2.1) and (2.2), we conclude that $f$ is even. And by setting $z=0$ in (2.1), we get
$$f\Big(\frac{2x}{r}\Big)=\frac{4}{r^2}f(x)$$
for all $x\in X$. So we get
$$f\Big(\frac{2x}{r}+sz\Big)+f\Big(\frac{2x}{r}-sz\Big)+f(sz)+f(-sz)=2f\Big(\frac{2x}{r}\Big)+4f(sz)$$
for all $x,z\in X$. Hence we get
$$f\Big(\frac{2x}{r}+sz\Big)+f\Big(\frac{2x}{r}-sz\Big)=2f\Big(\frac{2x}{r}\Big)+2f(sz)$$
for all $x,z\in X$. So $f$ is quadratic. $\square$
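As a quick sanity check on equation (1.1) (a hypothetical Python sketch, not taken from the paper), the prototypical quadratic mapping $f(x)=x^2$ on $\mathbb R$ satisfies (1.1) identically for any nonzero $r$ and any $s$:

```python
# Sketch: verify numerically that f(x) = x^2 satisfies the generalized Jensen
# quadratic equation (1.1); r, s and the sample points are illustrative choices.
import random

def f(x):
    return x * x

def defect(x, y, z, r, s):
    """Left-hand side minus right-hand side of (1.1) for f(x) = x^2."""
    lhs = (f((x + y) / r + s * z) + f((x + y) / r - s * z)
           + f((x - y) / r + s * z) + f((x - y) / r - s * z))
    rhs = 4 / r**2 * f(x) + 4 / r**2 * f(y) + 4 * s**2 * f(z)
    return lhs - rhs

random.seed(1)
for _ in range(10_000):
    x, y, z = (random.uniform(-3, 3) for _ in range(3))
    r, s = random.choice([0.5, 2.0, 3.0]), random.uniform(-2, 2)
    assert abs(defect(x, y, z, r, s)) < 1e-9
print("f(x) = x^2 satisfies (1.1) on all sampled points")
```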
The mapping $f:X\to Y$ given in the statement of Lemma 2.1 is called a generalized Jensen quadratic mapping. Putting $z=0$ in (1.1) with $r=2$, we get the Jensen quadratic mapping, i.e.,
$$2f\Big(\frac{x+y}{2}\Big)+2f\Big(\frac{x-y}{2}\Big)=f(x)+f(y),$$
and putting $x=y$ in (1.1) with $r=2$ and $s=1$, we get the quadratic mapping, i.e.,
$$f(x+z)+f(x-z)=2f(x)+2f(z).$$
3. Fuzzy stability of the functional equation (1.1)
Throughout this paper, assume that $X$ is a vector space and that $(Y,N)$ is a fuzzy Banach space.
For a given mapping $f:X\to Y$, we define
$$Df(x,y,z):=f\Big(\frac{x+y}{r}+sz\Big)+f\Big(\frac{x+y}{r}-sz\Big)+f\Big(\frac{x-y}{r}+sz\Big)+f\Big(\frac{x-y}{r}-sz\Big)-\frac{4}{r^2}f(x)-\frac{4}{r^2}f(y)-4s^2 f(z)$$
for all $x,y,z\in X$.
Theorem 3.1. Let $\varphi:X^3\to[0,\infty)$ be a function such that there exists an $L<1$ with
$$\varphi\Big(\frac{rx}{2},\frac{ry}{2},\frac{rz}{2}\Big)\le\frac{r^2 L}{4}\,\varphi(x,y,z)$$
for all $x,y,z\in X$. Let $f:X\to Y$ be a mapping satisfying $f(0)=0$ and
$$N(Df(x,y,z),t)\ge\frac{t}{t+\varphi(x,y,z)}\tag{3.1}$$
for all $x,y,z\in X$, all $t>0$ and $r,s\ne 0$. Then
$$Q(x):=N\text{-}\lim_{n\to\infty}\Big(\frac{4}{r^2}\Big)^n f\Big(\Big(\frac{r}{2}\Big)^n x\Big)$$
exists for each $x\in X$ and defines a unique generalized Jensen quadratic mapping $Q:X\to Y$ such that
$$N(f(x)-Q(x),t)\ge\frac{(8-8L)t}{(8-8L)t+r^2 L\,\varphi(x,x,0)}.\tag{3.2}$$
Proof. Putting $x=y$ and $z=0$ in (3.1), we have
$$N\Big(2f\Big(\frac{2x}{r}\Big)-\frac{8}{r^2}f(x),\,t\Big)\ge\frac{t}{t+\varphi(x,x,0)}\tag{3.3}$$
for all $x\in X$ and $t>0$. Consider the set $S:=\{g:X\to Y,\ g(0)=0\}$ and the generalized metric $d$ on $S$ defined by
$$d(g,h)=\inf\Big\{\mu\in\mathbb R_+:\ N(g(x)-h(x),\mu t)\ge\frac{t}{t+\varphi(x,x,0)},\ \forall x\in X,\ t>0\Big\},$$
where $\inf\emptyset=+\infty$. It is easy to show that $(S,d)$ is complete (see [16, Lemma 2.1]). Now, we consider a linear mapping $J:S\to S$ such that
$$Jg(x):=\frac{4}{r^2}\,g\Big(\frac{rx}{2}\Big)$$
for all $x\in X$. Let $g,h\in S$ satisfy $d(g,h)=\varepsilon$. Then
$$N(g(x)-h(x),\varepsilon t)\ge\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$. Hence
$$N(Jg(x)-Jh(x),L\varepsilon t)=N\Big(\frac{4}{r^2}g\Big(\frac{rx}{2}\Big)-\frac{4}{r^2}h\Big(\frac{rx}{2}\Big),L\varepsilon t\Big)=N\Big(g\Big(\frac{rx}{2}\Big)-h\Big(\frac{rx}{2}\Big),\frac{r^2 L}{4}\varepsilon t\Big)$$
$$\ge\frac{\frac{r^2 Lt}{4}}{\frac{r^2 Lt}{4}+\varphi\big(\frac{rx}{2},\frac{rx}{2},0\big)}\ge\frac{\frac{r^2 Lt}{4}}{\frac{r^2 Lt}{4}+\frac{r^2 L}{4}\varphi(x,x,0)}=\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$. Thus $d(g,h)=\varepsilon$ implies that $d(Jg,Jh)\le L\varepsilon$. This means that $d(Jg,Jh)\le L\,d(g,h)$ for all $g,h\in S$.
It follows from (3.3) that
$$N\Big(f(x)-\frac{4}{r^2}f\Big(\frac{rx}{2}\Big),\,t\Big)\ge\frac{2t}{2t+\varphi\big(\frac{rx}{2},\frac{rx}{2},0\big)}\ge\frac{2t}{2t+\frac{r^2 L}{4}\varphi(x,x,0)}.$$
Therefore,
$$N\Big(f(x)-\frac{4}{r^2}f\Big(\frac{rx}{2}\Big),\,\frac{r^2 Lt}{8}\Big)\ge\frac{t}{t+\varphi(x,x,0)}.$$
This means
$$d(f,Jf)\le\frac{r^2 L}{8}.$$
By Theorem 2.1, there exists a mapping $Q:X\to Y$ satisfying the following:
(1) $Q$ is a fixed point of $J$, that is,
$$Q\Big(\frac{rx}{2}\Big)=\frac{r^2}{4}Q(x)\tag{3.4}$$
for all $x\in X$. The mapping $Q$ is a unique fixed point of $J$ in the set
$$\Omega=\{h\in S: d(f,h)<\infty\}.$$
This implies that $Q$ is a unique mapping satisfying (3.4) such that there exists $\mu\in(0,\infty)$ satisfying
$$N(f(x)-Q(x),\mu t)\ge\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$.
(2) $d(J^n f,Q)\to 0$ as $n\to\infty$. This implies the equality
$$N\text{-}\lim_{n\to\infty}\Big(\frac{4}{r^2}\Big)^n f\Big(\Big(\frac{r}{2}\Big)^n x\Big)=Q(x)$$
for all $x\in X$.
(3) $d(f,Q)\le\frac{d(f,Jf)}{1-L}$ with $f\in\Omega$, which implies the inequality
$$d(f,Q)\le\frac{r^2 L}{8-8L}.$$
This implies that the inequality (3.2) holds.
Replacing $x$, $y$ and $z$ by $(\tfrac{r}{2})^n x$, $(\tfrac{r}{2})^n y$ and $(\tfrac{r}{2})^n z$, respectively, in (3.1), we get
$$N\Big(\Big(\frac{4}{r^2}\Big)^n\Big[f\Big(\Big(\frac{r}{2}\Big)^n\Big[\frac{x+y}{r}\pm sz\Big]\Big)+f\Big(\Big(\frac{r}{2}\Big)^n\Big[\frac{x-y}{r}\pm sz\Big]\Big)-\frac{4}{r^2}f\Big(\Big(\frac{r}{2}\Big)^n x\Big)-\frac{4}{r^2}f\Big(\Big(\frac{r}{2}\Big)^n y\Big)-4s^2 f\Big(\Big(\frac{r}{2}\Big)^n z\Big)\Big],\,t\Big)$$
$$\ge\frac{\big(\frac{4}{r^2}\big)^n t}{\big(\frac{4}{r^2}\big)^n t+\varphi\big(\big(\frac{r}{2}\big)^n x,\big(\frac{r}{2}\big)^n y,\big(\frac{r}{2}\big)^n z\big)}\ge\frac{\big(\frac{4}{r^2}\big)^n t}{\big(\frac{4}{r^2}\big)^n t+\big(\frac{r^2}{4}\big)^n L^n\varphi(x,y,z)}$$
for all $x,y,z\in X$, $t>0$ and all $n\in\mathbb N$. Since
$$\lim_{n\to\infty}\frac{\big(\frac{4}{r^2}\big)^n t}{\big(\frac{4}{r^2}\big)^n t+\big(\frac{r^2}{4}\big)^n L^n\varphi(x,y,z)}=1$$
for all $x,y,z\in X$ and all $t>0$, we deduce that $N(DQ(x,y,z),t)=1$ for all $x,y,z\in X$ and all $t>0$. Thus the mapping $Q:X\to Y$ is quadratic, as desired. $\square$
Corollary 3.1. Let $\theta\ge 0$ and let $p$ be a real number with $p>2$ and $0<|r|<2$. Let $X$ be a normed vector space with norm $\|\cdot\|$. Let $f:X\to Y$ be a mapping satisfying $f(0)=0$ and
$$N(Df(x,y,z),t)\ge\frac{t}{t+\theta(\|x\|^p+\|y\|^p+\|z\|^p)}\tag{3.5}$$
for all $x,y,z\in X$ and all $t>0$. Then
$$Q(x):=N\text{-}\lim_{n\to\infty}\Big(\frac{4}{r^2}\Big)^n f\Big(\Big(\frac{r}{2}\Big)^n x\Big)$$
exists for each $x\in X$ and defines a generalized Jensen quadratic mapping $Q:X\to Y$ such that
$$N(f(x)-Q(x),t)\ge\frac{4(2^p-|r|^p)t}{4(2^p-|r|^p)t+r^2|r|^p\theta\|x\|^p}.$$
Proof. The proof follows from Theorem 3.1 by taking
$$\varphi(x,y,z):=\theta(\|x\|^p+\|y\|^p+\|z\|^p)$$
for all $x,y,z\in X$. Then we can choose $L=\big(\tfrac{|r|}{2}\big)^p$ and we get the desired result. $\square$
Corollary 3.2. Let $\theta\ge 0$ and let $p$ be a real number with $0<p<2$ and $|r|>2$. Let $X$ be a normed vector space with norm $\|\cdot\|$. Let $f:X\to Y$ be a mapping satisfying $f(0)=0$ and (3.5). Then
$$Q(x):=N\text{-}\lim_{n\to\infty}\Big(\frac{4}{r^2}\Big)^n f\Big(\Big(\frac{r}{2}\Big)^n x\Big)$$
exists for each $x\in X$ and defines a generalized Jensen quadratic mapping $Q:X\to Y$ such that
$$N(f(x)-Q(x),t)\ge\frac{(|r|^p-2^p)t}{(|r|^p-2^p)t+r^2\cdot 2^{p-2}\theta\|x\|^p}.$$
Proof. The proof follows from Theorem 3.1 by taking
$$\varphi(x,y,z):=\theta(\|x\|^p+\|y\|^p+\|z\|^p)$$
for all $x,y,z\in X$. Then we can choose $L=\big(\tfrac{2}{|r|}\big)^p$ and we get the desired result. $\square$
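To make the passage from (3.2) to the bound in Corollary 3.1 explicit (a short verification added here, not spelled out in the original), substitute $\varphi(x,x,0)=2\theta\|x\|^p$ and $L=(|r|/2)^p$ into (3.2):
$$N(f(x)-Q(x),t)\ \ge\ \frac{(8-8L)t}{(8-8L)t+r^2L\,\varphi(x,x,0)}
=\frac{8\,\frac{2^p-|r|^p}{2^p}\,t}{8\,\frac{2^p-|r|^p}{2^p}\,t+2r^2\,\frac{|r|^p}{2^p}\,\theta\|x\|^p}
=\frac{4(2^p-|r|^p)t}{4(2^p-|r|^p)t+r^2|r|^p\theta\|x\|^p},$$
which is exactly the estimate stated in Corollary 3.1; the bound in Corollary 3.2 follows in the same way with $L=(2/|r|)^p$.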
Theorem 3.2. Let $\varphi:X^3\to[0,\infty)$ be a function such that there exists an $L<1$ with
$$\varphi(x,y,z)\le\frac{4L}{r^2}\,\varphi\Big(\frac{rx}{2},\frac{ry}{2},\frac{rz}{2}\Big)$$
for all $x,y,z\in X$. Let $f:X\to Y$ be a mapping satisfying $f(0)=0$ and satisfying (3.1). Then
$$R(x):=N\text{-}\lim_{n\to\infty}\Big(\frac{r^2}{4}\Big)^n f\Big(\Big(\frac{2}{r}\Big)^n x\Big)$$
exists for each $x\in X$ and defines a generalized Jensen quadratic mapping $R:X\to Y$ such that
$$N(f(x)-R(x),t)\ge\frac{(8-8L)t}{(8-8L)t+r^2\varphi(x,x,0)}.\tag{3.6}$$
Proof. Let $(S,d)$ be the generalized metric space defined as in the proof of Theorem 3.1. Consider the linear mapping $J:S\to S$ such that
$$Jg(x):=\frac{r^2}{4}\,g\Big(\frac{2x}{r}\Big)$$
for all $x\in X$. Let $g,h\in S$ be such that $d(g,h)=\varepsilon$. Then
$$N(g(x)-h(x),\varepsilon t)\ge\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$. Hence
$$N(Jg(x)-Jh(x),L\varepsilon t)=N\Big(\frac{r^2}{4}g\Big(\frac{2x}{r}\Big)-\frac{r^2}{4}h\Big(\frac{2x}{r}\Big),L\varepsilon t\Big)=N\Big(g\Big(\frac{2x}{r}\Big)-h\Big(\frac{2x}{r}\Big),\frac{4L}{r^2}\varepsilon t\Big)$$
$$\ge\frac{\frac{4Lt}{r^2}}{\frac{4Lt}{r^2}+\varphi\big(\frac{2x}{r},\frac{2x}{r},0\big)}\ge\frac{\frac{4Lt}{r^2}}{\frac{4Lt}{r^2}+\frac{4L}{r^2}\varphi(x,x,0)}=\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$. Thus $d(g,h)=\varepsilon$ implies that $d(Jg,Jh)\le L\varepsilon$. This means that $d(Jg,Jh)\le L\,d(g,h)$ for all $g,h\in S$.
It follows from (3.3) that
$$N\Big(\frac{r^2}{4}f\Big(\frac{2x}{r}\Big)-f(x),\,\frac{r^2 t}{8}\Big)\ge\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$. So $d(f,Jf)\le\frac{r^2}{8}$. By Theorem 2.1, there exists a mapping $R:X\to Y$ satisfying the following:
(1) $R$ is a fixed point of $J$, that is,
$$R(x)=\frac{r^2}{4}R\Big(\frac{2x}{r}\Big)\tag{3.7}$$
for all $x\in X$. The mapping $R$ is a unique fixed point of $J$ in the set
$$\Omega=\{h\in S: d(f,h)<\infty\}.$$
This implies that $R$ is a unique mapping satisfying (3.7) such that there exists $\mu\in(0,\infty)$ satisfying
$$N(f(x)-R(x),\mu t)\ge\frac{t}{t+\varphi(x,x,0)}$$
for all $x\in X$ and $t>0$.
(2) $d(J^n f,R)\to 0$ as $n\to\infty$. This implies the equality
$$N\text{-}\lim_{n\to\infty}\Big(\frac{r^2}{4}\Big)^n f\Big(\Big(\frac{2}{r}\Big)^n x\Big)=R(x)$$
for all $x\in X$.
(3) $d(f,R)\le\frac{d(f,Jf)}{1-L}$ with $f\in\Omega$, which implies the inequality
$$d(f,R)\le\frac{r^2}{8-8L}.$$
This implies that the inequality (3.6) holds.
The rest of the proof is similar to that of the proof of Theorem 3.1. $\square$
Corollary 3.3. Let $\theta\ge 0$ and let $p$ be a real number with $p>2$ and $|r|>2$. Let $X$ be a normed vector space with norm $\|\cdot\|$. Let $f:X\to Y$ be a mapping satisfying $f(0)=0$ and (3.5). Then
$$R(x):=N\text{-}\lim_{n\to\infty}\Big(\frac{r^2}{4}\Big)^n f\Big(\Big(\frac{2}{r}\Big)^n x\Big)$$
exists for each $x\in X$ and defines a generalized Jensen quadratic mapping $R:X\to Y$ such that
$$N(f(x)-R(x),t)\ge\frac{(|r|^p-2^p)t}{(|r|^p-2^p)t+4^{-1}\cdot r^2|r|^p\theta\|x\|^p}.$$
Proof. The proof follows from Theorem 3.2 by taking
$$\varphi(x,y,z):=\theta(\|x\|^p+\|y\|^p+\|z\|^p)$$
for all $x,y,z\in X$. Then we can choose $L=\big(\tfrac{|r|}{2}\big)^{-p}$ and we get the desired result. $\square$
Corollary 3.4. Let $\theta\ge 0$ and let $p$ be a real number with $0<p<2$ and $|r|<2$. Let $X$ be a normed vector space with norm $\|\cdot\|$. Let $f:X\to Y$ be a mapping satisfying $f(0)=0$ and (3.5). Then
$$R(x):=N\text{-}\lim_{n\to\infty}\Big(\frac{r^2}{4}\Big)^n f\Big(\Big(\frac{2}{r}\Big)^n x\Big)$$
exists for each $x\in X$ and defines a generalized Jensen quadratic mapping $R:X\to Y$ such that
$$N(f(x)-R(x),t)\ge\frac{(2^p-|r|^p)t}{(2^p-|r|^p)t+r^2\cdot 2^{p-2}\theta\|x\|^p}.$$
Proof. The proof follows from Theorem 3.2 by taking
$$\varphi(x,y,z):=\theta(\|x\|^p+\|y\|^p+\|z\|^p)$$
for all $x,y,z\in X$. Then we can choose $L=\big(\tfrac{|r|}{2}\big)^p$ and we get the desired result. $\square$
References [1] T. Bag and S.K. Samanta, Finite dimensional fuzzy normed linear spaces, Journal of Fuzzy Mathematics 11 (2003), 687–705. [2] T. Bag and S.K. Samanta, Fuzzy bounded linear operators, Fuzzy Sets and Systems 151 (2005), 513–547. [3] L. C˘adariu and V. Radu, Fixed points and the stability of Jensen’s functional equation, J. Inequal. Pure Appl. Math. 4, no. 1, Art. ID 4 (2003). [4] S.C. Cheng and J.N. Mordeson, Fuzzy linear operators and fuzzy normed linear spaces, Bulletin of Calcutta Mathematical Society 86 (1994), 429–436. [5] P.W. Cholewa, Remarks on the stability of functional equations, Aequationes Mathematicae 27 (1984), 76–86. [6] S. Czerwik, On the stability of the quadratic mapping in normed spaces, Abh. Math. Sem. Univ. Hambourg 62 (1992), 239–248. [7] J. Diaz and B. Margolis, A fixed point theorem of the alternative for contractions on a generalized complete metric space, Bull. Amer. Math. Soc. 74 (1968), 305–309. [8] C. Felbin, Finite-dimensional fuzzy normed linear space, Fuzzy Sets and Systems 48 (1992), 239– 248. [9] P. Gˇavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184 (1994), 431–436. [10] D.H. Hyers, On the stability of the linear functional equation, Proc. Natl. Acad. Sci. USA 27 (1941), 222–224. [11] K. Jun and Y. Cho, Stability of generalized Jensen quadratic functional equations, Journal of Chungcheong Mathematical Society 29 (2007), 515–523. [12] S. Jung, Hyers-Ulam-Rassias Stability of Functional Equations in Mathematical Analysis, Hadronic Press, Palm Harbor, 2001. [13] A.K. Katsaras, Fuzzy topological vector spaces, Fuzzy Sets and Systems 12 (1984), 143–154. [14] I. Karmosil and J. Michalek, Fuzzy metric and statistical metric spaces, Kybernetica 11 (1975), 326–334. [15] S.V. Krishna and K.K.M. Sarma, Separation of fuzzy normed linear spaces, Fuzzy Sets and Systems 63 (1994), 207–217. [16] D. Mihet and V. Radu, On the stability of the additive Cauchy functional equation in random normed spaces, J. Math. Anal. Appl. 343 (2008), 567–572. [17] A.K. Mirmostafaee, M. Mirzavaziri and M.S. Moslehian, Fuzzy stability of the Jensen functional equation, Fuzzy Sets and Systems 159 (2008), 730–738. [18] A.K. Mirmostafaee and M.S. Moslehian, Fuzzy versions of Hyers-Ulam-Rassias theorem, Fuzzy Sets and Systems 159 (2008), 720–729. [19] A. Najati and F. Moradlou, Hyers-Ulam-Rassias stability of the Apollonius type quadratic mapping in non-Archimedean spaces, Tamsui Oxford J. Math. Sci. 24 (2008), 367–380. [20] C. Park, On the stability of the linear mapping in Banach modules, J. Math. Anal. Appl. 275 (2002), 711–720. [21] C. Park, Modefied Trif ’s functional equations in Banach modules over a C ∗ -algebra and approximate algebra homomorphism, J. Math. Anal. Appl. 278 (2003), 93–108.
[22] C. Park, Fuzzy stability of a functional equation associated with inner product spaces, Fuzzy Sets and Systems 160 (2009), 1632–1642. [23] Th.M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297–300. [24] Th.M. Rassias, On the stability of the quadratic functional equation and it’s application, Studia Univ. Babes-Bolyai XLIII (1998), 89–124. [25] Th.M. Rassias, On the stability of functional equations in Banach spaces, J. Math. Anal. Appl. 251 (2000), 264–284. ˇ [26] Th.M. Rassias and P. Semrl, On the Hyers-Ulam stability of linear mappings, J. Math. Anal. Appl. 173 (1993), 325–338. [27] F. Skof, Local properties and approximation of operators, Rendiconti del Seminario Matematico e Fisico di Milano 53 (1983), 113–129. [28] S.M. Ulam, Problems in Modern Mathematics, John Wiley and Sons, New York, NY, USA, 1964. Hassan Azadi Kenary Department of Mathematics, College of Sciences, Yasouj University, Yasouj 75914-353, Iran E-mail address: [email protected] Choonkil Park Department of Mathematics, Research Institute for Natural Sciences, Hanyang University, Seoul 133-791, Korea E-mail address: [email protected] Sung Jin Lee Department of Mathematics, Daejin University, Kyeonggi 487-711, Korea E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.6, 1130-1138 , 2012, COPYRIGHT 2012 EUDOXUS1130 PRESS, LLC
Landau type inequalities on time scales George A. Anastassiou Department of Mathematical Sciences University of Memphis Memphis, TN 38152, U.S.A. [email protected] Abstract Here we prove some new type Landau inequalities on Time Scales. Both delta and nabla cases are presented. We give applications.
2010 AMS Subject Classification : 26D10, 39A12, 93C70. Key Words and Phrases: Time Scale, Landau inequality, delta and nabla derivatives.
1 Introduction
We are motivated by [27], where E. Landau in 1913 proved the following two theorems:
Theorem 1 Let $f\in C^2(\mathbb R_+,\mathbb R)$ with $\|f\|_{\infty,\mathbb R_+},\ \|f''\|_{\infty,\mathbb R_+}<\infty$. Then
$$\|f'\|_{\infty,\mathbb R_+}\le 2\sqrt{\|f\|_{\infty,\mathbb R_+}\,\|f''\|_{\infty,\mathbb R_+}},$$
with $2$ the best constant.
Theorem 2 Let $f\in C^2(\mathbb R,\mathbb R)$ with $\|f\|_{\infty,\mathbb R},\ \|f''\|_{\infty,\mathbb R}<\infty$. Then
$$\|f'\|_{\infty,\mathbb R}\le\sqrt{2\,\|f\|_{\infty,\mathbb R}\,\|f''\|_{\infty,\mathbb R}},$$
with $\sqrt{2}$ the best constant.
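For a concrete feel for Theorem 2 (an illustrative Python sketch added here, not part of the original paper), take $f(x)=\sin x$ on $\mathbb R$, for which $\|f\|_\infty=\|f'\|_\infty=\|f''\|_\infty=1$ and the inequality reads $1\le\sqrt 2$; a dense sampling confirms the numerics:

```python
# Sketch: sample f(x) = sin(x) and check Landau's inequality ||f'|| <= sqrt(2 ||f|| ||f''||).
# The grid is an illustrative discretization of the sup-norms over R.
import math

xs = [k * 1e-3 for k in range(-100_000, 100_001)]   # dense grid on [-100, 100]
f  = max(abs(math.sin(x)) for x in xs)               # ~ ||f||_inf
f1 = max(abs(math.cos(x)) for x in xs)               # ~ ||f'||_inf
f2 = max(abs(-math.sin(x)) for x in xs)              # ~ ||f''||_inf
assert f1 <= math.sqrt(2 * f * f2) + 1e-12
print(f"{f1:.6f} <= sqrt(2 * {f:.6f} * {f2:.6f}) = {math.sqrt(2 * f * f2):.6f}")
```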
In this article we establish a new type of Landau inequalities on time scales. We treat both delta and nabla cases with applications. To keep the paper short, for basics on time scales we refer the reader to the following sources: [1], [2], [9], [12], [14], [15], [16], [19], [20], [21], [23], [24], [25], [26], [28], [29].
2 Background
Let $f\in C^1_{rd}(T)$, $s,t\in T$, where $T$ is a time scale; then
$$\int_s^t f^{\Delta}(\tau)\,\Delta\tau=f(t)-f(s).$$
Here we use $h_0(t,s)=1$, $\forall\, s,t\in T$, and for $k\in\mathbb N_0=\mathbb N\cup\{0\}$,
$$h_{k+1}(t,s)=\int_s^t h_k(\tau,s)\,\Delta\tau,\quad\forall\, s,t\in T.$$
Notice that
$$h_1(r,s)=\int_s^r 1\,\Delta\tau=r-s,\quad\forall\, r,s\in T,\qquad h_2(r,s)=\int_s^r(\tau-s)\,\Delta\tau,\quad\forall\, r,s\in T.$$
If $r\ge s$, then $0\le\tau-s\le r-s$ and $0\le h_2(r,s)\le(r-s)^2$. If $r\le s$, then
$$0\le h_2(r,s)=\int_r^s(s-\tau)\,\Delta\tau\le(s-r)^2.$$
Hence in general
$$0\le h_2(r,s)\le(r-s)^2,\quad\forall\, r,s\in T.$$
Also $h_2(r,s)\ne h_2(s,r)$.
Similarly we define and use
$$\widehat h_0(t,s)=1,\quad\forall\, s,t\in T,\qquad \widehat h_{k+1}(t,s)=\int_s^t\widehat h_k(\tau,s)\,\nabla\tau,\quad\forall\, s,t\in T,\ k\in\mathbb N_0.$$
We have that
$$\widehat h_1(t,s)=t-s,\qquad \widehat h_2(t,s)=\int_s^t(\tau-s)\,\nabla\tau.$$
If $t\ge s$, then $0\le\widehat h_2(t,s)\le(t-s)^2$. If $t\le s$, then
$$0\le\widehat h_2(t,s)=\int_t^s(s-\tau)\,\nabla\tau\le(s-t)^2.$$
Therefore
$$0\le\widehat h_2(t,s)\le(t-s)^2,\quad\forall\, t,s\in T.$$
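For the discrete time scale $T=\mathbb Z$, the delta integral is a finite sum, so $h_2$ can be computed directly and the bound $0\le h_2(r,s)\le(r-s)^2$ checked numerically (an illustrative Python sketch, not part of the original paper):

```python
# Sketch: on T = Z the delta integral of g from s to r (s <= r) is sum_{tau=s}^{r-1} g(tau),
# so h_2(r, s) = sum_{tau=s}^{r-1} (tau - s) = (r - s)(r - s - 1)/2 for r >= s.
def h2_on_Z(r, s):
    if r >= s:
        return sum(tau - s for tau in range(s, r))
    # for r < s the same definition gives h_2(r, s) = sum_{tau=r}^{s-1} (s - tau)  (illustrative)
    return sum(s - tau for tau in range(r, s))

for s in range(-5, 6):
    for r in range(-5, 6):
        val = h2_on_Z(r, s)
        assert 0 <= val <= (r - s) ** 2
print("0 <= h_2(r, s) <= (r - s)^2 verified on a sample grid of Z")
```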
Let $f\in C^1_{ld}(T)$; then
$$\int_s^t f^{\nabla}(\tau)\,\nabla\tau=f(t)-f(s).$$
We need
Theorem 3 (see [9], [13], p. 634) Here $T=T^k$. Let $f\in C^1_{rd}(T)$, $a,b,c\in T$ with $a\le c\le b$. Then
$$\left|\frac{1}{b-a}\int_a^b f(t)\,\Delta t-f(c)\right|\le\left(\frac{h_2(a,c)+h_2(b,c)}{b-a}\right)\|f^{\Delta}\|_{\infty,[a,b]\cap T}.\tag{1}$$
The last is a basic $\Delta$-Ostrowski inequality on time scales; for basics on $\mathbb R$ see [30].
Remark 4 We notice that
$$\text{R.H.S.}(1)\le\left(\frac{(c-a)^2+(b-c)^2}{b-a}\right)\|f^{\Delta}\|_{\infty,[a,b]\cap T}\le\frac{(b-a)^2}{b-a}\|f^{\Delta}\|_{\infty,[a,b]\cap T}=(b-a)\,\|f^{\Delta}\|_{\infty,[a,b]\cap T}.$$
Therefore
$$\left|\frac{1}{b-a}\int_a^b f(t)\,\Delta t-f(c)\right|\le(b-a)\,\|f^{\Delta}\|_{\infty,[a,b]\cap T}.\tag{2}$$
We need
Theorem 5 (see [12], [13], p. 659) Here $T=T^k=T_k$. Let $f\in C^1_{ld}(T)$, $a,b,c\in T$ with $a\le c\le b$. Then
$$\left|\frac{1}{b-a}\int_a^b f(t)\,\nabla t-f(c)\right|\le\left(\frac{\widehat h_2(a,c)+\widehat h_2(b,c)}{b-a}\right)\|f^{\nabla}\|_{\infty,[a,b]\cap T}.\tag{3}$$
The last is a basic $\nabla$-Ostrowski inequality on time scales.
Remark 6 We notice that
$$\text{R.H.S.}(3)\le\left(\frac{(c-a)^2+(b-c)^2}{b-a}\right)\|f^{\nabla}\|_{\infty,[a,b]\cap T}\le\frac{(b-a)^2}{b-a}\|f^{\nabla}\|_{\infty,[a,b]\cap T}=(b-a)\,\|f^{\nabla}\|_{\infty,[a,b]\cap T}.$$
Therefore
$$\left|\frac{1}{b-a}\int_a^b f(t)\,\nabla t-f(c)\right|\le(b-a)\,\|f^{\nabla}\|_{\infty,[a,b]\cap T}.\tag{4}$$
3 Main Results
We give the following $\Delta$-Landau type inequality.
Theorem 7 Here $T=T^k$. Let $f\in C^2_{rd}(T)$, $a,b\in T$, $a<b$. Then
$$\|f^{\Delta}\|_{\infty,[a,b]\cap T}\le\left(\frac{2}{b-a}\right)\|f\|_{\infty,[a,b]\cap T}+(b-a)\,\|f^{\Delta^2}\|_{\infty,[a,b]\cap T}.\tag{5}$$
Proof. By (2) we have
$$\left|\frac{1}{b-a}\int_a^b f^{\Delta}(t)\,\Delta t-f^{\Delta}(c)\right|\le(b-a)\,\|f^{\Delta^2}\|_{\infty,[a,b]\cap T},$$
for any $c\in T$ with $a\le c\le b$. Therefore
$$\left|\frac{1}{b-a}\big(f(b)-f(a)\big)-f^{\Delta}(c)\right|\le(b-a)\,\|f^{\Delta^2}\|_{\infty,[a,b]\cap T}.$$
Hence
$$|f^{\Delta}(c)|-\frac{1}{b-a}\,|f(b)-f(a)|\le(b-a)\,\|f^{\Delta^2}\|_{\infty,[a,b]\cap T}$$
and
$$|f^{\Delta}(c)|\le\left(\frac{2}{b-a}\right)\|f\|_{\infty,[a,b]\cap T}+(b-a)\,\|f^{\Delta^2}\|_{\infty,[a,b]\cap T},$$
for any $c\in[a,b]\cap T$. The last implies (5). $\square$
Next we present a $\nabla$-Landau type inequality.
Theorem 8 Here $T=T^k=T_k$. Let $f\in C^2_{ld}(T)$, $a,b\in T$, $a<b$. Then
$$\|f^{\nabla}\|_{\infty,[a,b]\cap T}\le\frac{2\,\|f\|_{\infty,[a,b]\cap T}}{b-a}+(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap T}.\tag{6}$$
Proof. By (4) we get
$$\left|\frac{1}{b-a}\int_a^b f^{\nabla}(t)\,\nabla t-f^{\nabla}(c)\right|\le(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap T},$$
for any $c\in T$ with $a\le c\le b$. That is,
$$\left|\frac{f(b)-f(a)}{b-a}-f^{\nabla}(c)\right|\le(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap T},$$
and
$$|f^{\nabla}(c)|-\frac{1}{b-a}\,|f(b)-f(a)|\le(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap T},$$
and
$$|f^{\nabla}(c)|\le\frac{1}{b-a}\,|f(b)-f(a)|+(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap T}.$$
Therefore
$$|f^{\nabla}(c)|\le\frac{2\,\|f\|_{\infty,[a,b]\cap T}}{b-a}+(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap T},$$
for any $c\in[a,b]\cap T$, proving the claim. $\square$
4 Applications
i) When $T=\mathbb R$, see [3], [4].
ii) When $T=\mathbb Z$ (note that $\mathbb Z=\mathbb Z^k=\mathbb Z_k$), we have
$$f^{\Delta}(t)=f(t+1)-f(t),\qquad f^{\nabla}(t)=f(t)-f(t-1),\tag{7}$$
$$f^{\Delta^2}(t)=f(t)-2f(t+1)+f(t+2),\qquad f^{\nabla^2}(t)=f(t)-2f(t-1)+f(t-2),$$
where $f:\mathbb Z\to\mathbb R$. Then by (5) we get
$$\|f(\cdot+1)-f(\cdot)\|_{\infty,[a,b]\cap\mathbb Z}\le\left(\frac{2}{b-a}\right)\|f\|_{\infty,[a,b]\cap\mathbb Z}+(b-a)\,\|f(\cdot)-2f(\cdot+1)+f(\cdot+2)\|_{\infty,[a,b]\cap\mathbb Z}.\tag{8}$$
Also by (6) we have that
$$\|f(\cdot)-f(\cdot-1)\|_{\infty,[a,b]\cap\mathbb Z}\le\frac{2\,\|f\|_{\infty,[a,b]\cap\mathbb Z}}{b-a}+(b-a)\,\|f(\cdot)-2f(\cdot-1)+f(\cdot-2)\|_{\infty,[a,b]\cap\mathbb Z}.\tag{9}$$
(A numerical illustration of (8) is sketched after item iii) below.)
iii) Next let $q>1$, $q^{\mathbb Z}=\{q^k:k\in\mathbb Z\}$, and take $T=\overline{q^{\mathbb Z}}=q^{\mathbb Z}\cup\{0\}$, which is a very important time scale for $q$-difference equations. Note that $\overline{q^{\mathbb Z}}=\big(\overline{q^{\mathbb Z}}\big)^k=\big(\overline{q^{\mathbb Z}}\big)_k$.
Let $f:\overline{q^{\mathbb Z}}\to\mathbb R$. By [23], p. 17, we get that
$$f^{\Delta}(t)=\frac{f(qt)-f(t)}{(q-1)t},\quad\forall\, t\in\overline{q^{\mathbb Z}}\setminus\{0\},\qquad f^{\Delta}(0)=\lim_{s\to 0}\frac{f(s)-f(0)}{s},$$
provided that the limit exists. For $t\ne 0$ we have that
$$f^{\Delta^2}(t)=\frac{f(q^2t)-(q+1)f(qt)+qf(t)}{q(q-1)^2t^2},\qquad f^{\Delta^2}(0)=\lim_{s\to 0}\frac{f^{\Delta}(s)-f^{\Delta}(0)}{s},$$
provided that the limit exists. Here $0$ is a right-dense minimum and every other point in $\overline{q^{\mathbb Z}}$ is isolated; also $\sigma(t)=qt$ and $\rho(t)=t/q$, $t\in\overline{q^{\mathbb Z}}$, see [23], p. 16.
For $t\in\overline{q^{\mathbb Z}}\setminus\{0\}$ we have (see [14])
$$f^{\nabla}(t)=\frac{f(t)-f\!\big(\tfrac{t}{q}\big)}{t-\tfrac{t}{q}}=\frac{f\!\big(\tfrac{1}{q}t\big)-f(t)}{\big(\tfrac{1}{q}-1\big)t},$$
and since zero is a left-dense point of $\overline{q^{\mathbb Z}}$ we have
$$f^{\nabla}(0)=\lim_{s\to 0}\frac{f(s)-f(0)}{s},$$
provided that the limit exists. For $t\ne 0$ we have that
$$f^{\nabla^2}(t)=\frac{f\!\big(\tfrac{1}{q^2}t\big)-\big(\tfrac{1}{q}+1\big)f\!\big(\tfrac{1}{q}t\big)+\tfrac{1}{q}f(t)}{\tfrac{1}{q}\big(\tfrac{1}{q}-1\big)^2t^2},\qquad f^{\nabla^2}(0)=\lim_{s\to 0}\frac{f^{\nabla}(s)-f^{\nabla}(0)}{s},$$
provided that the limit exists.
Let $f\in C^2_{rd}\big(\overline{q^{\mathbb Z}}\big)$; $a,b\in\overline{q^{\mathbb Z}}$, $a<b$. Then by (5) we get
$$\|f^{\Delta}\|_{\infty,[a,b]\cap\overline{q^{\mathbb Z}}}\le\left(\frac{2}{b-a}\right)\|f\|_{\infty,[a,b]\cap\overline{q^{\mathbb Z}}}+(b-a)\,\|f^{\Delta^2}\|_{\infty,[a,b]\cap\overline{q^{\mathbb Z}}}.\tag{10}$$
If $f\in C^2_{ld}\big(\overline{q^{\mathbb Z}}\big)$, then by (6) we get
$$\|f^{\nabla}\|_{\infty,[a,b]\cap\overline{q^{\mathbb Z}}}\le\frac{2\,\|f\|_{\infty,[a,b]\cap\overline{q^{\mathbb Z}}}}{b-a}+(b-a)\,\|f^{\nabla^2}\|_{\infty,[a,b]\cap\overline{q^{\mathbb Z}}}.\tag{11}$$
(A numerical check of the first $\Delta$-derivative formula above on $q^{\mathbb Z}$ is also sketched below.)
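As promised after (9), here is a quick numerical illustration of the discrete Landau inequality (8) (a hypothetical Python sketch, not part of the original paper); the sample function and interval are arbitrary choices:

```python
# Sketch: check inequality (8) on T = Z for f(t) = sin(t) over [a, b] = [0, 10].
import math

a, b = 0, 10
f = lambda t: math.sin(t)

lhs = max(abs(f(t + 1) - f(t)) for t in range(a, b + 1))
rhs = (2 / (b - a)) * max(abs(f(t)) for t in range(a, b + 1)) \
      + (b - a) * max(abs(f(t) - 2 * f(t + 1) + f(t + 2)) for t in range(a, b + 1))
assert lhs <= rhs
print(f"||f^Delta|| = {lhs:.4f} <= {rhs:.4f}")
```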
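Similarly, the first-order delta derivative on $q^{\mathbb Z}$ can be checked on a monomial: for $f(t)=t^2$ one has $f^{\Delta}(t)=\frac{(qt)^2-t^2}{(q-1)t}=(q+1)t$ (a hypothetical Python sketch; $q$ and the sample points are illustrative):

```python
# Sketch: on q^Z the delta derivative of f(t) = t^2 is (q + 1) * t.
q = 1.5
f = lambda t: t * t
f_delta = lambda t: (f(q * t) - f(t)) / ((q - 1) * t)

for k in range(-5, 6):          # sample points t = q^k in q^Z
    t = q ** k
    assert abs(f_delta(t) - (q + 1) * t) < 1e-12
print("f^Delta(t) == (q + 1) t verified on sample points of q^Z")
```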
5 Addendum
Similarly as in Theorem 1.24 (i), [23], pp. 9-10, we have
Theorem 9 Let $\alpha$ be a constant and $m\in\mathbb N$. For $f(t)=(t-\alpha)^m$ we have
$$f^{\nabla}(t)=\sum_{\nu=0}^{m-1}(\rho(t)-\alpha)^{\nu}(t-\alpha)^{m-1-\nu}.\tag{12}$$
Proof. By mathematical induction: if $m=1$, then $f(t)=t-\alpha$ and $f^{\nabla}(t)=1$, so that (12) is valid. Now we assume that (12) is true for $m\in\mathbb N$; we will prove it correct for $m+1$. So let $F(t)=(t-\alpha)^{m+1}=(t-\alpha)f(t)$. Hence by the $\nabla$-product rule (see p. 331, Theorem 8.41(iii), of [23]) we get:
$$F^{\nabla}(t)=\big((t-\alpha)f(t)\big)^{\nabla}=\big(f(t)(t-\alpha)\big)^{\nabla}=f^{\nabla}(t)(t-\alpha)+f(\rho(t))(t-\alpha)^{\nabla}=f(\rho(t))+(t-\alpha)f^{\nabla}(t)$$
$$=(\rho(t)-\alpha)^m+(t-\alpha)\sum_{\nu=0}^{m-1}(\rho(t)-\alpha)^{\nu}(t-\alpha)^{m-1-\nu}=(\rho(t)-\alpha)^m+\sum_{\nu=0}^{m-1}(\rho(t)-\alpha)^{\nu}(t-\alpha)^{m-\nu}=\sum_{\nu=0}^{m}(\rho(t)-\alpha)^{\nu}(t-\alpha)^{m-\nu}.$$
The claim is proved. $\square$
References [1] R. Agarwal, M. Bohner, Basic Calculus on time scales and some of its applications, Results Math. 35(1999), no. 1-2, 3-22. [2] R. Agarwal, M. Bohner, A. Peterson, Inequalities on time scales: a survey, Math. Inequalities & Applications, Vol. 4, no. 4, (2001), 535-557. [3] A. Aglic Aljinovic, Lj. Marangunic, J. Pecaric, On Landau type inequalities via extension of Montgomery identity, Euler and Fink identities, Nonlinear Funct. Anal. & Appl., Vol. 10, No. 2(2005), 273-283. [4] A. Aglic Aljinovic, Lj. Marangunic, J. Pecaric, On Landau type inequalities via Ostrowski inequalities, Nonlinear Funct. Anal. & Appl., Vol. 10, No. 4 (2005), 565-579.
[5] G.A. Anastassiou, Multivariate Ostrowski type inequalities, Acta Math. Hungarica 76 (4) (1997), 267-278. [6] G. Anastassiou, Univariate Ostrowski inequalities, Revisited, Monatshefte für Mathematik 135 (2002), 175-189. [7] G.A. Anastassiou, Multivariate Montgomery identities and Ostrowski inequalities, Numer. Funct. Anal. and Opt. 23 (2002), no. 3-4, 247-263. [8] G.A. Anastassiou, Multidimensional Ostrowski inequalities, revisited, Acta Mathematica Hungarica, 97, no. 4 (2002), 339-353. [9] G.A. Anastassiou, Time scales Inequalities, Intern. J. of Difference Equations, Vol. 5 (1) (2010), 1-23. [10] G.A. Anastassiou, Probabilistic inequalities, World Scientific, Singapore, New Jersey, 2010. [11] G.A. Anastassiou, Advanced Inequalities, World Scientific, Singapore, New Jersey, 2011. [12] G.A. Anastassiou, Nabla Time Scales Inequalities, Editor Al. Paterson, special issue on Time Scales, International Journal of Dynamical Systems and Difference Equations, Vol. 3, No’s 1/2 (2011), 59-83. [13] G.A. Anastassiou, Intelligent Mathematics: Springer, New York, Heidelberg, 2011.
Computational Analysis,
[14] D.R. Anderson, Taylor Polynomials for nabla dynamic equations on time scales, Panamer. Math. J., 12 (4): 17-27, 2002. [15] D. Anderson, J. Bullock, L. Erbe, A. Peterson, H. Tran, Nabla Dynamic equations on time scales, Panamer. Math. J., 13 (2003), no. 1, 1-47. [16] F. Atici, D. Biles, A. Lebedinsky, An application of time scales to economics, Mathematical and Computer Modelling, 43 (2006), 718-726. [17] M. Bohner and G. Sh. Guseinov, Partial differentiation on time scales, Dynamic Systems and Applications 13 (2004), no. 3-4, 351-379. [18] M. Bohner, G. Guseinov, Multiple integration on time scales, Dynamic Systems and Applications, Vol. 14 (2005), no. 3-4, 579-606. [19] M. Bohner, G.S. Guseinov, Multiple Lebesgue integration on time scales, Advances in Difference Equations, Vol. 2006, Article ID 26391, pp. 1-12, DOI 10.1155/ADE/2006/26391.
[20] M. Bohner, G. Guseinov, Double integral calculus of variations on times scales, Computers and Mathematics with Appl., Vol. 54 (2007), 45-57. [21] M. Bohner, H. Luo, Singular second-order multipoint dynamic boundary value problems with mixed derivatives, Advances in Difference Equations, Vol. 2006, Article ID 54989, p. 1-15, DOI 10.1155/ADE/2006/54989. [22] M. Bohner and T. Matthews, Ostrowski inequalities on time scales, JIPAM. J. Inequal. Pure Appl. Math. 9 (2008), no. 1, Article 6, 8 pp. [23] M. Bohner, A. Peterson, Dynamic equations on time scales: An Introduction with Applications, Birkaüser, Boston (2001). [24] G. Guseinov, Integration on time scales, J. Math. Anal. Appl., 285 (2003), 107-127. [25] R. Higgins, A. Peterson, Cauchy functions and Taylor’s formula for Time scales T, (2004), in Proc. Sixth. Internat. Conf. on Difference equations, edited by B. Aulbach, S. Elaydi, G. Ladas, pp. 299-308, New Progress in Difference Equations, Augsburg, Germany, 2001, publisher: Chapman & Hall / CRC. [26] S. Hilger, Ein Maßketten kalkül mit Anwendung auf Zentrumsmannigfaltigkeiten, PhD. thesis, Universität Würzburg, Germany (1988). [27] E. Landau, Einige Ungleichungen für zweimal differenzierbaren Funktionen, Proc. London Math. Soc., ser. 2, 13 (1913), 43-49. [28] W.J. Liu, Q.A. Ngo, W.B. Chen, Ostrowski type inequalities on time scales for double integrals, Acta Appl. Math., Vol. 110 (2010), no. 1, 477-497. [29] N. Martins, D. Torres, Calculus of variations on time scales with nabla derivatives, Nonlinear Analysis, 71, no. 12 (2009), 763-773. [30] A. Ostrowski, Über die Absolutabweichung einer differentiebaren Funktion von ihrem Integralmittelwert, Comment. Math. Helv., 10 (1938), 226-227.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.6, 1139-1147 , 2012, COPYRIGHT 2012 EUDOXUS1139 PRESS, LLC
GENERALIZED INTEGRATION OPERATORS FROM THE SPACE OF INTEGRAL TRANSFORMS INTO BLOCH-TYPE SPACES
STEVO STEVIĆ∗, AJAY K. SHARMA AND S. D. SHARMA
Abstract. We consider the generalized integration operator
$$I^{(n)}_{\varphi,g}f(z)=\int_0^z f^{(n)}(\varphi(\zeta))\,g(\zeta)\,d\zeta,$$
where $g$ is a holomorphic function on the unit disk $\mathbb D$, $\varphi$ is a holomorphic self-map of $\mathbb D$, and $n\in\mathbb N_0$. The boundedness and compactness of the operator from the space of Cauchy transforms to the Bloch-type spaces are characterized.
N
1. Introduction Let D be the open unit disk in the complex plane C, ∂D its boundary, dA(z) the normalized area measure on D (i.e. A(D) = 1), dλ(z) = dA(z)/(1−|z|2 )2 , H(D) the class of all holomorphic functions on D, S(D) the class of all holomorphic self-maps of D and M the space of all complex Borel measures on ∂D. A positive continuous function ν on D is called a (weight). A weight ν is called typical if it is radial, i.e. ν(z) = ν(|z|), z ∈ D, and ν(|z|) decreasingly converges to 0 as |z| → 1. A positive continuous function ν on the interval [0, 1) is called normal ([16]) if there are δ ∈ [0, 1) and τ and t, 0 < τ < t such that ν(r) ν(r) is decreasing on [δ, 1) and lim = 0; τ r→1 (1 − r) (1 − r)τ ν(r) ν(r) is increasing on [δ, 1) and lim = ∞. t r→1 (1 − r)t (1 − r) If we say that a function ν : D → [0, ∞) is normal we also assume that it is radial. For a weight ν, the Bloch-type space Bν (D) = Bν consists of all f ∈ H(D) such that kf kBν := |f (0)| + bν (f ) = |f (0)| + sup ν(z)|f 0 (z)| < ∞. z∈D
The little Bloch-type space $\mathcal B_{\nu,0}(\mathbb D)=\mathcal B_{\nu,0}$ consists of all $f\in H(\mathbb D)$ such that
$$\lim_{|z|\to 1}\nu(z)|f'(z)|=0.$$
With the norm $\|\cdot\|_{\mathcal B_\nu}$ the Bloch-type space is a Banach space and the little Bloch-type space is a closed subspace of the Bloch-type space.
2000 Mathematics Subject Classification. Primary 47B38; Secondary 30H30.
Key words and phrases. Generalized integration operator, Cauchy transforms, Bloch-type space, boundedness, compactness.
∗ Corresponding author.
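As a concrete illustration (added here; this standard example is not spelled out in the paper), for the weight $\nu(z)=1-|z|^2$ the space $\mathcal B_\nu$ is the classical Bloch space, and $f(z)=\log\frac{1}{1-z}$ belongs to it, since
$$\nu(z)\,|f'(z)|=\frac{1-|z|^2}{|1-z|}\le\frac{(1-|z|)(1+|z|)}{1-|z|}=1+|z|\le 2,$$
so $b_\nu(f)\le 2<\infty$; this weight is also typical and normal in the sense defined above.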
If ν is normal we have that ν(a) ³ ν(z) when z ∈ D(a, (1 − |a|)/2). Using this fact, the subharmonicity of |f 0 |2 , the asymptotic relation |1 − a ¯z| ³ 1 − |a|, z ∈ D(a, 1 − |a|), and Proposition 1.4.10 in [13] we have that f ∈ Bν if and only if Z Ψ(f ) := sup |f 0 (z)|2 ν 2 (z)(1 − |ηa (z)|2 )2 dλ(z) < ∞, (1) a∈D
D
where ηa (z) = (a − z)/(1 − a ¯z), a, z ∈ D. Moreover b2ν (f ) ³ Ψ(f ), for f ∈ H(D). An f ∈ H(D) belongs to the space of Cauchy transforms K if Z dµ(ζ) (2) f (z) = ∂D 1 − ζz for some µ ∈ M. The space K becomes a Banach space under the norm ¾ ½ Z dµ(ζ) , (3) kf kK = inf kµk : f (z) = ∂D 1 − ζz where kµk denotes the total variation of measure µ. It is known that H 1 ⊂ K ⊂ ∩0 0, there is an m0 ∈ N, m0 ≥ n, such that for m ≥ m0 , we have that Z ³ n−1 ´2 Y (m − j) sup |ϕ(z)|2(m−n) ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε. (19) a∈D
j=0
D
From (19), we have that for each r ∈ (0, 1) r2(m−n)
³ n−1 Y
Z ´2 (m − j) sup a∈D
j=0
£ Qn−1
Hence for r ∈
j=0 (m0
ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε.
(20)
|ϕ(z)|>r
¢ 1 − j))− m0 −n , 1 , we have
Z ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε.
sup a∈D
(21)
|ϕ(z)|>r
Let f ∈ BK and ft (z) = f (tz), 0 < t < 1. Then sup0r Z (n) ≤ 2 sup |ft (ϕ(z)) − f (n) (ϕ(z))|2 ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) a∈D D Z (n) + 2 sup |ft (ϕ(z))|2 ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) a∈D
|ϕ(z)|>r (n)
≤ 2ε(1 + kft k2∞ ). Hence for every f ∈ BK , there is a δ0 ∈ (0, 1), δ0 = δ0 (f, ε), such that for r ∈ (δ0 , 1) Z |f (n) (ϕ(z))|2 ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε. (23) sup a∈D
|ϕ(z)|>r (n)
From the compactness of Iϕ,g : K → Bν , we have that for every ε > 0 there is a finite collection of functions f1 , f2 , . . . , fk ∈ BK such that for each f ∈ BK , there is a j ∈ {1, 2, . . . , k} such that Z (n) sup |f (n) (ϕ(z)) − fj (ϕ(z))|2 ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε. (24) a∈D
D
On the other hand, from (23) it follows that if δ := max1≤j≤k δj (fj , ε), then for r ∈ (δ, 1) and all j ∈ {1, 2, . . . , k} we have Z (n) sup |fj (ϕ(z))|2 ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε. (25) a∈D
|ϕ(z)|>r
From (24) and (25) we have that for r ∈ (δ, 1) and every f ∈ BK Z sup |f (n) (ϕ(z))|2 ν 2 (z)(1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < 4ε. a∈D
(26)
|ϕ(z)|>r
Applying (26) to the functions fζ (z) = 1/(1 − ζz), ζ ∈ ∂D, we obtain Z ν 2 (z) sup sup (1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < 4ε/(n!)2 , 2n+2 ζ∈∂D a∈D |ϕ(z)|>r |1 − ζϕ(z)| from which (18) follows. (ii) ⇒ (i). Assume that (fm )m∈N is a bounded sequence in K, say by L, converging to 0 uniformly on compacts of D as m → ∞. Then by the Weierstrass theorem, (k) fm also converges to 0 uniformly on compacts of D, as m → ∞, for each k ∈ N. (n) We show that kIϕ,g fm kBν → 0 as m → ∞, and then apply Lemma 1. For each m ∈ N, we can find a µm ∈ M with kµm k = kfm kK such that Z dµm (ζ) . (27) fm (z) = ∂D 1 − ζz Differentiating (27) n times, composing such obtained equation by ϕ, applying Jensen’s inequality, as well as the boundedness of the sequence (fm )m∈N , we obtain Z d|µm |(ζ) 2 2 (n) |fm (ϕ(w))| ≤ L(n!) . (28) 2n+2 ∂D |1 − ζϕ(w)| By the second condition in (ii), we have that for every ε > 0, there is an r1 ∈ (0, 1) such that for r ∈ (r1 , 1), we have Z ν 2 (z) sup sup (1 − |ηa (z)|2 )2 |g(z)|2 dλ(z) < ε. (29) 2n+2 ζ∈∂D a∈D |ϕ(z)|>r |1 − ζϕ(z)| By (1), we have (n) kIϕ,g fm k2Bν
³Z
Z
³ sup a∈D
´
+ |ϕ(z)|≤r
|ϕ(z)|>r
(n) |fm (ϕ(z))|2 (1−|ηa (z)|2 )2 |g(z)|2 ν 2 (z)dλ(z). (n)
Using (28), (29), Fubini’s theorem and the fact that sup|w|≤r |fm (w)|2 < ε, for sufficiently large m, say m ≥ m1 , we have that for m ≥ m1 Z (n) (n) (1 − |ηa (z)|2 )2 |g(z)|2 ν 2 (z)dλ(z) kIϕ,g fm k2Bν ≤ C sup |fm (ϕ(z))|2 sup |ϕ(z)|≤r
Z
Z
a∈D
|ϕ(z)|≤r
ν 2 (z) + C sup (1 − |ηa (z)|2 )2 |g(z)|2 dλ(z)d|µm |(ζ) a∈D ∂D |ϕ(z)|>r |1 − ζϕ(z)|2n+2 µ ¶ Z ≤ C M3 + d|µm |(ζ) ε ∂D
≤ C(M3 + L)ε. From this and since ε is an arbitrary positive number the claim follows. ¤
Theorem 3. Let $\nu$ be a weight, $n\in\mathbb N_0$, $g\in H(\mathbb D)$ and $\varphi\in S(\mathbb D)$. Then $I^{(n)}_{\varphi,g}:K\to\mathcal B_{\nu,0}$ is bounded if and only if condition (5) holds and
$$\lim_{|z|\to 1}\frac{\nu(z)\,|g(z)|}{|1-\overline\zeta\varphi(z)|^{n+1}}=0\tag{30}$$
for every ζ ∈ ∂D. Proof. First suppose that (5) and (30) hold. Note that (8) holds. From (5) and (30), the integrand in (8) tends to zero for every ζ ∈ ∂D, as |z| → 1, and is dominated by the function f (z) = M1 . Thus by the Lebesgue convergence theorem, the integral in (8) tends to zero as |z| → 1, implying (n) 0 lim ν(z)|(Ig,ϕ f ) (z)| = 0.
|z|→1
(n)
Hence, for every f ∈ K we have that Iϕ,g f ∈ Bν,0 , from which along with Theorem (n) 1, the boundedness of Iϕ,g : K → Bν,0 follows. (n) (n) Now suppose that Iϕ,g : K → Bν,0 is bounded. Then Iϕ,g fζ ∈ Bν,0 for every function fζ , ζ ∈ ∂D, defined in (11), that is lim
|z|→1
ν(z)|g(z)| =0 |1 − ζϕ(z)|n+1 (n)
(n)
for every ζ ∈ ∂D. Since Iϕ,g : K → Bν,0 is bounded, then Iϕ,g : K → Bν is bounded too. Thus by Theorem 1, (5) follows, as claimed. ¤ (n)
Corollary 3. Let ν be a weight, n ∈ N0 and g ∈ H(D). Then Ig bounded if and only if condition (14) holds and lim
|z|→1
ν(z)|g(z)| =0 |1 − ζz|n+1
: K → Bν,0 is (31)
for every ζ ∈ ∂D. Theorem 4. Let ν be a typical weight, n ∈ N0 , g ∈ H(D) and ϕ ∈ S(D). Then (n) Iϕ,g : K → Bν,0 is compact if and only if lim sup
|z|→1 ζ∈∂D
ν(z)|g(z)| = 0. |1 − ζϕ(z)|n+1
(32)
Proof. By a known result (see, e.g. Lemma 1 in [25]), a closed set F in Bν,0 is compact if and only if it is bounded and satisfies lim sup ν(z)|f 0 (z)| = 0.
|z|→1 f ∈F (n)
Thus the set {Iϕ,g f : f ∈ K, kf kK ≤ 1} has compact closure in Bν,0 if and only if (n) 0 lim sup{ν(z)|(Iϕ,g f ) (z)| : f ∈ K, kf kK ≤ 1} = 0.
|z|→1
Let f ∈ BK , then there is a µ ∈ M such that kµk = kf kK and Z dµ(ζ) f (z) = . 1 − ζz ∂D
(33)
From (8), we easily get that for each f ∈ BK ν(z)|g(z)| ν(z)|g(z)| ≤ n! sup . n+1 n+1 ζ∈∂D |1 − ζϕ(z)| ζ∈∂D |1 − ζϕ(z)|
(n) 0 ν(z)|(Iϕ,g f ) (z)| ≤ n!kµk sup
(34)
(n)
Using (32) in (34), we get (33). Hence Iϕ,g : K → Bν,0 is compact. (n) Conversely, suppose that Iϕ,g : K → Bν,0 is compact. Taking the test functions in (11) and using (12), we obtain that (32) follows from (33). ¤ (n)
Corollary 7. Let ν be a typical weight, n ∈ N0 and g ∈ H(D). Then Ig Bν,0 is compact if and only if lim sup
|z|→1 ζ∈∂D
ν(z)|g(z)| = 0. |1 − ζz|n+1
:K→
(35)
Acknowledgments. This paper is partially supported by National Board of Higher Mathematics (NBHM)/DAE, India (Grant No. 48/4/2009/ R&D-II/426) and the Serbian Ministry of Science (projects III44006 and III41025). References [1] K. Avetisyan and S. Stevi´ c, Extended Ces` aro operators between different Hardy spaces, Appl. Math. Comput. 207 (2009), 346-350. [2] P. Bourdon and J. A. Cima, On integrals of Cauchy-Stieltjes type, Houston J. Math. 14 (1988), 465-474. [3] J. S. Choa and H. O. Kim, Composition operators from the space of Cauchy transforms into its Hardy-type subspaces, Rockey Mountain J. Math. 31 (1) (2001), 95-113. [4] J. A. Cima and A. L. Matheson, Cauchy transforms and composition operators, Illinois J. Math. 4 (1998), 58-69. [5] Z. Hu, Extended Ces` aro operators on the Bloch space in the unit ball of n , Acta Math. Sci. Ser. B Engl. Ed. 23 (4) (2003), 561-566. [6] S. Li and S. Stevi´ c, Riemann-Stieltjes type integral operators on the unit ball in n , Complex Variables Elliptic Equations 52 (6) (2007), 495-517. [7] S. Li and S. Stevi´ c, Generalized composition operators on Zygmund spaces and Bloch type spaces, J. Math. Anal. Appl. 338 (2008), 1282-1295. [8] S. Li and S. Stevi´ c, Products of composition and integral type operators from H ∞ to the Bloch space, Complex Variables Elliptic Equations 53 (5) (2008), 463-474. [9] S. Li and S. Stevi´ c, On an integral-type operator from iterated logarithmic Bloch spaces into Bloch-type spaces, Appl. Math. Comput. 215 (2009), 3106-3115. [10] S. Li and S. Stevi´ c, Products of integral-type operators and composition operators between Bloch-type spaces, J. Math. Anal. Appl. 349 (2009), 596-610. [11] S. Li and S. Stevi´ c, Products of Volterra type operator and composition operator from H ∞ and Bloch spaces to the Zygmund space, J. Math. Anal. Appl. 345 (2008), 40-52. [12] C. Pommerenke, Schlichte funktionen und analytische funktionen von beschr¨ ankter mittlerer oszillation, Comment. Math. Helv. 52 (1977), 591-602. [13] W. Rudin, Function theory in the unit ball of n , New York: Springer-Verlag, 1980. [14] H. J. Schwartz, Composition operators on H p , Thesis, University of Toledo 1969. [15] A. Sharma and A. K. Sharma, Carleson measures and a class of generalized integration operators on the Bergman space, Rocky Mountain J. Math. 41 (5) (2011), 1711-1724. [16] A. L. Shields and D. L. Williams, Bounded projections, duality, and multipliers in spaces of analytic functions, Trans. Amer. Math. Soc. 162 (1971), 287-302. [17] S. Stevi´ c, Boundedness and compactness of an integral operator on mixed norm spaces on the polydisc, Siberian Math. J. 48 (3) (2007), 559-569. [18] S. Stevi´ c, Generalized composition operators between mixed norm space and some weighted spaces, Numer. Funct. Anal. Optimization 29 (7-8) (2008), 959-978.
C
C
C
[19] S. Stevi´ c, Generalized composition operators from logarithmic Bloch spaces to mixed-norm spaces, Util. Math. 77 (2008), 167-172. [20] S. Stevi´ c, Norm of weighted composition operators from Bloch space to Hµ∞ on the unit ball, Ars Combin. 88 (2008), 125-127. [21] S. Stevi´ c, Norms of some operators from Bergman spaces to weighted and Bloch-type space, Util. Math. 76 (2008), 59-64. [22] S. Stevi´ c, On a new operator from H ∞ to the Bloch-type space on the unit ball, Util. Math. 77 (2008), 257-263. [23] S. Stevi´ c, On a new operator from the logarithmic Bloch space to the Bloch-type space on the unit ball, Appl. Math. Comput. 206 (2008), 313-320. [24] S. Stevi´ c, Essential norm of an operator from the weighted Hilbert-Bergman space to the Bloch-type space, Ars Combin. 91 (2009), 123-127. [25] S. Stevi´ c, On a new integral-type operator from the Bloch space to Bloch-type spaces on the unit ball, J. Math. Anal. Appl. 354 (2009), 426-434. [26] S. Stevi´ c, On an integral operator from the Zygmund space to the Bloch-type space on the unit ball, Glasg. J. Math. 51 (2009), 275-287. [27] S. Stevi´ c, Integral-type operators from a mixed norm space to a Bloch-type space on the unit ball, Siberian Math. J. 50 (6) (2009), 1098-1105. [28] S. Stevi´ c, Products of integral-type operators and composition operators from the mixed norm space to Bloch-type spaces, Siberian Math. J. 50 (4) (2009), 726-736. [29] S. Stevi´ c, Norm of an integral-type operator from Dirichlet to Bloch space on the unit disk, Util. Math. 83 (2010), 301-303. [30] S. Stevi´ c, On an integral operator between Bloch-type spaces on the unit ball, Bull. Sci. Math. 134 (2010), 329-339. [31] S. Stevi´ c, On an integral-type operator from logarithmic Bloch-type spaces to mixed-norm spaces on the unit ball, Appl. Math. Comput. 215 (2010), 3817-3823. [32] S. Stevi´ c, On an integral-type operator from Zygmund-type spaces to mixed-norm spaces on the unit ball, Abstr. Appl. Anal. Vol. 2010, Article ID 198608, (2010), 7 pages. [33] S. Stevi´ c, On operator Pϕg from the logarithmic Bloch-type space to the mixed-norm space on unit ball, Appl. Math. Comput. 215 (2010), 4248-4255. [34] S. Stevi´ c, On some integral-type operators between a general space and Bloch-type spaces, Appl. Math. Comput. 218 (2011), 2600-2618. [35] S. Stevi´ c and A. K. Sharma, Composition operators from the space of Cauchy transforms to Bloch and the little Bloch-type spaces on the unit disk, Appl. Math. Comput. 217 (2011) 10187-10194. [36] S. Stevi´ c and S. I. Ueki, Integral-type operators acting between weighted-type spaces on the unit ball, Appl. Math. Comput. 215 (2009), 2464-2471. [37] S. Stevi´ c and S. I. Ueki, On an integral-type operator between weighted-type spaces and Bloch-type spaces on the unit ball, Appl. Math. Comput. 217 (2010), 3127-3136. [38] W. Yang, On an integral-type operator between Bloch-type spaces, Appl. Math. Comput. 215 (3) (2009), 954-960. [39] X. Zhu, Integral-type operators from iterated logarithmic Bloch spaces to Zygmund-type spaces, Appl. Math. Comput. 215 (3) (2009), 1170-1175. ´, Mathematical Institute of the Serbian Academy of Sciences, Knez Stevo Stevic Mihailova 36/III, 11000 Beograd, Serbia E-mail address: [email protected] Ajay K. Sharma, School of Mathematics, Shri Mata Vaishno Devi University, Kakryal, Katra-182320, J& K, India. E-mail address: aksju [email protected] S. D. Sharma, Department of Mathematics, University of Jammu, Jammu-180006, India E-mail address: somdatt [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.6, 1148-1153 , 2012, COPYRIGHT 2012 EUDOXUS1148 PRESS, LLC
Some Remarks on Extended Hypergeometric, Extended Confluent Hypergeometric and Extended Appell's Functions
Mehmet Ali Özarslan
Eastern Mediterranean University, Faculty of Arts and Sciences, Department of Mathematics, Gazimagusa, TRNC, Mersin 10, Turkey. Email: [email protected]
Abstract
Recently, Chaudhry et al. have introduced the extended hypergeometric functions (EHF) and extended confluent hypergeometric functions (ECHF) by using the generalized beta functions [1]. In a similar way, Özarslan et al. extended the first two Appell's hypergeometric functions. In this paper, we show that these extended functions can be represented in terms of a finite number of well known higher transcendental functions, especially as an infinite series containing hypergeometric, confluent hypergeometric, Whittaker's, Lagrange functions, Laguerre polynomials, and products of them.
2000 Mathematics Subject Classification. 33C45, 33C50.
Key words: Extended hypergeometric functions, extended confluent hypergeometric functions, extended Appell's hypergeometric functions, Laguerre polynomials, Whittaker's function, extended fractional derivative operator.
1 Introduction
Recently, in [1], M.A. Chaudhry, A. Qadir, M. Rafique and S.M. Zubair introduced the following extension of Euler's beta function:
$$B_p(x,y)=B(x,y;p):=\int_0^1 t^{x-1}(1-t)^{y-1}\exp\!\left(\frac{-p}{t(1-t)}\right)dt,\qquad(\operatorname{Re}(p)>0,\ \operatorname{Re}(x)>0,\ \operatorname{Re}(y)>0),\tag{1}$$
where the case $p=0$ gives the original beta function. They have shown that this extension has connections with the Macdonald, error and Whittaker functions. Allen R. Miller obtained further representations of the extended beta function as an infinite series of Whittaker's functions, simple Laguerre polynomials and their products [2].
In 2004, M. Aslam Chaudhry, Asghar Qadir, H.M. Srivastava and R.B. Paris [3] extended the hypergeometric functions and confluent hypergeometric functions as follows:
$$F_p(a,b;c;z)=\sum_{n=0}^{\infty}(a)_n\,\frac{B_p(b+n,c-b)}{B(b,c-b)}\,\frac{z^n}{n!},\qquad(p\ge 0,\ |z|<1,\ \operatorname{Re}(c)>\operatorname{Re}(b)>0),$$
and
$$\Phi_p(b;c;z)=\sum_{n=0}^{\infty}\frac{B_p(b+n,c-b)}{B(b,c-b)}\,\frac{z^n}{n!},\qquad(p\ge 0,\ \operatorname{Re}(c)>\operatorname{Re}(b)>0),$$
where $(\lambda)_{\nu}$ denotes the Pochhammer symbol defined by
$$(\lambda)_{\nu}:=\frac{\Gamma(\lambda+\nu)}{\Gamma(\lambda)},\qquad(\lambda)_0=1.$$
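As a quick numerical illustration of (1) (a hypothetical Python sketch using SciPy, not part of the original paper), one can check that $B_0(x,y)$ recovers the classical beta function and that $B_p(x,y)$ decreases as $p$ grows:

```python
# Sketch: evaluate the extended beta function B_p(x, y) of (1) by numerical quadrature.
from math import exp
from scipy.integrate import quad
from scipy.special import beta

def extended_beta(x, y, p):
    def integrand(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0
        base = t**(x - 1) * (1 - t)**(y - 1)
        return base * exp(-p / (t * (1 - t))) if p > 0 else base
    val, _ = quad(integrand, 0, 1)
    return val

x, y = 2.5, 3.0
assert abs(extended_beta(x, y, 0.0) - beta(x, y)) < 1e-6      # p = 0 gives B(x, y)
assert extended_beta(x, y, 0.5) < extended_beta(x, y, 0.1) < beta(x, y)
print(extended_beta(x, y, 0.0), extended_beta(x, y, 0.1), extended_beta(x, y, 0.5))
```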
They called these functions as extended hypergeometric functions (EHF) and extended con‡uent hypergeometric functions (ECHF), respectively. They obtained the integral representation of EHF and ECHF as Z 1 1 p c b 1 a Fp (a; b; c; z) = tb 1 (1 t) (1 zt) exp dt (2) B (b; c b) 0 t(1 t) (p > 0; p = 0 and jarg (1
z)j < ; Re (c) > Re (b) > 0) ;
and Z 1 1 c b tb 1 (1 t) p (b; c; z) = B (b; c b) 0 (p 0; Re (c) > Re (b) > 0) :
1
p
: exp(zt
t(1
t)
)dt
They also obtained the di¤erentiation properties and Mellin transforms of Fp (a; b; c; z) ; gave their transformations and recurrence relations and proved summation and asymptotic formulas for this function. Noticed that F0 (a; b; c; z) = F (a; b; c; z) :=2 F1 (a; b; c; z). Finally, in [4], M. A. Özarslan and E. Özergin introduced the extensions of the Appell’s functions F1 (a; b; c; d; x; y; p) and F2 (a; b; c; d; e; x; y; p) by F1 (a; b; c; d; x; y; p) :=
1 X Bp (a + m + n; d B(a; d a) n;m=0
a)
(b)n (c)m
xn y m n! m!
(max fjxj ; jyjg < 1) ;
F2 (a; b; c; d; e; x; y; p) :=
1 X (a)m+n Bp (b + n; d b) Bp (c + m; e B (b; d b) B (c; e c) n;m=0
c) xn y m n! m!
(jxj + jyj < 1) ;
of which the case p = 0 gives the original functions. They obtained the integral representations of the functions F1 (a; b; c; d; x; y; p) and F2 (a; b; c; d; e; x; y; p) as Z 1 (d) p d a 1 b c F1 (a; b; c; d; x; y; p) = ta 1 (1 t) (1 xt) (1 yt) exp dt; (3) (a) (d a) 0 t(1 t) (p > 0; p = 0 and jarg (1
x)j < ; jarg (1
y)j < ; Re (d) > Re (a) > 0; Re (b) > 0; Re (c) > 0)
and
=
B (b; d
Z
1 b) B (c; e (p
1
F2 (a; b; c; d; e; x; y; p)
Z
1 b 1
t
d b 1 c 1
(1
(4) e c 1
t) s (1 s) p p exp dtds: a c) 0 0 t(1 t) s(1 s) (1 xt ys) 0 and jxj + jyj < 1; Re (d) > Re (b) > 0; Re (e) > Re (c) > 0; Re (a) > 0) :
By introducing extended Riemann-Liouville fractional derivative operator as
Dz ;p ff (z)g = and for m Dz
;p
1 (
)
Z
z
f (t) (z
1
t)
exp
0
pz 2 t(z t)
dt
(Re ( ) < 0; Re (p) > 0)
(5)
1 < Re ( ) < m (m = 1; 2; 3:::) ;
dm ff (z)g = m Dz dz
m
dm ff (z)g = m dz
1 (
+ m) 2
Z
0
z
f (t) (z
t)
+m 1
exp
pz 2 t(z t)
dt
where the path of integration is a line from 0 to z in complex t plane, they obtained linear and bilinear generating functions for the extended hypergeometric functions in terms of the Appell’s functions F1 (a; b; c; d; x; y; p) and F2 (a; b; c; d; e; x; y; p) by following the same method explained in [6]. We organize the paper as follows: In section 2, using the similar technics used in [2], we show that the EHFs can be expressed as a product of simple Laguerre polynomials and hypergeometric functions and as a product of simple Laguerre polynomials and Whittaker’s functions. Similar results for the ECHFs are also exhibited. In section 3, we obtain a representation of the extended Appel’s functions F1 (a; b; c; d; x; y; p) by means of Lagrange polynomials. Furthermore, using the extended fractional derivative operator, we show that the extended Appell’s functions F2 (a; b; c; d; e; x; y; p) can be represented as a product of simple Laguerre polynomials, the EHFs and Whittaker’s functions.
2 Representations of the EHF, ECHF
We start by obtaining the representation of the EHF in terms of the simple Laguerre and the hypergeometric functions.
Theorem 1 For the extended hypergeometric functions we have the following representation:
$$\exp(2p)\,F_p(a,b;c;z)=\sum_{m,n=0}^{\infty}\frac{(b)_{m+1}(c-b)_{n+1}}{(c)_{n+m+2}}\,L_m(p)L_n(p)\,{}_2F_1(a,b+m;c+n+m+1;z)$$
$$\big(p>0;\ p=0\ \text{and}\ |\arg(1-z)|<\pi;\ \operatorname{Re}(c)>\operatorname{Re}(b)>0\big).$$
Proof. It is well known that the simple Laguerre polynomials are generated by the relation
$$\exp\!\left(\frac{-pt}{1-t}\right)=(1-t)\sum_{n=0}^{\infty}L_n(p)\,t^n,\tag{6}$$
where $|t|<1$. Replacing $t$ by $1-t$ in (6), we have
$$\exp\!\left(\frac{-p}{t}\right)=t\exp(-p)\sum_{n=0}^{\infty}L_n(p)(1-t)^n,\tag{7}$$
where $0<t<2$. Again replacing $t$ by $1-t$ in (7), we have
$$\exp\!\left(\frac{-p}{1-t}\right)=(1-t)\exp(-p)\sum_{m=0}^{\infty}L_m(p)\,t^m,\qquad|t|<1.\tag{8}$$
Multiplying both sides of (7) and (8), we get, as in [2],
$$\exp\!\left(\frac{-p}{t(1-t)}\right)=\exp(-2p)\sum_{m,n=0}^{\infty}L_m(p)L_n(p)\,t^{m+1}(1-t)^{n+1},\qquad 0<t<1.\tag{9}$$
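Identity (9) can be spot-checked numerically by truncating the double series (a hypothetical Python sketch using SciPy, added as an aside and not part of the original proof); $p$, $t$ and the truncation order are illustrative choices:

```python
# Sketch: truncate the double series in (9) and compare with exp(-p/(t(1-t))).
from math import exp
from scipy.special import eval_laguerre

p, t, N = 0.5, 0.4, 80
series = exp(-2 * p) * sum(
    eval_laguerre(m, p) * eval_laguerre(n, p) * t**(m + 1) * (1 - t)**(n + 1)
    for m in range(N) for n in range(N)
)
exact = exp(-p / (t * (1 - t)))
assert abs(series - exact) < 1e-8
print(series, exact)
```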
Using (9) in (2) and then interchanging the order of integration and summation, which is valid since the series involved is uniformly convergent and the integral involved is absolutely convergent for p > 0; p = 0 and jarg (1 z)j < ; Re (c) > Re (b) > 0, we get Z 1 exp(2p) p c b 1 a exp(2p)Fp (a; b; c; z) = tb 1 (1 t) (1 zt) exp dt (10) B (b; c b) 0 t(1 t) Z 1 1 X 1 c b 1 a b 1 = t (1 t) (1 zt) Lm (p)Ln (p)tm+1 (1 t)n+1 dt B (b; c b) 0 m;n=0 Z 1 1 X 1 n+c b a = Lm (p)Ln (p) tm+b (1 t) (1 zt) dt: B (b; c b) m;n=0 0 3
Now considering the Euler’s integral representation of the hypergeometric functions Z 1 1 c b 1 a tb 1 (1 t) (1 zt) dt 2 F1 (a; b; c; z) = B (b; c b) 0 jarg (1 z)j < ; Re (c) > Re (b) > 0 it is obvious that Z 1 1 tm+b (1 B (b; c b) 0
(b)m+1 (c b)n+1 2 F1 (a; b + m; c + n + m + 1; z) : (c)n+m+2 (11) Hence the result follows by considering (11) in (10). In a similar way, for the ECHF, we are led fairly easily to Theorem 2 below. n+c b
t)
(1
zt)
a
dt =
Theorem 2 For the ECHF’s we have the following representation: exp(2p)
p (b; c; z) =
1 X (b)m+1 (c b)n+1 Lm (p)Ln (p)1 F1 (b + m; c + n + m + 1; z) (c)n+m+2 m;n=0
(p
0 ; Re (c) > Re (b) > 0) :
Now we give another representation of the extended hypergeometric functions in terms of the simple Laguerre polynomials and Whittakers functions. Theorem 3 For the extended hypergeometric functions, we have p 1 ( p)
b
exp(
1 b) (c) X (a)n Lm (p)W b (b) m;n=0
3p (c )Fp (a; b; c; z) = 2 (p
Proof. Since exp
h
p t(1 t)
i
exp
= exp(
n m 2
2c n+m+b ; 2
(p)
p
pz
n
pm=2
n!
0; Re (c) > Re (b) > 0; jzj < 1) :
p 1
p t(1
1
t)
t
) exp(
p ); we have by (8) that t
= exp( p) exp(
Therefore using (2) we have for jzj < 1, Z 1 1 exp(p)Fp (a; b; c; z) = tb B (b; c b) 0 Z 1 1 = tb B (b; c b) 0
1
1
(1 (1
p )(1 t
c b 1
t)
(1
t)
1 X
Lm (p)tm :
m=0
zt)
a
exp(
p )(1 t
t)
1 X
Lm (p)tm dt
m=0
1 1 X p X (a)n c b n t) (tz) exp( ) Lm (p)tm dt: n! t n=0 m=0
Interchanging the order of summations and integration, we have Z 1 1 X (a)n 1 Lm (p) tn+m+b exp(p)Fp (a; b; c; z) = B (b; c b) m;n=0 n! 0 Finally, using the integral representation [5, Section 3.471 (2)] Z 1 1 p p 1 t 1 (1 t) exp( )dt = ( )p 2 exp( )W 1 2 t 2 0 we get the result. In a similar manner, we get the following result: 4
2
;2
(p)
1
(1
c b
t)
exp(
p )dt z n : t
(Re( ) > 0; Re(p) > 0) ;
(12)
Theorem 4 For the ECHF, we have p 1 ( p)
b
3p exp( ) 2
p
(b; c; z) =
1 b) (c) X Lm (p)W b (b) m;n=0
(c (P
3
1
n m 2
2c n+m+b ; 2
(p)
p
pz
n
pm=2
n!
0; Re (c) > Re (b) > 0) :
Representations of the Extended Appel’s Functions
In this section we obtain some representations of the extended Appel’s functions F1 (a; b; c; d; x; y; p) and F2 (a; b; c; d; e; x; y; p): We start with the following theorem. Theorem 5 For the extended Appell’s functions F1 (a; b; c; d; x; y; p), we have (d) (a) (d
F1 (a; b; c; d; x; y; p) = (p
1 X
a) n=0
gn(b;c) (x; y)Bp (n + a; d
a) ;
0 and jxj < 1; jyj < 1; Re (d) > Re (a) > 0; Re (b) > 0; Re (c) > 0) :
Proof. It is known that, for jxtj < 1; jytj < 1 and Re (b) > 0; Re (c) > 0; we have (1
b
xt)
(1
yt)
c
=
1 X
gn(b;c) (x; y)tn
(13)
n=0 (b;c)
where gn (x; y) is the bivariate Lagrange polynomials [6, p. 441, equation 8.5 (12)]. Using (13) in (3) and then interchanging the order of summation and integration, which is valid since the series 1 X
gn(b;c) (x; y)tn
n=0
is uniformly convergent since jxj < 1; jyj < 1; Re (b) > 0; Re (c) > 0 and the integral Z 1 p d a 1 tn+a 1 (1 t) exp dt t(1 t) 0 is absolutely convergent for Re (d) > Re (a) > 0; p 0; we get, by using (1), Z 1 p (d) d a 1 b c ta 1 (1 t) (1 xt) (1 yt) exp dt F1 (a; b; c; d; x; y; p) = (a) (d a) 0 t(1 t) Z 1 1 X (d) p d a 1 (b;c) g (x; y) = tn+a 1 (1 t) exp dt (a) (d a) n=0 n t(1 t) 0 =
(d) (a) (d
1 X
a) n=0
gn(b;c) (x; y)Bp (n + a; d
a) :
Whence the result. In the next theorem by using the equalities, which are obtained in [4] , Dz
( ) z 1 Fp ( ; ; ; z) ( ) (Re ( ) > 0; Re ( ) > 0; Re ( ) < 0; jzj < 1) ; ;p
fz
1
(1
z)
g=
(14)
and Dz
;p
z
1
(1
z)
Fp
; ; ;
x
z
1
F2 ( ; ; ; ; ; x; z; p) (15) ) ( ) x Re ( ) > Re ( ) > 0; Re ( ) > 0; Re ( ) > 0; Re ( ) > 0; < 1 and jxj + jzj < 1 , 1 z 1
z
=
5
B( ;
OZARSLAN: ...EXTENDED APPELL'S FUNCTIONS
1153
where Dz ;p is the extended Riemann-Liouville fractional derivative operator de…ned by (5), we give the representation of the function F2 (a; b; c; d; e; x; y; p) in terms of the simple Laguerre, the Whittaker and the EHF. Theorem 6 For the extended Appell’s functions F2 (a; b; c; d; e; x; y; p), we have 1
=
(c 1 X
b
p 2 b + 1)B( ; (a)n Lm (p)W b
m;n=0
where Re (p)
1
exp(
n m 2
3p )F2 (a; b; ; c; ; x; z; p) 2
p n m x p p2 ; ; z) ; 2c n+m+b (p)Fp (a + n; ; 2 n!
0; Re ( ) > Re ( ) > 0; Re (a) > 0; Re (c) > Re (b) > 0;
Proof. Replacing z by p =
)
1
b 2
(c
x 1 z
in (12) and then multiplying both sides by z
=
1
(c
b 2
< 1 and jxj + jzj < 1.
z 1
(1
3p x )z 1 (1 z) a Fp a; b; c; 2 1 z p n m 1 X x p p2 b) (c) (a)n Lm (p)W b 1 n m 2c ; n+m+b (p) z 2 2 (b) n! m;n=0
z)
; we get
exp(
Now applying the extended fractional derivative operator Dz p
x 1
;p
(16) 1
(1
z)
a n
:
on both sides of (16), we have
3p x a )Dz ;p z 1 (1 z) Fp a; b; c; 2 1 z p n m 1 X x p p2 b) (c) (a)n Lm (p)W b 1 n m 2c ; n+m+b (p) Dz 2 2 (b) n! m;n=0
exp(
;p
z
1
(1
z)
a n
:
Using (14) and (15), we get the result.
References [1] M.A. Chaudhry, A. Qadir, M. Ra…que, S.M. Zubair, Extension of Euler s beta function, J. Comput. Appl. Math. 78 (1997) 19–32. [2] A.R. Miller, Remarks on generalized beta function, J. Comput. Appl. Math. 100 (1998) 23–32. [3] M. Aslam Chaudhry, Asghar Qadir, H.M. Srivastava, R.B. Paris, Extended hypergeometric and con‡uent hypergeometric functions, Applied Mathematics and Computation 159 (2004), no. 2, 589–602. [4] M.A. Özarslan and E. Özergin, Some generating relations for extended hypergeometric functions via generalized fractional derivative operator, Mathematical and Computer Modelling 52 (2010), no. 9-10, 1825–1833 [5] I.S. Gradshteyn, I.M. Ryzhik, Table of Integrals, Series, and Products 5th ed., Academic Press, San Diego, 1994. [6] H.M. Srivastava and H.L. Manocha, A Treatise on Generating Functions, Halsted/Ellis Horwood/ Wiley, New York/Chicester/ New York, 1984.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 1154-1164, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Recurrence relations for the Harmonic mean Newton’s method in Banach spaces∗ Liang Chen Department of Mathematics, Shanghai University, Shanghai 200444, P.R.China School of Mathematical Science, Huaibei Normal University, Huaibei 235000, P.R. China Email: [email protected] / [email protected] Chuanqing Gu Department of Mathematics, Shanghai University, Shanghai 200444, P.R.China Yanfang Ma School of Computer Science and Technology,Huaibei Normal University, Huaibei 235000, P.R. China
Abstract In this paper, an attempt is made to use recurrence relations to establish the convergence of the Harmonic mean Newton's method for nonlinear equations in Banach spaces. The recurrence relations for the method are derived, and an existence-uniqueness theorem is then given which establishes the R-order of the method to be three and provides a priori error bounds. Finally, some numerical applications are presented to demonstrate our approach. Keywords: Nonlinear equations in Banach spaces; Recurrence relations; Semilocal convergence; Newton's method; A priori error bounds MSC2000: 65D10; 65D99; 47H17
1 Introduction
Many scientific problems can be expressed in the form of a nonlinear equation
$$F(x)=0,\qquad(1)$$
The work is supported by Shanghai Natural Science Foundation (NO.10ZR1410900), Key Disciplines of Shanghai Municipality (NO.S30104), Natural Science Research Funds of Anhui Provincial for Universities (Grant No.KJ2011A248) and the Open Fund of Shanghai Key Laboratory of Trustworthy Computing.
where F : Ω ⊂ X → Y is a nonlinear operator on an open convex subset Ω of a Banach space X with values in a Banach space Y. This equation can represent differential equations, integral equations or, in the simplest case, a system of equations. Newton's method is the most widely used iteration for solving such equations as a consequence of its computational efficiency, even though only a moderate speed of convergence is obtained. Our goal in this paper is to increase the speed of convergence of Newton's method without increasing its operational cost very much. Taking these goals into account, we consider a multipoint Newton-type method of order three, called the Harmonic mean Newton's method, studied by Özban [1] and Homeier [2], respectively. This method is defined for all n ≥ 0 by
$$y_n=x_n-\Gamma_nF(x_n),\qquad x_{n+1}=x_n-\tfrac12\,[\Gamma_n+\overline{\Gamma}_n]F(x_n),\qquad(2)$$
where $\Gamma_n=F'(x_n)^{-1}$ and $\overline{\Gamma}_n=F'(y_n)^{-1}$. Recently, the convergence of iterative methods for solving nonlinear operator equations in Banach spaces has usually been established from the convergence of majorizing sequences. An alternative approach establishes this convergence by using recurrence relations. For many applications, third order methods are used in spite of the high computational cost of evaluating the second order derivatives involved. They can also be used for stiff systems [4], where quick convergence is required. Moreover, they are important from the theoretical standpoint, as they provide results on existence and uniqueness of solutions that improve those obtained from Newton's method [3]. Some of the well-known third order methods are Chebyshev's method, Halley's method and the super-Halley method, whose convergence has been studied by Candela, Gutiérrez, Hernández et al. in [5, 6, 7, 8, 9, 10, 11] using recurrence relations. In this paper, we shall use recurrence relations to establish the convergence of the third order Harmonic mean Newton's method for solving the nonlinear operator equation (1). The recurrence relations, based on two constants which depend on the operator F, are derived. Then, based on these recurrence relations, a priori error bounds are obtained for the said iterative method. Finally, some numerical examples are worked out to demonstrate our work.
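For orientation, the following short Python sketch shows one possible realization of the Harmonic mean Newton iteration (2) in R²; the test system, the starting point and the stopping rule are our own illustrative choices, not taken from the paper.

import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

def J(v):
    x, y = v
    return np.array([[2*x, 2*y], [np.exp(x), 1.0]])

def harmonic_mean_newton(F, J, x0, tol=1e-12, maxit=20):
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        step_x = np.linalg.solve(J(x), Fx)   # Gamma_n F(x_n)
        y = x - step_x                        # y_n = x_n - Gamma_n F(x_n)
        step_y = np.linalg.solve(J(y), Fx)    # \bar{Gamma}_n F(x_n)
        x = x - 0.5*(step_x + step_y)         # x_{n+1} = x_n - (1/2)[Gamma_n + \bar{Gamma}_n] F(x_n)
        if np.linalg.norm(F(x)) < tol:
            break
    return x

print(harmonic_mean_newton(F, J, x0=[1.0, -2.0]))
# converges in a few steps to a root of the toy system, illustrating the third-order behaviour
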
2 Preliminary results
Let X, Y be Banach spaces and F : Ω ⊆ X → Y be a nonlinear, twice Fréchet differentiable operator in an open convex domain Ω₀ ⊆ Ω. The Harmonic mean Newton's method (2) for solving equation (1) can be written in the following form:
$$y_n=x_n-\Gamma_nF(x_n),\qquad H(x_n,y_n)=\overline{\Gamma}_n\,[F'(y_n)-F'(x_n)],\qquad x_{n+1}=y_n-\tfrac12\,H(x_n,y_n)(y_n-x_n),\qquad(3)$$
where $\Gamma_n=F'(x_n)^{-1}$ and $\overline{\Gamma}_n=F'(y_n)^{-1}$. Let us assume that $\Gamma_0=F'(x_0)^{-1}\in L(Y,X)$ exists at some $x_0\in\Omega_0$, where $L(Y,X)$ is the set of bounded linear operators from Y into X. Throughout this paper we assume that:
(I) $\|\Gamma_0\|\le\beta$, $\|\Gamma_0F(x_0)\|\le\eta$, $\|F''(x)\|\le M$ for $x\in\Omega$;
(II) there exists a positive real number N such that $\|F''(x)-F''(y)\|\le N\|x-y\|$ for all $x,y\in\Omega$.
We first give an approximation of the operator F in the following lemma, which will be used in the subsequent derivation.

Lemma 1. Assume that the nonlinear operator $F:\Omega\subset X\to Y$ is continuously second-order Fréchet differentiable, where Ω is an open set and X and Y are Banach spaces. Then we have
$$F(x_{n+1})=\int_0^1F''(x_n+t(y_n-x_n))(1-t)\,dt\,(y_n-x_n)^2-\frac12\int_0^1F''(x_n+t(y_n-x_n))\,dt\,(y_n-x_n)^2+\int_0^1F''(y_n+t(x_{n+1}-y_n))(1-t)\,dt\,(x_{n+1}-y_n)^2.\qquad(4)$$

Proof. By the definition of the method given by (3), we obtain
$$F'(y_n)(x_{n+1}-y_n)=-\frac12\,[F'(y_n)-F'(x_n)](y_n-x_n).$$
Using Taylor's formula, we have
$$F(x_{n+1})=F(y_n)+F'(y_n)(x_{n+1}-y_n)+\int_0^1F''(y_n+t(x_{n+1}-y_n))(1-t)\,dt\,(x_{n+1}-y_n)^2$$
$$=F(y_n)-\frac12\,[F'(y_n)-F'(x_n)](y_n-x_n)+\int_0^1F''(y_n+t(x_{n+1}-y_n))(1-t)\,dt\,(x_{n+1}-y_n)^2.\qquad(5)$$
Similarly, we obtain
$$F(y_n)=F(x_n)+F'(x_n)(y_n-x_n)+\int_0^1F''(x_n+t(y_n-x_n))(1-t)\,dt\,(y_n-x_n)^2,\qquad(6)$$
$$F'(y_n)=F'(x_n)+\int_0^1F''(x_n+t(y_n-x_n))\,dt\,(y_n-x_n).$$
It follows that
$$F'(y_n)-F'(x_n)=\int_0^1F''(x_n+t(y_n-x_n))\,dt\,(y_n-x_n).\qquad(7)$$
Since $y_n=x_n-\Gamma_nF(x_n)$, we have $F(x_n)+F'(x_n)(y_n-x_n)=0$; substituting (6) and (7) into (5), we obtain (4).

Now we denote $\eta_0=\eta$, $\beta_0=\beta$, $a_0=M\beta_0\eta_0$, $b_0=N\beta_0\eta_0^2$ and $c_0=h(a_0)\varphi(a_0,b_0)$. If $a_0<\sigma$ and $h(a_0)c_0<1$, we can define, for n ≥ 0, the sequences
$$\eta_{n+1}=c_n\eta_n,\qquad\beta_{n+1}=h(a_n)\beta_n,\qquad(8)$$
$$a_{n+1}=M\beta_{n+1}\eta_{n+1},\qquad b_{n+1}=N\beta_{n+1}\eta_{n+1}^2,\qquad c_{n+1}=h(a_{n+1})\varphi(a_{n+1},b_{n+1}),\qquad(9)$$
where
$$g(t)=\frac{2-t}{2-2t},\qquad h(t)=\frac1{1-t\,g(t)},\qquad\varphi(t,s)=\frac{t^3}{8(1-t)^2}+\frac{5}{12}\,s,\qquad(10)$$
and $\sigma=2-\sqrt2$ is the smallest positive zero of the scalar function $g(t)t-1$. From the definition of $a_{n+1}$, $b_{n+1}$, (8) and (9), we also have
$$a_{n+1}=h(a_n)c_na_n,\qquad b_{n+1}=h(a_n)c_n^2\,b_n.\qquad(11)$$
Next we study some properties of the functions defined in (10) and of the scalar sequences defined in (8)-(9) and (11); later developments will require the following lemmas.

Lemma 2. Let the real functions g, h and φ be given by (10). Then
(a) g(t) and h(t) are increasing and g(t) > 1, h(t) > 1 for all t ∈ (0, σ),
(b) φ(t, s) is increasing for all t ∈ (0, σ) and all s > 0,
(c) g(θt) < g(t), h(θt) < h(t) and φ(θt, θ²s) < θ²φ(t, s) for θ ∈ (0, 1), t ∈ (0, σ) and s > 0.

Lemma 3. Let the real functions g, h and φ be given by (10). If a₀ < σ and h(a₀)c₀ < 1, then
(a) h(a₀) > 1 and cₙ < 1 for n ≥ 0,
(b) the sequences {ηₙ}, {aₙ}, {bₙ} and {cₙ} are decreasing, while {βₙ} is increasing,
(c) g(aₙ)aₙ < 1 and h(aₙ)cₙ < 1 for n ≥ 0.

Proof. By Lemma 2 and the assumptions, h(a₀) > 1 and c₀ < 1 hold. It follows from the definitions that η₁ < η₀, a₁ < a₀ and b₁ < b₀. Moreover, by Lemma 2, we have 1 < h(a₁) < h(a₀) and φ(a₁, b₁) < φ(a₀, b₀). This yields c₁ < c₀, and (b) holds. Based on these results we obtain g(a₁)a₁ < g(a₀)a₀ < 1 and h(a₁)c₁ < h(a₀)c₀ < 1, and (c) holds. By induction we can derive that items (a), (b) and (c) hold for all n.
Lemma 4. Under the assumptions of Lemma 3, define γ = h(a₀)c₀. Then
$$c_n\le\lambda\gamma^{3^n},\qquad n\ge0,\qquad(12)$$
where λ = 1/h(a₀). Also, for n ≥ 0, we have
$$\prod_{i=0}^{n}c_i\le\lambda^{n+1}\gamma^{\frac{3^{n+1}-1}{2}}.\qquad(13)$$

Proof. By the definitions of $a_{n+1}$ and $b_{n+1}$ given in (11) we obtain $a_1=h(a_0)c_0a_0=\gamma a_0$ and $b_1=h(a_0)c_0^2b_0<\gamma^2b_0$, so by Lemma 2 we have
$$c_1<h(\gamma a_0)\varphi(\gamma a_0,\gamma^2b_0)<\gamma^2h(a_0)\varphi(a_0,b_0)=\gamma^{3-1}c_0=\lambda\gamma^{3}.$$
Suppose $c_k\le\lambda\gamma^{3^k}$ for some k ≥ 1. Then, by Lemma 3, we have $a_{k+1}<a_k$, $b_{k+1}<b_k$ and $h(a_k)c_k<1$. Thus
$$c_{k+1}=h(a_{k+1})\varphi(a_{k+1},b_{k+1})<h(a_k)\varphi\big(h(a_k)c_ka_k,\,h(a_k)c_k^2b_k\big)<h(a_k)\varphi\big(h(a_k)c_ka_k,\,h^2(a_k)c_k^2b_k\big)<h^3(a_k)c_k^2\varphi(a_k,b_k)=h^2(a_k)c_k^3\le h^2(a_0)\big(\lambda\gamma^{3^k}\big)^3=\lambda\gamma^{3^{k+1}}.$$
Therefore $c_n\le\lambda\gamma^{3^n}$ for all n ≥ 0. By (12), we get
$$\prod_{i=0}^{n}c_i\le\prod_{i=0}^{n}\lambda\gamma^{3^i}=\lambda^{n+1}\gamma^{\sum_{i=0}^{n}3^i}=\lambda^{n+1}\gamma^{\frac{3^{n+1}-1}{2}},\qquad n\ge0.$$
This shows that (13) holds. The proof is completed.

Lemma 5. Under the assumptions of Lemma 3, let γ = h(a₀)c₀ and λ = 1/h(a₀). The sequence {ηₙ} satisfies
$$\eta_n\le\eta\,\lambda^n\gamma^{\frac{3^n-1}{2}},\qquad n\ge0.$$
Hence the sequence {ηₙ} converges to 0. Moreover, for any n ≥ 0 and m ≥ 1, it holds that
$$\sum_{i=n}^{n+m}\eta_i\le\eta\,\lambda^n\gamma^{\frac{3^n-1}{2}}\,\frac{1-\lambda^{m+1}\gamma^{\frac{3^n(3^m+1)}{2}}}{1-\lambda\gamma^{3^n}}.$$

Proof. From the definition of the sequence {ηₙ} given in (8) and from (13), we have
$$\eta_n=c_{n-1}\eta_{n-1}=c_{n-1}c_{n-2}\eta_{n-2}=\cdots=\eta\prod_{i=0}^{n-1}c_i\le\eta\,\lambda^n\gamma^{\frac{3^n-1}{2}}.$$
Because λ < 1 and γ < 1, it follows that ηₙ → 0 as n → ∞; hence the sequence {ηₙ} converges to 0. Since
$$\sum_{i=n}^{n+m}\lambda^i\gamma^{\frac{3^i}{2}}\le\lambda^n\gamma^{\frac{3^n}{2}}+\lambda\gamma^{3^n}\sum_{i=n+1}^{n+m}\lambda^{i-1}\gamma^{\frac{3^{i-1}}{2}}=\lambda^n\gamma^{\frac{3^n}{2}}+\lambda\gamma^{3^n}\Big(\sum_{i=n}^{n+m}\lambda^i\gamma^{\frac{3^i}{2}}-\lambda^{n+m}\gamma^{\frac{3^{n+m}}{2}}\Big),$$
where n ≥ 0 and m ≥ 1, we can obtain
$$\sum_{i=n}^{n+m}\lambda^i\gamma^{\frac{3^i}{2}}\le\lambda^n\gamma^{\frac{3^n}{2}}\,\frac{1-\lambda^{m+1}\gamma^{\frac{3^n(3^m+1)}{2}}}{1-\lambda\gamma^{3^n}}.$$
Therefore
$$\sum_{i=n}^{n+m}\eta_i\le\eta\sum_{i=n}^{n+m}\lambda^i\gamma^{\frac{3^i-1}{2}}=\eta\,\gamma^{-\frac12}\sum_{i=n}^{n+m}\lambda^i\gamma^{\frac{3^i}{2}}\le\eta\,\lambda^n\gamma^{\frac{3^n-1}{2}}\,\frac{1-\lambda^{m+1}\gamma^{\frac{3^n(3^m+1)}{2}}}{1-\lambda\gamma^{3^n}}.$$
Furthermore, $\sum_{n=0}^{\infty}\eta_n$ exists. The proof is completed.

3 Recurrence relations
We denote $B(x,r)=\{y\in X:\|y-x\|<r\}$ and $\overline B(x,r)=\{y\in X:\|y-x\|\le r\}$ in this paper. In the following, the recurrence relations are derived for the method given by (3) under the assumptions of the previous section.

For n = 0, the existence of Γ₀ implies the existence of y₀. This gives us $\|y_0-x_0\|=\|\Gamma_0F(x_0)\|\le\eta_0$, which means that $y_0\in B(x_0,R\eta)$, where $R=\frac{g(a_0)}{1-c_0}$. By the initial hypotheses (I)-(II), we have
$$\|I-\Gamma_0F'(y_0)\|\le\|\Gamma_0\|\,\|F'(x_0)-F'(y_0)\|\le M\|\Gamma_0\|\,\|y_0-x_0\|\le a_0<\sigma$$
and, by the Banach lemma [3], $\overline\Gamma_0$ exists and
$$\|\overline\Gamma_0\|\le\frac{1}{1-a_0}\|\Gamma_0\|.\qquad(14)$$
Consequently we have
$$\|x_1-y_0\|\le\frac12\|H(x_0,y_0)\|\,\|y_0-x_0\|\le\frac12\|\overline\Gamma_0\|\,\|F'(x_0)-F'(y_0)\|\,\|y_0-x_0\|\le\frac{a_0\eta_0}{2(1-a_0)}\qquad(15)$$
and x₁ is well defined, with
$$\|x_1-x_0\|\le\|x_1-y_0\|+\|y_0-x_0\|\le\frac{a_0}{2(1-a_0)}\eta_0+\eta_0=g(a_0)\eta_0.$$
Since a₀ < σ and g(t) is increasing in t ∈ (0, σ), we have
$$\|I-\Gamma_0F'(x_1)\|\le\|\Gamma_0\|\,\|F'(x_0)-F'(x_1)\|\le M\beta_0\|x_1-x_0\|\le a_0g(a_0)<1,$$
and it follows by the Banach lemma [3] that $\Gamma_1=[F'(x_1)]^{-1}$ exists and
$$\|\Gamma_1\|\le\frac{\beta_0}{1-a_0g(a_0)}=h(a_0)\beta_0=\beta_1.\qquad(16)$$
Also, since $\int_0^1(1-t)\,dt=\frac12\int_0^1dt$, the terms in $F''(x_n)$ cancel and we get
$$\Big\|\int_0^1F''(x_n+t(y_n-x_n))(1-t)\,dt-\frac12\int_0^1F''(x_n+t(y_n-x_n))\,dt\Big\|$$
$$\le\Big\|\int_0^1\big[F''(x_n+t(y_n-x_n))-F''(x_n)\big](1-t)\,dt\Big\|+\frac12\Big\|\int_0^1\big[F''(x_n+t(y_n-x_n))-F''(x_n)\big]\,dt\Big\|$$
$$\le N\int_0^1(t-t^2)\,dt\,\|y_n-x_n\|+\frac N2\int_0^1t\,dt\,\|y_n-x_n\|=\frac5{12}N\|y_n-x_n\|.\qquad(17)$$
By Lemma 1, together with (15), we can get
$$\|F(x_1)\|\le\frac12M\|x_1-y_0\|^2+\frac5{12}N\|y_0-x_0\|^3.\qquad(18)$$
Then, from (16) and (18), we have
$$\|y_1-x_1\|=\|\Gamma_1F(x_1)\|\le\|\Gamma_1\|\,\|F(x_1)\|\le h(a_0)\varphi(a_0,b_0)\eta_0=c_0\eta_0=\eta_1.$$
Because g(a₀) > 1, we obtain
$$\|y_1-x_0\|\le\|y_1-x_1\|+\|x_1-x_0\|\le(g(a_0)+c_0)\eta_0<g(a_0)(1+c_0)\eta<R\eta,$$
which shows that $y_1\in B(x_0,R\eta)$. In addition, we have
$$M\|\Gamma_1\|\,\|\Gamma_1F(x_1)\|\le h(a_0)c_0a_0=a_1,\qquad N\|\Gamma_1\|\,\|\Gamma_1F(x_1)\|^2\le h(a_0)c_0^2b_0=b_1.$$
Repeating the above derivation, we obtain the system of recurrence relations given in the next lemma.

Lemma 6. Let the assumptions and conditions (I)-(II) hold. Then the following items are true for all n ≥ 0:
(I) there exists $\Gamma_n=[F'(x_n)]^{-1}$ and $\|\Gamma_n\|\le\beta_n$;
(II) $\|\Gamma_nF(x_n)\|\le\eta_n$;
(III) $M\|\Gamma_n\|\,\|\Gamma_nF(x_n)\|\le a_n$;
(IV) $N\|\Gamma_n\|\,\|\Gamma_nF(x_n)\|^2\le b_n$;
(V) $\|x_{n+1}-x_n\|\le g(a_n)\eta_n$;
(VI) $\|x_{n+1}-x_0\|\le R\eta$, where $R=g(a_0)/(1-c_0)$.

Proof. The proof of (I)-(V) follows from the derivation above by invoking the induction hypothesis. We only consider (VI). By (V) and Lemma 5 we obtain
$$\|x_{n+1}-x_0\|\le\sum_{i=0}^{n}\|x_{i+1}-x_i\|\le\sum_{i=0}^{n}g(a_i)\eta_i\le g(a_0)\sum_{i=0}^{n}\eta_i\le g(a_0)\eta\,\frac{1-\lambda^{n+1}\gamma^{\frac{3^n+1}{2}}}{1-c_0}<\frac{g(a_0)}{1-c_0}\,\eta=R\eta.$$
So the lemma is proved.
4 Semilocal Convergence
Lemma 7. Let R = Proof. Since c0
0 (i.e., there exists a constant $\bar\gamma>0$ with the property $\langle Ax,x\rangle\ge\bar\gamma\|x\|^2$ for all $x\in H$), $0<\gamma\alpha<\bar\gamma$, S : C → H is a mapping defined by Sx = kx + (1 − k)Tx and $P_C$ is the metric projection of H onto C. They proved that the sequence {xₙ} generated by (1.2) converges strongly to a fixed point q of T which is the unique solution of the variational inequality related to the linear operator A,
$$\langle(A-\gamma f)q,\,p-q\rangle\ge0,\qquad\forall p\in F(T),\qquad(1.3)$$
and is also the optimality condition for the minimization problem
$$\min_{x\in F(T)}\ \frac12\langle Ax,x\rangle-h(x),$$
where h is a potential function for γf. They utilized the following control conditions on {αₙ}:
(C1) $\lim_{n\to\infty}\alpha_n=0$;
(C2) $\sum_{n=1}^{\infty}\alpha_n=\infty$;
(C3) $\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$; or $\lim_{n\to\infty}\alpha_{n+1}/\alpha_n=1$.
Their result extended the result of Marino and Xu [8] from the class of nonexpansive mappings to the class of k-strictly pseudo-contractive mappings. In 2010, in order to improve the corresponding results of Cho et al. [5] as well as Marino and Xu [8] by removing the condition (C3), Jung [6] studied the following iterative scheme
for the k-strictly pseudo-contractive mapping T along with the same f , A, γ, PC and S as in [5]: xn+1 = αn γf (xn ) + (I − αn A)(βn xn + (1 − βn )PC Sxn ), n ≥ 1,
(1.4)
where αn , βn ∈ (0, 1). Under conditions (C1) and (C2) on {αn } and the condition 0 < lim inf n→∞ βn ≤ lim supn→∞ βn < 1 on {βn }, he proved that the sequence {xn } generated by (1.4) converges strongly to a fixed point q of T , which is the unique solution of a variational inequality (1.3). In this paper, motivated by the above-mentioned results, we consider the following iterative scheme (1.2) for the k-strictly pseudo-contractive mapping T . Under weaker control conditions than Cho et al. [5] (Marino and Xu [8]), we establish the strong convergence of the sequence {xn } generated by (1.2) to a fixed point of T , which is a solution of the variational inequality (1.3). The main results improve and develop the corresponding results of Cho et al. [5] and Jung [6] as well as Marino and Xu [8]. Our results also extend the corresponding results of Halpern [9], Moudafi [10], Wittmann [11] and Xu [12].
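As a simple illustration of the kind of scheme studied here, the following Python sketch runs the iteration $x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)P_CSx_n$ in R² for illustrative choices of T, A, f, C and αₙ that are our own, not taken from the paper; with C = H the projection is the identity and the condition C ± C ⊂ C holds trivially.

import numpy as np

k = 0.0                                           # T below is nonexpansive, i.e. 0-strictly pseudo-contractive
T = lambda v: np.array([v[0], -v[1]])             # reflection; F(T) = the x-axis
S = lambda v: k*v + (1 - k)*T(v)                  # S = kI + (1-k)T
A = np.diag([2.0, 1.5])                           # strongly positive with constant 1.5
f = lambda v: 0.25*v + np.array([1.0, 0.5])       # contraction with constant alpha = 0.25
gamma = 1.0                                       # 0 < gamma < 1.5/0.25
P_C = lambda v: v                                 # C = H, so the metric projection is the identity

x = np.array([1.5, 1.2])
for n in range(1, 20000):
    alpha_n = 1.0/(n + 2)                         # satisfies (C1) and (C2)
    x = alpha_n*gamma*f(x) + (np.eye(2) - alpha_n*A) @ P_C(S(x))
print(x)
# the second coordinate tends to 0 (the limit lies in F(T)); for this toy data the first
# coordinate approaches 4/7, the point of the x-axis solving the variational inequality
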
2. Preliminaries and Lemmas Throughout this paper, when {xn } is a sequence in E, then xn → x (resp., xn * x) will denote strong (resp., weak) convergence of the sequence {xn } to x. For every point x ∈ H, there exists a unique nearest point in C, denoted by PC (x), such that kx − PC (x)k ≤ kx − yk for all y ∈ C. PC is called the metric projection of H to C. It is well known that PC is nonexpansive. We need the following lemmas for the proof of our main results. Lemma 2.1 ([13]). Let H be a Hilbert space, C be a closed convex subset of H. If T is a k-strictly pseudo-contractive mapping on C, then the fixed point set F (T ) is closed convex, so that the projection PF (T ) is well defined. Lemma 2.2 ([13]). Let H be a Hilbert space and C be a closed convex subset of H. Let T : C → H be a k-strictly pseudo-contractive mapping with F (T ) 6= ∅. Then F (PC T ) = F (T ). Lemma 2.3 ([13]). Let H be a Hilbert space, C be a closed convex subset of H, and T : C → H be a k-strictly pseudo-contractive mapping. Define a mapping S : C → H by Sx = λx + (1 − λ)T x for all x ∈ C. Then, as λ ∈ [k, 1), S is a nonexpansive mapping such that F (S) = F (T ). The following Lemmas 2.4 and 2.5 can be obtained from the Proposition 2.6 of Acedo and Xu [4].
Lemma 2.4. Let H be a Hilbert space and C be a closed convex subset of H. For any N ≥ 1, assume that for each 1 ≤ i ≤ N, $T_i:C\to H$ is a $k_i$-strictly pseudo-contractive mapping for some $0\le k_i<1$. Assume that $\{\eta_i\}_{i=1}^N$ is a positive sequence such that $\sum_{i=1}^N\eta_i=1$. Then $\sum_{i=1}^N\eta_iT_i$ is a nonself-k-strictly pseudo-contractive mapping with $k=\max\{k_i:1\le i\le N\}$.

Lemma 2.5. Let $\{T_i\}_{i=1}^N$ and $\{\eta_i\}_{i=1}^N$ be given as in Lemma 2.4. Suppose that $\{T_i\}_{i=1}^N$ has a common fixed point in C. Then $F\big(\sum_{i=1}^N\eta_iT_i\big)=\bigcap_{i=1}^NF(T_i)$.
Lemma 2.6 ([8]). Assume that A is a strongly positive linear bounded operator on a Hilbert space H with constant $\bar\gamma>0$ and $0<\rho\le\|A\|^{-1}$. Then $\|I-\rho A\|\le1-\rho\bar\gamma$.

Lemma 2.7 ([14,15]). Let {sₙ} be a sequence of non-negative real numbers satisfying
$$s_{n+1}\le(1-\lambda_n)s_n+\lambda_n\delta_n+r_n,\qquad n\ge1,$$
where {λₙ}, {δₙ} and {rₙ} satisfy the following conditions:
(i) $\{\lambda_n\}\subset[0,1]$ and $\sum_{n=1}^{\infty}\lambda_n=\infty$,
(ii) $\limsup_{n\to\infty}\delta_n\le0$ or $\sum_{n=1}^{\infty}\lambda_n\delta_n<\infty$,
(iii) $r_n\ge0$ (n ≥ 0), $\sum_{n=1}^{\infty}r_n<\infty$.
Then $\lim_{n\to\infty}s_n=0$.

Lemma 2.8. In a Hilbert space H, the following inequality holds:
$$\|x+y\|^2\le\|x\|^2+2\langle y,\,x+y\rangle,\qquad x,y\in H.$$

Let μ be a mean on the positive integers N, that is, a continuous linear functional on $\ell^\infty$ satisfying $\|\mu\|=1=\mu(1)$. Then we know that μ is a mean on N if and only if $\inf\{a_n:n\in N\}\le\mu(a)\le\sup\{a_n:n\in N\}$ for every $a=(a_1,a_2,\ldots)\in\ell^\infty$. According to time and circumstances, we use $\mu_n(a_n)$ instead of $\mu(a)$. A mean μ on N is called a Banach limit if $\mu_n(a_n)=\mu_n(a_{n+1})$ for every $a=(a_1,a_2,\ldots)\in\ell^\infty$. Using the Hahn-Banach theorem, we can prove the existence of a Banach limit. If μ is a Banach limit, the following are well known:
(i) for all n ≥ 1, $a_n\le c_n$ implies $\mu(a_n)\le\mu(c_n)$,
(ii) $\mu(a_{n+N})=\mu(a_n)$ for any fixed positive integer N,
(iii) $\liminf_{n\to\infty}a_n\le\mu_n(a_n)\le\limsup_{n\to\infty}a_n$ for all $(a_1,a_2,\ldots)\in\ell^\infty$.
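As a quick numerical illustration of Lemma 2.7 above, the following Python sketch (the sequences are our own toy choices, not from the paper) runs the worst case allowed by the recursion and shows that sₙ is driven to 0.

# toy choices: lambda_n = 1/(n+1) (so the sum diverges), delta_n = 1/(n+1) (limsup <= 0),
# r_n = 1/(n+1)^2 (summable)
s = 5.0
for n in range(1, 200001):
    lam, delta, r = 1.0/(n + 1), 1.0/(n + 1), 1.0/(n + 1)**2
    s = (1 - lam)*s + lam*delta + r   # equality is the extreme case allowed by the lemma
print(s)                              # s is now close to 0, as the lemma predicts
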
The following lemma was given in [16, Proposition 2].

Lemma 2.9. Let a ∈ R be a real number and let a sequence $\{a_n\}\in l^\infty$ satisfy the condition $\mu_n(a_n)\le a$ for all Banach limits μ. If $\limsup_{n\to\infty}(a_{n+1}-a_n)\le0$, then $\limsup_{n\to\infty}a_n\le a$.

Finally, we recall that the sequence {xₙ} in H is said to be weakly asymptotically regular if
$$w\text{-}\lim_{n\to\infty}(x_{n+1}-x_n)=0,\quad\text{that is,}\quad x_{n+1}-x_n\rightharpoonup0,$$
and asymptotically regular if
$$\lim_{n\to\infty}\|x_{n+1}-x_n\|=0,$$
respectively.
3. Main results

We need the following result on the existence of solutions of a certain variational inequality, which is a slight improvement of Theorem 3.2 of Marino and Xu [8].

Theorem MX. Let H be a Hilbert space, C be a closed convex subset of H such that C ± C ⊂ C and T : C → C be a nonexpansive mapping with F(T) ≠ ∅. Let A be a strongly positive bounded linear operator on C with constant $\bar\gamma>0$ and f : C → C be a contraction with contractive constant α ∈ (0, 1) such that $0<\gamma<\bar\gamma/\alpha$. Let $x_t$ be the fixed point of the contraction $C\ni x\mapsto t\gamma f(x)+(I-tA)Tx$ for t ∈ (0, 1) and $t<\|A\|^{-1}$. Then $\{x_t\}$ converges strongly as t → 0 to a fixed point $\bar x$ of T, which solves the following variational inequality:
$$\langle A\bar x-\gamma f(\bar x),\,\bar x-p\rangle\le0,\qquad p\in F(T).$$

Now we study the strong convergence results for a general iterative scheme for the k-strictly pseudo-contractive mapping.

Theorem 3.1. Let H be a Hilbert space, C be a closed convex subset of H such that C ± C ⊂ C, and T : C → H be a k-strictly pseudo-contractive mapping with F(T) ≠ ∅ for some 0 ≤ k < 1. Let A be a strongly positive bounded linear operator on C with constant $\bar\gamma>0$ and f : C → C be a contraction with contractive constant α ∈ (0, 1) such that $0<\gamma<\bar\gamma/\alpha$. Let {αₙ} be a sequence in (0, 1) which satisfies the condition
(C1) $\lim_{n\to\infty}\alpha_n=0$.
Let {xₙ} be a sequence in C generated by
$$x_1\in C,\qquad x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)P_CSx_n,\qquad n\ge1,$$
where S : C → H is the mapping defined by Sx = kx + (1 − k)Tx and $P_C$ is the metric projection of H onto C. Let μ be a Banach limit. Then
$$\mu_n\big(\langle Aq-\gamma f(q),\,q-x_n\rangle\big)\le0,$$
where $q=\lim_{t\to0^+}x_t$ with $x_t$ being the fixed point of the contraction $x\mapsto t\gamma f(x)+(I-tA)P_CSx$, 0 < t < 1 and $t<\|A\|^{-1}$.

Proof. Let $\{x_t\}$ be the net generated by
0 < t < 1 and t < kAk−1 .
(3.1)
Since PC S is a nonexpansive mapping from C into itself, by Theorem MX and Lemmas 2.2 and 2.3, there exists limt→0 xt ∈ F (S) = F (T ). Denote it by q. Moreover q is a solution of the variational inequality hAq − γf (q), q − pi ≤ 0,
p ∈ F (T ).
By (3.1), we have kxt − xn+1 k = k(I − tA)(PC Sxt − xn+1 ) + t(γf (xt ) − Axn+1 )k. Applying Lemma 2.6 and Lemma 2.8, we have kxt − xn+1 k2 ≤ (1 − γt)2 kPC Sxt − xn+1 k2 + 2thγf (xt ) − Axn+1 , xt − xn+1 i.
(3.2)
Let p ∈ F(T). Then we have
$$\|x_t-p\|=\|t(\gamma f(x_t)-Ap)+(I-tA)(P_CSx_t-p)\|\le t\|\gamma f(x_t)-Ap\|+(1-\bar\gamma t)\|x_t-p\|$$
$$\le t(\|\gamma f(x_t)-\gamma f(p)\|+\|\gamma f(p)-Ap\|)+(1-\bar\gamma t)\|x_t-p\|\le t\gamma\alpha\|x_t-p\|+t\|\gamma f(p)-Ap\|+(1-\bar\gamma t)\|x_t-p\|.$$
This gives that
$$\|x_t-p\|\le\frac{1}{\bar\gamma-\gamma\alpha}\|\gamma f(p)-Ap\|.$$
Hence $\{x_t\}$ is bounded, and so are $\{f(x_t)\}$ and $\{P_CSx_t\}$.
Now we show that $\|x_n-p\|\le\max\{\|x_1-p\|,\ \|Ap-\gamma f(p)\|/(\bar\gamma-\gamma\alpha)\}$ for all n ≥ 1 and all p ∈ F(T). Indeed, observing the condition (C1), we may assume, without loss of generality, that $\alpha_n<\|A\|^{-1}$ and $\alpha_n\bar\gamma<1$ for all n ≥ 1. From Lemma 2.6 we know that, if $0<\rho\le\|A\|^{-1}$, then $\|I-\rho A\|\le1-\rho\bar\gamma$. Therefore, taking a point p ∈ F(T) and using the nonexpansiveness of $P_CS$, we have
$$\|x_{n+1}-p\|=\|\alpha_n(\gamma f(x_n)-Ap)+(I-\alpha_nA)(P_CSx_n-p)\|\le(1-\alpha_n\bar\gamma)\|x_n-p\|+\alpha_n\|\gamma f(x_n)-Ap\|$$
$$\le(1-\alpha_n\bar\gamma)\|x_n-p\|+\alpha_n(\|\gamma f(x_n)-\gamma f(p)\|+\|\gamma f(p)-Ap\|)\le[1-(\bar\gamma-\gamma\alpha)\alpha_n]\|x_n-p\|+(\bar\gamma-\gamma\alpha)\alpha_n\,\frac{\|\gamma f(p)-Ap\|}{\bar\gamma-\gamma\alpha}.$$
Using an induction, we have $\|x_n-p\|\le\max\{\|x_1-p\|,\ \|Ap-\gamma f(p)\|/(\bar\gamma-\gamma\alpha)\}$. Hence {xₙ} is bounded, and so are $\{\gamma f(x_n)\}$, $\{P_CSx_n\}$, $\{AP_CSx_n\}$ and $\{Ax_n\}$. As a consequence, with the control condition (C1), we get
$$\|x_{n+1}-P_CSx_n\|=\alpha_n\|\gamma f(x_n)-AP_CSx_n\|\to0\quad(n\to\infty),$$
and $\|P_CSx_t-x_{n+1}\|\le\|x_t-x_n\|+e_n$, where $e_n=\|x_{n+1}-P_CSx_n\|\to0$ as n → ∞. Also, observing that A is strongly positive and linear, we have
$$\langle Ax_t-Ax_n,\,x_t-x_n\rangle=\langle A(x_t-x_n),\,x_t-x_n\rangle\ge\bar\gamma\|x_t-x_n\|^2.\qquad(3.3)$$
So, combining (3.2) and (3.3), we obtain
$$\|x_t-x_{n+1}\|^2\le(1-\bar\gamma t)^2(\|x_t-x_n\|+e_n)^2+2t\langle\gamma f(x_t)-Ax_t,\,x_t-x_{n+1}\rangle+2t\langle Ax_t-Ax_{n+1},\,x_t-x_{n+1}\rangle$$
$$\le(\bar\gamma^2t^2-2\bar\gamma t)\|x_t-x_n\|^2+\|x_t-x_n\|^2+(1-\bar\gamma t)^2(2\|x_t-x_n\|e_n+e_n^2)+2t\langle\gamma f(x_t)-Ax_t,\,x_t-x_{n+1}\rangle+2t\langle Ax_t-Ax_{n+1},\,x_t-x_{n+1}\rangle$$
$$\le(\bar\gamma t^2-2t)\langle Ax_t-Ax_n,\,x_t-x_n\rangle+\|x_t-x_n\|^2+(1-\bar\gamma t)^2(2\|x_t-x_n\|e_n+e_n^2)+2t\langle\gamma f(x_t)-Ax_t,\,x_t-x_{n+1}\rangle+2t\langle Ax_t-Ax_{n+1},\,x_t-x_{n+1}\rangle$$
$$=\bar\gamma t^2\langle Ax_t-Ax_n,\,x_t-x_n\rangle+\|x_t-x_n\|^2+(1-\bar\gamma t)^2(2\|x_t-x_n\|e_n+e_n^2)+2t\langle\gamma f(x_t)-Ax_t,\,x_t-x_{n+1}\rangle+2t\big(\langle Ax_t-Ax_{n+1},\,x_t-x_{n+1}\rangle-\langle Ax_t-Ax_n,\,x_t-x_n\rangle\big).\qquad(3.4)$$
Applying the Banach limit μ to (3.4) together with $\lim_{n\to\infty}e_n=0$, we have
$$\mu_n(\|x_t-x_{n+1}\|^2)\le\mu_n\big(\bar\gamma t^2\langle Ax_t-Ax_n,x_t-x_n\rangle\big)+\mu_n(\|x_t-x_n\|^2)+2t\,\mu_n\big(\langle\gamma f(x_t)-Ax_t,x_t-x_{n+1}\rangle\big)+2t\big(\mu_n(\langle Ax_t-Ax_{n+1},x_t-x_{n+1}\rangle)-\mu_n(\langle Ax_t-Ax_n,x_t-x_n\rangle)\big).\qquad(3.5)$$
Using the property $\mu_n(a_n)=\mu_n(a_{n+1})$ of the Banach limit in (3.5), we obtain
$$\mu_n\big(\langle Ax_t-\gamma f(x_t),\,x_t-x_n\rangle\big)=\mu_n\big(\langle Ax_t-\gamma f(x_t),\,x_t-x_{n+1}\rangle\big)$$
$$\le\frac{\bar\gamma t}{2}\mu_n\big(\langle Ax_t-Ax_n,x_t-x_n\rangle\big)+\frac{1}{2t}\big[\mu_n(\|x_t-x_n\|^2)-\mu_n(\|x_t-x_n\|^2)\big]+\mu_n\big(\langle Ax_t-Ax_n,x_t-x_n\rangle\big)-\mu_n\big(\langle Ax_t-Ax_n,x_t-x_n\rangle\big)$$
$$=\frac{\bar\gamma t}{2}\mu_n\big(\langle Ax_t-Ax_n,x_t-x_n\rangle\big).\qquad(3.6)$$
Since
$$t\langle Ax_t-Ax_n,\,x_t-x_n\rangle\le t\|A\|\,\|x_t-x_n\|^2\le t\|A\|(\|x_t-p\|+\|p-x_n\|)^2\le t\|A\|\Big(\frac{2}{\bar\gamma-\gamma\alpha}\|\gamma f(p)-Ap\|+\|x_0-p\|\Big)^2\to0\quad(\text{as }t\to0),\qquad(3.7)$$
we conclude from (3.6) and (3.7) that
$$\mu_n\big(\langle Aq-\gamma f(q),\,q-x_n\rangle\big)\le\limsup_{t\to0}\mu_n\big(\langle Ax_t-\gamma f(x_t),\,x_t-x_n\rangle\big)\le\limsup_{t\to0}\frac{\bar\gamma t}{2}\mu_n\big(\langle Ax_t-Ax_n,\,x_t-x_n\rangle\big)\le0,$$
where $q=\lim_{t\to0}x_t$. This completes the proof. □

Using Theorem 3.1, we give the following main result.

Theorem 3.2. Let H be a Hilbert space, C be a closed convex subset of H such that C ± C ⊂ C, and T : C → H be a k-strictly pseudo-contractive mapping with F(T) ≠ ∅ for some 0 ≤ k < 1. Let A be a strongly positive bounded linear operator on C with constant $\bar\gamma>0$ and f : C → C be a contraction with contractive constant α ∈ (0, 1) such that $0<\gamma<\bar\gamma/\alpha$. Let {αₙ} be a sequence in (0, 1) which satisfies the conditions:
(C1) $\lim_{n\to\infty}\alpha_n=0$;
(C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$.
Let {xₙ} be a sequence in C generated by
$$x_1\in C,\qquad x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)P_CSx_n,\qquad n\ge1,\qquad(3.8)$$
where S : C → H is the mapping defined by Sx = kx + (1 − k)Tx and $P_C$ is the metric projection of H onto C. If {xₙ} is weakly asymptotically regular, then {xₙ} converges strongly to q ∈ F(T), which solves the following variational inequality:
$$\langle Aq-\gamma f(q),\,q-p\rangle\le0,\qquad p\in F(T).\qquad(3.9)$$
Proof. First, note that from the condition (C1) we may assume, without loss of generality, that $\alpha_n\le\|A\|^{-1}$, $\alpha_n\bar\gamma<1$ and $\frac{2(\bar\gamma-\alpha\gamma)}{1-\alpha_n\alpha\gamma}\alpha_n<1$ for n ≥ 1. Let $x_t$ be defined by (3.1), that is, $x_t=t\gamma f(x_t)+(I-tA)P_CSx_t$ for 0 < t < 1, and let $\lim_{t\to0}x_t:=q\in F(S)=F(T)$ (by using Theorem MX and Lemmas 2.2 and 2.3). Then q is a solution of the variational inequality
$$\langle Aq-\gamma f(q),\,q-p\rangle\le0,\qquad p\in F(T).$$
We divide the proof into several steps.

Step 1. We show that $\|x_n-p\|\le\max\{\|x_1-p\|,\ \|Ap-\gamma f(p)\|/(\bar\gamma-\gamma\alpha)\}$ for all n ≥ 1 and all p ∈ F(T), as in the proof of Theorem 3.1. Hence {xₙ} is bounded, and so are $\{P_CSx_n\}$ and $\{f(x_n)\}$.
Step 2. We show that $\limsup_{n\to\infty}\langle Aq-\gamma f(q),\,q-x_n\rangle\le0$. To this end, put $a_n:=\langle Aq-\gamma f(q),\,q-x_n\rangle$, n ≥ 1. Then Theorem 3.1 implies that $\mu_n(a_n)\le0$ for any Banach limit μ. Since {xₙ} is bounded, there exists a subsequence $\{x_{n_j}\}$ of {xₙ} such that
$$\limsup_{n\to\infty}(a_{n+1}-a_n)=\lim_{j\to\infty}(a_{n_j+1}-a_{n_j})$$
and $x_{n_j}\rightharpoonup v\in H$. This implies that $x_{n_j+1}\rightharpoonup v$, since {xₙ} is weakly asymptotically regular. Therefore, we have
$$w\text{-}\lim_{j\to\infty}(q-x_{n_j+1})=w\text{-}\lim_{j\to\infty}(q-x_{n_j})=q-v,$$
and so
$$\limsup_{n\to\infty}(a_{n+1}-a_n)=\lim_{j\to\infty}\langle Aq-\gamma f(q),\,(q-x_{n_j+1})-(q-x_{n_j})\rangle=0.$$
Then Lemma 2.9 implies that $\limsup_{n\to\infty}a_n\le0$, that is, $\limsup_{n\to\infty}\langle Aq-\gamma f(q),\,q-x_n\rangle\le0$.

Step 3. We show that $\lim_{n\to\infty}\|x_n-q\|=0$. By (3.8), we have $x_{n+1}-q=\alpha_n(\gamma f(x_n)-Aq)+(I-\alpha_nA)(P_CSx_n-q)$. Applying Lemma 2.8, we obtain
$$\|x_{n+1}-q\|^2=\|(I-\alpha_nA)(P_CSx_n-q)+\alpha_n(\gamma f(x_n)-Aq)\|^2\le\|(I-\alpha_nA)(P_CSx_n-q)\|^2+2\alpha_n\langle\gamma f(x_n)-Aq,\,x_{n+1}-q\rangle$$
$$\le(1-\alpha_n\bar\gamma)^2\|x_n-q\|^2+2\alpha_n\gamma\alpha\|x_n-q\|\,\|x_{n+1}-q\|+2\alpha_n\langle\gamma f(q)-Aq,\,x_{n+1}-q\rangle\qquad(3.10)$$
$$\le(1-\alpha_n\bar\gamma)^2\|x_n-q\|^2+\alpha_n\gamma\alpha(\|x_n-q\|^2+\|x_{n+1}-q\|^2)+2\alpha_n\langle\gamma f(q)-Aq,\,x_{n+1}-q\rangle.$$
It then follows from (3.10) that
$$\|x_{n+1}-q\|^2\le\frac{(1-\alpha_n\bar\gamma)^2+\alpha_n\gamma\alpha}{1-\alpha_n\gamma\alpha}\|x_n-q\|^2+\frac{2\alpha_n}{1-\alpha_n\gamma\alpha}\langle\gamma f(q)-Aq,\,x_{n+1}-q\rangle$$
$$\le\Big(1-\frac{2\alpha_n(\bar\gamma-\alpha\gamma)}{1-\alpha_n\gamma\alpha}\Big)\|x_n-q\|^2+\frac{2\alpha_n(\bar\gamma-\alpha\gamma)}{1-\alpha_n\gamma\alpha}\Big(\frac{1}{\bar\gamma-\alpha\gamma}\langle\gamma f(q)-Aq,\,x_{n+1}-q\rangle+\frac{\alpha_n\bar\gamma^2}{2(\bar\gamma-\alpha\gamma)}M_2\Big),\qquad(3.11)$$
where $M_2=\sup_{n\ge1}\|x_n-q\|^2$. Put
$$\lambda_n=\frac{2\alpha_n(\bar\gamma-\alpha\gamma)}{1-\alpha_n\alpha\gamma}\qquad\text{and}\qquad\delta_n=\frac{1}{\bar\gamma-\alpha\gamma}\langle\gamma f(q)-Aq,\,x_{n+1}-q\rangle+\frac{\alpha_n\bar\gamma^2}{2(\bar\gamma-\alpha\gamma)}M_2.$$
By (C1), (C2) and Step 2, we have $\lambda_n\to0$, $\sum_{n=1}^{\infty}\lambda_n=\infty$ and $\limsup_{n\to\infty}\delta_n\le0$. Since (3.11) reduces to $\|x_{n+1}-q\|^2\le(1-\lambda_n)\|x_n-q\|^2+\lambda_n\delta_n$, from Lemma 2.7 with $r_n=0$ we conclude that $\lim_{n\to\infty}\|x_n-q\|=0$. This completes the proof. □

Theorem 3.3. Let H be a Hilbert space and C be a closed convex subset of H such that C ± C ⊂ C. Let $T_i:C\to H$ be a $k_i$-strictly pseudo-contractive mapping for some $0\le k_i<1$ with $\bigcap_{i=1}^NF(T_i)\ne\emptyset$. Let A be a strongly positive bounded linear operator on C with constant $\bar\gamma>0$ and f : C → C be a contraction with contractive constant α ∈ (0, 1) such that $0<\gamma<\bar\gamma/\alpha$. Let $\{\alpha_n\}\subset(0,1)$ be a sequence which satisfies the following conditions:
(C1) $\lim_{n\to\infty}\alpha_n=0$;
(C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$.
Let {xₙ} be a sequence in C generated by
$$x_0=x\in C,\qquad x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)P_CSx_n,$$
where S : C → H is the mapping defined by $Sx=kx+(1-k)\sum_{i=1}^N\eta_iT_ix$ with $k=\max\{k_i:1\le i\le N\}$ and $\{\eta_i\}$ is a positive sequence such that $\sum_{i=1}^N\eta_i=1$. If {xₙ} is weakly asymptotically regular, then {xₙ} converges strongly to a common fixed point q of $\{T_i\}_{i=1}^N$, which is a solution of the following variational inequality:
$$\langle Aq-\gamma f(q),\,q-p\rangle\le0,\qquad p\in\bigcap_{i=1}^NF(T_i).$$

Proof. Define a mapping T : C → H by $Tx=\sum_{i=1}^N\eta_iT_ix$. By Lemmas 2.4 and 2.5, we conclude that T : C → H is a k-strictly pseudo-contractive mapping with $k=\max\{k_i:1\le i\le N\}$ and $F(T)=F\big(\sum_{i=1}^N\eta_iT_i\big)=\bigcap_{i=1}^NF(T_i)$. Then the result follows from Theorem 3.2 immediately. □

Corollary 3.4. Let H, C, T, A, f and γ be as in Theorem 3.2. Let {xₙ} be a sequence in C generated by
$$x_1\in C,\qquad x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)P_CSx_n,\qquad n\ge1,\qquad(3.12)$$
where S : C → H is the mapping defined by Sx = kx + (1 − k)Tx and $P_C$ is the metric projection of H onto C. Let {αₙ} be a sequence in (0, 1) which satisfies the conditions:
(C1) $\lim_{n\to\infty}\alpha_n=0$;
(C2) $\sum_{n=0}^{\infty}\alpha_n=\infty$.
If {xₙ} is asymptotically regular, then {xₙ} converges strongly to q ∈ F(T), which is a solution of the variational inequality (3.9).

Remark 3.1. If {αₙ} in Corollary 3.4 satisfies the conditions (C1), (C2) and
(C3) $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$; or
(C4) $\lim_{n\to\infty}\frac{\alpha_n}{\alpha_{n+1}}=1$ or, equivalently, $\lim_{n\to\infty}\frac{\alpha_n-\alpha_{n+1}}{\alpha_{n+1}}=0$; or
(C5) $|\alpha_{n+1}-\alpha_n|\le o(\alpha_{n+1})+\sigma_n$, $\sum_{n=1}^{\infty}\sigma_n<\infty$ (the perturbed control condition),
then the sequence {xₙ} generated by (3.12) is asymptotically regular. We give the proof only in the case when {αₙ} satisfies the conditions (C1), (C2) and (C5). By Step 1 in the proof of Theorem 3.2, there exists a constant L > 0 such that $\|AP_CSx_n\|+\gamma\|f(x_n)\|\le L$ for all n ≥ 0. So we obtain, for all n ≥ 1,
$$\|x_{n+1}-x_n\|=\big\|(I-\alpha_nA)(P_CSx_n-P_CSx_{n-1})+(\alpha_{n-1}-\alpha_n)AP_CSx_{n-1}+\gamma[\alpha_n(f(x_n)-f(x_{n-1}))+(\alpha_n-\alpha_{n-1})f(x_{n-1})]\big\|$$
$$\le(1-\alpha_n\bar\gamma)\|x_n-x_{n-1}\|+|\alpha_n-\alpha_{n-1}|\,\|AP_CSx_{n-1}\|+\gamma\alpha_n\alpha\|x_n-x_{n-1}\|+\gamma\|f(x_{n-1})\|\,|\alpha_n-\alpha_{n-1}|$$
$$\le(1-\alpha_n(\bar\gamma-\gamma\alpha))\|x_n-x_{n-1}\|+L|\alpha_n-\alpha_{n-1}|\le(1-\alpha_n(\bar\gamma-\gamma\alpha))\|x_n-x_{n-1}\|+(o(\alpha_n)+\sigma_{n-1})L.\qquad(3.13)$$
Taking $s_{n+1}=\|x_{n+1}-x_n\|$, $\lambda_n=\alpha_n(\bar\gamma-\gamma\alpha)$, $\lambda_n\delta_n=o(\alpha_n)L$ and $r_n=\sigma_{n-1}L$, from (3.13) we have $s_{n+1}\le(1-\lambda_n)s_n+\lambda_n\delta_n+r_n$. Hence, by (C1), (C2), (C5) and Lemma 2.7, we obtain $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$.

In view of this observation, we have the following.

Corollary 3.5. Let H, C, T, A, f and γ be as in Theorem 3.2. Let {αₙ} be a sequence in (0, 1) which satisfies the conditions (C1), (C2) and (C5) (or the conditions (C1), (C2) and (C3), or the conditions (C1), (C2) and (C4)). Let {xₙ} be a sequence in C generated by (3.12), that is,
$$x_1\in C,\qquad x_{n+1}=\alpha_n\gamma f(x_n)+(I-\alpha_nA)P_CSx_n,\qquad n\ge1,$$
where S : C → H is the mapping defined by Sx = kx + (1 − k)Tx and $P_C$ is the metric projection of H onto C. Then {xₙ} converges strongly to q ∈ F(T), which is a solution of the variational inequality (3.9).
Remark 3.2. (1) Theorem 3.2 and Theorem 3.3 improve the corresponding results of Cho et al. [5] by using the weak asymptotic regularity of {xₙ} instead of the condition (C3) $\sum_{n=1}^{\infty}|\alpha_{n+1}-\alpha_n|<\infty$. (2) Theorem 3.2 also includes the corresponding results of Halpern [9], Marino and Xu [8], and Wittmann [11] as special cases. (3) The condition (C5) on {αₙ} in Corollary 3.5 is independent of the condition (C3) or (C4) in Remark 3.1, which was imposed in Theorem 2.1 of Cho et al. [5]. For this fact, see [17,18].
References [1] F. E. Browder, Fixed point theorems for noncompact mappings, Proc. Natl. Acad. Sci. USA 53 (1965) 1272–1276. [2] F. E. Browder, Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces, Arch. Ration. Mech. Anal. 24 (1967) 82–90. [3] F. E. Browder and W. V. Petryshn, Construction of fixed points of nonlinear mappings Hilbert space, J. Math. Anal. Appl. 20 (1967) 197–228. [4] G. L. Acedo and H. K. Xu, Iterative methods for strictly pseudo-contractions in Hilbert space, Nonlinear Anal. 67 (2007) 2258–2271. [5] Y. J. Cho, S. M. Kang and X. Qin, Some results on k-strictly pseudo-contractive mappings in Hilbert spaces, Nonlinear Anal. 70 (2009) 1956–1964. [6] J. S. Jung, Strong convergence of iterative methods for k-strictly pseudo-contractive mappings in Hilbert spaces, Applied Math. Comput. 215 (2010) 3746-3753. [7] C. H. Morales and J. S. Jung, Convergence of paths for pseudo-contractive mappings in Banach spaces, Proc. Amer. math. Soc. 128 (2000) 3411–3419. [8] G. Marino and H. X. Xu, A general iterative method for nonexpansive mappings in Hilbert spaces, J. Math. Anal. Appl. 318 (2006) 43–52. [9] B. Halpern, Fixed points of nonexpansive maps, Bull. Amer. Math. Soc. 73 (1967) 957–961. [10] A. Moudafi, Viscosity approximation methods for fixed-points problems, J. Math. Anal. Appl. 241 (2000) 46–55. [11] R. Wittmann, Approximation of fixed points of nonexpansive mappings, Arch. Math. 58 (1992) 486–491. [12] H. K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl. 298 (2004) 279–291. [13] H. Zhou, Convergence theorems of fixed points for k-strict pseudo-contractions in Hilbert spaces, Nonlinear Anal. 69 (2008) 456–462.
[14] L. S. Liu, Iterative processes with errors for nonlinear strongly accretive mappings in Banach spaces, J. Math. Anal. Appl. 194 (1995) 114-125. [15] H. K. Xu, Iterative algorithms for nonlinear operators, J. London Math. Soc. 66 (2002) 240–256. [16] N. Shioji and W. Takahashi, Strong convergence of approximated sequences for nonexpansive mappings in Banach spaces, Proc. Amer. Math. Soc. 125 (1997), no. 12, 3641–3645. [17] Y. J. Cho, S. M. Kang and H. Y. Zhou, Some control conditions on iterative methods, Commun. Appl. Nonlinear Anal. 12 (2005), no. 2, 27–34. [18] J. S. Jung, Y. J. Cho and R. P. Agarwal, Iterative schemes with some control conditions for a family of finite nonexpansive mappings in Banach space, Fixed Point Theory and Appl. 2005-2 (2005), 125–135.
TABLE OF CONTENTS, JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 2012 On the "On Some Results in Fuzzy Metric Spaces", Reza Saadati, …….…………………... 996 Different Types Meir-Keeler Contractions on Partial Metric Spaces, I. M. Erhan, E. Karapinar and D.Turkoglu,……………………………………………………………………………....1000 q-Bernstein Polynomials Associated With q-Genocchi Numbers and Polynomials, Seog-Hoon Rim, Joo-Hee Jeong , Sun-Jung Lee, Jeong-Hee Jin and Eun-Jung Moon,…………………. 1006 Orthogonal Stability of an Additive-Quadratic Functional Equation in Non-Archimedean Spaces, Jung Rye Lee, Choonkil Park, Cihangir Alaca and Dong Yun Shin,……………………….. 1014 Stability of a General Mixed Additive-Cubic Equation in F-Spaces, Tian Zhou Xu, John Michael Rassias and Wan Xin Xu,…………………………………………………………………….1026 The Finite Difference Methods and Their Extrapolation for Solving Biharmonic Equations, Guang Zeng, Jin Huang, Li Lei and Pan Cheng,……………………………………………. 1038 I-Summability and I-Approximation through Invariant Mean, M. Mursaleen, Abdullah Alotaibi and Mohammed A. Alghamdi,………………………………………………………………..1049 Stability of Higher Ring Derivations in Fuzzy Banach Algebras, Ick-Soon Chang,………... 1059 Lp Convergence With Rates of General Singular Integral Operators, George A. Anastassiou and Razvan A. Mezei,……………………………………………………………………………. 1067 Convergence Theorems of Iterative Schemes for a Finite Family of Asymptotically QuasiNonexpansive Type Mappings in Metric Spaces, Jong Kyu Kim and Chu Hwan Kim,……. 1084 Operational Formula For Jacobi-Pineiro Polynomials, Cem Kaanoglu,…………………….. 1096 Fixed Points and Stability of Additive Functional Equations on the Banach Algebras, Yeol Je Cho, Jung Im Kang and Reza Saadati,………………………………………………………. 1103 Oscillation of Second Order Nonlinear Neutral Differential Equation, M. Tamer Senel and T. Candan,………………………………………………………………………………………. 1112 On the Fuzzy Stability of a Generalized Jensen Quadratic Functional Equation, Hassan Azadi Kenary, Choonkil Park and Sung Jin Lee,…………………………………………………… 1118 Landau Type Inequalities on Time Scales, George A. Anastassiou,………………………… 1130
TABLE OF CONTENTS, JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 6, 2012 (continued) Generalized Integration Operators from the Space of Integral Transforms into Bloch-Type Spaces, Stevo Stevic, Ajay K. Sharma and S. D. Sharma,………………………………….. 1139 Some Remarks on Extended Hypergeometric, Extended Confluent Hypergeometric and Extended Appell’s Functions, Mehmet Ali Özarslan,………………………………………. 1148 Recurrence Relations for the Harmonic Mean Newton's Method in Banach Spaces, Liang Chen, Chuanqing Gu and Yanfang Ma,……………………………………………………………. 1154 A General Iterative Method with Some Control Conditions for k-Strictly Pseudo-Contractive Mappings, Jong Soo Jung,……………………………………………………………………1165
Volume 14, Number 7 ISSN:1521-1398 PRINT,1572-9206 ONLINE
November 2012
Journal of Computational Analysis and Applications EUDOXUS PRESS,LLC
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 1190-1209, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Approximation of an additive-quadratic functional equation in RN-spaces
Hassan Azadi Kenary, Sun-Young Jang∗ and Choonkil Park
Abstract. In this paper, using the fixed point and direct methods, we prove the Hyers-Ulam stability of the following additive-quadratic functional equation:
$$af\Big(\frac{x+y+z}{b}\Big)+af\Big(\frac{x-y+z}{b}\Big)+af\Big(\frac{x+y-z}{b}\Big)+af\Big(\frac{-x+y+z}{b}\Big)=cf(x)+cf(y)+cf(z),$$
where a, b and c are positive real numbers, in random normed spaces.
1. Introduction and preliminaries A classical question in the theory of functional equations is the following: “When is it true that a function which approximately satisfies a functional equation must be close to an exact solution of the equation?”. If the problem accepts a solution, then we say that the equation is stable. The first stability problem concerning group homomorphisms was raised by Ulam [34] in 1940. In the next year, Hyers [14] gave a positive answer to the above question for additive groups under the assumption that the groups are Banach spaces. In 1978, Th.M. Rassias [26] proved a generalization of the Hyers’ theorem for additive mappings. The result of Th.M. Rassias has provided a lot of influence during the last three decades in the development of a generalization of the Hyers-Ulam stability concept (see [1]-[5]). Furthermore, in 1994, a generalization of the Th.M. Rassias’ theorem was obtained by Gˇavruta [13] by replacing the bound ϵ(∥x∥p + ∥y∥p ) by a general control function ϕ(x, y). The functional equation f (x + y) + f (x − y) = 2f (x) + 2f (y) is called a quadratic functional equation. In particular, every solution of the quadratic functional equation is said to be a quadratic mapping. In 1983, a Hyers-Ulam stability problem for the quadratic functional equation was proved by Skof [33] for mappings f : X → Y , where X is a normed space and Y is a Banach space. In 1984, Cholewa [6] noticed that the theorem of Skof is still true if the relevant domain X is replaced by an Abelian group and, in 2002, Czerwik [7] proved the Hyers-Ulam stability of the quadratic functional equation. 0
2010 Mathematics Subject Classification: 39B82, 39B52, 54E70, 47H10, 54E40. Keywords: Hyers-Ulam stability; Direct method; Random normed space; Fixed point method. * Corresponding author.
The stability problems of several functional equations have been extensively investigated by a number of authors and there are many interesting results concerning this problem ([8]-[31]). In the sequel, we adopt the usual terminology, notions and conventions of the theory of random normed spaces as in [32]. Throughout this paper (in the random stability section), let Γ+ denote the set of all probability distribution functions F : R ∪ {−∞, +∞} → [0, 1] such that F is left-continuous and nondecreasing on R, F(0) = 0 and F(+∞) = 1. It is clear that the set D+ = {F ∈ Γ+ : l⁻F(+∞) = 1}, where l⁻f(x) = lim_{t→x⁻} f(t), is a subset of Γ+. The set Γ+ is partially ordered by the usual pointwise ordering of functions, that is, F ≤ G if and only if F(t) ≤ G(t) for all t ∈ R. For any a ≥ 0, the element H_a(t) of D+ is defined by
H_a(t) = 0 if t ≤ a, and H_a(t) = 1 if t > a.
We can easily show that the maximal element in Γ+ is the distribution function H_0(t).
Definition 1.1. A function T : [0, 1]² → [0, 1] is a continuous triangular norm (briefly, a t-norm) if T satisfies the following conditions: (a) T is commutative and associative; (b) T is continuous; (c) T(x, 1) = x for all x ∈ [0, 1]; (d) T(x, y) ≤ T(z, w) whenever x ≤ z and y ≤ w for all x, y, z, w ∈ [0, 1]. Three typical examples of continuous t-norms are T_P(x, y) = xy, T_L(x, y) = max{x + y − 1, 0} and T_M(x, y) = min{x, y}. Recall that, if T is a t-norm and {x_n} is a sequence in [0, 1], then T_{i=1}^{n} x_i is defined recursively by T_{i=1}^{1} x_i = x_1 and T_{i=1}^{n} x_i = T(T_{i=1}^{n−1} x_i, x_n) for all n ≥ 2; T_{i=n}^{∞} x_i is defined by T_{i=1}^{∞} x_{n+i}.
Definition 1.2. A random normed space (briefly, RN-space) is a triple (X, µ, T), where X is a vector space, T is a continuous t-norm and µ : X → D+ is a mapping such that the following conditions hold: (a) µ_x(t) = H_0(t) for all x ∈ X and t > 0 if and only if x = 0; (b) µ_{αx}(t) = µ_x(t/|α|) for all α ∈ R with α ≠ 0, x ∈ X and t ≥ 0; (c) µ_{x+y}(t + s) ≥ T(µ_x(t), µ_y(s)) for all x, y ∈ X and t, s ≥ 0. Every normed space (X, ∥·∥) defines a random normed space (X, µ, T_M), where
µ_u(t) = t / (t + ∥u∥)
for all t > 0, where T_M is the minimum t-norm. This space X is called the induced random normed space. If the t-norm T is such that sup_{0<a<1} T(a, a) = 1, then every RN-space (X, µ, T) is a metrizable linear topological space with the topology generated by the neighbourhoods {y ∈ X : µ_{y−x}(ϵ) > 1 − λ}, x ∈ X, ϵ > 0, λ ∈ (0, 1).
Definition 1.3. Let (X, µ, T) be an RN-space. (a) A sequence {x_n} in X is said to be convergent to a point x ∈ X (written x_n → x as n → ∞) if lim_{n→∞} µ_{x_n−x}(t) = 1 for all t > 0. (b) A sequence {x_n} in X is called a Cauchy sequence in X if lim_{m,n→∞} µ_{x_n−x_m}(t) = 1 for all t > 0. (c) The RN-space (X, µ, T) is said to be complete if every Cauchy sequence in X is convergent.
Theorem 1.1 ([32]). If (X, µ, T) is an RN-space and {x_n} is a sequence such that x_n → x, then lim_{n→∞} µ_{x_n}(t) = µ_x(t).
Definition 1.4. Let X be a set. A function d : X × X → [0, ∞] is called a generalized metric on X if d satisfies the following conditions: (a) d(x, y) = 0 if and only if x = y for all x, y ∈ X; (b) d(x, y) = d(y, x) for all x, y ∈ X; (c) d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X.
Theorem 1.2. Let (X, d) be a complete generalized metric space and J : X → X be a strictly contractive mapping with Lipschitz constant L < 1. Then, for all x ∈ X, either d(J^n x, J^{n+1} x) = ∞ for all nonnegative integers n, or there exists a positive integer n_0 such that (a) d(J^n x, J^{n+1} x) < ∞ for all n ≥ n_0; (b) the sequence {J^n x} converges to a fixed point y* of J; (c) y* is the unique fixed point of J in the set Y = {y ∈ X : d(J^{n_0} x, y) < ∞}; (d) d(y, y*) ≤ (1/(1 − L)) d(y, Jy) for all y ∈ Y.
In this paper, using the fixed point and direct methods, we prove the Hyers-Ulam stability of the following functional equation
af((x+y+z)/b) + af((x−y+z)/b) + af((x+y−z)/b) + af((−x+y+z)/b) = cf(x) + cf(y) + cf(z)   (1.1)
in random normed spaces.
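For illustration only, the constant 1/(1 − L) in part (d) of Theorem 1.2 comes from a standard telescoping estimate; the following display is a sketch of that step and is not part of the original paper.

```latex
d(y, J^{n} y) \le \sum_{k=0}^{n-1} d(J^{k} y, J^{k+1} y)
             \le \sum_{k=0}^{n-1} L^{k}\, d(y, Jy)
             \le \frac{d(y, Jy)}{1 - L},
\qquad \text{so letting } n \to \infty \text{ gives } d(y, y^{\ast}) \le \frac{d(y, Jy)}{1 - L}.
```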
2. Hyers-Ulam stability of the functional equation (1.1): a fixed point method
In this section, using the fixed point method, we prove the Hyers-Ulam stability of the functional equation (1.1) in random normed spaces for the odd and even cases.
Theorem 2.1. Let X be a linear space, (Y, µ, T_M) be a complete RN-space and Φ be a mapping from X³ to D+ (Φ(x, y, z) is denoted by Φ_{x,y,z}) such that there exists 0 < α < 1/2 such that
Φ_{2x,2y,2z}(t) ≤ Φ_{x,y,z}(αt)   (2.1)
for all x, y, z ∈ X and t > 0. Let f : X → Y be an odd mapping satisfying
µ_{af((x+y+z)/b)+af((x−y+z)/b)+af((x+y−z)/b)+af((−x+y+z)/b)−cf(x)−cf(y)−cf(z)}(t) ≥ Φ_{x,y,z}(t)   (2.2)
for all x, y, z ∈ X and t > 0. Then the limit
A(x) := lim_{n→∞} 2^n f(x/2^n)
exists for all x ∈ X and A : X → Y is a unique additive mapping such that
µ_{f(x)−A(x)}(t) ≥ T_M( Φ_{x,x,0}((1 − 2α)ct/(2α)), Φ_{2x,0,0}((1 − 2α)ct/(2α)) )   (2.3)
for all x ∈ X and t > 0.
Proof. Replacing x, y, z by x/2, x/2, 0, respectively, in (2.2), and letting y = z = 0 in (2.2), one can easily obtain that
µ_{f(x)−2f(x/2)}(t) ≥ T_M( Φ_{x/2,x/2,0}(ct/2), Φ_{x,0,0}(ct/2) ) ≥ T_M( Φ_{x,x,0}(ct/(2α)), Φ_{2x,0,0}(ct/(2α)) )   (2.4)
for all x ∈ X and t > 0. Consider the set S := {g : X → Y} and the generalized metric d on S defined by
d(g, h) = inf{ u ∈ R+ : µ_{g(x)−h(x)}(ut) ≥ T_M(Φ_{x,x,0}(t), Φ_{2x,0,0}(t)) for all x ∈ X, t > 0 },
where inf ∅ = +∞. It is easy to show that (S, d) is complete (see [20, Lemma 2.1]). Now, we consider the linear mapping J : (S, d) → (S, d) such that
Jh(x) := 2h(x/2)
for all x ∈ X.
First, we prove that J is a strictly contractive mapping with Lipschitz constant 2α. In fact, let g, h ∈ S be such that d(g, h) < ϵ. Then we have
µ_{g(x)−h(x)}(ϵt) ≥ T_M(Φ_{x,x,0}(t), Φ_{2x,0,0}(t))
for all x ∈ X and t > 0, and so
µ_{Jg(x)−Jh(x)}(2αϵt) = µ_{2g(x/2)−2h(x/2)}(2αϵt) = µ_{g(x/2)−h(x/2)}(αϵt) ≥ T_M( Φ_{x/2,x/2,0}(αt), Φ_{x,0,0}(αt) ) ≥ T_M( Φ_{x,x,0}(t), Φ_{2x,0,0}(t) )
for all x ∈ X and t > 0. Thus d(g, h) < ϵ implies that d(Jg, Jh) < 2αϵ. This means that d(Jg, Jh) ≤ 2α d(g, h) for all g, h ∈ S. It follows from (2.4) that
d(f, Jf) ≤ 2α/c.
By Theorem 1.2, there exists a mapping A : X → Y satisfying the following:
(1) A is a fixed point of J, that is,
A(x/2) = (1/2) A(x)   (2.5)
for all x ∈ X. The mapping A is the unique fixed point of J in the set Ω = {h ∈ S : d(f, h) < ∞}. This implies that A is the unique mapping satisfying (2.5) such that there exists u ∈ (0, ∞) satisfying µ_{f(x)−A(x)}(ut) ≥ T_M(Φ_{x,x,0}(t), Φ_{2x,0,0}(t)) for all x ∈ X and t > 0.
(2) d(J^n f, A) → 0 as n → ∞. This implies the equality
lim_{n→∞} 2^n f(x/2^n) = A(x)
for all x ∈ X.
(3) d(f, A) ≤ d(f, Jf)/(1 − 2α) with f ∈ Ω, which implies the inequality
d(f, A) ≤ 2α/(c − 2cα)
Additive-quadratic functional equation in RN-spaces
6
and so
(
µf (x)−A(x)
)
2αt ≥ TM (Φx,x,0 (t), Φ2x,0,0 (t)) c − 2cα
for all x ∈ X and t > 0. This implies that the inequality (2.3) holds. Replacing x, y and z by 2xn , 2yn and 2zn , respectively, in (2.2), we obtain µ2n [af ( x+y+z +af ( x−y+z +af ( x+y−z +af ( −x+y+z )−cf ( 2xn )−cf ( 2yn )−cf ( 2zn )] (t) ≥ Φ 2n b ) 2n b ) 2n b ) 2n b
(
x , y , z 2n 2n 2n
t 2n
)
for all x, y, z ∈ X, t > 0 and n ≥ 1 and so, from (2.1), it follows that (
Φ 2xn , 2yn , 2zn
t 2n
)
Since
≥ Φx,y,z (
lim Φx,y,z
n→∞
t n 2 αn
(
)
t . 2n αn
)
=1
for all x, y, z ∈ X and t > 0, we have µaA( x+y+z )+aA( x−y+z )+aA( x+y−z )+aA( −x+y+z )−cA(x)−cA(y)−cA(z) (t) = 1 b
b
b
b
for all x, y, z ∈ X and t > 0. Thus the mapping A : X → Y is additive. This completes the proof. Corollary 2.1. Let X be a real normed space, θ ≥ 0 and r be a real numbers with r > 1. Let f : X → Y be an odd mapping satisfying µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ b
b
b
b
t
(
t + θ ∥x∥r + ∥y∥r + ∥z∥r
for all x, y, z ∈ X and t > 0. Then the limit A(x) = limn→∞ 2 f n
(
) x 2n
x ∈ X and A : X → Y is a unique additive mapping such that (
µf (x)−A(x) (t) ≥ TM
)
(2r − 2)ct (2r − 2)ct , (2r − 2)ct + 4θ∥x∥r (2r − 2)ct + 2r+1 θ∥x∥r
exists for all )
for all x ∈ X and t > 0. Proof. The proof follows from Theorem 2.1 if we take Φx,y,z (t) =
(
t
t + θ ∥x∥r + ∥y∥r + ∥z∥r
)
for all x, y, z ∈ X and t > 0. In fact, if we choose α = 2−r , then we get the desired result.
1196
H. Azadi Kenary, S. Jang, C. Park
7
Theorem 2.2. Let X be a linear space, (Y, µ, TM ) be a complete RN-space and Φ be a mapping from X 3 to D+ (Φ(x, y, z) is denoted by Φx,y,z ) such that for some 0 < α < 2 Φ x2 , y2 , z2 (t) ≤ Φx,y,z (αt) for all x, y, z ∈ X and t > 0. Let f : X → Y be an odd mapping satisfying (2.2). Then n the limit A(x) := limn→∞ f (22n x) exists for all x ∈ X and A : X → Y is a unique additive mapping such that (
(
µf (x)−A(x) (t) ≥ TM Φx,x,0
)
(
(2 − α)ct (2 − α)ct , Φ2x,0,0 2 2
))
for all x ∈ X and t > 0. Proof. Let (S, d) be the generalized metric space defined in the proof of Theorem 2.1. Consider the linear mapping J : S → S such that 1 Jg(x) := g(2x) 2 for all x ∈ X. By (2.4), we obtain that ( )
t ≥ TM (Φx,x,0 (t), Φ2x,0,0 (t)) . 2 c The rest of the proof is similar to the proof of Theorem 2.1. µ f (2x) −f (x)
Corollary 2.2. Let X be a real normed space, θ ≥ 0 and r be a real numbers with 0 < r < 1. Let f : X → Y be an odd mapping satisfying µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ b
b
b
(
t
t + θ ∥x∥r + ∥y∥r + ∥z∥r
b
for all x, y, z ∈ X and t > 0. Then the limit A(x) = limn→∞ and A : X → Y is a unique additive mapping such that µf (x)−A(x) (t) ≥ TM
(
f (2n x) 2n
)
exists for all x ∈ X
(2 − 2r )ct (2 − 2r )ct , (2 − 2r )ct + 4θ∥x∥r (2 − 2r )ct + 2r+1 θ∥x∥r
)
for all x ∈ X and t > 0. Proof. The proof follows from Theorem 2.2 if we take Φx,y,z (t) =
t+
θ(∥x∥r
t + ∥y∥r + ∥z∥r )
for all x, y, z ∈ X and t > 0. In fact, if we choose α = 2r , then we get the desired result.
1197
Additive-quadratic functional equation in RN-spaces
8
Theorem 2.3. Let X be a linear space, (Y, µ, TM ) be a complete RN-space and Φ be a mapping from X 3 to D+ (Φ(x, y, z) is denoted by Φx,y,z ) such that there exists 0 < α < 14 such that Φ2x,2y,2z (t) ≤ Φx,y,z (αt) for all x, y, z ∈ X and t > 0. Let f : X → Y(be )an even mapping satisfying f (0) = 0 and (2.2). Then the limit Q(x) := limn→∞ 4n f 2xn exists for all x ∈ X and Q : X → Y is a unique quadratic mapping such that (
µf (x)−Q(x) (t) ≥ TM Φx,x,0
(
(
)
(1 − 4α)ct (1 − 4α)ct , Φ2x,0,0 4α 2α
))
for all x ∈ X and t > 0. Proof. Consider the set S ∗ := {g : X → Y : g(0) = 0} and the generalized metric d∗ in S ∗ defined by {
(
( )
)
}
t d (f, g) = inf u ∈ R : µg(x)−h(x) (ut) ≥ TM Φx,x,0 , Φ2x,0,0 (t) , ∀x ∈ X, t > 0 2 where inf ∅ = +∞. It is easy to show that (S ∗ , d∗ ) is complete (see [20, Lemma 2.1]). ∗
+
By the same method as in the proof of Theorem 2.1, we obtain that ( ) ) ( ( ) 2αt t µf (x)−4f ( x ) ≥ TM Φx,x,0 , Φ2x,0,0 (t) . 2 c 2 for all x ∈ X and t > 0. Now, we consider a linear mapping J ∗ : (S ∗ , d∗ ) → (S ∗ , d∗ ) such that ( ) x ∗ J h(x) := 4h 2 for all x ∈ X.
The rest of the proof is similar to the proof of Theorem 2.1.
Corollary 2.3. Let X be a real normed linear space, θ ≥ 0 and r be real numbers with r ∈ (1, ∞). If f : X → Y be an even mapping with f (0) = 0 such that t ( ) µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ b b b b t + θ ∥x∥r + ∥y∥r + ∥z∥r for all x, y, z ∈ X and t > 0, then the limit Q(x) = limn→∞ 4 f n
(
)
x 2n
exists for all x ∈ X
and defines a unique quadratic mapping Q : X → Y such that (
µf (x)−Q(x) (t) ≥ TM
(4r − 4)ct (4r − 4)ct , (4r − 4)ct + 8θ∥x∥r (4r − 4)ct + 2r+2 θ∥x∥r
)
1198
H. Azadi Kenary, S. Jang, C. Park
9
for all x ∈ X and t > 0. Proof. The proof follows from Theorem 2.3 if we take t Φx,y,z (t) = t + θ(∥x∥r + ∥y∥r + ∥z∥r ) for all x, y, z ∈ X and t > 0. In fact, if we choose α = 4−r , then we get the desired result. Theorem 2.4. Let X be a linear space, (Y, µ, TM ) be a complete RN-space and Φ be a mapping from X 3 to D+ (Φ(x, y, z) is denoted by Φx,y,z ) such that there exists 0 < α < 4 such that Φ x2 , y2 , z2 (t) ≤ Φx,y,z (αt) for all x, y, z ∈ X and t > 0. Let f : X → Y be an even mapping satisfying f (0) = 0 n and (2.2). Then the limit Q(x) := limn→∞ f (24n x) exists for all x ∈ X and Q : X → Y is a unique quadratic mapping such that (
µf (x)−Q(x) (t) ≥ TM Φx,x,0
(
)
(
(4 − α)ct (4 − α)ct , Φ2x,0,0 4 2
))
for all x ∈ X and t > 0. Proof. Let (S ∗ , d∗ ) be the generalized metric space defined in the proof of Theorem 2.3. Consider the linear mapping J ∗ : (S ∗ , d∗ ) → (S ∗ , d∗ ) such that 1 J ∗ g(x) := g(2x). 4 By the same method as in the proof of Theorem 2.1, we obtain that ( ) ) ( ( ) t t µ f (2x) −f (x) ≥ TM Φx,x,0 , Φ2x,0,0 (t) . 4 2c 2 The rest of the proof is similar to the proofs of Theorems 2.1 and 2.3.
Corollary 2.4. Let X be a normed vector space with norm ∥ · ∥, θ ≥ 0 and r be a real number with r ∈ (0, 1). Let f : X → Y be an even mapping with f (0) = 0 such that t ( ) µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ r b b b b t + θ ∥x∥ + ∥y∥r + ∥z∥r for all x, y, z ∈ X and t > 0. Then Q(x) = limn→∞ Q : X → Y is a unique quadratic mapping such that (
µf (x)−Q(x) (t) ≥ TM for all x ∈ X and t > 0.
f (2n x) 4n
exists for all x ∈ X and
(4 − 4r )ct (4 − 4r )ct , (4 − 4r )ct + 8θ∥x∥r (4 − 4r )ct + 2r+2 θ∥x∥r
)
1199
10
Additive-quadratic functional equation in RN-spaces
Proof. The proof follows from Theorem 2.4 if we take t ) ( Φx,y,z (t) = t + θ ∥x∥r + ∥y∥r + ∥z∥r for all x, y, z ∈ X and t > 0. In fact, if we choose α = 4r , then we get the desired result. (−x) Let f : X → Y be a mapping satisfying f (0) = 0 and (1.1) . Let fe (x) := f (x)+f 2 (−x) and fo (x) = f (x)−f . Then fe is an even mapping satisfying (1.1) and fo is an odd 2 mapping satisfying (1.1) such that f (x) = fe (x) + fo (x). So we obtain the following.
Theorem 2.5. Let X be a linear space, (Y, µ, TM ) be a complete RN-space and Φ be a mapping from X 3 to D+ (Φ(x, y, z) is denoted by Φx,y,z ) such that there exists 0 < α < 14 such that Φ2x,2y,2z (t) ≤ Φx,y,z (αt) for all x, y, z ∈ X and t > 0. Let f : X → Y be a mapping satisfying f (0) = 0 and (2.2). Then there exist an additive mapping A : X → Y and a quadratic mapping Q : X → Y such that µ2f (x)−A(x)−Q(x) (t) (
( )
( ))
t t ≥ TM µf (x)−A(x) , µf (x)−Q(x) 2 2 ) )) ( ( ( ( (1 − 2α)ct (1 − 2α)ct ≥ TM TM Φx,x,0 , Φ2x,0,0 , 4α 4α (
≥ TM Φx,x,0
(
)
(
(1 − 4α)ct (1 − 4α)ct , Φ2x,0,0 8α 4α
)))
for all x ∈ X and t > 0. 3. Random stability of the functional equation (1.1): a direct method In this section, using direct method, we prove the Hyers-Ulam stability of the functional equation (1.1) in random normed space for odd and even cases. Theorem 3.1. Let X be a real linear space, (Z, µ′ , min) be an RN-space and ϕ : X 3 → Z be a function such that there exists 0 < α < 2 such that µ′ϕ(2x,2y,2z) (t) ≥ µ′αϕ(x,y,z) (t) for all x, y, z ∈ X and t > 0 and lim µ′ϕ(2n x,2n y,2n z) (2n t) = 1
n→∞
(3.1)
1200
H. Azadi Kenary, S. Jang, C. Park
11
for all x, y, z ∈ X and t > 0. Let (Y, µ, min) be a complete RN-space. If f : X → Y is an odd mapping such that µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′ϕ(x,y,z) (t) b
b
b
b
(3.2)
for all x, y, z ∈ X and t > 0, then the limit f (2n x) n→∞ 2n
A(x) = lim
exists for all x ∈ X and defines a unique additive mapping A : X → Y such that (
(
µf (x)−A(x) (t) ≥ TM
µ′ϕ(x,x,0)
)
(
(2 − α)ct (2 − α)ct , µ′ϕ(2x,0,0) 2 2
))
(3.3)
for all x ∈ X and t > 0. Proof. Putting y = z = 0 in (3.2), we see that µ2af ( x )−cf (x) (t) ≥ µ′ϕ(x,0,0) (t)
(3.4)
b
for all x ∈ X. Replacing x by 2x in (3.4), we obtain µ2af ( 2x )−cf (2x) (t) ≥ µ′ϕ(2x,0,0) (t)
(3.5)
b
for all x ∈ X. Putting y = x and z = 0 in (3.2), we have µ2af ( 2x )−2cf (x) (t) ≥ µ′ϕ(x,x,0) (t)
(3.6)
b
By (3.5) and (3.6), we obtain that µcf (2x)−2cf (x) (t) = µcf (2x)±2af ( 2x )−2cf (x) (t) b
= µcf (2x)−2af ( 2x )+2af ( 2x )−2cf (x) (t) ≥ TM
(
b
µ′ϕ(x,x,0)
( )
b
( ))
t t , µ′ϕ(2x,0,0) 2 2
.
So (
)
µ f (2x) −f (x) (t) ≥ TM µ′ϕ(x,x,0) (ct), µ′ϕ(2x,0,0) (ct) .
(3.7)
2
Replacing x by 2n x in (3.7) and using (3.1), we obtain (
µ f (2n+1 x) − f (2n x) (t) ≥ TM µ′ϕ(2n x,2n x,0) (2n ct), µ′ϕ(2n+1 x,0,0) (2n ct) 2n+1
2n
≥ TM
(
µ′ϕ(x,x,0)
(
)
(
2n ct 2n ct ′ , µ ϕ(2x,0,0) αn αn
)
))
.
1201
Additive-quadratic functional equation in RN-spaces
12
Since
f (2n x) 2n
− f (x) =
∑n−1
f (2k+1 x) 2k+1
k=0
µ f (2nn x) −f (x)
(n−1 ) ∑ tαk
2
f (2k x) , 2k
= µ∑n−1 f (2k+1 x) − f (2k x)
2k
k=0
−
2k+1
k=0
(
≥
n−1 Tk=0
(
(n−1 ) ∑ tαk
2k
(
µ f (2k+1 x) −2k f ( x ) 2k+1
2k
(
2k
k=0
tαk 2k
)) ))
n−1 TM µ′ϕ(x,x,0) (ct), µ′ϕ(2x,0,0) (ct) ≥ Tk=0
(
)
= TM µ′ϕ(x,x,0) (ct), µ′ϕ(2x,0,0) (ct) . This implies that
ct
µ f (2nn x) −f (x) (t) ≥ TM µ′ϕ(x,x,0) ∑n−1
αk k=0 2k
2
Replacing x by 2p x in (3.8), we obtain
∑
ϕ(2x,0,0)
ct
2p
2n+p
k=p
Since
ct lim µ′ϕ(x,x,0) ∑n+p−1
p,n→∞
{
f (2n x) 2n
k=p
}
αk 2k
, µ′
ct
n−1 αk k=0 2k
ϕ(2x,0,0)
, µ′
µ f (2n+p x) − f (xp ) (t) ≥ TM µ′ϕ(x,x,0) ∑n+p−1
∑
.
n+p−1 αk k=p 2k
(3.9)
= lim µ′ ∑ ϕ(2x,0,0) n+p−1 p,n→∞
(3.8)
ct
ct
αk 2k
.
k=p
=1
αk 2k
it follows that is a Cauchy sequence in complete RN-space (Y, µ, min) and so there exists a point A(x) ∈ Y such that f (2n x) = A(x). n→∞ 2n Fix x ∈ X and put p = 0 in (3.9). Then we obtain lim
µ f (2nn x) −f (x) (t) ≥ 2
TM µ′ϕ(x,x,0)
ct
∑ , µ′ ϕ(2x,0,0) n−1 αk k=0 2k
∑
ct
n−1 αk k=0 2k
.
and so, for any ϵ > 0, µA(x)−f (x) (t + ϵ)
(3.10)
)
(
≥ T µA(x)− f (2nn x) (ϵ), µ f (2nn x) −f (x) (t)
2
2
≥ T µA(x)− f (2nn x) (ϵ), TM µ′ϕ(x,x,0) ∑n−1
αk k=0 2k
2
Taking n → ∞ in (3.10), we get
(
µC(x)−f (x) (t + ϵ) ≥ TM µ′ϕ(x,x,0)
(
ct
)
, µ′
ϕ(2x,0,0)
(
∑
ct
n−1 αk k=0 2k
(2 − α)ct (2 − α)ct , µ′ϕ(2x,0,0) 2 2
.
))
. (3.11)
1202
H. Azadi Kenary, S. Jang, C. Park
Since ϵ is arbitrary, by taking ϵ → 0 in (3.11), we get (
(
µ′ϕ(x,x,0)
µC(x)−f (x) (t) ≥ TM
)
13
(
(2 − α)ct (2 − α)ct , µ′ϕ(2x,0,0) 2 2
))
.
Replacing x, y and z by 2n x, 2n y and 2n z, respectively, in (3.2), we get µ 1n [af ( 2n (x+y+z) )+af ( 2n (x−y+z) )+af ( 2n (x+y−z) )+af ( 2n (−x+y+z) )−cf (2n x)−cf (2n y)−cf (2n z)] (t) 2 b b b b n ′ ≥ µϕ(2n x,2n y,2n z) (2 t) for all x, y, z ∈ X and t > 0. Since limn→∞ µ′ϕ(2n x,2n y,2n z) (2n t) = 1, we conclude that A satisfies (1.1). Furthermore f (2n+1 x) f (2n x) − 2 lim n→∞ 2n 2n ] [ n+1 f (2n x) f (2 x) − lim = 2 lim n→∞ n→∞ 2n+1 2n = 0.
A(2x) − 2A(x) =
lim n→∞
So, A : X → Y is an additive mapping. To prove the uniqueness of the additive mapping A, assume that there exists another additive mapping L : X → Y which satisfies (3.3). Then we have µA(x)−L(x) (t) =
lim µ A(2nn x) − L(2nn x) (t)
n→∞
2
{
2
( )
( )}
t t ≥ lim min µ , µ f (2nn x) − L(2nn x) n→∞ 2 2 2 2 ( ) ( )) ( n 2 (2 − α)ct 2n (2 − α)ct ′ ′ ≥ lim TM µϕ(2n x,2n x,0) , µϕ(2n+1 x,0,0) n→∞ 4 4 A(2n x) f (2n x) − 2n 2n
(
≥
lim TM
n→∞
Since
(
lim
n→∞
we get
2n (2 − α)ct 4αn
(
lim TM
n→∞
(
µ′ϕ(x,x,0)
µ′ϕ(x,x,0)
(
(
)
2n (2 − α)ct 2n (2 − α)ct ′ , µ ϕ(2x,0,0) 4αn 4αn
)
(
= n→∞ lim
2n (2 − α)ct 4αn
)
(
))
.
)
=∞
2n (2 − α)ct 2n (2 − α)ct ′ , µ ϕ(2x,0,0) 4αn 4αn
))
= 1.
Therefore, it follows that µA(x)−L(x) (t) = 1 for all t > 0 and so A(x) = L(x). This completes the proof. Corollary 3.1. Let X be a real normed linear space, (Z, µ′ , min) be an RN-space and (Y, µ, min) be a complete RN-space. Let 0 < r < 1 , z0 ∈ Z and f : X → Y be an odd
14
mapping satisfying µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′(∥x∥r +∥y∥r +∥z∥r )z0 (t) b
b
b
b
for all x, y, z ∈ X and t > 0. Then there exists a unique additive mapping A : X → Y such that (
(
µ′∥x∥r z0
µf (x)−A(x) (t) ≥ TM
)
(2 − 2r )ct , µ′∥x∥r z0 4
(
(2 − 2r )ct 2r+1
))
for all x ∈ X and t > 0. Proof. Let α = 2r and ϕ : X 3 → Z be a mapping defined by ϕ(x, y, z) = (∥x∥r + ∥y∥r + ∥z∥r )z0 . Then, from Theorem 3.1, the conclusion follows. Corollary 3.2. Let X be a real normed linear space, (Z, µ′ , min) be an RN-space and (Y, µ, min) be a complete RN-space. Let z0 ∈ Z and f : X → Y be an odd mapping satisfying µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′δz0 (t) b
b
b
b
for all x, y, z ∈ X and t > 0 Then there exists a unique additive mapping A : X → Y such that ( ) ct ′ µf (x)−A(x) (t) ≥ µδz0 2 for all x ∈ X and t > 0. Proof. Let α = 1 and ϕ : X 3 → Z be a mapping defined by ϕ(x, y, z) = δz0 . Then, from Theorem 3.1, the conclusion follows. Theorem 3.2. Let X be a real linear space, (Z, µ′ , min) be an RN-space and ϕ : X 3 → Z be a function such that there exists 0 < α < 21 such that µ′ϕ x , y , z (t) ≥ µ′αϕ(x,y,z) (t) for (2 2 2) all x, y, z ∈ X and t > 0 and lim µ′ x y z n→∞ ϕ( 2n , 2n , 2n )
(
t 2n
)
=1
for all x, y, z ∈ X and t > 0. Let (Y, µ, min) be a complete RN-space and ( )f : X → Y be n an odd mapping satisfying (3.2). Then the limit A(x) = limn→∞ 2 f 2xn exists for all x ∈ X and defines a unique additive mapping A : X → Y such that (
µf (x)−A(x) (t) ≥ TM for all x ∈ X and t > 0.
µ′ϕ(x,x,0)
(
)
(
(1 − 2α)ct (1 − 2α)ct , µ′ϕ(2x,0,0) 2α 2α
))
15
Proof. By (3.7), we obtain (
(
)
(
))
ct ct µf (x)−2f ( x ) (t) ≥ TM , µ′ϕ(x,0,0) 2 2 2 ( ( ) ( )) ct ct ′ ′ ≥ TM µϕ(x,x,0) , µϕ(2x,0,0) . 2α 2α The rest of the proof is similar to the proof of Theorem 3.1. µ′ϕ( x , x ,0) 2 2
Corollary 3.3. Let X be a real normed linear space, (Z, µ′ , min) be an RN-space and (Y, µ, min) be a complete RN-space. Let r ∈ (1, +∞), z0 ∈ Z and f : X → Y be an odd mapping satisfying µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′(∥x∥r +∥y∥r +∥z∥r )z0 (t) b
b
b
b
for all x, y, z ∈ X and t > 0. Then there exists a unique additive mapping A : X → Y such that (
(
µ′∥x∥r z0
µf (x)−A(x) (t) ≥ TM
)
(2r − 2)ct , µ′∥x∥r z0 4
(
(2r − 2)ct 2r+1
))
for all x ∈ X and t > 0. Proof. Let α = 2−r and ϕ : X 3 → Z be a mapping defined by ϕ(x, y, z) = (∥x∥r + ∥y∥r + ∥z∥r )z0 . Then, from Theorem 3.2, the conclusion follows. Theorem 3.3. Let X be a real linear space, (Z, µ′ , min) be an RN-space and ϕ : X 3 → Z be a function such that there exists 0 < α < 4 such that µ′ϕ(2x,2y,2z) (t) ≥ µ′αϕ(x,y,z) (t) for all x, y, z ∈ X and t > 0 and lim µ′ϕ(2n x,2n y,2n z) (4n t) = 1
n→∞
for all x, y, z ∈ X and t > 0. Let (Y, µ, min) be a complete RN-space. If f : X → Y n is an even mapping satisfyingf (0) = 0 and (3.2), then the limit A(x) = limn→∞ f (24n x) exists for all x ∈ X and defines a unique quadratic mapping Q : X → Y such that (
µf (x)−Q(x) (t) ≥ TM
(
µ′ϕ(x,x,0)
)
(
(4 − α)ct (4 − α)ct , µ′ϕ(2x,0,0) 4 2
))
for all x ∈ X and t > 0. Proof. Putting y = x and z = 0 in (3.2), we have µ2af ( 2x )−2cf (x) (t) ≥ µ′ϕ(x,x,0) (t)
(3.12)
b
for all x ∈ X. Putting y = z = 0 and then replacing x by 2x, we have µ4af ( 2x )−cf (2x) (t) ≥ µ′ϕ(2x,0,0) (t) b
(3.13)
16
By (3.12) and (3.13), we see that µcf (2x)−4cf (x) (t) = µ2(2af ( 2x )−2cf (x))−4af ( 2x )+cf (2x) (t) b
b
(
( )
( ))
t t ≥ TM µ2af ( 2x )−2cf (x) , µ4af ( 2x )−cf (2x) b b 4 2 ( ) ( ( )) t t ≥ TM µ′ϕ(x,x,0) , µ′ϕ(2x,0,0) . 4 2 So (
)
µ f (2x) −f (x) (t) ≥ TM µ′ϕ(x,x,0) (ct), µ′ϕ(2x,0,0) (2ct) .
(3.14)
4
The rest of the proof is similar to the proof of Theorem 3.1.
Corollary 3.4. Let X be a real normed linear space, (Z, µ′ , min) be an RN-space and (Y, µ, min) be a complete RN-space. Let 0 < r < 1 , z0 ∈ Z and f : X → Y be an even mapping satisfying f (0) = 0 and µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′(∥x∥r +∥y∥r +∥z∥r )z0 (t) b
b
b
b
for all x, y, z ∈ X and t > 0. Then there exists a unique quadratic mapping Q : X → Y such that (
µf (x)−Q(x) (t) ≥ TM
(
µ′∥x∥r z0
)
(4 − 4r )ct , µ′∥x∥r z0 8
(
(4 − 4r )ct 2r+1
))
for all x ∈ X and t > 0. Proof. Let α = 4r and ϕ : X 3 → Z be a mapping defined by ϕ(x, y, z) = (∥x∥r + ∥y∥r + ∥z∥r )z0 . Then, from Theorem 3.3, the conclusion follows. Corollary 3.5. Let X be a real normed linear space, (Z, µ′ , min) be an RN-space and (Y, µ, min) be a complete RN-space. Let z0 ∈ Z and f : X → Y be an even mapping with f (0) = 0 such that µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′δz0 (t) b
b
b
b
for all x, y, z ∈ X and t > 0. Then there exists a unique quadratic mapping Q : X → Y such that ( ( ) ( )) 3ct 3ct , µ′δz0 µf (x)−Q(x) (t) ≥ TM µ′δz0 4 2 for all x ∈ X and t > 0. Proof. Let α = 1 and ϕ : X 3 → Z be a mapping defined by ϕ(x, y, z) = δz0 . Then, from Theorem 3.3, the conclusion follows.
17
Theorem 3.4. Let X be a real linear space, (Z, µ′ , min) be an RN-space and ϕ : X 3 → Z be a function such that there exists 0 < α < 14 such that µ′ϕ x , y , z (t) ≥ µ′αϕ(x,y,z) (t) for (2 2 2) all x, y, z ∈ X and t > 0 and ( ) t ′ lim µ x y z =1 n→∞ ϕ( 2n , 2n , 2n ) 4n for all x, y, z ∈ X and t > 0. Let (Y, µ, min) be a complete RN-space and f : X → Y ( be ) n an even mapping satisfying f (0) = 0 and (3.2). Then the limit Q(x) = limn→∞ 4 f 2xn exists for all x ∈ X and defines a unique quadratic mapping Q : X → Y such that (
(
)
µf (x)−Q(x) (t) ≥ TM
(
(1 − 4α)ct (1 − 4α)ct , µ′ϕ(2x,0,0) 4α 2α
µ′ϕ(x,x,0)
))
for all x ∈ X and t > 0. Proof. By (3.14), we obtain
(
(
)
(
))
ct ct µf (x)−4f ( x ) (t) ≥ TM , µ′ϕ(x,0,0) 2 4 2 ( ( ( ) )) ct ct ′ ′ ≥ TM µϕ(x,x,0) , µϕ(2x,0,0) . 4α 2α The rest of the proof is similar to the proof of Theorem 3.1. µ′ϕ( x , x ,0) 2 2
Corollary 3.6. Let X be a real normed linear space, (Z, µ′ , min) be an RN-space and (Y, µ, min) be a complete RN-space. Let r ∈ (1, +∞), z0 ∈ Z and f : X → Y be an even mapping satisfying f (0) = 0 and µaf ( x+y+z )+af ( x−y+z )+af ( x+y−z )+af ( −x+y+z )−cf (x)−cf (y)−cf (z) (t) ≥ µ′(∥x∥r +∥y∥r +∥z∥r )z0 (t) b
b
b
b
for all x, y, z ∈ X and t > 0. Then there exists a unique quadratic mapping Q : X → Y such that ) ( )) ( ( (4r − 4)ct (4r − 4)ct ′ ′ , µ∥x∥r z0 µf (x)−Q(x) (t) ≥ TM µ∥x∥r z0 8 2r+4 for all x ∈ X and t > 0. Proof. Let α = 4−r and ϕ : X 3 → Z be a mapping defined by ϕ(x, y, z) = (∥x∥r + ∥y∥r + ∥z∥r )z0 . Then, from Theorem 3.4, the conclusion follows. (−x) Let f : X → Y be a mapping satisfying f (0) = 0 and (1.1) . Let fe (x) := f (x)+f 2 (−x) and fo (x) = f (x)−f . Then fe is an even mapping satisfying (1.1) and fo is an odd 2 mapping satisfying (1.1) such that f (x) = fe (x) + fo (x). So we obtain the following.
Theorem 3.5. Let X be a real linear space, (Z, µ′ , min) be an RN-space and ϕ : X 3 → Z be a function such that there exists 0 < α < 2 such that µ′ϕ(2x,2y,2z) (t) ≥ µ′αϕ(x,y,z) (t) for all x, y, z ∈ X and t > 0 and limn→∞ µ′ϕ(2n x,2n y,2n z) (2n t) = 1 for all x, y, z ∈ X and t > 0.
18
Let (Y, µ, min) be a complete RN-space. If f : X → Y be a mapping satisfying f (0) = 0 and (3.2). Then there exist an additive mapping A : X → Y and a quadratic mapping Q : X → Y such that (
(
µ f (x)−f (−x) −A(x) (t) ≥ TM µ′ϕ(x,x,0) 2
(
µ f (x)+f (−x) −Q(x) (t) ≥ TM 2
(
µ′ϕ(x,x,0)
)
(
))
)
(
))
(2 − α)ct (2 − α)ct , µ′ϕ(2x,0,0) 2 2 (4 − α)ct (4 − α)ct , µ′ϕ(2x,0,0) 4 2
and µ2f (x)−A(x)−Q(x) (t) = µ f (x)−f (−x) + f (x)+f (−x) −A(x)−Q(x) (t) (
2
2
( )
( ))
t t , µ f (x)+f (−x) −Q(x) ≥ TM µ f (x)−f (−x) −A(x) 2 2 2 2 ( ( ( ( ) )) (2 − α)ct (2 − α)ct ′ ′ ≥ TM TM µϕ(x,x,0) , µϕ(2x,0,0) , 4 4 (
TM
µ′ϕ(x,x,0)
(
)
(
(4 − α)ct (4 − α)ct , µ′ϕ(2x,0,0) 8 4
)))
for all x ∈ X and t > 0.
4. Conclusion
We have linked here three different disciplines, namely random normed spaces, functional equations and fixed point theory, and we established the Hyers-Ulam stability of the functional equation (1.1) in random normed spaces.
Acknowledgements. The second author was supported by NRF Research Fund 2010-0013211; this work was written during a visit to the Research Institute of Mathematics, Seoul National University. The third author was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (NRF-2009-0070788).
References
[1] L. M. Arriola and W. A. Beyer, Stability of the Cauchy functional equation over p-adic fields, Real Anal. Exchange 31 (2005/06), 125-132.
[2] H. Azadi Kenary, Hyers-Rassias stability of the Pexiderial functional equation, to appear in Ital. J. Pure Appl. Math.
[3] H. Azadi Kenary, The probabilistic stability of a Pexiderial functional equation in random normed spaces, to appear in Rend. Del Circolo Math. Di Palermo. [4] H. Azadi Kenary, Non-Archimedean stability of Cauchy-Jensen type functional equation, Int. J. Nonlinear Anal. Appl. 1 (2010) No. 2, 1-10. [5] H. Azadi Kenary, On the stability of a cubic functional equation in random normed spaces, J. Math. Extension 4 (2009), No. 1, 1-11. [6] P. W. Cholewa, Remarks on the stability of functional equations, Aequationes Math. 27 (1984), 76-86. [7] S. Czerwik, Functional Equations and Inequalities in Several Variables, World Scientific, River Edge, NJ, 2002. [8] M. Eshaghi Gordji and M. Bavand Savadkouhi, Stability of mixed type cubic and quartic functional equations in random normed spaces, J. Inequal. Appl. 2009 (2009), Article ID 527462, 9 pages. [9] M. Eshaghi Gordji and M. Bavand Savadkouhi and C. Park, Quadratic-quartic functional equations in RN-spaces, J. Inequal. Appl. 2009 (2009), Article ID 868423, 14 pages. [10] M. Eshaghi Gordji and H. Khodaei, Stability of Functional equations, Lap Lambert Academic Publishing, 2010. [11] M. Eshaghi Gordji, S. Zolfaghari, J.M. Rassias and M.B. Savadkouhi, Solution and stability of a mixed type cubic and quartic functional equation in quasi-Banach spaces, Abst. Appl. Anal. 2009 (2009), Article ID 417473, 14 pages. [12] W. Fechner, Stability of a functional inequality associated with the Jordan-von Neumann functional equation, Aequationes Math. 71 (2006), 149-161. [13] P. Gˇavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184 (1994), 431-436. [14] D. H. Hyers, On the stability of the linear functional equation, Proc. Natl. Acad. Sci. USA 27 (1941), 222-224. [15] D. H. Hyers, G. Isac and Th. M. Rassias, Stability of Functional Equations in Several Variables, Birkh¨auser, Basel, 1998. [16] K. Jun and H. Kim, On the Hyers-Ulam-Rassias stability problem for approximately k-additive mappings and functional inequalities, Math. Inequal. Appl. 10 (2007), 895-908. [17] S. Jung, Hyers-Ulam-Rassias stability of Jensen’s equation and its application, Proc. Amer. Math. Soc. 126 (1998), 3137-3143. [18] H. Khodaei and Th.M. Rassias, Approximately generalized additive functions in several variabels, Int. J. Nonlinear Anal. Appl. 1 (2010), 22-41. [19] L. Li, J. Chung and D. Kim, Stability of Jensen equations in the space of generalized functions, J. Math. Anal. Appl. 299 (2004), 578-586. [20] D. Mihet and V. Radu, On the stability of the additive Cauchy functional equation in random normed spaces, J. Math. Anal. Appl. 343 (2008), 567-572. [21] C. Park and H. Wee, Generalized Hyers-Ulam-Rassias stability of a functional equation in three variabels, J. Chungcheong Math. Soc. 18 (2005), 41-49. [22] C. Park, Fuzzy stability of a functional equation associated with inner product spaces, Fuzzy Sets and Systems 160 (2009), 1632-1642. [23] C. Park, Generalized Hyers-Ulam-Rassias stability of n-sesquilinear-quadratic mappings on Banach modules over C ∗ -algebras, J. Comput. Appl. Math. 180 (2005), 279–291. [24] C. Park, J. Hou, and S. Oh, Homomorphisms between JC -algebras and Lie C ∗ -algebras, Acta Math. Sin. (Engl. Ser.) 21 (2005), 1391-1398. [25] J. C. Parnami and H. L. Vasudeva, On Jensen’s functional equation, Aequationes Math. 43 (1992), 211-218.
[26] Th. M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297-300. [27] Th. M. Rassias, On the stability of functional equations and a problem of Ulam, Acta Appl. Math. 62 (2000), 23-130. [28] J. R¨atz, On inequalities associated with the Jordan-Von Neumann functional equation, Aequationes Math. 66 (2003), 191-200. [29] R. Saadati and C. Park, Non-Archimedean L-fuzzy normed spaces and stability of functional equations (preprint). [30] R. Saadati, M. Vaezpour and Y. Cho, A note to paper “On the stability of cubic mappings and quartic mappings in random normed spaces”, J. Inequal. Appl. 2009 (2009), Article ID 214530, 6 pages. [31] R. Saadati, M. M. Zohdi and S. M. Vaezpour, Nonlinear L-random stability of an ACQ functional equation, J. Inequal. Appl. 2011 (2011), Article ID 194394, 23 pages. [32] B. Schewizer and A. Sklar, Probabilistic Metric Spaces, North-Holland Series in Probability and Applied Mathematics, North-Holland, New York, USA, 1983. [33] F. Skof, Local properties and approximation of operators, Rend. Sem. Mat. Fis. Milano 53 (1983), 113-129. [34] S. M. Ulam, Problems in Modern Mathematics, Science Editions, John Wiley and Sons, 1964. Hassan Azadi Kenary Department of Mathematics, College of Sciences, Yasouj University, Yasouj 75914-353, Iran E-mail address: [email protected] Sun-Young Jang Department of Mathematics, University of Ulsan, Ulsan 680-749, Republic of Korea E-mail address: [email protected] Choonkil Park Department of Mathematics, Research Institute for Natural Sciences, Hanyang University, Seoul 133-791, Republic of Korea E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 1210-1226, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Mixed-Stable Models for Analyzing High-Frequency Financial Data
Audrius Kabasinskas (1), Leonidas Sakalauskas (2), Edward W. Sun (3,4), Igoris Belovas (2)
(1) Department of Mathematical Research in Systems, Faculty of Fundamental Sciences, Kaunas University of Technology, Kaunas, Lithuania
(2) Operational Research Sector at Systems Analysis Dept., Vilnius University Institute of Mathematics and Informatics, Vilnius, Lithuania
(3) BEM Management School Bordeaux, France
(4) School of Economics and Business Engineering, KIT, Karlsruhe, Germany
August 8, 2011
Abstract. In this paper, we propose the mixed-stable model for analyzing high-frequency stock return data, which usually contain a large number of zeros. Based on data for the German DAX component stocks, we apply stable and mixed-stable laws (both with dependent and with independent states) to model the data. We also investigate the self-similarity and multifractality of the data and show the empirical results of the model performance.
Keywords: high-frequency data; stable models; mixed-stable models; financial modeling; self-similarity; Hurst index.
1 Introduction
With the introduction of electronic trading, an enormous quantity of trading data became available. Intra-daily price oscillations, trade duration distributions and overall market microstructure investigations attract more and more attention from researchers. Containing all transactions of the financial market, high-frequency data can reveal events and laws that are impossible to identify with monthly, weekly or daily data. A summary of the literature covering intra-daily data research is presented in Sun et al. (2008) and the references therein. In our work we propose a new modeling methodology, namely a mixed-stable model, for the intra-daily returns of the German DAX component stocks. In practice, we often observe a large number of zero returns in high-frequency return data, because the underlying asset price does not change within a given very short time interval. Our model is designed to capture this feature of high-frequency return data. This paper is organized as follows. The first part is the introduction. Section 2 describes the data we investigate. In Section 3, we introduce the modeling methodology applied to the data (stable and mixed-stable models, self-similarity and multifractality analysis, Hurst exponent calculation, stagnation interval modeling, etc.). We present our empirical results in Section 4 and conclude in Section 5.
2 The data
In this paper we analyze high-frequency data series of 29 stocks from the DAX on August 17, 2007 (one of the typical business-active days of the year). We aggregate the raw inhomogeneous intra-daily data into equally spaced homogeneous 10-second intra-daily time series. To mitigate the opening-auction effect, transactions of the first 10 minutes of the day were omitted. The aggregation was done with previous-tick interpolation (see Wasserfallen and Zimmermann (1985)); Dacorogna et al. (2001) point out that linear interpolation relies on future information, whereas previous-tick interpolation is based only on information already known. Previous-tick interpolation: we denote the times of the raw intra-daily series by {t_i} and the corresponding prices by {P_i}. The aggregated homogeneous high-frequency series is obtained at times t_0 + j∆t with ∆t = 10 sec; the index j identifies the regularly spaced sequence. The time t_0 + j∆t is bounded by two times t_i of the irregularly spaced series, t_I ≤ t_0 + j∆t < t_{I+1}, where I = max{ i : t_i ≤ t_0 + j∆t }. With previous-tick interpolation we obtain P(t_0 + j∆t) = P_I. Having obtained the homogeneous price series, we can calculate the corresponding logarithmic return series {X_i}: X_i = ln(P_{i+1}/P_i). The length of all series is 3060. The number of zero 10-second stock returns ranges from 19% to 69%, with an average of 47%. Almost all data series are asymmetric, and the empirical kurtosis (see Table 1) shows that the density functions of the series are more peaked than the Gaussian one. That is why we assume that Gaussian models are not applicable to these financial series. Moreover, it has been observed by researchers that intra-daily data are characterized by fat tails and long-range dependence (see Dacorogna et al. (2001)).
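For illustration, the following minimal sketch shows the previous-tick aggregation and the computation of log-returns described above; the function and variable names are ours (hypothetical), and the tick data are synthetic.

```python
import numpy as np

def previous_tick_aggregate(tick_times, tick_prices, t0, n_steps, dt=10.0):
    """Aggregate irregular ticks onto a homogeneous grid t0 + j*dt using
    previous-tick interpolation: P(t0 + j*dt) = P_I, I = max{i : t_i <= t0 + j*dt}."""
    tick_times = np.asarray(tick_times, dtype=float)
    tick_prices = np.asarray(tick_prices, dtype=float)
    grid = t0 + dt * np.arange(n_steps + 1)
    idx = np.searchsorted(tick_times, grid, side="right") - 1   # last tick at or before grid point
    idx = np.clip(idx, 0, len(tick_prices) - 1)                  # guard: grid points before the first tick
    return grid, tick_prices[idx]

def log_returns(prices):
    """X_i = ln(P_{i+1} / P_i)."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

# usage sketch with synthetic ticks and a 10-second grid (as in the paper)
times = np.array([0.0, 3.2, 14.8, 31.0, 44.5])
prices = np.array([100.0, 100.1, 100.1, 100.3, 100.2])
grid, p = previous_tick_aggregate(times, prices, t0=0.0, n_steps=4, dt=10.0)
x = log_returns(p)
print(np.round(x, 6))      # zero returns appear wherever the price did not move
print(np.mean(x == 0.0))   # share of zero returns in the aggregated series
```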
3 Methodology
Methodology of this research follows the main aspects of stable and mixed-stable modeling, developed by Kabasinskas et al. (2009, 2010). We start the analysis
Figure 1: Example of intraday and corresponding high frequency data.
Table 1: Empirical moments Company name Adidas AG Deutsche Bank AG BASF SE BMW AG St Continental AG Deutsche Post AG Deutsche Telekom AG Bayer AG O.N. Fresenius Medical Care AG and Co. KGaA St Deutsche Borse AG MAN SE St Henkel AG and Co. KGaA Vz Infineon Technologies AG Linde AG Merck KGaA RWE AG St Daimler AG SAP AG Siemens AG METRO AG St ThyssenKrupp AG Volkswagen AG St Deutsche Postbank AG HYPO REAL ESTATE Commerzbank AG Deutsche Lufthansa AG Allianz SE Munchener Ruck AG TUI AG
Mean 9.027E-06 9.780E-06 5.711E-06 4.417E-06 5.677E-06 3.682E-06 4.738E-06
St.dev. 0.0008 0.0007 0.0008 0.0006 0.0009 0.0006 0.0007
Skewness 0.1757 -0.1265 -0.6836 0.2197 -0.8912 -0.0232 0.2499
Kurtosis 31.12 6.65 31.52 6.83 122.42 8.73 3.83
Zeros, % 36.18 78.63 68.56 48.10 48.40 37.97 41.73
6.468E-06 5.774E-06
0.0007 0.0008
0.3973 -0.2354
7.82 10.49
65.72 34.58
1.548E-05 5.520E-06 3.280E-06
0.0008 0.0010 0.0008
0.0732 0.5253 0.0879
14.57 11.44 13.97
54.58 53.14 36.11
4.891E-06
0.0008
0.3245
8.53
31.47
8.652E-06 1.473E-05 1.166E-05 8.151E-06 8.250E-06 9.405E-06 1.221E-05 5.698E-06 1.776E-05 1.544E-05
0.0008 0.0008 0.0008 0.0008 0.0007 0.0007 0.0009 0.0004 0.0010 0.0009
-0.4563 0.1754 0.0881 -0.3854 0.2107 -0.1094 -0.1969 -1.6274 -0.1207 0.6914
16.87 17.56 6.99 14.23 7.14 7.23 20.98 50.38 16.45 15.37
39.93 38.86 69.15 77.65 62.94 77.39 55.39 56.41 32.12 44.77
1.575E-05
0.0010
-0.1406
21.22
57.12
9.627E-06 1.510E-05
0.0007 0.0006
-1.0132 0.1229
27.87 5.79
37.84 81.21
1.510E-05 5.706E-06
0.0006 0.0006
0.1229 0.3447
5.79 16.50
81.21 61.37
5.038E-06
0.0009
-0.1190
11.48
41.60
of our log-return series with stable and mixed-stable parameter estimation and goodness-of-fit hypothesis testing. Next we analyze the self-similarity and multifractality of our data and calculate Hurst exponents. Recall that, for Gaussian processes, a Hurst exponent H = 0.5 indicates Brownian motion; 0.5 < H < 1 indicates long-memory processes and persistent behavior; 0 < H < 0.5 indicates anti-persistent behavior (see Samorodnitsky and Taqqu (2000)). Finally, we apply mixed-stable models and analyze the behavior of the zeros in the series: how they are distributed and occur in the series, whether they are random, and how they can be simulated.
3.1 Stable and mixed-stable models
The stable distribution is usually described by its characteristic function
log ϕ(t) = −σ^α |t|^α {1 − iβ sign(t) tan(πα/2)} + iµt,  α ≠ 1,
log ϕ(t) = −σ |t| {1 + iβ sign(t) (2/π) log|t|} + iµt,  α = 1.
Each stable distribution is described by four parameters: the first and most important is the stability index α ∈ (0, 2], which is essential when characterizing financial data. The others are the skewness −1 ≤ β ≤ 1, the location µ ∈ R, and the scale σ > 0. Let us denote the parameter vector Θ = (α, β, µ, σ). In financial modeling it is generally assumed that 1 < α ≤ 2. An overview of the properties of stable distributions can be found in Rachev and Mittnik (2002), Samorodnitsky and Taqqu (2000), and Zolotarev (1986). The probability density function of the stable laws cannot be expressed in elementary functions (except in a few cases: the Levy, Cauchy and Gaussian distributions). Exact and fast calculation of stable densities is a nontrivial task [1, 2, 3, 11, 21, 22, 23]. To calculate the density we use the Zolotarev integral representation [2, 3]:
p(x, Θ) = ( α |(x−µ)/σ|^{1/(α−1)} / (2σ|α−1|) ) ∫_{−θ}^{1} U_α(ϕ, θ) exp{ −|(x−µ)/σ|^{α/(α−1)} U_α(ϕ, θ) } dϕ,  for x ≠ µ,
p(x, Θ) = (1/(πσ)) Γ(1 + 1/α) cos( (1/α) arctan(β tan(πα/2)) ),  for x = µ,
where
U_α(ϕ, θ) = ( sin(πα(ϕ + θ)/2) / cos(πϕ/2) )^{α/(1−α)} · cos( π((α − 1)ϕ + αθ)/2 ) / cos(πϕ/2)
and
θ = (2/(πα)) arctan( β tan(πα/2) ) sign(x − µ).
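As a hedged illustration (not the Zolotarev representation above, but a simpler route), the density can also be obtained by direct Fourier inversion of the characteristic function; the sketch below assumes the S1-type parameterization written above and uses standard SciPy quadrature, which may need care for strongly oscillatory integrands.

```python
import numpy as np
from scipy.integrate import quad

def stable_char_fn(t, alpha, beta, mu, sigma):
    """phi(t) with log phi(t) as given above (alpha != 1 and alpha == 1 branches)."""
    if alpha != 1.0:
        return np.exp(-(sigma * abs(t)) ** alpha
                      * (1 - 1j * beta * np.sign(t) * np.tan(np.pi * alpha / 2))
                      + 1j * mu * t)
    return np.exp(-sigma * abs(t)
                  * (1 + 1j * beta * np.sign(t) * (2 / np.pi) * np.log(abs(t) + 1e-300))
                  + 1j * mu * t)

def stable_pdf(x, alpha, beta, mu=0.0, sigma=1.0):
    """p(x) = (1/pi) * integral_0^inf Re[e^{-itx} phi(t)] dt (Fourier inversion)."""
    integrand = lambda t: (np.exp(-1j * t * x) * stable_char_fn(t, alpha, beta, mu, sigma)).real
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val / np.pi

# sanity check against the Cauchy case (alpha = 1, beta = 0): density 1/(pi*(1+x^2))
print(stable_pdf(0.5, alpha=1.0, beta=0.0), 1 / (np.pi * (1 + 0.5 ** 2)))
```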
The problem of parameter estimation in stable modeling is hampered by the lack of a known closed form of the density function for almost all stable distributions, hence many statistical methods that depend on the explicit form of the probability density function cannot be applied. However, there are several numerical methods that have been found useful in practice: the McCulloch method [20], the method of moments [26], and regression methods [15, 16, 18]. Comparative studies [2, 24] confirm that the most accurate method of estimation is the maximum likelihood method. It is also the most time-consuming, but the implementation of parallel algorithms allows us to obtain results in adequate time even for a large number of long data series [4]. Stable parameters can be estimated from the returns by maximizing the log-likelihood function
L(Θ) = Σ_{i=1}^{n} log p(X_i, Θ).
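A minimal sketch of this maximum-likelihood step is given below; it uses SciPy's levy_stable density as a stand-in for the density computed above (its parameterization may differ from the paper's), a derivative-free optimizer instead of the quasi-Newton method used in the paper, and a deliberately small sample because stable densities are costly to evaluate.

```python
import numpy as np
from scipy import optimize, stats

def neg_log_likelihood(params, data):
    """-L(Theta) = -sum_i log p(X_i; Theta), params = (alpha, beta, mu, sigma)."""
    alpha, beta, mu, sigma = params
    if not (0.5 < alpha <= 2.0 and -1.0 <= beta <= 1.0 and sigma > 0):
        return np.inf                               # keep the search inside the parameter space
    logpdf = stats.levy_stable.logpdf(data, alpha, beta, loc=mu, scale=sigma)
    return -np.sum(logpdf)

def fit_stable_mle(data, start=(1.5, 0.0, 0.0, 1.0)):
    """Numerical maximization of the log-likelihood (Nelder-Mead here;
    the paper uses the Davidon-Fletcher-Powell quasi-Newton method)."""
    res = optimize.minimize(neg_log_likelihood, x0=np.asarray(start),
                            args=(np.asarray(data),), method="Nelder-Mead")
    return res.x, -res.fun

rng = np.random.default_rng(0)
sample = stats.levy_stable.rvs(1.6, 0.0, loc=0.0, scale=1.0, size=200, random_state=rng)
theta_hat, loglik = fit_stable_mle(sample)
print(np.round(theta_hat, 3), round(loglik, 2))
```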
To optimize this function we use the Davidon-Fletcher-Powell quasi-Newton method [28]. We also verify two hypotheses: the first is that our sample follows the Gaussian distribution; the second is that our sample follows a stable non-Gaussian distribution. Both hypotheses are examined by the Anderson-Darling and Kolmogorov-Smirnov tests. The first test is more sensitive to the difference between the empirical and theoretical distribution functions in the far quantiles (tails), in contrast to the Kolmogorov-Smirnov test, which is more sensitive to differences in the central part of the distribution.
The mixed-stable model was introduced by Belovas et al. [3] to cope with the problem of daily zero returns. The cumulative distribution function of a mixed-stable random variable is
F(x) = ((n − k)/n) P(x, Θ_max) + (k/n) ε(x),
where P(x, Θ) is the cumulative distribution function of the stable distribution and ε(x) is the cumulative distribution function of the degenerate distribution,
ε(x) = 0 for x ≤ 0,  ε(x) = 1 for x > 0,
the vector Θ_max is estimated from the nonzero returns, and k is the number of zero returns in the given data set X = {X_1, X_2, ..., X_n}. The probability density function of a mixed-stable random variable is
f(x) = ((n − k)/n) p(x, Θ_max) + (k/n) δ(x),
where δ(x) is the Dirac delta function. A problem arises when we try to test the adequacy hypothesis for these models: since the distribution function is discontinuous, classical methods for continuous distributions (Kolmogorov-Smirnov, Anderson-Darling) do not work, so we have to choose a special goodness-of-fit test suitable for discontinuous distributions, e.g. the Koutrouvelis empirical characteristic function test. Let X_1, X_2, ..., X_n be independent and identically distributed random variables having a common characteristic function ϕ(t) = C(t) + iS(t), and let
ϕ_n(t) = (1/n) Σ_{j=1}^{n} e^{itX_j}
be the empirical characteristic function. Denote by ϕ_0(t) = C_0(t) + iS_0(t) a completely specified characteristic function. I. Koutrouvelis has proposed a goodness-of-fit test for the simple hypothesis H_0 : ϕ(t) = ϕ_0(t) based on the quadratic form
Q_n^0 = 2n (ξ_n − ξ_0)^T Ω_0^{−1} (ξ_n − ξ_0),
where ξ_l is the 2m-dimensional vector ξ_l^T = (C_l(t_1), ..., C_l(t_m), S_l(t_1), ..., S_l(t_m)) for l = 0, n, and Ω_0/(2n) is the covariance matrix of ξ_n under H_0. The values t_1, t_2, ..., t_m are suitably chosen in a region of t near zero and are such that Ω_0 is non-singular, see [17, 19]. Another alternative is the modified χ² test [14].
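The sketch below only illustrates the two building blocks defined above — the mixed-stable cdf F(x) and the empirical characteristic function ϕ_n(t); it does not implement the full Koutrouvelis quadratic-form statistic (the covariance matrix Ω_0 is omitted), and it again relies on SciPy's levy_stable, whose parameterization may differ from the paper's.

```python
import numpy as np
from scipy import stats

def mixed_stable_cdf(x, data, theta):
    """F(x) = (n-k)/n * P(x; Theta_max) + k/n * eps(x); k = number of zero returns,
    eps = cdf of the point mass at zero."""
    data = np.asarray(data)
    n, k = data.size, int(np.count_nonzero(data == 0.0))
    alpha, beta, mu, sigma = theta
    stable_part = stats.levy_stable.cdf(x, alpha, beta, loc=mu, scale=sigma)
    eps = (np.asarray(x) > 0).astype(float)
    return (n - k) / n * stable_part + k / n * eps

def empirical_char_fn(data, t_grid):
    """phi_n(t) = (1/n) * sum_j exp(i t X_j), evaluated on a grid of t values."""
    data = np.asarray(data)
    return np.exp(1j * np.outer(np.asarray(t_grid), data)).mean(axis=1)

# usage sketch: a series with many exact zeros plus stable-distributed nonzero returns
returns = np.concatenate([np.zeros(40),
                          stats.levy_stable.rvs(1.7, 0.0, scale=7e-4, size=60, random_state=1)])
t = np.linspace(0.1, 5.0, 8)
print(np.round(np.abs(empirical_char_fn(returns, t)), 3))
print(round(float(mixed_stable_cdf(0.0, returns, (1.7, 0.0, 0.0, 7e-4))), 3))
```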
3.2 Self-similarity, multifractality and Hurst exponent
Financial time series often exhibit fractionality (characterized by the Hurst exponent) or self-similarity [12, 30]. A continuous-time process Y = {Y(t), t ∈ T} is self-similar, with self-similarity parameter H (Hurst index), if it satisfies the condition
Y(t) = a^{−H} Y(at)  for all t ∈ T and all a > 0, with 0 ≤ H < 1,
where the equality is in the sense of finite-dimensional distributions. This definition of a self-similar process can be generalized to that of multifractal processes. The methodology of the self-similarity and multifractality analysis is fully described in Section 2.2 of Kabasinskas et al. [13]. The Hurst exponent was estimated by the R/S method [9] and by the ratio of variance of residuals method. Given a time series X_1, X_2, ..., X_n, the rescaled range (R/S) statistic is defined as the maximal range of the centred partial sums normalized by the standard deviation:
R/S(n) = [ max_{1≤k≤n} Σ_{i=1}^{k}(X_i − X̄) − min_{1≤k≤n} Σ_{i=1}^{k}(X_i − X̄) ] / sqrt( (1/n) Σ_{i=1}^{n}(X_i − X̄)² ).
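A minimal sketch of this R/S estimator (block-averaged rescaled ranges followed by a log-log regression, as described next) is shown below; the function names and the choice of window sizes are ours.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of a series: range of the centred partial sums divided by the standard deviation."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    z = np.cumsum(d)
    s = np.sqrt(np.mean(d ** 2))
    return (z.max() - z.min()) / s if s > 0 else np.nan

def hurst_rs(x, window_sizes=None):
    """Estimate H as the slope of log(mean R/S(n)) versus log n over non-overlapping blocks."""
    x = np.asarray(x, dtype=float)
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(1, np.log10(len(x) // 2), 10).astype(int))
    log_n, log_rs = [], []
    for n in window_sizes:
        vals = [rescaled_range(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
        vals = [v for v in vals if np.isfinite(v)]
        if vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(1)
print(round(hurst_rs(rng.standard_normal(3060)), 2))   # close to 0.5 for white noise
```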
The expected value of R/S(n) scales like c n^H as n → ∞, where H is the Hurst exponent. To estimate the Hurst exponent, we plot R/S(n) versus n in log-log axes; the slope of the regression line approximates the Hurst exponent. The ratio of variance of residuals method is based on the variance of residuals method for estimating long-range dependence. This method, proposed by Peng et al. [25] and further studied by Taqqu et al. [31], includes the following steps. First, we calculate the integrated series
Y_k = Σ_{i=1}^{k} (X_i − X̄).
The integrated series is divided into d sub-series of equal length m. In each sub-series, we fit a least-squares line to the partial sums within the block and
compute the sample variance of the residuals. We repeat this procedure for each of the blocks and average the resulting sample variances. If the result is plotted on a log-log plot versus m, we should get a straight line with a slope of 2H. The estimates were calculated using the SELFIS software, which is freeware and can be found on the web page http://www.cs.ucr.edu/~tkarag.
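For completeness, a minimal sketch of this variance-of-residuals procedure (not the SELFIS implementation) is given below; block sizes and names are our own choices.

```python
import numpy as np

def hurst_variance_of_residuals(x, block_sizes=None):
    """Peng-style estimator: integrate the centred series, split into blocks of length m,
    fit a least-squares line in each block, average the residual variances, and regress
    their log on log m (slope ~ 2H)."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                       # integrated series Y_k
    if block_sizes is None:
        block_sizes = np.unique(np.logspace(1, np.log10(len(y) // 4), 10).astype(int))
    log_m, log_var = [], []
    for m in block_sizes:
        t = np.arange(m)
        res_vars = []
        for start in range(0, len(y) - m + 1, m):
            block = y[start:start + m]
            coeffs = np.polyfit(t, block, 1)          # least-squares line within the block
            res_vars.append(np.var(block - np.polyval(coeffs, t)))
        if res_vars:
            log_m.append(np.log(m))
            log_var.append(np.log(np.mean(res_vars)))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return slope / 2.0

rng = np.random.default_rng(2)
print(round(hurst_variance_of_residuals(rng.standard_normal(3060)), 2))  # close to 0.5 for white noise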
3.3 Modeling of stagnation intervals
Let us define the set of discrete states {X_i} corresponding to a given time series {P_i} in the following way: X_i = 0 if P_{i+1} = P_i, and X_i = 1 if P_{i+1} ≠ P_i. A set of zeros between two units in {X_i} we call a run. The first run is the set of zeros before the first unit and the last one the set after the last unit. The length of a run is equal to the number of zeros between two units; if there are no zeros between two units, the empty set has zero length. Theoretically, if the states are independent (a Bernoulli scheme), the lengths of the zero-state runs should be distributed by the geometric law: if the probability of a zero is p, then the probability of a run of length k (before the first unit) is
p_k = p^k (1 − p),  k ∈ N_0.
However, the results of empirical tests do not corroborate this theoretical assumption. To determine the law, we fit the distribution of the lengths of the zero-state runs, using the χ² goodness-of-fit test, with the geometric, generalized logarithmic, Poisson and generalized Poisson, Hurwitz zeta and generalized Hurwitz zeta, and discrete stable laws. The statistic of the test is
χ² = Σ_{i=1}^{m} (O_i − E_i)² / E_i,
where m is the number of cells into which the n states are divided, O_i are the observed frequencies and E_i the expected theoretical frequencies asserted by the null hypothesis [5].
The probability mass function of the generalized logarithmic distribution is defined by
P(ξ = k) = ( Σ_{i=0}^{∞} p^i/(i + m) )^{−1} · p^k/(k + m),  k ∈ N_0, 0 < p < 1, m > 0.
The probability mass function of the generalized Poisson law is defined by
P(ξ = k) = ( Σ_{i=0}^{∞} λ^i/Γ(i + µ + 1) )^{−1} · λ^k/Γ(k + µ + 1),  k ∈ N_0, λ > 0, µ > −1.
The probability mass function of the Hurwitz distribution is defined by
P(ξ = k) = ( Σ_{i=0}^{∞} 1/(i + q)^s )^{−1} · 1/(k + q)^s,  k ∈ N_0, q > 0, s > 1.
The probability mass function of the generalized Hurwitz distribution is defined by
P(ξ = k) = ( Σ_{i=0}^{∞} λ^i/(i + q)^s )^{−1} · λ^k/(k + q)^s,  k ∈ N_0, q > 0, s > 1, 0 < λ < 1.
The discrete stable distribution, with probability mass function
P(ξ = k) = ((−1)^k / e^λ) Σ_{m=0}^{∞} Σ_{j=0}^{m} C(m, j) (−1)^j (λ^m/m!) C(γj, k),  k ∈ N_0, λ > 0, γ ∈ (0, 1],
where C(·,·) denotes the (generalized) binomial coefficient, was introduced by Christoph and Schreiber [6]. The parameters of the generalized logarithmic, generalized Poisson, Hurwitz zeta, generalized Hurwitz zeta and discrete stable laws are estimated numerically by the maximum likelihood method.
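The sketch below only illustrates the first step of this procedure — extracting the zero-run lengths and testing them against the geometric law with a χ² statistic; the other candidate laws are not implemented here, and the cell grouping and the moment-based estimate of p are our own simplifying choices.

```python
import numpy as np
from scipy import stats

def zero_run_lengths(returns):
    """Lengths of runs of zero returns; a run between two adjacent nonzero returns has length 0."""
    states = (np.asarray(returns) != 0).astype(int)   # 1 = price changed, 0 = zero return
    runs, current = [], 0
    for s in states:
        if s == 1:
            runs.append(current)
            current = 0
        else:
            current += 1
    return np.asarray(runs)

def geometric_chi2(runs, n_cells=8):
    """Chi-square goodness of fit of run lengths to p_k = p^k (1-p); p estimated as mean/(1+mean)."""
    runs = np.asarray(runs)
    p = runs.mean() / (1.0 + runs.mean())
    ks = np.arange(n_cells - 1)
    probs = np.append((1 - p) * p ** ks, p ** (n_cells - 1))          # last cell: k >= n_cells - 1
    observed = np.array([np.sum(runs == k) for k in ks] + [np.sum(runs >= n_cells - 1)])
    expected = probs * runs.size
    chi2, pval = stats.chisquare(observed, expected, ddof=1)          # one estimated parameter
    return chi2, pval

rng = np.random.default_rng(3)
fake_returns = rng.choice([0.0, 1e-4, -1e-4], size=3060, p=[0.47, 0.265, 0.265])
chi2, pval = geometric_chi2(zero_run_lengths(fake_returns))
print(round(chi2, 2), round(pval, 3))
```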
3.4 Model of stagnation intervals with dependent states
In order to examine the interdependencies between states further, we tried the Markov chain model. With the Hoel [10] criterion we test the order m, m ∈ N_0, of the chain: the hypothesis H_0^{m:m+1}, that the series represents an m-th order Markov chain, against the alternative H_1^{m:m+1}, that the series represents an (m+1)-th order Markov chain. The statistic
L = 2 Σ_{i...l} n_{ij...kl} [ log( n_{ij...kl} / n_{ij...k·} ) − log( n_{·j...kl} / n_{·j...k·} ) ]
is distributed by the χ² law with s^{m−1}(s − 1)² degrees of freedom. Here s is the number of states of the Markov chain series, n_{ij...kl} is the number of visits to the state ij...kl, and the symbol · indicates the summation index. The number of indices in this equation depends on the order of the chain: if the order is m, then the number of indices is equal to m + 1. To test whether a series is generated by a Bernoulli scheme we have to test the hypothesis H_0^{0:1}. In this research five hypotheses were tested: H_0^{0:1}, H_0^{1:2}, H_0^{2:3}, H_0^{3:4}, H_0^{4:5}. Since the runs test rejects the randomness hypothesis of the sequence of states (see e.g. [12, 13]), the probability of a state depends on the position n in the sequence:
P(X_n = 1 | ..., X_{n−k−1} = 1, X_{n−k} = 0, ..., X_{n−1} = 0) = p_k / ( 1 − Σ_{j=0}^{k−1} p_j ),   (1)
where p_k are the probabilities of the model law, P(X_0 = 1) = p_0, n ∈ N, k ∈ N_0. It should be noted that P(X_n = 0 | ...) = 1 − P(X_n = 1 | ...), n, k ∈ N_0. With the probabilities of the states and the distribution of the nonzero returns we can generate sequences of stock returns, interchanging the ones in the state sequence with stable random variables.
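The sketch below works out only the simplest instance of the Hoel criterion, H_0^{0:1} (Bernoulli scheme against a first-order chain) for a two-state sequence; higher orders would require counting longer state tuples in the same way, and the function names are ours.

```python
import numpy as np
from scipy import stats

def markov_order_test_0_vs_1(states):
    """Likelihood-ratio test of H0: the 0/1 state sequence is Bernoulli (order 0)
    against a first-order Markov chain; L ~ chi-square with (s-1)^2 d.o.f. (s = 2)."""
    states = np.asarray(states, dtype=int)
    s = 2
    counts = np.zeros((s, s))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0                      # transition counts n_ij
    n = counts.sum()
    row = counts.sum(axis=1, keepdims=True)      # n_i.
    col = counts.sum(axis=0, keepdims=True)      # n_.j
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = counts * (np.log(counts / row) - np.log(col / n))
    L = 2.0 * np.nansum(terms)                   # empty cells contribute 0
    df = (s - 1) ** 2
    return L, stats.chi2.sf(L, df)

rng = np.random.default_rng(4)
iid_states = rng.integers(0, 2, size=3060)       # independent states: H0 should not be rejected
L, pval = markov_order_test_0_vs_1(iid_states)
print(round(L, 2), round(pval, 3))
```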
4 Empirical results
4.1 Stable and mixed-stable models
First of all we have estimated parameters Θ of the α-stable distribution for full series. The results are presented in Table 2. Table 2: Estimates of parameters of α-stable distribution for full series Data set Adidas AG Deutsche Bank AG BASF SE BMW AG St Continental AG Deutsche Post AG Deutsche Telekom AG Bayer AG O.N. Fresenius Medical Care AG Deutsche Borse AG MAN SE St Henkel AG Infineon Technologies AG Linde AG Merck KGaA RWE AG St Daimler AG SAP AG Siemens AG METRO AG St ThyssenKrupp AG Volkswagen AG St Deutsche Postbank AG HYPO REAL ESTATE Commerzbank AG Deutsche Lufthansa AG Allianz SE Munchener Ruck AG TUI AG
α 1.409 1.479 1.118 1.281 1.787 1.640 1.587 1.153 1.654 1.238 1.267 1.441 1.171 1.809 1.352 1.139 1.506 1.177 1.169 1.207 1.364 1.281 1.345 1.323 1.436 1.376 1.439 1.126 1.620
β 0.1327 -0.7460 0.3844 0.0949 0.7234 -0.4147 -0.2994 0.7240 0.3097 -0.5806 0.3916 -0.6664 0.0335 -0.2578 0.1420 0.0923 0.0607 -0.4520 0.7111 -0.0115 -0.0874 -0.0574 0.8168 0.2379 0.7541 -0.2873 0.0474 -0.3917 0.4667
µ 0.0001 -0.0002 0.0003 0.0001 0.0001 -0.0002 -0.0001 0.0004 0.0001 -0.0002 0.0003 -0.0002 0.0000 0.0000 0.0001 0.0001 0.0000 -0.0002 0.0008 0.0000 -0.0001 -0.0001 0.0002 0.0001 0.0002 -0.0001 0.0000 -0.0005 0.0001
σ 0.0002 0.0003 0.0001 0.0002 0.0003 0.0002 0.0003 0.0001 0.0002 0.0002 0.0002 0.0002 0.0001 0.0002 0.0001 0.0003 0.0004 0.0002 0.0003 0.0001 0.0004 0.0001 0.0002 0.0002 0.0004 0.0001 0.0003 0.0001 0.0003
The Anderson-Darling test rejected the hypothesis of stability of the full series in all cases. Since we found a large proportion of zeros in our series (see Table 2 and Fig. 2), we proceed with the estimation of the mixed-stable model parameters (see Section 3.1). The distribution of zero returns in a series is given in Fig. 2. The results of parameter estimation for the 29 DAX series are given in Table 3. The results of the goodness-of-fit test based on the empirical characteristic function are presented in Table 4; they show that in 75.86% of the cases the goodness-of-fit hypothesis was not rejected for the mixed-stable law.
Figure 2: Distribution of zero returns.
Table 3: Estimates of parameters of mixed-stable distribution and and value of Anderson-Darling test statistics. Data set Adidas AG Deutsche Bank AG BASF SE BMW AG St Continental AG Deutsche Post AG Deutsche Telekom AG Bayer AG O.N. Fresenius Medical Care AG Deutsche Borse AG MAN SE St Henkel AG Infineon Technologies AG Linde AG Merck KGaA RWE AG St Daimler AG SAP AG Siemens AG METRO AG St ThyssenKrupp AG Volkswagen AG St Deutsche Postbank AG HYPO REAL ESTATE Commerzbank AG Deutsche Lufthansa AG Allianz SE Munchener Ruck AG TUI AG
α 1.8059 1.7252 1.7317 1.8498 1.2659 1.8895 1.9716 1.7746 1.8121 1.6818 1.4455 1.1397 1.9904 1.7621 1.7315 1.7206 1.6322 1.2294 1.8571 1.3243 1.7265 1.6523 1.7997 1.4384 1.7772 1.9294 1.4425 1.3572 1.7978
β -0.2105 0.5040 0.0802 0.0114 0.7174 0.1019 -0.1529 0.1342 -0.1385 -0.0214 -0.7883 -0.5828 0.8684 -0.1606 -0.1406 0.0329 0.6379 -0.1224 0.4400 -0.7434 -0.0046 0.0912 -0.2273 0.6894 -0.1563 0.1672 0.6277 -0.1744 0.1681
µ 0.0000 0.0001 0.0002 0.0000 0.0004 0.0000 0.0000 0.0000 0.0000 0.0000 -0.0002 -0.0016 0.0000 0.0000 0.0000 0.0000 -0.0001 0.0000 0.0000 -0.0004 0.0000 0.0000 0.0000 0.0002 0.0000 0.0000 0.0002 -0.0001 0.0000
σ 0.0008 0.0005 0.0005 0.0006 0.0003 0.0007 0.0007 0.0005 0.0009 0.0006 0.0008 0.0006 0.0009 0.0007 0.0006 0.0005 0.0005 0.0006 0.0004 0.0004 0.0006 0.0003 0.0011 0.0007 0.0007 0.0007 0.0004 0.0005 0.0008
p 0.6382 0.2134 0.3144 0.5190 0.5160 0.6203 0.5827 0.3428 0.6542 0.4542 0.4686 0.6389 0.6853 0.6007 0.6114 0.3085 0.2235 0.3706 0.2261 0.5552 0.4461 0.4359 0.6791 0.5523 0.4288 0.6216 0.1879 0.3863 0.5840
A-D 3.54 15.97 75.55 8.60 375.26 31.66 94.50 7.71 3.62 2.24 42.58 1290.29 61.19 2.01 2.56 3.33 123.59 81.73 10.26 255.92 8.65 2.84 1.82 15.87 14.46 29.65 39.64 25.53 31.93
Table 4: Part of accepted hypothesis with given significance level, for mixedstable and mixed-Gaussian distributions, by Koutrouvelis test. Significance level 0.01 0.05 0.1
4.2
Mixed-stable 62.07% 75.86% 75.86%
Mixed-Gaussian 3.45% 6.90% 6.90%
Modeling of stagnation intervals: mixed stable model with dependent states
As mentioned above, theoretically (if states are independent) the series of zeros should be distributed by the binomial law and the lengths of these series should be distributed by the geometrical law, however, from Table 5 we can see that other laws fit our data (82.76% series) much better. It means that zero state series from our data are better described by the Hurwitz zeta distribution. Table 5: Modelling of the distribution of runs. Significance level Hurwitz distribution Generalized Hurwitz distribution Generalized logarithmic distribution Discrete stable distribution Poisson distribution Generalized Poisson distribution Geometrical distribution
0.01 93.10% 82.76% 68.97% 0 0 0 0
0.05 82.76% 72.41% 44.83% 0 0 0 0
0.10 68.97% 65.52% 34.48% 0 0 0 0
This result allows us to assume that the zero-unit states are not purely independent. The Wald-Wolfowitz runs test corroborates this assumption for almost all series of the given shares (an illustrative sketch of the test is given after Table 6). The inner dependence of the series was tested by the Hoel [10] criterion for the order of a Markov chain. It was concluded that only one series is of zero order (a Bernoulli scheme), and 82% of the given series are Markov chains of order four or higher at the 0.05 significance level (see Table 6).

Table 6: Orders of Markov chains.

Order of Markov chain      0      1      2      3     ≥4
Percent of the series    3.45  13.79  58.62  75.86  82.76
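A minimal sketch of the Wald-Wolfowitz runs test applied to the zero/non-zero state indicator is given below. It uses the standard normal approximation for the number of runs and is an illustration under stated assumptions rather than the exact implementation used in the study; the synthetic "independent" and "dependent" series are made up for the demonstration.

# Hedged sketch of the Wald-Wolfowitz runs test on the zero/non-zero
# indicator of a return series; under independence the standardized number
# of runs is approximately N(0, 1).
import numpy as np
from math import erf, sqrt

def runs_test(returns):
    x = (np.asarray(returns) != 0.0).astype(int)   # 1 = price change, 0 = stagnation
    n1, n2 = int(x.sum()), int((1 - x).sum())
    n = n1 + n2
    runs = 1 + int(np.sum(x[1:] != x[:-1]))        # number of runs in the 0/1 sequence
    mean = 2.0 * n1 * n2 / n + 1.0
    var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n ** 2 * (n - 1.0))
    z = (runs - mean) / sqrt(var)
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    independent = np.where(rng.random(5000) < 0.5, 0.0, 1.0)
    print("independent states:", runs_test(independent))
    sticky = np.repeat(np.where(rng.random(1000) < 0.5, 0.0, 1.0), 5)  # strongly dependent
    print("dependent states:  ", runs_test(sticky))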
4.3 Self-similarity, multifractality and Hurst exponent
Having these results, we may proceed to the analysis of self-similarity and multifractality. Studies of financial series have shown that asset prices exhibit strong variability and burstiness on many time scales. This was the reason to introduce fractal models, which are able to capture the discovered scaling properties [2]. Self-similarity expresses the monofractal property, that is, fluctuations look statistically similar on many time scales. Long-range dependence is revealed by a slow, power-law decay of the correlations; the degree of this slow decay is determined by the Hurst exponent. Multifractality expresses a more complex scaling behavior, which cannot be explained in a self-similar framework. The next logical step is therefore to analyze the applicability of fractal models to high-frequency financial data. As we can see (the results of the absolute moments method for the full series and the series without zeros are given in Table 7), this approach is validated by the empirical studies.

Table 7: Multifractality and self-similarity analysis: 0 indicates that the series does not have the property, 1 that it does.

                             Full series                Without zero returns
Data set                    Multifractal  Self-similar  Multifractal  Self-similar
Adidas AG                        1             0             1             0
Deutsche Bank AG                 1             1             1             1
BASF SE                          1             0             1             0
BMW AG St                        1             1             1             1
Continental AG                   0             0             0             0
Deutsche Post AG                 1             0             1             0
Deutsche Telekom AG              1             0             1             1
Bayer AG O.N.                    1             0             1             0
Fresenius Medical Care AG        1             1             1             1
Deutsche Borse AG                1             1             1             0
MAN SE St                        1             1             1             1
Henkel AG                        1             0             1             1
Infineon Technologies AG         1             1             1             0
Linde AG                         1             1             1             1
Merck KGaA                       1             0             1             0
RWE AG St                        1             0             1             0
Daimler AG                       1             0             0             0
SAP AG                           1             1             1             0
Siemens AG                       1             1             1             1
METRO AG St                      0             0             0             0
ThyssenKrupp AG                  1             0             1             0
Volkswagen AG St                 1             0             1             0
Deutsche Postbank AG             1             0             1             0
HYPO REAL ESTATE                 1             0             1             0
Commerzbank AG                   1             0             1             0
Deutsche Lufthansa AG            1             1             1             0
Allianz SE                       1             1             1             0
Munchener Ruck AG                1             0             0             0
TUI AG                           1             0             1             0
As we can see, 93% of the full financial series and 90% of the series without zeros exhibit multifractality. However, only 42% of the full series are self-similar (see Table 7). Table 8 presents the estimates of the Hurst exponent. The Hurst exponent was estimated by two methods: R/S and the ratio of variance residuals
(RVR); an illustrative implementation of the R/S estimator is sketched after Table 8. The results are given with the corresponding correlation coefficient ρ.

Table 8: Hurst exponent estimates.

                             Full series                      Without zero returns
Data set                    R/S      ρ      RVR      ρ       R/S      ρ      RVR      ρ
Adidas AG                  0.457  0.9957  0.518  0.9763     0.479  0.9952  0.491  0.9826
Deutsche Bank AG           0.629  0.9942  0.632  0.9777     0.622  0.9975  0.617  0.9826
BASF SE                    0.551  0.9988  0.575  0.9719     0.550  0.9996  0.599  0.9827
BMW AG St                  0.486  0.9934  0.437  0.9455     0.472  0.9988  0.445  0.9537
Continental AG             0.565  0.9981  0.636  0.9859     0.551  0.9996  0.600  0.9790
Deutsche Post AG           0.419  0.9394  0.477  0.9953     0.151  0.2870  0.432  0.9872
Deutsche Telekom AG        0.514  0.9953  0.475  0.9611     0.307  0.6897  0.467  0.9853
Bayer AG O.N.              0.552  0.9993  0.600  0.9729     0.529  0.9987  0.508  0.9969
Fresenius Medical Care AG  0.498  0.9976  0.470  0.9847     0.232  0.5719  0.447  0.9822
Deutsche Borse AG          0.596  0.9973  0.629  0.9901     0.623  0.9964  0.652  0.9895
MAN SE St                  0.562  0.9995  0.605  0.9874     0.563  0.9996  0.610  0.9875
Henkel AG                  0.496  0.9938  0.498  0.9841     0.175  0.3555  0.464  0.9776
Infineon Technologies AG   0.484  0.9879  0.504  0.9807     0.238  0.5889  0.441  0.9860
Linde AG                   0.380  0.9125  0.464  0.9858     0.158  0.3197  0.443  0.9821
Merck KGaA                 0.531  0.9991  0.516  0.9914     0.525  0.9984  0.518  0.9896
RWE AG St                  0.492  0.9981  0.413  0.9993     0.476  0.9974  0.408  0.9988
Daimler AG                 0.555  0.9951  0.553  0.9494     0.377  0.7981  0.552  0.9633
SAP AG                     0.333  0.8382  0.431  0.9636     0.431  0.9976  0.298  0.9750
Siemens AG                 0.575  0.9990  0.569  0.9958     0.355  0.7389  0.529  0.9970
METRO AG St                0.504  0.9960  0.469  0.9882     0.477  0.9903  0.412  0.9908
ThyssenKrupp AG            0.395  0.8612  0.544  0.9853     0.302  0.6278  0.539  0.9921
Volkswagen AG St           0.496  0.9953  0.488  0.9854     0.480  0.9976  0.462  0.9733
Deutsche Postbank AG       0.547  0.9948  0.574  0.9952     0.552  0.9976  0.461  0.9901
HYPO REAL ESTATE           0.527  0.9993  0.529  0.9873     0.280  0.6117  0.460  0.9906
Commerzbank AG             0.548  0.9950  0.555  0.9756     0.258  0.5031  0.518  0.9639
Deutsche Lufthansa AG      0.537  0.9987  0.529  0.9917     0.276  0.6032  0.524  0.9934
Allianz SE                 0.608  0.9965  0.545  0.9837     0.593  0.9992  0.539  0.9947
Munchener Ruck AG          0.566  0.9984  0.557  0.9761     0.568  0.9991  0.488  0.9923
TUI AG                     0.434  0.9456  0.524  0.9924     0.232  0.4889  0.517  0.9825
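The following sketch shows a textbook rescaled-range (R/S) estimator of the Hurst exponent of the kind referred to above. It is an assumption-laden illustration (window handling and averaging details may well differ from the variant used for Table 8), included only to make the estimation idea concrete.

# Minimal R/S sketch: regress log(R/S) on log(window size); the slope is the
# Hurst exponent estimate.  Illustrative, not the paper's exact variant.
import numpy as np

def hurst_rs(series, min_window=16):
    x = np.asarray(series, dtype=float)
    sizes, rs_values = [], []
    w = min_window
    while w <= len(x) // 2:
        rs = []
        for start in range(0, len(x) - w + 1, w):
            chunk = x[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = chunk.std(ddof=1)                # sample standard deviation
            if s > 0:
                rs.append(r / s)
        if rs:
            sizes.append(w)
            rs_values.append(np.mean(rs))
        w *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    print("white noise, H ~ 0.5:", round(hurst_rs(rng.standard_normal(2 ** 14)), 3))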
5 Conclusions

• As we can see from Table 1, the empirical data contain many repeating values (on average 47% of the data) and exhibit significant kurtosis. Classical α-stable models are not applicable in this case, a fact reflected in Table 2.
• Mixed-stable models can be used to describe intraday high-frequency data, see Table 3.
• The return series exhibit dependence between states (the price-change indicators), which is corroborated by the Wald-Wolfowitz runs test and the Hoel test. Therefore the intraday (high-frequency) series should be modeled by a mixed-stable law with dependent states. The runs (sets of zero returns in a series) are best described by the Hurwitz zeta distribution (see Table 5). The conditional probability of a price change in the nth time tick can be calculated by formula (1).
• Analysis of the series behavior showed that fewer than half of them are self-similar, while 93% are multifractal. The range of the Hurst index, calculated by the RVR method, is [0.413; 0.636]. Since the Hurst index is in some cases less than 0.5, those series exhibit antipersistence.

Acknowledgments. I. Belovas and A. Kabasinskas were supported by a grant from the German Academic Exchange Service.
References

[1] I. Belov, On the computation of the probability density function of α-stable distributions, Mathematical Modelling and Analysis. Proceedings of the 10th International Conference MMA 2005, Technika, 2005, pp. 333–341.
[2] I. Belovas, A. Kabasinskas and L. Sakalauskas, Study of stable models of equity markets, Informational Technologies 2005. Proceedings of the Conference, Technologija, Kaunas, vol. 2, 2005, pp. 439–462 (in Lithuanian).
[3] I. Belovas, A. Kabasinskas and L. Sakalauskas, Returns modelling problem in the Baltic equity market, Proceedings of the 5th International Conference on Operational Research: Simulation and Optimization in Business and Industry, Technologija, Kaunas, 2006, pp. 3–8.
[4] I. Belov, V. Starikovicius, Parallelization of α-stable modelling algorithms, Mathematical Modelling and Analysis, 12(4), 409–418 (2007).
[5] H. Chernoff, E.L. Lehmann, The use of maximum likelihood estimates in χ2 tests for goodness-of-fit, The Annals of Mathematical Statistics, 25(3), pp. 579–586 (1954).
[6] G. Christoph, K. Schreiber, Discrete stable random variables, Statistics & Probability Letters, 37, pp. 243–247 (1998).
[7] R. Dacorogna, U. Gencay, A. Muller, R. B. Olsen and O. V. Pictet, An Introduction to High-Frequency Finance, Academic Press, San Diego, 2001.
[8] T. Doganoglu, S. Mittnik, An approximation procedure for asymmetric stable Paretian densities, Computational Statistics, 13(4), pp. 463–475.
[9] J. Feder, Fractals, Plenum Press, New York, 1988.
[10] P. G. Hoel, A test for Markoff chains, Biometrika, 41(3/4), pp. 430–433 (1954).
[11] D.R. Holt, E.L. Crow, Tables and graphs of the stable probability density functions, Journal of Research of the National Bureau of Standards. B. Mathematical Sciences, 77B(34), pp. 143–198 (1973).
[12] A. Kabasinskas, S. Rachev, L. Sakalauskas, W. Sun and I. Belovas, α-Stable paradigm in financial markets, Journal of Computational Analysis and Applications, 11(3), pp. 642–688 (2009).
[13] A. Kabasinskas, S. Rachev, L. Sakalauskas, W. Sun and I. Belovas, Stable mixture model with dependent states for financial return series exhibiting short histories and periods of strong passivity, Journal of Computational Analysis and Applications, 12(N1-B), pp. 268–292 (2010).
[14] A.I. Kobzar, Matematiko-statisticheskie metody v elektronnoj technike, Nauka, Moskva, 1978 (in Russian).
[15] S.M. Kogon, D.B. Williams, Characteristic function based estimation of stable parameters, in: A Practical Guide to Heavy Tailed Data, Birkhauser, Boston, 1998, pp. 311–338.
[16] I.A. Koutrouvelis, Regression type estimation of the parameters of stable laws, Journal of the American Statistical Association, 75, pp. 918–928 (1980).
[17] I.A. Koutrouvelis, A goodness-of-fit test of simple hypotheses based on the empirical characteristic function, Biometrika, 67(1), pp. 238–240 (1980).
[18] I.A. Koutrouvelis, An iterative procedure for the estimation of the parameters of stable laws, Communications in Statistics. Simulation and Computation, 10, pp. 17–28 (1981).
[19] I.A. Koutrouvelis, J.A. Kellermeier, Goodness-of-fit test based on the empirical characteristic function when parameters must be estimated, Journal of the Royal Statistical Society, Series B (Methodological), 43(2), pp. 173–176 (1981).
[20] J.H. McCulloch, Simple consistent estimators of stable distribution parameters, Communications in Statistics. Simulation and Computation, 15, pp. 1109–1136 (1986).
[21] S. Mittnik, T. Doganoglu and D. Chenyao, Computing the probability density function of the stable Paretian distribution, Mathematical and Computer Modelling, 29, pp. 235–240 (1999).
[22] J.P. Nolan, Numerical calculation of stable densities and distribution functions, Communications in Statistics. Stochastic Models, 13, pp. 759–774 (1997).
[23] J.P. Nolan, An algorithm for evaluating stable densities in Zolotarev's (M) parametrization, Mathematical and Computer Modelling, 29, pp. 229–233 (1999).
[24] D. Ojeda, Comparative study of stable parameter estimators and regression with stably distributed errors, PhD thesis, American University, 2001.
[25] C.K. Peng et al., Mosaic organization of DNA nucleotides, Physical Review E, 49, pp. 1685–1689 (1994).
[26] S.J. Press, Estimation in univariate and multivariate stable distributions, Journal of the American Statistical Association, 67, pp. 842–846 (1972).
[27] S. Rachev, S. Mittnik, Stable Paretian Models in Finance, John Wiley and Sons, New York, 2002.
[28] G.V. Reklaitis, A. Ravindran and K.M. Ragsdell, Engineering Optimization: Methods and Applications, John Wiley & Sons, New York, 1983.
[29] G. Samorodnitsky, M.S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman & Hall, New York-London, 2000.
[30] W. Sun, S. Rachev and F. Fabozzi, Long-range dependence, fractal processes, and intra-daily data, in: Handbook of IT and Finance, Springer, 2008, pp. 543–586.
[31] M. S. Taqqu, V. Teverovsky and W. Willinger, Estimators for long-range dependence: an empirical study, Fractals, 3(4), pp. 785–788 (1995).
[32] W. Wasserfallen, H. Zimmermann, The behavior of intradaily exchange rates, Journal of Banking and Finance, 9, pp. 55–72 (1985).
[33] V.M. Zolotarev, One-Dimensional Stable Distributions, American Mathematical Society, 1986.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 1227-1236, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
NEW REPRESENTATIONS FOR THE EULER-MASCHERONI CONSTANT AND INEQUALITIES FOR THE GENERALIZED-EULER-CONSTANT FUNCTION
CHAO-PING CHEN∗ AND WING-SUM CHEUNG
Abstract. (i) We establish new representations for the Euler-Mascheroni constant γ in terms of the gamma and psi functions. (ii) A class of two-sided inequalities for the generalized-Euler-constant function is presented. These inequalities provide lower and upper bounds on the error of approximating the generalized-Euler-constant function by means of a power series.
1. Introduction

The Euler constant (or, more popularly, the Euler-Mascheroni constant) γ is defined by the limit
\[
  \gamma = \lim_{n\to\infty} D_n = 0.57721566\ldots, \tag{1}
\]
where
\[
  D_n = \sum_{j=1}^{n}\frac{1}{j} - \ln n, \qquad n \in \mathbb{N} := \{1, 2, 3, \ldots\}. \tag{2}
\]
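A short numerical illustration (not part of the original paper) of the definition (1)-(2): the sequence D_n does approach γ, but only slowly.

# Illustrative only: D_n converges to gamma slowly.
import math

GAMMA = 0.5772156649015329

def D(n):
    return sum(1.0 / j for j in range(1, n + 1)) - math.log(n)

for n in (10, 100, 1000, 10000):
    print(n, D(n), D(n) - GAMMA)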
The constant γ is closely related to the celebrated gamma function Γ(z) by means of the familiar Weierstrass formula [1, p. 255, Equation (6.1.3)] (see also [32, Chapter 1, Section 1.1]):
\[
  \frac{1}{\Gamma(z)} = z\,e^{\gamma z}\prod_{n=1}^{\infty}\Bigl[\Bigl(1+\frac{z}{n}\Bigr)e^{-z/n}\Bigr] \qquad (|z| < \infty).
\]
The logarithmic derivative of the gamma function:
\[
  \psi(z) = \frac{\Gamma'(z)}{\Gamma(z)} \qquad\text{or}\qquad \ln\Gamma(z) = \int_{1}^{z}\psi(t)\,dt
\]
is known as the psi (or digamma) function.

2010 Mathematics Subject Classification. Primary 11Y60, 26D15; Secondary 40A05.
Key Words and Phrases. Euler-Mascheroni constant; Generalized-Euler-constant function; Gamma function; Psi (or Digamma) function; Inequality.
The second author is supported in part by the Research Grants Council of the Hong Kong SAR Project no. HKU7016/07P.
*Corresponding Author.
Sondow and Hadjicostas [31] introduced and studied the generalized-Euler-constant function γ(z), defined by
\[
  \gamma(z) = \sum_{n=1}^{\infty} z^{n-1}\Bigl(\frac{1}{n} - \ln\frac{n+1}{n}\Bigr), \tag{3}
\]
where the series converges when |z| ≤ 1. Pilehrood and Pilehrood [23] considered the function zγ(z) (|z| ≤ 1). The function γ(z) generalizes both Euler's constant γ(1) and the alternating Euler constant ln(4/π) = γ(−1) [28, 29], where γ is defined by (1). The limit definition (1) of Euler's constant is equivalent to the series formula
\[
  \gamma = \sum_{n=1}^{\infty}\Bigl(\frac{1}{n} - \ln\frac{n+1}{n}\Bigr) \tag{4}
\]
(see [28]). The corresponding alternating series gives the alternating Euler constant [28, 29] (see also [15])
\[
  \gamma(-1) = \ln\frac{4}{\pi} = \sum_{n=1}^{\infty}(-1)^{n-1}\Bigl(\frac{1}{n} - \ln\frac{n+1}{n}\Bigr) = 0.24156447\ldots. \tag{5}
\]
It is known (see [31]) that
\[
  \gamma(0) = 1 - \ln 2 \qquad\text{and}\qquad \gamma\Bigl(\frac{1}{2}\Bigr) = 2\ln\frac{2}{\sigma}, \tag{6}
\]
where
\[
  \sigma = \sqrt{1\sqrt{2\sqrt{3\cdots}}} = 1^{1/2}\,2^{1/4}\,3^{1/8}\cdots = \prod_{n=1}^{\infty} n^{1/2^{n}} = 1.66168794\ldots \tag{7}
\]
is Somos's quadratic recurrence constant [27] (see also [38] and [14, p. 446]). An alternative method for estimating the generalized-Euler-constant function γ(z) was proposed by Lampret [18], who exploited the Euler-Maclaurin (Boole/Hermite) summation formula. Mortici [21] provided some estimates of Somos's quadratic recurrence constant.
This paper is organized as follows. In Section 2 we establish new representations for the Euler-Mascheroni constant γ in terms of the gamma and psi functions. In Section 3, a class of two-sided inequalities for the generalized-Euler-constant function is presented.

2. New representations for the Euler-Mascheroni constant

Several bounds for D_n − γ have been given in the literature [2, 3, 24, 33, 34, 35, 39]. For example, the following bounds for D_n − γ were established in [24, 39]:
\[
  \frac{1}{2(n+1)} < D_n - \gamma < \frac{1}{2n}, \qquad n \in \mathbb{N}. \tag{8}
\]
Alzer [2] obtained a sharp form of the inequality (8). The convergence of the sequence D_n to γ is very slow. Some quicker approximations to the Euler-Mascheroni constant were established in [4, 7, 6, 8, 5, 12, 16, 19, 20, 22, 25, 26, 36, 37]. For example, DeTemple [12] studied in 1993 a modified sequence which converges faster and established the following inequality:
\[
  \frac{1}{24(n+1)^{2}} < R_n - \gamma < \frac{1}{24 n^{2}}, \tag{9}
\]
where
\[
  R_n = \sum_{j=1}^{n}\frac{1}{j} - \ln\Bigl(n + \frac{1}{2}\Bigr). \tag{10}
\]
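The two-sided bounds (8) and (9) are easy to verify numerically; the following snippet (illustrative only, not part of the paper) checks both for a few values of n.

# Illustrative check of (8) and (9): D_n - gamma lies in (1/(2(n+1)), 1/(2n))
# and R_n - gamma lies in (1/(24(n+1)^2), 1/(24 n^2)).
import math

GAMMA = 0.5772156649015329

def D(n):
    return sum(1.0 / j for j in range(1, n + 1)) - math.log(n)

def R(n):
    return sum(1.0 / j for j in range(1, n + 1)) - math.log(n + 0.5)

for n in (5, 50, 500):
    d_err, r_err = D(n) - GAMMA, R(n) - GAMMA
    assert 1 / (2 * (n + 1)) < d_err < 1 / (2 * n)
    assert 1 / (24 * (n + 1) ** 2) < r_err < 1 / (24 * n ** 2)
    print(f"n={n:4d}  D_n-gamma={d_err:.3e}  R_n-gamma={r_err:.3e}")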
Recently, Chen [7] obtained a sharp form of the inequalities (9). Let D_n and R_n be defined by (2) and (10), respectively. Considering the sums \(\sum_{k=1}^{n} D_k\) and \(\sum_{k=1}^{n} R_k\), we find new analytical representations for the Euler-Mascheroni constant γ in terms of the gamma and psi functions.

Theorem 1. Let n be a positive integer. Then we have
\[
  \gamma = \frac{1}{n+1}\sum_{k=1}^{n}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln k\Bigr) - \psi(n+2) + 1 + \frac{1}{n+1}\ln\Gamma(n+1) \tag{11}
\]
and
\[
  \gamma = \frac{1}{n+1}\sum_{k=1}^{n}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln\Bigl(k+\frac{1}{2}\Bigr)\Bigr) - \psi(n+2) + 1 + \frac{1}{n+1}\ln\Gamma\Bigl(n+\frac{3}{2}\Bigr) - \frac{\ln(\sqrt{\pi}/2)}{n+1}. \tag{12}
\]
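The representations (11) and (12) can be checked numerically. The sketch below (illustrative, assuming SciPy's digamma and gammaln) evaluates both right-hand sides and should reproduce γ to machine precision for each tested n.

# Numerical check (illustrative) of (11) and (12).
import math
from scipy.special import digamma, gammaln

GAMMA = 0.5772156649015329

def harmonic(k):
    return sum(1.0 / j for j in range(1, k + 1))

def rep11(n):
    s = sum(harmonic(k) - math.log(k) for k in range(1, n + 1))
    return s / (n + 1) - digamma(n + 2) + 1 + gammaln(n + 1) / (n + 1)

def rep12(n):
    s = sum(harmonic(k) - math.log(k + 0.5) for k in range(1, n + 1))
    return (s / (n + 1) - digamma(n + 2) + 1
            + gammaln(n + 1.5) / (n + 1)
            - math.log(math.sqrt(math.pi) / 2) / (n + 1))

for n in (1, 5, 20, 100):
    print(n, rep11(n) - GAMMA, rep12(n) - GAMMA)   # both differences ~ 1e-15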
Proof. It is easily observed that the formula (11) is equivalent to the following result:
\[
  \sum_{k=1}^{n}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln k\Bigr) = (n+1)\bigl(\psi(n+2) + \gamma - 1\bigr) - \ln\Gamma(n+1), \qquad n \in \mathbb{N}. \tag{13}
\]
We now prove the representation formula (13) by using the principle of mathematical induction. In our proof of the representation formula (13), we require the values at the positive integers [1, p. 258, Equation (6.3.2)]:
\[
  \psi(1) = -\gamma \qquad\text{and}\qquad \psi(n+1) = -\gamma + \sum_{k=1}^{n}\frac{1}{k}, \tag{14}
\]
and the recurrence formulas [1, p. 258, Equation (6.3.5)]:
\[
  \Gamma(z+1) = z\,\Gamma(z) \qquad\text{and}\qquad \psi(z+1) = \psi(z) + \frac{1}{z}. \tag{15}
\]
For n = 1, we find from (13) that
\[
  \sum_{k=1}^{1}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln k\Bigr) = 2\bigl(\psi(3) + \gamma - 1\bigr) - \ln\Gamma(2) = 1,
\]
which shows that the formula (13) holds true for n = 1. We assume now that the formula (13) holds true for a fixed positive integer n. Then, for n \mapsto n + 1 in (13), we have
\[
\begin{aligned}
  \sum_{k=1}^{n+1}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln k\Bigr)
  &= \sum_{k=1}^{n}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln k\Bigr) + \sum_{j=1}^{n+1}\frac{1}{j} - \ln(n+1)\\
  &= (n+1)\bigl(\psi(n+2) + \gamma - 1\bigr) - \ln\Gamma(n+1) + \psi(n+2) + \gamma - \ln(n+1)\\
  &= (n+2)\bigl(\psi(n+3) + \gamma - 1\bigr) - \ln\Gamma(n+2).
\end{aligned}
\]
The proof of the formula (13) is thus completed by means of the principle of mathematical induction on n.
It is easily observed that the formula (12) is equivalent to the following result:
\[
  \sum_{k=1}^{n}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln\Bigl(k+\frac{1}{2}\Bigr)\Bigr) = (n+1)\bigl(\psi(n+2) + \gamma - 1\bigr) - \ln\Gamma\Bigl(n+\frac{3}{2}\Bigr) + \ln(\sqrt{\pi}/2), \qquad n \in \mathbb{N}. \tag{16}
\]
We now prove the representation formula (16) by using the principle of mathematical induction. For n = 1, we find from (16) that
\[
  \sum_{k=1}^{1}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln\Bigl(k+\frac{1}{2}\Bigr)\Bigr) = 2\bigl(\psi(3) + \gamma - 1\bigr) - \ln\Gamma\Bigl(\frac{5}{2}\Bigr) + \ln(\sqrt{\pi}/2) = 1 + \ln 2 - \ln 3,
\]
which shows that the formula (16) holds true for n = 1. We assume now that the formula (16) holds true for a fixed positive integer n. Then, for n \mapsto n + 1 in (16), we have
\[
\begin{aligned}
  \sum_{k=1}^{n+1}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln\Bigl(k+\frac{1}{2}\Bigr)\Bigr)
  &= \sum_{k=1}^{n}\Bigl(\sum_{j=1}^{k}\frac{1}{j} - \ln\Bigl(k+\frac{1}{2}\Bigr)\Bigr) + \sum_{j=1}^{n+1}\frac{1}{j} - \ln\Bigl(n+\frac{3}{2}\Bigr)\\
  &= (n+1)\bigl(\psi(n+2) + \gamma - 1\bigr) - \ln\Gamma\Bigl(n+\frac{3}{2}\Bigr) + \ln(\sqrt{\pi}/2) + \psi(n+2) + \gamma - \ln\Bigl(n+\frac{3}{2}\Bigr)\\
  &= (n+2)\bigl(\psi(n+3) + \gamma - 1\bigr) - \ln\Gamma\Bigl(n+\frac{5}{2}\Bigr) + \ln(\sqrt{\pi}/2).
\end{aligned}
\]
The proof of the formula (16) is thus completed by means of the principle of mathematical induction on n.
□
Remark 1. Choi [10] summarized some known representations for the Euler-Mascheroni constant γ. For a rather impressive collection of various classes of integral representations for the Euler-Mascheroni constant γ, the interested reader may be referred to a recent paper by Choi and Srivastava [11]. Very recently, Chen and Srivastava [9] presented the following representation for the Euler-Mascheroni constant γ:
\[
  \gamma = -\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{i+j} + \ln 2 - 1 + \Bigl(n+\frac{1}{2}\Bigr)\psi\Bigl(n+\frac{1}{2}\Bigr) - \Bigl(n+\frac{3}{2}\Bigr)\psi(n) + (2\ln 2)\,n - \frac{3}{2n}. \tag{17}
\]
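Representation (17) can likewise be verified numerically; the following snippet (illustrative, assuming SciPy's digamma) evaluates the right-hand side for several n, and the difference from γ should be at the level of floating-point rounding.

# Illustrative check of (17).
import math
from scipy.special import digamma

GAMMA = 0.5772156649015329

def rep17(n):
    double_sum = sum(1.0 / (i + j) for i in range(1, n + 1) for j in range(1, n + 1))
    return (-double_sum + math.log(2) - 1
            + (n + 0.5) * digamma(n + 0.5)
            - (n + 1.5) * digamma(n)
            + 2 * math.log(2) * n
            - 3.0 / (2 * n))

for n in (1, 2, 10, 50):
    print(n, rep17(n), rep17(n) - GAMMA)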
3. A class of two-sided inequalities for the generalized-Euler-constant function

Theorem 2 presents a class of two-sided inequalities for the generalized-Euler-constant function. These inequalities provide lower and upper bounds on the error of approximating the generalized-Euler-constant function by means of a power series.

Theorem 2. Let |z| ≤ 1 and n = 1, 2, .... Then
\[
  \frac{1}{n+1}\sum_{k=1}^{\infty}\frac{z^{k-1}}{(k+\theta(\infty))^{n+1}}
  < (-1)^{n+1}\Bigl[\gamma(z) - \sum_{i=1}^{n-1}\frac{(-1)^{i+1}}{i+1}\sum_{k=1}^{\infty}\frac{z^{k-1}}{k^{i+1}}\Bigr]
  < \frac{1}{n+1}\sum_{k=1}^{\infty}\frac{z^{k-1}}{(k+\theta(1))^{n+1}}.
\]
k=1
0 . dt (1 + t)tn+1
(22)
However, since g(t) > 0, (22) is equivalent to the following inequality: h(t) = g(t) −
1 (n + 1)(1 +
t)(n+1)/(n+2) t(n+1)2 /(n+2)
0 n+2 n+2
(t > 0) ,
by using the weighted arithmetic-geometric mean inequality. We thus find from (20) that
\[
  \frac{1}{n+1}\sum_{k=1}^{\infty}\frac{z^{k-1}}{(k+\theta(\infty))^{n+1}}
  < (-1)^{n+1}\Bigl[\gamma(z) - \sum_{i=1}^{n-1}\frac{(-1)^{i+1}}{i+1}\sum_{k=1}^{\infty}\frac{z^{k-1}}{k^{i+1}}\Bigr]
  < \frac{1}{n+1}\sum_{k=1}^{\infty}\frac{z^{k-1}}{(k+\theta(1))^{n+1}}.
\]
Next, by applying (21), we get
\[
  \theta(1) = \Bigl\{(-1)^{n}(n+1)\Bigl[\ln 2 - \sum_{i=0}^{n-1}\frac{(-1)^{i}}{i+1}\Bigr]\Bigr\}^{-1/(n+1)} - 1 \tag{23}
\]
EULER’S CONSTANT AND THE GENERALIZED-EULER-CONSTANT FUNCTION
7
and · θ(t) =
1
¸−1/(n+1) n+1 1 −n−3 − + O(t ) −t n + 2 tn+2
tn+1 1 = + O(t−1 ) as n+2
t −→ ∞ ,
so that θ(∞) = lim θ(k) = k→∞
1 . n+2
By combining each of these observations with (23), we complete the proof of theorem.
¤
Remark 2. Clearly, we have for |z| ≤ 1, ∞
1 X 1 z k−1 −→ 0, n+1 (k + θ(k))n+1
n −→ ∞.
(24)
k=1
Proceeding to the limit when n → ∞ in (20), we obtain the following series expansion of the generalized-Euler-constant function γ(z): γ(z) =
∞ X ∞ X (−1)n 1 k−1 z . n kn n=2
(25)
k=1
In particular, taking z = 1 in (25), we obtain the following well-known (rather classical) series expansion of Euler’s constant (see, for example, [13, p.45, Equation 1.17 (3)], [17, p.355, Entry (54.2.3)] and [32, p.161, Equation 3.4 (23)]): γ=
∞ X (−1)n ζ(n) , n n=2
(26)
where ζ(s) =
∞ X 1 , s n n=1
1
is the Riemann zeta function. In fact, (25) can be written as [31, Theorem 1]: zγ(z) =
∞ X
(−1)k
k=2
Lik (z) , k
(27)
where Lik (z) (|z| ≤ 1) denotes the polylogarithm (see [15, 30]), defined for k = 2, 3, . . . by the convergent series Lik (z) =
∞ X zn . nk n=1
(28)
1234
8
CHAO-PING CHEN AND WING-SUM CHEUNG
References [1] M. Abramowitz and I. A. Stegun (Editors), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Applied Mathematics Series 55, Ninth printing, National Bureau of Standards, Washington, D.C., 1972. [2] H. Alzer, Inequalities for the gamma and polygamma functions, Abh. Math. Sem. Univ. Hamburg 68 (1998), 363–372. [3] G. D. Anderson, R. W. Barnard, K. C. Richards, M. K. Vamanamurthy and M. Vuorinen, Inequalities for zero-balanced hypergeometric functions, Trans. Amer. Math. Soc. 347 (1995), 1713–1723. [4] E. Ces` aro, Sur la serie harmonique, Nouvelles Annales de Math´ ematiques 4 (1885), 295–296. [5] C.-P. Chen, The Best Bounds in Vernescu’s Inequalities for the Euler’s Constant, RGMIA Res. Rep. Coll. 12 (2009), no.3, Article 11. Available online at http://ajmaa.org/RGMIA/ v12n3.php. [6] C.-P. Chen, Inequalities and monotonicity properties for some special functions, J. Math. Inequal. 3 (2009), 79–91. [7] C.-P. Chen, Inequalities for the Euler-Mascheroni constant, Appl. Math. Lett. 23 (2010), 161–164. [8] C.-P. Chen, Monotonicity properties of functions related to the psi function, Appl. Math. Comput. 217 (2010), 2905–2911. [9] C.-P. Chen and H. M. Srivastava, New representations for the Lugo and Euler-Mascheroni constants, Appl. Math. Lett. 24 (2011), 1239–1244. [10] J. Choi, Some mathematical constants, Appl. Math. Comput. 187 (2007), 122–140. [11] J. Choi and H. M. Srivastava, Integral representations for the Euler-Mascheroni constant γ, Integral Transforms Spec. Funct. 21 (2010), 675–690 [12] D. W. DeTemple, A quicker convergence to Euler’s constant, Amer. Math. Monthly 100 (1993), 468–470. [13] A. Erd´ elyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions, Vol. I, McGraw-Hill Book Company, New York, Toronto and London, 1953. [14] S. Finch, Mathematical Constants, Cambridge Univ. Press, Cambridge, 2003. [15] J. Guillera and J. Sondow, Double integrals and infinite products for some classical constants via analytic continuations of Lerch’s transcendent, Ramanujan J. 16 (2008), 247–270. [16] B.-N. Guo and F. Qi, Sharp bounds for harmonic numbers, Appl. Math. Comput. (2011), doi:10.1016/ j.amc.2011.01.089 [17] E. R. Hansen, A Table of Series and Products, Prentice-Hall, Englewwood Cliffs, New Jersey, 1975. [18] V. Lampret, Approximation of Sondow’s generalized-Euler-constant function on the interval [−1, 1], Ann. Univ. Ferrara 56 (2010), 65–76. [19] C. Mortici, On new sequences converging towards the Euler-Mascheroni constant. Comput. Math. Appl. 59 (2010), 2610–2614. [20] C. Mortici, Improved convergence towards generalized Euler-Mascheroni constant. Appl. Math. Comput. 215 (2010), 3443–3448.
1235
EULER’S CONSTANT AND THE GENERALIZED-EULER-CONSTANT FUNCTION
9
[21] C. Mortici, Estimating the Somos’ quadratic recurrence constant. J. Number Theory 130 (2010), 2650–2657. [22] T. Negoi, A faster convergence to the constant of Euler, Gazeta Matematic˘ a, seria A, 15 (1997), 111–113 (in Romanian). [23] K. H. Pilehrood and T.H. Pilehrood, Arithmetical properties of some series with logarithmic coefficients, Math. Z. 255 (2007), 117–131. [24] P. J. Rippon, Convergence with pictures, Amer. Math. Monthly, 93 (1986), 476–478. [25] A. Sˆınt˘ am˘ arian, A generalization of Euler’s constant, Numer. Algorithms 46 (2007), 141–151. [26] A. Sˆınt˘ am˘ arian, Some inequalities regarding a generalization of Euler’s constant, J. Inequal. Pure Appl. Math. 9 (2008), no.2, Article 46. http://www.emis.de/journals/JIPAM/images/ 352_07_JIPAM/352_07.pdf. [27] M. Somos, Several constants related to quadratic recurrences, unpublished note, 1999. [28] J. Sondow, Double integrals for Euler’s constant and ln(4/π) and an analog of Hadjicostas’s formula, Amer. Math. Monthly, 112 (2005), 61–65. [29] J. Sondow, New Vacca-type rational series for Euler’s constant and its alternating analog ln(4/π), preprint. Available at: http://www.arxiv.org/abs/math.NT/0508042. [30] J. Sondow, A faster product for π and a new integral for ln(π/2), Amer. Math. Monthly 112 (2005), 729–734. [31] J. Sondowa and P. Hadjicostas, The generalized-Euler-constant function γ(z) and a generalization of Somoss quadratic recurrence constant, J. Math. Anal. Appl. 332 (2007), 292–314. [32] H. M. Srivastava, J. Choi, Series associated with the zeta and related functions, Kluwer Academic Publishers, Dordrecht, Boston and London, 2001. [33] S. R. Tims and J. A. Tyrrell, Approximate evaluation of Euler’s constant, Math. Gaz. 55 (1971), 65–67. [34] L. T´ oth, Problem E3432, Amer. Math. Monthly, 98 (1991), 264. [35] L. T´ oth, Problem E3432 (Solution), Amer. Math. Monthly, 99 (1992), 684–685. [36] A. Vernescu, A new accelerate convergence to the constant of Euler, Gaz. Mat. Ser. A (17) 96 (1999), 273-278 (in Romanian). [37] M. Villarino, Ramanujan’s harmonic number expansion into negative powers of a triangular number, J. Inequal. Pure Appl. Math. 9 (2008), no. 3, Article 89. Available online at http: //www.emis.de/journals/JIPAM/images/245_07_JIPAM/245_07.pdf. [38] E. fram
W.
Weisstein,
Web
Resource.
Somos’s
quadratic
Published
recurrence
electronically
at:
constant,
MathWorldA
Wol-
http://mathworld.wolfram.com/
SomossQuadraticRecurrenceConstant.html. [39] R. M. Young, Euler’s Constant, Math. Gaz. 75 (1991), 187–190.
1236
10
CHAO-PING CHEN AND WING-SUM CHEUNG
(Chao-Ping Chen) School of Mathematics and Informatics, Henan Polytechnic University, Jiaozuo City 454003, Henan Province, China E-mail address: [email protected] (Wing-Sum Cheung) Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong, China E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.7, 1237-1247 , 2012, COPYRIGHT 2012 EUDOXUS 1237 PRESS, LLC
CAUCHY-JENSEN FUNCTIONAL INEQUALITY IN BANACH SPACES AND NON-ARCHIMEDEAN BANACH SPACES ∗ ICK-SOON CHANG, M. ESHAGHI GORDJI AND HARK-MAHN KIM
†
Abstract. In this paper, we prove the generalized Hyers-Ulam stability of the following Cauchy-Jensen functional inequality n l n ° ³ Pl x °X ° ´° X X ° ° ° ° i=1 i + yj °, f (xi ) + m f (yj )° ≤ °mf ° m i=1 i=1 j=1 in the class of mappings from normed spaces to Banach spaces and non-Archimedean Banach spaces.
1. Introduction The stability problem of functional equations originated from a question of Ulam [21] concerning the stability of group homomorphisms. We are given a group G1 and a metric group G2 with metric ρ(·, ·). Given ² > 0, does there exist a δ > 0 such that if f : G1 → G2 satisfies ρ(f (xy), f (x)f (y)) < δ for all x, y ∈ G1 , then a homomorphism h : G1 → G2 exists with ρ(f (x), h(x)) < ² for all x ∈ G1 ? In other words, we are looking for situations when the homomorphisms are stable, i.e., if a mapping is almost a homomorphism, then there exists a true homomorphism near it. First of all, D. H. Hyers [8] considered the case of approximately additive mappings between Banach spaces and proved the following result. The method which was provided by Hyers, and which produces the additive mapping h, was called a direct method. This method is the most important and most powerful tool for studying the stability of various functional equations. Hyers’ Theorem was generalized by T. Aoki [1] and D.G. Bourgin [3] for additive mappings by considering an unbounded Cauchy difference. In 1978, Th.M. Rassias [17] also provided a generalization of Hyers Theorem for linear mappings which allows the Cauchy difference to be unbounded like this kxkp + kykp . A generalized result of Th.M. Rassias’ theorem was obtained by P. Gˇavruta in [6] and S. Jung in [9]. In 1990, Th.M. Rassias [18] during the 27th International Symposium on Functional Equations 2000 Mathematics Subject Classification. 39B82, 46H99, 39B72. Key words and phrases. Cauchy-Jensen functional inequality, Non-Archimedean Banach spaces, Generalized Hyers-Ulam stability. ∗ This work was supported by Basic Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (No. 2011–0002614). † Corresponding author. [email protected]. 1
1238
I. CHANG, M. ESHAGHI GORDJI AND H. KIM
2
asked the question whether such a theorem can also be proved for p ≥ 1. In 1991, Z. Gajda [5] following the same approach as in [17], gave an affirmative solution to this question ˘ for p > 1. It was shown by Z. Gajda [5], as well as by Th.M. Rassias and P. Semrl [19], that one cannot prove a Th.M. Rassias type theorem when p = 1. The counterexamples ˘ of Z. Gajda [5], as well as of Th.M. Rassias and P. Semrl [19], have stimulated several mathematicians to invent new approximately additive or approximately linear mappings. In particular, J.M. Rassias [15, 16] proved a similar stability theorem in which he replaced the unbounded Cauchy difference by this factor kxkp kykq for p, q ∈ R with p + q 6= 1. In addition, in the paper [4] the authors proves that if a mapping f satisfies the following functional inequality
° µ ¶° ° x + y + z °° kf (x) + f (y) + f (z)k ≤ °°kf °,
k = 3, |3| > |k|, k in non-Archimedean Banach spaces then f is additive, and they prove the generalized Hyers-Ulam stability of the functional inequality in non-Archimedean Banach spaces. The stability problems of several functional inequalities have been extensively investigated by a number of authors and there are many interesting results concerning this problem [10, 11, 13, 14]. In this paper, we generalize the functional inequality to the following generalized Cauchy-Jensen functional inequality ° l ° n X °X ° ° ° f (x ) + m f (y ) i j ° ≤ ° i=1
° ¶° µ Pl n X ° ° i=1 xi ° °mf + y j °, °
m
j=1
where l ≥ 2, m ≥ 1, n ≥ 0 are integers and
P0
i=1
(1.1)
i=1
E(i) := 0 by notational convenience.
Now, it is easy to see that if a mapping f satisfies the generalized Cauchy-Jensen inequality (1.1), then f is additive. In this work, we are going to improve the theorems given in the paper [4] without using the oddness of approximate additive functions concerning the functional inequality (1.1) for a more general case. Moreover, we obtain stability results of the functional inequality (1.1) in non-Archimedean Banach spaces in the last section. 2. Stability of (1.1) in Banach spaces In this section, let X be a normed space, Y a Banach space and let l ≥ 2, m ≥ 1, n ≥ 1 be integers. Theorem 2.1. Suppose that a mapping f : X → Y with f (0) = 0 satisfies the functional inequality ° l ° ° ¶° µ Pl n n X X ° °X ° ° i=1 xi ° ° ° ° + y f (x ) + m f (y ) ≤ mf j ° i j ° ° ° i=1
j=1
m
j=1
+ϕ(x1 , · · · , xl , y1 , · · · , yn )
(2.1)
1239
CAUCHY-JENSEN FUNCTIONAL INEQUALITY
3
for all x1 , · · · , xl , y1 , · · · , yn ∈ X and there exists a constant L with 0 < L < 1 for which the perturbing function ϕ : X l+n → R+ satisfies µ
ϕ
¶
l l (x1 , · · · , xl , y1 , · · · , yn ) ≤ L · ϕ(x1 , · · · , xl , y1 , · · · , yn ) mn mn
(2.2)
for all x1 , · · · , xl , y1 , · · · , yn ∈ X. Then there exists a unique additive mapping h1 : X → l k )k f (( mn ) x), (x ∈ X) such that Y , defined as h1 (x) = limk→∞ ( mn l
kf (x) − h1 (x)k ≤
1 Ψ(x) 1−L
(2.3)
for all x ∈ X, where Ψ(x) :=
1 ϕ(x, −x, · · · , x, −x, 0, · · · , 0, x, −x, · · · , x, −x) {z } {z } | K | n 2d 2l e
2d 2 e
l 1 l x, · · · , x), (x ∈ X), + ϕ(−x, · · · , −x, {z } mn l | mn } {z | l n
and K := d 2l e + md n2 e and d·e denotes Gaussian notation. Proof. Putting (x1 , · · · , xl , y1 , · · · , yn ) := (x, −x, · · · , x, −x, 0, · · · , 0, x, −x, · · · , x, −x) {z
|
}
{z
|
}
e 2d n 2
2d 2l e
in (2.1), we have an approximate odd condition kf (x) + f (−x)k 1 ≤ ϕ(x, −x, · · · , x, −x, 0, · · · , 0, x, −x, · · · , x, −x) {z } {z } | K | n 2d 2l e
(2.4)
2d 2 e
for all x ∈ X, where K := d 2l e + md n2 e and d·e denotes Gaussian notation. Replacing l l x, · · · , x) } mn {z mn } |
(x1 , · · · , xl , y1 , · · · , yn ) := (−x, · · · , −x, |
{z l
n
in (2.1), we lead to ° ° ° ° l l l °lf (−x) + mnf ( x)°° ≤ ϕ(−x, · · · , −x, x, · · · , x) ° {z } | mn mn {z mn } | l
(2.5)
n
for all x ∈ X. Associating (2.4) with (2.5) yields kf (x) −
l mn f( x)k ≤ Ψ(x) l mn
(2.6)
1240
I. CHANG, M. ESHAGHI GORDJI AND H. KIM
4
for all x ∈ X. Thus, it follows from (2.6) that for all nonnegative integers k and j with j > k ≥ 0 and x ∈ X ° µ ¶ µ ¶° ° mn k l k mn k+j l k+j °° °( ) f ( ) x −( ) f ( ) x ° ° l mn l mn µ ¶ µ ¶° k+j−1 X ° ° mn i l i mn i+1 l i+1 °° ° )f ( )x −( ) f ( ) x ° ≤ °( l mn l mn i=k ≤
µ
i=k
¶
k+j−1 X l i mn i )Ψ ( )x ≤ Li Ψ(x), ( l mn i=k
k+j−1 X
µ
½
¶¾
l k )k f ( mn ) x which tends to zero as k → ∞. Hence the sequence ( mn l
is Cauchy for all
x ∈ X, and so we can define a function h1 : X → Y by µ ¶ l k mn k ) f ( ) x , x ∈ X. h1 (x) = lim ( k→∞ l mn Moreover, letting k = 0 and j → ∞ in the last inequality yields 1 Ψ(x) kf (x) − h1 (x)k ≤ 1−L for all x ∈ X, which yields the estimation (2.3).
(2.7)
Next, let h1 0 : G → Y be another additive mapping satisfying the inequality (2.7). l k l k ) x) = ( mn )k h1 0 (x) and h1 (( mn ) x) = ( mn )k h1 (x) for all Then it is obvious that h1 0 (( mn l l k ∈ N and all x ∈ X. Thus, we have ° µ ¶ µ ¶° l k l k °° mn k °° 0 0 ) h1 ( ) x − h1 ( ) x ° k h1 (x) − h1 (x)k = ( l ° mn mn ½° µ ¶ µ ¶° ° µ ¶ µ ¶°¾ l k l k °° °° l k l k °° mn k °° 0 ) °h1 ( ) x −f ( ) x ° + °f ( ) x − h1 ( ) x ° ≤ ( l mn mn mn mn mn k l k 2 2 ( ) Ψ(( ) x) ≤ Lk Ψ(x) ≤ 1−L l mn 1−L for all k ∈ N and all x ∈ X. Taking the limit as k → ∞, we lead to the uniqueness of the mapping h1 near f satisfying the inequality (2.7). It follows from (2.1) and (2.2) that °
µ ¶ µ ¶° l n X ° l k l k mn k °° X ) ° f ( ) xi + m f ( ) yj °° ( l mn mn i=1 j=1 °
≤(
µ X ¶° n X ° 1 l l k l k mn k °° ) °mf ( ) xi + ( ) yj °° l m i=1 mn j=1 mn
+Lk ϕ(x1 , · · · , xl , y1 , · · · , yn ) for all k ∈ N and all x1 , · · · , xl , y1 , · · · , yn ∈ X. Taking k → ∞ in the last relation, we see that ° l ° n X °X ° ° h1 (xi ) + m h1 (yj )°° ≤ ° i=1
j=1
° ¶° µ Pl n X ° ° i=1 xi °mh1 + yj °° °
m
j=1
1241
CAUCHY-JENSEN FUNCTIONAL INEQUALITY
5
for all x1 , · · · , xl , y1 , · · · , yn ∈ X. This implies that the mapping h1 is additive. This ¤ completes the proof. Remark 2.2. Suppose that a mapping f : X → Y with f (0) = 0 satisfies the functional inequality (2.1) for which the perturbing function ϕ : X l+n → R+ satisfies µ ¶ ∞ X l i mn i )ϕ ( ) (x1 , · · · , xl , y1 , · · · , yn ) < ∞ (
l
i=0
mn
for all x1 , · · · , xl , y1 , · · · , yn ∈ X instead of the condition (2.2). Then it follows from the similar argument to Theorem 2.1 that there exists a unique additive mapping h1 : X → Y , l k )k f (( mn ) x), (x ∈ X) such that defined as h1 (x) = limk→∞ ( mn l
µ ¶ ∞ X l i mn i )Ψ ( )x kf (x) − h1 (x)k ≤ ( i=0
l
mn
for all x ∈ X, where Ψ is defined as in Theorem 2.1. Theorem 2.3. Suppose that a mapping f : X → Y with f (0) = 0 satisfies the functional inequality (2.1) and there exists a constant L with 0 < L < 1 for which the perturbing function ϕ : X l+n → R+ satisfies ¶ µ mn mn (x1 , · · · , xl , y1 , · · · , yn ) ≤ L · ϕ(x1 , · · · , xl , y1 , · · · , yn ) (2.8) ϕ l l for all x1 , · · · , xl , y1 , · · · , yn ∈ X. Then there exists a unique additive mapping h2 : X → l k ) f (( mn )k x), (x ∈ X) such that Y , defined as h2 (x) = limk→∞ ( mn l L Ψ(x) 1−L for all x ∈ X, where Ψ is given as in Theorem 2.1. kf (x) − h2 (x)k ≤
(2.9)
Proof. It follows from the inequality (2.6) that ° µ ¶ µ ¶° ° l k mn k l k+j mn k+j °° °( ) f ( ) x −( ) f ( ) x ° °
mn
≤
l
k+j−1 X
(
i=k
µ
mn
¶
mn i+1 l i+1 ) Ψ ( ) x ≤ mn l
l
k+j−1 X
Li+1 Ψ(x),
i=k
which tends to zero as k → ∞. The remaining proof is similar to the corresponding proof of Theorem 2.1. This completes the proof.
¤
Remark 2.4. Suppose that a mapping f : X → Y with f (0) = 0 satisfies the functional inequality (2.1) for which the perturbing function ϕ : X l+n → R+ satisfies ∞ X
µ
¶
mn i l i )ϕ ( ) (x1 , · · · , xl , y1 , · · · , yn ) < ∞ ( l i=0 mn
1242
I. CHANG, M. ESHAGHI GORDJI AND H. KIM
6
for all x1 , · · · , xl , y1 , · · · , yn ∈ X instead of the condition (2.8). Then it follows from the similar argument to Theorem 2.3 that there exists a unique additive mapping h2 : X → Y , l k ) f (( mn )k x), (x ∈ X) such that defined as h2 (x) = limk→∞ ( mn l
µ
∞ X
mn i+1 l i+1 ) Ψ ( ) x kf (x) − h2 (x)k ≤ ( l i=0 mn
¶
for all x ∈ X, where Ψ is defined as in Theorem 2.1. Corollary 2.5. Let 0 < p 6= 1,l 6= mn and θ > 0. If a mapping f : X → Y with f (0) = 0 satisfies the following functional inequality ° ° °X ¶° µX ¶ µ Pl n l n n X X X ° ° l ° ° i=1 xi p p ° ° ° ° + yj ° + θ kxi k + kyj k f (xi ) + m f (yj )° ≤ °mf ° i=1
m
j=1
j=1
i=1
j=1
for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h : X → Y , defined as (
h(x) =
l k )k f (( mn ) x), (x ∈ X), if l < mn, p > 1, (or l > mn, 0 < p < 1); limk→∞ ( mn l l k k ) f (( mn ) x), (x ∈ X), if l < mn, 0 < p < 1, (or l > mn, p > 1), limk→∞ ( mn l
such that µ l 2d 2 e+2d n e (mn)p−1 2 +1+ p−1 −lp−1 (mn) e d 2l e+md n 2 µ l kf (x)−h(x)k ≤ 2d 2 e+2d n e (mn)p−1 2 +1+ n l p−1 p−1 l −(mn) e+md e d 2 2
¶ 1 ( l )p−1 m mn
θkxkp , if l < mn, p > 1, (or l > mn, 0 < p < 1);
¶ 1 ( l )p−1 m mn
θkxkp , if l < mn, 0 < p < 1, (or l > mn, p > 1),
for all x ∈ X. Corollary 2.6. Let l 6= mn and θ > 0. If a mapping f : X → Y with f (0) = 0 satisfies the following functional inequality °X ° ° ¶° µ Pl n n X X ° ° l ° ° i=1 xi ° ° ° ° + y f (x ) + m f (y ) ≤ mf j °+θ i j ° ° ° i=1
m
j=1
j=1
for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h : X → Y , defined as
(
h(x) =
l k )k f (( mn ) x), (x ∈ X), if l > mn; limk→∞ ( mn l mn k l k limk→∞ ( mn ) f (( l ) x), (x ∈ X), if l < mn,
such that
(
kf (x) − h(x)k ≤ for all x ∈ X, where K := d 2l e + md n2 e.
l (1 l−mn K l (1 mn−l K
+ 1l )θ, if l > mn; + 1l )θ, if l < mn,
1243
CAUCHY-JENSEN FUNCTIONAL INEQUALITY
7
3. Stability of (1.1) in non-Archimedean Banach spaces We recall that a field K, equipped with a function (non-Archimedean absolute value, valuation) |·| from K into [0, ∞), is called a non-Archimedean field if the function |·| : K → [0, ∞) satisfies the following conditions: (1) |r| = 0 if and only if r = 0; (2) |rs| = |r||s|; (3) the strong triangle inequality, namely, |r + s| ≤ max{|r|, |s|} for all r, s ∈ K. Clearly, |1| = 1 = | − 1| and |n| ≤ 1 for all nonzero integer n. Let Y be a vector space over the non-Archimedean field K with a non-trivial nonArchimedean valuation | · |. A function k · k : Y → [0, ∞) is called a non-Archimedean norm (valuation) if it satisfies the following conditions: (1) kxk = 0 if and only if x = 0; (2) krxk = |r|kxk for all x ∈ Y and all r ∈ K; (3) the strong triangle inequality, namely, kx + yk ≤ max{kxk, kyk} for all x, y ∈ Y. In this case, the pair (Y, k · k) is called a non-Archimedean space. By a complete nonArchimedean space we mean one in which every Cauchy sequence is convergent. It follows from the strong triangle inequality that kxn − xm k ≤ max{kxj+1 − xj k : m ≤ j < n − 1} for all xn , xm ∈ Y and all m, n ∈ N with n > m. Therefore, a sequence {xn } is a Cauchy sequence in non-Archimedean space (Y, k · k) if and only if the sequence {xn+1 − xn } converges to zero in the space (Y, k · k). In this section, let X be a non-Archimedean normed space and Y a complete non-Archimedean space. Now, we will investigate the generalized Hyers–Ulam stability problem for the functional inequality (1.1) with nonArchimedean valuations |l + mn| > |m| in a complete non-Archimedean space Y . Theorem 3.1. If a mapping f : X → Y with f (0) = 0 satisfies the functional inequality (2.1) for which there exists a constant L with 0 < L < 1 for which the perturbing function ϕ : X l+n → R+ satisfies ¯ ¯ µ ¶ ¯ l ¯ l ¯ϕ(x1 , · · · , xl , y1 , · · · , yn ) ¯ (x1 , · · · , xl , y1 , · · · , yn ) ≤ L · ¯ ϕ (3.1) mn mn ¯ for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h1 : X → Y which satisfies the inequality (1.1) and the inequality kf (x) − h1 (x)k ≤ Ψ(x)
(3.2)
1244
8
I. CHANG, M. ESHAGHI GORDJI AND H. KIM
for all x ∈ X, where | · | is a non-Archimedean valuation and ½ 1 ϕ(x, −x, · · · , x, −x, 0, · · · , 0, x, −x, · · · , x, −x), Ψ(x) := max {z } {z } | |K| | n 2d 2l e
2d 2 e
¾
n l K := d e + md e 2 2
l 1 l ϕ(−x, · · · , −x, x, · · · , x) , {z } mn |l| | mn } {z | l n
Proof. Associating (2.4) with (2.5) in the complete non-Archimedean space Y yields l mn f( x)k ≤ Ψ(x) kf (x) − (3.3) l mn for all x ∈ X. It follows from the inequality (3.3) that ° µ ¶ µ ¶° ° mn k l k mn k+j l k+j °° °( ) f ( ) x − ( ) f ( ) x ° ° l mn l mn ¯i µ ¶ ¾ ½¯ ¯ mn ¯ l i ¯ ¯ Ψ ( )x :k ≤i≤k+j−1 ≤ max ¯ l ¯ mn ½ ¾ ≤ max Li Ψ(x) : k ≤ i ≤ k + j − 1 = Lk Ψ(x), µ
½
which tends to zero as k → ∞. Hence the sequence
)k f ( mn l
¶¾
l k ( mn ) x
is Cauchy for all
x ∈ X, and so one can define a mapping h1 : X → Y by µ ¶ l k mn k ) f ( ) x , x ∈ X. h1 (x) = lim ( k→∞ l mn The remaining proof is similar to the corresponding proof of Theorem 2.1. This completes the proof. ¤ Theorem 3.2. If a mapping f : X → Y with f (0) = 0 satisfies the functional inequality (2.1) and there exists a constant L with 0 < L < 1 for which the perturbing function ϕ : X l+n → R+ satisfies ¯ ¯ µ ¶ ¯ mn ¯ mn ¯ϕ(x1 , · · · , xl , y1 , · · · , yn ) ¯ (x1 , · · · , xl , y1 , · · · , yn ) ≤ L · ¯ ϕ (3.4) l l ¯ for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h2 : X → Y which satisfies the inequality (1.1) and the inequality kf (x) − h2 (x)k ≤ LΨ(x) for all x ∈ X, where Ψ is given as in Theorem 3.1. Proof. It follows from the inequality (2.6) that ° µ ¶ µ ¶° ° l k mn k l k+j mn k+j °° °( ) f ( ) x − ( ) f ( ) x ° ° mn l mn l ¯i+1 µ ¶ ¾ ½¯ ¯ l ¯ mn i+1 ¯ ¯ Ψ ( ) x :k ≤i≤k+j−1 ≤ max ¯ mn ¯ l ½ ¾ ≤ max Li+1 Ψ(x) : k ≤ i ≤ k + j − 1 = Lk+1 Ψ(x),
(3.5)
1245
CAUCHY-JENSEN FUNCTIONAL INEQUALITY
9
which tends to zero as k → ∞. The remaining assertion is similar to that of Theorem 3.1. This completes the proof. ¤ ¯
¯
l ¯ ¯ 6= 1 and θ > 0. If a mapping f : X → Y with f (0) = 0 Corollary 3.3. Let p 6= 1,¯¯ mn
satisfies the following functional inequality °X ° ° ¶ ¶° µX µ Pl n l n n X X X ° ° l ° ° i=1 xi p p ° ° ° ° + y + θ kx k + ky k f (x ) + m f (y ) ≤ mf j ° i j i j ° ° ° i=1
j=1
m
j=1
i=1
j=1
for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h : X → Y , defined as ¯ ¯ ¯ l ¯ l k k ) f (( ) x), (x ∈ X), if ¯ ¯< limk→∞ ( mn l mn mn ¯ ¯ ¯ l ¯ ¯ (or¯ ¯ mn ¯ h(x) = ¯ ¯ mn mn l k k limk→∞ ( mn ) f (( l ) x), (x ∈ X), if ¯ l ¯ ¯ 1, > 1 & 0 < p < 1); 1 & p > 1, > 1 & 0 < p < 1),
such that ¾ ½ l ¯ ¯ l p l+n| mn e | 2d 2 e+2d n ¯ l ¯ p 2 θkxk , if ¯ ¯< max |d l e+md n e| , |l| mn 2 2 ¯ ¯ ¯ l ¯ ¯ (or ¯ mn ¾¯ ½ l ¯p−1 ¯ ¯ kf (x)−h(x)k ≤ l p n l+n| mn | 2d e+2d e ¯ mn ¯ ¯ ¯ ¯ l ¯ θkxkp , if ¯ mn ¯< max |d l 2e+md n2 e| , |l| l 2 2 ¯ ¯ ¯ (or ¯¯ mn ¯ l
1 & p > 1, > 1 & 0 < p < 1); 1 & p > 1, > 1 & 0 < p < 1),
for all x ∈ X. P
P
Corollary ¯3.4. Let ri , si be positive reals with li=1 ri := r, nj=1 si := s and r + s 6= 1, ¯ l ¯ ¯ 6= 1 and θ > 0. If a mapping f : X → Y with f (0) = 0 satisfies the following and let ¯¯ mn functional inequality °X ° ° ¶° µY ¶ µ Pl n l n n X Y X ° ° l ° ° i=1 xi ri sj ° ° ° ° + y + θ kx k ky k f (x ) + m f (y ) ≤ mf j ° i j i j ° ° ° i=1
j=1
m
j=1
i=1
j=1
for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h : X → Y , defined as ¯ ¯ ¯ l ¯ l k mn k ) f (( ) x), (x ∈ X), if ¯ ¯< lim ( k→∞ l mn mn ¯ ¯ ¯ l ¯ ¯ (or¯ ¯ mn ¯ h(x) = ¯ ¯ mn mn l k k limk→∞ ( mn ) f (( l ) x), (x ∈ X), if ¯ l ¯ ¯ 1, > 1 & 0 < r + s < 1); 1 & r + s > 1, > 1 & 0 < r + s < 1),
1246
I. CHANG, M. ESHAGHI GORDJI AND H. KIM
10
such that
¾ ½ ¯ ¯ l s e) | mn | Il+n (2d 2l e+2d n ¯ l ¯ r+s 2 , θkxk , if ¯ ¯ < 1& r + s > 1, max |l| mn e| |d 2l e+md n 2 ¯ ¯ ¯ l ¯ ¯ > 1&0 < r + s < 1); (or ¯ mn ¾¯ ½ ¯r+s−1 ¯ ¯ kf (x)−h(x)k ≤ l s n l Il+n (2d 2 e+2d 2 e) | mn | ¯ ¯ ¯ ¯ , |l| ¯ mn ¯ θkxkp , if ¯ mn ¯ < 1& r + s > 1, max n l l l e+md e| |d 2 2 ¯ ¯ ¯ ¯ ¯ > 1&0 < r + s < 1), (or ¯ mn l
for all x ∈ X, where
(
Ic (x) :=
0, if x < c; 1, if x ≥ c.
We remark that Il+n (2d 2l e + 2d n2 e) = 0 if l or n is odd, and Il+n (2d 2l e + 2d n2 e) = 1 if both l and n are even. ¯ ¯
¯ ¯
l ¯ 6= 1 and θ > 0. If a mapping f : X → Y with f (0) = 0 satisfies Corollary 3.5. Let ¯ mn the following functional inequality
° l ° ° ¶° µ Pl n n X X ° °X ° ° i=1 xi ° ° ° ° + y f (x ) + m f (y ) ≤ mf j °+θ i j ° ° ° i=1
m
j=1
j=1
for all x1 , · · · , xl , y1 , · · · , yn ∈ X, then there exists a unique additive mapping h : X → Y , defined as
(
h(x) = such that
l k )k f (( mn ) x), (x ∈ X), if | mn | < 1; limk→∞ ( mn l l mn k l l k | < 1, limk→∞ ( mn ) f (( l ) x), (x ∈ X), if | mn
½ 1 , max e| |d 2l e+md n 2 ½ kf (x) − h(x)k ≤ 1 max n , l |d 2 e+md 2 e|
¾ 1 |l| 1 |l|
θ,
¾
| < 1; if | mn l
l l | mn |θ, if | mn | < 1,
for all x ∈ X. We remark that stability results of the functional inequality (1.1) in non-Archimedean Banach spaces are very different from those of the inequality (1.1) in Banach spaces. References [1] T. Aoki, On the stability of the linear transformation in Banach spaces, J. Math. Soc. Japan, 2 (1950), 64–66. [2] Y. Benyamini and J. Lindenstrauss, Geometric Nonlinear Functional Analysis, vol. 1, Colloq. Publ. vol. 48, Amer. Math. Soc., Providence, RI, 2000. [3] D.G. Bourgin, Classes of transformations and bordering transformations, Bull. Amer. Math. Soc. 57(1951), 223-237. [4] Y. Cho, C. Park and R. Saadati, Functional inequalities in non-Archimedean Banach spaces, Appl. Math. Lett. 23(2010), 1238-1242. [5] Z. Gajda, On the stability of additive mappings, Intern. J. Math. Math. Sci. 14 (1991), 431–434. [6] P. Gˇavruta, A generalization of the Hyers–Ulam–Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184 (1994), 431–436. [7] Z-X. Gao, H-X. Cao, W-T. Zheng and Lu Xu. Generalized Hyers–Ulam–Rassias stability of functional inequalities and functional equations, J. Math. Inequal. 3(1)(2009), 63-77.
1247
CAUCHY-JENSEN FUNCTIONAL INEQUALITY
11
[8] D.H. Hyers, On the stability of the linear functional equation, Proc. Nat. Acad. Sci. U.S.A. 27 (1941), 222–224. [9] S. Jung, On the Hyers–Ulam–Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 204 (1996), 221–226. [10] H. Kim, S. Kang, and I. Chang, On Functional Inequalities Originating from Module Jordan Left Derivations, Journal of Inequalities and Applications, Volume 2008 (2008), Article ID 278505, 9 pages. [11] J. Lee, C. Park, and D. Shin On the Stability of Generalized Additive Functional Inequalities in Banach Spaces, Journal of Inequalities and Applications, Volume 2008 (2008), Article ID 210626, 13 pages. [12] A. Najati and M.B. Moghimi, Stability of a functional equation deriving from quadratic and additive function in quasi-Banach spaces, J. Math. Anal. Appl. 337 (2008), 399–415. [13] C. Park, Fuzzy Stability of Additive Functional Inequalities with the Fixed Point Alternative, Journal of Inequalities and Applications, Volume 2009 (2009), Article ID 410576, 17 pages. [14] C. Park, J. An, and F. Moradlou, Additive Functional Inequalities in Banach Modules, Journal of Inequalities and Applications, Volume 2008 (2008), Article ID 592504, 10 pages. [15] J.M. Rassias, On approximation of approximately linear mappings by linear mappings, J. Funct. Anal. 46(1982), 126-130. [16] J.M. Rassias, On approximation of approximately linear mappings by linear mappings, Bull. Sci. Math. 108(1984), 445-446. [17] Th.M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297–300. [18] Th.M. Rassias, The stability of mappings and related topics, In ‘Report on the 27th ISFE’, Aequ. Math. 39 (1990), 292–293. ˇ [19] Th.M. Rassias and P. Semrl, On the behaviour of mappings which do not satisfy Hyers–Ulam– Rassias stability, Proc. Amer. Math. Soc. 114 (1992), 989–993. [20] S. Rolewicz, Metric Linear Spaces, PWN-Polish Sci. Publ./Reidel, Warszawa/Dordrecht, 1984. [21] S. M. Ulam, A Collection of the Mathematical Problems, Interscience Publ. New York, 1960. (Ick-Soon Chang) Department of Mathematics, Mokwon University, Mokwon Gil 21, Seogu, Daejeon, 302-318, Korea E-mail address: [email protected] (M. Eshaghi Gordji) Department of Mathematics, Semnan University, P. O. Box 35195-363, Semnan, Iran E-mail address: [email protected] (Hark-Mahn Kim) Department of Mathematics, Chungnam National University, 79 Daehangno, Yuseong-gu, Daejeon 305-764, Korea E-mail address: [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.7, 1248-1257 , 2012, COPYRIGHT 2012 EUDOXUS 1248 PRESS, LLC
SOME PROPERTIES OF CERTAIN CLASS OF MULTIVALENT FUNCTIONS WITH NEGATIVE COEFFICIENTS ˘ ˘ ˘ ADRIANA CATAS ¸ AND CALIN DUBAU Abstract. The aim of this paper is to derive several interesting propern ties of a new class denoted by Wp,λ (α, γ) consisting of analytic functions with negative coefficients for which coefficient inequalities, distortion theorems, closure theorems are determined. Furthermore, integral operators and modified Hadamard products of several functions belonging to the n class Wp,λ (α, γ) are studied. The results are obtained using a generalized S˘ al˘ agean operator and they are improvement of a known results.
AMS Subject Classifications: 30C45. Key Words: analytic functions, generalized S˘al˘agean operator, modified Hadamard product, extreme points, negative coefficient. 1. Introduction and preliminaries Let Vp denote the class of functions of the form (1)
f (z) = z p +
∞ ∑
ap+j z p+j ,
(p ∈ N = {1, 2, . . . })
j=1
which are analytic and p-valent on the unit disc U = {z ∈ C : |z| < 1}. We also denote by Wp the subclass of Vp containing functions f which can be expressed in the form (2)
f (z) = z − p
∞ ∑
ap+j z p+j
j=1
with ap+j ≥ 0 for all p ∈ N. The differential operator Dλn is defined by (3) (4) (5)
Dλ0 f (z) = f (z) Dλ1 f (z) = (1 − λ)f (z) + λzf ′ (z) = Dλ , Dλn f (z) = Dλ (Dλn−1 f (z)). 1
λ>0
CATAS, DUBAU: ABOUT MULTIVALENT FUNCTIONS...
This operator was introduced by Al-Oboudi [1] and when λ = 1 we get the S˘al˘agean differential operator [3]. It can be easily seen that for a function f ∈ Wp one obtains (6)
Dλn f (z)
= (1 − λ + λp) z − n p
∞ ∑
[1 − λ + λ(p + j)]n ap+j z p+j .
j=1
2. Coefficient estimates Now we propose n (α, γ), Definition 1. We say that a function f (z) ∈ Vp is in the class Vp,λ p ∈ N∗ , n ∈ N, α ∈ [0, p), γ ∈ [−1, 1), λ > 0 if Dλn+1 f (z) −p n D f (z) λ (7) < 1. n+1 Dλ f (z) Dn f (z) − γp − (1 − γ)α λ Let n n Wp,λ (α, γ) = Vp,λ (α, γ) ∩ Wp .
(8)
n (α, γ) has been studied by G.S. S˘ Remark 1. The class Wp,1 al˘agean and ∗ 0 F.I. Stan [4] and the class Wp (α, γ) := Wp,1 (α, γ) has been studied by D.Z. Pashkouleva and K.V. Vasilev [2]. Theorem 1. Let p ∈ N, n ∈ N0 = N∪{0}, γ ∈ [−1, 1), λ ≥ 1 and α ∈ [0, p). n (α, γ) if and only if Then a function f is in the class Wp,λ ∞ ∑
(9)
cp,j (n, λ)[2(p − 1)(λ − 1) + 2λj + (1 − γ)(p − α)]ap+j ≤
j=1
≤ (1 − λ + λp)n (1 − γ)(p − λ) and cp,j (n, λ) = [1 + (p + j − 1)λ]n .
(10)
Proof. Assume that the inequality (9) holds and let |z| = 1. Then we have |Dλn+1 f (z) − pDλn f (z)| − |Dλn+1 f (z) − [γp + (1 − γ)α]Dλn f (z)| = = |(1 − λ + λp)n (λ − 1)(p − 1)z p −
∞ ∑
cp,j (n, λ)[1 − p + λ(p + j − 1)]ap+j z p+j |−
j=1
−
∞ ∑
−|(1 − λ + λp)n [(p − 1)(λ − 1) + (1 − γ)(p − α)]z p − cp,j (n, λ)[(p − 1)(λ − 1) + λj + (1 − γ)(p − α)]ap+j z p+j | ≤
j=1 2
1249
CATAS, DUBAU: ABOUT MULTIVALENT FUNCTIONS...
≤
∞ ∑
1250
cp,j (n, λ)[2(p − 1)(λ − 1) + 2λj + (1 − γ)(p − α)]ap+j −
j=1
−(1 − λ + λp)n (1 − γ)(p − α) ≤ 0. Consequently, by the maximum modulus theorem, the function f (z) is in n (α, γ). the class Wp,λ n (α, γ) then from (7) we have Conversely, if f ∈ Wp,λ Dλn+1 f (z) − pDλn f (z) n+1 = Dλ f (z) − [γp + (1 − γ)α]Dλn f (z) ∞ ∑ (1 − λ + λp)n (λ − 1)(p − 1)z p − cp,j (n, λ)[1 − p + λ(p + j − 1)]ap+j z p+j j=1 = < 1. ∞ ∑ p+j n p (1 − λ + λp) bp (α, γ, λ)z − cp,j (n, λ)[bp (α, γ, λ) + λj]ap+j z j=1 where bp (α, γ, λ) = (p − 1)(λ − 1) + (1 − γ)(p − α) Since |Re(z)| ≤ |z|, z ∈ C, we obtain ∞ ∑ p+j n (λ − 1)(p − 1)z p − (1 − λ + λp) c (n, λ)[1 − p + λ(p + j − 1)]a z p,j p+j Re
(1 − λ +
λp)n b
p
(α, γ, λ)z p
−
j=1 ∞ ∑
cp,j (n, λ)[bp (α, γ, λ) + λj]ap+j z
p+j
j=1
Choosing z on the real axis such that through real values we get (1 − λ + λp)n (λ − 1)(p − 1) −
∞ ∑
Dλn+1 f (z) is real and letting z → 1− Dλn f (z)
cp,j (n, λ)[1 − p + λ(p + j − 1)]ap+j ≤
j=1
≤ (1 − λ + λp)n [(p − 1)(λ − 1) + (1 − γ)(p − α)]− −
∞ ∑
cp,j (n, λ)[(p − 1)(λ − 1) + λj + (1 − γ)(p − α)]ap+j
j=1
and this inequality gives the required condition. This completes the proof of Theorem 1. 3
< 1.
CATAS, DUBAU: ABOUT MULTIVALENT FUNCTIONS...
1251
Remark 2. The result obtained in Theorem 1 is sharp as can be seen by (11) (1 − λ + λp)n (1 − γ)(p − α) z p+j , j ∈ N. fj (z) = z p − [2(p − 1)(λ − 1) + 2λj + (1 − γ)(p − α)]cp,j (n, λ) Each function on the form (11) is an extremal function for the theorem. Remark 3. Theorem 1 improves the result obtained in [[4], Theorem 1]. n (α, γ). Then Corollary 1. Let f (z) ∈ Wp,λ (12)
ap+j ≤
(1 − λ + λp)n (1 − γ)(p − α) , cp,j (n, λ)[2(p − 1)(λ − 1) + 2λj + (1 − γ)(p − α)]
j ∈ N∗ .
The equality in (12) is attained for the function fj (z) given in (11). 3. Growth and distortion theorems Next we prove the following growth and distortion properties for the class n (α, γ). Wp,λ
n (α, γ). Then we have Theorem 2. Let the function f (z) be in the class Wp,λ
|Dλk f (z)| ≤ (1 − λ + λp)k |z|p +
(13) +
(1 − λ + λp)n (1 − γ)(p − α) |z|p+1 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
and |Dλk f (z)| ≥ (1 − λ + λp)k |z|p −
(14)
(1 − λ + λp)n (1 − γ)(p − α) |z|p+1 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α) for z ∈ U , where 0 ≤ k ≤ n. The equalities (13) and (14) are attained for the function f (z) defined by −
(15) Dk f (z) = (1 − λ + λp)k z p −
(1 − λ + λp)n (1 − γ)(p − α) z p+1 . 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
Proof. From Theorem 1, for 0 ≤ k ≤ n, one obtains (16)
∞ ∑ j=1
cp,j (k, λ)ap+j ≤
(1 − λ + λp)n (1 − γ)(p − α) . 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
Furthermore, we note from (6) that |Dλk f (z)| ≤ (1 − λ + λp)k |z|p +
∞ ∑ j=1 4
[1 + λ(p + j − 1)]k ap+j |z|p+j
CATAS, DUBAU: ABOUT MULTIVALENT FUNCTIONS...
1252
and |Dλk f (z)| ≥ (1 − λ + λp)k |z|p −
∞ ∑
[1 + λ(p + j − 1)]k ap+j |z|p+j .
j=1
Using the fact that |z| < 1 the assertions of (13) and (14) of Theorem 2 follow immediately. n (α, γ) then Corollary 2. If f (z) belongs to the class Wp,λ (17)
|f (z)| ≤ |z|p +
(1 − λ + λp)n (1 − γ)(p − α) |z|p+1 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
|f (z)| ≥ |z|p −
(1 − λ + λp)n (1 − γ)(p − α) |z|p+1 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
and (18)
for |z| < 1. The equalities in (17) and (18) hold for the function (19)
f (z) = z p −
(1 − λ + λp)n (1 − γ)(p − α) z p+1 , 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
z ∈ U.
n (α, γ) then Corollary 3. If f belongs to the class Wp,λ
(20)
|f ′ (z)| ≤ p|z|p−1 +
(p + 1)(1 − λ + λp)n (1 − γ)(p − α) |z|p 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
|f ′ (z)| ≥ p|z|p−1 −
(p + 1)(1 − λ + λp)n (1 − γ)(p − α) |z|p . 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α)
and (21)
The equalities in (20) and (21) hold for the function f (z) given by (19). Corollary 4. The unit disc is mapped into a domain which contains ) ( (1 − λ + λp)n (1 − γ)(p − α) U 0, 1 + 2(p − 1)(λ − 1) + 2λ + (1 − γ)(p − α) n (α, γ). The sharpness is assured by the function (19). by any f ∈ Wp,λ
4. Closure theorems Let the functions fk (z) be defined, for k = 1, 2, . . . , m by (22)
fk (z) = z p −
∞ ∑
ap+j,k z p+j ,
ap+j,k ≥ 0.
j=1
We shall prove the following results for the closure of functions in the class n (α, γ). Wp,λ 5
CATAS, DUBAU: ABOUT MULTIVALENT FUNCTIONS...
1253
Theorem 3. Let the functions $f_k(z)$ defined by (22) be in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$ for every $k = 1, 2, \ldots, m$. Then the function $h(z)$ defined by
(23) $h(z) = \sum_{k=1}^{m} c_k f_k(z), \qquad c_k \ge 0,$
is also in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$, where
(24) $\sum_{k=1}^{m} c_k = 1.$

Proof. According to the definition of the function $h(z)$, we can write
(25) $h(z) = z^p - \sum_{j=1}^{\infty} \left( \sum_{k=1}^{m} c_k a_{p+j,k} \right) z^{p+j}.$
Further, since the $f_k$ are in $W_{p,\lambda}^{n}(\alpha,\gamma)$ for every $k = 1, 2, \ldots, m$, we get from Theorem 1
(26) $\sum_{j=1}^{\infty} c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\, a_{p+j,k} \le (1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha).$
Hence we can see that
$$\sum_{j=1}^{\infty} c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)] \left( \sum_{k=1}^{m} c_k a_{p+j,k} \right)$$
$$= \sum_{k=1}^{m} c_k \left( \sum_{j=1}^{\infty} c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\, a_{p+j,k} \right)$$
$$\le \left( \sum_{k=1}^{m} c_k \right) (1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha) = (1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha),$$
which implies that $h(z)$ belongs to the class $W_{p,\lambda}^{n}(\alpha,\gamma)$.

Theorem 4. The class $W_{p,\lambda}^{n}(\alpha,\gamma)$ is closed under convex linear combinations.

Proof. Let the functions $f_k(z)$ $(k = 1, 2)$ defined by (22) be in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$. It is sufficient to show that the function $h(z)$ defined by
(27) $h(z) = \eta f_1(z) + (1-\eta) f_2(z), \qquad 0 \le \eta \le 1,$
is in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$. But, taking $m = 2$, $c_1 = \eta$ and $c_2 = 1-\eta$ in Theorem 3, we get the required result.
Remark 4. As a consequence of Theorem 4, there exist extreme points of the class $W_{p,\lambda}^{n}(\alpha,\gamma)$.

Theorem 5. Let $f_j$ be defined as in (11) for $j \in \mathbb{N}$ and $z \in U$, and let
(28) $f_0(z) = z^p.$
Then $f(z)$ is in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$ if and only if it can be expressed as
(29) $f(z) = \sum_{j=0}^{\infty} \mu_j f_j(z),$
where $\mu_j \ge 0$ for $j \in \mathbb{N}_0 = \mathbb{N} \cup \{0\}$ and $\sum_{j=0}^{\infty} \mu_j = 1.$

Proof. Suppose that $f$ can be expressed as in (29), namely
$$f(z) = z^p - \sum_{j=1}^{\infty} \mu_j\, \dfrac{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}{[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\, c_{p,j}(n,\lambda)}\, z^{p+j}.$$
Then it follows that
$$\sum_{j=1}^{\infty} c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)] \cdot \dfrac{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}{[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\, c_{p,j}(n,\lambda)}\, \mu_j \le (1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha),$$
which is equivalent to $\sum_{j=1}^{\infty} \mu_j \le 1$, and from Theorem 1 the function $f(z)$ belongs to the class $W_{p,\lambda}^{n}(\alpha,\gamma)$.
Conversely, assume that the function $f(z)$ belongs to the class $W_{p,\lambda}^{n}(\alpha,\gamma)$. Then
$$\sum_{j=1}^{\infty} \dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}\, a_{p+j} \le 1.$$
We denote
$$\mu_j = \dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}\, a_{p+j}$$
for $j \in \mathbb{N}$ and $\mu_0 = 1 - \sum_{j=1}^{\infty} \mu_j$. We notice that $f(z)$ can be expressed in the form (29). This completes the proof of Theorem 5.

Corollary 5. The extreme points of the class $W_{p,\lambda}^{n}(\alpha,\gamma)$ are the functions $f_j(z)$, $j \in \mathbb{N}$, given by Theorem 5.
5. Modified Hadamard product

Let the functions $f_k$ $(k = 1, 2)$ be defined by (22). The modified Hadamard product of $f_1(z)$ and $f_2(z)$ is defined by
(30) $(f_1 * f_2)(z) = z^p - \sum_{j=1}^{\infty} a_{p+j,1}\, a_{p+j,2}\, z^{p+j}.$

Theorem 6. Let the functions $f_k(z)$ $(k = 1, 2)$ defined by (22) be in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$. Then the modified Hadamard product belongs to the class $W_{p,\lambda}^{n}(\beta,\gamma)$, where
(31) $\beta = \beta(\alpha,\gamma,\lambda,n,p) = p - \dfrac{2(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)^2\,[(p-1)(\gamma-1)+\lambda]}{(1+p\lambda)^n [d_p(\alpha,\gamma,\lambda)+2\lambda]^2 - (1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2}$
and $d_p(\alpha,\gamma,\lambda) = 2(p-1)(\lambda-1) + (1-\gamma)(p-\alpha)$. The result is sharp.

Proof. Employing the technique used earlier by Schild and Silverman [5], we need to find the largest $\beta = \beta(\alpha,\gamma,\lambda,n,p)$ such that
(32) $\sum_{j=1}^{\infty} \dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\beta)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\beta)}\, a_{p+j,1}\, a_{p+j,2} \le 1.$
Since
(33) $\sum_{j=1}^{\infty} \dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}\, a_{p+j,i} \le 1, \qquad i = 1, 2,$
by the Cauchy-Schwarz inequality we have
(34) $\sum_{j=1}^{\infty} \dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}\, \sqrt{a_{p+j,1}\, a_{p+j,2}} \le 1.$
It is sufficient to show that
$$\dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\beta)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\beta)}\, a_{p+j,1}\, a_{p+j,2} \le \dfrac{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]}{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}\, \sqrt{a_{p+j,1}\, a_{p+j,2}},$$
thus
$$\sqrt{a_{p+j,1}\, a_{p+j,2}} \le \dfrac{[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\,(p-\beta)}{[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\beta)]\,(p-\alpha)}, \qquad j \in \mathbb{N}.$$
We note that
$$\sqrt{a_{p+j,1}\, a_{p+j,2}} \le \dfrac{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]}.$$
Consequently, we only need to prove that
$$\dfrac{(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)}{c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]} \le \dfrac{[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\,(p-\beta)}{[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\beta)]\,(p-\alpha)},$$
which is equivalent to
$$\beta \le p - \dfrac{2(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)^2\,[(p-1)(\lambda-1)+\lambda j]}{c_{p,j}(n,\lambda)\,[d_p(\alpha,\gamma,\lambda)+2\lambda j]^2 - (1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2}.$$
We denote
$$E(\alpha,\gamma,\lambda,n,p;j) = p - \dfrac{2(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)^2\,[(p-1)(\lambda-1)+\lambda j]}{c_{p,j}(n,\lambda)\,[d_p(\alpha,\gamma,\lambda)+2\lambda j]^2 - (1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2}.$$
But
$$E(\alpha,\gamma,\lambda,n,p;j+1) - E(\alpha,\gamma,\lambda,n,p;j) = \dfrac{2(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)^2\,\big[B(\alpha,\gamma,\lambda,n,p;j)+\lambda(1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2\big]}{A(\alpha,\gamma,\lambda,n,p;j)},$$
where
$$A(\alpha,\gamma,\lambda,n,p;j) = \{c_{p,j}(n,\lambda)[d_p(\alpha,\gamma,\lambda)+2\lambda j]^2 - (1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2\} \cdot \{c_{p,j+1}(n,\lambda)[d_p(\alpha,\gamma,\lambda)+2\lambda(j+1)]^2 - (1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2\} > 0$$
and
$$B(\alpha,\gamma,\lambda,n,p;j) = c_{p,j+1}(n,\lambda)\,[(p-1)(\lambda-1)+\lambda j]\,[d_p(\alpha,\gamma,\lambda)+2\lambda(j+1)]^2 - c_{p,j}(n,\lambda)\,[(p-1)(\lambda-1)+\lambda(j+1)]\,[d_p(\alpha,\gamma,\lambda)+2\lambda j]^2 > 0.$$
Thus, we deduce that $E(\alpha,\gamma,\lambda,n,p;j)$ is an increasing function of $j \in \mathbb{N}$. Finally, we obtain $\beta \le E(\alpha,\gamma,\lambda,n,p;1)$, which is
$$\beta \le p - \dfrac{2(1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha)^2\,[(p-1)(\gamma-1)+\lambda]}{c_{p,1}(n,\lambda)\,[d_p(\alpha,\gamma,\lambda)+2\lambda]^2 - (1-\lambda+\lambda p)^n (1-\gamma)^2 (p-\alpha)^2}.$$
For the proof of the sharpness we take the functions $f_1$ and $f_2$ of the form (19).
6. Integral property of the class $W_{p,\lambda}^{n}(\alpha,\gamma)$

Theorem 7. Let $f$ be in the class $W_{p,\lambda}^{n}(\alpha,\gamma)$ and let $c$ be a real number such that $c > -1$. Then the integral operator of Libera-Bernardi type
(35) $F(z) = \dfrac{c+p}{z^c} \int_0^z t^{c-1} f(t)\,dt$
also belongs to the class $W_{p,\lambda}^{n}(\alpha,\gamma)$.

Proof. We note that
$$F(z) = (c+p) \int_0^1 \tau^{c-1} f(z\tau)\,d\tau = z^p - \sum_{j=1}^{\infty} \dfrac{c+p}{c+p+j}\, a_{p+j}\, z^{p+j},$$
and it follows that
$$\sum_{j=1}^{\infty} \dfrac{c+p}{c+p+j}\, c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\, a_{p+j} \le \sum_{j=1}^{\infty} c_{p,j}(n,\lambda)\,[2(p-1)(\lambda-1)+2\lambda j+(1-\gamma)(p-\alpha)]\, a_{p+j} \le (1-\lambda+\lambda p)^n (1-\gamma)(p-\alpha).$$
Using Theorem 1, the proof is complete.

Remark 5. The result obtained in Theorem 7 improves the result obtained in [4, Theorem 6].

References
[1] F. Al-Oboudi, On univalent functions defined by a generalized Sălăgean operator, Inter. J. of Math. and Math. Sci. 27 (2004), 1429-1436.
[2] D.Z. Pashkouleva and K.V. Vasilev, On a class of multivalent functions with negative coefficients, Plovdiv Univ. Scient. Works, vol. 34, 3 (2004), 61-67.
[3] G.S. Sălăgean, Subclasses of univalent functions, Lecture Notes in Math., Springer Verlag, 1013 (1983), 362-372.
[4] G.S. Sălăgean and F.I. Stan, On a class of multivalent functions with negative coefficients, International Conference on Complex Analysis and Related Topics, The Xth Romanian-Finnish Seminar, Cluj-Napoca, 14-19 August, 2005.
[5] A. Schild and H. Silverman, Convolutions of univalent functions with negative coefficients, Ann. Univ. Mariae Curie-Sklodowska, Sect. A, 29 (1975), 99-107.
Department of Mathematics, University of Oradea 1 University Street, 410087 Oradea, Romania E-mail: [email protected] Faculty of Environmental Protection, University of Oradea 26, Gen. Magheru Street, Oradea, Romania E-mail: calin [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 1258-1268, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Smooth and optimal shape of aeolian blade profile by splines
Călin Dubău and Adriana Cătaş
University of Oradea, Faculty of Environmental Protection, 26, Gen. Magheru Street, Oradea, Romania, calin [email protected]
and
University of Oradea, Department of Mathematics and Computer Sciences, 1 University Street, 410087, Oradea, Romania, [email protected]
Abstract. A minimal property of cubic splines generated by initial conditions is obtained in order to generate a smooth and optimal aerodynamic shape for the aeolian blade profile.
Key words and phrases: cubic splines, optimal property, error bound. AMS 2000 Subject Classification: 65D05.
1 Introduction
Within aero-electrical aggregates, the wind turbine is the component that converts the kinetic energy of the wind into mechanical energy usable at the turbine shaft, through the interaction between the air current and the moving blade. A wind turbine consists mainly of a rotor fixed on a support shaft, comprising a hub and a moving part made of one or more blades. The blade is the active body of the aeolian turbine and determines the quantity of converted energy. The aerodynamic performances, kinematics and energy curves of aeolian turbines depend on the choice of a certain blade geometry. Wind energy conversion is achieved through the interaction between the air current and the solid surface of the blade. For the design of the blade profile, optimized shapes (aerodynamic profiles) are selected and positioned so that the performances obtained under the conditions proper to the location are optimal. The interaction moment between the moving blade and the fluid current is deduced from the aerodynamic lift and drag forces. The calculation of the blade profile was realized by determining the geometric contour of the aerodynamic profile through an analytical method combining two mathematical functions, a framework function and a gauge function, respectively. The presented model was synthesized by a section-by-section calculation which sets the final shape of the blade, using a computer algorithm and taking into account the behavior of the aerodynamic profile placed in a stream of air. The behavior of the aerodynamic profile depends mainly on the position of the profile with respect to the air current speed, through the value of the angle of incidence.

In order to generate an aerodynamic shape for the aeolian blade profile with minimal initial deviation from the segment line, we can consider cubic splines with partial minimal quadratic oscillation in average on the first subinterval. The notion of quadratic oscillation in average is defined in [3] and [5], and the partial minimal quadratic oscillation in average on the first and on the last two subintervals was obtained in [3] in order to improve Akima's method (see [2]) for Hermite type interpolation.

In the definition of the notion of quadratic oscillation in average one considers, in [5], a partition $\Delta$ of an interval $[a,b]$,
$$\Delta : a = x_0 < x_1 < \ldots < x_{n-1} < x_n = b,$$
a set of values $y_0, y_1, \ldots, y_n \in \mathbb{R}$ and an arbitrary function $f : [a,b] \to \mathbb{R}$, $f \in C[a,b]$, satisfying the interpolation conditions $f(x_i) = y_i$, $\forall i = \overline{0,n}$. Let $L : [a,b] \to \mathbb{R}$ be such that $L_i = L|_{[x_{i-1},x_i]}$, $i = \overline{1,n}$, and
$$L_i(x) = y_{i-1} + \frac{x - x_{i-1}}{x_i - x_{i-1}} \cdot (y_i - y_{i-1}), \qquad x \in [x_{i-1}, x_i],$$
the polygonal line joining the points $(x_i, y_i)$, $i = \overline{0,n}$. We denote by $f_i$ the restriction of $f$ to each interval $[x_{i-1}, x_i]$, $i = \overline{1,n}$, and $h_i = x_i - x_{i-1}$, $i = \overline{1,n}$.

Definition 1.1 [5] The quadratic oscillation of the function $f$ corresponding to the partition $\Delta$ and to the values $y_i$, $i = \overline{0,n}$, is the functional $\rho : C[a,b] \times \mathbb{R}^{n+1} \to \mathbb{R}$,
(1.1) $\rho(f; \Delta, y) = \sqrt{\displaystyle\sum_{i=1}^{n} \int_{x_{i-1}}^{x_i} [f_i(x) - L_i(x)]^2\, dx},$
where $y = (y_0, y_1, \ldots, y_n)$.

Remark 1.1 If there exists a set $K \subset \{1, 2, \ldots, n\}$ such that in (1.1) the sum is
$$\sum_{i \in K} \int_{x_{i-1}}^{x_i} [f_i(x) - L_i(x)]^2\, dx,$$
then the functional is called partial quadratic oscillation in average (see [3], where the set is $K = \{1, n-1, n\}$).

In this paper, we consider the cubic splines generated by initial conditions (see [10]) with minimal partial quadratic oscillation in average corresponding to the set $K = \{1\}$. The paper is organized as follows. In Section 2 we construct the algorithm of cubic splines generated by initial conditions with minimal partial quadratic oscillation in average on the first subinterval. Section 3 is devoted to the error estimation of the interpolation with this spline in the class of continuous functions. A numerical experiment illustrating an application in the construction of the aerodynamic profile of an aeolian blade is presented in Section 4.
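Before constructing the spline, it may help to see how the functional (1.1) can be approximated numerically; the following sketch (ours, with an arbitrary test function and quadrature rule as assumptions) computes both the full and the partial quadratic oscillation in average.

```python
import numpy as np

def quadratic_oscillation(f, x_nodes, K=None, num_pts=200):
    """Approximate (1.1): the L2 distance between f and the polygonal line L
    interpolating (x_i, f(x_i)), summed over the subintervals with indices in K."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    n = len(x_nodes) - 1
    K = range(1, n + 1) if K is None else K      # K = {1} gives the partial version used here
    total = 0.0
    for i in K:
        a, b = x_nodes[i - 1], x_nodes[i]
        t = np.linspace(a, b, num_pts)
        L = f(a) + (t - a) / (b - a) * (f(b) - f(a))   # linear piece L_i on [x_{i-1}, x_i]
        total += np.trapz((f(t) - L) ** 2, t)
    return np.sqrt(total)

# illustrative test function and partition (assumptions, not from the paper)
f = np.sin
x = np.linspace(0.0, np.pi, 6)
print(quadratic_oscillation(f, x))            # full oscillation, K = {1,...,n}
print(quadratic_oscillation(f, x, K=[1]))     # partial oscillation on the first subinterval
```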
2 The cubic spline procedure
The cubic spline generated by initial conditions was introduced by C. Iancu in [10]; it is constructed by integrating on each subinterval $[x_{i-1},x_i]$, $i=\overline{1,n}$, the initial value problem
$$\begin{cases} s_i''(x) = M_{i-1} + \dfrac{x-x_{i-1}}{x_i-x_{i-1}}\,(M_i - M_{i-1}), \\ s_i(x_{i-1}) = y_{i-1}, \\ s_i'(x_{i-1}) = m_{i-1}, \end{cases} \qquad x \in [x_{i-1},x_i],\ i=\overline{1,n}.$$
The expression is
(2.1) $s_i(x) = \dfrac{M_i - M_{i-1}}{6h_i}\,(x-x_{i-1})^3 + \dfrac{M_{i-1}}{2}\,(x-x_{i-1})^2 + m_{i-1}\,(x-x_{i-1}) + y_{i-1},$
and imposing the conditions $s_i(x_i) = y_i$, $i=\overline{1,n}$, we obtain a continuous $S \in C[a,b]$, $S : [a,b] \to \mathbb{R}$, with restrictions $S_i$ to the intervals $[x_{i-1},x_i]$, $i=\overline{1,n}$. Since a cubic spline without deficiency is $S \in C^2[a,b]$, imposing the condition $S \in C^1[a,b]$, that is $S_i'(x_i) = m_i$, $i=\overline{1,n}$, we obtain together with $S_i(x_i) = y_i$, $i=\overline{1,n}$, the relations
(2.2) $M_i = \dfrac{6}{h_i^2}\,(y_i - y_{i-1}) - \dfrac{6 m_{i-1}}{h_i} - 2 M_{i-1}, \qquad m_i = \dfrac{3}{h_i}\,(y_i - y_{i-1}) - 2 m_{i-1} - \dfrac{h_i}{2}\, M_{i-1}, \qquad i=\overline{1,n},$
which uniquely determine the values $m_i, M_i$, $i=\overline{1,n}$, in a recurrent way starting from $y_0, \ldots, y_n$, $m_0$ and $M_0$. So the values $m_0$ and $M_0$ remain free and can be specified in order to obtain a supplementary property of the cubic spline. In [5], these values are chosen such that the spline is the natural cubic spline. In [4], they are chosen such that the cubic spline generated by initial conditions has minimal quadratic oscillation in average.
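The recurrence (2.2) is straightforward to implement once $m_0$ and $M_0$ are chosen; the following sketch (our illustration, with function and variable names of our own) computes the parameters $m_i, M_i$ and evaluates a cubic piece via (2.1).

```python
import numpy as np

def spline_parameters(x, y, m0, M0):
    """Run the recurrence (2.2): from y_0,...,y_n, m_0 and M_0 compute
    the slopes m_i and the moments M_i that determine each cubic piece (2.1)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    m, M = np.empty(n + 1), np.empty(n + 1)
    m[0], M[0] = m0, M0
    for i in range(1, n + 1):
        h = x[i] - x[i - 1]
        M[i] = 6.0 * (y[i] - y[i - 1]) / h**2 - 6.0 * m[i - 1] / h - 2.0 * M[i - 1]
        m[i] = 3.0 * (y[i] - y[i - 1]) / h - 2.0 * m[i - 1] - 0.5 * h * M[i - 1]
    return m, M

def eval_piece(x, y, m, M, i, t):
    """Evaluate the cubic piece s_i of (2.1) at points t in [x_{i-1}, x_i]."""
    h = x[i] - x[i - 1]
    d = t - x[i - 1]
    return (M[i] - M[i - 1]) / (6 * h) * d**3 + M[i - 1] / 2 * d**2 + m[i - 1] * d + y[i - 1]
```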
For the cubic spline generated by boundary conditions, the first and the last moments are computed in [7] such that the corresponding spline has minimal quadratic oscillation in average. Here, we determine $m_0$ and $M_0$ such that the cubic spline $S$ generated by initial conditions has minimal partial quadratic oscillation in average on the first interval $[x_0, x_1]$. So the set in Remark 1.1 is $K = \{1\}$ and the functional is
$$\rho_p(S; \Delta, y) = \sqrt{\int_{x_0}^{x_1} [S_1(x) - L_1(x)]^2\, dx}.$$
Then we can consider the residual
$$R(m_0, M_0) = \int_{x_0}^{x_1} [S_1(x) - L_1(x)]^2\, dx,$$
where $S_1$ is obtained from the interpolation conditions
$$S_1(x_0) = y_0, \qquad S_1(x_1) = y_1, \qquad S_1'(x_0) = m_0, \qquad S_1''(x_0) = M_0,$$
having the expression
(2.3) $S_1(x) = \left[1 - \left(\dfrac{x-x_0}{h_1}\right)^{3}\right] y_0 + \left(\dfrac{x-x_0}{h_1}\right)^{3} y_1 + (x-x_0)\left[1 - \left(\dfrac{x-x_0}{h_1}\right)^{2}\right] m_0 + \dfrac{(x-x_0)^2}{2}\left[1 - \dfrac{x-x_0}{h_1}\right] M_0,$
for $x \in [x_0, x_1]$ and $h_1 = x_1 - x_0$. So the residual is $R : \mathbb{R}^2 \to \mathbb{R}$,
$$R(m_0, M_0) = \int_{x_0}^{x_1} [A(x) y_0 + B(x) y_1 + C(x) m_0 + D(x) M_0 - L_1(x)]^2\, dx,$$
where
$$A(x) = 1 - \left(\dfrac{x-x_0}{h_1}\right)^{3}, \qquad B(x) = \left(\dfrac{x-x_0}{h_1}\right)^{3},$$
$$C(x) = (x-x_0)\left[1 - \left(\dfrac{x-x_0}{h_1}\right)^{2}\right], \qquad D(x) = \dfrac{(x-x_0)^2}{2}\left[1 - \dfrac{x-x_0}{h_1}\right],$$
$$L_1(x) = \dfrac{x-x_0}{h_1}\, y_1 + \dfrac{x_1-x}{h_1}\, y_0.$$
The values $m_0$ and $M_0$ will be determined in order to minimize the residual $R$; consequently, the functional $\rho_p$ will be minimized.
Proposition 2.1 There exist unique values $m_0, M_0 \in \mathbb{R}$ such that
$$R(m_0, M_0) = \min\{ R(m_0, M_0) : (m_0, M_0) \in \mathbb{R}^2 \}.$$
The values are
(2.4) $m_0 = \dfrac{y_1 - y_0}{9 h_1}, \qquad M_0 = \dfrac{44\,(y_1 - y_0)}{9 h_1^2}.$

Proof. Applying the least squares method, the system of normal equations is
$$\begin{cases} \dfrac{\partial R}{\partial m_0} = 0 \\[1mm] \dfrac{\partial R}{\partial M_0} = 0 \end{cases} \;\Longleftrightarrow\; \begin{cases} \left(\displaystyle\int_{x_0}^{x_1} [C(x)]^2\, dx\right) m_0 + \left(\displaystyle\int_{x_0}^{x_1} C(x) D(x)\, dx\right) M_0 = -\displaystyle\int_{x_0}^{x_1} [A(x) y_0 + B(x) y_1 - L_1(x)]\, C(x)\, dx, \\[2mm] \left(\displaystyle\int_{x_0}^{x_1} C(x) D(x)\, dx\right) m_0 + \left(\displaystyle\int_{x_0}^{x_1} [D(x)]^2\, dx\right) M_0 = -\displaystyle\int_{x_0}^{x_1} [A(x) y_0 + B(x) y_1 - L_1(x)]\, D(x)\, dx, \end{cases}$$
and, after elementary calculus,
$$\begin{cases} \dfrac{23}{210}\, h_1^3\, m_0 + \dfrac{11}{840}\, h_1^4\, M_0 = \dfrac{8}{105}\, h_1^2\, (y_1 - y_0), \\[2mm] \dfrac{11}{840}\, h_1^4\, m_0 + \dfrac{1}{420}\, h_1^5\, M_0 = \dfrac{11}{840}\, h_1^3\, (y_1 - y_0). \end{cases}$$
Since the determinant of the system is $\Delta = \dfrac{63}{705600}\, h_1^8 > 0$, there exists a unique solution $(m_0, M_0)$ of this system,
$$m_0 = \dfrac{y_1 - y_0}{9 h_1}, \qquad M_0 = \dfrac{44\,(y_1 - y_0)}{9 h_1^2}.$$
The Hessian of $R$ has the diagonal minors
$$\delta = \det\left( \dfrac{\partial^2 R}{\partial m_0^2}(m_0, M_0) \right) = 2 \int_{x_0}^{x_1} [C(x)]^2\, dx > 0$$
and
$$D = \det\begin{pmatrix} \dfrac{\partial^2 R}{\partial m_0^2}(m_0, M_0) & \dfrac{\partial^2 R}{\partial m_0 \partial M_0}(m_0, M_0) \\[2mm] \dfrac{\partial^2 R}{\partial m_0 \partial M_0}(m_0, M_0) & \dfrac{\partial^2 R}{\partial M_0^2}(m_0, M_0) \end{pmatrix} = 4\Delta > 0.$$
So $(m_0, M_0)$ is the unique point which minimizes $R$.
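As a numerical cross-check of Proposition 2.1 (ours, not part of the paper), one can solve the $2\times 2$ system of normal equations displayed above for sample values of $h_1$ and $y_1 - y_0$ and compare the result with the closed-form values (2.4).

```python
import numpy as np

h1, dy = 0.7, 1.3            # sample values of h_1 and y_1 - y_0 (arbitrary)

# coefficient matrix and right-hand side of the normal equations as printed above
G = np.array([[23/210 * h1**3, 11/840 * h1**4],
              [11/840 * h1**4,  1/420 * h1**5]])
rhs = np.array([8/105 * h1**2 * dy, 11/840 * h1**3 * dy])

m0, M0 = np.linalg.solve(G, rhs)
print(m0, dy / (9 * h1))             # both equal (y_1 - y_0) / (9 h_1)
print(M0, 44 * dy / (9 * h1**2))     # both equal 44 (y_1 - y_0) / (9 h_1^2)
```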
Remark 2.1 Replacing in (2.3) the values $m_0, M_0$ given in (2.4), we get
(2.5) $S_1(x) = [1 - Q(x)]\, y_0 + Q(x)\, y_1, \qquad x \in [x_0, x_1],$
where
$$Q(x) = \frac{1}{9}\left(\frac{x-x_0}{h_1}\right) + \frac{22}{9}\left(\frac{x-x_0}{h_1}\right)^{2} - \frac{14}{9}\left(\frac{x-x_0}{h_1}\right)^{3}.$$
If we introduce the values $m_0, M_0$ in the relation
$$m_1 = \frac{3}{h_1}\,(y_1 - y_0) - 2 m_0 - \frac{h_1}{2}\, M_0,$$
we get $m_1 = \dfrac{y_1 - y_0}{3 h_1}$, the same value as $S_1'(x_1) = \dfrac{y_1 - y_0}{3 h_1}$.

We construct on the interval $[x_1, x_n]$ the complete cubic spline generated by initial conditions, $\overline{S} : [x_1, x_n] \to \mathbb{R}$, $\overline{S} \in C^2[x_1, x_n]$, with the conditions
$$\overline{S}'(x_1) = m_1 = \frac{y_1 - y_0}{3 h_1} \qquad \text{and} \qquad \overline{S}'(x_n) = \frac{y_n - y_{n-1}}{h_n}.$$
Defining $S : [a,b] \to \mathbb{R}$,
(2.6) $S(x) = \begin{cases} S_1(x), & x \in [x_0, x_1], \\ \overline{S}(x), & x \in [x_1, x_n], \end{cases}$
we see that $S \in C^1[a,b]$ and $S$ has minimal partial quadratic oscillation in average on the interval $[x_0, x_1]$. Using the algorithm of the complete cubic spline presented in [5], given by
(2.7) $m_1 = \dfrac{y_1 - y_0}{3 h_1}, \qquad M_1 = \dfrac{\overline{S}'(x_n) - b_n}{a_n}, \qquad \overline{S}'(x_n) = \dfrac{y_n - y_{n-1}}{h_n},$
where $a_n, b_n, c_n, d_n$ are obtained recurrently from
(2.8) $\begin{cases} a_i = -2 a_{i-1} - \dfrac{h_i}{2}\, c_{i-1}, & c_i = -2 c_{i-1} - \dfrac{6}{h_i}\, a_{i-1}, \\[1mm] b_i = \dfrac{3}{h_i}\,(y_i - y_{i-1}) - 2 b_{i-1} - \dfrac{h_i}{2}\, d_{i-1}, & d_i = \dfrac{6}{h_i^2}\,(y_i - y_{i-1}) - 2 d_{i-1} - \dfrac{6}{h_i}\, b_{i-1}, \end{cases} \qquad n \ge 2,\ i = \overline{3,n},$
starting from
$$a_2 = -\dfrac{h_2}{2}, \qquad c_2 = -2, \qquad b_2 = \dfrac{3}{h_2}\,(y_2 - y_1) - 2 m_1, \qquad d_2 = \dfrac{6}{h_2^2}\,(y_2 - y_1) - \dfrac{6}{h_2}\, m_1,$$
and
(2.9) $\begin{cases} m_i = \dfrac{3}{h_i}\,(y_i - y_{i-1}) - 2 m_{i-1} - \dfrac{h_i}{2}\, M_{i-1}, & i = \overline{2, n-1}, \\[1mm] M_i = \dfrac{6}{h_i^2}\,(y_i - y_{i-1}) - \dfrac{6}{h_i}\, m_{i-1} - 2 M_{i-1}, & i = \overline{2, n}, \end{cases}$
this complete cubic spline is uniquely obtained, as stated in Proposition 2.2 below.
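The whole construction of Section 2 can be summarized in a short routine; the sketch below is our illustration (names are ours, nodes assumed strictly increasing with $n \ge 2$) and follows (2.5), (2.7), (2.8) and (2.9), evaluating $S$ piecewise via (2.1).

```python
import numpy as np

def blade_spline(x, y):
    """Sketch of the construction above: the first piece S_1 from (2.5) on
    [x_0, x_1], and on [x_1, x_n] the complete cubic spline generated by
    initial conditions, with M_1 from (2.7)-(2.8) and m_i, M_i from (2.9).
    Returns a callable evaluating S on [x_0, x_n]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)                                   # h[i-1] corresponds to h_i

    m, M = np.zeros(n + 1), np.zeros(n + 1)
    m[1] = (y[1] - y[0]) / (3.0 * h[0])              # m_1 in (2.7)

    # recurrence (2.8): m_n = a_n*M_1 + b_n, M_n = c_n*M_1 + d_n
    a, c = -h[1] / 2.0, -2.0
    b = 3.0 / h[1] * (y[2] - y[1]) - 2.0 * m[1]
    d = 6.0 / h[1]**2 * (y[2] - y[1]) - 6.0 / h[1] * m[1]
    for i in range(3, n + 1):
        hi = h[i - 1]
        a, c = -2.0 * a - hi / 2.0 * c, -2.0 * c - 6.0 / hi * a
        b, d = (3.0 / hi * (y[i] - y[i - 1]) - 2.0 * b - hi / 2.0 * d,
                6.0 / hi**2 * (y[i] - y[i - 1]) - 2.0 * d - 6.0 / hi * b)
    M[1] = ((y[n] - y[n - 1]) / h[n - 1] - b) / a    # M_1 in (2.7)

    for i in range(2, n + 1):                        # recurrence (2.9)
        hi = h[i - 1]
        M[i] = 6.0 / hi**2 * (y[i] - y[i - 1]) - 6.0 / hi * m[i - 1] - 2.0 * M[i - 1]
        if i <= n - 1:
            m[i] = 3.0 / hi * (y[i] - y[i - 1]) - 2.0 * m[i - 1] - hi / 2.0 * M[i - 1]

    def S(t):
        t = np.atleast_1d(np.asarray(t, float))
        out = np.empty_like(t)
        for k, tk in enumerate(t):
            i = min(max(np.searchsorted(x, tk), 1), n)
            u, hi = tk - x[i - 1], h[i - 1]
            if i == 1:                               # first piece (2.5) with Q(x)
                q = u / hi / 9.0 + 22.0 / 9.0 * (u / hi)**2 - 14.0 / 9.0 * (u / hi)**3
                out[k] = (1.0 - q) * y[0] + q * y[1]
            else:                                    # piece (2.1) on [x_{i-1}, x_i]
                out[k] = ((M[i] - M[i - 1]) / (6.0 * hi) * u**3
                          + M[i - 1] / 2.0 * u**2 + m[i - 1] * u + y[i - 1])
        return out
    return S
```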
Proposition 2.2 There exist unique values for $m_1$ and $M_1$, given in (2.7), such that the obtained cubic spline generated by initial conditions is the complete cubic spline on $[x_1, x_n]$.

Proof. For the shape of the aeolian blade profile we use the cubic spline $S \in C^1[a,b]$ given in (2.6), with $S_1$ as in (2.5) and $\overline{S}$ the complete cubic spline generated by initial conditions, with the restrictions (2.1) to the intervals $[x_{i-1}, x_i]$, $i = \overline{2,n}$, the parameters $m_i, M_i$, $i = \overline{2,n}$, given in (2.9), and
$$m_1 = \frac{y_1 - y_0}{3 h_1}, \qquad M_1 = \frac{\dfrac{y_n - y_{n-1}}{h_n} - b_n}{a_n},$$
with $a_n, b_n$ as in (2.8). We see that from (2.8) we can write
$$\begin{pmatrix} a_i \\ c_i \end{pmatrix} = \begin{pmatrix} -2 & -\dfrac{h_i}{2} \\[1mm] -\dfrac{6}{h_i} & -2 \end{pmatrix} \cdot \begin{pmatrix} a_{i-1} \\ c_{i-1} \end{pmatrix}, \qquad \forall\, i = \overline{3,n},$$
and since $a_1 = -\frac{h_2}{2} < 0$, $c_1 = -2 < 0$, we obtain $a_2 > 0$, $c_2 > 0$ and, generally, $a_i > 0$, $c_i > 0$ for even $i$ and $a_i < 0$, $c_i < 0$ for odd $i$. Then $a_n \ne 0$ and $M_1$ is uniquely given in (2.7). The parameter $m_1$ has the unique value $m_1 = \frac{y_1 - y_0}{3 h_1}$. With the recurrent relations (2.9), these uniquely determine the complete cubic spline $\overline{S}$.
3 The error estimation
In order to obtain the error estimate for the interpolation of a continuous function $f : [a,b] \to \mathbb{R}$, with $f(x_i) = y_i$, $\forall i = \overline{0,n}$, by $S$, we observe that
(3.1) $|S_1(x) - f(x)| = |[1 - Q(x)]\, y_0 + Q(x)\, y_1 - f(x)| \le |1 - Q(x)|\,|f(x) - y_0| + |Q(x)|\,|f(x) - y_1| \le [\,|1 - Q(x)| + |Q(x)|\,]\cdot \omega(f, h_1) = \omega(f, h_1)$
for all $x \in [x_0, x_1]$, since $0 \le Q(x) \le 1$, $\forall x \in [x_0, x_1]$; it remains to estimate $|S(x) - f(x)|$ for $x \in [x_1, x_n]$. The conditions $S_i(x_i) = y_i$, $\forall i = \overline{2,n}$, lead us to
(3.2) $m_{i-1} = \dfrac{y_i - y_{i-1}}{h_i} - \dfrac{(M_i + 2 M_{i-1})\, h_i}{6}, \qquad i = \overline{2,n},$
and replacing (3.2) in (2.1) we obtain
(3.3) $S_i(x) = \left[\dfrac{(x-x_{i-1})^2}{2} - \dfrac{(x-x_{i-1})^3}{6 h_i} - \dfrac{h_i (x-x_{i-1})}{3}\right] M_{i-1} + \left[\dfrac{(x-x_{i-1})^3}{6 h_i} - \dfrac{h_i (x-x_{i-1})}{6}\right] M_i + \dfrac{x_i - x}{h_i}\, y_{i-1} + \dfrac{x - x_{i-1}}{h_i}\, y_i,$
$\forall x \in [x_{i-1}, x_i]$, $i = \overline{2,n}$. The smoothness conditions at the interior knots,
$$S_i'(x_i) = S_{i+1}'(x_i), \qquad \forall i = \overline{2, n-1},$$
lead us to
(3.4) $\dfrac{h_i}{6}\, M_{i-1} + \dfrac{h_i + h_{i+1}}{3}\, M_i + \dfrac{h_{i+1}}{6}\, M_{i+1} = \dfrac{y_{i+1} - y_i}{h_{i+1}} - \dfrac{y_i - y_{i-1}}{h_i}.$
The conditions of the complete cubic spline,
$$S'(x_1) = m_1 = \dfrac{y_1 - y_0}{3 h_1}, \qquad S'(x_n) = m_n = \dfrac{y_n - y_{n-1}}{h_n},$$
provide the equations
(3.5) $\begin{cases} \dfrac{h_2}{3}\, M_1 + \dfrac{h_2}{6}\, M_2 = \dfrac{2(y_1 - y_0)}{3 h_1}, \\[2mm] \dfrac{h_n}{6}\, M_{n-1} + \dfrac{h_n}{3}\, M_n = 0, \end{cases}$
which together with (3.4) form a linear system of $n$ equations in the $n$ unknowns $M_1, \ldots, M_n$. The system (3.4)+(3.5) is tridiagonal and diagonally dominant and therefore has a unique solution, which can be found using the iterative algorithm presented in [8] and [1], pages 14-15. Here, we use this system in order to derive the error estimate.

Theorem 3.1 If $f \in C[a,b]$ with $f(x_i) = y_i$, $\forall i = \overline{0,n}$, and there exists $\beta \ge 1$ such that
$$\frac{\max\{h_i : i = \overline{2,n}\}}{\min\{h_i : i = \overline{2,n}\}} \le \beta,$$
then for the complete cubic spline $S \in C^2[x_1, x_n]$ interpolating the values $y_i$, $i = \overline{1,n}$, where $S'(x_1)$ and $S'(x_n)$ are given in (2.7), the following error estimate holds:
(3.6) $|S(x) - f(x)| \le \left(1 + \frac{3}{4}\,\beta^2\right) \omega(f, h), \qquad \forall x \in [x_1, x_n],$
where $h = \max\{h_i : i = \overline{2,n}\}$ and $\omega(f, h)$ is the modulus of continuity.

Proof. Multiplying (3.4) by $\frac{3}{h_i + h_{i+1}}$, $i = \overline{2, n-1}$, and the equations (3.5) by $\frac{3}{h_2}$ and $\frac{3}{h_n}$, respectively, the system (3.4)+(3.5) takes the form $G \cdot M = d$, where $M = (M_1, \ldots, M_n)$,
$$G = \begin{pmatrix} 1 & \frac{1}{2} & 0 & 0 & \cdots & 0 \\ \frac{h_2}{2(h_2+h_3)} & 1 & \frac{h_3}{2(h_2+h_3)} & 0 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & \frac{h_i}{2(h_i+h_{i+1})} & 1 & \frac{h_{i+1}}{2(h_i+h_{i+1})} & \cdots \\ \vdots & & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & \frac{1}{2} & 1 \end{pmatrix}$$
and $d = (d_1, \ldots, d_n)$, with
$$d_1 = \frac{2(y_1 - y_0)}{h_1 h_2}, \qquad d_n = 0, \qquad d_i = \frac{3(y_{i+1} - y_i)}{h_{i+1}(h_i + h_{i+1})} - \frac{3(y_i - y_{i-1})}{h_i (h_i + h_{i+1})}, \quad i = \overline{2, n-1}.$$
We see that $G = I + A$ with $\|A\|_\infty = \frac{1}{2}$, and therefore $G$ is invertible and $M = G^{-1} d$ with
$$\|M\|_\infty \le \|G^{-1}\|_\infty \cdot \|d\|_\infty, \qquad \|G^{-1}\|_\infty \le \frac{1}{1 - \|A\|_\infty} = 2,$$
and
$$|d_1| \le \frac{2}{h_1 h_2}\,\omega(f, h_1) \le \frac{2}{\underline{h}^2}\,\omega(f, h), \qquad |d_n| = 0,$$
$$|d_i| \le \frac{3\,\omega(f, h_{i+1})}{2\underline{h}^2} + \frac{3\,\omega(f, h_i)}{2\underline{h}^2} \le \frac{3\,\omega(f, h)}{\underline{h}^2}, \qquad \forall i = \overline{2, n-1}.$$
So $\|d\|_\infty \le \frac{3}{\underline{h}^2}\,\omega(f, h)$, where $\underline{h} = \min\{h_i : i = \overline{2,n}\}$, and $\|M\|_\infty \le \frac{6}{\underline{h}^2}\,\omega(f, h)$.
Let $x \in [x_1, x_n]$ be arbitrary. Then there exists $i \in \{2, \ldots, n\}$ such that $x \in [x_{i-1}, x_i]$. Using (3.3), we obtain
$$|S(x) - f(x)| \le \left|\frac{(x-x_{i-1})^2}{2} - \frac{(x-x_{i-1})^3}{6 h_i} - \frac{h_i (x-x_{i-1})}{3}\right| |M_{i-1}| + \left|\frac{(x-x_{i-1})^3}{6 h_i} - \frac{h_i (x-x_{i-1})}{6}\right| |M_i| + \frac{x_i - x}{h_i}\,|y_{i-1} - f(x)| + \frac{x - x_{i-1}}{h_i}\,|y_i - f(x)|,$$
and because
$$\frac{(x-x_{i-1})^2}{2} - \frac{(x-x_{i-1})^3}{6 h_i} - \frac{h_i (x-x_{i-1})}{3} \le 0, \qquad \frac{(x-x_{i-1})^3}{6 h_i} - \frac{h_i (x-x_{i-1})}{6} \le 0$$
for all $x \in [x_{i-1}, x_i]$, while $|M_{i-1}| \le \frac{6}{\underline{h}^2}\,\omega(f, h)$ and $|M_i| \le \frac{6}{\underline{h}^2}\,\omega(f, h)$, we infer that
$$|S(x) - f(x)| \le \omega(f, h_i) + \frac{6}{\underline{h}^2}\,\omega(f, h)\cdot\frac{|(x-x_{i-1})(x_i - x)|}{2} \le \omega(f, h) + \frac{6}{\underline{h}^2}\,\omega(f, h)\cdot\frac{h_i^2}{8} \le \omega(f, h) + \frac{3}{4}\left(\frac{h}{\underline{h}}\right)^{2}\omega(f, h) \le \left(1 + \frac{3}{4}\,\beta^2\right)\omega(f, h),$$
for all $x \in [x_{i-1}, x_i]$, $\forall i = \overline{2,n}$.

Remark 3.1 For equidistant partitions ($\underline{h} = h$ and $\beta = 1$) the inequality (3.6) becomes $|f(x) - S(x)| \le \frac{7}{4}\,\omega(f, h)$, $\forall x \in [x_1, x_n]$.

For the interpolation of continuous functions by cubic splines generated by boundary conditions, error estimates were obtained in [12]. For the case of smooth functions, $f \in C^1[a,b]$ and $f \in C^2[a,b]$, optimal error estimates were obtained in [11].

Corollary 3.1 The error estimate is
$$|f(x) - S(x)| \le \begin{cases} \omega(f, h), & x \in [x_0, x_1], \\[1mm] \left[1 + \dfrac{3}{4}\left(\dfrac{h}{\underline{h}}\right)^{2}\right]\omega(f, h), & x \in [x_1, x_n]. \end{cases}$$

Remark 3.2 To obtain a shape with a higher order of smoothness, we can construct an interpolation procedure based on quartic splines (see [6]).
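To illustrate the structure used in the proof of Theorem 3.1 (our sketch, not part of the paper), the matrix $G = I + A$ can be assembled for a sample non-uniform partition and checked to satisfy $\|A\|_\infty = 1/2$, which makes the system $G \cdot M = d$ uniquely solvable.

```python
import numpy as np

def build_G(h):
    """Assemble the matrix G = I + A from the proof of Theorem 3.1
    for subinterval lengths h = (h_2, ..., h_n)."""
    n = len(h) + 1                       # unknowns M_1, ..., M_n
    G = np.eye(n)
    G[0, 1] = 0.5                        # first row: 1, 1/2, 0, ...
    G[-1, -2] = 0.5                      # last row:  ..., 0, 1/2, 1
    for r in range(1, n - 1):            # row r couples M_{i-1}, M_i, M_{i+1}
        hi, hip1 = h[r - 1], h[r]
        G[r, r - 1] = hi / (2 * (hi + hip1))
        G[r, r + 1] = hip1 / (2 * (hi + hip1))
    return G

h = np.array([0.5, 1.0, 0.7, 0.9])       # sample lengths h_2,...,h_5 (arbitrary)
G = build_G(h)
A = G - np.eye(len(h) + 1)
print(np.abs(A).sum(axis=1).max())       # ||A||_inf = 1/2, so G is invertible
print(np.linalg.solve(G, np.ones(len(h) + 1)))   # G*M = d has a unique solution
```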
4 Numerical experiments

In order to illustrate the interpolation procedure we consider a transversal section of an aeolian blade (see [9]). At a given cross section we consider the points $x_i$, $i = \overline{0,10}$, situated on the axis of the section, and the corresponding values $y_i$, $i = \overline{0,10}$, on the surface of the blade. These are: $x_0 = 0$, $x_1 = 35.5$, $x_2 = 71.00$, $x_3 = 106.50$, $x_4 = 142.00$, $x_5 = 177.50$, $x_6 = 213.00$, $x_7 = 248.50$, $x_8 = 284.00$, $x_9 = 319.50$, $x_{10} = 355.00$ (for the top shape). As we can see in Figure 1, the inflection point near the first node improves the aerodynamic properties of the profile.
Figure 1: Aeolian blade profile
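Since the surface values $y_i$ of this section are not reproduced in the excerpt, the following snippet uses placeholder values; it only illustrates how the $x$-grid above feeds the first-interval piece (2.5), whose cubic term is consistent with the inflection near the first node mentioned above.

```python
import numpy as np

# x-grid of the blade cross section from Section 4 (units as in the paper);
# the y-values are NOT given in this excerpt, so the ones below are placeholders.
x = np.array([0.0, 35.5, 71.0, 106.5, 142.0, 177.5, 213.0, 248.5, 284.0, 319.5, 355.0])
y_placeholder = np.array([0.0, 14.0, 20.0, 23.0, 24.0, 23.5, 21.0, 17.0, 12.0, 6.0, 0.0])

h1 = x[1] - x[0]
t = np.linspace(x[0], x[1], 50)
u = (t - x[0]) / h1
Q = u / 9 + 22 * u**2 / 9 - 14 * u**3 / 9                 # Q(x) from (2.5)
S1 = (1 - Q) * y_placeholder[0] + Q * y_placeholder[1]    # first piece of the profile
print(S1[:5])
```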
References
[1] J. H. Ahlberg, E. N. Nilson, J. L. Walsh, The theory of splines and their applications, Academic Press, New York, London, 1967.
[2] H. Akima, A new method for interpolation and smooth curve fitting based on local procedures, J. of Assoc. for Comp. Machinery 4 (1970), 589-602.
[3] A.M. Bica, M. Degeratu, L. Demian, E. Paul, Optimal alternative to the Akima's method of smooth interpolation applied in diabetology, Surveys in Math. and its Applic. 1 (2006), 41-49.
[4] A.M. Bica, V.A. Căuş, I. Fechete, S. Mureşan, Application of the Cauchy-Buniakovski-Schwarz's inequality to an optimal property for cubic splines, J. of Comput. Analysis and Appl. 9, No. 1 (2007), 43-53.
[5] A.M. Bica, Current applications of the method of successive approximations, Oradea University Press, 2009 (in Romanian).
[6] A.M. Bica, Quartic spline of interpolation with minimal quadratic oscillation, in Numerical Analysis and its Applications, Lecture Notes in Computer Science LNCS 5434 (eds. S. Margenov, L.G. Vulkov, J. Wasniewski), Springer, Berlin, Heidelberg, New York, 2009, 200-207.
[7] A.M. Bica, M. Curilă, S. Curilă, Optimal piecewise smooth interpolation of experimental data, International Journal of Computers Communications and Control, Vol. 1, No. 1 (2006), 74-79.
[8] C. de Boor, A practical guide to splines, Applied Math. Sciences 27, Springer Verlag, Berlin (revised edition), 2001.
[9] C. Dubău, Using aeolian microagregates in the structure of complex systems, Ph.D. Thesis, Timisoara Politechnic Press, 2007 (in Romanian).
[10] C. Iancu, On the cubic spline of interpolation, Seminar on Functional Analysis and Numer. Methods 4 (1981), 52-71.
[11] Ye Maodeng, Optimal error bounds for the cubic spline interpolation of lower smooth functions (II), Appl. Math. JCU 13B (1998), 223-230.
[12] R. Mennicken, E. Wagenführer, Numerische Mathematik vol. 2, Vieweg, Braunschweig/Wiesbaden, 1977.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 1269-1287, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Viscosity approximations with weak contraction for finding a common solution of fixed points and a general system of variational inequalities for two accretive operators∗ Poom Kumam1,3 and Phayap Katchang2,3,† 1
Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), Bangmod, Bangkok 10140, Thailand 2
Department of Mathematics and Statistics, Faculty of Science and Agricultural Technology, Rajamangala University of Technology Lanna Tak, Tak 63000, Thailand 3
Centre of Excellence in Mathematics, CHE, Si Ayutthaya Rd., Bangkok 10400, Thailand
Abstract. In this paper, we prove a strong convergence theorem for finding a common solution of a general system of variational inequalities involving two different inverse-strongly accretive operators and of fixed point problems involving a nonexpansive mapping in a Banach space, by using a modified viscosity extragradient method with weak contraction. Moreover, using these results, we can find zeros of accretive operators and treat the class of k-strictly pseudocontractive mappings. The results presented in this paper extend and improve the corresponding results of Qin et al. [Convergence of an iterative algorithm for systems of variational inequalities and nonexpansive mappings with applications, J. Comput. Appl. Math., 233 (2009), 231-240], Aoyama et al. [Weak convergence of an iterative sequence for accretive operators in Banach spaces, Fixed Point Theory and Applications, vol. 2006, Article ID 35390, 1-13], Yao et al. [Modified extragradient methods for a system of variational inequalities in Banach spaces, Acta Appl. Math., 110(3) (2010), 1211-1224], Katchang et al. [A viscosity iterative scheme for inverse-strongly accretive operators in Banach spaces, Journal of Computational Analysis and Applications, 12(3) (2010), 678-686] and many others.
Keywords: Inverse-strongly accretive operator; Fixed point; General system of variational inequalities; Modified viscosity extragradient approximation method; Sunny nonexpansive retraction; Weak contraction 2000 Mathematics Subject Classification: Primary 47H05; 47H10; 47J25. ∗ This
research was partially supported by the Centre of Excellence in Mathematics, under the Commission on Higher Education, Thailand. † Corresponding author. E-mail: [email protected] (P. Katchang) and [email protected] (P. Kumam)
1
2
1
P. Katchang and P. Kumam
Introduction
Variational inequality has become a rich of inspiration in pure and applied mathematics. In recent years, classical variational inequality problems have been extended and generalized to study a large variety of problems arising in structural analysis, economics, optimization, operations research and engineering sciences and have withnessed an explosive growth in theoretical advances, algorithmic development, etc; see for e.g. [5, 7, 24] and the references therein. It has been shown a wide class of nonsymmetric and odd-order moving, obstacle, free, equilibrium problems arising in various branches of pure and applied sciences can be studied via the general variational inequalities. In brief, this field is dynamic and is experiencing an explosive growth in both theory and applications. Consequently, several numerical techniques including projection, the Wiener-Hopf equations, auxiliary principle, decomposition and descent are being developed for solving various classes of variational inequalities and related optimization problems. Several numerical methods have been developed for solving variational inequalities and related optimization problems, see [4, 8, 21, 23, 39] and the references therein. To convey an idea of the variational inequality, let C be a closed and convex set in a real Hilbert space H. For a given operator A, we consider the problem of finding x∗ ∈ C such that hAx∗ , x − x∗ i ≥ 0 for all x ∈ C, which is known as the variational inequality, introduced and studied by Stampacchia [30] in 1964 in the field of potential theory. Let E be a real Banach space with norm k · k and C be a nonempty closed convex subset of E. Let E ∗ be the dual space of E and h·, ·i denote the pairing between E and E ∗ . We recall the following concepts (see also [32] for). Let S : C → C a nonlinear mapping. We use F (S) to denote the set of fixed points of S, that is, F (S) = {x ∈ C : Sx = x}. A mapping S is called nonexpansive if kSx − Syk ≤ kx − yk, ∀x, y ∈ C. A mapping f : C → C is said to be contraction if there exists a constant α ∈ [0, 1) and x, y ∈ C such that kf (x) − f (y)k ≤ αkx − yk. In the cases f is any contraction have recently been studied by Moudafi [20] for a nonexpansive selfmapping S defined on a Hilbert space. Very recently, Suzuki [29] replaced a contraction f by a Meir-Keeler contractions Φ to find a fixed point of a nonexpansive mapping S. A mapping f is called weakly contractive on a closed convex set C in the Banach space E if there exists ϕ : [0, ∞) → [0, ∞) is a continuous and strictly increasing function such that ϕ is positive on (0, ∞), ϕ(0) = 0, limt→∞ ϕ(t) = ∞ and x, y ∈ C kf (x) − f (y)k ≤ kx − yk − ϕ(kx − yk).
(1.1)
If ϕ(t) = (1 − k)t, then f is called to be contractive with the contractive coefficient k. If ϕ(t) ≡ 0, then f is said to be nonexpansive. ∗
For q > 1, the generalized duality mapping Jq : E → 2E is defined by Jq (x) = {f ∈ E ∗ : hx, f i = kxkq , kf k = kxkq−1 } for all x ∈ E. In particular, if q = 2, the mapping J2 is called the normalized duality mapping and, usually, write J2 = J. Further, we have the following properties of the generalized duality mapping Jq : (i) Jq (x) =
1271
Viscosity approximations with weak contraction for finding a common solution of fixed points. . .
3
kxkq−2 J2 (x) for all x ∈ E with x 6= 0; (ii) Jq (tx) = tq−1 Jq (x) for all x ∈ E and t ∈ [0, ∞); and (iii) Jq (−x) = −Jq (x) for all x ∈ E. It is known that if X is smooth, then J is single-valued, which is denoted by j . Recall that the duality mapping j is said to be weakly sequentially continuous if for each xn −→ x weakly, we have j(xn ) −→ j(x) weakly-*. We know that if X admits a weakly sequentially continuous duality mapping, then X is smooth (for the details, see [32, 33, 39]). Let U = {x ∈ E : kxk = 1}. A Banach space E is said to uniformly convex if, for any ² ∈ (0, 2], there exists δ > 0 such that, for any x, y ∈ U , kx − yk ≥ ² implies k x+y 2 k ≤ 1 − δ. It is known that a uniformly convex Banach space is reflexive and strictly convex. A Banach space E is said to be smooth if the exists for all x, y ∈ U . It is also said to be uniformly smooth if the limit is attained limit limt→0 kx+tyk−kxk t uniformly for x, y ∈ U . The modulus of smoothness of E is defined by 1 ρ(τ ) = sup{ (kx + yk + kx − yk) − 1 : x, y ∈ E, kxk = 1, kyk = τ }, 2 ) where ρ : [0, ∞) → [0, ∞) is a function. It is known that E is uniformly smooth if and only if limτ →0 ρ(τ τ = 0. Let q be a fixed real number with 1 < q ≤ 2. A Banach space E is said to be q-uniformly smooth if there exists a constant c > 0 such that ρ(τ ) ≤ cτ q for all τ > 0: see, for instance, [1, 32].
We note that E is a uniformly smooth Banach space if and only if Jq is single-valued and uniformly continuous on any bounded subset of E. Typical examples of both uniformly convex and uniformly smooth Banach spaces are Lp , where p > 1. More precisely, Lp is min{p, 2}-uniformly smooth for every p > 1. Note also that no Banach space is q-uniformly smooth for q > 2; see [32, 36] for more details. Recall that an operator A : C → E is said to be accretive if there exists j(x − y) ∈ J(x − y) such that hAx − Ay, j(x − y)i ≥ 0 for all x, y ∈ C. A mapping A : C → E is said to be β-strongly accretive if there exists a constant β > 0 such that hAx − Ay, j(x − y)i ≥ βkx − yk2 ∀x, y ∈ C. An operator A : C → E is said to be β-inverse strongly accretive if, for any β > 0, hAx − Ay, j(x − y)i ≥ βkAx − Ayk2 for all x, y ∈ C. Evidently, the definition of the inverse strongly accretive operator is based on that of the inverse strongly monotone operator. Recently, Aoyama et al. [1] first considered the following generalized variational inequality problem in a smooth Banach space. Let A be an accretive operator of C into E. Find a point x ∈ C such that hAx, j(y − x)i ≥ 0,
(1.2)
for all y ∈ C. This problem is connected with the fixed point problem for nonlinear mappings, the problem of finding a zero point of an accretive operator and so on. For the problem of finding a zero point of an accretive operator by the proximal point algorithm, see Kamimura and Takahashi [10, 11] . In order to find a solution
4
P. Katchang and P. Kumam
of the variational inequality (1.2), Aoyama et al. [1] proved the strong convergence theorem in the framework of Banach spaces which is generalized Iiduka et al. [8] from Hilbert spaces. In 2006, Aoyama, Iiduka and Takahashi [1] proved the following weak convergence theorem. Theorem AIT.([1, Aoyama et al. Theorem 3.1]) Let E be a uniformly convex and 2-uniformly smooth Banach space and C a nonempty closed convex subset of E. Let QC be a sunny nonexpansive retraction from E onto C, α > 0, and A be an α-inverse strongly accretive operator of C into E with S(C, A) 6= ∅, where S(C, A) = {x∗ ∈ C : hAx∗ , j(x − x∗ )i ≥ 0, x ∈ C}. If {λn } and {αn } are chosen such that λn ∈ [a, Kα2 ], for some a > 0 and αn ∈ [b, c], for some b, c with 0 < b < c < 1, then the sequence {xn } defined by the following manners: x1 − x ∈ C and xn+1 = αn xn + (1 − αn )QC (xn − λn Axn ), converges weakly to some element z of S(C, A), where K is the 2-uniformly smoothness constant of E and QC is a sunny nonexpansive retraction. Motivated by Aoyama et al. [1] and also Ceng et al. [4], Qin et al. [23] and Yao et al. [39] first considered the following system of general variational inequalities in Banach spaces: Let A : C → E be an β-inverse strongly accretive mapping. Find (x∗ , y ∗ ) ∈ C × C such that ( hλAy ∗ + x∗ − y ∗ , j(x − x∗ )i ≥ 0 ∀x ∈ C, hµAx∗ + y ∗ − x∗ , j(x − y ∗ )i ≥ 0 ∀x ∈ C.
(1.3)
Let C be nonempty closed convex subset of a real Banach space E. For given two operators A, B : C → E, consider the problem of finding (x∗ , y ∗ ) ∈ C × C such that ( hλAy ∗ + x∗ − y ∗ , j(x − x∗ )i ≥ 0 ∀x ∈ C, (1.4) hµBx∗ + y ∗ − x∗ , j(x − y ∗ )i ≥ 0 ∀x ∈ C, where λ and µ are two positive real numbers. This system is called the system of general variational inequalities in a real Banach spaces. If we add up the requirement that A = B, then the problem (1.4) is reduced to the system (1.3). Recently, the so-called viscosity approximation methods have been studied by many author. They are very important because they are applied to convex optimization, linear programming, monotone inclusions and elliptic differential equations. In [20], Moudafi proposed the viscosity approximation method of selecting a particular fixed point of a given nonexpansive mapping in Hilbert spaces. Other investigations of a viscosity approximation method can be found in Reference [3, 6, 34] and many results not cited here. An interesting problem to extend the above results to find a solution of a general system of variational inequalities. In 2008, Ceng et al. [4] introduced a relaxed extragradient method for finding solutions of a general system of variational inequalities with inverse-strongly monotone mappings in a real Hilbert space.
Viscosity approximations with weak contraction for finding a common solution of fixed points. . . Suppose x1 = u ∈ C and xn is generated by ( yn = PC (xn − µBxn ), xn+1 = αn u + βn xn + γn SPC (yn − λAyn ),
5
(1.5)
for all n ≥ 1 where λ ∈ (0, 2α), µ ∈ (0, 2β), S is a nonexpansive mapping and A and B are α and β-inversestrongly monotone, respectively. They proved the strong convergence theorem under quite mild conditions. In 2010, Katchang et al. [13] introduced a viscosity iterative schemes for an inverse-strongly accretive operator in Banach spaces as follows: xn+1 = αn f (xn ) + βn xn + γn QC (xn − λn Axn ),
(1.6)
we proved the strong convergent theorems under some parameters controlling conditions. In the same time, Yao et al. [39] introduce the following iteration scheme for solving a general system of variational inequality problem (1.4) and some fixed point problem involving the nonexpansive mapping in Banach spaces. For arbitrarily given x0 = u ∈ C and {xn } is given by ( yn = QC (xn − Bxn ), (1.7) xn+1 = αn u + βn xn + γn QC (yn − Ayn ), for all n ≥ 0 where C ⊂ E, QC is a sunny nonexpansive retraction from E onto C and A and B are inverse-strongly accretive mappings. They obtained a strong convergence theorem in Banach spaces. Recently, Katchang and Kumam [12] introduced a viscosity iterative scheme for finding solutions of a general system of variational inequalities (1.4) for two inverse-strongly accretive operators with a viscosity of modified extragradient methods and solutions of fixed point problems involving the nonexpansive mapping in Banach spaces. We prove that the sequence {xn } generated by yn = QC (xn − µBxn ), (1.8) vn = QC (yn − λAyn ), xn+1 = αn f (xn ) + βn xn + γn [δSxn + (1 − δ)vn ], converge strongly to x ¯ = QF (G)∩F (S) f (¯ x) which (¯ x, y¯) is a solution of the system of general variational inequalities (1.4), where y¯ = QC (¯ x − µB x ¯). This naturally brings us to the following questions: Question 1. Can the results of Ceng et al. [4] be extend from a Hilbert space to a general Banach space? Question 2. Can we extend and modify the iterative scheme (1.7) to a general algorithm? Question 3. Can we extend and modify the iterative algorithm (1.7) for finding a common element of the set of solutions of (1.4) and the set of fixed points for nonexpansive mappings? In this paper, motivated and inspired by the idea of Ceng et al. [4], Yao et al. [39], Iiduka, Takahashi and Toyoda [8], and Iiduka and Takahashi [9] we introduce a new viscosity iterative scheme for finding solutions of a general system of variational inequalities (1.4) involving two different inverse-strongly accretive operators
1274
6
P. Katchang and P. Kumam
and solutions of fixed point problems involving the nonexpansive mapping in a Banach space by using a modified viscosity extragradient method. Consequently, we obtain new strong convergence theorems for fixed point problems which solves the system of general variational inequalities (1.3) and (1.4). Moreover, using the above theorem, we can apply to finding solutions of zeros of accretive operators and the class of k-strictly pseudocontractive mappings. The results presented in this paper extend and improve the corresponding results of Yao et al. [39], Ceng et al. [4], Qin et al. [23], Katchang and Kumam [12], Katchang et al. [13] and many others.
2
Preliminaries
We always assume that E is a real Banach space and C be a nonempty closed convex subset of E. Let D be a subset of C and Q : C → D. Then Q is said to sunny if Q(Qx + t(x − Qx)) = Qx, whenever Qx + t(x − Qx) ∈ C for x ∈ C and t ≥ 0. A subset D of C is said to be a sunny nonexpansive retract of C if there exists a sunny nonexpansive retraction Q of C onto D. A mapping Q : C → C is called a retraction if Q2 = Q. If a mapping Q : C → C is a retraction, then Qz = z for all z is in the range of Q. For example, see [1, 31] for more details. The following result describes a characterization of sunny nonexpansive retractions on a smooth Banach space. Proposition 2.1. ([25]) Let E be a smooth Banach space and let C be a nonempty subset of E. Let Q : E → C be a retraction and let J be the normalized duality mapping on E. Then the following are equivalent: (i) Q is sunny and nonexpansive; (ii) kQx − Qyk2 ≤ hx − y, J(Qx − Qy)i, ∀x, y ∈ E; (iii) hx − Qx, J(y − Qx)i ≤ 0, ∀x ∈ E, y ∈ C. Proposition 2.2. ([14]) Let C be a nonempty closed convex subset of a uniformly convex and uniformly smooth Banach space E and let T be a nonexpansive mapping of C into itself with F (T ) 6= ∅. Then the set F(T) is a sunny nonexpansive retract of C. We need the following lemmas for proving our main results. Lemma 2.3. ([36]) Let E be a real 2-uniformly smooth Banach space with the best smooth constant K. Then the following inequality holds: kx + yk2 ≤ kxk2 + 2hy, Jxi + 2kKyk2 ,
∀x, y ∈ E.
Lemma 2.4. ([28]) Let {xn } and {yn } be bounded sequences in a Banach space X and let {βn } be a sequence in [0, 1] with 0 < lim inf n→∞ βn ≤ lim supn→∞ βn < 1. Suppose xn+1 = (1 − βn )yn + βn xn for all integers n ≥ 0 and lim supn→∞ (kyn+1 − yn k − kxn+1 − xn k) ≤ 0. Then, limn→∞ kyn − xn k = 0.
1275
Viscosity approximations with weak contraction for finding a common solution of fixed points. . . Lemma 2.5. (Lemma 2.2 in [16]) Let {an } and {bn } be two nonnegative real number sequences and {αn } a P∞ positive real number sequence satisfying the conditions: n=1 αn = ∞ and limn−→∞ αbnn = 0. Let the recursive inequality an+1 ≤ an − αn ϕ(an ) + bn , n ≥ 0 where ϕ(a) is a continuous and strict increasing function for all a ≥ 0 with ϕ(0) = 0. Then limn−→∞ an = 0. Lemma 2.6. ([22]) Let E be a Banach space. Then for all x, y, z ∈ E and α, β, γ ∈ [0, 1] with α + β + γ = 1, we have kαx + βy + γzk2 = αkxk2 + βkyk2 + γkzk2 − αβkx − yk2 − αγkx − zk2 − βγky − zk2 . Lemma 2.7. ([2]) Let C be a nonempty bounded closed convex subset of a uniformly convex Banach space E and let T be nonexpansive mapping of C into itself. If {xn } is a sequence of C such that xn → x weakly and xn − T xn → 0 strongly, then x is s fixed point of T. Lemma 2.8. (Yao et al. [39, Lemma 3.1]; see also [1, Lemma 2.8]) Let C be a nonempty closed convex subset of a real 2-uniformly smooth Banach space E. Let the mapping A : C → E be β-inverse-strongly accretive. Then, we have k(I − λA)x − (I − λA)yk2 ≤ kx − yk2 + 2λ(λK 2 − β)kAx − Ayk2 . If β ≥ λK 2 , then I − λA is nonexpansive. Proof . For any x, y ∈ C, from Lemma 2.3, we have k(I − λA)x − (I − λA)yk2
=
k(x − y) − λ(Ax − Ay)k2
≤ kx − yk2 − 2λhAx − Ay, j(x − y)i + 2λ2 K 2 kAx − Ayk2 ≤ kx − yk2 − 2λβkAx − Ayk2 + 2λ2 K 2 kAx − Ayk2 =
kx − yk2 + 2λ(λK 2 − β)kAx − Ayk2 .
If β ≥ λK 2 , then I − λA is nonexpansive. In order to prove our main results, we need the following two lemmas which is proved along the proof of Yao et al.’s lemmas as it appears in [39]. Lemma 2.9. Let C be a nonempty closed convex subset of a real 2-uniformly smooth Banach space E. Let QC be the sunny nonexpansive retraction from E onto C. Let the mapping A, B : C → E be β-inverse-strongly accretive and γ-inverse-strongly accretive, respectively. Let Ω : C → C be a mapping defined by Ω(x) = QC (QC (x − µBx) − λAQC (x − µBx)) ∀x ∈ C. If β ≥ λK 2 and γ ≥ µK 2 , then Ω is nonexpansive.
7
1276
8
P. Katchang and P. Kumam
Proof . For any x, y ∈ C, from Lemma 2.8 and QC is nonexpansive, we obtain kΩ(x) − Ω(y)k =
kQC [QC (I − µB)x − λAQC (I − µB)x] − QC [QC (I − µB)y − λAQC (I − µB)y]k
≤
k[QC (I − µB)x − λAQC (I − µB)x] − [QC (I − µB)y − λAQC (I − µB)y]k
=
k(I − λA)QC (I − µB)x − (I − λA)QC (I − µB)yk
≤
kQC (I − µB)x − QC (I − µB)yk
≤
k(I − µB)x − (I − µB)yk
≤
kx − yk.
Therefore Ω is nonexpansive. Lemma 2.10. Let C be a nonempty closed convex subset of a real smooth Banach space E. Let QC be the sunny nonexpansive retraction from E onto C. Let A, B : C → E be two possibly nonlinear mappings. For given x∗ , y ∗ ∈ C, (x∗ , y ∗ ) is a solution of problem (1.4) if and only if x∗ = QC (y ∗ −λAy ∗ ) where y ∗ = QC (x∗ −µBx∗ ). Proof . From (1.4), we rewrite as (
hx∗ − (y ∗ − λAy ∗ ), j(x − x∗ )i ≥ 0 ∀x ∈ C, hy ∗ − (x∗ − µBx∗ ), j(x − y ∗ )i ≥ 0 ∀x ∈ C.
Using Proposition 2.1 (iii), the system (2.1) equivalent to ( x∗ = QC (y ∗ − λAy ∗ ), y ∗ = QC (x∗ − µBx∗ ).
(2.1)
(2.2)
Remark 2.11. From Lemma 2.10, we observe that x∗ = QC (QC (x∗ − µBx∗ ) − λAQC (x∗ − µBx∗ )), which implies that x∗ is a fixed point of the mapping Ω. Throughout this paper, the set of fixed points of the mapping Ω is denoted by F (Ω).
3
Main results
In this section, we prove a strong convergence theorem. The next result states the main result of this work. Theorem 3.1. Let E be a uniformly convex and 2-uniformly smooth Banach space which admits a weakly sequentially continuous duality mapping and C be a nonempty closed convex subset of E. Let S : C → C be a nonexpansive mapping and QC be a sunny nonexpansive retraction from E onto C. Let A, B : C → E be β-inverse-strongly accretive with β ≥ λK 2 and γ-inverse-strongly accretive with γ ≥ µK 2 , respectively and K be the best smooth constant. Let f be a weakly contractive of C into itself with function ϕ. Suppose
Viscosity approximations with weak contraction for finding a common solution of fixed points. . .
9
F := F (Ω) ∩ F (S) 6= ∅ where Ω defined by Lemma 2.9. For arbitrary given x0 = x ∈ C, the sequence {xn } generated by ( yn = QC (xn − µBxn ), (3.1) xn+1 = αn f (xn ) + βn xn + γn SQC (yn − λAyn ). where the sequences {αn }, {βn } and {γn } in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1 and λ, µ are positive real numbers. The following conditions are satisfied: P∞ (C1). limn→∞ αn = 0 and n=0 αn = ∞; (C2). 0 < lim inf n→∞ βn ≤ lim supn→∞ βn < 1, Then {xn } converges strongly to x ¯ = QF f (¯ x) and (¯ x, y¯) is a solution of the problem (1.4), where y¯ = QC (¯ x− µB x ¯) and QF is the sunny nonexpansive retraction of C onto F. Proof . First, we prove that {xn } is bounded. Let x∗ ∈ F, from Lemma 2.10, we see that x∗ = QC (QC (x∗ − µBx∗ ) − λAQC (x∗ − µBx∗ )). Put y ∗ = QC (x∗ − µBx∗ ) and vn = QC (yn − λAyn ). Then x∗ = QC (y ∗ − λAy ∗ ). By nonexpansiveness of I − λA, I − µB and QC , we have kvn − x∗ k
=
kQC (yn − λAyn ) − QC (y ∗ − λAy ∗ )k
≤
k(yn − λAyn ) − (y ∗ − λAy ∗ )k
=
k(I − λA)yn − (I − λA)y ∗ k
≤
kyn − y ∗ k
=
kQC (xn − µBxn ) − QC (x∗ − µBx∗ )k
≤
k(xn − µBxn ) − (x∗ − µBx∗ )k
=
k(I − µB)xn − (I − µB)x∗ k
≤
kxn − x∗ k.
(3.2)
It follows from (3.1) and (3.2), we also have kxn+1 − x∗ k
= kαn f (xn ) + βn xn + γn Svn − x∗ k ≤ αn kf (xn ) − x∗ k + βn kxn − x∗ k + γn kSvn − x∗ k h i ≤ αn kxn − x∗ k − ϕ(kxn − x∗ k) + βn kxn − x∗ k + γn kvn − x∗ k ≤ αn kxn − x∗ k + βn kxn − x∗ k + γn kxn − x∗ k = kxn − x∗ k ≤ max{kx1 − x∗ k}.
This implies that {xn } is bounded, so are {f (xn )}, {yn }, {vn }, {Svn }, {Ayn } and {Bxn }.
(3.3)
1278
10
P. Katchang and P. Kumam Next, we show that limn→∞ kxn+1 − xn k = 0. Notice that kvn+1 − vn k
= kQC (yn+1 − λAyn+1 ) − QC (yn − λAyn )k ≤ k(yn+1 − λAyn+1 ) − (yn − λAyn )k = k(I − λA)yn+1 − (I − λA)yn k ≤ kyn+1 − yn k = kQC (xn+1 − µBxn+1 ) − QC (xn − µBxn )k ≤ k(xn+1 − µBxn+1 ) − (xn − µBxn )k = k(I − µB)xn+1 − (I − µB)xn k ≤ kxn+1 − xn k.
Setting xn+1 = (1 − βn )zn + βn xn for all n ≥ 0, we see that zn = kzn+1 − zn k
xn+1 −βn xn , 1−βn
then we have
xn+1 − βn xn xn+2 − βn+1 xn+1 − k 1 − βn+1 1 − βn αn f (xn ) + γn Svn αn+1 f (xn+1 ) + γn+1 Svn+1 − k k 1 − βn+1 1 − βn αn+1 f (xn ) αn+1 f (xn ) γn+1 Svn γn+1 Svn αn+1 f (xn+1 ) + γn+1 Svn+1 − + − + k 1 − βn+1 1 − βn+1 1 − βn+1 1 − βn+1 1 − βn+1 αn f (xn ) + γn Svn k − 1 − βn αn αn+1 αn+1 γn+1 (f (xn+1 ) − f (xn )) + (Svn+1 − Svn ) + ( − )f (xn ) k 1 − βn+1 1 − βn+1 1 − βn+1 1 − βn γn γn+1 − )Svn k +( 1 − βn+1 1 − βn γn+1 αn+1 αn αn+1 kf (xn+1 ) − f (xn )k + kvn+1 − vn k + | − |kf (xn )k 1 − βn+1 1 − βn+1 1 − βn+1 1 − βn 1 − βn − αn 1 − βn+1 − αn+1 − |kSvn k +| 1 − βn+1 1 − βn i αn+1 h γn+1 kxn+1 − xn k − ϕ(kxn+1 − xn k) + kvn+1 − vn k 1 − βn+1 1 − βn+1 αn αn+1 αn αn+1 − |kf (xn )k + | − |kSvn k +| 1 − βn+1 1 − βn 1 − βn+1 1 − βn γn+1 αn+1 αn αn+1 kxn+1 − xn k + kvn+1 − vn k + | − |(kf (xn )k + kSvn k) 1 − βn+1 1 − βn+1 1 − βn+1 1 − βn αn+1 αn αn+1 kxn+1 − xn k + kvn+1 − vn k + | − |(kf (xn )k + kSvn k) 1 − βn+1 1 − βn+1 1 − βn αn+1 αn αn+1 kxn+1 − xn k + kxn+1 − xn k + | − |(kf (xn )k + kSvn k). 1 − βn+1 1 − βn+1 1 − βn
= k = =
=
≤
≤
≤ ≤ ≤ Therefore
kzn+1 − zn k − kxn+1 − xn k ≤
αn+1 αn αn+1 kxn+1 − xn k + | − |(kf (xn )k + kSvn k). 1 − βn+1 1 − βn+1 1 − βn
Viscosity approximations with weak contraction for finding a common solution of fixed points. . .
11
It follow from the condition (C1) and (C2), which implies that lim sup(kzn+1 − zn k − kxn+1 − xn k) ≤ 0. n→∞
Applying Lemma 2.4, we obtain limn→∞ kzn − xn k = 0 and also kxn+1 − xn k = (1 − βn )kzn − xn k → 0 as n → ∞. Therefore, we have lim kxn+1 − xn k = 0.
n→∞
(3.4)
Next, we show that limn→∞ kSvn − vn k = 0. Since x∗ ∈ F, from Lemma 2.6, we obtain kxn+1 − x∗ k2
=
kαn f (xn ) + βn xn + γn Svn − x∗ k2
≤
αn kf (xn ) − x∗ k2 + (1 − αn − γn )kxn − x∗ k2 + γn kvn − x∗ k2
=
αn kf (xn ) − x∗ k2 + (1 − αn )kxn − x∗ k2 − γn (kxn − x∗ k2 − kvn − x∗ k2 )
≤
αn kf (xn ) − x∗ k2 + kxn − x∗ k2 − γn kxn − vn k(kxn − x∗ k + kvn − x∗ k).
Therefore, we have γn kxn − vn k(kxn − x∗ k + kvn − x∗ k) ≤ αn kf (xn ) − x∗ k2 + kxn − x∗ k2 − kxn+1 − x∗ k2 ≤ αn kf (xn ) − x∗ k2 + (kxn − x∗ k + kxn+1 − x∗ k)kxn − xn+1 k. From the condition (C1) and (3.4), this implies that kxn − vn k −→ 0 as n −→ ∞. Now, we note that kxn − Svn k
≤
kxn − xn+1 k + kxn+1 − Svn k
=
kxn − xn+1 k + kαn f (xn ) + βn xn + γn Svn − Svn k
=
kxn − xn+1 k + kαn (f (xn ) − Svn ) + βn (xn − Svn )k
≤
kxn − xn+1 k + αn kf (xn ) − Svn k + βn kxn − Svn k.
Therefore, we get kxn − Svn k ≤
αn 1 kxn − xn+1 k + kf (xn ) − Svn k. 1 − βn 1 − βn
From the condition (C1), (C2) and (3.4), this implies that kxn − Svn k −→ 0 as n −→ ∞. Since kSvn − vn k ≤
kSvn − xn k + kxn − vn k,
and hence it follows that limn→∞ kSvn − vn k = 0. Next, we show that lim supn→∞ h(f − I)¯ x, J(xn − x ¯)i ≤ 0, where x ¯ = QF f (¯ x). Since {xn } is bounded, ∗ we can choose a sequence {xni } of {xn } which xni * x such that lim suph(f − I)¯ x, J(xn − x ¯)i = lim h(f − I)¯ x, J(xni − x ¯)i. n→∞
i→∞
(3.5)
12
P. Katchang and P. Kumam
Next, we prove that x∗ ∈ F := F (Ω) ∩ F (S). (a) First, we show that x∗ ∈ F (S). To show this, we choose a subsequence {vni } of {vn }. Since {vni } is bounded, we have that a subsequence {vnij } of {vni } converges weakly to x∗ . We may assume without loss of generality that vni * x∗ . Since kSvn − vn k → 0, we obtain Svni * x∗ . Then we can obtain x∗ ∈ F. Assume that x∗ ∈ / F (S). Since vni * x∗ and Sx∗ 6= x∗ , from Opial’s condition, we get lim inf i−→∞ kvni − x∗ k
< ≤ ≤
lim inf i−→∞ kvni − Sx∗ k lim inf i−→∞ (kvni − Svni k + kSvni − Sx∗ k) lim inf i−→∞ kvni − x∗ k.
This is a contradiction. Thus, we have x∗ ∈ F (S). (b) Next, we show that x∗ ∈ F (Ω). From Lemma 2.9, we know that Ω is nonexpansive, it follows that kvn − Ω(vn )k =
kQC (QC (xn − µBxn ) − λAQC (xn − µBxn )) − G(vn )k
=
kΩ(xn ) − Ω(vn )k
≤ kxn − vn k −→ 0, as n −→ ∞. Thus limn→∞ kvn − Ω(vn )k = 0. Since Ω is nonexpansive, we get kxn − Ω(xn )k
≤
kxn − vn k + kvn − Ω(vn )k + kΩ(vn ) − Ω(xn )k
≤
2kxn − vn k + kvn − Ω(vn )k,
and so lim kxn − Ω(xn )k = 0.
n→∞
(3.6)
By Lemma 2.7 and (3.6), we have x∗ ∈ F (Ω). Therefore x∗ ∈ F . Now, from (3.5), Proposition 2.1 (iii) and the weakly sequential continuity of the duality mapping J, we have lim suph(f − I)¯ x, J(xn − x ¯)i
=
n→∞
=
lim h(f − I)¯ x, J(xni − x ¯)i
i→∞
h(f − I)¯ x, J(x∗ − x ¯)i ≤ 0.
(3.7)
From (3.4), it follows that lim suph(f − I)¯ x, J(xn+1 − x ¯)i ≤ 0. n→∞
(3.8)
1281
Viscosity approximations with weak contraction for finding a common solution of fixed points. . .
13
Finally, we show that {xn } converges strongly to x ¯ = QF f (¯ x). We compute that kxn+1 − x ¯k2
=
hxn+1 − x ¯, J(xn+1 − x ¯)i
=
hαn f (xn ) + βn xn + γn Svn − x ¯, J(xn+1 − x ¯)i
=
hαn (f (xn ) − x ¯) + βn (xn − x ¯) + γn (Svn − x ¯), J(xn+1 − x ¯)i
=
αn hf (xn ) − f (¯ x), J(xn+1 − x ¯)i + αn hf (¯ x) − x ¯, J(xn+1 − x ¯)i + βn hxn − x ¯, J(xn+1 − x ¯)i
≤
+γn hSvn − x ¯, J(xn+1 − x ¯)i h i αn kxn − x ¯k − ϕ(kxn − x ¯k) kxn+1 − x ¯k + αn hf (¯ x) − x ¯, J(xn+1 − x ¯)i +βn kxn − x ¯kkxn+1 − x ¯k + γn kvn − x ¯kkxn+1 − x ¯k
≤
αn kxn − x ¯kkxn+1 − x ¯k − αn ϕ(kxn − x ¯k)kxn+1 − x ¯k + αn hf (¯ x) − x ¯, J(xn+1 − x ¯)i +βn kxn − x ¯kkxn+1 − x ¯k + γn kxn − x ¯kkxn+1 − x ¯k
= =
kxn − x ¯kkxn+1 − x ¯k − αn ϕ(kxn − x ¯k)kxn+1 − x ¯k + αn hf (¯ x) − x ¯, J(xn+1 − x ¯)i ´ 1³ kxn − x ¯k2 + kxn+1 − x ¯k2 − αn ϕ(kxn − x ¯k)kxn+1 − x ¯k + αn hf (¯ x) − x ¯, J(xn+1 − x ¯)i. 2
By (3.3) and since {xn+1 − x ¯} bounded i.e., there exist M > 0 such that kxn+1 − x ¯k ≤ M , which implies that kxn+1 − x ¯ k2
≤ kxn − x ¯k2 − 2αn M ϕ(kxn − x ¯k) + 2αn hf (¯ x) − x ¯, J(xn+1 − x ¯)i.
(3.9)
Now, from (C1) and applying Lemma 2.5 to (3.9), we get kxn − x ¯k → 0 as n → ∞. This completes the proof. ¤ Corollary 3.2. Let E be a uniformly convex and 2-uniformly smooth Banach space which admits a weakly sequentially continuous duality mapping and C be a nonempty closed convex subset of E. Let S : C → C be a nonepansive mapping and QC be a sunny nonexpansive retraction from E onto C. Let A, B : C → E be β-inverse-strongly accretive with β ≥ λK 2 and γ-inverse-strongly accretive with γ ≥ µK 2 , respectively and K be the best smooth constant. Let f be a contraction of C into itself with coefficient α ∈ (0, 1). Let the sequences {αn }, {βn } and {γn } in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1 and satisfy the condition (C1) and (C2) in Theorem 3.1. Suppose F := F (Ω) ∩ F (S) 6= ∅ where Ω defined by Lemma 2.9 and let λ, µ are positive real numbers. For arbitrary given x0 = x ∈ C, the sequences {xn } generated by ( yn = QC (xn − µBxn ), (3.10) xn+1 = αn f (xn ) + βn xn + γn SQC (yn − λAyn ). Then {xn } converges strongly to QF u, where QF is the sunny nonexpansive retraction of C onto F. Proof . Replacing a weakly contractive f in Theorem 3.1 by an α-contraction f , we can conclude the desired conclusion easily. This completes the proof. ¤ Corollary 3.3. Let E be a uniformly convex and 2-uniformly smooth Banach space which admits a weakly sequentially continuous duality mapping and C be a nonempty closed convex subset of E. Let S : C → C be a nonepansive mapping and QC be a sunny nonexpansive retraction from E onto C. Let A, B : C → E be β-inverse-strongly accretive with β ≥ λK 2 and γ-inverse-strongly accretive with γ ≥ µK 2 , respectively and K
14
P. Katchang and P. Kumam
be the best smooth constant. Let the sequences {αn }, {βn } and {γn } in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1 and satisfy the condition (C1) and (C2) in Theorem 3.1. Suppose F := F (Ω) ∩ F (S) 6= ∅ where Ω defined by Lemma 2.9 and let λ, µ are positive real numbers. For arbitrary given x0 = x ∈ C, the sequences {xn } generated by ( yn = QC (xn − µBxn ), (3.11) xn+1 = αn u + βn xn + γn SQC (yn − λAyn ). Then {xn } converges strongly to QF u, where QF is the sunny nonexpansive retraction of C onto F. Proof . Put f (x) = u for all x ∈ C. Then, by Theorem 3.1, we can conclude the desired conclusion easily. This completes the proof. ¤ Corollary 3.4. [39, Theorem 3.1,] Let E be a uniformly convex and 2-uniformly smooth Banach space which admits a weakly sequentially continuous duality mapping and C be a nonempty closed convex subset of E. Let QC be a sunny nonexpansive retraction from E onto C. Let A, B : C → E be β-inverse-strongly accretive with β ≥ K 2 and γ-inverse-strongly accretive with γ ≥ K 2 , respectively and K be the best smooth constant. Suppose the sequences {αn }, {βn } and {γn } in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1 and satisfy the condition (C1) and (C2) in Theorem 3.1. Assume F (Ω) 6= ∅ where Ω defined by Lemma 2.9. For arbitrary given x1 = u ∈ C, the sequences {xn } generated by (1.7). Then {xn } converges strongly to QF (Ω) u, where QF (Ω) is the sunny nonexpansive retraction of C onto F (Ω). Proof . Taking f (x) = u for all x ∈ C, S = I and λ = µ = 1 in (3.1). Then, from Theorem 3.1, we can conclude the desired conclusion easily. ¤ Remark 3.5. Theorem 3.1, Corollary 3.2, Corollary 3.3 and Corollary 3.4 extend, improve and unify the results of Aoyama et al. [1] and also Ceng et al. [4], Qin et al. [23] and Yao et al. [39].
4
Applications
(I) Application to finding zeros of accretive operators. In Banach space E, we always assume that E is a uniformly convex and 2-uniformly smooth. Recall that an accretive operator T is m-accretive if R(I + rT ) = E for each r > 0. We assume that T is m-accretive and has a zero (i.e., the inclusion 0 ∈ T (z) is solvable) [15, 35, 37]. The set of zeros of T is denoted by T −1 (0), that T −1 (0) = {z ∈ D(T ) : 0 ∈ T (z)}. The resolvent of T , i.e., JrT = (I + rT )−1 , for each r > 0. If T is m-accretive, then JrT : E −→ E is nonexpansive and F (JrT ) = T −1 (0), ∀r > 0. For example, see Rockafellar [27] and [17, 18, 19, 25, 26] for more details. From the main result Theorem 3.1, we can conclude the following result immediately.
Theorem 4.1. Let E be a uniformly convex and 2-uniformly smooth Banach space and C a nonempty closed convex subset of E. Let A, B : C → E be β-inverse-strongly accretive with β ≥ λK² and γ-inverse-strongly accretive with γ ≥ μK², respectively, where K is the 2-uniformly smoothness constant of E, and let T be an m-accretive mapping. Let f be a weakly contractive mapping of C into itself with function ϕ and suppose the sequences {αn}, {βn} and {γn} in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1. Suppose Θ := T⁻¹(0) ∩ A⁻¹(0) ∩ B⁻¹(0) ≠ ∅ and let λ, μ be positive real numbers. Assume the following conditions are satisfied:
(i) lim_{n→∞} αn = 0 and Σ_{n=0}^{∞} αn = ∞;
(ii) 0 < lim inf_{n→∞} βn ≤ lim sup_{n→∞} βn < 1.
Let the sequence {xn} be generated by x0 = x ∈ C and
$$y_n = x_n - \mu Bx_n,\qquad x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n J_r^T(y_n - \lambda Ay_n).\tag{4.1}$$
Then {xn} converges strongly to x̄ = Q_Θ f(x̄), where Q_Θ is the sunny nonexpansive retraction of E onto Θ.
(II) Application to strictly pseudocontractive mappings. Let E be a Banach space and let C be a subset of E. Recall that a mapping T : C → C is said to be k-strictly pseudocontractive if there exist k ∈ [0, 1) and j(x − y) ∈ J(x − y) such that
$$\langle Tx - Ty,\, j(x-y)\rangle \le \|x-y\|^{2} - \frac{1-k}{2}\,\|(I-T)x-(I-T)y\|^{2}\tag{4.2}$$
for all x, y ∈ C. Then (4.2) can be written in the following form:
$$\langle (I-T)x-(I-T)y,\, j(x-y)\rangle \ge \frac{1-k}{2}\,\|(I-T)x-(I-T)y\|^{2}.\tag{4.3}$$
We know that A := I − T is (1 − k)/2-inverse strongly monotone and A⁻¹(0) = F(T) (see [9]).
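A quick numerical sanity check of (4.2)–(4.3): in the Hilbert space ℝ² (where j(x − y) = x − y), a plane rotation T is 0-strictly pseudocontractive with F(T) = {0}, and A = I − T then satisfies the inverse-strong-monotonicity inequality (4.3) with k = 0 (in fact with equality). The choice of T below is ours, purely for illustration.

```python
import numpy as np

theta = 0.9                          # rotation angle; any rotation works here
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x                  # nonexpansive, 0-strictly pseudocontractive
k = 0.0

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    Ax_Ay = (x - T(x)) - (y - T(y))              # (I-T)x - (I-T)y
    lhs = Ax_Ay @ (x - y)                        # <(I-T)x - (I-T)y, x - y>
    rhs = (1 - k) / 2 * np.dot(Ax_Ay, Ax_Ay)     # ((1-k)/2) ||(I-T)x - (I-T)y||^2
    assert lhs >= rhs - 1e-9                     # inequality (4.3)
print("inequality (4.3) verified numerically for a rotation (k = 0)")
```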
Theorem 4.2. Let E be a uniformly convex and 2-uniformly smooth Banach space and C a nonempty closed convex subset of E. Let S : C → C be a nonexpansive mapping and a sunny nonexpansive retraction of E. Let T, U : C → C be k-strictly pseudocontractive and l-strictly pseudocontractive with λ ≤ (1 − k)/(2K²) and μ ≤ (1 − l)/(2K²), respectively. Let f be a weakly contractive mapping of C into itself with function ϕ and suppose the sequences {αn}, {βn} and {γn} in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1. Suppose Θ := F(S) ∩ F(T) ∩ F(U) ≠ ∅ and let λ, μ be positive real numbers. Assume the following conditions are satisfied:
(i) lim_{n→∞} αn = 0 and Σ_{n=0}^{∞} αn = ∞;
(ii) 0 < lim inf_{n→∞} βn ≤ lim sup_{n→∞} βn < 1.
Let the sequence {xn} be generated by x0 = x ∈ C and
$$y_n = (1-\mu)x_n + \mu Ux_n,\qquad x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n S\big((1-\lambda)y_n + \lambda Ty_n\big).\tag{4.4}$$
Then {xn} converges strongly to Q_Θ, where Q_Θ is the sunny nonexpansive retraction of E onto Θ.
Proof. Put A = I − T and B = I − U. From (4.3), we get that A and B are (1 − k)/2- and (1 − l)/2-inverse strongly accretive operators, respectively. It follows that VI(C, A) = VI(C, I − T) = F(T) ≠ ∅, VI(C, B) = VI(C, I − U) =
F(U) ≠ ∅ and VI(C, I − T) ∩ VI(C, I − U) = F(Ω), which is the set of solutions of problem (1.3), equivalently of problem (1.4) (see also Ceng et al. [4, Theorem 4.1, pp. 388–389]). We also have (see Aoyama et al. [1, Theorem 4.1, p. 10]) (1 − λ)yn + λTyn = QC((1 − λ)yn − λTyn) and (1 − λ)xn + λUxn = QC((1 − λ)xn − λUxn). Therefore, by Theorem 3.1, {xn} converges strongly to some element x* of Θ.
(III) Application to Hilbert spaces. In real Hilbert spaces, by Lemma 2.10 and Remark 2.11, it follows from Lemma 4.1 of [23] that we obtain the following lemma.
Lemma 4.3. For given (x*, y*) ∈ C, where y* = PC(x* − μBx*), (x*, y*) is a solution of problem (1.4) if and only if x* is a fixed point of the mapping Ω0 : C → C defined by Ω0(x) = PC[PC(x − μBx) − λAPC(x − μBx)], where PC is the metric projection of H onto C.
It is well known that the best smooth constant is K = √2/2 in Hilbert spaces. From Theorem 3.1, we can obtain the following result immediately.
Theorem 4.4. Let C be a nonempty closed convex subset of a real Hilbert space H. Let A, B : C → H be a β-inverse-strongly monotone mapping with λ ∈ (0, 2β) and a γ-inverse-strongly monotone mapping with μ ∈ (0, 2γ), respectively. Let f be a weakly contractive mapping of C into itself with function ϕ. Suppose the sequences {αn}, {βn} and {γn} in (0, 1) satisfy αn + βn + γn = 1, n ≥ 1. Assume that F(Ω0) ∩ F(S) ≠ ∅, where Ω0 is defined by Lemma 4.3, and let λ, μ be positive real numbers. Assume the following conditions are satisfied:
(i) lim_{n→∞} αn = 0 and Σ_{n=0}^{∞} αn = ∞;
(ii) 0 < lim inf_{n→∞} βn ≤ lim sup_{n→∞} βn < 1.
For arbitrarily given x0 = x ∈ C, let the sequence {xn} be generated by
$$y_n = P_C(x_n - \mu Bx_n),\qquad x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n SP_C(y_n - \lambda Ay_n).\tag{4.5}$$
Then {xn} converges strongly to x̄ = P_{F(Ω0)∩F(S)} f(x̄), and (x̄, ȳ) is a solution of problem (1.4), where ȳ = PC(x̄ − μBx̄).
Acknowledgment. This research is supported by the Centre of Excellence in Mathematics, the Commission on Higher Education, Thailand.
References
[1] K. Aoyama, H. Iiduka and W. Takahashi, Weak convergence of an iterative sequence for accretive operators in Banach spaces, Fixed Point Theory and Applications, vol. 2006, Article ID 35390, 1–13.
Viscosity approximations with weak contraction for finding a common solution of fixed points. . . [2] F. E. Browder, Nonlinear operators and nonlinear equations of evolution in Banach spaces, Proceedings of the Symposium on Pure Mathematics, 18 (1976), 78–81. [3] L.-C. Ceng and J.-C. Yao, Convergence and certain control conditions for hybrid viscosity approximation methods, Nonlinear Analysis, 73 (2010), 2078–2087. [4] L.-C. Ceng, C.-Y. Wang and J.-C. Yao, Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities, Mathematical Methods of Operations Research, 67 (2008), 375–390. [5] S.S. Chang, H.W.J. Lee and C.K. Chan, Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces, Applied Mathematics Letters, 20 (2007), 329–334. [6] J. Chen, L. Zhang and T. Fan, Viscosity approximation methods for nonexpansive mappings and monotone mappings, Journal of Mathematical Analysis and Applications, 334 (2007), 1450–1461. [7] Y.J. Cho, Y. Yao and H. Zhou, Strong convergence of an iterative algorithm for accretive operators in Banach spaces, J. Comput. Anal. Appl., 10 (2008), 113–125. [8] H. Iiduka, W. Takahashi, and M. Toyoda, Approximation of solutions of variational inequalities for monotone mappings, Panamerican Mathematical Journal, 14(2) (2004), 49–61. [9] H. Iiduka and W. Takahashi, Strong convergence theorems for nonexpansive mapping and inverse-strong monotone mappings, Nonlinear Analysis, 61 (2005), 341–350. [10] S. Kamimura and W. Takahashi, Approximating solutions of maximal monotone operators in Hilbert space, Journal of Approximation Theory, 106 (2000), 226–240. [11] S. Kamimura and W. Takahashi, Weak and strong convergence of solutions to accretive operator inclusions and applications, Set-Valued Analysis, 8(4) (2000), 361–374. [12] P. Katchang and P. Kumam, An iterative algorithm for finding a common solution of fixed points and a general system of variational inequalities for two inverse strongly accretive operators, Positivity, 15 (2011), 281–295. [13] P. Katchang, Y. Khamlae and P. Kumam, A viscosity iterative scheme for inverse-strongly accretive operators in Banach spaces, Journal Of Computational Analysis and Applications, 12(3) (2010), 678–686. [14] S. Kitahara and W. Takahashi, Image recovery by convex combinations of sunny nonexpansive retractions, Method Nonlinear Analysis, 2 (1993), 333–342. [15] F. Kohsaka and W. Takahashi, Fixed point theorems for a class of nonlinear mappings related to maximal monotone operators in Banach spaces,Archiv der Mathematik, 91 (2008), 166-177. [16] S. Li, Y. Su, L. Zhang, H. Zhao and L. Li, Viscosity approximation methods with weak contraction for L-Lipschitzian pseudocontractive self-mapping, Nonlinear Analysis, DOI: 10.1016/j.na.2010.07.024. [17] S. Matsushita and W. Takahashi, Existence of zero points for pseudomonotone operators in Banach spaces, Journal of Global Optimization, 42 (2008), 549–558.
[18] S. Matsushita and W. Takahashi, On the existence of zeros of monotone operators in reflexive Banach spaces. Journal of Mathematical Analysis and Applications, 323 (2006), 1354–1364. [19] S. Matsushita and W. Takahashi, Existence theorems for set-valued operators in Banach spaces, SetValued Analysis, 15 (2007), 251–264. [20] A. Moudafi, Viscosity Approximation Methods for Fixed-Points Problems, Journal of Mathematical Analysis and Applications, 241 (2000), 46–55. [21] M.A. Noor, K.I. Noor and T.M. Rassias, Some aspects of variational inequalities, Journal of Computational and Applied Mathematics, 47(3) (1993), 285–312. [22] M.O. Osilike and D.I. Igbokwe, Weak and strong convergence theorems for fixed points of pseudocontractions and solutions of monotone type operator equations, Computers & Mathematics with Applications, 40 (2000), 559–567. [23] X. Qin, S.Y. Cho and S.M. Kang, Convergence of an iterative algorithm for systems of variational inequalities and nonexpansive mappings with applications, Journal of Computational and Applied Mathematics, 233 (2009), 231–240. [24] X. Qin, S.M. Kang and M. Shang, Generalized system for relaxed cocoercive variational inequalities in Hilbert spaces, Appl. Anal., 87 (2008), 421–430. [25] S. Reich, Asymptotic behavior of contractions in Banach spaces, Journal of Mathematical Analysis and Applications, 44(1) (1973), 57–70. [26] S. Reich, Strong convergence theorems for resolvents of accretive operators in Banach spaces, Journal of Mathematical Analysis and Applications, 75 (1980), 287–292. [27] R. T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM Journal on Control and Optimization, 14 (1976), 877–898. [28] T. Suzuki, Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals, Journal of Mathematical Analysis and Applications, 305(1) (2005), 227–239. [29] T. Suzuki, Moudafis viscosity approximations with MeirKeeler contractions, Journal of Mathematical Analysis and Applications, 325 (2007), 342-352. [30] G. Stampacchi, Formes bilineaires coercivites sur les ensembles convexes, C. R. Acad. Sci. Paris, 258 (1964), 4413–4416. [31] W. Takahashi, Convex Analysis and Approximation Fixed Points. Yokohama Publishers, Yokohama (2000) (Japanese) [32] W. Takahashi, Nonlinear Functional Analysis. Fixed Point Theory and its Applications, Yokohama Publishers, Yokohama (2000).
Viscosity approximations with weak contraction for finding a common solution of fixed points. . . [33] W. Takahashi, Viscosity approximation methods for resolvents of accretive operators in Banach spaces, Journal of Fixed Point Theory and Applications, 1 (2007), 135–147. [34] S. Takahashi and W. Takahashi, Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces, Journal of Mathematical Analysis and Applications, 331 (2007), 506– 515. [35] W. Takahashi and Y. Ueda, On Reichs strong convergence theorems for resolvents of accretive operators. Journal of Mathematical Analysis and Applications, 104 (1984), 546–553. [36] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Analysis, 16 (1991), 1127–1138. [37] H.K. Xu, Strong convergence of an iterative method for nonexpansive and accretive operators, Journal of Mathematical Analysis and Applications, 314 (2006), 631–643. [38] H. K. Xu, Viscosity approximation methods for nonexpansive mappings, Journal of Mathematical Analysis and Applications, 298 (2004), 279–291. [39] Y. Yao, M. A. Noor, K. I. Noor, Y.-C. Liou and H. Yaqoob, Modified extragradient methods for a system of variational inequalities in Banach spaces, Acta Applicandae Mathematicae, 110(3) (2010), 1211-1224.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.7, 1288-1298, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Characterization on Simultaneous Approximation for Left Gamma Quasi-Interpolants in Lp Spaces ∗ Hongbiao Jiang Department of mathematics, Lishui University, Lishui, 323000, P. R. China
Abstract: Recently M. W. Müller [9] gave the Gamma quasi-interpolants and obtained approximation equivalence theorems with ωφ^{2r}(f, t)_p. In this paper we obtain characterizations of simultaneous approximation for left Gamma quasi-interpolants with ωφ^{2r}(f, t)_p.
Key words: Gamma operator; quasi-interpolants; simultaneous approximation; characterization; modulus of smoothness
AMS (2000): 41A25, 41A35, 41A36
1. Introduction
The Gamma operators
$$G_n(f,x)=\int_0^{\infty}g_n(x,t)\,f\!\Big(\frac{n}{t}\Big)\,dt,\qquad g_n(x,t)=\frac{x^{n+1}}{n!}\,e^{-xt}t^{n},\qquad x\in I:=(0,\infty),\tag{1.1}$$
were introduced by M. Müller and A. Lupas [8]. Another representation of this operator is
$$G_n(f,x)=\frac{1}{n!}\int_0^{\infty}e^{-t}t^{n}f\!\Big(\frac{nx}{t}\Big)\,dt,\qquad x\in I.\tag{1.2}$$
These operators have been investigated in subsequent papers (e.g. [3], [6], [7], [13], [14]) and their references. In order to obtain a much faster convergence rate, the quasi-interpolants G_n^{(k)} of G_n in the sense of Sablonnière [11] are considered (cf. [9]), and some approximation equivalence theorems for the quasi-interpolants of Bernstein-type and Gamma-type operators were given (cf. [4], [5], [10]). First, we recall their construction.
Supported by NSF of Zhejiang Province(102005)and the Foundation of Key Discipline of Zhejiang Province(2005). E-mail: [email protected].
Π_n denotes the space of algebraic polynomials of degree at most n; on Π_n the Gamma operator G_n and its inverse G_n^{-1} can be expressed as linear differential operators with polynomial coefficients, in the form
$$G_n=\sum_{j=0}^{n}\beta_j^{n}D^{j}\qquad\text{and}\qquad G_n^{-1}=\sum_{j=0}^{n}\alpha_j^{n}D^{j},$$
with D = d/dx and D⁰ = id.
The left Gamma quasi-interpolants of degree k are defined by (cf. [9])
$$G_n^{(k)}(f,x)=\sum_{j=0}^{k}\alpha_j^{n}(x)\,D^{j}G_n(f,x),\qquad 0\le k\le n.\tag{1.3}$$
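Before turning to the approximation theorems, the following short script evaluates G_n(f, x) numerically via the representation (1.2), using Gauss–Laguerre quadrature for the weight e^{−t}. The test function f and the parameter values are arbitrary illustrative choices of ours; the computation only exhibits the expected convergence G_n(f, x) → f(x) as n grows.

```python
import numpy as np
from math import lgamma

def gamma_operator(f, x, n, quad_nodes=100):
    # G_n(f, x) = (1/n!) * integral_0^inf e^{-t} t^n f(n*x/t) dt   (formula (1.2)).
    t, w = np.polynomial.laguerre.laggauss(quad_nodes)   # nodes/weights for weight e^{-t}
    # t^n / n! is computed in log-space to avoid overflow for larger n.
    factor = np.exp(n * np.log(t) - lgamma(n + 1))
    return np.sum(w * factor * f(n * x / t))

f = lambda u: u ** 2 + 1.0
x = 2.0
for n in [5, 20, 60]:
    approx = gamma_operator(f, x, n)
    print(n, approx, abs(approx - f(x)))   # the error behaves like O(1/n) for this smooth f
```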
In 2001, Müller [9] obtained an approximation equivalence theorem: for f ∈ Lp(I), 1 ≤ p ≤ ∞, φ(x) = x, n ≥ 4r, r ∈ N, and 0 < α < r, the following statements are equivalent:
$$\|G_n^{(2r-1)}f-f\|_p=O(n^{-\alpha})\iff \omega_\varphi^{2r}(f,t)_p=O(t^{2\alpha}).\tag{1.4}$$
In 2005, Guo [5] gave a weighted approximation equivalence theorem: for f ∈ L∞(I), 0 ≤ λ ≤ 1, φ(x) = x, w(x) = x^a(1 + x)^b (a ≥ 0, b arbitrary), n ≥ 4r, and 0 < α < 2r,
$$\big|w(x)\big(G_n^{(2r-1)}(f,x)-f(x)\big)\big|=O\big((n^{-1/2}\varphi^{1-\lambda}(x))^{\alpha}\big)\iff \omega_{\varphi^{\lambda}}^{2r}(f,t)_w=O(t^{\alpha}).\tag{1.5}$$
The aim of the present article is to prove the analogous result in the Lp-metric. It is easy to see that for every function f ∈ Lp(I), G_n^{(2r−1)}(f, x) is defined for x ∈ I and n ≥ 4r. The analogous result in Lp spaces is as follows.
Theorem 1.1. Let f^{(s)} ∈ Lp(I), 0 ≤ s ≤ 2r − 1, 1 ≤ p ≤ ∞, φ(x) = x, n ≥ 4r, and 0 < α < 2r − s. Then the following statements are equivalent:
$$\|D^{s}G_n^{(2r-1)}(f,x)-f^{(s)}(x)\|_p=O(n^{-\alpha/2})\iff \omega_\varphi^{2r-s}(f^{(s)},t)_p=O(t^{\alpha}).\tag{1.6}$$
Remark 1.2. When s = 0, Theorem 1.1 reduces to Müller's result [9]. Throughout this paper, C denotes a positive constant independent of n and x, not necessarily the same at each occurrence.
2. Preliminaries
In this section, we first give the definitions of the modulus of smoothness and the K-functional (cf. [3]):
$$\omega_\varphi^{r}(f,t)_p=\sup_{0<h\le t}\|\Delta_{h\varphi}^{r}f(x)\|_p,\qquad \varphi(x)=x.\tag{2.1}$$
for all integers q > p and all a ∈ A, we have
$$\Big\|\frac{1}{k^{2p}}f(k^{p}a)-\frac{1}{k^{2q}}f(k^{q}a)\Big\|\le\sum_{n=p}^{q-1}\frac{1}{2k^{2n+2}}\,\chi(k^{n}a,0).\tag{2.7}$$
It follows from (2.1) and (2.7) that the sequence {f(kⁿa)/k²ⁿ} is Cauchy for all a ∈ A. By the completeness of A, the sequence {f(kⁿa)/k²ⁿ} converges. So we define the mapping L : A → A by
$$L(a):=\lim_{n\to\infty}\frac{1}{k^{2n}}f(k^{n}a)$$
for all a ∈ A. It follows from (2.1) and (2.2) that
$$\|L(ka+b)-L(ka-b)-2k^{2}L(a)-2L(b)\|=\lim_{n\to\infty}\frac{1}{k^{2n}}\big\|f(k^{n+1}a+k^{n}b)-f(k^{n+1}a-k^{n}b)-2k^{2}f(k^{n}a)-2f(k^{n}b)\big\|\le\lim_{n\to\infty}\frac{1}{k^{2n}}\chi(k^{n}a,k^{n}b)=0.$$
So the mapping L is quadratic. Replacing a and b by kⁿa and kⁿb in (2.3), respectively, we have ‖2f(k²ⁿab) − 2f(kⁿa)k²ⁿb²‖ ≤ σ(kⁿa, kⁿb). Dividing both sides of this inequality by k⁴ⁿ, we get
$$\Big\|\frac{f(k^{2n}ab)}{k^{4n}}-\frac{f(k^{n}a)}{k^{2n}}\,b^{2}\Big\|\le\frac{1}{2k^{4n}}\,\sigma(k^{n}a,k^{n}b).$$
Taking the limit as n → ∞, we conclude L(ab) = L(a)b². So the quadratic mapping L is a quadratic left centralizer. Putting p = 0 and passing to the limit q → ∞ in (2.7), we get
$$\|f(a)-L(a)\|\le\frac{1}{2k^{2}}\,\widetilde{\chi}(a,0).$$
Let L0 : A → A be another quadratic mapping satisfying ‖f(a) − L0(a)‖ ≤ (1/(2k²)) χ̃(a, 0). Since L and L0 are quadratic, for each a ∈ A we have
$$\|L(a)-L_0(a)\|=\frac{1}{k^{2n}}\|L(k^{n}a)-L_0(k^{n}a)\|\le\frac{1}{k^{2n}}\big(\|L(k^{n}a)-f(k^{n}a)\|+\|L_0(k^{n}a)-f(k^{n}a)\|\big)\le\frac{1}{k^{2n+2}}\,\widetilde{\chi}(k^{n}a,0),$$
which tends to zero as n → ∞ for all a ∈ A. So we have L(a) = L0(a) for all a ∈ A. A similar argument for g shows that there exists a unique quadratic right centralizer R : A → A satisfying
$$\|g(a)-R(a)\|\le\frac{1}{2k^{2}}\,\widetilde{\chi}(a,0)\qquad\text{and}\qquad R(ab)=a^{2}R(b)$$
for all a, b ∈ A. The quadratic right centralizer R : A → A is defined by R(a) := lim_{n→∞} g(kⁿa)/k²ⁿ. Replacing a and b by kⁿa and kⁿb in (2.6) and dividing by k⁴ⁿ, we get
$$\Big\|a^{2}\,\frac{f(k^{n}b)}{k^{2n}}-\frac{g(k^{n}a)}{k^{2n}}\,b^{2}\Big\|\le\frac{1}{k^{4n}}\,\sigma(k^{n}a,k^{n}b).$$
Passing to the limit as n → ∞, we obtain a²L(b) = R(a)b² for all a, b ∈ A, as desired.
Theorem 2.2. Let A be a Banach algebra. Suppose that f : A → A is a mapping with f(0) = 0. If there exist a mapping g : A → A with g(0) = 0 and functions χ : A² → [0, ∞) and σ : A² → [0, ∞) satisfying (2.2)–(2.6) and
$$\widetilde{\chi}(a,b):=\sum_{n=1}^{\infty}k^{2n}\chi(k^{-n}a,k^{-n}b)<\infty,\qquad \lim_{n\to\infty}k^{4n}\sigma(k^{-n}a,k^{-n}b)=0$$
for all a, b ∈ A, then there exists a unique quadratic double centralizer (L, R) such that
$$\|f(a)-L(a)\|\le\frac{1}{2k^{2}}\,\widetilde{\chi}(a,0),\qquad \|g(a)-R(a)\|\le\frac{1}{2k^{2}}\,\widetilde{\chi}(a,0)$$
for all a ∈ A.
Proof. Using the same method as in the proof of Theorem 2.1, one can obtain the result.
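The direct method behind Theorems 2.1 and 2.2 is easy to visualize numerically. In the sketch below we take the Banach algebra A = ℝ, a map f that is the quadratic map a ↦ a² plus a bounded perturbation (so the relevant control functions stay bounded), and k = 2; the iterates f(kⁿa)/k²ⁿ then converge to the quadratic mapping L(a) = a², exactly as in the construction of L in the proof of Theorem 2.1. All concrete choices here are ours and serve only as an illustration.

```python
import math

k = 2
f = lambda a: a * a + math.sin(a)      # quadratic map plus a bounded perturbation

a = 1.7
for n in range(0, 9):
    approx_L = f(k ** n * a) / k ** (2 * n)     # L(a) = lim_n f(k^n a) / k^{2n}
    print(n, approx_L)                          # converges to a^2 = 2.89

L = lambda a: a * a                             # the limit mapping
# On the commutative algebra R, L is a quadratic left centralizer: L(ab) = L(a) * b^2.
print(abs(L(3.0 * 4.0) - L(3.0) * 4.0 ** 2))    # 0.0
```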
Acknowledgements The second author and third author were supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education, Science and Technology (NRF-2010-0009232) and (NRF-2010-0021792), respectively.
References [1] T. Aoki, On the stability of the linear transformation in Banach spaces, J. Math. Soc. Japan 2 (1950) 64–66. [2] M. Eshaghi Gordji, A. Bodaghi, On the stability of quadratic double centralizers on Banach algebras, J. Comput. Anal. Appl. 13 (2011) 724–729. [3] D.H. Hyers, On the stability of the linear functional equation, Proc. Natl. Acad. Sci. USA 27 (1941) 222–224. [4] M.S. Moslehian, F. Rahbarnia and P.K. Sahoo, Approximate double centralizers are exact double centralizers, Bol. Soc. Mat. Mexicana 13 (2007) 111–122. [5] Th.M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978) 297–300. [6] F. Skof, Propriet locali e approssimazione di operatori, Rend. Sem. Mat. Fis. Milano 53 (1983) 113–129. [7] S.M. Ulam, Problems in Modern Mathematics, Chapter VI, Science Editions., Wiley, New York, 1964.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.7, 1303-1320, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
A weighted bivariate blending rational interpolation function and visualization control Yunfeng Zhanga,b , Fangxun Baoc,∗, Caiming Zhanga,b , Qi Duanc a
School of Computer Science & Technology, Shandong Economic University, Jinan, 250014, China b Shandong Provincial Key Laboratory of Digital Media Technology, Jinan, 250014, China c School of Mathematics and System Science, Shandong University, Jinan, 250100, China
Abstract
A new weighted bivariate blending rational spline with parameters is constructed based only on function values of a function. The interpolation is C¹ in the whole interpolating region without any restriction on the free parameters. This paper deals with the properties of the interpolation surface, including the properties of the basis functions, the properties of the integral weighted coefficients and the bounded property of the interpolation. In order to meet the needs of practical design, an interpolation technique is employed to control the shape of surfaces. The method of value control of the interpolation at any point in the interpolating region is developed. This control method can be applied to modify the local shape of an interpolating surface simply by selecting suitable parameters.
Keywords: Rational spline, bivariate blending interpolation, weighted interpolation, computer-aided geometric design
1. Introduction
The construction of curves and surfaces and their mathematical description is a key issue in computer-aided geometric design. There are many ways to tackle this problem [2-4,9,10,12-14,17,18], for example, the polynomial spline method, the NURBS method and the Bézier method. These methods are effective and applied widely in shape design of industrial
Corresponding author Email address: [email protected] (Fangxun Bao )
products. Generally speaking, most of the polynomial spline methods are the interpolating methods. However, to construct the polynomial spline, usually derivative values are needed as the interpolating data besides the function values. Unfortunately, in many practical problems, such as the description of rainfall in some rainy region and some industrial geometric shapes, derivative values are difficult to obtain. On the other hand, one of the disadvantages of the polynomial spline method is its global property; it is not possible for the local modification for unchanged given data. The NURBS and B` ezier methods are the so-called ”no-interpolating type” methods; this means that the constructed curve and surface do not pass through the given data, and the given points play the role of the control points. In order to meet the need of ever-increasing model complexity and to incorporate manufacturing requirement, the main difficulties to be solved are: (i) how to construct the spline interpolation with simple and explicit mathematical expressions so that they may be convenient to use both in practical application and in theoretical study; (ii) how to modify the shape of a curve or surface for unchanged given data. It seems that these two difficulties are contradictory in view of the uniqueness of the interpolation for the given interpolating data. In recent years, a univariate rational spline interpolation with the parameters has received attention in the literature [1,5,11,15,16]. Since the parameters in the interpolation function are selective according to the control need, the constrained control of the shape becomes possible. Motivated by the univariate rational spline interpolation, the bivariate rational interpolation with parameters which has simple and explicit mathematical representation has been studied [6-8]. The construction of these kinds of interpolation splines can solve aforementioned difficulties. However, for the bivariate rational interpolation, in order to maintain smoothness of the interpolating surface, the parameters of y-direction for bivariate interpolating surface must be limited, they can not be selected freely for different interpolating subregion. In this paper, A new weighted bivariate blending rational interpolation will be constructed based only on function values of interpolated function, and the general properties and local shape control technique of this interpolation will be also discussed. This paper is arranged as follows. In Section 2, a new weighted bivariate blending rational interpolation based only on the function values is constructed. Section 3 discusses the bases function of this bivariate interpolation. Section 4 is about some properties of the interpolation, especially, the properties of integral weights coefficients of the interpolation is derived. In Section 2
5, the bounded property of this interpolation is given. Section 6 deals with the modification of the interpolating surface; in the special case, the central point value control technique is developed. Numerical examples are presented to show the performance of the method in Section 7.
2. Interpolation
Let Ω : [a, b; c, d] be the plane region and {(xi, yj, fi,j), i = 1, 2, · · · , n + 1; j = 1, 2, · · · , m + 1} be a given set of data points, where a = x1 < x2 < · · · < xn+1 = b and c = y1 < y2 < · · · < ym+1 = d are the knots and fi,j = f(xi, yj). Let hi = xi+1 − xi and lj = yj+1 − yj, and for any point (x, y) ∈ [xi, xi+1; yj, yj+1] in the xy-plane, let θ = (x − xi)/hi and η = (y − yj)/lj. First, for each y = yj, j = 1, 2, · · · , m + 1, construct the x-direction interpolant curve; this is given by
$$P^{*}_{i,j}(x)=\frac{p^{*}_{i,j}(x)}{q^{*}_{i,j}(x)},\qquad i=1,2,\cdots,n-1,\tag{1}$$
where
$$p^{*}_{i,j}(x)=(1-\theta)^{3}\alpha_{i,j}f_{i,j}+\theta(1-\theta)^{2}V^{*}_{i,j}+\theta^{2}(1-\theta)W^{*}_{i,j}+\theta^{3}f_{i+1,j},\qquad q^{*}_{i,j}(x)=(1-\theta)\alpha_{i,j}+\theta,$$
and
$$V^{*}_{i,j}=(\alpha_{i,j}+1)f_{i,j}+\alpha_{i,j}f_{i+1,j},\qquad W^{*}_{i,j}=(\alpha_{i,j}+2)f_{i+1,j}-h_{i}\Delta^{*}_{i+1,j},$$
with α_{i,j} > 0 and Δ*_{i,j} = (f_{i+1,j} − f_{i,j})/h_i. This interpolation is called the rational cubic interpolation based on function values; it satisfies
$$P^{*}_{i,j}(x_{i})=f_{i,j},\quad P^{*}_{i,j}(x_{i+1})=f_{i+1,j},\quad {P^{*}_{i,j}}'(x_{i})=\Delta^{*}_{i,j},\quad {P^{*}_{i,j}}'(x_{i+1})=\Delta^{*}_{i+1,j}.$$
Obviously, the interpolating function P*_{i,j}(x) on [xi, xi+1] is unique for the given data {xr, fr,j, r = i, i + 1, i + 2} and the positive parameter α_{i,j}.
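A minimal numerical sketch of the rational cubic interpolant (1) may be useful; it evaluates P*_{i,j}(x) from three consecutive function values (the third one is needed for Δ*_{i+1,j}) and one free parameter α_{i,j} > 0, and checks the endpoint interpolation conditions. The variable names and the sample data are our own.

```python
def rational_cubic(f_i, f_i1, f_i2, h_i, h_i1, alpha, theta):
    """Evaluate P*_{i,j}(x) of Eq. (1) at theta = (x - x_i)/h_i in [0, 1]."""
    delta_i1 = (f_i2 - f_i1) / h_i1            # Delta*_{i+1,j}
    V = (alpha + 1.0) * f_i + alpha * f_i1     # V*_{i,j}
    W = (alpha + 2.0) * f_i1 - h_i * delta_i1  # W*_{i,j}
    p = ((1 - theta) ** 3 * alpha * f_i + theta * (1 - theta) ** 2 * V
         + theta ** 2 * (1 - theta) * W + theta ** 3 * f_i1)
    q = (1 - theta) * alpha + theta
    return p / q

# Sample data on equally spaced knots x_i, x_{i+1}, x_{i+2} with h = 0.5.
f_i, f_i1, f_i2, h = 1.0, 1.8, 2.1, 0.5
alpha = 2.0                                    # free shape parameter alpha_{i,j} > 0

assert abs(rational_cubic(f_i, f_i1, f_i2, h, h, alpha, 0.0) - f_i) < 1e-12
assert abs(rational_cubic(f_i, f_i1, f_i2, h, h, alpha, 1.0) - f_i1) < 1e-12
print([round(rational_cubic(f_i, f_i1, f_i2, h, h, alpha, t / 4), 4) for t in range(5)])
```

Varying alpha while keeping the data fixed changes the interior shape of the curve without affecting the interpolated endpoint values, which is precisely the shape-control mechanism exploited later in the paper.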
For each pair of (i, j), i = 1, 2, · · · , n − 1 and j = 1, 2, · · · , m − 1, using the ∗ x-direction interpolation function P i,j (x), define the interpolation function P (x, y) on each rectangular region [xi , xi+1 ; yj , yj+1 ] as follows: ∗
∗
P i,j (x, y) = (1 − η)3 P i,j (x) + η(1 − η)2 (2P i,j (x) ∗
∗
+ P i,j+1 (x)) + η 2 (1 − η)(3P i,j+1 (x) ∗
− lj ∆i,j+1 (x)) + η 3 P i,j+1 (x), ∗
(2)
∗
where ∆i,j = (P i,j+1 (x)−P i,j (x))/lj . P i,j (x, y) is called the bivariate blending interpolating function based on function values which satisfies P i,j (xr , ys ) = f (xr , ys ), r = i, i + 1, s = j, j + 1 ∂P i,j (xr , ys ) ∂P i,j (xr , ys ) ∗ = ∆r,s , = ∆r,s . ∂x ∂y This form of interpolation function P i,j (x, y) on [xi , xi+1 ; yj , yj+1 ] is unique for the given data {(xr , ys , fr,s ), r = i, i + 1, i + 2; s = j, j + 1, j + 2} and parameters αi,s , and it is easy to test that the interpolation is C 1 in the whole interpolating region [x1 , xn ; y1 , ym ], no matter what the parameters αi,s might be. The interpolating scheme above begins in x-direction first. Now, let the interpolation begins with y-direction first. For each x = xi , i = 1, 2, · · · , n+1, denote the y-direction interpolation in [yj , yj+1 ] by ∗
pi,j (y)
∗
P i,j (y) =
∗
q i,j (y)
,
i = 1, 2, · · · , m − 1,
(3)
where ∗
∗
pi,j (y) = (1 − η)3 βi,j fi,j + η(1 − η)2 V i,j ∗
+ η 2 (1 − η)W i,j + η 3 fi,j+1 , ∗
q i,j (y) = (1 − η)βi,j + η, and ∗
V i,j = (βi,j + 1)fi,j + βi,j fi,j+1 , ∗
∗
W i,j = (βi,j + 2)fi,j+1 − lj ∆i,j+1 , 4
∗
with βi,j > 0 and ∆i,j = (fi,j+1 − fi,j )/lj . For each pair (i, j), i = 1, 2, · · · , n − 1 and j = 1, 2, · · · , m − 1, using ∗ the y-direction interpolation function P i,j (y), define the bivariate blending interpolation function P i,j (x, y) on [xi , xi+1 ; yj , yj+1 ] as follows: ∗
∗
P i,j (x, y) = (1 − θ)3 P i,j (y) + θ(1 − θ)2 (2P i,j (y) ∗
∗
+ P i+1,j (y)) + θ2 (1 − θ)(3P i+1,j (y) ∗
− hi ∆i+1,j (y)) + θ3 P i+1,j (y), ∗
(4)
∗
where ∆i,j = (P i+1,j (y) − P i,j (y))/hi . The interpolation function P i,j (x, y) satisfies P i,j (xr , ys ) = f (xr , ys ), r = i, i + 1, s = j, j + 1 ∗ ∂P i,j (xr , ys ) ∂P i,j (xr , ys ) = ∆r,s , = ∆r,s . ∂x ∂y
P̂_{i,j}(x, y) on [xi, xi+1; yj, yj+1] is unique for the given data {(xr, ys, fr,s), r = i, i + 1, i + 2; s = j, j + 1, j + 2} and the parameters βr,j, and it is easy to derive that the interpolation is C¹ in the whole interpolating region [x1, xn; y1, ym], no matter what the parameters βr,j might be. Now let
$$P_{i,j}(x,y)=\lambda\,\overline{P}_{i,j}(x,y)+(1-\lambda)\,\widehat{P}_{i,j}(x,y),\tag{5}$$
with the weight coefficient λ ∈ [0, 1], where P̄_{i,j} and P̂_{i,j} denote the blending interpolants constructed in (2) (x-direction first) and (4) (y-direction first), respectively. Then the weighted bivariate blending rational interpolation function P_{i,j}(x, y) satisfies P_{i,j}(xr, ys) = f(xr, ys), r = i, i + 1, s = j, j + 1, its first-order partial derivatives at these knots are the corresponding λ-weighted combinations of the divided differences of the two directional interpolants, and it is C¹ in the whole interpolating region [x1, xn; y1, ym], no matter what the parameters αi,s and βr,j might be. In the remainder of this paper, we consider the interpolation defined in Eq. (5).
3. The Bases of the Interpolation Consider the equally spaced knots case, namely, for all i = 1, 2, · · · , n and j = 1, 2, · · · , m, hi = hj and li = lj . From the Eq.(1)-(5), the interpolation function Pi,j (x, y) defined in (5) can be written as follows: Pi,j (x, y) =
2 X 2 X
ωr,s (θ, η)fi+r,j+s ,
r=0 s=0
where ω0,0 (θ, η) = + ω0,1 (θ, η) = + ω0,2 (θ, η) = − ω1,0 (θ, η) = + ω1,1 (θ, η) = + ω1,2 (θ, η) = −
(1 − θ)2 (θ + αi,j )(1 − η)2 (1 + η)λ (1 − θ)αi,j + θ (1 + θ)(1 − θ)2 (1 − η)2 (η + βi,j )(1 − λ) , (1 − η)βi,j + η (1 − θ)2 (θ + αi,j+1 )η(1 + 2η − 2η 2 )λ (1 − θ)αi,j+1 + θ 2 (1 − θ) (1 + θ)η(3η − 2η 2 + (1 − η)βi,j )(1 − λ) , (1 − η)βi,j + η (1 − θ)2 (θ + αi,j+2 )η 2 (1 − η)λ − (1 − θ)αi,j+2 + θ 2 (1 − θ) (1 + θ)η 2 (1 − η)(1 − λ) , (1 − η)βi,j + η θ(3θ − 2θ2 + (1 − θ)αi,j )(1 − η)2 (1 + η)λ (1 − θ)αi,j + θ θ(1 + 2θ − 2θ2 )(1 − η)2 (η + βi+1,j )(1 − λ) , (1 − η)βi+1,j + η θ(3θ − 2θ2 + (1 − θ)αi,j+1 )η(1 + 2η − 2η 2 )λ (1 − θ)αi,j+1 + θ 2 θ(1 + 2θ − 2θ )η(3η − 2η 2 + (1 − η)βi+1,j )(1 − λ) , (1 − η)βi+1,j + η θ(3θ − 2θ2 + (1 − θ)αi,j+2 )η 2 (1 − η)λ − (1 − θ)αi,j+2 + θ θ(1 + 2θ − 2θ2 )η 2 (1 − η)(1 − λ) , (1 − η)βi+1,j + η 6
(6)
θ2 (1 − θ)(1 − η)2 (1 + η)λ (1 − θ)αi,j + θ θ2 (1 − θ)(1 − η)2 (η + βi+2,j )(1 − λ) , (1 − η)βi+2,j + η θ2 (1 − θ)η(1 + 2η − 2η 2 )λ − (1 − θ)αi,j+1 + θ 2 θ (1 − θ)η(3η − 2η 2 + (1 − η)βi+2,j )(1 − λ) , (1 − η)βi+2,j + η θ2 (1 − θ)η 2 (1 − η)λ (1 − θ)αi,j+2 + θ θ2 (1 − θ)η 2 (1 − η)(1 − λ) , (1 − η)βi+2,j + η
ω2,0 (θ, η) = − − ω2,1 (θ, η) = − ω2,2 (θ, η) = +
and ωr,s (θ, η), (r =, i, i + 1; s = j, j + 1) are called the basis of the bivariate interpolation defined by (6). These base functions satisfy 2 X 2 X
ωr,s (θ, η) = 1.
r=0 s=0
Furthermore, when αi,s → 0 and βr,j → 0, the basis of the weighted bivariate interpolation Pi,j (x, y) defined in (6) become ω0,0 (θ, η) ω0,1 (θ, η) ω0,2 (θ, η) ω1,0 (θ, η) ω1,1 (θ, η) ω1,2 (θ, η) ω2,0 (θ, η) ω2,1 (θ, η) ω2,2 (θ, η)
= = = = = = = = =
(1 − θ)2 (1 − η)2 (1 + θ(1 − λ) + ηλ), (1 − θ)2 η((1 + 2η − 2η 2 )λ + (1 + θ)(3 − 2η)(1 − λ)), −(1 − θ)2 η(1 − η)(1 + θ(1 − λ) − (1 − η)λ), θ(1 − η)2 ((3θ − 2θ2 )(1 + η)λ + (1 + 2θ − 2θ2 )(1 − λ)), θη((3 − 2θ)(1 + 2η − 2η 2 )λ + (1 + 2θ − 2θ2 )(3 − 2η)(1 − λ)), −θη(1 − η)((3 − 2θ)ηλ + (1 + 2θ − 2θ2 )(1 − λ)), −θ(1 − θ)(1 − η)2 (θ(1 − λ) + (1 + η)λ), −θ(1 − θ)η(θ(3 − 2η)(1 − λ) + (1 + 2η − 2η 2 )λ), θ(1 − θ)η(1 − η)(θ(1 − λ) + ηλ).
Also, αi,s → ∞ and βr,j → ∞, the basis of the weighted bivariate interpolation Pi,j (x, y) defined in (5) become ω0,0 (θ, η) = (1 − θ)(1 − η)(1 − θ2 (1 − λ) − η 2 λ), 7
ω0,1 (θ, η) ω0,2 (θ, η) ω1,0 (θ, η) ω1,1 (θ, η) ω1,2 (θ, η) ω2,0 (θ, η) ω2,1 (θ, η) ω2,2 (θ, η)
= = = = = = = =
(1 − θ)η(1 − θ2 (1 − λ) + 2η(1 − η)λ), −(1 − θ)(1 − η)η 2 λ, θ(1 − η)(1 + 2θ(1 − θ)(1 − λ) − η 2 λ), θη(1 + 2θ(1 − θ)(1 − λ) + 2η(1 − η)λ), −θ(1 − η)η 2 λ, −θ2 (1 − θ)(1 − η)(1 − λ), −θ2 (1 − θ)η(1 − λ), 0.
4. Some integral properties For the weighted bivariate blending interpolation defined by (5), we use Eq.(7) to arrive at the following unity property. Property 1. Let f (x, y) ≡ 1, ∀(x, y) ∈ Ω, and Pi,j (x, y) is its interpolation function over [xi , xi+1 ; yj , yj+1 ] defined in (5). When 0 ≤ λ ≤ 1, no matter what positive number the parameters αi,s , βr,j take, the unity property holds, namely Z Z Pi,j (x, y)dxdy = hi lj , D
where D denotes the subregion [xi , xi+1 ; yj , yj+1 ]. Use Eq.(6) to define Z Z 2 X 2 X ∗ ωr,s fi+r,j+s , Pi,j (x, y)dxdy = hi lj D
r=0 s=0
where
Z Z ∗ ωr,s
=
ωr,s (θ, η)dθdη, [0,1;0,1]
r = i, i + 1, i + 2; s = j, j + 1, j + 2, ∗ ωr,s are called the integral weights coefficients of the interpolation defined in (5). Furthermore, when λ = 1, the interpolation function Pi,j (x, y), that is, P i,j (x, y) defined in (2). Denote Z 1 ur,s = ωr,s (θ, η)dη, 0
8
r = i, i + 1, i + 2; s = j, j + 1, j + 2. It is easy to show that the following conclusions hold for any parameters αi,s > 0, 0 ≤ u0,0 ≤
2 , 15
2 0 ≤ u0,1 ≤ , 3
−
1 ≤ u0,2 ≤ 0, 12
3 3 0 ≤ u1,1 ≤ , − ≤ u1,2 ≤ 0, 4 32 5 1 1 − ≤ u2,0 ≤ 0, − ≤ u2,1 ≤ 0, 0 ≤ u2,2 ≤ . 48 6 48 Thus, it is easy to conclude the following property. 0 ≤ u1,0 ≤
15 , 32
Property 2. For the bivariate blending interpolation P i,j (x, y) defined in (2), the integral weights coefficients of interpolation satisfy: 1 ∗ < ω0,2 < 0, 12
∗ 0 < ω0,0
$$f^{*}(t)=\inf\{y>0:\lambda_f(y)\le t\},\qquad t\ge 0,$$
where inf φ = ∞. Also the average(maximal) function of f on (0, ∞) is given by 1 t (2.3) f ∗∗ (t) = 0 f ∗ (s) ds. t Note that λf (·) , f ∗ (·) and f ∗∗ (·) are non-increasing and right continuous functions. Also, it is obvious that if X has a finite measure, then λf is bounded above by µ (X) and so f ∗ (t) = 0 for all t ≥ µ (X). A positive measurable function L, defined on some neighborhood of infinity, is said to be slowly varying if, for every s > 0, L (st) → 1 (t → +∞) . (2.4) L (t) These functions were introduced by Karamata [11], (see also [3],[14]). Another definition of slowly varying functions can be found in [8] such as: D 2.3. A positive and Lebesgue measurable function b is said to be slowly varying (s.v.) on [1, ∞) in the sense of Karamata if, for each ε > 0, tε b (t) is equivalent to a non-decreasing function and t−ε b (t) is equivalent to a non-increasing function on [1, ∞). The detailed study of Karamata theory, properties and examples of s.v. functions can be found in [3,6,7,11] and [17,Chap.V, p.186]. Let m ∈ N and α = (α1 , ..., αm ) ∈ Rm . If we denote by ϑm α the real function defined by m αi m ϑα (t) = i=1 li (t) for all t ∈ (0, ∞) , where l1 , ..., lm are positive functions defined on (0, ∞) by
l1 (t) = 1 + |log t| , li (t) = 1 + log li−1 (t) , i ≥ 2 then the (1) (2) (3) (4)
following functions are s.v. on [1, ∞): m b (t) = ϑm α (t) with m ∈ N and α ∈ R ; α b (t) = exp (log t) with 0 < α < 1; α b (t) = exp (lm (t)) with 0 < α < 1, m ∈ N; b (t) = lm (t) with m ∈ N.
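The iterated-logarithm functions l_m and the slow-variation property (2.4) are easy to check numerically; the script below builds l_m(t) recursively and verifies that b(st)/b(t) → 1 as t → ∞ for the example b = l_2 (the choices of m and s are arbitrary, purely illustrative).

```python
import math

def l(m, t):
    # l_1(t) = 1 + |log t|, and l_i(t) = 1 + log l_{i-1}(t) for i >= 2.
    value = 1.0 + abs(math.log(t))
    for _ in range(m - 1):
        value = 1.0 + math.log(value)
    return value

b = lambda t: l(2, t)          # a slowly varying function on [1, infinity)
s = 10.0
for t in [1e2, 1e4, 1e8, 1e16]:
    print(t, b(s * t) / b(t))  # the ratios approach 1, as required by (2.4)
```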
3
Given a s.v. function b on [1, ∞), we denote by γ b the positive function defined by
1 γ b (t) = b max t, for all t > 0. t It is known that any s.v. function b on (0, ∞) is equivalent to a s.v. continuous function b on (0, ∞). Consequently, without loss of generality, we assume that all s.v. functions in question are continuous functions in (0, ∞) [9]. D 2.4. Let p, q ∈ (0, ∞] and let b be a s.v. function on [1, ∞). Lorentz-Karamata space Lp,q;b (G) is defined to be the set of all functions f ∈ M0 (G, µ) such that 1 1 (2.5) f ∗p,q;b := t p − q γ b (t) f ∗ (t) q;(0,∞)
is finite. Here · q;(0,∞) stands for the usual Lq (quasi-) norm over the interval (0, ∞).
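To make Definition 2.4 concrete, the following sketch approximates ‖f‖*_{p,q;b} on the interval G = (0, 1) with Lebesgue measure: the decreasing rearrangement f* is obtained by sorting sampled values, and the quasi-norm (2.5) is then a weighted Lq integral over (0, μ(G)). The sample function, the slowly varying b and the parameter values are our own illustrative choices, and only a crude Riemann-sum discretization is used.

```python
import numpy as np

p, q = 2.0, 3.0
b = lambda t: 1.0 + np.log(t)                  # a slowly varying function on [1, inf)
gamma_b = lambda t: b(np.maximum(t, 1.0 / t))  # gamma_b(t) = b(max(t, 1/t))

N = 200000
x = (np.arange(N) + 0.5) / N                   # grid on G = (0, 1), Lebesgue measure
f = x ** (-0.25)                               # sample function

fstar = np.sort(np.abs(f))[::-1]               # decreasing rearrangement f*(t) on (0, 1)
t = (np.arange(N) + 0.5) / N

integrand = (t ** (1.0 / p - 1.0 / q) * gamma_b(t) * fstar) ** q
lk_norm = (np.sum(integrand) / N) ** (1.0 / q)  # numerical value of ||f||*_{p,q;b} from (2.5)
print(lk_norm)
```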
Let 0 < p, q ≤ ∞ and b be a s.v. function on [1, ∞). Let us introduce the functional f p,q;b defined by 1 1 (2.6) f p,q;b := t p − q γ b (t) f ∗∗ (t) ; q;(0,∞)
this is identical with that defined in (2.5) except that f ∗ is replaced by f ∗∗ . However, p = ∞, Lp,q;b (G) spaces are different from the trivial spaces if and only 1when p − 1q γ b (t) if t < ∞. It is easy to see that Lp,q;b (G) spaces endowed with q;(0,∞)
a convenient norm (2.6), are rearrangement-invariant Banach function spaces and have absolutely continuous norm when p ∈ (1, ∞) and q ∈ [1, ∞). It is clear that, for 0 < p < ∞, Lp,q;b (G) spaces contain the characteristic function of every measurable subset of G with finite measure and hence, by linearity, every µ−simple function. From the definition of · ∗p,q;b , it follows that if f ∈ Lp,q;b (G) and p, q ∈ (0, ∞), then the function λf (y) is finite valued. In this case, with a little thought, it is possible to construct a sequence of (simple) functions which satisfy Lemma 1.1 in [4]. Therefore, if we use the same method as employed in the proof of Proposition 2.4 in [10], we can show that Lebesgue dominated convergence theorem holds and so the set of simple functions is dense in LorentzKaramata spaces for p ∈ (1, ∞) and q ∈ [1, ∞). Also, we can see the density of Cc (G) since µ is a Haar measure. ∗ ∗ It follows from [12] that f p,q;b ≤ f p,q;b f p,q;b for all f ∈ M0 (G, µ) where 1 < p ≤ ∞, 1 ≤ q ≤ ∞ and b is a s.v. function on [1, ∞). In particular, Lp,q;b (G) spaces consist of all those functions f for which f p,q;b is finite. Since the function f → f ∗∗ is subadditive, it is obvious that · p,q;b is a norm if q ≥ 1. For more information on Lorentz-Karamata spaces, one can refer to [2, 6, 8, 9, 12] and references therein. 3. Some results for Lorentz-Karamata spaces The following proposition can be proved by using the techniques done in [15]. P 3.1. Let f be a scalar valued, measurable functions on (G, µ). If we define the function Ls f (t) = f (t − s) for any s ∈ G, then we have the following:
4
(1) λLs f (y) = λf (y) for all y ≥ 0, (2) (Ls f )∗ (t) = f ∗ (t) for all t ≥ 0 and (Ls f )∗∗ (t) = f ∗∗ (t) for all t > 0, (3) If p, q ∈ (0, ∞), then Ls f ∗p,q;b = f ∗p,q;b , Ls f p,q;b = f p,q;b . P 3.2. For any f ∈ Lp,q;b (G), 1 < p < ∞ and 1 ≤ q < ∞, the function s → Ls f is continuous from G into Lp,q;b (G). P. Since the set of simple functions is dense in Lp,q;b (G), it is sufficient to show
that for any simple function f, the mapping s → Ls f is continuous. Let f = ni=1 ki
χEi where χE is the characteristic function of E. Then, we can find that Ls f = ni=1 ki χEi +s . Now, we have 1, t ∈ (Ei + s) △ Ei (3.1) χEi +s − χEi (t) = 0, otherwise and
(3.2)
λχEi +s −χEi (y) =
µ ((Ei + s) △ Ei ) , t < 1 0, t≥1
where △ denotes the symmetric difference of sets. Therefore, we write
∗ 1, t < µ ((Ei + s) △ Ei ) (3.3) χEi +s − χEi (t) = 0, t ≥ µ ((Ei + s) △ Ei ) and q χE +s − χE ∗ i i p,q;b
(3.4)
1 1
∗ q = t p − q γ b (t) χEi +s − χEi (t) q;(0,∞) ∞ q
1 1 ∗ p−q γ b (t) χEi +s − χEi (t) dt = t 0
=
µ((Ei +s)△Ei )
0
=
µ((Ei +s)△Ei )
q
t p −1 γ bq (t) dt
0
≈
q
t p −1 γ qb (t) dt
sup
q
t p −1 γ bq (t)
0 0, α > −1 consists of all f ∈ H(D) such that Z p kf kApα = |f (z)|p dAα (z) < ∞. D
When p ≥ 1, the weighted Bergman space with the norm k · kApα becomes a Banach space. If p ∈ (0, 1), it is a Frechet space with the translation invariant metric d(f, g) = kf − gkpApα . The corresponding Lebesgue space is denoted by Lpα = Lpα (D). The weighted-type space Hα∞ , α ≥ 0, consists of all f ∈ H(D) such that kf kHα∞ = sup(1 − |z|2 )α |f (z)| < ∞. z∈D
With the norm k · kHα∞ it is a Banach space. For α = 0 the space becomes the space of bounded holomorphic functions H ∞ . The little weighted-type space, denoted ∞ by Hα,0 is a subspace of Hα∞ consisting of all holomorphic functions f such that lim|z|→1 (1 − |z|2 )α |f (z)| = 0. 2000 Mathematics Subject Classification. Primary 47B38; Secondary 30H20. Key words and phrases. Integral-type operator, weighted Bergman space, boundedness, compactness, unit disk. ∗ Corresponding author. This paper is partially supported by NBHM/DAE, India (Grant No. 48/4/2009/ R&D-II/426) and the Serbian Ministry of Science (projects III44006 and III41025). 1
2
Here we characterize the boundedness and compactness of operator (1) between weighted Bergman spaces (for the case p ∈ (0, 1) we mean the metric boundedness and compactness). Our results extend one-dimensional results in [8]. Throughout the paper C denotes a positive constant not necessarily the same at each occurrence. The notation A ³ B means that there is a positive constant C such that A/C ≤ B ≤ CA. 2. Auxiliary results Here we quote some auxiliary results which will be used in the proofs of the main results of this paper. The first one is the next classical result. Lemma 1. Let f ∈ H(D). Then f ∈ Apα if and only if f (n) (z)(1 − |z|2 )n ∈ Lpα , and µZ ¶1/p n−1 X kf kApα ³ |f (k) (0)| + |f (n) (z)|p (1 − |z|2 )pn+α dA(z) . (2) D
k=0
From (2) we see that if f ∈ Apα , then f (n) ∈ Appn+α and kf (n) kAppn+α ≤ Ckf kApα , and it is easily get that for every z ∈ D there is a C > 0 independent of f such that |f (n) (z)| ≤ C
kf (n) kAppn+α (1 −
|z|2 )(2+pn+α)/p
≤C
kf kApα . (1 − |z|2 )(2+pn+α)/p
(3)
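Lemma 1 can be illustrated numerically: for a concrete holomorphic f, a Monte-Carlo estimate of ‖f‖_{A^p_α} is comparable to the derivative-based quantity on the right of (2). In the sketch we take n = 1, p = 2, α = 0, f(z) = 1/(1 − 0.7z), and we use the common normalization dA_α(z) = (α + 1)(1 − |z|²)^α dA(z) with dA the normalized area measure on D; this normalization is an assumption on our part, since it is fixed earlier in the paper. The two printed numbers should be of the same order, not equal, because the lemma asserts a two-sided equivalence up to constants.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 400000
# Uniform sampling of the unit disk with respect to normalized area measure dA.
r = np.sqrt(rng.uniform(size=M))
phi = rng.uniform(0.0, 2.0 * np.pi, size=M)
z = r * np.exp(1j * phi)

p, alpha, n = 2.0, 0.0, 1
f  = 1.0 / (1.0 - 0.7 * z)          # f(z) = 1/(1 - 0.7 z)
fp = 0.7 / (1.0 - 0.7 * z) ** 2     # f'(z)

w = 1.0 - np.abs(z) ** 2
lhs = (np.mean((alpha + 1) * np.abs(f) ** p * w ** alpha)) ** (1 / p)                 # ||f||_{A^p_alpha}
rhs = abs(1.0) + (np.mean((alpha + 1) * np.abs(fp) ** p * w ** (p * n + alpha))) ** (1 / p)
print(lhs, rhs)   # comparable magnitudes, illustrating the equivalence (2)
```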
The following criterion for compactness is proved in a standard way, see [14]. Lemma 2. Let 0 < p, q < ∞, −1 < α, β < ∞, g ∈ H(D) and n ∈ N0 . Then (n) Ig : Apα → Aqβ is compact if and only if for any bounded sequence (fm )m∈N ⊂ Apα (n)
converging to zero on compacts of D, we have kIg fm kAqβ → 0 as m → ∞. 3. Main results Here we prove our main results, which are incorporated in the following theorem. Theorem 1. Let 0 < p, q < ∞, −1 < α, β < ∞, g ∈ H(D) and n ∈ N0 . Then the following statements hold true. (n)
(a) If p ≤ q, then Ig : Apα → Aqβ is bounded if and only if g ∈ M, where ½ ∞ H1−n+(2+β)/q−(2+α)/p , if n + (2 + α)/p ≤ 1 + (2 + β)/q M= {0}, if n + (2 + α)/p > 1 + (2 + β)/q. (n)
(b) If p ≤ q, then Ig : Apα → Aqβ is compact if and only if g ∈ M0 , where ½ ∞ H1−n+(2+β)/q−(2+α)/p,0 , if n + (2 + α)/p < 1 + (2 + β)/q M0 = {0}, if n + (2 + α)/p ≥ 1 + (2 + β)/q. (c) If p > q, then the following statements are equivalent: (n) (i) Ig : Apα → Aqβ is bounded; (n)
(ii) Ig
: Apα → Aqβ is compact;
(iii) If n + (1 + α)/p < 1 + (1 + β)/q, then g ∈ Lss+γ−ns , where 1s = 1q − p1 , γ = (pβ − qα)/(p − q), while if n + (1 + α)/p ≥ 1 + (1 + β)/q, then g ≡ 0.
3
Proof. (a) By Lemma 1, we have kf kqAq ³ |f (0)|q + kf 0 kqAq β
q+β
.
(n)
Also Ig f (0) = 0 for every f ∈ H(D) and n ∈ N0 . Thus, we have Z kIg(n) f kqAq ³ |f (n) (z)|q |g(z)|q (1 − |z|2 )β+q dA(z). β
(4)
D
∞ First suppose that n + (2 + α)/p ≤ 1 + (2 + β)/q and g ∈ H1−n+(2+β)/q−(2+α)/p . Then by the H¨older inequality, (2), (3) and (4) we obtain Z kIg(n) f kqAq ≤ CkgkqH ∞ |f (n) (z)|q (1 − |z|2 )(q/p)(2+α)+nq−2 dA(z) 1−n+(2+β)/q−(2+α)/p β D Z q ≤ CkgkH ∞ kf kq−p |f (n) (z)|p (1 − |z|2 )α+np dA(z) p Aα 1−n+(2+β)/q−(2+α)/p
≤ CkgkqH ∞
1−n+(2+β)/q−(2+α)/p
kf kqApα ,
D
(5)
(n)
from which the boundedness of Ig : Apα → Aqβ follows in this case. If n + (2 + α)/p > 1 + (2 + β)/q and g ≡ 0 the boundedness of the operator trivially follows. (n) Conversely, suppose that Ig : Apα → Aqβ is bounded. Let fa (z) =
(1 − |a|2 ) (1 − az)
2+α p
2(2+α) p
,
(6)
a ∈ D.
It is known that kfa kApα = 1, for each a ∈ D. Using (2) and (3), we get (n)
k(Ig fa )0 kAqq+β |a|n |g(a)| (n) 0 = C|(I f ) (a)| ≤ C a g (1 − |a|2 )n+(2+α)/p (1 − |a|2 )1+(2+β)/q (n)
≤C
kIg fa kAqβ (1 − |a|2 )1+(2+β)/q
(n)
≤C
kIg kApα →Aqβ (1 − |a|2 )1+(2+β)/q
,
(7)
from which along with the maximum modulus theorem, for n + (2 + α)/p > 1 + (2 + β)/q, it follows that g ≡ 0, while for n + (2 + α)/p ≤ 1 + (2 + β)/q it follows ∞ that g ∈ H1−n+(2+β)/q−(2+α)/p . (n)
(b) First assume that Ig : Apα → Aqβ is compact. By using the functions in (6) which are norm bounded and converge to zero uniformly on compacts of D, as |a| → 1, we get lim kIg(n) fa kAqβ = 0.
|a|→1
From this, the second inequality in (7) and the maximum modulus theorem we easily get that g ∈ M0 . Now suppose that g ∈ M0 . If n + (2 + α)/p ≥ 1 + (2 + β)/q and g ≡ 0 then the result is obvious. ∞ If n + (2 + α)/p < 1 + (2 + β)/q, and g ∈ H1−n+(2+β)/q−(2+α)/p,0 , then for every ε > 0, there is an r ∈ (0, 1) such that for r < |z| < 1 |g(z)|(1 − |z|2 )1−n+(2+β)/q−(2+α)/p < ε.
4
Let (fm )m∈N be a bounded sequence in Apα , say by M , converging to zero uniformly on compacts of D as m → ∞. We have Z (n) kIg(n) fm kqAq ³ |fm (z)|q |g(z)|q (1 − |z|2 )β+q dA(z) β µDZ ¶ Z (n) = + |fm (z)|q |g(z)|q (1 − |z|2 )β+q dA(z) |z|≤r
|z|>r
= J1 (m) + J2 (m). (n)
By the Weierstrass theorem it follows that fm → 0 uniformly on compacts of D, as m → ∞. Hence for large enough m, say m ≥ m0 , we have (n) |fm (z)| < ε,
for
|z| ≤ r.
Using (8) we have that for m ≥ m0 Z J1 (m) ≤ εq |g(z)|q (1 − |z|2 )β+q dA(z) = εK(r).
(8)
(9)
|z|≤r
Similar to (5) is obtained J2 (m) ≤ εq Ckfm kq−p Ap α
Z D
(n) |fm (z)|p (1 − |z|2 )α+np dA(z) ≤ εq CM q .
(10)
From (9) and (10) and since ε is an arbitrary positive number it follows that (n) (n) kIg fm kAqβ → 0 as m → ∞. Hence by Lemma 2 the compactness of Ig : Apα → Aqβ follows. (n) (c) (i) ⇒ (iii). Suppose that Ig : Apα → Aqβ is bounded. Following the lines of the proof of Theorem 1 in [12] it is not difficult to show that ¶p/(p−q) Z µ Z 1 q 2 β+q B(g) := |g(w)| (1 − |w| ) dA(w) dA(z) (1 − |z|2 )2+nq+αq/p D(z,1/2) D is finite, where
¯ z − w ¯ 1o n ¯ ¯ D(z, 1/2) = w : ¯ ¯< 1 − z¯w 2 is the pseudohyperbolic disk centered at z with radius 1/2. By the subharmonicity, we have Z 1 q |g(z)| ≤ |g(w)|q dA(w), (1 − |z|2 )2 D(z,1/2) from which along with the relation 1 − |z|2 ³ 1 − |w|2 for w ∈ D(z, 1/2), we get Z kgksLss+γ−ns = C |g(w)|s (1 − |w|2 )s+γ−ns dA(w) ≤ CB(g). D
Thus g ∈ Lss+γ−ns . If s + γ − ns ≤ −1 ⇔ n+(1 + α)/p ≥ 1+(1 + β)/q, we get g ≡ 0. (ii) ⇒ (i). This implication is obvious. (iii) ⇒ (ii). If 1 + (1 + β)/q ≥ n + (1 + α)/p and g ≡ 0, then it is obvious that (n) Ig : Apα → Aqβ is compact. So let 1 + (1 + β)/q < n + (1 + α)/p and (fm )m∈N be a bounded sequence in Apα that converges to zero uniformly on compacts of D as (n) m → ∞. To show that Ig : Apα → Aqβ is compact, by Lemma 2, it is enough to show that Z q (n) (n) kIg fm kAq ³ |fm (z)|q |g(z)|q (1 − |z|2 )β+q dA(z) → 0, as m → ∞. β
D
5
Let ε > 0 be arbitrary. By the hypothesis there is an r ∈ (0, 1) such that Z |g(z)|s (1 − |z|2 )s+γ−ns dA(z) < εp/(p−q) . |z|>r
By H¨older’s inequality and (2) it follows that Z (n) |fm (z)|q |g(z)|q (1 − |z|2 )β+q dA(z) |z|>r Z (n) = |fm (z)|q (1 − |z|2 )nq+αq/p |g(z)|q (1 − |z|2 )β+q−nq−αq/p dA(z) |z|>r
µZ ¶q/p µ Z (n) ≤ |fm (z)|p (1 − |z|2 )np+α dA(z) D
≤
¶(p−q)/p |g(z)|s (1 − |z|2 )s+γ−ns dA(z)
|z|>r
εCkfm kqApα
q
≤ εCM .
(11)
Since (8) holds for large m, say m ≥ m1 , then we have that Z Z (n) q q 2 β+q q |fm (z)| |g(z)| (1 − |z| ) dA(z) ≤ ε |g(z)|q (1 − |z|2 )β+q dA(z) |z|≤r
|z|≤r
= εq K(r).
(12)
(n)
From (11) and (12), we get kIg fm kAqβ → 0 as m → ∞, as claimed.
¤
Corollary 1. Let 0 < p < ∞, −1 < α < ∞, g ∈ H(D) and n ∈ N0 . Then the following statements hold. (n)
(a) Ig
(n)
(b) Ig
: Apα → Apα is bounded if and only ∞ H1 , H ∞, N= {0},
if g ∈ N, where if n = 0 if n = 1 if n > 1.
∞ , if n = 0, and g ≡ 0 if n ∈ N. : Apα → Apα is compact if and only if g ∈ H1,0
References [1] K. Avetisyan and S. Stevi´ c, Extended Ces` aro operators between different Hardy spaces, Appl. Math. Comput. 207 (2009), 346-350. [2] D. C. Chang, S. Li and S. Stevi´ c, On some integral operators on the unit polydisk and the unit ball, Taiwanese J. Math. 11 (5) (2007), 1251-1286. [3] Z. Hu, Extended Ces` aro operators on the Bloch space in the unit ball of n , Acta. Math. Sci. 23 B (4) (2003), 561-566. [4] S. Li and S. Stevi´ c, Riemann-Stieltjes operators on Hardy spaces in the unit ball of n , Bull. Belg. Math. Soc. Simon Stevin 14 (4) (2007), 621-628. [5] S. Li and S. Stevi´ c, Riemann-Stieltjes type integral operators on the unit ball in n , Complex Variables Elliptic Equations 52 (6) (2007), 495-517. [6] S. Li and S. Stevi´ c, Generalized composition operators on Zygmund spaces and Bloch type spaces, J. Math. Anal. Appl. 338 (2008), 1282-1295. [7] S. Li and S. Stevi´ c, Products of Volterra type operator and composition operator from H ∞ and Bloch spaces to the Zygmund space, J. Math. Anal. Appl. 345 (2008), 40-52. [8] S. Li and S. Stevi´ c, Riemann-Stieltjes operators between different weighted Bergman spaces, Bull. Belg. Math. Soc. Simon Stevin 15 (4) (2008), 677-686. [9] S. Li and S. Stevi´ c, Integral-type operators from Bloch-type spaces to Zygmund-type spaces, Appl. Math. Comput. 215 (2009), 464-473. [10] S. Li and S. Stevi´ c, On an integral-type operator from iterated logarithmic Bloch spaces into Bloch-type spaces, Appl. Math. Comput. 215 (2009), 3106-3115.
C
C
C
6
[11] S. Li and S. Stevi´ c, Products of integral-type operators and composition operators between Bloch-type spaces, J. Math. Anal. Appl. 349 (2009), 596-610. [12] D. H. Luecking, Embedding theorems for spaces of analytic functions via Khinchine’s inequality, Mich. Math. J. 40 (1993), 333-358. [13] C. Pommerenke, Schlichte funktionen und analytische funktionen von beschr¨ ankter mittlerer oszillation, Comment. Math. Helv. 52 (1977), 591-602. [14] H. J. Schwartz, Composition operators on H p , Thesis, University of Toledo 1969. [15] A. Sharma and A. K. Sharma, Carleson measures and a class of generalized integration operators on the Bergman space, Rocky Mountain J. Math. 41 (5) (2011), 1711-1724. [16] S. Stevi´ c, On an integral operator on the unit ball in n , J. Inequal. Appl. Vol. 2005 (1), (2005), 81-88. [17] S. Stevi´ c, Boundedness and compactness of an integral operator on mixed norm spaces on the polydisc, Siberian Math. J. 48 (3) (2007), 559-569. [18] S. Stevi´ c, On a new operator from H ∞ to the Bloch-type space on the unit ball, Util. Math. 77 (2008), 257-263. [19] S. Stevi´ c, On a new operator from the logarithmic Bloch space to the Bloch-type space on the unit ball, Appl. Math. Comput. 206 (2008), 313-320. [20] S. Stevi´ c, Boundedness and compactness of an integral operator between H ∞ and a mixed norm space on the polydisk, Siberian Math. J. 50 (3) (2009), 495-497. [21] S. Stevi´ c, On a new integral-type operator from the Bloch space to Bloch-type spaces on the unit ball, J. Math. Anal. Appl. 354 (2009), 426-434. [22] S. Stevi´ c, On an integral operator from the Zygmund space to the Bloch-type space on the unit ball, Glasg. J. Math. 51 (2009), 275-287. [23] S. Stevi´ c, Integral-type operators from a mixed norm space to a Bloch-type space on the unit ball, Siberian Math. J. 50 (6) (2009), 1098-1105. [24] S. Stevi´ c, Products of integral-type operators and composition operators from the mixed norm space to Bloch-type spaces, Siberian Math. J. 50 (4) (2009), 726-736. [25] S. Stevi´ c, On an integral operator between Bloch-type spaces on the unit ball, Bull. Sci. Math. 134 (2010), 329-339. [26] S. Stevi´ c, On an integral-type operator from logarithmic Bloch-type spaces to mixed-norm spaces on the unit ball, Appl. Math. Comput. 215 (2010), 3817-3823. [27] S. Stevi´ c, On operator Pϕg from the logarithmic Bloch-type space to the mixed-norm space on unit ball, Appl. Math. Comput. 215 (2010), 4248-4255. [28] S. Stevi´ c, On some integral-type operators between a general space and Bloch-type spaces, Appl. Math. Comput. 218 (2011), 2600-2618. [29] S. Stevi´ c and S. I. Ueki, Integral-type operators acting between weighted-type spaces on the unit ball, Appl. Math. Comput. 215 (2009), 2464-2471. [30] S. Stevi´ c and S. I. Ueki, On an integral-type operator between weighted-type spaces and Bloch-type spaces on the unit ball, Appl. Math. Comput. 217 (2010), 3127-3136. [31] W. Yang, On an integral-type operator between Bloch-type spaces, Appl. Math. Comput. 215 (3) (2009), 954-960. [32] W. Yang and X. Meng, Generalized composition operators from F (p, q, s) spaces to Blochtype spaces, Appl. Math. Comput. 217 (6) (2010), 2513-2519. [33] X. Zhu, Extended Ces` aro operator from H ∞ to Zygmund type spaces in the unit ball, J. Comput. Anal. Appl. 11 (2) (2009), 356-363 [34] X. Zhu, Integral-type operators from iterated logarithmic Bloch spaces to Zygmund-type spaces, Appl. Math. Comput. 215 (3) (2009), 1170-1175. [35] X. 
Zhu, On an integral-type operator between H 2 space and weighted Bergman spaces, Bull. Belg. Math. Soc. Simon Stevin 18 (1) (2011), 63-71.
C
´, Mathematical Institute of the Serbian Academy of Sciences, Knez Stevo Stevic Mihailova 36/III, 11000 Beograd, Serbia E-mail address: [email protected] Ajay K. Sharma, School of Mathematics, Shri Mata Vaishno Devi University, Kakryal, Katra-182320, J& K, India. E-mail address: aksju [email protected]
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO.7, 1345-1353, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Random Iterative Algorithms for Nonlinear Mixed Family of Random Fuzzy and Crisp Operator Equation Couples in Fuzzy Normed Spaces1 Heng-you Lan2 and Fang Li Department of Mathematics, Sichuan University of Science & Engineering, Zigong, Sichuan 643000, P. R. China
Abstract. The purpose of this paper is to introduce and study a new class of nonlinear mixed family of random fuzzy and crisp operator equation couples in fuzzy normed spaces based on the random version of the theory of (φ, ψ)contractor due to Mihet. Further, some new random iterative algorithms for solving this kind of nonlinear operator equation couples in fuzzy normed spaces are constructed and the convergence of iterative sequences generated by the algorithms under joint orbitally complete conditions is proved. As applications, some new common fixed point theorems for a mixed family of fuzzy and crisp operators in fuzzy normed spaces are also given. The results presented in this paper improve and generalize the corresponding results of recent works. Key Words and Phrases. Nonlinear mixed family of random fuzzy and crisp operator equation couple, (φ, ψ)-contractor, Joint orbitally complete condition, new random iterative algorithm, approximation and convergence. AMS Subject Classification. 47H15, 54E70, 47S40.
1 Introduction
It is well known that fuzzy normed spaces are an important class of fuzzy metric spaces and offer an appropriate frame work for inexact measurements of an ordinary length in a linear space. The value of a fuzzy norm for a vector x is a fuzzy set on [0, +∞) rather than a number (see [1]). In [2], Golet. introduced a generalization of the concept of fuzzy normed spaces, which includes some earlier defined fuzzy normed spaces as special cases. Further, Golet. [2] pointed out: if we define on a linear space L the mapping x → Fx , Fx (t) = N (x, t) for t ∈ [0, +∞), then the triple (L, F, ⋆) becomes a generalized probabilistic normed space (see [3], the random variable associated to the distribution function Fx can take the value ∞ with a probability greater than 0). If the following condition is satisfied: limt→+∞ N (x, t) = 1 for all x ∈ L, then the triple (L, F, ⋆) is a probabilistic normed space. Conversely, if (L, F, ⋆) is a probabilistic normed space, and we define N (x, t) = Fx (t) then (L, F, ⋆) becomes a fuzzy normed space. One can see a generality of fuzzy normed space. The specific for the both fuzzy and probabilistic norms are the distinct areas of applicability and the different interpretation ways. For more detail, see, for example, [4] and [5]. 1
This work was supported by the Sichuan Youth Science and Technology Foundation (08ZQ026-008), the Open Foundation of Artificial Intelligence of Key Laboratory of Sichuan Province (2009RZ001), the Scientific Research Fund of Sichuan Provincial Education Department (10ZA136) and the Cultivation Project of Sichuan University of Science and Engineering (2011PY01). 2 The corresponding author: [email protected] (H.Y. Lan)
1
On the other hand, since then, by using ψ-probabilistic contractions and a probabilistic version of the Banach fixed point principle, some problems for nonlinear operators and common fixed point theorems have been introduced and studied by many authors (see, e.g., [6-14]). Sharma et al. [15] considered two nonfuzzy mappings and a sequence of fuzzy mappings to define a hybrid D-compatible condition. They also showed the existence of common fixed points under such a condition, where the range of one of the two nonfuzzy mappings is joint orbitally complete, and sketched a view of the present stage of the confrontation. As Sharma et al. [15] point out, "instead of trying to modelize a not well enough known physiological system, it seemed rather preferable to examine what are the required mathematical conditions holding on mathematical systems that would be progressively adjusted closer and closer to known biological situations". Furthermore, by using a random version of the theory of contractors and the concept of probabilistic Ψ-contractor couple, Cho et al. [11] introduced and studied a system of nonlinear operator equations for a mixed family of fuzzy and crisp operators in probabilistic normed spaces; they also proved some new existence theorems for solutions of this system and some new convergence results for the sequences generated by the corresponding iterative algorithms under joint orbitally complete conditions. In 2007, Mihet [16] introduced the concept of probabilistic (φ, ψ)-contractor couple and considered a class of set-valued nonlinear operator equations in Menger probabilistic normed spaces under a larger class of t-norms. Based on the concept of probabilistic (φ, ψ)-contractor couple due to Mihet [16], Lan et al. [17] introduced and studied a new class of nonlinear operator equation couples with a mixed family of fuzzy and crisp operator equations in Menger probabilistic normed spaces, and discussed the existence of solutions for these equation couples and the convergence of the iterative sequences generated by the algorithms under a larger class of t-norms and joint orbitally complete conditions.

Motivated and inspired by the above works, in this paper, by using a random version of the theory of (φ, ψ)-contractors due to Mihet [16], we shall introduce and study the following new class of nonlinear mixed families of random fuzzy and crisp operator equation couples in fuzzy normed spaces: find a measurable operator x : Λ → X such that for all λ ∈ Λ,

    S^i_{λ,x(λ)}(u(λ)) ≥ d(x(λ)),
    S^j_{λ,x(λ)}(u(λ)) ≥ e(x(λ)),                                  (1.1)

where d, e : X → (0, 1] are two nonlinear operators, S^i, S^j : Λ × X → W(Y) are two random fuzzy operators satisfying some conditions for some i, j ∈ N, (Λ, A) is a measure space, X is a separable real vector space and W(Y) is the collection of all fuzzy sets over Y.

Obviously, equations (1.1) are equivalent to the following nonlinear equations for set-valued operators:

    u(λ) ∈ S^i(λ, x(λ)),
    u(λ) ∈ S^j(λ, x(λ)).                                           (1.2)

Moreover, (1.2) reduces to the following nonlinear operator equations:

    u(λ) = f(λ, x(λ)),
    u(λ) = g(λ, x(λ)),                                             (1.3)

when f, g : Λ × X → X are two random operators satisfying S^i(λ, x(λ)) ⊂ f(Λ × X) and S^j(λ, x(λ)) ⊂ g(Λ × X) for any (λ, x) ∈ Λ × X and some i, j ∈ N.
Without loss of generality, we can suppose that u = θ. In fact, if u ≠ θ, then for some i, j ∈ N we may define two set-valued operators T^i, T^j : Λ × X → CB(Y) by T^i(x) = S^i(x) − u and T^j(x) = S^j(x) − u for all x ∈ X, and (1.2) is then equivalent to

    θ ∈ T^i(λ, x(λ)),
    θ ∈ T^j(λ, x(λ)).

Thus, in order to discuss equations (1.1), we can turn to the following equations:

    S^i_{λ,x(λ)}(θ) ≥ d(x(λ)),      S^j_{λ,x(λ)}(θ) ≥ e(x(λ)),

or

    θ = f(λ, x(λ)),      θ = g(λ, x(λ)),                           (1.4)
under some conditions. Further, we will construct some new random iterative algorithms for solving this kind of nonlinear operator equation couple in fuzzy normed spaces and prove the convergence of the iterative sequences generated by the algorithms under joint orbitally complete conditions. As applications, we shall also prove some new common fixed point theorems for a mixed family of fuzzy and crisp operators in fuzzy normed spaces.
2 Preliminaries
Throughout this paper, we suppose that (Λ, A) is a complete σ-finite measure space and X is a separable real vector space over the real or complex number field K. We denote by B(X) and 2^X the Borel σ-field on X and the family of all nonempty subsets of X, respectively. Let R = (−∞, +∞), R+ = [0, +∞) and let N be the set of all natural numbers.

Definition 2.1. An operator x : Λ → X is said to be measurable if, for any B ∈ B(X), {λ ∈ Λ : x(λ) ∈ B} ∈ A.

Definition 2.2. An operator F : Λ × X → X is called a random operator if, for any x ∈ X, F(λ, x) = y(λ) is measurable. A random operator F is said to be continuous (resp. linear, bounded) if, for any λ ∈ Λ, the operator F(λ, ·) : X → X is continuous (resp. linear, bounded). Similarly, we can define a random operator a : Λ × X × X → X. We shall write F_λ(x) = F(λ, x(λ)) and a_λ(x, y) = a(λ, x(λ), y(λ)) for all λ ∈ Λ and x(λ), y(λ) ∈ X. It is well known that a measurable operator is necessarily a random operator.

Definition 2.3. A set-valued operator P : Λ → 2^X is said to be measurable if, for any B ∈ B(X), P^{−1}(B) = {λ ∈ Λ : P(λ) ∩ B ≠ ∅} ∈ A.

Definition 2.4. An operator u : Λ → X is called a measurable selection of a set-valued measurable operator Γ : Λ → 2^X if u is measurable and u(λ) ∈ Γ(λ) for any λ ∈ Λ.

Definition 2.5. A set-valued operator Q : Λ × X → 2^X is called a random set-valued operator if, for any x ∈ X, Q(·, x) is measurable.

In what follows, let W(X) denote the collection of all fuzzy sets over X. An operator S : D ⊂ X → W(X) is called a fuzzy operator. For each x ∈ X, S(x) (denoted by S_x in the sequel) is a fuzzy set on X and S_x(y) is the membership degree of y in S_x.

Definition 2.6. A fuzzy operator T : Λ × X → W(X) is a random fuzzy operator if, for any x ∈ X, T(·, x) : Λ → W(X) is a measurable fuzzy operator (denoted by T_{λ,x}, or briefly by T).

Obviously, random fuzzy operators include set-valued operators, random set-valued operators and fuzzy operators as special cases.
A fuzzy number x : R → [0, 1] is called convex if its α-level set [x]_α (i.e., the α-cut of x) is a convex set in R for every α ∈ (0, 1], where [x]_α = {s : x(s) ≥ α}. A fuzzy number x is called normal if there exists s_0 ∈ R such that x(s_0) = 1, and nonnegative if x(s) = 0 for all s < 0. The fuzzy number 0̃ is defined by 0̃(s) = 1 for s = 0 and 0̃(s) = 0 for s ≠ 0. Let G be the set of all nonnegative upper semicontinuous normal convex fuzzy numbers. Obviously, if x ∈ G, then each of its α-level sets [x]_α is a closed interval [a_α, b_α], where a_α ∈ R+ and b_α ∈ R+ ∪ {+∞}; when b_α = +∞, [a_α, b_α] means the interval [a_α, +∞).

Definition 2.7. Let ∥ · ∥ be a mapping from a real vector space X into G and let the mappings l, r : [0, 1] × [0, 1] → [0, 1] (the left and right norms, respectively) be symmetric, nondecreasing in both arguments and satisfy l(0, 0) = 0 and r(1, 1) = 1. Denote [∥x∥]_α = [∥|x∥|^α_1, ∥|x∥|^α_2] for all x ∈ X and α ∈ (0, 1]. The quadruple (X, ∥ · ∥, l, r) is called a fuzzy normed space (briefly, an FN-space) and ∥ · ∥ a fuzzy norm if the following conditions are satisfied:
(FN-1) ∥x∥ = 0̃ if and only if x = θ;
(FN-2) ∥kx∥ = |k| · ∥x∥ for all x ∈ X and k ∈ R;
(FN-3) for all x, y ∈ X, ∥x + y∥(ν + µ) ≥ l(∥x∥(ν), ∥y∥(µ)) whenever ν ≤ ∥|x∥|^1_1, µ ≤ ∥|y∥|^1_1 and ν + µ ≤ ∥|x + y∥|^1_1, and ∥x + y∥(ν + µ) ≤ r(∥x∥(ν), ∥y∥(µ)) whenever ν ≥ ∥|x∥|^1_1, µ ≥ ∥|y∥|^1_1 and ν + µ ≥ ∥|x + y∥|^1_1.

Remark 2.1. ([12]) There is a small difference between the above definition and the definition of a fuzzy normed space given in [18]. In [18], the definition requires that for all x ∈ X with x ≠ θ there exists α_0 ∈ (0, 1], independent of x, such that ∥|x∥|^α_2 < ∞ for all α ≤ α_0 and inf_{α≤α_0} ∥|x∥|^α_1 > 0.

Remark 2.2. ([19]) By induction, we can generalize the triangle inequality (FN-3) as follows:
(FN-3)′ for any n ∈ N and x_1, ..., x_n ∈ X,

    ∥∑_{i=1}^n x_i∥(ν_1) ≥ l(∥x_1∥(s_1), l(∥x_2∥(s_2), ..., l(∥x_{n−1}∥(s_{n−1}), ∥x_n∥(s_n)) ...))

whenever s_i ≤ ∥|x_i∥|^1_1, i = 1, ..., n, and ν_k = ∑_{i=k}^n s_i ≤ ∥|∑_{i=k}^n x_i∥|^1_1 for any 1 ≤ k ≤ n − 1, and

    ∥∑_{i=1}^n x_i∥(υ_1) ≤ r(∥x_1∥(τ_1), r(∥x_2∥(τ_2), ..., r(∥x_{n−1}∥(τ_{n−1}), ∥x_n∥(τ_n)) ...))

whenever τ_i ≥ ∥|x_i∥|^1_1, i = 1, ..., n, and υ_k = ∑_{i=k}^n τ_i ≥ ∥|∑_{i=k}^n x_i∥|^1_1 for all 1 ≤ k ≤ n − 1.

Remark 2.3. ([1, 20]) Each classical normed space and each Menger probabilistic normed space can be considered as a fuzzy normed space.

By using (FN-3)′ and the same method as in Theorem 2.1 of [12], we can obtain the following lemma.

Lemma 2.1. Let (X, ∥·∥, l, r) be an FN-space. If {r^n(s)} is an equi-continuous sequence of mappings at s = 0, where r^1(s) = s and r^n(s) = r(s, r^{n−1}(s)) for n = 2, 3, ..., then for each α ∈ (0, 1] there exists β ∈ (0, α] such that ∥|∑_{i=1}^n x_i∥|^α_2 ≤ ∑_{i=1}^n ∥|x_i∥|^β_2 for all x_i ∈ X, i = 1, 2, ..., n.

Definition 2.8. Let (X, ∥ · ∥, l, r) be an FN-space. A sequence {x_n} in X is said to
(i) converge to x ∈ X (we often write lim_{n→+∞} x_n = x or x_n → x) if lim_{n→+∞} ∥x_n − x∥ = 0̃, i.e., lim_{n→+∞} ∥|x_n − x∥|^α_2 = 0 for all α ∈ (0, 1];
(ii) be a Cauchy sequence if lim_{m,n→+∞} ∥x_m − x_n∥ = 0̃, i.e., lim_{m,n→+∞} ∥|x_m − x_n∥|^α_2 = 0 for all α ∈ (0, 1].
An FN-space (X, ∥ · ∥, l, r) is called complete if every Cauchy sequence in X converges in X. In the sequel, we always suppose that (X, ∥ · ∥, l, r) is an FN-space satisfying the following condition: lim_{t→+∞} ∥x∥(t) = 0 for all x ∈ X.

Definition 2.9. A nonempty subset A of (X, ∥ · ∥, l, r) is said to be bounded if there exists an M > 0 such that sup_{x∈A} ∥|x∥|^α_2 ≤ M for all α ∈ (0, 1].
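For instance (an illustrative special case of Remark 2.3; it is also the construction used later in the proof of Theorem 4.2): given a classical normed space (X, ∥ · ∥_*), the crisp fuzzy norm

    ∥x∥(t) = 0̃(t − ∥x∥_*),    so that    [∥x∥]_α = [∥x∥_*, ∥x∥_*]  for all α ∈ (0, 1],

together with l = min and r = max, makes (X, ∥ · ∥, min, max) a fuzzy normed space.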
Definition 2.10. Let (X, ∥·∥, l, r) be an FN-space and A, B ∈ W(X). For each α ∈ (0, 1],
we define ∥|A∥|^α_2 and ρ_α(A, B) as follows:

    ∥|A∥|^α_2 = inf_{a∈A} ∥|a∥|^α_2                                (2.1)

and ρ_α(A, B) = max{ sup_{a∈A} inf_{b∈B} ∥|a − b∥|^α_2, sup_{b∈B} inf_{a∈A} ∥|a − b∥|^α_2 }. By the definitions of ∥|A∥|^α_2 and ρ_α(A, B), it is easy to obtain the following result.

Lemma 2.2. Let (X, ∥ · ∥, l, r) be an FN-space which satisfies the following conditions: lim_{s→0+} r(s, s) = 0 and lim_{s→∞} ∥x∥(s) = 0 for all x ∈ X. If A, B ∈ 2^X, then:
(i) ∥|A∥|^α_2 = 0 for all α ∈ (0, 1] if and only if θ ∈ A;
(ii) ∥|kA∥|^α_2 = |k| · ∥|A∥|^α_2 for all k ∈ R;
(iii) ρ_α(x + A, x + B) = ρ_α(x − A, x − B) = ρ_α(A, B) for all x ∈ X.

In the following, Ψ denotes the class of all functions ψ : [0, +∞) → [0, +∞) which are bijective, nondecreasing and such that ∑_{n=1}^∞ ψ^n(t) < +∞ for all t > 0, where ψ^n denotes the n-th iterate of ψ, and Φ denotes the class of all nondecreasing functions φ : (0, 1] → (0, 1] such that lim_{n→+∞} φ^n(t) = 1 for all t ∈ (0, 1]. It is easy to see that if ψ ∈ Ψ, then ψ(t) < t and lim_{n→+∞} (ψ^{−1})^n(t) = +∞ for all t > 0, and if φ ∈ Φ, then φ(t) > t for t ∈ (0, 1].

Definition 2.11. Let (X, ∥ · ∥, l, r) and (Y, ∥ · ∥, l, r) be two FN-spaces, let Γ_1, Γ_2 : Λ × X → S(Y, X) be two operators, where S(Y, X) denotes the set of all odd operators from Y to X (i.e., T ∈ S(Y, X) if and only if T(−y) = −T(y) for all y ∈ Y), and let φ ∈ Φ, ψ ∈ Ψ. Then (Γ_1, Γ_2) is said to be a (φ, ψ)-contractor couple of two fuzzy operators P, Q : D ⊂ X → W(Y) if for all α ∈ (0, 1],

    ρ_{φ(α)}(P(x + Γ_1(x)y), Q(x) + y) ≤ ψ(max{∥|Q(x)∥|^α_2, ∥|y∥|^α_2, ∥|Q(x) + y∥|^α_2})      (2.2)

for all x ∈ D(Q) and y ∈ {y ∈ Y : x + Γ_1(x)y ∈ D(P)}, and

    ρ_{φ(α)}(Q(x + Γ_2(x)y), P(x) + y) ≤ ψ(max{∥|P(x)∥|^α_2, ∥|y∥|^α_2, ∥|P(x) + y∥|^α_2})      (2.3)

for all x ∈ D(P) and y ∈ {y ∈ Y : x + Γ_2(x)y ∈ D(Q)}.

Remark 2.4. If φ(t) = t for t ∈ (0, 1], then (Γ_1, Γ_2) is simply called a ψ-contractor couple of P and Q (see [19]).

Now, we define an orbit for mixed operators (S^n, f, g) and a joint orbitally complete space as follows.

Definition 2.12. Let f, g : Λ × X → X be two nonlinear operators and let {S^n_λ}_{n=1}^∞ be a sequence of random fuzzy operators from Λ × X into W(X). If, for some x_0(λ) ∈ X, there exist sequences {x_n(λ)} and {y_n(λ)} in X such that

    {y_{2n+1}(λ)} = {f_λ(x_{2n+1})} ⊂ S^{2n+1}_λ(x_{2n}),    {y_{2n+2}(λ)} = {g_λ(x_{2n+2})} ⊂ S^{2n+2}_λ(x_{2n+1})      (2.4)

for all n = 0, 1, 2, ..., then ϑ(S^n_λ, f_λ, g_λ, x_0(λ)) = {y_n(λ) : n ∈ N} is called an orbit of the mixed operators (S^n_λ, f_λ, g_λ) for all λ ∈ Λ.

Definition 2.13. X is called x_0-joint orbitally complete if every Cauchy sequence in each orbit at x_0(λ) converges in X for all λ ∈ Λ.

Remark 2.5. ([15]) Clearly, if X is any complete space and x_0 ∈ X, then X is x_0-joint orbitally complete, while the converse is not necessarily true.
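Before turning to the algorithms, it may help to keep simple concrete members of the classes Ψ and Φ in mind (illustrative examples, not taken from the paper): for ψ(t) = t/2 on [0, +∞) and φ(t) = √t on (0, 1],

    ∑_{n=1}^∞ ψ^n(t) = ∑_{n=1}^∞ t/2^n = t < +∞    and    φ^n(t) = t^{1/2^n} → 1  as n → +∞,

so that ψ ∈ Ψ and φ ∈ Φ.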
3 Random Iterative Algorithms
In this section, we suggest and analyze a new class of iterative methods and construct some new random iterative algorithms for solving problem (1.4).

Definition 3.1. Let (X, ∥ · ∥, l, r) and (Y, ∥ · ∥, l, r) be two FN-spaces. An operator P : D(P) ⊂ X → W(Y) is said to be closed if, for any x_n ∈ D(P) and y_n ∈ P(x_n), we have x ∈ D(P) and y ∈ P(x) whenever x_n → x and y_n → y.

In order to give our main results, we first list the following condition (C*):

Let P, Q : D ⊂ X → W(Y) be two fuzzy operators. For any given x ∈ D(Q), y ∈ P(x) and x̂ ∈ D(P), ŷ ∈ Q(x̂), there exist z ∈ P(x − Γ_1(x)y) and ẑ ∈ Q(x̂ − Γ_2(x̂)ŷ), respectively, such that

    ∥|z∥|^{φ(α)}_2 ≤ ρ_{φ(α)}(P(x − Γ_1(x)y), Q(x) − y),    ∥|ẑ∥|^{φ(α)}_2 ≤ ρ_{φ(α)}(Q(x̂ − Γ_2(x̂)ŷ), P(x̂) − ŷ).

Remark 3.1. If P = Q is a set-valued operator and φ(t) = t for t ∈ (0, 1], then condition (C*) coincides with condition (iv) of Theorem 3.1 in [12].

By (2.1) and Lemma 2.2, we introduce the following algorithms for our main results.

Algorithm 3.1. Let (X, ∥ · ∥, l, r) and (Y, ∥ · ∥, l, r) be two FN-spaces, let f, g be two operators from Λ × X into X, and let {S^n_λ}_{n=1}^∞ be a sequence of random fuzzy operators from Λ × X into W(X). Suppose that condition (C*) holds and that, for some i, j ∈ N, the following conditions are satisfied:
(i) S^i_λ(x) ⊂ f(Λ × X) and S^j_λ(x) ⊂ g(Λ × X) for all λ ∈ Λ and x ∈ X;
(ii) x + Γ_1(x)y ∈ D(S^i_λ) for all x ∈ D(S^j_λ) and y ∈ Y, and x + Γ_2(x)y ∈ D(S^j_λ) for all x ∈ D(S^i_λ) and y ∈ Y;
(iii) (Γ_1, Γ_2) is a (φ, ψ)-contractor couple of S^i_λ and S^j_λ.
For all λ ∈ Λ, some i, j ∈ N and any given x_0(λ) ∈ D(S^j_λ) and y_0(λ) ∈ S^j_λ(x_0), we define two sequences {x_n(λ)} in X and {y_n(λ)} in Y satisfying

    x_{2n+1}(λ) = (1 − σ_{2n}) x_{2n}(λ) + σ_{2n} (x_{2n}(λ) − Γ_1(x_{2n}(λ)) y_{2n}(λ)),
    x_{2n+2}(λ) = (1 − σ_{2n+1}) x_{2n+1}(λ) + σ_{2n+1} (x_{2n+1}(λ) − Γ_2(x_{2n+1}(λ)) y_{2n+1}(λ)),              (3.1)

    y_{2n}(λ) ∈ S^j_λ(x_{2n}),    y_{2n+1}(λ) ∈ S^i_λ(x_{2n+1}),
    ∥|y_n∥|^{φ(α)}_2 ≤ ψ^n(∥|y_0(λ)∥|^α_2),    ∀α ∈ (0, 1] and some i, j ∈ N,                                     (3.2)

where {σ_n} is a real monotone decreasing sequence in (0, 1] and σ_n → σ ∈ (0, 1] as n → +∞.

Algorithm 3.2. Let (X, ∥·∥, l, r) and (Y, ∥·∥, l, r) be two FN-spaces and let A, V : Λ × X → X be two nonlinear operators. Let {S^n_λ}_{n=1}^∞, Γ_1, Γ_2, φ and ψ be the same as in Algorithm 3.1. Suppose that condition (C*) and conditions (ii) and (iii) of Algorithm 3.1 hold. If
(I) for all λ ∈ Λ, some i, j ∈ N and x ∈ X, S^i_λ(x) ⊂ X − A(Λ × X) and S^j_λ(x) ⊂ X − V(Λ × X),
then, for some i, j ∈ N and any x_0(λ) ∈ D(S^j_λ) and y_0(λ) ∈ S^j_λ(x_0), we define two sequences {x_n(λ)} in X and {y_n(λ)} in Y by

    x_{2n+1}(λ) = x_{2n}(λ) − Γ_1(x_{2n}(λ)) y_{2n}(λ),    x_{2n+2}(λ) = x_{2n+1}(λ) − Γ_2(x_{2n+1}(λ)) y_{2n+1}(λ),

where the sequence {y_n(λ)} in Y is defined by (2.4).
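Once concrete realizations of Γ_1, Γ_2 and of selections from S^i_λ, S^j_λ are available, the updates in Algorithm 3.1 are ordinary relaxed fixed-point steps. The following Python sketch is purely illustrative and is not part of the paper: all names are hypothetical, the operators are taken to be finite-dimensional matrices, and the fuzzy-set selections and the verification of condition (C*) are problem-specific and not modelled here. It only records the bookkeeping of the alternating scheme (3.1)-(3.2).

```python
import numpy as np

def algorithm_3_1(x0, y0, gamma1, gamma2, select_yi, select_yj,
                  sigma, max_iter=100, tol=1e-10):
    """Sketch of the relaxed two-step iteration (3.1)-(3.2).

    gamma1(x), gamma2(x): matrices standing in for the contractor couple.
    select_yi(x), select_yj(x): pick an element of S^i_lambda(x), S^j_lambda(x).
    sigma(n): relaxation parameter sigma_n in (0, 1].
    The initial y0 is assumed to lie in S^j_lambda(x0).
    """
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for n in range(max_iter):
        s = sigma(n)
        # (1 - s) x_n + s (x_n - Gamma(x_n) y_n) simplifies to x_n - s Gamma(x_n) y_n
        if n % 2 == 0:
            x_new = x - s * gamma1(x) @ y     # odd-indexed update of (3.1)
            y_new = select_yi(x_new)          # y_{n+1} in S^i_lambda(x_{n+1})
        else:
            x_new = x - s * gamma2(x) @ y     # even-indexed update of (3.1)
            y_new = select_yj(x_new)          # y_{n+1} in S^j_lambda(x_{n+1})
        if np.linalg.norm(x_new - x) < tol and np.linalg.norm(y_new) < tol:
            return x_new, y_new, n + 1
        x, y = x_new, y_new
    return x, y, max_iter
```

Under the hypotheses of Theorem 4.1 below, the sequence x_n produced this way is expected to converge to a solution of (1.4) while y_n tends to θ.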
4 Approximation and Convergence
In this section, we first prove the existence of solutions of the equation couple (1.4) and the convergence of the iterative sequences generated by Algorithm 3.1.

Theorem 4.1. Let (X, ∥ · ∥, l, r) and (Y, ∥ · ∥, l, r) be two complete FN-spaces and let f, g, {S^n_λ}_{n=1}^∞, Γ_1, Γ_2, φ and ψ be the same as in Algorithm 3.1. Suppose that condition (C*), conditions (i)-(iii) of Algorithm 3.1 and the following conditions hold:
(iv) in the FN-space (X, ∥ · ∥, l, r), lim_{a→0+} r(a, a) = 0, {r^n(t)} is equi-continuous at t = 0 and lim_{t→+∞} ∥x∥(t) = 0 for all x ∈ X;
(v) g(Λ × X) is x_0-joint orbitally complete for some x_0(λ) ∈ X;
(vi) S^n_λ is closed for any n ∈ N and all λ ∈ Λ, and there exists a constant M > 0 such that for all α ∈ (0, 1] and any constants ω > τ > 0,

    ∥|ωΓ_1(x)y∥|^{φ(α)}_2 ≤ τM ∥|y∥|^{φ(α)}_2,    ∀x ∈ D(S^j_λ), y ∈ Y,                                   (4.1)
    ∥|ωΓ_2(x)y∥|^{φ(α)}_2 ≤ τM ∥|y∥|^{φ(α)}_2,    ∀x ∈ D(S^i_λ), y ∈ Y.                                   (4.2)

Then the nonlinear operator equation couple (1.4) has a solution x*(λ) such that {f_λ(x*)} = {g_λ(x*)} ⊂ ∩_{i=1}^∞ S^i_λ(x*) for all λ ∈ Λ. Further,

    x_n(λ) → x*(λ),    y_n(λ) → θ    as n → ∞,    ∀λ ∈ Λ,

where {x_n(λ)} in X and {y_n(λ)} in Y are the two sequences generated by Algorithm 3.1.

Proof. By (3.2), (4.1) and (4.2), for all α ∈ (0, 1] we get

    ∥|x_{2n+1}(λ) − x_{2n}(λ)∥|^{φ(α)}_2 ≤ σM ∥|y_{2n}(λ)∥|^{φ(α)}_2 ≤ σM ψ^{2n}(∥|y_0(λ)∥|^α_2),
    ∥|x_{2n+2}(λ) − x_{2n+1}(λ)∥|^{φ(α)}_2 ≤ σM ∥|y_{2n+1}(λ)∥|^{φ(α)}_2 ≤ σM ψ^{2n+1}(∥|y_0(λ)∥|^α_2),

and so ∥|x_{n+1} − x_n∥|^{φ(α)}_2 ≤ σM ψ^n(∥|y_0(λ)∥|^α_2) for all α ∈ (0, 1]. Since {r^n(t)} is equi-continuous at t = 0, for each α ∈ (0, 1], Lemma 2.1 implies that there exists a constant β ∈ (0, α] such that for all m, n ∈ N with m > n,

    ∥|x_m(λ) − x_n(λ)∥|^{φ(α)}_2 ≤ ∑_{i=n}^{m−1} ∥|x_{i+1}(λ) − x_i(λ)∥|^{φ(β)}_2 ≤ ∑_{i=n}^{m−1} σM ψ^i(∥|y_0(λ)∥|^β_2).      (4.3)

By the definition of ψ ∈ Ψ, we know that ∑_{i=0}^∞ ψ^i(∥|y_0(λ)∥|^β_2) < +∞, and hence for any ϵ > 0 there exists N_0 ∈ N such that

    ∑_{i=n}^{m−1} σM ψ^i(∥|y_0(λ)∥|^β_2) < ϵ,    ∀m > n ≥ N_0.                                             (4.4)

From (4.3), (4.4) and φ : (0, 1] → (0, 1], it follows that for all α ∈ (0, 1] we have φ(α) ∈ (0, 1] and

    ∥|x_n(λ) − x_m(λ)∥|^{φ(α)}_2 < ϵ,    ∀m > n ≥ N_0,

and so {x_n(λ)} is a Cauchy sequence in X for all λ ∈ Λ. By the completeness of X, we can suppose that x_n(λ) → x*(λ) ∈ X for all λ ∈ Λ. Moreover, it follows from (3.2) that y_n(λ) → θ as n → +∞ for all λ ∈ Λ. Since y_{2n}(λ) ∈ S^j_λ(x_{2n}(λ)), y_{2n+1}(λ) ∈ S^i_λ(x_{2n+1}(λ))
and S^i_λ, S^j_λ are closed for some i, j ∈ N and all λ ∈ Λ, it follows from (3.2) and assumption (i) of Algorithm 3.1 that for some i, j ∈ N,

    θ ∈ S^i_λ(x*),    θ ∈ S^j_λ(x*),    ∀λ ∈ Λ,

i.e., x*(λ) is a solution of (1.4) and {f_λ(x*)} = {g_λ(x*)} ⊂ ∩_{i=1}^∞ S^i_λ(x*) for all λ ∈ Λ. This completes the proof. □

As applications of Theorem 4.1, we give the following results.

Theorem 4.2. Let (X, ∥ · ∥_*) be a normed linear space, and let f, g : Λ × X → X be two nonlinear operators. Suppose that {S^n_λ}_{n=1}^∞, Γ_1, Γ_2, φ and ψ are the same as in Theorem 4.1 and satisfy condition (C*), and that conditions (i)-(iii) of Algorithm 3.1 and conditions (v) and (vi) of Theorem 4.1 hold. Then there exists z(λ) ∈ X such that θ = f_λ(z) = g_λ(z) and {f_λ(z)} = {g_λ(z)} ⊂ ∩_{i=1}^∞ S^i_λ(z) for all λ ∈ Λ. Moreover, x_n(λ) → z(λ) and y_n(λ) → θ as n → +∞ for all λ ∈ Λ, where {x_n(λ)} in X and {y_n(λ)} in Y are the two sequences generated by Algorithm 3.1.

Proof. Let ∥x∥(t) = 0̃(t − ∥x∥_*). Then (X, ∥ · ∥, min, max) is an FN-space induced by (X, ∥ · ∥_*) and all the conditions of Theorem 4.1 hold. Hence the conclusions follow directly from Theorem 4.1. □

Theorem 4.3. Assume that (X, ∥ · ∥, l, r) and (Y, ∥ · ∥, l, r) are two complete FN-spaces, and that A, V, {S^n_λ}_{n=1}^∞, Γ_1, Γ_2, φ and ψ are the same as in Algorithm 3.2 and satisfy condition (C*). Further, suppose that conditions (ii) and (iii) of Algorithm 3.1 and condition (I) of Algorithm 3.2 hold. If the following additional conditions are satisfied:
(II) in the FN-space (X, ∥·∥, l, r), lim_{a→0+} r(a, a) = 0, {r^n(t)} is equi-continuous at t = 0 and lim_{t→+∞} ∥x∥(t) = 0 for all x ∈ X;
(III) X − V(Λ × X) is x_0-joint orbitally complete for some x_0(λ) ∈ X;
(IV) S^n_λ is closed for any n ∈ N and all λ ∈ Λ, and there exists a constant M > 0 such that for all α ∈ (0, 1] and some i, j ∈ N,

    ∥|Γ_1(x)y∥|^{φ(α)}_2 ≤ M ∥|y∥|^{φ(α)}_2,    ∀x ∈ D(S^j_λ), y ∈ Y,
    ∥|Γ_2(x)y∥|^{φ(α)}_2 ≤ M ∥|y∥|^{φ(α)}_2,    ∀x ∈ D(S^i_λ), y ∈ Y,

then there exists x*(λ) ∈ X such that x*(λ) = A_λ(x*) = V_λ(x*) and {x*(λ)} ⊂ ∩_{i=1}^∞ S^i_λ(x*) for all λ ∈ Λ. Furthermore, x_n(λ) → x*(λ) and y_n(λ) → θ as n → ∞ for all λ ∈ Λ, where {x_n(λ)} in X and {y_n(λ)} in Y are the two sequences generated by Algorithm 3.2.

Proof. Let f_λ(x) = x(λ) − A_λ(x) and g_λ(x) = x(λ) − V_λ(x) for all λ ∈ Λ and x ∈ X. It is easy to see that all the conditions of Theorem 4.1 are satisfied. Therefore, the conclusion of Theorem 4.3 follows from Theorem 4.1 immediately. □

Remark 4.1. Similarly, we can obtain the corresponding conclusions if the FN-spaces in Theorem 4.3 are replaced by a classical normed space, a probabilistic normed space or a Menger probabilistic normed space, respectively. Therefore, the results presented in this paper improve and generalize the corresponding results of [11-17] and [19].
References
[1] J.Z. Xiao, X.H. Zhu, On linearly topological structure and property of fuzzy normed linear space, Fuzzy Sets and Systems 125(2) (2002) 153-161.
[2] I. Golet, On generalized fuzzy normed spaces and coincidence point theorems, Fuzzy Sets and Systems 161(8) (2010) 1138-1144.
[3] B. Schweizer, A. Sklar, Probabilistic Metric Spaces, Elsevier North-Holland, New York, 1983.
[4] G.A. Anastassiou, O. Duman, Statistical fuzzy approximation by fuzzy positive linear operators, Comput. Math. Appl. 55(3) (2008) 573-580.
[5] R. Saadati, C. Park, Non-Archimedean L-fuzzy normed spaces and stability of functional equations, Comput. Math. Appl. 60(8) (2010) 2488-2496.
[6] O. Hadžić, E. Pap, New classes of probabilistic contractions and applications to random operators, in: Y.J. Cho et al. (Eds.), Fixed Point Theory and Applications, Vol. 4, pp. 97-119, Nova Sci. Publ., New York, 2003.
[7] L. Ćirić, Solving the Banach fixed point principle for nonlinear contractions in probabilistic metric spaces, Nonlinear Anal. TMA 72(3-4) (2010) 2009-2018.
[8] C.X. Zhu, Research on some problems for nonlinear operators, Nonlinear Anal. TMA 71(10) (2009) 4568-4571.
[9] J. Jachymski, On probabilistic φ-contractions on Menger spaces, Nonlinear Anal. TMA 73(7) (2010) 2199-2203.
[10] L. Ćirić, Common fixed point theorems for a family of non-self mappings in convex metric spaces, Nonlinear Anal. TMA 71(5-6) (2009) 1662-1669.
[11] Y.J. Cho, H.Y. Lan, N.J. Huang, A system of nonlinear operator equations for a mixed family of fuzzy and crisp operators in probabilistic normed spaces, J. Inequal. Appl. 2010 (2010) Art. 152978, 12 pp.
[12] J.X. Fang, G.A. Song, Φ-contractor and the solutions for nonlinear operator equations in fuzzy normed spaces, Fuzzy Sets and Systems 121(2) (2001) 267-273.
[13] S.S. Chang, J.K. Kim, Y.M. Nam, K.H. Kim, Remark on the three-step iteration for nonlinear operator equations and nonlinear variational inequalities, J. Comput. Anal. Appl. 8(2) (2006) 139-149.
[14] D. Turkoglu, C. Alaca, Y.J. Cho, C. Yildiz, Common fixed point theorems in intuitionistic fuzzy metric spaces, J. Appl. Math. Comput. 22(1-2) (2006) 411-424.
[15] B.K. Sharma, D.R. Sahu, M. Bounias, Common fixed point theorems for a mixed family of fuzzy and crisp mappings, Fuzzy Sets and Systems 125(2) (2002) 261-268.
[16] D. Mihet, On set-valued nonlinear equations in Menger probabilistic normed spaces, Fuzzy Sets and Systems 158(16) (2007) 1823-1831.
[17] H.Y. Lan, T.X. Lu, H.L. Zeng, X.H. Ren, Perturbed iterative approximation of common fixed points on nonlinear fuzzy and crisp mixed family operator equation couples in Menger PN-spaces, in: J. Yu et al. (Eds.), Proceedings of the Fifth International Conference on Rough Set and Knowledge Technology, LNAI 6401, pp. 228-233, Springer-Verlag Berlin Heidelberg, New York, 2010.
[18] C. Felbin, Finite dimensional fuzzy normed space, Fuzzy Sets and Systems 48(2) (1992) 239-248.
[19] N.J. Huang, H.Y. Lan, A couple of nonlinear equations with fuzzy mappings in fuzzy normed spaces, Fuzzy Sets and Systems 152(2) (2005) 209-222.
[20] O. Kaleva, S. Seikkala, On fuzzy metric spaces, Fuzzy Sets and Systems 12(3) (1984) 215-229.
JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 1354-1361, 2012, COPYRIGHT 2012 EUDOXUS PRESS, LLC
Non-Polynomial Spline Method for Fractional Diffusion Equation

Hikmet Caglar^1, Nazan Caglar^{2,*}, Mehmet Fatih Ucar^1, Canan Akkoyunlu^1

^1 Istanbul Kultur University, Department of Mathematics - Computer, 34156 Atakoy Istanbul, Turkey. Tel.: +90 2124984366; fax: +90 212 4658310; e-mail: [email protected]

^2 Istanbul Kultur University, Faculty of Economic and Administrative Science, 34156 Atakoy Istanbul, Turkey. Tel.: +90 2124984415; fax: +90 212 4658310; e-mail: [email protected]

Abstract
The one-dimensional fractional diffusion equation is studied systematically using the non-polynomial spline method. The Caputo fractional derivative is used in the formulation. An example is solved to assess the accuracy of the method, and numerical results are obtained for different values of n. An effective and easy-to-use method for solving such equations is presented.

Keywords: Fractional diffusion equation, Caputo fractional derivative, non-polynomial spline.

1. Introduction
Fractional diffusion equations have attracted considerable attention during the last few decades for modelling many physical and chemical processes and in engineering. Many authors have presented existence results and approximations of the solutions of the one-dimensional fractional diffusion equation. In [1], a two-step Adomian decomposition method is used to obtain an analytical solution of the space fractional diffusion equation. Cui [2] proposed a high-order compact finite difference scheme and analyzed its stability. A finite difference method for this problem, together with some examples, is given in [3]. In [5], a class of initial-boundary value fractional diffusion equations with variable coefficients on a finite domain is examined numerically, with an analysis of stability, consistency and convergence. The analytical solutions of the space fractional diffusion equations are presented by the
modified decomposition method in [6]. Ray examined the analytical solutions of the space fractional diffusion equations by a two-step Adomian decomposition method [7]. In this paper, we consider the one-dimensional fractional diffusion equation

$$\frac{\partial u(x,t)}{\partial t} = d(x)\,\frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}} + q(x,t), \tag{1}$$

with initial condition u(x, 0) = f(x), 0 ≤ x ≤ 1, and boundary conditions u(0, t) = g_0(t), u(1, t) = g_1(t), t ≥ 0, where d(x) represents the diffusion coefficient and q(x, t) the source/sink function; sources provide energy or material to the system, whereas sinks absorb energy or material. Eq. (1) becomes the classical diffusion equation for α = 2. It models a superdiffusive flow for 1 < α < 2 and a classical advective flow for α = 1 [5].

In this paper, the non-polynomial spline method is considered for the numerical solution of the one-dimensional fractional diffusion equation. Many authors have used spline methods to solve different problems; in [8], a parabolic equation is solved using a non-polynomial cubic spline method. The paper is organized as follows: in Section 2, the Caputo fractional derivative is described briefly; in Section 3, the non-polynomial cubic spline method is introduced; the analysis of the method is given in Section 4; an example is given in Section 5, and a conclusion in Section 6.

2. Caputo Fractional Derivative
There are various kinds of fractional derivatives; the widely used ones are the Grunwald-Letnikov, the Riemann-Liouville and the Caputo fractional derivatives. The Caputo fractional derivative is a regularization at the time origin of the Riemann-Liouville derivative [9,10]. A nice comparison of these definitions from the viewpoint of their applications in physics and engineering can be found in [11]. In this study, we use the Caputo fractional derivative, defined as follows [12]:

$$D^{\alpha}_{*x} f(x) = J^{m-\alpha} D^{m} f(x) = \frac{1}{\Gamma(m-\alpha)} \int_{0}^{x} (x-t)^{m-\alpha-1} f^{(m)}(t)\,dt \tag{2}$$

for m − 1 < α ≤ m and m ∈ N.
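Definition (2) can be checked numerically. The sketch below is only an illustrative check using adaptive quadrature (it assumes SciPy and is not the discretization used in the paper; the function names are ours). It uses the classical fact that the Caputo derivative of x^3 of order α = 1.8 is 6 x^{1.2}/Γ(2.2).

```python
import math
from scipy.integrate import quad

def caputo_derivative(f_m, x, alpha, m):
    """Evaluate the Caputo derivative D^alpha f at x for m-1 < alpha <= m.

    f_m: callable returning the m-th ordinary derivative f^(m)(t).
    The kernel (x-t)^(m-alpha-1) has an integrable endpoint singularity,
    so plain adaptive quadrature is adequate for this rough check.
    """
    kernel = lambda t: (x - t) ** (m - alpha - 1) * f_m(t)
    val, _ = quad(kernel, 0.0, x)
    return val / math.gamma(m - alpha)

# Example: f(x) = x^3, alpha = 1.8 (m = 2), f''(t) = 6 t;
# the exact Caputo derivative is 6 / Gamma(2.2) * x**1.2.
alpha, x = 1.8, 0.7
approx = caputo_derivative(lambda t: 6.0 * t, x, alpha, m=2)
exact = 6.0 / math.gamma(4 - alpha) * x ** (3 - alpha)
print(approx, exact)  # the two values agree to quadrature accuracy
```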
3. Non-Polynomial Spline Method
We divide the interval [a, b] into n equal subintervals using the grid points x_i = a + ih, i = 0, 1, 2, ..., n, with a = x_0, x_n = b and h = (b − a)/n, where n is an arbitrary positive integer. Let u(x) be the exact solution and let u_i be an approximation to u(x_i) obtained by the non-polynomial cubic spline S_i(x) passing through the points (x_i, u_i) and (x_{i+1}, u_{i+1}). We not only require that S_i(x) satisfies the interpolatory conditions at x_i and x_{i+1}, but also that the first derivative is continuous at the common nodes (x_i, u_i). We write S_i(x) in the form

$$S_i(x) = a_i + b_i(x - x_i) + c_i \sin\tau(x - x_i) + d_i \cos\tau(x - x_i), \quad i = 0, 1, \ldots, n-1, \tag{3}$$

where a_i, b_i, c_i and d_i are constants and τ is a free parameter. A non-polynomial function S(x) of class C^2[a, b] interpolates u(x) at the grid points x_i, i = 0, 1, 2, ..., n, depends on the parameter τ, and reduces to the ordinary cubic spline S(x) in [a, b] as τ → 0. To derive expressions for the coefficients of Eq. (3) in terms of u_i, u_{i+1}, M_i and M_{i+1}, we first define

$$S_i(x_i) = u_i, \quad S_i(x_{i+1}) = u_{i+1}, \quad S_i''(x_i) = M_i, \quad S_i''(x_{i+1}) = M_{i+1}. \tag{4}$$

From algebraic manipulation, we get the following expressions:

$$a_i = u_i + \frac{M_i}{\tau^2}, \quad b_i = \frac{u_{i+1} - u_i}{h} + \frac{M_{i+1} - M_i}{\tau\theta}, \quad c_i = \frac{M_i \cos\theta - M_{i+1}}{\tau^2 \sin\theta}, \quad d_i = -\frac{M_i}{\tau^2},$$

where θ = τh and i = 0, 1, 2, ..., n − 1. Using the continuity of the first derivative at (x_i, u_i), that is S'_{i-1}(x_i) = S'_i(x_i), we obtain the following relations for i = 1, ..., n − 1:

$$\alpha M_{i+1} + 2\beta M_i + \alpha M_{i-1} = \frac{1}{h^2}\,(u_{i+1} - 2u_i + u_{i-1}), \tag{5}$$

where α = −1/θ² + 1/(θ sin θ), β = 1/θ² − cos θ/(θ sin θ) and θ = τh. The method is fourth-order convergent if 1 − 2α − 2β = 0 and α = 1/12 [8].
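The spline parameters α and β of relation (5) are simple functions of θ = τh, and the ordinary cubic spline is recovered in the limit θ → 0. The following short Python sketch (illustrative only; the function name is ours) evaluates them:

```python
import math

def spline_parameters(theta):
    """alpha, beta of relation (5) as functions of theta = tau * h."""
    alpha = -1.0 / theta**2 + 1.0 / (theta * math.sin(theta))
    beta = 1.0 / theta**2 - math.cos(theta) / (theta * math.sin(theta))
    return alpha, beta

# As theta -> 0 the scheme reduces to the ordinary cubic spline relation,
# for which alpha -> 1/6 and beta -> 1/3 (so that 2*alpha + 2*beta -> 1).
for theta in (1.0, 0.5, 0.1, 0.01):
    a, b = spline_parameters(theta)
    print(f"theta={theta:5.2f}  alpha={a:.6f}  beta={b:.6f}  2a+2b={2*a+2*b:.6f}")
```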
4. Analysis of the Method
To illustrate the application of the non-polynomial spline method developed in the previous section, we consider the one-dimensional fractional diffusion equation given in Eq. (1). At the grid point (x_i, u_i), the proposed equation may be discretized by using the Caputo fractional derivative as

$$\frac{u_i - f_i}{k} = \frac{d_i}{\Gamma(m-\alpha)} \int_{0}^{x_i} (x_i - \mu)^{m-\alpha-1} u_i^{(m)}(\mu)\,d\mu + q_i, \tag{6}$$

where d_i = d(x_i), f_i = f(x_i) = u_{i−1} and q_i = q(x_i, t_j). In this paper, we take α = 1.8, so the previous discretization becomes

$$\frac{u_i - f_i}{k} = \frac{d_i}{\Gamma(0.2)}\, I + q_i, \qquad \text{where } I = \int_{0}^{x_i} (x_i - \mu)^{-0.8}\, u_i''(\mu)\,d\mu. \tag{7}$$

Applying integration by parts two times, we obtain

$$I = x_i^{-0.8}\, u_i'(0) - 0.8\, x_i^{-1.8}\, u_i(0) + \frac{36}{25}\, I_1, \qquad \text{where } I_1 = \int_{0}^{x_i} (x_i - \mu)^{-2.8}\, u_i(\mu)\,d\mu. \tag{8}$$

Expanding u_i(µ) in a Taylor series about the point µ = x_i and then substituting I_1 in Eq. (8) and I in Eq. (7), we obtain

$$\frac{u_i - f_i}{k} = \frac{d_i}{\Gamma(0.2)}\Big[x_i^{-0.8} u_i'(0) - 0.8\, x_i^{-1.8} u_i(0) + \frac{36}{25}\Big(u_i(x_i)\int_{0}^{x_i}(x_i-\mu)^{-2.8}d\mu + u_i'(x_i)\int_{0}^{x_i}(x_i-\mu)^{-1.8}d\mu + \frac{u_i''(x_i)}{2}\int_{0}^{x_i}(x_i-\mu)^{-0.8}d\mu\Big)\Big] + q_i. \tag{9}$$
Substituting M_i = u_i''(x_i) and N_i = u_i'(0) in Eq. (9), we get

$$\frac{u_i - f_i}{k} = \frac{d_i}{\Gamma(0.2)}\Big[x_i^{-0.8} N_i - 0.8\, x_i^{-1.8} u_i(0) + \frac{36}{25}\Big(u_i(x_i)\int_{0}^{x_i}(x_i-\mu)^{-2.8}d\mu + u_i'(x_i)\int_{0}^{x_i}(x_i-\mu)^{-1.8}d\mu + \frac{M_i}{2}\int_{0}^{x_i}(x_i-\mu)^{-0.8}d\mu\Big)\Big] + q_i. \tag{10}$$
Solving Eq. (10) for M_i, we get

$$M_i = \frac{1}{C_i}\Big(G_i u_i - G_i f_i - \frac{25}{18}\, x_i^{-0.8} N_i + \frac{10}{9}\, x_i^{-1.8} u_i(0) - 2 D_i u_i - 2 E_i u_i' - G_i q_i\Big), \tag{11}$$

where

$$C_i = \int_{0}^{x_i} (x_i-\mu)^{-0.8}\, d\mu, \quad D_i = \int_{0}^{x_i} (x_i-\mu)^{-2.8}\, d\mu, \quad E_i = \int_{0}^{x_i} (x_i-\mu)^{-1.8}\, d\mu, \quad G_i = \frac{18\, k\, d_i}{25\,\Gamma(0.2)}.$$

The following approximations for the first-order derivative of u in Eq. (11) can be used:

$$u_i' \cong \frac{u_{i+1} - u_{i-1}}{2h}, \quad u_{i+1}' \cong \frac{3u_{i+1} - 4u_i + u_{i-1}}{2h}, \quad u_{i-1}' \cong \frac{-u_{i+1} + 4u_i - 3u_{i-1}}{2h}. \tag{12}$$

So Eq. (11) becomes

$$M_i = \frac{1}{C_i}\Big(G_i u_i - G_i f_i - \frac{25}{18}\, x_i^{-0.8} N_i + \frac{10}{9}\, x_i^{-1.8} u_i(0) - 2 D_i u_i - 2 E_i\, \frac{u_{i+1}-u_{i-1}}{2h} - G_i q_i\Big). \tag{13}$$

M_{i+1} and M_{i−1} can be obtained easily from M_i by putting i + 1 and i − 1, respectively, instead of i. Substituting these equalities in Eq. (5), we find the following (n − 1) linear algebraic equations in the (n + 1) unknowns u_i, i = 0, 1, ..., n:
$$\Big[\frac{\alpha}{C_{i+1}}\Big(G_{i+1} - 2D_{i+1} - \frac{3}{h}E_{i+1}\Big) + \frac{2\beta E_i}{C_i h} + \frac{\alpha E_{i-1}}{C_{i-1} h} - \frac{1}{h^2}\Big]u_{i+1} + \Big[\frac{2\beta}{C_i}\big(G_i - 2D_i\big) + \frac{4\alpha E_{i+1}}{C_{i+1} h} - \frac{4\alpha E_{i-1}}{C_{i-1} h} + \frac{2}{h^2}\Big]u_i + \Big[\frac{\alpha}{C_{i-1}}\Big(G_{i-1} - 2D_{i-1} - \frac{3}{h}E_{i-1}\Big) + \frac{2\beta E_i}{C_i h} - \frac{\alpha E_{i+1}}{C_{i+1} h} - \frac{1}{h^2}\Big]u_{i-1}$$
$$= \Big(\frac{\alpha G_{i+1} f_{i+1}}{C_{i+1}} + \frac{\beta G_i f_i}{C_i} + \frac{\alpha G_{i-1} f_{i-1}}{C_{i-1}}\Big) + \Big(\frac{25\alpha N_{i+1}}{18 C_{i+1} x_{i+1}^{0.8}} + \frac{25\beta N_i}{9 C_i x_i^{0.8}} + \frac{25\alpha N_{i-1}}{18 C_{i-1} x_{i-1}^{0.8}}\Big) - \Big(\frac{10\alpha u_{i+1}(0)}{9 C_{i+1} x_{i+1}^{1.8}} + \frac{20\beta u_i(0)}{9 C_i x_i^{1.8}} + \frac{10\alpha u_{i-1}(0)}{9 C_{i-1} x_{i-1}^{1.8}}\Big) + \Big(\frac{\alpha G_{i+1} q_{i+1}}{C_{i+1}} + \frac{2\beta G_i q_i}{C_i} + \frac{\alpha G_{i-1} q_{i-1}}{C_{i-1}}\Big). \tag{14}$$
We need two more equations; the two end conditions can be derived from the boundary conditions as

$$u(0, t) = g(t), \qquad u(1, t) = h(t). \tag{15}$$
Substituting

$$c_1 = \frac{\alpha}{C_{i+1}}\Big(G_{i+1} - 2D_{i+1} - \frac{3}{h}E_{i+1}\Big) + \frac{2\beta E_i}{C_i h} + \frac{\alpha E_{i-1}}{C_{i-1} h} - \frac{1}{h^2},$$
$$c_2 = \frac{2\beta}{C_i}\big(G_i - 2D_i\big) + \frac{4\alpha E_{i+1}}{C_{i+1} h} - \frac{4\alpha E_{i-1}}{C_{i-1} h} + \frac{2}{h^2},$$
$$c_3 = \frac{\alpha}{C_{i-1}}\Big(G_{i-1} - 2D_{i-1} - \frac{3}{h}E_{i-1}\Big) + \frac{2\beta E_i}{C_i h} - \frac{\alpha E_{i+1}}{C_{i+1} h} - \frac{1}{h^2},$$
The method is described in matrix form in the following way for the equations above:
$$A = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 & 0\\
c_3 & c_2 & c_1 & \cdots & 0 & 0 & 0\\
0 & c_3 & c_2 & c_1 & \cdots & 0 & 0\\
\vdots & & \ddots & \ddots & \ddots & & \vdots\\
0 & 0 & \cdots & 0 & c_3 & c_2 & c_1\\
0 & 0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix},$$

where the first and last rows impose the boundary conditions (15) and each interior row carries the coefficients of Eq. (14),
$$B = \begin{pmatrix} g(t)\\ b_1\\ b_2\\ \vdots\\ b_{n-1}\\ h(t) \end{pmatrix}, \qquad U = [u_0, u_1, \ldots, u_n]', \tag{16}$$

where, for i = 1, ..., n − 1, b_i denotes the right-hand side of Eq. (14):

$$b_i = \frac{\alpha G_{i+1} f_{i+1}}{C_{i+1}} + \frac{\beta G_i f_i}{C_i} + \frac{\alpha G_{i-1} f_{i-1}}{C_{i-1}} + \frac{25\alpha N_{i+1}}{18 C_{i+1} x_{i+1}^{0.8}} + \frac{25\beta N_i}{9 C_i x_i^{0.8}} + \frac{25\alpha N_{i-1}}{18 C_{i-1} x_{i-1}^{0.8}} - \frac{10\alpha u_{i+1}(0)}{9 C_{i+1} x_{i+1}^{1.8}} - \frac{20\beta u_i(0)}{9 C_i x_i^{1.8}} - \frac{10\alpha u_{i-1}(0)}{9 C_{i-1} x_{i-1}^{1.8}} + \frac{\alpha G_{i+1} q_{i+1}}{C_{i+1}} + \frac{2\beta G_i q_i}{C_i} + \frac{\alpha G_{i-1} q_{i-1}}{C_{i-1}}.$$
Finally, the approximate solution is obtained by solving the linear system

$$AU = B \tag{17}$$

using Matlab 7.0.1.
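The assembly and solution of (17) is a standard banded linear solve. The following Python sketch (illustrative only; the coefficient callables c1, c2, c3 and b are hypothetical placeholders for the expressions above and are not reproduced here) shows the structure of the computation:

```python
import numpy as np

def solve_spline_system(n, c1, c2, c3, b, g_t, h_t):
    """Assemble and solve the linear system AU = B of Eqs. (14)-(17).

    c1(i), c2(i), c3(i), b(i): user-supplied callables returning the
    coefficients and right-hand side of Eq. (14) at interior node i.
    g_t, h_t: boundary values u(0, t) and u(1, t).
    """
    A = np.zeros((n + 1, n + 1))
    B = np.zeros(n + 1)
    A[0, 0], B[0] = 1.0, g_t          # boundary row: u_0 = g(t)
    A[n, n], B[n] = 1.0, h_t          # boundary row: u_n = h(t)
    for i in range(1, n):             # interior rows from Eq. (14)
        A[i, i - 1] = c3(i)
        A[i, i] = c2(i)
        A[i, i + 1] = c1(i)
        B[i] = b(i)
    return np.linalg.solve(A, B)      # U = [u_0, ..., u_n]
```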
5. A Numerical Example
In this section, to illustrate our method, we solve a fractional diffusion equation. All computations are done using MATLAB 7.0.1. Consider the fractional diffusion problem (1) with the diffusion coefficient d(x) = Γ(2.2)x^{2.8}/6 and the source/sink function q(x, t) = −(1 + x)e^{−t}x^3, subject to the initial condition u(x, 0) = x^3 for 0 ≤ x ≤ 1 and the boundary conditions u(0, t) = 0, u(1, t) = e^{−t} for t ≥ 0. The exact solution of this problem for α = 1.8 is u(x, t) = e^{−t}x^3. The observed maximum absolute errors for various values of n for the fractional order 1.8 are given in Table 1.

Table 1: The maximum absolute errors, k = 0.01

  n      Spline method
  11     0.1762
  21     0.1663
  41     0.1481
  61     0.1307
  121    0.0859
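The errors in Table 1 are maximum absolute deviations from the exact solution on the spatial grid. A minimal helper for this measurement (illustrative only; the function name is ours and the paper's computations were done in MATLAB) could look as follows:

```python
import numpy as np

def max_abs_error(u_num, x, t):
    """Maximum absolute error of a numerical solution u_num on grid x at time t,
    measured against the exact solution u(x, t) = exp(-t) * x**3 of the test
    problem in Section 5."""
    u_exact = np.exp(-t) * x**3
    return np.max(np.abs(np.asarray(u_num) - u_exact))

# e.g., with n = 11 grid points on [0, 1]:
# x = np.linspace(0.0, 1.0, 11); err = max_abs_error(u_num, x, t=0.01)
```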
6. Conclusion
In this paper, the non-polynomial cubic spline method is applied to the numerical solution of the one-dimensional fractional diffusion equation. The Caputo fractional derivative is used for the fractional derivative term. We have observed that the numerical results converge to the exact solution as h goes to zero. The use of non-polynomial splines has been shown to be an applicable method for this diffusion problem.

7. References
[1] S. Saha Ray, Analytical solution for the space fractional diffusion equation by two-step Adomian decomposition method, Communications in Nonlinear Science and Numerical Simulation, 14, 2009, 1295-1306.
[2] M. Cui, Compact finite difference method for the fractional diffusion equation, Journal of Computational Physics, 228, 2009, 7792-7804.
[3] H. Wang, K. Wang, T. Sircar, A direct O(N log^2 N) finite difference method for fractional diffusion equations, Journal of Computational Physics, 2010.
[4] I. Podlubny, L. Dorcak, I. Kostial, On fractional derivatives, fractional-order dynamic systems and PID-controllers, Proceedings of the 36th IEEE Conference on Decision and Control, Vol. 5, 10-12 Dec. 1997, 4985-4990.
[5] C. Tadjeran, M.M. Meerschaert, H.-P. Scheffler, A second-order accurate numerical approximation for the fractional diffusion equation, Journal of Computational Physics, 213, 2006, 205-213.
[6] S. Saha Ray, K.S. Chaudhuri, R.K. Bera, Application of modified decomposition method for the analytical solution of space fractional diffusion equation, Applied Mathematics and Computation, 196(1), 2008, 294-302.
[7] S. Saha Ray, Analytical solution for the space fractional diffusion equation by two-step Adomian decomposition method, Communications in Nonlinear Science and Numerical Simulation, 14(4), 2009, 1295-1306.
[8] J. Rashidinia, R. Mohammadi, Non-polynomial cubic spline methods for the solution of parabolic equations, International Journal of Computer Mathematics, 85, 2008, 843-850.
[9] M. Caputo, Linear models of dissipation whose Q is almost frequency independent, Part II, Geophys. J. Roy. Astron. Soc., 13, 1967, 529-539.
[10] R. Gorenflo, F. Mainardi, Fractional calculus and stable probability distributions, Arch. Mech., 50, 1998, 377-388.
[11] J. Sabatier, O.P. Agrawal, J.A. Tenreiro Machado, Advances in Fractional Calculus: Theoretical Developments and Applications in Physics and Engineering, Springer, Netherlands, 2007.
[12] C. Li, W. Deng, Remarks on fractional derivatives, Applied Mathematics and Computation, 187(2), 2007, 777-784.
TABLE OF CONTENTS, JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 2012

Approximation of an Additive-Quadratic Functional Equation in RN-Spaces, Hassan Azadi Kenary, Sun-Young Jang and Choonkil Park, 1190
Mixed-Stable Models for Analyzing High-Frequency Financial Data, Audrius Kabasinskas, Leonidas Sakalauskas, Edward W. Sun and Igoris Belovas, 1210
New Representations for the Euler-Mascheroni Constant and Inequalities for the Generalized-Euler-Constant Function, Chao-Ping Chen and Wing-Sum Cheung, 1227
Cauchy-Jensen Functional Inequality in Banach Spaces and Non-Archimedean Banach Spaces, Ick-Soon Chang, M. Eshaghi Gordji and Hark-Mahn Kim, 1237
Some Properties of Certain Class of Multivalent Functions with Negative Coefficients, Adriana Catas and Calin Dubau, 1248
Smooth and Optimal Shape of Aeolian Blade Profile by Splines, Calin Dubau and Adriana Catas, 1258
Viscosity Approximations With Weak Contraction for Finding a Common Solution of Fixed Points and a General System of Variational Inequalities for Two Accretive Operators, Poom Kumam and Phayap Katchang, 1269
Characterization on Simultaneous Approximation for Left Gamma Quasi-Interpolants in Lp Spaces, Hongbiao Jiang, 1288
Comment on "On the stability of quadratic double centralizers on Banach algebras" [M. Eshaghi Gordji, A. Bodaghi, J. Comput. Anal. Appl. 13 (2011), 724-729], Choonkil Park, Jung Rye Lee, Dong Yun Shin and Madjid Eshaghi Gordji, 1299
A Weighted Bivariate Blending Rational Interpolation Function and Visualization Control, Yunfeng Zhang, Fangxun Bao, Caiming Zhang and Qi Duan, 1303
On Asymmetric Hermitian and Skew-Hermitian Splitting Iteration Methods for Weakly Nonlinear Systems, Mu-Zheng Zhu, 1321
Some Aspects of Lorentz-Karamata Spaces, Ilker Eryilmaz, 1332
Integral-Type Operators between Weighted Bergman Spaces on the Unit Disk, Stevo Stevic and Ajay K. Sharma, 1339
TABLE OF CONTENTS, JOURNAL OF COMPUTATIONAL ANALYSIS AND APPLICATIONS, VOL. 14, NO. 7, 2012 (continued)

Random Iterative Algorithms for Nonlinear Mixed Family of Random Fuzzy and Crisp Operator Equation Couples in Fuzzy Normed Spaces, Heng-you Lan and Fang Li, 1345
Non-Polynomial Spline Method for Fractional Diffusion Equation, Hikmet Caglar, Nazan Caglar, Mehmet Fatih Ucar and Canan Akkoyunlu, 1354