Contemporary Algorithms: Theory and Applications. Volume II



English. Pages: [424]. Year: 2019.



Table of contents :
Contemporary Algorithms: Theory and Applications. Volume II
Contents
Glossary of Symbols
Preface
Chapter 1. Correcting and Extending the Applicability of Two Fast Algorithms
1. Introduction
2. Semi-Local Convergence
3. Conclusion
References
Chapter 2. On the Solution of Generalized Equations in Hilbert Space
1. Introduction
2. Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 3. Gauss-Newton Algorithm for Convex Composite Optimization
1. Introduction
2. Convergence of GNA
3. Conclusion
References
Chapter 4. Local Convergence of Newton’s Algorithm of Riemannian Manifolds
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 5. Newton’s Algorithm on Riemannian Manifolds with Values in a Cone
1. Introduction
2. Semi-Local Convergence
3. Conclusion
References
Chapter 6. Gauss-Newton Algorithm on Riemannian Manifolds under L-Average Lipschitz Conditions
1. Introduction
2. Semi-Local Convergence
3. Conclusion
References
Chapter 7. Newton’s Method with Applications to Interior Point Algorithms of Mathematical Programming
1. Introduction
2. An Improved Newton–Kantorovich Theorem
3. Applications to Interior-Point Algorithm
4. Conclusion
References
Chapter 8. Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part I Outer Inverses
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 9. Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part II Matrices
1. Introduction
2. Local Convergence
3. Conclusion
References
Chapter 10. Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part III Ball of Convergence for Nonisolated Solutions
1. Introduction
2. Convergence of Method (10.2)
3. Conclusion
References
Chapter 11. On an Efficient Steffensen-Like Method to Solve Equations
1. Introduction
2. Analysis
3. Conclusion
References
Chapter 12. Convergence Analysis for King-Werner-Like Methods
1. Introduction
2. Semi-Local Convergence of Method (12.2)
3. Local Convergence of Method (12.2)
4. Numerical Examples
5. Conclusion
References
Chapter 13. Multi-Point Family of High Order Methods
1. Introduction
2. Local Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 14. Ball Convergence Theorems for Some Third-Order Iterative Methods
1. Introduction
2. Local Convergence for Method (14.2)
3. Local Convergence of Method (14.3)
4. Numerical Examples
5. Conclusion
References
Chapter 15. Convergence Analysis of Frozen Steffensen-Type Methods under Generalized Conditions
1. Introduction
2. Semi-Local Convergence Analysis
3. Conclusion
References
Chapter 16. Convergence of Two-Step Iterative Methods for Solving Equations with Applications
1. Introduction
2. Semi-Local Convergence Analysis
3. Local Convergence Analysis
4. Numerical Examples
5. Conclusion
References
Chapter 17. Three Step Jarratt-Type Methods under Generalized Conditions
1. Introduction
2. Local Analysis
3. Numerical Examples
4. Conclusion
References
Chapter 18. Extended Derivative Free Algorithms of Order Seven
1. Introduction
2. Local Analysis
3. Numerical Examples
4. Conclusion
References
Chapter 19. Convergence of Fifth Order Methods for Equations under the Same Conditions
1. Introduction
2. Local Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 20. A Novel Eighth Convergence Order Scheme with Derivatives and Divided Difference
1. Introduction
2. Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 21. Homocentric Ball for Newton’s and the Secant Method
1. Introduction
2. Local Convergence
3. Semi-Local Convergence
4. Numerical Examples
5. Conclusion
References
Chapter 22. A Tenth Convergence Order Method under Generalized Conditions
1. Introduction
2. Convergence
3. Numerical Examples
4. Conclusion
References
Chapter 23. Convergence of Chebyshev’s Method
1. Introduction
2. Semi-Local Convergence Analysis
3. Local Convergence Analysis
4. Numerical Experiments
5. Conclusion
References
Chapter 24. Gauss-Newton Algorithms for Optimization Problems
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 25. Two-Step Methods under General Continuity Conditions
1. Introduction
2. Majorizing Sequences
3. Semi-Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 26. A Noor-Waseem Third Order Method to Solve Equations
1. Introduction
2. Majorizing Sequences
3. Semi-Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 27. Generalized Homeier Method
1. Introduction
2. Local Convergence
3. Numerical Experiments
4. Conclusion
References
Chapter 28. A Xiao-Yin Third Order Method for Solving Equations
1. Introduction
2. Majorizing Sequences
3. Semi-Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 29. Fifth Order Scheme
1. Introduction
2. Scalar Sequences
3. Semi-Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 30. Werner Method
1. Introduction
2. Majorizing Sequences
3. Semi-Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 31. Yadav-Singh Method of Order Five
1. Introduction
2. Semi-Local Convergence
3. Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 32. Convergence of a P + 1 Step Method of Order 2P + 1 with Frozen Derivatives
1. Introduction
2. Local Convergence
3. Numerical Experiments
4. Conclusion
References
Chapter 33. Efficient Fifth Order Scheme
1. Introduction
2. Ball Convergence
3. Numerical Experiments
4. Conclusion
References
Chapter 34. Sharma-Gupta Fifth Order Method
1. Introduction
2. Convergence
3. Numerical Experiments
4. Conclusion
References
Chapter 35. Seventh Order Method for Equations
1. Introduction
2. Convergence
3. Numerical Experiments
4. Conclusion
References
Chapter 36. Newton-Like Method
1. Introduction
2. Mathematical Background
3. Majorizing Sequences
4. Semi-Local Convergence
5. Numerical Experiments
6. Conclusion
References
Chapter 37. King-Type Methods
1. Introduction
2. Majorizing Sequences
3. Semi-Local Convergence
4. Numerical Experiments
5. Conclusion
References
Chapter 38. Single Step Third Order Method
1. Introduction
2. Semi-Local Analysis
3. Local Convergence
4. Numerical Example
5. Conclusion
References
Chapter 39. Newton-Type Method for Non-Differentiable Inclusion Problems
1. Introduction
2. Majorizing Sequences
3. Analysis
4. Conclusion
References
Chapter 40. Extended Kantorovich-Type Theory for Solving Nonlinear Equations Iteratively: Part I Newton’s Method
1. Introduction
2. Convergence of NM
3. Conclusion
References
Chapter 41. Extended Kantorovich-Type Theory for Solving Nonlinear Equations Iteratively: Part II Newton’s Method
1. Introduction
2. Convergence of NLM
3. Conclusion
References
Chapter 42. Updated and Extended Convergence Analysis for Secant-Type Iterations
1. Introduction
2. Convergence of STI
3. Conclusion
References
Chapter 43. Updated Halley’s and Chebyshev’s Iterations
1. Introduction
2. Semi-Local Convergence Analysis for HI and CI
3. Conclusion
References
Chapter 44. Updated Iteration Theory for Non-Differentiable Equations
1. Introduction
2. Convergence
3. Conclusion
References
Chapter 45. On Generalized Halley-Like Methods for Solving Nonlinear Equations
1. Introduction
2. Majorizing Convergence Analysis
3. Semi-Local Analysis
4. Special Cases
5. Conclusion
References
Chapter 46. Extended Semi-Local Convergence of Steffensen-Like Methods for Solving Nonlinear Equations
1. Introduction
2. Majorizing Real Sequences
3. Convergence
4. Conclusion
References
About the Authors
Index


Mathematics Research Developments

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

Mathematics Research Developments

Contemporary Algorithms: Theory and Applications. Volume I
Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros and Santhosh George (Authors)
2022. ISBN: 978-1-68507-994-9 (Hardcover); 979-8-88697-425-6 (eBook)

Non-Euclidean Geometry in Materials of Living and Non-Living Matter in the Space of the Highest Dimension
Gennadiy Zhizhin (Author)
2022. ISBN: 978-1-68507-885-0 (Hardcover); 979-8-88697-064-7 (eBook)

Frontiers in Mathematical Modelling Research
M. Haider Ali Biswas and M. Humayun Kabir (Editors)
2022. ISBN: 978-1-68507-430-2 (Hardcover); 978-1-68507-845-4 (eBook)

Mathematical Modeling of the Learning Curve and Its Practical Applications
Charles Ira Abramson and Igor Stepanov (Authors)
2022. ISBN: 978-1-68507-737-2 (Hardcover); 978-1-68507-851-5 (eBook)

Partial Differential Equations: Theory, Numerical Methods and Ill-Posed Problems
Michael V. Klibanov and Jingzhi Li (Authors)
2022. ISBN: 978-1-68507-592-7 (Hardcover); 978-1-68507-727-3 (eBook)

Outliers: Detection and Analysis
Apra Lipi, Kishan Kumar, and Soubhik Chakraborty (Authors)
2022. ISBN: 978-1-68507-554-5 (Softcover); 978-1-68507-587-3 (eBook)

More information about this series can be found at https://novapublishers.com/productcategory/series/mathematics-research-developments/

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros and Santhosh George

Contemporary Algorithms Theory and Applications Volume II

Copyright © 2023 by Nova Science Publishers, Inc. https://doi.org/10.52305/ZTPR4079 All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to reuse content from this publication. Simply navigate to this publication’s page on Nova’s website and locate the “Get Permission” button below the title description. This button is linked directly to the title’s permission page on copyright.com. Alternatively, you can visit copyright.com and search by title, ISBN, or ISSN. For further questions about using the service on copyright.com, please contact: Copyright Clearance Center Phone: +1-(978) 750-8400 Fax: +1-(978) 750-4470 E-mail: [email protected].

NOTICE TO THE READER The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works. Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the Publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS. Additional color graphics may be available in the e-book version of this book.

Library of Congress Cataloging-in-Publication Data

ISBN: (eBook)

Published by Nova Science Publishers, Inc., New York

The first author dedicates this book to his beloved parents Diana and Ioannis.

The second author dedicates this book to his mother Madhu Kumari Regmi and father Moti Ram Regmi.

The third author dedicates this book to his wonderful children Christopher, Gus, Michael, and lovely wife Diana.

The fourth author dedicates this book to Srijith, Snehal, Abijith and Abhinav.

Contents Glossary of Symbols

xv

Preface

xvii

1 Correcting and Extending the Applicability of Two Fast Algorithms 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Semi-Local Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 On the Solution of Generalized Equations in Hilbert Space 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence . . . . . . . . . . . . . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . .

1 1 3 6

. . . .

9 9 10 12 13

3 Gauss-Newton Algorithm for Convex Composite Optimization 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence of GNA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

15 15 16 19

4 Local Convergence of Newton’s Algorithm of Riemannian Manifolds 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

21 21 21 24

5 Newton’s Algorithm on Riemannian Manifolds with Values in a Cone 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Semi-Local Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

27 27 28 30

6 Gauss-Newton Algorithm on Lips chitz Conditions 1. Introduction . . . . . . . 2. Semi-Local Convergence 3. Conclusion . . . . . . .

33 33 34 36

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

Riemannian Manifolds under L-Average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

viii

Contents

7 Newton’s Method with Applications to Interior Point Algorithms of Mathematical Programming 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. An Improved Newton–Kantorovich Theorem . . . . . . . . . 3. Applications to Interior-Point Algorithm . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

39 39 39 42 47

8 Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part I Outer Inverses 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

49 49 50 55

9 Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part II Matrices 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Local Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

57 57 57 60

10 Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part III Ball of Convergence for Nonisolated Solutions 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence of Method (10.2) . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

63 63 64 67

11 On an Efficient Steffensen-Like Method to Solve Equations 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

69 69 69 74

12 Convergence Analysis for King-Werner-Like Methods 1. Introduction . . . . . . . . . . . . . . . . . . . . . 2. Semi-Local Convergence of Method (12.2) . . . . 3. Local Convergence of Method (12.2) . . . . . . . . 4. Numerical Examples . . . . . . . . . . . . . . . . 5. Conclusion . . . . . . . . . . . . . . . . . . . . .

. . . . .

77 77 77 80 81 84

. . . .

87 87 89 92 93

14 Ball Convergence Theorems for Some Third-Order Iterative Methods 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Local Convergence for Method (14.2) . . . . . . . . . . . . . . . . . . . .

95 95 96

13 Multi-Point Family of High Order Methods 1. Introduction . . . . . . . . . . . . . . . 2. Local Convergence . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . . .

. . . .

. . . . .

. . . .

. . . . .

. . . .

. . . . .

. . . .

. . . . .

. . . .

. . . . .

. . . .

. . . .

. . . . .

. . . .

. . . .

. . . . .

. . . .

. . . .

. . . . .

. . . .

. . . .

. . . . .

. . . .

. . . .

. . . . .

. . . .

. . . .

. . . . .

. . . .

Contents 3. 4. 5.

ix

Local Convergence of Method (14.3) . . . . . . . . . . . . . . . . . . . . . 100 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

15 Convergence Analysis of Frozen Steffensen-Type Conditions 1. Introduction . . . . . . . . . . . . . . . . . . 2. Semi-Local Convergence Analysis . . . . . . 3. Conclusion . . . . . . . . . . . . . . . . . .

Methods under Generalized 109 . . . . . . . . . . . . . . . . 109 . . . . . . . . . . . . . . . . 111 . . . . . . . . . . . . . . . . 114

16 Convergence of Two-Step Iterative Methods for Solving Equations with Appli cations 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Semi-Local Convergence Analysis . . . . . . . . . . . . . . . . 3. Local Convergence Analysis . . . . . . . . . . . . . . . . . . . 4. Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . 5. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

117 117 118 125 127 128

17 Three Step Jarratt-Type Methods under Generalized Conditions 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Local Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

131 131 132 139 140

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

143 143 144 150 150

19 Convergence of Fifth Order Methods for Equations under the Same Conditions 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Local Convergence . . . . . . . . . . . . . . . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

153 153 154 160 161

20 A Novel Eighth Convergence Order Scheme with Derivatives and Divided Difference 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

163 163 164 170 171

18 Extended Derivative Free Algorithms of Order Seven 1. Introduction . . . . . . . . . . . . . . . . . . . . . 2. Local Analysis . . . . . . . . . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

x

Contents

21 Homocentric Ball for Newton’s and the Secant Method 1. Introduction . . . . . . . . . . . . . . . . . . . . . . 2. Local Convergence . . . . . . . . . . . . . . . . . . 3. Semi-Local Convergence . . . . . . . . . . . . . . . 4. Numerical Examples . . . . . . . . . . . . . . . . . 5. Conclusion . . . . . . . . . . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

173 173 174 177 181 182

22 A Tenth Convergence Order Method under Generalized Conditions 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3. Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

185 185 186 191 192

23 Convergence of Chebyshev’s Method 1. Introduction . . . . . . . . . . . . 2. Semi-Local Convergence Analysis 3. Local Convergence Analysis . . . 4. Numerical Experiments . . . . . . 5. Conclusion . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

195 195 195 198 202 202

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

24 Gauss-Newton Algorithms for Optimization Problems 205 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205 2. Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 25 Two-Step Methods under General Continuity Conditions 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . 2. Majorizing Sequences . . . . . . . . . . . . . . . . . . 3. Semi-Local Convergence . . . . . . . . . . . . . . . . 4. Numerical Experiments . . . . . . . . . . . . . . . . . 5. Conclusion . . . . . . . . . . . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

211 211 212 216 218 219

26 A Noor-Waseem Third Order Method to Solve Equations 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . 2. Majorizing Sequences . . . . . . . . . . . . . . . . . . 3. Semi-Local Convergence . . . . . . . . . . . . . . . . 4. Numerical Experiments . . . . . . . . . . . . . . . . . 5. Conclusion . . . . . . . . . . . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

223 223 224 227 229 230

27 Generalized Homeier Method 1. Introduction . . . . . . . . 2. Local Convergence . . . . 3. Numerical Experiments . . 4. Conclusion . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

233 233 234 236 237

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

Contents

xi

28 A Xiao-Yin Third Order Method for Solving Equations 1. Introduction . . . . . . . . . . . . . . . . . . . . . . 2. Majorizing Sequences . . . . . . . . . . . . . . . . . 3. Semi-Local Convergence . . . . . . . . . . . . . . . 4. Numerical Experiments . . . . . . . . . . . . . . . . 5. Conclusion . . . . . . . . . . . . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

239 239 240 243 246 246

29 Fifth Order Scheme 1. Introduction . . . . . . . 2. Scalar Sequences . . . . 3. Semi-Local Convergence 4. Numerical Experiments . 5. Conclusion . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

249 249 250 250 254 255

30 Werner Method 1. Introduction . . . . . . . 2. Majorizing Sequences . 3. Semi-Local Convergence 4. Numerical Experiments . 5. Conclusion . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

257 257 258 258 261 261

31 Yadav-Singh Method of Order Five 1. Introduction . . . . . . . . . . . 2. Semi-Local Convergence . . . . 3. Local Convergence . . . . . . . 4. Numerical Experiments . . . . . 5. Conclusion . . . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

263 263 264 267 269 269

. . . .

273 273 274 277 278

32 Convergence of a P + 1 Step Method of Order 2P + 1 with Frozen Derivatives 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Local Convergence . . . . . . . . . . . . . . . . . . . . . . . . 3. Numerical Experiments . . . . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Efficient Fifth Order Scheme 1. Introduction . . . . . . . 2. Ball Convergence . . . . 3. Numerical Experiments . 4. Conclusion . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

281 281 282 285 285

34 Sharma-Gupta Fifth Order Method 1. Introduction . . . . . . . . . . . 2. Convergence . . . . . . . . . . 3. Numerical Experiments . . . . . 4. Conclusion . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

289 289 290 293 293

. . . .

. . . .

. . . .

xii

Contents

35 Seventh Order Method for Equations 1. Introduction . . . . . . . . . . . . 2. Convergence . . . . . . . . . . . 3. Numerical Experiments . . . . . . 4. Conclusion . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

297 297 298 301 301

36 Newton-Like Method 1. Introduction . . . . . . . . 2. Mathematical Background 3. Majorizing Sequences . . . 4. Semi-Local Convergence . 5. Numerical Experiments . . 6. Conclusion . . . . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

. . . . . .

305 305 306 307 310 315 315

37 King-Type Methods 1. Introduction . . . . . . . 2. Majorizing Sequences . 3. Semi-Local Convergence 4. Numerical Experiments . 5. Conclusion . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

317 317 318 320 323 324

38 Single Step Third Order Method 1. Introduction . . . . . . . . . 2. Semi-Local Analysis . . . . 3. Local Convergence . . . . . 4. Numerical Example . . . . . 5. Conclusion . . . . . . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

. . . . .

325 325 326 330 332 334

39 Newton-Type Method for Non-Differentiable Inclusion Problems 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. Majorizing Sequences . . . . . . . . . . . . . . . . . . . . . . 3. Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

. . . .

335 335 336 339 341

. . . . .

40 Extended Kantorovich-Type Theory for Solving Nonlinear Equations Iteratively: Part I Newton’s Method 343 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 2. Convergence of NM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 41 Extended Kantorovich-Type Theory for Solving Nonlinear Equations Iteratively: Part II Newton’s Method 353 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353 2. Convergence of NLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356

Contents

xiii

42 Updated and Extended Convergence Analysis for Secant-Type Iterations 359
   1. Introduction 359
   2. Convergence of STI 360
   3. Conclusion 362
43 Updated Halley's and Chebyshev's Iterations 365
   1. Introduction 365
   2. Semi-Local Convergence Analysis for HI and CI 366
   3. Conclusion 368
44 Updated Iteration Theory for Non Differentiable Equations 371
   1. Introduction 371
   2. Convergence 371
   3. Conclusion 374
45 On Generalized Halley-Like Methods for Solving Nonlinear Equations 379
   1. Introduction 379
   2. Majorizing Convergence Analysis 380
   3. Semi-Local Analysis 383
   4. Special Cases 386
   5. Conclusion 387
46 Extended Semi-Local Convergence of Steffensen-Like Methods for Solving Nonlinear Equations 389
   1. Introduction 389
   2. Majorizing Real Sequences 390
   3. Convergence 394
   4. Conclusion 396

About the Authors 397
Index 399

Glossary of Symbols

et al. : et alii (and others)
etc. : et cetera
i.e. : id est (that is)
iff : if and only if
e.g. : exempli gratia (for example)
w.r.t. : with respect to
resp. : respectively
≠ : non-equality
∅ : empty set
∈, ∉ : belongs to, does not belong to
⇒ : implication
⇔ : if and only if
max : maximum
min : minimum
sup : supremum (least upper bound)
inf : infimum (greatest lower bound)
for all n : for all n ∈ N
Rn : real n-dimensional space
Cn : complex n-dimensional space
X × Y, X × X = X2 : Cartesian product space of X and Y
e1, . . ., en : the coordinate vectors of Rn
x = (x1, . . ., xn)T : column vector with components xi
xT : the transpose of x
{xn}n≥0 : sequence of points from X
‖.‖ : norm on X
‖.‖p : Lp norm
|.| : absolute value symbol
/./ : norm symbol of a generalized Banach space X
U(x0, R) : open ball {z ∈ X : ‖x0 − z‖ < R}
Ū(x0, R) : closed ball {z ∈ X : ‖x0 − z‖ ≤ R}
U(R) = U(x0, R) : ball centered at the zero element of X and of radius R
U, Ū : open and closed balls, respectively, with no particular reference to X, x0 or R
I : identity matrix (operator)
L : linear operator
L−1 : inverse of L


Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

M = {mij} : matrix, 1 ≤ i, j ≤ n
M−1 : inverse of M
det M or |M| : determinant of M
∑ : summation symbol
∏ : product of factors symbol
∫ : integration symbol
∈ : element inclusion
⊂, ⊆ : strict and non-strict set inclusion
∀ : for all
⇒ : implication
∪, ∩ : union, intersection
A − B : difference between sets A and B
F : D ⊆ X → Y : an operator with domain D included in X, and values in Y
F′(x), F″(x) : first and second Fréchet-derivatives of F evaluated at x

Preface

The book is a continuation of Volume I with the same title. It provides different avenues for the study of algorithms, and it brings new techniques and methodologies for problem solving in the computational sciences, engineering, scientific computing, and medicine (imaging, radiation therapy), to mention a few. A plethora of universally applicable algorithms is presented in a sound analytical way. The chapters are written independently of each other, so they can be understood without reading earlier chapters, although some knowledge of analysis and linear algebra, together with some computing experience, is required. The organization and content of the book cater to senior undergraduate and graduate students, researchers, practitioners, professionals, and academicians in the aforementioned disciplines. It can also be used as a reference book, and it includes numerous references and open problems.

Chapter 1

Correcting and Extending the Applicability of Two Fast Algorithms

1. Introduction

Let F : Ω ⊆ Cm −→ Cm be a G-continuously differentiable mapping defined on an open and convex set Ω. A plethora of problems from diverse disciplines reduce to finding a solution x∗ ∈ Ω of the equation

F(x) = 0. (1.1)

For example, many discretization studies of nonlinear differential equations end up with a system like (1.1) [1]-[16]. The solution x∗ can be found in closed form only in special cases. That forces researchers and practitioners to develop algorithms generating sequences approximating x∗. Recently, there has been a surge in the development of algorithms faster than Newton's [6]. These algorithms share common setbacks limiting their applicability, such as a small convergence region and error-distance estimates that are pessimistic in general. The information on the location of x∗ is not the best either. Motivated by these setbacks, we develop a technique extending the applicability of algorithms without adding conditions. In particular, we find a subset of Ω (on which the original Lipschitz parameters are defined) which also contains the iterates. On this subset, the parameters are special cases of the earlier ones and at least as tight. Therefore, they can replace the old ones in the proofs, leading to a finer convergence analysis. In Section 2, we demonstrate our technique by extending the applicability of the NHSS [13] and INHSS [3] algorithms. However, the technique is so general that it can be used to extend the applicability of other algorithms in an analogous way [1]-[16]. Next, we reintroduce NHSS and INHSS. Notice that the semi-local convergence analyses of these algorithms given in [13] and [3], respectively, are false, since they both rely on a false semi-local convergence criterion (see Remark 1).
We reported this problem to the journal Linear Algebra Applications, especially since many other related publications relying on the same false criterion (see (1.3)) have appeared without checking it (see [3,13] and the references therein). Unfortunately, the journal declined to publish the corrections we submitted. That simply means, among other things,


that this misinformation will continue to spread, since many other authors will use this false criterion in future works. This is our second motivation for presenting this chapter (the first being the introduction of a new technique). For brevity and to avoid repetition, we refer the reader to [3,13] for the benefits and applications of these fast algorithms, and we concern ourselves only with the presentation of the correct version of the results. Following the notation of [3,13], we have:

NHSS Algorithm [13]
Input: x(0) and tol.
For n = 1, 2, . . . until ‖F(x(n))‖ ≤ tol ‖F(x(0))‖ do:
1. For hn ∈ [0, 1) find e1(n) such that
   ‖F(x(n)) + F′(x(n)) e1(n)‖ < hn ‖F(x(n))‖.
2. Set x∗(n+1) = x(n) + e1(n).
3. Find e2(n) such that
   ‖F(x∗(n+1)) + F′(x(n)) e2(n)‖ < hn ‖F(x∗(n+1))‖.
4. Set x(n+1) = x∗(n+1) + e2(n).

End For

INHSS Algorithm [3]
Input: w(0) and tol, a, and positive integer sequences {µn}n≥0, {µ′n}n≥0.
For n = 1, 2, . . . until ‖F(w(n))‖ ≤ tol ‖F(w(0))‖ do:
1. Set e1n,0 = 0.
2. For l = 0, 1, 2, . . ., µn − 1 until ‖F(w(n)) + F′(w(n)) e1n,µn‖ < hn ‖F(w(n))‖ apply the HSS algorithm:
   (aI + H(w(n))) e1n,l+1/2 = (aI − S(w(n))) e1n,l − F(w(n)),
   (aI + S(w(n))) e1n,l+1 = (aI − H(w(n))) e1n,l+1/2 − F(w(n)).
3. Set w∗(n+1) = w(n) + e1n,ln.
4. Set e2n,0 = 0.
5. For l′ = 0, 1, 2, . . ., l′n − 1 until ‖F(w∗(n+1)) + F′(w(n)) e2n,l′n‖ < hn ‖F(w∗(n+1))‖ apply the HSS algorithm:
   (aI + H(w(n))) e2n,l′+1/2 = (aI − S(w(n))) e2n,l′ − F(w∗(n+1)),
   (aI + S(w(n))) e2n,l′+1 = (aI − H(w(n))) e2n,l′+1/2 − F(w∗(n+1)).
6. Set w(n+1) = w∗(n+1) + e2n,l′n.
End For
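To make the outer/inner structure of these iterations concrete, here is a minimal NumPy sketch of a Newton-HSS-style step: each Newton correction is computed approximately by HSS sweeps on the splitting F′(x) = H(x) + S(x). All names (`nhss`, the test system, the choices of `a` and `eta`) are illustrative assumptions, not the exact implementations of [3,13]; for clarity the sketch performs a single correction per outer step, whereas NHSS as stated performs two.

```python
import numpy as np

def nhss(F, J, x0, a=1.0, tol=1e-8, eta=0.1, max_outer=50, max_inner=100):
    """Sketch of a Newton-HSS outer iteration: the correction e solving
    J(x) e = -F(x) is approximated by HSS sweeps on J = H + S,
    H Hermitian, S skew-Hermitian."""
    x = np.asarray(x0, dtype=float)
    f0 = np.linalg.norm(F(x))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * f0:
            break
        Jx = J(x)
        H = 0.5 * (Jx + Jx.T)            # Hermitian part of the Jacobian
        S = 0.5 * (Jx - Jx.T)            # skew-Hermitian part
        I = np.eye(len(x))
        e = np.zeros_like(x)             # inner HSS iterate, e_{n,0} = 0
        for _ in range(max_inner):
            if np.linalg.norm(Fx + Jx @ e) < eta * np.linalg.norm(Fx):
                break                    # inexact Newton stopping rule
            e_half = np.linalg.solve(a * I + H, (a * I - S) @ e - Fx)
            e = np.linalg.solve(a * I + S, (a * I - H) @ e_half - Fx)
        x = x + e
    return x

# illustrative system with positive definite Jacobian, solution (0.5, 0.5)
A = np.array([[3.0, -1.0], [1.0, 3.0]])      # Hermitian part is 3*I
b = A @ np.array([0.5, 0.5]) + np.array([0.5, 0.5]) ** 3
F = lambda x: A @ np.asarray(x) + np.asarray(x) ** 3 - b
Jac = lambda x: A + np.diag(3 * np.asarray(x) ** 2)
sol = nhss(F, Jac, [0.0, 0.0])               # converges to (0.5, 0.5)
```

Because H(x) = 3I is positive definite here, each HSS sweep is a contraction, and the outer inexact Newton iteration converges for this mildly nonlinear system.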

2. Semi-Local Convergence

Pick x(0) ∈ Cm and consider F : Ω ⊂ Cm −→ Cm to be a G-differentiable function on an open set Ω on which F′(x) is continuous and positive definite. Suppose that F′(x) = H(x) + S(x), with H(x) = (1/2)(F′(x) + F′(x)∗) and S(x) = (1/2)(F′(x) − F′(x)∗) being the Hermitian and skew-Hermitian parts of the Jacobian matrix F′(x), respectively. The semi-local convergence is based on the following conditions.

(a1) There exist positive constants b, c and d such that
max{‖H(x(0))‖, ‖S(x(0))‖} ≤ b, ‖F′(x(0))−1‖ ≤ c, ‖F(x(0))‖ ≤ d.

(a2) There exist nonnegative parameters ℓ0h and ℓ0s such that for all x ∈ U(x(0), r) ⊆ Ω,
‖H(x) − H(x(0))‖ ≤ ℓ0h ‖x − x(0)‖, ‖S(x) − S(x(0))‖ ≤ ℓ0s ‖x − x(0)‖.
Set ℓ0 = ℓ0h + ℓ0s and r0 = 1/ℓ0.

(a3) (Lipschitz condition) There exist nonnegative constants ℓh and ℓs such that for all x, y ∈ U(x(0), 1/ℓ0) ∩ Ω,
‖H(x) − H(y)‖ ≤ ℓh ‖x − y‖, ‖S(x) − S(y)‖ ≤ ℓs ‖x − y‖.
Set ℓ = ℓh + ℓs.

By applying Banach's lemma, the next result holds, improving the corresponding one in [13].

Lemma 1. Suppose conditions (a1)-(a3) hold. Then, we have
(1) ‖F′(x) − F′(y)‖ ≤ ℓ ‖x − y‖,
(2) ‖F′(x)‖ ≤ ℓ0 ‖x − x(0)‖ + 2b,
(3) if r ≤ 1/(cℓ0), then F′(x) is nonsingular and satisfies
‖F′(x)−1‖ ≤ c / (1 − cℓ0 ‖x − x(0)‖).

The following semi-local convergence result corrects the corresponding one in [13, Theorem 1].

Theorem 1. Suppose that conditions (a1)-(a3) hold and that

dc²ℓ ≤ η0 = (1 − η)² / (−η³ + 2η² + 3η + 4), (1.2)

where η = max{ηk} < 1 and r = min{ρ1, ρ2}, with

ρ1 = ((a + b)/ℓ)(√(1 + 2aξτ/((2c + cξτ)(a + b)²)) − 1),

ρ2 = (β − √(β² − 2αγ))/α,

α = cℓ(1 + η)/(1 + 2c²dℓη), β = 1 − η, γ = 2cd,

and with ℓ∗ = lim infk→∞ ℓk satisfying ℓ∗ > ⌊ln(η)/ln((ξ + 1)τ)⌋, ξ ∈ (0, (1 − τ)/τ), and

τ = τ(a; x(0)) = ‖M(a; x(0))‖,

where ⌊v⌋ is the largest integer smaller than or equal to v. Then, the iteration sequence {x(k)}k≥0 generated by the NHSS algorithm is well defined and converges to x∗, with F(x∗) = 0.

Next, we state and prove the extension of this result for the INHSS method. We first introduce

s0 = 0, sk+1 = sk − f(sk)/f′(sk), k = 0, 1, 2, . . .,

where f(s) = (1/2)αs² − βs + γ and f′(s) = αs − β. It is shown in [13] that this sequence converges to ρ2 monotone increasingly, with f(sk) ≥ 0. So, we get sk < sk+1 < ρ2 and sk → s∗ (= ρ2).

Theorem 2. Suppose that the tolerance in the INHSS algorithm is smaller than η/8, that conditions (a1)-(a3) hold for the constants defined in Theorem 1, and that condition (a1) reads

max{‖H(w(0))‖, ‖S(w(0))‖} ≤ b, ‖F′(w(0))−1‖ ≤ c0, ‖F(w(0))‖ ≤ d/4,

for an initial guess w(0). Moreover, suppose µ∗ = min{lim infn→∞ µn, lim infn→∞ µ′n} satisfies µ∗ > ⌊ln η/ln((ξ + 1)τ)⌋, ξ ∈ (0, (1 − τ)/τ), and τ = τ(a; w(0)) = ‖M(a; w(0))‖ < 1.

5

Then, the iteration sequence {w(n)}∞ n=0 generated by INHSS algorithm is well defined and converges to w∗ with F(w∗ ) = 0. Further, sequence {w(n)}∞ n=0 hold the following relations 1 (1) kw∗ − w(0) k ≤ (s1 − s0 ), 4 (n)

kw∗ − w(n−1)k ≤

1 2n+3

(s2n−1 − sn−1 ), n = 2, 3, . . .

and also for n = 1, 2, . . ., we have (n)

kF(w∗ )k ≤

1 1 − cµs2n−1 (s2n − sn ), c(1 + c)

2n+3

w(n) − w(n−1) k ≤ (n)

kF(w∗ )k ≤

1 2n+2

(s2n − sn−1 ),

1 1 − cµs2n (s2n+1 − sn ), c(1 + c)

2n+2

1 kw(n) − w(0) k ≤ r2 , 2 1 (n) kw∗ − w(0) k ≤ r2 , 4 (n+1)

where c = 4c0 , w∗ = w(n) − F 0 (w(n) )−1 F(w(n)), ρ2 is defined as in Theorem 1 and {sn } is defined previously. Remark 1. (a) The following stronger (than (a3)) condition was used in [3,13] (a2’) kH(x) − H(y)k ≤ `¯h kx − yk kS(x) − S(y)k ≤ `¯s kx − yk for all x, y ∈ Ω0 . But, we have

Ω1 ⊆ Ω0 ,

so `h ≤ `¯h `s ≤ `¯s

and ` ≤ `¯ = `¯h + `¯s .

¯ Ω0 , respectively in all the results in [3]. MoreHence, `h , `s, `, Ω1 can replace `¯h , `¯s, `, over, Lemma 1 reduces to Lemma 1 in [3, 13], if `h = `¯h and `s = `¯s . Otherwise it constitutes an improvement. That is how, we obtain the advantages already stated in the introduction.

6

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al. (b) On the other hand, we correct the semi-local convergence criterion δγ2 `¯ ≤ η¯ 0 :=

1−η 2(1 + η2 )

(1.3)

given in [3,13] and other publications based on it. Indeed, if (1.3) exists then that ¯ so ρ1 and ρ2 may not does not necessarily imply b2 − 4ac ≥ 0 (with ` replaced by `), exist. Then, there is no guarantee that the algorithms NHSS or INHSS converge to the solution. Notice also that η0 ≤ η¯ 0 .

However, our condition (1.2) guarantees the existence of ρ1 and ρ2 . Hence, all the proofs of the results in [3,13] break down at this point.

3.

Conclusion

It turns out that the sufficient semi-local convergence criteria of two fast algorithms for solving systems with nonlinear equations given in several articles are false. In this chapter, we present the corrected version. Moreover, we develop a new technique for extending the applicability of algorithms.

References [1] Amat, S., Bermudez, C., Hernandez, M. A., Martinez, E., On an efficient k−step iterative method for nonlinear equations, J. Comput. Appl. Math., 302, (2016), 258271. [2] Amat, S., Argyros, I. K., Busquier, S., Hern´andez, M. A., On two high-order families of frozen Newton-type methods, Numer. Linear. Algebra Appl., 25, (2018), e2126, 1–13. [3] Amiri, A., Cordero, A., Darvishi, M. T., Torregrosa, J. R., A fast algorithm to solve systems of nonlinear equations, J. Comput. Appl. Math., 354, (2019), 242-258. [4] An, H. B., Bai, Z. Z., A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations, Appl. Numer. Math. 57 (2007), 235-252. [5] Argyros, I. K., Computational theory of iterative solvers. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co. New York, U.S.A, 2007. [6] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton’s method. J. Complexity 28 (2012) 364–387. [7] Argyros, I. K., Magr´en˜ an, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018. [8] Argyros, I. K., Magr´en˜ an, A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.

Correcting and Extending the Applicability of Two Fast Algorithms

7

[9] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [10] Argyros, I. K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [11] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [12] Bai, Z. Z., Yang, X., On HSS-based iteration methods for weakly nonlinear systems, Appl. Numer. Math. 59 (2009) 2923-2936. [13] Guo, X. P., Duff, I. S., Semilocal and global convergence of the Newton-HSS method for systems of nonlinear equations, Linear Algebra Appl. 18 (2010) 299-315. [14] Magre˜na´ n, A. A., Cordero, A., Guti´errez, J. M., Torregrosa, J. R., Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane, Mathematics and Computers in Simulation, 105:49-61, 2014. [15] Magre˜na´ n, A. A., Argyros, I. K., Two-step Newton methods. Journal of Complexity, 30(4):533-553, 2014. [16] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

Chapter 2

On the Solution of Generalized Equations in Hilbert Space 1.

Introduction

A new technique is developed to extend the convergence region and provide tighter error estimates of Newton’s Algorithm (NA) for solving generalized equations for Hilbert spacevalued operators without additional conditions. The technique is very general, so it can be used to extend the applicability of other algorithms too. In this chapter, we approximate a solution x∗ of the generalized inclusion F(x) + Q(x) 3 0,

(2.1)

F(xn ) + F 0 (xn )(xn+1 − xn ) + Q(xn ),

(2.2)

using Newton’s algorithm (NA)

where H is a Hilbert space, D ⊆ H is an open set, F : D −→ H is differentiable according to Fr´echet, and Q : H −→ H is a maximal set-valued operator. Many problems in applied mathematics and other disciplines can be formulated as (2.1) using mathematical modeling [1]-[13]. Many researchers have worked on the convergence of NA and its special cases [1]-[13]. But in all these studies the convergence region is not large enough to limit the applicability of NA. That is why motivated by the work in [6] (that generalized earlier ones [6]) as well as optimization considerations, we develop a technique that determines a D0 ⊂ D also containing the iterates xn . But on which the majorant functions are tighter and specializations of the ones in [6]. This way, the convergence region is enlarged, and the error estimates on kxn − x∗ k are more precise than in [6]. Hence, the applicability of NA is extended without additional conditions. The semi-local convergence is given in Section 2, followed by an example in Section 3 and the conclusion in Section 4.

10

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

2.

Convergence

We assume, for brevity, familiarity with the standard concepts introduced in this section. More details can be found in [3]-[13] and the references therein. Let M : H −→ H denote a bounded linear operator. Let M˜ := 12 (M + M ∗ ), with M ∗ standing for the conjugate of M. Set kF 0 (x0 )−1 k ≤ γ for some x0 ∈ D such that F 0 (x˜0 )−1 exists and λ > 0. Let also ρ > 0 and define ρ0 := sup{s ∈ [0, ρ) : U(x0 , s) ⊂ D}. The following majorant conditions on F 0 are needed. Definition 1. Suppose that there exists function w0 : [0, ρ0) −→ R twice differentiable (continuously) such that for each x ∈ U(x0 , ρ0 ) γkF 0 (x) − F 0 (x0 )k ≤ w00 (kx − x0 k) − w00 (0).

(2.3)

Suppose that equation w00 (s) − w00 (0) − 1 = 0

(2.4)

has a least solution r ∈ (0, ρ0 ). Definition 2. Suppose that there exists function w : [0, r) −→ R twice differentiable (continuously) such that for each x, y ∈ U(x0 , r) γkF 0 (y) − F 0 (0)k ≤ w0 (ky − xk + kx − x0 k) − w0 (kx − x0 ).

(2.5)

Definition 3. Suppose that there exists function w1 : [0, ρ0) −→ R twice differentiable (continuously) such that for each x, y ∈ U(x0 , ρ0 ) γkF 0 (y) − F 0 (x)k ≤ w01 (ky − xk + kx − x0 k) − w01 (kx − x0 k).

(2.6)

Remark 2. It follows from r ≤ ρ0

(2.7)

w00 (s) ≤ w01 (s)

(2.8)

w0 (s) ≤ w01 (s)

(2.9)

w00 (s) ≤ w0 (s) for each s ∈ [0, r).

(2.10)

that and for all s ∈ [0, r). We also assume, from now on

Otherwise, w¯ replaces w0 , w, in the results that follow, where this function is the largest of the two on [0, r). Using only (2.6) the semi-local convergence of NA was given in [6]. In particular the following estimate was obtained ˜ −1 k ≤ − kF 0 (x)

γ

. w01 (s)

(2.11)

On the Solution of Generalized Equations in Hilbert Space

11

But, we use the more precise, weaker, and needed ˜ −1 k ≤ − kF 0 (x)

γ w00 (s)

≤−

γ w0 (s)

.

(2.12)

Under this modification and using w instead of w1 in the proofs of [6], we extend the applicability of NA (see (2.7)-(2.12)). Next, we present the main semi-local convergence result for NA. Theorem 3. Suppose that (2.3), (2.5), kx1 − x0 k ≤ w(0) = η

(2.13)

and the following conditions hold, c1. w(0) > 0 and w0 (0) = −1; c2. w0 is convex and strictly increasing, c3. w(s) = 0 for some s ∈ (0, R). Then, w has a smallest zero s∗ ∈ (0, R), the sequence generated by NA for solving the generalized equation F(x) + T (x) 3 0 and the equation w(s) = 0, with starting point x0 and s0 = 0, respectively, 0 ∈ F(xn ) + F 0 (xn )(xn+1 − xn ) + T (xn+1 ), sn+1 = sn −

w.(sn) , n = 0, 1, 2, . . .. w0 (sn )

(2.14)

are well defined, {sn } is strictly increasing, is contained in (0, s∗ ) and converges to s∗ , ¯ 0 , s∗ ) which is the unique {xn } is contained in U(x0 , s∗ ) and converges to the point x∗ ∈ U(x ¯ 0 , s∗ ). Moreover, the sequence solution of the generalized equation F(x) + T (x) 3 0 in U(x {xn } and {sn } satisfies, en := kx∗ − xn k ≤ s∗ − sn , en+1 ≤

s∗ − sn+1 en , (s∗ − sn )2

(2.15)

for all n = 0, 1, 2, . . . and the sequence {sn } and {xn } converge Q−linearly as follows 1 1 en+1 ≤ en , s∗ − sn+1 ≤ (s∗ − sn ), n = 0, 1, 2, . . .. 2 2

(2.16)

If, additionally c4. w0 (s∗ ) < 0 then, the sequences, {sn } and {xn } converge Q− quadratically so that en+1 ≤

D− w0 (s∗ ) D− w0 (s∗ ) e , s − s ≤ (s∗ − sn )2 , n ∗ n+1 −2w0 (s∗ ) −2w0 (s∗ )

(2.17)

for all n = 0, 1, 2, . . .. Remark 3. (a) If w0 = w = w1 , then our results reduce to the ones in [6]. Otherwise, these results constitute an improvement with advantages already stated previously. Notice that (2.6) implies (2.3) and (2.5) and w0 , w are specializations of w1 . Hence, these advanatges are obtained without additional conditions

12

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(b) In the Lipschitz case suppose that functions w0 , w, w1 are constants, then we can choose `0 w0 (s) = s2 − s + η, 2 ` w(s) = s2 − s + η 2 and `1 w1 (s) = s2 − s + η, 2 with `0 < ` < `1 (see also the numerical example). Then, we have 1 1 =⇒ h = `η ≤ , 2 2 √ √ 1 − 1 − 2η` 1 − 1 − 2η`1 s∗ = ≤ t∗ = , ` `1 sn ≤ tn h 1 = `1 η ≤

and sn+1 − sn ≤ tn+1 − tn , where tn+1 = tn −

w1 (tn ) . w01 (tn )

(c) Similar advantages are obtained under the Smale-Wang condition by considering instead functions s w0 (s) = − 2s + η, 1 − γ0 s s w(s) = − 2s + η, 1 − γs and

w1 (s) =

s − 2s + η, 1 − γ1 s

for γ0 < γ < γ1 . Examples where (2.7)-(2.10) are strict can be found in [1]-[5] (see also the numerical example).

3.

Numerical Examples

Example 1. Let H = R, D = U(x0 , 1 − ξ), x0 = 1, and ξ ∈ (0, 12 ). Consider F on D as F(s) = s3 − ξ. Then, we get 1 1 µ = , `0 = 3 − ξ < ` = 2(1 + ) < `1 = 2(2 − ξ) 3 3−ξ

for all ξ ∈ (0, 12 ) and η = 13 (1 − ξ).

On the Solution of Generalized Equations in Hilbert Space The old criterion h1 ≤

1 2

13

is not satisfied, since 1 1 h1 = 2((2 − ξ) (1 − ξ) > 3 2

for all ξ ∈ (0, 12 ). Then, there is no guarantee sequence {xn } converges to x∗ = under the new criterion h ≤ 12 , we have h = 2(1 + √

p 3

ξ. But

1 1−ξ 1 ) ≤ 3−ξ 3 2

provided that ξ ∈ [ 17−8 177 ≈ 0.461983163, 12 ). Hence, the applicability of NA is extended p with advantages as stated before and limn−→∞ xn = 3 ξ is guaranteed by the verification of the new convergence criterion.

4.

Conclusion

Motivated by optimization considerations and the fact that the convergence region of algorithms is not large in general, we developed a technique that extends this region for NA, improves the error estimates, and provides a better knowledge of the solution without additional conditions. The technique is very general so it can provide the same advantages if applied to other algorithms too.

References [1] Argyros, I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, (1996). [2] Argyros, I. K., On the Newton–Kantorovich hypothesis for solving equations, J. Comput. Appl. Math. 169 (2004) 315–332. [3] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [4] Argyros, I. K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [5] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [6] Silva, G. N., Kantorovich’s theorem on Newton’s method for solving generalized equations under the majorant condition, Appl. Math. Compu., 286, 178-188, (2016). [7] Dontchev, A. L., Rockafellar, R. T., Implicit functions and solution mappings Springer Monographs in Mathematics, Springer, Dordrecht (2009).

14

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[8] Ferreira, O. P., Gonalves, M. L. N., Oliveira, P. R., Convergence of the Gauss-Newton method for convex composite optimization under a majorant condition SIAM J. Optim., 23 (3) (2013), 1757-1783. [9] Kantorovich, L. V., On Newtons method for functional equations Dokl. Akad. Nauk SSSR, 59 (1948), 1237-1240. [10] Rockafellar, R. T., Monotone processes of convex and concave type Memoirs of the American Mathematical Society, vol. 77, American Mathematical Society, Providence, R.I. (1967). [11] Smale, S., Newton’s method estimates from data at one point, The Merging of Disciplines: New Directions in Pure, Applied, and Computational Mathematics, Springer, New York (1986), 185-196. [12] Wang J. Convergence ball of Newtons method for generalized equation and uniqueness of the solution J. Nonlinear Convex Anal., 16 (9) (2015), 1847-1859. [13] Zhang, Y., Wang, J., Gu, S. M., Convergence criteria of the generalized Newton method and uniqueness of solution for generalized equations, J. Nonlinear Convex Anal. 16(7) 1485–1499.

Chapter 3

Gauss-Newton Algorithm for Convex Composite Optimization 1.

Introduction

Let T : Ri −→ R be a convex functional and F : R j −→ Ri stand for a continuously differentiable mapping. A plethora of problems from mathematical programmings, such as penalization algorithms minimax programming, to mention a few [9], can be written in the form min T (F(x)). (3.1) It is known that dealing with (3.1) results to studying F(x) ∈ K = argminT,

(3.2)

where K is a nonempty minimizer. This is due to to the fact that, if x∗ ∈ Ri satisfies (3.2), then x∗ solves (3.1) but not necessarily vice versa. There are many results used to determine x∗ [1]-[18]. We are motivated by optimization considerations and the seminal work in [9] which extended and generalized earlier works in [9] on the usage of the Gauss-Newton Algorithm (GNA) developed to generate a sequence converging to x∗ under certain criteria on the initial data [9, 13]. In particular, let δ ∈ (0, ∞], γ ∈ [1, ∞), and x0 ∈ Ri . Then, GNA is formally developed as [9]: (i) Choose δ ∈ (0, ∞), γ ∈ [1, ∞) and x0 ∈ Ri . Set n = 0. (ii) Determine Dδ (xn ). If 0 ∈ Dδ (xn ), Stop. Otherwise. (iii) Compute en satisfying en ∈ Dδ (xn ), ken k ≤ γd(0, Dδ(xn )) and let xn+1 = xn + en , n 7→ n + 1 and go to (ii).

16

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

The convergence region in the previous works is small, whereas the estimations on kxn −x∗ k are pessimistic in general. These problems limit the applicability of GNA and its specializations. That is why we determine a subset of the original region also containing the iterates. But in this new set, the majorant functions are tighter and specializations of the corresponding ones in [9]. Hence, these new functions can replace the old ones in all the results in [9], leading to weaker sufficient semi-local convergence criteria and tighter estimations on kxn − x∗ k but without additional conditions. Hence, the applicability of GNA is extended. The developed technique is also so general that it can be used on other algorithms. The new technique is given in Section 2.

2.

Convergence of GNA

We develop certain types of majorant conditions on F 0 , so that we can make a comparison between them. Definition 4. It is said that a twice differentiable function w0 : [0, ρ) −→ R is center majorant for F ∈ C(Ri , Ri ), ρ > 0 on U(x0 , ρ) if kF 0 (x) − F 0 (x0 )k ≤ w00 (kx − x0 k) − w00 (0)

(3.3)

holds for all x ∈ U(x0 , ρ). Suppose that equation w00 (t) − w00 (0) − 1 = 0

(3.4)

has a least solution ρ0 ∈ (0, ρ]. Definition 5. It is said that a twice differentiable function w : [0, ρ0 ) −→ R is center majorant for F ∈ C(Ri , Ri ), ρ0 > 0 on U(x0 , ρ0 ) if kF 0 (x) − F 0 (y)k ≤ w0 (ky − xk + kx − x0 k) − w0 (kx − x0 k)

(3.5)

holds for all x, y ∈ U(x0 , ρ0 ), ky − xk + kx − x0 k < ρ0 . Definition 6. It is said that a twice differentiable function w1 : [0, ρ) −→ R is center majorant for F ∈ C(Ri , Ri ), ρ > 0 on U(x0 , ρ) if kF 0 (x) − F 0 (y)k ≤ w01 (ky − xk + kx − x0 k) − w01 (kx − x0 k)

(3.6)

holds for all x, y ∈ U(x0 , ρ), ky − xk + kx − x0 k < ρ. Remark 4. Condition (3.6) implies (3.3) and (3.5) but not necessarily vice versa. It follows from ρ0 ≤ ρ (3.7) that w00 (t) ≤ w01 (t)

(3.8)

w0 (t) ≤ w01 (t)

(3.9)

and

Gauss-Newton Algorithm for Convex Composite Optimization

17

hold for each t ∈ [0, ρ0 ). We assume from now on that for each t ∈ [0, ρ0 ), w00 (t) ≤ w0 (t) for each t ∈ [0, ρ0).

(3.10)

Otherwise function w¯ standing for the largest of w0 and w on the interval [0, ρ0 ) can replace them in the results that follow. Let x0 ∈ Ri be a quasi regular point of (3.2). Define ρx0 by ρx0 := sup{ρ : ∃β : [0ρ) −→ (0, ∞) satisfying (3.12)}

(3.11)

/ d(0, DK (x)) ≤ β(kx − x0 k)d(F(x), K) DK 6= 0,

(3.12)

βx0 = inf{β(t) : β ∈ U(x0 , ρx0 )} for each t ∈ [0, ρx0 ),

(3.13)

where for each x ∈ U(x0 , ρ). Moreover, define

where U(x0 , ρ) = {β : [0, ρ) −→ (0, ∞) : β satisfying (3.12)}.

Let α > 0 and µ > 0. Define auxiliary functions w0µ,α : [0, ρ) −→ R, w1µ,α : [0, ρ) −→ R, and wµ,α : [0, ρ0 ) −→ R by w0µ,α (t) = µ + (α − 1)t + αw0 (t),

w1µ,α (t) = µ + (α − 1)t + αw1 (t), and Notice that we can assume

wµ,α (t) = µ + (α − 1)t + αw(t).

w0µ,α (t) ≤ wµ,α (t) ≤ w1µ,α (t) for each t ∈ [0, ρ0 ).

(3.14)

Based on (3.6) the following sufficient convergence criterion was assumed in [9]: α ≥ sup{

γβx0 (t) : µ ≤ t ≤ ρ∗ } = α 1 γβx0 (t)(1 + w01 (t)) + 1

(3.15)

where ρ∗ is the smallest solution of equation w1µ,α (t) = 0 approximated by the sequence t0 = 0,tn+1 = tn −

w1µ,α (tn ) , n = 0, 1, 2, . . .. (w1µ,α )0(tn )

(3.16)

But it turns that weaker (3.3) is actually needed and the criterion is α ≥ sup{

γβx0 (t) : µ ≤ t ≤ ρ∗ } = α 2 . γβx0 (t)(1 + w0 (t)) + 1

(3.17)

Notice that we have an improvement, since α1 ≤ α2 .

(3.18)

Hence, w can replace w1 in all the results in [9] with advantages as already stated in the introduction. We need some additional conditions.

18

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

(c1) w0 (0) = w(0) = 0, w00 (0) = w0 (0) = −1; (c2) w0 , w are convex and strictly increasing; (c3) there exists ρ∗ ∈ (0, ρ0 ) such that wµ,α (t) > 0 for each t ∈ (0, ρ∗) and wµ,α (ρ∗ ) = 0 (c4) (w0µ,α)0 , w0µ,α are convex and strictly increasing. Next, we present the main semi-local convergence result for GNA. Theorem 4. Assume (3.3), (3.5) hold and ρ0 given in (3.4) exists. If wµ,α satisfies (c3), then sequence {sn } for solving wµ,α (t) = 0 defined as s0 = 0, sn+1 = sn −

wµ,α (sn ) , n = 0, 1, 2, . . . (wµ,α )0(sn )

(3.19)

is well defined, strictly increasing, contained in [0, s∗), and it converges Q− linearly to s∗ . Let γ ∈ [1, ∞), δ ∈ [0, ∞], and T : Ri −→ R be a real valued convex function with minimizer / Suppose that x0 ∈ Ri is a quasi regular point of the inclusion set K 6= 0. F(x) ∈ K

(3.20)

with the quasi regular radius ρx0 and the quasi regular bound function βx0 as defined in (3.11)-(3.13). If d(F(x0 ), K) > 0, s∗ ≤ ρx0 , (3.17) and δ ≥ µ ≥ γβx0 (0)d(F(x0 ), K) hold. Then, the sequence {xn } generated by GNA stays in U(x0 , x∗ ), F(xn ) + F 0 (xn )(xn+1 − xn ) ∈ K, n = 0, 1, , 2 . . ., satisfies kxn+1 − xn k ≤ sn+1 − sn , kxn+1 − xn k ≤

sn+1 − sn kxn − xn−1 k2 , (sn − sn−1 )2

(3.21)

¯ 0 , s∗ ) for all n = 0, 1, , 2, . . . and n = 1, , 2, . . ., respectively, and converges to a point x∗ ∈ U(x ∗ such that F(x ) ∈ K, en = kx∗ − xn k ≤ s∗ − sn , n = 0, 1, 2, . . . (3.22) and the convergence is R−linear. If, additionally, wµ,α satisfies (c4, then the following inequalities hold: w00µ,α (s∗ ) w00µ,α (s∗ ) 2 kxn+1 − xn k ≤ kxn − xn−1 k , sn+1 − sn ≤ (sn − sn−1 )2 −2w0µ,α (s∗ ) −2w0µ,α (s∗ )

(3.23)

for all n = 1, 2, . . .. Moreover, the sequence {xn } and {sn } converge Q− quadratically to x∗ and s∗ , respectively, as follows w00µ,α (s∗ ) w00µ,α (s∗ ) en+1 ≤ , s − s ≤ (s∗ − sn )2 ∗ n+1 −2w0µ,α (s∗ ) −2w0µ,α (s∗ ) n−→∞ en

lim sup

for all n = 0, 1, 2, . . ..

(3.24)

Gauss-Newton Algorithm for Convex Composite Optimization

19

Remark 5. The corresponding to (3.21)-(3.24) estimates (using {sn } and w1 ) are less precise, since sn ≤ tn , (3.25) sn+1 − sn ≤ tn+1 − tn ,

(3.26)

lim sn = s∗ ≤ t∗ = lim tn ,

(3.27)

wµ,α (sn ) ≤ w1µ,α (tn ),

(3.28)

w0µ,α (sn ) ≤ (w1µ,α )0 (tn ),

(3.29)

w1µ,α (sn ) ≤ w1µ,α (tn ),

(3.30)

n−→∞

n−→∞

and −

1 w0µ,α (t)

≤−

1 w0µ,α (t)

≤−

1 (w1µ,α )0 (t)

,

(3.31)

for each t ∈ [0, ρ0 ). Examples where (3.7)-(3.10), (3.18), can be found in [1]-[4]. Functions tighter than w can also be found if in (3.5) U(x0 , ρ0 ) is replaced by U(x1 , ρ0 − s0 ) which is included in U(x0 , ρ0 ) [1]-[4].

3.

Conclusion

The convergence region of the Gauss-Newton Algorithm for solving composite optimization problems is extended without additional conditions.

References [1] Argyros, I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), 1-7, ( 1996). [2] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York (2007). [3] Argyros, I. K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, NY, 2020. [4] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub’s method, Journal of Complexity 56 (2020) 101423, https://doi.org/10.1016/j.jco.2019.101423. [5] Burke, J. and Ferris, M. C., A Gauss-Newton method for convex composite optimization. Technical report, 1993.

20

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

[6] Dennis, J. E., Jr. and Schnabel, R. B., Numerical methods for unconstrained optimization and nonlinear equations (Classics in Applied Mathematics, 16). Soc for Industrial & Applied Math, 1996. [7] Ferreira, O. P., Goncalves, M. L. N., and Oliveira, P. R., Local convergence analysis of the Gauss-Newton method under a majorant condition. J. Complexity, 27(1):111 125, 2011. [8] Ferreira, O. P. and Svaiter, B. F., Kantorovich’s theorem on Newton’s method in Riemannian manifolds. J. Complexity, 18(1):304 329, 2002. [9] Ferreira, O. P., Goncalves, M. L. N., Oliveira, P. R., Convergence of the Gauss Newton method for convex composite optimization under a majorant condition, SIAM J Optim, 23, 3, 1757-1783, (2013). [10] Ferreira, O. P. and Svaiter, B. F., Kantorovich’s majorants principle for Newton’s method. Comput. Optim. Appl., 42(2):213229, 2009. [11] Hiriart-Urruty, J. B. and Lemar´echal, C., Convex analysis and minimization algorithms. I, volume 305 of Grundlehren der Mathematischen Wissenschaften [Fundamental Prin- ciples of Mathematical Sciences]. Springer-Verlag, Berlin, 1993. Fundamentals. [12] Li, C. and Ng, K. F., Majorizing functions and convergence of the Gauss-Newton method for convex composite optimization. SIAM J. Optim, pages 613642, 2007. [13] Li, C. and Wang, X., On convergence of the Gauss-Newton method for convex composite optimization. Math. Program., 91:349356, 2002. 10.1007/s101070100249. [14] Robinson, S. M., Extension of Newton’s method to nonlinear functions with values in a cone. Numer. Math., 19:341347, 1972. 10.1007/BF01404880. [15] Rockafellar, R. T., Monotone processes of convex and concave type. Memoirs of the American Mathematical Society, No. 77. American Mathematical Society, Providence, R.I., 1967. [16] Rockafellar, R. T., Convex analysis. Princeton Mathematical Series, No. 28. Princeton University Press, Princeton, N.J., 1970. [17] Smale, S., Newton’s method estimates from data at one point. 
In The merging of disciplines: new directions in pure, applied, and computational mathematics (Laramie, Wyo., 1985), pages 185196. Springer, New York, 1986. [18] Wang, X., Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal., 20(1):123134, 2000.

Chapter 4

Local Convergence of Newton's Algorithm on Riemannian Manifolds

1. Introduction

Let $M$ stand for a Riemannian manifold, $D \subset M$ be an open set, and $X : D \longrightarrow TM$ denote a continuously differentiable vector field. We are interested in finding a singularity $z_*$ of $X$, i.e., a solution of the equation
$$X(z) = 0 \tag{4.1}$$
using Newton's Algorithm (NA), defined as [11]:
$$z_{n+1} = \exp_{z_n}(-\nabla X(z_n)^{-1} X(z_n)). \tag{4.2}$$

There is extensive literature on local as well as semi-local convergence results for NA under various conditions [1]-[14]. But the convergence region is not large; the error estimates on $\|z_n - z_*\|$ are pessimistic; and the information on the location of the solution $z_*$ is not optimal in general. These problems limit the applicability of NA. That is why, motivated by these problems and the elegant work by Ferreira and Silva in [11], where they unified and generalized earlier results, we develop a technique via which: the ball of convergence is enlarged (i.e., more initial points $z_0$ become available); tighter error estimates on $\|z_n - z_*\|$ are provided (i.e., fewer iterates are needed to achieve a predetermined error tolerance); and better information on the location of the solution $z_*$ is obtained. The novelty of the technique is that these improvements are obtained without additional conditions, since the developed majorant functions are special cases of the ones used in [11]. Moreover, the technique is very general, so it can be used to expand the applicability of other algorithms along the same lines [1]-[14]. The new technique is presented in Section 2.
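In the Euclidean special case $M = \mathbb{R}^2$, where $\exp_z(v) = z + v$ and the covariant derivative of $X$ is the ordinary Jacobian, iteration (4.2) reduces to the classical Newton method. A minimal sketch (the vector field $X$ below is an illustrative choice, not taken from the text):

```python
# Newton's Algorithm (4.2) in the Euclidean special case M = R^2, where
# exp_z(v) = z + v and the covariant derivative of X is the Jacobian J.
# The vector field X below is an illustrative choice, not from the text.

def newton_step(X, J, z):
    """One NA step: solve J(z) v = -X(z) (2x2, Cramer's rule), return z + v."""
    (a, b), (c, d) = J(z)
    f1, f2 = X(z)
    det = a * d - b * c
    v1 = (-f1 * d + b * f2) / det
    v2 = (-a * f2 + f1 * c) / det
    return (z[0] + v1, z[1] + v2)

def newton(X, J, z0, tol=1e-12, itmax=50):
    """Iterate (4.2) until successive iterates agree to within tol."""
    z = z0
    for _ in range(itmax):
        zn = newton_step(X, J, z)
        if max(abs(zn[0] - z[0]), abs(zn[1] - z[1])) < tol:
            return zn
        z = zn
    return z

# A singularity of X(z) = (z1^2 + z2^2 - 2, z1 - z2) is z* = (1, 1).
X = lambda z: (z[0] ** 2 + z[1] ** 2 - 2.0, z[0] - z[1])
J = lambda z: ((2.0 * z[0], 2.0 * z[1]), (1.0, -1.0))
z_star = newton(X, J, (2.0, 0.5))
```

On a genuine manifold the update $z + v$ would be replaced by the exponential map (or a retraction) applied to the Newton direction $v$.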

2. Convergence

For brevity, we refer the reader to [9] for the standard concepts introduced in this section. First, we develop some majorant conditions. Let r > 0 and set µ = sup{t ∈ [0, r) : U(z∗ ,t) ⊂ D}.


Definition 7. It is said that $\nabla X$ satisfies the center-majorant condition on $U(z_*, \mu)$ if there exists a continuously differentiable $w_0 : [0, r) \longrightarrow \mathbb{R}$ such that
$$\|\nabla X(z_*)^{-1}[P_{\lambda,1,0}\nabla X(z) - P_{\lambda,0,0}\nabla X(z_*)P_{\lambda,1,0}]\| \le w_0'(d(z_*, z)) - w_0'(0) \tag{4.3}$$
for each $z \in U(z_*, \mu)$, where $\lambda : [0, 1] \longrightarrow M$ is a minimizing geodesic from $z_*$ to $z$.

Suppose that the equation
$$w_0'(s) - w_0'(0) - 1 = 0 \tag{4.4}$$
has a least solution $\mu_0 \in (0, \mu]$.

Definition 8. It is said that $\nabla X$ satisfies the restricted-majorant condition on $U(z_*, \mu_0)$ if there exists a continuously differentiable $w : [0, \mu_0) \longrightarrow \mathbb{R}$ such that
$$\|\nabla X(z_*)^{-1}[P_{\lambda,1,0}\nabla X(z) - P_{\lambda,\theta,0}\nabla X(\lambda(\theta))P_{\lambda,1,\theta}]\| \le w'(d(z_*, z)) - w'(\theta d(z_*, z)) \tag{4.5}$$
for each $z \in U(z_*, \mu_0)$, $\theta \in [0, 1]$, and $\lambda$ as in Definition 7.

Definition 9. It is said that $\nabla X$ satisfies the majorant condition on $U(z_*, \mu)$ if there exists a continuously differentiable $w_1 : [0, r) \longrightarrow \mathbb{R}$ such that
$$\|\nabla X(z_*)^{-1}[P_{\lambda,1,0}\nabla X(z) - P_{\lambda,\theta,0}\nabla X(\lambda(\theta))P_{\lambda,1,\theta}]\| \le w_1'(d(z_*, z)) - w_1'(\theta d(z_*, z)) \tag{4.6}$$
for each $z \in U(z_*, \mu)$, $\theta \in [0, 1]$, and $\lambda$ as in Definition 7.

Remark 6. In view of these definitions and $\mu_0 \le \mu$, we have
$$w_0'(s) \le w_1'(s) \tag{4.7}$$
and
$$w'(s) \le w_1'(s) \tag{4.8}$$
for all $s \in [0, \mu_0]$. Notice that (4.6) (used in [11]) implies (4.3) and (4.5), but not vice versa. Under (4.6), the following estimate was given in [11, Lemma 4.4]:
$$\|\nabla X(z)^{-1}P_{\lambda,0,1}\nabla X(z_*)\| \le \frac{1}{|w_1'(d(z_*, z))|}. \tag{4.9}$$
But under the weaker (4.3), which is what is actually needed, the more precise estimate
$$\|\nabla X(z)^{-1}P_{\lambda,0,1}\nabla X(z_*)\| \le \frac{1}{|w_0'(d(z_*, z))|} \tag{4.10}$$
can be obtained. Hence, with this modification, and with $w$ used instead of $w_1$ for the rest of the estimates in [11], the results in [11] can be rewritten in this extended setting. More precisely, we present the main local convergence result for NA.

Theorem 5. Suppose: (4.3) and (4.5) hold, and $\mu_0$ given by (4.4) exists;

(c1) $w_0(0) = w(0) = 0$, $w_0'(0) = w'(0) = -1$; $w_0'$ and $w'$ are strictly increasing and convex;

(c2) Let $a := \sup\{t \in [0, r) : w'(t) < 0\}$, $b := \sup\left\{c \in (0, a) : \dfrac{w(t) - w'(t)t}{w'(t)t} < 0 \text{ for each } t \in (0, c)\right\}$, and $\rho := \min\{T, b, \rho_{z_*}\}$.
Definition 10. $DF$ is said to satisfy:

(i) The center $L_0$-average Lipschitz condition on $U(q_0, \rho)$, if for any point $q \in U(q_0, \rho)$ and any geodesic $g$ connecting $q_0, q$ with $m(g) < \rho$, we have
$$\|V_{q_0}^{-1}\| \cdot \|DF(q)P_{g,q,q_0} - DF(q_0)\| \le \int_0^{m(g)} L_0(s)\,ds. \tag{5.5}$$

(ii) The $L_1$-average Lipschitz condition on $U(q_0, \rho)$, if for any points $q, x \in U(q_0, \rho)$ and any geodesic $g$ connecting $q, x$ with $e(q_0, q) + m(g) < \rho$, we have
$$\|V_{q_0}^{-1}\| \cdot \|DF(x)P_{g,x,q} - DF(q)\| \le \int_{e(q_0,q)}^{e(q_0,q)+m(g)} L_1(s)\,ds. \tag{5.6}$$

Suppose that the equation
$$\int_0^t L_0(s)\,ds - 1 = 0 \tag{5.7}$$
has a least solution $\rho_0 \in (0, \rho]$.

(iii) The restricted $L$-average Lipschitz condition on $U(q_0, \rho_0)$, if for any points $q, x \in U(q_0, \rho_0)$ and any geodesic $g$ connecting $q, x$ with $e(q_0, q) + m(g) < \rho_0$, we have
$$\|V_{q_0}^{-1}\| \cdot \|DF(x)P_{g,x,q} - DF(q)\| \le \int_{e(q_0,q)}^{e(q_0,q)+m(g)} L(s)\,ds. \tag{5.8}$$

Remark 8. It follows from
$$\rho_0 \le \rho \tag{5.9}$$
and (5.5), (5.6), and (5.8) that
$$L_0(s) \le L_1(s) \tag{5.10}$$
and
$$L(s) \le L_1(s) \tag{5.11}$$
hold for all $s \in [0, \rho_0]$. Moreover, (5.8) implies (5.5) and (5.6), but not necessarily vice versa unless $L_0 = L = L_1$. We suppose from now on that
$$L_0(s) \le L(s) \text{ for all } s \in [0, \rho_0]. \tag{5.12}$$

Newton’s Algorithm on Riemannian Manifolds with Values in a Cone


Otherwise, $\bar{L}$ replaces $L_0$ and $L$ in the results that follow, where $\bar{L}$ is the largest of $L_0$ and $L$ on $[0, \rho_0]$. In Lemma 3.1 of [9], using (5.6), the following estimate was obtained:
$$\|V_q^{-1}\| \le \frac{\|V_{q_0}^{-1}\|}{1 - \int_0^{m(g)} L_1(s)\,ds}. \tag{5.13}$$
But we use the weaker, and actually needed, (5.5) to obtain instead the tighter estimate
$$\|V_q^{-1}\| \le \frac{\|V_{q_0}^{-1}\|}{1 - \int_0^{m(g)} L_0(s)\,ds} \le \frac{\|V_{q_0}^{-1}\|}{1 - \int_0^{m(g)} L(s)\,ds}. \tag{5.14}$$

Hence, $L$ can simply replace $L_1$ in all the results in [9] to obtain the advantages already stated in the introduction. That is why we omit the proofs of the results that follow. Let $\beta > 0$ be such that
$$\beta = \int_0^{\rho_0} L(u)\,du \tag{5.15}$$
and let $\eta > 0$. For our main theorem, we define the majorizing function $\varphi$ by
$$\varphi(s) = \eta - s + \int_0^s L(u)(s - u)\,du \text{ for each } s \ge 0. \tag{5.16}$$
Thus,
$$\varphi'(s) = -1 + \int_0^s L(u)\,du \text{ and } \varphi''(s) = L(s) \text{ for each } s \ge 0. \tag{5.17}$$
Let $\{s_n\}$ denote the Newton sequence for $\varphi$ with initial point $s_0 = 0$, generated by
$$s_{n+1} = s_n - \varphi'(s_n)^{-1}\varphi(s_n) \text{ for each } n = 0, 1, \ldots. \tag{5.18}$$
In particular, by (5.16), (5.17), and (5.18), we get
$$s_1 - s_0 = \eta. \tag{5.19}$$
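In the Kantorovich special case $L(u) \equiv L$ constant, (5.16) becomes $\varphi(s) = \eta - s + Ls^2/2$, and the Newton sequence (5.18) can be computed directly; it increases monotonically to the smallest zero $s_* = (1 - \sqrt{1 - 2L\eta})/L$ of $\varphi$ when $2L\eta < 1$. A minimal sketch with illustrative values of $\eta$ and $L$:

```python
import math

# Newton sequence (5.18) for the majorizing function phi of (5.16) in the
# Kantorovich special case L(u) = L constant: phi(s) = eta - s + L*s^2/2.
# eta and L are illustrative values satisfying 2*L*eta < 1.

def majorizing_sequence(eta, L, n_steps=30):
    phi = lambda s: eta - s + 0.5 * L * s * s
    dphi = lambda s: -1.0 + L * s
    s = [0.0]                                   # s_0 = 0
    for _ in range(n_steps):
        s.append(s[-1] - phi(s[-1]) / dphi(s[-1]))
    return s

eta, L = 0.3, 1.0                               # 2*L*eta = 0.6 < 1
s = majorizing_sequence(eta, L)
s_star = (1.0 - math.sqrt(1.0 - 2.0 * L * eta)) / L   # smallest zero of phi
# s is increasing, s[1] = eta (cf. (5.19)), and s_n converges to s_star.
```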

Lemma 2. Suppose that $0 < \eta \le \beta$. Then, the following hold:

(i) $\varphi$ is strictly decreasing on $[0, \rho_0]$ and strictly increasing on $[\rho_0, \infty)$, with
$$\varphi(\eta) > 0, \quad \varphi(\rho_0) = \eta - \beta \le 0, \quad \varphi(\infty) \ge \eta > 0. \tag{5.20}$$
Moreover, if $\eta < \beta$, $\varphi$ has two zeros, denoted by $\rho_1$ and $\rho_2$, such that $\eta < \rho_1 \le \rho_0 \le \rho_2$.

The Kantorovich case, for $L(s) = L$ with a constant $L > 0$, and the Smale-Wang [17, 18] case, for
$$L(s) = \frac{2\gamma}{(1 - \gamma s)^3}, \quad 0 \le s < \frac{1}{\gamma},$$
are extended too in our setting. Concrete examples where (5.5)-(5.7) are strict can be found in [2]-[5].

3. Conclusion

Inclusion problems on Riemannian manifolds with values in a cone are solved using Newton’s algorithm. The new results extend earlier ones under the same semi-local convergence criteria and conditions using our idea of the restricted convergence domain.

References

[1] Adler, J. P., Dedieu, J., Margulies, M., Martens, M., and Shub, M., Newton's method on Riemannian manifolds and a geometric model for the human spine, IMA J. Numer. Anal., 22(2002), 359-390.

[2] Argyros, I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), (1996), 1-7.

[3] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Company, New York (2007).

[4] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2020.

[5] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub's method, Journal of Complexity, 56 (2020), 101423, https://doi.org/10.1016/j.jco.2019.101423.

[6] Daniel, W., Newton's method for nonlinear inequalities, Numer. Math., 40(1973), 381-387.

[7] DoCarmo, M. P., Riemannian Geometry, Birkhauser, Boston, 1992.

[8] Ezquerro, J. A. and Hernández, M. A., Generalized differentiability conditions for Newton's method, IMA J. Numer. Anal., 22(2002), 187-205.

[9] Huang, J. H., Huang, S., Li, C., Extended Newton's method for mappings on Riemannian manifolds with values in a cone, Taiwanese J. Math., 13, 2B, (2009), 633-656.

[10] Ferreira, O. P. and Svaiter, B. F., Kantorovich's theorem on Newton's method in Riemannian manifolds, J. Complexity, 18(2002), 304-329.

[11] Gutiérrez, J. M. and Hernández, M. A., Newton's method under weak Kantorovich conditions, IMA J. Numer. Anal., 20(2000), 521-532.

[12] Kantorovich, L. V. and Akilov, G. P., Functional Analysis, Oxford: Pergamon, 1982.

[13] Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[14] Rockafellar, R. T., Monotone Processes of Convex and Concave Type, Mem. Amer. Math. Soc. 77, AMS, Providence, R.I., 1967.

[15] Robinson, S. M., Normed convex processes, Trans. Amer. Math. Soc., 174(1972), 127-140.

[16] Robinson, S. M., Extension of Newton's method to nonlinear functions with values in a cone, Numer. Math., 19(1972), 341-347.

[17] Smale, S., Newton's method estimates from data at one point, in The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics (R. Ewing, K. Gross and C. Martin, eds.), New York: Springer, (1986), 185-196.

[18] Wang, X. H., Convergence of Newton's method and inverse function theorem in Banach space, Math. of Comput., 225(1999), 169-186.

Chapter 6

Gauss-Newton Algorithm on Riemannian Manifolds under L-Average Lipschitz Conditions

1. Introduction

Many problems in constrained optimization, calculus of variations, optimal control, first order partial differential equations on Riemannian manifolds, and other nonlinear settings can be formulated via mathematical modeling as convex composite optimization problems on a Riemannian manifold $R$:
$$\min_{q \in R} f(q) := h(F(q)), \tag{6.1}$$
where $h$ is a real-valued convex mapping on $\mathbb{R}^j$ and $F : R \longrightarrow \mathbb{R}^j$ is differentiable. To study (6.1), we look for $q_* \in R$ satisfying the inclusion problem
$$F(q) \in K, \tag{6.2}$$
where $K = \operatorname{argmin} h$ is the set of all optimum points of $h$. Closed form solutions $q_*$ can be found only in special cases. That is why algorithms are used instead to develop a sequence approximating $q_*$ under suitable conditions. In particular, the following Gauss-Newton Algorithm (GNA) has been used [19].

(A) Algorithm $R(\mu, \delta, q_0)$. Let $\mu \ge 1$, $0 < \delta \le \infty$, and let $q_0 \in R$ be given. For $k = 0, 1, 2, \ldots$, having $q_0, q_1, \ldots, q_k$, determine $q_{k+1}$ as follows. If $0 \in M_\delta(q_k)$, then stop; if $0 \notin M_\delta(q_k)$, choose $z_k$ such that $z_k \in M_\delta(q_k)$ and $\|z_k\| \le \mu d(0, M_\delta(q_k))$, and set $q_{k+1} = \exp_{q_k} z_k$, where for each $q \in R$, $M_\delta(q)$ is defined by
$$M_\delta(q) := \{z \in T_q R : \|z\| \le \delta,\ h(F(q) + DF(q)z) \le h(F(q) + DF(q)z'),\ \forall z' \in T_q R \text{ with } \|z'\| \le \delta\}.$$


The semi-local convergence of (A) was given in [19] using $L$-average conditions as in the previous work. But, as already mentioned, the convergence domain there is small, and the estimates on $\|q_n - q_*\|$ are pessimistic in general. That is why we extend the applicability of (A) as before (see also [1]-[18]).

2. Semi-Local Convergence

Let $K$ be a closed convex set in $\mathbb{R}^n$. Consider the inclusion (6.2). Let $q \in R$ and
$$M(q) := \{z \in T_q R : F(q) + DF(q)z \in K\}. \tag{6.3}$$

Remark 10. In the case when $K$ is the set of all minimum points of $h$, if there exists $z' \in T_q R$ with $\|z'\| \le \delta$ and $z' \in M(q)$, then for each $z \in T_q R$ with $\|z\| \le \delta$ one has
$$z \in M_\delta(q) \Longleftrightarrow z \in M(q) \Longleftrightarrow z \in M_\infty(q). \tag{6.4}$$

Definition 11. A point $q_0 \in R$ is called a quasi-regular point of the inclusion (6.2) if there exist $\rho > 0$ and an increasing, positive-valued function $\beta$ on $[0, \rho)$ such that
$$M(q) \ne \emptyset \text{ and } d(0, M(q)) \le \beta(d(q_0, q))\, d(F(q), K) \text{ for all } q \in U(q_0, \rho). \tag{6.5}$$

Following [15, 16, 19], let $\rho_{q_0}$ denote the supremum of the $\rho$ such that (6.5) holds for some increasing, positive-valued function $\beta$ on $[0, \rho)$. Let $\rho \in (0, \rho_{q_0}]$ and let $U(q_0, \rho)$ denote the set of all increasing, positive-valued functions $\beta$ on $[0, \rho)$ such that (6.5) holds. Define
$$\beta_{q_0}(t) = \inf\{\beta(t) : \beta \in U(q_0, \rho_{q_0})\} \text{ for each } t \in [0, \rho_{q_0}). \tag{6.6}$$
Note that each $\beta \in U(q_0, \rho)$ with $\lim_{t \to \rho^-} \beta(t) < \infty$ can be extended to an element of $U(q_0, \rho_{q_0})$. We can verify from this that
$$\beta_{q_0}(t) = \inf\{\beta(t) : \beta \in U(q_0, \rho)\} \text{ for each } t \in [0, \rho). \tag{6.7}$$

We call $\rho_{q_0}$ and $\beta_{q_0}$, respectively, the quasi-regular radius and the quasi-regular bound function of the quasi-regular point $q_0$. Let $\gamma > 0$, and let $\rho_\gamma > 0$ and $b_\gamma > 0$ be such that
$$\gamma \int_0^{\rho_\gamma} L(u)\,du = 1 \text{ and } b_\gamma = \gamma \int_0^{\rho_\gamma} L(u)u\,du \tag{6.8}$$
(so that $b_\gamma < \rho_\gamma$). Let $\lambda \ge 0$ and define
$$\varphi_\gamma(t) = \lambda - t + \gamma \int_0^t L(u)(t - u)\,du \text{ for each } t \ge 0. \tag{6.9}$$
Then, we have
$$\varphi_\gamma'(t) = -1 + \gamma \int_0^t L(u)\,du \text{ for each } t \ge 0. \tag{6.10}$$
Let $\{s_{\gamma,n}\}$ denote the sequence generated by Newton's method for $\varphi_\gamma$ with the initial point $s_{\gamma,0} = 0$:
$$s_{\gamma,n+1} = s_{\gamma,n} - \varphi_\gamma'(s_{\gamma,n})^{-1}\varphi_\gamma(s_{\gamma,n}), \quad n = 0, 1, 2, \ldots. \tag{6.11}$$


In particular, by (6.9) and (6.10),
$$s_{\gamma,1} = \lambda. \tag{6.12}$$
Next, we provide a series of auxiliary standard results [19].

Lemma 3. Suppose that $0 < \lambda \le b_\gamma$. Then, $b_\gamma < \rho_\gamma$ and the following assertions hold:

(i) $\varphi_\gamma$ is strictly decreasing on $[0, \rho_\gamma]$ and strictly increasing on $[\rho_\gamma, \infty)$, with
$$\varphi_\gamma(\lambda) > 0, \quad \varphi_\gamma(\rho_\gamma) = \lambda - b_\gamma \le 0, \quad \varphi_\gamma(\infty) \ge \lambda > 0. \tag{6.13}$$
Moreover, if $\lambda < b_\gamma$, $\varphi_\gamma$ has two zeros, denoted respectively by $\rho_\gamma^*$ and $\rho_\gamma^{**}$, such that $\lambda < \rho_\gamma^* \le \rho_\gamma \le \rho_\gamma^{**}$.

Lemma 4. Let $0 < \gamma < \gamma'$, with the corresponding functions $\varphi_\gamma$ and $\varphi_{\gamma'}$. Then, the following assertions hold:

(i) The functions $\gamma \mapsto \rho_\gamma$ and $\gamma \mapsto b_\gamma$ are strictly decreasing on $(0, \infty)$.

(ii) $\varphi_\gamma < \varphi_{\gamma'}$ on $(0, \infty)$.

(iii) The function $\gamma \mapsto \rho_\gamma^*$ is strictly increasing on the interval $I(\lambda)$, where $I(\lambda)$ denotes the set of all $\gamma > 0$ such that $\lambda \le b_\gamma$.

Lemma 5. Define $w_\gamma(t) = \varphi_\gamma'(t)^{-1}\varphi_\gamma(t)$, $t \in [0, \rho_\gamma^*)$. Suppose that $0 < \lambda \le b_\gamma$. Then, $w_\gamma$ is increasing on $[0, \rho_\gamma)$.

We assume throughout the remainder of this chapter that $K$ is the set of all minimum points of $h$. Let $q_0 \in R$ be a quasi-regular point of the inclusion (6.2), with quasi-regular radius $\rho_{q_0}$ and quasi-regular bound function $\beta_{q_0}$. Let $\eta \in [1, \infty)$ and let
$$\lambda := \eta \beta_{q_0}(0)\, d(F(q_0), K). \tag{6.15}$$
For all $\rho \in (0, \rho_{q_0}]$, we define
$$\gamma_0(\rho) := \sup\{\eta \beta_{q_0}(t) : \lambda \le t < \rho\}. \tag{6.16}$$
We can now state the main semi-local convergence result for (A). Let $\delta > 0$, $0 < \rho \le \rho_{q_0}$, and let $\gamma_0(\rho)$ be defined by (6.16). Let $\gamma \ge \gamma_0(\rho)$ be a positive constant and let $b_\gamma, \rho_\gamma$ be defined by (6.8). Let $\rho_\gamma^*$ denote the smallest zero of the function $\varphi_\gamma$ defined by (6.9). Suppose that $DF$ satisfies the $L$-average Lipschitz condition on $U(q_0, \rho_\gamma^*)$, and that
$$\lambda \le \min\{b_\gamma, \delta\} \text{ and } \rho_\gamma^* \le \rho. \tag{6.17}$$
Let $\{q_n\}$ denote the sequence generated by (A). Then, $\{q_n\}$ converges to some $q_*$ such that $F(q_*) \in K$, and the following assertions hold for each $n = 0, 1, 2, \ldots$:
$$d(q_n, q_*) \le \rho_\gamma^* - s_{\gamma,n} \tag{6.18}$$
and
$$F(q_n) + DF(q_n)z_n \in K. \tag{6.19}$$

Remark 11. If $L_0 = L = L_1$, then our results reduce to the ones in [19]. Otherwise, they constitute an extension with the advantages already stated. Clearly, the specializations of the rest of the results in [19] are extended too. We leave the details to the motivated reader.

3. Conclusion

Inclusion problems on Riemannian manifolds with values in a cone are solved using the Gauss-Newton algorithm. The new results extend earlier ones under the same semi-local convergence criteria and conditions using our idea of the restricted convergence domain.

References

[1] Adler, J. P., Dedieu, J., Margulies, M., Martens, M., and Shub, M., Newton's method on Riemannian manifolds and a geometric model for the human spine, IMA J. Numer. Anal., 22(2002), 359-390.

[2] Argyros, I. K., On an extension of the mesh-independence principle for operator equations in Banach spaces, Appl. Math. Lett., 9(3), (1996), 1-7.

[3] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Company, New York (2007).

[4] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2020.

[5] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub's method, Journal of Complexity, 56 (2020), 101423, https://doi.org/10.1016/j.jco.2019.101423.

[6] Daniel, W., Newton's method for nonlinear inequalities, Numer. Math., 40(1973), 381-387.

[7] DoCarmo, M. P., Riemannian Geometry, Birkhauser, Boston, 1992.

[8] Ezquerro, J. A. and Hernández, M. A., Generalized differentiability conditions for Newton's method, IMA J. Numer. Anal., 22(2002), 187-205.

[9] Huang, J. H., Huang, S., Li, C., Extended Newton's method for mappings on Riemannian manifolds with values in a cone, Taiwanese J. Math., 13, 2B, (2009), 633-656.

[10] Ferreira, O. P. and Svaiter, B. F., Kantorovich's theorem on Newton's method in Riemannian manifolds, J. Complexity, 18(2002), 304-329.

[11] Gutiérrez, J. M. and Hernández, M. A., Newton's method under weak Kantorovich conditions, IMA J. Numer. Anal., 20(2000), 521-532.

[12] Kantorovich, L. V. and Akilov, G. P., Functional Analysis, Oxford: Pergamon, 1982.

[13] Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

[14] Rockafellar, R. T., Monotone Processes of Convex and Concave Type, Mem. Amer. Math. Soc. 77, AMS, Providence, R.I., 1967.

[15] Robinson, S. M., Normed convex processes, Trans. Amer. Math. Soc., 174(1972), 127-140.

[16] Robinson, S. M., Extension of Newton's method to nonlinear functions with values in a cone, Numer. Math., 19(1972), 341-347.

[17] Smale, S., Newton's method estimates from data at one point, in The Merging of Disciplines: New Directions in Pure, Applied and Computational Mathematics (R. Ewing, K. Gross and C. Martin, eds.), New York: Springer, (1986), 185-196.

[18] Wang, X. H., Convergence of Newton's method and inverse function theorem in Banach space, Math. of Comput., 225(1999), 169-186.

[19] Wang, J. H., Yao, J. C., Li, C., Gauss-Newton method for convex composite optimizations on Riemannian manifolds, J. Glob. Optim., 53, (2012), 5-28.

Chapter 7

Newton's Method with Applications to Interior Point Algorithms of Mathematical Programming

1. Introduction

In this chapter, we are concerned with the problem of approximating a locally unique solution $x_\star$ of the equation
$$F(x) = 0, \tag{7.1}$$
where $F$ is a differentiable operator defined on a convex domain $D$ of $\mathbb{R}^i$ ($i$ an integer) with values in $\mathbb{R}^i$. The famous Newton-Kantorovich theorem [4] has been used extensively to solve equation (7.1). A survey of such results can be found in [1] and the references therein. Recently [1]-[3], we improved the Newton-Kantorovich theorem. We use this development to show that the results obtained in the elegant work [6] in connection with interior-point methods can be extended, if our convergence conditions simply replace the stronger ones given there. Finally, a numerical example is provided to show that fewer iterations than the ones suggested in [6] are needed to achieve the same error tolerance.

2. An Improved Newton-Kantorovich Theorem

Let $\|\cdot\|$ be a given norm on $\mathbb{R}^i$, and let $x_0$ be a point of $D$ such that the closed ball of radius $r$ centered at $x_0$, $U(x_0, r) = \{x \in \mathbb{R}^i : \|x - x_0\| \le r\}$, is included in $D$, i.e., $U(x_0, r) \subseteq D$. We assume that the Jacobian $F'(x_0)$ is nonsingular and that the following affine-invariant Lipschitz condition is satisfied:
$$\|F'(x_0)^{-1}[F'(x) - F'(y)]\| \le w\|x - y\| \text{ for some } w \ge 0 \text{ and all } x, y \in U(x_0, r). \tag{7.2}$$
Our technique extends other methods using inverses along the same lines [1]-[3]. The famous Newton-Kantorovich theorem [4] states that if the quantity
$$\alpha := \|F'(x_0)^{-1}F(x_0)\| \tag{7.3}$$


together with $w$ satisfies
$$k = \alpha w \le \frac{1}{2}, \tag{7.4}$$
then there exists $x_\star \in U(x_0, r)$ with $F(x_\star) = 0$. Moreover, the sequences produced by Newton's method
$$x_{n+1} = x_n - F'(x_n)^{-1}F(x_n) \quad (n \ge 0) \tag{7.5}$$
and by the modified Newton's method
$$y_{n+1} = y_n - F'(y_0)^{-1}F(y_n), \quad y_0 = x_0 \quad (n \ge 0) \tag{7.6}$$

are well defined and converge to $x_\star$. Notice that we assume that $F'(x_0)$ is invertible, so it does not become singular on $U(x_0, r)$. In [1]-[3] we introduced the center-Lipschitz condition
$$\|F'(x_0)^{-1}[F'(x) - F'(x_0)]\| \le w_0\|x - x_0\| \text{ for some } w_0 \ge 0 \text{ and all } x \in U(x_0, r). \tag{7.7}$$
This way, we provided a finer local and semi-local convergence analysis of method (7.5) by using the combination of conditions (7.2) and (7.7), given by
$$\ell_0 = \alpha\bar{w} \le \frac{1}{2}, \tag{7.8}$$
where
$$\bar{w} = \frac{1}{8}\left(w + 4w_0 + \sqrt{w^2 + 8w_0 w}\right) \text{ [3]}. \tag{7.9}$$
In general,
$$w_0 \le \bar{w} \le w \tag{7.10}$$
holds, and $\frac{w}{w_0}$ can be arbitrarily large [1]. Note also that
$$k \le \frac{1}{2} \Rightarrow \ell_0 \le \frac{1}{2}, \tag{7.11}$$
but not vice versa, unless $w_0 = w$. Examples where the weaker condition (7.8) holds but (7.4) fails have also been given in [1]-[3]. We can do even better, as follows. Set $U_0 := \bar{U}(x_0, r) \cap U\left(x_0, \frac{1}{w_0} - \|F'(x_0)^{-1}F(x_0)\|\right)$. Suppose: there exists $w_1 > 0$ such that for each $x, y \in U_0$:
$$\|F'(x_0)^{-1}[F'(x) - F'(y)]\| \le w_1\|x - y\|; \tag{7.12}$$
and there exist $\beta > 0$, $\gamma > 0$ such that for each $\theta \in [0, 1]$:
$$\|F'(x_0)^{-1}(F'(x_0 + \theta(x_1 - x_0)) - F'(x_0))\| \le \gamma\theta\|x_1 - x_0\| \tag{7.13}$$
and
$$\|F'(x_0)^{-1}(F'(x_1) - F'(x_0))\| \le \beta\|x_1 - x_0\|, \tag{7.14}$$
where
$$x_1 = x_0 - F'(x_0)^{-1}F(x_0). \tag{7.15}$$


Notice that $\beta \le \gamma \le w_0 \le w$ and $w_1 \le w$. Define
$$\delta = \frac{2w_1}{w_1 + \sqrt{w_1^2 + 8w_0 w_1}}, \tag{7.16}$$
$$q = w_1\gamma + 2\delta w_0(\gamma - 2\beta), \tag{7.17}$$
and
$$\delta_1 = \begin{cases} \dfrac{1}{w_0 + \beta}, & \text{if } q = 0, \\[2mm] \dfrac{-\delta(w_0 + \beta) + \sqrt{\delta^2(w_0 + \beta)^2 + \delta(w_1\gamma + 2\delta w_0(\gamma - 2\beta))}}{q}, & \text{if } q > 0, \\[2mm] -\dfrac{2\delta(w_0 + \beta) + \sqrt{\delta^2(w_0 + \beta)^2 + \delta(w_1\gamma + 2\delta w_0(\gamma - 2\beta))}}{q}, & \text{if } q < 0. \end{cases} \tag{7.18}$$
In [3], we presented a semi-local convergence analysis using the conditions below (7.11) and the condition
$$k_0 = \alpha\delta \le \frac{1}{2} \tag{7.19}$$
replacing (7.8). In view of (7.16)-(7.19), we have that
$$w_0 \le \delta \le \bar{w}, \tag{7.20}$$
so
$$\ell_0 \le \frac{1}{2} \Longrightarrow k_0 \le \frac{1}{2}. \tag{7.21}$$
Notice that by (7.19) $\alpha w_0 < 1$, so the set $U_0$ is defined. Similarly, by simply replacing $w$ with $w_0$ (since (7.7), instead of (7.2), is actually needed in the proof) and condition (7.4) by the weaker
$$k_1 = \alpha w_0 \le \frac{1}{2} \tag{7.22}$$
in the proof of Theorem 1 in [6], we show that method (7.6) also converges to $x_\star$, and that the improved bounds
$$\|y_n - x_\star\| \le \frac{2\beta_0\lambda_0^2}{1 - \lambda_0^2}\,\xi_0^{n-1} \quad (n \ge 1) \tag{7.23}$$
hold, where
$$\beta_0 = \frac{\sqrt{1 - 2k_1}}{k_1}, \quad \lambda_0 = \frac{1 - \sqrt{1 - 2k_1} - k_1}{k_1}, \quad \xi_0 = 1 - \sqrt{1 - 2k_1}.$$
In case $w_0 = w$, (7.23) reduces to (7.10) in [6]. Otherwise, our error bounds are finer. Note also that
$$k \le \frac{1}{2} \Rightarrow k_1 \le \frac{1}{2},$$
but not vice versa, unless $w_0 = w$. Let us provide an example to show that (7.8), (7.19), or (7.22) hold while (7.4) fails.
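A minimal scalar sketch contrasting Newton's method (7.5) with the modified method (7.6), which reuses $F'(x_0)$ at every step and therefore trades the quadratic rate for cheaper iterations ($F$ below is an illustrative example, not from the text):

```python
# Newton's method (7.5) and the modified Newton method (7.6) for a scalar
# equation F(x) = 0; the modified variant freezes the derivative at x0.
# F is an illustrative example, not taken from the text.

def newton(F, dF, x0, n):
    x = x0
    for _ in range(n):
        x -= F(x) / dF(x)          # derivative re-evaluated each step
    return x

def modified_newton(F, dF, x0, n):
    x, d0 = x0, dF(x0)             # F'(x0) reused at every step
    for _ in range(n):
        x -= F(x) / d0
    return x

F = lambda x: x ** 3 - 8.0
dF = lambda x: 3.0 * x ** 2
# Both converge to x* = 2; Newton needs far fewer steps (quadratic vs linear rate).
x_newton = newton(F, dF, 3.0, 8)
x_mod = modified_newton(F, dF, 3.0, 60)
```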

Example 2. Let $i = 1$, $x_0 = 1$, $D = [p, 2 - p]$, $p \in \left(0, \frac{1}{2}\right)$, and define the function $F$ on $D$ by
$$F(x) = x^3 - p. \tag{7.24}$$

Using (7.2), (7.3), (7.7), and (7.24), we obtain
$$\alpha = \frac{1}{3}(1 - p), \quad w = 2(2 - p), \quad w_0 = 3 - p, \tag{7.25}$$
which imply that
$$k = \frac{2}{3}(1 - p)(2 - p) > \frac{1}{2} \text{ for all } p \in \left(0, \frac{1}{2}\right). \tag{7.26}$$
That is, there is no guarantee that Newton's method (7.5) converges to $x_\star = \sqrt[3]{p}$, since the Newton-Kantorovich hypothesis (7.4) is violated. Moreover, condition (7.22) holds for all $p \in \left[\frac{4 - \sqrt{10}}{2}, \frac{1}{2}\right)$. Furthermore, (7.8) holds for $p \in \left[0.450339002, \frac{1}{2}\right)$. We also have that $\beta = \frac{5 + p}{3}$, $\gamma = 2$, and $w_1 = \frac{-2p^2 + 5p + 6}{3 - p}$. Finally, (7.19) is satisfied for $p \in [0.0984119, 0.5)$, which is the largest interval. The above suggests that all results on interior point methods obtained in [6] for Newton's method using (7.4) can now be rewritten using only (7.19). The same holds for the modified Newton's method, where (7.22) also replaces (7.4).

3. Applications to Interior-Point Algorithm

It has already been shown in [5] that the Newton-Kantorovich theorem can be used to construct and analyze optimal-complexity path-following algorithms for linear complementarity problems. Potra has chosen to apply this theorem to linear complementarity problems because such problems provide a convenient framework for analyzing primal-dual interior-point algorithms. Theoretical and experimental work conducted over the past decade has shown that primal-dual path-following algorithms are among the best solution methods for linear programming (LP), quadratic programming (QP), and linear complementarity problems (LCP) (see, for example, [7], [11]). Primal-dual path-following algorithms are the basis of the best general-purpose practical methods, and they have important theoretical properties [10, 11, 12].

Potra, using (7.4), in particular showed how to construct path-following algorithms for LCP that have $O(\sqrt{n}L)$ iteration complexity [6]. Given a point $x$ that approximates a point $x(\tau)$ on the central path of the LCP with complementarity gap $\tau$, the algorithms compute a parameter $\theta \in (0, 1)$ so that $x$ satisfies the Newton-Kantorovich hypothesis (7.4) for the equation defining $x((1 - \theta)\tau)$. It is proven that $\theta$ is bounded below by a multiple of $n^{-1/2}$. Since (7.4) is satisfied, the sequence generated by Newton's method, or by the modified Newton method (take $F'(x_n) = F'(x_0)$, $n \ge 0$), with starting point $x$ will converge to $x((1 - \theta)\tau)$. He showed that the number of steps required to obtain an acceptable approximation of $x((1 - \theta)\tau)$ is bounded above by a number independent of $n$. Therefore, a point with complementarity gap less than $\varepsilon$ can be obtained in at most $O\left(\sqrt{n}\log\frac{\varepsilon_0}{\varepsilon}\right)$ steps (for both methods), where $\varepsilon_0$ is the complementarity gap of the starting point. For linear complementarity problems with rational input data of bit length $L$, this implies that an exact solution can be obtained in at most $O(\sqrt{n}L)$ iterations, plus a rounding procedure requiring $O(n^3)$ arithmetic operations [11] (see also [9]). We also refer the reader to the excellent monograph of Nesterov and Nemirovskii [5] for an analysis of the construction of interior point methods for a larger class of problems than that considered in [6] (see also [9]).

We can now describe the linear complementarity problem as follows. Given two matrices $Q, R \in \mathbb{R}^{n \times n}$ ($n \ge 2$) and a vector $b \in \mathbb{R}^n$, the horizontal linear complementarity problem (HLCP) consists of approximating a pair of vectors $(w, s)$ such that
$$\begin{aligned} ws &= 0, \\ Qw + Rs &= b, \\ w, s &\ge 0. \end{aligned} \tag{7.27}$$
The monotone linear complementarity problem (LCP) is obtained by taking $R = -I$ and $Q$ positive semidefinite. Moreover, the linear programming problem (LP) and the quadratic programming problem (QP) can be formulated as HLCPs. That is, the HLCP is a suitable framework for studying interior point methods. We assume that the HLCP (7.27) is monotone in the sense that
$$Qu + Rv = 0 \text{ implies } u^t v \ge 0, \text{ for all } u, v \in \mathbb{R}^n. \tag{7.28}$$

Condition (7.28) holds if the HLCP is a reformulation of a QP. If the HLCP is a reformulation of an LP, then the following stronger condition holds:
$$Qu + Rv = 0 \text{ implies } u^t v = 0, \text{ for all } u, v \in \mathbb{R}^n. \tag{7.29}$$
In this case we say that the HLCP is skew-symmetric. If the HLCP has an interior point, i.e., there is $(w, s) \in \mathbb{R}^n_{++} \times \mathbb{R}^n_{++}$ satisfying $Qw + Rs = b$, then for any parameter $\tau > 0$ the nonlinear system
$$\begin{aligned} ws &= \tau e, \\ Qw + Rs &= b, \\ w, s &\ge 0 \end{aligned} \tag{7.30}$$
has a unique positive solution $x(\tau) = [w(\tau)^t, s(\tau)^t]^t$. The set of all such solutions defines the central path $\mathcal{C}$ of the HLCP. It can be proved that $(w(\tau), s(\tau))$ converges to a solution of the HLCP as $\tau \to 0$. Such an approach for solving the HLCP is called a path-following algorithm. At a basic step of a path-following algorithm, an approximation $(w, s)$ of $(w(\tau), s(\tau))$ has already been computed for some $\tau > 0$. The algorithm determines the smaller value of the central path parameter $\tau_+ = (1 - \theta)\tau$, where the value $\theta \in (0, 1)$ is computed in some unspecified way. The approximation $(w_+, s_+)$ of $(w(\tau_+), s(\tau_+))$ is then computed. The procedure is then repeated with $(w_+, s_+, \tau_+)$ in place of $(w, s, \tau)$.


In order to relate the path-following algorithm to the Newton-Kantorovich theorem, we introduce the notations
$$x = \begin{bmatrix} w \\ s \end{bmatrix}, \quad x(\tau) = \begin{bmatrix} w(\tau) \\ s(\tau) \end{bmatrix}, \quad x_+ = \begin{bmatrix} w_+ \\ s_+ \end{bmatrix}, \quad x(\tau_+) = \begin{bmatrix} w(\tau_+) \\ s(\tau_+) \end{bmatrix}, \text{ etc.}$$
Then for any $\sigma > 0$ we define the nonlinear operator
$$F_\sigma(x) = \begin{bmatrix} ws - \sigma e \\ Qw + Rs - b \end{bmatrix}. \tag{7.31}$$
Then the system (7.30) defining $x(\tau)$ becomes
$$F_\tau(x) = 0, \tag{7.32}$$
whereas the system defining $x(\tau_+)$ is given by
$$F_{(1-\theta)\tau}(x) = 0. \tag{7.33}$$
We assume that the initial guess $x$ belongs to the interior of the feasible set of the HLCP,
$$\mathcal{F}^0 = \{x = (w^t, s^t)^t \in \mathbb{R}^{2n}_{++} : Qw + Rs = b\}. \tag{7.34}$$

In order to verify the Newton-Kantorovich hypothesis for equation (7.32), we introduce the quantity
$$\eta = \eta(x, \tau) = \|F'(x)^{-1}F_\tau(x)\|, \tag{7.35}$$
the measures of proximity
$$k = k(x, \tau) = \eta w, \quad k_0 = k_0(x, \tau) = \eta\delta, \quad k_1 = k_1(x, \tau) = \eta w_0, \tag{7.36}$$
where $w = w(x)$, $\delta = \delta(x)$, and $w_0 = w_0(x)$, and the normalized primal-dual gap
$$\mu = \mu(x) = \frac{w^t s}{n}. \tag{7.37}$$

If for a given interior point $x$ and a given parameter $\tau$ we have $k_0(x, \tau) \le .5$ for the Newton-Kantorovich method, or $k_1(x, \tau) \le .5$ for the modified Newton-Kantorovich method, then the corresponding sequence starting from $x$ will converge to the point $x(\tau)$ on the central path. We can now describe our algorithm, which is a weaker version of the one given in [6]:

Algorithm (using the Newton-Kantorovich method). Given $0 < k_1^0 < k_2^0 < .5$, $\varepsilon > 0$, and $x_0 \in \mathcal{F}^0$ satisfying $k_0(x_0, \mu(x_0)) \le k_1^0$:

Set $k \leftarrow 0$ and $\tau_0 \leftarrow \mu(x_0)$;
repeat (outer iteration)
Set $(x, \tau) \leftarrow (x_k, \tau_k)$, $\bar{x} \leftarrow x_k$;
Determine the largest $\theta \in (0, 1)$ such that $k_0(x, (1 - \theta)\tau) \le k_2^0$;
Set $\tau \leftarrow (1 - \theta)\tau$;
repeat (inner iteration)
Set
$$x \leftarrow x - F'(x)^{-1}F_\tau(x) \tag{7.38}$$
until $k_0(x, \mu) \le k_1^0$;
Set $(x_{k+1}, \tau_{k+1}) \leftarrow (x, \tau)$;
Set $k \leftarrow k + 1$;
until $w_k^t s_k \le \varepsilon$.

For the modified Newton-Kantorovich algorithm, $k_1^0, k_2^0, k_0$ should be replaced by $k_1^1, k_2^1, k_1$, and (7.38) by "Set $x \leftarrow x - F'(\bar{x})^{-1}F_\tau(x)$", respectively. In order to obtain Algorithm 1 in [6], we need to replace $k_1^0, k_2^0, k_0$ by $k_1, k_2, k$, respectively. The above suggests that all results on interior point methods obtained in [6] using (7.4) can now be rewritten using only the weaker (7.8) (or (7.22)). We only state those results for which we will provide applications. Let us introduce the notation
$$\Psi_i^a = \begin{cases} 1 + \theta_i^a + \sqrt{2\theta_i^a + r_i^a}, & \text{if the HLCP is monotone}, \\ 1 + q_{ia} + \sqrt{2q_{ia} + q_{ia}^2}, & \text{if the HLCP is skew-symmetric}, \end{cases} \tag{7.39}$$
where

p p a ri = θai , tia = kia , a = 0, 1,   tia tia a θi = ti 1 + , q = , i = 1, 2. ia 1 − tia 2

(7.40)
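The outer/inner structure of the algorithm can be sketched as follows. Everything concrete here is our own illustrative choice, not from the text: `proximity`, `newton_step` and `gap` are hypothetical stand-ins for k₀(x, τ), the step (7.38) and the primal–dual gap wᵗs; the toy instance F_τ(w, s) = (ws − τ, w + 2s − 3) is a one-variable "central path" example, and a grid search over θ replaces the exact computation of the largest admissible θ. The inner loop is written as a guarded while-loop rather than the repeat-until of the text.

```python
import numpy as np

def path_following(x0, proximity, newton_step, gap, k1=0.12, k2=0.24, eps=1e-8, grid=1000):
    """Outer/inner skeleton of the path-following algorithm: shrink tau by the
    largest theta keeping the proximity measure <= k2, then re-center with
    Newton-Kantorovich steps until the proximity is back below k1."""
    x, tau = x0, gap(x0)
    for _ in range(10_000):                       # safety cap on outer iterations
        if gap(x) <= eps:
            break
        # largest theta in (0, 1) with proximity(x, (1-theta)*tau) <= k2 (grid stand-in)
        theta = max(t / grid for t in range(1, grid)
                    if proximity(x, (1 - t / grid) * tau) <= k2)
        tau *= 1 - theta
        while proximity(x, tau) > k1:             # inner iteration, cf. (7.38)
            x = newton_step(x, tau)
    return x

# Toy instance (hypothetical): F_tau(w, s) = (w*s - tau, w + 2*s - 3),
# whose "central path" is w*s = tau on the line w + 2*s = 3.
def newton_step(x, tau):
    w, s = x
    J = np.array([[s, w], [1.0, 2.0]])            # Jacobian of F_tau
    r = np.array([w * s - tau, w + 2.0 * s - 3.0])
    return x - np.linalg.solve(J, r)

gap = lambda x: x[0] * x[1]                       # primal-dual gap w^t s (here n = 1)
proximity = lambda x, tau: abs(gap(x) - tau) / tau
sol = path_following(np.array([1.0, 1.0]), proximity, newton_step, gap)
```

On this toy instance the gap shrinks by a fixed factor per outer iteration, mirroring the geometric decrease of τ that the complexity results below quantify.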

Then, by simply replacing k, k₁, k₂ by k₀, k₁⁰, k₂⁰, respectively, in the corresponding results in [6], we obtain the following improvements:

Theorem 8. The parameter θ determined at each outer iteration of Algorithm 3 satisfies
$$\theta \ge \frac{\chi_a}{\sqrt{n}} = \lambda_a,$$
where
$$\chi_a = \begin{cases} \dfrac{\sqrt{2}\left(\sqrt{k_2^a} - \sqrt{k_1^a}\right)}{\left(2 + p\, t_1^a\right)\psi_1^a}, & \text{if the HLCP is skew--symmetric or if no simplified Newton--Kantorovich steps are performed},\\[3mm] \dfrac{\sqrt{2}\left(\sqrt{k_2^a} - \sqrt{k_1^a}\right)}{\left(2 + p\, k_1^a\right)\psi_1^a}, & \text{otherwise}, \end{cases} \tag{7.41}$$
and
$$p = \begin{cases} \sqrt{2}, & \text{if the HLCP is monotone},\\ 1, & \text{if the HLCP is skew--symmetric}. \end{cases} \tag{7.42}$$

The lower bound λₐ on θ is an improvement over the corresponding one in [6, Corollary 4].


Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

In the next result, a bound on the number of steps of the inner iteration that depends only on k₁⁰ and k₂⁰ is provided.

Theorem 9. If the Newton–Kantorovich method is used in Algorithm 3, then each inner iteration terminates in at most N⁰(k₁⁰, k₂⁰) steps, where
$$N^0\!\left(k_1^0, k_2^0\right) = \text{integer part}\left[\log_2\!\left(\frac{\log_2(x_{N^0})}{\log_2\!\left(\left(1 - \sqrt{1 - 2k_2^0} - k_2^0\right)/k_2^0\right)}\right)\right] \tag{7.43}$$
and
$$x_{N^0} = \frac{k_1^0\left(1 - p\,k_2^0/\sqrt{2}\right)\left[t_2^0 - \left(1 - \sqrt{1 - 2k_2^0}\right)\right]^2}{2t_2^0\left[\sqrt{1 - 2k_2^0}\,\psi_2^0 + \left(1 - \sqrt{1 - 2k_2^0}\right)\left(1 + \sqrt{k_1^0}\right)\right]}. \tag{7.44}$$

If the modified Newton–Kantorovich method is used in Algorithm 3, then each inner iteration terminates in at most S¹(k₁¹, k₂¹) steps, where
$$S^1\!\left(k_1^1, k_2^1\right) = \text{integer part}\left[\frac{\log_2(x_{S^1})}{\log_2\!\left(1 - \sqrt{1 - \sqrt{1 - 2k_2^1}}\right)}\right] + 1 \tag{7.45}$$
and
$$x_{S^1} = \frac{k_1^0\left(1 - p\,k_2^1/\sqrt{2}\right)\left[t_2^1 - \left(1 - \sqrt{1 - 2k_2^1} - k_2^1\right)\right]^2}{2\sqrt{1 - 2k_2^1}\left[\left(1 - \sqrt{1 - 2k_2^1} - k_2^1\right)^2\psi_1^2 + \left(1 - \sqrt{1 - 2k_2^1}\right)^2\left(1 + \sqrt{k_1^1}\right)\right]}.$$

Remark 12. Clearly, if k₁¹ = k₁⁰ = k₁, k₂¹ = k₂⁰ = k₂ and k₁ = k₀ = k, Theorem 8 reduces to the corresponding Theorem 2 in [6]. Otherwise, the following improvements hold: N⁰(k₁⁰, k₂⁰) < N(k₁, k₂), N⁰ < N, S¹(k₁¹, k₂¹) < S(k₁, k₂), and S¹ < S.

Since the ratios k₁/k₁⁰, k₂/k₂⁰, k₁/k₁¹ and k₂/k₂¹ can be arbitrarily large [1]–[3] for a given triplet β, γ, η and given w₁ and w₀, the choices k₁⁰ = k₁¹ = .12, k₂⁰ = k₂¹ = .24 when k₁ = .21 and k₂ = .42, and k₁⁰ = k₁¹ = .24, k₂⁰ = k₂¹ = .48 when k₁ = .245 and k₂ = .49, are possible. Then, using formulas (7.42), (7.43) and (7.45), we obtain the following tables:


(a) If the HLCP is monotone and only Newton directions are performed, then:

  Potra (7.41)          Argyros (7.41)
  χ(.21, .42) > .17     χ(.12, .24) > .09
  χ(.245, .49) > .199   χ(.24, .48) > .190

  Potra (7.43)          Argyros (7.43)
  N(.21, .42) = 2       N(.12, .24) = 1
  N(.245, .49) = 4      N(.24, .48) = 3

(b) If the HLCP is monotone and modified Newton directions are performed:

  Potra (7.41)          Argyros (7.41)
  χ(.21, .42) > .149    χ(.12, .24) > .097
  χ(.245, .49) > .164   χ(.24, .48) > .160

  Potra (7.45)          Argyros (7.45)
  S(.21, .42) = 5       S(.12, .24) = 1
  S(.245, .49) = 18     S(.24, .48) = 12

All the above improvements are obtained under weaker hypotheses and the same computational cost (in the case of Newton's method) or less computational cost (in the case of the modified Newton method), since in practice the computation of w requires that of w₀ and w₁. In general, the computation of w₀ is less expensive than that of w.

4. Conclusion

We use a weaker Newton–Kantorovich theorem for solving equations, introduced in [3], to analyze interior point methods. This way, our approach extends earlier works in [6] on Newton’s method and interior point algorithms.

References

[1] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2020.
[2] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, Elsevier, New York, 2007.


[3] Argyros, I. K., On an improved convergence analysis of Newton's method, Appl. Math. Comput., 225 (2013), 372–386.
[4] Kantorovich, L. V., Akilov, G. P., Functional Analysis in Normed Spaces, Pergamon Press, Oxford, 1982.
[5] Nesterov, Y., Nemirovskii, A., Interior-Point Polynomial Algorithms in Convex Programming, SIAM Studies in Appl. Math., 13, Philadelphia, PA, 1994.
[6] Potra, F. A., The Kantorovich theorem and interior point methods, Math. Programming, 102 (2005), 47–70.
[7] Potra, F. A., Wright, S. J., Interior-point methods. Numerical analysis 2000, Vol. IV, Optimization and nonlinear equations, J. Comput. Appl. Math., 124 (2000), 281–302.
[8] Renegar, J., A polynomial-time algorithm, based on Newton's method, for linear programming, Math. Programming, Ser. A, 40 (1988), 59–93.
[9] Renegar, J., Shub, M., Unified complexity analysis for Newton LP methods, Math. Programming, Ser. A, 53 (1992), 1–16.
[10] Roos, C., Vial, J.-Ph., Terlaky, T., Theory and Algorithms for Linear Optimization: An Interior Point Approach, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley and Sons, 1997.
[11] Wright, S. J., Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.
[12] Ye, Y., Interior Point Algorithms: Theory and Analysis, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley and Sons, 1997.

Chapter 8

Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part I Outer Inverses

1. Introduction

Let B and B₁ be Banach spaces, let Ω ⊂ B be open, and let L(B, B₁) stand for the space of all bounded linear operators from B into B₁. A linear operator M# : B₁ → B is said to be an outer inverse of M ∈ L(B, B₁) if M#MM# = M#. Denote by D(M) = {E ∈ L(B₁, B) : EME = E, E ≠ 0} the set of outer inverses of M. Newton's method, defined for all n = 0, 1, 2, … and x₀ ∈ Ω by
xₙ₊₁ = xₙ − F′(xₙ)#F(xₙ), (8.1)
has been used extensively to generate a sequence {xₙ} converging to a solution x∗ of the equation
F′(x₀)#F(x) = 0 (8.2)
under certain conditions [1,5,6,7]. Note that if F′(x)# = F′(x)⁻¹, we obtain the celebrated Newton's method for solving nonlinear equations [1,5,6,7]. We refer the reader to [1]-[12] for the properties of generalized inverses. The local convergence of method (8.1) is extended without additional conditions, with the benefits:
(a) a larger convergence domain, i.e., weaker sufficient convergence criteria;
(b) tighter error bounds on the distances ‖xₙ − x∗‖; and
(c) more precise information on the location of x∗.
This is done in Section 2.
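Iteration (8.1) can be sketched numerically. The Moore–Penrose pseudoinverse is used below as one concrete admissible outer inverse (it satisfies A#AA# = A#); the 3×2 test system with zero (1, 1) is our own illustration, not an example from the text.

```python
import numpy as np

def newton_outer(F, J, x0, tol=1e-12, max_iter=50):
    """Iteration (8.1): x_{n+1} = x_n - J(x_n)^# F(x_n), with the
    Moore-Penrose pseudoinverse standing in for the outer inverse J^#."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(J(x)) @ F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Overdetermined but consistent 3x2 system (our own example) with zero (1, 1)
F = lambda v: np.array([v[0]**2 - 1.0, v[1]**2 - 1.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 0.0], [0.0, 2*v[1]], [v[1], v[0]]])
sol = newton_outer(F, J, [1.3, 0.7])
```

Since the system is consistent, the residual vanishes at the limit and the iteration exhibits the local quadratic convergence discussed below.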

2. Convergence

We first introduce some definitions of Lipschitz-type conditions that will make a difference in the convergence analysis of method (8.1). We denote by U[x∗, γ] the closure of the ball U(x∗, γ) having x∗ ∈ B as a center and γ > 0 as a radius. Let x₀ ∈ Ω.

Definition 12. We say that the operator F′ is center Lipschitz continuous on Ω if there exists K₀ > 0 such that for all x ∈ Ω,
‖F′(x₀)#(F′(x) − F′(x₀))‖ ≤ K₀‖x − x₀‖ (8.3)
holds. Set Ω₀ = U(x₀, 1/K₀) ∩ Ω.

Definition 13. We say that the operator F′ is restricted Lipschitz continuous on Ω₀ if there exists K > 0 such that for all x, y ∈ Ω₀,
‖F′(x₀)#(F′(y) − F′(x))‖ ≤ K‖y − x‖ (8.4)
holds.

Definition 14. We say that the operator F′ is Lipschitz continuous on Ω if there exists K₁ > 0 such that for all x, y ∈ Ω,
‖F′(x₀)#(F′(y) − F′(x))‖ ≤ K₁‖y − x‖ (8.5)
holds.

Remark 13. Condition (8.5) is usually employed in the study of convergence for iterative methods. But we have
Ω₀ ⊆ Ω, (8.6)
so
K₀ ≤ K₁ (8.7)
and
K ≤ K₁ (8.8)
hold. We suppose from now on that
K₀ ≤ K (8.9)
holds; otherwise, the results that follow hold with K₀ replacing K. Then, the results in [7,8,9,11] can be extended by simply replacing K₁ with K. First, we extend the following semi-local convergence version of the Newton-Kantorovich theorem (see Theorem 3.1 in [7]).


Theorem 10. Suppose: there exist x₀ ∈ Ω with F′(x₀)# ∈ D(F′(x₀)) and constants α, K₀, K > 0 such that
‖F′(x₀)#F(x₀)‖ ≤ α, (8.10)
(8.3) and (8.4) hold,
h = Kα ≤ 1/2 (8.11)
and
U(x₀, t∗) ⊂ Ω, (8.12)
where
$$t_* = \frac{1 - \sqrt{1 - 2h}}{K}. \tag{8.13}$$
Then, the sequence {xₙ} generated by method (8.1) is well defined, remains in U(x₀, t∗) and converges to a solution x∗ of the equation F′(x₀)#F(x) = 0, where F′(xₙ)# = (I + F′(x₀)#(F′(xₙ) − F′(x₀)))⁻¹F′(x₀)#.

Proof. Simply notice that the iterates {xₙ} lie in Ω₀ (according to the proof of Theorem 3.1 and Corollary 3.1 in [7]). Then, the proof follows exactly as in [7] with these modifications.

Remark 14. a. Condition (8.5) was used in [7] together with the sufficient (semi-local) convergence criteria
H = K₁α ≤ 1/2 (8.14)
and
U(x₀, t̄∗) ⊂ Ω, (8.15)
where
$$\bar{t}_* = \frac{1 - \sqrt{1 - 2H}}{K_1}. \tag{8.16}$$
But notice that
t∗ ≤ t̄∗ (8.17)
and
H ≤ 1/2 ⇒ h ≤ 1/2 (8.18)
(not necessarily vice versa, unless K = K₁).

b. We can do even better if instead we use the condition
‖F′(x₀)#(F′(x) − F′(y))‖ ≤ M‖x − y‖ for all x, y ∈ Ω₁,
where Ω₁ = U(x₁, 1/K₀ − ‖x₁ − x₀‖) if ‖x₁ − x₀‖ < 1/K₀, since for z ∈ Ω₁, ‖z − x₁‖ < 1/K₀ − ‖x₁ − x₀‖.

(i) There exists an F′(x∗)# ∈ D(F′(x∗)) such that ‖F′(x∗)#‖ ≤ q, and for all x ∈ U(x∗, ρ₀) the set D(F′(x)) has an element of minimal norm, where, for γ ∈ [0, 1), ρ₀ = (1 − γ)/(qL₀).

Then, there exists ρ ∈ (0, ρ₁) with U(x∗, ρ) ⊂ Ω such that, for any x₀ ∈ U(x∗, ρ), the sequence {xₙ} generated by method (8.1) converges quadratically to y∗ ∈ U(x₀, ρ₁) ∩ {x₀ + R(F′(x₀)#)} with F′(x₀)#F(y∗) = 0, where ρ₁ = (1 − γ)/(qL₀) + 1/(qL), F′(x₀)# ∈ argmin{‖Q‖ : Q ∈ D(F′(x₀))}, F′(xₙ)# = (I + F′(x₀)#(F′(xₙ) − F′(x₀)))⁻¹F′(x₀)# and R(F′(x₀)#) + x₀ = {x + x₀ : x ∈ R(F′(x₀)#)}.

Proof. Set γ = (1/(2L))(γ/q)². By the continuity of F at x∗, there exists ρ ∈ (0, ρ₀) such that for each x ∈ U(x∗, ρ), ‖F(x)‖ ≤ γ. Using (8.22), there exists F′(x∗)# ∈ D(F′(x∗)) so that
‖F′(x∗)#(F′(x) − F′(x∗))‖ ≤ qL₀ρ < 1. (8.24)
Hence, by the Banach perturbation lemma for outer inverses [7, Lemma 2.1],
‖F′(x)#‖ ≤ ‖F′(x∗)#‖/(1 − qL₀ρ) ≤ q/(1 − qL₀ρ) = β, (8.25)
where F′(x)# is an outer inverse of F′(x) defined by F′(x)# = (I + F′(x∗)#(F′(x) − F′(x∗)))⁻¹F′(x∗)#. So the outer inverse F′(x₀)# ∈ argmin{‖Q‖ : Q ∈ D(F′(x₀))} for all x₀ ∈ U(x∗, ρ) is such that ‖F′(x₀)#‖ ≤ β. In view of (8.24), for all x, y ∈ Ω₂,
‖F′(x₀)#(F′(y) − F′(x))‖ ≤ ‖F′(x₀)#‖‖F′(y) − F′(x)‖ ≤ βL‖y − x‖ = K‖y − x‖
for K = βL, so
h = K‖F′(x₀)#F(x₀)‖ ≤ βLβγ ≤ 1/2
by the choice of γ. Moreover, for all z ∈ U(x₀, t∗), we get
‖x∗ − z‖ ≤ ‖x₀ − x∗‖ + ‖z − x₀‖ ≤ ρ₀ + 1/(qL) = ρ₁,
so U(x₀, t∗) ⊂ U(x∗, ρ₁) ⊂ Ω. That is, all conditions of Theorem 2.5 are verified. Hence, the sequence {xₙ} ⊂ U(x₀, t∗) with limₙ→∞ xₙ = y∗, where y∗ solves the equation F′(x₀)#F(x) = 0.

Next, we show the quadratic convergence of method (8.1). By the definition of F′(xₙ)#, R(F′(xₙ)#) = R(F′(x₀)#), so
xₙ₊₁ − xₙ = −F′(xₙ)#F(xₙ) ∈ R(F′(xₙ)#) (8.26)
gives
xₙ₊₁ ∈ R(F′(xₙ)#) + xₙ = R(F′(xₙ₋₁)#) + xₙ₋₁ = … = R(F′(x₀)#) + x₀,
so y∗ ∈ R(F′(xₙ)#) + xₙ₊₁. But then, we get y∗ ∈ R(F′(x₀)#) + x₀ = R(F′(xₙ)#) + x₀, and
F′(xₙ)#F′(xₙ)(y∗ − xₙ₊₁) = F′(xₙ)#F′(xₙ)(y∗ − x₀) − F′(xₙ)#F′(xₙ)(xₙ₊₁ − x₀) = y∗ − xₙ₊₁.
But we have F′(xₙ)# = F′(xₙ)#F′(x₀)F′(x₀)#, which together with F′(x₀)#F(y∗) = 0 and N(F′(x₀)#) = N(F′(xₙ)#) implies F′(xₙ)#F(y∗) = 0, and
‖y∗ − xₙ₊₁‖ = ‖F′(xₙ)#F′(xₙ)(y∗ − xₙ₊₁)‖
= ‖F′(xₙ)#(F′(xₙ)(y∗ − xₙ) + F(xₙ) − F(y∗))‖
= ‖F′(xₙ)#(F′(xₙ)(y∗ − xₙ) − ∫₀¹ F′(xₙ + τ(y∗ − xₙ))(y∗ − xₙ)dτ)‖
≤ ‖F′(xₙ)#F′(x₀)‖ · ‖F′(x₀)#(F′(xₙ) − ∫₀¹ F′(xₙ + τ(y∗ − xₙ))dτ)(y∗ − xₙ)‖
≤ (1/(1 − Kt∗))(K/2)‖y∗ − xₙ‖² = eₙ‖xₙ − y∗‖². (8.27)

The conclusions of Theorem 2.7 hold if condition (i) is replaced as follows:


Proposition 1. Suppose the conditions of Theorem 2.7 hold, but (i) is replaced by
(ii) the generalized inverse F′(x∗)⁺ exists, ‖F′(x∗)⁺‖ ≤ q, and for all z ∈ U(x∗, ρ₀), dim N(F′(z)) = dim N(F′(x∗)) and codim R(F′(z)) = codim R(F′(x∗)).
Then, the conclusions of Theorem 2.7 hold for F′(x₀)# ∈ {Q ∈ D(F′(x₀)) : ‖Q‖ ≤ ‖F′(x₀)⁺‖}.

Proof. As in Theorem 2.7, with "+" replacing "#", we have ‖F′(x∗)⁺(F′(x) − F′(x∗))‖ ≤ qL₀ρ < 1, so F′(x)⁺ = (I + F′(x∗)⁺(F′(x) − F′(x∗)))⁻¹F′(x∗)⁺,
‖F′(x)⁺‖ ≤ ‖F′(x∗)⁺‖/(1 − qL₀ρ) ≤ q/(1 − qL₀ρ) = β,
and ‖F′(x₀)#‖ ≤ β.

Hence, the conditions of Theorem 2.7 are verified.

Remark 15. (a) Comments similar to the favorable ones in Remark 2.4 can be made. Notice that if K ≤ K₀ or L ≤ L₀, then the obtained results simply hold with K₀ (or L₀) replacing K (or L), respectively. Clearly, K₀ is used to define Ω₀, which in turn defines K. That is, we have K = K(Ω, K₀) and L = L(Ω, L₀), but K₁ = K₁(Ω) and L₁ = L₁(Ω), where L₁ is the constant used in [7], i.e.,
‖F′(x) − F′(y)‖ ≤ L₁‖x − y‖ for all x, y ∈ Ω₂.
Then, again, Ω₂ ⊆ Ω, so
r = 1/(3qL₁) ≤ ρ₀
and
eₙ ≤ e¹ₙ,
where e¹ₙ = (1/(1 − K₁t̄∗))(K/2) is used in [7] for K₁ = (3/2)β₁L₁ ≥ K and β₁ = q/(1 − qL₁r).

(b) Examples where these estimations are strict can be found in [1]-[4]. Hence, we have justified the benefits as claimed in the introduction. It is worth noticing that these benefits are obtained without additional conditions, since the computation of K₁ (or L₁) in practice involves the computation of K₀ and K (or L₀ and L) as special cases. Finally, our results have extended the results by Rall [10] and Yamamoto and Chen [12], which are special cases of the results in [7].

3. Conclusion

We present a convergence analysis of Newton’s method to solve Banach space-valued equations using generalized inverses. Earlier works are extended without additional hypotheses with benefits: more initial points for convergence and tighter computable error distances (so fewer iterates are needed to reach a predetermined accuracy).

References

[1] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier Publ. Co., New York, 2007.
[2] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28 (2012), 364–387.
[3] Argyros, I. K., Magréñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[4] Argyros, I. K., Magréñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, 2017.
[5] Ben-Israel, A., Greville, T. N. E., Generalized Inverses: Theory and Applications, J. Wiley, 1974.
[6] Ben-Israel, A., A Newton-Raphson method for the solution of systems of equations, J. Math. Anal. Appl., 15 (1966), 243–252.
[7] Chen, X., Nashed, Z., Qi, L., Convergence of Newton's method for singular smooth and nonsmooth equations using adaptive outer inverses, SIAM J. Optim., 7 (1997), 445–462, https://doi.org/10.1137/S1052623493246288.
[8] Chen, X., Nashed, Z., Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math., 66 (1993), 235–257.
[9] Haussler, W. M., A Kantorovich type convergence analysis for the Gauss-Newton method, Numer. Math., 48 (1986), 119–125.
[10] Rall, L. B., A note on the convergence of Newton's method, SIAM J. Numer. Anal., 1 (1974), 34–36.
[11] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[12] Yamamoto, T., Chen, X., Ball convergence theorems and error estimates for certain iterative methods for nonlinear equations, Japan J. Appl. Math., 7 (1990), 131–143.

Chapter 9

Newton’s Method for Solving Nonlinear Equations Using Generalized Inverses: Part II Matrices

1. Introduction

Let F : D ⊂ ℝⁱ → ℝʲ be a differentiable mapping, and let J(x) stand for the Jacobian of F at x ∈ ℝⁱ. Newton's method in the form
xₙ₊₁ = xₙ − T(xₙ)F(xₙ) (9.1)
for x₀ ∈ D and each n = 0, 1, 2, … was used in [7] to generate a sequence {xₙ} converging quadratically, under certain conditions, to a limit x∗ satisfying
T(x)F(x) = 0. (9.2)
Here T(x) stands for a {2}-inverse of J(x). A {2}-inverse of E ∈ ℝ^{i×j} is a matrix E₁ ∈ ℝ^{j×i} satisfying E₁EE₁ = E₁. Then, we have rank E₁ ≤ rank E, with equality when E₁ = E⁺. In this chapter, we show how to obtain the same benefits as in [7] but using method (9.1) (instead of the corresponding method in [7]). To avoid repetitions, we refer the reader to [7] for motivation and benefits (see also [1]-[14]).
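The {2}-inverse identity is easy to check numerically. Below, the Moore–Penrose inverse E⁺ serves as the concrete choice of E₁ (a proper {2}-inverse of lower rank would also satisfy the identity); the matrix E is randomly generated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # a rank-3 matrix

E1 = np.linalg.pinv(E)   # Moore-Penrose inverse: one concrete {2}-inverse of E

check = E1 @ E @ E1      # the {2}-inverse identity requires E1 E E1 = E1
rank_E = np.linalg.matrix_rank(E)
rank_E1 = np.linalg.matrix_rank(E1)
```

Here rank E₁ = rank E, the equality case E₁ = E⁺ mentioned above; a {2}-inverse of smaller rank would satisfy the same identity with strict rank inequality.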

2. Local Convergence

Let β, M₀, M and N be nonnegative parameters. Define the real quadratic polynomials
$$p_1(t) = \left[\left(\frac{M_0\beta}{2} + N\right)^2 - 2M_0\right]t^2 - \left(\frac{M_0\beta}{2} + N\right)t + 1$$
and
$$p_2(t) = \frac{\alpha M_0 + M}{2}\,t^2 - \left[1 - \alpha\left(\frac{M\beta}{2} + N\right)\right]t + \alpha.$$
Set $\alpha_0 = \left(\frac{M_0\beta}{2} + N\right)^{-1}$ and $\gamma = \left(\frac{M_0\beta}{2} + N\right)^2 - 2M_0$, provided that not all of M₀, β and N are zero. Define α to be
$$\alpha = \begin{cases} \text{the smallest of } \alpha_0 \text{ and the smallest root of } p_1(t) = 0, & \text{if } \gamma > 0,\\[1mm] \dfrac{1}{2\alpha_0}, & \text{if } \gamma = 0,\\[1mm] \text{the smallest of } \alpha_0 \text{ and the largest root of } p_1(t) = 0, & \text{if } \gamma < 0. \end{cases} \tag{9.3}$$

Then, the discriminant of p₁(t) = 0 is nonnegative, so p₂(t) ≤ 0 holds provided that t ∈ [r₁, r₂], where r₁ ≤ r₂ are the two nonnegative roots of p₂(t) = 0. Define r to be
$$r = \begin{cases} r_1, & \text{if } \tfrac{1}{M_0} \in [r_1, \infty),\\ 0, & \text{if } \tfrac{1}{M_0} \in [0, r_1). \end{cases} \tag{9.4}$$
By U[z, ε] we denote the closure of the open ball U(z, ε) with center z ∈ D and of radius ε > 0. Next, we show the following extension of Theorem 1 in [7], using the preceding notation.

Theorem 12. Set D₀ = U(x₀, r) ∩ D and suppose:
‖J(x) − J(x₀)‖ ≤ M₀‖x − x₀‖ for all x ∈ D, (9.5)
‖J(x) − J(y)‖ ≤ M‖x − y‖ for all x, y ∈ D₀; (9.6)
the Jacobian J(x) has a {2}-inverse T(x) ∈ ℝ^{j×i};
‖T(x₀)‖ ≤ β, (9.7)
β‖F(x₀)‖ ≤ λ, (9.8)
‖(T(x) − T(y))F(y)‖ ≤ N‖x − y‖² for all x, y ∈ D₀, (9.9)
h = λK < 1 (9.10)
and
r > λ/(1 − h), (9.11)
where
$$K \in \left[\frac{M}{2}\left(M_0(r + \beta) + N\right),\; 1\right]. \tag{9.12}$$
Then, the sequence {xₙ} generated by method (9.1) is well defined in U(x₀, r), remains in U(x₀, r) for all n = 0, 1, 2, … and converges to its limit x∗ ∈ U(x₀, r) solving the equation
T(x∗)F(x) = 0. (9.13)
Moreover, method (9.1) is at least quadratically convergent.


Proof. Mathematical induction shall be used to verify the items
{xₙ} ⊂ U(x₀, r) (9.14)
and
$$\|x_{n+1} - x_n\| \le e_n = \lambda h^{2^n - 1}. \tag{9.15}$$
For n = 0, (9.15), and for n = 1, (9.14), follow from (9.8). Suppose (9.15) holds for m = 0, 1, …, n − 1. Then, we have
$$\|x_{n+1} - x_0\| \le \sum_{m=1}^{n+1}\|x_m - x_{m-1}\| \le \lambda \sum_{m=0}^{n} h^{2^m - 1} < \frac{\lambda}{1 - h} < r, \tag{9.16}$$
which implies (9.14). Estimate (9.15) is shown if we first note that TJT = T implies TJz = z for all z ∈ R(T). Then, we can write in turn, by method (9.1) and (9.6)-(9.14), that
xₘ₊₁ − xₘ = −T(xₘ)F(xₘ)
= xₘ − xₘ₋₁ − T(xₘ)F(xₘ) + T(xₘ₋₁)F(xₘ₋₁)
= T(xₘ₋₁)J(xₘ₋₁)(xₘ − xₘ₋₁) − T(xₘ)F(xₘ) + T(xₘ₋₁)F(xₘ₋₁)
= −T(xₘ₋₁)(F(xₘ) − F(xₘ₋₁) − J(xₘ₋₁)(xₘ − xₘ₋₁)) + (T(xₘ₋₁) − T(xₘ))F(xₘ),
so
‖xₘ₊₁ − xₘ‖ ≤ ‖T(xₘ₋₁)(F(xₘ) − F(xₘ₋₁) − J(xₘ₋₁)(xₘ − xₘ₋₁))‖ + ‖(T(xₘ₋₁) − T(xₘ))F(xₘ)‖
≤ ((M/2)‖T(xₘ₋₁)‖ + N)‖xₘ − xₘ₋₁‖² ≤ K‖xₘ − xₘ₋₁‖², (9.17)
since ‖T(xₘ₋₁)‖ ≤ ‖T(xₘ₋₁) − T(x₀)‖ + ‖T(x₀)‖ ≤ (M₀/2)‖xₘ₋₁ − x₀‖ + β ≤ (M₀/2)r + β. It follows that
$$\|x_{m+1} - x_m\| \le \alpha h^{2^m - 1}. \tag{9.18}$$
Estimate (9.18) is valid for m = 0 because of (9.8). Suppose it is true for some m ≥ 0; then, we get
$$\|x_{m+1} - x_m\| \le K\|x_m - x_{m-1}\|^2 \le K\alpha^2 h^{2^m - 2} \le \alpha h^{2^m - 1},$$
showing (9.15). Moreover, from (9.15) we obtain, for all m ≥ k,
$$\|x_{m+1} - x_k\| \le \|x_{m+1} - x_m\| + \|x_m - x_{m-1}\| + \cdots + \|x_{k+1} - x_k\| \le \alpha h^{2^k - 1}\left(1 + h^{2^k} + (h^{2^k})^2 + \cdots\right)$$
j). The solution methods for determining a zero x∗ of equation (10.1) are usually iterative, since x∗ can be determined in closed form only in special cases. Newton-type methods are the most popular ones used to generate a sequence converging quadratically to x∗ under certain conditions [1]-[16]. We study the local convergence of Newton's method, determined for all n = 0, 1, 2, … by
xₙ₊₁ = xₙ − A†(xₙ)F(xₙ), (10.2)
where A = J_{rank−m} is the Jacobian of F of rank m and A† is the rank-m projection of the Jacobian A, i.e., the so-called Moore-Penrose inverse of A, and B = J. Notice that if m = i = j, we obtain the classical Newton's method for solving equation (10.1). By ‖·‖ we denote the norm ‖·‖₂ in this chapter. We show that it is possible to provide computable:
(a) a ball of convergence (so we know from where to pick the initial point x₀ so that limₙ→∞ xₙ = x∗), and
(b) an error bound on ‖xₙ − x∗‖ (so we know in advance how many iterations are needed to obtain a pre-decided error tolerance ε > 0).
Benefits (a) and (b) are not given in the motivational study [16] (see also [1]-[15]). Our results are obtained under weaker conditions. To avoid repetitions, we assume familiarity with basic matrix theory [7,9,14]. Next, we present our results.
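Method (10.2) can be sketched with `numpy.linalg.pinv` computing A†. The circle example below is our own illustration, not from the text: its zeros form a whole curve of nonisolated solutions, and the 1×2 Jacobian can never have full column rank, yet the iteration still converges to a zero near the initial guess.

```python
import numpy as np

def newton_pinv(F, J, x0, tol=1e-12, max_iter=60):
    """Iteration (10.2): x_{n+1} = x_n - A^+(x_n) F(x_n), with A^+ the
    Moore-Penrose inverse of the (possibly rank-deficient) Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = np.atleast_1d(F(x))
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.pinv(np.atleast_2d(J(x))) @ r
    return x

# F(x, y) = x^2 + y^2 - 1: a circle of nonisolated zeros (our own example)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]]])
z = newton_pinv(F, J, [1.5, 1.0])
```

Along the radial direction the residual here satisfies rₙ₊₁ = rₙ²/(4‖xₙ‖²), so the quadratic rate claimed below for (10.2) is visible directly.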

2. Convergence of Method (10.2)

It is convenient for the local convergence of method (10.2) to define some real parameters. Let a, b, b₀, c₀, α, γ and d₀ be given parameters. Define
$$\rho_0 = \frac{-(2\gamma + ab)d_0 + \sqrt{(2\gamma + ab)^2 d_0^2 + 4(2\gamma + ab)c_0}}{(2c_0 + ab)c_0},$$
$$\rho_1 = \frac{-d_0 b_0 + \sqrt{d_0^2 b_0^2 + 4c_0^2}}{2c_0^2}, \qquad \rho_2 = \frac{-d_0 b_0 + \sqrt{d_0^2 b_0^2 + 4c_0^2\,\alpha\mu}}{2c_0^2}$$
and
$$\rho = \min\{\rho_0, \rho_1, \rho_2\}, \tag{10.3}$$
where µ = max{1, d(b/2 + ab/(1 − α))}. We denote by U[z, r] the closure of the ball U(z, r) having z ∈ S as a center and r > 0 as a radius. Next, we develop the local convergence analysis of Newton's method (10.2) using the preceding notation.

Theorem 13. Suppose that rank(A(x∗)) = k. Let D₁ = U(x∗, R) ⊂ D for some R > 0. Then, if ρ ∈ (0, R] and x₀ ∈ D₀ = U(x∗, ρ), Newton's method (10.2) converges at least quadratically to a (semi-regular) zero y∗ ∈ D₁ of F along the same branch as x∗.

Proof. By Lemma 1 in [16] we have that, for all u, v ∈ D₁, rank(A(u)) ≥ k and
‖A(u)A†(u) − A(v)A†(v)‖ ≤ a‖u − v‖, (10.4)
‖A(u) − A(v)‖ ≤ d‖u − v‖, (10.5)
‖A†(u) − A†(x∗)‖ ≤ c₀‖u − x∗‖, (10.6)
‖A†(x∗)‖ ≤ d₀, (10.7)
‖F(u) − F(x∗)‖ ≤ b₀‖u − x∗‖, (10.8)
‖F(u) − F(v)‖ ≤ b‖u − v‖, (10.9)
and
‖F(v) − F(u) − A(u)(v − u)‖ ≤ γ‖u − v‖². (10.10)
Then, for all u, v ∈ U(x∗, ρ),
‖A†(u)‖(γ‖u − v‖ + a‖F(v)‖) ≤ (‖A†(u) − A†(x∗)‖ + ‖A†(x∗)‖)(γ(‖u − x∗‖ + ‖v − x∗‖) + a‖F(v) − F(x∗)‖) ≤ (c₀ρ + d₀)(2γ + ab)ρ < 1. (10.11)
Such an α exists by the choice of ρ. Moreover, there exists θ ∈ (0, λ/2), for λ ∈ (0, 2), such that
‖A†(w)‖‖F(w)‖ ≤ (‖A†(w) − A†(x∗)‖ + ‖A†(x∗)‖)‖F(w) − F(x∗)‖ ≤ (c₀ρ + d₀)b₀ρ ≤ λ …

… γ > 0 and [·, ·; F] : D × D → L(E, E) is a divided difference of order one on E [3]-[8]. The semi-local convergence of SLM was given in [12]. Motivated by optimization concerns, we provide a new analysis with benefits (B): weaker sufficient convergence criteria, an extended convergence region, tighter error estimates on ‖xₙ₊₁ − xₙ‖ and ‖x∗ − xₙ‖, and more precise information on the location of the solution. The benefits (B) are obtained under the same computational effort, since in practice the new constants are special cases of the ones used in [12].

2. Analysis

The following conditions are used in [12]. Suppose there exist(s):


(c1) x₀ ∈ E, α > 0, λ ≥ 0 such that F′(x₀)⁻¹ ∈ L(E, E),
‖F′(x₀)⁻¹‖ ≤ α and ‖F′(x₀)⁻¹F(x₀)‖ ≤ λ;
(c2) L₂ > 0 such that for all u, v ∈ D,
‖F′(u) − F′(v)‖ ≤ L₂‖u − v‖. (11.3)
The following items are also connected to this analysis: the scalar polynomial
$$f(t) = \frac{K_2}{2}t^2 - \frac{t}{\gamma} + \delta, \tag{11.4}$$
the sequence
$$t_0 = 0, \qquad t_{n+1} - t_n = -\frac{f(t_n)}{f'(t_n)} = \frac{\alpha L_2 (t_n - t_{n-1})^2}{1 - \alpha K_2 t_n}, \tag{11.5}$$
where
$$\delta = \lambda\alpha^{-1}, \tag{11.6}$$
$$\gamma = \frac{2\alpha}{2 - p\alpha L_2\delta} \tag{11.7}$$
and
$$K_2 = L_2\left(1 + \frac{p}{\gamma}\right); \tag{11.8}$$
the conditions
$$p\alpha L_2\delta < 2, \tag{11.9}$$
$$h_2 = K_2\gamma\lambda \le \frac{1}{2}; \tag{11.10}$$
and the parameters
$$t_* = \frac{1 - \sqrt{1 - 2h_2}}{\gamma K_2}, \qquad t_{**} = \frac{1 + \sqrt{1 - 2h_2}}{\gamma K_2}. \tag{11.11}$$

Then, the following semi-local convergence result was shown in [12, Theorem 4].

Theorem 14. Under conditions (c1), (c2), (11.9) and (11.10), further suppose that
U[x₀, t∗ + pδ] ⊂ D. (11.12)
Then, the sequence {xₙ} starting at x₀ ∈ D, with xₙ, xₙ − ψₙ, xₙ + ψₙ ∈ U[x₀, t∗ + pδ], generated by method (11.2) is well defined in U[x₀, t∗ + pδ], remains in U[x₀, t∗ + pδ] and converges to a solution x∗ ∈ U[x₀, t∗ + pδ] of the equation F(x) = 0. Moreover, the following error estimates hold:
‖xₙ₊₁ − xₙ‖ ≤ tₙ₊₁ − tₙ, (11.13)
‖x∗ − xₙ‖ ≤ t∗ − tₙ, (11.14)
and
$$t_* - t_n = \frac{(t_{**} - t_*)\,\theta^{2^n}}{1 - \theta^{2^n}}, \quad \text{if } t_* < t_{**}, \quad \theta = \frac{t_*}{t_{**}}. \tag{11.15}$$

(iii) Suppose there exist a > 0, b > 0 such that
‖F′(x₀ + θ(x₁ − x₀)) − F′(x₀)‖ ≤ θa‖x₁ − x₀‖ (11.42)
and
‖F′(x₁) − F′(x₀)‖ ≤ b‖x₁ − x₀‖ (11.43)


for all θ ∈ [0, 1]. Then, it was shown in [5] that the sequence {qₙ} defined by
$$q_0 = 0, \quad q_1 = \lambda, \quad q_2 = q_1 + \frac{\alpha a (q_1 - q_0)^2}{2(1 - b\alpha q_1)}, \qquad q_{n+2} = q_{n+1} + \frac{K\alpha (q_{n+1} - q_n)^2}{2(1 - L_0\alpha q_{n+1})}$$
is also majorizing for the sequence {xₙ}. The convergence criterion for the sequence {qₙ} is given by
$$h_0 = \frac{\lambda}{2c} \le \frac{1}{2}, \tag{11.44}$$
where
$$p_1(s) = (Ka + 2dL_0(a - 2b))s^2 + 4p(L_0 + b)s - 4d,$$
$$d = \frac{1}{L_0 + b}\cdot\frac{2K}{K + \sqrt{K^2 + 8L_0 K}}$$
and
$$c = \begin{cases} \text{the root of the (then linear) polynomial } p_1, & \text{if } Ka + 2dL_0(a - 2b) = 0,\\ \text{the positive root of } p_1, & \text{if } Ka + 2dL_0(a - 2b) > 0,\\ \text{the smaller positive root of } p_1, & \text{if } Ka + 2dL_0(a - 2b) < 0. \end{cases}$$

Notice that b ≤ a ≤ L₀. Hence, {qₙ} is a tighter majorizing sequence than {rₙ}. Criterion (11.44) was given by us in [5] for K = L₂. Therefore, (11.44) and {qₙ} can also replace (11.34) and {sₙ} in Theorem 15.

(iv) It follows from the definition of the sequence {sₙ} that if
L₀αsₙ < 1, (11.45)
then the sequence {sₙ} is such that 0 ≤ sₙ ≤ sₙ₊₁ and limₙ→∞ sₙ = s∗ ≤ 1/(L₀α). Hence, the weaker condition (11.45) can be used in Theorem 15 in place of condition (11.33).

3. Conclusion

The technique of recurrent functions has been utilized to extend the applicability of NM for solving nonlinear equations. The new results are finer than the earlier ones, so they can replace them. No additional conditions have been used. The technique is very general, rendering it useful for extending the applicability of other iterative methods as well.

References

[1] Adly, S., Ngai, H. V., Nguyen, V. V., Newton's procedure for solving generalized equations: Kantorovich's and Smale's approaches, J. Math. Anal. Appl., 439 (2016), 396–418.
[2] Adly, S., Cibulka, R., Ngai, H. V., Newton's procedure for solving inclusions using set-valued approximations, SIAM J. Optim., 25 (1) (2015), 159–184.


[3] Argyros, I. K., Unified convergence criteria for iterative Banach space valued methods with applications, Mathematics, 9 (16) (2021), 1942, https://doi.org/10.3390/math9161942.
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28 (3) (2012), 364–387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Appl. Math. Comput., 225 (2013), 372–386.
[6] Argyros, I. K., Magréñán, Á. A., A Contemporary Study of Iterative Procedures, Elsevier (Academic Press), New York, 2018.
[7] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2021.
[8] Argyros, I. K., Unified convergence criteria for Banach space valued methods with applications, Mathematics, 9 (16) (2021), 1942.
[9] Ciarlet, P. G., Mardare, C., On the Newton-Kantorovich theorem, Anal. Appl., 10 (2012), 249–269.
[10] Cibulka, R., Dontchev, A. L., Preininger, J., Veliov, V., Roubal, T., Kantorovich-type theorems for generalized equations, Journal of Convex Analysis, 25 (2018), No. 2, 459–486.
[11] Ezquerro, J. A., Hernández, M. A., Newton's Procedure: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, 2018.
[12] Hernández-Verón, M. A., Magréñán, Á. A., Martínez, E., Singh, S., An improvement of derivative-free point-to-point iterative processes with central divided differences, International J. of Nonlinear Sciences and Numerical Simulation, 2022.
[13] Kantorovich, L. V., Akilov, G. P., Functional Analysis in Normed Spaces, The Macmillan Co., New York, 1964.
[14] Magréñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's procedure applied to cubic polynomials, J. Comput. Appl. Math., 275 (2015), 527–538.
[15] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman, Boston, 1984.
[16] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26 (2010), 3–42.
[17] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, 2019.

Chapter 12

Convergence Analysis for King-Werner-Like Methods

1. Introduction

In [6], Argyros and Ren studied King-Werner-like methods for approximating a locally unique solution x⋆ of the equation
F(x) = 0, (12.1)
where F is a Fréchet-differentiable operator defined on a convex subset Ω of a Banach space B₁ with values in a Banach space B₂. In particular, they studied the semi-local convergence analysis of the method defined for n = 0, 1, 2, … by
xₙ₊₁ = xₙ − Aₙ⁻¹F(xₙ),
yₙ₊₁ = xₙ₊₁ − Aₙ⁻¹F(xₙ₊₁), (12.2)
where x₀, y₀ are initial points, Aₙ = [xₙ, yₙ; F] and [x, y; F] denotes a divided difference of order one for the operator F at points x, y ∈ Ω [2, 4, 7] satisfying
[x, y; F](x − y) = F(x) − F(y) for each x, y ∈ Ω with x ≠ y. (12.3)
If F is Fréchet-differentiable on Ω, then F′(x) = [x, x; F] for each x ∈ Ω. The local convergence analysis of method (12.2) was given in [9] in the special case when B₁ = B₂ = ℝ. The convergence order of method (12.2) was shown to be 1 + √2. Using the idea of restricted convergence domains, we improve the applicability of method (12.2).

The chapter is organized as follows: Section 2 contains the semi-local convergence analysis of method (12.2), and Section 3 contains the local convergence analysis of method (12.2). Numerical examples, including favorable comparisons with earlier studies such as [6, 9], are presented in the concluding Section 4.
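In the scalar case B₁ = B₂ = ℝ, method (12.2) can be sketched directly with the standard one-dimensional divided difference as Aₙ = [xₙ, yₙ; F]. The test equation f(t) = t² − 2 and the starting points are our own illustrative choices.

```python
import math

def king_werner(f, x0, y0, tol=1e-12, max_iter=50):
    """Method (12.2) for scalar f, with the divided difference
    A_n = (f(x_n) - f(y_n)) / (x_n - y_n):
        x_{n+1} = x_n - f(x_n)/A_n,  y_{n+1} = x_{n+1} - f(x_{n+1})/A_n."""
    x, y = x0, y0
    for _ in range(max_iter):
        if abs(f(x)) < tol:           # stop before A becomes 0/0 at the root
            break
        A = (f(x) - f(y)) / (x - y)   # divided difference of order one, x != y
        x_new = x - f(x) / A
        y = x_new - f(x_new) / A      # reuse the same A for the y-update
        x = x_new
    return x

root = king_werner(lambda t: t**2 - 2.0, x0=1.0, y0=1.5)
```

Only one new divided difference is formed per step, which is what buys the convergence order 1 + √2 at roughly secant-method cost.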

2. Semi-Local Convergence of Method (12.2)

The semi-local convergence analysis of method (12.2) requires the following auxiliary result on majorizing sequences. The proof of this result can be found in [6].


Lemma 6. Let L₀ > 0, L > 0, s₀ ≥ 0, t₁ ≥ 0 be given parameters. Denote by α the only root in the interval (0, 1) of the polynomial p defined by
$$p(t) = L_0 t^3 + L_0 t^2 + 2Lt - 2L. \tag{12.4}$$
Suppose that
$$0 < \frac{L(t_1 + s_0)}{1 - L_0(t_1 + s_1 + s_0)} \le \alpha \le 1 - \frac{2L_0 t_1}{1 - L_0 s_0}, \tag{12.5}$$
where
$$s_1 = t_1 + L(t_1 + s_0)t_1. \tag{12.6}$$
Then, the scalar sequence {tₙ} defined by
$$t_0 = 0, \qquad s_{n+1} = t_{n+1} + \frac{L(t_{n+1} - t_n + s_n - t_n)(t_{n+1} - t_n)}{1 - L_0(t_n - t_0 + s_n + s_0)} \quad \text{for each } n = 1, 2, \ldots,$$
$$t_{n+2} = t_{n+1} + \frac{L(t_{n+1} - t_n + s_n - t_n)(t_{n+1} - t_n)}{1 - L_0(t_{n+1} - t_0 + s_{n+1} + s_0)} \quad \text{for each } n = 0, 1, 2, \ldots \tag{12.7}$$
is well defined, increasing, bounded above by
$$t^{\star\star} = \frac{t_1}{1 - \alpha} \tag{12.8}$$
and converges to its unique least upper bound t⋆, which satisfies
t₁ ≤ t⋆ ≤ t⋆⋆. (12.9)
Moreover, the following estimates hold for each n = 1, 2, …:
sₙ − tₙ ≤ α(tₙ − tₙ₋₁) ≤ αⁿ(t₁ − t₀), (12.10)
tₙ₊₁ − tₙ ≤ α(tₙ − tₙ₋₁) ≤ αⁿ(t₁ − t₀) (12.11)
and
tₙ ≤ sₙ. (12.12)
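The sequence (12.7) is straightforward to tabulate. The parameter values in the usage line are our own illustrative choices (picked so that condition (12.5) is satisfiable), not values from the text.

```python
def majorizing(L0, L, s0, t1, n_terms=25):
    """Tabulate the majorizing sequences (12.6)-(12.7):
    s_{n+1} and t_{n+2} share the numerator L(t_{n+1}-t_n+s_n-t_n)(t_{n+1}-t_n)
    but use different denominators (t_0 = 0 throughout)."""
    t = [0.0, t1]
    s = [s0, t1 + L * (t1 + s0) * t1]                    # s_1 from (12.6)
    # n = 0 step of (12.7): t_2 uses s_1
    t.append(t1 + L * (t1 + s0) * t1 / (1 - L0 * (t1 + s[1] + s0)))
    for n in range(1, n_terms):
        num = L * (t[n+1] - t[n] + s[n] - t[n]) * (t[n+1] - t[n])
        s.append(t[n+1] + num / (1 - L0 * (t[n] + s[n] + s0)))
        t.append(t[n+1] + num / (1 - L0 * (t[n+1] + s[n+1] + s0)))
    return t, s

t_seq, s_seq = majorizing(L0=1.0, L=1.0, s0=0.1, t1=0.1)  # illustrative parameters
```

For these parameters the increments tₙ₊₁ − tₙ shrink extremely fast, so the sequence is visibly increasing and settles well below the bound t⋆⋆ of (12.8).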

and for each n = 1, 2, .... Denote by U(w, ξ), U(w, ξ), the open and closed balls in B1 , respectively, with center w ∈ B1 and of radius ξ > 0. Next, we present the semilocal convergence of method (12.2) using {tn } as a majorizing sequence. Theorem 16. Let F : Ω ⊂ B1 → B2 be a Fr´echet-differentiable operator. Suppose that there exists a divided differentiable [., ., ; F] of order one for operator F on Ω × Ω. Moreover, suppose that there exist x0 , y0 ∈ Ω, L0 > 0, L > 0, s0 ≥ 0, t1 ≥ 0 such that A−1 0 ∈ L(B2 , B1 ),

(12.13)

kA−1 0 F(x0 )k ≤ t1 ,

(12.14)

kx0 − y0 k ≤ s0 ,

(12.15)

kA−1 0 ([x, y; F] − A0 )k ≤ L0 (kx − x0 k + ky − y0 k), , for each x, y ∈ Ω

(12.16)

Convergence Analysis for King-Werner-Like Methods kA−1 0 ([x, y; F] − [z, v; F])k ≤ L(kx − zk + ky − vk),

79 (12.17)

for each x, y, z ∈ Ω1 = Ω ∩U(x0 , 2L1 0 ) U(x0 ,t ? ) ⊆ Ω

(12.18)

and the hypotheses of Lemma 2.1 hold, where A0 = [x0, y0; F] and t* is given in Lemma 2.1. Then, the sequence {xn} generated by method (12.2) is well defined, remains in U(x0, t*) and converges to a unique solution x* ∈ Ū(x0, t*) of the equation F(x) = 0. Moreover, the following estimates hold for each n = 0, 1, 2, . . .:

‖xn − x*‖ ≤ t* − tn. (12.19)

Furthermore, if there exists R > t* such that

U(x0, R) ⊆ Ω (12.20)

and

L0 (t* + R + s0) < 1, (12.21)

then the point x* is the only solution of the equation F(x) = 0 in U(x0, R).

Proof. Simply notice that the iterates remain in Ω1, which is a more precise location than the set Ω used in [6], since Ω1 ⊆ Ω. Given this, the proof follows from the corresponding one in [6].

Remark 20. (a) The limit point t* can be replaced by t**, given in closed form by (12.8), in Theorem 2.2.

(b) In [6], Argyros and Ren used the stronger condition

‖A0^{−1}([x, y; F] − [z, v; F])‖ ≤ L1 (‖x − z‖ + ‖y − v‖) for each x, y, z, v ∈ Ω.

Notice that

L0 ≤ L1 and L ≤ L1

hold in general, and L1/L0 can be arbitrarily large [2]–[6]. Moreover, it follows from the proof of Theorem 2.2 that hypothesis (12.17) is not needed to compute an upper bound for ‖A0^{−1} F(x1)‖. Hence, we can define the more precise (than {tn}) majorizing sequence {t̄n} (for {xn}) by

t̄0 = 0, t̄1 = t1, s̄0 = s0, s̄1 = t̄1 + L0 (t̄1 + s̄0) t̄1,

s̄_{n+1} = t̄_{n+1} + L(t̄_{n+1} − t̄n + s̄n − t̄n)(t̄_{n+1} − t̄n) / (1 − L0 (t̄n − t̄0 + s̄n + s̄0)) for each n = 1, 2, . . . (12.22)

and

t̄_{n+2} = t̄_{n+1} + L(t̄_{n+1} − t̄n + s̄n − t̄n)(t̄_{n+1} − t̄n) / (1 − L0 (t̄_{n+1} − t̄0 + s̄_{n+1} + s̄0)) for each n = 0, 1, . . . (12.23)

Then, using a simple induction argument, we have that

t̄n ≤ tn, (12.24)

s̄n ≤ sn, (12.25)

t̄_{n+1} − t̄n ≤ t_{n+1} − tn, (12.26)

s̄n − t̄n ≤ sn − tn (12.27)

and

t̄* = lim_{n→∞} t̄n ≤ t*. (12.28)

Furthermore, if L0 < L, then (12.24)–(12.27) are strict inequalities for n ≥ 2, n ≥ 1, n ≥ 1, n ≥ 1, respectively. Clearly, the increasing sequence {t̄n} converges to t̄* under the hypotheses of Lemma 2.1 and can replace {tn} as a majorizing sequence for {xn} in Theorem 2.2. Finally, the old sequences using L1 instead of L in [6] are less precise than the new ones.

3.

Local Convergence of Method (12.2)

We present the local convergence of method (12.2) in this section. We have:

Theorem 17. Let F : Ω ⊆ B1 → B2 be a Fréchet-differentiable operator. Suppose that there exist x* ∈ Ω, l0 > 0 and l > 0 such that

F(x*) = 0, F′(x*)^{−1} ∈ L(B2, B1), (12.1)

‖F′(x*)^{−1}([x, y; F] − F′(x*))‖ ≤ l0 (‖x − x*‖ + ‖y − x*‖) for each x, y ∈ Ω, (12.2)

‖F′(x*)^{−1}([x, y; F] − [z, u; F])‖ ≤ l (‖x − z‖ + ‖y − u‖) (12.3)

for each x, y, z, u ∈ Ω2 := Ω ∩ U(x*, 1/(2l0)), and

U(x*, ρ) ⊆ Ω, (12.4)

where

ρ = 1 / ((1 + √2) l + 2 l0). (12.5)

Then, the sequence {xn} generated by method (12.2) is well defined, remains in U(x*, ρ) and converges to x* with order at least 1 + √2, provided that x0, y0 ∈ U(x*, ρ). Moreover, the following estimates

‖x_{n+2} − x*‖ ≤ ((√2 − 1)/ρ^2) ‖x_{n+1} − x*‖^2 ‖xn − x*‖ (12.6)

and

‖xn − x*‖ ≤ (√(√2 − 1) / ρ)^{Fn − 1} ‖x1 − x*‖^{Fn} (12.7)

hold for each n = 1, 2, . . ., where {Fn} is a generalized Fibonacci sequence defined by F1 = F2 = 1 and F_{n+2} = 2F_{n+1} + Fn.

Proof. As in the proof of Theorem 2.2.
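The Pell-type sequence {Fn} above grows asymptotically by the factor 1 + √2, which is exactly where the convergence order of method (12.2) comes from. A quick sketch:

```python
# The generalized Fibonacci (Pell-type) sequence of Theorem 17:
# F1 = F2 = 1, F_{n+2} = 2 F_{n+1} + F_n; the ratio of consecutive terms
# tends to 1 + sqrt(2), the convergence order of method (12.2).
import math

Fib = [1, 1]
for _ in range(20):
    Fib.append(2 * Fib[-1] + Fib[-2])

print(Fib[:7])           # the first few terms
print(Fib[-1] / Fib[-2]) # approaches 1 + sqrt(2)
```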


Remark 21. (a) For the special case B1 = B2 = R, the radius of the convergence ball for method (12.2) is given in [10] by

ρ* = s*/M, (12.8)

where s* ≈ 0.55279 is a constant and M > 0 is an upper bound for |F′(x*)^{−1} F′′(x)| on the given domain Ω. Using (12.3) we have

‖F′(x*)^{−1}(F′(x) − F′(y))‖ ≤ 2l ‖x − y‖ for any x, y ∈ Ω. (12.9)

That is, we can choose l = M/2. Simply setting l0 = l, we have from (12.5) that

ρ = 2(3 − √2)/(5M) ≈ 0.63432/M > 0.55279/M ≈ s*/M = ρ*. (12.10)

Therefore, even in this special case, a bigger radius of the convergence ball for method (12.2) has been given in Theorem 3.1.

(b) Notice that we have

l0 ≤ l1 and l ≤ l1, (12.11)

where l1 satisfies ‖A0^{−1}([x, y; F] − [z, u; F])‖ ≤ l1 (‖x − z‖ + ‖y − u‖) for each x, y, z, u ∈ Ω. The radius given in [6] is

ρ0 = 1 / ((1 + √2) l1 + 2 l0) ≤ ρ. (12.12)

Moreover, if l < l1, then ρ0 < ρ, and the new error bounds (12.6) and (12.7) are tighter than the old ones in [6], which use ρ0 instead of ρ.

4.

Numerical Examples

We present some numerical examples in this section.

Example 3. Let B1 = B2 = R, Ω = (−1, 1) and define F on Ω by

F(x) = e^x − 1. (12.1)

Then, x* = 0 is a solution of equation (1.1), and F′(x*) = 1. Note that for any x, y, z, u ∈ Ω we have

|F′(x*)^{−1}([x, y; F] − [z, u; F])|
= |∫₀¹ (F′(tx + (1 − t)y) − F′(tz + (1 − t)u)) dt|
= |∫₀¹ ∫₀¹ F′′(θ(tx + (1 − t)y) + (1 − θ)(tz + (1 − t)u)) (tx + (1 − t)y − (tz + (1 − t)u)) dθ dt|
= |∫₀¹ ∫₀¹ e^{θ(tx + (1 − t)y) + (1 − θ)(tz + (1 − t)u)} (tx + (1 − t)y − (tz + (1 − t)u)) dθ dt|
≤ ∫₀¹ e |t(x − z) + (1 − t)(y − u)| dt
≤ (e/2)(|x − z| + |y − u|) (12.2)

and

|F′(x*)^{−1}([x, y; F] − [x*, x*; F])|
= |∫₀¹ F′(tx + (1 − t)y) dt − F′(x*)|
= |∫₀¹ (e^{tx + (1 − t)y} − 1) dt|
= |∫₀¹ (tx + (1 − t)y)(1 + (tx + (1 − t)y)/2! + (tx + (1 − t)y)^2/3! + · · ·) dt|
≤ |∫₀¹ (tx + (1 − t)y)(1 + 1/2! + 1/3! + · · ·) dt|
≤ ((e − 1)/2)(|x − x*| + |y − x*|). (12.3)

That is to say, the Lipschitz condition (12.3) and the center-Lipschitz condition (12.2) hold with l1 = e/2, l = e^{1/(e−1)}/2 (on the restricted domain Ω2) and l0 = (e − 1)/2, respectively. Using (12.5) in Theorem 3.1, we can deduce that the radius of the convergence ball for method (12.2) obtained in [6] is

ρ0 = 1 / ((1 + √2) l1 + 2 l0) = 2 / ((3 + √2)e − 2) ≈ 0.200018471, (12.4)

which is smaller than the corresponding new radius

ρ = 1 / ((1 + √2) l + 2 l0) ≈ 0.2578325131698342986. (12.5)

Let us choose x0 = 0.2, y0 = 0.199, and suppose the sequences {xn} and {yn} are generated by method (12.2). Table 12.1 compares the error estimates for Example 4.1 and shows that tighter estimates are obtained from the new (12.6) or (12.7) using ρ instead of the ρ0 used in [6].

Table 12.1. Comparison of error estimates for Example 4.1

n | new (12.7), using ρ | old (12.7), using ρ0
3 | 0.0498              | 0.0828
4 | 0.0031              | 0.0142
5 | 2.9778e-06          | 1.7305e-04
6 | 1.7108e-13          | 4.4047e-09
7 | 5.4309e-31          | 3.4761e-20
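The iteration itself is easy to run on Example 3's data, x0 = 0.2, y0 = 0.199. The sketch below assumes the derivative-free King–Werner-type form x_{n+1} = xn − [xn, yn; F]^{−1} F(xn), y_{n+1} = x_{n+1} − [xn, yn; F]^{−1} F(x_{n+1}); the exact form of method (12.2) is stated earlier in the chapter, so treat this as an illustration under that assumption rather than a definitive implementation:

```python
# Sketch of a derivative-free King-Werner-type iteration (assumed form of
# method (12.2)) applied to Example 3: F(x) = e^x - 1, x0 = 0.2, y0 = 0.199.
import math

def king_werner(F, x, y, iterations=8):
    for _ in range(iterations):
        if x == y or F(x) == 0.0:        # converged to machine precision
            break
        A = (F(x) - F(y)) / (x - y)      # divided difference [x, y; F]
        x_new = x - F(x) / A
        y = x_new - F(x_new) / A
        x = x_new
    return x

F = lambda t: math.exp(t) - 1.0
x_star = king_werner(F, 0.2, 0.199)
print(abs(x_star))   # the error |x_n - 0| decays with order 1 + sqrt(2)
```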

Hence the new results are more precise than the old ones in [6].

Example 4. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm, and let Ω = U(0, 1). Define the function F on Ω by

F(x)(s) = x(s) − 5 ∫₀¹ s t x^3(t) dt, (12.6)

and the divided difference of F by

[x, y; F] = ∫₀¹ F′(tx + (1 − t)y) dt. (12.7)


Then, we have

[F′(x)y](s) = y(s) − 15 ∫₀¹ s t x^2(t) y(t) dt for all y ∈ Ω. (12.8)

We have x*(s) = 0 for all s ∈ [0, 1], l0 = 3.75 and l = l1 = 7.5. Using Theorem 3.1, we can deduce that the radius of the convergence ball for method (12.2) is given by

ρ0 = ρ = 1 / ((1 + √2) l + 2 l0) ≈ 0.039052429. (12.9)

Example 5. Let B1 = B2 = C[0, 1] be equipped with the max norm and Ω = U(0, r) for some r > 1. Define F on Ω by

F(x)(s) = x(s) − y(s) − µ ∫₀¹ G(s, t) x^3(t) dt, x ∈ C[0, 1], s ∈ [0, 1],

where y ∈ C[0, 1] is given, µ is a real parameter, and the kernel G is the Green's function defined by

G(s, t) = (1 − s)t if t ≤ s, and G(s, t) = s(1 − t) if s ≤ t.

Then, the Fréchet derivative of F is defined by

(F′(x)(w))(s) = w(s) − 3µ ∫₀¹ G(s, t) x^2(t) w(t) dt, w ∈ C[0, 1], s ∈ [0, 1].

Let us choose x0(s) = y0(s) = y(s) = 1 and |µ| < 8/3. Then, we have that

‖I − A0‖ ≤ (3/8)|µ|, A0^{−1} ∈ L(B2, B1), ‖A0^{−1}‖ ≤ 8/(8 − 3|µ|), s0 = 0, t1 = |µ|/(8 − 3|µ|),

and

L0 = 3(1 + r)|µ| / (2(8 − 3|µ|)), L = 3r|µ| / (8 − 3|µ|).

Let us choose r = 3 and µ = 1/2. Then, we have that

t1 = 0.076923077, L0 ≈ 0.461538462, L = L1 ≈ 0.692307692, α ≈ 0.711345739,

L(t1 + s0) / (1 − L0 (t1 + s1 + s0)) ≈ 0.057441746 and 1 − 2L0 t1 / (1 − L0 s0) ≈ 0.928994083.

That is, condition (12.5) is satisfied and Theorem 2.2 applies.
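The verification above can be reproduced numerically; the sketch below also iterates the majorizing sequences (12.7) and checks the bound (12.8):

```python
# Numerical check of Lemma 6 / condition (12.5) with the Example 5 data
# (r = 3, mu = 1/2): L0 = 6/13, L = 9/13, s0 = 0, t1 = 1/13.
L0, L = 6.0 / 13.0, 9.0 / 13.0
s0, t1 = 0.0, 1.0 / 13.0

# alpha: the only root in (0, 1) of p(t) = L0 t^3 + L0 t^2 + 2 L t - 2 L  (12.4)
p = lambda t: L0 * t**3 + L0 * t**2 + 2 * L * t - 2 * L
lo, hi = 0.0, 1.0
for _ in range(100):                       # bisection: p(0) < 0 < p(1)
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
alpha = 0.5 * (lo + hi)

s1 = t1 + L * (t1 + s0) * t1               # (12.6)
left = L * (t1 + s0) / (1 - L0 * (t1 + s1 + s0))
right = 1 - 2 * L0 * t1 / (1 - L0 * s0)
print(left, alpha, right)                  # condition (12.5): left <= alpha <= right

# majorizing sequences (12.7); increasing and bounded above by t** = t1/(1-alpha)
t, s = [0.0, t1], [s0, s1]
for n in range(30):
    denom = 1.0 - L0 * (t[n + 1] + s[n + 1] + s0)
    t_next = t[n + 1] + L * ((t[n + 1] - t[n]) + (s[n] - t[n])) * (t[n + 1] - t[n]) / denom
    s_next = t_next + L * ((t_next - t[n + 1]) + (s[n + 1] - t[n + 1])) * (t_next - t[n + 1]) / denom
    t.append(t_next)
    s.append(s_next)
print(t[-1], t1 / (1 - alpha))             # t* stays well below the bound t**
```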

5.

Conclusion

We present a semi-local and local convergence analysis of some efficient King-Werner-like methods of order 1 + √2 free of derivatives in a Banach space setting. We use our idea of restricted domains, where the iterates lie, leading to smaller Lipschitz constants and, in turn, a more precise local as well as semi-local convergence analysis than in earlier studies. Numerical examples are presented to illustrate the theoretical results.

References

[1] Amat, S., Busquier, S., Negra, M., Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim. 25 (2004) 397–405.

[2] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics 15, Editors: C. K. Chui and L. Wuytack, Elsevier Publ. Co., New York, USA, 2007.

[3] Argyros, I. K., A semilocal convergence analysis for directional Newton methods, Math. Comput. 80 (2011) 327–343.

[4] Argyros, I. K., Hilout, S., Computational Methods in Nonlinear Analysis. Efficient Algorithms, Fixed Point Theory and Applications, World Scientific, 2013.

[5] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity 28 (2012) 364–387.

[6] Argyros, I. K., Ren, H., On the convergence of efficient King-Werner-type methods of order 1 + √2 free of derivatives, Applied Mathematics and Computation 217(2) (2010) 612–621.

[7] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, 1982.

[8] King, R. F., Tangent methods for nonlinear equations, Numer. Math. 18 (1972) 298–304.

[9] McDougall, T. J., Wotherspoon, S. J., A simple modification of Newton's method to achieve convergence of order 1 + √2, Appl. Math. Lett. 29 (2014) 20–25.

[10] Ren, H., Wu, Q., Bi, W., On convergence of a new secant-like method for solving nonlinear equations, Appl. Math. Comput. 217 (2010) 583–589.

[11] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ. 3 (1977) 129–142.

[12] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, 1984.

[13] Werner, W., Über ein Verfahren der Ordnung 1 + √2 zur Nullstellenbestimmung, Numer. Math. 32 (1979) 333–342.

[14] Werner, W., Some supplementary results on the 1 + √2 order method for the solution of nonlinear equations, Numer. Math. 38 (1982) 383–392.

Chapter 13

Multi-Point Family of High Order Methods

1.

Introduction

In this chapter we are concerned with the problem of approximating a solution x∗ of the equation

F(x) = 0, (13.1)

where F is a Fréchet-differentiable operator defined on a convex subset D of a Banach space X with values in a Banach space Y. Many problems in computational sciences and other disciplines can be brought into a form like (13.1) using mathematical modelling [2, 3, 4, 9, 15, 16]. The solutions of these equations can rarely be found in closed form, which is why most solution methods for them are iterative. The study of the convergence of iterative procedures usually falls into two types: semi-local and local convergence analysis. The semi-local convergence matter is, based on the information around an initial point, to give conditions ensuring the convergence of the iterative procedure; while the local one is, based on the information around a solution, to find estimates of the radii of convergence balls. In particular, the practice of numerical functional analysis for finding a solution x∗ of equation (13.1) is essentially connected to variants of Newton's method. This method converges quadratically to x∗ if the initial guess is close enough to the solution. Iterative methods of convergence order higher than two, such as Chebyshev-Halley-type methods [1, 3, 4, 6]–[17], require the evaluation of the second Fréchet derivative, which is very expensive in general. However, there are integral equations where the second Fréchet derivative is diagonal by blocks and inexpensive [8]–[11], and for quadratic equations the second Fréchet derivative is constant [9, 10, 14]. Moreover, in some applications involving stiff systems [2, 4], high order methods are useful. However, in general, the use of the second Fréchet derivative restricts the use of these methods, as their informational efficiency is less than or equal to unity. That is why


we study the local convergence of multi-point methods defined for each n = 0, 1, 2, · · · by

yn = xn − F′(xn)^{−1} F(xn),
zn = xn + θ(yn − xn), θ ∈ (0, 2),
Hn = H(xn, yn) = (1/θ) F′(xn)^{−1} [F′(zn) − F′(xn)],
Qn = Q(xn, yn) = −(1/2) Hn (I + (1/2) Hn)^{−1},
x_{n+1} = yn + Qn (yn − xn), (13.2)

where x0 is an initial point, I the identity operator and F′(x) denotes the Fréchet derivative of the operator F. There is a plethora of semi-local convergence results for these methods under the conditions (C) [1]–[18]:

(C1) F : D → Y is twice Fréchet-differentiable and F′(x0)^{−1} ∈ L(Y, X) for some x0 ∈ D such that ‖F′(x0)^{−1}‖ ≤ β;

(C2) ‖F′(x0)^{−1} F(x0)‖ ≤ η;

(C3) ‖F′′(x)‖ ≤ β1 for each x ∈ D;

(C4) ‖F′′(x) − F′′(y)‖ ≤ β2 ‖x − y‖^p for each x, y ∈ D and some p ∈ (0, 1].

In particular, Parida and Gupta [18] provided a semi-local convergence analysis of method (13.2), but only for θ ∈ (0, 1]. If p = 1, method (13.2) is shown to be of order two [9], and if p ∈ (0, 1), the order of the method is 2 + p [18]. Conditions (C3) and (C4) restrict the applicability of these methods. In this chapter we assume instead the conditions (A):

(A1) F : D → Y is Fréchet-differentiable and there exists x∗ ∈ D such that F(x∗) = 0 and F′(x∗)^{−1} ∈ L(Y, X);

(A2) ‖F′(x∗)^{−1}(F′(x) − F′(x∗))‖ ≤ L0 ‖x − x∗‖^p for each x ∈ D and some p ∈ (0, 1]; and

(A3) ‖F′(x∗)^{−1}(F′(x) − F′(y))‖ ≤ L ‖x − y‖^p for each x, y ∈ D and some p ∈ (0, 1].

The convergence ball of method (13.2) and error estimates on the distances ‖xn − x∗‖ are given in this chapter. The chapter is organized as follows: in Section 2 we present the local convergence of method (13.2); numerical examples are given in the concluding Section 3. In the rest of this chapter, U(w, q) and Ū(w, q) stand, respectively, for the open and closed balls in X with center w ∈ X and radius q > 0.

2.

Local Convergence

We present the local convergence of method (13.2) in this section. It is convenient for the local convergence analysis to introduce some parameters and functions. Define the parameters r0, rA, r1, r2 and r3 by

r0 = (1/L0)^{1/p}, (13.3)

rA = ((1 + p) / ((1 + p)L0 + L))^{1/p}, (13.4)

r1 = ((1 + p)(1 − |1 − θ|) / (L|θ| + (1 − |1 − θ|)(1 + p)L0))^{1/p}, (13.5)

r2 = (|θ| / (|θ|L0 + 2^{p−1}L))^{1/p} (13.6)

and

r3 = min{r1, r2}. (13.7)

Notice that rA < r0, r1 < r0, r2 < r0 and r3 < r0. Define the function f on [0, r3) by

f(t) = L t^p / ((1 + p)(1 − L0 t^p)) + (2^{p−1} L t^p / (θ − (L0 θ + 2^{p−1} L) t^p)) [1 + L t^p / ((1 + p)(1 − L0 t^p))]. (13.8)

Notice that the function f is continuous on the interval [0, r3) and f(t) → +∞ as t → r2^−. We also have that f(0) = 0 < 1. Hence, it follows from the intermediate value theorem that the equation f(t) = 1 has solutions in (0, r3). Denote by rs the smallest such solution. If

r ∈ [0, rs), (13.9)

then

f(r) < 1. (13.10)

Then, we can show the following local convergence result for method (13.2) under the (A) conditions.

Theorem 18. Suppose that the (A) conditions hold and that Ū(x∗, r) ⊆ D, where r is given by (13.9). Then, the sequence {xn} generated by method (13.2) for some x0 ∈ U(x∗, r) is well defined, remains in U(x∗, r) for each n = 0, 1, 2, · · · and converges to x∗. Moreover, the following estimates hold for each n = 0, 1, 2, · · ·:

‖x_{n+1} − x∗‖ ≤ f(‖xn − x∗‖) ‖xn − x∗‖ < ‖xn − x∗‖. (13.11)

Proof. We shall use induction to show that estimate (13.11) holds and that yn, zn, x_{n+1} ∈ U(x∗, r) for each n = 0, 1, 2, · · · . Using (A2) and the hypothesis x0 ∈ U(x∗, r), we have that

‖F′(x∗)^{−1}(F′(x0) − F′(x∗))‖ ≤ L0 ‖x0 − x∗‖^p < L0 r^p < 1, (13.12)


by the choice of r. It follows from (13.12) and the Banach lemma on invertible operators [2, 5, 15] that F′(x0)^{−1} ∈ L(Y, X) and

‖F′(x0)^{−1} F′(x∗)‖ ≤ 1 / (1 − L0 ‖x0 − x∗‖^p) < 1 / (1 − L0 r^p). (13.13)

Using (A2), (A3), F(x∗) = 0, (13.12) and the choice of rA, we get from the first sub-step of method (13.2) that

y0 − x∗ = x0 − x∗ − F′(x0)^{−1} F(x0)
= −[F′(x0)^{−1} F′(x∗)] [F′(x∗)^{−1} ∫₀¹ (F′(x∗ + τ(x0 − x∗)) − F′(x0))(x0 − x∗) dτ], (13.14)

so that

‖y0 − x∗‖ ≤ ‖F′(x0)^{−1} F′(x∗)‖ ‖F′(x∗)^{−1} ∫₀¹ [F′(x∗ + τ(x0 − x∗)) − F′(x0)] dτ‖ ‖x0 − x∗‖
≤ L ‖x0 − x∗‖^{1+p} / ((1 + p)(1 − L0 ‖x0 − x∗‖^p))
≤ (L r^p / ((1 + p)(1 − L0 r^p))) ‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < r, (13.15)

which shows that y0 ∈ U(x∗, r). In view of the second sub-step of method (13.2) and (13.15), we get that

z0 − x∗ = x0 − x∗ + θ((y0 − x∗) + (x∗ − x0)) = (1 − θ)(x0 − x∗) + θ(y0 − x∗), (13.16)

so

‖z0 − x∗‖ ≤ |1 − θ| ‖x0 − x∗‖ + θ ‖y0 − x∗‖
≤ |1 − θ| ‖x0 − x∗‖ + θ L ‖x0 − x∗‖^{1+p} / ((1 + p)(1 − L0 ‖x0 − x∗‖^p))
≤ [|1 − θ| + L θ r^p / ((1 + p)(1 − L0 r^p))] ‖x0 − x∗‖
≤ ‖x0 − x∗‖ < r, (13.17)

which implies that z0 ∈ U(x∗, r). Hence, H0 is well defined. Next, we shall show that


(I + (1/2)H0)^{−1} exists. Using (A3), (13.13) and the choice of r2, we get in turn that

‖(1/2)H0‖ ≤ (1/(2θ)) ‖F′(x0)^{−1} F′(x∗)‖ ‖F′(x∗)^{−1}(F′(z0) − F′(x0))‖
≤ L ‖z0 − x0‖^p / (2θ(1 − L0 ‖x0 − x∗‖^p))
≤ L (‖z0 − x∗‖ + ‖x0 − x∗‖)^p / (2θ(1 − L0 ‖x0 − x∗‖^p))
≤ 2^p L r^p / (2θ(1 − L0 r^p)) = 2^{p−1} L r^p / (θ(1 − L0 r^p)) < 1. (13.18)

It follows from (13.18) and the Banach lemma that (I + (1/2)H0)^{−1} exists and

‖(I + (1/2)H0)^{−1}‖ ≤ 1 / (1 − L ‖x0 − z0‖^p / (2θ(1 − L0 ‖x0 − x∗‖^p)))
≤ 1 / (1 − 2^{p−1} L r^p / (θ(1 − L0 r^p))). (13.19)

Hence, x1 is well defined. We shall show that (13.11) holds for n = 0 and that x1 ∈ U(x∗, r). Using the last sub-step of method (13.2) for n = 0, (13.13), (13.15), (13.17), (13.18), (13.19) and (13.10), we obtain in turn that

‖x1 − x∗‖ ≤ ‖y0 − x∗‖ + ‖Q0‖ (‖y0 − x∗‖ + ‖x0 − x∗‖)
≤ [L ‖x0 − x∗‖^p / ((1 + p)(1 − L0 ‖x0 − x∗‖^p))
+ (L ‖z0 − x0‖^p / (2θ(1 − L0 ‖x0 − x∗‖^p) − L ‖z0 − x0‖^p)) (1 + L ‖x0 − x∗‖^p / ((1 + p)(1 − L0 ‖x0 − x∗‖^p)))] ‖x0 − x∗‖
≤ f(‖x0 − x∗‖) ‖x0 − x∗‖ ≤ f(r) ‖x0 − x∗‖ < ‖x0 − x∗‖ < r, (13.20)

where we also used the estimate

‖Q0‖ ≤ (1/2) ‖H0‖ ‖(I + (1/2)H0)^{−1}‖
≤ (L ‖z0 − x0‖^p / (2θ(1 − L0 ‖x0 − x∗‖^p))) · (1 / (1 − L ‖z0 − x0‖^p / (2θ(1 − L0 ‖x0 − x∗‖^p))))
= L ‖z0 − x0‖^p / (2θ(1 − L0 ‖x0 − x∗‖^p) − L ‖z0 − x0‖^p)
≤ 2^p L r^p / (2θ − (2θ L0 + 2^p L) r^p)
= 2^{p−1} L r^p / (θ − (θ L0 + 2^{p−1} L) r^p).

It then follows from (13.20) that (13.11) holds for n = 0 and that x1 ∈ U(x∗, r). To complete the induction, simply replace x0, y0, x1, H0, Q0 by xk, yk, x_{k+1}, Hk, Qk in all the preceding estimates to arrive, in particular, at ‖x_{k+1} − x∗‖ < ‖xk − x∗‖ < r, which implies that x_{k+1} ∈ U(x∗, r) and that lim_{k→∞} xk = x∗.


Remark 22. (a) Condition (A2) can be dropped, since it follows from (A3). Notice, however, that

L0 ≤ L (13.21)

holds in general, and L/L0 can be arbitrarily large [2]–[6].

(b) It is worth noticing that it follows from (13.4) and (13.9) that r satisfies

r < rA. (13.22)

The convergence ball of radius rA was given by us in [2, 3, 5] for Newton's method under conditions (A1)–(A3). Estimate (13.22) shows that the convergence ball of method (13.2) is smaller than the convergence ball of the quadratically convergent Newton's method.

(c) The local results can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMREM) and the generalized conjugate method (GCM), for combined Newton/finite projection methods, and in connection with the mesh independence principle to develop the cheapest and most efficient mesh refinement strategy [2]–[5], [14, 15].

(d) The results can also be used to solve equations where the operator F′ satisfies the autonomous differential equation [2]–[5], [14, 15]

F′(x) = T(F(x)),

where T is a known continuous operator. Since F′(x∗) = T(F(x∗)) = T(0), we can apply the results without actually knowing the solution x∗. As an example, let F(x) = e^x − 1. Then, we can choose T(x) = x + 1 and x∗ = 0.

3.

Numerical Examples

We present numerical examples where we compute the radii of the convergence balls.

Example 6. Let X = Y = R. Define the function F on D = [1, 3] by

F(x) = (2/3) x^{3/2} − x. (13.23)

Then, x∗ = 9/4 = 2.25, F′(x∗)^{−1} = 2, L0 = 1 < L = 2 and p = 0.5. Choose θ = 1. Then, we get that r ∈ [0, 0.1667) and rA = 0.5.

Example 7. Let X = Y = R^3, D = Ū(0, 1). Define F on D for v = (x, y, z) by

F(v) = (e^x − 1, ((e − 1)/2) y^2 + y, z). (13.24)

Then, the Fréchet derivative is given by the diagonal matrix

F′(v) = diag{e^x, (e − 1)y + 1, 1}.


Notice that x∗ = (0, 0, 0), F′(x∗) = F′(x∗)^{−1} = diag{1, 1, 1}, L0 = e − 1 < L = e and p = 1. Choose θ = 1; then, we get that r ∈ [0, 0.1175) and rA = 0.3249.

Example 8. Let X = Y = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm, and let D = Ū(0, 1). Define the function F on D by

F(ϕ)(x) = ϕ(x) − 5 ∫₀¹ x τ ϕ(τ)^3 dτ. (13.25)

We have that

(F′(ϕ)ξ)(x) = ξ(x) − 15 ∫₀¹ x τ ϕ(τ)^2 ξ(τ) dτ for each ξ ∈ D.

Then, we get that x∗ = 0, L0 = 7.5, L = 15 and p = 1.
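The radius rA of (13.4) can be evaluated directly for the constants above (Example 8's rA is not stated in the text; it follows from the same formula):

```python
# The radius rA from (13.4): rA = ((1 + p) / ((1 + p) L0 + L))^(1/p),
# evaluated for the constants of Examples 7 and 8.
import math

def r_A(L0, L, p):
    return ((1 + p) / ((1 + p) * L0 + L)) ** (1.0 / p)

rA7 = r_A(math.e - 1, math.e, 1.0)   # Example 7: rA = 2/(3e - 2), about 0.3249
rA8 = r_A(7.5, 15.0, 1.0)            # Example 8: rA = 2/30
print(rA7, rA8)
```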

4.

Conclusion

We present a local convergence analysis for a multi-point family of high order methods to approximate a solution of a nonlinear equation in a Banach space setting. The convergence ball and error estimates are given for these methods under Hölder continuity conditions. Numerical examples are also provided in this chapter.

References

[1] Amat, S., Busquier, S., Gutiérrez, J. M., Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. 157 (2003) 197–205.

[2] Argyros, I. K., A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces, J. Math. Anal. Appl. 298 (2004) 373–397.

[3] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: C. K. Chui and L. Wuytack, Elsevier Publ. Co., New York, U.S.A., 2007.

[4] Argyros, I. K., Hilout, S., Numerical Methods in Nonlinear Analysis, World Scientific Publ. Comp., New Jersey, 2013.

[5] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity 28 (2012) 364–387.

[6] Candela, V., Marquina, A., Recurrence relations for rational cubic methods I: The Halley method, Computing 44 (1990) 169–184.

[7] Candela, V., Marquina, A., Recurrence relations for rational cubic methods II: The Chebyshev method, Computing 45 (1990) 355–367.

[8] Gutiérrez, J. M., Hernández, M. A., Recurrence relations for the super-Halley method, Computers Math. Applic. 36 (1998) 1–8.

[9] Gutiérrez, J. M., Hernández, M. A., Third-order iterative methods for operators with bounded second derivative, Journal of Computational and Applied Mathematics 82 (1997) 171–183.

[10] Hernández, M. A., Salanova, M. A., Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method, Journal of Computational and Applied Mathematics 126 (2000) 131–143.

[11] Hernández, M. A., Chebyshev's approximation algorithms and applications, Computers Math. Applic. 41 (2001) 433–455.

[12] Hernández, M. A., Reduced recurrence relations for the Chebyshev method, Journal of Optimization Theory and Applications 98 (1998) 385–397.

[13] Ezquerro, J. A., Hernández, M. A., Avoiding the computation of the second Fréchet derivative in the convex acceleration of Newton's method, Journal of Computational and Applied Mathematics 96 (1998) 1–12.

[14] Ezquerro, J. A., Hernández, M. A., On Halley-type iterations with free second derivative, Journal of Computational and Applied Mathematics 170 (2004) 455–459.

[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, 1982.

[16] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

[17] Parida, P. K., Gupta, D. K., Recurrence relations for semi-local convergence of a Newton-like method in Banach spaces, J. Math. Anal. Applic. 345 (2008) 350–361.

[18] Parida, P. K., Gupta, D. K., Semilocal convergence of a family of third order methods in Banach spaces under Hölder continuous second derivative, Nonlinear Analysis 69 (2008) 4163–4173.

Chapter 14

Ball Convergence Theorems for Some Third-Order Iterative Methods

1.

Introduction

In this chapter we are concerned with the problem of approximating a locally unique solution x∗ of the equation

F(x) = 0, (14.1)

where F : D ⊆ S → S is a nonlinear function, D is a convex subset of S, and S is R or C. Newton-like methods are used for finding solutions of (14.1); these methods are usually studied through two types of analysis: semi-local and local convergence. The semi-local convergence matter is, based on the information around an initial point, to give conditions ensuring the convergence of the iterative procedure; while the local one is, based on the information around a solution, to find estimates of the radii of convergence balls [1]-[27]. Third-order methods such as Euler's, Halley's, super-Halley's and Chebyshev's [1]-[27] require the evaluation of the second derivative F′′ at each step, which in general is very expensive. To overcome this difficulty, many third-order methods have been introduced. In particular, Kou, Li and Wang in [16] introduced iterative methods defined for each n = 0, 1, 2, · · · by

yn = xn − F(xn) / (F′(xn) + λn F(xn)),
x_{n+1} = xn − 2F(xn) / (F′(yn) + F′(xn) + µn F^2(xn)), (14.2)

and

zn = xn − F(xn) / (2(F′(xn) + λn F(xn))),
x_{n+1} = xn − F(xn) / (F′(zn) + µn F^2(xn)), (14.3)

where x0 is an initial point and {λn}, {µn} are given bounded sequences in S. The third order of convergence was shown in [16] under the assumptions that there exists a single


root x∗ ∈ D; F is three times differentiable;

sign(λn F(xn)) = sign(F′(xn)), sign(µn) = sign(F′(xn) + F′(yn))

(for method (14.2)) and

sign(λn F(xn)) = sign(F′(xn)), sign(µn) = sign(F′((xn + zn)/2))

(for method (14.3)) for each n = 0, 1, 2, · · · , where

sign(t) = 1 if t ≥ 0, and sign(t) = −1 if t < 0

is the sign function. Methods (14.2) and (14.3) were introduced as alternatives to other iterative methods that do not converge to x∗ if the derivative of the function is either zero or very small in the vicinity of the solution (see e.g. [14]-[16], [19, 26, 27]). Other single and multi-point methods can be found in [2, 3, 19, 24] and the references therein.

The convergence of the preceding methods has been shown under hypotheses involving derivatives up to the third (or even higher). These hypotheses restrict the applicability of the methods. As a motivational example, define the function f on D = [−1/2, 5/2] by

f(x) = x^3 ln x^2 + x^5 − x^4 if x ≠ 0, and f(0) = 0.

Choose x∗ = 1. We have that

f′(x) = 3x^2 ln x^2 + 5x^4 − 4x^3 + 2x^2, f′(1) = 3,

f′′(x) = 6x ln x^2 + 20x^3 − 12x^2 + 10x,

f′′′(x) = 6 ln x^2 + 60x^2 − 24x + 22.

Then, obviously, the function f′′′ is unbounded on D. In the present chapter, we only use hypotheses on the first Fréchet derivative. This way we expand the applicability of methods (14.2) and (14.3). The rest of the chapter is organized as follows: Section 2 and Section 3 contain the local convergence analysis of methods (14.2) and (14.3), respectively. The numerical examples are presented in the concluding Section 4.

2.

Local Convergence for Method (14.2)

We present the local convergence analysis of method (14.2) in this section. Denote by U(v, ρ) and Ū(v, ρ) the open and closed balls in S, respectively, with center v ∈ S and radius ρ > 0.


It is convenient for the local convergence analysis that follows to define some functions and parameters. Let L0 > 0, L > 0, M > 0, α > 0, λ ≥ 0 and µ ≥ 0 be given parameters. Let

r0 = 1 / (L0 + λM) ≤ 1 / L0.

Define the following functions on the interval [0, 1/L0):

g1(t) = (1 / (2(1 − L0 t))) [L + 2λM^2 / (1 − (L0 + λM)t)] t,
h1(t) = g1(t) − 1,
g2(t) = (1/2) [L0 (1 + g1(t)) + µ α M^2 t],
h2(t) = g2(t) t − 1,
g3(t) = (1 / (2(1 − L0 t))) [L + 2M g2(t) / (1 − g2(t) t)] t

and

h3(t) = g3(t) − 1.

We have that h1(0) = −1 < 0 and h1(t) → +∞ as t → r0^−. It follows from the intermediate value theorem that the function h1 has zeros in the interval (0, r0); denote by r1 the smallest such zero. Similarly, h2(0) = −1 < 0 and h2(t) → +∞ as t → r0^−; denote by r2 the smallest zero of the function h2 in the interval (0, r0). We also have that h3(0) = −1 < 0 and h3(t) → +∞ as t → r0^−; denote by r3 the smallest zero of the function h3 in the interval (0, r0). Set

r = min{r1, r2, r3}. (14.4)

Then, we have that

0 ≤ g1(t) < 1, (14.5)

0 ≤ g2(t) t < 1 (14.6)

and

0 ≤ g3(t) < 1 for each t ∈ [0, r). (14.7)
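The radius r of (14.4) can be computed by bisection once the constants are known. The constants below are illustrative placeholders, not values taken from the chapter; for h3 we search below r2 so that g2(t)t < 1 throughout the bracket:

```python
# Numerical sketch: the radius r of (14.4) as the smallest positive zero of
# h1, h2, h3, found by bisection.  L0, L, M, alpha, lam, mu are placeholders.
L0, L, M, alpha, lam, mu = 0.7, 1.0, 1.1, 1.2, 0.1, 0.1

g1 = lambda t: (1.0 / (2 * (1 - L0 * t))) * (L + 2 * lam * M**2 / (1 - (L0 + lam * M) * t)) * t
g2 = lambda t: 0.5 * (L0 * (1 + g1(t)) + mu * alpha * M**2 * t)
g3 = lambda t: (1.0 / (2 * (1 - L0 * t))) * (L + 2 * M * g2(t) / (1 - g2(t) * t)) * t

def smallest_zero(h, hi):
    """Bisection on (0, hi), assuming h starts below 0 and increases through 0."""
    lo = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

r0 = 1.0 / (L0 + lam * M)
r1 = smallest_zero(lambda t: g1(t) - 1, r0 * (1 - 1e-9))
r2 = smallest_zero(lambda t: g2(t) * t - 1, r0 * (1 - 1e-9))
r3 = smallest_zero(lambda t: g3(t) - 1, r2 * (1 - 1e-9))   # keep g2(t)t < 1
r = min(r1, r2, r3)
print(r1, r2, r3, r)
```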

Next, using the above notation, we can show the local convergence result for method (14.2).

Theorem 19. Let F : D ⊆ S → S be a differentiable function. Suppose that there exist x∗ ∈ D, parameters L0 > 0, L > 0, M > 0, α > 0, {λn}, {µn} ⊂ S, λ ≥ 0 and µ ≥ 0 such that for each x, y ∈ D the following hold:

F(x∗) = 0, F′(x∗) ≠ 0, (14.8)

|F′(x∗)^{−1}(F′(x) − F′(x∗))| ≤ L0 |x − x∗|, (14.9)

|F′(x∗)^{−1}(F′(x) − F′(y))| ≤ L |x − y|, (14.10)

|F′(x∗)| ≤ α, (14.11)

|F′(x∗)^{−1} F′(x)| ≤ M, (14.12)

|λn| < λ, (14.13)

|µn| ≤ µ (14.14)

and

Ū(x∗, r) ⊆ D, (14.15)

where r is given by (14.4). Then, the sequence {xn} generated for x0 ∈ U(x∗, r) by method (14.2) is well defined, remains in U(x∗, r) for each n = 0, 1, 2, · · · and converges to x∗. Moreover, the following estimates hold for each n = 0, 1, 2, · · · :

|yn − x∗| ≤ g1(|xn − x∗|) |xn − x∗| < |xn − x∗| < r, (14.16)

|(F′(xn) + F′(yn) + µn F^2(xn))^{−1} F′(x∗)| ≤ 1 / (2(1 − g2(|xn − x∗|) |xn − x∗|)) (14.17)

and

|x_{n+1} − x∗| ≤ g3(|xn − x∗|) |xn − x∗| < |xn − x∗|, (14.18)

where the "g" functions are defined above Theorem 19. Furthermore, suppose that there exists T ∈ [r, 2/L0) such that Ū(x∗, T) ⊂ D; then the limit point x∗ is the only solution of the equation F(x) = 0 in Ū(x∗, T).

Proof. By the hypothesis x0 ∈ U(x∗, r), the definition of r2 and (14.9), we get that

|F′(x∗)^{−1}(F′(x0) − F′(x∗))| ≤ L0 |x0 − x∗| < L0 r2 < 1. (14.19)

It follows from (14.19) and the Banach lemma on invertible functions [2, 3, 21, 24] that F′(x0) is invertible and

|F′(x0)^{−1} F′(x∗)| ≤ 1 / (1 − L0 |x0 − x∗|) < 1 / (1 − L0 r). (14.20)

We also have, by (14.8), (14.9), (14.12), (14.13) and the definition of r0, that

|(F′(x∗) + λ0 F(x∗))^{−1}(F′(x0) + λ0 F(x0) − (F′(x∗) + λ0 F(x∗)))|
≤ |F′(x∗)^{−1}(F′(x0) − F′(x∗))| + |λ0| |F′(x∗)^{−1}(F(x0) − F(x∗))|
≤ L0 |x0 − x∗| + λM |x0 − x∗|
= (L0 + λM)|x0 − x∗| < 1, (14.21)

where we also used the estimate

|F′(x∗)^{−1} F(x0)| = |F′(x∗)^{−1}(F(x0) − F(x∗))| = |∫₀¹ F′(x∗)^{−1} F′(x∗ + θ(x0 − x∗))(x0 − x∗) dθ| ≤ M |x0 − x∗|, (14.22)


since x∗ + θ(x0 − x∗ ) − x∗ | = θ|x0 − x∗ | < r for each θ ∈ [0, 1]. It follows from (14.21) that F 0 (x0 ) + λ0 F(x0 ) is invertible and |(F 0 (x0 ) + λ0 F(x0 ))−1 F 0 (x∗ )| ≤

1 . 1 − (L0 + λM)|x0 − x∗ |

(14.23)

Hence, y0 is well defined by the first sub-step of method (14.2) for n = 0. We also have that

y0 − x* = x0 − x* − F(x0)/F'(x0) + F(x0)/F'(x0) − F(x0)/(F'(x0) + λ0 F(x0)).   (14.24)

Using (14.4), (14.5), (14.10), (14.12), (14.20), (14.23) and (14.24), we get in turn that

|y0 − x*| ≤ |F'(x0)^{-1}F'(x*)| |∫₀¹ F'(x*)^{-1}[F'(x* + θ(x0 − x*)) − F'(x0)] dθ (x0 − x*)|
+ |λ0| |F'(x0)^{-1}F'(x*)| |(F'(x0) + λ0 F(x0))^{-1}F'(x*)| |F'(x*)^{-1}F(x0)|
≤ L|x0 − x*|² / (2(1 − L0|x0 − x*|)) + λM²|x0 − x*|² / ((1 − (L0 + λM)|x0 − x*|)(1 − L0|x0 − x*|))
= g1(|x0 − x*|)|x0 − x*| < |x0 − x*| < r,

which shows (14.16) for n = 0. Similarly, we get from (14.4), (14.5), (14.8), (14.9), (14.11), (14.12) and (14.13) that

|(F'(x*) + F'(x*) + µ0 F²(x*))^{-1}[F'(y0) + F'(x0) + µ0 F²(x0) − (F'(x*) + F'(x*) + µ0 F²(x*))]|
≤ (1/2)[|F'(x*)^{-1}(F'(y0) − F'(x*))| + |F'(x*)^{-1}(F'(x0) − F'(x*))| + µ|F'(x*)^{-1}(F²(x0) − F²(x*))|]
≤ (1/2)[L0|y0 − x*| + L0|x0 − x*| + µα|F'(x*)^{-1}(F(x0) − F(x*))|²]
≤ (1/2)[L0(1 + g1(|x0 − x*|)) + µαM²|x0 − x*|]|x0 − x*|
= g2(|x0 − x*|)|x0 − x*| < g2(r) r < 1.   (14.25)

It follows from (14.25) that F'(y0) + F'(x0) + µ0 F²(x0) is invertible and

|(F'(y0) + F'(x0) + µ0 F²(x0))^{-1} F'(x*)| ≤ 1 / (2(1 − g2(|x0 − x*|)|x0 − x*|)),

which shows (14.17) for n = 0. Hence, x1 is well defined by the second sub-step of method (14.2) for n = 0. Then, we have from the approximation

x1 − x* = x0 − x* − F(x0)/F'(x0) + F(x0)/F'(x0) − 2F(x0)/(F'(y0) + F'(x0) + µ0 F²(x0)),   (14.26)

and from (14.4), (14.5), (14.8), (14.9), (14.11), (14.12), (14.14), (14.16), (14.17) and (14.26), that

|x1 − x*| ≤ |x0 − x* − F(x0)/F'(x0)|
+ |F'(x0)^{-1}F'(x*)| |F'(x*)^{-1}(F'(y0) + F'(x0) + µ0 F²(x0) − 2F'(x0))| × |(F'(y0) + F'(x0) + µ0 F²(x0))^{-1} F'(x*)| |F'(x*)^{-1}F(x0)|
≤ L|x0 − x*|² / (2(1 − L0|x0 − x*|)) + (L0(|y0 − x*| + |x0 − x*|) + µαM²|x0 − x*|²) M|x0 − x*| / (2(1 − L0|x0 − x*|)(1 − g2(|x0 − x*|)|x0 − x*|))
≤ [L / (2(1 − L0|x0 − x*|)) + (L0(1 + g1(|x0 − x*|)) + µαM²|x0 − x*|) M / (2(1 − L0|x0 − x*|)(1 − g2(|x0 − x*|)|x0 − x*|))] |x0 − x*|²
= g3(|x0 − x*|)|x0 − x*| < |x0 − x*| < r,

which shows (14.18) for n = 0. By simply replacing x0, y0, x1 by xk, yk, xk+1 in the preceding estimates, we arrive at estimates (14.16)-(14.18). Using the estimate |xk+1 − x*| < |xk − x*| < r, we deduce that xk+1 ∈ U(x*, r) and limk→∞ xk = x*. To show the uniqueness part, let Q = ∫₀¹ F'(y* + θ(x* − y*)) dθ for some y* ∈ Ū(x*, T) with F(y*) = 0. Using (14.9), we get that

|F'(x*)^{-1}(Q − F'(x*))| ≤ ∫₀¹ L0|y* + θ(x* − y*) − x*| dθ = ∫₀¹ L0(1 − θ)|x* − y*| dθ ≤ (L0 T)/2 < 1.   (14.27)

It follows from (14.27) and the Banach lemma on invertible functions that Q is invertible. Finally, from the identity 0 = F(x*) − F(y*) = Q(x* − y*), we deduce that x* = y*.

3. Local Convergence of Method (14.3)

We present the local convergence analysis of method (14.3) along the lines of Section 2 for method (14.2). Let L0 > 0, L > 0, M ∈ (0, 2), α > 0, λ ≥ 0 and µ ≥ 0 be given parameters with L0M < 2. Define functions on the interval [0, 1/L0) by

G1(t) = (1 / (2(1 − L0 t))) [L t + ((L0 + 2λM)t + 1)M / (1 − (L0 + λM)t)],
H1(t) = G1(t) − 1,
G2(t) = L0 G1(t) + µαM² t,
H2(t) = G2(t) − 1,
G3(t) = (1 / (2(1 − L0 t))) [L + 2M(L0 + G2(t)) / (1 − G2(t) t)] t,

and H3(t) = G3(t) − 1. We have

H1(0) = M/2 − 1 < 0

by the choice of M and H1(t) → +∞ as t → r0⁻. Hence, function H1 has zeros in the interval (0, r0). Denote by R1 the smallest such zero. Moreover, H2(0) = L0M/2 − 1 < 0 and H2(t) → +∞ as t → r0⁻. Hence, function H2 has zeros in the interval (0, r0). Denote by R2 the smallest such zero. Furthermore, H3(0) = −1 < 0 and H3(t) → +∞ as t → r0⁻. Hence, function H3 has zeros in the interval (0, r0). Denote by R3 the smallest such zero. Set

R = min{R1, R2, R3}.   (14.28)

Then, we have that, for each t ∈ [0, R),

0 ≤ G1(t) < 1,   (14.29)

0 ≤ G2(t) < 1   (14.30)

and

0 ≤ G3(t) < 1.   (14.31)
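Since the radii in (14.28) are defined as the smallest positive zeros of the scalar functions H1, H2, H3, they are easy to compute numerically. The sketch below is illustrative only: the parameter values are those of Example 11 in Section 4 (f(x) = sin x, where L0 = L = M = α = λ = µ = 1), and `smallest_root` is a hypothetical helper, not part of the chapter.

```python
import math

# Illustrative parameters (those of Example 11: f(x) = sin x).
L0 = L = M = alpha = lam = mu = 1.0

def G1(t):
    return (L * t + ((L0 + 2 * lam * M) * t + 1) * M / (1 - (L0 + lam * M) * t)) / (2 * (1 - L0 * t))

def G2(t):
    return L0 * G1(t) + mu * alpha * M ** 2 * t

def G3(t):
    return (L + 2 * M * (L0 + G2(t)) / (1 - G2(t) * t)) * t / (2 * (1 - L0 * t))

def smallest_root(H, t_max, steps=20000):
    """Scan for the first sign change of H on (0, t_max), then refine by bisection."""
    a, fa = 1e-12, H(1e-12)
    for k in range(1, steps + 1):
        b = k * t_max / steps
        try:
            fb = H(b)
        except ZeroDivisionError:
            break
        if fa < 0 <= fb:            # first sign change brackets the smallest zero
            for _ in range(100):
                m = 0.5 * (a + b)
                if H(m) < 0:
                    a = m
                else:
                    b = m
            return 0.5 * (a + b)
        a, fa = b, fb
    raise ValueError("no sign change found")

# G1 has a pole at t = 1/(L0 + lam*M); scan strictly below it.
t_max = 1 / (L0 + lam * M) - 1e-9
R1 = smallest_root(lambda t: G1(t) - 1, t_max)
R2 = smallest_root(lambda t: G2(t) - 1, t_max)
R3 = smallest_root(lambda t: G3(t) - 1, t_max)
R = min(R1, R2, R3)
print(R1, R2, R3, R)
```

For these parameters the computed values are R1 ≈ 0.1069 and R2 ≈ 0.0910, which agree with the r1 and r2 entries reported in Table 14.3 below.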

Next, we present the local convergence analysis of method (14.3).

Theorem 20. Let F : D ⊆ S → S be a differentiable function. Suppose that there exist x* ∈ D, L0 > 0, L > 0, M > 0, α > 0, λ ≥ 0, µ ≥ 0 and sequences {λn}, {µn} in S such that, for each x, y ∈ D,

F(x*) = 0, F'(x*) ≠ 0,

M < 2 max{1, 1/L0},   (14.32)

|F'(x*)^{-1}(F'(x) − F'(x*))| ≤ L0|x − x*|,
|F'(x*)^{-1}(F'(x) − F'(y))| ≤ L|x − y|,
|F'(x*)| ≤ α,
|F'(x*)^{-1}F'(x)| ≤ M,
|λn| ≤ λ, |µn| ≤ µ,

and

Ū(x*, R) ⊆ D,

where R is defined by (14.28). Then, sequence {xn} generated for x0 ∈ U(x*, R) by method (14.3) is well defined, remains in U(x*, R) for each n = 0, 1, 2, ... and converges to x*. Moreover, the following estimates hold for each n = 0, 1, 2, ...:

|zn − x*| ≤ G1(|xn − x*|)|xn − x*| < |xn − x*| < R,   (14.33)

|(F'(zn) + µn F²(xn))^{-1} F'(x*)| ≤ 1 / (1 − G2(|xn − x*|)|xn − x*|)   (14.34)

and

|xn+1 − x*| ≤ G3(|xn − x*|)|xn − x*| < |xn − x*|,   (14.35)

where the "G" functions are defined above Theorem 20. Furthermore, if there exists T ∈ [R, 2/L0) such that Ū(x*, T) ⊂ D, then the limit point x* is the only solution of equation F(x) = 0 in Ū(x*, T).

Proof. As in Theorem 19, we show (14.19)-(14.23). Hence, z0 is well defined by the first sub-step of method (14.3). Then, by (14.8), (14.9), (14.13) and (14.21), we get that

|F'(x*)^{-1}(F'(x0) + 2λ0 F(x0))|
≤ |F'(x*)^{-1}[(F'(x0) − F'(x*)) + 2λ0(F(x0) − F(x*)) + F'(x*)]|
≤ |F'(x*)^{-1}(F'(x0) − F'(x*))| + 2λ|F'(x*)^{-1}(F(x0) − F(x*))| + 1
≤ L0|x0 − x*| + 2λM|x0 − x*| + 1.   (14.36)

Then, using method (14.3) for n = 0, (14.16), (14.20), (14.23), (14.28), (14.29) and (14.36), we get in turn that

z0 − x* = x0 − x* − F(x0)/F'(x0) + F(x0)/F'(x0) − F(x0)/(2(F'(x0) + λ0 F(x0))),   (14.37)

so

|z0 − x*| ≤ L|x0 − x*|² / (2(1 − L0|x0 − x*|))
+ (1/2)|F'(x0)^{-1}F'(x*)| |F'(x*)^{-1}(F'(x0) + 2λ0 F(x0))| × |(F'(x0) + λ0 F(x0))^{-1}F'(x*)| |F'(x*)^{-1}F(x0)|
≤ L|x0 − x*|² / (2(1 − L0|x0 − x*|)) + (1 + (L0 + 2λM)|x0 − x*|) M|x0 − x*| / (2(1 − L0|x0 − x*|)(1 − (L0 + λM)|x0 − x*|))
= G1(|x0 − x*|)|x0 − x*| < |x0 − x*| < R,

which shows (14.33) for n = 0. Next, we show that F'(z0) + µ0 F²(x0) is invertible. Using (14.8), (14.9), (14.11), (14.13), (14.21) and (14.33), we get that

|(F'(x*) + µ0 F²(x*))^{-1}[(F'(z0) + µ0 F²(x0)) − (F'(x*) + µ0 F²(x*))]|
≤ |F'(x*)^{-1}(F'(z0) − F'(x*))| + |µ0| |F'(x*)^{-1}(F²(x0) − F²(x*))|
≤ L0|z0 − x*| + µ|F'(x*)| |F'(x*)^{-1}(F(x0) − F(x*))|²
≤ L0 G1(|x0 − x*|)|x0 − x*| + µαM²|x0 − x*|²
≤ G2(|x0 − x*|)|x0 − x*| < G2(R)R < 1.   (14.38)

It follows from (14.38) that F'(z0) + µ0 F²(x0) is invertible and

|(F'(z0) + µ0 F²(x0))^{-1} F'(x*)| ≤ 1 / (1 − G2(|x0 − x*|)|x0 − x*|).   (14.39)

Then, we also have the estimate

|F'(x*)^{-1}(F'(z0) + µ0 F²(x0) − F'(x0))|
≤ |F'(x*)^{-1}(F'(z0) − F'(x*))| + |F'(x*)^{-1}(F'(x0) − F'(x*))| + |µ0| |F'(x*)| |F'(x*)^{-1}(F(x0) − F(x*))|²
≤ L0(|z0 − x*| + |x0 − x*|) + αµM²|x0 − x*|²
≤ L0(1 + G1(|x0 − x*|))|x0 − x*| + αµM²|x0 − x*|²
= L0|x0 − x*| + G2(|x0 − x*|)|x0 − x*|
= [L0 + G2(|x0 − x*|)]|x0 − x*|.   (14.40)

Then, using the second sub-step of method (14.3) for n = 0, (14.20), (14.23), (14.28), (14.31), (14.33), (14.39), (14.40) and

x1 − x* = x0 − x* − F(x0)/F'(x0) + F(x0)/F'(x0) − F(x0)/(F'(z0) + µ0 F²(x0)),

we get that

|x1 − x*| ≤ L|x0 − x*|² / (2(1 − L0|x0 − x*|))
+ |F'(x0)^{-1}F'(x*)| |F'(x*)^{-1}[F'(z0) + µ0 F²(x0) − F'(x0)]| × |(F'(z0) + µ0 F²(x0))^{-1} F'(x*)| |F'(x*)^{-1}F(x0)|
≤ L|x0 − x*|² / (2(1 − L0|x0 − x*|)) + M(L0 + G2(|x0 − x*|))|x0 − x*|² / ((1 − L0|x0 − x*|)(1 − G2(|x0 − x*|)|x0 − x*|))
= G3(|x0 − x*|)|x0 − x*| < |x0 − x*| < R,

which shows (14.35) for n = 0. The rest of the proof follows exactly as the proof of Theorem 19.

Remark 23.

1. In view of (14.9) and the estimate

‖F'(x*)^{-1}F'(x)‖ = ‖F'(x*)^{-1}(F'(x) − F'(x*)) + I‖ ≤ 1 + ‖F'(x*)^{-1}(F'(x) − F'(x*))‖ ≤ 1 + L0‖x − x*‖,

condition (14.12) can be dropped and M can be replaced by M(t) = 1 + L0 t.

2. The results obtained here can be used for operators F satisfying autonomous differential equations [3] of the form F'(x) = P(F(x)), where P is a continuous operator. Then, since F'(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing x*. For example, let F(x) = e^x − 1. Then, we can choose P(x) = x + 1.

3. The local results obtained here can be used for projection methods such as Arnoldi's method, the generalized minimum residual method (GMRES) and the generalized conjugate residual method (GCR) for combined Newton/finite projection methods, and, in connection with the mesh independence principle, can be applied to develop the cheapest and most efficient mesh refinement strategies [2, 3].

4. It is worth noticing that method (14.2) and method (14.3) do not change when we use the conditions of Theorem 19 instead of the stronger conditions used in [16]. Moreover, we can compute the computational order of convergence (COC), defined by

ξ = ln( ‖xn+1 − x*‖ / ‖xn − x*‖ ) / ln( ‖xn − x*‖ / ‖xn−1 − x*‖ ),

or the approximate computational order of convergence

ξ1 = ln( ‖xn+1 − xn‖ / ‖xn − xn−1‖ ) / ln( ‖xn − xn−1‖ / ‖xn−1 − xn−2‖ ).

This way, we obtain in practice the order of convergence in a way that avoids bounds involving estimates of derivatives higher than the first Fréchet derivative of the operator F.
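The approximate order ξ1 in item 4 needs only the iterates themselves, so it is easy to evaluate in practice. The following sketch is an illustration (not part of the chapter's theory): it applies Newton's method to F(x) = e^x − 1 from item 2 and estimates ξ1 from the last three increments; a value close to 2 reflects Newton's quadratic convergence.

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton's method for F(x) = exp(x) - 1 (solution x* = 0, cf. item 2 above).
xs = [1.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (math.exp(x) - 1) / math.exp(x))

xi1 = acoc(xs)
print(xi1)  # close to 2 (quadratic convergence)
```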

4. Numerical Examples

We present three numerical examples in this section.

Example 9. Returning to the motivational example at the introduction of this chapter, we have L0 = L = 146.6629073, M = 101.5578008, α = 4 and λ = λn = µn = 1 for all n. The parameters are given in Table 14.1.

Table 14.1. Method (14.2)
r0 = 6.6483e−05
r1 = 0.0001
r2 = 0.0009
r3 = 0.0001
r = 0.0001
ξ1 = 1.1334
ξ = 1.4876

Example 10. Let D = [−1, 1]. Define the function f on D by

f(x) = e^x − 1.   (14.41)

Using (14.41) and x* = 0, we get that L0 = e − 1 < L = M = e, α = 3λ = 2.7183 and λn = µn = 1 for all n. The parameters are given in Table 14.2.

Table 14.2. Method (14.2)
r0 = 0.1098
r1 = 0.0319
r2 = 0.0897
r = 0.0702
ξ1 = 1.1230
ξ = 1.3167
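The constants of Example 10 can be checked numerically: with f(x) = e^x − 1 and x* = 0 we have f'(x) = e^x, so the supremum over [−1, 1] of |f'(x) − f'(0)|/|x| is e − 1 (attained at x = 1) and the supremum of |f'(0)^{-1}f'(x)| is e. A quick grid check (illustrative only, not from the chapter):

```python
import math

f1 = math.exp                      # f'(x) = e^x for f(x) = e^x - 1; f'(0) = 1
grid = [i / 1000 for i in range(-1000, 1001) if i != 0]

L0_num = max(abs(f1(x) - f1(0)) / abs(x) for x in grid)   # center-Lipschitz constant
M_num = max(abs(f1(x) / f1(0)) for x in grid)             # bound on |f'(x*)^{-1} f'(x)|
print(L0_num, M_num)
```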

Example 11. Let D = (−∞, +∞). Define the function f on D by

f(x) = sin(x).   (14.42)

Then, we have for x* = 0 that L0 = L = M = α = 1 and λ = λn = µn = 1 for all n. The parameters are given in Table 14.3.

Table 14.3. Method (14.3)
r0 = 0.5
r1 = 0.1069
r2 = 0.0910
r = 0.2530
ξ1 = 2.9987
ξ = 2.9967

5. Conclusion

We presented a local convergence analysis for some third-order methods used to approximate a solution of a nonlinear equation. We use hypotheses only up to the first derivative, in contrast to earlier studies such as [1, 5]-[27], which use hypotheses up to the third derivative. This way, the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are also given in this chapter. Numerical examples where earlier results cannot be used to solve equations, but ours can, are also presented in this chapter.

References

[1] Amat, S., Hernández, M. A., Romero, N., A modified Chebyshev's iterative method with at least sixth order of convergence, Appl. Math. Comput. 206(1), 164-174 (2008).

[2] Argyros, I. K., Convergence and Application of Newton-type Iterations, Springer, 2008.

[3] Argyros, I. K., Hilout, S., Computational Methods in Nonlinear Analysis, World Scientific Publ. Co., New Jersey, USA, 2013.

[4] Candela, V., Marquina, A., Recurrence relations for rational cubic methods I: The Halley method, Computing 44, 169-184 (1990).

[5] Chen, J., Some new iterative methods with three-order convergence, Appl. Math. Comput. 181, 1519-1522 (2006).

[6] Cordero, A., Torregrosa, J., Variants of Newton's method using fifth order quadrature formulas, Appl. Math. Comput. 190, 686-698 (2007).

[7] Ezquerro, J. A., Hernández, M. A., A uniparametric Halley-type iteration with free second derivative, Int. J. Pure Appl. Math. 6(1), 99-110 (2003).

[8] Ezquerro, J. A., Hernández, M. A., New iterations of R-order four with reduced computational cost, BIT Numer. Math. 49, 325-342 (2009).

[9] Ezquerro, J. A., Hernández, M. A., On the R-order of the Halley method, J. Math. Anal. Appl. 303, 591-601 (2005).

[10] Frontini, M., Sormani, E., Some variants of Newton's method with third order convergence, Appl. Math. Comput. 140, 419-426 (2003).

[11] Gutiérrez, J. M., Hernández, M. A., Recurrence relations for the super-Halley method, Comput. Math. Applic. 36(7), 1-8 (1998).

[12] Hernández, M. A., Chebyshev's approximation algorithms and applications, Comput. Math. Applic. 41(3-4), 433-455 (2001).

[13] Hernández, M. A., Salanova, M. A., Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces, Southwest J. Pure Appl. Math. (1), 29-40 (1999).

[14] Kanwar, M. V., Kukreja, V. K., Singh, S., On some third-order iterative methods for solving nonlinear equations, Appl. Math. Comput. 171, 272-280 (2005).

[15] Kou, J., Li, Y., An improvement of the Jarratt method, Appl. Math. Comput. 189, 1816-1821 (2007).

[16] Kou, J., Li, Y., Wang, X., On modified Newton methods with cubic convergence, Appl. Math. Comput. 176, 123-127 (2006).

[17] Özban, A. Y., Some new variants of Newton's method, Appl. Math. Lett. 17, 677-682 (2004).

[18] Parhi, S. K., Gupta, D. K., Semilocal convergence of a Stirling-like method in Banach spaces, Int. J. Comput. Methods 7(2), 215-228 (2010).

[19] Petković, M. S., Neta, B., Petković, L., Džunić, J., Multipoint Methods for Solving Nonlinear Equations, Elsevier, 2013.

[20] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, Vol. 103, Pitman Publ., Boston, MA, 1984.

[21] Rall, L. B., Computational Solution of Nonlinear Operator Equations, Robert E. Krieger, New York, 1979.

[22] Ren, H., Wu, Q., Bi, W., New variants of Jarratt method with sixth-order convergence, Numer. Algorithms 52(4), 585-603 (2009).

[23] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, in: Mathematical Models and Numerical Methods (Tikhonov, A. N., et al., eds.), pub. 3(19), 129-142, Banach Center, Warsaw, Poland.

[24] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, USA, 1964.

[25] Weerakoon, S., Fernando, T. G. I., A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13, 87-93 (2000).

[26] Xiao, X., Yin, H., A new class of methods with higher order of convergence for solving systems of nonlinear equations, (submitted for publication).

[27] Wang, X., Kou, J., Convergence for modified Halley-like methods with less computation of inversion, J. Diff. Eq. Appl. 19(9), 1483-1500 (2013).

Chapter 15

Convergence Analysis of Frozen Steffensen-Type Methods under Generalized Conditions

1. Introduction

The goal of this chapter is to present a unified local convergence analysis of frozen Steffensen-type methods under generalized Lipschitz-type conditions for Banach space-valued operators. We also use our new idea of restricted convergence domains, where we find a more precise location containing the iterates, leading to at least as tight majorizing functions. Consequently, the new convergence criteria are weaker than in earlier works, resulting in the expansion of the applicability of these methods. The conditions do not necessarily imply the differentiability of the operator involved. This way, our method is suitable for solving equations and systems of equations.

Let E stand for a Banach space and S be a nonempty open subset of E. By L(E) we denote the space of bounded linear operators from E into E. Let also F : S → E be a continuous operator. The problem of locating a zero x* of the operator F is very important in many diverse areas such as inverse theory, optimization, control theory, mathematical physics, chemistry, biology, economics, computational sciences, and also in engineering. A plethora of problems from the aforementioned disciplines can be formulated as finding a zero of F using mathematical modeling [1]-[12]. The exact zero of F is desirable, but this goal can be achieved only in some special cases. That is why researchers and practitioners generate a sequence converging to x* under some conditions on the operator F. In this chapter, we introduce the method defined for each n = 0, 1, 2, ... by

xn+1 = xn − An^{-1} F(xn),   (15.1)

where An = A(h1(y_{tn}), h2(y_{tn})), x0 ∈ S is an initial point, A(·, ·) : S × S → L(E), {tn} is a nondecreasing sequence of integers such that t0 = 0 and tn ≤ n for each n = 0, 1, 2, ..., h1 : S → S, h2 : S → S are continuous data operators, and y_{tn} stands for the highest indexed point among x0, x1, ..., x_{tn} for which An^{-1} exists, provided that A(h1(x0), h2(x0))^{-1} ∈ L(E).

It is well established that, from the numerical-efficiency point of view, it is not advantageous to recompute the operator An^{-1} at each step of the iterative method. If this operator is kept piecewise constant, then we obtain more efficient iterative methods. It is also well known that optimal methods can be obtained based on the dimension of the space. Many widely used iterative methods can be obtained as special cases of (15.1), say, if tn = n, so y_{tn} = xn:

Steffensen-type method [10]:

xn+1 = xn − [h1(xn), h2(xn); F]^{-1} F(xn) for each n = 0, 1, 2, ...,   (15.2)

where [·, ·; F] : S × S → L(E) and, for each x, y ∈ E with x ≠ y, [x, y; F](x − y) = F(x) − F(y).

Steffensen method:

xn+1 = xn − [xn, xn + F(xn); F]^{-1} F(xn) for each n = 0, 1, 2, ....   (15.3)

That is, we specialize (15.2) by setting h1(x) = x and h2(x) = x + F(x).

Backward-Steffensen method:

xn+1 = xn − [xn − F(xn), xn; F]^{-1} F(xn) for each n = 0, 1, 2, ....   (15.4)

Method (15.2) reduces to (15.4) if h1(x) = x − F(x) and h2(x) = x.

Central-Steffensen method:

xn+1 = xn − [xn − F(xn), xn + F(xn); F]^{-1} F(xn) for each n = 0, 1, 2, ....   (15.5)

This method is obtained from (15.2) if h1(x) = x − F(x) and h2(x) = x + F(x).

Generalized Steffensen-type method:

xn+1 = xn − [xn − θ(xn)F(xn), xn + τ(xn)F(xn); F]^{-1} F(xn) for each n = 0, 1, 2, ...,   (15.6)

where θ, τ : S → R+ ∪ {0} are real functions such that {θ(xn)} and {τ(xn)} are convergent sequences. Method (15.6) is a specialization of (15.2) with h1(x) = x − θ(x)F(x) and h2(x) = x + τ(x)F(x). Many other choices of tn and An are possible [1]-[12].

The motivation and novelty of method (15.1) are listed below:

(i) Method (15.1) is always well defined, since we can choose y_{tn} = xn for each n = 0, 1, 2, ....

(ii) Many methods are special cases of method (15.1), so it is important to unify their convergence analysis.

(iii) The local convergence analysis uses generalized Lipschitz-type conditions and the continuity of the operator F. The differentiability of F is not assumed or implied by the conditions. This way, (15.1) is suitable for solving non-differentiable equations. It is worth noticing that method (15.2), its special cases listed above, or other similar methods using divided differences cannot be used to solve non-differentiable equations, e.g., when h1(xi) = h2(xi) for some i = 0, 1, 2, .... Another possibility is when E = R^i (i a natural number) and the vectors h1(xi), h2(xi) are not the same but have at least one entry equal; then the classical divided difference cannot be defined. It is also worth noticing that local convergence results are important, since they reveal the degree of difficulty in choosing initial points.

(iv) Another problem appearing when we study the convergence of iterative methods is the fact that the ball of convergence is small in general, and the error bounds on the distances ‖xn − x*‖ are pessimistic. We address these problems by using a center-Lipschitz-type condition that helps us determine a subset S0 of S also containing the iterates. By concentrating on S0 instead of S, the Lipschitz functions are tighter than the ones depending on S. This way, the convergence ball is at least as large (i.e., we obtain at least as many initial points), the error bounds are at least as tight (i.e., at least as few iterations are needed to obtain an error tolerance ε), and the information on the location of the solution is at least as precise. These improvements are obtained under the same computational effort, since in practice the computation of the old Lipschitz functions requires the computation of the new Lipschitz functions as special cases.

The rest of the chapter is developed as follows: Section 2 contains the local convergence analysis of method (15.1), whereas in Section 3 we present the conclusion.
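In the scalar case E = R the divided difference is [x, y; F] = (F(x) − F(y))/(x − y), and the special cases (15.3)-(15.5) above are one-liners. The sketch below is an illustration under the assumption F(x) = x² − 2 (not an example from the chapter); it runs the Steffensen and central-Steffensen choices of h1, h2:

```python
def dd(F, x, y):
    """Scalar divided difference [x, y; F] = (F(x) - F(y)) / (x - y)."""
    return (F(x) - F(y)) / (x - y)

def steffensen_type(F, h1, h2, x, iters=8, tol=1e-12):
    """Method (15.2): x <- x - [h1(x), h2(x); F]^{-1} F(x)."""
    for _ in range(iters):
        fx = F(x)
        if abs(fx) < tol:           # stop before h1(x) and h2(x) coincide numerically
            break
        x = x - fx / dd(F, h1(x), h2(x))
    return x

F = lambda x: x * x - 2            # assumed test equation, root sqrt(2)

# (15.3): h1(x) = x, h2(x) = x + F(x)
r1 = steffensen_type(F, lambda x: x, lambda x: x + F(x), 1.5)
# (15.5): h1(x) = x - F(x), h2(x) = x + F(x)
r2 = steffensen_type(F, lambda x: x - F(x), lambda x: x + F(x), 1.5)
print(r1, r2)
```

For a quadratic F, the central divided difference over [x − F(x), x + F(x)] equals F'(x) exactly, so (15.5) reproduces Newton's method in this example.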

2. Local Convergence Analysis

Let U(x, δ) stand for the open ball centered at x ∈ E and of radius δ > 0. Moreover, we denote its closure by Ū(x, δ). Define the parameter ρ1 by ρ1 = sup{t ≥ 0 : U(x*, t) ⊆ S}. The local convergence analysis is based on the conditions (A):

(a1) There exist x* ∈ S with F(x*) = 0, α > 0 and x̄ ∈ S with ‖x̄ − x*‖ = α, such that A*^{-1} := A(x*, x̄)^{-1} ∈ L(E).

(a2) ‖A*^{-1}(A(h1(x), h2(x)) − A*)‖ ≤ φ0(‖h1(x) − x*‖, ‖h2(x) − x̄‖) for each x ∈ S, where hm : S → S, m = 1, 2, are continuous operators, φ0 : I × I → I is a continuous function, nondecreasing in both variables, and I = R+ ∪ {0}.

(a3) ‖hm(x) − x*‖ ≤ ψm(‖x − x*‖) for each x ∈ S, for some functions ψm : I → I which are continuous and nondecreasing.

(a4) Equation φ0(ψ1(t), α + ψ2(t)) = 1 has at least one positive solution. Denote by ρ2 the smallest such solution. Set S0 = U(x*, ρ), where ρ = min{ρ1, ρ2}.

(a5) ‖A*^{-1}(A(h1(y), h2(y))(x − x*) − F(x))‖ ≤ φ(‖h1(y) − x‖, ‖h2(y) − x*‖)‖x − x*‖ for each x, y ∈ S0, where φ : I0 × I0 → I is some continuous nondecreasing function and I0 = [0, ρ].

(a6) Equation φ(t + ψ1(t), ψ2(t)) + φ0(ψ1(t), α + ψ2(t)) = 1 has at least one solution in (0, ρ2). Denote by ρ* the smallest such solution.

Next, we show the local convergence result for method (15.1) using the conditions (A) and the preceding notation.

Theorem 21. Suppose that the conditions (A) hold. Then, the sequence {xn} generated by method (15.1) for a starter x0 ∈ U(x*, ρ*) − {x*} is well defined in U(x*, ρ*), stays in U(x*, ρ*) for each n = 0, 1, 2, ... and limn→∞ xn = x*.

Proof. Let x ∈ U(x*, ρ*). Using (a1)-(a4) and (a6), we have in turn that

‖A*^{-1}(A(h1(x), h2(x)) − A*)‖ ≤ φ0(‖h1(x) − x*‖, ‖h2(x) − x̄‖)
≤ φ0(ψ1(‖x − x*‖), ‖h2(x) − x*‖ + ‖x* − x̄‖)
≤ φ0(ψ1(ρ*), α + ψ2(ρ*)) < 1.   (15.7)

It follows from (15.7) and the Banach lemma on invertible operators [5, 6] that A(h1(x), h2(x)) is invertible and

‖A(h1(x), h2(x))^{-1} A*‖ ≤ 1 / (1 − φ0(ψ1(‖x − x*‖), α + ψ2(‖x − x*‖))).   (15.8)

Moreover, x1 is well defined by (15.1) and (15.8) for x = x0. We can write by method (15.1) that

x1 − x* = x0 − x* − A0^{-1}F(x0) = A0^{-1}[A0(x0 − x*) − F(x0)] = [A0^{-1}A*] A*^{-1}[A0(x0 − x*) − F(x0)].   (15.9)

By (15.8), (15.9), (a5) and (a6), we obtain in turn that

‖x1 − x*‖ ≤ ‖A0^{-1}A*‖ ‖A*^{-1}[A0(x0 − x*) − F(x0)]‖
≤ φ(‖h1(x0) − x0‖, ‖h2(x0) − x*‖)‖x0 − x*‖ / (1 − φ0(ψ1(‖x0 − x*‖), α + ψ2(‖x0 − x*‖)))
≤ φ(‖h1(x0) − x*‖ + ‖x0 − x*‖, ‖h2(x0) − x*‖)‖x0 − x*‖ / (1 − φ0(ψ1(‖x0 − x*‖), α + ψ2(‖x0 − x*‖)))
≤ c‖x0 − x*‖ < ρ*,   (15.10)

which shows that x1 ∈ U(x*, ρ*), where

c = φ(‖h1(x0) − x*‖ + ‖x0 − x*‖, ‖h2(x0) − x*‖) / (1 − φ0(ψ1(‖x0 − x*‖), α + ψ2(‖x0 − x*‖))) ∈ [0, 1).   (15.11)

Suppose that y_{tm} ∈ U(x*, ρ*) for each m = 0, 1, 2, ..., k. Then, as in (15.8)-(15.10) with x = y_{tk} and x1 replaced by xk+1, we get

‖A(h1(y_{tk}), h2(y_{tk}))^{-1} A*‖ ≤ 1 / (1 − φ0(ψ1(‖y_{tk} − x*‖), α + ψ2(‖y_{tk} − x*‖)))   (15.12)

and

‖A*^{-1}[Ak(xk − x*) − F(xk)]‖ ≤ φ(‖h1(y_{tk}) − xk‖, ‖h2(y_{tk}) − x*‖)‖xk − x*‖,   (15.13)

so again

‖xk+1 − x*‖ ≤ ‖Ak^{-1}A*‖ ‖A*^{-1}[Ak(xk − x*) − F(xk)]‖ ≤ c‖xk − x*‖ < ρ*,   (15.14)

which implies xk+1 ∈ U(x*, ρ*) and limk→∞ xk = x*.

Concerning the uniqueness of the solution x*, suppose:

(a6)' Equation φ(t + ψ1(t), t + ψ2(t)) + φ0(ψ1(t), α + ψ2(t)) = 1 has at least one solution in (0, ρ2). Denote by ρ̄* the smallest such solution.

Replace (a6) by (a6)' in the conditions (A), and call (a1)-(a5) together with (a6)' the conditions (A)'. Then, we have:

Proposition 2. Suppose that the conditions (A)' hold. Then, x* is the only solution of equation F(x) = 0 in U(x*, ρ̄*), provided that x0 ∈ U(x*, ρ̄*) − {x*}.

Proof. Let y* ∈ U(x*, ρ̄*) be such that F(y*) = 0. We have, instead of (15.14), the estimate

‖xk+1 − y*‖ ≤ c̄‖xk − y*‖,   (15.15)

where

c̄ = φ(‖h1(x0) − x*‖ + ‖x0 − x*‖, ‖h2(x0) − x*‖ + ‖x* − y*‖) / (1 − φ0(ψ1(‖x0 − x*‖), α + ψ2(‖x0 − x*‖))) ∈ [0, 1)

(by (a6)'). Then, it follows from (15.15) that limk→∞ xk = y*. But we showed in Theorem 21 that limk→∞ xk = x*. Hence, we conclude that x* = y*.

Remark 24. Let us look again at method (15.2) and consider the conditions given in [10], so we can compare the results. Their conditions were given in non-affine invariant form, but we present them here in affine invariant form. The advantages of affine invariant results over non-affine invariant results are well known [5, 6, 11, 12].

(c1) Condition (a1).

(c2) ‖A*^{-1}([x, y; F] − A*)‖ ≤ φ̄0(‖x − x*‖, ‖y − x*‖) for each x, y ∈ S with x ≠ y.

(c3) ‖A*^{-1}([x, y; F] − [z, w; F])‖ ≤ φ̄(‖x − z‖, ‖y − w‖) for each x, y, z, w ∈ S with (x, y), (z, w) different pairs.

(c4) ‖hm(x) − hm(x*)‖ ≤ ψ̄m(‖x − x*‖).

(c5) hm(x*) = x*, h1'(x*) ≠ h2'(x*), h1(x) = h2(x) ⇔ x = x*.

(c6) Equation φ̄(ψ̄1(t) + t, ψ̄2(t)) + φ̄0(ψ̄1(t), α + ψ̄2(t)) = 1 has at least one positive solution. Denote by r* the smallest such solution.

Restrictive conditions (c4) and (c5) were not used in our chapter. Conditions (a2), (a3) are weaker than (c3), (c4), respectively. Condition (a2) helps us determine a more precise domain (i.e., S0) containing the iterates than S and also helps us define the function ψ. Moreover, we have

φ0(t) ≤ φ̄0(t),   (15.16)

φ(t) ≤ φ̄(t),   (15.17)

c ≤ c̄,   (15.18)

r* ≤ ρ*   (15.19)

and

S0 ⊆ S.   (15.20)

Estimates (15.16)-(15.19) justify the improvements stated in the introduction. Examples where (15.16)-(15.19) hold as strict inequalities can be found in [4, 5, 6].
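The "frozen" idea behind (15.1) — keeping An piecewise constant rather than recomputing it each step — can be illustrated in the scalar case by refreshing the divided difference only every few steps. This sketch is illustrative only: the refresh period and the test function F(x) = x² − 2 are assumptions, not from the chapter.

```python
def dd(F, x, y):
    # scalar divided difference [x, y; F]
    return (F(x) - F(y)) / (x - y)

def frozen_steffensen(F, x, refresh=3, iters=30, tol=1e-12):
    """Steffensen iteration that recomputes the divided difference
    only every `refresh` steps (A_n kept piecewise constant)."""
    A = None
    for n in range(iters):
        fx = F(x)
        if abs(fx) < tol:
            break
        if n % refresh == 0:        # t_n advances only at refresh steps
            A = dd(F, x, x + fx)
        x = x - fx / A
    return x

F = lambda x: x * x - 2
root = frozen_steffensen(F, 1.5)
print(root)
```

Freezing trades a lower order of convergence per step for fewer divided-difference (or inverse) computations, which is the efficiency argument made at the start of this chapter.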

3. Conclusion

We presented a local convergence analysis of frozen Steffensen-type methods for generating a sequence approximating a solution of Banach space-valued equations. Method (15.1) specializes to many popular methods. If the starting inverse exists, then the method is always well defined. Using our idea of the restricted convergence domain, we provide a more precise domain where the iterates lie. Hence, the majorant functions involved are at least as tight as in previous studies. This way, the convergence criteria are at least as weak, the convergence domain is enlarged, the error bounds on the distances ‖xn − x*‖ are tighter, and the information on the location of the solution is at least as precise. The results reduce to earlier ones if only divided differences are used. Moreover, the differentiability of the operator F is not assumed or implied, as in previous works, making method (15.1) suitable for solving systems of equations.

References

[1] Amat, S., Busquier, S., Grau, A., Grau-Sánchez, M., Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications, Appl. Math. Comput. 219, 7954-7963 (2013).

[2] Amat, S., Argyros, I. K., Busquier, S., Hernández, M. A., On two high-order families of frozen Newton-type methods, Numer. Linear Algebra Appl. 25, e2126, 1-13 (2018).

[3] Argyros, I. K., A unifying local semi-local convergence analysis and applications for two-point Newton-like methods in Banach space, J. Math. Anal. Appl. 298(2), 374-397 (2004).

[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity 28, 364-387 (2012).

[5] Argyros, I. K., Magreñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.

[6] Argyros, I. K., George, S., Unified convergence analysis of frozen Newton-like methods under generalized conditions, J. Comput. Appl. Math. 347, 95-107 (2019). DOI: 10.1016/j.cam.2018.08.010.

[7] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Birkhäuser, Cham, Switzerland, 2017.

[8] Hald, O. H., On a Newton-Moser type method, Numer. Math. 23, 411-426 (1975).

[9] Hernández, M. A., Rubio, M. J., A uniparametric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl. 275, 821-834 (2002).

[10] Hernández, M. A., Magreñán, Á. A., Rubio, M. J., Dynamics and local convergence of a family of derivative-free iterative processes, selected papers of CMMSE, J. Comput. Appl. Math., 2018.

[11] Potra, F. A., A characterization of the divided differences of an operator which can be represented by Riemann integrals, Anal. Numer. Theory Approx. 9, 251-253 (1980).

[12] Potra, F. A., On the convergence of a class of Newton-like methods, Preprint Series in Mathematics, INCREST, Bucharest, Romania, No. 22/1982.

Chapter 16

Convergence of Two-Step Iterative Methods for Solving Equations with Applications 1.

Introduction

Let B1, B2 stand for Banach spaces and let Ω ⊆ B1 be a nonempty, convex and open set. By LB(B1, B2) we denote the space of bounded linear operators from B1 into B2. A plethora of problems in various disciplines can be written, using mathematical modeling, as an equation

F(x) = 0, (16.1)

where F : Ω → B2 is differentiable in the sense of Fréchet. Therefore, finding a solution x∗ of equation (16.1) is of great importance and a challenge [1]-[20]. One wishes x∗ to be found in closed form, but this is possible only in special cases. This is why we resort to iterative methods approximating x∗. Numerous studies have been published on the local as well as the semi-local convergence of iterative methods. Among these methods is the single-step Newton method, considered the most popular, defined for each n = 0, 1, 2, . . . by

z0 ∈ Ω, zn+1 = zn − F′(zn)^{-1}F(zn). (16.2)

Iterative methods converge under certain hypotheses. However, their convergence region is small in general. Finding a set D ⊆ Ω containing the iterates that is more precise than Ω is very important, since the Lipschitz constants on D are at least as tight as those on Ω. This in turn leads to a finer convergence analysis of these methods. We pursue this goal in the present chapter by studying the two-step fourth-convergence-order Newton method defined by

x0 ∈ Ω,
yn = xn − F′(xn)^{-1}F(xn),
xn+1 = yn − F′(yn)^{-1}F(yn), (16.3)

as well as the two-step third-order Traub method [20]

x̄0 ∈ Ω,
ȳn = x̄n − F′(x̄n)^{-1}F(x̄n),
x̄n+1 = ȳn − F′(x̄n)^{-1}F(ȳn). (16.4)


Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

The local as well as the semi-local convergence of methods (16.3) and (16.4) is carried out under the same set of hypotheses. The rest of the chapter presents the semi-local and local convergence of these methods in Section 2 and Section 3, respectively. Numerical examples are given in Section 4.
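To make the two schemes concrete, here is a minimal scalar sketch (not from the chapter; the test equation F(x) = x³ − 8 with root x∗ = 2 and the tolerance are illustrative choices):

```python
# Minimal scalar sketch of the two-step fourth-order method (16.3) and the
# third-order Traub method (16.4). Test problem and tolerance are illustrative.

def two_step_newton(F, Fp, x0, tol=1e-12, max_iter=50):
    """Method (16.3): a full Newton step from x_n, then another from y_n."""
    x = x0
    for _ in range(max_iter):
        y = x - F(x) / Fp(x)
        x_new = y - F(y) / Fp(y)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def traub(F, Fp, x0, tol=1e-12, max_iter=50):
    """Method (16.4): the derivative F'(x_n) is frozen in the second substep."""
    x = x0
    for _ in range(max_iter):
        y = x - F(x) / Fp(x)
        x_new = y - F(y) / Fp(x)   # reuses F'(x_n): one derivative per full step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

F = lambda x: x ** 3 - 8.0
Fp = lambda x: 3.0 * x ** 2
print(two_step_newton(F, Fp, 3.0), traub(F, Fp, 3.0))   # both approach 2.0
```

The only difference between the two methods is the derivative reused in the second substep, which is exactly what lowers the order from four to three while saving one derivative evaluation per step.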

2. Semi-Local Convergence Analysis

We present first the semi-local convergence analysis for method (16.3). We need an auxiliary result on majorizing sequences for method (16.3), whose proof is an extension of the corresponding one given by us in [8]. Let l > 0, l0 > 0 and η > 0 be given parameters. Define the parameters γ, s0 and t1 by

γ = 2l/(l + √(l² + 8l0l)), s0 = η, and t1 = s0[1 + l0s0/(2(1 − l0s0))] for l0s0 ≠ 1, (16.5)

and the scalar sequence {tn} for each n = 1, 2, . . . by

t0 = 0, t1 = s0 + l0s0²/(2(1 − l0s0)),
sn = tn + l(tn − sn−1)²/(2(1 − l0tn)),
tn+1 = sn + l(sn − tn)²/(2(1 − l0sn)). (16.6)

Lemma 7. Let l > 0, l0 > 0 and η > 0 be given parameters. Suppose that

0 < max{ l(s0 − t0)/(2(1 − l0s0)), l(t1 − s0)/(2(1 − l0t1)) } ≤ γ < 1 − l0s0. (16.7)

Then, the sequence {tn} is nondecreasing, bounded from above by t∗∗ = η/(1 − γ), and converges to its unique least upper bound t∗ satisfying η ≤ t∗ ≤ t∗∗. Moreover, the following items hold for each n = 1, 2, . . .:

0 < tn+1 − sn ≤ γ(sn − tn) ≤ γ^(2n+1)η, (16.8)
0 < sn − tn ≤ γ(tn − sn−1) ≤ γ^(2n)η (16.9)

and

tn ≤ sn ≤ tn+1. (16.10)

Proof. Estimates (16.8)-(16.10) hold true if 0 <
(a2) There exists l0 > 0 such that for each x ∈ Ω

‖F′(x0)^{-1}(F′(x) − F′(x0))‖ ≤ l0‖x − x0‖. (16.37)

Define S0 = Ω ∩ S(x1, 1/l0 − η), where x1 = x0 − F′(x0)^{-1}F(x0) and l0η < 1 by (16.35) and (16.36).

(a3) There exists l > 0 such that for each x, y ∈ S0

‖F′(x0)^{-1}(F′(y) − F′(x))‖ ≤ l‖y − x‖. (16.38)

(a4) The hypotheses of Lemma 7 hold with (16.7) replaced by (16.35) and (16.36).

(a5) S̄(x0, t∗) ⊂ Ω, where t∗ is given in Lemma 7.

(a6) There exists t1∗ ≥ t∗ such that l0(t∗ + t1∗) < 2. Set S1 = Ω ∩ S̄(x0, t1∗).

Next, we show the semi-local convergence result for method (16.3).

Theorem 22. Assume that the conditions (A) hold. Then, the sequence {xn} generated by method (16.3) remains in S(y0, 1/l0 − η) for n = 1, 2, . . . and converges to some x∗, which is the only solution of equation F(x) = 0 in the set S1.

Proof. We prove by mathematical induction that

‖xm+1 − ym‖ ≤ tm+1 − sm (16.39)

and

‖ym − xm‖ ≤ sm − tm. (16.40)

and By (a1) and (16.35), we have ky0 − x0 k = kF 0 (x0 )−1 F(x0 )k ≤ η ≤

1 − η, l0

(16.41)



so x0 ∈ S(y0, 1/l0 − η) and (16.40) holds for m = 0. By method (16.3) for n = 0, (a1), (16.6) and (a2), we have in turn that

‖x1 − y0‖ = ‖F′(x0)^{-1}[F(y0) − F(x0) − F′(x0)(y0 − x0)]‖
= ‖∫_0^1 F′(x0)^{-1}(F′(x0 + τ(y0 − x0)) − F′(x0))dτ (y0 − x0)‖
≤ (l0/2)‖y0 − x0‖² ≤ (l0/2)(s0 − t0)² ≤ t1 − s0 < 1/l0 − η,

so x1 ∈ S̄(y0, 1/l0 − η). Then, by (a2), we have

‖F′(x0)^{-1}(F′(x1) − F′(x0))‖ ≤ l0‖x1 − x0‖ ≤ l0t1 < 1, (16.42)

so by (16.42) and the Banach lemma on invertible operators [17], F′(x1)^{-1} ∈ LB(B2, B1) and

‖F′(x1)^{-1}F′(x0)‖ ≤ 1/(1 − l0‖x1 − x0‖). (16.43)

In view of method (16.3) and (a3), we get

‖F′(x0)^{-1}F(x1)‖ = ‖F′(x0)^{-1}[F(x1) − F(y0) − F′(y0)(x1 − y0)]‖
= ‖∫_0^1 F′(x0)^{-1}(F′(y0 + τ(x1 − y0)) − F′(y0))dτ (x1 − y0)‖
≤ (l/2)‖x1 − y0‖² ≤ (l/2)(t1 − s0)²,

so

‖y1 − x1‖ = ‖[F′(x1)^{-1}F′(x0)][F′(x0)^{-1}F(x1)]‖
≤ ‖F′(x1)^{-1}F′(x0)‖ ‖F′(x0)^{-1}F(x1)‖
≤ l(t1 − s0)²/(2(1 − l0t1)) = s1 − t1,

and

‖y1 − y0‖ ≤ ‖y1 − x1‖ + ‖x1 − y0‖ ≤ s1 − t1 + t1 − s0 = s1 − s0 < 1/l0 − η,

so (16.40) holds for m = 1 and y1 ∈ S̄(y0, 1/l0 − η). Using method (16.3) as above, we have

‖x2 − y1‖ ≤ ‖F′(y1)^{-1}F′(x0)‖ ‖F′(x0)^{-1}F(y1)‖
≤ l‖y1 − x1‖²/(2(1 − l0‖y1 − x0‖))
≤ l(s1 − t1)²/(2(1 − l0s1)) = t2 − s1,



and ‖x2 − y0‖ ≤ ‖x2 − y1‖ + ‖y1 − y0‖ ≤ t2 − s1 + s1 − s0 = t2 − s0
0, so p1 increasing, so p1 (t) = 0 has a unique root in (0, 1). Denote by δ this root. The following estimate is needed: 0
0, l > 0 and η > 0 be positive parameters. Assume that 0
(h3) There exists L > 0 such that for each x, y ∈ S2, ‖F′(x∗)^{-1}(F′(y) − F′(x))‖ ≤ L‖y − x‖.



(h4) S(x∗, ρ) ⊆ Ω, where

ρ = µA = 2/(2L0 + L) if method (16.3) is used, and ρ = R = 4/(4L0 + (1 + √5)L) if method (16.4) is used.

(h5) There exists r̄ ≥ ρ such that L0r̄ < 1. Set S3 = Ω ∩ S̄(x∗, r̄).

The proofs of the following two results are omitted, since they follow the corresponding ones for the single-step Newton method (16.2) given in [8,10].

Theorem 24. Under the hypotheses (H), starting from x0 ∈ S(x∗, µA) − {x∗}, the sequence {xn} produced by method (16.3) converges to x∗, which is the only solution of equation F(x) = 0 in the set S3. Moreover, the following items hold:

‖yn − x∗‖ ≤ L‖xn − x∗‖² / (2(1 − L0‖xn − x∗‖)) (16.67)

and

‖xn+1 − x∗‖ ≤ L‖yn − x∗‖² / (2(1 − L0‖yn − x∗‖)). (16.68)

Theorem 25. Under the hypotheses (H), starting from x0 ∈ S(x∗, R) − {x∗}, the sequence {x̄n} produced by method (16.4) converges to x∗, which is the only solution of equation F(x) = 0 in the set S3. Moreover, the following items hold:

‖ȳn − x∗‖ ≤ L‖x̄n − x∗‖² / (2(1 − L0‖x̄n − x∗‖)) (16.69)

and

‖x̄n+1 − x∗‖ ≤ L(2‖x̄n − x∗‖ + ‖ȳn − x∗‖)‖ȳn − x∗‖ / (2(1 − L0‖x̄n − x∗‖)). (16.70)

Lemma 9. The radius of convergence in [8,10] for the single-step Newton method (16.2) was given by

µ̄A = 2/(2L0 + L1), (16.71)

where L1 is the Lipschitz constant on Ω. Then, since S2 ⊆ Ω, we get

L ≤ L1, (16.72)

so

µ̄A ≤ µA. (16.73)

The error bounds are tighter too, since L, rather than L1, is used in (16.67) and (16.68).


4. Numerical Examples

Example 12. Let B1 = B2 = R, Ω = S(x0, 1 − α), x0 = 1 and α ∈ I = [0, 1/2). Define the function f on Ω by

f(x) = x³ − α.

Then, using hypotheses (a1)-(a3), we get l0 = 3 − α, l = 2(6 + 5α − 2α²)/(3(3 − α)) and l1 = 2(2 − α).
Method (16.2): for α ∈ I0 = [0.371269, 0.5] our convergence criteria are satisfied, but the Kantorovich criterion is not, since hK = l1η > 1/2 for each α ∈ [0, 0.5].
Method (16.3): the corresponding convergence criteria have no solution in [0, 0.5].

Example 13. Let B1 = B2 = C[0, 1], where C[0, 1] stands for the space of continuous functions on [0, 1], equipped with the max norm. Let Ω0 = {x ∈ C[0, 1] : ‖x‖ ≤ d}. Define the operator G on Ω0 by

G(x)(s) = x(s) − g(s) − ξ ∫_0^1 K(s, t)x(t)³dt, x ∈ C[0, 1], s ∈ [0, 1], (16.74)

where g ∈ C[0, 1] is a given function, ξ is a real constant and the kernel K is the Green's function. In this case, for each x ∈ Ω0, G′(x) is a linear operator defined on Ω0 by

[G′(x)(v)](s) = v(s) − 3ξ ∫_0^1 K(s, t)x(t)²v(t)dt, v ∈ C[0, 1], s ∈ [0, 1].

If we choose x0(s) = g(s) = 1, it follows that ‖I − G′(x0)‖ ≤ 3|ξ|/8. Thus, if |ξ| < 8/3, G′(x0)^{-1} is defined and

‖G′(x0)^{-1}‖ ≤ 8/(8 − 3|ξ|), ‖G(x0)‖ ≤ |ξ|/8, η = ‖G′(x0)^{-1}G(x0)‖ ≤ |ξ|/(8 − 3|ξ|).

Choosing ξ = 1.00 and d = 3, we have η = 0.2, l1 = 3.8, b = 2.6, L1 = 2.28 and l = 1.38154 . . .. Using these values, we find that the Kantorovich condition hK = l1η ≤ 1/2 is not satisfied, since hK = 0.76 > 1/2, but condition (16.35) is satisfied, since 0.485085 < 1/2. The convergence of Newton's method then follows from Theorem 22.

Choosing ξ = 1.00 and x = 3, we have η = 0.2, T = 3.8, b = 2.6, L1 = 2.28, and l = 1.38154 . . . Using this values we obtain that conditions (16.31)-(16.36) are not satisfied, since the Kantorovich condition hK = l1 η ≤ 12 , gives hK = 0.76 > 12 . but condition (16.35) is satisfied since 0.485085 < 12 The convergence of the Newton’s method follows by Theorem 22 Example 14. Let B1 = B2 = R3 , Ω = S(0, 1), x∗ = (0, 0, 0)T and define G on Ω by G(x) = F(x1 , x2 , x3 ) = (ex1 − 1,

e−1 2 x2 + x2 , x3 )T . 2

For the points u = (u1 , u2 , u3 )T , the Fr´echet derivative is given by  u  e1 0 0 G0 (u) =  0 (e − 1)u2 + 1 0  . 0 0 1 1

Then, G0 (x∗ ) = diag(1, 1, 1), we have L0 = e − 1, L = e e−1 , L1 = e. Then, we obtain that  µA = 0.3827 , if method (16.3) is used ρ= R = 0.3158 , if method (16.4) is used

(16.75)
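The two radii in (16.75) follow directly from the constants of Example 14 and the radius formulas of hypothesis (h4) (the formula for R is reconstructed from the garbled text, but it reproduces the chapter's printed numbers):

```python
# Recomputing the radii of (16.75) for Example 14 from mu_A = 2/(2 L0 + L) and
# R = 4/(4 L0 + (1 + sqrt(5)) L); the second formula is a reconstruction that
# matches the chapter's printed values.
import math

L0 = math.e - 1                      # center-Lipschitz constant, L0 = e - 1
L = math.e ** (1 / (math.e - 1))     # Lipschitz constant on the restricted region

mu_A = 2 / (2 * L0 + L)                        # radius for method (16.3)
R = 4 / (4 * L0 + (1 + math.sqrt(5)) * L)      # radius for method (16.4)

print(round(mu_A, 4), round(R, 4))   # 0.3827 0.3158
```

Note that µA > R here: the fourth-order method (16.3) admits the larger ball of convergence for this example.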



Example 15. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1] equipped with the max norm, and let Ω = S(0, 1). Define the function G on Ω by

G(ϕ)(x) = ϕ(x) − 5 ∫_0^1 xθϕ(θ)³dθ. (16.76)

We have that

[G′(ϕ)(ξ)](x) = ξ(x) − 15 ∫_0^1 xθϕ(θ)²ξ(θ)dθ, for each ξ ∈ Ω.

Then, we get that x∗ = 0, L0 = 7.5 and L = 15 = L1. This way, we have

ρ = µA = 0.0667 if method (16.3) is used, and ρ = R = 0.0509 if method (16.4) is used.

5. Conclusion

The convergence of two-step iterative methods of third and fourth order is studied under weaker hypotheses than in earlier works, using our new idea of the restricted convergence region. In this way, we obtain a finer semi-local and local convergence analysis under the same or weaker hypotheses. Hence, we extend the applicability of these methods to cases not covered before. Numerical examples are used to compare our results favorably with earlier ones.

References

[1] Amat, S., Busquier, S. and Gutiérrez, J. M., Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math., 157 (2003), 197-205.
[2] Amat, S. and Busquier, S., Third-order iterative methods under Kantorovich conditions, J. Math. Anal. Appl., 336 (2007), 243-261.
[3] Amat, S., Busquier, S. and Negra, M., Adaptive approximation of nonlinear operators, Numer. Funct. Anal. Optim., 25 (2004), 397-405.
[4] Amat, S., Busquier, S. and Magreñán, A. A., Improving the dynamics of Steffensen-type methods, Applied Mathematics and Information Sciences, 9(5) (2015), 2403.
[5] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169 (2004), 315-332.
[6] Argyros, I. K., A semi-local convergence analysis for directional Newton methods, Math. Comput., 80 (2011), 327-343.
[7] Argyros, I. K. and González, D., Extending the applicability of Newton's method for k-Fréchet differentiable operators in Banach spaces, Appl. Math. Comput., 234 (2014), 167-178.



[8] Argyros, I. K. and Hilout, S., Weaker conditions for the convergence of Newton's method, Journal of Complexity, 28 (2012), 364-387.
[9] Argyros, I. K. and Magreñán, A. A., Iterative Methods and Their Dynamics with Applications: A Contemporary Study, CRC Press, Boca Ratón, 2017.
[10] Argyros, I. K. and Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225 (2013), 372-386.
[11] Argyros, I. K., Behl, R. and Motsa, S. S., Unifying semilocal and local convergence of Newton's method on Banach space with a convergence structure, Applied Numerical Mathematics, 115 (2017), 225-234.
[12] Argyros, I. K., Cho, Y. J. and Hilout, S., On the midpoint method for solving equations, Applied Mathematics and Computation, 216(8) (2010), 2321-2332.
[13] Behl, R., Cordero, A., Motsa, S. S. and Torregrosa, J. R., Stable high-order iterative methods for solving nonlinear models, Applied Mathematics and Computation, 303 (2017), 70-88.
[14] Catinas, E., A survey on the high convergence orders and computational convergence orders of sequences, Appl. Math. Comput., 343 (2019), 1-20.
[15] Chen, J., Argyros, I. K. and Agarwal, R. P., Majorizing functions and two-point Newton-type methods, Journal of Computational and Applied Mathematics, 234(5) (2010), 1473-1484.
[16] Ezquerro, J. A. and Hernández, M. A., How to improve the domain of parameters for Newton's method, Appl. Math. Lett., 48 (2015), 91-101.
[17] Kantorovich, L. V. and Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, 1982.
[18] Magreñán, A. A. and Argyros, I. K., Two-step Newton methods, Journal of Complexity, 30(4) (2014), 533-553.
[19] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3(1) (1978), 129-142.
[20] Traub, J. F., Iterative Methods for the Solution of Equations, American Mathematical Society, 1982.

Chapter 17

Three Step Jarratt-Type Methods under Generalized Conditions

1. Introduction

In this chapter we compare the radii of convergence of two sixth-convergence-order Jarratt-type methods for solving the nonlinear equation

F(x) = 0, (17.1)

where F : Ω ⊂ B1 −→ B2 is continuously Fréchet differentiable, B1, B2 are Banach spaces, and Ω is a nonempty convex set. The methods under consideration in this chapter are

yn = xn − (2/3)F′(xn)^{-1}F(xn),
zn = xn − ((23/8)I − 3F′(xn)^{-1}F′(yn) + (9/8)(F′(xn)^{-1}F′(yn))²)F′(xn)^{-1}F(xn),
xn+1 = zn − ((5/2)I − (3/2)F′(xn)^{-1}F′(yn))F′(xn)^{-1}F(zn) (17.2)

and

yn = xn − (2/3)F′(xn)^{-1}F(xn),
zn = xn − (I + (21/8)an − (9/2)an² + (15/8)an³)F′(xn)^{-1}F(xn),
xn+1 = zn − (3I − (5/2)an + (1/2)an²)F′(xn)^{-1}F(zn), (17.3)

where an = F′(xn)^{-1}F′(yn). The convergence order of these methods was obtained in [9] and [1], respectively, for B1 = B2 = R^m, using Taylor expansions and conditions on derivatives up to order seven, although these derivatives do not appear in the methods. These conditions limit the applicability of the methods [1]-[22].
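Before the local analysis, a scalar sketch of method (17.2) may help: in one dimension the operator F′(xn)^{-1}F′(yn) reduces to the quotient a = F′(yn)/F′(xn). The expanded weight of the second substep is reconstructed from the derivation leading to (17.20), and the test equation is an illustrative choice:

```python
# Scalar sketch of the Jarratt-type method (17.2). The polynomial weight
# (23/8 - 3a + 9/8 a^2) is a reconstruction consistent with (17.20); the test
# problem t^3 = 2 and the starting point are illustrative assumptions.

def jarratt6(F, Fp, x0, tol=1e-13, max_iter=25):
    x = x0
    for _ in range(max_iter):
        fx, fpx = F(x), Fp(x)
        y = x - (2.0 / 3.0) * fx / fpx
        a = Fp(y) / fpx                     # scalar analogue of F'(x_n)^{-1} F'(y_n)
        z = x - (23.0 / 8 - 3 * a + (9.0 / 8) * a * a) * fx / fpx
        x_new = z - (5.0 / 2 - (3.0 / 2) * a) * F(z) / fpx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

F = lambda t: t ** 3 - 2.0
Fp = lambda t: 3.0 * t * t
root = jarratt6(F, Fp, 1.5)
print(root)   # close to 2 ** (1/3)
```

Note that all three weights reduce to the Newton correction when a = 1, i.e., when F′(yn) = F′(xn), as expected of a Jarratt-type composition.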


For example, let B1 = B2 = R and Ω = [−1/2, 3/2]. Define f on Ω by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and f(t) = 0 if t = 0.

Then, we have t∗ = 1 and

f‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously, f‴ is not bounded on Ω. So, the convergence of these methods is not guaranteed by the analyses in these papers. Our convergence analysis is based on the first Fréchet derivative, which is the only one appearing in the methods. We also provide a computable radius of convergence, not given in [1]-[19]. In this way, we locate a set of initial points for the convergence of the methods. The numerical examples are chosen to show how the theoretically predicted radii are computed. In particular, the last example shows that earlier results cannot be used to establish convergence of the methods. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The chapter contains the local convergence analysis in Section 2 and the numerical examples in Section 3.
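The unboundedness claim is easy to check numerically: f‴ diverges like 6 log t² as t → 0⁺ (the sample points below are only a sanity check):

```python
# Sanity check that f'''(t) = 6 log t^2 + 60 t^2 - 24 t + 22 is unbounded on
# Omega = [-1/2, 3/2]: it diverges to -infinity as t -> 0.
import math

def f3(t):
    return 6 * math.log(t * t) + 60 * t * t - 24 * t + 22

for t in (1e-2, 1e-4, 1e-6):
    print(t, f3(t))   # magnitude grows without bound as t shrinks
```

This is exactly why convergence analyses requiring bounded higher derivatives cannot be applied to this f, while hypotheses on the first derivative alone can.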

2. Local Analysis

The local convergence analysis of method (17.2) uses scalar functions and parameters. Set M = [0, ∞). Suppose that the equation

ϕ0(t) − 1 = 0 (17.4)

has a least solution R0 ∈ M − {0} for some continuous and nondecreasing function ϕ0 : M −→ M. Set M0 = [0, R0). Suppose that the equation

ψ1(t) − 1 = 0 (17.5)

has a least solution r1 ∈ (0, R0) for some continuous and nondecreasing functions ϕ : M0 −→ M and ϕ1 : M0 −→ M, where

ψ1(t) = [∫_0^1 ϕ((1 − θ)t)dθ + (1/3)∫_0^1 ϕ1(θt)dθ] / (1 − ϕ0(t)).

Suppose that the equation

ψ2(t) − 1 = 0 (17.6)

has a least solution r2 ∈ (0, R0), where

ψ2(t) = ∫_0^1 ϕ((1 − θ)t)dθ / (1 − ϕ0(t)) + (3/8)[3((ϕ0(t) + ϕ0(ψ1(t)t))/(1 − ϕ0(t)))² + 2(ϕ0(t) + ϕ0(ψ1(t)t))/(1 − ϕ0(t))] ∫_0^1 ϕ1(θt)dθ / (1 − ϕ0(t)).



Suppose that the equation

ϕ0(ψ2(t)t) − 1 = 0 (17.7)

has a least solution R1 ∈ M − {0}. Set R = min{R0, R1}. Suppose that the equation

ψ3(t) − 1 = 0 (17.8)

has a least solution r3 ∈ (0, R), where

ψ3(t) = [∫_0^1 ϕ((1 − θ)ψ2(t)t)dθ / (1 − ϕ0(ψ2(t)t)) + (ϕ0(t) + ϕ0(ψ2(t)t)) ∫_0^1 ϕ1(θψ2(t)t)dθ / ((1 − ϕ0(t))(1 − ϕ0(ψ2(t)t))) + (3/2)(ϕ0(t) + ϕ0(ψ1(t)t)) ∫_0^1 ϕ1(θψ2(t)t)dθ / (1 − ϕ0(t))²] ψ2(t).

The parameter

r∗ = min{rk}, k = 1, 2, 3, (17.9)

shall be shown to be a radius of convergence for method (17.2). The definition of r∗ gives

0 ≤ ϕ0(t) < 1, (17.10)
0 ≤ ϕ0(ψ2(t)t) < 1 (17.11)

and

0 ≤ ψk(t) < 1, (17.12)

which hold for all t ∈ [0, r∗). The notations T(x, a) and T̄(x, a) are used for the open and closed balls, respectively, in B1 with center x ∈ B1 and radius a > 0. The hypotheses (H), with the functions ϕ0, ϕ and ϕ1 as defined previously, are:

(h1) F : Ω ⊂ B1 −→ B2 is Fréchet continuously differentiable, and there exists a simple solution x∗ ∈ Ω of F(x∗) = 0.

(h2) For each x ∈ Ω,
‖F′(x∗)^{-1}(F′(x) − F′(x∗))‖ ≤ ϕ0(‖x − x∗‖).
Set Ω0 = Ω ∩ T(x∗, R0).

(h3) For each x, y ∈ Ω0,
‖F′(x∗)^{-1}(F′(y) − F′(x))‖ ≤ ϕ(‖y − x‖)
and
‖F′(x∗)^{-1}F′(x)‖ ≤ ϕ1(‖x − x∗‖).

(h4) T̄(x∗, α) ⊂ Ω for some α > 0 to be determined.



(h5) There exists b ≥ r∗ such that ∫_0^1 ϕ0(θb)dθ < 1. Set Ω1 = Ω ∩ T̄(x∗, b).

Next, hypotheses (H) and the previous notation are used to prove the local convergence of method (17.2).

Theorem 26. Under the hypotheses (H) with α = r∗, choose x0 ∈ T(x∗, r∗) − {x∗}. Then, the sequence {xn} generated by method (17.2) is well defined in T(x∗, r∗), remains in T(x∗, r∗) and converges to x∗, so that for all n = 0, 1, 2, . . .

‖yn − x∗‖ ≤ ψ1(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖ < r∗, (17.13)
‖zn − x∗‖ ≤ ψ2(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖ (17.14)

and

‖xn+1 − x∗‖ ≤ ψ3(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖, (17.15)

where r∗ is given by (17.9) and the functions ψk, k = 1, 2, 3, are developed previously. Moreover, x∗ is the only solution of equation F(x) = 0 in the set Ω1 given in (h5).

Proof. Error estimates (17.13)-(17.15) are proved by mathematical induction. Let u ∈ T(x∗, r∗) − {x∗} be arbitrary. In view of (17.9), (17.10), (h1) and (h2), we obtain in turn that

‖F′(x∗)^{-1}(F′(u) − F′(x∗))‖ ≤ ϕ0(‖u − x∗‖) ≤ ϕ0(r∗) < 1. (17.16)

Using (17.16) and the Banach lemma on invertible operators [5], F′(u)^{-1} ∈ L(B2, B1), with

‖F′(u)^{-1}F′(x∗)‖ ≤ 1/(1 − ϕ0(‖u − x∗‖)). (17.17)

Then, the iterate y0 is well defined by the first substep of method (17.2) for n = 0, from which we also have

y0 − x∗ = x0 − x∗ − F′(x0)^{-1}F(x0) + (1/3)F′(x0)^{-1}F(x0). (17.18)

By (17.9), (17.12) (for k = 1), (h3), (17.17) (for u = x0) and (17.18), we get in turn that

‖y0 − x∗‖ ≤ ‖F′(x0)^{-1}F′(x∗)‖ ‖∫_0^1 F′(x∗)^{-1}(F′(x∗ + θ(x0 − x∗)) − F′(x∗))dθ (x0 − x∗)‖ + (1/3)‖F′(x0)^{-1}F′(x∗)‖ ‖F′(x∗)^{-1}F(x0)‖
≤ [∫_0^1 ϕ((1 − θ)‖x0 − x∗‖)dθ + (1/3)∫_0^1 ϕ1(θ‖x0 − x∗‖)dθ] ‖x0 − x∗‖ / (1 − ϕ0(‖x0 − x∗‖))
= ψ1(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < r∗, (17.19)



showing y0 ∈ T(x∗, r∗) − {x∗} and estimate (17.13) for n = 0. Next, the iterate z0 is well defined by the second substep of method (17.2), from which we can also write

z0 − x∗ = x0 − x∗ − F′(x0)^{-1}F(x0) − ((15/8)I − 3F′(x0)^{-1}F′(y0) + (9/8)(F′(x0)^{-1}F′(y0))²)F′(x0)^{-1}F(x0)
= x0 − x∗ − F′(x0)^{-1}F(x0) − (3/8)[3(F′(x0)^{-1}F′(y0) − I)² − 2(F′(x0)^{-1}F′(y0) − I)]F′(x0)^{-1}F(x0). (17.20)

Using (17.9), (17.12) (for k = 2), (17.17) (for u = x0), (17.19) and (17.20), we obtain in turn

‖z0 − x∗‖ ≤ [∫_0^1 ϕ((1 − θ)‖x0 − x∗‖)dθ / (1 − ϕ0(‖x0 − x∗‖)) + (3/8)(3((ϕ0(‖x0 − x∗‖) + ϕ0(‖y0 − x∗‖))/(1 − ϕ0(‖x0 − x∗‖)))² + 2(ϕ0(‖x0 − x∗‖) + ϕ0(‖y0 − x∗‖))/(1 − ϕ0(‖x0 − x∗‖))) ∫_0^1 ϕ1(θ‖x0 − x∗‖)dθ / (1 − ϕ0(‖x0 − x∗‖))] ‖x0 − x∗‖
≤ ψ2(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖, (17.21)

showing z0 ∈ T(x∗, r∗) and (17.14) for n = 0. The iterate x1 is well defined by method (17.2) for n = 0. We can also write

x1 − x∗ = z0 − x∗ − F′(z0)^{-1}F(z0) + F′(z0)^{-1}(F′(x0) − F′(z0))F′(x0)^{-1}F(z0) − (3/2)F′(x0)^{-1}(F′(x0) − F′(y0))F′(x0)^{-1}F(z0). (17.22)

It follows from (17.9), (17.12) (for k = 3), (17.17) (for u = z0), (17.21) and (17.22) that

‖x1 − x∗‖ ≤ [∫_0^1 ϕ((1 − θ)‖z0 − x∗‖)dθ / (1 − ϕ0(‖z0 − x∗‖)) + (ϕ0(‖x0 − x∗‖) + ϕ0(‖z0 − x∗‖)) ∫_0^1 ϕ1(θ‖z0 − x∗‖)dθ / ((1 − ϕ0(‖x0 − x∗‖))(1 − ϕ0(‖z0 − x∗‖))) + (3/2)(ϕ0(‖x0 − x∗‖) + ϕ0(‖y0 − x∗‖)) ∫_0^1 ϕ1(θ‖z0 − x∗‖)dθ / (1 − ϕ0(‖x0 − x∗‖))²] ‖z0 − x∗‖
≤ ψ3(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖, (17.23)

showing (17.15) for n = 0 and x1 ∈ T(x∗, r∗). Replace x0, y0, z0, x1 by xj, yj, zj, xj+1 in the preceding calculations to complete the induction for items (17.13)-(17.15). Hence, by



the estimate

‖xj+1 − x∗‖ ≤ β‖xj − x∗‖ < r∗, (17.24)

where β = ψ3(‖x0 − x∗‖) ∈ [0, 1), we conclude lim_{j→∞} xj = x∗ and xj+1 ∈ T(x∗, r∗). Furthermore, let A = ∫_0^1 F′(x∗ + θ(x∗∗ − x∗))dθ for some x∗∗ ∈ Ω1 with F(x∗∗) = 0. It then follows by (h5) that

‖F′(x∗)^{-1}(A − F′(x∗))‖ ≤ ∫_0^1 ϕ0(θ‖x∗ − x∗∗‖)dθ ≤ ∫_0^1 ϕ0(θb)dθ < 1,

so A^{-1} ∈ L(B2, B1). Consequently, from 0 = F(x∗∗) − F(x∗) = A(x∗∗ − x∗), we obtain x∗∗ = x∗.

Remark 27.

1. In view of (h2) and the estimate

‖F′(x∗)^{-1}F′(x)‖ = ‖F′(x∗)^{-1}(F′(x) − F′(x∗)) + I‖ ≤ 1 + ‖F′(x∗)^{-1}(F′(x) − F′(x∗))‖ ≤ 1 + ϕ0(‖x − x∗‖),

the second condition in (h3) can be dropped and ϕ1 can be replaced by ϕ1(t) = 1 + ϕ0(t) or ϕ1(t) = 1 + ϕ0(R0), since t ∈ [0, R0).

2. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form F′(x) = P(F(x)), where P is a continuous operator. Then, since F′(x∗) = P(F(x∗)) = P(0), we can apply the results without actually knowing x∗. For example, let F(x) = e^x − 1. Then, we can choose P(x) = x + 1.

3. Let ϕ0(t) = L0t and ϕ(t) = Lt. In [2, 3] we showed that rA = 2/(2L0 + L) is the convergence radius of Newton's method

xn+1 = xn − F′(xn)^{-1}F(xn) for each n = 0, 1, 2, · · · (17.25)

under the conditions (h1)-(h3). It follows from the definition of α that the convergence radius r∗ of method (17.2) cannot be larger than the convergence radius rA of the second-order Newton method (17.25). As already noted in [2, 3], rA is at least as large as the convergence radius given by Rheinboldt [16]

rR = 2/(3L1), (17.26)



where L1 is the Lipschitz constant on Ω. The same value for rR was given by Traub [18]. In particular, for L0 < L1 we have rR < rA and

rR/rA → 1/3 as L0/L1 → 0.

That is, the radius of convergence rA is at most three times larger than Rheinboldt's.

4. We can compute the computational order of convergence (COC), defined by

ξ = ln(‖xn+1 − x∗‖ / ‖xn − x∗‖) / ln(‖xn − x∗‖ / ‖xn−1 − x∗‖),

or the approximate computational order of convergence (ACOC)

ξ1 = ln(‖xn+1 − xn‖ / ‖xn − xn−1‖) / ln(‖xn − xn−1‖ / ‖xn−1 − xn−2‖).

Next, we provide the local convergence analysis of method (17.3) along the same lines. Suppose that the equation

ψ̄2(t) − 1 = 0

(17.27)

has a least solution r̄2 ∈ (0, R0), where

ψ̄2(t) = ∫_0^1 ϕ((1 − θ)t)dθ / (1 − ϕ0(t)) + (3/8)(ϕ1(ψ1(t)t)/(1 − ϕ0(t))²)(5((ϕ0(t) + ϕ0(ψ1(t)t))/(1 − ϕ0(t)))² + 2(ϕ0(t) + ϕ0(ψ1(t)t))/(1 − ϕ0(t))) ∫_0^1 ϕ1(θt)dθ.

Suppose that the equation

ϕ0(ψ̄2(t)t) − 1 = 0 (17.28)

has a minimal positive solution R̄1 ∈ M − {0}. Set R̄ = min{R0, R̄1}. Suppose that the equation

ψ̄3(t) − 1 = 0 (17.29)



has a minimal solution r̄3 ∈ (0, R̄), where

ψ̄3(t) = [∫_0^1 ϕ((1 − θ)ψ̄2(t)t)dθ / (1 − ϕ0(ψ̄2(t)t)) + (ϕ0(ψ̄2(t)t) + ϕ0(t)) ∫_0^1 ϕ1(θψ̄2(t)t)dθ / ((1 − ϕ0(ψ̄2(t)t))(1 − ϕ0(t))) + ((1/2)((ϕ0(ψ1(t)t) + ϕ0(t))/((1 − ϕ0(ψ1(t)t))(1 − ϕ0(t))))² + 3(ϕ0(ψ1(t)t) + ϕ0(t))/((1 − ϕ0(ψ1(t)t))(1 − ϕ0(t)))) ∫_0^1 ϕ1(θψ̄2(t)t)dθ / (1 − ϕ0(t))] ψ̄2(t).

We prove that

r̄∗ = min{r1, r̄2, r̄3} (17.30)

is a radius of convergence for method (17.3). The following auxiliary estimations are needed:

zn − x∗ = xn − x∗ − F′(xn)^{-1}F(xn) − (3/8)an(7I − 12an + 5an²)F′(xn)^{-1}F(xn)
= xn − x∗ − F′(xn)^{-1}F(xn) − (3/8)an(5(an − I)² − 2(an − I))F′(xn)^{-1}F(xn),

so

‖zn − x∗‖ ≤ [∫_0^1 ϕ((1 − θ)‖xn − x∗‖)dθ / (1 − ϕ0(‖xn − x∗‖)) + (3/8)(ϕ1(‖yn − x∗‖)/(1 − ϕ0(‖xn − x∗‖))²)(5((ϕ0(‖xn − x∗‖) + ϕ0(‖yn − x∗‖))/(1 − ϕ0(‖xn − x∗‖)))² + 2(ϕ0(‖xn − x∗‖) + ϕ0(‖yn − x∗‖))/(1 − ϕ0(‖xn − x∗‖))) ∫_0^1 ϕ1(θ‖xn − x∗‖)dθ] ‖xn − x∗‖
≤ ψ̄2(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖ < r̄∗,

xn+1 − x∗ = zn − x∗ − F′(zn)^{-1}F(zn) + F′(zn)^{-1}(F′(xn) − F′(zn))F′(xn)^{-1}F(zn) − (1/2)(4I − 5an + an²)F′(xn)^{-1}F(zn)
= zn − x∗ − F′(zn)^{-1}F(zn) + F′(zn)^{-1}(F′(xn) − F′(zn))F′(xn)^{-1}F(zn) − (1/2)((an − I)² − 3(an − I))F′(xn)^{-1}F(zn),



leading to

‖xn+1 − x∗‖ ≤ [∫_0^1 ϕ((1 − θ)‖zn − x∗‖)dθ / (1 − ϕ0(‖zn − x∗‖)) + (ϕ0(‖xn − x∗‖) + ϕ0(‖zn − x∗‖)) ∫_0^1 ϕ1(θ‖zn − x∗‖)dθ / ((1 − ϕ0(‖xn − x∗‖))(1 − ϕ0(‖zn − x∗‖))) + ((1/2)((ϕ0(‖xn − x∗‖) + ϕ0(‖yn − x∗‖))/((1 − ϕ0(‖xn − x∗‖))(1 − ϕ0(‖yn − x∗‖))))² + 3(ϕ0(‖xn − x∗‖) + ϕ0(‖yn − x∗‖))/((1 − ϕ0(‖xn − x∗‖))(1 − ϕ0(‖yn − x∗‖)))) ∫_0^1 ϕ1(θ‖zn − x∗‖)dθ / (1 − ϕ0(‖xn − x∗‖))] ‖zn − x∗‖
≤ ψ̄3(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖.

Hence, we arrive at the local convergence result for method (17.3).

Theorem 27. Under the hypotheses (H) with α = r̄∗, choose x0 ∈ T(x∗, r̄∗) − {x∗}. Then, the conclusions of Theorem 26 hold for method (17.3), with ψ̄2, ψ̄3 and r̄∗ replacing ψ2, ψ3 and r∗, respectively.
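The computational order of convergence of Remark 27 can be estimated from consecutive error norms; a small sketch with synthetic second-order errors (real runs would use e_n = ‖xn − x∗‖ from an actual iteration):

```python
# Estimating the computational order of convergence (COC) of Remark 27.
# Synthetic errors e_n = 10^(-2^n) mimic a second-order method, so the
# estimate should come out close to 2.
import math

def coc(e_prev, e, e_next):
    """xi = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}) from true errors e_n = ||x_n - x*||."""
    return math.log(e_next / e) / math.log(e / e_prev)

errors = [10.0 ** -(2 ** n) for n in range(1, 4)]   # 1e-2, 1e-4, 1e-8
xi = coc(errors[0], errors[1], errors[2])
print(xi)   # ~2.0
```

The ACOC ξ1 is computed identically, with the differences ‖xn+1 − xn‖ in place of the (usually unknown) true errors.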

3. Numerical Examples

Example 16. Consider the kinematic system

F1′(x) = e^x, F2′(y) = (e − 1)y + 1, F3′(z) = 1,

with F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3), B1 = B2 = R³, D = Ū(0, 1) and x∗ = (0, 0, 0)^T. Define the function F on D for w = (x, y, z)^T by

F(w) = (e^x − 1, ((e − 1)/2)y² + y, z)^T.

Then, we get

F′(v) = diag(e^x, (e − 1)y + 1, 1),

so ϕ0(t) = (e − 1)t, ϕ(t) = e^{1/(e−1)}t and ϕ1(t) = e^{1/(e−1)}. Then, the radii are:

r1 = 0.154407, r2 = 0.0964741, r3 = 0.0539632, r̄2 = 0.0619095, r̄3 = 0.058153.

Example 17. Consider B1 = B2 = C[0, 1], D = Ū(0, 1) and F : D −→ B2 defined by

F(ψ)(x) = ψ(x) − 5 ∫_0^1 xθψ(θ)³dθ.

We have that

[F′(ψ)(ξ)](x) = ξ(x) − 15 ∫_0^1 xθψ(θ)²ξ(θ)dθ, for each ξ ∈ D. (17.31)



Then, we get that x∗ = 0, so ϕ0(t) = 7.5t, ϕ(t) = 15t and ϕ1(t) = 2. Then, the radii are:

r1 = 0.02222, r2 = 0.0189483, r3 = 0.00988395, r̄2 = 0.0116445, r̄3 = 0.0111289.

Example 18. For the academic example of the introduction, we have ϕ0(t) = ϕ(t) = 96.6629073t and ϕ1(t) = 2. Then, the radii are:

r1 = 0.00229894, r2 = 0.00158586, r3 = 0.00369474, r̄2 = 0.00094831, r̄3 = 0.00090528.
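The radii above can be reproduced by solving ψk(t) = 1 numerically. For Example 17, the integrals evaluate in closed form, giving ψ1(t) = (15t/2 + 2/3)/(1 − 7.5t) and hence r1 = 1/45 ≈ 0.02222; a bisection check:

```python
# Bisection check of r1 for Example 17: phi0(t) = 7.5 t, phi(t) = 15 t, phi1(t) = 2.
# Here int_0^1 phi((1 - theta) t) dtheta = 15 t / 2 and int_0^1 phi1(theta t) dtheta = 2,
# so psi1(t) = (15 t / 2 + 2 / 3) / (1 - 7.5 t) and r1 solves psi1(t) = 1.

def psi1(t):
    return (15.0 * t / 2 + 2.0 / 3) / (1 - 7.5 * t)

lo, hi = 0.0, 1 / 7.5 - 1e-9      # stay left of the pole of psi1
for _ in range(100):               # bisection on psi1(t) - 1 = 0
    mid = (lo + hi) / 2
    if psi1(mid) < 1.0:
        lo = mid
    else:
        hi = mid
r1 = (lo + hi) / 2
print(r1)   # 1/45 = 0.0222...
```

The remaining radii r2, r3, r̄2, r̄3 are obtained the same way from ψ2, ψ3, ψ̄2, ψ̄3, evaluating the integrals numerically when no closed form is available.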

4. Conclusion

In this chapter, we compared the radii of convergence of two sixth-convergence-order Jarratt-type methods for solving nonlinear equations. Our convergence analysis is based on the first Fréchet derivative, which is the only one appearing in the methods. Numerical examples testing the theoretical results complete the chapter.

References

[1] Abbasbandy, S., Bakhtiari, P., Cordero, A., Torregrosa, J. R. and Lofti, T., New efficient methods for solving nonlinear systems of equations with arbitrary even order, Appl. Math. Comput., 287-288 (2016), 94-103.
[2] Argyros, I. K., A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach spaces, J. Math. Anal. Appl., 298 (2004), 374-397.
[3] Argyros, I. K., Convergence and Applications of Newton-Type Iterations, Springer-Verlag, New York, 2008.
[4] Argyros, I. K., A semilocal convergence analysis for directional Newton methods, Math. Comp., 80 (2011), 327-343.
[5] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15 (Chui, C. K. and Wuytack, L., eds.), Elsevier, New York, 2007.
[6] Argyros, I. K. and Magreñán, A. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., George, S. and Magreñán, A. A., Local convergence for multi-point-parametric Chebyshev-Halley-type methods of higher convergence order, J. Comput. Appl. Math., 282 (2015), 215-224.
[8] Argyros, I. K. and Magreñán, A. A., A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative, Numer. Algorithms, 71 (2015), 1-23.



[9] Argyros, I. K. and George, S., On the complexity of extending the convergence region for Traub's method, Journal of Complexity, 56 (2020), 101423.
[10] Argyros, I. K. and George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2020.
[11] Babajee, D. K. R., Dauhoo, M. Z., Darvishi, M. T., Karami, A. and Barati, A., Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations, J. Comput. Appl. Math., 233 (2010), 2002-2012.
[12] Darvishi, M. T., A two-step high-order Newton-like method for solving systems of nonlinear equations, Int. J. Pure Appl. Math., 57 (2009), 543-555.
[13] Grau-Sánchez, M., Grau, A. and Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 218 (2011), 2377-2385.
[14] Jaiswal, J. P., Semilocal convergence of an eighth-order method in Banach spaces and its computational efficiency, Numer. Algorithms, 71 (2016), 933-951.
[15] Jaiswal, J. P., Analysis of semilocal convergence in Banach spaces under relaxed condition and computational efficiency, Numer. Anal. Appl., 10 (2017), 129-139.
[16] Regmi, S. and Argyros, I. K., Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces, Nova Science Publishers, NY, 2019.
[17] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, in: Mathematical Models and Numerical Methods (Tikhonov, A. N. et al., eds.), Banach Center Publ. 3, Warsaw, Poland, 1977, 129-142.
[18] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, 1964.
[19] Sharma, J. R. and Arora, H., Efficient Jarratt-like methods for solving systems of nonlinear equations, Calcolo, 51(1) (2014), 193-210.
[20] Sharma, J. R. and Kumar, D., A fast and efficient composite Newton-Chebyshev method for systems of nonlinear equations, J. Complexity, 49 (2018), 56-73.
[21] Sharma, R., Sharma, J. R. and Kalra, N., A modified Newton-Özban composition for solving nonlinear systems, International Journal of Computational Methods, 17(8) (2020), World Scientific.
[22] Wang, X. and Li, Y., An efficient sixth order Newton type method for solving nonlinear systems, Algorithms, 10(45) (2017), 1-9.
[23] Weerakoon, S. and Fernando, T. G. I., A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett., 13 (2000), 87-93.

Chapter 18

Extended Derivative Free Algorithms of Order Seven

1. Introduction

Symmetries play a vital role in the dynamics of physical systems; quantum physics is one example. Such problems require algorithms for solving equations. In this chapter, we consider derivative-free algorithms of order seven for solving nonlinear equations in Banach spaces. We use only divided differences to extend the usage of these algorithms, whereas earlier papers require derivatives up to order eight for convergence. Numerical examples complete the chapter. Consider the problem of solving the equation

F(x) = 0,

(18.1)

where F : Ω ⊂ X −→ Y is continuously Fréchet differentiable, X, Y are Banach spaces and Ω is a nonempty convex set. In this chapter, we study two derivative-free algorithms of order seven. The family of algorithms we are interested in is [10,11,15,16]:

uk = xk + αF(xk),
yk = xk − [uk, xk; F]^{-1}F(xk),
zk = yk − Ak^{-1}F(yk),
xk+1 = zk − Bk^{-1}F(zk) (18.2)

and

uk = xk + βF(xk),
yk = xk − [uk, xk; F]^{-1}F(xk),
zk = yk − [yk, xk; F]^{-1}Ck[yk, xk; F]^{-1}F(yk),
xk+1 = zk − Bk^{-1}F(zk), (18.3)

where

Ak = [yk, xk; F] + [yk, uk; F] − [uk, xk; F],
Bk = [zk, xk; F] + [zk, yk; F] − [yk, xk; F],

and

Ck = [yk, xk; F] − [yk, uk; F] + [uk, xk; F].



The efficiency and convergence order were given in [15, 16] (see also [10, 11]) using conditions up to the eighth derivative, restricting the applicability of these algorithms. For example, let X = Y = R, Ω = [−1/2, 3/2] and define f on Ω by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, f(0) = 0.

Then, we have t∗ = 1 and

f′′′(t) = 6 log t² + 60t² − 24t + 22.

Obviously, f′′′(t) is not bounded on Ω. So, the convergence of these methods is not guaranteed by the analysis in earlier works [1]-[16]. Our convergence analysis is based only on the first Fréchet derivative, which is the only derivative appearing in the methods. We provide a computable radius of convergence, also not given in [15, 16]. This way, we locate a set of initial points ensuring the convergence of the methods. The numerical examples are chosen to show how the theoretically predicted radii are computed. In particular, the last example shows that earlier results cannot be used to show the convergence of the methods. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The chapter contains the local convergence analysis in Section 2 and the numerical examples in Section 3.
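The unboundedness of f′′′ on Ω is easy to check numerically: the term 6 log t² in f′′′(t) = 6 log t² + 60t² − 24t + 22 forces |f′′′(t)| → ∞ as t → 0. A minimal check:

```python
import math

def fppp(t):
    # Third derivative of f(t) = t^3 log t^2 + t^5 - t^4 (for t != 0).
    return 6.0 * math.log(t * t) + 60.0 * t * t - 24.0 * t + 22.0

# |f'''| grows without bound as t -> 0 inside Omega = [-1/2, 3/2].
values = [abs(fppp(10.0 ** (-k))) for k in range(1, 7)]
```
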

2. Local Analysis

We first introduce some parameters and functions to be used in the local convergence analysis of algorithm (18.2). Set M = [0, ∞) and consider a ≥ 0 and b ≥ 0. Assume:

(i) the function ξ0(bt, t) − 1 has a least root r0 ∈ M − {0} for some nondecreasing and continuous function ξ0 : M × M −→ M. Set M0 = [0, r0).

(ii) ζ1(t) − 1 = 0 has a least root ρ1 ∈ M0 − {0} for some nondecreasing and continuous function ξ : M0 × M0 −→ M, with ζ1 : M0 −→ M given by

ζ1(t) = ξ(a|β|t, t) / (1 − ξ0(bt, t)).

(iii) p(t) − 1 has a least root r1 ∈ M0 − {0}, where

p(t) = ξ0(ζ1(t)t, bt) + ξ0(bt, t) + ξ0(ζ1(t)t, t).

Set r2 = min{r0, r1} and M1 = [0, r2).


(iv) ζ2(t) − 1 has a least root ρ2 ∈ M1 − {0}, where

ζ2(t) = ϕ(t)ζ1(t) / (1 − p(t))

and

ϕ(t) = ξ0(ζ1(t)t, t) + ξ0(bt, t) + ξ0(ζ1(t)t, bt) + ξ0(ζ1(t)t, 0).

(v) q(t) − 1 has a least root r3 ∈ M1 − {0}, where

q(t) = ξ0(ζ2(t)t, t) + ξ0(ζ2(t)t, ζ1(t)t) + ξ0(ζ1(t)t, t).

Set r = min{r2, r3} and M2 = [0, r).

(vi) ζ3(t) − 1 has a least root ρ3 ∈ M2 − {0}, where

ζ3(t) = ψ(t)ζ2(t) / (1 − q(t)),

with

ψ(t) = ξ0(ζ2(t)t, t) + ξ0(ζ2(t)t, ζ1(t)t) + ξ0(ζ1(t)t, t) + ξ0(ζ2(t)t, 0).

Set

ρ = min{ρm}, m = 1, 2, 3. (18.4)

Parameter ρ shall be proven to be a convergence radius for algorithm (18.2). Set T = [0, ρ). By (18.4), we have, for each t ∈ T,

0 ≤ ξ0(bt, t) < 1, (18.5)
0 ≤ p(t) < 1, (18.6)
0 ≤ q(t) < 1, (18.7)
and
0 ≤ ζm(t) < 1. (18.8)

By B̄(x, δ) we denote the closure of the open ball B(x, δ) with center x ∈ X and radius δ > 0. The hypotheses (H) shall be used with the functions “ξ” as previously defined and x∗ a simple zero of F. Suppose:

(H1) For each x, y ∈ Ω,

‖F′(x∗)^{-1}([x, y; F] − F′(x∗))‖ ≤ ξ0(‖x − x∗‖, ‖y − x∗‖)

and

‖I + β[x, x∗; F]‖ ≤ b.

Set Ω0 = Ω ∩ B(x∗, r0).


(H2) For each x, y ∈ Ω0,

‖F′(x∗)^{-1}([x, y; F] − [y, x∗; F])‖ ≤ ξ(‖x − y‖, ‖y − x∗‖)

and

‖[x, x∗; F]‖ ≤ a.

(H3) B̄(x∗, R) ⊂ Ω, where R = max{ρ̃, aρ̃, bρ̃} for some ρ̃ to be given later.

(H4) There exists ρ∗ ≥ ρ̃ satisfying ξ0(0, ρ∗) < 1 or ξ0(ρ∗, 0) < 1. Set Ω1 = Ω ∩ B̄(x∗, ρ∗).

Next, the main local convergence result for algorithm (18.2) is given using the hypotheses (H).

Theorem 28. Suppose that the hypotheses (H) hold with ρ̃ = ρ. Then limn→∞ xn = x∗, provided that x0 ∈ B(x∗, ρ̃) − {x∗}. Moreover, the only root of F in the set Ω1 given in (H4) is x∗.

Proof. Let dn = ‖xn − x∗‖. We shall use induction to prove the assertions

‖yk − x∗‖ ≤ ζ1(dk)dk ≤ dk < ρ, (18.9)
‖zk − x∗‖ ≤ ζ2(dk)dk ≤ dk, (18.10)
dk+1 ≤ ζ3(dk)dk ≤ dk, (18.11)

where the radius ρ is defined by (18.4) and the functions ζm are given previously. Using (H1), (18.4) and (18.5), we have

‖F′(x∗)^{-1}([u0, x0; F] − F′(x∗))‖ ≤ ξ0(‖u0 − x∗‖, ‖x0 − x∗‖)
≤ ξ0(b‖x0 − x∗‖, ‖x0 − x∗‖) ≤ ξ0(bρ, ρ) < 1, (18.12)

where we also used

‖u0 − x∗‖ = ‖x0 − x∗ + βF(x0)‖ = ‖(I + β[x0, x∗; F])(x0 − x∗)‖
≤ ‖I + β[x0, x∗; F]‖ ‖x0 − x∗‖ ≤ b‖x0 − x∗‖ < R

and, similarly, ‖u0 − x0‖ = |β| ‖[x0, x∗; F](x0 − x∗)‖ ≤ a|β| ‖x0 − x∗‖. It then follows from (18.12) and a lemma due to Banach [3] on invertible operators that [u0, x0; F]^{-1} ∈ L(Y, X) with

‖[u0, x0; F]^{-1}F′(x∗)‖ ≤ 1 / (1 − ξ0(b‖x0 − x∗‖, ‖x0 − x∗‖)). (18.13)


Notice also that y0 is well defined by the first sub-step of algorithm (18.2), from which we can write

y0 − x∗ = x0 − x∗ − [u0, x0; F]^{-1}F(x0)
= [u0, x0; F]^{-1}([u0, x0; F] − [x0, x∗; F])(x0 − x∗). (18.14)

Then, by (18.4), (18.8) (for m = 1), (18.13), (H2) and (18.14), we have

‖y0 − x∗‖ ≤ ξ(‖u0 − x0‖, ‖x0 − x∗‖)‖x0 − x∗‖ / (1 − ξ0(b‖x0 − x∗‖, ‖x0 − x∗‖))
≤ ξ(a|β|‖x0 − x∗‖, ‖x0 − x∗‖)‖x0 − x∗‖ / (1 − ξ0(b‖x0 − x∗‖, ‖x0 − x∗‖))
≤ ζ1(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < ρ, (18.15)

proving (18.9) for n = 0 and y0 ∈ B(x∗, ρ). Next, we prove A0^{-1} ∈ L(Y, X). By (18.4), (18.6), (H1)-(H3) and (18.15), we get

‖F′(x∗)^{-1}(A0 − F′(x∗))‖ = ‖F′(x∗)^{-1}([y0, x0; F] + [y0, u0; F] − [u0, x0; F] − F′(x∗))‖
≤ ‖F′(x∗)^{-1}([y0, u0; F] − [u0, x0; F])‖ + ‖F′(x∗)^{-1}([y0, x0; F] − F′(x∗))‖
≤ ξ0(‖y0 − x∗‖, ‖u0 − x∗‖) + ξ0(‖u0 − x∗‖, ‖x0 − x∗‖) + ξ0(‖y0 − x∗‖, ‖x0 − x∗‖)
≤ p(ρ) < 1, (18.16)

so

‖A0^{-1}F′(x∗)‖ ≤ 1 / (1 − p(‖x0 − x∗‖)). (18.17)

Notice that z0 exists by the second sub-step of algorithm (18.2), from which we can also write

z0 − x∗ = y0 − x∗ − A0^{-1}F(y0) = A0^{-1}(A0 − [y0, x∗; F])(y0 − x∗). (18.18)

But we get

‖F′(x∗)^{-1}(A0 − [y0, x∗; F])‖
≤ ‖F′(x∗)^{-1}([y0, x0; F] − [u0, x0; F])‖ + ‖F′(x∗)^{-1}([y0, u0; F] − [y0, x∗; F])‖
≤ ξ0(‖y0 − x∗‖, ‖x0 − x∗‖) + ξ0(‖u0 − x∗‖, ‖x0 − x∗‖) + ξ0(‖y0 − x∗‖, ‖u0 − x∗‖) + ξ0(‖y0 − x∗‖, 0)
≤ ϕ(‖x0 − x∗‖). (18.19)

Hence, by (18.4), (18.8) (for m = 2) and (18.17)-(18.19), we obtain

‖z0 − x∗‖ ≤ ϕ(‖x0 − x∗‖)‖y0 − x∗‖ / (1 − p(‖x0 − x∗‖))
≤ ζ2(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖, (18.20)

proving (18.10) for n = 0 and z0 ∈ B(x∗, ρ). Next, we also show B0^{-1} ∈ L(Y, X). In view of (18.4), (18.7), (H1), (18.15) and (18.20), we have


‖F′(x∗)^{-1}(B0 − F′(x∗))‖
≤ ‖F′(x∗)^{-1}([z0, x0; F] − [y0, x0; F])‖ + ‖F′(x∗)^{-1}([z0, y0; F] − F′(x∗))‖
≤ ξ0(‖z0 − x∗‖, ‖x0 − x∗‖) + ξ0(‖z0 − x∗‖, ‖y0 − x∗‖) + ξ0(‖y0 − x∗‖, ‖x0 − x∗‖)
≤ q(‖x0 − x∗‖) ≤ q(ρ) < 1, (18.21)

so

‖B0^{-1}F′(x∗)‖ ≤ 1 / (1 − q(‖x0 − x∗‖)). (18.22)

Hence, iterate x1 exists by the third sub-step of algorithm (18.2), from which we can write

x1 − x∗ = z0 − x∗ − B0^{-1}F(z0) = B0^{-1}(B0 − [z0, x∗; F])(z0 − x∗). (18.23)

But we also get

‖F′(x∗)^{-1}(B0 − [z0, x∗; F])‖
≤ ‖F′(x∗)^{-1}([z0, x0; F] − F′(x∗))‖ + ‖F′(x∗)^{-1}([z0, y0; F] − F′(x∗))‖
+ ‖F′(x∗)^{-1}([y0, x0; F] − F′(x∗))‖ + ‖F′(x∗)^{-1}([z0, x∗; F] − F′(x∗))‖
≤ ξ0(‖z0 − x∗‖, ‖x0 − x∗‖) + ξ0(‖z0 − x∗‖, ‖y0 − x∗‖) + ξ0(‖y0 − x∗‖, ‖x0 − x∗‖) + ξ0(‖z0 − x∗‖, 0)
≤ ψ(‖x0 − x∗‖). (18.24)

Hence, by (18.4), (18.8) (for m = 3), (18.15), (18.20), (18.23) and (18.24), we obtain

‖x1 − x∗‖ ≤ ψ(‖x0 − x∗‖)‖z0 − x∗‖ / (1 − q(‖x0 − x∗‖))
≤ ζ3(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖, (18.25)

proving (18.11) for n = 0 and x1 ∈ B(x∗, ρ). The induction for assertions (18.9)-(18.11) is completed by simply exchanging x0, y0, z0, x1 with xi, yi, zi, xi+1 in the previous estimates. Then, we get from

‖xi+1 − x∗‖ ≤ α‖xi − x∗‖ < ρ, (18.26)

where α = ζ3(‖x0 − x∗‖) ∈ [0, 1), that limi→∞ xi = x∗ and xi+1 ∈ B(x∗, ρ). Set Q = [x∗, y∗; F] for some y∗ ∈ Ω1 with F(y∗) = 0. Then, by (H1) and (H4), we have

‖F′(x∗)^{-1}(Q − F′(x∗))‖ ≤ ξ0(0, ‖y∗ − x∗‖) ≤ ξ0(0, ρ∗) < 1,

so y∗ = x∗ follows from the identity 0 = F(y∗) − F(x∗) = Q(y∗ − x∗) and the invertibility of Q.


Next, we present the local convergence analysis of algorithm (18.3) in an analogous way. But this time the functions ζ2 and ζ3 are replaced, respectively, by

ζ̄2(t) = h(t)ζ1(t) / (1 − ξ0(ζ1(t)t, t)),

ζ̄3(t) = (ξ0(ζ̄2(t)t, t) + ξ0(ζ̄2(t)t, ζ1(t)t) + ξ0(ζ1(t)t, t) + ξ0(ζ̄2(t)t, 0))ζ̄2(t) / (1 − q̄(t)),

where

h(t) = ξ0(ζ1(t)t, t) + ξ0(ζ1(t)t, 0) + (ξ0(ζ1(t)t, bt) + ξ0(bt, t)) aζ1(t)t / (1 − ξ0(ζ1(t)t, t))

and

q̄(t) = ξ0(ζ̄2(t)t, t) + ξ0(ζ̄2(t)t, ζ1(t)t) + ξ0(ζ1(t)t, t).

Define

ρ̄ = min{ρ1, ρ̄2, ρ̄3}, (18.27)

where ρ̄2, ρ̄3 are the least positive roots of the functions ζ̄2(t) − 1 and ζ̄3(t) − 1, respectively, assuming they exist. Functions ζ̄2 and ζ̄3 are motivated by the estimates

zk − x∗ = yk − x∗ − [yk, xk; F]^{-1}Ck[yk, xk; F]^{-1}F(yk)
= [yk, xk; F]^{-1}([yk, xk; F] − Ck[yk, xk; F]^{-1}[yk, x∗; F])(yk − x∗),

so

‖zk − x∗‖ ≤ h(‖xk − x∗‖)‖yk − x∗‖ / (1 − ξ0(‖yk − x∗‖, ‖xk − x∗‖))
≤ ζ̄2(‖xk − x∗‖)‖xk − x∗‖ ≤ ‖xk − x∗‖,

where we also used

[yk, xk; F] − ([yk, xk; F] − [yk, uk; F] + [uk, xk; F])[yk, xk; F]^{-1}[yk, x∗; F]
= ([yk, xk; F] − F′(x∗)) + (F′(x∗) − [yk, x∗; F])
+ ([yk, uk; F] − [uk, xk; F])[yk, xk; F]^{-1}[yk, x∗; F],

and, by composing with F′(x∗)^{-1} and taking norms, an upper bound of the previous expression is

ξ0(‖yn − x∗‖, ‖xn − x∗‖) + ξ0(‖yn − x∗‖, 0)
+ (ξ0(‖yn − x∗‖, ‖un − x∗‖) + ξ0(‖un − x∗‖, ‖xn − x∗‖)) a‖yn − x∗‖ / (1 − ξ0(‖yn − x∗‖, ‖xn − x∗‖))
≤ h(‖xn − x∗‖).

Moreover, we obtain

‖xn+1 − x∗‖ ≤ ζ̄3(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖

as in (18.25) by simply replacing ζ2 with ζ̄2. Hence, we arrive at the corresponding local convergence analysis of algorithm (18.3).

Theorem 29. Suppose that the hypotheses (H) hold with ρ̃ = ρ̄. Then, the conclusions of Theorem 28 hold for algorithm (18.3) with ρ̄, ζ̄2, ζ̄3 replacing ρ, ζ2 and ζ3, respectively.

3. Numerical Examples

In our examples we use the divided difference [x, y; F] = ∫₀¹ F′(y + θ(x − y))dθ and β = −1.
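This integral representation can be checked numerically: by the fundamental theorem of calculus it satisfies the secant property [x, y; F](x − y) = F(x) − F(y). The sketch below verifies this in the scalar case with composite Simpson quadrature; the choice F(t) = t³ is illustrative.

```python
def divided_difference(Fprime, x, y, n=200):
    """[x, y; F] = integral over [0, 1] of F'(y + s*(x - y)) ds, via Simpson."""
    if n % 2:
        n += 1
    h = 1.0 / n
    total = Fprime(y) + Fprime(x)          # endpoint contributions
    for i in range(1, n):
        s = i * h
        total += (4.0 if i % 2 else 2.0) * Fprime(y + s * (x - y))
    return total * h / 3.0

# Secant check for F(t) = t^3, F'(t) = 3t^2, with x = 2, y = 1:
# [2, 1; F] * (2 - 1) should equal F(2) - F(1) = 8 - 1 = 7.
dd_value = divided_difference(lambda t: 3.0 * t * t, 2.0, 1.0)
```
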

Example 19. Consider the kinematic system

F1′(x) = e^x, F2′(y) = (e − 1)y + 1, F3′(z) = 1,

with F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3), X = Y = R³, Ω = B̄(0, 1) and x∗ = (0, 0, 0)ᵗ. Define the function F on Ω for w = (x, y, z)ᵗ by

F(w) = (e^x − 1, ((e − 1)/2)y² + y, z)ᵗ.

Then, we get

F′(v) = diag(e^x, (e − 1)y + 1, 1),

so ξ0(s, t) = ((e − 1)/2)(s + t), ξ(s, t) = (1/2)e^{1/(e−1)} s, a = (e − 1)/2 and b = (e + 1)/2. Then, the radii are

ρ1 = 0.25736, ρ2 = 0.139593, ρ3 = 0.135726, ρ̄2 = 0.239997, ρ̄3 = 0.207536.

Example 20. Consider X = Y = C[0, 1], Ω = B̄(0, 1) and F : Ω −→ Y defined by

F(φ)(x) = φ(x) − 5 ∫₀¹ xθφ(θ)³ dθ. (18.28)

We have that

F′(φ)(η)(x) = η(x) − 15 ∫₀¹ xθφ(θ)²η(θ) dθ, for each η ∈ Ω.

Then, we get for x∗ = 0 that ξ0(s, t) = (15/4)(s + t), ξ(s, t) = (15/4)s, a = 15/8 and b = 7. Then, the radii are

ρ1 = 0.00416667, ρ2 = 0.00561108, ρ3 = 0.00703474, ρ̄2 = 0.00947056, ρ̄3 = 0.0100204.

Example 21. For the academic example of the introduction, we have ξ0(s, t) = ξ(s, t) = (96.6629073/2)(s + t), a = 8/3 and b = 17/6. Then, the radii are

ρ1 = 0.00167008, ρ2 = 0.0012955, ρ3 = 0.0013651, ρ̄2 = 0.00244516, ρ̄3 = 0.00215457.
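Each radius ρm above is the least positive root of the corresponding equation ζ(t) = 1. The scan-and-bisect sketch below computes such a root for a ζ1 built from the Example 19 data as we read it (with |β| = 1); since the exact ξ-functions are our reading of the text, the computed root is illustrative and need not reproduce the tabulated value.

```python
import math

def least_root(g, lo=1e-12, hi=1.0, tol=1e-12, steps=10000):
    """Least root of g(t) = 0 in (lo, hi): scan for a sign change, then bisect."""
    prev_t, prev_g = lo, g(lo)
    for i in range(1, steps + 1):
        t = lo + (hi - lo) * i / steps
        gt = g(t)
        if prev_g * gt <= 0.0:
            a, b = prev_t, t
            while b - a > tol:
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
        prev_t, prev_g = t, gt
    return None

# Example 19 data as we read it: xi0(s, t) = ((e-1)/2)(s + t),
# xi(s, t) = (1/2) e^{1/(e-1)} s, a = (e-1)/2, b = (e+1)/2, |beta| = 1.
e = math.e
a, b = (e - 1.0) / 2.0, (e + 1.0) / 2.0
xi0 = lambda s, t: ((e - 1.0) / 2.0) * (s + t)
xi = lambda s, t: 0.5 * math.exp(1.0 / (e - 1.0)) * s
zeta1 = lambda t: xi(a * t, t) / (1.0 - xi0(b * t, t))

rho1 = least_root(lambda t: zeta1(t) - 1.0)
```
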

4. Conclusion

In this chapter, we have considered a one-parameter family of seventh-order algorithms for solving systems of nonlinear equations. The algorithms considered are totally free of derivative evaluations and are therefore suitable for problems whose derivatives are expensive to compute or difficult to evaluate.


References

[1] Amat, S., Busquier, S., Grau, A., Grau-Sanchez, M., Maximum efficiency for a family of Newton-like algorithms with frozen derivatives and some applications, Appl. Math. Comput., 219 (2013), 7954-7963.
[2] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Company, New York, 2007.
[3] Argyros, I. K., Magreñán, Á. A., Iterative Algorithms and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[4] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume III, Nova Publishers, NY, 2019.
[5] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2019.
[6] Argyros, I. K., George, S., Magreñán, Á. A., Local convergence for multi-point-parametric Chebyshev-Halley-type algorithms of higher convergence order, J. Comput. Appl. Math., 282 (2015), 215-224.
[7] Argyros, I. K., Hilout, S., Numerical Methods in Nonlinear Analysis, World Scientific Publishing Company, New Jersey, 2013.
[8] Grau-Sanchez, M., Noguera, M., Amat, S., On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative algorithms, J. Comput. Appl. Math., 237 (2013), 363-372.
[9] Liu, Z., Zheng, Q., Zhao, P., A variant of Steffensen's algorithm of fourth order convergence and its applications, Appl. Math. Comput., 216 (2012), 1978-1983.
[10] Ren, H., Wu, Q., Bi, W., A class of two-step Steffensen type algorithms with fourth-order convergence, Appl. Math. Comput., 209 (2009), 206-210.
[11] Sharma, J. R., Arora, H., Efficient derivative-free numerical algorithms for solving systems of nonlinear equations, Comp. Appl. Math., 35 (2016), 269-284.
[12] Sharma, J. R., Arora, H., An efficient derivative-free iterative algorithm for solving systems of nonlinear equations, Appl. Anal. Discrete Math., 7 (2013), 390-403.
[13] Steffensen, J. F., Remarks on iteration, Skand. Aktuar Tidskr., 16 (1933), 64-72.
[14] Traub, J. F., Iterative Methods for the Solution of Equations, Chelsea Publishing Company, New York, 1982.
[15] Wang, X., Zhang, T., A family of Steffensen type algorithms with seventh-order convergence, Numer. Algor., 62 (2013), 429-444.
[16] Wang, X., Zhang, T., Qian, W., Teng, M., Seventh-order derivative-free iterative algorithm for solving nonlinear systems, Numer. Algor., 70 (2015), 545-558.

Chapter 19

Convergence of Fifth Order Methods for Equations under the Same Conditions

1. Introduction

In this chapter, we consider fifth-order methods for solving nonlinear equations in Banach spaces under the same conditions. We use only the first derivative to extend the usage of these methods, whereas earlier papers require up to the sixth derivative for convergence. Numerical examples complete the chapter. Let F : D ⊂ X −→ Y be continuously Fréchet differentiable, where X, Y are Banach spaces and D is a nonempty convex set. Consider the problem of solving the equation

F(x) = 0. (19.1)

Iterative methods are used to approximate the solution of equations of the type (19.1), since finding an exact solution is possible only in rare cases. In this chapter, we study the convergence of two fifth-order methods under the same conditions. The methods we are interested in are the following.

By J. R. Sharma et al. [14]:

yn = xn − (1/2)F′(xn)^{-1}F(xn)
zn = xn − F′(yn)^{-1}F(xn)
xn+1 = zn − (2F′(yn)^{-1} − F′(xn)^{-1})F(zn) (19.2)

By Cordero, A. et al. [6]:

yn = xn − F′(xn)^{-1}F(xn)
zn = xn − 2An^{-1}F(xn)
xn+1 = zn − F′(yn)^{-1}F(zn) (19.3)


where An = F′(yn) + F′(xn).

The efficiency and convergence order were given in [6] (see also [14]) using conditions up to the sixth derivative, restricting the applicability of these algorithms. For example, let X = Y = R, D = [−1/2, 3/2] and define f on D by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, f(0) = 0.

Then, we have t∗ = 1 and

f′′′(t) = 6 log t² + 60t² − 24t + 22.

Obviously, f′′′(t) is not bounded on D. So, the convergence of these methods is not guaranteed by the analysis in earlier papers [1]-[15]. Our convergence analysis is based only on the first Fréchet derivative, which is the only derivative appearing in the methods. We provide a computable radius of convergence, not given in [6, 14]. This way, we locate a set of initial points ensuring the convergence of the methods. The numerical examples are chosen to show how the theoretically predicted radii are computed. In particular, the last example shows that earlier results cannot be used to show the convergence of the methods. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The chapter contains the local convergence analysis in Section 2 and the numerical examples in Section 3.
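In the scalar case, one pass of method (19.2) is straightforward to implement. The sketch below shows the three sub-steps; the test function F(x) = x² − 2 and the starting point are illustrative choices, not taken from the chapter.

```python
def method_19_2(F, dF, x, tol=1e-14, max_iter=10):
    """Fifth-order method (19.2) of Sharma et al., scalar sketch."""
    for _ in range(max_iter):
        if abs(F(x)) < tol:
            break
        y = x - 0.5 * F(x) / dF(x)                  # half Newton step
        z = x - F(x) / dF(y)                        # derivative frozen at y
        x = z - (2.0 / dF(y) - 1.0 / dF(x)) * F(z)  # fifth-order correction
    return x

# Illustration: F(x) = x^2 - 2 with root sqrt(2), starting from x0 = 1.5.
root = method_19_2(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.5)
```

Note that only two derivative evaluations per iteration are needed, since dF(x) and dF(y) can be cached.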

2. Local Convergence

In this section, the convergence of methods (19.2) and (19.3) is given. Set G = [0, ∞). Suppose:

(a) the function ξ0(t) − 1 has a least zero r0 ∈ G − {0} for some nondecreasing and continuous function ξ0 : G −→ G. Set G0 = [0, r0).

(b) ζ1(t) − 1 has a least zero γ1 ∈ G0 − {0} for some nondecreasing and continuous functions ξ, ξ1 : G0 −→ G, with ζ1 : G0 −→ G defined by

ζ1(t) = (∫₀¹ ξ((1 − θ)t)dθ + (1/2)∫₀¹ ξ1(θt)dθ) / (1 − ξ0(t)).

(c) p(t) − 1 has a least zero r1 ∈ G0 − {0}, where p(t) = (1/2)(ξ0(t) + ξ0(ζ1(t)t)). Set r2 = min{r0, r1} and G1 = [0, r2).

(d) ζ2(t) − 1 has a least zero γ2 ∈ G1 − {0}, where ζ2 : G1 −→ G is defined by

ζ2(t) = ∫₀¹ ξ((1 − θ)t)dθ / (1 − ξ0(t)) + (ξ0(t) + ξ0(ζ1(t)t)) ∫₀¹ ξ1(θt)dθ / (2(1 − ξ0(t))(1 − p(t))).

(e) ξ0(ζ2(t)t) − 1 has a least zero r3 ∈ G1 − {0}. Set r = min{r2, r3} and G2 = [0, r).

(f) ζ3(t) − 1 has a least zero γ3 ∈ G2 − {0}, where ζ3 : G2 −→ G is defined by

ζ3(t) = [∫₀¹ ξ((1 − θ)ζ2(t)t)dθ / (1 − ξ0(ζ2(t)t))
+ (ξ0(ζ1(t)t) + ξ0(ζ2(t)t)) ∫₀¹ ξ1(θζ2(t)t)dθ / ((1 − ξ0(ζ1(t)t))(1 − ξ0(ζ2(t)t)))] ζ2(t).

Set

γ = min{γi}, i = 1, 2, 3. (19.4)

It shall be proven that γ defined by (19.4) is a convergence radius for method (19.2). By B̄(x, δ) we denote the closure of the open ball B(x, δ) with center x ∈ X and radius δ > 0. The conditions (C) are needed, provided that x∗ is a simple solution of equation (19.1) and the functions “ξ” are as previously defined. Suppose:

(c1) For each x ∈ D, ‖F′(x∗)^{-1}(F′(x) − F′(x∗))‖ ≤ ξ0(‖x − x∗‖). Set D0 = D ∩ B(x∗, r0).

(c2) For each x, y ∈ D0, ‖F′(x∗)^{-1}(F′(x) − F′(y))‖ ≤ ξ(‖x − y‖) and ‖F′(x∗)^{-1}F′(x)‖ ≤ ξ1(‖x − x∗‖).

(c3) B̄(x∗, δ) ⊂ D for some δ > 0 to be given later.


(c4) There exists γ∗ ≥ γ satisfying ∫₀¹ ξ0(θγ∗)dθ < 1. Set D1 = D ∩ B̄(x∗, γ∗).

Next, the conditions (C) are used to show the local convergence result for method (19.2).

Theorem 30. Suppose that the conditions (C) hold with δ = γ. Then, if x0 ∈ B(x∗, γ) − {x∗}, limn→∞ xn = x∗, which is the only solution of equation F(x) = 0 in the region D1.

Proof. Let dn = ‖xn − x∗‖. We base our proof on the verification of the items

‖yn − x∗‖ ≤ ζ1(dn)dn ≤ dn < γ, (19.5)
‖zn − x∗‖ ≤ ζ2(dn)dn ≤ dn, (19.6)
dn+1 ≤ ζ3(dn)dn ≤ dn, (19.7)

to be shown using mathematical induction. Set G3 = [0, γ). The definition of γ implies that for all t ∈ G3

0 ≤ ξ0(t) < 1, (19.8)
0 ≤ p(t) < 1, (19.9)
0 ≤ ξ0(ζ2(t)t) < 1, (19.10)
and
0 ≤ ζi(t) < 1. (19.11)

Pick u ∈ B̄(x∗, γ) − {x∗}. Then, by (19.4), (19.8) and (c1), we have

‖F′(x∗)^{-1}(F′(u) − F′(x∗))‖ ≤ ξ0(‖u − x∗‖) ≤ ξ0(γ) < 1, (19.12)

so

‖F′(u)^{-1}F′(x∗)‖ ≤ 1 / (1 − ξ0(‖u − x∗‖)) (19.13)

by a lemma due to Banach on invertible operators [2]. Notice also that iterate y0 is well defined, and we can write

y0 − x∗ = x0 − x∗ − F′(x0)^{-1}F(x0) + (1/2)F′(x0)^{-1}F(x0) (19.14)
= (F′(x0)^{-1}F′(x∗)) (∫₀¹ F′(x∗)^{-1}(F′(x∗ + θ(x0 − x∗)) − F′(x0))dθ)(x0 − x∗)
+ (1/2)(F′(x0)^{-1}F′(x∗))F′(x∗)^{-1}F(x0). (19.15)

By (19.4), (19.8), (19.13) (for u = x0), (c2), (c3), (19.11) (for i = 1) and (19.15), we get in turn that

‖y0 − x∗‖ ≤ (∫₀¹ ξ((1 − θ)d0)dθ + (1/2)∫₀¹ ξ1(θd0)dθ)d0 / (1 − ξ0(d0))
≤ ζ1(d0)d0 ≤ d0 < γ, (19.16)

showing y0 ∈ B̄(x∗, γ) and (19.5) for n = 0. Next, we show A0^{-1} ∈ L(Y, X). Indeed, by (19.4), (19.9), (19.16) and (c1), we have

‖(2F′(x∗))^{-1}(A0 − 2F′(x∗))‖ ≤ (1/2)(‖F′(x∗)^{-1}(F′(y0) − F′(x∗))‖ + ‖F′(x∗)^{-1}(F′(x0) − F′(x∗))‖)
≤ (1/2)(ξ0(‖y0 − x∗‖) + ξ0(d0))
≤ p(d0) ≤ p(γ) < 1, (19.17)

so

‖A0^{-1}F′(x∗)‖ ≤ 1 / (2(1 − p(d0))). (19.18)

Moreover, iterate z0 is well defined by the second sub-step of method (19.2), and we can also write

z0 − x∗ = x0 − x∗ − F′(x0)^{-1}F(x0) + (F′(x0)^{-1} − 2A0^{-1})F(x0)
= x0 − x∗ − F′(x0)^{-1}F(x0) + F′(x0)^{-1}(A0 − 2F′(x0))A0^{-1}F(x0)
= x0 − x∗ − F′(x0)^{-1}F(x0) + F′(x0)^{-1}(F′(y0) − F′(x0))A0^{-1}F(x0). (19.19)

Using (19.4), (19.11) (for i = 2), (19.13) (for u = x0) and (19.17)-(19.19), we get

‖z0 − x∗‖ ≤ [∫₀¹ ξ((1 − θ)d0)dθ / (1 − ξ0(d0))
+ (ξ0(d0) + ξ0(‖y0 − x∗‖)) ∫₀¹ ξ1(θd0)dθ / (2(1 − p(d0))(1 − ξ0(d0)))] d0
≤ ζ2(d0)d0 ≤ d0, (19.20)

showing (19.6) for n = 0 and z0 ∈ B(x∗, γ). Notice that iterate x1 is well defined by the third sub-step of method (19.2), F′(z0)^{-1} ∈ L(Y, X) by (19.13) (for u = z0), and we can write

x1 − x∗ = z0 − x∗ − F′(z0)^{-1}F(z0) + (F′(z0)^{-1} − F′(y0)^{-1})F(z0)
= z0 − x∗ − F′(z0)^{-1}F(z0) + F′(z0)^{-1}(F′(y0) − F′(z0))F′(y0)^{-1}F(z0). (19.21)

In view of (19.4), (19.11) (for i = 3), (19.13) (for u = y0, z0), (19.17), (19.20) and (19.21), we get

d1 ≤ [∫₀¹ ξ((1 − θ)‖z0 − x∗‖)dθ / (1 − ξ0(‖z0 − x∗‖))
+ (ξ0(‖y0 − x∗‖) + ξ0(‖z0 − x∗‖)) ∫₀¹ ξ1(θ‖z0 − x∗‖)dθ / ((1 − ξ0(‖y0 − x∗‖))(1 − ξ0(‖z0 − x∗‖)))] ‖z0 − x∗‖
≤ ζ3(d0)d0 ≤ d0, (19.22)


showing (19.7) for n = 0 and x1 ∈ B(x∗, γ). Exchange x0, y0, z0, x1 with xi, yi, zi, xi+1 in the previous calculations to complete the induction for (19.5)-(19.7). It then follows from the estimate

di+1 ≤ βdi < γ, (19.23)

where β = ζ3(d0) ∈ [0, 1), that limi→∞ xi = x∗ and xi+1 ∈ B(x∗, γ). Consider T = ∫₀¹ F′(x∗ + θ(v − x∗))dθ for some v ∈ D1 with F(v) = 0. Then, using (c1) and (c4), we obtain

‖F′(x∗)^{-1}(T − F′(x∗))‖ ≤ ∫₀¹ ξ0(θ‖v − x∗‖)dθ ≤ ∫₀¹ ξ0(θγ∗)dθ < 1,

so v = x∗ follows from the identity 0 = F(v) − F(x∗) = T(v − x∗) and the invertibility of T.

Remark 28.

1. In view of (c2) and the estimate

‖F′(x∗)^{-1}F′(x)‖ = ‖F′(x∗)^{-1}(F′(x) − F′(x∗)) + I‖
≤ 1 + ‖F′(x∗)^{-1}(F′(x) − F′(x∗))‖ ≤ 1 + ξ0(‖x − x∗‖),

the second condition in (c2) can be dropped and ξ1 can be replaced by ξ1(t) = 1 + ξ0(t), ξ1(t) = 1 + ξ0(γ) or ξ1(t) = 2, since t ∈ [0, r0).

2. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form

F′(x) = P(F(x)),

where P is a continuous operator. Then, since F′(x∗) = P(F(x∗)) = P(0), we can apply the results without actually knowing x∗. For example, let F(x) = e^x − 1. Then, we can choose P(x) = x + 1.

3. Let ξ0(t) = L0t and ξ(t) = Lt. In [2, 3] we showed that rA = 2/(2L0 + L) is the convergence radius of Newton's method

xn+1 = xn − F′(xn)^{-1}F(xn) for each n = 0, 1, 2, · · · (19.24)

under the conditions (c1)-(c3). It follows from the definition of rA that the convergence radius γ of method (19.2) cannot be larger than the convergence radius rA of the second-order Newton's method (19.24). As already noted in [2, 3], rA is at least as large as the convergence radius given by Rheinboldt [10],

rR = 2/(3L1), (19.25)

where L1 is the Lipschitz constant on D. The same value for rR was given by Traub [12]. In particular, for L0 < L1 we have rR < rA and

rR/rA → 1/3 as L0/L1 → 0.

That is, the radius of convergence rA is at most three times larger than Rheinboldt's.

4. We can compute the computational order of convergence (COC) defined by

ξ = ln(dn+1/dn) / ln(dn/dn−1),

or the approximate computational order of convergence

ξ1 = ln(sn+1/sn) / ln(sn/sn−1),

where sn = ‖xn − xn−1‖.

Next, we present the local convergence analysis of method (19.3) in an analogous way. But this time the functions ζi are replaced by ζ̄i, respectively, defined by

ζ̄1(t) = ∫₀¹ ξ((1 − θ)t)dθ / (1 − ξ0(t)),

ζ̄2(t) = ∫₀¹ ξ((1 − θ)t)dθ / (1 − ξ0(t)) + (ξ0(t) + ξ0(ζ̄1(t)t)) ∫₀¹ ξ1(θt)dθ / ((1 − ξ0(t))(1 − ξ0(ζ̄1(t)t))),

ζ̄3(t) = [∫₀¹ ξ((1 − θ)ζ̄2(t)t)dθ / (1 − ξ0(ζ̄2(t)t))
+ (ξ0(t) + ξ0(ζ̄1(t)t)) ∫₀¹ ξ1(θζ̄2(t)t)dθ / ((1 − ξ0(t))(1 − ξ0(ζ̄1(t)t)))
+ (ξ0(ζ̄1(t)t) + ξ0(ζ̄2(t)t)) ∫₀¹ ξ1(θζ̄2(t)t)dθ / ((1 − ξ0(ζ̄1(t)t))(1 − ξ0(ζ̄2(t)t)))] ζ̄2(t)

and

γ̄ = min{γ̄i}, (19.26)


where γ̄i are the least positive zeros of the functions ζ̄i(t) − 1 in G0 (provided they exist). These functions are motivated by the estimates (obtained under the conditions (C) for δ = γ̄)

‖yn − x∗‖ = ‖xn − x∗ − F′(xn)^{-1}F(xn)‖
≤ ∫₀¹ ξ((1 − θ)dn)dθ dn / (1 − ξ0(dn))
≤ ζ̄1(dn)dn ≤ dn < γ̄,

‖zn − x∗‖ = ‖xn − x∗ − F′(xn)^{-1}F(xn) + F′(xn)^{-1}(F′(yn) − F′(xn))F′(yn)^{-1}F(xn)‖
≤ [∫₀¹ ξ((1 − θ)dn)dθ / (1 − ξ0(dn))
+ (ξ0(dn) + ξ0(‖yn − x∗‖)) ∫₀¹ ξ1(θdn)dθ / ((1 − ξ0(dn))(1 − ξ0(‖yn − x∗‖)))] dn
≤ ζ̄2(dn)dn ≤ dn

and

dn+1 = ‖zn − x∗ − F′(zn)^{-1}F(zn) + (F′(zn)^{-1} − F′(yn)^{-1})F(zn)‖
≤ [∫₀¹ ξ((1 − θ)‖zn − x∗‖)dθ / (1 − ξ0(‖zn − x∗‖))
+ (ξ0(dn) + ξ0(‖yn − x∗‖)) ∫₀¹ ξ1(θ‖zn − x∗‖)dθ / ((1 − ξ0(dn))(1 − ξ0(‖yn − x∗‖)))
+ (ξ0(‖yn − x∗‖) + ξ0(‖zn − x∗‖)) ∫₀¹ ξ1(θ‖zn − x∗‖)dθ / ((1 − ξ0(‖yn − x∗‖))(1 − ξ0(‖zn − x∗‖)))] ‖zn − x∗‖
≤ ζ̄3(dn)dn ≤ dn.

Hence, we arrive at the corresponding local convergence analysis for method (19.3).

Theorem 31. Suppose that the conditions (C) hold with δ = γ̄. Then, the conclusions of Theorem 30 hold for method (19.3) with γ̄ and ζ̄i replacing γ and ζi, respectively.
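The COC and ACOC formulas of Remark 28 are easy to evaluate from a run of either method. The sketch below applies the COC formula to a synthetic error sequence (our choice) rather than an actual iteration:

```python
import math

def coc(errors):
    """Computational order of convergence from the last three errors:
    xi = ln(d_{n+1}/d_n) / ln(d_n/d_{n-1})."""
    d_prev, d_mid, d_next = errors[-3], errors[-2], errors[-1]
    return math.log(d_next / d_mid) / math.log(d_mid / d_prev)

# A sequence obeying d_{n+1} = d_n^2 should report order 2.
order = coc([1e-1, 1e-2, 1e-4, 1e-8])
```

The ACOC variant is identical except that it uses the step sizes sn = ‖xn − xn−1‖, which are computable without knowing x∗.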

3. Numerical Examples

Example 22. Consider the kinematic system

F1′(x) = e^x, F2′(y) = (e − 1)y + 1, F3′(z) = 1,

with F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3), X = Y = R³, D = B̄(0, 1) and x∗ = (0, 0, 0)ᵗ. Define the function F on D for w = (x, y, z)ᵗ by

F(w) = (e^x − 1, ((e − 1)/2)y² + y, z)ᵗ.

Then, we get

F′(v) = diag(e^x, (e − 1)y + 1, 1),

so ξ0(t) = (e − 1)t, ξ(t) = e^{1/(e−1)} t and ξ1(t) = e^{1/(e−1)}. Then, the radii are

γ1 = 0.221318, γ2 = 0.207758, γ3 = 0.193106,
γ̄1 = 0.382692, γ̄2 = 0.165361, γ̄3 = 0.164905.

4. Conclusion

In this chapter, we have considered two fifth-order algorithms for solving systems of nonlinear equations. A comparison between their balls of convergence is provided using conditions only on the first derivative, whereas earlier studies have used hypotheses on derivatives up to order six. We also provide error estimates and uniqueness results not given before [6, 14]. Our idea can extend the usage of other methods too [1]-[15].

References

[1] Amat, S., Hernández, M. A., Romero, N., Semilocal convergence of a sixth order iterative method for quadratic equations, Applied Numerical Mathematics, 62 (2012), 833-841.
[2] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Company, New York, 2007.
[3] Argyros, I. K., Magreñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[4] Behl, R., Cordero, A., Motsa, S. S., Torregrosa, J. R., Stable high order iterative methods for solving nonlinear models, Appl. Math. Comput., 303 (2017), 70-88.
[5] Cordero, A., Hueso, J. L., Martínez, E., Torregrosa, J. R., A modified Newton-Jarratt's composition, Numer. Algor., 55 (2010), 87-99.
[6] Cordero, A., Hueso, J. L., Martínez, E., Torregrosa, J. R., Increasing the convergence order of an iterative method for nonlinear systems, Appl. Math. Lett., 25 (2012), 2369-2374.
[7] Darvishi, M. T., Barati, A., A fourth order method from quadrature formulae for solving systems of nonlinear equations, Appl. Math. Comput., 188 (2007), 257-261.
[8] Magreñán, Á. A., Different anomalies in a Jarratt family of iterative root-finding methods, Appl. Math. Comput., 233 (2014), 29-38.
[9] Noor, M. A., Waseem, M., Some iterative methods for solving a system of nonlinear equations, Appl. Math. Comput., 57 (2009), 101-106.
[10] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, in: Mathematical Models and Numerical Methods (Tikhonov, A. N. et al., eds.), pub. 3, Banach Center, Warsaw, Poland, (1977), 129-142.
[11] Sharma, J. R., Kumar, S., A class of computationally efficient Newton-like methods with frozen operator for nonlinear systems, Intern. J. Non. Sc. Numer. Simulations.
[12] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, 1964.
[13] Sharma, J. R., Arora, H., Improved Newton-like methods for solving systems of nonlinear equations, SeMA, 74 (2017), 147-163.
[14] Sharma, J. R., Gupta, P., An efficient fifth order method for solving systems of nonlinear equations, Computers and Mathematics with Applications, 67 (2014), 591-601.
[15] Weerakoon, S., Fernando, T. G. I., A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett., 13 (2000), 87-93.

Chapter 20

A Novel Eighth Convergence Order Scheme with Derivatives and Divided Difference

1. Introduction

We extend the applicability of an eighth-order scheme for solving nonlinear Banach space-valued equations. This is done by using assumptions only on the first derivative and a divided difference of order one, which do appear in the scheme, whereas earlier works use derivatives of up to order nine to establish the convergence. Our technique is so general that it can be used to extend the usage of other schemes along the same lines. Let X and Y be Banach spaces and Ω ⊂ X an open convex set. We are concerned with the problem of extending the applicability of the eighth-order method [7]

yk = xk − F′(xk)^{-1}F(xk)
zk = yk − 5F′(xk)^{-1}F(yk)
vk = zk − (1/5)F′(xk)^{-1}(−16F(yk) + F(zk))
xk+1 = vk − Gk^{-1}F(vk), Gk = F[yk, xk], (20.1)

where F[·, ·] : Ω × Ω −→ L(X, Y) is a divided difference of order one, for approximating a solution x∗ of the equation

F(x) = 0. (20.2)

Here F : Ω ⊂ X −→ Y is a nonlinear Fréchet differentiable operator. For more about the convergence analysis of higher-order iterative schemes, one may refer to [1]-[17] and the references therein. The convergence order was established in [7] using hypotheses up to the ninth derivative (not appearing in the scheme). No computable error bounds on ‖xn − x∗‖ or uniqueness results were given either. We address all these problems in this chapter.
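In the scalar case the scheme can be sketched as below. Note that the fourth sub-step is garbled in the extracted text, so the sketch assumes it corrects vk by Gk^{-1}F(vk) with Gk = F[yk, xk]; the test function and starting point are also our illustrative choices.

```python
def dd(F, u, v):
    """Scalar divided difference F[u, v] = (F(u) - F(v)) / (u - v)."""
    return (F(u) - F(v)) / (u - v)

def scheme_20_1(F, dF, x, tol=1e-10, max_iter=10):
    """Eighth-order scheme (20.1), scalar sketch.
    Assumption: the fourth sub-step is v - F(v)/F[y, x] (source is ambiguous)."""
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            return x
        d = dF(x)                                   # derivative frozen at x_k
        y = x - fx / d
        z = y - 5.0 * F(y) / d
        v = z - (-16.0 * F(y) + F(z)) / (5.0 * d)
        if abs(F(v)) < tol:
            return v
        x = v - F(v) / dd(F, y, x)                  # assumed fourth sub-step
    return x

# Illustration: F(x) = x^2 - 2 with root sqrt(2), starting from x0 = 1.5.
root = scheme_20_1(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.5)
```

Only one derivative evaluation per iteration is needed, since F′(xk) is reused by the first three sub-steps.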


These assumptions on derivatives of order up to nine reduce the applicability of the schemes. For example, let X = Y = R, Ω = [−1/2, 3/2] and define f on Ω by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, f(0) = 0.

Then, we have t∗ = 1 and

f′′′(t) = 6 log t² + 60t² − 24t + 22.

Obviously, f′′′(t) is not bounded on Ω. So, the convergence of the above schemes is not guaranteed by the analysis in earlier papers. Moreover, our approach is so general that it can be used to extend the applicability of other schemes [1]-[17] in a similar way. The rest of the chapter contains the convergence analysis of the scheme and the numerical examples.

2. Convergence

We introduce parameters and functions that play a role in the local convergence of scheme (20.1). Set Q = [0, ∞). Assume:

(i) the function ξ0(t) − 1 has a minimal zero r0 ∈ Q − {0} for some continuous and nondecreasing function ξ0 : Q −→ Q. Set Q0 = [0, r0).

(ii) η1(t) − 1 has a minimal zero ρ1 ∈ Q0 − {0} for some continuous and nondecreasing function ξ : Q0 −→ Q, with η1 : Q0 −→ Q defined by

η1(t) = ∫₀¹ ξ((1 − τ)t)dτ / (1 − ξ0(t)).

(iii) ξ0(η1(t)t) − 1 has a minimal zero r1 ∈ Q0 − {0}. Set r2 = min{r0, r1} and Q1 = [0, r2).

(iv) η2(t) − 1

A Novel Eighth Convergence Order Scheme with Derivatives ...


has a minimal zero ρ2 ∈ Q1 − {0} for some function ξ1 : Q1 → Q which is continuous and nondecreasing, where the function η2 : Q1 → Q is defined by

η2(t) = [ ∫₀¹ ξ((1 − τ)η1(t)t)dτ / (1 − ξ0(η1(t)t))
  + (ξ0(t) + ξ0(η1(t)t)) ∫₀¹ ξ1(τη1(t)t)dτ / ((1 − ξ0(t))(1 − ξ0(η1(t)t)))
  + 4 ∫₀¹ ξ1(τη1(t)t)dτ / (1 − ξ0(t)) ] η1(t).

(v) ξ0(η2(t)t) − 1 has a minimal zero r3 ∈ Q1 − {0}. Set Q2 = [0, r4), where r4 = min{r2, r3}.

(vi) η3(t) − 1 has a minimal zero ρ3 ∈ Q2 − {0}, where the function η3 : Q2 → Q is defined by

η3(t) = [ ∫₀¹ ξ((1 − τ)η2(t)t)dτ / (1 − ξ0(η2(t)t))
  + (1/5)(ξ0(t) + ξ0(η2(t)t)) ∫₀¹ ξ1(τη2(t)t)dτ / ((1 − ξ0(t))(1 − ξ0(η2(t)t)))
  + (4/5) ∫₀¹ ξ1(τη2(t)t)dτ / (1 − ξ0(η2(t)t)) ] η2(t)
  + (16/5) ∫₀¹ ξ1(τη1(t)t)dτ η1(t) / (1 − ξ0(t)).

(vii) ξ0(η3(t)t) − 1 has a minimal zero r5 ∈ Q2 − {0}. Set Q3 = [0, r), where r = min{r4, r5}.

(viii) η4(t) − 1 has a minimal zero ρ4 ∈ Q3 − {0} for some function ξ2 : Q3 → Q which is continuous and nondecreasing, where the function η4 : Q3 → Q is defined by

η4(t) = [ ∫₀¹ ξ((1 − τ)η3(t)t)dτ / (1 − ξ0(η3(t)t))
  + ξ2(t) ∫₀¹ ξ1(τη3(t)t)dτ / ((1 − ξ0(t))(1 − ξ0(η3(t)t))) ] η3(t).


The parameter

ρ = min{ρ_m}, m = 1, 2, 3, 4,    (20.3)

shall be shown to be a radius of convergence for scheme (20.1). Set Q4 = [0, ρ). It follows from (20.3) that for all t ∈ Q4

0 ≤ ξ0(t) < 1,    (20.4)
0 ≤ ξ0(η1(t)t) < 1,    (20.5)
0 ≤ ξ0(η2(t)t) < 1,    (20.6)
0 ≤ ξ0(η3(t)t) < 1,    (20.7)

and

0 ≤ η_m(t) < 1.    (20.8)

Let U[x∗, γ] stand for the closure of the open ball U(x∗, γ) in X with center x∗ ∈ X and radius γ > 0. We assume from now on that x∗ is a simple solution of the equation F(x) = 0 and that the functions η_m are as previously defined. We also need the hypotheses (H). Assume:

(h1) For all u ∈ Ω

||F′(x∗)^{−1}(F′(u) − F′(x∗))|| ≤ ξ0(||u − x∗||).

Set Ω0 = U[x∗, r0] ∩ Ω.

(h2) For all u1, u2 ∈ Ω0

||F′(x∗)^{−1}(F′(u1) − F′(u2))|| ≤ ξ(||u1 − u2||),
||F′(x∗)^{−1}F′(u1)|| ≤ ξ1(||u1 − x∗||),

and

||F′(x∗)^{−1}(F′(u1) − F′(u2)G(I − 5F′(u1)^{−1}F[u1, u2]))|| ≤ ξ2(||u1 − x∗||),

where G : Ω → L(X, Y) is continuous.

(h3) U[x∗, ρ] ⊂ Ω.

Next, we present the local convergence analysis of scheme (20.1) under the hypotheses (H) and the preceding notation.

Theorem 32. Under the hypotheses (H), further assume that x0 ∈ U(x∗, ρ) − {x∗}. Then, the sequence {xn} starting at x0 and generated by scheme (20.1) is well defined in U(x∗, ρ), stays in U(x∗, ρ) for all n = 0, 1, 2, . . . and converges to x∗, so that

||y_n − x∗|| ≤ η1(q_n)q_n ≤ q_n < ρ,    (20.9)


||z_n − x∗|| ≤ η2(q_n)q_n ≤ q_n,    (20.10)

||v_n − x∗|| ≤ η3(q_n)q_n ≤ q_n,    (20.11)

and

q_{n+1} ≤ η4(q_n)q_n ≤ q_n,    (20.12)

where q_n = ||x_n − x∗||, and the radius ρ and the functions η_m are as given previously.

Proof. Mathematical induction is used to show items (20.9)–(20.12). Let x ∈ U[x∗, ρ] − {x∗}. Then, by using (20.3), (20.4) and (h1), we get in turn that

||F′(x∗)^{−1}(F′(x) − F′(x∗))|| ≤ ξ0(||x − x∗||) ≤ ξ0(ρ) < 1,

so F′(x)^{−1} ∈ L(Y, X) by a lemma due to Banach on inverses of linear operators [10], and

||F′(x)^{−1}F′(x∗)|| ≤ 1 / (1 − ξ0(||x − x∗||)).    (20.13)

If x = x0, the iterates y0, z0 and v0 are well defined by the first three sub-steps of scheme (20.1), and we can write

y0 − x∗ = x0 − x∗ − F′(x0)^{−1}F(x0)
        = F′(x0)^{−1} ∫₀¹ (F′(x∗ + τ(x0 − x∗)) − F′(x0))dτ (x0 − x∗).    (20.14)

In view of (20.3), (20.8) (for m = 1), (h2), (20.13) (for x = x0) and (20.14), we have in turn that

||y0 − x∗|| ≤ ∫₀¹ ξ((1 − τ)q0)dτ q0 / (1 − ξ0(q0)) ≤ η1(q0)q0 ≤ q0 < ρ,    (20.15)

showing y0 ∈ U(x∗, ρ), (20.9) for n = 0, and (20.13) for x = y0. By the second sub-step, similarly but using (20.8) (for m = 2), (20.15) and the triangle inequality, we have in turn that

||z0 − x∗|| = ||y0 − x∗ − F′(y0)^{−1}F(y0) + F′(y0)^{−1}(F′(x0) − F′(y0))F′(x0)^{−1}F(y0) − 4F′(x0)^{−1}F(y0)||
≤ [ ∫₀¹ ξ((1 − τ)||y0 − x∗||)dτ / (1 − ξ0(||y0 − x∗||))
  + (ξ0(q0) + ξ0(||y0 − x∗||)) ∫₀¹ ξ1(τ||y0 − x∗||)dτ / ((1 − ξ0(q0))(1 − ξ0(||y0 − x∗||)))
  + 4 ∫₀¹ ξ1(τ||y0 − x∗||)dτ / (1 − ξ0(q0)) ] ||y0 − x∗||
≤ η2(q0)q0 ≤ q0,    (20.16)


showing z0 ∈ U(x∗, ρ), (20.10) for n = 0, and (20.13) for x = z0. Moreover, by the third sub-step of scheme (20.1), (20.3), (20.8) (for m = 3), (h2), (20.13) (for x = z0), (20.15), (20.16) and the triangle inequality, we obtain in turn that

||v0 − x∗|| = ||z0 − x∗ − F′(z0)^{−1}F(z0) + F′(z0)^{−1}F(z0) + (16/5)F′(x0)^{−1}F(y0) − (1/5)F′(x0)^{−1}F(z0)||
= ||z0 − x∗ − F′(z0)^{−1}F(z0) + (1/5)F′(z0)^{−1}(F′(x0) − F′(z0))F′(x0)^{−1}F(z0) + (4/5)F′(z0)^{−1}F(z0) + (16/5)F′(x0)^{−1}F(y0)||
≤ [ ∫₀¹ ξ((1 − τ)||z0 − x∗||)dτ / (1 − ξ0(||z0 − x∗||))
  + (1/5)(ξ0(q0) + ξ0(||z0 − x∗||)) ∫₀¹ ξ1(τ||z0 − x∗||)dτ / ((1 − ξ0(q0))(1 − ξ0(||z0 − x∗||)))
  + (4/5) ∫₀¹ ξ1(τ||z0 − x∗||)dτ / (1 − ξ0(||z0 − x∗||)) ] ||z0 − x∗||
  + (16/5) ∫₀¹ ξ1(τ||y0 − x∗||)dτ ||y0 − x∗|| / (1 − ξ0(q0))
≤ η3(q0)q0 ≤ q0,    (20.17)

showing v0 ∈ U(x∗, ρ), (20.11) for n = 0, and (20.13) for x = v0. Furthermore, by the last sub-step of scheme (20.1), (20.13) (for x = v0), (20.15)–(20.17) and the triangle inequality, we get in turn that

q1 = ||v0 − x∗ − F′(v0)^{−1}F(v0) + (F′(v0)^{−1} − G0 F′(x0)^{−1})F(v0)||
   = ||v0 − x∗ − F′(v0)^{−1}F(v0) + F′(v0)^{−1}(F′(x0) − F′(v0)G0)F′(x0)^{−1}F(v0)||
≤ [ ∫₀¹ ξ((1 − τ)||v0 − x∗||)dτ / (1 − ξ0(||v0 − x∗||))
  + ξ2(q0) ∫₀¹ ξ1(τ||v0 − x∗||)dτ / ((1 − ξ0(||v0 − x∗||))(1 − ξ0(q0))) ] ||v0 − x∗||
≤ η4(q0)q0 ≤ q0,    (20.18)

showing x1 ∈ U(x∗, ρ) and (20.12) for n = 0. Hence, items (20.9)–(20.12) hold for n = 0. Simply replace x0, y0, z0, v0, x1 by xi, yi, zi, vi, xi+1 in the preceding calculations to complete the induction. Finally, from the estimate

q_{i+1} ≤ c q_i < ρ,    (20.19)

where c = η4(q0) ∈ [0, 1), we obtain x_{i+1} ∈ U(x∗, ρ) and lim_{i→∞} x_i = x∗.


The uniqueness-of-the-solution result is given next, without necessarily relying on the hypotheses (H).

Proposition 3. Assume:

(i) There exists a simple solution x∗ ∈ Ω of F(x) = 0.

(ii) There exists ρ∗ ≥ ρ satisfying

∫₀¹ ξ0(τρ∗)dτ < 1.    (20.20)

Set Ω1 = U[x∗, ρ∗] ∩ Ω. Then, the only solution of F(x) = 0 in the region Ω1 is x∗.

Proof. Let p ∈ Ω1 be such that F(p) = 0. Define M = ∫₀¹ F′(x∗ + τ(p − x∗))dτ. Then, using (h1) and (ii), we get in turn that

||F′(x∗)^{−1}(M − F′(x∗))|| ≤ ∫₀¹ ξ0(τ||p − x∗||)dτ ≤ ∫₀¹ ξ0(τρ∗)dτ < 1,    (20.21)

so p = x∗ follows from the invertibility of M and the identity 0 = F(p) − F(x∗) = M(p − x∗).

Remark 29.

a. We can compute the computational order of convergence (COC), defined by

COC = ln(||x_{n+1} − x∗|| / ||x_n − x∗||) / ln(||x_n − x∗|| / ||x_{n−1} − x∗||),

or the approximate computational order of convergence (ACOC),

ACOC = ln(||x_{n+1} − x_n|| / ||x_n − x_{n−1}||) / ln(||x_n − x_{n−1}|| / ||x_{n−1} − x_{n−2}||).

b. In view of (h1) and the estimate

||F′(x∗)^{−1}F′(x)|| = ||F′(x∗)^{−1}(F′(x) − F′(x∗)) + I|| ≤ 1 + ||F′(x∗)^{−1}(F′(x) − F′(x∗))|| ≤ 1 + ξ0(||x − x∗||),

the second condition in (h2) can be dropped, and ξ1 can be defined by ξ1(t) = 1 + ξ0(t) or ξ1(t) = 2, since t ∈ [0, r0).


c. The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form F′(x) = P(F(x)), where P is a continuous operator. Then, since F′(x∗) = P(F(x∗)) = P(0), we can apply the results without actually knowing x∗. For example, let F(x) = e^x − 1. Then, we can choose P(x) = x + 1.

d. Let ξ0(t) = K0 t and ξ(t) = Kt. In [2, 3] we showed that

r_A = 2 / (2K0 + K)

is the convergence radius of Newton's method

x_{n+1} = x_n − F′(x_n)^{−1}F(x_n) for each n = 0, 1, 2, · · ·    (20.22)

under the conditions (h1) and (h2). It follows from the definition of ρ in (20.3) that the convergence radius ρ of method (20.1) cannot be larger than the convergence radius r_A of the second-order Newton's method (20.22). As already noted in [2, 3], r_A is at least as large as the convergence radius given by Rheinboldt [15],

r_R = 2 / (3K1),    (20.23)

where K1 is the Lipschitz constant on Ω. The same value for r_R was given by Traub [17]. In particular, for K0 < K1 we have

r_R < r_A and r_R/r_A → 1/3 as K0/K1 → 0.

That is, the radius of convergence r_A can be at most three times larger than Rheinboldt's.
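Both parts of the remark are easy to check numerically. The sketch below (with illustrative constants K0 = 0.5 and K = K1 = 1, not taken from the text) computes the COC of Newton's method on f(x) = x² − 2 and compares the radius r_A with Rheinboldt's r_R:

```python
import math

# Remark a: COC of Newton's method on f(x) = x^2 - 2 (theoretical order 2)
xs, x = [], 1.0
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton step (20.22)
    xs.append(x)
e = [abs(t - math.sqrt(2.0)) for t in xs]
coc_value = math.log(e[3] / e[2]) / math.log(e[2] / e[1])

# Remark d: rA = 2/(2*K0 + K) versus Rheinboldt's rR = 2/(3*K1)
K0, K = 0.5, 1.0                        # K0 < K = K1, so rA > rR
rA = 2.0 / (2.0 * K0 + K)
rR = 2.0 / (3.0 * K)
print(coc_value, rA, rR, rR / rA)
```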

3. Numerical Examples

For simplicity, we choose G = I; then ξ2(t) = ξ0(t) + ξ0(η3(t)t).

Example 23. Consider the kinematic system

F1′(x) = e^x, F2′(y) = (e − 1)y + 1, F3′(z) = 1,

with F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3), X = Y = R³, Ω = U[0, 1] and x∗ = (0, 0, 0)ᵀ. Define the function F on Ω for w = (x, y, z)ᵀ by

F(w) = (e^x − 1, ((e − 1)/2)y² + y, z)ᵀ.

Then, we get

F′(w) = diag(e^x, (e − 1)y + 1, 1),

so ξ0(t) = (e − 1)t, ξ(t) = e^{1/(e−1)}t, ξ1(t) = e^{1/(e−1)}. Then, the radii are:

ρ1 = 0.382692, ρ2 = 0.101363, ρ3 = 0.0503704 = ρ, ρ4 = 0.215128.
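For Example 23 the first radius even has a closed form: with ξ0(t) = L0 t and ξ(t) = Lt, solving η1(t) = 1 gives ρ1 = 2/(2L0 + L). A quick check, assuming L0 = e − 1 and L = e^{1/(e−1)} as above:

```python
import math

L0 = math.e - 1.0                      # xi_0(t) = L0 * t
L = math.exp(1.0 / (math.e - 1.0))     # xi(t)  = L * t

# eta_1(t) = (L*t/2) / (1 - L0*t); eta_1(t) = 1 gives t = 2 / (2*L0 + L)
rho1 = 2.0 / (2.0 * L0 + L)
eta1 = (L * rho1 / 2.0) / (1.0 - L0 * rho1)
print(rho1, eta1)
```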


Example 24. Consider X = Y = C[0, 1], Ω = U[0, 1] and F : Ω → Y defined by

F(ψ)(x) = ψ(x) − 5 ∫₀¹ xθψ(θ)³dθ.    (20.24)

We have that

F′(ψ)(ξ)(x) = ξ(x) − 15 ∫₀¹ xθψ(θ)²ξ(θ)dθ, for each ξ ∈ Ω.

Then, we get that x∗ = 0, so ξ0(t) = 7.5t, ξ(t) = 15t, and ξ1(t) = 2. Then, the radii are:

ρ1 = 0.0666667, ρ2 = 0.1031589, ρ3 = 0.00586666 = ρ, ρ4 = 0.0061192.

Example 25. For the academic example of the introduction, we have ξ0(t) = ξ(t) = 96.6629073t and ξ1(t) = 2. Then, the radii are:

ρ1 = 0.00689682, ρ2 = 0.001717593, ρ3 = 0.000803199 = ρ, ρ4 = 0.0169864.

4. Conclusion

An extended local convergence analysis of scheme (20.1), under a set of conditions involving only the first derivative, is given. Moreover, error estimates and uniqueness-of-the-solution results were provided, in contrast to the earlier study [7] using hypotheses up to the ninth derivative. Hence, we expand the applicability of scheme (20.1).

References

[1] Argyros, I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier Publ. Co., New York, USA, 2007.

[2] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28 (2012), 364-387.

[3] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.

[4] Argyros, I. K., Magreñán, A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.

[5] Behl, R., Bhalla, S., Martinez, E., Alsulami, M. A., Derivative-free King's scheme for multiple zeros of nonlinear functions, Mathematics, 9 (2021), 1242. https://doi.org/10.3390/math9111242.

[6] Behl, R., Bhalla, S., Magrenan, A. A., Moysi, A., An optimal derivative free family of Chebyshev-Halley's method for multiple zeros, Mathematics, 9 (2021), 546.


[7] Cordero, A., Jordan, C., Codesal, E., Torregrosa, J. R., Highly efficient algorithms for solving nonlinear systems with arbitrary order of convergence p + 3, p ≥ 5, J. Comput. Appl. Math., 330 (2018), 748-758.

[8] Ham, Y., Chun, C., A fifth order iterative method for solving nonlinear equations, Appl. Math. Comput., 194 (2007), 287-290.

[9] Hernandez, M. A., Rubio, M. J., A uniparametric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl., 275 (2002), 821-834.

[10] Kantorovich, L. V., Akilov, G. P., Functional Analysis, second edition, Pergamon Press, Oxford, 1982 (translated from the Russian by Howard L. Silcock).

[11] Kung, H. T., Traub, J. F., Optimal order of one-point and multipoint iteration, Carnegie Mellon University, Research Showcase@CMU, Computer Science Department, Paper 1747, 1973.

[12] Magreñán, A. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56 (2018), 2117-2131. https://doi.org/10.1007/s10910-018-0856-y.

[13] Magreñán, A. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275 (2015), 527-538.

[14] Ren, H., Wu, Q., Bi, W., A class of two-step Steffensen type methods with fourth-order convergence, Appl. Math. Comput., 209 (2009), 206-210.

[15] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, in: Mathematical Models and Numerical Methods (Tikhonov, A. N. et al., eds.), Pub. 3, Banach Center, Warsaw, Poland, 1977, 129-142.

[16] Sharma, J. R., Arora, H., Efficient derivative-free numerical methods for solving systems of nonlinear equations, Comp. Appl. Math., 35 (2016), 269-284. doi:10.1007/s40314-014-0193-0.

[17] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey, 1964.

Chapter 21

Homocentric Ball for Newton's and the Secant Method

1. Introduction

Newton’s method is a powerful technique for solving the equation F(x) = 0,

(21.1)

where F : Ω ⊆ X → Y is a nonlinear operator mapping an open domain Ω of a Banach space X into a Banach space Y [12]. Newton's method is often modified to reduce the operation costs and for other objectives [2, 3]. The secant method is a well-known variation of Newton's method [1, 10, 14, 15, 21]:

x_{n+1} = x_n − F[x_{n−1}, x_n]^{−1}F(x_n), n ≥ 0,    (21.2)

where the initial points x−1 and x0 are given, the inverse operator F[u, v]^{−1} ∈ L(Y, X) is defined in [7], and L(Y, X) is the set of bounded linear operators from Y to X. The convergence features of (21.2) have been widely studied. For example, conditions on the initial points make the secant iterative sequence (21.2) converge to the solution x⋆ of (21.1), and the convergence speed is of interest when the initial points are sufficiently close to x⋆. A number of papers also propose convergence theorems depicting the results of the convergence analysis [4, 6, 8, 15]. One of these results is the known convergence order: if x⋆ is a simple root of F and the sequence {xn} defined by (21.2) gets sufficiently close to x⋆, then ||x_{n+1} − x⋆|| ∼ ||x_n − x⋆|| · ||x_{n−1} − x⋆||. Hence ||x_{n+1} − x⋆|| ∼ ||x_n − x⋆||^r, where r = (1 + √5)/2 ≈ 1.618 is the unique positive root of r² − r − 1 = 0.

This chapter involves the following questions about the secant method (21.2). How do we understand the difference among these convergence theorems? How can we compare the various conditions in their hypotheses? The author of [7] discussed the convergence of Newton's method using a maximal convergence ball O(x⋆, r) with center x⋆ and radius r. The ball has the optimal radius, and Newton's iteration converges if the initial point x0 is interior to this ball. Sparked by this interesting idea, we focus on various convergence theorems of the secant method (21.2), and for each convergence theorem we respectively obtain a corresponding convergence ball, where the hypothesis conditions of


the theorem can be satisfied. Since all of the convergence balls have the same center x⋆, the solution of (21.1), they can be viewed as a homocentric ball. These convergence theorems are sorted by the different sizes of the various radii of this homocentric ball. In some sense, the sorted sequence directly indicates the weakness of the conditions in the convergence theorems.
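The convergence-order claim above is easy to observe numerically. The sketch below runs the secant method on the illustrative equation cos x − x = 0 (not taken from the text) and estimates the order from consecutive errors; the estimate oscillates around r ≈ 1.618:

```python
import math

def secant(f, x_prev, x, steps=8):
    """Scalar secant iteration (21.2)."""
    xs = [x_prev, x]
    for _ in range(steps):
        fa, fb = f(xs[-1]), f(xs[-2])
        if fa == fb:                     # divided difference degenerated
            break
        xs.append(xs[-1] - fa * (xs[-1] - xs[-2]) / (fa - fb))
    return xs

x_star = 0.7390851332151607              # solution of cos(x) - x = 0
xs = secant(lambda t: math.cos(t) - t, 0.5, 0.6)
errs = [abs(t - x_star) for t in xs if abs(t - x_star) > 1e-14]
order = math.log(errs[-1] / errs[-2]) / math.log(errs[-2] / errs[-3])
print(abs(xs[-1] - x_star), order)
```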

2. Local Convergence

We first develop the local convergence results for method (21.2).

Theorem 33. Suppose:

(i) The point x⋆ is a simple solution of equation (21.1).

(ii) For all x, y ∈ Ω and some L0 > 0,

||F′(x⋆)^{−1}([x, y; F] − F′(x⋆))|| ≤ L0(||x − x⋆|| + ||y − x⋆||)

holds. Set Ω0 = U(x⋆, 1/(2L0)) ∩ Ω.

(iii) For all x, y ∈ Ω0 and some L > 0

||F′(x⋆)^{−1}([x⋆, x; F] − [x, y; F])|| ≤ L||y − x⋆||

holds.

(iv) U(x⋆, R) ⊂ Ω.

Then, the sequence {xn} generated by method (21.2) and starting from x−1, x0 ∈ U(x⋆, R) − {x⋆} is well defined in U(x⋆, R), remains in U(x⋆, R) for all n = 0, 1, 2, · · · and converges to x⋆, where

B0 = 1 / (1 − L0(||x⋆ − x0|| + ||x⋆ − x−1||)) and R = 1 / (B0 L).

Moreover, we have the following estimates:

||xn − x⋆|| ≤ (1/(B0 L))(B0 L||x0 − x⋆||)^{s_{n+1}}(B0 L||x−1 − x⋆||)^{s_n},    (21.3)

where s_n = (1/√5)[((1 + √5)/2)^n − ((1 − √5)/2)^n] is the Fibonacci sequence, given implicitly by s−1 = 1, s0 = 0, s1 = s2 = 1 and s_{n+1} = s_n + s_{n−1}.

Proof. Let u, v ∈ U(x⋆, R) − {x⋆} and write A(u, v) = [u, v; F]. By (i) and (ii), we obtain in turn that

||F′(x⋆)^{−1}(A(u, v) − F′(x⋆))|| ≤ L0(||u − x⋆|| + ||v − x⋆||) ≤ 2L0 R < 1.    (21.4)

Using (21.4) and a lemma on invertible linear operators due to Banach [1, 5], we deduce A(u, v)^{−1} ∈ L(Y, X) with

||A(u, v)^{−1}F′(x⋆)|| ≤ 1 / (1 − L0(||u − x⋆|| + ||v − x⋆||)).    (21.5)


In particular, if u = x−1 and v = x0, the iterate x1 exists by method (21.2) for n = 0, and we can write

||x1 − x⋆|| = ||x0 − x⋆ − A0^{−1}F(x0)||
= ||(A0^{−1}F′(x⋆))(F′(x⋆)^{−1}([x⋆, x0; F] − [x0, x−1; F]))(x0 − x⋆)||
≤ ||A0^{−1}F′(x⋆)|| ||F′(x⋆)^{−1}([x⋆, x0; F] − [x0, x−1; F])(x0 − x⋆)||
≤ B0 L||x−1 − x⋆|| ||x0 − x⋆|| < R,

so x1 ∈ U(x⋆, R). Hence, (21.3) holds for n = 1. Suppose that (21.3) holds for all k = 0, 1, 2, · · · , m < n. Then, for m = n, we have

B0 L||xn − x⋆|| ≤ (B0 L||xn−1 − x⋆||)(B0 L||xn−2 − x⋆||) ≤ · · · ≤ (B0 L||x0 − x⋆||)^{s_{n+1}}(B0 L||x−1 − x⋆||)^{s_n},

so xn ∈ U(x⋆, R), and lim_{n→∞} xn = x⋆ (by letting n → ∞ in (21.3)).

so xn ∈ U(x? , R), and limn→∞ xn = x? (by letting n → ∞) in (21.3). Remark 30. If F has divided differences of order two, then (iii) can be replaced by kF 0 (x? )−1 [x? , x, y; F]k ≤ L for all x, y ∈ Ω0 . In earlier studies the stronger and not needed conditions have been use kF 0 (x? )−1 ([x, y; F] − [u, v; F])k ≤ M(kx − uk + ky − vk)

(21.6)

kF 0 (x? )−1 [x, y, u; F]k ≤ N

(21.7)

or for all x, y, u, v ∈ Ω, and the radius of convergence is given by r=

1 3M

or r =

1 . 3N

(21.8)

But, we have Ω0 ⊆ Ω,

(21.9)

so L0 ≤ M, L0 ≤ N

(21.10)

L ≤ M, L ≤ N

(21.11)

r ≤ R.

(21.12)

and hold. Hence, we deduce Hence, the convergence radius R is least as large as the old one r. Moreover, the error estimates are also tighter. That is fewer iterates are needed to achieve a predetermined accuracy. Examples, where items (21.9) − (21.12) are strict can be found in [1]-[4]. Concerning the uniqueness of the solution, we have a result not necessarily depending on Theorem 33.


Proposition 4. Suppose that x⋆ is a simple solution of equation F(x) = 0 and that, for all x ∈ Ω and some M0 ∈ [0, 1),

||F′(x⋆)^{−1}([x⋆, x; F] − F′(x⋆))|| ≤ M0||x − x⋆||    (21.13)

holds. Set Ω1 = U(x⋆, 1/M0) ∩ Ω. Then, the only solution of equation F(x) = 0 in the domain Ω1 is x⋆.

Proof. Let z ∈ Ω1 with F(z) = 0. Then, using (21.13), we obtain

||F′(x⋆)^{−1}([x⋆, z; F] − F′(x⋆))|| ≤ M0||z − x⋆|| < 1,

so [x⋆, z; F]^{−1} ∈ L(Y, X). Hence, we conclude x⋆ = z by the identity [x⋆, z; F](x⋆ − z) = F(x⋆) − F(z) = 0. Notice also that

M0 ≤ M    (21.14)

holds. Next, we develop the result corresponding to Theorem 33 for Newton's method, but using derivatives instead of divided differences.

Corollary 1. Suppose:

(a) The point x⋆ is a simple solution of equation (21.1).

(b) For all x ∈ Ω and some L0 > 0,

||F′(x⋆)^{−1}(F′(x) − F′(x⋆))|| ≤ L0||x − x⋆||.

Set Ω2 = U(x⋆, 1/L0) ∩ Ω.

(c) For all x, y ∈ Ω2 and some L > 0,

||F′(x⋆)^{−1}(F′(x) − F′(y))|| ≤ L||x − y||.

(d) U(x⋆, R) ⊂ Ω.

Then, the sequence {xn} generated by Newton's method and starting from x0 ∈ U(x⋆, R) − {x⋆} is well defined in U(x⋆, R), remains in U(x⋆, R) for all n = 0, 1, 2, · · · and converges to x⋆, where

B0 = 1 / (1 − L0||x⋆ − x0||) and R = 1 / (B0 L).

Moreover, the following estimates hold:

||xn − x⋆|| ≤ (2/(B0 L))((B0 L/2)||x0 − x⋆||)^{s_{n+1}}((B0 L/2)||x−1 − x⋆||)^{s_n}.

Remark 31. (1) Comments similar to the ones given in Remark 30 can follow for Newton’s method. (2) The convergence radius is the same for both methods.


(3) Consider the following specialization of the secant method, defined for each n = −1, 0, 1, · · · by

x0 = x−1 − F′(x−1)^{−1}F(x−1),
x_{n+1} = xn − A_n^{−1}F(xn).    (21.15)

Clearly, the convergence radius is again R1 = 1/(B0 L). In particular, we have the following specialization of Corollary 1.

Corollary 2. Suppose that the conditions of Corollary 1 hold. Then, the sequence {xn} generated by (21.15) and starting from x−1 ∈ U(x⋆, R1) − {x⋆} converges to x⋆, so that

||xn − x⋆|| ≤ (1/(B0 L))(B0 L||x−1 − x⋆||)^{s_{n+3}}.

Proof. Simply follow the proof of Theorem 33 to obtain instead

||F′(x−1)^{−1}F′(x⋆)|| ≤ 1 / (1 − L0||x−1 − x⋆||),

so

||x0 − x⋆|| ≤ B0 L||x−1 − x⋆||², ||x1 − x⋆|| < B0 L||x0 − x⋆|| ||x−1 − x⋆||,

and

B0 L||xn − x⋆|| ≤ (B0 L||x0 − x⋆||)^{s_{n+1}}(B0 L||x−1 − x⋆||)^{s_n} < (B0 L||x−1 − x⋆||)^{s_{n+3}}.

3. Semi-Local Convergence

Benefits similar to those shown for the local convergence analysis follow in an analogous way. Let us provide the main new idea. Suppose that

||A0^{−1}(A(x, y) − A0)|| ≤ (L0/2)(||x − x0|| + ||y − x−1||)    (21.16)

holds for each x, y ∈ Ω. Set D0 = U(x0, 1/L0) ∩ Ω. Notice that

D0 ⊆ Ω.    (21.17)

Moreover, the iterates are restricted to D0, not to Ω as in the earlier studies [5]-[12]. Next, we state the earlier conditions, so we can compare them to ours. These conditions were given in non-affine invariant form, but we present them in affine invariant form; the advantages of the latter over the former are well explained in [5]. The old conditions are

||A0^{−1}[x, y, z; F]|| ≤ (1/2)K    (21.18)


and

||F′(x⋆)^{−1}F″(x)|| ≤ K (or ||F′(x⋆)^{−1}(F′(y) − F′(x))|| ≤ K||y − x||)    (21.19)

for all x, y, z ∈ Ω. But ours are, respectively,

||A0^{−1}[x, y, z; F]|| ≤ L/2    (21.20)

and

||F′(x⋆)^{−1}F″(x)|| ≤ L (or ||F′(x⋆)^{−1}(F′(y) − F′(x))|| ≤ L||y − x||)    (21.21)

for all x, y, z ∈ D0. Then, we have

L ≤ K.    (21.22)

Hence, K can be replaced by L in all the previous results, yielding the benefits already stated in the introduction. It is worth noticing that these benefits are obtained with the same computational effort, since computing K also requires the computation of L as a special case. That is why we skip the proofs of the results that follow. Next, we extend the corresponding results in [7], respectively.

Theorem 34. Suppose that

(i) ||A0^{−1}F(x0)|| ≤ σ and ||A0^{−1}F(x−1)|| ≤ σ′ (σ′ ≥ σ),

(ii) [u, v, w; F] is bounded on U[x0, 2σ′] and ||A0^{−1}[u, v, w; F]|| ≤ L/2,

(iii) h = (1/2)L(σ + σ′) < 1/4.

Then there is a solution x⋆ of (21.1) in U[x0, 2σ′], and the sequence {xn} generated by (21.2) converges to x⋆. Moreover, ||xn − x⋆|| < 2^{1−t_n} q^{t_n−1}(4h)^{t_n} σ, where 0 < q < 1 and the sequence t_n is the partial sum of u_n, defined by t_n = Σ_{i=0}^{n} u_i = u_{n+2} − 1.

Theorem 35. Suppose that F(x⋆) = 0, ||F′(x⋆)^{−1}F″(x)|| ≤ K1 for every x ∈ Ω, and

U[x⋆, (6 − √6)/(6K1)] ⊂ Ω.

Then, for each pair of points

x−1, x0 ∈ U[x⋆, (3 − √6)/(3K1)],

the conditions of Theorem 34 hold.

Theorem 36. Suppose that

(1) the operator A0 is invertible and ||A0^{−1}F(x0)|| ≤ σ, ||A0^{−1}F(x−1)|| ≤ σ′ (σ′ ≥ σ),

(2) in U(x0, (1 + T(h))σ′) the operator [u, v, w; F] is bounded and ||A0^{−1}[u, v, w; F]|| ≤ L/2,

(3) h = (1/2)L(σ + σ′) < 1.

Then the conclusions of Theorem 34 follow. Moreover,


||xn − x⋆|| < (1 + T(h))h^{t_n} σ′,

where T(h) = Σ_{n=1}^{∞} h^{t_n}, and the sequence t_n defined in Theorem 34 is the partial sum of the Fibonacci sequence u_n.

Theorem 37. Let T(h) = Σ_{n=1}^{∞} h^{t_n} be as defined in Theorem 36, and denote by h⋆ ≈ 0.56655 the unique positive root of h²T′(h) = 1, so that T(h⋆) = Σ_{n=1}^{∞} h⋆^{t_n} ≈ 1.01040. Suppose that F(x⋆) = 0,

U[x⋆, d/L] ⊂ Ω,

and ||F″(x)|| ≤ L for any x ∈ U[x⋆, d/L]. Then, for each pair of points

x−1, x0 ∈ U[x⋆, v/L],

the conditions of Theorem 36 hold, where

d = (1 + T(h⋆)) v(2 − v)/(2 − 2v) + v ≈ 0.62201

and

v := v(h⋆) = 1 − √( (1 + h⋆(1 + T(h⋆))) / (1 + 2h⋆ + h⋆(1 + T(h⋆))) ) ≈ 0.19148.

Theorem 38. Suppose that

(1) the operator A0 is invertible and ||A0^{−1}F(x0)|| ≤ σ, ||A0^{−1}F(x−1)|| ≤ σ′ (σ′ ≥ σ),

(2) in U[x1, T(h)σ′] ⊂ Ω the operator [u, v, w; F] is bounded and ||A0^{−1}[u, v, w; F]|| ≤ L/2,

(3) h = (1/2)L(σ + σ′) < 1.

Then, F has a solution x⋆ in U[x1, T(h)σ′]. The sequence {xn} generated by (21.2) with the initial points x−1 and x0 in the ball U[x1, T(h)σ′] is well defined and converges to x⋆. Moreover,

||xn − x⋆|| < (1 + T(h))h^{t_{n−1}} σ′,

where T and t_n are defined in Theorem 36, i.e., T(h) = Σ_{n=1}^{∞} h^{t_n}, t0 = 0, t1 = 1, t2 = 2, t3 = 4, · · · (see [5]).

Theorem 39. Suppose that F(x⋆) = 0 and ||F′(x⋆)^{−1}|| ≤ β. If ||F″(x)|| ≤ L for any x in U[x⋆, d′/(Lβ)] ⊂ Ω, then for each pair of points

x−1, x0 ∈ U[x⋆, v′/(Lβ)],

the conditions of Theorem 38 hold, where T and h⋆ are defined in Theorem 36 and Theorem 37, respectively,

d′ = [T(h⋆)(2 − v′) + v′] v′/(2 − 2v′) ≈ 0.39981,

and


v′ := v′(h⋆) = 1 + 2h⋆/(1 − h⋆ + h⋆T(h⋆)) − √( (1 + 2h⋆/(1 − h⋆ + h⋆T(h⋆)))² − 2h⋆/(1 − h⋆ + h⋆T(h⋆)) ) ≈ 0.28381.

Theorem 40. Let F(x) have a second Fréchet derivative and let F″(x) be continuous on Ω. Assume that ||x0 − x−1|| ≤ τ and ||x1 − x0|| ≤ σ, where the initial points x−1, x0 ∈ Ω, and ||F″(x)|| ≤ L for all x ∈ Ω. Moreover, let

τ0 = [1 − g/2 − √((1 − g/2)² − 2h)] / [1 − g/2 + √((1 − g/2)² − 2h)],
τ−1 = [1 + g/2 − √((1 − g/2)² − 2h)] / [1 + g/2 + √((1 − g/2)² − 2h)],

and

r_n = ((1 − τ0τ−1)/(1 − τ0^{u_{n+1}}τ−1^{u_n})) τ0^{u_{n+1} − 1/2} τ−1^{u_n − 1/2} σ^{1/2}(τ + σ)^{1/2}, n = −1, 0, 1, · · · .

If U[x1, r1] ⊂ Ω, then there is a unique solution x⋆ of (21.1) in {U[x0, t⋆] ∪ U[x0, t⋆⋆]} ∩ Ω, where

t⋆, t⋆⋆ = σ[1 − g/2 ∓ √((1 − g/2)² − 2h)]/h.

Moreover, the disks {U[x0, r_n]}_{n=−1}^{∞} converge monotonically to x⋆.

Theorem 41. Suppose that F(x⋆) = 0. If ||F′(x⋆)^{−1}F″(x)|| ≤ L for any x ∈ Ω and U[x⋆, 2/(5L)] ⊂ Ω, then for each pair of points

x−1, x0 ∈ U[x⋆, 1/(5L)],

the conditions of Theorem 40 hold.

Theorem 42. Let F(x) have a first Fréchet derivative and let F′(x) be continuous on Ω. Assume ||A0^{−1}F″(x)|| ≤ L for all x ∈ Ω. If for each pair x−1, x0 ∈ Ω,

||x1 − x−i|| ≤ σ (i = 0, 1), h = Lσ ≤ 1/2,

and U[x1, σ/τ] ⊂ Ω, then the sequence {xn}_{n=1}^{∞} generated by (21.2) converges to x⋆, which is the unique solution of (21.1) in (Ω ∩ ⋂_{i=0}^{1} U[x−i, (1 + μ)σ]) ∪ (Ω ∩ ⋂_{i=0}^{1} U[x−i, (1 + μ^{−1})σ]). Moreover, xn ∈ Ω ∩ ⋂_{i=0}^{1} U[x−i, (1 + μ)σ], and the error estimate

||x⋆ − xn|| ≤ (1 + μ)μ^{u_{n+2}−1} σ Σ_{i=0}^{u_{n+2}−1} μ^{i}

is satisfied, where

μ = (1 − √(1 − 2h)) / (1 + √(1 − 2h)).

Theorem 43. Suppose that F(x⋆) = 0. If ||F′(x⋆)^{−1}F″(x)|| ≤ L for all x ∈ Ω and U[x⋆, (√2 − 1)/(Lβ)] ⊂ Ω, then for each pair of points

x−1, x0 ∈ U[x⋆, (2 − √2)/(2Lβ)],

the conditions of Theorem 42 hold.

4. Numerical Examples

Example 26. Consider the kinematic system

F1′(x) = e^x, F2′(y) = (e − 1)y + 1, F3′(z) = 1,

with F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3), X = Y = R³, Ω = U[0, 1] and x⋆ = (0, 0, 0)ᵀ. Define the function F on Ω for w = (x, y, z)ᵀ by

F(w) = (e^x − 1, ((e − 1)/2)y² + y, z)ᵀ.

Then, we get

F′(w) = diag(e^x, (e − 1)y + 1, 1),

so L0 = (e − 1)/2, L = e^{1/(e−1)}/2, x0 = 0.01, x−1 = 0.015, M = e, N = e, M0 = (e − 1)/2, L0′ = 2L0, L′ = 2L, and

R = 0.0906094, R̄ = 0.078711, r = 0.122626.

Example 27. Consider X = Y = C[0, 1], Ω = U[0, 1] and F : Ω → Y defined by

F(ψ)(x) = ψ(x) − 5 ∫₀¹ xθψ(θ)³dθ.    (21.23)

We have that

F′(ψ)(ξ)(x) = ξ(x) − 15 ∫₀¹ xθψ(θ)²ξ(θ)dθ, for each ξ ∈ Ω.

Then, we get that x⋆ = 0, x0(x) = 0.01, x−1(x) = 0.015, so M0 = L0 = 15/4, L0′ = 2L0, L = 15/2, L′ = 2L, M = 15, N = 15/2, and

R = 0.1471264, R̄ = 0.072072, r = 0.02222.

5. Conclusion

A local convergence theorem and five semi-local convergence theorems for the secant method are presented in this chapter. For every convergence theorem, a convergence ball is respectively introduced, in which the hypotheses of the corresponding theorem are satisfied. Since all of these convergence balls have the same center x⋆, they can be viewed as homocentric balls. The convergence theorems are sorted by the different sizes of the various radii of this homocentric ball, and the sorted sequence represents the degree of weakness of the conditions in the convergence theorems.

References

[1] Argyros, I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier Publ. Co., New York, USA, 2007.

[2] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28 (2012), 364-387.

[3] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.

[4] Argyros, I. K., Magreñán, A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.

[5] Dennis, J. E., Toward a unified convergence theory for Newton-like methods, in: Rall, L. B. (ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971, 425-472.

[6] Huang, Z. D., On the error estimates of several Newton-like methods, Appl. Math. Comput., 106 (1999), 1-16.

[7] Kewel, L., Homocentric convergence ball of the secant method, Appl. Math. J. Chinese Univ. Ser. B, 22, 3 (2007), 353-365.

[8] Magreñán, A. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56 (2018), 2117-2131. https://doi.org/10.1007/s10910-018-0856-y.

[9] Magreñán, A. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275 (2015), 527-538.

[10] Raydan, M., Exact order of convergence of the secant method, J. Optim. Theory Appl., 78, 3 (1993), 541-551.

[11] Ren, H. M., Wu, Q. B., Mysovskii-type theorem for the secant method under Hölder continuous Fréchet derivative, J. Math. Anal. Appl., 320, 1 (2006), 415-424.


[12] Wang, X. H., Li, C., Local and global behavior for algorithms for solving equations, Chinese Sci. Bull., 46 (2001), 441-448.

Chapter 22

A Tenth Convergence Order Method under Generalized Conditions

1. Introduction

We extend the applicability of a tenth-order method for solving nonlinear equations in the setting of Banach spaces. This is done under generalized conditions only on the first derivative, which does appear in the method, whereas earlier works use derivatives up to order eleven (not appearing in the scheme) to establish the convergence. Our technique is general enough to extend the usage of other schemes along the same lines. Let B and B1 be Banach spaces and Ω ⊂ B be an open convex set. In this chapter, we are interested in the tenth-order method [10]:

y_k = x_k − F′(x_k)^{−1}F(x_k),
z_k = y_k − F′(y_k)^{−1}F(y_k),    (22.1)
x_{k+1} = z_k − A_k^{−1}B_k F′(y_k)^{−1}F(z_k),

where A_k = 5F′(z_k) − F′(y_k) and B_k = F′(z_k) + 3F′(y_k), for approximating a solution x∗ of the equation

F(x) = 0,    (22.2)

where F : Ω ⊂ B → B1 is a nonlinear Fréchet differentiable operator. Various iterative methods for approximating x∗ can be found in [1]-[29] and the references therein. The convergence order was established in these works using hypotheses on higher-order derivatives (not appearing in the methods), and most of these methods are set in a multidimensional Euclidean space. No computable error bounds on ||x_n − x∗|| or uniqueness results were given either. We address all these problems in this chapter. Assumptions on derivatives of order up to eleven reduce the applicability of method (22.1). For example, let B = B1 = R, Ω = [−1/2, 3/2], and define f on Ω by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and f(0) = 0.


Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Then, we have t∗ = 1 and f'''(t) = 6 log t² + 60t² − 24t + 22. Obviously, f''' is not bounded on Ω, so the convergence of method (22.1) is not guaranteed by the analyses in the earlier papers. Moreover, our approach is general enough to be used to extend the applicability of other methods [1]-[29] in a similar way. The rest of the chapter contains the convergence analysis of the method.
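For intuition, the behavior of method (22.1) on this scalar academic example can be checked numerically. The following sketch is ours (the starting point x0 = 1.1 and the iteration count are illustrative choices, not part of the analysis):

```python
import math

def f(t):
    # academic example: f(t) = t^3 log t^2 + t^5 - t^4 (t != 0), f(0) = 0
    return t**3 * math.log(t * t) + t**5 - t**4 if t != 0 else 0.0

def df(t):
    # f'(t) = 3t^2 log t^2 + 2t^2 + 5t^4 - 4t^3 (t != 0), f'(0) = 0
    return 3 * t * t * math.log(t * t) + 2 * t * t + 5 * t**4 - 4 * t**3 if t != 0 else 0.0

x = 1.1  # starting point near the solution t* = 1
for _ in range(5):
    y = x - f(x) / df(x)              # first substep of (22.1)
    z = y - f(y) / df(y)              # second substep
    A = 5 * df(z) - df(y)             # A_k = 5F'(z_k) - F'(y_k)
    B = df(z) + 3 * df(y)             # B_k = F'(z_k) + 3F'(y_k)
    x = z - (B / A) * f(z) / df(y)    # third substep
print(abs(x - 1.0) < 1e-12)
```

Even though f''' blows up near t = 0, the iteration started at 1.1 stays near t∗ = 1 and converges rapidly, which is consistent with conditions on the first derivative only.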

2. Convergence

It is convenient to define the functions and parameters appearing in the local convergence analysis of method (22.1). Set A = [0, ∞). Suppose that:

(i) the function ξ0(t) − 1 has a smallest zero R0 ∈ A − {0}, where ξ0 : A −→ A is continuous and nondecreasing (CN). Set A0 = [0, R0).

(ii) the function η1(t) − 1 has a smallest zero ρ1 ∈ A0 − {0}, where ξ : A0 −→ A is CN and η1 : A0 −→ A is defined by

η1(t) = ∫₀¹ ξ((1 − τ)t)dτ / (1 − ξ0(t)).

(iii) the function ξ0(η1(t)t) − 1 has a smallest zero R1 ∈ A0 − {0}. Set R2 = min{R0, R1} and A1 = [0, R2).

(iv) the function η2(t) − 1 has a smallest zero ρ2 ∈ A1 − {0}, where η2 : A1 −→ A is defined by η2(t) = η1(η1(t)t)η1(t).

(v) the function ξ0(η2(t)t) − 1 has a smallest zero R3 ∈ A1 − {0}. Set R = min{R2, R3} and A2 = [0, R).

(vi) the function η3(t) − 1 has a smallest zero ρ3 ∈ A2 − {0}, where ξ1 : A2 −→ A is CN and η3 : A2 −→ A is defined by

η3(t) = [η1(η2(t)t)
+ (ξ0(η2(t)t) + ξ0(η1(t)t)) ∫₀¹ ξ1(τη2(t)t)dτ / ((1 − ξ0(η2(t)t))(1 − ξ0(η1(t)t)))
+ (ξ0(η1(t)t) + ξ0(η2(t)t)) ∫₀¹ ξ1(τη2(t)t)dτ / ((1 − p(t))(1 − ξ0(η1(t)t)))] η2(t),

where p(t) = (1/4)(5ξ0(η2(t)t) + ξ0(η1(t)t)). Set

ρ = min{ρ_j}, j = 1, 2, 3.      (22.3)

This parameter is shown to be a convergence radius for method (22.1). Set A3 = [0, ρ). The definition of ρ implies that for all t ∈ A3 the following hold:

0 ≤ ξ0(t) < 1,      (22.4)
0 ≤ ξ0(η1(t)t) < 1,      (22.5)
0 ≤ ξ0(η2(t)t) < 1,      (22.6)
0 ≤ p(t) < 1,      (22.7)

and

0 ≤ η_j(t) < 1.      (22.8)

The conditions (C) that follow are needed, provided that x∗ is a simple solution of the equation F(x) = 0 and the functions η_j are as defined previously. We denote by U[x∗, γ] the closure of the ball U(x∗, γ) having x∗ ∈ B as a center and γ > 0 as a radius. Suppose:

(c1) For all u ∈ Ω,

‖F'(x∗)⁻¹(F'(u) − F'(x∗))‖ ≤ ξ0(‖u − x∗‖).

Set Ω0 = U[x∗, R0] ∩ Ω.

(c2) For all u, v ∈ Ω0,

‖F'(x∗)⁻¹(F'(v) − F'(u))‖ ≤ ξ(‖v − u‖) and ‖F'(x∗)⁻¹F'(v)‖ ≤ ξ1(‖v − x∗‖).

(c3) U[x∗, ρ] ⊂ Ω.

Next, we develop the convergence result for method (22.1) based on conditions (C) and the preceding notation.


Theorem 44. Suppose conditions (C) hold, and choose x0 ∈ U(x∗, ρ) − {x∗}. Then, lim_{n→∞} x_n = x∗.

Proof. Mathematical induction is used to show the items

{x_n} ⊂ U(x∗, ρ),      (22.9)
‖y_n − x∗‖ ≤ η1(q_n)q_n ≤ q_n < ρ,      (22.10)
‖z_n − x∗‖ ≤ η2(q_n)q_n ≤ q_n,      (22.11)

and

q_{n+1} ≤ η3(q_n)q_n ≤ q_n,      (22.12)

where q_n = ‖x_n − x∗‖, and the radius ρ and the functions η_m are given previously. Let x ∈ U(x∗, ρ) − {x∗}. In view of (22.3), (22.4) and (c1), we obtain

‖F'(x∗)⁻¹(F'(x) − F'(x∗))‖ ≤ ξ0(‖x − x∗‖) ≤ ξ0(ρ) < 1.      (22.13)

Estimate (22.13), together with the Banach lemma on invertible operators [15], gives F'(x)⁻¹ ∈ L(B1, B) and

‖F'(x)⁻¹F'(x∗)‖ ≤ 1/(1 − ξ0(‖x − x∗‖)).      (22.14)

By hypothesis x0 ∈ U(x∗, ρ) − {x∗}, so estimate (22.14) holds for x = x0, F'(x0)⁻¹ ∈ L(B1, B), and y0 is well defined by the first substep of method (22.1). So, we can write

y0 − x∗ = x0 − x∗ − F'(x0)⁻¹F(x0)
= (F'(x0)⁻¹F'(x∗)) ∫₀¹ F'(x∗)⁻¹(F'(x∗ + τ(x0 − x∗)) − F'(x0))dτ (x0 − x∗).      (22.15)

Using (22.3), (22.8) (for j = 1), (c2), (22.14) (for x = x0) and (22.15), we have

‖y0 − x∗‖ ≤ ∫₀¹ ξ((1 − τ)q0)dτ q0 / (1 − ξ0(q0)) ≤ η1(q0)q0 ≤ q0 < ρ,      (22.16)

showing (22.10) for n = 0 and y0 ∈ U(x∗, ρ). By exchanging x0 with y0, and using the second substep of method (22.1), (22.3), (22.8) (for j = 2), (c2) and (22.14) (for x = y0), we get as in (22.16)

‖z0 − x∗‖ ≤ η1(‖y0 − x∗‖)‖y0 − x∗‖ ≤ η1(η1(q0)q0)η1(q0)q0 = η2(q0)q0 ≤ q0,      (22.17)

showing (22.11) and z0 ∈ U(x∗, ρ). Moreover, (22.14) holds for x = z0. Next, we show A0⁻¹ ∈ L(B1, B). Indeed, using (c1), (22.3), (22.7), (22.16) and (22.17), we have in turn that

‖(4F'(x∗))⁻¹(A0 − 4F'(x∗))‖ ≤ (1/4)[5‖F'(x∗)⁻¹(F'(z0) − F'(x∗))‖ + ‖F'(x∗)⁻¹(F'(y0) − F'(x∗))‖]
≤ (1/4)(5ξ0(‖z0 − x∗‖) + ξ0(‖y0 − x∗‖))
≤ (1/4)(5ξ0(η2(q0)q0) + ξ0(η1(q0)q0)) = p(q0) ≤ p(ρ) < 1,

so

‖A0⁻¹F'(x∗)‖ ≤ 1/(4(1 − p(q0))).      (22.18)

Furthermore, the iterate x1 is well defined, and by the third substep of method (22.1) we can write

x1 − x∗ = z0 − x∗ − F'(z0)⁻¹F(z0) + F'(z0)⁻¹(F'(y0) − F'(z0))F'(y0)⁻¹F(z0) + A0⁻¹(A0 − B0)F'(y0)⁻¹F(z0).      (22.19)

By (22.3), (22.8) (for j = 3), (22.14) (for x = y0, z0), (22.16), (22.17), (22.18) and (22.19), we obtain

q1 ≤ [η1(‖z0 − x∗‖)
+ (ξ0(‖z0 − x∗‖) + ξ0(‖y0 − x∗‖)) ∫₀¹ ξ1(τ‖z0 − x∗‖)dτ / ((1 − ξ0(‖z0 − x∗‖))(1 − ξ0(‖y0 − x∗‖)))
+ (ξ0(‖y0 − x∗‖) + ξ0(‖z0 − x∗‖)) ∫₀¹ ξ1(τ‖z0 − x∗‖)dτ / ((1 − ξ0(‖y0 − x∗‖))(1 − p(q0)))] ‖z0 − x∗‖
≤ η3(q0)q0 ≤ q0,      (22.20)

showing (22.9) and (22.12) for n = 0. Then, repeat the preceding calculations with x0, y0, z0, x1 replaced by x_j, y_j, z_j, x_{j+1}, respectively, to complete the induction for items (22.9)-(22.12). Finally, the estimate

q_{j+1} ≤ λ q_j < ρ,      (22.21)

where λ = η3(q0) ∈ [0, 1), gives x_{j+1} ∈ U(x∗, ρ) and lim_{j→∞} x_j = x∗.

Next, a uniqueness-of-the-solution result is provided without necessarily using all the hypotheses (C).

Proposition 5. Suppose:

(a) the point x∗ ∈ Ω is a simple solution of the equation F(x) = 0; and


(b) there exists ρ∗ ≥ ρ satisfying

∫₀¹ ξ0(τρ∗)dτ < 1.      (22.22)

Set Ω1 = U[x∗, ρ∗] ∩ Ω. Then, the only solution of the equation F(x) = 0 in the domain Ω1 is x∗.

Proof. Let v∗ ∈ Ω1 with F(v∗) = 0. Set M = ∫₀¹ F'(v∗ + τ(x∗ − v∗))dτ. Then, by (c1) and (22.22), we get

‖F'(x∗)⁻¹(M − F'(x∗))‖ ≤ ∫₀¹ ξ0(τ‖v∗ − x∗‖)dτ ≤ ∫₀¹ ξ0(τρ∗)dτ < 1,      (22.23)

so M⁻¹ ∈ L(B1, B), and it follows from 0 = F(v∗) − F(x∗) = M(v∗ − x∗) that v∗ = x∗.

Remark 32. a. We can compute the computational order of convergence (COC) defined by

coc = ln(‖x_{n+1} − x∗‖/‖x_n − x∗‖) / ln(‖x_n − x∗‖/‖x_{n−1} − x∗‖),

or the approximate computational order of convergence

acoc = ln(‖x_{n+1} − x_n‖/‖x_n − x_{n−1}‖) / ln(‖x_n − x_{n−1}‖/‖x_{n−1} − x_{n−2}‖).

b. In view of (c1) and the estimate

‖F'(x∗)⁻¹F'(x)‖ = ‖F'(x∗)⁻¹(F'(x) − F'(x∗)) + I‖ ≤ 1 + ‖F'(x∗)⁻¹(F'(x) − F'(x∗))‖ ≤ 1 + ξ0(‖x − x∗‖),

the second condition in (c2) can be dropped and ξ1 can be replaced by ξ1(t) = 1 + ξ0(t), or by ξ1(t) = 2, since ξ0(t) < 1 for t ∈ [0, R0).

c. The results obtained here can be used for operators F satisfying autonomous differential equations [1]-[4] of the form F'(x) = P(F(x)), where P is a continuous operator. Then, since F'(x∗) = P(F(x∗)) = P(0), we can apply the results without actually knowing x∗. For example, let F(x) = eˣ − 1. Then, we can choose P(x) = x + 1.

d. Let ξ0(t) = K0 t and ξ(t) = K t. In [2, 3] we showed that

ρA = 2/(2K0 + K)

is the convergence radius of Newton's method

x_{n+1} = x_n − F'(x_n)⁻¹F(x_n) for each n = 0, 1, 2, ...,      (22.24)

under conditions (c1) and (c2). It follows from the definition of ρ in (22.3) that ρ cannot be larger than the convergence radius ρA of the second-order Newton's method (22.24). As already noted in [2, 3], ρA is at least as large as the convergence radius given by Rheinboldt [19],

ρR = 2/(3K1),      (22.25)

where K1 is the Lipschitz constant on Ω. The same value for ρR was given by Traub [24]. In particular, for K0 < K1 we have ρR < ρA and

ρR/ρA → 1/3 as K0/K1 → 0.

That is, the radius of convergence ρA is at most three times larger than Rheinboldt's.
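The COC and ACOC of Remark 32a are simple to compute from stored iterates. Here is a minimal helper of our own (the synthetic quadratically convergent test sequence is an illustrative assumption):

```python
import math

def acoc(xs):
    # approximate computational order of convergence (Remark 32a),
    # estimated from the last four iterates in xs
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# synthetic sequence with e_{n+1} = e_n^2 converging to 0 (Newton-like behavior)
xs = [0.1, 0.01, 1e-4, 1e-8, 1e-16]
print(acoc(xs))  # close to 2
```

The advantage, as noted above, is that the order is estimated from iterates alone, avoiding bounds involving higher-order derivatives.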

3. Numerical Examples

Example 28. Consider the kinematic system

F1'(x) = eˣ, F2'(y) = (e − 1)y + 1, F3'(z) = 1,

with F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3), X = X1 = R³, Ω = U[0, 1] and x∗ = (0, 0, 0)ᵗ. Define the function F on Ω for w = (x, y, z)ᵗ by

F(w) = (eˣ − 1, ((e − 1)/2)y² + y, z)ᵗ.

Then, we get

F'(w) = diag(eˣ, (e − 1)y + 1, 1),

so ξ0(t) = (e − 1)t, ξ(t) = e^{1/(e−1)} t and ξ1(t) = e^{1/(e−1)}. Then, the radii are

ρ1 = 0.382692, ρ2 = 0.7372499, ρ = ρ3 = 0.333191.

Example 29. Consider X = X1 = C[0, 1], Ω = U[0, 1] and F : Ω −→ X1 defined by

F(ψ)(x) = ψ(x) − 5 ∫₀¹ xθψ(θ)³dθ.      (22.26)

We have that

F'(ψ)(ξ)(x) = ξ(x) − 15 ∫₀¹ xθψ(θ)²ξ(θ)dθ for each ξ ∈ Ω.

Then, we get x∗ = 0, so ξ0(t) = 7.5t, ξ(t) = 15t and ξ1(t) = 2. Then, the radii are

ρ1 = 0.06667 = ρ2, ρ = ρ3 = 0.0558564.


Example 30. For the academic example of the introduction, we have ξ0(t) = ξ(t) = 96.6629073t and ξ1(t) = 2. Then, the radii are ρ1 = 0.00689682 = ρ2, ρ = ρ3 = 0.00598194.
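The radii ρ1 quoted in Examples 28-30 are easy to reproduce: for the linear choices ξ0(t) = K0 t and ξ(t) = K t, solving η1(t) = 1 gives (K t/2)/(1 − K0 t) = 1, i.e. ρ1 = 2/(2K0 + K), matching ρA of Remark 32d. A quick check (the function name is ours):

```python
import math

def rho1(K0, K):
    # eta_1(t) = (K t / 2) / (1 - K0 t) = 1  =>  t = 2 / (2 K0 + K)
    return 2.0 / (2.0 * K0 + K)

e = math.e
print(round(rho1(e - 1.0, e ** (1.0 / (e - 1.0))), 6))  # Example 28: 0.382692
print(round(rho1(7.5, 15.0), 5))                        # Example 29: 0.06667
print(round(rho1(96.6629073, 96.6629073), 8))           # Example 30: 0.00689682
```

The remaining radii ρ2 and ρ3 require the nested functions η2 and η3 and are found as the smallest roots of η2(t) = 1 and η3(t) = 1, e.g., by bisection.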

4. Conclusion

We extended the applicability of a tenth convergence order method under generalized conditions. Only the first derivative is used in the conditions, instead of derivatives of order up to eleven required in [10] (which do not appear in the method). Computable error bounds and uniqueness results not available before are also given in this chapter. The technique can be used on other methods for the same benefits.

References

[1] Argyros, I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.
[2] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method. J. Complexity, 28, (2012), 364-387.
[3] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.
[4] Argyros, I. K., Magreñán, A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.
[5] Behl, R., Bhalla, S., Martinez, E., Alsulami, M. A., Derivative-free King's scheme for multiple zeros of nonlinear functions, Mathematics, 9, (2021), 1242. https://doi.org/10.3390/math9111242.
[6] Behl, R., Bhalla, S., Magrenan, A. A., Moysi, A., An optimal derivative free family of Chebyshev-Halley's method for multiple zeros. Mathematics, 9, (2021), 546.
[7] Cordero, A., Hueso, J. L., Martinez, E., Torregrosa, J. R., A modified Newton-Jarratt's composition. Numer. Algor., 55, (2010), 87-99.
[8] Cordero, A., Jordan, C., Codesal, E., Torregrosa, J. R., Highly efficient algorithms for solving nonlinear systems with arbitrary order of convergence p + 3, p ≥ 5, J. Comput. Appl. Math., 330, (2018), 748-758.
[9] Grau-Sanchez, M., Grau, A., Noguera, M., Frozen divided difference scheme for solving systems of nonlinear equations. J. Comput. Appl. Math., 235, (2011), 1739-1743.
[10] Ham, Y., Chun, C., A fifth order iterative method for solving nonlinear equations, Appl. Math. Comput., 194, 1, (2007), 287-290.


[11] Liu, Z., Zheng, Q., Zhao, P., A variant of Steffensen's method of fourth-order convergence and its applications. Appl. Math. Comput., 216, (2010), 1978-1983.
[12] Ezquerro, J. A., Hernandez, M. A., Romero, N., Velasco, A. I., On Steffensen's method on Banach spaces, J. Comput. Appl. Math., 249, (2013), 9-23.
[13] Hernandez-Veron, M. A., Yadav, S., Magrenan, A. A., Martinez, E., Singh, S., On the complexity of extending the accessibility for Steffensen-type methods, J. Complexity, (2021).
[14] Hernandez, M. A., Rubio, M. J., A uniparametric family of iterative processes for solving nondifferentiable equations, J. Math. Anal. Appl., 275, (2002), 821-834.
[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, second edition, Pergamon Press, Oxford, 1982, translated from the Russian by Howard L. Silcock.
[16] Kung, H. T., Traub, J. F., Optimal order of one-point and multipoint iteration, Carnegie Mellon University, Research Showcase@CMU, Computer Science Department, Paper 1747, 1973.
[17] Magreñán, A. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131. https://doi.org/10.1007/s10910-018-0856-y.
[18] Magreñán, A. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[19] Ren, H., Wu, Q., Bi, W., A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput., 209, (2009), 206-210.
[20] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, In: Mathematical models and numerical methods (Tikhonov, A. N. et al., eds.), pub. 3, (1977), 129-142, Banach Center, Warsaw, Poland.
[21] Sharma, J. R., Arora, H., A novel derivative free algorithm with seventh order convergence for solving systems of nonlinear equations. Numer. Algor., 4, (2014), 917-933.
[22] Sharma, J. R., Gupta, P., Efficient family of Traub-Steffensen-type methods for solving systems of nonlinear equations. Advances in Numerical Analysis, Article ID 152187, 11 pages, (2014).
[23] Sharma, J. R., Gupta, P., An efficient fifth order method for solving systems of nonlinear equations, Comput. Math. Appl., 67, (2014), 591-601.
[24] Sharma, J. R., Arora, H., Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math., 35, (2016), 269-284. doi:10.1007/s40314-014-0193-0.


[25] Steffensen, J. F., Remarks on iteration. Skand. Aktuar Tidskr., 16, (1933), 64-72.
[26] Traub, J. F., Iterative Methods for the Solution of Equations. Prentice-Hall, New Jersey, (1964).
[27] Wang, X., Zhang, T., A family of Steffensen type methods with seventh-order convergence. Numer. Algor., 62, (2013), 429-444.
[28] Wang, X., Fixed point iterative method with eighth order constructed by undetermined parameter technique for solving nonlinear equations, Symmetry, 13, (2021), 863.
[29] Xiao, X. Y., Yin, H. W., Increasing the order of convergence for iterative methods to solve nonlinear systems, Calcolo, 53, (2016), 285-300.

Chapter 23

Convergence of Chebyshev's Method

1. Introduction

In this chapter, we consider the third-order Chebyshev's method and extend its convergence domain using some majorant conditions. Kumari and Parida (2021) studied this method using a new type of majorant condition. In this chapter, we extend the work of Kumari and Parida (2021), who in turn extended earlier works. We are interested in approximating a solution x∗ of

F(x) = 0,      (23.1)

where F : ∅ ≠ Ω ⊂ V1 −→ V2 is an operator acting between Banach spaces V1 and V2. It is known [18] that a closed-form derivation of x∗ is possible only in rare cases, so most practitioners and researchers develop solution methods that are iterative. In [20], the semi-local convergence of the following Chebyshev's method, defined for x0 ∈ Ω and each k = 0, 1, 2, ... by

x_{k+1} = x_k − [I + (1/2)F'(x_k)⁻¹F''(x_k)F'(x_k)⁻¹F(x_k)]F'(x_k)⁻¹F(x_k),      (23.2)

is studied. Convergence order three is obtained under certain conditions on the initial data (Ω, F, F', F'', x0). Using a new type of majorant condition similar to the one considered in [14, 15], a semi-local convergence analysis is given in [20], together with a new error estimate based on the directional derivative of the majorizing function. In the present chapter, we extend the semi-local convergence analysis of method (23.2). Throughout the chapter, B(x, r) = {y ∈ V1 : ‖x − y‖ < r} (r > 0) and B̄(x, r) is the closure of B(x, r).

The rest of the chapter is organized as follows: in Section 2, we develop the semi-local convergence analysis based on majorizing sequences; a local convergence analysis and numerical experiments follow.
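To fix ideas, iteration (23.2) in the scalar case reads x_{k+1} = x_k − [1 + f''(x_k)f(x_k)/(2f'(x_k)²)] f(x_k)/f'(x_k). The following sketch is ours (the test equation x³ − 2 = 0, the tolerance and the starting point are illustrative assumptions, not taken from [20]):

```python
def chebyshev(f, df, d2f, x, tol=1e-12, kmax=50):
    # scalar form of (23.2): x_{k+1} = x_k - [1 + f''(x)u/(2 f'(x))] u, u = f(x)/f'(x)
    for _ in range(kmax):
        u = f(x) / df(x)
        x_new = x - (1.0 + 0.5 * d2f(x) * u / df(x)) * u
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cube root of 2, starting from x0 = 1
root = chebyshev(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x * x, lambda x: 6.0 * x, 1.0)
print(abs(root ** 3 - 2.0) < 1e-10)
```

The cubic order shows up as a roughly tripling number of correct digits per step once the iterates enter the convergence ball.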

2. Semi-Local Convergence Analysis

We develop some Lipschitz-type conditions and compare them to each other.

Definition 15. The operator F'' satisfies the center majorant condition on B(x0, ρ) ⊂ Ω (ρ > 0) if there exists a function g0 ∈ C²[0, ρ], g0 : [0, ρ) −→ R, such that for all x ∈ B(x0, ρ)

‖F'(x0)⁻¹(F''(x) − F''(x0))‖ ≤ g0''(‖x − x0‖) − g0''(0).      (23.3)

Suppose that the function g0'(t) has a minimal zero ρ0 ∈ (0, ρ]. Set Ω0 = B(x0, ρ0).

Definition 16. The operator F'' satisfies the restricted majorant condition on B(x0, ρ0) if there exists a function g ∈ C²[0, ρ0], g : [0, ρ0) −→ R, such that for all x, y ∈ B(x0, ρ0)

‖F'(x0)⁻¹(F''(y) − F''(x))‖ ≤ g''(‖y − x‖ + ‖x − x0‖) − g''(‖x − x0‖),      (23.4)

where ‖y − x‖ + ‖x − x0‖ < ρ0.

Definition 17 ([20]). The operator F'' satisfies the majorant condition on B(x0, ρ) if there exists a function g1 ∈ C²[0, ρ], g1 : [0, ρ) −→ R, such that for all x, y ∈ B(x0, ρ)

‖F'(x0)⁻¹(F''(y) − F''(x))‖ ≤ g1''(‖y − x‖ + ‖x − x0‖) − g1''(‖x − x0‖),      (23.5)

where ‖y − x‖ + ‖x − x0‖ < ρ.

Remark 33. By the definition of Ω0, we have

Ω0 ⊆ Ω,      (23.6)

so

g0''(t) ≤ g1''(t)      (23.7)

and

g''(t) ≤ g1''(t)      (23.8)

for all t ∈ [0, ρ0). We also suppose from now on that

g0'(t) ≤ g'(t) and g0''(t) ≤ g''(t) for all t ∈ [0, ρ0).      (23.9)

Otherwise, the results that follow hold with g̃ replacing g, where g̃ is the largest of g0 and g on the interval [0, ρ0).

We shall use the conditions (A). Suppose:

(A1) Conditions (23.3) and (23.4) hold.
(A2) g0(0) > 0, g(0) > 0, g0''(0) > 0, g''(0) > 0, g'(0) = g0'(0) = −1.
(A3) g0'' and g'' are convex and strictly increasing in (0, ρ0).
(A4) The function g has zeros in (0, ρ0). Denote by ρ∗ the smallest such zero; moreover, g'(ρ∗) < 0.
(A5) ‖F'(x0)⁻¹F(x0)‖ ≤ g(0), and


(A6) ‖F'(x0)⁻¹F''(x0)‖ ≤ g''(0).

It turns out that the tighter function g can replace g1 in all the results in [20]. To avoid repetition, we only present the critical ones, where our approach makes a difference bringing in the benefits mentioned before. The next result improves Lemma 2.4 in [20].

Lemma 10. Under condition (23.3), further suppose x ∈ B̄(x0, ρ) for ρ < ρ∗. Then, the following items hold:

F'(x)⁻¹ ∈ L(V2, V1)      (23.10)

and

‖F'(x)⁻¹F'(x0)‖ ≤ −1/g0'(‖x − x0‖) ≤ −1/g'(‖x − x0‖) ≤ −1/g'(ρ).      (23.11)

Proof. Using the Taylor series for all x ∈ B(x0, ρ) and (23.3), we get in turn that

‖F'(x0)⁻¹(F'(x) − F'(x0))‖
= ‖∫₀¹ F'(x0)⁻¹(F''(x0 + θ(x − x0)) − F''(x0))dθ (x − x0) + F'(x0)⁻¹F''(x0)(x − x0)‖
≤ ∫₀¹ ‖F'(x0)⁻¹(F''(x0 + θ(x − x0)) − F''(x0))‖dθ ‖x − x0‖ + ‖F'(x0)⁻¹F''(x0)‖‖x − x0‖
≤ ∫₀¹ [g0''(θ‖x − x0‖) − g0''(0)]‖x − x0‖dθ + g0''(0)‖x − x0‖
= g0'(‖x − x0‖) − g0'(0) ≤ g0'(ρ) − g0'(0) < 1,      (23.12)

by the definition of ρ0. Then, it follows from the Banach lemma on invertible operators [18] and (23.12) that (23.10) holds.

Remark 34. Under (23.5), the less precise estimate

‖F'(x)⁻¹F'(x0)‖ ≤ −1/g1'(‖x − x0‖)      (23.13)

holds [18], since

−1/g0'(t) ≤ −1/g1'(t)      (23.14)

for all t ∈ [0, ρ∗). Examples where (23.6)-(23.9) and (23.14) are strict can be found in [20] and in the numerical section. Next, by simply exchanging (23.5) and g1 by (23.4) and g, respectively, in the proof of Theorem 3.1 in [20], we obtain the following extension of the semi-local convergence analysis of Chebyshev's method (23.2).


Theorem 45. Under conditions (A), the sequence {x_n} generated by Chebyshev's method (23.2) exists in B(x0, ρ∗), stays in B(x0, ρ∗) and converges to a solution x∗ ∈ B̄(x0, ρ∗) of the equation F(x) = 0, which is unique in B(x0, ρ̄), where ρ̄ := sup{s ∈ (ρ∗, ρ0) : g(s) ≤ 0}. Moreover, the following estimates hold for all n = 0, 1, 2, ...:

‖x_{n+1} − x∗‖ ≤ ((ρ∗ − t_{n+1})/(ρ∗ − t_n)³) ‖x_n − x∗‖³,      (23.15)

‖x_{n+1} − x∗‖ ≤ −[(1/6) D⁻g''(ρ∗)/g'(ρ∗) + (1/8) g''(ρ∗)/(g'(ρ∗)(ρ∗ − t_n))] ‖x_n − x∗‖³      (23.16)

(here D⁻g'' denotes the left directional derivative of g''), and

‖x_{n+1} − x∗‖ ≤ ((ρ∗ − t_{n+1})/(t_{n+1} − t_n)³) ‖x_{n+1} − x_n‖³,      (23.17)

where the sequence {t_n} is defined by

t0 = 0, t_{n+1} = t_n − [1 + g(t_n)g''(t_n)/(2g'(t_n)²)] g(t_n)/g'(t_n).      (23.18)
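The scalar majorizing sequence (23.18) is easy to iterate. As a sanity check of our own (the quadratic choice g(t) = (L/2)t² − t + b with L = 1, b = 0.2 is an illustrative Kantorovich-type specialization, not taken from [20]), {t_n} increases to the smallest zero ρ∗ = 1 − √(1 − 2Lb) of g:

```python
import math

L, b = 1.0, 0.2                      # illustrative majorant g(t) = (L/2)t^2 - t + b
g = lambda t: 0.5 * L * t * t - t + b
dg = lambda t: L * t - 1.0           # g'(t), equal to -1 at t = 0 as (A2) requires
d2g = lambda t: L                    # g''(t)

t = 0.0
for _ in range(6):
    # t_{n+1} = t_n - [1 + g(t_n) g''(t_n) / (2 g'(t_n)^2)] g(t_n) / g'(t_n)   (23.18)
    t = t - (1.0 + g(t) * d2g(t) / (2.0 * dg(t) ** 2)) * g(t) / dg(t)

rho_star = 1.0 - math.sqrt(1.0 - 2.0 * L * b)  # smallest zero of g
print(abs(t - rho_star) < 1e-10)
```

Monitoring t_{n+1} − t_n also yields a priori bounds on ‖x_{n+1} − x_n‖, in line with the estimates of Theorem 45.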

Remark 35. (a) If g = g1, then our results correspond to the ones in [20, Theorem 3.1]. Otherwise, they constitute an improvement, with the benefits already mentioned. Clearly, by specializing the function g, we can also extend the Smale-type and Kantorovich-type results reported in [20]. It is worth noticing that these benefits are obtained under the same computational cost, since in practice the computation of the majorant function g1 requires that of g0 and g as special cases.

(b) Another way of improving our results even further is to replace (23.3) by

‖F'(x0)⁻¹(F'(x) − F'(x0))‖ ≤ ḡ0'(‖x − x0‖) − ḡ0'(0) for all x ∈ B(x0, ρ),

where the function ḡ0 has the same properties as g0. Suppose that ḡ0'(t) is a tighter function than g0'(t) on the interval [0, ρ). Then, a function ḡ satisfying (23.4) will be at least as tight too, and can replace g in all our results.

(c) If estimates (23.3) and (23.4) hold on Ω1 = B(x1, ρ − ‖x1 − x0‖) ⊆ Ω0, which still involves only initial data, then the resulting functions ḡ0 and ḡ will be at least as tight as g0 and g, respectively. We leave the details to the motivated reader.

3. Local Convergence Analysis

Results of this type were not given in [20], but we can derive them if we simply let x = x∗ in the previous section; the resulting findings are then better for the same reasons. Another way to obtain local convergence results, however, is based on even weaker conditions. Let T = [0, ∞). Suppose that:

(i) the function ψ0(t) − 1 has a minimal zero s0 ∈ T − {0}, for some function ψ0 : T −→ T which is nondecreasing and continuous. Set T0 = [0, s0).

(ii) the function ϕ(t) − 1 has a minimal zero s ∈ T0 − {0}, for functions ψ : T0 −→ T, ψ1 : T0 −→ T and ψ2 : T0 −→ T continuous and nondecreasing, and ϕ : T0 −→ T defined by

ϕ(t) = ∫₀¹ ψ((1 − θ)t)dθ / (1 − ψ0(t)) + ψ2(t)(∫₀¹ ψ1(θt)dθ)² t / (2(1 − ψ0(t))³).

We shall show that s is a convergence radius for Chebyshev's method (23.2). The following conditions (H) are needed, provided that x∗ is a simple solution of the equation F(x) = 0. Suppose:

(H1) For all x ∈ Ω,

‖F'(x∗)⁻¹(F'(x) − F'(x∗))‖ ≤ ψ0(‖x − x∗‖).

Set Ω0 = B(x∗, s0) ∩ Ω.

(H2) For all x, y ∈ Ω0,

‖F'(x∗)⁻¹(F'(y) − F'(x))‖ ≤ ψ(‖y − x‖), ‖F'(x∗)⁻¹F'(x)‖ ≤ ψ1(‖x − x∗‖)

and

‖F'(x∗)⁻¹F''(x)‖ ≤ ψ2(‖x − x∗‖).

(H3) B̄(x∗, s) ⊂ Ω.

Then, we can show the local convergence result for method (23.2).

Theorem 46. Under the conditions (H), further suppose x0 ∈ B(x∗, s) − {x∗}. Then, lim_{n→∞} x_n = x∗.

Proof. Let v ∈ B(x∗, s) − {x∗}. Using (H1) and the definition of s, we have in turn that

‖F'(x∗)⁻¹(F'(v) − F'(x∗))‖ ≤ ψ0(‖v − x∗‖) ≤ ψ0(s) < 1,

so F'(v)⁻¹ ∈ L(V2, V1) and

‖F'(v)⁻¹F'(x∗)‖ ≤ 1/(1 − ψ0(‖v − x∗‖)).      (23.19)

In particular, the iterate x1 exists (since for v = x0, F'(x0)⁻¹ exists) by method (23.2) for n = 0, and we can write

x1 − x∗ = x0 − x∗ − F'(x0)⁻¹F(x0)
− (1/2)(F'(x0)⁻¹F'(x∗))(F'(x∗)⁻¹F''(x0))(F'(x0)⁻¹F'(x∗))(F'(x∗)⁻¹F(x0))(F'(x0)⁻¹F'(x∗))(F'(x∗)⁻¹F(x0)).      (23.20)


Then, using (H2), (H3), (23.19) (for v = x0), (23.20) and the triangle inequality, we get in turn that

‖x1 − x∗‖ ≤ ‖x0 − x∗ − F'(x0)⁻¹F(x0)‖ + (1/2)‖F'(x0)⁻¹F'(x∗)‖³ ‖F'(x∗)⁻¹F''(x0)‖ ‖F'(x∗)⁻¹F(x0)‖²
≤ [∫₀¹ ψ((1 − θ)‖x0 − x∗‖)dθ / (1 − ψ0(‖x0 − x∗‖))
+ ψ2(‖x0 − x∗‖)(∫₀¹ ψ1(θ‖x0 − x∗‖)dθ)² ‖x0 − x∗‖ / (2(1 − ψ0(‖x0 − x∗‖))³)] ‖x0 − x∗‖
≤ ϕ(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < s,      (23.21)

and x1 ∈ B(x∗, s). Switching x0, x1 to x_i, x_{i+1} in the preceding calculations, we obtain

‖x_{i+1} − x∗‖ ≤ c‖x_i − x∗‖ < s,      (23.22)

where c = ϕ(‖x0 − x∗‖) ∈ [0, 1), so lim_{i→∞} x_i = x∗ and x_{i+1} ∈ B(x∗, s).

We present the uniqueness-of-the-solution result.

Proposition 6. Suppose:

(i) x∗ is a simple solution of the equation F(x) = 0.
(ii) There exists s∗ ≥ s such that ∫₀¹ ψ0(θs∗)dθ < 1.

Set Ω1 = B̄(x∗, s∗) ∩ Ω. Then, the only solution of the equation F(x) = 0 in the domain Ω1 is x∗.

Proof. Let p ∈ Ω1 with F(p) = 0. Set M = ∫₀¹ F'(p + θ(x∗ − p))dθ. In view of (H1), we obtain in turn

‖F'(x∗)⁻¹(M − F'(x∗))‖ ≤ ∫₀¹ ψ0(θ‖x∗ − p‖)dθ ≤ ∫₀¹ ψ0(θs∗)dθ < 1,

so we conclude x∗ = p by the invertibility of M and the identity M(p − x∗) = F(p) − F(x∗) = 0 − 0 = 0.

Remark 36. 1. In view of (H1) and the estimate

‖F'(x∗)⁻¹F'(x)‖ = ‖F'(x∗)⁻¹(F'(x) − F'(x∗)) + I‖ ≤ 1 + ‖F'(x∗)⁻¹(F'(x) − F'(x∗))‖ ≤ 1 + ψ0(‖x − x∗‖),

the second condition in (H2) can be dropped and ψ1 can be replaced by ψ1(t) = 1 + ψ0(t) or ψ1(t) = 1 + ψ0(s).


2. The results obtained here can be used for operators F satisfying autonomous differential equations [3] of the form F'(x) = P(F(x)), where P is a continuous operator. Then, since F'(x∗) = P(F(x∗)) = P(0), we can apply the results without actually knowing x∗. For example, let F(x) = eˣ − 1. Then, we can choose P(x) = x + 1.

3. If ψ0(t) = L0 t and ψ(t) = L t, then the radius r1 = 2/(2L0 + L) was shown by us to be the convergence radius of Newton's method [1,3,4,8,9],

x_{n+1} = x_n − F'(x_n)⁻¹F(x_n) for each n = 0, 1, 2, ...,      (23.23)

under the conditions (H1)-(H3). It follows from the definition of s that the convergence radius s of method (23.2) cannot be larger than the convergence radius r1 of the second-order Newton's method (23.23). As already noted in [1,3,4,8,9], r1 is at least as large as the convergence ball given by Rheinboldt [25],

rR = 2/(3L1),      (23.24)

where L1 is the Lipschitz constant on Ω. In particular, for L0 < L1 we have rR < r1 and

rR/r1 → 1/3 as L0/L1 → 0.

That is, our convergence ball r1 is at most three times larger than Rheinboldt's. The same value for rR was given by Traub [27].

4. It is worth noticing that method (23.2) does not change when we use the weaker conditions of Theorem 46 instead of the stronger conditions used in [20]. Moreover, we can compute the computational order of convergence (COC) defined by

μ = ln(‖x_{n+1} − x∗‖/‖x_n − x∗‖) / ln(‖x_n − x∗‖/‖x_{n−1} − x∗‖),

or the approximate computational order of convergence

μ1 = ln(‖x_{n+1} − x_n‖/‖x_n − x_{n−1}‖) / ln(‖x_n − x_{n−1}‖/‖x_{n−1} − x_{n−2}‖).

This way, we obtain in practice the order of convergence in a way that avoids bounds involving estimates of derivatives higher than the first Fréchet derivative of the operator F.

4. Numerical Experiments

We provide an example testing the convergence criteria.

Example 31. Let V1 = V2 = R³, Ω = B̄(0, 1) and x∗ = (0, 0, 0)ᵗʳ. Define the mapping F on Ω for λ = (λ1, λ2, λ3)ᵗʳ by

F(λ) = (e^{λ1} − 1, ((e − 1)/2)λ2² + λ2, λ3)ᵗʳ,

with the max norm. We take λ0 = (λ01, λ02, λ03)ᵗʳ = (0.05, 0.05, 0.05)ᵗʳ. Then

F'(λ) = diag(e^{λ1}, (e − 1)λ2 + 1, 1),

and F''(λ) is the bilinear operator whose only nonzero entries are (F''(λ))₁₁₁ = e^{λ1} and (F''(λ))₂₂₂ = e − 1. Then, we have

F'(λ0) = diag(1.051271096376025, 1.085914091422952, 1), ‖F'(λ0)‖ = 1.085914091422952.

Further, ‖F'(λ0)⁻¹F(λ0)‖ ≤ 0.056628087634347 and ‖F'(λ0)⁻¹F''(λ0)‖ ≤ 2 × 0.932953225279836. Then, the hypotheses (A) hold. Concerning the local convergence case, we can choose L1 = e, ψ0(t) = (e − 1)t, ψ(t) = e^{1/(e−1)} t and ψ1(t) = ψ2(t) = e^{1/(e−1)}, since F'(x∗)⁻¹ = F'(x∗) = diag{1, 1, 1}. Then, we obtain

s = 0.3544, r1 = 0.3827, rR = 0.2475.
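The diagonal entries of F'(λ0) and the radius r1 = 2/(2L0 + L) quoted above are easy to reproduce (this check is ours, not part of the original example):

```python
import math

e = math.e
d1 = math.exp(0.05)           # (F'(λ0))_{11} = e^{0.05}
d2 = (e - 1.0) * 0.05 + 1.0   # (F'(λ0))_{22} = (e - 1)·0.05 + 1
# r1 from Remark 36.3 with L0 = e - 1 and L = e^{1/(e-1)}
r1 = 2.0 / (2.0 * (e - 1.0) + e ** (1.0 / (e - 1.0)))
print(d1, d2, round(r1, 4))   # r1 rounds to 0.3827
```

Such spot checks are a cheap way to catch transcription errors in the constants entering the convergence criteria.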

5. Conclusion

Chebyshev's method was revisited and its applicability was extended in the semi-local convergence case. In particular, the benefits include: weaker sufficient convergence criteria (i.e., more starting points x0 become available); tighter upper bounds on ‖x_{k+1} − x_k‖ and ‖x_k − x∗‖ (i.e., fewer iterates are needed to reach a predicted error accuracy); and more precise information on the location of x∗. The results are based on generalized continuity, which is more general than the Lipschitz continuity used before. A local convergence analysis, not given in earlier studies, was also presented. Our techniques are very general and can be used to extend the applicability of other methods.


References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational theory of iterative methods. Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method. J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., The majorant method in the theory of Newton-Kantorovich approximations and generalized Lipschitz conditions, J. Comput. Appl. Math., 291, (2016), 332-347.
[6] Argyros, I. K., Ren, H., Ball convergence theorem for Halley's method in Banach space, J. Appl. Math. Comput., 38, (2012), 453-465.
[7] Argyros, I. K., Khattri, S. K., An improved semilocal convergence analysis for the Chebyshev method, J. Appl. Math. Comput., 42, (2013), 509-528.
[8] Argyros, I. K., Magreñán, A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.
[9] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.
[10] Candela, V., Marquina, A., Recurrence relations for rational cubic methods II: the Chebyshev method, Computing, 45, (1990), 113-130.
[11] Dilone, M. A., Garcia-Oliva, M., Gutierrez, J. M., A note on the semilocal convergence of Chebyshev's method, Bull. Aust. Math. Soc., 88, (2013), 98-105.
[12] Ezquerro, J. A., A modification of the Chebyshev method, IMA J. Numer. Anal., 17, (1997), 511-525.
[13] Ezquerro, J. A., Grau-Sánchez, M., Grau, A., Hernandez, M. A., Noguera, M., Romero, N., On iterative methods with accelerated convergence for solving systems of nonlinear equations, J. Optim. Theory Appl., 151, (2011), 163-174.
[14] Ferreira, O. P., Convergence of Newton's method in Banach space from the view of the majorant principle, IMA J. Numer. Anal., 29, (2009), 746-759.
[15] Ferreira, O. P., Svaiter, B. F., Kantorovich's majorants principle for Newton's method, Comput. Optim. Appl., 42, (2009), 213-229.


[16] Han, D., Wang, X., The error estimates of Halley's method, Numer. Math. J. Chinese Univ. Engl. Ser., 6, (1997), 231-240.
[17] Ivanov, S., On the convergence of Chebyshev's method for multiple polynomial zeros, Results Math., 69, (2016), 93-103.
[18] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[19] Kumari, C., Parida, P. K., Local convergence analysis for Chebyshev's method, J. Appl. Math. Comput., 59, (2019), 405-421.
[20] Kumari, C., Parida, P. K., Study of semilocal convergence analysis of Chebyshev's method under new-type majorant conditions, SeMA, 2021.
[21] Ling, Y., Xu, X., On the semilocal convergence behaviour of Halley's method, Comput. Optim. Appl., 58, (2014), 597-618.
[22] Magreñán, A. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131. https://doi.org/10.1007/s10910-018-0856-y.
[23] Magreñán, A. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[24] Parida, P. K., Gupta, D. K., Semilocal convergence of a family of third-order Chebyshev-type methods under a mild differentiability condition, Int. J. Comput. Math., 87, (2010), 3405-3419.
[25] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations. Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[26] Wang, X., Kou, J., Convergence for a family of modified Chebyshev methods under weak condition, Numer. Algor., 66, (2014), 33-48.
[27] Traub, J. F., Iterative methods for the solution of equations, Prentice Hall, New Jersey, U.S.A., (1964).

Chapter 24

Gauss-Newton Algorithms for Optimization Problems

1. Introduction

Let B1, B2 be Banach spaces and let G : B1 → B2 be a smooth operator. Let also K ≠ ∅ be a closed convex subset of B2. We seek solutions x* of the inclusion problem

G(x) ∈ K    (24.1)

as well as of the composite problem

min_{x ∈ B1} (H ∘ G)(x),    (24.2)

where H(·) = d(·, K); with this choice of H, (24.2) reduces to (24.1). A plethora of applications in penalization schemes, programming and minimax problems can be reduced to (24.1) or (24.2). Most solution schemes for these problems are iterative in nature, since x* can be found in closed form only in special cases. The following scheme, which generalizes the one by Robinson [19], was used in [17] to generate a sequence {x_n} converging to x*.

Algorithm A(η, δ, x0). Having found x_n for n = 0, 1, ..., compute x_{n+1} as follows. If

H(G(x_n)) = min{H(G(x_n) + G′(x_n)r) : r ∈ B1, ‖r‖ ≤ δ},

then STOP. Otherwise, pick r_n ∈ D_δ(x_n) satisfying ‖r_n‖ ≤ η d(0, D_δ(x_n)), and set x_{n+1} = x_n + r_n.

But the semi-local convergence analysis generates a convergence domain that is small in general, which limits the applicability of the algorithm. Hence, it is important to extend this domain, but with no additional conditions. We show how to do this in the next section. The advantages include weaker convergence criteria and more precise error estimates on ‖x_{n+1} − x_n‖ as well as ‖x_n − x*‖. Our technique is general enough that it can be used to extend the applicability of other algorithms [1]-[23] along the same lines.
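To illustrate how Algorithm A(η, δ, x0) proceeds, the following minimal Python sketch treats the simplest scalar case K = {0}, so the inclusion G(x) ∈ K is the equation G(x) = 0, H(y) = d(y, K) = |y|, and the linearized subproblem has a closed-form minimizer. The function and variable names are illustrative assumptions, not taken from [17].

```python
import numpy as np

def algorithm_A(G, dG, x0, eta=1.0, delta=1.0, tol=1e-12, max_iter=50):
    """Sketch of Algorithm A(eta, delta, x0) for the scalar inclusion G(x) in K
    with K = {0}: here H(y) = d(y, K) = |y| and the subproblem
    min{|G(x) + G'(x) r| : |r| <= delta} is minimized by
    r = clip(-G(x)/G'(x), -delta, delta)."""
    x = x0
    for _ in range(max_iter):
        gx, dgx = G(x), dG(x)
        r = float(np.clip(-gx / dgx, -delta, delta))  # a point of D_delta(x)
        if abs(gx + dgx * r) >= abs(gx):              # linearization gives no decrease: STOP
            return x
        x = x + r                                     # x_{n+1} = x_n + r_n
        if abs(G(x)) < tol:
            return x
    return x

# inclusion x^2 - 2 in {0}: the iterates converge to sqrt(2)
root = algorithm_A(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=2.0)
```

With η = 1 the chosen r_n is an exact minimizer of the subproblem, and the scheme reduces to a trust-region-damped Newton iteration.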

2. Convergence

More details about concepts and notation can be found in [12, 22, 23]. We introduce some types of Lipschitz conditions so that we can compare them.

Definition 18. We say that (24.1) satisfies the weak-Robinson condition at the point x0 on U(x0, ρ) if

−G(x0) ∈ Range(T_{x0})    (24.3)

and

Range(G′(x)) ⊆ Range(T_{x0})    (24.4)

for all x ∈ U(x0, ρ), ρ > 0. We suppose that (24.3) and (24.4) hold.

Definition 19. We say that (T_{x0}^{−1}, G′) is center-Lipschitz continuous on U(x0, ρ) if

‖T_{x0}^{−1}(G′(x) − G′(x0))‖ ≤ L0 ‖x − x0‖    (24.5)

for all x ∈ U(x0, ρ) and some L0 > 0. Set

U0 = U(x0, 1/L0) ∩ U(x0, ρ).    (24.6)

Definition 20. We say that (T_{x0}^{−1}, G′) is restricted-Lipschitz continuous on U0 if

‖T_{x0}^{−1}(G′(y) − G′(x))‖ ≤ L ‖y − x‖    (24.7)

for all x, y ∈ U0 and some L > 0.

Definition 21. We say that (T_{x0}^{−1}, G′) is Lipschitz continuous on U(x0, ρ) if

‖T_{x0}^{−1}(G′(y) − G′(x))‖ ≤ L1 ‖y − x‖    (24.8)

for all x, y ∈ U(x0, ρ) and some L1 > 0.

Remark 37. Since

U0 ⊆ U(x0, ρ),    (24.9)

it follows from (24.5) that

L0 ≤ L1    (24.10)

and

L ≤ L1.    (24.11)

Notice also that L0 = L0(x0, U(x0, ρ)) and L1 = L1(x0, U(x0, ρ)), but L = L(x0, U0). It was shown in [17, Proposition 2.3] using (24.8) that

‖T_x^{−1} G′(x0)‖ ≤ 1/(1 − L1 ‖x − x0‖)    (24.12)

for all x ∈ U(x0, ρ). But if one follows the proof, the weaker condition (24.5) suffices to obtain the more precise estimate

‖T_x^{−1} G′(x0)‖ ≤ 1/(1 − L0 ‖x − x0‖).    (24.13)

This modification in the proof leads to the advantages stated in the introduction. Examples where (24.9)-(24.11) are strict can be found in [2]-[7]. From now on we suppose

L0 ≤ L.    (24.14)

Otherwise, the results that follow hold with L0 replacing L. Next, it is convenient to define some parameters and majorizing functions. Let x0 ∈ B1, L ∈ (0, ∞), η ∈ [1, ∞), δ ∈ (0, ∞] and −G(x0) ∈ Range(T_{x0}). Set

β = η ‖T_{x0}^{−1}(−G(x0))‖  and  α0 = η/(1 + (η − 1)Lβ).

Define the majorizing quadratic scalar polynomial

ψη(t) = β − t + (α0 L/2) t²    (24.15)

for all t ≥ 0, and the scalar majorizing sequence {s_{η,n}} by

s_{η,0} = 0,  s_{η,1} = β,  s_{η,n+1} = s_{η,n} − ψη(s_{η,n})/ψ′η(s_{η,n}).    (24.16)

Remark 38. If L = L1, sequence (24.16) reduces to the sequence {t_{η,n}} used in [17], where

α = η/(1 + (η − 1)L1 β),    (24.17)

ψη(t) = β − t + (αL1/2) t² for all t ≥ 0    (24.18)

and

t_{η,n+1} = t_{η,n} − ψη(t_{η,n})/ψ′η(t_{η,n}).    (24.19)

A simple inductive argument, using (24.10) and (24.11), shows that the new sequence {s_{η,n}} is more precise than {t_{η,n}}, leading to at least as tight error estimates as claimed in the introduction. Next, we state the main convergence result for the algorithm A(η, δ, x0).

Theorem 47. Under the conditions (24.3)-(24.5) and (24.7), suppose further that

β ≤ min{1/(L(η + 1)), δ}.    (24.20)

Then, the sequence {x_n} generated by the algorithm A(η, δ, x0) is well defined, stays in U0 and converges to a point x* ∈ U(x0, s*_η) with G(x*) ∈ K, so that x* solves (24.1). Moreover, the following estimates hold:

‖x_{n+1} − x_n‖ ≤ s_{η,n} − s_{η,n−1},    (24.21)

G(x_{n−1}) + G′(x_{n−1})(x_n − x_{n−1}) ∈ K    (24.22)

and

‖x* − x_n‖ ≤ (p_η^{2^n − 1} / Σ_{j=0}^{2^n − 1} p_η^j) s*_η,    (24.23)

where

s*_η = (1 − √(1 − 2α0 L β)) / (α0 L)    (24.24)

and

p_η = (1 − √(1 − 2α0 L β)) / (1 + √(1 − 2α0 L β)).    (24.25)

Proof. Simply replace L1 by L and (24.12) by (24.13), and also use (24.14).

Remark 39. The corresponding estimates in [17] are

β ≤ min{1/(L1(η + 1)), δ},    (24.26)

{t_{η,n}} replaces {s_{η,n}} in (24.21) and q_η replaces p_η in (24.23), where

t*_η = (1 − √(1 − 2αL1β)) / (αL1)    (24.27)

and

q_η = (1 − √(1 − 2αL1β)) / (1 + √(1 − 2αL1β)).    (24.28)

It follows from (24.11) that

(24.26) ⇒ (24.20),  s*_η ≤ t*_η  and  p_η ≤ q_η,

further justifying the advantages claimed in the introduction. Clearly, the specialized cases in [17] are also immediately extended.

3. Conclusion

The convergence domain of the Gauss-Newton scheme for solving inclusion problems in optimization is extended with no additional hypotheses.


References

[1] Adly, S., Van Ngai, H., Nguyen, V. V., Newton's method for solving generalized equations: Kantorovich's and Smale's approaches, J. Math. Anal. Appl., 439(1), (2016), 396-418.
[2] Argyros, I. K., Convergence and applications of Newton-type iterations, Springer, New York, (2008).
[3] Argyros, I. K., Hilout, S., Inexact Newton-type methods, J. Complexity, 26(6), (2010), 577-590.
[4] Argyros, I. K., Convergence and applications of Newton-type iterations, Springer, New York, (2008).
[5] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.
[6] Argyros, I. K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021.
[7] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub's method, J. Complexity, 56, (2020), 101423.
[8] Burke, J. V., Ferris, M. C., A Gauss-Newton method for convex composite optimization, Math. Program., 71, (1995), 179-194.
[9] Cibulka, R., Dontchev, A., Preininger, J., et al., Kantorovich-type theorems for generalized equations, Research Report, 2015-2016; 2015:1-26.
[10] Dennis, J. E., Schnabel, R. B., Numerical methods for unconstrained optimization and nonlinear equations, Classics in Applied Mathematics, SIAM, Philadelphia, PA, 1996. Corrected reprint of the 1983 original.
[11] Deuflhard, P., Newton methods for nonlinear problems: affine invariance and adaptive algorithms, Springer Series in Computational Mathematics, Vol. 35, Springer-Verlag, Berlin, 2004.
[12] Deuflhard, P., Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer-Verlag, Berlin, Heidelberg, (2004).
[13] Dontchev, A. L., Rockafellar, R. T., Implicit functions and solution mappings: a view from variational analysis, Springer Monographs in Mathematics, Springer, Dordrecht, 2009.
[14] Dontchev, A. L., Rockafellar, R. T., Convergence of inexact Newton methods for generalized equations, Math. Program., 139(1-2, Ser. B), (2013), 115-137.
[15] Ferreira, O. P., Silva, G. N., Inexact Newton method for non-linear functions with values in a cone, Applicable Analysis, 98(8), 1461-1477, DOI: 10.1080/00036811.2018.1430779.
[16] Kantorovich, L. V., Akilov, G. P., Functional Analysis (second ed.), Pergamon Press, Oxford, (1982) (translated from Russian by Howard L. Silcock).
[17] Li, C., Ng, K. F., Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems, J. Math. Anal. Appl., 389, (2012), 469-485.
[18] Ostrowski, A. M., Solutions of Equations in Euclidean and Banach Spaces, Academic Press, New York, (1973).
[19] Robinson, S. M., Extension of Newton's method to nonlinear functions with values in a cone, Numer. Math., 19, (1972), 341-347.
[20] Robinson, S. M., Normed convex processes, Trans. Amer. Math. Soc., 174, (1972), 127-140.
[21] Robinson, S. M., Normed convex processes, Trans. Amer. Math. Soc., 174, (1972), 127-140.
[22] Rockafellar, R. T., Convex analysis, Princeton Mathematical Series, Vol. 28, Princeton University Press, Princeton, NJ, 1970.
[23] Rockafellar, R. T., Convex Analysis, Princeton University Press, Princeton, NJ, (1970).

Chapter 25

Two-Step Methods under General Continuity Conditions

1. Introduction

We consider the equation F(x) = 0,

(25.1)

where F : Ω ⊆ B → B1 is an operator acting between Banach spaces B and B1, with Ω ≠ ∅. We are concerned with the problem of approximating a solution x* of (25.1). In general a closed-form solution is not possible, so iterative methods are used for solving (25.1). Many iterative methods have been studied for approximating x* [1]-[40]. In this chapter, we consider the iterative method defined for n = 0, 1, 2, ... by

y_n = x_n − F′(x_n)^{−1} F(x_n),
x_{n+1} = y_n − F′(y_n)^{−1} F(y_n).    (25.2)

The convergence of this method was shown to be of order four using Taylor expansions and assumptions on the fifth-order derivative of F, which does not appear in the method. These assumptions on the fifth derivative reduce the applicability of the method [1]-[40]. For example, let B = B1 = ℝ, Ω = [−0.5, 1.5], and define λ on Ω by

λ(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and λ(0) = 0.

Then, t* = 1, and

λ‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously λ‴ is not bounded on Ω, so the convergence of method (25.2) is not guaranteed by the previous analyses in [1]-[40]. In this chapter, we introduce a majorant sequence and use general continuity conditions to extend the applicability of method (25.2). Our analysis includes error bounds and results on the uniqueness of x* based on computable Lipschitz constants not given before in [1]-[40] and in other similar studies using the Taylor series. Our idea is very general, so it applies to other methods too. The rest of the chapter is set up as follows: In Section 2, we present results on majorizing sequences. Section 3 contains the semi-local convergence analysis, Section 4 presents the numerical experiments, and concluding remarks are given in Section 5.

2. Majorizing Sequences

Let φ0, φ : [0, ∞) → [0, ∞) be nondecreasing and continuous functions. Let us also define majorizing sequences {t_n}, {s_n} for some η ≥ 0 by

t_0 = 0,  s_0 = η,
t_{n+1} = s_n + [∫₀¹ φ(θ(s_n − t_n)) dθ] (s_n − t_n) / (1 − φ0(s_n)),
s_{n+1} = t_{n+1} + [∫₀¹ φ(θ(t_{n+1} − s_n)) dθ] (t_{n+1} − s_n) / (1 − φ0(t_{n+1})).    (25.3)

Moreover, define functions p and q on the interval [0, ∞) by

p(t) = ∫₀¹ φ(θ t³ η) dθ + t φ0(η/(1 − t)) − t

and

q(t) = ∫₀¹ φ(θ t² η) dθ + t φ0(η/(1 − t)) − t.

Next, we present convergence results for these sequences.

Lemma 11. Suppose:

(i) the sequence {t_n} is bounded from above by some γ > 0 and

φ0(t_n) < 1    (25.4)

for each n = 0, 1, 2, ...; or

(ii) the function φ0 is strictly increasing and

t_n ≤ φ0^{−1}(1)    (25.5)

for each n = 0, 1, 2, ....

Then, the sequence {t_n} is nondecreasing and converges to its unique least upper bound t* ≤ γ (respectively, t* ≤ φ0^{−1}(1)).

Proof. By its definition, the sequence {t_n} is nondecreasing under (i) or (ii), and as such it converges to t*.


Remark 40. The convergence conditions in Lemma 11 are very general. Next, we present some results where the convergence conditions are stronger, implying (25.4) or (25.5), but are easier to verify. Suppose that the functions p and q have solutions in the interval (0, 1). Denote by s_p and s_q the minimal such solutions and set δ = min{s_p, s_q}.

Lemma 12. Suppose that δ exists and that (25.5) is replaced by

0 ≤ [∫₀¹ φ(θη) dθ] / (1 − φ0(η)) ≤ δ    (25.6)

and

0 ≤ [∫₀¹ φ(θ(t1 − η)) dθ] / (1 − φ0(t1)) ≤ δ.    (25.7)

Then, the sequence {t_n} is nondecreasing, bounded from above by t** = η/(1 − δ), and converges to t* ≤ t**. Moreover, the following estimates hold:

t_{n+1} − s_n ≤ δ(s_n − t_n) ≤ δ^{2n+1} η,    (25.8)

s_n − t_n ≤ δ(t_n − s_{n−1}) ≤ δ^{2n} η    (25.9)

and

t_n ≤ s_n ≤ t_{n+1}.    (25.10)

Proof. Estimates (25.8)-(25.10) hold if

0 ≤ [∫₀¹ φ(θ(s_k − t_k)) dθ] / (1 − φ0(s_k)) ≤ δ,    (25.11)

0 ≤ [∫₀¹ φ(θ(t_{k+1} − s_k)) dθ] / (1 − φ0(t_{k+1})) ≤ δ    (25.12)

and

0 ≤ t_k ≤ s_k ≤ t_{k+1}.    (25.13)

By the definition of the sequence {t_k}, (25.6) and (25.7), estimates (25.11)-(25.13) hold for k = 0. Suppose that they hold for all integers up to some k. Then, by (25.8), (25.9) and the induction hypothesis, we get in turn that

s_k ≤ t_k + δ^{2k} η ≤ s_{k−1} + δ^{2k−1} η + δ^{2k} η ≤ ... ≤ s_0 + δη + ... + δ^{2k} η = (1 − δ^{2k+1})η/(1 − δ) ≤ t**.

Using these estimates, (25.11) and (25.12) shall hold if

∫₀¹ φ(θ δ^{2k} η) dθ + δ φ0((1 − δ^{2k+1})η/(1 − δ)) − δ ≤ 0

and

∫₀¹ φ(θ δ^{2k+1} η) dθ + δ φ0((1 − δ^{2k+2})η/(1 − δ)) − δ ≤ 0,

or

p(t) ≤ 0 and q(t) ≤ 0 at t = δ, which are true by the choice of δ. The induction for estimates (25.8)-(25.10) is completed. Hence, the sequence {t_n} is Cauchy and as such it converges to t*.

Remark 41. If the functions φ0 and φ are specialized to φ0(t) = L0 t and φ(t) = L t for L0 > 0 and L > 0, then we obtain the following result. Define the polynomial f on the interval [0, 1) by

f(t) = L0 t³ + (L0 + L/2) t² − L/2.

Then, one obtains f(0) = −L/2 and f(1) = 2L0. It follows from the intermediate value theorem that f has zeros in the interval (0, 1). Denote the smallest such zero by β. Define the parameter

α = max{ Lη/(2(1 − L0 η)), L(t1 − s0)/(2(1 − L0 t1)) },

where t1 = η + Lη²/(2(1 − L0 η)).
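A quick numerical sketch of these quantities, with illustrative values L0 = 1, L = 2, η = 0.05: since f(0) = −L/2 < 0 < 2L0 = f(1) and f is increasing on (0, 1), bisection locates the smallest zero β; the second ratio in α is read here as L(t1 − s0)/(2(1 − L0 t1)), matching the recursion (25.3), since the printed text is garbled at that point.

```python
def remark41_beta_alpha(L0, L, eta):
    """Smallest zero beta of f(t) = L0*t^3 + (L0 + L/2)*t^2 - L/2 on (0, 1)
    by bisection (f is increasing there with one sign change), plus alpha."""
    f = lambda t: L0 * t**3 + (L0 + L / 2.0) * t**2 - L / 2.0
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    beta = 0.5 * (lo + hi)
    t1 = eta + L * eta**2 / (2.0 * (1.0 - L0 * eta))
    alpha = max(L * eta / (2.0 * (1.0 - L0 * eta)),
                L * (t1 - eta) / (2.0 * (1.0 - L0 * t1)))
    return beta, alpha, f

beta, alpha, f = remark41_beta_alpha(L0=1.0, L=2.0, eta=0.05)
```

For these values the condition α ≤ β < 1 − L0 η of Lemma 13 is comfortably satisfied.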

Lemma 13. Suppose that

α ≤ β < 1 − L0 η.    (25.14)

Then, the sequence {t_n} is nondecreasing and converges to t*, so that

t_{k+1} − s_k ≤ β(s_k − t_k) ≤ β^{2k+1} η,    (25.15)

s_k − t_k ≤ β(t_k − s_{k−1}) ≤ β^{2k} η    (25.16)

and t_k ≤ s_k ≤ t_{k+1}.

Proof. Define recurrent functions ψ_k^{(1)} on the interval [0, 1) by

ψ_k^{(1)}(t) = (L/2) t^{2k} η + L0(1 + t + ... + t^{2k+1}) η − 1.    (25.17)

According to the proof of Lemma 12, estimate (25.11) holds if

ψ_k^{(1)}(t) ≤ 0 holds at t = β.    (25.18)

A relationship between two consecutive functions is needed:

ψ_{k+1}^{(1)}(t) = ψ_k^{(1)}(t) + (L/2) t^{2k+2} η + L0(1 + t + ... + t^{2k+3}) η − 1 − (L/2) t^{2k} η − L0(1 + t + ... + t^{2k+1}) η + 1
               = ψ_k^{(1)}(t) + f(t) t^{2k} η,

so

ψ_{k+1}^{(1)}(t) = ψ_k^{(1)}(t) at t = β,    (25.19)

since f(β) = 0 by the choice of β. Define the function

ψ_∞^{(1)}(t) = lim_{k→∞} ψ_k^{(1)}(t).    (25.20)

Evidently, (25.18) holds if

ψ_∞^{(1)}(t) ≤ 0 at t = β.    (25.21)

But by the definition (25.20) and (25.17),

ψ_∞^{(1)}(t) = L0 η/(1 − t) − 1.    (25.22)

Hence, (25.21) (or (25.18)) holds if

L0 η/(1 − β) − 1 ≤ 0,    (25.23)

which is true by (25.14). That is, estimate (25.11) holds for all k. Similarly, define recurrent functions ψ_k^{(2)} on the interval [0, 1) by

ψ_k^{(2)}(t) = (L/2) t^{2k−1} η + L0(1 + t + ... + t^{2k}) η − 1.    (25.24)

Then, the definition of these functions leads again to

ψ_{k+1}^{(2)}(t) = ψ_k^{(2)}(t) + f(t) t^{2k−1} η,

and

ψ_{k+1}^{(2)}(t) = ψ_k^{(2)}(t) at t = β.    (25.25)

Then, estimate (25.12) holds if

ψ_∞^{(2)}(t) ≤ 0 at t = β,    (25.26)

where

ψ_∞^{(2)}(t) = L0 η/(1 − t) − 1,

which is true by (25.14). This completes the induction for (25.12). Estimate (25.13) is also true by (25.11), (25.12) and the definition (25.3) of these sequences. It follows that the sequence {t_k} converges to t*.

3. Semi-Local Convergence

The following set of assumptions (A) is used to show convergence. Assume:

(A1) There exist x0 ∈ Ω and η ≥ 0 such that F′(x0)^{−1} exists and ‖F′(x0)^{−1} F(x0)‖ ≤ η.

(A2) The parameter ρ is the smallest positive solution of the equation φ0(t) − 1 = 0.

(A3) ‖F′(x0)^{−1}(F′(v) − F′(x0))‖ ≤ φ0(‖v − x0‖) for all v ∈ Ω. Set Ω0 = Ω ∩ U(x0, ρ).

(A4) ‖F′(x0)^{−1}(F′(w) − F′(v))‖ ≤ φ(‖w − v‖) for all v, w ∈ Ω0.

(A5) The conditions of Lemma 11, Lemma 12 or Lemma 13 hold.

(A6) U[x0, t*] ⊆ Ω (or U[x0, t**] ⊆ Ω).

Next, the convergence is given under the assumptions (A) and the developed notation.

Theorem 48. Under the assumptions (A), the sequence {x_n} generated by method (25.2) exists in U[x0, t*], stays in U(x0, t*) for all n = 0, 1, 2, ... and converges to a solution x* ∈ U[x0, t*] of equation (25.1) such that

‖x* − x_n‖ ≤ t* − t_n.

Proof. Induction is employed to show

(D_k) ‖y_k − x_k‖ ≤ s_k − t_k,
(E_k) ‖x_{k+1} − y_k‖ ≤ t_{k+1} − s_k.

By the first substep of method (25.2), one gets

‖y_0 − x_0‖ = ‖F′(x_0)^{−1} F(x_0)‖ ≤ η = s_0 − t_0 = s_0 ≤ t*,

so y_0 ∈ U[x_0, t*] and (D_0) holds. Let z ∈ U(x_0, t*). Then, using (A3), one gets

‖F′(x_0)^{−1}(F′(z) − F′(x_0))‖ ≤ φ0(‖z − x_0‖) < 1,    (25.27)

so

‖F′(z)^{−1} F′(x_0)‖ ≤ 1/(1 − φ0(‖z − x_0‖))    (25.28)


follows from the lemma on invertible operators due to Banach [24]. Notice that, in particular, (25.28) holds for z = y0. In view of the first sub-step of method (25.2), one can write

F(y_0) = F(y_0) − F(x_0) − F′(x_0)(y_0 − x_0) = ∫₀¹ (F′(x_0 + θ(y_0 − x_0)) − F′(x_0))(y_0 − x_0) dθ,

so

‖x_1 − y_0‖ ≤ ‖F′(y_0)^{−1} F′(x_0)‖ ‖∫₀¹ F′(x_0)^{−1}(F′(x_0 + θ(y_0 − x_0)) − F′(x_0)) dθ (y_0 − x_0)‖
 ≤ [∫₀¹ φ0(θ‖y_0 − x_0‖) dθ ‖y_0 − x_0‖] / (1 − φ0(‖y_0 − x_0‖))
 ≤ [∫₀¹ φ(θ(s_0 − t_0)) dθ (s_0 − t_0)] / (1 − φ0(s_0)) = t_1 − s_0,    (25.29)

so (E_0) holds. Then,

‖x_1 − x_0‖ ≤ ‖x_1 − y_0‖ + ‖y_0 − x_0‖ ≤ t_1 − s_0 + s_0 − t_0 = t_1 ≤ t*,

so x_1 ∈ U[x_0, t*]. Suppose that the estimates (E_k) and (D_k) hold, that y_k, x_{k+1} ∈ U(x_0, t*), and that F′(x_k)^{−1} and F′(y_k)^{−1} exist for all k = 1, 2, ..., n. Then, these estimates must be shown for k = n + 1. By the second sub-step of method (25.2), one obtains

‖F′(x_0)^{−1} F(x_{k+1})‖ ≤ ∫₀¹ φ(θ‖x_{k+1} − y_k‖) dθ ‖x_{k+1} − y_k‖ ≤ ∫₀¹ φ(θ(t_{k+1} − s_k)) dθ (t_{k+1} − s_k),    (25.30)

so

‖y_{k+1} − x_{k+1}‖ ≤ ‖F′(x_{k+1})^{−1} F′(x_0)‖ ‖F′(x_0)^{−1} F(x_{k+1})‖
 ≤ [∫₀¹ φ(θ‖x_{k+1} − y_k‖) dθ ‖x_{k+1} − y_k‖] / (1 − φ0(‖x_{k+1} − x_0‖))
 ≤ [∫₀¹ φ(θ(t_{k+1} − s_k)) dθ (t_{k+1} − s_k)] / (1 − φ0(t_{k+1})) = s_{k+1} − t_{k+1}.    (25.31)

Hence (D_{k+1}) holds. Similarly, by the first sub-step of method (25.2),

F(y_{k+1}) = F(y_{k+1}) − F(x_{k+1}) − F′(x_{k+1})(y_{k+1} − x_{k+1}),

so

‖x_{k+2} − y_{k+1}‖ ≤ ‖F′(y_{k+1})^{−1} F′(x_0)‖ ‖∫₀¹ F′(x_0)^{−1}(F′(x_{k+1} + θ(y_{k+1} − x_{k+1})) − F′(x_{k+1})) dθ (y_{k+1} − x_{k+1})‖
 ≤ [∫₀¹ φ(θ‖y_{k+1} − x_{k+1}‖) dθ ‖y_{k+1} − x_{k+1}‖] / (1 − φ0(‖y_{k+1} − x_0‖))
 ≤ [∫₀¹ φ(θ(s_{k+1} − t_{k+1})) dθ (s_{k+1} − t_{k+1})] / (1 − φ0(s_{k+1})) = t_{k+2} − s_{k+1},    (25.32)


so (E_{k+1}) holds. Moreover, one gets

‖x_{k+2} − x_0‖ ≤ ‖x_{k+2} − y_{k+1}‖ + ‖y_{k+1} − x_0‖ ≤ t_{k+2} − s_{k+1} + s_{k+1} − t_0 = t_{k+2} ≤ t*,

so x_{k+2} ∈ U(x_0, t*). Furthermore,

‖x_{k+1} − x_k‖ ≤ ‖x_{k+1} − y_k‖ + ‖y_k − x_k‖ ≤ t_{k+1} − s_k + s_k − t_k = t_{k+1} − t_k,    (25.33)

so {x_k} is fundamental in the Banach space B, and as such lim_{k→∞} x_k = x* ∈ U[x_0, t*], which is a closed domain. Finally, letting k → ∞ in (25.30) gives lim_{k→∞} ‖F′(x_0)^{−1} F(x_{k+1})‖ = 0, so by the continuity of F it follows that F(x*) = 0.

The uniqueness of x* is given next.

Proposition 7. Assume:

(1) x* ∈ U[x_0, t*] solves (25.1).

(2) There exists γ ≥ t* so that

∫₀¹ φ((1 − θ)γ + θ t*) dθ < 1.    (25.34)

Set Ω1 = Ω ∩ U[x*, γ]. Then, x* is the only solution of equation (25.1) in the domain Ω1.

Proof. Consider x̃ ∈ Ω1 with F(x̃) = 0. Define M = ∫₀¹ F′(x̃ + θ(x* − x̃)) dθ. Using (A1), (A3) and (25.34),

‖F′(x_0)^{−1}(M − F′(x_0))‖ ≤ ∫₀¹ φ((1 − θ)‖x̃ − x_0‖ + θ‖x* − x_0‖) dθ ≤ ∫₀¹ φ((1 − θ)γ + θ t*) dθ < 1,

so x̃ = x*, by the invertibility of M and M(x̃ − x*) = F(x̃) − F(x*) = 0 − 0 = 0.

4. Numerical Experiments

We provide some examples in this section.

Example 32. Define the function q(t) = ξ0 t + ξ1 + ξ2 sin ξ3 t, x0 = 0, where ξ_j, j = 0, 1, 2, 3, are parameters. Choose φ0(t) = L0 t and φ(t) = L t. Then, clearly, for ξ3 large and ξ2 sufficiently small, the ratio L0/L can be arbitrarily small; that is, L0/L → 0 is possible.


Example 33. Let B = B1 = C[0, 1] and Ω = U[0, 1]. It is well known that the boundary value problem [16]

ς(0) = 0,  ς(1) = 1,  ς″ = −ς³ − σς²

can be written as the Hammerstein-like nonlinear integral equation

ς(s) = s + ∫₀¹ Q(s, t)(ς³(t) + σς²(t)) dt,

where σ is a parameter. Then, define F : Ω → B1 by

[F(x)](s) = x(s) − s − ∫₀¹ Q(s, t)(x³(t) + σx²(t)) dt.

Choose ς0(s) = s and Ω = U(ς0, ρ0). Then, clearly, U(ς0, ρ0) ⊂ U(0, ρ0 + 1), since ‖ς0‖ = 1. Suppose 2σ < 5. Then, the conditions (A) are satisfied for

L0 = (σ + 6ρ0 + 3)/8,  L = (2σ + 3ρ0 + 6)/4  and  η = (1 + σ)/(5 − 2σ).

Notice that L0 < L.
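For concreteness, the constants of Example 33 can be evaluated for sample parameter values; σ = 1 and ρ0 = 0.5 below are illustrative choices, not taken from the text.

```python
def example33_constants(sigma, rho0):
    """Constants of Example 33 (requires 2*sigma < 5):
    L0 = (sigma + 6*rho0 + 3)/8, L = (2*sigma + 3*rho0 + 6)/4,
    eta = (1 + sigma)/(5 - 2*sigma)."""
    assert 2.0 * sigma < 5.0
    L0 = (sigma + 6.0 * rho0 + 3.0) / 8.0
    L = (2.0 * sigma + 3.0 * rho0 + 6.0) / 4.0
    eta = (1.0 + sigma) / (5.0 - 2.0 * sigma)
    return L0, L, eta

L0, L, eta = example33_constants(sigma=1.0, rho0=0.5)  # L0 = 0.875, L = 2.375
```

As the example states, L0 < L holds for every admissible σ and ρ0, since L = (4σ + 6ρ0 + 12)/8 term-by-term dominates L0.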

5. Conclusion

The semi-local convergence of a two-step Newton method of fourth order is studied under general continuity conditions on F′ in the Banach space setting, using recurrent majorizing sequences. As far as we know, no semi-local convergence analysis had been given for this method under such general continuity conditions. The analysis uses conditions only on the first derivative, which is the only derivative appearing in the method, thus extending its applicability. Numerical experiments testing the convergence criteria are also presented.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational theory of iterative methods, Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magreñán, A. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.
[8] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51(2), (2020), 439-455.
[9] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[10] Chen, X., Yamamoto, T., Convergence domains of certain iterative methods for solving nonlinear equations, Numer. Funct. Anal. Optim., 10, (1989), 37-48.
[11] Dennis, J. E., Jr., On Newton-like methods, Numer. Math., 11, (1968), 324-330.
[12] Dennis, J. E., Jr., Schnabel, R. B., Numerical methods for unconstrained optimization and nonlinear equations, SIAM, Philadelphia, 1996. First published by Prentice-Hall, Englewood Cliffs, New Jersey, (1983).
[13] Deuflhard, P., Heindl, G., Affine invariant convergence theorems for Newton's method and extensions to related methods, SIAM J. Numer. Anal., 16, (1979), 1-10.
[14] Deuflhard, P., Newton methods for nonlinear problems. Affine invariance and adaptive algorithms, Springer Series in Computational Mathematics, 35, Springer-Verlag, Berlin, (2004).
[15] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[16] Ezquerro, J. A., Hernández, M. A., Newton's method: An updated approach of Kantorovich's theory, Springer, Cham, Switzerland, (2018).
[17] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 218, (2011), 2377-2385.
[18] Gutiérrez, J. M., Magreñán, Á. A., Romero, N., On the semilocal convergence of Newton-Kantorovich method under center-Lipschitz conditions, Appl. Math. Comput., 221, (2013), 79-88.
[19] Hernández, M. A., Romero, N., On a characterization of some Newton-like methods of R-order at least three, J. Comput. Appl. Math., 183(1), (2005), 53-66.
[20] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[21] Magreñán, A. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[22] Magreñán, A. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[23] Nashed, M. Z., Chen, X., Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math., 66, (1993), 235-257.
[24] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[25] Ostrowski, A. M., Solution of equations in Euclidean and Banach spaces, Elsevier, 1973.
[26] Potra, F. A., Pták, V., Nondiscrete induction and iterative processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[27] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[28] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[29] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[30] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Comput., 161, (2005), 253-264.
[31] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[32] Sharma, J. R., Guha, R. K., Sharma, R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algorithms, 62, (2013), 307-323.
[33] Soleymani, F., Lotfi, T., Bakhtiari, P., A multi-step class of iterative methods for nonlinear systems, Optim. Lett., 8, (2014), 1001-1015.
[34] Steffensen, J. F., Remarks on iteration, Skand. Aktuar. Tidskr., 16, (1933), 64-72.
[35] Traub, J. F., Iterative methods for the solution of equations, Prentice Hall, New Jersey, U.S.A., (1964).
[36] Traub, J. F., Werschulz, A. G., Complexity and information, Lezioni Lincee [Lincei Lectures], Cambridge University Press, Cambridge, 1998, xii+139 pp., ISBN: 0-521-48506-1.
[37] Traub, J. F., Wozniakowski, H., Path integration on a quantum computer, Quantum Inf. Process., 1(5), (2002), 356-388.
[38] Yamamoto, T., A convergence theorem for Newton-like methods in Banach spaces, Numer. Math., 51, (1987), 545-557.
[39] Verma, R., New Trends in Fractional Programming, Nova Science Publisher, New York, USA, (2019).
[40] Zabrejko, P. P., Nguen, D. F., The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates, Numer. Funct. Anal. Optim., 9, (1987), 671-684.

Chapter 26

A Noor-Waseem Third Order Method to Solve Equations

1. Introduction

Noor and Waseem [18] considered the third-order method, defined for n = 0, 1, 2, ... by

y_n = x_n − F′(x_n)^{−1} F(x_n),
x_{n+1} = x_n − 4 A_n^{−1} F(x_n),    (26.1)

where A_n = 3F′((2x_n + y_n)/3) + F′(y_n), for solving the nonlinear equation

F(x) = 0.    (26.2)

Here F : Ω ⊆ B → B1 is an operator acting between Banach spaces B and B1, with Ω ≠ ∅. In general a closed-form solution of (26.2) is not possible, so iterative methods are used for approximating a solution x* of (26.2) (see [1]-[27]). The local convergence of this method in the special case B = B1 = ℝ was shown to be of order three using Taylor expansions and assumptions on the fourth-order derivative of F, which does not appear in the method [18]. These assumptions on the fourth derivative reduce the applicability of the method [1]-[27]. For example, let B = B1 = ℝ, Ω = [−0.5, 1.5], and define λ on Ω by

λ(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and λ(0) = 0.

Then, t* = 1, and

λ‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously λ‴ is not bounded on Ω, so the convergence of method (26.1) is not guaranteed by the previous analyses in [1]-[27]. In this chapter, we introduce a majorant sequence and use general continuity conditions to extend the applicability of the Noor-Waseem method (26.1). Our analysis includes error bounds and results on the uniqueness of x* based on computable Lipschitz constants not given before in [1]-[27] and in other similar studies using the Taylor series. Our idea is very general, so it applies to other methods too. The rest of the chapter is set up as follows: In Section 2, we present results on majorizing sequences. Sections 3 and 4 contain the semi-local convergence analysis and the numerical experiments, respectively. Concluding remarks are given in Section 5.
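The following Python sketch implements the Noor-Waseem iteration (26.1) for a small system, using a direct linear solve for F′(x_n)^{−1} F(x_n) and A_n^{−1} F(x_n); the 2×2 test system is an illustrative choice, not taken from [18].

```python
import numpy as np

def noor_waseem(F, dF, x0, tol=1e-12, max_iter=50):
    """Third-order Noor-Waseem method (26.1):
    y_n = x_n - F'(x_n)^{-1} F(x_n),
    x_{n+1} = x_n - 4 A_n^{-1} F(x_n),
    A_n = 3 F'((2 x_n + y_n)/3) + F'(y_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        y = x - np.linalg.solve(dF(x), Fx)
        A = 3.0 * dF((2.0 * x + y) / 3.0) + dF(y)
        x_new = x - 4.0 * np.linalg.solve(A, Fx)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# illustrative 2x2 system: x^2 + y^2 = 4, x*y = 1
F  = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
dF = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
sol = noor_waseem(F, dF, [2.0, 0.5])
```

Note that each step needs only F and F′, evaluated at three points; no higher derivatives enter the iteration.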

2. Majorizing Sequences

Sequences are introduced that shall be shown to be majorizing for the method (26.1). Let L0 > 0, L > 0 and η > 0 be given parameters. Define the scalar sequences {t_n}, {s_n} by

t_0 = 0,  s_0 = η,
t_{n+1} = s_n + L(t_{n+1} − t_n)(s_n − t_n) / [8(1 − L0 t_n)(1 − 2L0(s_n + t_n))],
s_{n+1} = t_{n+1} + L(s_n − t_n + t_{n+1} − t_n)(t_{n+1} − t_n) / [2(1 − L0 t_{n+1})].    (26.3)

Next, we present results on the convergence of the sequences given by (26.3).

Lemma 14. Suppose that

2L0(s_n + t_n) < 1    (26.4)

for each n = 0, 1, 2, .... Then, the sequence {t_n} is nondecreasing, bounded from above by 1/(4L0), and as such it converges to its unique least upper bound t* ∈ [η, 1/(4L0)].

Proof. It follows from (26.3) and (26.4) that the sequence {t_n} is nondecreasing and bounded from above by 1/(4L0), so it converges to t*.

Next, we present conditions that are stronger than (26.4) but easier to verify. Define the polynomials g1 and g2 on the interval [0, 1) by

g1(t) = 4L0 t² + L(1 + t)t − L(1 + t)

and

g2(t) = 2L0 t³ + L(2 + t)(1 + t)t − L(2 + t)(1 + t).

We get g1(0) = −L, g1(1) = 4L0, g2(0) = −2L and g2(1) = 2L0. It follows by the intermediate value theorem that the polynomials g1 and g2 have zeros in (0, 1). Denote the minimal such zeros by α1 and α2, respectively. Set

a = L t1 / (8(1 − 2L0 s0)),  b = L(s0 + t1) t1 / (2(1 − L0 t1)η),

c = max{a, b}, α3 = min{α1, α2} and α4 = max{α1, α2}.
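The minimal zeros α1, α2 of g1 and g2 can be computed by bisection, since each polynomial satisfies g(0) < 0 < g(1) with a single sign change on (0, 1). A sketch with illustrative values L0 = 1, L = 1.5; note that g2 is taken here with the factor L restored in its middle terms, consistent with g2(0) = −2L and g2(1) = 2L0.

```python
def smallest_zero(g, lo=0.0, hi=1.0, iters=80):
    """Bisection for the unique sign change of g on (lo, hi), g(lo) < 0 < g(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

L0, L = 1.0, 1.5
g1 = lambda t: 4.0 * L0 * t**2 + L * (1.0 + t) * t - L * (1.0 + t)
g2 = lambda t: 2.0 * L0 * t**3 + L * (2.0 + t) * (1.0 + t) * t - L * (2.0 + t) * (1.0 + t)
alpha1, alpha2 = smallest_zero(g1), smallest_zero(g2)
alpha3, alpha4 = min(alpha1, alpha2), max(alpha1, alpha2)
```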

Lemma 15. Suppose that

c ≤ α3 ≤ α4 ≤ α ≤ 1 − 4L0 η.    (26.5)

Then, the sequence {t_n} is nondecreasing, bounded from above by η/(1 − α), and converges to its unique least upper bound t* ∈ [η, η/(1 − α)].

Proof. The estimates

0 ≤ L(t_{k+1} − t_k) / [8(1 − L0 t_k)(1 − 2L0(s_k + t_k))] ≤ α    (26.6)

and

0 ≤ L(s_k − t_k + t_{k+1} − t_k)(t_{k+1} − t_k) / [2(1 − L0 t_{k+1})] ≤ α(s_k − t_k)    (26.7)

shall be shown using induction. These estimates hold for k = 0 by (26.5). Then, we have t_1 − s_0 ≤ α(s_0 − t_0), s_1 − t_1 ≤ α(s_0 − t_0) and

t_1 ≤ η + αη = ((1 − α²)/(1 − α)) η < η/(1 − α).

Suppose that these estimates hold for k = 0, 1, ..., n − 1. Then, since 0 ≤ s_k − t_k ≤ α^k η and t_{k+1} ≤ (1 − α^{k+2})η/(1 − α), instead of (26.6) and (26.7) it suffices to show

L(1 + α)α^k η / [4(1 − L0 ((1 − α^{k+1})/(1 − α)) η)] ≤ α,    (26.8)

2L0(s_k + t_k) < 1    (26.9)

and

L(1 + α)(2 + α)α^k η / [2(1 − L0 ((1 − α^{k+2})/(1 − α)) η)] ≤ α,    (26.10)

where we also use

t_{k+1} − t_k = (t_{k+1} − s_k) + (s_k − t_k) ≤ α(s_k − t_k) + (s_k − t_k) = (1 + α)(s_k − t_k)

and

s_k − t_k + t_{k+1} − t_k ≤ (2 + α)(s_k − t_k).

Estimate (26.8) holds if

f_k^{(1)}(t) ≤ 0 at t = α1,    (26.11)

where

f_k^{(1)}(t) = L(1 + t) t^{k−1} η + 4L0(1 + t + ... + t^k) η − 4    (26.12)

for all t ∈ [0, 1). A relationship between two consecutive functions f_k^{(1)} is needed:

f_{k+1}^{(1)}(t) = L(1 + t) t^k η + 4L0(1 + t + ... + t^{k+1}) η − 4 + f_k^{(1)}(t) − L(1 + t) t^{k−1} η − 4L0(1 + t + ... + t^k) η + 4
               = f_k^{(1)}(t) + g1(t) t^{k−1} η.    (26.13)

In particular,

f_{k+1}^{(1)}(α1) = f_k^{(1)}(α1),    (26.14)

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

since $g_1(\alpha_1) = 0$. Define the function $f_\infty^{(1)}(t) = \lim_{k\to\infty} f_k^{(1)}(t)$. Then, estimate (26.11) holds if
$$f_\infty^{(1)}(\alpha_1) \le 0. \tag{26.15}$$
But by (26.12), one gets $f_\infty^{(1)}(t) = \frac{4L_0\eta}{1-t} - 4$. So, (26.15) holds if
$$\frac{4L_0\eta}{1-t} - 4 \le 0 \text{ at } t = \alpha_1,$$
which is true by the choice of $\alpha$ in (26.5). Similarly, estimate (26.10) holds if
$$f_k^{(2)}(t) \le 0 \text{ at } t = \alpha_2, \tag{26.16}$$
where
$$f_k^{(2)}(t) = L(2+t)(1+t)t^{k-1}\eta + 2L_0(1 + t + \ldots + t^{k+1})\eta - 2. \tag{26.17}$$
This time one has
$$f_{k+1}^{(2)}(t) = L(2+t)(1+t)t^k\eta + 2L_0(1 + t + \ldots + t^{k+2})\eta - 2 + f_k^{(2)}(t) - L(2+t)(1+t)t^{k-1}\eta - 2L_0(1 + t + \ldots + t^{k+1})\eta + 2 = f_k^{(2)}(t) + g_2(t)t^{k-1}\eta, \tag{26.18}$$
so
$$f_{k+1}^{(2)}(\alpha_2) = f_k^{(2)}(\alpha_2), \tag{26.19}$$
since $g_2(\alpha_2) = 0$. Define the function $f_\infty^{(2)}(t) = \lim_{k\to\infty} f_k^{(2)}(t)$. Then, estimate (26.16) holds if
$$f_\infty^{(2)}(t) \le 0 \text{ at } t = \alpha_2. \tag{26.20}$$
It follows from (26.17) that $f_\infty^{(2)}(t) = \frac{2L_0\eta}{1-t} - 2$. Hence, (26.20) holds if
$$\frac{2L_0\eta}{1-t} - 2 \le 0 \text{ at } t = \alpha_2, \tag{26.21}$$
which is true by the choice of $\alpha$ in (26.5). Evidently, estimate (26.9) holds if $\frac{4L_0\eta}{1-\alpha} < 1$, which is true by the right hand side inequality in (26.5). Hence, the induction for estimates (26.8)-(26.10) is completed. It follows that the sequence $\{t_k\}$ is nondecreasing and bounded from above by $\frac{\eta}{1-\alpha}$, so it converges to $t^*$.

3. Semi-Local Convergence

The hypotheses (H) are used. Suppose:

(H1) There exist $x_0 \in \Omega$ and $\eta > 0$ such that $F'(x_0)^{-1}$ exists and $\|F'(x_0)^{-1}F(x_0)\| \le \eta$.

(H2) $\|F'(x_0)^{-1}(F'(v) - F'(x_0))\| \le L_0\|v - x_0\|$ for all $v \in \Omega$. Set $\Omega_0 = \Omega \cap U(x_0, \frac{1}{L_0})$.

(H3) $\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le L\|w - v\|$ for all $v, w \in \Omega_0$.

(H4) The hypotheses of Lemma 14 or Lemma 15 hold.

(H5) $U[x_0, t^*] \subset \Omega$ (or $U[x_0, \frac{\eta}{1-\alpha}] \subseteq \Omega$).

Next, the semi-local convergence of the method (26.1) follows under the hypotheses (H).

Theorem 49. Suppose that the hypotheses (H) hold. Then, the following assertions hold: $\{x_n\} \subset U(x_0, t^*)$ and there exists $x^* \in U[x_0, t^*]$ such that
$$\|x_{k+1} - y_k\| \le t_{k+1} - s_k \tag{26.22}$$
and
$$\|y_k - x_k\| \le s_k - t_k. \tag{26.23}$$

Proof. By hypothesis (H1) and (26.3) one gets
$$\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \le \eta = s_0 - t_0 \le t^*, \tag{26.24}$$
so $y_0 \in U(x_0, t^*)$ and (26.23) holds for $k = 0$. Let $w \in U(x_0, t^*)$. Then, by (H2), it follows that
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le L_0\|w - x_0\| \le L_0 t^* < 1,$$
so the invertibility of $F'(w)$ follows by the Banach lemma on invertible operators, and
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - L_0\|w - x_0\|}. \tag{26.25}$$

Similarly, $A_k$ is shown to be invertible:
$$\|(4F'(x_0))^{-1}(A_k - 4F'(x_0))\| \le \frac{1}{4}\left\|F'(x_0)^{-1}\left(3F'\left(\frac{2x_k + y_k}{3}\right) + F'(y_k) - 4F'(x_0)\right)\right\|$$
$$\le \frac{1}{4}\left(3L_0\left\|\frac{2x_k + y_k}{3} - x_0\right\| + L_0\|y_k - x_0\|\right) \le \frac{1}{4}\left(3L_0\frac{2\|x_k - x_0\| + \|y_k - x_0\|}{3} + L_0\|y_k - x_0\|\right)$$
$$\le \frac{1}{4}\left(2L_0(t_k - t_0) + 2L_0(s_k - t_0)\right) = \frac{1}{2}L_0(t_k + s_k) < \frac{1}{2}L_0(2t^*) = L_0 t^* < 1,$$
so $A_k$ is invertible and
$$\|A_k^{-1}F'(x_0)\| \le \frac{1}{4(1 - 2L_0(t_k + s_k))}. \tag{26.26}$$
Moreover, by the second sub-step of method (26.1), $x_{k+1}$ exists for $k = 0$. Furthermore, one obtains in turn that
$$x_{k+1} = x_k - F'(x_k)^{-1}F(x_k) + (F'(x_k)^{-1} - 4A_k^{-1})F(x_k) = y_k + F'(x_k)^{-1}\left(3F'\left(\frac{2x_k + y_k}{3}\right) + F'(y_k) - 4F'(x_k)\right)A_k^{-1}F(x_k),$$
so
$$\|x_{k+1} - y_k\| \le \frac{(L\|y_k - x_k\| + L\|y_k - x_k\|)\|x_{k+1} - x_k\|}{16(1 - L_0\|x_k - x_0\|)(1 - 2L_0(\|x_k - x_0\| + \|y_k - x_0\|))} \le \frac{L(s_k - t_k)(t_{k+1} - t_k)}{8(1 - L_0 t_k)(1 - 2L_0(t_k + s_k))} = t_{k+1} - s_k$$
and
$$\|x_{k+1} - x_0\| \le \|x_{k+1} - y_k\| + \|y_k - x_0\| \le t_{k+1} - s_k + s_k - t_0 = t_{k+1} \le t^*,$$

so (26.22) holds and $x_{k+1} \in U(x_0, t^*)$ for $k = 0$. Suppose that the estimates (26.22), (26.23) hold and $\{x_k\}, \{y_k\} \subset U(x_0, t^*)$ for all integers $k$ smaller than $n - 1$. By the definition of the method, one writes
$$F(x_{k+1}) = F(x_{k+1}) - F(x_k) - \frac{1}{4}A_k(x_{k+1} - x_k),$$
so
$$\|F'(x_0)^{-1}F(x_{k+1})\| \le \frac{1}{2}L(\|y_k - x_k\| + \|x_{k+1} - x_k\|)\|x_{k+1} - x_k\| \le \frac{L}{2}(s_k - t_k + t_{k+1} - t_k)(t_{k+1} - t_k),$$
$$\|y_{k+1} - x_{k+1}\| \le \|F'(x_{k+1})^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_{k+1})\| \le \frac{L(s_k - t_k + t_{k+1} - t_k)(t_{k+1} - t_k)}{2(1 - L_0 t_{k+1})} = s_{k+1} - t_{k+1} \tag{26.27}$$
and
$$\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \le s_{k+1} - t_{k+1} + t_{k+1} - t_0 = s_{k+1} \le t^*,$$
so $y_{k+1} \in U(x_0, t^*)$. The induction for the assertions (26.22) and (26.23) is completed. It follows that the sequence $\{x_n\}$ is fundamental (Cauchy) in the Banach space $B$, and as such $\lim_{n\to\infty} x_n = x^* \in U[x_0, t^*]$. By letting $k \to \infty$ in (26.27) and using the continuity of $F$, we deduce $F(x^*) = 0$. Finally, from $\|x_{k+m} - x_k\| \le t_{k+m} - t_k$, we get $\|x^* - x_k\| \le t^* - t_k$ by letting $m \to \infty$.

The uniqueness of the solution result is presented.

Proposition 8. Assume:

(1) $x^*$ is a simple solution of (26.2).

(2) There exists $\tau \ge t^*$ so that
$$L_0(t^* + \tau) < 2. \tag{26.28}$$
Set $\Omega_1 = \Omega \cap U[x^*, \tau]$. Then, $x^*$ is the unique solution of equation (26.2) in the domain $\Omega_1$.

Proof. Let $q \in \Omega_1$ with $F(q) = 0$. Define $M = \int_0^1 F'(q + \theta(x^* - q))\,d\theta$. Using (H2) and (26.28) one obtains
$$\|F'(x_0)^{-1}(M - F'(x_0))\| \le L_0\int_0^1 \left((1-\theta)\|q - x_0\| + \theta\|x^* - x_0\|\right)d\theta \le \frac{L_0}{2}(t^* + \tau) < 1,$$
so $q = x^*$ follows from the invertibility of $M$ and the identity $M(q - x^*) = F(q) - F(x^*) = 0 - 0 = 0$.
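The definition of method (26.1) is not reproduced in this excerpt; from the identities used in the proof of Theorem 49 it has the form $y_n = x_n - F'(x_n)^{-1}F(x_n)$, $x_{n+1} = x_n - 4A_n^{-1}F(x_n)$ with $A_n = 3F'\!\left(\frac{2x_n + y_n}{3}\right) + F'(y_n)$ — a reconstruction, stated here as an assumption. A minimal scalar sketch:

```python
def noor_waseem(f, df, x0, tol=1e-12, max_iter=20):
    """Scalar sketch of the third order method (26.1), reconstructed from
    the identities in the proof of Theorem 49 (an assumption, since the
    method's definition is not shown in this excerpt):
        y_n     = x_n - f(x_n)/f'(x_n)
        A_n     = 3*f'((2*x_n + y_n)/3) + f'(y_n)
        x_{n+1} = x_n - 4*f(x_n)/A_n
    """
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / df(x)
        A = 3 * df((2 * x + y) / 3) + df(y)
        x_new = x - 4 * f(x) / A
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x**2 - 2 = 0 from x0 = 1.5 (illustrative test problem).
root = noor_waseem(lambda t: t * t - 2, lambda t: 2 * t, 1.5)
```

The averaged derivative $A_n/4$ is what makes the scheme third order while only first derivatives of $F$ appear in it.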

4. Numerical Experiments

We provide some examples in this section.

Example 34. Define the function $h(t) = \xi_0 t + \xi_1 + \xi_2\sin\xi_3 t$, $x_0 = 0$, where $\xi_j$, $j = 0, 1, 2, 3$, are parameters. Choose $\psi_0(t) = L_0 t$ and $\psi(t) = Lt$. Then, clearly, for $\xi_3$ large and $\xi_2$ small, the ratio $L_0/L$ can be (arbitrarily) small. Notice that $L_0/L \to 0$.

Example 35. Let $B = B_1 = C[0, 1]$ and $\Omega = U[0, 1]$. It is well known that the boundary value problem [16]
$$\varsigma(0) = 0,\quad \varsigma(1) = 1,\quad \varsigma'' = -\varsigma^3 - \sigma\varsigma^2$$
can be given as a Hammerstein-like nonlinear integral equation
$$\varsigma(s) = s + \int_0^1 Q(s, t)\left(\varsigma^3(t) + \sigma\varsigma^2(t)\right)dt,$$
where $\sigma$ is a parameter and $Q$ is the Green's function. Then, define $F: \Omega \to B_1$ by
$$[F(x)](s) = x(s) - s - \int_0^1 Q(s, t)\left(x^3(t) + \sigma x^2(t)\right)dt.$$
Choose $\varsigma_0(s) = s$ and $\Omega = U(\varsigma_0, \rho_0)$. Then, clearly, $U(\varsigma_0, \rho_0) \subset U(0, \rho_0 + 1)$, since $\|\varsigma_0\| = 1$. Suppose $2\sigma < 5$. Then, the conditions (H) are satisfied for
$$L_0 = \frac{\sigma + 6\rho_0 + 3}{8},\quad L = \frac{2\sigma + 3\rho_0 + 6}{4},\quad \eta = \frac{1 + \sigma}{5 - 2\sigma}.$$
Notice that $L_0 < L$.

5. Conclusion

In this chapter, we considered the semi-local convergence analysis of the Noor-Waseem third order method for solving nonlinear equations in Banach space. To the best of our knowledge, no semi-local convergence analysis had been given for the Noor-Waseem method under Lipschitz continuity. Our goal was to extend the applicability of the Noor-Waseem method in the semi-local convergence setting under these conditions. Majorizing sequences and conditions only on the first derivative, which appears in the method, are used for proving our results. Numerical experiments testing the convergence criteria are given in this chapter.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.

[2] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier Publ. Co., New York, U.S.A., 2007.

[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).


[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.

[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.

[6] Argyros, I. K., Magreñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.

[7] Argyros, I. K., Magreñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.

[8] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.

[9] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.

[10] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.

[11] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, (2018).

[12] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 281, (2011), 2377-2385.

[13] Noor, M. A., Waseem, M., Some iterative methods for solving a system of nonlinear equations, Comput. Math. Appl., 57, 1, (2009), 101-106.

[14] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).

[15] Magreñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.

[16] Magreñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.

[17] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).

[18] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.

[19] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).


[20] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.

[21] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.

[22] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.

[23] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.

[24] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.

[25] Sharma, J. R., Guha, R. K., Sharma, R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algorithms, 62, (2013), 307-323.

[26] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).

[27] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).

Chapter 27

Generalized Homeier Method

1. Introduction

For solving the equation $F(x) = 0$, where $F: \mathbb{R}^d \to \mathbb{R}^d$ is a vector function, Homeier [8] considered the iteration defined by
$$x_{n+1} = \psi_F(x_n). \tag{27.1}$$
Here $\psi_F(x) = x - A_F\!\left(x - \frac{1}{2}A_F(x)F(x)\right)F(x)$, where $A_F(x) = [J_F(x)]^{-1}$ is the inverse of the Jacobian of $F$. Note that the iteration (27.1) can be written as the two step iteration
$$y_n = x_n - \frac{1}{2}A_F(x_n)F(x_n),$$
$$x_{n+1} = x_n - A_F(y_n)F(x_n).$$
Homeier proved in [8] that method (27.1) has convergence order three, using assumptions on the derivatives of $F$ up to order four. In this chapter, we consider a more general setting of the iteration (27.1) for solving the nonlinear equation
$$F(x) = 0, \tag{27.2}$$
where $F: \Omega \subset X \to Y$ is an operator acting between Banach spaces $X$ and $Y$ with $\Omega \ne \emptyset$. Throughout the chapter, $B(x_0, \rho) = \{x \in X : \|x - x_0\| < \rho\}$ and $B[x_0, \rho] = \{x \in X : \|x - x_0\| \le \rho\}$ for some $\rho > 0$. Precisely, we consider the iteration defined for $k = 0, 1, 2, \ldots$ by the generalized Homeier method
$$y_k = x_k - aF'(x_k)^{-1}F(x_k),$$
$$x_{k+1} = x_k - F'(y_k)^{-1}F(x_k), \tag{27.3}$$
where $0 \ne a \in \mathbb{R}$. We shall prove local convergence of the method (27.3) using assumptions only on the Fréchet derivative of $F$. Note that (27.3) reduces to the Homeier method when $a = \frac{1}{2}$. Observe that our convergence analysis is not based on Taylor expansion, so we do not need assumptions on derivatives of order higher than two of the operator involved. This technique can be used on other methods and similar topics [1]-[13].
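A minimal scalar sketch of iteration (27.3); the test function and starting point are illustrative choices, not taken from the text:

```python
def generalized_homeier(f, df, x0, a=0.5, tol=1e-12, max_iter=50):
    """Scalar sketch of method (27.3):
        y_k     = x_k - a * f(x_k)/f'(x_k)
        x_{k+1} = x_k - f(x_k)/f'(y_k)
    With a = 1/2 this is Homeier's third order method (27.1).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        y = x - a * fx / df(x)   # damped Newton predictor
        x_new = x - fx / df(y)   # full step with the frozen-at-y derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x**3 - 2 = 0 near x0 = 1.3 (illustrative test problem).
root = generalized_homeier(lambda t: t ** 3 - 2, lambda t: 3 * t * t, 1.3)
```

Evaluating the derivative at the midpoint-like iterate $y_k$ (rather than at $x_k$) is what raises the order from two to three without any second-derivative evaluations.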

For example, let $X = Y = \mathbb{R}$, $\Omega = [-\frac{1}{2}, \frac{3}{2}]$. Define $f$ on $\Omega$ by
$$f(t) = \begin{cases} t^3\log t^2 + t^5 - t^4 & \text{if } t \ne 0,\\ 0 & \text{if } t = 0. \end{cases}$$
Then, we have $t_* = 1$ and
$$f'''(t) = 6\log t^2 + 60t^2 - 24t + 22.$$
Obviously, $f'''(t)$ is not bounded on $\Omega$. So, the convergence of the Homeier method is not guaranteed by the analysis in [8]. The chapter contains the local convergence analysis in Section 2, and the numerical examples are given in Section 3.
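The unboundedness of $f'''$ near the origin can be confirmed numerically — a quick sanity check, not part of the original text:

```python
import math

def fppp(t):
    """Third derivative of f(t) = t^3*log(t^2) + t^5 - t^4 for t != 0."""
    return 6 * math.log(t * t) + 60 * t * t - 24 * t + 22

# f''' behaves like 12*log|t| as t -> 0: each decade closer to 0
# subtracts roughly 27.6 from the value.
values = [fppp(10.0 ** (-k)) for k in range(1, 9)]
```

The sampled values decrease without bound as $t \to 0^+$, confirming that no Lipschitz condition on $f''$ (let alone boundedness of higher derivatives) holds on all of $\Omega$.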

2. Local Convergence

It is convenient to define some functions and parameters. Let $M = [0, \infty)$. Assume the functions:

(i) $\varphi_0: M \to M$ is nondecreasing, continuous, and $\varphi_0(t) - 1 = 0$ has a smallest solution $\rho_0 \in M$. Set $M_0 = [0, \rho_0)$.

(ii) $\varphi, \varphi_1: M_0 \to M$ are nondecreasing, continuous, and $\psi_1(t) - 1 = 0$ has a smallest solution $r_1 \in M_0$, where
$$\psi_1(t) = \frac{\int_0^1 \varphi(\theta t)\,d\theta + |1 - a|\int_0^1 \varphi_1(\theta t)\,d\theta}{1 - \varphi_0(t)}.$$

(iii) The equation $\varphi_0(\psi_1(t)t) - 1 = 0$ has a smallest solution $\rho_1 \in M_0$. Set $\rho_2 = \min\{\rho_0, \rho_1\}$.

(iv) The equation $\psi_2(t) - 1 = 0$ has a smallest solution $r \in M_0$, where
$$\psi_2(t) = \psi_1(t) + \frac{|1 - a|\int_0^1 \varphi_1(\theta t)\,d\theta}{1 - \varphi_0(t)} + \frac{\left(\varphi_0(\psi_1(t)t) + \varphi_0(t)\right)\int_0^1 \varphi_1(\theta t)\,d\theta}{(1 - \varphi_0(t))(1 - \varphi_0(\psi_1(t)t))}.$$

By the definition of these functions, we deduce that $r \le r_1$. It shall be shown that $r$ is a radius of convergence for the method (27.3). The convergence is based on the conditions (H). Assume:

(H1) $F'(x^*)^{-1}$ exists and $\|F'(x^*)^{-1}(F'(w) - F'(x^*))\| \le \varphi_0(\|w - x^*\|)$ holds for all $w \in \Omega$. Set $\Omega_0 = B(x^*, \rho_0) \cap \Omega$.

(H2) $\|F'(x^*)^{-1}(F'(w) - F'(z))\| \le \varphi(\|w - z\|)$ and $\|F'(x^*)^{-1}F'(w)\| \le \varphi_1(\|w - x^*\|)$ hold for all $z, w \in \Omega_0$.

(H3) $B[x^*, r] \subset \Omega$.

Next is the local convergence of method (27.3) in this setting.

Theorem 50. Assume that the conditions (H) hold. Then, the following assertions hold: $\{x_n\} \subset B(x^*, r)$ and $\lim_{n\to\infty} x_n = x^*$.

Proof. We employ mathematical induction to show
$$\|y_k - x^*\| \le \psi_1(\|x_k - x^*\|)\|x_k - x^*\| \le \|x_k - x^*\| < r \tag{27.4}$$
and
$$\|x_{k+1} - x^*\| \le \psi_2(\|x_k - x^*\|)\|x_k - x^*\| \le \|x_k - x^*\| < r. \tag{27.5}$$
Let $v \in B(x^*, r)$. Then, using (H1), one has
$$\|F'(x^*)^{-1}(F'(v) - F'(x^*))\| \le \varphi_0(\|v - x^*\|) \le \varphi_0(r) < 1,$$
so
$$\|F'(v)^{-1}F'(x^*)\| \le \frac{1}{1 - \varphi_0(\|v - x^*\|)} \tag{27.6}$$
by the Banach lemma on invertible operators [1]-[6], [10]. Moreover, the iterate $y_0$ is well defined by (27.6) (for $v = x_0$). By the first sub-step of method (27.3) for $k = 0$,
$$y_0 - x^* = x_0 - x^* - F'(x_0)^{-1}F(x_0) + (1 - a)F'(x_0)^{-1}F(x_0). \tag{27.7}$$
In view of (H1), (H2), (27.6) and (27.7),
$$\|y_0 - x^*\| \le \frac{1}{1 - \varphi_0(\|x_0 - x^*\|)}\left[\int_0^1 \varphi(\theta\|x_0 - x^*\|)\,d\theta + |1 - a|\int_0^1 \varphi_1(\theta\|x_0 - x^*\|)\,d\theta\right]\|x_0 - x^*\| \le \psi_1(\|x_0 - x^*\|)\|x_0 - x^*\| \le \|x_0 - x^*\| < r,$$
so (27.4) holds for $k = 0$, $y_0 \in B(x^*, r)$, and $F'(y_0)$ is invertible by (27.6) for $v = y_0$. Hence, $x_1$ is well defined by the second sub-step of method (27.3) for $k = 0$. We can write
$$x_1 - x^* = x_0 - x^* - F'(y_0)^{-1}F(x_0) = x_0 - x^* - F'(x_0)^{-1}F(x_0) + \left[F'(x_0)^{-1} - F'(y_0)^{-1}\right]F(x_0) \tag{27.8}$$
$$= y_0 - x^* - (1 - a)F'(x_0)^{-1}F(x_0) + \left[F'(x_0)^{-1} - F'(y_0)^{-1}\right]F(x_0). \tag{27.9}$$
Hence, by (H1), (H2) and (27.9),
$$\|x_1 - x^*\| \le \|y_0 - x^*\| + \frac{|1 - a|\int_0^1 \varphi_1(\theta\|x_0 - x^*\|)\,d\theta}{1 - \varphi_0(\|x_0 - x^*\|)}\|x_0 - x^*\| + \frac{\varphi_0(\|y_0 - x^*\|) + \varphi_0(\|x_0 - x^*\|)}{(1 - \varphi_0(\|x_0 - x^*\|))(1 - \varphi_0(\|y_0 - x^*\|))}\int_0^1 \varphi_1(\theta\|x_0 - x^*\|)\,d\theta\,\|x_0 - x^*\| \tag{27.10}$$
$$\le \psi_2(\|x_0 - x^*\|)\|x_0 - x^*\| \le \|x_0 - x^*\| < r, \tag{27.11}$$
showing (27.5) for $k = 0$ and $x_1 \in B(x^*, r)$. If one replaces $x_0, y_0, x_1$ by $x_k, y_k, x_{k+1}$ in the preceding computations, the induction for (27.4) and (27.5) is completed. Then, by the estimate
$$\|x_{i+1} - x^*\| \le \lambda\|x_i - x^*\| < r, \tag{27.12}$$
where $\lambda = \psi_2(\|x_0 - x^*\|) \in [0, 1)$, we conclude that $\lim_{i\to\infty} x_i = x^*$ and $x_{i+1} \in B(x^*, r)$.

Next, the uniqueness of the solution result follows.

Proposition 9. Suppose $x^*$ is a simple solution of equation $F(x) = 0$. Then, the only solution of equation $F(x) = 0$ in the set $S = \Omega \cap B[x^*, \bar r]$ is $x^*$, provided that
$$k_0\bar r < 1, \tag{27.13}$$
where $k_0$ is such that $\varphi_0(t) \le k_0 t$ on $M_0$.

Proof. Let $p \in S$ be a solution of equation $F(x) = 0$. Set $M = \int_0^1 F'(x^* + t(p - x^*))\,dt$. Then, using (H1) and (27.13), we have in turn that
$$\|F'(x^*)^{-1}(M - F'(x^*))\| \le k_0\int_0^1 \|x^* + t(p - x^*) - x^*\|\,dt = k_0\int_0^1 t\|x^* - p\|\,dt \le k_0\bar r < 1,$$
so $p = x^*$ follows from the invertibility of $M$ and the identity $0 = F(p) - F(x^*) = M(p - x^*)$.

3. Numerical Experiments

We compute the radius of convergence for two examples in this section.

Example 36. Returning to the motivational example, we have $\varphi_0(t) = \varphi(t) = 96.6629073t$ and $\varphi_1(t) = 2$. Then $r = 0.0052$.

Example 37. Let $X = Y = \mathbb{R}^3$, $D = B[0, 1]$, $x^* = (0, 0, 0)^T$. Define the function $F$ on $D$ for $w = (x, y, z)^T$ by
$$F(w) = \left(e^x - 1,\ \frac{e - 1}{2}y^2 + y,\ z\right)^T.$$
Then, we get
$$F'(w) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e - 1)y + 1 & 0\\ 0 & 0 & 1 \end{pmatrix},$$
so $\varphi_0(t) = (e - 1)t$, $\varphi(t) = e^{\frac{1}{e-1}}t$, $\varphi_1(t) = e^{\frac{1}{e-1}}$ and $r = 0.3114$.

4. Conclusion

A generalized Homeier method for solving equations in Banach space is studied. The Homeier method is of order three, but earlier works require the existence of the fourth derivative, which does not appear in the method. Moreover, no computational error bounds were given there. Our technique can be used to obtain the local convergence of other similar higher-order methods using assumptions only on the first derivative appearing in the method. Computational error bounds are also given. Finally, basins of attraction, using this method, for a couple of test problems are presented.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.

[2] Argyros, I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui, C. K. and Wuytack, L., Elsevier Publ. Co., New York, U.S.A., 2007.

[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).

[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.

[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.

[6] Argyros, I. K., George, S., Argyros, M., On the semi-local convergence of a Traub-type method for solving equations (communicated).

[7] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.

[8] Homeier, H. H. H., A modified Newton method with cubic convergence: the multivariate case, Journal of Computational and Applied Mathematics, 169, (2004), 161-169.

[9] Magreñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.

[10] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, volume 30 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, (2000), reprint of the 1970 original.

[11] Ostrowski, A. M., Solution of Equations and Systems of Equations, Second edition, Pure and Applied Mathematics, Vol. 9, Academic Press, New York-London, (1966).


[12] Petković, M. S., Neta, B., Petković, L. D., Džunić, J., Multipoint methods for solving nonlinear equations: a survey, Appl. Math. Comput., 226, (2014), 635-660.

[13] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).

Chapter 28

A Xiao-Yin Third Order Method for Solving Equations

1. Introduction

Xiao-Yin [27] considered the third order method, defined for $n = 0, 1, 2, \ldots$, by
$$y_n = x_n - F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n - \frac{2}{3}\left((3F'(y_n) - F'(x_n))^{-1} + F'(x_n)^{-1}\right)F(x_n), \tag{28.1}$$
for solving the nonlinear equation
$$F(x) = 0. \tag{28.2}$$
Here $F: \Omega \subset E \to E_1$ is an operator acting between Banach spaces $E$ and $E_1$ with $\Omega \ne \emptyset$. In general, a closed form solution of (28.2) is not possible, so iterative methods are used for approximating a solution $x^*$ of (28.2) (see [1]-[27]). The local convergence of this method in the special case $E = E_1 = \mathbb{R}$ was shown to be of order three using Taylor expansion and assumptions on the fourth-order derivative of $F$, which does not appear in the method [27]. So, the assumptions on the fourth derivative reduce the applicability of these methods [1]-[27]. For example, let $E = E_1 = \mathbb{R}$, $\Omega = [-0.5, 1.5]$. Define $\lambda$ on $\Omega$ by
$$\lambda(t) = \begin{cases} t^3\log t^2 + t^5 - t^4 & \text{if } t \ne 0,\\ 0 & \text{if } t = 0. \end{cases}$$
Then, we get $\lambda(1) = 0$ and
$$\lambda'''(t) = 6\log t^2 + 60t^2 - 24t + 22.$$
Obviously, $\lambda'''(t)$ is not bounded on $\Omega$. So, the convergence of method (28.1) is not guaranteed by the previous analyses in [1]-[27]. In this chapter, we introduce a majorant sequence and use general continuity conditions to extend the applicability of the Xiao-Yin method. Our analysis includes error bounds and results on the uniqueness of $x^*$, based on computable Lipschitz constants, not given before in [1]-[27] and in other similar studies using the Taylor series. Our idea is very general, so it applies to other methods too. The rest of the chapter is set up as follows: Section 2 presents results on majorizing sequences, Section 3 contains the semi-local convergence, Section 4 presents the numerical experiments, and concluding remarks are given in Section 5.
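A minimal scalar sketch of method (28.1), with the operator in the first inverse read as $3F'(y_n) - F'(x_n)$, consistent with the identities used in Section 3; the test problem is an illustrative choice:

```python
def xiao_yin(f, df, x0, tol=1e-12, max_iter=30):
    """Scalar sketch of method (28.1):
        y_n     = x_n - f(x_n)/f'(x_n)
        x_{n+1} = x_n - (2/3)*(1/(3*f'(y_n) - f'(x_n)) + 1/f'(x_n))*f(x_n)
    """
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                        # Newton predictor
        x_new = x - (2.0 / 3.0) * (1.0 / (3.0 * df(y) - dfx) + 1.0 / dfx) * fx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x**2 - 2 = 0 from x0 = 1.5 (illustrative test problem).
root = xiao_yin(lambda t: t * t - 2, lambda t: 2 * t, 1.5)
```

Only the first derivative of $F$ appears in the scheme, which is exactly why an analysis avoiding fourth-derivative assumptions is desirable.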

2. Majorizing Sequences

Let $K_0 > 0$, $K > 0$ and $\eta > 0$ be given parameters. Define scalar sequences $\{t_n\}$, $\{s_n\}$ by
$$t_0 = 0,\quad s_0 = \eta,$$
$$t_{n+1} = s_n + \frac{K(1 + K_0 t_n)(s_n - t_n)^2}{2(1 - K_0 t_n)(1 - q_n)},$$
$$s_{n+1} = t_{n+1} + \frac{K(t_{n+1} - t_n)^2 + 2(1 + K_0 t_n)(t_{n+1} - s_n)}{2(1 - K_0 t_{n+1})}, \tag{28.3}$$
where $q_n = \frac{K_0}{2}(3s_n + t_n)$. These sequences shall be shown to be majorizing for method (28.1) in Section 3. Next, some convergence results are provided for the scalar sequences.

Lemma 16. Assume:
$$K_0 t_n < 1 \quad\text{and}\quad q_n < 1 \tag{28.4}$$
for all $n = 0, 1, 2, \ldots$. Then, the sequence $\{t_n\}$ is nondecreasing and bounded from above by $\frac{1}{K_0}$, and as such it converges to its unique least upper bound $T \in [0, \frac{1}{K_0}]$.

Proof. It follows from (28.4) and the definition of the sequence $\{t_i\}$ that it is nondecreasing and bounded from above by $\frac{1}{K_0}$, so it converges to $T$.

Next, we present conditions that are stronger than (28.4) but easier to verify. Define polynomials $p_1$ and $p_2$ on the interval $[0, 1)$ by
$$p_1(t) = 4K_0 t^2 + 5Kt - 5K$$
and
$$p_2(t) = 2K_0 t^3 + K(1+t)^2 t - K(1+t)^2 + 3K(1+t)t - 3K(1+t).$$
By these definitions one gets $p_1(0) = -5K$, $p_1(1) = 4K_0$, $p_2(0) = -4K$ and $p_2(1) = 2K_0$. So these polynomials have zeros in $(0, 1)$ by the intermediate value theorem. Denote the smallest such zeros by $\alpha_1$ and $\alpha_2$, respectively. Set
$$a = \frac{K\eta}{2(1 - q_0)},\quad b = \frac{Kt_1^2 + 2(t_1 - s_0)}{2\eta(1 - K_0 t_1)},\quad c = \max\{a, b\},$$
$\alpha_3 = \min\{\alpha_1, \alpha_2\}$ and $\alpha = \max\{\alpha_1, \alpha_2\}$. Then, we can show the following result.

Lemma 17. Assume:
$$c \le \alpha_3 \le \alpha \le 1 - 4K_0\eta. \tag{28.5}$$
Then, the sequence $\{t_n\}$ is nondecreasing, bounded from above by $\frac{\eta}{1-\alpha}$, and as such it converges to its unique least upper bound $T \in [\eta, \frac{\eta}{1-\alpha}]$.

Proof. Using induction, we show
$$0 \le \frac{K(1 + K_0 t_i)(s_i - t_i)}{2(1 - K_0 t_i)(1 - q_i)} \le \alpha, \tag{28.6}$$
$$0 \le \frac{K(t_{i+1} - t_i)^2 + 2(1 + K_0 t_i)(t_{i+1} - s_i)}{2(1 - K_0 t_{i+1})} \le \alpha(s_i - t_i) \tag{28.7}$$
and
$$t_i \le s_i \le t_{i+1}. \tag{28.8}$$
These estimates are true by (28.5) for $i = 0$. Then, by (28.3), (28.6) and (28.7), we get that
$$0 \le s_i - t_i \le \alpha^i\eta \quad\text{and}\quad t_{i+1} \le \frac{1 - \alpha^{i+2}}{1-\alpha}\eta \tag{28.9}$$
hold for $i = 0$. Assume these estimates hold for $i = 0, 1, \ldots, n-1$. Then, we can show instead of (28.6) and (28.7) that
$$\frac{5K(s_i - t_i)}{4(1 - K_0 t_i)} \le \alpha \tag{28.10}$$
and
$$\frac{K(1+\alpha)^2(s_i - t_i) + 3K(1+\alpha)(s_i - t_i)}{2(1 - K_0 t_{i+1})} \le \alpha, \tag{28.11}$$
since $1 + K_0 t_i \le \frac{5}{4}$ and $\frac{1}{1 - q_i} \le 2$, respectively. Evidently, (28.10) holds if
$$\frac{5K\alpha^i\eta}{4\left(1 - K_0\frac{1-\alpha^{i+1}}{1-\alpha}\eta\right)} \le \alpha. \tag{28.12}$$
Estimate (28.12) motivates us to introduce recurrent functions on the interval $[0, 1)$ by
$$f_i^{(1)}(t) = 5Kt^{i-1}\eta + 4K_0(1 + t + \ldots + t^i)\eta - 4. \tag{28.13}$$
Then, we can show instead of (28.12) that
$$f_i^{(1)}(t) \le 0 \text{ at } t = \alpha_1. \tag{28.14}$$
A relationship between two consecutive functions $f_i^{(1)}$ is needed:
$$f_{i+1}^{(1)}(t) = 5Kt^i\eta + 4K_0(1 + t + \ldots + t^{i+1})\eta - 4 + f_i^{(1)}(t) - 5Kt^{i-1}\eta - 4K_0(1 + t + \ldots + t^i)\eta + 4 = f_i^{(1)}(t) + p_1(t)t^{i-1}\eta. \tag{28.15}$$


In particular, $p_1(\alpha_1) = 0$ by the definition of $\alpha_1$ and $p_1$, so $f_{i+1}^{(1)}(\alpha_1) = f_i^{(1)}(\alpha_1)$. Define the function
$$f_\infty^{(1)}(t) = \lim_{i\to\infty} f_i^{(1)}(t). \tag{28.16}$$
Then, we can show instead of (28.14) that
$$f_\infty^{(1)}(\alpha_1) \le 0. \tag{28.17}$$
By (28.13) and (28.16),
$$f_\infty^{(1)}(t) = \frac{4K_0\eta}{1-t} - 4, \tag{28.18}$$
so (28.17) holds by (28.5). Similarly, (28.11) holds if
$$\frac{K(1+\alpha)^2\alpha^i\eta + 3K(1+\alpha)\alpha^i\eta}{2\left(1 - K_0\frac{1-\alpha^{i+2}}{1-\alpha}\eta\right)} \le \alpha. \tag{28.19}$$
Define recurrent functions on the interval $[0, 1)$ by
$$f_i^{(2)}(t) = K(1+t)^2 t^{i-1}\eta + 3K(1+t)t^{i-1}\eta + 2K_0(1 + t + \ldots + t^{i+1})\eta - 2. \tag{28.20}$$
Then, we can show instead of (28.19) that
$$f_i^{(2)}(t) \le 0 \text{ at } t = \alpha_2. \tag{28.21}$$
This time the relationship between two consecutive functions is:
$$f_{i+1}^{(2)}(t) = K(1+t)^2 t^i\eta + 3K(1+t)t^i\eta + 2K_0(1 + t + \ldots + t^{i+2})\eta - 2 + f_i^{(2)}(t) - K(1+t)^2 t^{i-1}\eta - 3K(1+t)t^{i-1}\eta - 2K_0(1 + t + \ldots + t^{i+1})\eta + 2 = f_i^{(2)}(t) + p_2(t)t^{i-1}\eta. \tag{28.22}$$
In particular,
$$f_{i+1}^{(2)}(\alpha_2) = f_i^{(2)}(\alpha_2). \tag{28.23}$$
Define the function
$$f_\infty^{(2)}(t) = \lim_{i\to\infty} f_i^{(2)}(t). \tag{28.24}$$
Using (28.20) and (28.24), we get
$$f_\infty^{(2)}(t) = \frac{2K_0\eta}{1-t} - 2. \tag{28.25}$$
Hence, we can show instead of (28.21) that
$$f_\infty^{(2)}(\alpha_2) \le 0, \tag{28.26}$$
which is true by (28.5). The induction is completed. It follows that the sequence $\{t_k\}$ is nondecreasing and bounded from above by $\frac{\eta}{1-\alpha}$, so it converges to $T$.

3. Semi-Local Convergence

The hypotheses (H) are used. Suppose:

(H1) There exist $x_0 \in \Omega$ and $\eta > 0$ such that $F'(x_0)^{-1}$ exists and $\|F'(x_0)^{-1}F(x_0)\| \le \eta$.

(H2) $\|F'(x_0)^{-1}(F'(v) - F'(x_0))\| \le K_0\|v - x_0\|$ for all $v \in \Omega$. Set $\Omega_0 = \Omega \cap U(x_0, \frac{1}{K_0})$.

(H3) $\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le K\|w - v\|$ for all $v, w \in \Omega_0$.

(H4) The hypotheses of Lemma 16 or Lemma 17 hold.

(H5) $U[x_0, T] \subset \Omega$ (or $U[x_0, \frac{\eta}{1-\alpha}] \subseteq \Omega$).

Then, we can show the semi-local convergence of the method (28.1).

Theorem 51. Suppose that the hypotheses (H) hold. Then, the following assertions hold: $\{x_n\} \subset U(x_0, T)$ and there exists $x^* \in U[x_0, T]$ such that
$$\|x_{k+1} - y_k\| \le t_{k+1} - s_k \tag{28.27}$$
and
$$\|y_k - x_k\| \le s_k - t_k. \tag{28.28}$$

Proof. We have by (H1) that
$$\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \le \eta = s_0 - t_0 \le T, \tag{28.29}$$
so $y_0 \in U(x_0, T)$ and (28.28) holds for $k = 0$. Let $w \in U(x_0, T)$. Then, using (H2), one gets
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le K_0\|w - x_0\| \le K_0 T < 1,$$
so $F'(w)$ is invertible and
$$\|F'(w)^{-1}F'(x_0)\| \le \frac{1}{1 - K_0\|w - x_0\|} \tag{28.30}$$
by a lemma attributed to Banach [13] on linear invertible operators. We can write in turn by (28.1) that

$$x_{k+1} = x_k - F'(x_k)^{-1}F(x_k) + \left(F'(x_k)^{-1} - \frac{2}{3}(3F'(y_k) - F'(x_k))^{-1} - \frac{2}{3}F'(x_k)^{-1}\right)F(x_k)$$
$$= y_k + \frac{1}{3}F'(x_k)^{-1}\left((3F'(y_k) - F'(x_k)) - 2F'(x_k)\right)(3F'(y_k) - F'(x_k))^{-1}F(x_k)$$
$$= y_k + F'(x_k)^{-1}(F'(y_k) - F'(x_k))(3F'(y_k) - F'(x_k))^{-1}F(x_k). \tag{28.31}$$
Hence, by (H2), (H3) and (28.3),
$$\|x_{k+1} - y_k\| \le \frac{K\|y_k - x_k\|(1 + K_0\|x_k - x_0\|)\|y_k - x_k\|}{2(1 - K_0\|x_k - x_0\|)(1 - q_k)}, \tag{28.32}$$
where we also used
$$\|(3F'(y_k) - F'(x_k))^{-1}F'(x_0)\| \le \frac{1}{2(1 - q_k)}, \tag{28.33}$$
since
$$\|(2F'(x_0))^{-1}(3F'(y_k) - F'(x_k) - 2F'(x_0))\| \le \frac{K_0}{2}\left(3\|y_k - x_0\| + \|x_k - x_0\|\right) \le \frac{K_0}{2}(3s_k + t_k) = q_k < 1.$$
By method (28.1) we get in turn that
$$F(x_{k+1}) = F(x_{k+1}) - F(x_k) - \frac{1}{2}F'(x_k)F'(y_k)^{-1}(3F'(y_k) - F'(x_k))(x_{k+1} - x_k)$$
$$= \int_0^1 \left(F'(x_k + \theta(x_{k+1} - x_k)) - F'(x_k)\right)d\theta\,(x_{k+1} - x_k) - \frac{1}{2}F'(x_k)F'(y_k)^{-1}(F'(y_k) - F'(x_k))(x_{k+1} - x_k), \tag{28.34}$$

so
$$\|F'(x_0)^{-1}F(x_{k+1})\| \le \frac{K}{2}\|x_{k+1} - x_k\|^2 + \frac{K}{2}(1 + K_0\|x_k - x_0\|)\frac{\|y_k - x_k\|}{1 - K_0\|y_k - x_0\|}\|x_{k+1} - x_k\|$$
$$\le \frac{K}{2}\left((t_{k+1} - t_k)^2 + 2(1 + K_0 t_k)(s_k - t_k)(t_{k+1} - t_k)\right)$$
$$\le \frac{K}{2}\left((1+\alpha)^2(s_k - t_k)^2 + 3(1+\alpha)(s_k - t_k)^2\right), \tag{28.35}$$
where we used $t_{k+1} - t_k = (t_{k+1} - s_k) + (s_k - t_k) \le (1+\alpha)(s_k - t_k)$, (28.30) (for $w = y_k$), (H3), (28.34), $1 + K_0 t_k < 1 + \frac{1}{2} = \frac{3}{2}$, and the induction hypotheses. Then, by (28.1), (28.3), (28.30) (for $w = x_{k+1}$) and (28.35), we obtain in turn
$$\|y_{k+1} - x_{k+1}\| \le \|F'(x_{k+1})^{-1}F'(x_0)\|\,\|F'(x_0)^{-1}F(x_{k+1})\| \le \frac{K(1+\alpha)^2(s_k - t_k) + 3K(1+\alpha)(s_k - t_k)}{2(1 - K_0 t_{k+1})} = s_{k+1} - t_{k+1}. \tag{28.36}$$
Note also that
$$\|x_{k+1} - x_0\| \le \|x_{k+1} - y_k\| + \|y_k - x_k\| + \|x_k - x_0\| \le t_{k+1} - s_k + s_k - t_k + t_k - t_0 = t_{k+1} \le T$$
and
$$\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \le s_{k+1} - t_{k+1} + t_{k+1} - t_0 = s_{k+1} \le T,$$
so $x_{k+1}, y_{k+1} \in U(x_0, T)$. The induction is terminated. It follows that the sequence $\{x_n\}$ is fundamental (Cauchy) in the Banach space $B$, and as such $\lim_{n\to\infty} x_n = x^* \in U[x_0, T]$. By letting $k \to \infty$ in (28.35) we obtain $F(x^*) = 0$. Finally, from $\|x_{k+m} - x_k\| \le t_{k+m} - t_k$, we get $\|x^* - x_k\| \le T - t_k$ by letting $m \to \infty$.

The uniqueness of the solution result is presented.

Proposition 10. Assume:

(1) $x^*$ is a simple solution of (28.2).

(2) There exists $\tau \ge T$ so that
$$K_0(T + \tau) < 2. \tag{28.37}$$
Set $\Omega_1 = \Omega \cap U[x^*, \tau]$. Then, $x^*$ is the unique solution of equation (28.2) in the domain $\Omega_1$.

Proof. Let $q \in \Omega_1$ with $F(q) = 0$. Define $G = \int_0^1 F'(q + \theta(x^* - q))\,d\theta$. Using (H2) and (28.37) one obtains
$$\|F'(x_0)^{-1}(G - F'(x_0))\| \le K_0\int_0^1 \left((1-\theta)\|q - x_0\| + \theta\|x^* - x_0\|\right)d\theta \le \frac{K_0}{2}(T + \tau) < 1,$$
so $q = x^*$ follows from the invertibility of $G$ and the identity $G(q - x^*) = F(q) - F(x^*) = 0 - 0 = 0$.

246

4.

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

Numerical Experiments

We provide some examples in this section.

Example 38. Define the function $h(t) = \xi_0 t + \xi_1 + \xi_2 \sin \xi_3 t$, $x_0 = 0$, where $\xi_j$, $j = 0, 1, 2, 3$, are parameters. Choose $\psi_0(t) = K_0 t$ and $\psi(t) = L t$. Then, clearly, for $\xi_3$ large and $\xi_2$ small, the ratio $\frac{K_0}{L}$ can be arbitrarily small. Notice that $\frac{K_0}{L} \longrightarrow 0$.

Example 39. Let $B = B_1 = C[0, 1]$ and $\Omega = U[0, 1]$. It is well known that the boundary value problem [11]
$$\varsigma(0) = 0, \quad \varsigma(1) = 1, \quad \varsigma'' = -\varsigma^3 - \sigma \varsigma^2$$
can be written as a Hammerstein-type nonlinear integral equation
$$\varsigma(s) = s + \int_0^1 Q(s, t)(\varsigma^3(t) + \sigma \varsigma^2(t))\,dt,$$
where $\sigma$ is a parameter and $Q(s, t)$ is the Green's function. Then, define $F : \Omega \longrightarrow B_1$ by
$$[F(x)](s) = x(s) - s - \int_0^1 Q(s, t)(x^3(t) + \sigma x^2(t))\,dt.$$
Choose $\varsigma_0(s) = s$ and $\Omega = U(\varsigma_0, \rho_0)$. Then, clearly, $U(\varsigma_0, \rho_0) \subset U(0, \rho_0 + 1)$, since $\|\varsigma_0\| = 1$. Suppose $2\sigma < 5$. Then, conditions (A) are satisfied for
$$K_0 = \frac{\sigma + 6\rho_0 + 3}{8}, \quad L = \frac{2\sigma + 3\rho_0 + 6}{4}, \quad \eta = \frac{1 + \sigma}{5 - 2\sigma}.$$
Notice that $K_0 < L$.
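The constants of Example 39 are easy to tabulate. A minimal sketch (the sample values of $\sigma$ and $\rho_0$ below are illustrative choices, not values fixed by the text):

```python
# Sketch: evaluate the constants K0, L, eta of Example 39 for sample
# parameter values sigma and rho0 (both illustrative choices).
def constants(sigma, rho0):
    assert 2 * sigma < 5, "the example requires 2*sigma < 5"
    K0 = (sigma + 6 * rho0 + 3) / 8
    L = (2 * sigma + 3 * rho0 + 6) / 4
    eta = (1 + sigma) / (5 - 2 * sigma)
    return K0, L, eta

K0, L, eta = constants(sigma=0.5, rho0=1.0)
print(K0, L, eta)  # 1.1875 2.5 0.375
assert K0 < L      # the center-Lipschitz constant is the smaller one
```

Since $L - K_0 = \frac{3\sigma + 9}{8} > 0$, the inequality $K_0 < L$ holds for every admissible $\sigma$ and $\rho_0$, which is what makes the two-constant analysis an improvement over using $L$ alone.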

5. Conclusion

The semi-local convergence analysis of the Xiao-Yin third order method for solving nonlinear equations in Banach space has been given under generalized Lipschitz continuity conditions on the first Fréchet derivative of the operator involved, thereby extending the applicability of the method. Recurrent majorizing sequences are used for proving our results. Numerical experiments testing the convergence criteria are given in this chapter.

A Xiao-Yin Third Order Method for Solving Equations

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magreñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magreñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[8] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51(2), (2020), 439-455.
[9] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[10] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[11] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, (2018).
[12] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 218, (2011), 2377-2385.
[13] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[14] Magreñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[15] Magreñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[16] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[17] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[18] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[19] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[20] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[21] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[22] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.
[23] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear least squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[24] Sharma, J. R., Guha, R. K., Sharma, R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algorithms, 62, (2013), 307-323.
[25] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey, U.S.A., (1964).
[26] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).
[27] Xiao, X., Yin, H., Achieving higher order of convergence for solving systems of nonlinear equations, Appl. Math. Comput., 311, (2017), 251-261.

Chapter 29

Fifth Order Scheme

1. Introduction

Cordero et al. [8] considered the iterative method
$$y_k = x_k - \frac{2}{3} F'(x_k)^{-1} F(x_k),$$
$$z_k = x_k - \frac{1}{2} A_k^{-1} (3F'(y_k) + F'(x_k)) F'(x_k)^{-1} F(x_k), \qquad (29.1)$$
$$x_{k+1} = z_k - 2(F'(y_k) + F'(x_k))^{-1} F(z_k),$$
where $A_k = 3F'(y_k) - F'(x_k)$, for solving the nonlinear equation
$$F(x) = 0. \qquad (29.2)$$
Here $F : \Omega \subset X \longrightarrow Y$ is an operator acting between Banach spaces $X$ and $Y$ with $\Omega \neq \emptyset$. Throughout the chapter, $U(x_0, \rho) = \{x \in X : \|x - x_0\| < \rho\}$ and $U[x_0, \rho] = \{x \in X : \|x - x_0\| \le \rho\}$ for some $\rho > 0$. Our convergence analysis is not based on Taylor expansion (unlike earlier studies [1]-[13]), so we do not need assumptions on the higher-order derivatives of the operator involved. For example, let $X = Y = \mathbb{R}$, $\Omega = [-\frac{1}{2}, \frac{3}{2}]$, and define $f$ on $\Omega$ by
$$f(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4 & \text{if } t \neq 0, \\ 0 & \text{if } t = 0. \end{cases}$$
Then we have $f(1) = 0$ and $f'''(t) = 6 \log t^2 + 60t^2 - 24t + 22$. Obviously, $f'''(t)$ is not bounded on $\Omega$. So, the convergence of the Cordero method is not guaranteed by the analysis in [8]. The chapter contains the convergence analysis in Section 2, and numerical examples are given in Section 3.
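In the scalar case every operator inverse in (29.1) reduces to a division, so the scheme is easy to try out. A minimal sketch (the test equation $F(x) = x^2 - 2$ and the starting point are illustrative choices, not from the text):

```python
# Sketch of one step of the Cordero et al. scheme (29.1) for a scalar
# equation F(x) = 0; in one dimension every inverse is a division.
def cordero_step(F, dF, x):
    Fx, dFx = F(x), dF(x)
    y = x - (2.0 / 3.0) * Fx / dFx                    # first sub-step
    dFy = dF(y)
    A = 3.0 * dFy - dFx                               # A_k = 3F'(y_k) - F'(x_k)
    z = x - 0.5 * (3.0 * dFy + dFx) * Fx / (A * dFx)  # second sub-step
    return z - 2.0 * F(z) / (dFy + dFx)               # third sub-step

# Illustrative test problem: F(x) = x^2 - 2, root sqrt(2).
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
x = 1.5
for _ in range(3):
    x = cordero_step(F, dF, x)
print(x)  # ~1.4142135623730951
```

Note that each step uses two derivative evaluations ($F'(x_k)$ and $F'(y_k)$) and three residual evaluations, which is the source of the scheme's efficiency relative to its order.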

2. Scalar Sequences

Let $\ell_0$, $\ell$ and $\eta$ be positive parameters. Define the scalar sequence $\{t_n\}$ by
$$t_0 = 0, \quad s_0 = \frac{2}{3}\eta,$$
$$u_n = s_n + \frac{(3\ell(s_n - t_n) + 4(1 + \ell_0 t_n))(s_n - t_n)}{8(1 - p_n)(1 - \ell_0 t_n)},$$
$$t_{n+1} = u_n + \frac{\ell(u_n - t_n)^2 + 2(1 + \ell_0 t_n)(u_n - s_n) + (1 + \ell_0 t_n)(s_n - t_n)}{2(1 - q_n)}, \qquad (29.3)$$
$$s_{n+1} = t_{n+1} + \frac{\ell(u_n - s_n + t_{n+1} - u_n) + (u_n - t_n)(t_{n+1} - u_n)}{2(1 - \ell_0 t_{n+1})},$$
where $p_n = \frac{\ell_0}{2}(3s_n + t_n)$ and $q_n = \frac{\ell_0}{2}(s_n + t_n)$. Sequence (29.3) shall be shown to be majorizing for scheme (29.1) in Section 3. Next, we present a convergence result for sequence (29.3).

Lemma 18. Assume: $p_n < 1$ and $t_{n+1} < \frac{1}{\ell_0}$
for some $\rho > 0$. Our convergence analysis is not based on Taylor expansion (unlike earlier studies [8]), so we do not need assumptions on the higher-order derivatives of the operator involved. For example, let $X = Y = \mathbb{R}$, $D = [-\frac{1}{2}, \frac{3}{2}]$, and define $f$ on $D$ by
$$f(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4 & \text{if } t \neq 0, \\ 0 & \text{if } t = 0. \end{cases}$$
Then we have $f(1) = 0$ and $f'''(t) = 6 \log t^2 + 60t^2 - 24t + 22$. Obviously, $f'''(t)$ is not bounded on $D$. So, the convergence of the method under consideration is not guaranteed by the analysis in [8]. This technique can be used on other methods and relevant topics along the same lines [1]-[7]. The chapter contains the majorizing sequences in Section 2, the semi-local convergence analysis in Section 3, and numerical examples are given in Section 4.

2. Majorizing Sequences

Let $\ell_0$, $\ell$, $\eta_0$ and $\eta$ be positive parameters. Define the scalar sequences $\{t_n\}$, $\{s_n\}$ by $t_0 = 0$, $s_0 = \eta_0$, $t_1 = \eta$,
$$t_{n+1} = s_n + \frac{\ell(t_n - t_{n-1} + s_n - s_{n-1})\left(1 + \ell_0 \frac{t_{n-1} + s_{n-1}}{2}\right)(s_n - t_n)}{2(1 - p_n)(1 - p_{n-1})}, \qquad (30.3)$$
$$s_{n+1} = t_{n+1} + \frac{\ell(t_{n+1} - t_n)^2 + \ell(t_n - t_{n-1} + t_n - s_{n-1})(s_n - t_n) + 2(1 + \ell_0 t_n)(t_{n+1} - s_n)}{2(1 - p_{n+1})},$$
where $p_n = \frac{\ell_0}{2}(t_{n-1} + s_{n-1})$. These sequences are shown to be majorizing for method (30.1) in Section 3. Next, we provide a convergence result for the sequences (30.3).

Lemma 19. Assume:
$$0 \le p_n < 1. \qquad (30.4)$$
Then, the sequence $\{t_n\}$ is nondecreasing and bounded from above by $c = \frac{1}{\ell_0}$, so it converges to its unique least upper bound $b \in [0, c]$.

Proof. By the definition of the sequence and (30.4), it follows that it is nondecreasing and bounded from above by $c$, so $\lim_{n \to \infty} t_n = b$.

3. Semi-Local Convergence

The conditions (H) shall be used to show the convergence of method (30.1). Assume:

(H1) There exist $x_0, y_0 \in D$ and $\eta_0, \eta > 0$ such that the linear operators $F'(x_0)$ and $F'(\frac{x_0 + y_0}{2})$ are invertible,
$$\|F'(x_0)^{-1} F(x_0)\| \le \eta_0 \quad \text{and} \quad \left\|F'\!\left(\frac{x_0 + y_0}{2}\right)^{-1} F(x_0)\right\| \le \eta.$$

(H2) $\|F'(x_0)^{-1}(F'(v) - F'(x_0))\| \le \ell_0 \|v - x_0\|$ for all $v \in D$. Set $D_0 = D \cap U(x_0, c)$.

(H3) $\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le \ell \|w - v\|$ for all $v, w \in D_0$.

(H4) The conditions of Lemma 19 hold, and

(H5) $U[x_0, b] \subset D$.

Werner Method

Next, we present the semi-local convergence result for method (30.1).

Theorem 53. Assume conditions (H) hold. Then, the following assertions hold:
$$\{x_n\} \subset U(x_0, b) \qquad (30.5)$$
and
$$\|x^* - x_n\| \le b - t_n, \qquad (30.6)$$
where $x^* \in U[x_0, b]$ and $F(x^*) = 0$.

Proof. Mathematical induction is used to show
$$\|x_{i+1} - y_i\| \le t_{i+1} - s_i \qquad (30.7)$$
and
$$\|y_i - x_i\| \le s_i - t_i. \qquad (30.8)$$
These estimates are true for $i = 0$ by the definition of sequence $\{t_i\}$ and (H1). Assume these estimates hold for all integers $i$ smaller than or equal to $n - 1$. Let $z \in U(x_0, b)$. It then follows from (H2) that
$$\|F'(x_0)^{-1}(F'(z) - F'(x_0))\| \le \ell_0 \|z - x_0\| \le \ell_0 b < 1, \qquad (30.9)$$
hence $F'(z)^{-1} \in L(Y, X)$ and
$$\|F'(z)^{-1} F'(x_0)\| \le \frac{1}{1 - \ell_0 \|z - x_0\|}, \qquad (30.10)$$
by the Banach lemma on invertible linear operators [1]. We can write
$$x_{i+1} = x_i - F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right)^{-1} F(x_i) + \left[F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right)^{-1} - F'\!\left(\frac{x_i + y_i}{2}\right)^{-1}\right] F(x_i)$$
$$= y_i + F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right)^{-1}\left[F'\!\left(\frac{x_i + y_i}{2}\right) - F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right)\right] F'\!\left(\frac{x_i + y_i}{2}\right)^{-1} F(x_i). \qquad (30.11)$$
In view of (H3), (30.10) (for $z = \frac{x_{i-1} + y_{i-1}}{2}$ and $z = \frac{x_i + y_i}{2}$), (30.3) and (30.11), we obtain
$$\|x_{i+1} - y_i\| \le \frac{\ell(t_i + s_i - t_{i-1} - s_{i-1})\left(1 + \ell_0 \frac{t_{i-1} + s_{i-1}}{2}\right)(s_i - t_i)}{2(1 - p_i)(1 - p_{i-1})} = t_{i+1} - s_i, \qquad (30.12)$$
showing (30.7), where we also used $F(x_i) = F'(\frac{x_{i-1} + y_{i-1}}{2})(y_i - x_i)$ and
$$\left\|F'(x_0)^{-1}\left(F'(x_0) + F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right) - F'(x_0)\right)\right\| \le 1 + \ell_0 \left\|\frac{x_{i-1} + y_{i-1}}{2} - x_0\right\| \le 1 + \ell_0 \frac{t_{i-1} + s_{i-1}}{2}.$$

We can write, by the definition of method (30.1),
$$F(x_{i+1}) = F(x_{i+1}) - F(x_i) - F'(x_i)(x_{i+1} - x_i) + F'(x_i)(x_{i+1} - x_i) + F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right)(y_i - x_i) + F'(x_i)(y_i - x_i) - F'(x_i)(y_i - x_i)$$
$$= \int_0^1 \left(F'(x_i + \theta(x_{i+1} - x_i)) - F'(x_i)\right) d\theta\,(x_{i+1} - x_i) + \left(F'(x_i) - F'\!\left(\frac{x_{i-1} + y_{i-1}}{2}\right)\right)(y_i - x_i) + F'(x_i)(x_{i+1} - y_i), \qquad (30.13)$$
so
$$\|F'(x_0)^{-1} F(x_{i+1})\| \le \frac{\ell}{2}\|x_{i+1} - x_i\|^2 + \ell\left\|\frac{x_{i-1} + y_{i-1}}{2} - x_i\right\|\|y_i - x_i\| + (1 + \ell_0 \|x_i - x_0\|)\|x_{i+1} - y_i\|$$
$$\le \frac{1}{2}\left[\ell(t_{i+1} - t_i)^2 + \ell(t_i - t_{i-1} + t_i - s_{i-1})(s_i - t_i) + 2(1 + \ell_0 t_i)(t_{i+1} - s_i)\right], \qquad (30.14)$$
and consequently
$$\|y_{i+1} - x_{i+1}\| \le \left\|F'\!\left(\frac{x_i + y_i}{2}\right)^{-1} F'(x_0)\right\|\|F'(x_0)^{-1} F(x_{i+1})\| \le s_{i+1} - t_{i+1}, \qquad (30.15)$$
showing (30.8). Notice that we also used
$$\|x_{i+1} - x_0\| \le \|x_{i+1} - y_i\| + \|y_i - x_0\| \le t_{i+1} - s_i + s_i - t_0 = t_{i+1} < b$$
and
$$\|y_{i+1} - x_0\| \le \|y_{i+1} - x_{i+1}\| + \|x_{i+1} - x_0\| \le s_{i+1} - t_{i+1} + t_{i+1} - t_0 = s_{i+1} < b,$$
so $\{x_i\}, \{y_i\} \subset U(x_0, b)$. It follows that the sequence $\{x_i\}$ is fundamental in a Banach space, so it converges to some $x^* \in U[x_0, b]$. By letting $i \to \infty$ in (30.14) and using the continuity of $F$, we get $F(x^*) = 0$. The uniqueness of $x^*$ is given next.

Proposition 12. Assume:
(1) $x^* \in U[x_0, b]$ solves (30.2).
(2) There exists $\gamma \ge b$ such that
$$\frac{\ell_0}{2}(\gamma + b) < 1. \qquad (30.16)$$
Set $D_1 = D \cap U[x^*, \gamma]$. Then, $x^*$ is the only solution of equation (30.2) in the domain $D_1$.

Proof. Consider $\tilde{x} \in D_1$ with $F(\tilde{x}) = 0$. Define $M = \int_0^1 F'(\tilde{x} + \theta(x^* - \tilde{x}))\,d\theta$. Using (H2) and (30.16),
$$\|F'(x_0)^{-1}(M - F'(x_0))\| \le \ell_0 \int_0^1 ((1 - \theta)\|\tilde{x} - x_0\| + \theta\|x^* - x_0\|)\,d\theta \le \frac{\ell_0}{2}(\gamma + b) < 1,$$
so $\tilde{x} = x^*$, by the invertibility of $M$ and $M(\tilde{x} - x^*) = F(\tilde{x}) - F(x^*) = 0 - 0 = 0$.

4. Numerical Experiments

Example 42. Let $X = Y = \mathbb{R}$, $x_0 = 1$, $D = [x_0 - (1 - p), x_0 + (1 - p)]$, $p \in (0, \frac{3 - \sqrt{1.5}}{2})$, and $F : D \longrightarrow Y$ be defined by $F(x) = x^3 - p$. Then
$$\|F'(x_0)^{-1}\| \le \frac{1}{3}, \quad \|F'(y) - F'(x)\| \le 6(2 - p)\|y - x\|,$$
and hence, for all $x \in D$, we have
$$\|F'(x)^{-1}\| \le \frac{\|F'(x_0)^{-1}\|}{1 - \|F'(x_0)^{-1}\|\,\|F'(x) - F'(x_0)\|} \le \frac{1}{3(1 - 4(2 - p)(1 - p))}.$$
Hence,
$$\|F'(x)^{-1}(F'(y) - F'(x))\| \le \|F'(x)^{-1}\| L \|y - x\| \le \frac{2(2 - p)}{1 - 4(2 - p)(1 - p)}\|y - x\|.$$
Then, we have $\ell_0 = \frac{2(2 - p)}{1 - 4(2 - p)(1 - p)}$, $\eta_0 = \frac{1 - p}{3}$, and for $p = 0.99$, $p_0 = 0.0035$.
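Method (30.1) itself is stated earlier in the chapter; the iteration the proof of Theorem 53 works with is the classical King-Werner midpoint form, in which one derivative evaluation at $\frac{x_n + y_n}{2}$ is reused for both updates. A scalar sketch on the function of Example 42 (the starting points and iteration count are illustrative choices):

```python
# Sketch of a King-Werner-type iteration of order 1 + sqrt(2): the
# derivative is evaluated once per step, at the midpoint (x + y)/2,
# and reused for both the corrector x and the predictor y.
def king_werner(F, dF, x0, y0, iters):
    x, y = x0, y0
    for _ in range(iters):
        d = dF(0.5 * (x + y))     # one derivative evaluation per step
        x_new = x - F(x) / d
        y = x_new - F(x_new) / d  # predictor for the next midpoint
        x = x_new
    return x

# Test problem from Example 42: F(x) = x^3 - p with p = 0.99.
p = 0.99
F = lambda x: x ** 3 - p
dF = lambda x: 3.0 * x ** 2
root = king_werner(F, dF, x0=1.0, y0=1.0, iters=4)
print(root)  # ~0.9966555, the real cube root of 0.99
```

With one derivative and two residual evaluations per step, the efficiency index of this iteration exceeds that of Newton's method, which is the practical motivation for the chapter.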

5. Conclusion

In this chapter, we are interested in the semi-local convergence of the Werner method of order $1 + \sqrt{2}$. The convergence of this method was previously shown by assuming the existence of the fourth derivative of the operator, which does not appear in the method, thereby limiting its applicability. Moreover, no computational error bounds or uniqueness-of-solution results were given. We address all these problems using only the first derivative, which does appear in the method. Hence, we extend the applicability of the method. Our technique can be used to obtain the convergence of other similar higher-order methods using assumptions on the first derivative of the operator involved.

References

[1] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, Elsevier Science, San Diego, CA, USA, 2007.
[2] Argyros, I. K., Ren, H., On the convergence of efficient King-Werner-type methods of order $1 + \sqrt{2}$, Journal of Computational and Applied Mathematics, 285, (2015), 169-180.
[3] Cárdenas, E., Castro, R., Sierra, W., A Newton-type midpoint method with high efficiency index, Journal of Mathematical Analysis and Applications, 491(2), (2020), 124381.
[4] Ezquerro Fernández, J. A., Hernández-Verón, M. Á., The classic theory of Kantorovich, pages 1-38, Springer International Publishing, Cham, 2017.
[5] Werner, R. F., Tangent methods for nonlinear equations, Numer. Math., 18(4), (1971), 298-304.
[6] Ren, H., On the local convergence of a deformed Newton's method under Argyros-type condition, Journal of Mathematical Analysis and Applications, 321(1), (2006), 396-404.
[7] Werner, W., Über ein Verfahren der Ordnung $1 + \sqrt{2}$ zur Nullstellenbestimmung, Numer. Math., 32(3), (1979), 333-342.
[8] Werner, W., Some supplementary results on the $1 + \sqrt{2}$ order method for the solution of nonlinear equations, Numer. Math., 38(3), (1981/82), 383-392.

Chapter 31

Yadav-Singh Method of Order Five

1. Introduction

We are interested in iterative methods for solving the nonlinear equation
$$F(x) = 0, \qquad (31.1)$$
where $F : D \subset B_1 \longrightarrow B_2$ is an operator acting between Banach spaces $B_1$ and $B_2$ with $D \neq \emptyset$. Throughout the chapter, $U(x_0, \rho) = \{x \in B_1 : \|x - x_0\| < \rho\}$ and $U[x_0, \rho] = \{x \in B_1 : \|x - x_0\| \le \rho\}$ for some $\rho > 0$. In [29], Yadav and Singh considered the iterative method
$$y_n = x_n - F'(x_n)^{-1} F(x_n),$$
$$z_n = y_n - 5 F'(x_n)^{-1} F(y_n), \qquad (31.2)$$
$$x_{n+1} = z_n - \frac{1}{5} F'(x_n)^{-1} (F(y_n) - 16 F(x_n)).$$
In this chapter, we study the convergence of method (31.2) using assumptions only on the first derivative of $F$, unlike earlier studies [8, 29], where the convergence analysis required assumptions on the derivatives of $F$ up to order six. This technique can be used on other methods and relevant topics along the same lines [1]-[28]. For example, let $X = Y = \mathbb{R}$, $D = [-\frac{1}{2}, \frac{3}{2}]$, and define $f$ on $D$ by
$$f(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4 & \text{if } t \neq 0, \\ 0 & \text{if } t = 0. \end{cases}$$
Then we have $f(1) = 0$ and $f'''(t) = 6 \log t^2 + 60t^2 - 24t + 22$. Obviously, $f'''(t)$ is not bounded on $D$. So, the convergence of method (31.2) is not guaranteed by the analysis in [28, 29]. The chapter contains the semi-local convergence analysis in Section 2, the local convergence analysis in Section 3, and numerical examples are given in Section 4.

2. Semi-Local Convergence

We introduce some sequences that play a role in the semi-local convergence of method (31.2). Let $K_0$, $K$ and $\eta$ be positive parameters. Define the sequence $\{a_n\}$ by $a_0 = 0$, $b_0 = \eta$,
$$c_n = b_n + \frac{5K(b_n - a_n)^2}{2(1 - K_0 a_n)},$$
$$a_{n+1} = c_n + \frac{(32 + K(b_n - a_n))(b_n - a_n)}{10(1 - K_0 a_n)}, \qquad (31.3)$$
$$b_{n+1} = a_{n+1} + \frac{K(a_{n+1} - a_n)^2 + 2(1 + K_0 a_n)(a_{n+1} - b_n)}{2(1 - K_0 a_{n+1})}.$$
Next, we present a convergence result for sequence $\{a_n\}$.

Lemma 20. Assume $a_n < \frac{1}{K_0}$ for each $n = 0, 1, 2, \ldots$. Then, the sequence $\{a_n\}$ is nondecreasing, bounded from above by $\frac{1}{K_0}$, and converges to its unique least upper bound $a^* \in [0, \frac{1}{K_0}]$.

The conditions (H) shall be used in the semi-local convergence analysis. Assume:

(H1) There exist $x_0 \in D$ and $\eta > 0$ such that $F'(x_0)^{-1} \in L(B_2, B_1)$ and $\|F'(x_0)^{-1} F(x_0)\| \le \eta$.

(H2) There exists $K_0 > 0$ such that $\|F'(x_0)^{-1}(F'(v) - F'(x_0))\| \le K_0 \|v - x_0\|$ for all $v \in D$. Set $D_0 = D \cap U(x_0, \frac{1}{K_0})$.

(H3) There exists $K > 0$ such that $\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le K \|w - v\|$ for all $v, w \in D_0$.

(H4) The conditions of Lemma 20 hold, and

(H5) $U[x_0, a^*] \subset D$.


Next, we present the semi-local convergence analysis of method (31.2) using conditions (H).

Theorem 54. Assume conditions (H) hold. Then, the sequence $\{x_n\}$ generated by method (31.2) is well defined in $U(x_0, a^*)$, remains in $U(x_0, a^*)$ for all $n = 0, 1, 2, \ldots$, and converges to a solution $x^*$ of $F(x) = 0$. Moreover, the following assertions hold:
$$\|x^* - x_n\| \le a^* - a_n.$$

Proof. Mathematical induction is used to show
$$\|y_m - x_m\| \le b_m - a_m, \qquad (31.5)$$
$$\|z_m - y_m\| \le c_m - b_m \qquad (31.6)$$
and
$$\|x_{m+1} - z_m\| \le a_{m+1} - c_m. \qquad (31.7)$$
Let $w \in U(x_0, a^*)$. Using (H1) and (H2), one gets
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \le K_0 \|w - x_0\| \le K_0 a^* < 1, \qquad (31.8)$$
so $F'(w)^{-1} \in L(B_2, B_1)$ and
$$\|F'(w)^{-1} F'(x_0)\| \le \frac{1}{1 - K_0 \|w - x_0\|}, \qquad (31.9)$$
by the Banach lemma on invertible linear operators [18]. Note that $y_0$, $z_0$ and $x_1$ are well defined by method (31.2) and (31.9) (for $w = x_0$). By the first sub-step of method (31.2), one can write
$$F(y_m) = F(y_m) - F(x_m) - F'(x_m)(y_m - x_m) = \int_0^1 \left(F'(x_m + \theta(y_m - x_m)) - F'(x_m)\right) d\theta\,(y_m - x_m). \qquad (31.10)$$
So, by (H3), (31.9) (for $w = x_m$) and (31.10),
$$\|z_m - y_m\| \le 5\|F'(x_m)^{-1} F'(x_0)\|\,\|F'(x_0)^{-1} F(y_m)\| \le \frac{5K}{2} \frac{\|y_m - x_m\|^2}{1 - K_0 \|x_m - x_0\|} \le \frac{5K}{2} \frac{(b_m - a_m)^2}{1 - K_0 a_m} = c_m - b_m. \qquad (31.11)$$
One can also write
$$x_{m+1} - z_m = -\frac{16}{5}(y_m - x_m) - \frac{1}{5} F'(x_m)^{-1} \int_0^1 \left(F'(x_m + \theta(y_m - x_m)) - F'(x_m)\right) d\theta\,(y_m - x_m),$$

so
$$\|x_{m+1} - z_m\| \le \frac{\left(\frac{16}{5} + \frac{K}{10}\|y_m - x_m\|\right)\|y_m - x_m\|}{1 - K_0 \|x_m - x_0\|} \le \frac{(32 + K(b_m - a_m))(b_m - a_m)}{10(1 - K_0 a_m)} = a_{m+1} - c_m. \qquad (31.12)$$
Moreover, one has
$$F(x_{m+1}) = F(x_{m+1}) - F(x_m) - F'(x_m)(x_{m+1} - x_m) + F'(x_m)(x_{m+1} - x_m) + F'(x_m)(y_m - x_m) - F'(x_m)(y_m - x_m)$$
$$= \int_0^1 \left(F'(x_m + \theta(x_{m+1} - x_m)) - F'(x_m)\right) d\theta\,(x_{m+1} - x_m) + (F'(x_0) + F'(x_m) - F'(x_0))(x_{m+1} - y_m), \qquad (31.13)$$
where we also used $F(x_m) + F'(x_m)(y_m - x_m) = 0$ (first sub-step), so
$$\|F'(x_0)^{-1} F(x_{m+1})\| \le \frac{K}{2}\|x_{m+1} - x_m\|^2 + (1 + K_0 \|x_m - x_0\|)\|x_{m+1} - y_m\| \le \frac{1}{2}\left[K(a_{m+1} - a_m)^2 + 2(1 + K_0 a_m)(a_{m+1} - b_m)\right] \qquad (31.14)$$
and
$$\|y_{m+1} - x_{m+1}\| \le \|F'(x_{m+1})^{-1} F'(x_0)\|\,\|F'(x_0)^{-1} F(x_{m+1})\| \le b_{m+1} - a_{m+1}, \qquad (31.15)$$
where we also used $\|y_0 - x_0\| \le \|F'(x_0)^{-1} F(x_0)\| \le \eta = b_0 - a_0 = b_0 < a^*$, the induction hypotheses (31.5)-(31.7) for all integers smaller than or equal to $m - 1$, and
$$\|y_m - x_0\| \le \|y_m - x_m\| + \|x_m - x_0\| \le b_m - a_m + a_m - a_0 = b_m < a^*,$$
$$\|z_m - x_0\| \le \|z_m - y_m\| + \|y_m - x_0\| \le c_m - b_m + b_m - a_0 = c_m < a^*,$$
and
$$\|x_{m+1} - x_0\| \le \|x_{m+1} - z_m\| + \|z_m - x_0\| \le a_{m+1} - c_m + c_m - a_0 = a_{m+1} < a^*.$$
It follows that the induction for (31.5)-(31.7) is complete and $x_m, y_m, z_m, x_{m+1} \in U(x_0, a^*)$. Hence, the sequence $\{x_m\}$ is fundamental in the Banach space $B_1$, so it converges to some $x^* \in U[x_0, a^*]$. By letting $m \to \infty$ in (31.13) and using the continuity of $F$, we get $F(x^*) = 0$. The uniqueness of $x^*$ is given next.

Proposition 13. Assume:
(1) $x^* \in U[x_0, a^*]$ solves (31.1).
(2) There exists $\delta \ge a^*$ such that
$$\frac{K_0}{2}(\delta + a^*) < 1. \qquad (31.16)$$
Set $D_1 = D \cap U[x^*, \delta]$. Then, $x^*$ is the only solution of equation (31.1) in the domain $D_1$.

Proof. Consider $\tilde{x} \in D_1$ with $F(\tilde{x}) = 0$. Define $M = \int_0^1 F'(\tilde{x} + \theta(x^* - \tilde{x}))\,d\theta$. Using (H2) and (31.16),
$$\|F'(x_0)^{-1}(M - F'(x_0))\| \le K_0 \int_0^1 ((1 - \theta)\|\tilde{x} - x_0\| + \theta\|x^* - x_0\|)\,d\theta \le \frac{K_0}{2}(\delta + a^*) < 1,$$
so $\tilde{x} = x^*$, by the invertibility of $M$ and $M(\tilde{x} - x^*) = F(\tilde{x}) - F(x^*) = 0 - 0 = 0$.

3. Local Convergence

Let $L_0$, $L$ and $L_1$ be positive parameters. Let $\rho_1 = \frac{2}{2L_0 + L}$, $Q = [0, \infty)$ and $Q_0 = [0, \rho_1)$. Define the function $\varphi_1 : Q_0 \longrightarrow Q$ by
$$\varphi_1(t) = \frac{Lt}{2(1 - L_0 t)}.$$
Assume:
(i) $\varphi_2(t) - 1$ has a smallest zero $r_1 \in Q_0 - \{0\}$, where
$$\varphi_2(t) = \left(1 + \frac{5L_1}{1 - L_0 t}\right)\varphi_1(t).$$
(ii) $\varphi_3(t) - 1$ has a smallest zero $r_2 \in Q_0 - \{0\}$, where $\varphi_3 : Q_0 \longrightarrow Q$ is defined by
$$\varphi_3(t) = \varphi_2(t) + \frac{16L_1}{5(1 - L_0 t)} + \frac{L_1 \varphi_1(t)}{5(1 - L_0 t)}.$$
We shall show that
$$r = \min\{r_i\}, \quad i = 1, 2, \qquad (31.17)$$
is a radius of convergence for method (31.2). Let $Q_1 = [0, r)$. By the definition of $r$, it follows that
$$0 \le L_0 t < 1 \qquad (31.18)$$
and
$$0 \le \varphi_i(t) < 1, \quad i = 1, 2, 3, \qquad (31.19)$$
hold for all $t \in Q_1$. The conditions (A) are used in the local convergence analysis. Assume:

(A1) There exists a simple solution $x^* \in D$ of the equation $F(x) = 0$.
(A2) $\|F'(x^*)^{-1}(F'(w) - F'(x^*))\| \le L_0 \|w - x^*\|$ for all $w \in U(x^*, \frac{1}{L_0})$. Set $D_0 = D \cap U(x^*, \frac{1}{L_0})$.
(A3) $\|F'(x^*)^{-1}(F'(w) - F'(v))\| \le L \|w - v\|$ and $\|F'(x^*)^{-1} F'(v)\| \le L_1$ for all $w, v \in D_0$, and
(A4) $U[x^*, r] \subset D$.

Then, proceeding as in the semi-local case but using the estimates
$$\|y_m - x^*\| = \|x_m - x^* - F'(x_m)^{-1} F(x_m)\| \le \frac{L\|x_m - x^*\|^2}{2(1 - L_0\|x_m - x^*\|)} \le \varphi_1(\|x_m - x^*\|)\|x_m - x^*\| \le \|x_m - x^*\| < r,$$
$$\|z_m - x^*\| = \|y_m - x^* - 5F'(x_m)^{-1} F(y_m)\| \le \left(1 + \frac{5L_1}{1 - L_0\|x_m - x^*\|}\right)\|y_m - x^*\| \le \left(1 + \frac{5L_1}{1 - L_0\|x_m - x^*\|}\right)\varphi_1(\|x_m - x^*\|)\|x_m - x^*\| = \varphi_2(\|x_m - x^*\|)\|x_m - x^*\| \le \|x_m - x^*\|$$
and
$$\|x_{m+1} - x^*\| = \left\|z_m - x^* + \frac{16}{5}F'(x_m)^{-1} F(x_m) - \frac{1}{5}F'(x_m)^{-1} F(y_m)\right\| \le \|z_m - x^*\| + \frac{16L_1\|x_m - x^*\|}{5(1 - L_0\|x_m - x^*\|)} + \frac{L_1\|y_m - x^*\|}{5(1 - L_0\|x_m - x^*\|)} \le \varphi_3(\|x_m - x^*\|)\|x_m - x^*\| \le \|x_m - x^*\|,$$
we arrive at:

Theorem 55. Assume conditions (A) hold. Then, the sequence $\{x_n\}$ generated by method (31.2) for $x_0 \in U(x^*, r)$ is well defined in $U(x^*, r)$, remains in $U(x^*, r)$ for all $n = 0, 1, 2, \ldots$ and converges to $x^*$.

4. Numerical Experiments

Example 43. Let $X = Y = \mathbb{R}$, $x_0 = 1$, $D = [x_0 - (1 - p), x_0 + (1 - p)]$, $p \in (0, \frac{3 - \sqrt{1.5}}{2})$, and $F : D \longrightarrow Y$ be defined by $F(x) = x^3 - p$. Then
$$\|F'(x_0)^{-1}\| \le \frac{1}{3}, \quad \|F'(y) - F'(x)\| \le 6(2 - p)\|y - x\|,$$
and hence, for all $x \in D$, we have
$$\|F'(x)^{-1}\| \le \frac{\|F'(x_0)^{-1}\|}{1 - \|F'(x_0)^{-1}\|\,\|F'(x) - F'(x_0)\|} \le \frac{1}{3(1 - 4(2 - p)(1 - p))}.$$
Hence,
$$\|F'(x)^{-1}(F'(y) - F'(x))\| \le \|F'(x)^{-1}\| L \|y - x\| \le \frac{2(2 - p)}{1 - 4(2 - p)(1 - p)}\|y - x\|.$$
Then, we have $\ell_0 = \frac{2(2 - p)}{1 - 4(2 - p)(1 - p)}$, $\eta_0 = \frac{1 - p}{3}$, and for $p = 0.99$, $p_0 = 0.0035$.

We compute the radius of convergence in the next examples.

Example 44. Returning to the motivational example, we have $L_0 = L = 96.6629073$ and $L_1 = 2$. Then, we have $r_1 = 0.0014 = r$, $r_2 = 0.0238$.

Example 45. Let $B_1 = B_2 = \mathbb{R}^3$, $D = U[0, 1]$, $x^* = (0, 0, 0)^T$. Define the function $F$ on $D$ for $w = (x, y, z)^T$ by
$$F(w) = \left(e^x - 1,\; \frac{e - 1}{2}y^2 + y,\; z\right)^T.$$
Then, we get
$$F'(w) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1)y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
so for $L_0 = e - 1$, $L = e^{\frac{1}{e - 1}}$ and $L_1 = e^{\frac{1}{e - 1}}$, we have $r_1 = 0.0836 = r$, $r_2 = 1.3665$.
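The radii $r_1$ reported in Examples 44 and 45 can be reproduced by locating the smallest positive zero of $\varphi_2(t) - 1$ numerically. A bisection sketch (only $r_1$ is computed here; $\varphi_2$ is increasing on $[0, 1/L_0)$, so the zero is unique):

```python
import math

# Sketch: reproduce the radius r1 of Examples 44 and 45 as the smallest
# positive zero of phi2(t) - 1, with phi1, phi2 as defined in Section 3.
def r1(L0, L, L1, tol=1e-12):
    phi1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
    phi2 = lambda t: (1.0 + 5.0 * L1 / (1.0 - L0 * t)) * phi1(t)
    lo, hi = 0.0, 1.0 / L0 - tol       # phi2 blows up as t -> 1/L0
    while hi - lo > tol:               # bisection on phi2(t) - 1
        mid = 0.5 * (lo + hi)
        if phi2(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(r1(96.6629073, 96.6629073, 2.0), 4))  # Example 44: 0.0014
e = math.e
c = e ** (1.0 / (e - 1.0))
print(round(r1(e - 1.0, c, c), 4))                # Example 45: 0.0836
```

The same bracketing-and-bisection pattern applies to any of the radius functions in these chapters, since each $\varphi_i$ is monotone on the interval where its denominator stays positive.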

5. Conclusion

The convergence of the Yadav-Singh method of order five is studied using assumptions only on the first derivative of the operator involved. The convergence of this method was previously shown by assuming the existence of the sixth derivative of the operator, which does not appear in the method, thereby limiting its applicability. Moreover, no computational error bounds or uniqueness-of-solution results were given. We address all these problems using only the first derivative, which does appear in the method. Hence, we extend the applicability of the method. Our techniques can be used to obtain the convergence of other similar higher-order methods using assumptions on the first derivative of the operator involved.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magreñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magreñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[8] Arroyo, V., Cordero, A., Torregrosa, J. R., Approximation of artificial satellites' preliminary orbits: The efficiency challenge, Mathematical and Computer Modelling, 54(7-8), (2011), 1802-1807.
[9] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51(2), (2020), 439-455.
[10] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[11] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[12] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, (2018).
[13] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 218, (2011), 2377-2385.
[14] Homeier, H. H. H., A modified Newton method with cubic convergence: the multivariate case, Journal of Computational and Applied Mathematics, 169, (2004), 161-169.
[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[16] Magreñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[17] Magreñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[18] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[19] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[20] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[21] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[22] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[23] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[24] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.
[25] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear least squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[26] Sharma, J. R., Guha, R. K., Sharma, R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algorithms, 62, (2013), 307-323.
[27] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey, U.S.A., (1964).
[28] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).
[29] Yadav, S., Singh, S., Global convergence domains for an efficient fifth order scheme, J. Comput. Appl. Math., (2022).

Chapter 32

Convergence of a p + 1 Step Method of Order 2p + 1 with Frozen Derivatives

1. Introduction

Consider the problem of solving the nonlinear equation
$$F(x) = 0, \qquad (32.1)$$
where $F : D \subset B_1 \longrightarrow B_2$ is an operator acting between Banach spaces $B_1$ and $B_2$ with $D \neq \emptyset$. The following $p + 1$ step iterative method was studied in [27] for $a = \frac{2}{3}$:
$$y_n^{(0)} = x_n - a F'(x_n)^{-1} F(x_n),$$
$$y_n^{(1)} = y_n^{(0)} - B_n F'(x_n)^{-1} F(y_n^{(0)}),$$
$$y_n^{(2)} = y_n^{(1)} - g(x_n, y_n) F'(x_n)^{-1} F(y_n^{(1)}), \qquad (32.2)$$
$$\vdots$$
$$y_n^{(p-1)} = y_n^{(p-2)} - g(x_n, y_n) F'(x_n)^{-1} F(y_n^{(p-2)}),$$
$$x_{n+1} = y_n^{(p)} = y_n^{(p-1)} - g(x_n, y_n) F'(x_n)^{-1} F(y_n^{(p-1)}),$$
where $a \in \mathbb{R}$, $x_0 \in D$, $y_n = y_n^{(0)}$, $g(x_n, y_n) = \frac{1}{2}(5I - 3A_n)$, $A_n = F'(x_n)^{-1} F'(y_n)$ and $B_n = \frac{1}{12}(13I - 9A_n)$. It was shown in [27] to be of order $2p + 1$ using hypotheses on the derivative of order $2p + 2$. This technique can be used on other methods and relevant topics along the same lines [1]-[28]. In this chapter, we provide the local convergence of method (32.2) using assumptions only on the first derivative of $F$, unlike earlier studies [27], where the convergence analysis required assumptions on the derivatives of $F$ up to order $2p + 2$. For example, let $X = Y = \mathbb{R}$, $D = [-\frac{1}{2}, \frac{3}{2}]$, and define $f$ on $D$ by
$$f(t) = \begin{cases} t^3 \log t^2 + t^5 - t^4 & \text{if } t \neq 0, \\ 0 & \text{if } t = 0. \end{cases}$$
Then we have $f(1) = 0$ and $f'''(t) = 6 \log t^2 + 60t^2 - 24t + 22$. Obviously, $f'''(t)$ is not bounded on $D$. So, the convergence of method (32.2) is not guaranteed by the analysis in [27]. Throughout the chapter, $U(x_0, \rho) = \{x \in B_1 : \|x - x_0\| < \rho\}$ and $U[x_0, \rho] = \{x \in B_1 : \|x - x_0\| \le \rho\}$ for some $\rho > 0$. The chapter contains the local convergence analysis in Section 2, and numerical examples are given in Section 3.
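In the scalar case the frozen derivative $F'(x_n)$ is computed once per outer step and every inverse is a division. A sketch of (32.2) with $a = \frac{2}{3}$, taking $g = \frac{1}{2}(5I - 3A_n)$ and $B_n = \frac{1}{12}(13I - 9A_n)$ as in the proof in Section 2 (the test equation and starting point are illustrative choices, not from the text):

```python
# Sketch of one outer step of the (p+1)-step frozen-derivative method
# (32.2), scalar case: F'(x_n) is evaluated once, F'(y_n^(0)) once more
# for A_n, and all remaining sub-steps reuse them.
def frozen_step(F, dF, x, p, a=2.0 / 3.0):
    dFx = dF(x)                      # frozen derivative F'(x_n)
    y = x - a * F(x) / dFx           # y_n^(0)
    A = dF(y) / dFx                  # A_n = F'(x_n)^{-1} F'(y_n)
    B = (13.0 - 9.0 * A) / 12.0      # B_n = (1/12)(13I - 9A_n)
    g = (5.0 - 3.0 * A) / 2.0        # g(x_n, y_n) = (1/2)(5I - 3A_n)
    y = y - B * F(y) / dFx           # y_n^(1)
    for _ in range(p - 1):           # y_n^(2), ..., y_n^(p) = x_{n+1}
        y = y - g * F(y) / dFx
    return y

# Illustrative test problem: F(x) = x^2 - 2, root sqrt(2), with p = 3.
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x
x = 1.5
for _ in range(3):
    x = frozen_step(F, dF, x, p=3)
print(x)  # ~1.4142135623730951
```

The design point of frozen-derivative schemes is visible here: increasing $p$ raises the order while adding only one residual evaluation and no new derivative evaluation per extra sub-step.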

2. Local Convergence

We introduce some scalar functions and parameters that play a role in the local convergence of method (32.2). Assume:
(i) There exists a nondecreasing and continuous function $\varphi_0 : [0, \infty) \longrightarrow [0, \infty)$ such that the equation $\varphi_0(t) - 1 = 0$ has a smallest solution $\rho \in (0, \infty)$.
(ii) There exist functions $\varphi : [0, \rho) \longrightarrow [0, \infty)$, $\varphi_1 : [0, \rho) \longrightarrow [0, \infty)$ such that the equations $\psi_0(t) - 1 = 0$, $\psi_1(t) - 1 = 0$ have smallest solutions $r_0, r_1 \in (0, \rho)$, respectively, where
$$\psi_0(t) = \frac{\int_0^1 \varphi((1 - \theta)t)\,d\theta + |1 - a| \int_0^1 \varphi_1(\theta t)\,d\theta}{1 - \varphi_0(t)},$$
$$\psi_1(t) = \left(1 + \frac{p(t) \int_0^1 \varphi_1(\theta \psi_0(t)t)\,d\theta}{1 - \varphi_0(t)}\right)\psi_0(t)$$
and
$$p(t) = \frac{3(\varphi_0(t) + \varphi_0(\psi_0(t)t))}{4(1 - \varphi_0(t))} + \frac{1}{3}.$$
Define the scalar functions
$$s = s(t) = 1 + \frac{3\varphi_0(t)}{1 - \varphi_0(t)},$$
$$q = q(t) = 1 + \frac{s(t) \int_0^1 \varphi_1(\theta t)\,d\theta}{1 - \varphi_0(t)}$$
and
$$\psi_m(t) = q(t)\psi_{m-1}(t), \quad m = 2, 3, \ldots, p + 1.$$
Suppose the equations $\psi_m(t) - 1 = 0$ have smallest solutions $r_m \in (0, \rho)$, respectively. Define
$$r = \min\{r_i\}, \quad i = 1, 2, \ldots, p + 1. \qquad (32.4)$$

We shall show that $r$ is a radius of convergence for method (32.2). Notice that, by the definition of $r$, it follows that for all $t \in [0, r)$
$$0 \le \varphi_0(t) < 1 \qquad (32.5)$$
and
$$0 \le \psi_i(t) < 1. \qquad (32.6)$$
The convergence uses conditions (A). Assume:
(A1) The element $x^* \in D$ is a simple solution of the equation $F(x) = 0$.
(A2) $\|F'(x^*)^{-1}(F'(z) - F'(x^*))\| \le \varphi_0(\|z - x^*\|)$ for all $z \in U[x^*, \rho]$. Set $D_0 = D \cap U(x^*, \rho)$.
(A3) $\|F'(x^*)^{-1}(F'(z) - F'(w))\| \le \varphi(\|z - w\|)$ and $\|F'(x^*)^{-1} F'(z)\| \le \varphi_1(\|z - x^*\|)$ for all $z, w \in D_0$, and
(A4) $U[x^*, r] \subset D$.

Next, we present the convergence of method (32.2) using the aforementioned notation and conditions (A).

Theorem 56. Assume conditions (A) hold. Then, the sequence $\{x_n\}$ generated by method (32.2) for $x_0 \in U(x^*, r)$ is well defined in $U(x^*, r)$, remains in $U(x^*, r)$ for all $n = 0, 1, 2, \ldots$ and converges to $x^*$.

Proof. Let $v \in U(x^*, r)$. In view of (A2) and (32.4), one gets in turn
$$\|F'(x^*)^{-1}(F'(v) - F'(x^*))\| \le \varphi_0(\|v - x^*\|) \le \varphi_0(r) < 1,$$
leading to $F'(v)^{-1} \in L(B_2, B_1)$ and
$$\|F'(v)^{-1} F'(x^*)\| \le \frac{1}{1 - \varphi_0(\|v - x^*\|)}, \qquad (32.7)$$
by a lemma on invertible operators attributed to Banach [15, 18]. It follows from (32.7) for $v = x_0$ that $y_0^{(1)}, y_0^{(2)}, \ldots, y_0^{(p)}$ exist. Then, we can write, by the first and second sub-steps of method (32.2),
$$y_0^{(0)} - x^* = x_0 - x^* - F'(x_0)^{-1} F(x_0) + (1 - a) F'(x_0)^{-1} F(x_0) \qquad (32.8)$$


Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

and

y0^(1) − x∗ = y0^(0) − x∗ − B0 F′(x0)⁻¹F(y0^(0)),   (32.9)

so using (A3), one has in turn

‖y0^(0) − x∗‖ ≤ [∫₀¹ ϕ((1 − θ)‖x0 − x∗‖)dθ + |1 − a| ∫₀¹ ϕ1(θ‖x0 − x∗‖)dθ] ‖x0 − x∗‖ / (1 − ϕ0(‖x0 − x∗‖))
≤ ψ0(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < r,   (32.10)

so y0^(0) ∈ U(x∗, r), and

‖y0^(1) − x∗‖ ≤ ‖y0^(0) − x∗‖ + ‖B0‖ ‖F′(x0)⁻¹F′(x∗)‖ ‖F′(x∗)⁻¹F(y0^(0))‖
≤ (1 + p(‖x0 − x∗‖) ∫₀¹ ϕ1(θ‖y0^(0) − x∗‖)dθ / (1 − ϕ0(‖x0 − x∗‖))) ‖y0^(0) − x∗‖
≤ ψ1(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖,   (32.11)

so y0^(1) ∈ U(x∗, r), where we also used

‖B0‖ = ‖(9/12)F′(x0)⁻¹(F′(x0) − F′(y0^(0))) + (1/3)I‖
≤ 3(ϕ0(‖x0 − x∗‖) + ϕ0(‖y0^(0) − x∗‖)) / (4(1 − ϕ0(‖x0 − x∗‖))) + 1/3 = p(‖x0 − x∗‖).   (32.12)

Similarly, by the third sub-step of method (32.2),

‖y0^(2) − x∗‖ ≤ (1 + s ∫₀¹ ϕ1(θ‖y0^(1) − x∗‖)dθ / (1 − ϕ0(‖x0 − x∗‖))) ‖y0^(1) − x∗‖
≤ qψ1(‖x0 − x∗‖)‖x0 − x∗‖ = ψ2(‖x0 − x∗‖)‖x0 − x∗‖ < ‖x0 − x∗‖,   (32.13)

so y0^(2) ∈ U(x∗, r), where we also used

‖g(x0, y0^(0))‖ = ‖(1/2)(5I − 3F′(x0)⁻¹F′(y0^(0)))‖ = ‖I + (3/2)F′(x0)⁻¹(F′(x0) − F′(y0^(0)))‖
≤ 1 + 3(ϕ0(‖x0 − x∗‖) + ϕ0(‖y0^(0) − x∗‖)) / (2(1 − ϕ0(‖x0 − x∗‖)))
≤ 1 + 3ϕ0(‖x0 − x∗‖) / (1 − ϕ0(‖x0 − x∗‖)) = s.   (32.14)

As in (32.13), we get

‖y0^(i) − x∗‖ ≤ qψi−1(‖x0 − x∗‖)‖x0 − x∗‖ = ψi(‖x0 − x∗‖)‖x0 − x∗‖ < ‖x0 − x∗‖,   (32.15)

for i = 2, 3, . . ., p, so

‖x1 − x∗‖ ≤ ψp(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < r.   (32.16)

Simply replace y0^(0), . . ., y0^(p) by yk^(0), . . ., yk^(p) in the preceding estimations to obtain

‖xk+1 − x∗‖ ≤ ψp(‖xk − x∗‖)‖xk − x∗‖ ≤ ‖xk − x∗‖ < r.   (32.17)

Hence, limk→∞ xk = x∗ and xk+1 ∈ U(x∗, r).

Proposition 14. Assume:
(1) x∗ ∈ U[x0, r] solves (32.1).
(2) There exists b ≥ r so that

∫₀¹ ϕ((1 − θ)b + θr)dθ < 1.   (32.18)

Set D1 = D ∩ U[x∗, b]. Then, x∗ is the only solution of equation (32.1) in the domain D1.

Proof. Consider x̃ ∈ D1 with F(x̃) = 0. Define M = ∫₀¹ F′(x̃ + θ(x∗ − x̃))dθ. Using (A1), (A3) and (32.18),

‖F′(x0)⁻¹(M − F′(x0))‖ ≤ ∫₀¹ ϕ((1 − θ)‖x̃ − x0‖ + θ‖x∗ − x0‖)dθ ≤ ∫₀¹ ϕ((1 − θ)b + θr)dθ < 1,

so x̃ = x∗, by the invertibility of M and M(x̃ − x∗) = F(x̃) − F(x∗) = 0 − 0 = 0.

3. Numerical Experiments

We compute the radius of convergence in this section.

Example 46. Returning to the motivational example, we have ϕ0(t) = ϕ(t) = 96.6629073t and ϕ1(t) = 2. Then, we have r1 = 0.0066, r2 = 0.0055 = r, r3 = 0.0065, r4 = 0.0057.

Example 47. Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1] equipped with the max norm, and let D = U(0, 1). Define the function F on D by

F(ϕ)(x) = ϕ(x) − 5 ∫₀¹ xθϕ(θ)³dθ.   (32.19)

We have that

F′(ϕ)(ξ)(x) = ξ(x) − 15 ∫₀¹ xθϕ(θ)²ξ(θ)dθ, for each ξ ∈ D.

Then, we get that x∗ = 0, F′(x∗) = I, ϕ0(t) = 15t/2, ϕ(t) = 15t and ϕ1(t) = 2. We have

r1 = r2 = r4 = 0.1333 = r, r3 = 0.2303.
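As an independent sanity check (ours, not from the text): with ϕ0(t) = 15t/2, the requirement ϕ0(t) < 1 alone already caps any convergence radius at ρ = 2/15 ≈ 0.1333, which agrees with the reported value of r:

```python
# rho solves phi0(t) = 1 with phi0(t) = 7.5 * t (Example 47)
phi0 = lambda t: 7.5 * t
rho = 1.0 / 7.5          # = 2/15 = 0.1333...
assert abs(phi0(rho) - 1.0) < 1e-12
```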

4. Conclusion

The local convergence of the p + 1 step method of order 2p + 1 is studied using assumptions only on the first derivative of the operator involved. Earlier proofs of convergence for this method required the existence of higher-order derivatives that do not appear in the method, which limits its applicability; moreover, they provided no computable error bounds or uniqueness results. We address all of these problems using only the first derivative, which does appear in the method, and hence we extend the applicability of the method. Our technique can also be used to establish the convergence of other similar higher-order methods under assumptions on the first derivative alone.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, (Eds. Chui, C. K. and Wuytack, L.), Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magréñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magréñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[8] Arroyo, V., Cordero, A., Torregrosa, J. R., Approximation of artificial satellites' preliminary orbits: The efficiency challenge, Mathematical and Computer Modelling, 54, (7-8), (2011), 1802-1807.
[9] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.
[10] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[11] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[12] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, (2018).
[13] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 281, (2011), 2377-2385.
[14] Homeier, H. H. H., A modified Newton method with cubic convergence: the multivariate case, Journal of Computational and Applied Mathematics, 169, (2004), 161-169.
[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[16] Magréñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[17] Magréñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[18] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[19] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[20] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[21] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[22] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[23] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[24] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.
[25] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[26] Sharma, J. R., Guha, R. K., Sharma, R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algorithms, 62, (2013), 307-323.
[27] Sharma, J. R., Kumar, S., A class of accurate Newton-Jarratt-like methods with applications to nonlinear models, Comput. Appl. Math., (2022).
[28] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).
[29] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).

Chapter 33

Efficient Fifth Order Scheme

1. Introduction

Let F : D ⊂ X −→ Y be a nonlinear operator acting between Banach spaces X and Y. Consider the problem of solving the nonlinear equation

F(x) = 0.   (33.1)

Iterative schemes are used to approximate a solution x∗ of equation (33.1). The following iterative scheme was studied in [21]:

yn = xn − αF′(xn)⁻¹F(xn),
zn = yn − (1/12)(13I − 9An)F′(xn)⁻¹F(xn),
xn+1 = zn − (1/2)(5I − 3An)F′(xn)⁻¹F(zn),   (33.2)

where α ∈ R and An = F′(xn)⁻¹F′(yn). If α = 2/3, (33.2) reduces to the scheme in [21]. It was shown to be of order five using hypotheses on the sixth derivative. In this chapter, we study the convergence of scheme (33.2) using assumptions only on the first derivative of F, unlike earlier studies [21], where the convergence analysis required assumptions on the derivatives of F up to order six. This technique can be used on other methods and relevant topics along the same lines [1]-[28]. For example, let X = Y = R, D = [−1/2, 3/2] and define f on D by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0,  f(t) = 0 if t = 0.

Then, we have f(1) = 0 and

f‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously f‴(t) is not bounded on D. So, the convergence of scheme (33.2) is not guaranteed by the analysis in [21].
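To illustrate, here is a minimal scalar sketch of scheme (33.2) (our own illustration; the test function f(t) = t² − 2, the starting point and the choice α = 2/3 are ours, and we take the third sub-step from zn, in line with the error estimate (33.12)):

```python
import math

def scheme_33_2(f, df, x, alpha=2/3, iters=5):
    """Scalar instance of the three-step frozen-derivative scheme:
    y  = x - alpha*f(x)/f'(x),
    z  = y - (13 - 9A) f(x) / (12 f'(x)),
    x+ = z - (5 - 3A) f(z) / (2 f'(x)),
    where A = f'(y)/f'(x); f' is evaluated (frozen) only at x and y."""
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        y = x - alpha * fx / dfx
        A = df(y) / dfx
        z = y - (13 - 9 * A) * fx / (12 * dfx)
        x = z - (5 - 3 * A) * f(z) / (2 * dfx)
    return x

root = scheme_33_2(lambda t: t * t - 2, lambda t: 2 * t, x=1.5)
```

A handful of iterations from x0 = 1.5 drives the iterate to the root √2 of t² − 2 to machine precision.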

Throughout the chapter U(x0, R) = {x ∈ X : ‖x − x0‖ < R} and U[x0, R] = {x ∈ X : ‖x − x0‖ ≤ R} for some R > 0. The chapter contains the local convergence analysis in Section 2, and numerical examples are given in Section 3.

2. Ball Convergence

Some functions and parameters shall play a role in the local convergence. Suppose:

(i) There exists a nondecreasing and continuous function ω0 : [0, ∞) −→ [0, ∞) such that ω0(t) − 1 = 0 has a minimal zero R0 ∈ (0, ∞).

(ii) There exist functions ω : [0, R0) −→ [0, ∞), ω1 : [0, R0) −→ [0, ∞) such that the equations h1(t) − 1 = 0 and h2(t) − 1 = 0 have minimal zeros r1, r2 ∈ (0, R0), respectively, where

h1(t) = [∫₀¹ ω((1 − θ)t)dθ + |1 − α| ∫₀¹ ω1(θt)dθ] / (1 − ω0(t))

and

h2(t) = h1(t) + p(t) ∫₀¹ ω1(θt)dθ / (1 − ω0(t)),

where

p(t) = 3(ω0(t) + ω0(h1(t)t)) / (4(1 − ω0(t))) + 1/3.

Define functions on [0, R0) by

q = q(t) = 1 + s ∫₀¹ ω1(θt)dθ / (1 − ω0(t)),

s = s(t) = 1 + 3ω0(t) / (1 − ω0(t))

and

h3(t) = qh2(t).

(iii) h3(t) − 1 = 0 has a minimal zero r3 in (0, R0).

The parameter r defined by

r = min{ri}, i = 1, 2, 3,   (33.3)

shall be shown to be a radius of convergence for scheme (33.2). It follows from (33.3) that for all t ∈ [0, r)

0 ≤ ω0(t) < 1   (33.4)

and

0 ≤ hi(t) < 1.   (33.5)

The following conditions (H) are needed. Suppose:


(H1) x∗ ∈ D is a simple solution of the equation F(x) = 0.

(H2) ‖F′(x∗)⁻¹(F′(z) − F′(x∗))‖ ≤ ω0(‖z − x∗‖) for all z ∈ D. Set D0 = D ∩ U(x∗, R0).

(H3) ‖F′(x∗)⁻¹(F′(z) − F′(w))‖ ≤ ω(‖z − w‖) and ‖F′(x∗)⁻¹F′(z)‖ ≤ ω1(‖z − x∗‖) for all z, w ∈ D0.

(H4) U[x∗, r] ⊂ D.

The main ball convergence result follows next using conditions (H).

Theorem 57. Suppose conditions (H) hold. Then, the sequence {xn} produced by scheme (33.2) for x0 ∈ U(x∗, r) remains in U(x∗, r) and converges to x∗.

Proof. Let u ∈ U(x∗, r) be arbitrary. Then, using (H2) and (33.3) one obtains

‖F′(x∗)⁻¹(F′(u) − F′(x∗))‖ ≤ ω0(‖u − x∗‖) ≤ ω0(r) < 1,

implying the invertibility of F′(u) and

‖F′(u)⁻¹F′(x∗)‖ ≤ 1 / (1 − ω0(‖u − x∗‖)),   (33.6)

by the standard Banach lemma on linear invertible operators [11, 14]. Hence, if u = x0, (33.6) gives that y0, z0, x1 are well defined by scheme (33.2). Moreover, the first two sub-steps of scheme (33.2) give, respectively,

y0 − x∗ = x0 − x∗ − F′(x0)⁻¹F(x0) + (1 − α)F′(x0)⁻¹F(x0)   (33.7)

and

z0 − x∗ = y0 − x∗ − B0F′(x0)⁻¹F(x0).   (33.8)

Using (H3) on (33.7) and (33.8), respectively,

‖y0 − x∗‖ ≤ [∫₀¹ ω((1 − θ)‖x0 − x∗‖)dθ + |1 − α| ∫₀¹ ω1(θ‖x0 − x∗‖)dθ] ‖x0 − x∗‖ / (1 − ω0(‖x0 − x∗‖))
≤ h1(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < r,   (33.9)

so y0 ∈ U(x∗, r), and

‖z0 − x∗‖ ≤ ‖y0 − x∗‖ + ‖B0‖ ‖F′(x0)⁻¹F′(x∗)‖ ‖F′(x∗)⁻¹F(y0)‖
≤ (h1(‖x0 − x∗‖) + p(‖x0 − x∗‖) ∫₀¹ ω1(θ‖x0 − x∗‖)dθ / (1 − ω0(‖x0 − x∗‖))) ‖x0 − x∗‖
≤ h2(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖,   (33.10)

so z0 ∈ U(x∗, r), where we also used for B0 = (1/12)(13I − 9A0) that

‖B0‖ = ‖(9/12)F′(x0)⁻¹(F′(x0) − F′(y0)) + (1/3)I‖
≤ 3(ω0(‖x0 − x∗‖) + ω0(‖y0 − x∗‖)) / (4(1 − ω0(‖x0 − x∗‖))) + 1/3 = p(‖x0 − x∗‖).   (33.11)

Then, by the third sub-step of scheme (33.2), one has

‖x1 − x∗‖ ≤ (1 + s ∫₀¹ ω1(θ‖z0 − x∗‖)dθ / (1 − ω0(‖x0 − x∗‖))) ‖z0 − x∗‖
≤ qh2(‖x0 − x∗‖)‖x0 − x∗‖ = h3(‖x0 − x∗‖)‖x0 − x∗‖ < ‖x0 − x∗‖,   (33.12)

so x1 ∈ U(x∗, r), where we also used

‖(1/2)(5I − 3F′(x0)⁻¹F′(y0))‖ = ‖I + (3/2)F′(x0)⁻¹(F′(x0) − F′(y0))‖
≤ 1 + 3(ω0(‖x0 − x∗‖) + ω0(‖y0 − x∗‖)) / (2(1 − ω0(‖x0 − x∗‖)))
≤ 1 + 3ω0(‖x0 − x∗‖) / (1 − ω0(‖x0 − x∗‖)) = s.

Then, simply replace x0, y0, z0, x1 by xm, ym, zm, xm+1 in the preceding calculations to obtain

‖xm+1 − x∗‖ ≤ d‖xm − x∗‖ < r,   (33.13)

where d = h3(‖x0 − x∗‖) ∈ [0, 1). Hence, (33.13) implies that limm→∞ xm = x∗ and xm+1 ∈ U(x∗, r).

Proposition 15. Assume:
(1) x∗ ∈ U[x0, r] solves (33.1) and is simple.
(2) There exists λ ≥ r so that

∫₀¹ ω((1 − θ)λ + θr)dθ < 1.   (33.14)

Set D1 = D ∩ U[x∗, λ]. Then, x∗ is the only solution of equation (33.1) in the domain D1.

Proof. Consider x̃ ∈ D1 with F(x̃) = 0. Define M = ∫₀¹ F′(x̃ + θ(x∗ − x̃))dθ. Using (H1), (H3) and (33.14),

‖F′(x0)⁻¹(M − F′(x0))‖ ≤ ∫₀¹ ω((1 − θ)‖x̃ − x0‖ + θ‖x∗ − x0‖)dθ ≤ ∫₀¹ ω((1 − θ)λ + θr)dθ < 1,

so x̃ = x∗, by the invertibility of M and M(x̃ − x∗) = F(x̃) − F(x∗) = 0 − 0 = 0.

3. Numerical Experiments

We compute the radius of convergence in this section.

Example 48. Let X = Y = R³, D = B[0, 1], x∗ = (0, 0, 0)T. Define the function F on D for w = (x, y, z)T by

F(w) = (eˣ − 1, ((e − 1)/2)y² + y, z)T.

Then, we get

F′(w) = diag(eˣ, (e − 1)y + 1, 1),

so for ω0(t) = (e − 1)t, ω(t) = e^{1/(e−1)}t and ω1(t) = e^{1/(e−1)}, we have

r1 = 0.0403 = r, r2 = 0.3190, r3 = 0.3203.

4. Conclusion

The local convergence of a fifth-order scheme is studied using assumptions only on the first derivative of the operator involved. Earlier proofs of convergence for this scheme required the existence of the sixth-order derivative, which does not appear in the scheme, and this limits its applicability; moreover, they provided no computable error bounds or uniqueness results. We address all of these problems using only the first derivative, which does appear in the scheme, and hence we extend its applicability. Our technique can also be used to establish the convergence of other similar higher-order schemes under assumptions on the first derivative alone.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, (Eds. Chui, C. K. and Wuytack, L.), Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magréñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magréñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[8] Arroyo, V., Cordero, A., Torregrosa, J. R., Approximation of artificial satellites' preliminary orbits: The efficiency challenge, Mathematical and Computer Modelling, 54, (7-8), (2011), 1802-1807.
[9] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.
[10] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[11] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[12] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, (2018).
[13] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 281, (2011), 2377-2385.
[14] Homeier, H. H. H., A modified Newton method with cubic convergence: the multivariate case, Journal of Computational and Applied Mathematics, 169, (2004), 161-169.
[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[16] Magréñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[17] Magréñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[18] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[19] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[20] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[21] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[22] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[23] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[24] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.
[25] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[26] Sharma, J. R., Guha, R. K., Sharma, R., An efficient fourth order weighted Newton method for systems of nonlinear equations, Numer. Algorithms, 62, (2013), 307-323.
[27] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).
[28] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).
[29] Yadav, S., Singh, S., Global convergence domains for an efficient fifth order scheme, J. Comput. Appl. Math., (2022).

Chapter 34

Sharma-Gupta Fifth Order Method

1. Introduction

Let F : D ⊂ E1 −→ E2 be a nonlinear operator acting between Banach spaces E1 and E2. Consider the problem of solving the nonlinear equation

F(x) = 0.   (34.1)

Iterative methods are used to approximate a solution x∗ of equation (34.1). The following iterative method was studied in [26]:

yn = xn − γF′(xn)⁻¹F(xn),
zn = xn − F′(yn)⁻¹F(xn),
xn+1 = zn − (2F′(yn)⁻¹ − F′(xn)⁻¹)F(zn),   (34.2)

where γ ∈ R. If γ = 1/2, (34.2) reduces to the method in [26]. It was shown to be of order five using hypotheses on the sixth derivative. In this chapter, we study the convergence of method (34.2) using assumptions only on the first derivative of F, unlike earlier studies [26], where the convergence analysis required assumptions on the derivatives of F up to order six. This technique can be used on other methods and relevant topics along the same lines [1]-[29]. For example, let X = Y = R, D = [−1/2, 3/2] and define f on D by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0,  f(t) = 0 if t = 0.

Then, we have f(1) = 0 and

f‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously f‴(t) is not bounded on D. So, the convergence of method (34.2) is not guaranteed by the analysis in [26].

Throughout the chapter U(x0, R) = {x ∈ X : ‖x − x0‖ < R} and U[x0, R] = {x ∈ X : ‖x − x0‖ ≤ R} for some R > 0. The chapter contains the local convergence analysis in Section 2, and numerical examples are given in Section 3.
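To illustrate, here is a minimal scalar sketch of method (34.2) (our own illustration; the test function f(t) = t² − 2, the starting point and the choice γ = 1/2 are ours):

```python
import math

def sharma_gupta(f, df, x, gamma=0.5, iters=5):
    """Scalar instance of method (34.2):
    y  = x - gamma*f(x)/f'(x),
    z  = x - f(x)/f'(y),
    x+ = z - (2/f'(y) - 1/f'(x)) * f(z)."""
    for _ in range(iters):
        fx, dfx = f(x), df(x)
        y = x - gamma * fx / dfx
        dfy = df(y)
        z = x - fx / dfy
        x = z - (2 / dfy - 1 / dfx) * f(z)
    return x

root = sharma_gupta(lambda t: t * t - 2, lambda t: 2 * t, x=1.5)
```

Starting from x0 = 1.5, a few iterations reach the root √2 of t² − 2 to machine precision.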

2. Convergence

Some scalar functions and parameters are developed. Let T = [0, ∞). Suppose:

(i) ψ0(t) − 1 has a minimal positive zero ρ0, where ψ0 : T −→ T is nondecreasing and continuous. Let T0 = [0, ρ0).

(ii) ϕ1(t) − 1 has a minimal solution R1 ∈ T0 − {0}, where ψ : T0 −→ T, ψ1 : T0 −→ T are continuous and nondecreasing functions and ϕ1 : T0 −→ T is defined by

ϕ1(t) = [∫₀¹ ψ((1 − θ)t)dθ + |1 − γ| ∫₀¹ ψ1(θt)dθ] / (1 − ψ0(t)).

(iii) ψ0(ϕ1(t)t) − 1 has a minimal zero ρ1 ∈ T0 − {0}. Let ρ2 = min{ρ0, ρ1} and T1 = [0, ρ2).

(iv) ϕ2(t) − 1 has a minimal solution R2 ∈ T1 − {0}, where ϕ2 : T1 −→ T is defined by

ϕ2(t) = ∫₀¹ ψ((1 − θ)t)dθ / (1 − ψ0(t)) + (ψ0(t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(θt)dθ / ((1 − ψ0(t))(1 − ψ0(ϕ1(t)t))).

(v) ψ0(ϕ2(t)t) − 1 has a minimal solution ρ3 ∈ T1 − {0}. Let ρ = min{ρ2, ρ3} and T2 = [0, ρ).

(vi) ϕ3(t) − 1 has a minimal solution R3 ∈ T2 − {0}, where ϕ3 : T2 −→ T is defined by

ϕ3(t) = ∫₀¹ ψ((1 − θ)ϕ2(t)t)dθ / (1 − ψ0(ϕ2(t)t))
+ (ψ0(ϕ1(t)t) + ψ0(ϕ2(t)t)) ∫₀¹ ψ1(θϕ2(t)t)dθ / ((1 − ψ0(ϕ1(t)t))(1 − ψ0(ϕ2(t)t)))
+ (ψ0(t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(θϕ2(t)t)dθ / ((1 − ψ0(t))(1 − ψ0(ϕ1(t)t))).

The parameter R defined by

R = min{Rm}, m = 1, 2, 3   (34.3)

shall be shown to be a radius of convergence for method (34.2). Let T3 = [0, R). Definition (34.3) implies that for all t ∈ T3

0 ≤ ψ0(t) < 1,   (34.4)

0 ≤ ψ0(ϕ1(t)t) < 1,   (34.5)

0 ≤ ψ0(ϕ2(t)t) < 1   (34.6)

and

0 ≤ ϕm(t) < 1.   (34.7)

Moreover, conditions (C) shall be utilized. Suppose:

(C1) x∗ ∈ D is a simple solution of equation (34.1).


(C2) ‖F′(x∗)⁻¹(F′(w) − F′(x∗))‖ ≤ ψ0(‖w − x∗‖) holds for all w ∈ D. Set D0 = D ∩ U(x∗, ρ0).

(C3) ‖F′(x∗)⁻¹(F′(w) − F′(u))‖ ≤ ψ(‖w − u‖) and ‖F′(x∗)⁻¹F′(u)‖ ≤ ψ1(‖u − x∗‖) for all w, u ∈ D0.

(C4) U[x∗, R] ⊂ D.

These conditions are used in the main local convergence result for method (34.2) that follows.

Theorem 58. Suppose conditions (C) hold and pick x0 ∈ U(x∗, R) − {x∗}. Then, the sequence {xn} produced by method (34.2) is such that limn→∞ xn = x∗.

Proof. Mathematical induction shall be utilized to show in turn that

‖yn − x∗‖ ≤ ϕ1(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖ < R,   (34.8)

‖zn − x∗‖ ≤ ϕ2(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖   (34.9)

and

‖xn+1 − x∗‖ ≤ ϕ3(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖.   (34.10)

Let u ∈ U(x∗, R). It then follows by (C2) that

‖F′(x∗)⁻¹(F′(u) − F′(x∗))‖ ≤ ψ0(‖u − x∗‖) ≤ ψ0(R) < 1,

so F′(u)⁻¹ ∈ L(E2, E1) and

‖F′(u)⁻¹F′(x∗)‖ ≤ 1 / (1 − ψ0(‖u − x∗‖)),   (34.11)

by the celebrated Banach lemma on linear invertible operators [15]. The iterate y0 is well defined by the first sub-step of method (34.2) for n = 0 and (34.11) for u = x0. Moreover, one can write

y0 − x∗ = x0 − x∗ − F′(x0)⁻¹F(x0) + (1 − γ)F′(x0)⁻¹F(x0).   (34.12)

In view of (C3), (34.7) for m = 1 and (34.12), one has

‖y0 − x∗‖ ≤ [∫₀¹ ψ((1 − θ)‖x0 − x∗‖)dθ + |1 − γ| ∫₀¹ ψ1(θ‖x0 − x∗‖)dθ] ‖x0 − x∗‖ / (1 − ψ0(‖x0 − x∗‖))
≤ ϕ1(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < R,   (34.13)

so (34.8) holds for n = 0 and y0 ∈ U(x∗, R). Notice that F′(y0)⁻¹ ∈ L(E2, E1) by (34.11) for u = y0, so the iterates z0, x1 are well defined by method (34.2). Moreover, one has

z0 − x∗ = x0 − x∗ − F′(x0)⁻¹F(x0) + F′(x0)⁻¹(F′(y0) − F′(x0))F′(y0)⁻¹F(x0).   (34.14)

It follows by (34.7), (34.11), (34.13) and (34.14) that

‖z0 − x∗‖ ≤ [∫₀¹ ψ((1 − θ)‖x0 − x∗‖)dθ / (1 − ψ0(‖x0 − x∗‖))
+ (ψ0(‖x0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(θ‖x0 − x∗‖)dθ / ((1 − ψ0(‖x0 − x∗‖))(1 − ψ0(‖y0 − x∗‖)))] ‖x0 − x∗‖
≤ ϕ2(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖,   (34.15)

so (34.9) holds for n = 0 and z0 ∈ U(x∗, R). Furthermore, by the third sub-step of method (34.2) for n = 0,

x1 − x∗ = z0 − x∗ − F′(z0)⁻¹F(z0) + F′(z0)⁻¹(F′(y0) − F′(z0))F′(y0)⁻¹F(z0) + F′(x0)⁻¹(F′(y0) − F′(x0))F′(y0)⁻¹F(z0).   (34.16)

Using (34.7), (C3), (34.11) for u = y0, z0, (34.15) and (34.16) one gets

‖x1 − x∗‖ ≤ [∫₀¹ ψ((1 − θ)‖z0 − x∗‖)dθ / (1 − ψ0(‖z0 − x∗‖))
+ (ψ0(‖y0 − x∗‖) + ψ0(‖z0 − x∗‖)) ∫₀¹ ψ1(θ‖z0 − x∗‖)dθ / ((1 − ψ0(‖y0 − x∗‖))(1 − ψ0(‖z0 − x∗‖)))
+ (ψ0(‖x0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(θ‖z0 − x∗‖)dθ / ((1 − ψ0(‖x0 − x∗‖))(1 − ψ0(‖y0 − x∗‖)))] ‖z0 − x∗‖
≤ ϕ3(‖x0 − x∗‖)‖x0 − x∗‖ < ‖x0 − x∗‖,   (34.17)

so (34.10) holds for n = 0 and x1 ∈ U(x∗, R). The induction for estimates (34.8)-(34.10) is completed if x0, y0, z0, x1 are replaced by xi, yi, zi, xi+1, respectively, in the previous computations. Then, from the estimate

‖xi+1 − x∗‖ ≤ d‖xi − x∗‖ < R,   (34.18)

where d = ϕ3(‖x0 − x∗‖) ∈ [0, 1), we have limi→∞ xi = x∗ and xi+1 ∈ U(x∗, R).

Proposition 16. Assume:
(1) x∗ ∈ U[x0, R] solves (34.1) and is simple.
(2) There exists β ≥ R so that

∫₀¹ ψ((1 − θ)β + θR)dθ < 1.   (34.19)

Set D1 = D ∩ U[x∗, β]. Then, x∗ is the only solution of equation (34.1) in the domain D1.

Proof. Consider x̃ ∈ D1 with F(x̃) = 0. Define M = ∫₀¹ F′(x̃ + θ(x∗ − x̃))dθ. Using (C1), (C3) and (34.19),

‖F′(x0)⁻¹(M − F′(x0))‖ ≤ ∫₀¹ ψ((1 − θ)‖x̃ − x0‖ + θ‖x∗ − x0‖)dθ ≤ ∫₀¹ ψ((1 − θ)β + θR)dθ < 1,

so x̃ = x∗, by the invertibility of M and M(x̃ − x∗) = F(x̃) − F(x∗) = 0 − 0 = 0.

3. Numerical Experiments

We compute the radius of convergence in this section.

Example 49. Let E1 = E2 = R³, D = B[0, 1], x∗ = (0, 0, 0)T. Define the function F on D for w = (x, y, z)T by

F(w) = (eˣ − 1, ((e − 1)/2)y² + y, z)T.

Then, we get

F′(w) = diag(eˣ, (e − 1)y + 1, 1),

so for ψ0(t) = (e − 1)t, ψ(t) = e^{1/(e−1)}t and ψ1(t) = e^{1/(e−1)}, we have

R1 = 0.0403, R2 = 0.8934, R3 = 0.1045, so R = 0.0403.
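The radii above can be reproduced numerically. A sketch (ours, not from the text) recovering R1 by bisection on ϕ1(t) − 1 = 0 with γ = 1/2, using ∫₀¹ψ((1−θ)t)dθ = Lt/2 and ∫₀¹ψ1(θt)dθ = L for L = e^{1/(e−1)}:

```python
import math

L = math.exp(1.0 / (math.e - 1.0))   # value of e^{1/(e-1)}
# phi1(t) = (L*t/2 + |1-gamma|*L) / (1 - (e-1)*t) with gamma = 1/2
phi1 = lambda t: (L * t / 2 + 0.5 * L) / (1 - (math.e - 1) * t)

# phi1 is increasing on (0, 1/(e-1)) with phi1(0) = L/2 < 1,
# so bisection on phi1(t) = 1 finds the smallest root
a, b = 0.0, 1.0 / (math.e - 1.0) - 1e-9
for _ in range(80):
    m = (a + b) / 2
    if phi1(m) < 1:
        a = m
    else:
        b = m
R1 = (a + b) / 2
```

The computed root is about 0.0403, matching the reported R1.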

4. Conclusion

The local convergence of the Sharma-Gupta method of order five is studied using assumptions only on the first derivative of the operator involved. Earlier proofs of convergence for this method required the existence of the sixth-order derivative, which does not appear in the method, and this limits its applicability; moreover, they provided no computable error bounds or uniqueness results. We address all of these problems using only the first derivative, which does appear in the method, and hence we extend its applicability. Our technique can also be used to establish the convergence of other similar higher-order methods under assumptions on the first derivative alone.

References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational Theory of Iterative Methods, Studies in Computational Mathematics, 15, (Eds. Chui, C. K. and Wuytack, L.), Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's method, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magréñán, Á. A., Iterative Methods and Their Dynamics with Applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magréñán, Á. A., A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[8] Arroyo, V., Cordero, A., Torregrosa, J. R., Approximation of artificial satellites' preliminary orbits: The efficiency challenge, Mathematical and Computer Modelling, 54, (7-8), (2011), 1802-1807.
[9] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.
[10] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[11] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[12] Ezquerro, J. A., Hernández, M. A., Newton's Method: An Updated Approach of Kantorovich's Theory, Cham, Switzerland, (2018).
[13] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 281, (2011), 2377-2385.
[14] Homeier, H. H. H., A modified Newton method with cubic convergence: the multivariate case, Journal of Computational and Applied Mathematics, 169, (2004), 161-169.
[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).
[16] Magréñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[17] Magréñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[18] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[19] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[20] Potra, F. A., Pták, V., Nondiscrete Induction and Iterative Processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[21] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[22] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[23] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[24] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.
[25] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[26] Sharma, J. R., Gupta, P., An efficient fifth order method for solving systems of nonlinear equations, Comput. and Math. Appl., 4, (2014), 591-601.
[27] Sharma, J. R., Kumar, S., A class of accurate Newton-Jarratt-like methods with applications to nonlinear models, Comput. Appl. Math., (2022).
[28] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).
[29] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).

Chapter 35

Seventh Order Method for Equations

1. Introduction

Let F : Ω ⊂ B −→ B1 be a nonlinear operator acting between Banach spaces B and B1. Consider the problem of solving the nonlinear equation

F(x) = 0.   (35.1)

Iterative methods are used to approximate a solution x∗ of equation (35.1). Define the iterative method by

yn = xn − δF′(xn)⁻¹F(xn),
zn = xn − F′(yn)⁻¹F(xn),
wn = zn − (2F′(yn)⁻¹ − F′(xn)⁻¹)F(zn),
xn+1 = wn − (2F′(yn)⁻¹ − F′(xn)⁻¹)F(wn),   (35.2)

where δ ∈ ℝ. If δ = 1/2, the method reduces to the one in [29]. In this chapter, we study the convergence of method (35.2) using assumptions only on the first derivative of F, unlike the earlier study [29], where the convergence analysis required assumptions on the derivatives of F up to order eight. This technique can be used on other methods and relevant topics along the same lines [1]-[29]. For example, let X = Y = ℝ and D = [−1/2, 3/2]. Define f on D by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and f(t) = 0 if t = 0.

Then, we have f(1) = 0 and

f‴(t) = 6 log t² + 60t² − 24t + 22.

Obviously, f‴(t) is not bounded on D. So, the convergence of method (35.2) is not guaranteed by the analysis in [29]. Throughout the chapter, U(x0, R) = {x ∈ X : ‖x − x0‖ < R} and U[x0, R] = {x ∈ X : ‖x − x0‖ ≤ R} for some R > 0. The chapter contains the local convergence analysis in Section 2, and numerical examples are given in Section 3.
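In the scalar case, the four substeps of method (35.2) can be written down directly. The following is our own illustration with δ = 1/2 and the sample equation x³ − 8 = 0 (not from the chapter); `seventh_order_step` is a hypothetical name.

```python
def seventh_order_step(F, dF, x, delta=0.5):
    """One full step of the four-substep method (35.2), scalar case."""
    Fx, dFx = F(x), dF(x)
    y = x - delta * Fx / dFx        # y_n = x_n - delta*F'(x_n)^{-1} F(x_n)
    dFy = dF(y)
    z = x - Fx / dFy                # z_n = x_n - F'(y_n)^{-1} F(x_n)
    A = 2.0 / dFy - 1.0 / dFx       # 2 F'(y_n)^{-1} - F'(x_n)^{-1}
    w = z - A * F(z)                # w_n
    return w - A * F(w)             # x_{n+1}

F = lambda t: t**3 - 8.0            # sample equation with solution x* = 2
dF = lambda t: 3.0 * t**2

x = 2.5
for _ in range(3):
    x = seventh_order_step(F, dF, x)
```

Starting from x0 = 2.5, three steps already reach the solution x∗ = 2 to machine precision, consistent with the high order of the method.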

Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

2. Convergence

The convergence analysis utilizes real functions and parameters. Set M = [0, ∞). Suppose:

(1) The function ψ0(t) − 1 has a minimal solution ρ0 ∈ M − {0} for some continuous and nondecreasing function ψ0 : M → M. Set M0 = [0, ρ0).

(2) The function ϕ1(t) − 1 has a minimal solution r1 ∈ M0 − {0} for some continuous and nondecreasing functions ψ : M0 → M and ψ1 : M0 → M, where ϕ1 : M0 → M is given by

ϕ1(t) = [∫₀¹ ψ((1 − τ)t) dτ + |1 − δ| ∫₀¹ ψ1(τt) dτ] / (1 − ψ0(t)).

(3) The function ψ0(ϕ1(t)t) − 1 has a minimal solution ρ1 ∈ M0 − {0}. Set ρ2 = min{ρ0, ρ1} and M1 = [0, ρ2).

(4) The function ϕ2(t) − 1 has a minimal solution r2 ∈ M1 − {0}, where ϕ2 : M1 → M is given by

ϕ2(t) = ∫₀¹ ψ((1 − τ)t) dτ / (1 − ψ0(t)) + (ψ0(t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(τt) dτ / [(1 − ψ0(t))(1 − ψ0(ϕ1(t)t))].

(5) The function ψ0(ϕ2(t)t) − 1 has a minimal solution ρ3 ∈ M1 − {0}. Set ρ4 = min{ρ2, ρ3} and M2 = [0, ρ4).

(6) The function ϕ3(t) − 1 has a minimal solution r3 ∈ M2 − {0}, where ϕ3 : M2 → M is given by

ϕ3(t) = [ ∫₀¹ ψ((1 − θ)ϕ2(t)t) dθ / (1 − ψ0(ϕ2(t)t))
  + (ψ0(ϕ2(t)t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(θϕ2(t)t) dθ / ((1 − ψ0(ϕ2(t)t))(1 − ψ0(ϕ1(t)t)))
  + (ψ0(t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(θϕ2(t)t) dθ / ((1 − ψ0(t))(1 − ψ0(ϕ1(t)t))) ] ϕ2(t).

(7) The function ψ0(ϕ3(t)t) − 1 has a minimal solution ρ5 ∈ M2 − {0}. Set M3 = [0, ρ5).

(8) The function ϕ4(t) − 1 has a minimal solution r4 ∈ M3 − {0}, where ϕ4 : M3 → M is given by

ϕ4(t) = [ ∫₀¹ ψ((1 − τ)ϕ3(t)t) dτ / (1 − ψ0(ϕ3(t)t))
  + (ψ0(ϕ3(t)t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(τϕ3(t)t) dτ / ((1 − ψ0(ϕ3(t)t))(1 − ψ0(ϕ1(t)t)))
  + (ψ0(t) + ψ0(ϕ1(t)t)) ∫₀¹ ψ1(τϕ3(t)t) dτ / ((1 − ψ0(t))(1 − ψ0(ϕ1(t)t))) ] ϕ3(t).

The parameter r given by

r = min{rm}, m = 1, 2, 3, 4,

shall be shown to be a radius of convergence for method (35.2). Set M4 = [0, r). By the definition of r, it follows that for all t ∈ M4

0 ≤ ψ0(t) < 1,
0 ≤ ψ0(ϕ1(t)t) < 1,
0 ≤ ψ0(ϕ2(t)t) < 1,
0 ≤ ψ0(ϕ3(t)t) < 1

and

0 ≤ ϕm(t) < 1.   (35.3)

Moreover, the conditions (H) are needed. Suppose:

(H1) The element x∗ ∈ Ω is a simple solution of the equation F(x) = 0.

(H2) ‖F′(x∗)⁻¹(F′(u) − F′(x∗))‖ ≤ ψ0(‖u − x∗‖) for all u ∈ Ω. Set D0 = U(x∗, r) ∩ Ω.

(H3) ‖F′(x∗)⁻¹(F′(u1) − F′(u2))‖ ≤ ψ(‖u1 − u2‖) and ‖F′(x∗)⁻¹F′(v1)‖ ≤ ψ1(‖v1 − x∗‖) for all u1, u2, v1 ∈ D0.

(H4) U[x∗, r] ⊂ Ω.

The main local convergence result for method (35.2) follows, using the aforementioned terminology and the conditions (H).

Theorem 59. Suppose the conditions (H) hold, and pick x0 ∈ U(x∗, r) − {x∗}. Then, the sequence {xn} generated by method (35.2) converges to x∗, so that

‖yn − x∗‖ ≤ ϕ1(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖ < r,
‖zn − x∗‖ ≤ ϕ2(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖,
‖wn − x∗‖ ≤ ϕ3(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖

and

‖xn+1 − x∗‖ ≤ ϕ4(‖xn − x∗‖)‖xn − x∗‖ ≤ ‖xn − x∗‖.   (35.4)

Proof. Let u ∈ U(x∗, r). Then, using (H2),

‖F′(x∗)⁻¹(F′(u) − F′(x∗))‖ ≤ ψ0(‖u − x∗‖) ≤ ψ0(r) < 1,

so F′(u)⁻¹ ∈ L(B1, B) and

‖F′(u)⁻¹F′(x∗)‖ ≤ 1 / (1 − ψ0(‖u − x∗‖)),   (35.5)

by the Banach lemma on invertible operators [15]. In particular, if u = x0, then F′(x0)⁻¹ exists, so the iterate y0 is well defined by the first substep of method (35.2) for n = 0. Moreover, one has

y0 − x∗ = x0 − x∗ − F′(x0)⁻¹F(x0) + (1 − δ)F′(x0)⁻¹F(x0).   (35.6)

By (35.3) (for m = 1), (H3), (35.5) (for u = x0) and (35.6),

‖y0 − x∗‖ ≤ [∫₀¹ ψ((1 − τ)‖x0 − x∗‖) dτ + |1 − δ| ∫₀¹ ψ1(τ‖x0 − x∗‖) dτ] ‖x0 − x∗‖ / (1 − ψ0(‖x0 − x∗‖))
= ϕ1(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖ < r.

So y0 ∈ U(x∗, r), and (35.4) holds for n = 0. It also follows from (35.5) (for u = y0) that the iterates z0, w0 and x1 are well defined by method (35.2). Furthermore, one can also write

z0 − x∗ = x0 − x∗ − F′(x0)⁻¹F(x0) + F′(x0)⁻¹(F′(y0) − F′(x0))F′(y0)⁻¹F(x0).   (35.7)


In view of (35.3) (for m = 2), (H3), (35.5) (for u = x0, y0) and (35.7), one gets

‖z0 − x∗‖ ≤ [ ∫₀¹ ψ((1 − τ)‖x0 − x∗‖) dτ / (1 − ψ0(‖x0 − x∗‖))
  + (ψ0(‖x0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(τ‖x0 − x∗‖) dτ / ((1 − ψ0(‖x0 − x∗‖))(1 − ψ0(‖y0 − x∗‖))) ] ‖x0 − x∗‖
≤ ϕ2(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖.

So z0 ∈ U(x∗, r) and (35.4) holds for n = 0. By the third substep, one gets

w0 − x∗ = z0 − x∗ − F′(z0)⁻¹F(z0) + (F′(z0)⁻¹ − F′(y0)⁻¹)F(z0) + (F′(x0)⁻¹ − F′(y0)⁻¹)F(z0).

Using (35.3) (for m = 3), (H3), (35.5) (for u = x0, y0, z0) and the preceding identity, one obtains

‖w0 − x∗‖ ≤ [ ∫₀¹ ψ((1 − τ)‖z0 − x∗‖) dτ / (1 − ψ0(‖z0 − x∗‖))
  + (ψ0(‖z0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(τ‖z0 − x∗‖) dτ / ((1 − ψ0(‖z0 − x∗‖))(1 − ψ0(‖y0 − x∗‖)))
  + (ψ0(‖x0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(τ‖z0 − x∗‖) dτ / ((1 − ψ0(‖x0 − x∗‖))(1 − ψ0(‖y0 − x∗‖))) ] ‖z0 − x∗‖
≤ ϕ3(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖.

So w0 ∈ U(x∗, r) and (35.4) holds for n = 0. Similarly, by the last substep of method (35.2), one gets

x1 − x∗ = w0 − x∗ − F′(w0)⁻¹F(w0) + (F′(w0)⁻¹ − F′(y0)⁻¹)F(w0) + (F′(x0)⁻¹ − F′(y0)⁻¹)F(w0),

leading to

‖x1 − x∗‖ ≤ [ ∫₀¹ ψ((1 − τ)‖w0 − x∗‖) dτ / (1 − ψ0(‖w0 − x∗‖))
  + (ψ0(‖w0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(τ‖w0 − x∗‖) dτ / ((1 − ψ0(‖w0 − x∗‖))(1 − ψ0(‖y0 − x∗‖)))
  + (ψ0(‖x0 − x∗‖) + ψ0(‖y0 − x∗‖)) ∫₀¹ ψ1(τ‖w0 − x∗‖) dτ / ((1 − ψ0(‖x0 − x∗‖))(1 − ψ0(‖y0 − x∗‖))) ] ‖w0 − x∗‖
≤ ϕ4(‖x0 − x∗‖)‖x0 − x∗‖ ≤ ‖x0 − x∗‖.

Hence, x1 ∈ U(x∗, r) and (35.4) holds for n = 0. Simply exchange x0, y0, z0, w0, x1 with xj, yj, zj, wj, xj+1 in the preceding estimates to complete the induction for (35.4). It then follows from the estimate

‖xj+1 − x∗‖ ≤ p‖xj − x∗‖ < r,   (35.8)

where p = ϕ4(‖x0 − x∗‖) ∈ [0, 1), that xj+1 ∈ U(x∗, r) and limj→∞ xj = x∗.

Proposition 17. Suppose:
(1) The element x∗ ∈ U(x∗, s∗) ⊂ Ω for some s∗ > 0 is a simple solution of equation (35.1), and (H2) holds with ψ0(t) = K0 t.
(2) There exists δ̄ ≥ s∗ such that

K0(s∗ + δ̄) < 2.   (35.9)

Set Ω1 = Ω ∩ U[x∗, δ̄]. Then, x∗ is the unique solution of equation (35.1) in the domain Ω1.

Proof. Let q ∈ Ω1 with F(q) = 0. Define S = ∫₀¹ F′(q + θ(x∗ − q)) dθ. Using (H2) and (35.9), one obtains

‖F′(x∗)⁻¹(S − F′(x∗))‖ ≤ K0 ∫₀¹ (1 − θ)‖q − x∗‖ dθ ≤ (K0/2)(s∗ + δ̄) < 1,

so q = x∗ follows from the invertibility of S and the identity S(q − x∗) = F(q) − F(x∗) = 0 − 0 = 0.

3. Numerical Experiments

We compute the radius of convergence in this section.

Example 50. Let B = B1 = ℝ³, Ω = U[0, 1], x∗ = (0, 0, 0)ᵀ. Define the function F on Ω for w = (x, y, z)ᵀ by

F(w) = (eˣ − 1, ((e − 1)/2)y² + y, z)ᵀ.

Then, we get

F′(w) = diag(eˣ, (e − 1)y + 1, 1),

so for ψ0(t) = (e − 1)t, ψ(t) = e^{1/(e−1)} t, ψ1(t) = e^{1/(e−1)} and δ = 1/2, we have

r1 = 0.0403, r2 = 0.0876, r3 = 0.0612, r4 = 0.0542, so r = r1 = 0.0403.
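The value r1 can be reproduced numerically. The sketch below is our own check, assuming the form of ϕ1 given in Section 2; for these choices the integrals have the closed forms ∫₀¹ψ((1 − τ)t)dτ = e^{1/(e−1)}t/2 and ∫₀¹ψ1(τt)dτ = e^{1/(e−1)}, and r1 is the smallest positive root of ϕ1(t) = 1, found here by bisection.

```python
import math

e = math.e
a = math.exp(1.0 / (e - 1.0))        # psi(t) = a*t and psi1(t) = a (constant)
psi0 = lambda t: (e - 1.0) * t       # psi0(t) = (e-1)*t, so rho0 = 1/(e-1)

def phi1(t, delta=0.5):
    # phi1(t) = [a*t/2 + |1 - delta|*a] / (1 - psi0(t))
    return (a * t / 2.0 + abs(1.0 - delta) * a) / (1.0 - psi0(t))

# Bisection for the smallest root of phi1(t) = 1 on (0, rho0):
# phi1 is increasing there, phi1(0+) = a/2 < 1, and phi1(t) -> +inf near rho0.
lo, hi = 1e-12, 1.0 / (e - 1.0) - 1e-12
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if phi1(mid) < 1.0:
        lo = mid
    else:
        hi = mid
r1 = lo
```

This gives r1 ≈ 0.0403, matching the value reported above.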

4. Conclusion

The local convergence of a Xiao-Yin method of order seven is studied using assumptions only on the first derivative of the operator involved. In [29], the convergence of this method was shown by assuming the existence of the eighth-order derivative of the operator, a derivative that does not appear in the method, which limits its applicability. Moreover, no computational error bounds or uniqueness results for the solution were given there. We address all these problems using only the first derivative, which does appear in the method. Hence, we extend the applicability of the method. Our technique can be used to establish the convergence of other similar high-order methods using assumptions only on the first derivative of the operator involved.


References

[1] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[2] Argyros, I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.
[3] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer-Verlag, Berlin, Germany, (2008).
[4] Argyros, I. K., Hilout, S., Weaker conditions for the convergence of Newton's method, J. Complexity, 28, (2012), 364-387.
[5] Argyros, I. K., Hilout, S., On an improved convergence analysis of Newton's methods, Applied Mathematics and Computation, 225, (2013), 372-386.
[6] Argyros, I. K., Magreñán, Á. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.
[7] Argyros, I. K., Magreñán, Á. A., A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.
[8] Arroyo, V., Cordero, A., Torregrosa, J. R., Approximation of artificial satellites' preliminary orbits: The efficiency challenge, Mathematical and Computer Modelling, 54, (7-8), (2011), 1802-1807.
[9] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.
[10] Cătinaş, E., The inexact, inexact perturbed, and quasi-Newton methods are equivalent models, Math. Comp., 74, (2005), 291-301.
[11] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Romero, N., Rubio, M. J., The Newton method: from Newton to Kantorovich (Spanish), Gac. R. Soc. Mat. Esp., 13, (2010), 53-76.
[12] Ezquerro, J. A., Hernández, M. A., Newton's method: An updated approach of Kantorovich's theory, Cham, Switzerland, (2018).
[13] Grau-Sánchez, M., Grau, À., Noguera, M., Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput., 281, (2011), 2377-2385.
[14] Homeier, H. H. H., A modified Newton method with cubic convergence: the multivariate case, Journal of Computational and Applied Mathematics, 169, (2004), 161-169.
[15] Kantorovich, L. V., Akilov, G. P., Functional Analysis, Pergamon Press, Oxford, (1982).


[16] Magreñán, Á. A., Argyros, I. K., Rainer, J. J., Sicilia, J. A., Ball convergence of a sixth-order Newton-like method based on means under weak conditions, J. Math. Chem., 56, (2018), 2117-2131, https://doi.org/10.1007/s10910-018-0856-y.
[17] Magreñán, Á. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.
[18] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).
[19] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Spaces, Elsevier, 1973.
[20] Potra, F. A., Pták, V., Nondiscrete induction and iterative processes, Research Notes in Mathematics, 103, Pitman (Advanced Publishing Program), Boston, MA, (1984).
[21] Proinov, P. D., General local convergence theory for a class of iterative processes and its applications to Newton's method, J. Complexity, 25, (2009), 38-62.
[22] Proinov, P. D., New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, J. Complexity, 26, (2010), 3-42.
[23] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Ctr. Publ., 3, (1978), 129-142.
[24] Shakhno, S. M., Gnatyshyn, O. P., On an iterative algorithm of order 1.839... for solving nonlinear least squares problems, Appl. Math. Applic., 161, (2005), 253-264.
[25] Shakhno, S. M., Iakymchuk, R. P., Yarmola, H. P., Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator, J. Numer. Appl. Math., 128, (2018), 82-95.
[26] Sharma, J. R., Kumar, S., A class of accurate Newton-Jarratt-like methods with applications to nonlinear models, Comput. Appl. Math., (2022).
[27] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).
[28] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, (2019).
[29] Xiao, X., Yin, H., A new class of methods with high order of convergence for solving systems of nonlinear equations, Appl. Math. Comput., 264, (2015), 300-309.

Chapter 36

Newton-Like Method

1. Introduction

We are concerned with the problem of approximating a solution x∗ of the inclusion problem

F(x) ∈ K,   (36.1)

where F : D ⊂ B1 −→ B2 is a Fréchet differentiable operator between Banach spaces B1 and B2, D is an open set and K ≠ ∅ is a closed convex cone. Many applications from the computational sciences can be formulated as inclusion (36.1) [2,6,7,8,9,12,13,17]. A solution in closed form can be attained only in special cases. That is why most solution approaches are iterative. The following method was employed in [19] for solving inclusion (36.1):

xn+1 = xn + rn, rn ∈ arg min{‖r‖ : F(xn) + H(xn)r ∈ K},   (36.2)

where H is an approximation to F′ which satisfies a Lipschitz-type condition. The results in [18,19] extend the corresponding ones by Robinson [15,16], who was the first researcher to present a Newton-Kantorovich-type semi-local convergence result for method (36.2). The case when F is not necessarily differentiable was studied in [14], where it was assumed that F decomposes into a differentiable and a nondifferentiable part. Recently, these results were extended in [19] to solve the inclusion

Q(x) = F(x) + G(x) ∈ K,   (36.3)

where F : B1 −→ B2 is Fréchet differentiable and G : B1 −→ B2 is continuous. Moreover, F′ and H satisfy a Hölder-type condition. Furthermore, the following method, which extends the one given by Robinson, was used: given xn ∈ D, pick xn+1 to be a solution of

min{‖x − xn‖ : F(xn) + G(xn) + H(xn)(x − xn) ∈ K}.   (36.4)

It follows from the definition of K that the feasible set of (36.4) is closed and convex. So, if x̄ is a feasible point of (36.4), then a solution of (36.4) belongs to U[xn, ‖xn − x̄‖]. Here and below, U(u, r) = {x ∈ B1 : ‖x − u‖ < r} and U[u, r] = {x ∈ B1 : ‖x − u‖ ≤ r} for some r > 0. Moreover, if B1 is reflexive and ‖x − xn‖ is weakly lower semi-continuous, a


solution of (36.4) is assured. In this chapter, we are concerned with such optimization considerations. The convergence domain of method (36.4) is small in general. The novelty of this chapter is two-fold:
(1) The convergence domain given in [15,16,18,19] is extended, and tighter upper error bounds on ‖xn+1 − xn‖ are obtained without additional hypotheses (see [19]).
(2) The Hölder conditions are replaced by tighter ones, leading to the benefits in (1).
These improvements are obtained by determining a subset of the original domain that also contains the iterates xn. On this new domain, the Hölder parameters or functions are at least as tight as the original ones. The rest of the chapter contains the mathematical background in Section 2, results on majorizing sequences in Section 3, and the semi-local convergence analysis of method (36.4) in Section 4, whereas the numerical examples and conclusions appear in Section 5 and Section 6, respectively.
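To make the subproblem (36.4) concrete, consider the scalar case with K = [0, ∞), G ≡ 0 and H = F′ (our own illustration; `newton_like_step` is a hypothetical name). The subproblem then has a closed form: if F(xn) ≥ 0, then x = xn is already feasible with zero correction; otherwise the nearest feasible point is the Newton point.

```python
def newton_like_step(F, H, xn):
    """Solve min |x - xn| s.t. F(xn) + H(xn)*(x - xn) in K = [0, inf)."""
    Fx = F(xn)
    if Fx >= 0.0:
        return xn                    # xn is feasible: zero-norm correction
    return xn - Fx / H(xn)           # nearest feasible point (H(xn) != 0)

# Inclusion x**2 - 2 >= 0; every point of (-inf, -sqrt(2)] u [sqrt(2), inf)
# solves it, so the iteration stops at the first feasible point it reaches.
F = lambda x: x * x - 2.0
H = lambda x: 2.0 * x
x = 1.0
for _ in range(8):
    x = newton_like_step(F, H, x)
```

Starting at x0 = 1, one step lands on x = 1.5, which already satisfies the inclusion, and the iteration then stagnates there. This illustrates that a solution of an inclusion such as (36.1) need not be a root of F.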

2. Mathematical Background

We re-introduce some standard concepts and results to make the chapter as self-contained as possible. More details can be found in [6,12,13,14,15,16,17,18,19]. The graph of a multifunction S : B1 ⇒ B2 is the set gph S := {(x, y) ∈ B1 × B2 : y ∈ S(x)}. The domain and range of S are defined by dom S := {x ∈ B1 : S(x) ≠ ∅} and range S := {y ∈ B2 : y ∈ S(x) for some x ∈ B1}. Moreover, the inverse of S is given by S⁻¹ : B2 ⇒ B1 with S⁻¹(y) := {x ∈ B1 : y ∈ S(x)}.

Definition 22. [17] S : B1 ⇒ B2 is a convex process if
(a) 0 ∈ S(0),
(b) S(µx) = µS(x) for all µ > 0, x ∈ B1,
(c) S(x̃ + x̂) ⊇ S(x̃) + S(x̂) for all x̃, x̂ ∈ B1.

Definition 23. [17] If S is a convex process, then its norm is given by

‖S‖ := sup{‖S(x)‖ : x ∈ dom S, ‖x‖ ≤ 1},   (36.5)

where ‖S(x)‖ := inf{‖y‖ : y ∈ S(x)}, provided S(x) ≠ ∅.

Definition 24. [16] Let S1, S2 : B1 ⇒ B2 and S3 : B2 −→ B3, where B3 is also a Banach space. Then, we define
(i) (S1 + S2)(x) := S1(x) + S2(x),
(ii) S3(S2(x)) := ∪{S3(y) : y ∈ S2(x)},
(iii) (aS1)(x) := aS1(x)
for all x ∈ B1 and a > 0.

Lemma 21. [6] Suppose S : B1 ⇒ B2 is a convex process with closed graph. Then, the following implications hold:

dom S = B1 ⟺ ‖S‖ < +∞ and range S = B2 ⟺ ‖S⁻¹‖ < +∞.


Lemma 22. [6,13] Let S1, S2 : B1 ⇒ B2 be convex processes with closed graph, dom S1 = dom S2 = B1 and ‖S2⁻¹‖ < +∞. Moreover, suppose ‖S2⁻¹‖‖S1‖ < 1 and (S1 + S2)(x) is closed for each x ∈ B1. Then, the following assertions hold:

range (S1 + S2)⁻¹ = B1 and ‖(S1 + S2)⁻¹‖ ≤ ‖S2⁻¹‖ / (1 − ‖S2⁻¹‖‖S1‖).

Let K ⊂ B2 be a closed convex cone such that K ≠ ∅, and let z ∈ Ω ⊆ B1. Then, we define the multifunction Sz : B1 ⇒ B2 by

Sz(x) := H(z)x − K.   (36.6)

It follows from (36.6) that

Sz⁻¹(v) := {w ∈ B1 : H(z)w − v ∈ K}   (36.7)

for z ∈ Ω, v ∈ B2.

Lemma 23. [6] Sz and Sz⁻¹ are convex processes with closed graph, dom Sz = B1 and ‖Sz‖ < +∞. Moreover, range Sz = B2 ⟺ ‖Sz⁻¹‖ < +∞.

Lemma 24. [18] Let Ω ⊂ B1 be an open set, let H(z) : B1 −→ B2 be a linear operator for each z ∈ Ω, and let Sz be given by (36.6). Then, the following holds:

Sz⁻¹(H(w))Sw⁻¹(v) ⊆ Sz⁻¹(v)

for all w, z ∈ Ω, v ∈ B2. Consequently,

‖Sz⁻¹(H(y) − H(x))‖ ≤ ‖Sz⁻¹(H(w))Sw⁻¹(H(y) − H(x))‖

holds for all w, x, y, z ∈ Ω.

3. Majorizing Sequences

The convergence analysis is based on scalar sequences that shall be shown to be majorizing for method (36.4). Let λ > 0 be a given parameter. Set M = [0, ∞). Consider a function ϕ0 : M −→ M that is nondecreasing, continuous and such that the equation ϕ0(t) − 1 = 0 has a smallest zero ρ ∈ M − {0}. Set M0 = [0, ρ). Then, consider nondecreasing and continuous functions ϕ : M0 −→ M, ϕ1 : M0 −→ M and ϕ2 : M0 −→ M. Define the sequence {tn} by

t0 = 0, t1 = λ, tn+1 = tn + qn(tn − tn−1),   (36.8)

where

qn = [∫₀¹ ϕ(θ(tn − tn−1)) dθ + ϕ1(tn−1) + ϕ2(tn − tn−1)] / (1 − ϕ0(tn)).

Next, we present results on the convergence of the sequence {tn}.
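The recursion (36.8) is easy to tabulate. The sketch below uses sample majorant functions and λ of our own choosing (not from the chapter) and exhibits the monotone convergence described in the lemmas that follow.

```python
# Illustrative majorant functions (sample data): for phi(t) = t, the inner
# integral is int_0^1 phi(theta*s) dtheta = s/2.
phi0 = lambda t: t
phi1 = lambda t: 0.1 + 0.1 * t
phi2 = lambda t: 0.1 * t

lam = 0.1
ts = [0.0, lam]                       # t0 = 0, t1 = lam
for n in range(1, 60):
    s = ts[n] - ts[n - 1]
    q = (s / 2.0 + phi1(ts[n - 1]) + phi2(s)) / (1.0 - phi0(ts[n]))
    ts.append(ts[n] + q * s)          # t_{n+1} = t_n + q_n (t_n - t_{n-1})
```

For these choices the sequence is nondecreasing and settles at t∗ ≈ 0.12, well inside [0, ρ) = [0, 1).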


Lemma 25. Suppose

ϕ0(tn) < 1   (36.9)

and the sequence {tn} is bounded from above by some t∗∗ ∈ [0, ρ). Then, the sequence {tn} is nondecreasing and converges to its unique least upper bound t∗ ∈ [0, t∗∗].

Proof. By the definition of the sequence {tn} and (36.9), it follows that it is nondecreasing, and since tn ≤ t∗∗, limn→∞ tn = t∗.

Lemma 26. Suppose the function ϕ0 is strictly increasing on the interval M and

tn ≤ ϕ0⁻¹(1).   (36.10)

Then, the sequence {tn} is nondecreasing and converges to t∗.

Proof. Set t∗∗ = ϕ0⁻¹(1) in Lemma 25.

Next, we present two more results, in which conditions (36.9) and (36.10) are easier to verify.

Lemma 27. Suppose that there exists δ ∈ [0, 1) such that

0 ≤ q1 ≤ δ   (36.11)

and

0 ≤ qδ ≤ δ,   (36.12)

where qδ is given by

qδ = [∫₀¹ ϕ(θλ) dθ + ϕ1(λ/(1 − δ)) + ϕ2(λ)] / (1 − ϕ0(λ/(1 − δ))).

Then, the following assertions hold:

0 ≤ tn+1 − tn ≤ δ(tn − tn−1) ≤ δⁿλ,   (36.13)

0 ≤ tn ≤ (1 − δⁿ)λ / (1 − δ)   (36.14)

and

lim tn := t∗ ≤ λ/(1 − δ).   (36.15)

Notice that if Δ ≥ 0, the polynomial p has roots δ1 and δ2 such that 0 ≤ δ1 ≤ δ2, and one takes δ = δ2 if (L/2) − L3 ≤ 0. Then, we can show:

Lemma 28. Suppose that there exists δ ∈ (0, 1) such that

0 ≤ q1 ≤ δ   (36.22)

and either

Δ < 0 and h2(δ) ≤ 0   (36.23)

or

Δ ≥ 0 and h∞(δ) ≤ 0,   (36.24)

where Δ is given in (36.20). Then, the assertions (36.13)-(36.15) hold for the sequence {tn}.

Proof. Mathematical induction is used to show

0 ≤ qm ≤ δ   (36.25)

and

0 ≤ tm ≤ tm+1.   (36.26)


These estimates are true for m = 1 by the initial conditions and (36.22). Then, as in Lemma 27, estimates (36.13) and (36.14) hold for all values of m ≤ n − 1. Then, evidently, (36.25) holds if

(L/2)δ^{m−1}λ + L3(1 + δ + ... + δ^{m−2})λ + L6 + δL1(1 + δ + ... + δ^{m−1})λ + δL2 − δ ≤ 0,   (36.27)

or

hm(t) ≤ 0 at t = δ.   (36.28)

A relationship between two consecutive polynomials is needed. By the definition of the polynomials hm given by (36.17):

hm+1(t) = hm+1(t) − hm(t) + hm(t)
= (L/2)t^m λ + L3(1 + t + ... + t^{m−1})λ + L6 + L1 t(1 + t + ... + t^m)λ + L2 t − t
  + hm(t) − (L/2)t^{m−1}λ − L3(1 + t + ... + t^{m−2})λ − L6 − L1 t(1 + t + ... + t^{m−1})λ − L2 t + t
= hm(t) + p(t)t^{m−1}λ,   (36.29)

where the polynomial p is given by (36.18).

Case Δ ≥ 0. It follows from (36.29) that

hm+1(t) = hm(t) at t = δ.   (36.30)

Define

h∞(t) = lim_{m→∞} hm(t).   (36.31)

Then, by (36.17) and (36.31), the function h∞ is defined by (36.19). Hence, (36.28) holds if

h∞(t) ≤ 0 at t = δ,   (36.32)

which is true by (36.24).

Case Δ < 0. It follows that p(t) < 0 for all t ∈ M, so by (36.29)

hm+1(t) ≤ hm(t).   (36.33)

So, (36.28) holds if h2(δ) ≤ 0, which is true by (36.23). Therefore, the induction for (36.25) and (36.26) is completed. It follows that lim_{m→∞} tm = t∗.

4. Semi-Local Convergence

The following conditions (C) are used. Suppose:

(C1) There exist x0 ∈ D and λ > 0 such that range Sx0 = B2 and ‖x1 − x0‖ ≤ λ.


(C2) ‖Sx0⁻¹(H(x) − H(x0))‖ ≤ ϕ0(‖x − x0‖) for all x ∈ D. Set D0 = D ∩ U(x0, ρ).

(C3) ‖Sx0⁻¹(F′(x) − F′(y))‖ ≤ ϕ(‖x − y‖),
‖Sx0⁻¹(F′(x) − H(x))‖ ≤ ϕ1(‖x − x0‖) and
‖Sx0⁻¹(G(x) − G(y))‖ ≤ ϕ2(‖x − y‖)‖x − y‖ for all x, y ∈ D0.

(C4) The conditions of any of the lemmas of the previous section hold.

(C5) U[x0, t∗] ⊂ D (or U[x0, t∗∗] ⊂ D).

We prove a Banach-type lemma [6,10,13] for invertible operators.

Lemma 29. Let x0 ∈ B1 and R ∈ (0, ρ). Moreover, suppose Sx0 : B1 ⇒ B2 is onto and (C2) holds. Then, the following assertions hold: dom Sx⁻¹H(x0) = B1 and

‖Sx⁻¹H(x0)‖ ≤ 1 / (1 − ϕ0(‖x − x0‖))   (36.34)

for all x ∈ U(x0, R).

Proof. By hypothesis, dom E = B1, where E := Sx0⁻¹(H(x) − H(x0)). Choose x ∈ U(x0, R) and use (C2) to obtain, in turn, by the definition of ρ,

‖E‖ = ‖Sx0⁻¹(H(x) − H(x0))‖ ≤ ϕ0(‖x − x0‖) ≤ ϕ0(ρ) < 1,

so by Lemma 22 we get range (E + I)⁻¹ = B1 and

‖(E + I)⁻¹‖ ≤ 1 / (1 − ϕ0(‖x − x0‖)).   (36.35)

Notice that Sx0⁻¹ is a convex process, so Sx0⁻¹H(x) ⊇ Sx0⁻¹(H(x) − H(x0)) + Sx0⁻¹H(x0), and since 0 ∈ K, we get r ∈ Sx0⁻¹H(x0)r, hence (Sx0⁻¹H(x))r ⊇ (E + I)r. It then follows from Lemma 21 that range Sx0⁻¹H(x) ⊇ range (E + I) = B1 and

‖(Sx0⁻¹H(x))⁻¹‖ ≤ ‖(E + I)⁻¹‖ ≤ 1 / (1 − ϕ0(‖x − x0‖)).   (36.36)

By definition (36.7), we can obtain

(Sx0⁻¹(−H(x)))⁻¹r = {w ∈ B1 : H(x0)r + H(x)w ∈ K} = Sx⁻¹(−H(x0)r)   (36.37)

for all r ∈ B1. Moreover, by (36.5), Lemma 21 and range (E + I)⁻¹ = B1, we obtain

‖Sx⁻¹(−H(x0))‖ = ‖(Sx0⁻¹(−H(x)))⁻¹‖ < +∞,   (36.38)

since H(x0) is a linear operator, so by Lemma 21 and

‖Sx⁻¹H(x0)‖ = ‖Sx⁻¹(−H(x0))‖ < +∞

we deduce dom Sx⁻¹H(x0) = B1. Finally, (36.36) and (36.37) imply (36.34).

Next, we present the main semi-local convergence result using the conditions (C).

Theorem 60. Suppose the conditions (C) hold. Then, the sequence {xn} generated by method (36.4) is well defined in U(x0, t∗), remains in U(x0, t∗) for all n = 0, 1, 2, ... and converges to some x∗ ∈ U[x0, t∗] satisfying F(x∗) + G(x∗) ∈ K, where t∗ = limn→∞ tn.

Proof. Let us find a point x satisfying

F(xm) + G(xm) + H(xm)(x − xm) ∈ F(xm−1) + G(xm−1) + H(xm−1)(xm − xm−1) + K ⊆ K.   (36.39)

Since xm solves (36.4) at the previous step, F(xm−1) + G(xm−1) + H(xm−1)(xm − xm−1) ∈ K, so any x satisfying (36.39) is necessarily feasible for (36.4). Notice that (36.39) can be rewritten as

x − xm ∈ Sxm⁻¹[−F(xm) − G(xm) + F(xm−1) + G(xm−1) + H(xm−1)(xm − xm−1)],   (36.40)

so there exists an element x̄ of least norm satisfying

‖x̄ − xm‖ ≤ ‖Sxm⁻¹[−F(xm) − G(xm) + F(xm−1) + G(xm−1) + H(xm−1)(xm − xm−1)]‖
≤ ‖Sxm⁻¹H(x0)‖ [ ‖Sx0⁻¹((∫₀¹ (F′(xm−1 + θ(xm − xm−1)) − F′(xm−1)) dθ)(xm − xm−1))‖
  + ‖Sx0⁻¹((F′(xm−1) − H(xm−1))(xm−1 − xm))‖ + ‖Sx0⁻¹(G(xm−1) − G(xm))‖ ]
≤ [∫₀¹ ϕ(θ‖xm − xm−1‖) dθ + ϕ1(‖xm−1 − x0‖) + ϕ2(‖xm − xm−1‖)] ‖xm − xm−1‖ / (1 − ϕ0(‖xm − x0‖)),   (36.41)

where we also used Lemma 24, Lemma 29 and the condition (C3). Next, we shall show that

‖xm+1 − xm‖ ≤ tm+1 − tm,   (36.42)


where the sequence {tm} is defined in (36.8). Estimate (36.42) is true for m = 0 by (36.8) and (C1), since ‖x1 − x0‖ ≤ λ = t1 − t0. Assuming ‖xn − xn−1‖ ≤ tn − tn−1 for n = 1, 2, ..., m, we get

‖xn − x0‖ ≤ ∑_{i=1}^{n} ‖xi − xi−1‖ ≤ ∑_{i=1}^{n} (ti − ti−1) = tn.   (36.43)

It follows from (36.8), (36.41) (for x̄ = xm+1), (36.42) and (36.43) that

‖xm+1 − xm‖ ≤ [∫₀¹ ϕ(θ(tm − tm−1)) dθ + ϕ1(tm−1) + ϕ2(tm − tm−1)] (tm − tm−1) / (1 − ϕ0(tm)) = tm+1 − tm   (36.44)

and

‖xm − x0‖ ≤ tm ≤ t∗.   (36.45)

But we know that limm→∞ tm = t∗, so (36.42) gives

∑_{m=m0}^{∞} ‖xm+1 − xm‖ ≤ ∑_{m=m0}^{∞} (tm+1 − tm) = t∗ − tm0 < +∞

for all m0 = 1, 2, .... It follows that the sequence {xm} is Cauchy in U(x0, t∗) and as such converges to some x∗ ∈ U[x0, t∗]. Moreover, we have

‖xm − x∗‖ ≤ t∗ − tm.

Furthermore, by (36.4),

F(xm) + G(xm) + H(xm)(xm+1 − xm) ∈ K   (36.46)

for all m = 0, 1, .... By letting m → ∞ in (36.46) and using the differentiability of F and the continuity of G, we conclude F(x∗) + G(x∗) ∈ K.

Remark 43. Let us specialize the functions ϕ so that we can compare our results to the ones in [19]. Let

ϕ0(t) = L0 t^{b1} + ℓ0, L0 > 0, 0 ≤ ℓ0 < 1, 0 < b1 ≤ 1,
ϕ(t) = L t^{b2}, L > 0, 0 < b2 ≤ 1,
ϕ1(t) = c1 t^{b3} + c2, c1 ≥ 0, c2 ≥ 0, 0 < b3 ≤ 1,
ϕ2(t) = c3 t, c3 ≥ 0.

Define the scalar sequence {un} by u0 = 0, u1 = t1 and

un+1 = un + [L(un − u_{n−1})^{b2+1}/(b2 + 1) + (c1 u_{n−1}^{b3} + c2)(un − u_{n−1}) + c3(un − u_{n−1})²] / (1 − (L0 un^{b1} + ℓ0)).   (36.47)

The corresponding functions in [19] are ϕ̄0(t) = ϕ0(t), ϕ̄(t) = L̄t^{b2}, ϕ̄1(t) = c̄1 t^{b3} + c̄2 and ϕ̄2(t) = c̄3 t, and the sequence {ūn} is defined by ū0 = 0, ū1 = λ and

ūn+1 = ūn + [L̄(ūn − ū_{n−1})^{b2+1}/(b2 + 1) + (c̄1 ū_{n−1}^{b3} + c̄2)(ūn − ū_{n−1}) + c̄3(ūn − ū_{n−1})²] / (1 − (L0 ūn^{b1} + ℓ0)).   (36.48)

But the functions ϕ depend on ρ and D0 ⊂ D, whereas the functions ϕ̄ are determined using D. Hence, we have for all t ∈ M0

ϕ(t) ≤ ϕ̄(t),   (36.49)

ϕ1(t) ≤ ϕ̄1(t)   (36.50)

and

ϕ2(t) ≤ ϕ̄2(t).   (36.51)

Then, by induction and (36.47)-(36.51),

un ≤ ūn,   (36.52)

0 ≤ un+1 − un ≤ ūn+1 − ūn   (36.53)

and

u∗ = lim_{n→∞} un ≤ ū∗ = lim_{n→∞} ūn.   (36.54)

These limits exist [19] provided, respectively, that there exist d > λ and d1 > λ such that

λ ≤ d(1 − d0)   (36.55)

and

λ ≤ d1(1 − d2),   (36.56)

where

d0 = [L̄λ^{b2}/(b2 + 1) + c̄2 + c̄1 λ^{b3} + c̄3 d] / (1 − (L0 d^{b1} + ℓ0))
   ≥ d2 = [Lλ^{b2}/(b2 + 1) + c2 + c1 λ^{b3} + c3 d1] / (1 − (L0 d1^{b1} + ℓ0)).   (36.57)

Hence, the semi-local convergence criterion (36.56) is weaker than (36.55). So, we extend the results in [19], which in turn extended the ones in [8,11,14,20]. A direct study of the sequence {un} may lead to an even weaker condition (see the approach in Lemma 28). Hence, we justified the advantages claimed in the introduction.
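The domination un ≤ ūn can be observed numerically. The sketch below is our own illustration with b1 = b2 = b3 = 1 and sample constants, the barred (D-based) ones chosen larger than the unbarred (D0-based) ones, as in (36.49)-(36.51).

```python
# Tabulate (36.47) and (36.48) with b1 = b2 = b3 = 1 (sample constants).
L0, ell0, lam = 1.0, 0.1, 0.05
L,  c1,  c2,  c3  = 1.0, 0.1, 0.1, 0.1      # tighter constants (on D0)
Lb, c1b, c2b, c3b = 1.5, 0.15, 0.15, 0.15   # looser constants (on D)

def next_term(prev2, prev1, Lk, a1, a2, a3):
    s = prev1 - prev2
    num = Lk / 2.0 * s**2 + (a1 * prev2 + a2) * s + a3 * s**2
    return prev1 + num / (1.0 - (L0 * prev1 + ell0))

u, ub = [0.0, lam], [0.0, lam]
for n in range(1, 40):
    u.append(next_term(u[n - 1], u[n], L, c1, c2, c3))
    ub.append(next_term(ub[n - 1], ub[n], Lb, c1b, c2b, c3b))
```

Both sequences converge, and un ≤ ūn holds termwise with a strictly smaller limit for the tighter constants, which is the content of (36.52)-(36.54).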

5. Numerical Experiments

We use an example to show that the crucial inequalities (36.49)-(36.51) can be strict, say, when the functions are constant.

Example 51. Let X = Y = ℝ, x0 = 1, Ω = [x0 − (1 − p), x0 + (1 − p)], p ∈ (0, 1), and let F : Ω −→ Y be defined by

F(x) = x³ − p.

Then, we get ‖F′(x0)⁻¹‖ = 1/3,

‖F′(x0)⁻¹(F′(y) − F′(x0))‖ = |y + x0||y − x0| ≤ (|y − x0| + 2|x0|)|y − x0| ≤ (3 − p)|y − x0|

and

‖F′(x0)⁻¹(F′(y) − F′(x))‖ = |y + x||y − x| ≤ (|y − x0| + |x − x0| + 2|x0|)|y − x| ≤ 2(1 + 1/ℓ0)|y − x|,

since D0 = D ∩ U(1, 1/ℓ0) = U(1, 1/ℓ0), ‖x − x0‖ ≤ 1/ℓ0 and ‖y − x0‖ ≤ 1/ℓ0. Then, we can set η = (1 − p)/3, ℓ0 = 3 − p and ℓ = 2(1 + 1/ℓ0).
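The constants of Example 51 can be checked by sampling (our own verification). For F(x) = x³ − p one has F′(x0)⁻¹(F′(y) − F′(x)) = y² − x², so the center constant is the supremum of |y + x0| over Ω and the two-point constant is the supremum of |y + x| over the set used.

```python
# Sampling check for Example 51: F(x) = x**3 - p, x0 = 1, Omega = [p, 2 - p].
p = 0.5
x0 = 1.0
ell0 = 3.0 - p                       # claimed center constant on Omega
ell = 2.0 * (1.0 + 1.0 / ell0)      # claimed two-point constant on D0

n = 200
omega = [p + (2.0 - 2.0 * p) * i / n for i in range(n + 1)]
d0 = [x0 - 1.0 / ell0 + (2.0 / ell0) * i / n for i in range(n + 1)]

max_center = max(abs(y + x0) for y in omega)               # sup |y + x0| on Omega
max_d0 = max(abs(x + y) for x in d0 for y in d0)           # sup |y + x| on D0
max_omega = max(abs(x + y) for x in omega for y in omega)  # sup |y + x| on Omega
```

The suprema match ℓ0 = 2.5 and ℓ = 2.8 for p = 1/2, while the two-point constant computed over the whole Ω is 2(2 − p) = 3 > ℓ: restricting to D0 makes the Lipschitz function strictly tighter, which is the point of (36.49)-(36.51).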

6. Conclusion

The convergence domain of the Newton-like method for solving generalized inclusion problems is small in general. We develop a technique that determines a subset of the original domain which still contains the iterates. On this subset, the majorant functions are at least as tight as the ones used before, leading to a finer semi-local convergence analysis under the same conditions as before. Hence, the applicability of the method is extended.

References

[1] Argyros, I. K., Hilout, S., On the convergence of Newton-type methods under mild differentiability conditions, Numer. Algor., 52, (2009), 701-726.
[2] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer, New York (2008).
[3] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Math., 169, (2004), 315-332.
[4] Argyros, I. K., Magreñán, Á. A., Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.
[5] Argyros, I. K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume IV, Nova Publishers, NY, 2021.


[6] Dennis, Jr., J. E., On the convergence of Newton-like methods, in: Numerical Methods for Nonlinear Algebraic Equations (Proc. Conf., Univ. Essex, Colchester, 1969), pp. 163-181, Gordon and Breach, London (1970).
[7] Ferreira, O. P., Silva, G. N., Inexact Newton's method for nonlinear functions with values in a cone, Appl. Anal., 98(8), (2019), 1461-1477.
[8] Gutiérrez, J. M., Hernández, M. A., Newton's method under weak Kantorovich conditions, IMA J. Numer. Anal., 20(4), (2000), 521-532.
[9] He, Y., Sun, J., Error bounds for degenerate cone inclusion problems, Math. Oper. Res., 30(3), (2005), 701-717.
[10] Kantorovich, L. V., Akilov, G. P., Functional Analysis in Normed Spaces, The Macmillan Co., New York (1964).
[11] Keller, H. B., Newton's method under mild differentiability conditions, J. Comput. System Sci., 4(1), (1970), 15-28.
[12] Li, C., Ng, K. F., Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems, J. Math. Anal. Appl., 389(1), (2012), 469-485.
[13] Nashed, M. Z., Chen, X., Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math., 66(1), (1993), 235-257.
[14] Piétrus, A., Jean-Alexis, C., Newton-secant method for functions with values in a cone, Serdica Math. J., 39(3-4), (2013), 271-286.
[15] Robinson, S. M., Extension of Newton's method to nonlinear functions with values in a cone, Numer. Math., 19, (1972), 341-347.
[16] Robinson, S. M., Normed convex processes, Trans. Amer. Math. Soc., 174, (1972), 127-140.
[17] Rockafellar, R. T., Convex Analysis, Princeton Mathematical Series, No. 28, Princeton University Press, Princeton, N.J. (1970).
[18] Silva, G. N., Santos, P. S. M., Souza, S. S., Extended Newton-type method for nonlinear functions with values in a cone, Comput. Appl. Math., 37(4), (2018), 5082-5097.
[19] Santos, P. S. M., Silva, G. N., Silva, R. C. M., Newton-type method for solving generalized inclusion, Numerical Algorithms, 88, (2021), 1811-1829.
[20] Rokne, J., Newton's method under mild differentiability conditions with error analysis, Numer. Math., 18, (1971), 401-412.

Chapter 37

King-Type Methods

1. Introduction

Consider the nonlinear equation F(x) = 0.

(37.1)

Here F : M ⊂ S → S is a nonlinear operator with M ≠ ∅, where S = R or C. Throughout the chapter U(x_0, ρ) = {x ∈ S : |x − x_0| < ρ} and U[x_0, ρ] = {x ∈ S : |x − x_0| ≤ ρ} for some ρ > 0. We study the King-type method defined for n = 0, 1, 2, ... by

y_n = x_n − F'(x_n)^{-1}F(x_n),
x_{n+1} = y_n − A_n^{-1}B_n F'(x_n)^{-1}F(y_n),  (37.2)

where α ∈ S, A_n = F(x_n) + (α − 2)F(y_n) and B_n = F(x_n) + αF(y_n). This method is of order four and was studied in [2]-[7] using Taylor expansions and assumptions on the derivatives of F of order up to five. For α ∈ [0, 2] the convergence of this method was studied in [2]-[7], but the semi-local convergence, which seems to be more interesting, has not been studied. That is why we study it here. Our convergence analysis is not based on Taylor expansions (unlike the earlier studies [2]-[7]), so we do not need assumptions on the higher-order derivatives of the operator involved. For example, let S = R and D = [−1/2, 3/2], and define f on D by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, f(0) = 0.

Then, we have f(1) = 0 and f'''(t) = 6 log t² + 60t² − 24t + 22. Obviously f'''(t) is not bounded on D, so the convergence of King-type methods is not guaranteed by the analyses in [2]-[7]. Our technique can also be applied to other methods and relevant topics along the same lines [1]-[7]. The chapter contains the majorizing sequences in Section 2, the semi-local convergence analysis in Section 3, numerical experiments in Section 4 and the conclusion in Section 5.
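To make the scheme concrete, method (37.2) can be sketched in a few lines of code; the test equation x³ − 0.95 = 0, the starting point and the value of α are illustrative choices, not data from the chapter.

```python
def king_type(F, dF, x0, alpha=2.0, tol=1e-12, max_iter=50):
    """King-type method (37.2) for a scalar equation F(x) = 0.

    y_n     = x_n - F'(x_n)^{-1} F(x_n)
    x_{n+1} = y_n - A_n^{-1} B_n F'(x_n)^{-1} F(y_n),
    A_n = F(x_n) + (alpha - 2) F(y_n),  B_n = F(x_n) + alpha F(y_n).
    """
    x = x0
    for _ in range(max_iter):
        Fx = F(x)
        if Fx == 0:                     # already at a root
            return x
        y = x - Fx / dF(x)              # Newton sub-step
        Fy = F(y)
        A = Fx + (alpha - 2.0) * Fy
        B = Fx + alpha * Fy
        x_new = y - (B / A) * (Fy / dF(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative problem (not from the chapter): F(x) = x^3 - 0.95, x0 = 1.
root = king_type(lambda x: x**3 - 0.95, lambda x: 3.0 * x**2, x0=1.0)
```

For α = 2 the correction A_n reduces to F(x_n), recovering the classical King choice.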


2. Majorizing Sequences

Let L_0, L and η be positive parameters. It shall be shown in Section 3 that the scalar sequence {t_n} is majorizing for method (37.2), where

t_0 = 0, s_0 = η,
t_{n+1} = s_n + L(1 + L_0 t_n + (|α|L/2)(s_n − t_n))(s_n − t_n)² / [2(1 − (L/2)|α − 2|(s_n − t_n) − L_0 t_n)(1 − L_0 t_n)],  (37.3)
s_{n+1} = t_{n+1} + [(1 + (L_0/2)(s_n + t_{n+1}))(t_{n+1} − s_n) + (L/2)(s_n − t_n)²] / (1 − L_0 t_{n+1}).
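The recursion (37.3) is easy to tabulate; the sketch below does so, with the parameter values in the demo call taken from the later Example 52 (α = 2, p = 0.95) purely for illustration.

```python
def majorizing_37_3(L0, L, eta, alpha, n_terms=10):
    """Tabulate the majorizing sequences {t_n}, {s_n} of (37.3)."""
    t, s = 0.0, eta
    ts, ss = [t], [s]
    for _ in range(n_terms):
        dn = s - t
        t_next = s + (L * (1.0 + L0 * t + abs(alpha) * L / 2.0 * dn) * dn**2) / (
            2.0 * (1.0 - L / 2.0 * abs(alpha - 2.0) * dn - L0 * t) * (1.0 - L0 * t))
        s_next = t_next + ((1.0 + L0 / 2.0 * (s + t_next)) * (t_next - s)
                           + L / 2.0 * dn**2) / (1.0 - L0 * t_next)
        t, s = t_next, s_next
        ts.append(t)
        ss.append(s)
    return ts, ss

# Demo with the parameter values of Example 52 (alpha = 2, p = 0.95).
L0 = 3.0 - 0.95
ts, ss = majorizing_37_3(L0, 2.0 * (1.0 + 1.0 / L0), eta=0.05 / 3.0, alpha=2.0, n_terms=5)
```

The interlacing t_n ≤ s_n ≤ t_{n+1} can be checked directly on the output.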

Next, we present a result on the convergence of sequence {t_n}.

Lemma 30. Suppose:
2L_0 t_n + L|α − 2|(s_n − t_n) < 1.  (37.4)
Then, sequence {t_n} is nondecreasing and bounded from above by 1/(2L_0), and as such it converges to its unique least upper bound t* ∈ [0, 1/(2L_0)].

Proof. It follows from the definition of sequence {t_n} and (37.4) that this sequence is nondecreasing and bounded from above by 1/(2L_0), so lim_{n→∞} t_n = t*.

The next result provides conditions that are stronger than (37.4) but easier to verify. It is convenient to define the parameters

a = L(1 + (|α|L/2)η)η / (2(1 − (L/2)|α − 2|η)),
b = [(1 + (L_0/2)(η + t_1))(t_1 − η) + (L/2)η²] / (1 − L_0 t_1),
c = max{a, b},

and the polynomial on the interval [0, 1]

p(t) = 3|α|L²ηt³ − 3|α|L²ηt + 4L_0 t² + 11Lt − 11L.

It follows by the intermediate value theorem that p has roots in (0, 1). Indeed, by these definitions one gets p(0) = −11L < 0 and p(1) = 4L_0 > 0. Denote the smallest such root by δ, and set

d = (1 − (2L_0 + L|α − 2|)η) / (1 − L|α − 2|η).

Lemma 31. Suppose:
0 ≤ c ≤ δ ≤ d, (2L_0 + L|α − 2|)η < 1 and L_0 t_1 < 1.  (37.5)
Then, the conclusions of Lemma 30 hold for sequence {t_n}, with t* ∈ [0, η/(1 − δ)].


Proof. Mathematical induction is used to show

(T_m^{(1)}): 0 ≤ L(1 + L_0 t_m + (|α|L/2)(s_m − t_m))(s_m − t_m) / [2(1 − (L/2)|α − 2|(s_m − t_m) − L_0 t_m)(1 − L_0 t_m)] ≤ δ,  (37.6)

(T_m^{(2)}): 0 ≤ [(1 + (L_0/2)(s_m + t_{m+1}))(t_{m+1} − s_m) + (L/2)(s_m − t_m)²] / (1 − L_0 t_{m+1}) ≤ δ(s_m − t_m),  (37.7)

(T_m^{(3)}): 0 ≤ 1/(1 − (L/2)|α − 2|(s_m − t_m) − L_0 t_m) ≤ 2,  (37.8)

(T_m^{(4)}): 0 ≤ 1/(1 − L_0 t_m) ≤ 2,  (37.9)

and

(T_m^{(5)}): 0 ≤ t_m ≤ s_m ≤ t_{m+1}.  (37.10)

These estimates hold for m = 0 by (37.3) and (37.5). Then, it follows that 0 ≤ t_1 − s_0 ≤ δη, s_1 − t_1 ≤ δη and t_1 ≤ (1 − δ²)η/(1 − δ) < η/(1 − δ). Assume that the (T_m) estimates hold for all integers m ≤ n − 1. We can also use the estimates 0 ≤ s_m − t_m ≤ δ^mη and t_{m+1} ≤ (1 − δ^{m+2})η/(1 − δ) < η/(1 − δ). Since 1 + L_0 t_m ≤ 1 + 1/2 = 3/2 and (37.8)-(37.10) hold, by these induction hypotheses the (T_m) estimates hold if, instead of (37.6) and (37.7), we show

(T_m^{(1)})': 0 ≤ L(3 + |α|L(s_m − t_m))(s_m − t_m) / (2(1 − L_0 t_m)) ≤ δ  (37.11)

and

(T_m^{(2)})': 0 ≤ L(11 + 3|α|L(s_m − t_m))(s_m − t_m) / (4(1 − L_0 t_{m+1})) ≤ δ.  (37.12)

Evidently, (37.11) and (37.12) hold if

11Lδ^mη + 3|α|L²(δ^mη)² + 4L_0 δ(1 + δ + ... + δ^m)η − 4δ ≤ 0  (37.13)

or

h_m(t) ≤ 0 at t = δ,  (37.14)

where the recurrent functions h_m are defined on [0, 1) by

h_m(t) = 3|α|L²η²t^{2m−1} + 11Lt^{m−1}η + 4L_0(1 + t + ... + t^m)η − 4.  (37.15)

A relationship between two consecutive functions h_m is needed. By (37.15) it follows that

h_{m+1}(t) = h_{m+1}(t) − h_m(t) + h_m(t)
= 3|α|L²t^{2m+1}η² + 11Lt^mη + 4L_0(1 + t + ... + t^{m+1})η − 4 + h_m(t) − 3|α|L²η²t^{2m−1} − 11Lt^{m−1}η − 4L_0(1 + t + ... + t^m)η + 4
= h_m(t) + [3|α|L²t^{m+2}η − 3|α|L²t^mη + 4L_0t² + 11Lt − 11L]t^{m−1}η
≥ h_m(t) + p(t)t^{m−1}η,  (37.16)

since t^{m+2} − t^m ≥ t³ − t on [0, 1). So, h_{m+1}(t) ≥ h_m(t) at t = δ, because p(δ) = 0. Define the function

h_∞(t) = 4L_0η/(1 − t) − 4.  (37.17)

So, (37.14) holds if

h_∞(t) ≤ 0 at t = δ,  (37.18)

which is true by (37.5). We also get

2L_0 t_m + L|α − 2|(s_m − t_m) ≤ 2L_0η/(1 − δ) + L|α − 2|η < 1,

by the second condition in (37.5). Therefore, (37.7)-(37.10) hold too, where we also used (37.3). The induction for (37.6)-(37.10) is completed. It follows that sequence {t_m} is nondecreasing and bounded above by η/(1 − δ), so t* = lim_{m→∞} t_m ∈ [0, η/(1 − δ)].

3. Semi-Local Convergence

The following conditions (A) are used. Suppose:
(A1) There exist x_0 ∈ M and η > 0 such that F'(x_0) ≠ 0 and |F'(x_0)^{-1}F(x_0)| ≤ η.
(A2) |F'(x_0)^{-1}(F'(z) − F'(x_0))| ≤ L_0|z − x_0| for all z ∈ M. Set M_0 = M ∩ U(x_0, 1/L_0).
(A3) |F'(x_0)^{-1}(F'(w) − F'(v))| ≤ L|w − v| for all v, w ∈ M_0.
(A4) The conditions of Lemma 30 or Lemma 31 hold, and
(A5) U[x_0, t*] ⊂ M.
Next, we present the semi-local convergence result for method (37.2) under the conditions (A).

Theorem 61. Suppose conditions (A) hold. Then, sequence {x_n} is well defined, remains in U(x_0, t*) for all n = 0, 1, 2, ... and there exists x* ∈ U[x_0, t*] such that F(x*) = 0.


Proof. Mathematical induction is employed to show

|y_m − x_m| ≤ s_m − t_m  (37.19)

and

|x_{m+1} − y_m| ≤ t_{m+1} − s_m.  (37.20)

By (A1) and the definition of sequence {t_m} we have

|y_0 − x_0| = |F'(x_0)^{-1}F(x_0)| ≤ η = s_0 − t_0,  (37.21)

so (37.19) holds for m = 0 and y_0 ∈ U(x_0, t*). Let u ∈ U(x_0, t*). Then, by (A1) and (A2), it follows that

|F'(x_0)^{-1}(F'(u) − F'(x_0))| ≤ L_0|u − x_0| ≤ L_0 t* < 1,  (37.22)

so F'(u) ≠ 0 and

|F'(u)^{-1}F'(x_0)| ≤ 1/(1 − L_0|u − x_0|),  (37.23)

by a standard lemma on invertible linear operators due to Banach [1]. Next, we show A_m ≠ 0. We have

|[−F'(x_0)(y_m − x_m)]^{-1}(F(x_m) + (α − 2)F(y_m) + F'(x_0)(y_m − x_m))|
≤ |y_m − x_m|^{-1}[|α − 2||F'(x_0)^{-1}F(y_m)| + |F'(x_0)^{-1}(F'(x_0) − F'(x_m))||y_m − x_m|]
≤ |y_m − x_m|^{-1}[|α − 2|(L/2)|y_m − x_m|² + L_0|x_m − x_0||y_m − x_m|]
= (L/2)|α − 2||y_m − x_m| + L_0|x_m − x_0|
≤ (L/2)|α − 2|(s_m − t_m) + L_0 t_m < 1,  (37.24)

by either of the Lemmas, so

|A_m^{-1}F'(x_0)| ≤ 1/[|y_m − x_m|(1 − (L/2)|α − 2|(s_m − t_m) − L_0 t_m)],  (37.25)

where we also used

|F'(x_0)^{-1}F(y_m)| = |F'(x_0)^{-1}(F(y_m) − F(x_m) − F'(x_m)(y_m − x_m))| ≤ (L/2)|y_m − x_m|² ≤ (L/2)(s_m − t_m)²  (37.26)

by the definition of method (37.2). It follows from the second sub-step of method (37.2), (37.23) (for u = x_m), (37.25) and (37.26) that

|x_{m+1} − y_m| ≤ |A_m^{-1}F'(x_0)||F'(x_0)^{-1}B_m||F'(x_m)^{-1}F'(x_0)||F'(x_0)^{-1}F(y_m)|
≤ L(1 + L_0 t_m + (|α|L/2)(s_m − t_m))(s_m − t_m)² / [2(1 − (L/2)|α − 2|(s_m − t_m) − L_0 t_m)(1 − L_0 t_m)]
= t_{m+1} − s_m,  (37.27)


so (37.20) holds, where we also used

|F'(x_0)^{-1}(F(x_m) + αF(y_m))| ≤ |F'(x_0)^{-1}F(x_m)| + |α||F'(x_0)^{-1}F(y_m)|
= |F'(x_0)^{-1}(F'(x_m)(y_m − x_m))| + |α||F'(x_0)^{-1}F(y_m)|
≤ |F'(x_0)^{-1}(F'(x_0) + (F'(x_m) − F'(x_0)))||y_m − x_m| + |α|(L/2)|y_m − x_m|²
≤ (1 + L_0|x_m − x_0|)|y_m − x_m| + (|α|L/2)|y_m − x_m|²
≤ (1 + L_0 t_m)(s_m − t_m) + (|α|L/2)(s_m − t_m)².  (37.28)

We also have

|x_{m+1} − x_0| ≤ |x_{m+1} − y_m| + |y_m − x_0| ≤ t_{m+1} − s_m + s_m − t_0 = t_{m+1} < t*,  (37.29)

so x_{m+1} ∈ U(x_0, t*). Moreover, we can write

|F'(x_0)^{-1}F(x_{m+1})| = |F'(x_0)^{-1}((F(x_{m+1}) − F(y_m)) + F(y_m))|
≤ |∫_0^1 F'(x_0)^{-1}F'(y_m + θ(x_{m+1} − y_m))(x_{m+1} − y_m)dθ| + |F'(x_0)^{-1}F(y_m)|
≤ |∫_0^1 F'(x_0)^{-1}(F'(y_m + θ(x_{m+1} − y_m)) − F'(x_0))dθ||x_{m+1} − y_m| + |F'(x_0)^{-1}F'(x_0)||x_{m+1} − y_m| + (L/2)|y_m − x_m|²
≤ (1 + (L_0/2)(s_m + t_{m+1}))(t_{m+1} − s_m) + (L/2)(s_m − t_m)²,  (37.30)

since

L_0 ∫_0^1 |y_m − x_0 + θ(x_{m+1} − y_m)|dθ ≤ L_0 ∫_0^1 [(1 − θ)|y_m − x_0| + θ|x_{m+1} − x_0|]dθ = (L_0/2)(|y_m − x_0| + |x_{m+1} − x_0|) ≤ (L_0/2)(s_m + t_{m+1}).  (37.31)


Furthermore, using the first sub-step of method (37.2), (37.23) (for u = x_{m+1}), (37.3), (37.30) and (37.31), we obtain

|y_{m+1} − x_{m+1}| ≤ |F'(x_{m+1})^{-1}F'(x_0)||F'(x_0)^{-1}F(x_{m+1})| ≤ s_{m+1} − t_{m+1},  (37.32)

showing (37.19) with m + 1 replacing m. Notice also that

|y_{m+1} − x_0| ≤ |y_{m+1} − x_{m+1}| + |x_{m+1} − x_0| ≤ s_{m+1} − t_{m+1} + t_{m+1} − t_0 = s_{m+1} < t*,

so y_{m+1} ∈ U(x_0, t*). Hence, sequence {x_m} is fundamental in the complete space S (since {t_m} is convergent), so it converges to some x* ∈ U[x_0, t*]. By letting m → ∞ in (37.30) and using the continuity of F we get F(x*) = 0.

The uniqueness of x* is given next.

Proposition 18. Assume:
(1) The point x* ∈ U[x_0, t*] solves (37.1).
(2) There exists γ ≥ t* such that
L_0(γ + t*)/2 < 1.  (37.33)
Set M_1 = M ∩ U[x_0, γ]. Then, x* is the only solution of equation (37.1) in the domain M_1.

Proof. Consider x̃ ∈ M_1 with F(x̃) = 0. Define Q = ∫_0^1 F'(x̃ + θ(x* − x̃))dθ. Using (A2) and (37.33),

|F'(x_0)^{-1}(Q − F'(x_0))| ≤ ∫_0^1 L_0((1 − θ)|x̃ − x_0| + θ|x* − x_0|)dθ ≤ L_0(γ + t*)/2 < 1,  (37.34)

so x̃ = x*, by the invertibility of Q and

Q(x̃ − x*) = F(x̃) − F(x*) = 0 − 0 = 0.  (37.35)

4. Numerical Experiments

Example 52. Let x_0 = 1, M = [x_0 − (1 − p), x_0 + (1 − p)] for p ∈ (0, 1), and let F : M → R be defined by F(x) = x³ − p. Then, we get |F'(x_0)^{-1}| = 1/3 and

|F'(x_0)^{-1}(F'(y) − F'(x_0))| = |y² − x_0²| = |y + x_0||y − x_0| ≤ (|y − x_0| + 2|x_0|)|y − x_0| ≤ (3 − p)|y − x_0|


and

|F'(x_0)^{-1}(F'(y) − F'(x))| = |y² − x²| ≤ |y + x||y − x| ≤ (|y − x_0| + |x − x_0| + 2|x_0|)|y − x| ≤ 2(1 + 1/L_0)|y − x|,

since |x − x_0| ≤ 1/L_0 and |y − x_0| ≤ 1/L_0 for x, y ∈ M_0 = M ∩ U(x_0, 1/L_0). Then, we can pick η = (1 − p)/3, L_0 = 3 − p and L = 2(1 + 1/L_0). For α = 2 and p = 0.95 we obtain c = a = 0.0260, b = 8.8504e−04, d = 0.9317, δ = 0.5865 and L_0t_1 = 0.0351 < 1. Hence, (37.5) holds.
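A minimal sketch that recomputes the quantities of Example 52 (the root δ of p is found by bisection; because of rounding, and because the chapter does not spell out every intermediate step, the computed δ need not coincide with the reported value, but the chain 0 ≤ c ≤ δ ≤ d of condition (37.5) can still be checked):

```python
p_val = 0.95
alpha = 2.0
eta = (1.0 - p_val) / 3.0
L0 = 3.0 - p_val
L = 2.0 * (1.0 + 1.0 / L0)

a = L * (1.0 + abs(alpha) * L / 2.0 * eta) * eta / (
    2.0 * (1.0 - L / 2.0 * abs(alpha - 2.0) * eta))
t1 = eta + a * eta                      # first majorizing step of (37.3)
b = ((1.0 + L0 / 2.0 * (eta + t1)) * (t1 - eta) + L / 2.0 * eta**2) / (1.0 - L0 * t1)
c = max(a, b)
d = (1.0 - (2.0 * L0 + L * abs(alpha - 2.0)) * eta) / (1.0 - L * abs(alpha - 2.0) * eta)

def p_poly(t):
    # p(t) = 3|alpha| L^2 eta (t^3 - t) + 4 L0 t^2 + 11 L (t - 1)
    return 3.0 * abs(alpha) * L**2 * eta * (t**3 - t) + 4.0 * L0 * t**2 + 11.0 * L * (t - 1.0)

lo, hi = 0.0, 1.0                       # p(0) < 0 < p(1): bisect for a root delta
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if p_poly(mid) < 0.0:
        lo = mid
    else:
        hi = mid
delta = 0.5 * (lo + hi)
```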

5. Conclusion

In this chapter we are interested in the semi-local convergence of a King-type method for solving equations defined on the real line or in the complex plane. In earlier studies the convergence of this method was shown by assuming bounds on higher-order derivatives of the operator that do not appear in the method, which limits its applicability. Moreover, no computable error bounds or uniqueness-of-solution results were given there. We address all these problems using only the first derivative, which does appear in the method. Hence, we extend the applicability of the method. Our technique can be used to obtain the convergence of other similar higher-order methods under assumptions on only the first derivative of the operator involved.

References

[1] Argyros, I. K., Computational Theory of Iterative Methods, Volume 15, Elsevier Science, San Diego, CA, USA, 2007.
[2] Argyros, I. K., Ren, H., On the convergence of efficient King-Werner-type methods of order 1 + √2, Journal of Computational and Applied Mathematics, 285:169-180, 2015.
[3] Cárdenas, E., Castro, R., Sierra, W., A Newton-type midpoint method with high efficiency index, Journal of Mathematical Analysis and Applications, 491(2):124381, 2020.
[4] Ezquerro Fernández, J. A., Hernández Verón, M. A., The classic theory of Kantorovich, pages 1-38, Springer International Publishing, Cham, 2017.
[5] King, R. F., A family of fourth order methods for nonlinear equations, SIAM J. Numer. Anal., 10(5):876-879, 1973.
[6] Kumar, S., Ball comparison between King-like and Jarratt-like family of methods under the same set of conditions, TWMS J. Appl. Eng. Math., 2022.
[7] Werner, W., Some supplementary results on the 1 + √2 order method for the solution of nonlinear equations, Numer. Math., 38(3):383-392, 1981/82.

Chapter 38

Single Step Third Order Method

1. Introduction

Let F : D ⊂ S −→ S be a differentiable function, where S = R or S = C and D is an open nonempty set. We are interested in computing a solution x∗ of equation F(x) = 0.

(38.1)

Ideally, the point x* should be available in closed form, but this is attained only in special cases. That explains why most solution methods for (38.1) are iterative. There is a plethora of local convergence results for highly convergent iterative methods based on Taylor expansions, requiring the existence of derivatives of order higher than one that are not present in these methods. But there is very little work on the semi-local convergence of these methods, or on the local convergence using only the derivative of the operator appearing in them. We address these issues using a method by S. Kumar et al. [6], defined by

x_0 ∈ D, x_{n+1} = x_n − A_n^{-1}F(x_n),  (38.2)

where A_n = F'(x_n) − γF(x_n), γ ∈ S. It was shown in [6] that the order of this method is three and, for e_n = x_n − x*,

e_{n+1} = (γ − a_2)e_n² + O(e_n³),  (38.3)

where a_m = (1/m!)F^{(m)}(x*)/F'(x*), m = 2, 3, .... It follows that this convergence analysis requires the existence of F', F'' and F''', although F'' and F''' do not appear in method (38.2). So, these assumptions limit the applicability of the method. Moreover, no computable error bounds on |x_n − x*| or uniqueness-of-solution results are given. For example, let D = [−0.5, 1.5] ⊂ R and define λ on D by

λ(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, λ(0) = 0.

Then, we get x* = 1 and

λ'''(t) = 6 log t² + 60t² − 24t + 22.


Obviously λ'''(t) is not bounded on D. So, the convergence of method (38.2) is not guaranteed by the previous analysis in [6]. We address all these concerns by using conditions only on F', the only derivative that appears in method (38.2), in both the semi-local and the local case. This way, we expand the applicability of the method. Our technique is very general, so it can be used to extend the applicability of other methods along the same lines [2]-[10]. Throughout this chapter U(x, r) = {y : |x − y| < r} and U[x, r] = {y : |x − y| ≤ r} for x ∈ S and r > 0. The rest of the chapter is set up as follows: in Section 2 we present the semi-local analysis, in Section 3 the local analysis, and in Section 4 the numerical experiments.
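Method (38.2) itself is a one-liner per iteration; the sketch below uses the illustrative equation e^x − 1 = 0 (not one of the chapter's examples in this role), for which a_2 = F''(0)/(2F'(0)) = 1/2, so the choice γ = 1/2 cancels the quadratic error term in (38.3).

```python
import math

def third_order(F, dF, x0, gamma, tol=1e-12, max_iter=50):
    """Method (38.2): x_{n+1} = x_n - A_n^{-1} F(x_n), A_n = F'(x_n) - gamma*F(x_n)."""
    x = x0
    for _ in range(max_iter):
        Fx = F(x)
        x_new = x - Fx / (dF(x) - gamma * Fx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative equation: F(x) = e^x - 1 with solution x* = 0.
root = third_order(lambda x: math.exp(x) - 1.0, math.exp, x0=0.5, gamma=0.5)
```

With γ = 0 the scheme reduces to Newton's method.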

2. Semi-Local Analysis

Let L_0, L, γ, δ be given positive parameters and η ≥ 0. Define the scalar sequence {t_n} by

t_0 = 0, t_1 = η,
t_{n+2} = t_{n+1} + [L(t_{n+1} − t_n)² + 2|γ|δ(t_n + η)(t_{n+1} − t_n)] / [2(1 − (L_0 + |γ|δ)t_{n+1})].  (38.4)
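Recursion (38.4) can be evaluated directly; the parameter values in the demo call below are illustrative assumptions only.

```python
def majorizing_38_4(L0, L, gamma, delta, eta, n_terms=10):
    """Tabulate the majorizing sequence {t_n} of (38.4)."""
    ts = [0.0, eta]
    for _ in range(n_terms):
        t_prev, t_cur = ts[-2], ts[-1]
        num = (L * (t_cur - t_prev)**2
               + 2.0 * abs(gamma) * delta * (t_prev + eta) * (t_cur - t_prev))
        ts.append(t_cur + num / (2.0 * (1.0 - (L0 + abs(gamma) * delta) * t_cur)))
    return ts

# Illustrative (assumed) parameter values only.
ts = majorizing_38_4(L0=2.0, L=2.0, gamma=0.1, delta=0.5, eta=0.1, n_terms=4)
```

The sequence is nondecreasing and stays below 1/(L_0 + |γ|δ), as Lemma 32 requires.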

Next, we shall prove that this sequence is majorizing for method (38.2). But first we need to define some more parameters and scalar functions:

α_0 = Lt_1 / (2(1 − (L_0 + |γ|δ)t_1)),
Δ = L² − 8(L_0 + |γ|δ)(2|γ|δ − L),

functions f : [0, 1) → R, g : [0, 1) → R by

f(t) = 2|γ|δη/(1 − t) + 2t(L_0 + |γ|δ)η/(1 − t) + 2|γ|δη − 2t,
g(t) = 2(L_0 + |γ|δ)t² + Lt + 2|γ|δ − L,

and sequences of polynomials f_n : [0, 1) → R by

f_n(t) = Lt^nη + 2|γ|δ(1 + t + ... + t^{n−1})η + 2|γ|δη + 2t(L_0 + |γ|δ)(1 + t + ... + t^n)η − 2t.

Notice that Δ is the discriminant of g. Suppose that any one of the following conditions holds:
(C1) There exists a minimal β ∈ (0, 1) such that f(β) = 0, and Δ ≤ 0. Then, suppose α_0 ≤ β.
(C2) There exist a minimal β ∈ (0, 1) such that f(β) = 0 and a minimal α ∈ (0, 1) such that g(α) = 0, and Δ > 0. Then, suppose α_0 ≤ α ≤ β.
(C3) f(t) ≠ 0 for all t ∈ [0, 1) and Δ ≤ 0.
(C4) f(t) ≠ 0 for all t ∈ [0, 1) and Δ > 0. Notice that then g has two solutions 0 ≤ s_1 < s_2 < 1. Suppose α_0 ≤ s for some s ∈ (s_1, s_2] and f(s) ≤ 0.


Let us denote these conditions by (C). Next, we present convergence results for sequence (38.4).

Lemma 32. Suppose:
(L_0 + |γ|δ)t_{n+1} < 1.  (38.5)
Then, the following assertions hold:
0 ≤ t_n ≤ t_{n+1}  (38.6)
and
t* = lim_{n→∞} t_n ≤ 1/(L_0 + |γ|δ).  (38.7)

Proof. The assertions follow from (38.4) and (38.5), where t* is the unique least upper bound of sequence {t_n}.

The next result is shown under stronger conditions, which are however easier to verify than (38.5).

Lemma 33. Suppose conditions (C) hold. Then, assertions (38.6) and (38.7) hold too.

Proof. Mathematical induction on m is used to show

(I_m): [L(t_{m+1} − t_m) + 2|γ|δ(t_m + η)] / [2(1 − (L_0 + |γ|δ)t_{m+1})] ≤ α.  (38.8)

This estimate holds for m = 0 by the definition of α_0 and conditions (C). Then, we get 0 ≤ t_2 − t_1 ≤ α(t_1 − t_0) = αη and t_2 ≤ t_1 + αη = (1 − α²)η/(1 − α) < η/(1 − α). Suppose that 0 ≤ t_{m+1} − t_m ≤ α^mη and t_m ≤ (1 − α^m)η/(1 − α). Then, (38.8) holds if

Lα^mη + 2|γ|δ((1 + α + ... + α^{m−1})η + η) + 2α(L_0 + |γ|δ)(1 + α + ... + α^m)η − 2α ≤ 0  (38.9)

or

f_m(t) ≤ 0 at t = α.  (38.10)

We need a relationship between two consecutive polynomials f_m:

f_{m+1}(t) = f_{m+1}(t) − f_m(t) + f_m(t)
= Lt^{m+1}η + 2|γ|δ(1 + t + ... + t^m)η + 2|γ|δη + 2t(L_0 + |γ|δ)(1 + t + ... + t^{m+1})η + f_m(t)
− Lt^mη − 2|γ|δ(1 + t + ... + t^{m−1})η − 2|γ|δη − 2t(L_0 + |γ|δ)(1 + t + ... + t^m)η
= f_m(t) + g(t)t^mη,

so

f_{m+1}(t) = f_m(t) + g(t)t^mη.  (38.11)


Define the function f_∞ : [0, 1) → R by

f_∞(t) = lim_{m→∞} f_m(t).  (38.12)

It then follows from the definition of f_m and (38.12) that

f_∞(t) = f(t).  (38.13)

Case (C1): We have by (38.11) that

f_m(t) ≤ f_{m+1}(t).  (38.14)

So, (38.10) holds if

f_∞(t) ≤ 0,  (38.15)

which is true by the choice of β.
Case (C2): Again, (38.14) and (38.15) hold by the choice of α and β.
Case (C4): We have f_{m+1}(t) ≤ f_m(t), so (38.10) holds if f_1(α) ≤ 0, which is true by (C4).
The induction for (38.8), and hence for (38.6), is completed, leading to the verification of the assertions with 1/(L_0 + |γ|δ) in (38.7) replaced by η/(1 − α).

Next, we introduce the conditions (A) to be used in the semi-local convergence of method (38.2). Suppose:
(A1) There exist x_0 ∈ D and η ≥ 0 such that A_0 ≠ 0 and |A_0^{-1}F(x_0)| ≤ η.
(A2) There exists L_0 > 0 such that |A_0^{-1}(F'(v) − F'(x_0))| ≤ L_0|v − x_0| for all v ∈ D. Set D_0 = U(x_0, 1/L_0) ∩ D.
(A3) There exist L > 0, δ > 0 such that |A_0^{-1}(F'(v) − F'(w))| ≤ L|v − w| and |A_0^{-1}(F(v) − F(x_0))| ≤ δ|v − x_0| for all v, w ∈ D_0.
(A4) The conditions of Lemma 32 or Lemma 33 hold, and
(A5) U[x_0, t*] ⊂ D.
Next, we show the semi-local convergence of method (38.2) under the conditions (A).

Theorem 62. Suppose that conditions (A) hold. Then, sequence {x_n} generated by method (38.2) is well defined, remains in U(x_0, t*) and converges to a solution x* ∈ U[x_0, t*] of equation (38.1).


Proof. Mathematical induction is used to show

|x_{m+1} − x_m| ≤ t_{m+1} − t_m.  (38.16)

This estimate holds for m = 0 by (A1) and (38.2). Indeed, we have

|x_1 − x_0| = |A_0^{-1}F(x_0)| ≤ η = t_1 − t_0 < t*,

so x_1 ∈ U(x_0, t*). Suppose (38.16) holds for all integers smaller than or equal to m. Next, we show A_{m+1} ≠ 0. Using the definition of A_{m+1}, (A2) and (A3) we get in turn that

|A_0^{-1}(A_{m+1} − A_0)| = |A_0^{-1}(F'(x_{m+1}) − γF(x_{m+1}) − F'(x_0) + γF(x_0))|
≤ |A_0^{-1}(F'(x_{m+1}) − F'(x_0))| + |γ||A_0^{-1}(F(x_{m+1}) − F(x_0))|
≤ L_0|x_{m+1} − x_0| + |γ|δ|x_{m+1} − x_0|
≤ L_0(t_{m+1} − t_0) + |γ|δ(t_{m+1} − t_0)
= (L_0 + |γ|δ)t_{m+1} < 1,  (38.17)

where we also used, by the induction hypotheses, that

|x_{m+1} − x_0| ≤ |x_{m+1} − x_m| + |x_m − x_{m−1}| + ... + |x_1 − x_0| ≤ t_{m+1} − t_0 = t_{m+1} < t*,

so x_{m+1} ∈ U(x_0, t*). It also follows from (38.17) that A_{m+1} ≠ 0 and

|A_{m+1}^{-1}A_0| ≤ 1/(1 − (L_0 + |γ|δ)t_{m+1}),  (38.18)

by the Banach lemma on invertible functions [8]. Moreover, we can write by method (38.2)

F(x_{m+1}) = F(x_{m+1}) − F(x_m) − F'(x_m)(x_{m+1} − x_m) + γF(x_m)(x_{m+1} − x_m),  (38.19)

since F(x_m) = −(F'(x_m) − γF(x_m))(x_{m+1} − x_m). By (A3) and (38.19), we obtain in turn

|A_0^{-1}F(x_{m+1})| ≤ |∫_0^1 A_0^{-1}(F'(x_m + θ(x_{m+1} − x_m)) − F'(x_m))dθ(x_{m+1} − x_m)| + |γ||A_0^{-1}F(x_m)||x_{m+1} − x_m|
≤ (L/2)|x_{m+1} − x_m|² + |γ|(|A_0^{-1}(F(x_m) − F(x_0))| + |A_0^{-1}F(x_0)|)|x_{m+1} − x_m|
≤ (L/2)(t_{m+1} − t_m)² + |γ|(δ|x_m − x_0| + η)(t_{m+1} − t_m)
≤ (L/2)(t_{m+1} − t_m)² + |γ|(δt_{m+1} + η)(t_{m+1} − t_m).  (38.20)


It then follows from (38.2), (38.18) and (38.20) that

|x_{m+2} − x_{m+1}| ≤ |A_{m+1}^{-1}A_0||A_0^{-1}F(x_{m+1})| ≤ t_{m+2} − t_{m+1},  (38.21)

and

|x_{m+2} − x_0| ≤ |x_{m+2} − x_{m+1}| + |x_{m+1} − x_0| ≤ t_{m+2} − t_{m+1} + t_{m+1} − t_0 = t_{m+2} < t*.  (38.22)

But sequence {t_m} is fundamental. So, sequence {x_m} is fundamental too (by (38.21)), and it converges to some x* ∈ U[x_0, t*]. By letting m → ∞ in (38.20), we deduce that F(x*) = 0.

Next, we present the uniqueness-of-solution result for equation (38.1).

Proposition 19. Suppose:
(1) There exist x_0 ∈ D and K > 0 such that F'(x_0) ≠ 0 and
|F'(x_0)^{-1}(F'(v) − F'(x_0))| ≤ K|v − x_0|  (38.23)
for all v ∈ D.
(2) The point x* ∈ U[x_0, a] ⊆ D is a simple solution of equation F(x) = 0 for some a > 0.
(3) There exists b ≥ a such that K(a + b) < 2.
Set B = U[x_0, b] ∩ D. Then, the only solution of equation F(x) = 0 in B is x*.

Proof. Set Q = ∫_0^1 F'(z* + θ(x* − z*))dθ for some z* ∈ B with F(z*) = 0. Then, in view of (38.23),

|F'(x_0)^{-1}(Q − F'(x_0))| ≤ K ∫_0^1 ((1 − θ)|x_0 − z*| + θ|x_0 − x*|)dθ ≤ (K/2)(a + b) < 1,

so z* = x* follows from Q ≠ 0 and Q(z* − x*) = F(z*) − F(x*) = 0 − 0 = 0.

3. Local Convergence

Let β_0, β and β_1 be positive parameters. Set β_2 = β_0 + |γ|β_1. Define the function h : [0, 1/β_2) → R by

h(t) = βt/(2(1 − β_0t)) + |γ|β_1²t/((1 − β_0t)(1 − β_2t)).


Suppose that the equation h(t) = 1 has a minimal solution ρ ∈ (0, 1/β_2). We shall use the conditions (H). Suppose:
(H1) The point x* ∈ D is a simple solution of equation (38.1).
(H2) There exists β_0 > 0 such that |F'(x*)^{-1}(F'(v) − F'(x*))| ≤ β_0|v − x*| for all v ∈ D. Set D_1 = U(x*, 1/β_0) ∩ D.
(H3) There exist β > 0, β_1 > 0 such that |F'(x*)^{-1}(F'(v) − F'(w))| ≤ β|v − w| and |F'(x*)^{-1}(F(v) − F(x*))| ≤ β_1|v − x*| for all v, w ∈ D_1.
(H4) The equation h(t) = 1 has a minimal solution ρ ∈ (0, 1/β_2), and
(H5) U[x*, ρ] ⊂ D.
Notice that A(x*) = F'(x*). Then, we get the estimates

|F'(x*)^{-1}(F'(x_m) − γF(x_m) − F'(x*) + γF(x*))| ≤ |F'(x*)^{-1}(F'(x_m) − F'(x*))| + |γ||F'(x*)^{-1}(F(x_m) − F(x*))| ≤ β_0|x_m − x*| + |γ|β_1|x_m − x*| = β_2|x_m − x*| < β_2ρ < 1,

|F'(x*)^{-1}(A_m − F'(x_m))| = |F'(x*)^{-1}(F'(x_m) − γF(x_m) − F'(x_m))| = |γ||F'(x*)^{-1}F(x_m)| ≤ |γ|β_1|x_m − x*|,

and

x_{m+1} − x* = x_m − x* − F'(x_m)^{-1}F(x_m) + F'(x_m)^{-1}F(x_m) − A_m^{-1}F(x_m)
= (x_m − x* − F'(x_m)^{-1}F(x_m)) + F'(x_m)^{-1}(A_m − F'(x_m))A_m^{-1}F(x_m),

leading to

|x_{m+1} − x*| ≤ β|x_m − x*|²/(2(1 − β_0|x_m − x*|)) + |γ|β_1²|x_m − x*|²/((1 − β_0|x_m − x*|)(1 − β_2|x_m − x*|)) ≤ h(|x_m − x*|)|x_m − x*| < h(ρ)|x_m − x*| = |x_m − x*| < ρ.

So, we get

|x_{m+1} − x*| ≤ p|x_m − x*| < ρ, p = h(|x_0 − x*|) ∈ [0, 1),  (38.24)

and x_{m+1} ∈ U(x*, ρ). Hence, we conclude by (38.24) that lim_{m→∞} x_m = x*. Therefore, we arrive at the local convergence result for method (38.2).


Theorem 63. Under the conditions (H), further suppose that x_0 ∈ U(x*, ρ). Then, sequence {x_n} generated by method (38.2) is well defined in U(x*, ρ), remains in U(x*, ρ) and converges to x*.

Next, we present the uniqueness-of-solution result for equation (38.1).

Proposition 20. Suppose:
(1) The point x* is a simple solution of equation F(x) = 0 in U(x*, τ) ⊂ D for some τ > 0.
(2) Condition (H2) holds.
(3) There exists τ* ≥ τ such that β_0τ* < 2.
Set B_1 = U[x*, τ*] ∩ D. Then, the only solution of equation (38.1) in B_1 is x*.

Proof. Set Q_1 = ∫_0^1 F'(z* + θ(x* − z*))dθ for some z* ∈ B_1 with F(z*) = 0. Then, using (H2), we get in turn that

|F'(x*)^{-1}(Q_1 − F'(x*))| ≤ β_0 ∫_0^1 (1 − θ)|z* − x*|dθ ≤ (β_0/2)τ* < 1,

so z* = x* follows from Q_1 ≠ 0 and Q_1(z* − x*) = F(z*) − F(x*) = 0 − 0 = 0.

4. Numerical Example

We verify the convergence criteria for method (38.2) with γ = 0, so that δ = 0.

Example 53. (Semi-local case) Consider the scalar function F defined on the set D = U[x_0, 1 − q], q ∈ (0, 1/2), by F(x) = x³ − q. Choose x_0 = 1. Then, we obtain the estimates η = (1 − q)/3,

|F'(x_0)^{-1}(F'(x) − F'(x_0))| = |x² − x_0²| ≤ |x + x_0||x − x_0| ≤ (|x − x_0| + 2|x_0|)|x − x_0| = (1 − q + 2)|x − x_0| = (3 − q)|x − x_0|

for all x ∈ D, so L_0 = 3 − q and D_0 = U(x_0, 1/L_0) ∩ D = U(x_0, 1/L_0),

|F'(x_0)^{-1}(F'(y) − F'(x))| = |y² − x²| ≤ |y + x||y − x| ≤ (|y − x_0| + |x − x_0| + 2|x_0|)|y − x| ≤ (1/L_0 + 1/L_0 + 2)|y − x| = 2(1 + 1/L_0)|y − x|


for all x, y ∈ D_0, and so L = 2(1 + 1/L_0). Next, set y = x − F'(x)^{-1}F(x), x ∈ D. Then, we have

y + x = x − F'(x)^{-1}F(x) + x = (5x³ + q)/(3x²).

Define the function F̄ on the interval D = [q, 2 − q] by

F̄(x) = (5x³ + q)/(3x²).

Then, we get by this definition that

F̄'(x) = (15x⁴ − 6xq)/(9x⁴) = 5(x − p)(x² + xp + p²)/(3x³),

where p = (2q/5)^{1/3} is the critical point of the function F̄. Notice that q < p < 2 − q. It follows that this function is decreasing on the interval (q, p) and increasing on the interval (p, 2 − q), since x² + xp + p² > 0 and x³ > 0. So, we can set

K_2 = (5(2 − q)² + q)/(9(2 − q)²)

and K_2 < L_0. But if x ∈ D_0 = [1 − 1/L_0, 1 + 1/L_0], then

K = (5ρ³ + q)/(9ρ²),

where ρ = (4 − q)/(3 − q), and K < K_1 for all q ∈ (0, 1/2). For q = 0.45, we have:

n                  1        2        3        4        5        ...
t_n                0.1833   0.2712   0.3061   0.3138   0.3142   0.3142
(L_0 + |γ|δ)t_n    0.4675   0.6916   0.7804   0.8001   0.8011   0.8011
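The table can be reproduced with a few lines of code (a sketch; since γ = 0 and δ = 0 the |γ|δ terms in (38.4) drop out, and the last decimals may differ from the table by rounding):

```python
q = 0.45
eta = (1.0 - q) / 3.0
L0 = 3.0 - q
L = 2.0 * (1.0 + 1.0 / L0)

ts = [0.0, eta]                 # recursion (38.4) with gamma = 0, delta = 0
for _ in range(5):
    t_prev, t_cur = ts[-2], ts[-1]
    ts.append(t_cur + L * (t_cur - t_prev)**2 / (2.0 * (1.0 - L0 * t_cur)))

for n in range(1, 6):
    print(n, round(ts[n], 4), round(L0 * ts[n], 4))
```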

Thus condition (38.5) is satisfied.

Example 54. (Local case) Let F : [−1, 1] → R be defined by F(x) = e^x − 1. Then, for x* = 0, we have β_0 = e − 1, β = e^{1/(e−1)} and β_1 = 0. So, we obtain ρ = 0.3827.
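Since β_1 = 0, the second term of h vanishes and h(ρ) = 1 reduces to βρ/(2(1 − β_0ρ)) = 1, that is, ρ = 2/(β + 2β_0). A quick numerical check:

```python
import math

beta0 = math.e - 1.0
beta = math.e ** (1.0 / (math.e - 1.0))

# With beta1 = 0, h(t) = beta*t / (2*(1 - beta0*t)); solving h(rho) = 1 gives:
rho = 2.0 / (beta + 2.0 * beta0)
```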

5. Conclusion

S. Kumar et al. provided the local convergence of a third-order method for solving equations defined on the real line. We study the semi-local convergence of this method on the real line or in the complex plane. The local convergence is also provided, but under weaker conditions than before.

References

[1] Argyros, I. K., Hilout, S., Inexact Newton-type methods, J. Complexity, 26(6):577-590, (2010).
[2] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer, New York, (2008).
[3] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume-IV, Nova Publisher, NY, 2021.
[4] Behl, R., Kanwar, V., New highly efficient families of higher-order methods for simple roots, permitting f'(x_n) = 0, Int. J. Math. Math. Sci., 2014, (2014), 1-12.
[5] Kou, J., Li, Y., Wang, X., A family of fifth-order iterations composed of Newton and third-order methods, Appl. Math. Comput., 186, 2, (2007), 1258-1262.
[6] Kumar, S., Kanwar, V., Tomar, S. K., Singh, S., Geometrically constructed families of Newton's method for unconstrained optimization and nonlinear equations, Int. J. Math. Math. Sci., 2011, (2011), 1-9.
[7] Noor, M. A., Noor, K. I., Fifth order iterative methods for solving nonlinear equations, Appl. Math. Comput., 188, 1, (2007), 406-410.
[8] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (2000).
[9] Traub, J. F., Iterative Methods for the Solution of Equations, Prentice Hall, New Jersey, U.S.A., (1964).
[10] Petković, M. S., Neta, B., Petković, L. D., Džunić, J., Multipoint Methods for Solving Nonlinear Equations, Academic Press, Elsevier, USA, 2013.

Chapter 39

Newton-Type Method for Non-Differentiable Inclusion Problems

1. Introduction

A plethora of problems in diverse disciplines such as applied mathematics, mathematical programming, economics, physics, engineering and transport theory, to mention a few, can be formulated as the inclusion problem

0 ∈ F(x) + G(x),  (39.1)

where B_1, B_2 are Banach spaces, F : B_1 → B_2 is a (smooth) operator and G : B_1 ⇒ B_2 is a set-valued mapping. The following method has been used to generate a sequence {x_m} approximating a solution x* of inclusion problem (39.1).

Newton-type method (F, G, K, x_0, P) (NTM).
Step 1: If T^{-1}(x_0)(−F(x_0) − G(x_0)) = ∅, stop, since there is failure.
Step 2: Do while q > P:
• Pick z solving minimize{‖z − x_0‖ : F(x_0) + G(x_0) + F'(x_0)(z − x_0) ∈ K};
• q = ‖z − x_0‖; x_0 = z.
Step 3: Return z.

Here, T is a convex process [13]. It is convenient to denote G by −K in the rest of this chapter. If K = {0}, this method reduces to the one given in [13]. The method was studied in [4] using majorizing sequences (i.e., {u_n}). We show that {u_n} can be replaced by a tighter sequence {t_n}. These scalar sequences are defined in Section 2. This replacement leads to the advantages already stated in the introduction.

Throughout this chapter U(x, r) = {y ∈ B_1 : ‖x − y‖ < r} and U[x, r] = {y ∈ B_1 : ‖x − y‖ ≤ r} for x ∈ B_1 and r > 0.

2. Majorizing Sequences

Let η ≥ 0 and let a, b, c, d be given nonnegative parameters. Define the sequence {t_n} by

t_0 = 0, t_1 = η,
t_{n+2} = t_{n+1} + (a(t_{n+1} − t_n) + b + ct_n)(t_{n+1} − t_n)/(1 − dt_{n+1}).  (39.2)
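Sequence (39.2) is straightforward to tabulate; the parameter values in the demo call below are illustrative assumptions only.

```python
def majorizing_39_2(a, b, c, d, eta, n_terms=10):
    """Tabulate the majorizing sequence {t_n} of (39.2)."""
    ts = [0.0, eta]
    for _ in range(n_terms):
        t_prev, t_cur = ts[-2], ts[-1]
        num = (a * (t_cur - t_prev) + b + c * t_prev) * (t_cur - t_prev)
        ts.append(t_cur + num / (1.0 - d * t_cur))
    return ts

# Illustrative (assumed) nonnegative parameters only.
ts = majorizing_39_2(a=1.0, b=0.1, c=0.5, d=1.0, eta=0.1, n_terms=4)
```

For nonnegative parameters with dt_n < 1 the sequence is nondecreasing, as Lemma 34 states.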

This sequence is very useful, since it appears in the study of many Newton-type methods as a majorizing sequence. Next, we present convergence results for iteration {t_n}.

Lemma 34. Suppose:
dt_n < 1 for all n.  (39.3)
Then, sequence {t_n} is nondecreasing, bounded from above by 1/d and converges to its unique least upper bound t* ∈ [η, 1/d].

Proof. It follows from (39.2) and (39.3) that 0 ≤ t_m ≤ t_{m+1} < 1/d, so lim_{m→∞} t_m = t* ∈ [η, 1/d].

Next, we provide two stronger results, whose conditions are however easier to verify than (39.3). First we define the sequence {u_n} by

u_0 = 0, u_1 = η,
u_{n+2} = u_{n+1} + (ā(u_{n+1} − u_n) + b̄ + c̄u_n)(u_{n+1} − u_n)/(1 − d̄u_n)  (39.4)

for some nonnegative constants ā, b̄, c̄, d̄.

Lemma 35. Suppose:
0 ≤ b̄ < 1, c̄ + d̄ = 2ā and 0 < η ≤ (1 − b̄)²/(4ā).  (39.5)
Then, the sequence {u_n} is strictly increasing and

lim_{m→∞} u_m = u* = (1/(2ā))((1 − b̄) − √((1 − b̄)² − 4āη)).  (39.6)

Next, we present another result whose conditions are, on the one hand, stronger than those of Lemma 34 but easier to verify and, on the other hand, different from those of Lemma 35, giving better error estimates and weaker sufficient convergence criteria. Define the parameters

Δ = a² − 4d(c − a) and γ_0 = (aη + b)/(1 − dt_1)


and the functions on the interval [0, 1)

h(t) = dt² + at + c − a,
h_∞(t) = (c + dt)η/(1 − t) + b − t

and

h_1(t) = (at + c + td(1 + t))η + b − t.

Notice that Δ is the discriminant of h. Suppose that dt_1 < 1 and that either of the following conditions holds:
(1) Δ ≤ 0, there exists a minimal solution γ_1 ∈ (0, 1) of the equation h_∞(t) = 0, and γ_0 ≤ γ ≤ γ_1; or
(2) Δ > 0, there exist a minimal solution γ_1 ∈ (0, 1) of the equation h_∞(t) = 0 and a minimal solution γ_2 ∈ (0, 1) of the equation h(t) = 0, and γ_0 ≤ γ ≤ γ_3, where γ_3 = min{γ_1, γ_2}; or
(3) Δ > 0, there exist a minimal solution γ_1 ∈ (0, 1) of the equation h_∞(t) = 0 and a minimal solution γ_2 ∈ (0, 1) of the equation h(t) = 0, h_1(γ_2) ≤ 0 (that is, (aγ_2 + c + γ_2d(1 + γ_2))η ≤ γ_2 − b), and γ_0 ≤ γ ≤ γ_2.

Denote any of these conditions by (A). Next, we present the convergence result for sequence {t_n} using conditions (A).

Lemma 36. Suppose that conditions (A) hold. Then, sequence {t_n} generated by (39.2) is well defined, nondecreasing, bounded from above by t** = η/(1 − γ) and converges to its unique least upper bound t* ∈ [η, t**].

Proof. The estimates

(I_m): 0 ≤ [a(t_m − t_{m−1}) + b + ct_{m−1}]/(1 − dt_m) ≤ γ and 0 ≤ t_{m−1} ≤ t_m

shall be shown using mathematical induction. The first estimate in (I_1) follows from the condition γ_0 ≤ γ, whereas the second holds by the definition of the sequence. Then, we also have 0 ≤ t_2 − t_1 ≤ γ(t_1 − t_0) and t_2 ≤ t_1 + γ(t_1 − t_0) = η + γη = (1 − γ²)η/(1 − γ) < t**. Suppose that 0 ≤ t_{m+1} − t_m ≤ γ^mη and t_m ≤ (1 − γ^m)η/(1 − γ) < t**. Evidently the first estimate in (I_m) holds if

aγ^mη + b + c(1 − γ^m)η/(1 − γ) + γd(1 − γ^{m+1})η/(1 − γ) − γ ≤ 0


Christopher I. Argyros, Samundra Regmi, Ioannis K. Argyros et al.

or hm (t) ≤ 0 at t = γ,

(39.7)

where the polynomials hm are recurrent and defined on the interval [0, 1) by

hm(t) = at^m η + c(1 + t + . . . + t^(m−1))η + td(1 + t + . . . + t^m)η + b − t.

(39.8)

A relationship between two consecutive polynomials is needed:

hm+1(t) = hm(t) + (hm+1(t) − hm(t))

= hm(t) + (at^(m+1) − at^m + ct^m + dt^(m+2))η

= hm(t) + h(t)t^m η.

(39.9)

Case (1): We have hm(t) ≤ hm+1(t) for all t ∈ [0, 1), since h(t) ≥ 0 for all t ∈ [0, 1). Define the function h∞ : [0, 1) −→ R by

h∞(t) = limm→∞ hm(t).

(39.10)

It follows from (39.8) and (39.10) that

h∞(t) = cη/(1 − t) + tdη/(1 − t) + b − t,

so (39.7) holds if h∞(t) ≤ 0 at t = γ,

(39.11)

which is true by the choice γ ≤ γ1. Case (2): The polynomial h has a least positive solution γ2 ∈ (0, 1). Then, we have hm+1(γ2) = hm(γ2), so (39.7) is true, since (39.11) holds by the condition γ0 ≤ γ ≤ γ3. Case (3): This time it suffices to show h1(t) ≤ 0 at t = γ, which is true by h1(γ2) ≤ 0. The induction for (Im) is completed. Then, the sequence {tm} is nondecreasing and bounded from above by t∗∗, so it converges to t∗ ∈ [η, t∗∗]. Next, we compare the sequences {tn} and {un}.

Proposition 21. Suppose:

(i) The sequences {tn}, {un} are nondecreasing and bounded from above.

(ii) The following inequalities hold:

a ≤ ā, b ≤ b̄, c ≤ c̄ and d ≤ d̄.


Then, the following assertions hold:

tn ≤ un,

(39.12)

0 ≤ tn+1 − tn ≤ un+1 − un

(39.13)

and

t∗ ≤ u∗.

(39.14)

Proof. Using (i) we establish the existence of t∗ and u∗. Moreover, assertions (39.12)-(39.14) follow from (ii), the definition of these sequences and induction on n. It follows from Proposition 21 that any result in the literature using {un} can be restated with the majorizing sequence {tn} instead, leading to the advantages stated in the introduction. We demonstrate this for method (NTM) in the next section.
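Proposition 21 can be illustrated numerically; in this sketch (ours) both parameter sets are hypothetical and satisfy (ii) term by term.

```python
def majorize(a, b, c, d, eta, n):
    # the common recurrence of (39.2)/(39.4) for {tn} and {un}
    t = [0.0, eta]
    for _ in range(n - 2):
        t.append(t[-1] + (a*(t[-1] - t[-2]) + b + c*t[-2]) * (t[-1] - t[-2]) / (1 - d*t[-1]))
    return t

eta = 0.15
t = majorize(0.8, 0.05, 1.0, 0.6, eta, 50)   # smaller parameters: {tn}
u = majorize(1.0, 0.10, 1.2, 0.8, eta, 50)   # larger parameters:  {un}
assert all(tn <= un for tn, un in zip(t, u))                           # (39.12)
assert all(t[i+1] - t[i] <= u[i+1] - u[i] for i in range(len(t) - 1))  # (39.13)
assert t[-1] <= u[-1]                                                  # (39.14)
```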

3. Analysis

Let x0 ∈ B1 be such that

kT −1 (x0 )k ≤ β.

(39.15)

We introduce certain Lipschitz continuity conditions in order to compare them on the one hand and to use them in what follows on the other.

Definition 25. Operator F′ is center-Lipschitz continuous on D ⊂ B1 if for some `0 > 0

‖F′(w) − F′(x0)‖ ≤ `0‖w − x0‖ for all w ∈ D.

Set D0 = U(x0, 1/(β`0)) ∩ D.

Definition 26. Operator F′ is restricted Lipschitz continuous on D0 if for some ` > 0

‖F′(w) − F′(v)‖ ≤ `‖w − v‖ for all w, v ∈ D0.

Definition 27. Operator F′ is Lipschitz continuous on D if for some `1 > 0

‖F′(w) − F′(v)‖ ≤ `1‖w − v‖ for all w, v ∈ D.

Remark 44. It follows by these definitions that

`0 ≤ `1

(39.16)

and

` ≤ `1,

(39.17)


since D0 ⊆ D.

(39.18)

Notice that `0 = `0(D, x0) and `1 = `1(D, x0), but ` = `(D0, x0). Examples where (39.16)-(39.18) are strict can be found in [1,2,3]. We suppose

`0 ≤ `

(39.19)

otherwise the results that follow hold with `0 replacing `. Definition 27 was used in [13] (see also [4]-[13]) to obtain the estimate

‖T−1(v)‖ ≤ β/(1 − β`1‖v − x0‖).

(39.20)

But if the weaker Definition 25 is used instead, the following estimate, tighter than (39.20), is obtained:

‖T−1(v)‖ ≤ β/(1 − β`0‖v − x0‖).

(39.21)

Moreover, the iterates {xn} lie in D0, which is a tighter set than D. Furthermore, D0 helps us define `. Hence, ` can replace `1 in the proof of [13, Theorem 4.1]. Our semi-local convergence analysis of (NTM) is based on (39.16) and the conditions (A): Suppose:

(A1) Operator F′ is center- and restricted-Lipschitz continuous.

(A2) ‖G(v) − G(w)‖ ≤ `2‖v − w‖ for all v, w ∈ D0.

(A3) ‖x1 − x0‖ ≤ η ≤ min{1/(β`0), (1 − β`2)²/(2β`)}.

(A4) U(x0, u∗) ⊂ D, where

u∗ = (1/(β`))((1 − β`2) − √((1 − β`2)² − 2β`η)).

Next, we present the semi-local convergence of (NTM) under conditions (A).

Theorem 64. Suppose that conditions (A) hold. Then, the sequence {xn} generated by (NTM) is well defined in U(x0, u∗), remains in U(x0, u∗) for all n = 0, 1, 2, . . . and converges to x∗ ∈ U[x0, u∗], which solves F(x∗) + G(x∗) ∈ K. Moreover, the following estimates hold:

‖xn+1 − xn‖ ≤ un+1 − un

(39.22)

and

‖x∗ − xn‖ ≤ u∗ − un.

(39.23)

Proof. Simply replace `1 by ` in the proof of Theorem 4.1 in [13], and use the sequence {un} with ā = β`/2, b̄ = β`2, c̄ = 0 and d̄ = β`.


Notice that in [13] the condition

‖G(v) − G(w)‖ ≤ `3‖v − w‖

(39.24)

for all v, w ∈ D is used. But we have again `2 ≤ `3 .

(39.25)

Hence, our approach is tighter than the one in [13] under the conditions of Lemma 35. But the sequence {un} can also be replaced by {tn}. Indeed, consider conditions (A)′, where (A3) is replaced by the conditions of Lemma 36 (or Lemma 34). Then, we arrive at a further improved semi-local convergence result for (NTM).

Theorem 65. Suppose that conditions (A)′ hold. Then, the sequence {xn} generated by (NTM) is well defined in U(x0, t∗) (or U(x0, 1/(β`0))), remains in U(x0, t∗) (or U(x0, 1/(β`0))) for all n = 0, 1, 2, . . . and converges to x∗ ∈ U[x0, t∗] (or U[x0, 1/(β`0)]), which solves F(x∗) + G(x∗) ∈ K. Moreover, the following estimates hold:

‖xn+1 − xn‖ ≤ tn+1 − tn

(39.26)

and

‖x∗ − xn‖ ≤ t∗ − tn.

(39.27)

Proof. The sequence {tn} can replace {un} as a majorizing sequence for (NTM), in view of (39.21) replacing (39.20) and setting a = β`/2, b = β`2, c = 0 and d = β`0. The advantages derived from our technique have been justified (see also Proposition 21).

4. Conclusion

We provide a semi-local convergence analysis of a Newton-type method for solving inclusion problems containing a non-differentiable term. Using tighter majorizing sequences than before and under the same conditions, we obtain weaker sufficient convergence criteria and tighter error bounds on the distances involved.

References

[1] Argyros, I. K., Hilout, S., Inexact Newton-type methods, J. Complexity, 26, 6, (2010), 577-590.

[2] Argyros, I. K., Convergence and applications of Newton-type iterations, Springer, New York, 2008.

[3] Argyros, I. K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publisher, NY, 2021.

[4] Aubin, J. P., Frankowska, H., Set-valued analysis, Birkhäuser, Boston, 1990.


[5] Dontchev, A., Local convergence of the Newton method for generalized equations, C. R. Acad. Sci. Paris, 1996.

[6] Behl, R., Kanwar, V., New highly efficient families of higher-order methods for simple roots, permitting f′(xn) = 0, Int. J. Math. Math. Sci., 2014, (2014), 1-12.

[7] Geoffroy, M. H., Piétrus, A., Local convergence of some iterative methods for generalized equations, J. Math. Anal. Appl., 290 (2004), 497-505.

[8] Rheinboldt, W., A unified convergence theory for a class of iterative processes, SIAM J. Numer. Anal., 5, 1, (1968), https://doi.org/10.1137/0705003.

[9] Robinson, S., Extension of Newton's method to nonlinear functions with values in a cone, Numer. Math., 19, 4 (1972), 341-347.

[10] Robinson, S., Generalized equations and their solutions, Part I: basic theory, Mathematical Programming Study, 10, (1979), 128-141.

[11] Rockafellar, R., Convex analysis, Princeton University Press, 1970.

[12] Zincenko, A., Some approximate method of solving equations with non-differentiable operators, USSR Computational Mathematics and Mathematical Physics, 7, 2, (1967), 242-249.

[13] Piétrus, A., Non differentiable perturbed Newton's method for functions with values in a cone, Revista Investigación Operacional, 35, 1, (2014), 58-67.

Chapter 40

Extended Kantorovich-Type Theory for Solving Nonlinear Equations Iteratively: Part I Newton's Method

1. Introduction

This chapter traces the progress of the Kantorovich theory for solving nonlinear equations iteratively. Newton's Method (NM) is one of the well-established iterative processes for solving equations in Euclidean or Banach spaces. Fundamental results about the convergence of NM, error estimates, and uniqueness domains of solutions are presented by the Kantorovich theorem. Later, numerous authors contributed by providing more upper and lower error estimates and uniqueness domains. These results are based on the Kantorovich convergence criterion, celebrated for its simplicity and clarity, given in affine or non-affine invariant form. This criterion is sufficient but not necessary for convergence. That is why there exist even scalar equations for which this criterion is not satisfied; the Kantorovich theorem then cannot guarantee convergence, although NM may converge. Motivated by this problem and by optimization considerations, a methodology is developed to replace this criterion with a weaker one without additional conditions. The replacement is possible since the original Lipschitz constant is replaced by at least as small ones. This modification creates at least as tight error estimates and more precise uniqueness domains. The methodology is shown to increase the applicability of NM infinitely many times over the Kantorovich approach. Some examples further validate the new methodology.

One of the most challenging problems in mathematics is the determination of a solution x∗ of the nonlinear equation

F(x) = 0, (40.1)

where B0 and B stand for Banach spaces, Ω ⊆ B0 is a nonempty, open and convex set and the operator F : Ω ⊂ B0 −→ B is Fréchet differentiable. A plethora of applications from diverse areas are formulated like equation (40.1) using mathematical modelling [1]-[13]. The solution is mostly found iteratively, since its analytic form is available only in special cases. NM, defined ∀n = 0, 1, 2, . . . by

x0 ∈ Ω, xn+1 = xn − F′(xn)−1F(xn)

(40.2)


is often selected among numerous other methods to generate a sequence approximating the solution x∗ [1]-[4]. The Kantorovich convergence hypotheses (HM) in affine invariant form are: ∃ element x0 ∈ Ω and parameter η > 0 such that F′(x0)−1 ∈ L(B, B0) and

‖F′(x0)−1F(x0)‖ ≤ η; (40.3)

∃ parameter M > 0 such that ∀u1, u2 ∈ Ω

‖F′(x0)−1(F′(u1) − F′(u2))‖ ≤ M‖u1 − u2‖; (40.4)

hM = Mη ≤ 1/2 (40.5)

and

U[x0, ρM] ⊂ Ω, (40.6)

where ρM > 0 is given later. Hypotheses HM have been used in affine or non-affine invariant form by a plethora of authors [1]-[13]. The simple example that follows reveals some of the issues with hypotheses HM.

Example 55. Consider solving the cubic scalar polynomial equation x³ − a = 0 and let the polynomial p : Ω −→ R be p(x) = x³ − a, where Ω = U(x0, 1 − a), B = R and a ∈ (0, 1/2). Then, by the definition of the polynomial p, the parameters appearing in hypotheses (HM) for x0 = 1 are η = (1 − a)/3 and M = 2(2 − a). Criterion (40.5) is not satisfied, since

hM = 2(2 − a)(1 − a)/3 > 1/2 ∀a ∈ (0, 1/2).

Hence, no assurance can be given by hypotheses HM that NM initiated at x0 converges to the solution x∗ = a^(1/3). This example indicates how limited the applicability of NM is, although it may converge. There is a need to address this problem. As can easily be seen from hypotheses HM, this can be done if, e.g., the parameter M is replaced by smaller ones, assuming the parameter η remains fixed. Next, a methodology is developed making this idea possible. It is also worth noticing that the parameters introduced are specializations of M. Consequently, when computing M its specializations are also computed. Therefore, no additional computational effort is needed to find these parameters. These parameters are related to the operator F′ as follows: ∃ center-Lipschitz parameter K0 > 0 such that ∀w ∈ Ω

‖F′(x0)−1(F′(w) − F′(x0))‖ ≤ K0‖w − x0‖.

(40.7)
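Example 55 can be checked numerically; the sketch below is ours, with a = 0.25 one illustrative choice in (0, 1/2): the Kantorovich criterion fails, yet NM still converges.

```python
a, x0 = 0.25, 1.0
eta = (1 - a) / 3            # |p'(x0)^{-1} p(x0)| for p(x) = x^3 - a
M = 2 * (2 - a)              # affine-invariant Lipschitz constant of p' on U(1, 1 - a)
assert M * eta > 0.5         # criterion (40.5) is violated

x = x0
for _ in range(50):          # Newton's method (40.2) for p
    x = x - (x**3 - a) / (3 * x**2)
assert abs(x**3 - a) < 1e-12     # yet NM converges to the cube root of a
```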

Consider the region

Ω0 = U(x0, 1/K0) ∩ Ω.

(40.8)

∃ restricted Lipschitz parameter K > 0 such that ∀w1, w2 ∈ Ω0

‖F′(x0)−1(F′(w1) − F′(w2))‖ ≤ K‖w1 − w2‖.

(40.9)

∃ parameter K1 > 0 such that ∀w1 ∈ Ω0

‖F′(x0)−1(F′(w1) − F′(w2))‖ ≤ K1‖w1 − w2‖

(40.10)

for w2 = w1 − F′(w1)−1F(w1) and w2 ∈ Ω0. Consider the region

Ω1 = U(x0, 1/K0 − η) ∩ Ω

(40.11)

for K0η < 1. ∃ parameter K2 > 0 such that ∀w1 ∈ Ω1

‖F′(x0)−1(F′(w1) − F′(w2))‖ ≤ K2‖w1 − w2‖

(40.12)

for w2 = w1 − F′(w1)−1F(w1) and w2 ∈ Ω1. The relationship between these parameters follows.

Remark 45. The definitions (40.8) and (40.11) imply

Ω1 ⊂ Ω0 ⊂ Ω.

(40.13)

It follows by these definitions and the inclusions (40.13) that

K0 ≤ M

and

(40.14)

K2 ≤ K1 ≤ K ≤ M.

(40.15)

Estimates (40.13)-(40.15) can be strict (see the numerical section). In particular, the ratio, e.g., M/K0 can be arbitrarily large.

Example 56. Consider the scalar function f(t) = a0t + a1 + a2 sin(a3t), where ai ∈ R, i = 0, 1, 2, 3, are parameters, and let t0 = 1. Then, by the definition of the function f, if the parameter a3 is large enough and a2 is sufficiently small, the ratio M/K0 is arbitrarily large. Similar comparisons can be made between the other parameters. Hence, replacing the parameter M by one of these smaller ones expands the applicability of NM. Notice that under hypothesis (40.4) the following estimate is derived to show the invertibility of F′(w):

‖F′(w)−1F′(x0)‖ ≤ 1/(1 − M‖w − x0‖)

(40.16)

provided that w ∈ U(x0, 1/M) ⊂ Ω. But invertibility can be shown if the actually needed hypothesis (40.7) is used instead, yielding the more precise estimate

‖F′(w)−1F′(x0)‖ ≤ 1/(1 − K0‖w − x0‖)

(40.17)


∀w ∈ Ω0. Notice also that M = M(Ω, F′) and K0 = K0(Ω, F′), but K = K(Ω0, F′); that is, the region Ω0 is used to determine the parameter K. Moreover, the NM iterates lie in Ω0, a subset of Ω, which can therefore replace Ω in the convergence results. Similar observations can be made for the other parameters. Error estimates and uniqueness regions are also improved, since parameters K smaller than M can replace it. The assumption

K0 ≤ K (40.18)

is made from now on. Otherwise, i.e., if K < K0, then K0 can replace K in all the results presented in this chapter.
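The comparison (40.14)-(40.15) can be estimated numerically for Example 55 by sampling the Lipschitz quotients (our sketch; the grid and a are illustrative). Interestingly, on Ω0 the restricted constant here even falls below K0, which is exactly the situation addressed after (40.18).

```python
a, x0 = 0.25, 1.0
R = 1 - a                                    # Omega = U(x0, 1 - a)
grid = [x0 - R + 2 * R * k / 400 for k in range(401)]

def q(u, v):
    # |p'(x0)^{-1}(p'(u) - p'(v))| / |u - v| = |u + v| for p(x) = x^3 - a, x0 = 1
    return abs(u + v)

M_est = max(q(u, v) for u in grid for v in grid if u != v)
K0_est = max(q(u, x0) for u in grid if u != x0)
grid0 = [x for x in grid if abs(x - x0) < 1 / K0_est]   # Omega_0 = U(x0, 1/K0)
K_est = max(q(u, v) for u in grid0 for v in grid0 if u != v)

assert K0_est < M_est      # (40.14): the center constant is strictly smaller than M
assert K_est < M_est       # (40.15)
assert K_est < K0_est      # here even K < K0: the case covered after (40.18)
```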

2. Convergence of NM

Let L stand for M, K, K1 or K2, and let ρ > 0 be a radius to be given later. Notice that

L ≤ M.

(40.19)

The following extension of the Kantorovich theorem is presented [1,12,13].

Theorem 66. Suppose that hypotheses HL hold, with

ρL− = (1 − √(1 − 2Lη))/L.

Then, the following assertions hold:

(1) The sequence {xn} generated by NM is contained in the ball U(x0, ρL−) and converges to a solution x∗ ∈ U[x0, ρL−].

(2) The estimates

‖xn+1 − xn‖ ≤ sn+1 − sn,

‖xn − x∗‖ ≤ ρL− − sn,

‖xn − x∗‖ ≤ (LρL−)^(2^n)/(2^n L), if Lη < 1/2,

and

‖xn − x∗‖ ≤ 1/(2^n L), if Lη = 1/2

hold, where the scalar sequence {sn} is defined by

s0 = 0, sn+1 = sn − p(sn)/p′(sn) = sn + L(sn − sn−1)²/(2(1 − Lsn)),

p(t) = (L/2)t² − t + η, and ρL− = limn→∞ sn is the smallest root of the polynomial p, the largest one being

ρL+ = (1 + √(1 − 2Lη))/L.

(3) Concerning the uniqueness ball: if Lη < 1/2 and U[x0, 1/L] ⊂ Ω, then the point x∗ ∈ U[x0, 1/L] is the only solution of the equation F(x) = 0 in the ball U[x0, 1/L].

Proof. Simply follow the proof given in the Kantorovich theorem when L = M.

Remark 46. (i) If L = K2, we obtain the weakest convergence criterion, the tightest error estimates, the most precise information about the ball containing the Newton iterates, and the largest uniqueness ball.

(ii) The proof shows (see also estimates (40.16) and (40.17)) that the sequence {rn} defined by

r0 = 0, r1 = η, r2 − r1 = L0(r1 − r0)²/(2(1 − L0r1)), rn+2 − rn+1 = L(rn+1 − rn)²/(2(1 − L0rn+1))

is a tighter majorizing sequence for {xn} than {sn}. In particular, rn ≤ sn, rn+1 − rn ≤ sn+1 − sn and r = limn→∞ rn ≤ ρL−. The sequence {rn} converges under an even weaker convergence criterion, but its limit point r is not given in closed form. However, an upper bound on r is given in closed form [1]-[5]. Clearly, if conditions HL hold, then r and {rn} can replace ρL− and {sn}, respectively, in the first two error estimates in (2). The uniqueness ball can be extended under the center-Lipschitz condition, whereas the rest of the conditions (HL) are not needed.

Proposition 22. Suppose:

(1) The point z ∈ U(x0, ρ0) ⊂ Ω is a solution of equation F(x) = 0 for some ρ0 > 0.

(2) The center-Lipschitz hypothesis (40.7) holds.

(3) ∃ρ1 ≥ ρ0 such that

ρ0 + ρ1 < 2/K0.

Define Ω2 = U[x0, ρ1] ∩ Ω. Then, the only solution of the equation F(x) = 0 in the region Ω2 is z.

Proof. Let z1 ∈ Ω2 be such that F(z1) = 0. Define the linear operator T by

The sequence converges under an even weaker convergence criterion. But limit point r is not given in the closed form. However, an upper bound on r is given in closed form [1][5]. Clearly, if condition HL hold then r, {rn } can replace ρ− L , {sn }, respectively in the first two error estimates in (2). The uniqueness ball can be extended under the center Lipschitz condition, whereas the rest of the conditions (HL ) are not needed. Proposition 22. Suppose (1) Point z ∈ U(x0 , ρ0 ) ⊂ Ω is a solution of equation F(x) = 0 for some ρ0 > 0. (2) Center-Lipschitz hypothesis (40.7) holds and (3) ∃ ρ1 ≥ ρ0 such that 2 ρ0 + ρ1 < . K0 Define Ω2 = U[x0 , K10 ] ∩ Ω. Then, the only solution of equation F(x) = 0 in the region Ω2 is z. Proof. Let z1 ∈ Ω2 be such that F(z1 ) = 0. Define linear operator T by T=

Z 1 0

F 0 (z + τ(z1 − z))dτ.

It follows from the last two hypotheses that

‖F′(x0)−1(T − F′(x0))‖ ≤ K0 ∫01 [(1 − τ)‖z − x0‖ + τ‖z1 − x0‖]dτ ≤ (K0/2)(ρ0 + ρ1) < 1,

which, according to the Banach lemma on linear operators, implies the invertibility of the linear operator T. Moreover, z1 = z follows from the identity T(z1 − z) = F(z1) − F(z) = 0.
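A small numerical sketch (ours; L, L0 and η are illustrative) of Theorem 66 and Remark 46: the sequence {sn} increases to the closed-form limit ρL−, while {rn}, built with the smaller center constant L0 ≤ L, is a term-by-term tighter majorant.

```python
import math

L, L0, eta = 1.5, 0.9, 0.25        # illustrative constants with L0 <= L, L*eta < 1/2
s = [0.0, eta]                     # s_{n+1} = s_n + L(s_n-s_{n-1})^2/(2(1 - L s_n))
r = [0.0, eta]                     # {rn} uses L0 in the denominator and first step
for n in range(40):
    s.append(s[-1] + L * (s[-1] - s[-2])**2 / (2 * (1 - L * s[-1])))
    num = L0 if n == 0 else L
    r.append(r[-1] + num * (r[-1] - r[-2])**2 / (2 * (1 - L0 * r[-1])))

rho_minus = (1 - math.sqrt(1 - 2 * L * eta)) / L   # smallest root of (L/2)t^2 - t + eta
assert abs(s[-1] - rho_minus) < 1e-12              # s_n -> rho_L^-
assert all(rn <= sn for rn, sn in zip(r, s))       # r_n <= s_n (tighter majorant)
assert all(r[i+1] - r[i] <= s[i+1] - s[i] for i in range(len(r) - 1))
```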


Remark 47. Based on the extended Newton-Kantorovich Theorem 66, sharp lower and upper error estimates become available. Next, a list of such bounds is presented. The proofs are omitted, as they can be obtained by replacing the parameter M by L. The authors whose error estimates are improved are listed next to the original results. But first it is convenient to define some parameters and functions: h = hL, en = ‖xn+1 − xn‖, Dn = ‖xn − x0‖, D = ρL+ − ρL−, Θ = ρL−/ρL+, α = √(1 − 2h)/L, and ψ ≥ 0 with (1 + cosh ψ)h = 1. Moreover, define the sequences

ℓ1n = 1 + √(1 + 2L(1 − LDn)−1en),

ℓ2n = 1 + √(1 + 2L(1 − Lsn)−1en),

ℓ3n = 1 + √(1 + (4(1 − Θ^(2^n))/(1 + Θ^(2^n)))en), if 2h < 1,

ℓ4n = 1 + √(1 + (2^n/η)en), if 2h = 1,

ℓ5n = 1 + √(1 + 2en/(en + √(α² + en²)))

and

ℓ6n = √(en + √(α² + en²) + √((en + √(α² + en²))(3en + √(α² + en²)))).

Then, the lower bounds are

‖x∗ − xn‖ ≥ 2en/ℓ1n (Yamamoto [12,13])

≥ 2en/ℓ2n (Schmidt [11])

≥ 2en/ℓ3n, if 2h < 1, and 2en/ℓ4n, if 2h = 1 (Miel [7])

≥ 2en/ℓ5n = 2en(en + √(α² + en²))/ℓ6n (Potra-Pták [10]).

Furthermore, define the sequences

v1n = 1 + √(1 − 2L(1 − LDn)−1en),

v2n = 1 + √(1 − (4(1 − Θ^(2^n))/(1 + Θ^(2^n)))en), if 2h < 1,

v3n = 1 + √(1 − (2^n/η)en), if 2h = 1,

v4n = (ρL− − sn)/(sn+1 − sn),

v5n = √(1 − 2h) + √(1 − 2h + (Len+1)²),

v6n = √(α² + en−1²) − α,

v7n = e^(−2^(n−1)ψ) sinh ψ/sinh(2^(n−1)ψ), if 2h < 1,

v8n = 2^(1−n)η, if 2h = 1,

v9n = DΘ^(2^n)/(1 − Θ^(2^n))

and

v10n = 2^(1−n)η.

Then, the upper bounds are

‖x∗ − xn‖ ≤ 2en/v1n (Moret [8])

≤ 2en/v2n, if 2h < 1, and 2en/v3n, if 2h = 1 (Yamamoto [12])

≤ v4n en (Yamamoto [13])

≤ Len−1²/v5n (Potra-Pták [10])

≤ v6n (Potra-Pták [10])

≤ ρL− − sn (Kantorovich [1]-[5])

= v7n, if 2h < 1, and v8n, if 2h = 1 (Ostrowski [9])

= v9n, if 2h < 1, and v10n, if 2h = 1 (Gragg [6]).

Finally, one more convergence result, for the sequence {rn}, is presented.

Proposition 23. Suppose that L0r̃n+1 < 1 ∀n = 0, 1, 2, . . .. Then, the following assertions hold:

rn ≤ rn+1
0 such that F′(x0)−1 ∈ L(B2, B1) and ‖F′(x0)−1F(x0)‖ ≤ d;

(ii) ∃a > 0 such that ∀x ∈ D

‖F′(x0)−1(F′(x) − F′(x0))‖ ≤ a‖x − x0‖.

Define the set D1 = U(x0, 1/a) ∩ D.

(iii) ∃b > 0 such that ∀x, y ∈ D1

‖F′(x0)−1(F″(x) − F″(y))‖ ≤ b‖x − y‖.

(iv) The polynomial equation g(s) = (1/6)bs³ + (1/2)as² − s + d = 0 has one negative and two positive roots s∗ and s∗∗ satisfying s∗ ≤ s∗∗. This is equivalent to

d ≤ d1 = (a² + 4b − a√(a² + 2b))/(3b(a + √(a² + 2b))). (43.3)

Equality holds if and only if s∗ = s∗∗. Define the majorizing sequence

s0 = 0, sn+1 = sn − α(sn)−1g(sn) ∀n = 0, 1, 2, . . .,

where α(s) = g′(s) − (1/2)g″(s)g′(s)−1g(s).

Remark 51. Condition (iii) replaces Safiev's condition

(iii)′ ∃b1 > 0 such that ∀x, y ∈ D

‖F′(x0)−1(F″(x) − F″(y))‖ ≤ b1‖x − y‖,

whereas condition (43.3) holds with b1 replacing b. That is,

Updated Halley's and Chebyshev's Iterations

d ≤ d2 = (a² + 4b1 − a√(a² + 2b1))/(3b1(a + √(a² + 2b1))).

(43.4)

It follows from the definitions of D1, d1 and d2 that D1 ⊂ D, so

b ≤ b1 and d2 ≤ d1.

Hence, the range of initial approximations is extended. Denote the corresponding items by

g1(s) = (1/6)b1s³ + (1/2)as² − s + d,

α1(s) = g1′(s) − (1/2)g1″(s)g1′(s)−1g1(s)

and s10 = 0, s1n+1 = s1n − α1(s1n)−1g1(s1n) ∀n = 0, 1, 2, . . ., with positive roots s1∗ ≤ s1∗∗ of g1. It follows that g(s1∗) ≤ g1(s1∗) = 0 and g(0) = d > 0, so s∗ ≤ s1∗ and, similarly, s1∗∗ ≤ s∗∗. Consequently, more precise information about the location of the iterates and the solution γ becomes available. Furthermore, the sequence {sn} is more precise than {s1n}. The advantages are obtained with no additional computational effort, since the parameter b is a specialization of b1. The main semi-local convergence analysis for HI is based on Safiev's conditions.

Theorem 70. Under conditions (i)-(iv) further suppose:

(v) U1 = U[x1, s∗ − s1] ⊂ D.

Then, the sequence {xn} generated by HI is well defined in U1, remains in U1 ∀n = 0, 1, 2, . . . and converges to a solution γ ∈ U1 of the equation F(x) = 0. The solution is unique in

U2 = U(x0, s∗∗) ∩ D, if s∗ < s∗∗; U2 = U[x0, s∗∗] ∩ D, if s∗ = s∗∗.

Moreover, the following error estimates hold:

en∗ ≤ ‖γ − xn‖ ≤ θn∗ ≤ cn + (s∗ − sn+1)(cn/(sn+1 − sn))³ ≤ s∗ − sn

and

‖γ − xn‖ ≤ (s∗ − sn)(cn−1/(sn − sn−1))³,

where cn = ‖xn+1 − xn‖ and θn∗ and en∗ are the smaller positive roots of the equations

((s∗ − sn+1)/(s∗ − sn)³)s³ − s + cn = 0


and

((s∗ − sn+1)/(s∗ − sn)³)s³ + s − cn = 0,

respectively.

Proof. Simply replace b1 by b in [30].

Remark 52. If b = b1, Theorem 70 reduces to the corresponding one in [30]. Otherwise, i.e., if b < b1, it constitutes an improvement. Notice that the theorem in [30] extended the results by Döring. Kanno [16] showed that Yamamoto's theorem extends Safiev's result. Similar comments can be made for CI. In fact, CI is obtained from HI if we replace the term

(I − (1/2)F′(x)−1F″(x)F′(x)−1F(x))−1

in HI by

I + (1/2)F′(x)−1F″(x)F′(x)−1F(x).

Semi-local convergence results for CI have been provided by Necepurenko [19], Safiev [23], Alefeld [1], Werner [26,27], and Argyros and George [4]. Numerical tests conducted by Alefeld show that although both iterations are of convergence order three, HI gives a slightly tighter result than CI. Numerical examples where b < b1 can be found in [4,5,6].
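As a scalar illustration (ours; the function f and the starting point are hypothetical) of the two iterations just compared, with L(x) = f″(x)f(x)/f′(x)²:

```python
# Halley (HI):    x+ = x - (1 - L(x)/2)^{-1} f(x)/f'(x)
# Chebyshev (CI): x+ = x - (1 + L(x)/2) f(x)/f'(x)
def halley_step(f, df, d2f, x):
    Lx = d2f(x) * f(x) / df(x)**2
    return x - (f(x) / df(x)) / (1 - Lx / 2)

def chebyshev_step(f, df, d2f, x):
    Lx = d2f(x) * f(x) / df(x)**2
    return x - (1 + Lx / 2) * (f(x) / df(x))

f = lambda x: x**3 - 8
df = lambda x: 3 * x**2
d2f = lambda x: 6 * x

xh = xc = 3.0
for _ in range(8):                 # both iterations are of convergence order three
    xh = halley_step(f, df, d2f, xh)
    xc = chebyshev_step(f, df, d2f, xc)
assert abs(xh - 2.0) < 1e-12 and abs(xc - 2.0) < 1e-12
```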

3. Conclusion

The development of convergence for Halley’s, Chebyshev’s, and other third-order iterations is discussed from a scalar to a Banach space setting. The new convergence analysis extends previous ones with no additional conditions.

References

[1] Alefeld, G., Zur Konvergenz des Verfahrens der tangierenden Hyperbeln und des Tschebyscheff-Verfahrens bei konvexen Abbildungen, Computing, 11 (1973), 379-390.

[2] Alefeld, G., On the convergence of Halley's method, Amer. Math. Monthly, 88 (1981), 530-536.

[3] Altman, M., Concerning the method of tangent hyperbolas for operator equations, Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys., 9 (1961), 633-637.

[4] Argyros, I. K., Unified convergence criteria for iterative Banach space valued methods with applications, Mathematics, 9, 16 (2021), 1942, https://doi.org/10.3390/math9161942.

[5] Argyros, I. K., The theory and applications of iteration methods, 2nd Edition, Engineering Series, CRC Press, Taylor and Francis Group, 2022.


[6] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub's method, J. Complexity, 56, (2020), 101423.

[7] Brown, G. H., On Halley's variation of Newton's method, Amer. Math. Monthly, 84 (1977), 726-728.

[8] Cauchy, A. L., Sur la détermination approximative des racines d'une équation algébrique ou transcendante, in: Leçons sur le Calcul Différentiel, Bure frères, Paris, 1829, pp. 573-609.

[9] Chen, D., Argyros, I. K., Qian, Q. S., A note on the Halley method in Banach spaces, Appl. Math. Comput., 58 (1993), 215-224.

[10] Döring, B., Einige Sätze über das Verfahren der tangierenden Hyperbeln in Banach-Räumen, Aplikace Mat., 15 (1970), 418-464.

[11] Ezquerro, J. A., Hernandez, M. A., Avoiding the computation of the second Fréchet derivative in the convex acceleration of Newton's method, J. Comput. Appl. Math., 96 (1998), 1-12.

[12] Gander, W., On Halley's iteration method, Amer. Math. Monthly, 92 (1985), 131-134.

[13] Gragg, W. B., Tapia, R. A., Optimal error bounds for the Newton-Kantorovich theorem, SIAM J. Numer. Anal., 11 (1974), 10-13.

[14] Hernandez, M. A., A note on Halley's method, Numer. Math., 59 (1991), 273-276.

[15] Hitotumatu, S., A method of successive approximation based on the expansion of second order, Math. Japon., 7 (1962), 31-50.

[16] Kanno, S., Convergence theorems for the method of tangent hyperbolas, Math. Japon., 37 (1992), 711-722.

[17] Miel, G. J., Majorizing sequences and error bounds for iterative methods, Math. Comp., 34 (1980), 185-202.

[18] Moret, I., A note on Newton type iterative methods, Computing, 33 (1984), 65-73.

[19] Necepurenko, M. I., On Chebyshev's method for functional equations, Uspehi Matem. Nauk, 9 (1954), 163-170; English translation: L. B. Rall, MRC Technical Summary Report #648, 1966.

[20] Ostrowski, A. M., La méthode de Newton dans les espaces de Banach, C. R. Acad. Sci. Paris Ser. A, 272 (1971), 1251-1253.

[21] Ostrowski, A. M., Solution of Equations in Euclidean and Banach Space, Academic Press, New York, 1973.

[22] Potra, F. A., Pták, V., Sharp error bounds for Newton's process, Numer. Math., 34 (1980), 63-72.


[23] Safiev, R. A., The method of tangent hyperbolas, Sov. Math. Dokl., 4 (1963), 482-485.

[24] Salehov, G. S., On the convergence of the process of tangent hyperbolas, Dokl. A. N. SSSR, 82 (1952), 525-528 (in Russian).

[25] Schmidt, J. W., Untere Fehlerschranken für Regula-falsi-Verfahren, Period. Math. Hungarica, 9 (1978), 241-247.

[26] Werner, W., Some improvements of classical iterative methods for the solution of nonlinear equations, in: E. L. Allgower, K. Glashoff, H.-O. Peitgen (Eds.), Numerical Solution of Nonlinear Equations, Lecture Notes in Math., Vol. 878, Springer, Heidelberg, 1981.

[27] Werner, W., Iterative solution of systems of nonlinear equations based upon quadratic approximations, Comput. Math. Appl., 12 (1986), 331-343.

[28] Yamamoto, T., A unified derivation of several error bounds for Newton's process, J. Comput. Appl. Math., 12/13 (1985), 179-191.

[29] Yamamoto, T., A method for finding sharp error bounds for Newton's method under the Kantorovich assumptions, Numer. Math., 49 (1986), 203-220.

[30] Yamamoto, T., On the method of tangent hyperbolas in Banach spaces, J. Comput. Appl. Math., 21 (1988), 75-86.

Chapter 44

Updated Iteration Theory for Non-Differentiable Equations

1. Introduction

Let B1 and B2 be Banach spaces and Ω ⊂ B1 be an open set. Zincenko in [44] presented semi-local convergence results for iterations suggested by Krasnoselskii (KZI) defined by xn+1 = xn − F 0 (xn )−1 (F(xn ) + G(xn )), x0 ∈ Ω ∀n ≥ 0.

(44.1)

The operator G : Ω −→ B2 is continuous, but its differentiability is not assumed. Later, Rheinboldt [35] used majorization to prove his results for KZI. In the next section, a finer semi-local convergence analysis is presented with no additional conditions. The benefits include weaker sufficient convergence criteria, tighter error estimates, and more precise uniqueness-of-solution results.
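A minimal scalar sketch (ours; F, G and x0 are hypothetical) of KZI (44.1): the nondifferentiable part G enters only through the residual, never through the derivative.

```python
F = lambda x: x**2 - 2            # differentiable part
dF = lambda x: 2 * x
G = lambda x: 0.1 * abs(x - 1)    # Lipschitz, not differentiable at 1

x = 2.0
for _ in range(60):               # KZI: x+ = x - F'(x)^{-1}(F(x) + G(x))
    x = x - (F(x) + G(x)) / dF(x)

# the exact solution of x^2 - 2 + 0.1|x - 1| = 0 near sqrt(2) is x = 1.4
assert abs(x - 1.4) < 1e-10
assert abs(F(x) + G(x)) < 1e-10
```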

2. Convergence

Semi-local convergence results are listed in affine invariant form using the preceding notation.

Theorem 71. Suppose

‖F′(x0)−1F(x0)‖ ≤ α,

‖F′(x0)−1(F′(x) − F′(x0))‖ ≤ γ0‖x − x0‖, γ0 > 0, ∀x ∈ Ω.

(44.2)

Set Ω1 = Ω ∩ U(x0, 1/γ0). Suppose further

‖F′(x0)−1(F′(y) − F′(x))‖ ≤ γ‖y − x‖, γ > 0, ∀x, y ∈ Ω1,

(44.3)

‖F′(x0)−1(G(y) − G(x))‖ ≤ δ‖y − x‖, δ > 0, ∀x, y ∈ Ω1,

(44.4)

δ < 1,

(44.5)

h = γα/(1 − δ)² ≤ 1/2

(44.6)


and

U[x0, s∗] ⊆ Ω,

(44.7)

where s∗ = 2α/((1 + √(1 − 2h))(1 − δ)) and s∗∗ = α(1 + √(1 − 2h))/(h(1 − δ)). Then, KZI is well defined, remains in U(x0, s∗) ∀n = 0, 1, 2, . . . and converges to the only solution x∗ of the equation F(x) + G(x) = 0 in the set Ω2 = Ω ∩ U(x0, s∗∗).
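The quantities of Theorem 71 can be checked numerically (our sketch; α, γ, δ are illustrative): s∗ and s∗∗ are exactly the roots of the majorizing quadratic (γ/2)t² − (1 − δ)t + α.

```python
import math

alpha, gamma, delta = 0.2, 1.0, 0.3
h = gamma * alpha / (1 - delta)**2
assert delta < 1 and h <= 0.5                       # (44.5), (44.6)

s_star = 2 * alpha / ((1 + math.sqrt(1 - 2 * h)) * (1 - delta))
s_2star = alpha * (1 + math.sqrt(1 - 2 * h)) / (h * (1 - delta))

q = lambda t: (gamma / 2) * t**2 - (1 - delta) * t + alpha
assert abs(q(s_star)) < 1e-12 and abs(q(s_2star)) < 1e-12   # both are roots of q
assert s_star <= s_2star
```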

Remark 53. Consider the conditions

‖F′(x0)−1(F′(y) − F′(x))‖ ≤ γ1‖y − x‖, γ1 > 0, ∀x, y ∈ Ω

(44.8)

and

‖F′(x0)−1(G(y) − G(x))‖ ≤ δ1‖y − x‖, δ1 > 0, ∀x, y ∈ Ω.

(44.9)

Rheinboldt [35] presented his result in Theorem 2.5 using γ1, δ1 instead of γ, δ. His results were also given in non-affine invariant form. The advantages of the affine versus the non-affine invariant form are well explained in [22]. Notice, however, that Ω1 ⊂ Ω, so γ0 ≤ γ1, γ ≤ γ1 and δ ≤ δ1, where γ1 = γ1(Ω), γ0 = γ0(Ω) and δ1 = δ1(Ω), but γ = γ(Ω1) and δ = δ(Ω1). Numerical examples where these inequalities are strict can be found in [1]-[11]. The parameters γ0 and γ are specializations of γ1, so no additional computational effort is required to obtain the aforementioned benefits. Finally, notice that if G = 0, Theorem 71 specializes to and extends the Kantorovich theorem [22].

Later, Zabrejko and Nguen in [42] showed a result which is a reformulation of the corresponding one by Zabrejko and Zlepko [43]. It is convenient to introduce, on the ball U[x0, ρ], the conditions

‖F′(x0)−1(F′(x) − F′(x0))‖ ≤ p0(r)‖x − x0‖ ∀x ∈ U[x0, r],

‖F′(x0)−1(F′(y) − F′(x))‖ ≤ p(r)‖y − x‖ ∀x, y ∈ Ω3

and

‖F′(x0)−1(G(y) − G(x))‖ ≤ q(r)‖y − x‖ ∀x, y ∈ Ω3,

where p0, p, q are nondecreasing functions on the intervals [0, ρ), [0, ρ0), [0, ρ0), respectively, ρ0 is the smallest solution of the equation p0(r) − 1 = 0 in the interval (0, ρ), and Ω3 = U(x0, ρ0). Moreover, define the functions

w(r) = ∫0r p(t)dt,

ϕ(r) = a + ∫0r w(t)dt − r = a + ∫0r (r − s)p(s)ds − r

and

ψ(r) = ∫0r q(t)dt.


Theorem 72. Suppose that the function h(r) = ϕ(r) + ψ(r) has a unique zero s∗ in the interval [0, ρ0] and h(ρ0) ≤ 0. Then, KZI is well defined, remains in U[x0, s∗] ∀n = 0, 1, 2, . . . and converges to a solution x∗ of the equation F(x) + G(x) = 0, with error estimates, ∀n = 0, 1, 2, . . .,

‖xn+1 − xn‖ ≤ sn+1 − sn

and

‖x∗ − xn‖ ≤ s∗ − sn,

where the sequence {sn} is defined by

sn+1 = sn − h(sn)/ϕ′(sn),

which is strictly increasing and converges to s∗.

Remark 54. As in Remark 53, consider the conditions given by Zabrejko-Nguen:

‖F′(x0)−1(F′(y) − F′(x))‖ ≤ p1(r)‖y − x‖ ∀x, y ∈ U(x0, ρ)

and

‖F′(x0)−1(G(y) − G(x))‖ ≤ q1(r)‖y − x‖ ∀x, y ∈ U(x0, ρ).

Notice that these conditions were given in non-affine invariant form. Then, again p0(r) ≤ p1(r), p(r) ≤ p1(r) and q(r) ≤ q1(r), so similar comments can be made as in Remark 53. Additional error estimates were given in [42], but without proofs; Yamamoto later presented these proofs in [40]. In particular, set tn = ‖xn − x0‖, pn(r) = p(tn + r) and qn(r) = q(tn + r) ∀r ∈ [0, ρ − tn]. Moreover, set an = ‖xn+1 − xn‖ and bn = (1 − w(tn))−1. Suppose an > 0 and that the equation

r = an + bn ∫0r ((r − t)pn(t) + qn(t))dt

has a unique solution tn∗ ∈ (0, ρ − tn). Then, the additional error estimates

‖x∗ − xn‖ ≤ tn∗ ≤ (an/(sn+1 − sn))(s∗ − sn) ≤ (an−1/(sn − sn−1))(s∗ − sn) ≤ s∗ − sn

hold ∀n = 0, 1, 2, . . ..
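A sketch (ours; all constants are illustrative) of Theorem 72 in the simplest case of constant moduli p0(r) = p(r) = g and q(r) = q0: then h becomes a quadratic and the iteration increases to its smallest zero s∗.

```python
import math

g, q0, a = 1.0, 0.18, 0.3        # constant moduli; here a plays the role of the
a, q0 = 0.18, 0.3                # initial-residual bound (names are ours)
phi = lambda r: a + g * r * r / 2 - r     # phi(r) = a + int_0^r w, w(r) = g*r
dphi = lambda r: g * r - 1                # phi'(r) = w(r) - 1
h = lambda r: phi(r) + q0 * r             # h = phi + psi, psi(r) = q0*r

s = 0.0
for _ in range(60):                       # s_{n+1} = s_n - h(s_n)/phi'(s_n)
    s = s - h(s) / dphi(s)

s_star = ((1 - q0) - math.sqrt((1 - q0)**2 - 2 * g * a)) / g   # smallest zero of h
assert abs(s - s_star) < 1e-12
```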




Clearly, these estimates are sharper than the ones given in [40] when p < p1 or q < q1. It is also worth noticing that Theorem 1.3 generalizes Pták's error estimates for Newton's method obtained by the technique of non-discrete induction; moreover, Pták's estimates are shown to be a simple consequence of the standard majorization method due to Kantorovich [22]. Later, Chen and Yamamoto [15] presented the semi-local convergence for the iteration

xn+1 = xn − L(xn)−1(F(xn) + G(xn)) ∀n = 0, 1, 2, . . .,

(44.10)

where L : Ω × Ω −→ L(B1 , B2 ) along the same lines. Furthermore, a study of the iteration (44.10) is found in Yamamoto-Chen [41] and Chen-Yamamoto [15], where ball convergence theorems, as well as error estimates, are given. The results generalize and deepen those of Kantorovich [22], Mysovskii [27], Rall [33], Rheinboldt [35], Yamamoto [41], Zabrejko-Nguen [42], and others. See also the works of Argyros [1]-[11]. For the case X = Y = Rn , Clark [16] proposed a generalized Newton method which uses an element in the generalized Jacobian ∂F(xk ) in place of the Jacobian if F is locally Lipschitzian but not differentiable. Since then, there has been a growing interest in the study of non-smooth equations, which is closely related to the study of Newton-like methods. Such equations arise, for example, from (i) nonlinear complementarity problems, (ii) nonlinear constrained optimization problems, (iii) non-smooth convex optimization problems, (iv) compact fixed point problems, (v) non-smooth eigenvalue problem related to ideal MHD (magnetohydrodynamics), etc. Reformulating problems (i) - (iii) to non-smooth equations and global and super linear convergence of algorithms for solving such non-smooth equations can be found in the works of Chen [13], Qi [30,31], Qi-Sun [32], Pang [29], Qi-Chen [13], Robinson [36], Fukushima-Qi [19] and others. Heinkenschloss et al. [21] discuss (iv) and Rappaz [34] and Kikuchi [24] discuss (v). Finally, we remark that the general Gauss-Newton method for solving singular or ill-posed equations is defined by xn+1 = xn − Q(xn )F(xn ); n = 0, 1, 2, . . ., (44.11) where F : Ω ⊂ B1 −→ B2 is diferentiable and Q(xn ) is a linear operator which generalizes the Moore-Penrose pseudo-inverse. If B1 = Rn , B2 = Rm and Q(xn ) = F 0 (xn )† , then (44.11) reduces to the ordinary Gauss-Newton method for solving the least square problem: Find x ∈ Ω which minimizes F(x)t F(x). 
Convergence analysis for (44.11) is given in the works of Ben-Israel [12], Meyn [26], Walker [37], Walker and Watson [38], Deuflhard and Potra [18], Chen and Yamamoto [15], Nashed and Chen [28] and others.
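For B1 = Rn and B2 = Rm, the reduction of (44.11) to the ordinary Gauss-Newton method is easy to exercise numerically. The following Python sketch (the function names and the least-squares test problem are illustrative assumptions, not taken from the chapter) uses NumPy's Moore-Penrose pseudo-inverse for Q(xn) = F′(xn)†:

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton iteration x_{n+1} = x_n - J(x_n)^+ F(x_n),
    i.e. method (44.11) with Q(x_n) the Moore-Penrose pseudo-inverse
    of the Jacobian.  F maps R^n -> R^m; J returns the m x n Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(J(x)) @ F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative zero-residual least-squares problem with solution (1, 1):
F = lambda x: np.array([x[0] + x[1] - 2.0, x[0] - x[1], x[0] ** 2 - 1.0])
J = lambda x: np.array([[1.0, 1.0], [1.0, -1.0], [2.0 * x[0], 0.0]])
x_star = gauss_newton(F, J, [0.5, 0.5])
```

Since the pseudo-inverse step solves the linearized least-squares problem at each iterate, the iteration minimizes F(x)^T F(x); for a zero-residual problem it converges rapidly from a nearby starting point.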

3.

Conclusion

A finer semi-local convergence analysis is presented for iterations applied to solve not necessarily differentiable equations in Banach spaces.


References

[1] Argyros, I. K., The theory and applications of iteration methods, 2nd Edition, Engineering Series, CRC Press, Taylor and Francis Group, 2022.

[2] Argyros, I. K., Improved a posteriori error bounds for Zincenko's iteration, Inter. J. Comp. Math., 51, (1994), 51-54.

[3] Argyros, I. K., Computational theory of iterative schemes, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.

[4] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer Verlag, Berlin, Germany, 2008.

[5] Argyros, I. K., Improving the order and rates of convergence for the super-Halley method in Banach spaces, The Korean Journal of Computational & Applied Mathematics, 5, 2, (1998), 465-474.

[6] Argyros, I. K., The convergence of a Halley-Chebysheff-type method under Newton-Kantorovich hypotheses, Appl. Math. Lett., 6, 5, (1993), 71-74.

[7] Argyros, I. K., Weaker convergence criteria for Traub's method, J. Complexity, 69, (2022), 101615.

[8] Argyros, I. K., Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications, Mathematics, 9, 16, (2021), 1942. https://doi.org/10.3390/math9161942.

[9] Argyros, I. K., The theory and applications of iteration methods, 2nd Edition, Engineering Series, CRC Press, Taylor and Francis Group, 2022.

[10] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub's method, J. Complexity, 56, (2020), 101423.

[11] Argyros, I. K., Magreñán, A. A., A contemporary study of iterative schemes, Elsevier (Academic Press), New York, 2018.

[12] Ben-Israel, A., A Newton-Raphson method for the solution of systems of equations, J. Math. Anal. Applic., 15, (1966), 243-252.

[13] Chen, X., Smoothing methods for complementarity problems and their applications: a survey, J. Oper. Res. Soc. Japan, 43, (2000), 12-47.

[14] Chen, X., Nashed, M. Z., Qi, L., Convergence of Newton's method for singular smooth and nonsmooth equations using adaptive outer inverses, SIAM J. Optim., 7, (1997), 245-262.

[15] Chen, X., Yamamoto, T., Newton-like methods for solving underdetermined nonlinear equations with nondifferentiable terms, J. Comput. Appl. Math., 55, (1994), 311-324.


[16] Clark, F. H., Optimization and Nonsmooth Analysis, Wiley, New York, 1983.

[17] Deuflhard, P., Heindl, G., Affine invariant convergence theorems for Newton's method and extensions to related methods, SIAM J. Numer. Anal., 16, (1979), 1-10.

[18] Deuflhard, P., Potra, F. A., A refined Gauss-Newton-Mysovskii theorem, ZIB SC 91-4, (1991), 1-9.

[19] Fukushima, M., Qi, L., Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Kluwer Academic Publishers, Dordrecht, 1999.

[20] Häussler, W. M., A Kantorovich-type convergence analysis for the Gauss-Newton method, Numer. Math., 48, (1986), 119-125.

[21] Heinkenschloss, M., Kelley, C. T., Tran, H. T., Fast algorithms for nonsmooth compact fixed-point problems, SIAM J. Numer. Anal., 29, (1992), 1769-1792.

[22] Kantorovich, L. V., Akilov, G. P., Functional Analysis, 2nd ed., Pergamon Press, Oxford, 1982.

[23] Kikuchi, F., An iteration scheme for a nonlinear eigenvalue problem, Theoret. Appl. Mech., 29, (1981), 319-333.

[24] Kikuchi, F., Finite element analysis of a non differentiable nonlinear problem related to MHD equilibria, J. Fac. Sci. Univ. Tokyo Sect. IA Math., 35, (1988), 77-101.

[25] Martínez, J. M., Quasi-Newton methods for solving underdetermined nonlinear simultaneous equations, J. Comput. Appl. Math., 34, (1991), 171-190.

[26] Meyn, K. H., Solution of underdetermined nonlinear equations by stationary iteration methods, Numer. Math., 42, (1983), 161-172.

[27] Mysovskii, I., On the convergence of L. V. Kantorovich's method of solution of functional equations and its applications, Dokl. Akad. Nauk SSSR, 70, (1950), 565-585 (in Russian).

[28] Nashed, M. Z., Chen, X., Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math., 66, (1993), 235-257.

[29] Pang, J. S., Qi, L., Nonsmooth equations: motivation and algorithms, SIAM J. Optim., 3, (1993), 443-465.

[30] Qi, L., Convergence analysis of some algorithms for solving nonsmooth equations, Math. Oper. Res., 18, (1993), 227-244.

[31] Qi, L., Chen, X., A globally convergent successive approximation method for severely nonsmooth equations, SIAM J. Control Optim., 33, (1995), 402-418.

[32] Qi, L., Sun, J., A nonsmooth version of Newton's method, Math. Programming, 58, (1993), 353-367.


[33] Rall, L. B., A note on the convergence of Newton's method, SIAM J. Numer. Anal., 11, (1974), 34-36.

[34] Rappaz, J., Approximation of a non differentiable nonlinear problem related to MHD equilibria, Numer. Math., 45, (1984), 117-133.

[35] Rheinboldt, W. C., An adaptive continuation process for solving systems of nonlinear equations, in: Mathematical Models and Numerical Methods, Banach Center Publications, Vol. 3, PWN-Polish Scientific Publishers, Warsaw, 1978, pp. 129-142.

[36] Robinson, S. M., Newton's method for a class of nonsmooth functions, Set-Valued Anal., 2, (1994), 291-305.

[37] Walker, H. J., Newton-like methods for underdetermined systems, in: E. L. Allgower, K. Georg (Eds.), Computational Solution of Nonlinear Systems of Equations, Lectures in Appl. Math., Vol. 26, AMS, Providence, 1990.

[38] Walker, H. J., Watson, L. T., Least-change secant update methods for underdetermined systems, SIAM J. Numer. Anal., 27, (1990), 1227-1262.

[39] Yamamoto, T., A convergence theorem for Newton-like methods in Banach spaces, Numer. Math., 51, (1987), 545-557.

[40] Yamamoto, T., A note on a posteriori error bound of Zabrejko and Nguen for Zincenko's iteration, Numer. Funct. Anal. Optim., 9 & 10, (1987), 987-994.

[41] Yamamoto, T., Chen, X., Ball convergence theorems and error estimates for certain iterative methods for nonlinear equations, Japan J. Appl. Math., 7, (1990), 131-143.

[42] Zabrejko, P. P., Nguen, D. F., The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates, Numer. Funct. Anal. Optim., 9, (1987), 671-684.

[43] Zabrejko, P. P., Zlepko, P. P., On a generalization of the Newton-Kantorovich method for an equation with non differentiable operator, Ukr. Mat. Zhurn., 34, (1982), 365-369 (in Russian).

[44] Zincenko, A. I., Some approximate methods of solving equations with non differentiable operators, Dopovidi Akad. Nauk. Ukrain RSR, (1963), 156-161 (in Ukrainian).

Chapter 45

On Generalized Halley-Like Methods for Solving Nonlinear Equations

1.

Introduction

A certain class of Halley-like methods is introduced for solving Banach space-valued nonlinear equations. The semi-local convergence unifies and also extends the applicability of these methods. Numerous applications reduce to finding a solution of the nonlinear equation

G (x) = 0,

(45.1)

where U, V are Banach spaces, Ω ⊂ U is a nonempty open set and G : Ω −→ V is a C²(Ω) operator. Most solution methods for equation (45.1) are iterative in nature, since closed form solutions can be found only in some special cases. We study the semi-local convergence analysis of the generalized Halley-like method defined for each n = 0, 1, 2, . . . by

yn = xn − G′(xn)−1 G(xn),

xn+1 = xn − (I + An Ln )(yn − xn ),

(45.2)

where An = A(xn), A : Ω −→ L(U, V) and Ln = L(xn), with

L(x) = G′(x)−1 G″(x)G′(x)−1 G(x).

Specializations of the linear operator An lead to the following popular third or fourth order methods for solving nonlinear equations [1]-[7,11,12,13].

Halley Method:

yn = xn − G′(xn)−1 G(xn),
xn+1 = xn + (I + (1/2)Ln)−1(yn − xn).

Super-Halley Method:

yn = xn − G′(xn)−1 G(xn),
xn+1 = xn + (I + (1/2)(I − Ln)−1 Ln)(yn − xn).


Chebyshev Method:

yn = xn − G′(xn)−1 G(xn),
xn+1 = xn − (I + (1/2)Ln)(yn − xn).

Chebyshev-like Methods:

yn = xn − G′(xn)−1 G(xn),
xn+1 = xn − (I + (1/2)Ln + λL²n)(yn − xn), λ ∈ [0, 2].

Notice, for example, that the Super-Halley method is obtained from (45.2) if An = (1/2)(I − Ln)−1. Numerous other methods can be found in [8,9,10,14]. These special methods have been studied under conditions of the form

‖G′(x0)−1(G″(x) − G″(y))‖ ≤ T‖x − y‖ (45.3)

for all x, y ∈ Ω and some T > 0, or involving higher than two derivatives. Such conditions restrict the applicability of these methods. As a motivational example, consider the function f : [−1/2, 3/2] −→ R defined by

f(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, f(0) = 0.

Then, we obtain

f‴(t) = 6 log t² + 60t² − 24t + 22.

So, the function f‴ is not bounded on [−1/2, 3/2]. We are motivated by these concerns, and by unification and optimization considerations. In particular, our conditions are weaker than (45.3) (see conditions (B)). Hence, we extend the applicability of these methods. Throughout the chapter, B(x0, ρ) = {x ∈ U : ‖x − x0‖ < ρ} and B[x0, ρ] = {x ∈ U : ‖x − x0‖ ≤ ρ} for some ρ > 0. Majorizing sequences for method (45.2) are given in Section 2. The semi-local convergence analysis is presented in Section 3, followed by special cases in Section 4 and conclusions in Section 5.
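The failure of condition (45.3) for this example is easy to verify numerically: as t → 0 the term 6 log t² dominates and f‴ decreases without bound. A small Python check (the helper name fppp is ours) follows:

```python
import math

def fppp(t):
    # Third derivative of f(t) = t^3 log t^2 + t^5 - t^4 for t != 0:
    # f'''(t) = 6 log t^2 + 60 t^2 - 24 t + 22.
    return 6.0 * math.log(t * t) + 60.0 * t * t - 24.0 * t + 22.0

# Sample f''' at t = 10^-1, ..., 10^-7: the values decrease without bound,
# so no Lipschitz-type bound of the form (45.3) can hold on [-1/2, 3/2].
samples = [fppp(10.0 ** (-k)) for k in range(1, 8)]
```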

2.

Majorizing Convergence Analysis

Let α, L0, L, L1, L2 > 0 and η ≥ 0 be parameters. Define the scalar sequence {tn} by

t0 = 0, s0 = η,
tn+1 = sn + αL2(sn − tn)/(1 − L0 tn), (45.4)
sn+1 = tn+1 + [L(tn+1 − tn)² + 2L1(tn+1 − sn)] / [2(1 − L0 tn+1)].

This sequence shall be shown to be majorizing for method (45.2) in the next section. But first we prove convergence results for the sequence {tn}.
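The recursion (45.4) is straightforward to tabulate. The Python sketch below does so; the parameter values are illustrative assumptions, chosen only so that L0 tn < 1 holds along the sequence:

```python
def majorizing(alpha, L0, L, L1, L2, eta, n_steps=25):
    """Tabulate the majorizing sequence (45.4):
    t0 = 0, s0 = eta,
    t_{n+1} = s_n + alpha*L2*(s_n - t_n) / (1 - L0*t_n),
    s_{n+1} = t_{n+1} + (L*(t_{n+1} - t_n)**2 + 2*L1*(t_{n+1} - s_n))
                        / (2*(1 - L0*t_{n+1}))."""
    t, s = [0.0], [eta]
    for _ in range(n_steps):
        tn, sn = t[-1], s[-1]
        t_next = sn + alpha * L2 * (sn - tn) / (1.0 - L0 * tn)
        s_next = t_next + (L * (t_next - tn) ** 2 + 2.0 * L1 * (t_next - sn)) \
                 / (2.0 * (1.0 - L0 * t_next))
        t.append(t_next)
        s.append(s_next)
    return t, s

# Illustrative (hypothetical) parameters with a small eta:
t, s = majorizing(alpha=0.5, L0=0.8, L=1.0, L1=0.9, L2=0.7, eta=0.05)
```

For such values the sequence settles quickly, exhibiting the monotonicity tn ≤ sn ≤ tn+1 and a limit well below 1/L0.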


Lemma 37. Suppose:

L0 tn < 1 for all n = 0, 1, 2, . . .. (45.5)

Then, the sequence {tn} is such that tn ≤ sn ≤ tn+1 and t∗ = lim_{n→∞} tn ≤ 1/L0.

Proof. It follows by (45.4) and (45.5) that the sequence {tn} is nondecreasing and bounded from above by 1/L0, and as such it converges to its least upper bound t∗.

Stronger conditions than (45.5), but easier to verify, can be used to develop a second convergence result for method (45.2). To do this, define recurrent polynomials on the interval [0, 1) by

f_n^(1)(t) = αL2 + tL0(1 + t + . . . + t^n)η − t,

f_n^(2)(t) = L(1 + t)²t^(n−1)η + 2L1 + 2L0(1 + t + . . . + t^(n+1))η − 2,

f_n^(3)(t) = 2L2 t^n η + L0γ(1 + t + . . . + t^n)η − γ, γ = 2α − 1,

p(t) = L(1 + t)²t − L(1 + t)² + 2L0 t³,

p1(t) = (2L2 + L0γ)t − 2L2

and

p2(t) = t² + (L0η − 1 − αL2)t + αL2.

By these definitions it follows that p(0) = −L, p(1) = 2L0, p1(0) = −2L2 and p1(1) = L0γ. Suppose α > 1/2. Then, the polynomials p and p1 have zeros in the interval (0, 1); denote the minimal such zeros by δ1 and δ3, respectively. Suppose

Δ = (L0η − 1 − αL2)² − 4αL2 ≥ 0 (45.6)

and

L0η < 1 + αL2. (45.7)

Then, the polynomial p2 has two positive solutions s1 and s2 with s1 ≤ s2. Set

δ2 = (1 − L1 − L0η)/(1 − L1), a = αL2, b = [L t1² + 2L1(t1 − s0)] / [2(1 − L0 t1)]

and c = max{a, b}. Suppose:

L0η < 1 − L1, L1 < 1, α > 1/2 (45.8)

and

c ≤ δ1 ≤ min{δ2, s2} (45.9)

or

max{a, s1} ≤ δ1 ≤ δ2. (45.10)

We can show the second convergence result for the sequence {tn}.

Lemma 38. Suppose that conditions (45.8) and (45.9), or (45.8) and (45.10), hold, where δ = δ1. Then, the following assertions hold:

tn ≤ sn ≤ tn+1 ≤ t∗∗ = η/(1 − δ), lim_{n→∞} tn = t∗ ∈ [0, η/(1 − δ)],

0 ≤ sn − tn ≤ δ(s_{n−1} − t_{n−1}) ≤ δ^n η (45.11)

and

0 ≤ tn+1 − sn ≤ δ(sn − tn) ≤ δ^(n+1) η. (45.12)


Proof. These assertions hold if we show

0 ≤ αL2/(1 − L0 tk) ≤ δ, (45.13)

0 ≤ [L(tk+1 − tk)² + 2L1(tk+1 − sk)] / [2(1 − L0 tk+1)] ≤ δ(sk − tk) (45.14)

and

tk ≤ sk ≤ tk+1. (45.15)

These estimates hold for k = 0 by the initial conditions, the choice of a, b, c and (45.8), (45.9) or (45.8) and (45.10). It follows that 0 ≤ t1 − s0 ≤ δ(s0 − t0) and 0 ≤ s1 − t1 ≤ δ(s0 − t0), so

t1 ≤ s0 + δ(s0 − t0) = η + δη = [(1 − δ²)/(1 − δ)]η < η/(1 − δ).

Suppose

0 ≤ tk+1 − sk ≤ δ^(k+1) η, 0 ≤ sk − tk ≤ δ(s_{k−1} − t_{k−1}) ≤ δ^k η and tk+1 ≤ [(1 − δ^(k+2))/(1 − δ)]η.

Then, evidently (45.13) holds if

αL2 + δL0[(1 − δ^(k+1))/(1 − δ)]η − δ ≤ 0. (45.16)

This estimate motivates the introduction of the recurrent polynomial f_k^(1); we show instead

f_k^(1)(t) ≤ 0 at t = δ. (45.17)

A relationship between two consecutive polynomials f_k^(1) is needed. By the definition of f_k^(1) we can write in turn that

f_{k+1}^(1)(t) = f_{k+1}^(1)(t) − f_k^(1)(t) + f_k^(1)(t)
= αL2 + tL0(1 + t + . . . + t^(k+1))η − t − αL2 − tL0(1 + t + . . . + t^k)η + t + f_k^(1)(t)
= f_k^(1)(t) + L0 t^(k+2)η,

so

f_{k+1}^(1)(t) ≥ f_k^(1)(t). (45.18)

Define the function f_∞^(1) by

f_∞^(1)(t) = lim_{k→∞} f_k^(1)(t). (45.19)

Then, it follows by the definition of f_k^(1)(t) and (45.19) that

f_∞^(1)(t) = tL0η/(1 − t) + αL2 − t. (45.20)

Hence, (45.17) holds if f_∞^(1)(t) ≤ 0 at t = δ, which is true by the choice of s1 and s2. We can show (45.14) (see also (45.13)) if

[L(1 + δ)²(sn − tn) + 2L1δ] / [2(1 − L0 tn+1)] ≤ δ (45.21)


or

L(1 + δ)²δ^n η + 2L1δ + 2L0δ(1 + δ + . . . + δ^(n+1))η − 2δ ≤ 0, (45.22)

or

f_n^(2)(t) ≤ 0 at t = δ. (45.23)

This time we have

f_{k+1}^(2)(t) = f_{k+1}^(2)(t) − f_k^(2)(t) + f_k^(2)(t)
= L(1 + t)²t^k η + 2L1 + 2L0(1 + t + . . . + t^(k+2))η − 2 − L(1 + t)²t^(k−1)η − 2L1 − 2L0(1 + t + . . . + t^(k+1))η + 2 + f_k^(2)(t)
= f_k^(2)(t) + p(t)t^(k−1)η,

so

f_{k+1}^(2)(t) ≤ f_k^(2)(t) + p(t)t^(k−1)η. (45.24)

Notice that

f_{k+1}^(2)(t) = f_k^(2)(t) at t = δ1, (45.25)

by the definition of δ1. Define the function f_∞^(2) on the interval [0, 1) by

f_∞^(2)(t) = lim_{k→∞} f_k^(2)(t). (45.26)

Then, we get

f_∞^(2)(t) = 2L1 − 2 + 2L0η/(1 − t). (45.27)

Hence, (45.23) holds if f_∞^(2)(t) ≤ 0 at t = δ, which is true by the choice of δ2. Then, estimate (45.15) also holds by (45.4), (45.13) and (45.14). The induction for estimates (45.13)-(45.15) is completed. It follows that lim_{k→∞} tk = t∗ ≤ t∗∗.

3.

Semi-Local Analysis

The conditions (B) are used in the convergence analysis. Suppose:

(B1) There exist x0 ∈ Ω, η ≥ 0 and α ≥ 0 such that G′(x0)−1 ∈ L(V, U), ‖G′(x0)−1 G(x0)‖ ≤ η and ‖An‖ ≤ α.

(B2) ‖G′(x0)−1(G′(v) − G′(x0))‖ ≤ L0‖v − x0‖ for all v ∈ Ω. Set Ω0 = B(x0, 1/L0) ∩ Ω.

(B3) ‖G′(x0)−1(G′(v) − G′(w))‖ ≤ L‖v − w‖, ‖G′(x0)−1 G′(v)‖ ≤ L1 and ‖G′(x0)−1 G″(v)‖ ≤ L2 for all v, w ∈ Ω0.

(B4) The conditions of Lemma 37 or Lemma 38 hold, and

(B5) B[x0, t∗] ⊂ Ω.

The semi-local convergence analysis of method (45.2) follows based on the conditions (B).


Theorem 73. Under the conditions (B), the following assertions hold:

{xn} ⊂ B(x0, t∗), (45.28)

lim_{n→∞} xn = x∗ ∈ B[x0, t∗] and G(x∗) = 0.

Proof. Mathematical induction shall be used to show

‖yk − xk‖ ≤ sk − tk (45.29)

and

‖xk+1 − yk‖ ≤ tk+1 − sk. (45.30)

By (B1) and (45.4), we get ‖y0 − x0‖ = ‖G′(x0)−1 G(x0)‖ ≤ η = s0 − t0, so y0 ∈ B(x0, t∗) and (45.29) holds for k = 0. In view of (B2), for v ∈ B(x0, t∗) we obtain

‖G′(x0)−1(G′(v) − G′(x0))‖ ≤ L0‖v − x0‖ ≤ L0 t∗ < 1,

so G′(v)−1 ∈ L(V, U) and

‖G′(v)−1 G′(x0)‖ ≤ 1/(1 − L0‖v − x0‖), (45.31)

by the Banach lemma on invertible operators [5,11]. Using method (45.2), we have in turn that

xk+1 = xk − G′(xk)−1 G(xk) − Ak G′(xk)−1 G″(xk)G′(xk)−1 G(xk)
= yk − Ak G′(xk)−1 G″(xk)G′(xk)−1 G(xk)
= yk + Ak G′(xk)−1 G″(xk)(yk − xk). (45.32)

Using (45.31) with v = xk and (B3), we need the estimate

‖G′(xk)−1 G″(xk)‖ ≤ ‖G′(xk)−1 G′(x0)‖ ‖G′(x0)−1 G″(xk)‖ ≤ L2/(1 − L0‖xk − x0‖). (45.33)

In view of (45.4), (B1), (45.32) and (45.33), we obtain in turn

‖xk+1 − yk‖ ≤ ‖Ak‖ ‖G′(xk)−1 G′(x0)‖ ‖G′(x0)−1 G″(xk)‖ ‖yk − xk‖
≤ αL2‖yk − xk‖/(1 − L0‖xk − x0‖)
≤ αL2(sk − tk)/(1 − L0 tk) = tk+1 − sk.

It follows from method (45.2) that

G(xk+1) = G(xk+1) − G(xk) − G′(xk)(yk − xk)
= G(xk+1) − G(xk) − G′(xk)(xk+1 − xk) + G′(xk)(xk+1 − yk), (45.34)

so

‖G′(x0)−1 G(xk+1)‖ ≤ (L/2)‖xk+1 − xk‖² + L1‖xk+1 − yk‖ ≤ (L/2)(tk+1 − tk)² + L1(tk+1 − sk), (45.35)

and

‖yk+1 − xk+1‖ ≤ ‖G′(xk+1)−1 G′(x0)‖ ‖G′(x0)−1 G(xk+1)‖
≤ [L(tk+1 − tk)² + 2L1(tk+1 − sk)] / [2(1 − L0 tk+1)] = sk+1 − tk+1, (45.36)

showing (45.29), where we also used

‖xk+1 − x0‖ ≤ ‖xk+1 − yk‖ + ‖yk − x0‖ ≤ tk+1 − sk + sk − t0 = tk+1 < t∗

and

‖yk+1 − x0‖ ≤ ‖yk+1 − xk+1‖ + ‖xk+1 − x0‖ ≤ sk+1 − tk+1 + tk+1 − t0 = sk+1 < t∗,

so xk+1, yk+1 ∈ B(x0, t∗). The induction for estimates (45.29) and (45.30) is finished. It follows that the sequence {xk} is fundamental (since {tk} is convergent) in the Banach space U, so it converges to some x∗ ∈ B[x0, t∗]. By the continuity of G, letting k → ∞ in (45.35) we deduce G(x∗) = 0. Finally, for i ≥ 0, we get

‖xk+i − xk‖ ≤ ‖xk+i − yk+i−1‖ + ‖yk+i−1 − xk‖ ≤ . . . ≤ (tk+i − sk+i−1) + . . . + (sk − tk) = tk+i − tk.

By letting i → ∞, we obtain the estimate

‖x∗ − xk‖ ≤ t∗ − tk. (45.37)

Remark 55. (1) Condition (B5) can be replaced by B(x0, 1/L0) ⊂ Ω in the case of Lemma 37, and by B(x0, η/(1 − δ)) ⊂ Ω in the case of Lemma 38. Notice that 1/L0 and η/(1 − δ) are given in closed form.

(2) Condition (B3) can be replaced by the tighter

(B3)' ‖G′(x0)−1(G′(v) − G′(w))‖ ≤ L̄‖v − w‖, ‖G′(x0)−1 G′(v)‖ ≤ L̄1 and ‖G′(x0)−1 G″(v)‖ ≤ L̄2

for all v ∈ Ω0 and w = u − G′(u)−1 G(u), u ∈ Ω0. Then, we have

L̄ ≤ L, L̄1 ≤ L1 and L̄2 ≤ L2. (45.38)

Moreover, if these conditions hold for all u, v ∈ Ω with constants L̃, L̃1 and L̃2, then we have

L ≤ L̃, L1 ≤ L̃1 and L2 ≤ L̃2. (45.39)

Notice that the tighter L̄, L̄1 and L̄2 can replace L, L1 and L2 in Theorem 73.

(3) Another way to obtain tighter Lipschitz constants is to consider the set Ω1 = B(x1, 1/L0 − η), provided that L0η < 1. Suppose Ω1 ⊂ Ω. Then, we have Ω1 ⊂ Ω0 and the Lipschitz constants are at least as tight.


Next, a uniqueness result for the solution x∗ is presented, without necessarily using all the conditions (B).

Proposition 24. Suppose:

(i) There exists a simple solution x∗ ∈ B(x0, ρ) ⊂ Ω of the equation G(x) = 0 for some ρ > 0.

(ii) Condition (B2) holds, ρ ≥ t∗ and

(L0/2)(ρ + t∗) < 1. (45.40)

Set Ω2 = B(x0, ρ) ∩ Ω. Then, the only solution of the equation G(x) = 0 in the set Ω2 is the point x∗.

Proof. Let z∗ ∈ Ω2 with G(z∗) = 0. Define the linear operator Q = ∫0^1 G′(x∗ + θ(z∗ − x∗)) dθ. In view of (B2) and (45.40), we obtain

‖G′(x0)−1(Q − G′(x0))‖ ≤ L0 ∫0^1 [(1 − θ)‖x0 − x∗‖ + θ‖z∗ − x0‖] dθ ≤ (L0/2)(t∗ + ρ) < 1.

So, we conclude z∗ = x∗, since the linear operator Q is invertible and Q(x∗ − z∗) = G(x∗) − G(z∗) = 0.

4.

Special Cases

Case I. Set An = (1/2)I and α = 1/2. Then, we obtain Halley's method, and the conclusions of Theorem 73 hold in this case.

Case II. Set An = (1/2)(I − Ln)−1 to obtain the Super-Halley method. This time we have

‖(I − Ln)−1‖ ≤ 1/(1 − βn),

where βn = L2(sn − tn)/(1 − L0 tn). Then, we must have

1/(2(1 − βk)) ≤ α,

or

2L2(sk − tk)/(1 − L0 tk) ≤ 2α − 1 = γ,

or

2L2 δ^k η + L0γ[(1 − δ^(k+1))/(1 − δ)]η − γ ≤ 0,

or

f_k^(3)(t) ≤ 0 at t = δ.

But f_{k+1}^(3)(t) = f_k^(3)(t) + p1(t)t^k η. Define the function f_∞^(3) by

f_∞^(3)(t) = lim_{k→∞} f_k^(3)(t).

Then, we get

f_∞^(3)(t) = L0γη/(1 − t) − γ,

so

f_∞^(3)(t) ≤ 0 if t ≤ 1 − L0η.

Let δ = max{δ1, δ3}. So, the conclusions of Theorem 73 hold for this method if

2α − 1 > 0, c ≤ min{δ1, δ3} ≤ δ ≤ min{1 − L0η, δ2, s2}

or

max{c, s1} ≤ min{δ1, δ3} ≤ δ ≤ min{1 − L0η, δ2}.
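For a scalar equation the special cases above are easy to compare. The sketch below uses the classical scalar forms of the corrections written through A and L (a modelling choice on our part: the signs follow the classical scalar formulations, and the test equation g(x) = x³ − 2 is illustrative):

```python
def halley_like(g, dg, d2g, x0, accel, tol=1e-12, max_iter=30):
    """Scalar Halley-like family: with L = g''(x) g(x) / g'(x)**2 and
    Newton point y = x - g(x)/g'(x), take x_next = x + (1 + A*L)*(y - x),
    where A = accel(L) selects the particular method."""
    x = x0
    for _ in range(max_iter):
        L = d2g(x) * g(x) / dg(x) ** 2
        y = x - g(x) / dg(x)
        x_new = x + (1.0 + accel(L) * L) * (y - x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

g, dg, d2g = (lambda x: x ** 3 - 2.0), (lambda x: 3.0 * x ** 2), (lambda x: 6.0 * x)
root = 2.0 ** (1.0 / 3.0)
chebyshev    = halley_like(g, dg, d2g, 1.0, lambda L: 0.5)                    # A = 1/2
halley       = halley_like(g, dg, d2g, 1.0, lambda L: 0.5 / (1.0 - L / 2.0))  # A = (1/2)(1 - L/2)^-1
super_halley = halley_like(g, dg, d2g, 1.0, lambda L: 0.5 / (1.0 - L))        # A = (1/2)(1 - L)^-1
```

All three variants reach the cube root of 2 in a handful of iterations; they differ only in the choice of acceleration operator An.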

5.

Conclusion

Motivated by optimization considerations, we presented a finer semi-local convergence analysis for the procedure (45.2). The benefits are already stated in the introduction of this chapter.

References

[1] Amat, S., Bermudez, C., Busquier, S., Plaza, S., On a third order Newton-type method free of bilinear operators, Numer. Linear Algebra Appl., 17, (2010), 639-653.

[2] Argyros, I. K., On the Newton-Kantorovich hypothesis for solving equations, J. Comput. Appl. Math., 169, (2004), 315-332.

[3] Argyros, I. K., Computational theory of iterative methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L., Elsevier Publ. Co., New York, U.S.A., 2007.

[4] Argyros, I. K., Convergence and Applications of Newton-type Iterations, Springer Verlag, Berlin, Germany, 2008.

[5] Argyros, I. K., Unified convergence criteria for Banach space valued methods with applications, Mathematics, 9, 16, (2021), 1942.

[6] Argyros, I. K., The theory and applications of iteration methods, 2nd Edition, Engineering Series, CRC Press, Taylor and Francis Group, 2022.

[7] Argyros, I. K., George, S., On the complexity of extending the convergence region for Traub's method, Journal of Complexity, 56, (2020), 101423.

[8] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative method, Indian J. Pure Appl. Math., 51, 2, (2020), 439-455.


[9] Noor, M. A., Waseem, M., Some iterative methods for solving a system of nonlinear equations, Comput. Math. Appl., 57, 1, (2009), 101-106.

[10] Magreñán, A. A., Gutiérrez, J. M., Real dynamics for damped Newton's method applied to cubic polynomials, J. Comput. Appl. Math., 275, (2015), 527-538.

[11] Ortega, J. M., Rheinboldt, W. C., Iterative solution of nonlinear equations in several variables, Classics in Applied Mathematics, Vol. 30, SIAM, Philadelphia, PA, 2000. Reprint of the 1970 original.

[12] Ostrowski, A. M., Solution of equations and systems of equations, Second edition, Pure and Applied Mathematics, Vol. 9, Academic Press, New York-London, 1966.

[13] Traub, J. F., Iterative methods for the solution of equations, Prentice Hall, New Jersey, U.S.A., 1964.

[14] Verma, R., New Trends in Fractional Programming, Nova Science Publishers, New York, USA, 2019.

Chapter 46

Extended Semi-Local Convergence of Steffensen-Like Methods for Solving Nonlinear Equations

1.

Introduction

Steffensen-type methods for nonlinear equations are extended with no additional conditions. Iterates are shown to belong to a smaller domain, resulting in tighter Lipschitz constants and a finer convergence analysis than in earlier works.

Let X be a Banach space, Ω ⊂ X an open set, and F : Ω ⊂ X −→ X a continuously Fréchet-differentiable operator. Consider the problem of finding a solution x∗ ∈ Ω of the equation

F(x) = 0. (46.1)

Numerous methods have been developed to approximate the point x∗, since a closed form for x∗ is available only in special cases. That is why most solution methods for (46.1) are iterative. The convergence region of these methods is small in general, limiting their applicability, and the error bounds on the distances involved are also pessimistic (in general). We consider the Steffensen-like method defined for n = 0, 1, 2, . . . by

yn = xn − An−1 F(xn),
xn+1 = yn − Bn−1 F(yn), (46.2)

where An = [xn, wn; F], [·, ·; F] : Ω × Ω −→ L(X, X) is a divided difference of order one, wn = xn + F(xn) and Bn = [yn, wn; F] + [yn, xn; F] − [xn, wn; F]. Our analysis leads to a finer convergence analysis with the following advantages (A):

(1) Extended convergence domain.
(2) Tighter error bounds on the distances involved.
(3) More precise information about the location of the solution.


The advantages A are obtained with no additional conditions. The methodology is so general that it can be used to extend the applicability of other iterative methods along the same lines.
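In the scalar case, with the standard first-order divided difference [u, v; F] = (F(u) − F(v))/(u − v), method (46.2) can be sketched in a few lines of Python (the helper names and the test equation are illustrative assumptions):

```python
def dd(F, u, v):
    """First-order divided difference [u, v; F] for scalar F (u != v)."""
    return (F(u) - F(v)) / (u - v)

def steffensen_like(F, x0, tol=1e-10, max_iter=50):
    """Method (46.2): w_n = x_n + F(x_n), A_n = [x_n, w_n; F],
    y_n = x_n - A_n^{-1} F(x_n),
    B_n = [y_n, w_n; F] + [y_n, x_n; F] - [x_n, w_n; F],
    x_{n+1} = y_n - B_n^{-1} F(y_n).  Entirely derivative-free."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:          # stop before w = x + F(x) collapses onto x
            return x
        w = x + fx
        A = dd(F, x, w)
        y = x - fx / A
        B = dd(F, y, w) + dd(F, y, x) - A
        x = y - F(y) / B
    return x

# Illustrative run: solve x**2 - 2 = 0 without using any derivatives.
F = lambda x: x * x - 2.0
x_star = steffensen_like(F, 1.5)
```

Note that each step costs three residual evaluations and no Jacobian, which is the practical appeal of Steffensen-type methods.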

2.

Majorizing Real Sequences

Let M0, M be given positive parameters, δ ∈ [0, 1), M0 ≤ M, η ≥ 0, and T = [0, 1). Consider recurrent polynomials defined on the interval T for n = 1, 2, . . . by

f_n^(1)(t) = Mt^(2n)η + Mt^(2n−1)η + 2M0(1 + t + . . . + t^(2n+1))η + M0(t^(2n+1) + 2t^(2n))t^(2n+1)η + δ − 1,

f_n^(2)(t) = Mt^(2n+1)η + M(t^(2n+1) + 2t^(2n))t^(2n)η + 2M0(1 + t + . . . + t^(2n+2))η + δ − 1,

g_n^(1)(t) = Mt³ + Mt² − Mt − M + 2M0(t³ + t⁴) + M0(t^(2n+3) + 2t^(n+2))t⁴η − M0(t^(2n+1) + 2t^(2n))t²η,

g_n^(2)(t) = Mt³ + M(t³ + 2t²)t^(2n+2)η + 2M0(t³ + t⁴) − Mt − M(t + 2)t^(2n)η,

h_{n+1}^(1)(t) = g_{n+1}^(1)(t) − g_n^(1)(t),

h_{n+1}^(2)(t) = g_{n+1}^(2)(t) − g_n^(2)(t),

and the polynomials

g1(t) = Mt³ + Mt² − Mt − M + 2M0(t³ + t⁴),

g2(t) = Mt³ + 2M0(t³ + t⁴) − Mt = t g3(t), g3(t) = Mt² + 2M0(t² + t³) − M

and

g(t) = (t − 1)²(t⁵ + 4t⁴ + 6t³ + 6t² + 5t + 2).

Then, the following auxiliary result connecting these polynomials can be shown.

Lemma 39. The following assertions hold:

and g(t) = (t − 1)2 (t 5 + 4t 4 + 6t 3 + 6t 2 + 5t + 2). Then, the following auxiliary result connecting these polynomials can be shown. Lemma 39. The following assertions hold: (1)

(1)

(1)

fn+1(t) = f n (t) + gn (t)t 2n−1η, (2)

(2)

(2)

fn+1 (t) = f n (t) + gn (t)t 2nη, (1)

hn+1 (t) = g(t)M0t 2n+2η, (2)

hn+1 (t) = g(t)Mt 2nη,

(46.3) (46.4) (46.5) (46.6)

polynomials g1 and g2 have smallest zeros in the interval T − {0} denoted by α1 and α2 , respectively, (1) hn+1 (t) ≥ 0, for each t ∈ [0, α1 ) (46.7)

Extended Semi-Local Convergence of Steffensen-Like Methods ... and

(2)

hn+1 (t) ≥ 0, for each t ∈ [0, α2).

391

(46.8)

Moreover, define functions on the interval T by (1)

g(1) ∞ (t) = lim gn (t) n−→∞

and

(2)

(46.9)

g(2) ∞ (t) = lim gn (t).

(46.10)

g(1) ∞ (t) = g1 (t), for each t ∈ [0, α1),

(46.11)

n−→∞

Then, g(2) ∞ (t) = g2 (t), for each t ∈ [0, α2),

(46.12)

fn+1 (t) ≤ f n (t) + g1 (t)t 2n−1η, for each t ∈ [0, α1),

(46.13)

(1)

(2)

(1)

(2)

fn+1 (t) ≤ f n (t) + g2 (t)t 2nη, for each t ∈ [0, α2), (1)

(1)

(2)

(2)

fn+1 (α1 ) ≤ f n (α1 ), and

fn+1 (α2 ) ≤ f n (α2 ).

(46.14) (46.15) (46.16)

Proof. Assertions (46.3)-(46.6) hold by the definition of these functions and basic algebra. By the intermediate value theorem, the polynomials g1 and g3 have zeros in the interval T − {0}, since g1(0) = −M, g1(1) = 4M0, g3(0) = −M and g3(1) = 4M0. Then, assertions (46.7) and (46.8) follow by the definition of these polynomials and the zeros α1 and α2. Next, assertions (46.13)-(46.16) also follow from (46.9), (46.10) and the definition of these polynomials.

The preceding result is connected to the scalar sequence defined for each n = 0, 1, 2, . . . by

t0 = 0, s0 = η,
t1 = s0 + M(η + δ)η / (1 − M0(2η + δ)),
s_{n+1} = t_{n+1} + M(t_{n+1} − t_n + s_n − t_n)(t_{n+1} − s_n) / (1 − M0(2t_{n+1} + γ_n + δ)), (46.17)
t_{n+2} = s_{n+1} + M(s_{n+1} − t_{n+1} + γ_n)(s_{n+1} − t_{n+1}) / (1 − M0(2s_{n+1} + δ)),

where γ_n = M(t_{n+1} − t_n + s_n − t_n)(t_{n+1} − s_n) and δ ≥ γ_0. Moreover, define the parameters

a1 = M(s1 − t1 + γ0) / (1 − M0(2s1 + δ)), a2 = M(t1 + s0) / (1 − M0(2t1 + γ0 + δ)), a = max{a1, a2}

and α = min{α1, α2}. Then, the first convergence result for the sequence {tn} follows.
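As with (45.4), the sequence (46.17) can be tabulated directly. In the Python sketch below the parameter values are hypothetical, chosen only so that every denominator 1 − M0(·) stays positive:

```python
def majorizing_steffensen(M0, M, eta, delta, n_steps=25):
    """Tabulate the majorizing sequence (46.17):
    t0 = 0, s0 = eta,
    t1 = s0 + M*(eta + delta)*eta / (1 - M0*(2*eta + delta)),
    g_n = M*(t_{n+1} - t_n + s_n - t_n)*(t_{n+1} - s_n),
    s_{n+1} = t_{n+1} + g_n / (1 - M0*(2*t_{n+1} + g_n + delta)),
    t_{n+2} = s_{n+1} + M*(s_{n+1} - t_{n+1} + g_n)*(s_{n+1} - t_{n+1})
                        / (1 - M0*(2*s_{n+1} + delta))."""
    t = [0.0, eta + M * (eta + delta) * eta / (1.0 - M0 * (2.0 * eta + delta))]
    s = [eta]
    for n in range(n_steps):
        g = M * (t[n + 1] - t[n] + s[n] - t[n]) * (t[n + 1] - s[n])
        s_next = t[n + 1] + g / (1.0 - M0 * (2.0 * t[n + 1] + g + delta))
        t_next = s_next + M * (s_next - t[n + 1] + g) * (s_next - t[n + 1]) \
                 / (1.0 - M0 * (2.0 * s_next + delta))
        s.append(s_next)
        t.append(t_next)
    return t, s

# Hypothetical parameters with a small eta:
t, s = majorizing_steffensen(M0=0.4, M=0.6, eta=0.05, delta=0.1)
```

For such values the interleaving t0 ≤ s0 ≤ t1 ≤ s1 ≤ . . . is visible immediately and the terms stay far below the closed-form bound of Lemma 41.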


Lemma 40. Suppose

Mη ≤ 1, 0 < a1, a < a2, a ≤ α, (46.18)

f_1^(1)(α1) ≤ 0 (46.19)

and

f_1^(2)(α2) ≤ 0. (46.20)

Then, the scalar sequence {tn} is nondecreasing, bounded from above by t∗∗ = η/(1 − α), and converges to its unique least upper bound t∗ ∈ [0, t∗∗]. Moreover, the following error bounds hold:

0 < tn+1 − sn ≤ α(sn − tn) ≤ α^(2n+1)η, (46.21)

0 < sn − tn ≤ α(tn − s_{n−1}) ≤ α^(2n)η (46.22)

and

γ_{n+1} ≤ γ_n ≤ γ_0. (46.23)

Proof. Assertions (46.21)-(46.23) hold if we show, using induction, the corresponding estimates (46.24)-(46.26). By (46.17),

t1/s0 = 1 + M(η + δ)/(1 − M0(2η + δ)) > 1,

so s0 < t1, and (46.25) holds for n = 0. Suppose assertions (46.23)-(46.25) hold for each m = 0, 1, 2, . . ., n. By (46.21) and (46.22) we have

sm ≤ tm + α^(2m)η ≤ s_{m−1} + α^(2m−1)η + α^(2m)η ≤ . . . ≤ η + αη + . . . + α^(2m)η = [(1 − α^(2m+1))/(1 − α)]η ≤ t∗∗ (46.27)

and

t_{m+1} ≤ sm + α^(2m+1)η ≤ tm + α^(2m)η + α^(2m+1)η ≤ . . . ≤ η + αη + . . . + α^(2m+1)η = [(1 − α^(2m+2))/(1 − α)]η ≤ t∗∗. (46.28)

By the induction hypotheses, the sequences {tm}, {sm} are increasing. Evidently, estimate (46.23) holds if

Mα^(2m+1)η + Mα^(2m)η + 2M0α[(1 − α^(2m+2))/(1 − α)]η + M0αδ + M0αγm − α ≤ 0,

or

f_m^(1)(t) ≤ 0 at t = α1, (46.29)

where γm ≤ M(α^(2m+1) + 2α^(2m))α^(2m+1)η². By (46.13), (46.15) and (46.20), estimate (46.29) holds. Similarly, assertion (46.25) holds if

Mα^(2m+2)η + M²(α^(2m+1)η + 2α^(2m)η)α^(2m+1)η + 2αM0(1 + α + . . . + α^(2m+2))η + δα − α ≤ 0,

or

f_m^(2)(t) ≤ 0 at t = α2. (46.30)

By (46.14), (46.16) and (46.29), assertion (46.30) holds. Hence, (46.22) and (46.25) also hold. Notice that γn can be written as γn = M(En + En¹)En², where En = tn+1 − tn > 0, En¹ = sn − tn and En² = tn+1 − sn > 0. Hence, we get

E_{n+1} − E_n = t_{n+2} − 2t_{n+1} + t_n ≤ α^(2n)(α² − 1)(α + 1)η < 0,

E_{n+1}¹ − E_n¹ = s_{n+1} − t_{n+1} − (s_n − t_n) ≤ α^(2n)(α² − 1)η < 0

and

E_{n+1}² − E_n² = t_{n+2} − s_{n+1} − (t_{n+1} − s_n) ≤ α^(2n+1)(α² − 1)η < 0,

so

γ_{n+1} ≤ γ_n ≤ γ_0.

It follows that the sequence {tn} is nondecreasing and bounded from above by t∗∗, so it converges to t∗.

Next, a second convergence result for the sequence (46.17) is presented, whose sufficient criteria are weaker but more difficult to verify than those of Lemma 40.

Lemma 41. Suppose that

M0δ < 1, (46.31)

M0(2t_{n+1} + γ_n + δ) < 1 (46.32)

and

M0(2s_{n+1} + δ) < 1 (46.33)

hold. Then, the sequence {tn} is increasing and bounded from above by t1∗∗ = (1 − M0δ)/(2M0), so it converges to its unique least upper bound t1∗ ∈ [0, t1∗∗].

Proof. It follows from the definition of the sequence (46.17) and conditions (46.31)-(46.33).


3.


Convergence

The conditions (C) shall be used in the semi-local convergence analysis of method (46.2). Suppose:

(C1) There exist x_0 ∈ Ω, η ≥ 0, δ ∈ [0, 1) such that A_0^{−1} ∈ L(X, X), ‖A_0^{−1} F(x_0)‖ ≤ η, and ‖F(x_0)‖ ≤ δ.

(C2) There exists M_0 > 0 such that for all u, v ∈ Ω,
‖A_0^{−1}([u, v; F] − A_0)‖ ≤ M_0 (‖u − x_0‖ + ‖v − w_0‖).
Set Ω_0 = U(x_0, (1 − M_0 δ)/(2 M_0)) ∩ Ω for M_0 δ < 1.

(C3) There exists M > 0 such that for all u, v, ū, v̄ ∈ Ω_0,
‖A_0^{−1}([u, v; F] − [ū, v̄; F])‖ ≤ M (‖u − ū‖ + ‖v − v̄‖).

(C4) U[x_0, ρ + δ] ⊂ Ω, where ρ = t* + γ_0 or t** if the conditions of Lemma 40 hold, and ρ = t_1* + γ_0 or t_1** if the conditions of Lemma 41 hold.

Remark 56. The results in [10] were given in non-affine form. The benefits of using affine invariant results over non-affine ones are well known [1]-[4]. In particular, they assumed ‖A_0^{−1}‖ ≤ β and that

(C3)' ‖[x, y; F] − [x̄, ȳ; F]‖ ≤ M̄ (‖x − x̄‖ + ‖y − ȳ‖)

holds for all x, y, x̄, ȳ ∈ Ω. By the definition of the set Ω_0, we get

Ω_0 ⊂ Ω,   (46.34)

M_0 ≤ β M̄,   (46.35)

and

M ≤ β M̄.   (46.36)

Hence, M can replace β M̄ in the results in [10]. Notice also that using (C3)' they estimated

‖B_{n+1}^{−1} A_0‖ ≤ 1 / (1 − β M̄ (2 s̄_{n+1} + δ))   (46.37)

and

‖A_0^{−1}(A_{n+1} − A_0)‖ ≤ 1 / (1 − β M̄ (t̄_{n+1} − t̄_0 + γ̄_n + δ)),   (46.38)

where {t̄_n}, {s̄_n} are defined for n = 0, 1, 2, . . . by

t̄_0 = 0,  s̄_0 = η,
t̄_1 = s̄_0 + β M̄ (η + δ) η / (1 − β M̄ (2 s̄_0 + δ)),
s̄_{n+1} = t̄_{n+1} + β γ̄_n / (1 − β M̄ (2 t̄_{n+1} + γ̄_n + δ)),
t̄_{n+2} = s̄_{n+1} + β M̄ (s̄_{n+1} − t̄_{n+1} + γ̄_n)(s̄_{n+1} − t̄_{n+1}) / (1 − β M̄ (2 s̄_{n+1} + δ)),   (46.39)

Extended Semi-Local Convergence of Steffensen-Like Methods ...


where γ̄_n = M̄ (t̄_{n+1} − t̄_n + s̄_n − t̄_n)(t̄_{n+1} − s̄_n) and δ ≥ γ̄_0. But using the weaker condition (C2), we obtain, respectively,

‖B_{n+1}^{−1} A_0‖ ≤ 1 / (1 − M_0 (2 s_{n+1} + δ))   (46.40)

and

‖A_0^{−1}(A_{n+1} − A_0)‖ ≤ 1 / (1 − M_0 (t_{n+1} − t_0 + γ_n + δ)),   (46.41)

which are tighter estimates than (46.37) and (46.38), respectively. Hence, M_0, M can replace β M̄, and (46.40), (46.41) can replace (46.37), (46.38), respectively, in the proof of Theorem 3 in [10]. Examples where (46.34)-(46.36) are strict can be found in [1]-[7]. Simple induction shows that

0 < s_n − t_n ≤ s̄_n − t̄_n,   (46.42)

0 < t_{n+1} − s_n ≤ t̄_{n+1} − s̄_n,   (46.43)

and

t* ≤ t̄* = lim_{n→∞} t̄_n.   (46.44)

These estimates justify the claims made in the introduction of this work. The local results in [10] can also be extended using our technique. Next, we present the semi-local convergence result for method (46.2).

Theorem 74. Suppose that conditions (C) hold. Then, the iteration {x_n} generated by method (46.2) exists in U[x_0, t*], remains in U[x_0, t*], and lim_{n→∞} x_n = x* ∈ U[x_0, t*] with F(x*) = 0, so that

‖x_n − x*‖ ≤ t* − t_n.

Proof. It follows from the comments above Theorem 74.

Next, we present a uniqueness result for the solution, where the conditions (C) are not necessarily utilized.

Proposition 25. Suppose:
(i) there exists a simple solution x* ∈ U(x_0, r) ⊂ Ω for some r > 0;
(ii) condition (C2) holds; and
(iii) there exists r* ≥ r such that M_0 (r + r* + δ) < 1.
Set Ω_1 = U(x_0, (1 − M_0 (δ + r))/M_0) ∩ Ω. Then, the element x* is the only solution of the equation F(x) = 0 in the region Ω_1.

Proof. Let z* ∈ Ω_1 with F(z*) = 0. Define Q = [x*, z*; F]. Then, in view of (ii) and (iii),

‖A_0^{−1}(Q − A_0)‖ ≤ M_0 (‖x* − x_0‖ + ‖z* − w_0‖) ≤ M_0 (r + r* + δ) < 1,

so z* = x* is a consequence of the invertibility of Q and the identity Q(x* − z*) = F(x*) − F(z*) = 0.
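For concreteness, the classical scalar Steffensen iteration (shown only as an illustration of the divided-difference idea; it is not claimed to coincide with method (46.2)) exhibits the kind of derivative-free convergence that Theorem 74 describes:

```python
import math

# Classical scalar Steffensen iteration:
#   x_{n+1} = x_n - F(x_n) / [x_n, x_n + F(x_n); F]
#           = x_n - F(x_n)^2 / (F(x_n + F(x_n)) - F(x_n))
def steffensen(F, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        denom = F(x + fx) - fx  # divided difference [x, x + F(x); F] times fx
        x = x - fx * fx / denom
    return x

root = steffensen(lambda x: x * x - 2.0, 1.5)
assert abs(root - math.sqrt(2.0)) < 1e-10
```

No derivative of F is evaluated; the divided difference on the auxiliary point x + F(x) plays the role of F'(x).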


Remark 57. (i) Notice that r can be chosen to be t*.
(ii) The results can be extended further as follows. Replace (C3) by
(C3)'' ‖A_0^{−1}([u, v; F] − [ū, v̄; F])‖ ≤ M̃ (‖u − ū‖ + ‖v − v̄‖) for all u, ū ∈ Ω_0, v = u − A(u)^{−1} F(u), and v̄ = ū − A(ū)^{−1} F(ū).
Then, we have
(iii) M̃ ≤ M.
Another way: define the set Ω_2 = U(x_1, (1 − M_0 (δ + γ_0))/(2 M_0) − η), provided that M_0 (δ + γ_0) < 1, and suppose Ω_2 ⊂ Ω. Then, we have Ω_2 ⊂ Ω_0. If condition (C3)'' also holds on Ω_2, say with constant M̃_0, then M̃_0 ≤ M. Hence, the tighter M̃ or M̃_0 can replace M in Theorem 74.
(iv) The local results in [10] can also be extended along the lines of the semi-local ones.

4. Conclusion

The convergence of method (46.2) is extended by using more precise majorizing sequences and no additional conditions. This methodology applies to other methods [1]-[12].

References

[1] Argyros, I. K., Hilout, S., Inexact Newton-type procedures, J. Complexity, 26(6) (2010), 577-590.
[2] Argyros, I. K., Convergence and Applications of Newton-Type Iterations, Springer, New York, 2008.
[3] Argyros, I. K., Magreñán, Á. A., A Contemporary Study of Iterative Procedures, Elsevier (Academic Press), New York, 2018.
[4] Argyros, I. K., George, S., Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV, Nova Publishers, NY, 2021.
[5] Behl, R., Maroju, P., Martinez, E., Singh, S., A study of the local convergence of a fifth order iterative procedure, Indian J. Pure Appl. Math., 51(2) (2020), 439-455.
[6] Argyros, I. K., Computational Theory of Iterative Methods, Vol. 15, Elsevier, 2007.
[7] Argyros, I. K., Cordero, A., Magreñán, Á. A., Torregrosa, J. R., Third-degree anomalies of Traub's method, J. Comput. Appl. Math., 309 (2017), 511-521.
[8] Kung, H. T., Traub, J. F., Optimal order of one-point and multipoint iteration, J. Assoc. Comput. Mach., 21 (1974), 634-651.
[9] Liu, Z., Zheng, Q., Zhao, P., A variant of Steffensen's method of fourth-order convergence and its applications, Appl. Math. Comput., 216 (2010), 1978-1983.
[10] Moccari, M., Lotfi, T., On a two-step optimal Steffensen-type method: relaxed local and semi-local convergence analysis and dynamical stability, J. Math. Anal. Appl., 468 (2018), 240-269.


[11] Ortega, J. M., Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[12] Zheng, Q., Li, J., Huang, F., An optimal Steffensen-type family for solving nonlinear equations, Appl. Math. Comput., 217 (2011), 9592-9597.

About the Authors

Christopher I. Argyros
Researcher, Department of Computing and Mathematical Sciences
Cameron University, Lawton, OK, USA
Email: [email protected]

Samundra Regmi
Researcher and Professional Mathematics Tutor, Learning Commons
University of North Texas at Dallas, Dallas, TX, USA
Email: [email protected]

Ioannis K. Argyros
Professor, Department of Computing and Mathematical Sciences
Cameron University, Lawton, OK, USA
Email: [email protected]

Dr. Santhosh George
Department of Mathematical and Computational Sciences
National Institute of Technology Karnataka, India
Email: [email protected]

Index A algorithm, 2, 3, 4, 5, 6, 9, 15, 17, 19, 21, 23, 24, 25, 27, 29, 30, 31, 33, 35, 36, 37, 42, 43, 44, 45, 46, 48, 144, 145, 146, 147, 148, 149, 151, 183, 193, 205, 207, 221, 232, 248, 271, 279, 287, 295, 303 applied mathematics, 20, 63, 75, 84, 94, 115, 128, 129, 193, 209, 220, 231, 237, 247, 255, 261, 270, 278, 279, 285, 286, 294, 302, 324, 335, 350, 357, 363, 388 approximation, 42, 43, 84, 94, 99, 106, 128, 151, 270, 278, 286, 290, 294, 302, 305, 365, 369, 376, 377, 389

B ball convergence, 95, 97, 99, 101, 103, 105, 107, 282, 283, 374 ball convergence theorem, 95, 97, 99, 101, 103, 105, 107, 374 Banach Lemma, 90, 98, 100 Banach space(s), 3, 13, 19, 20, 24, 25, 28, 30, 36, 49, 52, 55, 60, 66, 75, 77, 84, 87, 90, 91, 93, 94, 98, 100, 106, 107, 109, 112, 114, 117, 122, 128, 129, 131, 134, 140, 141, 143, 146, 153, 156, 162, 163, 167, 172, 173, 174, 185, 188, 193, 195, 197, 203, 204, 205, 210, 211, 217, 218, 219, 221, 222, 223, 227, 229, 230, 231, 232, 233, 235, 237, 239, 243, 245, 246, 248, 249, 251, 253, 257, 259, 260, 263, 265, 266, 271, 273, 275, 279, 281, 283, 286, 287, 289, 291, 295, 297, 299, 303, 305, 306, 311, 321, 329, 335, 343, 347, 350, 357, 358, 359, 361, 362, 363, 364, 366, 368, 369, 370, 371, 374, 375, 377, 379, 384, 385, 387, 389 bounds, 27, 41, 49, 81, 104, 105, 111, 114, 126, 163, 185, 201, 202, 211, 223, 237, 239, 255, 261, 269, 278, 285, 293, 301, 306, 316, 324, 325, 341, 348, 349, 350, 351, 356, 357, 358, 369, 370, 375, 389, 392

convergence criteria, 36, 49, 69, 75, 109, 114, 121, 202, 205, 219, 246, 332, 336, 341, 349, 350, 353, 356, 357, 360, 362, 363, 368, 371, 375, 387 convergence radius, 23, 136, 145, 155, 158, 170, 175, 176, 177, 187, 191, 199, 201 convergence region, 1, 9, 13, 16, 19, 21, 69, 117, 128, 141, 369, 375, 387, 389

D derivative, 75, 83, 87, 88, 92, 94, 96, 104, 105, 115, 127, 131, 132, 140, 143, 144, 145, 147, 149, 150, 151, 152, 153, 154, 161, 163, 171, 172, 180, 182, 185, 192, 193, 195, 201, 211, 219, 223, 230, 233, 237, 239, 246, 255, 261, 263, 269, 273, 278, 281, 285, 289, 293, 297, 301, 324, 325, 363, 369 differential equations, 1, 33 divided difference, 163

F Fifth Order Methods, 153, 155, 157, 159, 161 Fifth Order Scheme, 249, 251, 253, 255, 281, 283, 285, 287 Fifth-Order Method, 153 first derivatives, 153 fixed point, 84, 374 fixed point problems, 374

G Gauss-Newton, 14, 15, 17, 19, 20, 33, 35, 36, 37, 55, 61, 67, 205, 207, 208, 209, 210, 316, 374, 376 Gauss-Newton method, 20, 37, 209, 210, 316, 374 generalized condition, 109, 115, 131, 133, 135, 137, 139, 141, 185, 187, 189, 191, 193, 362 generalized equation, 9, 11, 13, 14, 67, 75, 209, 342 generalized inverses, 49, 51, 53, 55, 57, 59, 60, 61, 63, 65, 67 global convergence, 7

C Chebyshev-Halley method, 87, 140, 151, 171, 192 closed ball, 39, 78, 88, 96, 133 composite optimization, 14, 15, 17, 19, 20, 33, 210, 316

H Halley’s method, 93, 106, 369, 379, 380 Hilbert space, 9, 11, 13 Homeier Method, 233, 235, 237


I

N

inclusion problem, 30, 33, 205, 208, 305, 316, 335, 337, 339, 341 interior point algorithms, 39, 41, 43, 45, 47, 48 inverses, 39, 49, 51, 52, 53, 55, 57, 59, 60, 61, 63, 65, 67, 167, 221, 316, 329, 375, 376 iterative methods, 6, 7, 13, 19, 24, 30, 36, 47, 55, 60, 61, 67, 68, 74, 84, 93, 94, 95, 96, 97, 99, 101, 103, 105, 106, 107, 110, 111, 117, 119, 121, 123, 125, 127, 128, 129, 131, 132, 140, 141, 144, 151, 154, 161, 162, 171, 172, 182, 185, 192, 194, 203, 209, 211, 219, 220, 221, 223, 230, 231, 237, 239, 247, 255, 261, 270, 278, 285, 286, 294, 302, 324, 325, 334, 342, 350, 357, 369, 370, 377, 387, 388, 390, 396

Newton, 1, 6, 7, 9, 13, 14, 19, 20, 21, 23, 24, 25, 27, 29, 31, 34, 36, 37, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 55, 57, 59, 60, 61, 63, 64, 65, 67, 68, 72, 74, 75, 84, 87, 92, 93, 94, 95, 104, 106, 107, 114, 115, 117, 121, 126, 127, 128, 129,136, 140, 141, 151, 158, 161, 162, 170, 171, 172, 173, 175, 176, 177, 179, 181, 182, 183, 191, 192, 193, 201, 203, 204, 209, 210, 219, 220, 221, 222, 230, 231, 232, 237, 247, 248, 255, 261, 270, 271, 278, 279, 280, 285, 286, 287, 294, 295, 302, 303, 305, 307, 309, 311, 313, 315, 316, 324, 334, 335, 336, 337, 339, 341, 342, 343, 347, 348, 350, 353, 357, 358, 363, 364, 365, 369, 370, 374, 375, 376, 377, 387, 388, 396 Newton--Kantorovich Theorem, 39 Newton-like method, 55, 61, 67, 93, 94, 95, 114, 115, 140, 162, 172, 182, 193, 204, 220, 221, 222, 231, 247, 271, 279, 286, 295, 303, 305, 307, 309, 311, 313, 315, 316, 353, 357, 358, 363, 364, 374, 375, 376, 377 non differentiable equations, 373, 375, 377

J Jarratt-like, 141, 280, 295, 303, 324 Jarratt-type method, 131, 133, 135, 137, 139, 141

K Kantorovich Theorem, 348, 372

O

L

optimization problems, 205, 207, 209, 374 outer inverses, 52, 55, 60, 61, 67, 221, 316, 375, 376

Lipschitz conditions, 33, 71, 203, 206, 220 local analysis, 132, 144, 326, 383 local convergence, 21, 22, 23, 24, 25, 49, 57, 60, 63, 64, 77, 80, 84, 87, 88, 89, 93, 95, 96, 97, 100, 101, 105, 109, 110, 111, 112, 114, 115, 118, 125, 128, 129, 132, 134, 137, 139, 140, 144, 146, 149, 151, 154, 156, 159, 160, 164, 166, 171, 174, 177, 182, 186, 198, 199,202, 212, 220, 221, 223, 224, 230, 231, 232, 233, 234, 235, 237, 239, 240, 247, 248, 249, 255, 257, 262, 263, 267, 270, 271, 273, 274, 278, 279, 282, 285, 286, 287, 289, 291, 293, 294, 295, 297, 299, 301, 302, 303, 317, 325, 330, 331, 334, 366, 368, 371, 387, 396 local convergence analysis, 60, 64, 77, 84, 87, 93, 96, 97, 100, 101, 105, 109, 110, 111, 114, 125, 128, 132, 137, 139, 144, 149, 154, 159, 160, 166, 177, 198, 202, 230, 234, 249, 257, 263, 274, 282, 289, 297, 317

M majorizing sequences, 71, 77, 118, 124, 195, 212, 219, 224, 230, 240, 246, 258, 306, 307, 318, 335, 336, 341 mathematical induction, 121, 134, 156, 235, 337 mathematical modeling, 9, 33, 47

P parameters, 1, 3, 57, 64, 67, 70, 78, 89, 97, 100, 104, 105, 118, 124, 129, 132, 144, 164, 186, 207, 218, 224, 229, 234, 240, 246, 250, 254, 258, 264, 267, 274, 282, 290, 298, 306, 309, 318, 326, 330, 336, 344, 345, 346, 348, 353, 354, 355, 356, 359, 372, 380, 390, 391 projection, 63, 92, 104

Q quadratic, 29, 35, 42, 43, 53, 57, 87, 161, 207, 370

R radius of convergence, 23, 24, 67, 81, 82, 83, 126, 132, 133, 137, 138, 144, 154, 159, 166, 170, 175, 191, 234, 236, 267, 269, 275, 277, 282, 285, 290, 293, 301 Riemannian manifold(s), 20, 21, 23, 24, 25, 27, 29, 30, 31, 33, 35, 36, 37


S

T

scalar function, 132, 274, 290, 326, 332, 345 scalar sequences, 224, 240, 250, 307, 335 second derivative, 94, 95, 106, 140, 141 second substep of method, 103, 135 semi-local convergence, 1, 3, 4, 6, 9, 10, 11, 16, 18, 21, 27, 28, 30, 34, 40, 41, 50, 69, 70, 77, 84, 87, 88, 94, 95, 111, 114, 117, 118, 121, 123, 128, 177, 182, 195, 197, 205, 216, 219, 227, 230, 237, 243, 246, 250, 251, 255, 258, 261, 264, 265, 305, 306, 310, 312, 314, 315,317, 320, 324, 325, 328, 334, 340, 341, 350, 353, 354, 355, 357, 359, 360, 361, 362, 365, 366, 367, 371, 374, 379, 383, 387, 389, 391, 393, 394, 395, 396, 397 Seventh Order Method for Equations, 297, 299, 301, 303 Single Step Third Order Method, 325, 327, 329, 331, 333

Taylor expansion, 131, 211, 223, 233, 239, 249, 257, 317, 325 Taylor series, 197, 212, 224, 240 Tenth Convergence Order Method, 185, 187, 189, 191, 193 third-order method, 94, 141

V vector field, 21

W Werner Method, 257, 259, 261