
Reza N. Jazar

Perturbation Methods in Science and Engineering


Reza N. Jazar School of Engineering RMIT University Melbourne, VIC, Australia

ISBN 978-3-030-73460-2 ISBN 978-3-030-73462-6 (eBook) https://doi.org/10.1007/978-3-030-73462-6 © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Her imperial majesty Queen Farah Pahlavi

This book is dedicated to Her Imperial Majesty Farah Pahlavi, the most virtuous and noble Persian Queen, who has universally and profoundly promoted Iranian arts and culture with devotion, whose contribution is beyond comparison in the last 1400 years.

and Kavosh, Vazan, Mojgan.

If you can find the solution, find it; if you cannot, perturb the closest solution you can find.

Preface

How to Use the Book

If in your undergraduate program you have passed Calculus, Linear Algebra, Differential Equations, and Advanced Mathematics, then you may read Chap. 1, skip Chaps. 2 and 3, jump to Chap. 4, and read the rest of the book. If in your undergraduate program you have passed Calculus, Linear Algebra, and Differential Equations, then you may read Chap. 1, skip Chap. 2, jump to Chap. 3, start from Sect. 3.2, and read the rest of the book. If in your undergraduate program you have passed Calculus and Linear Algebra, then you may read from the beginning of the book at Chap. 1 and read the whole book, except Sect. 3.1. If in your undergraduate program you have not seen Calculus and Linear Algebra, then this book is not for you.

Read It First

Several years ago, when I was a student, I was working on the Mathieu-Duffing equation. Perturbation methods were the only classical methods that could provide me with a semi-analytical solution of the problem. Studying the great books of Ali Nayfeh (1933–2017) on perturbation analysis opened a door for me to enter the new world of nonlinear systems (Nayfeh, 1993a, 2000a), a world in which mathematics turned out to read like poetry. In the world of semi-analytical perturbation methods, the hard and untamed mathematical equations flowed like music and poems. Reading Nayfeh's amazing perturbation analysis books several times, and then the amazing book Nonlinear Oscillations by Nayfeh and Dean T. Mook, was everything I needed to dive into the ocean of nonlinear vibrations and nonlinear systems (Nayfeh & Mook, 1979). Nayfeh became one of my role models in science, and I began to enjoy reading his other books: Applied Nonlinear Dynamics by Nayfeh and Balakumar Balachandran, Nonlinear Interactions, and The Method of Normal Forms (Nayfeh & Balachandran, 1995; Nayfeh, 2000b, 1993b). I then dedicated my studies to nonlinear vibrations and nonlinear dynamics using perturbation methods as the main tool of analysis, reading all available materials starting from the works of

the pioneer of perturbation methods, Henri Poincaré (1854–1912) (Poincaré, 1892). Reading Richard Rand's Perturbation Methods showed the potential of perturbation analysis in extracting the behavior of applied nonlinear models of engineering applications. Employing perturbation analysis to walk across a Mathieu stability chart was an amazing application of nonlinear analysis (Rand & Armbruster, 1987; Rand, 1994, 2012). To me, Rand is a great scientist who has used perturbation methods successfully in a vast range of engineering and mathematical problems. There are many more scientists who are experts in perturbation analysis whose names and contributions must be included in a book on the history of regular perturbations, similar to the fantastic book of O'Malley (O'Malley, 2014).

Modern science begins in the eighteenth century, when differential equations were born and became the superior method of modeling engineering and physical phenomena. Soon after, it became clear that we can solve only a very limited portion of differential equations, mainly the linear ones. Information about the solutions of the equations was still necessary, so indirect methods of solution, such as qualitative, numerical, and approximate methods, were born. One set of approximate methods is the perturbation techniques that this book is about.

Assume there is an equation A that we are unable to solve. There is another equation B, close and nearby to A, that we are able to solve. Equation B is found by eliminating the nonlinear terms of A. Assuming that the eliminated nonlinear terms are very small, we may also assume that the solution of A is close to the solution of B. Eliminating the terms associated with the nonlinear terms from the solution of A reduces the solution of A to the solution of B. This is the main concept of perturbation approximation methods.

To visualize the perturbation methods, let us consider an algebraic equation as an example. Let us search for solutions of

x² − 13.1x + 36 = 0    (1)

and assume we are unable to find the exact solution at the moment. However, we may find out that if we had the close equation

x² − 13x + 36 = 0    (2)

then we would be able to split it into the form

(x − 4)(x − 9) = 0    (3)

with known solutions

x1 = 4    x2 = 9    (4)

Let us rewrite Eq. (1) as a perturbed equation to (2),

x² − (13 + 0.1)x + 36 = 0    (5)

and assume the solutions of the perturbed equation are close to the solutions of the unperturbed equation,

x1 = 4 + ε    x2 = 9 + ε    (6)

where ε is a very small number to be determined. Substituting the perturbed solution x1 into the original equation (1) provides

(4 + ε)² − 13.1(4 + ε) + 36 = ε² − 5.1ε − 0.4    (7)

Because ε is assumed to be very small, we may assume ε² ≈ 0 to determine ε,

−5.1ε − 0.4 = 0    (8)

ε = −7.8431 × 10⁻²    (9)

Therefore, the solution x1 of Eq. (1) would be

x1 = 4 + ε = 4 − 7.8431 × 10⁻² = 3.9216    (10)

Similarly, we substitute the perturbed solution x2 into (1) to have

(9 + ε)² − 13.1(9 + ε) + 36 = ε² + 4.9ε − 0.9    (11)

4.9ε − 0.9 = 0    (12)

Ignoring ε² ≈ 0 gives us ε,

ε = 0.18367    (13)

and hence, the solution x2 of Eq. (1) would be

x2 = 9 + ε = 9 + 0.18367 = 9.1837    (14)

The approximate solutions

x1 = 3.9216    x2 = 9.1837    (15)

of Eq. (1) are close enough to the exact solutions x1 = 3.9227 and x2 = 9.1773, which we supposed were unavailable. Error analysis and ways to increase the accuracy of the approximate solutions are topics of the next chapters. This example shows how perturbation analysis works. However, the meaning of close or nearby still needs to be clarified, which will be done in the following chapters.
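The whole calculation above is easy to reproduce numerically. The following short Python sketch (an illustration added here, not part of the book) repeats the first-order perturbation step for both roots and compares the results with the exact roots of Eq. (1):

```python
import math

# Perturbed equation (1): x^2 - 13.1x + 36 = 0, written as f(x) = 0
b, c = -13.1, 36.0
f = lambda x: x**2 + b*x + c

# Roots of the unperturbed equation (2): x^2 - 13x + 36 = 0
for x0 in (4.0, 9.0):
    # Substitute x = x0 + eps and drop eps^2: (2*x0 + b)*eps + f(x0) = 0
    eps = -f(x0) / (2.0*x0 + b)
    print(f"x0 = {x0}: eps = {eps:+.5f}, approximate root = {x0 + eps:.4f}")

# Exact roots of Eq. (1) for comparison
disc = math.sqrt(b*b - 4.0*c)
print("exact roots:", (-b - disc)/2.0, (-b + disc)/2.0)
```

Running the sketch gives 3.9216 and 9.1837 against the exact 3.9227 and 9.1773, the same numbers as in Eqs. (9)–(15).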

This text is intended for graduate students in mathematics, physics, and engineering. It can also be used by those who conduct research in the field of nonlinear dynamics as a standard reference if and when they encounter unavailable solutions of differential equations associated with dynamic systems. Generally speaking, all the problems in science and engineering are nonlinear from the outset. Linearization of equations is the first approximating approach, and it is good enough and satisfactory for most purposes. There are, however, many cases in which linear treatments are not applicable or do not provide enough information about the aspects we are interested in. There are always new phenomena occurring in nonlinear systems which cannot occur in linear systems and will be missed by linearization. The aim of this book is to introduce methods of improving the accuracy obtainable by linearization.

It is often possible to capture the principal features of a system in the first approximation. For example, the body of the Earth may be described as a sphere, an ellipsoid, or a geoid. The first, the second, and the third descriptions are all approximate, although their accuracy successively increases. However, the first approximation of the sphericity of the Earth, which is the most inaccurate of the three, was good enough to make a scientific revolution (Andrianov et al., 2002).

In perturbation analysis, the solution of the nonlinear equation will be expressed by a series, with each term multiplied by a power of the perturbation parameter, ε^k, where k is the number of the term. The series generated by a perturbation approach does not necessarily converge, but a truncated series still approximates the solution. T. Stieltjes (1856–1894) and H. Poincaré (1854–1912) were the first to introduce a clear concept of such series (Holmes, 1995). An equation is considered to be solved if the solution has been expressed in a finite number of known functions, which is possible in only about one case out of a hundred. What we can do instead is solve the problem approximately, qualitatively, or numerically, in that order of importance (Poincaré, 1908; Kiyosi, 1993).

Level of the Book

This book has been developed from nearly two decades of research and teaching in engineering and mathematics. It primarily addresses the missing knowledge in the curriculum of graduate students in engineering and science. Hence, it is an advanced-level book that may also be used as a textbook. It provides the fundamental and advanced topics needed to implement some approximate methods for solving differential equations on a computer. The whole book can be covered in a single course in 14–16 weeks. Students are required to know the fundamentals of calculus, kinematics, and dynamics, as well as to have an acceptable knowledge of numerical methods and differential equations.

The contents of the book have been kept at a fairly theoretical-practical level. All concepts are explained in depth and their applications emphasized, and most of the related theories and formal proofs are explained. The book places a strong emphasis on the physical meaning and applications of the concepts. Topics that have been selected are of high interest in the field. An attempt has been made to expose students and researchers to the most important topics and applications.

Organization of the Book

The book is organized so it can be used for teaching or for self-study. It is in two parts. Part I is set in three chapters to cover the preliminary knowledge needed to study the second part. Chapter 1 contains basic information on what perturbation means. Chapter 2 reviews the modeling of engineering systems and indirect analysis methods of differential equations. Chapter 3 introduces methods of approximation of functions.

Part II is the main part of the book and is set in six chapters to cover the methods of perturbation analysis. Chapter 4 covers the most logical and practical perturbation method for determining approximate periodic solutions of equations, well suited to determining frequency responses. Chapter 5 introduces the most fundamental perturbation method, suitable for finding the response of differential equations in the independent-variable domain. Chapter 6 is an extension of the straightforward method to find periodic solutions and determine the frequency response of equations. Chapter 7 introduces parametric equations and shows how the Lindstedt-Poincaré method discovers the Mathieu functions and the stability chart of the equation. Chapter 8 shows the advantage and application of the transformation from Cartesian to polar coordinates; it is suitable for determining the frequency response of periodically forced dynamic systems. Chapter 9 presents a strong method for determining the solution of differential equations in both the time and frequency domains.

Method of Presentation This book uses a “fact-reason-application” structure. The “fact” introduces the main subject of each section. Then the reason is given as a “proof.” The application of the fact is examined in some “examples.” The “examples” are a very important part of the book as they show how to implement the “facts.” They also cover some other facts that are needed to expand the knowledge explained in “fact.” Most examples are self-sustained and introduce, analyze, and discuss an independent topic.

Prerequisites The book is written for researchers and advanced graduate-level students of science, engineering, and mathematics; the assumption is that users are familiar with matrix algebra, numerical analysis, calculus, differential equations, as well as principles of kinematics and dynamics. Therefore, the prerequisites are the fundamentals of kinematics, dynamics, vector analysis, matrix theory, numerical methods, and differential equations.

Unit System The system of units adopted in this book is, unless otherwise stated, the international system of units (SI). The units of degree (deg) or radian (rad) are utilized for variables representing angular quantities.

Symbols

• Lowercase bold letters indicate a vector. Vectors may be expressed in an n-dimensional Euclidean space. For example:

r, s, d, a, b, c, p, q, v, w, y, z, ω, α, θ, δ, φ

• Uppercase bold letters indicate a dynamic vector or a dynamic matrix, such as force and moment, or a coefficient matrix. For example:

A, F, M, J

• Lowercase letters with a hat indicate a unit vector. Unit vectors are not bold. For example:

ı̂, ĵ, k̂, Î, Ĵ, K̂, û, ûθ, ûϕ, ûψ, n̂

• The length of a vector is indicated by a non-bold lowercase letter. For example:

r = |r|, a = |a|, b = |b|, s = |s|

• The capital letter B is utilized to denote a body coordinate frame. For example:

B(oxyz), B(Oxyz), B1(o1x1y1z1)

• The capital letter G is utilized to denote a global, inertial, or fixed coordinate frame. For example:

G, G(XYZ), G(OXYZ)

• An asterisk ⋆ indicates a more advanced subject or example that is not designed for undergraduate teaching and can be dropped in the first reading.

References

Andrianov, I. V., Manevitch, L. I., & Hazewinkel, M. (2002). Asymptotology: Ideas, methods, and applications. New York, NY: Springer.
Holmes, M. H. (1995). Introduction to perturbation methods. New York, NY: Springer.
Kiyosi, I. (1993). Encyclopedic dictionary of mathematics. Mathematical Society of Japan (2nd ed.). Cambridge, MA: The MIT Press.
Nayfeh, A. H. (1993a). Introduction to perturbation techniques. New York: John Wiley & Sons.
Nayfeh, A. H. (1993b). The method of normal forms. New York: John Wiley & Sons.
Nayfeh, A. H. (2000a). Perturbation methods. New York: John Wiley & Sons.
Nayfeh, A. H. (2000b). Nonlinear interactions: Analytical, computational, and experimental methods. New York: John Wiley & Sons.
Nayfeh, A. H., & Balachandran, B. (1995). Applied nonlinear dynamics: Analytical, computational and experimental methods. New York: John Wiley & Sons.
Nayfeh, A. H., & Mook, D. T. (1979). Nonlinear oscillations. New York: John Wiley & Sons.
O'Malley, R. E. (2014). Historical developments in singular perturbations. New York: Springer-Verlag.
Poincaré, H. (1892). New methods of celestial mechanics, Volumes I–III. NASA TTF-450, 1967 (English translation).
Poincaré, H. (1908). Science et méthode. Paris: Flammarion. English translation, 1913.
Rand, R. H. (1994). Topics in nonlinear dynamics with computer algebra. Newark: Gordon and Breach.
Rand, R. H. (2012). Lecture notes on nonlinear vibrations. Published online by The Internet-First University Press. http://ecommons.library.cornell.edu/handle/1813/28989
Rand, R. H., & Armbruster, D. (1987). Perturbation methods, bifurcation theory and computer algebra. New York: Springer-Verlag.

Contents

I Preliminaries

1 P1: Principles of Perturbations
1.1 Gauge Function and Order Symbol
1.1.1 The Order Symbol O
1.1.2 The Order Symbol o
1.2 Applied Perturbation Principle
1.2.1 Regular Perturbations
1.2.2 Singular Perturbations
1.3 Applied Weighted Residual Methods
1.4 Chapter Summary
1.5 Key Symbols

2 P2: Differential Equations
2.1 Applied Differential Equations
2.1.1 Phase Plane
2.1.2 Limit Cycle
2.1.3 State Space
2.1.4 State-Time Space
2.2 Chapter Summary
2.3 Key Symbols

3 P3: Approximation of Functions
3.1 Applied Power Series Expansion
3.2 Applied Fourier Series Expansion
3.3 Applied Orthogonal Functions
3.4 Applied Elliptic Functions
3.5 Chapter Summary
3.6 Key Symbols

II Perturbation Methods

4 Harmonic Balance Method
4.1 First Harmonic Balance
4.2 Higher Harmonic Balance
4.3 Energy Balance Method
4.4 Multi-degrees-of-Freedom Harmonic Balance
4.5 Chapter Summary
4.6 Key Symbols

5 Straightforward Method
5.1 Chapter Summary
5.2 Key Symbols

6 Lindstedt-Poincaré Method
6.1 Periodic Solution of Differential Equations
6.2 Chapter Summary
6.3 Key Symbols

7 Mathieu Equation
7.1 Periodic Solutions of Order n = 1
7.2 Periodic Solutions of Order n ∈ N
7.3 Mathieu Functions
7.4 Chapter Summary
7.5 Key Symbols

8 Averaging Method
8.1 Chapter Summary
8.2 Key Symbols

9 Multiple Scale Method
9.1 Chapter Summary
9.2 Key Symbols

A Ordinary Differential Equations
B Trigonometric Formulas
C Integrals of Trigonometric Functions
D Expansions and Factors
E Unit Conversions

Index

About the Author

Reza N. Jazar is a Professor of Mechanical Engineering. Reza received his PhD degree from Sharif University of Technology, Tehran, Iran, and his MSc and BSc from Tehran Polytechnic, Tehran, Iran. His areas of expertise include nonlinear dynamic systems and applied mathematics. He has obtained original results in non-smooth dynamic systems, applied nonlinear vibrating problems, time-optimal control in robotics, and mathematical modeling of vehicle dynamics and stability. He has authored several monographs in vehicle dynamics, robotics, dynamics, vibrations, and mathematics and has published numerous professional articles, as well as book chapters in research volumes. Most of his textbooks have been adopted by many universities for teaching and research and by many research agencies as standard models for research results. Dr. Jazar has had the pleasure of working at several Canadian, American, Asian, and Middle Eastern universities, as well as spending several years in the automotive industry around the world. Working in different engineering firms and educational systems has provided him with the vast experience and knowledge needed to publish his research on important topics in engineering and science. His unique style of writing helps readers learn the topics deeply in an easy way.

Part I

Preliminaries

Before beginning the main subject of perturbations, we must review certain preliminary details and summarize the fundamental principles of gauge functions, perturbations, weighted residual methods, differential equations, geometrical illustration of solutions, approximation of functions, integration of differential equations by series, Fourier series, orthogonal functions, and elliptic functions. This is the information we will be using throughout the book; therefore, we devote the first part of the book to the mathematical knowledge needed to understand the main subject in the next parts. This is why the first part is called "Preliminaries," in which we limit ourselves to the fundamental concepts whose demonstrations are well known and essential. This part makes the book self-sustained.

A proper knowledge of mathematical methods is important for grasping all university and college courses, not only in physics and engineering but also in the more general sciences. The best way to solve any physical problem governed by differential equations is to obtain its analytical solution. However, there are many situations where the analytical solution is impossible or difficult to obtain. For such problems, we must use the indirect methods of numerical, approximate, series, or perturbation solutions. Approximate and numerical solution methods will be covered as much as needed for the main scope of this book. Perturbation methods are a unique area of mathematics and the best method to solve nonlinear differential equations in a semi-analytic way. The beauty of perturbation analysis is in deriving analytical equations that approximate the solution. Having an analytical equation is essential in the design and fine-tuning of dynamic systems (Jazar 2020, 2011).

Students who come to a perturbation analysis course come from diverse mathematical backgrounds, and their core knowledge varies considerably. I have therefore decided to write this part assuming knowledge only of the material that can be expected to be familiar to all of the current generation of students starting physical science and engineering courses at university.

1 P1: Principles of Perturbations

There are several important mathematical preliminaries needed in perturbation methods and in the solution of differential equations. Gauge functions, order symbols, the perturbation coefficient and the splitting of differential equations into linear and nonlinear terms, the principle of perturbation solutions, and a review of weighted residual methods will be covered in this chapter. These topics are all needed to study perturbation methods.

1.1 Gauge Function and Order Symbol

Consider a function f(ε) of the real parameter ε with no singularity. If the limit of f(ε) exists when ε → 0, then there are three possibilities:

lim_{ε→0} f(ε) = 0, ±∞, or A with 0 < |A| < ∞    (1.5)

and the gauge functions may be ordered by their rates of growth, for example

··· < ε⁻² < ε⁻³ < ···    (1.6)
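The three possibilities are easy to see with a computer algebra system; the following sketch (an added illustration, not the author's code) classifies three sample functions:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

# One function for each possibility: limit 0, a finite constant, infinity
for f in (sp.sin(eps), (1 - sp.cos(eps))/eps**2, 1/eps):
    print(f, '->', sp.limit(f, eps, 0, dir='+'))
```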

The behavior of a function f(ε), as ε → 0, will be compared with a gauge function g(ε) by employing the symbols of big O and small o (Erdelyi, 1956; Mickens, 2004; Nayfeh, 1981).

Example 1 L'Hospital's rule. It usually happens, when we wish to determine the limit of the ratio of a function f(x) and a gauge function g(x) for x → 0 or x → x0, that both functions approach zero and make the limit indeterminate. In such cases, L'Hospital's rule allows such limits to be evaluated:

lim_{x→x0} f(x)/g(x) = lim_{x→x0} [df(x)/dx] / [dg(x)/dx]    if f(x0) = g(x0) = 0    (1.7)

= lim_{x→x0} [d²f(x)/dx²] / [d²g(x)/dx²]    if df(x0)/dx = dg(x0)/dx = 0    (1.8)

The differentiation is stopped and the limit evaluated by setting x = x0 when either or both derivatives are nonzero (Bush, 1992; Jazar, 2020). For example,

lim_{x→0} (sin x)/x = lim_{x→0} (d sin x/dx)/(dx/dx) = lim_{x→0} (cos x)/1 = 1    (1.9)

lim_{x→0} (sin x − x)/(−x³) = lim_{x→0} (cos x − 1)/(−3x²) = lim_{x→0} (−sin x)/(−6x) = lim_{x→0} (−cos x)/(−6) = 1/6    (1.10)

lim_{x→0} (1 − cos x)/x² = lim_{x→0} (sin x)/(2x) = lim_{x→0} (cos x)/2 = 1/2    (1.11)

The French mathematician Guillaume de l'Hospital (1661–1704) presented and proved his rule in "Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes" (Analysis of the Infinitely Small for the Understanding of Curved Lines), published in 1696. The rule was discovered by the Swiss mathematician Johann Bernoulli (L'Hospital, 1696).

Example 2 Power function x^n with the same rate as f(x). Let us find the power function that approaches zero at the same rate as cos x − 1 + x²/2 does. We divide the function by x to an unknown power, x^n, and keep using L'Hospital's rule until we get a finite nonzero limit:

lim_{x→0} (cos x − 1 + x²/2)/x^n = lim_{x→0} (−sin x + x)/(n x^{n−1})
= lim_{x→0} (−cos x + 1)/(n(n−1) x^{n−2})
= lim_{x→0} (sin x)/(n(n−1)(n−2) x^{n−3})
= lim_{x→0} (cos x)/(n(n−1)(n−2)(n−3) x^{n−4}) = 1/24    n = 4    (1.12)
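The repeated differentiation of Example 2 can be checked by expanding the function in a Taylor series and reading off the leading power; the following SymPy sketch (added here for illustration) recovers n = 4:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x) - 1 + x**2/2

# Leading term of the Taylor series: cos x - 1 + x^2/2 = x^4/24 + O(x^6)
lead = sp.series(f, x, 0, 8).removeO().as_leading_term(x)
print(lead)                       # x**4/24, so the matching gauge is x^4
print(sp.limit(f / x**4, x, 0))   # 1/24, in agreement with (1.12)
```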

Example 3 Power functions as gauge functions. Powers of ε are the most common functions we use as gauge functions. Assuming ε < 1, we have

1 > ε > ε² > ε³ > ···

ε⁻¹ < ε⁻² < ε⁻³ < ···

If there exist constants C > 0 and ε0 > 0 such that

|f(ε)| ≤ C |g(ε)|    |ε| ≤ ε0    (1.22)

then we say f(ε) is of asymptotic order g(ε) when ε → 0 if f(ε)/g(ε) is bounded and such a C exists:

f(ε) = O(g(ε))    ε → 0    (1.23)

lim_{ε→0} |f(ε)/g(ε)| = C    |C| < ∞    (1.24)

In case f(x, ε) is a function of the variable x and the parameter ε, g(x, ε) is a gauge function, and

|f(x, ε)| ≤ C |g(x, ε)|    |ε| ≤ ε0    (1.25)

then

f(x, ε) = O(g(x, ε))    ε → 0    (1.26)
The symbol big “oh” indicates the rates of change of f (ε) and g (ε) are the same when ε → 0, their limit is known, and the limit of their ratio, limε→0 f (ε) /g (ε), is a known constant (Nayfeh, 1981). Example 5 Big “oh” and series expansion of functions. Series expansion of functions by polynomials and truncating them is based on order of magnitude of the eliminated terms. sin x

=

cos x

=

 1 5 1 7 1 x − x + O x9 x − x3 + 6 120 5040  1 2 1 4 1 6 x + O x8 1− x + x − 2 24 720

(1.27) (1.28)

1. P1: Principles of Perturbations

tan x

=

sin x2

=

 1 2 17 7 x + O x9 x + x3 + x5 + 3 15 315  10 1 6 2 x − x +O x 6

7

(1.29) (1.30)

When x → 0, we have sin x = O (x)

 sin x2 = O x2

tan x = O (x)

  O (x) + O x2 + O x3 O (xm ) + O (xn )

= =

n

k O (x )

=

O (x) O (xm )

m 0, γ > 0 such that Burd (2007)  At  e  ≤ M eγt t≤0 (2.100)

2. P2: Differential Equations

67

A system of nonhomogeneous linear differential equations is dx (t) − A x (t) = q (t) dt

(2.101)

where A is a n × n constant square matrix of order n and q (t) is a bounded vector function for all t. The general solution of Eq. (2.101) is the sum of the general solution xh (t) of the associated homogeneous equation x˙ − A x = 0 and of a particular solution xp (t) of the complete Eq. (2.101). dxh (t) − A xh (t) dt dxp (t) − A xp (t) dt x (t)

=

0

(2.102)

=

q (t)

(2.103)

=

xh (t) + xp (t)

(2.104)

If λ1 , λ2 , . . . , λn denote the distinct eigenvalues of A, then the general solution of the associated homogeneous equation, xh (t), may be written as xh (t) =

n n

Cij ti−1 eλj t

(2.105)

j=1 i=1

where Cij are constant vectors. As an example, let us find solution for x+x=0 x(4) + 2¨

(2.106)

by transforming the equation into 4 first-order equations. x˙ 1 = x2

x˙ 2 = x3 ⎤ ⎡ x˙ 1 ⎢ x˙ 2 ⎥ ⎢ ⎢ ⎥ ⎢ ⎣ x˙ 3 ⎦ = ⎣ x˙ 4 ⎡

x˙ 3 = x4 0 0 0 −1

1 0 0 0

x˙ 4 = −x1 − 2x3 ⎤ ⎤⎡ 0 0 x1 ⎥ ⎢ 1 0 ⎥ ⎥ ⎢ x2 ⎥ 0 1 ⎦ ⎣ x3 ⎦ x4 −2 0

(2.107)

(2.108)

The eigenvalues and eigenvectors of the coefficient matrix are λ1

=

i

λ2

=

−i

u1 =



u1 =

i 

−1 −i −i

−1 i

1

T 1

T

(2.109) (2.110)

Therefore, solution of (2.106) is x = (C1 t + C2 ) cos t + (C3 t + C4 ) sin t

(2.111)

Let assume the system (2.101) has a unique bounded solution x (t). If system (2.101) has two bounded solutions, then their difference would be a bounded solution of homogeneous equation (2.96). Hence, the necessary condition for

68

2. P2: Differential Equations

the existence of a unique bounded solution of system (2.101) for the bounded function q (t) is absence of bounded solutions of system (2.96) except the trivial ones. Therefore, we shall assume that the matrix A has no eigenvalues with the zero real part. Under this assumption, system (2.101) has a unique bounded solution. Example 35 First-order differential equations. A first-order ordinary differential equation is an equation that involves at most the first derivative of an unknown function, y. If y is a function of x then the first-order ordinary differential equation is shown as dy = f (y, x) dx

(2.112)

where f (y, x) is a given function of the variables y and x which is defined in some interval a < x < b. An explicit solution of the first-order ordinary differential equation in some interval a < x < b is any function y = y(x) that satisfied the differential equation (2.112). In other words its substitution into the equation reduces the differential equation to an identity everywhere within that interval. Linear differential equations with constant coefficients always have analytical solutions; however, there are no general solutions when the coefficients are not constant. The first-order linear differential equation is exceptional and it is always solvable. The simplest linear differential equation of first order dx + p (t) x = q (t) dt

x (0) = x0

(2.113)

has explicit solution. x

  t  x0 exp − p (s) ds 0   t  t  + exp − p (s) ds exp

=

0

0

s

 p (r) dr q (s) ds

0

To verify this solution, we may multiply (2.113) by exp 



t

p (s) ds

exp 0

to have d dt





dx + p (t) x dt



p (s) ds

t 0

p (s) ds 

t

= q (t) exp



t

x exp



"

(2.114)

p (s) ds

(2.115)

0





t

= q (t) exp

p (s) ds

0

(2.116)

0

and integrate both sides between 0 and t (Bellman, 1964).  t   s   t x exp p (s) ds = q (s) exp p (r) dr ds + C 0

0

C

=

x0

(2.117)

0

(2.118)

2. P2: Differential Equations

69

To show the method and prove the solution (2.114), it can be claimed that if we may multiply Equation (2.113) by a function u (t) such that u (t) = p (t) u (t), then we can solve the equation. u (t)

dx + u (t) p (t) x = dt d (u (t) x) = dt x − x0

The condition

=

u (t) q (t)

(2.119)

u (t) q (t)  t 1 u (s) q (s) ds u (t) 0

(2.120)

u (t) − p (t) u (t) = 0

(2.121)

(2.122)

is a differential equation to determine such u (t) is a homogeneous linear firstorder equation that always has a solution.   t p (s) ds (2.123) u (t) = C exp 0

Because we only need one such nonzero function u, we set C = 1 and therefore the solution (2.114) is proven. Equations of higher order than 1 have no solutions in general. As a result, approximate solutions such as perturbation methods are the only way to deduce properties of the actual solution. Example 36 Solve the general first-order equation x˙ + p (t) x = f (t). Assuming p (t) = 0, f (t) = 0, we use the identity d h(t) xe dt

=

h (t)

=

xe ˙ h(t) + xp (t) eh(t)  p (t) dt = 0

to rewrite the differential equation in a new form d h(t) = eh(t) f (t) xe dt

(2.124) (2.125)

(2.126)

Integrating both sides implies    x = e−h(t) C + eh(t) f (t) dt

(2.127)

An example of this equation is x˙ + ax = b

(2.128)

for which we have p (x) = a

f (t) = b

(2.129)

70

2. P2: Differential Equations

and

 h (t) =

adt = at + C1

(2.130)

and the solution is x (t)

= =

C

=

 beat+C1 dt   1 b e−at−C1 C2 + beC1 +at = + Ce−at a a e−at−C1





C2 +

C2 e−C1

(2.131) (2.132)

The steady-state condition xs of the equation is # xs = lim x (t) = t→∞

b a ∞

a>0

(2.133)

a 0 and we set the initial condition x (0) of the system to be x (0) = b/a, then the system will stay at the initial condition forever. Example 37 Homogeneous second-order linear equation. The second-order equation d2 y (x) + y (x) = 0 dx2

(2.134)

is equivalent to two first-order equations dy (x) = A y (x) dx % $ $ % y1 (x) y (x) y= = y  (x) y2 (x) The solution of the equation is $ % cos x y1 = − sin x If we have the initial conditions as $ % 1 y10 = y1 (0) = 0

$ A=

$ y2 =

0 −1

sin x cos x

1 0

(2.136)

% (2.137)

$ y20 = y2 (0) =

(2.135)

%

0 1

% (2.138)

then y = y10 y1 (x) + y20 y2 (x)

(2.139)

2. P2: Differential Equations

71

Example 38 Nonhomogeneous second-order linear equation. Consider the linear second-order nonhomogeneous differential equation with a general boundary condition. x ¨ + p (t) x˙ + q (t) x = f (t)

(2.140)

x (0) = C1

(2.141)

x (t0 ) = C2

The most general form of second-order linear differential equation is p0 (t) x ¨ + p1 (t) x˙ + p2 (t) x = p3 (t) x (t0 ) = C2 x (0) = C1

(2.142) (2.143)

The functions pi (t) are assumed to be continuous and real-valued. The points at which p0 (t) vanishes are called singular points. Assuming p0 (t) = 0, we may rewrite (2.142) to the form of (2.140) by defining new coefficients. p (t) =

p1 (t) p0 (t)

q (t) =

p2 (t) p0 (t)

f (t) =

p3 (t) p0 (t)

(2.144)

The solution of a linear nonhomogeneous differential equation may be made of linear combination of two functions u and v x=u+v

(2.145)

where u is the solution of the homogeneous equation with the following boundary conditions. u ¨ + p (t) u˙ + q (t) u = 0 u (0) = C1 u (t0 ) = C2 − v (t0 )

(2.146) (2.147)

and v is the solution of the nonhomogeneous equation with zero initial conditions. v¨ + p (t) v˙ + q (t) v = f (t) v (0) = 0 v˙ (0) = 0

(2.148) (2.149)

Any second-order homogeneous linear differential equation will have two principal linearly independent solutions. Let us consider the homogeneous equation (2.146) with the given boundary conditions (2.141). u ¨ + p (t) u˙ + q (t) u = 0 u (0) = C1

u (t0 ) = C2

(2.150) (2.151)

Assume u1 and u2 to be the two principal solutions of the homogeneous equation (2.150). If we use the following initial conditions: u1 (0)

=

1

u˙ 1 (0) = 0

(2.152)

u2 (0)

=

0

u˙ 2 (0) = 1

(2.153)

72

2. P2: Differential Equations

then the solution of (2.150) can be expressed as u (t) = A u1 (t) + B u2 (t)

(2.154)

where A and B are constants to be determined by the initial and boundary conditions (2.151). A = C1

B=

C2 − C1 u1 (t0 ) u2 (t0 )

u2 (t0 ) = 0

(2.155)

Therefore, the solution of (2.146) will also be (2.154) with new coefficients. A = C1

B=

C2 − v (t0 ) − C1 u1 (t0 ) u2 (t0 )

u2 (t0 ) = 0

(2.156)

The homogeneous case of Eq. (2.148) will also be similar to (2.150) with the same principal solutions u1 and u2 . To derive the solution of the nonhomogeneous Equation (2.148), we employ the method of variation of parameters and assume a solution of the following form (Esmailzadeh et al., 1996; Simmons, 2017): v (t) = f1 (t) u1 (t) + f2 (t) u2 (t)

(2.157)

where f1 (t) and f2 (t) are functions of t to be determined such that (2.157) becomes a proper solution of (2.148). Derivative of (2.157) is v˙ (t) = f1 u˙ 1 + f2 u˙ 2 + f˙1 u1 + f˙2 u2

(2.158)

We may set a condition on f1 (t) and f2 (t) such that f˙1 u1 + f˙2 u2 = 0

(2.159)

v˙ (t) = f1 u˙ 1 + f2 u˙ 2

(2.160)

¨ 1 + f2 u ¨2 + f˙1 u˙ 1 + f˙2 u˙ 2 v¨ (t) = f1 u

(2.161)

to have and therefore, Substituting (2.157), (2.160), and (2.161) in (2.148), we have f (t)

=

u1 + p (t) u˙ 1 + q (t) u1 ) v¨ + p (t) v˙ + q (t) v = f1 (¨ +f2 (¨ u2 + p (t) u˙ 2 + q (t) u2 ) + f˙1 u˙ 1 + f˙2 u˙ 2

(2.162)

and because u1 and u2 satisfy the homogeneous equation, this relation reduces to (2.163) f (t) = f˙1 u˙ 1 + f˙2 u˙ 2 Now we have two Eqs. (2.159) and (2.163) on f˙1 (t) and f˙2 (t) to calculate the functions f1 (t) and f2 (t). The solution exists if the determinant of the coefficients is nonzero.    u u2   = u1 u˙ 2 − u˙ 1 u2 (2.164) W (t) =  1 u˙ 1 u˙ 2 

2. P2: Differential Equations

73

The determinant W (t) of this form is called Wronskian (Bellman, 1970; Bellman and Kalaba, 1965; Jazar, 2020). Hence,      0  u1 u2  0     f (t) u˙ 2   u˙ 1 f (t)  f˙2 (t) = (2.165) f˙1 (t) = W (t) W (t) and we derive f1 (t) and f2 (t).  f1 (t)

0

 f2 (t)

t



=

t

= 0

u2 (s) f (s) ds W (s)

u1 (s) f (s) ds W (s)

Therefore, the solution (2.157) would be  t  t u2 (s) f (s) u1 (s) f (s) ds + u2 (t) ds v = −u1 (t) W (s) W (s) 0 0  t u1 (s) u2 (t) − u2 (s) u1 (t) f (s) ds = W (s) 0

(2.166) (2.167)

(2.168)

It is traditional to show G (t, s) =

u1 (s) u2 (t) − u2 (s) u1 (t) W (s)

(2.169)

and write the solution as 

t

v=

G (t, s) f (s) ds

(2.170)

0

Hence, the solution of (2.140) with the boundary conditions (2.141) is  t G (t, s) f (s) ds (2.171) x = u + v = A u1 (t) + B u2 (t) + 0

A =

C1

C2 − v (t0 ) − C1 u1 (t0 ) B= u2 (t0 )

(2.172)

To determine Wronskian W (t), we take a derivative of (2.164).       d  u1 u2   u˙ 1 u˙ 2   u1 u2  dW = + = ¨1 u ¨2  dt dt  u˙ 1 u˙ 2   u˙ 1 u˙ 2   u     u1 u2   =  −p (t) u˙ 1 − q (t) u1 −p (t) u˙ 2 − q (t) u2  = =

−p (t) u1 u˙ 2 − q (t) u1 u2 + p (t) u2 u˙ 1 + q (t) u2 u1    u u2   = −p (t) W −p (t)  1 u˙ 1 u˙ 2 

(2.173)

74

2. P2: Differential Equations

Therefore,

   t p (s) ds W (t) = W (0) exp −

(2.174)

0

and hence,  s  u2 (s) f (s) exp p (z) dz ds 0 0  s   t u1 (s) f (s) exp p (z) dz ds 

f1 (t)

=

f2 (t)

=

t



0

(2.175) (2.176)

0

Now that we have the Wronskian, we can rearrange the solution of x ¨ + a (t) x˙ + b (t) x = f (t)

(2.177)

x (0) = C1

(2.178)

x˙ (0) = C2

as x = C1 u1 (t) + C2 u2 (t)  s   t + f (s) exp p (z) dz (u1 (s) u2 (t) − u2 (s) u1 (t)) ds 0

0

(2.179) A nonzero Wronskian is the essential tool to guarantee that a different set of n solutions are linearly independent.    u1 u2 ··· un    u˙ 1 u˙ 2 ··· u˙ n  (2.180) W =  · · · · · · · · · · · ·   (n−1) (n−1) (n−1)  u  u2 · · · un 1 Example 39 Solution of a parametric second-order equation by Wronskian. Consider an equation with two point conditions. tx ¨ − (1 + t) x˙ + x = 0

x (1) = 1

x˙ (1) = 2

(2.181)

The two principal solutions of the homogeneous equation are x 1 = et

x2 = 1 + t

(2.182)

The Wronskian W (t) is   x W (t) =  1 x˙ 1

  x2   et = x˙ 2   et

 1 + t  = −tet 1 

(2.183)

which will be zero only at t = 0. The two solutions are linearly independent, and the general solution is x (t) = C1 x1 + C2 x2 = C1 et + C2 (1 + t)

(2.184)

2. P2: Differential Equations

75

and based on the point conditions, we have C1 e + 2C2

=

C1

=

and the solution will be x (t) =

1 3 e

C1 e + C2 = 2

(2.185)

C2 = −1

(2.186)

3 t e − (1 + t) e

(2.187)

Example 40 Solution of a nonhomogeneous second-order equation by Wronskian. Consider a nonhomogeneous equation. x ¨ + x = csc t

(2.188)

The two principal solutions of the equation are x1 = sin t

x2 = cos t

(2.189)

The Wronskian W (t) is   x W (t) =  1 x˙ 1

  x2   sin t = x˙ 2   cos t

 cos t  = −1 − sin t 

(2.190)

Therefore  t x2 (s) f (s) cos s csc s ds = − ds = ln sin t W (s) −1 0 0  t  t x1 (s) f (s) sin s csc s ds = ds = −t W (s) −1 0 0 

f1 (t)

=

f2 (t)

=

t



(2.191) (2.192)

and hence a particular solution of the equation is x

=

f1 (t) x1 (t) + f2 (t) x2 (t)

=

sin t ln sin t − t cos t

(2.193)

x (t) = C1 sin t + C2 cos t + sin t ln sin t − t cos t

(2.194)

and the general solution is

Example 41 Inhomogeneous second-order constant coefficient linear equation. Inhomogeneous second-order differential equation with constant coefficient and without first derivative is the main equations that perturbation analyses are made for.

x (0) = C1

x ¨ + ω 2 x = f (t)

(2.195)

x (t0 ) = C2

(2.196)

76

2. P2: Differential Equations

The general solution xh of the corresponding homogeneous equation is xh = A cos ωt + B sin ωt

(2.197)

and the particular solution xp is  1 t xp = f (s) sin ω (t − s) ds ω t0

(2.198)

Therefore solution of Eq. (2.195) is x = A cos ωt + B sin ωt +

1 ω



t

f (s) sin ω (t − s) ds

(2.199)

t0

To determine the particular solution xp we may use the method of variation of coefficient and seek a particular solution of equation in the form: xp = A (t) cos ωt + B (t) sin ωt

(2.200)

where A (t) and B (t) are functions of t to be determined. Derivative of (2.200) is (2.201) x˙ p = A˙ cos ωt + B˙ sin ωt − Aω sin ωt + Bω cos ωt We may set a condition on A (t) and B (t) such that

and

A˙ cos ωt + B˙ sin ωt = 0

(2.202)

1 − A˙ sin ωt + B˙ cos ωt = f (t) ω

(2.203)

because x˙ p x ¨p

= −ω (A sin ωt − B cos ωt)

= −ω A˙ sin ωt − B˙ cos ωt − ω 2 (A cos ωt + B sin ωt)

(2.204) (2.205)

Solving (2.202) and (2.203) gives us A˙ = B˙

1 − f (t) sin ωt ω 1 f (t) cos ωt ω

=

1 ω  t



t

A=− B=

1 ω

f (s) sin ωs ds

(2.206)

t0

f (s) cos ωs ds

(2.207)

t0

and hence, xp

=

=

1 − A (t) cos ωt ω



t

f (s) sin ωs ds t0  t

1 + B (t) sin ωt f (s) cos ωs ds ω t0  1 t f (s) sin ω (t − s) ds ω t0

(2.208)

2. P2: Differential Equations

77

Example 42 Linear equations with almost constant coefficients. Consider the second-order linear homogeneous differential equation d2 x dx + q (t) x = 0 + p (t) 2 dt dt dx (0) = C2 x (0) = C1 dt

(2.209) (2.210)

and assume the coefficients p (t) and q (t) differ slightly from constants over the interval of interest of t. We may write p (t)

=

p + (p (t) − p)

(2.211)

q (t)

=

q + (q (t) − q)

(2.212)

and assume |p (t) − p| and |q (t) − q| are very small, O (ε). Now we may rewrite the equation as d2 x dx + (q + ε (q (t) − q)) x = 0 + (p + ε (p (t) − p)) dt2 dt

(2.213)

which will be rearranged as a nonhomogeneous second-order linear differential equation in order to be able to apply perturbation methods (Bellman, 1964).   d2 x dx dx + qx = εf x, , t (2.214) + p dt2 dt dt     dx dx = (p − p (t)) + (q − q (t)) x (2.215) f x, , t dt dt Let us rewrite the equations with almost constant coefficients in a new form: x ¨ + (p + εp1 (t)) x˙ + (q + εq1 (t)) x x0 (0) = C1

=

0

x˙ 0 (0) == C2

(2.216) (2.217)

and assume a series solution of powers of ε with unknown coefficients xi (t): x = x0 + εx1 + ε2 x2 + · · ·

(2.218)

such that x0 (0)

=

C1

xi (0)

=

0

x˙ 0 (0) = C2 x˙ i (0) = 0

(2.219) i≥1

(2.220)

Substituting solution (2.218) in Eq. (2.216) and equating coefficients of powers of ε,  x ¨0 + ε¨ x 1 + ε2 x ¨2 + · · ·  + (p + εp1 (t)) x˙ 0 + εx˙ 1 + ε2 x˙ 2 + · · ·  = 0 (2.221) + (q + εq1 (t)) x0 + εx1 + ε2 x2 + · · ·

78

2. P2: Differential Equations

provides us with the series of equations: x ¨0 + px˙ 0 + qx0

=

0

(2.222)

x ¨1 + px˙ 1 + qx1 x ¨2 + px˙ 2 + qx2

= =

− (q1 x0 + p1 x˙ 0 ) − (q1 x1 + p1 x˙ 1 )

(2.223) (2.224)

··· The function x0 (t) will be determined from the unperturbed equation (2.222). Then we solve (2.223) for x1 in terms of x0 using (2.179). Then we find x2 in terms of x1 and, hence, terms of x0 and so on. Therefore, we developed a systematic technic to determine the coefficients in (2.218) (Smirnov, 1963). Example 43 Asymptotic expansion acceptance condition. Consider a function of arguments x and small parameter ε denoted by f (x, ε) and a divergent power series in ε: f0 + εf1 + ε2 f2 + · · · + εn fn + · · ·

(2.225)

The coefficients f0 , f1 , f2 · · · can be functions of x only, or functions of both x and ε. We define a function gn

and if

gn = f0 + εf1 + ε2 f2 + · · · + εn fn

(2.226)

lim (f − gn ) ε−n = 0

(2.227)

ε→0

then we will accept the series (2.225) as an asymptotic representation of the function f (x, ε). f (x, ε) = f0 + εf1 + ε2 f2 + · · · (2.228) The relation (2.227) is the acceptance condition of an asymptotic expansion series for a function. The condition fulfills if ε is very small, then f − gn will also be very small. In these conditions, although the series (2.225) diverges, the sum of its first (n + 1) terms will give a very good approximation of the function f (x, ε), (Andrianov et al., 2002). Example 44 Series solution. A nonhomogeneous second-order linear differential equation may be written as x ¨ + (p + εp1 (t)) x˙ + (q + εq1 (t)) x x0 (0) = C1 x˙ 0 (0) = C2

=

−ε (p1 (t) x˙ + q1 (t) x)

(2.229) (2.230)

Employing (2.179) x

=

C1 u1 (t) + C2 u2 (t)  s   t + f (s) exp p (z) dz (u1 (s) u2 (t) − u2 (s) u1 (t)) ds (2.231) 0

0

2. P2: Differential Equations

79

as the solution of initial-value problem, x ¨ + a (t) x˙ + b (t) x = f (t) x˙ (0) = C2 x (0) = C1

(2.232) (2.233)

we may develop the solution of (2.229) as x

C1 u1 (t) + C2 u2 (t)  t +ε g (t, s) (p1 (t) x˙ + q1 (t) x) ds

=

(2.234)

0

where u1 and u2 are particular solutions of the associated homogeneous equation with the initial conditions u1 (0) u2 (0)

= =

1 0

u˙ 1 (0) = 0 u˙ 2 (0) = 1

(2.235) (2.236)

and the kernel g (t, s) is assumed to be known function.   s p (z) dz (u1 (s) u2 (t) − u2 (s) u1 (t)) g (t, s) = − exp

(2.237)

0

Integrating by parts for the first term of in integral (2.234) eliminates x. ˙ x

=

t

C1 u1 (t) + C2 u2 (t) + ε [g (t, s) p1 (t) x]0   t  d (g (t, s) p1 (t)) ds +ε x g (t, s) q1 (t) − ds 0

Let us rewrite this solution in a simpler way:  t x = w (t) + ε g1 (t, s) x (s) ds

(2.238)

(2.239)

0

where w (t)

=

g1 (t, s)

=

C1 u1 (t) + C2 u2 (t) + εC1 g (t, 0) p1 (0) d (g (t, s) p1 (t)) g (t, s) q1 (t) − ds

(2.240) (2.241)

This is the Volterra integral equation for the unknown function x. To determine x we may use the Volterra equation repeatedly (Bellman, 1964).    s  t g1 (t, s) w (s) + ε g1 (s, s1 ) x (s1 ) ds1 ds x = w (t) + ε  =

0

0 t

w (t) + ε g1 (t, s) w (s) ds 0  s   t g1 (t, s) g1 (s, s1 ) x (s1 ) ds1 ds + · · · +ε2 0

0

(2.242)

80

2. P2: Differential Equations

 t x = w (t) + ε g1 (t, s) w (s) ds 0  s  t 2 g1 (t, s) g1 (s, s1 ) +ε 0  0    s1 g1 (s1 , s2 ) x (s2 ) ds2 ds1 ds × w (s1 ) + ε

(2.243)

0

x

=

 t g1 (t, s) w (s) ds w (t) + ε 0  s  t g1 (t, s) g1 (s, s1 ) w (s1 ) ds +ε2 0 0  s  t g1 (t, s) g1 (s, s1 ) +ε3 0 0    s1 g1 (s1 , s2 ) x (s2 ) ds2 ds1 ds + · · · ×

(2.244)

0

Assuming a series solution for x x = x0 + εx1 + ε2 x2 + · · ·

(2.245)

the coefficients will be x0

=

x1

=

x2

=

w (t)  t g1 (t, s) w (s) ds 0  t  s g1 (t, s) g1 (s, s1 ) w (s1 ) ds 0

···

(2.246) (2.247) (2.248)

0

Example 45 Linear parametric equation to integral equation. Consider the forced linear second-order parametric differential equation with a general boundary conditions. x ¨ + a (t) x˙ + b (t) x = f (t)

(2.249)

x (0) = C1

(2.250)

x (t0 ) = C2

The linearity of the equation allows us to write the solution to be made of two functions where v is the solution of the inhomogeneous equation with zero initial conditions and u is the solution of the homogeneous equation with nonzero boundary conditions. x=u+v (2.251) v¨ + a (t) v˙ + b (t) v = f (t) v (0) = 0

v˙ (0) = 0

(2.252) (2.253)

2. P2: Differential Equations

u ¨ + a (t) u˙ + b (t) u = 0 u (0) = C1 u (t0 ) = C2 − v (t0 )

81

(2.254) (2.255)

Let us begin with the homogeneous boundary conditions to develop the complete solutions. u ¨ + a (t) u˙ + b (t) u = 0 u (0) = C1

(2.256)

u (t0 ) = C2

(2.257)

A second-order linear equation will have two principal linearly independent solutions. Assume u1 and u2 to be the two principal solutions of the homogeneous equation (2.256) using these initial conditions: u1 (0) u2 (0)

= =

1 0

u˙ 1 (0) = 0 u˙ 2 (0) = 1

(2.258) (2.259)

Because of the linearity, every solution of (2.256) can be expressed by u (t) = A u1 (t) + B u2 (t)

(2.260)

where A and B are constants to be determined by the initial and boundary conditions. C2 − C1 u1 (t0 ) A = C1 u2 (t0 ) = 0 B= (2.261) u2 (t0 ) Hence, the solution of (2.254) will be (2.260) with proper coefficients. A = C1

B=

C2 − v (t0 ) − C1 u1 (t0 ) u2 (t0 )

u2 (t0 ) = 0

(2.262)

The homogeneous case of (2.252) is the same as homogeneous Equation (2.256) having the same principal solutions u1 and u2 . To develop the solution of the inhomogeneous Equation (2.252), Lagrange invented the method of variation of parameters (Esmailzadeh et al., 1996; Simmons, 2017). Let us assume the solution of the inhomogeneous equation to be v (t) = f1 (t) u1 (t) + f2 (t) u2 (t)

(2.263)

where f1 (t) and f2 (t) are functions of t to be determined such that (2.263) becomes a proper solution of (2.252). Derivative of (2.263) is v˙ (t) = f1 u˙ 1 + f2 u˙ 2 + f˙1 u1 + f˙2 u2

(2.264)

Let us set a condition on f1 (t) and f2 (t) such that f˙1 u1 + f˙2 u2 = 0

(2.265)

v˙ (t) = f1 u˙ 1 + f2 u˙ 2

(2.266)

to have

82

2. P2: Differential Equations

and ¨ 1 + f2 u ¨2 + f˙1 u˙ 1 + f˙2 u˙ 2 v¨ (t) = f1 u

(2.267)

Substituting (2.263), (2.266), and (2.267) in (2.254), we have f (t)

=

u1 + a (t) u˙ 1 + b (t) u1 ) v¨ + a (t) v˙ + b (t) v = f1 (¨ u2 + a (t) u˙ 2 + b (t) u2 ) + f˙1 u˙ 1 + f˙2 u˙ 2 +f2 (¨

(2.268)

This relation reduces to f (t) = f˙1 u˙ 1 + f˙2 u˙ 2

(2.269)

because u1 and u2 satisfy the homogeneous equation. Now we have two conditions (2.265) and (2.269) on f˙1 (t) and f˙2 (t) to calculate the functions f1 (t) and f2 (t). The solution exists if the determinant of the coefficients is nonzero. The determinant W (t) of this form is called Wronskian (Bellman, 1970; Bellman and Kalaba, 1965).    u1 u 2   = u1 u˙ 2 − u˙ 1 u2  W (t) =  (2.270) u˙ 1 u˙ 2  Hence,

  0 u2   f (t) u˙ 2 f˙1 (t) = W (t)

   

  u1 0   u˙ 1 f (t) f˙2 (t) = W (t)

   

(2.271)

and we derive f1 (t) and f2 (t).  f1 (t)

0

 f2 (t)

t



=

t

= 0

u2 (s) f (s) ds W (s)

u1 (s) f (s) ds W (s)

The solution of (2.252) from (2.263) would be  t u1 (s) u2 (t) − u2 (s) u1 (t) f (s) ds v= W (s) 0 We may use G (t, s) = to show v as

u1 (s) u2 (t) − u2 (s) u1 (t) W (s) 

(2.272) (2.273)

(2.274)

(2.275)

t

v=

G (t, s) f (s) ds

(2.276)

0

Therefore, the solution of (2.249) with the boundary-value conditions (2.250) is  t x = u + v = A u1 (t) + B u2 (t) + G (t, s) f (s) ds (2.277) 0

A =

C1

C2 − v (t0 ) − C1 u1 (t0 ) B= u2 (t0 )

(2.278)

2. P2: Differential Equations

83

To determine Wronskian W (t), we take a derivative of (2.270).       d  u1 u2   u˙ 1 u˙ 2   u1 u2  dW = + = ¨1 u ¨2  dt dt  u˙ 1 u˙ 2   u˙ 1 u˙ 2   u     u1 u2  =  −a (t) u˙ 1 − b (t) u1 −a (t) u˙ 2 − b (t) u2  = = Therefore,

−a (t) u1 u˙ 2 − b (t) u1 u2 + a (t) u2 u˙ 1 + b (t) u2 u1    u1 u2   = −a (t) W  −a (t)  u˙ 1 u˙ 2    t  W (t) = W (0) exp − a (s) ds

(2.279)

(2.280)

0

and hence,  f1 (t)



t



=

u2 (s) f (s) exp   t u1 (s) f (s) exp

a (z) dz ds  s a (z) dz ds

0

f2 (t)

=

0



s

(2.281)

0

(2.282)

0

Having Wronskian, we can rearrange the solution of x ¨ + a (t) x˙ + b (t) x = f (t) x (0) = C1 x (t0 ) = C2

(2.283) (2.284)

as x = Au1 (t) + Bu2 (t)   t + f (s) exp 0



s

a (z) dz (u1 (s) u2 (t) − u2 (s) u1 (t)) ds

(2.285)

0

Example 46 Continued integral solution of Mathieu-Duffing equation. Mathieu-Duffing equation is the simplest nonlinear and parametric differential equation. d2 x + (a + 2b cos 2t) x + cx3 = dt2 n∈N a = n2

0

(2.286) (2.287)

Let us rewrite the Mathieu-Duffing equation (2.286) as d2 x + n2 x dt2

=

 −b 2x cos 2t + ex3 = −bf (x, t)

(2.288)

e

=

c b

(2.289)

84

2. P2: Differential Equations

Knowing that x1 = cos nt and x2 = sin nt are the principal solutions of d2 x + n2 x = 0 dt2 x˙ 1 (0) = 0 x1 (0) = 1 x2 (0) = 0 x˙ 2 (0) = 1

(2.290) (2.291) (2.292)

from (2.285), we have x

=

A cos nt + B sin nt  t +b f (x, s) (cos ns sin nt − sin ns cos nt) ds

(2.293)

0

Substituting f (x, s) makes x = A cos nt + B sin nt  t  −b x (s) 2 cos 2s + ex2 (s) (cos ns sin nt − sin ns cos nt) ds 0  t  2 cos 2s + ex2 (s) x (s) sin n (t − s) ds (2.294) = w (t) − b 0

where w (t) = A cos nt + B sin nt

(2.295)

Let us call w (t) the first approximate solution x0 (t). x0 = w (t) = A cos nt + B sin nt

(2.296)

Substituting x0 into the right-hand side of (2.294), we find the second approximate solution.  t  2 cos 2s1 + ex2 (s1 ) sin n (t − s1 ) x (s1 ) ds1 x1 = w (t) − b 0  t  = w (t) − b sin n (t − s1 ) w (s1 ) 2 cos 2s1 + ew2 (s1 ) ds1 (2.297) 0

Another substitution x1 for w (s1 ) provides us with the third approximate solution.  t x2 = w (t) − b sin n (t − s1 ) (w (s1 ) + bJ1 ) 0  × 2 cos 2s1 + ew2 (s1 ) + 2ew (s1 ) bJ1 − eb2 J12 ds1 (2.298)  s1  J1 = sin n (s1 − s2 ) w (s2 ) 2 cos 2s2 + ew2 (s2 ) ds2 (2.299) 0

The next approximate solution will be  t x3 = w (t) − b sin n (t − s1 ) (w (s1 ) + bJ2 ) 0  × 2 cos 2s1 + ew2 (s1 ) + 2ebw (s1 ) J2 − eb2 J22 ds1

(2.300)

2. P2: Differential Equations

 J2

=

J1

s1

sin n (s1 − s2 ) (w (s2 ) + bJ1 ) 0 2 cos 2s2 − ew2 (s2 ) − 2ebw (s2 ) J1 − eb2 J12 ds2  s2  sin n (s2 − s3 ) w (s3 ) 2 cos 2s3 − ew2 (s3 ) ds3

=

85

(2.301) (2.302)

0

Keeping on substituting provides us with continued integral approximate solution of the Mathieu-Duffing equation.  t sin n (t − s1 ) (w (s1 ) + bJ2 ) (2.303) xk = w (t) − b 0  2 × 2 cos 2s1 + ew2 (s1 ) + 2ebw (s1 ) Jk−1 + eb2 Jk−1 ds1  sk−i Ji = sin n (sk−i+1 − sk−i ) (w (sk−i+1 ) + bJi−1 ) (2.304) 0  2 dsk−i+1 2 cos 2sk−i+1 + ew (sk−i+1 ) (w (sk−i+1 ) + 2bJi−1 ) + eb2 Jn−1 1 0, −r < x < r. f (x) = a0 + a1 x + a2 x2 + · · · + ak xk + · · · =



ak x k

(3.52)

k=0

Then, f (x) has the derivative f  (x) =



kak xk−1

(3.53)

∞ ak k+1 x k+1

(3.54)

k=1

and f (x) has the integral 

x

f (x) = 0

k=0

and the series (3.53) and (3.54) both have the same radius of curvature r. Example 91 Polynomials’ relationship. Two polynomials in x are equal if and only if they have the same degree and equal coefficients for corresponding powers of x. If (ax + b) = (5x − 2), then we have a = 5 and b = −2. If a product of two polynomials is zero, then at least one of the polynomials must be zero. If (ax + b) (5x − 2) = 0, then x = 2/5 or x = −b/a. For any two polynomials P (x) and Q (x), Q (x) = 0, there exist unique polynomials p (x) and r (x) such that P (x) /Q (x) = p (x) + r (x) /Q (x). The remainder of division of polynomial P (x) by (x − a) equals P (a). If P (x) /Q (x) = p (x) + r (x) /Q (x), then r (x) = P (x) − p (x) Q (x), and if Q (x) = x − a, then r (a) = P (a). Therefore, a polynomial P (x) is divisible by (x − a) if and only if P (a) = 0. As a result, P (x) = xn − an is always divisible by Q (x) = x − a for any integer n ∈ N.


Example 92 Fraction functions f(x) = 1/(1+x) and f(x) = 1/(1-x).
The first term of the power series for

f(x) = \frac{1}{1+x}    (3.55)

is

f(0) = 1    (3.56)

The second term would be

f'(x) = \frac{d}{dx}\frac{1}{1+x} = \frac{-1}{(x+1)^2}, \qquad f'(0) = -1    (3.57)

and the third and fourth terms are

f''(x) = \frac{2}{(x+1)^3}, \qquad \frac{f''(0)}{2!} = \frac{2}{2!} = 1    (3.58)

f'''(x) = \frac{-6}{(x+1)^4}, \qquad \frac{f'''(0)}{3!} = \frac{-6}{3!} = -1    (3.59)

This process may be continued up to the desired term to derive the power series expansion of the function.

\frac{1}{1+x} = 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots = \sum_{k=0}^{\infty} (-1)^k x^k    (3.60)

Similarly, we may check that

\frac{1}{1-x} = 1 + x + x^2 + x^3 + x^4 + x^5 + \cdots = \sum_{k=0}^{\infty} x^k    (3.61)

Example 93 Arctangent function arctan x.
Besides the classical method to show the power series expansion of arctan x for -1 \leq x \leq 1,

\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \frac{x^{11}}{11} + \cdots    (3.62)

we may use the equation

\frac{d}{dx}\arctan x = \frac{1}{1+x^2}    (3.63)

and

\frac{1}{1+x} = 1 - x + x^2 - x^3 + x^4 - x^5 + \cdots = \sum_{k=0}^{\infty} (-1)^k x^k    (3.64)

to replace x with x^2,

\frac{1}{1+x^2} = 1 - x^2 + x^4 - x^6 + x^8 - \cdots = \sum_{k=0}^{\infty} (-1)^k x^{2k}    (3.65)

to get

\frac{d}{dx}\arctan x = 1 - x^2 + x^4 - x^6 + x^8 - x^{10} + x^{12} - \cdots    (3.66)

and therefore

\arctan x = \int \left(1 - x^2 + x^4 - x^6 + x^8 - x^{10} + \cdots\right) dx = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \frac{x^{11}}{11} + \cdots    (3.67)

or similarly

\arctan x = \int \frac{1}{1+x^2}\,dx = \int \sum_{k=0}^{\infty} (-1)^k x^{2k}\,dx = \sum_{k=0}^{\infty} \int (-1)^k x^{2k}\,dx = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{2k+1} + C    (3.68)

\arctan x = C + x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \frac{x^9}{9} - \frac{x^{11}}{11} + \cdots    (3.69)

The fact that arctan 0 = 0 indicates that C = 0, and hence Eq. (3.62) is recovered. The ratio test for convergence shows that

\lim_{k\to\infty}\left|\frac{a_{k+1}}{a_k}\right| = \lim_{k\to\infty}\frac{|x|^{2k+3}/(2k+3)}{|x|^{2k+1}/(2k+1)} = |x|^2\lim_{k\to\infty}\frac{2k+1}{2k+3} = |x|^2    (3.70)

and the series converges absolutely if |x| < 1.
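A quick numerical check of this convergence behavior is easy to run; the sketch below (my own illustration) compares partial sums of (3.62) with the exact arctangent for a few points inside |x| < 1.

```python
# Partial sums of x - x^3/3 + x^5/5 - ... versus math.atan(x)
import math

def arctan_series(x, terms=50):
    """Partial sum of sum_{k=0}^{terms-1} (-1)^k x^(2k+1)/(2k+1)."""
    return sum((-1)**k * x**(2*k + 1) / (2*k + 1) for k in range(terms))

for x in (0.2, 0.5, 0.9):
    print(x, arctan_series(x), math.atan(x))  # partial sum vs. exact value
```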

Example 94 Power series of inverse polynomials.
It will be needed to express the inverse of a finite polynomial as a power series. It means we may have a function f(x) such that

f(x) = \frac{1}{b_0 + b_1 x + b_2 x^2 + b_3 x^3 + \cdots + b_n x^n}    (3.71)

and we need to express f(x) as a power series.

f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots    (3.72)

The process to develop the series (3.72) is as usual.

f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k + \cdots = \sum_{k=0}^{\infty} a_k x^k    (3.73)

a_k = \frac{1}{k!}\,f^{(k)}(0)    (3.74)


A few examples will show the process. Consider

e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \cdots    (3.75)

e^{-x} = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \cdots    (3.76)

and assume we have f(x), which is the inverse of the truncated series (3.75) up to the term x^6.

f(x) = \frac{1}{1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \dfrac{x^4}{4!} + \dfrac{x^5}{5!} + \dfrac{x^6}{6!}}    (3.77)

Expanding f(x) into a power series will address two questions: firstly, how we determine the power series expansion of the inverse of a polynomial and, secondly, how close the series of the inverse of the truncated series of e^x would be to e^{-x}. Determining the coefficients of the series (3.73) term by term will show

a_0 = f(0) = 1    (3.78)

f'(x) = \frac{-4320\left(x^5 + 5x^4 + 20x^3 + 60x^2 + 120x + 120\right)}{\left(x^6 + 6x^5 + 30x^4 + 120x^3 + 360x^2 + 720x + 720\right)^2}    (3.79)

a_1 = f'(0) = -1    (3.80)

a_2 = \frac{1}{2!}\,f''(0) = \frac{1}{2}    (3.81)

a_3 = \frac{1}{3!}\,f'''(0) = -\frac{1}{6}    (3.82)

a_4 = \frac{1}{4!}\,f^{(4)}(0) = \frac{1}{24}    (3.83)
\vdots

and we find a series that matches (3.76) very well.

f(x) = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \cdots    (3.84)
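Rather than differentiating by hand, a computer algebra system can produce the same coefficients. The sketch below (my own illustration, using sympy) expands the inverse of the truncated polynomial (3.77) and compares it with the series of e^{-x}.

```python
# Series of 1/(1 + x + ... + x^6/6!) compared with the series of exp(-x)
import sympy as sp

x = sp.symbols('x')
truncated = sum(x**k / sp.factorial(k) for k in range(7))   # 1 + x + ... + x^6/6!
f = 1 / truncated

print(sp.series(f, x, 0, 8))          # matches e^{-x} only up to the x^6 term
print(sp.series(sp.exp(-x), x, 0, 8)) # reference series of e^{-x}
```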

What will happen if we expand f(x) further than the term x^6? Would it match the series of e^{-x} for the terms higher than x^6? A function equal to the inverse of a polynomial is an independent function whose power series has infinitely many terms. We theoretically need all terms to converge to the given function. In case the polynomial is a truncated series of a known function with a known inverse, it is not necessarily true that we get the power series of the inverse function by expanding the inverse polynomial. However, in this example we may expand f(x) further than x^6 to find

f(x) = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \frac{x^8}{4\times 6!} + \frac{x^9}{4\times 6!} - \frac{19x^{10}}{120\times 6!} + \frac{x^{11}}{80\times 5!} - \cdots    (3.85)

which does not match the series expansion of e^{-x} after the term x^6.

e^{-x} = 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \frac{x^5}{5!} + \frac{x^6}{6!} - \frac{x^7}{7!} + \frac{x^8}{8!} - \frac{x^9}{9!} + \frac{x^{10}}{10!} - \frac{x^{11}}{11!} + \cdots    (3.86)

It can be seen that the term proportional to x^7 is missing in the series expansion of f(x).

Example 95 Even series for inverse even polynomials.
It will be needed in future sections to expand functions that are the inverse of even or odd polynomials. An even series f_e(x) is a series without terms proportional to odd exponents x^{2k+1}, k = 0, 1, 2, 3, ..., such as

f_e(x) = a_0 + a_2 x^2 + a_4 x^4 + a_6 x^6 + \cdots    (3.87)

Similarly, an odd series f_o(x) is a series without terms proportional to even exponents x^{2k}, k = 0, 1, 2, 3, ..., such as

f_o(x) = a_1 x + a_3 x^3 + a_5 x^5 + a_7 x^7 + \cdots    (3.88)

Let us define a function f(x) equal to the inverse of an even polynomial and determine its power series expansion.

f(x) = \frac{1}{a_0 + a_2 x^2 + a_4 x^4 + a_6 x^6}    (3.89)

We have

f(0) = \frac{1}{a_0}    (3.90)

and

f'(x) = -\frac{6a_6 x^5 + 4a_4 x^3 + 2a_2 x}{\left(a_6 x^6 + a_4 x^4 + a_2 x^2 + a_0\right)^2}    (3.91)

f'(0) = 0    (3.92)

\frac{1}{2!}\,f''(0) = -\frac{a_2}{a_0^2}, \qquad f'''(0) = 0    (3.93)

\frac{1}{4!}\,f^{(4)}(0) = \frac{a_2^2 - a_0 a_4}{a_0^3}, \qquad f^{(5)}(0) = 0    (3.94)

\frac{1}{6!}\,f^{(6)}(0) = -\frac{a_6 a_0^2 - 2a_4 a_0 a_2 + a_2^3}{a_0^4}    (3.95)

to derive the power series of the function.

f(x) = \frac{1}{a_0} - \frac{a_2}{a_0^2}\,x^2 + \frac{a_2^2 - a_0 a_4}{a_0^3}\,x^4 + \cdots    (3.96)

It shows that the power series of the inverse of an even polynomial is an even power series. As an example, we may cut the series expansion of y = cos x, which is an even series,

\cos x = 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4 - \frac{1}{720}x^6 + \cdots    (3.97)

to define f(x)

f(x) = \frac{1}{1 - \dfrac{x^2}{2} + \dfrac{x^4}{24} - \dfrac{x^6}{720}}    (3.98)

and expand it to a power series.

f(x) = 1 + \frac{1}{2}x^2 + \frac{5}{24}x^4 + \frac{61}{720}x^6 + \cdots    (3.99)

To check if the power series of the inverse of an odd polynomial would be an odd series, let us cut the series expansion of y = sin x, which is an odd series,

\sin x = x - \frac{1}{6}x^3 + \frac{1}{120}x^5 - \frac{1}{5040}x^7 + \cdots    (3.100)

to define f(x)

f(x) = \frac{1}{x - \dfrac{x^3}{6} + \dfrac{x^5}{120} - \dfrac{x^7}{5040}}    (3.101)

and expand it to a power series.

f(x) = \frac{1}{x} + \frac{1}{6}x + \frac{7}{360}x^3 + \frac{31}{15120}x^5 + \cdots    (3.102)

The power series of the function f(x) is odd; however, it is singular at the point x = 0, though well defined everywhere else. This is because the function f(x) is not defined at x = 0.
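Both expansions above can be reproduced mechanically; the following brief sympy check (my own illustration) confirms that inverting the truncated even series of cos x gives an even series, while the truncated odd series of sin x gives an odd Laurent-type series.

```python
# Even/odd structure of the inverse of truncated cos and sin series
import sympy as sp

x = sp.symbols('x')
cos_trunc = 1 - x**2/2 + x**4/24 - x**6/720
sin_trunc = x - x**3/6 + x**5/120 - x**7/5040

print(sp.series(1/cos_trunc, x, 0, 8))  # 1 + x^2/2 + 5x^4/24 + 61x^6/720 + ...
print(sp.series(1/sin_trunc, x, 0, 6))  # 1/x + x/6 + 7x^3/360 + 31x^5/15120 + ...
```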

Example 96 Singular points of functions.
If a function is singular at a point x = a, then the function goes to infinity as x \to a. The function f(x) = 1/x is singular at x = 0. The function f(x) = 1/(x-2) is singular at x = 2. The function f(x) = 1/(x^2-4) is singular at x = \pm 2. The function f(x) = 1/\sin x is singular at x = k\pi, k = 0, 1, 2, 3, .... These functions are analytic at all points other than their singular points.
The function

f(x) = \frac{x}{\left(x^2+4\right)\left(x^2-9\right)}    (3.103)

has four singular points at which it goes to infinity.

x = 3, -3, 2i, -2i    (3.104)

We may locate the singularities on the complex plane; then, to determine the radius of convergence, we draw a circle centered at the expansion point, passing through the nearest singular point. As an example, for expansion about the origin the nearest singular points of Eq. (3.103) are x = \pm 2i, so the radius of convergence is

|r| < 2    (3.105)

If we expand the function about another point, say x = a = 1 + i, then the center of the circle would be at x = a = 1 + i. The closest singular point to x = 1 + i is x = 2i, and therefore the radius of convergence would be

|r - (1+i)| < \sqrt{2}    (3.106)

Example 97 New power series from known series.
Calculating power series expansions of elementary functions may be utilized to develop series expansions of more complicated functions by substitution, multiplication, division, composition, differentiation, integration, etc. The following power series of elementary functions will be useful for future analysis.

\sin ax = ax - \frac{a^3}{3!}x^3 + \frac{a^5}{5!}x^5 - \frac{a^7}{7!}x^7 + \frac{a^9}{9!}x^9 - \frac{a^{11}}{11!}x^{11} + \cdots    (3.107)

\cos ax = 1 - \frac{a^2}{2!}x^2 + \frac{a^4}{4!}x^4 - \frac{a^6}{6!}x^6 + \frac{a^8}{8!}x^8 - \frac{a^{10}}{10!}x^{10} + \cdots    (3.108)

\tan ax = ax + \frac{a^3}{3}x^3 + \frac{2a^5}{15}x^5 + \frac{17a^7}{315}x^7 + \frac{62a^9}{2835}x^9 + \cdots    (3.109)

\exp ax = 1 + ax + \frac{a^2}{2}x^2 + \frac{a^3}{3!}x^3 + \frac{a^4}{4!}x^4 + \frac{a^5}{5!}x^5 + \cdots    (3.110)

Let us calculate the power series expansion of

f(x) = \exp\left(-3x^2\right)    (3.111)

In Example 87, we found the series expansion for f(t) = exp(-\alpha t) as

f(t) = e^{-\alpha t} = 1 - \alpha t + \frac{\alpha^2}{2}t^2 - \frac{\alpha^3}{3!}t^3 + \frac{\alpha^4}{4!}t^4 - \frac{\alpha^5}{5!}t^5 + \cdots    (3.112)

Substituting \alpha = 1 and t = 3x^2, we have

f(x) = e^{-3x^2} = 1 - 3x^2 + \frac{9}{2}x^4 - \frac{9}{2}x^6 + \frac{27}{8}x^8 - \cdots    (3.113)
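The substitution step is easy to reproduce with a computer algebra system; the small sympy sketch below (my own illustration) substitutes t = 3x^2 into the known series of exp(-t).

```python
# Series of exp(-3x^2) obtained by substituting t = 3x^2 into the series of exp(-t)
import sympy as sp

x, t = sp.symbols('x t')
series_t = sp.series(sp.exp(-t), t, 0, 5).removeO()   # 1 - t + t^2/2 - t^3/6 + t^4/24
print(sp.expand(series_t.subs(t, 3*x**2)))            # 1 - 3x^2 + 9x^4/2 - 9x^6/2 + 27x^8/8
```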


Similarly, having the series expansion of exp x, we are able to find the series expansion of functions involving exp x, such as

f(x) = \frac{e^x}{x} = \frac{1}{x}\left(1 + x + \frac{1}{2}x^2 + \frac{1}{3!}x^3 + \frac{1}{4!}x^4 + \frac{1}{5!}x^5 + \cdots\right) = \frac{1}{x} + 1 + \frac{1}{2}x + \frac{1}{3!}x^2 + \frac{1}{4!}x^3 + \frac{1}{5!}x^4 + \frac{1}{6!}x^5 + \cdots    (3.114)

Example 98 Theory of functions.
The theory of functions tries to represent arbitrary functions by simpler, elementary analytical approximate functions. The class of continuous real functions over a specified segment, C[a,b], or over the whole real line [-\infty, \infty], as well as the real 2\pi-periodic functions C_{2\pi},

f(x + 2\pi) = f(x)    (3.115)

are the most interesting and widely applied functions in science and engineering. The first option to approximate the continuous real functions C is the algebraic polynomial series expansion with real coefficients

P(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_k x^k + \cdots = \sum_{k=0}^{\infty} c_k x^k    (3.116)

and the first option to approximate the real periodic functions C_{2\pi} is the trigonometric polynomial series expansion with real coefficients.

T(x) = a_0 + a_1\cos x + b_1\sin x + \cdots + a_n\cos nx + b_n\sin nx    (3.117)

A polynomial P(x) is an approximate function for a function f(x) \in C[a,b] if for all values of x \in [a,b] the following inequality holds true

|f(x) - P(x)| < \epsilon    (3.118)

where \epsilon > 0 is an indicator of the degree of approximation. Similarly, a trigonometric polynomial T(x) is an approximation for a function f(x) \in C_{2\pi} if for all values of x \in [-\pi, \pi] the following inequality holds true.

|f(x) - T(x)| < \epsilon    (3.119)

The integrals

\int_a^b \left(f(x) - P(x)\right)^2 dx    (3.120)

\int_{-\pi}^{\pi} \left(f(x) - T(x)\right)^2 dx    (3.121)

are measures of the distance of the approximating functions P(x) and T(x) from the function f(x).


Example 99 Power series of matrix functions.
Similar to the power series expansion of a function of a scalar variable f(x),

f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots    (3.122)

we may define the power series expansion of a function of a square matrix, f(A),

f(A) = a_0 I + a_1 A + a_2 A^2 + a_3 A^3 + \cdots    (3.123)

where I is the unit matrix of the same order as A. As an example, the exponential function of a matrix is

e^{At} = I + At + \frac{1}{2}A^2t^2 + \frac{1}{3!}A^3t^3 + \frac{1}{4!}A^4t^4 + \cdots    (3.124)

Assume A is an n \times n square matrix whose eigenvalues \lambda_1, \lambda_2, \lambda_3, ..., \lambda_n are associated with its eigenvectors u_1, u_2, u_3, ..., u_n.

A u_i = \lambda_i u_i, \qquad i = 1, 2, 3, \ldots, n    (3.125)

u_i = \begin{bmatrix} u_{1i} & u_{2i} & u_{3i} & \ldots & u_{ni} \end{bmatrix}^T    (3.126)

We may combine all equations of (3.125) and write them in the form of one equation

A U = U \Lambda    (3.127)

where \Lambda is the diagonal eigenvalue matrix and U is the square eigenvector matrix, assuming all eigenvalues are real and distinct.

\Lambda = \begin{bmatrix} \lambda_1 & 0 & 0 & \ldots \\ 0 & \lambda_2 & 0 & \ldots \\ 0 & 0 & \lambda_3 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} = \operatorname{diag}(\lambda_1, \lambda_2, \lambda_3, \cdots)    (3.128)

U = \begin{bmatrix} u_1 & u_2 & u_3 & \cdots \end{bmatrix} = \begin{bmatrix} u_{11} & u_{21} & u_{31} & \ldots \\ u_{12} & u_{22} & u_{32} & \ldots \\ u_{13} & u_{23} & u_{33} & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}    (3.129)

and therefore

U^{-1} A U = \Lambda    (3.130)

A = U \Lambda U^{-1}    (3.131)

Now we may have the exponents of A:

A^2 = \left(U\Lambda U^{-1}\right)\left(U\Lambda U^{-1}\right) = U\Lambda^2 U^{-1}    (3.132)

A^3 = A^2 A = \left(U\Lambda^2 U^{-1}\right)\left(U\Lambda U^{-1}\right) = U\Lambda^3 U^{-1}    (3.133)

A^n = A^{n-1} A = \left(U\Lambda^{n-1} U^{-1}\right)\left(U\Lambda U^{-1}\right) = U\Lambda^n U^{-1}    (3.134)

where

\Lambda^n = \begin{bmatrix} \lambda_1^n & 0 & 0 & \ldots \\ 0 & \lambda_2^n & 0 & \ldots \\ 0 & 0 & \lambda_3^n & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}    (3.135)

Hence, the power series of a matrix function (3.123) will be

f(A) = a_0 I + a_1 A + a_2 A^2 + a_3 A^3 + \cdots
     = a_0 UIU^{-1} + a_1 U\Lambda U^{-1} + a_2 U\Lambda^2 U^{-1} + a_3 U\Lambda^3 U^{-1} + \cdots
     = U\left(a_0 I + a_1\Lambda + a_2\Lambda^2 + a_3\Lambda^3 + \cdots\right)U^{-1}    (3.136)

where

a_0 I + a_1\Lambda + a_2\Lambda^2 + \cdots = \operatorname{diag}\left(a_0 + a_1\lambda_1 + a_2\lambda_1^2 + \cdots,\; a_0 + a_1\lambda_2 + a_2\lambda_2^2 + \cdots,\; \ldots,\; a_0 + a_1\lambda_n + a_2\lambda_n^2 + \cdots\right)    (3.137)

As an example, let us consider

A = \begin{bmatrix} 1 & 1 \\ 9 & 1 \end{bmatrix}    (3.138)

with eigenvalues and eigenvectors of

\lambda_1 = -2, \qquad u_1 = \begin{bmatrix} -1/3 \\ 1 \end{bmatrix}    (3.139)

\lambda_2 = 4, \qquad u_2 = \begin{bmatrix} 1/3 \\ 1 \end{bmatrix}    (3.140)

and therefore

\Lambda = \begin{bmatrix} -2 & 0 \\ 0 & 4 \end{bmatrix}, \qquad U = \begin{bmatrix} u_1 & u_2 \end{bmatrix} = \begin{bmatrix} -1/3 & 1/3 \\ 1 & 1 \end{bmatrix}    (3.141)

and we may check that A = U\Lambda U^{-1}.

A = U\Lambda U^{-1} = \begin{bmatrix} -1/3 & 1/3 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} -2 & 0 \\ 0 & 4 \end{bmatrix}\begin{bmatrix} -1/3 & 1/3 \\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 1 \\ 9 & 1 \end{bmatrix}    (3.142)

Now we may calculate the exponents of A,

A^2 = U\Lambda^2 U^{-1} = \begin{bmatrix} 10 & 2 \\ 18 & 10 \end{bmatrix}    (3.143)

A^3 = U\Lambda^3 U^{-1} = \begin{bmatrix} 28 & 12 \\ 108 & 28 \end{bmatrix}    (3.144)
\cdots

to have e^{At}.

e^{At} = I + At + \frac{1}{2}A^2t^2 + \frac{1}{3!}A^3t^3 + \cdots
       = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 1 \\ 9 & 1 \end{bmatrix}t + \frac{1}{2}\begin{bmatrix} 10 & 2 \\ 18 & 10 \end{bmatrix}t^2 + \cdots
       = \begin{bmatrix} \frac{14}{3}t^3 + 5t^2 + t + 1 & 2t^3 + t^2 + t \\ 18t^3 + 9t^2 + 9t & \frac{14}{3}t^3 + 5t^2 + t + 1 \end{bmatrix} + \cdots    (3.145)

The exact value of e^{At} is

e^{At} = \frac{1}{6}\begin{bmatrix} 3e^{-2t} + 3e^{4t} & e^{4t} - e^{-2t} \\ 9e^{4t} - 9e^{-2t} & 3e^{-2t} + 3e^{4t} \end{bmatrix}    (3.146)

Knowing that if we have a diagonal matrix \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}, then the exponential of the matrix would be

\exp\begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} = \begin{bmatrix} e^a & 0 \\ 0 & e^b \end{bmatrix}    (3.147)

we transform the given matrix A to its Jordan form

\begin{bmatrix} 1 & 1 \\ 9 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ -3 & 3 \end{bmatrix}\begin{bmatrix} -2 & 0 \\ 0 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ -3 & 3 \end{bmatrix}^{-1}    (3.148)

to calculate the exact e^{At}.

\exp\left(\begin{bmatrix} 1 & 1 \\ 9 & 1 \end{bmatrix}t\right) = \begin{bmatrix} 1 & 1 \\ -3 & 3 \end{bmatrix}\exp\begin{bmatrix} -2t & 0 \\ 0 & 4t \end{bmatrix}\begin{bmatrix} 1/2 & -1/6 \\ 1/2 & 1/6 \end{bmatrix}
= \begin{bmatrix} 1 & 1 \\ -3 & 3 \end{bmatrix}\begin{bmatrix} e^{-2t} & 0 \\ 0 & e^{4t} \end{bmatrix}\begin{bmatrix} 1/2 & -1/6 \\ 1/2 & 1/6 \end{bmatrix}
= \frac{1}{6}\begin{bmatrix} 3e^{-2t} + 3e^{4t} & e^{4t} - e^{-2t} \\ 9e^{4t} - 9e^{-2t} & 3e^{-2t} + 3e^{4t} \end{bmatrix}    (3.149)

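The eigendecomposition route to e^{At} is easy to verify numerically. The sketch below (my own illustration, not from the book) reproduces the example matrix (3.138) with numpy and checks the result against scipy's matrix exponential.

```python
# e^{At} = U exp(Λt) U^{-1}, checked against scipy's expm
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0],
              [9.0, 1.0]])
t = 0.3

lam, U = np.linalg.eig(A)                       # eigenvalues λ, eigenvector matrix U
eAt = U @ np.diag(np.exp(lam * t)) @ np.linalg.inv(U)

print(eAt)
print(expm(A * t))                              # should agree to round-off
```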
Example 100 Roots of polynomials.
In the following chapters, we will repeatedly need to calculate roots of polynomials. Here we summarize the ones with analytical solutions.

1. Quadratic equation: The solution of the quadratic equation

ax^2 + bx + c = 0, \qquad a \neq 0    (3.150)

is

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}    (3.151)

The discriminant of the quadratic equation is b^2 - 4ac. Assume a, b, and c are all real. If the discriminant is negative, then the two roots are complex conjugates. If the discriminant is positive, then the two roots are real and unequal. If the discriminant is zero, then the two roots are equal.

2. Cubic equation: The solution of the cubic equation

ax^3 + bx^2 + cx + d = 0, \qquad a \neq 0    (3.152)

is

x_1 = \sqrt[3]{\alpha} + \sqrt[3]{\beta} - \frac{b}{3a}    (3.153)

x_2 = e^{2\pi i/3}\sqrt[3]{\alpha} + e^{4\pi i/3}\sqrt[3]{\beta} - \frac{b}{3a}    (3.154)

x_3 = e^{4\pi i/3}\sqrt[3]{\alpha} + e^{2\pi i/3}\sqrt[3]{\beta} - \frac{b}{3a}    (3.155)

where

\alpha = \frac{1}{2}\left(-q + \sqrt{q^2 + 4p^3}\right)    (3.156)

\beta = \frac{1}{2}\left(-q - \sqrt{q^2 + 4p^3}\right)    (3.157)

p = \frac{3ac - b^2}{9a^2}, \qquad q = \frac{2b^3 - 9abc + 27a^2 d}{27a^3}    (3.158)

To solve the cubic equation, first a change of variable

y = x + \frac{b}{3a}    (3.159)

changes the cubic equation into

y^3 + 3py + q = 0    (3.160)

The discriminant of the modified equation is 4p^3 + q^2. Assume p and q are real. If the discriminant is positive, one root is real and two are complex conjugates. If the discriminant is zero, there are three real roots, of which at least two are equal. If the discriminant is negative, there are three unequal real roots.

3. Quartic equation: The solution of the quartic equation

ax^4 + bx^3 + cx^2 + dx + e = 0, \qquad a \neq 0    (3.161)

is

x = y - \frac{b}{4a}    (3.162)

where y is a solution of

y^2 \pm \sqrt{u - p}\left(y - \frac{q}{2(u - p)}\right) + \frac{u}{2} = 0    (3.163)

and u is a solution of

s^3 - ps^2 - 4rs + 4pr - q^2 = 0    (3.164)

and we have

p = \frac{8ac - 3b^2}{8a^2}, \qquad q = \frac{b^3 - 4abc + 8a^2 d}{8a^3}    (3.165)

r = \frac{16ab^2c + 256a^4e - 3b^4 - 64a^2bd}{256a^4}    (3.166)

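In practice these closed-form root formulas are often cross-checked numerically. The short sketch below (my own illustration) compares the quadratic formula (3.151) against numpy's companion-matrix root finder, and also finds the roots of a sample cubic.

```python
# Closed-form quadratic roots versus numpy.roots; cubic roots numerically
import numpy as np

a, b, c = 2.0, -3.0, -5.0
disc = b**2 - 4*a*c
x1 = (-b + np.sqrt(disc)) / (2*a)
x2 = (-b - np.sqrt(disc)) / (2*a)

print(sorted([x1, x2]))
print(sorted(np.roots([a, b, c])))         # same roots from numpy
print(np.roots([1.0, -6.0, 11.0, -6.0]))   # cubic x^3 - 6x^2 + 11x - 6: roots 1, 2, 3
```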
3.2 Applied Fourier Series Expansion

A complete theory of trigonometric series is beyond the scope of this book. This section reviews applications of Fourier series in the physical sciences which will be used in perturbation analysis. We make our task tractable by narrowing our scope to those principles that bear directly on our interests. A polynomial of the form

\frac{1}{2}a_0 + a_1\cos t + a_2\cos 2t + \cdots + a_n\cos nt + b_1\sin t + b_2\sin 2t + \cdots + b_n\sin nt    (3.167)

where a_n^2 + b_n^2 \neq 0, is a trigonometric polynomial of order n. The Fourier series to approximate a function f(t) is a trigonometric series whose coefficients are calculated by the integrals (3.170)-(3.171) or (3.175)-(3.177). Among all trigonometric polynomials of order n, the polynomial (3.167) whose coefficients are given by the Fourier rule is the polynomial which is at the minimum distance from f(t). A periodic function f(t) of period 2\pi\lambda,

f(t) = f(t + 2\pi\lambda)    (3.168)

can be approximated by a Fourier series expansion into its harmonic components cos(jt/\lambda) and sin(jt/\lambda).

f(t) = \frac{1}{2}a_0 + \sum_{j=1}^{\infty}\left(a_j\cos\frac{jt}{\lambda} + b_j\sin\frac{jt}{\lambda}\right)    (3.169)

The number a_0/2 is the mean value of the periodic function. The coefficients a_j and b_j are the weight factors of the harmonic functions cos(jt/\lambda) and sin(jt/\lambda). The coefficients are obtained by multiplying f(t) by cos(jt/\lambda) and sin(jt/\lambda) and integrating term by term over (-\pi\lambda, \pi\lambda).

a_j = \frac{1}{\pi\lambda}\int_{-\pi\lambda}^{\pi\lambda} f(s)\cos\frac{js}{\lambda}\,ds    (3.170)

b_j = \frac{1}{\pi\lambda}\int_{-\pi\lambda}^{\pi\lambda} f(s)\sin\frac{js}{\lambda}\,ds    (3.171)

In case the function f(t) is a periodic function of period T,

f(t) = f(t + T)    (3.172)

we may define a new 2\pi-periodic function g(\theta),

g(\theta) = f\left(\frac{T}{2\pi}\,\theta\right)    (3.173)

then

g(\theta) = \frac{1}{2}a_0 + \sum_{j=1}^{\infty}\left(a_j\cos j\theta + b_j\sin j\theta\right)    (3.174)

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} g(\theta)\,d\theta    (3.175)

a_j = \frac{1}{\pi}\int_{-\pi}^{\pi} g(\theta)\cos j\theta\,d\theta    (3.176)

b_j = \frac{1}{\pi}\int_{-\pi}^{\pi} g(\theta)\sin j\theta\,d\theta    (3.177)

or directly expand f(t) over T = 2\pi/\omega.

f(t) = \frac{1}{2}a_0 + \sum_{j=1}^{\infty}\left(a_j\cos(j\omega t) + b_j\sin(j\omega t)\right)    (3.178)

a_0 = \frac{2}{T}\int_0^T f(t)\,dt    (3.179)

a_j = \frac{2}{T}\int_0^T f(t)\cos(j\omega t)\,dt    (3.180)

b_j = \frac{2}{T}\int_0^T f(t)\sin(j\omega t)\,dt    (3.181)

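The coefficient formulas (3.179)-(3.181) are directly computable by quadrature. Here is a minimal numerical sketch (my own illustration; the square-wave signal is an assumption chosen for checking, since its odd-harmonic coefficients are known to be 4/(jπ)).

```python
# Approximate a_j and b_j of a T-periodic signal by quadrature over one period
import numpy as np

T = 2.0 * np.pi
w = 2.0 * np.pi / T
t = np.linspace(0.0, T, 4001)
f = np.sign(np.sin(t))                    # an illustrative square wave

def a(j):  # a_j = (2/T) ∫_0^T f cos(jωt) dt
    return 2.0 / T * np.trapz(f * np.cos(j * w * t), t)

def b(j):  # b_j = (2/T) ∫_0^T f sin(jωt) dt
    return 2.0 / T * np.trapz(f * np.sin(j * w * t), t)

print([round(b(j), 3) for j in range(1, 6)])  # ≈ 4/(jπ) for odd j, 0 for even j
```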
T

T 0

Proof. Taking an integral of both sides of (3.169) over the interval (−πλ, πλ) provides us with ⎛ ⎞   πλ  πλ ∞  js js ⎠ ⎝ 1 a0 + + bj sin f (s) ds = aj cos ds 2 λ λ −πλ −πλ j=1 = =

1 a0 2



πλ

ds + −πλ





aj

j=1

πλ

cos −πλ

πλa0

 πλ ∞ js js ds + bj sin ds λ λ −πλ j=1

(3.182)

and hence a0 =

1 πλ



πλ

f (s) ds −πλ

(3.183)

3. P3: Approximation of Functions

201

a0 /2 is the average value of f (t) over the interval (−πλ, −πλ). Multiplying both sides of (3.169) by cos (ks/λ) and taking an integral yields 

πλ

ks f (s) ds λ −πλ ⎛ ⎞   πλ ∞  ks ⎝ 1 js js ⎠ a0 + + bj sin = cos aj cos ds λ 2 λ λ −πλ j=1 cos

 πλ ∞ js ks ks cos ds cos ds + aj cos λ λ λ −πλ −πλ j=1  ∞ πλ js ks sin ds = πaj + bj cos λ λ −πλ j=1

1 = a0 2



πλ

(3.184)

and, therefore, we have aj =

1 π



πλ

cos −πλ

ks f (t) dt λ

(3.185)

Multiplying both sides of (3.169) by sin (ks/λ) and taking an integral show that 

πλ

ks f (s) ds λ −πλ ⎛ ⎞   πλ ∞  ks ⎝ 1 js js ⎠ a0 + + bj sin = sin aj cos ds λ 2 λ λ −πλ j=1 sin

=

1 a0 2 +



πλ

sin −πλ

∞ j=1



bj

 πλ ∞ js ks ks ds + cos ds aj sin λ λ λ −πλ j=1

πλ

sin −πλ

js ks sin ds = πbj λ λ

(3.186)

And, therefore, we have 1 bj = π



πλ

sin −πλ

js f (s) ds λ

(3.187)

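The proof rests on the orthogonality of the harmonics over one period. The brief sympy check below (my own illustration, written for λ = 1) confirms the orthogonality relations used above.

```python
# Over (-π, π): ∫cos(js)cos(ks)ds = π δ_jk and ∫sin(js)cos(ks)ds = 0 for j, k ≥ 1
import sympy as sp

s = sp.symbols('s')
for j in (1, 2, 3):
    for k in (1, 2, 3):
        cc = sp.integrate(sp.cos(j*s) * sp.cos(k*s), (s, -sp.pi, sp.pi))
        sc = sp.integrate(sp.sin(j*s) * sp.cos(k*s), (s, -sp.pi, sp.pi))
        print(j, k, cc, sc)   # cc = π when j = k, else 0; sc is always 0
```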
Therefore, a periodic function f (t) can be decomposed and substituted by a series of its harmonic components. Working with a truncated series of harmonic functions is much easier than working with an arbitrary periodic function. Because sin (jt) are odd and cos (jt) are even functions, the Fourier series breaks a periodic function into the sum of an even and an odd series, associated with an even and an odd function, respectively. If f (t) is an even function, then f (−t) = f (t) and bj = 0, and if f (t) is an odd function, then f (−t) = −f (t) and aj = 0.


An alternative representation of Eq. (3.169) is

f(t) = \frac{1}{2}a_0 + \sum_{j=1}^{\infty} C_j\sin\left(jt + \varphi_j\right)    (3.188)

C_j = \sqrt{a_j^2 + b_j^2}    (3.189)

\varphi_j = \arctan\frac{a_j}{b_j}    (3.190)

Equating real and imaginary parts on the two sides of De Moivre's formula

\left(\cos t + i\sin t\right)^k = \cos kt + i\sin kt    (3.191)

we have

\cos kt = \binom{k}{0}\cos^k t - \binom{k}{2}\cos^{k-2}t\,\sin^2 t + \cdots    (3.192)

\sin kt = \binom{k}{1}\cos^{k-1}t\,\sin t - \binom{k}{3}\cos^{k-3}t\,\sin^3 t + \cdots    (3.193)

where

\binom{n}{k} = \frac{n!}{k!\,(n-k)!}    (3.194)

Therefore, every trigonometric polynomial of order n is also a polynomial of degree n in cos t and sin t. Also, by equating real and imaginary parts of the two sides of the exponential expressions of cos^k t and sin^k t,

\cos^k t = \left(\frac{e^{it} + e^{-it}}{2}\right)^k, \qquad \sin^k t = \left(\frac{e^{it} - e^{-it}}{2i}\right)^k    (3.195)

we have

2^{2k-1}\cos^{2k} t = \binom{2k}{0}\cos 2kt + \binom{2k}{1}\cos(2k-2)t + \cdots + \binom{2k}{k-1}\cos 2t + \frac{1}{2}\binom{2k}{k}    (3.196)

2^{2k}\cos^{2k+1} t = \binom{2k+1}{0}\cos(2k+1)t + \binom{2k+1}{1}\cos(2k-1)t + \cdots + \binom{2k+1}{k}\cos t    (3.197)

(-1)^k 2^{2k-1}\sin^{2k} t = \binom{2k}{0}\cos 2kt - \binom{2k}{1}\cos(2k-2)t + \binom{2k}{2}\cos(2k-4)t + \cdots + (-1)^k\frac{1}{2}\binom{2k}{k}    (3.198)

(-1)^k 2^{2k}\sin^{2k+1} t = \binom{2k+1}{0}\sin(2k+1)t - \binom{2k+1}{1}\sin(2k-1)t + \binom{2k+1}{2}\sin(2k-3)t + \cdots + (-1)^k\binom{2k+1}{k}\sin t    (3.199)

Assume f_1(t) is a given periodic function and f_2(t) is a trigonometric polynomial to approximate f_1(t).

f_1(t) = f(t), \qquad f(t) = f(t + 2\pi)    (3.200)

f_2(t) = \frac{1}{2}a_0 + a_1\cos t + b_1\sin t + a_2\cos 2t + b_2\sin 2t + \cdots = \frac{1}{2}a_0 + \sum_{j=1}^{\infty}\left(a_j\cos jt + b_j\sin jt\right)    (3.201)

The distance between the two functions f_1(t) and f_2(t) in [-\pi, \pi] is

d^2 = \int_{-\pi}^{\pi}\left(f_1 - f_2\right)^2 dt = \int_{-\pi}^{\pi} f_1^2\,dt + \int_{-\pi}^{\pi} f_2^2\,dt - 2\int_{-\pi}^{\pi} f_1 f_2\,dt    (3.202)

To minimize the distance of f_2(t) from f_1(t), we have

\frac{\partial d^2}{\partial a_k} = \frac{\partial d^2}{\partial b_k} = 0    (3.203)

and, therefore, the coefficients of the trigonometric polynomial should be the Fourier coefficients.

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\,ds    (3.204)

a_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\cos js\,ds    (3.205)

b_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\sin js\,ds    (3.206)

Every generalized periodic function f(s) is the sum of a trigonometric series. The trigonometric series converges, in the generalized sense, if and only if for some k \geq 0 we have

\frac{a_j}{j^k} \to 0, \qquad \frac{b_j}{j^k} \to 0    (3.207)

Joseph Fourier (1768-1830) introduced his method to answer the question of how a string can vibrate with a number of different frequencies at the same time. Fourier showed how we can decompose a periodic wave into a sum of sine and cosine waves with different frequencies. The frequencies are integer multiples of the fundamental frequency of the periodic wave (Fourier, 1878).

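For small k, the power-reduction identities (3.196)-(3.199) reduce to familiar double- and triple-angle formulas; the quick sympy check below (my own illustration) verifies three such instances.

```python
# Spot checks of (3.196)-(3.199): 2cos^2 t = cos 2t + 1, 4cos^3 t = cos 3t + 3cos t,
# and -4sin^3 t = sin 3t - 3sin t
import sympy as sp

t = sp.symbols('t')
print(sp.simplify(2*sp.cos(t)**2 - (sp.cos(2*t) + 1)))            # 0
print(sp.simplify(4*sp.cos(t)**3 - (sp.cos(3*t) + 3*sp.cos(t))))  # 0
print(sp.simplify(-4*sp.sin(t)**3 - (sp.sin(3*t) - 3*sp.sin(t)))) # 0
```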

[Figure 3.2: A sawtooth function]

Example 101 Separated sawtooth wave function.
Consider the separated sawtooth function f(x) shown in Fig. 3.2.

f(x) = \begin{cases} x - 2k\pi & 2k\pi < x < (2k+1)\pi \\ 0 & \text{elsewhere} \end{cases} \qquad k = 0, 1, 2, 3, \ldots    (3.208)

The coefficients of its Fourier series are

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\,ds = \frac{1}{\pi}\int_0^{\pi} s\,ds = \frac{\pi}{2}    (3.209)

a_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\cos(js)\,ds = \frac{1}{\pi}\int_0^{\pi} s\cos(js)\,ds = \frac{-1 + (-1)^j}{j^2\pi}    (3.210)

b_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\sin(js)\,ds = \frac{1}{\pi}\int_0^{\pi} s\sin(js)\,ds = -\frac{(-1)^j}{j}    (3.211)

Therefore, the Fourier series of the separated sawtooth function (3.208) is

f(x) = \frac{1}{2}a_0 + \sum_{j=1}^{\infty}\left(a_j\cos jx + b_j\sin jx\right) = \frac{\pi}{4} + \sum_{j=1}^{\infty}\left(\frac{-1 + (-1)^j}{j^2\pi}\cos jx - \frac{(-1)^j}{j}\sin jx\right)    (3.212)

whose expanded form for the first five harmonics is

f(x) = \frac{\pi}{4} - \frac{2}{\pi}\cos x + \sin x - \frac{1}{2}\sin 2x - \frac{2}{9\pi}\cos 3x + \frac{1}{3}\sin 3x - \frac{1}{4}\sin 4x - \frac{2}{25\pi}\cos 5x + \frac{1}{5}\sin 5x + \cdots    (3.213)

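A few lines of code make the convergence of (3.212) visible numerically. The sketch below (my own illustration) evaluates partial sums of the series and compares them with one period of the separated sawtooth.

```python
# Partial sums of the Fourier series (3.212) for the separated sawtooth
import numpy as np

def partial_sum(x, n):
    """pi/4 + sum_{j=1}^{n} (a_j cos jx + b_j sin jx), with (3.210)-(3.211)."""
    s = np.full_like(x, np.pi / 4)
    for j in range(1, n + 1):
        a_j = (-1 + (-1)**j) / (j**2 * np.pi)
        b_j = -(-1)**j / j
        s += a_j * np.cos(j * x) + b_j * np.sin(j * x)
    return s

x = np.linspace(-np.pi, np.pi, 7)
exact = np.where(x > 0, x, 0.0)                 # one period of the function
print(np.round(partial_sum(x, 50) - exact, 3))  # small except near the jumps
```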

[Figure 3.3: The Fourier series approximation of a separated sawtooth periodic function]

Figure 3.3 illustrates how the Fourier series approaches (3.212) as j increases.

Example 102 Sawtooth wave function.
Let us approximate the full sawtooth function f(x) by a Fourier series.

f(x) = x - 2k\pi, \qquad (2k-1)\pi < x < (2k+1)\pi, \qquad k = 0, 1, 2, 3, \ldots    (3.214)

The Fourier coefficients are

a_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\cos(js)\,ds = \frac{1}{\pi}\int_{-\pi}^{\pi} s\cos(js)\,ds = 0    (3.215)

b_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\sin(js)\,ds = \frac{1}{\pi}\int_{-\pi}^{\pi} s\sin(js)\,ds = \frac{2(-1)^{j+1}}{j}    (3.216)

Hence, the Fourier series of the sawtooth function (3.214) is

f(x) = \frac{1}{2}a_0 + \sum_{j=1}^{\infty}\left(a_j\cos jx + b_j\sin jx\right) = \sum_{j=1}^{\infty}\frac{2(-1)^{j+1}}{j}\sin jx    (3.217)

Its expanded form for the first few harmonics is

f(x) = 2\left(\sin x - \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x - \cdots\right)    (3.218)

[Figure 3.4: The Fourier series approximation of a sawtooth periodic function]

Figure 3.4 illustrates the Fourier series for (3.214).

Example 103 Fourier's integral formula.
The trigonometric series

\frac{1}{2}a_0 + a_1\cos t + b_1\sin t + a_2\cos 2t + b_2\sin 2t + \cdots    (3.219)

is called the Fourier series of a function f(x), and a_0, a_1, b_1, a_2, b_2, ... are called the Fourier constants if

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\,ds    (3.220)

a_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\cos js\,ds    (3.221)

b_j = \frac{1}{\pi}\int_{-\pi}^{\pi} f(s)\sin js\,ds    (3.222)

and f(x) is bounded and integrable in the interval (-\pi, \pi); if unbounded, then \int_{-\pi}^{\pi} f(s)\,ds must be convergent. Employing (3.169)-(3.171), we will have the approximate equation for the periodic function f(t).

f(t) = \frac{1}{2\pi\lambda}\int_{-\pi\lambda}^{\pi\lambda} f(t)\,dt + \sum_{j=1}^{\infty}\cos\frac{jt}{\lambda}\cdot\frac{1}{\pi\lambda}\int_{-\pi\lambda}^{\pi\lambda} f(s)\cos\frac{js}{\lambda}\,ds + \sum_{j=1}^{\infty}\sin\frac{jt}{\lambda}\cdot\frac{1}{\pi\lambda}\int_{-\pi\lambda}^{\pi\lambda} f(s)\sin\frac{js}{\lambda}\,ds    (3.223)

This formula may be contracted and written in a new form.

f(t) = \frac{1}{2\pi\lambda}\int_{-\pi\lambda}^{\pi\lambda} f(s)\,ds + \frac{1}{\pi\lambda}\sum_{j=1}^{\infty}\int_{-\pi\lambda}^{\pi\lambda} f(s)\cos\frac{j(t-s)}{\lambda}\,ds    (3.224)

Assigning j/\lambda = u and 1/\lambda = \Delta u and letting \lambda \to \infty changes the sum to an integral, and we will have Fourier's integral formula (Titchmarsh, 1948).

f(t) = \frac{1}{\pi}\int_0^{\infty} du \int_{-\infty}^{\infty} f(s)\cos u(t-s)\,ds    (3.225)

It may also be written in the form

f(t) = \int_0^{\infty}\left(a(u)\cos(tu) + b(u)\sin(tu)\right)du    (3.226)

where

a(u) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(s)\cos(us)\,ds    (3.227)

b(u) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(s)\sin(us)\,ds    (3.228)

If f(t) is an even function, then b(u) = 0, and we have Fourier's cosine formula.

a(u) = \frac{2}{\pi}\int_0^{\infty} f(s)\cos(us)\,ds    (3.229)

f(t) = \frac{2}{\pi}\int_0^{\infty}\cos(tu)\,du\int_0^{\infty} f(s)\cos(us)\,ds    (3.230)

If f(t) is an odd function, then a(u) = 0, and we have Fourier's sine formula (Carslaw, 1921).

b(u) = \frac{2}{\pi}\int_0^{\infty} f(s)\sin(us)\,ds    (3.231)

f(t) = \frac{2}{\pi}\int_0^{\infty}\sin(tu)\,du\int_0^{\infty} f(s)\sin(us)\,ds    (3.232)

Example 104 Periodic functions.
A function x = f(t) is called a periodic function if there exists a period T > 0 such that for all t we have

f(t) = f(t + T) = f(t - T)    (3.233)

If f(t) is periodic of period T, then f(t) is also periodic of period kT, k \in \mathbb{N}. Usually we take the minimum value of T as the period for which we have (3.233). The function f(t) = cos t has period T = 2k\pi. If f(t) is periodic of period T, then f(\omega t) is of period T_1 = T/\omega. The function f(t) = cos 5t has period T = 2\pi/5, and the function f(t) = cos \omega t has period T = 2\pi/\omega. If f(t) is periodic of period T and g(t) is periodic of period T, then f(t)g(t) is also of period T, but T may not be the minimum period. The functions cos t and sin t both have period 2\pi, and the function cos t sin t also has period 2\pi, but its minimum period is \pi. If f(t) is periodic of period T, then for any number 0 < a < T, we have \int_a^{a+T} f(x)\,dx = \int_0^T f(x)\,dx. If a function f(t) is the sum of two periodic functions with different periods, then f(t) may not be periodic. However, if two functions have equal periods, then their sum will also have the same period (Grigorieva, 2015).

Example 105 Non-periodic functions.
Consider a non-periodic function f(t) that is defined only on the interval [0, L], and we want a Fourier series to approximate the function. To develop the Fourier series, we can make f(t) a periodic function by extending it as an even or odd periodic function over [-L, L] with period 2L. As an example, assume a non-periodic function f(t) such as

f(t) = t, \qquad 0 < t < L