Fractional-Order Control Systems: Fundamentals and Numerical Implementations 9783110497977, 9783110499995

This book explains the essentials of fractional calculus and demonstrates its application in control system modeling and analysis.


English, 388 pages, 2017


Table of contents :
Foreword
Preface
Contents
1. Introduction to fractional calculus and fractional-order control
2. Mathematical prerequisites
3. Definitions and computation algorithms of fractional-order derivatives and integrals
4. Solutions of linear fractional-order differential equations
5. Approximation of fractional-order operators
6. Modelling and analysis of multivariable fractional-order transfer function matrices
7. State space modelling and analysis of linear fractional-order systems
8. Numerical solutions of nonlinear fractional-order differential equations
9. Design of fractional-order PID controllers
10. Frequency domain controller design for multivariable fractional-order systems
A. Inverse Laplace transforms involving fractional and irrational operations
B. FOTF Toolbox functions and models
C. Benchmark problems for the assessment of fractional-order differential equation algorithms
Bibliography
Index

Dingyü Xue
Fractional-Order Control Systems

Fractional Calculus in Applied Sciences and Engineering

Editor in Chief: Changpin Li
Editorial Board: Virginia Kiryakova, Francesco Mainardi, Dragan Spasic, Bruce Ian Henry, YangQuan Chen

Volume 1

Dingyü Xue

Fractional-Order Control Systems | Fundamentals and Numerical Implementations

Mathematics Subject Classification 2010: Primary 26A33, 34A08; Secondary 93C83, 65Y20, 46N40, 34C60

Author: Prof. Dingyü Xue, School of Information Science and Engineering, Northeastern University, Wenhua Road 3rd Street, 110819 Shenyang, China; [email protected]

ISBN 978-3-11-049999-5
e-ISBN (PDF) 978-3-11-049797-7
e-ISBN (EPUB) 978-3-11-049719-9
Set-ISBN 978-3-11-049798-4
ISSN 2509-7210
Library of Congress Cataloging-in-Publication Data: a CIP catalog record for this book has been applied for at the Library of Congress.
Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.
© 2017 Walter de Gruyter GmbH, Berlin/Boston
Cover image: naddi/iStock/thinkstock
Typesetting: Dimler & Albroscheit, Müncheberg
Printing and binding: CPI books GmbH, Leck
♾ Printed on acid-free paper. Printed in Germany.
www.degruyter.com

Foreword

Fifteen years have passed since the first Tutorial Workshop on “Fractional Calculus Applications in Automatic Control and Robotics” at the IEEE International Conference on Decision and Control (CDC) in December 2002 in Las Vegas, USA. The perception of the usefulness of fractional calculus in control systems has improved considerably since then. Indeed, using integer-order models and controllers for complex natural or man-made systems is just for our own convenience, while nature actually runs in a fractional-order dynamical way. Using traditional integer-order tools for the modelling and control of dynamic systems may result in suboptimal performance; that is, using fractional calculus tools, we could be “more optimal”, as already documented in the literature. A signature piece of evidence of the wider acceptance of fractional-order control research is the Best Journal Paper Award presented at the IFAC World Congress 2011 in Milan, Italy, to the top-cited paper “Tuning and Auto-Tuning of Fractional-Order Controllers for Industry Applications”, published in the IFAC journal Control Engineering Practice. Today, in the control community, more and more researchers are looking into and working on the topic of fractional-order control. With the publication of Bruce J. West’s book “The Fractional Dynamic View of Complexity – Tomorrow’s Science” (CRC Press, 2015), we also believe that fractional-order systems and control theory are an important step towards “tomorrow’s sciences”. Professor Dingyü Xue has been an active researcher and a pioneering educator in computer-aided design of control systems (CADCS). His textbooks and the MATLAB/Simulink toolkits he has developed have been benefitting the control and other communities for more than 20 years. In MATLAB, the most fundamental tool is the construction of LTI (linear time invariant) objects such as tf, ss and zpk. How to construct similar tools for fractional-order LTI systems has been one of Dingyü’s key contributions, known as FOTF.
Dingyü has a very fine taste for reliable coding in MATLAB and very serious craftsmanship. His Ph.D. mentor, Professor Derek P. Atherton, has openly praised him as “a MATLAB genius” multiple times. The roots of this book by Professor Dingyü Xue can be traced back to the first complete course on fractional calculus and its applications in engineering, which was taught by Igor Podlubny in 2007 at Utah State University, and which Dingyü greatly improved in 2009, not only by extending it, but also by creating a huge amount of accompanying MATLAB code. We can fully rely on the code included in this book. “Fractional-Order Control Systems: Fundamentals and Numerical Implementations” is a timely, self-contained work that can be used both as a textbook and as a desktop reference monograph. It focuses on the fundamentals and numerical aspects of fractional-order control, and it is very useful for beginning students, both undergraduate and postgraduate, as well as for researchers and engineers.

DOI 10.1515/9783110497977-202

Several impressive features of this book are:
(1) All examples are reproducible with the MATLAB scripts embedded in the text, so the readers can also learn some coding skills from reading this book.
(2) The logical flow is self-contained: mathematical fundamentals – fractional calculus – modelling and analysis of fractional-order control systems – design of control systems.
(3) An object-oriented, open-source MATLAB toolbox, FOTF, is introduced in detail, with a newly added MIMO (multi-input multi-output) feature.
(4) New Simulink-based numerical solution methods with high numerical precision are given for general nonlinear, possibly time-varying and delayed, MIMO fractional-order systems.
(5) Benchmark problems are designed for proper comparisons of existing numerical algorithms.
“Fractional-Order Control Systems: Fundamentals and Numerical Implementations” has a simple goal in mind: making the modelling and analysis of fractional-order systems as simple as dealing with integer-order ones. As Confucius said, “The craftsman who wishes to work well has first to sharpen his implements”; it is clear that this book is a suitable and effective implement for fractional-order control engineering. We hope the readers will feel the same and enjoy reading it and using it in their research and engineering practice.
YangQuan Chen
University of California, Merced, USA

Igor Podlubny
Technical University of Košice, Slovakia

Preface

Fractional calculus and fractional-order control are rapidly developing topics in today’s science and engineering. The systematic knowledge in these fields is presented in this book, accompanied by easy-to-use MATLAB code. The first part is devoted to fractional calculus and its computation; in the second part, a dedicated object-oriented MATLAB toolbox, the FOTF Toolbox, is presented, and the foundations of linear fractional-order control modelling and analysis are established. The third part presents some fractional-order controller design techniques. Many new research results and innovations make their debut in this book. These include the algorithms and code for high-precision numerical solutions of fractional calculus and fractional-order differential equations; the dedicated MATLAB toolbox – the FOTF Toolbox – fully supporting multivariable systems and descriptor fractional-order state space models, with applications in the frequency domain design of multivariable systems; extended fractional-order state space models and their numerical solution algorithms; much more efficient predictor–corrector algorithms for nonlinear Caputo equations; Simulink-based solutions of nonlinear fractional-order differential equations with nonzero initial conditions; a unified optimum fractional-order PID-type controller design framework for a variety of plant templates; and benchmark problems for fractional-order differential equations. Each technical point is fully accompanied by MATLAB commands and functions written by the author. Together with the FOTF Toolbox, the readers can experience the numerical implementations of the technical topics for themselves, with all the code open source. This book does not just display existing results; it advises the readers on how to obtain the results themselves, and how to explore new knowledge with the reusable code.
In the early 2000s, inspired or even persuaded by my friend, Professor YangQuan Chen of the University of California, Merced, I started research work in the field of fractional-order control. But it was not until 2003, when I was coauthoring the first edition of the mathematics book in MATLAB with YangQuan, that I really put effort into fractional calculus computations; there we introduced systematic materials on the computation of fractional-order derivatives, filter approximation, and linear and nonlinear fractional-order differential equation solutions, some of which are still in use in the fractional calculus community. Thanks must first go to my long-time collaborator and friend, YangQuan. Thanks also go to some of the leading scholars and active researchers in the fractional calculus community: Professors Igor Podlubny and Ivo Petráš of the Technical University of Košice, Professor Kai Diethelm of Technische Universität Braunschweig, Professor Yan Li of Shandong University, Professor Yong Wang and his team at the University of Science and Technology of China, Professor Junguo Lu of Shanghai Jiaotong University, Professor Changpin Li of Shanghai University, Professor Donghai Li of Tsinghua University, Professor Yongguang Yu of Beijing Jiaotong University, Professors Wen Chen and Hongguang Sun of Hohai University, Professor Caibin Zeng of South China University of Technology, and my coauthors of the 2010 Springer monograph, Professors Blas Vinagre, Concepción Monje and Vicente Feliu from Spain. Fruitful discussions and correspondence with them motivated me greatly in initiating and completing research activities in the field, and enriched the materials in this book. Discussions with my colleagues at Northeastern University, especially with Drs Feng Pan, Dali Chen and Xuefeng Zhang, brought out many interesting outcomes for this book. Some contributions from my students and former students are also acknowledged here: in particular, the contributions on the numerical solutions of fractional-order derivatives and differential equations by Mr Lu Bai and Dr Chunna Zhao; on filter design by Drs Chunna Zhao and Li Meng; on FOTF Toolbox design and testing by Ms Tingxue Li and Ms Lu Liu; and on controller design by Drs Chunna Zhao and Li Meng, Ms Weinan Wang, Ms Lu Liu and Ms Tingxue Li. The research work of Ms Yang Yang, Drs Yanzhu Zhang, Yanmei Liu and Zhen Chen, and Mr Lanfeng Chen is also acknowledged. Special thanks to Mr Lu Bai for proposing and implementing the high-precision algorithms, and also for his efforts in writing, testing and finalising some of the companion functions in the book. Personally, I think that his original work puts a full stop to the research on numerical solutions of engineering-related fractional-order differential equations. Thanks to the Series Editor, Professor Changpin Li, and the editorial board member, Professor YangQuan Chen, for the invitation to write the book, and thanks to the Acquisitions Editors, Ms Baihua Wang and Chao Yang, the Project Editors, Astrid Seifert and Nadja Schedensack, and the Production Manager, Mrs Gesa Plauschenat, of Walter de Gruyter GmbH for their efficient work and help.
Thanks also go to the Editor, Ms Hong Jiang of Science Press, China, for her careful work in arranging the concurrent Chinese publication. Thanks to Professors YangQuan Chen and Igor Podlubny for writing the foreword of this book. The financial support from the National Natural Science Foundation of China under Grants 61174145 and 61673094 is acknowledged. Last but not least, the author would like to thank his wife Jun Yang and his daughter Yang Xue for their patience, understanding, and complete and everlasting support throughout this work.
Dingyü Xue
Northeastern University, Shenyang, China

March 2017

Contents
Foreword | V
Preface | VII
1 Introduction to fractional calculus and fractional-order control | 1
1.1 Historical review of fractional calculus | 1
1.2 Fractional modelling in the real world | 3
1.3 Introduction to fractional-order control tools | 5
1.4 Structure of the book | 6
1.4.1 Outlines of the book | 6
1.4.2 A guide of reading the book | 7
2 Mathematical prerequisites | 9
2.1 Elementary special functions | 9
2.1.1 Error and complementary error functions | 10
2.1.2 Gamma functions | 11
2.1.3 Beta functions | 15
2.2 Dawson function and hypergeometric functions | 18
2.2.1 Dawson function | 18
2.2.2 Hypergeometric functions | 20
2.3 Mittag-Leffler functions | 23
2.3.1 Mittag-Leffler functions with one parameter | 23
2.3.2 Mittag-Leffler functions with two parameters | 26
2.3.3 Mittag-Leffler functions with more parameters | 30
2.3.4 Derivatives of Mittag-Leffler functions | 30
2.3.5 Numerical evaluation of Mittag-Leffler functions and their derivatives | 33
2.4 Some linear algebra techniques | 34
2.4.1 Kronecker product and Kronecker sum | 35
2.4.2 Matrix inverse | 35
2.4.3 Arbitrary matrix function evaluations | 38
2.5 Numerical optimisation problems and solutions | 40
2.5.1 Unconstrained optimisation problems and solutions | 40
2.5.2 Constrained optimisation problems and solutions | 41
2.5.3 Global optimisation solutions | 44
2.6 Laplace transform | 47
2.6.1 Definitions and properties | 47
2.6.2 Computer solutions to Laplace transform problems | 48
3 Definitions and computation algorithms of fractional-order derivatives and integrals | 51
3.1 Fractional-order Cauchy integral formula | 52
3.1.1 Cauchy integral formula | 52
3.1.2 Fractional-order derivative and integral formula for commonly used functions | 53
3.2 Grünwald–Letnikov definition | 53
3.2.1 Deriving high-order derivatives | 53
3.2.2 Grünwald–Letnikov definition in fractional calculus | 54
3.2.3 Numerical computation of Grünwald–Letnikov derivatives and integrals | 54
3.2.4 Podlubny’s matrix algorithm | 61
3.2.5 Studies on short-memory effect | 62
3.3 Riemann–Liouville definition | 66
3.3.1 High-order integrals | 66
3.3.2 Riemann–Liouville fractional-order definition | 66
3.3.3 Riemann–Liouville formula of commonly used functions | 67
3.3.4 Properties of initial time translation | 68
3.3.5 Numerical implementation of the Riemann–Liouville definition | 69
3.4 High-precision computation algorithms of fractional-order derivatives and integrals | 70
3.4.1 Construction of generating functions with arbitrary orders | 71
3.4.2 FFT-based algorithm | 74
3.4.3 A new recursive formula | 76
3.4.4 A better fitting at initial instances | 80
3.4.5 Revisit to the matrix algorithm | 85
3.5 Caputo definition | 86
3.6 Relationships between different definitions | 87
3.6.1 Relationship between Grünwald–Letnikov and Riemann–Liouville definitions | 88
3.6.2 Relationships between Caputo and Riemann–Liouville definitions | 88
3.6.3 Computation of Caputo fractional-order derivatives | 89
3.6.4 High-precision computation of Caputo derivatives | 91
3.7 Properties of fractional-order derivatives and integrals | 93
3.7.1 Properties | 93
3.7.2 Geometric and physical interpretations of fractional-order integrals | 95
4 Solutions of linear fractional-order differential equations | 99
4.1 Introduction to linear fractional-order differential equations | 99
4.1.1 General form of linear fractional-order differential equations | 100
4.1.2 Initial value problems of fractional-order derivatives under different definitions | 100
4.1.3 An important Laplace transform formula | 102
4.2 Analytical solutions of some fractional-order differential equations | 103
4.2.1 One-term differential equations | 103
4.2.2 Two-term differential equations | 103
4.2.3 Three-term differential equations | 104
4.2.4 General n-term differential equations | 105
4.3 Analytical solutions of commensurate-order differential equations | 106
4.3.1 General form of commensurate-order differential equations | 106
4.3.2 Some commonly used Laplace transforms in linear fractional-order systems | 107
4.3.3 Analytical solutions of commensurate-order equations | 109
4.4 Closed-form solutions of linear fractional-order differential equations with zero initial conditions | 113
4.4.1 Closed-form solution | 113
4.4.2 The matrix-based solution algorithm | 117
4.4.3 High-precision closed-form algorithm | 119
4.5 Numerical solutions to Caputo differential equations with nonzero initial conditions | 121
4.5.1 Mathematical description of Caputo equations | 121
4.5.2 Taylor auxiliary algorithm | 121
4.5.3 High-precision algorithm | 124
4.6 Numerical solutions of irrational system equations | 130
4.6.1 Irrational transfer function expression | 130
4.6.2 Simulation with numerical inverse Laplace transforms | 131
4.6.3 Closed-loop irrational system response evaluation | 133
4.6.4 Stability assessment of irrational systems | 134
4.6.5 Numerical Laplace transform | 138
5 Approximation of fractional-order operators | 141
5.1 Some continued fraction-based approximations | 142
5.1.1 Continued fraction approximation | 142
5.1.2 Carlson’s method | 145
5.1.3 Matsuda–Fujii filter | 147
5.2 Oustaloup filter approximations | 149
5.2.1 Ordinary Oustaloup approximation | 149
5.2.2 A modified Oustaloup filter | 155
5.3 Integer-order approximations of fractional-order transfer functions | 157
5.3.1 High-order approximations | 157
5.3.2 Low-order approximation via optimal model reduction techniques | 161
5.4 Approximations of irrational models | 165
5.4.1 Frequency response fitting approach | 166
5.4.2 Charef approximation | 168
5.4.3 Optimised Charef filters for complicated irrational models | 173
6 Modelling and analysis of multivariable fractional-order transfer function matrices | 179
6.1 FOTF — Creation of a MATLAB class | 180
6.1.1 Defining FOTF class | 180
6.1.2 Display function programming | 182
6.1.3 Multivariable FOTF matrix support | 184
6.2 Interconnections of FOTF blocks | 185
6.2.1 Multiplications of FOTF blocks | 185
6.2.2 Adding FOTF blocks | 187
6.2.3 Feedback function | 188
6.2.4 Other supporting functions | 190
6.2.5 Conversions between FOTFs and commensurate-order models | 193
6.3 Properties of linear fractional-order systems | 195
6.3.1 Stability analysis | 195
6.3.2 Partial fraction expansion and stability assessment | 198
6.3.3 Norms of fractional-order systems | 199
6.4 Frequency domain analysis | 201
6.4.1 Frequency domain analysis of SISO systems | 201
6.4.2 Stability assessment with Nyquist plots | 203
6.4.3 Diagonal dominance analysis | 204
6.4.4 Frequency response evaluation under complicated structures | 207
6.4.5 Singular value plots in multivariable systems | 210
6.5 Time domain analysis | 211
6.5.1 Step and impulse responses | 211
6.5.2 Time responses under arbitrary input signals | 215
6.6 Root locus for commensurate-order systems | 217
7 State space modelling and analysis of linear fractional-order systems | 221
7.1 Standard representation of state space models | 221
7.2 Descriptions of fractional-order state space models | 222
7.2.1 Class design of FOSS | 222
7.2.2 Conversions between FOSS and FOTF objects | 224
7.2.3 Model augmentation under different base orders | 226
7.2.4 Interconnection of FOSS blocks | 227
7.3 Properties of fractional-order state space models | 231
7.3.1 Stability assessment | 231
7.3.2 State transition matrices | 232
7.3.3 Controllability and observability | 235
7.3.4 Controllable and observable staircase forms | 235
7.3.5 Norm measures | 236
7.4 Analysis of fractional-order state space models | 237
7.5 Extended fractional state space models | 238
7.5.1 Extended linear fractional-order state space models | 238
7.5.2 Nonlinear fractional-order state space models | 240
8 Numerical solutions of nonlinear fractional-order differential equations | 243
8.1 Numerical solutions of nonlinear Caputo equations | 243
8.1.1 Numerical solutions of single-term equations | 244
8.1.2 Solutions of multi-term equations | 249
8.1.3 Numerical solutions of extended fractional-order state space equations | 252
8.1.4 Equation solution-based algorithm | 256
8.2 Efficient high-precision algorithms for Caputo equations | 258
8.2.1 Predictor equation | 258
8.2.2 Corrector equation | 261
8.2.3 High-precision matrix algorithm for implicit Caputo equations | 263
8.3 Simulink block library for typical fractional-order components | 265
8.3.1 FOTF block library | 265
8.3.2 Implementation of FOTF matrix block | 270
8.3.3 Numerical solutions of control problems with Simulink | 272
8.3.4 Validations of Simulink results | 275
8.4 Block diagram-based solutions of fractional-order differential equations with zero initial conditions | 275
8.5 Block diagram solutions of nonlinear Caputo differential equations | 282
8.5.1 Design of a Caputo operator block | 282
8.5.2 Typical procedures in modelling Caputo equations | 284
8.5.3 Simpler block diagram-based solutions of Caputo equations | 286
8.5.4 Numerical solutions of implicit fractional-order differential equations | 291
9 Design of fractional-order PID controllers | 293
9.1 Introduction to fractional-order PID controllers | 293
9.2 Optimum design of integer-order PID controllers | 295
9.2.1 Tuning rules for FOPDT plants | 295
9.2.2 Meaningful objective functions for servo control | 297
9.2.3 OptimPID: An optimum PID controller design interface | 300
9.3 Fractional-order PID controller design with frequency response approach | 302
9.3.1 General descriptions of the frequency domain design specifications | 303
9.3.2 Design of PIλDμ controllers for FOPDT plants | 304
9.3.3 Controller design with FOIDPT plants | 308
9.3.4 Design of PIλDμ for typical fractional-order plants | 310
9.3.5 Design of PIDμ controllers | 311
9.3.6 Design of FO-[PD] controllers | 311
9.3.7 Other considerations on controller design | 312
9.4 Optimal design of PIλDμ controllers | 312
9.4.1 Optimal PIλDμ controller design | 312
9.4.2 Optimal PIλDμ controller design for plants with delays | 317
9.4.3 OptimFOPID: An optimal fractional-order PID controller design interface | 320
9.5 Design of fuzzy fractional-order PID controllers | 322
9.5.1 Fuzzy rules of the controller parameters | 322
9.5.2 Design and implementation with Simulink | 323
10 Frequency domain controller design for multivariable fractional-order systems | 329
10.1 Pseudodiagonalisation of multivariable systems | 329
10.1.1 Pseudodiagonalisation and implementations | 330
10.1.2 Individual channel design of the controllers | 333
10.1.3 Robustness analysis of controller design through examples | 337
10.2 Parameter optimisation design for multivariable fractional-order systems | 340
10.2.1 Parameter optimisation design of integer-order controller | 340
10.2.2 Design procedures of parameter optimisation controllers | 343
10.2.3 Investigations on the robustness of the controllers | 347
10.2.4 Controller design for plants with time delays | 349
A Inverse Laplace transforms involving fractional and irrational operations | 353
A.1 Special functions for Laplace transform | 353
A.2 Laplace Transform tables | 353
B FOTF Toolbox functions and models | 357
B.1 Computational MATLAB functions | 357
B.2 Object-oriented program design | 359
B.3 Simulink models | 361
B.4 Functions and models for the examples | 361
C Benchmark problems for the assessment of fractional-order differential equation algorithms | 363
Bibliography | 365
Index | 369

1 Introduction to fractional calculus and fractional-order control

1.1 Historical review of fractional calculus

In the early development of classical calculus, referred to in this book as integer-order calculus, the British scientist Sir Isaac Newton and the German mathematician Gottfried Wilhelm Leibniz used different symbols to denote derivatives of different orders of a function y(x): the notations Newton used were y′(x), y″(x), y‴(x), . . . , while Leibniz introduced the symbol d^n y(x)/dx^n, where n is a positive integer. In a letter from Guillaume François Antoine L’Hôpital, a French mathematician, to Leibniz in 1695, a specific question on the meaning of the case n = 1/2 was asked. In a letter dated 30 September 1695, Leibniz replied, “Thus it follows that d^{1/2}x will be equal to x√(dx : x), an apparent paradox, from which one day useful consequences will be drawn” [41]. This question and answer are considered the beginning of fractional calculus. In 1819, it was shown by the French mathematician Sylvestre François Lacroix that the 1/2th-order derivative of x is 2√(x/π). It is obvious that the notation Newton used was not suitable for extension to the field of fractional calculus, while the one by Leibniz was ready to be extended into the new field. For almost three centuries thereafter, until the last four or five decades, research in the field concentrated on purely theoretical aspects. Very good historical reviews of the development of fractional calculus can be found in [41, 47]: in [41], Kenneth Miller and Bertram Ross presented a historical review of fractional calculus up to the last decade of the nineteenth century, while in [47], Keith Oldham and Jerome Spanier quoted Professor Bertram Ross’s year-by-year historical review of novel developments in fractional calculus up to the year 1975. These reviews were mainly focused on developments in pure mathematics.
From the 1960s, fractional calculus research was extended into engineering fields. The dissipation model based on fractional-order derivatives was proposed by Professors Michele Caputo and Francesco Mainardi in Italy [5]. Professor Shunji Manabe in Japan extended the theoretical work to the field of control systems and introduced non-integer-order control systems [36]. Professor Igor Podlubny in Slovakia proposed the structures and applications of fractional-order PID controllers [55]. The work of Professor Alain Oustaloup’s group in France on robust controller design and its applications to suspension control systems in the automobile industry [48, 49] is considered a milestone in real-world applications of fractional calculus. From the years around 2000, several monographs dedicated to fractional calculus and its applications in a variety of fields appeared; among them, the well-organised ones are Professor Igor Podlubny’s book [54] on fractional-order differential equations and automatic control in 1999, Professor Rudolf Hilfer’s book [23] in physics in 2000, and Professor Richard Magin’s book [35] in bioengineering in 2002. Recently, several books concentrating on numerical computation and theoretical aspects of fractional calculus have been published, such as Kai Diethelm’s book [16] in 2010, Shantanu Das’s book [13] in 2011, Vladimir Uchaikin’s book [66] in 2013, and Changpin Li and Fanhai Zeng’s book [28] in 2015. Many books on fractional-order control have also been published, for instance, Riccardo Caponetto, Giovanni Dongola, Luigi Fortuna and Ivo Petráš’s book [4] in 2010, Concepción Monje, YangQuan Chen, Blas Vinagre, Dingyü Xue and Vicente Feliu’s book [42] in 2010, Ivo Petráš’s book [52] in 2011, Ying Luo and YangQuan Chen’s book [33] in 2012, and Alain Oustaloup’s book [50] in 2014. The book [67] by Vladimir Uchaikin in 2013 delivers a very comprehensive coverage of the applications of fractional calculus in a variety of fields. It should be noted that the terms “fractional” and “fractional-order” are, strictly speaking, misnomers; more suitable terms are “non-integer-order” or “arbitrary-order”, since apart from fractional (rational) orders, the theory also includes irrational orders, for instance d^{√2}y(t)/dt^{√2}; the order can even be a complex number, though this is beyond the scope of the book. Since the term “fractional” is used extensively in the literature and in the related research communities, it is used in this book as well, with the understanding that it also covers irrational orders, and even systems whose structures are irrational. We shall use the unified terms “fractional calculus”, “fractional-order systems” and “fractional-order derivatives” throughout the book. As is well known in classical calculus, if x is the displacement, then dx/dt is the velocity, while d²x/dt² is the acceleration.
Unfortunately, there are almost no widely accepted physical or geometrical explanations of fractional calculus comparable to those of classical calculus, although some researchers have tried to bridge the gap. One meaningful interpretation is that the fractional-order integral can be regarded as “the moving shadow on the fence” [57]; however, it is not as simple and straightforward as the interpretations in classical calculus. Therefore, easily understood physical interpretations are yet to be established. Here a numerical example is given to show the benefit of fractional-order derivatives and integrals of commonly encountered functions.

Example 1.1. Consider the sinusoidal signal sin t. It is known that its first-order derivative is cos t, and that the higher-order derivatives alternate among ± sin t and ± cos t; there is no extra information besides these four functions.

Solution. It can be found from the well-known Cauchy integral formula that

d^n/dt^n sin t = sin(t + nπ/2).

In fact, the formula applies to any non-integer n, and the fractional-order derivatives of the given sinusoidal function are illustrated by the surface shown in Figure 1.1.

Fig. 1.1: Surface representation of the derivatives of different orders.

>> n0=0:0.1:1.5; t=0:0.2:2*pi; Z=[];
   for n=n0, Z=[Z; sin(t+n*pi/2)]; end, surf(t,n0,Z)

It can be seen that apart from the four functions ± sin t and ± cos t, intermediate information is provided by the gradually changing surface. Therefore, the fractional-order derivatives are more informative than their integer-order counterparts: gradual changes are allowed when the orders are no longer integers. In practical applications, if fractional calculus is introduced, some information invisible in classical calculus may be revealed.
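The closed-form expression sin(t + nπ/2) can also be cross-checked numerically. The sketch below is in Python/NumPy rather than the book's MATLAB; it approximates the half-order derivative with the Grünwald–Letnikov sum (treated properly in Chapter 3). The step size h = 0.01 and the evaluation time t = 30 are arbitrary choices; far from t = 0, where the start-up transient of the 0-based definition has decayed, the result approaches sin(t + π/4).

```python
import numpy as np

def gl_derivative(f, alpha, t_end, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative on [0, t_end]."""
    N = int(round(t_end / h))
    t = np.arange(N + 1) * h
    y = f(t)
    # w[j] = (-1)^j * binom(alpha, j), built by the standard recursion
    w = np.ones(N + 1)
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (1 - (alpha + 1) / j)
    # D^alpha f(t_k) ~ h^(-alpha) * sum_{j<=k} w[j] * f(t_k - j*h)
    d = np.array([np.dot(w[:k + 1], y[k::-1]) for k in range(N + 1)])
    return t, d / h**alpha

t, d = gl_derivative(np.sin, 0.5, 30.0, 0.01)
err = abs(d[-1] - np.sin(30.0 + np.pi / 4))
print(err)   # small once the start-up transient has decayed
```

The same routine with α = 1 reproduces cos t, and sweeping non-integer α fills in the surface of Figure 1.1 column by column.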

1.2 Fractional modelling in the real world

There are many papers and monographs reporting the applications of fractional-order derivative-based modelling and identification of real-world phenomena and systems [13, 23, 35]. Here, several typical examples are given in which the system behaviour cannot be expressed well within the integer-order calculus framework, so that fractional calculus should be introduced to describe the problems. It can be seen that fractional calculus phenomena are ubiquitous in the real world.

Example 1.2. In the viscoelasticity of polymer materials, it is suggested [23] that the rheological constitutive equation can be described more precisely by the following fractional-order differential equation:

σ(t) + τ^(α−β) d^(α−β)σ(t)/dt^(α−β) = E τ^α d^α ε(t)/dt^α

with 0 < α, β < 1.

Example 1.3. Consider the driving-point impedance of a semi-infinite lossy transmission line [13]. The standard voltage equation satisfies an integer-order partial differential equation with boundary conditions

∂²v(x, t)/∂x² = α ∂v(x, t)/∂t,   v(0, t) = v_I(t),   v(∞, t) = 0.

After a series of mathematical manipulations, the driving-point impedance relating voltage and current with zero initial conditions can be modelled by the following fractional-order differential equations:

i(t) = (1/(R√α)) d^(1/2)v(t)/dt^(1/2)   or   v(t) = R√α d^(−1/2)i(t)/dt^(−1/2).

Example 1.4. Ionic polymer metal composite (IPMC) is an innovative material made of an ionic polymer membrane electroded on both sides with a noble metal. IPMCs have great potential for soft robotic actuators, artificial muscles and dynamic sensors in the micro-to-macro size range. A set of frequency response data was measured for an IPMC, and it was found that the response could not be fitted well by the well-established linear integer-order models. Therefore, fractional-order system identification techniques were applied, and the identified model was (see [4])

G(s) = 340 / [s^0.756 (s² + 3.85s + 5880)^1.15].
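Although irrational, such a model is straightforward to evaluate on the imaginary axis. The following Python sketch computes one point of the frequency response using complex arithmetic (the test frequency ω = 10 rad/s is an arbitrary choice for illustration, not a value from [4]; the fractional powers are taken on the principal branch):

```python
import cmath

def G(s):
    """Identified IPMC model of Example 1.4, evaluated with the
    principal branch of the complex power."""
    return 340 / (s**0.756 * (s**2 + 3.85*s + 5880)**1.15)

w = 10.0                            # rad/s; an arbitrary test frequency
val = G(1j * w)
print(abs(val), cmath.phase(val))   # gain and phase of G(jw)
```

Sweeping ω over a logarithmic grid in the same way yields the Bode plot of the fractional-order model.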

Of course, this is a form of fractional-order model.

Example 1.5. In standard heat diffusion processes, the temperature of a rod can be expressed perfectly by a linear one-dimensional partial differential equation of the form

∂c/∂t = K ∂²c/∂x².

If one applies a constant temperature C0 at x = 0, the heat diffusion is governed by the fractional-order transfer function G(s) = e^(−x√(s/k)), such that the temperature can be described by (see [26])

c(x, s) = (C0/s) e^(−x√(s/k)).

Example 1.6. The memristor (short for memory resistor) was postulated as the fourth basic circuit element, after the physically existing resistor, capacitor and inductor, by Professor Leon Chua [12] in 1971, and it was later claimed that the missing memristor had been found [64]. Since fractional calculus exhibits memory behaviour, the resistance can be expressed in fractional calculus terms as [20]

R_m = [R_in^(α+1) ∓ 2kR_d ∫_0^t v(τ)/(t − τ)^(1−α) dτ]^(1/(α+1)),

where the integral is a fractional-order integral.

As is well known, the solutions of integer-order differential equations always exhibit exponential behaviour; however, observations of certain phenomena show that they are power functions of time. This is the so-called power-law.


In this case, fractional-order derivative-related mathematical models may exist for the systems. Equipped with the fractional calculus viewpoint, one may have better ideas for handling complicated phenomena in the world [72].

1.3 Introduction to fractional-order control tools

There are several widely used MATLAB toolboxes for fractional calculus and fractional-order control available within the fractional calculus research community. A survey of some of the numerical tools can be found in [29]. Most of the tools listed in [29] are just single MATLAB functions, solving one or a few specific problems. Only four of them can be considered toolboxes. They are summarised below in the order of their first appearance.
(1) CRONE Toolbox [48], developed by the CRONE team of France led by Professor Alain Oustaloup, is a MATLAB and Simulink toolbox dedicated to the applications of non-integer derivatives in engineering and science. The work started in the 1990s, and the toolbox is useful for solving problems in fractional-order identification, robust control analysis and design. Unfortunately, the code is encrypted in MATLAB's pseudo-code format, so it is not possible to modify or extend the code in the toolbox. CRONE is the French abbreviation of Commande Robuste d'Ordre Non Entier, in English "non-integer-order robust control". Multivariable systems can be handled by the toolbox.
(2) Ninteger Toolbox [68], developed by Professor Duarte Valério of Portugal since 2005, is a non-integer control toolbox for MATLAB intended to help with the development of fractional-order controllers and the assessment of their performance. The kernels of the toolbox are its facilities for fractional-order system identification and for approximating fractional-order systems with integer-order objects. The toolbox applies only to single-variable control systems.
(3) FOTF Toolbox [10, 76] is a control toolbox for fractional-order systems developed by Professor Dingyü Xue of China. It was first released under the name FOTF in 2006, although several functions and Simulink blocks had been released by the author since 2004. Now, all of its components have been upgraded such that multivariable systems are fully supported. Also, the low-level fractional calculus computation facilities have been upgraded to high-precision algorithms, which makes the computational support much more efficient and reliable. The theoretical background and programming details of the toolbox are fully addressed in this book.
(4) FOMCON Toolbox [65], developed by Aleksei Tepljakov of Estonia, started by collecting the existing functions and classes of the FOTF and Ninteger toolboxes to build a toolbox for solving problems in the identification, analysis and design of fractional-order control systems. Since the earlier versions of my FOTF class were used as its basis, it is restricted to single-variable systems.
Besides, readers are also recommended to search for possible new toolboxes on MathWorks' "File Exchange" web-site. However, care must be taken

to select reliable tools, since the standard of implementation may differ significantly, and some tools are of very poor quality and may lead to erroneous results.

1.4 Structure of the book

1.4.1 Outlines of the book

This book covers the essential areas of fractional calculus and its applications in fractional-order control systems. In the first chapter, a brief introduction to fractional calculus is presented. Examples are given to show why fractional-order modelling is useful in the real world. Several of the existing MATLAB toolboxes are summarised, and, as the companion of the book, the FOTF Toolbox is extensively used. In Chapter 2, the necessary mathematical prerequisites are fully covered. Special functions are presented first, followed by solution schemes for the mathematical problems often encountered in the book. In Chapter 3, the definitions and numerical algorithms of fractional-order derivatives and integrals are presented; in particular, high-precision algorithms are proposed and implemented, which can be regarded as the basis of the rest of the material in the book. In Chapter 4, linear fractional-order differential equations are explored, where both analytical and numerical solutions are considered. High-precision algorithms are proposed to solve numerically linear fractional-order differential equations with zero and nonzero initial conditions. The numerical Laplace transform and its inverse are used to solve problems in irrational systems. In Chapter 5, filter approximations to fractional-order actions are introduced, and a variety of continuous filters are explored. Behaviour assessment of the filters is introduced, where the order of the filters as well as parameters such as the fitting frequency intervals are studied. In Chapters 6 and 7, MATLAB classes are established for the two most widely used linear fractional-order models, i.e. the fractional-order transfer function and fractional-order state space models. Object-oriented programming techniques are introduced to solve typical modelling and analysis problems of linear fractional-order systems. The analysis procedures of linear fractional-order systems are made as easy as those of integer-order control systems with the Control System Toolbox. In Chapter 8, numerical approaches are explored for nonlinear fractional-order differential equations, where high-precision algorithms are again introduced to solve explicit or even implicit fractional-order differential equations. A Simulink blockset is designed such that complicated nonlinear fractional-order systems can be modelled and simulated as block diagrams through the use of the powerful Simulink


environment. Nonlinear fractional-order differential equations of any complexity can be modelled and solved with the systematic schemes proposed in Simulink.

In Chapters 9 and 10, controller design tasks are illustrated for fractional-order systems. The commonly used integer-order PID controllers are introduced, and optimum design of the controllers using numerical optimisation algorithms is suggested. This idea is then carried over to design problems with fractional-order PID controllers. Variable-order fuzzy PID controllers are also explored in the book. For multivariable fractional-order systems, diagonal dominance problems are studied with pseudodiagonalisation techniques. The well-established parameter optimisation technique for integer-order multivariable feedback systems is studied for multivariable fractional-order plants. Simulation analysis shows that the controllers are strongly robust to gain variations in the plants.

Three appendices are provided: the Laplace transform tables are given in Appendix A, while in Appendix B the functions and models designed by the author are listed for reference. In Appendix C, five benchmark problems are designed for the fair assessment of numerical algorithms for fractional-order differential equations.

1.4.2 A guide to reading the book

Numerical implementations with MATLAB are provided whenever necessary, so that readers can grasp the algorithms easily and solve their own problems with handy numerical tools. A MATLAB toolbox, the FOTF Toolbox, is designed for the modelling, analysis and design of fractional-order control systems. The download address of the toolbox is http://cn.mathworks.com/matlabcentral/fileexchange/60874-fotf-toolbox.

Besides its use as a textbook on fractional calculus and fractional-order control systems, this monograph is written for three different categories of readers: control engineers who want to introduce fractional-order control facilities into their work; researchers from a variety of disciplines who want to acquire systematic knowledge of fractional calculus and fractional-order control; and mathematics-related researchers and students who want to learn mainly fractional calculus and its applications.

For engineers who just want to try fractional-order controllers in real-world applications, all the theoretical material can be bypassed. The fundamental material on what fractional-order derivatives and integrals are should be studied. Then the computation of fractional-order derivatives of given and unknown functions should be studied, and numerical solutions of linear and nonlinear fractional-order differential equations should be learned. The use of the two classes FOTF and FOSS should be mastered; since the two classes can be used in almost the same way as the Control System Toolbox of MATLAB, there should not be any difficulty in using them. Thus equipped, readers can study further the material in Chapters 9 and 10, try to apply fractional-order controllers in their own control systems, and see whether better control behaviour can be achieved.

Researchers and students who want to learn fractional calculus and fractional-order control systematically should study all the relevant material, while the proofs of most of the theorems can be bypassed. With the reusable code and models provided, readers may simply fit their own problems into the framework, try fractional calculus in their own areas, and see immediately for themselves whether promising results can be achieved by introducing the ideas of fractional calculus.

For researchers and students of mathematics-related disciplines, the programming details and tactics should be studied to improve their own capabilities in algorithm development, so that they can later implement their own algorithms in an efficient way. The book is by no means a strict book in mathematics, and many theorems or even conjectures are listed without proofs. Readers are encouraged to provide proofs for the missing ones. Also, the author sincerely hopes that the dedicated examples can be used as testing problems for mathematics researchers to test their own algorithms on similar problems, such that fair comparisons can be made. Further, a set of dedicated benchmark problems on fractional-order ordinary differential equations is proposed in Appendix C; they are also solved in the book with fairly high precision and speed, and the results are repeatable with the code and models. So, just try to solve them with your own algorithms or the algorithms published by other researchers, and see whether the results in the book can be beaten in any way, i.e. in speed, in accuracy, or in ease of use. New benchmark problems are also welcome, and improved versions of the algorithms in the book may be tailored to them. There is a famous Chinese proverb: "Try to show you are a horse, not a mule." Only in this way may scientific research proceed in the correct direction.

2 Mathematical prerequisites

Fractional calculus is a direct extension of its integer-order counterpart. The theory and computation of fractional calculus are the foundation of fractional-order control. In descriptions of fractional calculus, some special functions are inevitable. The computation and graphical display of some commonly used special functions will be introduced in this chapter.

In Section 2.1, elementary special functions such as the error functions, Gamma functions and beta functions will be introduced, and the properties of these functions are summarised. In Section 2.2, the Dawson function, hypergeometric functions and other special functions will be introduced, with their MATLAB solutions summarised. In Section 2.3, various Mittag-Leffler functions will be introduced, and computation algorithms for the Mittag-Leffler functions and their integer-order derivatives will be presented. In fact, the Mittag-Leffler functions are as important in fractional calculus as the exponential functions in classical calculus, and they deserve much attention.

In the modelling, analysis and design of fractional-order systems, some problems will be converted to typical mathematical problems, and solution techniques for these problems are needed. In particular, some linear algebra problems to be used later in the book are summarised in Section 2.4, covering the Kronecker product and sum, a simple matrix inverse algorithm, and the evaluation of arbitrary matrix functions. An introduction to numerical optimisation problems, with solution patterns in MATLAB, is given in Section 2.5, where both unconstrained and constrained optimisation problems are summarised, and the particle swarm optimisation technique and other approaches for finding globally optimal results are presented. The Laplace transform and its MATLAB solutions will be introduced in Section 2.6, where the problems can be solved with symbolic computing facilities.
For more information and complete approaches in scientific computing problems with MATLAB, we refer to [77]. This reference may significantly enhance the readers’ power in tackling complicated mathematical problems.

2.1 Elementary special functions

In a traditional calculus course, it is often found that certain functions have no analytical indefinite integral. For instance, if the integrand is f(x) = e^(−x²), there is no analytical solution to the indefinite integral ∫ f(x) dx. Therefore, mathematicians invented a special function, erf(x), to denote the indefinite integral, and regard it as an analytical solution to the integral problem. In real applications, many such special functions have been invented, and the commonly used ones include the Gamma and beta functions.

DOI 10.1515/9783110497977-002

2.1.1 Error and complementary error functions

In this and the following sections, definitions and plots of the error, Gamma, beta, Dawson, hypergeometric and Mittag-Leffler functions will be presented.

Definition 2.1. The mathematical form of the error function is

erf(z) = (2/√π) ∫_0^z e^(−t²) dt = (1/√π) ∫_{−z}^{z} e^(−t²) dt.

It immediately follows that

erf(0) = (2/√π) ∫_0^0 e^(−t²) dt = 0,   erf(∞) = (2/√π) ∫_0^∞ e^(−t²) dt = 1,   erf(−∞) = −1.

Theorem 2.1. The error function can also be expressed as the infinite series

erf(z) = (2/√π) ∑_{k=0}^∞ (−1)^k z^(2k+1) / ((2k + 1) k!),

with region of convergence |z| < ∞.

Proof. With the Taylor series expansion of the exponential function, it is easily found that

erf(z) = (2/√π) ∫_0^z ∑_{k=0}^∞ (−t²)^k / k! dt = (2/√π) ∑_{k=0}^∞ (−1)^k z^(2k+1) / ((2k + 1) k!),

where the region of convergence of the Taylor series expansion is |z| < ∞. Hence the theorem is proven.

In MATLAB, the error function can be evaluated directly with the function erf(), in the syntax y = erf(z).

Definition 2.2. The complementary error function is defined as

erfc(z) = (2/√π) ∫_z^∞ e^(−t²) dt.

It is easily found from the definitions that erfc(z) = 1 − erf(z). The complementary error function erfc(z) can be evaluated directly with the function erfc(): y = erfc(z). It should be noted that the existing MATLAB functions handle real z easily, while if z is a complex number, numerical integration should be used instead, as will be demonstrated in the examples below.
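The series in Theorem 2.1 is easy to verify numerically. The following Python sketch (standard library only; `erf_series` is a hypothetical helper name) compares its partial sums with the built-in math.erf:

```python
import math

def erf_series(z, K=60):
    """Partial sum of the series in Theorem 2.1 (K terms)."""
    return 2 / math.sqrt(math.pi) * sum(
        (-1)**k * z**(2*k + 1) / ((2*k + 1) * math.factorial(k))
        for k in range(K))

for z in (0.3, 1.0, 2.5):
    assert math.isclose(erf_series(z), math.erf(z), abs_tol=1e-12)
print("series agrees with math.erf")
```

Since the series converges for all z, the truncation level K only needs to grow with |z|; for moderate arguments a few dozen terms already reach machine precision.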


Fig. 2.1: Curves of error and complementary error functions.

Example 2.1. Draw the curves of the error and complementary error functions.

Solution. The curves of the error functions can be obtained directly with the following MATLAB statements, as shown in Figure 2.1.

>> x=-5:0.1:5; y1=erf(x); y2=erfc(x); plot(x,y1,x,y2,'--')

Example 2.2. Evaluate numerically the function erf(1 + j2).

Solution. It should be noted that the current MATLAB erf() function supports only real arguments. Therefore, a numerical integral can be used instead to obtain the value of erf(1 + j2). An anonymous function can be used to define the integrand in MATLAB; then the numerical integration function integral() can be used to evaluate the required integral. It is found with the following statements that erf(1 + j2) = 0.5366 − j5.0491.

>> f=@(x)2*exp(-x.^2)/sqrt(pi); I=integral(f,0,1+2i)

2.1.2 Gamma functions

Gamma functions are essential in fractional calculus, since all the fractional calculus definitions are established upon them.

Definition 2.3. A typical Gamma function is defined as an infinite integral of the form

Γ(z) = ∫_0^∞ e^(−t) t^(z−1) dt.

Theorem 2.2. An important property of the Gamma function is

Γ(z + 1) = zΓ(z).   (2.1)

Proof. The well-known integration by parts formula is given by

∫ u(t) dv(t) = u(t)v(t) − ∫ v(t) du(t).   (2.2)

Now select

u(t) = e^(−t),   v(t) = t^z / z.

It is easily found that du(t) = −e^(−t) dt and dv(t) = t^(z−1) dt, and therefore

Γ(z) = ∫_0^∞ u(t) dv(t) = [e^(−t) t^z / z]_0^∞ + (1/z) ∫_0^∞ e^(−t) t^z dt = (1/z) Γ(z + 1).

The property in (2.1) follows directly from the above formula.

Corollary 2.1. As a special case, for nonnegative integers z, it can be found from (2.1) that

Γ(z + 1) = zΓ(z) = z(z − 1)Γ(z − 1) = ⋯ = z!.

Therefore, the Gamma function can be regarded as an interpolation of the factorial, or as an extension of the factorial to non-integer z. If z is a negative integer, Γ(z + 1) tends to ±∞.

With MATLAB, the Gamma function can be evaluated with the command y = gamma(x), where x can be a vector; the function gamma() evaluates the Gamma function at each entry of x and returns a vector y. Also, x can be a matrix or another data type, and the returned y is of the same size as x. Note that the MATLAB function gamma() can only deal with real arguments x. For a vector x containing complex components, a new MATLAB function will be written later.

Example 2.3. Draw the Gamma function curve for x ∈ (−5, 5).

Solution. The following statements can be used to draw the curve of the Gamma function, and the result is shown in Figure 2.2. Since the values of the Gamma function tend to infinity at the points z = 0, −1, −2, …, the function ylim() is used to make the curve more informative.

>> a=-5:0.002:5; plot(a,gamma(a)), ylim([-15,15])
hold on, v=[1:4]; plot(v,gamma(v),'o')

For some specific values of z, the following values of the Gamma function can be found:

Γ(1/2) = √π,   Γ(3/2) = √π/2,   Γ(5/2) = 3√π/4,   Γ(7/2) = 15√π/8,   … .   (2.3)
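The recurrence (2.1) and the half-integer values in (2.3) can also be confirmed numerically outside MATLAB; a short Python check, with math.gamma playing the role of MATLAB's gamma():

```python
import math

# Gamma(z+1) = z*Gamma(z) from (2.1), checked at non-integer points
for z in (0.5, 1.7, 3.2):
    assert math.isclose(math.gamma(z + 1), z * math.gamma(z))

# the half-integer values listed in (2.3)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
assert math.isclose(math.gamma(1.5), math.sqrt(math.pi) / 2)
assert math.isclose(math.gamma(2.5), 3 * math.sqrt(math.pi) / 4)
assert math.isclose(math.gamma(3.5), 15 * math.sqrt(math.pi) / 8)
print("Gamma identities verified")
```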


Fig. 2.2: The Gamma function curve.

Example 2.4. Show (2.3) with MATLAB.

Solution. The formulae in (2.3) can be shown easily with the following statements.

>> syms t z; Gam=int(exp(-t)*t^(z-1),t,0,inf);
I1=subs(Gam,z,sym(1/2)), I2=subs(Gam,z,sym(3/2)),
I3=subs(Gam,z,sym(5/2)), I4=subs(Gam,z,sym(7/2))

Theorem 2.3. For non-integer z, the Gauss multiplication formula holds:

∏_{k=0}^{m−1} Γ(z + k/m) = (2π)^((m−1)/2) m^(1/2−mz) Γ(mz).   (2.4)

Corollary 2.2. It follows from Theorem 2.3 that if m = 2, then

Γ(z + 1/2) = √π Γ(2z + 1) / (2^(2z) Γ(z + 1)).   (2.5)

Theorem 2.4. The Gamma function satisfies the Legendre formula:

Γ(z)Γ(z + 1/2) = 2^(1−2z) √π Γ(2z).

Proof. It can be found from (2.5) that

Γ(z + 1/2) = √π Γ(2z + 1) / (2^(2z) Γ(z + 1)) = √π 2zΓ(2z) / (2^(2z) zΓ(z)) = 2^(1−2z) √π Γ(2z) / Γ(z).

Therefore, the Gamma function satisfies the Legendre formula.

Besides, the Gamma function satisfies the following equations, which can be shown by manual derivation [54] or, alternatively, by the symbolic computation facilities in MATLAB:

Γ(z)Γ(1 − z) = π / sin πz.   (2.6)

It is found by direct derivation that

Γ(z)Γ(−z) = −π / (z sin πz),   Γ(1 + z)Γ(1 − z) = πz / sin πz.   (2.7)
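Both the Legendre duplication formula of Theorem 2.4 and the reflection formula (2.6) can be spot-checked numerically; a Python sketch (standard library only, test points chosen away from the poles at non-positive integers):

```python
import math

for z in (0.3, 1.1, 2.6):   # avoid non-positive integer arguments
    # Legendre duplication formula (Theorem 2.4)
    lhs = math.gamma(z) * math.gamma(z + 0.5)
    rhs = 2**(1 - 2*z) * math.sqrt(math.pi) * math.gamma(2*z)
    assert math.isclose(lhs, rhs)
    # reflection formula (2.6)
    assert math.isclose(math.gamma(z) * math.gamma(1 - z),
                        math.pi / math.sin(math.pi * z))
print("duplication and reflection formulae verified")
```

Note that math.gamma accepts negative non-integer arguments, so Γ(1 − z) is well defined at all three test points.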

Theorem 2.5. The following formulae are satisfied:

Γ(1/2 + z)Γ(1/2 − z) = π / cos πz,   (2.8)

lim_{z→∞} z^a Γ(z) / Γ(z + a) = 1,   R(a) > 0,   (2.9)

where R(a) extracts the real part of a.

Example 2.5. Show (2.6)–(2.9) with MATLAB symbolic computation.

Solution. The Gamma function can be defined with the symbolic computation facilities. Then the function subs() can be used to evaluate variants of the Gamma function with the variable substitution technique. It can be seen that all the equations hold.

>> syms t z; Gam=int(exp(-t)*t^(z-1),t,0,inf);
I1=simplify(Gam*subs(Gam,z,1-z))
I21=simplify(Gam*subs(Gam,z,-z))
I22=simplify(subs(Gam,z,1+z)*subs(Gam,z,1-z))
I3=simplify(subs(Gam,z,1/2+z)*subs(Gam,z,1/2-z))
syms a positive; I4=limit(z^a*Gam/subs(Gam,z,a+z),z,inf)

Some integral problems can be evaluated with Gamma functions.

Example 2.6. Find the analytical solutions of the following integrals:

I1 = ∫_0^(π/2) sin^(2m−1) t cos^(2n−1) t dt,   I2 = ∫_0^∞ t^(x−1) cos t dt,   x > 0.

Solution. The following statements can be used to evaluate the two integrals.

>> syms m n t z; syms x positive
I1=int(sin(t)^(2*m-1)*cos(t)^(2*n-1),t,0,sym(pi)/2)
I2=simplify(int(t^(x-1)*cos(t),t,0,inf))

The results are

I1 = Γ(n)Γ(m) / (2Γ(n + m)),   I2 = 2^(x−1) √π Γ(x/2) / Γ((1 − x)/2).

Example 2.7. Evaluate the functions Γ(2 + 2j) and Γ(2 − 2j).

Solution. Since the MATLAB function gamma(z) can only evaluate the Gamma function for real z, users must write their own MATLAB functions for complex z. For instance, a numerical integral can be used to evaluate the Gamma function, with an extended MATLAB function written as follows.

function y=gamma_c(z)
if isreal(z), y=gamma(z);
else, f=@(t)exp(-t).*t.^(z-1);
   y=integral(f,0,inf,'ArrayValued',true);
end


With the above extended function, it is found that Γ(2 ± 2j) = 0.1123 ± 0.3236j. It can be seen that the Gamma function with complex arguments can be evaluated directly with such a function. In this book, however, only Gamma functions with real arguments are considered.

>> z=[2+2i, 2-2i]; I=gamma_c(z)

Alternative representations of the Gamma function also exist; the Gamma function can be evaluated through the following theorems.

Theorem 2.6. An infinite product can be used to represent the Gamma function:

Γ(z) = (1/z) e^(−γz) ∏_{n=1}^∞ (n/(n + z)) e^(z/n),

where γ ≈ 0.57721566490153286 is the Euler constant.

Theorem 2.7. The Gamma function can be represented by the following limit:

Γ(z) = lim_{n→∞} n^z n! / (z(z + 1)(z + 2) ⋯ (z + n)).

It is found that the Gamma function has poles at z = 0, −1, −2, … . Unfortunately, an infinite product is involved, which significantly increases the computational burden. Therefore, these computation approaches are not adopted in the book.

Definition 2.4. The incomplete Gamma function is defined as

Γ(z, α) = (1/Γ(α)) ∫_0^z e^(−t) t^(α−1) dt,   z ≥ 0.

The function gammainc() can be used to evaluate the incomplete Gamma function, with y = gammainc(z, α). In the degenerate case with upper bound z = 0 and α = 0, the function returns Γ(0, 0) = 1.

Example 2.8. Draw incomplete Gamma functions for different values of α.

Solution. The incomplete Gamma functions can be evaluated and plotted directly, and the curves obtained are shown in Figure 2.3.

>> x=0:0.01:3; b=0.2:0.2:2;
for a=b, plot(x,gammainc(x,a)); hold on; end

2.1.3 Beta functions

Definition 2.5. The beta function is defined as

B(z, m) = ∫_0^1 t^(z−1) (1 − t)^(m−1) dt,   R(m) > 0, R(z) > 0.


Fig. 2.3: Curves of incomplete Gamma functions.

Theorem 2.8. The relationship between the Gamma and beta functions is

B(m, z) = Γ(m)Γ(z) / Γ(m + z).

Proof. It is known from Definition 2.5 that

Γ(m)Γ(z) = ∫_0^∞ e^(−u) u^(m−1) du ∫_0^∞ e^(−v) v^(z−1) dv = ∫_0^∞ ∫_0^∞ e^(−(u+v)) u^(m−1) v^(z−1) du dv.   (2.10)

Introduce new variables u = f(w, t) = wt and v = g(w, t) = w(1 − t), where w ∈ [0, ∞) and t ∈ [0, 1]. Substituting them into (2.10), one has

Γ(m)Γ(z) = ∫_0^∞ ∫_0^1 e^(−w) (wt)^(m−1) [w(1 − t)]^(z−1) |J(w, t)| dt dw,   (2.11)

where |J(w, t)| is the absolute value of the Jacobian determinant of u = f(w, t) and v = g(w, t), with

J(w, t) = det [ ∂u/∂w  ∂u/∂t ; ∂v/∂w  ∂v/∂t ] = det [ t  w ; 1 − t  −w ] = −w.

Substituting |J(w, t)| = w into (2.11), it can be found that

Γ(m)Γ(z) = ∫_0^∞ e^(−w) w^(z+m−1) dw ∫_0^1 t^(m−1) (1 − t)^(z−1) dt = Γ(m + z)B(m, z).

Hence the theorem is proven.

Corollary 2.3. It can be seen from Theorem 2.8 that B(z, m) = B(m, z).
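Theorem 2.8 can also be confirmed numerically by comparing a direct quadrature of the integral in Definition 2.5 with the Gamma-function ratio. A Python sketch follows (`beta_quad` is a hypothetical helper using the midpoint rule; the parameters are chosen greater than 1 so that the integrand is bounded at both endpoints):

```python
import math

def beta_quad(z, m, n=100000):
    """Midpoint-rule quadrature of B(z, m) from Definition 2.5."""
    h = 1.0 / n
    return h * sum(((k + 0.5) * h)**(z - 1) * (1 - (k + 0.5) * h)**(m - 1)
                   for k in range(n))

z, m = 2.3, 3.7
exact = math.gamma(z) * math.gamma(m) / math.gamma(z + m)
assert math.isclose(beta_quad(z, m), exact, rel_tol=1e-6)
print("B(z, m) matches the Gamma-function ratio")
```

For 0 < z < 1 or 0 < m < 1 the integrand becomes singular at an endpoint, and a naive uniform rule is no longer appropriate; that case is exactly why library routines such as MATLAB's beta() are preferable.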


From the above property, it can be found that the integral I1 in Example 2.6 can be further simplified as B(m, n)/2.

The beta function can be evaluated directly in MATLAB with the function y = beta(z, m), where z is a given vector. Unfortunately, only real nonnegative arguments z and m are supported.

Definition 2.6. For any m and z, the extended beta function is defined as

B(m, z) = Γ(m)Γ(z) / Γ(m + z).

For problems with negative arguments, direct evaluation can be made from Definition 2.6: y = gamma(z).*gamma(m)./gamma(z+m). Again, only real arguments m and z can be handled. If complex arguments are expected, the numerical integral technique should be adopted.

function y=beta_c(z,m)
if isreal(z), y=beta(z,m);
else, f=@(t)(1-t).^(m-1).*t.^(z-1);
   y=integral(f,0,1,'ArrayValued',true);
end

Example 2.9. Draw the curves of beta functions for different values of m.

Solution. For m = 1 and m = 3, the curves of the extended beta function can be drawn as shown in Figure 2.4.

>> x=-3:0.01:3;
m=1; y1=gamma(x).*gamma(m)./gamma(x+m);
m=3; y2=gamma(x).*gamma(m)./gamma(x+m);
plot(x,y1,x,y2,'--'); ylim([-20 20])


Fig. 2.4: The curves of extended beta functions for m = 1 and m = 3.


Fig. 2.5: Contour plots of the beta function.

Example 2.10. Draw contour plots of the beta function.

Solution. After generating a set of mesh grid data, the beta function can be evaluated, and contour plots can be drawn with the function contour(). The contour plots of the beta function obtained in this way are shown in Figure 2.5.

>> [x,y]=meshgrid(0:0.05:3); z=beta(x,y); contour(x,y,z,100)

Definition 2.7. The multivariate beta function is defined as

B(α1, α2, …, αn) = Γ(α1)Γ(α2) ⋯ Γ(αn) / Γ(α1 + α2 + ⋯ + αn).

Definition 2.8. Similar to the incomplete Gamma function, the regularised incomplete beta function can be defined as follows: for 0 ≤ x ≤ 1,

Bx(z, m) = (1/B(z, m)) ∫_0^x t^(z−1) (1 − t)^(m−1) dt,   R(m) > 0, R(z) > 0.

The regularised incomplete beta function can be evaluated in MATLAB directly with the syntax y = betainc(x, z, m).

2.2 Dawson function and hypergeometric functions

2.2.1 Dawson function

The Dawson function is also useful in the field of fractional calculus [1].

Definition 2.9. The Dawson function is defined as

daw(z) = e^(−z²) ∫_0^z e^(τ²) dτ.   (2.12)


The Dawson function is an odd function, i.e. daw(z) = −daw(−z), and it can also be written as

daw(z) = −(j√π/2) e^(−z²) erf(jz).

Theorem 2.9. The first-order derivative of the Dawson function satisfies

d daw(z)/dz = 1 − 2z daw(z).

Proof. From the definition in (2.12), it is immediately found that

d daw(z)/dz = −2z e^(−z²) ∫_0^z e^(τ²) dτ + e^(−z²) e^(z²) = 1 − 2z daw(z).

Hence the theorem is proven.

The Dawson function can also be evaluated through the infinite series (see [35])

daw(z) = z − (2/3)z³ + (4/15)z⁵ − ⋯ = z ∑_{k=0}^∞ (−2z²)^k / (2k + 1)!!,

where (2k + 1)!! = (2k + 1)(2k − 1)(2k − 3) ⋯ 3 · 1.

A low-level function is provided in MuPAD, accessible through the Symbolic Math Toolbox, to compute the Dawson function, with the following statement: y = feval(symengine, 'dawson', x). The returned y is a symbolic expression. If numerical values are expected, a variable substitution is needed. This will be demonstrated through some examples.

Example 2.11. Draw the curves of the Dawson functions daw(x) and daw(√x).

Solution. The expected Dawson functions can be drawn with the following statements, as shown in Figure 2.6.

>> syms x; Y=feval(symengine,'dawson',x);
z=0:0.1:5; y=subs(Y,x,z); y1=subs(Y,x,sqrt(z)); plot(z,y,z,y1)

Example 2.12. It is known that the derivative of the Dawson function f(x) satisfies f'(x) = 1 − 2xf(x). Derive the formulae for its high-order derivatives.

Solution. The symbolic function f(x) can be declared in MATLAB first; then variable substitutions can be made to derive the derivatives of the Dawson function with the following statements.

>> syms x f(x); f1=1-2*x*f; f2=f1;
for i=1:5, f2=diff(f2,x); f2=expand(subs(f2,diff(f,x),f1)), end
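For readers without the Symbolic Math Toolbox, the series above and the derivative property of Theorem 2.9 can be cross-checked in a few lines of Python (standard library only; `dawson_series` is a hypothetical helper, valid for moderate |z| where the series converges quickly):

```python
import math

def dawson_series(z, K=40):
    """Partial sum of daw(z) = z * sum_k (-2z^2)^k / (2k+1)!!."""
    term, acc = z, z                       # k = 0 term
    for k in range(1, K):
        term *= -2 * z * z / (2 * k + 1)   # builds (-2z^2)^k / (2k+1)!!
        acc += term
    return acc

# Theorem 2.9, d daw/dz = 1 - 2 z daw(z), checked by a central difference
z, h = 0.7, 1e-5
num = (dawson_series(z + h) - dawson_series(z - h)) / (2 * h)
assert math.isclose(num, 1 - 2 * z * dawson_series(z), rel_tol=1e-6)
print("Dawson derivative property verified")
```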


Fig. 2.6: Curve of the Dawson function.

The high-order derivatives of the Dawson function can be found as

daw''(x) = −2x + (−2 + 4x²) daw(x),
daw'''(x) = −4 + 4x² + (12x − 8x³) daw(x),
daw^(4)(x) = 20x − 8x³ + (12 − 48x² + 16x⁴) daw(x),
daw^(5)(x) = 32 − 72x² + 16x⁴ + (−120x + 160x³ − 32x⁵) daw(x),
daw^(6)(x) = −264x + 224x³ − 32x⁵ + (−120 + 720x² − 480x⁴ + 64x⁶) daw(x),

as desired.

2.2.2 Hypergeometric functions

In some applications of fractional calculus, hypergeometric functions are also involved. Their definitions and properties will be introduced in this subsection.

Definition 2.10. The general form of a hypergeometric function is

pFq(a1, …, ap; b1, …, bq; z) = [Γ(b1) ⋯ Γ(bq) / (Γ(a1) ⋯ Γ(ap))] ∑_{k=0}^∞ [Γ(a1 + k) ⋯ Γ(ap + k) / (Γ(b1 + k) ⋯ Γ(bq + k))] z^k / k!,

where the bi are not non-positive integers. If p ≤ q, the series converges for all z. If p = q + 1, the series converges for |z| < 1, while if p > q + 1, the series diverges for all z.

Definition 2.11. Alternatively, a hypergeometric function is also defined as

pFq(a1, …, ap; b1, …, bq; z) = ∑_{k=0}^∞ [(a1)_k ⋯ (ap)_k / ((b1)_k ⋯ (bq)_k)] z^k / k!,

where (γ)_k are the Pochhammer symbols defined below.

2.2 Dawson function and hypergeometric functions | 21

Definition 2.12. A Pochhammer symbol is defined as

(γ)_k = γ(γ + 1)(γ + 2) ⋅ ⋅ ⋅ (γ + k − 1) = Γ(k + γ)/Γ(γ).

The Pochhammer symbol is also known as the rising factorial. It can be seen from the definitions that, if γ = 1, then

(1)_k = Γ(k + 1)/Γ(1) = k!.   (2.13)
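Definition 2.11 translates directly into code. The following Python sketch (function names are ours, not the book's) evaluates the Pochhammer symbol and a truncated pFq series:

```python
import math

def poch(g, k):
    """Pochhammer symbol (g)_k = g(g+1)...(g+k-1), with (g)_0 = 1."""
    out = 1.0
    for i in range(k):
        out *= g + i
    return out

def hyp_pfq(a, b, z, kmax=80):
    """Truncated series for pFq(a1..ap; b1..bq; z) per Definition 2.11."""
    s = 0.0
    for k in range(kmax):
        c = 1.0
        for ai in a:
            c *= poch(ai, k)
        for bi in b:
            c /= poch(bi, k)
        s += c * z ** k / math.factorial(k)
    return s
```

With p = q = 0 the series collapses to e^z, and ₁F₁(1; 2; z) gives (e^z − 1)/z, two handy sanity checks.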

The following two types of hypergeometric functions are particularly useful in fractional calculus: the Kummer hypergeometric function and the Gaussian hypergeometric function.

Definition 2.13. If p = q = 1, the Kummer hypergeometric function is defined as

₁F₁(a; b; z) = (Γ(b)/Γ(a)) ∑_{k=0}^∞ (Γ(a + k)/Γ(b + k)) z^k/k!.

The Kummer hypergeometric function is also known as the confluent hypergeometric function, usually denoted as M(a, b, z). If b > a, the function can also be described as

₁F₁(a; b; z) = Γ(b)/(Γ(a)Γ(b − a)) ∫₀¹ t^{a−1} (1 − t)^{b−a−1} e^{zt} dt.

Theorem 2.10. The derivative of ₁F₁(a; c; z) satisfies

d/dz ₁F₁(a; c; z) = (a/c) ₁F₁(a + 1; c + 1; z).

Proof. The theorem can easily be proven with

d/dz ₁F₁(a; c; z) = (Γ(c)/Γ(a)) ∑_{k=1}^∞ (Γ(a + k)/Γ(c + k)) k z^{k−1}/k!
  = (Γ(c)/Γ(a)) ∑_{k=1}^∞ (Γ(a + k)/Γ(c + k)) z^{k−1}/(k − 1)!
  = (Γ(c)/Γ(a)) ∑_{k′=0}^∞ (Γ(a + k′ + 1)/Γ(c + k′ + 1)) z^{k′}/k′!
  = (a/c) (Γ(c + 1)/Γ(a + 1)) ∑_{k=0}^∞ (Γ(a + k + 1)/Γ(c + k + 1)) z^k/k! = (a/c) ₁F₁(a + 1; c + 1; z).

Hence the theorem is proven.

Definition 2.14. The Gaussian hypergeometric function, for p = 2, q = 1, is defined as

₂F₁(a, b; c; z) = Γ(c)/(Γ(a)Γ(b)) ∑_{k=0}^∞ (Γ(a + k)Γ(b + k)/Γ(c + k)) z^k/k!.

If c is not a non-positive integer, the function is well defined, and by the convergence conditions above the series is convergent for |z| < 1.

Theorem 2.11. The relationship of the two functions satisfies

₁F₁(a; c; z) = lim_{b→∞} ₂F₁(a, b; c; z/b).

Proof. Consider the right-hand side first:

₂F₁(a, b; c; z/b) = Γ(c)/(Γ(a)Γ(b)) ∑_{k=0}^∞ (Γ(a + k)Γ(b + k)/(Γ(c + k)k!)) (z/b)^k
  = (Γ(c)/Γ(a)) ∑_{k=0}^∞ (Γ(a + k)/(k!Γ(c + k))) ((b + k − 1)(b + k − 2) ⋅ ⋅ ⋅ b/b^k) z^k,

where the product in the numerator contains k terms. When taking the limit as b → ∞, the k terms in the numerator cancel b^k in the denominator, and the theorem can then be proven.

The MATLAB function hypergeom() is provided in the new Symbolic Math Toolbox, and the hypergeometric function pFq(a₁, . . . , a_p; b₁, . . . , b_q; z) can be evaluated with the syntax y = hypergeom([a₁, . . . , a_p], [b₁, . . . , b_q], z). The hypergeometric function ₁F₁(a; b; z) can be evaluated with y = hypergeom(a, b, z), while ₂F₁(a, b; c; z) can be evaluated with y = hypergeom([a, b], c, z).

Example 2.13. Draw the curve of the following Gaussian hypergeometric function: ₂F₁(1.5, −1.5; 1/2; (1 − cos x)/2).

Solution. It is known that the function ₂F₁(1.5, −1.5; 1/2; (1 − cos x)/2) is cos 1.5x. With the function hypergeom(), the curve of the hypergeometric function can be evaluated as shown in Figure 2.7. It is identical to cos 1.5x.

>> syms x, ezplot(cos(1.5*x),[-pi,pi])
   y=hypergeom([1.5,-1.5],0.5,0.5*(1-cos(x)));
   hold on, ezplot(y,[-pi,pi]), hold off
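The identity used in Example 2.13 can also be spot-checked without the Symbolic Math Toolbox. A small Python sketch of the ₂F₁ series (valid for |z| < 1; names are ours):

```python
import math

def gauss_2f1(a, b, c, z, kmax=200):
    """Truncated Gaussian hypergeometric series 2F1(a, b; c; z), |z| < 1."""
    s, term = 0.0, 1.0
    for k in range(kmax):
        s += term
        term *= (a + k) * (b + k) / (c + k) * z / (k + 1)
    return s

# 2F1(1.5, -1.5; 1/2; (1 - cos x)/2) should equal cos(1.5 x)
x = 0.7
lhs = gauss_2f1(1.5, -1.5, 0.5, (1.0 - math.cos(x)) / 2.0)
rhs = math.cos(1.5 * x)
```

Here (1 − cos x)/2 = sin²(x/2) stays below 1, so the series converges rapidly.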

Fig. 2.7: Numerical solution of Example 2.13.


Theorem 2.12 ([1]). Complementary error functions can be converted into hypergeometric function problems:

iⁿerfc(z) = e^{−z²} [ ₁F₁((n + 1)/2; 1/2; z²)/(2ⁿ Γ(1 + n/2)) − z ₁F₁(n/2 + 1; 3/2; z²)/(2^{n−1} Γ((n + 1)/2)) ].

Theorem 2.13. The incomplete Gamma function can also be evaluated with hypergeometric functions:

Γ(x, a) = (1/a) x^a e^{−x} ₁F₁(1; 1 + a; x) = (1/a) x^a ₁F₁(a; 1 + a; −x).

2.3 Mittag-Leffler functions

Mittag-Leffler functions are direct extensions of the exponential function. The simplest form of a Mittag-Leffler function was proposed by the Swedish mathematician Magnus Gustaf (Gösta) Mittag-Leffler (1846–1927) in 1903 (see [54]). The function is also known as the Mittag-Leffler function with one parameter. Later, Mittag-Leffler functions with two or more parameters were proposed. The importance of Mittag-Leffler functions in fractional calculus is similar to that of the exponential function in classical calculus. In this section, the definitions and computations of various Mittag-Leffler functions are presented.

2.3.1 Mittag-Leffler functions with one parameter

It is well known that the Taylor series expansion of an exponential function e^z can be written as

e^z = ∑_{k=0}^∞ z^k/k! = ∑_{k=0}^∞ z^k/Γ(k + 1),

and it is convergent for all z ∈ C, where C is the set of complex numbers.

Definition 2.15. The simplest form of a Mittag-Leffler function is the Mittag-Leffler function with one parameter, defined as

Eα(z) = ∑_{k=0}^∞ z^k/Γ(αk + 1),   (2.14)

where α ∈ C, and the convergence condition for the infinite series is R(α) > 0, for all z ∈ C.

It is obvious that the exponential function e^z is a special case of a Mittag-Leffler function, since E₁(z) = e^z. Besides, it can easily be found that

E₂(z) = ∑_{k=0}^∞ z^k/Γ(2k + 1) = ∑_{k=0}^∞ (√z)^{2k}/(2k)! = cosh √z   (2.15)

and

E_{1/2}(z) = ∑_{k=0}^∞ z^k/Γ(k/2 + 1) = e^{z²}(1 + erf(z)) = e^{z²} erfc(−z).   (2.16)
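Under the stated convergence condition, (2.14) can be evaluated by straightforward truncation for modest |z|. A Python sketch (ours, mirroring what the symbolic evaluation below does in MATLAB):

```python
import math

def ml1(alpha, z, tol=1e-15, kmax=120):
    """One-parameter Mittag-Leffler function by truncating series (2.14)."""
    s = 0.0
    for k in range(kmax):
        term = z ** k / math.gamma(alpha * k + 1)
        s += term
        if abs(term) < tol:
            break
    return s

# special cases: E_1(z) = e^z and E_2(z) = cosh(sqrt(z))
e1 = ml1(1.0, 1.0)
e2 = ml1(2.0, 4.0)
```

The two variables recover e¹ and cosh 2, in agreement with (2.15).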

It can be seen from (2.14) that the Mittag-Leffler functions can be evaluated directly with infinite series, and the function symsum() in the Symbolic Math Toolbox is suitable for this task. Therefore, the following MATLAB function can be written to compute analytically the Mittag-Leffler functions. Such a function can also be used to compute the Mittag-Leffler functions with two parameters to be presented later. The Mittag-Leffler function with one parameter α can be evaluated directly with f = mittag_leffler(α,z).

function f=mittag_leffler(v,z)
v=[v,1]; a=v(1); b=v(2);
syms k; f=symsum(z^k/gamma(a*k+b),k,0,inf);

Example 2.14. Derive some formulae with symbolic computations.

Solution. Some particular Mittag-Leffler functions can be derived directly with the symbolic function. It should be noted that the old versions of MATLAB, i.e. the versions before R2008a, where the Symbolic Math Toolbox was based on the Maple symbolic engine, are recommended. For instance, when α is set to be α = 1/3, 3, 4, 5, respectively, the following MATLAB statements can be used.

>> syms z; Ia=mittag_leffler(1,z)
   Ib=mittag_leffler(2,z), Ic=mittag_leffler(1/2,z)
   I1=mittag_leffler(1/sym(3),z), I2=mittag_leffler(3,z)
   I3=mittag_leffler(4,z), I4=mittag_leffler(5,z)

Then the following results can be obtained:

E_{1/3}(z) = −e^{z³}(−6πΓ(2/3) + √3 Γ²(2/3)Γ(1/3, z³) + 2πΓ(2/3, z³))/(2πΓ(2/3)),
E₃(z) = (1/3) e^{∛z} + (2/3) e^{−∛z/2} cos(√3 ∛z/2),
E₄(z) = (1/4) e^{⁴√z} + (1/4) e^{−⁴√z} + (1/2) cos(⁴√z),
E₅(z) = (1/5) e^{⁵√z} + (2/5) e^{cos(2π/5) ⁵√z} cos(sin(2π/5) ⁵√z) + (2/5) e^{−cos(π/5) ⁵√z} cos(sin(π/5) ⁵√z).

With the new versions of MATLAB, I1 cannot be evaluated, while the others are expressed as hypergeometric functions, with

I2 = ₀F₂(; 1/3, 2/3; z/27),  I3 = ₀F₃(; 1/4, 1/2, 3/4; z/256),  I4 = ₀F₄(; 1/5, 2/5, 3/5, 4/5; z/3125).


It can be seen from (2.14) that numerical solutions to the Mittag-Leffler functions can also be found by using an accumulative method, and a MATLAB function can be written as follows. The syntax of the function is y = ml_func(α, z). It should be noted that the Mittag-Leffler functions with more parameters, and even their integer-order derivatives, can also be obtained by this function.

function f=ml_func(aa,z,n,eps0)
aa=[aa,1,1,1]; a=aa(1); b=aa(2); c=aa(3); q=aa(4);
f=0; k=0; fa=1; if nargin ...

Example 2.15. Draw the curves of the Mittag-Leffler functions Eα(z) for α = 1, 1.5, 2, . . . , 5, and compare them with the exponential function e^z.

Solution. The curves can be obtained with the following statements, as shown in Figure 2.8.

>> a=[1:0.5:5]; t=-20:0.1:10; Y=[];
   for a=[1:0.5:5], Y=[Y; ml_func(a,t)]; end
   plot(t,Y), line(t,exp(t)); ylim([-1 5])

Fig. 2.8: Curves of Mittag-Leffler functions with one parameter.

2.3.2 Mittag-Leffler functions with two parameters

Consider again the Mittag-Leffler function with one parameter, defined in (2.14). If the value 1 in the Gamma function is substituted by another free variable β, the Mittag-Leffler function with two parameters can be introduced.

Definition 2.16. The Mittag-Leffler function with two parameters is defined as

Eα,β(z) = ∑_{k=0}^∞ z^k/Γ(αk + β),   (2.17)

where α, β ∈ C, and the convergence conditions for the infinite series are R(α) > 0, R(β) > 0, for any z ∈ C.

If β = 1, the Mittag-Leffler function with two parameters coincides with the one with one parameter, i.e. Eα,1(z) = Eα(z). Therefore, the Mittag-Leffler function with one parameter can be regarded as a special case of the one with two parameters. Other special instances of the Mittag-Leffler functions can also be derived.

Theorem 2.14. Some commonly used Mittag-Leffler functions are summarised:

E_{1,2}(z) = ∑_{k=0}^∞ z^k/Γ(k + 2) = (1/z) ∑_{k=0}^∞ z^{k+1}/(k + 1)! = (e^z − 1)/z,   (2.18)
E_{1,3}(z) = ∑_{k=0}^∞ z^k/Γ(k + 3) = (1/z²) ∑_{k=0}^∞ z^{k+2}/(k + 2)! = (e^z − 1 − z)/z².   (2.19)

More generally (see [54]),

E_{1,m}(z) = ∑_{k=0}^∞ z^k/Γ(k + m) = (1/z^{m−1}) ∑_{k=0}^∞ z^{k+m−1}/(k + m − 1)! = (1/z^{m−1}) (e^z − ∑_{k=0}^{m−2} z^k/k!).


Besides, it can be found that

E_{2,2}(z) = ∑_{k=0}^∞ z^k/Γ(2k + 2) = (1/√z) ∑_{k=0}^∞ (√z)^{2k+1}/(2k + 1)! = sinh √z/√z,   (2.20)
E_{2,1}(z²) = ∑_{k=0}^∞ z^{2k}/Γ(2k + 1) = ∑_{k=0}^∞ z^{2k}/(2k)! = cosh z,   (2.21)
E_{2,2}(z²) = ∑_{k=0}^∞ z^{2k}/Γ(2k + 2) = (1/z) ∑_{k=0}^∞ z^{2k+1}/(2k + 1)! = sinh z/z.   (2.22)
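The closed forms (2.18)–(2.22) provide convenient test values for any two-parameter implementation; a hedged Python sketch (names are ours):

```python
import math

def ml2(alpha, beta, z, tol=1e-15, kmax=120):
    """Two-parameter Mittag-Leffler function by truncating series (2.17)."""
    s = 0.0
    for k in range(kmax):
        term = z ** k / math.gamma(alpha * k + beta)
        s += term
        if abs(term) < tol:
            break
    return s

z = 0.5
chk_12 = ml2(1.0, 2.0, z) - (math.exp(z) - 1.0) / z            # (2.18)
chk_13 = ml2(1.0, 3.0, z) - (math.exp(z) - 1.0 - z) / z ** 2   # (2.19)
chk_22 = ml2(2.0, 2.0, 4.0) - math.sinh(2.0) / 2.0             # (2.20)
```

All three residuals vanish to machine precision.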

Since Mittag-Leffler functions with two parameters are defined as infinite series, the MATLAB function symsum() can again be used to evaluate the infinite series for different combinations of (α, β) pairs. The function mittag_leffler() discussed earlier can be used, with the syntax f = mittag_leffler([α, β], z). The functions will be demonstrated in the next example.

Example 2.16. Some special Mittag-Leffler functions with one or two parameters can be derived with MATLAB. Find the functions E_{4,1}(z), E_{4,5}(z), E_{5,6}(z), E_{1/2,4}(z).

Solution. By definition, the Mittag-Leffler functions can be derived directly with the following statements. The special cases in (2.18), (2.19), (2.20), (2.21) and (2.22) can also be derived symbolically.

>> syms z, I1=mittag_leffler([1,2],z),
   I2=mittag_leffler([2,2],z), I3=mittag_leffler([1,3],z)
   I4=mittag_leffler([4,1],z), I5=mittag_leffler([4,5],z)
   I6=mittag_leffler([5,6],z), I7=mittag_leffler([1/sym(2),4],z)
   t=-3:0.1:2; t=t(t~=0); I71=subs(I7,z,t); y=ml_func([1/2,4],t);
   plot(t,y,t,I71,'--'), ylim([0,1.2])
   I8=mittag_leffler([2,1],z^2), I9=mittag_leffler([2,2],z^2)

The obtained results are

E_{4,1}(z) = (1/4) e^{⁴√z} + (1/4) e^{−⁴√z} + (1/2) cos ⁴√z = E₄(z),
E_{4,5}(z) = −1/z + (1/(4z)) (e^{⁴√z} + e^{−⁴√z} + e^{j⁴√z} + e^{−j⁴√z}),
E_{5,6}(z) = −1/z + (1/(5z)) [e^{⁵√z} + e^{(−1)^{2/5} ⁵√z} + e^{(−1)^{4/5} ⁵√z} + e^{−(−1)^{1/5} ⁵√z} + e^{−(−1)^{3/5} ⁵√z}],
E_{1/2,4}(z) = e^{z²}/z⁶ − 1/z⁶ − 1/z⁴ − 1/(2z²) + e^{z²} erf(z)/z⁶ − 2/(√π z⁵) − 4/(3√π z³) − 8/(15√π z).

Note that some of the above results cannot be obtained with recent MATLAB versions. Instead, hypergeometric functions may be returned. For instance, I4, I5, I6 can be represented as I4 = ₀F₃(; 1/4, 1/2, 3/4; z/256), I5 = I4/z − 1/z, I6 = (1/z) ₀F₄(; 1/5, 2/5, 3/5, 4/5; z/3125) − 1/z, and the plot can be obtained with ezplot().

Fig. 2.9: Analytical and numerical solutions of E_{1/2,4}(z).

The Mittag-Leffler functions with two parameters can alternatively be evaluated with the numerical function ml_func(), with the syntax y = ml_func([α, β], z). It can be seen that the curves of E_{1/2,4}(z) with analytical and numerical methods can be obtained as shown in Figure 2.9, and they are exactly the same.

Example 2.17. Draw the curves of the functions E_{1,2}(x) and E_{3/2,1/2}(x).

Solution. The curves of the two functions can be obtained directly with the following statements, as shown in Figure 2.10.

>> x=-2:0.1:2; y1=ml_func([1,2],x);
   y2=ml_func([3/2,1/2],x); plot(x,y1,x,y2,'--')

Fig. 2.10: Curves of Mittag-Leffler functions with two parameters.

Theorem 2.15. Some of the commonly used properties of the Mittag-Leffler functions with two parameters are

Eα,β(z) + Eα,β(−z) = 2E_{2α,β}(z²)


and

Eα,β(z) − Eα,β(−z) = 2zE_{2α,α+β}(z²).

Proof. The two equations can be shown directly with

Eα,β(z) + Eα,β(−z) = ∑_{k=0}^∞ (z^k + (−z)^k)/Γ(αk + β) = ∑_{k′=0}^∞ 2z^{2k′}/Γ(2αk′ + β)
  = 2 ∑_{k′=0}^∞ (z²)^{k′}/Γ(2αk′ + β) = 2E_{2α,β}(z²),

Eα,β(z) − Eα,β(−z) = ∑_{k=0}^∞ (z^k − (−z)^k)/Γ(αk + β) = ∑_{k′=0}^∞ 2z^{2k′+1}/Γ((2k′ + 1)α + β)
  = 2z ∑_{k′=0}^∞ (z²)^{k′}/Γ(2k′α + α + β) = 2zE_{2α,α+β}(z²).

Example 2.18. Draw the surface of the real part of the function E_{0.8,0.9}(z).

Solution. For complex variable z, the function E_{0.8,0.9}(z) is a complex function. The mesh grids in the x–y plane can be generated first, the real and imaginary parts of the complex function can be extracted, and the surfaces can be drawn directly with the following statements. Since some of the values are too large, i.e. exceed the interval of ±1873, details of the plots can be obtained as shown in Figure 2.11. It should be mentioned that the range of the z axis is constrained to the interval ±3 (see [62]). The surface of the imaginary part can be processed in the same way.

>> [x y]=meshgrid(-6:0.2:6); z=x+sqrt(-1)*y;
   L=ml_func([0.8,0.9],z); L1=real(L); L2=imag(L);
   ii=find(L1>3); L1(ii)=3; ii=find(L1<-3); L1(ii)=-3; surf(x,y,L1)

2.3.3 Mittag-Leffler functions with more parameters

Definition 2.17. The Mittag-Leffler function with m parameters is defined as

E_{(α₁,...,α_m),β}(z₁, . . . , z_m) = ∑_{k=0}^∞ ∑_{l₁+⋅⋅⋅+l_m=k, l₁≥0,...,l_m≥0} (k; l₁, . . . , l_m) ∏_{i=1}^m z_i^{l_i} / Γ(β + ∑_{i=1}^m α_i l_i),

where

(k; l₁, . . . , l_m) = k!/(l₁! ⋅ ⋅ ⋅ l_m!).

Definition 2.18 ([63]). The Mittag-Leffler functions with three and four parameters are defined respectively as

E^γ_{α,β}(z) = ∑_{k=0}^∞ (γ)_k z^k/(Γ(αk + β) k!)   (2.23)

and

E^{γ,q}_{α,β}(z) = ∑_{k=0}^∞ (γ)_{kq} z^k/(Γ(αk + β) k!),   (2.24)

where α, β, γ ∈ C, and for any z ∈ C, the convergence conditions are R(α) > 0, R(β) > 0, R(γ) > 0, and q ∈ N, where N is the set of positive integers and (γ)_k is the Pochhammer symbol.

Theorem 2.16. The Mittag-Leffler functions with two parameters are a special case of the ones with three parameters, while the Mittag-Leffler functions with three parameters are a special case of the ones with four parameters; specifically,

E¹_{α,β}(z) = Eα,β(z),  E^{γ,1}_{α,β}(z) = E^γ_{α,β}(z).

Proof. Substituting (2.13) into (2.23), one has

E¹_{α,β}(z) = ∑_{k=0}^∞ (1)_k z^k/(Γ(αk + β) k!) = ∑_{k=0}^∞ z^k/Γ(αk + β) = Eα,β(z).

Besides, it can immediately be seen from (2.24) that, if q = 1, one has E^{γ,1}_{α,β}(z) = E^γ_{α,β}(z). Hence the theorem is proven.

2.3.4 Derivatives of Mittag-Leffler functions

There are various important properties of the integer-order derivatives and integrals of Mittag-Leffler functions. The derivatives of the Mittag-Leffler functions with two parameters are presented first, followed by the derivatives of the Mittag-Leffler functions with four parameters.


Theorem 2.17. The integer-order derivatives of the Mittag-Leffler function Eα,β(z) can be evaluated from

dⁿ/dzⁿ Eα,β(z) = ∑_{k=0}^∞ (k + n)!/(k! Γ(αk + αn + β)) z^k.   (2.25)

Taking the first-order derivative of the Mittag-Leffler function Eα,β(z) with respect to z, it is found that

d/dz Eα,β(z) = ∑_{k=1}^∞ k z^{k−1}/Γ(αk + β),

where the term at k = 0 is removed, since its value is zero. Denote k′ = k − 1; the above formula can be rewritten as

d/dz Eα,β(z) = ∑_{k′=0}^∞ (k′ + 1) z^{k′}/Γ(αk′ + α + β) = ∑_{k=0}^∞ (k + 1) z^k/Γ(αk + α + β).   (2.26)

Taking again the first-order derivative of the function, one has

d²/dz² Eα,β(z) = ∑_{k=0}^∞ (k + 1)(k + 2) z^k/Γ(αk + 2α + β).

Therefore, it can be found that the nth-order derivative can be expressed as

dⁿ/dzⁿ Eα,β(z) = ∑_{k=0}^∞ ((k + n)(k + n − 1) ⋅ ⋅ ⋅ (k + 1)/Γ(αk + αn + β)) z^k = ∑_{k=0}^∞ ((k + n)!/(k! Γ(αk + αn + β))) z^k.   (2.27)

Proof. A formal proof of the theorem can be carried out with the mathematical induction method. For n = 1, it has been shown in (2.26) that the theorem holds; trivially, for n = 0 the theorem also holds. Now let us assume that for n = m the theorem holds, i.e.

d^m/dz^m Eα,β(z) = ∑_{k=0}^∞ (k + m)!/(k! Γ(αk + αm + β)) z^k.

Taking the first-order derivative of the above formula with respect to z, the k = 0 term is removed, and

d^{m+1}/dz^{m+1} Eα,β(z) = ∑_{k=1}^∞ ((k + m)!/(k! Γ(αk + αm + β))) k z^{k−1}.

Denoting k = k′ + 1, the above equation can be rewritten as

d^{m+1}/dz^{m+1} Eα,β(z) = ∑_{k′=0}^∞ ((k′ + 1 + m)!/((k′ + 1)! Γ(α(k′ + 1) + αm + β))) (k′ + 1) z^{k′}.

Cancelling the k′ + 1 terms in the numerator and denominator, regrouping k′ + 1 + m as k′ + (m + 1) and α(k′ + 1) + αm as αk′ + α(m + 1), and then substituting k back for k′, it can be seen that

d^{m+1}/dz^{m+1} Eα,β(z) = ∑_{k=0}^∞ (k + (m + 1))!/(k! Γ(αk + α(m + 1) + β)) z^k.

It means that for n = m + 1, the theorem also holds. Therefore, through the mathematical induction method, the theorem is proven.

Theorem 2.18. The following equation holds for the Mittag-Leffler functions:

Eα,β(z) = βEα,β+1(z) + αz d/dz Eα,β+1(z).

Proof. The formula can immediately be shown by rearranging the expressions on the right-hand side as follows:

βEα,β+1(z) + αz d/dz Eα,β+1(z) = β ∑_{k=0}^∞ z^k/Γ(αk + β + 1) + αz ∑_{k=0}^∞ k z^{k−1}/Γ(αk + β + 1)
  = ∑_{k=0}^∞ (β + kα) z^k/Γ(αk + β + 1) = ∑_{k=0}^∞ z^k/Γ(αk + β) = Eα,β(z).

Hence the theorem is proven.

Theorem 2.19. The nth-order derivatives of the Mittag-Leffler functions E^{γ,q}_{α,β}(z) can be evaluated from (see [63])

dⁿ/dzⁿ E^{γ,q}_{α,β}(z) = (γ)_{qn} E^{γ+qn,q}_{α,β+nα}(z).

Proof. The first-order derivative of the Mittag-Leffler function E^{γ,q}_{α,β}(z) is

d/dz E^{γ,q}_{α,β}(z) = ∑_{k=1}^∞ ((γ)_{kq}/Γ(αk + β)) k z^{k−1}/k! = ∑_{k=1}^∞ ((γ)_{kq}/Γ(αk + β)) z^{k−1}/(k − 1)! = ∑_{k=0}^∞ ((γ)_{(k+1)q}/Γ(αk + α + β)) z^k/k!.   (2.28)

It can easily be found that for any integer q,

(γ)_{(k+1)q} = Γ(kq + q + γ)/Γ(γ) = (Γ(kq + γ + q)/Γ(γ + q)) (Γ(γ + q)/Γ(γ)) = (γ + q)_{kq} (γ)_q,

where (γ)_q is independent of k. Substituting it into (2.28), it is found that

d/dz E^{γ,q}_{α,β}(z) = (γ)_q ∑_{k=0}^∞ (γ + q)_{kq} z^k/(Γ(αk + α + β) k!) = (γ)_q E^{γ+q,q}_{α,β+α}(z).   (2.29)

It can be seen that the theorem is correct for n = 1. The mathematical induction method can be used again to show the theorem. Assume that

d^k/dz^k E^{γ,q}_{α,β}(z) = (γ)_{qk} E^{γ+kq,q}_{α,β+kα}(z).

Taking the first-order derivative of the above formula, and using the result in (2.29), it can be seen that

d^{k+1}/dz^{k+1} E^{γ,q}_{α,β}(z) = (γ)_{qk} d/dz E^{γ+kq,q}_{α,β+kα}(z) = (γ)_{q(k+1)} E^{γ+(k+1)q,q}_{α,β+(k+1)α}(z).


It can be seen that the case n = k + 1 also holds, and it follows from the mathematical induction method that the theorem is correct.

Corollary 2.4. For the Mittag-Leffler functions with two parameters, the nth-order derivative satisfies

dⁿ/dzⁿ Eα,β(z) = n! E^{n+1}_{α,β+nα}(z).

Proof. Recall Theorem 2.19. Since Eα,β(z) = E^{1,1}_{α,β}(z), where γ = 1, q = 1, it can be found easily that

dⁿ/dzⁿ Eα,β(z) = (1)_n E^{n+1,1}_{α,β+nα}(z) = n! E^{n+1}_{α,β+nα}(z).

Hence the corollary is proven.
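Corollary 2.4 links the derivative series (2.25) to the three-parameter function (2.23) and is easy to verify numerically. A Python sketch with our own helper names (both sides truncated at the same number of terms, so they agree termwise):

```python
import math

def poch(g, k):
    """Pochhammer symbol (g)_k."""
    out = 1.0
    for i in range(k):
        out *= g + i
    return out

def ml2_deriv(alpha, beta, z, n, kmax=60):
    """n-th derivative of E_{alpha,beta}(z) via the series (2.25)."""
    s = 0.0
    for k in range(kmax):
        s += (math.factorial(k + n)
              / (math.factorial(k) * math.gamma(alpha * k + alpha * n + beta))
              * z ** k)
    return s

def ml3(alpha, beta, gam, z, kmax=60):
    """Three-parameter Mittag-Leffler function E^gam_{alpha,beta}(z)."""
    s = 0.0
    for k in range(kmax):
        s += poch(gam, k) * z ** k / (math.gamma(alpha * k + beta)
                                      * math.factorial(k))
    return s

# Corollary 2.4 with n = 2, alpha = 0.7, beta = 1.1, z = 0.5:
lhs = ml2_deriv(0.7, 1.1, 0.5, 2)
rhs = 2.0 * ml3(0.7, 1.1 + 2 * 0.7, 3.0, 0.5)  # n! * E^{n+1}_{alpha, beta+n*alpha}
```

The two values coincide to floating-point rounding.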

Theorem 2.20. There are various related important properties of the derivatives of Mittag-Leffler functions. For instance, for any positive integer n, the following equations are satisfied:

E^{γ,q}_{α,β}(z) − E^{γ−1,q}_{α,β}(z) = (z/q) E^{γ,q+1}_{α,α+β}(z) + (z²/q) d/dz E^{γ,q+1}_{α,α+β}(z),   (2.30)
E^{γ,q}_{α,β}(z) − E^{γ,q−1}_{α,β}(z) = (z/(1 − q)) d/dz E^{γ,q}_{α,β}(z),   (2.31)
dⁿ/dzⁿ [z^{β−1} E^{γ,q}_{α,β}(ωz^α)] = z^{β−n−1} E^{γ,q}_{α,β−n}(ωz^α),  R(β − n) > 0,   (2.32)
dⁿ/dzⁿ [z^{q−1} E^{γ,q}_{α,β}(z)] = (δ − n)_n z^{q−1−n} E^{γ,q−n}_{α,β}(z).   (2.33)

Proof. For instance, formula (2.32) can be proven directly from its definition:

dⁿ/dzⁿ [z^{β−1} E^{γ,q}_{α,β}(ωz^α)] = ∑_{k=0}^∞ (γ)_{kq} ω^k z^{αk+β−n−1}/(Γ(αk + β − n) k!)
  = z^{β−n−1} ∑_{k=0}^∞ (γ)_{kq} ω^k z^{αk}/(Γ(αk + β − n) k!) = z^{β−n−1} E^{γ,q}_{α,β−n}(ωz^α).

Hence (2.32) is proven. Detailed proofs of the other formulae are given in [61, 63].

2.3.5 Numerical evaluation of Mittag-Leffler functions and their derivatives

In fact, the function ml_func() listed earlier can be used directly to evaluate the nth-order derivatives of the Mittag-Leffler functions with up to four parameters. The syntaxes of the function are

f = ml_func(α, z, n, ϵ₀)               % nth-order derivative of Eα(z)
f = ml_func([α, β], z, n, ϵ₀)          % nth-order derivative of Eα,β(z)
f = ml_func([α, β, γ], z, n, ϵ₀)       % nth-order derivative of E^γ_{α,β}(z)
f = ml_func([α, β, γ, q], z, n, ϵ₀)    % nth-order derivative of E^{γ,q}_{α,β}(z)

where ϵ₀ is the acceptable error tolerance, which means that if the absolute value of a term in the series is smaller than this number, all the subsequent terms in the infinite series will be truncated. The returned vector f is the nth-order derivative of the Mittag-Leffler function with the independent variable vector z. The default values of the last two arguments are n = 0 and ϵ₀ = eps. If the Mittag-Leffler function itself is expected, then n can be omitted.

Comments 2.2 (Limitations of the function). There are certain limitations of the function ml_func():
(1) The truncation algorithm employed in the function may sometimes diverge in evaluating certain Mittag-Leffler functions. Thus, the reliable – yet sometimes extremely slow – MATLAB function mlf() designed by Professor Igor Podlubny in [58] can alternatively be used. This function is embedded in the function ml_func(), and will be called automatically whenever necessary. Due to the limitations of the function mlf(), only the Mittag-Leffler functions with one or two parameters can be evaluated when the truncation algorithm fails. Besides, in this case, the derivatives of the Mittag-Leffler functions cannot be evaluated.
(2) It is noticed that the derivatives of some Mittag-Leffler functions cannot be evaluated with ml_func(). In this case, numerical differentiations can be performed instead. The high-precision function glfdiff9() discussed in the next chapter is recommended.
(3) Since the function gamma() used in ml_func() can only deal with real arguments, the newly designed function gamma_c() can be used instead to evaluate the Mittag-Leffler functions with complex arguments. This case is not considered in the book.

Example 2.19. Draw the curves of the functions E^{(5)}_{1,1}(x) and E^{(2)}_{√2,1.3}(x).

Solution. It can be seen that the analytical solution of the former function is in fact e^x, and there is no analytical solution to the latter function. The curves of the two functions can be obtained as shown in Figure 2.12, where e^x is also displayed. It can be seen that the former curve is exactly the same as the curve of e^x.

>> x=0:0.001:2; y1=ml_func([1,1],x,5); plotyy(x,y1,x,exp(x))
   y2=ml_func([sqrt(2),1.3],x,2); line(x,y2), max(abs(y1-exp(x)))

2.4 Some linear algebra techniques

Linear algebra deals with vectors, vector spaces or linear spaces, linear maps or linear transformations, and systems of linear equations. It is ubiquitous in modern applied mathematics and almost all engineering fields. Several techniques in linear algebra will be used in this book in handling fractional calculus problems. These topics will be presented here.

Fig. 2.12: The functions E^{(5)}_{1,1}(x), e^x and E^{(2)}_{√2,1.3}(x).

2.4.1 Kronecker product and Kronecker sum

The Kronecker product and the Kronecker sum will be used in Section 6.2 in handling the multiplication of pseudo-polynomials. They are defined as follows.

Definition 2.19. The Kronecker product A ⊗ B is defined as

A ⊗ B = [a₁₁B  ⋅ ⋅ ⋅  a₁ₘB
          ⋮     ⋱     ⋮
         aₙ₁B  ⋅ ⋅ ⋅  aₙₘB].

Definition 2.20. The Kronecker sum A ⊕ B is defined as

A ⊕ B = [a₁₁ + B  ⋅ ⋅ ⋅  a₁ₘ + B
          ⋮        ⋱      ⋮
         aₙ₁ + B  ⋅ ⋅ ⋅  aₙₘ + B].

In MATLAB, the function kron() can be used to compute the Kronecker product A ⊗ B, with kron(A, B). Similar to the function kron(), a Kronecker sum function kronsum() is written as follows.

function C=kronsum(A,B)
[ma,na]=size(A); [mb,nb]=size(B);
A=reshape(A,[1 ma 1 na]); B=reshape(B,[mb 1 nb 1]);
C=reshape(bsxfun(@plus,A,B),[ma*mb na*nb]);
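For readers outside MATLAB, the two definitions are one-liners over nested lists; a Python sketch (names are ours) following the block layouts of Definitions 2.19 and 2.20:

```python
def kron_product(A, B):
    """Kronecker product A (x) B for matrices stored as nested lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def kron_sum(A, B):
    """Kronecker 'sum' in the sense of Definition 2.20: blocks a_ij + B."""
    return [[a + b for a in ra for b in rb] for ra in A for rb in B]

P = kron_product([[1, 2], [3, 4]], [[0, 1], [1, 0]])
S = kron_sum([[1, 2], [3, 4]], [[0, 1], [1, 0]])
```

Note that this elementwise-block "sum" is the book's definition; it differs from the operator-theoretic Kronecker sum A ⊗ I + I ⊗ B found elsewhere in the literature.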

2.4.2 Matrix inverse

The MATLAB function inv() can be used to find the analytical and numerical solutions of the inverse of a given matrix; however, it may not be used in handling fractional-order transfer function matrices to be introduced in Chapter 6. Therefore, a low-level function should be developed to solve similar problems.

By left multiplying a matrix with a deliberately selected matrix, the form of the original matrix can be changed. The selection of transformation matrices is demonstrated in the following examples.

Example 2.20. For a matrix A given below, please study the effect of matrix multiplication with a deliberately chosen matrix E:

A = [16  2  3 13          E = [ 1 0 0 0
      5 11 10  8                0 1 0 0
      9  7  6 12               −2 0 1 0
      4 14 15  1],              0 0 0 1].

Solution. The two matrices can be entered first; then the matrices A₁ = EA and E₁ = E⁻¹ can be computed.

>> A=[16,2,3,13; 5,11,10,8; 9,7,6,12; 4,14,15,1];
   E=eye(4); E(3,1)=-2; E1=inv(E), A1=E*A

The two matrices are

A₁ = [ 16  2  3  13          E₁ = [1 0 0 0
        5 11 10   8                0 1 0 0
      −23  3  0 −14                2 0 1 0
        4 14 15   1],              0 0 0 1].

Is anything in particular observed? E is declared as an identity matrix first, and the final E is generated by setting the element in the third row, first column (denoted as the (3, 1)th element) to −2. Then E₁ is a copy of E; however, the sign of its (3, 1)th element is altered. Now let us consider A₁, which is obtained by left multiplying A by E. Comparing A and A₁, it can be seen that only the third row is changed. The new third row is generated by multiplying all the elements in the first row by −2 and adding them to the third row of A.

With these rules in mind, the matrix E can deliberately be constructed. For instance, if one wants to eliminate all the elements in the first column except the first one, the matrix E₁ can be constructed with the following statements.

>> E1=sym(eye(4)); E1(2:4,1)=-A(2:4,1)/A(1,1), A1=E1*A

The constructed matrix and the product are given below, exactly as expected:

E₁ = [    1  0 0 0          A₁ = E₁A = [16     2      3     13
      −5/16  1 0 0                       0  83/8 145/16  63/16
      −9/16  0 1 0                       0  47/8  69/16  75/16
       −1/4  0 0 1],                     0  27/2   57/4   −9/4].


Another matrix E₂ can be selected to eliminate the rest of the elements in the second column of A₁, such that the matrices can be obtained with

>> E2=sym(eye(4)); E2([1 3 4],2)=-A1([1 3 4],2)/A1(2,2)
   A2=E2*A1, E=E2*E1; A3=E*A

and the solutions are

E₂ = [1  −16/83 0 0          A₂ = [16    0  104/83 1016/83
      0       1 0 0                 0 83/8 145/16   63/16
      0  −47/83 1 0                 0    0  −68/83  204/83
      0 −108/83 0 1],               0    0  204/83 −612/83].

The overall transformation matrix we obtain is E = E₂E₁. This is the foundation of the echelon and triangular factorisation approaches. Following the same strategy, the matrices E₃, . . . , Eₙ can also be constructed, with the following algorithm.

Algorithm 2.1 (A simple matrix inversion algorithm). Proceed as follows.
(1) Input the original matrix A, and let E₀ = I.
(2) In a loop, construct Eᵢ and record the diagonal elements aᵢ, for i = 1, 2, . . . , n.
(3) Compute the transformation matrix E₀ = EₙEₙ₋₁ ⋅ ⋅ ⋅ E₂E₁.
(4) Compute the inverse matrix A⁻¹ = diag([1/a₁, . . . , 1/aₙ])E₀.

Therefore, the new low-level function for matrix inversion based on the algorithm can be written as follows.

function A2=new_inv(A)
A1=A; [n,m]=size(A); E0=eye(n); aa=[];
for i=1:n, ij=1:n; ij=ij(ij~=i); E=eye(n);
   a0=A1(i,i); aa=[aa,a0]; E(ij,i)=-A1(ij,i)/a0;
   E0=E*E0; A1=E*A1;
end
A2=diag(1./aa)*E0;

Example 2.21. Find the inverse matrix of the 9 × 9 magic matrix with the new algorithm, and assess its accuracy.

Solution. The matrix can be entered, and the inverses by the new algorithm and by the MATLAB function inv() can be obtained. The analytical inverse can also be obtained, with the errors of the two numerical inverses being e₁ = 3.1710×10⁻¹⁶ and e₂ = 7.3715×10⁻¹⁸. The new algorithm is not as accurate as the one used in the function inv(); however, it is simple and it works.

>> A=magic(9); A0=inv(sym(A));
   A1=new_inv(A); e1=double(norm(vpa(A1-A0)))
   A2=inv(A); e2=double(norm(vpa(A2-A0)))
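Algorithm 2.1 is language-independent. Below is a hedged Python sketch of the same pivot-and-eliminate scheme (like new_inv(), it has no pivoting or zero-pivot handling; the function name is ours):

```python
def simple_inv(A):
    """Inverse via Algorithm 2.1: accumulate E_i, then scale by pivots."""
    n = len(A)
    A1 = [row[:] for row in A]                       # working copy
    E0 = [[float(i == j) for j in range(n)] for i in range(n)]
    pivots = []
    for i in range(n):
        a0 = A1[i][i]
        pivots.append(a0)
        for r in range(n):                           # apply E_i to A1 and E0
            if r == i:
                continue
            f = -A1[r][i] / a0
            for c in range(n):
                A1[r][c] += f * A1[i][c]
                E0[r][c] += f * E0[i][c]
    return [[E0[r][c] / pivots[r] for c in range(n)] for r in range(n)]

B = simple_inv([[4.0, 3.0], [6.0, 3.0]])
```

At the end A1 is diagonal with the recorded pivots, so dividing each row of the accumulated E₀ by its pivot yields A⁻¹, exactly as in step (4) of the algorithm.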

2.4.3 Arbitrary matrix function evaluations

Matrix function evaluation techniques will be used in Chapter 7 to find the state transition matrices of fractional-order systems. Here, a Jordan matrix-based algorithm [24] is presented and implemented for the evaluation of an arbitrary function φ(A). Also, the matrix functions containing the time variable t can be evaluated.

Definition 2.21. A nilpotent matrix is defined as

H = [0 1 0 ⋅ ⋅ ⋅ 0
     0 0 1 ⋅ ⋅ ⋅ 0
     ⋮ ⋮ ⋮  ⋱  ⋮
     0 0 0 ⋅ ⋅ ⋅ 1
     0 0 0 ⋅ ⋅ ⋅ 0].

J2

..

] ] −1 ]V . ]

.

(2.34)

Jm ]

[

Using Jordan matrix decomposition, the arbitrary function φ(A) can be evaluated from [ [ φ(A) = V [ [ [

φ(J1 ) φ(J2 )

..

.

] ] −1 ]V . ]

(2.35)

φ(J m )]

To solve such a problem, one can first write an m i × m i Jordan block J i as J i = λ i I + H m i , where λ i is a repeated eigenvalue of multiplicity m i , and H m i is a nilpotent matrix, i.e. k ≡ 0. It can be shown that the matrix function φ(J ) can be obtained when k ≥ m i , H m i i as follows: φ(m i −1) (λ i ) m i −1 φ(J i ) = φ(λ i )I m i + φ󸀠 (λ i )H m i + ⋅ ⋅ ⋅ + H mi . (2.36) (m i − 1)! An algorithm is designed to compute the arbitrary matrix function φ(A). Algorithm 2.2 (Arbitrary matrix function evaluation [77]). Require: Matrix A, prototype expression φ(x), independent variable x For A, find Jordan transform matrices V, J, and mark m Jordan blocks for i = 1 To m do Extract J i from J, compute φ(J i ) from (2.36), compose φ(J) end for Compute the matrix function φ(A) from (2.35). Based on the above algorithm, the following new MATLAB function funmsym() can be written. This function can be used to find the analytical solution of any matrix function. The listing of the function is as follows.

2.4 Some linear algebra techniques | 39

function F=funmsym(A,fun,x)
[V,T]=jordan(A); vec=diag(T);
v1=[0,diag(T,1)',0]; v2=find(v1==0);
lam=vec(v2(1:end-1)); m=length(lam);
for i=1:m, k=v2(i):v2(i+1)-1; J1=T(k,k);
   F(k,k)=funJ(J1,fun,x);
end
F=V*F*inv(V);
% compute the matrix function for a Jordan block
function fJ=funJ(J,fun,x)
lam=J(1,1); f1=fun; H=diag(diag(J,1),1); H1=H;
fJ=subs(fun,x,lam)*eye(size(J));
for i=2:length(J), f1=diff(f1,x); a1=subs(f1,x,lam);
   fJ=fJ+a1*H1; H1=H1*H/i;
end

−1/2 −5/2 −3/2 −1/2

1/2 −1/2 −5/2 −1/2

−1 1] ] ]. −1] −4]

Solution. The problem can be solved easily with the following statements.

>> A=[-1 -1/2 1/2 -1; -2 -5/2 -1/2 1; 1 -3/2 -5/2 -1; 3 -1/2 -1/2 -4];
   syms x t f(t); F1=funmsym(A,f(x*t),x)

One obtains f(At) = (1/4)[a_ij], whose entries are, row by row,

    a11 = 2f(−t) + 2f(−3t) + 4tf′(−3t) + t²f″(−3t),
    a12 = 2f(−3t) − 2f(−t) + 2tf′(−3t) + t²f″(−3t),
    a13 = 2f(−t) − 2f(−3t) − 2tf′(−3t),
    a14 = 2f(−3t) − 2f(−t),
    a21 = 2f(−3t) − 2f(−t) − 4tf′(−3t) − t²f″(−3t),
    a22 = 2f(−t) + 2f(−3t) − 2tf′(−3t) − t²f″(−3t),
    a23 = 2tf′(−3t) + 2f(−3t) − 2f(−t),
    a24 = 2f(−t) − 2f(−3t),
    a31 = 2f(−t) − 2f(−3t) + t²f″(−3t),
    a32 = 2f(−3t) − 2f(−t) − 2tf′(−3t) + t²f″(−3t),
    a33 = 2f(−t) + 2f(−3t) − 2tf′(−3t),
    a34 = 2f(−3t) − 2f(−t),
    a41 = 2f(−t) − 2f(−3t) + 8tf′(−3t) + 3t²f″(−3t),
    a42 = 2f(−3t) − 2f(−t) + 2tf′(−3t) + 3t²f″(−3t),
    a43 = 2f(−t) − 2f(−3t) − 6tf′(−3t),
    a44 = 6f(−3t) − 2f(−t).
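The Jordan-block formula (2.36) can be cross-checked numerically outside MATLAB. The sketch below (an addition for illustration, not part of the book's toolbox) evaluates φ(J) for φ = exp on a single 3 × 3 Jordan block and compares it with a general-purpose matrix exponential:

```python
import numpy as np
from scipy.linalg import expm
from math import exp, factorial

lam, m = -3.0, 3                                 # eigenvalue and block size
H = np.diag(np.ones(m - 1), 1)                   # nilpotent part, H^m = 0
J = lam*np.eye(m) + H                            # Jordan block J = lam*I + H

# phi(J) = sum_{k=0}^{m-1} phi^(k)(lam)/k! * H^k, with phi = exp
phiJ = sum(exp(lam)/factorial(k) * np.linalg.matrix_power(H, k)
           for k in range(m))

print(np.allclose(phiJ, expm(J)))   # True: the two results coincide
```

Because H is nilpotent, the series in (2.36) terminates after m terms, which is exactly what the finite sum above exploits.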


2.5 Numerical optimisation problems and solutions

Optimisation techniques are very important in control system analysis and design procedures. Optimisation problems can be classified into unconstrained optimisation problems and constrained optimisation problems. MATLAB-based solutions to these problems will be presented, and global optimisation solutions will be explored.

2.5.1 Unconstrained optimisation problems and solutions

Definition 2.22. The mathematical description of unconstrained optimisation problems is

    min_x f(x),

where x = [x1, x2, . . ., xn]^T are referred to as decision variables, or optimisation variables, and the scalar function f(·) is referred to as the objective function. The task is to find a vector x such that the value of the objective function f(x) is minimised.

In fact, the definition as a minimisation problem does not lose generality. For instance, a maximisation problem can be converted into a minimisation problem by multiplying its objective function by −1.

An unconstrained optimisation problem solver, fminsearch(), is provided in MATLAB. Moreover, a similar function fminunc() is provided in the Optimization Toolbox. Both functions have the same syntaxes:

    x = fminunc(fun, x0),
    [x, f, flag, out] = fminunc(fun, x0, opt, p1, p2, . . .),

where fun can either be an M-function or an anonymous function describing the objective function. The variable x0 is the initial search point, from which a numerical solution is searched for using numerical algorithms. If a solution is successfully found, the returned flag is greater than 0; otherwise, the search is not successful. The improved simplex algorithm in [44] is used to solve the optimisation problem; it is an effective method for solving unconstrained optimisation problems.

Example 2.23. For a function with two variables given by z = (x² − 2x) e^(−x²−y²−xy), find the minimum with MATLAB functions, and interpret the solutions graphically.

Solution. The variables in the objective function are x and y, not the same as in the standard unconstrained optimisation definition. A vector x should be defined by the variable substitutions x1 = x and x2 = y. Therefore, the objective function can be rewritten as

    f(x) = (x1² − 2x1) e^(−x1²−x2²−x1x2).


Describing it with an anonymous function, the following statements can be used to find the optimal solution, and the solution is x = [0.6110, −0.3056]^T.

>> f=@(x)(x(1)^2-2*x(1))*exp(-x(1)^2-x(2)^2-x(1)*x(2));
   x0=[2; 1]; x=fminsearch(f,x0)

Similarly, the same problem can be solved with the function fminunc(), with the same syntax, where

>> x=fminunc(f,x0)

and the solution obtained is x = [0.6110, −0.3055]^T. The objective function can alternatively be expressed by an M-function as follows.

function y=c2fobj1(x)
y=(x(1)^2-2*x(1))*exp(-x(1)^2-x(2)^2-x(1)*x(2));

The function can be saved in the file c2fobj1.m. With this objective function, the optimisation problem can alternatively be solved as follows.

>> x=fminunc(@c2fobj1,x0)

Normally, the number of objective function calls made by fminunc() is much smaller than that made by fminsearch(), since a more effective algorithm is used in fminunc(). Therefore, if the Optimization Toolbox is installed, it is suggested that the function fminunc() be used for unconstrained optimisation problems.

Definition 2.23. The mathematical description of optimisation problems with decision variable boundaries is given by

    min_{x, s.t. x_m ≤ x ≤ x_M} f(x),

where the notation s.t. means "subject to", indicating the constraints that should be satisfied.

The MATLAB function fminsearchbnd(), developed by John D'Errico, can be used to solve such problems [15]. The syntax of the function is

    x = fminsearchbnd(fun, x0, x_m, x_M).

If the lower or upper bounds, x_m or x_M, are not specified, they can be set to an empty matrix [].
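For readers without MATLAB, the unconstrained problem of Example 2.23 can be reproduced with SciPy's implementation of the same simplex search; this Python sketch is an illustrative addition, not the book's code:

```python
import numpy as np
from scipy.optimize import minimize

# objective of Example 2.23 after the substitutions x1 = x, x2 = y
f = lambda x: (x[0]**2 - 2*x[0]) * np.exp(-x[0]**2 - x[1]**2 - x[0]*x[1])

# Nelder-Mead plays the role of fminsearch(); x0 = [2, 1] as in the example
res = minimize(f, x0=[2, 1], method='Nelder-Mead')
print(res.x)    # close to [0.6110, -0.3056], as reported above
```

The same simplex algorithm underlies both solvers, so the two results agree to the displayed precision.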

2.5.2 Constrained optimisation problems and solutions

In certain problems, the decision variables x cannot be chosen arbitrarily; some conditions must be satisfied. This type of problem is referred to as a constrained optimisation problem.

Definition 2.24. A general description of constrained optimisation problems is

    min_{x, s.t. G(x) ≤ 0} f(x),

where x = [x1, x2, . . ., xn]^T. The interpretation of such a description is that, under the constraints G(x) ≤ 0, a decision vector x is sought which minimises the objective function f(x).

In practical optimisation problems, the constraints can be very complicated. For instance, they can be equalities or inequalities, and they can be linear or nonlinear. Sometimes, these constraints may not easily be described by mathematical functions.

Definition 2.25. If the original constraints in Definition 2.24 are further classified into linear inequalities and equalities, upper and lower bounds, and nonlinear equalities and inequalities, the original constrained programming problem can further be rewritten as

    min_x f(x),  s.t.  { Ax ≤ B,
                         A_eq x = B_eq,
                         x_m ≤ x ≤ x_M,            (2.37)
                         C(x) ≤ 0,
                         C_eq(x) = 0.

The MATLAB function fmincon() can be used to solve general nonlinear programming problems. The syntax of the function is

    [x, f_opt, flag, c] = fmincon(fun, x0, A, B, A_eq, B_eq, x_m, x_M, CFun, opts, p1, p2, . . .),

where fun is the M-function or anonymous function describing the objective function. The argument x0 is the initial search point. The definitions of A, B, A_eq, B_eq, x_m and x_M are the same as the ones in Definition 2.25. The argument CFun is the M-function describing the nonlinear constraints, with two returned arguments indicating the inequality and equality constraints, respectively. The argument opts contains the control options. The returned arguments are exactly the same as in other optimisation functions in MATLAB.

Example 2.24. Solve the following nonlinear programming problem:

    min_x 1000 − x1² − 2x2² − x3² − x1x2 − x1x3,
    s.t.  { x1² + x2² + x3² − 25 = 0,
            8x1 + 14x2 + 7x3 − 56 = 0,
            x1, x2, x3 ≥ 0.

Solution. Nonlinear programming solvers should be used to solve this problem. The objective function can be expressed by an anonymous function. Moreover, since the two constraints are both equalities, the constraint function can be expressed as

function [c,ceq]=opt_con1(x)
c=[];
ceq=[x(1)*x(1)+x(2)*x(2)+x(3)*x(3)-25;
     8*x(1)+14*x(2)+7*x(3)-56];

where the nonlinear inequalities and nonlinear equalities are returned in the arguments c and ceq, respectively. Since there is no inequality constraint, the argument c is assigned to an empty matrix.

Having declared the constraints, the matrices A, B, A_eq and B_eq are now all empty matrices. The lower-bound vector can be written as x_m = [0, 0, 0]^T. If the initial point is selected as x0 = [1, 1, 1]^T, the problem can then be solved using the statements

>> x0=[1;1;1]; xm=[0;0;0]; f1=@opt_con1;
   f=@(x)1000-x(1)*x(1)-2*x(2)*x(2)-x(3)*x(3)-x(1)*x(2)-x(1)*x(3);
   A=[]; B=[]; Aeq=[]; Beq=[]; xM=[];
   [x,f0,flag]=fmincon(f,x0,A,B,Aeq,Beq,xm,xM,f1)

with x = [3.5121, 0.2170, 3.5522]^T and f_opt = 961.7151. In total, 111 calls of the objective function are made during the solution process.

Example 2.25. Solve the following nonlinear programming problem [22]:

    min_{q,w,k} k,
    s.t.  { q3 + 9.625q1w + 16q2w + 16w² + 12 − 4q1 − q2 − 78w = 0,
            16q1w + 44 − 19q1 − 8q2 − q3 − 24w = 0,
            2.25 − 0.25k ≤ q1 ≤ 2.25 + 0.25k,
            1.5 − 0.5k ≤ q2 ≤ 1.5 + 0.5k,
            1.5 − 1.5k ≤ q3 ≤ 1.5 + 1.5k.

Solution. It can be seen that the decision variables are q, w and k, while the standard nonlinear programming solver can only handle a single vector of decision variables. Variable substitutions should be made first to convert the problem into a standard one. The following variables are assigned: x1 = q1, x2 = q2, x3 = q3, x4 = w, x5 = k. Also, the inequality constraints should be rewritten. The original problem can be manually rewritten as

    min_x x5,
    s.t.  { x3 + 9.625x1x4 + 16x2x4 + 16x4² + 12 − 4x1 − x2 − 78x4 = 0,
            16x1x4 + 44 − 19x1 − 8x2 − x3 − 24x4 = 0,
            −0.25x5 − x1 ≤ −2.25,
            x1 − 0.25x5 ≤ 2.25,
            −0.5x5 − x2 ≤ −1.5,
            x2 − 0.5x5 ≤ 1.5,
            −1.5x5 − x3 ≤ −1.5,
            x3 − 1.5x5 ≤ 1.5.

The following statements can be used to describe the nonlinear constraints.

function [c,ceq]=c2exnls(x)
c=[];
ceq=[x(3)+9.625*x(1)*x(4)+16*x(2)*x(4)+16*x(4)^2+...
     12-4*x(1)-x(2)-78*x(4);
     16*x(1)*x(4)+44-19*x(1)-8*x(2)-x(3)-24*x(4)];

A random vector is used as the initial search point:

>> f=@(x)x(5); f1=@c2exnls;
   A=[-1 0 0 0 -0.25; 1 0 0 0 -0.25; 0 -1 0 0 -0.5;
      0 1 0 0 -0.5; 0 0 -1 0 -1.5; 0 0 1 0 -1.5];
   B=[-2.25; 2.25; -1.5; 1.5; -1.5; 1.5];
   Aeq=[]; Beq=[]; xm=[]; xM=[]; x0=10*rand(5,1);
   [x,f0,flag]=fmincon(f,x0,A,B,Aeq,Beq,xm,xM,f1)

The result of the optimisation process is x = [1.9638, 0.9276, −0.2172, 0.0695, 1.1448], the optimal objective function is 1.1448, and the value of flag is 1, meaning the solution process is successful.

The initial value x0 can be assigned to another random vector, and the solver can be run again. After several runs, a better solution can be found: x = [2.4544, 1.9088, 2.7263, 1.3510, 0.8175], with an objective function of 0.8175. This is the global optimum, while the one obtained earlier is a local optimum.
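Problems in the form of Definition 2.25 can also be posed outside MATLAB. As an illustrative sketch (not the book's code), Example 2.24 can be handed to SciPy's SLSQP solver, which accepts nonlinear equality constraints and bounds directly:

```python
from scipy.optimize import minimize

# objective and constraints of Example 2.24
f = lambda x: 1000 - x[0]**2 - 2*x[1]**2 - x[2]**2 - x[0]*x[1] - x[0]*x[2]
cons = ({'type': 'eq', 'fun': lambda x: x[0]**2 + x[1]**2 + x[2]**2 - 25},
        {'type': 'eq', 'fun': lambda x: 8*x[0] + 14*x[1] + 7*x[2] - 56})
bnds = [(0, None)]*3                       # x1, x2, x3 >= 0

res = minimize(f, x0=[1, 1, 1], method='SLSQP',
               constraints=cons, bounds=bnds)
print(res.x, res.fun)   # compare with the fmincon() result above
```

SLSQP is a sequential quadratic programming method, close in spirit to the algorithms behind fmincon().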

2.5.3 Global optimisation solutions

In optimisation algorithms, if the initial search points are not properly chosen, local minimum points may be obtained. Our target is to avoid local minima and find the global minimum. Several global optimisation problem solvers are provided in the Global Optimization Toolbox. Specifically, the particle swarm optimisation (PSO) algorithm, the simulated annealing algorithm, the genetic algorithm, as well as the pattern search algorithm are implemented, with the latter two targeting constrained optimisation problems.

In the Global Optimization Toolbox, the function particleswarm() is provided to solve unconstrained optimisation problems with the particle swarm optimisation algorithm, with the syntax

    [x, f_m, key] = particleswarm(f, n, x_m, x_M, opts),

where n is the number of decision variables.


It can be seen that the particle swarm optimisation function can only be used to solve unconstrained optimisation problems with decision variable boundaries; it cannot be used to solve constrained optimisation problems. To find global minima, two other functions are recommended – the genetic algorithm and the pattern search algorithm – with the syntaxes

    [x, f_m, key] = ga(f, n, A, B, A_eq, B_eq, x_m, x_M, CFun, opts),
    [x, f_m, key] = patternsearch(f, x0, A, B, A_eq, B_eq, x_m, x_M, CFun, opts),

where the arguments are the same as the ones discussed earlier. It should be noted that the accuracy of the ga() function is not very high.

Example 2.26. Consider the Rastrigin function problem [77]

    f(x1, x2) = 20 + (x1 − 1)² + (x2 − 1)² − 10[cos((x1 − 1)π) + cos((x2 − 1)π)].

Solve the problem with the particle swarm optimisation approach.

Solution. The surface of the objective function can be obtained as shown in Figure 2.13. It can be seen that there are many valleys in the surface; the target is to find the one with the smallest value as the global minimum, while the others are local minima. With traditional search algorithms, an improperly selected initial search point may lead to a local minimum.

>> syms x1 x2;
   f=20+(x1-1)^2+(x2-1)^2-10*(cos(pi*(x1-1))+cos(pi*(x2-1)));
   ezsurf(f,[-10,10])

The functions particleswarm() and patternsearch() can be used to solve the problem directly, and the global minimum point (1, 1) can be obtained almost every time the two functions are called.

Fig. 2.13: Surface of the objective function (the global minimum is marked).

>> f=@(x)20+(x(1)-1)^2+(x(2)-1)^2-...
      10*(cos(pi*(x(1)-1))+cos(pi*(x(2)-1)));
   [x g]=particleswarm(f,2), patternsearch(f,10*rand(2,1))

Alternatively, two other functions can be used: while simulannealbnd() gives the accurate solution (1, 1) almost every time, the function ga() gives less accurate solutions, for instance, x2 = [1.0551, 1.0115].

>> x1=simulannealbnd(f,100*rand(2,1)), x2=ga(f,2)

An alternative solution algorithm for global constrained optimisation problems can be formulated as follows.

Algorithm 2.3 (An alternative global optimisation algorithm, cf. [77]). Proceed in the following way.
(1) Assign a large value f_x to the initial objective function.
(2) Select randomly an initial search point x0.
(3) Find the optimal solution from x0, yielding x and the objective function f1.
(4) Check whether f1 < f_x. If so, set f_x = f1 and record x; otherwise, go to (2).
Run this in a loop for N times to find the best x and f_x.

A MATLAB function fmincon_global() is written for the algorithm.

function [x,f0]=fmincon_global(f,a,b,n,N,varargin)
f0=Inf;
for i=1:N
   x0=a+(b-a).*rand(n,1);
   [x1 f1 key]=fmincon(f,x0,varargin{:});
   if key>0 & f1<f0, x=x1; f0=f1; end
end

With this function, the problem in Example 2.25 can be solved again from randomly generated initial points:

>> f=@(x)x(5); f1=@c2exnls;
   A=[-1 0 0 0 -0.25; 1 0 0 0 -0.25; 0 -1 0 0 -0.5;
      0 1 0 0 -0.5; 0 0 -1 0 -1.5; 0 0 1 0 -1.5];


   B=[-2.25; 2.25; -1.5; 1.5; -1.5; 1.5];
   Ae=[]; Be=[]; xm=[]; xM=[]; a=-100; b=100;
   [x,f0]=fmincon_global(f,a,b,5,10,A,B,Ae,Be,xm,xM,f1)

Although the functions ga() and patternsearch() claim to be capable of solving constrained optimisation problems, they do not behave well in finding the global optimal solution in this example. Therefore, global solutions of constrained problems can be attempted with the function fmincon_global().
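The restart loop of Algorithm 2.3 is not tied to fmincon(); the same idea can be sketched in Python (an illustrative addition, not the book's code), here applied to the Rastrigin-type function of Example 2.26:

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    # the objective function of Example 2.26, global minimum 0 at (1, 1)
    return (20 + (x[0]-1)**2 + (x[1]-1)**2
            - 10*(np.cos(np.pi*(x[0]-1)) + np.cos(np.pi*(x[1]-1))))

rng = np.random.default_rng(0)
best_x, best_f = None, np.inf
for _ in range(200):                        # N restarts, as in Algorithm 2.3
    x0 = rng.uniform(-3, 5, size=2)         # random initial search point
    res = minimize(rastrigin, x0, method='Nelder-Mead')
    if res.fun < best_f:                    # keep the best local solution
        best_x, best_f = res.x, res.fun

print(best_x, best_f)    # the global minimum near (1, 1)
```

Each restart converges only to the nearest valley; keeping the best of many restarts is what makes the loop a (probabilistic) global search.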

2.6 Laplace transform

An integral transform introduced by the French mathematician Pierre-Simon Laplace (1749–1827) can be used to map ordinary differential equations into algebraic equations. In doing so, he established the foundation for many research areas, for instance, the modelling, analysis and synthesis of control systems. In this section, the definition and basic properties of the Laplace transform and the inverse Laplace transform are summarised first. Then we focus on the solutions of Laplace transform problems and their applications using MATLAB. Examples will be used to demonstrate the solutions of Laplace transform problems.

2.6.1 Definitions and properties

Definition 2.26. The one-sided Laplace transform of a function f(t) is defined as

    L[f(t)] = ∫₀^∞ f(t) e^(−st) dt = F(s),

where L[f(t)] denotes the Laplace transform of the function f(t).

The properties of the Laplace transform are summarised below without proofs.
(1) Linear property: L[af(t) ± bg(t)] = aL[f(t)] ± bL[g(t)] for scalars a and b.
(2) Time-domain shift: L[f(t − a)] = e^(−as) F(s).
(3) s-domain property: L[e^(−at) f(t)] = F(s + a).
(4) Differentiation property: L[df(t)/dt] = sF(s) − f(0). Generally, the nth-order derivative can be obtained from

    L[dⁿf(t)/dtⁿ] = sⁿF(s) − ∑_{k=1}^{n} s^(n−k) f^(k−1)(0).        (2.38)

If the initial values of f(t) and its derivatives are all zero, (2.38) simplifies to

    L[dⁿf(t)/dtⁿ] = sⁿF(s),        (2.39)

and these properties are the crucial formulae to map ordinary differential equations into algebraic equations.

(5) Integration property: If zero initial conditions are assumed, one has

    L[∫₀^t f(τ) dτ] = F(s)/s.

Generally, the Laplace transform of the multiple integral of f(t) can be obtained from

    L[∫₀^t ⋅⋅⋅ ∫₀^t f(τ) dτⁿ] = F(s)/sⁿ.

(6) Initial value property: The initial value of the function is

    lim_{t→0} f(t) = lim_{s→∞} sF(s).

(7) Final value property: If F(s) has no pole with nonnegative real part, i.e. no pole in the region Re(s) ≥ 0, then

    lim_{t→∞} f(t) = lim_{s→0} sF(s).

(8) Convolution property: L[f(t) ∗ g(t)] = L[f(t)] L[g(t)], where the convolution operator ∗ is defined as

    f(t) ∗ g(t) = ∫₀^t f(τ)g(t − τ) dτ = ∫₀^t f(t − τ)g(τ) dτ.        (2.40)

(9) Other properties:

    L[tⁿ f(t)] = (−1)ⁿ dⁿF(s)/dsⁿ,    L[f(t)/tⁿ] = ∫_s^∞ ⋅⋅⋅ ∫_s^∞ F(s) dsⁿ.
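Properties such as (2.38) can be spot-checked with a computer algebra system. A SymPy sketch (an addition for illustration; the book itself uses the MATLAB Symbolic Math Toolbox) verifies the differentiation property for a concrete signal:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(3*t) + 2                      # a test signal with f(0) = 2

F  = sp.laplace_transform(f, t, s, noconds=True)              # F(s)
Fd = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)  # L[f'(t)]

# property (4): L[df/dt] = s*F(s) - f(0)
print(sp.simplify(Fd - (s*F - f.subs(t, 0))))   # 0
```

Here F(s) = 3/(s² + 9) + 2/s, and both sides of property (4) reduce to 3s/(s² + 9).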

Definition 2.27. If the Laplace transform of a signal f(t) is F(s), the inverse Laplace transform of F(s) is defined as

    f(t) = L⁻¹[F(s)] = (1/(2πj)) ∫_{σ−j∞}^{σ+j∞} F(s) e^(st) ds,

where σ is greater than all the real parts of the poles of the function F(s).

2.6.2 Computer solutions to Laplace transform problems

The Symbolic Math Toolbox of MATLAB can be used to solve Laplace transform problems easily and analytically. The procedures for solving such problems are summarised as follows.

Algorithm 2.4 (Symbolic computation of Laplace transforms). Proceed as follows.
(1) The symbolic variables such as t should be declared using the command syms. The time domain function f(t) should be defined in the variable fun.


(2) Call laplace() to solve the problem. The Laplace transform can be obtained by calling the function F = laplace(fun) or F = laplace(fun, v, u), and the function simplify() can be used to simplify the obtained symbolic result.
(3) If the Laplace transform function F(s) is known, it can also be described in the symbolic expression fun. Then the MATLAB function ilaplace() can be used to calculate the inverse Laplace transform of the given function. The syntax of the function is f = ilaplace(fun) or f = ilaplace(fun, u, v).

Example 2.28. For a given time domain function f(t) = t² e^(−2t) sin(t + π), compute its Laplace transform function F(s).

Solution. From the original problem, it can be seen that the time domain variable t should be declared first. With the MATLAB statements, the function f(t) can be specified. Then the function laplace() can be used to derive the Laplace transform of the original function:

>> syms t; f=t^2*exp(-2*t)*sin(t+pi); F=laplace(f)

The result is as follows:

    F(s) = 2/[(s + 2)² + 1]² − 2(2s + 4)²/[(s + 2)² + 1]³.
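The result above can be verified independently; a SymPy cross-check (illustrative, not from the book):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = t**2 * sp.exp(-2*t) * sp.sin(t + sp.pi)

F = sp.laplace_transform(f, t, s, noconds=True)
D = (s + 2)**2 + 1
claimed = 2/D**2 - 2*(2*s + 4)**2/D**3     # the result stated above

print(sp.simplify(F - claimed))    # 0
```

Both expressions are rational in s, so their difference cancels exactly.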

Example 2.29. Compute the inverse Laplace transform of the function

    G(x) = (−17x⁵ − 7x⁴ + 2x³ + x² − x + 1)/(x⁶ + 11x⁵ + 48x⁴ + 106x³ + 125x² + 75x + 17).

Solution. The following statements can be used.

>> syms x t;                          % declare symbolic variables
   G=(-17*x^5-7*x^4+2*x^3+x^2-x+1)... % specify the original function
     /(x^6+11*x^5+48*x^4+106*x^3+125*x^2+75*x+17);
   f=ilaplace(G,x,t)                  % evaluate the inverse Laplace transform

However, the result is far too complicated to display. With the use of vpa(f, 7), the analytical form of the solution can be obtained:

    y(t) = −556.2565 e^(−3.2617t) + 1.7589 e^(−1.0778t) cos 0.6021t + 10.9942 e^(−1.0778t) sin 0.6021t
           + 0.2126 e^(−0.5209t) + 537.2850 e^(−2.5309t) cos 0.3998t − 698.2462 e^(−2.5309t) sin 0.3998t.

Example 2.30. For the function f(t) = e^(−5t) cos(2t + 1) + 5, compute L[d⁵f(t)/dt⁵].

Solution. For the given function f(t), the fifth-order derivative can be taken symbolically, and then its Laplace transform can be computed with the following MATLAB statements.

>> syms t; f=exp(-5*t)*cos(2*t+1)+5;
   F=laplace(diff(f,t,5)); F=simplify(F)

One obtains

    F(s) = (1475s cos 1 − 4282s sin 1 − 1189 cos 1 − 24360 sin 1)/(s² + 10s + 29).

In fact, a further simplified result can be obtained if needed. For instance, the terms in the numerator can be collected with the following statements.

>> syms s; F1=collect(F)   % collect the terms in the result

The following result can be obtained:

    F1 = [(1475 cos 1 − 4282 sin 1)s − 1189 cos 1 − 24360 sin 1]/(s² + 10s + 29).

Example 2.31. Find the inverse Laplace transform of the function

    F(s) = 1/(√s (s − 1)).

Solution. Very few irrational or fractional-order transfer functions have analytical inverse Laplace transforms. Even if there exists an analytical solution in the form of special functions, it is still possible that the solution cannot be obtained with MATLAB. For this example, it is recommended that an earlier version of MATLAB – MATLAB R2008a – be used. With the following statements, the inverse Laplace transform of the function is found to be f = e^t erf(√t).

>> syms s; F=1/sqrt(s)/(s-1); f=ilaplace(F)

For functions where analytical solutions do not exist, numerical inverse Laplace transform techniques should be used instead; this topic will be discussed in Chapter 4.

3 Definitions and computation algorithms of fractional-order derivatives and integrals

As pointed out earlier, the concept of fractional calculus may date back to the beginning of Newton's and Leibniz's calculus. However, since there were no universally accepted definitions, the research topic did not progress well in the early stages of its development. It was not until the mid-nineteenth century that well-established definitions were proposed by well-known scholars, including the French mathematician Joseph Liouville (1809–1882) in 1834, the German mathematician Georg Friedrich Bernhard Riemann (1826–1866) in 1847, the Czech mathematician Anton Karl Grünwald (1838–1920) in 1867 and the Russian mathematician Aleksey Vasilievich Letnikov (1837–1888) in 1868. A unification was made of the definitions of Liouville and Riemann, and the Riemann–Liouville definition was formulated, while the combination of the definitions by Grünwald and Letnikov became the Grünwald–Letnikov definition. These two definitions are the most widely used in fractional calculus, and it can be shown that they are equivalent for engineering applications. They are more suitable for describing fractional calculus problems with zero initial conditions. For problems with nonzero initial conditions, the definition proposed by Michele Caputo in 1967 is particularly useful, because with it the initial values of fractional-order derivatives are no longer needed in fractional-order differential equations, as is the case with the Riemann–Liouville and Grünwald–Letnikov definitions.

Apart from the widely used definitions, there are also other types of fractional-order derivatives, such as the Erdélyi–Kober derivative, the Hadamard derivative, the Marchaud derivative, the Riesz derivative, the Riesz–Miller derivative, the Miller–Ross derivative and the Weyl derivative. These definitions are all beyond the scope of the book.
A unified fractional-order derivative and integral operator t0_D_t^α is introduced, and it will be used throughout the book, where α is restricted to a real number, t is the independent variable and t0 is its lower bound.

Definition 3.1. The following unified fractional-order integro-differential operator t0_D_t^α can be introduced:

    t0_D_t^α f(t) = { d^α f(t)/dt^α,              α > 0,
                      f(t),                       α = 0,
                      ∫_{t0}^t f(τ) dτ^(−α),      α < 0.

Comments 3.1 (Unified notation). We remark the following.
(1) If α ≥ 0 and t0 = 0, the notation t0 can be omitted. The independent variable is t, and it can also be omitted if there is no other variable.

(2) If α > 0, the operator t0_D_t^α stands for the αth-order derivative with respect to t; if α = 0, the operator retains the original function; if α < 0, the (−α)th-order integral of the function is taken.
(3) If α is a complex number, its real part determines whether differential or integral actions are taken. This case is not considered in this book.

In Sections 3.1–3.5, direct extensions of integer-order integro-differential operations are made to present different definitions of fractional-order ones. The commonly used Cauchy integral formula is presented first, followed by the Riemann–Liouville definition, the Grünwald–Letnikov definition and the Caputo definition. In particular, a high-precision algorithm is proposed in Section 3.4. With such an algorithm, numerical integrals and derivatives of any accuracy can be evaluated, and the algorithm can also be embedded in other algorithms so as to increase their accuracy accordingly. In Section 3.6, the relationships among the different definitions are summarised, and the numerical computation of fractional-order derivatives and integrals under the different definitions is presented. Some properties of fractional-order derivatives and integrals are summarised in Section 3.7.

The advantage of this chapter is that high-precision algorithms for various fractional-order derivatives and integrals are provided, with ready-to-use MATLAB implementations. Mathematically, the precision of the algorithms is o(h^p), where p can be any integer.

3.1 Fractional-order Cauchy integral formula

3.1.1 Cauchy integral formula

The very important Cauchy integral formula in complex analysis reads

    dⁿ/dtⁿ f(t) = n!/(2πj) ∮_C f(τ)/(τ − t)^(n+1) dτ,

where C is a smooth closed path, and in the encircled region the function f(t) is single-valued and analytic. If n is replaced by a non-integer γ, there will be an isolated singularity at τ = t. The singularity can be removed when computing closed-path integrals. Therefore, the formula can be extended to handle fractional-order derivatives.

Definition 3.2. The fractional-order Cauchy integral formula is defined as

    D_t^γ f(t) = Γ(γ + 1)/(2πj) ∮_C f(τ)/(τ − t)^(γ+1) dτ,        (3.1)

where the order γ can be any positive real number. The γth-order derivative of a signal f(t) with respect to t can thus be computed.


3.1.2 Fractional-order derivative and integral formula for commonly used functions

In classical integer-order calculus, the nth-order derivatives of sin at and cos at are

    dⁿ/dtⁿ sin at = aⁿ sin(at + nπ/2),    dⁿ/dtⁿ cos at = aⁿ cos(at + nπ/2).

If the integer n is replaced by any real number α, the above formulae still hold. Therefore, fractional-order derivatives and integrals can be expressed as

    D_t^α sin at = a^α sin(at + απ/2),    D_t^α cos at = a^α cos(at + απ/2).        (3.2)

Also, the nth-order derivative of the function t^m can be obtained as follows:

    dⁿ/dtⁿ t^m = m(m − 1) ⋅⋅⋅ (m − n + 1) t^(m−n) = [m!/(m − n)!] t^(m−n),

which can be extended to the fractional-order case

    D_t^α t^m = [Γ(m + 1)/Γ(m − α + 1)] t^(m−α),    m > −1.        (3.3)

In [47], fractional-order derivative formulae for more commonly used functions are presented.
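At integer orders, (3.2) and (3.3) must reduce to the classical formulae. A short SymPy sanity check of this consistency (illustrative code, not from the book):

```python
import sympy as sp

a, t = sp.symbols('a t', positive=True)

# (3.2) with alpha = n = 3: d^3/dt^3 sin(at) = a^3 sin(at + 3*pi/2)
lhs = sp.diff(sp.sin(a*t), t, 3)
rhs = a**3 * sp.sin(a*t + 3*sp.pi/2)
print(sp.simplify(lhs - rhs))       # 0

# (3.3) with m = 4, alpha = 2: Gamma(m+1)/Gamma(m-alpha+1) = m!/(m-alpha)!
print(sp.gamma(5)/sp.gamma(3), sp.factorial(4)/sp.factorial(2))   # 12 12
```

The Gamma-function form is what allows the exponent and the order to become non-integer in (3.3).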

3.2 Grünwald–Letnikov definition

3.2.1 Deriving high-order derivatives

Before presenting the Grünwald–Letnikov definition of fractional-order derivatives, the integer-order derivatives are derived first. Consider the first-order derivative

    d/dt f(t) = lim_{h→0} (1/h)[f(t) − f(t − h)].

Based on the above result, the second-order derivative can be found:

    d²/dt² f(t) = lim_{h→0} (1/h²)[f(t) − 2f(t − h) + f(t − 2h)].

Using the above method repeatedly, the nth-order derivative can be found:

    dⁿ/dtⁿ f(t) = lim_{h→0} (1/hⁿ) ∑_{j=0}^{n} (−1)^j C(n, j) f(t − jh),

where the binomial expression can be written as

    (1 − z)ⁿ = ∑_{j=0}^{n} (−1)^j C(n, j) z^j = ∑_{j=0}^{n} (−1)^j [n!/(j!(n − j)!)] z^j,

and the binomial coefficients can be extracted from

    (−1)^j C(n, j) = (−1)^j n!/(j!(n − j)!).

3.2.2 Grünwald–Letnikov definition in fractional calculus

Extending the above definition directly to non-integer α, the difficulty is that the binomial expression is no longer a finite sum, but the infinite sum

    (1 − z)^α = ∑_{j=0}^{∞} (−1)^j C(α, j) z^j = ∑_{j=0}^{∞} w_j z^j.

Therefore, the extended binomial coefficients can be written as

    w_j = (−1)^j C(α, j) = (−1)^j Γ(α + 1)/[Γ(j + 1)Γ(α − j + 1)].        (3.4)

Assume that f(t) = 0 when t ≤ t0; then a finite sum can be used to approximate the infinite one, and the Grünwald–Letnikov definition can be introduced.

Definition 3.3. The αth-order Grünwald–Letnikov derivative of a function f(t) is defined as

    GL t0_D_t^α f(t) = lim_{h→0} (1/h^α) ∑_{j=0}^{[(t−t0)/h]} (−1)^j C(α, j) f(t − jh),        (3.5)

where [·] means rounding to the nearest integer.

Comments 3.2 (The Grünwald–Letnikov definition). We make the following remarks.
(1) The superscript GL to the left of the operator stands for "Grünwald–Letnikov definition"; it can be omitted when no confusion can arise.
(2) It can be seen that integer-order derivatives rely on the current value and a finite number of function values in the past, while fractional-order derivatives rely on all the sampled points in the past, starting from t0. Therefore, fractional-order derivatives can be regarded as derivatives with memory.
(3) The definition applies to both cases α > 0 and α < 0, i.e. to derivatives and integrals; for α = 0, it is easily seen from the definition that GL t0_D_t^0 f(t) = f(t). The Grünwald–Letnikov operator is thus a unified integro-differential operator in the sense of Definition 3.1.

Later, various methods for computing the binomial coefficients will be given, and numerical algorithms for computing Grünwald–Letnikov fractional-order derivatives will be presented.

3.2.3 Numerical computation of Grünwald–Letnikov derivatives and integrals

It can be seen from Definition 3.3 that, if h is small enough, the fractional-order derivative of a given function f(t) can seemingly be evaluated directly with the following algorithm.

Algorithm 3.1 (Direct fractional-order derivative evaluation). Proceed as follows.
(1) Compute the samples of the signal f(t) at the time instants t_i to get a vector f.


(2) Evaluate the binomial coefficients from (3.4).
(3) Evaluate directly the fractional-order derivatives from (3.5).

A MATLAB function glfdiff0() can be written based on Algorithm 3.1.

function dy=glfdiff0(f,t,gam)
if strcmp(class(f),'function_handle'), f=f(t); end
h=t(2)-t(1); n=length(t); J=[0:(n-1)]; f=f(:); a0=f(1);
if a0~=0 & gam>0, dy(1)=sign(a0)*Inf; end
C=gamma(gam+1)*(-1).^J./gamma(J+1)./gamma(gam-J+1)/h^gam;
for i=2:n, dy(i)=C(1:i)*[f(i:-1:1)]; end

The syntax of the function is f1 = glfdiff0(f, t, α), where t is the equally spaced time vector and α is the order. It can be seen from the code that the original function can either be described by the sample vector of f(t) or by a function handle of f(t); in the latter case, the original f(t) can be described by an anonymous function. The fractional-order derivative is returned in the argument f1.

Unfortunately, this algorithm does not work properly in MATLAB, since Γ(172) and subsequent terms evaluate to ∞ in MATLAB and other languages under the double-precision data type. Therefore, the impact of the corresponding terms is completely ignored. Better algorithms are needed to solve the fractional-order derivative and integral computation problems efficiently.

Theorem 3.1. If the step-size h is selected small enough, the limit sign in (3.5) can be removed, such that Grünwald–Letnikov fractional-order derivatives and integrals can be computed approximately from

    GL t0_D_t^α f(t) ≈ (1/h^α) ∑_{j=0}^{[(t−t0)/h]} w_j f(t − jh),        (3.6)

where w_j are the binomial coefficients of (1 − z)^α, which can be evaluated recursively from

    w_0 = 1,    w_j = (1 − (α + 1)/j) w_{j−1},    j = 1, 2, . . . .        (3.7)

Proof. It can immediately be seen from (3.4) that w_0 = 1, and

    w_j/w_{j−1} = −Γ(j)Γ(α − j + 2)/[Γ(j + 1)Γ(α − j + 1)] = −(α + 1 − j)/j = 1 − (α + 1)/j.

Hence (3.7) is proven.

If the step-size h is small enough, formula (3.6) can be used directly to evaluate the fractional-order derivatives of the given function. It can be shown [54] that the precision of the algorithm is o(h). Since the Gamma function evaluations are avoided by the recursive computation of w_j, the problem in Algorithm 3.1 is solved successfully.
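The failure of the direct formula (3.4) in double precision, and its repair by the recursion (3.7), can be demonstrated in any language with IEEE doubles; a Python sketch (illustrative, not the book's code):

```python
import math

alpha = 0.5

# direct formula (3.4): breaks down once Gamma(j+1) overflows (j >= 171)
try:
    math.gamma(172)
    overflowed = False
except OverflowError:
    overflowed = True
print(overflowed)            # True: the direct formula fails for large j

# recursion (3.7): no Gamma evaluation, works for any j
w = [1.0]
for j in range(1, 11):
    w.append((1 - (alpha + 1)/j) * w[-1])

# compare with (3.4) for moderate j, where the direct formula is still valid
for j in range(11):
    wj = ((-1)**j * math.gamma(alpha + 1)
          / (math.gamma(j + 1) * math.gamma(alpha - j + 1)))
    assert abs(wj - w[j]) < 1e-12
print(w[1], w[2])            # -0.5 -0.125: coefficients of (1-z)^0.5
```

The recursion only ever forms the ratio of consecutive coefficients, so no intermediate quantity grows without bound.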

Algorithm 3.2 (Grünwald–Letnikov derivatives and integrals). Proceed as follows.
(1) Evaluate the samples of the given signal f.
(2) Evaluate the binomial coefficients w_k recursively with (3.7).
(3) Compute the fractional-order derivatives and integrals with (3.6).

Based on the algorithm, the following MATLAB function can be written to evaluate Grünwald–Letnikov fractional-order derivatives and integrals.

function dy=glfdiff(y,t,gam)
if strcmp(class(y),'function_handle'), y=y(t); end
h=t(2)-t(1); w=1; y=y(:); t=t(:);
for j=2:length(t), w(j)=w(j-1)*(1-(gam+1)/(j-1)); end
for i=1:length(t), dy(i)=w(1:i)*[y(i:-1:1)]/h^gam; end

The syntax of the function is y1 = glfdiff(y, t, γ), where the arguments are exactly the same as those in glfdiff0(). If γ is negative, the (−γ)th-order integral is computed.

Example 3.1. Evaluate the 0.75th-order derivative of a constant using the Cauchy and Grünwald–Letnikov definitions. Also, what is the 0.75th-order integral under the different definitions?

Solution. It is well known in classical calculus that the derivative of a constant is zero, while the first-order integral of a constant is a straight line. One may naturally wonder what the fractional-order derivatives or integrals of constants are. A constant can alternatively be expressed as f(t) = 1 = t⁰. Then, from (3.3) with the Cauchy integral formula, it can be found that

    D_t^0.75 y(t) = Γ(1)t^(−0.75)/Γ(1 − 0.75) = t^(−0.75)/Γ(0.25)

Dt−0.75 y(t) =

Γ(1)t0.75 t0.75 = . Γ(1 + 0.75) Γ(1.75)

and

The 0.75th-order derivative and integral of unit step function under the Cauchy and Grünwald–Letnikov definitions can be evaluated with the following statements, as shown in Figure 3.1. It can be seen that the two sets of curves are closely matched. >> t=0:0.001:1; y=ones(size(t)); y1=glfdiff(y,t,0.75); y1a=t.^(-0.75)/gamma(0.25); y2=glfdiff(y,t,-0.75); y2a=t.^(0.75)/gamma(1.75); y2a=gamma(1)*t.^(-0.75)/gamma(1-0.75); plot(t,y1,t,y1a,t,y2,’--’,t,y2a,’--’), ylim([0 5])

3.2 Grünwald–Letnikov definition | 57

Fig. 3.1: The 0.75th-order derivative and integral of a constant.

Selecting the orders γ = 0, 1/4, 1/2, 3/4, 1, 5/4, the following statements can be used to evaluate derivatives of the different orders, and the results are shown in Figure 3.2. It can be seen that when γ is assigned the integer values 0 and 1, the curves are exactly the same as the original function and the first-order derivative of the original function, respectively.

>> t=0:0.001:0.5; y=ones(size(t)); gam0=0:0.25:1.25;
   for gam=gam0, y1=glfdiff(y,t,gam); plot(t,y1), hold on
   end
   ylim([-3 5])

Fig. 3.2: Fractional-order derivatives of different orders (γ = 0, 1/4, 1/2, 3/4, 1, 5/4).

Fractional-order integrals can also be evaluated with the function glfdiff(). If the orders of the integrals are selected as 0, 1/4, 1/2, 3/4, 1, 5/4, i.e. a vector of orders γ = [0, −1/4, −1/2, −3/4, −1, −5/4] is used, the following statements can be applied, and the results are shown in Figure 3.3. It can be seen that when γ = 0, the original function is

Fig. 3.3: Fractional-order integrals of different orders (γ = 0, −1/4, −1/2, −3/4, −1, −5/4).

retained, while when γ = −1, the integral is a straight line, the same as the integer-order integral.

>> t=0:0.01:1.5; y=ones(size(t)); gam0=-1.25:0.25:0;
   for gam=gam0, y1=glfdiff(y,t,gam); plot(t,y1), hold on
   end

Example 3.2. Evaluate the 0.5th-order derivative of a step function under different step-sizes, and assess the accuracy.

Solution. Selecting the step-sizes h = 0.0001, 0.001, 0.01, 0.1, the derivative values and errors can be measured with the following statements, as listed in Table 3.1. It can be seen that the o(h) precision assumption is correct.

>> t0=0.2:0.2:1; y0=t0.^(-0.5)/gamma(0.5);
   t=0:0.0001:1; y=ones(size(t)); y1=glfdiff(y,t,0.5);
   t=0:0.001:1; y=ones(size(t)); y2=glfdiff(y,t,0.5);
   t=0:0.01:1; y=ones(size(t)); y3=glfdiff(y,t,0.5);
   t=0:0.1:1; y=ones(size(t)); y4=glfdiff(y,t,0.5);
   y1=y1(2001:2000:10001); y2=y2(201:200:1001);
   y3=y3(21:20:101); y4=y4(3:2:11);
   T=[[t0, t0]; [y1; y2; y3; y4],[y0-y1; y0-y2; y0-y3; y0-y4]]

In this example, the analytical solution is known, so the accuracy of the results can be assessed by comparing them with the theoretical values. In practical applications this is not always the case, since the analytical solutions are usually unknown. Therefore, validation of the results must be carried out. For instance, one may first select a step-size h and compute the results, then select a smaller step-size and evaluate the fractional-order derivatives again. If the two results are consistent, then the results can be

Tab. 3.1: Derivatives and errors under different step-sizes.

step-size h | derivatives at time t_k          | errors at time t_k
            |  0.2     0.4     0.6     0.8     |  0.2        0.4        0.6        0.8
y0          |  1.2616  0.8921  0.7284  0.6308  |
0.0001      |  1.2615  0.8920  0.7284  0.6308  |  7.9×10^-5  2.8×10^-5  1.5×10^-5  9.9×10^-6
0.001       |  1.2608  0.8918  0.7282  0.6307  |  0.0007     0.00028    0.00015    9.9×10^-5
0.01        |  1.2537  0.8893  0.7269  0.6298  |  0.0079     0.00278    0.00152    0.00099
0.1         |  1.1859  0.8647  0.7134  0.6210  |  0.0757     0.0273     0.015      0.00977

accepted; otherwise, try an even smaller step-size and validate the results again until satisfactory results are obtained.

Example 3.3. Consider the function f(t) = e^{−t} sin(3t + 1), t ∈ (0, π). It can be seen that the initial value of the function is not zero. Compute its fractional-order derivatives.

Solution. Selecting the step-sizes h = 0.01, h = 0.001 and h = 0.0005, respectively, the 0.5th-order Grünwald–Letnikov derivatives can be found with the following statements, as shown in Figure 3.4. It can be seen that, apart from the region near t = 0, the curves are very close. Further, it can be seen that for this example the result is already acceptable for the step-size h = 0.01, and the ones with h = 0.001 and h = 0.0005 are identical.

>> f=@(t)exp(-t).*sin(3*t+1); % anonymous function description
   t=0:0.001:pi; dy=glfdiff(f,t,0.5);
   t1=0:0.01:pi; dy1=glfdiff(f,t1,0.5);
   t2=0:0.0005:pi; dy2=glfdiff(f,t2,0.5);
   plot(t,dy,t1,dy1,t2,dy2), axis([0,pi,-1.5 6])

Fig. 3.4: Comparisons of results in different step-sizes (h = 0.01; h = 0.001 and h = 0.0005 coincide).

Fig. 3.5: Derivatives of different orders (γ = 0, 1/4, 1/2, 3/4, 1, 5/4).

For the different orders γ = 0, 1/4, 1/2, 3/4, 1, 5/4, the following statements can be used, and the results are obtained as shown in Figure 3.5.

>> t=0:0.01:pi; y=exp(-t).*sin(3*t+1);
   for gam=0:0.25:1.25, y1=glfdiff(y,t,gam); plot(t,y1); hold on
   end

Integrals of different orders can also be obtained, as shown in Figure 3.6. It can be seen that the fractional-order derivatives and integrals are more informative, since they fill in the behaviour between the integer-order cases.

>> for gam=-1:0.25:0, y1=glfdiff(y,t,gam); plot(t,y1); hold on
   end

Fig. 3.6: Integrals of different orders (γ = 0, −1/4, −1/2, −3/4, −1).

Fig. 3.7: Fractional-order derivatives under different definitions (Grünwald–Letnikov and Cauchy).

Example 3.4. Compute the 0.75th-order derivative of the function f(t) = sin(3t + 1) with the Cauchy and Grünwald–Letnikov definitions and compare the results.

Solution. With the Cauchy definition in (3.2), the 0.75th-order derivative can be obtained as

$${}_0\mathscr{D}_t^{0.75} f(t) = 3^{0.75}\sin\Bigl(3t+1+\frac{0.75\pi}{2}\Bigr),$$

while with the Grünwald–Letnikov definition, the numerical solution can be evaluated with the function glfdiff(). Both results are obtained as shown in Figure 3.7.

>> t=0:0.01:pi; y=sin(3*t+1);
   y1=3^0.75*sin(3*t+1+0.75*pi/2); y2=glfdiff(y,t,0.75);
   plot(t,y1,t,y2,'--'), ylim([-4 8])

It can be seen by comparing the two definitions that in the Cauchy definition the sudden jump at t = 0 does not appear, since in that definition the values of f(t) for t ≤ 0 are taken as sin(3t + 1), while in the Grünwald–Letnikov definition the initial values are assumed to be zero.

3.2.4 Podlubny's matrix algorithm

A matrix algorithm proposed by Professor Igor Podlubny [56] can alternatively be used in the evaluation of Grünwald–Letnikov derivatives and integrals, as follows.

Algorithm 3.3 (Matrix algorithm for Grünwald–Letnikov derivatives). One proceeds in the following way.
(1) Evaluate the samples of the given signal f.

(2) Evaluate the binomial coefficients w_k recursively with (3.7).
(3) Equation (3.6) can alternatively be expressed in the matrix form

$$D^{\alpha}f = \frac{1}{h^{\alpha}}
\begin{bmatrix}
w_0 \\
w_1 & w_0 \\
w_2 & w_1 & w_0 \\
\vdots & \ddots & \ddots & \ddots \\
w_{N-1} & \cdots & w_2 & w_1 & w_0 \\
w_N & w_{N-1} & \cdots & w_2 & w_1 & w_0
\end{bmatrix}
\begin{bmatrix}
f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{N-1} \\ f_N
\end{bmatrix}. \tag{3.8}$$

Note that the matrix in (3.8) is in fact a rotated version of a Hankel matrix; therefore, the matrix algorithm can be implemented in MATLAB directly as follows.

function dy=glfdiff_mat(y,t,gam)
   h=t(2)-t(1); w=1; y=y(:); t=t(:);
   a0=y(1); dy(1)=0;
   if a0~=0 & gam>0, dy(1)=sign(a0)*Inf; end
   for j=2:length(t), w(j)=w(j-1)*(1-(gam+1)/(j-1)); end
   dy=rot90(hankel(w(end:-1:1)))*y;

It can be seen that the algorithm is concise, and it is especially suitable for evaluating sequential derivatives such as D^α D^β D^γ y(t), since only matrix multiplications are involved. The limitation of the algorithm is that, when N is very large, the matrix may become too big to be handled by MATLAB, or the speed may be reduced significantly.
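The matrix form (3.8) can be sketched in Python without MATLAB's hankel/rot90 helpers; the code below is our own illustrative translation, applying the lower-triangular Toeplitz matrix of weights row by row:

```python
def gl_weights(alpha, n):
    """GL binomial weights via the recursion (3.7)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1 - (alpha + 1) / j))
    return w

def glfdiff_mat(y, h, alpha):
    """Matrix form (3.8): row i of the matrix is [w_i, w_{i-1}, ..., w_0, 0, ...],
    so entry i of the product is sum_j w_{i-j} y_j, scaled by h^(-alpha)."""
    n = len(y)
    w = gl_weights(alpha, n - 1)
    return [sum(w[i - j] * y[j] for j in range(i + 1)) / h ** alpha
            for i in range(n)]

# 0.5th-order derivative of a unit step sampled with h = 1
d = glfdiff_mat([1.0, 1.0, 1.0], 1.0, 0.5)   # [1.0, 0.5, 0.375]
```

Repeated multiplication by this matrix gives the sequential derivatives mentioned above.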

3.2.5 Studies on short-memory effect

In general, if the step-size h is selected very small, or the value of [(t − t0)/h] is very large, the sum in (3.6) contains too many terms, which leads to a significant increase in the computational burden and reduces the efficiency of the numerical computation. It may not be necessary to use all the previous information over [(t − t0)/h] points to compute the fractional-order derivatives. The recent information in the interval [t − L, t] can be used instead to reduce the computational burden, i.e. (see [54])

$${}_{t_0}D_t^{\alpha} f(t) \approx {}_{(t-L)}D_t^{\alpha} f(t).$$

This method is also known as the short-memory effect [17, 54]. With such a method, the Grünwald–Letnikov derivative can be approximated as

$$y(t) \approx \frac{1}{h^{\alpha}}\sum_{j=0}^{N(t)} w_j f(t-jh), \quad\text{where}\quad N(t) = \min\Bigl\{\Bigl[\frac{t-t_0}{h}\Bigr], \frac{L}{h}\Bigr\}.$$


Here L is referred to as the memory length, and how to select an appropriate memory length L is a crucial problem. Assume that in the time interval of interest (t0, T) the function satisfies |f(t)| ≤ M; it is known that the approximation error satisfies

$$\Delta(t) = \bigl|{}_{t_0}D_t^{\alpha}f(t) - {}_{(t-L)}D_t^{\alpha}f(t)\bigr| = \biggl|\frac{1}{\Gamma(1-\alpha)}\int_{t_0}^{t-L}\frac{f'(\tau)}{(t-\tau)^{\alpha}}\,{\rm d}\tau\biggr| \le \frac{ML^{-\alpha}}{|\Gamma(1-\alpha)|}.$$

If one expects the error to be smaller than ε, i.e. ∆(t) < ε, the memory length selected should satisfy

$$L \ge \Bigl(\frac{M}{\varepsilon|\Gamma(1-\alpha)|}\Bigr)^{1/\alpha}. \tag{3.9}$$

This requirement may be too demanding. For instance, even if a low-precision error tolerance of ε = 10^{-2} is selected, with α = 0.1 and M = 1, the required memory length could be as high as L = 5.15×10^{19}. Therefore, the short-memory approximation may not be very useful in real applications.

A MATLAB function glfdiff_mem() is written to implement the approximation algorithm. The syntax of the function is y1 = glfdiff_mem(y, t, γ, L0), where the arguments y, t, γ, y1 are exactly the same as those in glfdiff(), and L0 is the number of points in the memory, i.e. L0 = L/h. If L0 is not specified, the ordinary Grünwald–Letnikov definition is used instead.

function dy=glfdiff_mem(y,t,gam,L0)
   h=t(2)-t(1); dy(1)=0; y=y(:); t=t(:); w=1;
   if nargin==3, L0=length(t); end
   for j=2:length(t), w(j)=w(j-1)*(1-(gam+1)/(j-1)); end
   for i=1:length(t), L=min([i,L0]);
      dy(i)=w(1:L)*[y(i:-1:i-L+1)]/h^gam;
   end

The matrix algorithm in Algorithm 3.3 could also be used, by setting the trailing elements of the vector w to zero. However, matrix multiplication is not an efficient way of finding the derivatives in this case: in the loop above, multiplications over L terms are sufficient, while the matrix algorithm still requires the full matrix. The examples below demonstrate the effect of the approximation algorithm.

Example 3.5. Let us consider the function f(t) = e^{−t} sin(3t + 1) in Example 3.3. Compute the 0.5th-order derivative and observe the accuracy of the short-memory approximation.

Solution. The step-size can be selected as h = 0.001, and the expected error tolerance as ε = 10^{-3}. Since the maximum value of the function is M = 1, the minimum memory length computed from formula (3.9) is L = (1/(10^{-3}Γ(1 − 0.5)))^{1/0.5} = 3.2×10^5. This is of course too demanding. We can


Fig. 3.8: Short-memory effect when L0 = 1200.

select shorter lengths and observe what happens in real computations. With the statements given below, the comparisons are recorded in Table 3.2. It can be seen that when the memory is selected as L0 = 1,200, a satisfactory approximation can be obtained, as shown in Figure 3.8, although the ε = 10^{-3} requirement is not satisfied. Since the total number of points in this example is 3,142, the selection of a memory length of L0 = 1,200 may save about 2/3 of the total time.

>> t=0:0.001:pi; y=exp(-t).*sin(3*t+1); y0=glfdiff(y,t,0.5);
   L0=[100:100:3000]; T=[];
   for L=L0
      dy=glfdiff_mem(y,t,0.5,L); T=[T; L, norm(y0-dy,inf)];
   end
   dy=glfdiff_mem(y,t,0.5,1200);
   plot(t,y0,t,dy,'--'); axis([0 pi -1 5])

Tab. 3.2: Memory lengths and approximation errors when h = 0.001.

memory L0  error norm  | memory L0  error norm  | memory L0  error norm
  100      0.71736     |   200      0.38972     |   300      0.26417
  400      0.19737     |   500      0.15597     |   600      0.12789
  700      0.10767     |   800      0.092463    |   900      0.080647
1,000      0.071227    | 1,100      0.063559    | 1,200      0.057211
1,300      0.051879    | 1,400      0.047346    | 1,500      0.043449
1,600      0.04007     | 1,700      0.037115    | 1,800      0.034513
1,900      0.032206    | 2,000      0.030149    | 2,100      0.028305
2,200      0.026644    | 2,300      0.025141    | 2,400      0.023777
2,500      0.022532    | 2,600      0.021099    | 2,700      0.018628
2,800      0.015194    | 2,900      0.011026    | 3,000      0.006443


For the same function, if the terminal time is increased to 50 seconds, the ordinary Grünwald–Letnikov algorithm needs about 18 seconds to evaluate the derivative, while with the short memory of L0 = 1,200 only 4.7 seconds are needed. In real applications, choosing an appropriate L0 is itself a time-consuming task.

>> t=0:0.001:50; y=exp(-t).*sin(3*t+1);
   tic, y0=glfdiff(y,t,0.5); toc
   tic, dy=glfdiff_mem(y,t,0.5,1200); toc

Example 3.6 (Counterexample of the short-memory effect). Evaluate a fractional-order integral of a step function using the short-memory effect.

Solution. The 0.75th-order integral of a step function can easily be derived with Cauchy's integral formula. Selecting a step-size of h = 0.001 and a total time of 5 seconds, 5,000 samples will be generated. Now, let us try a relatively large memory size, e.g. L0 = 4,000. The following statements can be used to evaluate the numerical integral, as well as the theoretical one, as shown in Figure 3.9.

>> t=0:0.001:5; y=ones(size(t));
   y0=gamma(1)*t.^(0.75)/gamma(1+0.75); % Cauchy's formula
   dy=glfdiff_mem(y,t,-0.75,4000); dy1=glfdiff(y,t,-0.75);
   plot(t,dy,t,dy1,'--');

It can be seen that, although the memory size is quite close to the total number of samples, a very large computational error is still introduced: the short-memory approach fails in this example. This does not mean that the short-memory principle is violated; it only means that the problem cannot be solved in this form, because a reasonable L0 cannot be selected.

Fig. 3.9: Failure of short-memory approach.

It can be seen from the previous example that, although the short-memory method claims to be efficient, it is also risky, and erroneous results may sometimes be introduced. Therefore, care must be taken before using such an approximation.

3.3 Riemann–Liouville definition

3.3.1 High-order integrals

Let us consider integer-order integrals first. It is obvious that the first-order integral of a function f(t) can be expressed as

$$\frac{{\rm d}^{-1}}{{\rm d}t^{-1}} f(t) = \int_{t_0}^{t} f(\tau)\,{\rm d}\tau.$$

It can also be seen that the second-order integral can be written as

$$\frac{{\rm d}^{-2}}{{\rm d}t^{-2}} f(t) = \int_{t_0}^{t}\!\int_{t_0}^{t} f(\tau)\,{\rm d}\tau\,{\rm d}t = \int_{t_0}^{t} f(\tau)(t-\tau)\,{\rm d}\tau.$$

Similarly, the nth-order integral can be evaluated from

$$\frac{{\rm d}^{-n}}{{\rm d}t^{-n}} f(t) = \underbrace{\int_{t_0}^{t}\cdots\int_{t_0}^{t}}_{n} f(\tau)\,{\rm d}\tau_1\,{\rm d}\tau_2\cdots{\rm d}\tau_n = \frac{1}{(n-1)!}\int_{t_0}^{t}\frac{f(\tau)}{(t-\tau)^{1-n}}\,{\rm d}\tau.$$
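The collapse of the n-fold integral into a single integral is easy to exercise numerically. The sketch below is our own helper (a simple midpoint rule, not from the book), compared against the known n-fold integral of a constant:

```python
from math import factorial

def cauchy_repeated_integral(f, t, n, steps=2000):
    """n-fold integral of f over [0, t] via the single-integral form:
    1/(n-1)! * integral_0^t f(tau) (t - tau)^(n-1) dtau (midpoint rule)."""
    h = t / steps
    acc = 0.0
    for k in range(steps):
        tau = (k + 0.5) * h          # midpoint of panel k
        acc += f(tau) * (t - tau) ** (n - 1)
    return acc * h / factorial(n - 1)

# the double integral of f(t) = 1 over [0, 2] is t^2/2 = 2
val2 = cauchy_repeated_integral(lambda tau: 1.0, 2.0, 2)
# the triple integral of f(t) = 1 over [0, 1] is t^3/6 = 1/6
val3 = cauchy_repeated_integral(lambda tau: 1.0, 1.0, 3)
```

Replacing (n − 1)! by Γ(α) in this kernel is exactly the step taken in the next subsection.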

3.3.2 Riemann–Liouville fractional-order definition

If the integer −n is substituted by a real number −α, the Riemann–Liouville fractional-order integral can be defined.

Definition 3.4. The αth-order Riemann–Liouville integral of a function f(t) is defined as

$${}^{\rm RL}_{\ t_0}D_t^{-\alpha} f(t) = \frac{1}{\Gamma(\alpha)}\int_{t_0}^{t}\frac{f(\tau)}{(t-\tau)^{1-\alpha}}\,{\rm d}\tau, \tag{3.10}$$

where the notation RL indicates "Riemann–Liouville definition"; it can be omitted when no confusion can arise. Besides, when 0 < α < 1 and the initial time instance is t0 = 0, the notation can be simplified to D_t^{−α} f(t). The Riemann–Liouville definition is the most widely used definition in fractional calculus. The subscripts on both sides of the operator D represent the lower and upper bounds of the integral [23].


The fractional-order derivative cannot be defined simply by replacing −α with α; the definition should be presented through integrals. Now consider the βth-order derivative. If n − 1 < β ≤ n, i.e. n = ⌈β⌉, then

$${}^{\rm RL}_{\ t_0}D_t^{\beta} f(t) = \frac{{\rm d}^n}{{\rm d}t^n}\bigl[{}^{\rm RL}_{\ t_0}D_t^{-(n-\beta)} f(t)\bigr].$$

Based on the above integral, the formal Riemann–Liouville fractional-order derivative can be defined.

Definition 3.5. Assume that n − 1 < β ≤ n, and denote n = ⌈β⌉. The Riemann–Liouville fractional-order derivative can be expressed as

$${}^{\rm RL}_{\ t_0}D_t^{\beta} f(t) = \frac{1}{\Gamma(n-\beta)}\frac{{\rm d}^n}{{\rm d}t^n}\int_{t_0}^{t}\frac{f(\tau)}{(t-\tau)^{1+\beta-n}}\,{\rm d}\tau. \tag{3.11}$$

In such a definition, the power of the term t − τ in the integrand is ensured to be not less than −1.

3.3.3 Riemann–Liouville formulae of commonly used functions

Let us first consider the Riemann–Liouville derivatives of the power and exponential functions, when the lower bound is t0 = 0.

Lemma 3.1. For a power function f(t) = t^μ with μ > −1, the derivative satisfies

$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1-\alpha)}\, t^{\mu-\alpha}. \tag{3.12}$$

Theorem 3.2. For an exponential function f(t) = e^{λt}, one has

$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, {\rm e}^{\lambda t} = t^{-\alpha} E_{1,1-\alpha}(\lambda t). \tag{3.13}$$

Proof. It is known that the Taylor series expansion of e^{λt} is

$${\rm e}^{\lambda t} = \sum_{k=0}^{\infty}\frac{(\lambda t)^k}{k!} = \sum_{k=0}^{\infty}\frac{(\lambda t)^k}{\Gamma(k+1)}. \tag{3.14}$$

Using the property in (3.12) and applying the Riemann–Liouville operator to the right-hand side of (3.14) term by term, one obtains

$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, {\rm e}^{\lambda t} = \sum_{k=0}^{\infty}{}^{\rm RL}_{\ \ 0}D_t^{\alpha}\frac{(\lambda t)^k}{\Gamma(k+1)} = \sum_{k=0}^{\infty}\frac{\lambda^k t^{k-\alpha}}{\Gamma(k+1-\alpha)} = t^{-\alpha}\sum_{k=0}^{\infty}\frac{(\lambda t)^k}{\Gamma(k+1-\alpha)} = t^{-\alpha} E_{1,1-\alpha}(\lambda t). \tag{3.15}$$

Hence, the theorem is proven.
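Formula (3.12) is easy to exercise numerically; the helper below (our own naming, not from the book) also reproduces the classical check that composing two semi-derivatives of t recovers its first derivative, 1:

```python
from math import gamma, pi

def rl_power_deriv_coeff(mu, alpha):
    """Coefficient in (3.12): the RL derivative of t^mu (mu > -1) is
    Gamma(mu+1)/Gamma(mu+1-alpha) * t^(mu-alpha)."""
    return gamma(mu + 1) / gamma(mu + 1 - alpha)

# semi-derivative of t:       (2/sqrt(pi)) * t^(1/2)
c1 = rl_power_deriv_coeff(1.0, 0.5)     # Gamma(2)/Gamma(3/2)
# semi-derivative of t^(1/2): Gamma(3/2) * t^0
c2 = rl_power_deriv_coeff(0.5, 0.5)     # Gamma(3/2)/Gamma(1)
# c1 * c2 = 1, i.e. D^{1/2} D^{1/2} t = d/dt t = 1
```

Note that math.gamma handles the negative non-integer arguments that occur when α > μ, exactly as required by (3.12).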

Theorem 3.3. The Riemann–Liouville formulae for several commonly used functions are given in [54]. The following formulae are listed without proofs:

$${}^{\rm RL}_{\ \ a}D_t^{\alpha} H(t-a) = \frac{(t-a)^{-\alpha}}{\Gamma(1-\alpha)} H(t-a),$$
$${}^{\rm RL}_{\ \ a}D_t^{\alpha}\bigl[H(t-a)f(t)\bigr] = H(t-a)\,{}_aD_t^{\alpha} f(t),$$
$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, \delta(t) = \frac{t^{-\alpha-1}}{\Gamma(-\alpha)},$$
$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, \delta^{(n)}(t) = \frac{t^{-\alpha-n-1}}{\Gamma(-\alpha-n)},$$
$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, \cosh\sqrt{\lambda}\,t = t^{-\alpha} E_{2,1-\alpha}(\lambda t^2),$$
$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, \frac{\sinh\sqrt{\lambda}\,t}{\sqrt{\lambda}} = t^{1-\alpha} E_{2,2-\alpha}(\lambda t^2),$$
$${}^{\rm RL}_{-\infty}D_t^{\alpha}\, {\rm e}^{\lambda t} = \lambda^{\alpha}{\rm e}^{\lambda t}, \quad \lambda > 0,$$
$${}^{\rm RL}_{-\infty}D_t^{\alpha}\, {\rm e}^{\lambda t+\mu} = \lambda^{\alpha}{\rm e}^{\lambda t+\mu}, \quad \lambda > 0,$$
$${}^{\rm RL}_{-\infty}D_t^{\alpha}\, \sin\lambda t = \lambda^{\alpha}\sin\Bigl(\lambda t+\frac{\alpha\pi}{2}\Bigr),$$
$${}^{\rm RL}_{-\infty}D_t^{\alpha}\, \cos\lambda t = \lambda^{\alpha}\cos\Bigl(\lambda t+\frac{\alpha\pi}{2}\Bigr),$$
$${}^{\rm RL}_{-\infty}D_t^{\alpha}\, {\rm e}^{\lambda t}\sin\mu t = r^{\alpha}{\rm e}^{\lambda t}\sin(\mu t+\alpha\varphi),$$
$${}^{\rm RL}_{-\infty}D_t^{\alpha}\, {\rm e}^{\lambda t}\cos\mu t = r^{\alpha}{\rm e}^{\lambda t}\cos(\mu t+\alpha\varphi),$$

where H(t − a) is the Heaviside function, i.e. H(t − a) = 1 when t ≥ a, and H(t − a) = 0 otherwise; δ(t) is the impulse (Dirac) function. Besides, r = √(λ² + μ²) and tan φ = μ/λ. It should be noted that only very few functions have analytical Riemann–Liouville derivatives. Since the variable t appears both in the integral bound and in the integrand, it can be very difficult to evaluate the analytical Riemann–Liouville derivative of an ordinary function f(t). Numerical solutions are usually necessary in practical applications.
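The two-parameter Mittag-Leffler function E_{a,b}(z) appearing in these formulae can be evaluated directly from its series (the book uses the MATLAB function ml_func; the Python helper below is our own sketch, adequate only for moderate |z|):

```python
from math import gamma

def ml2(a, b, z, tol=1e-15, kmax=170):
    """Two-parameter Mittag-Leffler function E_{a,b}(z) = sum_k z^k/Gamma(ak+b),
    summed until the terms drop below tol."""
    s = 0.0
    for k in range(kmax):
        x = a * k + b
        if x > 170:          # Gamma overflows beyond ~171 in double precision
            break
        term = z ** k / gamma(x)
        s += term
        if abs(term) < tol and k > 0:
            break
    return s
```

Two classical identities make handy sanity checks: E_{1,1}(z) = e^z and E_{2,1}(z) = cosh(√z).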

3.3.4 Properties of initial time translation

In the above presentations of fractional-order derivatives, some of the definitions are made with a general initial time t0, while others specify t0 = 0. In order to extend all the definitions and properties to arbitrary t0, a new theorem is proposed in this section. Let us review the Riemann–Liouville fractional-order definition first:

$${}^{\rm RL}_{\ t_0}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}\frac{{\rm d}^n}{{\rm d}t^n}\int_{t_0}^{t}\frac{f(\tau)}{(t-\tau)^{1+\alpha-n}}\,{\rm d}\tau. \tag{3.16}$$


Theorem 3.4. If t0 ≠ 0, and the Riemann–Liouville fractional-order derivative is defined as in (3.16), then

$${}^{\rm RL}_{\ t_0}D_t^{\alpha} f(t) = {}^{\rm RL}_{\ \ 0}D_s^{\alpha} f(s+t_0), \tag{3.17}$$

where s = t − t0.

Proof. Substituting τ = θ + t0 into the right-hand side of (3.16), it is found that

$$\frac{1}{\Gamma(n-\alpha)}\frac{{\rm d}^n}{{\rm d}t^n}\int_{t_0}^{t}\frac{f(\tau)}{(t-\tau)^{1+\alpha-n}}\,{\rm d}\tau = \frac{1}{\Gamma(n-\alpha)}\frac{{\rm d}^n}{{\rm d}t^n}\int_{0}^{t-t_0}\frac{f(\theta+t_0)}{(t-t_0-\theta)^{1+\alpha-n}}\,{\rm d}\theta. \tag{3.18}$$

Substituting t = s + t0 into the right-hand side of (3.18), one obtains

$$\frac{1}{\Gamma(n-\alpha)}\frac{{\rm d}^n}{{\rm d}t^n}\int_{0}^{t-t_0}\frac{f(\theta+t_0)}{(t-t_0-\theta)^{1+\alpha-n}}\,{\rm d}\theta = \frac{1}{\Gamma(n-\alpha)}\frac{{\rm d}^n}{{\rm d}s^n}\int_{0}^{s}\frac{f(\theta+t_0)}{(s-\theta)^{1+\alpha-n}}\,{\rm d}\theta. \tag{3.19}$$

Let F(θ) = f(θ + t0). The right-hand side of (3.19) can further be written as

$$\frac{1}{\Gamma(n-\alpha)}\frac{{\rm d}^n}{{\rm d}s^n}\int_{0}^{s}\frac{F(\theta)}{(s-\theta)^{1+\alpha-n}}\,{\rm d}\theta = {}^{\rm RL}_{\ \ 0}D_s^{\alpha} F(s) = {}^{\rm RL}_{\ \ 0}D_s^{\alpha} f(s+t_0).$$

Hence, the theorem is proven.

Example 3.7. Derive the Riemann–Liouville derivative of the exponential function in (3.13) with the lower bound at t0, where

$${}^{\rm RL}_{\ \ 0}D_t^{\alpha}\, {\rm e}^{\lambda t} = t^{-\alpha} E_{1,1-\alpha}(\lambda t).$$

Solution. With the variable substitution principle in Theorem 3.4, and s = t − t0, it is immediately found that

$${}^{\rm RL}_{\ t_0}D_t^{\alpha}\, {\rm e}^{\lambda t} = {}^{\rm RL}_{\ \ 0}D_s^{\alpha}\, {\rm e}^{\lambda(s+t_0)} = {\rm e}^{\lambda t_0}\,{}^{\rm RL}_{\ \ 0}D_s^{\alpha}\, {\rm e}^{\lambda s} = {\rm e}^{\lambda t_0} s^{-\alpha} E_{1,1-\alpha}(\lambda s).$$

3.3.5 Numerical implementation of the Riemann–Liouville definition

An algorithm is proposed here to evaluate fractional-order integrals and derivatives with the Riemann–Liouville definition. If the step-size is h, the αth-order derivative can be obtained with the following procedure.

Algorithm 3.4 (Riemann–Liouville derivative algorithm). Proceed as follows.
(1) Compute the vector f = [f(0), f(h), f(2h), . . . ].
(2) Set n = ⌈α⌉ and compute the coefficients w_i = h t_i^{n−α}/Γ(n − α + 1).
(3) Compute the vector s with

$$s_j = \sum_{i=1}^{j} w_{j-i+1} f_i - \frac12 w_1 f_j - \frac12 w_j f_1. \tag{3.20}$$

(4) Compute the (n + 1)st-order difference of s with step-size h.

The MATLAB implementation of the above algorithm can be written as follows. In fact, apart from the algorithm itself, more actions are taken in the code; for instance, integer-order and fractional-order integrals can also be evaluated.

function [dy,t]=rlfdiff(y,t,r)
   h=t(2)-t(1); n=length(t); dy1=zeros(1,n); y=y(:); t=t(:);
   if r>-1, m=ceil(r)+1; p=m-r; y3=t.^(p-1);
   elseif r==-1, n=length(t);
      yy=0.5*(y(1:n-1)+y(2:n)).*diff(t); dy(1)=0;
      for i=2:n, dy(i)=dy(i-1)+yy(i-1); end, return
   else, m=-r; y3=t.^(m-1); end
   for i=1:n, dy1(i)=y(i:-1:1).'*(y3(1:i)); end
   if r>-1, dy=diff(dy1,m)/(h^(m-1))/gamma(p); t=t(1:n-m);
   else, dy=dy1*h/gamma(m); end

The syntax of the function is [y1, t] = rlfdiff(y, t, γ), which is quite similar to that of glfdiff(). The only difference is that two vectors y1 and t are returned, and the actual length of y1 may be shorter than that of y, since the MATLAB function diff() is used.

Example 3.8. It is known from Example 3.1 that the analytical expression of the 0.75th-order derivative of a step function is D_t^{0.75} y(t) = t^{−0.75}/Γ(0.25). Compare the accuracies of the numerical methods for the Grünwald–Letnikov and Riemann–Liouville definitions under the step-size h = 0.01.

Solution. The theoretical 0.75th-order derivative, as well as those obtained with the Grünwald–Letnikov and Riemann–Liouville definitions, can be computed with the following statements, with the results shown in Figure 3.10. It can be seen that the Riemann–Liouville result is not as accurate as the Grünwald–Letnikov solution.

>> t0=0:0.0001:3; y0=t0.^-0.75/gamma(0.25); h=0.01;
   t1=0:h:3; y=ones(size(t1));
   y1=glfdiff(y,t1,0.75); [y2,t2]=rlfdiff(y,t1,0.75);
   plot(t0,y0,t1,y1,'--',t2,y2,':'), ylim([0 6])
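A simple way to see the Riemann–Liouville integral (3.10) at work is a piecewise-constant quadrature. The Python sketch below is our own helper (not the book's rlfdiff); treating the samples as constant on each panel makes the quadrature exact for a step input:

```python
from math import gamma

def rl_integral(y, h, alpha):
    """RL fractional integral of order alpha > 0 of samples y at t_k = k*h,
    with y held piecewise constant on each panel:
    I^alpha y(t_m) = 1/Gamma(alpha+1) * sum_k y_k [(t_m-t_k)^a - (t_m-t_{k+1})^a]."""
    n = len(y)
    out = [0.0]
    for m in range(1, n):
        acc = 0.0
        for k in range(m):
            acc += y[k] * (((m - k) * h) ** alpha - ((m - k - 1) * h) ** alpha)
        out.append(acc / gamma(alpha + 1))
    return out

# for y = 1 the sum telescopes to t^alpha/Gamma(alpha+1),
# matching Cauchy's formula from Example 3.1
v = rl_integral([1.0] * 101, 0.01, 0.75)
```

For a constant input the telescoping makes the result exact to rounding; for general inputs the scheme is first-order accurate, like the GL sum.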

3.4 High-precision computation algorithms of fractional-order derivatives and integrals

In order to increase the accuracy of the computation, polynomial expansions should be used in place of the binomial expression. Generating functions with an accuracy

Fig. 3.10: The 0.75th-order Riemann–Liouville derivative (the exact and Grünwald–Letnikov curves coincide; the Riemann–Liouville curve deviates).

of o(h^p), where p = 1, 2, . . . , 6, can be used [54]. To further increase the accuracy, the generating function for an arbitrary integer p is derived here. Based on the generating functions, a recursive algorithm for the evaluation of fractional-order derivatives and integrals is proposed and demonstrated.

3.4.1 Construction of generating functions with arbitrary orders

Definition 3.6. For higher precision, as indicated in [54], the generating functions defined by

$$g_p(z) = \sum_{k=1}^{p}\frac{1}{k}(1-z)^k \tag{3.21}$$

may lead to the computation of fractional-order derivatives and integrals with an accuracy of o(h^p). The generating functions for p = 1, 2, . . . , 6 given by Lubich [31] are listed in Table 3.3. If a higher-order generating function is expected, the following new theorem is presented.

Tab. 3.3: The low-order generating functions.

order p   generating function g_p(z)
1         1 − z
2         3/2 − 2z + (1/2)z²
3         11/6 − 3z + (3/2)z² − (1/3)z³
4         25/12 − 4z + 3z² − (4/3)z³ + (1/4)z⁴
5         137/60 − 5z + 5z² − (10/3)z³ + (5/4)z⁴ − (1/5)z⁵
6         147/60 − 6z + (15/2)z² − (20/3)z³ + (15/4)z⁴ − (6/5)z⁵ + (1/6)z⁶

Theorem 3.5. The pth-order generating function g_p(z) can be expressed as a polynomial

$$g_p(z) = \sum_{k=0}^{p} g_k z^k \tag{3.22}$$

whose coefficients g_k can be evaluated from the following linear equation:

$$\begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & 2 & 3 & \cdots & p+1 \\
1 & 2^2 & 3^2 & \cdots & (p+1)^2 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 2^p & 3^p & \cdots & (p+1)^p
\end{bmatrix}
\begin{bmatrix} g_0 \\ g_1 \\ g_2 \\ \vdots \\ g_p \end{bmatrix}
= -\begin{bmatrix} 0 \\ 1 \\ 2 \\ \vdots \\ p \end{bmatrix}. \tag{3.23}$$

Proof. From (3.21) and (3.22), it is found that

$$\sum_{k=0}^{p} g_k z^k = \sum_{k=1}^{p}\frac{1}{k}(1-z)^k. \tag{3.24}$$

Substituting z = 1 into (3.24), it is found that

$$\sum_{k=0}^{p} g_k = 0. \tag{3.25}$$

Multiplying both sides of (3.24) by z and then taking the first-order derivative with respect to z, it is found that

$$\sum_{k=0}^{p}(k+1)g_k z^k = \sum_{k=1}^{p}\frac{1}{k}(1-z)^k - z\sum_{k=1}^{p}(1-z)^{k-1}. \tag{3.26}$$

Substituting z = 1 into (3.26), it is found that

$$\sum_{k=0}^{p}(k+1)g_k = -1.$$

Multiplying both sides of (3.26) by z and then taking the first-order derivative with respect to z, it is found that

$$\sum_{k=0}^{p}(k+1)^2 g_k z^k = \sum_{k=1}^{p}\frac{1}{k}(1-z)^k - 3z\sum_{k=1}^{p}(1-z)^{k-1} + z^2\sum_{k=2}^{p}(k-1)(1-z)^{k-2}. \tag{3.27}$$

Substituting z = 1 into (3.27), it is found that

$$\sum_{k=0}^{p}(k+1)^2 g_k = -2.$$

Repeating the above procedure, the following equations are obtained:

g_0 + g_1 + g_2 + ⋯ + g_p = 0,
g_0 + 2g_1 + 3g_2 + ⋯ + (p+1)g_p = −1,
g_0 + 2²g_1 + 3²g_2 + ⋯ + (p+1)²g_p = −2,
  ⋮
g_0 + 2^p g_1 + 3^p g_2 + ⋯ + (p+1)^p g_p = −p.

It can be seen that this system in matrix form is exactly (3.23).
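A Python counterpart of the coefficient computation can solve the system (3.23) in exact rational arithmetic (names are ours; the book's genfunc uses MATLAB symbolic tools instead):

```python
from fractions import Fraction

def genfunc(p):
    """Coefficients [g_0, ..., g_p] of the p-th order generating function,
    from the Vandermonde system (3.23), via Gauss-Jordan elimination
    over exact fractions."""
    n = p + 1
    # augmented rows encode: sum_k (k+1)^j g_k = -j,  j = 0..p
    A = [[Fraction(k + 1) ** j for k in range(n)] + [Fraction(-j)]
         for j in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

# genfunc(2) reproduces the p = 2 row of Table 3.3: 3/2 - 2z + z^2/2
```

Exact fractions sidestep the ill-conditioning of Vandermonde matrices that would plague a floating-point solve for larger p.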

Tab. 3.4: The higher-order generating functions.

p    generating function g_p(z)
6    49/20 − 6z + (15/2)z² − (20/3)z³ + (15/4)z⁴ − (6/5)z⁵ + (1/6)z⁶
7    363/140 − 7z + (21/2)z² − (35/3)z³ + (35/4)z⁴ − (21/5)z⁵ + (7/6)z⁶ − (1/7)z⁷
8    761/280 − 8z + 14z² − (56/3)z³ + (35/2)z⁴ − (56/5)z⁵ + (14/3)z⁶ − (8/7)z⁷ + (1/8)z⁸
9    7129/2520 − 9z + 18z² − 28z³ + (63/2)z⁴ − (126/5)z⁵ + 14z⁶ − (36/7)z⁷ + (9/8)z⁸ − (1/9)z⁹
10   7381/2520 − 10z + (45/2)z² − 40z³ + (105/2)z⁴ − (252/5)z⁵ + 35z⁶ − (120/7)z⁷ + (45/8)z⁸ − (10/9)z⁹ + (1/10)z¹⁰
11   83711/27720 − 11z + (55/2)z² − 55z³ + (165/2)z⁴ − (462/5)z⁵ + 77z⁶ − (330/7)z⁷ + (165/8)z⁸ − (55/9)z⁹ + (11/10)z¹⁰ − (1/11)z¹¹

Based on the theorem, higher-order generating functions can be derived as shown in Table 3.4, where p = 6 is given again because its first term, 147/60, simplifies to 49/20 in the new table. The MATLAB implementation of the algorithm is given below,

function g=genfunc(p)
   a=1:p+1; A=rot90(vander(a)); g=(1-a)*inv(sym(A'));

where symbolic computation is used for accurate coefficient generation. The syntax of the function is g = genfunc(p), where p is the order of the computation; the polynomial coefficient vector g is returned in symbolic form.

In fractional-order computation, the fractional-order generating function g_p^α(z) is usually used, which can alternatively be expressed as (g_p(z))^α. Recalling the Grünwald–Letnikov definition

$${}^{\rm GL}_{\ t_0}D_t^{\alpha} f(t) = \lim_{h\to 0}\frac{1}{h^{\alpha}}\sum_{j=0}^{[(t-t_0)/h]}(-1)^j\binom{\alpha}{j} f(t-jh), \tag{3.28}$$

the high-precision o(h^p) fractional-order derivatives can be introduced.

Definition 3.7. The high-precision o(h^p) fractional-order derivatives can be evaluated approximately with

$${}^{\rm GL}_{\ t_0}D_t^{\alpha} f(t) \approx \frac{1}{h^{\alpha}}\sum_{k=0}^{[(t-t_0)/h]} w_k^{(\alpha,p)} f(t-kh), \tag{3.29}$$

where the coefficient vector w^{(α,p)} = [w_0^{(α,p)}, w_1^{(α,p)}, w_2^{(α,p)}, . . . ] can be computed with Lubich's fractional-order linear multi-step algorithm. When no confusion can arise, the superscript (α, p) may be dropped. The entries of w are the coefficients of the Taylor series expansion of the generating function. If the first-order generating function is chosen, the coefficient vector w can be computed recursively from

$$w_0 = 1, \qquad w_k = \Bigl(1-\frac{\alpha+1}{k}\Bigr)w_{k-1}. \tag{3.30}$$

The accuracy of the fractional-order linear multi-step algorithm is proven by Lubich [31]; the conclusion is given in Lemma 3.2.

Lemma 3.2. If f(t) = t^{v−1}, v > 0, and w are the Taylor series coefficients of the generating function of order p, then

$${}^{\rm RL}_{\ t_0}D_t^{\alpha} f(t) = h^{-\alpha}\sum_{k=0}^{[(t-t_0)/h]} w_k f(t-kh) + o(h^v) + o(h^p). \tag{3.31}$$

3.4.2 FFT-based algorithm

As shown in the above analysis, the Taylor series algorithm is not suitable for computing the vector w in practice. An algorithm based on the fast Fourier transform is suggested by Podlubny [54]. A brief introduction to this algorithm is given here, and the reason why its computing error is large will be explored. The coefficients of the Taylor series expansion of the generating function g_p^α(z) are w = [w_0, w_1, w_2, . . . ]; therefore, the Taylor series expansion of g_p^α(z) can be written as

$$g_p^{\alpha}(z) = \sum_{k=0}^{\infty} w_k z^k. \tag{3.32}$$

Substituting z = e^{−jφ} into (3.32), it is found that

$$g_p^{\alpha}({\rm e}^{-{\rm j}\varphi}) = \sum_{k=0}^{\infty} w_k {\rm e}^{-{\rm j}k\varphi}. \tag{3.33}$$

Definition 3.8. The functions e^{jnφ} and e^{−jmφ} are said to be orthogonal if their inner product satisfies

$$\langle {\rm e}^{{\rm j}n\varphi}, {\rm e}^{-{\rm j}m\varphi}\rangle = \frac{1}{2\pi}\int_{0}^{2\pi}{\rm e}^{{\rm j}n\varphi}{\rm e}^{-{\rm j}m\varphi}\,{\rm d}\varphi = \begin{cases} 0, & m \ne n, \\ 1, & m = n. \end{cases} \tag{3.34}$$

Computing the inner product ⟨g_p^α(e^{−jφ}), e^{jkφ}⟩, it is found that

$$w_k = \frac{1}{2\pi}\int_{0}^{2\pi} g_p^{\alpha}({\rm e}^{-{\rm j}\varphi})\,{\rm e}^{{\rm j}k\varphi}\,{\rm d}\varphi. \tag{3.35}$$

Equation (3.35) shows that the w_k are the Fourier coefficients of g_p^α(e^{−jφ}); therefore, the coefficient vector w can be evaluated efficiently with the fast Fourier transform (FFT) algorithm. With this idea for computing the vector w, the following algorithm is given.

Algorithm 3.5 (Fast Fourier transform-based algorithm). Proceed as follows.
(1) Compute the values of the function y = f(t) at the sampling points with the step-size h, obtaining the vector f.
(2) Choose the generating function g_p^α(z).


(3) Compute the coefficients w_k in (3.35) with the FFT.
(4) Compute fractional-order derivatives and integrals with (3.29).

The MATLAB function implementing the above algorithm can be written as follows.

function dy=glfdiff_fft(y,t,gam,p)
   h=t(2)-t(1); y=y(:); t=t(:); n=length(t);
   dy=zeros(1,n); g=double(genfunc(p)); T=2*pi/(n-1);
   if y(1)~=0 & gam>0, dy(1)=sign(y(1))*Inf; end
   tt=[0:T:2*pi]; F=g(1); f1=exp(1i*tt); f0=f1;
   for i=2:p+1, F=F+g(i)*f1; f1=f1.*f0; end
   w=real(fft(F.^gam))*T/2/pi;
   for k=2:n, dy(k)=w(1:k)*[y(k:-1:1)]/h^gam; end

Because the FFT is used in Algorithm 3.5, the obtained Fourier coefficients are not exact; this is why the computing error is large. Example 3.9 is presented to illustrate this phenomenon.

Example 3.9. Compute the numerical solution of the expression y1(t) = RL_0 D_t^{0.6} e^{−t} in the interval [0, 5].

Solution. The theoretical expression of the function is y1(t) = t^{−0.6} E_{1,0.4}(−t), where E_{·,·}(·) is the Mittag-Leffler function with two parameters. The theoretical and numerical results are shown in Figure 3.11. It can be seen that the difference between the curves is very large.

>> t=0:0.001:5; y0=t.^-0.6.*ml_func([1,0.4],-t,0,eps);
   h=0.01; t1=0:h:5; y1=exp(-t1); dy1=glfdiff_fft(y1,t1,0.6,6);
   h=0.001; t2=0:h:5; y2=exp(-t2); dy2=glfdiff_fft(y2,t2,0.6,6);
   plot(t,y0,t1,dy1,'-',t2,dy2,':'), ylim([-1 4])

(two curves: the analytical solution and the erroneous result by FFT)

Fig. 3.11: The erroneous numerical solutions with the FFT-based algorithm.

It is also noticed that Algorithm 3.5 is very fast; however, for the two different step-sizes h = 0.01 and h = 0.001, the sixth-order algorithm failed, especially when t is large. The reason is that the vector w computed with Algorithm 3.5 is not as accurate as expected, which in turn leads to an erroneous fractional-order derivative in (3.29).
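The behaviour can be reproduced outside MATLAB. Below is a pure-Python sketch of the coefficient computation of (3.35), using a slow direct DFT instead of fft and restricted to the first-order generating function g(z) = 1 - z; the function name and grid size are illustrative, not part of the FOTF toolbox:

```python
import math, cmath

def gl_weights_dft(alpha, n, N=2048):
    """Recover the Taylor coefficients w_k of (1 - z)^alpha as the Fourier
    coefficients of g(e^{-j*phi})^alpha, as in (3.35); plain O(n*N) DFT."""
    w = []
    for k in range(n):
        s = 0j
        for m in range(N):
            phi = 2 * math.pi * m / N
            g = 1 - cmath.exp(-1j * phi)        # g_1(e^{-j*phi})
            s += g ** alpha * cmath.exp(1j * k * phi)
        w.append((s / N).real)
    return w

w = gl_weights_dft(0.5, 4)
# exact values are (-1)^k C(0.5, k): 1, -0.5, -0.125, -0.0625;
# the DFT recovers them only approximately, which is the effect seen in Example 3.9
```

With a fine frequency grid the recovered w_k come close to the exact binomial values, but they are never exact; the residual aliasing error is precisely the inaccuracy discussed above.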

3.4.3 A new recursive formula

As seen in the above analysis, the computing error of Algorithm 3.5 is large. To compute more accurate results within a reasonable time, a new recursive algorithm for the coefficient vector w is derived here. Consider the right-hand side of (3.32) as an infinite series of the variable z in some interval of convergence, where the generating function g_p^{\alpha}(z) is differentiable. Differentiating both sides of (3.32) with respect to z, it is found that

\frac{\mathrm{d}}{\mathrm{d}z} g_p^{\alpha}(z) = \frac{\mathrm{d}}{\mathrm{d}z}\sum_{k=0}^{\infty} w_k z^k = \sum_{k=0}^{\infty} \frac{\mathrm{d}}{\mathrm{d}z}\bigl(w_k z^k\bigr).   (3.36)

It follows immediately that

\alpha g_p^{\alpha-1}(z)\,\frac{\mathrm{d}g_p(z)}{\mathrm{d}z} = \sum_{k=1}^{\infty} k w_k z^{k-1}.   (3.37)

Multiplying both sides of (3.37) by g_p(z) gives

\alpha g_p^{\alpha}(z)\,\frac{\mathrm{d}g_p(z)}{\mathrm{d}z} = g_p(z)\sum_{k=1}^{\infty} k w_k z^{k-1}.   (3.38)

Substituting g_p^{\alpha}(z) = \sum_{k=0}^{\infty} w_k z^k into the left-hand side of (3.38), it is found that

\alpha\,\frac{\mathrm{d}g_p(z)}{\mathrm{d}z}\sum_{k=0}^{\infty} w_k z^k = g_p(z)\sum_{k=1}^{\infty} k w_k z^{k-1}.   (3.39)

Theorem 3.6. If the generating function g_p^{\alpha}(z) can be written as

g_p^{\alpha}(z) = (g_0 + g_1 z + \cdots + g_p z^p)^{\alpha},   (3.40)

and its Taylor series expansion can be written as

g_p^{\alpha}(z) = \sum_{k=0}^{\infty} w_k z^k,   (3.41)

then w_0 = g_0^{\alpha} for k = 0, and for k > 0 the subsequent coefficients w_k can be evaluated recursively from

w_k = -\frac{1}{g_0}\sum_{i=1}^{p} g_i\Bigl(1 - i\,\frac{1+\alpha}{k}\Bigr)w_{k-i}, \qquad w_k = 0 \text{ for } k < 0.   (3.42)
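Before the proof, a quick numerical sanity check of the recursion. For p = 1 the generating function is g(z) = 1 - z, so (3.42) collapses to w_k = (1 - (1+α)/k)w_{k-1}, which must reproduce the binomial coefficients (-1)^k C(α, k) of (1 - z)^α exactly. A small pure-Python check (names are illustrative):

```python
def w_recursive_p1(alpha, n):
    # (3.42) with g0 = 1, g1 = -1: w_k = (1 - (1 + alpha)/k) * w_{k-1}
    w = [1.0]
    for k in range(1, n):
        w.append((1 - (1 + alpha) / k) * w[-1])
    return w

def w_binomial(alpha, n):
    # exact Taylor coefficients of (1 - z)^alpha: w_k = (-1)^k * C(alpha, k)
    w, c = [], 1.0
    for k in range(n):
        w.append(c)
        c *= (k - alpha) / (k + 1)
    return w

assert all(abs(a - b) < 1e-14
           for a, b in zip(w_recursive_p1(0.5, 8), w_binomial(0.5, 8)))
```

The two recursions are term-by-term identical, so agreement holds to machine precision, not merely approximately.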


Proof. Let z = 0 in (3.40); it is found that w_0 = g_0^{\alpha}. Rewriting (3.40), one has

(g_0 + g_1 z + \cdots + g_p z^p)^{\alpha} = \sum_{k=-\infty}^{\infty} w_k z^k,   (3.43)

where w_k = 0 for k < 0. Taking the first-order derivative of both sides of (3.43) with respect to z, one obtains

\alpha(g_1 + 2g_2 z + \cdots + p g_p z^{p-1})(g_0 + g_1 z + \cdots + g_p z^p)^{\alpha-1} = \sum_{k=-\infty}^{\infty} k w_k z^{k-1}.   (3.44)

Multiplying both sides of (3.44) by (g_0 + g_1 z + \cdots + g_p z^p) yields

\alpha(g_1 + 2g_2 z + \cdots + p g_p z^{p-1})(g_0 + g_1 z + \cdots + g_p z^p)^{\alpha} = (g_0 + g_1 z + \cdots + g_p z^p)\sum_{k=-\infty}^{\infty} k w_k z^{k-1}.   (3.45)

Substituting (3.43) into (3.45), it is found that

\alpha(g_1 + 2g_2 z + \cdots + p g_p z^{p-1})\sum_{k=-\infty}^{\infty} w_k z^k = (g_0 + g_1 z + \cdots + g_p z^p)\sum_{k=-\infty}^{\infty} k w_k z^{k-1}.   (3.46)

With the shift property that multiplying the series by z^d replaces the coefficient w_k by w_{k-d}, the left-hand side of (3.46) can be written as

\sum_{k=-\infty}^{\infty} \alpha(g_1 w_k + 2g_2 w_{k-1} + \cdots + p g_p w_{k-p+1})z^k,   (3.47)

while the right-hand side of (3.46) becomes

\sum_{k=-\infty}^{\infty} \bigl[(k+1)g_0 w_{k+1} + k g_1 w_k + \cdots + (k-p+1)g_p w_{k-p+1}\bigr]z^k.   (3.48)

Equating the coefficients of the same power of z in (3.47) and (3.48), it can be found that

\alpha(g_1 w_k + 2g_2 w_{k-1} + \cdots + p g_p w_{k-p+1}) = (k+1)g_0 w_{k+1} + k g_1 w_k + \cdots + (k-p+1)g_p w_{k-p+1}.   (3.49)

Shifting (3.49) backward by one step, i.e. replacing k by k - 1 and rearranging, it is found that

g_0 k w_k + g_1(k - 1 - \alpha)w_{k-1} + \cdots + g_p(k - p - p\alpha)w_{k-p} = 0.

If k \neq 0, it is found that

w_k = -\frac{1}{g_0}\Bigl[g_1\Bigl(1 - \frac{1+\alpha}{k}\Bigr)w_{k-1} + g_2\Bigl(1 - 2\,\frac{1+\alpha}{k}\Bigr)w_{k-2} + \cdots + g_p\Bigl(1 - p\,\frac{1+\alpha}{k}\Bigr)w_{k-p}\Bigr],

where w_k = 0 for k < 0. This is the recursive formula.

In the actual algorithm implementation, the condition w_k = 0 for k < 0 may lead to negative subscripts. Therefore, for the convenience of numerical implementation, the following corollary is proposed.

Corollary 3.1. If the generating function and its Taylor series expansion are again given in (3.40) and (3.41), respectively, the Taylor series coefficients w_k can be evaluated recursively from

w_k = -\frac{1}{g_0}\sum_{i=1}^{p} g_i\Bigl(1 - i\,\frac{1+\alpha}{k}\Bigr)w_{k-i}   (3.50)

for k = p, p+1, p+2, \ldots, and the initial coefficients can be evaluated from

w_m = -\frac{1}{g_0}\sum_{i=1}^{m} g_i\Bigl(1 - i\,\frac{1+\alpha}{m}\Bigr)w_{m-i}

for m = 1, 2, \ldots, p - 1, and w_0 = g_0^{\alpha}.

The commonly used recursive formulae for p = 1, 2, \ldots, 6 can be evaluated directly from Corollary 3.1, and the results are given in Table 3.5. When k < p, the formulae for computing the initial coefficients are shown in Table 3.6.

Algorithm 3.6 (Direct recursive algorithm). Proceed as follows.
(1) Compute the values of the function y = f(t) at the sampling points with the step-size h; obtain the vector f.
(2) Choose the generating function g_p^{\alpha}(z); compute the vector w with the recursive formulae in Corollary 3.1.
(3) Compute fractional-order derivatives with (3.29).

Tab. 3.5: The recursive formulae for w_k, where k = p, p + 1, \ldots.

p = 1:  w_k = (1 - (1+α)/k) w_{k-1}
p = 2:  w_k = (1/3)[4(1 - (1+α)/k)w_{k-1} - (1 - 2(1+α)/k)w_{k-2}]
p = 3:  w_k = (1/11)[18(1 - (1+α)/k)w_{k-1} - 9(1 - 2(1+α)/k)w_{k-2} + 2(1 - 3(1+α)/k)w_{k-3}]
p = 4:  w_k = (1/25)[48(1 - (1+α)/k)w_{k-1} - 36(1 - 2(1+α)/k)w_{k-2} + 16(1 - 3(1+α)/k)w_{k-3} - 3(1 - 4(1+α)/k)w_{k-4}]
p = 5:  w_k = (1/137)[300(1 - (1+α)/k)w_{k-1} - 300(1 - 2(1+α)/k)w_{k-2} + 200(1 - 3(1+α)/k)w_{k-3} - 75(1 - 4(1+α)/k)w_{k-4} + 12(1 - 5(1+α)/k)w_{k-5}]
p = 6:  w_k = (1/147)[360(1 - (1+α)/k)w_{k-1} - 450(1 - 2(1+α)/k)w_{k-2} + 400(1 - 3(1+α)/k)w_{k-3} - 225(1 - 4(1+α)/k)w_{k-4} + 72(1 - 5(1+α)/k)w_{k-5} - 10(1 - 6(1+α)/k)w_{k-6}]

Tab. 3.6: The formulae for the initial terms w_k when k < p.

p = 1:  w_0 = 1
p = 2:  w_0 = (3/2)^α,  w_1 = -4αw_0/3
p = 3:  w_0 = (11/6)^α,  w_1 = -18αw_0/11,  w_2 = 9αw_0/11 + 9(1-α)w_1/11
p = 4:  w_0 = (25/12)^α,  w_1 = -48αw_0/25,  w_2 = [36αw_0 + 24(1-α)w_1]/25,  w_3 = [-16αw_0 - 12(1-2α)w_1 + 16(2-α)w_2]/25
p = 5:  w_0 = (137/60)^α,  w_1 = -300αw_0/137,  w_2 = [300αw_0 + 150(1-α)w_1]/137,  w_3 = [-200αw_0 - 100(1-2α)w_1 + 100(2-α)w_2]/137,  w_4 = [75αw_0 + 50(1-3α)w_1 - 75(2-2α)w_2 + 75(3-α)w_3]/137
p = 6:  w_0 = (147/60)^α,  w_1 = -360αw_0/147,  w_2 = [450αw_0 + 180(1-α)w_1]/147,  w_3 = [-400αw_0 - 150(1-2α)w_1 + 120(2-α)w_2]/147,  w_4 = [225αw_0 + 100(1-3α)w_1 - 112.5(2-2α)w_2 + 90(3-α)w_3]/147,  w_5 = [-72αw_0 - 45(1-4α)w_1 + 80(2-3α)w_2 - 90(3-2α)w_3 + 72(4-α)w_4]/147
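The initial-term formulae can be cross-checked against a direct series expansion. For p = 2, the generating function is (3/2 - 2z + z²/2)^α = (3/2)^α (1 + u)^α with u = -(4/3)z + (1/3)z², and expanding (1 + u)^α to second order must reproduce w_1 and w_2. A small sketch under these assumptions (illustrative Python, stdlib only):

```python
alpha = 0.7
g0, g1, g2 = 1.5, -2.0, 0.5

# Table 3.6 (p = 2) value of w1, and w2 via the generic recursion (3.42)
w0 = g0 ** alpha
w1 = -4 * alpha * w0 / 3
w2 = -(g1 * (1 - (1 + alpha) / 2) * w1 + g2 * (1 - 2 * (1 + alpha) / 2) * w0) / g0

# independent expansion: coefficient of z   in (1+u)^alpha is alpha*(g1/g0),
#                        coefficient of z^2 is alpha*(g2/g0) + alpha*(alpha-1)/2*(g1/g0)**2
w1_ref = w0 * alpha * (g1 / g0)
w2_ref = w0 * (alpha * (g2 / g0) + alpha * (alpha - 1) / 2 * (g1 / g0) ** 2)
assert abs(w1 - w1_ref) < 1e-12 and abs(w2 - w2_ref) < 1e-12
```

The agreement is exact (up to rounding), since the recursion is just a restatement of the series expansion.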

The MATLAB function implementing the above algorithm is written below.

function dy=glfdiff2(y,t,gam,p)
g=double(genfunc(p)); b=1+gam; g0=g(1); n=length(t);
if strcmp(class(y),'function_handle'), y=y(t); end
h=t(2)-t(1); y=y(:); t=t(:); y0=y(1);
if y0~=0 & gam>0, dy(1)=sign(y0)*Inf; end
w=get_vecw(gam,n,g); dy=zeros(1,n);
for i=2:n, dy(i)=w(1:i)*[y(i:-1:1)]/h^gam; end

A supporting function get_vecw() is written to evaluate the vector w under the generating function g. This function will also be used by some other functions.

function w=get_vecw(gam,n,g)
p=length(g)-1; b=1+gam; g0=g(1); w=zeros(1,n); w(1)=g(1)^gam;
for m=2:p, M=m-1; dA=b/M;
   w(m)=-[g(2:m).*((1-dA):-dA:(1-b))]*w(M:-1:1).'/g0;
end
for k=p+1:n, M=k-1; dA=b/M;
   w(k)=-[g(2:(p+1)).*((1-dA):-dA:(1-p*dA))]*w(M:-1:(k-p)).'/g0;
end

Example 3.10. Compute the 0.6th-order derivative of the function in Example 3.9.

Solution. The numerical solutions of the 0.6th-order derivative can be evaluated for p = 1, 5 and 6 with the following statements, as shown in Figure 3.12. It can be seen that there exists very strong oscillation at the initial stage of the computation when p is large, albeit the error is much smaller for large values of t.

>> t=0:0.01:0.2; y=@(t)exp(-t); y0=t.^-0.6.*ml_func([1,0.4],-t,0,eps);
   y1=glfdiff2(y,t,0.6,1); y5=glfdiff2(y,t,0.6,5);
   y6=glfdiff2(y,t,0.6,6); plot(t,y0,t,y1,t,y5,t,y6)

Besides, the function glfdiff2() cannot be used in integral evaluation.

(curves for p = 1, p = 5 and p = 6)

Fig. 3.12: 0.6th-order derivatives for different values of p.
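A language-neutral sketch of get_vecw() and the convolution step of glfdiff2() (pure Python; the names mirror the MATLAB ones, but the code is illustrative). As a sanity check, with α = 1 and the p = 1 generating function the scheme reduces to the backward difference, so the first-order derivative of y = t must come out as exactly 1:

```python
def get_vecw(alpha, n, g):
    """Taylor coefficients w_k of (g0 + g1 z + ... + gp z^p)^alpha,
    computed with the recursion of Corollary 3.1."""
    p = len(g) - 1
    w = [g[0] ** alpha] + [0.0] * (n - 1)
    for k in range(1, n):
        s = 0.0
        for i in range(1, min(k, p) + 1):      # terms w_{k-i} with k-i < 0 are zero
            s += g[i] * (1 - i * (1 + alpha) / k) * w[k - i]
        w[k] = -s / g[0]
    return w

def glfdiff2(y, h, alpha, g):
    """discrete convolution of the samples with the weights, as in (3.29)"""
    w = get_vecw(alpha, len(y), g)
    return [sum(w[j] * y[i - j] for j in range(i + 1)) / h ** alpha
            for i in range(len(y))]

h = 0.01
t = [h * i for i in range(200)]
dy = glfdiff2(t, h, 1.0, [1.0, -1.0])          # p = 1, alpha = 1: backward difference
assert abs(dy[-1] - 1.0) < 1e-10
```

The min(k, p) bound plays the role of the separate start-up loop in the MATLAB listing.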

3.4.4 A better fitting at initial instances

It is found that the direct use of Algorithm 3.6 may lead to large errors in the initial period of the numerical evaluation, since for k < p some terms are missing in the recursive formulae. This is the reason why the huge initial errors occur. For the same reason, the original algorithm cannot be used at all to evaluate fractional-order integrals. In order to solve the above problem, a further improved algorithm is proposed.

In the numerical evaluation of {}^{RL}_{t_0}D_t^{\alpha} y(t), the following auxiliary function can be constructed:

u(t) = \sum_{k=0}^{p} c_k (t - t_0)^k.   (3.51)

To ensure that the first p + 1 samples in functions u(t) and y(t) are the same, the


coefficients c_k in (3.51) satisfy

\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & h & h^2 & \cdots & h^p \\ 1 & 2h & (2h)^2 & \cdots & (2h)^p \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & ph & (ph)^2 & \cdots & (ph)^p \end{bmatrix}\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ \vdots \\ c_p \end{bmatrix} = \begin{bmatrix} y(t_0) \\ y(t_0+h) \\ y(t_0+2h) \\ \vdots \\ y(t_0+ph) \end{bmatrix},   (3.52)

where h is the selected step-size. The signal y(t) can be decomposed as

y(t) = u(t) + v(t),   (3.53)

and

{}^{RL}_{t_0}D_t^{\alpha} y(t) = {}^{RL}_{t_0}D_t^{\alpha} u(t) + {}^{RL}_{t_0}D_t^{\alpha} v(t),   (3.54)

where the signal u(t) is a sum of Heaviside and power functions, and the unified analytical representation of the fractional-order derivatives and integrals of the Heaviside and power functions is

{}^{RL}_{t_0}D_t^{\alpha} (t - t_0)^k = \frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}(t - t_0)^{k-\alpha}, \qquad k = 0, 1, 2, \ldots,   (3.55)

where (3.55) can be used directly to evaluate accurate samples of {}^{RL}_{t_0}D_t^{\alpha} u(t). Since the first p + 1 samples of the signals u(t) and y(t) are assumed to be the same, the first p + 1 samples of v(t) are zero. In the computation of the first p + 1 samples of {}^{RL}_{t_0}D_t^{\alpha} v(t), although some terms are missing in Table 3.6, one also has

{}^{RL}_{t_0}D_t^{\alpha} v(t_0 + kh) = 0, \qquad k < p;

the error caused by the missing terms in Table 3.6 can thus be eliminated. Therefore, the following modified version of Algorithm 3.6 is proposed.

Algorithm 3.7 (Modified recursive algorithm). Proceed as follows.
(1) Construct the auxiliary function u(t) with (3.51); compute v(t) = y(t) - u(t).
(2) Compute the exact solutions of {}^{RL}_{t_0}D_t^{\alpha} u(t) with (3.55).
(3) Compute the numerical solutions of {}^{RL}_{t_0}D_t^{\alpha} v(t) at the sampling points with Algorithm 3.6.
(4) Compute the numerical solutions of {}^{RL}_{t_0}D_t^{\alpha} y(t) as {}^{RL}_{t_0}D_t^{\alpha} u(t) + {}^{RL}_{t_0}D_t^{\alpha} v(t) at the sampling points.

The MATLAB implementation of the algorithm is provided below, where the supporting function get_vecw() is used to evaluate the vector w.

function dy=glfdiff9(y,t,gam,p)
if strcmp(class(y),'function_handle'), y=y(t); end
y=y(:); h=t(2)-t(1); t=t(:); n=length(t);
u=0; du=0; r=(0:p)*h;
R=sym(fliplr(vander(r))); c=double(inv(R)*y(1:p+1));
for i=1:p+1
   u=u+c(i)*t.^(i-1);
   du=du+c(i)*t.^(i-1-gam)*gamma(i)/gamma(i-gam);
end
v=y-u; g=double(genfunc(p)); w=get_vecw(gam,n,g);
for i=1:n, dv(i)=w(1:i)*v(i:-1:1)/h^gam; end
dy=dv+du';
if abs(y(1))> ...   % the tail of the printed listing is illegible in this copy

Example 3.11. Compute the 0.6th-order derivative in Example 3.9 again with glfdiff9(), and assess the computing errors under different orders p.

Solution. With the step-size h = 0.01, the errors under p = 1, 2, \ldots, 6 can be evaluated with the following statements, as listed in Table 3.7.

>> t10=0.5:0.5:5; t=0:0.01:5; y10=t10.^-0.6.*ml_func([1,0.4],-t10,0,eps);
   f=@(t)exp(-t); ii=[51:50:length(t)]; T=t10';
   for p=1:6, y1=glfdiff9(f,t,0.6,p); T=[T [y1(ii)-y10]']; end

Even if the step-size h is selected as the exaggeratedly large value h = 0.1, high-precision results can still be ensured, as shown in Table 3.8, and, with higher-order generating functions, in Table 3.10.

>> t10=0.5:0.5:5; y10=t10.^-0.6.*ml_func([1,0.4],-t10,0,eps);
   t=0:0.1:5; y=exp(-t); ii=[6:5:51]; T=t10';
   for p=1:6, y1=glfdiff9(y,t,0.6,p); T=[T [y1(ii)-y10]']; end
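The essence of Algorithm 3.7 fits in a screenful of pure Python. The sketch below hardwires p = 1 (a linear auxiliary function fitted through the first two samples), uses the truncated series of the two-parameter Mittag-Leffler function as the reference, and is illustrative only; the tolerance mirrors the p = 1 error level of Table 3.7, not the higher-order accuracy of glfdiff9():

```python
import math

def ml2(a, b, x, kmax=100):
    """two-parameter Mittag-Leffler function E_{a,b}(x), truncated series"""
    return sum(x ** k / math.gamma(a * k + b) for k in range(kmax))

def glfdiff9_p1(y, h, alpha):
    """Algorithm 3.7 with p = 1: split y = u + v, u(t) = c0 + c1*t through the
    first two samples; differentiate v numerically and u analytically via (3.55)."""
    n = len(y)
    c0, c1 = y[0], (y[1] - y[0]) / h
    v = [y[i] - (c0 + c1 * i * h) for i in range(n)]
    w = [1.0]
    for k in range(1, n):                       # p = 1 weights, Table 3.5
        w.append((1 - (1 + alpha) / k) * w[-1])
    dy = [float('nan')]                         # singular at t = 0 when y(0) != 0
    for i in range(1, n):
        t = i * h
        dv = sum(w[j] * v[i - j] for j in range(i + 1)) / h ** alpha
        du = (c0 * t ** -alpha / math.gamma(1 - alpha)
              + c1 * t ** (1 - alpha) / math.gamma(2 - alpha))
        dy.append(dv + du)
    return dy

h, alpha = 0.01, 0.6
y = [math.exp(-i * h) for i in range(101)]      # t in [0, 1]
dy = glfdiff9_p1(y, h, alpha)
exact = 1.0 ** -alpha * ml2(1, 1 - alpha, -1.0)   # t^{-0.6} E_{1,0.4}(-t) at t = 1
assert abs(dy[100] - exact) < 0.01
```

Because v vanishes at the first two samples, the start-up error of the plain recursion is avoided, which is exactly the point of the auxiliary polynomial.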

Tab. 3.7: Computing errors under different orders p.

t     p=1        p=2          p=3           p=4           p=5           p=6
0.5   −0.00180   1.194×10^−5  −8.893×10^−8  7.066×10^−10  −5.85×10^−12  4.8×10^−14
1     −0.00172   1.148×10^−5  −8.586×10^−8  6.848×10^−10  −5.69×10^−12  4.8×10^−14
1.5   −0.00151   1.005×10^−5  −7.522×10^−8  6.005×10^−10  −4.99×10^−12  4.4×10^−14
2     −0.00129   8.613×10^−6  −6.446×10^−8  5.147×10^−10  −4.28×10^−12  4.7×10^−14
2.5   −0.00111   7.385×10^−6  −5.527×10^−8  4.413×10^−10  −3.7×10^−12   3.0×10^−15
3     −0.00096   6.395×10^−6  −4.785×10^−8  3.820×10^−10  −3.23×10^−12  2.1×10^−14
3.5   −0.00084   5.611×10^−6  −4.197×10^−8  3.350×10^−10  −2.93×10^−12  7×10^−14
4     −0.00075   4.993×10^−6  −3.734×10^−8  2.980×10^−10  −2.83×10^−12  1.4×10^−13
4.5   −0.00068   4.503×10^−6  −3.366×10^−8  2.688×10^−10  −3.03×10^−12  4.7×10^−13
5     −0.00062   4.110×10^−6  −3.072×10^−8  2.453×10^−10  −3.46×10^−12  4.0×10^−13

Tab. 3.8: Errors under the larger step-size h = 0.1.

t     p=1         p=2         p=3            p=4           p=5            p=6
0.5   −0.017078   0.0010339   −6.9464×10^−5  4.5263×10^−6  −1.9794×10^−7  −3.0685×10^−9
1     −0.016764   0.0010808   −7.8225×10^−5  5.9813×10^−6  −4.7312×10^−7  3.7396×10^−8
1.5   −0.014743   0.00096247  −7.0585×10^−5  5.5076×10^−6  −4.4747×10^−7  3.7261×10^−8
2     −0.012641   0.0008275   −6.0864×10^−5  4.7719×10^−6  −3.9004×10^−7  3.279×10^−8
2.5   −0.010832   0.0007085   −5.208×10^−5   4.0837×10^−6  −3.3396×10^−7  2.8098×10^−8
3     −0.0093663  0.00061127  −4.4842×10^−5  3.51×10^−6    −2.8658×10^−7  2.407×10^−8
3.5   −0.0082052  0.00053401  −3.9073×10^−5  3.0507×10^−6  −2.4846×10^−7  2.0816×10^−8
4     −0.0072896  0.00047309  −3.4522×10^−5  2.6881×10^−6  −2.1832×10^−7  1.8241×10^−8
4.5   −0.0065645  0.00042489  −3.0926×10^−5  2.4019×10^−6  −1.9455×10^−7  1.6211×10^−8
5     −0.0059845  0.00038644  −2.8063×10^−5  2.1744×10^−6  −1.7569×10^−7  1.4603×10^−8

Tab. 3.9: Errors with even higher orders p.

t     p=7             p=8             p=9             p=10            p=11
0.5   −8.1653×10^−11  −2.9697×10^−12  −1.3595×10^−13  −1.3572×10^−14  9.714×10^−17
1     −2.9148×10^−9   2.4836×10^−10   −2.0282×10^−11  7.7977×10^−13   4.8572×10^−15
1.5   −3.12×10^−9     2.4794×10^−10   −2.0988×10^−11  2.4357×10^−12   −2.0356×10^−13
2     −2.847×10^−9    2.5309×10^−10   −1.6299×10^−11  −4.0665×10^−13  −1.5415×10^−13
2.5   −2.4454×10^−9   2.501×10^−10    −4.5112×10^−11  6.7932×10^−12   5.3905×10^−13
3     −2.0587×10^−9   1.5191×10^−10   −6.3581×10^−12  3.1815×10^−11   −1.0038×10^−11
3.5   −1.7566×10^−9   4.4916×10^−11   2.3519×10^−10   −3.3692×10^−10  8.9195×10^−11
4     −1.5471×10^−9   2.2712×10^−10   −6.0421×10^−10  9.424×10^−10    −3.9803×10^−10
4.5   −1.3943×10^−9   4.9411×10^−10   −1.9055×10^−9   7.2915×10^−9    −3.4967×10^−9
5     −1.2581×10^−9   −2.1665×10^−10  1.1419×10^−8    −8.3347×10^−8   9.9516×10^−8

Tab. 3.10: Computing errors for the larger step-size h = 0.1.

t     p=6            p=7             p=8             p=9            p=10
0.5   −3.0685×10^−9  −8.1655×10^−11  −2.99×10^−12    −1.72×10^−13   −4.4×10^−14
1     3.7396×10^−8   −2.9148×10^−9   2.4837×10^−10   −2.011×10^−11  1.053×10^−12
1.5   3.7261×10^−8   −3.12×10^−9     2.4795×10^−10   −2.112×10^−11  1.506×10^−12
2     3.279×10^−8    −2.847×10^−9    2.5307×10^−10   −1.713×10^−11  −1.627×10^−12
2.5   2.8098×10^−8   −2.4454×10^−9   2.5006×10^−10   −4.254×10^−11  3.904×10^−11
3     2.407×10^−8    −2.0588×10^−9   1.5195×10^−10   −2.477×10^−12  −1.1016×10^−10
3.5   2.0816×10^−8   −1.7566×10^−9   4.5007×10^−11   1.990×10^−10   −6.854×10^−10
4     1.8241×10^−8   −1.5471×10^−9   2.2698×10^−10   −5.851×10^−10  8.608×10^−9
4.5   1.6211×10^−8   −1.3943×10^−9   4.9367×10^−10   −1.468×10^−9   −2.472×10^−8
5     1.4603×10^−8   −1.2582×10^−9   −2.162×10^−10   1.032×10^−8    −2.209×10^−7

It is also seen that, due to the double-precision scheme adopted in the computation, the errors under a higher order p may not necessarily be smaller than those with order p - 1. For this example, p = 8 is a good choice.

>> t10=0.5:0.5:5; y10=t10.^-0.6.*ml_func([1,0.4],-t10,0,eps);
   t=0:0.1:5; y=exp(-t); ii=[6:5:51]; T=t10';
   for p=7:11, y1=glfdiff9(y,t,0.6,p); T=[T [y1(ii)-y10]']; end

Example 3.12. Find the 0.6th-order integral of e^{-t} in the interval [0, 5].

Solution. It is known that the analytical solution of the integral is t^{0.6}E_{1,1.6}(-t). Again, a step-size of h = 0.01 is used; the errors of the algorithm with different p can be obtained, as shown in Table 3.11. It can be seen that the errors are also consistent with those in Example 3.11.

Tab. 3.11: Computing errors under different orders p for integrals.

t     p=1      p=2           p=3          p=4            p=5           p=6
0.5   0.00057  −3.612×10^−6  2.620×10^−8  −2.024×10^−10  1.628×10^−12  −1.3×10^−14
1     0.00146  −9.514×10^−6  7.012×10^−8  −5.510×10^−10  4.509×10^−12  −3.8×10^−14
1.5   0.00240  −1.574×10^−5  1.166×10^−7  −9.209×10^−10  7.576×10^−12  −6.5×10^−14
2     0.00330  −2.175×10^−5  1.615×10^−7  −1.279×10^−9   1.055×10^−11  −9.1×10^−14
2.5   0.00415  −2.738×10^−5  2.036×10^−7  −1.614×10^−9   1.333×10^−11  −1.2×10^−13
3     0.00494  −3.261×10^−5  2.427×10^−7  −1.926×10^−9   1.592×10^−11  −1.4×10^−13
3.5   0.00567  −3.747×10^−5  2.790×10^−7  −2.215×10^−9   1.833×10^−11  −1.6×10^−13
4     0.00635  −4.201×10^−5  3.129×10^−7  −2.485×10^−9   2.059×10^−11  −1.9×10^−13
4.5   0.00699  −4.626×10^−5  3.446×10^−7  −2.739×10^−9   2.273×10^−11  −2.2×10^−13
5     0.00759  −5.027×10^−5  3.746×10^−7  −2.977×10^−9   2.479×10^−11  −2.9×10^−13


>> t10=0.5:0.5:5; y10=t10.^0.6.*ml_func([1,1.6],-t10,0,eps);
   t=0:0.01:5; y=exp(-t); ii=[51:50:501]; T=t10';
   for p=1:6, y1=glfdiff9(y,t,-0.6,p); T=[T [y1(ii)-y10]']; end

Example 3.13. Find the second-order derivative of e^{-t} with the new algorithm, using the function glfdiff9().

Solution. It is known that the second-order derivative of e^{-t} is e^{-t} itself. The following commands can be used, and the computing errors under different orders p are listed in Table 3.12. It can be seen that the high-precision algorithm presented here also applies to the numerical evaluation of integer-order derivatives, and that the length of the derivative vector is the same as that of the original vector. For this example, p = 5 is a good choice.

>> t10=0.5:0.5:5; y10=exp(-t10);
   t=0:0.01:5; y=exp(-t); ii=[51:50:501]; T=t10';
   for p=1:6, y1=glfdiff9(y,t,2,p); T=[T [y1(ii)-y10]']; end

Tab. 3.12: Computing errors under different orders p for second-order derivatives.

t     p=1          p=2            p=3         p=4         p=5       p=6
0.5   0.006100     −4.0739×10^−5  3.06×10^−7  −2×10^−9    0         0
1     0.0037003    −2.4709×10^−5  1.86×10^−7  −1×10^−9    0         0
1.5   0.0022444    −1.4987×10^−5  1.12×10^−7  0           0         0
2     0.0013613    −9.09×10^−6    6.8×10^−8   0           0         0
2.5   0.000826     −5.513×10^−6   4.1×10^−8   0           0         0
3     0.00050      −3.344×10^−6   2.5×10^−8   −1×10^−9    0         −1×10^−9
3.5   0.000304     −2.028×10^−6   1.5×10^−8   −1×10^−9    1×10^−9   −5×10^−9
4     0.000184     −1.23×10^−6    9×10^−9     −2×10^−9    1×10^−9   −1.7×10^−8
4.5   0.000112     −7.46×10^−7    6×10^−9     −3×10^−9    4×10^−9   −2.7×10^−8
5     6.777×10^−5  −4.52×10^−7    4×10^−9     −1.1×10^−8  9×10^−9   −8.9×10^−8

3.4.5 Revisiting the matrix algorithm

Since there exist compensating terms in the initial period, the matrix formula (3.8) cannot be used directly in implementing the high-precision algorithms. An auxiliary function u(t) was introduced in (3.51), with the coefficients evaluated from (3.52), such that the Riemann–Liouville derivative of f(t) can be rewritten as

{}^{RL}_{t_0}D^{\alpha} f(t) = {}^{RL}_{t_0}D^{\alpha} u(t) + {}^{RL}_{t_0}D^{\alpha} v(t).

Therefore, the matrix form of the high-precision Riemann–Liouville derivative can be written as

{}^{RL}_{t_0}D^{\alpha} f(t) = \frac{1}{h^{\alpha}} W v + \sum_{k=0}^{p} \frac{\Gamma(k+1)c_k}{\Gamma(k+1-\alpha)}(t - t_0)^{k-\alpha},

where W is the matrix composed of the high-precision weights w_k in (3.42), and v contains the samples of the signal v(t) = y(t) - u(t). It is easily seen that the results of the matrix representation are the same as those of the conventional form in Algorithm 3.7.
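The equivalence of the matrix form and the convolution loop is mechanical: W is lower triangular with w_{i-j} in position (i, j). A tiny illustrative check (p = 1 weights, arbitrary samples):

```python
import math

alpha, n, h = 0.5, 8, 0.1
w = [1.0]
for k in range(1, n):
    w.append((1 - (1 + alpha) / k) * w[-1])      # p = 1 weights

W = [[w[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)]
v = [math.sin(0.3 * i) for i in range(n)]        # arbitrary sample vector

mat = [sum(W[i][j] * v[j] for j in range(n)) / h ** alpha for i in range(n)]
loop = [sum(w[j] * v[i - j] for j in range(i + 1)) / h ** alpha for i in range(n)]
assert max(abs(a - b) for a, b in zip(mat, loop)) < 1e-12
```

The matrix form trades O(n) memory for the convenience of expressing the whole operation as one matrix-vector product.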

3.5 Caputo definition

In the Grünwald–Letnikov and Riemann–Liouville definitions presented earlier, nonzero initial values of the function f(t) are not handled satisfactorily. Therefore, in real system modelling processes, another fractional-order derivative definition, the Caputo definition, should be adopted.

Definition 3.9. The Caputo fractional-order derivative is defined as

{}^{C}_{t_0}D_t^{\alpha} y(t) = \frac{1}{\Gamma(m-\alpha)}\int_{t_0}^{t} \frac{y^{(m)}(\tau)}{(t-\tau)^{1+\alpha-m}}\,\mathrm{d}\tau,   (3.56)

where m = \lceil\alpha\rceil is an integer.

Definition 3.10. The Caputo fractional-order integral is defined as

{}^{C}_{t_0}D_t^{-\gamma} y(t) = \frac{1}{\Gamma(\gamma)}\int_{t_0}^{t} \frac{y(\tau)}{(t-\tau)^{1-\gamma}}\,\mathrm{d}\tau.   (3.57)

It can be seen that the Caputo and Riemann–Liouville definitions of fractional-order integrals are exactly the same. Now let us first consider the Caputo derivative of an exponential function f(t) = e^{\lambda t}.

Theorem 3.7. Denote m = \lceil\alpha\rceil and \gamma = m - \alpha. The Caputo derivative of the exponential function f(t) = e^{\lambda t} can be evaluated from

{}^{C}_{0}D_t^{\alpha}\, \mathrm{e}^{\lambda t} = \lambda^m t^{\gamma} E_{1,\gamma+1}(\lambda t).   (3.58)

Proof. Since \alpha > 0, m = \lceil\alpha\rceil, \gamma = m - \alpha, the Riemann–Liouville integral of the exponential function e^{\lambda t} can be evaluated from Theorem 3.2, rewritten here as

{}^{RL}_{0}D_t^{-\gamma}\, \mathrm{e}^{\lambda t} = t^{\gamma} E_{1,1+\gamma}(\lambda t).   (3.59)

Since the Caputo integral and the Riemann–Liouville integral are exactly the same, the Caputo derivative of the exponential function e^{\lambda t} can be evaluated from

{}^{C}_{0}D_t^{\alpha}\, \mathrm{e}^{\lambda t} = {}^{RL}_{0}D_t^{-\gamma}\Bigl(\frac{\mathrm{d}^m}{\mathrm{d}t^m}\,\mathrm{e}^{\lambda t}\Bigr) = \lambda^m\, {}^{RL}_{0}D_t^{-\gamma}\, \mathrm{e}^{\lambda t} = \lambda^m t^{\gamma} E_{1,1+\gamma}(\lambda t).

Hence the theorem is proven.
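Identity (3.59) is easy to validate numerically: after the substitution s = u^{1/γ}, the weakly singular kernel of the Riemann–Liouville integral becomes smooth, and composite Simpson quadrature can be compared with the Mittag-Leffler series. A stdlib-only sketch (all names illustrative):

```python
import math

def ml2(a, b, x, kmax=100):
    """two-parameter Mittag-Leffler function E_{a,b}(x), truncated series"""
    return sum(x ** k / math.gamma(a * k + b) for k in range(kmax))

def rl_integral_exp(lam, gam, t, N=2000):
    """(1/Gamma(gam)) * int_0^t (t-tau)^(gam-1) e^(lam*tau) d tau, computed after
    the substitution t - tau = u^(1/gam), which removes the kernel singularity."""
    b = t ** gam
    hu = b / N
    f = lambda u: math.exp(lam * (t - u ** (1 / gam)))
    s = f(0) + f(b) + 4 * sum(f((2*i - 1) * hu) for i in range(1, N // 2 + 1)) \
        + 2 * sum(f(2 * i * hu) for i in range(1, N // 2))
    return s * hu / 3 / math.gamma(gam + 1)

lam, gam, t = -1.0, 0.4, 1.0
lhs = rl_integral_exp(lam, gam, t)
rhs = t ** gam * ml2(1, 1 + gam, lam * t)        # right-hand side of (3.59)
assert abs(lhs - rhs) < 1e-4
```

The same substitution trick is what makes direct quadrature of such integrals feasible despite the (t - τ)^{γ-1} singularity.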


Theorem 3.8 ([54]). The Caputo fractional-order derivatives of some commonly used functions are

{}^{C}_{0}D_t^{\alpha}\, t^{\lambda} = \begin{cases} 0, & \lambda \text{ is natural and } m > \lambda, \\ \Gamma(\lambda+1)t^{\lambda-\alpha}/\Gamma(\lambda+1-\alpha), & \text{otherwise}, \end{cases}   (3.60)

{}^{C}_{0}D_t^{\alpha}\, (t+c)^{\lambda} = \frac{\Gamma(\lambda+1)}{\Gamma(\lambda+1-m)}\,\frac{c^{\lambda-m} t^{\gamma}}{\Gamma(\gamma+1)}\, {}_2F_1(1, m-\lambda; \gamma+1; -t/c), \qquad c > 0,   (3.61)

{}^{C}_{0}D_t^{\alpha}\, \sin\lambda t = \begin{cases} \lambda^m \mathrm{j}(-1)^{m/2} t^{\gamma}\bigl[-F(\mathrm{j}\lambda t) + F(-\mathrm{j}\lambda t)\bigr]/[2\Gamma(\gamma+1)] & \text{for even } m, \\ \lambda^m (-1)^{(m-1)/2} t^{\gamma}\bigl[F(\mathrm{j}\lambda t) + F(-\mathrm{j}\lambda t)\bigr]/[2\Gamma(\gamma+1)] & \text{for odd } m, \end{cases}   (3.62)

{}^{C}_{0}D_t^{\alpha}\, \cos\lambda t = \begin{cases} \lambda^m (-1)^{m/2} t^{\gamma}\bigl[F(\mathrm{j}\lambda t) + F(-\mathrm{j}\lambda t)\bigr]/[2\Gamma(\gamma+1)] & \text{for even } m, \\ \lambda^m \mathrm{j}(-1)^{(m-1)/2} t^{\gamma}\bigl[F(\mathrm{j}\lambda t) - F(-\mathrm{j}\lambda t)\bigr]/[2\Gamma(\gamma+1)] & \text{for odd } m, \end{cases}   (3.63)

where \lambda is real, F(x) = {}_1F_1(1; \gamma+1; x), and {}_1F_1(\,\cdot\,) is the hypergeometric function defined in Section 2.2.

Corollary 3.2. The Caputo derivatives of trigonometric functions can also be obtained from

{}^{C}_{0}D_t^{\alpha}\, \sin\lambda t = \frac{(\mathrm{j}\lambda)^m t^{\gamma}}{2\mathrm{j}}\bigl[E_{1,\gamma+1}(\mathrm{j}\lambda t) - (-1)^m E_{1,\gamma+1}(-\mathrm{j}\lambda t)\bigr]

and

{}^{C}_{0}D_t^{\alpha}\, \cos\lambda t = \frac{(\mathrm{j}\lambda)^m t^{\gamma}}{2}\bigl[E_{1,\gamma+1}(\mathrm{j}\lambda t) + (-1)^m E_{1,\gamma+1}(-\mathrm{j}\lambda t)\bigr].

Proof. With the well-known Euler formulae

\sin x = \frac{1}{2\mathrm{j}}(\mathrm{e}^{\mathrm{j}x} - \mathrm{e}^{-\mathrm{j}x}), \qquad \cos x = \frac{1}{2}(\mathrm{e}^{\mathrm{j}x} + \mathrm{e}^{-\mathrm{j}x}),

the corollary can be proven directly by applying (3.58) to each exponential term. For Caputo integrals, let m = 0 and \gamma = -\alpha; the above formulae are also satisfied. The numerical evaluation approach for Caputo derivatives will be presented later.
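The corollary can be spot-checked numerically. For m = 1 the Caputo derivative of sin λt equals the Riemann–Liouville integral of order γ of λ cos λt, which can be computed by quadrature after removing the kernel singularity (here γ = 1/2, so the substitution t - τ = u² suffices). A stdlib sketch with illustrative names:

```python
import math

def ml2c(a, b, z, kmax=100):
    """complex-argument two-parameter Mittag-Leffler series"""
    return sum(z ** k / math.gamma(a * k + b) for k in range(kmax))

lam, alpha, t = 2.0, 0.5, 1.0
m, gam = 1, 0.5                                  # m = ceil(alpha), gam = m - alpha

# Corollary 3.2 value of the Caputo derivative of sin(lam*t)
lhs = ((1j * lam) ** m * t ** gam / 2j *
       (ml2c(1, gam + 1, 1j * lam * t) - (-1) ** m * ml2c(1, gam + 1, -1j * lam * t))).real

# reference: (1/Gamma(gam)) * int_0^t s^(gam-1) * lam*cos(lam*(t-s)) ds,
# with s = u^2 the integrand becomes smooth; composite Simpson quadrature
N = 4000
hu = t ** gam / N
f = lambda u: lam * math.cos(lam * (t - u ** 2))
s = f(0) + f(t ** gam) + 4 * sum(f((2*i - 1) * hu) for i in range(1, N // 2 + 1)) \
    + 2 * sum(f(2 * i * hu) for i in range(1, N // 2))
rhs = s * hu / 3 / math.gamma(gam + 1)
assert abs(lhs - rhs) < 1e-5
```

Both routes are accurate to far better than the stated tolerance, so the assertion is a genuine check of the formula, not of the numerics.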

3.6 Relationships between different definitions

Several definitions of fractional-order derivatives were presented in the previous sections. Under certain conditions some of the definitions are equivalent, and some are related to others under certain laws. In this section, the relationships between the different definitions are summarised. Numerical computations of Caputo derivatives are also presented.

3.6.1 Relationship between Grünwald–Letnikov and Riemann–Liouville definitions

It is indicated in [54] that for a variety of functions the Grünwald–Letnikov definition is equivalent to the Riemann–Liouville definition.

Theorem 3.9 ([54]). If a function f(t) is continuous and (n-1)st-order differentiable in the interval (t_0, T), the function f^{(n)}(t) is integrable in the same interval, and 0 \le m - 1 < \alpha < m \le n, then the Grünwald–Letnikov fractional-order derivative can also be written in integral form as

{}^{GL}_{t_0}D_t^{\alpha} f(t) = \sum_{j=0}^{m-1} \frac{f^{(j)}(t_0)(t-t_0)^{j-\alpha}}{\Gamma(1+j-\alpha)} + \frac{1}{\Gamma(m-\alpha)}\int_{t_0}^{t} \frac{f^{(m)}(\tau)\,\mathrm{d}\tau}{(t-\tau)^{1+\alpha-m}}.   (3.64)

Theorem 3.10. If a function f(t) is continuous and (n-1)st-order differentiable in the interval (t_0, T), the function f^{(n)}(t) is integrable in the same interval, and 0 \le m - 1 < \alpha < m \le n, then the Grünwald–Letnikov definition is equivalent to the Riemann–Liouville definition.

Proofs of the theorems can be found in [54]. In numerical terms, the functions glfdiff() and rlfdiff() can be regarded as equivalent. The high-precision function glfdiff9() is recommended for evaluating Riemann–Liouville fractional-order derivatives and integrals as well.

3.6.2 Relationships between Caputo and Riemann–Liouville definitions

When 0 < \alpha < 1, it can be seen that m = 1. In this case, (3.56) can be rewritten as

{}^{C}_{t_0}D_t^{\alpha} y(t) = \frac{1}{\Gamma(1-\alpha)}\int_{t_0}^{t} \frac{y'(\tau)}{(t-\tau)^{\alpha}}\,\mathrm{d}\tau.

If the initial value of y(t) is not zero, it can be found by comparing the Caputo and Riemann–Liouville definitions that

{}^{C}_{t_0}D_t^{\alpha} y(t) = {}^{RL}_{t_0}D_t^{\alpha}\bigl(y(t) - y(t_0)\bigr),

where the fractional-order derivative of the constant y(t_0) can be written as

{}^{RL}_{t_0}D_t^{\alpha}\, y(t_0) = \frac{y(t_0)}{\Gamma(1-\alpha)}(t-t_0)^{-\alpha}.

It can then be found that the relationship between the Caputo and Riemann–Liouville fractional-order derivative definitions is

{}^{C}_{t_0}D_t^{\alpha} y(t) = {}^{RL}_{t_0}D_t^{\alpha} y(t) - \frac{y(t_0)}{\Gamma(1-\alpha)}(t-t_0)^{-\alpha}.   (3.65)

Furthermore, if the order \alpha > 0, the following theorem can be used to describe the relationship between the two definitions.


Theorem 3.11. Denote m = \lceil\alpha\rceil. Then

{}^{C}_{t_0}D_t^{\alpha} y(t) = {}^{RL}_{t_0}D_t^{\alpha} y(t) - \sum_{k=0}^{m-1} \frac{y^{(k)}(t_0)}{\Gamma(k-\alpha+1)}(t-t_0)^{k-\alpha}.   (3.66)

Comments 3.4 (Caputo derivatives and integrals). We remark the following.
(1) It can be seen that the relationship for 0 \le \alpha \le 1 is just a special case of the above relationship.
(2) If \alpha < 0, as was pointed out earlier, the Riemann–Liouville integral is exactly the same as the Caputo integral, and they should not be distinguished in real applications.
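Relationship (3.65) also gives a direct numerical route to Caputo derivatives: evaluate the Grünwald–Letnikov derivative with the binomial weights and subtract the compensation. Since the Caputo derivative of a constant must vanish, that makes a convenient self-test. An illustrative stdlib sketch (not the toolbox code):

```python
import math

def gl_deriv(y, h, alpha):
    n = len(y)
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)    # (-1)^k C(alpha, k)
    return [sum(w[j] * y[i - j] for j in range(i + 1)) / h ** alpha
            for i in range(n)]

def caputo_deriv(y, h, alpha):
    """0 < alpha < 1: Grünwald–Letnikov value minus y(0) t^(-alpha)/Gamma(1-alpha)"""
    dy = gl_deriv(y, h, alpha)
    return [d - y[0] * (i * h) ** -alpha / math.gamma(1 - alpha)
            for i, d in enumerate(dy) if i > 0]  # the t = 0 point is singular

h, alpha = 0.01, 0.3
dy = caputo_deriv([1.0] * 401, h, alpha)         # y(t) = 1 on [0, 4]
assert abs(dy[-1]) < 1e-3                        # derivative of a constant ~ 0
```

The residual is not exactly zero because the truncated binomial sum only matches the asymptotic value t^{-α}/Γ(1-α) to first order in the step-size.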

3.6.3 Computation of Caputo fractional-order derivatives

From the presentation in Section 3.6.2, it can be seen that the difference between the Caputo and Riemann–Liouville/Grünwald–Letnikov definitions is in the handling of initial values. If the order \alpha \in (0, 1), the difference is y(t_0)(t-t_0)^{-\alpha}/\Gamma(1-\alpha). Therefore, when \alpha > 0, based on the Grünwald–Letnikov derivative evaluation code, a numerical function can be written to evaluate the Caputo derivative {}^{C}_{t_0}D_t^{\alpha} y(t).

function dy=caputo(y,t,alfa,y0,L)
% the printed listing is damaged here; the body below is restored from the
% surrounding description and may differ in detail from the original
dy=glfdiff(y,t,alfa);
if nargin<5, L=10; end                        % default number of patched points
if alfa>0
   q=ceil(alfa); t=t(:).';
   for k=1:q, dy=dy-y0(k)*t.^(k-1-alfa)/gamma(k-alfa); end   % compensation (3.66)
   dy(1:L)=spline(t(L+1:L+5),dy(L+1:L+5),t(1:L));            % patch first L points
end

In the function, L is the number of points to be interpolated at the initial stage. If \alpha > 1, the initial vector y0 should be supplied for the function y(t), with y0 = [y(t_0), y'(t_0), \ldots, y^{(q-1)}(t_0)] and q = \lceil\alpha\rceil.

Comments 3.5 (Caputo derivatives). We remark the following.
(1) In practical computations, there might exist large errors in the first few terms of the Caputo derivatives; therefore, interpolation can be performed for the first L terms. The default value of L is 10, and when the order increases, the value of L should also be increased.
(2) Since interpolation is used for the first few points, the initial curves may not be accurate, and sometimes the errors may be very large. Therefore, this function is not practical; better functions are expected.

Example 3.14. Consider again the sinusoidal function f(t) = \sin(3t+1) in Example 3.4. Find the 0.3rd-, 1.3rd- and 2.3rd-order derivatives.

Solution. It can be seen that the value of the function at t = 0 is \sin 1; thus, the difference between the two definitions is d(t) = t^{-0.3}\sin 1/\Gamma(0.7). The curves under the Grünwald–Letnikov and Caputo definitions can be obtained as shown in Figure 3.13 (a). It can be seen that, with nonzero initial conditions, the difference between the two definitions is rather large.

>> t=0:0.01:pi; y=sin(3*t+1); d=t.^(-0.3)*sin(1)/gamma(0.7);
   y1=glfdiff(y,t,0.3); y2=caputo(y,t,0.3,0);
   plot(t,y1,t,y2,'--',t,d,'-.')

Since the curve of {}^{C}_{0}D_t^{2.3} y(t) is expected, the initial values y'(0) and y''(0) are required. These values can be obtained symbolically and converted to double precision. The 1.3rd- and 2.3rd-order derivatives can then be obtained. To have good approximations for the first few terms, interpolation is performed, and the results are shown in Figure 3.13 (b).

(a) 0.3rd-order derivative (curves: Grünwald–Letnikov definition, Caputo definition, and the compensation).
(b) 1.3rd- and 2.3rd-order derivatives ({}^{C}_{0}D_t^{1.3} y(t), left axis; {}^{C}_{0}D_t^{2.3} y(t), right axis).

Fig. 3.13: Fractional-order derivatives under different orders.



>> syms t; y=sin(3*t+1); y00=sin(1);
   y10=double(subs(diff(y,t),t,0)); y20=double(subs(diff(y,t,2),t,0));
   t=0:0.01:pi; y=sin(3*t+1);
   y1=caputo(y,t,1.3,[y00 y10],10); y2=caputo(y,t,2.3,[y00,y10,y20],30);
   plotyy(t,y1,t,y2)

In fact, the use of interpolation leads to extra errors. This problem will be revisited later with the high-precision algorithm.

3.6.4 High-precision computation of Caputo derivatives

It is known from the properties that Caputo fractional-order integrals are exactly the same as Riemann–Liouville integrals; therefore, glfdiff9() can be used directly in their evaluation.

Algorithm 3.8 (High-precision Caputo derivatives). Proceed as follows.
(1) Use the high-precision function glfdiff9() to compute the Grünwald–Letnikov derivative.
(2) Compute the compensation in (3.66). Together with the Grünwald–Letnikov derivative, the high-precision Caputo derivative is obtained.

Based on the above algorithm, a new MATLAB function caputo9() can be written to compute high-precision Caputo derivatives.

function dy=caputo9(y,t,gam,p)
if gam<0 ...   % for gam < 0 the Caputo and Riemann-Liouville integrals coincide,
               % so glfdiff9() applies directly; the rest of the listing is
               % illegible in the present copy

Example 3.15. Compute the 0.6th-order Caputo derivative of e^{-t}, whose analytical value is {}^{C}_{0}D_t^{0.6}\, \mathrm{e}^{-t} = -t^{0.4}E_{1,1.4}(-t), and assess the computing errors under different orders p.

Solution. With the step-size h = 0.01, the following statements can be used.

>> t0=0.5:0.5:5; q=1; gam=q-0.6; y0=-t0.^0.4.*ml_func([1,1.4],-t0,0,eps);
   t=0:0.01:5; y=exp(-t); ii=[51:50:501]; T=t0';
   for p=1:6, y1=caputo9(y,t,0.6,p); T=[T [y1(ii)-y0']]; end

If a larger step-size h = 0.1 is used, the following statements can be issued, and the accuracy comparisons are given in Table 3.14, with p = 8 a good choice.

>> t=0:0.1:5; y=exp(-t); ii=[6:5:51]; T=t0';
   for p=4:9, y1=caputo9(y,t,0.6,p); T=[T [y1(ii)-y0']]; end


It can be seen from the comparisons that, even when the step-size is selected as large as h = 0.1, the accuracy of the computed results is still very high. Therefore, the algorithm and its implementation are effective and efficient.

Example 3.16. Revisit the problem in Example 3.14.

Solution. It is known from Example 3.14 that the initial points of the Caputo derivatives obtained with caputo() are in fact computed with the spline interpolation approach; therefore, those results are suspect. With the high-precision algorithm, reliable results are obtained, as shown in Figure 3.14. Besides, the initial values of y(t), y'(t) and y''(t) are no longer needed in the new algorithm.

>> t=0:0.01:pi; y=sin(3*t+1);
   y1=caputo9(y,t,1.3,5); y2=caputo9(y,t,2.3,5);
   plotyy(t,y1,t,y2)

It can be seen that there are differences at the initial time, and the result obtained with the new algorithm is reliable.

(curves: {}^{C}_{0}D_t^{1.3} y(t), left axis; {}^{C}_{0}D_t^{2.3} y(t), right axis)

Fig. 3.14: Recomputed Caputo derivatives.

3.7 Properties of fractional-order derivatives and integrals

3.7.1 Properties

Like integer-order derivatives, fractional-order operators have many properties. In this subsection, some of these properties are shown.

Theorem 3.12. Fractional-order differentiators are linear, i.e. one has

D_t^{\alpha}\bigl[a f(t) + b g(t)\bigr] = a D_t^{\alpha} f(t) + b D_t^{\alpha} g(t)   (3.67)

for any constants a and b.

Proof. Let us consider the Grünwald–Letnikov definition first. The linearity can be shown with the formulae

{}^{GL}_{t_0}D_t^{\alpha}\bigl[a f(t) + b g(t)\bigr] = \lim_{h\to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-t_0)/h]} (-1)^j \binom{\alpha}{j}\bigl[a f(\tau) + b g(\tau)\bigr]
   = a \lim_{h\to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-t_0)/h]} (-1)^j \binom{\alpha}{j} f(\tau) + b \lim_{h\to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-t_0)/h]} (-1)^j \binom{\alpha}{j} g(\tau)
   = a\, {}^{GL}_{t_0}D_t^{\alpha} f(t) + b\, {}^{GL}_{t_0}D_t^{\alpha} g(t),

where \tau = t - jh. It is not difficult to show that the Riemann–Liouville fractional-order differentiator is also linear:

{}^{RL}_{t_0}D_t^{\alpha}\bigl[a f(t) + b g(t)\bigr] = \frac{1}{\Gamma(k-\alpha)}\frac{\mathrm{d}^k}{\mathrm{d}t^k}\int_{t_0}^{t} \frac{a f(\tau) + b g(\tau)}{(t-\tau)^{1+\alpha-k}}\,\mathrm{d}\tau
   = \frac{a}{\Gamma(k-\alpha)}\frac{\mathrm{d}^k}{\mathrm{d}t^k}\int_{t_0}^{t} \frac{f(\tau)}{(t-\tau)^{1+\alpha-k}}\,\mathrm{d}\tau + \frac{b}{\Gamma(k-\alpha)}\frac{\mathrm{d}^k}{\mathrm{d}t^k}\int_{t_0}^{t} \frac{g(\tau)}{(t-\tau)^{1+\alpha-k}}\,\mathrm{d}\tau
   = a\, {}^{RL}_{t_0}D_t^{\alpha} f(t) + b\, {}^{RL}_{t_0}D_t^{\alpha} g(t).

It can also be shown that Caputo differentiators are linear.

Apart from linearity, there are also other properties. Several commonly used ones are listed here without proofs; detailed proofs can be found in the references.

Theorem 3.13. When \alpha = n is an integer, the fractional-order operators are exactly the same as their integer-order counterparts, and {}_{0}D_t^{0} f(t) = f(t).

Theorem 3.14 (Leibniz property of fractional-order operators). In classical calculus, two functions f(t) and g(t) satisfy the Leibniz property

\frac{\mathrm{d}^n}{\mathrm{d}t^n}\bigl[f(t)g(t)\bigr] = \sum_{k=0}^{n} \binom{n}{k} f^{(k)}(t)\, g^{(n-k)}(t).   (3.68)

If the value n is not an integer, the above Leibniz property is also satisfied. Details of proofs can be found in [54]. Theorem 3.15. If f(t) is analytic, then t0Dtα f(t) is analytic in t and α. Theorem 3.16 ([54]). If α, β > 0, the sequential fractional-order integrals satisfy RL −α RL −β t0 Dt [t0 Dt f(t)]

−β

−(α+β)

RL −α RL = RL t0 Dt [t0 Dt f(t)] = t0 Dt

f(t).

Theorem 3.17. If α, β > 0, and if the function f(t) has continuous rth-order derivatives, where r = max(⌈α⌉, ⌈β⌉), the sequential fractional-order derivatives can be evaluated from

${}^{\mathrm{RL}}_{t_0}D_t^{\alpha}\big[{}^{\mathrm{RL}}_{t_0}D_t^{\beta}f(t)\big] = {}^{\mathrm{RL}}_{t_0}D_t^{\alpha+\beta}f(t) - \sum_{k=0}^{n-1}\frac{(t-t_0)^{k-\alpha-n}}{\Gamma(1+k-n-\alpha)}\,{}^{\mathrm{RL}}_{t_0}D_t^{k+\beta-n}f(t)\Big|_{t=t_0},$

where n = ⌈β⌉. Therefore, it can be seen that the commutative law normally does not hold for fractional-order derivatives.

Corollary 3.3. If n is an integer, then the sequential derivatives of f(t) satisfy

$\frac{\mathrm{d}^n}{\mathrm{d}t^n}\big[{}^{\mathrm{RL}}_{t_0}D_t^{\alpha}f(t)\big] = {}^{\mathrm{RL}}_{t_0}D_t^{n+\alpha}f(t),$    (3.69)

${}^{\mathrm{RL}}_{t_0}D_t^{\alpha}\Big[\frac{\mathrm{d}^n}{\mathrm{d}t^n}f(t)\Big] = {}^{\mathrm{RL}}_{t_0}D_t^{n+\alpha}f(t) - \sum_{k=0}^{n-1}\frac{f^{(k)}(t_0)}{\Gamma(1+k-n-\alpha)}(t-t_0)^{k-\alpha-n}.$    (3.70)

Corollary 3.4. If the function f(t) is at rest at t = t₀, i.e. all the initial conditions of f(t) and its fractional-order derivatives are zero, then the commutative law holds:

${}^{\mathrm{RL}}_{t_0}D_t^{\alpha}\big[{}^{\mathrm{RL}}_{t_0}D_t^{\beta}f(t)\big] = {}^{\mathrm{RL}}_{t_0}D_t^{\beta}\big[{}^{\mathrm{RL}}_{t_0}D_t^{\alpha}f(t)\big] = {}^{\mathrm{RL}}_{t_0}D_t^{\alpha+\beta}f(t).$    (3.71)

Theorem 3.18. If the signal y(t) is ⌈γ⌉th-order differentiable, then

${}^{\mathrm{C}}_{t_0}D_t^{\gamma}y(t) = {}^{\mathrm{RL}}_{t_0}D_t^{\gamma-\lceil\gamma\rceil}\big[y^{(\lceil\gamma\rceil)}(t)\big].$    (3.72)

Proof. The theorem follows from Theorem 3.11 and Corollary 3.3.

3.7.2 Geometric and physical interpretations of fractional-order integrals

Although fractional calculus is a topic with more than 300 years of history, for a long time there were no widely accepted geometric and physical interpretations of fractional-order derivatives and integrals. Professor Igor Podlubny proposed a reasonable geometric explanation of the Riemann–Liouville fractional-order integral [57], and it is presented here.

Let us recall the Riemann–Liouville definition of the fractional-order integral:

${}^{\mathrm{RL}}_{0}D_t^{-\gamma}f(t) = \frac{1}{\Gamma(\gamma)}\int_0^t \frac{f(\tau)}{(t-\tau)^{1-\gamma}}\,\mathrm{d}\tau.$

An auxiliary function can be constructed as follows:

$g(\tau) = \frac{1}{\Gamma(\gamma+1)}\big[t^{\gamma} - (t-\tau)^{\gamma}\big].$

The fractional-order integral can then be expressed in the form of a Stieltjes integral:

${}^{\mathrm{RL}}_{0}D_t^{-\gamma}f(t) = \int_0^t f(\tau)\,\mathrm{d}g(\tau).$
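The Stieltjes form can be verified directly. In the Python sketch below (not from the book), the test signal f(τ) = τ is an assumption chosen because its Riemann–Liouville integral is known in closed form, ${}_0D_t^{-\gamma}t = t^{1+\gamma}/\Gamma(2+\gamma)$; a midpoint Stieltjes sum against g(τ) should reproduce that value:

```python
import math

gam, T, N = 0.75, 10.0, 20000

def f(tau):                 # test signal with a known RL integral
    return tau

def g(tau):                 # the auxiliary "distorted time" function
    return (T**gam - (T - tau)**gam) / math.gamma(gam + 1)

# midpoint Stieltjes sum for integral_0^T f(tau) dg(tau)
s = 0.0
for i in range(N):
    a, b = T * i / N, T * (i + 1) / N
    s += f((a + b) / 2) * (g(b) - g(a))

# RL integral of order gam of f(t) = t, evaluated at t = T
exact = T**(1 + gam) / math.gamma(2 + gam)
```

Because g absorbs the weak singularity of the kernel (t − τ)^{γ−1}, the plain midpoint sum converges well even near τ = t.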

The three axes of the three-dimensional plot are selected as τ, g(τ) and f(τ), respectively. The curve f(t) can be drawn on the τ–f(τ) plane, on the g(τ)–f(τ) plane, and in the three-dimensional framework. The area enclosed on the g(τ)–f(τ) plane can be regarded as the fractional-order integral of the given function, while the area on the τ–f(τ) plane is the first-order integral. A MATLAB function fence_shadow() is written to draw the corresponding three-dimensional plot.

function fence_shadow(t,f,gam,key)
x=t; z=f(x); t=x(end);
switch key
case 1
   y=(t^gam-(t-x).^gam)/gamma(1+gam);
   axis([min(x),max(x),min(y),max(y),min(z),max(z)]), hold on
   for i=1:length(x)-1
      x0=[x(i) x(i) x(i+1) x(i+1)]; z0=[0 z(i) z(i+1) 0]; y0=[0 0 0 0];
      fill3(x0,y0,z0,'c')
      x0=[x(i) x(i) x(i+1) x(i+1)]; y0=[y(i) y(i) y(i+1) y(i+1)];
      fill3(x0,y0,z0,'g')
      x0=[0 0 0 0]; fill3(x0,y0,z0,'r')
   end
case {2,3}
   axis([min(x),max(x),min(z),max(z)]), hold on
   for i=1:length(x)-1
      x1=[0:0.01:x(i+1)]; t=x1(end);
      y=(t^gam-(t-x1).^gam)/gamma(1+gam);
      if key==2, plot(x1,y);
      else, z=f(x1); plot(y,z,[y(end),y(end)],[z(end),0]);
      end
   end
end

The syntax of the function is fence_shadow(t,f,γ,key), where t is the time vector, f is the function handle of f(t), γ is the integral order, and key is the identifier corresponding to the figure numbers in [57]. The function g(τ) can be regarded as the time in the observer's own system, which is a distorted time system of the universe. If the speed under the observer's time system g(t) is f(t), then the αth-order integral is the displacement witnessed by the observer.

Example 3.17. For the function f(t) = t + 0.5 sin t, t ∈ (0, 10), show its 0.75th-order Riemann–Liouville integral graphically.

Solution. According to the above presentation, the data can be computed first, and from the data, three filled plots can be obtained as shown in Figure 3.15, where each filled plot is made up of coloured patches.

>> t=0:0.5:10; f=@(t)t+sin(t)/2; fence_shadow(t,f,0.75,1)
   grid, set(gca,'xdir','reverse'), set(gca,'ydir','reverse')

It can be seen that the area of the shadow on the left wall is the first-order integral of the function f(t), while the area of the shadow on the right wall is the expected fractional-order integral.

[In Figure 3.15, the fence plot is drawn over the axes τ, g(τ) and f(τ): the shadow on the τ–f(τ) wall has area $\int_0^{10}f(\tau)\,\mathrm{d}\tau$ (the first-order integral), while the shadow on the g(τ)–f(τ) wall has area ${}_0D_t^{-0.75}f(t)\big|_{t=10}$ (the fractional-order integral).]

Fig. 3.15: Geometric interpretation of fractional-order integral.

Although Podlubny's explanation is appealing, there is still a long way to go in providing successful and easy-to-understand geometric and physical interpretations of fractional-order derivatives and integrals under the various definitions.

4 Solutions of linear fractional-order differential equations

The systems we are familiar with are described by integer-order differential equations; similarly, fractional-order systems are usually described by fractional-order differential equations (FODEs). Therefore, studies on the solutions of fractional-order differential equations are crucial in the analysis and design of fractional-order control systems. In this chapter, mainly linear fractional-order differential equations are discussed. The general form of linear fractional-order differential equations is formulated in Section 4.1, and the Laplace transforms of fractional-order derivatives under the different definitions are summarised. It is concluded in the chapter that Caputo differential equations are more suitable for describing systems with nonzero initial conditions. A very important Laplace transform formula is presented, from which analytical solutions to many problems can be obtained. Analytical solutions to some linear fractional-order differential equations are presented in Section 4.2, where several typical forms of equations are studied. In Section 4.3, analytical solutions of linear commensurate-order equations are studied with the partial fraction expansion and Laplace transform techniques. For problems where analytical solutions are not possible, numerical solution techniques are explored. Closed-form solutions to linear fractional-order differential equations are proposed, for zero initial value problems, in Section 4.4. High-precision algorithms and a matrix approach are further presented. In Section 4.5, the solutions of Caputo differential equations with nonzero initial conditions are explored, with equivalent initial condition estimation techniques proposed such that closed-form solutions can also be obtained. Numerical solutions of irrational systems, with the help of numerical Laplace and inverse Laplace transform tools, are explored in Section 4.6.
Stability assessment of irrational systems is also investigated. Numerical solutions to nonlinear fractional-order ordinary differential equations will be discussed further in Chapter 8.

4.1 Introduction to linear fractional-order differential equations

In this section, the standard form of linear fractional-order differential equations is presented, from which the linear fractional-order transfer function is derived. Then nonzero initial value problems are formulated, and an important Laplace transform formula is summarised.

DOI 10.1515/9783110497977-004

4.1.1 General form of linear fractional-order differential equations

Definition 4.1 ([54]). A class of linear time-invariant (LTI) fractional-order differential equations can be written as

$a_1\mathscr{D}^{\eta_1}y(t) + a_2\mathscr{D}^{\eta_2}y(t) + \cdots + a_{n-1}\mathscr{D}^{\eta_{n-1}}y(t) + a_n\mathscr{D}^{\eta_n}y(t) = b_1\mathscr{D}^{\gamma_1}u(t) + b_2\mathscr{D}^{\gamma_2}u(t) + \cdots + b_m\mathscr{D}^{\gamma_m}u(t),$    (4.1)

where b_i and a_i are real coefficients, and γ_i and η_i are real orders. The signal u(t) can be regarded as the input to the system described by the equation, while y(t) can be regarded as the output. It is safe to assume that η₁ > η₂ > ⋯ > η_{n−1} > η_n and γ₁ > γ₂ > ⋯ > γ_{m−1} > γ_m.

If the initial values of y(t) and its derivatives are all zero, the fractional-order differential equations can be regarded as FODEs with zero initial conditions. In this case, the fractional-order derivatives under the different definitions are all equivalent. As in integer-order systems, a fractional-order transfer function (FOTF) can be defined as the ratio of the Laplace transforms of the output and the input; it is, in fact, the gain of the system in the s domain.

Definition 4.2. The fractional-order transfer function G(s) is defined as

$G(s) = \frac{b_1 s^{\gamma_1} + b_2 s^{\gamma_2} + \cdots + b_m s^{\gamma_m}}{a_1 s^{\eta_1} + a_2 s^{\eta_2} + \cdots + a_{n-1}s^{\eta_{n-1}} + a_n s^{\eta_n}}.$    (4.2)

Definition 4.3. A pseudo-polynomial is defined by two vectors (c, α): $p(s) = c_1 s^{\alpha_1} + c_2 s^{\alpha_2} + \cdots + c_{n-1}s^{\alpha_{n-1}} + c_n s^{\alpha_n}$. A fractional-order transfer function model is expressed as the ratio of two pseudo-polynomials. It is not only an efficient way of solving linear fractional-order differential equations, but also a useful tool for analysing and designing linear fractional-order systems, as in conventional linear system theory. An FOTF Toolbox based on such a model is designed, and it will be fully presented in Chapter 6.

4.1.2 Initial value problems of fractional-order derivatives under different definitions

If the initial values of f(t) and its derivatives are not zero, the Laplace transform of its integer-order derivatives can be evaluated from (2.38). For ease of presentation, the formula is rewritten here:

$\mathscr{L}\Big[\frac{\mathrm{d}^n}{\mathrm{d}t^n}f(t)\Big] = s^n F(s) - \sum_{k=1}^{n} s^{n-k}f^{(k-1)}(0).$    (4.3)

Now let us first explore the Laplace transforms of the fractional-order integrals and derivatives under the Riemann–Liouville definition. Denote F(s) = ℒ[f(t)]; the Laplace transforms of the fractional-order integrals and derivatives of the function f(t) satisfy the following theorem.

Theorem 4.1 ([54]). The Laplace transforms of the fractional-order integrals and derivatives of the function f(t) are, respectively,

$\mathscr{L}\big[{}^{\mathrm{RL}}D_t^{-\beta}f(t)\big] = s^{-\beta}F(s)$ and $\mathscr{L}\big[{}^{\mathrm{RL}}D_t^{\gamma}f(t)\big] = s^{\gamma}F(s) - \sum_{k=0}^{n-1}s^{k}\,{}^{\mathrm{RL}}D_t^{\gamma-k-1}f(t)\big|_{t=0},$    (4.4)

where β, γ ≥ 0 and n = ⌈γ⌉.

Proof. Let us consider the integral formula first. The Laplace transform of a convolution can be written as ℒ[f(t) ∗ g(t)] = F(s)G(s). Let g(t) = t^{β−1}; then

${}^{\mathrm{RL}}D_t^{-\beta}f(t) = \frac{1}{\Gamma(\beta)}\int_0^t\frac{f(\tau)}{(t-\tau)^{1-\beta}}\,\mathrm{d}\tau = \frac{1}{\Gamma(\beta)}\,t^{\beta-1} * f(t).$

It can be found that $G(s) = \Gamma(\beta)s^{-\beta}$; therefore

$\mathscr{L}\big[{}^{\mathrm{RL}}D^{-\beta}f(t)\big] = s^{-\beta}F(s).$    (4.5)

We can then consider the Laplace transform of the fractional-order derivatives of the function. Denote $D_t^{\gamma}f(t) = g^{(n)}(t)$, where

$g(t) = {}^{\mathrm{RL}}D_t^{-(n-\gamma)}f(t) = \frac{1}{\Gamma(n-\gamma)}\int_0^t \frac{f(\tau)}{(t-\tau)^{1+\gamma-n}}\,\mathrm{d}\tau.$

From (4.5), it can be found directly that

$g^{(n-k-1)}(t) = \frac{\mathrm{d}^{n-k-1}}{\mathrm{d}t^{n-k-1}}\,{}^{\mathrm{RL}}D_t^{-(n-\gamma)}f(t) = {}^{\mathrm{RL}}D_t^{\gamma-k-1}f(t).$    (4.6)

Substituting (4.5) and (4.6) into (4.3), it immediately follows that (4.4) is also satisfied. Hence the theorem is proven.

It can be seen from the Laplace transform formula for Riemann–Liouville derivatives that the initial values of fractional-order derivatives are involved.

Theorem 4.2. The Laplace transform of the Caputo fractional-order derivative can be evaluated from

$\mathscr{L}\big[{}^{\mathrm{C}}D_t^{\gamma}f(t)\big] = s^{\gamma}F(s) - \sum_{k=0}^{n-1}s^{\gamma-k-1}f^{(k)}(0).$    (4.7)

It can be seen that only the initial values of f(t) and its integer-order derivatives are involved. It can also be seen from (4.7) that, for fractional-order differential equations, if q = ⌈max(β_i)⌉, then q independent initial values are needed for the solution to be unique.

Theorem 4.3. The Laplace transforms of the Grünwald–Letnikov and Caputo fractional-order integrals both satisfy

$\mathscr{L}\big[{}^{\mathrm{GL}}D_t^{-\gamma}f(t)\big] = \mathscr{L}\big[{}^{\mathrm{C}}D_t^{-\gamma}f(t)\big] = s^{-\gamma}F(s), \quad \gamma \ge 0.$

Since this book is not a strict mathematics book, proofs of some of the theorems are not given. Interested readers can refer to relevant monographs, such as [54]. If the initial values of the signal y(t) and its derivatives are not zero, the FODEs can be regarded as ones with nonzero initial conditions, and the definition of the fractional-order derivative matters a lot. According to the property in (4.7), Caputo equations rely on the initial values of the integer-order derivatives, while by (4.4) the Riemann–Liouville equations rely on the initial values of fractional-order derivatives. Therefore, Caputo differential equations are more suitable for describing practical systems with nonzero initial conditions, while, owing to their simplicity in numerical computation, Grünwald–Letnikov and Riemann–Liouville equations are more suitable for describing systems initially at rest.

4.1.3 An important Laplace transform formula

As in the case of linear integer-order differential equations, the Laplace transform is a very useful tool in handling linear fractional-order differential equations. A very useful Laplace transform formula is given in this subsection.

Lemma 4.1 ([6]). For ℜ(γ) > 0, ℜ(β) > 0 and |z| < 1, there holds

$\frac{1}{(1+z)^{\gamma}} = \int_0^{\infty} \mathrm{e}^{-x}x^{\beta-1}E_{\alpha,\beta}^{\gamma}(-x^{\alpha}z)\,\mathrm{d}x.$    (4.8)

Theorem 4.4. A very important Laplace transform formula [6] is given by

$\mathscr{L}^{-1}\Big[\frac{s^{\alpha\gamma-\beta}}{(s^{\alpha}+a)^{\gamma}}\Big] = t^{\beta-1}E_{\alpha,\beta}^{\gamma}(-at^{\alpha}).$    (4.9)

Proof. Substituting z = as^{−α} and x = st into the integral in (4.8), it immediately follows that

$\frac{1}{(1+as^{-\alpha})^{\gamma}} = \int_0^{\infty}\mathrm{e}^{-st}(st)^{\beta-1}E_{\alpha,\beta}^{\gamma}\big(-(st)^{\alpha}as^{-\alpha}\big)\,s\,\mathrm{d}t = s^{\beta}\mathscr{L}\big[t^{\beta-1}E_{\alpha,\beta}^{\gamma}(-at^{\alpha})\big],$

from which the theorem is proven.
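Formula (4.9) can be sanity-checked in its classical corner α = β = γ = 1, where it reduces to the familiar pair ℒ^{-1}[1/(s+a)] = e^{−at}, since E_{1,1}(z) = e^z. A small Python sketch (the truncated Mittag-Leffler series and the cut-off K are assumptions of this sketch, not the book's code):

```python
import math

def ml(alpha, beta, z, K=120):
    # two-parameter Mittag-Leffler function by truncated series
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(K))

a = 0.7
# with alpha = beta = gamma = 1, (4.9) states L^-1[1/(s+a)] = E_{1,1}(-a t),
# which must coincide with exp(-a t)
errs = [abs(ml(1.0, 1.0, -a * t) - math.exp(-a * t))
        for t in (0.5, 1.0, 2.0, 5.0)]
```

The series converges quickly for moderate arguments; for large |z| the truncated sum suffers from cancellation, a point the book returns to later.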

4.2 Analytical solutions of some fractional-order differential equations | 103

4.2 Analytical solutions of some fractional-order differential equations

In this section, one-, two- and three-term equations are studied, and the solutions of general n-term equations are summarised.

4.2.1 One-term differential equations

Definition 4.4. The typical form of the so-called "one-term" fractional-order differential equation is given by $a\mathscr{D}^{\eta}y(t) = u(t)$.

It is immediately seen that the solution of the one-term equation is

$y(t) = \frac{1}{a}\mathscr{D}^{-\eta}u(t),$

i.e. the solution of the equation is in fact the fractional-order integral of the signal u(t), and the numerical integration algorithms in Chapter 3 can be used to find high-precision solutions of the equation.

4.2.2 Two-term differential equations

Definition 4.5. The typical form of the two-term fractional-order differential equation is given by $a\mathscr{D}^{\eta}y(t) + by(t) = u(t)$.

Taking the Laplace transform of both sides, it is immediately found that the equation can be transformed into $(as^{\eta}+b)Y(s) = \mathscr{L}[u(t)]$.

Theorem 4.5. For the simple input signals u(t) = δ(t) or u(t) = H(t), i.e. for impulse and step inputs, the Laplace transforms are ℒ[δ(t)] = 1 and ℒ[H(t)] = 1/s, respectively. The analytical solutions can be written as

$y_{\delta}(t) = \frac{1}{a}\,t^{\eta-1}E_{\eta,\eta}\Big(-\frac{b}{a}t^{\eta}\Big)$    (4.10)

and

$y_{H}(t) = \frac{1}{a}\,t^{\eta}E_{\eta,\eta+1}\Big(-\frac{b}{a}t^{\eta}\Big).$    (4.11)

Proof. Suppose that the input signal is an impulse signal u(t) = δ(t); the Laplace transform of the output signal is

$Y(s) = \frac{1}{as^{\eta}+b} = \frac{1}{a}\,\frac{1}{s^{\eta}+b/a}.$

Recall the Laplace transform in (4.9); selecting γ = 1, α = η and β = η, the output signal in (4.10) is obtained. If the input signal is a Heaviside function u(t) = H(t), the Laplace transform of the output signal is

$Y(s) = \frac{1}{as^{\eta}+b}\,\frac{1}{s} = \frac{1}{a}\,\frac{1}{s(s^{\eta}+b/a)}.$

Therefore, by selecting γ = 1, α = η and β = η + 1, the output signal in (4.11) is found.

Since the differential equation itself is very simple, being in fact a special case of the problems to be studied further in Section 4.3, this solution algorithm is not explored further in this subsection.
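The step-response formula (4.11) can be exercised numerically. For the particular choice a = b = 1 and η = 1/2 (an assumption of this sketch), the classical closed form E_{1/2}(−√t) = e^t erfc(√t) together with the series identity t^η E_{η,η+1}(−t^η) = 1 − E_η(−t^η) gives an exact reference, y_H(t) = 1 − e^t erfc(√t):

```python
import math

def ml(alpha, beta, z, K=100):
    # truncated two-parameter Mittag-Leffler series
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(K))

def y_step(t):
    # (4.11) with a = b = 1, eta = 1/2
    return math.sqrt(t) * ml(0.5, 1.5, -math.sqrt(t))

def y_exact(t):
    # closed form via E_{1/2}(-sqrt(t)) = exp(t) * erfc(sqrt(t))
    return 1.0 - math.exp(t) * math.erfc(math.sqrt(t))

err = max(abs(y_step(t) - y_exact(t)) for t in (0.2, 1.0, 4.0))
```

The agreement is at floating-point level over this range; the same comparison is reused later as a reference solution for the numerical schemes of Section 4.4.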

4.2.3 Three-term differential equations

Definition 4.6. A typical form of the three-term fractional-order differential equation can be expressed as

$a_1\mathscr{D}^{\eta_1}y(t) + a_2\mathscr{D}^{\eta_2}y(t) + a_3 y(t) = u(t).$    (4.12)

Theorem 4.6 ([55]). The analytical solution of the equation can be written as

$y(t) = \frac{1}{a_1}\sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\,\hat{a}_3^{\,k}\,t^{(k+1)\eta_1}\,E^{(k)}_{\eta_1-\eta_2,\,\eta_1+\eta_2 k+1}\big(-\hat{a}_2 t^{\eta_1-\eta_2}\big),$    (4.13)

where u(t) is a step function, $E^{(k)}_{\lambda,\mu}(z)$ denotes the kth derivative of the Mittag-Leffler function with respect to its argument, and â₃ = a₃/a₁, â₂ = a₂/a₁.

It can be seen that the output y(t) can be evaluated by a truncation algorithm, and the Mittag-Leffler function evaluation code ml_func() can be used directly here. A MATLAB function ml_step3() can be written to evaluate numerically the solution of the differential equation.

function y=ml_step3(a1,a2,a3,b1,b2,t,eps0)
y=0; k=0; ya=1; a2=a2/a1; a3=a3/a1;
if nargin==6, eps0=eps; end
while max(abs(ya))>=eps0
   ya=(-1)^k/gamma(k+1)*a3^k*t.^((k+1)*b1).*...
      ml_func([b1-b2,b1+b2*k+1],-a2*t.^(b1-b2),k,eps0);
   y=y+ya; k=k+1;
end, y=y/a1;

The syntax of the function is y = ml_step3(a₁,a₂,a₃,η₁,η₂,t,ϵ₀), where ϵ₀ can be omitted, with a default value of eps.

Example 4.1. Find the analytical and numerical solutions of the three-term fractional-order differential equation D^{0.8}y(t) + 0.75D^{0.4}y(t) + 0.9y(t) = 5u(t), where u(t) is a unit step input.

Solution. It is obvious that a₁ = 1, a₂ = 0.75, a₃ = 0.9, η₁ = 0.8, and η₂ = 0.4. Therefore, the analytical solution of the differential equation can be written as

$y(t) = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\,0.9^k\,t^{0.8(k+1)}\,E^{(k)}_{0.4,\,1.8+0.4k}\big(-0.75t^{0.4}\big).$    (4.14)

The numerical solution can be obtained with the following statements, as shown in Figure 4.1.

>> t=0:0.001:5; y=ml_step3(1,0.75,0.9,0.8,0.4,t); plot(t,5*y)


Fig. 4.1: The solution of the differential equation.

4.2.4 General n-term differential equations

Definition 4.7. An n-term fractional-order differential equation is given by

$a_1 \mathscr{D}_t^{\eta_1}y(t) + a_2 \mathscr{D}_t^{\eta_2}y(t) + \cdots + a_{n-1}\mathscr{D}_t^{\eta_{n-1}}y(t) + a_n \mathscr{D}_t^{\eta_n}y(t) = u(t).$

Definition 4.8. The multinomial coefficients are defined as

$(m; k_2, k_3, \dots, k_n) = \frac{m!}{k_2!\,k_3!\cdots k_n!}.$    (4.15)

Theorem 4.7 ([54]). If the input signal is a step signal, the analytical solution of the linear n-term fractional-order differential equation can be written as

$y(t) = \frac{1}{a_1}\sum_{m=0}^{\infty}\frac{(-1)^m}{m!}\sum_{\substack{k_3+\cdots+k_n=m\\ k_3\ge 0,\dots,k_n\ge 0}}(m;k_3,\dots,k_n)\prod_{i=3}^{n}\Big(\frac{a_i}{a_1}\Big)^{k_i}\,t^{(\eta_1-\eta_2)m+\eta_1+\sum_{j=3}^{n}(\eta_2-\eta_j)k_j}\,E^{(m)}_{\eta_1-\eta_2,\;\eta_1+\sum_{j=3}^{n}(\eta_2-\eta_j)k_j+1}\Big(-\frac{a_2}{a_1}t^{\eta_1-\eta_2}\Big),$

where the multinomial coefficients are defined in (4.15) and $E^{(m)}_{\lambda,\mu}(z)$ denotes the mth derivative of the Mittag-Leffler function with respect to its argument; for the three-term case the formula reduces to (4.13).

It is difficult to write out the analytical solution of a given system, since the solution form is too complicated. This method is not very useful for practical problems, as it can realistically be used only for differential equations with at most three terms on the left-hand side. Better algorithms and tools are needed for solving more complicated problems.

4.3 Analytical solutions of commensurate-order differential equations

A very important property of the Laplace transform was presented in (4.9). Based on this property, some other important properties of fractional-order derivatives can be derived, and analytical solutions to commensurate-order differential equations subjected to step and impulse signals can be found.

4.3.1 General form of commensurate-order differential equations

Consider the orders in (4.1). If the orders are all rational numbers, then the orders of the terms can be written as

$(\eta_1,\eta_2,\dots,\eta_n,\gamma_1,\gamma_2,\dots,\gamma_m) = \Big(\frac{c_1}{d_1},\frac{c_2}{d_2},\dots,\frac{c_{n+m}}{d_{n+m}}\Big),$

where the pairs (c_i, d_i) are coprime integers. Then the least common multiple (lcm) of the d_i and the greatest common divisor (gcd) of the c_i can be extracted,

$d = \mathrm{lcm}(d_1,d_2,\dots,d_{n+m}), \qquad c = \gcd(c_1,c_2,\dots,c_{n+m}),$

such that a base order α = c/d can be defined. Denoting λ = s^α, the original transfer function can be mapped into an integer-order system in λ. For stability assessment, the base order can be extracted from the orders of the denominator only.

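The lcm/gcd extraction of the base order can be mechanised. A Python sketch (not the book's code; the limit_denominator bound is an assumption, used to absorb floating-point representations of the orders):

```python
from fractions import Fraction
from math import gcd

def base_order(orders):
    # largest alpha such that every order is an integer multiple of alpha
    fr = [Fraction(o).limit_denominator(1000) for o in orders]
    d = 1
    for q in fr:                       # d = lcm of all denominators
        d = d * q.denominator // gcd(d, q.denominator)
    c = 0
    for q in fr:                       # c = gcd of numerators scaled to denominator d
        c = gcd(c, q.numerator * (d // q.denominator))
    return Fraction(c, d)

alpha1 = base_order([0.8, 0.4])             # orders of Example 4.1: alpha = 2/5
alpha2 = base_order([1.2, 0.9, 0.6, 0.3])   # orders of Example 4.4: alpha = 3/10
```

With α in hand, each order η_i maps to the integer power η_i/α of λ = s^α.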

Definition 4.9. Under the base order α, a commensurate-order differential equation can be written as

$a_1 \mathscr{D}_t^{n\alpha}y(t) + a_2 \mathscr{D}_t^{(n-1)\alpha}y(t) + \cdots + a_n \mathscr{D}_t^{\alpha}y(t) + a_{n+1}y(t) = b_1 \mathscr{D}_t^{m\alpha}v(t) + b_2 \mathscr{D}_t^{(m-1)\alpha}v(t) + \cdots + b_m \mathscr{D}_t^{\alpha}v(t) + b_{m+1}v(t).$    (4.16)

Definition 4.10. The common form of a commensurate-order transfer function can be expressed by the integer-order transfer function in λ:

$G(\lambda) = \frac{b_1\lambda^{m} + b_2\lambda^{m-1} + \cdots + b_{m-1}\lambda + b_m}{a_1\lambda^{n} + a_2\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n},$

with λ = s^α.

Theorem 4.8. If there are only distinct poles −p_i in the system, the original transfer function can be written in the following form with the partial fraction expansion technique:

$G(\lambda) = \sum_{i=1}^{n}\frac{r_i}{\lambda+p_i} = \sum_{i=1}^{n}\frac{r_i}{s^{\alpha}+p_i}.$    (4.17)

If there is a repeated pole −p_i with multiplicity m, the corresponding portion of the partial fraction expansion becomes

$\frac{r_{i1}}{s^{\alpha}+p_i} + \frac{r_{i2}}{(s^{\alpha}+p_i)^2} + \cdots + \frac{r_{im}}{(s^{\alpha}+p_i)^m} = \sum_{j=1}^{m}\frac{r_{ij}}{(s^{\alpha}+p_i)^j}.$    (4.18)

Definition 4.11. For a commensurate-order system with a base order of α, the partial fraction expansion can be expressed as

$G(s) = \sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{r_{ij}}{(s^{\alpha}+p_i)^j},$

where p_i and r_{ij} are complex values, and m_i is a positive integer, the multiplicity of the ith pole p_i, with m₁ + m₂ + ⋯ + m_N = n. The partial fraction expansion can be obtained directly with the MATLAB function residue(). Also, with the symbolic computation engine, the partial fraction expansion can be obtained with partfrac().
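Outside MATLAB, the same expansion can be reproduced by hand for distinct poles via the classical residue formula r_i = N(p_i)/D′(p_i). A Python sketch for the second-order λ-denominator of Example 4.1's system (pure standard library; the test point lam is an arbitrary choice):

```python
import cmath

# G(lambda) = 5 / (lambda^2 + 0.75*lambda + 0.9), where lambda = s^0.4
a1, a2, a3 = 1.0, 0.75, 0.9
disc = cmath.sqrt(a2 * a2 - 4.0 * a1 * a3)
poles = [(-a2 + disc) / (2.0 * a1), (-a2 - disc) / (2.0 * a1)]

# residues at distinct poles: r_i = N(p_i) / D'(p_i)
residues = [5.0 / (2.0 * a1 * p + a2) for p in poles]

# the partial fractions must rebuild G(lambda) at any test point
lam = 0.3 + 0.2j
G = 5.0 / (a1 * lam**2 + a2 * lam + a3)
G_pf = sum(r / (lam - p) for r, p in zip(residues, poles))
```

For repeated poles the derivative-based formula no longer applies directly, which is why residue() (or a Taylor expansion around the pole) is the practical tool.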

4.3.2 Some commonly used Laplace transforms in linear fractional-order systems

Recall the important Laplace transform formula in (4.9), rewritten here as follows:

$\mathscr{L}^{-1}\Big[\frac{s^{\alpha\gamma-\beta}}{(s^{\alpha}+a)^{\gamma}}\Big] = t^{\beta-1}E_{\alpha,\beta}^{\gamma}(-at^{\alpha}).$    (4.19)

For different parameter combinations, some variations can be derived, as shown in the following corollaries.

Corollary 4.1. If γ = 1 and αγ = β, i.e. β = α, formula (4.19) can be rewritten as

$\mathscr{L}^{-1}\Big[\frac{1}{s^{\alpha}+a}\Big] = t^{\alpha-1}E_{\alpha,\alpha}(-at^{\alpha}).$    (4.20)

In control terms, this formula can be understood as follows. Suppose that the fractional-order transfer function 1/(s^α + a) is driven by an impulse signal, whose Laplace transform is one. The analytical solution of the output can then be evaluated from t^{α−1}E_{α,α}(−at^α). It can be seen that the behaviour of the Mittag-Leffler function here is quite similar to that of the exponential function e^{−λt} in integer-order systems. With such a property, the analytical solutions to a class of linear fractional-order differential equations can be constructed. This topic will be presented further in later chapters.

Corollary 4.2. If γ = 1 and αγ − β = −1, i.e. β = α + 1, formula (4.19) can be written as

$\mathscr{L}^{-1}\Big[\frac{1}{s(s^{\alpha}+a)}\Big] = t^{\alpha}E_{\alpha,\alpha+1}(-at^{\alpha}).$    (4.21)

This formula can be regarded as the analytical solution of the fractional-order transfer function 1/(s^α + a) driven by a unit step input.

Theorem 4.9. An alternative form of (4.21) can be expressed as

$\mathscr{L}^{-1}\Big[\frac{1}{s(s^{\alpha}+a)}\Big] = \frac{1}{a}\big[1 - E_{\alpha}(-at^{\alpha})\big].$    (4.22)

Proof. It is easily found from the definitions of the Mittag-Leffler functions that

$at^{\alpha}E_{\alpha,\alpha+1}(-at^{\alpha}) = -\sum_{k=0}^{\infty}\frac{(-at^{\alpha})^{k+1}}{\Gamma(\alpha k+\alpha+1)} = 1 - \sum_{j=0}^{\infty}\frac{(-at^{\alpha})^{j}}{\Gamma(\alpha j+1)} = 1 - E_{\alpha}(-at^{\alpha}),$

where j = k + 1 absorbs the j = 0 term into the leading 1; hence the theorem is proven.

Corollary 4.3. If γ = k is an integer and αγ = β, i.e. β = αk, formula (4.19) can be written as

$\mathscr{L}^{-1}\Big[\frac{1}{(s^{\alpha}+a)^{k}}\Big] = t^{\alpha k-1}E^{k}_{\alpha,\alpha k}(-at^{\alpha}).$    (4.23)

This corollary can be regarded as the analytical solution of the output of the system 1/(s^α + a)^k driven by an impulse signal.

Corollary 4.4. If γ = k is an integer and αγ − β = −1, i.e. β = αk + 1, it is found that

$\mathscr{L}^{-1}\Big[\frac{1}{s(s^{\alpha}+a)^{k}}\Big] = t^{\alpha k}E^{k}_{\alpha,\alpha k+1}(-at^{\alpha}).$    (4.24)

The solution can be regarded as the output of the system 1/(s^α + a)^k driven by a unit step signal.
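The equivalence of (4.21) and (4.22) asserted by Theorem 4.9 is easy to confirm numerically from the series definitions. A Python sketch (the constant a, the order α and the series cut-off are arbitrary choices of this sketch):

```python
import math

def ml(alpha, beta, z, K=140):
    # truncated two-parameter Mittag-Leffler series
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(K))

a, alpha = 2.0, 0.4
errs = []
for t in (0.1, 0.5, 1.0, 2.0):
    x = -a * t**alpha
    lhs = a * t**alpha * ml(alpha, alpha + 1.0, x)   # a times the rhs of (4.21)
    rhs = 1.0 - ml(alpha, 1.0, x)                    # a times the rhs of (4.22)
    errs.append(abs(lhs - rhs))
```

The two evaluations agree to roundoff over this range; for much larger t the alternating series loses accuracy, the convergence issue noted at the end of Example 4.4.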

4.3.3 Analytical solutions of commensurate-order equations

Since commensurate-order transfer functions can be expressed by the partial fractions in (4.17), if there are no repeated poles, it follows immediately from the Laplace transform formulae in (4.20) and (4.21) that the analytical solutions of the impulse and step responses of the system can be obtained respectively as

$\mathscr{L}^{-1}\Big[\sum_{i=1}^{n}\frac{r_i}{s^{\alpha}+p_i}\Big] = \sum_{i=1}^{n}r_i\,t^{\alpha-1}E_{\alpha,\alpha}(-p_i t^{\alpha})$    (4.25)

and

$\mathscr{L}^{-1}\Big[\sum_{i=1}^{n}\frac{r_i}{s(s^{\alpha}+p_i)}\Big] = \sum_{i=1}^{n}r_i\,t^{\alpha}E_{\alpha,\alpha+1}(-p_i t^{\alpha}).$

The latter can alternatively be written as

$\mathscr{L}^{-1}\Big[\sum_{i=1}^{n}\frac{r_i}{s(s^{\alpha}+p_i)}\Big] = \sum_{i=1}^{n}\frac{r_i}{p_i}\big[1 - E_{\alpha}(-p_i t^{\alpha})\big].$

More generally:

Theorem 4.10. If the partial fraction expansion is given by Definition 4.11, the analytical solution of the impulse response of the system is

$y_{\delta}(t) = \mathscr{L}^{-1}\Big[\sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{r_{ij}}{(s^{\alpha}+p_i)^{j}}\Big] = \sum_{i=1}^{N}\sum_{j=1}^{m_i}r_{ij}\,t^{\alpha j-1}E^{j}_{\alpha,\alpha j}(-p_i t^{\alpha}),$

while the analytical solution of the step response of the system is

$y_{u}(t) = \mathscr{L}^{-1}\Big[\sum_{i=1}^{N}\sum_{j=1}^{m_i}\frac{r_{ij}}{s(s^{\alpha}+p_i)^{j}}\Big] = \sum_{i=1}^{N}\sum_{j=1}^{m_i}r_{ij}\,t^{\alpha j}E^{j}_{\alpha,\alpha j+1}(-p_i t^{\alpha}).$

Example 4.2. Find the step response of the equation in Example 4.1:

${}_0\mathscr{D}^{0.8}y(t) + 0.75\,{}_0\mathscr{D}^{0.4}y(t) + 0.9y(t) = 5u(t).$

Solution. It is easily seen that the fractional-order differential equation is a commensurate-order one, with base order α = 0.4. Denote λ = s^{0.4}. The original fractional-order transfer function can be expressed as an integer-order one in the variable λ,

$G(s) = \frac{5}{s^{0.8}+0.75s^{0.4}+0.9} \;\Longrightarrow\; G(\lambda) = \frac{5}{\lambda^2+0.75\lambda+0.9},$

and with the statements

>> [r,p,k]=residue(5,[1 0.75 0.9])

it can be seen that the partial fraction expansion of G(λ) can be written as

$G(s) = \frac{-2.8689\mathrm{j}}{s^{0.4}+0.3750-0.8714\mathrm{j}} + \frac{2.8689\mathrm{j}}{s^{0.4}+0.3750+0.8714\mathrm{j}}.$

It can then be seen that the analytical solution can be written as

$y(t) = -2.8689\mathrm{j}\,t^{0.4}E_{0.4,1.4}\big((-0.3750+0.8714\mathrm{j})t^{0.4}\big) + 2.8689\mathrm{j}\,t^{0.4}E_{0.4,1.4}\big((-0.3750-0.8714\mathrm{j})t^{0.4}\big).$

The numerical solution can be obtained with the following statements, and the result is exactly the same as the one obtained in Example 4.1.

>> t=0:0.001:5; t1=t.^0.4;
   y=r(1)*t1.*ml_func([0.4,1.4],p(1)*t1)+r(2)*t1.*ml_func([0.4,1.4],p(2)*t1);
   y1=ml_step3(1,0.75,0.9,0.8,0.4,t); plot(t,y,t,5*y1,'--')

Example 4.3. Find the impulse response of the fractional-order system

$G(s) = \frac{s^{1.2}+3s^{0.4}+5}{s^{1.6}+10s^{1.2}+35s^{0.8}+50s^{0.4}+24}.$

Solution. Let λ = s^{0.4}. The original transfer function can be written as

$G(\lambda) = \frac{\lambda^3+3\lambda+5}{\lambda^4+10\lambda^3+35\lambda^2+50\lambda+24}.$

The MATLAB function residue() can be used:

>> n=[1 0 3 5]; d=[1 10 35 50 24]; [r,p,K]=residue(n,d)

and the partial fraction expansion of the system can be written as

$G(s) = \frac{71}{6}\,\frac{1}{s^{0.4}+4} - \frac{31}{2}\,\frac{1}{s^{0.4}+3} + \frac{9}{2}\,\frac{1}{s^{0.4}+2} + \frac{1}{6}\,\frac{1}{s^{0.4}+1}.$

With the property in (4.20), the analytical solution of the impulse response of the system can be written as

$y(t) = \frac{71}{6}t^{-0.6}E_{0.4,0.4}(-4t^{0.4}) - \frac{31}{2}t^{-0.6}E_{0.4,0.4}(-3t^{0.4}) + \frac{9}{2}t^{-0.6}E_{0.4,0.4}(-2t^{0.4}) + \frac{1}{6}t^{-0.6}E_{0.4,0.4}(-t^{0.4}).$

The graphical representation of the impulse response of the system can be evaluated with the following statements, as shown in Figure 4.2.

>> t=0:0.001:0.2; t1=t.^(0.4);
   y=71/6*t.^(-0.6).*ml_func([0.4,0.4],-4*t1)-...
     31/2*t.^(-0.6).*ml_func([0.4,0.4],-3*t1)+...
     9/2*t.^(-0.6).*ml_func([0.4,0.4],-2*t1)+...
     1/6*t.^(-0.6).*ml_func([0.4,0.4],-t1);
   plot(t,y)


Fig. 4.2: Impulse response of the fractional-order system.

Example 4.4. Solve the fractional-order differential equation

$\mathscr{D}^{1.2}y(t) + 5\mathscr{D}^{0.9}y(t) + 9\mathscr{D}^{0.6}y(t) + 7\mathscr{D}^{0.3}y(t) + 2y(t) = u(t),$

with zero initial conditions, where u(t) is the unit impulse and step signal, respectively.

Solution. If the base order is selected as α = 0.3 and λ = s^{0.3}, the integer-order transfer function in λ can be written as

$G(\lambda) = \frac{1}{\lambda^4+5\lambda^3+9\lambda^2+7\lambda+2}.$

The following MATLAB statements can be used to find its partial fraction expansion with respect to λ:

>> num=1; den=[1 5 9 7 2]; [r,p]=residue(num,den)

The result can be written as

$G(\lambda) = -\frac{1}{\lambda+2} + \frac{1}{\lambda+1} - \frac{1}{(\lambda+1)^2} + \frac{1}{(\lambda+1)^3}.$

If the input signal u(t) is an impulse signal, the Laplace transform of the output signal can be written as

$Y(s) = G(s) = -\frac{1}{s^{0.3}+2} + \frac{1}{s^{0.3}+1} - \frac{1}{(s^{0.3}+1)^2} + \frac{1}{(s^{0.3}+1)^3}.$

From (4.20) and (4.23), it can be found that the analytical solution of the equation can be written as

$y_1(t) = -t^{-0.7}E_{0.3,0.3}(-2t^{0.3}) + t^{-0.7}E_{0.3,0.3}(-t^{0.3}) - t^{-0.4}E^{2}_{0.3,0.6}(-t^{0.3}) + t^{-0.1}E^{3}_{0.3,0.9}(-t^{0.3}).$

If the input signal u(t) is a step signal, the Laplace transform of the output signal can be written as

$Y(s) = \frac{G(s)}{s} = -\frac{1}{s(s^{0.3}+2)} + \frac{1}{s(s^{0.3}+1)} - \frac{1}{s(s^{0.3}+1)^2} + \frac{1}{s(s^{0.3}+1)^3}.$


Fig. 4.3: The step and impulse responses.

Therefore, the analytical solution for the step input can be obtained as

$y_2(t) = -t^{0.3}E_{0.3,1.3}(-2t^{0.3}) + t^{0.3}E_{0.3,1.3}(-t^{0.3}) - t^{0.6}E^{2}_{0.3,1.6}(-t^{0.3}) + t^{0.9}E^{3}_{0.3,1.9}(-t^{0.3}).$

Alternatively, with (4.22), the step response can be written as

$y_3(t) = \frac{1}{2} + \frac{1}{2}E_{0.3}(-2t^{0.3}) - E_{0.3}(-t^{0.3}) - t^{0.6}E^{2}_{0.3,1.6}(-t^{0.3}) + t^{0.9}E^{3}_{0.3,1.9}(-t^{0.3}).$

The impulse and step responses of the system can be evaluated with the following statements, as shown in Figure 4.3. It can also be seen that the curves y₂(t) and y₃(t) are identical.

>> t=0:0.002:0.5;
   y1=-t.^-0.7.*ml_func([0.3,0.3],-2*t.^0.3)...
      +t.^-0.7.*ml_func([0.3,0.3],-t.^0.3)...
      -t.^-0.4.*ml_func([0.3,0.6,2],-t.^0.3)...
      +t.^-0.1.*ml_func([0.3,0.9,3],-t.^0.3);
   y2=-t.^0.3.*ml_func([0.3,1.3],-2*t.^0.3)...
      +t.^0.3.*ml_func([0.3,1.3],-t.^0.3)...
      -t.^0.6.*ml_func([0.3,1.6,2],-t.^0.3)...
      +t.^0.9.*ml_func([0.3,1.9,3],-t.^0.3);
   y3=1/2+0.5*ml_func(0.3,-2*t.^0.3)-ml_func(0.3,-t.^0.3)...
      -t.^0.6.*ml_func([0.3,1.6,2],-t.^0.3)...
      +t.^0.9.*ml_func([0.3,1.9,3],-t.^0.3);
   plot(t,y1,'-',t,y2,'--',t,y3,':')

It should be noted that for these Mittag-Leffler functions, if a large t is used, for instance t = 50, the truncation algorithm fails to converge, and the embedded mlf() is far too slow to complete the computation. Other algorithms should be adopted to solve the equations numerically.


4.4 Closed-form solutions of linear fractional-order differential equations with zero initial conditions

It can be seen that the solution approaches discussed so far are quite restricted. In this section, the numerical solutions of any linear fractional-order differential equation with zero initial conditions are studied. A closed-form solution is presented first, followed by the high-precision algorithm and a matrix approach.

4.4.1 Closed-form solution

If the initial values of the output signal y(t), the input signal u(t) and their derivatives at t = 0 are all zero, and the right-hand side of the equation contains û(t) alone, the original differential equation can be simplified as

$$a_1 D_t^{\gamma_1} y(t) + a_2 D_t^{\gamma_2} y(t) + \cdots + a_{n-1} D_t^{\gamma_{n-1}} y(t) + a_n D_t^{\gamma_n} y(t) = \hat{u}(t), \tag{4.26}$$

where û(t) is composed of the linear combination of a signal u(t) and its fractional-order derivatives, and can be evaluated independently:

$$\hat{u}(t) = b_1 D_t^{\eta_1} u(t) + b_2 D_t^{\eta_2} u(t) + \cdots + b_m D_t^{\eta_m} u(t).$$

For simplicity in the presentation, assume that γ1 > γ2 > ⋯ > γn−1 > γn > 0. If the following two special cases emerge, conversions should be made first; then the solutions can be found.

Comments 4.1 (Special cases). We remark the following:
(1) If the orders of the original equation do not satisfy the inequalities, the terms in the original equation should be sorted first.
(2) If there exists a negative γi, a fractional-order integro-differential equation is involved. In this case, a new variable z(t) = D_t^{γn} y(t) should be introduced such that the original equation can be converted into a fractional-order differential equation in the signal z(t).

Theorem 4.11 ([79]). The closed-form solution can be obtained from

$$y_t = \frac{1}{\sum_{i=1}^{n} a_i h^{-\gamma_i}} \biggl[ \hat{u}_t - \sum_{i=1}^{n} \frac{a_i}{h^{\gamma_i}} \sum_{j=1}^{[(t-t_0)/h]} w_j^{(\gamma_i)} y_{t-jh} \biggr]. \tag{4.27}$$

Proof. Consider the Grünwald–Letnikov definition in (3.6). The modified discrete version can be written as

$${}_{t_0}D_t^{\gamma_i} y(t) \approx \frac{1}{h^{\gamma_i}} \sum_{j=0}^{[(t-t_0)/h]} w_j^{(\gamma_i)} y_{t-jh} = \frac{1}{h^{\gamma_i}} \biggl[ y_t + \sum_{j=1}^{[(t-t_0)/h]} w_j^{(\gamma_i)} y_{t-jh} \biggr], \tag{4.28}$$

where the coefficients w_j^{(γi)} can still be evaluated recursively:

$$w_0^{(\gamma_i)} = 1, \qquad w_j^{(\gamma_i)} = \Bigl(1 - \frac{\gamma_i + 1}{j}\Bigr) w_{j-1}^{(\gamma_i)}, \quad j = 1, 2, \ldots. \tag{4.29}$$

Substituting the formula into (4.26), the closed-form numerical solution to the fractional-order differential equation can be obtained as shown in (4.27).

Consider the general form of the fractional-order differential equation in (4.1). It appears that the right-hand side of the equation can be evaluated first, and the equation can be converted into the form in (4.26). The above idea is shown in Figure 4.4 (a). Unfortunately, if the input signal is a constant, which is usually the case in step response evaluation, and there exist integer-order derivatives on the right-hand side of the equation, some sub-signals may vanish, and misleading results may be obtained.

In practical programming, the original linear problem can be equivalently converted to evaluate the signal ŷ(t) under the excitation of u(t). Then fractional-order derivatives of ŷ(t) are taken to find the actual y(t). The idea is shown in the signal flow graph in Figure 4.4 (b): since the system is linear, it can be decomposed into two parts, N(s) and 1/D(s), where N(s) and D(s) are pseudo-polynomials. It can be seen that the orders of the two blocks can be swapped, and finally the same y(t) is evaluated. Therefore, the following new algorithm can be proposed.

Fig. 4.4: Computation orders swapped for better evaluation results. (a) Conventional: u(t) → N(s) → û(t) → 1/D(s) → y(t). (b) Order swapped: u(t) → 1/D(s) → ŷ(t) → N(s) → y(t).
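As a language-agnostic illustration (a sketch, not the book's MATLAB toolbox code), the weight recursion (4.29) can be written in a few lines of Python:

```python
# Sketch: the Grunwald-Letnikov weights w_j^{(gamma)} of eq. (4.29),
# computed by the recursion w_j = (1 - (gamma + 1)/j) * w_{j-1}.
def gl_weights(gamma, n):
    """Return the first n+1 weights, i.e. (-1)^j * binom(gamma, j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append((1 - (gamma + 1) / j) * w[-1])
    return w

# For gamma = 1 the weights reduce to the first-difference stencil [1, -1, 0, ...]
print(gl_weights(1.0, 4))
```

The sanity check for γ = 1 shows why the scheme generalises ordinary backward differences: every remaining weight past w1 vanishes.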

Algorithm 4.1 (Closed-form solution algorithm). Proceed as follows.
(1) Evaluate recursively the coefficients w_j for all the orders through (4.29).
(2) Evaluate the solution from (4.27), and denote it as ŷ(t).
(3) Find the actual solution y(t) through numerical differentiation.

Based on the algorithm, a MATLAB function fode_sol() can be written to solve numerically linear fractional-order differential equations with zero initial conditions. In the function, W is in fact a matrix whose jth row stores the coefficients w_j for all the orders.

function y=fode_sol(a,na,b,nb,u,t)
   h=t(2)-t(1); D=sum(a./[h.^na]);
   nT=length(t); D1=b(:)./h.^nb(:);
   nA=length(a); vec=[na nb]; y1=zeros(nT,1);
   W=ones(nT,length(vec));
   for j=2:nT, W(j,:)=W(j-1,:).*(1-(vec+1)/(j-1)); end
   for i=2:nT
      A=[y1(i-1:-1:1)]'*W(2:i,1:nA);
      y1(i)=(u(i)-sum(A.*a./[h.^na]))/D;
   end
   for i=2:nT, y(i)=(W(1:i,nA+1:end)*D1)'*[y1(i:-1:1)]; end
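For readers without MATLAB, the closed-form formula (4.27) can be sketched in Python (a hypothetical minimal port of the idea, not the toolbox function; only the left-hand side orders are handled, with a unit right-hand side):

```python
# Sketch of the closed-form solution (4.27): solve
#   a1*D^g1 y + ... + an*D^gn y = u(t)   with zero initial conditions.
import math

def fode_sol(a, na, u, h):
    n = len(u)
    # weights w_j^{(gamma_i)} for every order, via the recursion (4.29)
    W = [[1.0] for _ in na]
    for j in range(1, n):
        for i, g in enumerate(na):
            W[i].append((1 - (g + 1) / j) * W[i][-1])
    D = sum(ai / h**gi for ai, gi in zip(a, na))
    y = [0.0] * n
    for m in range(1, n):
        s = 0.0
        for ai, gi, w in zip(a, na, W):
            s += ai / h**gi * sum(w[j] * y[m - j] for j in range(1, m + 1))
        y[m] = (u[m] - s) / D
    return y

h, N = 0.01, 500
u = [1.0] * (N + 1)                         # unit step input
y = fode_sol([1.0, 1.0], [1.0, 0.0], u, h)  # y' + y = 1, analytic 1 - exp(-t)
print(abs(y[-1] - (1 - math.exp(-5.0))))    # small O(h) discretisation error
```

The integer-order case y′ + y = 1 is used only as a check, since its analytic step response 1 − e^{−t} is known; the same loop accepts fractional orders such as na = [0.5, 0] without change.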


The function can be called with y = fode_sol(a, na, b, nb, u, t), where the time and input samples are supplied in the vectors t and u.

Example 4.5. Find the step response of the equation in Example 4.4:

$$D^{1.2} y(t) + 5D^{0.9} y(t) + 9D^{0.6} y(t) + 7D^{0.3} y(t) + 2y(t) = u(t).$$

Solution. The equation can easily be solved with the following statements, and the result is exactly the same as the one expressed in Example 4.4.

>> t=0:0.002:0.5; u=ones(size(t));
   y2=-t.^0.3.*ml_func([0.3,1.3],-2*t.^0.3)...
      +t.^0.3.*ml_func([0.3,1.3],-t.^0.3)...
      -t.^0.6.*ml_func([0.3,1.6,2],-t.^0.3)...
      +t.^0.9.*ml_func([0.3,1.9,3],-t.^0.3);
   y=fode_sol([1 5 9 7 2],1.2:-0.3:0,1,0,u,t);
   plot(t,y,t,y2,'--')

For a larger time span of t ∈ (0, 50), the analytical solution-based algorithm is time-consuming, or it may even be impossible to solve the problem. The new algorithm can be used instead. In the following statements, two step-sizes, h = 0.01 and h = 0.002, are used, and the step responses are shown in Figure 4.5. It can be seen that the two curves are identical, which means that the solutions are correct.

>> t1=0:0.01:50; u=ones(size(t1));
   y1=fode_sol([1 5 9 7 2],1.2:-0.3:0,1,0,u,t1);
   t2=0:0.002:50; u=ones(size(t2));
   y2=fode_sol([1 5 9 7 2],1.2:-0.3:0,1,0,u,t2);
   plot(t1,y1,t2,y2,'--')

Fig. 4.5: Step responses over larger time span.

Normally, to solve a fractional-order differential equation numerically, a validation process must be carried out to make sure that the solution obtained is correct. The simplest way is to select two different step-sizes and to see whether the two solutions yield the same curve. If the two curves are identical, the solution is validated. Otherwise, smaller step-sizes can be used again, until accurate results are obtained.

The impulse response of a differential equation cannot be evaluated directly with the function fode_sol(), since the impulsive function cannot be modelled. Two alternative methods can be used.
(1) The step response y1(t) can be obtained first; then its first-order derivative can be computed, which is the impulse response of the original system.
(2) Add one to all the orders on the right-hand side of the equation; then evaluate the step response of the new equation, which is equivalent to the impulse response of the original equation.

Example 4.6. Find the impulse response of the equation in Example 4.4.

Solution. For this particular example, the impulse responses with the above two methods can be obtained, as shown in Figure 4.6. It can be seen that the two curves are almost identical. For simplicity, normally the second method is recommended.

>> t=0:0.01:50; u=ones(size(t));
   y1=fode_sol([1 5 9 7 2],1.2:-0.3:0,1,0,u,t);
   y2=fode_sol([1 5 9 7 2],1.2:-0.3:0,1,1,u,t);
   y3=glfdiff9(y1,t,1,4);
   plot(t,y2,t,y3,'--')


Fig. 4.6: Impulse response of the equation.

Example 4.7. Solve numerically the fractional-order differential equation

$$D_t^{3.5} y(t) + 8D_t^{3.1} y(t) + 26D_t^{2.3} y(t) + 73D_t^{1.2} y(t) + 90D_t^{0.5} y(t) = 30u'(t) + 90D^{0.3} u(t),$$

with u(t) = sin(t²).

Solution. It is easily found that this equation cannot be solved with the algorithms presented in the previous sections, since the input is neither a step signal nor an impulsive signal. Numerical algorithms must be used in finding the solutions of such an equation. The vectors a, na, b and nb can be extracted from the original equation first, and then the function fode_sol() can be used to solve the original equation. The output signal can be obtained as shown in Figure 4.7.

Normally, one should verify the numerical results obtained. The simplest way is to modify the control parameters in the solution. For instance, one can change the step-size from 0.002 to 0.001 and check whether they give the same results. For this example, it can be seen that the results are the same, as shown in Figure 4.7.

>> a=[1,8,26,73,90]; b=[30,90];
   na=[3.5,3.1,2.3,1.2,0.5]; nb=[1,0.3];
   t=0:0.002:10; u=sin(t.^2);
   y=fode_sol(a,na,b,nb,u,t);
   t1=0:0.001:10; u1=sin(t1.^2);
   y1=fode_sol(a,na,b,nb,u1,t1);
   plot(t,y,'-',t1,y1,'--')

Fig. 4.7: Output signals under different step-sizes.

4.4.2 The matrix-based solution algorithm

The closed-form solution of (4.26) can also be implemented in matrix form. In the matrix-based algorithm, one can simply construct a matrix W^{(γi)} to represent D_t^{γi}:

$$W^{(\gamma_i)} = \frac{1}{h^{\gamma_i}}
\begin{bmatrix}
w_0^{(\gamma_i)} & 0 & 0 & \cdots & 0 & 0\\
w_1^{(\gamma_i)} & w_0^{(\gamma_i)} & 0 & \cdots & 0 & 0\\
w_2^{(\gamma_i)} & w_1^{(\gamma_i)} & w_0^{(\gamma_i)} & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
w_{N-1}^{(\gamma_i)} & w_{N-2}^{(\gamma_i)} & w_{N-3}^{(\gamma_i)} & \cdots & w_0^{(\gamma_i)} & 0\\
w_{N}^{(\gamma_i)} & w_{N-1}^{(\gamma_i)} & w_{N-2}^{(\gamma_i)} & \cdots & w_1^{(\gamma_i)} & w_0^{(\gamma_i)}
\end{bmatrix}, \tag{4.30}$$

where N = (t − t0)/h. It can be seen that such a matrix is a rotated Hankel matrix. The vectors ŷ and u can be constructed to represent the samples of ŷ(t) and u(t), respectively:

$$\hat{y} = [\hat{y}(0), \hat{y}(h), \hat{y}(2h), \ldots, \hat{y}(Nh)]^{T}, \qquad u = [u(0), u(h), u(2h), \ldots, u(Nh)]^{T}. \tag{4.31}$$

Following the strategies illustrated in Figure 4.4 (b), it can be seen that

$$\sum_{i=1}^{n} a_i W^{(\gamma_i)} \hat{y} = u,$$

and the output of the system can be evaluated eventually from

$$y = BA^{-1}u = \biggl[\sum_{i=1}^{m} b_i W^{(\eta_i)}\biggr]\biggl[\sum_{i=1}^{n} a_i W^{(\gamma_i)}\biggr]^{-1} u, \tag{4.32}$$

where

$$A = \sum_{i=1}^{n} a_i W^{(\gamma_i)}, \qquad B = \sum_{i=1}^{m} b_i W^{(\eta_i)}. \tag{4.33}$$

The matrix algorithm for solving the fractional-order differential equation in (4.26) is proposed as follows.

Algorithm 4.2 (The matrix algorithm). Proceed as follows.
(1) Evaluate the key vectors from (4.29).
(2) Configure the matrices A and B as rotated Hankel matrices.
(3) Compute the solution y(t) through (4.32).

Based on the algorithm, a MATLAB function fode_solm() can be written to solve numerically linear fractional-order differential equations with zero initial conditions. The syntax of the function is exactly the same as the one of fode_sol().

function y=fode_solm(a,na,b,nb,u,t)
   h=t(2)-t(1); u=u(:); A=0; B=0;
   g=double(genfunc(1)); nt=length(t);
   n=length(a); m=length(b);
   for i=1:n, A=A+get_vecw(na(i),nt,g)*a(i)/(h^na(i)); end
   for i=1:m, B=B+get_vecw(nb(i),nt,g)*b(i)/(h^nb(i)); end
   A=rot90(hankel(A(end:-1:1))); B=rot90(hankel(B(end:-1:1)));
   y=B*inv(A)*u;

Example 4.8. Solve the equation in Example 4.7 with the matrix algorithm.

Solution. Selecting a step-size of h = 0.002, the solutions of the two algorithms can be obtained with the following statements, and it can be seen that the solutions of the two algorithms are exactly the same.

>> a=[1,8,26,73,90]; b=[30,90];
   na=[3.5,3.1,2.3,1.2,0.5]; nb=[1,0.3];
   t=0:0.002:10; u=sin(t.^2);
   tic, y=fode_sol(a,na,b,nb,u,t); toc
   tic, y1=fode_solm(a,na,b,nb,u,t); toc, plot(t,y,t,y1)


The closed-form solver takes 0.73 seconds, while the matrix solver takes 19.9 seconds. It can be seen that for this example, the closed-form solver is much more effective. It should be noted that, if N is large, the storage space is demanding, and the computation speed could be extremely slow. Therefore, the matrix-based algorithm is not recommended.
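To make the structure of (4.30)-(4.33) concrete, here is a minimal Python sketch (an assumption-laden illustration, not the book's fode_solm()). Because every W^{(γ)} is lower-triangular Toeplitz, A is lower triangular, so A ŷ = u can be solved by forward substitution instead of a dense inverse:

```python
# Sketch of the matrix approach for a1*D^g1 y + a2*y = u, zero initial
# conditions: build the first column of A = sum a_i W^{(g_i)} and solve
# the lower-triangular Toeplitz system by forward substitution.
import math

def gl_weights(g, n):
    w = [1.0]
    for j in range(1, n):
        w.append((1 - (g + 1) / j) * w[-1])
    return w

def solve_matrix_form(a, na, u, h):
    n = len(u)
    ws = [gl_weights(g, n) for g in na]
    # first column of the lower-triangular Toeplitz matrix A of (4.33)
    col = [sum(ai / h**gi * w[k] for ai, gi, w in zip(a, na, ws))
           for k in range(n)]
    y = [0.0] * n
    for m in range(n):                 # forward substitution: A y = u
        s = sum(col[m - j] * y[j] for j in range(m))
        y[m] = (u[m] - s) / col[0]
    return y

h, N = 0.01, 500
u = [1.0] * (N + 1)
y = solve_matrix_form([1.0, 1.0], [1.0, 0.0], u, h)  # y' + y = 1
print(abs(y[-1] - (1 - math.exp(-5.0))))
```

Exploiting the triangular structure keeps the cost at O(N²); a dense inverse as in inv(A) costs O(N³), which is consistent with the timing gap reported above.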

4.4.3 High-precision closed-form algorithm

A closed-form solution was introduced in the previous section, but its precision is not high, since the o(h) algorithm for fractional-order differentiation was used. Considering the o(h^p) high-precision algorithm for fractional-order differentiation proposed in Chapter 3, it is quite natural to formulate a high-precision closed-form solution for fractional-order differential equations.

If there are zero initial conditions in (4.26), the fractional-order derivatives in the equation can be replaced by Grünwald–Letnikov fractional-order derivatives, and the closed-form solution can also be obtained as (4.27). If the coefficients w_j in (4.27) are evaluated by the high-precision algorithm in Chapter 3, the high-precision closed-form solutions of the original equations can be obtained with the following algorithm.

Algorithm 4.3 (High-precision closed-form solution algorithm). Proceed as follows.
(1) Replace the fractional-order derivatives in (4.26) with the Grünwald–Letnikov operator directly.
(2) Evaluate the coefficients w_j with the algorithm in Chapter 3.
(3) Compute the numerical solutions y_k with (4.27).

Based on the algorithm, a MATLAB function fode_sol9() can be written as follows. The interface is close to fode_sol(), with an extra argument p.

function y=fode_sol9(a,na,b,nb,u,t,p)
   h=t(2)-t(1); n=length(t); vec=[na nb]; u=u(:);
   g=double(genfunc(p)); t=t(:); W=[];
   for i=1:length(vec), W=[W; get_vecw(vec(i),n,g)]; end
   D1=b(:)./h.^nb(:); nA=length(a);
   y1=zeros(n,1); W=W.'; D=sum((a.*W(1,1:nA))./[h.^na]);
   for i=2:n
      A=[y1(i-1:-1:1)]'*W(2:i,1:nA);
      y1(i)=(u(i)-sum(A.*a./[h.^na]))/D;
   end
   for i=2:n, y(i)=(W(1:i,nA+1:end)*D1)'*[y1(i:-1:1)]; end

The necessary condition for using the pth-order Algorithm 4.3 is that the first p samples of the function y(t) are all zero or very close to zero. The reason is that some terms are missing in the formula when evaluating the w_j in the initial period, as demonstrated in Table 3.6. If the equation does not satisfy the condition, it can be solved by the algorithms presented in the next sections.

Example 4.9. Consider the fractional-order differential equation

$$y'''(t) + {}^{C}D_t^{2.5} y(t) + y(t) = -1 + t - \frac{t^2}{2} - t^{0.5} E_{1,1.5}(-t).$$

It is known that the constructed equation has an analytical solution of the form y(t) = −1 + t − t²/2 + e^{−t}. Solve the equation with different values for p.

Solution. A large step-size of h = 0.1 is selected, and the numerical solutions to the given equation can be found for different selections of p. The following statements can be issued, and it can be found that the solution curves are almost identical.

>> a=[1 1 1]; na=[3 2.5 0]; b=1; nb=0;
   t=0:0.1:1; y=-1+t-t.^2/2+exp(-t);
   u=-1+t-t.^2/2-t.^0.5.*ml_func([1,1.5],-t);
   y1=fode_sol9(a,na,b,nb,u,t,1); y2=fode_sol9(a,na,b,nb,u,t,2);
   y3=fode_sol9(a,na,b,nb,u,t,3); y4=fode_sol9(a,na,b,nb,u,t,4);
   e1=y-y1; e2=y-y2; e3=y-y3; e4=y-y4; y(1:4)
   plot(t,e1,'-',t,e2,'--',t,e3,':',t,e4,'-.')

The error curves of the solutions can be assessed with the above statements, as shown in Figure 4.8. It can be seen that the solution obtained with p = 2 is significantly more accurate than the one obtained with p = 1, since the order of the algorithm is higher. It is also better than the ones with p = 3 and p = 4, because the first four samples of y(t) are 0, −0.0002, −0.0013 and −0.0042, respectively, where the third and fourth samples are not very close to zero; therefore, the computation accuracy is affected.

The matrix version of the algorithm can also be established. However, since the computation load is too heavy, it is not practical in real applications.

Fig. 4.8: Errors under different orders p (curves labelled p = 1, p = 2, and p = 3 and 4).
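The o(h^p) weights used by fode_sol9() come from raising a pth-order generating function g_p(z) = Σ_{k=1}^{p} (1/k)(1 − z)^k to the power α (the idea behind the book's genfunc()/get_vecw() pair). A Python sketch of the same idea, using the standard power-series recurrence for f(z)^α (names borrowed for readability; this is not the toolbox code):

```python
from math import comb

def genfunc(p):
    """Polynomial coefficients (in z) of g_p(z) = sum_{k=1}^{p} (1/k)*(1-z)^k."""
    g = [0.0] * (p + 1)
    for k in range(1, p + 1):
        for j in range(k + 1):
            g[j] += (-1) ** j * comb(k, j) / k
    return g

def get_vecw(alpha, n, g):
    """First n coefficients of g(z)**alpha via the power-series recurrence
    m*g0*w_m = sum_j ((alpha+1)*j - m) * g_j * w_{m-j}: the o(h^p) weights."""
    w = [g[0] ** alpha]
    for m in range(1, n):
        s = 0.0
        for j in range(1, min(m, len(g) - 1) + 1):
            s += ((alpha + 1) * j - m) * g[j] * w[m - j]
        w.append(s / (m * g[0]))
    return w

# p = 2: g(z) = 3/2 - 2z + z^2/2; for alpha = 1 the weights reduce to the
# classical second-order backward-difference stencil [1.5, -2, 0.5, 0, ...]
print(get_vecw(1.0, 5, genfunc(2)))
```

For p = 1 the same routine reproduces the first-order weights of (4.29), so the high-precision scheme is a strict generalisation of Algorithm 4.1.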

4.5 Numerical solutions to Caputo differential equations | 121

4.5 Numerical solutions to Caputo differential equations with nonzero initial conditions

4.5.1 Mathematical description of Caputo equations

The linear fractional-order differential equations with zero initial conditions have been studied thoroughly in the previous section, and all such equations can be solved easily with MATLAB functions. For equations with nonzero initial conditions, Caputo differential equations are usually involved. In this section, numerical solutions of linear Caputo differential equations will be fully studied.

Definition 4.12. The general form of a linear Caputo fractional-order differential equation is expressed as

$$a_1\,{}^{C}_{0}D_t^{\gamma_1} y(t) + a_2\,{}^{C}_{0}D_t^{\gamma_2} y(t) + \cdots + a_n\,{}^{C}_{0}D_t^{\gamma_n} y(t) = \hat{u}(t), \tag{4.34}$$

and it is safe to assume that γ1 > γ2 > ⋯ > γn. The initial conditions are y(0) = c0, y′(0) = c1, ..., y^{(q−1)}(0) = c_{q−1}. When c_i = 0, 0 ≤ i ≤ q − 1, the problem is referred to as one with zero initial conditions, otherwise with nonzero initial conditions. For a fractional-order differential equation to have a unique solution, q = ⌈γ1⌉.

If there are right-hand dynamic expressions in u(t), the signal û(t) should be evaluated first, with the high-precision function caputo9():

$$\hat{u}(t) = b_1\,{}^{C}_{0}D_t^{\eta_1} u(t) + b_2\,{}^{C}_{0}D_t^{\eta_2} u(t) + \cdots + b_m\,{}^{C}_{0}D_t^{\eta_m} u(t). \tag{4.35}$$

It should be noted that under the Caputo definition the initial values involved are those of the original function and its integer-order derivatives, while under other definitions the initial values of some fractional-order derivatives should be provided. Therefore, Caputo fractional-order differential equations with nonzero initial values are more practical, and deserve more attention in research.
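For the polynomial auxiliary functions used below, Caputo derivatives have a simple standard closed form (not restated in this section, but easy to check): for a nonnegative integer k, C0Dt^α t^k equals 0 when k < ⌈α⌉, and Γ(k+1)/Γ(k+1−α) t^{k−α} otherwise. A quick Python check (a sketch, not toolbox code):

```python
# Caputo derivative of t**k (k a nonnegative integer), standard formula:
#   0 for k < ceil(alpha); Gamma(k+1)/Gamma(k+1-alpha)*t**(k-alpha) otherwise.
import math

def caputo_power(k, alpha, t):
    if k < math.ceil(alpha):
        return 0.0   # integer derivatives of low-degree terms vanish first
    return math.gamma(k + 1) / math.gamma(k + 1 - alpha) * t ** (k - alpha)

# Caputo derivative of a constant is zero; D^{0.5} t = (2/sqrt(pi)) * sqrt(t)
print(caputo_power(0, 0.5, 1.0), caputo_power(1, 0.5, 1.0))
```

This is the fact that makes the Taylor auxiliary construction of the next subsection computable term by term.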

4.5.2 Taylor auxiliary algorithm

If there exist nonzero initial conditions in (4.34), an auxiliary function T(t) should be introduced to convert the original Caputo equation into a zero initial value problem.

Definition 4.13 ([56]). The Taylor auxiliary function is constructed such that

$$T(t) = \sum_{k=0}^{q-1} \frac{y^{(k)}(0)}{k!}\, t^{k}, \tag{4.36}$$

and the actual output signal y(t) can be decomposed as

$$y(t) = z(t) + T(t), \tag{4.37}$$
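To illustrate Definition 4.13, a small Python sketch (not part of the FOTF toolbox) builds T(t) for the initial conditions used later in Example 4.11 and checks that z = y − T indeed starts from zero:

```python
# T(t) = sum_k y^{(k)}(0)/k! * t^k of (4.36), from an initial-condition list.
import math

def taylor_aux(ics):
    def T(t):
        return sum(c / math.factorial(k) * t**k for k, c in enumerate(ics))
    return T

# y(t) = sqrt(2) sin(4t/5 + pi/4) with y(0)=1, y'(0)=4/5, y''(0)=-16/25
y = lambda t: math.sqrt(2) * math.sin(4 * t / 5 + math.pi / 4)
T = taylor_aux([1.0, 4.0 / 5, -16.0 / 25])
z = lambda t: y(t) - T(t)       # z has zero initial values, z = O(t^3) near 0
print(z(0.0), z(1e-4))          # both very close to zero
```

Near t = 0 the residual z(t) is of order t³, which is exactly what makes the zero-initial-condition machinery of Section 4.4 applicable to z.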

where the initial conditions of T(t) are the same as those of y(t), while z(t) is the signal with zero initial conditions. Substituting y(t) with z(t) + T(t) in (4.34), the original equation can be transformed into the equation of z(t) with zero initial conditions,

$$a_1\,{}^{C}_{0}D_t^{\gamma_1} z(t) + a_2\,{}^{C}_{0}D_t^{\gamma_2} z(t) + \cdots + a_n\,{}^{C}_{0}D_t^{\gamma_n} z(t) = \hat{u}(t) - P(t), \tag{4.38}$$

where P(t) can be expressed as

$$P(t) = \bigl(a_1\,{}^{C}_{0}D_t^{\gamma_1} + a_2\,{}^{C}_{0}D_t^{\gamma_2} + \cdots + a_n\,{}^{C}_{0}D_t^{\gamma_n}\bigr)T(t). \tag{4.39}$$

Since z(t) is a function with zero initial conditions, one has ${}^{C}_{t_0}D^{\alpha} z(t) = {}^{GL}_{t_0}D^{\alpha} z(t)$. The right-hand side samples of the equation can be evaluated first; then the numerical solution z_m can be found in a form similar to (4.27).

Theorem 4.12. The closed-form solution to linear Caputo differential equations with nonzero initial conditions can be expressed as

$$y_m = \frac{1}{\sum_{i=1}^{n} a_i h^{-\gamma_i}} \biggl( \hat{u}_m - P_m - \sum_{i=1}^{n} \frac{a_i}{h^{\gamma_i}} \sum_{j=0}^{m-1} w_{m-j}^{(\gamma_i)} z_j \biggr) + T_m. \tag{4.40}$$

Example 4.10. Solve the Bagley–Torvik equation (see [16])

$$Ay''(t) + BD^{3/2} y(t) + Cy(t) = C(t + 1) \quad\text{with } y(0) = y'(0) = 1,$$

and show that the solutions are independent of the constants A, B and C.

Solution. An auxiliary signal T(t) = t + 1 can be introduced such that y(t) = z(t) + t + 1, and it can be seen that the signal z(t) has zero initial conditions. Thus, the Caputo equation of z(t) is the same as the Grünwald–Letnikov equation. Also, due to the definition of Caputo differentiation, the 1.5th-order Caputo derivative of y(t) takes the second-order derivative of y(t) first and then evaluates the integral, which means that the compensation term t + 1 is eliminated when the second-order derivative is taken. Therefore, the original equation can be rewritten as

$$Az''(t) + BD^{3/2} z(t) + Cz(t) + C(t + 1) = C(t + 1).$$

It can be seen that the term C(t + 1) on both sides can be cancelled, such that the original equation is reduced to

$$Az''(t) + BD^{3/2} z(t) + Cz(t) = 0.$$

Since the initial conditions of z(t) are zero, and there is no external excitation to the system, the signal z(t) ≡ 0, no matter what the values of the parameters A, B, C are. Therefore, y(t) = t + 1.

The auxiliary function T(t) is a Taylor series of y(t) at t = 0 based on the initial conditions, so the difference between T(t) and y(t) is small when t is near t = 0. However, if y(t) is a bounded function, then |y(t) − T(t)| may increase as t increases. When the variable t is big enough, |T(t)| becomes an increasing function; |z(t)| = |y(t) − T(t)| is


also an increasing function, with z(t) and T(t) having opposite signs. Hence, y(t) = z(t) + T(t) is the difference between two large values. Obviously, this kind of calculation makes it difficult to guarantee the accuracy. The following example illustrates the problem.

Algorithm 4.4 (Numerical solution of the Caputo differential equation). One proceeds in the following way.
(1) For the given initial condition vector, construct T_m from (4.36).
(2) Evaluate the equivalent input û from (4.35).
(3) Compute the new equivalent input ũ = û − P from (4.39).
(4) Solve the Grünwald–Letnikov equation of z with Algorithm 4.1.
(5) Add T_m back to the solution such that y_m = z_m + T_m.

A MATLAB function is written based on the algorithm:

function y=fode_caputo0(a,na,b,nb,y0,u,t)
   h=t(2)-t(1); T=0; P=0; U=0;
   for i=1:length(y0), T=T+y0(i)*t.^(i-1)/gamma(i); end
   for i=1:length(na), P=P+a(i)*caputo9(T,t,na(i),5); end
   for i=1:length(nb), U=U+b(i)*caputo9(u,t,nb(i),5); end
   z=fode_sol(a,na,1,0,U-P.',t); y=z+T;

with the syntax y = fode_caputo0(a, na, b, nb, y0, u, t), where the initial value vector is specified as y0 = [c0, c1, ..., c_{q−1}]. In the function, a fifth-order algorithm is used as the default order.

Example 4.11. Solve the Caputo differential equation

$$y'''(t) + \frac{1}{16}\,{}^{C}_{0}D_t^{2.5} y(t) + \frac{4}{5}\, y''(t) + \frac{3}{2}\, y'(t) + \frac{1}{25}\,{}^{C}_{0}D_t^{0.5} y(t) + \frac{6}{5}\, y(t) = \frac{172}{125}\cos\frac{4t}{5},$$

with the initial conditions y(0) = 1, y′(0) = 4/5, y′′(0) = −16/25. It is known that the analytical solution is y(t) = √2 sin(4t/5 + π/4).

Solution. With the Taylor auxiliary algorithm, select T(t) = 1 + 4t/5 − 8t²/25 and let y(t) = z(t) + T(t). The equation can be solved numerically with (4.40). If the step-sizes are selected as h = 0.01 and h = 0.1, the numerical solutions are obtained as shown in Figure 4.9.
>> a=[1 1/16 4/5 3/2 1/25 6/5]; na=[3 2.5 2 1 0.5 0];
   b=1; nb=0; t=[0:0.01:30]; u=172/125*cos(4*t/5);
   y0=[1 4/5 -16/25];
   y1=fode_caputo0(a,na,b,nb,y0,u,t);
   t1=[0:0.1:30]; u2=172/125*cos(4*t1/5);
   y2=fode_caputo0(a,na,b,nb,y0,u2,t1);
   y=sqrt(2)*sin(4*t/5+pi/4);
   plot(t,y,'-',t,y1,'--',t1,y2,':')

It can be seen that the solution at h = 0.01 is quite close to the theoretical one; however, when the step-size increases, the errors become larger and larger.


Fig. 4.9: Analytical and numerical solutions.

It should be noted that in earlier publications, since the o(h) algorithm was used in evaluating the signals û(t) and y(t) in the equation, the error may become too large to ensure convergent equation solutions when h is large. The Taylor auxiliary function approach may then fail, and exponential auxiliary functions were introduced [73]. New problems were also introduced with such auxiliary functions. With the high-precision Caputo differentiation algorithm, the Taylor auxiliary function is adequate.
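As a cross-check of the Taylor auxiliary approach, Algorithm 4.4 can be sketched in Python on the Bagley–Torvik equation of Example 4.10 (a hypothetical port, here with A = B = C = 1). With T(t) = t + 1 the right-hand side of the z-equation vanishes identically, so z = 0 and y(t) = t + 1:

```python
# Sketch of Algorithm 4.4 on A y'' + B D^{1.5} y + C y = C(t+1), y(0)=y'(0)=1.
def gl_weights(g, n):
    w = [1.0]
    for j in range(1, n):
        w.append((1 - (g + 1) / j) * w[-1])
    return w

def solve_zero_ic(a, na, rhs, h):
    """Closed-form GL solution (4.27) for zero initial conditions."""
    n = len(rhs)
    ws = [gl_weights(g, n) for g in na]
    D = sum(ai / h**gi for ai, gi in zip(a, na))
    z = [0.0] * n
    for m in range(1, n):
        s = sum(ai / h**gi * sum(w[j] * z[m - j] for j in range(1, m + 1))
                for ai, gi, w in zip(a, na, ws))
        z[m] = (rhs[m] - s) / D
    return z

h, N = 0.01, 300
t = [k * h for k in range(N + 1)]
A = B = C = 1.0
uhat = [C * (tk + 1) for tk in t]           # right-hand side C(t+1)
P = uhat[:]                                 # A*T'' + B*D^{1.5}T + C*T = C(t+1)
rhs = [ui - pi for ui, pi in zip(uhat, P)]  # identically zero
z = solve_zero_ic([A, B, C], [2.0, 1.5, 0.0], rhs, h)
y = [zk + tk + 1 for zk, tk in zip(z, t)]   # add T(t) back
print(max(abs(yk - (tk + 1)) for yk, tk in zip(y, t)))  # 0.0
```

The numerical answer reproduces the analytical conclusion of Example 4.10 exactly, independently of the step-size, because the excitation of the z-equation is identically zero.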

4.5.3 High-precision algorithm

As indicated earlier, the necessary condition for using the pth-order high-precision algorithm is that the first p samples of z(t) = y(t) − T(t) are all zero or very close to zero. Unfortunately, this necessary condition is not satisfied in most actual Caputo equations. This phenomenon was illustrated in Example 4.9. Since the order p can be selected independently, there are two possibilities in the relationship between p and the order q of the actual Caputo equation. If p ≤ q, then the given initial conditions are adequate for finding the high-precision numerical solutions with Algorithm 4.3; if p > q, however, the first p equivalent initial conditions should be constructed, such that Algorithm 4.3 can be used. In the latter case, a two-step algorithm is proposed.

(1) Finding equivalent initial conditions. No matter what the order q of the Caputo differential equation is, the new Taylor auxiliary function T(t) must be constructed such that

$$T(t) = \sum_{k=0}^{p-1} c_k \frac{t^k}{k!}. \tag{4.41}$$


If p ≤ q, then assign c_k = y^{(k)}(0) for k = 0, 1, ..., p − 1. However, if p > q, set c_k = y^{(k)}(0) for k = 0, 1, ..., q − 1, and the remaining p − q initial values c_k can be computed as follows. In this case, the signals T(t) and y(t) have the same initial conditions, written as c_k for 0 ≤ k ≤ p − 1. Therefore, the signal z(t) has zero initial conditions. For convenience of the presentation, rewrite T(t) as

$$T(t) = \sum_{k=0}^{p-1} c_k T_k, \tag{4.42}$$

where T_k = t^k/k!. Since z(t) has zero initial conditions, its interpolation polynomial over the first p samples is approximately zero. The signal T(t) has the same initial conditions as y(t), so T(t) and y(t) have the same interpolation polynomial over the first p samples. In other words, the function T(t) satisfies the original Caputo equation at the first p samples. Substituting (4.42) into the original equation, it is found that

$$\sum_{k=0}^{p-1} c_k x_k(t) = \hat{u}(t), \tag{4.43}$$

where

$$x_k = \bigl(a_1\,{}^{C}_{0}D_t^{\gamma_1} + a_2\,{}^{C}_{0}D_t^{\gamma_2} + \cdots + a_n\,{}^{C}_{0}D_t^{\gamma_n}\bigr)T_k(t); \tag{4.44}$$

it is tunable at the first p sampling points. Set t = h, 2h, ..., Kh, respectively, in (4.43), where K = p − q. The following equation can be established:

$$\begin{bmatrix}
x_0(h) & x_1(h) & \cdots & x_{p-1}(h)\\
x_0(2h) & x_1(2h) & \cdots & x_{p-1}(2h)\\
\vdots & \vdots & \ddots & \vdots\\
x_0(Kh) & x_1(Kh) & \cdots & x_{p-1}(Kh)
\end{bmatrix}
\begin{bmatrix} c_0\\ c_1\\ \vdots\\ c_{p-1} \end{bmatrix} =
\begin{bmatrix} \hat{u}(h)\\ \hat{u}(2h)\\ \vdots\\ \hat{u}(Kh) \end{bmatrix}, \tag{4.45}$$

from which it can be seen that the c_k (0 ≤ k ≤ q − 1) equal the initial conditions of the original equation; the number of unknowns c_i (q ≤ i ≤ p − 1) is K, and they can be solved from (4.45). The coefficients c_i (0 ≤ i ≤ p − 1) can be regarded as the new equivalent initial conditions; the auxiliary function T(t) should be constructed as in (4.42). The first p samples of y(t) can then be evaluated by T(t). The above procedure can be described as the following algorithm.

Algorithm 4.5 (Evaluate equivalent initial conditions). Proceed as follows.
(1) Construct T_k(t) = t^k/k!, where 0 ≤ k ≤ p − 1; compute x_k from (4.44).
(2) Set K = p − q; the coefficient matrix in (4.45) can be constructed from the x_k.
(3) Input û(h), û(2h), ..., û(Kh); they form the vector on the right-hand side of (4.45).
(4) The coefficients c_k (0 ≤ k ≤ q − 1) equal the initial conditions of the original equation, while the remaining coefficients c_i (q ≤ i ≤ p − 1) can be computed from (4.45).

Based on the algorithm, a MATLAB function caputo_ics() can be written to find the equivalent initial conditions.

function [c,y]=caputo_ics(a,na,b,nb,y0,u,t)
   na1=ceil(na); q=max(na1); K=length(t); p=K+q-1;
   y0=y0(:); u=u(:); t=t(:); d1=y0./gamma(1:q)';
   I1=1:q; I2=(q+1):p; X=zeros(K,p);
   for i=1:p, for k=1:length(a)
      if i>na1(k), X(:,i)=X(:,i)+a(k)*t.^(i-1-na(k))*gamma(i)/gamma(i-na(k));
   end, end, end
   u1=0; for i=1:length(b), u1=u1+b(i)*caputo9(u,t,nb(i),K-1); end
   X(1,:)=[]; u2=u1(2:end)-X(:,I1)*d1; d2=inv(X(:,I2))*u2; c=[d1;d2];
   y=0; for i=1:p, y=y+c(i)*t.^(i-1); end

The syntax of the function is [c, y] = caputo_ics(a, na, b, nb, y0, u, t), where the vectors a, na, b, nb store the coefficient and order vectors of both sides of the equation, and y0 stores the initial value vector, whose length is q. The arguments u and t are the input and time vectors, with lengths of p + 1. The returned argument c is the equivalent initial vector computed, and the vector y returns the first p samples of the output signal.

Comments 4.2 (Equivalent initial conditions). We remark the following:
(1) If the precision required is not very high, i.e. p ≤ q, the given initial conditions y0 are adequate.
(2) If the precision required is high, i.e. p > q, not only the vector y0, but also high-order terms in the Taylor series expansion of y(t) at t = 0 are required. The equivalent initial conditions are then the coefficients of the high-order terms in the Taylor series expansion of y(t).
(3) In real applications, since y(t) is not known, the high-order terms can only be obtained by solving the equations in the algorithm.

Example 4.12. Consider the Caputo equation in Example 4.11, whose analytical solution is y(t) = √2 sin(4t/5 + π/4). Analyse the accuracy of the algorithm for different step-sizes h.

Solution. With the MATLAB statements

>> syms t; y=sqrt(2)*sin(4*t/5+pi/4);
   F=taylor(y,t,'Order',7), y0a=sym2poly(F)

it can be found that the first seven terms of the Taylor series expansion are

$$y(t) = 1 + \frac{4}{5}t - \frac{8}{25}t^2 - \frac{32}{375}t^3 + \frac{32}{1875}t^4 + \frac{128}{46875}t^5 - \frac{256}{703125}t^6 + o(t^6).$$
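These Taylor coefficients can be checked by hand without the Symbolic Toolbox. Since y(t) = √2 sin(4t/5 + π/4) = sin(4t/5) + cos(4t/5), the derivatives of sin + cos at 0 cycle through 1, 1, −1, −1, and the kth coefficient is (4/5)^k times that cycle, divided by k!. A Python sketch of the check:

```python
# k-th Taylor coefficient of y(t) = sin(4t/5) + cos(4t/5) at t = 0.
import math

def taylor_coeff(k):
    cycle = [1.0, 1.0, -1.0, -1.0][k % 4]   # derivative pattern of sin + cos
    return (4.0 / 5.0) ** k * cycle / math.factorial(k)

print([taylor_coeff(k) for k in range(4)])   # 1, 4/5, -8/25, -32/375
```

The values agree with the fractions printed above, which is the data caputo_ics() has to reconstruct numerically.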

Tab. 4.1: Computing errors for different orders and step-sizes.

p    h = 0.1      h = 0.05     h = 0.02     h = 0.01     h = 0.005    h = 0.002    h = 0.001
1    5.0×10−6     1.6×10−6     3.74×10−7    4.00×10−8    7.22×10−9    5.16×10−10   5.1×10−14
2    3.2×10−7     5.7×10−8     5.9×10−9     3.77×10−10   2.89×10−11   1.35×10−12   2.0×10−16
3    8.2×10−9     6.1×10−10    2.4×10−11    6.84×10−13   1.90×10−14   4.96×10−16   2.0×10−16
4    5.1×10−10    2.0×10−11    3.8×10−13    5.3×10−15    2.22×10−16   5.43×10−16   3.0×10−16
5    3.2×10−11    6.2×10−13    6.0×10−15    3.0×10−16    3.8×10−16    4.4×10−16    3.8×10−16
6    8.2×10−13    6.0×10−15    3.0×10−16    2.0×10−16    3.1×10−16    3.1×10−16    3.8×10−16

New equivalent initial conditions can be evaluated with Algorithm 4.5, for different selections of orders p and step-sizes h; the reconstructed initial condition vector can be computed, and the approximation errors are obtained as shown in Table 4.1. >> a=[1 1/16 4/5 3/2 1/25 6/5]; na=[3 2.5 2 1 0.5 0]; y0=[1 4/5 -16/25]; b=1; nb=0; h=[0.1,0.05,0.02,0.01,0.005,0.002,0.001]; for i=1:7, for p=1:6 t=[0:h(i):p*h(i)]; u=172/125*cos(4/5*t); y=sqrt(2)*sin(4*t/5+pi/4); [ee,yy]=caputo_ics(a,na,b,nb,y0,u,t); err(p,i)=norm(yy-y’); end, end It can be seen that in most cases, the error of the algorithm is very small. Therefore, the necessary condition of reliable equivalent essential values are ensured with such a high-precision algorithm. (2) Solve the equation with the high-precision algorithm. The first p samples in y(t) could be evaluated by Algorithm 4.5; the auxiliary function T(t) can be accurately constructed. Then the original equation could be solved by a high-precision algorithm; the procedures could be described as the following algorithm. Algorithm 4.6 (High-precision algorithm for Caputo equations). Proceed as follows. (1) Evaluate the equivalent initial conditions with Algorithm 4.5. (2) Construct the function T(t) and decompose y(t) into T(t) + z(t). (3) Solve the zero initial condition problem of z(t) with Algorithm 4.3. (4) Solve the high-precision numerical solutions y(t) = T(t) + z(t). Based on this algorithm, a MATLAB function fode_caputo9() can be written as follows. The functions fode_sol9() and caputo_ics() are embedded in the new function fode_caputo9(). function y=fode_caputo9(a,na,b,nb,y0,u,t,p) T=0; dT=0; t=t(:); u=u(:);

128 | 4 Solutions of linear fractional-order differential equations

if p>length(y0) yy0=caputo_ics(a,na,b,nb,y0,u(1:p),t(1:p)); y0=yy0(1:p).*gamma(1:p)’; elseif p==length(y0) yy0=caputo_ics(a,na,b,nb,y0,u(1:p+1),t(1:p+1)); y0=yy0(1:p+1).*gamma(1:p+1)’; end for i=1:length(y0), T=T+y0(i)/gamma(i)*t.^(i-1); end for i=1:length(na), dT=dT+a(i)*caputo9(T,t,na(i),p); end u=u-dT; y=fode_sol9(a,na,b,nb,u,t,p)+T’; The syntax of the function is y = fode_caputo9(a, n a , b, n b , y0 , u, t, p). Similar to the function fode_caputo0() discussed earlier, the new function allows the user to select order p such that high-precision results may be obtained. Example 4.13. Solve the differential equation in Example 4.11 with high-precision algorithm, and assess the behaviour under different step-sizes and orders. Solution. Select a large step-size of h = 0.1, different orders p can be tried. The numerical solutions of the Caputo equation can be obtained, and the computation error at selected time instances can be measured as shown in Table 4.2. It can be seen that the error is decreasing significantly when p increases, except p = 6 may yield large errors when t is large. For this example p = 5 is a good choice. >> h=0.1; t=0:h:30; y=sqrt(2)*sin(4*t/5+pi/4); u=172/125*cos(4/5*t); ii=31:30:301; y=y(ii); a=[1 1/16 4/5 3/2 1/25 6/5]; na=[3 2.5 2 1 0.5 0]; b=1; nb=0; y0=[1 4/5 -16/25]; T=[]; for p=1:6 y1=fode_caputo9(a,na,b,nb,y0,u,t,p); err=y-y1(ii); T=[T abs(err.’)]; end In fact, smaller step-sizes can be tried. For instance, a moderate one h = 0.01 can be selected, and the errors at selected time instances can be measured as shown in Table 4.3. It seems that the error for large t is increased, compared with larger step-size of h, due to the increase of accumulative errors, since the computation points are increased by ten times. For this example, p = 3 is a good choice. 
>> h=0.01; t=0:h:30; y=sqrt(2)*sin(4*t/5+pi/4); u=172/125*cos(4/5*t); ii=301:300:3001; y=y(ii); a=[1 1/16 4/5 3/2 1/25 6/5]; na=[3 2.5 2 1 0.5 0]; b=1; nb=0; y0=[1 4/5 -16/25]; T=[]; for p=1:6

Tab. 4.2: Computing errors with different orders for h = 0.1.

time t   p=1        p=2        p=3        p=4          p=5          p=6
3        0.00103    0.01469    0.002614   0.000146     5.150×10^−7  3.034×10^−7
6        0.000353   0.00974    0.000599   4.576×10^−5  1.911×10^−7  1.111×10^−9
9        0.023482   0.00347    0.001267   4.883×10^−5  2.068×10^−7  3.720×10^−7
12       0.04216    0.004681   0.000448   0.000137     6.854×10^−7  1.016×10^−6
15       0.03881    0.01212    0.000228   0.0001925    4.163×10^−6  1.313×10^−6
18       0.00876    0.016612   0.001434   0.0002022    1.816×10^−5  2.251×10^−6
21       0.03357    0.016534   0.000876   0.0001621    5.091×10^−5  1.074×10^−6
24       0.066352   0.011885   0.001411   8.124×10^−5  0.0001259    1.767×10^−6
27       0.069885   0.004088   0.000479   1.812×10^−5  0.0002638    3.506×10^−6
30       0.040484   0.004329   0.0006258  0.000112     0.00050      0.000895

Tab. 4.3: Computing errors with different orders for h = 0.01.

time t   p=1        p=2          p=3          p=4          p=5          p=6
3        0.0002661  0.0008371    0.0002531    9.896×10^−7  6.242×10^−9  1.622×10^−7
6        0.0004601  8.888×10^−5  4.260×10^−5  2.256×10^−6  9.681×10^−8  6.299×10^−5
9        0.0003474  0.0005399    0.0001029    1.245×10^−5  1.145×10^−7  0.001254
12       0.000589   0.0010615    5.969×10^−5  2.683×10^−5  1.155×10^−5  0.0075575
15       0.0007764  0.0013316    5.806×10^−5  6.094×10^−5  4.532×10^−5  0.026356
18       0.0038395  0.001292     0.0001429    0.00010372   0.0001179    0.069312
21       0.0071045  0.00094829   0.00014851   0.00016832   0.0001865    0.15579
24       0.0086094  0.00038365   9.830×10^−5  0.00026625   0.0003858    0.30641
27       0.0067604  0.00026481   0.0001268    0.00034755   0.0009031    0.55351
30       0.001628   0.00084186   7.436×10^−5  0.00052638   0.0019239    0.94305

      y1=fode_caputo9(a,na,b,nb,y0,u,t,p);
      err=y-y1(ii); T=[T abs(err.')];
   end

It can be concluded from the example that the step-size should not be selected too small. Normally, the step-size can be selected such that the total number of computation samples does not exceed 1,500. Otherwise, the accumulated error will take effect.

Example 4.14. Consider again the equation in Example 4.9, with zero initial conditions. With the algorithm in that example, the accuracy of the high-precision algorithm was not satisfactory. Solve the problem with the Caputo equation algorithm for different values of p.

Solution. The following statements can be issued, and the errors for different selections of p are measured, as shown in Figure 4.10. It can be seen that the accuracy is significantly increased compared with the results in Example 4.9, and when p = 4, almost no error can be witnessed in the figure.


Fig. 4.10: Computation errors for different selections of p.

>> a=[1 1 1]; na=[3 2.5 0]; b=1; nb=0; t=0:0.1:1;
   y=-1+t-t.^2/2+exp(-t); y0=0;
   u=-1+t-t.^2/2-t.^0.5.*ml_func([1,1.5],-t);
   y1=fode_caputo9(a,na,b,nb,y0,u,t,1); e1=y-y1;
   y2=fode_caputo9(a,na,b,nb,y0,u,t,2); e2=y-y2;
   y3=fode_caputo9(a,na,b,nb,y0,u,t,3); e3=y-y3;
   y4=fode_caputo9(a,na,b,nb,y0,u,t,4); e4=y-y4;
   plot(t,e1,'-',t,e2,'--',t,e3,':',t,e4,'-.')

Therefore, it is advised that high-precision solutions of zero initial condition problems be obtained with fode_caputo9(), with a relatively large step-size.

4.6 Numerical solutions of irrational system equations

4.6.1 Irrational transfer function expression

The rational fractional-order transfer function has been studied in (4.2). In real applications, if there exists a term like p^γ(x), where γ is not an integer, the term must be expanded as an infinite series. Therefore, it is not possible to express it in the rational form of (4.2). In this case, a class of irrational fractional-order transfer functions should be considered. Moreover, any nonlinear transfer function containing fractional powers of s can be regarded as an irrational transfer function.

Definition 4.14 (Class of irrational transfer functions). If a transfer function G(s) can be written in the form

G(s) = ∏_{i=1}^{m} (b1 s^{η1} + b2 s^{η2} + ⋅⋅⋅ + b_{ri} s^{η_{ri}})^{β_i} / ∏_{i=1}^{n} (a1 s^{γ1} + a2 s^{γ2} + ⋅⋅⋅ + a_{ki} s^{γ_{ki}})^{α_i},    (4.46)

then it is called an irrational transfer function.
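Even though a model of the form (4.46) cannot be converted into a rational transfer function, it can still be evaluated pointwise, which is all the numerical methods of this section need. The following Python sketch is an illustration only — the helper name irrational_tf() is hypothetical and is not part of the FOTF toolbox:

```python
def irrational_tf(num_factors, den_factors):
    """Build G(s) from factored pseudo-polynomials as in (4.46).

    Each factor is (coeffs, orders, power): it contributes
    (sum_j coeffs[j]*s**orders[j])**power to the product.
    """
    def G(s):
        val = 1.0
        for coeffs, orders, power in num_factors:
            val *= sum(c * s**q for c, q in zip(coeffs, orders)) ** power
        for coeffs, orders, power in den_factors:
            val /= sum(c * s**q for c, q in zip(coeffs, orders)) ** power
        return val
    return G

# G(s) = (s^0.5 + 1) / (s^0.7 + 2 s^0.3 + 3)^2, evaluated at s = 1
G = irrational_tf([([1, 1], [0.5, 0.0], 1)],
                  [([1, 2, 3], [0.7, 0.3, 0.0], 2)])
print(G(1.0))   # 2/36
```

Passing s = jω instead of a real s gives the frequency response, which is how such models are handled numerically in the remainder of the section.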


4.6.2 Simulation with numerical inverse Laplace transforms

The analytical Laplace transforms of some functions can be obtained with direct calls to the functions laplace() or ilaplace(). However, there exist many functions for which analytical solutions of the Laplace and inverse Laplace transforms cannot be obtained. In such cases, numerical approaches should be considered instead. There are many numerical inverse Laplace transform solvers downloadable from the Internet. The function INVLAP(), developed by Juraj Valsa [69, 70], is one of them. To serve our purposes in typical feedback control system simulation of irrational systems, some changes and modifications are made to the code such that more facilities are supported. The new function is renamed INVLAP_new(), and its listing begins

function [t,y]=INVLAP_new(G,t0,tn,N,H,tx,ux)
G=add_dots(G); if nargin ...

>> syms x t;   % declare symbolic variables and the function
   G=(-17*x^5-7*x^4+2*x^3+x^2-x+1)...
      /(x^6+11*x^5+48*x^4+106*x^3+125*x^2+75*x+17);
   f=ilaplace(G,x,t); fun=char(subs(G,x,'s'));
   [t1,y1]=INVLAP_new(fun,5,100);  % numerical inverse Laplace transform
   tic, y0=subs(f,t,t1); toc      % theoretical results evaluation
   y0=double(y0); err=norm((y1-y0)./y0)*100  % compute relative error

If the number of points is increased from 100 to 1,000, the elapsed time of the function INVLAP_new() is about 0.11 seconds, while the functions ilaplace() and subs() may need more than 35.1 seconds. It can be seen that the numerical algorithm is much more effective. In practical applications, the transfer function G(s) of a certain complicated system is known, and the Laplace transform of the input signal is U(s); the output signal Y(s) = G(s)U(s) can then be obtained. The method can also be used to find numerical solutions of the output signal.

Example 4.16. Consider the irrational transfer function given by

G(s) = (s^0.4 + 0.4 s^0.2 + 0.5) / [√s (s^0.2 + 0.02 s^0.1 + 0.6)^0.4 (s^0.3 + 0.5)^0.6].

Draw the inverse Laplace transform of G(s) in the interval t ∈ (0.01, 1).

Solution. Unlike the previous example, the function ilaplace() cannot be used to find the analytical solution of this inverse Laplace transform problem, and a numerical approach is the only choice. Select N = 1,000; the following statements can be used to find the inverse Laplace transform numerically, and the time domain function is obtained as shown in Figure 4.11. If the number of points is increased to N = 5,000, the same curve is obtained, which validates the results.

>> f=['(s^0.4+0.4*s^0.2+0.5)/(s^0.2+0.02*s^0.1+0.6)^0.4',...
      '/(s^0.3+0.5)^0.6'];
   [t,y]=INVLAP_new(f,0,1,1000); plot(t,y), ylim([-20,2])

Fig. 4.11: Time response obtained from numerical inverse Laplace transform.

It should be noted that the fractional-order polynomial p^γ(x) corresponds in fact to an infinite series, and it is not possible to find the analytical inverse Laplace transform. Numerical techniques should be used to solve the original problems. The function is very effective and can be used in practical computations.
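Valsa's algorithm itself is not reproduced here. As a hedged illustration of what a numerical inverse Laplace transform routine does, the Python sketch below uses the classical Gaver–Stehfest summation instead — a different method from the one inside INVLAP_new(), chosen only because it is compact:

```python
from math import factorial, log

def stehfest_ilt(F, t, N=14):
    """Numerical inverse Laplace transform by the Gaver-Stehfest method.

    F is the Laplace-domain function, t > 0 the evaluation time, and N an
    even number of terms (double precision favours N around 12-16).
    """
    half = N // 2
    f = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k (depends only on N, could be cached)
        V = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            V += (j**half * factorial(2 * j) /
                  (factorial(half - j) * factorial(j) * factorial(j - 1) *
                   factorial(k - j) * factorial(2 * j - k)))
        V *= (-1) ** (k + half)
        f += V * F(k * log(2.0) / t)
    return f * log(2.0) / t

# F(s) = 1/(s+1) has the inverse transform f(t) = exp(-t)
print(stehfest_ilt(lambda s: 1.0 / (s + 1.0), 1.0))  # close to exp(-1)
```

Like the Valsa method, this works well for smooth responses but only needs samples of F(s) on the real axis, which makes it easy to test.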

4.6.3 Closed-loop irrational system response evaluation

In the existing function INVLAP(), the whole system should be modelled with a single string, and for closed-loop system evaluation such a string could be extremely long, so a more efficient solution to the problem is expected. With the modified function INVLAP_new(), a simple syntax is provided: [t, y] = INVLAP_new(G, t0, tn, N, H, U).

Comments 4.4 (The arguments of the function). We remark the following.
(1) The whole forward path transfer function can be expressed in a single string G, while the feedback path is represented by the string H, with the Laplace operator represented by the character s. Dot operations are not necessary.
(2) For a unity negative feedback structure, H can be assigned the numeric value 1, and for unity positive feedback, H = −1.

(3) The argument U is the string representation of the Laplace transform of the input signal, with the unit step signal represented by '1/s'.
(4) The arguments (t0, tn) specify the time span, while N specifies the number of points to simulate.
(5) The returned variables t and y are the vectors of time instances and output signal, respectively.

Example 4.17. A more complicated open-loop irrational system model is given by (see [3])

G(s) = [sinh(w√s)/(w√s)]² ⋅ 1/(√s sinh(√s))

with w = 0.1. Draw the closed-loop step response.

Solution. The open-loop model is an irrational one, and it is different from the form in Definition 4.14. The open-loop transfer function can be specified in a string, and then the closed-loop step response under the unity negative feedback structure can be evaluated directly, as shown in Figure 4.12.

>> G='(sinh(0.1*sqrt(s))/0.1/sqrt(s))^2/sqrt(s)/sinh(sqrt(s))';
   [t,y]=INVLAP_new(G,0,10,1000,1,'1/s'); plot(t,y)


Fig. 4.12: Closed-loop step response.
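The closed-loop evaluation described in Comments 4.4 amounts to forming the Laplace-domain expression Y(s) = G(s)U(s)/(1 + G(s)H(s)) before the numerical inversion. A minimal Python sketch of that composition (the helper name closed_loop_laplace is hypothetical; nothing is assumed about the internals of INVLAP_new()):

```python
def closed_loop_laplace(G, U, H=lambda s: 1.0):
    """Return Y(s) = G(s)U(s)/(1 + G(s)H(s)) for a negative feedback
    loop; G, U and H are Laplace-domain callables (H defaults to
    unity negative feedback)."""
    return lambda s: G(s) * U(s) / (1.0 + G(s) * H(s))

# unity feedback around an integrator with a step input:
# Y(s) = (1/s)/(1 + 1/s) * (1/s) = 1/(s*(s+1)); Y(1) = 0.5
Y = closed_loop_laplace(lambda s: 1.0 / s, lambda s: 1.0 / s)
print(Y(1.0))  # 0.5
```

Unity positive feedback corresponds to H(s) = −1, mirroring comment (2) above.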

4.6.4 Stability assessment of irrational systems

To check the stability of an irrational transfer function G(s), the following conjecture can be introduced.

Conjecture 4.1. If an irrational transfer function can be written as the ratio of two functions of s, i.e.

G(s) = N(s)/D(s),


where N(s) is a function of any form and D(s) is a pseudo-polynomial, then the system is stable if none of the solutions of D(s) = 0 has a positive real part.

Comments 4.5 (About the conjecture). We remark the following.
(1) The function N(s) need not be a pseudo-polynomial with non-integer powers; it can be any irrational function.
(2) Many examples were tried, with no exception found.

Now, the most important problem is the following: How can all the solutions of the irrational equation D(s) = 0 be found? An algorithm is presented in [77] to find, possibly, all the solutions of an equation.

Algorithm 4.7 (Finding all the solutions of a given set of equations, [77]). Proceed as follows.
Require: Anonymous function Y = F(X), initial solution X, range of the interested region A, error tolerance ϵ
   Initialisation: construct the initial stored solution set X, in a 3D array
   while true do
      Randomly generate an x0, and find a solution x with fsolve()
      if this solution does not exist in X then
         Store it in X
      end if
      if no new solution is found in tlim seconds then
         Terminate the while loop with the break command
      end if
   end while

A MATLAB function more_sols() [77] is written to implement the algorithm. The user may terminate the function at any time by pressing the "Ctrl-C" keys. The listing of the function begins

function more_sols(f,X0,varargin)
if nargin ...

>> syms x z; f1=z^23+5*z^16+6*z^13-5*z^4+7;
   p=sym2poly(f1); r=roots(p);
   f=x^2.3+5*x^1.6+6*x^1.3-5*x^0.4+7;
   r1=r.^10, double(subs(f,x,r1))

Unfortunately, most of the "solutions" thus obtained do not satisfy the original equation. How many solutions are there in the original equation? This question can be answered with the following statements, and, believe it or not, there are only two solutions of the equation, x = −0.1076 ± j0.5562. The remaining 21 solutions are extraneous roots. The conclusion is supported by the results of the more_sols() function call, which returns the same poles.
>> f=@(x)x.^2.3+5*x.^1.6+6*x.^1.3-5*x.^0.4+7;
   more_sols(f,zeros(1,1,0),100+100i), x0=X(:)

In mathematical terms, the genuine roots of the equation are located in the first Riemann sheet, or primary sheet, while the other solutions are located in other sheets.

Example 4.19. Assess the stability of the system, for K = 3,

G(s) = 1 / (s^√5 + 25 s^√3 + 16 s^√2 − K s^0.4 + 7).

Moreover, find the boundary value of K which makes G(s) unstable.

Solution. Clearly, the z substitution method cannot be used for this problem. The solutions of the characteristic equation can be found easily with the statements

>> K=3; f=@(s)s^sqrt(5)+25*s^sqrt(3)+16*s^sqrt(2)-K*s^0.4+7;
   more_sols(f,zeros(1,1,0),100+100i);

where the two solutions are found at s = −0.0812 ± 0.2880j. It can be seen that the two solutions have negative real parts; therefore, the system G(s) is stable. Set K = 10 and run the above statements again. It is found that the system is unstable, with unstable poles at 0.0338 ± 0.2238j. The bisection algorithm can be used to find the boundary, with the statements

>> a=3; b=10;
   while (b-a)>0.001, K=0.5*(a+b);
      f=@(s)s^sqrt(5)+25*s^sqrt(3)+16*s^sqrt(2)-K*s^0.4+7;


      more_sols(f,zeros(1,1,0),100+100i,eps,1);
      if real(X(1))>0, b=K; else, a=K; end
   end
   K0=K

and K0 = 7.8492 is found. It is found that when K < K0, the system is stable. To validate the result, select a slightly larger K0 = 7.85; the step response can be obtained with the statements

>> na=[sqrt(5),sqrt(3),sqrt(2),0.4,0]; K0=7.85; a=[1 25 16 -K0 7];
   t=0:0.1:100; y=fode_caputo9(a,na,1,0,0,ones(size(t)),t,4); plot(t,y)

and the result is shown in Figure 4.13. It can be seen that the response curve is almost an equal-magnitude oscillation, and begins to diverge.


Fig. 4.13: Step response of a critical unstable model.
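The random-start search of Algorithm 4.7 can be imitated without fsolve(). The Python sketch below is illustrative only (it is not the more_sols() code): it runs a complex Newton iteration with a numerical derivative on the characteristic function of Example 4.19 with K = 3, keeping only starting points that converge to a small residual:

```python
import random

def newton_roots(f, n_starts=200, box=2.0, tol=1e-10, seed=1):
    """Collect distinct complex roots of f by Newton iteration from
    random starting points inside a square of half-width `box`."""
    random.seed(seed)
    roots = []
    h = 1e-6
    for _ in range(n_starts):
        z = complex(random.uniform(-box, box), random.uniform(-box, box))
        try:
            for _ in range(100):
                df = (f(z + h) - f(z - h)) / (2 * h)  # numerical derivative
                if df == 0:
                    break
                step = f(z) / df
                z -= step
                if abs(step) < tol:
                    break
            # accept only converged, previously unseen roots
            if abs(f(z)) < 1e-8 and all(abs(z - r) > 1e-6 for r in roots):
                roots.append(z)
        except (OverflowError, ValueError, ZeroDivisionError):
            pass
    return roots

# characteristic function of Example 4.19, K = 3, principal-branch powers
f = lambda s: s**(5**0.5) + 25*s**(3**0.5) + 16*s**(2**0.5) - 3*s**0.4 + 7
roots = newton_roots(f)
print(roots)
```

Since Python's complex power uses the principal branch, only roots on the primary Riemann sheet are meaningful, matching the discussion above.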

4.6.5 Numerical Laplace transform

If the analytical Laplace transform of the input u(t) cannot be obtained, numerical approaches should be used as well. In the source code of the function INVLAP(), a loop is the major structure, and in each step a vector s is created. Based on this vector, a numerical integral is used to compute the numerical Laplace transform

L[u(t)] = ∫₀^∞ u(t) e^{−st} dt = U(s),

where s is a vector. Since the term e^{−st} appears in the integrand, in practice, if the interval (0, tn) is large enough, finite-time integrals can be used to approximate the infinite ones. Thus, the numerical Laplace transform can be implemented. If the inputs


are described by the sample vectors tx and ux, the practical input signal u(t) can be approximated with spline interpolation methods. The dot product of the Laplace transform of the input signal and the transfer function G(s) can then be obtained, and it is the Laplace transform of the output signal. The MATLAB function integral() can be used to take numerical integrals with the vector s. All these facilities are implemented in the updated function INVLAP_new(). Supposing G is the transfer function of the system, the syntaxes are

[t, y] = INVLAP_new(G, t0, tn, N, H, f)
[t, y] = INVLAP_new(G, t0, tn, N, H, tx, ux),

where in the former syntax, f is the handle of the given input function, which can be specified as an anonymous function or an M-function. In the latter, the matching samples of the time and input signals should be provided, and the input itself is generated by spline interpolation from the samples.

Example 4.20. Assume that the transfer function of the fractional-order model G(s) is as given in Example 4.16. Draw the response of the system under the excitation of the input signal u(t) = e^{−0.3t} sin t².

Solution. The transfer function of the generalised fractional-order model can be entered in the same way, and the input function is described by an anonymous function. With the following statements, the output signal can be obtained as shown in Figure 4.14. The embedded numerical integration statements are quite time-consuming, and the following code takes more than 50 seconds.

>> f=@(t)exp(-0.3*t).*sin(t.^2);  % if input function is known
   G=['(s^0.4+0.4*s^0.2+0.5)/(s^0.2+0.02*s^0.1+0.6)',...
      '^0.4/(s^0.3+0.5)^0.6'];
   tic, [t,y]=INVLAP_new(G,0,15,400,0,f); toc, plot(t,y)


Fig. 4.14: Output of irrational system driven by complicated signal.

Now consider again the input signal. Assume that the mathematical expression of the input signal is unknown and only a set of samples in the interval t ∈ (0, 15) is known. Interpolation is used in the input function evaluation; thus, it is even more time-consuming, and it may need about 9 minutes to find the numerical solution, even though the number of computation points is halved. The curve obtained is virtually the same.

>> x0=0:0.2:15; u0=exp(-0.3*x0).*sin(x0.^2);
   tic, [t,y]=INVLAP_new(G,0,15,200,0,x0,u0); toc, plot(t,y)
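The finite-interval approximation L[u(t)] ≈ ∫₀^{tn} u(t)e^{−st} dt described above can be sketched with elementary trapezoidal quadrature. This is an illustration only; the book's implementation relies on MATLAB's adaptive integral():

```python
import numpy as np

def num_laplace(u, s, tn=10.0, n=20001):
    """Approximate L[u](s) by trapezoidal quadrature on (0, tn);
    valid when u(t)*exp(-s*t) has decayed by t = tn. s may be an
    array, matching the vectorised use inside INVLAP-style solvers."""
    t = np.linspace(0.0, tn, n)
    s = np.atleast_1d(np.asarray(s, dtype=complex))
    # one row of the integrand per value of s, integrated along t
    integrand = u(t)[None, :] * np.exp(-np.outer(s, t))
    dt = t[1] - t[0]
    return (integrand[:, :-1] + integrand[:, 1:]).sum(axis=1) * dt / 2.0

# u(t) = exp(-2t) has U(s) = 1/(s+2); check at s = 3
U = num_laplace(lambda t: np.exp(-2.0 * t), 3.0)
print(U[0])  # close to 0.2
```

The truncation error is governed by how far u(t)e^{−st} has decayed at tn, which is why a sufficiently large interval (0, tn) matters.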

5 Approximation of fractional-order operators

In the fractional-order differentiation computation D^α y(t) discussed in the previous chapters, the signal y(t) is a known function, or samples of it are known. If the differentiation operator D^α is driven by an unknown signal, as in the case of a fractional-order plant driven by a controller, where the control signal is not known in advance, the fractional-order actions cannot be evaluated with the methods discussed earlier. A filter imitating the fractional-order actions must be constructed to complete the task. To design such a filter, the most sensible thing to do is to design an integer-order transfer function whose frequency response is close to that of its fractional-order counterpart. It is known that the magnitude frequency response of the pure differentiation s^α is a straight line in the Bode diagram, with a slope of 20α dB/dec, while the phase response is a horizontal line at the constant value απ/2. It is also known that the magnitude Bode plots of integer-order systems always have slopes of 20k dB/dec, with integer k. Therefore, it is not possible to approximate a fractional-order operator over the entire frequency range with integer-order filters. In earlier studies on integer-order transfer function approximations, continued fraction-based filters were found useful in real applications; the commonly used ones are the continued fraction approximation, the Carlson filter and the Matsuda–Fujii filter. These filters will be discussed in Section 5.1. Unfortunately, the behaviours of these filters are sometimes very poor, and the frequency intervals of interest cannot be specified by the users.
The innovative filters proposed by Professor Alain Oustaloup and colleagues [49] opened a new era in the simulation of really complicated fractional-order systems, by imitating the fractional-order derivative and integral actions with filters having user-selected orders and frequency intervals of interest. The design and applications of this kind of filter are presented in Section 5.2. Based on these filters, if each fractional-order operator is replaced by an Oustaloup filter, a high-order approximation by conventional transfer functions can be achieved, and low-order optimal reduction can then be carried out to reduce the total order; these topics are covered in Section 5.3. For irrational systems which cannot be modelled exactly with the standard form of fractional-order transfer functions, for instance, systems with p^γ(s) factors, normally two approaches can be adopted. One is to use a frequency response fitting technique, while the other uses the Charef filter design technique; these approaches are presented in Section 5.4. An optimised Charef filter design algorithm is also proposed and implemented. In practical applications, different discrete filters can also be used [11, 52]; however, these filters are not covered in the book.

DOI 10.1515/9783110497977-005


5.1 Some continued fraction-based approximations

In early research, continued fraction expansion-based techniques were applied to design filters. The continued fraction was often regarded as an effective tool for approximating certain functions.

Definition 5.1. The typical form of a continued fraction expansion of a given function f(x) can be expressed as

f(x) = b1 + (x − a)^{c1} / (b2 + (x − a)^{c2} / (b3 + (x − a)^{c3} / (b4 + (x − a)^{c4} / (b5 + ⋅⋅⋅)))),

where the bi are constants, the ci are rational numbers and a is the reference point for the continued fraction expansion. For rational approximations, one usually has ci = 1.
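A truncated continued fraction of this form is evaluated from the innermost term outwards. A small generic Python helper (hypothetical, for illustration only):

```python
def eval_contfrac(b, terms):
    """Evaluate b[0] + terms[0]/(b[1] + terms[1]/(b[2] + ...)).

    b has one more entry than terms; evaluation runs backwards, which
    is the natural order for a truncated expansion.
    """
    val = b[-1]
    for bk, tk in zip(reversed(b[:-1]), reversed(terms)):
        val = bk + tk / val
    return val

# 1 + 4/(2 + 4/3) = 1 + 4/(10/3) = 2.2
print(eval_contfrac([1.0, 2.0, 3.0], [4.0, 4.0]))  # 2.2
```

For the expansions below, each entry of terms would be a power (x − a)^{ci} evaluated at the point of interest.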

5.1.1 Continued fraction approximation

It is not possible to find directly a continued fraction approximation of s^{−α}. Therefore, alternative continued fractions of G1(s) = (1 + Ts)^{−α} and G2(s) = (1 + 1/s)^{α}, for high-frequency and low-frequency approximations, respectively, can be found instead. It can be seen that when ωT ≫ 1, G1(s) can be used to approximate s^{−α}, while when ω ≪ 1, G2(s) can be used. An interface to MuPAD's function contfrac() is written in MATLAB to find the continued fraction approximation of a given function.

function [cf,r]=contfrac0(f,s,n,a)
cf=feval(symengine,'contfrac',f,[inputname(2) '=' num2str(a)],n);
if nargout==2, r=feval(symengine,'contfrac::rational',cf); end

The syntax of the function is [f1, r] = contfrac0(f, s, n, a), where f is the function to be approximated, s is the independent variable, n is the order, and a is the reference point. The returned argument f1 is the continued fraction expansion of the function f, while r is the corresponding rational approximation.

Example 5.1. Find the high- and low-frequency continued fraction approximations of the fractional-order integral 1/√s.


Solution. Selecting the reference point a = 2, the high-order approximation can be obtained with the following statements, with order n = 9, which yields fourth-order numerator and denominator.

>> syms s; T1=1/(1+s)^0.5; [c1,G2]=contfrac0(T1,s,9,2)

The continued fraction expansion is

c1(s) ≈ √3/3 + (s − 2)/(−6√3 + (s − 2)/(−2√3/9 + (s − 2)/(−54√3 + (s − 2)/(−2√3/45 + (s − 2)/(−150√3 + (s − 2)/(−2√3/105 + ⋅⋅⋅)))))),

while the rational approximation is

G2(s) = (s^4 + 112s^3 + 1464s^2 + 4864s + 4240) / (9√3 s^4 + 288√3 s^3 + 1944√3 s^2 + 4032√3 s + 2448√3).

It is noted that the approximate model is different from the following ones presented in [53]:

Gh(s) = (0.3513s^4 + 1.405s^3 + 0.8433s^2 + 0.1574s + 0.008995) / (s^4 + 1.333s^3 + 0.478s^2 + 0.064s + 0.002844)

and

Gl(s) = (s^4 + 4s^3 + 2.4s^2 + 0.448s + 0.0256) / (9s^4 + 12s^3 + 4.32s^2 + 0.576s + 0.0256),

possibly because a different reference point was selected. The Bode diagram comparisons of the fitting models and the original model 1/√s can be obtained with the following statements, as shown in Figure 5.1.

>> w=logspace(-3,3); s=fotf('s'); G0=s^-0.5; s=tf('s');
   G2=(s^4+112*s^3+1464*s^2+4864*s+4240)/...
      (9*3^(1/2)*s^4+288*3^(1/2)*s^3+1944*3^(1/2)*s^2 ...
      +4032*3^(1/2)*s+2448*3^(1/2));
   n=[0.3513 1.405 0.8433 0.1574 0.008995];
   d=[1 1.333 0.478 0.064 0.002844]; Gh=tf(n,d);
   n=[1 4 2.4 0.448 0.0256]; d=[9 12 4.32 0.576 0.0256]; Gl=tf(n,d);
   H=bode(G0,w); bode(H,'-',G2,'--',Gh,':',Gl,'-.')

It can be seen that the fitting of G2(s) is slightly better than that of Gh(s) in the high-frequency region, but poorer in the low-frequency region. The overall fitting behaviour of these models over the frequency range is very poor.


Fig. 5.1: Bode diagram comparisons with continued fractions.

Example 5.2. Compare the 0.5th-order integral of f(t) = e^{−t} sin(3t + 1) computed with the two filters G2(s) and Gh(s) against the Grünwald–Letnikov definition.

Solution. The comparisons with the theoretical results are carried out for the two filters with the following statements; the function lsim() is provided in the Control System Toolbox of MATLAB. The comparisons are given in Figure 5.2. It can be seen that the errors in the computation results are far too large for practical use. In other words, since the fitted frequency band is too narrow, the continued fraction expansion approach is not suitable for practical situations.

>> t=0:0.01:10; y=exp(-t).*sin(3*t+1); y0=glfdiff9(y,t,-0.5,5);
   y1=lsim(G2,y,t); y2=lsim(Gh,y,t);
   plot(t,y0,'-',t,y1,'--',t,y2,':')

Fig. 5.2: Comparisons of 0.5th-order integral.
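The Grünwald–Letnikov reference values used above come from glfdiff9(); the first-order Grünwald–Letnikov approximation itself is compact enough to sketch in Python (illustrative code, checked against the known result D^{0.5} t = 2√(t/π)):

```python
import math

def glfdiff(y, h, alpha):
    """First-order Grünwald-Letnikov approximation of the alpha-th
    derivative of the sampled signal y with step h (zero history)."""
    n = len(y)
    w = [1.0]                      # binomial weights by recurrence
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    d = []
    for j in range(n):
        acc = 0.0
        for k in range(j + 1):
            acc += w[k] * y[j - k]
        d.append(acc / h**alpha)
    return d

h = 0.001
t = [k * h for k in range(1001)]
d = glfdiff(t, h, 0.5)            # 0.5th derivative of f(t) = t
print(d[-1], 2 * math.sqrt(1.0 / math.pi))  # both close to 1.1284
```

With α = 1 the weights collapse to [1, −1, 0, …], recovering the ordinary backward difference, which is a convenient consistency check.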


5.1.2 Carlson’s method Another continued fraction-based method is the Carlson’s method [53]. The target of Carlson’s method is to find a rational approximation H(s) to a model with a fractional power of integer-order G(s), i.e. H(s) ≈ G α (s). Algorithm 5.1 (Design of Carlson’s model). Proceed as follows. (1) Input base model G(s) and fractional power α. (2) Denote q = α, m = q/2: set the initial model H0 (s) = 1. (3) Assign the number of iterations as n. (4) Iterations can be used to design Carlson’s filter, and in each iteration, an approximated rational function is obtained in the form H i (s) = H i−1 (s)

2 (q − m)H i−1 (s) + (q + m)G(s) 2 (q + m)H i−1 (s) + (q − m)G(s)

.
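In scalar form with α = 1/2 (so q = 2, m = 1), the iteration of step (4) is Halley's cubically convergent method for the square root, which suggests why very few iterations are normally needed. A quick numerical check in Python (illustrative, not the filter computation itself):

```python
def carlson_scalar(G, q, m, iters):
    """Scalar form of Carlson's iteration of step (4); with q = 2 and
    m = 1 this is Halley's method for sqrt(G)."""
    H = 1.0
    for _ in range(iters):
        H = H * ((q - m) * H**2 + (q + m) * G) / ((q + m) * H**2 + (q - m) * G)
    return H

# approximating sqrt(2) with q = 1/alpha = 2, m = q/2 = 1
print(carlson_scalar(2.0, 2.0, 1.0, 3))  # ~1.41421356...
```

The cubic convergence (each iteration roughly triples the number of correct digits) mirrors the rapid order growth of the rational filter, which is why n = 2 iterations are usually selected.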

Based on the algorithm, a MATLAB implementation of Carlson's filter is written in the function

function H=carlson_fod(alpha,G,iter)
q=1/alpha; m=q/2; H=1; s=tf('s');
for i=1:iter
   H=H*((q-m)*H^2+(q+m)*G)/((q+m)*H^2+(q-m)*G);
end
H=minreal(H);

with the syntax H = carlson_fod(α, G, n), where G is the integer-order base model such that the irrational G^α(s) is to be approximated. The argument n is the number of iterations, and the returned H is the Carlson filter model. Note that the actual order of the filter increases drastically with n, so normally n = 2 is selected.

Example 5.3. Design a filter using Carlson's algorithm for 1/√s and observe the frequency fitting quality. Also verify the computing accuracy for the 0.5th-order integral of the function f(t) = e^{−t} sin(3t + 1).

Solution. For this particular problem, it can be seen that the base model is G(s) = 1/s, and α = 0.5. Therefore, the Carlson filter can be designed with

>> s=tf('s'); G=1/s; alpha=0.5; H2=carlson_fod(alpha,G,2)

and the Carlson filter designed is

H2(s) = (0.1111s^6 + 4.074s^5 + 16.68s^4 + 19.11s^3 + 8.778s^2 + 1.704s + 0.1111) / (s^6 + 10s^5 + 20.33s^4 + 14.37s^3 + 4.333s^2 + 0.5185s + 0.01235).


Fig. 5.3: Bode diagram comparisons with continued fractions.

It should be noted here that the Carlson filter obtained is different from the one presented in [53]:

G3(s) = (s^4 + 36s^3 + 126s^2 + 84s + 9) / (9s^4 + 84s^3 + 126s^2 + 36s + 1).

The approximate models are compared with the following statements, as shown in Figure 5.3, and it can be seen that the fittings of the two models are almost the same. The fitting is good in the interval [10^{−1} rad/s, 10^{1} rad/s].

>> G3=tf([1 36 126 84 9],[9 84 126 36 1]); w=logspace(-3,3);
   s=fotf('s'); H=bode(s^-0.5,w); bode(H,'-',H2,'--',G3,':')

If the filters are used in the 0.5th-order integral evaluation, the result can be obtained with the following statements, as shown in Figure 5.4. It can be seen that within the specified frequency region, the output evaluated is almost identical to the one obtained with the Grünwald–Letnikov definition.

>> t=0:0.01:10; y=exp(-t).*sin(3*t+1); y0=glfdiff9(y,t,-0.5,5);
   y1=lsim(H2,y,t); plot(t,y0,'-',t,y1,'--')

If the time span is increased to the interval t ∈ (0, 100), the following statements can be used, and the time responses are shown in Figure 5.5. Unfortunately, the matching becomes worse as t further increases. It is known from the properties of the Laplace transform that the error at large t indicates a mismatch in the low-frequency responses. From the Bode diagram matching in Figure 5.3 it can be seen that the matching is very poor when ω < 10^{−2} rad/s. In order to provide better filtering results, a filter with better low-frequency response matching is expected.

>> t=0:0.01:100; y=exp(-t).*sin(3*t+1); y0=glfdiff9(y,t,-0.5,3);
   y1=lsim(H2,y,t); plot(t,y0,'-',t,y1,'--')


70

80

90

100

Fig. 5.4: Comparisons of 0.5th-order integral. 0.6 0.5 0.4 0.3 0.2 0.1 0 -0.1

0

10

20

30

40

50

60

Fig. 5.5: Comparisons of 0.5th-order integral over larger time span.

5.1.3 Matsuda–Fujii filter

Matsuda and Fujii proposed a rational function approximation algorithm for irrational transfer functions based on the continued fraction technique [38, 53]. Unfortunately, the description of the original algorithm is mathematically wrong, and a revised version is stated as follows.

Algorithm 5.2 (Revised Matsuda–Fujii filter design algorithm). Proceed as follows.
(1) Select a frequency vector ωi, i = 1, 2, . . . , n, and compute the frequency response f(jωi) of the original model to fit.
(2) Assign an initial sequence v1(ωk) = |f(jωk)|, for k = 1, 2, . . . , n.
(3) In a loop over k, extract ak = vk(ωk), and update vk+1(ω) with

v_{k+1}(ωj) = (ωj − ωk) / (vk(ωj) − vk(ωk))

for j = 1, 2, . . . , n.

(4) Compose the Matsuda–Fujii filter H(s) from the coefficients ak, k = 1, 2, . . . , n:

H(s) = a1 + (s − ω1)/(a2 + (s − ω2)/(a3 + (s − ω3)/(a4 + (s − ω4)/(a5 + ⋅⋅⋅)))).
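Steps (3)–(4) can be prototyped directly; the Python sketch below (an illustration of the algorithm, not the toolbox code) relies on the fact that the resulting H(s) interpolates the sampled gains exactly at the chosen frequencies:

```python
import numpy as np

def matsuda_coeffs(w, gains):
    """Coefficients a_k of the Matsuda-Fujii continued fraction from
    gain samples at frequencies w (step (3) of Algorithm 5.2)."""
    v = np.array(gains, dtype=float)
    a = []
    # the k-th update divides by zero at index k; those entries are
    # never used afterwards, so the warnings are suppressed
    with np.errstate(divide='ignore', invalid='ignore'):
        for k in range(len(w)):
            a.append(v[k])
            v = (w - w[k]) / (v - v[k])
    return a

def matsuda_eval(a, w, s):
    """Evaluate H(s) = a1 + (s-w1)/(a2 + (s-w2)/(...)) backwards."""
    H = a[-1]
    for k in range(len(a) - 2, -1, -1):
        H = a[k] + (s - w[k]) / H
    return H

w = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
g = w**-0.5                 # gain samples of 1/sqrt(s) on the real axis
a = matsuda_coeffs(w, g)
print([abs(matsuda_eval(a, w, wi) - wi**-0.5) for wi in w])  # ~0 at the nodes
```

Replacing the real evaluation point s by a rational variable reproduces the filter transfer function itself, which is what the MATLAB implementation below does with the Control System Toolbox object s = tf('s').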

The MATLAB implementation of the algorithm is given below.

function G=matsuda_fod(G0,n,wb,wh)
if nargin==2, f=G0; w=n; n=length(w);
else
   if isnumeric(G0), s=fotf('s'); G0=s^G0; end
   w=logspace(log10(wb),log10(wh),n); f=mfrd(G0,w);
end
v=abs(f(:).'); s=tf('s'); n=length(w);
for k=1:n, a(k)=v(k); v=(w-w(k))./(v-v(k)); end
G=a(n); for k=n-1:-1:1, G=a(k)+(s-w(k))/G; end

Comments 5.1 (About the filter design function). We make the following remarks.
(1) The function can be called with any of the three syntaxes
∙ G = matsuda_fod(f, ω), for given frequency response data;
∙ G = matsuda_fod(γ, n, ωb, ωh), fitting s^γ in (ωb, ωh), with n points;
∙ G = matsuda_fod(G0, n, ωb, ωh), fitting G0 in (ωb, ωh), with n points.
(2) Since ωb and ωh are selected as interior fitting points near the boundaries, the actual fitting interval is larger than (ωb, ωh).
(3) For a good approximation, please select an odd n; the actual order of the filter equals (n − 1)/2.
(4) The fitting target is good magnitude matching; it is not known why the phase matching quality is also good.

Example 5.4. Design the Matsuda–Fujii filter for 1/√s and assess the fitting quality.

Solution. Selecting the frequency range (10^{−1} rad/s, 10^{1} rad/s), the filter can be designed; the Bode diagram comparison is given in Figure 5.6.

>> w=logspace(-3,3); s=fotf('s'); G=s^-0.5;
   G1=matsuda_fod(G,9,1e-1,1e1)
   G2=matsuda_fod(-0.5,9,1e-1,1e1)
   H=bode(G,w); bode(H,G1,G2,{1e-3,1e3})

The two filters designed with the different syntaxes are exactly the same, and they are the same as the one given in [53]:

G1(s) = (0.0855s^4 + 4.876s^3 + 20.84s^2 + 13s + 1) / (s^4 + 13s^3 + 20.84s^2 + 4.876s + 0.0855).

The low-frequency fitting is close to that of the Carlson filter, so the low-frequency fitting problem remains unsolved. The frequency interval boundaries and the order of the


Fig. 5.6: Bode diagram comparisons.

Matsuda–Fujii filter can be reassigned for better frequency response fitting qualities, so that much better filter behaviours can be achieved.

5.2 Oustaloup filter approximations

It has been demonstrated through examples that the continued fraction-based filters, except the Matsuda–Fujii filter, are not very good in frequency response fitting; besides, the expected frequency ranges cannot be selected by the users, which may not be ideal in real applications. In this section, a more practical filter – the Oustaloup filter – and its modified forms will be discussed.

5.2.1 Ordinary Oustaloup approximation

Suppose that the frequency range of interest is selected as (ω_b, ω_h), and a set of asymptotic segments is used to approximate the straight line of the ideal differentiator in this range, as shown in Figure 5.7. A well-known French scholar in fractional-order control, Professor Alain

Fig. 5.7: Zigzag poly-line fitting with Oustaloup filter.

Oustaloup, proposed a filter design methodology based on such an idea [51]; we refer to this filter as the Oustaloup filter in this book. All the segments in the asymptotes are generated by integer-order poles and zeros such that the slopes of the magnitude asymptotes alternate between 0 dB/dec and −20 dB/dec. If an adequate number of segments is selected, the exact Bode magnitude plot looks like a straight line in the range; therefore, the poly-lines approximate the straight line very closely.

Definition 5.2. The standard form of the Oustaloup filter is
$$G_{\mathrm f}(s)=K\prod_{k=1}^{N}\frac{s+\omega_k'}{s+\omega_k},\qquad(5.1)$$
where the poles, zeros and gain can be obtained from
$$\omega_k'=\omega_b\,\omega_u^{(2k-1-\gamma)/N},\qquad \omega_k=\omega_b\,\omega_u^{(2k-1+\gamma)/N},\qquad K=\omega_h^{\gamma}\qquad(5.2)$$
for k = 1, 2, . . . , N, with
$$\omega_u=\sqrt{\omega_h/\omega_b}.\qquad(5.3)$$

The following algorithm is constructed to design an Oustaloup filter.

Algorithm 5.3 (Standard Oustaloup filter). Proceed as follows.
(1) Input the order γ; select the values ω_b, ω_h and N.
(2) Compute ω_u from (5.3); then compute ω_k, ω'_k and K from (5.2).
(3) Compose the Oustaloup filter G_f(s) from (5.1).

Based on the definition, the following MATLAB function can be written to implement the Oustaloup filter:

    function G=ousta_fod(gam,N,wb,wh)
    if round(gam)==gam, G=tf('s')^gam;
    else
       k=1:N; wu=sqrt(wh/wb);
       wkp=wb*wu.^((2*k-1-gam)/N); wk=wb*wu.^((2*k-1+gam)/N);
       G=zpk(-wkp,-wk,wh^gam); G=tf(G);
    end

with the syntax G = ousta_fod(γ, N, ω_b, ω_h), where γ is the order of the derivative and N is the order of the filter. The variables ω_b and ω_h are the lower and upper bounds of the frequency interval, under the user's choice. Normally, within the frequency range, the Bode diagram fitting quality of the Oustaloup filter to the target fractional-order operator is satisfactory, while the fitting outside the range is not.

Comments 5.2 (Oustaloup filter). We make the following remarks.
(1) The algorithm presented here avoids the restriction ω_b ω_h = 1, such that the two frequency boundaries can be selected independently.


(2) If γ is an integer, the filter designed is G(s) = s^γ.
(3) Here, γ can be either positive or negative, for differentiation and integration, respectively.
(4) The absolute value of γ can be larger than 1, for instance, γ = 3.7. However, in this case, it is suggested to keep −1 < γ < 1 and leave the remaining integer part as an integer-order action. For instance, it is better to use s^3.7 = s^3 s^0.7, or s^3.7 = s^4 s^−0.3.
(5) It should also be noted that although the fitting in Figure 5.7 seems to be by poly-lines, they are in fact the asymptotes in the Bode diagram. The exact Bode diagram of the filter should be a smooth, almost straight line.

If y(t) is the input signal to the filter, the output of the filter can approximately be regarded as the Riemann–Liouville fractional-order derivative D_t^γ y(t).

Example 5.5. Assume that the frequency range of interest is selected as ω_b = 0.01 rad/s and ω_h = 1000 rad/s. Oustaloup filters with different orders can be designed to approximate the 0.5th-order derivative. Suppose the test signal is f(t) = e^{−t} sin(3t + 1). Assess the filtering quality.

Solution. The following MATLAB commands can be used to design a fifth-order Oustaloup filter G_1 and a Matsuda–Fujii filter G_2:

    >> G1=ousta_fod(0.5,5,0.01,1000), G1a=zpk(G1)
       G2=matsuda_fod(0.5,11,0.01,1000), G2a=zpk(G2)
       s=fotf('s'); G=s^0.5; w=logspace(-3,4);
       H=bode(G,w); bode(H,'-',G1,'--',G2,':')

The factorised forms of the filters designed are, respectively,
$$G_{1a}(s)=\frac{31.623(s+177.8)(s+17.78)(s+1.778)(s+0.1778)(s+0.01778)}{(s+562.3)(s+56.23)(s+5.623)(s+0.5623)(s+0.05623)}$$
and
$$G_{2a}(s)=\frac{72.126(s+291.8)(s+19.88)(s+1.721)(s+0.1399)(s+0.0056)}{(s+1792)(s+71.46)(s+5.81)(s+0.503)(s+0.03427)}.$$

The Bode diagrams of the filters are shown in Figure 5.8, superimposed on the exact responses of s^0.5, obtained with the FOTF object, which is to be discussed in detail in Chapter 6. It can be seen that the fitting band of the Matsuda–Fujii filter is wider. Meanwhile, the 0.5th-order derivative can be evaluated both by the Grünwald–Letnikov definition and by the two filters, as shown in Figure 5.9. It can be seen that the fractional-order derivatives obtained with the filters are very accurate, and the Oustaloup filter is slightly better.

    >> t=0:0.001:pi; y=exp(-t).*sin(3*t+1);
       y1=lsim(G1,y,t); y2=lsim(G2,y,t);
       y0=glfdiff9(y,t,0.5,3); plot(t,y1,t,y2,t,y0)

Fig. 5.8: Bode diagram fitting with the Oustaloup filter.

Fig. 5.9: Fractional-order derivative (G–L definition; lower curve G1(s), upper curve G2(s) in the zoomed view).
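The design formulas (5.1)–(5.3) are easy to cross-check numerically. The following Python sketch (an illustrative translation, not part of the FOTF toolbox) reproduces the pole, zero and gain values of the fifth-order design of Example 5.5 and verifies the in-band gain at ω = 1 rad/s against the ideal value |(j·1)^0.5| = 1.

```python
import numpy as np

def oustaloup_pzk(gam, N, wb, wh):
    """Zeros, poles and gain of the standard Oustaloup filter, eqs. (5.1)-(5.3)."""
    k = np.arange(1, N + 1)
    wu = np.sqrt(wh / wb)
    zeros = wb * wu ** ((2 * k - 1 - gam) / N)   # omega'_k
    poles = wb * wu ** ((2 * k - 1 + gam) / N)   # omega_k
    return zeros, poles, wh ** gam               # K = wh^gam

def gain(zeros, poles, K, omega):
    """Magnitude of the filter frequency response at a single frequency."""
    s = 1j * omega
    return abs(K * np.prod((s + zeros) / (s + poles)))

z, p, K = oustaloup_pzk(0.5, 5, 0.01, 1000)
print(z)                     # break frequencies of the factorised form G1a(s)
print(gain(z, p, K, 1.0))    # close to the ideal gain 1 at omega = 1 rad/s
```

The computed zeros 0.01778, 0.1778, 1.778, 17.78, 177.8 and gain K = 31.623 match the factorised form G1a(s) above, and the midband magnitude error is well below one percent.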

Example 5.6. Consider the 0.5th-order derivative of the Heaviside function over a longer time span. Observe the impact on the Oustaloup filter parameters.

Solution. It is known that the frequency of the Heaviside function is zero. The frequency range of interest is still selected as [0.01 rad/s, 1000 rad/s], and the simulation interval is selected as [0 s, 20 s]. The following statements can be issued, and the results of the two methods are shown in Figure 5.10. It can be seen that the fitting at t = 2 is not good; besides, the matching is very poor when t is large.

    >> t=0:0.005:20; y=ones(size(t));
       G=ousta_fod(0.5,5,0.01,1000), y1=lsim(G,y,t);
       y2=glfdiff9(y,t,0.5,3); plot(t,y1,'-',t,y2,'--'), ylim([0,1])

Let us increase the order of the Oustaloup filter and run the following statements again.

    >> G=ousta_fod(0.5,9,0.01,1000); y1=lsim(G,y,t);
       plot(t,y1,'-',t,y2,'--'), ylim([0,1])

Fig. 5.10: Fractional-order derivative of a Heaviside function.

Fig. 5.11: Evaluation with a higher-order filter.

The results are shown in Figure 5.11. It can be seen that although the matching at t = 2 is improved, the fitting error for large values of t still exists. This means that the matching in that time interval cannot be improved at all by merely increasing the order of the filter. The only possible remedy for a better large-t matching is to reduce the expected lower frequency bound ω_b; therefore, a smaller value ω_b = 0.001 rad/s can be assigned, and the improved fitting results can be obtained as shown in Figure 5.12.

    >> G=ousta_fod(0.5,9,0.001,1000); y1=lsim(G,y,t);
       plot(t,y1,'-',t,y2,'--'), ylim([0,1])

To further improve the fitting results, an even lower ω_b can be assigned, and a suitably higher order should be selected.

Example 5.7. It can be observed that for accurate computation results, normally a larger frequency range of interest should be selected. However, how can we select the order of the filter?


Fig. 5.12: Improved fitting results.

Solution. It has been demonstrated in Example 5.5 that, when the frequency range of interest is selected as [10⁻² rad/s, 10³ rad/s], a fifth-order Oustaloup filter is good. If the frequency range is increased by one decade, the order of the Oustaloup filter should be increased by at least two. For instance, if the frequency range is assigned to (10⁻⁴ rad/s, 10⁴ rad/s), i.e. increased by three decades, an 11th-order filter should be designed. The following commands can be issued to design the 11th-order filter and compare the fitting, as shown in Figure 5.13.

    >> s=fotf('s'); G1=s^0.5; w=logspace(-5,5,200); H=bode(G1,w);
       G=ousta_fod(0.5,11,1e-4,1e4); bode(G,'-',H,'--')

It can be seen that for the selected frequency range, an 11th-order Oustaloup filter is satisfactory. Even if the magnitude plot is zoomed several times, the plot still appears to be almost a straight line.

Fig. 5.13: Eleventh-order Oustaloup filter.

Fig. 5.14: Bode diagram of a high-standard Oustaloup filter.

Comments 5.3. Hints on the parameter selection of the filters:
(1) If the time response fitting is not good when t is large, the expected lower bound ω_b should be reduced. However, if the initial-time fitting is not good, try to increase the upper bound ω_h.
(2) Do not be afraid to use high orders. Selecting N = 30 does not significantly increase the computational burden on modern computers.

Example 5.8. How far can we go with an Oustaloup filter?

Solution. The computing facilities of modern computers are progressing rapidly, so for better fitting performance we can specify large frequency intervals and high orders in the Oustaloup filter design process. For instance, we can select (10⁻⁸ rad/s, 10⁸ rad/s) and an order of N = 35. The filter can be designed, and its Bode diagram is shown in Figure 5.14. It can be seen that such a filter can almost be recognised as a perfect fractional-order differentiator.

    >> w1=1e-8; w2=1e8; N=35; G=ousta_fod(0.6,N,w1,w2); bode(G)
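The "about two orders per decade" guideline of Example 5.7 can also be checked numerically: for a fixed band, the worst in-band magnitude error of the fit defined by (5.1)–(5.3) drops as N grows. The Python sketch below is illustrative only (the error measure and the inner-band margin of one decade are choices made here, not from the book).

```python
import numpy as np

def max_inband_error_db(gam, N, wb, wh):
    """Worst |magnitude error| in dB of the Oustaloup fit over the inner band."""
    k = np.arange(1, N + 1)
    wu = np.sqrt(wh / wb)
    z = wb * wu ** ((2 * k - 1 - gam) / N)
    p = wb * wu ** ((2 * k - 1 + gam) / N)
    # test frequencies kept one decade inside (wb, wh), away from the band edges
    w = np.logspace(np.log10(wb) + 1, np.log10(wh) - 1, 400)
    s = 1j * w
    G = wh ** gam * np.prod((s[:, None] + z) / (s[:, None] + p), axis=1)
    return np.max(np.abs(20 * np.log10(np.abs(G) / w ** gam)))

e5 = max_inband_error_db(0.5, 5, 1e-4, 1e4)
e11 = max_inband_error_db(0.5, 11, 1e-4, 1e4)
print(e5, e11)   # the 11th-order fit is much tighter over the 8-decade band
```

For the 8-decade band of Example 5.7, the fifth-order fit shows visible ripple, while the 11th-order fit stays within a fraction of a decibel, consistent with the near-straight line in Figure 5.13.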

5.2.2 A modified Oustaloup filter

It can be seen from the previous examples that the fitting of Oustaloup filters may not be satisfactory around the boundaries ω_b and ω_h of the frequency range of interest. Also the fitting, especially the phase response fitting, is sometimes not ideal. Therefore, an improved filter is expected.

Definition 5.3. A modified Oustaloup filter can be introduced [78]; an improved version is given by
$$s^{\gamma}\approx\Big(\frac{d\omega_h}{b}\Big)^{\!\gamma}\,\frac{ds^2+b\omega_h s}{d(1-\gamma)s^2+b\omega_h s+d\gamma}\prod_{k=1}^{N}\frac{s+\omega_k'}{s+\omega_k},\qquad(5.4)$$
where
$$\omega_k'=\omega_b\,\omega_u^{(2k-1-\gamma)/N},\qquad \omega_k=\omega_b\,\omega_u^{(2k-1+\gamma)/N},\qquad K=\omega_h^{\gamma}.\qquad(5.5)$$

Note that the order γ must satisfy γ ∈ (0, 1). Normally, the weighting factors are selected as b = 10 and d = 9 by default. Following Algorithm 5.3 and adding the new terms, the modified Oustaloup filter can be constructed. The following MATLAB function can be written to design the modified Oustaloup filter:

    function G=new_fod(r,N,wb,wh,b,d)
    if nargin==4, b=10; d=9; end
    G=ousta_fod(r,N,wb,wh);
    G=G*(d/b)^r*tf([d,b*wh,0],[d*(1-r),b*wh,d*r]);

The syntax of the function is G_f = new_fod(γ, N, ω_b, ω_h, b, d), where the arguments b and d can usually be omitted such that the default values are used directly.

Example 5.9. Consider again the problem in Example 5.5. Selecting ω_b = 0.01 and ω_h = 1000, observe the fitting results of the new filter, and compare the fractional-order derivatives obtained with the filters.

Solution. The two filters can both be designed and compared, and the Bode diagrams can be obtained as shown in Figure 5.15.

    >> G1=ousta_fod(0.5,5,0.01,1000);
       G2=new_fod(0.5,5,0.01,1000); zpk(G2)
       s=fotf('s'); G0=s^0.5; w=logspace(-5,5,100);
       H=bode(G0,w); bode(G1,'-',G2,'--',H,':')

Here the factorised form of the filter G_2(s) is
$$\frac{60s(s+0.01778)(s+0.1778)(s+1.778)(s+17.78)(s+177.8)(s+1111)}{(s+0.05623)(s+0.5623)(s+5.623)(s+56.23)(s+562.3)(s+2222)(s+0.00045)}.$$

Fig. 5.15: Bode diagram comparisons.


Fig. 5.16: Fractional-order derivatives with different filters.

The fractional-order derivatives are compared in Figure 5.16. It can be seen that the fitting result is significantly improved.

    >> t=0:0.001:pi; y=exp(-t).*sin(3*t+1);
       y1=lsim(G1,y,t); y2=lsim(G2,y,t);
       y0=glfdiff9(y,t,0.5,3); plot(t,y1,t,y2,t,y0), axis([0,pi,-2 8])

Example 5.10. For the following model, compare the two Oustaloup filters:
$$G(s)=\frac{s+1}{10s^{3.2}+185s^{2.5}+288s^{0.7}+1}.$$

Solution. The exact Bode diagram can be obtained with the function bode(). The function high_order() in the next section is used here to find the high-order approximations under the two filters, and the Bode diagram fitting is shown in Figure 5.17. It can be seen that the modified method provides a better fitting.

    >> b=[1 1]; a=[10,185,288,1]; nb=[1 0]; na=[3.2,2.5,0.7,0];
       G0=fotf(a,na,b,nb); w=logspace(-4,4,200); H=bode(G0,w);
       N=7; w1=1e-3; w2=1e3;
       G1=high_order(G0,'ousta_fod',w1,w2,N);
       G2=high_order(G0,'new_fod',w1,w2,N);
       bode(H,'-',G1,'--',G2,':')

5.3 Integer-order approximations of fractional-order transfer functions

5.3.1 High-order approximations

A simple algorithm can be introduced to convert fractional-order transfer functions into integer-order ones with Oustaloup filters.


Fig. 5.17: Bode diagram comparisons.

Algorithm 5.4 (Integer-order model approximation). Proceed as follows.
(1) Write a function to process pseudo-polynomials; find in each term the factor s^α with α ∈ (0, 1) and substitute it with the selected filter.
(2) Process the numerator and denominator individually to get the high-order integer-order model for the fractional-order model.
(3) Perform a minimal realisation of the integer-order model.

A MATLAB function can be written to convert fractional-order transfer functions automatically into high-order integer-order ones.

    function Ga=high_order(G0,filter,wb,wh,N)
    [n,m]=size(G0); F=filter;
    for i=1:n, for j=1:m
       if G0(i,j)==fotf(0), Ga(i,j)=tf(0);
       else
          G=simplify(G0(i,j)); [a na b nb]=fotfdata(G);
          G1=pseudo_poly(b,nb,F,wb,wh,N)/pseudo_poly(a,na,F,wb,wh,N);
          Ga(i,j)=minreal(G1);
       end
    end, end
    % sub-function to process pseudo-polynomials
    function G1=pseudo_poly(a,na,filter,wb,wh,N)
    G1=0; s=tf('s');
    for i=1:length(a)
       na0=na(i); n1=floor(na0);
       if na0>n1, g1=eval([filter '(na0-n1,N,wb,wh)']);
       else, g1=1; end
       G1=G1+a(i)*s^n1*g1;
    end

The syntax is G_a = high_order(G_0, filter, ω_b, ω_h, N), where the input argument G_0 can be a fractional-order transfer function model. The function also handles multivariable systems, where a fractional-order transfer function matrix can


be specified. Details of fractional-order transfer function matrices will be fully described in Chapter 6. The argument filter can be assigned as 'ousta_fod', 'matsuda_fod' or 'new_fod'; the arguments ω_b and ω_h are the lower and upper bounds of the expected frequency range, and N is the order of the filter. It can be seen that other filters are also allowed by this easy-to-extend format.

Example 5.11. Consider again the FOTF model in Example 5.10. Establish an integer-order transfer function model to fit it.

Solution. The original model can be approximated with the Oustaloup and modified Oustaloup filters by the following statements. The frequency response fittings are the same as the ones obtained in Figure 5.17.

    >> b=[1 1]; a=[10,185,288,1]; nb=[1 0]; na=[3.2,2.5,0.7,0];
       G0=fotf(a,na,b,nb); N=4; w1=1e-3; w2=1e3; b=10; d=9;
       G10=high_order(G0,'ousta_fod',w1,w2,N)
       G20=high_order(G0,'new_fod',w1,w2,N)
       w=logspace(-3,3); bode(G0), hold on; bode(G10,'--',G20,':')

The approximate models are, respectively,

$$G_{10}(s)=\frac{0.025119(s+595.7)(s+421.7)(s+251.2)(s+18.84)(s+13.34)(s+7.943)(s+1)(s+0.5957)(s+0.4217)(s+0.2512)(s+0.01884)(s+0.01334)}{(s+595.7)(s+507)(s+154.7)(s+40.89)(s+18.78)(s+7.377)(s+2.041)(s+0.4418)(s+0.2511)(s+0.05445)(s+0.01334)(s+0.002349)(s^2+0.3824s+1.613)}$$
and
$$G_{20}(s)=\frac{0.020523(s+1)(s+0.5957)(s+0.4217)(s+0.2512)(s+7.943)(s+13.34)(s+18.84)(s+251.2)(s+421.7)(s+595.7)(s+1389)(s+2222)(s+3704)(s+0.01884)(s+0.01334)(s+0.007943)(s+0.00063)(s+0.00045)(s+0.00018)}{(s+3704)(s+2325)(s+1111)(s+595.7)(s+488.2)(s+152.6)(s+40.07)(s+18.78)(s+7.358)(s+2.041)(s+0.4422)(s+0.2511)(s+0.05453)(s+0.01334)(s+0.007943)(s+0.002197)(s+0.00045)(s+0.0002201)(s+0.00018)(s^2+0.3762s+1.574)}.$$

The step responses of the original fractional-order model and of the models with the two filters can be obtained as shown in Figure 5.18. It is also seen that the modified Oustaloup filter is not satisfactory for this example.


Fig. 5.18: Step response comparisons.

    >> t=0:0.01:20; y=step(G0,t); y1=step(G10,t);
       y2=step(G20,t); plot(t,y,'-',t,y1,'--',t,y2,':')

Furthermore, the fitting at large t is not satisfactory. This means that the lower bound of the frequency range should be reduced, and the order of the Oustaloup filter should be increased. The step response comparison of the original model and the one obtained with the Oustaloup filter can then be obtained as shown in Figure 5.19. It can be seen that the fitting is significantly improved, over a larger time interval.

    >> N=9; w1=1e-4; w2=1e3; b=10; d=9;
       G10=high_order(G0,'ousta_fod',w1,w2,N)
       t=0:0.01:100; y=step(G0,t); y1=step(G10,t);
       plot(t,y,'-',t,y1,'--')


Fig. 5.19: With large frequency response interval.

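The substitution step of Algorithm 5.4 can be illustrated outside MATLAB as well. The Python sketch below (illustrative, not the FOTF implementation; the simple test model G(s) = 1/(s^0.5 + 1) is chosen here for brevity) replaces s^0.5 by its Oustaloup rational approximation using plain polynomial arithmetic and checks one point of the resulting frequency response against the exact irrational model.

```python
import numpy as np

def oustaloup_numden(gam, N, wb, wh):
    """Numerator/denominator coefficients (highest power first) of the Oustaloup filter."""
    k = np.arange(1, N + 1)
    wu = np.sqrt(wh / wb)
    z = wb * wu ** ((2 * k - 1 - gam) / N)
    p = wb * wu ** ((2 * k - 1 + gam) / N)
    return wh ** gam * np.poly(-z), np.poly(-p)   # K*prod(s+z_k) and prod(s+p_k)

# substitute s^0.5 ~ n(s)/d(s) into G(s) = 1/(s^0.5 + 1), giving G(s) ~ d(s)/(n(s)+d(s))
n, d = oustaloup_numden(0.5, 5, 0.01, 1000)
num, den = d, np.polyadd(n, d)

s = 1j * 1.0                                   # check at omega = 1 rad/s
G_approx = np.polyval(num, s) / np.polyval(den, s)
G_exact = 1 / (np.sqrt(s) + 1)                 # principal branch of s^0.5
print(abs(G_approx), abs(G_exact))
```

The magnitudes agree to better than one percent at midband, which is the essence of why the high-order substitution reproduces the original fractional-order frequency response so closely inside the fitted band.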

5.3.2 Low-order approximation via optimal model reduction techniques

A low-order approximation of the following form is considered:
$$G_{r/m,\tau}(s)=\frac{\beta_1 s^{r}+\beta_2 s^{r-1}+\cdots+\beta_{r+1}}{s^{m}+\alpha_1 s^{m-1}+\cdots+\alpha_{m-1}s+\alpha_m}\,\mathrm e^{-\tau s}.$$

To get a good reduced model, the error between the responses of the original model and the reduced model to the same input signal should be defined, and the target is to minimise this error criterion. Finding the optimal reduced model can be converted to the minimisation of the following objective function:
$$J=\min_{\theta}\,\big\|\hat G(s)-G_{r/m,\tau}(s)\big\|_2,\qquad(5.6)$$
where θ is the vector of undetermined model parameters, i.e.
$$\theta=[\beta_1,\beta_2,\ldots,\beta_r,\beta_{r+1},\alpha_1,\alpha_2,\ldots,\alpha_m,\tau]^{\mathrm T}.$$
Since there is a delay term in (5.6), the Padé approximation is used for it, and the objective function can be changed to the following norm evaluation form:
$$J=\min_{\theta}\,\big\|\hat G(s)-\hat G_{r/m}(s)\big\|_2.$$
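For reference, the first-order Padé approximation of the delay, e^{−τs} ≈ (1 − τs/2)/(1 + τs/2), which is what makes the delayed objective function a purely rational one, can be checked numerically. A hedged Python illustration (the value τ = 0.5 is an arbitrary example):

```python
import numpy as np

tau = 0.5
num = [-tau / 2, 1.0]          # 1 - tau*s/2, coefficients with highest power first
den = [tau / 2, 1.0]           # 1 + tau*s/2

s = 1j * 0.2                   # a frequency well below 2/tau, where the fit is tight
pade = np.polyval(num, s) / np.polyval(den, s)
exact = np.exp(-tau * s)
print(abs(pade - exact))       # small: the (1,1) Pade error is O((tau*omega)^3)
```

Note that the Padé approximant is all-pass on the imaginary axis, so it matches the unit magnitude of the delay exactly and only introduces a small phase error at low frequencies.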

There is no analytical solution to this problem; thus, a numerical optimisation technique must be used. Based on the optimal model reduction algorithm in [76], we can solve the optimal model reduction problem directly.

Algorithm 5.5 (Optimal model reduction). Proceed as follows.
(1) Get the original model Ĝ(s).
(2) Select the expected orders r and m of the numerator and denominator.
(3) Express the objective function (5.6) in MATLAB.
(4) Call fminsearch() to find the optimal reduced model G_{r/m,τ}(s).

Since the orders of the approximate models can be extremely high, the optimal model reduction technique [74, 76] can be used to find low-order approximations. The MATLAB function can be written as follows:

    function Gr=opt_app(G,r,k,key,G0)
    GS=tf(G); num=GS.num{1}; den=GS.den{1};
    Td=totaldelay(GS); GS.ioDelay=0; s=tf('s');
    GS.InputDelay=0; GS.OutputDelay=0;
    if nargin

    >> G2=opt_app(G1,4,5,0), zpk(G2)
       w=logspace(-4,4); H=bode(G,w); bode(H,G1,'--',G2,':')

and the Bode diagrams of the original model and of the high- and low-order approximate models can be obtained as shown in Figure 5.20.

Fig. 5.20: Bode diagram approximations.

The low-order approximate model obtained is
$$G_2(s)=\frac{-4.7911(s+0.2452)(s+0.01657)(s^2-8.983s+31.65)}{(s+140.9)(s+0.3799)(s+0.0173)(s^2+0.2303s+0.2479)}.$$

It can be seen that the high-order approximate model is very close to the original model, while the high-frequency fitting of the low-order model looks less satisfactory, with seemingly significant differences in the Bode magnitude plot. However, this is not worth worrying about too much, since the differences occur where the gain is below −20 dB. In linear terms, −20 dB is a gain of about 10⁻¹, while −50 dB is a gain of about 10⁻²·⁵; therefore, the difference in gain is not that significant. The differences in phase are almost exactly 360° and 720°; therefore, the phase matching is also satisfactory. The step responses of the models can be obtained with the following statements, as shown in Figure 5.21.

    >> t=0:0.01:10; y=step(G,t); y1=step(G1,t);
       y2=step(G2,t); plot(t,y,'-',t,y1,'--',t,y2,':')

It can be seen that the step responses of the high- and low-order approximations are very close to that of the original model.


Fig. 5.21: Step response comparisons.
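The essence of Algorithm 5.5 – minimise a response-error norm over the reduced model's parameters with a simplex search – can be sketched in Python with scipy's Nelder–Mead routine, the direct counterpart of fminsearch(). The target model and the first-order structure below are hypothetical choices for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

w = np.logspace(-2, 2, 100)
s = 1j * w
G = 3 / (s ** 2 + 3 * s + 2)          # "original" model 3/((s+1)(s+2))

def cost(theta):
    """Squared frequency-response error of the first-order fit b/(s + a)."""
    a, b = theta
    return np.sum(np.abs(G - b / (s + a)) ** 2)

theta0 = np.array([1.0, 1.0])
res = minimize(cost, theta0, method='Nelder-Mead')
a, b = res.x
print(a, b, res.fun)   # the fitted DC gain b/a stays near G(0) = 1.5
```

Replacing the frequency-domain error by an H₂ norm with a Padé-approximated delay, and the first-order structure by G_{r/m,τ}(s), gives the setting of (5.6).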

Example 5.13. Find low-order approximations for the following model:
$$G(s)=\frac{5}{s^{2.3}+1.3s^{0.9}+1.25}.$$

Solution. The high-order approximation of the original fractional-order transfer function can be obtained first, and then different reduced models can be computed. The step responses of the original model and the reduced ones can be obtained as shown in Figure 5.22.

    >> w1=1e-3; w2=1e3; N=9; s=fotf('s');
       G=5/(s^2.3+1.3*s^0.9+1.25);
       G0=high_order(G,'ousta_fod',w1,w2,N); zpk(G0)
       G1=opt_app(G0,1,2,0), G2=opt_app(G0,2,3,0)
       G3=opt_app(G0,3,4,0), G4=opt_app(G0,4,5,0);
       step(G0,G1,G2,G3,G4)

Fig. 5.22: Step response comparisons of the example.


The 19th-order approximation obtained is

$$G_0(s)=\frac{0.62946(s+926.1)(s+584.3)(s+199.5)(s+125.9)(s+42.99)(s+27.12)(s+9.261)(s+5.843)(s+1.995)(s+1.259)(s+0.4299)(s+0.2712)(s+0.09261)(s+0.05843)(s+0.01995)(s+0.01259)(s+0.004299)}{(s+926.1)(s+368.6)(s+199.5)(s+79.35)(s+42.95)(s+16.98)(s+9.19)(s+3.514)(s+1.859)(s+0.8717)(s+0.3914)(s+0.2652)(s+0.09077)(s+0.0584)(s+0.01986)(s+0.01259)(s+0.004295)(s^2+0.5945s+1.899)}.$$

The optimal approximation results are listed as follows:
$$G_1(s)=\frac{-2.045s+7.654}{s^2+1.159s+1.917},\qquad G_2(s)=\frac{-0.5414s^2+4.061s+2.945}{s^3+0.9677s^2+1.989s+0.7378},$$
$$G_3(s)=\frac{-0.2592s^3+3.365s^2+4.9s+0.3911}{s^4+1.264s^3+2.25s^2+1.379s+0.09797}$$
and
$$G_4(s)=\frac{1.303s^4+1.902s^3+11.15s^2+4.71s+0.1898}{s^5+2.496s^4+3.485s^3+4.192s^2+1.255s+0.04755}.$$

It can be seen that the second- and third-order approximations are not satisfactory, while the fourth- and fifth-order approximations are quite close to the original fractional-order model. For some particular examples, such as the one given in Example 5.10, the low-order approximation may not be good when the selected order is relatively low. Therefore, the low-order approximation should be validated before it is actually used in real applications.

5.4 Approximations of irrational models

In control systems, some components cannot be modelled accurately by typical fractional-order transfer functions. For instance, a term like [(as + b)/(cs + d)]^α with non-integer α can only be expanded as an infinite series, and a term like (as^β + b)^α is even more difficult to handle. In control system design, the controller obtained may be rather complicated and not easy to implement; if such a term is contained in the controller, the relevant algorithms can be used for controller approximation. In this section, this kind of problem is studied.

5.4.1 Frequency response fitting approach

The MATLAB function invfreqs() is suitable for finding integer-order models fitting the frequency response data of a target system. The syntax of the function is G = invfreqs(H, w, m, n), where H is the frequency response data, w is the frequency vector, and m and n are the orders of the numerator and denominator, respectively.

Algorithm 5.6 (Approximation of irrational models). Proceed as follows.
(1) Generate exact frequency response samples of the irrational model.
(2) Select suitable orders for the numerator and denominator of the model.
(3) Use the function invfreqs() to get the frequency response fitting model.
(4) Validate the fitting results. If they are not satisfactory, increase the orders in step (2), or go back to step (1) and reselect the frequency range of interest, until a satisfactory model is obtained.

Example 5.14. Find an optimal reduced model for the following fractional-order QFT controller model given in [42]:
$$G_{\mathrm c}(s)=1.8393\Big(\frac{s+0.011}{s}\Big)^{\!0.96}\Big(\frac{8.8\times10^{-5}s+1}{8.096\times10^{-5}s+1}\Big)^{\!1.76}\frac{1}{(1+s/0.29)^2}.$$

Solution. The controller is rather complicated, so approximate integer-order controllers are expected, using the frequency response fitting approach. Since the MATLAB function frd() can only deal with integer-order systems, its field ResponseData can be used to complete the computation with non-integer powers. Thus, the frequency response data of G(s) can be obtained. The function invfreqs() can then be used to get the fitting model within the frequency range ω ∈ (10⁻⁴ rad/s, 10² rad/s), with the following MATLAB statements.

    >> w=logspace(-4,2);
       G1=tf([1 0.011],[1 0]); F1=frd(G1,w);
       G2=tf([8.8e-5 1],[8.096e-5 1]); F2=frd(G2,w);
       s=tf('s'); G3=1/(1+s/0.29)^2; F3=frd(G3,w); F=F1;
       h1=F1.ResponseData; h2=F2.ResponseData; h3=F3.ResponseData;
       h=1.8393*h1.^0.96.*h2.^1.76.*h3; F.ResponseData=h;
       [n,d]=invfreqs(h(:),w,4,4); H1=zpk(tf(n,d))

The approximate controller obtained is
$$H_1(s)=\frac{-1.5\times10^{-10}(s-3.913\times10^{4})(s+2.635\times10^{4})(s+0.01071)(s+0.0006019)}{(s+0.2922)(s+0.2878)(s+0.0007664)(s+1.284\times10^{-7})}.$$

It can also be noticed that there are pairs of stable poles and zeros located very close to each other, and they can be cancelled with the minimal realisation technique in the following way:

    >> H1a=minreal(H1,1e-3), H1b=minreal(H1,1e-2)


with the approximate models
$$H_{1a}(s)=\frac{-1.5002\times10^{-10}(s-3.913\times10^{4})(s+2.635\times10^{4})(s+0.01071)}{(s+0.2922)(s+0.2878)(s+1.284\times10^{-7})},$$
$$H_{1b}(s)=\frac{-1.5002\times10^{-10}(s-3.913\times10^{4})(s+2.635\times10^{4})}{(s+0.2922)(s+0.2878)}.$$

For the irrational system model, it seems that a Matsuda–Fujii filter can also be designed. The following statements can be issued.

    >> w=logspace(-4,2,11); F1=frd(G1,w); F2=frd(G2,w); F3=frd(G3,w);
       h1=F1.ResponseData; h2=F2.ResponseData; h3=F3.ResponseData;
       h=1.8393*h1.^0.96.*h2.^1.76.*h3; H2=zpk(matsuda_fod(h,w))

Unfortunately, the filter H_2(s) designed in this way is unstable. This reminds us that the continued fraction-based approximation approach cannot guarantee to retain the stability of the original system; therefore, caution must be taken when using continued fraction-based filters.

We can use a larger frequency interval (10⁻⁶ rad/s, 10⁴ rad/s) to validate the fitting results. The Bode diagrams of the controllers and their integer-order fitting models are shown in Figure 5.23.

    >> w=logspace(-6,4,200);
       F1=frd(G1,w); h1=F1.ResponseData;
       F2=frd(G2,w); h2=F2.ResponseData;
       F3=frd(G3,w); h3=F3.ResponseData;
       h=1.8393*h1.^0.96.*h2.^1.76.*h3; F=F1; F.ResponseData=h;
       bode(F,'-',H1,'--',H1a,H1b,w)

It can be seen that the minimal realised model H_{1a}(s) behaves almost the same as the fourth-order controller H_1(s) and the original irrational model, while the second-order model H_{1b}(s) is not quite good. This means that at least a third-order model is

Fig. 5.23: Comparisons of the fractional-order QFT controller and its integer-order approximations.

needed for the original irrational model. The model H_{1a}(s) was obtained by the minimal realisation technique; it is better to use the function invfreqs() directly:

    >> w=logspace(-4,2); F1=frd(G1,w); F2=frd(G2,w); F3=frd(G3,w);
       h1=F1.ResponseData; h2=F2.ResponseData; h3=F3.ResponseData;
       h=1.8393*h1.^0.96.*h2.^1.76.*h3;
       [n,d]=invfreqs(h(:),w,3,3); H3=zpk(tf(n,d))

and the third-order model is
$$H_3(s)=\frac{-1.4886\times10^{-10}(s-3.93\times10^{4})(s+2.644\times10^{4})(s+0.01009)}{(s+0.3028)(s+0.2768)(s+1.103\times10^{-5})}.$$
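The core idea behind invfreqs() is a linearised least-squares fit of N(jω) ≈ H(jω)D(jω) over the sampled response. A minimal Python sketch of this equation-error formulation (illustrative only; the actual invfreqs implementation additionally supports iterative refinement) fits a first-order model b0/(s + a0) to samples of 1/(s + 1):

```python
import numpy as np

# solve b0 - a0*H(jw) = jw*H(jw) in the least-squares sense,
# stacking real and imaginary parts to get a real linear system
w = np.logspace(-2, 2, 50)
H = 1 / (1j * w + 1)

A = np.column_stack([np.ones_like(w) + 0j, -H])   # unknowns [b0, a0]
rhs = 1j * w * H
A2 = np.vstack([A.real, A.imag])
rhs2 = np.concatenate([rhs.real, rhs.imag])
b0, a0 = np.linalg.lstsq(A2, rhs2, rcond=None)[0]
print(b0, a0)   # recovers b0 = 1, a0 = 1, since the data are exactly first order
```

Because the data are exactly first order here, the fit is recovered to machine precision; for irrational data such as G_c(s), the same formulation returns the best rational compromise at the chosen orders.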

5.4.2 Charef approximation

The Charef approximation technique [7, 53] is a useful tool for finding integer-order fits of the irrational model
$$H(s)=\frac{1}{(1+s/p_T)^{\alpha}}.\qquad(5.7)$$

Definition 5.4. A technique similar to the Oustaloup filter can be used to approximate the irrational transfer function with zigzag segments over the frequency interval (0 rad/s, ω_M rad/s), with
$$H_1(s)=\frac{(1+s/z_0)(1+s/z_1)\cdots(1+s/z_{n-1})}{(1+s/p_0)(1+s/p_1)\cdots(1+s/p_n)},\qquad(5.8)$$
where the first pole and the other parameters can be computed from
$$p_0=p_T\,10^{\delta/(20\alpha)},\qquad a=10^{\delta/[10(1-\alpha)]},\qquad b=10^{\delta/(10\alpha)},\qquad(5.9)$$
and the other poles and zeros can be evaluated with
$$p_{i+1}=p_0(ab)^{i+1},\qquad z_i=ap_i,\qquad i=0,1,\ldots,n-1.\qquad(5.10)$$
The number of pole-zero pairs can be determined by
$$n=\bigg\lceil\frac{\ln(\omega_M/p_0)}{\ln(ab)}\bigg\rceil.\qquad(5.11)$$

The variable δ is the error tolerance in the frequency response fitting process; normally, one may select δ = 1 dB. The following algorithm is constructed to design Charef filters.

Algorithm 5.7 (Charef filter design). Proceed as follows.
(1) Assign the values p_T and α of the model to be fitted.
(2) Select δ and ω_M for the filter design.
(3) Compute a, b, p_0 and n from (5.9) and (5.11).
(4) Compute the poles, zeros and gain from (5.10) and compose the Charef filter.


A MATLAB function can be written as follows:

    function G=charef_fod(alpha,pT,delta,wM)
    p0=pT*10^(delta/20/alpha);
    a=10^(delta/10/(1-alpha)); b=10^(delta/10/alpha);
    n=ceil(log(wM/p0)/log(a*b));
    ii=1:n; p=p0*(a*b).^ii; p=[p0 p]; z=a*p(1:n);
    K=prod(p)/prod(z); G=zpk(-z,-p,K);

The syntax of the function is G = charef_fod(α, p_T, δ, ω_M). The input arguments are the same as in the mathematical formulae, and the returned argument is the Charef filter.

Example 5.15. Fit G(s) = 1/(1 + 0.2s)^0.5 with a Charef filter.

Solution. Selecting the error tolerance δ = 1 dB and the maximum frequency ω_M = 1000 rad/s, the Charef filter can be designed with

    >> G1=charef_fod(0.5,1/0.2,1,1000)

and the filter designed is

$$G_1(s)=\frac{99.763(s+9.98)(s+25.06)(s+62.95)(s+158.1)(s+397.2)(s+997.6)}{(s+6.3)(s+15.81)(s+39.72)(s+99.76)(s+250.6)(s+629.5)(s+1581)}.$$
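The parameter formulas (5.9)–(5.11) can be cross-checked for this design in a few lines of Python (illustrative; it mirrors the computations inside charef_fod()):

```python
import math

alpha, pT, delta, wM = 0.5, 1 / 0.2, 1.0, 1000.0
p0 = pT * 10 ** (delta / (20 * alpha))               # first pole, eq. (5.9)
a = 10 ** (delta / (10 * (1 - alpha)))
b = 10 ** (delta / (10 * alpha))
n = math.ceil(math.log(wM / p0) / math.log(a * b))   # number of pairs, eq. (5.11)

poles = [p0 * (a * b) ** i for i in range(n + 1)]    # p_0, p_1, ..., p_n, eq. (5.10)
zeros = [a * p for p in poles[:n]]                   # z_i = a*p_i
print(n, p0, zeros[0], poles[1])
```

The sketch returns n = 6 pole-zero pairs, a first pole at about 6.3, a first zero at about 9.98 and a second pole at about 15.81, matching the factors of G_1(s) above.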

Alternatively, an integer-order approximation of the original irrational system can also be obtained with the frequency response fitting approach:

    >> w=logspace(-2,3); G0=tf(1,[0.2 1]);
       H=frd(G0,w); h=H.ResponseData; h=h.^0.5;
       [n,d]=invfreqs(h(:),w,5,5); G2=zpk(tf(n,d))

The fitting model is
$$G_2(s)=\frac{0.01541(s+8899)(s+1121)(s+340.2)(s+91.78)(s+17.16)}{(s+2412)(s+609.2)(s+183.6)(s+41.34)(s+7.41)}.$$

Selecting a set of frequencies, the Matsuda–Fujii filter can also be designed:

>> w=logspace(-1,3,9); H=frd(G0,w); h=H.ResponseData;
   h=h.^0.5; G3=zpk(matsuda_fod(h,w))

where the filter model designed is

    G3(s) = 0.030608(s + 1849)(s + 75.69)(s^2 + 4.828s + 44.75) / [(s + 305.9)(s + 18.7)(s^2 + 2.177s + 33.52)].

Also, the Carlson filter discussed earlier is suitable for the fitting of the irrational model, with the following:

>> s=tf('s'); G=1/(1+0.2*s); G4=carlson_fod(0.5,G,2); zpk(G4)

[Figure: Bode diagram (magnitude and phase) comparing G(s), G1(s), G2(s) and G4(s) over 10^-2 to 10^4 rad/s.]

Fig. 5.24: Comparisons of the frequency responses.

and the transfer function of the filter is

    G4(s) = 0.1111(s + 165.8)(s + 20)(s + 8.52)(s + 6.667)(s + 6.666)(s + 5.662)(s + 5)^6 / [(s + 42.74)(s + 12.1)(s + 6.678)(s + 4.887)(s^2 + 13.32s + 44.37)(s^2 + 10.33s + 26.67)(s^2 + 9.859s + 24.31)(s^2 + 10.08s + 25.42)].

The frequency response fitting of the Charef filter and the frequency response fitting model can be compared with that of the original irrational system, as shown in Figure 5.24. It can be seen that both the frequency response fitting model and the Charef filter are rather satisfactory.

>> w=logspace(-3,5); Ga=tf(1,[0.2 1]);
   H=frd(Ga,w); h=H.ResponseData; H.ResponseData=h.^0.5;
   bode(H,G1,G2,G4), xlim([1e-3 1e5])

The step response of the irrational system G(s) can be evaluated with the numerical inverse Laplace approach, using the function INVLAP_new(), and the step responses of the Charef filter and the frequency fitting models are compared in Figure 5.25.

>> [t,y]=INVLAP_new('1/(1+0.2*s)^0.5',0,3,1001,0,'1/s');
   y1=step(G1,t); y2=step(G2,t); y3=step(G4,t);
   plot(t,y,'-',t,y1,'--',t,y2,':',t,y3,'-.')

Example 5.16. Consider an irrational model with irrational multiple factors:

    G(s) = 1 / [(1 + s/1.6)^0.6 (1 + s/6.2)^0.3].

Find the optimised Charef approximations of the original model.

[Figure: step responses of the original model and the approximate filters over 0 to 3 s.]

Fig. 5.25: Comparisons of the step responses.
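The step responses above rely on INVLAP_new() for numerical Laplace inversion. The idea can be illustrated with a generic Gaver–Stehfest inversion in Python (a hedged sketch; this is not the algorithm INVLAP_new() actually implements). For G(s) = 1/(1 + 0.2s)^0.5 the step response happens to have the closed form y(t) = erf(√(5t)), which the inversion reproduces:

```python
import math

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at t > 0."""
    ln2, acc = math.log(2.0), 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        acc += (-1) ** (k + N // 2) * V * F(k * ln2 / t)
    return acc * ln2 / t

# step response: invert G(s)/s with G(s) = 1/(1+0.2s)^0.5
y1 = stehfest(lambda s: 1.0 / (s * (1.0 + 0.2 * s) ** 0.5), 1.0)
exact = math.erf(math.sqrt(5.0))   # closed-form value y(1) = erf(sqrt(5))
```

The Stehfest method only needs F(s) at real arguments, which suits smooth responses such as this one; for oscillatory responses, contour-based inverters of the kind used by INVLAP_new() are preferable.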

Solution. If the original model is regarded as the product of two irrational transfer functions, the Charef approximation can be obtained with

>> G11=charef_fod(0.6,1/1.6,1,1000);
   G12=charef_fod(0.3,1/6.2,1,1000);
   G1=G11*G12; zpk(G1)

and the approximate model is

    G1(s) = 1792.6(s + 0.329)(s + 0.9847)(s + 1.347)(s + 2.948)(s + 3.515)(s + 8.825)(s + 9.174)(s + 23.94)(s + 26.42)(s + 62.5)(s + 79.08)(s + 163.1)(s + 236.7)(s + 425.8)(s + 708.7)(s + 1111) / [(s + 0.2367)(s + 0.7087)(s + 0.7572)(s + 1.976)(s + 2.122)(s + 5.159)(s + 6.351)(s + 13.47)(s + 19.01)(s + 35.15)(s + 56.92)(s + 91.74)(s + 170.4)(s + 239.4)(s + 510)(s + 625)(s + 1527)(s + 1631)].

Selecting the frequency interval ω ∈ (10^-2 rad/s, 10^3 rad/s), the frequency matching can be obtained:

>> G01=tf(1,[1/1.6 1]); G02=tf(1,[1/6.2 1]); w=logspace(-2,3);
   H1=frd(G01,w); h1=H1.ResponseData;
   H2=frd(G02,w); h2=H2.ResponseData; H=H1;
   h=h1.^0.6.*h2.^0.3; [n,d]=invfreqs(h(:),w,5,5);
   H.ResponseData=h; G2=zpk(tf(n,d))

and the model obtained is

    G2(s) = 0.00011629(s + 4.456×10^4)(s + 1348)(s + 358.2)(s + 81.78)(s + 7.508) / [(s + 1579)(s + 407.5)(s + 97.05)(s + 10.91)(s + 2.266)].

[Figure: Bode diagram comparing H(s), G1(s), G2(s) and G3(s) over 10^-4 to 10^4 rad/s.]

Fig. 5.26: Comparisons of the frequency responses.

If two separate frequency fitting models are used, the overall approximate filter can be found with the following statements:

>> h1=h1.^0.6; h2=h2.^0.3;
   [n,d]=invfreqs(h1(:),w,5,5); g=tf(n,d);
   [n,d]=invfreqs(h2(:),w,5,5); G3=zpk(tf(n,d)*g)

The fitting model is

    G3(s) = 0.00029552(s + 1.066×10^4)(s + 6244)(s + 1122)(s + 1016)(s + 321.9)(s + 319.3)(s + 87.85)(s + 76.86)(s + 17.36)(s + 9.993) / [(s + 2975)(s + 2079)(s + 709.9)(s + 526.2)(s + 223.3)(s + 143.8)(s + 55.69)(s + 24.43)(s + 10.66)(s + 2.536)].

The frequency response fittings of the models are obtained with the following statements, as shown in Figure 5.26. It can be seen that for this example the piecewise fitting approach, G1(s), fails.

>> w=logspace(-4,5);
   H1=frd(G01,w); h1=H1.ResponseData;
   H2=frd(G02,w); h2=H2.ResponseData; H=H1;
   h=h1.^0.6.*h2.^0.3; H.ResponseData=h; bode(H,G1,G2,G3,w)

Step responses of the models can be obtained with the statements

>> sys='1/(1+s/1.6)^0.6/(1+s/6.2)^0.3';
   [t,y]=INVLAP_new(sys,0,3,1001,0,'1/s');
   y1=step(G1,t); y2=step(G2,t); y3=step(G3,t);
   plot(t,y,t,y1,t,y2,t,y3)

as shown in Figure 5.27. It can be seen that the fitting of G3(s) is quite satisfactory, while the response of G2(s) is better than that of G3(s). Among the models, G1(s) is very poor in this example.

[Figure: step responses of H(s), G1(s), G2(s) and G3(s) over 0 to 3 s.]

Fig. 5.27: Comparisons of the step responses.

5.4.3 Optimised Charef filters for complicated irrational models

Let us revisit the Charef filter defined in Definition 5.4, with the ratios z_i/p_i = a and p_{i+1}/z_i = b for i = 0, 1, . . . , n − 1, where a and b are fixed numbers. If the ratios are no longer fixed, they can be optimised with respect to certain performance indices; the optimised Charef filters can then be introduced [40].

Definition 5.5. A complicated irrational model G(s) can be approximated by piecewise segments over the frequency interval (ωm rad/s, ωM rad/s) with the Charef-type filter

    H(s) = K (1 + s/z0) ⋅ ⋅ ⋅ (1 + s/z_{n−1}) / [(1 + s/p0)(1 + s/p1) ⋅ ⋅ ⋅ (1 + s/p_n)],    (5.12)

where the parameters are optimised. Based on the frequency range (ωm, ωM), a set of central frequency points ω_{u_i} of the pairs (z_i, p_i) can be generated such that

    ω_{u_{i+1}}/ω_{u_i} = c,   i.e.   ω_{u_{i+1}} = c ω_{u_i},   i = 0, 1, 2, . . . , n.

Therefore, the poles and zeros in the filter can be introduced as

    p_i = 10^{y_i} ω_{u_i},   i = 0, 1, . . . , n,
    z_i = ω_{u_i}/10^{y_i},   i = 0, 1, . . . , n − 1,

where y_i are adjustable parameters. Select a set of frequency samples ω1, ω2, . . . , ωN, and generate the frequency response data h_a. The target of optimisation is x = [c, ω_{u0}, K, y0, y1, . . . , y_n], which is selected as the decision vector, such that the following objective function is minimised:

    f(x) = w1 ‖Δĥ(jω)‖∞ + (w2/N) Σ_{i=1}^{N} Δĥ(jω_i) + w3 ‖Δ∠ĥ(jω)‖∞ + (w4/N) Σ_{i=1}^{N} Δ∠ĥ(jω_i),    (5.13)

where w_i are weighting coefficients, h(jω) is the frequency response of the filter model, Δĥ(jω_i) = | |h(jω_i)| − |h_a(jω_i)| |, and Δ∠ĥ(jω_i) = |∠h(jω_i) − ∠h_a(jω_i)|. The infinite norm of a vector is in fact the maximum absolute value of all the elements in the vector.

Definition 5.6. The constrained optimisation problem for the Charef filter design is proposed in [40]:

    min_x f(x)   subject to   c > 1,   ω_{u0} > 0,   0 < y_i < 1,   i = 0, 1, . . . , n.
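The objective (5.13) is straightforward to evaluate once the filter response is formed from the decision vector. A Python sketch follows (charef_objective is a hypothetical name, and the pole/zero placement follows the reconstruction of Definition 5.5 above):

```python
import cmath

def charef_objective(x, wr, ha, wt=(1.0, 1.0, 1.0, 1.0)):
    """Objective (5.13) for decision vector x = [c, wu0, K, y0, ..., yn]:
    weighted worst-case and average magnitude/phase errors against the
    target frequency response ha sampled at frequencies wr."""
    c, wu0, K, y = x[0], x[1], x[2], x[3:]
    wu = [wu0 * c ** i for i in range(len(y))]       # central frequencies
    p = [10 ** yi * w for yi, w in zip(y, wu)]       # poles
    z = [w / 10 ** yi for yi, w in zip(y, wu[:-1])]  # zeros (one fewer)
    def H(s):                                        # filter response (5.12)
        v = K + 0j
        for zi in z: v *= 1 + s / zi
        for pi in p: v /= 1 + s / pi
        return v
    h = [H(1j * w) for w in wr]
    dm = [abs(abs(a) - abs(b)) for a, b in zip(h, ha)]        # magnitude errors
    dp = [abs(cmath.phase(a) - cmath.phase(b)) for a, b in zip(h, ha)]  # phase errors
    N = len(wr)
    return (wt[0] * max(dm) + wt[1] * sum(dm) / N
            + wt[2] * max(dp) + wt[3] * sum(dp) / N)
```

A filter evaluated against its own response yields f = 0, and any mismatch makes f positive; this is what the nonlinear optimiser drives down subject to c > 1, ω_{u0} > 0 and 0 < y_i < 1.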

Algorithm 5.8 (Optimal Charef filter design). Proceed as follows.
(1) Select a set of frequency points ωr and generate frequency response data from the irrational model.
(2) Write the objective function based on (5.13).
(3) Generate an initial search point x0 and start the nonlinear optimisation process.
(4) Configure the optimal Charef filter from the optimisation results.

Based on this algorithm, an optimal Charef filter design function can be written as

function Ga=charef_opt(wr,n,G,wt,a,wc)
if nargin ...

>> H.ResponseData=h; bode(H,'-',G1,'--',w)

with the comparison results shown in Figure 5.28. It can be seen that the fitting quality of the 6th-order optimal Charef filter is close to that of the 17th-order filter in Example 5.16. Even if good fitting over a large frequency range like (10^-2, 10^6) is expected, a 16th-order optimal Charef filter can still be designed, and the fitting over the expected

[Figure: Bode diagram comparing the target response and the 6th-order optimal Charef filter over 10^-2 to 10^3 rad/s.]

Fig. 5.28: Comparisons of the frequency responses.

interval is perfect, as shown in Figure 5.29. Almost no differences can be witnessed from the fitting results. The fitting model and fitting quality are not shown here:

>> w=logspace(-2,7,100);
   H1=frd(G01,w); h1=H1.ResponseData;
   H2=frd(G02,w); h2=H2.ResponseData; H=H1;
   h=h1.^0.6.*h2.^0.3; h=h(:); H1.ResponseData=h;
   G2=charef_opt(w,15,h)
   bode(H1,'-',G2,'--',w)

with the optimal Charef filter

    G2(s) = 2.2487×10^-7 (s + 5.727)(s + 13.57)(s + 45.24)(s + 126.4)(s + 340.5)(s + 969.3)(s + 2716)(s + 7588)(s + 2.13×10^4)(s + 5.98×10^4)(s + 1.66×10^5)(s + 4.7×10^5)(s + 1.33×10^6)(s + 3.45×10^6)(s + 1.09×10^7)(s + 3.95×10^7) / [(s + 2.164)(s + 7.171)(s + 16.9)(s + 47.52)(s + 138.5)(s + 382.1)(s + 1071)(s + 3011)(s + 8419)(s + 2.36×10^4)(s + 6.66×10^4)(s + 1.84×10^5)(s + 5.14×10^5)(s + 1.56×10^6)(s + 3.87×10^6)(s + 8.37×10^6)].

The step responses of the filters and the original irrational system are obtained as shown in Figure 5.30.

>> step(G1,G2,t); f='1/(1+s/1.6)^0.6/(1+s/6.2)^0.3';
   [t,y]=INVLAP_new(f,0,3,1001,0,'1/s'); line(t,y)

It can be seen that the responses of the two filters are close, but there exist slight differences between the filters and the original model.

Example 5.18. Design an optimised Charef filter for the complicated irrational transfer function given below and assess its behaviour:

    G(s) = (1 + s/6.2)^0.3 / [(1 + s/1.6)^0.6 (s^0.7/3 + 1)^0.2].

Solution. Select a frequency vector in the interval (10^-4, 10^5); the frequency response of the original irrational plant model can be evaluated, from which an optimised Charef

[Figure: Bode diagram of the target response and the 16th-order optimal Charef filter over 10^-2 to 10^6 rad/s.]

Fig. 5.29: Fitting comparisons over a larger interval.

[Figure: step responses of the optimal Charef filters and the original model over 0 to 2.5 s.]

Fig. 5.30: Step response comparisons.

filter can be designed with

>> G01=tf(1,[1/1.6 1]); G02=tf([1/6.2 1],1);
   w=logspace(-4,5,200);
   H1=frd(G01,w); h1=H1.ResponseData;
   H2=frd(G02,w); h2=H2.ResponseData; H=H1;
   G3=fotf([1/3 1],[0.7 0],1,0); H3=bode(G3,w); h3=H3.ResponseData;
   h=h1.^0.6.*h2.^0.3.*h3.^0.2; H1.ResponseData=h;
   G1=charef_opt(w,11,h(:)), bode(H1,G1,{1e-4,1e5})

and the 12th-order optimised Charef filter is

    G1(s) = 0.0027751(s + 3.591)(s + 8.44)(s + 23.97)(s + 64.13)(s + 179.1)(s + 488.1)(s + 1351)(s + 3681)(s + 1.019×10^4)(s + 2.772×10^4)(s + 7.561×10^4)(s + 2.515×10^5) / [(s + 1.757)(s + 5.638)(s + 14.97)(s + 42.22)(s + 114)(s + 315.6)(s + 860.2)(s + 2380)(s + 6487)(s + 1.799×10^4)(s + 4.973×10^4)(s + 1.128×10^5)].

[Figure: Bode diagram of the original model and the optimised Charef filter over 10^-4 to 10^4 rad/s.]

Fig. 5.31: Frequency response comparisons.

The fitting in the frequency domain is obtained, and the Bode diagrams are compared as shown in Figure 5.31. It can be seen that the fitting is very good in the Bode diagrams. With the numerical inverse Laplace transform technique, the step response of the irrational system can be obtained easily. Also, the step response of the approximation G1(s) can be found. The two step responses are compared in Figure 5.32, and it can be seen that they are quite close to each other.

>> step(G1,t); f='(1+s/6.2)^0.3/(1+s/1.6)^0.6/(s^0.7/3+1)^0.2';
   [t,y]=INVLAP_new(f,0,3,1001,0,'1/s'); line(t,y)

[Figure: step responses of the original model and the optimised Charef filter over 0 to 2.5 s.]

Fig. 5.32: Step response comparisons.

6 Modelling and analysis of multivariable fractional-order transfer function matrices

Consider the linear fractional-order system shown in (4.1). If the initial values of the input signal u(t) and output y(t) and their derivatives are all zero, the fractional-order transfer function can be established with the Laplace transform.

Definition 6.1. A typical single-input single-output fractional-order transfer function with constant delay can be expressed as

    G(s) = (b1 s^γ1 + b2 s^γ2 + ⋅ ⋅ ⋅ + bm s^γm) / (a1 s^η1 + a2 s^η2 + ⋅ ⋅ ⋅ + a_{n−1} s^η_{n−1} + a_n s^η_n) e^{−Ts}.    (6.1)

It can be seen that the fractional-order transfer function can be described by the ratio of two pseudo-polynomials with a delay constant T. Compared with integer-order transfer functions, apart from the numerator and denominator coefficients, the orders should also be declared. Therefore, normally four vectors and a delay constant can be used to describe uniquely the fractional-order transfer function model in (6.1), simply denoted as (a, η, b, γ, T). For multivariable fractional-order systems, as in integer-order systems, an FOTF matrix can be considered as the standard model.

Definition 6.2. Multivariable linear fractional-order systems can be described by an FOTF matrix expressed as

    G(s) = [ g11(s)  ⋅⋅⋅  g1p(s)
               ⋮      ⋱      ⋮
             gq1(s)  ⋅⋅⋅  gqp(s) ],

where g_ij(s) are fractional-order transfer functions defined in (6.1). Since this model is useful in the analysis and design of linear fractional-order systems, a MATLAB class FOTF is constructed to describe the model, as is done for the TF class in the Control System Toolbox. Once the class is created, overload functions can be written to implement the modelling, analysis and design tasks in a simple and straightforward way. Based on the FOTF object, an FOTF Toolbox is written to collect all the MATLAB code in the book, to better solve modelling, analysis and design problems in linear fractional-order systems. Some nonlinear fractional-order systems can also be managed with the new FOTF Toolbox.

In this chapter, the creation of classes and the object-oriented programming technique are demonstrated first; then, based on the class, the modelling and analysis of

DOI 10.1515/9783110497977-006

fractional-order transfer functions are presented. In Section 6.1, essential techniques and programming in declaring and manipulating FOTF objects are presented. In the current FOTF Toolbox, a multivariable fractional-order transfer function matrix is used as the basic form of FOTF objects. In Section 6.2, several overload functions are written such that systems with complicated structures can be processed to find the overall model of the system. Sometimes the fractional-order transfer function processing facilities may not be very efficient; in such cases the models should be converted to commensurate-order ones. In Section 6.3, some properties of linear fractional-order systems are assessed. For instance, a stability criterion is proposed, and H2 and H∞ norms are also computed. Frequency domain and time domain analysis for single-variable and multivariable systems can be carried out with the methods in Sections 6.4 and 6.5, where for multivariable systems, the facilities of diagonal dominance analysis using inverse Nyquist arrays with Gershgorin bands are provided. Singular value plots are also provided. Root locus analysis for single-variable systems can be carried out with the methods in Section 6.6. Some of the work was documented in [42, 76] for single-variable systems only; it is now fully extended to handle multivariable FOTF matrices directly, and also provides an interface to the fractional-order state space objects to be discussed in the next chapter.

6.1 FOTF — Creation of a MATLAB class

If one wants to create a MATLAB class, a name for the class should be selected first. For instance, for the fractional-order transfer function, we selected "FOTF" as its name. A folder @fotf should be created for it, and all the files related to the class should be placed in that folder. Normally, for a new class, at least two functions should be written: fotf.m is used to define the class, which allows the user to input an object, and display.m is used to display the object. The programming of the two functions will be illustrated in this section, followed by the programming details and listings of other supporting functions and overload functions. With these functions, FOTF models can be established and processed.

6.1.1 Defining FOTF class

A function fotf.m should be written and placed in the folder @fotf. To design a class in MATLAB, the fields of the class should be designed first; the class should be defined uniquely by these fields.

In the FOTF object, it can be seen from Definition 6.1 that the coefficients and orders of the numerator and denominator are needed; besides, a delay constant is needed. Therefore, the fields in Table 6.1 are designed. In the current version, only the fields of SISO FOTF objects can be accessed directly by other functions; however, the whole FOTF matrix for multivariable systems is fully supported.

Tab. 6.1: The fields in the FOTF object.

    field name   description of the field
    num          the vector of numerator coefficients of an individual SISO object
                 (for multivariable systems, the fields cannot be accessed)
    nn           the orders of the numerator
    den          the vector of denominator coefficients
    nd           the orders of the denominator
    ioDelay      the delay constant of the element

To define a class in MATLAB, the essential task is to assign the relevant information to all the fields. A special structure should be used, led by the command classdef. Besides, it is also necessary to convert other types of models into a standard FOTF object. Therefore, the following statements can be used to define an FOTF matrix class. Note that the structure of a class defining function is slightly different from conventional MATLAB functions; however, the structure is easy to understand.

classdef fotf
   properties
      num, nn, den, nd, ioDelay
   end
   methods
      function G=fotf(a,na,b,nb,T)
         if isa(a,'fotf'), G=a;
         elseif isa(a,'foss'), G=foss2fotf(a);
         elseif nargin==1 & (isa(a,'tf')|isa(a,'ss')|isa(a,'double'))
            a=tf(a); [n1,m1]=size(a); G=[]; D=a.ioDelay;
            for i=1:n1, g=[];
               for j=1:m1
                  [n,d]=tfdata(tf(a(i,j)),'v');
                  nn=length(n)-1:-1:0; nd=length(d)-1:-1:0;
                  g=[g fotf(d,nd,n,nn,D(i,j))];
               end, G=[G; g];
            end
         elseif nargin==1 & a=='s', G=fotf(1,0,1,1,0);
         else, ii=find(abs(a) ...

... 1, display(['   ' int2str(j) ':']), end
str=fotfdisp(G(j,i),key);
if nargout==0, display(' '); end
end, end

% display a SISO object
function str=fotfdisp(G,key)

strN=polydisp(G.num,G.nn); str=strN;
strD=polydisp(G.den,G.nd); nn=length(strN);
if nn==1 & strN=='0', if key==0, disp(strN), end
else, nd=length(strD); nm=max([nn,nd]);
   if key==0, disp([char(' '*ones(1,floor((nm-nn)/2))) strN]), end
   ss=[]; T=G.ioDelay;
   if T>0, ss=['exp(-' num2str(T) '*s)']; end
   if ~strcmp(strD,'1')
      if T>0, str=['(' str ')*' ss '/(' strD ')'];
      else, str=['(' str ')/(' strD ')']; end
      str=strrep(strrep(str,'{',''),'}','');
      if key==0, disp([char('-'*ones(1,nm)), ss]);
         disp([char(' '*ones(1,floor((nm-nd)/2))) strD])
      end
   end
end

% display a pseudo-polynomial
function strP=polydisp(p,np)
if length(np)==0, p=0; np=0; end
P=''; [np,ii]=sort(np,'descend'); p=p(ii);
for i=1:length(p)
   P=[P,'+',num2str(p(i)),'*s^{',num2str(np(i)),'}'];
end
P=P(2:end); P=strrep(P,'s^{0}',''); P=strrep(P,'+-','-');
P=strrep(P,'^{1}',''); P=strrep(P,'+1*s','+s');
strP=strrep(P,'-1*s','-s'); nP=length(strP);
if nP>=3 & strP(1:3)=='1*s', strP=strP(3:end); end
if strP(end)=='*', strP(end)=''; end

With the two essential functions, a fractional-order system can be entered and displayed easily.

Example 6.1. Enter the following fractional-order transfer function model in the MATLAB workspace:

    G(s) = (0.8s^1.2 + 2) / (1.1s^1.8 + 1.9s^0.5 + 0.4) e^{−0.5s}.

Solution. The following MATLAB commands can be used:

>> G=fotf([1.1,1.9,0.4],[1.8,0.5,0],[0.8,2],[1.2,0],0.5)

The coefficients, orders and delay constant of the fractional-order transfer function can be assigned to the relevant fields such that the FOTF object can be established and displayed.

Example 6.2. Convert the following integer-order model into an FOTF object:

    G(s) = (s^3 + 7s^2 + 24s + 24) / (s^4 + 10s^3 + 35s^2 + 50s + 24).

Solution. The integer-order transfer function can be entered first. Then the function fotf() can be called directly for conversion, and its description in FOTF format will be obtained and displayed.

>> G=tf([1 7 24 24],[1 10 35 50 24]); G1=fotf(G)

Another function fotfdata() can be written to extract all five fields from an FOTF object. For SISO models, the fields can be extracted directly, while for MIMO models, the fields are extracted as cells, where a command like a = a{i,j} can be used to extract the contents from the cell.

function [a,na,b,nb,L]=fotfdata(G)
[n,m]=size(G);
if n*m==1
   b=G.num; a=G.den; nb=G.nn; na=G.nd; L=G.ioDelay;
else  % for MIMO FOTF objects
   for i=1:n, for j=1:m
      [a0,na0,b0,nb0,L0]=fotfdata(G(i,j));
      a{i,j}=a0; b{i,j}=b0; na{i,j}=na0; nb{i,j}=nb0; L(i,j)=L0;
   end, end
end
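For readers following along outside MATLAB, the record structure of Table 6.1 can be mirrored with a small Python dataclass (purely illustrative; the real implementation is the MATLAB @fotf class, and note that the field order here follows Table 6.1, not the (a, na, b, nb, T) order of the fotf() constructor):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Fotf:
    """Mirror of the FOTF fields in Table 6.1."""
    num: List[float]       # numerator coefficients
    nn: List[float]        # numerator orders
    den: List[float]       # denominator coefficients
    nd: List[float]        # denominator orders
    ioDelay: float = 0.0   # delay constant

# the model of Example 6.1: (0.8 s^1.2 + 2) e^{-0.5s} / (1.1 s^1.8 + 1.9 s^0.5 + 0.4)
G = Fotf([0.8, 2], [1.2, 0], [1.1, 1.9, 0.4], [1.8, 0.5, 0], 0.5)
```

Keeping coefficients and orders as two parallel vectors, exactly as fotfdata() returns them, is what makes the Kronecker-based pseudo-polynomial arithmetic of Section 6.2 convenient.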

6.1.3 Multivariable FOTF matrix support

Linear multivariable integer-order systems can usually be described by transfer function matrices, and in fractional-order systems, FOTF matrices are likewise the essential models for multivariable systems. A multivariable FOTF matrix can be entered easily; the straightforward matrix input syntax is allowed. Due to the use of the structure classdef in defining the class, the traditional functions cat(), subsasgn() and subsref() are no longer needed.

Example 6.3. Input the following multivariable FOTF matrix:

    G(s) = [ e^{−0.5s}/(1.5s^1.2 + 0.7)    2e^{−0.2s}/(1.2s^1.1 + 1)
             3/(0.7s^1.3 + 1.5)            2e^{−0.2s}/(1.3s^1.1 + 0.6) ].

Solution. The four FOTF blocks can be entered into the MATLAB workspace first; then the normal matrix input statement can be used to enter the multivariable FOTF matrix.

>> g1=fotf([1.5 0.7],[1.2 0],1,0,0.5);
   g2=fotf([1.2 1],[1.1 0],2,0,0.2);
   g3=fotf([0.7 1.5],[1.3 0],3,0);
   g4=fotf([1.3 0.6],[1.1 0],2,0,0.2);
   G=[g1,g2; g3,g4]


Since there is no semicolon at the end of the final statement, the FOTF matrix is displayed directly.

6.2 Interconnections of FOTF blocks

It is well known that integer-order models can be manipulated with the functions +, * and feedback() to process parallel, series and feedback connections. Following the same idea, the overload functions below can be written. These files should be placed in the folder @fotf. Most of the functions are from the book [42]; however, all of them are modified and extended such that FOTF matrices are fully supported.

6.2.1 Multiplications of FOTF blocks

To define the expression G = G1 ∗ G2, the overload function mtimes() should be written. To begin with, consider the series connection of two SISO FOTF blocks G1(s) and G2(s); one has

    G(s) = G1(s)G2(s) = N1(s)N2(s) / [D1(s)D2(s)].

To multiply two pseudo-polynomials p1(s) and p2(s), represented by the vector groups (c1, n1) and (c2, n2) containing the coefficients and orders of the pseudo-polynomials, the coefficient vector of the product pseudo-polynomial is c = c1 ⊗ c2, while the order vector is n = n1 ⊕ n2. The operations ⊗ and ⊕ are the Kronecker product and the Kronecker sum, respectively, which were discussed in Section 2.4. The resulting order and coefficient vectors are collected to simplify the pseudo-polynomial (c, n). A subfunction sisofotftimes() is written to evaluate the product of two SISO FOTF objects, and it will be given later.

Comments 6.2 (FOTF multiplication function). We remark the following.
(1) The two objects are unified in the FOTF format.
(2) If one or both of the FOTF blocks are FOTF matrices, then matrix multiplication of FOTF matrices can be performed.
(3) If the sizes or delay terms of the two matrices are not compatible, an error will be displayed.
(4) With such a function, more than two FOTF objects can be multiplied together, with the syntax G = G1 ∗ G2 ∗ ⋅ ⋅ ⋅ ∗ Gk.

The overload function mtimes() can be written for multiplying two FOTF objects, where MIMO ones are supported. The listing of the function is as follows.

function G=mtimes(G1,G2)
G1=fotf(G1); G2=fotf(G2);
[n1,m1]=size(G1); [n2,m2]=size(G2);

if m1==n2, G=fotf(zeros(n1,m2));  % matrix multiplication
   for i=1:n1, for j=1:m2, for k=1:m1
      G(i,j)=G(i,j)+sisofotftimes(G1(i,k),G2(k,j));
   end, end, end
elseif n1*m1==1, G=fotf(zeros(n2,m2));  % if G1 is a scalar
   for i=1:n2, for j=1:m2, G(i,j)=G1*G2(i,j); end, end
elseif n2*m2==1, G=fotf(zeros(n1,m1));  % if G2 is a scalar
   for i=1:n1, for j=1:m1, G(i,j)=G2*G1(i,j); end, end
else
   error('The two matrices are incompatible for multiplication')
end

% product of two SISO FOTF objects
function G=sisofotftimes(G1,G2)
if iszero(G1) | iszero(G2), G=fotf(0);
else
   [a1,na1,b1,nb1]=fotfdata(G1); [a2,na2,b2,nb2]=fotfdata(G2);
   a=kron(a1,a2); b=kron(b1,b2);
   na=kronsum(na1,na2); nb=kronsum(nb1,nb2);
   G=simplify(fotf(a,na,b,nb,G1.ioDelay+G2.ioDelay));
end

A supporting function simplify() is written to simplify individual FOTF models in the FOTF matrix. The subfunction polyuniq() can be used to collect coefficients of pseudo-polynomials; this subfunction cannot be called directly.

function G=simplify(G,eps1)
[n,m]=size(G); if nargin==1, eps1=eps; end
for i=1:n, for j=1:m, G1=G(i,j);
   [b,nb]=polyuniq(G1.num,G1.nn,eps1);
   [a,na]=polyuniq(G1.den,G1.nd,eps1);
   if length(a)==length(b), da=a(1); db=b(1);
      if abs(a/da-b/db) ...

>> s=fotf('s'); Gc=5+2*s^(-0.2)+3*s^0.6

Example 6.6. Input the fractional-order transfer function model

    G(s) = (s^0.3 + 3)^2 / [(s^0.2 + 2)(s^0.4 + 4)(s^0.4 + 3)].

Solution. The complicated system can be entered by first declaring a Laplace operator s, and then performing simple manipulations with the following statements:

>> s=fotf('s'); G=(s^0.3+3)^2/(s^0.2+2)/(s^0.4+4)/(s^0.4+3)


The following expanded model can be obtained:

    G(s) = (s^0.6 + 6s^0.3 + 9) / (s + 2s^0.8 + 7s^0.6 + 14s^0.4 + 12s^0.2 + 24).
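The Kronecker-based pseudo-polynomial product at the heart of sisofotftimes() can be sketched in Python (a hedged illustration, not toolbox code): every pair of coefficients is multiplied (Kronecker product), every pair of orders is added (Kronecker sum), and like powers are then collected, exactly as in the expansion above.

```python
def pp_mul(c1, n1, c2, n2, tol=1e-9):
    """Product of pseudo-polynomials given as (coefficients, orders) pairs;
    returns a dict mapping order -> coefficient with like terms collected."""
    terms = {}
    for a, na in zip(c1, n1):
        for b, nb in zip(c2, n2):
            k = round(na + nb, 9)          # guard against float noise in orders
            terms[k] = terms.get(k, 0.0) + a * b
    # drop numerically zero terms, sort by descending order
    return {k: v for k, v in sorted(terms.items(), reverse=True) if abs(v) > tol}

# (s^0.3 + 3)^2 = s^0.6 + 6 s^0.3 + 9, the numerator of Example 6.6
sq = pp_mul([1, 3], [0.3, 0], [1, 3], [0.3, 0])
```

Repeating pp_mul over the denominator factors reproduces the expanded denominator shown above; the collection step plays the role of polyuniq() in simplify().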

Example 6.7. Assume that in a typical unity negative feedback control system

    G(s) = (0.8s^1.2 + 2) / (1.1s^1.8 + 0.8s^1.3 + 1.9s^0.5 + 0.4),
    Gc(s) = (1.2s^0.72 + 1.5s^0.33) / (3s^0.8).

Find the closed-loop model.

Solution. The following statements can be used to enter the model into the MATLAB workspace:

>> G=fotf([1.1,0.8 1.9 0.4],[1.8 1.3 0.5 0],[0.8 2],[1.2 0]);
   Gc=fotf([3],[0.8],[1.2 1.5],[0.72 0.33]);
   G1=feedback(G*Gc,1)

Then

    G1(s) = (0.96s^1.59 + 1.2s^1.2 + 2.4s^0.39 + 3) / (3.3s^2.27 + 2.4s^1.77 + 0.96s^1.59 + 1.2s^1.2 + 5.7s^0.97 + 1.2s^0.47 + 2.4s^0.39 + 3)

is the obtained closed-loop model.
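The closed-loop result can be cross-checked by hand with the same pseudo-polynomial arithmetic: for unity negative feedback the closed loop is N_G N_c / (D_G D_c + N_G N_c), and dividing through by the lowest power s^0.33 gives the printed coefficients. A self-contained Python check (illustrative only; pp_mul/pp_add are hypothetical helper names):

```python
def pp_mul(c1, n1, c2, n2):
    """Pseudo-polynomial product as a dict of order -> coefficient."""
    t = {}
    for a, na in zip(c1, n1):
        for b, nb in zip(c2, n2):
            k = round(na + nb, 9)
            t[k] = t.get(k, 0.0) + a * b
    return t

def pp_add(t1, t2):
    """Pseudo-polynomial sum, merging like orders."""
    t = dict(t1)
    for k, v in t2.items():
        t[k] = t.get(k, 0.0) + v
    return t

num = pp_mul([0.8, 2], [1.2, 0], [1.2, 1.5], [0.72, 0.33])        # NG*Nc
den = pp_add(pp_mul([1.1, 0.8, 1.9, 0.4], [1.8, 1.3, 0.5, 0],
                    [3], [0.8]),                                   # DG*Dc
             num)                                                  # + NG*Nc
shift = lambda t: {round(k - 0.33, 9): v for k, v in t.items()}    # divide by s^0.33
num, den = shift(num), shift(den)
```

The resulting order/coefficient pairs agree term by term with the printed G1(s).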

6.2.5 Conversions between FOTFs and commensurate-order models

To convert a multivariable FOTF matrix into a commensurate-order transfer function (COTF), a base order α is required, and it can be recognised from the orders appearing in the numerators and denominators. A MATLAB function is written to find the base order of an FOTF matrix.

function alpha=base_order(G)
[n,m]=size(G); a=[];
for i=1:n, for j=1:m, g=G(i,j); a=[a,g.nn,g.nd]; end, end
[c,d]=rat(a,1e-20);
alpha=double(gcd(sym(c))/lcm(sym(d)));
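The symbolic gcd/lcm step in base_order() can be mirrored with Python's fractions module (a hedged sketch, not a translation of the sym-based code): rationalise each order, then take the gcd of the numerators over the lcm of the denominators.

```python
from fractions import Fraction
from math import gcd, lcm

def base_order(orders):
    """Greatest common rational divisor of the nonzero fractional orders."""
    fr = [Fraction(a).limit_denominator(10 ** 6) for a in orders if a]
    return gcd(*(f.numerator for f in fr)) / lcm(*(f.denominator for f in fr))

print(base_order([1.2, 0, 1.1, 1.3]))                 # 0.1
print(base_order([3.501, 2.42, 1.798, 1.31, 0.63]))   # 0.001, as in Section 6.3.1
```

The second call shows why the stability example of Section 6.3.1 ends up with a 3501st-order polynomial in λ: orders quoted to three decimals force the base order down to 0.001.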

194 | 6 Modelling and analysis of multivariable FOTF matrices

else for i=1:n, for j=1:m, g=G(i,j); a=[]; b=[]; n0=round(g.nd/alpha); a(n0+1)=g.den; a=a(end:-1:1); m0=round(g.nn/alpha); b(m0+1)=g.num; b=b(end:-1:1); g1=tf(b,a); G1(i,j)=g1; T(i,j)=g.ioDelay; end, end, end G1.ioDelay=T; Also, if the commensurate-order model G and base order α are known, it can be converted back into FOTF matrix as follows. function G1=cotf2fotf(G,alpha) [n,m]=size(G); G1=fotf(G); for i=1:n, for j=1:m, g=fotf(G(i,j)); g.nn=g.nn*alpha; g.nd=g.nd*alpha; G1(i,j)=g; end, end Comments 6.6 (Commensurate conversion). We remark the following. (1) The so-called COTF object G1 is in fact a TF object in MATLAB. (2) For single-variable systems, models can be computed easily without COTF object conversions. (3) For a static matrix, the common-order is zero. (4) If FOTF object is used alone, the matrix inversion and similar processes should rely on overload functions, which are not quite efficient. (5) If converted into COTF blocks, matrix inversion can be processed under the integerorder transfer function framework, which overcomes the problems in multivariable FOTF blocks. Example 6.8. Solve the problem in Example 6.4 under the framework of commensurate-order systems. Solution. The closed-loop model can be obtained by either of the two methods, and it can be seen that the commensurate-order conversion method is much faster, and the result obtained are much simpler. Therefore, the commensurate-order conversion method is recommended for the closed-loop manipulation of multivariable FOTF systems. >> g1=fotf([1.5 0.7],[1.2 0],1,0); g2=fotf([1.2 1],[1.1 0],2,0); g3=fotf([0.7 1.5],[1.3 0],3,0); g4=fotf([1.3 0.6],[1.1 0],2,0); G=[g1,g2; g3,g4]; H=fotf(eye(2)); tic, G1=feedback(G,H); toc tic, [G0,a]=fotf2cotf(G); G2=feedback(G0,eye(2)); G2=cotf2fotf(minreal(G2,1e-5),a); toc


Please note that if the FOTF matrices involved are more complicated, for instance, if there are four terms in each FOTF block, the function feedback() may not yield a result within an acceptable time period, while the COTF approach can reliably produce a result.

6.3 Properties of linear fractional-order systems

In this section, a stability assessment function for linear fractional-order systems is developed. An H2/H∞ norm evaluation function is also developed.

6.3.1 Stability analysis

The stability assessment of linear fractional-order systems is slightly different from that of integer-order systems.

Theorem 6.1. Suppose a linear commensurate-order system is given as in Definition 4.9, with the base order α. Denoting λ = s^α, the original transfer function G(s) can be expressed as an integer-order transfer function of λ:

    G(λ) = (b1 λ^m + b2 λ^{m−1} + ⋅ ⋅ ⋅ + bm λ + b_{m+1}) / (a1 λ^n + a2 λ^{n−1} + ⋅ ⋅ ⋅ + an λ + a_{n+1}).

The original transfer function G(s) is stable if and only if all the poles p_i of λ satisfy (cf. [37])

    |arg(p_i)| > απ/2,

where arg(z) is the argument of the complex variable z. The graphical interpretation of the stability region is shown as the shaded area in Figure 6.1. An informal explanation of the theorem is given below.

[Figure: the λ = s^α plane, with real axis R[s^α] and imaginary axis I[s^α]; the stability boundaries are the rays at arguments ±απ/2, and the stable regions lie outside the sector they enclose.]

Fig. 6.1: Stable regions for commensurate-order systems.
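The criterion can be exercised numerically with a small Python sketch (illustrative only, standard library): map the denominator to a polynomial in λ = s^α, find its roots, and compare each argument with απ/2. For D(s) = s + s^0.5 + 1 with α = 0.5 the λ-roots sit at |arg| = 120° > 45°, so the system is stable.

```python
import cmath, math

def commensurate_stable(lam_poles, alpha):
    """Theorem 6.1: stable iff every pole p of the lambda-polynomial
    (lambda = s^alpha) satisfies |arg(p)| > alpha*pi/2."""
    return all(abs(cmath.phase(complex(p))) > alpha * math.pi / 2
               for p in lam_poles)

def quad_roots(a, b, c):
    """Roots of a*x^2 + b*x + c (enough for the second-order demos here)."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

# D(s) = s + s^0.5 + 1  ->  lambda^2 + lambda + 1, alpha = 0.5: stable
stable1 = commensurate_stable(quad_roots(1, 1, 1), 0.5)
# D(s) = s - 1.5 s^0.5 + 1  ->  |arg| about 41.4 deg < 45 deg: unstable
stable2 = commensurate_stable(quad_roots(1, -1.5, 1), 0.5)
```

Note that, as the discussion below stresses, only the genuine roots (those in the first Riemann sheet) should be fed into such a test; extraneous roots of the λ-polynomial must be filtered out first.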

Note that since the original fractional-order system is a commensurate-order system of λ = s^α, and the stability boundary for s is ±π/2, it follows that the stability boundary for λ is ±απ/2.

Comments 6.7 (Stability issues). We remark the following.
(1) Please note that the criterion is for λ = s^α, not for s. The stability boundary for s is still the imaginary axis.
(2) When the base order is α = 1, the system is of integer order, and the stability boundary reverts to the imaginary axis, which agrees well with the integer-order case.
(3) If α ≥ 2, the boundary is the negative real axis; thus the system is unstable.
(4) For multivariable systems, the stability assessment is applied to all the components.
(5) As indicated in Example 4.18, there may exist extraneous roots in the pseudo-polynomial equations, so one may query the correctness of the theorem. In fact, it has been found through many examples that the genuine poles are those with the smallest arguments; hence, the theorem is valid.

The stability assessment of commensurate-order systems can be carried out directly in a revised way. For the base order α, the boundaries of the stable regions are the straight lines with arguments ±απ/2. If all the genuine roots are stable, the system is stable. The extraneous roots should not be considered.

Theorem 6.2. All the genuine roots of the pseudo-polynomial equation satisfy |arg(p_i)| < απ, and they are located in the first Riemann sheet. The other roots do not satisfy the original characteristic equation.

Based on this idea, the following MATLAB function can be written. The function converts the FOTF object to a commensurate-order system first, and then the poles of the system can be evaluated. Although sometimes the order of the commensurate-order system is extremely high, it can still be processed by MATLAB easily.
The syntax [key, α, a1, p] = isstable(G, a0) can be used to assess the stability of the FOTF object, where key is one for a stable system. The argument α returns the base order, a1 is the slope corresponding to the ±απ/2 boundaries, and a0 is the user-selected base order, with a default of 0.001. The argument p returns all the roots of λ satisfying the original pseudo-polynomial characteristic equation, with all the extraneous roots removed. The boundaries of the first Riemann sheet, |arg(p_i)| < απ, are also drawn.

function [K,alpha0,apol,p]=isstable(G,a0)
[n0,m0]=size(G); K=1;
if nargin==1, a0=0.001; end
for i=1:n0, for j=1:m0
   g=G(i,j); a=g.nd; a1=fix(a/a0);
   if length(a1)==1 & a1==0


   else
      [g1,alpha]=fotf2cotf(g); c=g1.den{1}; alpha0(i,j)=alpha;
      p0=roots(c); kk=[];
      for k=1:length(p0)
         a=g.den; na=g.nd; pa=p0(k)^(1/alpha);
         if norm(a*[pa.^na.']) ...

Example 6.9. Assess the stability of the fractional-order model
   G(s) = (−2s^0.63 − 4)/(2s^3.501 + 3.8s^2.42 + 2.6s^1.798 + 2.5s^1.31 + 1.5).
Solution. The model can be entered and its stability assessed with the statements
>> a=[2,3.8,2.6,2.5,1.5]; na=[3.501,2.42,1.798,1.31,0];
   b=[-2,-4]; nb=[0.63,0]; G=fotf(a,na,b,nb);
   [key,alpha,a1,p]=isstable(G), p1=p.^(1/alpha)
It is obvious that the base order is α = 0.001, and the commensurate-order model of the system can be rewritten as an integer-order transfer function of λ:
   G(λ) = (−2λ^630 − 4)/(2λ^3501 + 3.8λ^2420 + 2.6λ^1798 + 2.5λ^1310 + 1.5).

The roots of the polynomial in λ satisfying the original pseudo-polynomial equation can be obtained automatically in the function, as shown in Figure 6.2. The smallest argument of all the poles is a1 = 0.0018 radian = 0.1029°, which is larger than the boundary value of απ/2 = 0.09°. It can be seen that all the poles of the system are located in the stable regions; therefore, the system is stable. Since α = 0.001, the polynomial is of 3,501st order, and it may take some time to find all the poles.

[Figure: pole map in the λ-plane (Real Axis versus Imaginary Axis, ×10⁻³), showing the first Riemann sheet and the stability boundaries.]

Fig. 6.2: Pole positions and stability assessment.

In fact, it has been shown in Example 4.18 that although there exist 3,501 roots for λ, a great many of them do not satisfy the original characteristic equation, so the genuine poles must be distinguished. With the function isstable(), only two pairs of poles are genuine roots, located at 1.0006 ± 0.0023j and 0.9993 ± 0.0018j, as shown in the figure. These roots of λ correspond to the roots of s at −1.2923 ± 1.3226j and −0.1098 ± 0.4803j, all having negative real parts, and it can be seen that they are located in the first Riemann sheet. The genuine roots can also be found from the original characteristic equation with the equation solver more_sols() in Section 4.6.4, where the same two pairs of solutions are found, exactly as obtained above, all having negative real parts.
>> f=@(s)2*s^3.501+3.8*s^2.42+2.6*s^1.798+2.5*s^1.31+1.5;
   more_sols(f,zeros(1,1,0),10+10i)
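The genuine-root filtering idea behind isstable() can also be cross-checked outside MATLAB. The following Python sketch is an independent illustration, not the book's code; the helper names genuine_roots() and is_stable() are ad hoc. It forms the integer-order polynomial in λ = s^α, discards the extraneous roots whose back-substitution s = λ^(1/α) (principal branch) fails to satisfy the original pseudo-polynomial equation, and applies the |arg(λ)| > απ/2 test to the survivors.

```python
import numpy as np

def genuine_roots(coeffs, orders, alpha, tol=1e-8):
    """Roots in lambda = s^alpha of sum_i coeffs[i]*s^orders[i] = 0, keeping
    only the genuine ones: those whose back-substitution s = lambda^(1/alpha)
    (principal branch) still satisfies the original pseudo-polynomial."""
    k = np.round(np.array(orders) / alpha).astype(int)  # integer powers of lambda
    kmax = int(k.max())
    poly = np.zeros(kmax + 1)
    for c, ki in zip(coeffs, k):
        poly[kmax - int(ki)] += c                       # descending powers for np.roots
    lam = np.roots(poly).astype(complex)
    genuine = []
    for l in lam:
        s = l ** (1.0 / alpha)
        val = sum(c * s ** n for c, n in zip(coeffs, orders))
        if abs(val) < tol * max(1.0, abs(s) ** max(orders)):
            genuine.append(l)
    return genuine

def is_stable(coeffs, orders, alpha):
    """Stable iff every genuine root satisfies |arg(lambda)| > alpha*pi/2."""
    return all(abs(np.angle(l)) > alpha * np.pi / 2
               for l in genuine_roots(coeffs, orders, alpha))
```

For instance, s^0.5 + 1 = 0 has only the extraneous root λ = −1 (no genuine poles, so the system is stable), while s^0.5 − 1 = 0 has the genuine root λ = 1 with zero argument, so the system is unstable.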

6.3.2 Partial fraction expansion and stability assessment

A partial fraction expansion can be performed on a commensurate-order system, and the stability can then be assessed. An overload MATLAB function residue() is written to convert an SISO commensurate-order system into partial fraction form.

function [alpha,r,p,K]=residue(G)
[H,alpha]=fotf2cotf(G); [r,p,K]=residue(H.num{1},H.den{1});

The syntax of the function is [α, r, p, K] = residue(G), where α is the base order, r and p are the vectors of the partial fraction expansion, with K the possible remainder.


Example 6.10. Find the partial fraction expansion of the model in Example 4.4.
Solution. From the fractional-order differential equation, the FOTF model can be written as
   G(s) = 1/(s^1.2 + 5s^0.9 + 9s^0.6 + 7s^0.3 + 2),
and the partial fraction expansion of the model can be obtained as follows.
>> G=fotf([1 5 9 7 2],1.2:-0.3:0,1,0); [alpha,r,p,K]=residue(G)
It can be found that the returned arguments are α = 0.3, r = [−1, 1, −1, 1] and p = [−2, −1, −1, −1], and K is an empty matrix, meaning that the mathematical form of the partial fraction expansion is
   G(s) = −1/(s^0.3 + 2) + 1/(s^0.3 + 1) − 1/(s^0.3 + 1)^2 + 1/(s^0.3 + 1)^3.
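The expansion can be verified without any toolbox: in the commensurate variable λ = s^0.3 it is an ordinary rational-function identity, so the sum of the four terms must reproduce 1/(λ^4 + 5λ^3 + 9λ^2 + 7λ + 2) at any test point. A small self-contained Python check (illustrative only):

```python
# Cross-check of the expansion in Example 6.10, written in lambda = s^0.3.
def G_orig(lam):
    return 1.0 / (lam**4 + 5*lam**3 + 9*lam**2 + 7*lam + 2)

def G_expanded(lam):
    # the four terms returned by residue(): r = [-1, 1, -1, 1], p = [-2, -1, -1, -1]
    return -1/(lam + 2) + 1/(lam + 1) - 1/(lam + 1)**2 + 1/(lam + 1)**3

for lam in (0.5, 1.0, 2.0 + 1.0j):   # arbitrary real and complex test points
    assert abs(G_orig(lam) - G_expanded(lam)) < 1e-12
```

The denominator factors as (λ + 1)^3(λ + 2), which is why the pole −1 appears with multiplicity three in the expansion.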

For certain fractional-order transfer functions, it may not be necessary to find the base order and convert the model into a commensurate-order system.

Definition 6.3. A generalised form of partial fraction expansion of a fractional-order transfer function can be written as
   G(s) = Σ_{i=1}^{N} Σ_{j=1}^{m_i} r_{ij}/(s^{α_i} + p_i)^j,
where p_i and r_{ij} are complex numbers, while the m_i are positive integers.

It can be seen that Definition 4.11 is just a special case of Definition 6.3, since there is a common base order α.

Theorem 6.3 ([52]). The generalised partial fraction expansion in Definition 6.3 is stable if and only if 0 ≤ α_i < 2 and |arg(p_i)| < π(1 − α_i/2) for all i.

In fact, with the support of MATLAB, it is often much easier to find a base order and use Definition 4.11 than to find the partial fraction expansion of Definition 6.3.

6.3.3 Norms of fractional-order systems

The norms of systems are important quantities in robust control design. The definitions and evaluation algorithms of the norms of fractional-order systems are illustrated in this subsection.

Definition 6.4. The H₂ and H∞ norms of an SISO FOTF model G(s) are defined respectively as
   ‖G(s)‖₂ = [ (1/(2πj)) ∫_{−j∞}^{j∞} G(s)G(−s) ds ]^{1/2},   ‖G(s)‖∞ = sup_ω |G(jω)|.

It can be seen that the norm ‖G(s)‖₂ can be evaluated through numerical integration, while the norm ‖G(s)‖∞ can be obtained with numerical optimisation. If the model G(s) is a multivariable FOTF matrix, the norms of the individual FOTF blocks are evaluated first; then the norm of the whole matrix is evaluated. The overload function norm() can be written and placed in the folder @fotf, with the syntaxes norm(G) and norm(G,inf). The subfunction snorm() evaluates the norm of each SISO FOTF block G(i, j).

function n=norm(G,eps0)
[n,m]=size(G); if nargin==1, eps0=1e-6; end
for i=1:n, for j=1:m, A(i,j)=snorm(G(i,j),eps0); end, end
n=norm(A);   % computing numerically the norm of each SISO FOTF block
function n=snorm(G,eps0)
j=sqrt(-1); dx=1; f0=0;
if nargin==2 & ~isfinite(eps0)   % H-infinity norm, find the maximum value
   f=@(w)[-abs(freqresp(j*w,G))]; w=fminsearch(f,0);
   n=abs(freqresp(j*w,G));
else   % H2 norm, numerical integral
   f=@(s)freqresp(s,G).*freqresp(-s,G);
   n=integral(f,-inf,inf)/(2*pi*j);
end

Here the low-level frequency response function freqresp() is given as follows; it can also be used to evaluate G(s) for a given vector s. If s is set to a frequency vector s = jω, the frequency response of a multivariable system can be evaluated. In the subfunction snorm(), only SISO FOTF objects are passed to freqresp().

function H=freqresp(s,G1)
[n,m]=size(G1);
for i=1:n, for j=1:m, [a,na,b,nb,L]=fotfdata(G1(i,j));
   for k=1:length(s),


      P=b*(s(k).^nb.'); Q=a*(s(k).^na.'); H1(k)=P/Q;
   end
   if L>0, H1=H1.*exp(-L*s); end, H(i,j,:)=H1;
end, end
if n*m==1, H=H(:).'; end

Example 6.11. Compute the norms of the fractional-order model given in Example 6.9.
Solution. The FOTF model can be entered first, and the norms of the system can then be evaluated; the results are n1 = 2.7168 and n2 = 8.6115.
>> a=[2,3.8,2.6,2.5,1.5]; na=[3.501,2.42,1.798,1.31,0];
   b=[-2,-4]; nb=[0.63,0]; G=fotf(a,na,b,nb);
   n1=norm(G), n2=norm(G,inf)
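As a rough stand-alone cross-check of the H∞ computation, one can evaluate |G(jω)| on a dense logarithmic grid and take the maximum, instead of the fminsearch-based snorm() above. The Python sketch below is illustrative and the helper names are ad hoc; a coarse grid may slightly underestimate a sharp resonant peak, which is one reason the toolbox uses an optimiser.

```python
def freqresp(b, nb, a, na, w):
    """G(jw) for G(s) = sum(b[i]*s^nb[i]) / sum(a[i]*s^na[i])."""
    s = 1j * w
    num = sum(bi * s**ni for bi, ni in zip(b, nb))
    den = sum(ai * s**ni for ai, ni in zip(a, na))
    return num / den

def hinf_grid(b, nb, a, na, wmin=1e-4, wmax=1e4, n=4000):
    """Coarse H-infinity estimate: max of |G(jw)| over a log-spaced grid."""
    best = 0.0
    for k in range(n):
        w = wmin * (wmax / wmin) ** (k / (n - 1))
        best = max(best, abs(freqresp(b, nb, a, na, w)))
    return best
```

For the model of Example 6.9, hinf_grid([-2,-4], [0.63,0], [2,3.8,2.6,2.5,1.5], [3.501,2.42,1.798,1.31,0]) can be compared against the value n2 = 8.6115 reported above.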

6.4 Frequency domain analysis

Frequency domain analysis and design techniques are very useful in linear systems studies. In this section, frequency domain analysis of single-variable systems is presented first, and the overload functions for Bode diagrams, Nyquist plots and Nichols charts are written. Then frequency domain analysis for multivariable systems is introduced, and Nyquist arrays with Gershgorin bands are studied to check whether a system is diagonal dominant or not. Finally, singular value plots of multivariable systems are explored.

6.4.1 Frequency domain analysis of SISO systems

Consider a fractional-order transfer function G(s). If jω is substituted for s, the exact frequency response data can be obtained directly through simple complex number computation. The data can be packed in the form of the function frd() in the Control System Toolbox, so that frequency domain analysis functions such as bode() can be used to draw frequency domain plots. Overload functions can also be written and placed in the folder @fotf. The listing of the overload function bode() is as follows, with the supporting function freqresp() as its kernel.

function H=bode(G,w)
if nargin==1, w=logspace(-4,4); end
j=sqrt(-1); H1=freqresp(j*w,G); H1=frd(H1,w);
if nargout==0, subplot(111), bode(H1); else, H=H1; end

The syntax of the function is H = bode(G, w), where w is the user-specified frequency vector, with the default covering ω ∈ (10^−4 rad/s, 10^4 rad/s). If the returned variable H is omitted, the Bode diagram is drawn automatically.

Similarly, the overload functions for Nyquist plots and Nichols charts are as follows, where the function bode() is used to evaluate the frequency response data if the system G is specified as an FOTF object.

function nichols(G,w)
if nargin==1, w=logspace(-4,4); end, H=bode(G,w);
subplot(111), nichols(H);
function nyquist(G,w)
if nargin==1, w=logspace(-4,4); end
H=bode(G,w); subplot(111), nyquist(H);

The gain and phase margins can also be evaluated with the overload function margin(), with the syntax [Gm, ϕm, ωcg, ωcp] = margin(G), where Gm and ϕm are the gain and phase margins, and ωcg and ωcp are the corresponding frequencies.

function [Gm,Pm,Wcg,Wcp]=margin(G)
H=bode(G,logspace(-4,4,1000)); [Gm,Pm,Wcg,Wcp]=margin(H);

Example 6.12. Analyse the frequency response of the fractional-order model in Example 6.9.
Solution. The statements
>> b=[-2,-4]; nb=[0.63,0]; w=logspace(-2,2);
   a=[2,3.8,2.6,2.5,1.5]; na=[3.501,2.42,1.798,1.31,0];
   G=fotf(a,na,b,nb); bode(G,w);
   figure, w=logspace(-2,4,400); nyquist(G,w); grid
   [Gm,Pm,Wcg,Wcp]=margin(G)
can be used to draw the Bode diagram and Nyquist plot as shown in Figures 6.3 and 6.4.

[Figure: Bode diagram, magnitude (dB) and phase (deg) versus frequency (rad/s); the data tip reports Gain Margin (dB): 9.43 at frequency 0.723 rad/s, closed loop stable? Not known.]

Fig. 6.3: Bode diagram of the fractional-order transfer function.


[Figure: Nyquist plot with the critical point (−1, j0) marked; the data tips report Phase Margin (deg): 178 at frequency 1.17 rad/s and Gain Margin (dB): −8.44 at frequency 0.00128 rad/s, closed loop stable? Not known.]

Fig. 6.4: Nyquist plot of the fractional-order transfer function.

The gain margin is 0.3750, i.e. 20 lg 0.375 = −8.5193 dB, at ω = 2.0366×10^−5 rad/s, while the phase margin is 177.7581° at ω = 1.1683 rad/s. It is worth mentioning that the frequency response plots inherit the graphical facilities of those in the Control System Toolbox. For instance, the gain and phase margins can be displayed by clicking the right mouse button. Unfortunately, the stability information displayed in Figures 6.3 and 6.4 is "Not known", since stability analysis facilities for fractional-order systems are not available there; the stability must be assessed manually by other methods.

6.4.2 Stability assessment with Nyquist plots The Nyquist plot of the whole single-variable open-loop model G(s) can be used in assessing the closed-loop system stability under the unity negative feedback structure through the well-known Nyquist theorem. Theorem 6.4 (Nyquist theorem, [45]). If the open-loop model G(s) has p unstable poles, and its Nyquist plot encircles the point (−1, j0) in counterclockwise direction q times, then the closed-loop system is stable if p = q; otherwise, the closed-loop system is unstable, with q − p unstable poles. Comments 6.8 (Nyquist theorem). We remark the following. (1) The method applies to single-variable systems only. (2) If the open-loop model is stable, then the closed-loop system is stable if there is no encirclement around the point (−1, j0); otherwise, there are q unstable poles. (3) The Nyquist theorem uses open-loop information to assess the stability of closedloop systems under unity negative feedback structure. (4) In the original Nyquist theorem, there was no assumption that G(s) is a rational integer-order transfer function. Therefore, the theorem should be valid for fractional-order, or even, irrational systems.

204 | 6 Modelling and analysis of multivariable FOTF matrices Example 6.13. If the open-loop model is given in Example 6.9, assess the closed-loop stability of the system under unity negative feedback. Solution. It has been shown that the open-loop model is stable, i.e. p = 0, and it can be seen from Figure 6.4 that there is one encirclement around the point (−1, j0). It can be concluded that the closed-loop system is unstable, with one unstable closed-loop pole. The closed-loop model can be obtained with the statements >> b=[-2,-4]; nb=[0.63,0]; w=logspace(-2,2); a=[2,3.8,2.6,2.5,1.5]; na=[3.501,2.42,1.798,1.31,0]; G=fotf(a,na,b,nb); G1=feedback(G,1) and it is found that the closed-loop model is G1 (s) =

−s0.63 − 2 , s3.501 + 1.9s2.42 + 1.3s1.798 + 1.25s1.31 − s0.63 − 1.25

where the closed-loop characteristic equation is c(s) = s^3.501 + 1.9s^2.42 + 1.3s^1.798 + 1.25s^1.31 − s^0.63 − 1.25 = 0. All the closed-loop poles can be found with the commands
>> f=@(s)s^3.501+1.9*s^2.42+1.3*s^1.798+1.25*s^1.31-s^0.63-1.25;
   more_sols(f,zeros(1,1,0),-100+100i)
and it is found that the three closed-loop poles are −1.2896 ± j1.6689 and 0.6176. Hence, there is exactly one unstable pole in the closed-loop system, and the theorem is validated in this example.
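The quoted poles are easy to verify independently: substituting them into c(s), with the principal branch of the fractional powers, should give a residual near zero, up to the four-digit rounding of the pole values. A minimal Python check (illustrative only):

```python
def c(s):
    """Closed-loop characteristic pseudo-polynomial of Example 6.13.
    Python's complex power uses the principal branch, matching the
    first-Riemann-sheet convention of the text."""
    s = complex(s)
    return (s**3.501 + 1.9*s**2.42 + 1.3*s**1.798
            + 1.25*s**1.31 - s**0.63 - 1.25)

# the three closed-loop poles quoted in the text (rounded to four decimals)
for p in (0.6176, -1.2896 + 1.6689j, -1.2896 - 1.6689j):
    assert abs(c(p)) < 0.05   # residual dominated by the rounding of p
```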

6.4.3 Diagonal dominance analysis

Although functions like bode() can be used to deal with multivariable systems directly, specialised frequency response plots are used in typical multivariable frequency domain analysis and design. Let us summarise the multivariable frequency domain analysis of integer-order systems first.

Definition 6.5. Assume that the transfer function matrix of the forward path is Q(s) and the feedback path transfer function matrix is H(s). The closed-loop transfer function matrix can then be written as
   G(s) = [I + Q(s)H(s)]^{−1} Q(s),   (6.2)
where I + Q(s)H(s) is referred to as the return difference matrix.

Based on the return difference matrix, frequency domain analysis becomes more convenient with inverse Nyquist array (INA) approaches [59].


In single-variable systems, the number of encirclements of the Nyquist plot around the point (−1, j0) is counted, while for multivariable systems, the number of encirclements of the INA of the return difference matrix around the point (0, j0) is counted. The Gershgorin theorem stated below is the foundation of inverse Nyquist array approaches to multivariable design.

Theorem 6.5 (Gershgorin theorem). For the complex matrix

   C = [ c11  ⋯  c1k  ⋯  c1n
         ⋮         ⋮         ⋮
         ck1  ⋯  ckk  ⋯  ckn
         ⋮         ⋮         ⋮
         cn1  ⋯  cnk  ⋯  cnn ],   (6.3)

each eigenvalue λ satisfies

   |λ − ckk| ≤ Σ_{j≠k} |ckj|   and   |λ − ckk| ≤ Σ_{j≠k} |cjk|.

In other words, the eigenvalues lie in a cluster of circles with centres ckk and radii equal to the sums of the absolute values of the rest of the elements in the corresponding row or column. These circles are referred to as Gershgorin circles, and the two inequalities define the row and column Gershgorin circles, respectively. In fact, a smaller radius can be obtained with

   |λ − ckk| ≤ min( Σ_{j≠k} |ckj|, Σ_{j≠k} |cjk| ).

Assume that at a frequency ω, the frequency response of the forward path is

   Q(jω) = [ q11(jω)  ⋯  q1p(jω)
             ⋮              ⋮
             qq1(jω)  ⋯  qqp(jω) ],   (6.4)

where each qij(jω) is a complex quantity. For frequency response data, the envelope of the series of Gershgorin circles over all frequencies forms the Gershgorin bands.

Definition 6.6. If none of the Gershgorin bands encircles the origin, the system is referred to as a diagonal dominant system.

For a selected frequency vector w and a known multivariable FOTF model, the overload function mfrd() is written such that frequency response data in the data type of the MFD Toolbox are returned.

function H1=mfrd(G,w)
H=bode(G,w); h=H.ResponseData; H1=[];
for i=1:length(w); H1=[H1; h(:,:,i)]; end

The syntax of the function is H = mfrd(G, w), where the returned variable H is a matrix composed of frequency response data, the essential data type in the MFD Toolbox. The functions plotnyq() and fgersh() in the MFD Toolbox can be used to

draw the Nyquist plot and Gershgorin circles. However, the syntaxes of these procedures are quite complicated. For systems with equal numbers of input and output channels, a new function gershgorin(H) can be written to draw the Nyquist plot with Gershgorin bands; the contents of the function are as follows.

function gershgorin(H,key), if nargin==1, key=0; end
t=[0:.1:2*pi,2*pi]'; [nr,nc]=size(H); nw=nr/nc; ii=1:nc;
for i=1:nc, circles{i}=[]; end
for k=1:nw   % evaluate Nyquist/inverse Nyquist array for the frequencies
   G=H((k-1)*nc+1:k*nc,:);
   if key==1, G=inv(G); end, H1(:,:,k)=G;
   for j=1:nc, ij=find(ii~=j);
      v=min([sum(abs(G(ij,j))),sum(abs(G(j,ij)))]);
      x0=real(G(j,j)); y0=imag(G(j,j));
      r=sum(abs(v));   % compute Gershgorin circles radius
      circles{j}=[circles{j} x0+r*cos(t)+sqrt(-1)*(y0+r*sin(t))];
end, end
hold off; nyquist(tf(zeros(nc)),'w'); hold on;
h=get(gcf,'child'); h0=h(end:-1:2);
for i=ii, for j=ii, axes(h0((j-1)*nc+i)); NN=H1(i,j,:); NN=NN(:);
   if i==j,   % diagonal plots with Gershgorin circles
      cc=circles{i}(:); x1=min(real(cc)); x2=max(real(cc));
      y1=min(imag(cc)); y2=max(imag(cc));
      plot(NN), plot(circles{i},'g'), plot(0,0,'+'), axis([x1,x2,y1,y2])
   else, plot(NN), end   % non-diagonal elements
end, end
if key==1, title('Inverse Nyquist Diagram'); end

The input argument H is the frequency response data. If the inverse Nyquist array is to be plotted, the function can be called with the syntax gershgorin(H, 1). Compared with the function plotnyq() in the MFD Toolbox, the function gershgorin() is much simpler. Besides, the minimum of the radii obtained from the row and column Gershgorin circles is used. The axes system of the Nyquist plot in the Control System Toolbox is borrowed; therefore, manual rescaling of the axes is often required.

Example 6.14. A multivariable fractional-order transfer function matrix G(s) is given by
   G(s) = [ e^{−0.5s}/(1.5s^1.2 + 0.7)   2e^{−0.2s}/(1.2s^1.1 + 1)
            3/(0.7s^1.3 + 1.5)           2e^{−0.2s}/(1.3s^1.1 + 0.6) ].
Draw the frequency domain response of the system and analyse its diagonal dominance.

[Figure: 2×2 Nyquist array with Gershgorin bands; the bands around the diagonal elements encircle the origin.]

Fig. 6.5: Nyquist array with Gershgorin bands.

Solution. The multivariable model can be entered with the following statements, and the Nyquist array with Gershgorin bands can be obtained as shown in Figure 6.5. It can be seen that the origins in the Nyquist plots are encircled by the Gershgorin bands, which means that the original model is not diagonal dominant.
>> g1=fotf([1.5 0.7],[1.2 0],1,0,0.5);
   g2=fotf([1.2 1],[1.1 0],2,0,0.2);
   g3=fotf([0.7 1.5],[1.3 0],3,0);
   g4=fotf([1.3 0.6],[1.1 0],2,0,0.2);
   G=[g1,g2; g3,g4]; w=logspace(0,1);
   H=mfrd(G,w); gershgorin(H)
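The dominance test that gershgorin() performs graphically can be cross-checked numerically at any single frequency: compute the minimum row/column Gershgorin radius for each diagonal entry of the frequency-response matrix and ask whether the disc centred at that entry excludes the origin. A toolbox-free Python sketch (function names are ad hoc):

```python
def gershgorin_radii(C):
    """Minimum of the row and column Gershgorin radii for each diagonal
    entry of a square complex matrix C (one frequency-response sample)."""
    n = len(C)
    radii = []
    for k in range(n):
        row = sum(abs(C[k][j]) for j in range(n) if j != k)
        col = sum(abs(C[j][k]) for j in range(n) if j != k)
        radii.append(min(row, col))
    return radii

def diagonal_dominant(C):
    """True if the Gershgorin disc around every diagonal entry excludes the origin."""
    return all(abs(C[k][k]) > r for k, r in enumerate(gershgorin_radii(C)))
```

Applying the test to every matrix of a frequency sweep reproduces the band criterion of Definition 6.6.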

6.4.4 Frequency response evaluation under complicated structures

If time delays exist in the blocks of a multivariable FOTF matrix, it might be difficult to find the overall FOTF model under complicated connections, since when the delays of different blocks are not compatible, the connection cannot be expressed in FOTF matrix form. Alternative ways should be used to evaluate the frequency response under complicated connections. One immediate way is to obtain the frequency response of each block first, and then use the relevant functions in the MFD Toolbox to compute the frequency responses of the complicated system. Suppose the frequency responses of two blocks G1(s) and G2(s) are obtained respectively as H1 and H2. Then the following functions can be used to compute frequency responses for complicated system structures.

Comments 6.9 (Different frequency response functions). We remark the following.
(1) Series connection of two blocks G1(s) and G2(s) can be evaluated with the function H = fmulf(w, H2, H1). If one of the blocks is a constant matrix K, then H = fmul(w, H2, K) or H = fmul(w, K, H1) can be used to evaluate the frequency responses of the series connection. Pay attention to the order of the blocks!

(2) Parallel connection of two blocks G1(s) and G2(s) can be evaluated with the function H = faddf(w, H1, H2). If one of the blocks is given as a constant matrix K, then H = fadd(w, K, H1) can be used to evaluate the frequency responses of blocks in parallel connection.
(3) MFD Toolbox functions can also be used to evaluate frequency responses with constant delay matrices. The function H = fdly(w, H1, D) can be used, where D is a constant delay matrix. With the MFD Toolbox, the function H = finv(w, H1) can also be used to compute inverse Nyquist arrays. However, note that this function may conflict with a function of the same name in the Statistics Toolbox. Therefore, make sure that the MFD Toolbox has higher priority in the search path than the Statistics Toolbox.
(4) The frequency response data of gij(s) can be extracted from H with the function h = fget(H, [i, j]).

Example 6.15. For a given multivariable plant model
   G(s) = [ 1/(1.35s^1.2 + 2.3s^0.9 + 1)    2/(4.13s^0.7 + 1)
            1/(0.52s^1.5 + 2.03s^0.7 + 1)   −1/(3.8s^0.8 + 1) ],
with the pre-compensation matrices
   Kp = [ 1/3, 2/3; 1/3, −1/3 ],   Kd(s) = [ 1/(2.5s + 1), 0; 0, 1/(3.5s + 1) ],

assess the diagonal dominance of the original and compensated models.
Solution. The plant model can be entered first, and the Nyquist array with Gershgorin bands is obtained as shown in Figure 6.6. It is seen that the Gershgorin bands encircle the origin with very large radii, which means that the multivariable plant is seriously coupled.
>> s=fotf('s'); g1=1/(1.35*s^1.2+2.3*s^0.9+1); g2=2/(4.13*s^0.7+1);
   g3=1/(0.52*s^1.5+2.03*s^0.7+1); g4=-1/(3.8*s^0.8+1);
   G0=[g1,g2; g3,g4]; w=logspace(-1,0); H=mfrd(G0,w); gershgorin(H)
With the compensating matrices entered, the Nyquist array with Gershgorin bands of the compensated model can be obtained directly, as shown in Figure 6.7. It can be seen that the compensated system is diagonal dominant; therefore, fractional-order PID controllers can be designed individually for each channel.
>> Kp=[1/3 2/3; 1/3 -1/3]; s=tf('s');
   Kd=[1/(2.5*s+1), 0; 0, 1/(3.5*s+1)];
   K1=G0*fotf(Kd)*fotf(Kp); H2=mfrd(K1,w); gershgorin(H2)

[Figure: 2×2 Nyquist array with Gershgorin bands of the plant model.]

Fig. 6.6: Nyquist array with Gershgorin bands.

[Figure: 2×2 Nyquist array with Gershgorin bands of the compensated model.]

Fig. 6.7: Nyquist array with Gershgorin bands of compensated system.

Example 6.16. Consider the fractional-order plant model in Example 6.14. It is known that the plant is not diagonal dominant. Assume that there is a pre-compensation matrix
   Kp = [ 0.7042, −0.7100; −1, 0.0038 ].
Check whether the compensated system is diagonal dominant or not.
Solution. The multivariable plant model G(s) and the compensator matrix Kp can be entered first. The frequency response of G(s) can be evaluated over the frequency range (1 rad/s, 10 rad/s); then the frequency response of the compensated model G(s)Kp can be evaluated. The Nyquist array with Gershgorin bands can be drawn, as shown in Figure 6.8.
>> g1=fotf([1.5 0.7],[1.2 0],1,0,0.5);
   g2=fotf([1.2 1],[1.1 0],2,0,0.2);

[Figure: 2×2 Nyquist array of the compensated system; the Gershgorin bands do not encircle the origin.]

Fig. 6.8: Nyquist array of the compensated system.

g3=fotf([0.7 1.5],[1.3 0],3,0); g4=fotf([1.3 0.6],[1.1 0],2,0,0.2); G=[g1,g2; g3,g4]; w=logspace(0,1); H=mfrd(G,w); Kp=[0.7042,-0.71; -1,0.0038]; H1=fmul(w,H,Kp); gershgorin(H1) It can be seen that, since the Gershgorin bands do not encircle the origin, the compensated system is diagonal dominant.

6.4.5 Singular value plots in multivariable systems

The frequency response of a single-variable system can be described by a Bode diagram. For multivariable systems, since the frequency response at each frequency ω is a complex matrix, the singular values σ1(ω), σ2(ω), ..., σm(ω) of the matrices can be extracted, and the trajectories of these singular values can be joined so that a multivariable "Bode diagram" can be drawn. This kind of diagram is referred to as the singular value plot of a multivariable system. In the Robust Control Toolbox [80], the function sigma() is provided to draw singular value plots, and an overload function with the same name can be written for MIMO FOTF models.

function sigma(G,w)
if nargin==1, w=logspace(-4,4); end
H=bode(G,w); subplot(111); h1=[]; H1=H;
h=H.ResponseData; [n,m,k]=size(h);
for i=1:k, h1=[h1, svd(h(:,:,i))]; end
for i=1:min([n,m]), h2(1,1,:)=h1(i,:).';
   H1.ResponseData=h2; bodemag(H1), hold on
end, hold off

[Figure: singular value plot; maximum and minimum singular value curves, magnitude (dB) versus frequency (rad/s).]

Fig. 6.9: Singular value plot of the system.

Example 6.17. Draw the singular value plot for the multivariable system shown in Example 6.14.
Solution. The multivariable system can be specified in MATLAB, and the singular value plot of the system can be obtained, as shown in Figure 6.9.
>> g1=fotf([1.5 0.7],[1.2 0],1,0,0.5);
   g2=fotf([1.2 1],[1.1 0],2,0,0.2);
   g4=fotf([1.3 0.6],[1.1 0],2,0,0.2);
   g3=fotf([0.7 1.5],[1.3 0],3,0); G=[g1,g2; g3,g4]; sigma(G)
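The core of sigma() is simply a singular value decomposition of each frequency-response sample. A minimal NumPy equivalent (an illustration, not the toolbox code; function names are ad hoc):

```python
import numpy as np

def singular_values(H):
    """Singular values (descending) of one complex frequency-response matrix."""
    return np.linalg.svd(np.asarray(H, dtype=complex), compute_uv=False)

def sigma_plot_data(responses):
    """Singular-value trajectories for a list of frequency-response samples;
    plotting each trajectory over frequency gives the singular value plot."""
    return [singular_values(H) for H in responses]
```

Each list entry of sigma_plot_data() gives one point of the maximum and minimum singular value curves seen in Figure 6.9.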

6.5 Time domain analysis

In the previous sections, analytical solutions of linear fractional-order systems were explored. It appears that the systems for which analytical solutions exist are rare. Besides, in practical applications, analytical solutions are not always necessary. In such cases, numerical solution techniques can be adopted instead to evaluate the system responses, and the response plots can also be obtained.

6.5.1 Step and impulse responses

Based on the closed-form solutions of linear fractional-order differential equations and their MATLAB implementation in the function fode_sol9(), an overload step response function step(), an impulse response function impulse(), and an arbitrary-input time response function lsim() for fractional-order transfer functions can easily be written. The overload step response evaluation function step() can be developed as follows; it behaves similarly to the function step() in the Control System Toolbox.


function Y=step(G,t,key)
[n,m]=size(G); M=tf(zeros(n,m)); nt=length(t);
if nargin==1, t=[0:0.1:10]'; elseif nt==1, t=0:t/100:t; end
if nargout==0, t0=t(1); t1=t(end);
if nargin ...
   ii=find(t>=T); lz=zeros(1,ii(1)-1); y=[lz, y1(1:end-length(lz))];

According to the presentation in Example 4.6, the impulse response of a system G is equivalent to the step response of the model sG(s). Therefore, an overload impulse function can be written as follows.

function y=impulse(G,t), G1=G*fotf('s');
if nargin==1, t=[0:0.1:10]';
elseif length(t)==1, t=0:t/100:t; end
if nargout==0, step(G1,t,1); else, y=step(G1,t,1); end

The syntaxes of the two functions are y = step(G, t) and y = impulse(G, t).

Comments 6.10 (Step and impulse responses). We remark the following.
(1) Although three input arguments are allowed in the function step(), only the two arguments G and t should be used; the third one is reserved.
(2) The argument t can be omitted, or given as a scalar. In the latter case, t is the terminal time, and 101 samples are taken on (0, t). The default is the interval (0, 10) with 101 samples.
(3) For an n × m multivariable system G, the step and impulse responses are arranged as follows: the graphics window is divided into n × m subplots, with the (i, j)th subplot showing the response of the ith output driven solely by the jth input.
(4) The high-precision algorithm with p = 5 is embedded in the functions.
(5) If there is no returned variable, the responses are plotted automatically.
(6) The axes system of the time responses in the Control System Toolbox is borrowed, and manual rescaling of the axes is often required.

[Figure: step and impulse response curves of the system, amplitude versus time over (0, 30) s.]

Fig. 6.10: Step and impulse responses.

Example 6.18. Draw the step and impulse responses of the FOTF model studied in Example 6.9, with
   G(s) = (−2s^0.63 − 4)/(2s^3.501 + 3.8s^2.42 + 2.6s^1.798 + 2.5s^1.31 + 1.5).

Solution. The fractional-order transfer function model can be entered first; then the step and impulse responses can be obtained as shown in Figure 6.10.
>> b=[-2,-4]; nb=[0.63,0];
   a=[2,3.8,2.6,2.5,1.5]; na=[3.501,2.42,1.798,1.31,0];
   G=fotf(a,na,b,nb); t=0:0.01:30;
   step(G,t); y=impulse(G,t); line(t,y);
If the model G(s) is regarded as an open-loop model, the step response of the closed-loop model under the unity negative feedback structure can be obtained as shown in Figure 6.11.
>> G1=feedback(G,1); step(G1,10)

[Figure: closed-loop step response; the amplitude diverges (axis range 0 to −400) over (0, 10) s.]

Fig. 6.11: Closed-loop step response.


It can be seen that the closed-loop system is unstable, which agrees with the conclusion in Example 6.13.

Example 6.19. Consider the models in Example 6.15. Since the compensated system is diagonal dominant, individual channel design is possible. Suppose the two PIλDμ controllers are designed as c1(s) = 11.6 + 2.89s^−0.9 + 15s^0.8 and c2(s) = 13 + 2.82s^−0.9 + 15s^0.8. Draw the step response of the closed-loop system.
Solution. The step responses of the open-loop plant model can be obtained with
>> s=fotf('s'); g1=1/(1.35*s^1.2+2.3*s^0.9+1); g2=2/(4.13*s^0.7+1);
   g3=1/(0.52*s^1.5+2.03*s^0.7+1); g4=-1/(3.8*s^0.8+1);
   G0=[g1,g2; g3,g4]; step(G0,10)
as shown in Figure 6.12.

[Figure: 2×2 open-loop step responses of the plant model.]

Fig. 6.12: Open-loop step response.

It can be seen that when the first input alone acts upon the system, the two outputs respond on a similar scale; when the second input acts alone, the same happens. This means that there is serious coupling between the input/output pairs, a phenomenon consistent with the conclusion in Example 6.15 that the plant model is not diagonal dominant. When the two compensators Kd(s) and Kp are introduced, the open-loop step responses of the compensated system are as shown in Figure 6.13.
>> Kp=[1/3 2/3; 1/3 -1/3]; Kd=[1/(2.5*s+1), 0; 0, 1/(3.5*s+1)];
   G1=G0*Kp*Kd; step(G1,10)

[Figure: 2×2 step responses of the compensated system; the off-diagonal responses are much smaller than the diagonal ones.]

Fig. 6.13: Step responses of the compensated system.

It can be seen that the off-diagonal responses are much smaller in scale than the diagonal ones, which again agrees with the conclusion that the compensated system is diagonally dominant. The closed-loop model can be obtained with the commensurate-order conversion approach; it cannot be obtained with the direct method under the FOTF framework. Unfortunately, the function step() yields unstable results, although the closed-loop system is stable. For this kind of problem, the functions in the FOSS framework in Chapter 7 and the Simulink-based simulation in Chapter 8 should be used instead.
>> C=diag([11.6+2.89*s^-0.9+15*s^0.8,13+2.82*s^-0.9+15*s^0.8]);
   [G1,a]=fotf2cotf(G0); Kd1=fotf2cotf(Kd,a); C1=fotf2cotf(C,a),
   G2=feedback(G1*Kp*Kd1,eye(2)); G3=minreal(G2,1e-10), G4=cotf2fotf(G3,a);

6.5.2 Time responses under arbitrary input signals
If the input signals are known, the samples u at the time instants t should be provided first, and the output signals can then be evaluated. The overload function lsim() is written as follows.
function y=lsim(G,u,t), [n,m]=size(G); M=tf(zeros(n,m));
   t0=t(1); t1=t(end); [nu,mu]=size(u);
   if nu==m & mu==length(t), u=u.'; end
   if nargout==0, lsim(M,'w',zeros(size(u)),t); end
   for i=1:n, y1=0;
      for j=1:m, g=G(i,j); uu=u(:,j);
         y2=fode_sol9(g.den,g.nd,g.num,g.nn,uu,t,5);
         ii=find(t>=g.ioDelay); lz=zeros(1,ii(1)-1);
         y2=[lz, y2(1:end-length(lz))]; y(:,i)=y1+y2;
   end, end
   if nargout==0, khold=ishold; hold on
      h=get(gcf,'child'); h0=h(end:-1:2);
      for i=1:n, axes(h0(i));
         plot(t,y(:,i),t,u,'--'), xlim([t0,t1])
      end, if khold==0, hold off, end
   end
The syntax of the function is y = lsim(G, u, t), where G is the FOTF model. We tried to make the syntaxes of the overload functions as close as possible to those in the Control System Toolbox.
Comments 6.11 (Arbitrary input time domain responses). We remark the following.
(1) The row number of the matrix u should be equal to the length of the vector t. Neither argument can be omitted from the function call.
(2) The vector t is an evenly spaced time vector, and u(:, i) is a vector of the samples of the ith input.
(3) If no output argument is returned, for an n × m multivariable system G, the graphics window is divided into n × 1 subplots, the ith plot showing the ith output when all the input signals act at the same time. All the inputs are also displayed in the plots in dashed lines.
Example 6.20. Consider the following fractional-order differential equation:
D_t^3.5 y(t) + 8D_t^3.1 y(t) + 26D_t^2.3 y(t) + 73D_t^1.2 y(t) + 90D_t^0.5 y(t) = 90 sin t².
Find the numerical solution of the equation.
Solution. The fractional-order transfer function corresponding to the equation can be established first, where
G(s) = 90/(s^3.5 + 8s^3.1 + 26s^2.3 + 73s^1.2 + 90s^0.5),
and the input is u(t) = sin t². The following statements can be used to draw the time response of the output y(t) as shown in Figure 6.14, where the solid curve is the output while the dashed curve is the input.
>> a=[1,8,26,73,90]; n=[3.5,3.1,2.3,1.2,0.5]; G=fotf(a,n,90,0);
   t=0:0.05:10; u=sin(t.^2); lsim(G,u',t);
Similar to other computation problems in MATLAB, the results obtained should be validated. Smaller step-sizes can be tried to see whether the same results are obtained. If the results with different step-sizes are the same, the results can be accepted; otherwise the step-size should be reduced again.
For this example, the results are validated.

Fig. 6.14: Input and output signals of the system.
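The low-level solver fode_sol9() called by lsim() is not listed here, but the Grünwald–Letnikov idea behind such solvers can be sketched in a few lines. The Python sketch below is illustrative only (the function name and interface are assumptions, not part of the FOTF Toolbox): it solves an equation of the form a1·D^{n1}y(t) + ... + ak·D^{nk}y(t) = u(t) under zero initial conditions on a uniform grid of step h, using the recursive binomial weights of the Grünwald–Letnikov difference.

```python
import math

def fode_gl(a, na, u, h, n_steps):
    """Solve sum_i a[i]*D^{na[i]} y(t) = u(t), zero initial conditions,
    by the Grunwald-Letnikov difference D^g y(t_k) ~ h^-g sum_j w_j y_{k-j}."""
    # binomial weights w_0 = 1, w_j = (1 - (g+1)/j) * w_{j-1} for each order g
    w = []
    for g in na:
        wj = [1.0]
        for j in range(1, n_steps + 1):
            wj.append(wj[-1] * (1.0 - (g + 1.0) / j))
        w.append(wj)
    b0 = sum(ai * h ** (-gi) for ai, gi in zip(a, na))  # coefficient of y_k
    y = [0.0]
    for k in range(1, n_steps + 1):
        s = u(k * h)
        for ai, gi, wj in zip(a, na, w):
            s -= ai * h ** (-gi) * sum(wj[j] * y[k - j] for j in range(1, k + 1))
        y.append(s / b0)
    return y
```

For integer orders the scheme collapses to the implicit Euler method, which gives a quick consistency check; for fractional orders the step-size study recommended above applies unchanged.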

6.6 Root locus for commensurate-order systems
Root locus, proposed by Walter R. Evans [19], is an effective tool in the study of linear control systems. The open-loop information is used to assess the behaviour of the closed-loop system under a unity negative feedback structure.
Definition 6.7. Consider an SISO system model KG(s) in the forward path under a unity negative feedback structure. If the value of the gain K changes, the closed-loop poles, which are the solutions of the characteristic equation 1 + KG(s) = 0, also change. For a changing K, the trajectories of the closed-loop poles are referred to as the root loci of the system.
It might be a complicated task, if not an impossible one, to draw the root locus of an ordinary fractional-order system; therefore, the topic here is restricted to commensurate-order systems, with approximate extensions to non-commensurate-order ones. Another restriction of root locus analysis is that the system must be an SISO one. For commensurate-order systems, if the base order is α, we can let λ = s^α, and the original system can be written as the integer-order model of λ, denoted by G1(λ). The function rlocus() in the Control System Toolbox can be used to draw the root locus of the integer-order model G1(λ), and the stability boundaries ±απ/2 are superimposed on the root locus. Based on this idea, an overload function rlocus() can be designed, with the syntax rlocus(G), where G is an FOTF object.
function rlocus(G)
   if prod(size(G))==1
      [G1,alpha]=fotf2cotf(G); rlocus(G1), xm=xlim; if alpha
>> G=fotf([1 10 35 50 24],0.7*[5:-1:1],1,0); rlocus(G)
The direct equation solution function more_sols() can also be used to check the stability when K = 370. Also, please ignore the information displayed in the box, since

Fig. 6.15: Root locus of the system with critical gain.

it is drawn upon the integer-order standard, which is not relevant to fractional-order systems. In this case, the characteristic equation becomes
c(s) = s^3.5 + 10s^2.8 + 35s^2.1 + 50s^1.4 + 24s^0.7 + 370 = 0,
whose solutions can be found as follows.
>> f=@(s)s^3.5+10*s^2.8+35*s^2.1+50*s^1.4+24*s^0.7+370;
   more_sols(f,zeros(1,1,0),-10+10i)
There is only one pair of complex conjugate solutions, 0.0058 ± 2.5883j, which lies in the right half of the s plane. Therefore, the closed-loop system is unstable.
Example 6.22. Consider the model in Example 6.9, where
G(s) = (−2s^0.63 − 4)/(2s^3.501 + 3.8s^2.42 + 2.6s^1.798 + 2.5s^1.31 + 1.5).
Find the critical gain with the root locus method.
Solution. If λ = s^0.001 is selected, then a 3,501st-order model is obtained. It is certain that the root locus cannot be drawn for such a high-order system, so approximations should be made first. For instance, by approximating s^3.501 by s^3.5, s^2.42 by s^2.4, s^1.798 by s^1.8, s^1.31 by s^1.3 and s^0.63 by s^0.6, the root locus can be obtained with
>> b=[-2 -4]; nb=[0.6 0]; a=[2 3.8 2.6 2.5 1.5];
   na=[3.5 2.4 1.8 1.3 0]; G1=fotf(a,na,b,nb); rlocus(G1)
and the manually zoomed root locus is shown in Figure 6.16, from which it can be found that the critical gain is K = 0.320.

Fig. 6.16: Root locus with critical gain.

Fig. 6.17: Oscillation with almost identical magnitude.

Applying a slightly smaller gain K = 0.319 to the original system, the closed-loop step response can be obtained as shown in Figure 6.17.
>> G1.nn=[0.63 0]; G1.nd=[3.501 2.42 1.798 1.31 0];
   K=0.319; G=feedback(K*G1,1); step(G,200)
It can be seen that the magnitude of the oscillation remains constant, which means that the critical gain obtained is accurate. Also, the closed-loop characteristic equation can be formulated as
c(s) = 2s^3.501 + 3.8s^2.42 + 2.6s^1.798 + 2.5s^1.31 + 1.5 + K(−2s^0.63 − 4) = 0.
All the poles of the closed-loop system at the assigned K = 0.319 can be obtained easily with
>> K=0.319; f=@(s)2*s^3.501+3.8*s^2.42+2.6*s^1.798+...
   2.5*s^1.31+1.5+K*(-2*s^0.63-4);
   more_sols(f,zeros(1,1,0),10+10i)
and it can be seen that there are two pairs of complex conjugate solutions of the characteristic equation, −1.2728 ± 1.4529j and 0.0012 ± 0.1152j, of which the latter pair is the unstable one.
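The λ-plane angle test used throughout this section is easy to automate outside MATLAB as well. The sketch below is illustrative Python (the function names are assumptions, not toolbox functions): for Example 6.21 the commensurate characteristic polynomial is c1(λ) = λ^5 + 10λ^4 + 35λ^3 + 50λ^2 + 24λ + K with λ = s^0.7, its roots are found by the Durand–Kerner iteration, and the system is declared stable when every root satisfies |arg λi| > απ/2.

```python
import cmath, math

def poly_roots(coeffs, iters=500, tol=1e-12):
    """All roots of a polynomial (descending coefficients) via Durand-Kerner."""
    c = [complex(x) / coeffs[0] for x in coeffs]
    n = len(c) - 1
    def p(z):
        v = 0j
        for ci in c:
            v = v * z + ci          # Horner evaluation
        return v
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            d = 1 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    d *= r - s
            new.append(r - p(r) / d)
        shift = max(abs(x - y) for x, y in zip(new, roots))
        roots = new
        if shift < tol:
            break
    return roots

def is_stable(K, alpha=0.7):
    """Angle test: stable iff min |arg(lambda_i)| > alpha*pi/2."""
    roots = poly_roots([1, 10, 35, 50, 24, K])
    return min(abs(cmath.phase(r)) for r in roots) > alpha * math.pi / 2
```

For small gains all λ-roots stay near the negative real axis and the test passes; well above the critical gain K = 370 found from the root locus, a conjugate pair sits inside the instability sector and the test fails.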

7 State space modelling and analysis of linear fractional-order systems
It is known in conventional linear control systems theory that linear differential equations can be described by transfer functions and state space models, and both models are equally important in system analysis and design. In fractional-order systems, state space models can also be established. In this chapter, the modelling and analysis of fractional-order state space systems are mainly studied. In Section 7.1, the standard form of a fractional-order state space model is introduced. In Section 7.2, a devoted FOSS class is designed, which allows the input of standard multivariable fractional-order state space models and the manipulation of complicated connections of fractional-order systems. Sometimes, the FOSS structure is more efficient than its FOTF counterpart, since in the latter case complicated systems must be converted manually into the commensurate-order framework and the algorithms used are less reliable. In Section 7.3, stability, controllability and observability of fractional-order state space models are assessed with dedicated functions, and the H2 and H∞ norms of fractional-order state space models are evaluated. Analytical and numerical solutions of state transition matrices are also presented. In Section 7.4, time domain, frequency domain and root locus analysis of FOSS objects are implemented; all the analysis facilities are inherited from the FOTF objects discussed fully in Chapter 6. In Section 7.5, an extended fractional-order state space model is illustrated with an example; further, the fractional-order state space model is extended to describe some nonlinear systems, also illustrated with examples.

7.1 Standard representation of state space models
Standard fractional-order state space models are defined below, and most of this chapter is based on these definitions.
Definition 7.1. If the fractional-order system can be expressed as the commensurate-order transfer function G(λ) with base order α, the matrices (A, B, C, D) of the integer-order model G(λ) can be obtained, and the fractional-order state space representation of the system can be written as

   D^α x(t) = A x(t) + B u(t − T),
   y(t) = C x(t) + D u(t − T),                                    (7.1)

where T is the delay constant. For fractional-order systems with zero initial conditions, the Riemann–Liouville fractional-order derivative definition can be used. As pointed out in [60], in fractional-order systems, the system behaviour cannot be predicted solely from the state vector x(t0); the past information before t0 is also required. Therefore, it is more precise to refer to the states as "pseudo-states". In this book, we shall still use the term "states". For systems with nonzero initial conditions, the infinite-dimensional properties under Caputo derivatives are lost, and the computation of system responses may lead to incorrect results [60].
Definition 7.2. For better describing fractional-order state space models, especially in the case of improper models, i.e. when the numerator order is higher than the denominator order, a generalised state space model is defined as

   E D^α x(t) = A x(t) + B u(t − T),
   y(t) = C x(t) + D u(t − T),                                    (7.2)

where, if the system is improper, the descriptor matrix E is singular. For a proper model, according to the MATLAB convention, E is an empty matrix. For simplicity, the fractional-order state space model presented in Definition 7.1 is simply denoted as (A, B, C, D, α, T, E).

7.2 Descriptions of fractional-order state space models
7.2.1 Class design of FOSS
Like in the case of the FOTF class, a class for fractional-order state space models can be assigned as FOSS, with a folder @foss created first. The relevant fields of an FOSS class are shown in Table 7.1. In future versions of the FOTF Toolbox to be released, three fields, Gss, alpha and x0, may be used, with Gss storing the integer-order state space model. Some of the overload functions may also be modified accordingly.

Tab. 7.1: The fields for FOSS class.
  a, b, c, d: the state space matrices, similar to those used in integer-order systems
  ioDelay: time delay scalar, or matrix for multivariable systems
  alpha: base order of the system; for an extended FOSS model, the field alpha can also be a vector (this case will be dealt with in Section 7.5)
  E: descriptor matrix E in the generalised state space model
  x0: the initial state vector x0; if not known, it is given as an empty vector

The two necessary functions, foss.m and display.m, must be created in the folder. The function foss() can be designed as follows.
classdef foss
   properties, a, b, c, d, alpha, ioDelay, E, x0, end

7.2 Descriptions of fractional-order state space models | 223

methods function G=foss(a,b,c,d,alpha,L,E,x0) if nargin0, x0=G.x0; disp([’Initial state vector x0 = [’ num2str(x0(:).’),’]’]) end

The overload function order() is written to extract the order of the system, where the order is defined as the product of the number of states and the base order.
function n=order(G), n=length(G.a)*G.alpha;
Also, the sizes of the multivariable FOSS model can be extracted with the following overload function, where p and q are the numbers of inputs and outputs, respectively, while n returns the number of states.
function [p,q,n]=size(G)
   q=size(G.c,1); p=size(G.b,2); n=length(G.a);

7.2.2 Conversions between FOSS and FOTF objects
A single-input single-output FOSS to FOTF conversion function foss2fotf() can be written as follows.
function G1=foss2fotf(G), [n,m]=size(G); G1=fotf(zeros(n,m));
   G0=ss_extract(G); G0=tf(G0); key=length(G.alpha)>1; T=G.ioDelay;
   if isscalar(T), T=T*ones(n,m); end
   if key~=0, n0=G.alpha; n1=n0(end:-1:1); n2=0;
      for i=1:length(n1), n2(i+1)=n2(i)+n1(i); end
      n2=n2(end:-1:1);
   end
   for i=1:n, for j=1:m, g=G0(i,j); [num,den]=tfdata(g,'v');
      if key==0, n2=[(length(den)-1):-1:0]*G.alpha; end
      h=fotf(den,n2,num,n2,T(i,j)); h=simplify(h); G1(i,j)=h;
   end, end
A low-level function ss_extract() is written to extract the state space model from an FOSS object and return it as an integer-order state space (SS) object.
function [G1,alpha]=ss_extract(G)
   G1=ss(G.a,G.b,G.c,G.d); G1.E=G.E; alpha=G.alpha;
Alternatively, the function fotf2foss() can be used to convert an FOTF object into an equivalent FOSS object.
function G1=fotf2foss(G), [n,m]=size(G);
   [G2,alpha]=fotf2cotf(G); G2=minreal(G2); G2=ss(G2);
   for i=1:n, for j=1:m, g=G(i,j); T(i,j)=g.ioDelay; end, end
   G1=foss(G2.a,G2.b,G2.c,G2.d,alpha,T,G2.E);

7.2 Descriptions of fractional-order state space models | 225

Example 7.1. Consider the problem in Example 4.4, rewritten below. Find its fractional-order state space realisation:
D^1.2 y(t) + 5D^0.9 y(t) + 9D^0.6 y(t) + 7D^0.3 y(t) + 2y(t) = u(t).
Solution. Denote
x1(t) = y(t),  x2(t) = D^0.3 y(t),  x3(t) = D^0.6 y(t),  x4(t) = D^0.9 y(t).
It is easily found that
D^0.3 x1(t) = x2(t),  D^0.3 x2(t) = x3(t),  D^0.3 x3(t) = x4(t),
and finally, from the original equation, it can be seen that
D^0.3 x4(t) = −5x4(t) − 9x3(t) − 7x2(t) − 2x1(t) + u(t).
The matrix form of the state space representation of the original equation can be written as
D^0.3 x(t) = [0 1 0 0; 0 0 1 0; 0 0 0 1; −2 −7 −9 −5] x(t) + [0; 0; 0; 1] u(t),
y(t) = [1 0 0 0] x(t).
Alternatively, the fractional-order transfer function can be entered first, and then converted to a fractional-order state space model with the command foss() directly:
>> G=fotf([1 5 9 7 2],1.2:-0.3:0,1,0); G1=foss(G)
The state space representation of the system is
D^0.3 x(t) = [−5 −2.25 −0.875 −0.5; 4 0 0 0; 0 2 0 0; 0 0 0.5 0] x(t) + [0.5; 0; 0; 0] u(t),
y(t) = [0 0 0 0.5] x(t).
In fact, under the default state selection in the MATLAB Control System Toolbox, the actual states are
x4(t) = 2y(t),  x3(t) = 4D^0.3 y(t),  x2(t) = 2D^0.6 y(t),  x1(t) = D^0.9 y(t)/2.
It should be noted that, since there are many different selections of the state variables, the state space form of a system is not unique.
Example 7.2. Consider the multivariable transfer function matrix defined in Example 6.4. Convert it to a fractional-order state space model, and then convert it back to an FOTF matrix to see whether the original model can be restored.

Solution. The following commands can be used to enter the original FOTF matrix; then a 47 × 47 order state space model can be found, with a base order of α = 0.1. The FOSS model can then be converted back to the FOTF matrix.
>> g1=fotf([1.5 0.7],[1.2 0],1,0); g2=fotf([1.2 1],[1.1 0],2,0);
   g3=fotf([0.7 1.5],[1.3 0],3,0); g4=fotf([1.3 0.6],[1.1 0],2,0);
   G=[g1,g2; g3,g4]; G1=foss(G), order(G1), G2=fotf(G1), G2-G
It can be seen that the original model is restored successfully in G2, and the conversion error is very small for this example.

7.2.3 Model augmentation under different base orders
Suppose that an FOSS model is given by (A, B, C, D, α). When it is connected with another block with a base order of α/k, where k is a positive integer, it must first be converted equivalently into a model (A1, B1, C1, D1, α/k). This process is referred to as state augmentation. It is our task to find how to construct the matrices A1, B1, C1 and D1 from the original model. Assume that the original state vector x(t) is defined as x = [x1(t), x2(t), ..., xn(t)]^T. To complete the expected state augmentation, a new state vector z(t) should be selected such that z(t) = [z1(t), z2(t), ..., znk(t)]^T, where it is known that
z_{(i−1)k+1}(t) = x_i(t),  i = 1, 2, ..., n.
Now, consider the first k new state variables, selected as
z1(t) = x1(t),  z2(t) = D^(α/k) x1(t),  ...,  zk(t) = D^((k−1)α/k) x1(t).
The corresponding portion of the state space equation can be written as
D^(α/k) z1(t) = z2(t),
D^(α/k) z2(t) = z3(t),
...
D^(α/k) zk(t) = a11 z1(t) + a12 z_{k+1}(t) + ⋅⋅⋅ + a1n z_{(n−1)k+1}(t) + b1 u(t).
Therefore, the new matrices of the equation above are
A1 = [A11 ⋅⋅⋅ A1n; ⋮ ⋱ ⋮; An1 ⋅⋅⋅ Ann],  B1 = [B11; ⋮; Bn1],  C1 = [C11 ⋅⋅⋅ C1n],
where the diagonal blocks Aii and the off-diagonal blocks Aij are different:
Aii = [0 I_{(k−1)×(k−1)}; aii 0],  Aij = [0 0; aij 0],  Bi1 = [0; bi],  C1i = [ci 0].
A better and more reliable way of constructing the augmented FOSS model is as follows: convert it to an FOTF matrix model first, and augment each sub-transfer function; the integer-order state space model can then be constructed, from which the augmented fractional-order state space model is created. With MATLAB, the augmented model can be obtained with the following function, with the syntax G1 = coss_aug(G, k), where k is a positive integer, G is the original FOSS object, and G1 is the augmented FOSS object. As a special case, if α = 0, then G is a static gain, and it is passed to G1 directly. In this augmentation function, even if G is an improper model, it can still be manipulated.

7.2.4 Interconnection of FOSS blocks
If similar commands are tried on the model in Example 6.3, where there exist delays in different matrix elements, there is no corresponding state space model in the form of Definition 7.1; therefore, the FOSS model cannot be found. Besides, overload functions such as mtimes(), plus(), feedback() and uminus() are provided for model connection. In these functions, the base orders are unified first such that the models can be manipulated directly; then the facilities in the MATLAB Control System Toolbox can be used to connect them. Let us first consider the multiplication of two FOSS blocks. The following overload function can be written, where the common order is found first, and the augmented blocks are obtained under the same order. The series-connected state space model can then be obtained under the framework of high-efficiency integer-order state space models.
function G=mtimes(G1,G2)
   G1=foss(G1); G2=foss(G2);
   if length(G1.alpha)>1 | length(G2.alpha)>1, a=0.0001;


else, a=common_order(G1.alpha,G2.alpha); end if a==0, G=foss([],[],[],G1.d*G2.d,0); elseif a> s=fotf(’s’); G1=5/(s^0.2+3); G2=(2*s^0.6+3)/(s^1.2+2*s^0.6+5); G10=foss(G1); G20=foss(G2); G3=G10*G20, G1*G2, G4=fotf(G3)


The state space form of the system can be obtained as
D^0.2 x(t) = [−3 0 0 2 0 0 1.5; 0 0 0 −1 0 0 −1.25; 0 2 0 0 0 0 0; 0 0 1 0 0 0 0; 0 0 0 1 0 0 0; 0 0 0 0 1 0 0; 0 0 0 0 0 2 0] x(t) + [0; 1; 0; 0; 0; 0; 0] u(t),
y(t) = [2.5 0 0 0 0 0 0] x(t).
The resulting FOTF object is obtained as follows, and it is exactly the same as the one obtained under the FOTF framework:
G4(s) = (10s^0.6 + 15)/(s^1.4 + 3s^1.2 + 2s^0.8 + 6s^0.6 + 5s^0.2 + 15).

Similar to the new function mtimes(), another overload function plus() is written to implement the parallel connection of two FOSS objects.
function G=plus(G1,G2)
   G1=foss(G1); G2=foss(G2);
   if length(G1.alpha)>1 | length(G2.alpha)>1, a=0.0001;
   else, a=common_order(G1.alpha,G2.alpha); end
   if a==0, G=foss([],[],[],G1.d+G2.d,0);
   elseif a1 | length(G2.alpha)>1, a=0.0001;
   else, a=common_order(G1.alpha,G2.alpha); end
   if a==0, G=foss([],[],[],G1.d*inv(eye(size(G1.d))+G2.d*G1.d),0);
   elseif a
>> tic, s=fotf('s'); g1=1/(1.35*s^1.2+2.3*s^0.9+1);
   g2=2/(4.13*s^0.7+1); g3=1/(0.52*s^1.5+2.03*s^0.7+1);
   g4=-1/(3.8*s^0.8+1); G0=foss([g1,g2; g3,g4]);
   Kp=foss([1/3 2/3; 1/3 -1/3]);
   Kd=foss([1/(2.5*s+1), 0; 0, 1/(3.5*s+1)]);
   C=diag([11.6+2.89*s^-0.9+15*s^0.8,13+2.82*s^-0.9+15*s^0.8]);
   C=foss(C); H=foss(eye(2)); Ga=feedback(G0*Kp*Kd*C,H);
   Ga1=minreal(Ga); G1=simplify(fotf(Ga1),1e-5), toc
   step(G1)
The validated closed-loop step response of the system can be obtained with the function step(), as shown in Figure 7.1.
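All of these connection functions rely on common_order() to unify the base orders of the two blocks before augmentation. For rational orders this is simply the greatest common divisor of the orders viewed as fractions; a Python sketch of the idea (illustrative, not the toolbox implementation):

```python
from fractions import Fraction
from math import gcd

def common_order(*alphas):
    """Largest base order dividing every given order, e.g. 0.2 and 0.6 -> 1/5."""
    fracs = [Fraction(a).limit_denominator(1000) for a in alphas]
    den = 1
    for f in fracs:
        den = den * f.denominator // gcd(den, f.denominator)  # lcm of denominators
    g = 0
    for f in fracs:
        g = gcd(g, f.numerator * (den // f.denominator))      # gcd of numerators
    return Fraction(g, den)
```

In Example 7.3, for instance, the base orders 0.2 and 0.6 share the common base order 0.2, which is why a single augmentation step with k = 3 suffices before the integer-order series connection.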

7.3 Properties of fractional-order state space models
Stability, controllability and observability of fractional-order state space models are assessed with dedicated functions. Further, the H2 and H∞ norms of fractional-order state space models are evaluated. Analytical and numerical solutions of state transition matrices are also presented in this section.

7.3.1 Stability assessment
The definition of the stability assessment conditions is exactly the same as the one in Theorem 6.1. All the poles in λ = s^α of the FOSS model can be evaluated with the overload function
function p=eig(G), p=eig(G.a)
and the stability can be assessed with the overload function isstable(); the syntax of the function is exactly the same as that for the FOTF model.
function [K,alpha,apol]=isstable(G)
   p=eig(G); alpha=G.alpha;
   plot(real(p),imag(p),'x',0,0,'o')
   apol=min(abs(angle(p))); K=apol>alpha*pi/2;
   xm=xlim; if alpha
>> syms t x a E(x)
   A=[-2,0,-1,0; -1,-3,1,0; 2,1,1,1; 0,1,-2,-2];
   Phi=funmsym(A,E(x*t^a),x)
The mathematical form of the state transition matrix Φ(t) = [φij(t)], with Eα^(1) and Eα^(2) denoting the first and second derivatives of the Mittag-Leffler function Eα, is, entry by entry,
φ11 = Eα(−t^α) − t^(2α) Eα^(2)(−t^α)/2 − t^α Eα^(1)(−t^α),
φ21 = Eα(−3t^α) − Eα(−t^α) + t^(2α) Eα^(2)(−t^α)/2 + t^α Eα^(1)(−t^α),
φ31 = t^(2α) Eα^(2)(−t^α)/2 + 2t^α Eα^(1)(−t^α),
φ41 = Eα(−t^α) − Eα(−3t^α) − t^(2α) Eα^(2)(−t^α)/2 − 2t^α Eα^(1)(−t^α),
φ12 = −t^(2α) Eα^(2)(−t^α)/2,
φ22 = Eα(−3t^α) + t^(2α) Eα^(2)(−t^α)/2,
φ32 = t^(2α) Eα^(2)(−t^α)/2 + t^α Eα^(1)(−t^α),
φ42 = Eα(−t^α) − Eα(−3t^α) − t^(2α) Eα^(2)(−t^α)/2 − t^α Eα^(1)(−t^α),
φ13 = −t^(2α) Eα^(2)(−t^α)/2 − t^α Eα^(1)(−t^α),
φ23 = t^(2α) Eα^(2)(−t^α)/2 + t^α Eα^(1)(−t^α),
φ33 = Eα(−t^α) + t^(2α) Eα^(2)(−t^α)/2 + 2t^α Eα^(1)(−t^α),
φ43 = −t^(2α) Eα^(2)(−t^α)/2 − 2t^α Eα^(1)(−t^α),
φ14 = −t^(2α) Eα^(2)(−t^α)/2,
φ24 = t^(2α) Eα^(2)(−t^α)/2,
φ34 = t^(2α) Eα^(2)(−t^α)/2 + t^α Eα^(1)(−t^α),
φ44 = Eα(−t^α) − t^(2α) Eα^(2)(−t^α)/2 − t^α Eα^(1)(−t^α).

Comments 7.4 (States and state transition matrix, [42]). We remark the following.
(1) As indicated earlier, the states are in fact pseudo-states, since the response depends on the "history" from t0 up to the current time. Therefore, in order to determine the future behaviour of a fractional-order system, not only the value of the state at instant t, but also all the values of the state in the interval [t0, t] are needed.
(2) The matrix Φ(t) is not a usual state transition matrix as in integer-order systems; it really means Φ(t, t0). The standard state transition matrix property, Φ(t, t0) = Φ(t, τ)Φ(τ, t0) for t0 < τ < t, is not satisfied here. It is better referred to as a pseudo-state transition matrix.
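The entries of Φ(t) above are built from the Mittag-Leffler function Eα and its derivatives. For moderate arguments, Eα(z) can be evaluated directly from its defining series; a truncated-series sketch in Python (illustrative only, and not suitable for large |z|, where partial terms overflow before they can cancel):

```python
import math

def mittag_leffler(alpha, z, tol=1e-14, max_terms=200):
    """E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1), truncated."""
    total, k = 0.0, 0
    while k < max_terms:
        g = alpha * k + 1
        if g > 170:            # math.gamma overflows in double precision here
            break
        term = z ** k / math.gamma(g)
        total += term
        if abs(term) < tol:
            break
        k += 1
    return total
```

The special cases E1(z) = e^z and E2(−t²) = cos t provide quick sanity checks of any implementation.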

7.3 Properties of fractional-order state space models | 235

7.3.3 Controllability and observability
Controllability and observability are useful concepts in ordinary control system theory. Similar concepts are also available for linear fractional-order state space models [42].
Definition 7.4. A system is fully controllable if it is possible to establish an unrestricted control vector which can lead the system from an initial state x(t0) to another final state x(tn) in a finite time t0 ≤ t ≤ tn.
Definition 7.5. A system is fully observable if any state x(tn) can be determined from the observation of y(t) and the knowledge of the input u(t) in a finite interval of time t0 ≤ t ≤ tn.
Theorem 7.3. As in integer-order systems, if the testing matrix Tc = [B, AB, A^2 B, ..., A^(n−1) B] satisfies rank(Tc) = n, the system is fully controllable. If the testing matrix To = [C^T, A^T C^T, (A^T)^2 C^T, ..., (A^T)^(n−1) C^T] satisfies rank(To) = n, the system is fully observable.

The overload function Tc = ctrb(G) can be written to compute the controllability matrix Tc of the FOSS object G. If the matrix is of full rank, the system is fully controllable. function Tc=ctrb(G), Tc=ctrb(G.a,G.b); Another overload function To = obsv(G) can be written to compute the observability matrix To of the FOSS object G. If the matrix is of full rank, the system is fully observable. function To=obsv(G), To=obsv(G.a,G.c);
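The rank test of Theorem 7.3 needs no toolbox; plain Gaussian elimination is enough. The sketch below is illustrative Python (the function names are assumptions, not the MATLAB overloads): it builds the controllability matrix and estimates its rank with a magnitude-relative pivot tolerance. Applied to the (A, B) pair of Example 7.6 in the next subsection, it reports rank 3, i.e. one uncontrollable mode.

```python
def ctrb_matrix(A, B):
    """Controllability matrix [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = len(A)
    blocks = [[row[:] for row in B]]
    for _ in range(n - 1):
        prev = blocks[-1]
        blocks.append([[sum(A[i][k] * prev[k][j] for k in range(n))
                        for j in range(len(B[0]))] for i in range(n)])
    return [sum((blk[i] for blk in blocks), []) for i in range(n)]

def mat_rank(M, rel_tol=1e-9):
    """Numerical rank by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    tol = rel_tol * max(max(abs(x) for x in row) for row in M)
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) <= tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r
```

The observability test is the same computation applied to (A^T, C^T).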

7.3.4 Controllable and observable staircase forms
For partially controllable systems, a staircase decomposition can be applied: a nonsingular matrix T can be constructed so that the original system (A, B, C, D) is decomposed as
Ac = [Âc̄ 0; Â21 Âc],  Bc = [0; B̂c],  Cc = [Ĉc̄ Ĉc],
and such a case is referred to as the controllable staircase form. The uncontrollable subspace (Âc̄, 0, Ĉc̄) can be separated directly from the controllable subspace (Âc, B̂c, Ĉc). It is complicated to construct such a transformation matrix by hand; the function ctrbf() can be used directly to get the staircase form and the transformation matrix Tc, with [Ac, Bc, Cc, Tc] = ctrbf(A, B, C).

Example 7.6. For the fractional-order state space model
D^0.4 x(t) = [−2.2 −0.7 1.5 −1; 0.2 −6.3 6 −1.5; 0.6 −0.9 −2 −0.5; 1.4 −0.1 −1 −3.5] x(t) + [6 9; 4 6; 4 4; 8 4] u(t),
check its controllability and find its controllable staircase form.
Solution. Since the output equation is not given, assume that C = [1, 2, 3, 4]. The following statements can be used to find the controllability and the staircase form as well.
>> A=[-2.2,-0.7,1.5,-1; 0.2,-6.3,6,-1.5;...
      0.6,-0.9,-2,-0.5; 1.4,-0.1,-1,-3.5];
   B=[6,9; 4,6; 4,4; 8,4]; rank(ctrb(A,B))
   C=[1 2 3 4]; [Ac,Bc,Cc,Tc]=ctrbf(A,B,C);
It can be seen that the rank of the testing matrix is 3, which means that the system is not fully controllable, with one uncontrollable mode. The controllable staircase form of the system can be written as
D^0.4 z(t) = [−4 0 0 0; −4.638 −3.823 −0.5145 −0.127; −3.637 0.1827 −3.492 −0.1215; −4.114 −1.888 1.275 −2.685] z(t) + [0 0; 0 0; 2.754 −2.575; −11.15 −11.93] u(t),
y(t) = [−1.098, 1.379, −2.363, 4.616] z(t),
with the transformation matrix
Tc = [0.091 0.32 −0.914 0.228; −0.588 0.781 0.202 −0.05; 0.467 0.311 −0.05 −0.825; 0.653 0.435 0.346 0.513].
The observable staircase form can be defined and computed similarly, with the function obsvf(), under the syntax [Ao, Bo, Co, To] = obsvf(A, B, C).

7.3.5 Norm measures The norms of FOSS objects can be evaluated by converting the models first to FOTF objects. Then use the FOTF framework to complete the computations. The evaluation function is given below. function n=norm(G,varargin), n=norm(fotf(G),varargin{:});

7.4 Analysis of fractional-order state space models | 237

7.4 Analysis of fractional-order state space models The time domain, frequency domain and root locus of FOSS objects can all be implemented based on the overload functions written for FOTF objects. Therefore, the following overload functions can be written, which inherit all the properties in the FOTF objects. (1) Time domain analysis functions. The time domain responses can be obtained from the FOTF objects, studied in Chapter 6. Therefore, the overload functions are in fact based on the overload functions written for FOTF objects. The listings of the functions are as follows. function Y=step(G,varargin) if nargout==0, step(fotf(G),varargin{:}); else, Y=step(fotf(G),varargin{:}); end function Y=impulse(G,varargin) if nargout==0, impulse(fotf(G),varargin{:}); else, Y=impulse(fotf(G),varargin{:}); end function Y=lsim(G,varargin) if nargout==0, lsim(fotf(G),varargin{:}); else, Y=lsim(fotf(G),varargin{:}); end (2) Frequency domain functions. Like in the cases of time domain analysis functions, frequency domain responses can also be evaluated via FOTF objects. The listings of the functions are as follows. function H=bode(G,varargin) if nargout==0, bode(fotf(G),varargin{:}); else, H=bode(fotf(G),varargin{:}); end function H=nyquist(G,varargin) if nargout==0, nyquist(fotf(G),varargin{:}); else, H=nyquist(fotf(G),varargin{:}); end function H=nichols(G,varargin) if nargout==0, nichols(fotf(G),varargin{:}); else, H=nichols(fotf(G),varargin{:}); end function H=mfrd(G,varargin), H=mfrd(fotf(G),varargin{:}); function [Gm,Pm,Wcg,Wcp]=margin(G) [Gm,Pm,Wcg,Wcp]=margin(fotf(G));

238 | 7 State space modelling and analysis of linear fractional-order systems (3) Root locus analysis. If the plant model is a single-variable system, the model can be converted to FOTF object. Then the root locus can be drawn, with the following overload function. function rlocus(G), rlocus(fotf(G))

7.5 Extended fractional state space models It can be seen from (7.2) that the state space models are restricted to commensurateorder systems. If a model cannot be expressed as commensurate-order systems, i.e. a base order α cannot be selected, the state space model in (7.2) cannot be established. In this section, extended linear and nonlinear state space expressions are presented.

7.5.1 Extended linear fractional-order state space models

Let us consider first an extended state space model for linear fractional-order systems, followed by illustrative examples.

Definition 7.6. If α is not a single base order but a vector of orders, the extended state space model can be expressed as

E D^α x(t) = A x(t) + B u(t),
      y(t) = C x(t) + D u(t).

Compared with the state space model in (7.2), the size of the matrix A can be significantly reduced in some examples.

Example 7.7. Consider the FOTF model given in Example 5.12 again:

G(s) = (−2s^0.63 − 4) / (2s^3.501 + 3.8s^2.42 + 2.6s^1.798 + 2.5s^1.31 + 1.5).

Find its extended fractional-order state space model.

Solution. The fractional-order differential equation can be written as

D^3.501 y(t) + 1.9 D^2.42 y(t) + 1.3 D^1.798 y(t) + 1.25 D^1.31 y(t) + 0.75 y(t) = û(t),

where û(t) = u(t)/2. Let x1(t) = y(t), x2(t) = D^1.31 y(t), x3(t) = D^1.798 y(t), x4(t) = D^2.42 y(t). The state space form of the system can be written as

D^1.31 x1(t) = x2(t),
D^0.488 x2(t) = x3(t),
D^0.622 x3(t) = x4(t),
D^1.081 x4(t) = −0.75x1(t) − 1.25x2(t) − 1.3x3(t) − 1.9x4(t) + û(t).


Therefore, the state space expression is

A = [  0      1      0     0
       0      0      1     0
       0      0      0     1
      −0.75  −1.25  −1.3  −1.9 ],   B = [0; 0; 0; 1],   α = [1.31; 0.488; 0.622; 1.081].

If the numerator of the original transfer function is a pseudo-polynomial of s, the states of the system should be reselected, since the term s^0.63 in the numerator must also be accommodated. In this case, five states should be selected. For instance, one can select x1(t) = y(t), x2(t) = D^0.63 y(t), x3(t) = D^1.31 y(t), x4(t) = D^1.798 y(t), x5(t) = D^2.42 y(t). The full state equation can be written as

D^0.63 x1(t) = x2(t),
D^0.68 x2(t) = x3(t),
D^0.488 x3(t) = x4(t),
D^0.622 x4(t) = x5(t),
D^1.081 x5(t) = −0.75x1(t) − 1.25x3(t) − 1.3x4(t) − 1.9x5(t) + û(t),

and y(t) = [−2, −1, 0, 0, 0] x(t). Therefore, the extended state space expression is

A = [  0     1     0     0    0
       0     0     1     0    0
       0     0     0     1    0
       0     0     0     0    1
      −0.75  0    −1.25 −1.3 −1.9 ],   B = [0; 0; 0; 0; 1],   α = [0.63; 0.68; 0.488; 0.622; 1.081].

A MATLAB function foss_a() can be written to convert an FOTF object into the above-mentioned extended state space form.

function G1=foss_a(G)
[n,m]=size(G); n0=[];
for i=1:n, for j=1:m, g=G(i,j); n0=[n0, g.nn g.nd]; end, end
n0=unique(n0); n1=n0(end:-1:1);
for i=1:n, for j=1:m
   g=G(i,j); num=[]; den=[]; nn=g.nn; nd=g.nd; b=g.num; a=g.den;
   for k=1:length(nn), t=find(nn(k)==n1); num(t)=b(k); end
   for k=1:length(nd), t=find(nd(k)==n1); den(t)=a(k); end
   Gt(i,j)=tf(num,den); T(i,j)=g.ioDelay;
end, end
Gf=ss(Gt); E=Gf.e; [a b c d]=dssdata(Gf);
alpha=-diff(n1); G1=foss(a,b,c,d,alpha,T,E);

Example 7.8. Write out the extended fractional-order state space model for the FOTF model in Example 7.7.

Solution. Apart from direct modelling, the function foss_a() written above can also be used, with the statements

>> b=[-2,-4]; nb=[0.63,0]; a=[2,3.8,2.6,2.5,1.5];
   na=[3.501,2.42,1.798,1.31,0]; G=fotf(a,na,b,nb); G1=foss_a(G)

and the extended fractional state space model is

A = [ −1.9  −0.65  −0.625  0    −0.75
       2     0      0      0     0
       0     1      0      0     0
       0     0      1      0     0
       0     0      0      0.5   0 ],   B = [2; 0; 0; 0; 0],   α = [1.081; 0.622; 0.488; 0.68; 0.63],

The extended state space model can also be converted back to FOTF object directly with the function fotf().
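The order bookkeeping inside foss_a() — merge all numerator and denominator orders, sort them in decreasing order, and difference the result to obtain the chain of incremental orders — can be cross-checked outside MATLAB. In the following Python sketch, extended_orders is a hypothetical helper reproducing that step for the model of Example 7.7:

```python
def extended_orders(num_orders, den_orders):
    # merge and sort all fractional orders in decreasing order,
    # then difference consecutive entries (MATLAB: alpha = -diff(n1))
    grid = sorted(set(num_orders) | set(den_orders), reverse=True)
    alpha = [round(a - b, 4) for a, b in zip(grid[:-1], grid[1:])]
    return alpha, grid

alpha, grid = extended_orders([0.63, 0.0],
                              [3.501, 2.42, 1.798, 1.31, 0.0])
# grid  -> [3.501, 2.42, 1.798, 1.31, 0.63, 0.0]
# alpha -> [1.081, 0.622, 0.488, 0.68, 0.63]
```

The incremental orders must sum to the highest denominator order, here 3.501, which gives a quick sanity check on the result.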

7.5.2 Nonlinear fractional-order state space models

Like in classical control systems, state space expressions can also be used in the modelling of nonlinear fractional-order systems. In this subsection, two examples are given to demonstrate the state space representation of nonlinear fractional-order systems.

Example 7.9. The fractional-order Chua chaotic oscillator is a good example of the nonlinear state space description of fractional-order systems [52]:

C_0D_t^q1 x(t) = α[y(t) − x(t) − f(x(t))],
C_0D_t^q2 y(t) = x(t) − y(t) + z(t),
C_0D_t^q3 z(t) = −βy(t) − γz(t),

where the nonlinear function is expressed as

f(x(t)) = m1 x(t) + (m0 − m1)(|x(t) + 1| − |x(t) − 1|)/2.

Solution. Select the state variables x1(t) = x(t), x2(t) = y(t), x3(t) = z(t). The extended nonlinear state space model can be established:

C_0D_t^q1 x1(t) = α[x2(t) − x1(t) − f(x1(t))],
C_0D_t^q2 x2(t) = x1(t) − x2(t) + x3(t),
C_0D_t^q3 x3(t) = −βx2(t) − γx3(t),

where the nonlinear function is expressed as

f(x1(t)) = m1 x1(t) + (m0 − m1)(|x1(t) + 1| − |x1(t) − 1|)/2.

Example 7.10. Find the state space description of the Caputo differential equation given by (see [16])

C_0D_t^1.455 y(t) = −t^0.1 (E_{1,1.545}(−t)/E_{1,1.445}(−t)) e^t y(t) C_0D_t^0.555 y(t) + e^{−2t} − [y′(t)]²,

with y(0) = 1 and y′(0) = −1.

Solution. In the original equation in [16], the two Mittag-Leffler functions were E_{1.545}(−t) and E_{1.445}(−t), and it was claimed that the analytical solution is e^{−t}. If the function e^{−t} is substituted back into the original equation, the equation does not hold. Therefore, the two Mittag-Leffler functions are modified here to the two-parameter ones, such that the analytical solution of the equation is e^{−t}.

Selecting the state variables x1(t) = y(t), x2(t) = C_0D_t^0.555 y(t) and x3(t) = y′(t), the state space representation can be expressed as

C_0D_t^0.555 x1(t) = x2(t),
C_0D_t^0.445 x2(t) = x3(t),
C_0D_t^0.455 x3(t) = −t^0.1 (E_{1,1.545}(−t)/E_{1,1.445}(−t)) e^t x1(t)x2(t) + e^{−2t} − x3²(t).

For the initial values of the states, note that, if a state happens to be the output y(t) or an integer-order derivative of the output y(t), its initial value can be set to the corresponding initial value of the original Caputo equation; otherwise, the initial value of the state must be assigned to zero. For this example, the initial values of the states are x1(0) = 1, x2(0) = 0 and x3(0) = −1.
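The two-parameter Mittag-Leffler function appearing above is simply the series E_{α,β}(z) = ∑_{k≥0} z^k / Γ(αk + β). A minimal Python check of this definition (illustrative only; the FOTF toolbox provides the real evaluator ml_func(), and ml2 below is a hypothetical name):

```python
import math

def ml2(alpha, beta, z, tol=1e-15):
    # Direct series summation of E_{alpha,beta}(z); only suitable for
    # moderate |z|, and gamma's argument is kept inside its float range.
    s, k = 0.0, 0
    while alpha * k + beta < 170.0:      # math.gamma overflows near 171
        term = z ** k / math.gamma(alpha * k + beta)
        s += term
        if abs(term) < tol and k > 1:
            break
        k += 1
    return s
```

Known special cases provide a quick validation: E_{1,1}(z) = e^z, E_{1,2}(z) = (e^z − 1)/z and E_{2,1}(z) = cosh(√z).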

Summarising the above examples, the standard form of extended fractional-order state space equations can be defined.

Definition 7.7. Extended fractional-order state space equations are defined as

C_0D^α x(t) = f(t, x(t)),

with initial state vector x(0). In the following chapter, numerical solution algorithms will be presented. Also, general purpose nonlinear differential equation solvers will be addressed.

8 Numerical solutions of nonlinear fractional-order differential equations

In Chapter 4, analytical and numerical solutions of linear fractional-order differential equations were studied, and high-precision algorithms for Riemann–Liouville and Caputo equations were presented. In control systems, nonlinear behaviours are inevitable. Therefore, solution strategies for nonlinear fractional-order differential equations are also expected.

There are not as many numerical algorithms for nonlinear fractional-order differential equations as for their integer-order counterparts. Only very few of the vast number of algorithms for integer-order equations have been transplanted to fractional-order differential equations. The predictor–corrector algorithm (PECE) based on the Adams–Bashforth–Moulton algorithm is well documented in [16] and will be presented and implemented in Section 8.1. Also, numerical algorithms for extended state space models are proposed and implemented. In Section 8.2, more effective and high-precision algorithms are proposed for explicit and implicit Caputo equations.

For complicated systems, especially those in control systems, where a large-scale system is composed of several individual yet interconnected blocks, it may not be possible to write out a single fractional-order differential equation for the whole system. A block diagram-based solution philosophy is particularly useful in dealing with simulation problems of fractional-order control systems. To solve this kind of problem, the sophisticated Simulink environment should be used. A Simulink block library is designed, with dedicated blocks to model everything from essential components such as fractional-order differentiators and integrators, to fractional-order PID controllers, FOTF objects and even FOTF matrices. The dedicated Simulink library and its internal programming will be presented in Section 8.3, with illustrative examples in control system modelling and simulation.
Block diagram-based modelling and solutions to nonlinear fractional-order ordinary differential equations with zero and nonzero initial conditions will be demonstrated in Sections 8.4 and 8.5. Detailed techniques and guidelines on modelling will be fully discussed through examples. Systematic schemes are proposed for explicit and even implicit Caputo equations. It can be seen that in theory, nonlinear fractional-order differential equations of any complexity can be modelled and solved with the dedicated tools and schemes.

DOI 10.1515/9783110497977-008

8.1 Numerical solutions of nonlinear Caputo equations

Linear Caputo differential equations were studied extensively in Chapter 4, and MATLAB tools were developed to solve general linear Caputo equations of any kind with high-precision algorithms. In this section, a variety of algorithms is introduced for finding numerical solutions of nonlinear Caputo equations.

Definition 8.1. The general form of nonlinear Caputo differential equations is given by

F(t, y(t), C_{t0}D_t^α1 y(t), ..., C_{t0}D_t^αn y(t)) = 0,   (8.1)

where F(·) is a functional of the variable t, the function y(t) and its fractional-order derivatives. It is safe to assume that αn > α_{n−1} > ··· > α2 > α1 > 0. The initial conditions, with q = ⌈αn⌉, are given by

y(0) = y0,  y′(0) = y1,  y″(0) = y2,  ...,  y^(q−1)(0) = y_{q−1}.   (8.2)

The simplest form of nonlinear Caputo equations, the single-term form, is studied first in this section, and the algorithm is then extended to the multi-term case. The solutions of extended fractional-order state space equations will also be studied in this section.

8.1.1 Numerical solutions of single-term equations

The so-called single-term nonlinear Caputo fractional-order differential equations are studied first in this subsection.

Definition 8.2. The mathematical form of a single-term nonlinear Caputo fractional-order differential equation is defined as

C_0D^α y(t) = f(t, y(t)),   (8.3)

with known initial conditions in (8.2). The variable y(t) is the state vector of the equation.

Here, the differential equation is deliberately written in terms of the vector function y(t), meaning that the equation format is suitable for describing vectorised Caputo equation problems. This kind of equation is similar to the integer-order explicit equation y′(t) = f(t, y(t)).

Theorem 8.1 ([16]). The analytical form of the solution can be written as

y(t) = ∑_{k=0}^{q−1} y_k t^k/k! + (1/Γ(α)) ∫_0^t f(τ, y(τ))/(t − τ)^{1−α} dτ.

For simplicity, the classical Adams–Bashforth–Moulton algorithm for the first-order differential equation

y′(t) = f(t, y(t)),  with initial condition y(0) = y0,

is summarised first. Assume that the time interval of interest is t ∈ (0, tn), and a step-size h is selected; the interval can be divided into subintervals by the grid points 0, h, 2h, .... Suppose that the approximations y_i = y(ih) have already been found for i = 0, 1, ..., k; the next term, y_{k+1}, can easily be approximated by

y_{k+1} = y_k + ∫_{t_k}^{t_{k+1}} f(τ, y(τ)) dτ.

For a small step-size h, the integral in the equation can be approximated by the two-point trapezoidal quadrature formula such that

y_{k+1} = y_k + (h/2)[f(t_k, y_k) + f(t_{k+1}, y_{k+1})].   (8.4)

Since the term y_{k+1} appears on both sides of the equation, and it is not quite simple to solve the nonlinear equation for y_{k+1}, an iterative algorithm can be introduced to find the unknown quantity y_{k+1}. Denote the initial value of y_{k+1} on the right-hand side of the equation by y^p_{k+1}, referred to as the predictor; then (8.4) can be rewritten as

y_{k+1} = y_k + (h/2)[f(t_k, y_k) + f(t_{k+1}, y^p_{k+1})].   (8.5)

This equation is referred to as the corrector equation. The initial value of the predictor y^p_{k+1} can be evaluated by the rectangular quadrature formula, i.e. the Euler formula:

y^p_{k+1} = y_k + h f(t_k, y_k).   (8.6)

This equation is referred to as the predictor equation.

Algorithm 8.1 (Adams–Bashforth–Moulton predictor–corrector algorithm).
(1) Set k to 0 and use the first term y0.
(2) For each value of k, compute the initial predictor y^p_{k+1} from (8.6).
(3) Compute the corrector y_{k+1} from (8.5). Iterate the process until a convergent y_{k+1} is found, for instance, until ‖y_{k+1} − y^p_{k+1}‖ < ϵ.

Similar to the integer-order case, a corrector equation can be formulated [16] for the Caputo fractional-order differential equation in (8.3).

The fractional-order corrector equation is

y_{k+1} = ∑_{i=0}^{q−1} y_i t^i_{k+1}/i! + (1/Γ(α)) [a_{k+1,k+1} f(t_{k+1}, y^p_{k+1}) + ∑_{i=0}^{k} a_{i,k+1} f(t_i, y_i)],   (8.7)

where for unevenly spaced grids the coefficients a_{i,k+1} can be computed recursively from

a_{0,k+1} = [(t_{k+1} − t_1)^{α+1} + t_{k+1}^α ((α + 1)t_1 − t_{k+1})] / [t_1 α(α + 1)],

a_{i,k+1} = [(t_{k+1} − t_{i−1})^{α+1} + (t_{k+1} − t_i)^α (α(t_{i−1} − t_i) + t_{i−1} − t_{k+1})] / [(t_i − t_{i−1}) α(α + 1)]
          + [(t_{k+1} − t_{i+1})^{α+1} − (t_{k+1} − t_i)^α (α(t_i − t_{i+1}) − t_{i+1} + t_{k+1})] / [(t_{i+1} − t_i) α(α + 1)]

for 1 ≤ i ≤ k, and

a_{k+1,k+1} = (t_{k+1} − t_k)^α / [α(α + 1)].

For evenly spaced grids, the above equations simplify to

a_{i,k+1} = { h^α [k^{α+1} − (k − α)(k + 1)^α] / [α(α + 1)],                          i = 0,
            { h^α [(k − i + 2)^{α+1} + (k − i)^{α+1} − 2(k − i + 1)^{α+1}] / [α(α + 1)],  1 ≤ i ≤ k,   (8.8)
            { h^α / [α(α + 1)],                                                      i = k + 1.

The predictor equation of the fractional-order Adams–Bashforth–Moulton algorithm is

y^p_{k+1} = ∑_{i=0}^{q−1} y_i t^i_{k+1}/i! + (1/Γ(α)) ∑_{i=0}^{k} b_{i,k+1} f(t_i, y_i),   (8.9)

and for evenly spaced grids the coefficients b_{i,k+1} can be computed from

b_{i,k+1} = (h^α/α) [(k + 1 − i)^α − (k − i)^α].   (8.10)

It has been shown that the accuracy of the algorithm is O(h²) for α ≥ 1, and O(h^{1+α}) if α < 1. The fractional-order predictor–corrector Adams–Bashforth–Moulton algorithm is then formulated below.

Algorithm 8.2 (Fractional-order Adams–Bashforth–Moulton algorithm).
(1) For each value of k, compute the coefficients b_{i,k+1} from (8.10).
(2) Compute the coefficients a_{i,k+1} from (8.8).
(3) Compute the predictor y^p_{k+1} from (8.9).
(4) Iterate the solution y_{k+1} from (8.7) until a convergent solution is found.

Comments 8.1 (predictor–corrector algorithm). We remark the following.
(1) In real applications, the predictor equation may not be very important, since it merely supplies an initial search point for the corrector iteration. The predictor computation can be bypassed, so that the computational load is reduced by roughly 30 %–50 %.
(2) To bypass the predictor, the first value y^p_1 can be assigned to y0, and the subsequent values can be set to y^p_{k+1} = y_k.
(3) The algorithm is based on the assumption that the time grid is evenly spaced. For an unevenly spaced grid, the algorithm can still be used by replacing the terms a_{i,k+1} with the corresponding formulae, with the predictor bypassed.
(4) Although a_{ij} and b_{ij} appear in matrix form, for each step k vectors are sufficient for storing them.

Based on the algorithm, a MATLAB implementation pepc_nlfode() is written, with the syntax [y, t] = pepc_nlfode(f, α, y0, h, tn, ϵ, key), whose declaration is

function [y,t]=pepc_nlfode(f,alpha,y0,h,tn,err,key)

The solution of the equation can be obtained with the statements

>> h=0.01; [y,t]=pepc_nlfode(f,alpha,y0,h,tn,err,key);
   [y2,t]=pepc_nlfode(f,alpha,y0,h,tn,err,2);
   ya1=t.^8-3*t.^(4+alpha/2)+9*t.^alpha/4;
   plot(t,y,'-',t,ya1,'--',t,y2,':'), norm(y-ya1)

and shown in Figure 8.1, together with the one from the predictor only. The analytical solution is also superimposed, and it can be seen that the results are almost the same. In fact, the norm of the error vector is 5.3814×10⁻⁴. It is also seen that there exist small discrepancies in the predictor-only solutions.
Selecting different step-sizes, and setting key to turn the predictor and the corrector on or off, the errors and elapsed times are measured and recorded in Table 8.1. It can be seen that the predictor can be turned off for this example, while the corrector is the key factor in maintaining the accuracy of the algorithm. It can be concluded that one cannot use the predictor alone to solve the equations; the corrector must be used. The predictor result is not necessary in real applications, and it is suggested to set key to 1.

Tab. 8.1: Algorithm comparisons under different step-sizes.

step-size h   without predictor            with predictor               predictor alone
              error norm     time (sec)    error norm     time (sec)    error norm   time (sec)
0.05          0.0018         0.069         0.0020         0.13          0.1760       0.031
0.01          5.3814×10⁻⁴    0.46          6.7297×10⁻⁴    0.53          0.1220       0.13
0.005         2.1210×10⁻⁴    0.83          2.2789×10⁻⁴    1.08          0.0854       0.591
0.001         1.7932×10⁻⁵    14.94         1.9005×10⁻⁵    27.94         0.0378       7.01
0.0005        6.2378×10⁻⁶    72.47         6.5731×10⁻⁶    96.15         0.0267       30.95
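Since the MATLAB listing of pepc_nlfode() was typeset compactly above, Algorithm 8.2 can also be followed in a language-neutral way. The Python sketch below (fpece is a hypothetical name, not the book's toolbox function) implements (8.7)–(8.10) for a scalar equation with 0 < α ≤ 1 on an evenly spaced grid, so that the Taylor term reduces to y0:

```python
import math

def fpece(f, alpha, y0, h, tn, tol=1e-10, itmax=30):
    # Fractional Adams-Bashforth-Moulton PECE for C_0 D^alpha y = f(t, y)
    n = int(round(tn / h))
    t = [i * h for i in range(n + 1)]
    y = [y0]
    ga = math.gamma(alpha)
    c = 1.0 / (alpha * (alpha + 1.0))
    for k in range(n):
        fh = [f(t[i], y[i]) for i in range(k + 1)]
        # predictor weights b_{i,k+1}, eq. (8.10)
        b = [h ** alpha / alpha * ((k + 1 - i) ** alpha - (k - i) ** alpha)
             for i in range(k + 1)]
        yp = y0 + sum(bi * fi for bi, fi in zip(b, fh)) / ga
        # corrector weights a_{i,k+1}, eq. (8.8)
        a = [h ** alpha * c * (k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha)]
        a += [h ** alpha * c * ((k - i + 2) ** (alpha + 1) + (k - i) ** (alpha + 1)
                                - 2 * (k - i + 1) ** (alpha + 1))
              for i in range(1, k + 1)]
        s = sum(ai * fi for ai, fi in zip(a, fh))
        akk = h ** alpha * c                    # a_{k+1,k+1}
        yk = yp
        for _ in range(itmax):                  # iterate corrector (8.7)
            ynew = y0 + (s + akk * f(t[k + 1], yk)) / ga
            if abs(ynew - yk) < tol:
                yk = ynew
                break
            yk = ynew
        y.append(yk)
    return t, y
```

As a check, for α = 1 the weights collapse to the trapezoidal rule of (8.4)–(8.5), and a benchmark such as C_0D^0.5 y = Γ(5)/Γ(4.5) t^3.5 + t^4 − y with y(0) = 0 (exact solution y = t^4) is reproduced to the expected O(h^{1+α}) accuracy.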


8.1.2 Solutions of multi-term equations

It is immediately seen that the single-term Caputo equation covers only a very narrow class of nonlinear Caputo equations. In this subsection, a more general form of Caputo equations is studied.

Definition 8.3. A class of nonlinear explicit Caputo equations is defined as

C_0D_t^α y(t) = f(t, y(t), C_0D_t^α1 y(t), ..., C_0D_t^α_{n−1} y(t)),   (8.11)

where α = αn is the highest fractional order. The initial conditions are given in (8.2). In contrast to single-term equations, this type of equation is referred to here as a multi-term Caputo equation.

In fact, the algorithm presented in [16] is rather complicated to implement; we shall only consider a special case – the commensurate-order case.

Definition 8.4. A commensurate-order nonlinear explicit Caputo equation is defined as

C_0D_t^{nα0} y(t) = f(t, y(t), C_0D_t^{α0} y(t), C_0D_t^{2α0} y(t), ..., C_0D_t^{(n−1)α0} y(t)),   (8.12)

where α0 is the base order. The initial conditions are given in (8.2).

Introduce the following set of state variables:

x1(t) = y(t),  x2(t) = C_0D_t^{α0} y(t),  x3(t) = C_0D_t^{2α0} y(t),  ...,  xn(t) = C_0D_t^{(n−1)α0} y(t).

The state space equation can be written as

C_0D_t^{α0} x1(t) = x2(t),
C_0D_t^{α0} x2(t) = x3(t),
...
C_0D_t^{α0} x_{n−1}(t) = xn(t),
C_0D_t^{α0} xn(t) = f(t, x1(t), x2(t), ..., xn(t)),

and the matrix form of the equation is

C_0D_t^{α0} x(t) = F(t, x(t)).

The settings of the initial conditions are as follows [16]: if (i − 1)α0 is an integer, then x_i(0) = y_{(i−1)α0}; otherwise, x_i(0) = 0. It can be seen that the equation can be solved directly with the single-term algorithm studied earlier. The solution procedure is demonstrated through the following examples.
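Finding the base order α0 of (8.12) amounts to computing the greatest common rational divisor of the orders. A quick cross-check in Python with exact fractions (base_order is a hypothetical helper; the book performs the same computation in MATLAB with rat, gcd and lcm in Example 8.3):

```python
from fractions import Fraction
from math import gcd, lcm

def base_order(orders):
    # gcd of fractions in lowest terms = gcd(numerators)/lcm(denominators);
    # decimals are passed through str() so they are parsed exactly
    fr = [Fraction(str(a)) for a in orders]
    return Fraction(gcd(*(f.numerator for f in fr)),
                    lcm(*(f.denominator for f in fr)))

a0 = base_order([1.455, 1, 0.555])
# a0 -> 1/200, i.e. alpha0 = 0.005, so 1.455/alpha0 = 291 states
```

The number of states n then follows as the highest order divided by α0, which grows quickly when the orders share only a tiny common divisor.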

Fig. 8.2: Numerical solution of the Bagley–Torvik equation (the curves x1(t) and 1 + t overlap; x2(t), x3(t) and x4(t) are also shown).

Example 8.2. Solve numerically the Bagley–Torvik equation in Example 4.10:

A y″(t) + B D^{3/2} y(t) + C y(t) = C(t + 1),  with y(0) = y′(0) = 1.

Solution. It is easily seen that the base order is α0 = 1/2. Selecting the state variables

x1(t) = y(t),  x2(t) = C_0D_t^{1/2} y(t),  x3(t) = y′(t),  x4(t) = C_0D_t^{3/2} y(t),

the state space model can be written as

C_0D_t^{1/2} x(t) = [ x2(t);  x3(t);  x4(t);  (C(t + 1) − C x1(t) − B x4(t))/A ],   x0 = [1; 0; 1; 0].

In the initial conditions of the state vector, it is found that the first and third states satisfy the constraint that iα0 is an integer. Thus, they can be assigned to y(0) and y′(0), respectively, while the initial conditions of the other states should be set to zero.

Select A = 1, B = 2 and C = 3. The states of the equations can be obtained with

>> A=1; B=2; C=3; x0=[1; 0; 1; 0];
   f=@(t,x)[x(2); x(3); x(4); (C*(t+1)-C*x(1)-B*x(4))/A];
   [x,t]=pepc_nlfode(f,1/2,x0,0.01,3,1e-10,1); plot(t,x,t,1+t)

as shown in Figure 8.2. It can be seen that the solutions are accurate enough in this example. In fact, as stated in Example 4.10, no matter what the values of A, B and C are, the solution x1(t) is always t + 1.

Example 8.3. Solve the nonlinear Caputo fractional-order differential equation studied in Example 7.10 (cf. [16]),

C_0D_t^1.455 y(t) = −t^0.1 (E_{1,1.545}(−t)/E_{1,1.445}(−t)) e^t y(t) C_0D_t^0.555 y(t) + e^{−2t} − [y′(t)]²,

where y(0) = 1, y′(0) = −1. The analytical solution y(t) = e^{−t} can be used to validate the result.

Solution. To convert the equation into a commensurate-order one, a base order must be found first. With the commands

>> [n d]=rat([1.455, 1, 0.555]); a=gcd(sym(n))/lcm(sym(d)), n=1.455/a, n1=0.555/a, n2=1/a

it is found that the base order is α0 = 0.005, and there should be n = 291 states. Besides, the key states are x1(t) = y(t), x112(t) = D^0.555 y(t), x201(t) = y′(t), x291(t) = D^1.450 y(t). The fractional-order state space model can be written as

D^α x_i(t) = x_{i+1}(t),  i = 1, 2, ..., 290,
D^α x291(t) = −t^0.1 (E_{1,1.545}(−t)/E_{1,1.445}(−t)) e^t x1(t) x112(t) + e^{−2t} − x²201(t),

with x1(0) = y(0) = 1, x201(0) = y′(0) = −1, and zero initial values for the rest of the states. The equations can then be described as follows.

function x1=c8nleq(t,x)
x1=[x(2:end); -t^0.1*ml_func([1,1.545],-t)/ml_func([1,1.445],-t)...
    *exp(t)*x(1)*x(112)+exp(-2*t)-x(201)^2];

Assign the initial conditions y(0) and y′(0) to x1 and x201, respectively, and leave the others zero. The equations can be solved with the following statements, with h = 0.001 selected.

>> y0=zeros(291,1); y0([1,201])=[1; -1];
   tic, [y,t]=pepc_nlfode(@c8nleq,0.005,y0,0.001,1,1e-10,1); toc
   max(abs(y(1,:)-exp(-t)))

The elapsed time is 19,287.57 seconds – nearly five and a half hours; however, the maximum error is 4.7712×10⁻⁴, far smaller than the error of 0.06 reported in [16] for the smaller step-size h = 1/1600. If the error tolerance of the above solver is relaxed from ϵ = 10⁻¹⁰ to 10⁻⁶, the computational load is reduced significantly, with the elapsed time cut to 5723.12 seconds, while the maximum error is still 4.7649×10⁻⁴. When ϵ is further relaxed to 10⁻⁵, the elapsed time drops to 2654.23 seconds, and the maximum error is maintained at 4.9907×10⁻⁴. If a step-size of h = 0.005 is used, the elapsed time is 435.70 seconds, with a maximum error of 0.0023.
It can be seen that the genuine multi-term equation solution under this approach is extremely time-consuming. Therefore, efficient algorithms are expected in dealing with such equations.

8.1.3 Numerical solutions of extended fractional-order state space equations

For a given simple single-term scalar differential equation D^α z(t) = f(t, z(t)) with zero initial conditions, the Grünwald–Letnikov definition gives

(1/h^α) ∑_{j=0}^{m} w_j^(α) z(t_{k−j}) = (1/h^α) [z(t_k) + ∑_{j=1}^{m} w_j^(α) z(t_{k−j})] = f(t_k, z(t_k)),

where m = ⌈t_k/h⌉ + 1. It can be seen that

z(t_k) = h^α f(t_k, z(t_k)) − ∑_{j=1}^{m} w_j^(α) z(t_{k−j}),

where z(t_k) appears on both sides of the equation. If the step-size h is small enough, the first term on the right-hand side can be approximated by h^α f(t_k, z(t_{k−1})), and the recursive approach can be used to find the approximate solution of the original one-term differential equation.

For the extended fractional-order state space equations

D^α1 z1(t) = f1(t, z1(t), z2(t), ..., z_n(t)),
D^α2 z2(t) = f2(t, z1(t), z2(t), ..., z_n(t)),
...
D^α_n z_n(t) = f_n(t, z1(t), z2(t), ..., z_n(t)),

the recursive solution algorithm can be constructed – extended from [52] – as

z1(t_k) = h^α1 f1(t_k, z1(t_{k−1}), z2(t_{k−1}), ..., z_n(t_{k−1})) − ∑_{j=1}^{m} w_j^(α1) z1(t_{k−j}),
z2(t_k) = h^α2 f2(t_k, z1(t_k), z2(t_{k−1}), ..., z_n(t_{k−1})) − ∑_{j=1}^{m} w_j^(α2) z2(t_{k−j}),
...
z_n(t_k) = h^α_n f_n(t_k, z1(t_k), z2(t_k), ..., z_n(t_{k−1})) − ∑_{j=1}^{m} w_j^(α_n) z_n(t_{k−j}).   (8.13)
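The recursion (8.13) is straightforward to prototype. The following Python toy version (gl_solve and gl_weights are hypothetical names, not toolbox functions) uses the standard recursive Grünwald–Letnikov weights w_j = (1 − (α + 1)/j) w_{j−1} starting from w_0 = 1, with an optional memory-length cap L0 on the history sum:

```python
import math

def gl_weights(alpha, m):
    # w_j = (-1)^j * binom(alpha, j), generated by the standard recursion
    w = [1.0]
    for j in range(1, m):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_solve(f, alpha, n, h, tn, L0=None):
    # recursion (8.13): D^alpha_i z_i = f_i(t, z), zero initial conditions
    m = int(round(tn / h)) + 1
    if L0 is None:
        L0 = m
    W = [gl_weights(a, min(m, L0 + 1)) for a in alpha]
    z = [[0.0] * m for _ in range(n)]
    zk = [0.0] * n
    for k in range(1, m):
        tk = k * h
        L = min(k, L0)
        for i in range(n):
            s = sum(W[i][j] * z[i][k - j] for j in range(1, L + 1))
            zk[i] = h ** alpha[i] * f(tk, zk, i) - s   # z1..z_{i-1} already updated
            z[i][k] = zk[i]
    return [k * h for k in range(m)], z
```

A simple validation: for D^0.5 z = 1 with z(0) = 0 the exact solution is z(t) = t^0.5/Γ(1.5), and for α = 1 the recursion degenerates to the explicit Euler method.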

The nonlinear state space equations under the Caputo definition with nonzero initial conditions are explored in this subsection.

Definition 8.5. The typical form of a Caputo state space equation is

C_0D^α x(t) = f(t, x(t)),

with initial conditions x(0) = [x1(0), x2(0), ..., xn(0)]^T.


Consider the shifted variables z_k(t) = x_k(t) − x_k(0), which account for the nonzero initial conditions. It is easily found that the following key recursive equations can be derived:

x1(t_k) = h^α1 f1(t_k, x1(t_{k−1}), x2(t_{k−1}), ..., x_n(t_{k−1})) − ∑_{j=1}^{m} w_j^(α1) z1(t_{k−j}) + x1(0),
x2(t_k) = h^α2 f2(t_k, x1(t_k), x2(t_{k−1}), ..., x_n(t_{k−1})) − ∑_{j=1}^{m} w_j^(α2) z2(t_{k−j}) + x2(0),   (8.14)
...
x_n(t_k) = h^α_n f_n(t_k, x1(t_k), x2(t_k), ..., x_n(t_{k−1})) − ∑_{j=1}^{m} w_j^(α_n) z_n(t_{k−j}) + x_n(0).

If the number of sample points in the simulation is too large, the short-memory principle can be considered. Suppose that only the most recent L0 samples are retained; then the sum can be approximated by

∑_{j=1}^{m} w_j^(α_i) z_i(t_{k−j}) ≈ ∑_{j=1}^{min(L0,k)} w_j^(α_i) z_i(t_{k−j}).

Based on the above considerations, the following algorithm is proposed for solving extended nonlinear fractional-order state space equations.

Algorithm 8.3 (Numerical solutions of the extended state space model). One proceeds in the following way.
(1) A double loop structure can be used to solve the equations.
(2) Solve equation (8.14) recursively and store the results in z, where a vector x1 is used to store the most recent vector x.
(3) Find the solution of the original equation with x = z + x0.

A MATLAB function for solving nonlinear explicit Caputo state space equations is written as

function [x,t]=nlfode_vec(f,alpha,x0,h,tn,L0)
if nargin==5, L0=1e20; end
n=length(x0); m=round(tn/h)+1; t=0;
g=double(genfunc(1)); x0=x0(:); ha=h.^alpha; z=zeros(n,m); x1=x0;
for i=1:n, W(i,:)=get_vecw(alpha(i),min(m,L0+1),g); end
for k=2:m, tk=(k-1)*h; L=min(k-1,L0);
   for i=1:n
      x1(i)=f(tk,x1,i)*ha(i)-W(i,2:L+1)*z(i,k-1:-1:k-L).'+x0(i);
   end, t=[t,tk]; z(:,k)=x1-x0;
end
x=(z+repmat(x0,[1,m])).';

254 | 8 Numerical solutions of nonlinear fractional-order differential equations with the syntax [x, t] = nlfode_vec(fun, α, x0 , h, tn , L0 ). Here fun is the function handle of the right-hand side of the original explicit equation, whose format will be explained by the example; α is the vector of orders, x0 the initial state vector. The arguments h and tn are the step-size and terminate time, respectively; the optional L0 is the length of short-memory and it can be omitted to bypass the short-memory principle; the returned variables t and y are the solutions of the equation, with the ith column the solution of the ith state x i (t). Example 8.4. Simulate the following fractional-order Chua system in Example 7.9: α

C 1 { 0 Dt x 1 (t) = α[x 2 (t) − x 1 (t) + f(x 1 (t))], { { C α2 D x (t) = x1 (t) − x2 (t) + x3 (t), { {0 t 2 { C α3 { 0 Dt x3 (t) = −βx2 (t) − γx3 (t),

where the nonlinear function is expressed as f(x1 (t)) = m1 x1 (t) +

1 (m0 − m1 )(|x1 (t) + 1| − |x1 (t) − 1|). 2

Solve the equations with (cf. [52]) α = 10.725,

β = 10.593,

m0 = −1.1726,

m1 = −0.7872,

α1 = 0.93,

α2 = 0.99,

x(0) = 0.2,

y(0) = −0.1,

γ = 0.268, α3 = 0.92, z(0) = 0.1.

Solution. The MATLAB function describing the state space model is given as follows, where the derivative of the kth state is evaluated.

function y=c8mchua(t,x,k)
a=10.725; b=10.593; c=0.268; m0=-1.1726; m1=-0.7872;
switch k
   case 1, f=m1*x(1)+0.5*(m0-m1)*(abs(x(1)+1)-abs(x(1)-1));
      y=a*(x(2)-x(1)-f);
   case 2, y=x(1)-x(2)+x(3);
   case 3, y=-b*x(2)-c*x(3);
end

It is also noticed that, although t is not explicitly involved, the argument t must still be present in the function. Selecting h = 0.001 and tn = 200, the following statements can be used to solve the chaotic equation.

>> alpha=[0.93,0.99,0.92]; x0=[0.2; -0.1; 0.1]; h=0.001; tn=200;
   [x,t]=nlfode_vec(@c8mchua,alpha,x0,h,tn); plot(x(:,1),x(:,2))

Fig. 8.3: Phase plane trajectory of the fractional-order Chua system.

The x-y phase plane trajectory can be obtained as shown in Figure 8.3. The time elapsed is around 20 minutes. It can be seen that the algorithm is quite time-consuming when the number of points is large, since the sums involve heavy computational loads. With a short memory of L0 = 10,000 points, the terminate time tn = 500 needs only 228.5 seconds, and the trajectory is shown in Figure 8.4.

>> tn=500; L=10000;
   [x,t]=nlfode_vec(@c8mchua,alpha,x0,h,tn,L); plot(x(:,1),x(:,2))

Although the trend of the trajectory is similar to the one shown in Figure 8.3, the details in the initial time period are different.

Fig. 8.4: Phase plane trajectory for the terminate time tn = 500.

Example 8.5. Consider the extended fractional-order state space model studied in Example 7.10. Solve the differential equation with the new algorithm.

256 | 8 Numerical solutions of nonlinear fractional-order differential equations Solution. With the states selected in Example 7.10, the extended state space can be established for the original equation as follows: C 0.555 x1 (t) = x2 (t), 0D { { { { { C 0.445 x2 (t) = x3 (t), 0D { { { { { C D 0.455 x (t) = −t0.1 E1,1.545 (−t) et x (t)x (t) + e−2t −x2 (t), 3 1 2 3 E1,1.445 (−t) {0

with initial state vector x(0) = [1, 0, −1]T . The original state space equation can be expressed as follows. function y=c8mexp1x(t,x,k) if k> alpha=[0.555, 0.445, 0.455]; h=0.01; tn=1; x0=[1; 0; -1]; tic, [x,t]=nlfode_vec(@c8mexp1x,alpha,x0,h,tn); toc max(abs(x(:,1)-exp(-t’))) Since the solver is quite efficient, the step-size h can be reduced significantly. For instance, for h = 0.001, the time required is 0.19 seconds, while the maximum error is 0.0016; for h = 0.0001, the time elapsed is 2.57 seconds, with maximum error of 1.5736×10−4 ; and for h = 0.00001, the time is 184 seconds, while the maximum error is 1.5743×10−5 . It can be seen that the time elapsed is highly nonlinear, due to the computation burden in the sums in (8.13).

8.1.4 Equation solution-based algorithm

In Algorithm 8.3, when x_i(t_k) appeared on both sides of the equation, some of the states were replaced by x_i(t_{k−1}) so that a recursive procedure could be carried out. Reconsider the equations in (8.14). If the x_i(t_{k−1}) terms are replaced back by x_i(t_k), all the equations take the same form:

x_i(t_k) − h^α_i f_i(t_k, x1(t_k), x2(t_k), ..., x_n(t_k)) + ∑_{j=1}^{min(L0,k)} w_j^(α_i) z_i(t_{k−j}) − x_i(0) = 0.   (8.15)

Thus there are n algebraic equations with n unknowns, x1(t_k), x2(t_k), ..., x_n(t_k), for each time instance t_k. Therefore, an equation solution-based solver can be developed.

Comments 8.2 (The new algorithm). We remark the following.
(1) A loop structure should be used to solve the equations.
(2) It can be seen that the original differential equation problem is changed into an algebraic equation solution problem for each k.
(3) The sums in the equations are independent of the unknowns x_i(t_k), and need to be computed only once for each k.

Based on the above considerations, a MATLAB function can be written to solve the state space equations with the new algorithm; the syntax is exactly the same as that of nlfode_vec().

function [x,t]=nlfode_vec1(f,alpha,x0,h,tn,L0)
if nargin==5, L0=1e20; end
n=length(x0); m=round(tn/h)+1; t=0; g=double(genfunc(1));
x0=x0(:); x1=x0; ha=h.^alpha(:); z=zeros(n,m); tk=0;
for i=1:n, W(i,:)=get_vecw(alpha(i),min(m,L0+1),g); end
for k=2:m, tk=(k-1)*h; L=min(k-1,L0);
   for i=1:n, F0(i,1)=W(i,2:L+1)*z(i,k-1:-1:k-L).'-x0(i); end
   F=@(x)x-f(tk,x).*ha+F0; x1=fsolve(F,x1);
   t=[t,tk]; z(:,k)=x1-x0;
end
x=(z+repmat(x0,[1,m])).';

Example 8.6. Solve again the Caputo equation in Example 7.10.

Solution. The extended state space equation can be described by the following MATLAB function, which is slightly different from the one in Example 8.5, since a standard state space model is described here.

function y=c8mexp2m(t,x)
y3=-t^0.1*ml_func([1,1.545],-t)/ml_func([1,1.445],-t)...
   *exp(t)*x(1)*x(2)+exp(-2*t)-x(3)^2;
y=[x(2); x(3); y3];

The original Caputo equation can then be solved with a step-size of h = 0.01; the maximum error is 0.0012, with 1.92 seconds elapsed.

>> alpha=[0.555, 0.445, 0.455]; h=0.01; tn=1; x0=[1; 0; -1];
   tic, [x,t]=nlfode_vec1(@c8mexp2m,alpha,x0,h,tn); toc
   norm(abs(x(:,1)-exp(-t')))

For a smaller step-size of h = 0.001, the maximum error is 1.3036×10⁻⁴, and the time elapsed is about 15 seconds.
It can be seen that this function is less efficient than the one in Algorithm 8.3.
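The per-step implicit equation at the heart of this solver can be sketched in a scalar setting, replacing fsolve() with simple fixed-point iteration (a minimal Python sketch; gl_weights() and solve_caputo_scalar() are hypothetical helper names, not toolbox functions):

```python
def gl_weights(alpha, n):
    # Grunwald-Letnikov coefficients w_k of (1-x)^alpha via the standard recursion
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def solve_caputo_scalar(f, alpha, y0, h, tn, tol=1e-10, itmax=100):
    """At each t_k solve the scalar algebraic equation
    z_k = h^alpha * f(t_k, z_k + y0) - sum_{j>=1} w_j z_{k-j}
    by fixed-point iteration (the toolbox uses fsolve instead)."""
    m = round(tn / h) + 1
    w = gl_weights(alpha, m)
    ha = h ** alpha
    z = [0.0] * m                      # z has zero initial conditions
    for k in range(1, m):
        tk = k * h
        s = sum(w[j] * z[k - j] for j in range(1, k + 1))  # history term
        x = z[k - 1]                   # predicted value: previous point
        for _ in range(itmax):
            x_new = ha * f(tk, x + y0) - s
            if abs(x_new - x) < tol:
                break
            x = x_new
        z[k] = x
    return [zi + y0 for zi in z]

# test equation: Caputo D^alpha y = -y, y(0) = 1
y = solve_caputo_scalar(lambda t, x: -x, 1.0, 1.0, 0.01, 1.0)
```

For α = 1 the weights reduce to [1, −1, 0, . . .] and the scheme collapses to the implicit Euler method, which gives a convenient sanity check against exp(−t).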

It is also seen that, if the problem in Example 8.4 is considered, there would be 200,000 calls to the algebraic equation solver, which makes the approach impossible to realise. Therefore, the equation solution algorithm is not recommended.

8.2 Efficient high-precision algorithms for Caputo equations

In the predictor–corrector algorithm presented earlier, two immediate disadvantages emerge: (1) the algorithm is not quite efficient; (2) only commensurate-order nonlinear equations can be handled. Yet another hidden disadvantage is that the precision of the algorithm is not very high. In this section, the algorithms introduced for the analysis of linear fractional-order differential equations in Chapter 4 are extended to handle a class of nonlinear Caputo equations. The genuine multi-term explicit Caputo equation in Definition 8.3 is considered first. A special kind of predictor–corrector algorithm is presented, in which the problem is divided into two steps:
(1) Use the predictor equation to find a reasonable prediction result.
(2) Based on the prediction result, use the corrector equation to solve the equation and find the final result.
Further, a matrix-based algorithm is proposed for finding high-precision numerical solutions of implicit Caputo equations.

8.2.1 Predictor equation

Consider again the explicit nonlinear Caputo equation defined as
$${}_{0}^{C}D_t^{\alpha} y(t) = f\bigl(t, y(t), {}_{0}^{C}D_t^{\alpha_1} y(t), \ldots, {}_{0}^{C}D_t^{\alpha_{n-1}} y(t)\bigr). \tag{8.16}$$
Recall the algorithms in Chapter 4. The key point is to introduce a Taylor auxiliary function T(t) such that the output signal can be decomposed into y(t) = z(t) + T(t), where T(t) is the Taylor auxiliary function defined in (4.36), rewritten here as
$$T(t) = \sum_{k=0}^{q-1} \frac{y^{(k)}(0)}{k!} t^k = \sum_{k=0}^{q-1} \frac{y_k}{k!} t^k, \tag{8.17}$$
and z(t) is the function with zero initial conditions. From (8.16), it is also known that
$${}_{0}^{C}D_t^{\alpha} z(t) = f\bigl(t, z(t), {}_{0}^{C}D_t^{\alpha_1} z(t), \ldots, {}_{0}^{C}D_t^{\alpha_{n-1}} z(t)\bigr),$$
and since ${}_{0}^{C}D_t^{\gamma} z(t) = {}_{0}^{RL}D_t^{\gamma} z(t)$, one has
$${}_{0}^{RL}D_t^{\alpha} z(t) = f\bigl(t, z(t), {}_{0}^{RL}D_t^{\alpha_1} z(t), \ldots, {}_{0}^{RL}D_t^{\alpha_{n-1}} z(t)\bigr). \tag{8.18}$$
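Since z(t) has zero initial conditions, its Caputo and Riemann–Liouville derivatives coincide, and both can be computed from the same Grünwald–Letnikov sum; a minimal Python sketch (gl_deriv() is an illustrative helper, not a toolbox function) checks this against the classical half-order derivative of t², for which D^{0.5} t² = 2t^{1.5}/Γ(2.5):

```python
import math

def gl_deriv(z, h, alpha):
    # first-order Grunwald-Letnikov derivative of the samples z
    # (valid for signals with zero initial conditions)
    m = len(z)
    w = [1.0]
    for k in range(1, m):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return [sum(w[j] * z[i - j] for j in range(i + 1)) / h ** alpha
            for i in range(m)]

h = 0.001
t = [k * h for k in range(1001)]
z = [tk ** 2 for tk in t]                      # z(t) = t^2, zero initial conditions
d = gl_deriv(z, h, 0.5)
exact = 2.0 / math.gamma(2.5) * t[-1] ** 1.5   # RL (= Caputo) derivative at t = 1
```

The first-order approximation agrees with the analytical value to O(h), which is the accuracy level assumed throughout this section.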


Select a step-size h; the first q points of z_i are zero, while y_i = T_i, i = 1, 2, . . . , q. Now, a loop over k can be used to solve the equation. Assume that z_k is known. In order to compute the numerical solution at t_{k+1}, an initial or predicted value of z_{k+1} is denoted as z_{k+1}^p = z_k. Substituting it into the right-hand side of (8.18), it is found that
$${}_{0}^{RL}D_t^{\alpha} z(t) = \hat{f}, \tag{8.19}$$
where, if f̂ can be regarded as a given function with known quantities, formula (8.19) can be regarded as a single-term equation, and a solution ẑ_{k+1} can be found. If ‖ẑ_{k+1} − z_{k+1}^p‖ < ϵ, where ϵ is a pre-selected error tolerance, the value ẑ_{k+1} is regarded as the solution; otherwise, let z_{k+1}^p = ẑ_{k+1} and continue the iterative process. The predictor algorithm is summarised as follows.

Algorithm 8.4 (The predictor algorithm). Proceed as follows.
(1) Construct T(t), decompose y(t) = z(t) + T(t), and establish (8.18).
(2) Select a step-size h, and let y_i = T_i, z_i = 0 for i = 1, 2, . . . , q.
(3) Start a loop structure from k = q + 1.
(4) Denote z_{k+1}^p = z_k and compute the {}_{0}^{RL}D_t^{α_i} z_{k+1} terms. Substitute them into (8.18) and obtain formula (8.19); solve it to find ẑ_{k+1}.
(5) If ‖ẑ_{k+1} − z_{k+1}^p‖ < ϵ, accept the solution z_{k+1} = ẑ_{k+1}; otherwise, denote z_{k+1}^p = ẑ_{k+1} and repeat the iterative process until a solution is found.

Based on the above algorithm, a MATLAB function nlfep() is written to find the predictor solutions of the original problem.

function [y,t]=nlfep(fun,alpha,y0,tn,h,p,err)
m=ceil(tn/h)+1; t=(0:(m-1))'*h; ha=h.^alpha; z=0;
[T,dT,w,d2]=aux_func(t,y0,alpha,p);
y=z+T(1); dy=zeros(1,d2-1);
for k=1:m-1
   zp=z(end); yp=zp+T(k+1); y=[y; yp]; z=[z; zp];
   [zc,yc]=repeat_funs(fun,t,y,d2,w,k,z,ha,dT,T);
   while abs(zc-zp)>err
      yp=yc; zp=zc; y(end)=yp; z(end)=zp;
      [zc,yc]=repeat_funs(fun,t,y,d2,w,k,z,ha,dT,T);
   end
end
% subfunction of repetitive codes
function [zc,yc]=repeat_funs(fun,t,y,d2,w,k,z,ha,dT,T)
for j=1:(d2-1)
   dy(j)=w(1:k+1,j+1)'*z((k+1):-1:1)/ha(j+1)+dT(k,j+1);
end
f=fun(t(k+1),y(k+1),dy);
zc=((f-dT(k+1,1))*ha(1)-w(2:k+1,1)'*z(k:-1:1))/w(1,1);
yc=zc+T(k+1);

where a function aux_func() is written, shared between this function and the corrector function to be introduced next, to reduce the repetitive code.

function [T,dT,w,d2]=aux_func(t,y0,alpha,p)
an=ceil(alpha); y0=y0(:); q=length(y0); d2=length(alpha);
m=length(t); g=double(genfunc(p));
for i=1:d2, w(:,i)=get_vecw(alpha(i),m,g)'; end
b=y0./gamma(1:q)'; T=0; dT=zeros(m,d2);
for i=1:q, T=T+b(i)*t.^(i-1); end
for i=1:d2
   if an(i)==0, dT(:,i)=T;
   elseif an(i)

>> f=@(t,y,Dy)-t.^0.1.*ml_func([1,1.545],-t).*exp(t)./...
      ml_func([1,1.445],-t).*y.*Dy(:,1)+exp(-2*t)-Dy(:,2).^2;

where the arguments t and y are the time and output expressed as column vectors, while Dy is a matrix whose columns correspond to the fractional-order derivative signals on the right-hand side of the equation.

Fig. 8.5: Predictions to the solution of the original equation.

The predictor vector of the equation can be obtained with the following statements, and the results are shown in Figure 8.5. It can be seen that the solutions are not good for this example; in fact, the solution for p = 2 is slightly worse than the one obtained with p = 1. The total time used in each solution is around 0.1 seconds.

>> alpha=[1.455,0.555,1]; y0=[1,-1]; tn=1; h=0.01; err=1e-8; p=1;
   tic, [yp1,t]=nlfep(f,alpha,y0,tn,h,p,err); toc
   tic, [yp2,t]=nlfep(f,alpha,y0,tn,h,2,err); toc
   plot(t,yp1,t,yp2,t,exp(-t))

8.2.2 Corrector equation

A better solution algorithm, referred to here as the corrector algorithm, follows immediately. With the same step-size h and the predictor result y_p, substitute it into the right-hand side of (8.16); a single-term equation can be found. The iterative process can be carried on until an accurate solution is found. The following vectorised algorithm is summarised.

Algorithm 8.5 (Corrector algorithm). Proceed as follows.
(1) Start from the predictor y_p.
(2) Substitute y_p into the right-hand side of (8.16) to find a corrector ŷ.
(3) If ‖ŷ − y_p‖ < ϵ, accept the solution ŷ; otherwise, denote y_p = ŷ and go to (2) to continue the iteration until a solution is found.

Based on the above algorithm, the following MATLAB function can be written to find solutions of any explicit Caputo equations.

function y=nlfec(fun,alpha,y0,yp,t,p,err)
yp=yp(:); t=t(:); h=t(2)-t(1); m=length(t); ha=h.^alpha;


[T,dT,w,d2]=aux_func(t,y0,alpha,p);
[z,y]=repeat_funs(fun,t,yp,T,d2,alpha,dT,ha,w,m,p);
while norm(z)>err, yp=y; z=zeros(m,1);
   [z,y]=repeat_funs(fun,t,yp,T,d2,alpha,dT,ha,w,m,p);
end
% subfunction of repetitive codes
function [z,y]=repeat_funs(fun,t,yp,T,d2,alpha,dT,ha,w,m,p)
for i=1:d2, dyp(:,i)=glfdiff9(yp-T,t,alpha(i),p)'+dT(:,i); end
f=fun(t,yp,dyp(:,2:d2))-dyp(:,1); y=yp; z=zeros(m,1);
for i=2:m, ii=(i-1):-1:1;
   z(i)=(f(i)*(ha(1))-w(2:i,1)'*z(ii))/w(1,1); y(i)=z(i)+yp(i);
end

The syntax of the function is y = nlfec(fun, α, y0, y_p, t, p, ϵ), and the arguments are almost the same as the ones in function nlfep().

Example 8.8. Solve numerically the equation in Example 7.10.

Solution. Having obtained the predictor solution in the previous example, the corrector solution for p = 2 can be obtained with the following statements. The solution can be found with a norm of the error vector of 3.9337×10⁻⁵, and the time elapsed is around 15 seconds. It can be seen that the solver is much more efficient.

>> tic, [y2,t]=nlfec(f,alpha,y0,yp1,t,2,err); toc
   max(abs(y2-exp(-t)))

If a smaller step-size h = 0.001 is used, the equation can be solved again; the error is 4.0035×10⁻⁷, and the elapsed time is around 20 seconds. One may further decrease the step-size to h = 0.0001; the error is 6.8855×10⁻⁹, with an elapsed time of 159 seconds, while for p = 3, the error is 3.8361×10⁻⁹, with an elapsed time of 530 seconds.

>> h=0.001; tn=1;
   tic, [yp,t]=nlfep(f,alpha,y0,tn,h,p,err); toc
   tic, [y2,t]=nlfec(f,alpha,y0,yp,t,2,err); toc
   max(abs(y2-exp(-t)))

Comments 8.3 (Nonlinear Caputo equation solvers). We remark the following.
(1) The algorithm is much more efficient than the algorithm in Section 8.1.2.
(2) Compared with the algorithms previously discussed, the orders are not required to be commensurate; therefore, it is more flexible.
(3) Since the computational load is much lower, smaller step-sizes can be selected to significantly improve the accuracy of the solutions.
(4) The order p cannot be chosen arbitrarily. The maximum effective p can be assigned to ⌈α⌉. The accuracy may be slightly improved with larger p, but cannot really reach o(h^p).


8.2.3 High-precision matrix algorithm for implicit Caputo equations

The algorithms presented in earlier subsections are restricted to the problems of explicit nonlinear Caputo differential equations. In this subsection, a more general form of the implicit Caputo equations presented in Definition 8.1 is mainly considered. The general form of nonlinear Caputo differential equations is described by
$$F\bigl(t, y(t), {}_{t_0}^{C}D_t^{\alpha_1} y(t), \ldots, {}_{t_0}^{C}D_t^{\alpha_n} y(t)\bigr) = 0, \tag{8.20}$$
with initial conditions y_i, i = 0, 1, . . . , ⌈max(α_i)⌉ − 1. Again the signal y(t) can be decomposed into y(t) = z(t) + T(t), with T(t) defined in (8.17) and z(t) the signal with zero initial conditions, and
$$F\bigl(t, y(t), {}_{t_0}^{RL}D_t^{\alpha_1} z(t), \ldots, {}_{t_0}^{RL}D_t^{\alpha_n} z(t)\bigr) = 0. \tag{8.21}$$
Let us recall the pth-order matrix algorithm for finding Riemann–Liouville derivatives. The original function y(t) can be decomposed into u(t) + v(t) with
$$u(t) = \sum_{k=0}^{p} c_k (t - t_0)^k,$$
where c_k = y_k/k!. The matrix formula for the α_i th-order derivative can be evaluated from
$${}_{t_0}^{RL}D_t^{\alpha_i} y(t) = \frac{1}{h^{\alpha_i}} W_{\alpha_i} v + \sum_{k=0}^{p} \frac{c_k \Gamma(k+1)}{\Gamma(k+1-\alpha_i)} (t - t_0)^{k-\alpha_i}, \tag{8.22}$$
where W_{α_i} is a lower triangular matrix of the weights w_i, obtainable from the MATLAB function get_vecw(). For simplicity, denote B_i = W_{α_i}/h^{α_i}. Consider the fact that, to take the α_i th-order derivative of the polynomial under the Caputo definition, the (⌈α_i⌉ + 1)st-order derivative of the polynomial should be taken first, which may eliminate all the terms whose degree is less than ⌈α_i⌉. Therefore, (8.22) can be rewritten as
$${}_{t_0}^{RL}D_t^{\alpha_i} y(t) = \frac{1}{h^{\alpha_i}} W_{\alpha_i} v + \sum_{k=\lceil\alpha_i\rceil+1}^{q} \frac{c_k \Gamma(k+1)}{\Gamma(k+1-\alpha_i)} (t - t_0)^{k-\alpha_i}. \tag{8.23}$$
For the time vector t = [0, h, 2h, . . . , mh], the nonlinear algebraic equation below can be constructed,
$$f(t, B_1 v, B_2 v, \ldots, B_n v) = 0,$$
and this nonlinear algebraic equation can easily be solved with the MATLAB function fsolve(). The following MATLAB code can be written to implement the algorithm. The B_i and the matrix du are computed in the function and passed to the equation function f through additional parameters in the solver fsolve(), where B_i is stored in a three-dimensional array.


function [y,t]=nlfode_mat(f,alpha,y0,tn,h,p,yp)
y0=y0(:); alfn=ceil(alpha); m=ceil(tn/h)+1; t=(0:(m-1))'*h;
d1=length(y0); d2=length(alpha); B=zeros(m,m,d2);
g=double(genfunc(p));
for i=1:d2
   w=get_vecw(alpha(i),m,g);
   B(:,:,i)=rot90(hankel(w(end:-1:1)))/h^alpha(i);
end
c=y0./gamma(1:d1)'; du=zeros(m,d2); u=0;
for i=1:d1, u=u+c(i)*t.^(i-1); end
for i=1:d2
   if alfn(i)==0, du(:,i)=u;
   elseif alfn(i)

>> alpha=[1.455,0.555,1]; y0=[1,-1]; tn=1; h=0.01;
   f=@(v,t,u,B,du)(du(:,1)+B(:,:,1)*v)+t.^0.1.*exp(t)...
      .*ml_func([1,1.545],-t)./ml_func([1,1.445],-t)...
      .*(v+u).*(du(:,2)+B(:,:,2)*v)-exp(-2*t)+...
      (du(:,3)+B(:,:,3)*v).^2;


   tic, [y1,t]=nlfode_mat(f,alpha,y0,tn,h,1); toc
   tic, [y2,t]=nlfode_mat(f,alpha,y0,tn,h,2); toc
   max(abs(y1-exp(-t))), max(abs(y2-exp(-t)))

Although it seems that this algorithm is not as efficient as the new predictor–corrector algorithm, its advantage is that it applies to implicit Caputo equations directly.
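The structure of the B_i = W_{α_i}/h^{α_i} matrices can be illustrated with a small numerical sketch; on a linear single-term test equation D^α v(t) + v(t) = 1 with zero initial conditions, the resulting algebraic system is linear and can be solved directly, so no fsolve()-type iteration is needed (a Python sketch; gl_matrix() is a hypothetical helper, not the toolbox function):

```python
import numpy as np

def gl_matrix(alpha, m, h):
    # lower-triangular Grunwald-Letnikov differentiation matrix B = W_alpha / h^alpha
    w = np.zeros(m)
    w[0] = 1.0
    for k in range(1, m):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    B = np.zeros((m, m))
    for i in range(m):
        B[i, : i + 1] = w[: i + 1][::-1]   # row i holds w_i, ..., w_1, w_0
    return B / h ** alpha

# D^alpha v + v = 1 with zero initial conditions becomes (B + I) v = 1
h, tn, alpha = 0.01, 1.0, 1.0
m = round(tn / h) + 1
B = gl_matrix(alpha, m, h)
v = np.linalg.solve(B + np.eye(m), np.ones(m))
```

For α = 1 the exact solution is 1 − e^{−t}, which the first-order scheme reproduces to O(h); for a genuinely implicit nonlinear F, the same B matrices would be handed to a root-finding solver, as in nlfode_mat() above.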

8.3 Simulink block library for typical fractional-order components

In control system simulation problems, the function-call style of the numerical solution techniques and algorithms studied earlier is not quite convenient, since before using those techniques, the whole system model should first be expressed by a single fractional-order differential equation; otherwise, numerical solutions cannot be obtained. In control systems, the whole system model is usually composed of block diagrams, and it could be a very heavy task to rewrite a complicated system model as a single differential equation. Therefore, block diagram-based solution schemes are essential in control system simulation. In this section, the design and construction of an FOTF Simulink library is presented, together with its use and programming essentials. Further, the schemes for control system modelling and simulation are illustrated.

8.3.1 FOTF block library

A Simulink block library is established and the major blocks are designed as shown in Figure 8.6. In the library, five commonly used fractional-order blocks are provided. The model library is saved in a file fotflib.slx, and a MATLAB function slblocks.m is written to include the library in the Simulink search path, with the following key statements.

function blkStruct=slblocks
blkStruct.OpenFcn='fotflib'; blkStruct.IsFlat=1;
blkStruct.Name=sprintf('%s\n%s','MIMO FOTF','Toolbox');

The library window carries the annotation “Simulink Blockset for FOTF Toolbox, (c) 2016, Professor Dingyu Xue, Northeastern University, China”.


Fig. 8.6: A block library for multivariable FOTF blocks.

To open the FOTF block library, two ways can be taken: one can type fotflib to invoke the FOTF library, or the FOTF library can be opened directly in the “Toolboxes & Blocksets” window of the Simulink model browser. Five blocks are provided in the FOTF library, and are summarised as follows:
(1) The Fractional Operator block is an integer-order approximation to a fractional-order differentiator or integrator with Oustaloup, modified Oustaloup or Matsuda–Fujii filters. The dialog box for this block is shown in Figure 8.7. The internal structure of the masked block is in fact a standard integer-order transfer function block, specified by its coefficient vectors num and den.

Fig. 8.7: Fractional-order differentiator/integrator dialog box.

In the block masking procedures, the parameter interface in the “Parameters & Dialog” pane can be designed as shown in Figure 8.8, and the masked initialisation commands are written as

wb=ww(1); wh=ww(2); str='Fractional\n';
if kF==1, G=ousta_fod(gam,n,wb,wh);
elseif kF==2, G=new_fod(gam,n,wb,wh);
else, G=matsuda_fod(gam,2*n+1,wb,wh); end
num=G.num{1}; den=G.den{1}; T=1/wh;
if isnumeric(gam)
   if gam>0, str=[str, 'Der s^' num2str(gam)];
   else, str=[str, 'Int s^{' num2str(gam) '}']; end
else, str=[str, 'Der s^gam']; end
if kF1==1, den=conv(den,[T,1]); end


Fig. 8.8: Parameter design window.

where, sometimes, in order to avoid algebraic loops in Simulink, a low-pass filter can be appended to the filter.
(2) The Caputo Operator block implements the Caputo derivative of the input signal. This block will be presented later.
(3) The Approximate fPID Controller block implements an integer-order approximation to a fractional-order PID controller [55]. The internal structure of the PI^λD^μ block is constructed as shown in Figure 8.9, where an integrator is explicitly used to eliminate the steady-state errors of the closed-loop system.

Fig. 8.9: Structure of fractional-order PID controller block.

The parameters of the PI^λD^μ controller can be specified in the dialog box shown in Figure 8.10, opened when the block is double clicked. The “Parameters & Dialog” item can be selected, with the parameter design interface shown in Figure 8.11; alternatively, the FOTF object of the PI^λD^μ controller can be filled in the first edit box.

The initialisation commands for the block can be written as follows.

if isa(Kp,'fotf'), a=Kp; lam=a.nd; mu=a.nn; mu=mu(1)-lam;
   b=a.num; Kd=b(1); Kp=b(2); Ki=b(3);
else
   II=[II 1]; DD=[DD 1];
   Ki=II(1); lam=II(2); Kd=DD(1); mu=DD(2);
end
key=0; if mu>=1, key=1; end

Fig. 8.10: Dialog box of Fractional-order PID controller block.

Fig. 8.11: Parameter setting window in mask facilities.


Fig. 8.12: Dialog box of FOTF block.

(4) The Approximate FOTF Model implements an FOTF block, which allows both FOTF and FOTF matrix objects. It is known that an FOTF block can eventually be approximated by a high-order transfer function model, obtained by the function high_order(). The dialog box shown in Figure 8.12 is designed. Therefore, two ways are allowed to input the fractional-order transfer function. One way is to specify the coefficients and orders of the numerator and denominator of the FOTF block in the first four edit boxes, while the other way is to input in the first edit box the variable name of an existing FOTF object. The initialisation commands in the masked block are as follows.

if isa(na,'fotf'), G=na;
else
   if length(na)~=length(a),
      errordlg('Error','Mismatch on the denominator'), end
   if length(b)~=length(nb),
      errordlg('Error','Mismatch on the numerator'), end


   G=fotf(a,na,b,nb,T);
end
wb=ww(1); wh=ww(2);
switch kFilter
   case 1, str='ousta_fod'; case 2, str='new_fod';
   case 3, str='matsuda_fod';
end
G1=high_order(G,str,wb,wh,N); [n,m]=size(G1);
for i=1:n, for j=1:m, T(i,j)=G(i,j).ioDelay; end, end
G1.ioDelay=T;

In the masked model, only one block, the LTI block of the Control System Toolbox, is embedded, under the name G1.
(5) The Approximate FOTF Matrix block deals with multivariable FOTF matrices, and it will be discussed later. However, from the point of view of computational robustness, this block is not recommended; use the Approximate FOTF Model block instead for multivariable FOTF matrices.
It can be seen that multivariable fractional-order systems can easily be modelled with the blocks provided in the toolbox, and simulation analysis of such systems becomes very simple.
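Several of the blocks above delegate to ousta_fod() to build the integer-order approximation. Oustaloup's recursive pole/zero placement can be sketched as follows (a Python illustration of the commonly published textbook formulas, not the toolbox code; oustaloup() and freq_mag() are hypothetical helper names):

```python
import numpy as np

def oustaloup(gamma, N, wb, wh):
    """Zeros, poles and gain of an Oustaloup approximation of s^gamma on
    (wb, wh), with 2N+1 pole/zero pairs (standard textbook parameterisation)."""
    k = np.arange(-N, N + 1)
    r = wh / wb
    zeros = wb * r ** ((k + N + 0.5 * (1.0 - gamma)) / (2 * N + 1))
    poles = wb * r ** ((k + N + 0.5 * (1.0 + gamma)) / (2 * N + 1))
    K = wh ** gamma
    return zeros, poles, K

def freq_mag(zeros, poles, K, w):
    # magnitude of K * prod(s + zeros) / prod(s + poles) at s = jw
    s = 1j * w
    return abs(K * np.prod(s + zeros) / np.prod(s + poles))

z, p, K = oustaloup(0.5, 4, 1e-2, 1e2)
mag1 = freq_mag(z, p, K, 1.0)     # should be close to |j*1|^0.5 = 1
mag10 = freq_mag(z, p, K, 10.0)   # should be close to 10^0.5
```

Because the zeros and poles are placed in geometric symmetry around the centre frequency √(ω_b ω_h), the magnitude of the filter matches |jω|^γ very closely throughout the fitting band, which is the behaviour the Fractional Operator block relies on.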

8.3.2 Implementation of FOTF matrix block

In the Approximate FOTF Matrix block, the dialog box is shown in Figure 8.13, where the FOTF matrix object G should be specified in the edit box directly.

Fig. 8.13: Dialog box of FOTF matrix block.


Once the multivariable FOTF matrix is specified, the following initialisation commands in the masked block are called

str=get_param(gcb,'MaskValueString');
key=eval(['fotf2sl(G,''' str ''');']);

where the internal structure of the FOTF matrix can be drawn automatically with the MATLAB function fotf2sl() written as

function key=fotf2sl(G,str0)
ik=strfind(str0,'|'); sG=str0(1:ik(1)-1); key=1;
M=[gcb '/FOTF_matx'];
if exist('G') & strcmp(class(G),'fotf')
   [m,n]=size(G); open_system(M,'loadonly');
   set_param(M,'Location',[100,100,500,400]);
   ll=get_param(M,'lines'); bb=get_param(M,'blocks');
   for i=1:length(ll), delete_line(ll(i).Handle); end
   for i=1:length(bb), str=char(bb(i));
      if ~strcmp(str,'In1') & ~strcmp(str,'Out1'),
         delete_block([M,'/',str]);
      end, end
   set_param([M '/In1'],'Position',[40,80,70,94]);
   add_block('built-in/Demux',[M '/Demux'],'Outputs',int2str(n),...
      'Position',[140,80,145,120]);
   add_block('built-in/Mux',[M '/Mux'],'Inputs',int2str(m),...
      'Position',[170+200*n,80,175+200*n,120]);
   set_param([M '/Out1'],'Position',...
      [200+200*n,80,230+200*n,94]);
   pos=zeros(n*m,4); s='+'; ss=repmat(s,1,n);
   add_line(M,'In1/1','Demux/1','autorouting','on');
   add_line(M,'Mux/1','Out1/1','autorouting','on');
   for i=1:m, i1=int2str(i);
      add_block('built-in/Sum',[M '/Add' i1],...
         'Inputs',ss,'Position',[110+200*n,...
         100+80*(i-1),125+200*n,100+80*(i-1)+10*n]);
      for j=1:n, j1=int2str(j); blkname=[M,'/' sG,i1,j1];
         pos(i+(j-1)*n,:)=[180+10*n+160*(j-1),...
            80+80*(i-1)+40*(j-1),300+10*n+160*(j-1),...
            120+80*(i-1)+40*(j-1)];
         add_block('fotflib/Approximate FOTF model',...
            blkname,'Position',pos(i+(j-1)*n,:));
         str=get_param(blkname,'MaskValueString');
         ii=strfind(str,'|');


         str=[sG '(' i1,',',j1 ')' str(ii(1):ii(4)),...
            str0(ik(1)+1:end) str(ii(7):end)];
         set_param(blkname,'MaskValueString',str)
         add_line(M,['Demux/',j1],[sG i1 j1 '/1'],...
            'autorouting','on')
         add_line(M,[sG i1 j1 '/1'],['Add' i1 '/',j1],...
            'autorouting','on')
      end
      add_line(M,['Add' i1 '/1'],['Mux/' i1],'autorouting','on')
   end
else, key=0; end

and a low-level Simulink block diagram can be drawn automatically if the multivariable FOTF matrix object is specified. The internal structure of such a generated model is shown in Figure 8.14.


Fig. 8.14: An automatically generated Simulink model for FOTF matrix.

8.3.3 Numerical solutions of control problems with Simulink

Typical feedback control systems are normally composed of a plant model, a controller model and a feedback model, each of which can be a fractional-order one. In this case, it may be very complicated, if not impossible, to express the whole system by a single fractional-order differential equation. Therefore, the fractional-order equation solvers discussed so far cannot be used in studying complicated control simulation problems. A block diagram-based simulation strategy is very useful in handling these kinds of systems. Here, the simulation analysis of control systems is demonstrated through several examples.

Example 8.10. If the plant model and controller are both FOTF models, i.e.
$$G(s) = \frac{\mathrm{e}^{-s}}{0.8s^{2.2} + 0.5s^{0.9} + 1}, \qquad G_{\mathrm{c}}(s) = 0.45966 + \frac{0.5761}{s} + 0.49337\,s^{1.3792},$$
simulate the system and find the output and control signal.


Solution. The plant model and the controller can both be entered into the MATLAB environment first, with the following statements.

>> s=fotf('s'); G=1/(0.8*s^2.2+0.5*s^0.9+1); G.ioDelay=1;
   Gc=0.45966+0.5761/s+0.49337*s^1.3792;

It can be seen that the function feedback() cannot be used to get the overall model, due to the existence of the delay term. Therefore, Simulink should be used instead to find the closed-loop behaviour of the system. A Simulink model can be established as shown in Figure 8.15, where two output ports are drawn, for the output signal and the control signal, respectively; the model is saved in the file c8mfpid1.slx.


Fig. 8.15: Simulink model of closed-loop control (c8mfpid1.slx).

The output of the system can be obtained with

>> [t x y]=sim('c8mfpid1',[0,10]); plot(t,y(:,1))

as shown in Figure 8.16. In this example, the orders of the filters are both selected as 13, and the frequency range is set to (10⁻⁵ rad/s, 10³ rad/s).

Fig. 8.16: Closed-loop step response.

The results from Simulink can also be cross-validated with the following statements, and it can be seen that the results agree well.

>> G='exp(-s)/(0.8*s^2.2+0.5*s^0.9+1)';
   Gc='(0.45966+0.5761/s+0.49337*s^1.3792)'; G1=[G '*' Gc];
   [t,y]=INVLAP_new(G1,0,10,1000,1,'1/s');
   plot(t,y,tout,yout(:,1))

Example 8.11. Simulate the closed-loop system in Example 6.19.

Solution. It has been shown in Example 6.19 that, for the multivariable system, the command-line computation of the closed-loop step response failed. The Simulink-based analysis in this section should be used instead. The multivariable FOTF matrix G(s) and the compensation matrices can be entered into the MATLAB environment first.

>> s=fotf('s');
   g1=1/(1.35*s^1.2+2.3*s^0.9+1); g2=2/(4.13*s^0.7+1);
   g3=1/(0.52*s^1.5+2.03*s^0.7+1); g4=-1/(3.8*s^0.8+1);
   G=[g1,g2; g3,g4];
   c1=11.6+2.89*s^-0.9+15*s^0.8; c2=13+2.82*s^-0.9+15*s^0.8;
   Kp=[1/3 2/3; 1/3 -1/3]; s=tf('s');
   Kd=[1/(2.5*s+1), 0; 0, 1/(3.5*s+1)];

Simulink can be invoked, and fotflib can be opened; the Simulink model can be created as shown in Figure 8.17. The following statements can be used to evaluate the closed-loop step response of the multivariable system under PI^λD^μ controllers.

>> u1=1; u2=0; [t,x,y]=sim('c8mstep',10);
   subplot(221), plot(t,y(:,1)), ylim([-0.1 1.1])
   subplot(223), plot(t,y(:,2)), ylim([-0.1 1.1])
   u1=0; u2=1; [t,x,y]=sim('c8mstep',10);
   subplot(222), plot(t,y(:,1)), ylim([-0.1 1.1])
   subplot(224), plot(t,y(:,2)), ylim([-0.1 1.1])

Fig. 8.17: Closed-loop multivariable system (c8mstep.slx).


Fig. 8.18: Closed-loop step response of the multivariable system.

The closed-loop step responses are shown in Figure 8.18, and it can be seen that the control behaviour is satisfactory.

8.3.4 Validations of Simulink results

Sometimes, due to improper settings or parameter selections in the interface, the simulation results obtained may not be reliable. It is a good habit for the user to validate the solutions. For instance, one can change the Simulink parameters or model parameters and see whether the results obtained are the same. For the Simulink parameters, one may select the “Simulation → Model Configuration Parameters” menu to open a dialog box, as shown in Figure 8.19. Parameters such as the “Relative tolerance” can be changed, and the simulation run again; a good choice of this parameter is 1e-8. The user can check whether the same results can be obtained. For fractional-order systems, other important parameters are the interested frequency region and the orders of the filters. For better approximation, one may select a higher order, for instance, a 13th-order filter. More important is the frequency region. If the terminate time is large, for instance, t_n = 100, a very small lower bound ω_b should be selected, for instance, 10⁻⁵. The bound can be changed to check whether the same results can be obtained.

8.4 Block diagram-based solutions of fractional-order differential equations with zero initial conditions

Fractional-order systems which are difficult to solve with the functions in MATLAB can be tried with simulation approaches in Simulink. In this section, several

illustrative examples will be given to demonstrate the modelling and simulation procedures for fractional-order differential equation problems. For systems with zero initial conditions, the key signals can be constructed with integer-order integrators and Fractional Operator blocks in the FOTF blockset. Then, based on these signals, the full Simulink model can be established. The modelling procedures will be fully illustrated through the following examples.

Example 8.12. Consider first the following linear fractional-order differential equation studied in Example 4.7:
$$D_t^{3.5} y(t) + 8D_t^{3.1} y(t) + 26D_t^{2.3} y(t) + 73D_t^{1.2} y(t) + 90D_t^{0.5} y(t) = 30u'(t) + 90D^{0.3} u(t),$$
with zero initial conditions, and the input signal u(t) = sin t². Solve the differential equation with Simulink.

Solution. For simplicity, the explicit form of the equation can be written as
$$D_t^{3.5} y(t) = -8D_t^{3.1} y(t) - 26D_t^{2.3} y(t) - 73D_t^{1.2} y(t) - 90D_t^{0.5} y(t) + 30u'(t) + 90D^{0.3} u(t).$$
To establish a Simulink model for the differential equation, the following procedures should be taken.
(1) Open a blank model window. The menu “File → New → New Model” in the Simulink environment should be selected to create a new blank window.

Fig. 8.19: Parameters dialog box for model parameters.

illustrative examples will be given to demonstrate the modelling and simulation procedures in fractional-order differential equation problems. For systems with zero initial conditions, the key signals can be constructed with integer-order integrators and Fractional Operator blocks in the FOTF blockset. Then based on the signals, the fully Simulink model can be established. The modelling procedures will be fully illustrated through the following examples. Example 8.12. Consider first the following linear fractional-order differential equation studied in Example 4.7: Dt3.5 y(t) + 8Dt3.1 y(t) + 26Dt2.3 y(t) + 73Dt1.2 y(t) + 90Dt0.5 y(t) = 30u󸀠 (t) + 90D 0.3 u(t), with zero initial conditions, and the input signal is u(t) = sin t2 . Solve the differential equation with Simulink. Solution. For simplicity, the explicit form of the equation can be written as Dt3.5 y(t) = −8Dt3.1 y(t) − 26Dt2.3 y(t) − 73Dt1.2 y(t) − 90Dt0.5 y(t) + 30u󸀠 (t) + 90D 0.3 u(t). To establish a Simulink model for the differential equation, the following procedures should be taken. (1) Open a blank model window. The menu “File → New → New Model” in Simulink environment should be selected to create a new blank window.

8.4 Block diagram-based solutions of fractional-order differential equations | 277

(2) Define key signals. Key signals in the equations such as y(t) and its integer-order derivatives should be declared with series connected integrator blocks, as shown in Figure 8.20 (a), referred to as an integrator chain. From these signals, the fractional-order derivatives of y(t) needed in the equation can be constructed as shown in Figure 8.20 (b). y󸀠󸀠󸀠 (t)

y󸀠󸀠 (t)

y󸀠 (t)

y(t)

1/s

1/s

1/s

Integrator

Integrator1

Integrator2

(a) Key signals of integer-order derivatives of y(t) (c8mblk1a.slx). D 2.3 y(t)

Fractional Der s^0.3

D

3.5

y(t)

y (t)

Fractional DD

D 3.1 y(t)

Fractional DD4

󸀠󸀠󸀠

Fractional Int s^{−0.5}

Fractional Der s^0.1 Fractional DD1

Fractional Der s^0.5

D 0.5 y(t)

Fractional DD2

󸀠

1 s Integrator

D 1.2 y(t)

󸀠󸀠

y (t)

1 s

y (t)

Integrator1

1 s Integrator2

y(t)

Fractional Der s^0.2 Fractional DD3

(b) Key signals of integer- and fractional-order derivatives of y(t) (c8mblk1b.slx). Fig. 8.20: Declarations of the necessary key signals in the equation.

(3) Create the full Simulink model. When all the key signals are defined, the full Simulink model can be established with low-level Simulink blocks, as shown in Figure 8.21. The parameters of the frequency interval ww and the order N in the Fractional Operator blocks can be declared in the MATLAB workspace. For instance, the following parameters can be assigned.

>> ww=[1e-5 1e3]; N=15;

(4) Start the simulation process. The simulation result of the system is exactly the same as the one obtained in Example 4.7.
Also, since the model to be simulated is linear, an alternative Simulink model can be established as shown in Figure 8.22. The equivalent fractional-order transfer function model can be entered as follows.

>> b=[30,90]; nb=[1,0.3]; a=[1,8,26,73,90];
   na=[3.5,3.1,2.3,1.2,0.5]; G=fotf(a,na,b,nb);

The simulation result obtained is the same as the ones obtained earlier.

278 | 8 Numerical solutions of nonlinear fractional-order differential equations

Fractional Der s^0.5

90 Gain6

Fractional Der s^0.3

26 Gain5 30 Gain1

sin(u^2) Clock

Fcn

90 Gain

du/dt

Fractional DD2

Fractional Int s^{−0.5}

Derivative

1/s

1/s

1/s

Integrator Integrator1 Integrator2

Fractional DD

Fractional Der s^0.3

Fractional DD4

1 Out1

Add

Fractional DD5

Fractional Der s^0.1

8 Gain2 73

Fractional DD1

Gain3

Fractional Der s^0.2 Fractional DD3

Fig. 8.21: Simulink description of the equation (c8mblk2.slx).

Fig. 8.22: A simpler Simulink description of the equation (c8mblk3.slx).

Example 8.13. Consider the fractional differential equation in Example 4.4

D^{1.2}y(t) + 5D^{0.9}y(t) + 9D^{0.6}y(t) + 7D^{0.3}y(t) + 2y(t) = u(t),

with zero initial conditions and under a step input; the analytical solution is

y(t) = 1/2 + (1/2)E_{0.3}(−2t^{0.3}) − E_{0.3}(−t^{0.3}) − t^{0.6}E_{0.3,1.6}^{2}(−t^{0.3}) + t^{0.9}E_{0.3,1.9}^{3}(−t^{0.3}).

Assess the filter parameter settings and their impact in Simulink-based modelling.

Solution. Following the standard modelling procedures discussed earlier, the Simulink model shown in Figure 8.23 can be constructed.

>> ww=[1e-3 1e3]; % one can change the range accordingly
   for n=5:15
      [t,x,y1]=sim('c8mblk5');
      y=1/2+0.5*ml_func(0.3,-2*t.^0.3)-ml_func(0.3,-t.^0.3)...
        -t.^0.6.*ml_func([0.3,1.6,2],-t.^0.3)...
        +t.^0.9.*ml_func([0.3,1.9,3],-t.^0.3);
      e=norm(y-y1), T(n-4)=e;
   end
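The ml_func() calls in the verification code above evaluate two- and three-parameter Mittag-Leffler functions, E_{α,β}^{γ}(z) = Σ_{k≥0} (γ)_k z^k / (k! Γ(αk + β)). For readers without the toolbox, the defining series can be sketched directly; the Python fragment below is only a truncated-series illustration, reliable for moderate |z|, whereas the toolbox function uses more robust evaluation.

```python
import math

def ml(alpha, beta=1.0, gamma=1.0, z=0.0, terms=100):
    """Truncated series for the three-parameter Mittag-Leffler function
    E^gamma_{alpha,beta}(z); gamma = 1 gives the two-parameter function.
    lgamma is used so large Gamma arguments do not overflow."""
    total, poch = 0.0, 1.0       # poch accumulates (gamma)_k / k!
    for k in range(terms):
        total += poch * z**k * math.exp(-math.lgamma(alpha*k + beta))
        poch *= (gamma + k) / (k + 1)
    return total

# sanity checks against elementary special cases
print(ml(1, 1, 1, 1.0))      # E_{1,1}(z) = exp(z)
print(ml(2, 1, 1, 1.0))      # E_{2,1}(z) = cosh(sqrt(z))
```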

8.4 Block diagram-based solutions of fractional-order differential equations | 279

Fig. 8.23: A Simulink model (c8mblk5.slx).

Select different parameters n and (ωb, ωh); the accuracy of the filter is assessed, and the results are shown in Table 8.2. It can be seen that, if the interested frequency range covers six decades, n = 7 gives a good fitting. If the number of decades is increased by one, then it is advised to increase n by one or two. If the terminate time is increased to tn = 1, the function ml_func() may fail to evaluate the analytical solution. However, with the Simulink model, the terminate time can be assigned even larger values. It should be noted that when the terminate time is large, low-frequency fitting is crucial. If a larger terminate time of tn = 500 is used to

Tab. 8.2: Error comparisons under different filter parameters; frequency ranges (ωb, ωh) in rad/s.

order n | (10^-3, 10^3) | (10^-4, 10^4) | (10^-4, 10^3) | (10^-5, 10^3) | (10^-5, 10^5)
   5    | 0.0018887     | 0.019173      | 0.0051851     | 0.0043723     | 0.15125
   6    | 9.4813×10^-4  | 0.0066357     | 0.0016134     | 0.0044481     | 0.067618
   7    | 7.1361×10^-4  | 0.0039705     | 9.2667×10^-4  | 0.0015384     | 0.029391
   8    | 5.9778×10^-4  | 0.0025542     | 7.2636×10^-4  | 9.2456×10^-4  | 0.014882
   9    | 5.6608×10^-4  | 0.0014187     | 6.4566×10^-4  | 7.0486×10^-4  | 0.010767
  10    | 5.8438×10^-4  | 6.2025×10^-4  | 5.8083×10^-4  | 6.83×10^-4    | 0.0076537
  11    | 5.8583×10^-4  | 2.7473×10^-4  | 5.9078×10^-4  | 5.9552×10^-4  | 0.0047523
  12    | 5.8736×10^-4  | 2.485×10^-4   | 6.0362×10^-4  | 5.8138×10^-4  | 0.00254
  13    | 5.9075×10^-4  | 1.7408×10^-4  | 6.0229×10^-4  | 6.002×10^-4   | 0.0012318
  14    | 5.9276×10^-4  | 1.6454×10^-4  | 6.0382×10^-4  | 6.0507×10^-4  | 6.9777×10^-4
  15    | 5.9437×10^-4  | 1.622×10^-4   | 6.0708×10^-4  | 6.0397×10^-4  | 4.7956×10^-4

Fig. 8.24: Simulation result comparisons.

cross-validate the simulation results, different parameters of the filter are tried, and the responses are obtained as shown in Figure 8.24.

>> n=8; ww=[1e-3 1e3]; [t1,~,y1]=sim('c8mblk5',500);
   n=10; ww=[1e-4 1e3]; [t2,~,y2]=sim('c8mblk5',500);
   n=10; ww=[1e-5 1e3]; [t3,~,y3]=sim('c8mblk5',500);
   n=12; ww=[1e-6 1e3]; [t4,~,y4]=sim('c8mblk5',500);
   tic, n=16; ww=[1e-8 1e3]; [t5,~,y5]=sim('c8mblk5',500); toc
   plot(t1,y1,t2,y2,t3,y3,t4,y4,t5,y5)

It can be seen that ωb = 10^-3 is not a good choice; ωb = 10^-4 is improved but still yields visible errors. For this problem, with such a large tn = 500, ωb must be selected as at least 10^-5 rad/s. One may further reduce ωb and increase n such that consistent results can be achieved.

Example 8.14. Solve the nonlinear fractional-order differential equation

3D^{0.9}y(t) / (3 + 0.2D^{0.8}y(t) + 0.9D^{0.2}y(t)) + |2D^{0.7}y(t)|^{1.5} + (4/3)y(t) = 5 sin 10t.

Solution. From the given equation, the explicit form of y(t) can be obtained as

y(t) = (3/4) [5 sin 10t − 3D^{0.9}y(t) / (3 + 0.2D^{0.8}y(t) + 0.9D^{0.2}y(t)) − |2D^{0.7}y(t)|^{1.5}].

From the explicit expression of y(t), the key signals can be created with fractional-order differentiators, and the block diagram in Simulink can be established as shown in Figure 8.25. From the Simulink model, the simulation results can be obtained as shown in Figure 8.26. The results are verified by different control parameters in the filter, and they give consistent results. It should be noted that the Simulink modelling of a given fractional-order differential equation is not unique. Consider the original equation; one can alternatively

Fig. 8.25: Simulink description of the nonlinear equations (c8mnlf1.slx).

rewrite it as

D^{0.9}y(t) = [(3 + 0.2D^{0.8}y(t) + 0.9D^{0.2}y(t)) / 3] [5 sin 10t − (4/3)y(t) − |2D^{0.7}y(t)|^{1.5}],

where dynamic signals no longer exist in the denominator. From such a model, another Simulink model can be established as shown in Figure 8.27. The simulation result under this model is the same as the one obtained earlier.

Fig. 8.26: Simulation results.

Fig. 8.27: Another Simulink model of the nonlinear equations (c8mnlf2.slx).

8.5 Block diagram solutions of nonlinear Caputo differential equations

For Caputo differential equations, where the initial conditions of the output signal y(t) and the input signal u(t) are not zero, the equations cannot be solved with the approaches presented in the previous section; some special treatment is needed. In this section, a Caputo operator block for the Simulink library shown in Figure 8.6 is designed first. Then a systematic manual conversion of a Caputo differential equation into a zero-initial-condition equation is presented for Simulink modelling. Finally, a simpler Simulink modelling approach for Caputo differential equations is presented. Theoretically speaking, this modelling strategy can be used to model Caputo differential equations of any complexity.

8.5.1 Design of a Caputo operator block

For the signal y(t) with nonzero initial conditions y_i, i = 0, 1, ..., q − 1, the Taylor auxiliary function T(t) is of the form

T(t) = y_0 + y_1 t + (1/2)y_2 t^2 + ⋯ + (1/(q−1)!) y_{q−1} t^{q−1} = Σ_{i=0}^{q−1} (y_i / i!) t^i.

8.5 Block diagram solutions of nonlinear Caputo differential equations | 283

With such an auxiliary signal T(t), the output signal y(t) can be decomposed into y(t) = z(t) + T(t), where z(t) is the signal with zero initial conditions, and

C_0D^{α} y(t) = C_0D^{α} z(t) + C_0D^{α} T(t).    (8.24)

It is known from the Caputo definition that to compute the αth-order derivative, the ⌈α⌉th-order derivative of T(t) is taken first, which makes the polynomial terms of order lower than ⌈α⌉ vanish. Therefore, it can be found that

C_0D^{α} T(t) = Σ_{i=⌈α⌉}^{q−1} [y_i Γ(i+1) / (Γ(i+1−α) i!)] t^{i−α} = Σ_{i=⌈α⌉}^{q−1} [y_i / Γ(i+1−α)] t^{i−α},    (8.25)

which can be regarded as the time-domain compensation to the Oustaloup filter result. Similar to the fractional-order differentiator block discussed earlier, a Caputo differentiator block can be designed as shown in Figure 8.28. Note that this block should be applied to z(t) rather than to y(t) to compute C_0D^{α} y(t).
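The compensation sum in (8.25) is small enough to check by hand. The sketch below (Python, for illustration only; caputo_compensation is a hypothetical helper, not a toolbox function) evaluates the sum for given initial values. For T(t) = y0 + y1·t and 0 < α < 1, only the y1 term survives and the compensation reduces to y1·t^{1−α}/Γ(2−α).

```python
import math

def caputo_compensation(alpha, y0_list, t):
    """Evaluate (8.25): sum over i = ceil(alpha) .. q-1 of
    y_i * t**(i - alpha) / Gamma(i + 1 - alpha),
    where y0_list holds y(0), y'(0), ..., y^(q-1)(0)."""
    q = len(y0_list)
    m = math.ceil(alpha)
    return sum(y0_list[i] * t**(i - alpha) / math.gamma(i + 1 - alpha)
               for i in range(m, q))

# T(t) = 1 + 2t, alpha = 0.5: compensation is 2*t**0.5 / Gamma(1.5)
t = 0.7
val = caputo_compensation(0.5, [1.0, 2.0], t)
ref = 2.0 * t**0.5 / math.gamma(1.5)
print(val, ref)
```

Note that for a constant signal (q = 1) the sum is empty, reflecting that the Caputo derivative of a constant is zero.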

Fig. 8.28: Internal structure of Caputo differentiator block.

When masking the block, the "Initialisation" pane in the mask window can be clicked, and the following code can be written, so that the compensation function in (8.25) is written into the Fcn block.

wb=ww(1); wh=ww(2);
if kF==1, G=ousta_fod(gam,n,wb,wh);
else, G=new_fod(gam,n,wb,wh); end
num=G.num{1}; den=G.den{1}; T=1/wh;
if kF1==1, den=conv(den,[T,1]); end
nam=get_param(gcs,'Name'); q=length(y0); S=''; ngam=ceil(gam);
for i=ngam+1:q
   astr=num2str(i-1-gam); a1=num2str(y0(i)/gamma(i-gam));
   S=[S num2str(a1) '*u^(' astr ')+'];
end, S=S(1:end-1);
S=strrep(strrep(S,'--','-'),'+-','-');
set_param([nam '/Caputo operator/Fcn'],'Expr',S);
str=['Caputo\n Der s^' num2str(gam)];

8.5.2 Typical procedures in modelling Caputo equations

With the Caputo differentiator designed earlier and the Taylor auxiliary function technique, one may manually convert the original Caputo equation into a Riemann–Liouville equation, so that both the normal Oustaloup filter and the Caputo differentiator blocks can be used. The modelling framework discussed above can be further demonstrated with the following example.

Example 8.15. Consider the following nonlinear Caputo fractional-order differential equation studied in Example 7.10:

C_0D_t^{1.455} y(t) = −t^{0.1} e^t [E_{1,1.545}(−t) / E_{1,1.445}(−t)] y(t) C_0D_t^{0.555} y(t) + e^{−2t} − [y'(t)]^2,

where y(0) = 1 and y'(0) = −1. The analytical solution y(t) = e^{−t} can be used to validate the solution.

Solution. Since the highest order in the equation is 1.455 < 2, a Taylor auxiliary signal can be introduced such that z(t) = y(t) − y(0) − y'(0)t = y(t) − 1 + t. The variable y(t) = z(t) + 1 − t can be substituted back into the original equation to convert it into an equation in the signal z(t), with zero initial conditions:

C_0D_t^{1.455}[z(t) + 1 − t] = −t^{0.1} e^t [E_{1,1.545}(−t) / E_{1,1.445}(−t)] y(t) C_0D_t^{0.555}[z(t) + 1 − t] + e^{−2t} − [(z(t) + 1 − t)']^2.

It is known from (3.66) that for the signal z(t) with zero initial conditions,

C_0D^{α} z(t) = RL_0D^{α} z(t).

Besides, it is known from the Caputo definition that, since C_0D^{1.455}[z(t) + 1 − t] first takes the second-order derivative of z(t) + 1 − t with respect to t, the augmented term 1 − t vanishes. Therefore, it is found that

C_0D^{1.455}[z(t) + 1 − t] = RL_0D^{1.455} z(t).

The original Caputo equation can then be converted to

RL_0D_t^{1.455} z(t) = −t^{0.1} e^t [E_{1,1.545}(−t) / E_{1,1.445}(−t)] y(t) C_0D_t^{0.555} y(t) + e^{−2t} − [z'(t) − 1]^2.

The term C_0D^{0.555} y(t) can be modelled directly with the Caputo Operator block. Note that the block is connected to the signal z(t) rather than to the signal y(t). Therefore, the Simulink model shown in Figure 8.29 can be established. Besides, since the Mittag-Leffler function is used, its S-function implementation can be constructed as follows.

Fig. 8.29: Simulink description of the nonlinear equation (c8mcaputo.slx).

function [sys,x0,str,ts]=sfun_mls(t,x,u,flag,a)
switch flag
case 0
   sizes=simsizes;
   sizes.NumContStates=0; sizes.NumDiscStates=0;
   sizes.NumOutputs=1; sizes.NumInputs=1;
   sizes.DirFeedthrough=1; sizes.NumSampleTimes=1;
   sys=simsizes(sizes); x0=[]; str=[]; ts=[-1 0];
case 3, sys=ml_func([1,a],u);
case {1,2,4,9}, sys=[];
otherwise, error(['Unhandled flag=',num2str(flag)]);
end

The Simulink model can be solved numerically, and the results are compared with the theoretical solution e^{−t}. Unfortunately, the simulation results are not quite accurate: algebraic loops are involved in the Simulink model, and the maximum error is as high as 0.0071. Changes in the filter parameters may not yield better results for this example.

>> ww=[1e-3,1e3]; N=9; [t,x,y]=sim('c8mcaputo');
   max(sum(y-exp(-t))), plot(t,y,t,exp(-t))

It can be seen that this modelling procedure is not well established, since a great amount of manual derivation is involved in the modelling process, and the accuracy is not high. Therefore, better modelling and simulation approaches are expected.

8.5.3 Simpler block diagram-based solutions of Caputo equations

In order to apply a better modelling scheme for Caputo differential equations, the following ideal case is conceived: with the initial values of the integer-order derivatives specified by integrator blocks connected as an integrator chain, the Fractional Operator blocks can be used directly – without bothering with the initial values – to represent the Caputo derivatives. Let us recall Theorem 3.18, shown again below; it establishes the theoretical foundation of the conceived modelling approach.

Theorem 8.2. If the signal y(t) is ⌈γ⌉th-order differentiable, then

C_{t0}D_t^{γ} y(t) = RL_{t0}D_t^{−(⌈γ⌉−γ)} [y^{(⌈γ⌉)}(t)].    (8.26)
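Identity (8.26) can also be checked numerically on a concrete signal. For y(t) = t^2 and γ = 0.5, the Caputo derivative is Γ(3)/Γ(2.5)·t^{1.5}, while the right-hand side is the 0.5th-order Riemann–Liouville integral of y'(t) = 2t. The sketch below (Python, with naive midpoint quadrature of the weakly singular kernel; purely illustrative, not a production algorithm) compares the two.

```python
import math

def rl_integral(f, beta, t, n=20000):
    """Riemann-Liouville integral of order beta at time t:
    (1/Gamma(beta)) * integral_0^t (t - tau)**(beta - 1) f(tau) dtau,
    by midpoint quadrature (the midpoints avoid the endpoint singularity)."""
    h = t / n
    acc = 0.0
    for k in range(n):
        tau = (k + 0.5) * h
        acc += (t - tau)**(beta - 1) * f(tau)
    return acc * h / math.gamma(beta)

t, gam = 1.3, 0.5
m = math.ceil(gam)                                          # m = 1
lhs = math.gamma(3) / math.gamma(3 - gam) * t**(2 - gam)    # Caputo D^0.5 of t^2
rhs = rl_integral(lambda tau: 2*tau, m - gam, t)            # RL I^{0.5} of y'(t)=2t
print(lhs, rhs)   # the two values should agree up to quadrature error
```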

It can be seen from the theorem that the signal C_{t0}D_t^{γ} y(t) can be established by taking the (⌈γ⌉ − γ)th-order Riemann–Liouville integral of the key signal y^{(⌈γ⌉)}(t). Therefore, the Riemann–Liouville integrator block can be used directly in defining the key signals of Caputo derivatives, without bothering with the initial conditions.

Corollary 8.1. Taking the (⌈γ⌉ − γ)th-order Riemann–Liouville derivative on both sides, it can be shown that

RL_{t0}D_t^{⌈γ⌉−γ} [C_{t0}D_t^{γ} y(t)] = y^{(⌈γ⌉)}(t).    (8.27)

The corollary states that the ⌈γ⌉th-order derivative of y(t) can be restored by taking the (⌈γ⌉ − γ)th-order Riemann–Liouville derivative of the signal C_{t0}D_t^{γ} y(t). Again, the Riemann–Liouville derivative block can be used. A simple algorithm with fewer considerations of initial conditions, as conceived above, is implemented in the following procedures.

Algorithm 8.6 (A simpler modelling algorithm). Proceed as follows.
(1) Assume that α is the highest order in the equation, and q = ⌈α⌉. The initial conditions are specified in (8.2), represented again here as

y(0) = y_0,  y'(0) = y_1,  y''(0) = y_2,  ...,  y^{(q−1)}(0) = y_{q−1}.

A set of q integrator blocks can be created first and cascade connected as shown in Figure 8.30 to represent the key signals of the system, i.e. the output y(t) and all its integer-order derivatives. The initial values y_i can be assigned accordingly to the q integrators.
(2) The key signals of the fractional-order derivative C D^{γ} y(t) can be extracted from the signal y^{(⌈γ⌉)}(t), followed by a simple Riemann–Liouville integral block – Fractional Operator – with the order γ − ⌈γ⌉. For instance, the 2.3rd-order Caputo derivative can be constructed by a 0.7th-order Riemann–Liouville integral block acting on the key signal y^{(3)}(t).
(3) To close up the loop, the Fractional Operator block can also be used to restore integer-order derivatives from the signals.


Fig. 8.30: The integer-order integrator chain.

(4) Having defined all the key signals, the whole system can be modelled with ease, and is ready for simulation.

Note that the consideration here is different from the one in (8.24), where the Caputo derivative of the full signal y(t) = z(t) + T(t) was involved, and the Caputo Operator block had to be used. This idea is illustrated by the following examples.

Example 8.16. Solve the Caputo differential equation in Example 4.11

y'''(t) + C_0D_t^{2.5} y(t) + y''(t) + 4y'(t) + C_0D_t^{0.5} y(t) + 4y(t) = 3√2 sin(t + 3π/4),

with the initial conditions y(0) = 1, y'(0) = 0, y''(0) = −1. It is known that the analytical solution is y(t) = cos t.

Solution. The original model can be rewritten as

y'''(t) = −C_0D_t^{2.5} y(t) − y''(t) − 4y'(t) − C_0D_t^{0.5} y(t) − 4y(t) + 3√2 sin(t + 3π/4).

With the standard modelling procedures, the key signals y(t), y'(t), y''(t) and y'''(t) can be defined first, with three integrators. Based on the key signals, the fractional-order derivatives can be constructed with the use of Riemann–Liouville integrator blocks. Finally, the full simulation model can be established as shown in Figure 8.31. It can be seen that the key signals C D^{0.5} y(t) and C D^{2.5} y(t) are established based on Theorem 8.2. With the commands

>> N=15; ww=[1e-4 1e4];
   tic, [t,x,y]=sim('c8mlinc1'); norm(y-cos(t)), toc

the simulation results can be obtained directly, and the error norm with respect to the analytical solution cos t is evaluated. The error is as low as 1.33×10^-11, with a time elapse of about 1.3 seconds.

Example 8.17. Solve the nonlinear Caputo fractional-order differential equation studied in Example 8.15 again with the simpler approach.

Solution. In Example 8.15, it can be seen that the modelling procedures are quite complicated, since auxiliary signals and manual reformulation of the original equations are needed. This procedure may be error-prone. A better modelling approach is needed.


Fig. 8.31: Simulink model of the linear Caputo equation (c8mlinc1.slx).

Use two integrator blocks to declare the key signals y(t), y'(t) and y''(t), and assign the initial values y(0) = 1 and y'(0) = −1 to the integrators; the Simulink model shown in Figure 8.32 can then be established. In this model, C D^{1.455} y(t) and C D^{0.555} y(t) can be generated from the key signals, while the standard Riemann–Liouville fractional-order differentiator block is used directly. It can be seen that the key signal C D^{1.455} y(t) is defined based on Corollary 8.1, while C D^{0.555} y(t) is defined based on Theorem 8.2.

Fig. 8.32: A new Simulink model (c8mexp2.slx).


Selecting the following parameters for the Oustaloup filters, the simulation results can be obtained, and the maximum error with respect to the analytical solution e^{−t} can be measured. The maximum error is as low as 1.1636×10^-4, with a time elapse of 0.21 seconds. It can be seen that the algorithm is much more efficient than the command-line-based solvers.

>> N=18; ww=[1e-7 1e4];
   tic, [t,x,y]=sim('c8mexp2'); toc, max(abs(y-exp(-t)))

When even higher orders and wider bandwidths are selected, for instance N = 35 and a frequency range of [10^-8 rad/s, 10^7 rad/s], the maximum error may become as small as 1.353×10^-7, within 4.7 seconds. It can be seen that the simulation approach is rather efficient. The model can also be used to handle larger terminate times: tn = 2 and tn = 3 can be tried, and the errors are 1.851×10^-5 and 8.240×10^-5, respectively. For an even larger terminate time, the simulation may not converge.

Example 8.18. Solve the fractional-order Chua system problem in Example 7.9 in the Simulink environment.

Solution. Three integer-order integrators can be drawn first to represent the three pairs of key signals x(t), y(t) and z(t) and their first-order derivatives. From the first-order derivatives, the key signals such as C_0D^{α1} x(t) can be created, based on Corollary 8.1. From these signals, the Simulink model can be established as shown in Figure 8.33.

Fig. 8.33: A Simulink model for Chua system (c8mchuasim.slx).
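The piecewise-linear nonlinearity and the state equations used in both Chua models can be cross-checked outside Simulink. Below is a Python transcription of the vector field, with the same parameter values as the simulation commands that follow; this is only the right-hand side of the system, not a fractional-order solver, and the derivative orders only affect how it is integrated.

```python
def chua_rhs(x, y, z,
             a=10.725, b=10.593, c=0.268, m0=-1.1726, m1=-0.7872):
    """Right-hand side of the Chua circuit equations; f is the
    piecewise-linear resistor characteristic."""
    f = m1*x + 0.5*(m0 - m1)*(abs(x + 1) - abs(x - 1))
    return (a*(y - x - f), x - y + z, -b*y - c*z)

# at the origin the nonlinearity and all three components vanish
print(chua_rhs(0.0, 0.0, 0.0))
```

The vector field is odd, i.e. chua_rhs(−x, −y, −z) = −chua_rhs(x, y, z), which is the symmetry visible in the double-scroll phase portrait.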


Fig. 8.34: Another Simulink model for Chua system (c8mchaos.slx).

>> alpha=[0.93,0.99,0.92]; x0=[0.2; -0.1; 0.1];
   a=10.725; b=10.593; c=0.268; m0=-1.1726; m1=-0.7872;
   ww=[1e-4 1e3]; N=13;
   tic, [t,x,y]=sim('c8mchuasim',500); toc
   plot(yout(:,1),yout(:,2))

The phase plane trajectory can be obtained within 1.1 seconds, and it is quite similar to the one in Figure 8.4. Alternatively, if the vectorised integrator block is used, a much simpler Simulink model can be constructed as shown in Figure 8.34, and the MATLAB function embedded in the Interpreted MATLAB Function block is listed below.

function Y=c8mchaosd(u)
a=10.725; b=10.593; c=0.268; m0=-1.1726; m1=-0.7872;
f=m1*u(1)+1/2*(m0-m1)*(abs(u(1)+1)-abs(u(1)-1));
Y=[a*(u(2)-u(1)-f); u(1)-u(2)+u(3); -b*u(2)-c*u(3)];

The following statements can be used to simulate the system:

>> alpha=[0.93,0.99,0.92]; x0=[0.2; -0.1; 0.1]; ww=[1e-4 1e3]; N=13;
   tic, [t,x,y]=sim('c8mchaos',500); toc
   plot(yout(:,1),yout(:,2))

It can be seen that the results are exactly the same as the ones obtained above, within 1.97 seconds. Since a MATLAB function block is used in this case, the speed is affected.

Comments 8.4 (Simulink modelling of Caputo equations). We remark the following.
(1) Two procedures are presented in Sections 8.5.2 and 8.5.3, and it is easily seen that the former needs a lot of manual work, is error-prone, and its accuracy is very low. Much less effort is needed in the latter method. Therefore, the latter is recommended.


(2) The simulation results must be validated under different settings of the Oustaloup filters, until acceptable results can be obtained.

8.5.4 Numerical solutions of implicit fractional-order differential equations

The modelling and simulation procedures so far are mainly applicable to explicit fractional-order differential equations. In real applications, implicit ones may be encountered, and special modelling and simulation strategies must be introduced. In this subsection, an algorithm for implicit equations is introduced, followed by an illustrative example.

Definition 8.6. The standard mathematical form of implicit Caputo differential equations is given by

F(t, u(t), y(t), C_0D_t^{α1} y(t), ..., C_0D_t^{αn} y(t)) = 0,    (8.28)

with α_i an increasing sequence, and q = ⌈α_n⌉. The initial conditions are given by y^{(q−1)}(0) = c_{q−1}, y^{(q−2)}(0) = c_{q−2}, ..., y(0) = c_0.

Algorithm 8.7 (Simulink modelling algorithm for Caputo equations). One proceeds as follows.
(1) If there are q initial conditions, the integrator chain scheme shown in Figure 8.30 can be used to define the key signals y(t), y'(t), ..., y^{(q)}(t).
(2) The q initial conditions should be assigned accordingly to the q integrators, such that the initial conditions of y(t) are the same as those required in the original differential equations.
(3) The fractional-order derivatives involved can be established with Oustaloup filters acting on the corresponding integer-order derivative signals.
(4) Construct the signal on the left-hand side of (8.28) and feed it into a Solve block, whose output can be connected to the key signals to build the final Simulink model.

This modelling procedure will be demonstrated in the following example.

Example 8.19. Solve the implicit fractional-order differential equation

C_0D_t^{0.2} y(t) C_0D_t^{1.8} y(t) + C_0D_t^{0.3} y(t) C_0D_t^{1.7} y(t) = −(1/8)t E_{1,1.8}(−t/2)E_{1,1.2}(−t/2) − (1/8)t E_{1,1.7}(−t/2)E_{1,1.3}(−t/2),

with y(0) = 1, y'(0) = −1/2. The analytical solution is y(t) = e^{−t/2}.

Solution. The original equation should first be rewritten into the standard form:

C_0D_t^{0.2} y(t) C_0D_t^{1.8} y(t) + C_0D_t^{0.3} y(t) C_0D_t^{1.7} y(t) + (1/8)t E_{1,1.8}(−t/2)E_{1,1.2}(−t/2) + (1/8)t E_{1,1.7}(−t/2)E_{1,1.3}(−t/2) = 0.
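Before building the Simulink model, the stated analytical solution can be verified in closed form. Using the standard identity C_0D_t^{α} e^{λt} = λ^m t^{m−α} E_{1,m+1−α}(λt), with m = ⌈α⌉, each Caputo derivative of y(t) = e^{−t/2} is available explicitly, and substituting these into the left-hand side reproduces the right-hand side term by term (λ^3 = −1/8 for λ = −1/2). Below is a Python sketch of this check, using a truncated-series Mittag-Leffler evaluation; the helper names are illustrative only.

```python
import math

def ml2(alpha, beta, z, terms=80):
    """Two-parameter Mittag-Leffler function by truncated series."""
    return sum(z**k / math.gamma(alpha*k + beta) for k in range(terms))

def caputo_exp(alpha, lam, t):
    """Caputo derivative C_0 D_t^alpha of exp(lam*t):
    lam**m * t**(m - alpha) * E_{1, m+1-alpha}(lam*t), m = ceil(alpha)."""
    m = math.ceil(alpha)
    return lam**m * t**(m - alpha) * ml2(1.0, m + 1 - alpha, lam*t)

lam, t = -0.5, 1.7
lhs = (caputo_exp(0.2, lam, t)*caputo_exp(1.8, lam, t)
       + caputo_exp(0.3, lam, t)*caputo_exp(1.7, lam, t))
rhs = (-t/8*ml2(1, 1.8, lam*t)*ml2(1, 1.2, lam*t)
       - t/8*ml2(1, 1.7, lam*t)*ml2(1, 1.3, lam*t))
print(lhs, rhs)   # residual should be at machine-precision level
```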

Based on the modelling algorithm, the key signals y(t), y'(t) and y''(t) can be constructed first, and the fractional-order derivative signals C_0D^{0.2} y(t), C_0D^{0.3} y(t), C_0D^{1.7} y(t)

Fig. 8.35: Simulink model for implicit Caputo equation (c8mimp.slx).

and C_0D^{1.8} y(t) are also defined. With the key signals, the left-hand side of the equation can be constructed and fed into the Solve block, and the output signal can be regarded as C_0D^{1.8} y(t). Taking the 0.2nd-order derivative of this signal, it can then be connected directly with the key signal y''(t) in the integrator chain, such that the overall Caputo equation can be implemented in Simulink, as shown in Figure 8.35. Since all the initial conditions are already expressed in the integrator chain, the common Oustaloup filter blocks can be used directly in defining the fractional-order derivatives. Selecting a set of parameters for the Oustaloup filters, the maximum computational error is measured as 7.5710×10^-5. It should be noted that a time-consuming algebraic loop is involved in the model, and it seems to be inevitable for implicit equations.

>> ww=[1e-5,1e5]; n=20; [t,x,y]=sim('c8mimp'); max(abs(y-exp(-t/2)))

9 Design of fractional-order PID controllers

PID control is one of the earliest developed control strategies in real control systems [2]. Since the design methods and control structures are simple and straightforward, it is quite suitable for industrial applications. Besides, since the design procedures of PID controllers do not require precise plant models, and the control results are usually satisfactory, it has become the most widely used controller type in the process industry. Over the past thirty years, theoretical research and applications of PID controllers have attracted renewed attention from academic researchers, and many new design procedures have appeared. In traditional PID controllers, the orders of the integral and derivative actions are all one. Professor Igor Podlubny introduced the concepts of fractional calculus into PID controllers and proposed the PIλDμ controller structure in 1999 [55], which has more tuning knobs. The standard form of fractional-order PID controllers is presented in Section 9.1, where the advantages of PIλDμ controllers are also summarised. In Section 9.2, optimum design approaches for integer-order PID controllers are proposed; meaningful objective functions for servo control system design are discussed, and a graphical interface, OptimPID, is developed. In Section 9.3, some fractional-order PID-type controller design algorithms for several commonly used plant templates are discussed, using frequency response specifications and numerical optimisation techniques, and the standard MATLAB implementations are constructed; a unified design framework is established for a variety of similar problems. In Section 9.4, fractional-order PID controller design problems are explored for plants described by fractional-order transfer functions, another interface is designed, and plants with time delays are also explored. In Section 9.5, a variable-order fuzzy PID controller design approach is proposed and implemented, and simulation analysis of these kinds of problems is studied.

9.1 Introduction to fractional-order PID controllers

In this section, the mathematical forms of conventional PID and fractional-order PID controllers are summarised, and the benefits of fractional-order PID controllers are presented.

Definition 9.1. The standard form of the conventional PID controller is defined as

Gc = Kp + Ki/s + Kd s,

where Kp, Ki and Kd are three tunable parameters.

DOI 10.1515/9783110497977-009

The three parameters Kp, Ki and Kd can be tuned such that satisfactory control behaviours are finally reached. Normally, in industrial applications, the pure derivative term should be replaced by a first-order filter.

Definition 9.2. The PID controller with a filter is mathematically expressed as

Gc = Kp + Ki/s + Kd s/(Tf s + 1),

where Tf is the filter time constant, which can be chosen as a fixed small positive value, for instance Tf = 0.001. Once the four parameters are obtained, a PID controller object can be constructed in MATLAB with Gc = pid(Kp, Ki, Kd, Tf).

The fractional PIλDμ controller was invented by Professor Igor Podlubny [55] by introducing two more tuning knobs, λ and μ, on the orders of the integrator and differentiator, respectively.

Definition 9.3. The typical PIλDμ controller is mathematically expressed as

Gc(s) = Kp + Ki/s^λ + Kd s^μ.    (9.1)
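For loop-shaping studies, the frequency response of (9.1) can be evaluated directly as Gc(jω) = Kp + Ki(jω)^{−λ} + Kd(jω)^{μ}. A minimal Python counterpart of such an evaluation (illustrative only; the FOTF object used in the book offers far more) is:

```python
def fopid_response(Kp, Ki, Kd, lam, mu, w):
    """Frequency response of the PI^lambda D^mu controller at omega = w,
    using principal-branch complex powers of s = j*w."""
    s = 1j * w
    return Kp + Ki * s**(-lam) + Kd * s**mu

# with lam = mu = 1 this degenerates to the conventional PID controller
w = 2.0
g_frac = fopid_response(1.0, 0.5, 0.2, 1.0, 1.0, w)
g_pid = 1.0 + 0.5/(1j*w) + 0.2*(1j*w)
print(g_frac, g_pid)
```

For fractional orders, the magnitude of the integral term decays as ω^{−λ} instead of ω^{−1}, which is exactly the extra shaping freedom discussed below.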

A MATLAB function fopid() can be written to allow the user to input the PIλDμ controller directly, as an FOTF object.

function Gc=fopid(Kp,Ki,Kd,lam,mu0)
s=fotf('s');
if length(Kp)==1, Gc=Kp+Ki*s^(-lam)+Kd*s^mu0;
else, x=Kp; Gc=x(1)+x(2)*s^(-x(4))+x(3)*s^(x(5)); end

In the illustration shown in Figure 9.1, the orders of integration and differentiation are used as the two axes. It can be seen that the conventional PID-type controllers are just a few specific points on the order plane, while the orders in fractional-order PID controllers can be chosen relatively arbitrarily. Normally, with stability considerations, 0 < λ, μ < 2. Because there are two more parameters to tune

Fig. 9.1: Illustration of fractional-order PID controllers. (a) Conventional PID. (b) Fractional-order PIλDμ.


than the PID controllers, the PIλDμ controllers are usually more flexible, and better performance [55] may be expected. From a loop-shaping point of view, the slopes in Bode magnitude plots are multiples of 20 dB/dec in integer-order systems, while in fractional-order systems there is no such restriction; the shape can be assigned arbitrarily. For instance, around the crossover frequency, the slope of the magnitude plot can be assigned very small values such that the robustness of the closed-loop system is increased.

Definition 9.4. In real PID controllers, since the integral term is constructed by analog filters, the steady-state error may not be eliminated completely when λ < 1; it is better to introduce an integer-order integrator such that

Gc(s) = Kp + Ki s^{1−λ}/s + Kd s^μ.    (9.2)

9.2 Optimum design of integer-order PID controllers

The design of classical PID controllers for fractional-order plants is discussed first in this section. Then useful integral-type error criteria are compared, and numerical optimisation based design of PID controllers is discussed. Finally, a universal graphical interface is presented.

9.2.1 Tuning rules for FOPDT plants

Many of the classical PID tuning algorithms are proposed based on the assumption that the process plant can be modelled well by a first-order plus dead-time (FOPDT) model.

Kp =

(0.7303 + 0.5307T/L)(T + 0.5L) , K(T + L)

Ti = T + 0.5L,

Td =

0.5TL . T + 0.5L

(9.4)
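The tuning rule (9.4) is simple enough to transcribe directly. The sketch below (in Python rather than the book's MATLAB, purely for illustration) computes the controller parameters from FOPDT data; the numerical values used in the example come from the reduced model found in Example 9.1 (K = 1, T = 1/0.1836, L = 0.827):

```python
def wjc_pid(K, T, L):
    """Wang-Juang-Chan ITAE-optimal PID parameters for the FOPDT
    model K*exp(-L*s)/(T*s + 1), transcribed from (9.4)."""
    Kp = (0.7303 + 0.5307 * T / L) * (T + 0.5 * L) / (K * (T + L))
    Ti = T + 0.5 * L
    Td = 0.5 * T * L / (T + 0.5 * L)
    return Kp, Ti, Td

# FOPDT data of the reduced model obtained in Example 9.1
Kp, Ti, Td = wjc_pid(1.0, 1 / 0.1836, 0.827)
```

This reproduces Kp ≈ 3.95 and Td ≈ 0.384, matching the controller reported in Example 9.1 to within the rounding of the printed model coefficients.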

Therefore, the following procedure can be used to design PID controllers for a class of fractional-order plants.

Algorithm 9.1 (Design a PID controller). Proceed as follows.
(1) If the plant can be approximated well with an FOPDT model, then find its key parameters T, L and k.

(2) Design a PID controller with, say, the Wang–Juang–Chan algorithm.
(3) Observe the closed-loop behaviour under such a controller. If the behaviour is not satisfactory, then try another tuning algorithm.

Example 9.1. For the given fractional-order plant model

G(s) = 1/(s^2.6 + 3.3s^1.5 + 2.9s^1.3 + 3.32s^0.9 + 1),

design a PID controller and observe its closed-loop behaviour.

Solution. With the following MATLAB commands, the fractional-order plant can be entered, and with the optimal model reduction technique, a good FOPDT approximation can be obtained.

>> N=5; w1=1e-3; w2=1e3; s=fotf(’s’);
   G=1/(s^2.6+3.3*s^1.5+2.9*s^1.3+3.32*s^0.9+1);
   G0=high_order(G,’ousta_fod’,w1,w2,N); Gr=opt_app(G0,0,1,1)

The reduced model can be obtained as

Gr(s) = 0.1836 e^(−0.827s)/(s + 0.1836).

A PID controller can be designed with the Wang–Juang–Chan algorithm in (9.4) using the following statements.

>> L=Gr.ioDelay; [n,d]=tfdata(Gr,’v’); K=n(2)/d(2); T=d(1)/d(2);
   Ti=T+0.5*L; Kp=(0.7303+0.5307*T/L)*Ti/(K*(T+L));
   Td=(0.5*L*T)/(T+0.5*L); s=tf(’s’); Gc=Kp*(1+1/Ti/s+Td*s);
   w=logspace(-4,4,200); C=fotf(Gc); H=bode(G*C,w);
   bode(G0*Gc,’-’,H,’--’);

The open-loop Bode diagrams can be obtained as shown in Figure 9.2.

Fig. 9.2: Bode diagram comparisons.
Fig. 9.3: Closed-loop step responses under the PID controller.

The closed-loop step response of the system can be obtained as shown in Figure 9.3. It can be seen that the two systems are quite close.

>> t=0:0.01:20; step(feedback(G0*Gc,1),20), y=step(feedback(G*C,1),t);
   hold on; plot(t,y,’--’)

The PID controller can be designed as

Gc(s) = 3.9474(1 + 1/(5.8232s) + 0.3843s).

A seemingly good PID controller can thus be designed easily for the given fractional-order plant. Our questions now are: (1) Are there better controllers? (2) What is the best controller? (3) How can we find the best controller for the given plant? These questions will be answered in the following presentations.

9.2.2 Meaningful objective functions for servo control

In practical control systems, the series control structure shown in Figure 9.4 is widely used.

Fig. 9.4: Series control structure.

Definition 9.6. In the control system, r(t) and y(t) are the input and output of the control system. The target of control is to let the output y(t) follow the changes in the input signal r(t). The control strategy is usually referred to as servo control. In this control structure, the signals e(t) and u(t) are also very important. They are often referred to as the error signal and the control signal, respectively.

In servo control systems, we often expect the error signal e(t) to be as small as possible. Since the error signal e(t) is a dynamic signal, integral-type criteria are usually adopted, since they correspond to the weighted area of the error signal.

Definition 9.7. The integral of squared error (ISE) criterion is often used in control systems design. The criterion is defined as

J_ISE = ∫0^∞ e^2(t) dt.

Definition 9.8. Alternative integral-type criteria for servo control systems are the integral of absolute error (IAE) and the integral of time weighted absolute error (ITAE), defined respectively as

J_IAE = ∫0^∞ |e(t)| dt,   J_ITAE = ∫0^∞ t|e(t)| dt.
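In simulation-based design, these integrals are evaluated from a sampled error signal by simple rectangular integration. A small self-contained sketch (Python; the exponential test signal is only a convenient example with known exact values):

```python
import math

def criteria(ts, es):
    """ISE, IAE and ITAE of a sampled error signal by rectangular
    integration, mirroring the simulation-based evaluation."""
    dt = ts[1] - ts[0]
    ise = sum(e * e for e in es) * dt
    iae = sum(abs(e) for e in es) * dt
    itae = sum(t * abs(e) for t, e in zip(ts, es)) * dt
    return ise, iae, itae

# e(t) = exp(-t) has exact values ISE = 1/2, IAE = 1, ITAE = 1 on [0, inf)
dt = 1e-3
ts = [k * dt for k in range(int(10 / dt))]
es = [math.exp(-t) for t in ts]
ise, iae, itae = criteria(ts, es)
```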

Comments 9.1 (The integral-type criteria). We remark the following.
(1) The common feature of these criteria is that the integrands are all nonnegative functions, which makes the criteria monotonically increasing functions of time.
(2) In control theory, it is easily shown that the ISE criterion is equivalent to the H2 norm of the error function E(s), and it can easily be evaluated by approaches other than simulation methods.
(3) The other two criteria can only be evaluated through simulation processes. Therefore, the ISE criterion was very popular among control researchers before handy software such as MATLAB and Simulink was widely adopted by the control community.

In the following example, a fair comparison is made among the behaviours of various integral-type criteria, and useful suggestions will be made on how to select meaningful criteria.

Example 9.2. Suppose that the plant model is

G(s) = 1/(s + 1)^5.

Design optimal PID controllers under different criteria and assess the suitability of the criteria in servo control.

Solution. The numerical optimisation techniques presented in Chapter 2 can be used to design optimum controllers. The first thing to do is to write a MATLAB function to evaluate the criteria. Assume that the decision variables are selected as x = [Kp, Ki, Kd]; an objective function can be written as

function f=c9ef1(x),
t=[0:0.01:20]’; s=tf(’s’); G=1/(s+1)^5; Gc=x(1)+x(2)/s+x(3)*s;
e=step(feedback(1,Gc*G),t); f=sum(e.^2)*(t(2)-t(1));

where feedback(1, Gc ∗ G) is the transfer function from input u(t) to error e(t). Two other files, c9ef2.m and c9ef3.m, for the ITAE and IAE criteria can be written by simply replacing the item sum(e.ˆ2) in the last statement by sum(t.*abs(e)) and sum(abs(e)), respectively. The optimal PID controllers under the ISE, ITAE and IAE criteria can be obtained using the following commands.

>> s=tf(’s’); G=1/(s+1)^5; T=0.001;
   x=fminsearch(@c9ef1,rand(3,1)); Gc1=pid(x(1),x(2),x(3),T);
   x=fminsearch(@c9ef2,rand(3,1)); Gc2=pid(x(1),x(2),x(3),T);
   x=fminsearch(@c9ef3,rand(3,1)); Gc3=pid(x(1),x(2),x(3),T);
   step(feedback(G*Gc1,1),feedback(G*Gc2,1),feedback(G*Gc3,1),20)

The three controllers are

ISE criterion:   Gc1(s) = 1.52 + 0.697/s + 4.69s/(0.001s + 1),
ITAE criterion:  Gc2(s) = 1.26 + 0.359/s + 1.71s/(0.001s + 1),
IAE criterion:   Gc3(s) = 1.49 + 0.501/s + 2.78s/(0.001s + 1),

and the closed-loop step responses under the three controllers are obtained as shown in Figure 9.5.

Fig. 9.5: Comparisons of different PID controllers.

It can be seen that for this example, the control behaviour under the commonly used ISE criterion is very poor, and the one under ITAE is the best of the three. This is because the ISE and IAE criteria treat the error signal at any time equally, while ITAE imposes a time weight on the error so that the error at large t is forced to approach zero in order to minimise the overall objective function. Therefore, it is recommended to use the ITAE criterion in optimal controller design, and the ITAE criterion is considered a meaningful criterion in controller design procedures.

Another thing that should be noticed is that, normally, the initial control signal u(t) can be very large when typical PID controllers are used. This can be illustrated with the statement

>> step(feedback(Gc3,G),1)

and it will be noticed that the control signal u(t) may reach 3,000 in the first 0.05 seconds. This phenomenon cannot be accepted in real applications. Therefore, an actuator saturation is expected after the PID controller to restrict the control signal within an acceptable interval. By introducing such a saturation element, the system is no longer linear, which makes the design more complicated.

9.2.3 OptimPID: An optimum PID controller design interface

An optimum PID controller design interface was designed in [76] and can be used directly in optimum PID controller design. The plant should be a single-variable one, and it can be a nonlinear one, as long as it can be described by a Simulink model. The PID controllers can also be appended by actuator saturations. The design procedure will be illustrated through an example.

Example 9.3. Design an optimum PID controller for the fractional-order plant model in Example 9.1.

Solution. The design procedure with the OptimPID interface is as follows.
(1) Describe the plant. A Simulink model describing the plant can be constructed, as shown in Figure 9.6, where a pair of inport and outport blocks should be used. The model is saved to the file c9mplant.mdl.

>> s=fotf(’s’); G=1/(s^2.6+3.3*s^1.5+2.9*s^1.3+3.32*s^0.9+1);

(2) Invoke the interface. Type optimpid in the MATLAB command window, and the interface shown in Figure 9.7 is displayed.
(3) Fill in the parameters. The user needs to specify the Simulink model name in the interface, in the edit box prompted by “Plant model name”. Also, the “Terminate time” edit box is essential; a default of 10 seconds is fine for this example. For this example,

Fig. 9.6: Plant model in Simulink (c9mplant.mdl).

Fig. 9.7: OptimPID interface.

an actuator saturation of |u(t)| ≤ 10 can be assigned in the edit boxes in “Actuator saturation”.
(4) Create the objective function file. Click the “Create File” button to generate the objective function automatically.
(5) Start the design facilities. Click the “Optimize” button to initiate the design process; the optimisation process is animated in Figure 9.8. For better display of the results, the colours in the figure are inverted. It can be seen that the optimal controller is much better than the one designed in Example 9.1.


Fig. 9.8: Design results from the interface.

Comments 9.2 (Facilities in the OptimPID interface). We remark the following. (1) Continuous and discrete PID controllers are both supported. (2) A variety of optimisation solvers can be selected by the users. Global optimisation solvers such as genetic algorithm, particle swarm optimisation and pattern search are also supported. (3) Maximum overshoot can be assigned such that constrained optimisation problems can be solved directly in the interface. (4) Simulation of systems under staircase inputs can be evaluated.

9.3 Fractional-order PID controller design with frequency response approach

The procedures for the design of integer-order PID controllers were discussed in the previous section; in this section, design procedures for fractional-order PID controllers are proposed. The design approach introduced in [43] is studied and implemented in MATLAB first, and based on it, a series of algorithms is proposed for different controller types and plant model templates. The approach introduced in this section can be extended to any controller type and plant template. Several design examples will be presented in this section.


9.3.1 General descriptions of the frequency domain design specifications

Suppose that the fractional-order transfer function of the plant model is G(s), which can have transport delay terms. Also assume that the fractional-order controller is described by C(s), which is not necessarily a PIλ Dμ controller. The overall open-loop model can be denoted as F(s) = C(s)G(s). Before introducing the design procedures, the concepts of sensitivity and complementary sensitivity functions are presented.

Definition 9.9. The sensitivity function S(s) and the complementary sensitivity function T(s) of the typical feedback system are defined respectively as

S(s) = 1/(1 + F(s)),   T(s) = 1 − S(s) = F(s)/(1 + F(s)).
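The two functions are complementary in the sense that S(s) + T(s) = 1 for every s. A minimal numerical illustration (a Python sketch; the pure-integrator loop F(s) = 1/s is a hypothetical example, not one of the plants in this chapter):

```python
def sensitivity(F):
    # S(s) = 1/(1 + F(s))
    return lambda s: 1.0 / (1.0 + F(s))

def comp_sensitivity(F):
    # T(s) = F(s)/(1 + F(s))
    return lambda s: F(s) / (1.0 + F(s))

F = lambda s: 1.0 / s          # hypothetical open-loop model
S, T = sensitivity(F), comp_sensitivity(F)

s = 1j                          # evaluate at omega = 1 rad/s
mag_S, mag_T = abs(S(s)), abs(T(s))   # both equal 1/sqrt(2) here
residual = abs(S(s) + T(s) - 1.0)     # S + T = 1 by construction
```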

Therefore, following the design approaches in [43], the frequency response based specifications are formulated.

Algorithm 9.2 (Design specifications and procedures). Proceed as follows.
(1) Phase margin specification. At the gain crossover frequency ωcg, one has

|F(jωcg)| = 1,  (9.5)

arg(F(jωcg)) = −π + ϕm,  (9.6)

where ϕm is referred to as the phase margin of the system, and arg(⋅) is the argument.
(2) Robustness to variations in the plant gain. The constraint is discussed in [8]:

arg(dF(s)/ds)|_(s=jωcg) = 0.  (9.7)

This constraint forces the phase around ωcg to be flat, so that the phase remains within a neighbourhood of the expected phase margin, and, under variations in the plant gain, the closed-loop system preserves the iso-damping behaviour.
(3) Good output disturbance rejection. For low frequencies ω ≤ ωs, the sensitivity function S(s) satisfies

|S(jω)| = |1/(1 + F(jω))| ≤ B dB,  with |S(jωs)| = B dB.  (9.8)

(4) High-frequency noise rejection. For high frequencies ω ≥ ωt, the complementary sensitivity function satisfies

|T(jω)| = |F(jω)/(1 + F(jω))| ≤ A dB,  with |T(jωt)| = A dB.  (9.9)

(5) Steady-state error. If there is no integrator in the plant, it is suggested that there should be an integrator in the controller. For the case of the PIλ Dμ controller, λ ≥ 1.
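Specifications (9.5) and (9.6) can be checked numerically for any loop transfer function F(s): find ωcg from |F(jω)| = 1, then read off the phase margin. A sketch of such a check (Python; the loop F(s) = 10/(s(s+1)) is a hypothetical integer-order example, not one of the plants above):

```python
import cmath, math

def gain_crossover(F, w_lo, w_hi, tol=1e-9):
    """Bisect for the gain crossover frequency where |F(jw)| = 1,
    assuming |F(jw)| decreases monotonically on [w_lo, w_hi]."""
    while w_hi - w_lo > tol:
        w = 0.5 * (w_lo + w_hi)
        if abs(F(1j * w)) > 1.0:
            w_lo = w
        else:
            w_hi = w
    return 0.5 * (w_lo + w_hi)

def phase_margin_deg(F, wcg):
    # phi_m = pi + arg F(j wcg), as in (9.6), converted to degrees
    return math.degrees(math.pi + cmath.phase(F(1j * wcg)))

F = lambda s: 10.0 / (s * (s + 1.0))   # hypothetical loop transfer function
wcg = gain_crossover(F, 0.1, 100.0)
pm = phase_margin_deg(F, wcg)
```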

9.3.2 Design of PIλ Dμ controllers for FOPDT plants

Considering the standard form of PIλ Dμ controllers in (9.1) and the FOPDT plant in (9.3), denote x1 = Kp, x2 = Ki, x3 = Kd, x4 = λ and x5 = μ. The controller and the forward path models can be rewritten as

C(s) = x1 + x2/s^x4 + x3 s^x5,   F(s) = (x1 + x2/s^x4 + x3 s^x5) k e^(−Ls)/(Ts + 1).

For (9.7), the derivative dF/ds can be derived symbolically with

>> syms x1 x2 x3 x4 x5 s G(s)
   F=(x1+x2/s^x4+x3*s^x5)*G(s), diff(F,s)

and the derivative can be computed as

dF(s)/ds = C(s) dG(s)/ds − G(s)(x2 x4/s^(x4+1) − x3 x5 s^(x5−1)).  (9.10)

In particular, if the plant model is an FOPDT one, described in (9.3), the derivative of G(s) with respect to s can be derived with

>> syms k T L; G=k*exp(-L*s)/(T*s+1); G1=simplify(diff(G,s)/G)

where

dG(s)/ds = −(T/(Ts + 1) + L)G(s),

such that (9.10) can be rewritten as

dF(s)/ds = −(T/(Ts + 1) + L)F(s) − G(s)(x2 x4/s^(x4+1) − x3 x5 s^(x5−1)).

It seems that the five equations in (9.5)–(9.9) can be set up, and there are five unknown variables in x. The following solution commands can be tried. Unfortunately, it could be very difficult to actually find numerical solutions to the nonlinear equations [43].

Algorithm 9.3 (Fractional-order PID controller design I). Proceed as follows.
(1) Write a MATLAB function to describe the original nonlinear equations as follows.

function feq=c9mfpid(x,wcg,phi,ws,B,wt,A,k,T,L)
jw=1i*wcg; G=@(s)k*exp(-L*s)/(T*s+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
feq(1)=abs(F(jw))-1; feq(2)=angle(F(jw))+pi-phi*pi/180;
feq(3)=angle(-(T/(T*jw+1)+L)*F(jw)-...
   G(jw)*(x(2)*x(4)/jw^(x(4)+1)-x(3)*x(5)*jw^(x(5)-1)));
feq(4)=20*log10(abs(1/(1+F(ws*1i))))-B;
feq(5)=20*log10(abs(F(wt*1i)/(1+F(wt*1i))))-A;

(2) Specify the parameters of the specifications and the plant.


(3) Issue the equation-solving commands in MATLAB:

>> [x,f,flag]=fsolve(@c9mfpid,x0,’’,wcg,phi,ws,B,wt,A,k,T,L);

where x0 should be specified by the user as the initial search point. If the returned flag is positive, the solution is successful; otherwise, select another x0 and try again.

There might be multiple solutions to the equations. However, it could be very difficult to assign a suitable x0 that leads to one of them. As hinted in [43], algebraic equation solution algorithms are not efficient in solving these equations. The problem can alternatively be converted into a nonlinear optimisation problem, where the first two constraints are equality constraints, the last two are inequality constraints, and the third one is the objective function, such that the absolute value of arg(dF(s)/ds)|s=jωcg is minimised.

Definition 9.10. The numerical optimisation problem is formulated as

min_x |arg(dF(s)/ds)|_(s=jωcg)|,   subject to
   |F(jωcg)| − 1 = 0,
   arg(F(jωcg)) + π − ϕm = 0,
   20 lg |F(jωt)/(1 + F(jωt))| − A ≤ 0,
   20 lg |1/(1 + F(jωs))| − B ≤ 0.

Based on the above mathematical form of the optimisation problem, the controller design problem can be solved with the following algorithm.

Algorithm 9.4 (Fractional-order PID controller design II). Proceed as follows.
(1) Write a MATLAB function to describe the objective function as follows.

function y=c9mfpid_opt(x,wcg,phi,ws,B,wt,A,k,T,L)
jw=1i*wcg; G=@(s)k*exp(-L*s)/(T*s+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
y=abs(angle(-(T/(T*jw+1)+L)*F(jw)-...
   G(jw)*(x(2)*x(4)/jw^(x(4)+1)-x(3)*x(5)*jw^(x(5)-1))));

(2) Write the constraints in MATLAB as follows.

function [c,ceq]=c9mfpid_con(x,wcg,phi,ws,B,wt,A,k,T,L)
jw=1i*wcg; G=@(s)k*exp(-L*s)/(T*s+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
ceq=[abs(F(jw))-1; angle(F(jw))+pi-phi*pi/180];
c=[20*log10(abs(1/(1+F(ws*1i))))-B;
   20*log10(abs(F(wt*1i)/(1+F(wt*1i))))-A];

(3) Specify the parameters of the specifications and the plant.
(4) Find the solution of the original problem with the following commands.

>> x=fmincon(@c9mfpid_opt,x0,[],[],[],[],[],[],...
      @c9mfpid_con,’’,wcg,phi,ws,B,wt,A,k,T,L)
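Before running the optimisation, it is prudent to cross-check the closed-form derivative used in the objective function against a numerical derivative. The sketch below (Python; the controller gains are arbitrary trial values, not a designed controller) compares the FOPDT expression for dF(s)/ds with a central finite difference:

```python
import cmath

k, T, L = 3.13, 433.33, 50.0                  # an FOPDT plant (see Example 9.4)
x1, x2, x3, x4, x5 = 0.46, 0.006, 0.71, 0.92, 0.06  # arbitrary trial gains

G = lambda s: k * cmath.exp(-L * s) / (T * s + 1.0)
C = lambda s: x1 + x2 / s ** x4 + x3 * s ** x5
F = lambda s: C(s) * G(s)

def dF_closed(s):
    # dF/ds = -(T/(Ts+1) + L) F(s) - G(s)(x2 x4/s^(x4+1) - x3 x5 s^(x5-1))
    return (-(T / (T * s + 1.0) + L) * F(s)
            - G(s) * (x2 * x4 / s ** (x4 + 1.0)
                      - x3 * x5 * s ** (x5 - 1.0)))

s0, h = 0.008j, 1e-7                          # point near the target crossover
dF_num = (F(s0 + h) - F(s0 - h)) / (2.0 * h)  # central finite difference
rel_err = abs(dF_closed(s0) - dF_num) / abs(dF_num)
```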

Comments 9.3 (Optimisation problem solutions). We remark the following.
(1) The problem is more easily solved than that in Algorithm 9.3.
(2) There may be several solutions to the problem, but it could be very difficult to judge which is “local” and which is “global” by merely comparing the values of the objective function. The control behaviours should instead be compared through simulation analysis.
(3) Due to the above reasons, the global optimisation solvers cannot be used.

Example 9.4. Consider an integer-order FOPDT plant

G(s) = k e^(−Ls)/(Ts + 1) = 3.13 e^(−50s)/(433.33s + 1),

with k = 3.13, T = 433.33, L = 50, and the following specifications are imposed:
(1) The crossover frequency ωcg = 0.008 rad/s, and the phase margin ϕm = 60°.
(2) The robustness constraint is fulfilled.
(3) The sensitivity function satisfies |S(jω)| ≤ −20 dB, where ω ≤ ωs = 0.001 rad/s.
(4) The complementary sensitivity function satisfies |T(jω)| ≤ −20 dB when ω ≥ ωt = 10 rad/s.

Solution. The following MATLAB statements attempt to solve the nonlinear equations numerically with Algorithm 9.3.

>> wcg=0.008; phi=60; ws=0.001; B=-20; wt=10; A=-20;
   k=3.13; T=433.33; L=50; x0=rand(5,1);
   [x,f,kk]=fsolve(@c9mfpid,x0,’’,wcg,phi,ws,B,wt,A,k,T,L)

Unfortunately, as indicated in [43], it is very difficult, if not impossible, to find a solution to the algebraic equations. Therefore, it is better to solve the problem as a numerical optimisation problem, with the following statements.

>> x=fmincon(@c9mfpid_opt,rand(5,1),[],[],[],[],[],[],...
      @c9mfpid_con,’’,wcg,phi,ws,B,wt,A,k,T,L)

If the above function call is executed several times, some different solutions can be found. For instance, the solution vector x = [0.4649, 0.0060, 0.7085, 0.9203, 0.0586] can be found. Thus, we denote C1(s) as

C1(s) = 0.4649 + 0.0060/s^0.9203 + 0.7085 s^0.0586.
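The first two specifications can be verified directly for this solution (a Python sketch, using only the plant data and the controller coefficients above):

```python
import cmath, math

k, T, L = 3.13, 433.33, 50.0
C1 = lambda s: 0.4649 + 0.0060 / s ** 0.9203 + 0.7085 * s ** 0.0586
G = lambda s: k * cmath.exp(-L * s) / (T * s + 1.0)

Fj = C1(1j * 0.008) * G(1j * 0.008)           # loop response at wcg
gain_err = abs(abs(Fj) - 1.0)                 # spec (9.5): |F(j wcg)| = 1
pm = math.degrees(math.pi + cmath.phase(Fj))  # spec (9.6): phase margin
```

The gain error is about 0.002 and the phase margin about 59.8°, consistent with the imposed specifications.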

Fig. 9.9: Open-loop Bode diagram with the controller (measured phase margin 59.5° at 0.00803 rad/s).

With such a controller, the open-loop Bode diagram can be obtained, as shown in Figure 9.9, and it can be seen that the phase around the crossover frequency, ω = 0.008 rad/s, is rather flat, as expected, so that the controller may be robust to gain variations of the plant.

>> s=fotf(’s’); G=k/(T*s+1); G.ioDelay=L; Gc=fopid(x); bode(G*Gc)

In [43], another controller C2(s) is given, with

C2(s) = 0.6152 + 0.01/s^0.8968 + 4.3867 s^0.4773.

If such a model is used as the initial search point,

>> x0=[0.6152 0.01 4.3867 0.8968 0.4773];
   x=fmincon(@c9mfpid_opt,x0,[],[],[],[],[],[],...
      @c9mfpid_con,’’,wcg,phi,ws,B,wt,A,k,T,L)

yet another controller can be found:

C3(s) = 1.0210 + 0.0047/s^0.9457 + 0.3379 s^1.9927.

The three controllers can be compared with the following statements, and the closed-loop step responses are obtained as shown in Figure 9.10. The function INVLAP_new() was explained in Chapter 4.

>> C1=’(0.4649+0.0060/s^0.9203+0.7085*s^0.0586)’;
   C2=’(0.6152+0.01/s^0.8968+4.3867*s^0.4773)’;
   C3=’(1.0210+0.0047/s^0.9457+0.3377*s^1.9927)’;
   G=’*3.13*exp(-50*s)/(433.33*s+1)’;
   [t1,y1]=INVLAP_new([C1 G],0,1400,8000,1,’1/s’);
   [t2,y2]=INVLAP_new([C2 G],0,1400,8000,1,’1/s’);
   [t3,y3]=INVLAP_new([C3 G],0,1400,8000,1,’1/s’);
   plot(t1,y1,t2,y2,t3,y3)

Fig. 9.10: Closed-loop step responses with different controllers.

Fig. 9.11: Robustness with variations in the gain k.

It can be seen that the behaviours of C1(s) and C3(s) are almost the same, although their parameters are completely different. They are both significantly better than C2(s), reported in [43]. When the gain k of the plant is subject to ±20 % variations, the closed-loop step responses are as shown in Figure 9.11, and it can be seen that the controller is robust to gain variations in the plant model.

>> [t1,y1]=INVLAP_new([C1 G],0,1400,8000,1,’1/s’);
   [t2,y2]=INVLAP_new([C1 ’*0.8’ G],0,1400,8000,1,’1/s’);
   [t3,y3]=INVLAP_new([C1 ’*1.2’ G],0,1400,8000,1,’1/s’);
   plot(t1,y1,t2,y2,t3,y3)

9.3.3 Controller design with FOIPDT plants

Another often encountered plant model type is the first-order lag and integrator plus dead-time (FOIPDT) model. For this template, the controller design procedures will be studied.


Definition 9.11. The FOIPDT plant model is mathematically defined as

G(s) = k e^(−Ls)/(s(Ts + 1)).

Since the plant model is changed to an FOIPDT one, the derivative of G(s) with respect to s can be derived with

>> syms s k L T; G=k*exp(-L*s)/s/(T*s+1); G1=diff(G,s); simplify(G1/G)

with the result

dG(s)/ds = −(Ls + 2Ts + LTs^2 + 1)/(Ts^2 + s) G(s),

such that (9.10) can be rewritten as

dF(s)/ds = −(Ls + 2Ts + LTs^2 + 1)/(Ts^2 + s) F(s) − G(s)(x2 x4/s^(x4+1) − x3 x5 s^(x5−1)).

The fractional-order PID controllers under all the specifications can be designed with the following algorithm. Design examples are not presented for the following algorithms due to the page allocation of the book.

Algorithm 9.5 (Controller design for FOIPDT models). Proceed as follows.
(1) Write the objective function in MATLAB as follows.

function y=c9mfpid_opt1(x,wcg,phi,ws,B,wt,A,k,T,L)
s=1i*wcg; G=@(s)k*exp(-L*s)/s/(T*s+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
y=abs(angle(-(L*s+2*T*s+L*T*s^2+1)/(T*s^2+s)*F(s)-...
   G(s)*(x(2)*x(4)/s^(x(4)+1)-x(3)*x(5)*s^(x(5)-1))));

(2) Establish the constraints description function in MATLAB as follows.

function [c,ceq]=c9mfpid_con1(x,wcg,phi,ws,B,wt,A,k,T,L)
jw=1i*wcg; G=@(s)k*exp(-L*s)/s/(T*s+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
ceq=[abs(F(jw))-1; angle(F(jw))+pi-phi*pi/180];
c=[20*log10(abs(1/(1+F(ws*1i))))-B;
   20*log10(abs(F(wt*1i)/(1+F(wt*1i))))-A];

(3) Specify the parameters of the specifications and the plant.
(4) Design the controller with the following commands.

>> x=fmincon(@c9mfpid_opt1,x0,[],[],[],[],[],[],...
      @c9mfpid_con1,’’,wcg,phi,ws,B,wt,A,k,T,L)

9.3.4 Design of PIλ Dμ controllers for typical fractional-order plants

Different plant model types can also be considered in controller design. In this subsection, the fractional-order first-order lag plus dead-time (FOFOPDT) model is considered, and a PIλ Dμ controller design procedure is proposed.

Definition 9.12. A typical FOFOPDT model is mathematically defined as

G(s) = k e^(−Ls)/(Ts^a + 1).

Now, the derivative of G(s) with respect to s can be derived with the MATLAB statements

>> syms s a k L T; G=k*exp(-L*s)/(T*s^a+1); G1=diff(G,s); simplify(G1/G)

with the result

dG(s)/ds = −(Ls + aTs^a + LTs^(a+1))/(s(Ts^a + 1)) G(s),

such that (9.10) can be rewritten as

dF(s)/ds = −(Ls + aTs^a + LTs^(a+1))/(s(Ts^a + 1)) F(s) − G(s)(x2 x4/s^(x4+1) − x3 x5 s^(x5−1)).

The fractional-order PID controllers under all the specifications can be designed with the following algorithm.

Algorithm 9.6 (PIλ Dμ controller design algorithm). Proceed as follows.
(1) Write the objective function in MATLAB as follows.

function y=c9mfpid_opt2(x,wcg,phi,ws,B,wt,A,k,T,L,a)
s=1i*wcg; G=@(s)k*exp(-L*s)/(T*s^a+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
y=abs(angle(-(L*s+a*T*s^a+L*T*s^(a+1))/(s*(T*s^a+1))*F(s)-...
   G(s)*(x(2)*x(4)/s^(x(4)+1)-x(3)*x(5)*s^(x(5)-1))));

(2) Establish the constraints description function in MATLAB as follows.

function [c,ceq]=c9mfpid_con2(x,wcg,phi,ws,B,wt,A,k,T,L,a)
jw=1i*wcg; G=@(s)k*exp(-L*s)/(T*s^a+1);
C=@(s)x(1)+x(2)/s^x(4)+x(3)*s^x(5); F=@(s)C(s)*G(s);
ceq=[abs(F(jw))-1; angle(F(jw))+pi-phi*pi/180];
c=[20*log10(abs(1/(1+F(ws*1i))))-B;
   20*log10(abs(F(wt*1i)/(1+F(wt*1i))))-A];

(3) Specify the parameters of the specifications and the plant.
(4) Design the controller with the following commands.

>> x=fmincon(@c9mfpid_opt2,x0,[],[],[],[],[],[],...
      @c9mfpid_con2,’’,wcg,phi,ws,B,wt,A,k,T,L,a)


9.3.5 Design of PIDμ controllers

Apart from the fractional-order PID controllers, other types of controllers can also be designed with a similar approach. For instance, to eliminate steady-state errors, a PIDμ controller can be designed.

Definition 9.13. A PIDμ controller is defined as

C(s) = Kp + Ki/s + Kd s^μ,

where there are four parameters to tune.

It can be seen that the equation solution design method in Algorithm 9.3 can only be used to solve problems where the number of unknowns equals the number of equations, while the optimisation based method in Algorithm 9.4 does not have such a restriction. This is the advantage of the optimisation based solvers. Now, let us select the decision variables x1 = Kp, x2 = Ki, x3 = Kd and x4 = μ; the PIDμ controller can be expressed as

C(s) = x1 + x2/s + x3 s^x4.

The derivative of F(s) with respect to s can be derived with

>> syms x1 x2 x3 x4 s G(s)
   F=(x1+x2/s+x3*s^x4)*G(s); diff(F,s)

and the derivative is

dF(s)/ds = C(s) dG(s)/ds − G(s)(x2/s^2 − x3 x4 s^(x4−1)).

For different G(s) models, the dF(s)/ds term can be formulated, and the design scheme of the controller can finally be carried out.

9.3.6 Design of FO-[PD] controllers

In this subsection, the so-called “FO-[PD]” notation introduced in [32] is used. Such a controller also exhibits some robustness in the control behaviours.

Definition 9.14. The FO-[PD] controller is mathematically defined as

C(s) = (Kp + Kd s^μ)^α.

Again, for FO-[PD] controllers, the number of unknowns is less than the number of equations. Therefore, optimisation algorithms can be used to design the controllers. Let us select the decision variables x1 = Kp, x2 = Kd, x3 = μ and x4 = α. The FO-[PD] controller can be expressed as

C(s) = (x1 + x2 s^x3)^x4.

The derivative of F(s) with respect to s can be derived with

>> syms x1 x2 x3 x4 s G(s)
   F=(x1+x2*s^x3)^x4*G(s); diff(F,s)

and the result is

dF(s)/ds = C(s) dG(s)/ds + x2 x3 x4 s^(x3−1) (x1 + x2 s^x3)^(x4−1) G(s).

With reference to the other procedures discussed above, for a given template of G(s), the objective function and constraints can be formulated, and the controller design tasks can be solved easily in MATLAB.
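Note the plus sign in front of the second term, which follows from the product and chain rules. A numerical cross-check of the formula (a Python sketch; the FOPDT plant and all parameter values are arbitrary test values chosen only for the check):

```python
import cmath

x1, x2, x3, x4 = 2.0, 0.5, 0.8, 1.2      # trial FO-[PD] parameters
k, T, L = 1.0, 1.0, 0.2                  # hypothetical FOPDT plant

G = lambda s: k * cmath.exp(-L * s) / (T * s + 1.0)
dG = lambda s: -(T / (T * s + 1.0) + L) * G(s)   # closed-form dG/ds
C = lambda s: (x1 + x2 * s ** x3) ** x4
F = lambda s: C(s) * G(s)

def dF_closed(s):
    # dF/ds = C(s) dG/ds + x2 x3 x4 s^(x3-1) (x1 + x2 s^x3)^(x4-1) G(s)
    return (C(s) * dG(s) + x2 * x3 * x4 * s ** (x3 - 1.0)
            * (x1 + x2 * s ** x3) ** (x4 - 1.0) * G(s))

s0, h = 0.3j, 1e-7
dF_num = (F(s0 + h) - F(s0 - h)) / (2.0 * h)   # central finite difference
rel_err = abs(dF_closed(s0) - dF_num) / abs(dF_num)
```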

9.3.7 Other considerations on controller design

Robust control for gain variations was considered in the previous subsections. In order to further enhance the flat phase behaviour, more frequency points can be selected, such that an enhanced optimisation problem can be proposed.

Definition 9.15. The enhanced optimisation problem for fractional-order controller design is proposed as

min_x Σ_(i=1)^m |arg(dF(s)/ds)|_(s=jωi)|,   subject to
   |F(jωcg)| − 1 = 0,
   arg(F(jωcg)) + π − ϕm = 0,
   20 lg |F(jωt)/(1 + F(jωt))| − A ≤ 0,
   20 lg |1/(1 + F(jωs))| − B ≤ 0,

where ωi, i = 1, 2, . . . , m, is a set of user-chosen frequency points around the crossover frequency ωcg. For instance, the ωi can be selected as the frequencies 0.8ωcg, ωcg, 1.2ωcg. Also, robust control with respect to other parameters of the plant, such as the time constant, can be formulated by other approaches; for instance, we refer to the algorithms in [27].

9.4 Optimal design of PIλ Dμ controllers In this section, an optimisation based optimal PIλ Dμ controller design approach is presented first, followed by the optimal PIλ Dμ controller design problems for plants with delays. Finally, a user interface is presented.

9.4.1 Optimal PIλ Dμ controller design

The numerical optimisation approach presented in Section 9.2 can also be extended to the problems of designing fractional-order PID controllers for given fractional-order plant models. This idea is demonstrated through a simple example first. Then universal design procedures are presented for any linear fractional-order plant.

Example 9.5. Try to design an optimum PIλ Dμ controller for the fractional-order plant model in Example 9.1.

Solution. The numerical optimisation algorithm can be used to design optimal PIλ Dμ controllers. The following MATLAB function can be written to describe the objective function.

function fy=fpidfun(x,G,t,key)
C=fopid(x); dt=t(2)-t(1); e=step(feedback(1,G*C),t);
if key==1, fy=dt*sum(t.*abs(e)); else, fy=dt*sum(e.^2); end
disp([x(:).’, fy])

With the last statement, the intermediate results can be extracted and displayed during the optimisation process. The function has three additional arguments: G is the FOTF plant model, t is the evenly spaced time vector and key is the criterion, with 1 for the ITAE criterion, and otherwise the ISE criterion.

Assume that the terminating time is 8 seconds, that the parameters of the PIλ Dμ controller are all smaller than 30, and that the orders are in the interval (0, 2). The function fminsearchbnd() is recommended to find the optimal PIλ Dμ controller.

>> s=fotf(’s’); G=1/(s^2.6+3.3*s^1.5+2.9*s^1.3+3.32*s^0.9+1);
   xm=zeros(5,1); xM=[30; 30; 30; 2; 2]; x0=rand(5,1)’; t=0:0.01:8;
   x=fminsearchbnd(@fpidfun,x0,xm,xM,[],G,t,1)
   Gc1=fopid(x); step(feedback(G*Gc1,1),t);

The optimal controller is

Gc(s) = 30 + 2.8766 s^(−1.1483) + 13.7401 s^0.8928.

The step responses of the system under this controller and the one obtained in the previous example are shown in Figure 9.12. It can be seen that the fractional-order PID controller is better than the integer-order controller.

Based on the above idea, an optimal fractional-order PID controller design function can be extended easily to the PIλ Dμ controller design problems for linear fractional-order plants.
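The pattern behind this extension — simulate the closed loop, evaluate an integral criterion, and search over bounded controller parameters — does not depend on MATLAB. The following Python sketch is an illustrative stand-in only: it uses a hypothetical integer-order plant 1/(s + 1), a PI controller, forward-Euler simulation and a naive coordinate search instead of fminsearchbnd():

```python
def itae_pi(Kp, Ki, dt=0.002, t_end=8.0):
    """ITAE of the closed-loop unit-step error for the plant 1/(s+1)
    under a PI controller, simulated by forward Euler."""
    y = ei = J = t = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y              # unit-step reference
        ei += e * dt             # integral of the error
        u = Kp * e + Ki * ei     # PI control law
        y += dt * (-y + u)       # plant state update: y' = -y + u
        J += t * abs(e) * dt     # accumulate the ITAE integral
        t += dt
    return J

def coord_search(f, x, xm, xM, step=1.0, shrink=0.5, iters=60):
    """Naive bounded coordinate descent (a stand-in for fminsearchbnd)."""
    best = f(*x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                xt = list(x)
                xt[i] = min(max(xt[i] + d, xm[i]), xM[i])
                Jt = f(*xt)
                if Jt < best:
                    x, best, improved = xt, Jt, True
        if not improved:
            step *= shrink       # refine the search once no move helps
    return x, best

x0 = [0.5, 0.5]                  # initial [Kp, Ki]
J0 = itae_pi(*x0)
x_opt, J_opt = coord_search(itae_pi, x0, [0.0, 0.0], [20.0, 20.0])
```

The search only ever accepts parameter moves that reduce the criterion, which is the same greedy principle exploited by the bounded simplex search in the text.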
For different types of fractional-order controllers and criteria, a MATLAB function describing the objective function can be written as follows.

function [fy,C]=fpidfuns(x)
global G t key1 key2; s=fotf(’s’); t=t(:);
switch key1
   case {’fpid’,1}, C=x(1)+x(2)*s^(-x(4))+x(3)*s^(x(5));
   case {’fpi’,2}, C=x(1)+x(2)*s^(-x(3));
   case {’fpd’,3}, C=x(1)+x(2)*s^x(3);

314 | 9 Design of fractional-order PID controllers

Fig. 9.12: Step response under optimum PIλ Dμ controller.

case {'fpidx',4}, C=x(1)+x(2)/s+x(3)*s^x(4);
case {'pid','PID',5}, C=x(1)+x(2)/s+x(3)*s;
end
dt=t(2)-t(1); e=step(feedback(1,G*C),t);
switch key2
case {'itae','ITAE',1}, fy=dt*abs(e)*t;
case {'ise','ISE',2}, fy=dt*sum(e.^2);
case {'iae','IAE',3}, fy=dt*sum(abs(e));
case {'itse','ITSE',4}, fy=dt*e.^2*t;
otherwise, error(['Available criteria are itae, ise, iae, itse.'])
end
disp([x(:).' fy])

Comments 9.4 (Describing the objective function). We remark the following.
(1) Since this is an open structure built on the switch command, other controller structures and criterion selections can be added to the source code directly by the reader, if necessary.
(2) Unlike the case in Example 9.5, where additional parameters were passed to the objective function, global variables are used here so that there is only one input argument x in the objective function. This is required by some of the global optimisation solvers.
(3) The global variables should be declared so that they can be shared by the objective function and the optimisation solvers:

>> global G t key1 key2

where G is the FOTF plant model, t is the evenly spaced time vector, the identifier key1 is the expected controller type, with options 'fpid', 'fpi', 'fpd', 'fpidx' and 'pid', where 'fpidx' stands for a PIDμ controller with an integer-order integral, while key2 is the type of criterion, with options 'itae', 'ise', 'iae' and 'itse', of which 'itae' is recommended.

With such an objective function, one can write an optimal PIλ Dμ controller design function fpidtune() as follows.

function [Gc,x,y]=fpidtune(x0,xm,xM,kAlg)
global G t key1 key2; n=length(x0);
if nargin==5, ff=optimset; ff.MaxIter=50; end
switch kAlg
case {1,2}
   x=fminsearchbnd(@fpidfuns,x0,xm,xM);
case 3
   x=patternsearch(@fpidfuns,x0,[],[],[],[],xm,xM);
case 4
   x=ga(@fpidfuns,n,[],[],[],[],xm,xM);
case 5, x=particleswarm(@fpidfuns,n,xm,xM);
end
[y,Gc]=fpidfuns(x);

The syntax of the function is [Gc, x, y] = fpidtune(x0, xm, xM, kA), where the definitions of all the arguments are the same as defined earlier.

Comments 9.5 (Optimal PIλ Dμ controller design). We remark the following.
(1) Again the open switch structure is used. The argument kA is an identifier: kA = 1, 2 select the normal optimisation solver, while kA = 3, 4, 5 select, respectively, the pattern search method, the genetic algorithm and the particle swarm optimisation algorithm. More optimisation solvers can be introduced to the design function.
(2) The values of the four global variables should be assigned in the MATLAB workspace before the function fpidtune() is called.

Example 9.6. Consider again the problem in Example 9.5.

Solution. The optimal fractional-order PID controller can be obtained directly with the following MATLAB statements, and the results are exactly the same as the ones obtained in the previous example.

>> global G t key1 key2; s=fotf('s');
   G=1/(s^2.6+3.3*s^1.5+2.9*s^1.3+3.32*s^0.9+1);
   xm=zeros(5,1); xM=[30; 30; 30; 2; 2]; x0=rand(5,1);
   t=0:0.01:8; key1='fpid'; key2='itae';
   [Gc,x]=fpidtune(x0,xm,xM,1)


The following statements can also be used to design the optimal integer-order PID controller:

>> xm=zeros(3,1); xM=[30; 30; 30]; x0=[1;1;1].'; key1='pid'; key2='itae';
   [Gc1,x]=fpidtune(x0,xm,xM,1)
   step(feedback(G*Gc,1),t); hold on; step(feedback(G*Gc1,1),t);

with an optimal conventional PID controller Gc1(s) = 30 + 3.9413/s + 20.1348s. The closed-loop step responses under the two controllers are shown in Figure 9.13. It can be seen that the closed-loop response under the PIλ Dμ controller is much better than that under the conventional PID controller for the fractional-order plant model.

Fig. 9.13: Comparisons of different PID controllers (closed-loop step responses under the PIλ Dμ and the integer-order PID controllers).

If the plant and controller models are both approximated with Oustaloup filters, the following statements should be specified, and an integer-order closed-loop model, usually of extremely high order, can be obtained (for this example, a 43rd-order closed-loop model is obtained with 7th-order Oustaloup filters). The closed-loop step response of the high-order integer-order system is almost the same as the one obtained in Figure 9.13.

>> Gc0=high_order(Gc,'ousta_fod',1e-3,1e3,7);
   G0=high_order(G,'ousta_fod',1e-3,1e3,7);
   G1=feedback(G0*Gc0,1); order(G1), step(G1,t);
   hold on; step(feedback(G*Gc,1),t)
   Gc10=high_order(Gc1); step(feedback(G0*Gc10,1),t)

Example 9.7. Simulate the PIλ Dμ control system in Example 9.6 in the Simulink environment.


Fig. 9.14: Fractional-order PID controller system (file: c9mfpids.slx).

Solution. The fractional-order PID control system is modelled with Simulink as shown in Figure 9.14. When the parameters in the plant and the fractional-order PID controller are specified, simulation results can be obtained. It can be seen that the control results are exactly the same as the ones obtained in the previous example.
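A quick sanity check on any designed PIλ Dμ controller, before or after simulation, is its frequency response C(jω) = Kp + Ki(jω)^-λ + Kd(jω)^μ. A small Python sketch evaluating this for the controller of Example 9.5 (the helper name fopid_resp is ours; the principal complex branch gives (jω)^a = ω^a e^{jaπ/2}):

```python
import numpy as np

def fopid_resp(Kp, Ki, Kd, lam, mu, w):
    """Frequency response of C(s) = Kp + Ki*s^-lam + Kd*s^mu at s = jw."""
    s = 1j * np.asarray(w, dtype=complex)
    return Kp + Ki * s**(-lam) + Kd * s**mu

# Controller designed in Example 9.5: Gc(s) = 30 + 2.8766 s^-1.1483 + 13.7401 s^0.8928
C1 = fopid_resp(30, 2.8766, 13.7401, 1.1483, 0.8928, [1.0])[0]
print(abs(C1), np.angle(C1, deg=True))
```

The fractional orders show up directly in the phase: the integral branch contributes a constant −90λ degrees, the derivative branch +90μ degrees, which is the extra tuning freedom over an integer-order PID.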

9.4.2 Optimal PIλ Dμ controller design for plants with delays

If the plant model comes with delay terms, the function feedback() in the FOTF Toolbox cannot be used, and the closed-loop behaviour of the system cannot be assessed in that way. In this case, the numerical inverse Laplace transform technique can be used instead to evaluate the closed-loop behaviour. For the commonly used integral-type criteria, a MATLAB function is written as follows.

function fy=fun_opts(x)
global G t key1 key2
t0=t(1); t1=t(end); N=length(t); dt=t(2)-t(1);
Gf=get_fpidf(x,G,key1); U='1/s';
[t,y]=INVLAP_new(Gf,t0,t1,N,1,U); e=1-y;
switch key2
case {'itae','ITAE',1}, fy=dt*sum(t.*abs(e));
case {'ise','ISE',2}, fy=dt*sum(e.^2);
case {'iae','IAE',3}, fy=dt*sum(abs(e));
end
disp([x(:).' fy])

Comments 9.6 (Details of the objective function). We remark the following.
(1) This function can be called directly by numerical optimisation solvers, with the global variables specified.
(2) The value of the objective function is returned, and during the optimisation process the intermediate results are displayed.
(3) A low-level function get_fpidf() is written to extract the string representation of the transfer function in the forward path, with the PIλ Dμ type of controller.

function Gf=get_fpidf(x,G,key1), x=[x(:).',1 1];


x1=num2str(x(1)); x2=num2str(x(2)); x4=num2str(x(4));
x3=num2str(x(3)); x5=num2str(x(5));
switch key1
case {'fpid',1}, Gc=['(' x1 '+' x2 '*s^-' x4 '+' x3 '*s^' x5 ')'];
case {'fpi',2}, Gc=['(' x1 '+' x2 '*s^-' x3 ')'];
case {'fpd',3}, Gc=['(' x1 '+' x2 '*s^' x3 ')'];
case {'fpidx',4}, Gc=['(' x1 '+' x2 '/s+' x3 '*s^' x4 ')'];
case {'pid',5}, Gc=['(' x1 '+' x2 '/s+' x3 '*s)'];
end
G=strrep(G,'+-','-'); Gf=[G '*' Gc];

The syntax is Gf = get_fpidf(x, G, key1), where Gf is the string for the forward path transfer function under the controller specified in x, of the type given in key1, and G is the string of the plant model. This function can also be called by the user when an optimum controller is to be designed.

Example 9.8. Consider the plant model

G(s) = e^-s/(0.8s^2.2 + 0.5s^0.9 + 1).

Design an optimum PIλ Dμ controller for the ITAE criterion, and compute the closed-loop step responses.

Solution. The plant model should first be described by a string. Then, for the ITAE criterion, the following commands can be specified to design the optimal PIλ Dμ controller.

>> clear; global G t key1 key2
   G='exp(-s)/(0.8*s^2.2+0.5*s^0.9+1)'; t=0.01:0.01:20;
   key2='itae'; key1='fpid'; x0=rand(5,1);
   xm=zeros(5,1); xM=[20;20;20;2;2];
   x=fminsearchbnd(@fun_opts,x0,xm,xM)

After the optimisation process, an optimum PIλ Dμ controller is designed:

Gc1(s) = 0.45966 + 0.5761 s^-0.99627 + 0.49337 s^1.3792.

It can be seen that the integral is almost of first order; therefore, a PIDμ controller can also be used if needed. The forward path transfer function string can be obtained by calling the function get_fpidf(), and the closed-loop step response can be obtained with the following statements.

>> t0=0.01; tn=20; N=2000; Gf=get_fpidf(x,G,'fpid');
   [t1,y1]=INVLAP_new(Gf,t0,tn,N,1,'1/s');
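INVLAP_new() computes the response by numerically inverting the Laplace transform of the closed-loop system. The same idea can be sketched with the fixed-Talbot contour method of Abate and Valkó; the routine below is a generic illustration written for this text, not the toolbox code:

```python
import numpy as np

def talbot_invert(F, t, M=32):
    """Fixed-Talbot numerical inverse Laplace transform of F(s) at a time t > 0."""
    r = 2.0 * M / (5.0 * t)
    theta = np.pi * np.arange(1, M) / M
    cot = np.cos(theta) / np.sin(theta)
    s = r * theta * (cot + 1j)                  # points on the Talbot contour
    sigma = theta + (theta * cot - 1.0) * cot   # contour-derivative factor
    terms = np.exp(t * s) * np.array([F(sk) for sk in s]) * (1.0 + 1j * sigma)
    return (r / M) * (0.5 * F(r) * np.exp(r * t) + np.sum(terms.real))

# F(s) = 1/(s+1) is the transform of exp(-t)
print(talbot_invert(lambda s: 1.0 / (s + 1.0), 1.0))   # ~ exp(-1)
```

Because the method only needs point evaluations of F(s), it handles delay factors such as e^-s and fractional powers s^0.9 with no extra machinery, which is exactly why the inverse-Laplace route works for delayed fractional-order plants.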

Fig. 9.15: Step responses under optimum PIλ Dμ controllers (curves: the ITAE PIλ Dμ controller Gc1(s), the controller Gc2(s) under the ISE criterion, and the integer-order PID controller Gc3(s)).

To design the optimum PIλ Dμ controller under the ISE criterion, and to design the integer-order PID controller, the following statements can be executed.

>> key2='ise'; x=fminsearchbnd(@fun_opts,x0,xm,xM); U='1/s';
   Gf=get_fpidf(x,G,'fpid'); [t2,y2]=INVLAP_new(Gf,t0,tn,N,1,U);
   x0=rand(3,1); xm=zeros(3,1); xM=[100;100;100];
   key1='pid'; key2='itae'; x=fminsearchbnd(@fun_opts,x0,xm,xM);
   Gf=get_fpidf(x,G,'pid'); [t3,y3]=INVLAP_new(Gf,t0,tn,N,1,U);
   plot(t1,y1,t2,y2,t3,y3)

The other two controllers designed are

Gc2(s) = 1.2369 + 0.5628 s^-1.1798 + 0.8041 s^1.6982

and

Gc3(s) = 0.0795 + 0.5206/s + 0.3587s.

The closed-loop step responses under the three controllers are shown in Figure 9.15. It can be seen that the step response under the ITAE PIλ Dμ controller is satisfactory, while the one with the ISE PIλ Dμ controller is very poor, since the error signal is weighted equally at all time instances. Therefore, in practice, the ITAE criterion is highly recommended. It is also seen that the optimum integer-order PID controller cannot control the plant satisfactorily.

Example 9.9. Design a Simulink model for the closed-loop control problem in Example 9.8, and get simulation results.

Solution. The Simulink model shown in Figure 9.14 can again be used to simulate the system with delay, and the delay information is stored in the plant block. Note that the orders of the Oustaloup filters in the two blocks should be assigned to larger numbers,

and small values should be set for the lower frequency bound. The simulation result of the closed-loop step response is exactly the same as the one in Figure 9.15.

>> s=fotf('s'); G=1/(0.8*s^2.2+0.5*s^0.9+1); G.ioDelay=1;
   Gc=fopid([0.4597 0.5761 0.4934 0.9963 1.3792]);
   ww=[1e-5 1e3]; N=20; [t,x,y]=sim('c9mfpids'); plot(t,y)
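The Oustaloup filter blocks referred to above approximate s^γ by a band-limited rational filter over a frequency range [ωb, ωh]. A minimal Python sketch of the standard Oustaloup recursion (the zero/pole and gain formulas are the usual published ones; the function names here are ours, not the toolbox's ousta_fod):

```python
import numpy as np

def oustaloup(gamma, wb=1e-3, wh=1e3, N=4):
    """Zeros, poles and gain of the Oustaloup approximation of s^gamma on [wb, wh]."""
    k = np.arange(-N, N + 1)
    wz = wb * (wh / wb) ** ((k + N + 0.5 * (1 - gamma)) / (2 * N + 1))
    wp = wb * (wh / wb) ** ((k + N + 0.5 * (1 + gamma)) / (2 * N + 1))
    return -wz, -wp, wh ** gamma      # zeros at -wz, poles at -wp, gain wh^gamma

def freq_resp(zeros, poles, K, w):
    """Evaluate the zero-pole-gain model at s = jw."""
    s = 1j * np.asarray(w, dtype=complex)
    H = K * np.ones_like(s)
    for z in zeros: H *= (s - z)
    for p in poles: H /= (s - p)
    return H

z, p, K = oustaloup(0.5)
H1 = freq_resp(z, p, K, np.array([1.0]))[0]
print(abs(H1), np.angle(H1, deg=True))   # near 1 and near 45 degrees
```

At the geometric centre of the default band (ω = 1 rad/s), the approximation of s^0.5 should give magnitude close to ω^0.5 = 1 and phase close to 90γ = 45°; widening the band or raising N improves the fit, which is why the delayed examples above need higher filter orders and a smaller lower bound.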

9.4.3 OptimFOPID: An optimal fractional-order PID controller design interface

Based on the algorithms presented earlier, a graphical user interface, OptimFOPID, is designed. This function can be used to design optimal fractional-order PID controllers through a user interface [75]. More facilities are implemented in the new interface.

The plant model G in FOTF format should be entered into the MATLAB workspace first. Then type the command optimfopid at the MATLAB prompt. The user interface shown in Figure 9.16 will be displayed. Clicking “Plant model” loads the model G into the interface. Then the controller type, objective function type and terminal time should be selected from the listboxes in the interface. Clicking the button “Optimize” will invoke the optimal controller design process, and finally the optimal controller is returned in Gc, in the MATLAB workspace. Clicking the button “Closed-loop response” will show the step response of the closed-loop system.

Fig. 9.16: Optimal fractional-order PID controller design interface.


Example 9.10. Consider the plant model

G(s) = 1/(0.8s^2.2 + 0.5s^0.9 + 1)

and design optimum PIλ Dμ controllers.

Solution. The following procedure can be used to design the optimal fractional-order PID controller.
(1) Type the optimfopid command to invoke the interface.
(2) Enter the FOTF model G into the MATLAB workspace, and click the button “Plant model” to load the model into the interface.

>> G=fotf([0.8 0.5 1],[2.2 0.9 0],1,0)

(3) Set the upper bounds of the controller parameters to 15, and the terminal time to 8. It should be noted that the upper bounds of the controller parameters may sometimes affect the final search results.
(4) Click the button “Optimize” to initiate the optimisation process, and the optimal fractional-order controller can be obtained; for this example, the optimal vector is x = [4.1557, 12.3992, 8.8495, 0.9794, 1.1449]. The controller model can be written as

Gc(s) = 4.1557 + 12.3992 s^-0.9794 + 8.8495 s^1.1449.

(5) Click the button “Closed-loop response” to draw the closed-loop step response of the system, as shown in the interface. It can be seen that the output signal is quite satisfactory. The controller can be stored in Gc1 with the following statement.

>> Gc1=Gc;

Since the order of the integrator is very close to 1, the item “PIDˆmu” can be selected from the controller type listbox, and the optimal PIDμ controller can be designed. The result is very close to the one obtained with the PIλ Dμ controller. The controller obtained can be stored in Gc2(s):

Gc2(s) = 6.7337 + 13.3206/s + 9.9338 s^1.2333.

If the item “PID” from the listbox “Controller Type” is selected, and if the button “Optimize” is clicked, the optimal integer-order PID controller can be designed, and the closed-loop step response can be obtained as shown in the interface; the controller can be stored as Gc3(s).

>> t=0:0.01:8; Gc3=Gc;
   y1=step(feedback(G*Gc1,1),t); y2=step(feedback(G*Gc2,1),t);
   y3=step(feedback(G*Gc3,1),t); plot(t,y1,t,y2,t,y3)

The optimum integer-order PID controller designed is Gc3(s) = 15 + 15/s + 12.5678s.

Fig. 9.17: Comparisons of optimal PID controllers (closed-loop step responses under Gc1(s), Gc2(s) and the integer-order PID controller Gc3(s)).

The closed-loop step responses under the controllers are shown in Figure 9.17. It can be seen that for this example the results of the fractional-order PID controllers are better than those of the integer-order one. For linear fractional-order plant models, the OptimFOPID interface can be used to design fractional-order PID controllers directly in a user-friendly manner. There are also limitations in the interface, which are summarised below.

Comments 9.7 (Limitations of OptimFOPID). We remark the following.
(1) Delays in the plant are modelled by Padé approximations.
(2) Sometimes, if there exist delay terms, an optimal fractional-order PID controller can be designed on the Padé-approximated plant, while when it is used back on the delayed plant, the control results may deteriorate.
(3) Controllers with actuator saturation cannot be processed.
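Comment (1) refers to the rational Padé approximation of the delay e^-sT. The diagonal (n, n) Padé coefficients have a closed form; a small Python sketch, independent of the toolbox:

```python
from math import factorial

def pade_delay(T, n):
    """Diagonal (n,n) Pade approximation of exp(-s*T).
    Returns (num, den) coefficient lists in ascending powers of s."""
    c = [factorial(2 * n - k) * factorial(n)
         / (factorial(2 * n) * factorial(k) * factorial(n - k))
         for k in range(n + 1)]
    num = [ci * (-T) ** k for k, ci in enumerate(c)]   # substitute x = -s*T
    den = [ci * T ** k for k, ci in enumerate(c)]
    return num, den

num, den = pade_delay(1.0, 1)
print(num, den)   # [1.0, -0.5] [1.0, 0.5]  ->  (1 - s/2)/(1 + s/2)
```

The first-order case reproduces the familiar (1 − sT/2)/(1 + sT/2); higher orders match the delay phase over a wider band, but any finite Padé model is only accurate up to some frequency, which is the root of limitation (2) above.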

9.5 Design of fuzzy fractional-order PID controllers

Apart from the fixed-parameter PIλ Dμ controllers studied in the previous sections, variable-order fuzzy PID controllers can also be used [14, 30]. In this section, the design and simulation procedures of the variable-order fuzzy PID controllers are studied.

9.5.1 Fuzzy rules of the controller parameters

Extensive simulations are performed and recorded, and studies have been carried out on the impact of the orders λ and μ on the closed-loop behaviour of the system. The resulting fuzzy rules for the increments of the orders λ and μ are summarised in Table 9.1 (see [30]).

Tab. 9.1: Fuzzy rules of increments in parameters λ and μ (rows: e(t); columns: de(t)/dt).

rules for ∆λ
e(t)\de(t)/dt  NB  NM  NS  Z   PS  PM  PB
NB             PB  PB  PB  PM  PM  Z   Z
NM             PB  PB  PM  PM  PS  Z   Z
NS             PM  PM  PS  PS  Z   NS  NS
Z              PM  PS  PS  Z   NS  NS  NM
PS             PS  PS  Z   NS  NS  NM  NM
PM             Z   Z   NS  NM  NM  NB  NB
PB             Z   Z   NS  NM  NB  NB  NB

rules for ∆μ
e(t)\de(t)/dt  NB  NM  NS  Z   PS  PM  PB
NB             NS  PS  Z   Z   Z   NB  NB
NM             PS  PS  PS  PS  Z   NS  NM
NS             PB  PB  PM  PS  Z   NS  NM
Z              PB  PM  PM  PS  Z   NS  NS
PS             PB  PB  PM  PS  Z   NS  NS
PM             PM  PS  PS  PS  Z   NS  NS
PB             NS  Z   Z   Z   Z   NB  NB
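At run time the rule tables are used by table lookup on the quantised fuzzy labels of e(t) and de(t)/dt. A tiny Python sketch of such a lookup, encoding just the e = NB row of the ∆λ table as an example (row values transcribed from Table 9.1; the helper names are ours):

```python
# Linguistic labels in their natural order
LEVELS = ['NB', 'NM', 'NS', 'Z', 'PS', 'PM', 'PB']

# e = NB row of the rule table for the increment of lambda
DLAM_E_NB = ['PB', 'PB', 'PB', 'PM', 'PM', 'Z', 'Z']

def rule_dlambda_e_nb(de_label):
    """Rule consequent for the increment of lambda when e(t) is NB."""
    return DLAM_E_NB[LEVELS.index(de_label)]

print(rule_dlambda_e_nb('NB'))   # PB
```

A full implementation would, of course, hold all seven rows of both tables and combine the fired rules through the membership functions, which is what the voffuzzy.fis inference system built below does.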

Tab. 9.2: Fuzzy inference tables for the PID parameters (seven-level rules giving ∆Kp, ∆Ki and ∆Kd as functions of the linguistic values NB, NM, NS, Z, PS, PM, PB of e(t) and de(t)/dt).

The actual orders of the controller are evaluated from

λ_k = λ_0 + ∆λ_k ,    μ_k = μ_0 + ∆μ_k ,

and they are updated by a discrete-time mechanism with sampling period T. The increments of the controller gains Kp, Ki and Kd are also defined as fuzzy variables, and the inference tables are shown in Table 9.2.
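At every sampling instant the fuzzy increments are simply added to the base orders. In an implementation the results are usually also clamped to the admissible interval — here (0, 2), matching the order bounds used in the optimisation examples; the clamping itself is our assumption, not stated in the text:

```python
def update_orders(lam0, mu0, dlam, dmu, lo=0.0, hi=2.0):
    """Discrete-time update of the fractional orders, saturated to [lo, hi]."""
    lam = min(max(lam0 + dlam, lo), hi)
    mu = min(max(mu0 + dmu, lo), hi)
    return lam, mu

print(update_orders(0.89, 0.38, 0.05, -0.02))
```

The gain updates Kp, Ki and Kd follow the same additive pattern, driven by the inference tables of Table 9.2.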

9.5.2 Design and implementation with Simulink

The fuzzy inference system is created and saved in the file voffuzzy.fis, which can be loaded into the MATLAB environment with fuz = readfis('voffuzzy'). Details of the fuzzy inference system can be viewed and modified with fuzzy(fuz) in a graphical user interface. The surfaces of the fuzzy inference rules for the offsets of the orders are shown in Figure 9.18.

To implement the fuzzy PIλ Dμ control structure, a normal fractional-order PID block is used, with variable names assigned as Kp, Ki, Kd, lam, mu0. These parameters


(a) Surface of ∆λ.

(b) Surface of ∆μ.

Fig. 9.18: Surfaces of the fuzzy inference rules.

are stored in the MATLAB workspace. Therefore, a mechanism is needed to update these variables. An S-function is written for the task.

function [sys,x0,str,ts]=ffuz_param(t,x,u,flag,T,fuz,K0)
switch flag,
case 0, [sys,x0,str,ts] = mdlInitializeSizes(T);
case 2, sys = mdlUpdates(x,u);
case 3, sys = mdlOutputs(x,u,fuz,K0);
case {1, 4, 9}, sys = [];
otherwise, error(['Unhandled flag=',num2str(flag)]);
end;
% initialisation setting of the S-function
function [sys,x0,str,ts]=mdlInitializeSizes(T)
sizes=simsizes; sizes.NumContStates=0; sizes.NumDiscStates=2;
sizes.NumOutputs=5; sizes.NumInputs=1;
sizes.DirFeedthrough=0; sizes.NumSampleTimes=1;
sys=simsizes(sizes); x0=zeros(2,1); str=[]; ts=[T 0];
% discrete states updates from the fuzzy systems
function sys = mdlUpdates(x,u)
sys=[u(1); u(1)-x(1)];
% computing the output signal
function sys = mdlOutputs(x,u,fuz,K0)
Kfpid=K0+evalfis([x(1),x(2)],fuz)';
assignin('base','Kp',Kfpid(1)); assignin('base','Ki',Kfpid(2));
assignin('base','Kd',Kfpid(3)); assignin('base','lam',Kfpid(4));
assignin('base','mu0',Kfpid(5)); sys=Kfpid;

Comments 9.8 (S-function specifications). We remark the following.
(1) The S-function block is a discrete one, with sampling interval T.

Fig. 9.19: Simulink model of fuzzy PIλ Dμ controller (c9mvofuz.mdl).

(2) Two discrete states are introduced in the system: the error x1(k) = e_k and the difference of the error x2(k) = e_k − e_{k−1}; the latter can be used to extract information on e′(t). There are no continuous states.
(3) The function assignin() is used to write the controller parameters back into the MATLAB workspace, and the controller is updated according to the newly assigned parameters.
(4) The outputs of the S-function block are the five controller parameters.

A Simulink model is constructed as shown in Figure 9.19. This model can be used to simulate a variety of fractional-order fuzzy PID control systems with constant delays.

Comments 9.9 (The Simulink model). We remark the following.
(1) The plant model can be a fractional-order one or an integer-order one, with FOTF the common object used.
(2) There can be a time delay in the plant model, and a “Transport Delay” block with two input ports is deliberately used here, since non-fixed delays can also be handled with the model.
(3) In the Simulink model, two output ports are used; the first one returns the output signal of the system, while the second one returns all five controller parameters.
(4) The responsibility of the S-function block is to update the controller parameters in the MATLAB workspace; therefore, there is no need for it to connect with other blocks.
(5) Fixed-step-size simulation is carried out, since the controller is in fact a discrete one.

Example 9.11. Consider the control problem with the plant model with a fixed delay, G(s) = e^-0.5s/(s + 1)^5. Design PIλ Dμ and variable-order fuzzy PID controllers and simulate the systems.

Solution. To design a fixed PIλ Dμ controller, the procedures in the previous example can be used directly, with the statements

>> clear; global G t key1 key2
   G='exp(-0.5*s)/(s+1)^5'; t=0.01:0.01:20; key1=1; key2=1;


x0=rand(5,1); xm=zeros(5,1); xM=[100;100;100;2;2];
x=fminsearchbnd(@fun_opts,x0,xm,xM), Gf=get_fpidf(x,G,1);
[t1,y1]=INVLAP_new(Gf,0,30,2000,1,'1/s');

and it is found that the controller designed is

G1(s) = 0.0014 + 0.3310 s^-0.8913 + 1.1907 s^0.3844.

If the optimal controller parameters are used in the fuzzy PID control framework, the simulation results can be obtained with

>> s=fotf('s'); G=1/(s+1)^5; G.ioDelay=0.5;
   Kp=0.0014; Ki=0.3310; Kd=1.1907; lam=0.8913; mu0=0.3844;
   T=0.001; fuz=readfis('voffuzzy'); K0=[Kp,Ki,Kd,lam,mu0]';
   [t0,~,y0]=sim('c9mvofuz'); plot(t1,y1,t0,y0(:,1))

as shown in Figure 9.20. It can be seen that the closed-loop behaviours under the two controllers are almost identical. The fuzzy controller parameters can also be obtained with the statement

>> plot(t0,y0(:,2:6))

as shown in Figure 9.21. It can be seen that the parameters change and then settle down when t is large; however, the steady-state values are not the same as the initial values. If the initial values of the controller parameters are set as the ones in [30], similar results can be obtained, which means that the initial values are not that important in the control system.

>> Kp=1.1202; Ki=0.4391; Kd=2.0016; lam=0.8647; mu0=1.0877;
   K0=[Kp,Ki,Kd,lam,mu0]'; [t0,~,y0]=sim('c9mvofuz');
   plot(t1,y1,t0,y0(:,1))

Fig. 9.20: Control results under the two controllers.

Fig. 9.21: Fuzzy controller parameters (curves: Kp(t), Ki(t), Kd(t), λ(t), μ(t)).

Example 9.12. Consider the plant model in the previous example again. If the delay is not fixed, but a random one, just as in the case of networked control, simulate the closed-loop system again.

Solution. The interval of the random delay is set to [0.4, 0.7] in the new Simulink model shown in Figure 9.22. Another Simulink model, shown in Figure 9.23, is constructed for the PIλ Dμ controller and saved in the file c9mfpid2.mdl. The closed-loop step responses under the two controllers can be obtained as shown in Figure 9.24.

>> Kp=0.0014; Ki=0.3310; Kd=1.1907; lam=0.8913; mu0=0.3844;
   T=0.001; fuz=readfis('voffuzzy'); K0=[Kp,Ki,Kd,lam,mu0]';
   [t0,~,y0]=sim('c9mvofuz2'); [t1,~,y1]=sim('c9mfpid2');
   plot(t0,y0(:,1),t1,y1(:,1))

It can be seen that the response under the fixed PIλ Dμ controller is much better than that under the variable-order fuzzy PID controller. However, this does not necessarily mean that the fuzzy controller itself has problems. It may mean that the parameters of the fuzzy inference system, or the universe setting, fuzzy rule expressions and membership functions, need to be fine-tuned.

Fig. 9.22: Simulink model for the fuzzy PIλ Dμ controller (c9mvofuz2.mdl).

In the fuzzy control system, the curves of the PID controller parameters can also be obtained, as shown in Figure 9.25.

>> plot(t0,y0(:,2:6))

Fig. 9.23: Simulink model for the PIλ Dμ controller (c9mfpid2.mdl).

Fig. 9.24: Control results with random delays (curves: the PIλ Dμ controller and the variable-order fuzzy controller).

Fig. 9.25: Fuzzy controller parameters (curves: Kp(t), Ki(t), Kd(t), λ(t), μ(t)).

10 Frequency domain controller design for multivariable fractional-order systems

It has been pointed out that frequency domain methods are commonly used in the design of single-variable systems. The main reason is that in the design process many meaningful plots can be obtained, and the users can tune the parameters of the controllers according to the plots. In the 1960s, the British scholars Professors Howard H. Rosenbrock and Alistair G. J. MacFarlane initiated the research on frequency domain design methods for multivariable systems, and a series of achievements were made. In the literature, these researchers are called the British School. Since there is coupling among the input–output pairs in multivariable systems, most work is on finding good decoupling approaches, so that the design tasks can be converted to single-variable design problems. For integer-order multivariable systems, the main frequency domain methods include the inverse Nyquist array (INA) method [59], the characteristic locus method [34], the reversed-frame normalisation (RFN) method [25], the sequential return-difference approach [39], and the parameter optimisation method [18]. The target of this chapter is to try to transplant some of these approaches to multivariable fractional-order systems.

In this chapter, the pseudodiagonalisation method for multivariable systems is given in Section 10.1, such that the compensated system may become diagonal dominant, and individual channel design algorithms can be used to design controllers for multivariable fractional-order systems. The parameter optimisation method is illustrated in Section 10.2, and robust integer-order controllers can be designed for multivariable fractional-order systems with or without time delays. Once the controllers are designed, robustness analysis of the designed systems is carried out, mainly on the robustness upon variations of the plants. The robust behaviours of the algorithms are satisfactory.

10.1 Pseudodiagonalisation of multivariable systems

If the transfer function matrix studied is not diagonal dominant, some kind of compensation method should be introduced, so that it can be converted to a diagonal dominant matrix. Then the individual-loop single-variable design method can be used, regardless of the coupling. The typical block diagram of Nyquist-type methods is shown in Figure 10.1, where Kp(s) is the pre-compensating matrix such that G(s)Kp(s) is a diagonal dominant matrix. The matrix Kd(s) can be used to introduce further dynamic compensation to the diagonal dominant matrix. In this section, pseudodiagonalisation algorithms for fractional-order systems are introduced, followed by individual channel design algorithms.

DOI 10.1515/9783110497977-010

332 | 10 Frequency domain controller design for multivariable systems 330 | 10 Frequency domain controller design for multivariable systems

✲ ❣✲ + −✻

R(s)

controller Gc (s)



Kd (s)



Kp (s)



plant model

Y(s)



G(s)

Fig. 10.1: Typical block diagram of multivariable systems. Fig. 10.1: Typical block diagram of multivariable systems.

10.1.1 Pseudodiagonalisation and implementations 10.1.1 Pseudodiagonalisation and In multivariable systems design, theimplementations design of the matrix Kp (s) is a crucial step. It may affect the final design results. In practical applications, this matrix is usually designed Inasmultivariable systems design, theusers design of the matrix Kp (s) ismatrix a crucial step. It may a simple constant matrix. The may select a constant with their own affect the final design results. In practical applications, this matrix is usually designed experience, the matrix may introduce elementary matrix transformation such that the ascompensated a simple constant The users may select a constant their own matrixmatrix. is diagonal dominant. The matrix Kp (s) matrix can be with chosen with the experience, the matrix may introduce elementary matrix transformation such that the trial-and-error method, where normally Kp (s) can first be selected as compensated matrix is diagonal dominant. The matrix Kp (s) can be chosen with the Kp (s)Kp=(s) G−1 (0)first be selected as Kp (s) = G−1 (0) trial-and-error method, where normally can such that at least G(s)Kp (s) is an identity matrix at frequency 0. such that at least G(s)Kp (s) is an identity matrix at frequency 0. Trial-and-error methods are not quite suitable in computer-aided design processes. Trial-and-error methods are not quite suitable in computer-aided design processes. Therefore, different methods are proposed to perform diagonal dominant transformaTherefore, different methods are proposed to perform diagonal dominant transformations. In this subsection, the pseudodiagonalisation method [21] is presented to select tions. In this subsection, the pseudodiagonalisation method [21] is presented to select the pre-composition matrix Kp . Assume that at frequency jω0 , the Nyquist array ofthe the pre-composition matrix Kp . 
Assume that at frequency jω0 , the Nyquist array of the plant transfer function matrix can be expressed as plant transfer function matrix can be expressed as gĝ ̂ik (jω 1,1, 2,2, ⋅ ⋅ ⋅. ., .m, 0 ) = α ik + jβ ik , i, k i, = k= , m, ik (jω 0 ) = α ik + jβ ik ,

(10.1)

where thethe number of outputs, and we assume that thethat numbers of inputsof wheremmis is number of outputs, andhave we to have to assume the numbers and outputs are the same. Insame. order In to order obtaintoan optimal constantconstant compensation matrix inputs and outputs are the obtain an optimal compensation Kmatrix , the following procedures are taken. p Kp , the following procedures are taken. Algorithm Algorithm10.1 10.1(Pseudodiagonalisation). (Pseudodiagonalisation).Proceed Proceedasasfollows. follows. (1) Select a frequency jω and find the Nyquist (1) Select a frequency jω0 0 and find the Nyquistarray arrayĝ gik̂ ik(jω (jω0 ). 0 ). (2) q ,qwhere (2) For Foreach eachqq(q(q==1,1,2,2,⋅ ⋅.⋅. ,. m), , m),create createa amatrix matrixAA , where m m

= aail,q il,q =

[αikikααlklk ++ ββikikββlklk],], i, i, , m. [α l =l =1,1, 2,2, ⋅ ⋅ .⋅ ., .m.

∑ ∑

(10.2)

k=1and andkk=q =q k=1 ̸̸

Findthe theeigenvalues eigenvaluesand andeigenvectors eigenvectorsofofthe thematrix matrixAAq ,q and , anddenote denoteitsitseigenvector eigenvector (3) Find (3) of its smallest eigenvalue by k . q of its smallest eigenvalue by k q . (4) For Foreach eachq,q,the theminimum minimumeigenvector eigenvectorcan canbe bespanned spannedinto intoaacompensation compensationmatrix matrix (4) givenby by KKp pgiven Kp−1= = , k , . . . , k ]TT . K −1 [k[k (10.3) 1 ,1k 2 ,2 ⋅ ⋅ ⋅ , k mm] . p

The above pseudodiagonalisation is introduced about a certain frequency, and the frequency itself still needs the trial-and-error method to find. Alternatively, several

10.1 Pseudodiagonalisation of multivariable systems | 331

frequency points in a frequency range of interest can be used to implement the weighted pseudodiagonalisation method. Select N frequency points ω1, ω2, ..., ωN, and assume that the rth frequency point is weighted by ψ_r. The following formula can then be used to construct the matrix A_q:

a_il,q = ∑_{r=1}^{N} ψ_r [ ∑_{k=1, k≠q}^{m} (α_ik,r α_lk,r + β_ik,r β_lk,r) ],

where α_:,:,r and β_:,:,r represent the values of α and β at the rth frequency. Thus, step (3) in the pseudodiagonalisation algorithm can be used to establish the constant matrix Kp.

According to the above algorithm, a MATLAB function pseudiag() can be written to compute the pseudodiagonalisation matrix. The Nyquist array at the selected frequency points is passed in the variable G1, and R is the weighting vector composed of the ψ_i, with default values of 1. The matrix Kp is returned, and the listing of the function is as follows.

function Kp=pseudiag(G1,R)
A=real(G1); B=imag(G1); [n,m]=size(G1); N=n/m; Kp=[];
if nargin==1, R=ones(N,1); end
for q=1:m, L=1:m; L(q)=[];
   for i=1:m, for l=1:m, a=0;
      for r=1:N, k=(r-1)*m;
         a=a+R(r)*sum(A(k+i,L).*A(k+l,L)+B(k+i,L).*B(k+l,L));
      end, Ap(i,l)=a;
   end, end
   [x,d]=eig(Ap); [xm,ii]=min(diag(d)); Kp=[Kp; x(:,ii)'];
end

Example 10.1. Find a pseudodiagonalisation matrix for the multivariable plant studied in Example 6.14 and study the behaviours of the matrix.

Solution. The plant model can be entered first, and with the following statements, the Nyquist plot with Gershgorin bands can be obtained as shown in Figure 10.2. It can be seen that the plant is not diagonal dominant. In fact, there exists strong coupling in the system.

>> g1=fotf([1.5 0.7],[1.2 0],1,0,0.5); g2=fotf([1.2 1],[1.1 0],2,0,0.2);
   g3=fotf([0.7 1.5],[1.3 0],3,0); g4=fotf([1.3 0.6],[1.1 0],2,0,0.2);
   G=[g1,g2; g3,g4]; w=logspace(0,1); H=mfrd(G,w); gershgorin(H)
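The diagonal dominance that the Gershgorin bands display can also be tested numerically from sampled frequency-response data. A minimal NumPy sketch (the helper name is_diagonal_dominant is this sketch's own, not a toolbox function):

```python
import numpy as np

def is_diagonal_dominant(H):
    """Row diagonal dominance test for a stack of Nyquist arrays.

    H has shape (N, m, m): the m-by-m frequency response sampled at N points.
    The system is row diagonal dominant when, at every frequency, the
    Gershgorin radius (sum of off-diagonal magnitudes) stays below the
    magnitude of the corresponding diagonal entry."""
    mag = np.abs(H)
    diag = np.einsum('nii->ni', mag)       # |g_ii(jw)| at every frequency
    radius = mag.sum(axis=2) - diag        # sum over k != i of |g_ik(jw)|
    return bool(np.all(diag > radius))
```

A strongly coupled plant like the one above fails this test, which is exactly what the overlapping Gershgorin bands in Figure 10.2 show graphically.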


Fig. 10.2: Nyquist plot with Gershgorin bands in the plant.

Selecting a frequency vector in the range (1 rad/s, 10 rad/s), the frequency response of the plant can be obtained. Based on the response, a pre-compensation matrix

Kp = [ 0.7042   −0.7100
        −1        0.0038 ]

can be designed as follows.

>> Kp=pseudiag(H), H1=fmul(w,H,Kp); gershgorin(H1)

The compensation behaviour under such a matrix is obtained as shown in Figure 10.3. It can be seen that by applying such a pre-compensation matrix, the compensated system is diagonal dominant.


Fig. 10.3: Nyquist plots of the compensated system.



Fig. 10.4: Nyquist plots of the newly compensated model.

If the expected frequency range is reselected as (10^0.2 rad/s, 10^1.5 rad/s), a different pre-compensation matrix can be obtained, namely

Kp = [ 0.6854    −0.7282
       −0.9999   −0.01749 ],

and the Nyquist plot of the newly compensated system can be obtained as shown in Figure 10.4. It can be seen that the new compensated model is also diagonal dominant.

>> w=logspace(0.2,1.5,200); H=mfrd(G,w);
   Kp=pseudiag(H), H1=fmul(w,H,Kp); gershgorin(H1)
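The fractional-order elements used throughout these examples are straightforward to evaluate pointwise on the imaginary axis, independently of the FOTF Toolbox. A hedged NumPy sketch (the helper fotf_response and its (coefficient, order) representation are this sketch's own):

```python
import numpy as np

def fotf_response(num_pairs, den_pairs, w):
    """Frequency response of a delay-free fractional-order transfer function.

    num_pairs, den_pairs: lists of (coefficient, order) pairs, so that e.g.
    1/(1.35 s^1.2 + 2.3 s^0.9 + 1) is written as
    num [(1, 0)], den [(1.35, 1.2), (2.3, 0.9), (1, 0)]."""
    s = 1j * np.asarray(w, dtype=float)
    num = sum(c * s**a for c, a in num_pairs)
    den = sum(c * s**a for c, a in den_pairs)
    return num / den

# one element of the plant used in the next example, on a frequency grid
g11 = fotf_response([(1, 0)], [(1.35, 1.2), (2.3, 0.9), (1, 0)],
                    np.logspace(-1, 1, 50))
```

The complex power s**a handles the fractional orders directly, which is all that functions such as mfrd() need per frequency point.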

10.1.2 Individual channel design of the controllers

To further improve the diagonal dominance of the open-loop plant model, a dynamic compensation model Kd(s) can be introduced, normally with the trial-and-error method. The Nyquist plots with Gershgorin bands can be drawn repeatedly to build the dynamic compensation Kd(s). With a diagonal dominant plant model, the controllers can be designed on an individual channel basis, and the algorithms in the previous chapter can be used.

For plant models with delays, although such a plant may be compensated into a diagonal dominant system, it is still rather difficult to design dynamic controllers with individual channel design algorithms. Therefore, in the next example, a delay-free plant model is studied and the design process is illustrated.

Example 10.2. Consider the multivariable plant model studied in Example 6.15

G(s) = [ 1/(1.35s^1.2 + 2.3s^0.9 + 1)     2/(4.13s^0.7 + 1)
         1/(0.52s^1.5 + 2.03s^0.7 + 1)   −1/(3.8s^0.8 + 1) ].

Design fractional-order PID controllers and evaluate the closed-loop step responses of the controlled systems.

Solution. The plant model can be entered first. Then, selecting a frequency vector in the range (0.1 rad/s, 1 rad/s), the frequency response of the plant can be obtained. Based on the response, a pre-compensation matrix Kp can later be designed. The Nyquist plot with Gershgorin bands is obtained with

>> s=fotf('s');
   g1=1/(1.35*s^1.2+2.3*s^0.9+1); g2=2/(4.13*s^0.7+1);
   g3=1/(0.52*s^1.5+2.03*s^0.7+1); g4=-1/(3.8*s^0.8+1);
   G=[g1,g2; g3,g4]; w=logspace(-1,0); H=mfrd(G,w); gershgorin(H)

as shown in Figure 10.5. It can be seen that the plant is seriously coupled.


Fig. 10.5: Nyquist plot with Gershgorin bands.

Decoupling must be employed before controllers can be designed. Select a frequency range of (1 rad/s, 10 rad/s); the following statements can be used to perform pseudodiagonalisation and to draw the Nyquist plot with Gershgorin bands, as shown in Figure 10.6.

>> w=logspace(0,1); H=mfrd(G,w); Kp=pseudiag(H)
   w1=logspace(-1,2); H1=mfrd(G*Kp,w1); gershgorin(H1)

The pre-compensation matrix is designed as

Kp = [ −0.4596   −0.8881
       −0.8503    0.5262 ].

It can be seen that the compensated system is diagonal dominant. However, if the open-loop step response of the compensated system is drawn, it can be seen that the step responses of the diagonal elements are negative, due to the shapes and


Fig. 10.6: Nyquist plot of the pre-compensated system.

directions of the diagonal Nyquist plots. Therefore, a dynamic matrix with negative gain should be introduced, for instance

Kd(s) = [ −1/(2.5s + 1)        0
               0          −1/(s + 1) ].

The Nyquist plot with Gershgorin bands under the constant and dynamic matrices is obtained with

>> s=tf('s'); Kd=[-1/(2.5*s+1), 0; 0, -1/(s+1)];
   G0=G*Kp*Kd; H3=mfrd(G0,w1); gershgorin(H3)

as shown in Figure 10.7, and the diagonal dominance is further enhanced. This can further be verified by the open-loop step responses.


Fig. 10.7: Nyquist plot with dynamic matrix compensation.


Now, two PIλDμ controllers can be designed individually for the two input–output pairs, with the following commands.

>> G=G0(1,1); x0=rand(5,1); global G type key t
   t=0:0.02:10; xm=[0 0 0 0 0]; xM=[15 15 15 2 2];
   type='fpid'; key='itae'; [Gc,x]=fpidtune(x0,xm,xM,1)

With the above commands, after a certain waiting time, an optimal PIλDμ controller c1(s) can be designed. Similarly, when G0(2,2) is used as the equivalent plant model, the above statements may yield another optimal PIλDμ controller c2(s). The two optimal PIλDμ controllers are respectively

c1(s) = 10.7003 + 2.9743s^−0.86736 + 15s^0.7876,
c2(s) = 14.848 + 10.1421s^−0.81932 + 14.6848s^0.7355.

In order to check the closed-loop behaviours of the two controllers, a Simulink model is established as shown in Figure 10.8.
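A PIλDμ controller of the above form is easy to evaluate on the frequency axis, which is the view the loop-shaping arguments in this chapter rely on. A minimal NumPy sketch (the helper name fopid_response is this sketch's own):

```python
import numpy as np

def fopid_response(w, Kp, Ki, lam, Kd, mu):
    """Frequency response of c(s) = Kp + Ki*s^(-lam) + Kd*s^mu on s = j*w."""
    s = 1j * np.asarray(w, dtype=float)
    return Kp + Ki * s**(-lam) + Kd * s**mu

# the first optimised controller c1(s) from this example
w = np.logspace(-1, 1, 200)
c1 = fopid_response(w, 10.7003, 2.9743, 0.86736, 15.0, 0.7876)
```

The fractional integrator s^(-λ) and differentiator s^μ only change the exponents; no approximation is needed for a pointwise frequency-domain evaluation.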


Fig. 10.8: Simulink model of multivariable system (c10mpdm2.slx).

The closed-loop step responses can be evaluated with the following statements

>> G=[g1 g2; g3 g4]; s=fotf('s');
   c1=10.7003+2.9743*s^-0.86736+15*s^0.7876;
   c2=14.848+10.1421*s^-0.81932+14.6848*s^0.7355;
   u1=1; u2=0; [t1,~,y1]=sim('c10mpdm2');
   u1=0; u2=1; [t2,~,y2]=sim('c10mpdm2');
   subplot(221), plot(t1,y1(:,1)), ylim([-0.1 1.1])
   subplot(223), plot(t1,y1(:,2)), ylim([-0.1 1.1])
   subplot(222), plot(t2,y2(:,1)), ylim([-0.1 1.1])
   subplot(224), plot(t2,y2(:,2)), ylim([-0.1 1.1])

and the responses are obtained as shown in Figure 10.9. It can be seen that the results are satisfactory.


Fig. 10.9: Closed-loop step response of the multivariable system.

10.1.3 Robustness analysis of controller design through examples

It is well known in classical control systems theory that integer-order PID controllers are robust in process control. In this subsection, examples will be given to investigate the robustness of the optimal fractional-order PID controllers, and also of the design process.

Example 10.3. A robustness test for the controllers can be made by perturbing the gain of the plant model and seeing whether the control behaviours change much. Try to perturb the gains in the plant by 80 % and 150 %, respectively, and see whether the controller is robust or not.

Solution. If the gains of the plants are perturbed by 80 % and 150 %, the following statements can be issued, and the closed-loop step responses of the perturbed systems are obtained as shown in Figure 10.10. It can be seen that the closed-loop behaviours are satisfactory when subject to gain variations. Therefore, the controllers designed are robust in this aspect.

>> G0=[g1 g2; g3 g4]; G=0.8*G0; s=fotf('s');
   c1=10.7003+2.9743*s^-0.86736+15*s^0.7876;
   c2=14.848+10.1421*s^-0.81932+14.6848*s^0.7355;
   u1=1; u2=0; [t1,~,y1]=sim('c10mpdm2');
   u1=0; u2=1; [t2,~,y2]=sim('c10mpdm2');
   G=1.5*G0;
   u1=1; u2=0; [t3,~,y3]=sim('c10mpdm2');
   u1=0; u2=1; [t4,~,y4]=sim('c10mpdm2');
   subplot(221), plot(t1,y1(:,1),t3,y3(:,1)), ylim([-0.1 1.2])
   subplot(223), plot(t1,y1(:,2),t3,y3(:,2)), ylim([-0.1 1.2])
   subplot(222), plot(t2,y2(:,1),t4,y4(:,1)), ylim([-0.1 1.2])
   subplot(224), plot(t2,y2(:,2),t4,y4(:,2)), ylim([-0.1 1.2])


Fig. 10.10: Closed-loop step responses subject to gain perturbations.

Example 10.4. In the design process shown in Example 10.2, one may argue that a good pseudodiagonalisation matrix was selected such that the designed controller is robust. In this example, we select another set of frequencies and generate another pseudodiagonalisation matrix; then we design two optimal PIλDμ controllers and assess their robustness.

Solution. Let the reference frequency point be selected at ω = 0 rad/s. A new compensation matrix can be created with the following statements.

>> G=[g1 g2; g3 g4]; Kp=inv(mfrd(G,0)),
   w1=logspace(-1,2); H1=mfrd(G*Kp,w1); gershgorin(H1)

Also, a dynamic matrix can be selected, and the two matrices are

Kp = [ 1/3    2/3
       1/3   −1/3 ],    Kd(s) = [ 1/(2.5s + 1)        0
                                      0          1/(3.5s + 1) ].

The compensated Nyquist plot with the two compensators can be obtained as shown in Figure 10.11. It can be seen again that the compensated system is diagonal dominant. In fact, compared with the Nyquist plots shown in Figure 10.7, the diagonal dominant behaviour under the new compensators is slightly better.

>> Kp=[1/3 2/3; 1/3 -1/3]; s=tf('s');
   Kd=[1/(2.5*s+1), 0; 0, 1/(3.5*s+1)];
   G0=G*Kp*Kd; H3=mfrd(G0,w1); gershgorin(H3)

We can also use the function fpidtune() to design individually the two PIλDμ controllers

c1(s) = 11.5925 + 2.8932s^−0.9062 + 14.9998s^0.7808,
c2(s) = 13.0054 + 2.8248s^−0.9175 + 14.9979s^0.7933.

The closed-loop step responses of the system can be obtained with the Simulink model


Fig. 10.11: Nyquist plots of the redesigned compensators.

discussed earlier, as shown in Figure 10.12. It can be seen that the control behaviour is satisfactory. In fact, the new controllers are even more robust than the ones studied in the previous example.

>> s=fotf('s');
   c1=11.5925+2.8932*s^-0.9062+14.9998*s^0.7808;
   c2=13.0054+2.8248*s^-0.9175+14.9979*s^0.7933;
   G0=G; G=0.8*G0;
   u1=1; u2=0; [t1,~,y1]=sim('c10mpdm2');
   u1=0; u2=1; [t2,~,y2]=sim('c10mpdm2');
   G=1.5*G0;
   u1=1; u2=0; [t3,~,y3]=sim('c10mpdm2');
   u1=0; u2=1; [t4,~,y4]=sim('c10mpdm2');
   subplot(221), plot(t1,y1(:,1),t3,y3(:,1)), ylim([-0.1 1.2])
   subplot(223), plot(t1,y1(:,2),t3,y3(:,2)), ylim([-0.1 1.2])
   subplot(222), plot(t2,y2(:,1),t4,y4(:,1)), ylim([-0.1 1.2])
   subplot(224), plot(t2,y2(:,2),t4,y4(:,2)), ylim([-0.1 1.2])


Fig. 10.12: Closed-loop step responses under the new controllers.

Besides, it is seen that the FOTF or FOSS objects cannot be used to construct the closed-loop model, since the controller cannot readily be expressed in commensurate-order form. Therefore, Simulink may be the only choice in simulating such kind of systems.

It should be noted that, since the design procedures discussed above require the equivalent transfer functions G0(s), it may not be easy to get the equivalent models when there are delay terms. Therefore, for multivariable systems with delays, the numerical optimisation techniques are not suitable for designing appropriate PID controllers. Other controller design approaches should be tried instead.

10.2 Parameter optimisation design for multivariable fractional-order systems

A parameter optimisation algorithm was proposed to design controllers for multivariable systems in [18]. The algorithm can be applied directly to design integer-order controllers. In this section, we shall apply the algorithm to multivariable fractional-order plant models. It will be shown that the integer-order controllers thus designed are robust. Therefore, it may not be necessary to design fractional-order controllers.

10.2.1 Parameter optimisation design of integer-order controller

Assume that the block diagram of the multivariable control system is given in Figure 10.13, where G(s) is the transfer function matrix of the plant and K(s) is the transfer function matrix of the controller. There are l channels of inputs and m channels of outputs. The closed-loop transfer function matrix can be written as

T(s) = G(s)K(s)[I + G(s)K(s)]^{-1}.

If the closed-loop transfer function matrix approaches a pre-assigned target transfer function matrix Tt(s) within a specific frequency range, the parameter optimisation method can be used to design suitable controllers [18]. The corresponding target controller Kt(s) satisfies

Kt(s) = G^{-1}(s)Tt(s)[I − Tt(s)]^{-1}.    (10.1)

Fig. 10.13: Structure of multivariable control system (unity negative feedback loop: r → K(s) → G(s) → y).
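The target-controller relation above is evaluated frequency point by frequency point when its response is needed. A minimal NumPy sketch of that computation (the helper name target_controller_response is this sketch's own, not an MFD Toolbox function):

```python
import numpy as np

def target_controller_response(Gw, Tw):
    """Frequency response of the target controller Kt = G^{-1} Tt (I - Tt)^{-1}.

    Gw, Tw: arrays of shape (N, m, m) holding the plant and target closed-loop
    responses at N frequency points; the result has the same shape."""
    I = np.eye(Gw.shape[-1])
    return np.linalg.inv(Gw) @ Tw @ np.linalg.inv(I - Tw)
```

A quick SISO sanity check: with G(jω) = 2 and Tt(jω) = 0.5, the target controller response is 0.5 at every frequency, and closing the loop with it reproduces Tt exactly.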


An error function E(s) = Tt(s) − T(s) can be defined, and it is shown through simple manipulation that

E(s) = [I − T(s)][G(s)K(s) − G(s)Kt(s)][I − Tt(s)].

If ‖E(s)‖ is small enough, K(s) approaches Kt(s) closely enough, thus

E(s) = [I − Tt(s)][G(s)K(s) − G(s)Kt(s)][I − Tt(s)] + o(‖E(s)‖^2)
     ≈ [I − Tt(s)][G(s)K(s) − G(s)Kt(s)][I − Tt(s)].    (10.2)

Denote K(s) = N(s)/d(s), where d(s) is the common denominator polynomial chosen by the user, and N(s) is the polynomial matrix with known order, whose coefficients are undetermined. Let B(s) = I − Tt(s), A(s) = B(s)G(s)/d(s), and Y(s) = B(s)G(s)Kt(s)B(s). Equation (10.2) can be written as Y(s) ≈ A(s)N(s)B(s) + E(s). In order to find the optimal parameters in the polynomial matrix N(s), the following optimisation criterion is introduced:

‖E‖_2^2 = min_{N(s)} ∫_{−∞}^{∞} trace[E^T(−jω)E(jω)] dω,

where Y(s) = [y1(s), y2(s), ..., ym(s)], N(s) = [n1(s), n2(s), ..., nm(s)], and E(s) = [e1(s), e2(s), ..., em(s)]. The following equation can be established:

[y1(s); y2(s); ⋮; ym(s)] ≈ [B^T(s) ⊗ A(s)] [n1(s); n2(s); ⋮; nm(s)] + [e1(s); e2(s); ⋮; em(s)],    (10.3)

where ⊗ is the Kronecker product. The controller numerator polynomial ni(s) can be written as ni(s) = [n_1i(s), n_2i(s), ..., n_li(s)]^T. Assume that

n_ij(s) = v_ij^0 s^p + v_ij^1 s^{p−1} + ⋯ + v_ij^{p−1} s + v_ij^p,

where, for ni(s), p is a positive integer. Then the polynomial matrix can be used to describe the numerator coefficients. For sub-polynomials of lower orders, the pth-order

form can still be used; in this case, the coefficients of the higher-order terms are assumed to be zero. The following block-diagonal matrix can be established, whose diagonal blocks are the row vectors [s^p, s^{p−1}, ..., 1]:

Σ(s) = diag( [s^p  s^{p−1}  ⋯  1],  [s^p  s^{p−1}  ⋯  1],  ...,  [s^p  s^{p−1}  ⋯  1] ),

with

[n1(s); n2(s); ⋮; nm(s)] = Σv,

and

v = [v_11^0, v_11^1, ..., v_11^p, v_21^0, v_21^1, ..., v_ml^p]^T.
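The block-diagonal structure of Σ(s) is mechanical to build. A minimal NumPy sketch (the helper name sigma_matrix is this sketch's own):

```python
import numpy as np

def sigma_matrix(s, m, l, p):
    """Sigma(s): block-diagonal matrix with m*l diagonal blocks, each the row
    vector [s^p, s^(p-1), ..., 1], so that the stacked numerator polynomials
    satisfy n(s) = Sigma(s) v for the coefficient vector v."""
    row = s ** np.arange(p, -1, -1)                 # [s^p, ..., s, 1]
    n = m * l
    out = np.zeros((n, (p + 1) * n), dtype=complex)
    for i in range(n):
        out[i, i * (p + 1):(i + 1) * (p + 1)] = row
    return out
```

For example, with a single channel (m = l = 1) and p = 2, sigma_matrix(s, 1, 1, 2) is just the row [s^2, s, 1].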

Introducing the symbols

X(s) = [B^T(s) ⊗ A(s)]Σ(s),
η(s) = [y1^T(s), y2^T(s), ..., ym^T(s)]^T,
ε(s) = [e1^T(s), e2^T(s), ..., em^T(s)]^T,

the standard least squares form of (10.3) can be written as η(s) = X(s)v + ε(s).

(10.4)

In order to get the matrices η and X, frequency analysis methods can be used. A group of frequencies {ωi}, i = 1, 2, ..., M, can be chosen, and the matrices X(jωi) and η(jωi) can be approximated. The matrices X(jω) and η(jω) can be constructed as follows:

X(jω) = [X(jω1); X(jω2); ⋮; X(jωM)],   η(jω) = [η(jω1); η(jω2); ⋮; η(jωM)].

It is easily found from (10.4) that the least squares solution for the controller parameters can be obtained as follows:

v̂(jω) = [X^T(−jω)X(jω)]^{−1} X^T(−jω)η(jω).

From the above presentation, it is found that the parameters v̂ may contain complex values, such that the controllers may not be implementable. Numerical tricks are used to ensure that v(jω) is real (cf. [18]):

v̂(jω) = R[X^T(−jω)X(jω)]^{−1} R[X^T(−jω)η(jω)].

(10.5)
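For frequency responses generated by real-coefficient models, X(−jω) is the complex conjugate of X(jω), so X^T(−jω)X(jω) becomes the conjugate-transpose product X^H X; under that assumption, (10.5) reduces to an ordinary real linear solve. A minimal NumPy sketch (the helper name real_least_squares is this sketch's own):

```python
import numpy as np

def real_least_squares(X, eta):
    """Real parameter estimate v = Re{X^H X}^{-1} Re{X^H eta}, cf. (10.5).

    Taking real parts before solving guarantees that the returned controller
    coefficients are real, hence implementable."""
    A = np.real(X.conj().T @ X)
    b = np.real(X.conj().T @ eta)
    return np.linalg.solve(A, b)
```

If eta = X v for a real vector v, the estimate recovers v exactly, since Re{X^H X v} = Re{X^H X} v whenever v is real.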


10.2.2 Design procedures of parameter optimisation controllers

The MFD Toolbox provides a function fedmunds() to implement the parameter optimisation algorithm. The traditional algorithm is extended, since the common denominator d(s) is no longer needed: each of the components in the controller matrix can be set independently. The syntax of the function is N = fedmunds(w,Gw,Tw,N0,D), where w is a vector of selected frequencies, Gw and Tw are the frequency responses of the plant G(s) and the target system Tt(s), respectively, D is the polynomial matrix of the denominator, while N0 represents the structure of the polynomial matrix of the numerator. If a component in the matrix N0 is zero, this component need not be optimised, which simplifies the whole parameter optimisation process. The matrix N returns the optimised numerator coefficients, as will be demonstrated in the next example.

Procedures for the design of integer-order controllers for multivariable fractional-order plants are summarised in the following algorithm.

Algorithm 10.2 (Design of multivariable controller K(s)). Proceed as follows.
(1) Select the expected integer-order closed-loop transfer function Tt(s).
(2) Compute the frequency response of the target controller Kt(s) from (10.1).
(3) Draw the Bode magnitude plot, and recognise the poles in the denominators dij(s).
(4) Call the function fedmunds() to design the numerator matrix N(s).
(5) Extract the controller, construct the Simulink model and validate the closed-loop behaviour under the designed controller.

To draw the Bode magnitude plots, the MFD frequency response data type should be converted into the Control System Toolbox data structure. Therefore, the following MATLAB function can be written to perform the conversion.

function H=mfd2frd(H1,w)
[nr,nc]=size(H1); nw=length(w); nr=nr/nw;
for i=1:nw, H(:,:,i)=H1((i-1)*nr+[1:nr],:); end
H=frd(H,w);

Example 10.5. Consider a multivariable plant model in Example 10.2.
Design integer-order controllers using parameter optimisation techniques.

Solution. It can be seen from Example 10.2 that the original plant model is not diagonal dominant and that the interaction between the input–output pairs is very serious. The direct design approach can be tried without compensating the plant model. Select the target closed-loop model as a diagonal integer-order transfer function matrix

Tt(s) = [ 9/(s + 3)^2         0
             0          100/(s + 10)^2 ].

A diagonal matrix is selected, since the final target is to get a fully decoupled closed-loop system. Also, the diagonal components should be selected as transfer functions


Fig. 10.14: Bode magnitude plot for target controller Kt (s).

with good behaviours. The frequency responses of the target controller can then be obtained with the following statements:

>> s=fotf('s');
   g1=1/(1.35*s^1.2+2.3*s^0.9+1); g2=2/(4.13*s^0.7+1);
   g3=1/(0.52*s^1.5+2.03*s^0.7+1); g4=-1/(3.8*s^0.8+1);
   G=[g1,g2; g3,g4]; Ga=G; w=logspace(-3,2);
   s=tf('s'); T=[9/(s+3)^2, 0; 0, 100/(s+10)^2];
   Gw=mfrd(G,w); Tw=mfrd(fotf(T),w); I=eye(2);
   h1=finv(w,fadd(w,-Tw,I)); h2=finv(w,Gw);
   h3=fmulf(w,h2,Tw); Kt=fmulf(w,h3,h1);
   H=mfd2frd(Kt,w); bodemag(H)

The Bode magnitude plot of the target controller matrix is obtained as shown in Figure 10.14. It happens that the slopes of the Bode magnitude plots seem to be multiples of 20 dB/dec for this example; therefore, integer-order controllers are sufficient for the system.

Let us consider the Bode magnitude plot of k22(s) first as an example; the Bode magnitude plot of the controller is shown in Figure 10.15. It can be seen that the slope in the low-frequency range is almost −20 dB/dec.

>> bodemag(H(2,2))

Attention should then be focused on the turning points of the downwards segments; the upwards turning points are not strictly necessary, since they are reflected in the numerator of the controller, which will be optimised later. The downwards turning point for this plot is at ω = 20 rad/s. Therefore, the denominator for k22(s) should be assigned as d22(s) = s(s + 20) = s^2 + 20s.


Fig. 10.15: Bode magnitude plot for target controller k22 (s).

Observing the other Bode magnitude plots, it can be seen that d11(s) = s(s + 5), d12(s) = s(s + 18), and d21(s) = s(s + 3). The matrices N0 and D required by the function fedmunds() can be written as

N0 = [ 1 1 1  1 1 1          D = [ 1 5 0  1 18 0
       1 1 1  1 1 1 ],             1 3 0  1 20 0 ].

The numerator matrix of the optimal controller can be obtained directly with the statements

>> N0=[1,1,1,1,1,1; 1,1,1,1,1,1];
   D=[1,5,0, 1,18,0; 1,3,0, 1,20,0];
   N=fedmunds(w,Gw,Tw,N0,D)

and the returned matrix is

N = [  0.0094   6.6355  3.4013   6.0258  140.8121   109.0931
      −0.1606   7.0723  2.6293   0.3126  −50.4946  −107.5375 ]

such that the controller matrix K(s) can be extracted from

>> k11=tf(N(1,1:3),D(1,1:3)); k12=tf(N(1,4:6),D(1,4:6));
   k21=tf(N(2,1:3),D(2,1:3)); k22=tf(N(2,4:6),D(2,4:6));
   K=[k11 k12; k21 k22], zpk(K)

and the mathematical form of the controller is

K(s) = [ (0.0094s^2 + 6.636s + 3.401)/(s^2 + 5s)    (6.026s^2 + 140.8s + 109.1)/(s^2 + 18s)
         (−0.1607s^2 + 7.072s + 2.629)/(s^2 + 3s)   (0.3127s^2 − 50.49s − 107.5)/(s^2 + 20s) ]

     = [ 0.0094(s + 702.3)(s + 0.513)/[s(s + 5)]     6.0259(s + 22.57)(s + 0.8023)/[s(s + 18)]
         −0.1607(s − 44.38)(s + 0.3687)/[s(s + 3)]   0.31269(s − 163.6)(s + 2.102)/[s(s + 20)] ].

346 | 10 Frequency domain controller design for multivariable systems Bode Diagram

Magnitude (dB)

60 40 20

k22 (s)

target

contr o

ller

0

Phase (deg)

-20 180 135 90 45 10-2

troller

target con k 22 (s)

10-1

100

Frequency (rad/s)

101

102

Fig. 10.16: Bode diagram comparison between k22 (s) and its target controller.

be imagined that, when fractional-order controllers are introduced, the error between them may be reduced. >> m=20*log10(abs(G0)); p=180*angle(G0)/pi; bode(K(2,2),{1e-2,1e2}); h=get(gcf,’Children’); axes(h(3)); line(w,m); axes(h(2)), line(w,p) With the new controller, the Simulink model of the closed-loop system can be established, as shown in Figure 10.17.

u1 Constant u2

K LTI System

Fractional−order transfer function Scope Approximate FOTF model

Constant1

1 Out1

Fig. 10.17: Simulink model of the closed-loop system (c10mpopt.slx).

With the following statements, the step response of the closed-loop system can be obtained as shown in Figure 10.18. The responses of the target closed-loop system Tt (s) are also shown for comparison. >> u1=1; u2=0; [t1,x,y1]=sim(’c10mpopt’,10); subplot(221), plot(t1,y1(:,1)), ylim([-0.1 subplot(223), plot(t1,y1(:,2)), ylim([-0.1 u1=0; u2=1; [t2,x,y2]=sim(’c10mpopt’,10); subplot(222), plot(t2,y2(:,1)), ylim([-0.1 subplot(224), plot(t2,y2(:,2)), ylim([-0.1

1.1]) 1.1]) 1.1]) 1.1])

10.2 Parameter optimisation design for multivariable fractional-order systems | 347 Step Response From: In(1)

From: In(2)

To: Out(1)

1

0.5

Amplitude

0

To: Out(2)

1

0.5

0 0

2

4

6

8

10 0

Time (seconds)

2

4

6

8

10

Fig. 10.18: Closed-loop step responses.

It can be seen that although the frequency response matching to the target controller has noticeable errors, the control behaviour is perfect, and it is quite close to the user-selected target model Tt (s). Therefore, it seems that there is no need to introduce fractional-order controllers.

10.2.3 Investigations on the robustness of the controllers Again, for parameter optimisation controllers, the robustness on the plant gain variations, and also the key factors in the design procedures. Examples will be given to demonstrate the robust issues. Example 10.6. Suppose that each of the gains in the plant in Example 10.5 are perturbed by 80 % or 150 %. Simulate the system to check the robustness of the controller. Solution. The controller designed in Example 10.5 can be used and when the gains of the plant model is changed by 80 % and 150 %, respectively; the closed-loop step responses can be obtained as shown in Figure 10.19. It can be seen that the controller designed is robust to the variations in the gains of the plant model. In the responses, the faster ones are those with higher gains, i.e. with 150 % gains. >> G0=G; u1=1; u2=0; [t1,x,y1]=sim(’c10mpopt’,10); u1=0; u2=1; G=0.8*G0; [t2,x,y2]=sim(’c10mpopt’,10); G=1.5*G0; u1=1; u2=0; [t3,x,y3]=sim(’c10mpopt’,10); u1=0; u2=1; [t4,x,y4]=sim(’c10mpopt’,10); subplot(221), plot(t1,y1(:,1),t3,y3(:,1)), ylim([-0.1 subplot(223), plot(t1,y1(:,2),t3,y3(:,2)), ylim([-0.1 subplot(222), plot(t2,y2(:,1),t4,y4(:,1)), ylim([-0.1 subplot(224), plot(t2,y2(:,2),t4,y4(:,2)), ylim([-0.1

1.1]) 1.1]) 1.1]) 1.1])

348 | 10 Frequency domain controller design for multivariable systems Step Response From: In(1)

From: In(2)

To: Out(1)

1

0.5

Amplitude

0

To: Out(2)

1

0.5

0 0

2

4

6

8

10 0

Time (seconds)

2

4

6

8

10

Fig. 10.19: Closed-loop step responses subject to gain changes.

Example 10.7. Consider the plant model in Example 10.5 again. If the poles of the target controller are not chosen accurately, assume that all the poles are selected at s = 0 and s = −20. Design the controller again and check the robustness of the controller.

Solution. If the poles are not selected properly according to the individual Bode magnitude plots, for instance, all the poles are selected at s = 0 and s = −20, the following statements can be used to design the controller:

>> N0=[1,1,1,1,1,1; 1,1,1,1,1,1];
   D=[1,20,0, 1,20,0; 1,20,0, 1,20,0];
   N=fedmunds(w,Gw,Tw,N0,D)

and the returned matrix is

N = [ −2.4066  27.8524  −4.6715  5.8261  155.644   120.7598
      −1.2775  21.3734  39.3449  0.3223  −50.4969  −107.2941 ]

such that the controller matrix K(s) can be extracted from the following.

>> k11=tf(N(1,1:3),D(1,1:3)); k12=tf(N(1,4:6),D(1,4:6));
   k21=tf(N(2,1:3),D(2,1:3)); k22=tf(N(2,4:6),D(2,4:6));
   K=[k11 k12; k21 k22], zpk(K)

The designed controller is

K(s) = 1/[s(s + 20)] [ −2.4067(s − 11.4)(s − 0.1702)    −1.2776(s − 18.4)(s + 1.673)
                        5.8261(s + 25.92)(s + 0.7998)    0.32234(s − 158.8)(s + 2.097) ].

It can be seen that this controller is much easier to implement than the ones designed earlier. Under such a controller, the closed-loop step responses can be obtained for the perturbed plants, as shown in Figure 10.20.

>> G=0.8*G0; u1=1; u2=0; [t1,x,y1]=sim('c10mpopt',10);
   u1=0; u2=1; [t2,x,y2]=sim('c10mpopt',10);
   G=1.5*G0; u1=1; u2=0; [t3,x,y3]=sim('c10mpopt',10);
   u1=0; u2=1; [t4,x,y4]=sim('c10mpopt',10);
   subplot(221), plot(t1,y1(:,1),t3,y3(:,1)), ylim([-0.1 1.1])
   subplot(223), plot(t1,y1(:,2),t3,y3(:,2)), ylim([-0.1 1.1])
   subplot(222), plot(t2,y2(:,1),t4,y4(:,1)), ylim([-0.1 1.1])
   subplot(224), plot(t2,y2(:,2),t4,y4(:,2)), ylim([-0.1 1.1])

It can be seen that although the closed-loop behaviours under such a controller is not as good as the one designed earlier, the responses and robustness are still satisfactory, under such a simple controller. Step Response From: In(1)


Fig. 10.20: Closed-loop step responses subject to gain changes.

This demonstrates that the controller design procedure is also robust with respect to the pole selection: even if the poles are not assigned accurately, satisfactory control behaviour may still be achieved.
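The robustness to pole selection comes from the fact that, once the denominators are fixed, the numerator fit performed by fedmunds is a linear least-squares problem on frequency-response data. A minimal sketch of such a linearised fit in Python (outside the book's MATLAB toolchain; the plant and target data below are hypothetical, chosen only so that the exact answer is known):

```python
# Linearised least-squares fit of a numerator n(s) = n2*s^2 + n1*s + n0
# over a FIXED denominator d(s), from target frequency-response data Kt(jw):
#   minimise  sum_w |n(jw) - Kt(jw)*d(jw)|^2,
# which is linear in the real coefficients [n2, n1, n0].

def fit_numerator(ws, Kt, d):
    A = [[0.0]*3 for _ in range(3)]      # real normal equations A x = b
    b = [0.0]*3
    for w, K in zip(ws, Kt):
        s = 1j*w
        basis = [s**2, s, 1.0]           # regressors of n(jw)
        rhs = K*d(s)                     # target value of n(jw)
        for i in range(3):
            for k in range(3):
                A[i][k] += (basis[i].conjugate()*basis[k]).real
            b[i] += (basis[i].conjugate()*rhs).real
    for col in range(3):                 # Gaussian elimination with pivoting
        p = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col+1, 3):
            f = A[r][col]/A[col][col]
            for k in range(col, 3):
                A[r][k] -= f*A[col][k]
            b[r] -= f*b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                  # back substitution
        x[r] = (b[r] - sum(A[r][k]*x[k] for k in range(r+1, 3)))/A[r][r]
    return x

d = lambda s: s*(s + 20)                 # fixed denominator from the chosen poles
true_n = lambda s: 2*s**2 + 5*s + 3      # hypothetical target numerator
ws = [0.1*10**(k/20) for k in range(41)] # 0.1 ... 10 rad/s
Kt = [true_n(1j*w)/d(1j*w) for w in ws]  # "measured" target response
print(fit_numerator(ws, Kt, d))          # ≈ [2.0, 5.0, 3.0]
```

Since the data are generated from an exact rational model, the fit recovers the numerator coefficients to machine precision; with real measured data the same equations give the least-squares optimum.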

10.2.4 Controller design for plants with time delays

Although the design procedures discussed in the previous section were not derived with time delays in mind, we now apply the design approach to plants with delays through an example.

Example 10.8. Assume that there exist time delays in the multivariable system studied in the previous example, where

G(s) = [ e^(−0.2s)/(1.35s^1.2 + 2.3s^0.9 + 1)    2e^(−0.2s)/(4.13s^0.7 + 1)
         1/(0.52s^1.5 + 2.03s^0.7 + 1)          −e^(−0.5s)/(3.8s^0.8 + 1) ].

Design a multivariable controller with the parameter optimisation approach and observe the results.

Solution. The fractional-order transfer function matrix can be entered first. Although the plant model contains time delays, the design procedure can still be used. If the target closed-loop transfer function matrix is selected to be the same as the one in the previous example, the Bode magnitude plot of the target controller, shown in Figure 10.21, can be obtained with

>> s=fotf('s');
   g1=1/(1.35*s^1.2+2.3*s^0.9+1); g1.ioDelay=0.2;
   g2=2/(4.13*s^0.7+1); g2.ioDelay=0.2;
   g4=-1/(3.8*s^0.8+1); g4.ioDelay=0.5;
   g3=1/(0.52*s^1.5+2.03*s^0.7+1);
   G=[g1,g2; g3,g4]; s=tf('s');
   T=[9/(s+3)^2, 0; 0, 100/(s+10)^2];
   w=logspace(-2,3); Gw=mfrd(G,w); Tw=mfrd(fotf(T),w);
   I=eye(2); h1=finv(w,fadd(w,-Tw,I)); h2=finv(w,Gw);
   h3=fmulf(w,h2,Tw); Kt=fmulf(w,h3,h1);
   H=mfd2frd(Kt,w); bodemag(H)

Fig. 10.21: Bode magnitude plot for target controller Kt (s).
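The frequency-response computation carried out above with finv, fadd and fmulf amounts to evaluating Kt(jω) = G⁻¹(jω) T(jω) (I − T(jω))⁻¹ point by point. A minimal sketch in Python (outside the book's MATLAB toolchain) of this computation for the delay plant of Example 10.8, at a single frequency:

```python
import cmath

def G(s):
    # plant entries of Example 10.8 (time delays as exp(-L*s) factors)
    g11 = cmath.exp(-0.2*s)/(1.35*s**1.2 + 2.3*s**0.9 + 1)
    g12 = 2*cmath.exp(-0.2*s)/(4.13*s**0.7 + 1)
    g21 = 1/(0.52*s**1.5 + 2.03*s**0.7 + 1)
    g22 = -cmath.exp(-0.5*s)/(3.8*s**0.8 + 1)
    return [[g11, g12], [g21, g22]]

def T(s):
    # target closed-loop transfer function matrix
    return [[9/(s + 3)**2, 0], [0, 100/(s + 10)**2]]

def minv(M):
    # inverse of a 2x2 complex matrix
    (a, b), (c, d) = M
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

def mmul(A, B):
    # product of two 2x2 complex matrices
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def target_controller(w):
    s = 1j*w
    Gs, Ts = G(s), T(s)
    ImT = [[1 - Ts[0][0], -Ts[0][1]], [-Ts[1][0], 1 - Ts[1][1]]]
    return mmul(mmul(minv(Gs), Ts), minv(ImT))   # Kt = G^(-1) T (I - T)^(-1)

Kt = target_controller(1.0)                      # one point, w = 1 rad/s
```

A useful self-check is the identity G(jω) Kt(jω) (I − T(jω)) = T(jω), which follows directly from the definition of the target controller.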

Observing the Bode magnitude plots, the pole positions of the target controller can be read off. Therefore, the following two matrices can be established:

N0 = [ 1 1 1 1 1 1 ; 1 1 1 1 1 1 ]  and  D = [ 1 10 0 1 10 0 ; 1 10 0 1 10 0 ].

With these matrices, the multivariable controller K(s) can be designed, and with the Simulink model shown in Figure 10.17 the following statements can be issued, with


Fig. 10.22: Closed-loop step-response of the delay system.

the step response of the closed-loop system, shown in Figure 10.22, obtained:

>> N0=[1,1,1, 1,1,1; 1,1,1, 1,1,1];
   D=[1,10,0, 1,10,0; 1,10,0, 1,10,0];
   N=fedmunds(w,Gw,Tw,N0,D)
   k11=tf(N(1,1:3),D(1,1:3)); k12=tf(N(1,4:6),D(1,4:6));
   k21=tf(N(2,1:3),D(2,1:3)); k22=tf(N(2,4:6),D(2,4:6));
   K=[k11 k12; k21 k22]; zpk(K)
   u1=1; u2=0; [t1,x,y1]=sim('c10mpopt',10);
   subplot(221), plot(t1,y1(:,1)), ylim([-0.1 1.1])
   subplot(223), plot(t1,y1(:,2)), ylim([-0.1 1.1])
   u1=0; u2=1; [t2,x,y2]=sim('c10mpopt',10);
   subplot(222), plot(t2,y2(:,1)), ylim([-0.1 1.1])
   subplot(224), plot(t2,y2(:,2)), ylim([-0.1 1.1])

The controller designed is

K(s) = 1/(s(s+10)) · [ −1.7383(s − 4.95)(s + 0.4409)     8.2376(s + 12.4)(s + 0.53)
                        1.4122(s + 13.78)(s + 0.574)    −0.475(s + 77.98)(s + 1.445) ].

The controlled system behaviour is almost the same as that obtained in the previous example, and the controller design procedure is almost identical. It can be seen that the design procedure applies equally easily to systems with time delays.

Example 10.9. Consider again the plant model in Example 10.8. If gain variations of 80 % to 150 % are again assumed, observe the closed-loop step responses of the system.

Solution. As in the previous examples, the following commands can be used to assess the closed-loop step responses of the system under the designed controller, and the results are shown in Figure 10.23. It can be seen that


Fig. 10.23: Closed-loop step-response of the delay system subject to plant gain variations.

although there may exist uncertainties in the plant gains, the closed-loop behaviours are still satisfactory.

>> G=0.8*G0;
   u1=1; u2=0; [t1,x,y1]=sim('c10mpopt',10);
   u1=0; u2=1; [t2,x,y2]=sim('c10mpopt',10);
   G=1.5*G0;
   u1=1; u2=0; [t3,x,y3]=sim('c10mpopt',10);
   u1=0; u2=1; [t4,x,y4]=sim('c10mpopt',10);
   subplot(221), plot(t1,y1(:,1),t3,y3(:,1)), ylim([-0.1 1.1])
   subplot(223), plot(t1,y1(:,2),t3,y3(:,2)), ylim([-0.1 1.1])
   subplot(222), plot(t2,y2(:,1),t4,y4(:,1)), ylim([-0.1 1.1])
   subplot(224), plot(t2,y2(:,2),t4,y4(:,2)), ylim([-0.1 1.1])

It can be seen from the examples that the integer-order controllers yield satisfactory closed-loop behaviour, and that they are robust against plant gain variations. Therefore, introducing the more tedious fractional-order controllers may not be very helpful here.

A Inverse Laplace transforms involving fractional and irrational operations

A.1 Special functions for Laplace transform

Since the evaluation of some fractional-order expressions is difficult, special functions may be needed. Some of these special functions are introduced and listed in Table A.1; details of some of them can be found in Chapter 2.

Tab. A.1: Some special functions.

Special function            Definition
Bessel function             J_ν(t), solution of the Bessel equation t²y″ + ty′ + (t² − ν²)y = 0
beta function               B(z, m) = ∫₀¹ t^(m−1)(1 − t)^(z−1) dt, ℜ(m) > 0, ℜ(z) > 0
Dawson function             daw(t) = e^(−t²) ∫₀^t e^(τ²) dτ
error function              erf(t) = (2/√π) ∫₀^t e^(−τ²) dτ
erfc function               erfc(t) = (2/√π) ∫_t^∞ e^(−τ²) dτ = 1 − erf(t)
extended Bessel function    I_ν(t) = j^(−ν) J_ν(jt)
Gamma function              Γ(z) = ∫₀^∞ e^(−t) t^(z−1) dt
Hermite polynomial          H_n(t) = (−1)^n e^(t²) dⁿ/dtⁿ e^(−t²)
Mittag-Leffler function     E^(γ,q)_{α,β}(z) = Σ_{k=0}^∞ (γ)_{qk} z^k / (Γ(αk + β) k!), ℜ(α), ℜ(β), ℜ(γ) > 0, q ∈ ℕ
  (special cases)           E^(γ,1)_{α,β}(z) = E^γ_{α,β}(z),  E^1_{α,β}(z) = E_{α,β}(z),  E_α(z) = E_{α,1}(z)
Pochhammer symbol           (γ)_k = γ(γ + 1)(γ + 2) ⋯ (γ + k − 1) = Γ(k + γ)/Γ(γ)
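For quick numerical experiments outside MATLAB, the two-parameter Mittag-Leffler function of Table A.1 can be evaluated by truncating its defining series; this is adequate for moderate |z| only, while the book's ml_func() uses more robust algorithms. A sketch in Python:

```python
import math

def mittag_leffler(alpha, beta, z, kmax=150, tol=1e-15):
    """Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta).

    Only suitable for moderate |z|; large arguments need asymptotic or
    integral-representation algorithms (as used by ml_func in the FOTF Toolbox).
    """
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha*k + beta)
        total += term
        if abs(term) < tol*max(1.0, abs(total)):
            break
    return total

# special cases: E_{1,1}(z) = exp(z), E_{2,1}(-z^2) = cos(z)
print(mittag_leffler(1, 1, 1.0))    # ≈ 2.718281828
print(mittag_leffler(2, 1, -4.0))   # ≈ cos(2) = -0.416146836
```

The two printed special cases provide a cheap sanity check of any Mittag-Leffler implementation.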

A.2 Laplace Transform tables

An inverse Laplace transform table involving fractional and irrational operations is collected in Table A.2 (see [9, 35]).

Tab. A.2: Inverse Laplace transforms with fractional and irrational operations.

F(s)                                             f(t) = L⁻¹[F(s)]
s^(αγ−β)/(s^α + a)^γ                             t^(β−1) E^γ_{α,β}(−a t^α)
1/(s^n √s), n = 1, 2, ...                        2^n t^(n−1/2) / (1·3·5 ⋯ (2n−1) √π)
k/(s² + k²) · coth(πs/(2k))                      |sin kt|
arctan(k/s)                                      (1/t) sin kt
ln((s² − a²)/s²)                                 (2/t)(1 − cosh at)
ln((s² + a²)/s²)                                 (2/t)(1 − cos at)
e^(−k√s)                                         k e^(−k²/(4t)) / (2√(πt³))
e^(−k√s)/√s                                      e^(−k²/(4t)) / √(πt)
e^(−k√s)/s                                       erfc(k/(2√t))
e^(−k√s)/(s√s)                                   2√(t/π) e^(−k²/(4t)) − k erfc(k/(2√t))
e^(−k√s)/(√s (a + √s))                           e^(ak) e^(a²t) erfc(a√t + k/(2√t))
a e^(−k√s)/(s (a + √s))                          erfc(k/(2√t)) − e^(ak) e^(a²t) erfc(a√t + k/(2√t))
(1 − s)^n / s^(n+1/2)                            n!/((2n)! √(πt)) H_{2n}(√t)
(1 − s)^n / s^(n+3/2)                            −n!/((2n+1)! √π) H_{2n+1}(√t)
1/((s + a) √(s + b))                             e^(−at) erf(√((b − a)t)) / √(b − a)
1/√(s² + a²)                                     J₀(at)
1/√(s² − a²)                                     I₀(at)
(a − b)^k / (√(s+a) + √(s+b))^(2k)               (k/t) e^(−(a+b)t/2) I_k((a − b)t/2), k > 0
(√(s+2a) − √s)/(√(s+2a) + √s)                    (1/t) e^(−at) I₁(at)
(√(s+2a) − √s)/√s                                a e^(−at) [I₁(at) + I₀(at)]
1/(s² − a²)^k                                    (√π/Γ(k)) (t/(2a))^(k−1/2) I_{k−1/2}(at)
1/(s² + a²)^k                                    (√π/Γ(k)) (t/(2a))^(k−1/2) J_{k−1/2}(at)
(√(s² + a²) − s)^ν / √(s² + a²)                  a^ν J_ν(at), ν > −1
(s − √(s² − a²))^ν / √(s² − a²)                  a^ν I_ν(at), ν > −1
(√(s² + a²) − s)^k                               (k a^k / t) J_k(at), k > 0
1/(s + √(s² + a²))                               J₁(at)/(at)
1/(s + √(s² + a²))^N                             N J_N(at)/(a^N t), N > 0
1/(√(s² + a²)(s + √(s² + a²)))                   J₁(at)/a
1/(√(s² + a²)(s + √(s² + a²))^N)                 J_N(at)/a^N
ln((s − a)/(s − b))                              (e^(bt) − e^(at))/t
√(s − a) − √(s − b)                              (e^(bt) − e^(at))/(2√(πt³))
1/(√(s + a) √(s + b))                            e^(−(a+b)t/2) I₀((a − b)t/2)
1/(√(s + a)(s + b)^(3/2))                        t e^(−(a+b)t/2) [I₀((a − b)t/2) + I₁((a − b)t/2)]
Γ(k)/((s + a)^k (s + b)^k)                       √π (t/(a − b))^(k−1/2) e^(−(a+b)t/2) I_{k−1/2}((a − b)t/2)
(b² − a²)/((s − a²)(√s + b))                     e^(a²t)[b − a erf(a√t)] − b e^(b²t) erfc(b√t)
(b² − a²)/(√s (s − a²)(√s + b))                  e^(a²t)[(b/a) erf(a√t) − 1] + e^(b²t) erfc(b√t)
e^(−k/s)/s                                       J₀(2√(kt))
e^(−k/s)/√s                                      cos(2√(kt))/√(πt)
e^(−k/s)/(s√s)                                   sin(2√(kt))/√(πk)
e^(k/s)/√s                                       cosh(2√(kt))/√(πt)
e^(k/s)/(s√s)                                    sinh(2√(kt))/√(πk)
e^(−k/s)/s^ν                                     (t/k)^((ν−1)/2) J_{ν−1}(2√(kt)), ν > 0
e^(k/s)/s^ν                                      (t/k)^((ν−1)/2) I_{ν−1}(2√(kt))
e^(−√s)/(s√s)                                    2√(t/π) e^(−1/(4t)) − erfc(1/(2√t))
e^(−√s)/(√s (√s + 1))                            e^(t+1) erfc(√t + 1/(2√t))
e^(−√s)/(s (√s + 1))                             erfc(1/(2√t)) − e^(t+1) erfc(√t + 1/(2√t))
1/√s                                             1/√(πt)
1/(s√s)                                          2√(t/π)
1/s^α                                            t^(α−1)/Γ(α)
1/(s + a)^α                                      t^(α−1) e^(−at)/Γ(α)
1/√(s + 1)                                       e^(−t)/√(πt)
1/(s √(s + 1))                                   erf(√t)
1/(√s (s − 1))                                   e^t erf(√t)
√s/(s − 1)                                       1/√(πt) + e^t erf(√t)
1/(√s (s + 1))                                   (2/√π) daw(√t)
√s/(s + 1)                                       1/√(πt) − (2/√π) daw(√t)
1/(s√s (s + 1))                                  2√(t/π) − (2/√π) daw(√t)
1/(√s + a)                                       1/√(πt) − a e^(a²t) erfc(a√t)
1/(√s (√s + a))                                  e^(a²t) erfc(a√t)
1/(√s (s − a²))                                  e^(a²t) erf(a√t)/a
√s/(s − a²)                                      1/√(πt) + a e^(a²t) erf(a√t)
1/(√s (s + a²))                                  (2/(a√π)) e^(−a²t) ∫₀^(a√t) e^(τ²) dτ = √t E_{1,3/2}(−a²t)
√s/(s + a²)                                      t^(−1/2) E_{1,1/2}(−a²t)
s/(s − a)^(3/2)                                  e^(at)(1 + 2at)/√(πt)
1/(s^α + a)                                      t^(α−1) E_{α,α}(−a t^α)
s^(α−1)/(s^α + a)                                E_α(−a t^α)
a/(s (s^α + a))                                  1 − E_α(−a t^α)
1/(s^α (s − a))                                  t^α E_{1,1+α}(at)
s^α/(s − a)                                      t^(−α) E_{1,1−α}(at), 0 < α < 1
s^(α−1)/(s^α ± λ)                                E_α(∓λ t^α), ℜ(s) > |λ|^(1/α)
k!/(√s ± λ)^(k+1)                                t^((k−1)/2) E^((k))_{1/2,1/2}(∓λ√t), ℜ(s) > λ²
1/(√(s(s + a)) (√(s+a) + √s)^(2ν))               e^(−at/2) I_ν(at/2)/a^ν, ν > −1

DOI 10.1515/9783110497977-011
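Entries of such a table can be spot-checked numerically by evaluating the Laplace integral directly. A sketch in Python (standard library only; the grid size and truncation point are ad hoc choices) verifying the pair e^(−k√s)/s ↔ erfc(k/(2√t)) at one point:

```python
import math

def laplace_num(f, s, T=50.0, n=100000):
    # composite trapezoidal approximation of the integral of exp(-s*t)*f(t)
    # over [0, T]; the tail beyond T is negligible for s = 1
    h = T/n
    total = 0.5*(f(1e-12) + f(T)*math.exp(-s*T))
    for i in range(1, n):
        t = i*h
        total += f(t)*math.exp(-s*t)
    return total*h

k, s = 1.0, 1.0
f = lambda t: math.erfc(k/(2.0*math.sqrt(t)))   # tabulated f(t)
lhs = laplace_num(f, s)
rhs = math.exp(-k*math.sqrt(s))/s               # tabulated F(s)
print(lhs, rhs)                                  # both ≈ 0.36788
```

The same quadrature check works for any entry whose f(t) can be evaluated, which makes it a convenient guard against mis-copied table rows.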

B FOTF Toolbox functions and models

B.1 Computational MATLAB functions

A great number of MATLAB functions were written by the author to better present the materials in the book. The page numbers listed are the pages where the listings of the functions are located. Functions without a page number are not listed in the book; they are provided in the FOTF Toolbox dedicated to the book.

(1) Special functions and fundamentals in numerical computation.

Functions         Function explanations                                            Page
beta_c()          beta function evaluation for complex arguments                   17
common_order()    compute the common order                                         228
fence_shadow()    draw the shadows on the walls                                    96
fmincon_global()  a global constrained optimisation problem solver                 46
funmsym()         evaluate symbolic matrix functions                               234
gamma_c()         Gamma function evaluation for complex arguments                  14
kronsum()         compute the Kronecker sum                                        35
mittag_leffler()  symbolic evaluation of the Mittag-Leffler functions              24
ml_func()         numerical evaluation of the Mittag-Leffler functions             25
more_sols()       find all possible solutions of nonlinear matrix equations        135
more_vpasols()    symbolic version of more_sols(), high precision solutions
new_inv()         simple matrix inverse function, not recommended                  37

(2) Numerical evaluations of fractional-order derivatives and integrals.

Functions         Function explanations                                                     Page
caputo()          computation of the Caputo derivatives, not recommended                    89
caputo9()         evaluation of the Caputo derivatives with o(h^p), recommended             91
genfunc()         computation of generating function coefficients symbolically              73
get_vecw()        computation of o(h^p) weighting coefficients                              79
glfdiff0()        evaluation of the Grünwald–Letnikov derivatives, not recommended (N.R.)   55
glfdiff()         standard evaluation of the Grünwald–Letnikov derivatives, with o(h)       56
glfdiff2()        evaluation of o(h^p) Grünwald–Letnikov derivatives, N.R.                  79
glfdiff9()        evaluation of o(h^p) Grünwald–Letnikov derivatives, recommended           81
glfdiff_fft()     evaluation of o(h^p) Grünwald–Letnikov derivatives with FFT, N.R.         75
glfdiff_mat()     matrix version of glfdiff(), N.R. for large samples                       62
glfdiff_mem()     Grünwald–Letnikov derivative evaluation with short-memory effect          63
rlfdiff()         evaluation of the Riemann–Liouville derivatives, not recommended          70

DOI 10.1515/9783110497977-012

(3) Filter design for fractional-order derivatives and systems.

Functions         Function explanations                                                     Page
carlson_fod()     design of a Carlson filter                                                145
charef_fod()      design of a Charef filter                                                 169
charef_opt()      design of an optimal Charef filter                                        174
contfrac0()       continued-fraction interface to irrational functions                      142
matsuda_fod()     design of a Matsuda–Fujii filter                                          148
new_fod()         design of a modified Oustaloup filter                                     156
opt_app()         optimal integer-order transfer function approximation of high-order models 161
ousta_fod()       design of a standard Oustaloup filter                                     150

(4) Numerical evaluations of linear fractional-order differential equations.

Functions         Function explanations                                                     Page
caputo_ics()      equivalent initial condition reconstruction                               126
fode_caputo0()    simple Caputo FODE solver with nonzero initial conditions                 123
fode_caputo9()    o(h^p) solution of the Caputo equations with nonzero initial conditions   127
fode_sol()        closed-form solution of FODE with zero initial conditions                 114
fode_solm()       matrix version of fode_sol(), N.R. for large samples                      118
fode_sol9()       o(h^p) version of fode_sol()                                              119
ml_step3()        numerical solution of step response of 3-term models                      104

(5) Numerical evaluations of nonlinear fractional-order differential equations.

Functions         Function explanations                                                     Page
INVLAP_new()      updated version of INVLAP() for feedback control systems                  131
nlfode_mat()      matrix-based solutions of implicit nonlinear FODEs                        263
nlfode_vec()      nonlinear extended FOSS equation solver                                   253
nlfode_vec1()     another version of nlfode_vec(), not recommended                          257
nlfec()           o(h^p) corrector solution of nonlinear multi-term FODEs                   261
nlfep()           o(h^p) predictor solution of nonlinear multi-term FODEs                   259
pepc_nlfode()     numerical solutions of single-term nonlinear FODE                         246

(6) Fractional-order and other controller design.

Functions         Function explanations                                                     Page
c9mfpid()         FOPID controller design with equation solutions                           304
c9mfpid_con()     constraints in optimum FOPID design of FOPDT plants                       305
c9mfpid_con1()    constraints in optimum FOPID design of FOIPDT plants                      309
c9mfpid_con2()    constraints in optimum FOPID design of FOFOPDT plants                     310
c9mfpid_opt()     objective function for FOPDT plants                                       305
c9mfpid_opt1()    objective function for FOIPDT plants                                      309
c9mfpid_opt2()    objective function for FOFOPDT plants                                     310
ffuz_param()      S-function for parameter setting of fuzzy FOPID controller                324
fopid()           construct a PI^λ D^μ controller from parameters                           294
fpidfun()         objective function for optimum fractional-order PID controller            313
fpidfuns()        general objective function for optimum FOPID controller                   313
fpidtune()        design of optimum fractional-order PID controllers                        315
gershgorin()      draw the Nyquist plots with Gershgorin bands                              206
get_fpidf()       build string expression of the open-loop model                            317
mfd2frd()         convert MFD data type into FRD data type                                  343
optimfopid()      GUI for optimum fractional-order PID controller design
optimpid()        GUI for optimum integer-order PID controller design
pseuduag()        pseudodiagonalisation of multivariable systems                            331

B.2 Object-oriented program design

For the ease of modelling, analysis and design of fractional-order systems, two classes – FOTF and FOSS – are designed, with overload functions written for both. We have tried our best to keep the syntaxes of the overload functions the same as those in the Control System Toolbox.

(1) Class FOTF – fractional-order transfer functions.

Functions         Function explanations                                                     Page
base_order()      find the base order from an FOTF object                                   193
bode()            Bode diagram analysis                                                     201
diag()            diagonal FOTF matrix creation and extraction                              191
display()         overload function to display a given MIMO FOTF object                     182
eig()             find all the poles, including extraneous roots                            197
eq()              check whether two FOTF blocks are equal or not                            191
feedback()        overload the feedback function for two FOTF blocks                        188
foss_a()          convert an FOTF to an extended FOSS object                                239
fotf()            creation of an FOTF class                                                 181
fotf2cotf()       convert an FOTF object into commensurate-order form                       193
fotf2foss()       low-level conversion function of FOTF to FOSS object                      224
fotfdata()        extract all the fields from an FOTF object                                184
freqresp()        low-level frequency response function of an FOTF object                   200
high_order()      high-order integer-order transfer function fitting of FOTF objects        158
impulse()         evaluation of impulse response of an FOTF object                          212
inv()             inverse FOTF matrix                                                       189
isstable()        check whether an FOTF object is stable or not                             196
iszero()          check whether a SISO FOTF object is zero or not                           187

Functions         Function explanations                                                     Page
lsim()            time response evaluation to arbitrary input signals                       215
margin()          compute the gain and phase margins                                        202
maxdelay()        extract maximum delay from an FOTF object                                 189
mfrd()            frequency response evaluation                                             205
minus()           minus operation of two FOTF objects                                       191
mldivide()        left-division function                                                    191
mpower()          power of an FOTF object                                                   192
mrdivide()        right-division function                                                   191
mtimes()          overload function of multiplication of two blocks                         185
nichols()         draw the Nichols chart                                                    202
norm()            evaluation of H2 and H∞ norms of FOTF objects                             200
nyquist()         draw the Nyquist plot                                                     202
plus()            overload function of plus operation of two blocks                         187
residue()         partial fraction expansion                                                198
rlocus()          root locus analysis                                                       217
sigma()           singular value plots                                                      210
simplify()        simplification of an FOTF object                                          186
step()            step response                                                             211
uminus()          unary minus of an FOTF object                                             191
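To illustrate what such a class encapsulates, a stripped-down fractional-order transfer function class can be sketched in Python (the class name and structure here are hypothetical; the real FOTF class is a far more complete MATLAB implementation): store coefficient–order pairs and evaluate the frequency response G(jω) = Σ bᵢ(jω)^γᵢ / Σ aⱼ(jω)^ηⱼ directly.

```python
class SimpleFOTF:
    """Toy fractional-order transfer function, for illustration only:
    G(s) = sum(b_i * s**gamma_i) / sum(a_j * s**eta_j)."""
    def __init__(self, num, nord, den, dord):
        self.num, self.nord = num, nord   # numerator coefficients and orders
        self.den, self.dord = den, dord   # denominator coefficients and orders

    def freqresp(self, w):
        # frequency response G(jw) by direct evaluation of the pseudo-polynomials
        s = 1j*w
        n = sum(b*s**g for b, g in zip(self.num, self.nord))
        d = sum(a*s**g for a, g in zip(self.den, self.dord))
        return n/d

# the (1,1) plant entry of Example 10.8 without its delay:
# G(s) = 1/(1.35 s^1.2 + 2.3 s^0.9 + 1)
g = SimpleFOTF([1], [0], [1.35, 2.3, 1], [1.2, 0.9, 0])
print(abs(g.freqresp(0.001)))   # low-frequency gain close to the static gain 1
```

Most of the overloaded operators in the table above (mtimes, plus, feedback, ...) reduce to bookkeeping on such coefficient–order lists.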

(2) Class FOSS – fractional-order state space models.

Functions         Function explanations                                                     Page
bode()            Bode diagram analysis                                                     237
coss_aug()        FOSS augmentation                                                         227
ctrb()            controllability test matrix creation                                      235
display()         overload function to display a given multivariable FOSS object            223
eig()             compute the poles of an FOSS object                                       232
eq()              check whether two FOSS blocks are equal or not                            230
feedback()        overload the feedback function for two FOSS blocks                        229
foss()            FOSS class creation                                                       222
foss2fotf()       low-level conversion from FOSS to FOTF object                             224
impulse()         evaluation of impulse response of an FOSS object                          237
inv()             inverse of an FOSS object                                                 230
isstable()        check whether an FOSS object is stable or not                             232
lsim()            time response evaluation to arbitrary input signals                       237
margin()          compute the gain and phase margins                                        237
mfrd()            frequency response evaluation                                             237
minreal()         minimum realisation of an FOSS object                                     230
minus()           minus operation of two FOSS objects                                       230
mpower()          power of an FOSS object                                                   230
mtimes()          overload function of multiplication operation of two blocks               227
nichols()         draw the Nichols charts                                                   237
norm()            evaluation of the H2 and H∞ norm of an FOSS object                        236

Functions         Function explanations                                                     Page
nyquist()         draw the Nyquist plots                                                    237
obsv()            construct an observability test matrix                                    235
order()           find the orders of an FOSS object                                         224
plus()            overload function of plus operation of two blocks                         229
rlocus()          root locus analysis for a SISO FOSS object                                238
size()            find the numbers of inputs, outputs and states                            224
ss_extract()      extract integer-order state space object from FOSS                        224
step()            step response                                                             237
uminus()          unary minus of an FOSS object                                             230

B.3 Simulink models

An FOTF blockset, with entry model fotflib, is established in Simulink; the blocks necessary for fractional-order system modelling are created in the blockset. Several reusable Simulink frameworks are also established and can be used in later simulation tasks.

Models            Model explanations                                                        Page
fotflib           Simulink blockset for FOTF Toolbox                                        265
fotf2sl()         multivariable FOTF to Simulink block convertor                            271
sfun_mls()        S-function version of the Mittag-Leffler function                         284
slblocks()        default Simulink description file                                         265
c9mfpids          Simulink model for fractional-order PID systems with variable delays      328
c9mvofuz          Simulink model for variable-order fuzzy PID control systems               325
c9mvofuz2         Simulink model for VO fuzzy PID systems with variable delays              327
c10mpdm2          Simulink model for multivariable PID control system                       336
c10mpopt          Simulink model for MIMO parameter optimisation control systems            346

B.4 Functions and models for the examples

For certain demonstrative examples, several dedicated MATLAB functions and Simulink models are established. These files may be useful for the reader in solving similar problems.

Functions         Function explanations                                                     Page
c2exnls()         constraint function of an example                                         44
c8mstep           Simulink model of a multivariable fractional-order system                 274
c8mfpid1          Simulink model of a fractional-order PID control system                   273
c8mchaos          vectorised Simulink model for fractional-order Chua system                290

Functions         Function explanations                                                     Page
c8mchaosd()       data input file for c8mchaos model                                        290
c8mchua()         MATLAB description of fractional-order Chua equations                     254
c8mchuasim        Simulink model for fractional-order Chua circuit                          289
c8mblk2,3,5       three Simulink models of a linear fractional-order equation               278
c8mcaputo         complicated Simulink model for nonlinear FODE                             285
c8mexp2           simpler Simulink model for nonlinear Caputo equations                     288
c8mnlf1, c8mnlf2  two different Simulink models of a nonlinear FODE                         281
c8mexp1x()        MATLAB function for describing nonlinear Caputo equations                 256
c8mimp            Simulink model for solving implicit Caputo equation                       292
c8nleq()          MATLAB function of a nonlinear single-term Caputo equation                251
c8mlinc1          Simulink model of a linear fractional-order Caputo equation               288
c9ef1()–c9ef3()   criteria for optimum fractional-order PID controllers                     299
c9mplant          Simulink model used for optimpid design                                   301

C Benchmark problems for the assessment of fractional-order differential equation algorithms

The solution of fractional-order differential equations is a very important foundation of fractional calculus and its applications. Although there is a significant number of numerical algorithms in this field, their quality varies widely, and the examples used in each paper are completely different, so it is not easy to assess fairly the existing algorithms and those yet to be published. It is also hard for users to choose the correct (i.e. suitable) algorithm for their own problems. A set of benchmark problems with analytical solutions is therefore collected or constructed, so that the capabilities of different algorithms can be assessed within the same framework. With the high-precision algorithms and the Simulink-based simulation schemes, the results obtained in this book are the most accurate numerical solutions so far obtainable with any existing approach under the double-precision data type.

Definition C.1 (Benchmark problem 1). A simple fractional-order differentiation problem is

C_0 D_t^{1.6} y(t) = t^{0.4} E_{1,1.4}(−t),   (C.1)

with y(0) = 1, y′(0) = −1, 0 ≤ t ≤ 100. The analytical solution of the problem is y(t) = e^{−t}.

Comments C.1. Although, theoretically speaking, the equation can be classified as a fractional-order differential equation, it is usually not regarded as one, since the right-hand side is independent of the signal y(t); it is merely a given function of t. The author suggests not using such an example to demonstrate one's own fractional-order differential equation solvers, because the results obtained may not be convincing.

Definition C.2 (Benchmark problem 2). A linear fractional-order differential equation with nonzero initial conditions is

y‴(t) + C_0 D_t^{2.5} y(t) + y″(t) + 4y′(t) + C_0 D_t^{0.5} y(t) + 4y(t) = 6 cos t,

with the known initial values y(0) = 1, y′(0) = 1, y″(0) = −1. The analytical solution is y(t) = √2 sin(t + π/4).

Definition C.3 (Benchmark problem 3). A nonlinear fractional-order differential equation with nonzero initial conditions, originated from [16], is

C_0 D_t^{1.455} y(t) = −t^{0.1} (E_{1,1.545}(−t)/E_{1,1.445}(−t)) e^t y(t) C_0 D_t^{0.555} y(t) + e^{−2t} − [y′(t)]²,

where y(0) = 1, y′(0) = −1, and t ∈ [0, 1]. The analytical solution is y(t) = e^{−t}. Also, check whether a larger interval of t can be handled.

Comments C.2. This equation is fully studied in Examples 8.5, 8.6, 8.7, 8.8, 8.15 and 8.17 of this book with different algorithms, whose accuracy and speed are completely different. The reader can select an appropriate algorithm for such an equation.

Definition C.4 (Benchmark problem 4). An implicit nonlinear fractional-order ordinary differential equation is

C_0 D_t^{0.2} y(t) · C_0 D_t^{1.8} y(t) + C_0 D_t^{0.3} y(t) · C_0 D_t^{1.7} y(t)
   = −(t/8) E_{1,1.8}(−t/2) E_{1,1.2}(−t/2) − (t/8) E_{1,1.7}(−t/2) E_{1,1.3}(−t/2),

with y(0) = 1, y′(0) = −1/2. The analytical solution is y(t) = e^{−t/2}.

Definition C.5 (Benchmark problem 5). A fractional-order state space equation is of the form

C_0 D_t^{0.5} x(t) = (1/(2Γ(1.5))) ([(y(t) − B)(z(t) − C)]^{1/6} + √t),
C_0 D_t^{0.2} y(t) = Γ(2.2) [x(t) − A],
C_0 D_t^{0.6} z(t) = (Γ(2.8)/Γ(2.2)) [y(t) − B],

with x(0) = A, y(0) = B, z(0) = C, and it is known that the analytical solution is x(t) = t + A, y(t) = t^{1.2} + B, z(t) = t^{1.8} + C. As a benchmark problem, one may, for instance, fix the parameters as A = 1, B = 2 and C = 3.

DOI 10.1515/9783110497977-013
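Benchmark problem 1 can be checked with a first-order Grünwald–Letnikov scheme: for 1 < α < 2 the Caputo derivative of y equals the Grünwald–Letnikov derivative of y(t) − y(0) − y′(0)t. A sketch in Python (outside the book's MATLAB toolchain; the step size and evaluation point t = 1 are arbitrary choices):

```python
import math

def ml(alpha, beta, z, kmax=150):
    # truncated two-parameter Mittag-Leffler series (moderate |z| only)
    total = 0.0
    for k in range(kmax):
        term = z**k / math.gamma(alpha*k + beta)
        total += term
        if abs(term) < 1e-16:
            break
    return total

alpha, h = 1.6, 0.001
N = int(round(1.0/h))                     # evaluate at t = 1
w = [1.0]                                 # GL weights w_j = (-1)^j C(alpha, j)
for j in range(1, N+1):
    w.append(w[-1]*(1 - (alpha + 1)/j))
# Caputo derivative of y(t) = exp(-t): apply the GL formula to
# g(t) = y(t) - y(0) - y'(0)*t = exp(-t) - 1 + t, which removes the
# Taylor polynomial matching the initial conditions y(0)=1, y'(0)=-1.
g = lambda t: math.exp(-t) - 1 + t
t = N*h
gl = sum(w[j]*g(t - j*h) for j in range(N+1)) / h**alpha
rhs = t**0.4 * ml(1, 1.4, -t)             # right-hand side of (C.1)
print(gl, rhs)                            # the two values agree to O(h)
```

The O(h) agreement of the two printed values illustrates why this problem is a derivative-evaluation benchmark rather than a genuine differential-equation benchmark, as Comments C.1 point out.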

Bibliography [1]

[2] [3]

[4] [5] [6]

[7] [8] [9] [10] [11] [12] [13] [14]

[15] [16] [17] [18] [19] [20] [21] [22] [23] [24]

M. Abramowita and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, 9th ed., United States Department of Commerce, National Bureau of Standards, Washington D.C., 1970. S. Bennett, Development of the PID controllers, IEEE Control Syst. 13 (1993), no. 6, 58–65. F. M. Callier and J. Winkin, Infinite dimensional system transfer functions, in: Analysis and Optimization of Systems: State and Frequency Domain Approaches for Infinite-Dimensional Systems, Lecture Notes in Control and Inform. Sci. 185, Springer, Berlin (1993), 72–101. R. Caponetto, G. Dongola, L. Fortuna and I. Petráš, Fractional Order Systems: Modeling and Control Applications, World Scientific, Singapore, 2010. M. Caputo and F. Mainardi, A new dissipation model based on memory mechanism, Pure Appl. Geophys. 91 (1971), no. 8, 134–147. H. Chamati and N. S. Tonchev, Generalized Mittag–Leffler functions in the theory of finite-size scaling for systems with strong anisotropy and/or long-range interaction, J. Phys. A. Math. Gen. 39 (2006), 469–478. A. Charef, H. H. Sun, Y. Y. Tsao and B. Onaral, Fractal system as represented by singularity function, IEEE Trans. Automat. Control 37 (1992), no. 9, 1465–1470. Y. Q. Chen and K. L. Moore, Relay feedback tuning of robust PID controllers with iso-damping property, IEEE Trans. Syst. Men Cybern. Part B 35 (2005), no. 1, 23–31. Y. Q. Chen, I. Petráš and B. M. Vinagre, A list of Laplace and inverse Laplace transforms related to fractional order calculus, preprint (2007). Y. Q. Chen, I. Petras and D. Xue, Fractional control – A tutorial, in: Proceedings of the American Control Conference (St. Louis 2009), IEEE Press, Piscataway (2009), 1397–1410. Y. Q. Chen and B. M. Vinagre, A new IIR-type digital fractional order differentiator, Signal Process. 83 (2003), 2359–2365. L. O. Chua, Memristor – The missing circuit element, IEEE Trans. Circuit Theory 18 (1971), 507–519. S. Das, Functional Fractional Calculus, Springer, Berlin, 2011. 
S. Das, I. Pan, S. Das and A. Gupta, A novel fractional order fuzzy PID controller and its optimal time domain tuning based on integral performance indices, Eng. Appl. Artif. Intell. 25 (2012), no. 2, 430–442. J. D’Errico, fminsearchbnd, fminsearchcon, MATLAB Central File ID: #8277. K. Diethelm, The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type, Springer, New York, 2010. L. Dorčák, Numerical models for simulation the fractional-order control systems, Technical report UEF SAV, The Academy of Sciences Institute of Experimental Physics, Kosice, 1994. J. M. Edmunds, Control system design and analysis using closed-loop Nyquist and Bode arrays, Internat. J. Control 30 (1979), no. 5, 773–802. W. R. Evans, Graphical analysis of control systems, Transactions of AIEE 67 (1948), 547–551. M. E. Fouda and A. G. Radwan, On the fractional-order memristor model, J. Fract. Calc. Appl. 4 (2013), no. 1, 1–7. D. J. Hawkins, Pseudodiagonalisation and the inverse Nyquist array method, Proc. IEE Part D 119 (1972), 337–342. D. Henrion, A review of the global optimization toolbox for Maple, preprint (2006). R. Hilfer, Applications of Fractional Calculus in Physics, World Scientific, Singapore, 2000. L. Huang, Linear Algebra in Systems and Control Theory (in Chinese), Science Press, Beijing, 1984.

DOI 10.1515/9783110497977-014

366 | Bibliography [25] Y. S. Hung and A. G. J. MacFarlane, Multivariable Feedback: A Quasi-Classical Approach, Lecture Notes in Control and Inform. Sci. 40, Springer, New York, 1982. [26] I. S. Jesus, J. A. Tenreiro Machado and R. S. Barbosa, Fractional dynamics and control of distributed parameter systems, Comput. Math. Appl. 59 (2010), no. 5, 1687–1694. [27] Y. S. Jin, Y. Q. Chen and D. Y. Xue, Time-constant robust analysis of a fractional order [proportional derivative] controller, IET Control Theory Appl. 5 (2011), no. 1, 164–172. [28] C. P. Li and F. H. Zeng, Numerical Methods for Fractional Calculus, CRC Press, Boca Raton, 2015. [29] Z. Li, L. Liu, S. Dehghan, Y. Q. Chen and D. Y. Xue, A review and evaluation of numerical tools for fractional calculus and fractional order control, Internat. J. Control (2016), DOI 10.1080/00207179.2015.1124290. [30] L. Liu, F. Pan and D. Y. Xue, Variable-order fuzzy fractional PID controller, ISA Trans. 55 (2015), 227–233. [31] C. H. Lubich, Discretized fractional calculus, SIAM J. Math. Anal. 17 (1986), no. 3, 704–719. [32] Y. Luo and Y. Q. Chen, Fractional order [proportional derivative] controller for a class of fractional order systems, Automatica J. IFAC 45 (2009), no. 10, 2446–2450. [33] Y. Luo and Y. Q. Chen, Fractional-Order Motion Control, John Wiley & Sons, London, 2012. [34] A. G. J. MacFarlane and B. Kouvaritakis, A design technique for linear multivariable feedback systems, Internat. J. Control 25 (1977), 837–874. [35] R. L. Magin, Fractional Calculus in Bioengineering, Begell House Publishers, Redding, 2006. [36] S. Manabe, The non-integer integral and its application to control systems (in Japanese), Jpn. J. Inst. Electr. Eng. 80 (1960), 589–597. [37] D. Matignon, Generalized fractional differential and difference equations: Stability properties and modelling issues, in: Proceedings of the Mathematical Theory of Networks and Systems (Padova 1998), Padova Il Poligrafo, Padova (1998), 503–506. [38] K. 
Matsuda and H. Fujii, H∞ -optimized wave-absorbing control: Analytical and experimental results, J. Guidence Control Dyn. 16 (1993), no. 6, 1146–1153. [39] D. Q. Mayne, Sequential design of linear multivariable systems, Proc. IEE Part D 126 (1979), no. 6, 568–572. [40] L. Meng and D. Y. Xue, A new approximation algorithm of fractional order system models based optimization, J. Dyn. Syst. Meas. Control 134 (2012), no. 4, Article ID 044504. [41] K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley, New York, 1993. [42] C. A. Monje, Y. Q. Chen, B. M. Vinagre, D. Y. Xue and V. Feliu, Fractional-Order Systems and Controls – Fundamentals and Applications, Springer, London, 2010. [43] C. A. Monje, B. M. Vinagre, V. Feliu and Y. Q. Chen, Tuning and auto-tuning of fractional-order controllers for industry applications, Control Eng. Practice. 16 (2008), no. 7, 798–812. [44] J. A. Nelder and R. Mead, A simplex method for function minimization, Comput. J. 7 (1965), 308–313. [45] H. Nyquist, Regeneration theory, Bell Syst. Technol. J. 11 (1932), 126–147. [46] A. O’Dwyer, Handbook of PI and PID Controller Tuning Rules, Imperial College Press, London, 2003. [47] K. B. Oldham and J. Spanier, The Fractional Calculus – Theory and Applications of Differentiation and Integration to Arbitary Order, Academic Press, San Diego, 1974. [48] A. Oustaloup, La Commande CRONE, Hermès, Paris, 1991. [49] A. Oustaloup, La Dérivation non Entière : Théorie, Synthèse et Applications, Hermès, Paris, 1995. [50] A. Oustaloup, Diversity and Non-integer Differentiation for System Dynamics, Wiley, London, 2014.


[51] A. Oustaloup, F. Levron, F. Nanot and B. Mathieu, Frequency-band complex noninteger differentiator: Characterization and synthesis, IEEE Trans. Circuits Syst. I. Fundam. Theory Appl. 47 (2000), no. 1, 25–40.
[52] I. Petráš, Fractional-Order Nonlinear Systems – Modelling, Analysis and Simulation, Higher Education Press, Beijing, 2011.
[53] I. Petráš, I. Podlubny and P. O’Leary, Analogue realization of fractional order controllers, TU Košice: Fakulta BERG (2002).
[54] I. Podlubny, Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications, Academic Press, San Diego, 1999.
[55] I. Podlubny, Fractional-order systems and PIλDμ-controllers, IEEE Trans. Automat. Control 44 (1999), no. 1, 208–214.
[56] I. Podlubny, Matrix approach to discrete fractional calculus, Fract. Calc. Appl. Anal. 3 (2000), no. 4, 359–386.
[57] I. Podlubny, Geometric and physical interpretations of fractional integration and differentiation, Fract. Calc. Appl. Anal. 5 (2002), no. 4, 367–386.
[58] I. Podlubny, Mittag–Leffler function, MATLAB Central File ID: #3738.
[59] H. H. Rosenbrock, Computer-Aided Control System Design, Academic Press, New York, 1974.
[60] J. Sabatier, C. Farges and J.-L. Trigeassou, Fractional systems state space description: Some wrong ideas and proposed solutions, J. Vib. Control 20 (2014), no. 7, 1076–1084.
[61] T. O. Salim, Some properties relating to the generalized Mittag–Leffler function, Adv. Appl. Math. Anal. 4 (2009), no. 1, 21–30.
[62] H. J. Seybold and R. Hilfer, Numerical results for the generalized Mittag–Leffler function, Fract. Calc. Appl. Anal. 8 (2005), no. 2, 127–139.
[63] A. K. Shukla and J. C. Prajapati, On a generalization of Mittag–Leffler function and its properties, J. Math. Anal. Appl. 336 (2007), 797–811.
[64] D. B. Strukov, G. S. Snider, D. R. Stewart and R. S. Williams, The missing memristor found, Nature 453 (2008), 80–83.
[65] A. Tepljakov, Fractional-order calculus based identification and control of linear dynamic systems, Master’s thesis, Tallinn University of Technology, 2011.
[66] V. V. Uchaikin, Fractional Derivatives for Physicists and Engineers – Volume I: Background and Theory, Higher Education Press, Beijing, 2013.
[67] V. V. Uchaikin, Fractional Derivatives for Physicists and Engineers – Volume II: Applications, Higher Education Press, Beijing, 2013.
[68] D. Valério, Ninteger v. 2.3 Fractional Control Toolbox for MATLAB, Technical Report, Universidade Técnica de Lisboa, 2006.
[69] J. Valsa, Numerical inversion of Laplace transforms in MATLAB, MATLAB Central File ID: #32824.
[70] J. Valsa and L. Brančík, Approximate formulae for numerical inversion of Laplace transforms, Int. J. Numer. Model. 11 (1998), no. 3, 153–166.
[71] F. S. Wang, W. S. Juang and C. T. Chan, Optimal tuning of PID controllers for single and cascade control loops, Chem. Eng. Commun. 132 (1995), 15–34.
[72] B. J. West, Fractional Calculus View of Complexity – Tomorrow’s Science, CRC Press, Boca Raton, 2016.
[73] D. Y. Xue and L. Bai, Numerical algorithms for Caputo fractional-order differential equations, Internat. J. Control (2016), DOI 10.1080/00207179.2016.1158419.
[74] D. Y. Xue and Y. Q. Chen, Suboptimum H2 pseudo-rational approximations to fractional-order linear time invariant systems, in: Advances in Fractional Calculus – Theoretical Developments and Applications in Physics and Engineering, Springer, Dordrecht (2007), 61–75.

[75] D. Y. Xue and Y. Q. Chen, OptimFOPID: A MATLAB interface for optimum fractional-order PID controller design for linear fractional-order plants, preprint (2012); Proceedings of Fractional Derivatives and Its Applications (Nanjing 2012).
[76] D. Y. Xue and Y. Q. Chen, Modeling, Analysis and Design of Control Systems in MATLAB and Simulink, World Scientific, Singapore, 2014.
[77] D. Y. Xue and Y. Q. Chen, Scientific Computing with MATLAB, 2nd ed., CRC Press, Boca Raton, 2016.
[78] D. Y. Xue, C. N. Zhao and Y. Q. Chen, A modified approximation method of fractional order system, in: Proceedings of IEEE Conference on Mechatronics and Automation (Luoyang 2006), IEEE Press, Piscataway (2006), 1043–1048.
[79] C. N. Zhao and D. Y. Xue, Closed-form solutions to fractional-order linear differential equations, Front. Electr. Electron. Eng. China 3 (2008), no. 2, 214–217.
[80] The MathWorks Inc., Robust Control Toolbox User’s Manual, 2005.

Index
accumulative error, 25, 128, 129
actuator saturation, 300, 301, 322
Adams–Bashforth–Moulton algorithm, 243–246
algebraic loop, 292
anonymous function, 11, 41, 42, 55, 135, 139, 247, 260
arbitrary input, 211, 215–216, 360
augmented FOSS, 227, 360
autonomous system, 232
auxiliary function, 95, 122, 125, 127, 283
Bagley–Torvik equation, 122, 250
base order, 106, 107, 109, 111, 195–199, 217, 218, 221, 223, 224, 226–227, 238, 249–251
benchmark problem, 8, 363–364
beta function, 9, 15–18, 353
binomial coefficients, 53–56, 62
binomial expression, 53, 54, 70
Bode diagram, 141, 143, 144, 146, 148–151, 156–158, 163, 167, 178, 201, 202, 210, 296, 345, 346, 359, 360
Bode magnitude plot, 163, 295, 343–345, 348, 350
Caputo definition, 52, 86–94, 222
Caputo differential equation, 99, 121–130, 241, 244, 263, 282–292
Carlson’s method, 145–146, 358
Cauchy’s integral formula, 2, 52–53, 56, 61, 65
chaotic oscillator, 240, 254
characteristic equation, 137, 196, 198, 204, 217, 219, 220
Charef approximation, 141, 168–178, 358
Chua system, 240, 254, 255, 289, 361
closed-form solution, 99, 113–117, 119–122, 211, 358
commensurate-order, 99, 106–107, 109–112, 180, 190, 193–199, 215, 217–221, 230, 249
commutative law, 95
complementary error function, 10, 11, 23
complementary sensitivity function, 303, 306
complex conjugate, 219, 220
confluent hypergeometric function, 21
constrained optimisation, 9, 40–44, 46, 47, 174, 302, 357

constraint, 41–44, 250, 303, 305, 309, 310, 312, 358, 361
continued fraction, 141–149
Control System Toolbox, 6, 7, 144, 182, 203, 211, 216, 217, 225, 227, 270, 359
controllability, 221, 231, 235, 236
controllable staircase, 235
conventional PID controller, 293–295, 316
convergent field, 10, 20, 21, 30
convolution, 48, 101, 233
corrector equation, 245, 247, 258, 261–262
coupling, 214, 329
critical gain, 218–220
CRONE Toolbox, 5
crossover frequency, 295, 303, 306
Dawson function, 9, 18–20, 353
decision variable, 40, 41, 43, 44, 46, 174, 299, 311
decoupling, 329, 334
delay constant, 179, 180, 182, 183, 189, 208, 221
descriptor matrix, 222, 223
diagonal dominant, 7, 180, 201, 205–210, 214, 215, 329, 330, 333–335, 338, 343
Dirac function, 68, 233
disturbance rejection, 303
double-precision, 55, 82, 84, 90, 363
dynamic compensation, 329, 333, 335
equivalent initial condition, 99, 124–127, 244, 358
error function, 9–11, 298, 341
error tolerance, 34, 63, 135, 168, 247, 251, 259, 260
Euler constant γ, 15
exponential auxiliary function, 124
exponential function, 4, 10, 23, 67, 86, 91
extended beta function, 17
extended fractional state space, 221, 238–241, 244, 252–256
extraneous root, 137, 196, 218
factorial, 12, 21
feedback connection, 185, 188, 229
FFT-based algorithm, 74–76, 82
field, 166, 180, 181, 183, 184, 222, 228, 359

FO-[PD] controller, 311–312
FOFOPDT model, 310, 358
FOIPDT model, 308, 309, 358
FOMCON Toolbox, 5
FOPDT model, 295, 296, 304, 306
forward path, 133, 188, 204, 205, 217, 304, 317, 318
FOSS object, 7, 197, 215, 222–241, 340, 359, 360, 364
FOTF block library, 266, 269
FOTF matrix, 179–181, 184–186, 188–190, 193, 194, 200, 207, 226, 227, 269–272, 274, 359
FOTF object, 7, 180–220, 228–230, 232, 236–239, 243, 265, 269, 272, 359
FOTF Toolbox, 5, 6, 100, 179, 180, 190, 317
Fourier coefficients, 74, 75
fractional-order PID, 1, 7, 208, 243, 267, 293–295, 302, 304, 305, 309–313, 317, 322, 323, 334, 337, 361
frequency range, 154, 159, 160, 166, 173, 175, 209, 273, 289, 331
frequency response fitting, 141, 149, 159, 166–170, 172, 175
fuzzy fractional-order PID controller, 322–328
fuzzy inference system, 323, 327
gain margin, 202, 360
gain variation, 7, 308, 337, 347, 351, 352
Gamma function, 11–15, 26, 55, 353
Gaussian hypergeometric function, 21
generalised state space model, 222
generating function, 71–74, 76, 78, 82, 357
genetic algorithm, 44, 45, 302, 315
geometric interpretation, 2, 95–97
Gershgorin band, 180, 201, 205–209, 333–335, 359
Gershgorin Theorem, 205
global optimisation, 40, 44–47, 306, 314
Global Optimization Toolbox, 44
global variable, 314, 315, 317
Grünwald–Letnikov definition, 51, 53–66, 70, 71, 86, 88–91, 94, 144
H∞ norm, 180, 195, 200, 221, 231, 360
H2 norm, 180, 195, 200, 221, 231, 298, 360
Hankel matrix, 62, 118
heat diffusion, 4
Heaviside function, 68, 81, 104, 152, 153

high-precision, 5, 6, 34, 52, 70–74, 82, 85, 86, 88, 89, 91–93, 99, 103, 113, 119–121, 124–130, 132, 212, 244, 258, 263–265, 363
hypergeometric function, 9, 20–23, 27, 87
IAE criterion, 298–300
implicit Caputo equation, 243, 258, 263–265, 291–292
improper system, 222, 227, 230
impulse response, 109, 110, 112, 116, 211–213, 359, 360
incomplete beta function, 18
incomplete Gamma function, 23
individual channel design, 214, 329, 333–337
infinite integral, 11, 14, 138
infinite series, 10, 19, 23, 24, 26, 27, 34, 76, 130, 133, 165
infinite-dimensional, 222
inner product, 74
integral-type criterion, 295, 297–300, 317
integration by parts, 12
integrator chain, 277, 286, 287, 291, 292
interested frequency range, 155, 166, 175, 279
interpolation, 12, 89–91, 125, 140
inverse Laplace transform, 49, 50, 99, 170
inverse Nyquist array, 180, 205, 206, 208, 329
irrational system, 6, 99, 134–138, 141, 170, 176, 178, 203
ISE criterion, 298–300, 313, 319
iso-damping, 303
ITAE criterion, 295, 298–300, 313, 318, 319
iterative process, 259, 261
Jacobian determinant, 16
Jordan block, 38
Jordan decomposition, 38
key signal, 276, 277, 280, 286–289, 291, 292
Kronecker product, 35, 185, 187, 341
Kronecker sum, 35, 185, 187
Kummer hypergeometric function, 21
Laplace transform, 9, 47–50, 99–103, 106–108, 111, 146, 356
least squares, 342
Legendre formula, 13
Leibniz property, 94
linear Caputo equation, 121, 122, 243, 288


linear multi-step algorithm, 73, 74
linear time-invariant, 100
low-pass filter, 267
LTI, 182, 223, 270
masked block, 266, 269, 271
mathematical induction method, 31–33
matrix algorithm, 61–63, 85–86, 117–119, 263–265, 357
matrix function, 9, 38, 39
matrix inversion, 37, 194
Matsuda–Fujii filter, 147–149, 151, 167, 169, 266, 358
membership function, 327
memory length, 63, 64
memristor, 4
MFD Toolbox, 205–208, 343
minimum realisation, 166, 228, 230, 360
Mittag–Leffler function, 9, 23–34, 234, 241, 284, 357
model augmentation, 226–227, 360
model reduction, 161–165, 296
modified Oustaloup filter, 155–157, 159, 160, 266
multi-term, 244, 249–251, 258, 358
multinomial coefficients, 106
multivariable system, 5, 158, 179–181, 200, 204–211, 221, 329–352, 359
multivariate beta function, 18
MuPAD, 19, 142
negative feedback, 188, 190
Nichols chart, 201, 202, 360
nilpotent matrix, 38
Ninteger Toolbox, 5
noise rejection, 303
nonlinear Caputo equation, 243–244, 247, 249, 250, 258, 262, 263, 284, 287, 362
nonlinear programming, 42, 43
nonzero initial condition, 6, 51, 90, 99, 102, 121–124, 222, 232, 243, 252, 253, 282, 363
norm, 200, 248, 262
numerical integral, 10, 11, 14, 17, 52, 65, 103, 138, 139, 200
numerical inverse Laplace transform, 131–134, 170, 178, 317
numerical Laplace transform, 6, 99, 178
Nyquist array, 201, 206–210, 330, 331


Nyquist plot, 201–203, 205–207, 332–335, 338, 339, 359–361
Nyquist theorem, 203
objective function, 40–46, 161, 173, 174, 293, 299–301, 305, 306, 309, 310, 312–315, 317
observability, 221, 231, 235, 236
optimal Charef filter, 141, 174–176, 358
optimal fractional-order PID, 313, 315, 320, 322, 359, 362
OptimFOPID interface, 320–322
OptimPID interface, 293, 300–302
Oustaloup filter, 141, 149–157, 159, 160, 168, 266, 283, 284, 289, 291, 292, 316, 319, 358
overload function, 179, 180, 185, 187–189, 194, 198, 200, 201, 215–217, 359
Padé approximation, 161, 322
parallel connection, 185, 187, 188, 208
parameter optimisation design, 340–352
partial fraction expansion, 99, 107, 109–111, 198, 199, 360
particle swarm optimisation, 9, 44, 45, 302, 315
pattern search, 44, 45, 302, 315
PECE algorithm, 243
phase margin, 202, 203, 303, 306, 360
phase plane, 255
physical interpretation, 2, 95, 97
Pochhammer symbol, 20, 21, 30
positive feedback, 188, 190
power function, 4, 67, 81, 192
power-law, 4
pre-compensation matrix, 208, 209, 329, 330, 332–335
predictor equation, 245, 246, 258–261
predictor–corrector algorithm, 243, 245, 246, 258
primary sheet, 137
pseudo-polynomial, 35, 100, 114, 135, 136, 158, 179, 182, 185, 186, 196, 239
pseudo-state, 222, 234
pseudodiagonalisation, 7, 329–340, 359
QFT controller, 166, 167
recursive algorithm, 71, 76–82, 252
return difference matrix, 204, 205

Riemann sheet, 137, 196, 198
Riemann–Liouville definition, 66–86, 88, 89, 91, 94–96, 102, 263, 284, 286, 287
rising factorial, 21
robustness, 7, 337, 338, 347, 348
root locus, 180, 217–221, 237, 238
S-function, 284, 324, 325, 359, 361
sensitivity function, 303
sequential derivative, 62, 94, 95
series connection, 185, 207, 227, 228, 277
servo control, 293, 297–300
shadow, 2, 96, 97
short-memory effect, 62–66, 253, 254, 357
simulated annealing algorithm, 44
single-term, 244, 249, 252, 259, 261, 358, 362
single-variable system, 5, 180, 194, 201, 203, 205, 210, 238, 329
singular value plot, 180, 201, 210–211, 360
special function, 6, 9–34, 353, 357
spline interpolation, 93, 139
stability, 106, 134, 137, 180, 195–199, 203, 204, 217, 218, 221, 231, 232, 294
stability boundary, 195, 196, 198, 217, 218
stable region, 195–197
staircase form, 235–236
state augmentation, 226
state transition matrix, 38, 221, 231–234
static gain, 175, 223, 227, 228
steady-state error, 267, 295, 303, 311
step response, 138, 211–215, 220, 231, 273, 275, 297, 299, 307, 308, 313, 316, 318–321

Stieltjes integral, 95
symbolic computation, 14, 24, 48, 73
target controller, 340, 343–348, 350
Taylor auxiliary function, 121–124, 258, 282, 284
Taylor series expansion, 10, 67, 73, 74, 76, 78, 126
terminate time, 65, 212, 247, 254, 255, 260, 264, 275, 279, 289, 300, 313, 320, 321
time delay, 207, 222, 293, 325, 329, 349–352, 360
transformation matrix, 37, 235, 236
transmission line, 3
truncation algorithm, 34, 104, 112
tuning knob, 293, 294
unconstrained optimisation, 9, 40–41, 44
unified fractional-order operator, 51, 54
unity negative feedback, 190, 193, 203, 204, 213, 217
universe, 96, 327
unstable poles, 203, 204
variable substitution, 14, 19, 43
variable-order, 293, 322, 325, 327, 328, 361
zero initial condition, 4, 51, 82, 99, 100, 111, 113–119, 121, 122, 125, 127, 130, 221, 252, 258, 263, 275–284