Sliding-Mode Control and Variable-Structure Systems: The State of the Art (Studies in Systems, Decision and Control, 490) [1st ed. 2023] 3031370880, 9783031370885

This book reflects the latest developments in sliding-mode control (SMC) and variable-structure systems (VSS), comprising contributions organized into six parts: new properties of SMC algorithms, SMC observers, discretization issues, adaptation methods, chattering analysis and adjustment, and applications of SMC.

English Pages 613 [597] Year 2023

Table of contents:
Preface
Contents
Contributors
List of Figures
List of Tables
New Properties of SMC Algorithms
Generalized Super-Twisting with Robustness to Arbitrary Disturbances: Disturbance-Tailored Super-Twisting (DTST)
1 Introduction
2 Motivation
2.1 Speed Tracking in a Simple Mechanical System
2.2 Limitations of STA
2.3 Limitations of Existing Generalized Second-Order Algorithms
3 Disturbance-Tailored Super-Twisting: The Basics
3.1 DTST Equations
3.2 DTST Construction and Tuning
3.3 DTST Construction Example
4 DTST: The Details
4.1 Initial Perturbation Bound Requirements
4.2 DTST Function Admissibility
4.3 Construction Properties
4.4 Finiteness of the Function ν2 at 0
5 Robust Stability and Finite-Time Convergence
5.1 Global Robust Stability
5.2 Finite-Time Convergence
6 Conclusions and Future Work
References
Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems
1 Introduction
2 Problem Formulation
3 Prescribed-Time Controller Design and Stability Analysis
3.1 Prescribed-Time Controller Design
3.2 Prescribed-Time Stability Analysis
4 Prescribed-Time Inverse Optimality
5 A Simulation Example
6 Concluding Remarks
References
Designing Controllers with Predefined Convergence-Time Bound Using Bounded Time-Varying Gains
1 Introduction
2 Motivating Example
3 Preliminaries and Problem Statement
3.1 Fixed-Time Stability and Settling-Time Function
3.2 Problem Statement
4 First-Order Controllers
4.1 Prescribed-Time Controllers
4.2 Predefined-Time Controllers with Bounded Time-Varying Gains
4.3 On Reducing the Energy with Time-Varying Gains
4.4 Redesigning Fixed-Time Stabilizing Controllers Using Bounded Time-Varying Gains: The Second-Order Case
5 Main Result: Arbitrary-Order Predefined-Time Controller
5.1 Uniform Lyapunov Stability of Predefined-Time Controllers
6 Conclusion
References
SMC Observers
Bi-homogeneous Differentiators
1 Introduction and Overview
2 Preliminaries
3 The Arbitrary Order Differentiator Homogeneous in the Bi-limit
3.1 The First-Order Differentiator
3.2 The Arbitrary Order Differentiator
3.3 Convergence in Absence of Noise
3.4 Effect of Noise and the Perturbation δ(t)=-f0(n)(t)
4 Lyapunov Function Used for the Stability Analysis
4.1 Convergence Time Estimation
4.2 Gain Calculation
4.3 Acceleration of the Convergence and Scaling of the Lipschitz Constant Δ
5 Example: The Bi-homogeneous Second-Order Differentiator
6 Conclusions
References
On Finite- and Fixed-Time State Estimation for Uncertain Linear Systems
1 Introduction
1.1 Notation
2 Problem Statement
3 Homogeneous Observers
3.1 Nonlinear Output Injection Design
3.2 Design of the Observer Gains
4 A Fixed-Time Sliding-Mode Observer
4.1 Nonlinear Output Injection Design
5 Unknown Input Identification
6 Simulation Results
6.1 Homogeneous Observers
6.2 A Fixed-Time Sliding-Mode Observer
6.3 Unknown Input Identification
7 Concluding Remarks
References
Robust State Estimation for Linear Time-Varying Systems Using High-Order Sliding-Modes Observers
1 Introduction
2 Problem Statement
3 Preliminaries
4 Cascade Observer for Linear Time-Varying Strongly Observable Systems
4.1 Example
5 Non-cascade Observer for Linear Time-Varying Strongly Observable Systems
5.1 A Normal Form for Linear Time-Varying Systems with Unknown Inputs
5.2 Observer Design
5.3 Example
6 Conclusions
References
Discretization Issues
Effect of Euler Explicit and Implicit Time Discretizations on Variable-Structure Differentiators
1 Introduction
2 Continuous-Time AO-STD
3 Discretization Schemes
3.1 Explicit Discretization
3.2 Implicit Discretization
4 Open-Loop Numerical Simulations
5 Practical Experiments
5.1 Mathematical Modeling of the RIPS
5.2 Design of the Controller for the RIPS
5.3 Results of the Experiments
6 Conclusions
References
Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit
1 Introduction
2 Differentiator Design via State-Dependent Eigenvalue Placement
2.1 Arbitrary-Order Differentiator Homogeneous in the Bi-Limit
3 Sliding-Mode Inspired Numerical Differentiation
3.1 Semi-implicit Mapped Differentiators
3.2 Stable Explicit Euler Discretized Differentiator
4 Stability Proof
4.1 The Unperturbed Case
4.2 The Perturbed Case
5 Simulation Study and Conclusion
5.1 Numerical Example
5.2 Conclusion
References
Lyapunov-Based Consistent Discretization of Quasi-continuous High-Order Sliding Modes
1 Introduction
2 Problem Statement and Preliminaries
2.1 Homogeneity
2.2 Problem Statement
3 Projected Dynamics
4 Discretization Scheme
5 Examples
6 Conclusion
References
Low-Chattering Discretization of Sliding Modes
1 Introduction
2 Discontinuous Dynamic Systems
3 Discretization of Filippov Dynamic Systems
3.1 Example: Alternative Discretization of Relay Control
4 Homogeneous Output Regulation
4.1 Differentiation and Filtering Based on SMs
4.2 Output Feedback Stabilization in Continuous Time
4.3 Output Feedback Stabilization Using Discrete Differentiators
5 Low-Chattering Discretization of HOSMs
5.1 Low-Chattering Discretization of Differentiators
5.2 Low-Chattering Discretization of Higher-Order SMC
5.3 Proof Sketch
6 Discretization Examples for the Sliding Orders 3, 4
6.1 Discretization of the 3-SM Car Control
6.2 Integrator Chain Control, r=4
7 Discretization of the Twisting Controller
7.1 Case Study: Targeting
7.2 Simulation of Targeting Control
8 Conclusions
References
Adaptation Methods
Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques with Applications: A Survey
1 Introduction
2 Adaptive Sliding-Mode Control (1-ASMC) Techniques
3 Applications of Adaptive Sliding-Mode Control (1-ASMC) Techniques
4 Adaptive Second-Order Sliding-Mode Control (2-ASMC) Techniques
4.1 Adaptive 2-SMC Super-Twisting Control
4.2 2-SMC Twisting Control
4.3 Other Adaptive 2-SMC Algorithms
5 Applications of Adaptive Second-Order Sliding-Mode Control (2-ASMC) Techniques
6 Higher-Order Adaptive Sliding-Mode Control (r-ASMC) Techniques
7 Applications of Adaptive Higher-Order Sliding-Mode Control (r-ASMC) Techniques
8 Conclusions
References
Unit Vector Control with Prescribed Performance via Monitoring and Barrier Functions
1 Introduction
2 Problem Formulation
3 Basic Techniques
3.1 Norm Observer
3.2 Monitoring Function—Reaching Phase
3.3 Barrier Function—Residual Phase
3.4 Adaptive Unit Vector Control
4 Stability Analysis
5 Numerical Examples
5.1 Academic Example
5.2 Application Example
6 Conclusion
References
Chattering Analysis and Adjustment
Chattering in Mechanical Systems Under Sliding-Mode Control
1 Introduction
2 The Chattering Effect and Metrics for Chattering Evaluation
3 Analysis and Comparison of Chattering
4 Analysis and Comparison of External Signals Propagation
5 The Chattering Effect and Metrics for Chattering Evaluation. Dry Friction
6 Conclusion
References
Describing Function-Based Analysis and Design of Approximated Sliding-Mode Controllers with Reduced Chattering
1 Introduction
2 Problem Statement
3 Describing Function (DF) of Approximated SMC
3.1 DF of Saturation Function
4 Relative Degree One Systems
4.1 First-Order Sliding-Mode Control
4.2 Super-Twisting Controller
4.3 Quantitative Analysis
5 Relative Degree Two Systems
5.1 Nested Algorithm
5.2 Twisting Algorithm
5.3 ST Extension for Relative Degree Two
5.4 Quantitative Analysis
6 Conclusions
References
Applications of SMC
Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit via Continuous Sliding-Mode Algorithms
1 Introduction
2 Problem Statement
3 Preliminaries
4 Intravenous Continuous Controllers Based on High-Order Sliding Modes
4.1 Super-Twisting Algorithm
4.2 Continuous Twisting Algorithm
4.3 Continuous Singular Terminal Algorithm
4.4 Continuous Nonsingular Terminal Algorithm
4.5 Output-Feedback Continuous Twisting Algorithm
4.6 Observer Design
5 In Silico Simulation Results
5.1 Super-Twisting Algorithm
5.2 Continuous Twisting Algorithm
5.3 Continuous Singular Terminal Algorithm
5.4 Continuous Nonsingular Terminal Algorithm
5.5 Output-Feedback Continuous-Twisting Algorithm
5.6 Standard Open-Loop Protocol at ICU
5.7 Discussion
6 Conclusions
References
A Reduced-Order Model-Based Design of Event-Triggered Sliding-Mode Control
1 Introduction
2 System Description
3 Problem Statement
4 Reduced-Order System via Aggregation Technique
5 Reduced-Order Event-Triggered SMC
5.1 Design of Sliding Manifold
5.2 Design of Event-Triggered SMC
5.3 Main Result
6 Simulation Results
7 Conclusion
References
A Robust Approach for Fault Diagnosis in Railway Suspension Systems
1 Introduction
1.1 Predictive Versus Preventive—Benefits and Requirements
1.2 Problem Statement in Terms of Predictive Maintenance
1.3 Related Work
1.4 Outline of the Book Chapter
2 Modeling of the Railway Vehicle Suspension System
2.1 Overview of Model Variations
2.2 The Vertical Vehicle Suspension System
3 Parameter Identification
3.1 Discontinuous Gradient Algorithm
4 Test Cases
4.1 Fault-Free Case
4.2 Faulty Case
4.3 Outlook
References
Flight Evaluation of a Sliding-Mode Fault-Tolerant Control Scheme
1 Introduction
2 Fault-Tolerant Control Flight Tests
2.1 FTC Hardware Implementation: Civil and Fighter Aircraft
2.2 FTC Hardware Implementation: Multirotor UAV
3 LPV Sliding-Mode Controller Design
3.1 Problem Formulation
3.2 Definition of the Switching Function
3.3 Control Law Description
4 Flight Validation Using a Quadrotor UAV
4.1 Implementation
5 HIL and Flight Validation Using MuPAL-α Research Aircraft
5.1 Controller Design
5.2 HIL Test on the MuPAL-α Platform
5.3 Piloted Flight Tests
6 Conclusion
References
Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors
1 Introduction
2 Problem Statements
3 Fault Diagnosis Strategy
3.1 Reduced Finite-Time Sliding-Mode Observer
3.2 Fault Detection
3.3 Fault Isolation
3.4 Fault Identification
3.5 Fault Diagnosis Implementation
4 Active Fault Accommodation Control Design
4.1 Baseline Robust-Nominal Control Design
4.2 Active Fault Accommodation Implementation
5 Fault-Tolerant Control for a Rotor Failure
5.1 Full Finite-Time Sliding-Mode Observer
5.2 Control Strategy
5.3 Virtual Control Disturbance Term
5.4 Yaw Dynamics
5.5 Fault-Tolerant Control Implementation
6 Conclusions
References
Second-Order Sliding-Mode Leader-Follower Consensus for Networked Uncertain Diffusion PDEs with Spatially Varying Diffusivity
1 Introduction
1.1 Chapter Organization
2 Preliminaries and Notations
2.1 Mathematical Notation and Norm Properties
2.2 Preliminaries on Sobolev Spaces and Integral Norms
2.3 Algebraic Graph Theory
3 Leader-Following Consensus for Diffusion PDEs
3.1 Problem Formulation
3.2 Control Objective and Operating Assumptions
4 Main Result
4.1 Control Synthesis
4.2 Convergence Analysis
4.3 Proof of Theorem 1
5 Simulation Results
6 Conclusion
References
Index


Studies in Systems, Decision and Control 490

Tiago Roux Oliveira · Leonid Fridman · Liu Hsu, Editors

Sliding-Mode Control and Variable-Structure Systems: The State of the Art

Studies in Systems, Decision and Control Volume 490

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

The series “Studies in Systems, Decision and Control” (SSDC) covers both new developments and advances, as well as the state of the art, in the various areas of broadly perceived systems, decision making and control–quickly, up to date and with a high quality. The intent is to cover the theory, applications, and perspectives on the state of the art and future developments relevant to systems, decision making, control, complex processes and related areas, as embedded in the fields of engineering, computer science, physics, economics, social and life sciences, as well as the paradigms and methodologies behind them. The series contains monographs, textbooks, lecture notes and edited volumes in systems, decision making and control spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

Tiago Roux Oliveira · Leonid Fridman · Liu Hsu Editors

Sliding-Mode Control and Variable-Structure Systems: The State of the Art

Editors Tiago Roux Oliveira Department of Electronics and Telecommunication Engineering State University of Rio de Janeiro (UERJ) Rio de Janeiro, Brazil

Leonid Fridman Department of Control Engineering and Robotics National Autonomous University of Mexico (UNAM) Mexico City, Mexico

Liu Hsu Department of Electrical Engineering Federal University of Rio de Janeiro (COPPE/UFRJ) Rio de Janeiro, Brazil

ISSN 2198-4182 ISSN 2198-4190 (electronic) Studies in Systems, Decision and Control ISBN 978-3-031-37088-5 ISBN 978-3-031-37089-2 (eBook) https://doi.org/10.1007/978-3-031-37089-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.

Preface

Variable-structure systems (VSS) and their main mode of operation, sliding-mode control (SMC), are recognized as some of the most efficient tools for dealing with uncertain systems, owing to their robustness and even insensitivity to perturbations. The main advantages of the VSS/SMC methodology are:

• theoretical insensitivity with respect to matched perturbations and uncertainties;
• finite-time, fixed-time, or predefined-settling-time convergence, i.e., convergence with an a priori upper bound on the settling time;
• reduced order of the sliding-mode dynamics, or even the ability to collapse the system dynamics;
• the ability of sliding-mode observers to ensure theoretically exact estimation of the system states.

The discontinuous nature of SMC is also the source of its main drawbacks:

• the chattering phenomenon, which appears due to parasitic dynamics of actuators, sensors, and other non-idealities;
• the difficulties of discrete realizations of SMC;
• the need for an upper bound on the perturbations/uncertainties or their derivatives, which is normally unknown or overestimated, i.e., adaptation is required.

This book reflects six main directions in recent developments of VSS/SMC theory and consists of six parts:

New Properties of SMC Algorithms
SMC Observers
Discretization Issues
Adaptation Methods
Chattering Analysis and Adjustment
Applications of SMC

During the last decade, one of the main lines of development of SMC theory has been the design of new algorithms that compensate wider classes of perturbations and provide fixed-time convergence or convergence with a predefined upper bound on the settling time.
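As a generic point of reference for these advantages and drawbacks (a standard textbook illustration with generic notation, not taken from any chapter of this volume), consider the scalar system

\[
\dot{x}(t) = u(t) + d(t), \qquad |d(t)| \le D .
\]

The relay control \(u = -k\,\operatorname{sign}(x)\) with \(k > D\) gives \(\tfrac{d}{dt}|x| \le -(k-D)\), so the sliding surface \(x = 0\) is reached in a finite time \(t_r \le |x(0)|/(k-D)\) and the motion stays there for any admissible disturbance; the same discontinuity, excited by parasitic dynamics and sampling, is what produces chattering. The classical way to keep exactness with a continuous control is the super-twisting algorithm,

\[
u = -k_1\,|x|^{1/2}\operatorname{sign}(x) + v, \qquad \dot{v} = -k_2\,\operatorname{sign}(x),
\]

which, for suitable gains \(k_1, k_2\), enforces \(x = \dot{x} = 0\) in finite time whenever the disturbance has a bounded time derivative.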

The first part, New Properties of SMC Algorithms, consists of three chapters.

The first Chapter, "Generalized Super-Twisting with Robustness to Arbitrary Disturbances: Disturbance-Tailored Super-Twisting (DTST)" by Hernan Haimovich, discusses a generalization of the Super-Twisting Algorithm (STA) that allows the relay SMC to be replaced with a continuous control for wider classes of uncertainties. The STA can achieve finite-time convergence to the sliding surface while completely rejecting essentially two types of perturbations: those that vanish toward this surface and those whose total time derivative is bounded. However, a general design framework allowing for perturbations of arbitrary growth was missing; it was recently provided by the Disturbance-Tailored Super-Twisting (DTST). This chapter explains the main concepts, requirements, and properties involved in DTST.

In the second Chapter, "Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems", Wuquan Li and Miroslav Krstic consider prescribed-time stabilization and inverse optimality for strict-feedback nonlinear systems. Differently from the existing dynamic high-gain scaling designs, they propose simpler designs with new scaled quadratic Lyapunov functions. The designed controller ensures that the equilibrium at the origin of the plant is prescribed-time stable. The authors then redesign the controller and solve the prescribed-time inverse optimal stabilization problem, with an infinite gain margin. Specifically, the designed controller is not only optimal with respect to a meaningful cost functional but also globally stabilizes the closed-loop system in the prescribed time.

In the third Chapter, "Designing Controllers with Predefined Convergence-Time Bound Using Bounded Time-Varying Gains", Rodrigo Aldana-López, Richard Seeber, Hernan Haimovich, and David Gómez-Gutiérrez also consider the class of prescribed-time controllers, which steer the system's state to the origin in a time set a priori by the user, regardless of the initial condition. Such controllers have been shown to maintain prescribed-time convergence in the presence of disturbances even if the disturbance bound is unknown. However, these properties require a time-varying gain that becomes singular at the terminal time, limiting their application in scenarios with quantization or measurement noise. This chapter presents a methodology to design a broader class of controllers, called predefined-time controllers, with a predefined upper bound on the settling time. The proposed approach allows robust predefined-time controllers to be designed with time-varying gains that remain uniformly bounded. The authors also analyze the conditions for uniform Lyapunov stability under the proposed time-varying controllers.
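A minimal scalar sketch of the time-varying-gain idea behind these prescribed- and predefined-time designs (illustrative only; the chapters treat far more general perturbed and strict-feedback systems): for \(\dot{x} = u\) and a user-chosen terminal time \(T > 0\), the controller

\[
u = -\frac{c}{T-t}\,x, \qquad c > 0,
\]

yields \(x(t) = x(0)\bigl(\tfrac{T-t}{T}\bigr)^{c}\), so the state converges to zero no later than \(T\) for every initial condition. The price is a gain \(c/(T-t)\) that grows without bound as \(t \to T\), which is precisely the singularity that the bounded time-varying gains of the third chapter are designed to avoid.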

The second part of the book, SMC Observers, continues the discussion of the first part devoted to predefined- and prescribed-time convergence, reflecting recent results on sliding-mode-based differentiators and observers.

In the fourth Chapter, "Bi-homogeneous Differentiators", Jaime A. Moreno recalls that in many applications it is important to be able to estimate online a number of derivatives of a given (differentiable) signal. Some well-known algorithms for this problem include the (continuous) linear high-gain observers and the (discontinuous) Levant exact differentiators. Both are homogeneous, as are many others. A disadvantage of continuous algorithms is that they can calculate the derivatives exactly only for a very small class of (polynomial) time signals. The discontinuous Levant differentiator, in contrast, can calculate exactly and in finite time the derivatives of Lipschitz signals, a much larger class. However, it has the drawback that its convergence time increases very strongly with the initial conditions. A combination of both algorithms therefore seems advantageous, and this was proposed recently by the author in 2021. In this chapter, Professor Moreno reviews some techniques used to design differentiators and shows how the combination of two different homogeneous algorithms can be realized and that it leads to interesting properties. A novelty is the derivation of a very simple realization of the family of bi-homogeneous differentiators proposed before. The methodological framework is based on smooth Lyapunov functions, which provide a common setting for the convergence and performance analysis.

In the fifth Chapter, "On Finite- and Fixed-Time State Estimation for Uncertain Linear Systems", Héctor Ríos provides different approaches to estimate the state of some classes of linear systems with parametric uncertainties and unknown inputs. A family of homogeneous observers and fixed-time convergent sliding-mode observers is introduced to solve this problem. The finite-time and fixed-time convergence properties and the synthesis of these observers are described throughout the chapter. Moreover, an unknown-input identification approach is also introduced.

In the sixth Chapter, "Robust State Estimation for Linear Time-Varying Systems Using High-Order Sliding-Modes Observers", Jorge Dávila, Leonid Fridman, and Arie Levant present two algorithms for state estimation of linear time-varying systems affected by unknown inputs. The chapter is divided into two parts. In the first part, based on the structural property of strong observability, an observer for linear time-varying systems affected by unknown inputs is presented. The cascade structure of the algorithm provides correct state reconstruction for the class of linear time-varying systems that satisfies the strong observability property, in spite of bounded unknown inputs and system instability. The second part of the chapter presents a finite-time observer that uses structural properties of a class of linear time-varying systems with bounded unknown inputs. The observer design exploits the structural properties of the system through a linear operator, avoiding the use of a cascade structure. As a result, the proposed observer provides an exact estimate of the states after a finite transient time, even in the presence of possible instability of the system and the effects of bounded unknown inputs.
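For orientation (standard background rather than the constructions of these chapters, and with one commonly used gain choice), the first-order robust exact differentiator for a signal \(f(t)\) with \(|\ddot{f}(t)| \le L\) reads

\[
\dot{z}_0 = -1.5\,L^{1/2}\,|z_0 - f(t)|^{1/2}\operatorname{sign}\bigl(z_0 - f(t)\bigr) + z_1, \qquad
\dot{z}_1 = -1.1\,L\,\operatorname{sign}\bigl(z_0 - f(t)\bigr).
\]

In the absence of noise, \(z_0 = f(t)\) and \(z_1 = \dot{f}(t)\) hold exactly after a finite transient; the bi-homogeneous and fixed-time constructions discussed above modify the injection terms so that this transient does not grow unboundedly with the initial differentiation error.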

The third part of the book, Discretization Issues, addresses one of the hottest topics of modern SMC, discretization, which enables the realization of the best features of modern higher-order SMC (HOSMC) algorithms.

In the seventh Chapter, "Effect of Euler Explicit and Implicit Time Discretizations on Variable-Structure Differentiators", Mohammad Rasool Mojallizadeh and Bernard Brogliato deal with the time discretization of variable-structure differentiators with discontinuous (set-valued) parts. Explicit (forward) and implicit (backward) time-discretization schemes are developed for a specific type of variable-structure differentiator. The causal implementation of the implicit discretization is addressed and illustrated by flowcharts. Several properties of the implicit discretization are studied analytically, e.g., finite-time convergence, numerical chattering suppression, exactness, gain insensitivity, and accuracy. These properties are validated in the open-loop configuration using numerical simulations. Furthermore, an electro-pneumatic setup is considered as a benchmark to study the behavior of the discretization methods in closed-loop control systems in practice. The experiments confirm the results obtained from the analytical calculations and numerical simulations.

In the eighth Chapter, "Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit", Maximilian Rüdiger-Wetzlinger, Markus Reichhartinger, and Martin Horn propose a discrete-time differentiation algorithm of arbitrary order inspired by the uniform robust exact differentiator and the generalized differentiator with negative homogeneity degree. Since the well-known explicit Euler method is not suitable for discretizing algorithms with the fixed-time convergence property, a semi-implicit discretization and a stable explicit Euler discretization approach are proposed. It is proven that the proposed discrete-time algorithms are globally asymptotically stable in the unperturbed case for arbitrary order and converge to an attractive invariant set around the origin in the perturbed case.

In the ninth Chapter, "Lyapunov-Based Consistent Discretization of Quasi-continuous High-Order Sliding Modes", Tonametl Sanchez, Andrey Polyakov, and Denis Efimov suggest an explicit discretization scheme for a class of disturbed systems controlled by homogeneous quasi-continuous HOSMC and equipped with a homogeneous Lyapunov function. This Lyapunov function is used to construct a discretization scheme that preserves important features of the original continuous-time system: asymptotic stability, finite-time convergence, and the Lyapunov function itself.

In the tenth Chapter, "Low-Chattering Discretization of Sliding Modes", Avi Hanan, Adam Jbara, and Arie Levant study the mitigation of the notorious chattering effect in SMC. The new control-discretization method diminishes the chattering while preserving the system trajectories, accuracy, and insensitivity to matched disturbances. The unavoidable restrictions of low-chattering discretization methods are also discussed.
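A standard observation that motivates this part (generic notation; not a result of these chapters) can be seen on the undisturbed scalar relay example \(\dot{x} = -k\operatorname{sign}(x)\). With sampling period \(h\), the explicit Euler scheme

\[
x_{k+1} = x_k - h\,k\operatorname{sign}(x_k)
\]

never settles at zero but oscillates with an amplitude of order \(hk\) (numerical chattering), whereas the implicit scheme \(x_{k+1} = x_k - h\,k\operatorname{sign}(x_{k+1})\), understood as an inclusion, reduces to the projection rule

\[
x_{k+1} =
\begin{cases}
0, & |x_k| \le hk,\\
x_k - h\,k\operatorname{sign}(x_k), & |x_k| > hk,
\end{cases}
\]

which reaches zero exactly and stays there, suppressing the numerical chattering.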

The modern non-overestimation schemes are presented in the fourth part, Adaptation Methods.

In the eleventh Chapter, "Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques with Applications: A Survey", Yuri Shtessel, Franck Plestan, Christopher Edwards, and Arie Levant present a survey of adaptive sliding-mode and higher-order sliding-mode control techniques with applications. Control under uncertainty is one of the main topics of modern control theory. The idea of SMC/HOSMC is to drive the system trajectory to properly chosen constraints (the sliding manifold) in finite time and to keep it there for all subsequent time by means of high-frequency switching control. The main features of SMC/HOSMC are insensitivity to bounded disturbances matched by the control, high stabilization accuracy, and finite-time convergence. Therefore, SMC/HOSMC remains, probably, the main choice for handling systems with bounded uncertainties and disturbances. Adaptive HOSMC has been of great interest in the sliding-mode control community during the last 15 years due to its ability to handle perturbations with unknown bounds while mitigating chattering, provided the adaptive control gains are not overestimated. A number of application-specific results are also discussed. The literature in the area is presented in the context of continuing developments in the broad areas of the theory and application of adaptive SMC/HOSMC.

In the twelfth Chapter, "Unit Vector Control with Prescribed Performance via Monitoring and Barrier Functions", Victor Hugo Pereira Rodrigues, Liu Hsu, Tiago Roux Oliveira, and Leonid Fridman propose an adaptive unit vector control approach via output feedback for a class of multivariable nonlinear systems. The sliding-mode-based controller can deal with parametric uncertainties and (un)matched disturbances with unknown upper bounds. Fixed-time convergence of the output to a predefined neighborhood of the origin of the closed-loop system is proved with guaranteed transient performance. The novelty of the authors' result lies in combining two important adaptation tools: monitoring and barrier functions. The adaptation process is divided into two stages: an appropriate monitoring function allows for the specification of performance criteria during the transient phase, while the barrier function ultimately confines the output within a small residual set in steady state. Simulation results, including an application to an overhead crane system, illustrate the advantages of the proposed adaptive control strategy.

The fifth part, Chattering Analysis and Adjustment, is devoted to the crucial chattering aspects of SMC.

In the thirteenth Chapter, "Chattering in Mechanical Systems Under Sliding-Mode Control", Igor Boiko investigates the chattering phenomenon in mechanical systems under sliding-mode control. Chattering in sliding-mode control systems is known as an undesirable effect that degrades system performance with respect to the theoretical performance expected from the design. In this book chapter, the metrics applied to chattering evaluation, as well as the effect of chattering on system performance in position-control mechanical systems, are studied.
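The first-harmonic machinery behind such chattering analyses is classical (standard background in generic notation, summarized here for orientation rather than taken from the chapters): an ideal relay of output amplitude \(M\) driven by a sinusoid of amplitude \(A\) has the describing function

\[
N(A) = \frac{4M}{\pi A},
\]

and, with a linear transfer function \(W(s)\) collecting the plant, actuator, and parasitic dynamics in the loop, the frequency and amplitude of the periodic chattering motion are estimated from the harmonic-balance condition

\[
W(j\omega)\,N(A) = -1 \quad\Longleftrightarrow\quad W(j\omega) = -\frac{\pi A}{4M},
\]

i.e., from the intersection of the Nyquist plot of \(W\) with the negative real axis. The same machinery underlies the boundary-layer tuning of the approximated SMC discussed in the following chapter.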

In the fourteenth Chapter, "Describing Function-Based Analysis and Design of Approximated Sliding-Mode Controllers with Reduced Chattering", Antonio Rosales, Leonid Freidovich, and Ismael Castillo introduce a describing-function-based analysis and design of approximated sliding-mode controllers with reduced chattering. SMC is a powerful robust technique that ensures insensitivity to matched bounded disturbances and finite-time convergence. However, its use requires the implementation of discontinuous feedback signals with the sign function, leading to parasitic oscillations called chattering. Chattering is unavoidable in systems with SMC, including continuous and HOSMC approaches. One of the simplest and most common ways to alleviate chattering is to approximate the SMC by substituting the sign function with a sigmoid or saturation function. Can such approximated SMC be designed without a blind search requiring a huge number of numerical and/or hardware experiments? This chapter presents a method to design the boundary-layer parameter of the saturation function that leads to approximated SMC algorithms which are suboptimal in a certain sense. The design is based on the describing function and harmonic balance techniques for estimating the parameters of chattering, i.e., the frequency and amplitude of the parasitic oscillations.

Different applications of SMC are presented in the sixth part, Applications of SMC.

In the fifteenth Chapter, "Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit via Continuous Sliding-Mode Algorithms", Roberto Franco, Héctor Ríos, Alejandra Ferreira de Loza, Louis Cassany, David Gucik-Derigny, Jérôme Cieslak, and David Henry propose five continuous sliding-mode control algorithms to regulate blood glucose levels in critically ill patients affected by Type 1 Diabetes Mellitus. All the controllers deal with uncertainties from intrapatient and interpatient variability and with external disturbances such as food intake. The proposed schemes require neither meal announcements nor patient individualization. The blood glucose measurement and insulin infusion are intravenous. The control objective is to regulate the blood glucose within the normoglycemia range, i.e., 70–180 mg/dl. The approach is validated in the UVA/Padova metabolic simulator for ten in silico adult patients. The results show excellent performance and minimal risk of hyperglycemia and hypoglycemia events for every sliding-mode algorithm. Moreover, a quantitative analysis contrasts the algorithms' closed-loop performance.

In the sixteenth Chapter, "A Reduced-Order Model-Based Design of Event-Triggered Sliding-Mode Control", Kiran Kumari, Abhisek K. Behera, Bijnan Bandyopadhyay, and Johann Reger develop event-triggered SMC as an effective tool for stabilizing networked systems under external perturbations. In this chapter, a reduced-order model-based event-triggered controller is presented, unlike the traditional full-order designs. Besides its inherent advantage of reduced computation, this technique also offers many benefits for network-based implementation. Particularly in the event-triggering scenario, the use of a reduced-order state vector leads to an increase in the sampling interval (also called the inter-event time) and hence to a sparse sampling sequence, which is the primary goal of almost all event-triggered controllers. The second outcome of this design is the transmission of a reduced-order vector over the network, so the transmission cost associated with the controller implementation can be reduced. The chapter exploits the aggregation technique to obtain a reduced-order model of the plant. The design of the SMC and of the event condition is carried out using this reduced-order model. The analysis of the closed-loop system is carried out with the reduced-order model without transforming it into regular form. In the end, a practical example is considered to illustrate the benefits of the proposed technique.
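A schematic version of the event-triggering idea (generic notation and a generic rule, hedged; this is not the specific event condition or model reduction of the chapter): the control is held between events, \(u(t) = u(t_k)\) for \(t \in [t_k, t_{k+1})\), and the next transmission instant is generated only when the deviation of the last transmitted (reduced-order) state from its current value becomes too large, e.g.

\[
t_{k+1} = \inf\bigl\{\, t > t_k : \|x(t) - x(t_k)\| \ge \alpha\,\|x(t)\| + \varepsilon \,\bigr\},
\]

with design parameters \(\alpha, \varepsilon \ge 0\). Rules of this general type trade a boundary layer around the sliding manifold, whose size is governed by \(\alpha\) and \(\varepsilon\), for sparse, aperiodic transmissions over the network.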

In the seventeenth Chapter, "A Robust Approach for Fault Diagnosis in Railway Suspension Systems", Selma Zoljic-Beglerovic, Mohammad Ali Golkani, Martin Steinberger, Bernd Luber, and Martin Horn present a robust approach for fault diagnosis in railway suspension systems. Due to the increased popularity of railway transportation, maximizing the availability of both vehicles and infrastructure is in high demand. To fulfill this requirement, the development of efficient maintenance strategies is the focus of research in this area. Such strategies require robust estimation and prediction of the health states of vehicle components. A sliding-mode-based algorithm is proposed for the identification of suspension parameters, which are credible indicators of the operational status. The accuracy of the estimation, the robustness to model uncertainties, and the sensitivity to faults are shown through different test scenarios in a simulation environment.

In the eighteenth Chapter, "Flight Evaluation of a Sliding-Mode Fault-Tolerant Control Scheme", Halim Alwi, Lejun Chen, Christopher Edwards, Ahmed Khattab, and Masayuki Sato consider the development of fault-tolerant controllers (FTC) and their application to aerospace systems. In particular, given the extensive and growing literature in this area, the chapter focuses on methods whose schemes have been implemented and flight tested. One thread of the fault-tolerant control literature has involved sliding-mode controllers. This chapter considers a specific class of sliding-mode FTC which incorporates control allocation to exploit over-actuation (typically present in aerospace systems). The chapter describes implementations of these ideas on a small quadrotor UAV and also piloted flight tests on a full-scale twin-engined aircraft.

In the nineteenth Chapter, "Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors", Romeo Falcón, Héctor Ríos, and Alejandro Dzul present the design of a sliding-mode-based fault diagnosis and a fault-tolerant control for the trajectory-tracking problem in quad-rotors. The problem considers external disturbances and two different actuator fault scenarios: multiple losses of rotor effectiveness or a complete rotor failure. The proposals are based only on the measurable positions and angles. For the fault-diagnosis strategy, a finite-time sliding-mode observer is proposed to estimate some state variables, and it provides a set of residuals. These residuals allow multiple actuator faults to be detected, isolated, and identified despite the presence of a class of external disturbances. Using the proposed fault diagnosis, an actuator fault-accommodation controller is developed to solve the trajectory-tracking problem in quad-rotors under the effects of multiple losses of rotor effectiveness and external disturbances. The fault accommodation partially compensates for the actuator faults, allowing the use of a baseline robust-nominal controller that deals with external disturbances. Additionally, in order to deal with the rotor-failure scenario, an active fault-tolerant control is proposed. First, the rotor failure is isolated using the proposed fault diagnosis, and then a combination of a finite-time sliding-mode observer, PID controllers, and continuous high-order sliding-mode controllers is proposed. This strategy allows the yaw angular velocity to remain bounded and the position tracking to be achieved even in the presence of some external disturbances. Numerical simulations and experimental results on Quanser's QBall2 show the performance of the proposed strategies.
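A common thread of these fault-diagnosis applications is that the injection signals of sliding-mode observers double as residuals. As a generic sketch (hedged; the notation and observer structure here are illustrative and not those of these chapters): for a plant \(\dot{x} = Ax + Bu + E f\), \(y = Cx\), with unknown fault or input \(f\), a sliding-mode observer of the form

\[
\dot{\hat{x}} = A\hat{x} + Bu + G_l\,(y - C\hat{x}) + G_n\,\nu, \qquad
\nu = -\rho\,\frac{y - C\hat{x}}{\|y - C\hat{x}\|},
\]

drives the output estimation error to zero in finite time under standard structural conditions; after that, the low-pass-filtered injection \(\nu_{\mathrm{eq}}\) reproduces the effect of \(f\), so faults can be detected, isolated, and estimated from \(\nu_{\mathrm{eq}}\) without differentiating noisy measurements.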

In the twentieth Chapter, "Second-Order Sliding-Mode Leader-Follower Consensus for Networked Uncertain Diffusion PDEs with Spatially Varying Diffusivity", Alessandro Pilloni, Alessandro Pisano, Elio Usai, and Yury Orlov address the distributed leader-following consensus tracking problem for a network of agents governed by uncertain diffusion PDEs with Neumann-type boundary actuation and uncertain spatially varying diffusivity. Except for the "leader" agent, which generates the reference profile to be tracked, all remaining agents, called "followers", are required to asymptotically track the infinite-dimensional time-varying leader state. The dynamics of the follower agents are affected by smooth boundary disturbances which are possibly unbounded in magnitude. The proposed local interaction rule is developed assuming that only boundary sensing is available, and it consists of a nonlinear sliding-mode-based protocol. The performance and stability properties of the resulting infinite-dimensional networked system are then formally studied by means of Lyapunov analysis. The analysis demonstrates the global exponential stability of the resulting error boundary-value problem in a suitable Sobolev space.

This book would not have been possible without the help of the external referees who reviewed each chapter and the technical support we received from Oliver Jackson and his colleagues at Springer. We thank all of them for their help, customary good cheer, and efficiency. We hope you enjoy reading this book as much as we appreciated the opportunity to edit it. We also hope that it allows readers to become familiar with the recent results in Sliding-Mode Control and Variable-Structure Systems.

Rio de Janeiro, Brazil    Tiago Roux Oliveira
Mexico City, Mexico    Leonid Fridman
Rio de Janeiro, Brazil    Liu Hsu
February 2022

Contents

New Properties of SMC Algorithms

Generalized Super-Twisting with Robustness to Arbitrary Disturbances: Disturbance-Tailored Super-Twisting (DTST) . . . 3
Hernan Haimovich

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems . . . 21
Wuquan Li and Miroslav Krstic

Designing Controllers with Predefined Convergence-Time Bound Using Bounded Time-Varying Gains . . . 37
Rodrigo Aldana-López, Richard Seeber, Hernan Haimovich, and David Gómez-Gutiérrez

SMC Observers

Bi-homogeneous Differentiators . . . 71
Jaime A. Moreno

On Finite- and Fixed-Time State Estimation for Uncertain Linear Systems . . . 97
Héctor Ríos

Robust State Estimation for Linear Time-Varying Systems Using High-Order Sliding-Modes Observers . . . 133
Jorge Dávila, Leonid Fridman, and Arie Levant

Discretization Issues

Effect of Euler Explicit and Implicit Time Discretizations on Variable-Structure Differentiators . . . 165
Mohammad Rasool Mojallizadeh and Bernard Brogliato

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit . . . 181
Maximilian Rüdiger-Wetzlinger, Markus Reichhartinger, and Martin Horn

Lyapunov-Based Consistent Discretization of Quasi-continuous High-Order Sliding Modes . . . 205
Tonametl Sanchez, Andrey Polyakov, and Denis Efimov

Low-Chattering Discretization of Sliding Modes . . . 229
Avi Hanan, Adam Jbara, and Arie Levant

Adaptation Methods

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques with Applications: A Survey . . . 267
Yuri Shtessel, Franck Plestan, Christopher Edwards, and Arie Levant

Unit Vector Control with Prescribed Performance via Monitoring and Barrier Functions . . . 307
Victor Hugo Pereira Rodrigues, Liu Hsu, Tiago Roux Oliveira, and Leonid Fridman

Chattering Analysis and Adjustment

Chattering in Mechanical Systems Under Sliding-Mode Control . . . 337
Igor Boiko

Describing Function-Based Analysis and Design of Approximated Sliding-Mode Controllers with Reduced Chattering . . . 357
Antonio Rosales, Leonid Freidovich, and Ismael Castillo

Applications of SMC

Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit via Continuous Sliding-Mode Algorithms . . . 385
Roberto Franco, Héctor Ríos, Alejandra Ferreira de Loza, Louis Cassany, David Gucik-Derigny, Jérôme Cieslak, and David Henry

A Reduced-Order Model-Based Design of Event-Triggered Sliding-Mode Control . . . 417
Kiran Kumari, Abhisek K. Behera, Bijnan Bandyopadhyay, and Johann Reger

A Robust Approach for Fault Diagnosis in Railway Suspension Systems . . . 437
Selma Zoljic-Beglerovic, Mohammad Ali Golkani, Martin Steinberger, Bernd Luber, and Martin Horn

Flight Evaluation of a Sliding-Mode Fault-Tolerant Control Scheme . . . 461
Halim Alwi, Lejun Chen, Christopher Edwards, Ahmed Khattab, and Masayuki Sato

Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors . . . 503
Romeo Falcón, Héctor Ríos, and Alejandro Dzul

Second-Order Sliding-Mode Leader-Follower Consensus for Networked Uncertain Diffusion PDEs with Spatially Varying Diffusivity . . . 541
Alessandro Pilloni, Alessandro Pisano, Elio Usai, and Yury Orlov

Index . . . 569

Contributors

Rodrigo Aldana-López  Departamento de Informatica e Ingenieria de Sistemas (DIIS), Universidad de Zaragoza, Zaragoza, Spain
Mohammad Ali Golkani  Institute of Automation and Control, Graz University of Technology, Graz, Austria
Halim Alwi  The College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK
Bijnan Bandyopadhyay  Department of Electrical Engineering, Indian Institute of Technology Jodhpur, Jodhpur, India
Abhisek K. Behera  Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee, India
Igor Boiko  Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, UAE
Bernard Brogliato  University Grenoble Alpes, INRIA, CNRS, Grenoble INP, LJK, Grenoble, France
Louis Cassany  University of Bordeaux, CNRS, Bordeaux INP, IMS, UMR 5218, Talence, France
Ismael Castillo  Institute of Automation and Control, Graz University of Technology, Graz, Austria
Lejun Chen  Department of Electronic and Electrical Engineering, University College London, London, UK
Jérôme Cieslak  University of Bordeaux, CNRS, Bordeaux INP, IMS, UMR 5218, Talence, France
Jorge Dávila  Instituto Politécnico Nacional, ESIME-UPT, Section of Graduate Studies and Research, Mexico City, Mexico
Alejandro Dzul  Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, Torreón, Coahuila, México
Christopher Edwards  The College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK
Denis Efimov  University of Lille, Inria, CNRS, UMR 9189—CRIStAL, Lille, France
Romeo Falcón  Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, Torreón, Coahuila, México
Alejandra Ferreira de Loza  CONAHCYT IxM, Ciudad de México, México; Instituto Politécnico Nacional, Tijuana, Baja California, México
Roberto Franco  Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, Torreón, Coahuila, México
Leonid Freidovich  Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden
Leonid Fridman  National Autonomous University of Mexico (UNAM), Mexico, CDMX, Mexico; Facultad de Ingeniería, Universidad Nacional Autónoma de México, Mexico City, Mexico
David Gómez-Gutiérrez  Intelligent Systems Research Lab, Intel Labs, Intel Corporation, Zapopan, Jalisco, Mexico; Instituto Tecnológico José Mario Molina Pasquel y Henríquez, Unidad Académica Zapopan, Tecnológico Nacional de México, Zapopan, Jalisco, Mexico
David Gucik-Derigny  University of Bordeaux, CNRS, Bordeaux INP, IMS, UMR 5218, Talence, France
Hernan Haimovich  Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Centro Internacional Franco-Argentino de Ciencias de la Información y de Sistemas (CIFASIS), Universidad Nacional de Rosario (UNR), Rosario, Argentina
Avi Hanan  Tel-Aviv University, Tel-Aviv, Israel
David Henry  University of Bordeaux, CNRS, Bordeaux INP, IMS, UMR 5218, Talence, France
Martin Horn  Institute of Automation and Control, Graz University of Technology, Graz, Austria
Liu Hsu  Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, RJ, Brazil
Adam Jbara  Tel-Aviv University, Tel-Aviv, Israel
Ahmed Khattab  The College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK
Miroslav Krstic  Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA, USA
Kiran Kumari  Department of Electrical Engineering, Indian Institute of Science, Bengaluru, Karnataka, India; Control Engineering Group, Technische Universität Ilmenau, Ilmenau, Germany
Arie Levant  School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv, Israel
Wuquan Li  School of Mathematics and Statistics Science, Ludong University, Yantai, China
Bernd Luber  Virtual Vehicle Research GmbH, Graz, Austria
Mohammad Rasool Mojallizadeh  University Grenoble Alpes, INRIA, CNRS, Grenoble INP, LJK, Grenoble, France
Jaime A. Moreno  Instituto de Ingeniería, Universidad Nacional Autónoma de México (UNAM), Coyoacán, Ciudad de México, Mexico
Tiago Roux Oliveira  State University of Rio de Janeiro (UERJ), Rio de Janeiro, RJ, Brazil
Yury Orlov  CICESE Research Center, Ensenada, Mexico
Alessandro Pilloni  Department of Electrical and Electronic Engineering (DIEE), University of Cagliari, Cagliari, Italy
Alessandro Pisano  Department of Electrical and Electronic Engineering (DIEE), University of Cagliari, Cagliari, Italy
Franck Plestan  Nantes Université, Ecole Centrale Nantes, CNRS, Nantes, France
Andrey Polyakov  University of Lille, Inria, CNRS, UMR 9189 - CRIStAL, Lille, France
Johann Reger  Control Engineering Group, Technische Universität Ilmenau, Ilmenau, Germany
Markus Reichhartinger  Institute of Automation and Control, Graz University of Technology, Styria, Austria
Victor Hugo Pereira Rodrigues  Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, RJ, Brazil
Héctor Ríos  Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, Torreón, Coahuila, México; CONAHCYT IxM, Ciudad de México, México
Antonio Rosales  Department of Applied Physics and Electronics, Umeå University, Umeå, Sweden
Maximilian Rüdiger-Wetzlinger  Institute of Automation and Control, Graz University of Technology, Styria, Austria
Tonametl Sanchez  IPICYT, SLP, Mexico
Masayuki Sato  The Faculty of Advanced Science and Technology, Kumamoto University, (Although the work described here was conducted while he worked for JAXA), Kumamoto, Japan
Richard Seeber  Christian Doppler Laboratory for Model Based Control of Complex Test Bed Systems, Institute of Automation and Control, Graz University of Technology, Graz, Austria
Yuri Shtessel  University of Alabama in Huntsville, Huntsville, USA
Martin Steinberger  Institute of Automation and Control, Graz University of Technology, Graz, Austria
Elio Usai  Department of Electrical and Electronic Engineering (DIEE), University of Cagliari, Cagliari, Italy
Selma Zoljic-Beglerovic  Virtual Vehicle Research GmbH, Graz, Austria

List of Figures

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems Fig. 1 Fig. 2

Mass-spring mechanical system . . . . . . . . . . . . . . . . . . . . . . . . . . . . The response of the closed-loop system (79)–(81) . . . . . . . . . . . . .

33 34

Designing Controllers with Predefined Convergence-Time Bound Using Bounded Time-Varying Gains Fig. 1 Fig. 2

Fig. 3

Fig. 4

Fig. 5

Example of a time-scale transformation (left) and its related time-varying gain (right) with η = 1 and Tc = 1 . . . . . . . . . . . . . . . Simulation of the first-order prescribed-time controller, for different initial conditions, with φ(x, t; Tc ) = −κ(t)x and Tc = 1. It can be observed that, the state x(tstop ) at a time tstop grows linearly with |x0 |. Here we choose tstop = 0.9 . . . . . . . . Simulation of the first-order prescribed-time controller, for different initial conditions, with φ(x, t; Tc ) = −κ(t)c(1 − exp(−|x|))sign(x), with c = 10 and Tc = 1. It can be observed that, the state x(tstop ) at a time tstop grows with |x0 |. Here we choose tstop = 0.9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Comparison of the proposed time-scale transformation against the trivial scalar scaling. On the subplot on the left shows that if the system in τ -time has an UBST given by T f , then the system in the t-time has a UBST given by Tc . The subplot on the right illustrates how the time-varying gain is uniformly bounded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Simulation of a first-order predefined-time controller with bounded time-varying gains. Different values of the parameter alpha are shown, illustrating that a suitable selection of α allows to reduce the energy and the control magnitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

44

48

48

50

51 xxi

xxii

Fig. 6

Fig. 7

Fig. 8

List of Figures

Comparison between the autonomous predefined-time controller proposed in [1] and the proposed non-autonomous predefined-time controller as discussed in Example 3. In both cases the UBST is selected as Tc = 10 . . . . . . . . . . . . . . . . . . . Simulation of Example 5, showing the lack of robustness to measurement noise of a prescribed-time algorithm. On the left, the behavior of a prescribed control with Tc = 10 and without disturbance. In the center, a set of pulse disturbances in (36). On the right, the behavior of the closed-loop system under the prescribed control and in the presence of disturbance (36) . . . . . . . . . . . . . . . . . . . . . . Simulation of Example 6, showing robustness to measurement noise of a prescribed-time algorithm with bounded time-varying gains. On the left, the behavior of a prescribed control with Tc = 10 and without disturbance. In the center, a set of pulse disturbances in (36). On the right, the behavior of the closed-loop system under the prescribed control and in the presence of disturbance (36) . . . . . . . . . . . . . . . .

55

60

63

Bi-homogeneous Differentiators Fig. 1

Fig. 2

Fig. 3

Fig. 4

Time behavior of the three differentiation errors for initial conditions x0 = [1, −5, 1] × 10 p , with p = {0, 3, 6, 9} and scaling L = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Time behavior of the three differentiation errors for initial conditions x0 = [1, −5, 1] × 10 p , with p = {0, 3, 6, 9} and scaling L = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Differentiation errors for a signal without noise for three different  values of the 0-limit homogeneity degree d0 = 0, − 21 , −1 and the same d∞ = 0.15. Note that only the differentiator with d0 = −1 is able to estimate correctly the three signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Differentiation errors for a signal with a random noise of amplitude ε = 0.005, for three different  values  of the 0-limit homogeneity degree d0 = 0, − 12 , −1 and the same d∞ = 0.15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

89

90

91

92

On Finite- and Fixed-Time State Estimation for Uncertain Linear Systems Fig. 1 Fig. 2

Trajectories of the system, state estimation and estimation error—linear observer, μ = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Trajectories of the system, state estimation and estimation error—discontinuous observer, μ = 1 . . . . . . . . . . . . . . . . . . . . . . .

113 114

List of Figures

Fig. 3 Fig. 4 Fig. 5 Fig. 6 Fig. 7 Fig. 8 Fig. 9 Fig. 10

Trajectories of the system, state estimation and estimation error—nonlinear observer, μ = 0.5 . . . . . . . . . . . . . . . . . . . . . . . . . Estimation error—parametric uncertainties and unknown inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Estimation error—unknown inputs . . . . . . . . . . . . . . . . . . . . . . . . . . Trajectories of the system and estimation error—unstable system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Trajectories of the system, state estimation and estimation error—FxT observer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FxT Estimation error—uniform property . . . . . . . . . . . . . . . . . . . . . FT estimation error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Trajectories of the system, unknown input identification and estimation error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

xxiii

115 116 116 117 118 119 119 121

Robust State Estimation for Linear Time-Varying Systems Using High-Order Sliding-Modes Observers
Fig. 1: Cascade observer block diagram
Fig. 2: State estimation. ©2022 John Wiley and Sons. Reprinted with permission from Galván-Guerra, R., Fridman, L., and Dávila, J. (2017) High-order sliding-mode observer for linear time-varying systems with unknown inputs. Int. J. Robust. Nonlinear Control, 27: 2338–2356. https://doi.org/10.1002/rnc.3698
Fig. 3: Estimation errors. ©2022 John Wiley and Sons. Reprinted with permission from Galván-Guerra, R., Fridman, L., and Dávila, J. (2017) High-order sliding-mode observer for linear time-varying systems with unknown inputs. Int. J. Robust. Nonlinear Control, 27: 2338–2356. https://doi.org/10.1002/rnc.3698
Fig. 4: States x(t)
Fig. 5: Estimation errors e(t)
Fig. 6: Estimations of the states x1(t) (above) and x2(t) (below)
Fig. 7: Estimations of the states x3(t) (above) and x4(t) (below)

Effect of Euler Explicit and Implicit Time Discretizations on Variable-Structure Differentiators
Fig. 1: Flowchart of the I-AO-STD. The block D indicates one-step delay
Fig. 2: Output of the differentiators for the input f(t) = 5t without noise, under oversized gain L = 10^3, and h = 50 ms
Fig. 3: Scheme of the rotary inverted pendulum
Fig. 4: Closed-loop diagram of the RIPS

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit
Fig. 1: Signal model as a chain of integrators
Fig. 2: Simulation results for the 5th order bi-homogeneous differentiator with μ = 1 and μ = 0
Fig. 3: Comparison of the simulation results of the semi-implicit mapped and the stable explicit Euler and the explicit Euler discretized 5th order bi-homogeneous differentiator

Lyapunov-Based Consistent Discretization of Quasi-continuous High-Order Sliding Modes
Fig. 1: Discrete-time approximation of (1), (3) for n = 1
Fig. 2: Discrete-time approximation of (1), (3) for n = 2
Fig. 3: Discrete-time approximation of (1), (3) for n = 3
Fig. 4: Norm of z̃k in (28) for the discrete-time approximation of (1), (3) for n = 3
Fig. 5: States of (1) in closed loop with the discretization of (3) given by (37) (undisturbed case)
Fig. 6: States of (1) in closed loop with the discretization of (3) given by (39) (undisturbed case)
Fig. 7: States of (1) in closed loop with the discretization of (3) given by (37) (disturbed case)
Fig. 8: States of (1) in closed loop with the discretization of (3) given by (39) (disturbed case)

Low-Chattering Discretization of Sliding Modes
Fig. 1: Filippov procedure. a Graph Γ(f) of the DE ẋ = f(x) = 1 − 2 sign x. b Graph of the corresponding Filippov inclusion
Fig. 2: First-order SMC (6) for h = g = 1 and its discretizations. a The proposed discretization method for ẋ = f(x) = 1 − 2 sign x. b The standard Euler discretization (9). c Alternative discretization (10) utilizing the knowledge of the system provides for the FT stability. d Alternative discretization (11) is effective for any h, g, |h| ≤ 1, |g| ∈ [1, 1.5] and provides for the asymptotic stability for h = g = 1
Fig. 3: Parameters λ̃0, λ̃1, ..., λ̃_{nd+nf} of differentiator (22), (23) for nd + nf = 0, 1, ..., 12
Fig. 4: Valid parameters kL of the discrete differentiator (32) corresponding to Fig. 3
Fig. 5: Kinematic car model and the desired trajectory y = g(x)
Fig. 6: a 3-SM car control, car trajectory and control for the coinciding sampling step τ = 10^−4 and the integration step. a–c standard Euler discretization, d–f new discretization
Fig. 7: a 3-SM car control, car trajectory and control (steering angle derivative), sampling step τ = 0.03, integration step 10^−4. a–c standard Euler discretization, d–f new discretization. New discretization keeps the performance from Fig. 6 in spite of the much larger sampling step
Fig. 8: 4-SM stabilization of the system (44) for h = 0.5 · cos(t + σ̇)
Fig. 9: Asymptotic stabilization of the system (44) for h ≡ 0 by the new discretization
Fig. 10: Planar engagement geometry
Fig. 11: Intercepting maneuvering target. Standard discretization: τ = 10^−6, τ = 10^−3. New discretization: τ = 10^−3
Fig. 12: Intercepting ballistic target. The standard discretization (on the left) and the new discretization (on the right) are performed with the sampling step τ = 10^−2

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques with Applications: A Survey
Fig. 1: Schematic of an electropneumatic system. ©2010 TAYLOR & FRANCIS. Reprinted with permission from Plestan, F., Shtessel, Y., Brégeault, V., Poznyak, A.: New methodologies for adaptive sliding mode control. Int. J. Control. 83(9), 1907–1919 (2010)
Fig. 2: Top: Current (-) and desired (—) position trajectories (m) versus time (s). Center: Control input (V) versus time (s). Bottom: Control gain K versus time (s). ©2010 TAYLOR & FRANCIS. Reprinted with permission from Plestan, F., Shtessel, Y., Brégeault, V., Poznyak, A.: New methodologies for adaptive sliding mode control. Int. J. Control. 83(9), 1907–1919 (2010)
Fig. 3: Time history of σ and v. ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)
Fig. 4: Time histories of α, β and |f(t)|. ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)
Fig. 5: Time histories of ρ(t) and L(t). ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)

Fig. 6: The disturbance f(t) reconstruction. ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)
Fig. 7: Left, schematic of Quanser 3DOF tandem helicopter. Right, experimental set-up (with the fan on the left-hand side, producing perturbations as wind gusts). ©2017 JOHN WILEY AND SON. Reprinted with permission from Chriette, A., Plestan, F., Castaneda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for UAVs - Design and experimental validation. Adaptive Control and Signal Processing 30,(8–10), 1478–1493 (2016)
Fig. 8: Adaptive super-twisting control - experimental results for trajectory tracking. ©2017 JOHN WILEY AND SON. Reprinted with permission from Chriette, A., Plestan, F., Castaneda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for UAVs - Design and experimental validation. Adaptive Control and Signal Processing 30,(8–10), 1478–1493 (2016)
Fig. 9: PID control - experimental results for trajectory tracking. ©2017 JOHN WILEY AND SON. Reprinted with permission from Chriette, A., Plestan, F., Castaneda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for UAVs - Design and experimental validation. Adaptive Control and Signal Processing 30,(8–10), 1478–1493 (2016)
Fig. 10: Mass-spring-damper (MSD) system: the Educational Control Products (ECP) model 210. ©2017 ELSEVIER. Reprinted with permission from Shtessel Y, Moreno J, and Fridman L. Twisting sliding mode control with adaptation: Lyapunov design, methodology and application. Automatica, 75:229–235, 2017
Fig. 11: Comparison of output tracking errors for ATC against fixed gain TC. ©2017 ELSEVIER. Reprinted with permission from Shtessel, Y., Moreno, J., Fridman, L.: Twisting Sliding Mode Control with Adaptation: Lyapunov Design, Methodology and Application. Automatica 75, 229–235 (2017)
Fig. 12: Time history of the adaptive control gain. ©2017 ELSEVIER. Reprinted with permission from Shtessel, Y., Moreno, J., Fridman, L.: Twisting Sliding Mode Control with Adaptation: Lyapunov Design, Methodology and Application. Automatica 75, 229–235 (2017)

Fig. 13: Evolution of σ, σ̇, σ̈. ©2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019
Fig. 14: Evolution of the control function u(t). ©2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019
Fig. 15: Evolution of the adaptive gains λ(t), β(t) and | ˙(t)|. ©2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019
Fig. 16: Evolution of ρ(t), δ(t), L(t). © 2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019
Fig. 17: Systems of HSM coordinates and corresponding variables. ©2016 JOHN WILEY AND SONS. Reprinted with permission from Yu P, Shtessel Y, and Edwards C. Continuous higher order sliding mode control with adaptation of air breathing hypersonic missile. Int. J. Adapt. Control Signal Process, 30:1099–1118, 2016
Fig. 18: A geometry of the HSM-target interaction. ©2016 JOHN WILEY AND SONS. Reprinted with permission from Yu P, Shtessel Y, and Edwards C. Continuous higher order sliding mode control with adaptation of air breathing hypersonic missile. Int. J. Adapt. Control Signal Process, 30:1099–1118, 2016
Fig. 19: Altitude Tracking. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118
Fig. 20: Pitch Angle Tracking. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118

Fig. 21: Altitude and Pitch Angle Errors. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118
Fig. 22: Control Inputs. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118
Fig. 23: Adaptive first layer control gains. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118
Fig. 24: Adaptive control terms that are hidden behind integrals. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118
Fig. 25: Adaptive second layer control gains. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118
Fig. 26: Downrange tracking error. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August–October 2016, pp. 1099–1118

Unit Vector Control with Prescribed Performance via Monitoring and Barrier Functions
Fig. 1: Simulation results—unit vector control with monitoring and positive barrier
Fig. 2: Simulation results—unit vector control with monitoring and positive barrier

Fig. 3: Free body diagram of overhead crane
Fig. 4: 3D Crane—MBF UVC with positive barrier, X(t) and Ẋ(t)
Fig. 5: 3D Crane—MBF UVC with positive barrier, Y(t) and Ẏ(t)
Fig. 6: 3D Crane—MBF UVC with positive barrier, R(t) and Ṙ(t)
Fig. 7: 3D Crane—MBF UVC with positive barrier, α(t) and α̇(t)
Fig. 8: 3D Crane—MBF UVC with positive barrier, β(t) and β̇(t)
Fig. 9: 3D Crane—MBF UVC with positive barrier, σ(t) and σ̇(t)
Fig. 10: 3D Crane—MBF UVC with positive barrier, U and d(t)
Fig. 11: 3D Crane—MBF UVC with semi-positive barrier, X(t) and Ẋ(t)
Fig. 12: 3D Crane—MBF UVC with semi-positive barrier, Y(t) and Ẏ(t)
Fig. 13: 3D Crane—MBF UVC with semi-positive barrier, R(t) and Ṙ(t)
Fig. 14: 3D Crane—MBF UVC with semi-positive barrier, α(t) and α̇(t)
Fig. 15: 3D Crane—MBF UVC with semi-positive barrier, β(t) and β̇(t)
Fig. 16: 3D Crane—MBF UVC with semi-positive barrier, σ(t) and σ̇(t)
Fig. 17: 3D Crane—MBF UVC with semi-positive barrier, U and d(t)

Chattering in Mechanical Systems Under Sliding-Mode Control
Fig. 1: Bias functions for relay and continuous control; cr = cc = a = 1, β = 0.5

Describing Function-Based Analysis and Design of Approximated Sliding-Mode Controllers with Reduced Chattering
Fig. 1: a System employed to compute the gains of the HOSM using only information of φ, i.e., unmodeled dynamics are omitted; b System controlled by HOSM with BL approximation, considering unmodeled dynamics, to be used for designing parameters of BL
Fig. 2: a Block diagram of nonlinear system divided in linear part and nonlinear part. b Graphical solution of Eq. (4) with saturation DF in (7)
Fig. 3: Estimated amplitude A and frequency ω for first-order SMC with BL approximation
Fig. 4: Simulation of the plant (8) in cascade with (3) in a closed loop with the controller (13)
Fig. 5: Curves of inverse describing function −1/N_STsat(A, ω) of approximated super-twisting algorithm

Fig. 6: Estimated amplitude A and frequency ω for super twisting with BL approximation
Fig. 7: Simulation of the plant (8) in cascade with (3) in a closed loop with the controller (17)
Fig. 8: Curves of inverse describing function −1/N_nessat(A, ω) of approximated nested algorithm. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 9: Amplitude and Frequency versus δ for nested algorithm with BL. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 10: Simulation of the plant (27) in cascade with (3) in a closed loop with the controller (34). “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 11: Curves of inverse describing function −1/N_twsat(A, ω) of approximated twisting algorithm. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 12: Amplitude and Frequency versus δ for twisting with BL. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”

Fig. 13: Simulation of the plant (27) in cascade with (3) in a closed loop with the controller (37). “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 14: Curves of inverse describing function −1/N_STexsat(A, ω) of ST extension with BL approximation. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 15: Amplitude and Frequency versus δ for ST extension with BL approximation. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 16: Simulation of the plant (27) in cascade with (3) in a closed loop with the controller (40). “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”
Fig. 17: Closed loop with nested controller

Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit via Continuous Sliding-Mode Algorithms
Fig. 1: Average blood glucose concentration for the adult patient cohort using the STA. The ±1 STD values are given by the filled area
Fig. 2: Average insulin infusion rate for the adult patient cohort using the STA
Fig. 3: CVGA for the adult patient cohort using the STA

Fig. 4: Average blood glucose concentration for the adult patient cohort using the CTA. The ±1 STD values are given by the filled area
Fig. 5: Average insulin infusion rate for the adult patient cohort using the CTA
Fig. 6: CVGA for the adult patient cohort using the CTA
Fig. 7: Average blood glucose concentration for the adult patient cohort using the CSTA. The ±1 STD values are given by the filled area
Fig. 8: Average insulin infusion rate for the adult patient cohort using the CSTA
Fig. 9: CVGA for the adult patient cohort using the CSTA
Fig. 10: Average blood glucose concentration for the adult patient cohort using the CNTA. The ±1 STD values are given by the filled area
Fig. 11: Average insulin infusion rate for the adult patient cohort using the CNTA
Fig. 12: CVGA for the adult patient cohort using the CNTA
Fig. 13: Average blood glucose concentration for the adult patient cohort using the OFCTA. The ±1 STD values are given by the filled area. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420
Fig. 14: Average insulin infusion rate for the adult patient cohort using the OFCTA. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420
Fig. 15: CVGA for the adult patient cohort using the OFCTA. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420

Fig. 16: Average blood glucose concentration for the adult patient cohort using the Portland protocol. The ±1 STD values are given by the filled area. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420
Fig. 17: Average insulin infusion rate for the adult patient cohort using the Portland protocol. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420
Fig. 18: Insulin and glucose bolus for the adult patient cohort using the Portland protocol. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420

A Reduced-Order Model-Based Design of Event-Triggered Sliding-Mode Control
Fig. 1: The closed-loop plant with the event-triggered feedback strategy
Fig. 2: Evolution of state trajectories
Fig. 3: Evolution of sliding variable
Fig. 4: Evolution of control input
Fig. 5: Evolution of inter-event time

A Robust Approach for Fault Diagnosis in Railway Suspension Systems
Fig. 1: Pyramid of maintenance strategies
Fig. 2: Benefits gained through predictive maintenance strategy
Fig. 3: Main parts of a railway vehicle
Fig. 4: Simplified model of the entire vehicle consisting of one carbody, two bogies, and four wheelsets, along with the two suspension systems—primary and secondary
Fig. 5: Modeling concept for a simplified railway vehicle model

Fig. 6: Simplified railway vehicle with model states and parameters; stiffness (kp) and damping (cp) constants are the same for all primary spring–damper elements
Fig. 7: Model check—vertical vehicle dynamics (Matlab) versus whole vehicle model (Simpack)
Fig. 8: Carbody as a submodel for the isolation task on secondary suspension level
Fig. 9: Model check—vertical dynamics of the carbody subsystem (Matlab) versus carbody of the whole vehicle model (Simpack)
Fig. 10: Filtering concept for the sensor data, i.e., accelerations
Fig. 11: Estimation of primary suspension parameters in a fault-free test case. All components on the primary suspension level have the same stiffness/damping constants
Fig. 12: Estimation of secondary suspension parameters in a fault-free test case. All components on the secondary suspension level have the same stiffness/damping constants
Fig. 13: Estimation of primary suspension parameters in a fault-free test case. All primary suspension parameters estimated separately
Fig. 14: Estimation of secondary suspension parameters in a fault-free test case. All secondary suspension parameters estimated separately
Fig. 15: Comparison of SM and KF performance
Fig. 16: Estimation of secondary suspension parameters when a fault of 50% reduced secondary stiffness is introduced in the front/right secondary spring starting from t = 0 s

Flight Evaluation of a Sliding-Mode Fault-Tolerant Control Scheme
Fig. 1: UAV civil applications: cargo transport and delivery
Fig. 2: McDonnell Douglas MD-11: landing under engine power only. NASA Dryden Flight Research Center Photo Collection, photo by J. Ross [9]
Fig. 3: JAXA MuPAL-α experimental airplane (reproduced with permission from JAXA)
Fig. 4: K50-Advanced UAV platform
Fig. 5: 3DR IRIS+ quadrotor [55]
Fig. 6: 3DR IRIS+ quadrotor in gimbal
Fig. 7: Fault-free case (red curves represent the nominal performance)
Fig. 8: Faulty case: w1 = 0.75 @ t = 2 s (red curves represent the nominal performance)
Fig. 9: MuPAL-α fly-by-wire configuration (adapted from [12] with permission from JAXA)

Fig. 10: MuPAL-α HIL test platform
Fig. 11: Fault-free case: states, switching functions and control surface deflections (HIL test with auto-pilot)
Fig. 12: Faulty case—K = diag(1, 0.5, 0.2): states, switching functions and control surface deflections (HIL test with auto-pilot)
Fig. 13: Fault-free case: states, switching functions and control surface deflections (flight test)
Fig. 14: Fault-free case: states, switching functions and control surface deflections (flight test for S-turn)
Fig. 15: Aileron faults—K = diag(1, 0.5, 0): states, switching functions and control surface deflections (flight test)
Fig. 16: Aileron and rudder faults—K = diag(1, 0.5, 0.2): states, switching functions and control surface deflections (flight test for S-turn)

Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors
Fig. 1: Representation of the Quad-rotor variables. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 2: Quanser Unmanned Vehicle System. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 3: Residual signals. The solid line represents the average of all the experimental tests. The shaded light gray area depicts the time when fault f1 is active, the gray area when fault f2 is active, and the dark gray when both of them are active. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854

Fig. 4: Fault detection. The solid line represents the average of all the experimental tests. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 5: Fault isolation. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 6: Fault identification on rotor 1. The solid line represents the average of all the experimental tests. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 7: Fault identification on rotor 2. The solid line represents the average of all the experimental tests. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 8: Quad-rotor position and attitude. The solid line represents the average of all the experimental tests. The shaded light gray area depicts the time when fault f1 is active, the gray area when fault f2 is active, and the dark gray when both of them are active. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854
Fig. 9: Quad-rotor position. The shaded light gray area depicts the time when fault f1 is active, and the dark gray when both faults are active
Fig. 10: Quad-rotor yaw angle. The shaded light gray area depicts the time when fault f1 is active, and the dark gray when both faults are active
Fig. 11: Fault identification on rotor 1
Fig. 12: Fault identification on rotor 3

Fig. 13: Rotor thrusts
Fig. 14: Trajectories of the quad-rotor. Position
Fig. 15: Disturbance and angular velocity constraints
Fig. 16: Trajectories of the quad-rotor. Orientation
Fig. 17: Rotor thrusts

Second-Order Sliding-Mode Leader-Follower Consensus for Networked Uncertain Diffusion PDEs with Spatially Varying Diffusivity
Fig. 1: Considered interaction graph G. Node 0 denotes the leader (23)–(26), whereas the nodes from 1 to 9 play as followers (15)–(18), whose aim is to track the leader state q0(ς, t) by exploiting only boundary neighboring information qj(1, t), j ∈ Ni
Fig. 2: Temporal profile of the boundary local disturbances ψi(t), with i = 1, 2, ..., 9
Fig. 3: Spatio-temporal profile of the leader state q0(ς, t)
Fig. 4: Case 1: Spatio-temporal profile of the 2nd follower state q2(ς, t)
Fig. 5: Case 2: Spatio-temporal profile of the 2nd follower state q2(ς, t)
Fig. 6: Case 1: Spatio-temporal tracking error e2(ς, t) = q2(ς, t) − q0(ς, t)
Fig. 7: Case 1: Spatio-temporal tracking error e2(ς, t) = q2(ς, t) − q0(ς, t)
Fig. 8: Case 1: Temporal profile of the boundary control inputs ui(t) with i = 1, 2, ..., 9
Fig. 9: Case 2: Temporal profile of the boundary control inputs ui(t) with i = 1, 2, ..., 9
Fig. 10: Temporal profile of the [H2(0, 1)]^n-norm of the tracking error vector E(ς, t)

List of Tables

Effect of Euler Explicit and Implicit Time Discretizations on Variable-Structure Differentiators
Table 1: Parameters of the differentiators obtained from the tuning procedure for the input I1
Table 2: Parameters of the differentiators obtained from the tuning procedure for the input I2
Table 3: Average values for 10 experiments when the parameters are tuned for I1 (parameters are shown in Table 1)
Table 4: Average values for 10 experiments when the parameters are tuned for I2 (parameters are shown in Table 2)

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit
Table 1: Existing homogeneous differentiators and their corresponding eigenvalues

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques with Applications: A Survey
Table 1: The coefficients for the control law component in (35). © 2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher-order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019

Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit via Continuous Sliding-Mode Algorithms
Table 1: Gain selection of the sliding-mode controllers and observer parameters
Table 2: Performance indicators of the STA. The last row refers to the mean values of the adult cohort
Table 3: Performance indicators of the CTA. The last row refers to the mean values of the adult cohort
Table 4: Performance indicators of the CSTA. The last row refers to the mean values of the adult patient cohort
Table 5: Performance indicators of the CNTA. The last row refers to the mean values of the adult cohort
Table 6: Performance indicators of the OFCTA. The last row refers to the mean values of the adult cohort. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420
Table 7: Performance indicators of the Portland protocol. The last row corresponds to the mean values of the patient cohort. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420
Table 8: Performance indicators of the Sliding-Mode algorithms. The last row depicts the algorithm with the best performance for the corresponding metric

A Robust Approach for Fault Diagnosis in Railway Suspension Systems
Table 1: The parameters of the vehicle suspension system
Table 2: Computational and parametrization effort of SM and KF algorithms

Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors
Table 1: Properties of the performance indexes ē1RMS(t) and ē2RMS(t)
Table 2: Properties of the tracking performance indexes exRMS, eyRMS, ezRMS and ēψRMS

New Properties of SMC Algorithms

Generalized Super-Twisting with Robustness to Arbitrary Disturbances: Disturbance-Tailored Super-Twisting (DTST)

Hernan Haimovich

Abstract The super-twisting algorithm (STA) is one of the most popular second-order sliding-mode techniques. STA can achieve finite-time convergence to a region of interest called the sliding surface while completely rejecting essentially two types of perturbations (i.e., disturbances): those that vanish toward this surface and those whose total time derivative is bounded. Existing modifications of STA can reject disturbances where the latter bound, instead of being constant, can depend on the distance to the sliding surface in some specific forms. However, a general design framework allowing for perturbations of arbitrary growth was missing and was recently provided by Disturbance-Tailored Super-Twisting (DTST). In this chapter, the main concepts, requirements, and properties involved in DTST are explained.

H. Haimovich, Centro Internacional Franco-Argentino de Ciencias de la Información y Sistemas (CIFASIS), Universidad Nacional de Rosario (UNR), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Ocampo y Esmeralda, 2000 Rosario, Argentina. e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_1

1 Introduction

The two main features of sliding-mode control [18] are finite-time convergence to a region of interest called the sliding surface and the capability of rejecting matched perturbations [4]. First-order sliding-mode techniques implement a discontinuous control action and can reject bounded disturbances. Higher-order sliding-mode techniques [2, 8] can sometimes reduce chattering while maintaining the finite-time convergence to the sliding surface [10].
One of the most popular second-order sliding-mode techniques is the super-twisting algorithm (STA) [9]. Perturbations that vanish as the state approaches the sliding surface can be completely rejected by the STA, as well as those having a bounded total time derivative. Modified variants of STA, such as the generalized STA (GSTA) of [12] or the generalized second-order algorithm (GSOA) in [13], enhance the disturbance rejection capability by covering cases where the bound on the total time derivative of the perturbation increases in specific forms. Disturbance-tailored super-twisting (DTST) [5] allows even greater generality, as will soon be explained.
Disturbance-Tailored Super-Twisting (DTST) can be said to achieve a type of paradigm change within second-order sliding-mode control when the sliding variable has relative degree one. Instead of fixing the algorithm's defining functions beforehand, these functions are themselves constructed based on perturbation bound information. The main requirement for this construction to be possible is that the perturbations allow bounds that may grow only as the distance to the sliding surface becomes larger but cannot become unbounded when this distance is fixed. This requirement is standard in STA and its generalizations. The advantage in DTST is that the bound growth can be rather arbitrary, and not fixed to any specific form.
This chapter focuses on explaining how to carry out the DTST construction and ensure its main properties: global robust stability and finite-time convergence. The emphasis is placed on the main concepts rather than on the technical details.
The remainder of this chapter is organized as follows. Section 2 provides motivation through an example of speed-tracking control design for a simple mechanical system. Section 3 explains how to perform the DTST construction in general, and then specifically for the example of Sect. 2. Section 4 gives the requirements on the given perturbation bounds for the construction to be successful and shows the properties of the construction. In Sect. 5, the results ensuring global robust stability and finite-time convergence are stated. Conclusions and an outline for future work are given in Sect. 6.
Throughout this chapter, the focus is placed on explaining how the DTST construction is performed and what properties are required and ensured. As far as possible, the technical details are kept to a minimum to emphasize the main concepts. The reader interested in the full technical details, including the detailed proofs, is referred to [5].

2 Motivation

2.1 Speed Tracking in a Simple Mechanical System

Consider a very simple mass–spring–damper system described by the equation m q̈ + b q̇ + k q = u + δ, with linear position coordinate q, force input u, and matched perturbation δ. The mass m, viscous friction coefficient b, and spring stiffness k are assumed to be constant, positive, and perfectly known.
Suppose that the control aim is to track a time-varying speed reference q̇d in a setting where only the speed-tracking error q̇ − q̇d can be measured. Moreover, it is desired that the speed should converge to the reference in a finite time. Suppose further that the perturbation δ has no known bound, but it is known that its total time derivative is bounded in relation to the speed magnitude,

  |δ̇| ≤ ψ(|q̇|),   (1)

where ψ is some continuous and nonnegative function. Since m, b, k are assumed to be perfectly known, the main obstacle to overcome is the effect of this perturbation. The speed reference is assumed bounded, sufficiently smooth, and with bounded derivatives. The following questions are posed:
• What type of controller can achieve the objective?
• Can the objective be achieved for any function ψ?
• How would the resulting controller depend on ψ?
In this chapter, it will be shown that a super-twisting type of controller, of the form

  u = −k1 φ1(q̇ − q̇d) + w,   (2a)
  ẇ = −k2 φ2(q̇ − q̇d),   (2b)

can achieve the desired objective provided that the functions φ1 and φ2 are designed in correspondence with ψ and that the constant gains k1, k2 are appropriately tuned.
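A minimal discrete-time sketch of the controller structure (2) may help fix ideas. The code below is not taken from the chapter: it simply Euler-integrates the w-dynamics at a sampling period dt, and the particular φ1, φ2 shown (the standard super-twisting pair) as well as the gains are placeholder choices, to be replaced by the disturbance-tailored functions constructed in Sect. 3.

```python
import numpy as np

def make_st_controller(phi1, phi2, k1, k2, dt):
    """Euler-discretized controller of the form (2):
       u = -k1*phi1(e) + w,   dw/dt = -k2*phi2(e),
    where e = (dq/dt) - (dq_d/dt) is the measured speed-tracking error."""
    w = 0.0
    def control(e):
        nonlocal w
        u = -k1 * phi1(e) + w
        w += dt * (-k2 * phi2(e))   # explicit Euler step of the w-dynamics
        return u
    return control

# Placeholder (standard STA) choice of phi1, phi2 and illustrative gains;
# Sect. 3 replaces these with functions tailored to the bound psi.
sta = make_st_controller(phi1=lambda e: np.sqrt(abs(e)) * np.sign(e),
                         phi2=lambda e: np.sign(e),
                         k1=1.5, k2=2.0, dt=1e-3)
```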

2.2 Limitations of STA

If ψ is constant, then the objective can be achieved by means of the standard STA with an additional term. To see this, define x1 = q̇ − q̇d, so that

  ẋ1 = q̈ − q̈d = (1/m)[u − b q̇ − k q + δ] − q̈d
     = −(k1/m) φ1(x1) − (b/m) x1 + (1/m) w − (b/m) q̇d − (k/m) q + (1/m) δ − q̈d
     = −(k1/m) φ1(x1) − (b/m) x1 + x2,

with

  x2 := (1/m) w − (b/m) q̇d − (k/m) q + (1/m) δ − q̈d.

The equation for ẋ2 is then

  ẋ2 = −(k2/m) φ2(x1) − (k/m) q̇ − (b/m) q̈d − q⃛d + (1/m) δ̇   (3)
     = −(k2/m) φ2(x1) − (k/m) x1 + w2,   (4)

with

  w2 := −(k/m) q̇d − (b/m) q̈d − q⃛d + (1/m) δ̇.   (5)

The linear terms −bx1 and −kx1 appearing in the equations for ẋ1 and ẋ2 can be compensated exactly via φ1 and φ2 because b and k are assumed to be perfectly known. The perturbation term w2 can be bounded by some constant because ψ is assumed constant and the reference q̇d and its derivatives q̈d and q⃛d are assumed bounded. Then, setting

  φ1(x1) = ν1(|x1|) sign(x1) − (b/k1) x1,   (6a)
  φ2(x1) = ν2(|x1|) sign(x1) − (k/k2) x1   (6b)

yields the equations

  ẋ1 = −k1 ν1(|x1|) sign(x1) + x2,   (7a)
  ẋ2 = −k2 ν2(|x1|) sign(x1) + w2,   (7b)

which for ν1(|x1|) = |x1|^{1/2} and ν2(|x1|) ≡ 1 become the standard STA. Knowledge of a bound for w2 will then allow proper tuning, i.e., the selection of k1 and k2 to achieve finite-time convergence to x1 ≡ 0, the required control objective. However, if ψ is not constant, then STA cannot guarantee the control objective.

2.3 Limitations of Existing Generalized Second-Order Algorithms

If δ̇ does not admit a constant bound, then some generalizations of the STA may be employed. For example, if ψ in (1) is of affine form, ψ(r) = c0 + c1 r, for some positive constants c0 and c1, then

  |δ̇| ≤ ψ(|q̇|) = ψ(|x1 + q̇d|) ≤ c0 + c1 |q̇d| + c1 |x1|,

and the GSTA of [12], which includes linear terms, can achieve the objective because the perturbation w2 also admits a bound of affine form. If ψ grows even faster than linearly for large values of its argument and the perturbation term w2 admits a bound such as

  |w2| ≤ c0 + c1 |x1|^p,   p ≥ 1,

then the GSOA of [13] can be employed. However, if ψ in (1) can only be ensured to be of a form such as

  ψ(r) = e^r,   (8)

then not even this GSOA can ensure the objective. The existing generalizations of STA work for different perturbation bounds of specific forms. This chapter generalizes and unifies many of the existing generalizations by showing how the functions φ1 and φ2 that define the algorithm can be constructed based on the knowledge of disturbance bounds of general forms, hence the name disturbance-tailored super-twisting (DTST).

3 Disturbance-Tailored Super-Twisting: The Basics

This section explains how to perform the DTST construction. The DTST equations and the notation for the initial bound information are explained in Sect. 3.1. The method for performing the construction and subsequent tuning is given in Sect. 3.2. Application of this method to the example of Sect. 2 is illustrated in Sect. 3.3.

3.1 DTST Equations

The disturbance-tailored super-twisting (DTST) algorithm is given by the following equations:

  ẋ1 = −k1 ν1(|x1|) sign(x1) + x2 + w1,   (9a)
  ẋ2 = −k2 ν2(|x1|) sign(x1) + w2,   (9b)

where k1 > 0, k2 > 0, and ν1, ν2 satisfy the key property

  ν2 = ν1′ ν1,   (9c)

where ν1′ denotes the derivative of ν1. The terms w1 and w2 are considered perturbations, with known bounds

  |w1| ≤ ν̃1(|x1|),   |w2| ≤ ν̃2(|x1|),   (10)

where ν̃1, ν̃2 can be rather arbitrary, so long as ν̃1 vanishes as |x1| approaches 0. The aim is to make x1 converge to 0 in finite time even under the action of the perturbations w1 and w2. Note that (7) is of the form (9) with w1 ≡ 0, and that ν1, ν2 need to be defined only for nonnegative values of their arguments. STA is of the form (9) with, e.g., ν1(s) = s^{1/2} and ν2(s) = 1/2. Other existing generalizations of STA are also of this form, in particular satisfying (9c), including the GSTA with linear terms of [12] and the GSOA of [13].
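The key property (9c) can be checked symbolically for such special cases. The sketch below (assuming SymPy is available) verifies it for the standard STA pair and for a GSTA-type choice that adds a linear term; the exact normalization of that second choice is illustrative and not necessarily the one used in [12].

```python
import sympy as sp

s = sp.symbols('s', positive=True)

# Check the key property (9c), nu2 = nu1' * nu1, for two candidate nu1 choices.
for name, nu1 in [("STA", sp.sqrt(s)),                     # standard super-twisting
                  ("GSTA-type, with linear term", sp.sqrt(s) + s)]:
    nu2 = sp.expand(sp.diff(nu1, s) * nu1)
    print(f"{name}: nu2(s) = {nu2}")
# Expected output: the STA choice gives nu2 = 1/2 (a constant), while the
# GSTA-type choice gives nu2 = s + 3*sqrt(s)/2 + 1/2.
```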

3.2 DTST Construction and Tuning

The DTST construction requires two steps: first, the DTST algorithm functions ν1 and ν2 are constructed based on the given disturbance bounds; second, values for the tuning parameters k1 and k2 are selected.

3.2.1 DTST Algorithm Functions

In order to satisfy some technical assumptions that will later be specified (Sect. 4.2), the initially known perturbation bounds (10) may have to be upper bounded by other (nicer) functions ν̂1 and ν̂2:

  ν̃1 ≤ ν̂1   and   ν̃2 ≤ ν̂2.   (11)

As observed in (10), the functions ν̃1, ν̃2 and ν̂1, ν̂2 also need to be defined only for nonnegative values of their arguments. In correspondence with ν̂2, define the following auxiliary function:

  αν̂2(s) := d/ds ( 2 ∫₀ˢ ν̂2(r) dr )^{1/2} = { 0 if ∫₀ˢ ν̂2(r) dr = 0;  ν̂2(s) / ( 2 ∫₀ˢ ν̂2(r) dr )^{1/2} otherwise }.   (12)

Next, define¹

  γ(s) = γν̂1,ν̂2(s) := max{αν̂2(s), ν̂1(s)}   (13)

and set the functions ν1 and ν2 in (9) as

  ν1(s) := ∫₀ˢ γ(r) dr,   ν2(s) := ν1(s) γ(s),   (14)

where it is clear that (9c) is satisfied.

¹ Here, it becomes clear that one of the technical assumptions mentioned is that ν̂1 should be differentiable. This is not required for the initial bound ν̃1 and hence (11) is needed. The whole set of technical assumptions required for the construction to be well-defined is given in Sect. 4.
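For bounds that do not admit convenient closed-form integrals, the construction (12)–(14) can also be carried out numerically. The sketch below (assuming NumPy and SciPy; all function and variable names are mine, not from [5]) is only illustrative, and it cross-checks the result against the closed forms obtained for the example of Sect. 3.3. Note that the quadrature near s = 0 can be delicate, since α may behave like 1/√s there; closed forms are preferable whenever available.

```python
import numpy as np
from scipy.integrate import quad

def dtst_functions(nu1_hat, nu2_hat):
    """Numerical version of (12)-(14): given the upper bounds nu1_hat, nu2_hat,
    return the constructed algorithm functions nu1 and nu2."""
    def I(s):                       # I(s) = integral_0^s nu2_hat(r) dr
        return quad(nu2_hat, 0.0, s)[0]
    def alpha(s):                   # (12)
        Is = I(s)
        return 0.0 if Is == 0.0 else nu2_hat(s) / np.sqrt(2.0 * Is)
    def gamma(s):                   # (13)
        return max(alpha(s), nu1_hat(s))
    def nu1(s):                     # (14), nu1(s) = integral_0^s gamma(r) dr
        return quad(gamma, 0.0, s)[0]
    def nu2(s):                     # (14)
        return nu1(s) * gamma(s)
    return nu1, nu2

# Example of Sect. 3.3: nu1_hat = 0 and nu2_hat(s) = c0 + c1*exp(s).
c0, c1 = 1.0, 1.0
nu1, nu2 = dtst_functions(lambda s: 0.0, lambda s: c0 + c1 * np.exp(s))
# Compare with the closed forms nu1(s) = sqrt(2*c0*s + 2*c1*(exp(s)-1)) and
# nu2(s) = c0 + c1*exp(s):
print(nu1(1.0), np.sqrt(2*c0*1.0 + 2*c1*(np.e - 1.0)))
print(nu2(1.0), c0 + c1*np.e)
```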

3.2.2 DTST Algorithm Tuning

Selection of the values for k1 and k2 for achieving finite-time convergence is very simple. Once the functions ν1, ν2 have been constructed according to the perturbation bounds, then any values k1, k2 satisfying

  k1 > 1,   k2 > ( (k1 + 1) / (2(k1 − 1)) )² + 1   (15)

will ensure finite-time convergence. However, different choices will cause different convergence times. In Sect. 5, it will be shown that by performing this construction and selecting k1 and k2 as above, x1 converges to 0 with a convergence bound that is independent of the perturbations (Theorem 1). Moreover, conditions will be given for convergence to occur in a finite time that may depend on the initial conditions (Theorem 2), and an estimate of the convergence time will also be provided for specific cases (Proposition 7).
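As a quick worked check of (15): for k1 = 2 the right-hand side is ((2 + 1)/(2 · 1))² + 1 = 2.25 + 1 = 3.25, so any k2 > 3.25 is admissible. A one-line helper of this kind (purely illustrative, not from [5]) can be used to gate a chosen gain pair:

```python
def dtst_gains_ok(k1, k2):
    """Tuning test (15): k1 > 1 and k2 > ((k1+1)/(2*(k1-1)))**2 + 1."""
    return k1 > 1 and k2 > ((k1 + 1) / (2 * (k1 - 1))) ** 2 + 1

print(dtst_gains_ok(2.0, 3.5))   # True: 3.5 > 3.25
print(dtst_gains_ok(2.0, 3.0))   # False: 3.0 < 3.25
```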

3.3 DTST Construction Example

The DTST construction is next explained by means of the simple mechanical system of Sect. 2, with the bound ψ in (1) for the derivative of the matched perturbation δ being of exponential form, as in (8). From (1), (5), (8), and given that the speed reference q̇d and its derivatives q̈d and q⃛d are bounded, it follows that

  |w2| ≤ c0 + e^{|q̇|}/m = c0 + e^{|x1 + q̇d|}/m ≤ c0 + (e^{|q̇d|}/m) e^{|x1|} ≤ c0 + c1 e^{|x1|} =: ν̃2(|x1|),

for some positive constants c0 and c1. Since w1 ≡ 0 for this example, ν̃1 ≡ 0 can be adopted, so that (10) is satisfied. As will later become clear, these functions ν̃1 and ν̃2 are sufficiently "nice" so that in this case ν̂1 = ν̃1 and ν̂2 = ν̃2 in (11) can be selected. The next step is to compute αν̂2 according to (12):

  ∫₀ˢ ν̂2(r) dr = c0 s + c1 (e^s − 1),

  αν̂2(s) = (c0 + c1 e^s) / √( 2 c0 s + 2 c1 (e^s − 1) ).

According to (13) and since νˆ 1 ≡ 0, γ = ανˆ2 . The functions ν1 and ν2 required for the algorithm are then given by (14) which, taking into account (12) and that in this case γ = ανˆ2 , results in ν1 (s) =

2c0 s + 2c1 (es − 1),

ν2 (s) = νˆ 2 (s) = c0 + c1 es . Then, setting φ1 and φ2 as in (6) gives the desired control (2).

4 DTST: The Details This section contains the technical assumptions and results required for ensuring robust global stability and finite-time convergence of DTST. The material in this section is based on reference [5].

4.1 Initial Perturbation Bound Requirements The DTST construction is based on initial knowledge of bounds for the perturbations w1 , w2 appearing in the DTST equations (9). These bounds are given by the functions ν˜ 1 , ν˜ 2 : R≥0 → R≥0 in (10). The following definition and assumption give the basic technical assumptions required for ν˜ 1 , ν˜ 2 . Definition 1 ([5]) A pair of functions (ν1 , ν2 ), with ν1 : R≥0 → R≥0 and ν2 : R≥0 → R≥0 ∪ {∞} is said to be a DTST-IBP (Initial Bound Pair) if the following conditions are satisfied: (i) (ii) (iii) (iv) (v) (vi) (vii) (viii) (ix)

ν1 (0) = 0. ν1 is continuous. ν1 is continuously differentiable on R>0 . There exists a > 0 such that ν1 (s) ≥ 0 for all 0 < s < a. lims→0+ ν1 (s)ν1 (s) ∈ R≥0 ∪ {∞}. ν2 (s) ∈ R≥0 for all s > 0. ν2 is continuous on R>0 . lim

s s→0+ ν2 (s) = ν2 (0) ∈ R≥0 ∪ {∞}. 0 ν2 (r )dr < ∞ for all s > 0.

Generalized Super-Twisting with Robustness to Arbitrary Disturbances …

11

(x) Let t = inf{s ≥ 0 : ν2 (s) > 0}. If 0 < t < ∞, then ν2 is differentiable at t, and satisfies ν2 (t) = 0. Assumption 1 The given disturbance bounds ν˜ 1 , ν˜ 2 : R≥0 → R≥0 are such that (˜ν1 , ν˜ 2 ) is DTST-IBP. The functions ν1 , ν2 in (9) are constructed from the given disturbance bounds ν˜ 1 , ν˜ 2 . The latter functions have to be DTST-IBP, according to Assumption 1. The constructed ones, namely ν1 , ν2 , will also be DTST-IBP but should satisfy other properties, as well. Note that, according to Assumption 1, the given disturbance bounds are defined on the nonnegative reals and assume only nonnegative real values. Assumption 1 is very mild in practice, as next explained. If a given ν˜ 1 is not continuously differentiable and hence does not satisfy item (iii), it can then be suitably upper bounded (see, e.g., Lemma 2.5 of [1]). If a function ν˜ 1 satisfies items (i)–(iii), it is sufficient that lims→0+ ν˜ 1 (s) > 0 in order for item (iv) to be satisfied. As explained, the given ν˜ 2 must assume only real and nonnegative values. However, the ν2 constructed from the given ν˜ 1 , ν˜ 2 might turn out to satisfy ν2 (0) = ∞. Items (vi), (viii), and (ix) are written so that the constructed functions can also be DTST-IBP. Item (x) is required to prevent discontinuities in the constructed functions. This item can always be ensured replacing ν˜ 2 by a suitable upper bound, e.g., one for which the value of t in item (x) is 0. Example 1 Consider each of the following ν˜ 1 , ν˜ 2 : R≥0 → R≥0 , which satisfy Assumption 1. (a) ν˜ 1 (s) = 0, ν˜ 2 (s) = es . s 1/2 , ν˜ 2 (s) = 0. (b) ν˜ 1 (s) =  2s 1/2 if 0 ≤ s < 1, (c) ν˜ 1 (s) = s + 1 if s ≥ 1,  0 if 0 ≤ s < 2, ν˜ 2 (s) = 2 (s − 2) if s ≥ 2. Note that item (a) corresponds to the example of Sect. 3.3. The items of Definition 1 are simple to verify, including item (x). In case (a), one has t = 0 and in (b), t = ∞, ◦ so that item (x) trivially holds. Case (c) has t = 2, with ν2 (t − ) = 0 = ν2 (t + ).

4.2 DTST Function Admissibility The initial perturbation bounds ν˜ 1 , ν˜ 2 may be not suitable to define νˆ 1 = ν˜ 1 and νˆ 2 = ν˜ 2 for the construction (13)–(14) because the resulting functions ν1 , ν2 may fail to be well-defined. Functions suitable to be employed will be called DTST(pre)admissible. Definition 2 ([5]) A DTST-IBP pair of functions (ν1 , ν2 ) is said to be DTSTpreadmissible if either A(i)–A(ii) and C(i)–C(ii) or B(i)–B(ii) and C(i)–C(ii) are

12

H. Haimovich

satisfied. It is said to be DTST-admissible if A(i)–A(ii), B(i)–B(ii) and C(i)–C(ii) are all satisfied. ν1 (s) > 0 for all s > 0. lims→∞ ν1 (s) = ∞.

ν2∞(s) > 0 for every s > 0. 0 ν2 (s)ds = ∞. If lims→0+ ν1 (s)ν1 (s) = 0, then lims→0+ s 1/2 ν1 (s) = 0. (ii) If ν2 (0) = 0, then lims→0+ s 1/2 αν2 (s) = 0.

A. (i) (ii) B. (i) (ii) C. (i)

Upper bounds νˆ 1 , νˆ 2 for the given ν˜ 1 , ν˜ 2 can always be found so that (ˆν1 , νˆ 2 ) satisfies items A and B. If, in addition, νˆ 2 (0) > 0 then C(ii) becomes trivial. DTST-preadmissibility ensures the following properties of γ in (13). These properties cause the DTST algorithm (9) to be well defined under (14). Lemma 3 (Lemma 4.4 in [5]) Let (ˆν1 , νˆ 2 ) be DTST-preadmissible. Then, the function γ = γνˆ1 ,ˆν2 : R>0 → R≥0 defined via (12)–(13) is continuous and satisfies (16):

 lim γ (s)

s→0+



γ (s) > 0 for all s > 0, s

(16a)

γ (r )dr ∈ R≥0 ∪ {∞},

(16b)

γ (r )dr < ∞ for all s > 0, and

(16c)

γ (r )dr = ∞.

(16d)

0



s

0 ∞ 0

Although γ need not be defined at 0, ν2 (0) := lims→0+ ν1 (s)γ (s) should be welldefined. This is ensured by (16b), where an infinite limit is also allowed. From (14), this limit equals ν2 (0). Lemma 5 in Sect. 4.4 gives conditions for ν2 (0) to be finite.

4.3 Construction Properties The notation (ν1 , ν2 ) = C (ˆν1 , νˆ 2 ) will next be used to represent the construction of the functions ν1 , ν2 as per (12)–(14). The following result summarizes some properties of this construction. Lemma 4 (Lemma 4.6 in [5]) Let (ˆν1 , νˆ 2 ) be DTST-preadmissible and let ν1 , ν2 be constructed according to (12)–(14), i.e., (ν1 , ν2 ) = C (ˆν1 , νˆ 2 ). Then, (i) ν1 = αν2 . (ii) ν1 ≥ νˆ 1 , ν1 ≥ νˆ 1 , αν2 ≥ ανˆ2 , and ν2 ≥ νˆ 2 . (iii) (ν1 , ν2 ) is DTST-admissible.

Generalized Super-Twisting with Robustness to Arbitrary Disturbances …

13

(iv) If νˆ 2 = νˆ 1 νˆ 1 , then ν1 = νˆ 1 and ν2 = νˆ 2 . (v) C ◦ C = C , i.e., if (ξ1 , ξ2 ) = C (ν1 , ν2 ) then (ξ1 , ξ2 ) = (ν1 , ν2 ). (vi) Let (¯ν1 , ν¯ 2 ) be DTST-preadmissible. Suppose that there exists some 0 < ε ≤ ∞ so that ν¯ 1 (s) ≥ ν1 (s) and αν¯2 (s) ≤ αν2 (s) hold for all 0 ≤ s < ε. Let (ξ1 , ξ2 ) = C (¯ν1 , ν¯ 2 ). Then, ξ1 (s) = ν¯ 1 (s) for all 0 ≤ s < ε. (vii) Let (¯ν1 , ν¯ 2 ) be DTST-preadmissible. Suppose that there exists some 0 < ε ≤ ∞ so that ν¯ 1 (s) ≤ ν1 (s) and αν¯2 (s) ≥ αν2 (s) hold for all 0 ≤ s < ε. Let (ξ1 , ξ2 ) = C (¯ν1 , ν¯ 2 ). Then, ξ2 (s) = ν¯ 2 (s) for all 0 ≤ s < ε. Taking (12) into account, it follows that Lemma 4(i) is related to the fact that d (ν12 (s)). Lemma 4(ii) indicates that the constructed ν1 , ν2 ν2 (s) = ν1 (s)ν1 (s) = 21 ds are themselves disturbance bounds, since they upper bound νˆ 1 , νˆ 2 . Lemma 4(iv) shows that if (ˆν1 , νˆ 2 ) is DTST-preadmissible and the key property (9c) is satisfied by νˆ 1 , νˆ 2 , then it is sufficient to select (ν1 , ν2 ) = (ˆν1 , νˆ 2 ), and, by item (iii), (ˆν1 , νˆ 2 ) is also DTST-admissible. Item (v) shows that the construction is idempotent, meaning that iterating the construction gives the same result. Lemma 4(vi) and (vii) give conditions so that repeating the construction with one disturbance bound higher yields new bounds that do not increase the higher bound.

4.4 Finiteness of the Function ν2 at 0 The construction (ν1 , ν2 ) = C (ˆν1 , νˆ 2 ) may result in ν2 satisfying ν2 (0) = ∞. This is illustrated in the following example. Example 2 Consider the DTST-preadmissible pair (ˆν1 , νˆ 2 ) with νˆ 1 (s) = 3s 1/3 , νˆ 2 (s) = 0. According to (12)–(13) and (14), one may compute γ (s) = s −2/3 , ν1 (s) = νˆ 1 (s), ν2 (s) = 3s −1/3 , where ν2 (0) := lims→0+ ν2 (s) = ∞.



The following lemma gives necessary and sufficient conditions for ν2 (0) to be finite. Lemma 5 (Lemma 4.8 in [5]) Let (ˆν1 , νˆ 2 ) be DTST-preadmissible and let (ν1 , ν2 ) = C (ˆν1 , νˆ 2 ). Consider expressions (17)–(19). lim s 1/2 νˆ 1 (s) < ∞.

(17)

lim s 1/2 ανˆ2 (s) < ∞.

(18)

s→0+ s→0+

lim sup s −1/2 νˆ 1 (s) < ∞ and lim+ νˆ 2 (s) < ∞. s→0+

s→0

(19)

14

H. Haimovich

Then, the following implications hold: (17)–(18)

⇐⇒ ν2 (0) < ∞ =⇒

(19).

From (17), according to Lemma 5 it follows that the growth toward 0 of the derivative νˆ 1 cannot, in a sense, be higher than that of an inverse square root. Since DTSTpreadmissibility prerequires DTST-IBP, then νˆ 2 in particular has to satisfy the continuity requirement in items (vii)–(viii) of Definition 1. If 0 < νˆ 2 (0) < ∞, this continuity implies that ε > 0 exists so that √ 0 < m ≤ νˆ 2 (s) ≤ M for all 0 ≤ s ≤ ε. Recalls 1/2 M = M/ 2m < ∞, and hence (18) is satisfied. A delicate ing (12), s 1/2 ανˆ2 (s) ≤ √ 2ms case is νˆ 2 (0) = 0; in this case, C(ii) of Definition 2 forces lims→0+ s 1/2 ανˆ2 (s) = 0, making (18) hold. In this case, however, it is always possible to upper bound νˆ 2 so that νˆ 2 (0) > 0. The growth of νˆ 1 toward 0 is also limited by Lemma 5, being necessary (not sufficient) that this growth is at most of square-root form. In other words, if every function of the form Ms 1/2 with M > 0 lies below νˆ 1 in some right-neighborhood of 0, then the necessary condition lim sups→0+ s −1/2 νˆ 1 (s) < ∞ will not be satisfied (cf. Example 2, where νˆ 1 (s) = 3s 1/3 and lim sups→0+ s −1/2 νˆ 1 (s) = ∞).

5 Robust Stability and Finite-Time Convergence The main condition identifying DTST is Eq. (9c). This condition enables the linear framework and strict Lyapunov function analysis of [12, 14, 15] to be employed. The possibility of employing this type of non-smooth Lyapunov functions enabled to advance the theory of continuous sliding-mode controllers [3, 4, 7, 11] and is employed also for DTST. The linear framework is based on the nonlinear coordinate transformation     ν1 (|x1 |) sign(x1 ) ζ1 = T (x) = . (20) ζ = ζ2 x2 From (14) and (16), it is easy to verify that T is a global homeomorphism, i.e., T : R2 → R2 is continuous and has a continuous inverse T −1 : R2 → R2 , given by T −1 (ζ ) =



 ν1−1 (|ζ1 |) sign(ζ1 ) . ζ2

(21)

In the new coordinates ζ , the DTST equations (9) become “almost” linear, since direct computation of the derivatives yields ζ˙ = ν1 (|x1 |)A(δ(t, x))ζ, with

(22)

Generalized Super-Twisting with Robustness to Arbitrary Disturbances …

   a 1 −k1 + δ1 1 =: 11 , −k2 + δ2 0 a21 0 ⎤ ⎡ w (t, x) 1   sign(x1 ) δ (t, x) ⎥ ⎢ ν1 (|x1 |) δ(t, x) = 1 = ⎣w ⎦, 2 (t, x) δ2 (t, x) sign(x1 ) ν2 (|x1 |)

15



A(δ) :=

(23)

(24)

where (22) may be not valid when x1 = 0 because the derivative ν1 need not be finite at 0. The DTST functions ν1 , ν2 are themselves suitable disturbance bounds by (10), (11) and Lemma 4(ii), so that |w1 (t, x)| ≤ ν1 (|x1 |), |w2 (t, x)| ≤ ν2 (|x1 |).

(25)

Then, it happens that |δ1 (t, x)| ≤ 1,

|δ2 (t, x)| ≤ 1.

(26)

Global robust stability and finite-time convergence are then related to the matrix A(δ) being Hurwitz for all possible values of δ and the unboundedness of the scalar factor ν1 (|x1 |) in (22). The remainder of this section is based on reference [5].

5.1 Global Robust Stability Global robust stability refers to the fact that every trajectory converges at least asymptotically and has a convergence bound that depends on the initial condition but is independent of the possible perturbations. Definition 6 The DTST algorithm (9) is said to be globally asymptotically stable, robustly with respect to disturbances w1 , w2 satisfying (25), if the following conditions hold uniformly for every w1 , w2 satisfying (25): • (boundedness) for every δ > 0, there exists ε > 0 such that any solution x(·) with x(0) ≤ δ satisfies x(t) ≤ ε for all t ≥ 0; • (stability) for every ε > 0, there exists δ > 0 such that any solution x(·) with x(0) ≤ δ satisfies x(t) ≤ ε for all t ≥ 0; • (attractivity) for every 0 < ε ≤ δ, there exists T ≥ 0 such that any solution x(·) with x(0) ≤ δ satisfies x(t) ≤ ε for all t ≥ T . Theorem 1 Consider the DTST algorithm (9), with γ = ν1 satisfying (16), and where the disturbances w1 , w2 are bounded as |w1 (t, x)| ≤ ν1 (|x1 |), |w2 (t, x)| ≤ ν2 (|x1 |).

(27)

Let k1 , k2 satisfy (15). Then, the DTST algorithm is globally asymptotically stable, robustly with respect to w1 , w2 .

16

H. Haimovich

The proof of Theorem 1 follows from Propositions 3.1 and 3.2 of [5] and is based on the fact that the function V (x) = ζ T Pζ, with    2 r −p k1 + 2k2 − k1 −(k1 − 1) P= = 2 −(k1 − 1) −p 2 

(28)

and ζ , as in (20), is a strict Lyapunov function [15] for the DTST. Recalling (22), the derivative of V evaluated over a DTST trajectory is given by V˙ (t, x) = −ν1 (|x1 |)ζ T Q(δ(t, x))ζ, Q(δ) := −[A(δ)T P + P A(δ)]. The tuning requirements (15) ensure that Q satisfies λ¯ :=

min

|δ1 |≤1,|δ2 |≤1

λmin (Q(δ)) > 0,

(29)

and then, also taking (26) into account, ¯ 2 . V˙ (t, x) ≤ −ν1 (|x1 |)λζ

(30)

The right-hand side expression is negative whenever x1 = 0 (which implies ζ = 0), and tends to −∞ whenever x1 → 0 with x2 = 0.

5.2 Finite-Time Convergence By requiring some additional properties on the DTST functions, finite-time convergence, in addition to global asymptotic stability, can be established also robustly with respect to the possible disturbances. The following result shows that if ν1 , which bounds the disturbance w1 , satisfies additional conditions, then the DTST algorithm (9) provides finite-time convergence from every initial condition. Theorem 2 Consider the DTST algorithm (9), with γ = ν1 satisfying (16), and where the disturbances w1 , w2 are bounded as in (27). Let k1 , k2 satisfy (15). Suppose further that γ is non-increasing in some right-neighborhood of 0, i.e., there exists ε > 0 such that γ is non-increasing in (0, ε], and that the solutions of the scalar differential equation z˙ = −ν1 (z) converge to the origin in finite time for every z(0) ∈ (0, ε]. Then, each solution of (9) converges to the origin in finite time.

Generalized Super-Twisting with Robustness to Arbitrary Disturbances …

17

Proof This proof is based on a second strict Lyapunov function, such as the one considered in the proof of Proposition 3.4 in [5], given by   W (x) = ν1−1 αV 1/2 (x) , with V as in (28) for ζ = T (x) in (20), and where α > 0 is such that2 ζ  ≤ αV 1/2 (x)

(31)

holds for all x ∈ R2 . The derivative of W along the trajectories of (9) is α  W˙ (t, x) = (ν1−1 ) αV 1/2 (x) V −1/2 (x)V˙ (t, x). 2 From (30) and the quadratic form (28), it follows that V˙ (t, x) ≤ −ν1 (|x1 |)βV (x), for some3 β > 0. Combining this inequality with the expression for W˙ , and also using the fact that the derivative of an inverse function is the reciprocal of the derivative, yields ν1 (|x1 |) β   αV 1/2 (x), W˙ (t, x) ≤ − 2 ν1 ◦ ν1−1 αV 1/2 (x) where γ = ν1 is positive in R>0 . From (21), (31), and the fact that ν1−1 is increasing, then |x1 | = ν1−1 (|ζ1 |) ≤ ν1−1 (ζ ) ≤ ν1−1 (αV 1/2 (x)) = W (x).  The latter inequality  1/2 and the fact that ν1 = γ is non-increasing imply that whenever −1 W (x) = ν1 αV (x) ≤ ε, then

ν1 (|x1 |) ≥ ν1 ◦ ν1−1 (αV 1/2 (x))

(32)

β β W˙ ≤ − αV 1/2 (x) = − ν1 (W ). 2 2

(33)

and hence

By Theorem 1, for every initial condition there exists some time tx > 0 so that x(t), and hence W (x(t)) is as small as desired for all t ≥ tx , in particular less than ε. Then, (33) holds for all t ≥ tx . Using a comparison lemma, a time-scale change √ for example, α = 1/ λmin (P). 3 for example, β = λ ¯ /λmax (P). 2

18

H. Haimovich

dz τ = β2 t, and the finite-time convergence of dτ = −ν1 (z), it follows that there exists  t2 > tx such that W (x(t)) = 0 for all t ≥ t2 .

If the function ν1 in (9) is a power of its variable and satisfies the assumptions of Theorem 2, then the convergence time can also be estimated. Proposition 7 (Proposition 3.4 in [5]) Consider the DTST algorithm (9), where the disturbances w1 , w2 are bounded as in (27). Let k1 , k2 satisfy (15). If there exist constants h > 0 and −1 < q < 1 so that either ν1 satisfies (34) or ν2 satisfies (35)  ν1 (s) =

2h (q+1)/2 s , q +1

ν2 (s) = hs q ,

(34) (35)

and if x is any solution of (9) satisfying x(0) = x0 , then x(t) = 0 for all t ≥ t f , with t f given by  1−q   2 2 −1 T ν1 α ζ0 Pζ0 tf = , with (1 − q)γ¯ 

λ¯ β 2h , γ¯ = , α = 1/ λmin (P), β = λmax (P) 2 q +1

(36) (37)

ζ0 = T (x0 ) with T as in (20), P in (28), and λ¯ in (29). The proof of this proposition follows by particularizing the proof of Theorem 2 and evaluating the convergence of the function W from (33). The functions ν1 , ν2 as in (34)–(35) clearly satisfy ν2 = ν1 ν1 , as required by (9c). Hence, if either (34) or (35) holds, then both do. Also, ν2 constant corresponds to the case q = 0, which is covered by this proposition. If −1 < q < 0, then ν2 (s) is unbounded and satisfies ν2 (0) = ∞. For ν2 (0) to be finite, then 0 ≤ q < 1 is required. Better estimates of the convergence time can be obtained by means of the method developed in [17]. Under some additional requirements, this method can be extended for use in DTST. These ideas were pursued in [16] for the design of robust exact differentiators with prescribed convergence time.

6 Conclusions and Future Work The Disturbance-Tailored Super-Twisting (DTST) construction was explained, as well as some of its main properties. DTST allows to design a second-order slidingmode algorithm based on perturbation bound information, imposing only mild requirements and maintaining global robust stability and finite-time convergence. Future work is aimed at generalizing STA for control under time- and state-dependent

Generalized Super-Twisting with Robustness to Arbitrary Disturbances …

19

perturbations. In the latter case, it is usual that the perturbations depend on the control to be designed, and hence perturbation bounds require bounds on the control, which itself depends on the perturbations. DTST may be able to “break” this loop, allowing for suitable control design (see the very recent publication [6]).

References 1. Clarke, F.H., Ledyaev, Y.S., Stern, R.J.: Asymptotic stability and smooth Lyapunov functions. J. Differ. Equ. 149(1), 69–114 (1998) 2. Emel’yanov, S.V., Korovin, S.K., Levant, A.: High-order sliding modes in control systems. Comput. Math. Model. 7(3), 294–318 (1996) 3. Fridman, L., Moreno, J., Bandyopadhyay, B., Kamal, S., Chalanga, A.: Continuous nested algorithms: the fifth generation of sliding mode controllers. In: Yu, X., Efe, O. (eds.) Recent Trends in Sliding Mode Control, pp. 5–35. Studies in Systems, Decision and Control, vol. 24. Springer (2015) 4. Fridman, L., Moreno, J., Iriarte, R. (eds.): Sliding Modes After the First Decade of the 21st Century: State of the Art. Lecture Notes in Control and Information Sciences. Springer (2011) 5. Haimovich, H., De Battista, H.: Disturbance-tailored super-twisting algorithms: properties and design framework. Automatica 101, 318–329 (2019) 6. Haimovich, H., Fridman, L., Moreno, J.A.: Generalized super-twisting for control under timeand state-dependent perturbations: breaking the algebraic loop. IEEE Trans. Autom. Control (2022). https://doi.org/10.1109/TAC.2022.3183039 7. Laghrouche, S., Harmouche, M., Chitour, Y.: Higher order super-twisting for perturbed chains of integrators. IEEE Trans. Autom. Control 62(7), 3588–3593 (2017) 8. Levant, A.: Sliding order and sliding accuracy in sliding mode control. Int. J. Control 58(6), 1247–1263 (1993) 9. Levant, A.: Robust exact differentiation via sliding mode technique. Automatica 34(3), 379– 384 (1998) 10. Levant, A.: Universal single-input single-output (SISO) sliding-mode controllers with finitetime convergence. IEEE Trans. Autom. Control 46(9), 1447–1451 (2001) 11. Mercado-Uribe, J.A., Moreno, J.A.: Discontinuous integral action for arbitrary relative degree in sliding-mode control. Automatica 118(109018) (2020) 12. Moreno, J.A.: A linear framework for the robust stability analysis of a generalized supertwisting algorithm. In: 6th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE) 13. Moreno, J.A.: Lyapunov approach for analysis and design of second order sliding mode algorithms. In: Fridman L., Moreno J., Iriarte R. (eds.) Sliding Modes After the First Decade of the 21st Century. Lecture Notes in Control and Information Sciences, vol. 412. Springer, Berlin, Heidelberg (2011) 14. Moreno, J.A., Osorio, M.: A Lyapunov approach to second-order sliding mode controllers and observers. In: Proceedings of the 47th IEEE conference on Decision and Control, Cancun, Mexico, pp. 2856–2861 (2008) 15. Moreno, J.A., Osorio, M.: Strict Lyapunov functions for the super-twisting algorithm. IEEE Trans. Autom. Control 57(4), 1035–1040 (2012) 16. Seeber, R., Haimovich, H., Horn, M., Fridman, L., De Battista, H.: Robust exact differentiators with predefined convergence time. Automatica 134(109858) (2021) 17. Seeber, R., Horn, M., Fridman, L.: A novel method to estimate the reaching time of the supertwisting algorithm. IEEE Trans. Autom. Control 63(12), 4301–4308 (2018) 18. Utkin, V.I.: Variable structure systems with sliding modes. IEEE Trans. Autom. Control AC22(2), 212–222 (1977)

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems Wuquan Li and Miroslav Krstic

Abstract In this paper, the prescribed-time stabilization and inverse optimality problem is studied for strict-feedback nonlinear systems. Different from the existing dynamic high-gain scaling designs, we propose simpler designs with new scaled quadratic Lyapunov functions. The design can ensure that the equilibrium of the plant is prescribed-time stable. After that, we redesign a prescribed-time inverse optimal controller, which not only stabilizes the plant in the prescribed time, but also minimizes a meaningful cost functional. Finally, a simulation example is proposed to demonstrate the designs.

1 Introduction Prescribed-time control has drawn much attention due to its wide applications in tactical missile guidance [1] and other applications where the control objectives should be achieved in a short, finite amount of time. The merit of prescribed-time control is that the user can prescribe any convergence times. For deterministic systems, [2] solves the fixed-time regulation problem for nonlinear systems; [3] compares the traditional finite-time control with the prescribed-time control; [4] studies the prescribed-time distributed control of networked multiple systems; [5, 6] focus on observers and output-feedback designs for linear systems in the prescribed-time sense. Krishnamurthy et al. [7, 8] solve the prescribed-time designs for strict-feedback-like nonlinear systems; [9, 10] study the prescribed-time control problems for two classes of partial differential equations; [11] designs prescribed-time boundary observers for coupled diffusion-reaction systems; [12, 13] address the prescribed-time control W. Li (B) School of Mathematics and Statistics Science, Ludong University, Yantai 264025, China e-mail: [email protected] M. Krstic Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_2

21

22

W. Li and M. Krstic

problems with bounded time-varying gains. When it turns to stochastic systems, [14] is the first paper to solve the stochastic nonlinear prescribed-time design problems, including state-feedback design and inverse optimality design; Unlike the scalingfree quartic Lyapunov function design in [14, 15] uses scaled quartic Lyapunov functions to reduce the control effort; Recently, [16] proposes stochastic prescribedtime output- feedback designs for systems without/with sensor uncertainty. The inverse optimal control aims to solve a Hamilton–Jacobi equation and results in a controller, which minimizes a meaningful cost functional. In this direction, [17–20] solve the inverse optimal control design problems of nonlinear systems. However, the designs in [17–20] can only achieve global asymptotic stability. To our knowledge, the prescribed-time inverse optimal control of nonlinear systems is an open topic. Motivated by the above discussions, we first study the prescribed-time stabilization problem, then solve the inverse optimality problem. The contributions of this chapter are two-fold: (1) We propose a novel non-scaling design framework for nonlinear strict-feedback systems in this chapter. Although the system model considered is a special case of [7] where more uncertainties are considered, our design is different from that in [7]. Specifically, the design in [7] is based on a complex dynamic high-gain scaling technique with a temporal transformation. Our non-scaling design is based on novel scaled quadratic Lyapunov functions and a simpler design procedure is developed. In addition, our designs are also different from the designs in [14– 16] which can only deal with nonlinear functions satisfying the linear growth condition. New design and stability techniques should be developed since we consider systems with more general growth conditions. (2) We develop a novel prescribed-time inverse optimal design in this paper. In [17–20], only asymptotic stabilization controllers are designed to minimize cost functionals. In this chapter, we design a prescribed-time stable controller, which also minimizes a cost functional characterized by time-varying value functions. The rest of this chapter is organized as follows. In Sect. 2, we formulate the problem. In Sect. 3, we present prescribed-time controller design and stability analysis. In Sect. 4, we analyze the prescribed-time optimality. In Sect. 5, we use a simulation example to show the effectiveness of the theoretical results. In Sect. 6, we give some concluding remarks. Finally, we collect some useful inequalities in the appendix.

2 Problem Formulation In this chapter, we consider the prescribed-time control of third-order nonlinear systems, which can be easily generalized to arbitrary-order systems by induction.

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

23

Consider the systems described by x˙1 = x2 + f 1 (t, x), x˙2 = x3 + f 2 (t, x),

(1) (2)

x˙3 = u + f 3 (t, x),

(3)

where x = (x1 , x2 , x3 )T ∈ R 3 and u ∈ R are the system state and control input. The function f i : R + × R 3 → R is continuous in t, locally bounded and locally Lipschitz continuous in x uniformly in t ∈ R + , f i (t, 0) = 0, i = 1, 2, 3. To proceed, we need the following assumption for system (1)–(3). Assumption 1 For i = 1, 2, 3, there exists a nonnegative smooth function f i1 (x¯i ) such that | f i (t, x)| ≤ (|x1 | + · · · + |xi |) f i1 (x¯i ),

(4)

where x¯i = (x1 , . . . , xi )T . Define the scaling function:  μ(t) =

T t0 + T − t

m , ∀ t ∈ [t0 , t0 + T ),

(5)

where T > 0 is the freely prescribed time and m ≥ 2 is a integer. Obviously, μ(t) is increasing on [t0 , t0 + T ) with μ(t0 ) = 1 and lim μ(t) = t→t0 +T

+∞. The objective of this chapter is, to design a prescribed-time state-feedback controller for system (1)–(3) to ensure that the plant is prescribed-time stable in time T . Then, we solve the inverse optimal stabilization problem in time T . Remark 1 Assumption 1 is a general assumption for nonlinear systems. From (4), the nonlinear function f i (t, x) can tolerate a large class of bounded disturbances, including time-varying disturbances or state-dependent disturbances. In addition, the growth condition is more general than the linear growth condition used in stochastic systems [14–16], thus new design method should be developed in this chapter and this design cannot be covered by the stochastic prescribed-time stabilization designs in [14–16].

3 Prescribed-Time Controller Design and Stability Analysis In this section, we first design the prescribed-time controller, then analyze the prescribed-time stability.

24

W. Li and M. Krstic

3.1 Prescribed-Time Controller Design Step 1. Define ξ1 = x 1 , 1 V1 = ξ12 , 2

(6) (7)

from (1), (4) and (6)–(7) we have V˙1 = ξ1 x2 + ξ1 f 1 (x1 ) ≤ ξ1 (x2 − x2∗ ) + ξ1 x2∗ + ξ12 f 11 (x1 ).

(8)

α1 (x1 ) = 3 + f 11 (x1 ),

(9)

Choosing

x2∗

= −α1 μξ1 ,

(10)

which substitutes into (8) yields V˙1 ≤ −3μξ12 + ξ1 (x2 − x2∗ ).

(11)

Step 2. Define ξ2 = x2 − x2∗ , by (10) we get ξ2 = x2 + α1 μξ1 .

(12)

It follows from (2), (5), (9)–(10) and (12) that ξ˙2 = x3 + f 2 +

m 1+1/m dα1 μ α1 ξ1 + μ(α1 + x1 )(x2 + f 1 ). T d x1

(13)

Choose the new scaled Lyapunov function V2 = V1 +

1 2 ξ . 2μ2 2

(14)

By (11), (13) and (14) we get 1 m V˙2 ≤ −3μξ12 + ξ1 ξ2 + 2 ξ2 x3 − μ−2+1/m ξ22 μ T  1  m 1+1/m dα1 + 2 ξ2 f 2 + μ α1 ξ1 + μ(α1 + x1 )(x2 + f 1 ) . μ T d x1 By Lemma 1 we obtain

(15)

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

ξ1 ξ2 ≤

1 2 1 2 μξ1 + ξ . 2 2μ 2

25

(16)

From (4), (6) and (12) we have      f 2 + m μ1+1/m α1 ξ1 + μ(α1 + dα1 x1 )(x2 + f 1 )   T d x1  m dα1   ≤ (|x1 | + |x2 |) f 21 (x¯2 ) + μ1+1/m α1 ξ1 + μα1 + x1 (|x2 | + |x1 | f 11 (x1 )) T d x1   ≤ μ2 |ξ1 | + μ|ξ2 | ρ2 (x¯2 ), (17) where ρ2 (x¯2 ) ≥ 0 is a smooth function. From (17) and Lemma 1 we have   ξ2 m 1+1/m dα1 f μ + α ξ + μ(α + x )(x + f ) 2 1 1 1 1 2 1 μ2 T d x1    m dα1 |ξ2 |  x1 )(x2 + f 1 ) ≤ 2  f 2 + μ1+1/m α1 ξ1 + μ(α1 + μ T d x1 1 ≤ |ξ1 ||ξ2 |ρ2 (x¯2 ) + ρ2 (x¯2 )ξ22 μ  1 2 1 1 ρ2 (x¯2 ) + ρ22 (x¯2 ) ξ22 . ≤ μξ1 + 2 μ 2

(18)

Substituting (16) and (18) into (15) yields  1 1 1 1 1 + ρ2 (x¯2 ) + ρ22 (x¯2 ) ξ22 . (19) V˙2 ≤ −2μξ12 + 2 ξ2 (x3 − x3∗ ) + 2 ξ2 x3∗ + μ μ μ 2 2 If we choose 5 1 + ρ2 (x¯2 ) + ρ22 (x¯2 ), 2 2 x3∗ = −α2 μξ2 ,

α2 (x¯2 ) =

(20) (21)

then we have 1 1 V˙2 ≤ −2μξ12 − 2 ξ22 + 2 ξ2 (x3 − x3∗ ). μ μ

(22)

Step 3. Let ξ3 = x3 − x3∗ , by (21) we obtain ξ3 = x3 + α2 μξ2 . From (3), (5), (20) and (23) we have

(23)

26

W. Li and M. Krstic

 m 1+1/m m μ α2 ξ2 + μα2 x3 + f 2 + μ1+1/m α1 ξ1 T T  dα   dα1 dα2 2 +μ(α1 + x1 )(x2 + f 1 ) + μξ2 (x2 + f 1 ) + (x3 + f 2 ) . (24) d x1 d x1 d x2

ξ˙3 = u + f 3 +

We choose the new scaled Lyapunov function V3 = V2 +

1 2 ξ . 2μ4 3

(25)

It follows (22) and (24)–(25) that 1 1 2m −4+1/m 2 1 1 μ V˙3 ≤ −2μξ12 − 2 ξ22 + 2 ξ2 ξ3 − ξ3 + 4 ξ3 u + 4 ξ3 μ μ T μ μ   m 1+1/m m 1+1/m dα1 · f3 + μ α2 ξ2 + μα2 x3 + f 2 + μ α1 ξ1 + μ(α1 + x1 ) T T d x1  dα   dα2 2 (x2 + f 1 ) + (x3 + f 2 ) . (26) ·(x2 + f 1 ) + μξ2 d x1 d x2 By Lemma 1 we get 1 1 2 1 2 ξ + ξ2 ξ3 ≤ ξ . μ2 2μ 2 2μ3 3

(27)

From (6), (12) and (23) we get |x1 | = |ξ1 |, |x2 | ≤ α1 μ|ξ1 | + |ξ2 |,

(28) (29)

|x3 | ≤ α2 μ|ξ2 | + |ξ3 |.

(30)

From (4), (28)–(30), there exists a nonnegative smooth function ρ3 (x¯3 ) such that   m 1+1/m m dα1 μ α2 ξ2 + μα2 x3 + f 2 + μ1+1/m α1 ξ1 + μ(α1 + x1 )(x2 + f 1 ) T T d x1  dα  dα2 2 (x2 + f 1 ) + (x3 + f 2 ) +μξ2 d x1 d x2 f3 +

≤ ρ3 (x¯3 )(μ3 |ξ1 | + μ2 |ξ2 | + μ|ξ3 |).

By Lemma 1 we have

(31)

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

1 |ξ3 |(μ3 |ξ1 | + μ2 |ξ2 | + μ|ξ3 |)ρ3 (x¯3 ) μ4 1 1 1 = ρ3 (x¯3 )|ξ1 ||ξ3 | + 2 ρ3 (x¯3 )|ξ2 ||ξ3 | + 3 ρ3 (x¯3 )ξ32 μ μ μ  1 2 1  3 ξ2 + 3 ρ3 (x¯3 ) + ρ32 (x¯3 ) ξ32 . ≤ μξ12 + 2μ μ 4

27

(32)

Substituting (27) and (31)–(32) into (26) yields  1 1 1  1 3 V˙3 ≤ −μξ12 − ξ22 + 4 ξ3 u + 3 ρ3 (x¯3 ) + + ρ32 (x¯3 ) ξ32 . μ μ μ 2 4

(33)

If we choose 3 3 + ρ3 (x¯3 ) + ρ32 (x¯3 ), 2 4 u(x) = −μα3 (x¯3 )(μα2 (x¯2 )(μx1 α1 (x1 ) + x2 ) + x3 ),

α3 (x¯3 ) =

(34) (35)

then from (33)–(35) we get 1 1 V˙3 ≤ −μξ12 − ξ22 − 3 ξ32 . μ μ

(36)

Remark 2 Although the system (1)–(3) is a special case of [7] where more uncertainties are considered, we propose a simpler non-scaling design with scaled quadratic Lyapunov functions in this section, which is different from the dynamic high-gain design in [7].

3.2 Prescribed-Time Stability Analysis Next, we analyze the stability of system (1)–(3). Theorem 1 Consider the plant consisting of (1)–(3) and (35). If Assumption 1 holds, then the equilibrium of the closed-loop system is prescribed-time stable with lim x = lim u = 0.

t→t0 +T

t→t0 +T

Proof From (7), (14) and (25) we have V3 =

1 2 1 2 1 2 ξ + ξ , ξ + 2 1 2μ2 2 2μ4 3

(37)

which and (36) shows that V˙3 ≤ −2μV3 .

(38)

28

W. Li and M. Krstic

Let V = V3 (t)e

2

t t0

μ(s)ds

.

(39)

By (38) and (39) we get 2 V˙ = e

t t0

μ(s)ds

(V˙3 + 2μV3 ) ≤ 0.

(40)

Thus we have V3 (t) ≤ V3 (t0 )e ≤ V3 (t0 )e

−2

t t0

μ(s)ds

2 − m−1 Tm



1 (t0 +T −t)m−1

1 − T m−1

, ∀ t ∈ [t0 , t0 + T ).

(41)

By (41) we obtain |ξ | ≤ μ2 (2V3 (t))1/2 

1 1 1 − m−1 T m (t +T −t) 2 m−1 − T m−1 0 ≤ μ 2V3 (t0 )e , ∀ t ∈ [t0 , t0 + T ),

(42)

where ξ = (ξ1 , ξ2 , ξ3 )T . From (6), (7), (37) and (41) we get

|x1 | = |ξ1 | ≤ 2V3 (t) 

1 1 − 1 T m (t +T −t) m−1 − T m−1 0 ≤ 2V3 (t0 )e m−1 , ∀ t ∈ [t0 , t0 + T ),

(43)

from which we obtain lim x1 = 0.

t→t0 +T

(44)

By (12) and (42) we get |x2 | ≤ μα1 (x1 )|ξ1 | + |ξ2 | √ ≤ 2μα1 (x1 )|ξ | 

1 1 √ − 1 T m (t +T −t) m−1 − T m−1 0 , ∀ t ∈ [t0 , t0 + T ). (45) ≤ 2μ3 α1 (x1 ) 2V3 (t0 )e m−1 Noting lim α1 (x1 ) = α1 (0),  1 1 1 m − T m−1 3 − m−1 T (t0 +T −t)m−1 = 0, lim μ e

t→t0 +T

t→t0 +T

(46) (47)

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

29

from (45) we have lim x2 = 0.

t→t0 +T

(48)

It can be inferred from (30) and (42) that √ 2μα2 (x¯2 )|ξ | 

1 1 √ 3 − 1 T m (t +T −t) m−1 − T m−1 0 , ∀ t ∈ [t0 , t0 + T ). (49) ≤ 2μ α2 (x¯2 ) 2V3 (t0 )e m−1

|x3 | ≤

From (44) and (48) we have lim α2 (x¯2 ) = α2 (0),

(50)

lim x3 = 0.

(51)

lim x = 0.

(52)

t→t0 +T

which and (47) yields that t→t0 +T

By (44), (48) and (51) we get t→t0 +T

From (35) and (46) we get |u| ≤ α3 μ|ξ3 | 

1 1 − 1 T m (t +T −t) m−1 − T m−1 0 , ∀ t ∈ [t0 , t0 + T ). ≤ α3 μ3 2V3 (t0 )e m−1

(53)

By (52) we obtain lim α3 (x¯3 ) = α3 (0),

t→t0 +T

(54)

which and (47), (53) shows taht lim u = 0.

t→t0 +T

(55)

Thus, the theorem is proved.

4 Prescribed-Time Inverse Optimality In this section, we solve the prescribed-time optimal problem for system (1)–(3).

30

W. Li and M. Krstic

By (1)–(3) we get ⎡

⎤ ⎡ ⎤ x2 + f 1 0 x˙ = ⎣ x3 + f 2 ⎦ + ⎣ 0 ⎦ u = Φ(t, x) + G 1 u. 1 f3

(56)

We get the following results. Theorem 2 Consider the plant consisting of (1)–(3). If Assumption 1 holds, then the controller u ∗ (x) = −βμα3 (x¯3 )(μα2 (x¯2 )(μx1 α1 (x1 ) + x2 ) + x3 ), β ≥ 2, ∀ t ∈ [t0 , t0 + T ) (57)

makes the plant achieve prescribed-time stabilization and minimizes 

t0 +T

J (u) = 2βV3 (t0 + T, x(t0 + T )) +

 l(t, x(t)) +

t0

 1 2 (t) dt, u α3 μ5 (t)

(58)

where 

 ∂ V3 ∂ V3 1 l(t, x) = −2β + Φ + 3 β 2 α3 ξ32 ∂t ∂x μ 2β |x|2 ≥ 3 μ (3 + μ2 α12 + μ2 α22 )

(59)

is positive definite and radially unbounded but not necessarily decrescent. Proof From (6), (12) and (23) we get ⎡

⎤ 1 0 0 x=⎣ −α1 μ 1 0 ⎦ξ, 0 −α2 μ 1

(60)

which yields that |ξ |2 ≥

1 3+

μ2 α12

+ μ2 α22

|x|2 .

(61)

Noting β ≥ 2, from (37), (38) and (61) we have l(t, x) ≥ 4βμV3 ≥

2β |x|2 . μ3 (3 + μ2 α12 + μ2 α22 )

(62)

Since μ3 (3+μ22β is positive and |x|2 is a positive definite, l(t, x) is well defined, α12 +μ2 α22 ) which shows that J (u) is meaningful.

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

31

We first prove that (57) stabilizes (56). From (36)–(37) and β ≥ 2, we get β V˙3 |(57) = − 3 α3 ξ32 + μ 1 ≤ − 3 α3 ξ32 + μ ≤ −2μV3 .

∂ V3 ∂ V3 + Φ ∂t ∂x ∂ V3 ∂ V3 + Φ ∂t ∂x (63)

By (63) and Theorem 1, (57) stabilizes (56) with lim x = lim u = 0. t→t0 +T

t→t0 +T

Next, we prove optimality. From (58)–(59) we have 

 1 2 u dt α3 μ5 (t) t0   1 2 2β V˙3 |(56) + l(t, x(t)) + dt u α3 μ5 (t)   1 1 1 2 u 2β 4 ξ3 u + 3 β 2 α3 ξ32 + dt. (64) μ μ α3 μ5 (t)

J (u) = 2βV3 (t0 + T, x(t0 + T )) +  = 2βV3 (t0 , x(t0 )) +

t0 +T

t0

 = 2βV3 (t0 , x(t0 )) +

t0 +T

t0

t0 +T

 l(t, x(t)) +

By Lemma 2 we get − 2β

 √   2 1 1 2 − ξ u = β α ξ u √ 3 3 3 μ4 μ3/2 β α3 u 5/2 1 1 u 2 (t). ≤ 3 β 2 α3 ξ32 + μ α3 μ5 (t)

(65)

The equality in (65) holds when u ∗ (t, x) = −βα3 μξ3 = −βα3 μ(α2 μ(α1 μx1 + x2 ) + x3 ).

(66)

Thus, we get the minimum of (58) with u(t, x) = u ∗ (t, x) in (66), and min J (u) = 2βV3 (t0 , x(t0 )). u

(67)

Thus, the theorem is true. Remark 3 From Theorem 2, the inverse optimal controller (57) can make system (1)–(3) achieve prescribed-time stable with lim x = lim u = 0 while the cont→t0 +T

t→t0 +T

trollers in [17–20] can only ensure global asymptotic stability. The cost functional in (58) is accumulated over the time interval [t0 , t0 + T ). Noting l(t, x) is positive definite and radially unbounded but not necessarily decrescent, the integrand of J (u) is

32

W. Li and M. Krstic

nonnegative, which penalizes certain undesirable states and controls. Different from those in [17–20], the penalty is characterized by the scaling function μ. Remark 4 In Theorem 2, V3 (t, x) solves the following Hamilton- -Jacobi–Bellman equations with β ∈ [2, +∞) ∂ V3 l(t, x) ∂ V3 1 βα3 ξ32 + + Φ− = 0. ∂t ∂x 2μ3 2β

(68)

Remark 5 The results in this chapter can be easily generalized to the following general systems: x˙i = x2 + ϕi (t, x), 1 ≤ i ≤ n − 1, x˙n = u + ϕn (t, x)

(69) (70)

where the nonlinear function should satisfy |ϕi (t, x)| ≤ (|x1 | + · · · + |xi |)ϕi1 (x¯i ), 1 ≤ i ≤ n,

(71)

with x¯i = (x1 , . . . , xi )T . The virtual controllers x2∗ , . . . , xk∗ can be designed as x2∗ = −α1 (x1 )μξ1 , x3∗ = −α2 (x¯2 )μξ2 , .. .

.. .

xk∗ = −αk−1 (x¯k−1 )μξk−1 ,

ξ1 = x 1 , ξ2 = x2 − x2∗ ,

(72) (73)

∗ ξk−1 = xk−1 − xk−1 ,

(74)

and the controller as u = −αn (x¯n )μξn ,

(75)

where α1 (x1 ), . . . , αn (x¯n ) are positive smooth functions. We choose 1 1 2  ξ1 + ξi2 , 2(i−1) 2 2μ i=2 n

V =

(76)

then we have V˙ ≤ −μξ12 −

n  i=2

1 ξ 2 ≤ −2μV. μ2i−3 i

(77)

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

33

Fig. 1 Mass-spring mechanical system

For system (69)–(70), from (77) we can get similar results as those in Theorems 1 and 2.

5 A Simulation Example Consider a mass-spring system described in Fig. 1, Let y be the displacement. The system is described as [21] m y¨ + F f + Fsp = u,

(78)

where F f is the resistive force and Fsp is the restoring force of the spring. We assume that F f = c y˙ and Fsp = ky(1 + ay 2 ) (a > 0), where c, k and a are parameters. With x1 = y and x2 = y˙ , from (78) we have x˙1 = x2 , kx1 (1 + ax12 ) cx2 u − − . x˙2 = m m m

(79) (80)

Choosing m = 1, k = 0.2, a = 1, c = 0.1, by following the design in Sect. 3 with t0 = 0, T = 1, m = 2, we get the controller as u = −μ(3.6 + 35.3(1 + x12 )2 )(x2 + 2μx1 ).

(81)

For simulation, we choose the initial conditions as x1 (0) = −1, x2 (0) = 3. Figure 2 gives the response of the closed-loop system (79)–(81). From Fig. 2, we find that lim x = lim u = 0, which shows that prescribed-time stabilization is achieved. t→1

t→1

Therefore, the effectiveness of the design in this chapter is demonstrated. By Theorem 2, we obtain that the controller u ∗ (x) = −βμ(3.6 + 35.3(1 + x12 )2 )(x2 + 2μx1 ), β ≥ 2, ∀ t ∈ [t0 , t0 + T ) (82)

34

W. Li and M. Krstic 4

x

1

States

2

x2

0 −2

0

0.1

0.2

0.3

0.5

0.4

0.6

0.7

0.8

1

0.9

Time(Sec) Controller

100

u

0 −100 −200

0

0.1

0.2

0.3

0.5

0.4

0.6

0.7

0.8

0.9

1

Time(Sec)

Fig. 2 The response of the closed-loop system (79)–(81)

solves the prescribed-time inverse optimal problem by minimizing 

t0 +T

J (u) = 2βV2 (t0 + T, x(t0 + T )) + t0

 l(t, x(t)) +

 1 u 2 (t) dt, 3 α2 μ (t)

(83)

where α2 = 3.6 + 35.3(1 + x12 )2 , 1 1 V2 = x12 + (x2 + 2μx1 )2 , 2 2μ2   1 ∂ V2 ∂ V2 + β 2 α2 ξ22 . + l(t, x) = −2β ∂t ∂ x2 μ

(84) (85) (86)

6 Concluding Remarks In this chapter we have addressed the prescribed-time stabilization and inverse optimality control problems for strict-feedback nonlinear systems. By using new scaled quadratic Lyapunov functions, the designed controller guarantees that the equilibrium of the plant is prescribed-time stable. In addition, we redesign the controller to minimize a cost functional and stabilize the plant simultaneously. For the nonlinear prescribed-time control, many open issues are worth investigating, such as generalizing the results in this chapter to output-feedback control, the prescribed-time control with measurement noise or rounding errors, etc. Acknowledgements This work is supported by National Natural Science Foundation of China under Grant (No. 61973150), Taishan Scholars Program of Shandong Province of China under

Prescribed-Time Stabilization and Inverse Optimality for Nonlinear Systems

35

Grant (No. tstp20221133), and Shandong Province Higher Educational Excellent Youth Innovation team (No.2019KJN017).

Appendix In this Appendix, we collect two lemmas which are used in the design and analysis. Lemma 1 ([19]) For (x, y) ∈ R 2 , the following Young’s inequality holds: xy ≤

νp p 1 |x| + q |y|q , p qν

(87)

where ν > 0, the constants p > 1 and p > 1 satisfy ( p − 1)(q − 1) = 1. Lemma 2 ([22]) For any two vectors x and y, γ and γ˙ are both K∞ functions, we have x T y ≤ γ (|x|) + γ (|y|),

(88)

where the equality is achieved if and only if y = γ˙ (|x|)

x . |x|

(89)

References 1. Zarchan, P.: Tactical and Strategic Missile Guidance. In: Progress in Astronautics and Aeronautics, 5th edn. (2007) 2. Song, Y.D., Wang, Y.J., Holloway, J.C., et al.: Time-varying feedback for robust regulation of normal-form nonlinear systems in prescribed finite time. Automatica 83, 243–251 (2017) 3. Song, Y.D., Wang, Y.J., Krstic, M.: Time-varying feedback for stabilization in prescribed finite time. Int. J. Robust Nonlinear Control 29(3), 618–633 (2019) 4. Wang, Y.J., Song, Y.D., Hill, D.J., et al.: Prescribed finite time consensus and containment control of networked multi-agent systems. IEEE Trans. Cybern. 49(4), 1138–1147 (2019) 5. Holloway, J., Krstic, M.: Prescribed-time observers for linear systems in observer canonical form. IEEE Trans. Autom. Control 64(9), 3905–3912 (2019) 6. Holloway, J., Krstic, M.: Prescribed-time output feedback for linear systems in controllable canonical form. Automatica 107, 77–85 (2019) 7. Krishnamurthy, P., Khorrami, F., Krstic, M.: A dynamic high-gain design for prescribed-time regulation of nonlinear systems. Automatica 115, 108860 (2020) 8. Krishnamurthy, P., Khorrami, F., Krstic, M.: Robust adaptive prescribed-time stabilization via output feedback for uncertain nonlinear strict-feedback-like systems. Eur. J. Control 55, 14–23 (2020) 9. Steeves, D., Krstic, M., Vazquez, R.: Prescribed-time H1 -stabilization of reaction-diffusion equations by means of output feedback. In: Proceedings of 2019 European Control Conference 1932–1937 (2019)

36

W. Li and M. Krstic

10. Steeves, D., Krstic, M., Vazquez, R.: Prescribed-time estimation and output regulation of the linearized Schr¨odinger equation by backstepping. Eur. J. Control 55, 3–13 (2020) 11. Camacho Solorio, L., Vazquez, R., Krstic, M.: Boundary observers for coupled diffusionreaction systems with prescribed convergence rate. Syst. Control Lett. 135, 104586 (2020) 12. Chitour, Y., Ushirobira, R., Bouhemou, H.: Stabilization for a perturbed chain of integrators in prescribed time. SIAM J. Control Optim. 58(2), 1022–1048 (2020) 13. Gomez-Gutierrez, D.: On the design of nonautonomous fixed-time controllers with a predefined upper bound of the settling time. Int. J. Robust Nonlinear Control 30(10), 3871–3885 (2020) 14. Li, W.Q., Krstic, M.: Stochastic nonlinear prescribed-time stabilization and inverse optimality. IEEE Trans. Autom. Control 67(3), 1179–1193 (2022) 15. Li, W.Q., Krstic, M.: Prescribed-time control of stochastic nonlinear systems with reduced control effort. J. Syst. Sci. Complex. 34(5), 1782–1800 (2021) 16. Li, W.Q., Krstic, M.: Prescribed-time output-feedback control of stochastic nonlinear systems. IEEE Trans. Autom. Control 68(3), 1431–1446 (2023) 17. Freeman, R.A., Kokotovic, P.V.: Robust Nonlinear Control Design: State-Space and Lyapunov Techniques. Birkhauser, Boston, MA (1996) 18. Krstic, M., Li, Z.H.: Inverse optimal design of input to state stabilizing nonlinear controllers. IEEE Trans. Autom. Control 43(3), 336–350 (1998) 19. Krstic, M., Deng, H.: Stabilization of Uncertain Nonlinear Systems. Springer, New York (1998) 20. Sepulchre, R., Jankovic, M., Kokotovic, P.V.: Constructive Nonlinear Control. Springer (1997) 21. Khalil, H.K.: Nonlinear Systems, 3rd edn. Prentice Hall (2002) 22. Hardy, G., Littlewood, J.E., Polya, G.: Inequalities, 2nd edn. Cambridge (1989)

Designing Controllers with Predefined Convergence-Time Bound Using Bounded Time-Varying Gains Rodrigo Aldana-López, Richard Seeber, Hernan Haimovich, and David Gómez-Gutiérrez

Abstract Recently, there has been a great deal of attention in a class of controllers based on time-varying gains, called prescribed-time controllers, that steer the system’s state to the origin in the desired time, a priori set by the user, regardless of the initial condition. Furthermore, such a class of controllers has been shown to maintain a prescribed-time convergence in the presence of disturbances even if the disturbance bound is unknown. However, such properties require a time-varying gain that becomes singular at the terminal time, which limits its application to scenarios under quantization or measurement noise. This chapter presents a methodology to design a broader class of controllers, called predefined-time controllers, with a prescribed convergence-time bound. Our approach allows designing robust predefinedtime controllers based on time-varying gains while maintaining uniformly bounded time-varying gains. We analyze the condition for uniform Lyapunov stability under the proposed time-varying controllers.

R. Aldana-López Departamento de Informatica e Ingenieria de Sistemas (DIIS), Universidad de Zaragoza, María de Luna, s/n, 50018 Zaragoza, Spain R. Seeber Christian Doppler Laboratory for Model Based Control of Complex Test Bed Systems, Institute of Automation and Control, Graz University of Technology, Graz, Austria e-mail: [email protected] H. Haimovich Centro Internacional Franco-Argentino de Ciencias de la Información y de Sistemas (CIFASIS), Universidad Nacional de Rosario (UNR), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Ocampo y Esmeralda, 2000 Rosario, Argentina e-mail: [email protected] D. Gómez-Gutiérrez (B) Intelligent Systems Research Lab, Intel Labs, Intel Corporation, Av. del Bosque 1001, 45019 Zapopan, Jalisco, Mexico e-mail: [email protected] Instituto Tecnológico José Mario Molina Pasquel y Henríquez, Unidad Académica Zapopan, Tecnológico Nacional de México, Cam. Arenero 1101, 45019 Zapopan, Jalisco, Mexico © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_3

37

38

R. Aldana-López et al.

1 Introduction Stabilizing a system in finite time in the presence of disturbance is one of the main features of sliding-mode control [20]. However, in finite-time stability, the convergence time may be an unbounded function of the system’s initial condition. Thus, knowledge on the region of admissible initial conditions may be needed to deal with scenarios under time constraints. Time constraints are often present, for instance, in fault detection, isolation, and recovery schemes [24], where failing to recover from the fault on time may lead to an unrecoverable mode, or in missile guidance applications [26], where the control guidance laws require stabilization in the desired time [21]. A class of finite-time stabilization exists, called fixed-time stabilization, with a convergence-time bound independent of the initial conditions, which make it attractive to deal with time constraints. Multiple methods have been developed to obtain fixed-time stabilization, such as Lyapunov differential inequalities [16] and homogeneity theory [3]. However, not every technique allows setting a priori the desired upper bound for the convergence time, as a convergence-time bound estimate may be unknown [4]. Thus, developing methods for fixed-time stabilization with a convergence-time bound defined a priori by the user has recently received a great deal of attention [1, 11, 17, 21]. On the one hand, autonomous fixed-time controllers have been explored in [1, 7, 11, 16, 27], with emphasis on estimating an upper bound for the settling time (UBST ) of the closed-loop system. Although methodologies for obtaining the least UBST have been proposed in the literature, see e.g. [1, 2], this approach has proven challenging for higher-order systems, resulting in very conservative estimations of an UBST [27], and yielding over-engineered controllers with an unnecessarily large control magnitude. On the other hand, prescribed-time controllers based on time-varying gains have been proposed in [21, 22], which have the remarkable property that, for any nonzero initial condition, its convergence time is precisely the desired one, and that no information on the disturbance bound is needed to steer the system’s state to the origin. Unfortunately, the methodology requires time-varying gains that tend to infinity at the terminal time, which is problematic under quantization or measurement noise. Therefore, controllers with a predefined convergence time, taking advantage of timevarying gains while maintaining them bounded, are of great interest [6, 10]. Furthermore, it is essential to analyze the uniform (with respect to time) stability property when using controllers based on time-varying gains, as the absence of uniform stability may lead to an inherent lack of robustness. However, to the best of our knowledge, such analysis is missing in the existing prescribed-time control literature. In this chapter, we present a methodology for designing robust controllers such that the origin of its closed-loop system is fixed-time stable with a desired UBST, i.e., predefined-time controllers. Our analysis is based on relating the closed-loop system with an auxiliary system through a time-varying coordinate change and a time-scale transformation. The methodology is motivated by an analysis of the first-

Designing Controllers with Predefined Convergence-Time Bound …

39

order case. It is shown that applying it to a linear controller leads to a minimum energy solution. It generally allows to reduce the required control energy also when redesigning other controllers. Based on the auxiliary system’s stability properties, interesting features are obtained in the closed-loop system under the proposed controller. Such an approach allows deriving a controller with the desired convergence time regardless of the initial condition, as well as predefined-time controllers with uniformly bounded time-varying gains. Finally, this methodology is complemented by studying the uniform Lyapunov stability property, providing necessary and sufficient conditions such that our methodology yields a uniformly Lyapunov stable closed-loop system’s equilibrium. Additionally, we show that our approach yields existing autonomous controllers as an extreme case, while the use of time-varying gains provides extra degrees of freedom for reducing the control effort. The chapter is organized as follows: in Sect. 2 we present the example of a straightforward first-order system exhibiting interesting convergence properties and from which our general strategy using time-scale transformations arises. In Sect. 3 we provide some preliminaries on fixed-time stability and our problem of interest regarding the design of controllers with predefined convergence-time bound. In Sect. 4 we provide a methodology to solve this problem in some particular cases, including firstand second-order systems. We discuss some disadvantages of some prescribed-time convergence algorithms proposed in the literature, where the time-varying gains are unbounded. Finally, in Sect. 5 we introduce the main result of this chapter, which is the design methodology for arbitrary-order controllers with predefined convergencetime bound. In addition, we discuss the need to consider bounded time-varying gains by examining the uniform Lyapunov stability property. Notation: We use boldface lower case letter for vector and boldface capital letters for matrices. The notation J := [ai j ] ∈ Rn×n denotes a single Jordan block with zero eigenvalue, i.e., a square matrix with ai j = 1 if j = i + 1 and ai j = 0 otherwise. The vector bi ∈ Rn denotes a vector with one in the i-th entry and zeros otherwise. ¯ + = R+ ∪ {∞}. For a function φ : I → J , its Let R+ = {x ∈ R : x ≥ 0} and R −1 reciprocal φ(τ ) , τ ∈ I , is such that φ(τ )−1 φ(τ ) = 1 and its inverse function φ −1 (t), t ∈ J , is such that φ(φ −1 (t)) = t. Given a matrix A ∈ Rn×m , A T represents its transpose. For a signal y : R+ → R, y (i) (t) represents its i−th derivative with respect to time at time t. To denote a first-order derivative of y(t), we simply use the notation y˙ (t). Simulations: Throughout the chapter, simulations are performed on OpenModelica using the Euler integration method with step size 1e-5 and tolerance 1e-6.

2 Motivating Example Consider a first-order integrator x˙ = u + d(t)

40

R. Aldana-López et al.

where |d(t)| ≤ Δ with a nonnegative constant Δ. The aim is to design a feedback control law such that the origin is reached in a desired prespecified time Tc . Let us first consider the unperturbed case, i.e., where Δ = 0. To derive a controller, start from an auxiliary system dx = −x, (1) dτ written in an artificial time variable τ , whose solution is x(τ ) = x0 exp(−τ ). Our approach is to use a time-scale transformation τ = ϕ(t) such that system (1), written with respect to the time variable t, reaches the origin at t = Tc . For this transformation to be a suitable time-scale transformation, it must be: strictly increasing, differentiable, satisfy limt→Tc− ϕ(t) = ∞ and ϕ(0) = 0 (a characterization of such time-scale transformations is given in [2]). A simple example of a time-scale transformation with the above requirements is τ = ϕ(t) = − ln(1 − Tc−1 t) whose inverse is given by t = ϕ −1 (τ ) = Tc (1 − exp(−τ )) . Thus, the dynamics of (1) in t-time can be written, according to the chain rule, as   dx dτ dx = · dt dτ τ =ϕ(t) dt 1 x =− (Tc − t) with a solution

x(ϕ(t)) = x0 · (1 − Tc−1 t).

Clearly, lim x(ϕ(t)) = 0.

t→Tc−

Therefore, a controller u=−

1 x, (Tc − t)

(2)

steers the state of the unperturbed integrator to the origin at a time Tc . Let us now consider the case where Δ = 0, under the controller (2). To analyze its convergence, let us now rewrite the closed-loop system dynamics in τ -time

Designing Controllers with Predefined Convergence-Time Bound …

41

  dx dx dt = · dτ dt t=ϕ −1 (τ ) dτ   1 = − · Tc exp(−τ ) x + d(t) (Tc − t) t=ϕ −1 (τ ) = −x + Tc exp(−τ )d(ϕ −1 (τ )). The solution thus satisfies    τ   exp(−(τ − ξ ))Tc exp(−ξ )d(ϕ −1 (ξ ))dξ  |x(τ )| = x0 exp(−τ ) + 0    τ   = x0 exp(−τ ) + Tc exp(−τ ) d(ϕ −1 (ξ ))dξ  0

≤ exp(−τ )(|x0 | + ΔTc τ ). Therefore, limτ →∞ x(τ ) = 0. Hence, in t-time, limt→Tc− x(t) = 0. To maintain the state at the origin after Tc , regardless of the disturbance, we can combine the controller (2) with a sliding-mode controller as follows:  u=

− (Tc1−t) x for t ∈ [0, Tc ) −Δsign(x) otherwise.

Thus, we can summarize the following remarkable properties of this approach: • For every nonzero initial condition, the origin is reached precisely at Tc , regardless of the initial conditions and without knowledge of the disturbance bound (although notice that to maintain the state at the origin after Tc knowledge on the disturbance bound is required). Unfortunately, the approach also presents the following drawback: 1 , tends to infinity Tc − t as the time tends to Tc . This is problematic under quantization or measurement noise.

• The time-varying gain of the controller, namely, the factor

In the remainder of the chapter we develop a methodology to design controllers that converge to the origin with a predefined convergence-time bound. Additionally, we provide sufficient conditions for our methodology to yield bounded time-varying gains.

42

R. Aldana-López et al.

3 Preliminaries and Problem Statement 3.1 Fixed-Time Stability and Settling-Time Function Consider the system x˙ = f(x, t) + bn d(t), ∀t ≥ 0,

(3)

where x ∈ Rn is the state of the system, t ∈ [0, +∞) is time, bn = [0, . . . , 0, 1]T , and d is a disturbance satisfying |d(t)| ≤ Δ, for a constant d < ∞.1 The set of admissible disturbances is denoted by . The solution of (3), with disturbance d and initial condition x0 is denoted by x(t; x0 , d). If d(t) ≡ 0 we simply write x(t; x0 ). Furthermore, consider the origin to be an equilibrium point of (3) for every admissible disturbance, meaning that x(t; 0, d) = 0 for all t ≥ 0. Definition 1 (Settling-time function) The settling-time function of system (3) is ¯ +, defined as T : Rn → R   ¯ T (x0 ) := inf ξ ∈ R+ : ∀d ∈ , lim x(t; x0 , d) = 0 . t→ξ

Notice that Definition 1 admits T (x0 ) = ∞. Definition 2 (Finite-time stability) The origin of system (3) is said to be finite-time stable if it is asymptotically stable [12] and its settling-time function is finite for every x0 , i.e., T (x0 ) < ∞ for all x0 ∈ Rn . Definition 3 (Fixed-time stability) The origin of system (3) is said to be fixed-time stable if it is finite-time stable and its settling-time function T (x0 ) is uniformly bounded on Rn , i.e., there exists Tmax ∈ R+ \ {0} such that supx0 ∈Rn T (x0 ) ≤ Tmax . Then, Tmax is said to be a UBST of system (3).

3.2 Problem Statement Consider a chain of integrators 1 In the spirit of Filippov’s interpretation of differential equations, solutions of (3) are understood as any absolutely continuous function that satisfies the differential inclusion obtained by applying the Filippov regularization to f(•, •) (See [9, Page 85]), allowing us to consider f(•, •) discontinuous in the first argument. In the usual Filippov’s interpretation, it is assumed that f(x, t) has an integrable majorant function of time for any x, ensuring existence and uniqueness of solutions in forward time. However, in this work we deal with f(x, t) for which no majorant function exists, but existence and uniqueness of solutions are still guaranteed by an argument similar to [2]. In particular, existence of solutions follows directly from the equivalence of solutions to a well-posed Filippov system via the time-scale transformation.

Designing Controllers with Predefined Convergence-Time Bound …

x˙i = xi+1 ,

43

i = 1, . . . , n − 1

x˙n = u(x, t; Tc ) + d(t)

(4)

where the disturbance d(t) satisfies |d(t)| ≤ Δ with a known constant Δ, x = [x1 , . . . , xn ]T . We aim to design a controller u(x, t; Tc ) to steer the system to the origin before the desired time Tc a priori set by the user, i.e., the controller u(x, t; Tc ) is such that the origin of the closed-loop system is fixed-time stable with a predefined UBST given by Tc . Definition 4 The controller u(x, t; Tc ) is called: • a predefined-time controller if the settling-time function of the closed-loop system satisfies supx0 ∈Rn T (x0 ) ≤ Tc < ∞. • a prescribed-time controller if for all x0 = 0 the settling-time function of the closed-loop system satisfies T (x0 ) = Tc < ∞. Notice that prescribed-time controllers ensure convergence with an UBST given by Tc . Thus, prescribed-time controllers are a subclass of predefined-time ones, with the remarkable property that the settling-time function is precisely Tc . Our approach is a hybrid controller of the form:  u(x, t; Tc ) =

φ(x, t; Tc ) for t ∈ [0, Tc ) w(x; Δ) otherwise

(5)

where the time-varying controller φ(x, t; Tc ) should drive the state of the system to the origin with a convergence-time bound specified a priori by the parameter Tc and the robust controller w(x; Δ) should maintain the system at the origin in spite of the bounded disturbance d(t). Since the design of robust sliding-mode controllers w(x; Δ) is well understood, see, e.g., [8, 19, 20, 25], in the rest of the chapter we focus on the design of the controller φ(x, t; Tc ) and restrict the analysis to the interval [0, Tc ).

4 First-Order Controllers Consider the time-scale transformation τ = ϕ(t) = ln((1 − ηTc−1 t)− α ) 1

with constant positive parameters α, η, and Tc . Its inverse is given by t = ϕ −1 (τ ) = η−1 Tc (1 − exp(−ατ )) , together with the time-varying gain

(6)

44

R. Aldana-López et al. η − 1 Tc

α = 0.1 α = 0.2 α = 1.0

κ (t)

t-time

Tc η

τ -time

t-time

Fig. 1 Example of a time-scale transformation (left) and its related time-varying gain (right) with η = 1 and Tc = 1

κ(t) :=

η dτ = . dt α(Tc − ηt)

(7)

Such a time-scale transformation is illustrated in Fig. 1. Notice that, as τ tends to infinity, t approaches η−1 Tc , and limt→η−1 Tc− κ(t) = ∞. This property will be exploited to design an asymptotically stable system in τ -time and transform it into a predefined-time system in t-time, as explained next. Consider a first-order system x˙ = φ(x, t; Tc ) + d(t),

t ∈ [0, Tc )

(8)

with the controller φ(x, t; Tc ) = κ(t)v(x),

(9)

where x ∈ R, and v(x) is a virtual control to be defined below. System (8) in τ -time is given by  dx  dt dx = · dτ dt t=η−1 Tc (1−exp(−ατ )) dτ where

dt = αη−1 Tc exp(−ατ ) = κ(ϕ −1 (τ ))−1 . dτ

Thus, κ(ϕ −1 (τ )) = and

η exp(ατ ) αTc

dx = v(x) + αη−1 Tc exp(−ατ )d(ϕ −1 (τ )). dτ

Notice that, since |d(ϕ −1 (τ ))| ≤ Δ, then the disturbance term

(10)

(11)

(12)

Designing Controllers with Predefined Convergence-Time Bound …

αη−1 Tc exp(−ατ )d(ϕ −1 (τ ))

45

(13)

becomes vanishing in τ -time. Thus, if v(x) is chosen such that (12) is asymptotically stable with a settling-time function T (x0 ), due to the time-scale transformation (6), the settling-time function of (8) is given by T (x0 ) = η−1 Tc (1 − exp (−αT (x0 ))) . Thus, by an appropriate selection of v(x) and η, we can obtain a predefined-time controller (5). One drawback of the controller (9) is that, if v(x) contains discontinuous terms, then φ(x, t; Tc ) will have discontinuous terms that are increasing beyond what is necessary to cancel out the disturbance effect, possibly producing large chattering. For instance, notice that with v(x) = −sign(x) and η = 1, (12) is finite-time stable but φ(x, t; Tc ) = − α(T1c −t) sign(x). To address this important issue, consider the following generalization of the controller in (9), with an additional degree of freedom ρ ∈ [0, 1]: ˜ −1 κ(t)ρ x) φ(x, t; Tc ) = βκ(t)1−ρ v(β ˜ is an auxiliary controller to where β ≥ (αη−1 Tc )1−ρ , κ(t) is given in (7), and v(•) be specified below. To analyze the stability of the closed-loop system, consider the coordinate change: z = β −1 κ(t)ρ x, together with the time-scale transformation in (6). Noticing that −1 = ακ(t), κ(t)κ(t) ˙

(14)

the dynamics in the z-coordinates is given by ˙ + κ(t)v(z) ˜ + β −1 κ(t)ρ d(t) z˙ = ρκ(t)−1 κ(t)z  = κ(t) ραz + v(z) ˜ + β −1 κ(t)ρ−1 d(t) . Thus, from (10) and (11), it follows that the dynamics in z-coordinates and τ -time is given by dz = ραz + v(z) ˜ + β −1 (αη−1 Tc )1−ρ exp(−α(1 − ρ)τ )d(ϕ −1 (τ )). dτ Notice that π(τ ) = β −1 (αη−1 Tc )1−ρ exp(−α(1 − ρ)τ )d(ϕ −1 (τ )) satisfies

46

R. Aldana-López et al.

|π(τ )| ≤ Δ exp(−α(1 − ρ)τ ) and, therefore, with the ρ parameter, we can specify the rate at which π(τ ) vanishes. Moreover, with ρ = 1, π(τ ) is no longer a vanishing disturbance. Thus, choosing the auxiliary controller as v(z) ˜ = v(z) − αρz yields dz = v(z) + π(τ ). dτ

(15)

Thus, we can take advantage of existing robust controllers for (15), and the settlingtime function will become   T (x0 ) = η−1 Tc 1 − exp −αT (β −1 κ(0)ρ x0 ) . Furthermore, with ρ = 1, if v(x) contains an additive discontinuous term (designed to cope with the disturbance π(τ )), those terms will not be multiplied by κ(t) in φ(x, t; Tc ), and thus will not have its magnitude increased beyond what is necessary to reject the disturbance without increasing chattering. For instance, with ρ = 1, η = 1 and v(x) = −sign(x) we obtain φ(x, t; Tc ) = −βsign(x) −

1 x. (Tc − t)

4.1 Prescribed-Time Controllers In this subsection, we focus on controllers v(x), such that the settling-time function of the auxiliary closed-loop system (12) satisfies T (x0 ) = ∞, ∀x0 = 0 and we choose η = 1. Since in the τ -time the disturbance becomes vanishing, then, any Input-toState Stabilizing controller [23] can be applied as v(x) to stabilize system (12), even without knowledge of the disturbance bound Δ. This is because, for any bounded disturbance d(·), in τ -time the disturbance term (13) goes to zero as the τ -time goes to infinity. However, knowledge on Δ is required to maintain the state at the origin after the time Tc . Therefore, with the controller (5), the settling time of the closed-loop system (8) is T (x0 ) = Tc , i.e., the convergence occurs precisely at Tc regardless of the initial condition x0 . The following proposition provides a first-order prescribed-time controller with minimum energy among all controllers driving the system state from x(0) = x0 to x(Tc ) = 0.

Designing Controllers with Predefined Convergence-Time Bound …

47

Proposition 1 Let d(t) = 0. Then, the trajectory x(t) resulting from controller (5) where v(x) = −x, and κ(t) is given in (7) with α = 1 and η = 1, under the constraints x(0) = x0 and x(Tc ) = 0, minimizes the energy function 

Tc

E Tc =

u(ξ )2 dξ.

(16)

0

Proof Using x(t) ˙ = u(t), one can build a Lagrangian for (16) as L(t, x, x) ˙ = x˙ 2 . Hence, the well-known Euler–Lagrange equations [13, Page 38]: d dt



∂L ∂ x˙



∂L =0 ∂x

lead to x¨ = 0, ∀t ∈ (0, Tc ). Thus, the resulting trajectories which minimize (16) must be of the form x(t) = c1 + c2 t, ∀t ∈ [0, Tc ] for some constants c1 , c2 . This, along with boundary conditions x(0) = x0 , x(Tc ) = 0 leads to x(t) = x0 (1 − Tc−1 t), which  satisfies x˙ = κ(t)v(x) = − Tc1−t x, ∀t ∈ [0, Tc ), concluding the proof. The main drawback of prescribed-time controllers is that the origin of (8) is reached as the time-varying gain tends to infinity, which is problematic under noise or limited numerical precision. One may suggest, as a workaround to maintain the time-varying gain bounded, to consider, instead of controller (5), the controller  u(x, t; Tc ) =

φ(x, t; Tc ) for t ∈ [0, tstop ) w(x; Δ) otherwise

where tstop < Tc . Unfortunately, with φ(x, t; Tc ) = −κ(t)x, the state at tstop grows linearly with x0 , as illustrated in the following example. Example 1 Consider a prescribed-time controller with v(x) = −x, with η = 1, α = 1 and Tc = 1 and set tstop = 0.9. The trajectories for different initial conditions are shown in Fig. 2; notice that x(tstop ) = x0 (1 − Tc−1 tstop ) = 0.1x0 . A similar case occurs by taking v(x) = c(1 − exp(−|x|))sign(x),

(17)

where c ≥ 1, with such controller, the origin of system (12) is asymptotically stable. Thus, we take η = 1. A predefined-time controller (5) with v(x) as in (17) was proposed in [14]. The trajectories with c = 10 for different initial conditions are shown in Fig. 3, in this case x(tstop ) is also an unbounded function of the initial condition x0 .

tstop

x(tstop )

R. Aldana-López et al.

x(t)

48

t-time

x0

x(t)

x(tstop )

tstop

Fig. 2 Simulation of the first-order prescribed-time controller, for different initial conditions, with φ(x, t; Tc ) = −κ(t)x and Tc = 1. It can be observed that, the state x(tstop ) at a time tstop grows linearly with |x0 |. Here we choose tstop = 0.9

t-time

x0

Fig. 3 Simulation of the first-order prescribed-time controller, for different initial conditions, with φ(x, t; Tc ) = −κ(t)c(1 − exp(−|x|))sign(x), with c = 10 and Tc = 1. It can be observed that, the state x(tstop ) at a time tstop grows with |x0 |. Here we choose tstop = 0.9

4.2 Predefined-Time Controllers with Bounded Time-Varying Gains As discussed above, prescribed-time controllers have the remarkable property that the settling-time function of the closed-loop system is precisely Tc . Still, they present a major drawback: the time-varying gain grows to infinity as the trajectory goes zero. Our approach to maintain the gain finite at the reaching time is to choose v(x) such that T (x0 ) < ∞, i.e., such that the origin of (12) is finite-time stable. Then, the origin of (8) is reached before the singularity in κ(t) occurs. Moreover, a bounded time-varying gain can be obtained by choosing v(x) such that sup T (x0 ) ≤ T f < ∞

x0 ∈R

Designing Controllers with Predefined Convergence-Time Bound …

49

for a known T f , i.e., such that the origin of (12) is fixed-time stable with a known UBST. Then, by choosing   (18) η =: 1 − exp −αT f (notice that η < 1) with the controller (5), the origin is reached in a predefined-time Tc and κ(t) is uniformly bounded by κ(t) ≤ κmax :=

exp(αT f ) − 1 for t ∈ [0, Tc ), αTc

(19)

with a settling-time function bounded by Tc . Moreover, if supx0 ∈R T (x0 ) = T f , then supx0 ∈R T (x0 ) = Tc . Notice that the convergence is obtained before the desired time Tc , instead of precisely at time Tc , as with prescribed-time controllers.

4.3 On Reducing the Energy with Time-Varying Gains It is important to highlight that, in the first-order case, one can also obtain an autonomous predefined-time controller based on a fixed-time stable system with a known least UBST, such as those proposed in [1, 16], by using the trivial time-scale transformation Tc τ, (20) t= Tf which result in the predefined-time controller u(t) = −

Tf v(x) Tc

where Tc is the least UBST. Figure 4 illustrates how by using the time-scale transformation (6), the time-varying gain becomes bounded when a fixed-time controller v(x) with UBST given by T f is used, and it is contrasted with the static time-scale transformation (20) and its associated gain for predefined-time control. Our approach yields this trivial time-scaling as a special case as α tends to zero, T since limα→0 ϕ −1 (τ ) = TTcf τ and limα→0 κ(t) = Tcf . As shown in the following proposition, even in the case where there already exists an autonomous fixed-time controller with least UBST, our approach provides an extra degree of freedom to reduce the energy (16), used by the controller to drive system (8) from x(0) = x0 to x(Tc ) = 0, as well as to reduce the control magnitude supt∈[0,Tc ) (|u(t)|). Proposition 2 Let the scalar system x˙ = v(x) be such that its settling-time function satisfies supx0 ∈R T (x0 ) ≤ T f < ∞ for a known T f and v(•)2 is non-decreasing for nonnegative arguments. Using such v(x), T f and some α ≥ 0, construct a control u(t) as in (5) with η defined in (18) for the scalar system x˙ = u. Denote the energy

50

R. Aldana-López et al. η −1 Tc t = η −1 Tc (1 − exp(−ατ )) t=

Tc Tf

Tc η

κ max

τ κ (t)

t-time

Tc

Tf

Tc Tf Tc

τ -time

t-time

Fig. 4 Comparison of the proposed time-scale transformation against the trivial scalar scaling. On the subplot on the left shows that if the system in τ -time has an UBST given by T f , then the system in the t-time has a UBST given by Tc . The subplot on the right illustrates how the time-varying gain is uniformly bounded

E(α) = E Tc as defined in (16) for such α ≥ 0. Then, there always exist αx∗0 ∈ R which may depend on x0 such that E(αx∗0 ) ≤ E(α), ∀α ≥ 0. In particular, E(αx∗0 ) < E(0), ∀x0 = 0. T Proof First, write the energy as E(α) = 0 c κ(ξ )2 v(x(ξ ; x0 ))2 dξ using (16), where x(t; x0 ) is the solution of x˙ = u with x(0; x0 ) = x0 . Now, make the change of variables τ = ϕ(ξ ) from (6) which leads to 

ϕ(Tc )

E(α) = 

0 Tf

=



0

κ(ϕ −1 (τ ))2 v(x(ϕ −1 (τ )))2



1

κ(ϕ −1 (τ )) η exp(ατ ) v(x(ϕ −1 (τ )))2 dτ αTc



where dτ = κ(ξ )dξ was used from (7). Now, note that x(ϕ −1 (τ )) is the solution to dx = v(x) which follows from (12) since there is no disturbance. Hence, x(ϕ −1 (τ )) dτ does not depend on α. Now, it is straightforward to verify that: d lim α→0 dα



Tf η exp(ατ ) = (τ − T f /2) αTc Tc

 where we used η = 1 − exp −αT f , from which it follows:  Tf

η d exp(ατ ) v(x(τ ))2 dτ α→0 dα 0 αTc  Tf Tf (τ − T f /2)v(x(τ ))2 dτ = Tc 0

E (0) = lim

Designing Controllers with Predefined Convergence-Time Bound …

=

Tf Tc



T f /2

(τ − T f /2)v(x(τ ))2 dτ +

0

Tf Tc



Tf T f /2

51

(τ − T f /2)v(x(τ ))2 dτ

  T f T f /2 T f T f /2 =− τ v(x(T f /2 − τ ))2 dτ + τ v(x(τ + T f /2))2 dτ Tc 0 Tc 0  T f T f /2  τ v(x(τ + T f /2))2 − v(x(T f /2 − τ ))2 dτ. = Tc 0 Now, consider x0 > 0. Hence, x(•) is a decreasing function and x(τ + T f /2) ≤ x(T f /2 − τ ) ∀τ ∈ [0, T f /2]. Therefore v(x(τ + T f /2))2 ≤ v(x(T f /2 − τ ))2 ∀τ ∈ [0, T f /2]. Hence, E (0) ≤ 0 with equality only if x0 = 0 which is excluded in the proposition. Thus, one obtains the strict inequality E(α) < E(0) for α in some neighborhood of 0. Now, note that limα→∞ αTη c exp(ατ ) = +∞ so that limα→∞ E(α) = +∞. Hence, there exists α¯ > 0 sufficiently big, such that E(0) ≤ E(α), ∀α ≥ α. ¯ Combining these facts, there must exist an optimal α ∗ > 0 such that E(α ∗ ) ≤ E(α), ∀α ≥ 0 due to continuity of E(α). The strict inequality E(α ∗ ) < E(0) follows  from α ∗ > 0 concluding the proof.

t im e

Optimal α = 1.5 α = 1.0 α = 0.5 α = 0.1 α = 0.001

u(t)

E(t) =

x(t)

t

0

|u(ξ )|2 dξ

2

1

 Example 2 Let d(t) = 0 and the controller v(x) = − (a|x| p + b|x|q )k sign(x), with a, b, p, q and k as in Theorem 2 in the appendix and ζ ≥ Δ. Thus, T f = γ with γ as in (44) from that theorem. Then, with the controller (5) the origin is reached )−1 . Figure 5 illustrates the trajectories for in a predefined-time Tc and κmax = exp(αγ αTc the case when a = 4, b = 41 , k = 1, p = 0.9, q = 1.1 (it follows from Theorem 2, that γ = 15.71) obtained for different selections of α, with x0 = 100, as well as the Energy E Tc and the control signal u(t) obtained in each case. Notice that, on the one hand, a minimum energy prescribed-time controller is obtained in Proposition 1, but requires time-varying gains that tend to infinity. On the other hand, a predefined-time autonomous controller can be obtained by taking limα→0 κ(t), but with an energy function significantly larger than with the prescribed-time controller. However, by

time

time

Fig. 5 Simulation of a first-order predefined-time controller with bounded time-varying gains. Different values of the parameter alpha are shown, illustrating that a suitable selection of α allows to reduce the energy and the control magnitude

52

R. Aldana-López et al.

tuning α, the energy E Tc and control magnitude supt∈[0,Tc ) (|u(t)|) can be significantly reduced while maintaining the time-varying gain bounded.

4.4 Redesigning Fixed-Time Stabilizing Controllers Using Bounded Time-Varying Gains: The Second-Order Case In Sect. 4.2 we argued that, even in the case where a fixed-time controller with a known least UBST already exists, our approach allows to reduce the total energy required by the controller to drive the state of the system to the origin. For higher-order systems, it is well known that the UBST of fixed-time autonomous controllers, which are commonly based on Lyapunov analysis [15] or homogeneity theory [3], is very conservative, resulting in over-engineered controllers with large energy requirements (16) and large control signals. Thus, having predefined-time controllers that allow reducing such over-engineering is even more relevant in the high-order case. Frequently, robust fixed-time controllers v(z) are presented in the literature for system: dz 1 = z2 dτ dz 2 = v(z) + d(t). dτ

(21)

where d(t) satisfy |d| ≤ Δ for a known positive constant Δ. Assume that v(z) is such that the origin of (21) is asymptotically stable and its settling-time function T (z0 ) satisfies sup T (z0 ) ≤ T f z0 ∈R2

for a known T f < ∞. The approach described below can also be used to redesign finite-time controllers whose initial condition is bounded with a known settling-time function, see, e.g., [18]. To illustrate how to take advantage of such a controller and its estimation of the settling-time function to obtain predefined-time controllers based on time-varying gains, consider a more general predefined-time controller ˜ −1 κ(t)ρ x1 , β −1 κ(t)ρ−1 x2 ) φ(x, t; Tc ) = βκ(t)2−ρ v(β

(22)

˜ •) is an auxiliary control to be designed where ρ ∈ [0, 2], β ≥ (αη−1 Tc )2−ρ , and v(•, below. To analyze the behavior of the closed-loop system under controller (22), for t ∈ [0, Tc ), consider the coordinate change yi = β −1 κ(t)ρ−i+1 xi , i = 1, 2.

(23)

Designing Controllers with Predefined Convergence-Time Bound …

53

Recalling the equality (14) then, the dynamics of the y-coordinates are given by y˙1 = κ(t)[αρy1 + y2 ] y˙2 = κ(t)[α(ρ − 1)y2 + v(y ˜ 1 , y2 ) + β −1 κ(t)ρ−2 d(t)].

Thus, considering the time-scale transformation (6), the dynamics in y-coordinates and τ -time are given by dy1 = αρy1 + y2 dτ dy2 = α(ρ − 1)y2 + v(y ˜ 1 , y2 ) + π(τ ) dτ where

(24)

π(τ ) = β −1 (αη−1 Tc )2−ρ exp(−α(2 − ρ)τ )d(ϕ −1 (τ ))

satisfies |π(τ )| ≤ Δ. Notice that, by choosing the ρ parameter, the rate at which π(τ ) vanishes can be varied; and with ρ = 2, π(τ ) becomes a non-vanishing disturbance. Notice that, for the coordinate change (23) to be well defined, the auxiliary control v(y ˜ 1 , y2 ), and the parameters ρ and α need to be chosen such that lim exp(−α(ρ + 1 − i)τ )yi (τ ; y0 , π[0,∞] ) = 0 i = 1, 2,

τ →∞

for every admissible disturbance π[0,∞] , which guarantees that the coordinate change maps the origin of the y-coordinates to the origin of the x-coordinates (and vice versa). Such condition is trivially satisfied since the origin of (24) is finite-time stable. Now, to design v(•, ˜ •) based on the controller v(•) consider the coordinate change z 1 = y1 and z 2 = αρy1 + y2 , which will take the system into a controller canonical form: dz 1 = z2 dτ dz 2 = αρz 2 + α(ρ − 1)(z 2 − αρz 1 ) + v(z ˜ 1 , z 2 − αρz 1 ) + π(τ ). dτ Thus, choosing v(•, ˜ •) as v(z ˜ 1 , z 2 − αρz 1 ) = v(z) − c1 z 1 − c2 z 2 where c1 = −α 2 (ρ 2 − ρ) c2 = α(2ρ − 1)

(25)

54

R. Aldana-López et al.

yields system (21). Thus, taking η as in (18), with the controller (5) φ(x, t; Tc ) = βκ(t)2−ρ [v(β −1 Qρ Kρ−1 (t)x) − [c1 , c2 ]β −1 Qρ Kρ−1 (t)x]

(26)

with ci , i = 1, 2 is given by (25), Kρ (t) = diag(κ(t)−ρ , κ(t)1−ρ ) and  Qρ =

1 0 αρ 1



the origin of the closed-loop system is fixed-time stable and the settling-time function satisfies   T (x0 ) = η−1 Tc 1 − exp −αT (β −1 Qρ Kρ (0)x0 ) < Tc . Moreover, the time-varying gain is bounded by (19). Example 3 (Second-order system) Consider the autonomous predefined-time controller given in Theorem 3, where:

u(t) = ω(x1 , x2 ) = −

  k γ2  γ2  a2 |σ | p + b2 |σ |q + 12 a1 + 3b1 x12 + ζ sign(σ ) T f2 2T f 1

(27)

with

1/2 2γ12  1 3 σ = x2 + x2  + 2 a1 x1  + b1 x1  , T f1 

2

which was introduced in [1] for a second-order perturbed system. Consider a disturbance d(t) = sin(t), and let a1 = a2 = 4, b1 = b2 = 41 , p = 0.5, q = 1, k = 1.5, T f1 = T f2 = 5, and ζ = 1. According to Theorem 3 in the appendix, γ1 and γ1 are obtained as γ1 = 3.7081 and γ2 = 2, respectively, to obtain a predefined-time controller with UBST given by Tc = T f1 + T f2 = 10. A simulation for initial conditions x1 (0) = x2 (0) = 50 is shown in the first row of Fig. 6. Now, consider the predefined-time controller based on time-varying gains, given in (26), using as a base controller v(z) = ω(z 1 , z 2 ) the autonomous controller given in (27). Notice that T f = 10. Therefore,   κ(t)ρ x1 −1 Qρ Kρ (t)x = αρκ(t)ρ x1 + κ(t)ρ−1 x2 and  φ(x, t; Tc ) = βκ(t)2−ρ ω β −1 κ(t)ρ x1 , β −1 αρκ(t)ρ x1 + β −1 κ(t)ρ−1 x2 − (c1 + c2 αρ)κ(t)2 x1 − c2 κ(t)x2 .

Designing Controllers with Predefined Convergence-Time Bound … 2

1

u(ξ )2 dξ 0

E(t) =

t

u(t)

u(ξ )2 dξ

1

2

x1 x2

0

Non-autonomous

E(t) =

t

u(t)

Autonomous

x1 x2

55

time

time

time

Fig. 6 Comparison between the autonomous predefined-time controller proposed in [1] and the proposed non-autonomous predefined-time controller as discussed in Example 3. In both cases the UBST is selected as Tc = 10

We choose α = 0.5, ρ = 2, β = 1 and Tc = 10. Thus, η = 0.9933 and κmax = 29.483. Therefore, the predefined-time controller (5) with φ(x, t; Tc ) given by  3 φ(x, t; Tc ) = ω κ(t)2 x1 , κ(t)2 x1 + κ(t)x2 − κ(t)2 x1 − κ(t)x2 . 2 A simulation for initial conditions x1 (0) = x2 (0) = 50 is shown in the second row of Fig. 6. Notice that, an improved transient behavior is obtained with the non-autonomous controller when compared with the behavior of the autonomous controller. Also notice that the control magnitude (second column) and the energy function (third column) are significantly reduced in the time-varying case. The tuning parameters of our redesign methodology are α, β, and ρ. The insight on the function of such parameters on the redesigned controller is as follows: The α parameter is associated with the time-scale transformation as illustrated in Fig. 1, increasing its value helps to reduce the slack between the UBST and the true settling time [10], however, it increases κmax , which impacts on chattering. Increasing the β parameter allows to cope with disturbances d(t) of greater magnitude, as in the τ -time the magnitude of the disturbance π(τ ) is inversely proportional to the magnitude of β. However, large values of β increase chattering. Finally, as mentioned above, the ρ parameter allows to reduce chattering of the resulting predefined-time controller, in particular, when ρ = n discontinuous terms in the admissible auxiliary controller, does not appear multiplied by the increasing time-varying gain in the redesigned controller.

56

R. Aldana-López et al.

5 Main Result: Arbitrary-Order Predefined-Time Controller In this section, we present the extension to design arbitrary-order predefined-time controllers. Our approach can be seen as a “redesign" methodology that starts from an admissible auxiliary controller and uses time-varying gains to achieve predefinedtime convergence. Let us introduce the notion of an admissible auxiliary controller. Definition 5 (admissible auxiliary controller) Given parameters α ≥ 0, ρ ∈ [0, n] ¯ + , we say that v(z) is an admissible auxiliary controller if: and T f ∈ R (i) for every disturbance π(τ ) such that |π(τ )| ≤ Δ exp(−α(n − ρ)τ ),

(28)

it happens that the system dz = Jz + bn (v(z) + π(τ )) dτ

(29)

is asymptotically stable, where J := [ai j ] ∈ Rn×n denotes a single Jordan block with zero eigenvalue, i.e., a square matrix with ai j = 1 if j = i + 1 and ai j = 0 otherwise, and the vector bi ∈ Rn denotes a vector with one in the i-th entry and zeros otherwise. Moreover, the settling-time function satisfies sup T (z0 ) ≤ T f .

z0 ∈Rn

(30)

(ii) for every admissible disturbance π[0,∞] ∈ [0,∞] , it happens that lim exp(−α(ρ + 1 − i)τ )z i (τ ; z0 , π[0,∞] ) = 0 i = 1, . . . , n.

τ →∞

(31)

Based on an admissible auxiliary controller, we next present the methodology to design predefined-time controllers. Our main result is as follows. ¯ + , an admissible auxTheorem 1 Given parameters α ≥ 0, ρ ∈ [0, n] and T f ∈ R iliary controller v(z), and a desired convergence time Tc > 0, define the matrices Dρ , Qρ ∈ Rn×n as Dρ := diag{−ρ, 1 − ρ, . . . , n − 1 − ρ} and

Designing Controllers with Predefined Convergence-Time Bound …



b1T T b1 (J − αDρ ) .. .

⎢ ⎢ Qρ := ⎢ ⎣

57

⎤ ⎥ ⎥ ⎥, ⎦

(32)

b1T (J − αDρ )n−1 and the time-varying matrix Kρ (t) as Kρ (t) := diag(κ(t)−ρ , κ(t)1−ρ , . . . , κ(t)n−1−ρ ), where κ(t) is given by (7) with η as defined in (18). Then, with φ(x, t; Tc ) given by φ(x, t; Tc ) = βκ(t)n−ρ [v(β −1 Qρ Kρ−1 (t)x) − β −1 b1T (J − αDρ )n Kρ−1 (t)x] where β ≥ (αη−1 Tc )n−ρ , the hybrid controller in (5) is fixed-time stable with a settling-time function given by   T (x0 ) = η−1 Tc 1 − exp −αT (β −1 Qρ Kρ (0)x0 ) .

(33)

Proof Our approach for the proof of Theorem 1 is to show that the auxiliary system (29) and the closed-loop system (4) under controller (5), in the time interval [0, Tc ), are related by the coordinate change z = β −1 Qρ Kρ−1 (t)x

(34)

and the time-scale transformation (6). Consider the time interval [0, Tc ) and the time-varying coordinate change (34), and notice that, since v(z) is an admissible auxiliary controller, then the coordinate change is well defined. Then, the dynamics in the z-coordinates is given by z˙ = β −1 Qρ

dKρ−1 (t) dt

x + β −1 Qρ Kρ (t)˙x.

Thus, it follows from the identity (41) and x = βKρ (t)Q−1 ρ z, that −1 z˙ = −ακ(t)Qρ Dρ Q−1 ρ z + β Qρ Kρ (t)[Jx + bn (u + d(t))] −1 −1 −1 −1 = −ακ(t)Qρ Dρ Q−1 ρ z + Qρ Kρ (t)JKρ (t)Qρ z + β Qρ Kρ (t)bn (u + d(t)).

Moreover, applying identities (39) and (42) from Lemmas 1 and 2, yields −1 −1 −1 z˙ = −ακ(t)Qρ Dρ Q−1 ρ z + κ(t)Qρ JQρ z + β Qρ Kρ (t)bn (u + d(t)) −1 −1 = κ(t)[Qρ (J − αDρ )Q−1 ρ z] + β Qρ Kρ (t)bn (u + d(t))

= κ(t)(J + A)z + β −1 Qρ Kρ−1 (t)bn (u + d(t)).

58

R. Aldana-López et al.

From the identity Kρ−1 (t)bn = κ(t)ρ−n+1 bn and Qρ bn = bn it follows that z˙ = κ(t)(J + A)z + κ(t)ρ−n+1 β −1 Qρ bn (u + d(t)) = κ(t)(J + A)z + κ(t)ρ−n+1 β −1 bn (u + d(t)) ρ−n+1 −1 = κ(t)(J + A)z + κ(t)bn v(z) − bn b1T (J − αDρ )n Q−1 β bn d(t)). ρ z + κ(t)

Since, according to (40) from Lemma 1, A = bn b1T (J − αDρ )n Q−1 ρ , then z˙ = κ(t)(J + A)z + κ(t)bn v(z) − Az + κ(t)ρ−n+1 β −1 bn d(t)) = κ(t)[Jz + bn v(z) + κ(t)ρ−n β −1 bn d(t))]. Next, expressing the dynamics in the z-coordinates and τ -time, applying the timescale transformation (6), and noticing that  dz  dt dz =  dτ dt t=η−1 Tc (1−exp(−ατ )) dτ where

dt dτ

is given by (10), yields system (29) with π(τ ) := β −1 (αη−1 Tc )n−ρ exp(−α(n − ρ)τ )d(ϕ −1 (τ )).

Notice that π(τ ) satisfies (28). Since the origin of system (29) is asymptotically stable and its settling-time function satisfies (30), then the settling time of the closed-loop system under the controller (5) is given by (33), which completes the proof.  Corollary 1 Let v(z) be an auxiliary controller such that the closed-loop system of (29) is asymptotically stable with settling-time function T (z0 ) given by (30). Then, under the controller (5): 1. if T (z0 ) = ∞, then, the settling-time function (33) satisfies T (x0 ) = Tc . Thus, controller (5) is a prescribed-time controller but limt→T (x0 )− κ(t) = ∞. 2. if T (z0 ) < ∞, but T f = ∞, then, the time-varying gain κ(T (x0 )) is finite but an unbounded function of the initial condition x0 , i.e., supx0 ∈Rn κ(T (x0 )) = ∞. Thus, controller (5) is a predefined-time controller with a finite (but unbounded) time-varying gain at the settling time. 3. if supz0 ∈Rn T (z0 ) ≤ T f , with a known T f < ∞, then, the settling-time funcexp(αT f )−1 for tion (33) satisfies T (x0 ) ≤ Tc , and κ(t) is bounded by the κmax := αTc t ∈ [0, Tc ). Thus, controller (5) is a predefined-time controller with bounded time-varying gain. ¯ + , an Example 4 Let n = 3 and assume that for parameters α ≥ 0, ρ = n and T f ∈ R admissible auxiliary controller v(z) = ω(z 1 , z 2 , z 3 ) is given (fixed-time autonomous controllers with estimation of the settling time have been presented in [5, 27]). Thus, matrices Qρ and Kρ (t) are computed as

Designing Controllers with Predefined Convergence-Time Bound …

59



⎤ 1 0 0 Qρ = ⎣ 3α 1 0⎦ and Kρ (t) = diag(κ(t)−3 , κ(t)−2 , κ(t)−1 ), 9α 2 5α 1 respectively. Thus, ⎡

⎤ κ(t)3 x1 ⎦ 3ακ(t)3 x1 + κ(t)2 x2 Qρ Kρ−1 (t)x = ⎣ 9α 2 κ(t)3 x1 + 5ακ(t)2 x2 + κ(t)x3 and

b1T (J − αDρ )n Kρ−1 (t)x = 27α 3 κ(t)3 x1 + 19α 2 κ(t)2 x2 + 6ακ(t)x3 .

Therefore, we obtain the predefined-time controller (5) with φ(x, t; Tc ) given by  φ(x, t; Tc ) = ω κ(t)3 x1 , 3ακ(t)3 x1 + κ(t)2 x2 , 9α 2 κ(t)3 x1 + 5ακ(t)2 x2 + κ(t)x3 − 27α 3 κ(t)3 x1 − 19α 2 κ(t)2 x2 − 6ακ(t)x3 .

5.1 Uniform Lyapunov Stability of Predefined-Time Controllers Since our approach for predefined-time control uses time-varying gains to redesign an admissible auxiliary controller, it is essential to study the uniform (with respect to time) stability. This property has robustness implications, for instance, with respect to measurement noise, quantization, etc. We say that the origin of system (4) is uniformly Lyapunov stable [12, Definition 4.4], if for every  > 0 there exists δ > 0 such that for all s ≥ 0 and every admissible disturbance, ||x(s)|| ≤ δ implies ||x(t)|| ≤  for all t ≥ s. Example 5 Let Δ = 0, ρ = 0, α = 1, T f = ∞ and v(z) = −18z 1 − 9z 2 . Notice that such v(z) is an admissible auxiliary controller, since under such controller the state trajectory of the auxiliary system (29) is 1 1 exp(−3t) − exp(−6t) z 2 (0) z 1 (τ ; z0 ) = (2 exp(−3t) − exp(−6t)) z 1 (0) + 3 3 z 2 (τ ; z0 ) = (6 exp(−6t) − 6 exp(−3t))z 1 (0) + (2 exp(−6t) − exp(−3t))z 2 (0)

the origin of (29) is asymptotically stable with T (z0 ) = ∞, for every nonzero initial condition z0 . Moreover, it is easy to verify that (31) holds.

60

R. Aldana-López et al.

time

td td td td

x2 (t)

μ(t;td )

x1 x2

= 0.995 = 0.996 = 0.997 = 0.998

td td td td

ti m e

= 0.995 = 0.996 = 0.997 = 0.998 time

Fig. 7 Simulation of Example 5, showing the lack of robustness to measurement noise of a prescribed-time algorithm. On the left, the behavior of a prescribed control with Tc = 10 and without disturbance. In the center, a set of pulse disturbances in (36). On the right, the behavior of the closed-loop system under the prescribed control and in the presence of disturbance (36)

Thus,

and

v(β −1 Qρ Kρ−1 (t)x) = −18x1 − 9κ(t)−1 x2 β −1 b1T (J − αDρ )n Kρ−1 (t)x = −κ(t)−1 x2

and using such admissible auxiliary controller we obtain, based on Theorem 1, the prescribed-time controller: φ(x, t; Tc ) = −18κ(t)2 x1 − 8κ(t)x2 .

(35)

The trajectory of such prescribed-time controller, for Tc = 10 and initial condition x1 (0) = x2 (0) = 10 is shown in the first column of Fig. 7. Next, consider a disturbance  0.1 if td ≤ t < td + 0.001 μ(t; td ) = (36) 0 otherwise where td < Tc , such that the prescribed-time controller becomes 

 μ(t; td ) , t; Tc , φ x+ 0 with φ(x, t; Tc ) as in (35). The second and third column of Fig. 7 show the disturbance μ(t; td ) (which could occur due to quantization, noise, etc) and the trajectory for x2 (t), respectively, for different selections of td . As can be observed, an arbitrarily large transient can be obtained if td is sufficiently close to Tc , which shows absence of uniform stability and a lack of robustness to disturbances in the state. Note that when n = 1, the change of variables between z and x in (34) does not depend on time. Thus, if the dynamics of z in (21) is uniformly Lyapunov stable, this

Designing Controllers with Predefined Convergence-Time Bound …

61

property is transferred to x by continuity, as well. However, this reasoning fails with n > 1 due to the time dependence of (34). In the following, we establish under which conditions, uniform Lyapunov stability is attained for the origin of the closed-loop system (4) with the controller in (5). Proposition 3 Consider n > 1 and assume that the origin of the system (21) is uniformly Lyapunov stable and 0 ≤ ρ < n − 1. Then, the origin of the closed-loop system (4) with the controller in (5) is uniformly Lyapunov stable if and only if κ(t) is uniformly bounded. Proof First, note that uniform Lyapunov stability for t ≥ Tc of (4) follows since the controller (5) is time-independent for such t, and w(x, Δ) is assumed to make the origin of the system stable. Thus, we only need to analyze the uniform Lyapunov stability property of (4) for t ∈ [0, Tc ). Note that if the solution z(τ ) of (21) is uniformly Lyapunov stable, then the same applies to z(ϕ(t)), ∀t ∈ [0, Tc ). Recall that the solution x(t) of (4) and z(ϕ(t)) are related through the transformation in (34) for t ∈ [0, Tc ). In addition, note that due to stability property of trajectories x(t) obtained from Theorem 1, i.e., that x(Tc ) = 0 regardless of the initial conditions and disturbances, we can continue trajectories from t = Tc . We start by showing that κ(t) uniformly bounded implies uniform Lyapunov stability of the origin of system (4). Let 0 ≤ s < t < Tc and use Rayleigh’s inequality in (34) to obtain −1 βσ (Q−1 ρ )σ (ρ (t)) z(ϕ(t)) ≤ x(t) ≤ βσ (Qρ )σ (ρ (t)) z(ϕ(t))

(37)

where σ (•), σ (•) denote minimum and maximum singular values respectively. In addition, note that uniformly bounded κ(t) implies that there exists 0 < λ, λ ∈ R such that λ < σ (Kρ (t)) = min{κ(t)−ρ , . . . , κ(t)n−ρ−1 } σ (Kρ (t)) = max{κ(t)−ρ , . . . , κ(t)n−ρ−1 } < λ, for any t ∈ [0, Tc ]. Thus, (37) becomes −1 βσ (Q−1 ρ )λ z(ϕ(t)) ≤ x(t) ≤ βσ (Qρ )λ z(ϕ(t)) .

(38)

 −1 Now, choose any  > 0 and let z =  βσ (Q−1 and note that z > 0 since ρ )λ λ < +∞. For such z > 0, there exists δz > 0 such that z(ϕ(s)) ≤ δz implies z(ϕ(t)) ≤ z for ϕ(t) ≥ ϕ(s) due to Lyapunov stability and time invariance (and hence, uniform Lyapunov stability) of (21). Thus, in order to see how this property is transferred to x, let δ = δz βσ (Q−1 ρ )λ. Hence, using the first inequality in (38) it is obtained that x(s) ≤ δ = δz βσ (Q−1 ρ )λ implies z(ϕ(s)) ≤ δz . This in turn implies z(ϕ(t)) ≤ z . Using the second inequality in (38) we obtain x(t) ≤ βσ (Q−1 ρ )λz = , showing uniform Lyapunov stability for the origin of the closedloop system (4) with the controller in (5). Next, we show that if κ(t) is not bounded, then the origin of the closed-loop system (4) is not uniformly Lyapunov stable for ρ ∈ [0, n − 1). In particular, we

62

R. Aldana-López et al.

will show that for any δ,  > 0, there exist s, t with 0 ≤ s < t ≤ Tc , an admissible disturbance and a trajectory x of (4) which satisfies both x(s) ≤ δ and x(t) > , which is the direct negation of the uniform Lyapunov stability statement. This means that we only need to find a single trajectory of (4) which fails to fulfill the uniformity bounds. In this sense, we can focus on Δ = 0 and consider, for fixed δ and arbitrary τ0 ≥ 0, any trajectory z(τ ) of (21) with z(τ0 ) = wQρ b1 with nonzero constant w with |w| ≤ δ/(βκ(0)−ρ ). The proof strategy is to show that one of the zero components of x(s) with τ0 = ϕ(s) need to increase in magnitude at some later time s < t, enough to make x(t) as large as desired thanks to the unboundedness of the gain. To guarantee that one component of x(s) cannot remain at zero, we show  that there  z(τ ) = h for is no vector h = [h 1 , . . . , h n ]T ∈ Rn with h n = 0 such that dτd Q−1 ρ τ =τ0 this trajectory, meaning that the last component of x(s) cannot remain at the origin. Assume the opposite, which implies that h=

Q−1 ρ

 dz  −1 = Q−1 ρ bn v(z(τ0 )) + wQρ JQρ b1 = bn v(z(τ0 )) dτ τ =τ0

−1 since Q−1 this is impossible since h n = 0 ρ bn = bn and Qρ JQρ b1 = 0. However,   but v(z(τ0 )) = 0. Therefore, dτd bnT Q−1 z(τ ) is nonzero. The previous argument, ρ τ =τ0 in addition to the fact that (21) is time invariant and v(•) is continuous at z(τ0 ), implies that there exist positive constants τ˜ , , ˜ which only depend on δ, such that z(τ + τ ˜ )| > . ˜ |bnT Q−1 0 ρ Select now s ≥ 0 such that βκ(s)n−ρ−1 ˜ >  which is possible since κ(•) is unbounded and n − ρ − 1 > 0. Set τ0 = ϕ(s) and note that from (34), one then obtains −1 x(s) = βKρ (s)Q−1 ρ z(ϕ(s)) = βKρ (s)Qρ z(τ0 )

= βwKρ (s)b1 = βwκ(s)−ρ b1 and consequently x(s) = β|w|κ(s)−ρ ≤ β|w|κ(0)−ρ ≤ δ. Moreover, one has for t = ϕ −1 (ϕ(s) + τ˜ ) < Tc xn (t) = bnT x(t) = βκ(t)n−ρ−1 bnT Q−1 ρ z(ϕ(t)) = βκ(t)n−ρ−1 bnT Q−1 ρ z(τ0 + τ˜ ) and hence x(t) ≥ |xn (t)| ≥ βκ(t)n−ρ−1 ˜ ≥ βκ(s)n−ρ−1 ˜ > , since κ(t) is nondecreasing and n − ρ − 1 > 0, completing the proof.  Proposition 3 implies that, with the proposed approach, no uniformly stable prescribed-time controller can be obtained for a chain of integrators of order greater than one. Ensuring uniform stability for predefined-time controllers, on the other hand, may be achieved by ensuring that the time-varying gain stays bounded. Although this proof is particular for our approach, the proof suggests that we can take

Designing Controllers with Predefined Convergence-Time Bound …

63

time

td td td td

x2 (t)

μ(t;td )

x1 x2

= 0.995 = 0.996 = 0.997 = 0.998 time

td td td td

= 0.995 = 0.996 = 0.997 = 0.998 time

Fig. 8 Simulation of Example 6, showing robustness to measurement noise of a prescribed-time algorithm with bounded time-varying gains. On the left, the behavior of a prescribed control with Tc = 10 and without disturbance. In the center, a set of pulse disturbances in (36). On the right, the behavior of the closed-loop system under the prescribed control and in the presence of disturbance (36)

a similar path to show the non-uniformity of other prescribed-time control methods. Since, as illustrated in Example 5, the lack of uniform stability implies an inherent lack of robustness, then studying the uniform stability property in the prescribed-time control literature is essential. Example 6 Let us revisit the controller in Example 3 for a perturbed system with disturbance d(t) = sin(t). Similarly as in Example 5 consider the disturbance μ(t; td ) in (36), such that the predefined-time controller becomes

  μ(t; td ) φ x+ , t; Tc . 0 A simulation is shown in Fig. 8, where the second and third column show the disturbance μ(t; td ) (which could occur due to quantization, noise, etc) and the trajectory for x2 (t), respectively, for different selections of td . As can be observed, contrary to the case in Example 5, the transient behavior due to the perturbation remains bounded, no matter how close to Tc the disturbance μ(t; td ) occurs. Notice that according to Proposition 3 this predefined-time controller is uniformly Lyapunov stable since the time-varying gain is bounded.

6 Conclusion This chapter presents a methodology to design robust controllers achieving fixedtime stability with a desired upper bound for the settling time (UBST ). We show that the closed-loop system under the proposed controller is related to a suitable auxiliary system through a time-varying coordinate change and a time-scale transformation. The methodology is motivated by the analysis of the first-order case, where application to a linear controller leads to a minimum energy solution and generally allows to reduce control energy when redesigning other controllers. Depending on the conver-

64

R. Aldana-López et al.

gence properties of the resulting auxiliary system, interesting features are obtained in the closed-loop system. For instance, obtaining a prescribed-time controller steers the state to the origin in the desired time, regardless of the initial condition, but with time-varying gains that tend to infinity. Alternatively, we present conditions under which a predefined-time controller is obtained with bounded time-varying gains. Since the proposed controller is time varying, it is essential to study its uniform stability properties. For this purpose, we show that uniform boundedness of the timevarying gain is necessary and sufficient for uniform Lyapunov stability of the closedloop system obtained with our approach. It is moreover shown that such boundedness of the gain can be achieved by redesigning an existing fixed-time controller. Acknowledgements Work partially supported by the Christian Doppler Research Association, the Austrian Federal Ministry of Labour and Economy and the National Foundation for Research, Technology and Development, by Agencia I+D+i grant PICT 2018-01385, Argentina and by Consejo Nacional de Ciencia y Tecnología (CONACYT-Mexico) scholarship with grant 739841.

Appendix Auxiliary Lemmas Let us introduce the following Lemmas, on some properties of matrix Qρ and the time-varying matrix Kρ (t). Lemma 1 Let Dρ ∈ Rn×n and Qρ ∈ Rn×n be defined as in (32). Then, Qρ ∈ Rn×n is a lower triangular matrix satisfying

where

J + A = Qρ (J − αDρ )Q−1 ρ

(39)

A = bn b1T (J − αDρ )n Q−1 ρ

(40)

with bn = [0, · · · , 0, 1]T ∈ Rn×1 . Proof Notice that by construction Qρ is a lower triangular matrix with ones over the diagonal. Moreover, J is an upper shift matrix, thus ⎡

⎡ T ⎤ ⎤ b1T (J − αDρ ) b1 (J − αDρ ) ⎢ ⎢ ⎥ ⎥ .. .. ⎢ ⎢ ⎥ ⎥ . . JQρ = ⎢ ⎥ and JQρ + AQρ = ⎢ ⎥ ⎣bT (J − αDρ )n−1 ⎦ ⎣bT (J − αDρ )n−1 ⎦ 1 1 0nT b1T (J − αDρ )n where 0n ∈ Rn is a zero vector. Therefore, JQρ − AQρ = Qρ (J − αDρ ) which completes the proof. 

Designing Controllers with Predefined Convergence-Time Bound …

65

Lemma 2 Let κ(t) be given as in (7), with η as in (18), and let Kρ (t) := diag(κ(t)−ρ , κ(t)1−ρ , . . . , κ(t)n−ρ−1 ), where ρ ∈ [0, n]. Then, the following identities hold: d Kρ (t)−1 = −ακ(t)Dρ Kρ (t)−1 dt Kρ−1 (t)JKρ (t) = κ(t)J.

(41) (42)

Proof A direct calculation yields d d Kρ (t)−1 = diag(κ(t)ρ , κ(t)ρ−1 , . . . , κ(t)ρ−n+1 ) dt dt −1 diag(ρκ(t)ρ , ρ − 1κ(t)ρ−1 , . . . , ρ − n + 1κ(t)ρ−n+1 ). = κ(t)κ(t) ˙ −1 = ακ(t), Eq. (41) follows trivially by definition of Dρ . Since κ(t)κ(t) ˙ Now, to show that (42) holds, notice that since J is an upper shift matrix. Thus,

⎡ 0 κ(t)−ρ 0 ⎢0 0 κ(t)1−ρ ⎢ ⎢ .. .. .. ⎢ . . Kρ−1 (t)JKρ (t) = κ(t)Kρ−1 (t) ⎢ . ⎢0 0 0 ⎢ ⎣0 0 0 0 0 0

··· ··· .. .

0 0 .. .

· · · κ(t)n−3+ρ ··· 0 ··· 0

0 0 .. .



⎥ ⎥ ⎥ ⎥ ⎥ ⎥ 0 ⎥ n−2+ρ ⎦ κ(t) 0

= κ(t)J, which completes the proof.



Some Admissible Auxiliary Controllers Theorem 2 ([1, Theorem 3]) Consider a controller   u = − (a|x| p + b|x|q )k + ζ sign(x),

(43)

where ζ ≥ Δ, a, b, p, q, k > 0 are system parameters which satisfy the constraints kp < 1, and kq > 1. Then, the origin of (8) under the controller (43) is fixed-time stable and the settling-time function satisfies supx0 ∈R T (x0 ) = γ , where   Γ m p Γ m q  a m p γ = k , a Γ (k)(q − p) b

(44)

66

R. Aldana-López et al.

with m p =

1−kp q− p

and m q =

kq−1 . q− p

Theorem 3 ([1, Theorem 4]) Consider a second-order perturbed chain of integrators, and let a1 , a2 , b1 , b2 , p, q, k > 0, kp < 1, kq > 1, T f1 , T f2 > 0, ζ ≥ Δ, and γ1 = with m p =

1−kp q− p

Γ

 1 2

1/2

4

2a1 Γ



1 2

and m q =

a1 b1

kq−1 . q− p

1/4

  m p Γ m p Γ mq a2 , γ2 = k , a2 Γ (k)(q − p) b2

If the control input is selected as

 γ2  γ12  p q k 2 u=− a2 |σ | + b2 |σ | + a1 + 3b1 x1 + ζ sign(σ ), T f2 2T f21 where the sliding variable σ is defined as 1/2 2γ12  1 3 σ = x2 + x2  + 2 a1 x1  + b1 x1  , T f1 

2

then the origin (x1 , x2 ) = (0, 0) of system (4), with n = 2, is fixed-time stable with UBST given by T f = T f1 + T f2 .

References 1. Aldana-López, R., Gómez-Gutiérrez, D., Jiménez-Rodríguez, E., Sáanchez-Torres, J.D., Defoort, M.: Enhancing the settling time estimation of a class of fixed-time stable systems. Int. J. Robust Nonlinear Control 29(12), 4135–4148 (2019). https://doi.org/10.1002/rnc.4600 2. Aldana-López, R., Gómez-Gutiéerrez, D., Jiménez-Rodríguez, E., Sáanchez-Torres, J.D., Defoort, M.: On the design of new classes of fixed-time stable systems with predefined upper bound for the settling time. Int. J. Robust Nonlinear Control 95(10), 2802–2814 (2022). https:// doi.org/10.1080/00207179.2021.1936190 3. Andrieu, V., Praly, L., Astolfi, A.: Homogeneous approximation, recursive observer design, and output feedback. SIAM J. Control Optim. 47(4), 1814–1850 (2008). https://doi.org/10. 1137/060675861 4. Andrieu, V., Praly, L., Astolfi, A.: Homogeneity in the bi-limit as a tool for observer and feedback design. In: Proceedings of the IEEE Conference on Decision and Control, pp. 1050– 1055 (2009). https://doi.org/10.1109/CDC.2009.5400263 5. Cao, Y., Wen, C., Tan, S., Song, Y.: Prespecifiable fixed-time control for a class of uncertain nonlinear systems in strict-feedback form. Int. J. Robust Nonlinear Control 30(3), 1203–1222 (2020). https://doi.org/10.1002/RNC.4820 6. Chitour, Y., Ushirobira, R., Bouhemou, H.: Stabilization for a perturbed chain of integrators in prescribed time. SIAM J. Control Optim. 58(2), 1022–1048 (2020). https://doi.org/10.1137/ 19M1285937 7. Cruz-Zavala, E., Moreno, J.A.: High-order sliding-mode control design homogeneous in the bi-limit. Int. J. Robust Nonlinear Control 31(9), 3380–3416 (2021). https://doi.org/10.1002/ RNC.5242

Designing Controllers with Predefined Convergence-Time Bound …

67

8. Ding, S., Levant, A., Li, S.: Simple homogeneous sliding-mode controller. Automatica 67, 22–32 (2016). https://doi.org/10.1016/j.automatica.2016.01.017 9. Filippov, A.F.: Differential Equations with Discontinuous Righthand Sides. Kluwer Academic Publishers, Dordrecht, The Netherlands (1988) 10. Gómez-Gutiérrez, D.: On the design of nonautonomous fixed-time controllers with a predefined upper bound of the settling time. Int. J. Robust Nonlinear Control 30(10), 3871–3885 (2020). https://doi.org/10.1002/rnc.4976 11. Jimenez-Rodriguez, E., Munoz-Vazquez, A.J., Sanchez-Torres, J.D., Defoort, M., Loukianov, A.G.: A Lyapunov-like characterization of predefined-time stability. IEEE Trans. Autom. Control 65(11), 4922–4927 (2020). https://doi.org/10.1109/TAC.2020.2967555 12. Khalil, H.K.: Nonlinear Systems, vol. 3. Prentice Hall, Upper Saddle River, NJ, USA (2002) 13. Liberzon, D.: Calculus of Variations and Optimal Control Theory. Princeton University Press, Princeton, NJ, USA (2012) 14. Pal, A.K., Kamal, S., Nagar, S.K., Bandyopadhyay, B., Fridman, L.: Design of controllers with arbitrary convergence time. Automatica 112, 108710 (2020). https://doi.org/10.1016/j. automatica.2019.108710 15. Polyakov, A.: Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Autom. Control 57(8), 2106–2110 (2012). https://doi.org/10.1109/TAC.2011. 2179869 16. Sánchez-Torres, J.D., Gómez-Gutiérrez, D., López, E., Loukianov, A.G.: A class of predefinedtime stable dynamical systems. IMA J. Math. Control Inf. 35(1), I1–I29 (2018). https://doi. org/10.1093/imamci/dnx004 17. Sánchez-Torres, J.D., Muñoz-Vázquez, A.J., Defoort, M., Jiménez-Rodríguez, E., Loukianov, A.G.: A class of predefined-time controllers for uncertain second-order systems. Eur. J. Control 53, 52–58 (2020). https://doi.org/10.1016/j.ejcon.2019.10.003 18. Seeber, R.: Convergence time bounds for a family of second-order homogeneous state-feedback controllers. IEEE Control Syst. Lett. 4(4), 1018–1023 (2020). https://doi.org/10.1109/LCSYS. 2020.2998673 19. Ding, S., Levant, A., Li, S.: New families of high-order sliding-mode controllers. In: Proceedings of the IEEE Conference on Decision and Control, pp. 4752–4757 (2015) 20. Shtessel, Y., Edwards, C., Fridman, L., Levant, A.: Observation and Identification via HOSM Observers. In: Sliding Mode Control and Observation, pp. 251–290. Birkhäuser, New York, NY, USA (2014). https://doi.org/10.1007/978-0-8176-4893-0_7 21. Song, Y., Wang, Y., Holloway, J., Krstic, M.: Time-varying feedback for regulation of normal form nonlinear systems in prescribed finite time. Automatica 83, 243–251 (2017). https://doi. org/10.1016/J.AUTOMATICA.2017.06.008 22. Song, Y., Wang, Y., Krstic, M.: Time-varying feedback for stabilization in prescribed finite time. Int. J. Robust Nonlinear Control 29(3), 618–633 (2019). https://doi.org/10.1002/rnc.4084 23. Sontag, E.D.: Input to state stability: basic concepts and results. In: Nonlinear and Optimal Control Theory 1932, 163–220 Springer, Berlin, Germany (2008). https://doi.org/10.1007/ 978-3-540-77653-6_3 24. Tabatabaeipour, S.M., Blanke, M.: Calculation of critical fault recovery time for nonlinear systems based on region of attraction analysis. IFAC Proc. 47(3), 6741–6746 (2014) 25. Utkin, V.I.: Sliding Modes in Control and Optimization. Springer, Berlin, Germany (1992). https://doi.org/10.1007/978-3-642-84379-2 26. Zarchan, P.: Tactical and Strategic Missile Guidance. 
American Institute of Aeronautics an Astronautics, Reston, VA, USA (2012) 27. Zimenko, K., Polyakov, A., Efimov, D., Perruquetti, W.: On simple scheme of finite/fixed time control design. Int. J. Control 93(6), 1353–1361 (2020). https://doi.org/10.1080/00207179. 2018.1506889

SMC Observers

Bi-homogeneous Differentiators Jaime A. Moreno

Abstract In many applications, it is important to be able to estimate online some number of derivatives of a given (differentiable) signal. Some famous algorithms solving the problem comprise linear high-gain observers and Levant’s exact differentiators, that is discontinuous. They are both homogeneous, as are many other ones. A disadvantage of continuous algorithms is that they are able to calculate exactly the derivatives only for a very small class of (polynomial) time signals. The discontinuous Levant’s differentiator, in contrast, can calculate in finite-time and exactly the derivatives of Lipschitz signals, which is a much larger class. However, it has the drawback that its convergence time increases very strongly with the size of the initial conditions. Thus, a combination of both algorithms seems advantageous, and this has been proposed recently by the author in [38]. In this work, some techniques used to design differentiators are reviewed and it is shown how the combination of two different homogeneous algorithms can be realized and that it leads to interesting properties. A novelty is the derivation of a very simple realization of the family of bi-homogeneous differentiators proposed in [38]. The methodological framework is based on the use of smooth Lyapunov functions to carry out their performance and convergence analysis. Keywords Sliding modes · Variable-structure control · Lyapunov methods · Homogeneous systems · Discontinuous observers · High-order sliding modes

1 Introduction and Overview The objective of a differentiator is to provide online estimates of a finite number of the derivatives of a signal f (t), defined for t ∈ [0, ∞), which is Lebesgue-measurable. Signal f (t) has usually two additive components, i.e., f (t) = f 0 (t) + ν(t): (i) the J. A. Moreno (B) Instituto de Ingeniería, Universidad Nacional Autónoma de México (UNAM), 04510 Coyoacán, Ciudad de México, Mexico e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_4

71

72

J. A. Moreno

base signal f 0 (t), that one wants to differentiate, which is n−times differentiable, and (ii) the uniformly bounded noise signal ν(t). Using as states ς1 = f 0 (t) , . . . , ςi = f 0(i−1) (t) 

d i−1 f 0 (t) , i = 1, . . . , n , dt i−1

the base signal f 0 (t) is represented by the state-space model ς˙i (t) = ςi+1 (t) , i = 1, . . . , n − 1, ς˙n (t) = f 0(n) (t)

(1)

f (t) = ς1 (t) + v (t) , which is convenient in the following discussion. Equation (1) is a linear time-invariant system, with measurement f and having an unknown input f 0(n) (t). According to the standard theory for linear time-invariant systems (see e.g., [21]) (1) is strongly observable, i.e., all its states can be obtained by differentiating the (noise-free) output alone, without the necessity of knowing the unknown input. However, according to the standard theory of unknown input observers [21], since the relative degree between unknown input f 0(n) (t) and the output f is larger than one (for n > 1), there is no observer able to estimate the states ςi (t), when the unknown input is a completely arbitrary signal. In order to circumvent this situation, the unknown input f 0(n) (t) is assumed to be uniformly bounded, that is,    (n)   f 0 (t) ≤ Δ , ∀t ≥ 0 ,

(2)

for some nonnegative constant Δ. There are many proposed differentiators in the literature. Classical ones are linear time-invariant algorithms. Recently, the High-Gain Observer [43, 52] (see also [23, 24]), described by x˙i = −ki L i (x1 − f ) + xi+1 , .. . i = 1, . . . , n − 1

(3)

x˙n = −kn L (x1 − f ) , n

has been proved to be an appropriate differentiator in closed-loop control in theory and in several applications. The gains ki > 0 are selected as the coefficients of a Hurwitz polynomial, and the gain L > 0 can be used to both accelerate the convergence velocity of the differentiator and reduce the effect of the (bounded) unknown input f 0(n) (t). Since this is a linear time-invariant system, a thorough analysis can be carried out using quadratic Lyapunov functions. One drawback of this and all continuous differentiators is that the estimation of the derivatives xi does not converge asymptotically to the true values f 0(i−1) , even in the absence of noise, when

Bi-homogeneous Differentiators

73

Δ > 0. As defined by [26], a differentiator is exact if its estimation coincides with the derivatives of the signal. Accordingly, the high-gain observer is exact only for the     (n)  n class of polynomial signals, for which Δ = 0, i.e., S0 = f 0 (t) |  f 0 (t) ≡ 0 . In his seminal works, [26–28] proposed   which  is exact on a large  a differentiator  (n)  n class of signals, i.e., for the set SΔ = f 0 (t) |  f 0 (t) ≤ Δ . It is described in its non-recursive form by n−i

x˙i = −ki L x1 − f n + xi+1 , .. . i = 1, . . . , n − 1 x˙n = −kn L n x1 − f 0 ,

(4)

where z p  |z| p sign (z). The gains ki > 0 are selected such that stability is assured, and the gain L > 0 can be used to both accelerate the convergence velocity of the differentiator and modify the value of Δ for which finite time stability is assured. (4) is discontinuous, since the last injection term x1 − f 0 is the sign function. Solutions of this system are understood in the sense of Filippov [17]. The analysis of the algorithm in [26–28] is based on geometric properties for n = 2 and using the theory of homogeneous differential inclusions for n > 2. For n = 2, an approach using (homogeneous and explicit) Lyapunov functions has been developed in [33, 34, 36, 39, 40], for n = 3 in [35] and for arbitrary order in [10, 11, 37, 45]. Other variations of the use of high-order sliding modes for finite-time and robust differentiation and observation is given in [5, 6, 15, 16, 18, 25, 50, 51]. As an extension to the previous algorithms, a family of differentiators, given by 1−(n−i−1)d

x˙i = −ki L x1 − f 1−(n−1)d + xi+1 , .. . i = 1, . . . , n − 1 x˙n = −kn L n x1 − f

1+d 1−(n−1)d

(5)

,

and parametrized by the scalar −1 ≤ d < 1/(n−1), has been presented in [10, 37, 45] for non-positive d and in [22] also for positive d, all based on a homogeneous Lyapunov analysis. (5) includes (3), when d = 0, and (4), for d = −1. (5) is exact for S_0^n if −1 < d < 1/(n−1), and for S_Δ^n only if d = −1.

All these differentiators (3), (4), and (5) are robust on any (acceptable) signal f_0(t), that is, the estimated derivatives converge uniformly to their actual values when the input signal f(t) tends uniformly to f_0(t), as defined in [26]. They are also (weighted) homogeneous, which roughly means that if the initial conditions and the inputs are appropriately scaled, the trajectories will also be scaled. The parameter d in (5) is the homogeneity degree. Note that, for d < 0, the derivative of the injection terms ⌈s⌋^{(1−(n−i−1)d)/(1−(n−1)d)} near s = 0 becomes very large and blows up to infinity at s = 0. This explains why for d < 0 the differentiator converges in finite time. For d = −1, the differential equation (5) has a discontinuous right-hand side and admits Filippov solutions [17]. In contrast, for large values of s, the derivative of the injection term becomes very small. This makes the convergence from large initial errors rather slow. For d = 0, the injection terms are linear, i.e., ⌈s⌋^{(1−(n−i−1)d)/(1−(n−1)d)} = s, so that the convergence is exponential. When d > 0 the injection terms ⌈s⌋^{(1−(n−i−1)d)/(1−(n−1)d)} near s = 0 are very weak, i.e., their derivative vanishes, and therefore the convergence is only asymptotic, that is, slower than any exponential function. In contrast, for large values of s the injection terms become very strong, and trajectories reach any neighborhood of zero within a time independent of the initial conditions.

According to the previous discussion, one disadvantage of purely homogeneous differentiators (5) with d < 0 is that, although the algorithm converges in finite time, if the initial estimation error grows unboundedly the convergence time tends to infinity. The combination of a differentiator with d < 0 with one with d > 0 counteracts this effect and leads to a differentiator converging in fixed time. That is, the estimation error converges in finite time for an arbitrary initial condition, and the settling time is globally bounded by a positive constant T that does not depend on the initial estimation error. For the first-order differentiator, providing just the first derivative (n = 2), this has been attained in [13, 14, 19, 34, 36], using injection functions which combine terms with exponents lower than 1 and terms with exponents greater than 1. These algorithms are homogeneous of negative degree near 0 and homogeneous of positive degree far from 0, a property called homogeneity in the bi-limit in [1]. The proof is performed using non-smooth quadratic-like Lyapunov functions [33, 34]. An extension has been achieved in [48], where the authors use the results of [46] to provide a detailed gain design and a tight estimation of the convergence time. For homogeneous differentiators with d = −1 and d > 0 of arbitrary order, a switching strategy between the two differentiators is proposed in [2]. In [31], the implicit Lyapunov function method has been used to combine two differentiators with positive and negative homogeneity degrees. In [9], a topological argument is used for homogeneous systems in order to design a control feedback, based on a preliminary linear controller design. In [42, 49], this idea is extended to the homogeneous observer design. Ménard et al. [32] extend this method to the design of an observer converging in fixed time, which combines two homogeneity degrees. However, the methods proposed in [31, 32] are only feasible for (negative and positive) homogeneity degrees near zero, i.e., d ≈ 0, and thus they cannot include Levant's differentiator, which has d = −1. In a recent work [38], a differentiator of arbitrary order, homogeneous in the bi-limit, has been proposed. That algorithm allows arbitrary homogeneity degrees near zero and far from it, including also the discontinuous injection term, crucial for the robustness of Levant's differentiator. One of the nice features of designing an algorithm to be homogeneous in the bi-limit [1], in general, and of the differentiator proposed in [38], specifically, is that assigning a positive homogeneity degree d_∞ > 0 to the ∞-limit and a negative homogeneity degree d_0 < 0 to the 0-limit approximation, the estimation converges in fixed time for a class of signals depending on the selected homogeneity degrees. Selecting the homogeneity degree d_0 = −1, robust and exact estimation is assured for signals with bounded Lipschitz constant, i.e., |f_0^{(n)}(t)| ≤ Δ. Moreno [38] is an extension to arbitrary order of the smooth combination of two homogeneous differentiators developed in [13, 14, 36] and in the recent work [47]. It also extends to the discontinuous case the homogeneous observer design developed for continuous systems in [44, 53] and highly improved in [1]. We are mostly interested in the instances −1 ≤ d_0 ≤ 0 ≤ d_∞ < 1/(n−1), and particularly d_0 = −1. However, other combinations of d_0 ≤ d_∞ are possible.

The goal of this chapter is to give an overview of some available methods to design differentiators, and to emphasize the advantages of a smooth combination of two homogeneous algorithms, hence the name bi-homogeneous differentiator, as proposed for the general case in [38]. A novel and very simple realization of the bi-homogeneous differentiator is derived here, leading to a much simpler implementation of the algorithm. No technical proofs are provided, since they are essentially contained in the literature, mainly in [38]; the reader is referred to the appropriate sources for the technical details.

The chapter is organized as follows. Section 2 reviews some fundamental concepts required for reading the chapter. Section 3 presents the bi-homogeneous differentiator, and a novel and very simple realization is introduced. The fundamental properties of the algorithms are formulated in Sect. 4; the proofs are not presented here, but the appropriate literature, which uses a Lyapunov approach, is cited. An illustrative example is presented in Sect. 5. Some conclusions are drawn in Sect. 6.
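As a concrete illustration of the homogeneous family (5), the following minimal sketch simulates it with an explicit-Euler discretization (d = −1 gives a Levant-type scheme, d = 0 the linear high-gain observer). The gains, the scaling L and the step size are assumed placeholder values, and this naive discretization is not the one studied in [4, 20, 29, 30]; it is only meant to show the structure of (5).

```python
import numpy as np

def spow(z, p):
    # signed power: |z|^p * sign(z); for p = 0 this is sign(z)
    return np.sign(z) * np.abs(z) ** p

def homogeneous_differentiator(f_samples, dt, n, d, k, L=1.0):
    """Euler sketch of the family (5); x_i estimates f0^(i-1).
    k: list of n gains (assumed stabilizing), d in [-1, 1/(n-1))."""
    x = np.zeros(n)
    X = np.zeros((len(f_samples), n))
    den = 1.0 - (n - 1) * d
    for j, f in enumerate(f_samples):
        e1 = x[0] - f
        xdot = np.empty(n)
        for i in range(1, n):                      # rows i = 1..n-1 of (5)
            p = (1.0 - (n - i - 1) * d) / den
            xdot[i - 1] = -k[i - 1] * L**i * spow(e1, p) + x[i]
        xdot[n - 1] = -k[n - 1] * L**n * spow(e1, (1.0 + d) / den)  # last row
        x = x + dt * xdot
        X[j] = x
    return X

# usage sketch with assumed gains and a small step size
t = np.arange(0.0, 20.0, 1e-3)
f0 = 0.5 * np.sin(0.5 * t) + 0.5 * np.cos(t)
est = homogeneous_differentiator(f0, 1e-3, n=3, d=-1.0, k=[3.0, 1.5 * 3**0.5, 1.1])
```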

2 Preliminaries

The notation used is rather standard. The concepts of homogeneity and homogeneity in the bi-limit are briefly recalled; for more details the reader is referred to [3, 8, 9] for homogeneity and to [1, 12] for homogeneity in the bi-limit. The dilation operator for a vector x ∈ R^n is defined as Δ_ε^r x = [ε^{r_1} x_1, …, ε^{r_n} x_n]^T, for all real values ε > 0 and n real numbers r_i > 0; r_i > 0 is the weight of x_i, while r := [r_1, …, r_n] is the vector of weights. A function W : R^m → R^n (respectively, a vector field f : R^n → R^n) is r-homogeneous of degree d ∈ R, abbreviated (r, d)-homogeneous, if ∀ε > 0 and ∀x ∈ R^m \ {0} the equivalence W(Δ_ε^r x) = ε^d W(x) (respectively, f(Δ_ε^r x) = ε^d Δ_ε^r f(x)) holds. A function φ : R^n → R is homogeneous in the 0-limit, with associated triple (r_0, d_0, φ_0), if near x = 0 it is approximated by the (r_0, d_0)-homogeneous function φ_0. It is homogeneous in the ∞-limit, with associated triple (r_∞, d_∞, φ_∞), if near x = ∞ it is approximated by the (r_∞, d_∞)-homogeneous function φ_∞. For vector fields and set-valued vector fields, similar definitions are used. Lastly, a function φ : R^n → R (or a vector field f : R^n → R^n, or a set-valued vector field F : R^n ⇒ R^n) is homogeneous in the bi-limit, abbreviated bi-homogeneous, if it is homogeneous in both the 0-limit and the ∞-limit.


Consider a system ẋ = f(x), x(0) = x_0, with state vector x ∈ R^n and vector field f : R^n → R^n, with Filippov solutions [17], and assume that the origin x = 0 is an asymptotically stable equilibrium point. Let B_r = {x ∈ R^n : ‖x‖ < r} be an open ball with radius r > 0. The origin x = 0 is finite-time stable if: (i) it is locally asymptotically stable, and (ii) any solution x(t, x_0) reaches x(t, x_0) = 0 at some finite time t = T(x_0) for x_0 ∈ B_r, for some r > 0, and stays there for all t ≥ T(x_0), where T : B_r → R_+ ∪ {0} is the settling-time function. It is globally finite-time stable if B_r = R^n. It is well known that for homogeneous systems local stability is equivalent to global stability, and that asymptotic stability becomes finite-time stability when the homogeneity degree is negative [3, 8, 28]. Let the origin of a Filippov differential inclusion ẋ ∈ F(x) be strongly locally asymptotically stable and suppose that the set-valued vector field F is r-homogeneous of degree d < 0. Then x = 0 is strongly globally finite-time stable and the settling-time function T(x_0) is continuous at zero and locally bounded. Furthermore, for any strongly asymptotically stable differential inclusion ẋ ∈ F(x), there exists a C^∞ homogeneous strong Lyapunov function [3, 7, 9, 28, 41]. Finally, the symbol ⌈z⌋^p = |z|^p sign(z), where z ∈ R is a variable and p ∈ R a constant, denotes the signed power p of z. Note that ⌈z⌋^0 = sign(z), d/dz ⌈z⌋^m = m |z|^{m−1}, and d/dz |z|^m = m ⌈z⌋^{m−1}.
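The signed power ⌈z⌋^p and the derivative identities above are used throughout the chapter. The following small helper (a hypothetical utility, not taken from [38]) implements ⌈z⌋^p and checks d/dz ⌈z⌋^m = m|z|^{m−1} by a finite difference away from zero.

```python
import numpy as np

def spow(z, p):
    # signed power: ⌈z⌋^p = |z|^p sign(z); spow(z, 0) = sign(z)
    return np.sign(z) * np.abs(z) ** p

# finite-difference check of d/dz ⌈z⌋^m = m |z|^(m-1) at a point z != 0
z, m, h = 0.7, 1.8, 1e-6
lhs = (spow(z + h, m) - spow(z - h, m)) / (2 * h)
rhs = m * np.abs(z) ** (m - 1)
assert abs(lhs - rhs) < 1e-5
```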

3 The Arbitrary Order Differentiator Homogeneous in the Bi-limit

3.1 The First-Order Differentiator

To estimate the first derivative f_0^{(1)}(t) = d f_0(t)/dt of the base signal f_0(t), the following algorithm, homogeneous in the bi-limit, is used in [13, 14, 19, 34, 36]:

ẋ_1 = −k_1 L ψ_1(x_1 − f) + x_2,
ẋ_2 = −k_2 L^2 ψ_2(x_1 − f),                                            (6)

where

ψ_2(s) = (dψ_1(s)/ds) ψ_1(s),

and ψ_1(s) is a monotone increasing function, for example

ψ_1(s) = κ ⌈s⌋^{1/(1−d_0)} + θ ⌈s⌋^{1/(1−d_∞)},

with −1 ≤ d_0 < 0 and 0 ≤ d_∞ < 1, and k_1 > 0, k_2 > 0, κ > 0, θ > 0, L > 0. In this case,

ψ_2(s) = (κ^2/(1−d_0)) ⌈s⌋^{(1+d_0)/(1−d_0)} + (θ^2/(1−d_∞)) ⌈s⌋^{(1+d_∞)/(1−d_∞)} + ((2−d_0−d_∞) κ θ /((1−d_0)(1−d_∞))) ⌈s⌋^{(1−d_0 d_∞)/((1−d_0)(1−d_∞))}.

It is possible to show that the quadratic-like function [14, 33, 34, 36] V_Q(e) = ζ^T P ζ, with ζ^T = [ψ_1(e_1), e_2], where e^T = [e_1, e_2] is the differentiation error, e_1 = x_1 − f_0, e_2 = x_2 − f_0^{(1)}, is a Lyapunov function for the estimation error dynamics. Moreover, P = P^T ∈ R^{2×2} is the unique positive definite solution of the algebraic Lyapunov equation

A^T P + P A = −Q,   where   A = [ −k_1  1 ; −k_2  0 ]

is Hurwitz, and Q = Q^T ∈ R^{2×2} is an arbitrary positive definite matrix. Note that since ψ_1(e_1) is not differentiable at e_1 = 0, the function V_Q(e) is not smooth, and not even Lipschitz. Therefore some care has to be taken to conclude that it is a Lyapunov function. Furthermore, V_Q(e) alone does not allow one to conclude the fixed-time convergence of the algorithm. For this purpose, an extra non-quadratic term has been added to the quadratic function V_Q(e) in [34] to show this property. The proposed Lyapunov function is given by

V(e) = V_Q(e) + V_N(e),   V_N(e) = δ k_2 |ψ_1(e_1)|^2 − ⌈ψ_1(e_1)⌋^{1−d_∞} ⌈e_2⌋^{1+d_∞} + δ |e_2|^2,

with δ > 0 sufficiently large. Its derivative satisfies

V̇ ≤ −γ_1 V_Q^{(2+d_0)/2}(e) − γ_2 |e_1|^{d_∞/(1−d_∞)} V_Q(e) − γ_3 V_N^{(2+d_∞)/2}(e),

for some positive constants γ_1, γ_2, and γ_3.
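For illustration, a minimal Euler simulation of the first-order algorithm (6), with the example choice of ψ_1 above and ψ_2 = ψ_1′ ψ_1 expanded as in the displayed formula, might look as follows. The gains, the scaling L and the step size are assumed placeholder values, not a tuning recommendation.

```python
import numpy as np

def spow(z, p):
    return np.sign(z) * np.abs(z) ** p

def psi1(s, kappa, theta, d0, dinf):
    return kappa * spow(s, 1.0 / (1.0 - d0)) + theta * spow(s, 1.0 / (1.0 - dinf))

def psi2(s, kappa, theta, d0, dinf):
    # psi2 = psi1' * psi1, expanded term by term
    a, b = 1.0 - d0, 1.0 - dinf
    return (kappa**2 / a * spow(s, (1 + d0) / a)
            + theta**2 / b * spow(s, (1 + dinf) / b)
            + kappa * theta * (2 - d0 - dinf) / (a * b) * spow(s, (1 - d0 * dinf) / (a * b)))

def first_order_diff(f_samples, dt, k1=2.0, k2=1.5, L=1.0,
                     kappa=1.0, theta=1.0, d0=-1.0, dinf=0.5):
    x1, x2 = 0.0, 0.0
    deriv_est = []
    for f in f_samples:
        s = x1 - f
        x1, x2 = (x1 + dt * (-k1 * L * psi1(s, kappa, theta, d0, dinf) + x2),
                  x2 + dt * (-k2 * L**2 * psi2(s, kappa, theta, d0, dinf)))
        deriv_est.append(x2)          # estimate of f0'(t)
    return np.array(deriv_est)
```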

3.2 The Arbitrary Order Differentiator

In order to estimate the derivatives f_0^{(i)}(t) = d^i f_0(t)/dt^i, for i = 1, …, n − 1, [38] proposes an extension of (6), given by the family of differentiators

ẋ_i = −k_i φ_i(x_1 − f) + x_{i+1},   i = 1, …, n − 1,
ẋ_n = −k_n φ_n(x_1 − f),                                                (7)

which are homogeneous in the bi-limit. The output injection terms φ_i(·) are obtained as the composition

φ_i(s) = ϕ_i ∘ ⋯ ∘ ϕ_2 ∘ ϕ_1(s),                                        (8)

of the functions ϕ_i : R → R,

ϕ_i(s) = ϕ_{i,0}(s) + ϕ_{i,∞}(s) = κ_i ⌈s⌋^{r_{0,i+1}/r_{0,i}} + θ_i ⌈s⌋^{r_{∞,i+1}/r_{∞,i}},        (9)

which are monotonically increasing. ϕ_i is a sum of two (signed) power functions, with powers given by ratios of the numbers

r_{0,n} = 1,   r_{0,i} = r_{0,i+1} − d_0 = 1 − (n − i) d_0,   i = 1, …, n,
r_{∞,n} = 1,   r_{∞,i} = r_{∞,i+1} − d_∞ = 1 − (n − i) d_∞,   i = 1, …, n.        (10)

These are completely determined by two parameters, d_0 and d_∞, satisfying −1 ≤ d_0 ≤ d_∞ < 1/(n−1). When d_0 > −1, the φ_i in (8) are continuous on R, continuously differentiable on R \ {0}, strictly increasing and surjective. When d_0 = −1,

φ_n(s) = κ_n ⌈s⌋^0 + θ_n ⌈ϕ_{n−1} ∘ ⋯ ∘ ϕ_2 ∘ ϕ_1(s)⌋^{r_{∞,n+1}/r_{∞,n}}

is discontinuous at s = 0. For the differentiator (7) this means that, after the application of Filippov's regularization procedure [17], it becomes a differential inclusion, having solutions in the sense of Filippov. The sign function ⌈s⌋^0 = sign(s) turns into the multivalued function

⌈s⌋^0 = { 1 if s > 0;  [−1, 1] if s = 0;  −1 if s < 0 }.

The internal gains κ_i > 0, θ_i > 0 correspond to the desired weighting of each of the terms in ϕ_i, and are chosen as arbitrary positive values. A simple selection is κ_i = μ, θ_i = 1 − μ for i = 1, …, n, with 0 < μ < 1.
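To make the construction (8)–(10) concrete, the following sketch builds the weight sequences r_{0,i}, r_{∞,i} and the composed injections φ_i for a given order n. The values of κ_i, θ_i, d_0, d_∞ used in the usage line are placeholders, and for d_0 = −1 the outermost function is discontinuous, which the sketch does not regularize.

```python
import numpy as np

def spow(z, p):
    return np.sign(z) * np.abs(z) ** p

def weights(n, d):
    # r_n = 1, r_i = r_{i+1} - d  =>  r_i = 1 - (n - i) d, listed for i = 1..n+1
    return [1.0 - (n - i) * d for i in range(1, n + 2)]

def make_phis(n, d0, dinf, kappa, theta):
    r0, ri = weights(n, d0), weights(n, dinf)
    # varphi_i(s) = kappa_i ⌈s⌋^{r0_{i+1}/r0_i} + theta_i ⌈s⌋^{ri_{i+1}/ri_i}   (eq. (9))
    varphi = [lambda s, i=i: kappa[i] * spow(s, r0[i + 1] / r0[i])
                              + theta[i] * spow(s, ri[i + 1] / ri[i])
              for i in range(n)]
    def phi(i, s):
        # phi_i = varphi_i ∘ ... ∘ varphi_1   (eq. (8)); here i = 0..n-1 means phi_{i+1}
        for j in range(i + 1):
            s = varphi[j](s)
        return s
    return phi

phi = make_phis(n=3, d0=-0.5, dinf=0.15, kappa=[1.0] * 3, theta=[1.0] * 3)
print([phi(i, 0.3) for i in range(3)])   # phi_1, phi_2, phi_3 evaluated at s = 0.3
```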

A Simpler Version of the Differentiator

The output injection terms φ_i of the differentiator (7) become rather complex, although they are compositions of the simple functions (9). Since the proof of convergence of the differentiator given in [38] is valid for a larger class of functions ϕ_i(·) than the ones given by (9), we can take advantage of this flexibility to obtain a simpler version of the differentiator (7), given by

ẋ_i = −k_i φ̃_i(x_1 − f) + x_{i+1},   i = 1, …, n − 1,
ẋ_n = −k_n φ̃_n(x_1 − f),                                               (13)

where

φ̃_i(s) = κ̃_i ⌈s⌋^{r_{0,i+1}/r_{0,1}} + θ̃_i ⌈s⌋^{r_{∞,i+1}/r_{∞,1}},                   (14)

is obtained by selecting ϕ_i as

ϕ_1(s) = κ̃_1 ⌈s⌋^{r_{0,2}/r_{0,1}} + θ̃_1 ⌈s⌋^{r_{∞,2}/r_{∞,1}},
ϕ_i(s) = φ̃_i ∘ φ̃_{i−1}^{−1}(s),   i = 2, …, n.                                  (15)

The proof given in [38] carries over to (13) with the functions ϕ_i(s) in (15). The differentiators (7) and (13) are very flexible. By selecting the parameters d_0, d_∞, κ_i and θ_i appropriately, all the algorithms presented previously can be recovered. The (linear) high-gain differentiator (3) is obtained using d_0 = d_∞ = 0, κ_i = θ_i = 1. Equations (7) and (13) become Levant's differentiator (4) if d_0 = d_∞ = −1, κ_i = θ_i = 1, while the family of homogeneous differentiators (5) is obtained setting −1 ≤ d_0 = d_∞ < 1/(n−1), κ_i = θ_i = 1. Convergence in finite time is obtained for d_0 = d_∞ < 0, for d_0 = d_∞ = 0 convergence is exponential, and when d_0 = d_∞ > 0 convergence is asymptotic. In the latter case, any (fixed) neighborhood of zero will be attained within a time independent of the initial conditions [1]. The case d_0 = d_∞ = −1 is particularly interesting, since φ_n is discontinuous and induces a higher-order sliding mode at the origin. The estimation, in the absence of noise, converges robustly, exactly, and in finite time under the condition that the n-th derivative of the signal is bounded, |f_0^{(n)}(t)| ≤ Δ. If d_0 = d_∞ > −1, convergence is only obtained for Δ = 0.

In contrast to all previous algorithms (3)–(5), the differentiators (7) and (13) are not homogeneous, but they are homogeneous in the bi-limit [1], abbreviated as bi-homogeneous. Near the origin (respectively, far from 0) they are approximated by a homogeneous system of degree d_0 (respectively, d_∞). Choosing d_∞ > 0 and d_0 < 0 for the proposed differentiators (7) and (13) leads to fixed-time convergence [31]. This means that exact and robust estimation is surely attained after a constant time T, independent of the initial conditions, for polynomial signals, for which f_0^{(n)}(t) ≡ 0. Furthermore, if d_0 = −1, this is true for all signals having a bounded Lipschitz constant, |f_0^{(n)}(t)| ≤ Δ.

The main result is that, in the absence of noise, the differentiators (7) and (13) estimate asymptotically the first n − 1 derivatives of the signal f_0(t). Let S_0^n := { f_0(t) : f_0^{(n)}(t) ≡ 0 } stand for the class of polynomial signals, and S_Δ^n := { f_0(t) : |f_0^{(n)}(t)| ≤ Δ } represent the class of n-Lipschitz signals.

Assumption 1 f(t) = f_0(t) + ν(t), with f_0(t) n-times differentiable, |f_0^{(n)}(t)| ≤ Δ, and ν(t) a measurable, uniformly bounded noise, i.e., |ν(t)| ≤ ε.
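The simpler realization (13)–(14) avoids the nested compositions altogether. A minimal Euler sketch for arbitrary order n could read as follows; the gains k and the step size are placeholders that would in practice come from a proper gain design (Algorithm 1 below) and a proper discretization.

```python
import numpy as np

def spow(z, p):
    return np.sign(z) * np.abs(z) ** p

def phi_tilde(s, i, n, d0, dinf, kt, tt):
    # eq. (14): exponents r_{0,i+1}/r_{0,1} and r_{inf,i+1}/r_{inf,1}, i = 1..n
    r0 = lambda j: 1.0 - (n - j) * d0
    ri = lambda j: 1.0 - (n - j) * dinf
    return kt * spow(s, r0(i + 1) / r0(1)) + tt * spow(s, ri(i + 1) / ri(1))

def simple_bihomogeneous_diff(f_samples, dt, n, k, d0=-1.0, dinf=0.1,
                              kt=1.0, tt=1.0):
    x = np.zeros(n)
    X = np.zeros((len(f_samples), n))
    for j, f in enumerate(f_samples):
        e1 = x[0] - f
        xdot = np.array([-k[i - 1] * phi_tilde(e1, i, n, d0, dinf, kt, tt)
                         + (x[i] if i < n else 0.0) for i in range(1, n + 1)])
        x = x + dt * xdot
        X[j] = x
    return X
```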

3.3 Convergence in Absence of Noise

It is clear that in the presence of an unknown noise signal it is essentially impossible to estimate f_0 and its derivatives exactly. However, it is important to know the behavior of a differentiator under the ideal situation that there is no noise, since this has an impact on its response in the presence of measurement noise. This is clarified, for the family of proposed differentiators, in the following:

Theorem 2 Let the signal f(t) satisfy Assumption 1. Take −1 ≤ d_0 ≤ d_∞ < 1/(n−1) and select arbitrary internal gains κ_i > 0 and θ_i > 0, for i = 1, …, n. Further, assume there is no noise, i.e., ν(t) ≡ 0. Under these conditions, there are gains k_i > 0, for i = 1, …, n, so that the trajectories of the homogeneous in the bi-limit differentiators (7) or (13) satisfy:
(i) If d_0 = −1, they converge globally and in finite time to the signal's derivatives, i.e., x_i(t) = f_0^{(i−1)}(t) for all t ≥ T, for some T > 0 depending on the gains and the initial conditions, for the class S_Δ^n of n-Lipschitz signals.
(ii) If −1 = d_0 < 0 < d_∞ < 1/(n−1), they converge in fixed time to the signal's derivatives, i.e., x_i(t) = f_0^{(i−1)}(t) for all t ≥ T, for some T > 0 depending on the gains but independent of the initial conditions, for the class S_Δ^n of n-Lipschitz signals.
(iii) If −1 < d_0 < 0, they converge globally and in finite time to the signal's derivatives, i.e., x_i(t) = f_0^{(i−1)}(t) for all t ≥ T, for some T > 0 depending on the gains and the initial conditions, for the class S_0^n of polynomial signals.
(iv) If −1 < d_0 < 0 < d_∞ < 1/(n−1), they converge in fixed time to the signal's derivatives, i.e., x_i(t) = f_0^{(i−1)}(t) for all t ≥ T, for some T > 0 which depends on the gains but is independent of the initial conditions, for the class S_0^n of polynomial signals.
(v) If d_0 = 0, they converge globally and locally exponentially to the signal's derivatives, i.e., x_i(t) → f_0^{(i−1)}(t) as t → ∞, for the class S_0^n of polynomial signals.
(vi) If d_0 > 0, they converge globally and asymptotically to the signal's derivatives, i.e., x_i(t) → f_0^{(i−1)}(t) as t → ∞, for the class S_0^n of polynomial signals.

The proof of Theorem 2 is given in [38]. The distinguishing feature of the differentiators (7) and (13), compared with their purely homogeneous versions, is that they converge in fixed time if the homogeneity degrees are chosen as d_0 < 0 < d_∞. Moreover, if d_0 = −1 a discontinuous differentiator is obtained and convergence is achieved for the rather large class of signals S_Δ^n. Note furthermore that S_0^n ⊂ S_Δ^n, and S_Δ^n is much larger than S_0^n.

3.4 Effect of Noise and the Perturbation δ(t) = −f_0^{(n)}(t)

The differentiation error is defined as e_i := x_i − f_0^{(i−1)}, and its dynamics for (7) is given by

ė_i = −k_i φ_i(e_1 − ν) + e_{i+1},   i = 1, …, n − 1,
ė_n = −k_n φ_n(e_1 − ν) + δ(t).                                         (16)

Here, δ(t) = −f_0^{(n)}(t) represents the n-th order derivative of the base signal, and it is assumed to be uniformly bounded by the Lipschitz constant Δ, i.e., |δ(t)| ≤ Δ for all t ≥ 0. For (13) the same error dynamics is obtained by replacing φ_i → φ̃_i. To simplify the presentation, only (16) will be referred to, but both cases are meant.

When the base signal is corrupted by noise, it is not possible to estimate the values of the derivatives exactly. However, as the following Proposition 1 shows, the estimation error is uniformly and ultimately bounded when the noise signal is bounded. Moreover, the estimation error trajectories have the same property when d_0 > −1 and Δ > 0, and also if d_0 = −1, d_∞ > −1 and the differentiator gains are not sufficiently large to fully compensate the effect of δ(t).

Proposition 1 Choose stabilizing gains k_i for the differentiator (7) or (13) and let the hypotheses of Theorem 2 be fulfilled. The system (16), describing the estimation error, is input-to-state stable with respect to both inputs, the noise ν(t) and the perturbation δ(t), if the homogeneity degrees satisfy −1 < d_0 ≤ d_∞ < 1/(n−1) or −1 = d_0 < d_∞ < 1/(n−1).

Proposition 1 implies that the estimation error e remains bounded when the noise and perturbation signals are bounded. Furthermore, e(t) → 0 if (ν(t), δ(t)) → 0 as t → ∞. The precision of the differentiator relates the steady-state estimation error to the amplitude of the noise or the size of the perturbation (also in steady state). For small noise signals, the precision of the estimation error (16) is clearly determined by the 0-limit approximation; therefore, it is identical to the one of the homogeneous differentiator of homogeneity degree d_0 [11, 27, 28, 45]. For d_0 = −1, |ν(t)| ≤ ε, |δ(t)| ≤ Δ, the following inequalities are thus achieved in finite time:

|x_i(t) − f_0^{(i−1)}(t)| ≤ λ_i Δ^{(i−1)/n} |ε|^{(n−i+1)/n},   ∀t ≥ T.

4 Lyapunov Function Used for the Stability Analysis

The proof of the previous results is performed in [38] using a smooth Lyapunov function, which is homogeneous in the bi-limit. In this section, this function is recalled, since it can be used to provide an estimation of the convergence time and also for the calculation of the gains k_i rendering the algorithm stable. First, the following state transformation is performed, for i = 1, …, n:

z_1 = e_1,   z_2 = e_2/k_1,   …,   z_i = e_i/k_{i−1},   …,   z_n = e_n/k_{n−1},

so that (16) becomes

ż_i = −k̃_i (φ_i(z_1 − ν) − z_{i+1}),   i = 1, …, n − 1,
ż_n = −k̃_n (φ_n(z_1 − ν) − δ̄(t)),                                      (17)

where

k̃_i = k_i/k_{i−1},   i = 1, …, n;   k_0 = 1;   δ̄(t) = −f_0^{(n)}(t)/k_n.

The Lyapunov function given in [38] for n ≥ 2 is constructed in the following form. First, two positive real numbers p_0 and p_∞ are selected, which correspond to the homogeneity degrees of the approximations in the 0-limit and in the ∞-limit. They have to fulfill the conditions

p_0 ≥ max_{i∈{1,…,n}} { (r_{0,i}/r_{∞,i}) (2 r_{∞,i} + d_∞) },   p_∞ ≥ 2 max_{i∈{1,…,n}} { r_{∞,i} + d_∞ },        (18)

and

p_0/r_{0,i} < p_∞/r_{∞,i}.                                              (19)

Next, for i = 1, …, n − 1 the functions

Z_i(z_i, z_{i+1}) = β_{0,i} [ (r_{0,i}/p_0) |z_i|^{p_0/r_{0,i}} − z_i ⌈ϕ_i^{−1}(z_{i+1})⌋^{(p_0 − r_{0,i})/r_{0,i}} + (1 − r_{0,i}/p_0) |ϕ_i^{−1}(z_{i+1})|^{p_0/r_{0,i}} ]
  + β_{∞,i} [ (r_{∞,i}/p_∞) |z_i|^{p_∞/r_{∞,i}} − z_i ⌈ϕ_i^{−1}(z_{i+1})⌋^{(p_∞ − r_{∞,i})/r_{∞,i}} + (1 − r_{∞,i}/p_∞) |ϕ_i^{−1}(z_{i+1})|^{p_∞/r_{∞,i}} ]        (20)

and, for i = n, the function

Z_n(z_n) = β_{0,n} (1/p_0) |z_n|^{p_0} + β_{∞,n} (1/p_∞) |z_n|^{p_∞}

are defined, where the constants β_{0,i} > 0, β_{∞,i} > 0 are chosen arbitrarily. Note that Z_n(z_n) is obtained from (20) with i = n by setting z_{n+1} ≡ 0. In (20), for i = 1, …, n − 1, ϕ_i^{−1} represents the inverse function of ϕ_i, given in (9) or in (15). The Lyapunov function is obtained as the sum of the functions Z_i, as

V(z) = Σ_{j=1}^{n−1} Z_j(z_j, z_{j+1}) + Z_n(z_n),                       (21)

where z = [z_1, …, z_n]^T. Assuming that no noise is present and that either δ(t) ≡ 0 and d_0 > −1, or |δ(t)| ≤ Δ and d_0 = −1, the derivative of V along the trajectories of the estimation error system (17) satisfies

V̇(z) ∈ W(z),                                                            (22)

where

W(z) = −k̃_1 σ_1(z_1, z_2)(φ_1(z_1) − z_2)
  − Σ_{j=2}^{n−1} k̃_j [ s_{j−1}(z_{j−1}, z_j) + σ_j(z_j, z_{j+1}) ] (φ_j(z_1) − z_{j+1})
  − k̃_n [ s_{n−1}(z_{n−1}, z_n) + σ_n(z_n) ] ( φ_n(z_1) − (Δ/k_n)[−1, 1] ).          (23)

Here, [−1, 1] ⊂ R stands for the closed interval of real numbers between −1 and 1, and the functions σ_i and s_i are the partial derivatives of the functions Z_i(z_i, z_{i+1}), i.e.,

σ_i(z_i, z_{i+1}) := ∂Z_i(z_i, z_{i+1})/∂z_i
  = β_{0,i} ( ⌈z_i⌋^{(p_0 − r_{0,i})/r_{0,i}} − ⌈ϕ_i^{−1}(z_{i+1})⌋^{(p_0 − r_{0,i})/r_{0,i}} )
  + β_{∞,i} ( ⌈z_i⌋^{(p_∞ − r_{∞,i})/r_{∞,i}} − ⌈ϕ_i^{−1}(z_{i+1})⌋^{(p_∞ − r_{∞,i})/r_{∞,i}} ),       (24)

s_i(z_i, z_{i+1}) := ∂Z_i(z_i, z_{i+1})/∂z_{i+1}
  = −β_{0,i} ((p_0 − r_{0,i})/r_{0,i}) ( z_i − ϕ_i^{−1}(z_{i+1}) ) |ϕ_i^{−1}(z_{i+1})|^{(p_0 − 2r_{0,i})/r_{0,i}} ∂ϕ_i^{−1}(z_{i+1})/∂z_{i+1}
  − β_{∞,i} ((p_∞ − r_{∞,i})/r_{∞,i}) ( z_i − ϕ_i^{−1}(z_{i+1}) ) |ϕ_i^{−1}(z_{i+1})|^{(p_∞ − 2r_{∞,i})/r_{∞,i}} ∂ϕ_i^{−1}(z_{i+1})/∂z_{i+1}.      (25)

Notice that s_n(z_n, z_{n+1}) ≡ 0. Moreover, the functions σ_i(z_i, z_{i+1}) and s_i(z_i, z_{i+1}) are continuous. Furthermore, for all i = 1, …, n − 1, the set of values where σ_i = 0 coincides with the set of values where s_i = 0. This set corresponds also to the points where Z_i reaches its minimum Z_i = 0. It is possible to show (see [38]) that, selecting the gains k̃_i appropriately, W(z) becomes negative definite.

Proposition 2 Let the hypotheses of Theorem 2 be satisfied. Select p_0 and p_∞ according to (18) and (19). Assume that there is no noise and that −1 ≤ d_0 ≤ d_∞ < 1/(n−1), with Δ = 0 in case d_0 > −1. Under these conditions, for i = 1, …, n, there exist gains k_i > 0 so that V(z), given by (21), is a Lyapunov function for the estimation error dynamics (17), which is continuously differentiable and homogeneous in the bi-limit. Furthermore, the function V fulfills, for some constants η_0 > 0 and η_∞ > 0, the differential inequality

V̇(z) ≤ −η_0 V^{(p_0 + d_0)/p_0}(z) − η_∞ V^{(p_∞ + d_∞)/p_∞}(z).         (26)

This implies that z = 0 is an equilibrium point of (17), which is globally asymptotically stable.

4.1 Convergence Time Estimation

The differential inequality (26) leads to an estimation of the convergence time. When d_0 < 0 the estimation error is globally finite-time stable, and the settling-time function T(z_0) is bounded for all z_0 ∈ R^n by

T(z_0) ≤ (p_0/(|d_0| η_0)) V^{|d_0|/p_0}(z_0).                           (27)

z = 0 is fixed-time stable when d_0 < 0 < d_∞. This means that the settling-time function T(z_0) is globally bounded by a positive constant T̄, which is independent of z_0, i.e.,

T(z_0) ≤ T̄ ≤ (1/η_∞)(p_0/|d_0|) [ (η_∞/η_0)(p_0/p_∞)(|d_0|/d_∞) + 1 ]^{(p_∞/p_0)(|d_0|/d_∞) + 1},   ∀z_0 ∈ R^n.        (28)

Note also that the function T (z 0 ) is continuous at z 0 = 0 and locally bounded.
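As a quick numerical aid, one can evaluate the settling-time estimates implied by (26): the sketch below computes the finite-time bound (27) and, for the fixed-time case, the conservative estimate p_0/(η_0|d_0|) + p_∞/(η_∞ d_∞) obtained by splitting the integral of (26) at V = 1. The latter is a simple upper estimate, not the tighter bound (28), and all numerical values in the usage lines are assumed.

```python
def finite_time_bound(V0, p0, d0, eta0):
    # bound (27): T(z0) <= p0 / (|d0| * eta0) * V0**(|d0| / p0)
    return p0 / (abs(d0) * eta0) * V0 ** (abs(d0) / p0)

def fixed_time_bound(p0, d0, pinf, dinf, eta0, etainf):
    # split the integral of dV / (eta0 V^{1-|d0|/p0} + etainf V^{1+dinf/pinf}) at V = 1
    return p0 / (eta0 * abs(d0)) + pinf / (etainf * dinf)

print(finite_time_bound(V0=10.0, p0=4.0, d0=-1.0, eta0=0.5))
print(fixed_time_bound(p0=4.0, d0=-1.0, pinf=6.0, dinf=0.15, eta0=0.5, etainf=0.2))
```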

4.2 Gain Calculation

Gains k_i > 0, for i = 1, …, n, assuring the convergence of the differentiator, can be calculated using the Lyapunov function V(z). For this, it is necessary to define the functions ω_i, given by

ω_{n−1}(k̃_n) := max_{(z_{n−1}, z_n) ∈ R^2}  [ −k̃_n ( s_{n−1}(z_{n−1}, z_n) + σ_n(z_n) ) ϕ_n(ϕ_{n−1}(z_{n−1})) ] / [ σ_{n−1}(z_{n−1}, z_n) ( ϕ_{n−1}(z_{n−1}) − z_n ) ],

ω_i(k̃_{i+1}, …, k̃_n) := max_{(z_i, …, z_n) ∈ R^{n−i+1}}  [ Σ_{j=i+1}^{n} k̃_j ( s_{j−1}(z_{j−1}, z_j) + σ_j(z_j, z_{j+1}) ) ( z_{j+1} − ϕ_j ∘ ⋯ ∘ ϕ_i(z_i) ) ] / [ σ_i(z_i, z_{i+1}) ( ϕ_i(z_i) − z_{i+1} ) ].

Note that ϕ_n(s) in these expressions has to be replaced by ϕ_n(s) − (Δ/k_n)[−1, 1]. It is shown in [38] that these maximizations are well posed. The function ω_i depends on the values of the previously computed gains (k̃_{i+1}, …, k̃_n), on the selection of β_{0,j} > 0, β_{∞,j} > 0, d_0, d_∞, p_0 and p_∞, and also on the selected values of the internal gains κ_j > 0 and θ_j > 0. The following procedure can be used to calculate appropriate gains for the differentiator.

Algorithm 1 For a given order n > 1, choose −1 ≤ d_0 ≤ d_∞ < 1/(n−1), internal gains κ_j > 0, θ_j > 0, for j = 1, …, n, and Δ ≥ 0 if d_0 = −1. Take p_0 and p_∞ so that (18) and (19) are satisfied. A set of stabilizing gains k_i > 0, for i = 1, …, n, can be calculated backwards as follows:

(a) Select k_n κ_n > Δ and k̃_n > 0.
(b) For i = n − 1, n − 2, …, 1 select k̃_i > ω_i(k̃_{i+1}, …, k̃_n).

Since the calculation of the gains is recursive, it follows that the gains (k̃_j, …, k̃_n) stabilize the differentiator of order n − j + 1.

4.3 Acceleration of the Convergence and Scaling of the Lipschitz Constant Δ

Once a set of gains has been determined for the differentiator, it is possible to accelerate its convergence velocity and to modify the size of the Lipschitz constant Δ by performing a scaling of the gains. For this, select arbitrary constants α > 0 and L > 0 and scale the gains of system (7) as

κ_i → (L^n/α)^{d_0/r_{0,i}} κ_i,   θ_i → (L^n/α)^{d_∞/r_{∞,i}} θ_i,   k_i → k_i L^i.        (29)

This means that the differentiator becomes

ẋ_i = −k_i L^i φ_i(x_1 − f) + x_{i+1},   i = 1, …, n − 1,
ẋ_n = −k_n L^n φ_n(x_1 − f),                                            (30)

with the functions ϕ_i in (9) given by

ϕ_i(s) = (L^n/α)^{d_0/r_{0,i}} κ_i ⌈s⌋^{r_{0,i+1}/r_{0,i}} + (L^n/α)^{d_∞/r_{∞,i}} θ_i ⌈s⌋^{r_{∞,i+1}/r_{∞,i}}.


Note that

ϕ_i((α/L^n) s) = κ_i (α/L^n)^{−d_0/r_{0,i}} (α/L^n)^{r_{0,i+1}/r_{0,i}} ⌈s⌋^{r_{0,i+1}/r_{0,i}} + θ_i (α/L^n)^{−d_∞/r_{∞,i}} (α/L^n)^{r_{∞,i+1}/r_{∞,i}} ⌈s⌋^{r_{∞,i+1}/r_{∞,i}} = (α/L^n) ϕ_i(s),

and therefore¹

φ_i((α/L^n) ε_1) = ϕ_i ∘ ⋯ ∘ ϕ_2 ∘ ϕ_1((α/L^n) ε_1) = (α/L^n) φ_i(ε_1).

¹ Note that these relations do not imply that ϕ_i and φ_i are homogeneous of degree one, because this equality is not valid for arbitrary (positive) constants multiplying s, but only for the ratio α/L^n, which is used in the definition of ϕ_i. The functions ϕ_i and φ_i are bi-homogeneous.

For the simple version of the differentiator (13), consider instead the scaling of the gains

κ̃_i → (L^n/α)^{i d_0/(1−(n−1)d_0)} κ̃_i,   θ̃_i → (L^n/α)^{i d_∞/(1−(n−1)d_∞)} θ̃_i,   k_i → k_i L^i.        (31)

Thus, using the scaled differentiation error given by ε_i = (L^{n−i+1}/α) e_i leads in both cases to a dynamics given by

ε̇_i = L [ −k_i φ_i(ε_1 − (L^n/α) ν) + ε_{i+1} ],   i = 1, …, n − 1,
ε̇_n = L [ −k_n φ_n(ε_1 − (L^n/α) ν) + (1/α) δ(t) ]

(with φ̃_i instead of φ_i in the simple version of the differentiator). This is equivalent to (16) with a time scaling t → Lt, a perturbation scaling δ → αδ, and a noise scaling ν → (L^n/α) ν. This means that the scaled system accelerates the convergence and supports a perturbation f_0^{(n)}(t) = d^n f_0(t)/dt^n of a larger size (Lipschitz constant), i.e.,

T(·) → (1/L) T(·),   Δ → αΔ.

The effect of the noise is also scaled by the factor L^n/α. Using the previous scalings, any (worst-case) fixed convergence time can be assigned to the differentiator. This is achieved following the procedure:


(i) For a pair d_0 < 0 < d_∞ and a set of parameters κ_j > 0 and θ_j > 0, use, e.g., Algorithm 1 to obtain a set of gains k_i stabilizing the differentiator for a perturbation size Δ.
(ii) Estimate the convergence time T̄ using simulations or by means of (28).
(iii) Select the desired convergence time T̄* and the perturbation size Δ*. Choose the scaling parameters (α, L) of (29) so that they satisfy

α ≥ Δ*/Δ,   L ≥ T̄/T̄*.

This procedure is a generalization of that in [47], which is restricted to the first-order differentiator with d0 = −1.
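The scaling step (iii) is easy to automate. A sketch under assumed inputs (a nominal gain set, an estimated T̄ and a nominal Δ) follows, applying the scaling (29) for the composed realization; for the simple realization one would use (31) instead.

```python
def scale_gains(k, kappa, theta, d0, dinf, n, T_bar, Delta, T_star, Delta_star):
    """Pick (alpha, L) per step (iii) and apply the scaling (29).
    Lists are 0-indexed: k[i], kappa[i], theta[i] correspond to subscript i+1."""
    alpha = Delta_star / Delta          # smallest alpha with alpha >= Delta*/Delta
    L = T_bar / T_star                  # smallest L with L >= T_bar / T*
    r0 = [1.0 - (n - i) * d0 for i in range(1, n + 1)]
    ri = [1.0 - (n - i) * dinf for i in range(1, n + 1)]
    kappa_s = [(L**n / alpha) ** (d0 / r0[i]) * kappa[i] for i in range(n)]
    theta_s = [(L**n / alpha) ** (dinf / ri[i]) * theta[i] for i in range(n)]
    k_s = [k[i] * L ** (i + 1) for i in range(n)]
    return alpha, L, k_s, kappa_s, theta_s
```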

5 Example: The Bi-homogeneous Second-Order Differentiator

The second-order differentiator (n = 3) has a realization (7), proposed in [38], given by

ẋ_1 = −k_1 φ_1(e_1) + x_2,
ẋ_2 = −k_2 φ_2(e_1) + x_3,
ẋ_3 = −k_3 φ_3(e_1),
e_1 = x_1 − f,                                                           (32)

where

φ_1(e_1) = ϕ_1(e_1) = κ_1 ⌈e_1⌋^{(1−d_0)/(1−2d_0)} + θ_1 ⌈e_1⌋^{(1−d_∞)/(1−2d_∞)},
φ_2(e_1) = ϕ_2(ϕ_1(e_1)) = κ_2 ⌈ κ_1 ⌈e_1⌋^{(1−d_0)/(1−2d_0)} + θ_1 ⌈e_1⌋^{(1−d_∞)/(1−2d_∞)} ⌋^{1/(1−d_0)}
          + θ_2 ⌈ κ_1 ⌈e_1⌋^{(1−d_0)/(1−2d_0)} + θ_1 ⌈e_1⌋^{(1−d_∞)/(1−2d_∞)} ⌋^{1/(1−d_∞)},
φ_3(e_1) = ϕ_3(ϕ_2(ϕ_1(e_1))) = κ_3 ⌈φ_2(e_1)⌋^{1+d_0} + θ_3 ⌈φ_2(e_1)⌋^{1+d_∞}.

This shows the increasing complexity of the output injection terms. The simple realization (13) proposed in this chapter is given by

ẋ_1 = −k_1 ( κ̃_1 ⌈e_1⌋^{(1−d_0)/(1−2d_0)} + θ̃_1 ⌈e_1⌋^{(1−d_∞)/(1−2d_∞)} ) + x_2,
ẋ_2 = −k_2 ( κ̃_2 ⌈e_1⌋^{1/(1−2d_0)} + θ̃_2 ⌈e_1⌋^{1/(1−2d_∞)} ) + x_3,
ẋ_3 = −k_3 ( κ̃_3 ⌈e_1⌋^{(1+d_0)/(1−2d_0)} + θ̃_3 ⌈e_1⌋^{(1+d_∞)/(1−2d_∞)} ),
e_1 = x_1 − f.                                                           (33)

Fig. 1 Time behavior of the three differentiation errors e_1, e_2, e_3 for the base signal f_0(t) = 0.5 sin(0.5t) + 0.5 cos(t), ε = 0, initial conditions x_0 = [1, −5, 1] × 10^p, with p ∈ {0, 3, 6, 9}, and scaling L = 1

In both realizations, f^{(1)}(t) is estimated by x_2(t) and f^{(2)}(t) is estimated by x_3(t). For the simulations, only the simple version of the differentiator (33) is used. The 0-limit homogeneity degree is chosen to be d_0 = −1, and the ∞-limit homogeneity degree is set to d_∞ = 0.15. The base signal to be differentiated is f(t) = f_0(t) = (1/2) sin((1/2)t) + (1/2) cos(t); for this signal, the Lipschitz constant is Δ = 5/8. The internal gains are selected as κ̃_i = θ̃_i = 1 for i = 1, 2, 3, while the gains are calculated as k_1 = 3, k_2 = 1.5√3, k_3 = 1.1. To illustrate the effect of the scalings on the convergence velocity and the admissible size of the perturbation, two values of the scaling parameters have been selected: (α, L) = (1, 1) and (α, L) = (1, 2). During the simulation a bounded noise signal, i.e., |ν(t)| ≤ ε, has been used. In Figs. 1 and 2, the three differentiation errors e_i, for i = 1, 2, 3, are presented. Figure 1 shows the time evolution of the estimation error for different initial conditions x_0 = [1, −5, 1] × 10^p, for p = 0, 3, 6, 9, with a scaling parameter of L = 1, and in the absence of noise. Figure 2 illustrates the same behavior for a scaling parameter of L = 2. Note that the change in the initial conditions corresponds to nine orders of magnitude. These graphs show clearly that, despite the large change
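A compact script reproducing the simulation setup of this example with the simple realization (33) could be structured as follows; it uses an explicit Euler step with an assumed step size, no noise, and the p = 0 initial condition only.

```python
import numpy as np

def spow(z, p):
    return np.sign(z) * np.abs(z) ** p

d0, dinf = -1.0, 0.15
k = [3.0, 1.5 * np.sqrt(3.0), 1.1]
kt = [1.0, 1.0, 1.0]
tt = [1.0, 1.0, 1.0]
dt = 1e-4
t = np.arange(0.0, 20.0, dt)
f0 = 0.5 * np.sin(0.5 * t) + 0.5 * np.cos(t)

x = np.array([1.0, -5.0, 1.0])                 # initial condition (p = 0 case)
den0, deni = 1 - 2 * d0, 1 - 2 * dinf
E = np.zeros((len(t), 3))
for j, f in enumerate(f0):
    e1 = x[0] - f
    x = x + dt * np.array([
        -k[0] * (kt[0] * spow(e1, (1 - d0) / den0) + tt[0] * spow(e1, (1 - dinf) / deni)) + x[1],
        -k[1] * (kt[1] * spow(e1, 1.0 / den0) + tt[1] * spow(e1, 1.0 / deni)) + x[2],
        -k[2] * (kt[2] * spow(e1, (1 + d0) / den0) + tt[2] * spow(e1, (1 + dinf) / deni)),
    ])
    # differentiation errors e_i = x_i - f0^{(i-1)}(t_j)
    E[j] = x - np.array([f,
                         0.25 * np.cos(0.5 * t[j]) - 0.5 * np.sin(t[j]),
                         -0.125 * np.sin(0.5 * t[j]) - 0.5 * np.cos(t[j])])
```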

Fig. 2 Time behavior of the three differentiation errors e_1, e_2, e_3 for the base signal f_0(t) = 0.5 sin(0.5t) + 0.5 cos(t), ε = 0, initial conditions x_0 = [1, −5, 1] × 10^p, with p ∈ {0, 3, 6, 9}, and scaling L = 2

in the size of the initial conditions, the convergence time does not change accordingly, and it approaches a finite value. Figures 1 and 2 also show that by doubling L from L = 1 to L = 2 the convergence time is divided by 2. In Figs. 3 and 4, the behavior of three different differentiators, without noise and with noise, respectively, is illustrated. The differentiators have different 0-limit homogeneity degrees, d_0 ∈ {0, −1/2, −1}, and the same ∞-limit homogeneity degree. The gains, initial conditions, and the signal to differentiate are the same as before. Figure 3 shows that, if no measurement noise is present, the discontinuous differentiator, i.e., the one with d_0 = −1, is the only one able to provide exact derivatives. The estimation errors of the other differentiators show a remaining offset, even in the absence of measurement noise. Figure 4 shows the effect of a random noise bounded by ε = 0.005. Obviously, none of the differentiators can estimate the derivatives exactly. Note that the effect of the noise on the discontinuous differentiator is very strong. However, for this noise amplitude, the smallest estimation error is still provided by the discontinuous one. This also illustrates the general fact that, for a small noise amplitude

and a signal with Δ > 0, the discontinuous differentiator, with d_0 = −1, is always better than any other one.

Fig. 3 Differentiation errors for a signal without noise, for three different values of the 0-limit homogeneity degree d_0 ∈ {0, −1/2, −1} and the same d_∞ = 0.15. Note that only the differentiator with d_0 = −1 is able to estimate the three signals exactly

Fig. 4 Differentiation errors for a signal with a random noise of amplitude ε = 0.005, for three different values of the 0-limit homogeneity degree d_0 ∈ {0, −1/2, −1} and the same d_∞ = 0.15

6 Conclusions

In this chapter, we have given an account of the design of discontinuous and continuous differentiators, based on the combination of homogeneous ones with two different homogeneity degrees, one near zero and one far from it. These differentiators have very interesting properties; for example, they can converge in fixed time if a negative homogeneity degree is assigned near zero and a positive homogeneity degree is used near infinity. Moreover, using the discontinuous injection terms characteristic of Levant's exact and robust differentiator, the convergence properties of the proposed differentiators can be strongly enhanced.

The proposed analysis and design of the differentiators homogeneous in the bi-limit is based on a Lyapunov framework, which unifies the continuous and discontinuous cases. Although a method to design appropriate gains has been put forward, based on the Lyapunov functions used in this work, much work is still required to reach an efficient design method, considering, for example, performance measures. Discretization issues are very important for these differentiators, in particular when the homogeneity degree is positive far from the origin. This is due to the explosive behavior of such algorithms for large values of the states [4, 29, 30]. The reader may consult some important results attained recently in, e.g., [4, 20].

Acknowledgements The author would like to thank the financial support from PAPIIT-UNAM (Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica), project IN106323.


References 1. Andrieu, V., Praly, L., Astolfi, A.: Homogeneous approximation, recursive observer design and output feedback. SIAM J. Control Optim. 47(4), 1814–1850 (2008) 2. Angulo, M., Moreno, J., Fridman, L.: Robust exact uniformly convergent arbitrary order differentiator. Automatica 49(8), 2489–2495 (2013). https://www.scopus.com/inward/record. uri?eid=2-s2.0-84882456671&doi=10.1016%2fj.automatica.2013.04.034&partnerID=40& md5=80ffefa4d941ca719509c0af8cbc2c75 3. Bacciotti, A., Rosier, L.: Liapunov Functions and Stability in Control Theory, 2nd edn. Springer, New York (2005) 4. Barbot, J.-P., Levant, A., Livne, M., Lunz, D.: Discrete differentiators based on sliding modes. Automatica 112, 108633 (2020). https://doi.org/10.1016/j.automatica.2019.108633 5. Bartolini, G., Pisano, A., Usai, E.: First and second derivative estimation by sliding mode technique. J. Signal Process. 4(2), 167–176 (2000) 6. Bejarano, F., Fridman, L.: High order sliding mode observer for linear systems with unbounded unknown inputs. Int. J. Control 83(9), 1920–1929 (2010) 7. Bernuau, E., Efimov, D., Perruquetti, W., Polyakov, A.: On an extension of homogeneity notion for differential inclusions. In: European Control Conference, pp. 2204–2209, Zurich, Switzerland, Jul 2013. https://doi.org/10.23919/ECC.2013.6669525 8. Bernuau, E., Efimov, D., Perruquetti, W., Polyakov, A.: On homogeneity and its application in sliding mode control. J. Frankl. Inst. 351(4), 1816–1901 (2014) 9. Bhat, S., Bernstein, D.: Geometric homogeneity with applications to finite-time stability. Math. Control Signals Syst. 17(2), 101–127 (2005) 10. Cruz-Zavala, E., Moreno, J.A.: Lyapunov functions for continuous and discontinuous differentiators. IFAC-PapersOnLine 49(18), 660–665 (2016). ISSN 2405-8963. https://doi.org/10.1016/j.ifacol.2016.10.241, https://www.scopus.com/inward/record. uri?eid=2-s2.0-85009178451&doi=10.1016%2fj.ifacol.2016.10.241&partnerID=40& md5=b84dbc58796bd53dab550d5ff67f507d. 10th IFAC Symposium on Nonlinear Control Systems NOLCOS 2016 11. Cruz-Zavala, E., Moreno, J.A.: Levant’s arbitrary order exact differentiator: a Lyapunov approach. IEEE Trans. Autom. Control 64(7), 3034–3039 (2019). ISSN 0018-9286. https:// doi.org/10.1109/TAC.2018.2874721 12. Cruz-Zavala, E., Moreno, J.A.: High-order sliding-mode control design homogeneous in the bilimit. Int. J. Robust Nonlinear Control 31(19), 3380–3416 (2021). https://onlinelibrary.wiley. com/doi/abs/10.1002/rnc.5242 13. Cruz-Zavala, E., Moreno, J., Fridman, L.: Uniform robust exact differentiator. In: Proceedings of the IEEE Conference on Decision and Control, pp. 102–107 (2010). https://doi.org/10.1109/CDC.2010.5717345, https://www.scopus.com/inward/record. uri?eid=2-s2.0-79953148717&doi=10.1109%2fCDC.2010.5717345&partnerID=40& md5=a8ca3cb846757ef88811074731eacd7a 14. Cruz-Zavala, E., Moreno, J., Fridman, L.: Uniform robust exact differentiator. IEEE Trans. Autom. Control 56(11), 2727–2733 (2011). https://doi.org/10.1109/TAC.2011. 2160030, https://www.scopus.com/inward/record.uri?eid=2-s2.0-80455150105&doi=10. 1109%2fTAC.2011.2160030&partnerID=40&md5=0545a8c7c1b07d7cb54f996107366654 15. Davila, J., Fridman, L., Levant, A.: Second-order sliding-mode observer for mechanical systems. IEEE Trans. Autom. Control 50(11), 1785–1789 (2005) 16. Efimov, D., Fridman, L.: A hybrid robust non-homogeneous finite-time differentiator. IEEE Trans. Autom. Control 56, 1213–1219 (2011) 17. Filippov, A.: Differential Equations with Discontinuous Righthand Side. 
Kluwer, Dordrecht, The Netherlands (1988) 18. Floquet, T., Barbot, J.P.: Super twisting algorithm-based step-by-step sliding mode observers for nonlinear systems with unknown inputs. Int. J. Syst. Sci. 38(10), 803–815 (2007)


19. Fraguela, L., Angulo, M., Moreno, J., Fridman, L.: Design of a prescribed convergence time uniform robust exact observer in the presence of measurement noise. In: Proceedings of the IEEE Conference on Decision and Control, pp. 6615–6620 (2012). https://doi.org/10.1109/CDC.2012.6426147, https://www.scopus.com/inward/ record.uri?eid=2-s2.0-84874276290&doi=10.1109%2fCDC.2012.6426147&partnerID=40& md5=a79989cfdc846f1c56b5d46b2b33220c 20. Hanan, A., Levant, A., Jbara, A.: Low-chattering discretization of homogeneous differentiators. IEEE Trans. Autom. Control 67(6), 2946–2956 (2022). https://doi.org/10.1109/TAC.2021. 3099446 21. Hautus, M.: Strong detectability and observers. Linear Algebra Appl. 50, 353–368 (1983) 22. Jbara, A., Levant, A., Hanan, A.: Filtering homogeneous observers in control of integrator chains. Int. J. Robust Nonlinear Control 31(9), 3658–3685 (2021). https://doi.org/10.1002/ rnc.5295, https://onlinelibrary.wiley.com/doi/abs/10.1002/rnc.5295 23. Khalil, H.K.: High-Gain Observers in Nonlinear Feedback Control. Advances in Design and Control, vol. 31. Society for Industrial and Applied Mathematics, Philadelphia (2017) 24. Khalil, H.K., Praly, L.: High-gain observers in nonlinear feedback control. Int. J. Robust Nonlinear Control 24, 993–1015 (2014) 25. Kobayashi, S., Suzuki, S., Furuta, K.: Frequency characteristics of Levant’s differentiator and adaptive sliding mode differentiator. Int. J. Syst. Sci. 38(10), 825–832 (2007) 26. Levant, A.: Robust exact differentiation via sliding mode technique. Automatica 34(3), 379– 384 (1998) 27. Levant, A.: High-order sliding modes: differentiation and output-feedback control. Int. J. Control 76(9), 924–941 (2003) 28. Levant, A.: Homogeneity approach to high-order sliding mode design. Automatica 41, 823–830 (2005) 29. Levant, A.: On fixed and finite time stability in sliding mode control. In: 52nd IEEE Conference on Decision and Control, Firenze, Italy, pp. 4260–4265 (2013). https://doi.org/10.1109/CDC. 2013.6760544 30. Levant, A., Efimov, D., Polyakov, A., Perruquetti, W.: Stability and robustness of homogeneous differential inclusions. In: Proceedings of the IEEE Conference on Decision and Control, pp. 7288–7293 (2016) 31. Lopez-Ramirez, F., Polyakov, A., Efimov, D., Perruquetti, W.: Finite-time and fixed-time observer design: implicit Lyapunov function approach. Automatica 87, 52–60 (2018). ISSN 0005-1098. https://doi.org/10.1016/j.automatica.2017.09.007, http://www.sciencedirect.com/ science/article/pii/S0005109817304776 32. Ménard, T., Moulay, E., Perruquetti, W.: Fixed-time observer with simple gains for uncertain systems. Automatica 81, 438–446 (2017). ISSN 0005-1098. https://doi.org/ 10.1016/j.automatica.2017.04.009, https://www.sciencedirect.com/science/article/pii/ S0005109817301826 33. Moreno, J.A.: A linear framework for the robust stability analysis of a generalized super-twisting algorithm. In: 2009 6th International Conference on Electrical Engineering, Computing Science and Automatic Control, CCE 2009, pp. 12–17 (2009). https://doi.org/10.1109/ICEEE.2009.5393477, https://www.scopus.com/inward/record. uri?eid=2-s2.0-77949821085&doi=10.1109%2fICEEE.2009.5393477&partnerID=40& md5=b6163128ddd41d9a603b5c6182f39be9 34. Moreno, J.A.: Lyapunov approach for analysis and design of second order sliding mode algorithms. In: Fridman, L., Moreno, J., Iriarte, R. (eds.) Sliding Modes After the First Decade of the 21st Century, LNCIS, vol. 412, pp. 113–150. Springer, Berlin, Heidelberg (2011). 
https://doi.org/10.1007/978-3-642-22164-4_4
35. Moreno, J.A.: Lyapunov function for Levant's second order differentiator. In: 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pp. 6448–6453, Dec 2012. https://doi.org/10.1109/CDC.2012.6426877
36. Moreno, J.A.: On discontinuous observers for second order systems: properties, analysis and design. In: Bandyopadhyay, B., Janardhanan, S., Spurgeon, S.K. (eds.) Advances in Sliding Mode Control – Concepts, Theory and Implementation, LNCIS, vol. 440, pp. 243–265. Springer, Berlin, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36986-5_12
37. Moreno, J.A.: Exact differentiator with varying gains. Int. J. Control 91(9), 1983–1993 (2018). https://doi.org/10.1080/00207179.2017.1390262
38. Moreno, J.A.: Arbitrary-order fixed-time differentiators. IEEE Trans. Autom. Control 67(3), 1543–1549 (2022). https://doi.org/10.1109/TAC.2021.3071027
39. Moreno, J.A., Osorio, M.: A Lyapunov approach to second-order sliding mode controllers and observers. In: 47th IEEE Conference on Decision and Control, pp. 2856–2861, Cancún, Mexico, Dec 2008. https://doi.org/10.1109/CDC.2008.4739356
40. Moreno, J.A., Osorio, M.: Strict Lyapunov functions for the Super-Twisting algorithm. IEEE Trans. Autom. Control 57(4), 1035–1040 (2012). https://doi.org/10.1109/TAC.2012.2186179
41. Nakamura, H., Yamashita, Y., Nishitani, H.: Smooth Lyapunov functions for homogeneous differential inclusions. In: Proceedings of the 41st SICE Annual Conference, vol. 3, pp. 1974–1979 (2002)
42. Perruquetti, W., Floquet, T., Moulay, E.: Finite-time observers: application to secure communication. IEEE Trans. Autom. Control 53(2), 356–360 (2008)
43. Prasov, A.A., Khalil, H.K.: A nonlinear high-gain observer for systems with measurement noise in a feedback control framework. IEEE Trans. Autom. Control 58, 569–580 (2013)
44. Qian, C., Lin, W.: Recursive observer design, homogeneous approximation, and nonsmooth output feedback stabilization of nonlinear systems. IEEE Trans. Autom. Control 51(9), 1457–1471 (2006)
45. Sanchez, T., Cruz-Zavala, E., Moreno, J.A.: An SOS method for the design of continuous and discontinuous differentiators. Int. J. Control 91(11), 2597–2614 (2018). https://doi.org/10.1080/00207179.2017.1393564
46. Seeber, R., Horn, M., Fridman, L.: A novel method to estimate the reaching time of the Super-Twisting algorithm. IEEE Trans. Autom. Control 63(12), 4301–4308 (2018). https://doi.org/10.1109/TAC.2018.2812789
47. Seeber, R., Haimovich, H., Horn, M., Fridman, L., De Battista, H.: Exact differentiators with assigned global convergence time bound. arXiv:2005.12366v1 (2020). https://doi.org/10.48550/arXiv.2005.12366
48. Seeber, R., Haimovich, H., Horn, M., Fridman, L.M., De Battista, H.: Robust exact differentiators with predefined convergence time. Automatica 134, 109858 (2021). https://doi.org/10.1016/j.automatica.2021.109858
49. Shen, Y., Xia, X.: Semi-global finite-time observers for nonlinear systems. Automatica 44(12), 3152–3156 (2008). https://doi.org/10.1016/j.automatica.2008.05.015
50. Shtessel, Y.B., Shkolnikov, I.A.: Aeronautical and space vehicle control in dynamic sliding manifolds. Int. J. Control 76(9/10), 1000–1017 (2003)
51. Utkin, V.: Sliding Modes in Control and Optimization. Springer, Berlin (1992)


52. Vasiljevic, L.K., Khalil, H.K.: Error bounds in differentiation of noisy signals by high-gain observers. Syst. Control Lett. 57, 856–862 (2008) 53. Yang, B., Lin, W.: Homogeneous observers, iterative design, and global stabilization of highorder nonlinear systems by smooth output feedback. IEEE Trans. Autom. Control 49(7), 1069– 1080 (2004). ISSN 0018-9286. https://doi.org/10.1109/TAC.2004.831186

On Finite- and Fixed-Time State Estimation for Uncertain Linear Systems Héctor Ríos

Abstract In this chapter different approaches are provided to estimate the state of some classes of strongly observable linear systems with some parametric uncertainties and unknown inputs. A family of homogeneous observers and a fixed-time sliding-mode observer are introduced to solve this problem. The finite-time and fixed-time convergence properties and the synthesis of these observers are described throughout the chapter. Moreover, an unknown input identification approach is also introduced. Simulation results illustrate the performance of these state estimation approaches.

1 Introduction

The problem of state estimation in dynamical systems, taking into account the presence of parametric uncertainties and unknown inputs or external disturbances, is still one of the most important and challenging problems in modern control theory. The literature provides several approaches related to this problem, e.g., adaptive observers, dissipative observers, high-gain observers, sampled-data observers, interval observers, hybrid observers, etc. In this chapter, we focus on those observers with a Finite-Time (FT) and/or Fixed-Time (FxT) convergence, and some robustness

This chapter contains material reprinted from Automatica, Vol 87, Héctor Ríos and Andrew R. Teel, A hybrid fixed-time observer for state estimation of linear systems, Pages 103–112, Copyright (2018), with permission from Elsevier. This chapter also contains material reprinted from A. Gutiérrez, Héctor Ríos and Manuel Mera, Robust output-regulation for uncertain linear systems with input saturation, IET Control Theory & Applications. Copyright (2023) The Institution of Engineering and Technology. The Institution of Engineering and Technology is registered as a Charity in England & Wales (no 211014) and Scotland (no SC038698). H. Ríos (B) Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, C.P. 27000 Torreón, Coahuila, México e-mail: [email protected] CONAHCYT IxM, C.P. 03940, Ciudad de México, México © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_5


properties against some parametric uncertainties and unknown inputs or external disturbances. The Sliding-Mode Control (SMC) theory has attracted much attention for the design of FT observers during the last decades (see, e.g., [13, 38, 43]). These Sliding-Mode Observers (SMOs) possess two interesting features: FT convergence, and insensitivity, more than just robustness, to some specific classes of disturbances. The first SMOs employed First-Order Sliding-Mode (FOSM) output injections (see, e.g., [39, 44]), and, in order to ensure asymptotic convergence of the estimation error, the system must be minimum phase and its output has to possess relative degree one with respect to the disturbances. After these first attempts, a linear output correction term was also added to the FOSM approach (see, e.g., [13, 41]), and, under the same assumptions, i.e., minimum-phase systems and a relative degree of the output, with respect to the disturbances, equal to one, asymptotic convergence of the state estimation error was ensured. However, such assumptions are restrictive, since they are not satisfied even for mechanical systems. After this, in order to reduce this restrictiveness, High-Order Sliding-Mode (HOSM) observers were proposed based on strong observability and strong detectability conditions [42]. These observers relax the relative degree condition and guarantee exact and FT state estimation (see, e.g., [4, 5, 11, 14–16]). The homogeneity-based approach has also attracted a lot of attention in SMO design. Different types of continuous and discontinuous FT observers have been introduced based on homogeneity in [12, 24, 27, 29, 37], and in a more recent work [26]. The continuous observers provide exact and FT state estimation in the absence of parametric uncertainties and unknown inputs, while the discontinuous observers ensure exact and FT state estimation despite the presence of some unknown inputs. A more recent and interesting convergence property has emerged in the last years: FxT convergence. This property was introduced in [1] and later revisited in [28]. The main feature of this property is the existence of a bound for the convergence time that is independent of the initial conditions. This remarkable property is crucial when there exist time constraints and a state estimate is required during a particular interval of time (e.g., in hybrid systems or any nonlinear dynamics with FT escape). In the literature, there exist few works regarding such a property: [10, 36], for the design of a first-order FxT robust exact differentiator; [2, 25], for arbitrary-order differentiation under some types of disturbances; and [1, 23], for the observer design framework, but only for the ideal linear case. In this chapter, different approaches, based on strong observability, homogeneity and SMC theory, are provided to estimate the state of some classes of linear systems with some parametric uncertainties and unknown inputs. A family of homogeneous observers (see [18, 32]) and an FxT SM observer (see [33, 34]) are introduced to solve this problem. The exact and FT/FxT convergence properties are described when only certain classes of unknown inputs affect strongly observable linear systems. For the case where the linear system is subject to parametric uncertainties and unknown inputs, some Input-to-State Stability (ISS)/FT-ISS/FxT-ISS properties are provided for these observers.
The synthesis of these observers is given in terms of Linear Matrix Inequalities (LMIs) and a suitable gain choice. Moreover, an unknown input identification approach is also introduced. It is worth mentioning that the chapter focuses only on the single-output, single-unknown-input case. With regard to the above-mentioned literature, the proposed observers provide similar results but possess the following advantages:
• The structure of the family of homogeneous observers is the simplest, since it requires neither additional observers for an a priori stabilization nor complex transformations.
• The design of the observer gains is constructive, since it is given in terms of LMIs.
This chapter is organized as follows. The problem statement is given in Sect. 2. A family of homogeneous observers that solve the problem is described in Sect. 3, while an FxT SM observer is introduced in Sect. 4. Some alternatives to provide unknown input identification are given in Sect. 5. Simulation results for the proposed estimation approaches are shown in Sect. 6. Concluding remarks are discussed in Sect. 7. Finally, some preliminaries and all the proofs are postponed to the Appendix.

1.1 Notation

Let ‖q‖ denote the Euclidean norm of a vector q ∈ R^n, I_n an identity matrix of dimension n, 0_{n×m} a zero matrix of dimensions n × m, and 1, n the sequence of integers 1, …, n. The induced norm of a matrix Q ∈ R^{m×n} is given as ‖Q‖ := λ_max^{1/2}(Q^T Q), where λ_max (respectively, λ_min) denotes the maximum (respectively, minimum) eigenvalue. For a non-empty closed set A ⊂ R^n, the distance of q to A is denoted ‖q‖_A and defined by ‖q‖_A := inf_{r∈A} ‖q − r‖. For a Lebesgue measurable function w : R_{≥0} → R^p define the norm ‖w‖_{(t_0,t_1)} := ess sup_{t∈(t_0,t_1)} ‖w(t)‖; then ‖w‖_∞ = ‖w‖_{(0,+∞)}, and the set of functions w with the property ‖w‖_∞ < +∞ is denoted as L_∞. For a set Σ, its closure is denoted as Σ̄. Let Σ_1 \ Σ_2 denote the set of points in Σ_1 that are not in Σ_2.

2 Problem Statement

Consider the following class of systems:

ẋ = A x + Δ x + D w,   x(0) = x_0,

(1)

y = C x,

(2)

where x ∈ Rn is the state, y ∈ R is the output and the term w : R≥0 → R represents the external disturbance or the unknown input of the system. The matrix Δ ∈ Rn×n is an unknown matrix. The matrices A, C and D are known and they have appropriate dimensions. The problem is to design state observers able to estimate the state of the


system (1)–(2) despite the presence of some parametric uncertainties and unknown inputs. The following assumptions are imposed on the system (1)–(2). Assumption 1 The system (1)–(2) is Strongly Observable.

Assumption 2 The unknown input w is bounded, i.e., w ∈ W := {w ∈ L_∞ : ‖w‖_∞ ≤ w̄}, with w̄ a known positive constant.

Assumption 3 The system (1)–(2) is Input-to-State Stable with respect to w.

Important: Assumption 3 is required if and only if there exist parametric uncertainties in the model dynamics, i.e., Δ ≠ 0. In the following sections, some observers that solve the proposed problem are introduced.

3 Homogeneous Observers

For simplicity, the single-output, single-unknown-input case is considered throughout the whole chapter, i.e., y, w ∈ R. However, it is worth mentioning that for some particular multiple-output cases, under an additional vector relative degree condition, the proposed approaches can also be applied in a diagonal output block form; in general, however, the multiple-output case is not trivial. A family of homogeneous observers, for system (1)–(2), takes the following structure:

$$\dot{\hat{x}} = T^{-1}A_0T\hat{x} + T^{-1}a y + T^{-1}\chi(y - C\hat{x}), \qquad (3)$$

where x̂ ∈ R^n is the estimate of x, a = (a_1, ..., a_n)^T is a vector whose entries a_i, i = 1, n, are the coefficients of the characteristic polynomial of the matrix A, and the matrix A_0 is given as

$$A_0 = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix},$$

and T ∈ R^{n×n}, which is a transformation matrix, is defined through

$$T^{-1} = \left(A^{n-1}\zeta, \; A^{n-2}\zeta, \; \ldots, \; \zeta\right), \qquad (4)$$

with ζ = O^{-1}h, h = (0, 0, ..., 0, 1)^T ∈ R^n and O the observability matrix of the pair (A, C).

Important: Assumption 1 implies the existence of the matrix T.
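As a concrete illustration (a minimal MATLAB sketch, not taken from the chapter), the transformation (4) can be assembled directly from A, C and the characteristic polynomial; the sign convention for the vector a follows the simulation example of Sect. 6.1, and all variable names are illustrative.

```matlab
% Construction of the transformation matrix T in (4) for the example of Sect. 6.
A = [-1 1 0; -1 -2 0; 1 0 -1];
C = [0 0 1];
n = size(A,1);

% Observability matrix O of the pair (A,C), built row by row
O = zeros(n,n);  row = C;
for i = 1:n
    O(i,:) = row;               % rows C, CA, ..., CA^{n-1}
    row = row*A;
end

h    = [zeros(n-1,1); 1];       % h = (0,...,0,1)^T
zeta = O\h;                     % zeta = O^{-1} h (invertible under Assumption 1)

Tinv = zeros(n,n);
for j = 1:n
    Tinv(:,j) = A^(n-j)*zeta;   % columns A^{n-1}zeta, ..., A zeta, zeta
end
T = inv(Tinv);

% Vector a: the simulation of Sect. 6.1 uses the negatives of the
% characteristic-polynomial coefficients, a = (-4,-6,-3)^T for this A.
cp = poly(A);  a = -cp(2:end).';
```

For the matrices of Sect. 6, this sketch reproduces the transformation T and the vector a reported in Sect. 6.1.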

Note: A similar structure for the observers (3) has also been proposed in [26]. However, it is important to mention that such a structure was first proposed in [32] and it does not require the use of an output-dependent matrix polynomial.

The function χ : R → R^n is the output injection, which is designed below, and its structure generates the family of homogeneous observers. Let us define the state estimation error as e = x − x̂. Thus, the dynamics of the state estimation error is given by

$$\dot{e} = (A - T^{-1}A_0T - T^{-1}aC)x + T^{-1}A_0Te + \Delta x + Dw - T^{-1}\chi(e_y), \qquad (5)$$

where e_y := y − Cx̂ is the output error. Due to the structure of the matrix T, given in (4), TAT^{-1} and CT^{-1} have the following form

$$TAT^{-1} = \begin{pmatrix} a_1 & 1 & 0 & \cdots & 0 \\ a_2 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & 0 & 0 & \cdots & 1 \\ a_n & 0 & 0 & \cdots & 0 \end{pmatrix}, \qquad CT^{-1} = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}.$$

Hence, it is easy to show that A = T^{-1}A_0T − T^{-1}aC. Thus, the error dynamics is written as follows


$$\dot{e} = T^{-1}A_0Te + \Delta x + Dw - T^{-1}\chi(e_y), \qquad (6)$$
$$e_y = Ce. \qquad (7)$$

Now, the objective is to provide some stability properties for the system (6) by choosing the shape of the output injection χ .

3.1 Nonlinear Output Injection Design

The nonlinear injection term is designed as follows

$$\chi(e_y) = \begin{pmatrix} k_1\, p_{11}^{\frac{-\mu}{2+2\mu(n-1)}}\, \lceil e_y \rfloor^{\frac{1+\mu(n-2)}{1+\mu(n-1)}} \\[4pt] k_2\, p_{11}^{\frac{-2\mu}{2+2\mu(n-1)}}\, \lceil e_y \rfloor^{\frac{1+\mu(n-3)}{1+\mu(n-1)}} \\ \vdots \\ k_n\, p_{11}^{\frac{-n\mu}{2+2\mu(n-1)}}\, \lceil e_y \rfloor^{\frac{1-\mu}{1+\mu(n-1)}} \end{pmatrix}, \qquad (8)$$

with μ ∈ [0, 1], ⌈s⌋^α = |s|^α sign(s) for any s ∈ R and α ≥ 0, gains k_i > 0, i = 1, n, and p_{11} = C̄PC̄^T with C̄ = CT^{-1} and a positive definite matrix 0 < P^T = P ∈ R^{n×n}.

Note: For the case μ = 1, the observer (3) behaves like a discontinuous HOSM differentiator [21]. In this case, another observer for linear systems with unknown inputs is presented in [16], where x̂ is a linear combination of an estimate provided by a Luenberger observer and the compensation of its estimation error given by the HOSM differentiator [21]. For the case μ ∈ (0, 1), (3) becomes a nonlinear but continuous observer; similar homogeneous observers have been introduced by different authors, see, e.g., [12, 24, 27, 37]. For the case μ = 0, the classic linear observer, i.e., the Luenberger observer, is recovered. Similar results are obtained with the structure proposed in [26].
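To make the interpolation between the linear (μ = 0) and the discontinuous (μ = 1) injection explicit, the following MATLAB function is a minimal sketch (not from the chapter) that evaluates (8) for given gains; all names are illustrative.

```matlab
function chi = chi_hom(ey, k, p11, mu)
% Homogeneous output injection (8): the i-th entry is
%   k(i) * p11^(-i*mu/(2+2*mu*(n-1))) * |ey|^((1+mu*(n-1-i))/(1+mu*(n-1))) * sign(ey).
% mu = 0 yields a linear (Luenberger-type) injection; mu = 1 a discontinuous one.
n   = numel(k);
chi = zeros(n,1);
for i = 1:n
    gain   = k(i) * p11^(-i*mu/(2 + 2*mu*(n-1)));
    expo   = (1 + mu*(n-1-i)) / (1 + mu*(n-1));
    chi(i) = gain * abs(ey)^expo * sign(ey);
end
end
```

For instance, chi_hom(e_y, K, p11, 0.5) evaluates the continuous injection used by the nonlinear observer of Sect. 6.1.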

Let us consider the dynamics of the state estimation error (6)–(7) and the transformation ε = Te = (ε_1, ..., ε_n)^T. Hence, based on Assumption 1, the error dynamics is given as

$$\dot{\varepsilon} = A_0\varepsilon + \bar{\Delta}x + \bar{D}w - \chi(e_y), \qquad (9)$$
$$e_y = \bar{C}\varepsilon = \varepsilon_1, \qquad (10)$$

where Δ̄ = TΔ = (Δ̄_1^T, ..., Δ̄_n^T)^T ∈ R^{n×n}, with Δ̄_i ∈ R^{1×n}, i = 1, n, the rows of the matrix Δ̄, D̄ = TD = (0, 0, ..., 0, CA^{n-1}D)^T ∈ R^n and C̄ = CT^{-1}.


Important: System (9), with Δ̄x + D̄w ≡ 0, is r-homogeneous with degree d = −μ for a vector of weights r = (r_1, r_2, ..., r_n) = (1 + μ(n − 1), 1 + μ(n − 2), ..., 1). The r-homogeneous systems have certain intrinsic robustness properties (see, e.g., [6, 7, 19, 35]).

The following theorem describes the convergence properties of the state estimation error dynamics (9) for different values of μ.

Theorem 1 Let the family of observers (3) be applied to the system (1)–(2). Let Assumptions 1, 2 and 3 be satisfied. Then, there exist some positive constants k_i, i = 1, n, and p_{11} such that the state estimation error has the following properties:
1. If μ = 0, the estimation error dynamics (9) is ISS with respect to x and w;
2. If μ = 1, the estimation error dynamics (9) is FT-ISS with respect to x for all w ∈ W;
3. If μ ∈ (0, 1), the estimation error dynamics (9) is FT-ISS with respect to x and w.

Note: Due to the linear transformation ε = Te, the state estimation error dynamics (6) possesses the same convergence properties as the error dynamics (9). Moreover, if there are no parametric uncertainties in the model dynamics, the family of observers (3) can provide a state estimate of any unstable linear system satisfying Assumptions 1 and 2. For the case μ = 1, such an estimate is exact and obtained in finite time despite the external disturbances.

3.2 Design of the Observer Gains

Theorem 1 describes the ISS convergence properties of the estimation error dynamics (9) with respect to x and w for different values of μ. Nevertheless, it does not provide a method to design the observer gains k_i, i = 1, n. The following proposition offers a possible way to design such gains for μ = 0.

Proposition 1 Let Assumptions 1, 2 and 3 be satisfied. Suppose that there exist a positive definite matrix 0 < P^T = P ∈ R^{n×n} and a matrix Y ∈ R^{n×1} such that the following matrix inequality

$$\begin{pmatrix} PA_0 + A_0^TP - Y\bar{C} - \bar{C}^TY^T + \beta P & P & P\bar{D} \\ P & -\beta I & 0 \\ \bar{D}^TP & 0 & -\beta \end{pmatrix} \le 0 \qquad (11)$$

is feasible for a given constant β > 0. Then, if μ = 0 and K = (k_1, ..., k_n)^T = P^{-1}Y, the estimation error e converges exponentially to a neighborhood of the origin given as

$$\|e(t)\| \le \frac{\|\bar{\Delta}\|\bar{x} + \bar{w}}{\lambda_{\max}^{1/2}(T^TPT)}, \quad \text{as } t \to \infty, \qquad (12)$$

for all x̄, w̄ > 0.

The following proposition provides a way to design the gains k_i for μ = 1 (see [21] for more details).

Proposition 2 [21]. Let Assumptions 1, 2 and 3 be satisfied. Then, if μ = 1, p_{11} = 1 and the gains k_i, i = 1, n, are designed according to [21], e.g., for n = 5, as k_5 = 1.1ϕ̄, k_4 = 1.5ϕ̄^{1/2}, k_3 = 2ϕ̄^{1/3}, k_2 = 3ϕ̄^{1/4}, k_1 = 5ϕ̄^{1/5}, with ϕ̄ = ||Δ̄_n||x̄ + |CA^{n-1}D|w̄, then the estimation error e converges in finite time to a neighborhood of the origin proportional to x̄, i.e.,

$$\|e(t)\| \le \kappa(\|e(0)\|, t) + \gamma(\|x\|_{\infty}), \quad \forall\, 0 \le t \le T(e(0)), \qquad (13)$$
$$\|e(t)\| \le \gamma(\bar{x}), \quad \forall\, t > T(e(0)), \qquad (14)$$

for some functions κ ∈ KL_T and γ ∈ K, and for all x̄ > 0.

Recently, in [9], a recursive method has been proposed to calculate the gains k_i based on an r-homogeneous and continuously differentiable Lyapunov function. Let Λ_r(λ) = diag(λ^{r_1}, ..., λ^{r_n}) and Λ_r̃(λ) = diag(λ^{r̃_1}, ..., λ^{r̃_n}) be dilation matrices with vectors of weights r = (1 + μ(n − 1), 1 + μ(n − 2), ..., 1) and r̃ = (μ, 2μ, ..., nμ), respectively; and let V : R^n → R_{≥0} be a positive definite function implicitly defined by G(V, ε) = 0 for any ε ∈ R^n \ {0} (the reader is referred to the Appendix "Implicit Lyapunov Function" for some preliminaries about the implicit Lyapunov function). For the case μ ∈ (0, 1), the following proposition is provided (see [18] for more details).

Proposition 3 [18]. Let Assumptions 1, 2 and 3 be satisfied. Assume that there exist a positive definite matrix 0 < P^T = P ∈ R^{n×n} and a matrix Y ∈ R^{n×1} such that the following matrix inequalities

$$\begin{pmatrix} PA_0 + A_0^TP - Y\bar{C} - \bar{C}^TY^T + PH + HP + \alpha P & P \\ P & -\beta\tilde{\varphi}^{-1} I \end{pmatrix} \le 0, \qquad (15)$$
$$\begin{pmatrix} -\alpha^2\,\bar{C}P\bar{C}^T & Y^T \\ Y & -\tau^{-1}P \end{pmatrix} \le 0, \qquad (16)$$
$$PH + HP \ge (\alpha + \beta\tilde{\varphi})P > 0, \qquad (17)$$
$$\alpha + \beta\tilde{\varphi} < 1, \qquad (18)$$
$$\lambda^{-2\mu}\left(\Lambda_{\tilde{r}}(\lambda) - I\right)\Lambda_r(\lambda)P\Lambda_r(\lambda)\left(\Lambda_{\tilde{r}}(\lambda) - I\right) \le \tau P, \qquad (19)$$


are feasible for all λ ∈ [0, λ*], with λ* = δ^{−1/(2+2(n−1)μ)} and δ, μ ∈ (0, 1), for given constants α, β, τ > 0 and ϕ̃ = ||Δ̄||x̄ + ||D̄||w̄, and a matrix H = diag(r). If μ ∈ (0, 1), K = (k_1, ..., k_n)^T = P^{-1}Y and p_{11} = C̄PC̄^T, then the state estimation error e converges to a region of the origin given as

$$\|e(t)\| \le \frac{1}{\lambda_{\max}^{1/2}(T^TPT)}, \quad \forall t \ge T(e(0)), \qquad (20)$$

in a finite time with the settling-time function

$$T(e(0)) \le \frac{V^{\mu}(0) - 1}{\mu(1 - \alpha - \beta\tilde{\varphi})}, \qquad (21)$$

with V(0) ≥ 0 such that G(V(0), ε(0)) = ε^T(0)Λ_r(V(0)^{-1})PΛ_r(V(0)^{-1})ε(0) − 1 = 0.

Important: Feasible solutions of the sets of matrix inequalities given by Propositions 1 and 3 can be found with an LMI solver, e.g., the SeDuMi solver together with YALMIP in MATLAB (see [22, 40]). For Proposition 1, (11) is an LMI with respect to P and Y for a fixed value of β. For Proposition 3, (15), (16) and (17) are LMIs with respect to P and Y for any fixed values of α, β, τ > 0 and ϕ̃ such that (18) is satisfied. For (19), one can first find a feasible solution P and Y for (15), (16) and (17), and then evaluate (19) numerically. Such an inequality can be checked on a grid with a small step size, λ = λ*j/N, j = 0, N, with N a large number. For more details, the reader may check [23].
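As a hedged illustration of this design step (a sketch only, not the authors' code), the LMI (11) of Proposition 1 could be posed with YALMIP and SeDuMi as follows; the matrices A0, Cbar = CT^{-1} and Dbar = TD are assumed to have been built beforehand as in Sect. 3, and the tolerance used to enforce P > 0 is illustrative.

```matlab
% Sketch: solve LMI (11) of Proposition 1 with YALMIP + SeDuMi.
% A0, Cbar and Dbar are assumed to be available; beta is fixed by the user.
n    = size(A0,1);
beta = 2;                                   % value used in Sect. 6.1

P = sdpvar(n,n,'symmetric');
Y = sdpvar(n,1);

M11 = P*A0 + A0'*P - Y*Cbar - Cbar'*Y' + beta*P;
M   = [M11,      P,            P*Dbar;
       P,       -beta*eye(n),  zeros(n,1);
       Dbar'*P,  zeros(1,n),  -beta];

F = [P >= 1e-6*eye(n), M <= 0];             % P > 0 and the LMI (11)
optimize(F, [], sdpsettings('solver','sedumi'));

Pval = value(P);
K    = Pval\value(Y);                       % observer gains K = P^{-1}Y
```

A similar script, with the additional variables and the grid check over λ described above, would apply to Proposition 3.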

4 A Fixed-Time Sliding-Mode Observer

Before introducing the proposed FxT observer, the reader is referred to the Appendix "Hybrid Systems" for some preliminaries on hybrid systems. For the design of an FxT observer, the fulfillment of Assumptions 1, 2 and 3 is also considered. The proposed FxT observer possesses a hybrid structure, proposed in [33, 34], and it has the following form:


$$\hat{x} = z + T^{-1}v, \qquad (22)$$

$$\frac{d}{dt}\begin{pmatrix} z \\ v \\ \bar{v} \\ q \end{pmatrix} = \begin{pmatrix} Az + L(y - Cz) \\ T(A - LC)T^{-1}v + \chi_q(e_y, v_1) \\ T(A - LC)T^{-1}\bar{v} + \chi_2(e_y, \bar{v}_1) \\ 0 \end{pmatrix}, \quad (\hat{x}, z, v, \bar{v}, q) \in \mathcal{C}, \qquad (23)$$

$$\begin{pmatrix} z^+ \\ v^+ \\ \bar{v}^+ \\ q^+ \end{pmatrix} = \begin{pmatrix} z \\ v \\ \bar{v} \\ 3 - q \end{pmatrix}, \quad (\hat{x}, z, v, \bar{v}, q) \in \mathcal{D}, \qquad (24)$$

where x̂ ∈ R^n is the state estimate, z ∈ R^n is an auxiliary state estimate, v = (v_1, ..., v_n)^T ∈ R^n and v̄ = (v̄_1, ..., v̄_n)^T ∈ R^n provide estimates of the error between the system state x and the auxiliary estimate z, and q ∈ {1, 2} is a logic variable. The transformation matrix T ∈ R^{n×n} is designed as in (4) but replacing A with A − LC, where L ∈ R^{n×1} is designed such that A − LC is Hurwitz. The terms χ_q : R × R → R^n represent nonlinear output injections and they are designed later on. The initial conditions are given as z(0, 0) = z_0, v(0, 0) = v_0, v̄(0, 0) = v̄_0, q(0, 0) = q_0, and, with abuse of notation, the flow and jump sets are defined as

$$\mathcal{C} = \{(\hat{x}, z, v, \bar{v}, q) \in \mathbb{R}^{4n} \times \{1, 2\} : v - \bar{v} \in C_q\}, \qquad (25)$$
$$\mathcal{D} = \{(\hat{x}, z, v, \bar{v}, q) \in \mathbb{R}^{4n} \times \{1, 2\} : v - \bar{v} \in D_q\}. \qquad (26)$$

The main idea of such a hybrid observer is to switch, through a hysteresis mechanism, between the two injections χ_1 and χ_2 in order to provide FxT convergence of the estimation error to zero: χ_1 is used close to the origin of the estimation error and χ_2 far from it. Therefore, based on a hysteresis mechanism [17], a hybrid output injection is proposed, which involves the logic variable q ∈ {1, 2} and the decision variable ṽ = v − v̄. The mechanism selects χ_1 if q is equal to 1 and ṽ ∈ C_1, and it selects χ_2 if q is equal to 2 and ṽ ∈ C_2; otherwise, q switches. The auxiliary variable v̄, which never switches, defines the flow and jump sets in terms of known variables only: C_1 should be taken as a compact neighborhood of the origin of ṽ that is contained in the domain of attraction when χ_1 is used, while D_2 should be taken as another compact neighborhood of the origin of ṽ such that the solutions that start in D_2 when χ_1 is used do not reach the boundary of C_1. In the following, the nonlinear output injection terms, as well as the flow and jump sets, are designed for the linear system (1)–(2) satisfying Assumptions 1, 2 and 3.


4.1 Nonlinear Output Injection Design

The nonlinear injection χ_q : R × R → R^n could take one of the following forms:

$$\chi_1(e_y, v_1) = \begin{pmatrix} k_1\lceil e_y - v_1\rfloor^{\frac{n-1}{n}} \\ k_2\lceil e_y - v_1\rfloor^{\frac{n-2}{n}} \\ \vdots \\ k_n\lceil e_y - v_1\rfloor^{0} \end{pmatrix}, \qquad (27)$$

$$\chi_2(e_y, \bar{v}_1) = \begin{pmatrix} \kappa_1\, p_{11}^{\frac{\mu}{2n}}\, \lceil e_y - \bar{v}_1\rfloor^{\frac{n+\mu}{n}} \\ \kappa_2\, p_{11}^{\frac{2\mu}{2n}}\, \lceil e_y - \bar{v}_1\rfloor^{\frac{n+2\mu}{n}} \\ \vdots \\ \kappa_n\, p_{11}^{\frac{\mu}{2}}\, \lceil e_y - \bar{v}_1\rfloor^{1+\mu} \end{pmatrix}, \qquad (28)$$

with μ > 0, some positive gains k_i, κ_i, i = 1, n, and p_{11} = C̄PC̄^T with C̄ = CT^{-1} and some positive definite matrix 0 < P^T = P ∈ R^{n×n}.

Note: In the ideal case, one can simply fix q = 2 for all t ≥ 0; then the observer (22)–(24) does not switch and only χ_2 is acting. In [1], it is shown that such an observer is r-homogeneous with degree d = μ > 0 for a vector of weights (r_1, r_2, ..., r_n) = (n, n + μ, ..., n + μ(n − 1)), and it provides FxT convergence. On the other hand, fixing q = 1 for all t ≥ 0, the observer (22)–(24) does not jump and the term χ_1 is active. In this case, the output injection term takes the form of the discontinuous HOSM differentiator [21], and the observer proposed in [16] is recovered, i.e., a linear combination of a Luenberger estimate and the compensation of its estimation error given by the HOSM differentiator.

Let Λ_r(λ) = diag(λ^{r_1}, ..., λ^{r_n}) and Λ_r̃(λ) = diag(λ^{r̃_1}, ..., λ^{r̃_n}) be dilation matrices with vectors of weights r = (n, n + μ, ..., n + μ(n − 1)) and r̃ = (μ, 2μ, ..., nμ), respectively. Define the estimation error e = x − x̂. The following theorem describes the FxT convergence properties of the estimation error (see [34] for more details).

Theorem 2 [34]. Let the hybrid observer (22)–(24) be applied to the system (1)–(2). Let Assumptions 1, 2 and 3 be satisfied. Assume that there exist a positive definite matrix 0 < P^T = P ∈ R^{n×n} and a matrix Y ∈ R^{n×1} such that the following set of matrix inequalities

$$\begin{pmatrix} PA_0 + A_0^TP - Y\bar{C} - \bar{C}^TY^T + PH + HP + \alpha P & P \\ P & -\beta\tilde{\varphi}^{-1} I \end{pmatrix} \le 0, \qquad (29)$$
$$\lambda^{2n}\left(\Lambda_{\tilde{r}}(\lambda) - I\right)\Lambda_r(\lambda)P\Lambda_r(\lambda)\left(\Lambda_{\tilde{r}}(\lambda) - I\right) \le \tau P, \qquad (30)$$
$$\begin{pmatrix} -\alpha^2\,\bar{C}P\bar{C}^T & Y^T \\ Y & -\tau^{-1}P \end{pmatrix} \le 0, \qquad (31)$$
$$PH + HP \ge (\alpha + \beta\tilde{\varphi})P > 0, \qquad (32)$$
$$\alpha + \beta\tilde{\varphi} < 1, \qquad (33)$$

is feasible for all λ ∈ [0, λ*], with λ* = δ^{−1/2n} and δ ∈ (0, 1), for given constants α, β, μ, τ > 0 and ϕ̃ = ||Δ̄||x̄ + ||D̄||w̄, and a matrix H = diag(r). Suppose that L ∈ R^{n×1} is designed such that A − LC is Hurwitz. Then, if k_i, i = 1, n, are designed according to Proposition 2, κ = (κ_1, ..., κ_n)^T = P^{-1}Y, and the flow and jump sets are designed as

$$C_1 = \left\{\tilde{v} \in \mathbb{R}^n : \|\tilde{v}\| \le \frac{2}{\lambda_{\max}^{1/2}(P)} + \rho\right\}, \quad C_2 = \mathbb{R}^n \setminus D_2,$$
$$D_1 = \mathbb{R}^n \setminus C_1, \quad D_2 = \left\{\tilde{v} \in \mathbb{R}^n : \|\tilde{v}\| \le \frac{2}{\lambda_{\max}^{1/2}(P)}\right\},$$

with ρ a sufficiently large constant such that ρ > sup_{t≥0} ||v_{χ1}(t) − v̄(t)||, with ||v_{χ1}(0) − v̄(0)|| ≤ 2/λ_max^{1/2}(P), where v_{χ1} represents the trajectories of v for q = 1, then the estimation error dynamics is FxT-ISS with respect to x.

Important: The feasible solutions of the set of matrix inequalities given by Theorem 2 can be found in the same way as Proposition 3.

Note: In the case where parametric uncertainties do not exist in the model dynamics, i.e., Δ = 0, the hybrid observer (22)–(24) provides a state estimate of any unstable linear system satisfying Assumptions 1 and 2. Furthermore, such an estimate is exact and obtained in a fixed time despite the external disturbances.

Comments regarding the hybrid observer:
• Observer (23), with χ_q = χ_1, brings the trajectories of the state estimation error, which may start arbitrarily far away, to zero in a finite time. However, such a time depends on the initial conditions. Nevertheless, from a compact set, the convergence time does not depend on the initial conditions and is uniformly bounded.


• Observer (23), with χ_q = χ_2, brings the trajectories of the state estimation error, which may start arbitrarily far away, into a compact set in a finite time. Such a time is independent of the initial conditions, i.e., it is a fixed time.
• Since ṽ is available for measurement, it is possible to calculate ρ by numerical simulations. The estimation of ρ is made offline by computing sup_{t≥0} ||v_{χ1} − v̄||, fixing ||v_{χ1}(0) − v̄(0)|| ≤ 2/λ_max^{1/2}(P), where v_{χ1} represents the trajectories of v for q = 1. Thus, any trajectory starting in D_2 does not reach the boundary of C_1 while χ_1 is active.
• Whenever, for some reason, the trajectories of the state estimation error leave the origin, the hybrid mechanism will again attract the error to zero in a fixed time.
• The size of the flow and jump subsets, specifically C_1 and D_2, changes depending on P and ρ. If these parameters are selected according to Theorem 2, the hybrid mechanism, which is based on a hysteresis approach and not on a boundary-layer approach, avoids any type of Zeno phenomenon (a minimal sketch of this switching logic is given after this list).
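The following MATLAB function is a hedged sketch (not the authors' implementation) of the hysteresis switching between χ_1 and χ_2; the injection functions chi1 and chi2 are assumed to be provided as function handles implementing (27) and (28), and rD2, rho are the radii defining D_2 and C_1.

```matlab
function [q, inj] = hybrid_injection(q, v, vbar, ey, rD2, rho, chi1, chi2)
% Sketch of the hysteresis mechanism of the hybrid observer (22)-(24).
% q in {1,2}; vtilde = v - vbar is the measurable decision variable;
% C1 = {||vt|| <= rD2 + rho}, D2 = {||vt|| <= rD2}, D1 = R^n\C1, C2 = R^n\D2.
vt = v - vbar;
if q == 1 && norm(vt) > rD2 + rho      % vt in D1: jump, q+ = 3 - q
    q = 2;
elseif q == 2 && norm(vt) <= rD2       % vt in D2: jump, q+ = 3 - q
    q = 1;
end
if q == 1
    inj = chi1(ey, v(1));              % discontinuous injection (27)
else
    inj = chi2(ey, vbar(1));           % homogeneous injection (28)
end
end
```

In a simulation loop, this function would be called at every step before integrating (23), so that χ_1 acts near the origin of ṽ and χ_2 far from it, exactly as described above.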

5 Unknown Input Identification

For simplicity, let us suppose that p = q = 1, i.e., y, w ∈ R, and that there are no parametric uncertainties in the model dynamics, i.e., Δ = 0. In order to provide an unknown input estimate, the following assumption is introduced.

Assumption 4 The unknown input w is Lipschitz, i.e., dw/dt = w_l ∈ W̄ := {w_l ∈ L_∞ : ||w_l||_∞ ≤ w̄_l}, with w̄_l a positive known constant.

Therefore, consider the following extended system

$$\dot{x}_e = A_e x_e + D_e w_l, \qquad (34)$$
$$y = C_e x_e, \qquad (35)$$

where x_e = (x^T, w)^T ∈ R^{n+1}, y ∈ R and w_l ∈ R are the extended state, the output and the new unknown input of the system, respectively; and the matrices are

$$A_e = \begin{pmatrix} A & D \\ 0 & 0 \end{pmatrix}, \qquad D_e = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad C_e = \begin{pmatrix} C & 0 \end{pmatrix}.$$

Therefore, under Assumption 1, i.e., strong observability of the triple (A, C, D), and Assumption 4, one can use the observer design approaches given in the previous sections in order to estimate the state of the extended system (34)–(35). For this purpose, the following homogeneous observer is provided:


$$\dot{\hat{x}}_e = T^{-1}A_0T\hat{x}_e + T^{-1}a y + T^{-1}\bar{\chi}(e_y), \qquad (36)$$

where x̂_e ∈ R^{n+1} is the estimate of x and w, e_y = y − C_e x̂_e is the output error, and the nonlinear term χ̄ : R → R^{n+1} is given as

$$\bar{\chi}(e_y) = \begin{pmatrix} k_1\lceil e_y\rfloor^{\frac{n}{n+1}} \\ k_2\lceil e_y\rfloor^{\frac{n-1}{n+1}} \\ \vdots \\ k_n\lceil e_y\rfloor^{\frac{1}{n+1}} \\ k_{n+1}\lceil e_y\rfloor^{0} \end{pmatrix}, \qquad (37)$$

with some positive gains k_j, j = 1, n + 1. Note that a = (a_1, ..., a_{n+1})^T, with a_j, j = 1, n + 1, being the coefficients of the characteristic polynomial of the matrix A_e, and that the transformation matrix T is designed according to (4) with the matrix A_e. Define the estimation error as e = x_e − x̂_e. Hence, the estimation error dynamics is given by

$$\dot{e} = T^{-1}A_0Te + D_e w_l - T^{-1}\bar{\chi}(e_y), \qquad (38)$$
$$e_y = C_e e. \qquad (39)$$

Consider the estimation error dynamics (38)–(39) and the linear transformation ε = Te. Hence, the corresponding error dynamics is given as

$$\dot{\varepsilon} = A_0\varepsilon + \bar{D}_e w_l - \bar{\chi}(e_y), \qquad (40)$$
$$e_y = \bar{C}_e\varepsilon = \varepsilon_1, \qquad (41)$$

where D̄_e = TD_e = (0, 0, ..., 0, C_eA_e^nD_e)^T ∈ R^{n+1} and C̄_e = C_eT^{-1}. The error dynamics (40) can be rewritten as follows

$$\dot{\varepsilon}_i = \varepsilon_{i+1} - k_i\lceil\varepsilon_1\rfloor^{\frac{n+1-i}{n+1}}, \quad \forall i = 1, n,$$
$$\dot{\varepsilon}_{n+1} = w_l - k_{n+1}\lceil\varepsilon_1\rfloor^{0}.$$

Then, it is clear that the dynamics (40) has the same structure as the error dynamics given by (9), with μ = 1, p_{11} = 1 and Δ̄x = 0. Therefore, one can directly apply the results provided by Theorem 1 and Proposition 2. In this sense, the following proposition provides a way to design the gains k_j, j = 1, n + 1, in order to achieve an FT unknown input estimate.

Proposition 4 Let Assumptions 1 and 4 be satisfied. Then, if the gains k_j, j = 1, n + 1, are designed according to [21], e.g., for n = 5, as k_5 = 1.1w̄_l, k_4 = 1.5w̄_l^{1/2}, k_3 = 2w̄_l^{1/3}, k_2 = 3w̄_l^{1/4}, k_1 = 5w̄_l^{1/5}, then the estimation error e converges to the origin in a finite time, and thus x̂_{n+1}(t) = w(t) for all t ≥ T(e(0)).
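A hedged MATLAB sketch (illustrative only, not the authors' code) of how the extended system (34)–(35) and the gains of Proposition 4 can be assembled, using the numerical example of Sect. 6.3:

```matlab
% Extended system (34)-(35) for unknown-input identification (sketch).
A = [1 4; -1 3];   D = [1; 0];   C = [0 1];    % matrices (43) of Sect. 6.3
n = size(A,1);

Ae = [A, D; zeros(1,n), 0];                    % extended state x_e = (x^T, w)^T
De = [zeros(n,1); 1];
Ce = [C, 0];

% Gains in the spirit of Proposition 4 (Levant-type, scaled by the Lipschitz
% bound wl_bar); for n = 2 there are n+1 = 3 gains, ordered as in Sect. 6.3.
wl_bar = 5;
k = [1.1*wl_bar, 1.5*wl_bar^(1/2), 2*wl_bar^(1/3)];  % = (5.5000, 3.3541, 3.4200)
```

The observer (36) is then built with the transformation (4) computed from the pair (A_e, C_e), exactly as in the sketch of Sect. 3.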


As mentioned before, in [9] a recursive method has also been proposed to calculate the gains k_i based on an r-homogeneous and continuously differentiable Lyapunov function.

Important: Following the same reasoning, it is possible to provide an FxT unknown input identification by using the FxT observer given in Sect. 4 and the extended system (34)–(35).

Note: An alternative method for unknown input identification is based on the equivalent output injection concept (see, e.g., [13, 38]). For such a method, Assumption 4 and the system extension are not necessary. However, the use of a filter is required to obtain the unknown input identification from the discontinuous term ⌈ε_1⌋^0. Moreover, such an identification is only an asymptotic approximation of the unknown input.
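As a hedged illustration of this filtering step (not from the chapter), a simple first-order low-pass filter applied to the recorded discontinuous term could be implemented as follows; u_raw, A, C, D and n are assumed to be available from the simulation, and the filter time constant tau_f is purely illustrative.

```matlab
% Sketch: asymptotic reconstruction of w by low-pass filtering the recorded
% discontinuous term u_raw(k) = k_n*sign(eps1(k)).  On the sliding mode the
% filtered signal approximates C*A^(n-1)*D*w (cf. the transformed error
% dynamics (48)-(49) with Delta = 0), so a final rescaling recovers w.
tau_f = 0.01;   dt = 0.001;                 % hypothetical filter constant / step
N   = numel(u_raw);
u_f = zeros(1,N);
for k = 1:N-1
    u_f(k+1) = u_f(k) + (dt/tau_f)*(u_raw(k) - u_f(k));   % tau_f*u_f' = u_raw - u_f
end
w_est = u_f/(C*A^(n-1)*D);                  % asymptotic approximation of w
```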

6 Simulation Results

For the simulation results, consider the linear system (1)–(2) with the following matrices:

$$A = \begin{pmatrix} -1 & 1 & 0 \\ -1 & -2 & 0 \\ 1 & 0 & -1 \end{pmatrix}, \quad \Delta = \begin{pmatrix} 0.2 & -0.2 & 0 \\ -0.2 & -0.4 & 0 \\ 0.2 & 0 & -0.2 \end{pmatrix}, \quad D = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad C = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}^T, \qquad (42)$$

the unknown input w(t) = 1 + 2 sin(2t) and initial conditions x_0 = (1, −1, 1)^T. It is easy to check that CD = 0, CAD = 0 and CA^2D = 1; hence, the relative degree of the output y with respect to the unknown input w is equal to 3 and the triple (A, C, D) is Strongly Observable. The unknown input w is clearly bounded, i.e., w ∈ W with w̄ = 3. The eigenvalues of the matrix A are λ_1 = −1 and λ_{2,3} = −1.5 ± 0.8660j; hence, A is Hurwitz and the trajectories of the system will be bounded. Therefore, this system fulfills Assumptions 1, 2 and 3, respectively. All the simulations have been carried out in MATLAB with the Euler discretization method, a sample time equal to 0.001, and the solutions of the corresponding LMIs have been found by means of the SeDuMi solver together with YALMIP in MATLAB.
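The following MATLAB script is a minimal sketch of this simulation setup for the discontinuous observer (μ = 1); it reuses the helper chi_hom given in Sect. 3.1 and the values reported in Sect. 6.1, and it is only illustrative, not the authors' code.

```matlab
% Euler simulation of system (1)-(2) with the homogeneous observer (3), mu = 1.
A  = [-1 1 0; -1 -2 0; 1 0 -1];
Dl = [ 0.2 -0.2 0; -0.2 -0.4 0; 0.2 0 -0.2];   % uncertainty Delta
D  = [0; 1; 0];    C = [0 0 1];
T  = [0 0 1; 1 0 3; 2 1 3];                     % transformation (4), Sect. 6.1
a  = [-4; -6; -3];                              % vector a, Sect. 6.1
A0 = diag(ones(2,1),1);                         % shift matrix A_0
K  = [6.9087; 3.7592; 3.6900];  p11 = 1;  mu = 1;   % gains of Proposition 2

dt = 1e-3;  N = round(15/dt);
x  = [1; -1; 1];   xh = zeros(3,1);             % plant and observer states
err = zeros(1,N);

for k = 1:N
    t  = (k-1)*dt;
    w  = 1 + 2*sin(2*t);                        % unknown input
    y  = C*x;
    ey = y - C*xh;
    chi   = chi_hom(ey, K, p11, mu);            % injection (8)
    xdot  = A*x + Dl*x + D*w;                   % system (1)
    xhdot = T\(A0*(T*xh)) + T\(a*y) + T\chi;    % observer (3)
    x  = x  + dt*xdot;                          % explicit Euler step
    xh = xh + dt*xhdot;
    err(k) = norm(x - xh);
end
plot((0:N-1)*dt, err), xlabel('t'), ylabel('||e(t)||')
```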


6.1 Homogeneous Observers

Let us implement the family of homogeneous observers (3) for three different values of μ, i.e., μ = 0, μ = 1 and μ = 0.5. The transformation matrix T ∈ R^{3×3} is given as

$$T = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 3 \\ 2 & 1 & 3 \end{pmatrix},$$

the vector a = (−4, −6, −3)^T and the initial conditions x̂(0) = 0.

Linear Observer, μ = 0. In order to design the observer gains for the case μ = 0, Proposition 1 is used. The following feasible results, fixing β = 2, are obtained:

$$P = \begin{pmatrix} 0.0065 & -0.0015 & -0.0002 \\ -0.0015 & 0.0015 & -0.0008 \\ -0.0002 & -0.0008 & 0.0009 \end{pmatrix}, \quad Y = \begin{pmatrix} 1.8615 \\ 0.0035 \\ -0.0011 \end{pmatrix}, \quad K = \begin{pmatrix} 428.8974 \\ 726.8080 \\ 572.2210 \end{pmatrix}.$$

The results are depicted in Fig. 1. These results illustrate the statements given by Theorem 1 and Proposition 1, i.e., the ISS properties with respect to x and w. It is shown that the estimation error converges to a neighborhood of the origin.

Discontinuous Observer, μ = 1. In order to design the observer gains for the case μ = 1, Proposition 2 is applied. Then, taking x̄ = 1 and w̄ = 3, one obtains ϕ̄ = 6.2806, and hence the gains are given as k_1 = 6.9087, k_2 = 3.7592 and k_3 = 3.6900. The results are given in Fig. 2. The results illustrate the statements given by Theorem 1 and Proposition 2, i.e., the FT-ISS properties with respect to x. One can see that the estimation error converges to a smaller neighborhood of the origin.

Nonlinear Observer, μ = 0.5. In order to design the observer gains for the case μ = 0.5, Proposition 3 is used. The following feasible results, fixing α = 0.1, β = 0.1, δ = 0.1, τ = 0.001, ϕ̃ = 6.6123 and r = (2, 1.5, 1), are obtained:

$$P = \begin{pmatrix} 0.0142 & -0.0026 & -0.0005 \\ -0.0026 & 0.0015 & -0.0007 \\ -0.0005 & -0.0007 & 0.0006 \end{pmatrix}, \quad Y = \begin{pmatrix} 0.0366 \\ 0.0022 \\ -0.0005 \end{pmatrix}, \quad K = \begin{pmatrix} 4.7590 \\ 14.1092 \\ 9.8696 \end{pmatrix}.$$


Fig. 1 Trajectories of the system, state estimation and estimation error—linear observer, μ = 0

The results are shown in Fig. 3. These results illustrate the statements given by Theorem 1 and Proposition 3, i.e., the FT-ISS properties with respect to x and w. It is also shown that the estimation error converges to a neighborhood of the origin faster. For comparison purposes, Fig. 4 shows the behavior of the estimation error norm for all the presented cases. It is possible to see that the best performance and precision are given by the discontinuous observer, i.e., when μ = 1. In order to illustrate the results when there are no parametric uncertainties, Fig. 5 shows the behavior of the estimation error norm for the three observers when Δ = 0. Once again, one can see that the best performance and precision are


Fig. 2 Trajectories of the system, state estimation and estimation error—discontinuous observer, μ=1

provided by the discontinuous observer. Note that the effect of the unknown inputs is completely rejected by the discontinuous observer. Finally, in order to show the results when there are no parametric uncertainties and the system is unstable, consider

$$A = \begin{pmatrix} -1 & 1 & 0 \\ -1 & 2 & 0 \\ 1 & 0 & -1 \end{pmatrix}.$$


Fig. 3 Trajectories of the system, state estimation and estimation error—nonlinear observer, μ = 0.5

The eigenvalues of the matrix A are λ_1 = −1, λ_2 = 1.6180 and λ_3 = −0.6180; hence, A is not Hurwitz and the trajectories of the system grow exponentially. The results are shown in Fig. 6. One can see that the error convergence properties are preserved for an unstable system, provided that there are no parametric uncertainties.


Fig. 4 Estimation error—parametric uncertainties and unknown inputs

Fig. 5 Estimation error—unknown inputs

6.2 A Fixed-Time Sliding-Mode Observer

Consider the linear system (1)–(2) with the same matrices given in (42), unknown input w(t) = 1 + 2 sin(2t) and initial conditions x_0 = (1, −1, 1)^T. In order to design the observer gains, Theorem 2 is applied. The following feasible results, fixing α = 0.1, β = 0.0953, δ = 0.1, μ = 0.5, τ = 0.0001, ϕ̃ = 6.6123 and r = (3, 3.5, 4), are obtained:


Fig. 6 Trajectories of the system and estimation error—unstable system

$$P = \begin{pmatrix} 0.0916 & -0.0060 & -0.0004 \\ -0.0060 & 0.0015 & -0.0002 \\ -0.0004 & -0.0002 & 0.0001 \end{pmatrix}, \quad Y = \begin{pmatrix} 0.5727 \\ 0.0128 \\ -0.0009 \end{pmatrix}, \quad \kappa = \begin{pmatrix} 13.3986 \\ 135.8895 \\ 432.1734 \end{pmatrix},$$

and, taking x̄ = 1 and w̄ = 3, one obtains ϕ̄ = 6.2806; hence the gains k_i are given as k_1 = 6.9087, k_2 = 3.7592 and k_3 = 3.6900. The flow and jump sets, with ρ = 10, are given as


Fig. 7 Trajectories of the system, state estimation and estimation error—FxT observer

$$C_1 = \{\tilde{v} \in \mathbb{R}^n : \|\tilde{v}\| \le 31.7404\}, \quad C_2 = \mathbb{R}^n \setminus D_2,$$
$$D_2 = \{\tilde{v} \in \mathbb{R}^n : \|\tilde{v}\| \le 21.7404\}, \quad D_1 = \mathbb{R}^n \setminus C_1,$$

and the initial conditions are set as z(0, 0) = 0, v(0, 0) = 0, v̄(0, 0) = 50, q(0, 0) = 2. The results are shown in Fig. 7. It is shown that the estimation error converges to a neighborhood of the origin. These results illustrate the statements given by Theorem 2, i.e., the FxT-ISS properties with respect to x.


Fig. 8 FxT Estimation error—uniform property

Fig. 9 FT estimation error

Let us now consider that there are no parametric uncertainties. In order to illustrate the uniformity with respect to the initial conditions, Fig. 8 shows the behavior of the estimation error norm for different initial conditions. These results effectively show that there exists an upper bound for the convergence time that is independent of the initial conditions. For comparison purposes, Fig. 9 shows the behavior of the estimation error norm for different initial conditions when the homogeneous observer (3), with μ = 1, is implemented. Contrary to the FxT observer, the convergence time of the discontinuous observer grows with the initial conditions.


6.3 Unknown Input Identification

Consider the linear system (1)–(2) with the following matrices

$$A = \begin{pmatrix} 1 & 4 \\ -1 & 3 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 0 \\ 1 \end{pmatrix}^T, \qquad (43)$$

the unknown input w(t) = 1 + 2 sin(2t) and initial conditions x_0 = (1, −1)^T. It is easy to check that CD = 0 and CAD = −1; hence, the relative degree of the output y with respect to the unknown input w is equal to 2 and the triple (A, C, D) is Strongly Observable. The unknown input w is clearly bounded, i.e., w ∈ W, and Lipschitz, i.e., w_l ∈ W̄. The eigenvalues of the matrix A are λ_{1,2} = 2 ± 1.7321j; hence, A is not Hurwitz and the trajectories of the system will grow exponentially. Therefore, this system fulfills Assumptions 1, 2, 3 and 4, respectively. Let us implement the observer (36). In order to design the observer gains, Proposition 4 is applied. Then, taking w̄_l = 5, one obtains the gains k_1 = 5.5000, k_2 = 3.3541 and k_3 = 3.4200. The results are given in Fig. 10. The results illustrate the statements given by Theorem 1 and Proposition 4, i.e., the FT convergence properties. One can see that the estimation error converges to the origin in finite time, thus providing unknown input identification.

7 Concluding Remarks

In this chapter, different approaches, based on strong observability, homogeneity and SMC theory, have been provided to estimate the state of some classes of linear systems with parametric uncertainties and unknown inputs. A family of homogeneous observers and an FxT SM observer have been introduced to solve this problem. The exact and FT/FxT convergence properties have been described when only some classes of unknown inputs affect strongly observable linear systems. Some ISS/FT-ISS/FxT-ISS properties have been provided for the case where the linear system is subject to both parametric uncertainties and unknown inputs. The synthesis of these observers is given in terms of LMIs and a proper gain choice. Moreover, an unknown input identification approach has also been introduced. Simulation results have illustrated the effectiveness of the state estimation approaches.

Acknowledgements This work was supported in part by the SEP–CONACYT–ANUIES–ECOS NORD Project 315597. The author gratefully acknowledges the financial support from TecNM projects and CONAHCYT CVU 270504 project 922.


Fig. 10 Trajectories of the system, unknown input identification and estimation error

Appendix

This appendix collects some required preliminaries and the proofs of the given results.

Stability Notions

Consider the following nonlinear system

$$\dot{x} = f(x, w), \qquad (44)$$

where x ∈ R^n is the state and w ∈ R^p is the external disturbance such that w ∈ W = {w ∈ L_∞ : ||w||_∞ ≤ w̄}, with w̄ ∈ R_{>0}. The function f : R^n × R^p → R^n is locally Lipschitz continuous and such that f(0, 0) = 0. The solutions of system (44) are denoted as x(t, x_0, w), with x(0) = x_0.

Definition 1 [20, 31]. The origin of system (44) is said to be: Stable if for any ε > 0 there is δ(ε) such that for any x_0 ∈ Ω ⊂ R^n the solutions are defined for all t ≥ 0 and, if ||x_0|| ≤ δ(ε), then ||x(t, x_0, w)|| ≤ ε, for all t ≥ 0 and all w ∈ W; Asymptotically Stable (AS) if it is Stable and for any ε > 0 there exists T(ε, κ) ≥ 0 such that for any x_0 ∈ Ω, if ||x_0|| ≤ κ, then ||x(t, x_0, w)|| ≤ ε, for all t ≥ T(ε, κ) and all w ∈ W; Finite-Time Stable (FTS) if it is AS and for any x_0 ∈ Ω there exists 0 ≤ T_{x_0} < +∞ such that x(t, x_0) = 0, for all t ≥ T_{x_0} and all w ∈ W¹; and Fixed-Time Stable (FxTS) if it is FTS and the settling-time function T(x_0) is bounded, i.e., ∃T⁺ > 0 : T(x_0) ≤ T⁺, for all x_0 ∈ Ω. If Ω = R^n, then the origin of system (44) is said to be globally Stable (GS), AS (GAS), FTS (GFTS), or FxTS (GFxTS), respectively.

¹ The function T(x_0) = inf{T_{x_0} ≥ 0 : x(t, x_0, w) = 0, ∀t ≥ T_{x_0}} is called the uniform settling time of the system (44).

Before introducing a definition of finite- and fixed-time input-to-state stability, let us introduce some useful functions. A continuous function α : R_{≥0} → R_{≥0} belongs to class K if it is strictly increasing and α(0) = 0; it belongs to class K_∞ if it is also unbounded. A continuous function β : R_{≥0} × R_{≥0} → R_{≥0} belongs to class KL_T if for each fixed s, β(·, s) ∈ K, and for each fixed r there exists 0 < T(r) < ∞ such that β(r, s) is decreasing to zero with respect to s for s < T(r), and β(r, s) = 0, for all s ≥ T(r).

Definition 2 [7, 20]. The system (44) is said to be Input-to-State Stable (ISS), with respect to w, if there exist some functions β ∈ KL and γ ∈ K such that any solution x(t, x_0, w), for any x_0 ∈ R^n and any w ∈ W, satisfies ||x(t, x_0, w)|| ≤ β(||x_0||, t) + γ(||w||_∞), ∀t ≥ 0. The system (44) is said to be Finite-Time ISS (FT-ISS) if β ∈ KL_T, and Fixed-Time ISS (FxT-ISS) if it is FT-ISS and T(x_0) ≤ T⁺.

Homogeneity

Some notions related to homogeneity are introduced. For any r_j > 0, j = 1, n, and λ > 0, define the dilation matrix Λ_r(λ) := diag(λ^{r_1}, ..., λ^{r_n}) and the vector of weights r := (r_1, ..., r_n)^T. Let us introduce the following homogeneity definition.


Definition 3 [3]. The function g : R^n → R is called r-homogeneous if there exists d ∈ R such that g(Λ_r(λ)x) = λ^d g(x), for all (x, λ) ∈ R^n × R_{>0}. The vector field f : R^n → R^n is called r-homogeneous if there exists d ≥ −min_{1≤j≤n} r_j such that f(Λ_r(λ)x) = λ^d Λ_r(λ)f(x), for all (x, λ) ∈ R^n × R_{>0}. The constant d is called the degree of homogeneity, i.e., deg(g) = d or deg(f) = d.

Note that a differential equation ẋ = f(x) with homogeneity degree d is invariant with respect to the time-coordinate transformation (t, x) → (λ^{−d}t, Λ_r(λ)x). Then, defining deg(t) = −d, it is possible to call the differential equation itself homogeneous with degree d. It is worth mentioning that Definition 3 also applies to set-valued maps and differential inclusions [3]. Now, the following result, given by [8], represents the main application of homogeneity to finite-time stability and finite-time stabilization.

Theorem 3 [8]. Let f : R^n → R^n be a continuous r-homogeneous vector field with a negative degree. If the origin of the system (44), with w ≡ 0, is locally AS, then it is GFTS.

Define an extended auxiliary vector field F for system (44) as follows

$$F(x, w) = \begin{pmatrix} f^T(x, w) & 0_p^T \end{pmatrix}^T \in \mathbb{R}^{n+p},$$

where 0_p ∈ R^p is a zero vector of dimension p. The following theorem describes the ISS properties of the system (44) in terms of its homogeneity.

Theorem 4 [7]. Let the extended vector field F be r-homogeneous with degree d ≥ −min r_i, with i = 1, n, for vectors of weights r = (r_1, ..., r_n) > 0 and r̃ = (r̃_1, ..., r̃_p) ≥ 0, i.e., f(Λ_r(λ)x, Λ_r̃(λ)w) = λ^d Λ_r(λ)f(x, w) holds. Let the system (44) be GAS for w ≡ 0. Then, the system (44) is ISS if min r̃_j > 0, with j = 1, p.

Define the set S_r = {x ∈ R^n : ||x||_r = 1}. Then, an extension of the previous theorem to the case when f : R^n × R^p ⇒ R^n is a set-valued map is given by the following theorem.

Theorem 5 [6]. Let the discontinuous extended vector field F be r-homogeneous with degree d ≥ −min r_i, with i = 1, n, for vectors of weights r = (r_1, ..., r_n) > 0 and r̃ = (r̃_1, ..., r̃_p) ≥ 0, i.e., f(Λ_r(λ)x, Λ_r̃(λ)w) = λ^d Λ_r(λ)f(x, w) holds. Let the system (44) be GAS for w ≡ 0 and also

$$\sup_{z \in S_r} \|f(z, w) - f(z, 0)\| \le \sigma(\|w\|),$$

for all w ∈ W and some σ ∈ K_∞. Then, the system (44) is ISS if min r̃_j > 0, with j = 1, p. Moreover, if d < 0, then system (44) is FT-ISS.


Implicit Lyapunov Function

The following theorems provide the background for the asymptotic and finite-time stability analysis, respectively, of (44), with w ≡ 0, using the Implicit Lyapunov Function (ILF) approach [30].

Theorem 6 [30]. If there exists a continuous function G : R_{≥0} × R^n → R, (V, x) → G(V, x), satisfying the following conditions: 1) G is continuously differentiable outside the origin for all positive V ∈ R_{≥0} and for all x ∈ R^n\{0}; 2) for any x ∈ R^n\{0} there exists V ∈ R_+ such that G(V, x) = 0; 3) letting Φ = {(V, x) ∈ R_+ × R^n\{0} : G(V, x) = 0}, it holds that lim_{||x||→0} V = 0^+, lim_{V→0} ||x|| = 0, lim_{||x||→∞} V = +∞, for all (V, x) ∈ Φ; 4) the inequality ∂G(V, x)/∂V < 0 holds for all V ∈ R_+ and for all x ∈ R^n\{0}; 5) ∂G(V, x)/∂x · f(x) < 0 holds for all (V, x) ∈ Φ; then the origin of (44) is GAS.

Theorem 7 [30]. If there exists a continuous function G : R_{≥0} × R^n → R that satisfies conditions 1–4 of Theorem 6, and there exist c > 0 and 0 < μ < 1 such that

$$\frac{\partial G(V, x)}{\partial x} f(x) \le c\, V^{1-\mu}\, \frac{\partial G(V, x)}{\partial V}$$

holds for all (V, x) ∈ Φ, then the origin of (44) is GFTS and T(x_0) ≤ V_0^{μ}/(cμ) is the settling-time function, where G(V_0, x_0) = 0.

Observability and Strong Observability

Some definitions of strong observability, invariant zeros and relative degree are introduced in this section for the system (1)–(2), considering that Δ = 0 (see, e.g., [16, 42]).

Definition 4 [16, 42]. The system (1)–(2) is called Strongly Observable if, for any initial condition x(0) and every w, the identity y ≡ 0 implies that also x ≡ 0.

Definition 5 [16, 42]. The complex number s_0 ∈ C is called an Invariant Zero of the triple (A, D, C) if rank(R(s_0)) < n + rank(D), where R(s) is the Rosenbrock matrix of system (1)–(2).

In the case when D = 0, the notion of strong observability coincides with observability. Finally, the definition of relative degree is introduced.

Definition 6 [16]. The output y is said to have a relative degree σ with respect to the input w if CA^kD = 0, ∀k < σ − 1, and CA^{σ−1}D ≠ 0.

Note that, according to [16], the following statements are equivalent:


1. The system (1)–(2) is Strongly Observable. 2. The triple (A, C, D) has no invariant zeros. 3. The output of the system (1)–(2) has relative degree n with respect to w.

Hybrid Systems

Consider the following model of a hybrid system [17]

$$\dot{x} = f(x, w), \quad x \in \mathcal{C}, \qquad (45)$$
$$x^+ = g(x, w), \quad x \in \mathcal{D}, \qquad (46)$$

where x ∈ Rn is the state of the system changing according to the differential equation (45) while x is in the flow closed set C , and it can change according to the difference equation (46) while x is in the jump closed set D, x + ∈ Rn represents the value of the state after an instantaneous change, and w ∈ W . Let a hybrid arc φ(t, j) be a solution to the hybrid system (45)–(46), φ0 = φ(0, 0) be the initial condition, and domφ denotes the hybrid time domain of φ(t, j), where solutions are parameterized by both t ∈ R≥0 , the amount of time passed, and j ∈ N, the number of jumps that have occurred. The subsets of (t, j) ∈ R≥0 × N that correspond to evolutions of the hybrid system (45)–(46) are called hybrid time domains (for more details see [17]). Let us introduce, inspired by [28], a definition of finite and fixed-time attractiveness for a closed set M ⊂ Rn . Definition 7 [34]. The closed set M is said to be Finite-Time Attractive (FTA) for (45)–(46) if for each initial condition φ0 there exists T (φ0 ) > 0, such that for any solution φ to (45)–(46) with φ0 ∈ Rn , (t, j) ∈ domφ and t + j ≥ T (φ0 ) imply ||φ(t, j)||M = 0, where T : Rn → R≥0 is the settling-time function; Fixed-Time Attractive (FxTA) for (45)–(46) if it is FTA and T (φ0 ) is bounded by some number T + > 0. A definition of finite and fixed-time input-to state stability is also introduced. Definition 8 [34]. The system (45)–(46) is said to be FT-ISS, with respect to M , if for each initial condition φ0 and every input w ∈ W there exist T (φ0 ) > 0 and some functions β ∈ K L T and γ ∈ K , such that any solution φ to (45)–(46) with (t, j) ∈ domφ satisfies ||φ(t, j)||M ≤ β(||φ0 ||, t + j) + γ (||w||∞ ), ∀0 ≤ t + j < T (φ0 ), ||φ(t, j)||M ≤ γ (||w||∞ ), ∀t + j ≥ T (φ0 ). The system (45)–(46) is said to be FxT-ISS if it is FT-ISS and T (φ0 ) is bounded by some number T + > 0. Now, we provide some proofs of the given results.


Proof of Theorem 1 with μ = 0

The error dynamics (9), when μ = 0 and Δ̄x + D̄w ≡ 0, is given as

$$\dot{\varepsilon} = (A_0 - K\bar{C})\varepsilon, \qquad (47)$$

where K = (k_1, ..., k_n)^T ∈ R^n and C̄ = CT^{-1}. It is clear that such a system is linear and, since the pair (A_0, C̄) is observable, there always exists K ∈ R^n such that the matrix (A_0 − KC̄) is Hurwitz. Therefore, system (47) is Globally Exponentially Stable (GES). Define an extended auxiliary vector field F for system (9), when μ = 0, as follows:

$$F(\varepsilon, \xi) = \begin{pmatrix} f^T(\varepsilon, \xi) & 0_p^T \end{pmatrix}^T = \begin{pmatrix} \varepsilon^T(A_0 - K\bar{C})^T + \xi^T\Xi^T & 0_{n+1}^T \end{pmatrix}^T \in \mathbb{R}^{n+(n+1)},$$

where ξ = (x^T, w^T)^T ∈ R^{n+1} and Ξ = (Δ̄, D̄) ∈ R^{n×(n+1)}. The extended vector field F is r-homogeneous with degree d = 0 for vectors of weights r = (1, ..., 1) > 0 and r̃ = (1, ..., 1) > 0, i.e., f(Λ_r(λ)ε, Λ_r̃(λ)ξ) = λ^d Λ_r(λ)f(ε, ξ) holds. Therefore, according to Theorem 4, since min r̃_j = 1 > 0, with j = 1, n + 1, and system (47), i.e., (9) with μ = 0 and Δ̄x + D̄w ≡ 0, is GES, the system (9), when μ = 0, is ISS with respect to ξ, and hence it is ISS with respect to x and w.

Proof of Theorem 1 with μ = 1

The error dynamics (9), when μ = 1, p_{11} = 1 and Δ̄_l = 0, l = 1, n − 1, can be written as

$$\dot{\varepsilon}_l = \varepsilon_{l+1} - k_l\lceil\varepsilon_1\rfloor^{\frac{n-l}{n}}, \quad \forall l = 1, n - 1, \qquad (48)$$
$$\dot{\varepsilon}_n = \bar{\Delta}_n x + CA^{n-1}Dw - k_n\lceil\varepsilon_1\rfloor^{0}. \qquad (49)$$

The previous dynamics, with Δ̄_n = 0 and w ≡ 0, is r-homogeneous with degree d = −1 for a vector of weights r = (n, n − 1, ..., 1). Note that this dynamics has the same structure as the HOSM differentiator [21]. Its negative homogeneity degree and the discontinuous term of the algorithm, i.e., k_n⌈ε_1⌋^0, ensure the robust and FT stability of ε = 0 against any unknown input w ∈ W and x ∈ X whenever the gains k_i, i = 1, n, are properly chosen. Then, based on homogeneity and Lyapunov theory, one can show that the dynamics given by (48) and (49) is GFTS. The following result is recalled (for details see [9]).

Theorem 8 [9]. System (48)–(49) admits the following strong, proper, smooth and r-homogeneous (of degree m) Lyapunov function

$$V(z) = \sum_{l=1}^{n-1}\gamma_l Z_l(z_l, z_{l+1}) + \frac{\gamma_n}{m}|z_n|^m,$$

with m ≥ 2n − 1, some positive constants γ_i > 0, i = 1, n, z_1 = ε_1, z_2 = ε_2/k_1, ..., z_n = ε_n/k_{n−1}, and

$$Z_i(z_i, z_{i+1}) = \frac{r_i}{m}|z_i|^{\frac{m}{r_i}} - z_i\lceil z_{i+1}\rfloor^{\frac{m-r_i}{r_{i+1}}} + \frac{m-r_i}{m}|z_{i+1}|^{\frac{m}{r_{i+1}}}, \quad i = 1, n.$$

Moreover, if x ∈ X and w ∈ W, then there exist some positive constants k_i and γ_i, i = 1, n, such that system (48)–(49) is GFTS.

Now, define an extended auxiliary vector field F for system (9), when μ = 1, as follows:

$$F(\varepsilon, \xi) = \begin{pmatrix} f^T(\varepsilon, \xi) & 0_p^T \end{pmatrix}^T = \begin{pmatrix} \varepsilon^TA_0^T + h^T\varphi - \chi^T(\varepsilon_1)\big|_{\mu=1} + \xi^T\Xi^T & 0_{n+1}^T \end{pmatrix}^T \in \mathbb{R}^{n+(n+1)},$$

where ϕ = Δ̄_n x + CA^{n−1}Dw, ξ = x ∈ R^n and Ξ = (Δ̄_1^T, ..., Δ̄_{n−1}^T, 0)^T ∈ R^{n×n}. The extended discontinuous vector field F, with ϕ ≡ 0, is r-homogeneous with degree d = −1 for vectors of weights r = (n, n − 1, ..., 1) > 0 and r̃ = (n − 1, ..., 1, 1) > 0, i.e., f(Λ_r(λ)ε, Λ_r̃(λ)ξ) = λ^d Λ_r(λ)f(ε, ξ) holds. Hence, according to Theorem 5, since min r̃_j = 1 > 0, with j = 1, n + 1, and system (48)–(49), with Δ̄_l = 0, l = 1, n − 1, is GFTS for any x ∈ X and w ∈ W, the system (9), when μ = 1, is ISS with respect to x. Moreover, since d = −1, system (9), when μ = 1, is FT-ISS.

Proof of Theorem 1 with μ ∈ (0, 1)

The error dynamics (9), with μ ∈ (0, 1) and Δ̄x + D̄w ≡ 0, can be written as

$$\dot{\varepsilon}_l = \varepsilon_{l+1} - k_l\, p_{11}^{\frac{-l\mu}{2+2\mu(n-1)}}\, \lceil\varepsilon_1\rfloor^{\frac{1+\mu(n-1-l)}{1+\mu(n-1)}}, \quad \forall l = 1, n - 1, \qquad (50)$$
$$\dot{\varepsilon}_n = -k_n\, p_{11}^{\frac{-n\mu}{2+2\mu(n-1)}}\, \lceil\varepsilon_1\rfloor^{\frac{1-\mu}{1+\mu(n-1)}}. \qquad (51)$$


Define an extended auxiliary vector field F for system (9), when μ ∈ (0, 1), as F(ε, ξ ) =



f T (ε, ξ ) 0Tp

T

T  T T (ε1 ) + ξ T Ξ T 0n+1 = ε T A0T − χ |μ∈(0,1) ∈ Rn+(n+1) ,

¯ = (Ξ1T , . . . , ΞnT )T ∈ Rn×(n+1) , with ¯ D) where ξ = (x T , w T )T ∈ Rn+1 and Ξ = (Δ, 1×(n+1) , i = 1, n, the rows of matrix Ξ . It is given that the extended continuous Ξi ∈ R vector field F is r -homogeneous with degree d ∈ (−1, 0) for vectors of weights r = (1 + μ(n − 1), 1 + μ(n − 2), . . . , 1) and r˜ = (1 + μ(n − 2), . . . , 1, 1, 1) > 0, i.e., f (Λr (λ)ε, Λr˜ (λ)ξ ) = λd Λr f (ε, ξ ) holds. Therefore, according to Theorem 4, since min r˜ j = 1 > 0, with j = 1, n + 1, and ¯ ≡ 0, is GFTS; then the system (9), when ¯ + Dw system (9), with μ ∈ (0, 1) and Δx μ ∈ (0, 1), is ISS with respect to ξ , and thus, it is ISS with respect to x and w.

Proof of Proposition 1

The error dynamics (9), when μ = 0, is given as follows

$$\dot{\varepsilon} = (A_0 - K\bar{C})\varepsilon + \bar{\Delta}x + \bar{D}w. \qquad (52)$$

Propose the following Lyapunov candidate function

$$V(\varepsilon) = \varepsilon^TP\varepsilon, \quad P = P^T > 0,$$

which satisfies the inequalities λ_min(P)||ε||² ≤ V(ε) ≤ λ_max(P)||ε||². The time derivative of V along the trajectories of the system (52) is given by

$$\dot{V}(\varepsilon) = \varepsilon^T(PA_k + A_k^TP)\varepsilon + 2\varepsilon^TP(\bar{\Delta}x + \bar{D}w),$$

where A_k = A_0 − KC̄. Then, since Assumptions 2 and 3 hold, it follows that the time derivative of V can be upper bounded as

$$\dot{V}(\varepsilon) \le \begin{pmatrix} \varepsilon \\ \bar{\Delta}x \\ w \end{pmatrix}^T \begin{pmatrix} PA_k + A_k^TP + \beta P & P & P\bar{D} \\ P & -\beta I & 0 \\ \bar{D}^TP & 0 & -\beta \end{pmatrix} \begin{pmatrix} \varepsilon \\ \bar{\Delta}x \\ w \end{pmatrix} - \beta\left(\varepsilon^TP\varepsilon - \|\bar{\Delta}\|^2\bar{x}^2 - \bar{w}^2\right),$$

for any β > 0. Therefore, if the matrix inequality

$$\begin{pmatrix} PA_k + A_k^TP + \beta P & P & P\bar{D} \\ P & -\beta I & 0 \\ \bar{D}^TP & 0 & -\beta \end{pmatrix} \le 0$$

is satisfied, it follows that

$$\dot{V}(\varepsilon) \le -\beta\left(\varepsilon^TP\varepsilon - \|\bar{\Delta}\|^2\bar{x}^2 - \bar{w}^2\right),$$

and the time derivative of V is negative definite outside the ellipsoid E(P) = {ε ∈ R^n : ε^TPε ≤ ||Δ̄||²x̄² + w̄²}, which implies that E(P) is an attractive ellipsoid of the error dynamics (9) for μ = 0. Finally, defining the variable Y = PK, one obtains (11), which is an LMI in the variables P = P^T ≥ 0 and Y for a fixed constant β > 0. Therefore, since ε = Te and ε^TPε ≤ ||Δ̄||²x̄² + w̄², the estimation error converges exponentially to a ball around the origin given by (12).

References 1. Andrieu, V., Praly, L., Astolfi, A.: Homogeneous approximation, recursive observer design, and output feedback. SIAM J. Control Optim. 47(4), 1814–1850 (2008) 2. Angulo, M.T., Moreno, J.A., Fridman, L.: Robust exact uniformly convergent arbitrary order differentiator. Automatica 49(8), 2489–2495 (2013) 3. Bacciotti, A., Rosier, L.: Liapunov Functions and Stability in Control Theory. Communications and Control Engineering. Springer, Berlin (2001) 4. Bejarano, F.J., Fridman, L.: High order sliding mode observer for linear systems with unbounded unknown inputs. Int. J. Control 9, 1920–1929 (2010) 5. Bejarano, F.J., Fridman, L., Poznyak, A.: Unknown input and state estimation for unobservable systems. SIAM J. Control Optim. 48(2), 1155–1178 (2009) 6. Bernuau, E., Efimov, D., Perruquetti, W., Polyakov, A.: On homogeneity and its application in sliding mode control. J. Frankl. Inst. 351(4), 1866–1901 (2014) 7. Bernuau, E., Polyakov, A., Efimov, D., Perruquetti, W.: Verification of ISS, iISS and IOSS properties applying weighted homogeneity. Syst. Control Lett. 62(12), 1159–1167 (2013) 8. Bhat, S.P., Bernstein, D.S.: Geometric homogeneity with applications to finite-time stability. Math. Control Signals Syst. 17(2), 101–127 (2005) 9. Cruz-Zavala, E., Moreno, J.A.: Levant’s arbitrary-order exact differentiator: a Lyapunov approach. IEEE Trans. Autom. Control 64(7), 3034–3039 (2019) 10. Cruz-Zavala, E., Moreno, J.A., Fridman, L.M.: Uniform robust exact differentiator. IEEE Trans. Autom. Control 56(11), 2727–2733 (2011) 11. Davila, J., Fridman, L., Levant, A.: Second-order sliding-mode observer for mechanical systems. IEEE Trans. Autom. Control 50(11), 1785–1789 (2005) 12. Du, H., Qian, C., Yang, S., Li, S.: Recursive design of finite-time convergent observers for a class of time-varying nonlinear systems. Automatica 49(2), 601–609 (2013) 13. Edwards, C., Spurgeon, S.K.: Sliding Mode Control: Theory and applications. Taylor and Francis, London (1998) 14. Floquet, T., Barbot, J.P.: Super twisting algorithm-based step-by-step sliding mode observers for nonlinear systems with unknown inputs. Int. J. Syst. Sci. 38(10), 803–815 (2007)


15. Fridman, L., Davila, J., Levant, A.: High-order sliding-mode observation for linear systems with unknown inputs. Nonlinear Anal. Hybrid Syst. 5(2), 189–205 (2011) 16. Fridman, L., Levant, A., Davila, J.: Observation of linear systems with unknown inputs via high-order sliding-modes. Int. J. Syst. Sci. 38(10), 773–791 (2007) 17. Goebel, R., Sanfelice, R.G., Teel, A.R.: Hybrid Dynamical Systems: Modeling, Stability and Robustness. Princeton University Press, New Jersey, USA (2012) 18. Gutiérrez, A., Ríos, H., Mera, M.: Robust output-regulation for uncertain linear systems with input saturation. IET Control Theory Appl. 14(16), 2372–2384 (2020) 19. Hong, Y.: H∞ control, stabilization, and input-output stability of nonlinear systems with homogeneous properties. Automatica 37(7), 819–829 (2001) 20. Khalil, H.: Nonlinear Systems. Prentice Hall, New Jersey, U.S.A (2002) 21. Levant, A.: High-order sliding modes: differentiation and output-feedback control. Int. J. Control 76(9–10), 924–941 (2003) 22. Löfberg, J.: Yalmip: a toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004) 23. Lopez-Ramirez, F., Polyakov, A., Efimov, D., Perruquetti, W.: Finite-time and fixed-time observers design: implicit Lyapunov function approach. Automatica 87, 52–60 (2018) 24. Menard, T., Moulay, E., Perruquetti, W.: A global high-gain finite-time observer. IEEE Trans. Autom. Control 55(6), 1500–1506 (2010) 25. Moreno, J.A.: Arbitrary order fixed-time differentiators. IEEE Trans. Autom. Control (2021). https://doi.org/10.1109/TAC.2021.3071027 26. Niederwieser, H., Koch, S., Reichhartinger, M.: A generalization of Ackermann’s formula for the design of continuous and discontinuous observers. In: IEEE 58th Conference on Decision and Control, Nice, France (2019) 27. Perruquetti, W., Floquet, T., Moulay, E.: Finite-time observers: application to secure communication. IEEE Trans. Autom. Control 53(1), 356–360 (2008) 28. Polyakov, A.: Nonlinear feedback design for fixed-time stabilization of linear control systems. IEEE Trans. Autom. Control 57(8), 2106–2110 (2012) 29. Polyakov, A., Efimov, D., Perruquetti, W.: Homogeneous differentiator design using implicit Lyapunov function method. In: Proceedings of the 2014 European Control Conference, pp. 288–293, Strasbourg, France (2014) 30. Polyakov, A., Efimov, D., Perruquetti, W.: Finite-time and fixed-time stabilization: implicit Lyapunov function approach. Automatica 51, 332–340 (2015) 31. Polyakov, A., Fridman, L.: Stability notions and Lyapunov functions for sliding mode control systems. J. Frankl. Inst. 351(4), 1831–1865 (2014) 32. Ríos, H., Mera, M., Efimov, D., Polyakov, A.: Robust output-control for uncertain linear systems: homogeneous differentiator-based observer approach. Int. J. Robust Nonlinear Control 27(11), 1895–1924 (2017) 33. Ríos, H., Teel, A.R.: A hybrid observer for fixed-time state estimation of linear systems. In: IEEE 55th Conference on Decision and Control, pp. 5408–5413, Las Vegas, NV, USA (2016) 34. Ríos, H., Teel, A.R.: A hybrid fixed-time observer for state estimation of linear systems. Automatica 87, 103–112 (2018) 35. Ryan, E.P.: Universal stabilization of a class of nonlinear systems with homogeneous vector fields. Syst. Control Lett. 26, 177–184 (1995) 36. Seeber, R., Haimovich, H., Horn, M., Fridman, L.M., De Battista, H.: Robust exact differentiators with predefined convergence time. Automatica 134, 109858 (2021) 37. 
Shen, Y., Huang, Y.: Uniformly observable and globally Lipschitzian nonlinear systems admit global finite-time observers. IEEE Trans. Autom. Control 54(11), 2621–2625 (2009) 38. Shtessel, Y., Edwards, C., Fridman, L., Levant, A.: Sliding Mode Control and Observation. Birkhauser, New York (2014) 39. Slotine, J.J.E., Hedrick, J.K., Misawa, E.A.: On sliding observers for nonlinear systems. Trans. ASME J. Dyn. Syst. Meas. Control 109, 245–252 (1987) 40. Sturm, J.F.: Using SEDUMI 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw. 11(12), 625–653 (2001)


41. Tan, C.P., Edwards, C.: Sliding mode observers for detection and reconstruction of sensor faults. Automatica 38, 1815–1821 (2002) 42. Trentelman, H.L., Stoorvogel, A.A., Hautus, M.: Control Theory for Linear Systems. Springer, London, Great Britain (2001) 43. Utkin, V., Guldner, J., Shi, J.: Sliding Modes in Electromechanical Systems. Taylor and Francis, London (1999) 44. Walcott, B.L., Zak, S.H.: State observation of nonlinear uncertain dynamical systems. IEEE Trans. Autom. Control 32(2), 166–170 (1987)

Robust State Estimation for Linear Time-Varying Systems Using High-Order Sliding-Modes Observers Jorge Dávila, Leonid Fridman, and Arie Levant

Abstract This chapter presents two algorithms for state estimation of linear timevarying systems affected by unknown inputs. The chapter is divided into two parts. The first part presents an observer for the class of strongly observable linear timevarying systems with unknown inputs. The proposed observer uses a cascade structure to guarantee the correct state reconstruction despite bounded unknown inputs and system instability. The second part of this chapter presents a finite-time observer that exploits the structural properties of the system through a linear operator. The particular design of this observer allows for avoiding the use of a cascade structure, providing with reduced complexity an exact estimate of the states after a finite transient time, even in the presence of possible instability of the system and the effects of bounded unknown inputs.

The material in this Chapter is presented with the permissions of IEEE and John Wiley and Sons: ©2022 John Wiley and Sons. Reprinted with permission from Galván-Guerra, R., Fridman, L., and Dávila, J. (2017) High-order sliding-mode observer for linear time-varying systems with unknown inputs. Int. J. Robust. Nonlinear Control, 27: 2338–2356. https://doi.org/10.1002/rnc.3698. ©2022 IEEE. Reprinted with permission from J. Dávila, M. Tranninger and L. Fridman, “FiniteTime State Observer for a Class of Linear Time-Varying Systems With Unknown Inputs,” in IEEE Transactions on Automatic Control, vol. 67, no. 6, pp. 3149–3156, June 2022, https://doi.org/10. 1109/TAC.2021.3096863. J. Dávila (B) Instituto Politécnico Nacional, ESIME-UPT, Section of Graduate Studies and Research, Mexico City 07340, Mexico e-mail: [email protected] L. Fridman Facultad de Ingeniería, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico e-mail: [email protected] A. Levant School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv 6997801, Israel e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_6



1 Introduction

The estimation of the states of linear time-varying (LTV) systems is a much more complicated problem than in the linear time-invariant (LTI) case. Several authors have studied the conditions to reconstruct the states and to design the observers (see, for example, [35] and the references therein). The paper [35] not only proposes state estimation conditions but also presents an observer for LTV systems based on the application of a dynamic gain that provides asymptotic convergence of the estimated states. An adaptive estimation approach for LTV systems providing asymptotic convergence of the estimates has been proposed in [40]. An extension of this work, presented in [39], addresses the simultaneous joint state and parameter asymptotic estimation problem. Finally, a finite-time convergent observer design for LTV systems is presented in [21]. All of the observers mentioned above consider LTV systems without unknown inputs.

The estimation of the states of systems with unknown inputs is one of the most important problems of modern control theory. The observers for linear systems affected by unknown inputs are designed under the so-called strong observability conditions. For linear time-invariant systems with unknown inputs, the strong observability property has been investigated in the pioneering works [14, 15, 22]. The characterization of this property for LTV systems with unknown inputs (see [18, 20, 34, 38]) opened the door to the design of robust observers for this class of systems.

The design of finite-time observers (FTO), i.e., state observers converging in finite time, has received great attention from the control community. In [1], an FTO was proposed for output feedback of systems without finite-time escape. The FTO suggested in [1] combines an observer with any globally convergent controller, allowing an alternative strategy for output-feedback control, namely, turning on the controller after the observer has converged. Another important application of FTOs is, for example, the state estimation of switched systems with strictly positive dwell-time, because the estimates converge before the switching moment [4]. In [29], an FTO is also applied for discrete-state estimation of hybrid systems. Fault detection and fault-tolerant control are other examples where the usage of FTOs is successful [28]. The fault detection and reconfiguration strategies for in-service Airbus A380 airplanes presented in [8] are an illustrative example of the advantages of FTO properties.

The combination of structural properties with robust control techniques such as sliding modes and high-order sliding modes (HOSM) has facilitated the design of FTOs for systems with unknown inputs; for the linear time-invariant case, see, for example, [2, 3, 6, 10]. Two main observer structures can be found in the literature. On the one hand, a cascade structure combining an auxiliary system with a stabilization term and HOSM differentiators provides finite-time convergence. Such is the case of [13], where a variable-structure observer is proposed for LTV systems with unknown inputs; however, in that work, the proposed observer is restricted to systems with relative degree one with respect to the unknown inputs. An observer for a more general case of LTV systems affected by unknown inputs is presented in [12], where

Robust State Estimation for Linear Time-Varying Systems …

135

a combination of a linear observer with high-order sliding modes is proposed. An observer designed as a combination of a tangent space observer and a HOSM scheme is presented in [33]. On the other hand, the system’s structural properties to reduce the observer’s complexity have been reported in [25] for the LTI case and [7] for the LTV case. This chapter provides two methodologies for estimating states in linear timevarying systems in the presence of unknown inputs. One of them is designed using the cascade structure. The second one uses structural properties to guarantee that the estimation error dynamic is represented as a linear system without time-varying terms.

2 Problem Statement Consider the LTV system on the time interval T ⊂ R+ , affected by unknown inputs: x(t) ˙ = A(t)x(t) + B(t)u(t) + D(t)w(t)

(1)

y(t) = C(t)x(t),

(2)

where x(t) ∈ Rn is the state vector, u(t) ∈ Rρ is a vector of known inputs (control), w(t) ∈ Rm is a vector of unknown inputs, y(t) ∈ R p is the output vector with m ≤ p, and A(t), B(t), C(t), and D(t) are smooth enough conformable and known timevarying matrices. The level of smoothness of these matrices will be defined later. The proposed observers will be designed using high-order sliding-mode differentiators described through differential equations with a discontinuous right-hand side, whose solutions are defined in the Filippov sense [9].

3 Preliminaries In this section, some fundamental concepts for the design of state observers for LTV systems are recalled (see, for example, [24]). Let the operators W [·] and L[·] be defined as d M(t) dt d L[M(t)] = A(t)M(t) − M(t) dt

W [M(t)] = M(t)A(t) +

for any m × n matrix M(t) and A(t) defined in (1). Both operators can be applied recursively, i.e.,

136

J. Dávila et al.

W i [M(t)] = W [W i−1 [M(t)]] L i [M(t)] = L[L i−1 [M(t)]] for any i = 1, 2, ..., with W 0 [M(t)] = M(t), and L 0 [M(t)] = M(t). First, consider the homogeneous system given in the following equations x(t) ˙ = A(t)x(t), y(t) = C(t)x(t),

(3)

where x(t) ∈ Rn is the state vector, y(t) ∈ R p is the output vector, and the matrices A(t) and C(t) are smooth enough. Now, the definitions of uniform differential observability based on the existence of a lexicographic fixed vector, which spans the full state-space, are presented using the concepts in [24, 31, 37]. Definition 1 The system (3) is differentially observable on the time interval t ∈ T , if there exist a q-tuple of integers {r1 , r2 , ..., rq }, such that q 1. r1 ≥ r2 ≥ ... ≥ rq ≥ 0 and i=1 ri = n. 2. After suitable reordering of the rows of C(t), then row vectors: W j−1 [ci (t)] : i = 1, 2, ..., q; j = 1, 2, ..., ri are linearly independent for all t ∈ T . The integers {r1 , r2 , ..., rq } are called the observability indices. It is important to remark that there could exist more than one admissible set of observability indices for any observable system. A straightforward consequence of Definition 1 is the following proposition. Proposition 1 [17, 30] Let the matrix functions A(t) and C(t) of the system (3) be n − 2 and n − 1 times continuously differentiable for all t ∈ T , respectively. Define an observability matrix O˜ (A,C),n (t) as ⎡ ⎢ ⎢ O˜ (A,C),n (t) = ⎢ ⎣

N0 (t) N1 (t) .. .

⎤ ⎥ ⎥ ⎥ ∈ R pn×n , ⎦

(4)

Nn−1 (t) where N0 (t) = C(t) = W 0 [C(t)] and Ni (t) = W i [C(t)] for i = 1, . . . , n − 1. If rank(O˜ (A, C),n (t)) = n, for all t ∈ T , then the pair (A(t), C(t)) is observable on the time interval T . Definition 2 [30] Let the observability number lo be the smallest integer such that rank(O˜ (A, C),lo (t)) = n, for all t ∈ T .

Robust State Estimation for Linear Time-Varying Systems …

137

Definition 3 [17] The triplet (A(t), C(t), D(t)) is called strongly observable in the non-degenerate interval T , if x(t) ˙ = A(t)x + D(t)w(t), C(t)x(t) ≡ 0, for some unknown input w(t), with D(t)w(t) being a continuous function, implies that x(t) ≡ 0, for all t ∈ T . Theorem 1 [17] Let the elements of matrices A(t), D(t), and C(t) be lo − 2, lo − 2, lo − 1 times continuously differentiable, respectively, in the time interval t ∈ T , and define the matrices Dμ,ν = D(A,C,D),μ,ν (t), recursively by for 2 ≤ μ ≤ lo , Dμ,μ−1 := C(t)D(t) dD Dμ,1 := Nμ−2 D(t) + dtμ−1,1 for 3 ≤ μ ≤ lo , dD Dμ,ν := Dμ−1,ν−1 + dtμ−1,ν for 3 ≤ ν < μ ≤ lo where Ni (t) = W i [C(t)], and lo is the observability number from Definition 2. Define the matrix functions S : T → R plo ×[n+(lo −1)m] and S ∗ : T → R( plo +n)×[n+(lo −1)m] as 

0 In ∗ ˜ , S(t) := O(A, C),lo (t) J(A, C, D),lo (t) , S (t) := ˜ O(A, C),lo (t) J(A, C, D),lo (t) (5) with ⎡ ⎤ 0 0 ··· 0 ⎢ D2,1 0 · · · 0 ⎥ ⎢ ⎥ ⎢ D3,1 D3,2 · · · 0 ⎥ J(A, C, D),lo (t) := ⎢ ⎥, ⎢ .. .. . . .. ⎥ ⎣ . . . . ⎦ Dlo ,1 Dlo ,2 · · · Dlo ,lo −1

where In is the n × n identity matrix, and the matrix O˜ (A,C),lo is defined in Eq. (4) making n = lo . Then, the triplet (A(t), C(t), D(t)) is strongly observable on T if and only if rank S(t) = rank S ∗ (t) for all t ∈ T . Corollary 1 [17] Assume that the matrices A(t), D(t), and C(t) are lo − 2, lo − 2 and lo − 1 times continuously differentiable on the time interval T , respectively; suppose that D(t)w(t) is continuously differentiable and y(t) is lo − 1 continuously differentiable on T . Let K (t) ∈ R plo × plo such that ker K (A, C, D) (t) = ImJ(A, C, D),lo . Define T T ˜ H(A, C, D) (t) = O˜ (A, C),lo (t)K (A, C, D) (t)K (A, C, D) (t)O(A, C),lo (t).

Then H(A, C, D) (t) is invertible, and

138

J. Dávila et al. −1 T ˜T ˆ x(t) = H(A, C, D) (t)O(A, C),lo (t)K (A, C, D) (t)K (A, C, D) (t)Y (t)

T with Yˆ (t) = y T (t), . . . , y (lo −1)T (t) for all t ∈ T .

High-Order Sliding-Mode Differentiator The HOSM differentiator [19] provides exact convergence of the estimated derivatives to its real values after a finite-time transient. In this subsection, z i and vi denote the scalar differentiator variables. Let g0 be the function to be differentiated. Let us assume that there exist a constant  such that |g0(r +1) (t)| ≤ , the r -th order differentiator can be expressed in the following form: r

z˙ 0 = v0 = z 1 − κr |z 0 − g0 (t)| r +1 sign(z 0 − g0 (t)), r −1 z˙ 1 = v1 = z 2 − κr −1 |z 1 − v0 | r sign(z 1 − v0 ), .. . r −i

z˙ i = vi = z i+1 − κr −i |z i − vi−1 | r −i+1 sign(z i − vi−1 ), .. .

(6)

z˙r = −κ0 sign(zr − vr −1 )

for suitable positive constant coefficients κi to be chosen recursively large in the given order. Recall that the solutions of the differentiator (6) and the proposed observers are understood in Filippov’s sense [9], this assumption is made in order to allow for discontinuous signals in the system and observers. It is important to remark that Filippov’s solutions are equivalent to the usual ones when the right-hand side of (1) is continuous. It is also assumed that all the inputs allow the existence and solution’s extension for the entire semi-axis t ≥ 0. A possible selection of the differentiator parameters is κ0 = 1.1, κ1 = 1.5 1/2 , κ2 = 2 1/3 , κ3 = 3 1/4 , κ4 = 5 1/5 , κ5 = 8 1/6 that are valid for r ≤ 5. The following equalities are true after a finite time transient process in the absence of noise (see [19]): (7) i = 0, ..., r. |z i − g0(i) (t)| = 0, The notation Dri g0 is used to denote the injection signal z i in an application of the differentiator of order r to the signal g0 .

Robust State Estimation for Linear Time-Varying Systems …

139

4 Cascade Observer for Linear Time-Varying Strongly Observable Systems The following assumptions are considered in this Section: 1. A(t), D(t), and C(t) are matrices of n − 2, n − 2, and n − 1 continuously differentiable functions, respectively, and are assumed bounded, as well as their derivatives, i.e., A(t)(i) ≤ ki1 , D(t)(i) ≤ ki2 , C(t)( j) ≤ k j3 ,

∀i = 0, ..., n − 2;

j = 0, ..., n − 1,

where · is any induced matrix norm. 2. The unknown input w(t) is bounded and its (n − 2)-th derivatives are Lipschitz with a Lipschitz constant no greater than wi+ , i.e., w(t)(i) ≤ wi+ ,

∀i = 0, ..., n − 1.

3. The triple (A(t), D(t), C(t)) is strongly observable on T . The observer is designed using a cascade structure that is composed of a stabilization phase and a finite-time estimation phase. The stabilization algorithm is given by the equation:   z˙ (t) = A(t)z(t) + B(t)u(t) + L(t) e y (t) ,

(8)

where e y (t) = y(t) − C(t)z(t), and the matrix L(t) is a correction factor designed as (9) L(t) = P −1 (t)C T (t), the symmetric and positive definite matrix P(t) = P T (t) is the solution of the differential Riccati equation ˙ P(t) = −P(t)A(t) − A T (t)P(t) + 2C T (t)C(t) − Q(t),

(10)

with initial condition P(t0 ) = P0 and for some symmetric positive definite matrix Q(t). The finite-time state reconstruction is performed by the following equality: x(t) ˆ = z(t) + F(t)Rlo (e y (t))

(11)

where the gain F(t) is given by ˜ T˜ (t)K (TA, F(t) = H(−1 ˜ C, D) (t), ˜ C, D) (t)K ( A, ˜ C, D) (t)O( A, C),l A, o

(12)

140

J. Dávila et al.

the vector function Rlo −1 (e y (t)) is computed as ⎡



e y (t)1 .. .

⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ e y (t) p ⎢ ⎥ ⎢ D 1 e y (t)1 ⎥ ⎢ lo −1 ⎥ ⎢ ⎥ .. ⎥   ⎢ . ⎢ ⎥ Rlo −1 e y (t) = ⎢ D 1 e (t) ⎥ , ⎢ lo −1 y p ⎥ ⎢ ⎥ .. ⎢ ⎥ . ⎢ ⎥ ⎢ lo −1 ⎥ ⎢ Dlo −1 e y (t)1 ⎥ ⎢ ⎥ .. ⎢ ⎥ ⎣ ⎦ . lo −1 Dlo −1 e y (t) p

(13)

where e y (t)i denotes the i-th row of the vector function e y (t), and the differentiators gains are set as i > 2w0+ ||Nlo −1 ||||D(t)||

l −1

o ||P(t)||  + ||Dlo , j ||w +j−1 ||Q(t)|| j=1

(14)

where i is the gain corresponding to the differentiator applied to the output estimation error e y (t)i , for each i = 1, ..., p. Theorem 2 Let the LTV system with unknown inputs (1), (2) satisfy the Assumptions A1–A3. The observer (8)–(14) guarantees the global exact convergence of the estimation error e = x − xˆ to zero after a finite-time transient. Proof The proposed observer is composed of three elements (see Fig. 1): 1. First, an auxiliary state z(t) provides an inaccurate reconstruction of the state x(t). The auxiliary state is obtained from (8) in which the time-varying gain L(t) is ˜ = A(t) − L(t)C(t) is stable for all t ∈ T . In particular, designed such that A(t) the time-varying deterministic least squares filter proposed in Eqs. (9)–(10) is inspired in [36], and provides the convergence of the estimation error to a bounded region around the origin. 2. Second, the differentiator presented in (13) provides, after a finite-time transient, the exact reconstruction of the derivatives up to lo of the output estimation error e y (t). 3. Finally, the error compensation equation, given in (11), uses the information from the auxiliary state and the differentiator output, through the projection matrix computed in (12), to compensate for the discrepancies between x(t) and z(t) and provides the exact estimation of the real states in x(t). ˆ Therefore, the proof is divided into three steps. First, the convergence of the auxiliary estimation error ez = x − z to a bounded region around the origin is proven.

Robust State Estimation for Linear Time-Varying Systems …

141

Fig. 1 Cascade observer block diagram

With this aim, the dynamics of ez is computed as ˜ z (t) + D(t)w(t), e˙z (t) = Ae

(15)

where A˜ = (A(t) − L(t)C(t)). Let a Lyapunov candidate function be V (ez (t)) = ezT (t)P(t)ez (t).

(16)

The following chain of equalities is satisfied T ˙ V˙ (ez (t)) = ezT (t)P(t)e˙z (t) + ezT (t) P(t)e z (t) + e˙z (t)P(t)ez (t) ˙ V˙ (ez (t)) = ezT P(t)(A(t) − L(t)C(t)) + (A(t) − L(t)C(t))T P(t) + P(t) ez (t)

+2ezT P(t)D(t)w(t). Following the result by [27], if there exist symmetric positive definite matrices P(t) and Q(t) such that the following equation is satisfied P˙ + P(t)A(t) + A T (t)P(t) − 2C T (t)C(t) + Q(t) = 0 Thus, if the gain L(t) is chosen accordingly to (9), then the time derivative of (16) satisfies: V˙ (ez (t)) = −ezT (t)Q(t)ez (t) + 2ezT P(t)D(t)w(t).

142

J. Dávila et al.

Therefore, the foregoing derivative satisfies the following inequality: ||P(t)|| . V˙ (ez (t)) ≤ −ezT (t)Q(t)ez (t), ∀ ||ez || ≥ 2w0+ ||D(t)|| ||Q(t)|| Consequently, the proposed choice of the gain L(t) globally guarantees the exponential convergence of the estimation error ez to a bounded neighborhood of the origin dependent on the unknown-input bound. As a second step, let us prove that the derivatives of the output estimation error can be computed in finite time. Due to Assumptions A1 and A2, the estimation error derivative e˙z (t) will also be uniformly bounded. Define the output error as e y (t) = C(t)ez (t) = y − C z. Let us introduce the vector variable e¯ y as the array containing the derivatives of e y from order 0 up to lo − 1, i.e., ⎤ ⎡ e y (t) ⎥ ⎢ .. e¯y (t) = ⎣ ⎦. . e(ly o −1) (t)

A straightforward computation allows to compute the following relation: ˆ e¯ y (t) = O˜ ( A, ˜ C),lo (t)ez (t) + J( A, ˜ C, D),lo (t)w(t) ⎡

where

⎢ w(t) ˆ =⎣

w(t) .. .

(17)

⎤ ⎥ ⎦.

w (lo −2) (t) Notice that the lo -th derivative of e y (t) satisfies the equality: ˙ + ... + Dlo ,lo −1 w (lo −2) (t) e(ly o −1) (t) = Nlo −1 ez (t) + Dlo ,1 w(t) + Dlo ,2 w(t) The norm of this term satisfies the following inequality: ||e(ly o −1) (t)||

≤ ||Nlo −1 ||||ez (t)|| +

l o −1

||Dlo , j ||w +j−1 .

j=1

In view of the boundedness of the auxiliary estimation error ez , and with the proper selection of the differentiators gains according to (14), the lo − 1 derivatives of e y (t) can be estimated using the differentiator described in Sect. 3 as

Robust State Estimation for Linear Time-Varying Systems …

143

  eˆ¯ y = Rlo e y (t)   where Rlo e y (t) is computed according to (13). In view that Assumptions 1 and 2 are satisfied, the estimated values of eˆ¯ y converges, after a finite-time transient, to the real values e¯ y (for more details in the proof of convergence, the interested reader can consult [19]). As the third step of the proof, it should be proven that Eq. (11) provides a valid reconstruction of the state x(t). To do that one should explore the relation between ez and e¯ y in (17). p lo × p lo such that ker K ( A, Let us chose a matrix K ( A, ˜ C, D) (t) = K (t) ∈ R ˜ C, D) (t) = ImJ( A, ˜ C, D),lo , this matrix does exist according to [17]. Therefore, pre-multiplying Eq. (17) by K (t), the following equality is satisfied: ˆ K (t)e¯ y (t) = K (t)O˜ ( A, ˜ C),lo ez (t) + K (t)J( A, ˜ C, D),lo w(t).

(18)

1 Introducing the Moore-Penrose pseudo-inverse of K (t)O˜ ( A, ˜ C),lo as



K (t)O˜ ( A, ˜ C),lo

+

=



K (t)O˜ ( A, ˜ C),lo

T

K (t)O˜ ( A, ˜ C),lo

−1 

K (t)O˜ ( A, ˜ C),lo

T

.

(19) This pseudo-inverse always exist by the definition of K ( A, ˜ C, D) (t). It is important to remark the fact that the second term in the right-hand of Eq. (18) is equal to zero. Now, by pre-multiplying (18) by (19), the auxiliary estimation error can be reconstructed by T ˜T ez (t) = H(−1 ˜ C, D) (t)e¯y (t) ˜ C),l (t)K ( A, ˜ C, D) (t)K ( A, ˜ C, D) (t)O( A, A, o

= F( A, ˜ C, D) (t)e¯y (t). Notice that the value of the gain F( A, ˜ C, D) (t) can be obtained analytically by the ˜ L(t), C(t) and D(t) consecutive derivatives of the vectors and matrices e y , A(t), (see [17]). The derivatives of the output estimation error can be obtained on-line by using the HOSM differentiator (6). Finally,   in view of the finite-time convergence of the differentiator to zero, Rlo e y (t) ≡ e¯ y , the estimation of the state can be deduced directly from the equation ez = x − z. Notice that now ez and z are known values. Therefore, the state can be reconstructed exactly after a transient of finite duration by (11), i.e., the estimated state converges to the real value of the state in finite time, xˆ → x. 1

+  Here we can use any left generalized inverse K (t)O˜ ( A,C),l , such that ˜ o 

K (t)O˜ ( A,C),l ˜ o

+

K (t)O˜ ( A,C),l = I; ˜ o

and it can be computed using any methodology (see, e.g., [26], for more details about generalized inverses and its computation).

144

J. Dávila et al.

Lemma 1 Let Assumptions A1–A3 be satisfied and the output be measured with an additive noise y˘ , being a Lebesgue-measurable function of time with the maximal magnitude ε, i.e., || y˘ − y|| ≤ ε. With sufficiently small ε, the state estimation is obtained with the accuracy of the order of ε1/n . Proof Let follow the proof of Theorem 2, now the estimation error dynamics (15) becomes ˜ z (t) + D(t)w(t) + L(t)ε. e˙z (t) = Ae (20) Considering the Lyapunov candidate function V (ez (t)) = ezT (t)P(t)ez (t), its derivative satisfies the inequality ||P(t)|| . V˙ (ez (t)) ≤ −ezT (t)Q(t)ez (t), ∀ ||ez || ≥ 2(||D(t)||w0+ + ||L(t)||ε) ||Q(t)|| Now, the proposed choice of the gain L(t) guarantees the global exponential convergence of the estimation error to a bounded region around the origin, this region depends on the unknown-input bound and the maximal noise magnitude. In the presence of deterministic noise, component by component, the estimation of the output error e y and its derivatives of the order up to lo − 1, that are estimated by (13), satisfy the following inequalities according to [19]: |z 0 j − e y (t) j | ≤ δ0 j ε |z 1 j − e˙ y (t) j | ≤ δ1 j ε(lo −1)/lo .. . |z (lo −1) j − e(ly o −1) (t) j | ≤ δ(lo −1) j ε1/lo for positive scalar values δi j dependent on the observer and system parameters. Then, the following inequality is satisfied for some positive scalar constant γ + that depends on the system and observer parameters: ||e¯ y (t) − Rlo −1 (e y (t))|| ≤ γ + ε1/lo .   Notice that, according to Eq. (11), the estimated values e¯ y = Rlo e y (t) are used to obtain an estimate value of the states, i.e., x. ˆ Therefore, the following inequality is satisfied ||x − x|| ˆ ≤ γ + ε1/lo , where γ + depends on the norm of F(t) and δ + . Corollary 2 Let Assumptions of Theorem 2 be satisfied, and let the observer be designed according to (8)–(10). Under the assumption that estimation error (15) satisfy the following equality:

Robust State Estimation for Linear Time-Varying Systems …

145

⎤ ⎡ ⎤ C(t)D(t) 0 ˜ ⎢ C(t) A(t) ⎥ ⎢0⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ = ⎢ .. ⎥ , .. ⎣ ⎦ ⎣.⎦ . lo −1 ˜ 0 C(t) A D(t) ⎡

the matrix F(t) = Ilo −1 can be used instead of (12). Remark 1 The application of the HOSM differentiator (6) ensures the finite-time convergence of the cascade observer. In order to guarantee fixed-time convergence of the estimated state, the proposed HOSM differentiator must be replaced by a suitable differentiator that guarantees this kind of convergence, e.g., the fixed-time differentiator [5].

4.1 Example For illustration purposes, let us consider a modified LTV model of an A-7A aircraft, see [11]. x(t) ˙ = A(t)x(t) + B(t)u(t) + D(t)w(t), (21) y(t) = C x(t), where x1 , x2 , x3 , x4 are the x-axis velocity, the angle of attack, the pitch angle and the pitch rate of the aircraft, respectively. The input u(t) is the elevator angle, and w(t) is an unknown exogenous input introduced for simulation purposes. The initial conditions are set as ⎡ ⎤ 0.1 ⎢ −0.5 ⎥ ⎥ x(0) = ⎢ ⎣ −0.1 ⎦ , 0.2 in this example, a null control-input is considered, i.e., u(t) = 0, and the unknown input, for simulation purposes, is taken as w(t) = sin (6π cos (t)) + 0.5. The time-varying matrices in the model are given by

146

J. Dávila et al.

A(t) = E c−1 (t)Q c (t), B(t) = E c−1 (t)Rc (t), D(t) = B(t), ⎤ ⎤ ⎡ ⎡ 10 0 0 0 ⎢0 1 ⎢ 0 ⎥ 0 0⎥ ⎥ ⎥ ⎢ E c (t) = ⎢ ⎣ 0 0 V (t) 0 ⎦ , Rc (t) = ⎣ Z δe (t) ⎦ 0 0 −Mα˙ (t) 1 Mδe (t) ⎤ ⎡ ⎡ ⎤ X u (t) −g cos(Θ0 ) X α (t) 0 1000 ⎥ ⎢ 0 0 0 1 ⎥ ⎣ ⎦ Q c (t) = ⎢ ⎣ Z u (t) −g sin(Θ0 ) Z α (t) V (t) + Z q (t) ⎦ , C = 0 1 0 0 . 0010 0 Mα (t) Mq (t) −Mu (t) With time-varying parameters defined as bV (t)S(C xu + 2C L tan(Θ0 )) , 2m bV (t)S(C zu − 2C L ) , Z u (t) = 2m bV (t)ScCmu bV 2 (t)SC xα Mu (t) = , , X α (t) = 2I yy 2m X u (t) =

Z α (t) =

bV 2 (t)SC zα bV 2 (t)ScCmα , Mα (t) = , 2m 2I yy

Mα˙ (t) =

bV (t)Sc2 Cm α˙ , 4I yy

Z q (t) =

bV (t)ScC zq bV (t)Sc2 Cmq , Mq (t) = , 4m 4I yy

Z δe (t) =

bV 2 (t)SC zδe bV 2 (t)ScCmδe , Mδe (t) = , 2m 2m

where V (t) = 3(sin(t) + 49) is the flight velocity, m is the aircraft’s mass, S is the main wing area, c is the main wing chord, b is the main wing span, and C L is the lift coefficient. The other parameters are the non-dimensional stability and control derivatives (for more information about these parameters, the interested reader can consult [11] and [32]. The proposed cascade algorithm is able to perform an exact reconstruction of the states. For simulation purposes a discretization step equal to 0.0001 was used with a gain i = 5, i = 1, 2. The cascade combination of the filter and the differentiator following the computational algorithm provides an exact estimation of the states, this estimation is shown in Fig. 2. The matrix P(t) is obtained numerically using a vectorization procedure with numerical integration of the vectorized Eq. (10). The proposed cascade observer provides an accurate estimation of the states. The corresponding estimation errors are presented in Fig. 3.

Robust State Estimation for Linear Time-Varying Systems …

147

Fig. 2 State estimation. ©2022 John Wiley and Sons. Reprinted with permission from GalvánGuerra, R., Fridman, L., and Dávila, J. (2017) High-order sliding-mode observer for linear timevarying systems with unknown inputs. Int. J. Robust. Nonlinear Control, 27: 2338–2356. https:// doi.org/10.1002/rnc.3698

Fig. 3 Estimation errors. ©2022 John Wiley and Sons. Reprinted with permission from GalvánGuerra, R., Fridman, L., and Dávila, J. (2017) High-order sliding-mode observer for linear timevarying systems with unknown inputs. Int. J. Robust. Nonlinear Control, 27: 2338–2356. https:// doi.org/10.1002/rnc.3698

It is important to highlight the finite-time convergence of the proposed observer to the states’ value despite the LTV system’s instability and the unknown inputs’ existence.

148

J. Dávila et al.

5 Non-cascade Observer for Linear Time-Varying Strongly Observable Systems 5.1 A Normal Form for Linear Time-Varying Systems with Unknown Inputs Let ci (t) be the i-th row of the matrix C(t). Assumption 1 There exists an admissible set of observability indices {r1 , r2 , ..., rq } such that the systems (1), (2) are observable and the following equalities are satisfied for all t ∈ T and every i = 1, 2, ..., q: ⎡ ⎢ ⎢ ⎢ ⎣

W 0 [ci (t)]D(t) W 1 [ci (t)]D(t) .. .

⎤ ⎥ ⎥ ⎥ = 0(ri −1)×m ; ⎦

W ri −2 [ci (t)]D(t) Remark 2 Assumption 1 allows for the existence of unknown inputs that could affect, at least for a time instant td ∈ T , the derivatives of order ri of the outputs yi (t), i.e., W ri −1 [ci (td )]D(td ) = 0. In light of Assumption 1 the following matrices Oi (t), i = 1, ..., q, are computed for each element of the matrix C(t): ⎡ ⎢ ⎢ Oi (t) = ⎢ ⎣

W 0 [ci (t)] W 1 [ci (t)] .. .

W ri −1 [ci (t)]

⎤ ⎥ ⎥ ⎥ ⎦

,

ri ×n

where ri is the corresponding observability index associated with the output yi (t) and the i-th row of C(t). The observability matrix is defined as O T (t) = [ O1T (t) O2T (t) ... OqT (t) ]n×n .

(22)

Definition 4 The systems (1), (2) are uniformly differentially observable with unknown inputs in the time interval T , if rank O(t) = n for all t ∈ T and there exist positive constants b1 and b2 such that 0 < b1 ≤ ||O(t)|| ≤ b2 . Let the systems (1), (2) be uniformly differentially observable with unknown inputs in T . Thus, by construction, O(t) is an n × n nonsingular matrix for all t ∈ T . Then, O(t) is said to be a fixed-lexicographic basis in T , see [37]. The term fixedlexicographic basis references that O(t) is a nonsingular matrix for all T , and that the lexicographic order r1 ≥ r2 ≥ ... ≥ rq is also satisfied. A uniformly differentially

Robust State Estimation for Linear Time-Varying Systems …

149

observable system with unknown inputs with a fixed-lexicographic basis on T is said to be lexicography-fixed on T . Assumption 2 System (1), (2) is uniformly differentially observable with unknown inputs and lexicography-fixed on T . Remark 3 Although matrices A(t) and C(t) were assumed smooth enough in Sect. 2, the particular selection of O(t) allows stating the requirement about the differentiability of these matrices more specifically. According to the particular lexicographic selection of O(t), matrix A(t) is required to be only (rmax − 2) times differentiable for every t ∈ T , where rmax = max∀i=1, ..., q (ri ), and the row vectors ci (t) are required to be only ri − 1 times differentiable for every t ∈ T , and for each i = 1, ..., q, respectively. The columns of the inverse of matrix O(t) are represented by the variables αi, j as



O−1 (t) = α1,0 ... α1,r1 −1 ... αq,0 ... αq,rq −1 n×n .

(23)

In particular, define the n × 1 vectors αi = αi,ri −1

(24)

for i = 1, 2, ..., q, i.e., the last column of each block associated with the same ci . A transformation matrix can be composed as follows: T (t) = T1 (t) T2 (t) ... Tq (t) n×n ,

(25)

Ti (t) = Ti,ri (t) Ti,ri−1 (t) ... Ti,1 (t) n×ri ,

(26)

where

for i = 1, ..., q. Each n × 1 column of the matrix is defined as Ti, j (t) = L j−1 [αi ].

(27)

The following definition characterizes the class of transformations necessary to maintain the properties of the system. Definition 5 [24] Consider the linear transformation x(t) = T (t)x(t), ¯ where T (t) is an n × n matrix valued function of time. T (t) is said to represent a Lyapunov transformation on T if there exist positive constants q1 and q2 such that

150

J. Dávila et al.

1. T (t) and T˙ (t) are continuous and uniformly bounded for every t ∈ T , 2. 0 < q1 ≤ | det T (t)| ≤ q2 for all t ∈ T . Assumption 3 Let T (t) in (25) be a Lyapunov transformation according to Definition 5. In particular, Assumption 3 guarantees that the stability properties of the original system are preserved. Lemma 2 [24] Let αi be defined as in (24), then the operators L[·] and W [·] satisfy the following equality: W g [ci ]L h [α j ] = W g+h [ci ]α j for i = 1, 2, ..., q and g, h = 1, 2, ..., n i . Let the matrices O(t) and its inverse O−1 (t) be defined as in (22) and (23), respectively. Therefore, their elements satisfy W h [ck ]αs,r = δr,h δs,k , where δi, j is the Kronecker delta function (see, for example, [24]), given that the matrix O(t) is a lexicography-fixed basis on T . Thus, the following equality is satisfied: q ek   m a ij,h (m)W h [c j ] W [ci ] = j=1 h=0

where a ij,h (m) are time-varying scalars for m ≥ ri , i = 1, ..., q, and ek = min(m, ri − 1). Therefore the following equalities are also satisfied: ⎧ m for m ≤ ri − 1 ⎨ W [ci ]αs,h = δh,m δs,i i W m [ci ]αs,h = as,h (m) for m ≥ ri ; m ≥ h (28) ⎩ for m ≥ ri ; m < h W m [ci ]αs,h = 0 for s = 1, 2, ..., q and h = 0, 1, ..., ri − 1. Inspired by the work [24], but considering here the effect of the unknown inputs, in this work the following transformation is proposed using the concept of observability indices. Theorem 3 Let the systems (1), (2) satisfy Assumption 2, and let the transformation matrix T (t) be constructed according to (25)–(27), such that it satisfies Assumption 3. Then, for every t ∈ T , the linear transformation x(t) = T (t)x(t) ¯ brings the systems (1), (2) to the form: ¯ x(t) ¯ ¯ ˙¯ = A(t) x(t) ¯ + B(t)u(t) + D(t)w, ¯ x(t), y(t) = C(t) ¯

(29) (30)

Robust State Estimation for Linear Time-Varying Systems …

151

  ¯ = T −1 (t) A(t)T (t) − T˙ (t) , B(t) ¯ ¯ where A(t) = T −1 (t)B(t), D(t) = T −1 (t)D(t), ¯ ¯ ¯ ¯ given C(t) = C(t)T (t), with the particular form of the matrices A(t), C(t), and D(t) as follows: ⎤ A11 A12 ... A1q ⎢ A21 A22 ... A2q ⎥ ⎥ ⎢ A¯ = ⎢ . .. .. ⎥ , ⎣ .. . . ⎦ Aq1 Aq2 ... Aqq ⎡

(31)

where ⎡

βii,1 βii,2 .. .

⎢ ⎢ ⎢ Aii = ⎢ ⎢ ⎣ βii,ri −1 βii,ri

⎤ 0 ... 0 0 1 ... 0 0 ⎥ ⎥ .. .. .. ⎥ . . .⎥ ⎥ 0 0 ... 0 1 ⎦ 0 0 ... 0 0 r ×r 1 0 .. .

i

(32)

i

for i = 1, 2, ..., q, here βi j,k are time-varying parameters. The matrices Ai j with time-varying parameters are given by ⎡

βi j,1 0 0 ⎢ βi j,2 0 0 ⎢ ⎢ .. .. Ai j = ⎢ ... . . ⎢ ⎣ βi j,r j −1 0 0 βi j,r j 0 0

⎤ ... 0 0 ... 0 0 ⎥ ⎥ .. .. ⎥ . .⎥ ⎥ ... 0 0 ⎦ ... 0 0 r

(33) j ×ri

for i = j and j = 1, 2, ..., q. The output distribution matrix is transformed into the next form ⎡

⎤ C1,1 0 0 ... 0 ⎢ C2,1 C2,2 0 ... 0 ⎥ ⎢ ⎥ C¯ = ⎢ . .. .. ⎥ , ⎣ .. . . ⎦ Cq,1 Cq,2 Cq,3 ... Cq,q where each sub-matrix Ci, j satisfies Ci,i = 1 0 ... 0 1×ri , r Ci, j = γi,1 j γi,2 j ... γi, jj , ∀ j = 1, ..., i, for each i = 1, ..., q. Here γi,k j , k = 1, ..., ri , are time-varying parameters.

(34)

152

J. Dávila et al.

The distribution matrix for the unknown inputs is given by D¯ T = D¯ 1T D¯ 2T ... D¯ qT ,

(35)

where each block D¯ i ∈ Rri ×m takes the following form: ⎡

⎤ 0 ... 0 ⎢ .. .. ⎥ ⎢ . ⎥ D¯ i = ⎢ . ⎥, ⎣ 0 ... 0 ⎦ di,1 ... di,m

(36)

with time-varying parameters di, j , i = 1, ..., q, j = 1, ..., m. ¯ takes the form described in (31)–(33), let us stress that the Proof To prove that A(t) following equality holds by definition. T A¯ = AT − T˙ .

(37)

Recall that T = L r1 −1 [α1 ] ... α1 | ... | L rq −1 [αq ] ... αq .

The following equality is obtained from expanding the product

L r1 −1 [α1 ] ... α1 | ... | L rq −1 [αq ] ... αq A¯ = r −1 r −1 q 1 A L [α1 ] ... α1 | ... | L [αq ] ... αq − d r1 −1 [α ] ... α | ... | L rq −1 [α ] ... α 1 1 q q . dt L

Notice that by definition L i+1 [α j ] = AL i [α j ] − becomes

d dt

L i [α j ]. Therefore, (37)

L r1 −1 [α1 ] ... α1 | ... | L rq −1 [αq ] ... αq A¯ = r r q 1 L [α1 ] ... L[α1 ] | ... | L [αq ] ... L[αq ] .

The right-hand side of the equation could be interpreted as a set of vectors that ¯ are represented with respect to the basis T (t) by means of the weights A(t). Thus, that takes the form: L r1 [α1 ] = β11,1 L r1 −1 [α1 ] + ... + β11,r1 L 0 [α1 ] + ... + βq1,1 L rq −1 [αq ] + ... + βq1,rq L 0 [αq ], L r1 −1 [α1 ] = 1L r1 −1 [α1 ] + ... + 0L 0 [α1 ] + ... + 0L rq −1 [αq ] + ... + 0L 0 [αq ], .. . L r j [α j ] = β1 j,1 L r1 −1 [α1 ] + ... + β1 j,r1 L 0 [α1 ] + ... + βq j,1 L rq −1 [αq ] + ... + βq j,rq L 0 [αq ],

Robust State Estimation for Linear Time-Varying Systems …

153

.. . L[αq ] = 0L r1 −1 [α1 ] + ... + 0L 0 [α1 ] + ... + 0L rq −1 [αq ] + ... + 1L[αq ] + 0L 0 [αq ].

Therefore, the values of the time-varying parameters βi j,k give the particular form ¯ as given in (31)–(33). to A(t) Secondly, to prove the matrix C¯ = C T takes the form given in (34), notice that ⎡ ⎤ c1 L r1 −1 [α1 ] ... c1 α1 ... c1 L rq −1 [αq ] ... c1 αq ⎢ c2 L r1 −1 [α1 ] ... c2 α1 ... c2 L rq −1 [αq ] ... c2 αq ⎥ ⎢ ⎥ C¯ = ⎢ .. .. .. .. .. .. .. ⎥ . ⎣ . . . . . . . ⎦ cq L r1 −1 [α1 ] ... cq α1 ... cq L rq −1 [αq ] ... cq αq

Introducing operator W [·], according to Lemma 2, matrix C¯ takes the form: ⎡

W r1 −1 [c1 ]α1 ⎢ W r1 −1 [c2 ]α1 ⎢ C¯ = ⎢ .. ⎣ .

... W 0 [c1 ]α1 ... W 0 [c2 ]α1 .. .. . . W r1 −1 [cq ]α1 ... W 0 [cq ]α1

⎤ ... W 0 [c1 ]αq ... W 0 [c2 ]αq ⎥ ⎥ ⎥. .. .. ⎦ . . rq −1 0 ... W [cq ]αq ... W [cq ]αq ... W rq −1 [c1 ]αq ... W rq −1 [c2 ]αq .. .. . .

(38)

Now, the particular values of each element of matrix C¯ in (34) are obtained from the repetitive use of the equality (28) to the elements of (38). Finally, the matrix D¯ in (35), (36) is a straightforward consequence of the change of coordinates. In particular, each block i = 1, ..., q takes the following form: ⎤ ⎡ ⎤ ⎡ ¯ W 0 [c¯i (t)] D(t) W 0 [ci (t)]D(t) ⎢ W 1 [c¯i (t)] D(t) ⎥ ⎢ W 1 [ci (t)]D(t) ⎥ ¯ ⎥ ⎢ ⎥ ⎢ ⎥ = 0(ri −1)×q ⎢ ⎥=⎢ .. .. ⎦ ⎣ ⎦ ⎣ . . ri −2 ri −2 ¯ [c (t)]D(t) W [c¯i (t)] D(t) W i ¯ for every t, and W ri −1 [c¯i (t)] D(t) = W ri −1 [ci (t)]D(t). Notice that the last expressions and the particular form of matrix A¯ fix the form of matrix D¯ as described in (35) and (36). Corollary 3 In case the admissible set of observability indices of system (1), (2) ¯ satisfies |ri − r j | ≤ 1, for all i, j = 1, 2, ..., q, matrix C(t) takes the following particular form: ⎡ ⎤ C1,1 0 ... 0 ⎢ C2,1 C2,2 ... 0 ⎥ ⎢ ⎥ ¯ C(t) =⎢ . ⎥, .. ⎣ .. ⎦ . Cq,1 Cq,2 ... Cq,q

154

J. Dávila et al.

where Ci,i = 1 0 ... 0 1×ri , Ci, j = γi,1 j 0 ... 0 1×r , j

∀ j = 1, ..., i.

Proof Notice that due to the difference of observability indices satisfying |ri − r j | ≤ 1 for every i, j = 1, ..., q,, the only term different from zero in (38) is the first element of each block C¯ i, j . What follows from the equalities (28). Furthermore, verifying that all terms of C¯ i, j , ∀i = j are equal to zero when the difference of observability indices equals zero is straightforward.

5.2 Observer Design The following assumption allows for the introduction of the observer without requiring a cascade structure. Assumption 4 There is an admissible set of observability indices for the systems (1) and (2) that satisfies Assumption 2, and the observability indices are such that |ri − r j | ≤ 1, i, j = 1, ..., q. This assumption will allow the observer proposed in this section to avoid the use of a cascade structure. Even though Assumption 4 seems restrictive, the class of systems that satisfy this assumption contains the class of LTV systems with a strict relative degree [16]. An LTV observer for the systems (1), (2) is proposed in the following form:   x(t) ˆ˙ = A(t)x(t) ˆ + B(t)u(t) + T l(e y ) yˆ (t) = C(t)x(t) ˆ

(39) (40)

where xˆ is the estimate of the state vector x, e y = y − yˆ is the output estimation error, l(e y ) is a nonlinear output error injection vector, and T (t) is the transformation matrix, constructed according to (25)–(27), that satisfies Assumption 3. The compensation term l(e y ) is composed of q blocks li (e y ): ⎡ ⎤ l1 (e y ) ⎢ l2 (e y ) ⎥ ⎢ ⎥ (41) l(e y ) = ⎢ . ⎥ . ⎣ .. ⎦ lq (e y ) Each block’s dimension agrees with the corresponding observability index of each T output and takes the form li (e y ) = li,1 (e y ) li,2 (e y ) ... li,ri (e y )

Robust State Estimation for Linear Time-Varying Systems … ⎡ q

⎤

⎡   rir−1 ⎥ ⎢ κi,ri e y,i ri −2 ⎥ ⎢  i ⎥ ⎢ κi,r −1 e y,i ri −1 ⎥+⎢ i ⎥ .. .. ⎥ ⎢ . . ⎦ ⎣  0 κi,1 e y,i j=1 ζi, j,ri e y, j

ζi, j,1 e y, j   j=1 q ζ j=1 i, j,2 e y, j

⎢ ⎢ ⎢ li (e y ) = ⎢ ⎢ ⎢ ⎣ q

155 ⎤ ⎥ ⎥ ⎥ ⎥, ⎥ ⎦

(42) k−1

1 where ζi, j,k = βi,q− j+1,k − l=1 ζi,l,k γq−l+1,q− j+1 and βi j,k , i, j = 1, ..., q, k = 1, ..., ri are known time-varying parameters that are taken from the transformed ¯ in (31). The design parameters κi, j have to be chosen sufficiently large matrix A(t) according to [19]. A possible selection of these gains is κi,1 = 1.1i+ ,  1/2  1/3  1/4  1/5  1/6 κi,2 = 1.5 i+ , κi,3 = 3 i+ , κi,4 = 5 i+ , κi,5 = 8 i+ , κi,6 = 12 i+ , for observability indices ri ≤ 6, where i+ = maxt∈[0, ∞) ||W ri −1 [ci (t)]D(t)|| w , for each i = 1, ..., q. From the two terms composing the right-hand side of Eq. (42), the first is a linear term compensating for the time-varying terms of the system, and the second term guarantees the finite-time convergence to zero of the estimation error. In this section, the use of terms inspired by the robust exact high-order sliding-mode differentiator [19], due to its widespread use, is proposed to present an accessible algorithm design. However, similar injections terms could be taken from algorithms such as the fixedtime differentiator by [23].

Theorem 4 Let the systems (1) and (2) such that Assumption 2 holds, and let matrix T (t), which is shaped using a set of permissible observability indices that satisfy Assumption 4, be constructed according to (25) and satisfying Assumption 3. Then, the observer (39)–(42) provides an exact estimate x(t) ˆ of the actual state x(t) after a finite-time transient tr . Proof Define the estimation error resulting of the observer application as e = x − x. ˆ The estimation error dynamics is   e(t) ˙ = A(t)e(t) + D(t)w(t) − T l(e y ) , e y (t) = C(t)e(t), where e y is the output estimation error. By applying the transformation e = T e, ¯ the dynamics of the new variable e, ¯ which takes a more suitable form for the analysis, is given by ¯ e(t) ¯ ˙¯ = A(t) e(t) ¯ + D(t)w(t) − l(e y ), ¯ e y (t) = C(t)e(t), ¯ ¯ C,and ¯ where the structure of the transformed matrices A, D¯ are described in Theorem 3. The estimation error in its transformed form can be studied in a block form. In this form, the errors are grouped in q sets, each subset e¯i composed of ri elements of the

156

J. Dávila et al.

estimation error vector, i.e., e(t) ¯ T = e¯1T e¯2T ... e¯qT , e¯iT = e¯i,1 e¯i,2 ... e¯i,ri . Then, for the ith block, e¯i , i = 1, ..., q, the estimation error takes the following form: ⎡ ⎤ ⎤ li,1 (e y ) 0 ... 0 ⎢ li,2 (e y ) ⎥ ⎥ ⎢ 0 ... 0 ⎥ ⎢ ⎥ ⎥ ⎥ ⎢ ⎥ ⎥ ⎥ ⎢ . . .. .. .. ⎥ w(t) − ⎢ ⎢ ⎥. ⎥ = Ai1 ⎢ . ⎢ ⎥ ⎥ ⎥ ⎢ ⎣ li,ri −1 (e y ) ⎦ ⎣ e˙¯i,r −1 ⎦ 0 ... 0 ⎦ i di,1 ... di,q li,ri (e y ) e˙¯i,ri (43) By substituting (31) and (42) into (43), it follows that ⎡

e˙¯i,1 e˙¯i,2 .. .

⎡ ⎤ e¯1 ⎢ ⎥ ⎢ ⎢ ⎢ e¯2 ⎥ ⎢ Ai2 ... Aiq ⎢ . ⎥ + ⎢ ⎣ .. ⎦ ⎢ ⎣ e¯q



⎡ ˙ ⎤ ⎡ e¯i,1 0 ⎢ e˙¯i,2 ⎥ ⎢ 0 ⎢ ⎥ ⎢ ⎢ . ⎥ ⎢. ⎢ .. ⎥ = ⎢ .. ⎢ ⎥ ⎢ ⎣ e˙¯ ⎦ ⎣0 i,ri −1 0 e˙¯i,ri



1 0 .. . 0 0

... 0 0



  rir−1 i e¯ κ ⎢ i,ri i,1  ri −2 ⎥ ⎢ r κi,ri −1 e¯i,1 i −1 ⎥ ⎢ ⎥ ⎢ .. ⎢ ⎥−⎢ ⎥ ⎢ . ⎦ ⎢  1 i,ri −1 ⎣ κi,2 e¯i,1 2 e¯i,ri  0 κi,1 e¯i,1

⎤⎡ ... 0 0 e¯i,1 ... 0 0 ⎥ ⎢ e¯i,2 ⎥⎢ .. .. ⎥ ⎢ .. ⎢ . .⎥ ⎥⎢ . ... 0 1 ⎦ ⎣ e¯





⎡ ⎥ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥+⎢ ⎥ ⎢ ⎥ ⎣ ⎥ ⎦

0 0 .. . 0 di,1

⎤ ... 0 ... 0 ⎥ ⎥ .. ⎥ w(t). . ⎥ ⎥ ... 0 ⎦ ... di,m

Each block of the dynamics of the estimation error takes the form of the differentiation error of the robust exact differentiator in its non-recursive form [19]:   ri − j e˙¯i, j = e¯i, j+1 − κi,ri − j+1 e¯i,1 ri , ∀ j = 1, ..., ri − 1  0 e˙¯i,ri = [ di,1 ... di,m ]w(t) − κi,1 e¯i,1 . (44) According to [19], if κi,1 is chosen such that κi,1 > maxt∈[0, ∞) ||[ di,1 ... di,m ]|| ||w(t)||, or equivalently κi,1 > maxt∈[0, ∞) ||W ri −1 [ci (t)]D(t)|| w , each block of errors ei, j , j = 1, ..., ri converges to zero after a finite transient-time Ti . Therefore, the full state estimation error vector e¯ will converge to zero after a finite transient-time equal to tr = maxi=1,...,q Ti . Remark 4 According to Eq. (44), the gains κi,1 of the differentiators depend only on the known system matrices A(t), C(t) and D(t), and the upper bound of the unknown inputs w . Remark 5 The lexicography-fixed matrix O defines the transformation matrix and fixes the observer’s structure for the time interval T . The gains of the nonlinear part of the observer are used to compensate for the effect of the unknown inputs and to provide finite-time convergence. Remark 6 Notice that Assumptions 2 and 4 imply the existence of a transformation that relates the output and its derivatives with the state vector, i.e., Y = Ox, where T T Y = Y1T Y2T ... YqT and Yi = yi y˙i ... yi(ri −1) . Thus, Assumptions 2 and 4 are sufficient conditions for strong observability [18, 20].

Robust State Estimation for Linear Time-Varying Systems …

157

5.3 Example Let us consider the linear time-varying system ⎛ ⎞ ⎛ −1 0.2 sin(t) − 0.3 −0.3 4 0 ⎜ −1 ⎟ ⎜ cos(5t) 4 6 3 ⎟ ⎜ x˙ = ⎜ ⎝ 1 2 cos(π t) − 5 −5 0.5 ⎠ x + ⎝ cos(5t) 0 0 1 −2 0 ! y=

0 2 −2 0 00 0 1

⎞ 1 0⎟ ⎟ w, 0⎠ 0

(45)

" x.

(46)

A set of admissible observability indices satisfying Assumption 4 is {r1 = 2, r2 = 2}. The observability matrix is given by ⎛ ⎞ 0 2 −2 0 ⎜ −4 18 − 4 cos(π t) 22 5 ⎟ ⎟. O=⎜ ⎝ 0 0 0 1 ⎠ 0 0 1 −2 Assumption 2 is satisfied due det(O) = −8. Thus, computing the transformation matrix T according to (25) yields: ⎞ 0.25 −0.25 −π sin(π t) + cos(π t) + 0.2 sin(t) − 10.6 10 − cos(π t) ⎟ ⎜ 0.25 0 cos(π t) 1 ⎟. T =⎜ ⎠ ⎝ −0.25 0 cos(π t) 1 0 0 1 0 ⎛

(47)

The determinant of the transformation matrix T (t) is det(T (t)) = −0.125, and its derivative is equal to ⎛

0 ⎜0 T˙ (t) = ⎜ ⎝0 0

⎞ 0 −0.2 cos(t) − π 2 cos(π t) − π sin(π t) π sin(π t) ⎟ 0 −π sin(π t) 0 ⎟. ⎠ 0 −π sin(π t) 0 0 0 0

Therefore the transformation matrix T (t) satisfies Assumption 3. The transformed system is given by the following matrices: ⎛

− cos(π t) − 2 ⎜ π sin(π t) + 19 cos(π t) − cos(2π t) − 0.4 sin(t) − 1.4 ⎜ ¯ A=⎝ −0.25 0.5 cos(π t) − 0.25

1 β12,1 0 β12,2 0 cos(π t) − 2 0 π sin(π t) + 2 cos(π t) + 1.75

⎞ 0 0⎟ ⎟ 1⎠ 0

(48)

β12,1 = 4π sin(π t) + 36 cos(π t) − 2 cos(2π t) − 0.8 sin(t) + 45.4 β12,2 = −4π 2 cos(π t) + 44π sin(π t) − 4π sin(2π t) + 65 cos(π t) − 4 cos(2π t) − 1.6 sin(t) + 0.8 cos(t) + 139.8

158

1

J. Dávila et al. 105

0.5

0

-0.5

-1 0

5

10

15

20

25

30

35

40

t [s] Fig. 4 States x(t)







⎞ 0 0 ⎜ −2 cos(π t + 5t) − 2 cos(5t − π t) + 40 cos(5t) −4 ⎟ ⎟, =⎜ ⎝ 0 0 ⎠ cos(5t) 0 ! " 1000 = . 0010

For simulation purposes, the following unknown input is considered:

 1 + 10 sin(2t) w(t) = . 3 + 10 sin(2 cos(5t)) The observer for the linear time-varying system (45)–(46) is given by Eqs. (39), (40), with the transformation matrix (47) and the coefficients βi j,k defined by the transformed matrix A¯ given in (48). The gains of the nonlinear part of the observer are chosen as κ1,1 = 1.1(44)(13) = 629.2, κ1,2 = 1.5(484)1/2 , κ2,1 = 1.1(13) = 14.3, κ2,2 = 1.5(14.3)1/2 . For the numerical simulations, the Euler integration method has been used with a sampling step equals 0.0001 seconds. The time evolution of the states in a period of 40 seconds is shown in Fig. 4. Notice that the behavior of the states is not asymptotically stable because the existence of the unknown inputs. The convergence to zero of the estimation errors for the same period of time is displayed in Fig. 5. The finite-time transient displaying the convergence of the estimates xˆ1 (t) and xˆ2 (t) to their real values x1 (t) and x2 (t) is presented in the zoom to a period of 5 seconds that is given in Fig. 6. The convergence of the estimated states xˆ3 (t), xˆ4 (t) to their real values x3 (t), x4 (t) is presented in Fig. 7. From those images, it is possible to see that the states converge exactly to their real values after a finite transient-time.

Robust State Estimation for Linear Time-Varying Systems …

159

40 20 0

5

-20

0

-40

-5

-60 0

5

1.8

1.9

10

2 15

2.1 20

2.2

2.3 25

30

35

40

t [s] Fig. 5 Estimation errors e(t) 60 40 20 0 -20 0

0.5

1

1.5

2

0.5

1

1.5

2

2.5

3

3.5

4

4.5

5

2.5

3

3.5

4

4.5

5

60 40 20 0 -20 0

t [s] Fig. 6 Estimations of the states x1 (t) (above), and x2 (t) (below) 20 0 -20 0

0.5

1

1.5

2

0.5

1

1.5

2

2.5

3

3.5

4

4.5

5

2.5

3

3.5

4

4.5

5

10 5 0 -5 0

t [s] Fig. 7 Estimations of the states x3 (t) (above), and x4 (t) (below)

160

J. Dávila et al.

6 Conclusions In this chapter two observation algorithm for Linear Time-Varying systems were presented. In the first algorithm a combination of a Deterministic Least Squares Filter and the HOSM differentiator is used in order to compensate for the effects of instability and unknown inputs. The proposed observer provides a finite-time exact reconstruction of the actual value of the states in the presence of bounded unknown inputs. For the second case, the algorithm takes advantage of a transformation designed for the class of linear time-varying systems with a uniformly differentially observable and lexicography-fixed structure. The particular structure of the transformed system allows using a linear term to compensate for the time-varying terms in the dynamics of the estimation error. The second part is a vector of nonlinear terms, which are functions of the output estimation errors. The resulting estimation error coincides with the error dynamics of the high-order robust exact differentiator. The resulting structure guarantees accurate state estimates after finite-time convergence of the estimation error to zero. The second methodology’s particular structure allows the observer’s gains to depend only on the known system matrices and the upper bound of the unknown inputs. In summary, the proposed observer provides exact state estimates after a finite transient time.

References 1. Angulo, M.T., Fridman, L., Moreno, J.A.: Output-feedback finite-time stabilization of disturbed feedback linearizable nonlinear systems. Automatica 49(9), 2767–2773 (2013) 2. Barbot, J., Floquet, T.: A canonical form for the design of unknown input sliding mode observers. In: Edwards, C., Fossas, E., Fridman, L. (eds.) Advances in variable structure and sliding mode control. Lecture Notes in Control and Information Science, pp. 271–292. Springer Verlag, Berlin (2006) 3. Bejarano, F.J., Fridman, L., Poznyak, A.: Exact state estimation for linear systems with unknown inputs based on hierarchical super-twisting algorithm. Int. J. Robust Nonlinear Control 17(18), 1734–1753 (2007) 4. Bejarano, F.J., Pisano, A., Usai, E.: Finite-time converging jump observer for switched linear systems with unknown inputs. Nonlinear Anal. Hybrid Syst. 5(2), 174–188 (2011) 5. Cruz-Zavala, E., Moreno, J.A., Fridman, L.M.: Uniform robust exact differentiator. IEEE Trans. Autom. Control 56(11), 2727–2733 (2011) 6. Davila, J., Fridman, L., Levant, A.: Second-order sliding-mode observer for mechanical systems. IEEE Trans. Autom. Control 50(11), 1785–1789 (2005) 7. Davila, J., Tranninger, M., Fridman, L.: Finite-time state-observer for a class of linear timevarying systems with unknown inputs. IEEE Trans. Autom. Control 1–1 (2021) 8. Efimov, D., Cieslak, J., Zolghadri, A., Henry, D.: Actuator fault detection in aircraft systems: Oscillatory failure case study. Ann. Rev. Control 37(1), 180–190 (2013) 9. Filippov, A.F.: Differential Equations with Discontinuos Right-hand Sides. Kluwer Academic Publishers, Dordrecht, The Netherlands (1988) 10. Fridman, L., Levant, A., Davila, J.: Observation of linear systems with unknown inputs via high-order sliding-modes. Int. J. Syst. Sci. 38(10), 773–791 (2007) 11. Fujimori, A., Ljung, L.: Parameter estimation of polytopic models for a linear parameter varying aircraft system. Trans. Japan Soc. Aeronaut. Space Sci. 49(165), 129–136 (2006)

Robust State Estimation for Linear Time-Varying Systems …

161

12. Galván-Guerra, R., Fridman, L., Dávila, J.: High-order sliding-mode observer for linear timevarying systems with unknown inputs. Int. J. Robust Nonlinear Control 27(14), 2338–2356 (2017) 13. Hashimoto, H., Utkin, V., Xu, J.X., Suzuki, H., Harashima, F.: VSS observer for linear time varying system. In: Procedings of IECON’90, pp. 34–39. Pacific Grove CA (1990) 14. Hautus, M.L.J., Silverman, L.M.: System structure and singular control. Linear Algebra Appl. 50, 369–402 (1983) 15. Hautus, M.L.: Strong detectability and observers. Linear Algebra Appl. 50, 353–368 (1983) 16. Ilchmann, A., Mueller, M.: Time-varying linear systems: Relative degree and normal form. IEEE Trans. Autom. Control 52(5), 840–851 (2007) 17. Kratz, W., Liebscher, D.: A local characterization of observability. Linear Algebra Appl. 269(13), 115–137 (1998) 18. Kratz, W.: Characterization of strong observability and construction of an observer. Linear Algebra Appl. 221, 31–40 (1995) 19. Levant, A.: High-order sliding modes: differentiation and output-feedback control. Int. J. Control 76(9–10), 924–941 (2003) 20. Liebscher, D.: Strong observability of time-dependent linear systems. In: Schmidt, W.H., Heier, K., Bittner, L., Bulirsch, R. (eds.) Variational Calculus, Optimal Control and Applications, number 124 in International Series of Numerical Mathematics, chapter 27, pp. 175–182. Birkhäuser, Basel (1998) 21. Menold, P.H., Findeisen, R., Allgöwer, F.: Finite time convergent observers for linear timevarying systems. In: Proceedings of the 11th Mediterranean Conference on Control and Automation, pp. 74–78 (2003) 22. Molinari, B.P.: A strong controllability and observability in linear multivariable control. IEEE Trans. Autom. Control 21(5), 761–764 (1976) 23. Moreno, J.A.: Arbitrary-order fixed-time differentiators. IEEE Trans. Autom. Control 67(3), 1543–1549 (2022) 24. Nguyen, C.C.: Canonical transformation for a class of time-varying multivariable systems. Int. J. Control 43(4), 1061–1074 (1986) 25. Niederwieser, H., Koch, S., Reichhartinger, M.: A generalization of Ackermann’s formula for the design of continuous and discontinuous observers. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 6930–6935 (2019) 26. Poznyak, A.: Advanced Mathematical Tools for Control Engineers-Volume 1: Deterministic Techniques. Elsevier, Amsterdam-Boston (2008) 27. Ravi, R., Pascoal, A.M., Khargonekar, P.P.: Normalized coprime factorizations for linear timevarying systems. Syst. Control Lett. 18(6), 455–465 (1992) 28. Ríos, H., Kamal, S., Fridman, L.M., Zolghadri, A.: Fault tolerant control allocation via continuous integral sliding-modes: A hosm-observer approach. Automatica 51, 318–325 (2015) 29. Ríos, H., Teel, A.R.: A hybrid fixed-time observer for state estimation of linear systems. Automatica 87, 103–112 (2018) 30. Wilson, J.R.: Linear System Theory. Prentice Hall (1993) 31. Silverman, L.: Realization of linear dynamical systems. IEEE Trans. Autom. Control 16(6), 554–567 (1971) 32. Gary, L.T.: Aircraftstability and control data. Technical report, National Aeronautics and Space Administration (1969) 33. Tranninger, M., Zhuk, S., Steinberger, M., Fridman, L.M., Horn, M.: Sliding mode tangent space observer for LTV systems with unknown inputs. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 6760–6765 (2018) 34. Trentelman, H.L., Stoorvogel, A.A., Hautus, M.: Control Theory for Linear Systems. SpringerVerlag, London, Great Britain (2001) 35. Trumpf, J.: Observers for linear time-varying systems. 
Linear Algebra Appl. 425(2–3), 303– 312 (2007) 36. Willems, J.C.: Deterministic least squares filtering. J. Econ. 118(1–2), 341–373 (2004). Contributions to econometrics, time series analysis, and systems identification: a Festschrift in honor of Manfred Deistler

162

J. Dávila et al.

37. Wolovich, W.: On the stabilization of controllable systems. IEEE Trans. Autom. Control 13(5), 569–572 (1968) 38. Zattoni, E., Perdon, A.M., Conte, G.: Structural Methods in the Study of Complex Systems. Number 482 in Lecture Notes in Control and Information Science. Springer, Switzerland (2020) 39. Zhang, Q.: Adaptive observer for multiple-input-multiple-output (MIMO) linear time-varying systems. IEEE Trans. Autom. Control 47(3), 525–529 (2002) 40. Zhang, Q., Clavel, A.: Adaptive observer with exponential forgetting factor for linear time varying systems. In: Proc. of the 2001 American Control Conference, pp. 3886–3891. Orlando, Florida, USA (2001)

Discretization Issues

Effect of Euler Explicit and Implicit Time Discretizations on Variable-Structure Differentiators Mohammad Rasool Mojallizadeh and Bernard Brogliato

Abstract This chapter deals with the time discretization of variable-structure differentiators with discontinuous (i.e., set-valued) parts. The explicit and implicit timediscretization schemes are developed for a specific type of variable-structure differentiator, i.e., arbitrary-order super-twisting differentiator. The causal implementation and some properties of the implicit discretization are studied analytically, e.g., finitetime convergence, numerical chattering suppression, exactness, gain insensitivity, and accuracy. These properties are validated in the open-loop configuration using numerical simulations. Furthermore, a laboratory setup, namely, a rotary inverted pendulum available at the INRIA Centre at the University of Lille, has been considered as the benchmark to study the behavior of the discretization methods in closed-loop control systems in practice. The experimental results are in accordance with the ones obtained from the analytical calculations and numerical simulations.

1 Introduction In a typical control loop, a differentiator is often required for online differentiation. The differentiators are mostly designed and introduced in the continuous-time configuration. Assuming that f (t) and z(t) denote the input and the output of a differentiator, respectively, a pure differentiator (also called Euler differentiator) z = d f /dt might seem to be the first solution. However, in a real control loop, the signals are always contaminated by high-frequency measurement noise, and using a pure differentiator can significantly amplify the effect of the noise. To solve this issue, the pure differentiator may be employed along with a linear low-pass filter, and the resulting differentiator is usually called a linear filter (LF). However, the low-pass filter always imposes phase lag (negative phase) which deteriorates the performances of M. R. Mojallizadeh (B) · B. Brogliato Univ. Grenoble Alpes, INRIA, CNRS, Grenoble INP, LJK, Grenoble 38000, France e-mail: [email protected] B. Brogliato e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_7

165

166

M. R. Mojallizadeh and B. Brogliato

closed-loop systems. To solve the drawbacks of the aforementioned configurations, more advanced continuous-time differentiators, e.g., variable-structure (VS) differentiators, have been introduced in the Automatic Control literature. VS differentiators refer to a specific type of differentiator where discontinuous, set-valued functions are employed. The Slotine–Hedrick–Misawa differentiator (SHMD) [1] is the very first in this category to provide the exactness. Note that exactness denotes the ability of a differentiator to converge to the exact differentiation in the absence of noise. Subsequently, the super-twisting differentiator (STD) [2] has been introduced to achieve several useful properties, e.g., finite-time convergence, by keeping the homogeneity. Arbitrary-order STD (AO-STD) is another modification of the STD to calculate higher-order differentiations [3, 4]. Further studies in this topic mainly dealt with the improvement of the transient response using quadratic sliding surface [5, 6], and providing uniform convergence by adding high-degree terms to form the uniform robust exact differentiator (URED) [7]. In addition to the abovementioned VS differentiators with fixed structures, adaptive schemes have also been introduced to achieve both exactness and robustness to noise simultaneously. For instance, see adaptive VS differentiators with adaptive coefficients [8] or adaptive exponents [9–11]. Nowadays digital processors are widely used in control systems. To implement the continuous-time differentiators on digital processors, a time-discretization method has to be used to obtain their discrete-time form while keeping the continuous-time properties. In a general classification, the discretization methods are based on the explicit or implicit schemes. According to the literature, the explicit discretization is the mainstream in this topic. However, as reported in [12–15], the explicit discretization may suffer from numerical (digital) chattering.1 Alternatively, the implicit schemes can solve the problem of numerical chattering and provides some useful characteristics including finite-time convergence. It should be noted that other discretization methods can also be found in the literature which cannot be classified into the aforementioned categories. For instance, homogeneous differentiator (HD) [16, 17] uses the matching discretization approach. Homogeneous discrete-time differentiator (HDD) [18] is another special discretization approach to keep the homogeneity after discretization by adding high-degree terms of the Taylor expansion. HDD was further modified and termed generalized homogeneous discrete-time differentiator (GHDD) [19] to remove the effect of the discontinuous terms by adding discontinuous terms in the recursion. These discretization methods have already been studied by analytical calculations and numerical simulations [12, 13]. In the control community, the implicit discretization was firstly introduced for homogeneous systems and sliding-mode controllers [6, 20–27]. Implicit discretization of the sliding-mode observers was probably firstly introduced in [20]. The implicit discretization was further developed for the filtering differentiator [28] in

1

Numerical chattering refers to a specific kind of chattering which only appears due to the time discretization.

Effect of Euler Explicit and Implicit Time Discretizations …

167

[14, 15], as well as other kinds of differentiators, e.g., quadratic [6, 29], supertwisting [26], and SHMD [30, 31]. Semi-implicit discretization of the URED has been addressed in [32]. A numerical simulation comparative study has been conducted among different discretization schemes in [12]. Compared to [12], the contribution of this work is to focus on a specific type of variable-structure differentiators, i.e., AO-STD, and validate results with experiments in addition to the numerical simulations. To this end, the continuous-time AO-STD differentiator is introduced in Sect. 2. Subsequently, the discretization schemes, as well as their characteristics, are reminded in Sect. 3. Numerical simulations and experiments are given in Sect. 4 and Sect. 5, respectively. Finally, the conclusions are drawn in Sect. 6.

2 Continuous-Time AO-STD VS differentiators employ discontinuous (set-valued) terms in their structure to achieve the exactness. This work aims at providing a comparative analysis between the Euler explicit and the Euler implicit discretizations, compactly, without getting into different types of continuous-time differentiators. To this end, the effect of the discretization has been studied for a well-known VS continuous-time differentiators. According to [12, 13], the AO-STD (or the Levant’s) arbitrary-order differentiator, presented in (1), has one of the simplest structures in the continuous-time setting. Compared to the STD, this differentiator is able to estimate higher-order differentiation as well, which may be necessary for control applications. Moreover, because of its homogeneous continuous-time structure, even the Euler explicit discretization of the AO-STD provides a specific level of accuracy, which is not the case for some other arbitrary-order differentiators, e.g., SHMD. In addition, the AO-STD is one of the earliest VS differentiators which is familiar to the VS scientific community. Hence, the effects of the Euler explicit and implicit discretization schemes are only studied for the AO-STD in this chapter. Note that in addition to the Euler explicit and implicit discretization schemes, other types of discretizations can also be found in the literature, as explained in Sect. 1, which are avoided in this work for the sake of space and compactness of the results. Furthermore, theoretical review of the continuous-time differentiators and other types of discretization schemes has been already addressed in [12, 13] which can be used as further studies. The continuous-time form of the AO-STD is given as follows: 

ż_i(t) = −λ_i L^{(i+1)/(n+1)} |γ_0(t)|^{(n−i)/(n+1)} sgn(γ_0(t)) + z_{i+1}(t),  i = 0, …, n−1,
ż_n(t) ∈ −λ_n L sgn(γ_0(t)),    (1)

where γ_0(t) = z_0(t) − f(t) is the sliding variable. In this study, it is assumed that the base input signal f_0(t) is always polluted by a measurement noise η(t), i.e., f(t) = f_0(t) + η(t). Moreover, z_i, i = 0, …, n, is the estimate of the ith-order derivative of f_0(t)


and n is the order of the differentiator. AO-STD has n + 1 parameters, i.e., L and λi . To reduce the number of tunable parameters, the gains λi are chosen as predefined constants given in [4, 28, 33, 34]. In this case, the only tunable parameter is L. However, since the input is always polluted by noise, there is no straightforward way to tune L. Note that for n = 1, AO-STD is basically a first-order STD. Remark 1 The accuracy of the continuous-time AO-STD (1) in the presence of noise has been investigated in [2, 18].

3 Discretization Schemes To implement the continuous-time AO-STD (1) on a digital processor, a time-discretization method has to be employed. The effect of the time discretization on the overall behavior of the implementation is usually neglected in the literature, and the Euler explicit discretization is mostly used without addressing its drawbacks. Generally speaking, discretization schemes can be categorized into two main methods, namely explicit and implicit. As is well known, the explicit discretization of the ODE ẏ(t) = g(y(t)) leads to y_{k+1} = y_k + h g(y_k), with h > 0 the sampling period and k = t/h (the notation y_k indicates y(k)). Alternatively, the implicit discretization of the same ODE leads to y_{k+1} = y_k + h g(y_{k+1}) (see [12, 20–25]). These discretization methods are addressed in Sects. 3.1 and 3.2.

3.1 Explicit Discretization The explicit discretization is usually used to realize a simple implementation of the differentiators. The explicit AO-STD (E-AO-STD) corresponding to the continuoustime AO-STD (1) reads as (see [4, 18, 35, 36] for the corresponding theoretical results) 

z_{i,k+1} = −h λ_i L^{(i+1)/(n+1)} |γ_{0,k}|^{(n−i)/(n+1)} sgn(γ_{0,k}) + h z_{i+1,k} + z_{i,k},  i = 0, …, n−1,
z_{n,k+1} ∈ −h λ_n L sgn(γ_{0,k}) + z_{n,k}.    (2)

One can see that the explicit discretization is straightforward and can be implemented directly, which is not the case for the implicit discretization, as will be seen in Sect. 3.2.
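As an illustration, a minimal Python sketch of one update of the E-AO-STD (2) is given below. The names (e_ao_std_step, lam) and the assumption that the gains λ_i are the predefined constants mentioned in Sect. 2 are the author's choices for this sketch, not part of the original text.

```python
import numpy as np

def e_ao_std_step(z, f_meas, lam, L, h):
    """One explicit Euler step (2) of the AO-STD.

    z      : current estimates [z_0, ..., z_n]
    f_meas : measured (possibly noisy) sample f_k
    lam    : gains [lambda_0, ..., lambda_n]
    L      : Lipschitz-constant parameter, h : sampling period
    """
    n = len(z) - 1
    gamma = z[0] - f_meas                  # sliding variable gamma_{0,k}
    z_new = np.array(z, dtype=float)
    for i in range(n):
        z_new[i] = (z[i] + h * z[i + 1]
                    - h * lam[i] * L**((i + 1) / (n + 1))
                    * abs(gamma)**((n - i) / (n + 1)) * np.sign(gamma))
    z_new[n] = z[n] - h * lam[n] * L * np.sign(gamma)
    return z_new
```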


3.2 Implicit Discretization

Implicit discretization of the VS differentiators can be found in [12, 14, 15]. Following these references and from (1), the implicit AO-STD (I-AO-STD) reads

z_{i,k+1} = −h λ_i L^{(i+1)/(n+1)} |γ_{0,k+1}|^{(n−i)/(n+1)} sgn(γ_{0,k+1}) + h z_{i+1,k+1} + z_{i,k},  i = 0, …, n−1,    (3a)
z_{n,k+1} ∈ −h λ_n L sgn(γ_{0,k+1}) + z_{n,k}.    (3b)

It can be seen that the implicit discretization cannot be implemented directly, since the value of γ_{0,k+1} is required at time step k. This issue has been thoroughly addressed in [12, 13] by solving the generalized equation that appears when (3) is manipulated algebraically. Following these references, the I-AO-STD can be implemented according to the flowchart presented in Fig. 1. One can see that, for n > 1, one of the polynomial equations (4) and (5) has to be solved at each time step to advance the algorithm:

β_k + x^{n+1} + L h^{n+1} λ_n + Σ_{l=0}^{n−1} L^{(l+1)/(n+1)} h^{l+1} λ_l x^{n−l} = 0,    (4)

β_k − x^{n+1} − L h^{n+1} λ_n − Σ_{l=0}^{n−1} L^{(l+1)/(n+1)} h^{l+1} λ_l x^{n−l} = 0.    (5)

Remark 2 According to [12, 13], the implicit discretization of Fig. 1 provides several useful properties, e.g., chattering suppression, finite-time convergence, insensitivity to the parameters λ_i and L during the sliding phase, and causality (it is non-anticipative). Moreover, several comments and guidelines to tune the gain L of the I-AO-STD in the presence of noise have been presented in [12, 13].
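A minimal Python sketch of one I-AO-STD update, following the three cases of the flowchart in Fig. 1, is shown below. It is a sketch only: the scalar generalized equation is solved with numpy.roots rather than the Newton–Raphson iteration used in the text, and the non-negative real root is assumed to be the sought solution ω_k; the function and variable names are introduced here for illustration.

```python
import numpy as np

def i_ao_std_step(z, f_next, lam, L, h):
    """One implicit (I-AO-STD) step following the flowchart of Fig. 1."""
    z = np.asarray(z, dtype=float)
    n = len(z) - 1
    beta = f_next - sum(z[l] * h**l for l in range(n + 1))
    thr = L * h**(n + 1) * lam[n]

    def nonneg_real_root(const_term):
        # x^{n+1} + sum_{l=0}^{n-1} L^{(l+1)/(n+1)} h^{l+1} lam_l x^{n-l} + const_term = 0
        coeffs = ([1.0]
                  + [L**((l + 1) / (n + 1)) * h**(l + 1) * lam[l] for l in range(n)]
                  + [const_term])
        roots = np.roots(coeffs)
        real = roots[np.abs(roots.imag) < 1e-9].real
        return float(max(real.max(), 0.0)) if real.size else 0.0

    z_new = np.empty_like(z)
    if beta < -thr:                       # Case 1: root of (4)
        omega, sign = nonneg_real_root(beta + thr), -1.0
        z_new[n] = z[n] - h * lam[n] * L
    elif beta < thr:                      # Case 2: sliding phase, no root needed
        omega, sign = 0.0, 0.0
        z_new[n] = z[n] + beta / h**n
    else:                                 # Case 3: root of (5)
        omega, sign = nonneg_real_root(thr - beta), +1.0
        z_new[n] = z[n] + h * lam[n] * L
    for i in range(n - 1, -1, -1):        # z_{i,k+1} depends on z_{i+1,k+1}
        z_new[i] = (z[i] + h * z_new[i + 1]
                    + sign * L**((i + 1) / (n + 1)) * h * lam[i] * omega**(n - i))
    return z_new
```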

4 Open-Loop Numerical Simulations

The simulations have been conducted for a third-order AO-STD, i.e., n = 3, from the initial time t = 0 s to the final time t_f = 10 s with the sampling period h = 50 ms. The Newton–Raphson algorithm is used to calculate the roots of the polynomials appearing in the implicit algorithm at each time step (see Fig. 1).

[Fig. 1 Flowchart of the I-AO-STD (figure not reproduced). At each step, β_k = f_{k+1} − Σ_{l=0}^{n} z_{l,k} h^l is computed. Case 1 (β_k < −L h^{n+1} λ_n): the root ω_k of (4) is used in z_{i,k+1} = z_{i,k} + h z_{i+1,k+1} − L^{(i+1)/(n+1)} h λ_i ω_k^{n−i}, i = 0, …, n−1, with z_{n,k+1} = z_{n,k} − h λ_n L. Case 2 (−h^{n+1} λ_n L < β_k < h^{n+1} λ_n L): z_{i,k+1} = z_{i,k} + h z_{i+1,k+1}, i = 0, …, n−1, and z_{n,k+1} = z_{n,k} + β_k/h^n. Case 3 (otherwise): the root ω_k of (5) is used with the signs of the correction terms reversed and z_{n,k+1} = z_{n,k} + h λ_n L. The block D indicates a one-step delay.]

The responses of the AO-STD under different discretization schemes with a very large gain (L = 10^3) are presented in Fig. 2. The noise-free input signal f(t) = 5t is

considered in this simulation to ensure f̈(t) = 0. The first observation from Fig. 2a is that the explicit discretization of the AO-STD shows a significant amount of chattering under the oversized gain L = 10^3. On the other hand, as can be seen from Fig. 2b, the implicit discretization of the AO-STD converges to the exact differentiation and tracks the exact derivative without any numerical chattering, which indicates its insensitivity with respect to the gain. According to this observation, the I-AO-STD converges to the exact differentiation in finite time. Moreover, after reaching the sliding phase, i.e., γ_{0,k} = 0, the implicit differentiator keeps the exactness (tracking the exact derivative without numerical chattering). Based on the numerical simulations, while the explicit discretization is sensitive to the gains and shows numerical chattering under large gains, the implicit one is not. In fact, the implicit discretization of the AO-STD is insensitive to the gains during the sliding phase and converges to the exact differentiation without numerical chattering even under oversized gains. This fact has already been validated analytically (Sect. 5.5 in [12]) and numerically (Fig. 8 in [12]), and has also been observed in digital sliding-mode control [22, 23, 37].


Fig. 2 Output of the differentiators for the input f(t) = 5t without noise, under the oversized gain L = 10^3 and h = 50 ms
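A short open-loop driver in the spirit of this simulation is sketched below. It reuses the hypothetical step functions from the previous listings; the gain values lam are placeholders chosen for illustration and are not the gains used in the chapter.

```python
import numpy as np

n, h, L = 3, 0.05, 1e3
lam = [1.1, 1.5, 2.0, 3.0]          # placeholder gains, not the book's values
t = np.arange(0.0, 10.0, h)
f = 5.0 * t                          # noise-free ramp, so the second derivative is zero
z_exp = np.zeros(n + 1)
z_imp = np.zeros(n + 1)
for k in range(len(t) - 1):
    z_exp = e_ao_std_step(z_exp, f[k], lam, L, h)        # explicit uses f_k
    z_imp = i_ao_std_step(z_imp, f[k + 1], lam, L, h)    # implicit uses f_{k+1}
# In line with Fig. 2, z_exp[1] typically chatters around 5 for this oversized L,
# whereas z_imp[1] settles at 5 without numerical chattering.
```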

5 Practical Experiments

A rotary inverted pendulum system (RIPS) is used in this section to study the effect of the time discretization of the differentiators in practice. The setup is installed at the INRIA centre at the University of Lille and is accessible through a web interface (http://valse-pendulum.lille.inria.fr:5000/). The RIPS is a closed-loop control system with the controller in the loop. The design of the controller for this setup is addressed in Sect. 5.2. Since the controller is designed based on a model, a mathematical model of the RIPS is first developed in Sect. 5.1. The results of the experiments are presented in Sect. 5.3.

5.1 Mathematical Modeling of the RIPS

The configuration of the RIPS is given in Fig. 3. As can be seen, the system includes a pendulum link and a rotary link; the parameters corresponding to these two links are indicated by the subscripts p and r, respectively. The masses of the pendulum and rotary links are m_p = 0.024 kg and m_r = 0.095 kg, and their lengths are L_p = 0.129 m and L_r = 0.085 m. The moments of inertia of the pendulum and rotary links are J_p = m_p L_p^2/12 and J_r = m_r L_r^2/12, respectively. The angles of the pendulum and rotary links are denoted by α and θ, respectively, and the upward position of the pendulum link corresponds to α = 0. The Euler–Lagrange framework can be used to obtain the dynamic equations of the RIPS [38]. Following this reference, ignoring the actuator dynamics and linearizing around α = α̇ = θ = θ̇ = 0 leads to

ẋ = Ax + Bu,    (6a)
y = Cx + Du,    (6b)


Fig. 3 Scheme of the rotary inverted pendulum


where x = [θ, α, θ̇, α̇]^T is the state vector, τ = k_t (u − k_m θ̇)/R_m is the system input (the torque), y = [θ, α]^T is the vector of output variables, k_m = 0.042 V/(rad/s) is the back-emf constant, k_t = 0.042 Nm/A is the current–torque coefficient, and R_m = 8.4 Ω is the resistance of the motor winding. Moreover, u is the voltage applied to the motor and can be considered as the control input. The matrices A, B, C, D are as follows:

A = \begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & (L_p/2)^2 m_p^2 L_r g / J_T & -D_r (J_p + m_p (L_p/2)^2)/J_T & -m_p (L_p/2) L_r D_p / J_T \\
0 & (L_p/2) m_p g (J_r + m_p L_r^2)/J_T & -m_p (L_p/2) L_r D_r / J_T & -D_p (J_r + m_p L_r^2)/J_T
\end{bmatrix},    (7)

B = \begin{bmatrix} 0 & 0 & (J_p + m_p (L_p/2)^2)/J_T & m_p (L_p/2) L_r / J_T \end{bmatrix}^T, \quad
C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \quad
D = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.

Equation (7) can be modified as follows to take the dynamics of the actuator (DC motor and drive circuit) into account:

B = \frac{k_t}{R_m} B, \quad A(3,3) = A(3,3) - \frac{k_t^2}{R_m} B(3), \quad A(4,3) = A(4,3) - \frac{k_t^2}{R_m} B(4),    (8)
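The following Python sketch applies the actuator correction (8); it assumes my reading of (8), namely that the k_t^2/R_m factor multiplies the original entries B(3), B(4) (with k_t = k_m numerically) and that A, B are already filled with the numerical values of (7).

```python
import numpy as np

def add_actuator_dynamics(A, B, kt=0.042, km=0.042, Rm=8.4):
    # A, B: linearized matrices of (7); returns the modified (A, B) of (8)
    A = np.array(A, dtype=float)
    B = np.array(B, dtype=float).reshape(-1)
    A[2, 2] -= kt * km / Rm * B[2]   # A(3,3) in the 1-based notation of (8)
    A[3, 2] -= kt * km / Rm * B[3]   # A(4,3)
    B = kt / Rm * B                  # input becomes the motor voltage
    return A, B
```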


Fig. 4 Closed-loop diagram of the RIPS

where g = 9.81 m/s² denotes the gravitational acceleration. Note that all the viscous friction terms are neglected, i.e., D_p = 0 Nm/(rad/s) and D_r = 0 Nm/(rad/s).

5.2 Design of the Controller for the RIPS

The diagram of the closed-loop control system is shown in Fig. 4. The controller switches between a swing-up control u_s and a balancing control u_b. The purpose of the swing-up control is to bring the pendulum close to its unstable equilibrium point, i.e., |α| ≤ 20°. After that, the control switches to the balancing control, a linear quadratic regulator (LQR), to regulate the system at the unstable equilibrium point (upward position of the pendulum link, α = α̇ = 0). The switching law is

u = u_b if |α| ≤ 20°,  u = u_s otherwise.    (9)

Note that the control u is linearly projected onto [−10, +10] V by the laboratory setup to protect the motor, u_s is the swing-up control, and u_b is the balancing control. According to (9), the balancing control is active when |α| ≤ 20°. The controllers u_s and u_b are obtained in Sects. 5.2.1 and 5.2.2, respectively. The computer used to implement the algorithms has 16 GB of memory and an Intel Core i7 7700 processor.

The LQR is a well-known controller (see, e.g., [39]) that minimizes the objective function J = ∫ (x^T Q x + u^T R u) dt, where Q and R are weighting matrices. It can be designed with the MATLAB function G = lqr(sys,Q,R), where sys is the state-space representation of the plant and G is the computed control gain.
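A possible Python counterpart of the MATLAB call G = lqr(sys,Q,R) is sketched below with SciPy; A and B are the matrices of (6)–(8) (B as a column matrix) and the weighting matrices Q, R shown in the comment are purely illustrative, not the values used on the setup.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)   # continuous-time algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)     # G = R^{-1} B^T P

# Example (hypothetical weights): G = lqr_gain(A, B, np.diag([5, 1, 1, 1]), np.array([[1.0]]))
```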

5.2.1 Swing-up Control

The swing-up control is designed based on the kinetic and potential energy of the pendulum: the potential energy is maximal at the unstable equilibrium point, while the kinetic energy is zero there. The control law is

u_s = sat( sgn(α̇ cos α) μ (E_r − E), u_sm ),    (10)

where E is the total energy, μ = 50 m/s/J denotes a proportional gain, E_r = 40 mJ is the potential energy of the pendulum at its unstable equilibrium point, and u_sm = 10 V is the saturation voltage. The term sgn(α̇ cos α) is employed to select the sign of the control. Moreover, the sat function is defined as

sat(x, u_sm) = u_sm if x > u_sm;  x if |x| ≤ u_sm;  −u_sm if x < −u_sm.    (11)

The total energy of the pendulum is the sum of the potential (E_1) and kinetic (E_2) energies, defined as

E = E_1 + E_2,  E_1 = (1/2)(1 − cos α) m_p g L_p,  E_2 = (1/2) J_p α̇².    (12)
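A minimal sketch of the energy-based swing-up law (10)–(12) is given below; the parameter values are those quoted in the text, the states α and α̇ are assumed to be available, and the function name is introduced here for illustration.

```python
import numpy as np

m_p, L_p = 0.024, 0.129
J_p = m_p * L_p**2 / 12.0
g = 9.81

def swing_up(alpha, alpha_dot, mu=50.0, E_r=0.040, u_sm=10.0):
    E1 = 0.5 * (1.0 - np.cos(alpha)) * m_p * g * L_p   # potential energy, eq. (12)
    E2 = 0.5 * J_p * alpha_dot**2                       # kinetic energy, eq. (12)
    u = np.sign(alpha_dot * np.cos(alpha)) * mu * (E_r - (E1 + E2))
    return float(np.clip(u, -u_sm, u_sm))               # saturation (11)
```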

5.2.2 Balancing Control

As mentioned above, the LQR control u = G(x − x_r) is used for balancing when |α| ≤ 20°, with

G = [2  −35  1.5  −3].    (13)

Throughout the experiments, the reference trajectory x_r = [0.3 sin(t), 0, cos(t), 0]^T is considered. While the angles θ and α can be directly measured by the sensors available on the setup, their time derivatives are not. Hence, the AO-STD, implemented using the explicit and implicit discretizations, is used to estimate θ̇ and α̇.
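The overall switching controller (9) with the LQR gain (13) can be sketched as follows; the estimates θ̇ and α̇ are assumed to come from the discrete-time differentiators, swing_up is the hypothetical function from the previous sketch, and the sign convention follows the text (u = G(x − x_r)).

```python
import numpy as np

G = np.array([2.0, -35.0, 1.5, -3.0])                      # gain (13)

def control(t, theta, alpha, theta_dot, alpha_dot):
    x  = np.array([theta, alpha, theta_dot, alpha_dot])
    xr = np.array([0.3 * np.sin(t), 0.0, np.cos(t), 0.0])  # reference trajectory of the text
    if abs(alpha) <= np.deg2rad(20.0):                     # balancing region, eq. (9)
        u = G @ (x - xr)
    else:
        u = swing_up(alpha, alpha_dot)
    return float(np.clip(u, -10.0, 10.0))                  # setup projects u onto [-10, 10] V
```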

5.3 Results of the Experiments

The experiments are conducted remotely using the web interface developed at the INRIA centre at the University of Lille, France. To run an experiment, the customized controller and differentiator have to be written in a compatible MATLAB format and uploaded to the web interface. On receiving the code, the setup first runs a numerical simulation to protect itself from an unstable response. If the simulated response is stable, the experiment starts, and the results of both the simulation and the experiment are returned shortly afterwards.

Table 1  Parameters of the differentiators obtained from the tuning procedure for the input I1

  Method                        Parameters
  Euler (pure differentiator)   No parameter
  LF                            c = 184
  E-AO-STD (n = 1)              L = 80949
  I-AO-STD (n = 1)              L = 80841
  E-AO-STD (n = 2)              L = 15662
  I-AO-STD (n = 2)              L = 5412

5.3.1 Conditions of the Experiments

The sampling time h = 2 ms is used by default on this setup for both simulations and experiments. Studies show that there is no straightforward way to tune the parameters of the differentiators [12, 13]. In this study, the parameters are tuned in the open-loop configuration with the input I1: f(t) = 4 × 10^4 (t² + 2t + 3) with quantization accuracy 0.001; the corresponding parameters are listed in Table 1 (the differentiation toolbox developed in [13] is used for the parameter tuning). Such a signal changes rapidly, and therefore tuning the parameters of the differentiators on it leads to differentiators with a large enough bandwidth; note that the RIPS exhibits fast dynamics. Moreover, measurement noise is barely observed in this system, since the sensors used to measure the angles are digital shaft encoders. In addition, for each simulation/experiment, if the tracking error is larger than π/4 for t > 3 s, the simulation is stopped without performing the experiment, to protect the setup from unexpected behavior. As a result, the transient response must be fast enough that α ∈ [−π/4, π/4] rad after t = 3 s; otherwise, the simulation stops without further experimentation and the scheme is reported as impracticable.

Unlike numerical simulations, practical experiments do not give unique results, because of the stochastic nature of the process and measurement noise. For example, the cables connecting the computer to the sensors can behave as a spring whose position differs between runs, affecting each experiment differently. Furthermore, during the swing-up, a small perturbation can affect the system trajectory significantly. To handle this issue, each experiment has been conducted ten times and the averaged results are given in Table 3. With the tuned parameters listed in Table 1, the E-AO-STD always leads to impracticability, since for these large gains the explicit method exhibits too much chattering.


Table 2  Parameters of the differentiators obtained from the tuning procedure for the input I2

  Method                        Parameters
  Euler (pure differentiator)   No parameter
  LF                            c = 109
  E-AO-STD (n = 1)              L = 1040
  I-AO-STD (n = 1)              L = 949
  E-AO-STD (n = 2)              L = 4625
  I-AO-STD (n = 2)              L = 4695

Table 3  Average values for 10 experiments when the parameters are tuned for I1 (parameters are shown in Table 1)

  Method             Simulations                                          Experiments
                     ||eθ||   ||eα||   ||eθ||∞   ||eα||∞   ||u||∞         ||eθ||   ||eα||   ||eθ||∞   ||eα||∞   ||u||∞
  Euler              0.1761   0.0097   0.2975    0.0082    0.0584         0.2725   0.0325   0.1161    0.0221    13.4087
  LF                 0.1839   0.0098   0.3167    0.0085    0.0583         0.2665   0.0330   0.1182    0.0153    2.1529
  I-AO-STD (n = 1)   0.1761   0.0097   0.2975    0.0082    0.0584         0.2715   0.0315   0.1151    0.0184    12.0016
  I-AO-STD (n = 2)   0.1737   0.0097   0.2920    0.0081    0.0584         0.3030   0.0340   0.1377    0.0156    3.4482
  E-AO-STD (n = 1)   Impracticable
  E-AO-STD (n = 2)   Impracticable

This indicates that it is difficult for the explicit discretization to achieve both a large bandwidth and small chattering at the same time.

5.3.2 Effect of the Parameter Tuning on the Results

In the previous case, the parameters of the differentiators were tuned based on the input I1: f(t) = 4 × 10^4 (t² + 2t + 3) to ensure a high bandwidth. Now, the differentiators are tuned for another input to study their gain sensitivity. In this case, the input I2: f(t) = 1000 sin(200t + π/4) with quantization (resolution 0.001) is used for the parameter tuning. The resulting parameters are listed in Table 2, and the corresponding results are presented in Table 4. An important observation from Table 4 is that all higher-order differentiators, implicit and explicit alike, are now impracticable (compare with Table 3, where the I-AO-STD is practicable for both n = 1 and n = 2). In fact, as reported in [12], a higher-order differentiator shows a longer transient response than a first-order one. As a result, the tracking error is not small enough for t > 3 s with the selected parameters, leading to impracticability. Comparing the results of both cases in Tables 3 and 4, it can be seen that the LF yields one of the worst results compared to the other methods.


Table 4  Average values for 10 experiments when the parameters are tuned for I2 (parameters are shown in Table 2)

  Method             Simulations                                          Experiments
                     ||eθ||   ||eα||   ||eθ||∞   ||eα||∞   ||u||∞         ||eθ||   ||eα||   ||eθ||∞   ||eα||∞   ||u||∞
  Euler              0.1761   0.0097   0.2975    0.0082    0.0584         0.2725   0.0325   0.1161    0.0221    13.4087
  LF                 0.1889   0.0098   0.3282    0.0086    0.0582         0.2652   0.0323   0.1177    0.0147    1.5955
  I-AO-STD (n = 1)   0.1761   0.0097   0.2975    0.0082    0.0584         0.5135   0.0384   0.2179    0.0276    10.2306
  E-AO-STD (n = 1)   Impracticable
  I-AO-STD (n = 2)   Impracticable
  E-AO-STD (n = 2)   Impracticable

Remark 3 As reported in [12, 13], there is no systematic way to tune the parameters of the differentiators because of their nonlinear structure. The experimental results presented in this work show the importance of the parameter tuning and its effect on the results. Hence, developing a parameter-tuning scheme for the differentiators appears to be a crucial topic for future research.

In general, the results provided in Tables 3 and 4 can be summarized as follows:

• Euler: The simulation results are in accordance with the experimental ones, except for ||u||∞, where the experimental values are higher than the simulated ones. Recall that the Euler differentiator is a pure differentiation method with unlimited bandwidth; hence, a small amount of noise can affect the control signal significantly.
• LF: The simulation results follow the same pattern as the experimental ones (see Tables 3 and 4).
• I-AO-STD: According to Table 3, this differentiator leads to practicable results for the parameters listed in Table 1. However, for the smaller gains shown in Table 2, the I-AO-STD is impracticable for n = 2. Moreover, considering Tables 3 and 4, the simulations and experiments are quite similar for this differentiator.
• E-AO-STD: No results are presented for this differentiator, since it is always impracticable on this laboratory setup. The reason is that, to achieve a large enough bandwidth, the gains have to be increased; with such large gains, the explicit discretization shows too much chattering, leading to impracticability.

5.3.3 Results Obtained for the Pendulum System

The following conclusions are drawn from the experiments conducted in Sect. 5:

• Compared to the implicit methods, the explicit VS differentiators show too much chattering, which may lead to impracticability on the RIPS.
• Solver-based implicit VS differentiators (I-AO-STD with n > 1) can also be implemented in real time, since they led to practicability in Table 3. This means that the calculation resources available on the RIPS are sufficient to implement them.


• As can be seen from Table 4, while the I-AO-STD with n = 1 led to appropriate responses, its higher-order counterpart (n > 1) sometimes leads to impracticability, since a higher-order differentiator generally shows a slower transient response.
• According to Table 3, a higher-order differentiator may nevertheless lead to a better response (considering ||u||∞ and ||eα||∞) than a first-order one (compare the results of the I-AO-STD for n = 1 and n = 2 in Table 3).

In addition to the experimental results presented for the RIPS in this work, the effect of the time discretization of the differentiators has been studied for another laboratory setup, an electro-pneumatic system (EPS), in [40]. Compared to the RIPS, the EPS presents slower dynamic responses with a higher amount of measurement noise. For such a system, smaller gains should be selected for the differentiators to avoid noise amplification. In that case, almost all the explicit and implicit differentiators, first- and higher-order ones alike, are practicable, and the results obtained for the explicit and implicit differentiators look identical for small sampling times. However, for larger sampling times, the implicit methods outperform the explicit ones as they lead to smaller chattering (recall that the sampling time is fixed on the RIPS, hence it could not be used as a design parameter for performance optimization).

6 Conclusions A well-known variable-structure differentiator, i.e., arbitrary-order super-twisting differentiator, has been considered in this study, and two different time-discretization schemes have been introduced for it. The first discretization scheme was based on the Euler explicit discretization scheme while the other one was based on the implicit scheme. The properties of the implicit scheme have been briefly reviewed and it is shown that the implicit discretization can provide several useful properties including gain insensitivity, numerical chattering suppression, and finite-time convergence, which are not obtained with the explicit scheme. Furthermore, numerical simulations in an open-loop setting have been used to validate the analytical results. A rotary inverted pendulum setup has been considered as the case study and a set of experiments has been conducted. It is shown that the implicit discretization schemes can also be implemented in real time. Moreover, the gain insensitivity and numerical chattering alleviation properties have also been validated for the implicit methods in practical experiments. Acknowledgements This work was supported by the project Digitslid (Différentiateurs et commandes homogènes par modes glissants multivalués en temps discret: l’approche implicite), ANR18-CE40-0008-01. Authors are indebted to Andrey Polyakov from INRIA, Lille and Yannick Aoustin from École Centrale de Nantes for useful discussions and help on the experiments.


References 1. Slotine, J.-J.E., Hedrick, J.K., Misawa, E.A.: On sliding observers for nonlinear systems. J. Dyn. Syst. Measur. Control 109(3), 245–252 (1987) 2. Levant, A.: Robust exact differentiation via sliding mode technique. Automatica 34(3), 379– 384 (1998) 3. Levant, A.: Higher order sliding: differentiation and black-box control. In: Proceedings of the 39th IEEE Conference on Decision and Control, vol. 2, pp. 1703–1708 (2000) 4. Levant, A.: Higher-order sliding modes, differentiation and output-feedback control. Int. J. Control 76(9–10), 924–941 (2003) 5. Emaru, T., Tsuchiya, T.: Research on estimating the smoothed value and the differential value of the distance measured by an ultrasonic wave sensor. In: Proceedings. 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 1291–1297 (2000) 6. Jin, S., Kikuuwe, R., Yamamoto, M.: Real-time quadratic sliding mode filter for removing noise. Adv. Robot. 26(8–9), 877–896 (2012) 7. Cruz-Zavala, E., Moreno, J.A., Fridman, L.M.: Uniform robust exact differentiator. IEEE Trans. Autom. Control 56(11), 2727–2733 (2011) 8. Reichhartinger, M., Spurgeon, S.: An arbitrary-order differentiator design paradigm with adaptive gains. Int. J. Control 91(9), 2028–2042 (2018) 9. Ghanes, M., Barbot, J.P., Fridman, L., Levant, A.: A novel differentiator: A compromise between super twisting and linear algorithms. In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC), pp. 5415–5419. Melbourne, Australia (2017) 10. Ghanes, M., Barbot, J.P., Fridman, L., Levant, A.: A second order sliding mode differentiator with a variable exponent. In: 2017 American Control Conference (ACC), pp. 3300–3305. Seattle, USA (2017) 11. Ghanes, M., Barbot, J.P., Fridman, L., Levant, A., Boisliveau, R.: A new varying gain exponent based differentiator/observer: an efficient balance between linear and sliding-mode algorithms. In: IEEE Transactions on Automatic Control, pp. 1–1 (2020) 12. Mojallizadeh, M.R., Brogliato, B., Acary, V.: Time-discretizations of differentiators: Design of implicit algorithms and comparative analysis. Int. J. Robust Nonlinear Control 31(16), 7679– 7723 (2021) 13. Mojallizadeh, M.R., Brogliato, B., Acary, V.: Discrete-time differentiators: design and comparative analysis. Tech. rep., INRIA Grenoble-Alpes https://hal.inria.fr/hal-02960923 (2020) 14. Carvajal-Rubio, J.E., Sánchez-Torres, J.D., Defoort, M., Djemai, M., Loukianov, A.G.: Implicit and explicit discrete-time realizations of homogeneous differentiators. Int. J. Robust Nonlinear Control 31(9), 3606–3630 (2021) 15. Carvajal-Rubio, J.E., Loukianov, A.G., Sánchez-Torres, J.D., Defoort, M.: On the discretization of a class of homogeneous differentiators. In: 2019 16th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), pp. 1–6. Mexico City (2019). https://doi.org/10.1109/ICEEE.2019.8884567 16. Koch, S., Reichhartinger, M.: Discrete-time equivalent homogeneous differentiators. In: 2018 15th International Workshop on Variable Structure Systems (VSS), pp. 354–359 (2018) 17. Reichhartinger, M., Koch, S., Niederwieser, H., Spurgeon, S.K.: The robust exact differentiator toolbox: Improved discrete-time realization. In: 2018 15th International Workshop on Variable Structure Systems (VSS), pp. 1–6 (2018) 18. Livne, M., Levant, A.: Proper discretization of homogeneous differentiators. Automatica 50(8), 2007–2014 (2014) 19. 
Koch, S., Reichhartinger, M., Horn, M., Fridman, L.: Discrete-time implementation of homogeneous differentiators. IEEE Trans. Autom. Control 65(2), 757–762 (2020) 20. Acary, V., Brogliato, B., Orlov, Y.V.: Chattering-free digital sliding-mode control with state observer and disturbance rejection. IEEE Trans. Autom. Control 57(5), 1087–1101 (2012) 21. Brogliato, B., Polyakov, A., Efimov, D.: The implicit discretization of the supertwisting slidingmode control algorithm. IEEE Trans. Autom. Control 65(8), 3707–3713 (2020)


22. Huber, O., Acary, V., Brogliato, B.: Lyapunov stability and performance analysis of the implicit discrete sliding mode control. IEEE Trans. Autom. Control 61(10), 3016–3030 (2016) 23. Huber, O., Acary, V., Brogliato, B., Plestan, F.: Implicit discrete-time twisting controller without numerical chattering: Analysis and experimental results. Control Eng. Pract. 46, 129–141 (2016) 24. Huber, O., Acary, V., Brogliato, B.: Lyapunov stability analysis of the implicit discrete-time twisting control algorithm. IEEE Trans. Autom. Control 65(6), 2619–2626 (2020) 25. Huber, O., Oza, H.B.: Implicit numerical integration for the simulation and control of a nonsmooth system with resets. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 6551–6556. Las Vegas, USA (2016) 26. Xiong, X., Kikuuwe, R., Yamamoto, M.: Backward-Euler discretization of second-order sliding mode control and super-twisting observer for accurate position control. In: ASME 2013 Dynamic Systems and Control Conference, pp. 1–8. Palo Alto, USA (2013) 27. Brogliato, B., Polyakov, A.: Digital implementation of sliding-mode control via the implicit method: A tutorial. Int. J. Robust Nonlinear Control 31(9), 3528–3586 (2021) 28. Levant, A., Livne, M.: Robust exact filtering differentiators. In: European Journal of Control, vol. In Press (2019) 29. Lv, Z., Jin, S., Xiong, X., Yu, J.: A new quick-response sliding mode tracking differentiator with its chattering-free discrete-time implementation. IEEE Access 7, 130236–130245 (2019) 30. Kikuuwe, R., Pasaribu, R., Byun, G.: A first-order differentiator with first-order sliding mode filtering. IFAC-PapersOnLine 52(16), 771–776 (2019) 31. Byun, G., Kikuuwe, R.: An improved sliding mode differentiator combined with sliding mode filter for estimating first and second-order derivatives of noisy signals. Int. J. Control Autom, Syst (2020) 32. Wetzlinger, M., Reichhartinger, M., Horn, M., Fridman, L., Moreno, J.A.: Semi-implicit discretization of the uniform robust exact differentiator. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 5995–6000 (2019) 33. Levant, A.: Filtering differentiators and observers. In: 2018 15th International Workshop on Variable Structure Systems (VSS), pp. 174–179 (2018) 34. Levant, A.: Homogeneous filtering and differentiation based on sliding modes. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 6013–6018. Nice, France (2019) 35. Barbot, J.-P., Levant, A., Livne, M., Lunz, D.: Discrete differentiators based on sliding modes. Automatica 112, 108633 (2020) 36. Levant, A., Livne, M.: Exact differentiation of signals with unbounded higher derivatives. IEEE Trans. Autom. Control 57(4), 1076–1080 (2012) 37. Huber, O., Brogliato, B., Acary, V., Boubakir, A., Plestan, F., Wang, B.: Experimental results on implicit and explicit time-discretization of equivalent control-based sliding mode control. In: Fridman, L., Barbot, J., Plestan, F (eds.) Recent Trends in Sliding Mode Control, pp. 207–235. Institution of Engineering and Technology (2016) 38. Yang, X., Zheng, X.: Swing-up and stabilization control design for an underactuated rotary inverted pendulum system: Theory and experiments. IEEE Trans. Ind. Electron. 65(9), 7229– 7238 (2018) 39. Kirk, D.E.: Optimal Control Theory: An Introduction. Courier Corporation (2004) 40. 
Mojallizadeh, M.R., Brogliato, B., Polyakov, A., Selvarajan, S., Michel, L., Plestan, F., Ghanes, M., Barbot, J.-P., Aoustin, Y.: Discrete-time differentiators in closed-loop control systems: experiments on electro-pneumatic system and rotary inverted pendulum. Research report, INRIA Grenoble, Feb (2022)

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit

Maximilian Rüdiger-Wetzlinger, Markus Reichhartinger, and Martin Horn

Abstract This chapter proposes a discrete-time differentiation algorithm of arbitrary order inspired by the uniform robust exact differentiator [1] and the generalized differentiator with negative homogeneity degree [2]. As the well-known explicit Euler method is not suitable for discretizing algorithms with the fixed-time convergence property, a semi-implicit discretization and a stable explicit Euler discretization approach are proposed. It is proven that the proposed discrete-time algorithms are globally asymptotically stable in the unperturbed case for arbitrary order and converge to an attractive invariant set around the origin in the perturbed case. Furthermore the performance of the proposed algorithm is evaluated via simulation studies.

1 Introduction

In modern control engineering, discretizing the obtained continuous-time algorithms is an essential step of the design procedure. The simplest way of discretizing is the use of the explicit Euler method. If the continuous-time algorithm has a discontinuous right-hand side, applying the explicit Euler method can lead to discretization chattering [3–5] or, in some cases, even to unstable discrete-time systems [6]. This problem can be overcome by using an implicit discretization approach, i.e., exact convergence in the unperturbed case can be preserved. The appealing properties of the implicit approach in contrast to the explicit approach have been shown analytically [7, 8] and in real-world experiments [9]. Furthermore, the consistent discretization proposed by the authors of [10] should be mentioned, as it offers an

181

182

M. Rüdiger-Wetzlinger et al.

appealing discretization method for homogeneous systems with negative or positive homogeneity degree.

This chapter offers an alternative discretization approach for a class of continuous-time algorithms. The aforementioned articles all directly discretize the continuous-time algorithm (with the explicit or implicit Euler approach). The approach proposed in the following instead first discretizes the signal model (i.e., the integrator chain); afterward, the correction term is designed such that the pseudo-linear discrete-time system possesses eigenvalues which are the discrete-time equivalents of the eigenvalues of the pseudo-linear continuous-time system. The discretization approaches proposed in this chapter need much lower computational effort than the implicit method, while discretization chattering is also avoided. Furthermore, the proposed discretization schemes can also be applied to algorithms with fixed-time convergence without losing the stability property.

In Sect. 3, two discretization approaches, the semi-implicit eigenvalue mapping and the stable explicit Euler discretization, are proposed. Both methods show the highly desired property that the corresponding discretized algorithms do not exhibit any discretization chattering and are globally asymptotically stable. Furthermore, they are also applicable to continuous-time algorithms with fixed-time convergence. In [11], a comparative analysis of numerous discrete-time differentiators was carried out. Among others, the authors compared different discretized versions of the uniform robust exact differentiator. They came to the conclusion that the semi-implicit mapped uniform robust exact differentiator proposed in [12] shows a calculation time similar to the explicit Euler discretized uniform robust exact differentiator, whereas an implicitly discretized version of the URED shows an approximately 20 times larger calculation time. The actual calculation times depend on whether there is noise and on the noise level. Hence, the discretization approaches proposed in this work lead to discrete-time differentiators with much smaller calculation times than their implicitly discretized equivalents, while showing no discretization chattering and yielding globally asymptotically stable algorithms.

In this chapter, two different methodologies for obtaining a discrete-time algorithm are discussed: (I) discretization of the signal model and design of an observer for the discretized model, and (II) direct discretization of the continuous-time algorithm. A general proof of global asymptotic stability in the unperturbed case is given. It is shown that, in the presence of bounded perturbations, there exists an attractive invariant set around the origin of the differentiator error states. The work presented in this chapter is a generalization of the differentiator proposed in [13], and its stability proof is based on the ideas used in [14].


Fig. 1 Signal model as a chain of integrators

2 Differentiator Design via State-Dependent Eigenvalue Placement

Basically, all differentiators can be seen as observers for a given linear signal model. In the case of the already existing sliding-mode inspired differentiators [1, 2, 15], the nonlinearities (the discontinuities, respectively) can also be introduced into the system via state-dependent eigenvalue placement. The corresponding signal model is depicted in Fig. 1 as a chain of n + 1 integrators with the unknown input f^{(n+1)}(t) and the system output f̃(t). If the system is noise-free, then f̃(t) = f(t); otherwise there is some additive noise η(t), i.e., f̃(t) = f(t) + η(t). The signal model can be represented as the linear state model

ḟ = A f + b f^{(n+1)},    (1)
f̃ = c^T f + d η,    (2)

where f = [f, ḟ, …, f^{(n)}]^T and the dynamic matrix

A = \begin{bmatrix} 0_{n×1} & I_{n×n} \\ 0 & 0_{1×n} \end{bmatrix}    (3)

is an upper shift matrix, b = [0, …, 0, 1]^T, c^T = [1, 0, …, 0], and d is either zero for a noise-free signal or one for a noisy signal. By implementing the observer, i.e., the differentiator algorithm, with the observer states z = [z_0, z_1, …, z_n]^T as

ż = A z + l(σ_0) σ_0    (4)

with σ_0 = z_0 − f and the state-dependent correction term l(σ_0) = [φ_0(σ_0), …, φ_n(σ_0)]^T, the desired state-dependent eigenvalues λ(σ_0) can be imposed upon the state-dependent dynamic matrix M(σ_0) of the observer (differentiator) error state model


σ̇ = M(σ_0) σ + [0, …, 0, −1]^T f^{(n+1)},    (5)

M(σ_0) = \begin{bmatrix}
φ_0(σ_0) & 1 & 0 & \cdots & 0 \\
φ_1(σ_0) & 0 & 1 & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 \\
φ_{n−1}(σ_0) & 0 & \cdots & 0 & 1 \\
φ_n(σ_0) & 0 & \cdots & \cdots & 0
\end{bmatrix}.    (6)

Table 1  Existing homogeneous differentiators and their corresponding eigenvalues

  Algorithm                                       Eigenvalues
  Robust exact differentiator [15]                (8a)
  Differentiator with homogeneity degree δ [2]    (8b)
  Uniform robust exact differentiator [1]         (8c)

This concept is also applicable to the aforementioned existing differentiator algorithms if the differentiator parameters k_i are chosen according to

s^{n+1} + k_0 s^n + k_1 s^{n−1} + ⋯ + k_{n−1} s + k_n = (s + r)^{n+1},    (7)

where r can be seen as a convergence tuning factor of the algorithms, i.e., a larger parameter r results in faster convergence and in more robustness against perturbations. The resulting state-dependent eigenvalues of the algorithms are listed in Table 1 and Eqs. (8a)–(8c):

λ = −r \frac{1}{|σ_0|^{\frac{1}{n+1}}},    (8a)

λ = −r \frac{1}{|σ_0|^{\frac{−δ}{1−nδ}}},    (8b)

λ_{1,2} = −r \left( \frac{1 + μ|σ_0|}{|σ_0|^{\frac{1}{2}}} \pm \frac{1}{\sqrt{2}} \sqrt{\frac{1 − μ^2 σ_0^2}{|σ_0|}} \right).    (8c)

Note that, as the eigenvalues are state-dependent, they cannot be used to prove stability. However, they provide an easy-to-apply design methodology and can be used to obtain an intuitive idea of the convergence properties of the system.
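As a small numerical illustration of the gain choice (7) and of the eigenvalue expression (8a) for the robust exact differentiator, the sketch below computes the k_i as the coefficients of (s + r)^{n+1}; the function names and the chosen n, r are illustrative only.

```python
import numpy as np

def gains_from_eq7(n, r):
    # k_0, ..., k_n are the coefficients of (s + r)^{n+1} without the leading 1, cf. (7)
    return np.poly(-r * np.ones(n + 1))[1:]

def red_eigenvalue(sigma0, r, n):
    # state-dependent eigenvalue (8a) of the robust exact differentiator
    return -r / abs(sigma0)**(1.0 / (n + 1))

# e.g. gains_from_eq7(1, 1.0) returns [2., 1.], i.e. k0 = 2r and k1 = r^2 for r = 1
```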


2.1 Arbitrary-Order Differentiator Homogeneous in the Bi-Limit

Homogeneous (or weighted homogeneous) systems have some very appealing stability properties [16, 17]. Homogeneity in the bi-limit for functions and continuous vector fields [18–24] and for differential inclusions [25] can be seen as an extension of the classical definition of homogeneity. In the following, the different definitions and the corresponding stability properties are summarized.

Remark: In the following, dynamical systems with continuous right-hand side are represented as

ẋ = f(x),    (9)

while dynamical systems with discontinuous right-hand side (differential inclusions) are represented as

ẋ = F(x).    (10)

Definition: Classical Homogeneity
Assume the positive integer n and the vector field f: R^n → R^n. The vector field is said to be homogeneous of degree δ ∈ R [18, 26] if, for all λ > 0,

f(λx) = λ^δ f(x)    (11)

holds. Furthermore, a system ẋ = f(x) is said to be homogeneous of degree δ if the vector field f(x) is homogeneous of degree δ. The concept of homogeneity also holds for differential inclusions: the system ẋ = F(x) is homogeneous of degree δ if F(x) is homogeneous of degree δ.

Definition: Weighted Homogeneity
Given the generalized weights r = [r_1, …, r_n]^T with r_i > 0 and the dilation matrix

Λ^r_λ = diag(λ^{r_1}, λ^{r_2}, …, λ^{r_n}).    (12)

The system ẋ = f(x) (the DI ẋ = F(x)) is said to be r-homogeneous of degree δ iff, for all λ > 0,

f(Λ^r_λ x) = λ^δ Λ^r_λ f(x)    (13)

and

F(Λ^r_λ x) = λ^δ Λ^r_λ F(x),    (14)

respectively, holds.


Homogeneous dynamical systems (9) and (10) have the following useful and important properties:

• If the equilibrium point x_e = 0 is locally attractive, i.e., there exists a positive constant r > 0 with ||x(0)|| < r such that every solution x(t) of system (9) or system (10) satisfies

lim_{t→∞} x(t) = x_e,    (15)

then the equilibrium point x_e of system (9) or (10) is globally asymptotically stable.
• If x_e is asymptotically stable and the homogeneity degree of the system is δ < 0, then the equilibrium point x_e is finite-time stable, i.e., there exists T < ∞ such that x(t) = 0 for all t ≥ T.
• If x_e is asymptotically stable and the homogeneity degree of the system is δ = 0, then the equilibrium point is exponentially stable (e.g., linear time-invariant systems).

Definition: Homogeneity in the 0-limit

• A function f: R^n → R is said to be homogeneous in the 0-limit with the weights r_0 = [r_{01}, …, r_{0n}]^T with r_{0i} > 0, the homogeneity degree δ_0, and the homogeneous approximating function f_0, if f and f_0 are continuous, f_0 is not identically zero, and for each compact set C ⊂ R^n \ {0} and each ε > 0 there exists λ_0 such that

max_{x∈C} \left| \frac{f(Λ^{r_0}_λ x)}{λ^{δ_0}} − f_0(x) \right| ≤ ε,  ∀ λ ∈ (0, λ_0].    (16)

• A system ẋ = f(x) with f(x) = [f_1(x), …, f_n(x)]^T is said to be homogeneous in the 0-limit with the weights r_0 = [r_{01}, …, r_{0n}]^T with r_{0i} > 0, the homogeneity degree δ_0, and the homogeneous approximating vector field f_0 = [f_{01}, …, f_{0n}]^T, if for all i ∈ [1, 2, …, n] the condition δ_0 + r_{0i} ≥ 0 holds and the corresponding functions f_i(x) of the vector field f(x) are homogeneous in the 0-limit with weights r_{0i}, homogeneity degrees δ_0 + r_{0i}, and homogeneous approximating functions f_{0i}.
• A system ẋ = F(x) is said to be homogeneous in the 0-limit with the weights r_0 = [r_{01}, …, r_{0n}]^T with r_{0i} > 0, homogeneity degree δ_0, and approximating set-valued vector field F_0(x), if F(x) and F_0(x) are upper semi-continuous and non-empty sets, and for each compact set C ⊂ R^n \ {0} and each ε > 0 there exists a positive λ_0 such that

max_{x∈C} d_H\!\left( λ^{−δ_0} (Λ^{r_0}_λ)^{−1} F(Λ^{r_0}_λ x),\; F_0(x) \right) ≤ ε,  ∀ λ ∈ (0, λ_0]    (17)

holds. The operator d_H(A, B) denotes the Hausdorff distance of two non-empty sets A and B in R^n.


Definition: Homogeneity in the ∞-limit

• A function f: R^n → R is said to be homogeneous in the ∞-limit with the weights r_∞ = [r_{∞1}, …, r_{∞n}]^T with r_{∞i} > 0, the homogeneity degree δ_∞, and the homogeneous approximating function f_∞, if f and f_∞ are continuous, f_∞ is not identically zero, and for each compact set C ⊂ R^n \ {0} and each ε > 0 there exists λ_∞ such that

max_{x∈C} \left| \frac{f(Λ^{r_∞}_λ x)}{λ^{δ_∞}} − f_∞(x) \right| ≤ ε,  ∀ λ ≥ λ_∞.    (18)

• A system ẋ = f(x) with f(x) = [f_1(x), …, f_n(x)]^T is said to be homogeneous in the ∞-limit with the weights r_∞ = [r_{∞1}, …, r_{∞n}]^T with r_{∞i} > 0, the homogeneity degree δ_∞, and the homogeneous approximating vector field f_∞ = [f_{∞1}, …, f_{∞n}]^T, if for all i ∈ [1, 2, …, n] the condition δ_∞ + r_{∞i} ≥ 0 holds and the corresponding functions f_i(x) of the vector field f(x) are homogeneous in the ∞-limit with weights r_{∞i}, homogeneity degrees δ_∞ + r_{∞i}, and homogeneous approximating functions f_{∞i}.
• A system ẋ = F(x) is said to be homogeneous in the ∞-limit with the weights r_∞ = [r_{∞1}, …, r_{∞n}]^T with r_{∞i} > 0, homogeneity degree δ_∞, and approximating set-valued vector field F_∞(x), if F(x) and F_∞(x) are upper semi-continuous and non-empty sets, and for each compact set C ⊂ R^n \ {0} and each ε > 0 there exists a positive λ_∞ such that

max_{x∈C} d_H\!\left( λ^{−δ_∞} (Λ^{r_∞}_λ)^{−1} F(Λ^{r_∞}_λ x),\; F_∞(x) \right) ≤ ε,  ∀ λ ≥ λ_∞    (19)

holds.

Definition: Homogeneity in the bi-limit
A function f: R^n → R, respectively a system ẋ = f(x) (ẋ = F(x)), is said to be homogeneous in the bi-limit, in the following also called bi-homogeneous, if it is both homogeneous in the 0-limit and in the ∞-limit. Some important properties of systems (9) which are homogeneous in the bi-limit are stated in [19]. In [25] this concept was extended to differential inclusions ẋ = F(x). The main statement says that the type of convergence of the differential inclusion is determined by its homogeneous approximations F_0(x) and F_∞(x) and their homogeneity degrees δ_0 and δ_∞.

Necessary conditions for fixed-time convergence: Assume a system ẋ = F(x) with the set-valued vector field F(x). The system is fixed-time stable, i.e., there exists an upper bound T̄ of the convergence time, T_c ≤ T̄ with x(t) = 0 for t ≥ T_c, independent of the initial system states x(t = 0) = x_0, if the following properties hold:

• The system ẋ = F(x) is homogeneous in the 0-limit with approximating function F_0(x), weights r_0, and negative homogeneity degree δ_0 < 0.


• The system ẋ = F(x) is homogeneous in the ∞-limit with approximating function F_∞(x), weights r_∞, and positive homogeneity degree δ_∞ > 0.
• F(x), F_0(x), and F_∞(x) are upper semi-continuous.
• The systems ẋ = F(x), ẋ = F_0(x), and ẋ = F_∞(x) are asymptotically stable.

The considerations of Sect. 2 lead to the differentiator homogeneous in the bi-limit proposed in this work. Close to the origin, the proposed algorithm can be approximated by the homogeneous differentiator with homogeneity degree −1 ≤ δ ≤ 0 proposed in [2]. Hence, the stability proof from [2] can be used to show local stability (with finite-time or asymptotic convergence, depending on the homogeneity degree). The nth-order differentiator for a noise-free signal f(t) with bounded (n+1)st derivative |f^{(n+1)}(t)| ≤ L is proposed, according to (4), as

ż_0 = φ_0(σ_0) σ_0 + z_1,    (20a)
ż_1 = φ_1(σ_0) σ_0 + z_2,    (20b)
  ⋮
ż_{n−1} = φ_{n−1}(σ_0) σ_0 + z_n,    (20c)
ż_n = φ_n(σ_0) σ_0,    (20d)

with

φ_i(σ_0) = −k_i \left( |σ_0|^{\frac{δ_0}{1−nδ_0}} + μ|σ_0|^{δ_∞} \right)^{i+1}    (21a)

and

−1 ≤ δ_0 ≤ 0,    (21b)
0 ≤ δ_∞ < ∞.    (21c)
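A minimal Python sketch of the right-hand side of the bi-homogeneous differentiator (20) with the correction functions (21a) is given below, for use with any ODE solver; the function name is illustrative, and σ_0 = 0 would need special handling when δ_0 < 0 since the bracket in (21a) is unbounded there.

```python
import numpy as np

def bihom_rhs(z, f_t, k, delta0, delta_inf, mu, n):
    # z: states [z_0, ..., z_n], f_t: current (noise-free) signal value
    sigma0 = z[0] - f_t
    base = abs(sigma0)**(delta0 / (1.0 - n * delta0)) + mu * abs(sigma0)**delta_inf
    dz = np.empty(n + 1)
    for i in range(n):
        dz[i] = -k[i] * base**(i + 1) * sigma0 + z[i + 1]   # eq. (20a)-(20c)
    dz[n] = -k[n] * base**(n + 1) * sigma0                   # eq. (20d)
    return dz
```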

Similar to the uniform robust exact differentiator [1], μ > 0 is a tuning parameter, and for μ = 0 the homogeneous differentiator [2] is recovered. If the gains k_i are chosen according to (7), then the dynamic matrix (6) of the differentiator errors σ possesses the eigenvalue

λ(σ_0) = −r \left( |σ_0|^{\frac{δ_0}{1−nδ_0}} + μ|σ_0|^{δ_∞} \right)    (22)

of multiplicity n + 1. If f (n+1) (t) ≡ 0 holds, then the dynamics of the differentiator errors are governed by σ˙ = M(σ0 )σ = F(σ ), where F(σ ) is • homogeneous in the 0-limit with approximating function

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit

⎤ r01 −k0 σ0 r00 + σ1 r02 ⎥ ⎢ ⎢ −k1 σ0 r00 + σ2 ⎥ ⎥ ⎢ ⎥ ⎢ .. F0 (σ ) = ⎢ ⎥, . ⎥ ⎢ r0n ⎥ ⎢ ⎣−kn−1 σ0 r00r + σn ⎦ 0n+1 −kn σ0 r00

189



(23)

with homogeneity degree δ0 , r0n+1 = 1 + δ0 and weights r00 r01 .. .





⎤ 1 − nδ0 ⎥ ⎢1 − (n − 1)δ0 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ .. r0 = ⎢ ⎥=⎢ ⎥. . ⎥ ⎢ ⎥ ⎢ ⎦ ⎣r0n−1 ⎦ ⎣ 1 − δ0 r0n δ0 ⎡

(24)

• homogeneous in the ∞-limit with approximating function ⎡

−k0 μ σ0 r∞1 + σ1 −k1 μ2 σ0 r∞2 + σ2 .. .



⎥ ⎢ ⎥ ⎢ ⎥ ⎢ F∞ (σ ) = ⎢ ⎥, ⎥ ⎢ ⎣−kn−1 μn σ0 r∞n + σn ⎦ −kn μn+1 σ0 r∞n+1

(25)

with homogeneity degree δ∞ , r∞n+1 = 1 + (n + 1)δ∞ and weights ⎡ r∞

r∞0 r∞1 .. .





1 1 + δ∞ .. .



⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ =⎢ ⎥=⎢ ⎥. ⎥ ⎢ ⎥ ⎢ ⎣r∞n−1 ⎦ ⎣1 + (n − 1)δ∞ ⎦ r∞n 1 + nδ∞

(26)

Until now no proof is found by the author that the origin of the differentiator error states is global asymptotically stable, nor that they converge in finite or fixed-time. However, the properties of the state-dependent eigenvalues and numerous simulations indicate, that the fixed-time convergence property is given for proper chosen parameters. Furthermore, the proposed continuous-time algorithm only serves as basis for the discrete-time algorithm proposed in the next chapter, for which stability can be proven for arbitrary order. Moreover, the proposed algorithm can be seen as a generalization of all aforementioned differentiators: • δ0 = 0 and μ = 0: linear differentiator • δ0 = −1 and μ = 0: robust exact differentiator, [15]


• δ0 ∈ [−1, 0) and μ = 0: homogeneous differentiator with negative homogeneity degree, [2] • δ0 = −1, δ∞ = 21 , n = 1 and μ > 0: eigenvalues of the proposed algorithm match the real-valued part of the eigenvalues of the uniform robust exact differentiator with k0 = 2r and k1 = r 2 , [1] To show the effectiveness of the proposed algorithm the following 5th order differentiator is applied to the noise-free signal f (t) = −0.4 sin(t) + 0.9 cos(0.9t).   5 z˙ 0 = − 5.88 σ0 6 + μσ0 + z 1   2 5 z˙ 1 = − 14.406 σ0 3 + 2μ σ0 6 + μ2 lσ0 + z 2   1 2 5 z˙ 2 = − 18.824 σ0 2 + 3μ σ0 3 + 3μ2 σ0 6 + μ3 σ0 + z 3 1 3

1 2

2 3

(27a) (27b) (27c)

5 6

z˙ 3 = − 18.836( σ0 + 4μ σ0 + 6μ2 σ0 + 4μ3 σ0 + μ4 σ0 ) + z 4 (27d) 1

1

1

2

z˙ 4 = − 5.24( σ0 6 + 5μ σ0 3 + 10μ2 σ0 2 + 10μ3 σ0 3 + 5

+ 5μ4 σ0 6 + μ5 σ0 ) + z 5 1 6

(27e) 1 3

1 2

z˙ 5 = − 0.886( σ0 + 6μ σ0 + 15μ σ0 + 20μ σ0 + 0

2

2

3

5

+ 15μ4 σ0 3 + 6μ5 σ0 6 + μ6 σ0 )

(27f)

The homogeneity degrees are chosen as δ0 = −1 and δ∞ = 0, i.e., there is a constant term in the eigenvalues (22). The parameters ki are the coefficients of the polynomial (s + 0.98)6 . In Fig. 2 the estimated derivatives of the signal are shown for μ = 1 and μ = 0, i.e., for the RED. It can be observed, that although the homogeneity degree δ∞ = 0 is only chosen to be zero, the convergence rate is already increased significantly. Near the origin of the differentiator error states, i.e., once the estimates converged to the true derivatives, both algorithms show the same behavior, as the system is dominated by the approximating function F0 (σ0 ) which is exactly the right-hand side of the differentiator error dynamics of the robust exact differentiator.

3 Sliding-Mode Inspired Numerical Differentiation In general, there are two ways to obtain a discrete-time differentiator for a continuoustime signal. The obvious and most often used method is to design a continuous-time differentiator and discretize the obtained algorithm afterward. The preferred discretization approaches in most cases are the simple explicit Euler discretization or the more advanced implicit Euler discretization. The explicit Euler discretization might work for sufficiently small sampling steps; however, it can also lead to unstable behavior as shown in [6]. The implicit discretization yields estimates with much lower discretization effects while also leading to much higher computational effort. In the following, discrete-time differentiators, which are obtained by this first design

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit

estimate µ = 1

signal

1

191

estimate µ = 0

f (t)

0.5 0 −0.5 −1

f˙(t)

2 0 −2 4 f¨(t)

2 0 −2 −4 f (3) (t)

2 0 −2 −4

f (4) (t)

2 1 0 −1 −2 f (5) (t)

0.5 0 −0.5 −1

0

1

2

3

4

5 t

6

7

8

9

Fig. 2 Simulation results for the 5th order bi-homogeneous differentiator with μ = 1 and μ = 0

192

M. Rüdiger-Wetzlinger et al.

methodology are called [discretization method] discretized [differentiator name], e.g., explicit Euler discretized robust exact differentiator or implicit Euler discretized uniform robust exact differentiator. The second alternative is to first discretize the signal model and afterward directly design a discrete-time differentiator for the obtained discrete-time system. This approach in combination with suitable eigenvalue mappings, such as proposed in [13, 27], yields discrete-differentiators which are globally asymptotically stable and do not suffer from discretization chattering. In the following, differentiators obtained via the second design methodology are called [applied eigenvalue mapping] mapped [differentiator name], e.g., semi-implicit mapped robust exact differentiator. In the following, discretization methods for both approaches are proposed and discussed in detail.

3.1 Semi-implicit Mapped Differentiators As discussed in the previous chapter, the continuous-time differentiator can be seen as an observer where the observed system is an integrator chain. The same is true for the design of a discrete-time differentiator. The zero-order-hold discretization of the signal model depicted in Fig. 1 with sampling time h (and no noise) is fk+1 = Ad fk + dk , f k = c fk T

(28) (29)

where Ad =

n  (Ah)i

,

(30)

  cT = 1 0 . . . 0

(31)

i=0

h!

T  and the disturbance vector dk = d0,k d1,k . . . dn,k depends on the unknown but bounded (n + 1)st derivative f (n+1) (t) of the measured signal f (t):  d0,k = d1,k =

(k+1)h

kh  (k+1)h kh

.. . dn,k =



(k+1)h kh



τ0

kh  τ1 kh

 ... ...

τn−1

kh  τn−1

f (n+1) (τn )dτn dτn−1 . . . dτ0

(32a)

f (n+1) (τn )dτn dτn−1 . . . dτ1

(32b)

kh

f (n+1) (τn )dτn

(32c)

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit

193

If it can be assumed that f (n+1) (t) is constant between two timesteps, i.e., f (n+1) (t) = f k(n+1) for hk ≤ t < h(k + 1) Eq. (32) simplify to h n+1 f (n+1) (n + 1)! h n (n+1) = f n!

d0,k =

(33a)

d1,k

(33b)

.. .

dn,k = h f (n+1)

(33c)

Considering the state-dependent eigenvalues (22) and the semi-implicit eigenvalue mapping with sampling time h proposed in [12], i.e., λd =

1 , 1 − hλ

(34)

yields the desired discrete-time state-dependent eigenvalues −δ0

λd =

|σ0,k | 1−nδ0 −δ0

|σ0,k | 1−nδ0 + hr μ|σ0,k |

δ∞ −δ0 (1+nδ∞ ) 1−nδ0

+ hr

.

(35)

The discrete-time version of the continuous-time differentiator of order n is defined as zk+1 = Ad zk + ld (σ0,k )σ0,k

(36)

with ⎤−1 ⎡ ⎤ ⎡ ⎤ Φ0 − 1 cT 0 ⎢c T Ad ⎥ ⎢ .. ⎥ ⎢ Φ1 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ h ⎥ ld (σ0,k ) = −(Ad − λd I)n+1 ⎢ . ⎥ ⎢ . ⎥ = ⎢ . ⎥ ⎣ .. ⎦ ⎣0⎦ ⎣ .. ⎦ ⎡

c T And

1

(37)

Φn hn

where I is the identity matrix of dimension (n + 1) × (n + 1), Φi = Φi (λd ) for i = 0, . . . , n and λd is chosen according to (35). As the desired eigenvalues are state-dependent and hence change with every time step, Eq. (37), which is a generalized version of the formula of Ackerman, which has to be evaluated in every time step. However, the functions Φi (λd ) are polynomials of λd (σ0,k ) and their constant coefficients can be determined in advance. Therefore the computational effort can be reduced significantly. For example, for n = 1 the functions are defined as

194

M. Rüdiger-Wetzlinger et al.

Φ0 = 2λd − 1,

(38)

Φ1 =

(39)

−λ2d

+ 2λd − 1.

The proposed algorithm, which in the end has to be implemented on a digital device, can be written as z 0,k+1 = Φ0 σ0,k + z j,k+1 =

n  hi

Φj σ0,k + hj

i=1 n  i= j

i!

z i,k + f k ,

h i− j z i,k , (i − j)!

(40a) j = 1, . . . , n.

(40b)

The dynamics of the errors σ k can be represented in the pseudo-linear form σ k+1 = Md σ k − dk , with the state-dependent matrix Md = Md (λd ) given as ⎡ 2 3 hn ⎤ Φ0 h h2 h6 . . . (n)! ⎢ Φ1 1 h h 2 . . . h n−1 ⎥ ⎢ h 2 (n−1)! ⎥ ⎢ Φ2 h n−1 ⎥ ⎢ h 2 0 1 h . . . (n−1)! ⎥ ⎢ ⎥. Md = ⎢ . . ⎥ . . . . ⎢ .. .. . . . . . . .. ⎥ ⎢ ⎥ n−1 ⎣ Φn−1 0 ... 0 1 h ⎦ h Φn 0 ... ... 0 1 hn

(41)

(42)

It is obvious that in the discrete-time domain the differentiator error states σ k will not vanish if f (n+1) (t) = 0, i.e., the differentiator is not exact in sense of the continuoustime differentiator. Anyway, the differentiator error states will converge to a neighborhood around the origin and will stay in this neighborhood as long as | f (n+1) (t)| ≤ L, i.e., they converge into an attractive invariant set which contains the origin.

3.2 Stable Explicit Euler Discretized Differentiator Assume the continuous-time differentiator proposed in Sect. 2.1.The stable explicit Euler discretized approximation of (22) with sampling time h is z 0,k+1 = hφ0,sat (σ0,k ) + z 0,k + hz 1,k .. . z n−1,k+1 = hφn−1,sat (σ0,k ) + z n−1,k + hz n,k z n,k+1 = hφn,sat (σ0,k ) + z n,k

(43a)

(43b) (43c)

Discrete-Time Implementations of Differentiators Homogeneous in the Bi-Limit

195

with

$$\phi_{i,\mathrm{sat}}(\sigma_0) = \tilde{k}_i\,\lambda_{\mathrm{sat}}(\sigma_0)^{\,i+1} \tag{44}$$

where $\lambda_{\mathrm{sat}}(\sigma_0)$ is the saturated version of $\lambda(\sigma_0)$ and is defined as

$$\lambda_{\mathrm{sat}}(\sigma_0) = \begin{cases} \lambda(\sigma_0), & \text{if } \lambda(\sigma_0) > -\tfrac{1}{h}\\[2pt] -\tfrac{1}{h}, & \text{else} \end{cases} \tag{45}$$

and

$$(s+1)^{n+1} = s^{n+1} + \tilde{k}_0 s^{n} + \dots + \tilde{k}_{n-1} s + \tilde{k}_n. \tag{46}$$
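The effect of the saturation (45), combined with the explicit Euler mapping used in (47) below, is easy to see in code: λ is simply clipped from below at −1/h so that λd = 1 + hλ_sat can never become negative. A minimal sketch follows; the function names are our own.

```python
def lambda_sat(lam, h):
    """Saturated continuous-time eigenvalue, Eq. (45)."""
    return lam if lam > -1.0 / h else -1.0 / h

def lambda_d_explicit_euler(lam, h):
    """Discrete-time eigenvalue of the stable explicit Euler scheme, cf. (47).

    By construction 0 <= lambda_d <= 1 whenever lam <= 0.
    """
    return 1.0 + h * lambda_sat(lam, h)
```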

This saturation of the continuous-time state-dependent real-valued eigenvalue $\lambda$ ensures that the corresponding discrete-time eigenvalue $\lambda_d \in \mathbb{R}$,

$$\lambda_d(\sigma_{0,k}) = 1 + h\lambda_{\mathrm{sat}}(\sigma_0), \tag{47}$$

$$\lambda_d = \begin{cases} \dfrac{|\sigma_{0,k}|^{\frac{-\delta_0}{1-n\delta_0}} - hr\mu\,|\sigma_{0,k}|^{\frac{\delta_\infty-\delta_0(1+n\delta_\infty)}{1-n\delta_0}} - hr}{|\sigma_{0,k}|^{\frac{-\delta_0}{1-n\delta_0}}}, & \text{if } \rho(\sigma_{0,k}) \le 0\\[6pt] 0, & \text{else} \end{cases} \tag{48}$$

$$\rho(\sigma_{0,k}) = -|\sigma_{0,k}|^{\frac{-\delta_0}{1-n\delta_0}} + hr\mu\,|\sigma_{0,k}|^{\frac{\delta_\infty-\delta_0(1+n\delta_\infty)}{1-n\delta_0}} + hr \tag{49}$$

of the differentiator error dynamic matrix (56b) is nonnegative and smaller than one, i.e., $0 \le \lambda_d < 1$. Using the explicit Euler discretized signal model

$$\mathbf{f}_{k+1} = \tilde{\mathbf{A}}_d\,\mathbf{f}_k + \tilde{\mathbf{d}}_k, \tag{50}$$

with

$$\tilde{\mathbf{A}}_d = \mathbf{I}_{(n+1)\times(n+1)} + h\mathbf{A}, \tag{51}$$

$$\mathbf{A} = \begin{bmatrix} \mathbf{0}_{n\times 1} & \mathbf{I}_{n\times n}\\ 0 & \mathbf{0}_{1\times n} \end{bmatrix} \tag{52}$$

and

$$\tilde{\mathbf{d}}_k = \begin{bmatrix} \tilde{d}_{0,k} & \dots & \tilde{d}_{n,k} \end{bmatrix}^T \tag{53}$$

$$\tilde{d}_{i,k} = \int_{kh}^{(k+1)h}\!\int_{kh}^{\tau_1} f^{(i+2)}(\tau)\,\mathrm{d}\tau\,\mathrm{d}\tau_1, \qquad i = 0,\dots,n-1 \tag{54}$$

$$\tilde{d}_{n,k} = \int_{kh}^{(k+1)h} f^{(n+1)}(\tau)\,\mathrm{d}\tau \tag{55}$$


yields the differentiator error dynamics

$$\boldsymbol{\sigma}_{k+1} = \mathbf{M}_d(\sigma_{0,k})\,\boldsymbol{\sigma}_k - \tilde{\mathbf{d}}_k, \tag{56a}$$

$$\mathbf{M}_d(\sigma_{0,k}) = \begin{bmatrix} 1 + h\phi_{0,\mathrm{sat}} & h & 0 & \dots & 0\\ h\phi_{1,\mathrm{sat}} & 1 & h & \ddots & \vdots\\ \vdots & 0 & 1 & \ddots & 0\\ h\phi_{n-1,\mathrm{sat}} & \vdots & \ddots & \ddots & h\\ h\phi_{n,\mathrm{sat}} & 0 & \dots & 0 & 1 \end{bmatrix}, \tag{56b}$$

where the state-dependent dynamic matrix $\mathbf{M}_d(\sigma_{0,k})$ has one eigenvalue $\lambda_d(\sigma_{0,k})$ according to (48) of multiplicity $n+1$.

Remark: In [28, 29] a non-chattering, respectively low-chattering, discretization approach for homogeneous differentiators with optional filtering was published recently. While near the origin the discretization scheme of the differentiator proposed in [28, 29] and the stable explicit Euler discretized differentiator proposed in the present chapter show similar structures, only the latter is also defined for bi-homogeneous differentiators.

Remark: In contrast to the discrete-time differentiators derived in Sect. 3.1, the disturbance vector $\tilde{\mathbf{d}}_k$ does not only depend on the signal's $(n+1)$st derivative $f^{(n+1)}(t)$, but on all derivatives from the second derivative $\ddot{f}(t)$ up to $f^{(n+1)}(t)$. Hence, to obtain a bounded disturbance vector $\tilde{\mathbf{d}}_k$ for this discretization approach, it is assumed that $\ddot{f}(t), f^{(3)}(t), \dots, f^{(n+1)}(t)$ are bounded, i.e.,

$$|\ddot{f}(t)| \le L_2 \tag{57}$$

$$|f^{(3)}(t)| \le L_3 \tag{58}$$

$$\vdots \tag{59}$$

$$|f^{(n+1)}(t)| \le L_{n+1}. \tag{60}$$

4 Stability Proof

The following stability proof, in the unperturbed and in the perturbed case, is based on [13], where global asymptotic stability is established for one special case of the semi-implicit mapped differentiator proposed here. Furthermore, the mentioned work provides more detailed stability proofs for the first- and second-order differentiators.


4.1 The Unperturbed Case

Assume for the following proof that $\mathbf{d}_k = \tilde{\mathbf{d}}_k \equiv \mathbf{0}$. There exist regular state transformations $\mathbf{x}_k = \mathbf{S}_1\boldsymbol{\sigma}_k$ and $\mathbf{x}_k = \mathbf{S}_2\boldsymbol{\sigma}_k$, depending on which design methodology is used, with $x_{n,k} = \sigma_{0,k}$ and $\mathbf{x} = \begin{bmatrix} x_0 & x_1 & \dots & x_n \end{bmatrix}^T$, such that the error dynamics (41) and (56) are in the observable normal form

$$\mathbf{x}_{k+1} = \mathbf{A}_o(\lambda_d)\,\mathbf{x}_k \tag{61}$$

with

$$\mathbf{A}_o = \mathbf{A}_o(\lambda_d) = \begin{bmatrix} 0 & \dots & \dots & 0 & -\hat{\Phi}_0(\lambda_d)\\ 1 & 0 & \dots & 0 & -\hat{\Phi}_1(\lambda_d)\\ 0 & 1 & \ddots & 0 & -\hat{\Phi}_2(\lambda_d)\\ \vdots & \ddots & \ddots & \ddots & \vdots\\ 0 & \dots & \dots & 1 & -\hat{\Phi}_n(\lambda_d) \end{bmatrix}. \tag{62}$$

The dynamic matrix Ao (λd ) can be split into a constant matrix A L and a matrix A N (λd ) which contains the non-linearities, i.e., Ao (λd ) = A L + A N (λd )

(63)

where

$$\mathbf{A}_L = \begin{bmatrix} 0 & \dots & 0 & 0\\ 1 & \ddots & \vdots & \vdots\\ \vdots & \ddots & 0 & 0\\ 0 & \dots & 1 & 0 \end{bmatrix} \quad\text{and} \tag{64}$$

$$\mathbf{A}_N(\lambda_d) = \begin{bmatrix} 0 & \dots & 0 & -\hat{\Phi}_0(\lambda_d)\\ 0 & \dots & 0 & -\hat{\Phi}_1(\lambda_d)\\ \vdots & & \vdots & \vdots\\ 0 & \dots & 0 & -\hat{\Phi}_n(\lambda_d) \end{bmatrix}. \tag{65}$$

Note that the following holds for both eigenvalue mappings (semi-implicit and stable explicit Euler):

$$\lim_{|\sigma_0|\to 0}\lambda_d(\sigma_0) = \lim_{|\sigma_0|\to\infty}\lambda_d(\sigma_0) = 0, \tag{66}$$

hence

$$\lim_{|\sigma_0|\to 0}\mathbf{A}_o(\lambda_d) = \lim_{|\sigma_0|\to\infty}\mathbf{A}_o(\lambda_d) = \mathbf{A}_L. \tag{67}$$

Theorem 1 There exist positive parameters $r^\ast$, $\mu^\ast$ and $h^\ast$ such that the origin of system (61) for $n \ge 1$, with discrete-time eigenvalues according to (35) or (48) and $\mathbf{d}_k = \mathbf{0}$, is globally asymptotically stable if its parameters are selected as $r \ge r^\ast$, $\mu \ge \mu^\ast$, $h \ge h^\ast$.

Proof Since $\mathbf{A}_L$ is a Schur matrix, for any $\mathbf{Q} \succ 0$ of appropriate dimension there exists a matrix $\mathbf{P} \succ 0$ with

$$\mathbf{A}_L^T\mathbf{P}\mathbf{A}_L - \mathbf{P} = -\mathbf{Q}, \tag{68}$$

$$\mathbf{Q} = \begin{bmatrix} \mathbf{Q}_{n-1} & \mathbf{q}_{n-1}\\ \mathbf{q}_{n-1}^T & q_n \end{bmatrix}, \tag{69}$$

$\mathbf{Q}_{n-1} \succ 0$, $\mathbf{Q}_{n-1} \in \mathbb{R}^{n\times n}$, $\mathbf{q}_{n-1} \in \mathbb{R}^n$, $q_n \in \mathbb{R}$ and $\mathbf{A}_L$ according to (64). Let $V_k = \mathbf{x}_k^T\mathbf{P}\mathbf{x}_k$ be a Lyapunov function with

$$\Delta V_k = \mathbf{x}_k^T\big(\mathbf{A}_o^T(\lambda_d)\,\mathbf{P}\,\mathbf{A}_o(\lambda_d) - \mathbf{P}\big)\mathbf{x}_k = -\mathbf{x}_k^T\bar{\mathbf{Q}}(\lambda_d)\,\mathbf{x}_k, \tag{70}$$

$$\bar{\mathbf{Q}}(\lambda_d) = \begin{bmatrix} \mathbf{Q}_{n-1} & \bar{\mathbf{q}}_{n-1}(\lambda_d)\\ \bar{\mathbf{q}}_{n-1}^T(\lambda_d) & \bar{q}_n(\lambda_d) \end{bmatrix}, \tag{71}$$

$$\bar{\mathbf{q}}_{n-1}(\lambda_d) = \begin{bmatrix} \bar{q}_0(\lambda_d) & \dots & \bar{q}_{n-1}(\lambda_d) \end{bmatrix}^T, \tag{72}$$

where $\bar{q}_0(\lambda_d), \dots, \bar{q}_n(\lambda_d)$ are polynomials in $\lambda_d$ and $\bar{\mathbf{q}}_{n-1}(0) = \mathbf{q}_{n-1}$, $\bar{q}_n(0) = q_n$. As $\mathbf{Q}_{n-1}$ is positive definite, a sufficient condition for global asymptotic stability of the origin is that $\det(\bar{\mathbf{Q}}(\lambda_d)) > 0$ holds. As $\det(\bar{\mathbf{Q}}(\lambda_d))$ is a polynomial in $\lambda_d$ and $\bar{\mathbf{Q}}(0) = \mathbf{Q}$, the matrix $\bar{\mathbf{Q}}(\lambda_d)$ is certainly positive definite for $0 \le \lambda_d < \lambda^\ast < 1$. As a result, the stability proof simplifies to finding the smallest real-valued positive root $\lambda^\ast$ of the polynomial $\det(\bar{\mathbf{Q}}(\lambda_d))$. Both discrete-time eigenvalue mappings (35) and (48) have a maximum depending on the sampling time $h$ and the convergence factor $r$. The maximum of the corresponding continuous-time eigenvalue (22) is

$$\lambda_{\max} = \begin{cases} -r, & \text{if } \delta_0 = 0 \text{ and } \delta_\infty \neq 0\\[2pt] -r\,\mu^{\frac{\delta_0}{\delta_0-\delta_\infty+n\delta_0\delta_\infty}}\,\bar{\delta}, & \text{if } \delta_0 \neq 0 \text{ and } \delta_\infty \neq 0\\[2pt] -r\mu, & \text{if } \delta_0 \neq 0 \text{ and } \delta_\infty = 0 \end{cases} \tag{73}$$

with $\bar{\delta} = \bar{\delta}(n,\delta_0,\delta_\infty) > 0$ a constant that depends only on $n$, $\delta_0$ and $\delta_\infty$. (74)

Applying the eigenvalue mappings according to (35) and (48) yields the following stability conditions:


• Semi-implicit eigenvalue mapping:

$$\lambda_{d,\max} = \frac{1}{1 - h\lambda_{\max}} \tag{75}$$

The constraint $\lambda_d(\sigma_0) < \lambda^\ast\ \forall \sigma_0 \in \mathbb{R}$ yields

$$\begin{cases} hr > \dfrac{1-\lambda^\ast}{\lambda^\ast}, & \text{if } \delta_0 = 0 \text{ and } \delta_\infty \neq 0\\[6pt] hr\mu^{\frac{\delta_0}{\delta_0-\delta_\infty+n\delta_0\delta_\infty}} > \dfrac{1-\lambda^\ast}{\bar{\delta}\,\lambda^\ast}, & \text{if } \delta_0 \neq 0 \text{ and } \delta_\infty \neq 0\\[6pt] hr\mu > \dfrac{1-\lambda^\ast}{\lambda^\ast}, & \text{if } \delta_0 \neq 0 \text{ and } \delta_\infty = 0 \end{cases} \tag{76}$$

• Stable explicit Euler eigenvalue mapping:

$$\lambda_{d,\max} = 1 + h\lambda_{\max} \tag{77}$$

The constraint $\lambda_d(\sigma_0) < \lambda^\ast\ \forall \sigma_0 \in \mathbb{R}$ yields

$$\begin{cases} hr > 1-\lambda^\ast, & \text{if } \delta_0 = 0 \text{ and } \delta_\infty \neq 0\\[6pt] hr\mu^{\frac{\delta_0}{\delta_0-\delta_\infty+n\delta_0\delta_\infty}} > \dfrac{1-\lambda^\ast}{\bar{\delta}}, & \text{if } \delta_0 \neq 0 \text{ and } \delta_\infty \neq 0\\[6pt] hr\mu > 1-\lambda^\ast, & \text{if } \delta_0 \neq 0 \text{ and } \delta_\infty = 0 \end{cases} \tag{78}$$

Hence, there exist parameter settings $h^\ast$, $\mu^\ast$ and $r^\ast$ such that $\Delta V_k < 0$ for all $h \ge h^\ast$, $\mu \ge \mu^\ast$, $r \ge r^\ast$ and $\boldsymbol{\sigma}_k \neq \mathbf{0}$.

4.2 The Perturbed Case

In the case that the disturbance vector is not equal to zero, i.e., $\mathbf{d}_k \neq \mathbf{0}$, respectively $\tilde{\mathbf{d}}_k \neq \mathbf{0}$, it can be shown that there exists an attractive invariant set $\mathcal{X}$ around the origin of the differentiator error states, i.e., the estimation errors will converge to a "band" around zero. Furthermore, the width of the band depends on the bound of the perturbation. Using the same regular state transformations $\mathbf{x}_k = \mathbf{S}_1\boldsymbol{\sigma}_k$ or $\mathbf{x}_k = \mathbf{S}_2\boldsymbol{\sigma}_k$ as for the proof of global asymptotic stability in the unperturbed case, the error dynamics can be written as

$$\mathbf{x}_{k+1} = \mathbf{A}_o(\lambda_d)\,\mathbf{x}_k + \bar{\mathbf{d}}_k, \tag{79}$$

with $\bar{\mathbf{d}}_k = \mathbf{S}_1\mathbf{d}_k$, or $\bar{\mathbf{d}}_k = \mathbf{S}_2\tilde{\mathbf{d}}_k$ respectively. The perturbation $\bar{\mathbf{d}}_k$ is bounded, as $\mathbf{d}_k$ and $\tilde{\mathbf{d}}_k$ are bounded by assumption.

Theorem 2 Consider the eigenvalue mappings (35) and (48). There exist positive parameters $r^\ast$, $\mu^\ast$ and $h^\ast$ such that for a bounded perturbation $\bar{\mathbf{d}}_k$ there exists an invariant set $\mathcal{X}$ around the origin of system (79) such that $V_k \le c$ and $V_{k+1} \le c$ if $\mathbf{x}_k \in \mathcal{X}$, and $\Delta V_k < 0$ if $\mathbf{x}_k \notin \mathcal{X}$, provided its parameters are selected as $r \ge r^\ast$, $\mu \ge \mu^\ast$ and $h \ge h^\ast$.

Proof Using the same Lyapunov function as for the proof of Theorem 1 yields

$$\Delta V_k = -\mathbf{x}_k^T\bar{\mathbf{Q}}(\lambda_d)\mathbf{x}_k + 2\mathbf{x}_k^T\mathbf{A}_o^T(\lambda_d)\mathbf{P}\bar{\mathbf{d}}_k + \bar{\mathbf{d}}_k^T\mathbf{P}\bar{\mathbf{d}}_k. \tag{80}$$

It is proven in Theorem 1 that $\bar{\mathbf{Q}}(\lambda_d)$ is positive definite for an appropriate choice of the differentiator parameters. As $\bar{\mathbf{d}}_k$ is bounded and (67) holds, there exists a set $\mathcal{X}_1$ such that if $\mathbf{x}_k \notin \mathcal{X}_1$ the quadratic term with respect to $\mathbf{x}$ in (80) dominates the linear term with respect to $\mathbf{x}$ and the quadratic term with respect to $\bar{\mathbf{d}}$, and $\Delta V_k < 0$ holds. For the second part of the proof, Definition 2.1 and Theorem 2.1 of the paper [30] are used:

Definition (Bounded Input–Bounded State stability) System (79) is $D,\mathcal{X}_2$-BIBS (Bounded Input–Bounded State) stable if for each initial condition $\mathbf{x}_0 \in \mathcal{X}_2$ and for every input $\bar{\mathbf{d}}_k$ with $\|\bar{\mathbf{d}}_k\| < D$ for all $k \ge 0$, the state $\mathbf{x}_k$ remains bounded for all $k \ge 0$.

Lemma 1 Assume that the origin is an asymptotically stable equilibrium point of system (79) with $\bar{\mathbf{d}}_k = \mathbf{0}$ and let $V_k$ be an associated Lyapunov function which is assumed to be continuously differentiable. Then there exist a positive constant $D$ and a bounded open neighborhood $\mathcal{X}_2$ of the origin in $\mathbb{R}^n$ such that system (79) is $D,\mathcal{X}_2$-BIBS stable.

The proof of Lemma 1 can be found in [30]. Defining $\mathcal{X}_2 = \{\mathbf{x} \in \mathbb{R}^n \,|\, V_k(\mathbf{x}) \le c\}$ and utilizing the continuity of $V_k$ and $V_{k+1}$ yields that every set $\mathcal{X}_2 \supseteq \mathcal{X}_1$ is an attractive invariant set of system (79). The smallest invariant set is $\mathcal{X}^\ast = \{\mathbf{x} \in \mathbb{R}^n \,|\, V_k(\mathbf{x}) \le c^\ast\}$, i.e., the smallest set $\mathcal{X}_2$ satisfying $\mathcal{X}_2 \supseteq \mathcal{X}_1$.

5 Simulation Study and Conclusion

5.1 Numerical Example

To illustrate the performance of the proposed discretization methods, the bi-homogeneous differentiator


$$\dot{z}_0 = -8.4\big(\lceil\sigma_0\rfloor^{\frac{5}{6}} + \mu\sigma_0\big) + z_1 \tag{81a}$$

$$\dot{z}_1 = -29.4\big(\lceil\sigma_0\rfloor^{\frac{2}{3}} + 2\mu\lceil\sigma_0\rfloor^{\frac{5}{6}} + \mu^2\sigma_0\big) + z_2 \tag{81b}$$

$$\dot{z}_2 = -54.88\big(\lceil\sigma_0\rfloor^{\frac{1}{2}} + 3\mu\lceil\sigma_0\rfloor^{\frac{2}{3}} + 3\mu^2\lceil\sigma_0\rfloor^{\frac{5}{6}} + \mu^3\sigma_0\big) + z_3 \tag{81c}$$

$$\dot{z}_3 = -57.624\big(\lceil\sigma_0\rfloor^{\frac{1}{3}} + 4\mu\lceil\sigma_0\rfloor^{\frac{1}{2}} + 6\mu^2\lceil\sigma_0\rfloor^{\frac{2}{3}} + 4\mu^3\lceil\sigma_0\rfloor^{\frac{5}{6}} + \mu^4\sigma_0\big) + z_4 \tag{81d}$$

$$\dot{z}_4 = -32.2694\big(\lceil\sigma_0\rfloor^{\frac{1}{6}} + 5\mu\lceil\sigma_0\rfloor^{\frac{1}{3}} + 10\mu^2\lceil\sigma_0\rfloor^{\frac{1}{2}} + 10\mu^3\lceil\sigma_0\rfloor^{\frac{2}{3}} + 5\mu^4\lceil\sigma_0\rfloor^{\frac{5}{6}} + \mu^5\sigma_0\big) + z_5 \tag{81e}$$

$$\dot{z}_5 = -7.5295\big(\lceil\sigma_0\rfloor^{0} + 6\mu\lceil\sigma_0\rfloor^{\frac{1}{6}} + 15\mu^2\lceil\sigma_0\rfloor^{\frac{1}{3}} + 20\mu^3\lceil\sigma_0\rfloor^{\frac{1}{2}} + 15\mu^4\lceil\sigma_0\rfloor^{\frac{2}{3}} + 6\mu^5\lceil\sigma_0\rfloor^{\frac{5}{6}} + \mu^6\sigma_0\big), \tag{81f}$$

where the parameters $\tilde{k}_i$ are the coefficients of the polynomial $(s + 1.4)^6$, is implemented and simulated in three different ways:

• semi-implicit mapped (Sect. 3.1)
• stable explicit Euler discretized (Sect. 3.2)
• explicit Euler discretized

As differentiator input, the same noise-free signal as in Sect. 2.1 is used. In Fig. 3 the simulation results are illustrated. It can be observed that the estimates of the first four derivatives are quite similar for all three differentiators, while the advantage of the proposed discretization schemes is obvious for the estimate of the fifth derivative. While the common explicit Euler method yields strong discretization chattering, both proposed discretization methods do not show any discretization chattering. Further simulation results for one special configuration of the proposed semi-implicit mapped differentiator ($\delta_0 = -1$ and $\delta_\infty = \frac{n}{n+1}$), e.g., precision in the case of perturbations, i.e., $f^{(n+1)}(t) \neq 0$, the impact of additive noise on the differentiator error states, and the convergence time as a function of the initial differentiator states, can be found in [13].
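These gains are simply the binomial expansion of $(s+1.4)^6$ without the leading one, so they can be generated and checked in two lines of numpy. The snippet below is only a verification sketch and not part of the original simulation code.

```python
import numpy as np

# Coefficients of (s + 1.4)^6; dropping the leading 1 gives the gains
# 8.4, 29.4, 54.88, 57.624, 32.26944, 7.529536 used in Eq. (81).
k_tilde = np.poly(np.full(6, -1.4))[1:]
print(k_tilde)
```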

5.2 Conclusion

The homogeneous differentiator proposed in [2] has been adapted in such a way that the proposed differentiator is homogeneous in the bi-limit. The additional higher-order terms increase the convergence rate of the original algorithm far away from the origin, while the behavior of both algorithms sufficiently near the origin is almost the same. The design of the proposed differentiator has been broken down to a simple state-dependent eigenvalue placement. Taking the proposed continuous-time differentiator as a basis, two discretization methods have been discussed and stability was proven. The discrete-time systems


[Figure 3 contains six panels showing f(t), f˙(t), f¨(t), f^(3)(t), f^(4)(t) and f^(5)(t) versus time t, each comparing the signal with the explicit Euler, semi-implicit, and stable explicit Euler discretizations.]

Fig. 3 Comparison of the simulation results of the semi-implicit mapped, the stable explicit Euler, and the explicit Euler discretized 5th-order bi-homogeneous differentiator


obtained with both methods do not suffer from discretization chattering and hence are superior to the simple explicit Euler discretization. Furthermore, the required computational effort is much lower compared to the implicit approach [11]. A general stability proof for the unperturbed case and the existence of an attractive invariant set in the presence of bounded perturbations have been provided. The functionality of the proposed discrete-time algorithm for order 5 has been shown via a simulation study. Additional future studies will be necessary in order to further analyze and compare the proposed nonlinear discrete-time algorithms with respect to linear ones from a theoretical point of view.

References

1. Cruz-Zavala, E., Moreno, J.A., Fridman, L.: Uniform robust exact differentiator. IEEE Trans. Autom. Control (2011). https://doi.org/10.1109/tac.2011.2160030
2. Cruz-Zavala, E., Moreno, J.A.: Lyapunov functions for continuous and discontinuous differentiators. IFAC-PapersOnLine (2016). https://doi.org/10.1016/j.ifacol.2016.10.241
3. Utkin, V.I.: Sliding Mode Control and Optimization (2013)
4. Boiko, I.: Discontinuous Control Systems: Frequency Domain Analysis and Design (2008)
5. Boiko, I., Fridman, L., Pisano, A., Usai, E.: Analysis of chattering in systems with second-order sliding modes. IEEE Trans. Autom. Control (2007). https://doi.org/10.1109/tac.2007.908319
6. Levant, A.: On fixed and finite time stability in sliding mode control. In: 52nd IEEE Conference on Decision and Control (2013). https://doi.org/10.1109/cdc.2013.6760544
7. Acary, V., Brogliato, B.: Implicit Euler numerical scheme and chattering-free implementation of sliding mode systems. Syst. Control Lett. (2010). https://doi.org/10.1016/j.sysconle.2010.03.002
8. Huber, O., Acary, V., Brogliato, B.: Lyapunov stability and performance analysis of the implicit discrete sliding mode control. IEEE Trans. Autom. Control (2016). https://doi.org/10.1109/TAC.2015.2506991
9. Huber, O., Acary, V., Brogliato, B., Plestan, F.: Implicit discrete-time twisting controller without numerical chattering: analysis and experimental results. Control Eng. Pract. (2016). https://doi.org/10.1016/j.conengprac.2015.10.013
10. Polyakov, A., Efimov, D., Brogliato, B.: Consistent discretization of finite-time and fixed-time stable systems. SIAM J. Control Optim. (2019). https://doi.org/10.1137/18M1197345
11. Mojallizadeh, M.R., Brogliato, B., Acary, V.: Time-discretizations of differentiators: design of implicit algorithms and comparative analysis. Int. J. Robust Nonlinear Control (2021). https://doi.org/10.1002/rnc.5710
12. Wetzlinger, M., Reichhartinger, M., Horn, M., Fridman, L., Moreno, J.A.: Semi-implicit discretization of the uniform robust exact differentiator. In: IEEE Conference on Decision and Control (2019). https://doi.org/10.1109/CDC40024.2019.9028916
13. Rüdiger-Wetzlinger, M., Reichhartinger, M., Horn, M.: Robust exact differentiator inspired discrete-time differentiation. IEEE Trans. Autom. Control (2021)
14. Wetzlinger, M., Reichhartinger, M., Horn, M.: Higher order sliding mode inspired nonlinear discrete-time observer. Syst. Control Lett. (2021). https://doi.org/10.1016/j.sysconle.2021.104992
15. Levant, A.: Robust exact differentiation via sliding mode technique. Automatica (1998). https://doi.org/10.1016/s0005-1098(97)00209-4
16. Levant, A.: Homogeneity approach to high-order sliding mode design (2005). https://doi.org/10.1016/j.automatica.2004.11.029
17. Bernuau, E., Efimov, D., Perruquetti, W., Polyakov, A.: On homogeneity and its application in sliding mode control (2014). https://doi.org/10.1016/j.jfranklin.2014.01.007
18. Hahn, W.: Stability of Motion (1967)
19. Andrieu, V., Praly, L., Astolfi, A.: Homogeneous approximation, recursive observer design and output feedback. SIAM J. Control Optim. (2008). https://doi.org/10.1137/060675861
20. Bacciotti, A., Rosier, L.: Liapunov Functions and Stability in Control Theory (2006)
21. Zubov, V.I.: Methods of A. M. Lyapunov and Their Application (1964)
22. Hermes, H.: Homogeneous coordinates and continuous asymptotically stabilizing feedback controls. In: Differential Equations, Stability and Control. Lecture Notes in Pure and Applied Mathematics (1990)
23. Rosier, L.: Homogeneous Lyapunov function for homogeneous continuous vector field. Syst. Control Lett. (1992). https://doi.org/10.1016/0167-6911(92)90078-7
24. Sepulchre, R., Aeyels, D.: Homogeneous Lyapunov functions and necessary conditions for stabilization. Math. Control Signals Syst. (1996). https://doi.org/10.1007/bf01211517
25. Cruz-Zavala, E., Moreno, J.A.: High-order sliding-mode control design homogeneous in the bi-limit. Int. J. Robust Nonlinear Control (2020). https://doi.org/10.1002/rnc.5242
26. Zubov, V.I.: Systems of ordinary differential equations with generalized-homogeneous right-hand sides. Izv. Vyssh. Uchebn. Zaved. Mat. (1958)
27. Koch, S., Reichhartinger, M.: Discrete-time equivalents of the super-twisting algorithm. Automatica (2019). https://doi.org/10.1016/j.automatica.2019.05.040
28. Hanan, A., Jbara, A., Levant, A.: Non-chattering discrete differentiators based on sliding modes. In: 59th IEEE Conference on Decision and Control (CDC) (2020). https://doi.org/10.1109/cdc42340.2020.9304283
29. Hanan, A., Levant, A., Jbara, A.: Low-chattering discretization of homogeneous differentiators (2021). https://doi.org/10.1109/tac.2021.3099446
30. DeGroat, R.D., Hunt, L.R., Linebarger, D.A., Verma, M.: Discrete-time nonlinear system stability. IEEE Trans. Circuits Syst. I: Fundam. Theory Appl. (1992). https://doi.org/10.1109/81.199866

Lyapunov-Based Consistent Discretization of Quasi-continuous High-Order Sliding Modes

Tonametl Sanchez, Andrey Polyakov, and Denis Efimov

Abstract In this chapter, we propose an explicit discretization scheme for a class of disturbed systems controlled by homogeneous quasi-continuous high-order sliding-mode controllers which are equipped with a homogeneous Lyapunov function. Such a Lyapunov function is used to construct a discretization scheme that preserves important features of the original continuous-time system: asymptotic stability, finite-time convergence, and the Lyapunov function itself.

1 Introduction

Discretization of continuous-time models has become a fundamental step in most of the processes to design a control system. It is required, e.g., for numerical simulation, for implementation by means of digital electronics, or for the design of sampled-data controllers [11, 27]. For the case of linear systems, we can obtain exact discretized models; however, exact discretization of nonlinear systems is in general impossible due to the lack of explicit solutions. Hence, approximating discretization techniques must be used in the nonlinear setting. Nonetheless, for many nonlinear systems this is not a trivial task: first of all, standard discretization techniques usually impose some smoothness requirements on the system, and secondly, they do not preserve some relevant characteristics of the continuous-time system. For the particular case of High-Order Sliding Mode (HOSM) systems, which are non-smooth by nature, it is well known that standard discretization methods produce


undesirable behaviors in the discrete-time approximation [1, 8, 9, 16, 18, 21, 24, 32, 36]. That is why several new strategies to discretize sliding-mode systems have been designed, e.g., the implicit discretization of standard sliding modes [8, 36] and HOSM controllers [5]; the discrete-time redesign of the robust exact differentiator of arbitrary order proposed in [19]; the consistent implicit or semi-implicit discretization algorithms for finite-time and fixed-time stable systems developed in [29], which is based on an adequate transformation of the system; the digital implementation of sliding-mode controllers based on the discretization of differential inclusions by means of the implicit Euler method, see, e.g., [1, 16]. In this chapter, we propose a technique to discretize a class of systems controlled by quasi-continuous HOSM controllers whose origin is asymptotically stable. It is well known that one of the advantages of quasi-continuous HOSM is that the only discontinuity is at the origin [23]. The proposed technique is based on the discretization procedure provided in [33, 34]1 for homogeneous systems without disturbances. On one hand, we particularize the method for quasi-continuous HOSM, nonetheless, on the other hand, we extend the method by allowing time-varying disturbances in the model. A relevant feature of the proposed method is that it takes advantage of the information provided by the Lyapunov function for the closed-loop system. As in [34] the discretization scheme has the following properties: 1. Lyapunov function preservation: The Lyapunov function of the continuous-time system is also a Lyapunov function for discrete-time system, guaranteeing this way that the origin of the obtained discrete-time system is Lyapunov stable. 2. Consistency: The origin of the discrete-time approximating system is finite-time stable (this means that the discretization is consistent in the sense described in [29]). 3. Independence of the discretization step: The properties of stability and consistency of the obtained discrete-time systems are not affected by the size of the discretization step. Chapter organization: In Sect. 2 we state the problem to be solved and provide some definitions and preliminary results. In Sect. 3, we analyze the dynamics of the studied system by projecting it on a level set of its Lyapunov function. The discretization scheme, proposed in this chapter, is introduced and explained in Sect. 4. In Sect. 5, we present some examples of the proposed discretization method. In Sect. 6, some conclusions are stated. Notation: The set of integer numbers is denoted by Z. R∗+ denotes the set R+ \ {0}, analogously for the set Z. For a function V : Rn → R+ , which is continuous 1

Some parts of Lemma 2, Theorem 3, Theorem 5, and their proofs have been reproduced with permission from [Sanchez T, Polyakov A, Efimov D. Lyapunov-based consistent discretization of stable homogeneous systems. Int J Robust Nonlinear Control. 2021;31:3587–3605. https://doi.org/ 10.1002/rnc.5308] ©2020 John Wiley & Sons Ltd., and with permission from the IFAC License Agreement IFAC 2020#1150 of [T. Sanchez, A. Polyakov, D. Efimov, A Consistent Discretisation method for Stable Homogeneous Systems based on Lyapunov Function, IFAC-PapersOnLine 53(2), 5099–5104 (2020). DOI https://doi.org/10.1016/j.ifacol.2020.12.1141.].


and positive definite, we denote the set SV = {x ∈ Rn : V (x) = 1}. The class of functions η : R+ → R+ with η(0) = 0, which are strictly increasing and continuous, is denoted by K .

2 Problem Statement and Preliminaries

In this section, we describe the class of systems to be studied in this chapter, we also give the statement of the problem to be solved, and we recall some important properties of homogeneous systems. In this chapter, we consider the following continuous-time system:

$$\begin{aligned} \dot{x}_1(t) &= x_2(t)\\ &\ \ \vdots\\ \dot{x}_{n-1}(t) &= x_n(t)\\ \dot{x}_n(t) &= d_1(t) + d_2(t)\,u(x(t)), \end{aligned} \tag{1}$$

where $x(t) \in \mathbb{R}^n$ is the state and $u(x(t)) \in \mathbb{R}$ is the control signal. The disturbances $d_1(t), d_2(t) \in \mathbb{R}$ are piece-wise continuous functions such that

$$|d_1(t)| \le \bar{d}_1, \qquad \underline{d}_2 \le |d_2(t)| \le \bar{d}_2, \tag{2}$$

for all $t \in \mathbb{R}$ for some known constants $\bar{d}_1, \underline{d}_2, \bar{d}_2 \in \mathbb{R}^\ast_+$. For (1), we consider the following sub-class of quasi-continuous controllers $u(x)$ proposed in [7],

$$u(x) = -k_n\,\frac{\sigma_n(x)}{\bar{\sigma}_n(x)}, \tag{3}$$

where $\sigma_1(x) = x_1$, $\bar{\sigma}_1(x) = |x_1|$, and for $i \in \{2,\dots,n\}$ we have that

$$\sigma_i(x) = \lceil x_i\rfloor^{\frac{n}{r_i}} + k_{i-1}^{\frac{n}{r_i}}\,\sigma_{i-1}, \qquad \bar{\sigma}_i(x) = |x_i|^{\frac{n}{r_i}} + k_{i-1}^{\frac{n}{r_i}}\,\bar{\sigma}_{i-1}, \qquad r_i = n + 1 - i, \tag{4}$$

where we used the notation $\lceil x\rfloor^{p} := |x|^p\,\mathrm{sign}(x)$. For the closed-loop system (1), (3), a Lyapunov function $V_n : \mathbb{R}^n \to \mathbb{R}_+$ is also provided in [7], given by the following construction for $i \in \{2,\dots,n\}$,

$$V_i(x) = V_{i-1}(x) + W_i(x), \tag{5}$$

where

$$W_i(x) = \frac{r_i}{2n}\,|x_i|^{\frac{2n}{r_i}} - \lceil\nu_{i-1}\rfloor^{\frac{2n-r_i}{r_i}}\,x_i + \Big(1 - \frac{r_i}{2n}\Big)|\nu_{i-1}|^{\frac{2n}{r_i}}, \qquad \nu_i = -k_i\,\lceil\sigma_i\rfloor^{\frac{n-i}{n}}, \qquad \nu_1 = -k_1\,\lceil x_1\rfloor^{\frac{n-1}{n}},$$

and $V_1(x) = \frac{1}{2}|x_1|^2$.

T. Sanchez et al.

Theorem 1 ([7]) Consider the closed loop of (1) with (3). There exist large enough ki > 0 such that the origin of the system is globally finite-time stable and Vn given by (5) is a Lyapunov function for the system. A procedure to compute the gains ki is given in [7]. Let us show three explicit examples of controllers (3) and their respective Lyapunov functions (5). Example 1 For n = 1 we have the controller u(x) = −k1 sign(x1 ) , and the Lyapunov function V1 (x) = 21 x12 . Example 2 For n = 2 we have the controller u(x) = −k2

x2 2 + k12 x1 , x22 + k12 |x1 |

and the Lyapunov function 3

V2 (x) = 21 x12 + 41 x24 + k13 x1  2 x2 + 43 k14 x12 . Example 3 For n = 3 we have the controller u(x) = −k3

3

x3 3 + k23 σ2 (x) 3  , 3 |x3 |3 + k23 |x2 | 2 + k12 |x1 |

3

where σ2 = x2  2 + k12 x1 . For this case, the Lyapunov function is V3 (x) = 21 x12 + W2 (x) + W3 (x) , where

4

W2 (x) = 13 |x2 |3 + k12 x1  3 x2 + 23 k13 x12 , 5

W3 (x) = 16 x36 + k25 σ2  3 x3 + 56 k26 σ22 . In Sect. 5 these examples are resumed to illustrate the discretization procedure developed in Sect. 4. In order to make the exposition clearer, let us rewrite the closed-loop system (1), (3) as follows:   x(t) ˙ = f x(t), d(t) , x(t) ∈ Rn , d(t) ∈ R2 ,

(6)

Lyapunov-Based Consistent Discretization of Quasi-continuous …

209

where f i (x, d) = xi+1 for i ∈ {1, . . . , n − 1}, and f n (x, d) = d1 − d2 kn σσ¯ nn (x) . Note (x) n+2 n that f : R → R is continuous except at x = 0. Let us also denote with D to the set of piece-wise continuous functions d : R → R2 satisfying (2). Since the input d is unknown, a usual procedure in sliding-mode control to analyze (6) is to replace it by the differential inclusion [22, 30] x˙1 (t) = x2 (t) .. .

x˙n−1 (t) = xn (t) . x˙n (t) ∈ [−d¯1 , d¯1 ] − [d 2 , d¯2 ]kn σσ¯ nn (x) (x)

(7)

Thus, the solutions of (7) are understood as the solutions of a differential inclusion x˙ ∈ B(x) , x ∈ Rn ,

(8)

associated with (7), where the set-valued map B satisfies the following basic conditions [10, p. 77]: for all x ∈ Rn the set B(x) is nonempty, compact, and convex, and the set-valued function B is upper semicontinuous. In this context, a (generalized) solution of (7) is defined as a function x :  ⊂ R+ → Rn which is absolutely continuous and satisfies (8) for almost all t ∈  [10, p. 50]. Moreover, the existence of solutions of the differential inclusion is guaranteed since B satisfies the basic conditions. Following [22], we refer to (8) as a Filippov differential inclusion, which is obtained by means of a kind of Filippov regularization of (7) [22]. In general, the solutions of (7) or (8) are non-unique, however, observe that the right-hand side of (6) satisfies the conditions2 to guarantee uniqueness of solutions on Rn \ {0}.

2.1 Homogeneity Let us begin this section by recalling the definition of Weighted Homogeneity. Definition 1 ([17, 22, 26]) Given a set of coordinates (x1 , x2 , . . . , xn ) for Rn , r ()x denotes the family of dilations characterized by the square diagonal matrix r () = diag( r1 , . . . ,  rn ), where r = [r1 , . . . , rn ] , ri ∈ R∗+ , and  ∈ R∗+ . The components of r are called the weights of the coordinates. Thus: 1. a function V : Rn → R is r-homogeneous of degree m ∈ R if   V r ()x =  m V (x) , ∀x ∈ Rn , ∀ ∈ R∗+ ; 2. a vector field f : Rn → Rn is r-homogeneous of degree μ ∈ R if Under the assumption of d ∈ D , it can be seen that the right-hand side of (6) is locally Lipschitz in x on Rn \ {0}.

2

210

T. Sanchez et al.

  f r ()x =  μ r () f (x) , ∀x ∈ Rn , ∀ ∈ R∗+ ; 3. a set-valued map x → B(x) ⊂ Rn is r-homogeneous of degree μ ∈ R if   B r ()x =  μ r ()B(x) , ∀x ∈ Rn , ∀ ∈ R∗+ . A differential inclusion (8) is said to be r-homogeneous of degree μ ∈ R if its vector-set field (or set-valued vector field) B is r-homogeneous of degree μ. Now, we recall some important features of r−homogeneous differential inclusions.3 Theorem 2 ([3, 26, 31]) Let (8) be r−homogeneous of degree μ < 0 with B satisfying the basic conditions. If x = 0 is strongly globally asymptotically stable then 1. x = 0 is strongly globally finite-time stable; 2. for any positive integer p and any real m > p max{r1 , . . . , rn } there exists a positive definite function V : Rn → R+ such that a. V is of class C ∞ for all x = 0 and of class C p for all x ∈ Rn ; b. V is r−homogeneous of degree m; c. there exists a continuous positive definite function W¯ : Rn → R+ such that it is r−homogeneous of degree m + μ, and ∂ V (x) b ≤ −W¯ (x) , ∀x ∈ Rn , ∀b ∈ B(x) . ∂x

(9)

Remark 1 It is important to mention that (7) is an r-homogeneous differential inclusion of degree μ = −1 with weights r = [n, n − 1, . . . , 1] . Also note that (since (6) describes the closed-loop system (1), (3)), for any function d the vector field f is such that   f r ()x, d =  μ r () f (x, d) , ∀x ∈ Rn , ∀ ∈ R∗+ .

(10)

Now, if V is as in Theorem 2, then the derivative of V along (6) is given by ∂ V (x) f (x, d) , V˙ = −W (x, d) , W (x, d) := − ∂x

(11)

where the function W : Rn+m → R satisfies the following:   W r ()x, d =  m+μ W (x, d) , ∀x ∈ Rn , ∀d ∈ R2 , ∀ ∈ R∗+ , which is a direct consequence of the fact that [35, Prop. 1]) and (10). 3

∂ V (r ()x) ∂x

=  m ∂ V∂ (x) −r () (see, e.g., x

Following [3], we use the term strong stability (which involves all the solutions) in Theorem 2 to contrast with the term of weak stability, which claims properties of some solutions [10].

Lyapunov-Based Consistent Discretization of Quasi-continuous …

211

Moreover, the solutions of (6) are in the set of solutions of the Filippov differential inclusion (8). Therefore, by Theorem 2, the derivative of V along the solutions of (6) satisfies V˙ = −W (x, d) ≤ −W¯ (x), with W¯ as given in (9). In such a case, there exists α ∈ R∗+ such that [15, 26] V˙ ≤ −αV

m+μ m

(x) .

(12)

As stated in Theorem 2, W¯ is r−homogeneous, hence, the constant parameter α in (12) can be computed as follows: α = inf W¯ (x) .

(13)

x∈S V

We know from Theorem 2 that the degree of homogeneity of W¯ is m + μ, which is strictly positive if the homogeneity degree of V is restricted to m > −μ. Observe that this is always the case for (5) since m = 2n and μ = −1. The properties explained so far prove the following result (analogous to those in [13, 15, 26] for unperturbed systems). Lemma 1 Let (8) be a Filippov differential inclusion associated with (7). Also let (8) and V be as in Theorem 2. Then, in (6), for all x(0) ∈ Rn , for all t ∈ R+ , and for any d ∈ D, the following holds (with α as given in (13)): V (x(t)) ≤ V¯ (x(0), t) where ⎧ m  −μ −μ −μ ⎨ −μ m m (x(0)) − V αt , t < −μα V m (x(0)) m V¯ (x(0), t) = . (14) −μ ⎩ 0, t ≥ m V m (x(0)) −μα

From Lemma 1, we can see that the trajectories of (1) converge to the origin in finite time. Moreover, the convergence time T (x(0)) to the origin, for the initial −μ m V m (x(0)). condition x(0), is bounded as follows T (x(0)) ≤ −αμ

2.2 Problem Statement As already mentioned in the introduction, an exact discretization for a nonlinear system is (in general) not possible, this due to the lack of knowledge of the exact solution of the system. However, any suitable discretization scheme should preserve relevant properties of the solutions, e.g., the type of convergence of the trajectories to the origin in case of asymptotic stability. If we are able to extract some relevant information from a Lyapunov function, e.g., stability properties and convergence rates, just as it is done in Lemma 1, then such a Lyapunov function should be used to develop a discretization scheme. Hence, the problem to be solved in this chapter is Develop a discretization scheme, for (6) (which describes the closed-loop system (1), (3)) such that: if the origin of (6) is finite-time stable, then the generated

212

T. Sanchez et al.

discrete-time approximating system preserves the finite-time stability property of the continuous-time system. In this chapter, we solve this problem by taking advantage of the information provided by the homogeneous Lyapunov function of the system. This is done by making a homogeneous projection of the dynamics of the system on a level set of the Lyapunov function. Thus, the evolution of the system’s trajectories can be determined from the trajectories of the projected dynamics and an expansion computed by using the information of the decaying rate of the Lyapunov function along the solutions of the system (see Sect. 3 for more details). We have to mention that, in the literature, there exist some discretization methods that also utilize the idea of projecting the trajectories of the system onto level sets of the Lyapunov functions. Unfortunately, although some of those methods are able to keep the same Lyapunov function for the discrete-time system, they cannot guarantee that the convergence rates are preserved as well, see [12] and the references therein. In general, another disadvantage of those methods is that the projection is not explicit (i.e., an algebraic equation must be solved to find the projection). In contrast, in the discretization method of this chapter, the projection onto the level set of the Lyapunov function is explicit, which represents a procedural advantage.

3 Projected Dynamics In this section, we compute and analyze the projection of the dynamics of (6) onto a unitary level set of its Lyapunov function. The developments of this section constitute the fundamentals for the construction of the discretization method proposed in Sect. 4. Let V be as in Theorem 2, and define the following change of variable: y = r (V

−1 m

(x))x , ∀x ∈ Rn \ {0} .

(15)

Observe that, according to [28, p. 159], (15) constitutes the homogeneous projection of the point x over the level set {x ∈ Rn : V (x) = 1}, thus, y ∈ SV for all x ∈ Rn \ {0}. By taking the derivative of (15) along (6), we obtain y˙ = r (V

−1 m

 (x)) I −

1 −1 V (x)Gx ∂ V∂ (x) m x



f (x, d) ,

(16)

where I is the n × n identity matrix and G := diag(r1 , . . . , rn ). Recall from Remark 1 1 that f in (6) satisfies (10). From (15), we obtain x = r (V m (x))y, which can be substituted in (16) to obtain y˙ = r (V

−1 m

μ

= V m (x) f (y, d) + μ

1 −1 V (x)r (V m 1 W (x,d) Gy , m V (x)

(x)) f (x, d) −

= V m (x) f (y, d) +

1 V m

m+μ m

(x)W (y,d) Gy V (x)

,

−1 m

(x))Gr (V m (x))y ∂ V∂ (x) f (x, d) , x 1

Lyapunov-Based Consistent Discretization of Quasi-continuous …

213

(where W is given by (11)), therefore, μ y˙ = V m (x) f (y, d) +

1 W (y, d)Gy m



.

(17)

Equation (17) describes the dynamics (6) projected onto SV . However, note that we cannot recover the trajectories of (6) directly from the trajectories of (17) since (15) is not bijective. Here is where we can exploit the information provided by the Lyapunov function. Thus, we proceed to study the dynamics of V , i.e., its derivative along the trajectories of (6). Thus, considering (15), we obtain from (11) that   m+μ 1 V˙ = −W r (V m (x))y, d = −V m (x)W (y, d) .

(18)

Note that (17) and (18) still depend on x, thus, we introduce two auxiliary equations that will be useful for our purposes. Thus, we define a function v : R+ → R+ such that it is solution to the differential equation (cf. (18)) v(t) ˙ = −v

m+μ m

(t)W (z(t), d(t)) ,

(19)

where the function z : R+ → Rn is the solution to the following system (cf. (17)): μ z˙ (t) = v m (t) f (z(t), d(t)) +

1 W (z(t), d(t))Gz(t) m



.

(20)

From the developments made up to this point, we are now ready to state the main results of this section. The first one of these results consists in verifying that the set SV is positively invariant with respect to the trajectories of (20). Lemma 2 Consider (20) with z(0) ∈ SV , and any continuous function v : R+ → R. If μ < 0 and v(t) = 0 for all t ∈ [0, T ) for some T ∈ R∗+ , then z(t) ∈ SV for all t ∈ [0, T ), and any d ∈ D. Proof To verify that SV is positively invariant, let us compute the derivative of V (z) along the trajectories of (20), thus μ

V˙ = v m

∂ V (z) ∂z

f (z, d) +

1 W (z, d) ∂ V∂z(z) Gz m



.

Since ∂ V∂z(z) f (z, d) = −W (z, d), we can see that if W (z, d) = 0 then V˙ (z(t)) = 0. Thus, without loss of generality, we assume that W (z, d) = 0. According to the equality4 ∂ V (z) Gz = mV (z) , (21) ∂z we obtain

4



μ V˙ = v m W (z, d) − 1 + V (z) .

This equation is known as Euler’s theorem for weighted homogeneous functions, see, e.g., [2, Proposition 5.4].

214

T. Sanchez et al.

Hence V˙ (z(t)) = 0 if and only if V (z(t)) = 1 for all t ∈ [0, T ).



The second main result of this section is with respect to a useful representation of the solutions of (19). Lemma 3 Let (6) and V be as in Lemma 1. For any initial condition v(0) ∈ R∗+ , and any d ∈ D, there exists Θd (v(0)) ∈ R∗+ such that the function v : R+ → R+ given by ⎧ m −μ ⎨ ˆ 0 (t) −μ , −μ Wˆ 0 (t) < v(0) −μ m v(0) m − −μ W m m v(t) = , (22) −μ −μ ˆ ⎩ 0, W0 (t) ≥ v(0) m m t where Wˆ 0 (t) := 0 W (z(τ ), d(τ )) dτ , satisfies (19) with v(t) > 0 for all t in the interval [0, Θd (v(0))), and v(t) → 0 as t → Θd (v(0)). Proof This lemma is proven by direct integration of (19) to obtain (22). Nonetheless, let us provide some clarifying details. Since W (z, d) > 0 for all z ∈ SV and all d ∈ D, then v in (22) is strictly decreasing to zero, hence, for each v(0) ∈ R∗+ there  −μ ∗ ˆ m exists a maximal Θd (v(0))  ∈ R+ such that W0 (t) < v(0) for all t ∈ 0, Θd (v(0)) . Note that, 0, Θd (v(0)) is the interval of time such that the right-hand side of (20) is well defined on it.  Note that v : R+ → R+ , given by (22), is a continuous function, and also note that v(t) = 0 for all t ∈ [0, Θd (v(0)) and for initial conditions v(0) ∈ R∗+ . Therefore, (22) satisfies the hypothesis required in Lemma 2 for the function v. The last results of this section (the following theorem and its corollary) constitute the fundamentals of the discretization technique that is developed in Sect. 4. Theorem 3 Let (6) and V be as in Lemma 1. Define ζ = [v, z ] ∈ Z , where Z = R∗+ × SV . Consider (6) with x ∈ Rn \ {0}, d ∈ D, and (19)–(20) with ζ ∈ Z . The solutions of (6) and the solutions of (19)–(20) are equivalent with the homeomorphism Φ : Rn \ {0} → Z given by Φ(x) =

V (x)  . −1  V m (x) x  r

(23)

Proof Since V is a continuous function of x, we can ensure that Φ is also a continuous function of x, moreover, it has a continuous inverse Φ −1 : Z → Rn \ {0} given by  1 Φ −1 (ζ ) = r v m z .

(24)

The remaining steps of the proof are straightforward, it is only needed to note that  ζ (t) = Φ(x(t)) satisfies (19)–(20) and x(t) = Φ −1 (ζ (t)) satisfies (6). Corollary 1 If v is a solution of (19) with initial condition  −1 v(0)  = V (x(0)), and if z is a solution of (20) with initial condition z(0) = r v m (0) x(0), for any x(0) ∈ Rn \ {0}, then the function x : R+ → Rn given by

Lyapunov-Based Consistent Discretization of Quasi-continuous …

 x(t) =

215

 1  r v m (t) z(t), t < Θd (v(0)), 0, t ≥ Θd (v(0)),

(25)

(with Θd as given in Lemma 3) is solution of (6) for all t ∈ R+ .

4 Discretization Scheme In this section, we describe the proposed discretization scheme. The main idea of the method is a consequence of the developments presented in Sect. 3. Indeed, observe that (20) represents the dynamics of (6) but projected on a unit sphere SV , and that v (as given in Lemma 3) characterizes the decay of the Lyapunov function V evaluated along the trajectories of (6). So, the main idea is to compute a numerical solution5 of (19)–(20), and next, to define the numerical solution of the original system by using (25). Remark 2 Although several different schemes can be used to obtain a numerical solution of (19)–(20), we restrict ourselves in this chapter to the explicit (also known as forward) Euler method taking into account that v(t) is a nonnegative variable and z(t) belongs to a manifold for all t ≥ 0. To construct the discrete-time approximation of v, we see from Lemma 3 that for any h ∈ R+ , ⎧ m ⎨ −μ ˆ (t) −μ , v m (t) − −μ W m v(t + h) = ⎩ 0,

−μ ˆ W (t) m −μ ˆ W (t) m

0 . zk , vk+1 = 0

(28)

Let us explain the main idea of (28). First, the term z˜ k+1 can be regarded as an explicit Euler of (20); second, such a discretization is scaled by the  −1 discretization  r m factor  V (˜z k+1 ) . Note that such scaling is necessary since we have to ensure that z k ∈ SV for all k ∈ Z+ . Also note that, the condition z˜ k+1 = 0 is necessary to have (28) well defined, this is why we require the following assumption. Assumption 1 Consider (6) and V as in Theorem 3. For all z ∈ SV , all d ∈ D, and all τ ∈ R∗+ ,   z + τ f (z, d) + m1 W (z, d)Gz = 0 . In the following lemma, we state some sufficient conditions that can be helpful in verifying Assumption 1. Lemma 4 Assumption 1 holds in any of the following cases: 1. for all z∈SV such that ∂ V∂z(z) z = 0 we have that z F(z, d) ≥ 0, where F(z, d) := f (z, d) + m1 W (z, d)Gz; ∂ V (z) 2. ∂z z = 0 for all z ∈ SV ; 3. the set {z ∈ Rn : V (z) ≤ 1} is convex. Proof From Lemma 2 we know that, for all z ∈ SV and all d ∈ D, F(z, d) is tangent to SV . Hence, ∂ V∂z(z) F(z, d) = 0 for all z ∈ SV . On the other hand, if there exist τ ∈ R∗+ , z ∈ SV and d ∈ D such that z + τ F(z, d) = 0, then the vector F(z, d) is necessarily collinear to z but it has the opposite direction. Therefore, the following are necessary conditions to have z + τ F(z, d) = 0: ∂ V∂z(z) z = 0 and z F(z, d) < 0. The analysis in the previous paragraph let us clearly see that a sufficient condition to guarantee that Assumption 1 holds is either ∂ V∂z(z) z = 0 for all z ∈ SV (which is satisfied, for example, if the function z → ∂ V∂z(z) z is positive definite), or z F(z, d) ≥ 0 for all z such that ∂ V∂z(z) z = 0. This proves the first two items of the lemma. The third item of the lemma is proven as follows. On one hand, if the set {z ∈ Rn : V (z) ≤ 1} is convex, the fact that V is homogeneous guarantees that the sets {z ∈ Rn : V (z) ≤ a} are also convex for all a ∈ R∗+ , hence, we have that the function V is quasi-convex (see, e.g., [4, Sect. 3.4.1]). On the other hand, z = 0 is a global minimum of V since it is positive definite. From these reasoning, we conclude that V is a pseudo-convex function (see, e.g., [6, Lemma 2.1]), therefore (by definition  of pseudo-convexity), ∂ V∂z(z) z > 0 for all z ∈ SV , see also [25, p. 40]. It is important to mention that in [34] it is wrongly stated that (in absence of disturbances) the first item in Lemma 4 is an equivalent condition to Assumption 1, however, this is only true for n = 2. Now, we can state the following theorem, which is the main result of this section.

Lyapunov-Based Consistent Discretization of Quasi-continuous …

217

Theorem 4 Let (6) and V be as in Lemma 1. Suppose that Assumption 1 holds. Consider the discrete-time approximation of (6) given by  xk+1 = ψ(xk ) =

 m1  r vk+1 z k+1 , xk = 0, 0, xk = 0,

k ∈ Z+ ,

(29)

where vk+1 and  −1  z k+1 are given by (27) and (28), respectively, with vk = V (xk ), z k = r V m (xk ) xk , and x0 = x(0). Then V is a Lyapunov function for (29), and for all h ∈ R∗+ and all x(0) ∈ Rn \ {0}, xk → 0 as k → ∞. Moreover, V (xk ) ≤ V¯ (x0 , kh) for all k ∈ Z+ , with V¯ given by (14). The proof of the theorem is completely analogous to the proof of Theorem 2 in [34]. It is clear from Theorem 4 that the solutions of (29) reach the origin in a finite number of steps. Remark 3 Let us underline the main features of the proposed discretization scheme described in Theorem 4: 1. the discretization method is consistent, this means that the stability properties and the convergence rate from the solutions of the continuous-time system are preserved; 2. the Lyapunov function is preserved, i.e., the Lyapunov function from the continuous-time system is a Lyapunov function for the discrete-time approximating system as well; 3. the discretization method is explicit since the right-hand side of (29) does not depend on xk+1 but only on xk . Numerical convergence to the solutions In this section, we verify that the solutions of the discrete-time approximating system converge to the solutions of the continuous-time system. The methodology is the same as that given in [34]. First, consider the solution x : [0, a] → Rn of (6) for some fixed time a ∈ R∗+ . Second, suppose that the discretization step h is given by h = a/N , for some N ∈ Z∗+ , thus it is clear that h → 0 as N → ∞. N be the sequence generated by means of some discretization of (6). Let {xk }k=0 Consider the step function (associated to such a discretization method) defined as t → x(t) ˜ := xk , for all t ∈ [kh, (k + 1)h). An essential requirement for any discretization technique is that its associated step function converges uniformly on [0, a] to the solution x as the step size h tends to zero (or, equivalently, as N tends to infinity). The standard procedure to verify such a convergence property consists in confirming that the global truncation error 6 tends to zero as the step h tends to zero. Such a 6

The global truncation error can be understood as the accumulation of the errors generated at each step in a given compact interval [0, a], see, e.g., [20] or [14, p. 159].

218

T. Sanchez et al.

confirmation can be achieved by verifying the existence of a function η ∈ K such that the local truncation error 7 E(t + h) := x(t + h) − xk+1 satisfies the following (see, e.g., [37] and [14, pp. 37 and 159]) |E(t + h)| ≤ hη(h) .

(30)

Thus, in this section, we demonstrate the convergence property of the discretization scheme proposed in Sect. 4, by means of the verification of a local truncation error estimate given by (30). It is important to see that taking into account Theorem 3, we only need to demonstrate that vk and z k converge to v and z, respectively. Thus, we only have to analyze the local truncation error estimates of vk and z k as stated in the following theorem. Theorem 5 Assume that the hypotheses of Theorem 4 hold with v(t) = vk and z(t) = z k for some t ∈ R+ . Assume also that min{V (x(t + h)), vk+1 } ≥ b for some b ∈ R∗+ . Then, there exist functions ηv , ηz ∈ K such that in (27), (28): |E v (t + h)| := |v(t + h) − vk+1 | ≤ h ηv (h) , |E z (t + h)| := |z(t + h) − z k+1 | ≤ h ηz (h) .

(31) (32)

The proof of Theorem 5 is given below, but first we state the following lemma, which is used for the proof of the theorem. We use the following notation: the i-th element of the vector z˜ k is denoted as z˜ ki . Lemma 5 Consider (28). Given H ∈ R∗+ , under the assumptions of Theorem 5, i | ≤ γ¯i for all h ≤ H and all there exist constants γ i , γ i ∈ R∗+ such that γ i ≤ |˜z k+1 i ∈ {1, . . . , n}. Proof On one hand, we have that Assumption 1 guarantees the existence of the constants γ i . On the other hand, from (28) we can see that for every i = 1, . . . , n, μ i |˜z k+1 | ≤ ζi + hvkm f¯i +

1 αr ¯ i ζi m



,

where D¯ = [−d¯1 , d¯1 ] × [d 2 , d¯2 ], f¯i = sup | f i (z, d)| , ζi = sup |z i | , α¯ = sup |W (z, d)| . z∈S V d∈ D¯

z∈S V

z∈S V d∈ D¯

Now, since vk is decreasing and μ < 0, the hypotheses of the lemma ensure that μ γ i = ζi + H b m f¯i +

1 αr ¯ i ζi m



. 

The local truncation error is the one-step error computed by assuming that E(t) = 0, i.e., x(t) = xk .

7

Lyapunov-Based Consistent Discretization of Quasi-continuous …

219

Proof of Theorem 5 First we analyze E v given by (31). From (26) and (27) we have that (denoting E v = E v (t + h))  m  m μ μ  ˆ (t) −μ − 1 − −μ v m (t)W (z k , dk )h −μ  , m (t) W |E v | ≤ v(t) 1 − −μ v m m t+h where Wˆ (t) := t W (z(τ ), d(τ )) dτ . Note that we have used the hypothesis v(t) = vk . Now, we rewrite these inequalities as follows: |E v | ≤ hv(t)

   1−

−μ mμ v (t)Wˆ (t) m

m  −μ

 − 1−

−μ mμ v (t)W (z k , dk )h m

 m  −μ  

h

.

Assume that d is continuous at t. Recall that W is continuous for all z ∈ Rn \ {0} and all d ∈ R2 , moreover, z(t) = z k and d(t) = dk . Hence, it is clear (e.g., by using the L’Hôpital’s rule) that 1   1− h→0 h lim

−μ mμ v (t)Wˆ (t) m

m  −μ

 − 1−

−μ mμ v (t)W (z k , dk )h m

 m  −μ   = 0.

¯ therefore, Observe that W (z, d) is positive and bounded for all z ∈ SV and all d ∈ D, there exists a function η¯ v ∈ K (which does not depend on z k ∈ SV ) such that 1   1− h

−μ mμ v (t)Wˆ (t) m

m  −μ

 − 1−

−μ mμ v (t)W (z k , dk )h m

 m  −μ   ≤ η¯ v (h) .

Thus, the result of the theorem is obtained by taking ηv (h) = v(t)η¯ v (h). Now, we analyze the error E z , which is given by (32). Observe from (28) that   −1 E z (t + h) = z(t + h) − r V m (˜z k+1 ) z˜ k+1 , which can be rewritten as  −1   E z (t + h) = z(t + h) − z˜ k+1 + I − r V m (˜z k+1 ) z˜ k+1 . Hence, for i = 1, . . . , n, we have that   i −ri i E iz (t + h) = z i (t + h) − z˜ k+1 + 1 − V m (˜z k+1 ) z˜ k+1 . Since z(t + h) ∈ SV , the term 1 − V

−ri m

in (33) can be rewritten as follows:

(33)

220

T. Sanchez et al.

1−V

−ri m

(˜z k+1 ) = =

1 ri m



V (˜z k+1 )  1 ri m

V (˜z k+1 )

 ri V m (˜z k+1 ) − 1  ri ri V m (˜z k+1 ) − V m (z(t + h)) .

From Lemma 5, and for any H ∈ R∗+ , we can ensure the existence of constants a1 , a2 ∈ R∗+ such that a1 ≤ V (˜z k+1 ) ≤ a2 for all h ≤ H . Hence,8  ri      r V m (˜z k+1 ) − V mi (z(t + h)) ≤ L i V (˜z k+1 ) − V (z(t + h)) ≤ L i L v z˜ k+1 − z(t + h) ,

   −ri for some constants L i , L v ∈ R∗+ . Thus, we obtain 1 − V m (˜z k+1 ) ≤ ci z˜ k+1 − −ri  z(t + h) with ci := a m L i L v . Hence, we find a bound for (33) as follows: 1

 i  i | + ci z˜ k+1 − z(t + h)|˜z k+1 |, |E iz (t + h)| ≤ |z i (t + h) − z˜ k+1  i    ≤ |˜z k+1 − z(t + h)| + ci z˜ k+1 − z(t + h) |˜z k+1 | ,   i ≤ (1 + ci |˜z k+1 |)z˜ k+1 − z(t + h) ,   ≤ c¯i z˜ k+1 − z(t + h) , c¯i := 1 + ci γ i ,

(34)

  with γ i as given in Lemma 5. To analyze the term z˜ k+1 − z(t + h) define F(x, d) := f (x, d) + m1 W (x, d)Gx. Thus, from (20) and (28) we have that z(t + h) = z(t) + μ t+h μ m m t v (τ )F(z(τ ), d(τ )) dτ and z˜ k+1 = z k + hvk F(z k , dk ), respectively. By Taylor’s theorem, there exists a function h → R(t, h) such that z(t + h) = z(t) + μ v m (t)F(z(t), d(t))h + R(t, h), and h1 R(t, h) → 0 as h → 0. Since z k = z(t), d(t) = dk , and vk = v(t),  μ    z˜ k+1 − z(t + h) = z k + hv m F(z k , dk ) − z(t) − v mμ (t)F(z(t), d(t))h − R(t, h) , k 1   1     (35) = |R(t, h)| = h  R(t, h) , lim  R(t, h) = 0 . h→0 h h Therefore, from (34) and (35) we conclude that there exist c ∈ R∗+ and ηz ∈ K such that  1   |E z (t + h)| ≤ hηz (h) , ηz (h) ≥ c R(t, h) . h 

8

Since [a1 , a2 ] ⊂ R is compact and a1 > 0, the function g : [a1 , a2 ] ⊂ R∗+ → R given by g(V ) = ri

V m is Lipschitz continuous. Also, z k and z belong to a compact subset of Rn on which V is Lipschitz continuous.

Lyapunov-Based Consistent Discretization of Quasi-continuous …

221

5 Examples In this section, firstly, we resume the examples given in Sect. 2 to illustrate the discretization scheme proposed in Sect. 4. Secondly, in Example 7 we show a possible application of the proposed discretization scheme to construct a discrete-time implementation of the controller (3). The disturbances to be used in the examples are given by (36) d1 (t) = A1 sin(ω1 t) , d2 (t) = 1 − A2 cos(ω2 t) , where d¯1 = A1 , d 2 = 1 − A2 , and d¯2 = 1 + A2 , with the parameters A1 = 1, ω1 = π , A2 = 1/5, and ω2 = 10π . Example 4 For n = 1 consider (1) with the controller and the Lyapunov function given in Example 1. Observe that ∂ V∂1x(x) x = x12 . Hence, Lemma 4 guarantees that Assumption 1 holds. For the simulation, we use the gain k1 = 3 and the initial condition x(0) = 5. Figure 1 shows the behavior of the discretization scheme (29) with a step of h = 0.1. It is clear that the state of the system converges exactly to the origin in finite time despite the disturbance. Additional details of this example are given in [34], where it is even compared with the implicit discretization schemes from [1, 8]. Example 5 Now consider (1) with n = 2, the controller and the Lyapunov function given in Example 2. Observe that   ∂ V2 (x) 3 x = 1 + 23 k14 x12 + 25 k13 x1  2 x2 + x24 . ∂x 3

By applying Young’s inequality to the term x1  2 x2 it can be verified that the function    1 − 41 1 given by x → ∂ V∂2x(x) x is positive definite if k1 < 8 4 3 5 58 3 − 4 . Under this condition, Lemma 4 guarantees that Assumption 1 holds. For the simulation we use the initial conditions x1 (0) = 2, x2 (0) = 2 and the gains k1 = 1, k2 = 4. The discretization step is again set to h = 0.1. In Fig. 2, it can be seen the states of the 1

4

xk

0

2

1.6

2.2

0 0

0.5

1

1.5

2

hk

Fig. 1 Discrete-time approximation of (1), (3) for n = 1

2.5

3

3.5

4

222

T. Sanchez et al. 3 2

x1k

0.05 0 −0.05

1 0 3

0

2

xk2

−0.2

1 0

−1 −2 0

1

2

3

4

5

hk

6

7

8

Fig. 2 Discrete-time approximation of (1), (3) for n = 2 4

x1k

5x10-6 0 8.65

2

8.75

0 4

xk2

0

2

-2x10-4

0 −2 0

xk3 −2 −4 0

0.01 0 1

2

3

4

5

hk

6

7

8

9

10

Fig. 3 Discrete-time approximation of (1), (3) for n = 3

discrete-time approximation (29) preserving the finite-time converge feature from the continuous-time model. Example 6 Now, for the case n = 3, consider (1) with the controller and the Lyapunov function given in Example 3. To simulate the discretization scheme (29), we use the initial conditions x1 (0) = 2, x2 (0) = 2, x3 (0) = 2 and the gains k1 = 0.6, k2 = 1.7, k3 = 1200. The discretization step is set to h = 0.001. Figure 3 shows the states of the system converging exactly to the origin in finite time.

Lyapunov-Based Consistent Discretization of Quasi-continuous …

223

10

|zk|

5 0 0

1

2

3

4

5

hk

6

7

8

9

10

Fig. 4 Norm of z˜ k in (28) for the discrete-time approximation of (1), (3) for n = 3

Since, for this example, Assumption 1 is not easily verifiable by means of Lemma 4, we confirm along the simulation that z˜ k = 0 for k ≥ 0. This can be corroborated in Fig. 4, which shows the norm of z˜ k . Example 7 In this example, we show a possible application of the proposed discretization scheme to construct a discrete implementation of the controller (3). Consider (1) for n = 2. Assume that the state x is measured at instants tk = kh, k ∈ Z+ , h ∈ R∗+ , and the control signal must be constant for the interval Ik = [kh, (k + 1)h), i.e., u(t) = u k for all t ∈ Ik . It is well known that the standard discretization of u(x(t)) given by (37) u k = u(x(tk )) generates numerical chattering as it can be seen in Fig. 5, which shows the behavior of the states of the system for d1 = 0 and d2 = 1, with the controller (3) discretized as in (37). For the simulation, the initial conditions are x1 (0) = 1, x2 (0) = 1 and the gains k1 = 1, k2 = 4. The step for the control discretization is h = 0.01. The continuoustime dynamics is approximated by means of the explicit Euler discretization with a step of h s = 1 × 10−5 . Now, as it is done with implicit discretization techniques for sliding-mode controllers (see, e.g., [1, 8]) we consider the controller discretization u k = u(xk+1 ) .

(38)

We use the proposed discretization method to compute xk+1 . First, observe that the function u : Rn → R given by (3) is r−homogeneous of zero degree. Hence, if x = r ()z, then u(x) = u(z) for all  ∈ R∗+ . Thus, by considering (29), we can replace (38) with (39) u k = u(z k+1 ) . To compute z k+1 we use (28) and vk+1 given by (27) with the data  −1 vk = V (x(tk )) , z k = r vk m x(tk ) . Now, note that both vk+1 and z k+1 given by (27) and (28), respectively, depend on the disturbance d, and it is generally unknown. Thus, we compute vk+1 and z k+1 by assuming the nominal case, i.e., with the disturbance d such that d1 = 0 and d2 = 1.

224

T. Sanchez et al. 2

x1k

5x10-4 0

1

−5x10-4

0 2

xk2

0.05 0

1

−0.05

0

−1 5

uk

0

−5 0

1

2

3

4

5

hk Fig. 5 States of (1) in closed loop with the discretization of (3) given by (37) (undisturbed case) 2

x1k

1.0753x10-3

1 0 2

xk2

2x10-7

1

0

0

−1 5

uk

0 2x10-7

0

−5 0

0.5

1

1.5

2

2.5

hk

3

3.5

4

4.5

5

Fig. 6 States of (1) in closed loop with the discretization of (3) given by (39) (undisturbed case)

Finally, observe that u is discontinuous at zero, hence we have to assign the value of u(z k+1 ) for the case z k+1 = 0. Since lim↓0 u(r ()z) = u(z) for all z = 0, we set u(z k+1 ) = u(z k ) if z k+1 = 0. In Fig. 6, we can see the states of the system with the controller (3) discretized as in (39). The disturbance signals are set as before, i.e., d1 ≡ 0 and d2 ≡ 1. The initial conditions are x1 (0) = 1, x2 (0) = 1 and the gains k1 = 1, k2 = 4. It can be seen that the numerical chattering has been considerably reduced with the proposed discretization scheme.

Fig. 7 States of (1) in closed loop with the discretization of (3) given by (37) (disturbed case)


Fig. 8 States of (1) in closed loop with the discretization of (3) given by (39) (disturbed case)

Now we repeat the simulations considering the disturbances given in (36). Figures 7 and 8 show the states of the system and the control signals obtained with the two methods. It is clear that the proposed Lyapunov-based discretization helps to reduce the numerical chattering effect in the state signals. It is also noticeable that the accuracy of the second state is improved with the Lyapunov-based method, whereas the accuracy of the first state is not.


6 Conclusion

We have provided in this chapter a discretization scheme for a class of systems controlled by a family of quasi-continuous HOSM controllers. Two of the most relevant properties of the method are that it preserves the finite-time convergence to the origin and the Lyapunov function of the continuous-time system. Another interesting feature of the technique is that both the discretization and the projection procedures are explicit. Finally, we have shown an example of the application of the proposed discretization scheme to the design of a discrete-time implementation of a quasi-continuous controller that helps to reduce the numerical chattering effect.

Acknowledgements The authors acknowledge the support of the project ANR DIGITSLID (ANR 18-CE40-0008) and of CONACYT CVU-371652.

References 1. Acary, V., Brogliato, B., Orlov, Y.V.: Chattering-free digital sliding-mode control with state observer and disturbance rejection. IEEE Trans. Autom. Control 57(5), 1087–1101 (2012). https://doi.org/10.1109/TAC.2011.2174676 2. Bacciotti, A., Rosier, L.: Liapunov Functions and Stability in Control Theory, 2nd edn. Communications and Control Engineering. Springer, Berlin (2005). https://doi.org/10.1007/b139028 3. Bernuau, E., Efimov, D., Perruquetti, W., Polyakov, A.: On homogeneity and its application in sliding mode control. J. Frankl. Inst. 351(4), 1866–1901 (2014). https://doi.org/10.1016/j. jfranklin.2014.01.007 4. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004). https://doi.org/10.1017/CBO9780511804441 5. Brogliato, B., Polyakov, A., Efimov, D.: The implicit discretization of the supertwisting slidingmode control algorithm. IEEE Trans. Autom. Control 65(8), 3707–3713 (2020). https://doi. org/10.1109/TAC.2019.2953091 6. Crouzeix, J.P., Ferland, J.A.: Criteria for quasi-convexity and pseudo-convexity: relationships and comparisons. Math. Program. 23(1) (1982). https://doi.org/10.1007/BF01583788 7. Cruz-Zavala, E., Moreno, J.A.: Homogeneous high order sliding mode design: a Lyapunov approach. Automatica 80, 232–238 (2017). https://doi.org/10.1016/j.automatica.2017.02.039 8. Drakunov, S.V., Utkin, V.I.: On discrete-time sliding modes. IFAC Proc. Vol. 22(3), 273–278 (1989). https://doi.org/10.1016/S1474-6670(17)53647-2 9. Efimov, D., Polyakov, A., Levant, A., Perruquetti, W.: Realization and discretization of asymptotically stable homogeneous systems. IEEE Trans. Autom. Control 62(11), 5962–5969 (2017). https://doi.org/10.1109/TAC.2017.2699284 10. Filippov, A.F.: Differential Equations with Discontinuous Righthand Sides. Kluwer, Dordrecht, The Netherlands (1988). https://doi.org/10.1007/978-94-015-7793-9 11. Goodwin, G.C., Agüero, J.C., Garrido, M.E.C., Salgado, M.E., Yuz, J.I.: Sampling and sampled-data models: the interface between the continuous world and digital algorithms. IEEE Control Syst. Mag. 33(5), 34–53 (2013). https://doi.org/10.1109/MCS.2013.2270403 12. Grimm, V., Quispel, G.R.W.: Geometric integration methods that preserve Lyapunov functions. BIT Numer. Math. 45(4), 709–723 (2005). https://doi.org/10.1007/s10543-005-0034-z 13. Haimo, V.T.: Finite time controllers. SIAM J. Control Optim. 24(4), 760–770 (1986) 14. Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I, 2nd edn. Springer, Berlin, Heidelberg (1993). https://doi.org/10.1007/978-3-540-78862-1


15. Hong, Y., Huang, J., Xu, Y.: On an output feedback finite-time stabilisation problem. In: Proceedings of the 38th IEEE Conference on Decision and Control, vol. 2, pp. 1302–1307 (1999). https://doi.org/10.1109/CDC.1999.830117 16. Huber, O., Acary, V., Brogliato, B.: Lyapunov stability and performance analysis of the implicit discrete sliding mode control. IEEE Trans. Autom. Control 61(10), 3016–3030 (2016). https:// doi.org/10.1109/TAC.2015.2506991 17. Kawski, M.: Stability and nilpotent approximations. In: Proceedings of the 27th IEEE Conference on Decision and Control, vol. 2, pp. 1244–1248 (1988). https://doi.org/10.1109/CDC. 1988.194520 18. Koch, S., Reichhartinger, M.: Discrete-time equivalents of the super-twisting algorithm. Automatica 107, 190–199 (2019). https://doi.org/10.1016/j.automatica.2019.05.040, http://www. sciencedirect.com/science/article/pii/S0005109819302596 19. Koch, S., Reichhartinger, M., Horn, M., Fridman, L.: Discrete-time implementation of homogeneous differentiators. IEEE Trans. Autom. Control 65(2), 757–762 (2020). https://doi.org/ 10.1109/TAC.2019.2919237 20. Lambert, J.D.: Numerical Methods for Ordinary Differential Systems: The Initial Value Problem. Wiley, New York (1991) 21. Levant, A.: Higher-order sliding modes, differentiation and output-feedback control. Int. J. Control 76(6), 924–941 (2003). https://doi.org/10.1080/0020717031000099029 22. Levant, A.: Homogeneity approach to high-order sliding mode design. Automatica 41(5), 823– 830 (2005). https://doi.org/10.1016/j.automatica.2004.11.029 23. Levant, A.: Quasi-continuous high-order sliding-mode controllers. IEEE Trans. Autom. Control 50(11), 1812–1816 (2005). https://doi.org/10.1109/TAC.2005.858646 24. Levant, A.: On fixed and finite time stability in sliding mode control. In: 52nd IEEE Conference on Decision and Control (2013) 25. Mangasarian, O.L.: Nonlinear Programming. SIAM (1994) 26. Nakamura, H., Yamashita, Y., Nishitani, H.: Smooth Lyapunov functions for homogeneous differential inclusions. In: Proceedings of the 41st SICE Annual Conference, pp. 1974–1979 (2002). https://doi.org/10.1109/SICE.2002.1196633 27. Nesi´c, D., Teel, A.R.: Perspectives in Robust Control, chap. Sampled-Data Control of Nonlinear Systems: An Overview of Recent Results. Springer, London (2001) 28. Polyakov, A.: Generalized Homogeneity in Systems and Control. Springer, Cham, Switzerland (2020). https://doi.org/10.1007/978-3-030-38449-4 29. Polyakov, A., Efimov, D., Brogliato, B.: Consistent discretization of finite-time and fixedtime stable systems. SIAM J. Control Optim. 57(1), 78–103 (2019). https://doi.org/10.1137/ 18M1197345 30. Polyakov, A., Fridman, L.: Stability notions and Lyapunov functions for sliding mode control systems. J. Frankl. Inst. 351(4), 1831–1865 (2014). https://doi.org/10.1016/j.jfranklin.2014. 01.002 31. Rosier, L.: Homogeneous Lyapunov function for homogeneous continuous vector field. Syst. Control Lett. 19(6), 467–473 (1992). https://doi.org/10.1016/0167-6911(92)90078-7 32. Sanchez, T., Efimov, D., Polyakov, A., Moreno, J.A.: Homogeneous discrete-time approximation. IFAC-PapersOnLine 52(16), 19–24 (2019). https://doi.org/10.1016/j.ifacol.2019.11.749. 11th IFAC Symposium on Nonlinear Control Systems NOLCOS 2019 33. Sanchez, T., Polyakov, A., Efimov, D.: A consistent discretisation method for stable homogeneous systems based on lyapunov function. IFAC-PapersOnLine 53(2), 5099–5104 (2020). https://doi.org/10.1016/j.ifacol.2020.12.1141. 21st IFAC World Congress 34. 
Sanchez, T., Polyakov, A., Efimov, D.: Lyapunov-based consistent discretization of stable homogeneous systems. Int. J. Robust Nonlinear Control 31(9), 3587–3605 (2021). https://doi. org/10.1002/rnc.5308 35. Sepulchre, R., Aeyels, D.: Homogeneous Lyapunov functions and necessary conditions for stabilization. Math. Control Signals Syst. 9(1), 34–58 (1996). https://doi.org/10.1007/ BF01211517


36. Utkin, V.I.: Variable Structure and Lyapunov Control, chap. Sliding mode control in discretetime and difference systems. Springer, Berlin, Heidelberg (1994) 37. Walter, J.: Proof of Peano’s existence theorem without using the notion of the definite integral. J. Math. Anal. Appl. 59(3), 587–595 (1977). https://doi.org/10.1016/0022-247X(77)90083-X

Low-Chattering Discretization of Sliding Modes

Avi Hanan, Adam Jbara, and Arie Levant

Abstract Sliding-mode control (SMC) is not only known for its efficacy and robustness, but it also features the notorious chattering effect both in applications and simulation. The new control-discretization method diminishes the chattering while preserving the system trajectories, accuracy and insensitivity to matched disturbances. The unavoidable restrictions of low-chattering SMC discretization methods are discussed. Computer simulation demonstrates visual chattering removal.

1 Introduction

Sliding-mode control (SMC) [55, 58, 59] is widely used in the control of uncertain systems. To this end, a proper constraint σ = 0 is chosen to be exactly kept, where σ is some (often virtual) output available in real time. The constraint σ = 0 is kept by high-frequency control switching preventing any deviation of σ from 0, and the closed system is said to be in SM. As a result, SMC suppresses system uncertainties corresponding to bounded disturbances in the control channel, which results in the


high overall system performance. The relative degree of the output σ defines the SM order [31, 32, 55]. Unfortunately, SMC can also induce dangerous system vibrations (the chattering effect) due to the switching combined with discrete noisy sampling and/or parasitic dynamics [6, 8, 35].

Three main methods broadly used for alleviating these vibrations are SM regularization, dynamic extension (artificially increasing the relative degree [26]), and SMC discretization specially adjusted to lower vibrations.

SM regularization replaces relays with some continuous ("sigmoid") approximations [58]. It actually introduces a local singular perturbation. As a result, the high SM accuracy and the dynamics insensitivity to matched disturbances are partially destroyed. Moreover, the chattering due to the discrete noisy sampling is amplified.

The second method, dynamic extension, adds integrators into the feedback and requires the application of high-order SM (HOSM) control (HOSMC) [31, 56, 57]. Such HOSMC indeed establishes and keeps constraints of any relative degree and is capable of significantly diminishing the chattering [4, 32, 35]. In fact, in that case, the discontinuous control derivative suppresses the derivatives of the uncertain matched disturbance. Correspondingly, only smooth matched disturbances are removable, and higher-order derivatives of σ turn into new system states to be estimated in real time. Also, the recently proposed integral-action method [44] suffers from a similar information deficiency.

This paper studies low-chattering discretization methods remaining "faithful" to the original discontinuous system dynamics. In the sequel, by discretization we mean replacing the original continuous-time dynamic system, or some of its subsystems, with discrete-time counterparts. The ultimate requirement is that the solutions and the trajectories of the resulting hybrid system converge to the solutions and the trajectories of the original system as the maximal discretization time interval vanishes. That formulation is intentionally vague in order to cover most of the currently available discretization methods. This paper studies finite-dimensional systems, and the corresponding discretized solutions uniformly converge to the theoretically established ideal continuous-time solutions over each closed time interval. The explicit dependence of the obtained solutions on the sampling/discretization time step separates the discretization approach from the first two methods.

Note that, generally speaking, the chattering due to sampling noises [35] is only reducible at the cost of significant performance degradation. Indeed, one cannot distinguish output variations due to sampling noises from the actual state variations. Also, high-frequency internal system vibrations all but assure chattering, independently of the applied chattering-reduction methods.

Traditional discretizations of discontinuous systems employ the (explicit) Euler method and usually result in significant chattering [16]. Implicit discretization methods [1, 9, 10, 12, 25, 48] develop special Euler-method modifications to resolve the issue. The traditional Euler discretization can be described as a one-step forwards-in-time recursion, which is repeated at each sampling/discretization time step. That discretization is used either for the numeric simulation of a closed-loop system,


or for determining the next control injection value in real-time applications. In the latter case, the input makes use of discretely sampled noisy outputs. Much more advanced methods, like Runge–Kutta methods, are not available if the dynamics are discontinuous or even only non-smooth.

The implicit Euler procedure corresponds to the above Euler recursion repeatedly applied backward in time at each successive step in the normal forwards-in-time direction. In practice, it means solving the Euler recursion equation for the unknown future system state at each integration/sampling-time instant. The method theoretically requires the knowledge of the exact mathematical system model and numerically solving nontrivial equations at each discretization time step. The algorithm indeed suppresses the chattering of smooth switched systems in the absence of sampling noises, since the moment when the trajectory enters the SM is predicted in advance, and starting from that moment, the SM motion is driven by the standard smooth Filippov SM dynamics. Unfortunately, in reality, such exact model knowledge is mostly not available. The obstacle is overcome by some short-time approximate state prediction and by using set-valued functions instead of discontinuities. The approach invokes some beautiful mathematics [11, 25, 28–30, 46, 47, 60]. Such semi-implicit methods still inevitably assume a sufficiently detailed system model and a sampling step known in advance. They are especially effective for first-order SMs, but higher-order SM applications are also known [9, 10, 49]. The realization for higher relative degrees might become complicated due to the repeated numeric solution of nonlinear algebraic equations. These numeric procedures often also prevent accurately estimating the system accuracy in the presence of sampling noises.

This paper further develops a novel simple discretization approach [23] to the Filippov dynamics [18]. The method preserves the continuous-time system trajectories and accuracy asymptotics; it also does not cause performance degradation. Similar to the above methods, it still depends on the control structure, but that dependence is much weaker. Only standard SMC uncertainty conditions are imposed for any relative degrees and SM orders, and the sampling periods can be unknown and variable. The real-time discretization recursion step is always described by analytic formulas developed in advance. The procedure does not involve online numerical solutions of equations. Utilizing this approach, we have recently proposed simple low-chattering discretizations of SM-based filtering differentiators [24], and have found proper parametric sets for these differentiators up to the order 12.

In the following, we develop low-chattering discretizations for two families of arbitrary-order homogeneous SM controllers [14], as well as for the twisting controller [31]. Only the standard SMC uncertainty conditions are imposed. In the output feedback format, the new scheme utilizes the above low-chattering discrete differentiators [24]. The simplicity and efficacy of the method are validated by extensive computer experiments.


Notation. Let sat s = max[−1, min(1, s)]. We use the widely accepted special power function ⌈s⌋^m = |s|^m sign(s), m ≥ 0. The norm ‖x‖ stands for the standard Euclidean norm of x, B_ε = {x : ‖x‖ ≤ ε}; correspondingly, ‖x‖_h is a homogeneous norm, B_{hε} = {x : ‖x‖_h ≤ ε}. A function of a set is the set of the function values on this set. Let a ⋆ b be a binary operation for a ∈ A, b ∈ B; then A ⋆ B = {a ⋆ b | a ∈ A, b ∈ B}. Depending on the context, we use the same notation ξ⃗_k for both (ξ, ξ̇, …, ξ^{(k)}) and (ξ_0, ξ_1, …, ξ_k). The finite-difference operator δ_j A = A(t_{j+1}) − A(t_j) is introduced for any sampled function A(t_j).
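The notation above translates directly into two small Python helpers used in the sketches below; the function names are ours.

```python
import numpy as np

def sat(s):
    """Saturation sat s = max[-1, min(1, s)]."""
    return float(np.clip(s, -1.0, 1.0))

def spow(s, m):
    """Special power function  ⌈s⌋^m = |s|^m sign(s),  m >= 0."""
    return np.sign(s) * np.abs(s) ** m
```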

2 Discontinuous Dynamic Systems

Recall a few notions. Let T R^{n_x} stand for the tangent space to R^{n_x}, and let T_x R^{n_x} denote the tangent space at the point x ∈ R^{n_x}. Consider the differential inclusion (DI)

ẋ ∈ F(x),  x ∈ R^{n_x},  F(x) ⊂ T_x R^{n_x}.   (1)

As usual, solutions of (1) are defined as locally absolutely-continuous functions x(t) satisfying DI (1) for almost all t. Note that whereas the right-hand side of (1) is often assumed embedded in R^{n_x} [18, 26, 51], we need the tangent-space formalism for the homogeneity considerations (see the Appendix). We call a differential inclusion (DI) (1) a Filippov DI if the right-hand vector set F(x) is non-empty, compact, and convex for any x, and F is an upper-semicontinuous set-valued function of x [18, 33]. The upper-semicontinuity of F means that the maximal distance of the vectors of F(x) from the vector set F(y) tends to zero as x approaches y. Solutions of Filippov DIs feature most of the usual properties, including the existence of a local solution of the Cauchy problem and the extendability of solutions till the boundary of a compact region. Naturally, solutions are not unique, but they still continuously depend on the right-hand side of (1). More importantly, in fact, they continuously depend on the graph of the DI [18]. Consider a differential equation (DE) ẋ = f(x), x ∈ R^{n_x}, with a locally essentially bounded Lebesgue-measurable right-hand side f: R^{n_x} → T R^{n_x}. It is understood in the Filippov sense [18], if its solutions are defined as the solutions of the special DI ẋ ∈ K_F[f](x) for


Fig. 1 Filippov procedure. a Graph Γ ( f ) of the DE x˙ = f (x) = 1 − 2 sign x. b Graph of the corresponding Filippov inclusion

K_F[f](x) = \bigcap_{\mu_L N = 0} \; \bigcap_{\delta > 0} \mathrm{co}\, f\big((x + B_\delta)\setminus N\big).   (2)

Here co denotes the convex closure, whereas μ_L stands for the Lebesgue measure. The formula (2) introduces the famous Filippov procedure, and the corresponding DI ẋ ∈ K_F[f](x) is a Filippov DI [18]. In the sequel, in the non-autonomous case, we add the virtual coordinate t, ṫ = 1. Filippov solutions satisfy all widely accepted alternative definitions for solutions of discontinuous dynamic systems. Actually, they constitute the minimal set of such solutions.

The graph Γ(F) of the DI (1) over the domain G ⊂ R^{n_x} is defined as the set of pairs Γ(F) = {(x, ξ) | x ∈ G, ξ ∈ F(x)}. In spite of Γ(F) ⊂ R^{n_x} × T R^{n_x}, it is locally topologically isomorphically embedded in R^{2n_x} for any fixed coordinates. Let G be closed, and let F(x) be non-empty, compact, and locally bounded for any x ∈ G; then F is upper-semicontinuous in G if and only if Γ(F) is closed [18]. Furthermore, if F is upper-semicontinuous and G is compact, then also Γ(F) is compact [18]. In other words, Filippov's procedure generates the minimal convex closure of the original DE graph (Fig. 1).

The set of solutions of the Filippov DI (1) defined over the segment [a, b], a ≤ 0 ≤ b, for initial conditions x(0) within a fixed compact set Ω ⊂ R^{n_x}, x(0) ∈ Ω, is compact in the C-metric. Moreover, the points of the corresponding trajectories constitute a compact set in R^{n_x}. In the usual case when a function φ is continuous almost everywhere, the set K_F[φ](x) is the convex closure of the limit values lim_{k→∞} φ(y_k) obtained along all possible continuity-point sequences y_k approaching x.

Approximation of solutions. A locally absolutely-continuous function ξ: I → G is further called a δ-graph-approximating (δ-GA) solution of the Filippov DI (1) over the closed domain G ⊂ R^{n_x}, δ ≥ 0, I ⊂ R, if it satisfies (ξ(t), ξ̇(t)) ∈ Γ(F) + B_δ for almost all t ∈ I. The time interval I here can be open, one-side open or closed, finite or infinite. In the case of a compact time interval I and any ε > 0, there exists δ > 0 such that every δ-GA solution of the Filippov DI (1) defined over I in a closed region G ⊂ R^{n_x} is C-metric distanced by not more than ε from a solution of DI (1). Vice


versa, if δ_k → 0, then any sequence of δ_k-GA solutions has a subsequence which uniformly converges to a solution of (1) over I [18].

Stability Notions. A point x_0 ∈ R^{n_x} is termed an equilibrium of the Filippov DI (1) if the constant function x(t) ≡ x_0 satisfies it. The equilibrium x_0 is (Lyapunov) stable if each solution starting in some of its vicinity ‖x(0) − x_0‖ < δ_0 at t = 0 is extendable till infinity in time, and for any ε > 0 there exists δ > 0, δ ≤ δ_0, such that any solution x(t) satisfying ‖x(0) − x_0‖ < δ satisfies ‖x(t) − x_0‖ < ε for any t ≥ 0. A stable equilibrium x_0 is called asymptotically stable (AS) if any solution x(t) starting in some of its vicinity satisfies lim_{t→∞} ‖x(t) − x_0‖ = 0; it is globally AS if lim_{t→∞} ‖x(t) − x_0‖ = 0 for any x(0) ∈ R^{n_x}. An AS equilibrium x_0 is called finite-time (FT) stable (FTS) if x_0 is AS and, for each initial condition x(0) from some vicinity of x_0, there exists a number T ≥ 0 such that x(t) ≡ x_0 for any t ≥ T. It is called globally FTS if such a T exists for any initial condition x(0) ∈ R^{n_x}. The equilibrium x_0 is termed fixed-time (FxT) stable (FxTS) [50, 53, 54] if it is globally FTS and there is an upper transient-time constant bound T > 0 valid for all solutions and initial conditions. Locally (globally) AS autonomous Filippov DIs possess proper local (global) smooth Lyapunov functions [13].

3 Discretization of Filippov Dynamic Systems

In this section, we introduce a new simple discretization method. Let the controlled system ẋ = X(t, x, u) have the output σ ∈ R^{n_s}. Consider a general closed-loop system

ẋ = X(t, x, u_1, u_2),  x ∈ R^{n_x}, u_1 ∈ R^{n_{u1}}, u_2 ∈ R^{n_{u2}},
u̇_1 = U_1(t, x, u_1, u_2, σ(t, x)),  u_2 = U_2(t, u_1, σ(t, x)),   (3)

with a general-form output feedback, a locally bounded and Lebesgue-measurable X, and locally bounded and Borel-measurable functions U_1, U_2. Let the system be understood in the Filippov sense, and let the corresponding Filippov DI be

d/dt (t, x, u_1)^T ∈ F_{txu}(t, x, u_1).   (4)

Let t^d = {t_j} = t_0, t_1, … be the sequence of sampling time instants, t_j < t_{j+1}, τ_j = t_{j+1} − t_j, where t_j ∈ [t_a, t_b]. Let sup_j τ_j ≤ τ, where τ is called the discretization density of t^d. Assume that such sampling-time sequences exist for any density τ > 0.

A discretization of the closed-loop system (3) is further defined as any algorithm producing δ(t^d)-GA solutions of the corresponding Filippov DI (4) which converge to its solutions as the sampling density τ vanishes.


There are two natural types of discretization: the discretization of the whole system, corresponding to computer simulation, and the feedback discretization, leaving the continuous-time system dynamics intact. The latter models practical applications and results in a hybrid system of the form

ẋ = X(t, x, u_1, u_2),  x ∈ R^{n_x}, u_1 ∈ R^{n_{u1}}, u_2 ∈ R^{n_{u2}},
u̇_1 = U_1^d(t, x, u_1(t_j), σ(t_j, x(t_j)), τ, τ_j),  t ∈ [t_j, t_{j+1}),  j = 0, 1, …,
u_2 = U_2^d(t_j, u_1(t_j), σ(t_j, x(t_j)), τ, τ_j),   (5)

where U_1^d, U_2^d are some discretized controls replacing U_1, U_2. The following theorem is a direct corollary of the above Filippov results (Fig. 2a).

Theorem 1 Let the right-hand side of (5) be distanced by not more than δ_τ ≥ 0 from the graph Γ(F_{txu}) of the Filippov DI (4) over a compact set Ω ⊂ R^{n_x + n_{u1} + 1}, and let δ_τ depend on the discretization density τ and tend to 0 as τ → 0. Consider the set of solutions of (5) (discretized solutions) taking initial values in a compact subset Ω_0, (t_0, x(t_0), u_1(t_0)) ∈ Ω_0 ⊂ Ω, defined over a fixed time segment [t_a, t_b], t_0 ∈ [t_a, t_b], and staying in the region, (t, x(t), u_1(t)) ∈ Ω, for t ∈ [t_a, t_b]. Then these solutions uniformly converge to the set of solutions of (4) as τ → 0.

A theorem similar to Theorem 1 holds provided the discretization is extended to the closed-loop dynamics of the whole state (computer simulation case).

Remark It follows from Theorem 1 that one can try any reasonable discretization of a system without risking its destruction or performance degradation, which is a great feature for practical implementation. On the other hand, the chattering-attenuation proof is often complicated, since the very notion of chattering is vague, and the criteria are only qualitative [35]. Neither the vibration magnitude, nor its frequency, nor both of them together determine the chattering intensity. That is why all chattering-reduction methods usually only contain the proof of the system stability and, sometimes, of the convergence of the solutions to the ideal ones. No general formal claims of chattering removal can be formulated. In particular, the basic proven results for the implicit Euler discretization are reducible to the preservation of the system asymptotic stability and to robustness with respect to certain disturbances. The accuracy in the presence of noises is usually estimated experimentally. In the following, we demonstrate that a proper simple feedback discretization can significantly diminish the system chattering in the absence of noises, or when the noises are small (usually very small). Note once more that, in general, it is not possible to remove the chattering caused by sampling noises.

3.1 Example: Alternative Discretization of Relay Control

Note that a general discretization of the first-order SMC is considered as a simple particular case in Sect. 5. Consider the scalar SMC system


ẋ = h(t) + g(t)u,  u = −2 sign x.   (6)

The graph of the closed-loop system (6) for h = g = 1, f = h + gu, is shown in Fig. 1a, while the graph of the corresponding Filippov DI appears in Fig. 1b. It contains the vertical segment [−1, 3] at x = 0. According to Theorem 1, we are to choose a proper value (selector) of ẋ(t_j) for each Euler step

x(t_{j+1}) = x(t_j) + ẋ(t_j) τ_j,  0 < τ_j ≤ τ,
(x(t_j), ẋ(t_j)) ∈ Γ(K_F(f)) + B_{ε(τ)},  lim_{τ→0} ε(τ) = 0,   (7)

providing for reasonably smooth convergence of (x(t_j), ẋ(t_j)) to an infinitesimally small vicinity of 0 ∈ R² or even asymptotic convergence to 0 for each sampling density τ (Fig. 2a).

Let now τ > 0 be small, and choose some k_0 ≥ 0. Replace the vertical segment with a thin vertical rectangle of the width 2k_0τ, obtaining the new Filippov DI (for h(t) = 1, g(t) = 1)

\dot{x} \in \begin{cases} \{-1\} & \text{for } x > k_0\tau, \\ [-1, 3] & \text{for } |x| \le k_0\tau, \\ \{3\} & \text{for } x < -k_0\tau. \end{cases}   (8)

The graph of the DI (8) obviously lies inside the "swelled" graph Γ(K_F(f)) + B_ε for ε = k_0τ (Fig. 2a). Assign the values ẋ(t_j) from the inclusion (8). According to Theorem 1, different choices of k_0 ≥ 0 and ẋ(t_j) yield different discretization schemes.

The standard Euler discretization of (6) corresponds to k_0 = 0 (Fig. 2b),

x(t_{j+1}) = x(t_j) + (1 − 2 sign x(t_j)) τ_j.   (9)

The simple alternative discretization

x(t_{j+1}) = x(t_j) + \mathrm{sat}\!\left(\frac{|x(t_j)|}{|2\,\mathrm{sign}\, x(t_j) - 1|\,\tau_j}\right)(1 - 2\,\mathrm{sign}\, x(t_j))\,\tau_j   (10)

steers x(t) to 0 in finite time (FT) and corresponds to k_0 ≥ 3 (Fig. 2c). It requires the exact knowledge of the system. Another alternative discretization

x(t_{j+1}) = x(t_j) + [h(t_j) + g(t_j)u(t_j)] τ_j,  u(t_j) = −2 sat(x(t_j)/(4τ_j)),   (11)

corresponds to k_0 ≥ 4 and asymptotically stabilizes the system (6) for h = g = 1 (Fig. 2d). Moreover, scheme (11) remains effective for any h(t) ∈ [−1, 1], g(t) ∈ [1, 1.5], in which case x(t) converges into a vicinity of zero.


Fig. 2 First-order SMC (6) for h = g = 1 and its discretizations. a The proposed discretization method for x˙ = f (x) = 1 − 2 sign x. b The standard Euler discretization (9). c Alternative discretization (10) utilizing the knowledge of the system provides for the FT stability. d Alternative discretization (11) is effective for any h, g, |h| ≤ 1, |g| ∈ [1, 1.5] and provides for the asymptotic stability for h = g = 1

The equality x(t) = −(τ/4)·h(t)/g(t) + O(τ²) is kept, provided the equivalent control u_eq = −h/g and its derivatives u̇_eq, ü_eq are bounded, |u_eq| ≤ const < 2. Any k_0 > 3 corresponds to the scheme. The discretized control of (11) also stays effective for the continuous-time system (6) (see Sect. 5). Obviously, one can propose many other low-chattering discretization schemes for the relay control.
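To make the comparison concrete, here is a minimal Python sketch of the standard Euler scheme (9) against the saturated scheme (11) for h = g = 1. The script is ours; in particular, the saturation layer width 4τ used below is an assumption consistent with k_0 ≥ 4.

```python
import numpy as np

def sat(s):
    return max(-1.0, min(1.0, s))

tau, n = 0.01, 200
x9  = [1.0]    # standard Euler discretization (9)
x11 = [1.0]    # saturated discretization (11), here with h = g = 1

for j in range(n):
    u9 = -2.0 * np.sign(x9[-1])
    x9.append(x9[-1] + (1.0 + u9) * tau)

    u11 = -2.0 * sat(x11[-1] / (4.0 * tau))      # assumed layer width 4*tau
    x11.append(x11[-1] + (1.0 + u11) * tau)

print(x9[-4:])    # keeps switching with O(tau) amplitude around zero
print(x11[-4:])   # settles monotonically to a small O(tau) offset
```

In this scalar setting the difference between the two selectors is already visible: the sign-based scheme never stops switching, whereas the saturated one enters the thin layer |x| ≤ 4τ and stays there without further switching.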

4 Homogeneous Output Regulation

The natural way of implementing the proposed discretization method is local in time and state (Theorem 1). A system homogeneity (see the Appendix) extends the system features from any vicinity of the origin to the whole state space. Correspondingly, the homogeneity allows extending any system treatment, including discretization, to the whole space [49]. Unfortunately, effective discretization still depends on the structure of the concrete systems and/or controllers. In the following, we introduce the standard SMC problem and the concrete homogeneous single-input single-output (SISO) SM controllers [14] to be discretized in the sequel.

Standard SMC problem [32]. Let the dynamic system be


ẋ = a(t, x) + b(t, x)u,  σ = σ(t, x),   (12)

where x ∈ Rⁿ, a: R^{n+1} → T_x Rⁿ, b: R^{n+1} → T_x Rⁿ and σ: R^{n+1} → R are uncertain smooth functions, and u ∈ R is the control. The output σ might, for example, be a tracking error. Solutions of (12) are understood in the Filippov sense, which allows the application of discontinuous controls u(t, x). For simplicity, any solution of (12) is assumed forward complete (i.e., extendable in time till infinity), provided the corresponding control function of time u(t, x(t)) stays bounded along the solution x(t).

System (12) is assumed to have a constant relative degree r [26]. It means that

σ^{(r)} = h(t, x) + g(t, x)u   (13)

holds, where the function g never vanishes [26]. Traditionally for SMC, the smooth functions h(t, x) and g(t, x) are assumed unknown, but bounded and satisfying the conditions

|h(t, x)| ≤ C,  0 < K_m ≤ g(t, x) ≤ K_M,   (14)

where C, K_m, K_M > 0, as well as r ≥ 1, are the problem parameters. The SMC task is to establish and keep the constraint σ ≡ 0. Obviously, the uncertain dynamics (13), (14) imply the quite certain DI

σ^{(r)} ∈ [−C, C] + [K_m, K_M]u.   (15)

Denote σ⃗_k = (σ, σ̇, …, σ^{(k)}) ∈ R^{k+1}. Introduce some discontinuous SMC

u = α u*_r(σ⃗_{r−1}).   (16)

The mode σ(t, x(t)) ≡ 0 is further called rth-order SM (r-SM) [31, 32] if the corresponding r-SM set σ⃗_{r−1} = 0 is locally or globally the integral set of the Filippov DE (12). Provided the r-SM set is an attracting forward-invariant set, controller (16) is called an r-SM controller (r-SMC further stands for both "r-SM controller" and "r-SM control").

In particular, due to its form, the following control is called the "rational" r-SMC [14]:

u^*_r = u_{Qr}(\vec{\sigma}_{r-1}) = -\frac{\lceil\sigma^{(r-1)}\rfloor^{1/1} + \beta_{r-2}\lceil\sigma^{(r-2)}\rfloor^{1/2} + \cdots + \beta_0\lceil\sigma\rfloor^{1/r}}{|\sigma^{(r-1)}|^{1/1} + \beta_{r-2}|\sigma^{(r-2)}|^{1/2} + \cdots + \beta_0|\sigma|^{1/r}},   (17)

whereas the next one is the "relay" r-SMC [14],

u^*_r = u_{Sr}(\vec{\sigma}_{r-1}) = -\operatorname{sign}\big(\lceil\sigma^{(r-1)}\rfloor^{1/1} + \beta_{r-2}\lceil\sigma^{(r-2)}\rfloor^{1/2} + \cdots + \beta_0\lceil\sigma\rfloor^{1/r}\big).   (18)


Both controllers (17) and (18) establish and keep the r-SM σ = 0 for the same properly chosen parametric set β_0, …, β_{r−2} > 0, whereas the parameter α > 0 is taken large enough, but differently for u_{Qr} and u_{Sr}. Naturally, α > 0 equals the magnitude of the control. The value of u_{Qr}(0) is defined voluntarily, for it does not affect the Filippov solutions (2). Note that in both cases u*_1 = −sign σ for r = 1.

Obviously, the rational control (17) is continuous everywhere except σ⃗_{r−1} = 0. Such controls are called quasi-continuous (QC) [34] (see the Appendix). Contrary to that, the discontinuity set u_{Qr} = 0 of the relay controller is comprised of subsurfaces occasionally possessing infinite gradients. Hence, during the transient, control (18) cannot keep u_{Qr} = 0 in SM at such points; nevertheless, temporary SMs u_{Qr} = 0 can arise over some time intervals. Correspondingly, solutions of (12), (16), and (14) satisfy the resulting Filippov DI

σ^{(r)} ∈ [−C, C] + α[K_m, K_M] K_F[u*_r](σ⃗_{r−1}).   (19)

Obviously, these controls require the availability or the real-time estimation of the r − 1 derivatives σ̇, …, σ^{(r−1)}. Further, any continuous-time (possibly discontinuous) feedback control function in DIs is assumed replaced by the result of its Filippov procedure (2).

Assigning the weights deg σ^{(i)} = r − i renders DI (19) homogeneous of the homogeneity degree (HD) −1 = −deg t (see the Appendix [33]). Obviously, deg σ^{(r)} = deg u*_r = 0. Assume that the sampling-noise magnitude and the sampling-time period never exceed ε_0 ≥ 0 and τ > 0, respectively. Then the homogeneity implies that the accuracy [33]

|σ^{(i)}| ≤ μ_i ρ^{r−i}   (20)

is established in FT and kept for some constants μ_i > 0 and ρ = max[τ, ε_0^{1/r}]. The formula (20) is also correct for exact sampling in continuous time, τ = ε_0 = 0, with possibly different coefficients μ_i.
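For reference, a direct Python transcription of the two controllers is given below; the function names are ours, and the value returned on the discontinuity set is an arbitrary choice, as allowed for u_{Qr}(0) above.

```python
import numpy as np

def spow(s, m):
    # special power ⌈s⌋^m = |s|^m sign(s)
    return np.sign(s) * np.abs(s) ** m

def u_rational(sigma, beta):
    """Rational (quasi-continuous) r-SMC (17).
    sigma = (σ, σ', ..., σ^(r-1)),  beta = (β_0, ..., β_{r-2})."""
    r = len(sigma)
    num = spow(sigma[r - 1], 1.0)
    den = abs(sigma[r - 1])
    for i in range(r - 1):                      # term with σ^(i), i = 0..r-2
        num += beta[i] * spow(sigma[i], 1.0 / (r - i))
        den += beta[i] * abs(sigma[i]) ** (1.0 / (r - i))
    return -num / den if den > 0.0 else 0.0     # arbitrary value at the origin

def u_relay(sigma, beta):
    """Relay r-SMC (18)."""
    r = len(sigma)
    s = spow(sigma[r - 1], 1.0)
    for i in range(r - 1):
        s += beta[i] * spow(sigma[i], 1.0 / (r - i))
    return -np.sign(s)

# Example for r = 3:
print(u_rational((0.5, -1.0, 0.2), beta=(1.0, 2.0)))
print(u_relay((0.5, -1.0, 0.2), beta=(1.0, 2.0)))
```

Note that the rational control always returns a value in [−1, 1], since the numerator is bounded by the denominator term by term.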

4.1 Differentiation and Filtering Based on SMs

SMC technique traditionally requires differentiation of the sliding variable. Let Lip_{n_d}(L) stand for the set of all scalar functions φ: R₊ → R, R₊ = [0, ∞), possessing a Lipschitzian n_d-th derivative with the Lipschitz constant L > 0. It implies that |φ^{(n_d+1)}| ≤ L holds for almost all t ∈ R₊.


Let the sampled input signal f(t), t ≥ 0, be of the form f(t) = f_0(t) + η(t), where f_0 ∈ Lip_{n_d}(L) is the unknown basic signal to be differentiated and η(t) is a Lebesgue-measurable noise. The numbers L, n_d are assumed known, and the function f(t) is available (sampled) in real time. An n_d-th-order differentiator, n_d ≥ 0, is defined as any algorithm producing functions z_0, …, z_{n_d}: R₊ → R. The functions z_i(t), i = 0, 1, …, n_d, are assumed to have the meaning of real-time estimations of f_0^{(i)}(t). Let a differentiator be exact after some FT transient on all inputs f = f_0 ∈ Lip_{n_d}(L) for η(t) ≡ 0. Then it is called asymptotically optimal [41] if, for some μ_i > 0, any f_0 ∈ Lip_{n_d}(L), any ε_0 ≥ 0, and any bounded Lebesgue-measurable noise η, ess sup |η(t)| ≤ ε_0, it in FT establishes the differentiation accuracy |z_i(t) −

f 0(i) (t)|



n d +1−i i n +1 d μi L ε0 n d +1

, i = 0, 1, . . . , n d .

(21)

Accuracy asymptotics (21) are proved to be the best possible for bounded noises [41]. i

It is also proved there that μi ≥ 2 n d +1 always holds, i = 0, . . . , n d . Arbitrary-order asymptotically-optimal differentiators were for the first time proposed in [32], and they are SM-based. The filtering differentiator [40, 42] of the differentiation order n d ≥ 0 and the filtering order n f ≥ 0 has the form n d +n f

1

w˙ 1 = −λ˜ n d +n f L nd +n f +1 w1  nd +n f +1 + w2 , ... n f −1

n d +2

w˙ n f −1 = −λ˜ n d +2 L nd +n f +1 w1  nd +n f +1 + wn f , nf n d +1 w˙ n f = −λ˜ n d +1 L nd +n f +1 w1  nd +n f +1 + wn f +1 , wn f +1 = z 0 − f (t), n f +1

nd

z˙ 0 = −λ˜ n d L nd +n f +1 w1  nd +n f +1 + z 1 , ... n d +n f

(22)

1

z˙ n d −1 = −λ˜ 1 L nd +n f +1 w1  nd +n f +1 + z n d , z˙ n d = −λ˜ 0 L sign(w1 ), | f 0(n d +1) | ≤ L .

(23)

It also features strong noise-filtering capabilities [24, 40]. In particular, it filters out even unbounded noises, provided their local iterated integrals of an order not exceeding n f are small [40]. Moreover, differentiator (27) directly extracts the equivalent control and its derivatives from the chattering SMC u(t) [42]. It also filters out random noises of small mean values [21, 22]. The variable wn f +1 = z 0 − f (t) is fictitious and is only introduced to keep the same formula for n f = 0. Indeed, in the case n f = 0 DEs (22) disappear and z 0 − f (t) is substituted for w1 in (23). The resulting derivative estimator is the standard

Low-Chattering Discretization of Sliding Modes

241

differentiator [32] mentioned above. In particular, n d = n f = 0 determines the 0order differentiator z˙ 0 = −λ˜ 0 L sign(z 0 − f (t)), | f˙0 | ≤ L. Introduce the short notation for (22), (23) w˙ = Ωn d ,n f (w, z 0 − f, L), z˙ = Dn d ,n f (w1 , z, L),

(24)

for the proper parameters λ˜ = (λ˜ 0 , . . . , λ˜ n d +n f ) (Fig. 3). Let ess sup η(t) = ε0 and the maximal allowed sampling-time interval be τ > 0. It is proved in [32, 39] that in that case differentiator (24) in FT provides and holds the accuracy |z i (t) − f 0(i) (t)| ≤ μi Lρn d +1−i , i = 0, 1, . . . , n d , |w1 (t)| ≤ μw1 Lρn d +n f +1

(25)

ρ = max[(ε0 /L)1/(n d +1) , τ ],

(26)

for

˜ and some constants μw1 > 0, μi > 0 only depending on the choice of λ. In fact, also internal variables wk satisfy inequalities |wk (t)| ≤ μwk Lρn d +n f +2−k , k = 2, .., n f , for some μwk > 0. Note that these internal variables actually become quite large for general, possibly not bounded, filterable noises [40], but they do not directly influence the outputs z i . We do not consider such noises in this paper. The formulas (25) formally stay true for τ = 0 and ρ = (ε0 /L)1/(n+1) corresponding to continuous-time noisy sampling. Thus, differentiator (27) is exact and asymptotically optimal in spite of its filtering capabilities. Notation. Recall that for any sampled vector signal φ(t j ), its increment is denoted by δ j φ = φ(t j+1 ) − φ(t j ). The discrete differentiator [3]. The discrete version of (24) δ j w = Ωn d ,n f (w(t j ), z 0 (t j ) − f (t j ), L)τ j , δ j z = Dn d ,n f (w1 (t j ), z(t j ), L)τ j + Tn d (z(t j ), τ j ),

(27)

where the Taylor-like term Tn d ∈ Rn d +1 is defined as Tn d ,0 = Tn d ,1 =

1 z (t )τ 2 2! 2 j j 1 z (t )τ 2 2! 3 j j

...

+ ··· + + ··· +

1 z (t )τ n d , nd ! nd j j 1 z (t )τ n d −1 , (n d −1)! n d j j

(28)

1 z (t )τ 2 , 2! n d j j

Tn d ,n d −2 = Tn d ,n d −1 = 0, Tn d ,n d = 0, has the same features as its continuous-time counterpart (24). Terms Tn d are needed in the stand-alone numeric-differentiation applications in order to ensure the homogeneity of the discrete error dynamics and the standard continuous-time accuracy (25), (26) with possibly different coefficients μi .

242

A. Hanan et al.

4.2 Output Feedback Stabilization in Continuous Time First let σ be only available by its noisy measurements σ(t) ˆ = σ(t, x(t)) + η(t), |η| ≤ ε0 . The corresponding FT stabilization is obtained for α > 0 large enough and the closed-loop system σ (r ) ∈ [−C, C] + α[K m , K M ]K F [u ∗r ](z(t)), w˙ = Ωn d ,n f (w, z 0 − σ − η(t), L), z˙ = Dr −1,n f (w1 , z, L), L ≥ C + K M α.

(29)

The stabilization is exact for η = 0. Note that in the case η(t) ≡ 0 system (29) is homogeneous of the HD −1 and the weights deg z i = deg σ (i) = r − i, i = 0, 1, . . . , n d , deg wk = r + n f + 1 − k, k = 1, 2, . . . , n f + 1. Recall that wn f +1 = z 0 − σˆ is a fictitious variable. The steady-state system accuracy in the presence of 1/(n +1) noises is well-known [21, 22, 27] and is described by (20) for ρ = ε0 d .

4.3 Output Feedback Stabilization Using Discrete Differentiators Let now σ be discretely sampled as σ(t ˆ j ) = σ(t j , x(t j )) + η(t j ) for some sampling instants t0 , t1 , .., τ j = t j+1 − t j ≤ τ , and the bounded noise |η| ≤ ε0 . Denote (27), (28) by δ j (w, z)T = n d ,n f (w, z, z 0 − f, L , τ j )(t j ). Consider the stabilization of the DI (15) which still evolves in continuous time by the feedback zero-hold r -SMC (16) exploiting the discrete differentiator. The closedloop system contains continuous-time and discrete-time subsystems. Therefore, it is a hybrid system. It gets the form σ (r ) ∈ [−C, C] + α[K m , K M ]K F [u ∗r ](z(t j )), t ∈ [t j , t j+1 ), δ j (w, z)T = r −1,n f (w, z, z 0 − σ − η, L , τ j )(t j ), L ≥ C + K M α, |u ∗r | ≤ 1, 0 < τ j = t j+1 − t j ≤ τ .

(30)

Now consider the original system (12) of the relative degree r closed by the same feedback. The corresponding closed-loop hybrid system gets the form x˙ = a(t, x) + b(t, x)u(t j ), σ(t ˆ j ) = σ(t j , x(t j )) + η(t j ), u = αu ∗r (z(t j )), L ≥ C + K M α sup |u ∗r |, L > 0, t ∈ [t j , t j+1 ), ˆ j ), z(t j ), L)τ j . δ j (w, z)T = r −1,n f (w(t j ), z 0 (t j ) − σ(t

(31)

Theorem 2 Let the sampling noise satisfy |η(t)| ≤ ε0 , the sampling interval be bounded, 0 < t j+1 − t j ≤ τ , n f ≥ 0. Then the discrete output feedback control sta-

Low-Chattering Discretization of Sliding Modes

243

bilizes both systems (30) and (31) in FT providing the accuracy |σ (i) | ≤ γi ρr −i , 1/(n +1) i = 0, 1, . . . , r − 1, for ρ = max[ε0 d , τ ] and some γ0 , . . . , γr −1 > 0. Addition of the terms Tn f ,n d (z(t j ), τ j ) is optional and not required in the output feedbacks (31), (30). Unbounded noises are considered in [40]. Note that this theorem is formally extendable also to the limit case τ = 0 corresponding to the continuous sampling of σ in the presence of the Lebesgue-measurable noise η(t), |η(t)| ≤ ε0 . The proof of Theorem 2 is based on the accuracy estimation (66) of the disturbed homogeneous systems (see the Appendix). Also, the computer simulation case (complete discretization) is covered there [38, 39].

5 Low-Chattering Discretization of HOSMs It is not possible to reasonably define the chattering of a separate signal [35]. Indeed, only the time scaling distinguishes between sin(106 t) and sin(10−6 t). Therefore, we intentionally bound ourselves to the intuitive chattering understanding. It is well-known that SM control u in average approximates the equivalent control u eq = −h/g|σ≡0 [42, 59]. Correspondingly, in order to exclude the chattering of the ideal Filippov solution for (19), one needs the equivalent u eq itself not to chatter. For the same reason, since measurement noises can mimic the chattering of u eq , one cannot in general remove the control chattering in the presence of noises [35]. We also do not consider the chattering due to parasitic dynamics [7]. Thus, our goal is to diminish the high-frequency significant-magnitude vibrations of the SMC (30) in the case of exact discrete measurements for small enough sampling step τ and relatively slowly changing h, g. All available problem solutions are obtained under the same assumptions and prove the system practical stability in the absence of noises. The widespread discontinuity regularization [58] is highly sensitive to noises [35]. Artificial increase of the relative degree [4, 15] raises the sensitivity to noises due to the required higherorder differentiation. Also, continuous SM controllers with integral action [44] have differentiation issues and some chattering due to the non-Lipschitzian control. All Euler-based discretization methods for the discontinuous ODE x˙ = f (t, x) require to select a proper value of x(t ˙ j ) for each sampling/integration time step ˙ j )(t j+1 − t j ). The natural choice x(t ˙ j ) = f (t j , x(t j )) inevitably leads x(t j+1 ) = x(t to chattering. The implicit discretization schemes [1, 9] propose a feasible choice of x(t ˙ j ). The idea is that the knowledge or a proper estimation of the ideal Filippov solution at the ˙ j) next sampling step will allow to choose x(t ˙ j ) fitting that prediction. The value of x(t is chosen in correspondence with the Filippov procedure. It requires numeric solution of nonlinear equations for the proper value of x(t ˙ j ) at each sampling/integration step. It can be computationally difficult and requires some concrete knowledge of the system. The performance of such algorithms in the presence of noises usually cannot be theoretically established due to the involved numeric procedure.

244

A. Hanan et al.

Our method enlarges the Filippov inclusion, significantly expanding the set of available selections for x(t ˙ j ) so that no choice affects the standard limit performance of the system as the sampling intervals vanish. Some of these choices significantly reduce the chattering of the approximating solutions. Therefore, one needs to choose such a proper selector algorithm for all time instants t j . In the following, we suggest a simple discretization of the output feedback SMC (30) featuring significantly less chattering in the absence of noises and preserving the system performance in the presence of noises. The case of the direct measurements → of σ r −1 is obtained by trivially removing the observer from the feedback. Also, in the sequel the upper bound τ of the sampling step is assumed available, whereas the actual sampling steps are variable and unknown. The proposed alternative discretization should include discretization of both the differentiator [24] and the controller.

5.1 Low-Chattering Discretization of Differentiators Low-chattering discretization of SM-based differentiators is a well-known problem. Until recently, it was only solvable by implicit discretization methods [12, 48]. Let k L > 0 be the parameter of the low-chattering differentiator discretization chosen as in [24]. The following is the low-chattering discrete filtering differentiator [20]: ˆ δ j (w, z)T = n d ,n f (w,  z, z 0 − f, L, τ j )(t j ), (32) |w (t )| 1 j ˆ j ) = L sat , wτ = k L τ n d +n f +1 . L(t Lwτ

Let | f 0n d +1 | ≤ L f ≤ L. According to [24] it in FT provides for the accuracy of the form |z i − σ (i) | ≤ μdi Lρr −i , i = 0, . . . , r − 1, ρ = max[τ , (ε0 /L)1/(n d +1) ]. Here ε0 ≥ 0 is the (unknown) maximal noise amplitude and ρ = τ in the absence of noises. Moreover, if ε0 = 0, then the new accuracy is |z i − σ (i) | ≤ μ˜ di L f τ r −i , i.e., overestimated values of L do not affect it. Furthermore, if lims→∞ supt≥s | f 0n d +1 (t)| = 0 then differentiation errors asymptotically converge to zero, |z i − σ (i) | → 0 [24]. Note that the accuracy estimation holds for any bounded sampling steps and noises, i.e., for any ρ ≥ 0. In particular, in the limit case of exact continuous-timemeasurements τ = ε0 = 0, the differentiator becomes exact. Moreover, such differentiator of the differentiation order 3 and the filtering order 9 is numerically demonstrated to produce 3 asymptotically exact derivatives for the input cos(3 ln(t + 1)), the constant sampling step τ = 5 (five), and L = 1000 [24]. An optional choice of parameters [24] valid for any n d , n f ≥ 0, n d + n f = 0, 1, . . . , 12, is provided in Figs. 3, 4 which correspond to the sequence of recursiveform [32] parameters 1.1, 1.5, 2, 3, 5, 7, 10, 12, 14, 17, 20, 26, 32, . . .. Recall that increasing k L preserves the validity of the parameters [24].

Low-Chattering Discretization of Sliding Modes

245

Fig. 3 Parameters λ˜ 0 , λ˜ 1 , . . . , λ˜ n d +n f of differentiator (22), (23) for n d + n f = 0, 1, . . . , 12

Fig. 4 Valid parameters k L of the discrete differentiator (32) corresponding to Fig. 3

5.2 Low-Chattering Discretization of Higher-Order SMC Consider the closed-loop SMC DI obtained from (31) using controller Ur , σ (r ) ∈ [−C, C] + [K m , K M ]u(t j ), σ(t ˆ j ) = σ(t j ) + η(t j ), u = αUr (z(t j )), L ≥ C + K M α sup |Ur |, L > 0, t ∈ [t j , t j+1 ), ˆ j ))τ j . δ j (w, z)T = r −1,n f (w(t j ), z 0 (t j ) − σ(t ˆ j ), z(t j ), L(t

(33)

ˆ j ) is defined by (32). The task is to develop a proper discretization Ur for Here L(t controllers u ∗r from (17) and (18). The main idea is to complement the powers of coordinates σ (i) up to one turning controllers u ∗r defined by (17) or (18) into a linear high-gain control in the infinitisemally small vicinity of the origin depending on τ 0.  γ  differentiation γ Each term σ (i) , γ ∈ (0, 1), is replaced with the term sat 1−γ (|σ (i) |/ζτ i ) σ (i) . The transformation is performed in infinitesimally thin layers |σ (i) | ≤ ζτ i along the surfaces of discontinuity and/or non-smoothness, which keeps the velocity vectors

246

A. Hanan et al.

close to the graph of the Filippov inclusion (19). Since limτ →0 ζτ i = 0 the distance of the vectors from the graph vanishes as τ → 0, producing a valid discretization (Theorem 1). Assigning the weights deg τ = deg t = 1, deg σ (i) = deg z (i) = r − i, deg w j = r + n f , and taking ζτ i = kir −i τ r −i for some ki > 0, obtain a homogeneously disturbed r -SMC system [38, 39]. It is natural to call such a discretization homogeneous. As a result, the accuracy of the discretization is established by [38, 39] and coincides with the standard worst-case accuracy of the homogeneous r -SM (20). That worst-case accuracy is improved in the degenerate case when the parameter C from (15) vanishes, which corresponds to the vanishing of the equivalent control. In that → particular case, in a small vicinity of the origin z = σ r −1 = 0, the system becomes alternatively homogeneous of the zero homogeneity degree and asymptotically stable for correspondingly chosen parameters. That asymptotic stability becomes global, provided parameters are properly assigned. Discretization of quasi-continuous controllers (17). Outputs z i of differentiator (32) are substituted for σ (i) , i = 0, 1, . . . , r − 1, ˆ δ j (w, z)T = n d ,n f (w,  z, z 0 − σ, L, τ j )(t j ), |w (t )| 1 j ˆ j ) = L sat , wτ = k L τ n d +n f +1 , L ≥ C + K M α. L(t Lwτ

(34)

producing the output feedback control. Pick some numbers kˆ0 , . . . , kˆr −1 , kh > 0, kh ∈ (0, 1], κ∗ > 0, to define the width of the layer ζτ i = kir −i τ r −i , ki = κ∗ kˆi , u(t) = α Ur (z(t

j )), t ∈ [t j , t j+1 ),

Ur (z) = − sat

P(z) , Q(z)

Q(z) kh Q τ

| ) 2 |zr −2 | 2 + · · · Q(z) = |zr −1 | + βr −2 sat( |zζτrr −2 −2 r −1 1 +β0 sat( |zζτ00| ) r |z 0 | r , 1 1

1

1

| ) 2 zr −2  2 + · · · P(z) = zr −1 + βr −2 sat( |zζτrr −2 −2 1

1

r −1

(35)

+β0 sat( |zζτ00| ) r z 0  r , ζτ i = kir −i τ r −i , ki = κ∗ kˆi , kˆi > 0, i = 0, 1, . . . , r − 1, κ∗ > 0 1

1

1

1

Q τ = ζτr1 −1 + βr −2 ζτr2 −2 + · · · + β0 ζτr0 . →

If σ r −1 are directly measured, system coordinates σ (i) are substituted back for z i in (35), i = 0.1, . . . , r − 1, and (34) is excluded. After some calculation get Q τ = qτ τ , qτ = kr −1 + βr −2 kr −2 + · · · + β0 k0 . Therefore, in the set |z i | ≤ (ki τ )r −i , Q(z) ≤ kh qτ τ the control function Ur from (35) gets the form

Low-Chattering Discretization of Sliding Modes

247

Ur (z) = −(kh qτ τ )−1 [zr −1 + β˜r −2 τ −1 zr −2 + · · · + β˜0 τ −(r −1) z 0 ], β˜i = βi k −(r −1−i) = βi (κ∗ kˆi )−(r −1−i) , i = 0, 1 . . . , r − 2,

(36)

i

which corresponds to the local output feedback high-gain control with the small parameters τ , κ−1 ∗ . Note that it only takes place in the discrete form of the control → and in a vicinity of σ r −1 = z = 0 of the diameter proportional to τ . Discretization of the relay r -SMC (18). Choose the discretization u(t) = α Ur (z(t

j )), t ∈ [t j , t j+1 ),

Ur (z) = − sat

P(z) kh Q τ

,

(37)

It is easy to see that in the set |z i | < (ki τ )r −i = ζτri−i , the control once more gets the form (36) of the local output feedback high-gain control with the small parameters τ , κ−1 ∗ . That is also a homogeneous discretization. Clearly, any set of βˆ0 , . . . , βˆr −2 > 0 is obtainable by a proper choice of kˆ0 , . . . , kˆr −1 . Also, substituting κki for ki , κ > 0 causes the division of the roots by κ for the polynomial s r −1 + β˜r −2 s r −2 + · · · + β˜0 . Theorem 3 Let α > 0, β0 , . . . , βr −2 > 0 be properly chosen ensuring the FT stability of the DI (15) under the QC control (17) (respectively the “relay” control (18)), the differentiator parameters be also properly chosen [24, 40], in particular, L > C + K M α holds. Then for any choice of positive numbers k0 , . . . , kr −1 > 0 (i.e., any corresponding kh , κ∗ ) and τ > 0, the system (respectively, (15), (34), and (37)) establishes the standard accuracy (20) for bounded sampling noises, and features the standard filtering capabilities for noises of the filtering orders not larger than n f ≥ 0 [40]. Thus, Theorem 3 guaranties the preservation of the standard properties of the homogeneous r -SMC. The following theorem deals with chattering-attenuation capabilities. Theorem 4 Let the conditions of Theorem 3 hold, and let ki = κ∗ kˆi , i = 0, . . . , r − 1, for some κ∗ > 0, kˆ0 , . . . , kˆr −1 > 0, also let kˆ0 , . . . , kˆr −2 > 0 generate the Hurwitz polynomial s r −1 + βˆr −2 s r −2 + · · · + βˆ0 , βˆi = βi kˆi−(r −1−i) . Let kˆr −1 also be small enough with respect to kˆ0 , . . . , kˆr −2 . Then, provided κ∗ and kh be chosen, respectively, sufficiently large and sufficiently small, and there are no sampling noises, the following statements are true for any τ > 0: • The choice ki = κ∗ kˆi , i = 0, 1, . . . , r − 1, ensures that the system (15), (32), (35) in FT stabilizes in the set |σ (i) | ≤ kir −i τ r −i , |z i | ≤ kir −i τ r −i . • System (15), (32), (35) which results from discretization, is exponentially stable → for C = 0 (i.e. σ r −1 , z → 0) for any kh ∈ (0, 1] small enough. Hence, the steady-state chattering removal is obtained in the absence of noises in the rather rare case, when the equivalent control is identical zero, u eq = −h(t, x)/

248

A. Hanan et al.

g(t, x) ≡ 0. In general, when u eq = 0, control u tracks u eq , provided u eq is smooth and slow. The QC control (17) is smooth outside of the coordinate planes z i = 0 (σ (i) = 0 for direct measurements) and continuous everywhere except the origin z = 0 (respec→ tively σ r −1 = 0), correspondingly the proposed discretization is expected to significantly diminish the chattering in the whole state space. In the case of the “relay” control (18), the theorem does not ensure chattering reduction during the transient even for h ≡ 0. Indeed, no special discretization is applied along the discontinuity set of (18).

5.3 Proof Sketch Consider the case of the control (17). The similar case of (18) is simpler. Consider the auxiliary set-valued homogeneous control function u ∈ α Uˆ r (z, , 1 ), Uˆ r (z, , 1 ) = −[1 − 1 , 1]U r (z, ), 1 ,  ∈ (0, 1), → 1 1 ( z r −2 ) = βr −2 |zr −2 | 2 + · · · + β0 |z 0 | r , → → 1 r −1 1 |zr −2 | 1  ˜ s ( z r −2 , s r −2 ) = βr −2 sat( s 2 ) 2 |zr −2 | 2 + · · · + β0 sat( |zs r0 | ) r |z 0 | r , →



r −2

0

ωs ( z r −2 , s r −2 ) = βr −2 sat( |zsr2−2 | ) 2 zr −2  2 + · · · + β0 sat( |zs r0 | ) 0 r −2 → −1 +ωs ω(z, ) = { |zzrr−1 , . . . , s ∈ [0, ( z )] }, s0 r −2 r −2 ⎧ |+˜ s u (z) for  < |z | < /, ∗r r −1 ⎪ ⎪ ⎨ [1 − , 1] sign z r −1 for |z r −1 | ≥ , z r −1 = 0, U r (z, ) = → ⎪ ω(z, ) for |zr −1 | ≤ , z r −2 = 0, ⎪ ⎩ [−1, 1] for z = 0. 1

1

r −1 r

1

z 0  r , (38)

It is easy to see that the graph of (38) is close to the graph of u ∗r (z) over any homogeneous ball for , 1 > 0 small enough. Thus, since (29) is FT stable for η = 0, also the additionally modified homogeneous system σ (r ) ∈ [−C, C] + α[K m , K M ] co(Uˆ r (z, , 1 ) + sign u ∗r (z)), w˙ = Ωn d ,n f (w, z 0 − σ, L), z˙ = Dr −1,n f (w1 , z, L)

(39) →

is FT stable for sufficiently small , 1 > 0. Recall that after a FT transient σ r −1 ≡ z is kept. Now Theorem 3 directly follows from the properties of homogeneous perturbations of system (39) [39]. Prove Theorem 4. Fix any ζτ i > 0, i = 0, . . . , r − 1. Consider the perturbation of system (39)

Low-Chattering Discretization of Sliding Modes

249

σ (r ) ∈ [−C, C] + α[K m , K M ] co U˜ r (z), w˙ = Ωn d ,n f (w, z 0 − σ, L), z˙ = Dr −1,n  f (w1 , z, L), → Uˆ r (z) + sign u ∗r (z) for |zr −1 | > ζτ r −1 or |zr −1 | ≤ ( z r −2 ), U˜ r (z) = → [−1, 1] for |zr −1 | ≤ ζτ r −1 and |zr −1 | > ( z r −2 ).

(40)



System (40) differs from (39) only in a small vicinity of z = σ r −1 = 0. Choosing sufficiently small ζτ r −1 > 0 obtain that the system trajectories stabilize in a small vicinity of the origin inside the set |z i | ≤ ζτ i , z i = σ (i) , i = 0, . . . , r − 2. It is easy to see that for sufficiently small kh > 0 we also obtain that α Ur (z) ∈ co U˜ r (z). Thus, the continuous analog of the system (15), (34), (35) also stabilizes in the same set. Let now |z i | ≤ ζτ i , z i = σ (i) , i = 0, . . . , r − 2 be kept. Opening the saturation functions obtain that in that vicinity of the origin the control gets the form Q(z) zr −1 + βˆr −2 zr −2 + · · · + βˆ0 z 0 ) , kh Q τ |zr −1 | + βˆr −2 |zr −2 | + · · · + βˆ0 |z 0 | βˆi = βi ( ζ1τ i )r −1−i , i = 0, 1 . . . , r − 2,

u = −α sat(

(41)

The corresponding polynomial is Hurwitz, which implies that for C = 0 the system (15), (41) is always locally AS for sufficiently small kh [37]. Otherwise, if C = 0, it converges to a vicinity of zero proportional to C. Let C = 0 then the system converges into the vicinity of zero |z i | ≤ ζτ i , z i = σ (i) , i = 0, . . . , r − 1. Now all saturation functions can be opened and one gets the equivalent AS linear system. By time transformation, one can replace τ with 1 obtaining z˙ i = z i+1 , i = 0, . . . , r − 2, z˙r −1 = −(kh qτ )−1 [zr −1 + β˜r −2 zr −2 + · · · + β˜0 z 0 ], (42) β˜i = βi ki−(r −1−i) = βi (κ∗ kˆi )−(r −1−i) . It is AS for sufficiently small kh > 0. Although that by itself does not automatically cause the asymptotical stability (AS) of the Euler discretization for the time steps not exceeding 1, the AS is ensured for κ∗ large enough. The same choice also provides the corresponding features of the discretized original homogeneous system [38, 39].

6 Discretization Examples for the Sliding Orders 3, 4 The proposed low-chattering output feedback SMC discretization is demonstrated here for the kinematic car model (r = 3) and the integrator chain (academic example, r = 4).4

4

The model and some parameters are reprinted from the paper [23] by authors with the permission by IEEE.

250

A. Hanan et al.

Fig. 5 Kinematic car model and the desired trajectory y = g(x)

6.1 Discretization of the 3-SM Car Control The example from [23] is studied here for other sampling periods and initial conditions. Consider the kinematic “bicycle” model of the vehicle motion [52] x˙ = V cos(ϕ), y˙ = V sin(ϕ) ϕ˙ = Vl tan θ, θ˙ = u,

(43)

where x and y are Cartesian coordinates of the rear-axle middle point (Fig. 5), l = 5 m is the distance between the axles, ϕ is the orientation angle, V = 10 m/s is the constant longitudinal velocity, θ is the steering angle (i.e., the actual real-life control), and u = θ˙ is the auxiliary computer-based control. The goal is to track some smooth trajectory y = g(x), unknown in advance, whereas g(x(t)), y(t) are sampled in real time. Therefore, the task is to make σ(x, y) = y − g(x) as small as possible. The tracking error σ is measured with the constant sampling step τ . The function g(x) = 10 sin(0.05x) + 5 is chosen for the simulation. The output relative degree obviously equals 3. Control (32), (35) is applied for r = 3, α = 1, β1 = 2, β0 = 1, k0 = 105/3 , k1 = 40001/2 , k2 = 300, kh = 0.3, L = 50, n f = 2, and k L = 5. The control is kept at 0 till t = 1, which provides some time for the differentiator convergence and is applied according to the formula only for t > 1. The initial conditions (x(0), y(0), ϕ(0), θ(0)) = (0, −5, −1, −0.4), z(0) = 0, w(0) = 0 are set. The simulation is performed by the Euler integration method for the integration step 10−4 . The discretization time interval τ of the output feedback control is different and is naturally never less than the integration step. Note that since the differentiator has n d + n f + 1 = 2 + 2 + 1 = 5 variables, the closed-loop system is of the dimension 4 + 5 = 9 and keeps second-order SMs both in the control and the observation contours. First take τ = 0.0001 and apply control (30) using the standard Euler discretization (5) (Fig. 6a–c). The resulting accuracy is described by the component-wise inequality (|σ|, |σ|, ˙ |σ|) ¨ ≤ (3.4 · 10−8 m, 1.4 · 10−4 m/s, 0.024 m/s2 ). Now apply

Low-Chattering Discretization of Sliding Modes

251

Fig. 6 a 3-SM car control, car trajectory and control for the coinciding sampling step τ = 10−4 and the integration step. a–c standard Euler discretization, d–f new discretization

the proposed new discretization. The performance is shown on the left of Fig. 6 in Fig. 6d–f. The corresponding accuracies are (|σ|, |σ|, ˙ |σ|) ¨ ≤ (1.4 · 10−8 m, 1.3 · 2 −4 −5 10 m/s, 1.0 · 10 m/s ). One observes that the trajectories are exactly the same, whereas the chattering is practically removed. The performance of the both standard and new discretizations for the sampling step τ = 0.03 is shown in Fig. 7. Recall that the integration step of the whole system is still 10−4 mimicking the continuous-time dynamics of the controlled process. The resulting accuracy for the standard discretization is (|σ|, |σ|, ˙ |σ|) ¨ ≤ ˙ (0.45 m, 1.2 m/s, 6.3 m/s2 ). The new discretization yields the accuracy (|σ|, |σ|, |σ|) ¨ ≤ (0.37 m, 0.23 m/s, 0.16 m/s2 ). Also, here, no visible chattering is present in spite of the large discretization step. The proposed discretization actually preserves the system performance from Fig. 6d–f.

6.2 Integrator Chain Control, r = 4 Consider the standard integrator chain →

σ (4) = h(t, σ 3 ) + (1.5 − cos(σ σ))u, ˙ |h| ≤ 1. →



(44)

Let the whole state σ 3 be available. The initial value σ 3 (0) = (4, −7, 9, −3) is taken. The integration/sampling step is τ = 10−4 in all simulations.

252

A. Hanan et al.

Fig. 7 a 3-SM car control, car trajectory and control (steering angle derivative), sampling step τ = 0.03, integration step 10−4 . a–c standard Euler discretization, d–f new discretization. New discretization keeps the performance from Fig. 6 in spite of the much larger sampling step

Choose the 4-SM “relay” control 1 1 1 ... ¨ 2 + 2σ ˙ 3 + σ 4 ). u = −10 sign( σ + 2σ

(45)

1

Let k0 = 102 , k1 = ( 23 ) 2 · 102 , k2 = 23 · 102 , k3 = 1, and kh = 0.3, ζτ i = ki4−i τ 4−i , i = 0, 1, 2, 3. According to (37), the proposed new discretization of the control is ⎛ u = −10 sat ⎝

1 1 2 1 3 1 ... ¨ ˙ σ +2 sat( |σ| ¨ 2 +2 sat( |σ| ˙ 3 +sat( |σ| ) 4 σ 4 ) 2 σ ) 3 σ ζτ 2

ζτ 1

  1 1 1 kh ζτ 3 +2ζτ22 +2ζτ31 +ζτ40

ζτ 0

⎞ ⎠

(46)

First consider the case when h = 0, Fig. 8, h = 0.5 cos(t + σ). ˙

(47)

Starting from t = 20, the standard discretization produces the accuracy ... (|σ|, |σ|, ˙ |σ|, ¨ | σ |) ≤ (1.5 · 10−11 , 8 · 10−9 , 8 · 10−6 , 8 · 10−3 ),

(48)

Low-Chattering Discretization of Sliding Modes

253

Fig. 8 4-SM stabilization of the system (44) for h = 0.5 · cos(t + σ) ˙ Fig. 9 Asymptotic stabilization of the system (44) for h ≡ 0 by the new discretization

whereas the new discretization leads to the accuracy ... (|σ|, |σ|, ˙ |σ|, ¨ | σ |) ≤ (1.5 · 10−9 , 1.5 · 10−9 , 1.5 · 10−9 , 1.5 · 10−9 ).

(49)

The case h = 0, Fig. 9. In that case, the standard dicretization starting from t = 20 implies the same accuracy (48), whereas the new discretization implies asymptotic convergence and the accuracy ... (|σ|, |σ|, ˙ |σ|, ¨ | σ |) ≤ (3 · 10−20 , 10−19 , 3 · 10−19 , 7 · 10−18 ) for t ≥ 20.

(50)

254

A. Hanan et al.

Note that, as we note in Sect. 5.2, discretization (46) is not intended to remove the chattering during the transient.

7 Discretization of the Twisting Controller Let r = 2. Then (15) gets the form ˙ σ¨ ∈ [−C, C] + α[K m , K M ]K F [u ∗2 ](σ, σ),

(51)

where the control can be, in particular, chosen in the form (17) or (18). The second one coincides with the homogeneous version of the popular terminal SMC [43]. Another option is to apply the twisting controller [31] ˙ β0 > β1 > 0; u ∗2 = UT w = −β0 sign σ − β1 sign σ, α(β0 +β1 )K m −C > 1. α(β0 − β1 )K m > C, α(β 0 −β1 )K M +C

(52)

Here the second line contains practically necessary and sufficient conditions for its FT convergence [31, 32] (only the equality cases are excluded). Choose any k0 , k1 > 0. Then the obvious discretization is Utwd (t) = −β0 sat



σ(t j ) k02 τ 2



− β1 sat



σ(t ˙ j) k1 τ



, t ∈ [t j , t j+1 ).

(53)

It is also a homogeneous discretization, since defining deg τ = deg t renders (53) homogeneous. Correspondingly, in a small vicinity of σ = σ˙ = 0, the control becomes a linear Hurwitz controller. In the case C = 0, an AS system of the zero homogeneity degree is produced. The corresponding theorems and the proofs are very similar to Sect. 5, but are much simpler.

7.1 Case Study: Targeting In the following example, discretization (53) is modified in order to account for the targeting-dynamics specifics: both h and g from (13) escape to infinity as the target is approached. Consider the geometry of the relative motion of missile (M) and target (T) in the vertical interception plane x, y (Fig. 10). Both objects are formally described as point masses. The planar missile-target engagement kinematics is described by the relative displacement vector r R and its derivatives v R , a R

Low-Chattering Discretization of Sliding Modes

255

Fig. 10 Planar engagement geometry

r R = rT − r M , v R = vT − v M , v˙ M = a M , v˙ T = a T , v˙ R = a R , r˙ M = v M , r˙ T = v T , r˙ R = v R .

a R = aT − a M , (54)

Here indexes M, T and R mean “missile”, “target” and “relative”, r T , v T , a T , r M , v M , a M , r R , v R , a R ∈ R2 . Consider the polar coordinates (r, λ), where r is the range from the missile to the target along the line of sight (LOS), and λ is the LOS angle with respect to the horizontal plane (the axis x). During the attack, the impact angle is quite small, and the dynamic equation for λ(t) is obtained from the plane kinematics of rigid bodies [45] 1 2˙ (55) r + (aT λ − a Mλ ). λ¨ = − λ˙ r r Here aT λ , a Mλ are the target and the missile acceleration components orthogonal to the LOS. It is well-known that the direct hit is assured, provided λ˙ = 0 is kept. Correspondingly, the goal is to make λ˙ vanish and keep it at zero. Assume real-time direct measurements of λ and r (from a seeker). Let the control u be the derivative of the missile acceleration component am orthogonal to the missile velocity v M . Let γ M be the angle between v M and the axis x. The corresponding dynamics are ˙ r + 1 (aTλ − am cos(λ − γ M )), λ¨ = − r2 λ˙ r (56) a˙ m = u. Correspondingly, the guidance task is reduced to steering the system (56) to the ˙ Obviously, the system relative degree is 2. manifold λ˙ = 0. I.e., let σ = λ.

256

A. Hanan et al.

Apply a second-order filtering differentiator (22) and (23) of some filtering order ˙ and λ, ¨ respectively. n f with the input λ and the outputs z 0 , z 1 , and z 2 estimating λ, λ, The corresponding structure of the output feedback twisting controller is u = α(β0 sign z 1 + β1 sign z 2 ).

(57)

... The differentiator parameter L is chosen so as to provide for the inequality | λ | ≤ L during the targeting mission. In the close proximity of the target, the differentiator inevitably diverges. While the standard discretization of the control is obvious, the proposed alternative controller discretization is u˜ = α(β0 sat

z1 · r z2 · r + β1 sat ), 2 K1τ α K2τ α

(58)

where τ > 0 is once more the upper bound of the sampling step and K 1 , K 2 > 0 are some proper gains.

7.2 Simulation of Targeting Control Let the initial position of the missile be at the origin x = 0 [m], y = 0 [m], and the initial position of the target be at the point x = 4000 [m], y = 2500 [m]. The initial velocities of the missile and the target are 200 and 150 [m/s], respectively. The initial velocity angles of the missile and the target are 45 and 180 [deg], respectively. The maximal normal acceleration of the missile is 20 [m/s2 ], both the missile and the target are subjects to the gravitation. The Euler integration step is 10−6 in all simulations. Let at be the component of the missile acceleration along the missile velocity. Correspondingly, one gets a˙ m = u, v˙ M = [am cos(γ M + π2 ), am sin(γ M + π2 ) + g]T , r˙ M = v M (t), v˙ T = a T = [at cos(γT (t) + π2 ), at sin(γT (t) + π2 ) + g]T , r˙ T = v T (t).

(59)

Here r, λ, γ satisfy the relations v

r

v

r = r R , λ = tan−1 ( r RR y ), γ M = tan−1 ( v MM y ), γT = tan−1 ( vTTy ). x

x

x

(60)

Intercepting maneuvering target. Let the target maneuver with the acceleration at (t) = 75 cos(2π f t)

(61)

Low-Chattering Discretization of Sliding Modes

257

Fig. 11 Intercepting maneuvering target. Standard discretization: τ = 10−6 , τ = 10−3 . New discretization: τ = 10−3

of the amplitude 75 [m/s2 ] and the frequency 0.5 [Hz]. The maximal lateral acceleration of the missile is 100 [m/s2 ]. Let the controller parameters be β0 = 5, β1 = 2, α = 50. Apply the filtering differentiator (24) of the differentiation order n d = 2, the filtering order n f = 2 with the Lipschitz parameter L = 100. First assume that r , λ are sampled with the sampling time step τ = 10−6 which equals the integration step. The standard discretization of the controller produces the missing distance of 2 · 10−5 [m]. The interception trajectories, the missile acceleration, and the missile acceleration derivative (the control) are shown in Fig. 11 on the left. Taking the sampling step τ = 10−3 get a miss distance of 1.35 · 10−4 [m]. The interception geometry, the missile acceleration, and the control (missile acceleration command derivative) are shown in the middle of Fig. 11. The performance of the proposed new discretization (58) for K 1 = 2000 and K 2 = 100 is shown on the right side of Fig. 11. The corresponding miss distance is 1.63 · 10−4 [m]. Intercepting ballistic target. Change the setup assuming that the target does not maneuver, i.e., at (t) ≡ 0. Take α = 20 and the larger sampling step τ = 10−2 . The standard discretization results in the miss distance of 0.0675 [m]. The corresponding

258

A. Hanan et al.

Fig. 12 Intercepting ballistic target. The standard discretization (on the left) and the new discretization (on the right) are performed with the sampling step τ = 10−2

targeting performance is shown on the left side of Fig. 12. The proposed new discretization results in the miss distance of 1.63 · 10−4 [m] (Fig. 12 on the right). The new discretization leads to much smoother missile acceleration am and u = a˙ m .

8 Conclusions The proposed low-chattering discretization of Filippov discontinuous systems is computationally simple, guaranties the preservation of system trajectories, stability, and accuracy. Contrary to other known discretization approaches, the standard HOSM uncertainty conditions are enough, the sampling steps are not required to be equal or to be known in advance. Only an upper bound for the sampling intervals is required to be known, but there are no restrictions on its value, and it can be very rough. The control or system discretization are performed by the formulas known in advance. No solution of any equation is required at each sampling step. Sampling noises do not destroy the system performance, and the theoretical system accuracy estimations are preserved. At the same time, the attenuation of chattering in the presence of significant noise in general is impossible.

Low-Chattering Discretization of Sliding Modes

259

The main result of the paper is Theorem 1 which establishes the new discretization approach. Its proof is almost trivial, but it opens infinitely many practical options for the low-chattering practical SMC discretization in real-life industrial applications. The approach has been applied for the low-chattering dicretization of general single-input single-output nonlinear systems of any relative degree. Two families of homogeneous SM controllers were considered and the corresponding discretizations were proposed. Theorem 3 assures that no choice of parameters can destroy the standard system performance, whereas possibly leaving the chattering intact, Theorem 4 establishes the chattering attenuation for suitable controlled processes. Simulation of the 3-SM car control and 4-SMC stabilization of an integrator chain are demonstrated. The new control discretization successfully suppresses the chattering of the 3-SM car control and keeps the trajectories visually the same for the sampling periods 0.0001 and 0.03. The SMC chattering is significantly reduced, at the same time, improving the system accuracy in the absence of noises and preserving the standard system accuracy in their presence. Another example is the low-discretization of the twisting controller. The discretization is applied for targetting maneuvering and ballistic missiles.

Appendix: Introduction to the Weighted Homogeneity Following is the brief presentation of the homogeneity theory.5 Assign the weights (degrees) m 1 , m 2 , . . . , m n x > 0 to the coordinates x1 , x2 , . . . , xn x of Rn x , and denote deg xi = m i . Dilations [2] are defined as the simplest linear transformations dκ (x) = (κm 1 x1 , κm 2 x2 , . . . , κm n x xn x ),

(62)

depending on the parameter κ ≥ 0. It is said that the function f : Rn x → Rm is of the homogeneity degree (HD) (weight) q ∈ R, deg f = q, if the equality f (x) = κ−q f (dκ x) holds identically for any x ∈ Rn x , κ > 0. The notions of a vector function f : Rn x → Rn x , f : x → f (x) ∈ Rn x , and a vector field f : Rn x → T Rn x , f : x → f (x) ∈ Tx Rn x are distinguished [55]. A vector-set function F(x) ⊂ Rm is termed homogeneous of the HD q ∈ R, if the equality F(x) = κ−q F(dκ x) holds identically for any x ∈ Rn x , κ > 0 [33]. On the other hand, a vector-set field F(x) ⊂ Tx Rn x (DI (1), x˙ ∈ F(x)) is termed homogeneous of the HD q ∈ R, if the equality F(x) = κ−q dκ−1 F(dκ x) holds identically for any x and κ > 0 [33]. That definition implies that d(κd−q t) dκ x ∈ F(dκ x) holds, i.e. DI (1) does not change under the homogeneous time-coordinate transformation (t, x) → (κ−q t, dκ x), κ > 0.

5

Standard notions introduced here are partially reprinted from the papers [21, 23] by authors with the permission by Springer Nature and IEEE.

260

A. Hanan et al.

Correspondingly, we often interpret −q as the weight of the time t, deg t = −q, and the weight deg t can be positive, negative or zero. The weight deg 0 is not defined (can be any number), whereas for any constant a = 0 get deg a = 0. One easily checks the simple rules of the homogeneous arithmetic: deg Aa = ∂ A = deg A − deg α, deg A˙ = deg A − a deg A, deg(AB) = deg A + deg B, deg ∂α deg t. A vector field f (x) ∈ Tx Rn x is treated as a particular case of a vector-set field F(x) ⊂ Tx Rn x when the set F(x) has only one element, F(x) = { f (x)}. Thus, in the case of the vector field f (x) and the differential equation (DE) x˙ = f (x) = ( f 1 (x), f 2 (x), . . . , f n x (x))T the classical weighted homogeneity deg x˙i = deg xi − deg t, i = 1, 2, . . . , n x is obtained. Homogeneous norm is defined as any continuous positive-definite function of the HD 1. It is not a norm in the standard sense. In this paper, homogeneous norms are denoted as ||x||h . Each two homogeneous norms || · ||h and || · ||h∗ are equivalent in the proportionality sense: there exist such γ∗ , γ ∗ > 0 that inequalities γ∗ ||x||h∗ ≤ ||x||h ≤ γ ∗ ||x||h∗ hold any x ∈ Rn x . Two standard homogeneous norms are 1

||x||h∞ = max {|xi | mi }, ||x||h = ( 1≤i≤n x





1 

|x| mi ) .

i

Note that ||x||h is continuously differentiable for x = 0, if  > max{m i }. The weights and HDs are defined up to proportionality. Let deg xi = m i , − deg t = q, then for any γ > 0 the redefinition deg xi = γm i , − deg t = γq implies that HDs of all functions/fields/inclusions are simply multiplied by γ. Correspondingly, homogeneous norms are not preserved. r -SM homogeneity. Under assumptions (14) system (12) of the relative degree r satisfies (13). Choose any control (16). The corresponding closed-loop DI (15) σ (r ) ∈ [−C, C] + [K m , K M ]u, → u = −αu ∗r ( σ r −1 ).

(63)

can be made homogeneous by a proper weights asignment. The presence of the segment [−C, C], C > 0, on the right-hand side of (63) implies that deg σ (r ) = 0. On the other hand, deg σ (r ) = deg σ − r deg t. It implies that deg t > 0. Let deg t = 1 and the system HD be −1. Then the only possible weights are deg σ = r , deg σ˙ = r − 1, …, deg σ (r −1) = 1. Furthermore, deg u ∗r = 0 is necessarily to hold, which implies that → → → ∀κ > 0 ∀ σ r −1 ∈ Rr : u ∗r ( σ r −1 ) ≡ u ∗r (dˆκ σ r −1 ), → dˆκ σ r −1 = (κr σ, κr −1 σ, ˙ . . . , κσ (r −1) ).

(64)

Low-Chattering Discretization of Sliding Modes

261

A function f : Rn x → Rm x is quasi-continuous (QC) [34], if it is continuous everywhere except the origin x = 0. In particular, all continuous functions are QC. A homogeneous DI (1) is AS (FTS, FxTS), if the origin x = 0 is its global AS (FTS, FxTS) equilibrium. A set D0 is termed homogeneously retractable if dκ D0 ⊂ D0 for any κ ∈ [0, 1]. A Filippov DI (1) is called contractive [33], if for some numbers T, ε > 0, there exist such a retractable compact D0 and a compact D1 , 0 ∈ D1 , D1 + Bε ⊂ D0 that for any solution x(t) the starting-point placement x(0) ∈ D0 implies that x(T ) ∈ D1 . ˜ A Filippov DI x˙ ∈ F(x) is termed a small homogeneous perturbation of the ˜ homogeneous Filippov DI x˙ ∈ F(x), if the set F(x) is close to F(x) in some vicinity ˜ ⊂ F(x) + Bε of x = 0. It formally means that whenever x ∈ B1 the relation F(x) holds for some small ε ≥ 0. The following theorem [19, 38, 39] describes the relations between the contractivity, stability, and the HD sign. Theorem 5 Consider the homogeneous Filippov DI (1) of the HD q. Then its asymptotic stability (AS) and contractivity features are equivalent and robust to small homogeneous perturbations. Moreover, • AS implies FT stability if q < 0; and FT stability always implies that q < 0; • if q = 0 AS is exponential; • if q > 0 AS implies the FxT attractivity of any ball Bε , ε > 0, but the convergence to the origin is slower than exponential. Any AS homogeneous Filippov DI possesses a differentiable homogeneous Lyapunov function [2, 5]. Accuracy of perturbed homogeneus DIs. Consider the retarded “noisy” perturbation of the AS Filippov homogeneous DI (1) of the negative HD q < 0 [33] x˙ ∈ F(x(t − [0, τ ]) + Bhε ), x ∈ Rn x ,

(65)

where τ , ε ≥ 0, Bhε = {x ∈ Rn x | ||x||h ≤ ε}. In principle, DI (65) requires some functional initial conditions for t ∈ [−τ , 0]. Correspondingly, the following result [32] imposes some homogeneity assumptions on these conditions [17, 39] which are always satisfied provided the solutions do not depend on the solution prehistory for t < 0. That assumption usually holds in the case of sampled systems comprised of smooth dynamic systems closed by digital dynamic controllers, which in their turn exploit discrete output sampling starting at t = 0. So assume that the solutions of (65) are independ of the values x(t) for t < 0, and fix any homogeneous norm || · ||h . Then the accuracy x ∈ γ Bhρ , ρ = max[ε, τ −1/q ], is established in FT for some γ > 0 independent of ε, τ and initial conditions.

(66)

262

A. Hanan et al.

That accuracy is established for ρ = ε and any sufficiently small τ [17]. If q > 0, one still takes ρ = ε, but the initial value x(0) and ε are to be uniformly bounded, whereas τ is to be sufficiently small for each fixed R, x(0) ∈ B R (it is the most problematic case [17], since the system can escape to infinity faster than any exponent [36]). A similar result is also true for the implicit Euler discretization with the sampling step τ [17].

References 1. Acary, V., Brogliato, B.: Implicit Euler numerical scheme and chattering-free implementation of sliding mode systems. Syst. Control Lett. 59(5), 284–293 (2010) 2. Bacciotti, A., Rosier, L.: Liapunov Functions and Stability in Control Theory. Springer, London (2005) 3. Barbot, J.P., Levant, A., Livne, M., Lunz, D.: Discrete differentiators based on sliding modes. Automatica 112, 108,633 (2020). https://doi.org/10.1016/j.automatica.2019.108633 4. Bartolini, G., Ferrara, A., Usai, E.: Chattering avoidance by second-order sliding mode control. IEEE Trans. Autom. Control 43(2), 241–246 (1998) 5. Bernuau, E., Efimov, D., Perruquetti, W., Polyakov, A.: On homogeneity and its application in sliding mode control. J. Frankl. Inst. 351(4), 1866–1901 (2014) 6. Boiko, I.: Frequency domain analysis of fast and slow motions in sliding modes. Asian J. Control 5(4), 445–453 (2003) 7. Boiko, I., Fridman, L.: Analysis of chattering in continuous sliding-mode controllers. IEEE Trans. Autom. Control 50(9), 1442–1446 (2005) 8. Boiko, I., Fridman, L., Pisano, A., Usai, E.: Analysis of chattering in systems with second-order sliding modes. IEEE Trans. Autom. Control 52(11), 2085–2102 (2007) 9. Brogliato, B., Polyakov, A.: Digital implementation of sliding-mode control via the implicit method: a tutorial. Int. J. Robust Nonlinear Control 31(9), 3528–3586 (2021) 10. Brogliato, B., Polyakov, A., Efimov, D.: The implicit discretization of the super-twisting slidingmode control algorithm. IEEE Trans. Autom. Control 65(8), 3707–3713 (2019) 11. Byun, G., Kikuuwe, R.: An improved sliding mode differentiator combined with sliding mode filter for estimating first and second-order derivatives of noisy signals. Int. J. Control Autom. Syst. 18(12), 3001–3014 (2020) 12. Carvajal-Rubio, J., Sánchez-Torres, J., Defoort, M., Djemai, M., Loukianov, A.: Implicit and explicit discrete-time realizations of homogeneous differentiators. Int. J. Robust Nonlinear Control 31(9), 3606–3630 (2021) 13. Clarke, F., Ledayev, Y., Stern, R.: Asymptotic stability and smooth Lyapunov functions. J. Differ. Equ. 149(1), 69–114 (1998) 14. Ding, S., Levant, A., Li, S.: Simple homogeneous sliding-mode controller. Automatica 67(5), 22–32 (2016) 15. Dorel, L., Levant, A.: On chattering-free sliding-mode control. In: 47th IEEE Conference on Decision and Control, 2008, pp. 2196–2201 (2008) 16. Drakunov, S., Utkin, V.: On discrete-time sliding modes. In: Nonlinear Control Systems Design 1989, pp. 273–278. Elsevier (1990) 17. Efimov, D., Levant, A., Polyakov, A., Perruquetti, W.: Discretization of asymptotically stable homogeneous systems by explicit and implicit Euler methods. In: 55th IEEE Conference on Decision and Control, CDC’2016, Las-Vegas, 12–14 Dec 2016 18. Filippov, A.: Differential Equations with Discontinuous Right-Hand Sides. Kluwer Academic Publishers, Dordrecht (1988) 19. Hanan, A., Jbara, A., Levant, A.: New homogeneous controllers and differentiators. In: Variable-Structure Systems and Sliding-Mode Control, pp. 3–28. Springer, Cham (2020)

Low-Chattering Discretization of Sliding Modes

263

20. Hanan, A., Jbara, A., Levant, A.: Non-chattering discrete differentiators based on sliding modes. In: Proceedings of the 59th IEEE Conference on Decision and Control, Jeyu Island, 14–18 Dec 2020, Korea, pp. 3987–3992 21. Hanan, A., Jbara, A., Levant, A.: Homogeneous sliding modes in noisy environments. In: Emerging Trends in Sliding Mode Control Theory and Application, pp. 1–46. Springer, Cham (2021) 22. Hanan, A., Jbara, A., Levant, A.: Homogeneous sliding modes in noisy environments, preprint (2021). https://www.tau.ac.il/~levant/Hanan,Jbara,Levant-Homogeneous_SMC_in_ noise,Springer,2021.pdf 23. Hanan, A., Jbara, A., Levant, A.: Low-chattering discretization of sliding mode control. In: Proceedings of the 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, Dec 15–17, 2021, pp. 6403–6408 (2021) 24. Hanan, A., Levant, A., Jbara, A.: Low-chattering discretization of homogeneous differentiators. IEEE Trans. Autom. Control 67(6), 2946–2956 (2021) 25. Huber, O., Acary, V., Brogliato, B.: Lyapunov stability and performance analysis of the implicit discrete sliding mode control. IEEE Trans. Autom. Control 61(10), 3016–3030 (2015) 26. Isidori, A.: Nonlinear Control Systems I. Springer, New York (1995) 27. Jbara, A., Levant, A., Hanan, A.: Filtering homogeneous observers in control of integrator chains. Int. J. Robust Nonlinear Control 31(9), 3658–3685 (2021) 28. Kikuuwe, R., Pasaribu, R., Byun, G.: A first-order differentiator with first-order sliding mode filtering. IFAC-PapersOnLine 52(16), 771–776 (2019) 29. Kikuuwe, R., Yamamoto, Y., Brogliato, B.: Implicit implementation of nonsmooth controllers to nonsmooth actuators. IEEE Trans. Autom. Control 67(9), 4645–4657 (2022) 30. Kikuuwe, R., Yasukouchi, S., Fujimoto, H., Yamamoto, M.: Proxy-based sliding mode control: a safer extension of PID position control. IEEE Trans. Robot. 26(4), 670–683 (2010) 31. Levant, A.: Sliding order and sliding accuracy in sliding mode control. Int. J. Control 58(6), 1247–1263 (1993) 32. Levant, A.: Higher order sliding modes, differentiation and output-feedback control. Int. J. Control 76(9/10), 924–941 (2003) 33. Levant, A.: Homogeneity approach to high-order sliding mode design. Automatica 41(5), 823– 830 (2005) 34. Levant, A.: Quasi-continuous high-order sliding-mode controllers. IEEE Trans. Autom. Control 50(11), 1812–1816 (2005) 35. Levant, A.: Chattering analysis. IEEE Trans. Autom. Control 55(6), 1380–1389 (2010) 36. Levant, A.: On fixed and finite time stability in sliding mode control. In: Proceedings of the 52 IEEE Conference on Decision and Control, Florence, Italy, 10–13 Dec 2013 37. Levant, A.: Non-Lyapunov homogeneous SISO control design. In: 56th Annual IEEE Conference on Decision and Control (CDC), Melbourne, VIC, Australia, 12–15 Dec 2017, pp. 6652–6657 (2017) 38. Levant, A., Efimov, D., Polyakov, A., Perruquetti, W.: Stability and robustness of homogeneous differential inclusions. In: Proceedings of the 55th IEEE Conference on Decision and Control, Las-Vegas, 12–14 Dec 2016 39. Levant, A., Livne, M.: Weighted homogeneity and robustness of sliding mode control. Automatica 72(10), 186–193 (2016) 40. Levant, A., Livne, M.: Robust exact filtering differentiators. Eur. J. Control 55(9), 33–44 (2020) 41. Levant, A., Livne, M., Yu, X.: Sliding-mode-based differentiation and its application. IFACPapersOnLine 50(1), 1699–1704 (2017) 42. Levant, A., Yu, X.: Sliding-mode-based differentiation and filtering. IEEE Trans. Autom. Control 63(9), 3061–3067 (2018) 43. 
Man, Z., Paplinski, A., Wu, H.: A robust MIMO terminal sliding mode control scheme for rigid robotic manipulators. IEEE Trans. Autom. Control 39(12), 2464–2469 (1994) 44. Mercado-Uribe, J., Moreno, J.: Discontinuous integral action for arbitrary relative degree in sliding-mode control. Automatica 118, 109,018 (2020) 45. Meriam, J., Kraige, L.: Engineering Mechanics - Dynamics, 7th edn. Wiley (2012)

264

A. Hanan et al.

46. Miranda-Villatoro, F., Brogliato, B., Castanos, F.: Multivalued robust tracking control of Lagrange systems: continuous and discrete-time algorithms. IEEE Trans. Autom. Control 62(9), 4436–4450 (2017) 47. Miranda-Villatoro, F., Brogliato, B., Castanos, F.: Set-valued sliding-mode control of uncertain linear systems: continuous and discrete-time analysis. SIAM J. Control Optim. 56(3), 1756– 1793 (2018) 48. Mojallizadeh, M., Brogliato, B., Acary, V.: Time-discretizations of differentiators: design of implicit algorithms and comparative analysis. Int. J. Robust Nonlinear Control 31(16), 7679– 7723 (2021) 49. Polyakov, A., Efimov, D., Brogliato, B.: Consistent discretization of finite-time and fixed-time stable systems. SIAM J. Control Optim. 57(1), 78–103 (2019) 50. Polyakov, A., Efimov, D., Perruquetti, W.: Finite-time and fixed-time stabilization: implicit Lyapunov function approach. Automatica 51(1), 332–340 (2015) 51. Polyakov, A., Fridman, L.: Stability notions and Lyapunov functions for sliding mode control systems. J. Frankl. Inst. 351(4), 1831–1865 (2014) 52. Rajamani, R.: Vehicle Dynamics and Control. Springer, New York (2005) 53. Ríos, H., Teel, A.: A hybrid observer for fixed-time state estimation of linear systems. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 5408–5413. IEEE (2016) 54. Ríos, H., Teel, A.: A hybrid fixed-time observer for state estimation of linear systems. Automatica 87, 103–112 (2018) 55. Shtessel, Y., Edwards, C., Fridman, L., Levant, A.: Sliding Mode Control and Observation. Birkhauser, Basel (2014) 56. Shtessel, Y., Shkolnikov, I.: Aeronautical and space vehicle control in dynamic sliding manifolds. Int. J. Control 76(9/10), 1000–1017 (2003) 57. Sira-Ramírez, H.: On the dynamical sliding mode control of nonlinear systems. Int. J. Control 57(5), 1039–1061 (1993) 58. Slotine, J.J., Li, W.: Applied Nonlinear Control. Prentice Hall Inc, New Jersey (1991) 59. Utkin, V.: Sliding Modes in Control and Optimization. Springer, Berlin, Germany (1992) 60. Xiong, X., Kikuuwe, R., Kamal, S., Jin, S.: Implicit-Euler implementation of super-twisting observer and twisting controller for second-order systems. IEEE Trans. Circuits Syst. II: Express Briefs 67(11), 2607–2611 (2019)

Adaptation Methods

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques with Applications: A Survey Yuri Shtessel, Franck Plestan, Christopher Edwards, and Arie Levant

Abstract Control under uncertainty is one of the main focuses of modern control theory research. The idea of sliding mode and higher-order sliding-mode control (SMC/HOSMC) is to drive the system trajectory to properly chosen constraints (sliding manifold) in finite time and preserving the constraints for all subsequent time by means of high-frequency switching control. The main features of SMC/HOSMC are its insensitivity to bounded disturbances matched to the control, high stabilization accuracy, and finite-time convergence. Therefore, SMC/HOSMC remains, probably, one of the most popular choices for handling systems with bounded uncertainties/disturbances. Adaptive HOSMC has been of great interest in the sliding-mode control community during the last 15 years due to its ability to handle perturbations with unknown bounds while mitigating chattering, if the adaptive control gains are not overestimated. This chapter presents an overview of the adaptive SMC/HOSMC paradigms and algorithms. A number of application specific results are also discussed. The literature in the area is presented in the context of continuing developments in the broad areas of the theory and application of adaptive SMC/HOSMC.

Y. Shtessel (B) University of Alabama in Huntsville, Huntsville, USA e-mail: [email protected] F. Plestan Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France e-mail: [email protected] C. Edwards The College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK e-mail: [email protected] A. Levant Tel-Aviv University, Tel Aviv, Israel e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_11

267

268

Y. Shtessel et al.

1 Introduction The insensitivity and finite-time convergence properties enjoyed by sliding-mode controllers (SMC or 1-SMC) and observers/differentiators (SMO or 1-SMO) [1–3] make them a useful approach for controlling systems with significant uncertainties in a very wide number of application fields. The idea of 1-SMC is one of driving the system trajectory to properly chosen constraints (the sliding manifold) in finite time and maintaining the constraints thereafter by means of high-frequency switching control, thus exploiting the main features of the sliding mode (SM): its insensitivity to external and internal disturbances matched to the control, and finite-time reaching transient. A family of 1-SMO algorithms employs the same basic idea: the high frequency switching injection term drives the estimated states to the sliding manifold and then maintains the sliding mode for all subsequent times so that the state estimation errors are driven to zero. Robustness to matched disturbances and the finite-time convergence is presumed. However, these insensitivity/robustness properties come at a cost, usually termed “chattering” [4, 5], resulting from high frequency switching of the control signal and the (inevitable) presence of unmodeled dynamics. Higher-order sliding-mode control (HOSMC or r − S MC, r ≥ 2, where r is the system’s relative degree [6]) algorithms [3, 7, 8] can handle systems with arbitrary relative degree with enhanced stabilization accuracy compared to systems of relative degree one attributed to 1-SMC, while preserving the insensitivity/robustness to bounded perturbations and the finite-time convergence inherited from 1-SMC. The efficacy of the SMC/HOSMC algorithms has been proven by a number of applications, including control of electrical DC/DC and AC/DC power converters [9, 10], control of AC and DC motors and generators [11], rocket and aircraft guidance and control [12, 13], and control of robots [14]. The emergence of the chattering phenomenon [4, 5] that could damage actuators and the systems themselves may be considered as the major disadvantage of the SMC/HOSMC. Note that all SMC/HOSMC designs require knowledge of the uncertainty/ perturbation bounds, which practically could be difficult to obtain; it can lead to the overestimation of these bounds, which may yield unnecessarily large control gains that could excite significant chattering. All r − S MC, r ≥ 1 may mitigate chattering through an artificial increase of relative degree, thus providing continuous control signals. Note that sometimes this approach applied to HOSMC in the presence of unmodeled dynamics may lead to increased chattering and significant energy consumption when compared with 1-SMC [15, 16]. SMC/HOSMC adaptation may help to minimize the energy consumption. Specifically, the SMC and HOSMC and corresponding observation laws contain discontinuous signum functions premultiplied by the gains proportional to the perturbation bounds (or bounds on the perturbation derivatives) and so chattering is

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques …

269

not totally eliminated even by increasing the relative degree. Therefore, increasing relative degree does not allow for full chattering elimination. Usually, conservative upper bounds on perturbations are needed to guarantee the existence of the sliding mode. Note that this conservatism increases the chattering. A common way to mitigate chattering is by designing the SMC/HOSMC and observation algorithms with control gain adaptation, assuming the perturbations with unknown bounds and the bounds of their derivatives. In more sophisticated approaches, the SMC gain adaptation allows for designing SMC/HOSMC where the gains in the controller, depending on the uncertainty bounds, attempt to adapt to a level where they are as small as possible and yet guarantee sliding is maintained. Designing the adaptive SMC/HOSMC (1-ASMC/r-ASMC) algorithms can be also beneficial when the boundaries of the perturbations are known. The reason for this is that the perturbations and their derivatives are not necessarily close to their boundaries all the time. Therefore, designing 1-ASMC/r-ASMC algorithms with the adaptive gains being close to the unknown perturbations while maintaining sliding, allows the controller/observer gains to be reduced to a healthy level. The following main research direction in 1-ASMC/r-ASMC design should be mentioned: (a) the 1-ASMC/r-ASMC algorithms for disturbance reconstruction and a consecutive compensation; (b) the ASMC/AHOSMC algorithms in perturbed systems with full knowledge of the boundaries of the perturbations and their derivatives; (c) 1-ASMC/r-ASMC algorithms in systems with unknown bounds on the perturbations and their derivatives, specifically: schemes with gain overestimation versus ones with non-overestimated gains; (d) the best new concepts for the sliding-mode control and observation algorithms with the control gain adaptation [17]. The following options for 1-ASMC/r-ASMC are also discussed in [18]: Monotonically increasing gain: This strategy consists of monotonically increasing the control gain until the SM is reached. Note that in this situation the control gain does not decrease as the perturbation decreases, and thus the chattering level and the power consumption needed for the actuator to maintain the system’s SM will be unreasonable large. Furthermore, one cannot be sure that the sliding mode will not eventually be lost (due to the limited control power) if the perturbation continues increasing. Increasing and decreasing gain: The main idea here involves increasing the gain until the SM is reached, then the gain is decreased and follows the perturbation profile. Thus, the sliding variable is driven toward some vicinity of zero that depends on the upper bound of the perturbation. However, only ultimately boundedness of the sliding variable can be ensured and this vicinity cannot be prescribed in advance. A Barrier function-based approach: In accordance with this type of adaptation, the gain increases until the norm of the sliding variable reaches a value lower than a predefined one. Then, the gain defined by the barrier function ensures that the solution will never leave a prescribed domain of origin. The control gain decreases practically to the norm of the perturbation profile whenever the sliding variable is close to the origin. Once the solution reaches the prescribed vicinity of the sliding manifold, it can never leave it. Note that the cost of such a prescribed size of the reachability domain could be high, since when the solution is close to the barrier

270

Y. Shtessel et al.

function, the control gain can get increasingly significant. Note that in the case of the noisy measurements the size of the barrier should be chosen much bigger than the level of the noise, which is unknown a priori. Otherwise convergence can be lost. The summarized challenge is to adapt the control/injection term gains, in 1ASMC/n-ASMC in such a way that the convergence of a sliding set to zero is guaranteed in finite time with no overestimation of the control/injection term gains, while relaxing the requirements on the knowledge of the bounds of the perturbations and their derivatives. This chapter is dedicated to a survey of the state of the art of 1-ASMC/n-ASMC techniques and their applications.

2 Adaptive Sliding-Mode Control (1-ASMC) Techniques Consider a nonlinear controllable SISO uncertain system x˙ = f (x) + g(x)u, σ = σ (x, t);

(1)

Notations for Eq. (1) can be found in [19, 20]. The σ − dynamics are given by σ˙ = (x, t) + (x, t)u

(2)

with |(x, t)| ≤ m , m > 0, 0 < m ≤ (x, t) ≤  M . It is assumed that the constants m , m ,  M exist but are unknown. The original way to reduce the chattering effect of the 1-SMC u = −K sign(σ )

(3)

is through the use of a boundary layer around the sliding surface σ = 0 defined by |σ | ≤ δ, δ > 0, where the sign function may be approximated by continuous ones, including saturation and sigmoid functions [1–3], [21]. Note that the boundary layer approach sacrifices the sliding variable stabilization accuracy and allows the convergence of a sliding variable to a domain only, but not to the origin. One of the earliest readily available contributions in the area of adaptive SMC based on the boundary layer concept appeared in the late 1980s [21, 22], where a consistent rule on when to stop adaptation using the concept of the boundary layer was proposed. Algorithms and approaches to the design of the adaptive control u that drives σ → 0 or σ → {|σ | ≤ δ0 , δ0 > 0} in finite time, while maintaining the adaptive control gain as small as possible to maintain the sliding mode (SM), is discussed next. In the work in [19] an adaptive gain K in (3) for single-input-single-output (SISO) nonlinear systems with uncertain parameters (1), (2) was proposed of the form

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques …

1 K˙ = |σ |, α > 0 α

271

(4)

so that the 1-ASMC (3), (4) drives σ → 0 in finite time. Discussion: • The upper bounds of any uncertainties are not required to be known in advance. • The adaptation law (4) increases the 1-ASMC gain until a sliding mode (SM) starts. • The gain remains constant or increases, due to the increase of the perturbation. Clearly this 1-ASMC yields an overestimation of the gain K that may yield increased chattering. In the works [20, 23, 24] an equivalent (average) control-based 1-SMC gain K adaptation law is proposed for nonlinear systems with disturbances with unknown bounds (which may be even unbounded) that guarantees asymptotic convergence and non-overestimation of the gain. Estimating the equivalent (average) control u eq [1–3] through discontinuous high frequency switching control filtering, is a central point of the algorithms. In the works [20, 25] another adaptive gain SMC algorithm that is not based on the equivalent control estimation (that is difficult to accomplish exactly) and provides non-overestimated gains is proposed. Specifically, the following results are obtained in [20]. Theorem 1 [20]:Given the sliding variable dynamics in (3), and the sliding-mode control (1), where the adaptive gain K is defined as i f |σ | > 0 K˙ = K¯1 |σ |, K = K¯2 |η| + K¯3 , i f |σ | = 0

(5)

with τ η˙ + η = sign (σ ), τ > 0, then σ → 0 in finite time. Discussion: • The control gain K is increasing due to the adaptation law (5) up to a value large enough to counteract the bounded uncertainty with unknown bounds in (2) until the sliding mode starts. • This 1-ASMC strategy allows decreasing the control gain K and then adjusts it with respect to the current uncertainties/ perturbations. Theorem 2 [20]: Given the sliding variable dynamics in (2), and the 1-ASMC in (3), where the adaptive gain K is defined as  K˙ =

K¯4 |σ |sign(σ − ), i f K > μ μ, if K ≤ μ

(6)

272

Y. Shtessel et al.

with K (0), K¯ 4 > 0, and where ε, μ > 0 are small real numbers, then there exists a finite time t F > 0 so that a real sliding mode is established for all t ≥ t F , i.e., |σ | ≤ δ¯0 for all t ≥ t F , where  2 M δ¯0 = 2 + . (7) K¯4 m Discussion: • A guarantee that the sliding variable σ converges to a domain is provided [20, 25]. • The idea is to increase Kuntil the (approximate) 1-SM σ = 0 is detected in (6). Then Kis gradually reduced until the sliding mode is lost due to an insufficient control magnitude, then it increases again. • A tuning procedure for the size of the domain was developed in [20]. In [26] at the moment when the 1-SM is lost, the gain Kis increased in one impulse to provide for the immediate convergence restoration. Then it is decreased gradually until the approximate 1-SM σ = 0 is once more lost, etc. Thus, a real sliding mode σ → {|σ | ≤ δ0 , δ0 > 0} [1–3] is kept almost for all time after it has established. In [27] a modified switching scheme for the gain K adaptation is proposed to guarantee a prescribed transient time, maximum overshoot, and steady-state error for multivariable uncertain plants. In the work [28] another 1- ASMC methodology is presented that does not require a priori knowledge of bounded uncertainty. Convergence to a domain is provided. Another 1-ASMC algorithm that addresses gain overestimation is studied in [29]. This algorithm allows reducing the error in the reaching phase by using an integral 1-ASMC. A novel 1-ASMC law was proposed in [30] to overcome the overestimation and underestimation problems of the switching gain without any a priori constant upper-bound assumption on the system uncertainty. The saturation function approximates the signum one, and the convergence to a domain is guaranteed. Consider the multiple input-multiple output (MIMO) system (1) with the vector output (the sliding variable) σ , whose dynamics are given by σ˙ = (x, t) + u

(8)

  (x, ˙ t) ≤ m1 with unknown where , u, σ ∈ Rm , (x, t) ≤ m0 , m0 , m1 > 0. A dual-layer 1-ASMC unit-vector control u scheme that drives the sliding variable σ to the ideal 1-SM in finite time in the presence of the smooth bounded perturbation, whose norm-boundary and the norm-boundary of its derivative are not known, is presented in [31]. As studied in [31], the unit vector 1-ASMC is u = −(ρ(t) + η0 )

σ ||σ ||

(9)

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques …

273

where η0 is a small positive constant. A sufficient condition to ensure a 1-SM exists is ρ(t) > m0 , and this is satisfied by the scheme proposed in [31] (MIMO 1-ASMC adaptation algorithm). Furthermore the adaptive gain ρ(t) remains bounded. A detailed description of the dual-layer adaptation scheme for the control gain ρ(t) > 0 can be found in [31]. In [32, 33] the 1-ASMC is based on a sliding-mode observer (SMO). Specifically, in [32] an integral 1-ASMC is designed for a switch system without the possibility for the adaptive control gain to be reduced, while the required states are observed via a SMO. A disturbance SMO is proposed for systems with unmatched disturbances in [33], while a 1-ASMC is designed without the possibility of the control gains to be reduced. Therefore, adaptive control gain overestimation is still possible. In the works [34, 35], a new barrier function-based 1-ASMC is proposed for perturbed systems whose disturbances are bounded with unknown bounds. The proposed barrier 1-ASMC can ensure convergence of the sliding variable and maintains it in a predefined neighborhood of zero independent of the upper bound of the disturbance, without overestimating the control gain when inside in the neighborhood. In [36, 37], ASMC in discrete time perturbed systems is studied. In the case of constant uncertainties, a discrete time 1-ASMC scheme that guarantees the convergence of the discrete time sliding variable to a neighborhood of the sliding manifold, with the size of this neighborhood depending on the sampling interval was proposed in [36]. A model reference discrete time 1-ASMC algorithm is presented in [37] and drives the error between the actual sliding dynamics and the desired dynamics to zero in the presence of an unknown but constant system matrix and control distribution vector. Practical easily implementable continuous and discrete time 1-ASMC algorithms are studied in [38], where a shrinking hysteresis-based 1-ASMC that drives the sliding variable to a boundary layer around zero is proposed. The control gain is not overestimated.

3 Applications of Adaptive Sliding-Mode Control (1-ASMC) Techniques There exists a large variety of practical control tasks that have been effectively addressed using 1-ASMC/1-AHOSMC algorithms, including controlling aerospace vehicles [39–46], robots, pneumatic and electric drive control [47–49], cyberphysical systems [50, 51], stock management problem [52], complex networks of dynamics systems [53], underwater vehicles [54, 55], and chaotic motion [56, 57]. Integral smooth 1-ASMC of robot manipulators with a norm-bounded disturbance, whose smoothness is achieved via a sigmoid function approximation, and whose bound is known, is studied in [47]. Dual level 1-ASMC with increasingdecreasing adaptive gains providing uniform ultimate boundedness of the tracking errors of a robot manipulator in a real SM is presented in [48]. A resilient ASMO for sensorless AC drives was proposed and studied in [58].

274

Y. Shtessel et al.

Fig. 1 Schematic of an electropneumatic system. ©2010 TAYLOR & FRANCIS. Reprinted with permission from Plestan, F., Shtessel, Y., Brégeault, V., Poznyak, A.: New methodologies for adaptive sliding mode control. Int. J. Control. 83(9), 1907–1919 (2010)

The 1-ASMC proposed in [20] is applied to controlling the electropneumatic actuator in [48], whose schematic is presented in Fig. 1. The schematic description can be found in [20, 49]. A mathematical model that describes the motion of the pneumatic part of the system in Fig. 1 is presented in [20, 49]   kr T SP qm N (U, p P ) − pP v V P (y) rT   kr T SN qm P (−U, pN ) − pN v p˙ N = VN (y) rT  1  v˙ = S P p P − S N p N − bv − F f − Fext M y˙ = v. p˙ P =

(10)

Notations for Eq. (10) are given in [20]. The aim of the 1-ASMC control U design is to achieve high accuracy for the position tracking error e = yd (t) − y(t) where yd (t) is the desired trajectory, in the presence of the bounded perturbations, including the uncertain carriage mass M and the partially known mass flow rates qm P and qm N , with unknown bounds. The sliding variable σ = e¨ + 2ωn e˙ + ωn2 e dynamics are presented in the form of Eq. (2), and the 1-ASMC is designed as U=

1 nom

(−ψnom + u)

(11)

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques …

275

Fig. 2 Top: Current (-) and desired (—) position trajectories (m) versus time (s). Center: Control input (V) versus time (s). Bottom: Control gain K versus time (s). ©2010 TAYLOR & FRANCIS. Reprinted with permission from Plestan, F., Shtessel, Y., Brégeault,V., Poznyak, A.:New methodologies for adaptive sliding mode control. Int. J. Control. 83(9), 1907–1919 (2010)

where  = nom + ,  = nom + , nom > 0, the control u is designed in accordance with the 1-ASMC in (3), (5), (6), and implemented with K (0) = 10, K¯ 4 = 1000, ωn = 33, μ = 0.1. In this work a specially chosen time varying ε [20] is used. Simulation experiments have been made considering variations of the load mass from 17 up to 47 kg. The simulation results are shown in Fig. 2. Discussions: • The plots in Fig. 2 demonstrate the high accuracy of the output tracking, • The control gain demonstrates the deep adaptation. Hypersonic vehicles are controlled using ASMC in [39–41]. In fact, [39] is one of the first works where a MIMO continuous ASMC with saturation function approximation is designed for the nonlinear longitudinal model of a generic hypersonic vehicle. The adaptive part of the ASMC deals with parametric uncertainties. An adaptive multivariable integral terminal sliding-mode fault-tolerant control (FTC) strategy for a hypersonic gliding vehicle (HGV) subject to actuator malfunctions and model uncertainties is studied in [40, 41]. The multivariable integral SMC is capable of ensuring finite-time stability of the closed-loop system in the presence of actuator malfunctions and model uncertainties, while the adaptive laws are employed to tune the control parameters in response to the hypersonic vehicle’s uncertain and faulty dynamics.

276

Y. Shtessel et al.

In [42], a novel robust adaptive recursive sliding-mode control (ARSMC) strategy is proposed for a quadrotor to improve the attitude tracking performance and disturbance rejection. The adaptive gain (that cannot be reduced) adjustment method is presented to make an estimate of the unknown upper bound of the disturbances. It is proved that the attitude control system ensures finite-time convergence and that the attitude tracking error converges to zero. A quadrotor attitude test platform was built to evaluate the proposed algorithm. For a multi-quadrotor system, a novel adaptive nonsingular terminal sliding-mode control strategy is studied in [43] to realize the formation control with collision avoidance and inter-quadrotor avoidance. The paper [44] offers ASMC design for finite-time stabilization of a quadrotor with parametric uncertainties. The studied ASMC in [44] guarantees the tracking errors of the quadrotor converge to the origin with a finite-time convergence rate. The adaptive gains have no ability to get reduced. In [54, 55], an underwater vehicle controlled by ASMC is studied. The depth tracking errors are shown to be uniform ultimately bounded, while the adaptive gains can reduce their value. Integral ASMC is used in three-layer control [59] of a cable-driven underwater parallel platform. In this work, the gain adaptation law does not allow the gains to get reduced. Perturbed spacecraft ASMC is studied in [45, 46]. In [45], the flexible spacecraft is controlled by continuous ASMC control. Attitude of a space formation is controlled in [46] by ASMC based on the equivalent control and a saturation approximation of the signum function. State reconstruction in all the nodes of a complex network of perturbed uncertain dynamical systems is accomplished in [53] via a supervisory 1-ASMO. The estimation errors converge to zero on ideal 1-SM. Chaotic motion was exactly stabilized at zero in the 1-SM using 1-ASMC in [56]. Synchronization of chaotic motions is accomplished using ASMC in [57]. Here the sign function is approximated by a hyperbolic tangent. The gain adaptation law does not allow the gains to get reduced in both works [56, 57].

4 Adaptive Second-Order Sliding-Mode Control (2-ASMC) Techniques

As we have seen, 1-SMC provides robust and high-accuracy solutions for a variety of control problems under uncertainty conditions and has a wide range of applications. However, two main restrictions remain. Firstly, the constraint to be held at zero in conventional sliding modes has to be of relative degree 1, which means that the control must explicitly appear in the first time derivative of the sliding variable; thus, one has to search for an appropriate sliding variable. Moreover, overestimation of the uncertainty bounds implies unnecessarily large control effort and a corresponding degradation of accuracy in practical implementation. Secondly, high-frequency control switching may easily cause unacceptable practical complications (the chattering effect), especially with overestimated control gains, if such control has any physical sense. To attenuate these problems, higher-order and, specifically, second-order sliding-mode control algorithms were introduced. Several 2-SMC algorithms were proposed, including the twisting, super-twisting, prescribed convergence [3, 7], and quasi-continuous [8] algorithms. They can handle sliding-variable dynamics of relative degree 2 (with the exception of super-twisting control, which is applicable to compensating perturbed sliding-variable dynamics of relative degree 1 while generating a continuous control function) with enhanced accuracy of finite-time sliding-variable stabilization. Note that the 2-SMC and 2-SMO that handle systems of relative degree 2 inherit all features of 1-SMC and 1-SMO of relative degree 1, including finite-time convergence in the presence of matched bounded disturbances. An introduction to the main ideas of the 2-ASMC algorithms, following the paradigm discussed above in order to mitigate possible chattering effects in 2-SM, is presented next.

4.1 Adaptive 2-SMC Super-Twisting Control

The first known results on adaptive super-twisting (ASTW) control algorithms were obtained in the works [60–63]. All these ASTW control algorithms were derived using Lyapunov function techniques. In [60], the adaptation laws for the STW gains were derived using a Lyapunov function for the relative degree one sliding-variable dynamics in the presence of a perturbation whose derivative is bounded by an unknown constant. Convergence of the sliding variable to the ideal 2-SM in finite time is ensured. The downside of the proposed adaptive laws is that the adaptive gains can only increase. In [61, 64] the adaptive gain laws for ASTW control achieve gain non-overestimation by allowing the gains to decrease. Convergence to a real 2-SM is provided, and the size of the convergence domain depends on the bounds of the perturbation derivative. A particular case of the ASTW control proposed in [61], when the adaptive gains can only increase, yields convergence to an ideal 2-SM.
Consider a nonlinear controllable SISO uncertain system in (1) of relative degree 1 with the smooth output σ = σ(x, t). The σ-dynamics are given by

σ̇ = a(x, t) + b(x, t)u   (12)

where

\[
\begin{aligned}
a(x,t) &= a_1(x,t) + a_2(x,t), \quad |a_1(x,t)| \le \delta_1 |\sigma|^{1/2}, \quad |a_2(x,t)| \le \delta_2, \quad \delta_1, \delta_2 > 0,\\
b(x,t) &= b_0(x,t) + \Delta b(x,t), \quad b_0(x,t) > 0, \quad \left|\frac{\Delta b(x,t)}{b_0(x,t)}\right| \le \gamma_1 < 1.
\end{aligned}
\tag{13}
\]

Finally,

σ̇ = a(x, t) + b1(x, t)ω   (14)

where u = b0(x, t)ω and 1 − γ1 ≤ b1(x, t) ≤ 1 + γ1. Note that the boundaries δ1, δ2, γ1 exist but are unknown. The problem is to design the ASTW control ω that drives the sliding variable σ to a real 2-SM, i.e., |σ| ≤ η1, |σ̇| ≤ η2, in the presence of the bounded additive and multiplicative perturbations with unknown boundaries. The following ASTW control structure is considered [7, 61]:

ω = −α|σ|^{1/2} sign(σ) + v,   v̇ = −β sign(σ)   (15)

where the non-overestimated adaptive gains α = α(σ, t), β = β(σ, t) are to be defined. The idea of the ASTW design is to dynamically increase the control gains α = α(σ, t), β = β(σ, t) until an ideal 2-SM is established; then the gains start decreasing. This gain reduction is reversed as soon as the sliding variable or its derivative start deviating from the equilibrium point σ = σ̇ = 0 in the 2-SM. Therefore, a rule (a detector) that detects the beginning of the destruction of the 2-SM must be constructed and incorporated into the ASTW control law so that overestimation of the control gains α and β is prevented. The following result on ASTW control was obtained in [61].
Theorem 3 [61]: Given the sliding variable dynamics in (13), (14), for any initial condition σ(0) there exists a finite time tF > 0 and a parameter μ > 0 such that, as soon as |σ| > μ, the condition

\[
\alpha > \frac{\delta_1\big(\lambda + 4\varepsilon^2\big) - \big(4\varepsilon\delta_4 + 1\big) + \sqrt{\big(2\varepsilon\delta_1 - 2\delta_4 - \lambda - 4\varepsilon^2\big)^2 + \lambda\big(1-\gamma_1\big)}}{2\lambda\big(1-\gamma_1\big)}
\tag{16}
\]

holds, and a real 2-SM, |σ| ≤ η1, |σ̇| ≤ η2, is established for all t ≥ tF via the ASTW control (15) with the bounded adaptive gains

\[
\dot\alpha =
\begin{cases}
\omega_1\sqrt{\dfrac{\gamma_1}{2}}\,\operatorname{sign}\!\big(|\sigma| - \mu\big), & \text{if } \alpha > \alpha_m,\\[2mm]
\eta_0, & \text{if } \alpha \le \alpha_m,
\end{cases}
\qquad \beta = 2\varepsilon\alpha
\tag{17}
\]

where ε, γ1, αm, ω1, η0 are arbitrary positive constants, δ4 ≥ |ȧ2| + |ḃ1 v|, and η1 > μ, η2 > 0.
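To illustrate how the adaptation law (17) drives the super-twisting gains in (15), a minimal Python simulation is sketched below. The plant gain b1 = 1, the perturbation a(t), the numerical values of ε, γ1, ω1, η0, αm, μ and the Euler step are all illustrative assumptions rather than the tuning of [61].

```python
import numpy as np

# Minimal Euler simulation of the adaptive super-twisting law (15), (17).
# The plant gain b1 = 1, the perturbation a(t) and all numbers below are
# illustrative assumptions, not the tuning used in [61].
dt, T = 1e-4, 10.0
eps, gamma1 = 2.0, 1.0            # design constants appearing in (17) (assumed)
omega1, eta0 = 100.0, 1.0         # adaptation rates (assumed)
alpha_m, mu = 0.01, 0.05          # minimum gain and detector threshold (assumed)

sigma, v = 1.0, 0.0               # sliding variable and integral state of (15)
alpha = 0.1                       # initial adaptive gain, alpha(0) > 0

for k in range(int(T / dt)):
    t = k * dt
    a = 2.0 * np.sin(t) + 0.5     # unknown bounded perturbation (assumed)
    # gain adaptation (17): alpha grows while |sigma| > mu and shrinks inside
    if alpha > alpha_m:
        alpha += omega1 * np.sqrt(gamma1 / 2.0) * np.sign(abs(sigma) - mu) * dt
    else:
        alpha += eta0 * dt
    beta = 2.0 * eps * alpha
    # super-twisting control (15)
    omega = -alpha * np.sqrt(abs(sigma)) * np.sign(sigma) + v
    v += -beta * np.sign(sigma) * dt
    sigma += (a + omega) * dt     # sliding-variable dynamics (14) with b1 = 1

print(f"|sigma| at t = {T}: {abs(sigma):.2e}, final alpha = {alpha:.2f}")
```

The sketch reproduces the increase-decrease behaviour discussed next: the gain grows until |σ| falls below the threshold μ, then decays until the trajectory starts leaving the vicinity of the 2-SM again.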

Discussions:
• In accordance with the adaptive gain law (17), the gains α = α(σ, t), β = β(σ, t) can decrease and, therefore, are not overestimated.
• If the "detector" for adaptive gain reduction (the term sign(|σ| − μ) in the gain adaptation law (17)) is eliminated (by making μ = 0), then the gain adaptation law (17) with α(0) > 0 can be changed to

\[
\dot\alpha =
\begin{cases}
\omega_1\sqrt{\dfrac{\gamma_1}{2}}, & \text{if } \sigma \ne 0,\\[2mm]
0, & \text{if } \sigma = 0,
\end{cases}
\qquad \beta = 2\varepsilon\alpha.
\tag{18}
\]

In this case, the ASTW control law (18) drives the sliding variable σ to the ideal 2-sliding mode σ = σ̇ = 0 in finite time. However, the adaptive control gains α and β can be overestimated.
ASTW laws based on the equivalent control are proposed and studied in [62, 63]. In [62] only one ASTW control gain is adapted, whereas in [63] the ASTW is proposed in a dual-layer format with independent adaptation of both control gains. The proposed ASTW control laws [62, 63] provide finite-time convergence to the 2-SM with non-overestimated control gains.
Consider the sliding variable dynamics compensated by an ASTW-like control in the form [62]

σ̇ = −α(t)|σ|^{1/2} sign(σ) + v + ϕ(σ, L),   v̇ = −β(t) sign(σ) + f(t).   (19)

Notations and constraints in (19) can be found in [62]. Compared with the traditional super-twisting structure in (15), an additional term ϕ(σ, L) is proposed in (19). It was suggested that

α(t) = L(t)α0,   β(t) = L(t)β0   (20)

and the new term ϕ(σ, L) reads as

\[
\varphi(\sigma, L) = -\frac{\dot L(t)}{L(t)}\,\sigma(t).
\tag{21}
\]

The adaptive control element L(t), on which the gains α(t) and β(t) depend, is given by

L(t) = l0 + l(t)   (22)

where l0 is a (small) positive design constant and the time-varying term l(t) satisfies

l̇(t) = −ρ(t) sign(δ(t))   (23)

with

\[
\delta(t) = L(t) - \frac{1}{g\beta_0}\,\big|\bar u_{eq}(t)\big| - \varepsilon,
\qquad
\dot{\bar u}_{eq} = \frac{1}{\tau}\big(\beta(t)\operatorname{sign}(\sigma) - \bar u_{eq}\big),
\tag{24}
\]

where τ > 0 and 1/β0 < g < 1. The time-varying gain ρ(t) is defined as

ρ(t) = r0 + r(t),   ṙ = γ̄|δ|   (25)

where r0, γ̄ are fixed positive design scalars used for tuning purposes. The result is formulated in the following theorem.
Theorem 4 [62]: Consider the sliding variable dynamics compensated by the ASTW-like control (19)–(21), subject to the uncertainty f(t) which satisfies the two constraints |f(t)| ≤ a0, |ḟ(t)| ≤ a1, where a0, a1 > 0 are unknown. Then the dual-layer adaptation scheme in (22)–(25) ensures L(t) > |f(t)| in finite time and thus the convergence of σ to an ideal 2-SM.
Example [62]: The system (19)–(25) was simulated with f(t) = sin(t + 0.57) + 0.5 sin 3t, β0 = 1.1, α0 = 2.97, τ = 0.001, γ̄ = 8, r0 = l0 = 0.1, ε = 0.1. The results of the simulations are shown in Figs. 3, 4, 5, and 6.
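A rough Python rendering of the dual-layer scheme (19)–(25) with the example parameters quoted above is given below. The Euler step and the safety factor g are assumed, and the expressions for δ(t) and the filter follow the reconstruction of (24) given here, so the sketch should be checked against [62] rather than read as the exact published algorithm.

```python
import numpy as np

# Euler sketch of the dual-layer adaptive super-twisting scheme (19)-(25).
# Parameters follow the example above; the step size dt and the safety
# factor g are assumed, and (24) is used as reconstructed in this chapter.
dt, T = 1e-4, 15.0
beta0, alpha0 = 1.1, 2.97
tau, gbar = 0.001, 8.0
r0 = l0 = 0.1
eps, g = 0.1, 0.95               # eps from the example, g is an assumed value

sigma, v = 1.0, 0.0              # states of the sliding-variable dynamics (19)
l, r, u_eq = 0.0, 0.0, 0.0       # adaptation states and filtered injection

for k in range(int(T / dt)):
    t = k * dt
    f = np.sin(t + 0.57) + 0.5 * np.sin(3.0 * t)     # disturbance of the example
    L = max(l0 + l, 1e-6)                            # (22), guarded against L -> 0
    rho = r0 + r                                     # (25)
    delta = L - abs(u_eq) / (g * beta0) - eps        # (24), as reconstructed here
    alpha, beta = L * alpha0, L * beta0              # (20)
    l_dot = -rho * np.sign(delta)                    # (23)
    phi = -(l_dot / L) * sigma                       # (21), since dL/dt = dl/dt
    # sliding-variable dynamics (19) under the ASTW-like control
    sigma_dot = -alpha * np.sqrt(abs(sigma)) * np.sign(sigma) + v + phi
    v += (-beta * np.sign(sigma) + f) * dt
    sigma += sigma_dot * dt
    # first-layer filter and second-layer adaptation
    u_eq += (beta * np.sign(sigma) - u_eq) / tau * dt
    r += gbar * abs(delta) * dt
    l += l_dot * dt

print(f"final |sigma| = {abs(sigma):.2e}, L = {l0 + l:.3f}")
```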

Fig. 3 Time history of σ and v. ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)

Fig. 4 Time histories of α, β and | f (t)|. ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)

Fig. 5 Time histories of ρ(t) and L(t). ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)

Discussions:
• The plots in Figs. 4 and 5 show the deep gain adaptation and confirm non-overestimation of the adaptive gains α and β.
• The plots in Fig. 3 confirm that convergence to the ideal 2-SM occurs in finite time.
• A byproduct is the exact estimation/reconstruction of the disturbance f(t), as one can see in Fig. 6.
The result obtained in [65] using the barrier function is similar to the one proposed in [61] for the case when the perturbation's derivatives do not increase. The proposed ASTW control law forces the sliding variable in finite time to a predefined neighborhood of zero, independently of the bound of the disturbance derivative. The paper [66] presents an ASTW-like 2-SMC controller with a uniform finite/fixed convergence time that is robust to perturbations with unknown bounds. It

Fig. 6 The disturbance f (t) reconstruction. ©2019 ELSEVIER. Reprinted with permission from Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019)

is shown that an ideal 2-SM is established in finite time if the adaptive gain cannot be reduced, and that the sliding variable converges to a small vicinity of the origin if the adaptation algorithm does not overestimate the control gain. An estimate of the fixed convergence time of the studied ASTW-like controller is derived based on Lyapunov analysis.

4.2 2-SMC Twisting Control

Adaptive twisting control (ATC) of relative degree 2 systems is considered in [67–70]. The ATC law proposed in [67] provides continuous, non-overestimated adaptive control gains and mitigates control chattering in the presence of bounded disturbances with unknown boundaries. The size of the domain of convergence, which is reached after a finite-time transient, is controlled by a threshold parameter. If gain overestimation is acceptable, then the proposed ATC without a threshold provides finite-time convergence to the ideal 2-SM. Specifically, the output dynamics of the system (1) of relative degree 2 are written as

σ̈ = φ(x, t) + u   (26)

where φ, u, σ ∈ R and |φ(x, t)| ≤ φm with unknown φm > 0. The modified version of the twisting control (TC) [3, 7] is given by [65]

u = −α(sign(σ) + 0.5 sign(σ̇))   (27)

and renders the origin of the closed-loop system (26), (27) a finite-time stable equilibrium, provided that α > 2φm.
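Before turning to the adaptive version, a minimal fixed-gain simulation of the modified twisting law (27) on the perturbed double integrator (26) can be sketched as follows. The perturbation φ(x, t), its bound and the integration step are assumed for illustration, with the gain chosen to satisfy α > 2φm.

```python
import numpy as np

# Fixed-gain Euler simulation of the modified twisting law (27) on (26).
# The perturbation phi(x, t), its bound phi_m and the step size are assumed.
dt, T = 1e-4, 10.0
phi_m = 2.0                        # assumed perturbation bound
alpha = 2.5 * phi_m                # any alpha > 2*phi_m stabilizes (26), (27)

sigma, dsigma = 1.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    phi = phi_m * np.sin(2.0 * t)                          # bounded perturbation
    u = -alpha * (np.sign(sigma) + 0.5 * np.sign(dsigma))  # twisting control (27)
    sigma += dsigma * dt
    dsigma += (phi + u) * dt                               # dynamics (26)

print(f"|sigma|, |dsigma| at t = {T}: {abs(sigma):.2e}, {abs(dsigma):.2e}")
```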

The idea behind the design of the ATC algorithm is to continuously increase the control gain α(t) until σ converges to the equilibrium point σ = σ̇ = 0 in the 2-SM. Then the gain α(t) starts decreasing. This gain reduction is reversed as soon as σ, σ̇ start deviating from the equilibrium. A detector that reveals the beginning of the destruction of the 2-SM is incorporated into the ATC, which prevents the control gain α(t) from being overestimated. This detection mechanism is designed by introducing a domain M = {σ, σ̇ : V0(σ, σ̇) ≤ μ0, μ0 > 0}, where

\[
V_0 = V_0(\sigma, \dot\sigma) = \alpha^2\sigma^2 + \gamma\,\dot\sigma\,|\sigma|^{3/2}\operatorname{sign}(\sigma) + \alpha|\sigma|\dot\sigma^2 + \tfrac{1}{4}\dot\sigma^4
\tag{28}
\]

is a Lyapunov function specially designed in [67]. It is assumed that there exist (A1) a sufficiently large, a priori known parameter α* > 0 and ε0 > 0 such that α* − ε0 > 2φm, and (A2) a parameter γ1 that satisfies the condition

\[
\frac{\varepsilon_0}{\max_{(\sigma,\dot\sigma)\in\Omega}\big(2\alpha^*\sigma^2 + |\sigma|\dot\sigma^2\big) + \gamma_1} \le \gamma_1 \le \frac{3}{4}, \qquad \alpha^* - \varepsilon_0 > 2\varphi_m,
\tag{29}
\]

where ε0 is selected accordingly, Ω ⊂ R² is a compact set that contains the origin in its interior, and γ1 > 0 is a regularization term. The main result on ATC is presented in the following theorem.
Theorem 5 [67]: Assume that the assumptions (A1) and (A2) hold. Define the control gain α(t) in Eq. (27) to be adapted as

\[
\dot\alpha =
\begin{cases}
\dfrac{\omega_1}{\sqrt{2\gamma_1}}\,
\dfrac{\operatorname{sign}\!\big(V_0(\sigma,\dot\sigma)-\mu\big)}
{\frac{1}{\gamma_1}\big(2\alpha\sigma^2+|\sigma|\dot\sigma^2\big)-\frac{1}{(\alpha^*-\alpha)^3}},
& \text{if } \alpha>\alpha_{\min},\\[3mm]
\chi, & \text{if } \alpha\le\alpha_{\min},
\end{cases}
\tag{30}
\]

with ω1, μ, χ, αmin, γ1 arbitrary positive constants; then, for initial conditions [x(0), y(0)]ᵀ ∈ Ω ⊂ R², a real 2-SM is established in system (1)–(4) in the domain

\[
\sqrt[4]{\lambda_{\max}\{P_1\}}\;\big(|x|^{1/2} + |y|\big) \le \mu
\tag{31}
\]

where

\[
P_1 =
\begin{bmatrix}
\tfrac{3}{2}\lambda_{\max}(A) & 0\\[1mm]
0 & \tfrac{\lambda_{\max}(A)}{2} + \tfrac{1}{4}
\end{bmatrix},
\qquad
A =
\begin{bmatrix}
\alpha^2 & \tfrac{\gamma}{2}\\[1mm]
\tfrac{\gamma}{2} & \alpha
\end{bmatrix},
\qquad
|\gamma| \in \big(0,\, 2\alpha_{\min}^{3/2}\big),
\tag{32}
\]

while the variables σ, σ̇ may leave this domain for a finite time. In [68], an ATC is proposed for relative degree two systems with time delay. As studied in [69], a continuous ATC of third order provides finite-time convergence to zero by means of a continuous control signal. However, the design of the 3-ATC gains requires knowledge of the fourth derivative of the controlled output, which is

difficult to obtain. In accordance with the proposed continuous ATC algorithm, the gains are increased to a level allowing the rejection of the perturbations and the achievement of finite-time convergence. An ATC-like 2-SMC algorithm is proposed and studied in [71]. Convergence to an ideal 2-SM is achieved, while the adaptive control gain cannot be reduced.
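The detector mechanism behind (28)–(30) can be prototyped as in the sketch below. The gain update only mimics the sign(V0 − μ) logic and deliberately omits the exact scaling and barrier term of (30), so it illustrates the adaptation idea rather than the law proven in [67]; all numerical values are assumed.

```python
import numpy as np

# Simplified illustration of the ATC detector: the gain alpha grows while V0
# in (28) exceeds the threshold mu and shrinks once the trajectory is inside
# the detection domain M. The update below only mimics sign(V0 - mu) and
# omits the exact scaling of (30); all numbers are assumed.
def V0(sigma, dsigma, alpha, gamma=1.0):
    # Lyapunov-like detector function (28), with gamma an assumed constant
    return (alpha**2 * sigma**2
            + gamma * dsigma * abs(sigma)**1.5 * np.sign(sigma)
            + alpha * abs(sigma) * dsigma**2
            + 0.25 * dsigma**4)

dt, T = 1e-4, 15.0
mu, omega1, chi, alpha_min = 0.01, 20.0, 1.0, 0.5   # assumed tuning constants
sigma, dsigma, alpha = 1.0, 0.0, 1.0

for k in range(int(T / dt)):
    t = k * dt
    phi = 1.5 * np.sin(2.0 * t)                              # bounded perturbation
    u = -alpha * (np.sign(sigma) + 0.5 * np.sign(dsigma))    # twisting law (27)
    # gain adaptation driven by the detector, in the spirit of (30)
    if alpha > alpha_min:
        alpha += omega1 * np.sign(V0(sigma, dsigma, alpha) - mu) * dt
    else:
        alpha += chi * dt
    sigma += dsigma * dt
    dsigma += (phi + u) * dt

print(f"final alpha = {alpha:.2f}, |sigma| = {abs(sigma):.2e}")
```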

4.3 Other Adaptive 2-SMC Algorithms

A continuous adaptive sub-optimal 2-SMC is proposed in [64]. The paper [72] focuses on the design of an adaptive finite-reaching-time 2-SMC for second-order dynamic systems with perturbation terms given in a regressive form. The uncertainties considered are assumed to be bounded with unknown bounds. The proposed adaptive finite-reaching-time 2-SMC not only retains robustness to these disturbances, but is also continuous and drives the sliding variable to an ideal 2-SM. An adaptive discrete-time SMC algorithm for relative degree 2 systems is studied in [73]. The output-switching discrete-time SMC achieves a real 2-SM, while the control gain operates in an increase-decrease mode.

5 Applications of Adaptive Second-Order Sliding-Mode Control (2-ASMC) Techniques

Applications of 2-ASMC include cyber-attack reconstruction in electric power networks [74], where a novel distributed dual-layer ASTW observer-based scheme is designed to isolate, reconstruct and mitigate disturbances and a class of communication attacks affecting both generator nodes and load nodes in power networks. The adaptive gains of the ASTWO obey a recently proposed dual-layer adaptation law for the STW sliding-mode architecture. A disturbance mitigation strategy is also proposed. Numerical simulations are discussed to assess the proposed distributed scheme.
A variety of 2-ASMC algorithms are applied to aerospace control problems in [75–83]. The dual-layer ASTW control algorithm [62] is employed for controlling a physically distributed, dual quaternion-based antenna formation array for cube satellite communication in [76]. The efficacy of the proposed antenna arrangement and the control algorithm is verified via simulations. A multivariable version of the dual-layer ASTW observer is applied to controlling a perturbed fixed-wing unmanned aerial vehicle (UAV) subject to unmeasurable angular rates and unknown matched/unmatched disturbances in [77]. The designed airspeed controller generates a Lipschitz-continuous control signal. The ASTW control algorithm presented in [60, 61] is applied and experimentally validated for robust attitude control of a UAV in [78]. The results are compared with the

Fig. 7 Left, schematic of Quanser 3DOF tandem helicopter. Right, experimental set-up (with the fan on the left-hand side, producing perturbations as wind gusts). ©2017 JOHN WILEY AND SON. Reprinted with permission from Chriette, A., Plestan, F., Castaneda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for UAVs - Design and experimental validation. Adaptive Control and Signal Processing 30,(8–10), 1478–1493 (2016)

Fig. 8 Adaptive super-twisting control -experimental results for trajectory tracking. ©2017 JOHN WILEY AND SON. Reprinted with permission from Chriette, A., Plestan, F., Castaneda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for UAVs Design and experimental validation. Adaptive Control and Signal Processing 30,(8–10), 1478–1493 (2016)

PID-based controller, and the advantages of the proposed ASTW control algorithm are demonstrated. Specifically, the UAV schematic considered is shown in Fig. 7 [78]. The results of an experimental study of the perturbed UAV control (ASTW [61] versus a PID controller) are presented in Figs. 7, 8, and 9 [78]. The experimental tests were conducted with a perturbation produced by the fan at around t = 100 s.

Fig. 9 PID control - experimental results for trajectory tracking. ©2017 JOHN WILEY AND SON. Reprinted with permission from Chriette, A., Plestan, F., Castaneda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for UAVs - Design and experimental validation. Adaptive Control and Signal Processing 30,(8–10), 1478–1493 (2016)

The dual-layer ASTW control proposed in [62] is applied to a three-dimensional guidance law for a missile interceptor in the presence of actuator faults in [75]. The control gain adaptation provides high-precision target interception. The trajectory tracking problem of a perturbed quadrotor formation with collision avoidance was addressed in [79] using a multivariable version of the ASTW control algorithm presented in [60, 61]. The advantages of the proposed ASTW control versus a fixed-gain SMC controller were demonstrated via simulations. The advantages of the proposed ASTW control [60, 61] used for a 2-DOF helicopter were demonstrated in [80] by comparison with a fixed-gain SMC controller via simulations. The design of airspeed and attitude ASTW controllers, in concert with an extended observer, for a perturbed fixed-wing UAV was presented in [81]. The ATC algorithm [67, 69] was applied to attitude tracking of a perturbed hypersonic reentry vehicle (HRV) and reported in [82]. The efficacy of the ATC is verified via simulations. A fast version of the ASTW control algorithm [61], based on a fast terminal sliding manifold, was used in [83] for fault-tolerant attitude control of a reusable launch vehicle in the reentry phase. The advantages of the proposed ASTW control algorithm were demonstrated via simulations. A novel adaptive second-order discrete sliding-mode control (2-ADSMC) is proposed in [84] and applied to a highly nonlinear combustion engine tracking control problem. The simulation test results show that the 2-ADSMC can improve the tracking performance by up to 80% compared with a first-order DSMC under sampling and model uncertainties.

Fig. 10 Mass-spring-damper (MSD) system: the Educational Control Products (ECP) model 210. ©2017 ELSEVIER. Reprinted with permission from Shtessel Y, Moreno J, and Fridman L. Twisting sliding mode control with adaptation: Lyapunov design, methodology and application. Automatica, 75:229–235, 2017

Fig. 11 Comparison of output tracking errors for ATC against fixed gain TC. ©2017 ELSEVIER. Reprinted with permission from Shtessel, Y., Moreno, J., Fridman, L.: Twisting Sliding Mode Control with Adaptation: Lyapunov Design, Methodology and Application. Automatica 75, 229– 235 (2017)

The ATC proposed in [67] was applied to the position control problem of a perturbed mass-spring-damper (MSD) system (see Fig. 10) and studied experimentally. The results of the experiments are shown in Figs. 11 and 12 [67].
Discussions:
• It is clear from Fig. 11 that the output tracking accuracy achieved via the proposed ATC is much better than the one guaranteed by the fixed-gain TC.

Fig. 12 Time history of the adaptive control gain. ©2017 ELSEVIER. Reprinted with permission from Shtessel, Y., Moreno, J., Fridman, L.: Twisting Sliding Mode Control with Adaptation: Lyapunov Design, Methodology and Application. Automatica 75, 229–235 (2017)

• The deep adaptation of the ATC, with gain reduction, is demonstrated in Fig. 12.
In [70], a new version of the ATC algorithm presented in [67] is proposed and applied to the position control of an electropneumatic actuator in the presence of uncertainties and disturbances. The efficacy of the proposed ATC algorithm is confirmed via simulation. Specifically, the system schematic is presented in Fig. 1 and the mathematical model in Eq. (10). In [85], an integral 2-ASMC was employed to design a controller for a five-degrees-of-freedom active magnetic bearing (AMB) system subjected to model uncertainties and bounded disturbances with unknown bounds. A numerical analysis validated the integral 2-ASMC methodology, and chattering is significantly reduced. A novel version of the fast ASTW control algorithm originally presented in [61] was applied to cable-driven manipulators under complicated unknown uncertainties in [86]. The validity and advantages of this fast ASTW control algorithm were demonstrated through comparative two-degree-of-freedom (2-DOF) simulations and experiments. Roll suppression of marine vessels subjected to harmonic wave excitations is addressed in [87] via the ASTW control algorithm [61]. Extensive simulation results are provided to verify the effectiveness of the proposed robust controller against exogenous disturbances. The efficient control of certain types of variable-speed wind turbines is accomplished via a sub-optimal adaptive 2-SMC algorithm in [88], which is based on appropriate receding-horizon adaptation time windows rather than on fixed and adjacent

ones. The effectiveness of the novel adaptive controller has been extensively assessed through computer simulations [89].

6 Higher-Order Adaptive Sliding-Mode Control (r-ASMC) Techniques

One of the first results on adaptive higher-order sliding-mode control (r-ASMC) was presented in [90], where a variety of r-ASMC algorithms were reviewed. In the series of works [31, 91, 92], continuous r-ASMC algorithms based on the equivalent control for perturbed nonlinear systems have been presented. The approach is based on a novel dual-layer nested adaptive methodology which is quite different from the schemes existing in the sliding-mode literature at that time. The adaptive schemes do not require knowledge of the minimum and maximum allowed values of the adaptive gain and, in their most general form, do not require information about the bounds on the disturbances and their derivatives. In these works the adaptive gains are not overestimated, while convergence to an ideal r-SM is provided. The proposed method is based on a combination of the Bhat and Bernstein [93] and ASTW [31] algorithms. The ASTW algorithm estimates the perturbation in finite time, and its adaptation abilities were subsequently enhanced in [92]. Specifically, the output dynamics of the system (1) of relative degree q are presented as

σ^{(q)} = φ(t) + u.   (33)

Notations for (33) can be found in [90]. The objective is to force σ, σ^{(1)}, ..., σ^{(q)} → 0 (q-SM) in finite time despite the disturbance φ(t) and without knowledge of the bounds on φ̇(t) and φ̈(t). Consider a control law comprising two parts:

u = −u_σ(t) − u_s(t)   (34)

where

\[
u_\sigma(\cdot) = \gamma_1|\sigma|^{\alpha_1}\operatorname{sign}(\sigma) + \gamma_2|\dot\sigma|^{\alpha_2}\operatorname{sign}(\dot\sigma) + \cdots + \gamma_q\big|\sigma^{(q-1)}\big|^{\alpha_q}\operatorname{sign}\big(\sigma^{(q-1)}\big)
\tag{35}
\]

and

\[
u_s(\cdot) = \frac{\dot L(t)}{L(t)}\,s(t) + \lambda(t)|s|^{1/2}\operatorname{sign}(s) + w, \qquad \dot w = \beta(t)\operatorname{sign}(s)
\tag{36}
\]

with the auxiliary sliding variable

\[
s = \sigma^{(q-1)} + \int_0^t u_\sigma(\tau)\,d\tau.
\tag{37}
\]

In (35), the parameters γ1, γ2, ..., γq > 0 must be chosen such that the polynomial p^q + γq p^{q−1} + ... + γ2 p + γ1 is Hurwitz, and the scalars α1, α2, ..., αq > 0 are chosen recursively as [93]

\[
\alpha_{i-1} = \frac{\alpha_i\,\alpha_{i+1}}{2\alpha_{i+1} - \alpha_i}, \qquad \alpha_{q+1} = 1,\ \ \alpha_q = \bar\alpha,\ \ i = 2, 3, \ldots, q.
\tag{38}
\]
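To make the recursion (38) concrete, the short Python sketch below evaluates it for the q = 4 and q = 3 cases; choosing ᾱ = 4/5 and ᾱ = 7/10 (these particular values are inferred here from the Table 1 rows shown later, not prescribed by (38)) reproduces the corresponding sets of exponents.

```python
from fractions import Fraction

def bhat_bernstein_exponents(q, alpha_bar):
    """Exponents alpha_1..alpha_q from the backward recursion (38)."""
    alpha = {q + 1: Fraction(1), q: Fraction(alpha_bar)}
    for i in range(q, 1, -1):   # compute alpha_{i-1} from alpha_i and alpha_{i+1}
        alpha[i - 1] = alpha[i] * alpha[i + 1] / (2 * alpha[i + 1] - alpha[i])
    return [alpha[i] for i in range(1, q + 1)]

# Reproduces the q = 4 exponents of Table 1: [1/2, 4/7, 2/3, 4/5]
print(bhat_bernstein_exponents(4, Fraction(4, 5)))
# And the q = 3 exponents: [7/16, 7/13, 7/10]
print(bhat_bernstein_exponents(3, Fraction(7, 10)))
```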

The gains λ(t), β(t) adapt according to the dual-layer structure in Eqs. (20), (22)–(25). The main result on dual-layer r-ASMC is presented in the following theorem.
Theorem 6 [31, 92]: Consider system (33), where the disturbance φ(t) has bounded derivatives |φ̇(t)| ≤ a1, |φ̈(t)| ≤ a2, with a1, a2 > 0 unknown. Using the control law in Eqs. (34)–(38), where the gains λ(t) = √L(t) λ0, β(t) = L(t)β0, λ0, β0 > 0 adapt according to the dual-layer Eqs. (20), (22)–(25), there exists 0 < εb < 1 such that for every ᾱ ∈ (1 − εb, 1) the origin σ = σ^{(1)} = ... = σ^{(q)} = 0 is a finite-time stable equilibrium point.
Note that
• The control law in Eqs. (34)–(38), (20), (22)–(25) is a continuous adaptive q-SMC, based on the equivalent control and integral SMC concepts.
• In the adaptation law Eqs. (20), (22)–(25) the gain r(t), and hence ρ(t), is nondecreasing but remains bounded. The gain ρ(t) can be interpreted as an upper bound on the second derivative of the disturbance, |φ̈(t)| ≤ a2. Although large-amplitude switching can occur in the right-hand side of Eq. (23) if ρ(t) is large, the gain l(t), and hence L(t), are both continuous. Consequently the changes to λ(t), β(t) in Eqs. (20) and (7) are continuous.
• Examples of coefficients for the control law component in (35) can be found in Table 1.
The efficacy of the proposed continuous q-ASMC is verified via simulations.
Example [92]: Consider the system in Eq. (33) with q = 3. The disturbance is taken for simulation purposes as φ(t) = 2 sin(t) + 0.2 sin 5t + 0.5, and the bounds on its first and second derivatives are assumed unknown for the controller design in Eqs. (34)–(38), (20), (22)–(25). In the simulations, the values from Table 1 have been used to create u_σ in Eq. (35). In the adaptive component u_s in Eq. (36) the following parameters are taken: ᾱ = 0.99, β0 = 1.1, λ0 = 2.97, τ = 0.001, γ̄ = 10, r0 = 1, l0 = 0.001, ε = 0.02, σ(0) = 1, σ̇(0) = 0.5, σ̈(0) = 0. The simulation plots are shown in Figs. 13, 14, 15, and 16.

Table 1 The coefficients for the control law component in (35). © 2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher-order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019

q   Control law component
1   −2|σ|^{1/2} sign(σ)
2   −4|σ|^{3/7} sign(σ) − 4|σ̇|^{3/5} sign(σ̇)
3   −8|σ|^{7/16} sign(σ) − 12|σ̇|^{7/13} sign(σ̇) − 6|σ̈|^{7/10} sign(σ̈)
4   −16|σ|^{1/2} sign(σ) − 32|σ̇|^{4/7} sign(σ̇) − 24|σ̈|^{2/3} sign(σ̈) − 8|σ^{(3)}|^{4/5} sign(σ^{(3)})
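The following Python sketch, a simplified stand-in rather than the full dual-layer law of Theorem 6, assembles the nominal component u_σ of (35) from the q = 3 row of Table 1 together with the auxiliary sliding variable s of (37), and closes the loop on the triple integrator (33) with the disturbance of the example above. The fixed super-twisting gains lam and beta that replace the adaptive component (36), as well as the Euler step, are assumed values.

```python
import numpy as np

# Nominal component u_sigma of (35) for q = 3 (Table 1 row 3 lists -u_sigma),
# the auxiliary sliding variable s of (37), and a fixed-gain super-twisting
# term standing in for the adaptive component (36).
def u_sigma(sig, dsig, ddsig):
    return (8.0 * abs(sig) ** (7 / 16) * np.sign(sig)
            + 12.0 * abs(dsig) ** (7 / 13) * np.sign(dsig)
            + 6.0 * abs(ddsig) ** (7 / 10) * np.sign(ddsig))

dt, T = 1e-4, 20.0
lam, beta = 6.0, 8.0                      # fixed (non-adaptive) gains, assumed
sig, dsig, ddsig = 1.0, 0.5, 0.0          # initial conditions of the example
s_int, w = 0.0, 0.0                       # integral in (37) and the STW state

for k in range(int(T / dt)):
    t = k * dt
    phi = 2.0 * np.sin(t) + 0.2 * np.sin(5.0 * t) + 0.5   # disturbance of the example
    u_nom = u_sigma(sig, dsig, ddsig)
    s = ddsig + s_int                                     # auxiliary variable (37), q = 3
    u_s = lam * np.sqrt(abs(s)) * np.sign(s) + w          # fixed-gain version of (36)
    u = -u_nom - u_s                                      # control law (34)
    # triple-integrator output dynamics (33) and the integral in (37)
    sig += dsig * dt
    dsig += ddsig * dt
    ddsig += (phi + u) * dt
    s_int += u_nom * dt
    w += beta * np.sign(s) * dt

print(f"final (sigma, dsigma, ddsigma) = ({sig:.2e}, {dsig:.2e}, {ddsig:.2e})")
```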

Discussion:
• Figure 13 demonstrates finite-time convergence of σ, σ̇, σ̈ to zero (a 3-SM) in the presence of the disturbance with bounded first and second derivatives.
• The plot in Fig. 14 demonstrates the continuity of the adaptive control function u(t) and shows the time history of the continuous adaptive 3-SMC control u(t).
• The evolution of the gains λ(t) and β(t), shown in Fig. 15, demonstrates the excellent self-tuning capabilities of the adaptive continuous 3-SMC control in terms of control gain non-overestimation. Specifically, the adaptive gain β(t) only slightly exceeds the |φ̇(t)| profile.
• The evolutions of ρ(t), δ(t) and L(t) presented in Fig. 16 show that the adaptive gain ρ(t) reaches a constant value, while the adaptive parameter L(t) follows the unknown profile |φ̇(t)|, and the variable δ(t) reaches zero in finite time, as expected.

Fig. 13 Evolution of σ, σ˙ , σ¨ . ©2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019

Fig. 14 Evolution of the control function u(t). ©2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019

A fractional-order r-ASMO was proposed for a class of nonlinear fractional-order single-input single-output systems in [97]. The concept of homogeneity is extended to fractional-order systems by applying Caputo's derivative definition. The finite-time stability of the r-ASMO is derived through the homogeneity property. The efficacy of the proposed method is demonstrated on two simulation examples. A novel r-ASMC is proposed in [98]. Its main features are gain adaptivity and the use of the integral sliding-mode concept. The gain adaptation allows a reduction of the chattering and gives a solution for controlling uncertain nonlinear systems whose uncertainties/perturbations have unknown bounds. An integral continuous r-ASMC algorithm is studied in [99]. The derivative of the control input is used in the proposed control law. The adaptive gain law eliminates the

Fig. 15 Evolution of the adaptive gains λ(t), β(t) and |φ̇(t)|. ©2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019

Fig. 16 Evolution of ρ(t), δ(t), L(t). © 2019 ELSEVIER. Reprinted with permission from Edwards C and Shtessel Y. Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst, 356:4773–4784, 2019

need for prior knowledge of the upper bound of the system uncertainties. However, the adaptive gains cannot be reduced. Simulation results demonstrate the advantages of the proposed control scheme.
The r-ASMO/differentiator is studied in [100, 101]. A constructive design paradigm to generate differentiator parameters, which provides a natural framework for simple online adaptation of the chosen gains, is proposed in [100]. The 1-ASMC algorithm presented in [20] is employed. Simulation as well as experimental results are presented to demonstrate the proposed approach. A barrier function-based adaptation algorithm for the gains of Levant's differentiator (LD) is designed in [101] for the case when the upper bound of the second derivative of the base signal exists but is unknown. This algorithm enjoys fast convergence in the absence of noise, but does not converge in the case of noisy signals. The effects of r-ASMC differentiators in a closed-loop system with output feedback are studied in [102].

7 Applications of Adaptive Higher-Order Sliding-Mode Control (r-ASMC) Techniques

A novel r-ASMC strategy is proposed in [103] to enhance system robustness to uncertainties in microgrid models established under islanding and grid-connected modes. The 3-ASMC and 2-ASMC algorithms are proposed and studied for the two considered models. The studied microgrid models consist of nominal and uncertain components. The proposed adaptation law increases and/or decreases the control gains, and real r-SM are established. The efficacy of the proposed r-ASMC algorithm was confirmed via simulations.
An r-ASMC is proposed in [104] for controlling the nonlinear single-input single-output (SISO) system that supplies air to fuel cells. Note that the uncertainties in this SISO system are bounded with unknown bounds. The 3-ASMC and r-ASMC are designed to obtain continuous input current functions for the motor control. These two ASMCs are compared experimentally.
Control of a hypersonic missile (HSM) during the terminal phase via the self-tuning continuous r-ASMC [31, 91, 92] is studied in [105]. This continuous r-ASMC algorithm protects the control gains from overestimation. Efficient and robust output tracking is confirmed via simulations in the presence of bounded perturbations in the HSM model. Note that the studied HSM is propelled by an air-breathing jet engine (usually a scramjet). The perturbed longitudinal dynamics of the HSM (see Figs. 17 and 18 [105]) are considered as

\[
\begin{aligned}
\dot V &= \frac{T\cos\alpha - D}{m} - g\sin(\theta-\alpha), \qquad \dot h = V\sin(\theta-\alpha),\\
\dot\alpha &= -\frac{L + T\sin\alpha}{mV} + q + \frac{g}{V}\cos(\theta-\alpha),\\
\dot\theta &= q, \qquad \dot q = \frac{M}{I_{yy}}, \qquad \dot\chi = V\cos(\theta-\alpha),
\end{aligned}
\tag{39}
\]

\[
L = L(\alpha, \delta_e), \quad D = D(\alpha, \delta_e), \quad M = M(T, \alpha, \delta_e), \quad T = T(\phi, \alpha).
\tag{40}
\]

Notations for Eqs. (39)–(40) can be found in [105]. The control and output vectors are given respectively as

u = [φ, δe]ᵀ,   y = [h, θ]ᵀ.   (41)

The output tracking errors eθ = θc(t) − θ(t), eh = hc(t) − h(t), where θc(t), hc(t) are the reference profiles for θ(t) and h(t), are driven to zero by means of two

Fig. 17 Systems of HSM coordinates and corresponding variables. ©2016 JOHN WILEY AND SONS. Reprinted with permission from Yu P, Shtessel Y, and Edwards C. Continuous higher order sliding mode control with adaptation of air breathing hypersonic missile. Int. J. Adapt. Control Signal Process, 30:1099–1118, 2016

Fig. 18 A geometry of the HSM-target interaction. ©2016 JOHN WILEY AND SONS. Reprinted with permission from Yu P, Shtessel Y, and Edwards C. Continuous higher order sliding mode control with adaptation of air breathing hypersonic missile. Int. J. Adapt. Control Signal Process, 30:1099–1118, 2016

decoupled continuous 3-ASMCs φ and δe, designed in accordance with Eqs. (34)–(38), (20), (22)–(25), in the presence of the uncertainties Δm, ΔIyy, ΔL and ΔD with bounded first and second derivatives with unknown bounds. The simulation results of the HSM in the terminal phase using the continuous 3-ASMCs are presented in Figs. 19, 20, 21, 22, 23, 24, 25, and 26.

Fig. 19 Altitude Tracking. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Fig. 20 Pitch Angle Tracking. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Discussion:
• High-accuracy attitude tracking via continuous 3-ASMC is demonstrated in Figs. 19, 20 and 21.
• Figures 23, 24 and 25 demonstrate the evolution of the non-overestimated (first-layer) adaptive gains.
• Maximum target penetration is provided, as can be seen in Figs. 19, 20, 21 and 26, where the altitude, pitch, and downrange distance errors are near zero. It should be noted that the downrange distance accuracy is within 1.67 ft of the desired target location.

Fig. 21 Altitude and Pitch Angle Errors. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Fig. 22 Control Inputs. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Fig. 23 Adaptive first layer control gains. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Fig. 24 Adaptive control terms that are hidden behind integrals. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards,“Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Fig. 25 Adaptive second layer control gains. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

Fig. 26 Downrange tracking error. ©2016 JOHN WILEY AND SONS. Reprinted with permission from P. Yu, Y. Shtessel, and C. Edwards, “Continuous Higher Order Sliding Mode Control with Adaptation of Air Breathing Hypersonic Missile,” Int. J. Adapt. Control Signal Process, Vol. 30, Issues 8–10, August-October 2016, pp. 1099–1118

• The time history of the control inputs in Fig. 22 shows continuous control functions. The fuel-air ratio control saturates at zero for a short time in order to stay within the fuel/air ratio limits of [0, 1.5].
The modified r-ASMC presented in [31, 91, 92] is used for controlling hypersonic-entry vehicles in [106, 107]. The robustness is verified with respect to perturbations in terms of initial conditions, atmospheric density variations, as well as mass and aerodynamic uncertainties. Results show that the approach is valid, leading to an accurate disturbance reconstruction, a better transient, and good tracking performance. Improvements of about 50% in terms of altitude and range errors with respect to the corresponding standard 1-SMC approach are achieved. A novel continuous r-ASMC with impulsive action [108], based on the theoretical development presented in [31, 91, 92, 109], is applied to a hypersonic glider autopilot. This autopilot and the control approach include a robust continuous r-ASMC augmented by impulsive reaction control thrusters. Control gain adaptation enables the vehicle's bounded uncertainties and perturbations to be addressed without overestimating the control gains. The impulsive augmentation of the continuous r-ASMC provides almost instantaneous convergence, thereby mitigating the risk of control loss caused by sideslip angle departures due to poor transversal stability and restricted lateral control authority. The results are validated via simulations of realistic scenarios.

8 Conclusions

The continued growth in the area of adaptive sliding-mode and higher-order sliding-mode control during the last 15 years, both from the theoretical and the practical point of view, is evident. For uncertain dynamic systems, practically validated theoretical developments are available. However, research activity is ongoing to fully disclose the possibilities offered by adaptive sliding-mode and higher-order sliding-mode approaches and to develop constructive adaptive gain algorithms that prevent control gain overestimation while achieving the ultimate convergence time and accuracy. Based on the strong theoretical foundations outlined in this survey chapter, in concert with their success in real applications, it can be anticipated that adaptive sliding-mode control techniques will continue to have an increasing impact on both the theoretical and the practical development of adaptive sliding-mode and higher-order sliding-mode control.

References

1. Utkin, V.I.: Sliding Modes in Control and Optimization. Springer (1992) 2. Edwards, C., Spurgeon, S.: Sliding Mode Control: Theory and Applications. CRC Press (1998) 3. Shtessel, Y., Edwards, C., Fridman, L., Levant, A., et al.: Sliding Mode Control and Observation, vol. 10. Springer (2014)

4. Utkin, V., Lee, J.: Chattering problem in sliding mode control systems. In: Proceedings of the 9 th IEEE Workshop on Variable Structure Systems, pp. 346–350 (2006) 5. Levant, A.: Chattering analysis. IEEE Trans. Aut. Contr 55, 1380–1389 (2010) 6. Isidori, A.: Nonlinear Control Systems. Springer (1989) 7. Levant, A.: Higher order sliding modes, differentiation and output feedback control. Int. J. Control 76, 924–941 (2003) 8. Levant, A.: Quasi-continuous higher order sliding mode controllers. IEEE Trans. Aut. Contr 50, 1812–1816 (2006) 9. Shtessel, Y., Zinober, A.S.I., Shkolnikov, I.A.: Sliding mode control of boost and buck-boost power converters using method of stable system centre. Automatica 39(6), 1061–1067 (2003) 10. Shtessel, Y., Baev, S., Biglari, H.: Unity power factor control in 3-phase ac/dc boost converter using sliding modes. IEEE Trans. Ind. Electron. 55, 3874–3882 (2008) 11. Utkin, V.: Sliding mode control principles and application to electric drives. IEEE Trans. Ind. Electron. 40, 23–36 (1993) 12. Shtessel, Y., Buffington, J., Banda, S.: Tailless aircraft flight control using multiple time scale re-configurable sliding modes. IEEE Trans. Control Syst. Technol. 10, 288–296 (2002) 13. Shtessel, Y., Shkolnikov, I., Levant, A.: Smooth second order sliding modes: Missile guidance application. Automatica 43, 1470–1476 (2007) 14. Slotine, J.-J., Sastry, S.: Tracking control of nonlinear system using sliding surfaces, with application to robot manipulators. Int. J. Control 38, 465–492 (1983) 15. Utkin, V.: Discussion aspects of high-order sliding mode control. IEEE Trans. Autom. Control 61, 829–833 (2016) 16. Martínez-Fuentes, C., Pérez-Ventura, U., Fridman, L.: Chattering analysis for lipschitz continuous sliding-mode controllers. Int. J. Robust Nonlinear Control 31, 3779–3794 (2021) 17. Shtessel, Y., Fridman, L., Plestan, F.: Adaptive sliding mode control and observation. Int. J. Control 89, 1743–1746 (2016) 18. Cruz-Ancona, C., Estrada, M., Fridman, L., Obeid, H., Laghrouche, S.: Adaptive continuous controllers ensuring prescribed ultimate bound for uncertain dynamical systems. IFAC-Papers On Line 53, 5063–5068 (2020) 19. Huang, Y., Kuo, T., Chang, S.: Adaptive sliding-mode control for nonlinear systems with uncertain parameters. IEEE Trans. Syst. Man Cybern. 38, 534–539 (2008) 20. Plestan, F., Shtessel, Y., Brégeault, V., Poznyak, A.: New methodologies for adaptive sliding mode control. Int. J. Control 83, 1907–1919 (2010) 21. Jean-Jacques, E.S., Weiping, L., et al.: Applied Nonlinear Control. Prentice Hall Englewood Cliffs, NJ (1991) 22. Slotine, J., Coetsee, J.: Adaptive sliding controller synthesis for non-linear systems. Int. J. Control 43, 1631–1651 (1986) 23. Oliveira, T., Cunha, J., Hsu, L.: Adaptive sliding mode control for disturbances with unknown bounds. In: Proceedings of the 14th International Workshop on Variable Structure Systems, pp. 59–64 (2016) 24. Oliveira, T., Cunha, J., Wang, X.: Adaptive sliding mode control based on the extended equivalent control concept for disturbances with unknown bounds. In: Advances in Variable Structure Systems and Sliding Mode Control-Theory and Applications. Studies in Systems, Decision and Control, p. 115 (2018) 25. Utkin, V., Spurgeon, S.: Adaptive sliding mode control. In: Advances in Sliding Mode Control. Lecture Notes in Control and Information Sciences, p. 440 (2013) 26. Bartolini, G., Levant, A., Plestan, F., Taleb, M., Punta, E.: Adaptation of sliding modes. IMA J. Math. Control Inf. 30, 285–300 (2013) 27. 
Hsu, L., Oliveira, T., Paulo, J., Cunha, V., Yan, L.: Adaptive unit vector control of multivariable systems using monitoring functions. Int. J. Robust Nonlinear Control 29, 583–600 (2019) 28. Roy, S., Baldi, S., Fridman, L.: On adaptive sliding mode control without a priori bounded uncertainty. Automatica 111 (2020) 29. Cong, B., Chen, Z., Liu, X.: On adaptive sliding mode control without switching gain overestimation. Int. J. Robust Nonlinear Control 24, 515–531 (2014)

30. Roy, S., Roy, S., Lee, J., Baldi, S.: Overcoming the underestimation and overestimation problems in adaptive sliding mode control. IEEE/ASME Trans. Mechatron. 24, 2031–2039 (2019) 31. Edwards, C., Shtessel, Y.: Adaptive continuous higher order sliding mode control. Automatica 65, 183–190 (2016) 32. Lin, L., Liu, Z., Kao, Y., Xu, R.: Observer-based adaptive sliding mode control of uncertain switched systems. IET Control Theory Appl. 14, 28 (2020) 33. Guo, J., Liu, Y., Zhou, J.: New adaptive sliding mode control for a mismatched secondorder system using an extended disturbance observer. Trans. Inst. Meas. Control 41, 276–284 (2019) 34. Obeid, H., Fridman, L., Laghrouche, S., Harmouche, M.: Barrier function-based adaptive sliding mode control. Automatica 93, 540–544 (2018) 35. Rodrigues, V., Hsu, L., Oliveira, T., Fridman, L.: Adaptive sliding mode control with guaranteed performance based on monitoring and barrier functions. Int. J. Adapt. Control Sig. Process 1 (2021) 36. Bartolini, G., Ferrara, A., Utkin, V.: Adaptive sliding mode control in discrete-time systems. Automatica 31, 769–773 (1995) 37. Steinberger, M., Horn, M., Ferrara, A.: Discrete-time model reference adaptive sliding mode control for systems in state-space representation. In: Proceedings of the IEEE 58th Conference on Decision and Control, pp. 6007–6012 (2019) 38. Ibarra, L., Rosales, A., Ponce, P., Molina, A.: Adaptive smc based on the dynamic containment of the sliding variable. J. Frankl. Inst. 358, 1422–1447 (2021) 39. Xu, H., Mirmirani, M., Ioannou, P.: Adaptive sliding mode control design for a hypersonic flight vehicle. J. Guidance Control Dyn. 27, 829–929 (2004) 40. Li, P., Yu, X., Zhang, Y., Peng, X.: Adaptive multivariable integral tsmc of a hypersonic gliding vehicle with actuator faults and model uncertainties. IEEE/ASME Transa. Mechatron. 22, 2723–2735 (2017) 41. Guo, F., Lu, P.: Improved adaptive integral-sliding-mode fault-tolerant control for hypersonic vehicle with actuator fault. IEEE Access 9, 46143–46151 (2021) 42. Gao, Y., Zhao, Q., Xiao, L.: A robust adaptive sliding mode control method for attitude control of the quad-rotor. Adv. Mater. Res. 852, 391–395 (2014) 43. Shang, W., Jing, G., Zhang, D., Chen, T., Liang, Q.: Adaptive fixed time nonsingular terminal sliding-mode control for quadrotor formation with obstacle and inter-quadrotor avoidance. IEEE Access 9, 60640–60657 (2021) 44. Mofid, O., Mobayen, S.: Adaptive sliding mode control for finite-time stability of quadrotor uavs with parametric uncertainties. ISA Trans. 72, 1–14 (2018) 45. Mancini, M., Capello, E.: Adaptive sliding mode-based control system for flexible spacecraft. In: Proceedings of American Control Conference, pp. 2968–2973 (2021) 46. Bae, J., Kim, Y.: Adaptive controller design for spacecraft formation flying using sliding mode controller and neural networks. J. Frankl. Inst. 349, 578–603 (2012) 47. Yao, B., Tomizuka, M.: Smooth robust adaptive sliding mode control of manipulators with guaranteed transient performance. J. Dyn. Sys. Meas. Control 118, 764–775 (1996) 48. Baek, J., Kwon, W.: Practical adaptive sliding-mode control approach for precise tracking of robot manipulators. Appl. Sci. 10, 1–16 (2020) 49. Br’egeaul, V., Plestan, F., Shtessel, Y., Poznyak, A.: Adaptive sliding mode control of electropneumatic actuator. In: Proceedings of 11th International Workshop on Variable Structure Systems, pp. 260–265 (2010) 50. 
Nateghi, S., Shtessel, Y., Edwards, C.: Resilient control of cyber-physical systems under sensor and actuator attacks driven by adaptive sliding mode observer. Int. J, Robust Nonlinear Control (2021) 51. Huang, X., Zhai, D., Dong, J.: Adaptive integral sliding-mode control strategy of data-driven cyber-physical systems against a class of actuator attacks. IET Control Theory Appl. 12 (2018) 52. Basin, M., Guerra-Avellaneda, F., Shtessel, Y.: Stock management problem: Adaptive fixedtime convergent continuous controller design. IEEE Trans. Syst. Man Cybern. Syst. 50, 4974– 4983 (2020)

53. Menon, P., Edwards, C., Shtessel, Y.: An adaptive sliding mode observer for a complex network of dynamical systems. Int. J. Adapt. Control Sig. Process 30, 1465–1478 (2016) 54. Chu, Z., Zhu, D., Yang, S., Jan, G.: Adaptive sliding mode control for depth trajectory tracking of remotely operated vehicle with thruster nonlinearity. J. Navig. 70, 149–164 (2017) 55. Cristi, R., Papoulias, F., Healey, A.: Adaptive sliding mode control of autonomous underwater vehicles in the dive plane. IEEE J. Ocean. Eng. 15, 152–160 (1990) 56. Nhu, N.T.H., Vu, M., Nguyen, N., Mung, N., Hong, S.: Finite-time stability of mimo nonlinear systems based on robust adaptive sliding control: Methodology and application to stabilize chaotic motions. IEEE Access 9, 21759–21768 (2021) 57. Dadras, S., Momeni.: Adaptive sliding mode control of chaotic dynamical systems with application to synchronization. Math. Comput. Simul. 80, 2245–2257 (2010) 58. Messali, A., Ghanes, M., Hamida, M., Koteich, M.: A resilient adaptive sliding mode observer for sensorless ac salient pole machine drives based on an improved hf injection method. Control Eng. Pract. 93, 104163 (2019) 59. Xia, Y., Xu, K., Li, Y., Xu, G., Xiang, X.: Modeling and three-layer adaptive diving control of a cable-driven underwater parallel platform. IEEE Access 6, 24016–24034 (2018) 60. Shtessel, Y., Moreno, J., Plestan, F., Fridman, L., Poznyak, A.: Super-twisting adaptive sliding mode control: A lyapunov design. In: Proceedings of the 49th IEEE Conference on Decision and Control, pp. 5109–5113 (2010) 61. Shtessel, Y., Taleb, M., Plestan, F.: A novel adaptive-gain super-twisting sliding mode controller: methodology and application. Automatica 48, 759–769 (2012) 62. Edwards, C., Shtessel, Y.: Adaptive dual layer super-twisting control and observation. Int. J. Control 89, 1759–1766 (2016) 63. Utkin, V., Poznyak, A.: Adaptive sliding mode control with application to super-twist algorithm: Equivalent control method. Automatica 49(1), 39–47 (2013) 64. Wang, Z., Yuan, J., Pan, Y.: Adaptive second-order sliding mode control: A unified method. Trans. Inst. Meas. Control 40, 1927–1935 (2018) 65. Obeid, H., Laghrouche, S., Fridman, L., Chitour, Y., Harmouche, M.: Barrier function-based adaptive super-twisting controller. IEEE Trans. Autom. Control 65, 4928–4933 (2020) 66. Basin, M., Panathula, C., Shtessel, Y.: Adaptive uniform fixed-time convergent second order sliding mode control. Int. J. Control 89, 1777–1787 (2016) 67. Shtessel, Y., Moreno, J., Fridman, L.: Twisting sliding mode control with adaptation: Lyapunov design, methodology and application. Automatica 75, 229–235 (2017) 68. Liu, G., Zinober, A., Shtessel, Y., Niu, Q.: Adaptive twisting sliding mode control for the output tracking in time delay systems. Australian J. Electr. Electron. Eng. 9, 217–224 (2012) 69. Mendoza-Avila, J., Moreno, J., Fridman, L.: Adaptive continuous twisting algorithm of third order. In: Proceedings of the 2018 15th International Workshop on Variable Structure Systems, pp. 144–149 (2018) 70. Taleb, M., Levant, A., Plestan, F.: Twisting algorithm adaptation for control of electropneumatic actuators. In: Proceedings of the 12th International Workshop on Variable Structure Systems, pp. 178–183 (2012) 71. Bartolini, G., Levant, A., Pisano, A., Usai, E.: Adaptive second-order sliding mode with uncertainty compensation. Int. J. Control 89, 1747–1758 (2016) 72. 
Shtessel, Y., Kochalummoottil, J., Edwards, C., Spurgeon, S.: Continuous adaptive finite reaching time control and second order sliding modes. IMA J. Math. Control Inf. 30, 97–113 (2013) 73. Estrada, A., Plestan, F., Allouche, B.: An adaptive version of a second order sliding mode output feedback controller. In: Proceedings of the European Control Conference, pp. 3228– 3233 (2013) 74. Rinaldi, G., Menon, P., Edwards, C., Ferrara, A., Shtessel, Y.: Adaptive dual-layer supertwisting sliding mode observers to reconstruct and mitigate faults and attacks in power networks. Automatica 129 (2021) 75. Ma, Q., Ji, Y., Niu, Z., Gou, Q., Zhao, L.: Impact angle constraint dual-layer adaptive guidance law with actuator faults. IEEE Access 8, 115823–115836 (2020)

76. Nixon, M., Shtessel, Y.: Adaptive double-layer continuous super-twisting control of a satellite formation. In: Proceedings of the AIAA SciTech Forum, pp. 2021–0560 (2021) 77. Zhang, C., Zhang, G., Dong, Q.: Multi-variable finite-time observer-based adaptive-gain sliding mode control for fixed-wing uav. IET Control Theory Appl. 15, 223–247 (2021) 78. Chriette, A., Plestan, F., Castañeda, H., Pal, M., Guillo, M., Odelga, M., Rajappa, S., Chandra, R.: Adaptive robust attitude control for uavs -design and experimental validation. Adapt. Control Sig. Process. 30, 1478–1493 (2016) 79. Zhang, G., Shang, W., Jing, G., Chen, T., Liang, Q.: Multi-layer adaptive finite time super twisting control for quaternion-based quadrotor formation with obstacle avoidance. IEEE Access 8, 213062–213077 (2020) 80. Bkekri, R., Benamor, A., Messaoud, H.: Adaptive super-twisting sliding mode controller for 2-dof helicopter. In: Proceedings of the International Conference on Control, Automation and Diagnosis, pp. 1–6 (2019) 81. Herman, C., Oscar, S.S., de León-Morales, J.: Extended observer based on adaptive second order sliding mode control for a fixed wing uav. ISA Trans. 66, 226–232 (2017) 82. Guo, Z., Chang, J., Guo, J., Zhou, J.: Adaptive twisting sliding mode algorithm for hypersonic reentry vehicle attitude control based on finite-time observer. ISA Trans. 77, 20–29 (2018) 83. Zhang, Y., Tang, S., Guo, J.: Adaptive-gain fast super-twisting sliding mode fault tolerant control for a reusable launch vehicle in reentry phase. ISA Trans. 71, 380–390 (2017) 84. Amini, M., Shahbakhti, M., Pan, S., Hedrick, J.: Discrete adaptive second order sliding mode controller design with application to automotive control systems with model uncertainties. In: Proceedings of the American Control Conference (ACC), pp. 4766–4771 (2017) 85. Syed, S., Amrr, M., Nabi, M.: Adaptive second order sliding mode control for the regulation of active magnetic bearing. IFAC Papers-On-Line 53, 1–6 (2020) 86. Yaoyao, W., Kangwu, Z., Fei, Y., Bai, C.: Adaptive super-twisting nonsingular fast terminal sliding mode control for cable-driven manipulators using time-delay estimation. Adv. Eng. Soft. 128, 113–124 (2019) 87. Lee, S., Hong, B., Xu, X., You, P.S.: Roll suppression of marine vessels using adaptive supertwisting sliding mode control synthesis. Ocean Eng. 195, 106724 (2020) 88. Pisano, A., Tanelli, M., Ferrara, A.: Switched/time-based adaptation for second-order sliding mode control. Automatica 64, 126–132 (2016) 89. Evangelista, C., Pisano, A., Puleston, P., Usai, E.: Receding horizon adaptive second-order sliding mode control for doubly-fed induction generator based wind turbine. IEEE Trans. Control Syst. Technol. 25, 73–84 (2017) 90. Plestan, F., Brégeault, V., Glumineau, A., Shtessel, Y., Moulay, E.: Advances in high order and adaptive sliding mode control -theory and applications. In: Lecture Notes in Control and Information Sciences, p. 412 (2011) 91. Edwards, C., Shtessel, Y.: Continuous higher order sliding mode control based on adaptive disturbance compensation. In: Proceedings of the 13th International Workshop on Variable Structure Systems (VSS), pp. 1–5 (2014) 92. Edwards, C., Shtessel, Y.: Enhanced continuous higher order sliding mode control with adaptation. J. Frankl. Inst. 356, 4773–4784 (2019) 93. Bhat, S., Bernstein, D.: Geometric homogeneity with applications to finite-time stability. Math. Control Sig. Syst. 17, 101–127 (2005) 94. 
Li, P., Yu, X., Xiao, B.: Adaptive quasi-optimal higher order sliding-mode control without gain overestimation. IEEE Trans. Ind. Inf. 14, 3881–3891 (2018) 95. Laghrouche, S., Harmouche, M., Chitour, Y., Obeid, H., Fridman, L.: Barrier functionbased adaptive higher order sliding mode controllers. Automatica 123, 109355 (2021) 96. Obeid, H., Laghrouche, S., Fridman, L.: Dual layer barrier functions based adaptive higher order sliding mode control. Int. J. Robust Nonlinear Control 31, 3795–3808 (2021) 97. Fanaee, N.: Adaptive finite time high-order sliding mode observer for non-linear fractional order systems with unknown input. Asian J. Control 23, 1083–1096 (2021) 98. Taleb, M., Plestan, F., Bououlid, B.: An adaptive solution for robust control based on integral high-order sliding mode concept. Int. J. Robust Nonlinear Control 25, 1201–1213 (2015)

Adaptive Sliding Mode and Higher-Order Sliding-Mode Control Techniques …

305

99. Mondal, S., Mahanta, C.: Adaptive integral higher order sliding mode controller for uncertain systems. J. Control Theory Appl. 11, 61–68 (2013) 100. Reichhartinger, M., Spurgeon, S.: An arbitrary-order differentiator design paradigm with adaptive gains. Int. J. Control 91, 2028–2042 (2018) 101. Obeid, H.F., Laghrouche, S., Harmouche, M., Galkani, A.: Adaptation of levant’s differentiator based, on barrier function. Int. J. Control 91, 2019–2027 (2018) 102. Oliveira, T., Rodrigues, V., Fridman, L.: Generalized model reference adaptive control by means of global hosm differentiators. IEEE Trans. Autom. Control 64 (2019) 103. Han, Y., Ma, R., Cui, J.: Adaptive higher-order sliding mode control for islanding and gridconnected operation of a microgrid. Energies 11, 1459 (2018) 104. Laghrouche, S., Harmouche, M., Ahmed, F., Chitour, Y.: Control of pemfc air-feed system using lyapunov-based robust and adaptive higher order sliding mode control. IEEE Trans. Control Syst. Technol. 23, 1594–1601 (2015) 105. Yu, P., Shtessel, Y., Edwards, C.: Continuous higher order sliding mode control with adaptation of air breathing hypersonic missile. Int. J. Adapt. Control Sig. Process 30, 1099–1118 (2016) 106. Sagliano, M., Mooij, E., Theil, S.: Adaptive disturbance-based high-order sliding-mode control for hypersonic-entry vehicles. J. Guidance Control Dyn. 40, 521–536 (2017) 107. Zong, Q., Wang, J., Tao, Y.: Adaptive high-order dynamic sliding mode control for a flexible air-breathing hypersonic vehicle. Int. J. Robust Nonlinear Control 23, 1718–1736 (2013) 108. Tournes, C., Shtessel, Y., Spenser, A.: Hypersonic glider autopilot using adaptive higher order sliding mode control with impulsive actions. Am. J. Aerosp. Eng. 5, 71–86 (2018) 109. Shtessel, Y., Glumineau, A., Plestan, F., Aldukali, F.: Hybrid-impulsive second-order sliding mode control: Lyapunov approach. Int. J. Robust Nonlinear Control 27, 1064–1093 (2017)

Unit Vector Control with Prescribed Performance via Monitoring and Barrier Functions Victor Hugo Pereira Rodrigues, Liu Hsu, Tiago Roux Oliveira, and Leonid Fridman

Abstract This chapter proposes an adaptive unit vector control approach via output feedback for a class of multivariable nonlinear systems. Fixed-time convergence of the output to a predefined neighborhood of the origin of the closed-loop system is proved with guaranteed performance even with parametric uncertainties and (un)matched disturbances with unknown upper bounds. During the transient phase, the monitoring function ensures the specification of the settlement time and maximum overshoot. In the steady state, the barrier function confines the output of the plant within a small prescribed vicinity of the origin. Simulation results including an application to an overhead crane system illustrate the advantages of the proposed adaptive control strategy.

1 Introduction A monitoring function is an adaptive tool that was initially designed to correct the control direction (sign of the control feedback) in plants whose sign of the highfrequency gain is unknown [1]. In this approach, the change of the control direction occurs whenever the norm of the output error and the monitoring function meet. The correct control direction is identified after a finite number of switchings (changes), taking the tracking error to zero [2–5]. V. H. P. Rodrigues (B) · L. Hsu Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, RJ 21941-972, Brazil e-mail: [email protected] L. Hsu e-mail: [email protected] T. R. Oliveira State University of Rio de Janeiro (UERJ), Rio de Janeiro, RJ 20550-900, Brazil e-mail: [email protected] L. Fridman National Autonomous University of Mexico (UNAM), 04510 Mexico, CDMX, Mexico e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_12

307

308

V. H. P. Rodrigues et al.

In [6], the monitoring function allows the design of an adaptive sliding-mode control law for perturbed plants with disturbances of unknown bounds. The strategy could guarantee the convergence of the output error to a residual set, but, as well as the reference [7], the residual set could not be previously specified. The monitoring function proposed in [8] was applied in the design of an adaptive control law based on a unit vector for multivariable uncertain plants. The approach can guarantee the convergence of the output error to a prespecified residual set, but the chattering could not be avoided. Another adaptive tool that has been successfully applied to a vast class of adaptive sliding-mode controllers is the barrier function. The barrier function, in a closed loop, without using an upper bound for the disturbances, confines the output of the plant into a small vicinity of the origin while ensuring, at the same time, a control gain not overestimated. For instance, in [9, 10] such a strategy is applied to a class of first-order sliding-mode controllers (first-order and integral); in [11, 12], to the case of second-order sliding-mode controllers (twisting and variable gain super twisting) and in [13, 14], to algorithms for adaptation of Levant’s differentiator gains [15]. In all above-mentioned papers, only uncertain single-input/single-output plants are considered without ensuring the specification of the transient phase (prespecified fixed-time convergence to the residual set and overshoot constraint). In this chapter, by combining monitoring and barrier functions, we propose a new adaptive (multivariable) unit vector control (see [16]) able to guarantee the stability and prescribed performance of the closed-loop system. During the transient phase, the monitoring function ensures the specification of the settlement time and maximum overshoot. In the steady state, the barrier function confines the output of the plant within a small prescribed vicinity of the origin. The main advantages of the proposed monitoring-barrier adaptive sliding-mode controller are: • • • •

The guarantee of a prespecified maximum overshoot in the reaching phase; The output convergences to residual phase in a prespecified fixed time; The control gain is not overestimated; It does not require the knowledge of upper bounds for matched and unmatched disturbances.

In the application side, overhead cranes are widely used to move the large/heavy objects horizontally for either manufacturing or maintenance practices in many industrial environments, such as ocean engineering, nuclear industries, and airports [17]. Indeed, it is not an easy task to control a crane system since naturally the crane acceleration, required for motion, always induces undesirable load swing. The acceleration and the deceleration of the overhead crane lead to swings of the payload; these swings can be dangerous and may cause damage and/or accidents [18]. The crane control consists of a crane motion and load hoisting control as well as payload swing suppression. In the literature, several attempts have been made to control the load swing [17–21]. In this context, our adaptive scheme seems appropriate to be applied to an overhead crane system ensuring the closed-loop properties of fixed-time convergence and prescribed performance via monitoring and barrier functions.

Unit Vector Control with Prescribed Performance via Monitoring …

309

The chapter is organized as follows. Section 2 formulates the control problem to be solved. The basic techniques, namely, state-norm observers, monitoring function, barrier function are introduced in Sect. 3. Particularly, in Sect. 3.4 the proposed adaptive unit vector control is developed. Stability and convergence results are shown in Sect. 4. Examples are presented in Sect. 5, including both academic plants as well as an application to overhead cranes. Section 6 concludes the chapter.

2 Problem Formulation Consider the uncertain MIMO nonlinear plant in regular form η(t) ˙ = A11 η(t) + A12 σ (t) + d1 (η(t) , σ (t) , t) , σ˙ (t) = A21 η(t) + A22 σ (t) + d2 (η(t) , σ (t) , t) + B2 u ,

(1) (2)

where the unmeasured part state is η(t) ∈ Rn−m , the output is σ (t) ∈ Rm , the state is x T (t) := [η T (t) , σ T (t)] ∈ Rn , the input control vector is u ∈ Rm and the disturbances are given by the mappings d1 : Rn−m × Rm × R+ → Rn−m and d2 : Rn−m × Rm × R+ → Rm . Throughout the chapter, the assumptions are: (A1) The constant matrices A11 ∈ R (n−m)×(n−m) , A12 ∈ R (n−m)×m , A21 ∈ R m×(n−m) , A22 ∈ R m×m and B2 ∈ Rm×m are uncertain. (A2) The matrix A11 is Hurwitz. (A3) The disturbances d1 (x(t) , t) and d2 (x(t) , t) are Lipschitz in x, piece-wise continuous in t and uniformly bounded by unknown constants d¯1 and d¯2 such that ||d1 (x, t)|| ≤ d¯1 and ||d2 (x, t)|| ≤ d¯2 , for all t ≥ 0. (A4) Exist a matrix S p such that the high-frequency gain matrix is defined by K p := B2 S p and −K p is Hurwitz. The control objective is, by using only output feedback, the confinement of σ (t) into the inside of an -vicinity of the origin such that ||σ (t)|| ≤  for all t ≥ ts .

3 Basic Techniques In this section, basic techniques for the proposed control strategy are introduced.1

1

©2021 John Wiley & Sons, Inc.. Reprinted with permission from Victor Hugo Pereira Rodrigues, Liu Hsu, Tiago Roux Oliveira, and Leonid Fridman, Adaptive sliding-mode control with guaranteed performance based on monitoring and barrier functions, International Journal of Adaptive Control and Signal Processing, 36 (2021), pp. 1252–1271.

310

V. H. P. Rodrigues et al.

3.1 Norm Observer The norm observer, also called first-order approximation filter (FOAF) [22], is an important tool that, by using only output information, allows us to design an upper bound for the norm of the unmeasured state variable η(t). The Lemma 1 demonstrates how the solution of ˙¯ = −k1 η(t) ¯ + k2 (||σ (t)|| + d¯1 ) , η(t)

(3)

¯ + πη (t) , ∀t ≥ 0 , η(t) ≤ k3 |η(t)|

(4)

with

where k1 , k2 and k3 are appropriated constants while πη (t) is an exponentially decreasing term and dependents on the initial conditions η(0) and η(0). ¯ Lemma 1 Consider the η-dynamics in (1) and suppose the assumptions (A1)–(A3) are satisfied. Then, η(t) ¯ in (3) is a norm observer of η(t) satisfying (4). Proof Consider the following candidate Lyapunov function Vη (t) = η(t)T Pη η(t) ,

Pη = PηT > 0 ,

(5)

Pη being the solution of the Lyapunov equation T Pη + Pη A11 = −Q η , A11

Q η = Q ηT > 0 ,

(6)

λmin {Pη }η(t)2 ≤ Vη ≤ λmax {Pη }η(t)2 ,

(7)

and the Rayleigh-Ritz inequality,

where λmin {·} and λmax {·} denote, respectively, the minimum and maximum eigenvalue of a given matrix. By taking the time derivative of (5) with (1) and (6), one arrives at ˙ V˙η (t) = η˙ T (t)Pη η(t) + η(t)T Pη η(t) T T T = η T (t)(A11 Pη + Pη A11 )η(t) + 2σ T (t)A12 Pη η(t) + 2d1T (x, t)A12 Pη η(t) T T = −η T (t)Q η η(t) + 2σ T (t)A12 Pη η(t) + 2d1T (x, t)A12 Pη η(t) .

(8)

Then, Eq. (8) is upper bounded by V˙η (t) ≤ −λmin {Q η }η(t)2 + 2A12 Pη (σ (t) + d1 (x, t))η(t) = −λmin {Q η }η(t)2 + 2A12 λmax {Pη }(σ (t) + d1 (x, t))η(t) , (9)

Unit Vector Control with Prescribed Performance via Monitoring …

311

by using the inequality (7) and (A4),  λmin {Q η } 2A12 λmax {Pη } Vη + (σ (t) + d1 (x, t)) Vη V˙η (t) ≤ − λmax {Pη } λmin {Pη }  λmin {Q η } 2A12 λmax {Pη } ≤− Vη + (σ (t) + d¯1 ) Vη . λmax {Pη } λmin {Pη } Now, defining η˜ :=



(10)

V˙η Vη whose time derivative is η˙˜ =  , is possible to upper 2 Vη

bound it as A12 λmax {Pη } ˙˜ ≤ − λmin {Q η } η(t) ˜ + (σ (t) + d¯1 ) . η(t) 2λmax {Pη } λmin {Pη } Now, it is defined the constants 0 < k1 < bound for (11) is given by

λmin {Q η } 2λmax {Pη }

and k2 >

A12 λmax {Pη } , λmin {Pη }

˙˜ ≤ −k1 η(t) η(t) ˜ + k2 (σ (t) + d¯1 ) .

(11) and upper

(12)

Then, invoking the Comparison Lemma [23, p. 102], the solution η(t) ¯ of (3) is an upper bound for η(t) ˜ such that ˜ − η(0)) ¯ , η(t) ˜ ≤ η(t) ¯ + exp(−k1 t)(η(0)

(13)

˜ + |η(0)|) ¯ , η(t) ˜ ≤ |η(t)| ¯ + exp(−k1 t)(|η(0)|

(14)

consequently,

and, by using the Rayleigh-Ritz inequality (7), (|η(0)| ˜ + |η(0)|) ¯ 1 η(t) ≤  |η(t)| ¯ + exp(−k1 t)  , λmin {Pη } λmin {Pη }

(15)

therefore the inequality (4) is satisfied for any k3 > 

1 λmin {Pη }

and πη (t) > exp(−k1 t)

(|η(0)| ˜ + |η(0)|) ¯  . λmin {Pη }



¯ cannot be used. To overNotice, from assumption (A5), d¯1 is unknown then η(t) come this problem, in the next section we introduce a hybrid state-norm estimation [5] based on a monitoring function [8].

312

V. H. P. Rodrigues et al.

3.2 Monitoring Function—Reaching Phase The monitoring function is a switching scheme that has been used to provide alternatives to the lack of information such as in scenarios where the disturbances have unknown bounds [6, 24] or the control direction is unknown [1]. Here, inspired by [8], we present a monitoring function version able to ensure specifications of performance criteria for the closed-loop system. Definition 1 It is said the output σ (t) satisfies the reaching and residual phase specifications, if • σ (t) ≤ σ (0) + , ∀t ∈ [0 , T ), • σ (t) ≤  , ∀t ≥ T , where the parameters which can be freely specified are given by the overshoot  > 0, the transient time T > 0, and the maximum allowed steady-state error ε > 0. As present in [8, 25], in the reaching phase the monitoring function should forces σ (t) ≤ ε/r2 before a fixed-time T ensuring non-infringement of the overshoot specification. The monitoring switching scheme is such that for every k for which σ (tk ) > ε/r2 , the next switching instant is defined as

tk+1

  ⎧ 1 ⎪ σ (t) = σ (0) +  1 − ⎪ r1k ⎨ := min t > tk : , k = 1, 2, . . . or   ⎪ ⎪ 1 ⎩ and σ (t) > /r2 t = T 1 − rk 1

(16) with constants r1 , r2 > 1. If, for some time interval t¯ < T , the condition σ (t¯) = ε/r2 is reached, then the Residual Phase (Sect. 3.3) is started. Now, positive monotonically increasing unbounded sequences dˆ1 (k) and dˆ2 (k) are designed by using the switching index k of the monitoring function to counteract the absence of constants d¯1 and d¯2 , i.e., dˆ1 (k) = c1 b1k ,

(17)

dˆ2 (k) =

(18)

c2 b2k

,

where c1 , c2 > 0, and b1 , b2 > max{r1 , r2 }. Then, it is possible to introduce a hybrid state-norm estimation scheme, based on (3)–(4), ˙ˆ = −k1 η(t) ˆ + k2 (||σ (t)|| + dˆ1 (k)) , η(t)

(19)

ˆ + πη (t) , ∀t ≥ 0 . η(t) ≤ k3 |η(t)|

(20)

such that

Unit Vector Control with Prescribed Performance via Monitoring …

313

3.3 Barrier Function—Residual Phase As presented in [25], in the residual phase, even for disturbances with unknown upper bounds, the barrier function assures the avoidance of overestimated control gains and imposes a hard bound on the sliding variable by restricting it within a prescribed ε-vicinity. Now, it is present two kinds of barrier functions for the adaptive unit vector control: • Positive definite Barrier Function (PB) ρ pb (χ ) :=

ε F¯ ε − |χ |

i.e., ρ P B (0) = F¯ > 0 .

(21)

• Positive Semi-definite Barrier Function (PSBF) ρ psb (χ ) :=

|χ | ε − |χ |

i.e. ρ P S B (0) = 0 .

(22)

For more details, please see [10]. In the next section, the adaptive unit vector control with monitoring and barrier functions is presented.

3.4 Adaptive Unit Vector Control The adaptive unit vector control law based on Monitoring and Barrier Function (MBF) is given by u = −ρ(t)S p

σ (t) , σ (t)

(23)

where the matrix S p is chosen to verify assumption (A4) and ρ(t) is the modulation function ρ M (t) , if t < t¯ ρ(t) = ρ B (t) , if t ≥ t¯

.

(24)

In the reaching phase, the adaptive scheme is driven by the monitoring function action by using the hybrid norm observer (19) and disturbance estimate (18) such that the modulation function designed as ˆ +δ, ρ M (t) = cσ σ (t) + cdσ dˆ2 (k) + cη η(t)

(25)

314

V. H. P. Rodrigues et al.

with appropriated positive constants cσ , cd2 , cη , and δ. On the other hand, in the residual phase, the modulation function is driven by the barrier function such that ρ B (t) is (21) or (22). Notice, the switching law given by (16) means that k increases aggressively if the output σ (t) tends to its prespecified transient T and steady-state values . This behavior makes the sequences dˆ1 (k) and dˆ2 (k) in (19) and (20) to behave as exponential increasing functions, ∀t ≤ t¯, so that the modulation function ρ(t) in (25) is increased to force the convergence of σ (t) to a residual set of order O(). When the condition σ (t) ≤ /r2 is reached, the residual phase is stared and the increment of the sequence k ceases forever such that the adaptive scheme is driven by the barrier functions (21) or (22).

4 Stability Analysis Theorem 1 summarizes the main results of the closed-loop system stability. Theorem 1 Consider the multivariable plant (1)–(2), the monitoring scheme (16)– (19) and the output-feedback unit vector control law (23)–(24). Assume that (A1)– (A5) hold. Then, the prescribed performance in Definition 1 is ensured. More¯ over, there exists an unknown constant dmax = K −1 p {A21 [k2 k3 ( + d1 )/k1 + ¯ ¯ πη (t )] + A22 ε + d2 } so that the output error σ (t) converges to the interior of an -neighborhood of the origin satisfying • with positive definite barrier function (21), σ (t) ≤ 1 such that σ (t) ≤ 1 < , where 1 = 0 when F ≥ dmax and 1 = (1 − F/dmax ) when F < dmax . • with the positive semi-definite barrier function (22), σ (t) ≤ 2 <  where 2 = dmax /(dmax + 1). Proof The first step of the proof is to demonstrate that when using the control law (23) with modulation function (25), the sliding variable reaches the region |σ | ≤ /r2 with the guarantee that the specifications of the settlement time and maximum overshoot are not violated. Through rigorous analysis, this fact has already been proven in [8, Theorem 1]. Therefore, the second stage starts and is characterized by what follows. Now, the dynamic behavior of (1)–(2) is briefly analyzed under the residual regime. Since d1 (x, t) ≤ d¯1 and |σ (t)| ≤ , from (3) and (4), it is easy to verify that η(t) ≤

k2 k3 ( + d¯1 ) + πη (t¯) . k1

(26)

Therefore, by using control law (23) with modulation function based on barrier (25), the Eq. (2) can be rewritten as

Unit Vector Control with Prescribed Performance via Monitoring …

σ (t) , σ˙ (t) = K p d(x, t) − ρ B (σ (t)) σ (t) −1 −1 d(x, t) = K −1 p A21 η(t) + K p A22 σ (t) + K p d2 (x, t) ,

315

(27) (28)

with equivalent disturbance d(x, t) satisfying −1 −1 d(x, t) ≤ K −1 p A21 η(t) + K p A22 σ (t) + K p d2 (x, t)



k2 k3 ¯ ¯ ¯ ≤ K −1  A  ( + d ) + π ( t ) + A ε + d 21 1 η 22 2 p k1 = dmax , (29)

in other words, in residual phase, the disturbance d(x, t) is upper bounded by unknown constant dmax . Now, consider the following candidate for Lyapunov function Vσ =

1 T 1 σ Pσ + (ρ B − ρ B (0))2 , 2 2

P = PT > 0 ,

(30)

and the Rayleigh-Ritz inequality, λmin {Pη }σ 2 ≤ σ T Pσ ≤ λmin {Pη }σ 2 .

(31)

The time derivative V , along (29) and (27), satisfies 1 1 V˙σ = σ T P σ˙ + σ˙ Pσ + (ρ B − ρ B (0))ρ˙ B 2 2 1 ρB T T σ T (P K p + K pT P)σ + (ρ B − ρ B (0))ρ˙ B = d K p Pσ − 2 σ (t) 1 = d T K pT Pσ − ρ B σ  + (ρ B − ρ B (0))ρ˙ B 2 (−2K p Pd + ρ B ) σ  + (ρ B − ρ B (0))ρ˙ B ≤− 2 (−2K p Pdmax + ρ B ) σ  + (ρ B − ρ B (0))ρ˙ B . ≤− 2 At this point we have two options: • Adaptation with Positive definite Barrier Function With ρ B = ρ P B in (21), the inequality (32) is rewritten as

(32)

316

V. H. P. Rodrigues et al.

(−2K p Pdmax + ρ P B ) ¯ ρ˙ P B V˙σ ≤ − σ  + (ρ P B − F) 2 (−2K p Pdmax + ρ P B ) =− σ + 2   σ T K pσ 1  F¯ T ¯ σ + (ρ P B − F) K d − ρ p P B ( − σ )2 σ  σ  (−2K p Pdmax + ρ P B ) σ + ≤− 2   ¯ min (K p ) K p   Fλ ¯ + (ρ P B − F) dmax − ρ P B σ  ( − σ )2 λmin (K p ) ¯   (−d¯max + ρ P B ) ¯  Fλmin (K p ) −d¯max + ρ P B ≤− σ  − (ρ P B − F) 2 2 ( − σ )   ¯ min (K p ) −d¯max + ρ P B ¯  Fλ (−dmax + ρ P B ) ¯ σ  − (ρ P B − F) ≤− 2 ( − σ )2 2 ¯ σ  |ρ P B − F| = −βσ 1 √ − ζ1 βσ 1 , (33) √ 2 2 where βσ 1 =

−d¯max √+ρ P B , ζ1 2

=

¯ min (K p )  Fλ , (−σ )2

d¯max = max 2P,

and by using (29), the unknown constant

1 K p dmax . λmin (K p )

(34)

Then, from [10, Lemma 5], one can conclude that (33) satisfies ¯  |ρ P B − F| 1 σ  V˙σ ≤ − √ βσ 1 λmin {P} √ − ζ1 βσ 1 √ λmin {P} 2 2 √  λmin {P} 1 ¯ ≤ −β1 σ  + √ |ρ P B − F| √ 2 2

√ 1 ≤ −β1 Vσ1/2 , with β1 = βσ 1 2 min √ , ζ1 , λmin {P}

(35)

that results a finite-time convergence of the output variable to the region |σ (t)| ≤ 1 such that |σ (t)| ≤ 1 <  where 1 = (1 − F/dmax ) if F < dmax and 1 = 0 if F ≥ dmax . • Adaptation with Positive Semi-definite Barrier Function With ρ B = ρ P S B in (22), the inequality (32) is rewritten as

Unit Vector Control with Prescribed Performance via Monitoring … (−2K p Pdmax + ρ P S B ) σ  + ρ P S B ρ˙ P S B 2 (−2K p Pdmax + ρ P S B ) σ + =− 2   σ T K pσ  1 T σ K pd − ρP S B + ρP S B σ  ( − σ )2 σ 

317

V˙σ ≤ −

≤−

(−2K p Pdmax + ρ P S B ) λmin (K p ) σ  + ρ P S B 2 ( − σ )2



K p  d − ρ psb λmin (K p )



  (−2K p Pdmax + ρ P S B ) λmin (K p ) K p  σ  + ρ P S B d − ρ max P S B 2 ( − σ )2 λmin (K p ) ¯  λmin (K p )  (−dmax + ρ P S B ) ≤− −d¯max + ρ P S B σ  − ρ P S B 2 ( − σ )2 λmin (K p ) (−d¯max + ρ P S B ) (−d¯max + ρ P S B ) σ  − ρ P S B ≤− 2 2 ( − σ )2 σ  |ρ P S B | , (36) = −βσ 2 √ − ζ2 βσ 2 √ 2 2 ≤−

¯



(K )

min p PSB where βσ 2 = −dmax√+ρ , ζ2 = (−σ , and by using (29), the unknown constant )2 2 d¯max given by (34). Then, from [10, Lemma 6], one can conclude that (36) satisfies

 |ρ P S B | 1 σ  V˙σ ≤ − √ βσ 2 λmin {P} √ − ζ2 βσ 2 √ λmin {P} 2 2 √  λmin {P} 1 ≤ −β2 σ  + √ |ρ P S B | √ 2 2

√ 1 1/2 , ζ2 , ≤ −β2 Vσ , with β2 = βσ 2 2 min √ λmin {P}

(37)

that results in a finite-time convergence of the output variable to the region σ (t) ≤ 2 such that σ (t) ≤ 2 <  where 2 = dmax /(dmax + 1). Thus, the proof is completed.



In this chapter, we consider a perturbed and uncertain system with state separated by their internal and external dynamics whose solutions are η(t) and σ (t), respectively. Two strategies are employed. In both cases, the monitoring function, without any knowledge of the disturbances’ upper bounds, is able to drive the sliding variable σ (t) to inside of a prespecified -neighborhood of the origin ensuring that there are no violation of the desired overshoot and settling time. It is worth to mention that the residual width  in (21) and (22) is an arbitrary parameter completely chosen by the designer, i.e., different from [26] where an estimate of the equivalent control [27] based on filtering is employed, our strategy does not depend on any upper bounds of the disturbance or its derivative, see inequality (21) of [26]. Initially, the monitoring function adapts the gain of the unit vector controller by using a monotonically increasing sequence k, given by (16), able to increase

318

V. H. P. Rodrigues et al.

unboundly until the controller’s gain (25) achieves its appropriated value to lead σ (t) to the residual phase. In the first scenario, the adaptive scheme is given by a combination of the positive definite barrier and the monitoring functions, (21) and (25). In this case, already inside of the residual set , if the bias F is sufficiently larger than the norm of the equivalent disturbance, the control signal is discontinuous and the positive barrier is able to guarantee ideal sliding mode (σ ≡ 0) and, if F is insufficient to overcome the norm of the equivalent disturbance, the control signal is smooth and the practical sliding mode (σ  ≤ ) is achieved. In the second case, by using the semi-positive definite barrier function (22), the control signal is continuous and only the practical sliding mode can be indeed reached. On the other hand, the unmeasured part of the state, the variable η(t), will converge to a domain around the origin dependent on the unknown but bounded the amplitude of the perturbation.

5 Numerical Examples In this section two simulation results are presented, an academic example and application to an overhead crane system, to illustrate the advantages of the MBF strategy.

5.1 Academic Example In this section, we consider an academic example to validate the proposed control −2 1 strategy. The MIMO nonlinear unstable system (1)–(2) has A11 = , A12 = 1 −2





01 11 1 1 10 , A21 = , A22 = , B2 = , 10 11 0.4 0.5 01 sin(10t)sgn(η1 η2 ) [11(t) − 11(t − 25)] , arctan(η1 + η2 + σ1 + σ2 ) + cos(2t) + exp(−σ22 /2)

0.25(1 − exp(−|η2 |)) − exp(−σ22 /2) d2 (x, t) = [11(t) − 11(t − 25)] , exp(−σ12 /2) + cos(t)

d1 (x, t) =

where the step function is 11(t) and initial conditions η T (0) = [3 , −2] and σ T (0) = [2 , −1]. In the academic example, the performance criteria have  = 0.5 (maximum overshoot), T = 5 [s] (maximum transient time) and  = 0.01 (maximum residual). The parameters of the monitoring function are r1 = 2 and r2 = 1.2. The positive barrier (21) has parameter F¯ = 20. Other control parameters are setting as b1 = 1.5,

Unit Vector Control with Prescribed Performance via Monitoring …

319

Fig. 1 Simulation results—unit vector control with monitoring and positive barrier

b2 = 1.2,

c 1 = 0.05, c2 = 0.1, λ1 = −0.8, cησ = 1.1, cσ = 3, cdσ = 4, cη = 5, and 10 Sp = . 01 Figures 1 and 2 show the results obtained for the proposed unit vector controller combining monitoring and barrier functions. Both combinations, monitoring with the positive barrier and monitoring with the semi-positive barrier, ensure the convergence of the sliding variable, the output, to a predefined neighbor of the origin with guaranteed transient and steady-state behavior, see Fig. 1a, b with Fig. 2a, b. Notice, the semi-positive barrier is able to reduce the control effort, compare Figs. 1c and 2c. In contrast, the residue goes to zero by using the positive barrier, compare Figs. 1d and 2d.

320

V. H. P. Rodrigues et al.

Fig. 2 Simulation results—unit vector control with monitoring and positive barrier

5.2 Application Example Overhead cranes are well-known machines used to dislocate heavy, large, or even hazardous materials from an origin to a target localization. This procedure is held by lifting and lowering a given payload to avoid obstacles in the path. Cranes can be easily found in harbors, nuclear industries, building sites, factories, and airports [17–21]. A overhead crane is composed of the hoisting and support mechanisms, respectively, hoisting line and a trolley-girder. Unfortunately, the hoisting and trolley-girder accelerations always induce undesirable load swing. This unavoidable load swing frequently causes efficiency drop, load damages, and even accidents [15]. Moreover, most crane systems are handled by humans which demands a long training to avoid accidents and to increase the work efficiency. In this section, to develop a safety and efficient autonomous crane system, the MBF Adaptive UVC is applied to the overhead crane system. Consider the free body diagram in Fig. 3, xc (t), yc (t) and z c (t) are the coordinates of the payload, xw (t) denotes the distance of the rail with the cart from the center of the construction frame,

Unit Vector Control with Prescribed Performance via Monitoring …

321

Fig. 3 Free body diagram of overhead crane

yw (t) denotes the distance of the cart from the center of the rail, R(t) denotes the length of the lift-line, α(t) denotes the angle between the Y axis and the lift-line, β(t) denotes the angle between the negative direction on the Z axis and the projection of the lift-line onto the X Z plane, m c is the mass of the payload, m w is the mass of the cart and m s is the mass of the moving rail. The spherical system has been adopted such that the coordinates of the moving rail (body of mass m s ) are xs (t) = X (t) ,

(38)

ys (t) = 0 , z s (t) = H ,

(39) (40)

xw (t) = X (t) , yw (t) = Y (t) ,

(41) (42)

z w (t) = H ,

(43)

of the cart (body of mass m w ) are

and, of the payload (body of mass m c ) are xc (t) = X (t) + R(t)sen(α(t))sen(β(t)) , yc (t) = Y (t) + R(t)cos(α(t)) ,

(44) (45)

z c (t) = H − R(t)sen(α(t))cos(β(t)) .

(46)

The equations of motion are provided from Lagrange’s equation by considering the payload as a point mass and neglecting the mass and stiffness of the rope. The kinetic energy is

322

V. H. P. Rodrigues et al.

 mw  2  mc  2  ms  2 2 x˙s + y˙s2 + z˙ s2 + x˙w + y˙w2 + z˙ w x˙c + y˙c2 + z˙ c2 + 2 2 2 mc  ˙ 2 ms ˙ 2 mw ˙ 2 ( X + Y˙ 2 ) + X + Y˙ 2 + 2 sin(β) cos(α)R α˙ X˙ = X + 2 2 2 − cos2 (α)R 2 β˙ 2 + cos(α) R˙ Y˙ + R 2 α˙ 2 + R 2 β˙ 2 − sin(α)R α˙ Y˙  +2 cos(β) sin(α)R β˙ X˙ + R˙ 2 + 2 sin(α) sin(β) R˙ X˙ ,

K =

(47)

(48)

and the potential energy is given by U = (m s + m w )g H + m c g(H − cos(β) sin(α)R) .

(49)

Therefore, the Lagrangian is L = K −U mc  ˙ 2 ms ˙ 2 mw ˙ 2 ( X + Y˙ 2 ) + X + Y˙ 2 + 2 sin(β) cos(α)R α˙ X˙ X + = 2 2 2 − cos2 (α)R 2 β˙ 2 + cos(α) R˙ Y˙ + R 2 α˙ 2 + R 2 β˙ 2 − sin(α)R α˙ Y˙  +2 cos(β) sin(α)R β˙ X˙ + R˙ 2 + 2 sin(α) sin(β) R˙ X˙ − (m s + m w )g H − m c g(H − cos(β) sin(α)R) .

(50)

(51)

The equations of motion are obtained by calculating ∂ ∂t



∂L ∂ q˙i

 −

∂L = Fqi − kqi q˙i , ∂qi

(52)

for each generalized coordinate qi such that q = [X (t) , Y (t) , R(t) , α(t) , β(t)]T , leading us to (m s + m w + m c ) X¨ + m c sin(α) sin(β) R¨ + m c cos(α) sin(β)R α¨ + m c cos(β) sin(α)R β¨ ˙ = m c sin(α) sin(β)R α˙ 2 + m c sin(α) sin(β)R β˙ 2 − 2m c cos(α) sin(β)α˙ R+ − 2m c cos(β) sin(α)β˙ R˙ − 2m c cos(α) cos(β)R α˙ β˙ + FX − k X X˙ , (m c + m w )Y¨ + m c cos(α) R¨ − m c sin(α)R α¨ = m c cos(α)R α˙ 2 + 2m c sin(α)α˙ R˙ + FY − kY Y˙ , m c sin(α) sin(β) X¨ + m c cos(α)Y¨ + m c R¨ = m c R α˙ 2 + m c R β˙ 2 − m c cos2 (α)R β˙ 2 + m c g cos(β) sin(α) + FR − k R R˙ ,

(53) (54) (55)

m c cos(α) sin(β)R X¨ − m c sin(α)R Y¨ + m c R 2 α¨ = −2m c α˙ R R˙ + m c g cos(α) cos(β)R + m c cos(α) sin(α)R 2 β˙ 2 ,

(56)

m c cos(β) sin(α)R X¨ + m c sin2 (α)R 2 β¨ = −2m c β˙ R R˙ − m c g sin(α) sin(β)R + 2m c cos2 (α)β˙ R R˙ − 2m c cos(α) sin(α)R 2 α˙ β˙ .

(57)

Unit Vector Control with Prescribed Performance via Monitoring …

323

By the assumption of small swing angles and small accelerations as presented in [15], ¨ g, | R| ˙ |R|, leading to |R α| ¨ g, and |α| ˙ one has | X¨ | , |Y¨ | , | R| ¨ , |R β| ˙ , |β| 1. Therefore, for the overhead crane, sin(α) ≈ 1, cos(α) ≈ 0, sin(β) ≈ β, cos(β) ≈ 1 and neglecting higher-order terms, the nonlinear system (53)–(57) is simplified to (m s + m w + m c ) X¨ + m c β R¨ + m c R β¨ = FX − k X X˙ , (m c + m w )Y¨ − m c R α¨ = FY − kY Y˙ , m c β X¨ + m c R¨ = m c g + FR − k R R˙ , −Y¨ + R α¨ = 0 , X¨ + R β¨ = −gβ .

(58) (59) (60) (61) (62)

Consequently, kX 1 1 FX + β FR , (63) X˙ + ms + mw ms + mw ms + mw 1 kY ˙ FY , (64) Y+ Y¨ = − mw mw   kR 1 1 1 kX R¨ = − R˙ + β X˙ − β FX + − β 2 FR + g , mc ms + mw ms + mw mc ms + mw (65) kY 1 kY 1 ˙ FY , (66) Y+ α¨ = − mw R mw R kX 1 1 ˙ 1 1 1 1 FX − β FR . β¨ = −g β + (67) X− R ms + mw R ms + mw R ms + mw R

X¨ = −

By design, the control signal are u X = FX , u Y = FY and u R = FR + m c g. Then, the nonlinear system (67)–(67) is rewritten as kX mc g 1 1 X˙ − (68) β+ uX + βu R , ms + mw ms + mw ms + mw ms + mw kY ˙ 1 Y¨ = − Y+ uY , (69) mw mw   kX mc g 1 1 1 ¨ − k R R˙ + R= β X˙ + β2 − βu X + − β2 u R , mc ms + mw ms + mw ms + mw mc ms + mw

X¨ = −

(70)

kY 1 kY 1 ˙ Y+ α= ¨ − (71) uY , mw R mw R   1 mc mc 1 ˙ 1 1 kX 1 ¨ − 1− X− β= uX − βu R . (72) g β+ ms + mw R ms + mw R ms + mw R ms + mw R

324

V. H. P. Rodrigues et al.

Finally, neglecting quadratic and weakly compound terms, one arrives to the nonlinear system 1 mc g kX β+ uX , X˙ − ms + mw ms + mw ms + mw kY ˙ 1 uY , Y¨ = − Y+ mw mw ¨ − k R R˙ + 1 u R , R= mc mc kY 1 ˙ kY 1 uY , α= ¨ − Y+ m R mw R  w 1 mc kX 1 ˙ 1 1 ¨ − 1− g β+ β= uX . X− ms + mw R ms + mw R ms + mw R

X¨ = −

(73) (74) (75) (76) (77)

In our application example, for the sake of simplicity, the state is available for feedback and there is no displacement in Y axis such that Y = Y (0) and Y¨ = Y˙ = 0, then u Y = 0 and, consequently, α = α(0) = π/2 and α¨ = α˙ = 0, for all t > 0. Inspired by [28], the sliding vector σ to efficient payload transportation and the swing suppression is designed as σ1 = X˙ − X˙ d + c1 (X − X d ) − c2 β , σ2 = R˙ − R˙ d + c3 (R − Rd ) ,

(78) (79)

where the constants c1 , c2 , c3 > 0 and the variables X d and Rd are the desired cart position and the desired length of the lift-line, respectively. The time derivative of (78) and (79) are given by σ˙ 1 = X¨ − X¨ d + c1 ( X˙ − X˙ d ) − c2 β˙   1 kX mc g β − c2 β˙ + u X − ( X¨ d + c1 + X˙ d ) , X˙ − = c1 − ms + mw ms + mw ms + mw (80) σ˙ 2 = R¨ − R¨ d + c3 ( R˙ − R˙ d )   kR ˙ 1 u R − ( R¨ d + c3 R˙ d ) . (81) = c3 − R+ mc mc Now, inspired by [29], we introduce the feedback law 

 kX mc g β + c2 β˙ + U1 , u X = (m s + m w ) − c1 − X˙ + ms + mw ms + mw

  kR ˙ u R = m c − c3 − R + U2 , mc

(82) (83)

Unit Vector Control with Prescribed Performance via Monitoring …

325

where U1 and U2 represent the discontinuous control law and define the exogenous disturbances d1 = −( X¨ d + c1 X˙ d ) , d2 = −( R¨ d + c3 R˙ d ) .

(84) (85)

By plugging (82)–(85) in (80) and (81), the sliding variable dynamics can be rewritten in a compact form as σ˙ (t) = d(t) + U ,

(86)

with state σ (t) = [σ1 (t), σ2 (t)]T , Lipschitz disturbance d(t) = [d1 (t), d2 (t)]T and discontinuous control vector U = [U1 , U2 ]T . In the simulation results, the payload must be lifted and lowered while the crane is in motion and the swing of the payload should be kept as small as possible. The desired trajectory is given by a parabolic shape such as X d (t) =

t − X M , 0 ≤ t ≤ 2X M 0, t > 2X M

Rd (t) = Rm + R M

,

XM > 0 ,

(X M + X d (t))(X M − X d (t)) , X 2M

(87) Rm , R M > 0.

(88)

The crane parameters are: m c = 1 kg, m w = 0.6 kg, m s = 1 kg, k X = 4.1 kg/s, kY = 3.1 kg/s, and k R = 4.1 kg/s. The minimum lift length is Rm = 0.1 m, the lift length must vary of R M = 0.6 m, while the cart travel from −X M to X M with X m = 30 m. It was chosen as the performance criteria a maximum overshoot of  = 0.4, maximum transient time T = 1 [s], and maximum residual  = 0.2. Then, the switching law based on monitoring function follows (16) with parameters r1 = 2 and r2 = 10. Moreover, during the residual phase, the disturbance d(t) in (86) is estimated by the monitoring function such that the modulation ρ M in (24) is simply ρ M = dˆ = cbk ,

(89)

where c = 0.2 and b = 1.01. The positive barrier function (21) has the parameter 10 . As presented in F¯ = 5 and the unit vector controller (23) employs S p = 01 [29], an initial payload swing is considered with initial conditions X (0) = −29 m, Y (0) = 0 m, R(0) = 0.1 m, X˙ (0) = Y˙ (0) = R˙ = 0 m/s, α(0) = 90o , β(0) = 6o and ˙ = 11.4o /s. α(0) ˙ = 0o /s and β(0)

326

Fig. 4 3D Crane—MBF UVC with positive barrier, X (t) and X˙ (t)

Fig. 5 3D Crane—MBF UVC with positive barrier, Y (t) and Y˙ (t)

˙ Fig. 6 3D Crane—MBF UVC with positive barrier, R(t) and R(t)

V. H. P. Rodrigues et al.

Unit Vector Control with Prescribed Performance via Monitoring …

327

Fig. 7 3D Crane—MBF UVC with positive barrier, α(t) and α(t) ˙

˙ Fig. 8 3D Crane—MBF UVC with positive barrier, β(t) and β(t)

First, the performance of the Adaptive UVC is evaluated with monitoring and positive barrier functions (Figs. 4, 5, 6, 7, 8, 9 and 10). In closed loop, while the reference is given by a parabolic trajectory, the control law is able to successfully ensure the tracking on the actuated state variables X (t), X˙ (t), Y (t), Y˙ (t), R(t) and ˙ R(t), see Figs. 4, 5 and 6, as well as the antiswing and antiskew behavior on the ˙ underactuated state variables α(t), α(t), ˙ β(t) and β(t), see Figs. 7 and 8. It takes less than 10 s to suppress the payload oscillations, see Fig. 8a. From the point of view of the prescribed tracking performance, during the reaching phase, the monitoring function is able to guarantee an exponential increase of the modulation function such that the sliding condition is verified in the fixed-time T = 1 (see Fig. 9), where the dashed lines represent the frontiers of allowable excursion of the sliding vector. Indeed, beyond the fixed-time convergence, the monitoring function ensures that there is no overshoot violation, as specified in Definition 1. In the residual phase, with the positive barrier function, the adaptive UVC is discontinuous (Fig. 10a, b), and leads to an exact sliding motion, i.e., σ ≡ 0 for all t > T , see Fig. 9e, f.

328

Fig. 9 3D Crane—MBF UVC with positive barrier, σ (t) and σ (t)

V. H. P. Rodrigues et al.

Unit Vector Control with Prescribed Performance via Monitoring …

Fig. 10 3D Crane—MBF UVC with positive barrier, U and d(t)

Fig. 11 3D Crane—MBF UVC with semi-positive barrier, X (t) and X˙ (t)

Fig. 12 3D Crane—MBF UVC with semi-positive barrier, Y (t) and Y˙ (t)

329

330

˙ Fig. 13 3D Crane—MBF UVC with semi-positive barrier, R(t) and R(t)

Fig. 14 3D Crane—MBF UVC with semi-positive barrier, α(t) and α(t) ˙

˙ Fig. 15 3D Crane—MBF UVC with semi-positive barrier, β(t) and β(t)

V. H. P. Rodrigues et al.

Unit Vector Control with Prescribed Performance via Monitoring …

Fig. 16 3D Crane—MBF UVC with semi-positive barrier, σ (t) and σ (t)

331

332

V. H. P. Rodrigues et al.

Fig. 17 3D Crane—MBF UVC with semi-positive barrier, U and d(t)

Now, the performance of the Adaptive UVC is evaluated with monitoring and semi-positive barrier functions (Figs. 11, 12, 13, 14, 15, 16 and 17). In closed loop, as in the case the adaptive strategy with positive barrier function, while the reference is given by a parabolic trajectory, the control law is able to successfully ensure ˙ the tracking on the actuated state variables X (t), X˙ (t), Y (t), Y˙ (t), R(t) and R(t), see Figs. 11, 12 and 13, as well as the antiswing and antiskew behavior on the ˙ underactuated state variables α(t), α(t), ˙ β(t) and β(t), see Figs. 14 and 15. It takes less than 10 s to suppress the payload oscillations, see Fig. 15a. From the point of view of the prescribed tracking performance, during the reaching phase, the monitoring function is able to guarantee an exponential increase of the modulation function such that the sliding condition is verified in the fixed-time T = 1 (Fig. 16), where the dashed lines represent the frontiers of allowable excursion of the sliding vector. Indeed, beyond the fixed-time convergence, the monitoring function ensures that there is no overshoot violation, as specified in Definition 1. In the residual phase, with the semi-positive barrier function, different from the adaptive strategy with positive barrier function, the adaptive UVC is continuous (Fig. 17a, b), and leads to a practical sliding motion, i.e., σ  ≤  for all t > T , see Fig. 16e, f.

6 Conclusion This chapter proposes a new output-feedback adaptive unit vector controller for a class of plants with parametric uncertainties and (un)matched disturbances with unknown upper bounds. The proposed multivariable controller exhibits the advantages of adaptive schemes based on monitoring and barrier functions. The strategy avoids overestimation of the control gains and guarantees prescribed performance of transient and steady state. Numerical simulations for an academic example, as well

Unit Vector Control with Prescribed Performance via Monitoring …

333

as an application for an overhead crane system application, illustrate the theoretical results.

References 1. Yan, L., Hsu, L., Costa, R.R., Lizarralde, F.: A variable structure model reference robust control without a prior knowledge of high frequency gain sign. Automatica 44, 1036–1044 (2008) 2. Hsu, L., Oliveira, T.R., Cunha, J.P.V.S.: Extremum seeking control via monitoring function and time-scaling for plants of arbitrary relative degree. In: Proceedings of the 13th International Workshop on Variable Structure Systems, pp. 1–6 (2014) 3. Oliveira, T.R., Peixoto, A.J., Hsu, L.: Sliding mode control of uncertain multivariable nonlinear systems with unknown control direction via switching and monitoring function. IEEE Trans. Autom. Control 55, 1028–1034 (2010) 4. Oliveira, T.R., Peixoto, A.J., Nunes, E.V.L.: Binary robust adaptive control with monitoring functions for systems under unknown high-frequency-gain sign, parametric uncertainties and unmodeled dynamics. Int. J. Adapt. Control Signal Process. 30, 1184–1202 (2016) 5. Rodrigues, V.H.P., Oliveira, T.R.: Global adaptive HOSM differentiators via monitoring functions and hybrid state-norm observers for output feedback. Int. J. Control 91, 2060–2072 (2018) 6. Oliveira, T.R., Melo, G.T., Hsu, L., Cunha, J.P.V.S.: Monitoring functions applied to adaptive sliding mode control for disturbance rejection. In: Proceedings of the 20th IFAC World Congress, pp. 2684–2689 (2017) 7. Shtessel, Y.B., Moreno, J.A., Plestan, F., Fridman, L.M., Poznyak, A.S.: Super-twisting adaptive sliding mode control: a Lyapunov design. In: Proceedings of the 49th IEEE Conference on Decision and Control, pp. 5109–5113 (2010) 8. Hsu, L., Oliveira, T.R., Cunha, J.P.V.S., Yan, L.: Adaptive unit vector control of multivariable systems using monitoring functions. Int. J. Robust Nonlinear Control 29, 583–600 (2019) 9. Obeid, H., Fridman, L.M., Laghrouche, S., Harmouche, M.: Barrier function-based adaptive integral sliding mode control. In: Proceedings of the 57th IEEE Conference on Decision and Control, pp. 5946–5950 (2018) 10. Obeid, H., Fridman, L.M., Laghrouche, S., Harmouche, M.: Barrier function-based adaptive sliding mode control. Automatica 93, 540–544 (2018) 11. Obeid, H., Fridman, L.M., Laghrouche, S., Harmouche, M.: Barrier function-based adaptive twisting controller. In: Proceedings of the 15th International Workshop on Variable Structure Systems, pp. 198–202 (2018) 12. Obeid, H., Laghrouche, S., Fridman, L.M., Chitour, Y., Harmouche, M.: Barrier function-based variable gain super-twisting controller, pp. 1–6 (2019). arXiv:1909.07467 13. Obeid, H., Fridman, L.M., Laghrouche, S., Harmouche, M.: Barrier adaptive first order sliding mode differentiator. In: Proceedings of the 20th IFAC World Congress, pp. 1722–1727 (2017) 14. Obeid, H., Fridman, L.M., Laghrouche, S., Harmouche, M., Golkani, M.A.: Adaptation of Levant’s differentiator based on barrier function. Int. J. Control 91, 2019–2027 (2018) 15. Levant, A.: Robust exact differentiation via sliding mode technique. Automatica 34, 379–384 (1998) 16. Baida, S.V.: Unit sliding mode control in continuous-and discrete-time systems. Int. J. Control 57, 1125–1132 (1997) 17. He, W., Ge, S.S.: Cooperative control of a nonuniform gantry crane with constrained tension. Automatica 66, 146–154 (2016) 18. Almutairi, N.B., Zribi, M.: Sliding mode control of a three-dimensional overhead crane. J. Vib. Control 15, 1679–1730 (2009) 19. Abdel-Rahman, E.M., Nayfeh, A., Masoud, Z.N.: Dynamics and control of cranes: a review. J. Vib. Control 9, 863–908 (2001)

334

V. H. P. Rodrigues et al.

20. Ramli, L., Mohamed, Z., Abdullahi, A.M., Jaadar, H.I., Lazim, I.M.: Control strategies for crane systems: a comprehensive review. Mech. Syst. Signal Process. 95, 1–23 (2017) 21. Fang, Y., Ma, B., Wang, P., Zhang, X.: A motion planning-based adaptive control method for an underactuated crane system. IEEE Trans. Control Syst. Technol. 20, 241–248 (2012) 22. Cunha, J.P.V.S., Costa, R.R., Hsu, L.: Design of first-order approximation filters for slidingmode control of uncertain systems. IEEE Trans. Ind. Electron. 55, 4037–4046 (2008) 23. Khalil, H.K.: Nonlinear Systems. Prentice Hall, Upper Saddle River (2002) 24. Rodrigues, V.H.P., Oliveira, T.R.: Monitoring function for switching adaptation in control and estimation schemes with sliding modes. In: Proceedings of the 1st IEEE Conference on Control Technology and Applications, pp. 608–613 (2017) 25. Rodrigues, V.H.P., Hsu, L., Oliveira, T.R., Fridman, L.: Adaptive sliding mode control with guaranteed performance based on monitoring and barrier functions. Int. J. Adapt. Control Signal Process. 36, 1252–1271 (2021) 26. Edwards, C., Shtessel, Y.: Adaptive dual-layer super-twisting control and observation. Int. J. Control 89, 1759–1766 (2016) 27. Utkin, V.I., Poznyak, A.S.: Adaptive sliding mode control with application to super-twist algorithm: equivalent control method. Automatica 49, 39–47 (2013) 28. Bartolini, G., Pisano, A., Usai, E.: Second-order sliding-mode control of container cranes. Automatica 38, 1783–1790 (2002) 29. Vázquez, C., Collado, J., Fridman, L.: Variable structure control of perturbed crane: parametric resonance case study. In: Yu, X., Efe, M.Ö. (eds.) Recent Advances in Sliding Modes, pp. 317– 347 (2015) 30. Goebel, R., Sanfelice, R.G., Teel, A.R.: Hybrid Dynamical Systems - Modeling, Stability, and Robustness. Princeton University Press, Princeton (2012)

Chattering Analysis and Adjustment

Chattering in Mechanical Systems Under Sliding-Mode Control Igor Boiko

Abstract Chattering in sliding-mode control systems is known as an undesirable effect that reduces system performance with respect to the theoretical performance expected in accordance with the design. In the present book chapter, the metrics applied to chattering evaluation as well as the effect of chattering on system performance in position-control mechanical systems are studied.

1 Introduction Sliding-mode (SM) control, that was proposed and received significant development in the last century and afterward (see [1, 7, 19, 21]), encounters a serious obstacle to its wide use in practice, represented by chattering. Chattering in SM control systems is known as an undesirable effect that reduces system performance when compared to that expected according to the system design. Chattering is manifested as high-frequency oscillations in the system [2]. These oscillations are excited by the system dynamics that includes a plant (or process) and controller, due to the combination of two factors: (i) the presence of Lipschitz-discontinuous nonlinearities and (ii) additional (parasitic, not accounted for in the design) dynamics in the loop. In mechanical systems, these oscillations may be observed as mechanical vibrations of some parts of the system or some physical parameters. This is especially relevant for position-control mechanical systems, which are a subject of the present study. These feedback control systems that use a power-amplifying feedback, and in which the controlled variable is a mechanical position, are also referred to as servomechanisms. The theory of chattering has received extensive theoretical treatment. The methods being used for chattering analysis include the describing functions (DF) [3, 4], the locus of a perturbed relay system (LPRS) [11, 18], as well as seldom used for this purpose but also suitable Tsypkin locus [12], Hamel locus [13], Poincare map [14], I. Boiko (B) Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, 127788 Abu Dhabi, UAE e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_13

337

338

I. Boiko

and singular perturbation methods [15]. Chattering is inevitable and occurs in firstorder SM as well as in second- and higher-order SM control systems. The original hopes concerning second- and higher-order SM control that these algorithms lead to chattering-free SM modes didn’t prove true. On the contrary, it was shown in [5, 16, 17] and then other publications that chattering cannot be avoided, and from the property of the derivative of the sliding variable staying zero in the ideal sliding mode, it does not follow that chattering would be eliminated in the presence of additional dynamics. The subsequent introduction of the concept of phase deficit [6] shed more light on the chattering phenomenon. It allows one to relate such operating modes in SM control systems as finite-time convergence, infinite-time (asymptotic) convergence, and finite-frequency oscillations (chattering) as the three modes occurrence of which depend on the phase deficit value. Despite the seemingly clear situation with chattering, and the impossibility to avoid it being a proven fact (Note: There are some SM control system designs that shift SM and chattering into observer loop, supplying a filtered control signal to the plant; we do not consider them as presenting a solution of the chattering problem, because the process loop is not operated under a SM control), there are still numerous attempts to provide a design of a SM system without chattering. Lately, the efforts are shifted toward reducing chattering rather than eliminating it. When solving this problem, it is necessary to have certain metrics of chattering evaluation to ensure the possibility of options comparison. In most cases, the used metric is the amplitude of chattering at the system output. Often comparison of options is done with the use of abstract dynamics not related to any particular system type (for example, transfer functions). There is an effort of introducing other metrics [8], followed by [24] and [25], which may be more relevant to specific system types. In the two last referenced papers, the metric of chattering is based on the power dissipated by chattering in electric circuits. However, what is not accounted for is the possible different phase shifts between the voltage and the current, which would affect the proposed metric. The current book chapter aims to contribute to the aspect of chattering evaluation in mechanical systems. Another aspect of chattering analysis is prompted by the discussion in the recent publications regarding chattering magnitude (in the various senses considered in this work) related to the different SM control algorithms, and their comparison, including discontinuous and continuous, first and higher order [19, 20, 22–24]. Such a comparison is undertaken in the present book chapter, with respect to mechanical systems in view of the metrics proposed. One more aspect, which is not the chattering itself, but an effect related to it, propagation of average on the period of chattering values through the system, is considered too. This aspect of analysis is rarely touched upon, and the present material is aimed to fill this gap in the analysis of SM control systems. The aim of this analysis is related to the property of propagation of external disturbances and control signals, which is described not by the original nonlinear functions of the controller, but by certain transformed controller nonlinearities often referred to as bias functions. 
Finding bias functions for the considered discontinuous and continuous control is the objective of this section.

Chattering in Mechanical Systems Under Sliding-Mode Control

339

And finally, mechanical systems with dry friction are considered. And, the analysis of chattering, that is first done for systems without dry friction, is repeated for systems having dry friction. Some new interesting phenomena are found in these systems, which are not available in SM control systems without friction.

2 The Chattering Effect and Metrics for Chattering Evaluation Let us consider the mechanism of producing chattering in the SM control system. Since it is commonly accepted now that chattering is produced in any SM control system when parasitic (additional) dynamics are present, regardless of where this is a first-order SM, second-order, or higher-order SM, we shall further consider only first-order SM. Methods of analysis of chattering in second- and higher-order SM are well developed now, with analysis of first-order SM being substantially simpler. And because all SM control systems end up in getting chattering in real conditions, it would be expedient focusing primarily on the first-order (conventional) SM due to the simpler analysis, which was mentioned above. Consider the following system: x˙ = Ax + Bu,

(1)

y = Cx,

(2)

u = − f (y) ,

(3)

where x ∈ R n is a state vector, y ∈ R 1 sliding variable, u control, A, B, and C matrices of respective dimensions, relative degree of the system (1), (2) is one, and the nonlinear function (3) is single-valued odd-symmetric and Lipschitz-discontinuous at zero. The listed conditions are necessary for the possibility of a first-order SM to occur in the system [1, 7]. We shall consider two Lipschitz-discontinuous nonlinear functions: (4) f (y) = cr sign(y), which gives discontinuous control, and f (y) = cc sign(y)|y|β ,

(5)

where cr , cc , and β are positive parameters, with 0 < β < 1, that provides a continuous control function u(t). Let us refer to the dynamics given by (1), (2) as linear plant or linear part of the system, and to the nonlinearity (3) as controller. The linear part of the system can be equivalently presented by the following transfer function:

340

I. Boiko

G(s) =

y(s) = C(Is − A)−1 B. u(s)

(6)

In the frequency domain, the conditions of the possibility of the ideal SM existence can be obtained from the consideration of the phase deficit of the system [6], which must be a positive quantity for the ideal SM to exist. The phase deficit can be obtained from the consideration of the frequency response of the linear part at s = jω and ω → ∞, and the negative reciprocal of the describing function (DF) of the nonlinearity (4) or (5) at a → 0 and ω → ∞, where the DF of a nonlinearity is found as ω N (a) = πa

2π  /ω

2π  /ω

0

0

ω u(t) sin ωt dt + j πa

u(t) cos ωt dt.

(7)

In the formula (7), a and ω are the amplitude and the frequency of the harmonic signal at the input to the nonlinearity, respectively. The amplitude and the frequency of chattering can be conveniently found from the harmonic balance equation [3, 4]. In most cases, the frequency and amplitude of chattering provide comprehensive characterization of the detrimental effect of chattering, if analysis is done for a particular application. For example, chattering in the output voltage of a power converter manifested as a ripple of the voltage, with the magnitude of ripple of an order of a few mV and the frequency of tens of kHz may be considered an acceptable level of chattering, because, usually, other variables that also experience chattering would be within acceptable ranges. On the other hand, chattering in a tank water level may not be considered acceptable, even if the amplitude of the level chattering is of the order of a few centimeters and the frequency of about 1 Hz. This is because this small chattering in the output variable is a result of a large chattering in the valve position, which may seriously affect the longevity of the actuator and valve. Besides, it may produce undesirable oscillations to the flow, which in turn may disturb other parts of the process. The given examples show that using only abstract dynamic model of a system (in the form of differential equations or transfer functions), without considering an application would not allow one to make conclusions on whether the chattering is acceptable or not. Another conclusion drawn from that is that for analysis of chattering all system variables must be evaluated, and not only the output signal. An approach to analyze chattering from the point of view of how detrimental it can be for a particular system can be seen in [8]. The author proposes integral metrics that are supposed to reflect the energy spent by the system on generating the chattering motion. Then characterization of chattering as infinitesimal, bounded, and unbounded is given. We shall also consider in this book chapter the energy-related metrics. However, for the sake of the possibility of deriving explicit formulas, we shall (i) consider a linear mechanical system and (ii) consider the desired trajectory of the system to be an equilibrium point, with chattering being oscillations about this equilibrium point. Let us consider that the force Fa produced by certain auxiliary system (actuator) is applied to the following mechanical system:


$$ m\ddot{x} + \eta\dot{x} + \kappa x = F_a, \qquad (8) $$

where x = y is the position, m is the mass, η is the coefficient of viscous friction, and κ is the coefficient characterizing the rigidity of the spring. The left-hand side of Eq. (8) can be rewritten as a sum of three forces:

$$ F_{iner} - F_{fr} - F_{spr} = F_a, \qquad (9) $$

where F_iner = m ẍ is the inertia force (a fictitious force), F_fr = −η ẋ is the force of friction, and F_spr = −κ x is the force exerted by the spring. With the initial position of the output variable x(t) at zero, and the displacement of x from zero to a certain position x(t) corresponding to the time t, the work of the force Fa can be evaluated as

$$ W_a(t) = \int_{0}^{x(t)} F_a(x)\,dx. \qquad (10) $$

Given the decomposition of the force Fa as per (9), the integral in (10) can be rewritten as the sum of three integrals:

$$ W_{iner}(t) = \int_{0}^{x(t)} F_{iner}(x)\,dx = m\int_{0}^{x(t)} \ddot{x}\,dx = m\int_{0}^{x(t)} \frac{dv}{dt}\,dx = m\int_{0}^{v(t)} \frac{dx}{dt}\,dv = m\int_{0}^{v(t)} v\,dv = \frac{m v^{2}(t)}{2}, \qquad (11) $$

$$ W_{fr}(t) = \int_{0}^{x(t)} F_{fr}(x)\,dx = -\int_{0}^{x(t)} \eta\dot{x}\,dx = -\eta\int_{0}^{x(t)} \frac{dx}{dt}\,dx = -\eta\int_{0}^{t} \frac{dx}{dt}\,\frac{dx}{dt}\,dt = -\eta\int_{0}^{t} v^{2}(t)\,dt, \qquad (12) $$

where v(t) = ẋ(t), and

$$ W_{spr}(t) = \int_{0}^{x(t)} F_{spr}(x)\,dx = -\int_{0}^{x(t)} \kappa x\,dx = -\frac{\kappa x^{2}(t)}{2}. \qquad (13) $$


Because chattering is a periodic motion with period T, x(T) = x(0) and ẋ(T) = ẋ(0); therefore the components of the work given by (11) and (13) represent conservation of energy (the total work over the period T is zero), whereas the component given by (12) represents dissipation of energy (the integral over the period is negative). Any motion of the system with v = ẋ ≠ 0 results in a negative value of W_fr. It should be noted that, although W_iner is the kinetic energy and |W_spr| gives the potential energy, the sum of the kinetic and potential energies can vary with time during the oscillation of x(t); this depends on the character of the change of the driving force F_a(t). What is important to note, however, is that the change of W_iner and of W_spr over the period of oscillation is zero. Let us evaluate the work of the driving force over the period of chattering assuming a harmonic character of F_a(t). The transfer function of the mechanical system is

$$ G_p(s) = \frac{X(s)}{F_a(s)} = \frac{1}{m s^{2} + \eta s + \kappa}. \qquad (14) $$

Let us assume that chattering of the output x is an oscillation produced by the application of a harmonic force F_a of frequency ω and amplitude a_F. Under this assumption, the output of the system is a harmonic oscillation of the same frequency ω, with amplitude a_x and phase φ that can be found from the frequency response corresponding to the transfer function (14), through the relationships

$$ |G_p(j\omega)| = \frac{1}{\sqrt{(\kappa - m\omega^{2})^{2} + \eta^{2}\omega^{2}}} \qquad (15) $$

and

$$ \arg G_p(j\omega) = \arctan\frac{\eta\omega}{m\omega^{2} - \kappa} - \pi. \qquad (16) $$

In deriving Eq. (16), we assume that the frequency ω is sufficiently high, so that the inequality mω² − κ > 0 holds. Now let us assume that the motion of x produced by the harmonic force is

$$ x(t) = a_x\sin(\omega t). \qquad (17) $$

Then the driving force is

$$ F_a(t) = \frac{a_x}{|G_p(j\omega)|}\sin\bigl(\omega t - \arg G_p(j\omega)\bigr), \qquad (18) $$

therefore it has a phase lead with respect to x(t) (because arg G_p(jω) < 0). We can now obtain a formula for the average power supplied by the driving force F_a and consumed by the viscous friction due to chattering:

$$ W_{chat} = \frac{1}{T}\int_{0}^{T} \eta v^{2}(t)\,dt = \frac{1}{T}\int_{0}^{T} \eta\bigl(a_x\omega\cos(\omega t)\bigr)^{2}\,dt = \frac{1}{T}\,\eta a_x^{2}\omega^{2}\int_{0}^{T}\left(\frac{1}{2} + \frac{1}{2}\cos(2\omega t)\right)dt = \frac{1}{2}\,\eta a_x^{2}\omega^{2}. \qquad (19) $$

The expression given by (19) can be used as a metric for chattering in mechanical systems, because it has a clear physical meaning. In its main features it is similar to the "L2-chat metric" proposed in [8], with the main differences being the averaging over the period of chattering suggested by (19), and the expression of the energy through the amplitude and the frequency. Formula (19) is convenient for comparing the level of chattering produced by different control algorithms when they are applied to the same mechanical plant.
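As a small illustration of how the metric (19) can be used, the following Python sketch evaluates it for two hypothetical controllers acting on the same plant; the friction coefficient and the amplitude/frequency pairs are illustrative values, not taken from this chapter.

```python
import numpy as np

def chattering_power(eta, a_x, omega):
    """Average power dissipated by viscous friction for harmonic chattering
    x(t) = a_x*sin(omega*t), as in Eq. (19): 0.5*eta*a_x^2*omega^2."""
    return 0.5 * eta * a_x**2 * omega**2

# Illustrative comparison of two controllers on the same plant: the one with
# the larger product a_x*omega dissipates more power in the viscous friction.
eta = 2.0                                                       # assumed friction coefficient, N*s/m
relay = chattering_power(eta, a_x=1.0e-3, omega=2*np.pi*50)     # assumed chattering parameters
contin = chattering_power(eta, a_x=0.4e-3, omega=2*np.pi*50)    # assumed chattering parameters
print(f"relay controller:      {relay:.3e} W")
print(f"continuous controller: {contin:.3e} W")
```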

3 Analysis and Comparison of Chattering

Chattering is a result of the existence of additional (parasitic) dynamics in a Lipschitz-discontinuous system. Yet no additional dynamics have been considered so far. Let us further assume that the driving force is produced by a dynamic actuator that has a model satisfying the following conditions:
• In a steady state, the output of the actuator is related to its input by a single-valued function. In the case of a linear actuator, the steady-state value of the actuator output is a linear function of the steady-state value of the input. As a result, the transfer function of a linear actuator satisfies the condition G_a(0) = K_a > 0.
• The actuator dynamics are causal, with a dynamic model of sufficiently high relative degree or containing a delay, so that adding the actuator dynamics to the control loop can never result in oscillations of infinite frequency (an ideal sliding mode or asymptotic convergence to infinite-frequency oscillations).
• The actuator dynamics represent a low-pass filter.
With these requirements on the actuator model, the most suitable actuator models might be the following transfer functions:

$$ G_a(s) = \frac{K_a}{(T_a s + 1)^{2}}, \qquad (20) $$

$$ G_a(s) = \frac{K_a e^{-\tau s}}{T_a s + 1}, \qquad (21) $$

and the fractal dynamics [9]:

$$ G_a(s) = K_a\prod_{k=0}^{\infty}\frac{1}{\lambda^{-k} T_a s + 1}, \qquad (22) $$

with the first one being the simplest and the last one potentially being the most accurate (if the respective data are available). The equations of the plant, controller, and actuator must also be completed with the equation of the sliding surface. Because the plant is of the second order, we will consider the following equation of the sliding surface:

$$ y = x + c_2\dot{x} = 0, \qquad (23) $$

where c_2 > 0 is a coefficient of the matrix C = [1  c_2]. The transfer function of the sliding surface corresponding to Eq. (23) is

$$ G_{ss}(s) = c_2 s + 1. \qquad (24) $$

One can notice that the transfer function G_ss has a relative degree of −1, which constitutes non-causal dynamics. Yet, in series connection with the plant dynamics, it gives relative degree 1. We will also assume that the actuator and the plant are connected in series, so that if both are linear then the transfer function of the actuator-plant-sliding surface dynamics is G(s) = G_a(s)G_p(s)G_ss(s). It should be noted that this is often not the case, especially with electromagnetic actuators, whose back-emf constitutes a feedback from the plant to the actuator. However, what is important for the qualitative analysis of chattering and the comparison of different control algorithms is that the presence of the back-emf does not alter the relative degree of the actuator and actuator-plant dynamics; it does alter the actuator dynamic response, though. Analysis of chattering can be performed through the solution of the harmonic balance equation

$$ N(a_x)\,G(j\Omega) = -1, \qquad (25) $$

where Ω is the frequency of chattering (which was previously denoted in a generic way as ω), and N(a) is the describing function of the sliding-mode controller. For the two controllers considered above, the describing functions are

$$ N_r(a) = \frac{4 c_r}{\pi a} \qquad (26) $$

for the controller given by (4), and

$$ N_c(a) = \frac{2 c_c a^{\beta-1}}{\sqrt{\pi}}\,\frac{\Gamma(0.5\beta + 1)}{\Gamma(0.5\beta + 1.5)}, \qquad (27) $$

where Γ is the gamma function, for the controller given by (5) (see the derivations in [5]).
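For a quick numerical check of the describing functions (26) and (27), the following Python sketch evaluates them with scipy's gamma function; the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gamma

def N_relay(a, c_r):
    """Describing function of the relay controller (4), Eq. (26)."""
    return 4.0 * c_r / (np.pi * a)

def N_continuous(a, c_c, beta):
    """Describing function of the continuous controller (5), Eq. (27)."""
    return (2.0 * c_c * a**(beta - 1.0) / np.sqrt(np.pi)
            * gamma(0.5 * beta + 1.0) / gamma(0.5 * beta + 1.5))

# For c_c = 1, beta = 0.5 and a = 1 the DF value is about 1.113, the number
# quoted later in this chapter; the relay DF at c_r = 1, a = 1 is 4/pi.
print(N_continuous(1.0, 1.0, 0.5), N_relay(1.0, 1.0))
```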


Another aspect of the modeling of the considered system is the selection of the controller parameters. It is especially important if the performance of the controllers is to be compared. The two considered controllers must be put in equal conditions, so that a fair comparison of their chattering characteristics can be made. Yet, it is obvious that the relay controller (4) provides a control magnitude that does not depend on the value of the system output (or sliding variable), whereas the magnitude of the output of the controller (5) depends on the value of the sliding variable y(t). The following approach is proposed here. The motion of the output of the mechanical system is always constrained. Assuming the symmetric limits x ∈ [−x_m, x_m] for the motion of x, let us calculate the magnitude c_r of the relay controller that would ensure that the system output can reach the limit. For non-zero κ, this gives

$$ c_r = \frac{x_m\kappa}{K_a}. \qquad (28) $$

With this value of c_r the relay controller allows the system output to reach the maximum value in the steady state (if the controller output is maintained constant sufficiently long). In other words, for a constant controller output equal to c_r, the system output is x_m, and values of the control higher than c_r would not make sense. This property is valid for all three considered actuator models. To obtain an equivalent effect in the continuous controller, the gain c_c must be selected as per the following equation:

$$ \frac{\kappa}{K_a}\,x_m = c_c x_m^{\beta}, \qquad (29) $$

which implies that the maximum force allowing x_m to be reached is exerted in the steady state when the position is |x| = x_m (it is of the opposite sign because of the negative feedback). Expressing x_m from (28) and substituting it into (29) yields

$$ c_c = c_r\left(\frac{c_r K_a}{\kappa}\right)^{-\beta}. \qquad (30) $$

If the parameters of the two controllers correspond to each other as per Eq. (30), then the chattering in the two respective systems can be compared in a fair way. The parameter β of the continuous controller can be selected first, and then the values of c_r and c_c can be chosen in coordination with each other. It can be noted that the frequency of chattering will be the same in the systems with either controller, (26) or (27), because both control functions are single-valued odd-symmetric nonlinearities whose describing functions are real quantities. As a result, the frequency of the oscillations is defined by the phase cross-over frequency of the frequency response of the combined actuator-plant-sliding surface dynamics G(s), which is the same in both systems. The amplitude of chattering can be found as follows. For the relay controller, it is found from the solution of (25) and (26), which yields

$$ a_r = \frac{4 c_r}{\pi}\,|G(j\Omega)|, \qquad (31) $$


where Ω is found from the equation

$$ \arg G(j\Omega) = -\pi. \qquad (32) $$

For the continuous controller, the amplitude of chattering of x is obtained by expressing a_x from (25), with the use of formula (27) for the controller describing function and substitution of (30) for c_c:

$$ a_c = c_r\left(\frac{\kappa}{K_a}\right)^{\frac{\beta}{1-\beta}}\left(\frac{2}{\sqrt{\pi}}\,\frac{\Gamma(0.5\beta + 1)}{\Gamma(0.5\beta + 1.5)}\,|G(j\Omega)|\right)^{\frac{1}{1-\beta}}. \qquad (33) $$

For a typical value of the parameter β of the continuous controller, β = 0.5, Eq. (33) transforms into

$$ a_c = c_r\,\frac{\kappa}{K_a}\left(\frac{2}{\sqrt{\pi}}\,\frac{\Gamma(0.5\beta + 1)}{\Gamma(0.5\beta + 1.5)}\,|G(j\Omega)|\right)^{2} = \frac{4 c_r}{\pi}|G(j\Omega)|\cdot\frac{0.9726\,\kappa|G(j\Omega)|}{K_a}. \qquad (34) $$

The expression (34) is given as a product of two fractions. The first one gives the value of the amplitude of chattering under the relay control (see formula (31)), and the second one is a coefficient that can be estimated as shown below. Given that the chattering losses are proportional to the square of the product of the amplitude and the frequency (19), and that the frequency of chattering is the same in both cases, we can compare the chattering impact by comparing the amplitudes. One can see from (34) that the amplitude of chattering in the system under the continuous control can in theory be both higher and lower than the amplitude of chattering under the relay control. However, a detailed look into the transfer function of the open-loop system shows that the magnitude of the frequency response can be written as

$$ |G(j\Omega)| = |G_a(j\Omega)|\,|G_{ss}(j\Omega)|\,|G_p(j\Omega)| = \frac{K_a}{T_a^{2}\Omega^{2} + 1}\,\sqrt{1 + c_2^{2}\Omega^{2}}\,\frac{1}{\sqrt{(m\Omega^{2} - \kappa)^{2} + \eta^{2}\Omega^{2}}}. \qquad (35) $$

Therefore, the coefficient given by the second fraction in (34) can be expressed as

$$ \gamma = \frac{0.9726\,\kappa\,|G(j\Omega)|}{K_a} = \frac{0.9726\,\kappa}{T_a^{2}\Omega^{2} + 1}\cdot\frac{\sqrt{1 + c_2^{2}\Omega^{2}}}{\sqrt{(m\Omega^{2} - \kappa)^{2} + \eta^{2}\Omega^{2}}}. \qquad (36) $$

Under the assumption of a high frequency of chattering, such that T_aΩ ≫ 1, c_2Ω ≫ 1, and with the inertial term dominating over the spring-related term, mΩ² ≫ κ, formula (36) rewrites as

$$ \gamma = \frac{0.9726\,c_2}{T_a^{2}\Omega^{2}}\cdot\frac{\kappa}{\sqrt{m^{2}\Omega^{2} + \eta^{2}}}. \qquad (37) $$


In formula (37), the second fraction is smaller than one (due to mΩ² ≫ κ), and the first fraction is likely to be a small quantity due to the high value of Ω. Therefore, at high frequencies of chattering, the amplitude of chattering under the continuous control is likely to be smaller than under the relay control. However, this does not by itself constitute superiority of the continuous control. In applications, the advantage of using the relay control is related to the simpler and cheaper technology involved. For example, solenoids can be used instead of proportional electromagnets in actuators of electropneumatic and electrohydraulic servomechanisms. Some other aspects related to this comparison are considered below.
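The chain (28)-(34) can be checked numerically. The Python sketch below uses illustrative plant, actuator (model (20)), and sliding-surface parameters (all assumed, not taken from this chapter), finds the phase cross-over frequency Ω from (32), and evaluates the chattering amplitudes a_r and a_c from (31) and (25), (27), (30).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

# Illustrative parameters (assumed): plant (8), actuator (20), surface (24)
m, eta, kappa = 1.0, 2.0, 100.0        # mass, viscous friction, spring rigidity
Ka, Ta, c2 = 1.0, 0.01, 0.2            # actuator gain/time constant, surface coefficient
x_m, beta = 0.1, 0.5                   # assumed position limit and controller exponent
c_r = x_m * kappa / Ka                               # Eq. (28)
c_c = c_r * (c_r * Ka / kappa) ** (-beta)            # Eq. (30)

def mag_G(w):
    """|G(jw)| of the actuator-plant-sliding surface chain, cf. Eq. (35)."""
    return (Ka / (Ta**2 * w**2 + 1.0)) * np.sqrt(1.0 + (c2 * w)**2) \
           / np.sqrt((m * w**2 - kappa)**2 + (eta * w)**2)

def phase_G(w):
    """Unwrapped phase of G(jw) for actuator (20), surface (24), plant (14)."""
    return -2.0*np.arctan(Ta*w) + np.arctan(c2*w) - np.arctan2(eta*w, kappa - m*w**2)

Omega = brentq(lambda w: phase_G(w) + np.pi, 1e-2, 1e5)       # Eq. (32)
a_r = 4.0 * c_r / np.pi * mag_G(Omega)                        # Eq. (31)
nu = 2.0*c_c/np.sqrt(np.pi) * gamma(0.5*beta + 1.0)/gamma(0.5*beta + 1.5)
a_c = (nu * mag_G(Omega)) ** (1.0 / (1.0 - beta))             # from Eqs. (25), (27), (30)
print(f"Omega = {Omega:.1f} rad/s, a_r = {a_r:.3e}, a_c = {a_c:.3e}, ratio = {a_c/a_r:.3f}")
```

For these particular values the continuous controller gives the smaller amplitude, consistent with the discussion around (37); other parameter choices can reverse the ordering, as noted above.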

4 Analysis and Comparison of External Signals Propagation

Another comparison aspect involves the analysis of bias functions and equivalent gains in systems under these two control algorithms, which reflect the motions averaged over the period of chattering. The point is that, in the presence of chattering, the propagation of values averaged over the period differs from the propagation of instantaneous values. While the propagation of instantaneous values through the controller is described by (4) and (5), the propagation of values averaged over the period of chattering is described by different nonlinear functions. These are the bias functions and their linearizations, known as equivalent gains. Due to the invariant relationships that exist between the describing functions and the corresponding equivalent gains, analyzed in [10] and termed inherent gain margins, the two systems under comparison react to disturbances differently in terms of the motions averaged over the period of chattering. A bias function is a characteristic describing the propagation of average values (averaged over the period of chattering) through single-valued or hysteretic nonlinearities usually associated with a controller:

$$ u_0(a, \sigma_0) = \frac{1}{2\pi}\int_{0}^{2\pi} u(\sigma_0 - a\sin\psi)\,d\psi, \qquad (38) $$

where σ_0 is a constant bias value in the error signal σ = r − y, with r an input signal, and the control u is now u = f(r − y), where f(·) is either the relay function (4) or the continuous nonlinear function (5). A constant value of the bias σ_0 can be created by applying a constant input signal r = r_0. Because the bias function is a nonlinearity too, another characteristic, the equivalent gain, can be used to describe the propagation of the averaged values of the variables. It is defined as the coefficient representing the linearization of a bias function:

$$ k_n = \frac{d u_0(a, \sigma_0)}{d\sigma_0}. \qquad (39) $$


Although this derivative can be taken at any point, the most convenient point is σ_0 = 0. Let us produce these characteristics for the two nonlinearities. For positive σ_0, in accordance with (38), the bias function of the relay nonlinearity is

$$ u_{0r}(a, \sigma_0) = \frac{c_r}{2\pi}\int_{0}^{2\pi}\mathrm{sign}(\sigma_0 - a\sin\psi)\,d\psi = \frac{c_r}{2\pi}\left(\int_{0}^{\psi_1} d\psi - \int_{\psi_1}^{\psi_2} d\psi + \int_{\psi_2}^{2\pi} d\psi\right) = \frac{c_r}{\pi}(\psi_1 - \psi_2 + \pi) = \frac{2 c_r}{\pi}\arcsin\frac{\sigma_0}{a}. \qquad (40) $$

In formula (40), the phase angles ψ_1 and ψ_2 correspond to the switching of the relay from +c_r to −c_r and from −c_r to +c_r, respectively. They can be found from the equation stating that the error equals zero: σ_0 − a sin ψ = 0. The equivalent gain for the relay nonlinearity is found as the derivative of the bias function u_{0r} at σ_0 = 0:

$$ k_{nr}(a, 0) = k_{nr}(a) = \frac{2 c_r}{\pi a}. \qquad (41) $$

It should be noted that both relationships (40) and (41) are well known in relay systems theory [3]. Finding the bias function for the continuous controller analytically may be difficult, if possible at all. The bias can be given by the following integral:

$$ u_{0c}(a, \sigma_0) = \frac{c_c}{2\pi}\left(\int_{0}^{\psi_1(\sigma_0)} (\sigma_0 - a\sin\psi)^{\beta}\,d\psi - \int_{\psi_1(\sigma_0)}^{\psi_2(\sigma_0)} (a\sin\psi - \sigma_0)^{\beta}\,d\psi + \int_{\psi_2(\sigma_0)}^{2\pi} (\sigma_0 - a\sin\psi)^{\beta}\,d\psi\right). \qquad (42) $$

In formula (42), the phase angles ψ_1 and ψ_2 correspond to the moments when the error signal changes sign. They can both be found from the equation stating that the error equals zero, σ_0 − a sin ψ = 0, which yields

$$ \psi_1 = \arcsin\frac{\sigma_0}{a}, \qquad \psi_2 = \pi - \psi_1 = \pi - \arcsin\frac{\sigma_0}{a}. \qquad (43) $$

To find the equivalent gain of the continuous nonlinearity, let us differentiate the bias function (42) with respect to σ_0, noting that σ_0 appears both in the integrands and in the limits of integration.


$$ k_{nc}(a, \sigma_0) = \frac{c_c}{2\pi}\Biggl(\int_{0}^{\psi_1(\sigma_0)} \frac{d(\sigma_0 - a\sin\psi)^{\beta}}{d\sigma_0}\,d\psi - \beta a(\sigma_0 - a\sin\psi_1)^{\beta-1}\cos\psi_1\frac{d\psi_1}{d\sigma_0} + \int_{\psi_1(\sigma_0)}^{\psi_2(\sigma_0)} \frac{d(a\sin\psi - \sigma_0)^{\beta}}{d\sigma_0}\,d\psi + \beta a\left(\cos\psi_1(a\sin\psi_1 - \sigma_0)^{\beta-1}\frac{d\psi_1}{d\sigma_0} - \cos\psi_2(a\sin\psi_2 - \sigma_0)^{\beta-1}\frac{d\psi_2}{d\sigma_0}\right) + \int_{\psi_2(\sigma_0)}^{2\pi} \frac{d(\sigma_0 - a\sin\psi)^{\beta}}{d\sigma_0}\,d\psi + \beta a(\sigma_0 - a\sin\psi_2)^{\beta-1}\cos\psi_2\frac{d\psi_2}{d\sigma_0}\Biggr). \qquad (44) $$

Let us evaluate k_{nc}(a, σ_0) at σ_0 = 0. One can see that with σ_0 → 0, ψ_1 → 0 and ψ_2 → π. Then the first integral in (44) becomes zero, the second integral becomes

$$ I_2 = \int_{\psi_1(\sigma_0)}^{\psi_2(\sigma_0)} \frac{d(a\sin\psi - \sigma_0)^{\beta}}{d\sigma_0}\,d\psi = -\beta a^{\beta-1}\int_{0}^{\pi} (\sin\psi)^{\beta-1}\,d\psi, \qquad (45) $$

and the third integral becomes

$$ I_3 = \int_{\psi_2(\sigma_0)}^{2\pi} \frac{d(\sigma_0 - a\sin\psi)^{\beta}}{d\sigma_0}\,d\psi = \beta a^{\beta-1}\int_{\pi}^{2\pi} (-\sin\psi)^{\beta-1}\,d\psi = \beta a^{\beta-1}\int_{0}^{\pi} (\sin\varphi)^{\beta-1}\,d\varphi = -I_2. \qquad (46) $$

For the other terms in (44), the following relationships can be produced. Although we used both left and right derivatives (left and right limits in the derivative definition) when differentiating the interval limits in the integrals of (44), which was done so that the expression raised to the power β or β − 1 remains positive, the following identity holds:

$$ \lim_{\sigma_0 - a\sin\psi_1 \to 0^{+}} (\sigma_0 - a\sin\psi_1)^{\beta-1} = \lim_{\sigma_0 - a\sin\psi_1 \to 0^{-}} (a\sin\psi_1 - \sigma_0)^{\beta-1}. $$

A similar relationship is valid for the terms containing ψ_2. As a result, these terms in (44) cancel each other out, and

$$ k_{nc}(a, 0) = k_{nc}(a) = 0. \qquad (47) $$


Fig. 1 Bias functions for relay and continuous control; cr = cc = a = 1, β = 0.5

One can see from the subsequent numeric analysis that the fact that k_{nc}(a, 0) = 0 for the continuous controller only reflects a singularity of the function u_{0c}(a, σ_0) at σ_0 = 0. Computation through numeric integration, or simulation of the system containing the continuous nonlinearity, shows a quite smooth and nearly linear characteristic. The bias functions for the relay and continuous nonlinearities are given in Fig. 1. Averaging the slope of u_{0c}(a, σ_0) in Fig. 1 provides an estimate of the equivalent gain, which is k_{nc,Aver} ≈ 0.844. Comparison with the DF value reveals a certain invariance of the ratio of the DF value to the k_n value. Whereas for the relay control this ratio is known and equal to 2, for the continuous control it is computed as follows. Let us compute the DF for c_c = 1, β = 0.5, and a = 1. As per (27), the value is N_c = 1.113. The ratio N_c / k_{nc,Aver} = 1.319 was previously termed an inherent gain margin. This ratio stays constant regardless of the plant being controlled. If, for example, the plant gain is increased by a factor of two, the amplitude of the oscillations will increase by a factor of two, but both the describing function value and the equivalent gain value change in the same proportion, so that their ratio remains constant. Therefore, the inherent gain margin is not a metric of stability margins: the averaged dynamics of the system remain stable under any increase of the loop gain, due to the relationship between the DF and the equivalent gain. Having smaller inherent gain margins (closer to one) may be beneficial, because the stability of the averaged dynamics is automatically provided, and a higher loop gain results in better rejection of disturbances. However, other effects that have not been presented here, such as the narrowing of the regulation range due to the existence of mechanical limits, the multistage design of mechanical systems typical of electropneumatic and electrohydraulic servo systems, and practical considerations, make the comparison between the relay control and the continuous control in mechanical systems a complex task, which can be solved only for a specific system. Both considered types have advantages and disadvantages; analysis and comparison based only on abstract dynamic models may provide only some aspects of the comparison, but not a selection recommendation in favor of one of the control methods.
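The bias functions (38) and the averaged-slope estimate of the equivalent gain can be reproduced numerically. The following Python sketch uses the same illustrative values as Fig. 1 (c_r = c_c = a = 1, β = 0.5); the integration grid and the fitting range are assumptions made for this example.

```python
import numpy as np

def bias(u, a, sigma0, n=20000):
    """Bias function (38): average of u(sigma0 - a*sin(psi)) over one period."""
    psi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(u(sigma0 - a * np.sin(psi)))

relay = lambda s: np.sign(s)                        # controller (4) with c_r = 1
contin = lambda s: np.sign(s) * np.abs(s) ** 0.5    # controller (5) with c_c = 1, beta = 0.5

a = 1.0
sig = np.linspace(-0.5, 0.5, 21)                    # assumed range of the bias sigma_0
u0r = np.array([bias(relay, a, s) for s in sig])
u0c = np.array([bias(contin, a, s) for s in sig])

# Equivalent gains (39) estimated as averaged slopes around sigma_0 = 0;
# the analytic relay value at sigma_0 = 0 is 2/(pi*a) ~ 0.637, and the
# chapter reports an averaged slope of about 0.844 for the continuous case.
k_relay = np.polyfit(sig, u0r, 1)[0]
k_cont = np.polyfit(sig, u0c, 1)[0]
print(k_relay, k_cont)
```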

5 The Chattering Effect and Metrics for Chattering Evaluation. Dry Friction

The above development applies to systems without dry (Coulomb) friction. Let us now introduce dry friction into the dynamic model. We shall further assume that the dry friction is described by the following model:

$$ F_{dfr}(v) = \begin{cases} -\mu & \text{if } v = \dot{x} > 0 \\ \mu_{st} \in [-\mu, \mu] & \text{if } v = \dot{x} = 0 \\ +\mu & \text{if } v = \dot{x} < 0, \end{cases} \qquad (48) $$

where μ_st in (48) is the static friction corresponding to the state of rest. The model (48) does not account for the Stribeck effect but seems adequate for the analysis undertaken. With the dry friction added to the model, Eq. (9) rewrites as follows:

$$ F_{iner} - F_{fr} - F_{spr} - F_{dfr} = F_a, \qquad (49) $$

and the energy integral for the motion from 0 to x(t) can be written as

$$ W_{dfr}(t) = \int_{0}^{x(t)} F_{dfr}(x)\,dx = -\mu\int_{0}^{x(t)} \mathrm{Sign}(v(t))\,dx, \qquad (50) $$

where

$$ \mathrm{Sign}(v) = \begin{cases} 1 & \text{if } v > 0 \\ \frac{\mu_{st}}{\mu} \in [-1, 1] & \text{if } v = 0 \\ -1 & \text{if } v < 0. \end{cases} \qquad (51) $$

For the periodic motion of x(t) with amplitude a_x = max{x(t)}, the energy integral over the period T of the oscillations can be written as

$$ W_{dfr,T}(a_x) = -\mu\left[\int_{0}^{a_x} \mathrm{Sign}(v(t))\,dx + \int_{a_x}^{-a_x} \mathrm{Sign}(v(t))\,dx + \int_{-a_x}^{0} \mathrm{Sign}(v(t))\,dx\right] = -\mu\left(x\big|_{0}^{a_x} - x\big|_{a_x}^{-a_x} + x\big|_{-a_x}^{0}\right) = -4\mu a_x. \qquad (52) $$


The integral (52), like the integral (12), reflects dissipation of energy, because over the period of the oscillation T it is not equal to zero. Combined together, the average energy consumed by chattering can be expressed as the sum of (52) and (12). It can be noted that in the derivation of formula (52), no assumptions were made regarding the shape of the oscillations; the only assumption made is that the motion is periodic. If we now assume that the shape of the oscillations is sinusoidal (17), then the average power consumed by chattering can be expressed as

$$ W_{chat} = \frac{1}{T}\int_{0}^{T} \eta v^{2}(t)\,dt + \frac{1}{T}\,4\mu a_x = \frac{1}{2}\,\eta a_x^{2}\omega^{2} + \frac{2\mu}{\pi}\,a_x\omega. \qquad (53) $$

One can see that the metric of chattering given by (53) is a function of the product a_xω which, despite the nonlinear character of the function itself, allows for its use in comparing chattering in systems having the same plant but different control algorithms. Another aspect of chattering analysis that is relevant to systems having dry friction is the impossibility of chattering occurring at certain frequencies. With the relay control and the actuator models used (Eqs. (20)-(22)), the amplitude of the chattering of the force applied to the mechanical plant is given by

$$ a_f = \frac{4 c_r}{\pi}\,|G_a(j\omega)|. \qquad (54) $$

This force amplitude must be greater than the value of the dry friction μ for the force to produce any motion in the plant. Because all the actuator models are low-pass filters with a decreasing magnitude |G_a(jω)|, oscillations in the system are possible only if the frequency is lower than ω_max := {ω | (4c_r/π)|G_a(jω)| = μ}. And because the mechanism of the excitation of chattering is self-excitation, the chattering frequency will always be lower than ω_max. However, if c_r|G_a(j0)| < μ, oscillations cannot occur at all. A different scenario must be considered for chattering in a system with the continuous controller. If we assume that the oscillations are sinusoidal and given by (17), then the describing function of the dry friction is

$$ N_{dfr}(a_x, \omega) = \frac{4\mu}{\pi a_x\omega}, \qquad (55) $$

and the frequency response of the plant becomes dependent on the amplitude a_x too, with the magnitude response being

$$ |G_p(j\omega, a_x)| = \frac{1}{\sqrt{(\kappa - m\omega^{2})^{2} + \left(\eta\omega + \frac{4\mu}{\pi a_x}\right)^{2}}} \qquad (56) $$


and the phase response

$$ \arg G_p(j\omega, a_x) = \arctan\frac{\eta\omega + \frac{4\mu}{\pi a_x}}{m\omega^{2} - \kappa} - \pi, \qquad (57) $$

which are produced by replacing η in formulas (15), (16) with η + N_{dfr} = η + 4μ/(π a_x ω). The frequency and the amplitude of chattering are now defined by the harmonic balance equation

$$ N_c(a_x)\,G_a(j\Omega)\,G_{ss}(j\Omega)\,G_p(j\Omega, a_x) = -1, \qquad (58) $$

or, for the actuator model (20), by the following equations: the magnitude balance

$$ N_c(a_x)\,\frac{K_a\sqrt{1 + c_2^{2}\Omega^{2}}}{\left(T_a^{2}\Omega^{2} + 1\right)\sqrt{(m\Omega^{2} - \kappa)^{2} + \left(\eta\Omega + \frac{4\mu}{\pi a_x}\right)^{2}}} = 1, \qquad (59) $$

where N_c(a_x) is given by (27), and the phase balance

$$ -2\arctan(T_a\Omega) + \arctan(c_2\Omega) + \arctan\frac{\eta\Omega + \frac{4\mu}{\pi a_x}}{m\Omega^{2} - \kappa} = 0. \qquad (60) $$

To find the frequency and amplitude of chattering, Eqs. (59) and (60) must be solved for Ω and a_x. However, the constraint concerning the amplitude of the oscillations of the force developed by the actuator must also be satisfied:

$$ a_f = a_x N_c(a_x)\,|G_a(j\Omega)| > \mu, \qquad (61) $$

that is, the amplitude of the force oscillations must be greater than the dry friction. Considering the expression for the describing function of the continuous controller, (61) can be rewritten as

$$ a_x\,\frac{2 c_c a_x^{\beta-1}}{\sqrt{\pi}}\,\frac{\Gamma(0.5\beta + 1)}{\Gamma(0.5\beta + 1.5)}\,|G_a(j\Omega)| > \mu, \qquad (62) $$

which provides an inequality for a_x:

$$ a_x^{\beta} > \frac{\mu}{\nu\,|G_a(j\Omega)|}, \qquad (63) $$

where

$$ \nu = \frac{2 c_c}{\sqrt{\pi}}\,\frac{\Gamma(0.5\beta + 1)}{\Gamma(0.5\beta + 1.5)}. $$


The inequality (63) leads to

$$ a_x > \frac{1}{\nu^{1/\beta}}\,\frac{\mu^{1/\beta}}{|G_a(j\Omega)|^{1/\beta}}. \qquad (64) $$

If, for example, β = 0.5, then (64) rewrites as

$$ a_x > \frac{1}{\nu^{2}}\,\frac{\mu^{2}}{|G_a(j\Omega)|^{2}}. \qquad (65) $$

On the other hand, the amplitude a_x is the result of the propagation of the force oscillations of amplitude a_f = μ through the plant dynamics (56). Because the function |G_p(jω)| is monotonically decreasing (at least at frequencies higher than the resonant frequency, if resonance takes place), there exists a certain maximum frequency of chattering ω_max, corresponding to the equality a_x = μ|G_p(jω_max)| with a_x given by (65), such that no frequencies of chattering higher than ω_max may exist in the system. One can see that the existence of dry friction leads to certain limitations on the frequency of chattering: the chattering frequency cannot be higher than a certain value defined by the value of the dry friction and the parameters of the controller, actuator, and plant. This effect is found for both controllers, relay and continuous.
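The two balance equations (59)-(60) can be solved numerically for Ω and a_x. The Python sketch below does this with scipy's fsolve for one set of illustrative parameters (all assumed, not taken from this chapter) and then checks the force-amplitude constraint (61).

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import gamma

# Illustrative parameters (assumed): plant (8), actuator (20), surface (24), friction (48)
m, eta, kappa = 1.0, 2.0, 100.0
Ka, Ta, c2 = 1.0, 0.01, 0.2
mu = 0.05                                   # dry friction level
beta, c_c = 0.5, 31.6                       # continuous controller (5), assumed gains
nu = 2.0*c_c/np.sqrt(np.pi) * gamma(0.5*beta + 1.0)/gamma(0.5*beta + 1.5)

def balance(z):
    """Magnitude balance (59) and phase balance (60); a_x = exp(q) keeps the amplitude positive."""
    Om, q = z
    ax = np.exp(q)
    Nc = nu * ax ** (beta - 1.0)
    mag = Nc * Ka * np.sqrt(1.0 + (c2*Om)**2) / (
        (Ta**2 * Om**2 + 1.0)
        * np.sqrt((m*Om**2 - kappa)**2 + (eta*Om + 4.0*mu/(np.pi*ax))**2)) - 1.0
    ph = (-2.0*np.arctan(Ta*Om) + np.arctan(c2*Om)
          + np.arctan((eta*Om + 4.0*mu/(np.pi*ax)) / (m*Om**2 - kappa)))
    return [mag, ph]

Omega, q = fsolve(balance, x0=[90.0, np.log(1e-3)])
a_x = np.exp(q)
a_f = a_x * nu * a_x**(beta - 1.0) * Ka / (Ta**2 * Omega**2 + 1.0)   # constraint (61)
print(f"Omega = {Omega:.1f} rad/s, a_x = {a_x:.3e}, a_f = {a_f:.3e} (must exceed mu = {mu})")
```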

6 Conclusion

Chattering in position-control mechanical systems is considered in this book chapter. Although the study is carried out using a relatively simple linear model of a mechanical plant, it allows one to discover the main effects that may occur in more complex mechanical systems; the use of this simple model is therefore quite beneficial. A new metric for chattering evaluation is proposed; it is based on the product of the amplitude and frequency of chattering, a_xΩ. It can be used in both scenarios, with and without dry friction, and is particularly useful for the comparison of different control algorithms in terms of their chattering properties. Chattering in mechanical systems under two types of control, discontinuous (relay) and continuous, is analyzed and compared. Since in both algorithms the amplitude of chattering depends on the algorithm parameters, the controllers are brought to equal conditions in their ability to produce a certain maximum force. The basis for this comparison is the equality of the control magnitudes at the maximum position displacement. Under this condition, the chattering under the continuous control is likely to be smaller. However, this does not automatically mean superiority of the continuous control.


The conclusion would largely depend on the particular system, and it may well be the opposite, because the relay control allows for the use of technology that may not be suitable for the continuous control. For example, solenoids in servomechanisms can only be used under discontinuous control, and for that reason the relay control may be the preferred option. Besides the chattering itself, the so-called bias functions are produced and analyzed. Bias functions are the effective nonlinearities that describe the propagation of signals through a nonlinear system when the signals are accompanied by chattering; in other words, they describe the propagation of values averaged over the period of chattering. Therefore, with respect to external signal propagation, the relay and the Lipschitz-discontinuous nonlinearities act not as the original nonlinearities but as totally different ones, which are continuous. As a result, such properties of SM control systems as finite-time convergence and ideal disturbance rejection are not valid for the averaged signal values, which in practice means that they are not valid in any real SM control system. The bias functions are found and presented in the book chapter. It is found that for both types of control studied, the bias functions are close to linear, which is usually an attractive feature. The SM systems are also investigated in the presence of dry friction, which is always present in any real mechanical motion. It is shown that with dry friction present, not all frequencies of chattering may be generated in the system. If the actuator is given, and the frequency of chattering can still be varied by the selection of the sliding surface parameters, only the frequencies that provide a sufficiently large amplitude of the driving force oscillation can be realized in the system. This gives another constraint in the analysis of the parameters of chattering. Overall, the analysis of chattering and of the propagation of signals in a system affected by chattering is of high practical importance. Proper analysis may help one avoid mistakes in design and unrealistically high expectations of the designed system performance.

References
1. Utkin, V.I.: Sliding Modes in Control and Optimization. Springer-Verlag, Berlin, Germany (1992)
2. Young, D., Utkin, V.I., Ozguner, U.: A control engineer's guide to sliding mode control. IEEE Trans. Control Syst. Technol. 7(3), 328-342 (1999)
3. Atherton, D.P.: Nonlinear Control Engineering - Describing Function Analysis and Design. Van Nostrand Company Limited, Workingham, Berks, UK (1975)
4. Gelb, A., Vander Velde, W.E.: Multiple-Input Describing Functions and Nonlinear System Design. McGraw-Hill, New York (1968)
5. Boiko, I., Fridman, L.: Analysis of chattering in continuous sliding-mode controllers. IEEE Trans. Autom. Control 50(9), 1442-1446 (2005)
6. Boiko, I.: On frequency-domain criterion of finite-time convergence of second-order sliding mode control algorithms. Automatica 47(9), 1969-1973 (2011)
7. Edwards, C., Spurgeon, S.K.: Sliding Mode Control: Theory and Applications. Taylor & Francis Systems and Control Book Series. Taylor & Francis (1998)
8. Levant, A.: Chattering analysis. IEEE Trans. Autom. Control 55(6), 1380-1389 (2010)
9. Boiko, I.: On relative degree, chattering and fractal nature of parasitic dynamics in sliding mode control. J. Franklin Inst. 351(4), 1939-1952 (2014)
10. Boiko, I.: On inherent gain margins of sliding-mode control systems. In: Li, S., Yu, X., Fridman, L., Man, Z., Wang, X. (eds.) Advances in Variable Structure Systems and Sliding Mode Control - Theory and Applications, pp. 133-147. Springer-Verlag, London (2018)
11. Boiko, I.: Oscillations and transfer properties of relay servo systems - the locus of a perturbed relay system approach. Automatica 41, 677-683 (2005)
12. Tsypkin, Y.Z.: Relay Control Systems. Cambridge University Press, Cambridge, England (1984)
13. Hamel, B.: Contribution a l'etude mathematique des systemes de reglage par tout-ou-rien, C.E.M.V., (17), Service Technique Aeronautique (1949)
14. Khalil, H.K.: Nonlinear Systems. Prentice Hall (1996)
15. Fridman, L.: Singularly perturbed analysis of chattering in relay control systems. IEEE Trans. Autom. Control 47(12), 2079-2084 (2002)
16. Boiko, I., Fridman, L., Castellanos, M.I.: Analysis of second order sliding mode algorithms in the frequency domain. IEEE Trans. Autom. Control 49(6), 946-950 (2004)
17. Boiko, I., Fridman, L., Pisano, A., Usai, E.: Analysis of chattering in systems with second-order sliding modes. IEEE Trans. Autom. Control 52(11), 2085-2102 (2007)
18. Boiko, I.: Input-output analysis of limit cycling relay feedback control systems. In: Proceedings of the 1999 American Control Conference, vol. 1, pp. 542-546 (1999)
19. Utkin, V., Poznyak, A., Orlov, Y.V., Polyakov, A.: Road Map for Sliding Mode Control Design. Springer International Publishing (2020)
20. Perez-Ventura, U., Fridman, L.: Design of super-twisting control gains: A describing function based methodology. Automatica 99, 175-180 (2019)
21. Shtessel, Y., Edwards, C., Fridman, L., Levant, A.: Sliding Mode Control and Observation. Birkhauser, Boston (2014)
22. Swikir, A., Utkin, V.: Chattering analysis of conventional and super twisting sliding mode control algorithm. In: 2016 IEEE 14th International Workshop on Variable Structure Systems, pp. 98-102 (2016)
23. Utkin, V.: Discussion aspects of high-order sliding mode control. IEEE Trans. Autom. Control 61(3), 829-833 (2016)
24. Perez-Ventura, U., Fridman, L.: Chattering comparison between continuous and discontinuous sliding-mode controllers. In: Steinberger, M., Horn, M., Fridman, L. (eds.) Variable-Structure Systems and Sliding-Mode Control, Studies in Systems, Decision and Control, vol. 271, pp. 197-211. Springer, Cham (2020)
25. Perez-Ventura, U., Fridman, L.: When is it reasonable to implement the discontinuous sliding-mode controllers instead of the continuous ones? Frequency domain criteria. Int. J. Robust Nonlinear Control 29(3), 810-828 (2019)

Describing Function-Based Analysis and Design of Approximated Sliding-Mode Controllers with Reduced Chattering

Antonio Rosales, Leonid Freidovich, and Ismael Castillo

Abstract Sliding-mode control (SMC) is a powerful robust control design technique that, when appropriately implemented, ensures insensitivity to the so-called matched bounded disturbances and finite-time convergence. However, the insensitivity requires an ideal implementation of discontinuous signals, often based on the sign function, which in practice results in the presence of parasitic oscillations called chattering. Chattering is unavoidable in systems with SMC, including continuous and higher-order SMC (HOSM) approaches. One of the simplest and most commonly used solutions to alleviate chattering is the approximation of the SMC obtained by substituting the sign function with a sigmoid or a saturation function, although this obviously transforms the insensitivity property into a mere reduction of the influence of the disturbances. In fact, the accuracy of approximating the discontinuity creates a trade-off between the reduction of the influence of the disturbances and the amount of chattering. Hence, the following question appears: Is it possible to systematically design an SMC approximation, avoiding a blind search that requires a huge number of numerical and/or hardware experiments? This chapter presents a method to design the boundary-layer parameter of the saturation function. The design is based on the describing function (DF) and harmonic balance (HB) techniques for estimating the parameters of chattering, i.e., the frequency and amplitude of the parasitic oscillations.



1 Introduction

Sliding-mode control (SMC) guarantees perfect tracking and regulation in control systems with a high level of uncertainty, when the uncertainties and disturbances are matched with the control input [13, 30, 35]. A variety of SMC controllers, including continuous and discontinuous ones, have been proposed and studied during the last 10 years [36]. However, the common drawback of all these SMC controllers is the effect of chattering, i.e., the appearance of parasitic high-frequency oscillations with a finite amplitude around the sliding manifold, due to unmodeled dynamics such as imperfections of actuators and sensors, and the presence of sampling [3, 8]. Hence, chattering is an open problem, since it is inevitable in systems driven by SMC controllers. There exist several approaches to attenuate chattering [17, 37]. One of them is a dynamic extension, either defining the actual control signal as an integral of a new control input to be designed as an SMC, or including into the control law an integral of the discontinuous term u_disc instead of u_disc = sign(σ). Then, the controller is only composed of continuous signals, and minor levels of chattering are expected [21]. Another one is the design of adaptive-gain methods that modify the gain of the SMC controller with respect to the magnitude of the disturbance; the control gain magnitude can then be decreased when a low magnitude of the disturbance is sensed, and a minor level of chattering is ensured [26]. Also, an observation approach, where only the estimated states are used to compute the control signal, and where the output is considered free of chattering since it is generated by the observer, is proposed in [9]. It is also possible to estimate the disturbance and reduce its influence by including an opposite-in-sign term in the expression of the controller, which may result in a smaller leftover part of the disturbance to be handled by the discontinuous part of the SMC with a lower coefficient. The approximation technique is another method to attenuate chattering [10]. It consists of the replacement of the discontinuous sign function by a continuous function such as a saturation or a sigmoid. Among the mentioned approaches to attenuate chattering, the approximation technique is the simplest to implement, and it is commonly used in practice. The most widely applied approximation technique for chattering attenuation is the boundary-layer (BL) method [10, 32]. Under the assumption that the main source of chattering is the discontinuous terms in the SMC, the idea is to replace the relay terms by saturation functions [7]. Once the discontinuity is removed, the infinite gain of the sign function around the origin is replaced with the high gain of the slope of the approximating continuous function. This creates a BL, a set around the original sliding set where the approximated SMC signal differs from the ideal SMC signal, potentially leading to a significant reduction of chattering. However, sliding, defined by solutions of differential equations with discontinuities in the sense of Filippov and resulting in the insensitivity to matched disturbances, disappears, while the trajectories outside of the BL set are not modified. The BL value is characterized by the slope (around zero) of the continuous function. In the case of a saturation function, the BL value is inversely proportional to the slope. The smaller


the BL value is, the higher the slope is and the better the approximation of the discontinuous function is. However, only a few studies quantify how much chattering is reduced or increased when the BL method is applied. Here, we present a method for selecting the saturation parameters in the BL approximation. Conventional and HOSM control systems with BL approximation in the presence of unmodeled dynamics have been studied in [7, 11], and [28]. In [7], the author focuses only on first-order SMC; the SMC with BL is studied using three different techniques: the describing function (DF) and harmonic balance (HB) equation [2, 15], Popov's stability criterion, and the Poincaré map. Considering the DF analysis, it is concluded that chattering may exist in conventional SMC with BL when a solution of the HB equation is found, and this solution characterizes the expected oscillations. The study presented in [11] is focused on the analysis of SMC for relative-degree-one systems. The authors analyzed first-order SMC and super-twisting (ST) algorithms with BL approximation using the DF; the relation between the parameters of chattering and the BL value is studied, and a method to compute the BL value that minimizes the parameters of chattering is provided. In [28], an analysis of SMC with BL for relative-degree-two systems is presented, the relation between the BL value and the parameters of chattering is studied, and a BL value minimizing the parameters of chattering is presented. To the best of our knowledge, the results presented in [11] and [28] are the only ones that provide a quantitative analysis for the design of the saturation function in SMC with BL approximation. Some studies, like [24, 25, 27], quantify chattering in SMC for relative-degree-two systems, but the analysis is done in terms of the actuator time constant, not the BL approximation. Other research works related to saturated SMC systems are found in [12, 16, 29]. These works do not employ the BL approximation, the saturation is applied to the total control input, and an analysis of chattering is not presented. In this chapter, the analysis of SMC with BL approximation for systems of relative degree one and two is performed. The DF and HB equation are employed to study chattering and to compute its parameters (amplitude and frequency). A quantitative analysis of chattering in SMC with BL approximation is presented. Examples and simulations that verify the proposed methods are included.

2 Problem Statement

Consider the following model of the plant, or of the plant plus actuator,

$$ \dot{x} = Ax + B(u + \phi), \qquad \sigma = Cx, \qquad (1) $$

where x ∈ R n is the state vector, u ∈ R is the control input, and σ ∈ R is the output. Matrices A, B, and C have appropriate dimensions. The matched disturbance φ


is assumed either bounded, |φ| < L, L > 0, or continuously differentiable with a bounded derivative, |dφ/dt| < L_d, L_d > 0. The control design objective is to drive σ to zero, preferably in finite time. Taking σ as a sliding variable and assuming that its derivatives are available for feedback, the control input u can be designed as a HOSM controller [28],

$$ u = U\bigl(\sigma, \dot{\sigma}, \ldots, \sigma^{(r-1)}\bigr) + v; \qquad \dot{v} = U_v\bigl(\sigma, \dot{\sigma}, \ldots, \sigma^{(r-1)}\bigr), \qquad (2) $$

where r is the relative degree of (1) with respect to σ. Then, the control gains of (2), for r = 1 or r = 2, can be computed using the bounds L or L_d, and the control objective can be accomplished as in [19, 23]. For example, one can have a discontinuous HOSM controller u such as conventional SMC, Twisting, or Nested, with control gains calculated using the information of L [19, 30]. Another option is to have a continuous HOSM controller u such as Super-Twisting, higher-order Super-Twisting, or Continuous Twisting, with gains computed using the information of L_d [23, 33]. The block diagram of the closed-loop system (1)-(2) is presented in Fig. 1a [28]. However, chattering, i.e., oscillations of high frequency and finite amplitude, is unavoidable in the closed-loop system defined by Eqs. (1)-(2). Chattering is caused by the presence of the unmodeled dynamics omitted when HOSM controllers are designed. The BL approximation method can be applied to the closed-loop system (1)-(2) to attenuate chattering. Following [11, 25, 34], assuming the presence of an actuator with the second-order model

$$ \mu^{2}\ddot{u} + 2\mu\dot{u} + u = \bar{u}(t); \qquad \mu > 0, \qquad (3) $$

and replacing the discontinuous terms of the HOSM controller (2) by a saturation function, the analysis of chattering for the closed-loop system (1)-(2) is performed. The output and input of the actuator are u and ū, respectively. The control signal generated by the HOSM controller is provided to the plant in Eq. (1) through the actuator output u; see Fig. 1b [28]. Once the BL method is applied, one can analyze the chattering in the system (1)-(2) including the unmodeled dynamics (3) in cascade with the plant; see Fig. 1b.

Remark 1 Note that the disturbance/uncertainty φ is assumed in the foregoing analysis to be equal to zero. This significant simplification can be motivated by assuming that the difference between the continuous approximation and the discontinuous term in the designed SMC is sufficiently small, so that replacing insensitivity to φ ≠ 0 (satisfying the assumptions above) with alleviation of its influence on the solutions of the closed-loop system is a negligible change. Then, σ and/or σ̇ are assumed to be kept identically zero, or in a sufficiently small vicinity of zero, when chattering due to unmodeled dynamics is analyzed; see Fig. 1b. If required, local effects of the disturbance/uncertainty φ ≠ 0 can be analyzed by means of the equivalent gain method [3, 6].

Assuming the frequency response W(jω) of the cascade between the plant (1) and the actuator (3) has low-pass filter properties, the DF and HB equation can be used to



Fig. 1 a System employed to compute the gains of the HOSM using only information of φ, i.e., unmodeled dynamics are omitted; b System controlled by HOSM with BL approximation, considering unmodeled dynamics, to be used for designing parameters of BL

estimate chattering parameters, i.e., amplitude and frequency of the parasitic oscillations, and to discuss the effect of BL approximation on these parameters. The next section describes how the approximated HOSM system is analyzed using the DF of the nonlinearities in the control law.
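To make the closed-loop structure of Fig. 1b concrete, the following Python sketch simulates, with a simple Euler scheme, a scalar plant of the form x' = u + phi (a special case of (1)) driven through the actuator (3), with the discontinuous control replaced by the boundary-layer saturation discussed in the next section. All numerical values (gains, BL width, disturbance) are illustrative assumptions, and the stabilizing negative-feedback sign is written explicitly.

```python
import numpy as np

def sat(s, d):
    """Boundary-layer saturation: linear with slope 1/d inside |s| <= d."""
    return np.clip(s / d, -1.0, 1.0)

mu, delta, alpha = 0.01, 1e-3, 1.2     # actuator time constant, BL width, control gain (assumed)
dt, T = 1e-5, 2.0

x, u, du = 1.0, 0.0, 0.0               # plant state, actuator output and its derivative
for k in range(int(T / dt)):
    t = k * dt
    phi = 0.5 * np.sin(2.0 * t)        # assumed matched disturbance, |phi| < 1
    u_bar = -alpha * sat(x, delta)     # saturated SMC applied at the actuator input
    ddu = (u_bar - 2.0 * mu * du - u) / mu**2   # actuator dynamics (3)
    du += dt * ddu
    u += dt * du
    x += dt * (u + phi)                # scalar plant
print("steady-state |sigma| (chattering level):", abs(x))
```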

3 Describing Function (DF) of Approximated SMC

In this section, a brief introduction to the DF technique and the HB equation is presented; the main sources of information are [2, 15] and [28]. The DF technique is applied to nonlinear systems with the structure presented in Fig. 2a, i.e., systems that can be separated into a nonlinear element ψ(σ) and a linear block defined by a (stable, low-pass) transfer function H(s) [15] with frequency response H(jω). The closed-loop system studied in this chapter, see Fig. 1b, has the same structure as Fig. 2a: the linear block is the frequency response of the plant plus actuator, W(jω), and the nonlinear part is the HOSM with BL approximation. The low-frequency part of the steady-state output of the nonlinear element to the signal σ = A sin(ωt) is assumed to be approximately defined by the complex-valued, A-dependent function N(A, ω), called the DF, as ψ(A sin(ωt)) ≈ |N(A, ω)| A sin(ωt + ∠N(A, ω)). The chattering parameters in HOSM with BL can be identified by means of the solution of the HB equation [15]

$$ W(j\omega) = -\frac{1}{N(A, \omega)}, \qquad (4) $$

where N(A, ω) is the DF of the HOSM with BL, and W(jω) is the frequency response of the plant plus actuator. The solution of Eq. (4) has a graphical interpretation, as in Fig. 2b. The intersection between the frequency response W(jω) and the plot

362

A. Rosales et al.

Solution

Re

Im

Fig. 2 a Block diagram of nonlinear system divided in linear part and nonlinear part. b Graphical solution of Eq. (4) with saturation DF in (7)

of −N⁻¹(A, ω) gives the solution of Eq. (4), i.e., the amplitude A and frequency ω of chattering. The BL approximation of HOSM algorithms is defined here as the result of replacing the discontinuous terms by the saturation function

$$ \mathrm{sat}(\sigma, \delta) = \begin{cases} \frac{1}{\delta}\,\sigma & \text{if } |\sigma| \le \delta \text{ and } \delta > 0 \\ \mathrm{sign}(\sigma) & \text{otherwise}, \end{cases} \qquad (5) $$

where 1/δ is the slope of the linear region. When δ → 0, the sign function is recovered, while the understanding of solutions of the closed-loop system changes from the classical solutions of differential equations to the solutions of the corresponding differential inclusions as defined by Filippov with a possibility of a sliding regime with sign(0) understood as a set [14].

3.1 DF of Saturation Function

The DF of the saturation function α sat(σ, δ), defined by (5), see, e.g., [15], is

$$ N_{sat}(A) = \frac{2\alpha}{\pi\delta}\left[\sin^{-1}\frac{\delta}{A} + \frac{\delta}{A}\sqrt{1 - \left(\frac{\delta}{A}\right)^{2}}\right]; \qquad A > \delta, \qquad (6) $$

where α = 1 is the magnitude of the saturation. For the case A ≤ δ, the DF reduces to the constant gain N_sat(A) = α/δ, since the nonlinear function is within its region of linearity. Figure 2 presents the graphical interpretation of the solution of the HB Eq. (4) with the DF in (6), and a frequency response W(jω) of relative degree 3. Since N_sat(A) is constant and does not depend on A for A ≤ δ, the plot of −1/N_sat starts at −1/N_sat = −1/(1/δ) = −δ, and there is a gap to the origin proportional to δ.


Note that the HB equation with the DF in (6) does not predict any chattering, i.e., oscillation, when W(jω) is located in the first and fourth quadrants of the complex plane. Chattering may exist when W(jω) is located in the second and third quadrants and the magnitude |W(jω)| is bigger than δ at the phase ∠W(jω) = −π. Since the solution of the HB equation is the basis of the analysis presented in this chapter, and an explicit analytical solution of the HB equation is desired, the Taylor approximation of the nonlinear DF (6) for small δ > 0 and A > δ is used to solve the HB equation, see [11]:

$$ N_{sat}(A) \approx \frac{4\alpha}{\pi A} - \frac{2\alpha}{3\pi A^{3}}\,\delta^{2}. \qquad (7) $$

When it is not possible to obtain an explicit analytical solution of the HB equation, a numerical method is used.
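As a numerical companion to (6) and (7), the Python sketch below evaluates the exact saturation DF, its Taylor approximation, and solves the HB equation (4) for an illustrative third-order low-pass linear block W(s) = 1/(s(0.01s+1)^2); this W is an assumed example, not the model used later in the chapter.

```python
import numpy as np
from scipy.optimize import brentq

def N_sat(A, delta, alpha=1.0):
    """Exact DF of alpha*sat(sigma, delta), Eq. (6); constant alpha/delta for A <= delta."""
    if A <= delta:
        return alpha / delta
    r = delta / A
    return 2.0 * alpha / (np.pi * delta) * (np.arcsin(r) + r * np.sqrt(1.0 - r**2))

def N_sat_approx(A, delta, alpha=1.0):
    """Taylor approximation (7), valid for small delta and A > delta."""
    return 4.0 * alpha / (np.pi * A) - 2.0 * alpha * delta**2 / (3.0 * np.pi * A**3)

# Assumed linear block W(s) = 1/(s*(0.01*s+1)^2): phase crossover where
# -pi/2 - 2*arctan(0.01*w) = -pi, i.e., near w = 100 rad/s.
W = lambda w: 1.0 / (1j * w * (0.01 * 1j * w + 1.0) ** 2)
w_pi = brentq(lambda w: (-np.pi / 2.0 - 2.0 * np.arctan(0.01 * w)) + np.pi, 1.0, 1e4)

delta = 1e-3
# HB equation (4) at the crossover: N_sat(A)*|W(j w_pi)| = 1, solved for the amplitude A.
A = brentq(lambda A: N_sat(A, delta) * abs(W(w_pi)) - 1.0, delta * 1.001, 1.0)
print(f"predicted chattering: frequency {w_pi:.1f} rad/s, amplitude {A:.4e}")
print("approximation (7) at that amplitude:", N_sat_approx(A, delta), "vs exact:", N_sat(A, delta))
```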

4 Relative Degree One Systems

Suppose the plant defined by Eq. (1) is the first-order system

$$ \dot{x} = u + \phi; \qquad \sigma = x. \qquad (8) $$

Then, u can be designed for a relative-degree-one system so that σ can be driven to zero in finite time under an ideal implementation. Let us consider the following SMC for relative-degree-one systems, known as the conventional SMC,

$$ u = \alpha\,\mathrm{sign}(\sigma) \qquad (9) $$

with α > L, and the Super-Twisting controller

$$ u = -k_1|\sigma|^{1/2}\mathrm{sign}(\sigma) + v; \qquad \dot{v} = -k_2\,\mathrm{sign}(\sigma) \qquad (10) $$

with k_1 > 1.5√L_d, k_2 > 1.1 L_d [18]. The controllers (9) and (10) are designed only with the information about the disturbance φ; no information about unmodeled dynamics, such as the actuator model in Eq. (3), is considered. This section contains the analysis of controllers (9) and (10) with BL approximation, considering the actuator dynamics (3). The DF technique and the HB Eq. (4) are used for the analysis. The DFs for the controllers (9) and (10) are [31]

$$ N_{smc} = \frac{4\alpha}{\pi A} \qquad (11) $$

and [4]


$$ N_{ST} = \frac{c_1 k_1}{\pi A^{1/2}} + \frac{4 k_2}{\pi A}\,\frac{1}{j\omega}, \qquad (12) $$

respectively, where c_1 = 2π^{1/2} Γ(1.25)/Γ(1.75) ≈ 3.496.

4.1 First-Order Sliding-Mode Control The first-order SMC in Eq. (9) with BL approximation is u smcδ = αsat(σ, δ); α > L ,

(13)

which has a DF similar to (7). Then, for small δ > 0 and A > δ the next HB equation is obtained,   2α 2 4α − δ ≈ 2μω2 + jω μ2 ω2 − 1 (14) 3 π A 3π A

  − W (1jω)

where the W ( jω) is the frequency response of the plant (8) plus actuator (3). Considering the imaginary part of Eq. (14), the frequency of chattering can be computed as 1 (15) ω= , μ and, from the real part of Eq. (14), the amplitude of chattering is the solution of the equation [11] 2αμ 2 αμ 2 A + δ ≈0 A3 − (16) π 3π which for sufficiently small values of δ > 0 (and not extremely small μ > 0) are A≈

π 2α μ− δ2 > δ π 12 α μ

and the extraneous solutions A ≈ ±

4.1.1



6 δ 6

+

π δ2 24 α μ

< δ.

Example

Consider the plant (8) in cascade with (3) in a closed loop with the controller (13). The parameters of plant, actuator, and controller are α = 1.1 ∗ L, L = 1, and μ = 0.01. Then, the frequency and amplitude of chattering are computed with Eqs. (15) and (16), respectively, when δ increases.

Describing Function-Based Analysis and Design … 10

365

-3

7 6.5 6 5.5 5 110 105 100 95 90

0

1

2

3

4

5

6 10-3

Fig. 3 Estimated amplitude A and frequency ω for first-order SMC with BL approximation 1

=0 =0.001 =0.003 =0.005

0.8 0.6

0.01

0.4

0

0.2 0

0

-0.01 4.7

4.75

1

2

4.8 3

4.85

4.9 4

5

Fig. 4 Simulation of the plant (8) in cascade with (3) in a closed loop with the controller (13)

The estimated values of A and ω are presented in Fig. 3. Since the frequency depends only on the actuator time constant, see (15), it is not influenced by the approximation. However, a decrement in the amplitude A can be observed when δ increases. Then, the first-order SMC with δ = 0 can be improved in terms of chattering attenuation, when the BL approximation method is applied. Figure 4 presents the output σ resulting from the simulation of the plant (8) in cascade with (3) in a closed loop with the controller (13). Several values of δ are used, and it can be seen that the amplitude of chattering decreases when the value of delta increases.

366

A. Rosales et al.

Fig. 5 Curves of inverse describing function −1 −1/N ST (A, ω) of sat approximated super-twisting algorithm

0 -0.01 -0.02 -0.03 -0.04 -0.05 -0.4

=10 =50 =100 =500

-0.3

-0.2

-0.1

0

4.2 Super-Twisting Controller The Super-Twisting controller in Eq. (10) with BL approximation is defined as  u STδ = −k1 |σ |1/2 sign(σ ) + vδ ; v˙ δ = −k2 sat(σ, δ); k1 > 1.5 L d , k2 > 1.1L d , (17) and its DF representation is [4, 11] N STsat =

c1 k1 1 + Nsat (A) π A1/2 jω

(18)

Γ (1.25) where c1 = 2π Γ (1.75) ≈ 3.496, Γ is the gamma function [1], and Nsat (A) is defined −1 (A, ω) considering δ = 0.001, in Eq. (7). Figure 5 presents the curves of −N ST sat amplitude δ ≤ A < 0.5, and frequency ω = 10, 50, 100, 500. Note the gap around the origin due to the saturation value. Considering W ( jω) as the frequency response of the plant (8) plus actuator (3), and the DF N STsat , then the HB equation is 1/2

  c1 k1 − j Nsat (A) = 2μω2 + jω μ2 ω2 − 1 . π A1/2

 

(19)

− W (1jω)

An explicit analytical solution of Eq. (19), or even of its approximation for small 1 values of δ > 0, is difficult to obtain, except for ω2 = 2 πc1μk√ , defining how the A frequency decreases with the amplitude. However, a numerical method should be used to find the amplitude of parasitic oscillations, as presented in the next example.

Describing Function-Based Analysis and Design …

4.2.1

367

Example

Consider the plant (8) in cascade with (3) in a closed loop with √ the controller (17). The parameters of plant, actuator, and controller are k1 = 1.1 L d , k2 = 1.1L d , L d = 1, and μ = 0.01. Then, the HB Eq. (19) is solved numerically to identify the frequency and amplitude of chattering when δ increases, and the results are presented in Fig. 6. As one can observed in Fig. 6, the Super-Twisting controller with BL approximation (17) may result in smaller chattering than with the original Super-Twisting controller (10), in terms of the magnitude of A. However, the frequency of chattering increases. The plant (8) in cascade with (3) in a closed loop with the controller (17) is simulated by varying δ; the output σ during the simulation is show in Fig. 7. One can observe the same reduction of chattering when δ increases.

10

6.5

-4

6

5.5 60 59 58 57

0

1

2

3

4 10

-4

Fig. 6 Estimated amplitude A and frequency ω for super twisting with BL approximation 1 =0 =0.0002 =0.0004

0.8 10-4

0.6 0.4 0.2 0

5 0 -5 4.75

4.8

4.85

4.9

4.95

Fig. 7 Simulation of the plant (8) in cascade with (3) in a closed loop with the controller (17)

368

A. Rosales et al.

4.3 Quantitative Analysis In order to have an estimate of how much chattering can be attenuated when BL approximation is used in first-order SMC or Super-Twisting controller, the ratio of reduction is computed: P − Pδ RP = × 100 (20) P where P is the parameter of interest (amplitude A or frequency ω) without BL approximation, and Pδ is the parameter with BL approximation. Considering the DF Nsmc in (11) of the first-order SMC, the analytical solution of the HB equation can be obtained as [25] A=

1 2αμ ; ω= . π μ

(21)

This solution gives the estimation of A and ω for the plant (8) in a closed loop with the controller (9). On the other had, the analytical solution of the HB equation can be obtained for the case when the plant (8) plus actuator (3) are in a closed loop with the controller (13), αμ 1 Aδ = ; ωδ = . (22) 2 μ Then, using Eq. (20), the ratio of reduction of amplitude R A can be computed [11]  π A − Aδ × 100 = 1 − × 100 ≈ 21.46%. RA = A 4 Since the amplitude ω does not change, the ratio of reduction of frequency Rω is zero. For the case when the plant (8) plus actuator (3) are in a closed loop with the controller (17), the next steps are followed. Considering the DF N ST in Eq. (12) of the Super-Twisting controller, the analytical solution of the HB equation can be obtained as [25]  A=μ

2



2 c1 k + 4π k2 2 1 c1 π k1 2

2

1 ; ω= μ



 c1  c1 2

k1

k 2 1 2

2

+ 4π k2

1/2 .

(23)

This solution gives the estimation of A and ω for the plant (8) in a closed loop with the controller (10). Since it is difficult to find an analytical solution of the HB equation for the case when the plant (8) plus actuator (3) are in a closed loop with the controller (17), an analytical solution can be found when δ is selected as δ = A in Eq. (18). The solution of the HB equation is

$$ A_\delta = \mu^2\left(\frac{\tfrac{c_1^2 k_1^2}{2} + \pi^2 k_2}{\tfrac{c_1 \pi k_1}{2}}\right)^{2}; \qquad \omega_\delta = \frac{1}{\mu}\left(\frac{\tfrac{c_1^2 k_1^2}{2}}{\tfrac{c_1^2 k_1^2}{2} + \pi^2 k_2}\right)^{1/2}. \tag{24} $$

Then, for the case of the Super-Twisting algorithm, the ratio of reduction of amplitude and frequency is obtained as

$$ R_A = \left(1 - \left(\frac{\tfrac{c_1^2 k_1^2}{2} + \pi^2 k_2}{\tfrac{c_1^2 k_1^2}{2} + 4\pi k_2}\right)^{2}\right) \times 100, \tag{25} $$

$$ R_\omega = \left(1 - \left(\frac{\tfrac{c_1^2 k_1^2}{2} + 4\pi k_2}{\tfrac{c_1^2 k_1^2}{2} + \pi^2 k_2}\right)^{1/2}\right) \times 100. \tag{26} $$

If the gains are selected as k1 = 1.5√Ld and k2 = 1.1Ld, one gets R_A ≈ 12.56 % and R_ω ≈ −3.41 %. It should, however, be kept in mind that while the computed reduction appears independent of the value of the actuator's time constant μ > 0, the insensitivity to the disturbance disappears, and the amplitude of the oscillations caused by the not fully compensated disturbance will depend on this constant. At the same time, the obtained expressions answer the question of how much one can improve the performance using the considered approximation when the actuators are very fast.
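For the first-order SMC, the reduction ratios follow from Eqs. (20)–(22) by direct substitution; the short sketch below reproduces the 21.46 % figure (the Super-Twisting case is handled in the same way with Eqs. (25)–(26)). The values of α and μ are placeholders, since the ratio does not depend on them.

```python
# Sketch: reduction ratios of Eq. (20) for the first-order SMC case, Eqs. (21)-(22).
import numpy as np

alpha, mu = 1.0, 0.01                              # relay gain and actuator time constant
A,  w  = 2.0 * alpha * mu / np.pi, 1.0 / mu        # Eq. (21): relay controller (9)
Ad, wd = alpha * mu / 2.0,         1.0 / mu        # Eq. (22): BL controller (13), delta = A
R_A = (A - Ad) / A * 100.0                         # Eq. (20)
R_w = (w - wd) / w * 100.0
print(f"R_A = {R_A:.2f} %   (analytically (1 - pi/4)*100 ≈ 21.46 %)")
print(f"R_w = {R_w:.2f} %")
```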

5 Relative Degree Two Systems

Suppose the plant in Eq. (1) is a second-order system

$$ \ddot{x} = u + \phi; \qquad \sigma = x_1, \tag{27} $$

where u is an SMC for relative degree two systems. Assuming an ideal implementation of a classical SMC, which is typically impossible in practice, σ can be driven to zero in finite time. In this section, three SMC algorithms for relative degree two systems are studied. The first one is the second-order Nested controller [19],

$$ u_{nes} = \rho\,\mathrm{sign}\big(\dot{\sigma} + |\sigma|^{0.5}\mathrm{sign}(\sigma)\big), \tag{28} $$

where ρ > L; the second one is the Twisting controller [20],

$$ u_{tw} = -\alpha_1\,\mathrm{sign}(\sigma) - \alpha_2\,\mathrm{sign}(\dot{\sigma}), \tag{29} $$

where α1 > L + α2 and α2 > L; and the third one is the extension of the ST controller for relative degree two [22, 33],


$$ u_{ST2} = -k_1|\sigma|^{1/3}\mathrm{sign}(\sigma) - k_2|\dot{\sigma}|^{1/2}\mathrm{sign}(\dot{\sigma}) + v; \qquad \dot{v} = -k_3\,\mathrm{sign}(\sigma), \tag{30} $$

where k1 = 2L_d^{2/3}, k2 = 5L_d^{1/2}, and k3 > L_d. Note that the design of the controllers (28), (29), and (30) omits the information of unmodeled dynamics like the actuator (3), while taking it into account would lead not only to a model with a higher relative degree but also to an unmatched disturbance, the effect of which cannot be fully annihilated using the SMC techniques as discussed above; see also [13]. This section contains the analysis of the controllers (28), (29), and (30) with BL approximation considering the actuator dynamics (3). The DF technique and the HB Eq. (4) are used for the analysis. Let us, however, recall again that the controllers are designed so that, without the actuator dynamics, the response is insensitive to the disturbance, while a sufficiently fast actuator dynamics of degree 1 or 2 is assumed to ensure a significant reduction of the response to the disturbance at the price of the appearance of chattering, which we aim to reduce by substituting the discontinuous terms with continuous approximations. The DF of the Nested algorithm is (see [28] and the Appendix)

$$ N_{nes}(A, \omega) = \frac{4\rho}{\pi A^{1/2}\sqrt{A\omega^2 + \gamma^2}}\left(\frac{\gamma}{A^{1/2}} + j\omega\right), \tag{31} $$

where γ = 2Γ(5/4)/(π^{1/2}Γ(7/4)), Γ being the gamma function.

The DF of Twisting is [5]

$$ N_{tw}(A) = \frac{4(\alpha_1 + j\alpha_2)}{\pi A}. \tag{32} $$

And the DF of the extension of the ST controller for relative degree two is [25]

$$ N_{STex}(A, \omega) = \frac{c_1 k_1}{\pi A^{2/3}} + j\,\frac{c_2 k_2\,\omega^{1/2}}{\pi A^{1/2}} + \frac{4 k_3}{\pi A}\,\frac{1}{j\omega}, \tag{33} $$

where c1 = 3.642 and c2 = 3.496.
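For numerical work with the HB method, the three describing functions (31)–(33) can be coded directly. The sketch below follows the formulas as reconstructed above and should be read as an illustration rather than a verified implementation; the arguments in the final print statement are arbitrary test values.

```python
# Sketch: describing functions (31)-(33) as Python callables.
import numpy as np
from scipy.special import gamma

GAMMA = 2.0 * gamma(1.25) / (np.sqrt(np.pi) * gamma(1.75))   # constant "gamma" in Eq. (31)
C1, C2 = 3.642, 3.496                                        # constants of Eq. (33)

def N_nes(A, w, rho):
    """Eq. (31): DF of the nested controller (28)."""
    return 4.0 * rho / (np.pi * np.sqrt(A) * np.sqrt(A * w**2 + GAMMA**2)) \
           * (GAMMA / np.sqrt(A) + 1j * w)

def N_tw(A, a1, a2):
    """Eq. (32): DF of the twisting controller (29)."""
    return 4.0 * (a1 + 1j * a2) / (np.pi * A)

def N_ST_ex(A, w, k1, k2, k3):
    """Eq. (33): DF of the ST extension (30) for relative degree two."""
    return C1 * k1 / (np.pi * A**(2.0 / 3.0)) \
         + 1j * C2 * k2 * np.sqrt(w) / (np.pi * np.sqrt(A)) \
         + 4.0 * k3 / (np.pi * A) / (1j * w)

print(N_nes(1e-3, 40.0, 1.0))
print(N_tw(1e-3, 2.3, 1.1))
print(N_ST_ex(1e-3, 75.0, 9.2832, 15.81, 11.0))
```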

5.1 Nested Algorithm

The nested algorithm in Eq. (28) has a discontinuous term (the sign function) and a continuous one, σ̇ + |σ|^{0.5} sign(σ). Then, replacing the discontinuous sign function by the saturation function defined in (5), the BL approximation of the nested algorithm is u_nessat = ρ sat(σ̇ + |σ|^{0.5} sign(σ)).

(34)

Considering the DF of saturation function in Eq. (7), the DF of the saturated nested algorithm is


Fig. 8 Curves of inverse describing function −1/N_nessat(A, ω) of the approximated nested algorithm. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”

$$ N_{nes_{sat}}(A, \omega) = \left(\frac{4\alpha}{\pi A^{1/2}\sqrt{A\omega^2 + \gamma^2}} - \frac{2\alpha\delta^2}{3\pi A^{3/2}\left(A\omega^2 + \gamma^2\right)^{3/2}}\right)\left(\frac{\gamma}{A^{1/2}} + j\omega\right). \tag{35} $$

One can observe that when δ = 0, the DF of the nested algorithm in Eq. (31) is recovered. Figure 8 presents the curves of −1/N_nessat(A, ω) for δ = 0.001, ρ = 1, ω ∈ {10, 50, 100, 500, 1000} [rad/sec], when δ < A < 0.01. Note that a gap equivalent to δ emerges around the origin of the complex plane due to the BL approximation. The HB equation for the plant (27) plus the actuator dynamics (3) in closed loop with the nested algorithm with BL approximation, defined by (34), is

$$ \left(\frac{4\alpha}{\pi A^{1/2}\sqrt{A\omega^2 + \gamma^2}} - \frac{2\alpha\delta^2}{3\pi A^{3/2}\left(A\omega^2 + \gamma^2\right)^{3/2}}\right)\left(\frac{\gamma}{A^{1/2}} + j\omega\right) = \underbrace{\omega^2\left(1 - \mu^2\omega^2\right) + j2\mu\omega^3}_{-1/W(j\omega)}. \tag{36} $$

The solution of the HB Eq. (36) is computed numerically since the equation is implicit.
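One possible implementation of this numerical solution is sketched below: the residual of Eq. (36), as reconstructed above, is driven to zero with scipy.optimize.fsolve for several values of δ, using the parameters of the example in the next subsection (α = 1, μ = 0.01) and an initial guess close to the δ = 0 solution.

```python
# Sketch: numerical solution of the implicit HB Eq. (36) for the nested controller with BL.
import numpy as np
from scipy.optimize import fsolve
from scipy.special import gamma

GAMMA = 2.0 * gamma(1.25) / (np.sqrt(np.pi) * gamma(1.75))
alpha, mu = 1.0, 0.01

def residual(z, delta):
    A, w = abs(z[0]), abs(z[1])                       # keep the iterates in the physical domain
    root = np.sqrt(A * w**2 + GAMMA**2)
    coeff = 4.0 * alpha / (np.pi * np.sqrt(A) * root) \
          - 2.0 * alpha * delta**2 / (3.0 * np.pi * A**1.5 * root**3)
    lhs = coeff * (GAMMA / np.sqrt(A) + 1j * w)
    rhs = w**2 * (1.0 - mu**2 * w**2) + 1j * 2.0 * mu * w**3   # -1/W(jw)
    return [lhs.real - rhs.real, lhs.imag - rhs.imag]

for delta in (0.0, 0.01, 0.03, 0.05):
    A, w = fsolve(residual, x0=[7e-4, 40.0], args=(delta,))
    print(f"delta = {delta:5.3f}:  A ≈ {abs(A):.3e},  omega ≈ {abs(w):.1f} rad/s")
```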

5.1.1 Example

Consider the plant (27) in cascade with (3) in a closed loop with the controller (34). The parameters of plant, actuator, and controller are α = 1 and μ = 0.01. Then, the numerical solution of (36) is obtained as δ increases, and the results are presented in Fig. 9. The second-order nested controller with BL approximation (34) may present a lower level of chattering compared with the original nested algorithm.


Fig. 9 Amplitude and Frequency versus δ for nested algorithm with BL. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”

Fig. 10 Simulation of the plant (27) in cascade with (3) in a closed loop with the controller (34), “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi. org/10.1109/CDC45484.2021.9683424”

Frequency ω and amplitude A of chattering are reduced when δ increases, as one can observe in Fig. 9. Also, this relation between the BL value and the parameters of chattering is observed in the simulation of the plant (27) in cascade with (3) in a closed loop with the controller (34); see Fig. 10.


Fig. 11 Curves of inverse describing function −1/N_twsat(A, ω) of the approximated twisting algorithm. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”

5.2 Twisting Algorithm

The Twisting controller (29) has two discontinuous terms; then, when one replaces each discontinuous term by the saturation function in (5), the twisting with BL approximation is obtained as

$$ u_{tw_{sat}} = -\alpha_1\,\mathrm{sat}(\sigma, \delta) - \alpha_2\,\mathrm{sat}(\dot{\sigma}, \delta). \tag{37} $$

Considering the DFs in Eqs. (7) and (32), the DF for twisting with BL is

$$ N_{tw_{sat}}(A, \omega) = \frac{4\alpha_1}{\pi A} - \frac{2\alpha_1\delta^2}{3\pi A^3} + j\left(\frac{4\alpha_2}{\pi A} - \frac{2\alpha_2\delta^2}{3\pi A^3\omega^2}\right). \tag{38} $$

Figure 11 presents the curves of −1/N_twsat(A, ω) for δ = 0.001, ω = 10 [rad/sec], and different values of α1 and α2, with δ < A < 0.01. One can observe that, as the amplitude A varies, the curves of −1/N_twsat converge to −1/N_tw, and that the gap depending on δ appears. The HB equation corresponding to the plant (27) plus the actuator dynamics (3) in a closed loop with the twisting with BL approximation in Eq. (37) is

$$ \frac{4\alpha_1}{\pi A} - \frac{2\alpha_1\delta^2}{3\pi A^3} + j\left(\frac{4\alpha_2}{\pi A} - \frac{2\alpha_2\delta^2}{3\pi A^3\omega^2}\right) = \underbrace{\omega^2\left(1 - \mu^2\omega^2\right) + j2\mu\omega^3}_{-1/W(j\omega)}. \tag{39} $$
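The same numerical treatment applies to Eq. (39); sweeping δ produces amplitude and frequency curves of the type shown in Fig. 12. The sketch below uses the reconstruction of (39) given above and the example gains α1 = 2.3, α2 = 1.1, μ = 0.01; the range of δ and the initial guess are illustrative choices.

```python
# Sketch: sweeping the BL width delta in the HB Eq. (39) for the twisting controller.
import numpy as np
from scipy.optimize import fsolve

a1, a2, mu = 2.3, 1.1, 0.01

def residual(z, delta):
    A, w = abs(z[0]), abs(z[1])                       # keep the iterates in the physical domain
    re = 4.0 * a1 / (np.pi * A) - 2.0 * a1 * delta**2 / (3.0 * np.pi * A**3)
    im = 4.0 * a2 / (np.pi * A) - 2.0 * a2 * delta**2 / (3.0 * np.pi * A**3 * w**2)
    rhs = w**2 * (1.0 - mu**2 * w**2) + 1j * 2.0 * mu * w**3   # -1/W(jw)
    return [re - rhs.real, im - rhs.imag]

guess = [6e-3, 25.0]
for delta in np.linspace(0.0, 3.5e-3, 8):
    A, w = fsolve(residual, x0=guess, args=(delta,))
    guess = [abs(A), abs(w)]                          # warm-start the next solve
    print(f"delta = {delta:.1e}:  A ≈ {abs(A):.2e},  omega ≈ {abs(w):.1f} rad/s")
```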


Fig. 12 Amplitude and Frequency versus δ for twisting with BL. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021. 9683424”

5.2.1 Example

Consider the plant (27) in cascade with (3) in a closed loop with the controller (37). The parameters of plant, actuator, and controller are α1 = 2.3, α2 = 1.1, and μ = 0.01. Then, the numerical solution of (39) is obtained when δ increases, and the results are presented in Fig. 12. Also, the simulation of the closed-loop system is presented in Fig. 13.

5.3 ST Extension for Relative Degree Two

Considering the ST extension for a relative degree two controller in Eq. (30), its BL approximation is obtained when the discontinuous term (k3 sign(σ)) is replaced by the saturation (5),

$$ u_{ST2_{sat}} = -k_1|\sigma|^{1/3}\mathrm{sign}(\sigma) - k_2|\dot{\sigma}|^{1/2}\mathrm{sign}(\dot{\sigma}) + v; \qquad \dot{v} = -k_3\,\mathrm{sat}(\sigma, \delta). \tag{40} $$

Then, the DF of the ST extension with BL approximation is computed using the DFs (7) and (33) as


Fig. 13 Simulation of the plant (27) in cascade with (3) in a closed loop with the controller (37). “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”


Fig. 14 Curves of inverse describing function −1/N_STexsat(A, ω) of the ST extension with BL approximation. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”

$$ N_{STex_{sat}}(A, \omega) = \frac{c_1 k_1}{\pi A^{2/3}} + j\left(\frac{c_2 k_2\,\omega^{1/2}}{\pi A^{1/2}} - \frac{4 k_3}{\omega\pi A} + \frac{2 k_3\,\delta^2}{3\omega\pi A^3}\right). \tag{41} $$

The curves corresponding to (−1/N ST exsat ) are presented in Fig. 14 for δ = 0.001, δ < A < 0.1, and k1,2,3 computed with L d = 10. The gap due to δ is observed around the origin, and when δ = 0 the curves correspond to the result presented in [25]. The HB equation corresponding to the plant (27) plus actuator dynamics (3) in a closed loop with ST extension with BL approximation (40) is


Fig. 15 Amplitude and Frequency versus δ for ST extension with BL approximation. “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi.org/10.1109/CDC45484.2021.9683424”

$$ \frac{c_1 k_1}{\pi A^{2/3}} + j\left(\frac{c_2 k_2\,\omega^{1/2}}{\pi A^{1/2}} - \frac{4 k_3}{\omega\pi A} + \frac{2 k_3\,\delta^2}{3\omega\pi A^3}\right) = \underbrace{\omega^2\left(1 - \mu^2\omega^2\right) + j2\mu\omega^3}_{-1/W(j\omega)}. \tag{42} $$

5.3.1 Example

Consider the plant (27) in cascade with (3) in a closed loop with the controller (40). The parameters of plant, actuator, and controller are k1 = 9.2832, k2 = 15.81, k3 = 11, and μ = 0.01. Then, the numerical solution of (42) is obtained when δ increases, and the results are presented in Fig. 15; it can be seen that the parameters of chattering vary slightly. The output σ from the simulation of the closed-loop system is presented in Fig. 16; note that the parameters of chattering barely change for values of δ inside the interval of the solution presented in Fig. 15.


Fig. 16 Simulation of the plant (27) in cascade with (3) in a closed loop with the controller (40). “©2022 IEEE. Reprinted with permission from A. Rosales, I. Castillo and L. Freidovich, ‘Analysis of Higher Order Sliding Mode Controllers with Boundary Layer Approximation,’ 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7070–7075, https://doi. org/10.1109/CDC45484.2021.9683424”

5.4 Quantitative Analysis

The ratio of reduction, defined by (20), is used for the quantitative analysis of chattering in the closed-loop systems with the controllers (34), (37), and (40) with BL approximation. For the controllers studied in this section, there is no analytical solution of the HB equations, so the analysis is performed using numerical solutions. Note that the ratio of reduction of amplitude is computed using the amplitude when δ = 0 and the amplitude when δ is equal to the value that minimizes the amplitude, i.e., δ = A. From Figs. 9, 12, and 15, it is possible to estimate the values of amplitude required for the computation of the ratio of reduction R_A. The values of R_A for the controllers studied in this section are:
• Nested with BL approximation, considering Fig. 9: R_A = (1 − (6.25 × 10⁻⁴)/(6.98 × 10⁻⁴)) × 100 ≈ 10.46 %
• Twisting with BL approximation, considering Fig. 12: R_A = (1 − (4.33 × 10⁻³)/(6 × 10⁻³)) × 100 ≈ 27.83 %
• ST extension with BL approximation, considering Fig. 15: R_A = (1 − (2.89 × 10⁻⁴)/(2.87 × 10⁻⁴)) × 100 ≈ −0.69 %.
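The three percentages can be reproduced from the amplitudes read off the figures with a few lines of code; the numbers below are simply the values quoted above.

```python
# Sketch: ratio of reduction R_A, Eq. (20), from the amplitudes read off Figs. 9, 12 and 15.
pairs = {"nested":       (6.98e-4, 6.25e-4),
         "twisting":     (6.00e-3, 4.33e-3),
         "ST extension": (2.87e-4, 2.89e-4)}   # (A at delta = 0, A at delta = A)
for name, (A0, Ad) in pairs.items():
    print(f"{name:13s}: R_A = {(1.0 - Ad / A0) * 100.0:6.2f} %")
```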


6 Conclusions

Attenuation of chattering in SMC can be easily accomplished via the BL approximation. Compared with other methods, the BL approximation is straightforward, since to use it one just replaces the relay terms by a saturation function. Furthermore, under certain conditions and assumptions, it is possible to find the value of the BL parameter that minimizes the magnitude of chattering in the case when the actuator dynamics is known. Alternatively, using our analysis one can estimate the level of possible reduction of chattering obtained with the BL approximation, as well as the dependence of this level on the actuator time constant. The larger attenuation of chattering is observed in purely discontinuous controllers, such as the first-order SMC and the twisting algorithm. The controllers with continuous terms, such as the super-twisting, the super-twisting extension, and the nested algorithm (which has a continuous term in the argument of the sign function), show a smaller reduction of chattering. In the case of the super-twisting extension, the attenuation appears negligible. Overall, we have shown that when the time constant of the actuator dynamics is sufficiently small and certain characteristics of the disturbance are known, one can start with the design of an appropriate SMC ignoring the presence of the actuator dynamics. Knowledge of the range of values of the time constant and of the other parameters can then be used to find the amplitude and frequency characteristics of the chattering induced by the nontrivial actuator dynamics, both for the discontinuous controller and for a BL approximation of the SMC, characterized by a single parameter. This parameter can be chosen using the presented expressions for simple SMC, by numerically solving the derived nonlinear algebraic equations for the more sophisticated SMC, or via a potentially large number of experimental evaluations. The presented analysis should help to answer the question of whether the BL approximation is worth trying and to reduce the number of hardware experiments aimed at tuning the BL parameter.

Appendix: DF for Nested

Consider the nested controller (28) in a closed-loop system with a plant represented by the frequency response W(jω); see Fig. 17. Note that the closed-loop system in Fig. 17 can be divided into two blocks. The first one, B1, is the argument of the sign function in (28), i.e., the parallel connection of s with |σ|^{1/2} sign(σ). The second one, B2, is the cascade connection of the relay with the frequency characteristic W(jω). The block B1 is considered the nonlinear element of the system. Since the DF technique is a linear representation (harmonic linearization), the DF of the relay inside B2 can be used to obtain a linear element composed of the cascade of Nr(A) and W(jω), where Nr is the DF of the relay. Then, the block B2 can be considered the linear part of the system.


Fig. 17 Closed loop with nested controller

If B1 is a nonlinear part and B2 is a linear part, the DF technique can be applied to predict oscillations in the closed-loop system with the nested controller. First, the DF of the nonlinear element is computed as

$$ N_{para}(\omega, A) = \frac{\gamma}{A^{1/2}} + j\omega. \tag{43} $$

Then, the DF of the relay is computed as follows:

$$ N_r(A_{para}) = \frac{4\alpha}{\pi A_{para}}, \tag{44} $$

where α is the magnitude of the relay, and A_para is the amplitude of the input of the relay (or the amplitude of the output of the block B1). The amplitude A_para can be computed as

$$ A_{para} = A^{1/2}\sqrt{A\omega^2 + \gamma^2}, \tag{45} $$

where γ = 2Γ(5/4)/(π^{1/2}Γ(7/4)), Γ is the gamma function, and A is the amplitude of σ. Hence, the linear part of the system is

$$ W_{B2} = \frac{4\alpha}{\pi A_{para}}\,W(j\omega), \tag{46} $$

which satisfies the low-pass filter assumption required for the DF application. The HB equation for the closed-loop system in Fig. 17 is

$$ \frac{\gamma}{A^{1/2}} + j\omega = -\frac{1}{N_r(A_{para})\,W(j\omega)}; \tag{47} $$

then by solving this HB equation, the parameters of chattering can be estimated. Note that the HB Eq. (47) can be rewritten as

$$ \frac{\gamma}{A^{1/2}} + j\omega = -\frac{1}{N_r(A_{para})\,W(j\omega)}; $$
$$ \frac{4\alpha}{\pi A_{para}}\left(\frac{\gamma}{A^{1/2}} + j\omega\right) = -\frac{1}{W(j\omega)}; $$
$$ \frac{4\alpha}{\pi A^{1/2}\sqrt{A\omega^2 + \gamma^2}}\left(\frac{\gamma}{A^{1/2}} + j\omega\right) = -\frac{1}{W(j\omega)}, $$

where the left-hand side is the DF of the nested algorithm presented in Eq. (31). The application of the DF technique presented in this appendix appears justified since the closed-loop system satisfies the main assumption required to apply it, i.e., the low-pass filter property of the linear part. Following a similar procedure, the DF of the BL approximation in Eq. (35) can be obtained.

References 1. Artin, E., Butler, M.: The Gamma Function. Dover Books on Mathematics. Dover Publications (2015). https://books.google.se/books?id=U3MnBgAAQBAJ 2. Atherton, D.: Nonlinear Control Engineering-Describing Function Analysis and Desing. Van Nostrand Company Limited, UK (1975) 3. Boiko, I.: Discontinuous Control Systems. Frequency-Domain Analysis and Design. Birkhauser Boston, Boston (2009) 4. Boiko, I., Fridman, L.: Analysis of chattering in continuous sliding-mode controllers. IEEE Trans. Autom. Control 50(9), 1442–1446 (2005) 5. Boiko, I., Fridman, L., Castellanos, M.: Analysis of second-order sliding-mode algorithms in the frequency domain. IEEE Trans. Autom. Control 49(6), 946–950 (2004) 6. Boiko, I., Fridman, L., Pisano, A., Usai, E.: Analysis of chattering in systems with second-order sliding-modes. IEEE Trans. Autom. Control 52(11), 2085–2102 (2007) 7. Boiko, I.M.: Chattering in sliding mode control systems with boundary layer approximation of discontinuous control. Int. J. Syst. Sci. 44(6), 1126–1133 (2013) 8. Boiko, I.M.: On relative degree, chattering and fractal nature of parasitic dynamics in sliding mode control. J. Franklin Inst. 351(4), 1939–1952 (2014) 9. Bondarev, A., Bondarev, S., Kostyleva, N., Utkin, V.: Sliding modes in systems with asymptotic state observers. Autom. Remote Control 46, 679–684 (1985) 10. Burton, J., Zinober, A.: Continuous approximation of variable structure control. Int. J. Syst. Sci. 17, 875–888 (1986) 11. Castillo, I., Freidovich, L.B.: Describing-function-based analysis to tune parameters of chattering reducing approximations of sliding mode controllers. Control Eng. Pract. 95(104), 230 (2020) 12. Castillo, I., Steinberger, M., Fridman, L., Moreno, J.A., Horn, M.: Saturated super-twisting algorithm: Lyapunov based approach. In: 2016 14th International Workshop on Variable Structure Systems (VSS), pp. 269–273 (2016) 13. Drazenovic, B.: The invariance conditions in variable structure systems. Automatica 5(3), 287–295 (1969) 14. Filippov, A.: Differential Equations with Discontinuos Right-hand Sides. Kluwer Academic Publishers, Dordrecht, The Netherlands (1988) 15. Gelb, A., Velde, W.V.: Multiple-input describing functions and nonlinear system design. Mc Graw-Hill, New York (1968)


16. Golkani, M.A., Fridman, L.M., Koch, S., Reichhartinger, M., Horn, M.: Saturated continuous twisting algorithm. In: 2018 15th International Workshop on Variable Structure Systems (VSS), pp. 138–143 (2018). https://doi.org/10.1109/VSS.2018.8460449 17. Lee, H., Utkin, V.: Chattering suppression methods in sliding mode control systems. Ann. Rev. Control 31, 179–188 (2007) 18. Levant, A.: Sliding order and sliding accuracy in sliding mode control. Int. J. Control 58, 1247–1263 (1993) 19. Levant, A.: High-order sliding modes: differentiation and output-feedback control. Int. J. Control 76(9–10), 924–941 (2003) 20. Levant, A.: Principles of 2-sliding mode design. Automatica 43(4), 576–586 (2007) 21. Ángel Mercado-Uribe, J., Moreno, J.A.: Discontinuous integral action for arbitrary relative degree in sliding-mode control. Automatica 118(109), 018 (2020) https://doi. org/10.1016/j.automatica.2020.109018. https://www.sciencedirect.com/science/article/pii/ S0005109820302168 22. Moreno, J.A.: Discontinuous Integral Control for Systems with Relative Degree Two, pp. 187– 218. Springer International Publishing, Cham (2018) 23. Moreno, J.A., Cruz-Zavala, E., Mercado-Uribe, Á.: Discontinuous integral control for systems with arbitrary relative degree. In: Steinberger, M., Horn, M., Fridman, L. (eds.) VariableStructure Systems and Sliding-Mode Control: From Theory to Practice, pp. 29–69. Springer International Publishing, Cham (2020) 24. Pérez-Ventura, U., Fridman, L.: Design of super-twisting control gains: A describing function based methodology. Automatica 99, 175–180 (2019). https://doi.org/10.1016/j.automatica. 2018.10.023 25. Pérez-Ventura, U., Fridman, L.: When is it reasonable to implement the discontinuous slidingmode controllers instead of the continuous ones? frequency domain criteria. Int. J. Robust Nonlinear Control 29(3), 810–828 (2019) 26. Plestan, F., Shtessel, Y., Bregeault, V., Poznyak, A.: New methodologies for adaptive sliding mode control. Int. J. Control 83(9), 1907–1919 (2010) 27. Pérez-Ventura, U., Mendoza-Avila, J., Fridman, L.: Design of a proportional integral derivativelike continuous sliding mode controller. Int. J. Robust Nonlinear Control 31(9), 3439–3454 (2021). https://doi.org/10.1002/rnc.5412 28. Rosales, A., Castillo, I., Freidovich, L.: Analysis of higher order sliding mode controllers with boundary layer approximation. In: Conference on Decision and Control 2021. Austin, TX, USA (2021) 29. Seeber, R., Reichhartinger, M.: Conditioned super-twisting algorithm for systems with saturated control action. Automatica 116(108), 921 (2020) 30. Shtessel, Y., Edwars, C., Fridman, L., Levant, A.: Sliding Mode Control and Observation. Springer, New York (2014) 31. Shtessel, Y., YoungJu, L.: New approach to chattering analysis in systems with sliding modes. In: Proc. 35th IEEE Conf. Decision Control, pp. 790–795 (1996) 32. Slotine, J.J., Sastry, S.S.: Tracking control of non-linear systems using sliding surfaces, with application to robot manipulators. Int. J. Control 38(2), 465–492 (1983). https://doi.org/10. 1080/00207178308933088 33. Torres-González, V., Sanchez, T., Fridman, L.M., Moreno, J.A.: Design of continuous twisting algorithm. Automatica 80, 119–126 (2017) 34. Utkin, V.: Discussion aspects of high-order sliding mode control. IEEE Trans. Autom. Control 61(3), 829–833 (2016) 35. Utkin, V., Guldner, J., Shi, J.: Sliding Modes in Electromechanical Systems. Taylor and Francis, Boca Raton, FL (1999) 36. 
Utkin, V., Poznyak, A., Orlov, Y., Polyakov, A.: Conventional and high order sliding mode control. J. Franklin Inst. 357(15), 10244–10261 (2020) 37. Utkin, V., Poznyak, A., Orlov, Y.V., Polyakov, A.: Chattering Problem, pp. 73–82. Springer International Publishing, Cham (2020)

Applications of SMC

Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit via Continuous Sliding-Mode Algorithms Roberto Franco, Héctor Ríos, Alejandra Ferreira de Loza, Louis Cassany, David Gucik-Derigny, Jérôme Cieslak, and David Henry

Abstract This chapter proposes five continuous sliding-mode control algorithms to regulate blood glucose levels in critically ill patients affected with Type 1 Diabetes Mellitus. All the controllers deal with uncertainties from intrapatient and interpatient variability and external disturbances such as food intake. The proposed schemes do not require meal announcements or patient individualization. The blood glucose measurement and insulin infusion are intravenous. The control objective is to regulate the blood glucose in a normoglycemia range, i.e., 70–180 (mg/dl). The approach is validated in the UVA/Padova metabolic simulator for ten in silico adult patients. The results show excellent performance and minimal risk of hyperglycemia and hypoglycemia events for every sliding-mode algorithm. A quantitative analysis is carried out to shed light on the controller’s performance. To this aim, we consider several indexes regarding blood glucose regulation and the control signal energy. In addition, simulations using a standard (open-loop) protocol used at ICU highlight the workload alleviation obtained with any SMC.

Figures 13–18 and Tables 6–7 ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420. R. Franco · H. Ríos (B) Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, C.P. 27000 Torreón, Coahuila, México e-mail: [email protected] H. Ríos · A. Ferreira de Loza CONAHCYT IxM, C.P. 03940 Ciudad de México, México A. Ferreira de Loza Instituto Politécnico Nacional, C.P. 22435 Tijuana, Baja California, México L. Cassany · D. Gucik-Derigny · J. Cieslak · D. Henry University of Bordeaux, CNRS, Bordeaux INP, IMS, UMR 5218, 33400 Talence, France © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_15




1 Introduction Type 1 Diabetes Mellitus (T1DM) is a condition caused by insufficient insulin production in the pancreas. This lack of insulin produces hyperglycemia, i.e., an increase in the blood glucose levels. Patients affected by T1DM are dependent on daily exogenous insulin administration. Blood glucose regulation is critical in hospitalized patients affected by T1DM, especially those at intensive care unit (ICU). There exist different factors that can exacerbate hyperglycemia in the ICU, for example: medications, high nutrition levels, loss of insulin sensitivity, and stress, among many others. The occurrence of hyperglycemia leads to medical complications such as cardiovascular and microvascular diseases, long-term treatment at the ICU, wound infections, rising healthcare costs, and adverse outcomes including death [1]. Evidence strictly supports that effective blood glucose regulation can reduce considerably the morbidity and mortality reducing the ICU cost [2]. Based on the expertise of nutritionists and doctors, the blood glucose control at ICU has traditionally been performed using open-loop protocols. Open-loop methods often rely on paper-based or computer-based protocols implemented in the form of written instructions [3]. Open-loop protocols approaches are unrealistic, these methods assume that all the patient cohort has the same response under some fixed doses leading to risk events. In this sense, in case of hypo- and hyperglycemia events, the ICU staff must participate to stabilize the patient. Due to this recurring participation, the ICU staff has a high workload, which is impractical. Moreover, in the worstcase scenario, diabetes is a serious comorbidity in viral pandemics and an extreme attention is mandatory [4]. Significant research has been dedicated to design autonomous blood glucose regulation approaches in ICU therapy. Unlike open-loop approaches, autonomous approaches depend on feedback controllers. Any feedback controller must deal with the interpatient and intrapatient variability. The sensitivity to the insulin, the time corresponding to the T1DM condition of the patient, and the physiological characteristics are defined as the interpatient variability. On the other hand, due to the circadian variations in insulin sensitivity, the food intake, medication, among others, the physiological condition of the patient varies with respect to time, this is known as the intrapatient variability [5]. Several approaches related to feedback control that solve the blood glucose regulation problem in critically ill patients have been designed based on proportional integral derivative (PID) control [6], the stochastic targeted (STAR) control algorithm [7], and model predictive control (MPC) approaches (see [8, 9]). Particularly, in some hospitals, the algorithms proposed in [6, 7] are the standard of care. The PID control aim is to behave like the multiphase insulin response of β-cells in healthy patients [6]. Nevertheless, the disturbances that affect patients with T1DM are extensive, in this sense, the PID can only cope with a limited class of disturbances. On the other hand, most of the algorithms in the literature need patient individualization and meal announcement, for instance, [8, 9]. The previous shortcomings also appear in



MPC algorithms such as [10, 11]. Moreover, most of the MPC algorithms require a precise model that, given the complexity of glucose–insulin dynamics and the large patient variability, is not easily affordable. In summary, the controllers need to consider different factors such as interpatient, intrapatient variability, and unannounced meal intake that can be seen as disturbances. Moreover, the controllers must ensure good performance, safety, and robustness. Sliding-mode control (SMC) theory is interesting due to its insensitivity to some class of disturbances and parameter uncertainties [12]. In the frame of glucose regulation, [13, 14] consider the super-twisting controller, whereas [15] applies a quasi-continuous controller to regulate blood glucose in T1DM patients. Recently, in [16], a terminal sliding-mode controller is proposed, while [17] proposes a continuous twisting algorithm. Particularly, [15, 16] are discontinuous controllers, and then, the chattering effect may occur. Chattering refers to undesired oscillations of finite frequency and amplitude. Continuous SMC algorithms are an alternative to alleviate the chattering phenomena while counteracting Lipschitz disturbances. The super-twisting algorithm (STA) [19] launched the development of subsequent continuous SMC algorithms. Recent strategies comprise the continuous twisting algorithm (CTA) [20], continuous singular terminal sliding-mode algorithm (CSTA) [21], continuous nonsingular terminal algorithm (CNTA) [22], and output-feedback continuous twisting algorithm (OFCTA) [23]. The manuscript is devoted to designing continuous SMC algorithms to safely and effectively manage the blood glucose concentration in critically ill patients. The strategies considered here are the STA, CTA, CSTA, CNTA, and OFCTA. All the previously listed approaches involve a finite-time sliding-mode differentiator in their design [19]. The control algorithms are validated in silico to verify its feasibility. To this aim, we use the adult patient cohort of the high-fidelity metabolic simulator UVA/Padova, approved by the US Federal and Drug Administration (FDA) [26]. In this manuscript, a challenging scenario of three meals with high-carbohydrate amounts serves as a testbed for the proposed controllers. Moreover, the results of each control algorithm are contrasted using several performance indexes such as the RMS mean control action, the average time in hypo-, normo-, and hyperglycemia, and the average risk of low and high blood glucose indexes. In addition, an open-loop simulation is carried out using a standard protocol at ICU, to highlight the workload attenuation. The document’s contribution is three-fold: (1) Opposite to standard glucose regulation protocols at ICU, which demand ICU staff intervention, the proposed algorithms work autonomously. (2) Unlike some strategies that demand patient individualization, the controllers proposed here are robust against patient variability, thus every algorithm can be applied to the whole patient cohort. (3) A quantitative analysis concerning the performance of each controller sheds light on which controller achieves the minimal risks. The chapter structure is as follows. Section 2 shows the problem statement. Section 3 presents the preliminaries. Section 4 summarizes the controllers design. Section 5 introduces the in silico simulation results. Finally, Sect. 6 resumes the conclusions. All the proofs are postponed to the Appendix.



Notation. For any x ∈ R, the symbol |x| denotes its absolute value. If x ∈ Rⁿ, then |x| stands for the Euclidean norm. For x, y ∈ R, the function max(x, y) returns the maximum value between the arguments x, y. Define the function ⌈a⌋^α := |a|^α sign(a) for any a ∈ R and any α ≥ 0; according to this, ⌈a⌋^0 = sign(a).

2 Problem Statement In the literature, there are several insulin–glucose models. Bergman’s minimal model is commonly employed to design insulin delivery algorithms using the intravenous route. In this regard, the problem statement presented in the sequel is standard to previously published works by the authors concerned with robust blood glucose regulation, i.e., [17, 18]. Nonetheless, the chapter deals with a broader study involving the design of several high-order sliding-modes controllers and their performance evaluation. In this chapter, we use the Bergman Minimal Model (BMM) to synthesize robust controllers. The BMM is a simplified model of the T1DM patient dynamics that has no physiological value. However, the BMM can be used to synthesize a controller to stabilize the glucose concentration in the blood plasma in hospitalized patients [29]. The BMM is represented as follows [30]: x˙1 = − p1 (x1 − G b ) − x1 x2 + d(t), x˙2 = − p2 x2 + p3 (x3 − Ib ),

(1a) (1b)

x˙3 = −n(x3 − Ib ) + γ max(0, x1 − h)t + u, y = x1 ,

(1c) (1d)

where x1 is the concentration of glucose in the blood plasma (mg/dl); the effect of insulin on the net glucose disappearance is denoted by x2 (1/min); the insulin concentration in the blood plasma at time t is given by x3 (μU/ml). The insulin infusion rate corresponds to the control input u; the basal pre-injection level of glucose is denoted by G b (mg/dl); the basal pre-injection level of insulin is given by Ib (μU/ml); the insulin-independent rate constant of glucose uptake in liver and muscles is denoted by p1 (1/min); the rate for the decrease in tissue glucose uptake ability is given by p2 (1/min); the insulin-dependent increase in glucose uptake ability in tissue per unit of insulin concentration above the basal level is denoted by p3 [(μU/ml)/min2 ]; the first-order decay rate for insulin in blood is given by n(1/min); the threshold value of glucose above which the pancreatic β cells release insulin is represented by h (mg/dl); the rate of the pancreatic β cells’ release of insulin after the glucose injection with glucose concentration above the threshold is given by γ [(μU dl)/(ml min2 mg)] and t = 0 is the initial value where the glucose is in the blood plasma. In this sense, x1 , x2 , and x3 ∈ R+ are the state variables. The term, max(0, x1 − h), in (1c), is an internal function responsible for regulating the secretion of insulin in the body. However, this function does not exist in patients



affected by T1DM. Moreover, in [31], it is shown that the parameter p1 can be almost zero for diabetic patients. The function d(t) models the glucose absorption from the intestine due to food intake. In the case of diabetic patients, the regulatory insulin system does not work properly, thus, the function d(t) is considered as a disturbance, see, e.g., [30], and [36]. Nevertheless, d(t) depends on the food intake and can be considered vanishing, see [32]. That is, (2) d(t) = b1 e−b2 t , where b1 , b2 are some positive constants, and d(t) is in (mg min/dl). For the sake of brevity, in what follows d(t) is plainly written as d. This chapter aims to design robust control laws u to counteract the disturbance effects produced by the interpatient, and intrapatient variability acting as parameter uncertainties, as well as unannounced food intake d in the system given by (2). The objective is to stabilize the blood glucose concentration at the desired level G b , and keep it between 70 and 180 (mg/dl) normoglycemic range. Moreover, the settling time should be not less than 2 hours, and avoiding the undershoots of hypoglycemia, i.e., x1 ≤ 50 (mg/dl), for all t ≥ 0. System (1) possesses the following properties: Property 1 For u ≡ 0 and for any d satisfying (2), the origin of system (1) is uniformly stable. Property 1, follows straightforwardly from the analysis of (1). First, consider (1c), note that for T1DM patients γ max(0, x1 − h)t = 0, then, x˙3 = −n[x3 − Ib ], therefore, x3 = x3 (0)e−nt + Ib (1 − e−nt ), that implies |x3 | ≤ |x3 (0)| + Ib = x¯3 , for all t ≥ 0. Now, it is given from (1b) that |x2 | ≤ |x2 (0)| + pp23 |x¯3 − Ib | = x¯2 , for all t ≥ 0. Finally, (1a) yields |x1 | ≤ |x1 (0)| + pp11+Gx¯b2 + | p1 +bx¯12 −b2 | = x¯1 , for all t ≥ 0. Then, in accordance with Definition 1, given u ≡ 0, the system (1) is uniformly stable. Property 2 The relative degree of the tracking error e1 = y − G b ,

(3)

with respect to the input u, is equal to three. Property 2 follows from differentiating successively e1 in (3) and considering (2), which result as follows: ... e 1 = φ(t, x) − p3 x1 u, (4) where



φ(t, x) = x1 [− p1 ( p12 + 3 p3 Ib ) − p3 Ib ( p2 + n)] + d˙ + x2 [− p22 (1 + G b ) + p1 p2 (2G b − 1) + 2d( p1 + p2 )] + x3 [−2 p3 ( p1 + d)] + x1 x2 [−( p1 + p2 )2 − 3 p3 Ib ] + x1 x3 [ p3 (3 p1 + p2 + n)] + x1 x22 [−3( p1 + p2 )] + x22 ( p1 G b + d) + 3 p3 x1 x2 x3 − x1 x23 + d¨ + ( p1 G b + d)( p12 + 2 p3 Ib ). (5) Note that the relative degree is well-defined since p3 x1 = 0, for all t ≥ 0. Property 3 The term φ in (5) is Lipschitz continuous and uniformly bounded, i.e., |φ(t, x)| ≤ ψ,

  d   φ(t, x) ≤ ρ,  dt 

(6)

for all t ≥ 0 and x ∈ R3 , with ψ and ρ as some known positive constants, respectively. Property 3 can be guaranteed due to the insulin infusion rate, which is limited by the practical framework, i.e., |u| ˙ < l, with l as a positive constant, the uniform stability properties of system (1), and taking into account that d is also bounded.
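Before moving on, the following sketch makes the model concrete by integrating the BMM (1) with an explicit Euler scheme for a T1DM patient, i.e., with the pancreatic term γ max(0, x1 − h)t removed and with the meal disturbance modeled as the decaying exponential d(t) = b1 e^{−b2 t} of Eq. (2). All numerical parameter values are illustrative placeholders and are not the values used in the in silico study later in the chapter.

```python
# Sketch: Euler integration of the Bergman Minimal Model (1) for a T1DM patient
# (gamma*max(0, x1-h)*t = 0). All parameter values below are illustrative placeholders.
import numpy as np

p1, p2, p3 = 0.0, 0.025, 1.3e-5      # assumed rate constants (p1 ~ 0 for T1DM patients)
n, Gb, Ib  = 0.09, 120.0, 7.0        # insulin decay rate, basal glucose and basal insulin
b1, b2     = 3.0, 0.05               # meal disturbance d(t) = b1*exp(-b2*t), Eq. (2)

def bmm_step(x, u, t, h):
    x1, x2, x3 = x
    d   = b1 * np.exp(-b2 * t)
    dx1 = -p1 * (x1 - Gb) - x1 * x2 + d
    dx2 = -p2 * x2 + p3 * (x3 - Ib)
    dx3 = -n * (x3 - Ib) + u
    return x + h * np.array([dx1, dx2, dx3])

x, h = np.array([150.0, 0.0, Ib]), 0.01      # initial glucose of 150 mg/dl; time in minutes
for k in range(int(240 / h)):                # 4 hours, open loop (u = 0)
    x = bmm_step(x, u=0.0, t=k * h, h=h)
print("glucose after 4 h in open loop:", round(x[0], 1), "mg/dl")
```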

3 Preliminaries Let us consider the system x˙ = f (t, x),

(7)

where the state is given by x ∈ Rn , and f : R≥0 × Rn → Rn ensures forward existence and uniqueness of solutions at least locally in time, f (t, 0) = 0, for all t ≥ 0. Consider the initial state x(t0 ) = x0 , with t0 ∈ R as the initial time; thus, the solution is denoted as x(t, t0 , x0 ) for any t ≥ t0 for which the solution exists. Let D be an open neighborhood of the origin in Rn . Then, let us introduce the following stability properties (for more details see [32]). Definition 1 At the steady state x = 0, the system (7), is said to be: 1. Uniformly Stable if for any ε > 0, exists δ(ε) > 0 such that for any x0 ∈ D with any t0 ∈ R, if |x0 | ≤ δ(ε) then |x(t, t0 , x0 )| ≤ ε, for all t ≥ t0 and any t0 ∈ R. 2. Uniformly Asymptotically Stable if it is uniformly stable in D and for any t0 ∈ R, κ > 0 and ε > 0, there exists T (κ, ε) ≥ 0 such that for any x0 ∈ D, i f |x0 | ≤ κ, then |x(t, t0 , x0 , 0)| ≤ ε, for all t ≥ t0 + T (κ, ε).



3. Uniformly Finite-Time Stable if it is uniformly stable and uniformly finite-time converging from D, i.e., for any x0 ∈ D with any t0 ∈ R, there exists 0 ≤ T < +∞ such that x(t, t0 , x0 ) = 0, for all t ≥ T . The function T0 (t0 , x0 ) = inf{T ≥ 0 : x(t, t0 , x0 ) = 0, ∀t ≥ T } is called the settling time of the system. If D = Rn , at steady state x = 0, system (7) is said to be Globally Uniformly Stable (GUS), Globally Uniformly Asymptotically Stable (GUAS), and Globally Uniformly Finite-Time Stable (GUFTS), respectively.

4 Intravenous Continuous Controllers Based on High-Order Sliding Modes

Defining the regulation error as in (3), and according to the insulin–glucose dynamics (1), the regulation error dynamics can be written as follows:

$$ \dot{e}_1 = e_2, \tag{8a} $$
$$ \dot{e}_2 = e_3, \tag{8b} $$
$$ \dot{e}_3 = \phi(x, t) - p_3 x_1 u, \tag{8c} $$

where the measured output corresponds to e1, and φ(x, t) is defined in (5). In what follows, the control input u is designed exploiting five different robust control strategies to drive the trajectories of system (8) to zero. Five continuous SMC approaches are used to regulate the blood glucose level for patients affected by T1DM:
1. Super-Twisting Algorithm (STA).
2. Continuous Twisting Algorithm (CTA).
3. Continuous Singular Terminal Algorithm (CSTA).
4. Continuous Nonsingular Terminal Algorithm (CNTA).
5. Output-Feedback Continuous Twisting Algorithm (OFCTA).

Property 2 states that system (1), and correspondingly (8), has relative degree three of the output e1 with respect to the control input u. The controllers mentioned above tackle disturbed systems with different relative degrees: the STA deals with a relative degree equal to one, whereas the CTA, CSTA, and CNTA deal with a relative degree equal to two, and the OFCTA deals with a relative degree equal to three disturbed systems. Therefore, the sliding surface for each algorithm is designed accordingly.

4.1 Super-Twisting Algorithm

Let us proceed with the synthesis of the STA [33], designing a relative degree one sliding surface

$$ s = c_1 e_1 + c_2 e_2 + e_3, \tag{9} $$

with c1, c2 ∈ R+; differentiating (9) yields ṡ = c1 e2 + c2 e3 + φ(t, x) − p3 x1 u. Then, the controller is designed as follows [33]:

$$ u = -\frac{1}{p_3 x_1}\left(-k_1\lceil s\rfloor^{\frac{1}{2}} + v - c_1 e_2 - c_2 e_3\right), \tag{10a} $$
$$ \dot{v} = -k_2\lceil s\rfloor^{0}. \tag{10b} $$

The main result for the STA is given by the following theorem.

Theorem 1 [33]. Let the STA (10), with the sliding surface defined by (9), be applied to system (8), with |dφ(t, x)/dt| ≤ ρ. Thus, selecting the control gains as k1 = 1.5ρ^{1/2} and k2 = 1.1ρ, with c1, c2 ∈ R+, (e1, e2, e3) = 0 is GUAS.

It is worth mentioning that alternative selections for the STA gains k1 and k2 exist in the literature; see, for instance, [34] and the references therein. The closed-loop stability analysis for the STA and the remaining controllers is postponed to the Appendix.
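A direct transcription of the surface (9) and the control law (10) into discrete-time code is sketched below. The error derivatives e2 and e3 would in practice come from the observer of Sect. 4.6, p3 x1 must be strictly positive as guaranteed by the model properties, and the gain values passed to the class are placeholders rather than the tuned values of Table 1.

```python
# Sketch: the STA control law (9)-(10) as a discrete-time update (placeholder gains).
import numpy as np

def sgn_pow(a, alpha):
    """|a|^alpha * sign(a), the bracket notation used in the chapter."""
    return np.abs(a) ** alpha * np.sign(a)

class STAController:
    def __init__(self, k1, k2, c1=1.0, c2=1.0):
        self.k1, self.k2, self.c1, self.c2 = k1, k2, c1, c2
        self.v = 0.0                                    # integral state of (10b)

    def update(self, e1, e2, e3, p3x1, h):
        s = self.c1 * e1 + self.c2 * e2 + e3            # sliding surface (9)
        u = -(-self.k1 * sgn_pow(s, 0.5) + self.v
              - self.c1 * e2 - self.c2 * e3) / p3x1     # Eq. (10a)
        self.v += h * (-self.k2 * sgn_pow(s, 0.0))      # Eq. (10b), explicit Euler
        return u
```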

4.2 Continuous Twisting Algorithm Now, the synthesis of the CTA [20] is presented. Consider the sliding surface s1 = c1 e1 + e2 ,

(11)

which has a relative degree equal to two, where c1 ∈ R+ . The sliding surface dynamics is given by s˙1 = s2 , s˙2 = c1 e3 + φ(x, t) − p3 x1 u.

(12a) (12b)

Then, the controller is designed as follows [20]:

$$ u = -\frac{1}{p_3 x_1}\left(-k_1\lceil s_1\rfloor^{\frac{1}{3}} - k_2\lceil s_2\rfloor^{\frac{1}{2}} - c_1 e_3 + v\right), \tag{13a} $$
$$ \dot{v} = -k_3\lceil s_1\rfloor^{0} - k_4\lceil s_2\rfloor^{0}. \tag{13b} $$

The main result for the CTA is given by the following theorem.



Theorem 2 [20]. Let the CTA (13), with the sliding surface defined by (11), be applied to system (8), with |dφ(t, x)/dt| ≤ ρ. Thus, selecting the control gains as k1 = 25ρ^{2/3}, k2 = 15ρ^{1/2}, k3 = 2.3ρ and k4 = 1.1ρ, with c1 ∈ R+, (e1, e2, e3) = 0 is GUAS.

4.3 Continuous Singular Terminal Algorithm

Proceeding with the synthesis of the CSTA [21], consider the sliding surface given in (11), whose dynamics is given by (12). Then, the controller is designed as follows [21]:

$$ s = s_2 + c_1\lceil s_1\rfloor^{\frac{2}{3}}, \tag{14a} $$
$$ u = -\frac{1}{p_3 x_1}\left(-k_1\lceil s\rfloor^{\frac{1}{2}} - c_1 e_3 + v\right), \tag{14b} $$
$$ \dot{v} = -k_2\lceil s\rfloor^{0}. \tag{14c} $$

The main result for the CSTA is given by the following theorem.

Theorem 3 [21]. Let the CSTA (14b)–(14c), with the sliding surface defined by (14a), be applied to system (8), with |dφ(t, x)/dt| ≤ ρ. Thus, selecting the control gains as k1 = 1.5ρ^{1/2} and k2 = 1.1ρ, with c1 ∈ R+, (e1, e2, e3) = 0 is GUAS.

4.4 Continuous Nonsingular Terminal Algorithm

The synthesis of the CNTA [22] is introduced. Consider the sliding surface given in (11), whose dynamics is given by (12). Then, the controller is designed as follows [22]:

$$ s = s_1 + c_1\lceil s_2\rfloor^{\frac{3}{2}}, \tag{15a} $$
$$ u = -\frac{1}{p_3 x_1}\left(-k_1\lceil s\rfloor^{\frac{1}{3}} - c_1 e_3 + v\right), \tag{15b} $$
$$ \dot{v} = -k_2\lceil s\rfloor^{0}. \tag{15c} $$

The following theorem gives the main result for the CNTA.

Theorem 4 [22]. Let the CNTA (15b)–(15c), with the sliding surface defined by (15a), be applied to system (8) with |dφ(t, x)/dt| ≤ ρ. Thus, selecting the control gains as k1 = 4.4ρ^{2/3} and k2 = 2.5ρ, with c1 = 20ρ^{−1/2}, (e1, e2, e3) = 0 is GUAS.



4.5 Output-Feedback Continuous Twisting Algorithm

Finally, the OFCTA synthesis is presented. Consider the OFCTA [23] given by

$$ u = -\frac{1}{p_3 x_1}\left(k_1 L^{\frac{3}{4}}\lceil e_1\rfloor^{\frac{1}{4}} + k_2 L^{\frac{2}{3}}\lceil \hat{e}_2\rfloor^{\frac{1}{3}} + k_3 L^{\frac{1}{2}}\lceil \hat{e}_3\rfloor^{\frac{1}{2}} - \eta\right), \tag{16a} $$
$$ \dot{\eta} = -k_4 L\lceil e_1\rfloor^{0} - k_5 L\lceil \hat{e}_2\rfloor^{0} - k_6 L\lceil \hat{e}_3\rfloor^{0}, \tag{16b} $$
$$ \dot{\hat{e}}_1 = -\lambda_1 H^{\frac{1}{4}}\lceil \hat{e}_1 - e_1\rfloor^{\frac{3}{4}} + \hat{e}_2, \tag{16c} $$
$$ \dot{\hat{e}}_2 = -\lambda_2 H^{\frac{1}{2}}\lceil \hat{e}_1 - e_1\rfloor^{\frac{1}{2}} + \hat{e}_3, \tag{16d} $$
$$ \dot{\hat{e}}_3 = -\lambda_3 H^{\frac{3}{4}}\lceil \hat{e}_1 - e_1\rfloor^{\frac{1}{4}} + \varepsilon - K_m u, \tag{16e} $$
$$ \dot{\varepsilon} = -\lambda_4 H\lceil \hat{e}_1 - e_1\rfloor^{0}, \tag{16f} $$

where eˆ1 , eˆ2 , eˆ3 , η, and ε are some auxiliary variables. Subsystem (16a)–(16b) corresponds to the controller, whereas (16c)–(16f) corresponds to the observer. The parameters ki ∈ R, i = 1, 6, are the controller gains; the parameters λ j ∈ R+ , j = 1, 4 are the observer gains, while the constants L , H ∈ R+ are scaling factors. The following theorem states the convergence properties of the OFCTA. Theorem 5 [17]. Let the OFCTA (16) be applied to the system (8) with |dφ(t, x)/dt| ≤ ρ. For any ρ ≥ 0, there exist controller gains k1 , k2 , k3 , k4 ∈ R+ , k5 , k6 ∈ R, satisfying k4 > |k5 | + |k6 | + ρ; observer gains λ j ∈ R+ , j = 1, 4; and constants L min , Hmin ∈ R+ ; such that for any L > L min and H > Hmin , (e1 , e2 , e3 ) = 0 is GUFTS.

4.6 Observer Design

The sliding-mode controllers STA, CTA, CSTA, and CNTA require the whole state to be available for measurement, i.e., e1, e2, and e3. Generally, in the glucose regulation problem for patients affected by T1DM, only the glucose concentration in the blood plasma is available, i.e., x1, which can be used to obtain e1. Thus, an observer is designed in order to obtain an estimate of e2 and e3. The observer has the following structure:

$$ \dot{\hat{e}}_1 = -\lambda_1 H^{\frac{1}{4}}\lceil \hat{e}_1 - e_1\rfloor^{\frac{3}{4}} + \hat{e}_2, \tag{17a} $$
$$ \dot{\hat{e}}_2 = -\lambda_2 H^{\frac{1}{2}}\lceil \hat{e}_1 - e_1\rfloor^{\frac{1}{2}} + \hat{e}_3, \tag{17b} $$
$$ \dot{\hat{e}}_3 = -\lambda_3 H^{\frac{3}{4}}\lceil \hat{e}_1 - e_1\rfloor^{\frac{1}{4}} + \varepsilon - K_m u, \tag{17c} $$
$$ \dot{\varepsilon} = -\lambda_4 H\lceil \hat{e}_1 - e_1\rfloor^{0}, \tag{17d} $$



where λ j ∈ R+ , j = 1, 4, are the observer gains, the constant H ∈ R+ is a scaling factor, and it is assumed that 0 < K m ≤ p3 x1 ≤ K M for some known positive constants K m and K M . The observer is a particular case of the filtering observer presented in [24], and it is based on the Levant’s differentiator first proposed in [19]. The following theorem states the convergence properties of the observer. Theorem 6 [19]. Let the observer (17) be applied to the system (8), with |dφ(t, x)/dt| ≤ ρ. Thus, there exist gains λi > 0, i = 1, 4, and a constant Hmin > 0, such that, for any scaling factor H > Hmin , the states of system (8) are estimated exactly and in finite time. The above result states that, after a finite time, the identities eˆ1 ≡ e1 , eˆ2 ≡ e2 , eˆ3 ≡ e3 ,

(18)

are certainly obtained. Remark 1 The observer dynamics in (17), determined by the sensor sampling time, is faster than the insulin–glucose dynamics in (1). Therefore, a practical approach, for satisfying the separation principle, is switching on the controller until the observer transient ends. Formal stability analysis of the observer–controller scheme is developed in [27] using homogeneity and input-to-state stability tools. Remark 2 The equalities in (18) are attained theoretically. Still, if the output e1 is affected by a measurement noise uniformly bounded |n(t)| < n; ¯ then the observer convergence is only practical, i.e., the observation error trajectories converge in finite time to a region around the origin [19]. The next section illustrates the efficiency of the proposed schemes in a high-fidelity simulator approved by the US FDA for pre-clinical test.
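For completeness, a minimal Euler discretization of the observer (17) is sketched below. It only illustrates the structure of (17a)–(17d): the sampling time, the value of Km, and the gains are placeholders, and in practice the discretization of such differentiators deserves more care.

```python
# Sketch: Euler discretization of the finite-time observer (17) (illustrative only).
import numpy as np

def sgn_pow(a, alpha):
    return np.abs(a) ** alpha * np.sign(a)

class HOSMObserver:
    def __init__(self, H=200.0, lambdas=(4.5, 2.4, 0.8, 0.1), Km=1e-5):
        self.l1, self.l2, self.l3, self.l4 = lambdas
        self.H, self.Km = H, Km
        self.e1h = self.e2h = self.e3h = self.eps = 0.0   # estimates and auxiliary state

    def update(self, e1, u, h):
        err = self.e1h - e1
        d1 = -self.l1 * self.H ** 0.25 * sgn_pow(err, 0.75) + self.e2h                 # (17a)
        d2 = -self.l2 * self.H ** 0.50 * sgn_pow(err, 0.50) + self.e3h                 # (17b)
        d3 = -self.l3 * self.H ** 0.75 * sgn_pow(err, 0.25) + self.eps - self.Km * u   # (17c)
        d4 = -self.l4 * self.H * sgn_pow(err, 0.0)                                     # (17d)
        self.e1h += h * d1
        self.e2h += h * d2
        self.e3h += h * d3
        self.eps += h * d4
        return self.e2h, self.e3h          # estimates of e2 and e3 fed to the controllers
```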

5 In Silico Simulation Results In this section, the SM approaches are applied to a high-fidelity metabolic simulator called UVA/Padova T1DMS v3.2 provided by The Epsilon Group [26]. The simulator patient model can be seen at [28]. In this chapter, the glucose measurement and the insulin infusion are intravenous. The algorithms were applied in a cohort of ten virtual adult patients. The simulations consider one closed-loop scenario with three meals. Each meal corresponds to a single intake [35]. The sampling rate of the blood glucose sensor is 1 (min), whereas the sampling rate for the proposed controllers is 10 (ms). The gain selection is given in Table 1. Practical criterion to tune the controllers. The knowledge of the upper bound of the disturbance φ(x, t) in (6) is mandatory to set the gains of every control strategy. Such a bound can be estimated offline using blood glucose measurements and insulin



Table 1 Gain selection of the sliding-mode controllers and observer parameters Controller

k1

k2

STA

1.5ρ 1/2

1.1 × 10−4 ρ

CTA

25ρ 2/3

15ρ 1/2

CSTA

1.5ρ 1/2

CNTA

4.4ρ 2/3

OFCTA

2.3ρ 3/4

3.2ρ 2/3

3ρ 1/2

4 × 10−5 ρ

0

Observer

λ1 = 4.5

λ2 = 2.4

λ3 = 0.8

λ4 = 0.1

H = 200

k3 2.3 × 10−4 ρ

k4

k5

k6

1.1 × 10−4 ρ

c1

c2

1

1

ρ

1000

1

1000

1.1 × 10−4 ρ

1

1000

2.5 × 10−4 ρ

0.64 0

1000 1000

infusion data, i.e., x1 and u, from a standard patient. Here, we apply the criteria proposed in [17]. (1) Consider a dataset of output and input data, i.e., u and e1 . Apply HOSM observer (17) using λ1 = 4.5, λ2 = 2.4, λ3 = 0.8, λ4 = 0.1, and the observer gain is arbitrary selected as H > 0. (2) Check |e1 − eˆ1 | ≤ γ1 H h 3 , for all t ∈ [0, γ2 h], where h is the sampling time, and γ1 , γ2 are some positive constants. Adjust the value of the observer gain H until the inequality holds. (3) Compute ρ = λ4 H , and fix the controllers gains accordingly. The control gains were chosen once and for all the adult patient cohort. The controller does not demand meal announcement. The setpoint for the controller is given by 120 (mg/dl) and the initial condition is 150 (mg/dl). The controller is running from the start of the experiment, i.e., t = 0 (hrs). From a generic insulin pump, the insulin is delivered intravenously. The performance of the controller is measured in terms of its ability to keep glucose levels in the normoglycemia range of 70–180 (mg/dl). All the controllers are tested using the same scenario, which is described as follows: Three-meal closed-loop scenario. The behavior of the patient is examined for 24 (hrs) with three food intakes. The patient eats 45, 70, and 70 (gr) of carbohydrates at 7, 12, and 18 (hrs), respectively. All the examination is in closed-loop using each of the controllers. Performance indexes. The following metrics, taken from [37], are considered to evaluate the controllers performance: • Low Blood Glucose Index (LBGI). The LBGI is a measure of the frequency and extent of low blood glucose readings. LBGI sheds light on hypoglycemia risk: Minimal-risk (LBGI < 1.1) and High-risk (LBGI > 5). • High Blood Glucose Index (LBGI). The HBGI provides a measure of the frequency and extent of high blood glucose readings. HBGI sheds light on hyperglycemia risk: Minimal-risk (HBGI < 5) and High-risk (HBGI > 15). • Percent time in Hypoglycemia level, i.e., the average time in which the patient’s blood glucose level is less than 70 (mg/dl).

Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit …

397

• Percent time in the normoglycemia level, i.e., the average time in which the patient’s blood glucose level remains between 70 and 180 (mg/dl). • Percent time in the hyperglycemia level, i.e., the percentage of time in which the patient’s blood glucose level is higher than 180 (mg/dl). For comparison purposes, the Portland protocol (for more details, please see Appendix A in [3]), which is a standard open-loop algorithm used in the ICU for diabetic patients, is proposed to highlight the workload mitigation and the advantages of the controllers.

5.1 Super-Twisting Algorithm This subsection presents the performance of the STA concerning the blood glucose regulation problem. Figures 1, 2, 3, and Table 2 summarize the results for ten in silico adult patients. Figure 1 depicts the average value of the blood glucose and its ±1 STD, and the hypoglycemia and hyperglycemia ranges. Figure 2 depicts the mean control action represented as the insulin infusion rate for the adult cohort. The control action is continuous and is capable to deal with the food intake. Figure 3 shows the control variability grid array (CVGA) (for more details, see [38]). The STA drives 20% of the subjects to the A-zone, corresponding to excellent performance, whereas the remaining 80% are in the Upper B-zone, corresponding to benign deviations into the hyperglycemia. Table 2 indicators evidence that the STA does not present any risk of hyperglycemia and hypoglycemia, i.e., HBGI < 5.5 and LBGI < 1.1 for the adult cohort. Moreover, the mean blood glucose level and the standard deviation remain in the normoglycemia band most of the time, even in the presence of unannounced food intake and interpatient variability.

5.2 Continuous Twisting Algorithm This subsection presents the performance of the CTA concerning the blood glucose regulation problem. Figures 4, 5, 6 and Table 3 summarize the results for ten in silico adult patients. Figure 4 depicts the average value of the blood glucose and its ±1 STD, and the hypoglycemia and hyperglycemia ranges. Figure 5 depicts the mean control action represented as the insulin infusion rate for the adult patient cohort. The control signal is continuous and is capable to deal with the food intake. Figure 6 shows the CVGA. The CTA drives 10% of the subjects to the A-zone, whereas the remaining 90% are in the Upper B-zone.

398

R. Franco et al.

400 350 300 250 200 150 100 50 0

45 (gr)

0

6 7

70 (gr)

12

70 (gr)

18

24

Fig. 1 Average blood glucose concentration for the adult patient cohort using the STA. The ±1 STD values are given by the filled area

0.1 0.08 0.06 0.04 0.02 0

0

6

12

18

24

Fig. 2 Average insulin infusion rate for the adult patient cohort using the STA

Table 3 indicators evidence that the CTA does not present any risk of hyperglycemia and hypoglycemia. Moreover, the mean blood glucose level and the standard deviation remain in the normoglycemia band most of the time, even in the presence of unannounced food intake and interpatient variability.

upper 95% confidence bound (mg/dl)

Blood Glucose Regulation for Type 1 Diabetic Patients at Intensive Care Unit …

399

A zone 20%, B zone 80%, C zone 0%, D zone 0%E zone 0%

400

300

180

110 110

90

70

50

lower 95% confidence bound (mg/dl)

Fig. 3 CVGA for the adult patient cohort using the STA Table 2 Performance indicators of the STA. The last row refers to the mean values of the adult cohort Adult patients Time in range [%] Risk index Patient Mean blood 180 LBGI HBGI glucose 1 2 3 4 5 6 7 8 9 10 Mean

125.23 128.74 128.44 124.98 123.70 133.69 128.96 131.08 123.90 122.52 127.12

0 0 0 0 0 0 0 0 0 0 0

96.95 95.91 96.67 95.77 98.20 87.79 94.66 91.60 96.88 100 95.44

3.05 4.09 3.33 4.23 1.80 12.21 5.34 8.40 3.12 0 4.56

0 0 0.02 0.02 0.01 0 0 0 0.01 0.02 0.008

0.67 0.89 1.11 0.85 0.83 1.88 0.95 1.74 0.75 0.61 1.02

5.3 Continuous Singular Terminal Algorithm

This subsection presents the performance of the CSTA for the blood glucose regulation problem. Figures 7, 8, 9 and Table 4 summarize the results for ten in silico adult patients. Figure 7 depicts the average value of the blood glucose and its ±1 STD, together with the hypoglycemia and hyperglycemia ranges. Figure 8 depicts the mean control



Fig. 4 Average blood glucose concentration for the adult patient cohort using the CTA. The ±1 STD values are given by the filled area


Fig. 5 Average insulin infusion rate for the adult patient cohort using the CTA

action, represented as the insulin infusion rate, for the adult patient cohort. The control signal is continuous and is capable of dealing with the food intake. Figure 9 shows the CVGA. The CSTA drives 100% of the subjects to the Upper B-zone. The indicators in Table 4 show that the CSTA does not present any risk of hyperglycemia or hypoglycemia. Moreover, the mean blood glucose level and the standard deviation remain in the normoglycemia band most of the time, even in the presence of unannounced food intake and interpatient variability.


Fig. 6 CVGA for the adult patient cohort using the CTA

Table 3 Performance indicators of the CTA. The last row refers to the mean values of the adult cohort

Patient | Mean blood glucose [mg/dl] | Time <70 [%] | Time 70–180 [%] | Time >180 [%] | LBGI | HBGI
1    | 123.05 | 0 | 97.02 | 2.98  | 0.07  | 0.88
2    | 124.20 | 0 | 95.42 | 4.58  | 0.02  | 0.93
3    | 131.93 | 0 | 92.92 | 7.08  | 0.02  | 1.69
4    | 122.30 | 0 | 95.28 | 4.72  | 0.08  | 0.96
5    | 126.55 | 0 | 91.53 | 8.47  | 0.21  | 1.65
6    | 141.34 | 0 | 80.57 | 19.43 | 0.03  | 3.19
7    | 128.29 | 0 | 90.35 | 9.65  | 0.02  | 1.64
8    | 135.07 | 0 | 85.01 | 14.99 | 0.13  | 2.63
9    | 125.05 | 0 | 91.12 | 8.88  | 0.12  | 1.42
10   | 123.54 | 0 | 97.99 | 2.01  | 0.06  | 0.97
Mean | 128.13 | 0 | 91.72 | 8.28  | 0.007 | 1.59

5.4 Continuous Nonsingular Terminal Algorithm

This subsection presents the performance of the CNTA for the blood glucose regulation problem. Figures 10, 11, 12 and Table 5 summarize the results for ten in silico adult patients. Figure 10 depicts the average value of the blood glucose and its ±1 STD, together with the hypoglycemia and hyperglycemia ranges. Figure 11 depicts the mean control



Fig. 7 Average blood glucose concentration for the adult patient cohort using the CSTA. The ±1 STD values are given by the filled area


Fig. 8 Average insulin infusion rate for the adult patient cohort using the CSTA

action, represented as the insulin infusion rate, for the adult patient cohort. The control signal is continuous and is capable of dealing with the food intake. Figure 12 shows the CVGA. The CNTA drives 100% of the subjects to the Upper B-zone. The indicators in Table 5 show that the CNTA does not present any risk of hyperglycemia or hypoglycemia. Moreover, the blood glucose level and the standard deviation remain in the normoglycemia band most of the time, even in the presence of unannounced food intake and interpatient variability.


Fig. 9 CVGA for the adult patient cohort using the CSTA

Table 4 Performance indicators of the CSTA. The last row refers to the mean values of the adult patient cohort

Patient | Mean blood glucose [mg/dl] | Time <70 [%] | Time 70–180 [%] | Time >180 [%] | LBGI | HBGI
1    | 133.08 | 0 | 88.69 | 11.31 | 0     | 1.78
2    | 133.03 | 0 | 91.74 | 8.26  | 0     | 1.66
3    | 148.37 | 0 | 78.21 | 21.79 | 0     | 3.72
4    | 128.66 | 0 | 93.41 | 6.59  | 0.01  | 1.39
5    | 136.52 | 0 | 84.18 | 15.82 | 0.01  | 2.58
6    | 159.09 | 0 | 72.31 | 27.69 | 0     | 4.65
7    | 150.90 | 0 | 78.21 | 21.79 | 0     | 4.76
8    | 148.27 | 0 | 72.52 | 27.48 | 0     | 4.24
9    | 131.70 | 0 | 88.20 | 11.80 | 0.05  | 1.90
10   | 132.40 | 0 | 93.06 | 6.94  | 0.01  | 1.72
Mean | 140.20 | 0 | 84.05 | 15.95 | 0.008 | 2.84

5.5 Output-Feedback Continuous-Twisting Algorithm

This subsection presents the performance of the OFCTA for the blood glucose regulation problem. Figures 13, 14, 15 and Table 6 summarize the results for ten in silico adult patients. Figure 13 depicts the average value of the blood glucose and its ±1 STD, together with the hypoglycemia and hyperglycemia ranges. Figure 14 depicts the mean control action



Fig. 10 Average blood glucose concentration for the adult patient cohort using the CNTA. The ±1 STD values are given by the filled area


Fig. 11 Average insulin infusion rate for the adult patient cohort using the CNTA

represented as the insulin infusion rate for the adult patient cohort. The control signal is continuous and is capable of dealing with the food intake. Figure 15 shows the CVGA. The OFCTA drives 90% of the subjects to the Upper B-zone, whereas the remaining 10% are in the B-zone. The indicators in Table 6 show that the OFCTA does not present any risk of hyperglycemia or hypoglycemia. Moreover, the blood glucose level and the standard deviation remain in the normoglycemia band most of the time, even in the presence of unannounced food intake and interpatient variability.


Fig. 12 CVGA for the adult patient cohort using the CNTA

Table 5 Performance indicators of the CNTA. The last row refers to the mean values of the adult cohort

Patient | Mean blood glucose [mg/dl] | Time <70 [%] | Time 70–180 [%] | Time >180 [%] | LBGI | HBGI
1    | 132.00 | 0 | 91.12 | 8.88  | 0     | 1.56
2    | 136.65 | 0 | 87.09 | 12.91 | 0     | 2.13
3    | 147.10 | 0 | 80.15 | 19.85 | 0     | 3.55
4    | 133.00 | 0 | 90.01 | 9.99  | 0.01  | 1.92
5    | 139.38 | 0 | 81.05 | 18.95 | 0.02  | 3.05
6    | 159.86 | 0 | 72.89 | 27.11 | 0     | 4.86
7    | 145.92 | 0 | 79.81 | 20.19 | 0     | 3.90
8    | 148.00 | 0 | 73.28 | 26.72 | 0     | 4.16
9    | 133.10 | 0 | 85.70 | 14.30 | 0.02  | 2.10
10   | 132.81 | 0 | 92.85 | 7.15  | 0.01  | 1.75
Mean | 140.78 | 0 | 83.39 | 16.61 | 0.006 | 2.89

5.6 Standard Open-Loop Protocol at ICU

This subsection presents the Portland protocol results (for more details, please see Appendix A in [3]). The Portland protocol supplies three different signals: insulin infusion, insulin bolus, and glucose bolus. The Portland protocol is based on written instructions that the ICU staff must follow to maintain the blood glucose within



Fig. 13 Average blood glucose concentration for the adult patient cohort using the OFCTA. The ±1 STD values are given by the filled area. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420


Fig. 14 Average insulin infusion rate for the adult patient cohort using the OFCTA. ©2022 IEEE. Reprinted, with permission, from R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420


Fig. 15 CVGA for the adult patient cohort using the OFCTA. ©2022 IEEE. Reprinted, with permission, from R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420

Table 6 Performance indicators of the OFCTA. The last row refers to the mean values of the adult cohort. ©2022 IEEE. Reprinted, with permission, from R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420

Patient | Mean blood glucose [mg/dl] | Time <70 [%] | Time 70–180 [%] | Time >180 [%] | LBGI | HBGI
1    | 137.07 | 0 | 89.38 | 10.62 | 0     | 2.15
2    | 136.15 | 0 | 88.83 | 11.17 | 0     | 2.04
3    | 152.84 | 0 | 79.94 | 20.06 | 0.01  | 4.18
4    | 129.05 | 0 | 96.11 | 3.89  | 0     | 1.36
5    | 140.50 | 0 | 81.82 | 18.18 | 0.01  | 3.15
6    | 156.95 | 0 | 80.65 | 19.35 | 0.01  | 4.18
7    | 152.87 | 0 | 74.12 | 25.88 | 0.02  | 4.79
8    | 154.66 | 0 | 78.01 | 21.99 | 0.02  | 4.51
9    | 133.70 | 0 | 85.36 | 14.64 | 0     | 2.35
10   | 137.85 | 0 | 90.98 | 9.02  | 0     | 2.25
Mean | 143.16 | 0 | 85.07 | 14.93 | 0.007 | 3.09



Fig. 16 Average blood glucose concentration for the adult patient cohort using the Portland protocol. The ±1 STD values are given by the filled area ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420

140–180 mg/dl. Figures 16, 17, 18, and Table 7 summarize the results for ten in silico adult patients. Figure 16 depicts the average value of the blood glucose and its ±1 STD, together with the hypoglycemia and hyperglycemia ranges. Figure 17 depicts the mean control action, represented as the insulin infusion rate, driven by the staff according to the Portland protocol. Figure 18 shows the insulin bolus and glucose bolus. Table 7 shows that the Portland protocol does not present any risk of hyperglycemia or hypoglycemia. The staff also administers insulin boluses before the meals and glucose boluses in case of hypoglycemia risk. The simulations show that, to keep the blood glucose within the normoglycemia range, the ICU staff must intervene around 50 times over the whole process. In contrast with the above outcomes, all the sliding-mode control approaches keep the blood glucose within the normal range, and none of them puts the patient at hypoglycemic risk.

5.7 Discussion

This section contrasts the closed-loop performance of the five controllers. First, all the control algorithms achieve the control objective with minimal risk of hypo- and hyperglycemia. Besides, no algorithm exhibits hypoglycemia events despite the unannounced food intake.



Fig. 17 Average insulin infusion rate for the adult patient cohort using the Portland protocol. ©2022 IEEE. Reprinted with permission, from, R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420


Fig. 18 Insulin and glucose bolus for the adult patient cohort using the Portland protocol. ©2022 IEEE. Reprinted, with permission, from R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420


Table 7 Performance indicators of the Portland protocol. The last row corresponds to the mean values of the patient cohort. ©2022 IEEE. Reprinted, with permission, from R. Franco et al., “Output-Feedback Sliding-Mode Controller for Blood Glucose Regulation in Critically Ill Patients Affected by Type 1 Diabetes,” in IEEE Transactions on Control Systems Technology, vol. 29, no. 6, pp. 2704–2711, Nov. 2021, https://doi.org/10.1109/TCST.2020.3046420

Patient | Mean blood glucose [mg/dl] | Time <70 [%] | Time 70–180 [%] | Time >180 [%] | LBGI | HBGI
1    | 129.56 | 0.65 | 89.01 | 10.34 | 0.22 | 2.07
2    | 123.48 | 0    | 91.26 | 8.74  | 0.34 | 1.44
3    | 132.61 | 0.04 | 80.74 | 19.22 | 0.34 | 2.90
4    | 117.83 | 0.02 | 88.95 | 11.03 | 0.26 | 1.82
5    | 132.88 | 0.07 | 79.81 | 20.12 | 0.24 | 3.19
6    | 138.45 | 0    | 76.82 | 23.18 | 0.35 | 4.23
7    | 131.96 | 1.26 | 79.12 | 19.65 | 1.28 | 3.22
8    | 153.00 | 0    | 71.55 | 28.45 | 0.01 | 5.26
9    | 120.45 | 0.87 | 84.63 | 14.50 | 1.12 | 1.99
10   | 129.14 | 0    | 90.42 | 9.58  | 0.05 | 1.07
Mean | 130.93 | 0.28 | 83.23 | 16.49 | 0.42 | 2.71

Table 8 Performance indicators of the sliding-mode algorithms. The last row depicts the algorithm with the best performance for the corresponding metric

Algorithm         | RMS mean control action | Time <70 [%] | Time 70–180 [%] | Time >180 [%] | LBGI  | HBGI
STA               | 0.0355 | 0    | 95.44 | 4.56  | 0.008 | 1.02
CTA               | 0.0294 | 0    | 91.72 | 8.28  | 0.007 | 1.59
CSTA              | 0.0292 | 0    | 84.05 | 15.95 | 0.008 | 2.84
CNTA              | 0.0296 | 0    | 83.69 | 16.61 | 0.006 | 2.89
OFCTA             | 0.0247 | 0    | 85.07 | 14.93 | 0.007 | 3.09
Portland protocol | 0.0177 | 0.28 | 83.23 | 16.49 | 0.42  | 2.71
Best performance  | OFCTA  | None | STA   | STA   | CNTA  | STA

For comparison purposes, Table 8 recapitulates the average values of the above-listed metrics for each control algorithm. The last row highlights the controller with the best performance under the prescribed metrics.


From Table 8, excluding the Portland protocol, the following can be highlighted:

• None of the algorithms provokes hypoglycemic events; the continuous sliding-mode controllers can deal with unannounced food intake.
• All the algorithms present minimal hypo- and hyperglycemia risks.
• The OFCTA has the lowest insulin infusion consumption, whereas the STA has the highest. Still, this is not a drawback for the STA, given that its higher insulin infusion rate is directly related to the lowest average time spent in the hyperglycemia range.

Note that all the analyzed algorithms present fairly good performance. Moreover, in comparison with the open-loop Portland protocol, the sliding-mode controllers spend more time in the normoglycemia range, achieve a better LBGI index, and reduce the workload.

6 Conclusions

In this chapter, five different continuous controllers based on high-order sliding modes are designed to solve the problem of blood glucose regulation in critically ill patients affected by T1DM. The method relies on blood glucose measurements and insulin infusion via the intravenous route. The proposed schemes cope with the effects of interpatient and intrapatient variability and do not require patient individualization. Moreover, all the controllers are robust to unannounced meal intake, which is regarded as an external perturbation. The controllers are validated on ten in silico adult patients using the FDA-approved UVA/Padova simulator. All the proposed algorithms achieve the control objective without the occurrence of hypoglycemia events. Besides, all of the algorithms present minimal hypo- and hyperglycemia risks. A quantitative analysis examined the individual behavior of the algorithms, showing that all have acceptable performance. In contrast with the standard open-loop protocol used at the ICU, the sliding-mode control approaches reduce the workload of the ICU staff, and none of the SMC schemes puts the patient at hypoglycemia risk.

Acknowledgements This work was supported in part by the ECOS Nord Project M18M01, in part by SEP-CONACYT-ANUIES-ECOS NORD Project 296692, and in part by the French National Agency for Research under Grant DIABLO ANR-18-CE17-0005-01. The work of Roberto Franco and Héctor Ríos was supported in part by CONAHCYT CVU 772057, in part by CONAHCYT CVU 270504 Project 922, and in part by TecNM Projects. The work of Alejandra Ferreira de Loza was supported by CONAHCYT CVU 166403, Project 1537.


Appendix

Proof of Theorem 1 Consider the sliding surface given by (9), whose dynamics is given by

$$\dot{s} = c_1 e_2 + c_2 e_3 + \phi(t,x) - p_3 x_1 u.$$

Substituting the STA control law (10) in the sliding surface dynamics, it follows that

$$\dot{s} = -k_1 \lceil s \rfloor^{\frac{1}{2}} + \bar{v}, \qquad (19a)$$
$$\dot{\bar{v}} = -k_2 \lceil s \rfloor^{0} + \dot{\phi}(t,x), \qquad (19b)$$

where $\bar{v} = v + \phi(t,x)$. Thus, since $|\dot{\phi}(x,t)| \leq \rho$, according to [33], if the control gains are selected as $k_1 = 1.5\rho^{\frac{1}{2}}$ and $k_2 = 1.1\rho$, then $(s,\bar{v}) = 0$ is GUFTS. Hence, from (9), $s = 0$ results in $\ddot{e}_1 = -c_1\dot{e}_1 - c_2 e_1$. Finally, if $c_1, c_2 \in \mathbb{R}^+$, then $(e_1, e_2, e_3) = 0$ is GUAS. □
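As a purely illustrative complement to the proof, the following sketch integrates the closed loop (19) with the gain selection $k_1 = 1.5\rho^{1/2}$, $k_2 = 1.1\rho$; the perturbation derivative is an assumed placeholder satisfying $|\dot{\phi}| \leq \rho$ and is unrelated to the glucose model of the chapter:

```python
import numpy as np

def sgn_pow(x, p):
    # signed power: |x|^p * sign(x)
    return np.sign(x) * np.abs(x) ** p

rho = 1.0                       # assumed bound on |phi_dot|
k1, k2 = 1.5 * np.sqrt(rho), 1.1 * rho

dt, T = 1e-4, 5.0
s, vbar = 1.0, 0.0              # sliding variable and integral state
for k in range(int(T / dt)):
    phi_dot = rho * np.sin(k * dt)   # placeholder perturbation derivative
    s_dot = -k1 * sgn_pow(s, 0.5) + vbar
    vbar_dot = -k2 * np.sign(s) + phi_dot
    s += dt * s_dot
    vbar += dt * vbar_dot

print(abs(s))                    # close to zero after the finite-time transient
```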



Proof of Theorem 2 Consider the sliding surface given by (11), whose dynamics is given by

$$\dot{s}_1 = s_2, \quad \dot{s}_2 = c e_3 + \phi(x,t) - p_3 x_1 u.$$

Substituting the CTA control law (13) in the sliding surface dynamics, it follows that

$$\dot{s}_1 = s_2, \qquad (20a)$$
$$\dot{s}_2 = -k_1 \lceil s_1 \rfloor^{\frac{1}{3}} - k_2 \lceil s_2 \rfloor^{\frac{1}{2}} + \bar{v}, \qquad (20b)$$
$$\dot{\bar{v}} = -k_3 \lceil s_1 \rfloor^{0} - k_4 \lceil s_2 \rfloor^{0} + \dot{\phi}(t,x), \qquad (20c)$$

where $\bar{v} = v + \phi(t,x)$. Thus, since $|\dot{\phi}(x,t)| \leq \rho$, according to [20], if the gains are selected as $k_1 = 25\rho^{\frac{2}{3}}$, $k_2 = 15\rho^{\frac{1}{2}}$, $k_3 = 2.3\rho$, and $k_4 = 1.1\rho$, then $(s_1, s_2, \bar{v}) = 0$ is GUFTS. Substituting $s_1 = 0$ in (11) yields

$$\dot{e}_1 = -c e_1. \qquad (21)$$

Then, if $c \in \mathbb{R}^+$, $(e_1, e_2) = 0$ is GUAS. □
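Analogously, a minimal sketch of the continuous-twisting closed loop (20) with the gain selection $k_1 = 25\rho^{2/3}$, $k_2 = 15\rho^{1/2}$, $k_3 = 2.3\rho$, $k_4 = 1.1\rho$ (again with an assumed placeholder for the perturbation derivative):

```python
import numpy as np

def sgn_pow(x, p):
    return np.sign(x) * np.abs(x) ** p

rho = 1.0
k1, k2 = 25 * rho ** (2 / 3), 15 * np.sqrt(rho)
k3, k4 = 2.3 * rho, 1.1 * rho

dt, T = 1e-4, 10.0
s1, s2, vbar = 1.0, 0.0, 0.0
for k in range(int(T / dt)):
    phi_dot = rho * np.cos(k * dt)           # placeholder, |phi_dot| <= rho
    s1_dot = s2
    s2_dot = -k1 * sgn_pow(s1, 1 / 3) - k2 * sgn_pow(s2, 1 / 2) + vbar
    vbar_dot = -k3 * np.sign(s1) - k4 * np.sign(s2) + phi_dot
    s1, s2, vbar = s1 + dt * s1_dot, s2 + dt * s2_dot, vbar + dt * vbar_dot

print(abs(s1), abs(s2))
```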

Proof of Theorem 3 Consider the sliding surface given by (14a), with $s_1 = c e_1 + e_2$, whose dynamics is given by

$$\dot{s}_1 = s_2, \quad \dot{s}_2 = c e_3 + \phi(x,t) - p_3 x_1 u.$$

Substituting the CSTA control law (14b)–(14c) in the sliding surface dynamics, it follows that

$$\dot{s}_1 = s_2, \qquad (22a)$$
$$\dot{s}_2 = -k_1 \lceil s_1 \rfloor^{\frac{1}{2}} + \bar{v}, \qquad (22b)$$
$$\dot{\bar{v}} = -k_2 \lceil s_1 \rfloor^{0} + \dot{\phi}(t,x), \qquad (22c)$$

where $\bar{v} = v + \phi(t,x)$. Thus, since $|\dot{\phi}(x,t)| \leq \rho$, according to [21], if the gains are selected as $k_1 = 1.5\rho^{\frac{1}{2}}$ and $k_2 = 1.1\rho$, then $(s_1, s_2, \bar{v}) = 0$ is GUFTS. In the same vein as the previous result, $s_1 = 0$ in (11) yields (21). Therefore, selecting $c \in \mathbb{R}^+$ ensures that $(e_1, e_2) = 0$ is GUAS. □

Proof of Theorem 4 Consider the sliding surface given by (15a), with $s_1 = c e_1 + e_2$, whose dynamics is given by

$$\dot{s}_1 = s_2, \quad \dot{s}_2 = c e_3 + \phi(x,t) - p_3 x_1 u.$$

Substituting the CNTA control law (15b)–(15c) in the sliding surface dynamics, it follows that

$$\dot{s}_1 = s_2, \qquad (23a)$$
$$\dot{s}_2 = -k_1 \lceil s_1 \rfloor^{\frac{1}{2}} + \bar{v}, \qquad (23b)$$
$$\dot{\bar{v}} = -k_2 \lceil s_1 \rfloor^{0} + \dot{\phi}(t,x), \qquad (23c)$$

where $\bar{v} = v + \phi(t,x)$. Thus, since $|\dot{\phi}(x,t)| \leq \rho$, according to [22], if the gains are selected as $k_1 = 1.5\rho^{\frac{1}{2}}$ and $k_2 = 1.1\rho$, then $(s_1, s_2, \bar{v}) = 0$ is GUFTS. From (11), $s_1 = 0$ results in (21). Therefore, selecting $c \in \mathbb{R}^+$ ensures that $(e_1, e_2) = 0$ is GUAS. □

Proof of Theorem 5 Introduce the error variables $\sigma_1 = \hat{e}_1 - e_1$, $\sigma_2 = \hat{e}_2 - e_2$, $\sigma_3 = \hat{e}_3 - e_3$, $\sigma_4 = \varepsilon - \phi + (p_3 x_1 - K_M)u$, $z_1 = e_1$, $z_2 = e_2 + \sigma_2$, $z_3 = e_3 + \sigma_3$, $z_4 = \eta$. Thus, applying the OFCTA (16) to the tracking error dynamics (8), and taking into account that $0 < K_m \leq p_3 x_1 \leq K_M$, $|\dot{\phi}| \leq \rho$ and $|\dot{u}| \leq l$ with $l > 0$, the closed-loop dynamics yields

$$\dot{z}_1 = z_2 - \sigma_2, \qquad (24a)$$
$$\dot{z}_2 = z_3 - \lambda_2 H^{\frac{1}{2}} \lceil\sigma_1\rfloor^{\frac{1}{2}}, \qquad (24b)$$
$$\dot{z}_3 \in \frac{[K_m, K_M]}{K_m}\left(-\bar{k}_1 L^{\frac{3}{4}}\lceil z_1\rfloor^{\frac{1}{4}} - \bar{k}_2 L^{\frac{2}{3}}\lceil z_2\rfloor^{\frac{1}{3}} - \bar{k}_3 L^{\frac{1}{2}}\lceil z_3\rfloor^{\frac{1}{2}} + \bar{z}_4\right) - K_m\lambda_3 H^{\frac{3}{4}}\lceil\sigma_1\rfloor^{\frac{1}{4}} + K_m\sigma_4, \qquad (24c)$$
$$\dot{\bar{z}}_4 = -\bar{k}_4 L\lceil z_1\rfloor^{0} - \bar{k}_5 L\lceil z_2\rfloor^{0} - \bar{k}_6 L\lceil z_3\rfloor^{0} + K_m\dot{\phi}, \qquad (24d)$$
$$\dot{\sigma}_1 = -\lambda_1 H^{\frac{1}{4}}\lceil\sigma_1\rfloor^{\frac{3}{4}} + \sigma_2, \qquad (24e)$$
$$\dot{\sigma}_2 = -\lambda_2 H^{\frac{1}{2}}\lceil\sigma_1\rfloor^{\frac{1}{2}} + \sigma_3, \qquad (24f)$$
$$\dot{\sigma}_3 = -\lambda_3 H^{\frac{3}{4}}\lceil\sigma_1\rfloor^{\frac{1}{4}} + \sigma_4, \qquad (24g)$$
$$\dot{\sigma}_4 \in -\lambda_4 H\lceil\sigma_1\rfloor^{0} + [-\rho, \rho] + [0, (K_M - K_m)l], \qquad (24h)$$

with $\bar{k}_i = K_m k_i$, $i = 1, \ldots, 6$, and $\bar{z}_4 = K_m z_4$. Therefore, system (24) is homogeneous of degree $d_h = -1$ with respect to the vector of weights $\mathbf{r} = (4, 3, 2, 1, 4, 3, 2, 1)$ for the extended state $(z_1, z_2, z_3, \bar{z}_4, \sigma_1, \sigma_2, \sigma_3, \sigma_4)$. Thus, following the stability analysis provided in [23], $(z_1, z_2, z_3, \bar{z}_4, \sigma_1, \sigma_2, \sigma_3, \sigma_4) = 0$ is GUFTS. Consequently, $(e_1, e_2, e_3) = 0$ is GUFTS. □

References

1. Dickson, L., Stewart, W., Pretty, G., Flechet, M., Desaive, T., Penning, S., Lambermont, C., Benyo, B., Shaw, M., Chase, J.: Generalisability of a virtual trials method for glycaemic control in intensive care. IEEE Trans. Biomed. Eng. 65, 1543–1553 (2018)


2. Cosson, E., Catargi, B., Cheisson, G., Jacqueminet, S., Ichai, C., Leguerrier, A., Ouattara, A., Tauveron, I., Bismuth, E., Benhamou, D., Valensi, P.: Practical management of diabetes patients before, during and after surgery: a joint French diabetology and anaesthesiology position statement. Diabetes Metab. 44, 200–216 (2018) 3. Steil, G., Deiss, D., Shih, J., Buckingham, B., Weinzimer, S., Agus, M.: Intensive care unit insulin delivery algorithms: why so many? How to choose? J. Diabetes Sci. Technol. 3, 126–140 (2009) 4. Kumar, A., Gupta, R., Ghosh, A., Misra, A.: Diabetes and metabolic syndrome: clinical research and reviews diabetes in COVID-19: prevalence, pathophysiology, prognosis and practical considerations. Diabetes Metab. Syndr. Clin. Res. Rev. 14, 303–310 (2020) 5. Chase, J., Benyo, B., Desaive, T.: Glycemic control in the intensive care unit: a control systems perspective. Annu. Rev. Control 48, 359–368 (2019) 6. Wintergerst, K., Deiss, D., Buckingham, B., Cantwell, M., Kache, S., Agarwal, S., Wilson, D., Steil, G.: Glucose control in pediatric intensive care unit patients using an insulin-glucose algorithm. Diabetes Technol. Ther. 9, 211–222 (2007) 7. Fisk, L., Le Compte, A.J., Shaw, G.M., Penning, S., Desaive, T., Chase, J.G.: STAR development and protocol comparison. IEEE Trans. Biomed. Eng. 59, 3357–3364 (2012) 8. Hovorka, R., Canonico, V., Chassin, L., Haueter, Ludovic U., Massi-Benedetti, M., Federici, M., Pieber, T., Schaller, C., Schaupp, L., Vering, T., Wilinska, M.: Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes. Physiol. Meas. 25, 905–920 (2004) 9. Wu, S., Furutani, E., Sugawara, T., Asaga, T., Shirakami, G.: Glycemic control for critically ill patients using zone model predictive control. IEEJ Trans. Electr. Electron. Eng. 16(2), 275–281 (2021) 10. Incremona, G.P., Messori, M., Toffanin, C., Cobelli, C., Magni, L.: Model predictive control with integral action for artificial pancreas. Control Eng. Pract. 77, 86–94 (2018) 11. Chakrabarty, A., Healey, E., Shi, D., Zavitsanou, S., Doyle, F.J., Dassau, E.: Embedded model predictive control for a wearable artificial pancreas. IEEE Trans. Control Syst. Technol. 28, 2600–2607 (2020) 12. Utkin, V.I.: Sliding Modes in Control and Optimization. Springer, Berlin, Germany (1992) 13. Ahmad, S., Ahmed, N., Ilyas, M., Khan, W.: Super twisting sliding mode control algorithm for developing artificial pancreas in type 1 diabetes patients. Biomed. Signal Process. Control 38, 200–211 (2017) 14. Kaveh, P., Shtessel, Y.: Blood glucose regulation using higher order sliding-mode control. Int. J. Robust Nonlinear Control 18, 554–569 (2008) 15. Gallardo Hernández, A., Fridman, L., Levant, A., Shtessel, Y., Leder, R., Revilla Monsalve, C., Islas Andrade, S.: High-order sliding-mode control for blood glucose: practical relative degree approach. Control Eng. Pract. 21, 747–758 (2013) 16. Dansa, M., Rodrigues, V.H.P., Oliveira, T.R.: Blood glucose regulation through bihormonal non-singular terminal sliding mode controller. In: Proceedings 15th International Conference of Control, Automation, Robotics and Vision (ICARCV), vol. 1, pp. 474–479 (2018) 17. Franco, R., Ferreira de Loza, A., Ríos, H., Cassany, L., Gucik-Derigny, D., Cieslak, J., Olcomendy, L., Henry, D.: Output-feedback sliding-mode controller for blood glucose regulation in critically ill patients affected by type 1 diabetes. IEEE Trans. Control Syst. Technol. 29, 2704–2711 (2021) 18. 
Franco, R., Ríos, H., Ferreira de Loza, A., Cassany, L., Gucik-Derigny, D., Cieslak, J., Olcomendy, L., Henry D.: Blood glucose regulation in patients with type 1 diabetes mellitus: a robust MRAC approach. In: IEEE Conference on Decision and Control (CDC), Austin, TX, USA, pp. 661–666 (2021) 19. Levant, A.: Higher-order sliding modes, differentiation and output feedback control. Int. J. Control 76, 924–941 (2003) 20. Torres-González, V., Sanchez, T., Fridman, L., Moreno, J.: Design of continuous twisting algorithm. Automatica 80, 119–126 (2017)


21. Fridman, L., Moreno, J.A., Bandyopadhyay, B., Kamal, S., Chalanga, A.: Continuous nested algorithms: the fifth generation of sliding mode controllers. Recent Advances in Sliding Modes: From Control (2015) 22. Kamal, S., Moreno, J.A., Chalanga, A., Bandyopadhyay, B., Fridman, L.: Continuous terminal sliding-mode controller. Automatica 69, 308–314 (2016) 23. Mendoza-Avila, J., Moreno, J., Fridman, L.: Continuous twisting algorithm for third order systems. IEEE Trans. Autom. Control 65, 2814–2825 (2018) 24. Levant, A.: Filtering differentiators and observers. In: 15th International Workshop on Variable Structure Systems (VSS), vol. 1, pp. 174–179 (2018) 25. Levant, A.: Homogeneity approach to high-order sliding mode design. Automatica 41, 823–830 (2005) 26. Man, C.D., Micheletto, F., Lv, D., Breton, M., Kovatchev, B., Cobelli, C.: The UVA/PADOVA type 1 diabetes simulator: new features. J. Diabetes Sci. Technol. 8, 26–34 (2014) 27. Ovalle, L., Ríos, H., Llama, M., Santibañez, V., Dzul, A.: Omnidirectional Mobile Robot Robust Tracking: Sliding-Mode Output-based Control Approaches. Control Engineering Practice 85, 50–58 (2019) 28. Messori, M., Paolo Incremona, G., Cobelli, C., Magni, L.: Individualized model predictive control for the artificial pancreas: in silico evaluation of closed-loop glucose control. IEEE Control Syst. Technol. 38, 86–104 (2018) 29. Herpe, T.V., Espinoza, M., Haverbeke, N., De Moor, B., Van den Berghe, G.: Glycemia prediction in critically ill patients using an adaptive modeling approach. J. Diabetes Sci. Technol. 1, 348–356 (2007) 30. Bergman, R., Phillips, L., Cobelli, C.: Physiologic evaluation of factors controlling glucose tolerance in man: measurement of insulin sensitivity and beta-cell glucose sensitivity from the response to intravenous glucose. J. Clin. Invest. 68, 1456–1467 (1981) 31. Fisher, M.E.: A semiclosed-loop algorithm for the control of blood glucose levels in diabetics. IEEE Trans. Biomed. Eng. 38, 57–61 (1991) 32. Khalil, H.: Nonlinear Systems. Prentice- Hall, Upper Saddle River, NJ, USA (2002) 33. Moreno, J.: Strict Lyapunov functions for the super-twisting algorithm. IEEE Trans. Autom. Control 57, 1035–1040 (2012) 34. Seeber, R., Horn, M.: Stability proof for a well-established super-twisting parameter setting. Automatica 84, 241–243 (2017) 35. Doola, R., Deane, A., Tolcher, D., Presneill, J., Barrett, H., Forbes, J., Todd, A., Okano, S., Sturgess, D.: The effect of a low carbohydrate formula on glycaemia in critically ill enterally-fed adult patients with hyperglycaemia: a blinded randomised feasibility trial. Clin. Nutr. ESPEN 31, 80–87 (2019) 36. Sanz, R., Garcia, P., Diez, J.L., Bondia, J.: Artificial pancreas system with unannounced meals based on a disturbance observer and feedforward compensation. IEEE Trans. Control Syst. Technol. 29, 454–460 (2020) 37. Kovatchev, B.P., Breton, M., Dalla-Man, C., Cobelli, C.: In silico preclinical trials: a proof of concept in closed-loop control of type 1 diabetes. J. Diabetes Sci. Technol. 3(1), 44–55 (2009) 38. Magni, L., Raimondo, D., Dalla-Man, C., Breton, M., Patek, S., De Nicolao, G., Cobelli, C., Kovatchev, B.: Evaluating the efficacy of closed-loop glucose regulation via control-variability grid analysis. J. Diabetes Sci. Technol. 2, 630–635 (2008)

A Reduced-Order Model-Based Design of Event-Triggered Sliding-Mode Control

Kiran Kumari, Abhisek K. Behera, Bijnan Bandyopadhyay, and Johann Reger

Abstract Event-triggered sliding-mode control (SMC) is an effective tool for stabilizing networked systems under external perturbations. In this chapter, a reduced-order model-based event-triggered controller is presented, unlike the case in the traditional full-order-based design. Besides its inherent advantage of reduced computations, this technique also offers many benefits to the network-based implementation. Particularly in the event-triggering scenario, the use of a reduced-order state vector shows an increase in the sampling interval (also called the inter-event time), leading to a sparse sampling sequence. This is the primary goal of almost all event-triggered controllers. The second outcome of this design is the transmission of a reduced-order vector over the network. Consequently, the transmission cost associated with the controller implementation can be reduced. This chapter exploits the aggregation technique to obtain a reduced-order model for the plant. The design of SMC and the event condition are carried out using this reduced-order model. The analysis of the closed-loop system is discussed using the reduced-order model without transforming it into a regular form. At the end, a practical example is considered to illustrate the benefit of the proposed technique.

K. Kumari Department of Electrical Engineering, Indian Institute of Science, Bengaluru, Karnataka 560012, India e-mail: [email protected]; [email protected] K. Kumari · J. Reger Control Engineering Group, Technische Universität Ilmenau, 98693 Ilmenau, Germany e-mail: [email protected] A. K. Behera Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee 247 667, India e-mail: [email protected] B. Bandyopadhyay (B) Department of Electrical Engineering, Indian Institute of Technology Jodhpur, Jodhpur 342 037, India e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_16


1 Introduction

The event-triggering technique was primarily developed as a new method of implementing sampled-data controllers on digital platforms. Compared to the classical approach, in this technique, the plant outputs are transmitted or the control is applied intermittently to the plant, but not necessarily at regular intervals of time. This implementation is realized by enabling the sensors to compute a condition, which can be verified online, and the data transmission is initiated only when this condition is violated (see [1, 2, 11, 26, 28] for the initial developments). The condition used for regulating the transmission is known as the event condition, and the control law, which is updated at these time instants of event occurrence, is called the event-triggered control (ETC). In the past decade, many works on ETC have been reported in the literature in various fields of control applications. Readers can refer to [8, 9, 12, 20, 25, 30–32, 34], and the references cited therein, for the significant developments in this field.

One of the key features of the event-based algorithms is the dependency of the event condition on the plant output to generate a sequence of sampling (also called triggering) instants. Any abrupt behavior of the state trajectory can lead to the generation of a larger number of triggering instants, and vice versa. As a result, the triggering mechanism generates either a sparse or a dense sampling sequence, which can affect the overall system performance. Although this is not stated explicitly, many works that primarily focus on the improvement of system performance through ETC have utilized this fact in their design. The most notable of these can be found in [17], where an internal dynamic model is introduced in the event-triggering strategy. The overall order of the system is increased by that of the filter, while the additional state variable is used to generate the sparse sampling sequence. A proper design of the triggering mechanism can increase the time interval between two events (often called the inter-event time). A similar idea was also extended recently in [33] for the sliding-mode control (SMC), which results in a similar outcome. There is also an alternate approach introduced in [22], where a reduced-order model is used to design the ETC. Due to the use of a reduced vector, the inter-event times were found to be larger than those of the full-order case.

The main motivation of this chapter is derived from the reduced-order approach for the design of ETC. Here, we describe a Lyapunov-based SMC design using the reduced-order model to stabilize the full-order plant along similar lines. Many model-order reduction techniques are found in the literature based on numerous design criteria, e.g., the aggregation method [3], Routh approximation [18], Padé approximation [27], and the moment matching method [4]. In each of these techniques, different reduced-order models are obtained for the controller design. The criterion followed to choose a reduced-order model is that it behaves in the same way as the (original) full-order plant up to some accuracy [7, 23, 24]. It is well known that model-order reduction techniques are used to avoid the computational complexities in computer-based implementations. Additionally, these techniques may have other benefits in modern applications, such as a reduction in data


transmitted over the communication network. With the use of a reduced-order vector, the computations for the triggering mechanism are reduced, and also the amount of information transmitted is reduced. This directly results in the larger inter-event times, as seen in [22]. In other words, the sampling sequence generated by the triggering mechanism becomes sparse. Under this framework, a reduced-order controller is designed to obtain robust performance. In this chapter, the SMC is incorporated with the reduced-order event-triggering strategy for stabilizing the plant. SMC is a popular control strategy for guaranteeing robust performance against a class of disturbances [5, 16, 29]. This is true in the case of analog implementation, but the performance deteriorates in the sampled-data case. The event-based implementation of SMC has demonstrated a significant improvement in performance, termed as the practical sliding mode, where the trajectories are bounded within any bound around the sliding manifold (see [6, 10, 13–15] for initial developments). In [21, 22], the design of reduced-order event-triggered SMC consists of transforming the system into the regular form, followed by the selection of the sliding surface parameter. The design step is quite cumbersome because the estimation of the region of attraction is required to ensure the boundedness of the plant trajectory. In this chapter, this design step is simplified by employing a Lyapunov-based approach to design the SMC. Here, there is no need to transform the system into the regular form to synthesize the sliding hyperplane. Another main difference is that, unlike in [22], the practical sliding mode is enforced in the system in a finite time, but not necessarily when the trajectory crosses the sliding manifold. The chapter is organized as follows: Sect. 2 describes the problem setup, followed by Sect. 3, which states the problem at hand. In Sect. 4, a reduced-order model is obtained using the aggregation technique, which is also discussed in detail. The main contribution is discussed in Sect. 5. The design of the sliding manifold and control law is presented for the reduced-order model. Also, the proposed event-triggering mechanism is discussed in the same section. The stability analysis for the proposed method is also presented, and it shows the boundedness of the state trajectory of the full-order plant. Section 6 illustrates the proposed strategy with a practical example. Finally, Sect. 7 concludes the chapter.

2 System Description

We consider the following LTI system

$$\dot{x} = Ax + B(u + d) \qquad (1)$$

where x ∈ Rⁿ is the state, u ∈ R is the control input, and d ∈ R is an unknown disturbance input to the system. Here, the input disturbance is a matched uncertainty, i.e., it enters the plant through the input channel. There are many mechanical and electrical systems that fall into the above class.



Fig. 1 The closed-loop plant with the event-triggered feedback strategy

Assumption 1 The pair (A, B) is controllable.

Assumption 2 There exists a positive constant d0 such that |d(t)| ≤ d0 for all t ≥ 0.

In our work, the plant with the sampled-data feedback shown in Fig. 1 is considered. One of the situations where this scenario arises frequently is the networked control system. Here, the communication network is subject to various constraints that directly affect the system performance. Among many strategies, the sampled-data feedback is primarily used in a networked system to reduce the transmission frequency. In this chapter, an event-triggering-based scheduling policy is employed to regulate the data transmission over the network.

Let $\{t_i\}_{i=0}^{\infty}$ be the time sequence at which the control input is applied to the plant. Here, the event-triggering mechanism located at the sensor side measures the state continuously and eventually generates such time instants. Then, we apply the newly computed control signal to the plant through a zero-order hold. It may be noted that the triggering sequence is not uniformly spaced because of the nonlinearity in the sampling mechanism, and, importantly, these instants are not known a priori.

3 Problem Statement

The main objective of this chapter is to develop a novel event-triggering algorithm that improves the performance over the existing methods. Here, the improvement is in terms of the inter-event time: the time interval between two events generated by the proposed event-triggering mechanism is larger than that of the existing algorithms. Thus, the sampling sequence becomes sparse and the requirement of feedback is also reduced.

The model-order reduction technique is applied here to design the ETC. By obtaining a reduced-order model in the design, a reduced-order event-triggering mechanism and controller are proposed. Many model-order reduction techniques are available in the literature based on different criteria that compare the performance of the actual and reduced models. The proposed technique relies on the dominant modes of the plant to design the controller, neglecting the non-dominant modes. In many


practical systems, there are only a few modes that dominate over the others in the plant response. These dominant modes can be used to design the controller in the event-triggering feedback. The objective here is to propose a triggering mechanism and a stabilizing controller employing the reduced-order model of the given system, which consists primarily of the dominant modes. As a result, the triggering instants depend on fewer states rather than on the full state vector, which leads to a longer time before the event condition is violated. In the next section, the details of obtaining the reduced-order model are provided.

4 Reduced-Order System via Aggregation Technique

This section presents the construction of a reduced-order model for the control design. We introduce a nonsingular transformation T that transforms the system (1) into a block-diagonal form consisting of dominant and non-dominant modes as its block elements. Then, with the transformation x = T z, one can write

$$\dot{z} = \hat{A}z + \hat{B}(u + d), \qquad (2)$$

where

$$\hat{A} = \begin{bmatrix} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{bmatrix}, \qquad \hat{B} = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix},$$

and the corresponding state vector is $z = [z_1^\top \ z_2^\top]^\top \in \mathbb{R}^n$, with $z_1 \in \mathbb{R}^r$ and $z_2 \in \mathbb{R}^{n-r}$ being the dominant and non-dominant modes, respectively, and r the number of dominant modes in the system. The block matrices Λ1 and Λ2 contain the dominant and non-dominant modes of the plant, respectively. Then, the reduced-order model from (2) can be written as

$$\dot{z}_1 = \Lambda_1 z_1 + B_1(u + d). \qquad (3)$$

Similarly, the non-dominant dynamics is given by the part of (2) not accounted for in the dominant part (3), and it can be written as

$$\dot{z}_2 = \Lambda_2 z_2 + B_2(u + d). \qquad (4)$$

In further analysis, only the reduced-order model (3) is used to design the controller for stabilizing the full-order system (2). It can be shown that the reduced-order controller will stabilize the origin of the subsystem (3), whereas the same controller acts as a perturbation in (4). But, as the dominant mode trajectories converge to zero, the non-dominant modes may converge to some bounded region because of the disturbance. Indeed, in the nominal case, i.e., d ≡ 0, the trajectories move toward


the origin provided the control vanishes at zero. Thus, we design a reduced-order controller based on the reduced-order plant and then analyze the behavior of the full-order plant. It may be noted that the high-order system (1) and the reduced-order system (3) are related to each other as

$$z_1 = C_a x, \qquad (5)$$

where $C_a = [I_r \ 0]\,T^{-1}$ is known as the aggregation matrix [5]. This suggests the name of the model-order reduction technique. The aggregation matrix relates the state vector of the reduced-order model with that of the plant. Thus, one can analyze the behavior of the full-order plant by looking at the dominant dynamics. Another way to interpret this is to see the aggregation matrix as a projection operator that maps a vector in the full-order state space into a vector in the reduced-order state space. With this background, the design of the controller is presented in the next section.
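A minimal numerical sketch of this construction follows (assuming, for simplicity, a diagonalizable A with real eigenvalues, which is not required in general; the function name and the example matrices are ours):

```python
import numpy as np

def aggregate(A, B, r):
    """Split (A, B) into dominant/non-dominant modal blocks and return
    (Lambda1, B1, Ca, T) for the r slowest (dominant) modes.
    Assumes A is diagonalizable with real eigenvalues."""
    eigvals, eigvecs = np.linalg.eig(A)
    order = np.argsort(np.abs(eigvals))      # slowest (dominant) modes first
    T = np.real(eigvecs[:, order])           # x = T z
    T_inv = np.linalg.inv(T)
    A_hat = T_inv @ A @ T                    # block-diagonal (here diagonal)
    B_hat = T_inv @ B
    Lambda1, B1 = A_hat[:r, :r], B_hat[:r, :]
    Ca = np.hstack([np.eye(r), np.zeros((r, A.shape[0] - r))]) @ T_inv
    return Lambda1, B1, Ca, T

# Example: a stable third-order plant with one fast (non-dominant) mode
A = np.diag([-0.5, -1.0, -50.0])
B = np.array([[0.0], [1.0], [1.0]])
Lambda1, B1, Ca, T = aggregate(A, B, r=2)
print(Lambda1, B1, Ca, sep="\n")
```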

5 Reduced-Order Event-Triggered SMC

5.1 Design of Sliding Manifold

In this section, we discuss the design of event-triggered SMC for the reduced-order system (3). Here, the design is based on the direct synthesis of the sliding variable without converting the system into a regular form. Let F be any r-dimensional vector such that the matrix Λs := Λ1 − B1 F is Hurwitz. Then, there exists a solution P1 = P1ᵀ > 0 to the Lyapunov equation

$$P_1 \Lambda_s + \Lambda_s^\top P_1 = -Q_1 \qquad (6)$$

for any given Q1 = Q1ᵀ > 0. With this, the sliding variable is defined as

$$s_z(z_1) = B_1^\top P_1 z_1 \qquad (7)$$

and the sliding manifold by

$$\mathcal{S} := \{ z_1 \in \mathbb{R}^r : s_z(z_1) = B_1^\top P_1 z_1 = 0 \}. \qquad (8)$$

When the sliding motion occurs in the system, the trajectories slide on the set S due to the action of the discontinuous control signal. In the analog implementation, this is possible by designing an appropriate control law. But, in the event-triggered


implementation, the state trajectory may not slide on the sliding surface exactly because of the discrete nature of the control signal. However, the state trajectories remain bounded around the sliding surface by the proposed event-based SMC. This motion is referred to as the practical sliding mode, which is defined below. Definition 1 (Practical sliding mode) The system (3) is said to be in practical sliding mode if for every ε > 0 there is a τ > 0 such that |sz (t)| ≤ ε for all t ≥ τ .
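A short sketch of the sliding-variable design in (6)–(7), using standard pole placement for F and SciPy's continuous Lyapunov solver, is given below; the numerical pair (Λ1, B1) is illustrative only and not taken from the chapter:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.signal import place_poles

# Illustrative reduced-order pair (Lambda1, B1)
Lambda1 = np.array([[0.0, 0.2], [0.0, -0.4]])
B1 = np.array([[1.0], [-0.2]])

F = place_poles(Lambda1, B1, [-0.3, -0.1]).gain_matrix   # Lambda_s = Lambda1 - B1 F Hurwitz
Lambda_s = Lambda1 - B1 @ F

Q1 = np.eye(2)
# Solves Lambda_s^T P1 + P1 Lambda_s = -Q1, i.e., equation (6)
P1 = solve_continuous_lyapunov(Lambda_s.T, -Q1)

def sliding_variable(z1):
    # s_z(z1) = B1^T P1 z1, cf. (7)
    return (B1.T @ P1 @ z1).item()

print(sliding_variable(np.array([[1.0], [0.5]])))
```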

5.2 Design of Event-Triggered SMC

Here, we design an event-triggered controller using the reduced-order model. As the controller is to be implemented through the event-based feedback, we first discuss the reduced-order event-triggering condition. Let

$$e_z(t) = z_1(t_i) - z_1(t) \qquad (9)$$

be the sampling error for all t ∈ [ti, ti+1) and any i ∈ Z≥0. Then, the following event-triggering mechanism is developed:

$$t_{i+1} = \inf\{ t > t_i : \|e_z(t)\| \geq \sigma\alpha \} \qquad (10)$$

where α > 0 is the triggering parameter, and σ ∈ (0, 1) is a constant introduced to tighten the triggering condition. The time interval between two consecutive triggering instants, i.e., Ti = ti+1 − ti, is called the inter-event time. Note that we use a reduced-order state vector in the event-triggering mechanism. The main outcomes of this design are twofold. First, a larger inter-event time is obtained compared to the full-order-based triggering mechanism; second, the transmission cost is reduced. The computational and transmission costs over the network are reduced because less information needs to be communicated.

The larger inter-event time can be proved along similar lines as discussed in [22]. Suppose that $\{s_j\}_{j=0}^{\infty}$ is the time sequence at which a full-order controller is applied. The triggering times are generated by a full-order triggering mechanism given by

$$s_{j+1} = \inf\{ t > s_j : \|e(t)\| \geq \sigma\alpha \} \qquad (11)$$

where e(t) = z(s_j) − z(t), and σ > 0 and α > 0 are the same values as in (10). Then, we have the following result.


Proposition 1 Let ti = sj for some i, j ∈ Z≥0 such that z(ti) = z(sj). Then, the triggering mechanisms (10) and (11) guarantee that ti+1 ≥ sj+1.

Proof First note that the triggering parameters σ and α are the same in the triggering mechanisms (10) and (11). By observing ti = sj with z(ti) = z(sj), we obtain

$$\|e(t)\|^2 = \|e_z(t)\|^2 + \|z_2(t_i) - z_2(t)\|^2. \qquad (12)$$

Now, suppose ti+1 < sj+1. It follows immediately that ‖ez(t)‖ < σα for all t ∈ [ti, ti+1) and ‖ez(ti+1⁻)‖ = σα. However, from the relation (12), it can be observed that ‖e(t)‖ ≥ σα for some t ∈ [ti, ti+1). Thus, the triggering mechanism (11) yields sj+1 ≤ ti+1. Here, the equality holds when the second term in (12) vanishes at the time of triggering. This leads to a contradiction to our assumption, and hence the proof is completed. □

The above result directly reveals that the triggering sequence generated by (10) becomes sparse because the non-dominant trajectory does not ensure that the error vectors in (10) and (11) are equal.

Now, we design the event-triggered SMC based on the above triggering mechanism. Note that, like the triggering mechanism (10), the control law also uses the reduced-order state vector. The proposed control law is given as

$$u(t) = -F z_1(t_i) - K\,\mathrm{sign}\, s_z(t_i), \qquad t \in [t_i, t_{i+1}) \qquad (13)$$

where K > 0 is the switching gain to be designed in the sequel. This control law is quite different from the classical SMC with respect to the first term. Since the state-feedback form of the controller in (13) has been used, all the eigenvalues of the dominant part of the closed-loop system have negative real parts. This is in contrast to the traditional design, where some of the eigenvalues are at the origin. Note that the reduced-order controller (13) is applied to the dominant and non-dominant parts of the dynamics, i.e., to the full-order plant. The actual plant state is measured, and then the reduced-order state vector is obtained with the help of the aggregation matrix as given in (5). Some remarks are now in order that highlight the key features of the above controller.

Remark 1 The controller (13) is different from the earlier proposals on event-triggered SMC in two aspects. Firstly, it places all the eigenvalues of the closed-loop system in the left half of the complex plane, and so it guarantees the convergence of the trajectory z1 to the origin. On the other hand, the existing event-triggered SMC (see [6, 10]) places one eigenvalue at zero, leading to the analysis of the reaching phase and the sliding phase sequentially. Secondly, the discrete nature of the control law (13) is quite different from its full-order counterpart because the triggering instants are based on the reduced-order model. As a result, the controllers in the two cases are applied at different triggering instants.


Remark 2 There are a few controllers which can place all eigenvalues with a negative real part and at the same time, they all become equal to Utkin’s equivalent control. For instance, the event-triggered SMC law u(t) = −B1 P1 Λ1 z 1 (ti ) − L s sz (ti ) − K signsz (ti ) with any L s > 0 can also guarantee all the eigenvalues of the closed-loop system stable. However, the convergence of the closed-loop trajectory using the Lyapunov function V1 (z 1 ) = z 1 P1 z 1 cannot be ensured for all time. So analyzing the behavior of the system trajectory may not be very straightforward. Remark 3 The control law (13) may not enforce the sliding motion in the system when the trajectory reaches the sliding manifold for the first time. This is because the first term in the controller is not equal to the equivalent control, and thus, the trajectory leaves the surface owing to the fact that the switching part is unable to keep the trajectory on it. Despite these challenges, one may still guarantee the sliding motion in a finite time provided the states converge to the sufficiently small neighborhood of the origin. This is discussed in detail in the next subsection.
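Before turning to that analysis, the triggering rule (10) and the control law (13) can be summarized in the following simulation sketch (forward Euler, zero-order hold between events); the matrices are assumed to come from a design such as the sketches above, and the function name is ours:

```python
import numpy as np

def simulate_event_triggered_smc(A, B, Ca, F, B1, P1, K, sigma, alpha,
                                 x0, dt=1e-3, T=20.0, dist=lambda t: 0.0):
    """Forward-Euler simulation of the plant x' = A x + B (u + d) under the
    reduced-order event-triggered SMC (13) with the triggering rule (10)."""
    x = np.array(x0, dtype=float)
    t, events = 0.0, [0.0]
    z1_held = Ca @ x                                        # last transmitted z1
    u = -(F @ z1_held).item() - K * np.sign((B1.T @ P1 @ z1_held).item())
    while t < T:
        z1 = Ca @ x
        if np.linalg.norm(z1_held - z1) >= sigma * alpha:   # event condition (10)
            z1_held = z1
            u = -(F @ z1_held).item() - K * np.sign((B1.T @ P1 @ z1_held).item())
            events.append(t)                                # control update (13)
        x = x + dt * (A @ x + B.flatten() * (u + dist(t)))  # zero-order hold in between
        t += dt
    return x, events
```

Because the event check uses only the reduced-order vector z1 = Ca x, the loop also illustrates why fewer data need to be transmitted than in a full-order triggering scheme.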

5.3 Main Result

In this subsection, the main result on stabilizing the plant with the proposed reduced-order event-triggered controller is presented. Here, the full-order plant is considered for analyzing its closed-loop behavior. It is established that the trajectory of the dominant modes starting within any compact region converges to an arbitrarily small bound around the origin, while the non-dominant trajectory remains bounded in the steady state.

First, the sets necessary for the analysis are constructed. Let P2 = P2ᵀ > 0 be a solution to the Lyapunov equation

$$P_2 \Lambda_2 + \Lambda_2^\top P_2 = -Q_2 \qquad (14)$$

for any Q2 = Q2ᵀ > 0. Then, define Ω1 := {z1 ∈ Rʳ : z1ᵀ P1 z1 ≤ c1} and Ω2 := {z2 ∈ Rⁿ⁻ʳ : z2ᵀ P2 z2 ≤ c2} for some positive scalars c1 and c2. Define Ω = Ω1 × Ω2, which is given by

$$\Omega = \left\{ \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} \in \mathbb{R}^n : z_1 \in \Omega_1 \ \text{and} \ z_2 \in \Omega_2 \right\}.$$

426

K. Kumari et al.

• for every ε1 > 0, there exist α > 0, K > 0, and τ1 > 0 such that z 1 (t) ∈ Ω1 for all t ≥ 0 and z 1 (t) ≤ ε1 for all t ≥ τ1 . Moreover, there also exists τ > 0 such that Ti ≥ τ for all i ∈ Z≥0 ; • the trajectories of the non-dominant mode remain bounded for all time and are ultimately bounded eventually. Proof The closed-loop dynamics of dominant modes with the above control law can be written as z˙ 1 (t) = Λs z 1 (t) − B1 Fez (t) − B1 K sign sz (ti ) + B1 d(t)

(15)

for all t ∈ [ti , ti+1 ). Let us analyze first the behavior of this dynamics. Initially, various design parameters are selected. For any given ε1 > 0 and η > 0, choose ε > 0 and α > 0 such that 2P1 B1 F2 2 λm (Q 1 ) α + (2K + 2d0 )ε ≤ λm (P1 )ε21 λm (Q 1 ) 4λ M (P1 )

 ε c1 α < min B1 P1 , λm (P1 ) B1 P1

(16) (17)

and the switching gain K by K ≥ (B1 P1 B1 )−1 B1 P1 Λs ε1 + Fα + d0 + η.

(18)

/ Ω1 }. Then, it follows that z 1 (t) ∈ Ω1 for all Define Tmax := min{t ≥ 0 : z 1 (t) ∈ t ∈ [0, Tmax ). The goal is to analyze the behavior of (15) in the time interval [0, Tmax ). This fact is necessary to establish a (uniform) lower bound for the inter-event time that excludes the Zeno behavior in the triggering sequence. If the state trajectory never leaves the set Ω1 , the stability of the closed-loop system can be guaranteed for all t ≥ 0. For some i ∈ Z≥0 , let i = {t ∈ [ti , min{ti+1 , Tmax }) : ez (t) = 0}. Then differentiating ez (t) with respect to time for all t ∈ [ti , min{ti+1 , Tmax }) \ i , the following is obtained: d d d ez (t) ≤ = e z (t) (t) z 1 dt dt dt = Λs z 1 (t) − B1 Fez (t) − B1 K sign sz (ti ) + B1 d(t)) = −(Λs + B1 F)ez (t) + Λs z 1 (ti ) − B1 K sign sz (ti ) + B1 d(t) ≤ Λ1  ez (t) + Λs  z 1 (ti ) + B1  (K + d0 ), where in the last step, Λs + B1 F = Λ1 is used. The Comparison lemma is used to find the solutions of the above differential inequality with ez (ti ) = 0 [19]. So one can observe that

A Reduced-Order Model-Based Design of Event-Triggered Sliding-Mode Control

ez (t) ≤

 Λs z 1 (ti ) + B1 (K + d0 )  (Λ1 (t−ti )) e −1 Λ1 


(19)

for all t ∈ [ti , min{ti+1 , Tmax }). Using the triggering condition (10), one has − ) = σα, which results in lower bound ez (ti+1 Ti ≥

  σαΛ1  1 ln 1 + . Λs  z 1 (ti ) + B1  (K + d0 ) Λ1 

By repeating this argument for all subsequent triggering intervals, it can be concluded that there exists a uniform lower bound for the inter-event times in the interval [ti , Tmax ). With this conclusion, the behavior of the closed-loop system trajectory can be analyzed. Consider the Lyapunov function V1 (z 1 ) = z 1 P1 z 1 , where P1 is a solution to (6). Taking the time derivative of V1 (z 1 ) along the solutions of (15), it can be seen that for all t ∈ [ti , min{ti+1 , Tmax }), V˙1 (z 1 (t)) = z˙ 1 (t)P1 z 1 (t) + z 1 (t)P1 z˙ 1 (t)     = z 1 (t) Λ s P1 + P1 Λs z 1 (t) − 2z 1 (t)P1 B1 Fez (t) − 2z 1 (t)P1 B1 K sign(sz (ti )) + 2z 1 (t)P1 B1 d(t) = −z 1 (t)Q 1 z 1 (t) − 2z 1 (t)P1 B1 Fez (t) − 2z 1 (t)P1 B1 K sign(sz (ti )) + 2z 1 (t)P1 B1 d(t) ≤ −λm (Q 1 )z 1 (t)2 + 2z 1 (t)P1 B1 Fez (t) − 2z 1 (t)P1 B1 K sign(sz (ti )) + 2|sz (t)|d0

(20)

where (6) is used. Now, each term of the above equality is analyzed separately. The second term can be written using Young’s inequality and the triggering condition (10) as λm (Q 1 ) 2P1 B1 F2 z 1 2 + ez 2 2 λm (Q 1 ) λm (Q 1 ) 2P1 B1 F2 2 z 1 2 + α . < 2 λm (Q 1 )

2z 1 P1 B1 Fez  ≤

Define the set Sε = {z 1 ∈ Rr : |sz | ≤ ε} for ε > 0 defined as above. Note that if z 1 (t) is outside the set Sε for each t ≥ ti , then it holds that sign(sz (t)) = sign(sz (ti )). Therefore, it can be written that 2z 1 (t)P1 B1 K sign(sz (ti )) = 2K |sz (t)|. In the light of these observations, if the system trajectory is outside the set Sε , then λm (Q 1 ) 2P1 B1 F2 2 z 1 (t)2 + α − 2K |sz (t)| + 2d0 |sz (t)|. V˙1 (z 1 (t)) < − 2 λm (Q 1 )



Since (18) implies that K > d0 + η, one may obtain λm (Q 1 ) 2P1 B1 F 2 V˙1 (z 1 (t)) < − z 1 (t)2 + α − 2η|sz (t)| 2 λm (Q 1 ) λm (Q 1 ) 2P1 B1 F2 2 V1 (z 1 (t)) + α ≤− 2λ M (P1 ) λm (Q 1 ) 2

where, in the last step, the inequality V1 (z 1 ) ≤ λ M (P1 )z 1 2 is applied. On the other hand, if sz is inside the set S , then the identity sign(sz (t)) = sign(sz (ti )) does not hold for any t ≥ ti , and as a result of this, (20) becomes λm (Q 1 ) 2P1 B1 F2 2 z 1 (t)2 + α + 2K |sz (t)| + 2d0 |sz (t)| V˙1 (z 1 (t)) < − 2 λm (Q 1 ) which under the fact that |sz | ≤ ε reduces to λm (Q 1 ) 2P1 B1 F2 2 z 1 (t)2 + α + 2(K + d0 )ε. V˙1 (z 1 (t)) < − 2 λm (Q 1 ) Now, applying the inequality (16) in the above, we arrive at λm (Q 1 ) λm (Q 1 ) V1 (z 1 (t)) + λm (P1 )ε21 V˙1 (z 1 (t)) < − 2λ M (P1 ) 4λ M (P1 ) λm (Q 1 ) V1 (z 1 (t)), ≤− ∀ V1 (z 1 (t)) ≥ λm (P1 )ε21 . 4λ M (P1 ) The above differential inequality guarantees the existence of a finite time instant τ1 > 0 such that     z 1 (t) ∈ z 1 ∈ Rr : V1 (z 1 ) ≤ λm (P1 )ε21 ⊆ z 1 ∈ Rr : z 1  ≤ ε1 for all t ≥ τ1 . In other words, it can be concluded that z 1 (t) ≤ ε1 for all t ≥ τ1 . This shows that the dominant mode trajectory never leaves the set Ω1 . The convergence of the above trajectory also establishes the practical sliding mode in the system, which is shown below. Consider the Lyapunov function Vs (sz ) = (B1 P1 B1 )−1 sz2 /2. Taking the time derivative and applying the control input (13), it is seen that



V˙s (sz (t)) = (B1 P1 B1 )−1 sz s˙z = sz (t)(B1 P1 B1 )−1 B1 P1 Λs z 1 (t) − sz (t)Fez (t) − sz (t)K signsz (ti ) + sz (t)d(t). The same argument like before is borrowed in our analysis. If the trajectory sz (t) is outside the set Sε for all t ∈ [ti , ti+1 ), then the third term in the above becomes −K |sz (t)|. Using the event-triggering condition (10), the second term is bounded by |sz (t)|Fα. So, the above can be written as V˙s (sz (t)) ≤ |sz (t)| (B1 P1 B1 )−1 B1 P1 Λs z 1 (t) + |sz (t)|Fα − K |sz (t)| + |sz (t)|d0 .

For K satisfying (18), the dominant trajectory will not ensure the practical sliding mode unless the reachability condition is fulfilled. However, using the fact that z 1 (t) ≤ ε1 for all t ≥ τ1 , the following is achieved: V˙s (sz (t)) < |sz (t)| (B1 P1 B1 )−1 B1 P1 Λs ε1 + |sz (t)|Fα − K |sz (t)| + d0 |sz (t)| ≤ −η|sz (t)|. This guarantees the convergence of the system trajectory toward the sliding manifold Sε . However, once the trajectory hits the line sz = 0, it will cross the manifold until the control signal is updated. Indeed, it can be concluded that the trajectory can never leave the set Sε because   |sz (ti ) − sz (t)| ≤  B1 P1 ez (t) ≤ B1 P1 ez (t) ≤ B1 P1 α < ε due to (17). Thus, the existence of a practical sliding mode is established in the system. This completes the first part of the theorem. Now the proof of second part of the theorem is discussed. Consider the Lyapunov function V2 (z 2 ) = z 2 P2 z 2 . Taking the time derivative of V2 (z 2 ) along the trajectory of non-dominant dynamics of the plant, one gets V˙2 (z 2 (t)) = z˙ 2 (t)P2 z 2 (t) + z 2 (t)P2 z˙ 2 (t) = z˙ 2 (t)P2 (Λ2 z 2 (t) − B2 F z 1 (ti ) − B2 K sign sz (ti ) + B2 d(t)) + (Λ2 z 2 (t) − B2 F z 1 (ti ) − B2 K sign sz (ti ) + B2 d(t)) P2 z 2 (t) = −z 2 (t)Q 2 z 2 (t) − 2z 2 (t)P2 B2 F z 1 (ti ) − 2z 2 (t)P2 B2 (K sign sz (ti ) − d(t)) .

The last equality is obtained by using (14) and the control input (13). Now, employing the inequality $\lambda_m(Q_2)\|z_2\|^2 \le z_2^\top Q_2 z_2$ in the above, the following inequality is obtained:
$$\dot V_2(z_2(t)) \le -\lambda_m(Q_2)\|z_2(t)\|^2 + 2\|z_2(t)\|\,\|P_2 B_2\|\,\|F\|\,\|z_1(t_i)\| + 2\|z_2(t)\|\,\|P_2 B_2\|\,(K + d_0). \tag{21}$$


For further analysis, the second and third terms of the above inequality are treated separately by applying Young's inequality to them. This gives
$$2\|z_2\|\,\|P_2 B_2\|\,\|F\|\,\|z_1\| \le \frac{\lambda_m(Q_2)}{4}\|z_2\|^2 + \frac{4\|P_2 B_2\|^2\|F\|^2}{\lambda_m(Q_2)}\|z_1\|^2, \tag{22a}$$
$$2\|z_2\|\,\|P_2 B_2\|\,(K + d_0) \le \frac{\lambda_m(Q_2)}{4}\|z_2\|^2 + \frac{4\|P_2 B_2\|^2}{\lambda_m(Q_2)}(K + d_0)^2. \tag{22b}$$

From the first part of the theorem, it is known that $z_1(t) \in \Omega$ for all $t \ge 0$. Using this fact along with the above arguments, the boundedness of $z_2(t)$ within some bounded region in $\mathbb{R}^{n-r}$ for all $t \ge 0$ is obtained. Moreover, it has been proved that $\|z_1(t)\| \le \varepsilon_1$ for all $t \ge \tau_1$; using this result in (22a) and adding it to (22b), the following inequality is obtained:
$$2\|z_2\|\,\|P_2 B_2\|\,\big(\|F\|\,\|z_1\| + K + d_0\big) \le \frac{\lambda_m(Q_2)}{2}\|z_2\|^2 + \frac{\lambda_m(Q_2)}{4}\delta^2, \quad \text{where } \delta := \frac{4\|P_2 B_2\|}{\lambda_m(Q_2)}\sqrt{\|F\|^2\varepsilon_1^2 + (K + d_0)^2}.$$

Substituting the upper bound of each term in (21) with those derived above, the Lyapunov inequality reduces to
$$\dot V_2(z_2) \le -\frac{\lambda_m(Q_2)}{2}\|z_2\|^2 + \frac{\lambda_m(Q_2)}{4}\delta^2,$$
which in turn yields
$$\dot V_2(z_2) \le -\frac{\lambda_m(Q_2)}{4\lambda_M(P_2)} V_2(z_2), \qquad \forall\, V_2(z_2) \ge \lambda_m(P_2)\delta^2. \tag{23}$$

From this inequality, one can deduce that the state trajectories of the non-dominant dynamics starting within the set
$$\left\{ z_2 \in \mathbb{R}^{n-r} : V_2(z_2) \le \lambda_m(P_2)\delta_1^2 \right\} \subseteq \left\{ z_2 \in \mathbb{R}^{n-r} : V_2(z_2) \le c_2 \right\},$$
for any $\delta_1$ with $\delta < \delta_1 \le \sqrt{c_2/\lambda_m(P_2)}$, eventually enter the set
$$\left\{ z_2 \in \mathbb{R}^{n-r} : V_2(z_2) \le \lambda_m(P_2)\delta^2 \right\}$$
in a finite time $\tau_2$ and remain there for all future times. The expression for the finite time $\tau_2$ can be obtained by solving the differential inequality (23) with the help of the Comparison Lemma [19], and it is given by


$$\tau_2 := 8\,\frac{\lambda_M(P_2)}{\lambda_m(Q_2)}\,\ln\!\left(\frac{\delta_1}{\delta}\right) > 0,$$

and this ends the proof of the second part of Theorem 1.

6 Simulation Results

This section illustrates the design methodology of the reduced-order ETC for a practical example. We consider the magnetic tape control system ([22]), which has the following dynamical model:
$$\dot x = \begin{bmatrix} 0 & 0.4 & 0 & 0 \\ 0 & 0 & 0.345 & 0 \\ 0 & -52.4 & -46.5 & 26.2 \\ 0 & 0 & 0 & -100 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 100 \end{bmatrix} (u + d).$$
Here, the disturbance input is taken as $d(t) = 0.1\cos(t) + 0.1\sin(2t)$. It may be observed that the open-loop system has four eigenvalues located at $0$, $-0.3921$, $-46.1079$, and $-100$. Since the modes $-46.1079$ and $-100$ decay at a faster rate, they are taken as the non-dominant modes. Thus, the reduced-order model for the given system is obtained by considering $0$ and $-0.3921$ as the dominant modes. It may be observed that, using the transformation matrix
$$T = \begin{bmatrix} 1 & 0.8977 & 0.0001 & 0 \\ 0 & -0.8799 & -0.0075 & 0.0017 \\ 0 & 1 & 1 & -0.4881 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
one can obtain the reduced-order model as
$$\dot z_1 = \begin{bmatrix} 0 & 0 \\ 0 & -0.3921 \end{bmatrix} z_1 + \begin{bmatrix} 0.2 \\ -0.2256 \end{bmatrix} (u + d)$$
with associated aggregation matrix
$$C_a = \begin{bmatrix} 1 & 1.0289 & 0.0076 & 0.002 \\ 0 & -1.1462 & -0.0086 & -0.0023 \end{bmatrix}.$$
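The modal separation used above can be verified numerically. The short script below is only an illustrative cross-check (not part of the chapter's design procedure): it computes the open-loop eigenvalues of the magnetic tape model and splits them, using a hypothetical threshold, into dominant and non-dominant modes.

```python
import numpy as np

# Magnetic tape plant from the example above
A = np.array([[0.0,   0.4,   0.0,    0.0],
              [0.0,   0.0,   0.345,  0.0],
              [0.0, -52.4, -46.5,   26.2],
              [0.0,   0.0,   0.0, -100.0]])

eigvals = np.sort(np.linalg.eigvals(A).real)
print("open-loop eigenvalues:", np.round(eigvals, 4))
# expected (up to rounding): [-100., -46.1079, -0.3921, 0.]

# modes faster than this (hypothetical) threshold are treated as non-dominant
threshold = -10.0
print("dominant modes:    ", np.round(eigvals[eigvals > threshold], 4))
print("non-dominant modes:", np.round(eigvals[eigvals <= threshold], 4))
```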

In the proposed method, the sliding variable is designed for the reduced-order system using Lyapunov's method. First, we observe that the eigenvalues of the reduced-order system can be placed at $\{-0.3,\ -0.1\}$ using the feedback matrix $F = [0.3826\ \ 0.3041]$. Then the Lyapunov Eq. (6) is solved for $P_1$ with $Q_1 = I_2$, which results in


Fig. 2 Evolution of state trajectories

Fig. 3 Evolution of sliding variable

$$P_1 = \begin{bmatrix} 5.9205 & -0.5445 \\ -0.5445 & 1.6480 \end{bmatrix}.$$

This gives the sliding variable $s_z(z_1) = [1.3069\ \ -0.4807]\, z_1$. Choose $\varepsilon_1 = 0.73$ and $\varepsilon = 0.0239$. The triggering parameters are taken as $\alpha = 0.0166$ and $\sigma = 0.95$ using (17). Then, the switching gain is chosen as $K = 0.5253$, satisfying (18). Note that the choice of all the above parameters satisfies the inequality (16). Finally, we take $z(0) = [1\ \ 1\ \ 3\ \ -4]^\top$, for which one may verify that $x(0) = [1.8979\ \ -0.9091\ \ 5.9523\ \ -4]^\top$. The response of the closed-loop system with the proposed control strategy is shown in Figs. 2, 3, 4, and 5. In Fig. 2, the state trajectories of the full-order system are plotted. It is observed that the non-dominant trajectories are ultimately bounded, whereas the dominant mode trajectories remain within the desired bound. The chattering in the response is due to the high switching gain, which can be tuned to adjust the steady-state bounds. The convergence of the sliding variable to the sliding manifold is shown in Fig. 3. As the sliding function is designed for the reduced-order system, it moves with the reduced-order SMC toward the sliding manifold in finite time and remains bounded thereafter.


Fig. 4 Evolution of control input

Fig. 5 Evolution of inter-event time

The reduced-order control signal is plotted in Fig. 4. In the event-triggered implementation, the control signal is applied through a zero-order hold, i.e., it is held constant over each triggering interval and updated once an event is generated. The variation of the inter-event time is shown in Fig. 5. The total number of triggering instants generated by the proposed strategy is 221, compared to 400 for a periodic implementation with a sampling period of 0.05 s. It may be noted that the same steady-state bound has been taken for the sliding variable in both cases, and the sampling period is chosen such that it results in the same band.
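The numerical design steps of this example (pole placement for the reduced-order model, solving the Lyapunov equation with $Q_1 = I_2$, and forming the sliding-variable gain) can be reproduced with a few lines of code. The sketch below is a minimal illustration under the assumption that the sliding variable has the form $s_z = B_r^\top P_1 z_1$ and that the Lyapunov equation is solved for the closed-loop reduced-order matrix; the variable names are not taken from the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.signal import place_poles

# Reduced-order (dominant) model from the example
Ar = np.array([[0.0,  0.0],
               [0.0, -0.3921]])
Br = np.array([[0.2],
               [-0.2256]])

# Place the reduced-order eigenvalues at -0.3 and -0.1
F = place_poles(Ar, Br, [-0.3, -0.1]).gain_matrix
Ac = Ar - Br @ F                       # closed-loop reduced-order matrix (assumed form)

# Solve Ac^T P1 + P1 Ac = -Q1 with Q1 = I2
Q1 = np.eye(2)
P1 = solve_continuous_lyapunov(Ac.T, -Q1)

s_gain = Br.T @ P1                     # sliding-variable gain, s_z = s_gain @ z1
print("F      =", np.round(F, 4))      # approx. [0.3826, 0.3041]
print("P1     =\n", np.round(P1, 4))   # approx. [[5.9205, -0.5445], [-0.5445, 1.6480]]
print("s_gain =", np.round(s_gain, 4)) # approx. [1.3069, -0.4807]
```

Up to rounding, the printed values match $F$, $P_1$, and the sliding-variable gain quoted above.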

7 Conclusion

In this chapter, a reduced-order-model-based event-triggered SMC was proposed for an uncertain system. The reduced-order model obtained by the aggregation technique is used to design the (reduced-order) controller. The sliding variable is designed for this system using the Lyapunov-based approach. Both the event condition and the controller are designed using the reduced-order model.


The proposed strategy ensures the ultimate boundedness of the closed-loop trajectories with the reduced-order SMC. It was observed that the inter-event time increases significantly with the proposed strategy compared to a triggering condition developed with the full-order state vector. Additionally, the Lyapunov-based design of the SMC does not require a system representation in regular form and eliminates the need to estimate the region of attraction, as done in the method proposed in [22]. The method is validated by simulation of a magnetic tape control system with the proposed control law.

References

1. Årzén, K.E.: A simple event-based PID controller. 14th IFAC World Congr., Beijing, China 32(2), 8687–8692 (1999)
2. Anta, A., Tabuada, P.: Preliminary results on state-triggered stabilizing control tasks. In: Proc. 45th IEEE Conf. Decision and Control, pp. 892–897. San Diego, USA (2010)
3. Aoki, M.: Control of large-scale dynamic systems by aggregation. IEEE Trans. Autom. Control 13(3), 246–253 (1968)
4. Astolfi, A.: Model reduction by moment matching for linear and nonlinear systems. IEEE Trans. Autom. Control 55(10), 2321–2336 (2010)
5. Bandyopadhyay, B., Abera, A.G., Janardhanan, S., Sreeram, V.: Sliding mode control design via reduced order model approach. Int. J. Autom. Comput. 4(4), 329–334 (2007)
6. Bandyopadhyay, B., Behera, A.K.: Event-Triggered Sliding Mode Control: A New Approach to Control System Design. Studies in Systems, Decision and Control, vol. 139. Springer International Publishing, Cham, Switzerland (2018)
7. Bandyopadhyay, B., Unbehauen, H., Patre, B.: A new algorithm for compensator design for higher-order system via reduced model. Automatica 34(7), 917–920 (1998)
8. Behera, A.K., Bandyopadhyay, B.: Event-triggered sliding mode control for a class of nonlinear systems. Int. J. Control 89(9), 1916–1931 (2016)
9. Behera, A.K., Bandyopadhyay, B.: Robust sliding mode control: an event-triggering approach. IEEE Trans. Circuits Syst. II: Exp. Briefs 64(2), 146–150 (2017)
10. Behera, A.K., Bandyopadhyay, B., Cucuzzella, M., Ferrara, A., Yu, X.: A survey on event-triggered sliding mode control. IEEE J. Emerg. Sel. Topics Ind. Electron. 2(3), 206–217 (2021)
11. Behera, A.K., Bandyopadhyay, B., Yu, X.: Periodic event-triggered sliding mode control. Automatica 96, 61–72 (2018)
12. Behera, A.K., Shim, H.: Robust feedback stabilization using high-gain observer via event triggering. Int. J. Robust Nonlinear Control 30(5), 2097–2112 (2020)
13. Behera, A.K., Bandyopadhyay, B.: Event based robust stabilization of linear systems. In: IECON 2014 - 40th Annual Conference of the IEEE Industrial Electronics Society, pp. 133–138. IEEE (2014)
14. Cucuzzella, M., Ferrara, A.: Practical second order sliding modes in single-loop networked control of nonlinear systems. Automatica 89, 235–240 (2018)
15. Cucuzzella, M., Incremona, G.P., Ferrara, A.: Event-triggered variable structure control. Int. J. Control 93(2), 252–260 (2020)
16. Edwards, C., Spurgeon, S.: Sliding Mode Control: Theory and Applications. CRC Press (1998)
17. Girard, A.: Dynamic triggering mechanisms for event-triggered control. IEEE Trans. Autom. Control 60(7), 1992–1997 (2014)
18. Hutton, M., Friedland, B.: Routh approximations for reducing order of linear, time-invariant systems. IEEE Trans. Autom. Control 20(3), 329–337 (1975)
19. Khalil, H.K.: Nonlinear Systems. Prentice Hall (2002)
20. Kumari, K., Bandyopadhyay, B., Kim, K.-S., Shim, H.: Output feedback based event-triggered sliding mode control for delta operator systems. Automatica 103, 1–10 (2019)


21. Kumari, K., Bandyopadhyay, B., Reger, J., Behera, A.K.: Event-triggered discrete-time sliding mode control for higher-order systems via reduced-order model approach. 21st IFAC World Congr., Berlin, Germany 53(2), 6207–6212 (2020)
22. Kumari, K., Bandyopadhyay, B., Reger, J., Behera, A.K.: Event-triggered sliding mode control for a high-order system via reduced-order model based design. Automatica 121, 109163 (2020)
23. Lamba, S., Rao, S.: On suboptimal control via the simplified model of Davison. IEEE Trans. Autom. Control 19(4), 448–450 (1974)
24. Linnemann, A.: Existence of controllers stabilizing the reduced-order model and not the plant. Automatica 24(5), 719 (1988)
25. Liu, T., Jiang, Z.-P.: Event-based control of nonlinear systems with partial state and output feedback. Automatica 53, 10–22 (2015)
26. Åström, K.J., Bernhardsson, B.M.: Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In: Proceedings of the 41st IEEE Conference on Decision and Control, vol. 2, pp. 2011–2016. IEEE (2002)
27. Shamash, Y.: Stable reduced-order models using Padé-type approximations. IEEE Trans. Autom. Control 19(5), 615–616 (1974)
28. Tallapragada, P., Chopra, N.: On event triggered tracking for nonlinear systems. IEEE Trans. Autom. Control 58(9), 2343–2348 (2013)
29. Utkin, V.: Variable structure systems with sliding modes. IEEE Trans. Autom. Control 22(2), 212–222 (1977)
30. Wang, X., Fei, Z., Gao, H., Yu, J.: Integral-based event-triggered fault detection filter design for unmanned surface vehicles. IEEE Trans. Ind. Inf. 15(10), 5626–5636 (2019)
31. Wang, X., Lemmon, M.D.: Event-triggering in distributed networked control systems. IEEE Trans. Autom. Control 56(3), 586–601 (2010)
32. Wu, L., Gao, Y., Liu, J., Li, H.: Event-triggered sliding mode control of stochastic systems via output feedback. Automatica 82, 79–92 (2017)
33. Yesmin, A., Behera, A.K., Bera, M.K., Bandyopadhyay, B.: Dynamic event-triggering based design of sliding mode control. Int. J. Robust Nonlinear Control 31(12), 5910–5925 (2021)
34. Zheng, B.-C., Yu, X., Xue, Y.: Quantized feedback sliding-mode control: an event-triggered approach. Automatica 91, 126–135 (2018)

A Robust Approach for Fault Diagnosis in Railway Suspension Systems

Selma Zoljic-Beglerovic, Mohammad Ali Golkani, Martin Steinberger, Bernd Luber, and Martin Horn

Abstract Due to the increased popularity of railway transportation, maximizing the availability of both vehicles and infrastructure is in high demand. To fulfill this requirement, the development of efficient maintenance strategies is the focus of research in this area. Such strategies require a robust estimation and prediction of the health states of the vehicle's components. In this chapter, a sliding-mode-based algorithm is proposed as a solution for the identification of suspension parameters, which are credible representatives of the operational status. The accuracy of the estimation, the robustness to model uncertainties, and the sensitivity to faults are shown through different test scenarios in a simulation environment.

1 Introduction

Railroad transportation is the least polluting mode of transport, and owing to this positive environmental footprint its popularity is rising rapidly. Therefore, maximizing the availability of both infrastructure and vehicles has become a central interest within the railway industry.


Fig. 1 Pyramid of maintenance strategies

For the vehicle to be available at a maximal rate, all unnecessary or unplanned downtimes and outages must be either avoided or anticipated, which makes proper maintenance of great importance. Plant maintenance is a key process in every industrial field. The overall goal is to keep assets (e.g., vehicles, infrastructure, and tools) in good working condition by reducing breakdowns, increasing uptime, and promoting long-term reliability. Different maintenance strategies have been developed and can be intuitively arranged in a pyramid, as shown in Fig. 1. The chosen maintenance strategy defines the degree and frequency of the maintenance tasks to be carried out. The lowest level of the pyramid, i.e., the lowest maintenance plan, is a “no plan at all” form. This is the so-called reactive maintenance, which is based on a “run until it breaks” strategy and ultimately leads to failure. It is only suitable for noncritical or redundant assets that have little or no immediate impact on safety or operation. This approach may also be used for assets with minimal repair or replacement costs. Most vehicle and infrastructure operators employ at least preventive maintenance, where assets are serviced on a planned schedule that is based on operational statistics and manufacturers' recommendations. This approach ensures that an asset gets examined before it reaches the point of failure. The third level of the pyramid is the condition-based maintenance (CBM) strategy, which takes advantage of the asset's collected data to determine whether it requires servicing or not. This strategy is focused on the physical condition and operational status of the asset. If a measurable parameter can be defined and used as an indicator of pending problems based on a certain logic, then CBM is an ideal solution. If the asset does not have a failure pattern that increases with use or age, then a more advanced approach is required. This leads to the so-called predictive maintenance (PM), for which the fourth level of the maintenance pyramid is reserved. Often referred to as predictive analytics, this strategy monitors the performance of an asset and attempts to forecast possible failure occurrences. The sensor data collected within CBM are thus used as inputs to advanced modeling and estimation procedures and then compared with real-time operating data in order to detect and alert upon any sudden deviation from the asset's expected behavior.


In addition to this, PM employs aging characteristics (e.g., the stiffness of a spring decays over time) for prognostics. All these approaches together form the foundation for reliability-centered maintenance (RCM), which stands at the top of the pyramid. RCM, also referred to as prescriptive maintenance, is a comprehensive prognostics strategy focused on outcomes, and it is utilized to determine what should be done to ensure that an asset operates as intended [3, 13]. When it comes to the reduction of life-cycle costs, predictive maintenance is considered a desirable goal. But what exactly is predictive maintenance and how does it differ from other maintenance strategies? Which requirements must be fulfilled to implement it? And what benefits does it bring? These questions are answered in the following subsection.

1.1 Predictive Versus Preventive—Benefits and Requirements

As mentioned before, predictive maintenance is an advanced proactive approach that is based on permanent monitoring and evaluation of the collected asset data. The condition of a plant in operation is observed in order to predict future maintenance requirements, thereby avoiding malfunctions and making maintenance processes more efficient. The PM approach can therefore be summarized in three steps:
1. long-lasting or periodic collection and storage of plant and process data;
2. analysis and evaluation of the collected data;
3. probability analysis—forecasting future performance and the occurrence of certain events.
The aim is to predict the best time to perform maintenance—only if it is necessary and ideally before a malfunction occurs. At this stage, two main differences compared to preventive maintenance can be pointed out: (i) service is performed on demand, not at fixed scheduled intervals as in the preventive maintenance strategy; (ii) PM refers to the actual (monitored) condition of the plant, while preventive maintenance is based on average or expected service life. Thus, by avoiding unnecessary maintenance routines, the service life of the plant is increased while time and financial costs are decreased. Calculation of the optimal maintenance time leads to the avoidance of downtimes, as well as to efficient spare-parts and human-resources management. Therefore, if applied correctly, predictive maintenance brings numerous advantages that lead to a compelling improvement of plant productivity and performance (Fig. 2). To implement PM efficiently, several requirements must be met:
• The plant, as well as its environment, must be properly and early enough equipped with sensors in order to provide a sufficient amount of data, including historical data, for further evaluation.


Fig. 2 Benefits gained through predictive maintenance strategy

• For quality forecasting of maintenance routines, data on the plant status alone are not sufficient: aging characteristics, static data (on the rail network and the rolling stock), and service data (on previous maintenance actions) should also be collected and stored on a regular basis.
• Pre-processing of the data, which implies removing wrong values and adding missing ones, is of great importance. Analysis and interpretation of such well-prepared data enable valid statements on maintenance needs.

1.2 Problem Statement in Terms of Predictive Maintenance

Components of the railway vehicle, see Fig. 3, are mostly replaced after a certain time or mileage, regardless of whether they are defective or not. The current goal among vehicle and infrastructure managers is to have a highly efficient maintenance strategy for individual components that is based on their true state of health, in order to replace the conservative scheduled maintenance. This also means a reduction of inefficient preventive and reactive maintenance actions, which leads in the direction of condition-based and predictive maintenance. Besides an optimal service plan, it is also important that the chosen maintenance strategy ensures safety, reliability, and prompt detection or anticipation of both severe and small faults. As a result, improved performance as well as a reduction of overall costs should be achieved. A vehicle in operation should ideally be persistently observed, and a reliable decision basis for when to replace or maintain components should be established. If a condition-based maintenance strategy is applied, decisions are based on the collected data. To go one step further, i.e., to apply predictive maintenance, aging characteristics for each component are needed in addition to health monitoring in order to establish prognostic models and forecast the component behavior. A robust solution for both strategies would be a proper combination of different model-based and/or data-driven methods that is suitable for decision-making on the target component. Therefore, the identification of system parameters is an important feature, i.e., an input to the health models as well as to the prognostic models of the railway vehicle components.

Fig. 3 Main parts of a railway vehicle: carbody, bogie frame, wheelsets, primary and secondary suspensions, and track

Fig. 4 Simplified model of the entire vehicle consisting of one carbody, two bogies, and four wheelsets, along with the two suspension systems—primary and secondary

1.3 Related Work

Faults in the railway vehicle are reflected in a significant change of the system parameters. Therefore, the problem of parameter identification is often referred to in the literature as fault diagnosis and isolation (FDI). The railway vehicle is mostly represented in its simplified form, consisting of one carbody, two bogies, and four wheelsets, with two suspension levels—primary and secondary, as shown in Fig. 4. Different research groups have dealt with the topic of FDI for the described system, and both model-based and data-driven methods have been applied. Among the model-based methods, the most common are different versions of the Kalman filter (KF). KF approaches are applied to linearized models to detect faults in both the primary and secondary suspension and for different dynamics (e.g., vertical or lateral) [5, 8, 19]. In [6], a nonlinear suspension model is adopted for fault detection and isolation using the hybrid extended KF.


The Cubature KF avoids linearization issues, improves accuracy, and reduces the computational effort compared to other commonly used KFs; in [23, 25] it is tested on a simplified railway vehicle (sub)model. Besides the Kalman filter, recursive least-squares methods (see [11, 20]) and particle filters (see [7]) are often used for the same purpose. All listed methods ensure either exponential or asymptotic convergence to a neighborhood of the true value, and most of them enable the identification of constant or slowly varying parameters. An additional constraint is that most methods assume conditions that cannot be guaranteed by the railway system itself, such as specific noise characteristics or the exclusion of unmodeled dynamics and other unknown disturbances. Therefore, the sliding-mode-based parameter identification algorithm introduced in [14] is applied for parameter identification in a reduced subsystem of the railway vehicle within a simulation environment [24]. The finite-time convergence to a neighborhood of the true value was shown, as well as some fundamental advantages over other methods, such as low computational effort and straightforward tuning. In recent years, a second line of research has developed in the direction of data-driven methods. They are used either alone or in combination with model-based methods. In [1], a hybrid model framework is developed, where supervised machine learning is used to predict the faulty and healthy states of the suspension system components, after their nominal parameter values were identified using the model-based method. The authors of [16] based their railcar performance monitoring on a mutual-information estimation algorithm for feature extraction and support vector machines for classification. Data-driven classifiers are used in [4] to detect the reduced functionality of a component, based on real measurements. Even acoustic emissions from different vertical suspension faults are monitored and processed to extract features and generate data-driven models to perform FDI [18]. Data-driven fault diagnosis techniques are also often implemented for high-speed trains (see [2, 21, 22]). Data-driven algorithms are based on statistical models that are trained on the data and therefore capture all specific characteristics, represented as a multidimensional feature set that is later classified. Still, the performance of these methods strongly depends on the availability of healthy and faulty data. On the other hand, model-based methods take advantage of a priori knowledge of the underlying system, so that the states of all modeled components, as well as the values of the system parameters, are fully available at every time step. Thus, when both abundant historical data and knowledge of the physical process are available, leveraging the strengths of the data-driven and model-based methods in one prognostic framework can bridge the gap between them and contribute to robust and high-quality decision-making for maintenance strategies [10].

1.4 Outline of the Book Chapter

Section 2 gives an overview of the modeling of a simplified railway vehicle, as a first step toward model-based parameter identification. What must be taken into consideration when choosing the right model depth?


Which steps precede the parameter identification with a chosen method? After answering these questions in Sect. 2, Sect. 3 introduces the method itself and its advantages compared to other commonly used algorithms. What has to be detected, what is considered a fault, and what are the most common faults in railway vehicles? Which faults, i.e., parameter changes, are considered in the results section, and which requirements must be met in order to detect them? The results for different test scenarios are shown and assessed in Sect. 4, where conclusions on the presented work are also drawn.

2 Modeling of the Railway Vehicle Suspension System

The first step toward the successful application of a model-based method is the development of a model which captures all the essential dynamic characteristics of the considered problem [8]. In the literature, the railway vehicle is mostly represented in its simplified form, consisting of one carbody, two bogies, and four wheelsets, as shown in Fig. 4. The bogie, along with its suspension systems, provides some essential vehicle functions—it bears the load, transmits the traction and braking forces, steers the vehicle in a safe manner, and smooths out track irregularities. Moreover, it guarantees passenger comfort and operational safety. The primary suspension is located between the axles of the wheelsets on one side and the bogie frame on the other side. At each location point, one on each frame corner, there is a spring–damper element that acts like a tire on a car. The bogie is linked to the carbody via the secondary suspension, which in the considered vehicle consists of air springs and dampers. While the primary suspension absorbs track irregularities and limits their transmission to the car, the secondary suspension insulates the car or locomotive from vibrations and noise. Considering its role, the bogie can be regarded as the most safety-critical part of the vehicle, which makes prompt detection of any significant change in its operational status of great importance.

2.1 Overview of Model Variations

As stated before, the health status of the bogie and its suspension system is crucial for the safe and reliable operation of the railway vehicle. Therefore, a great part of the vehicle maintenance process is dedicated to it. The suspension parameters, i.e., the stiffness and damping coefficients of all primary and secondary suspension spring–damper elements, are credible representatives of the operational status of the vehicle. Thus, a robust identification of the suspension parameters provides a good decision basis for the maintenance strategy. When identifying parameters of a complex system, like a railway vehicle even in its simplified form, it is important to determine which model depth is sufficient to capture all relevant dynamic characteristics. In that sense, one can consider either the whole system (model) at once or just a part (submodel) of it.

Fig. 5 Modeling concept for a simplified railway vehicle model: choice of (sub)model and model depth, the parameter set to be identified (with or without isolation), the modeled dynamics (bounce, lateral, longitudinal, roll, pitch), the excitation (wheelset or bogie frame), and the physical model parameters (k, c, m, I, ...)

For a chosen (sub)model, different dynamics can be modeled—e.g., purely vertical/lateral, or one can also take rotational movements into consideration. Figure 5 gives an insight into the modeling concept, summarizing all steps that precede parameter identification with a chosen method. On the left side of the concept picture, there is an example of relevant submodels for a railway vehicle. For example, one can consider just the carbody as a submodel of the entire vehicle in order to identify the secondary suspension parameters, whereas displacements and velocities of a bogie are used as inputs to this submodel. Even for this submodel alone, one can decide on the appropriate dynamics to be modeled (e.g., vertical, lateral, or vertical in combination with roll and/or pitch dynamics). Thus, there are three important points that make a model relevant for a certain problem statement. Firstly, to choose the right (sub)model and its depth, it is important to decide which parameters exactly are to be identified and whether, in addition to fault detection, fault isolation is needed. Secondly, one has to decide on an appropriate excitation, e.g., wheelset excitation or excitation acting on the bogie frame. Finally, in the design stage, the physical model parameters must be set according to the vehicle that was used for recording the measurements; these vehicle parameters must be adapted for the chosen (sub)model, e.g., the carbody mass has to be halved for a half-vehicle model.

2.2 The Vertical Vehicle Suspension System

Here, the dynamical model of the simplified railway vehicle vertical suspension shown in Fig. 6 is presented. The equations for the bounce, i.e., vertical, dynamics of the carbody and both bogies read as

Fig. 6 Simplified railway vehicle with model states and parameters; the stiffness ($k_p$) and damping ($c_p$) constants are the same for all primary spring–damper elements

—carbody:
$$M\ddot z + 4c_s\dot z - 2c_s\dot z_1 - 2c_s\dot z_2 + 4k_s z - 2k_s z_1 - 2k_s z_2 = 0, \tag{1}$$
—leading bogie:
$$M_b\ddot z_1 - 2c_s\dot z + (4c_p + 2c_s)\dot z_1 - c_p(\dot d_{1r} + \dot d_{1l} + \dot d_{2r} + \dot d_{2l}) - 2k_s z + (4k_p + 2k_s) z_1 - k_p(d_{1r} + d_{1l} + d_{2r} + d_{2l}) = 0, \tag{2}$$
—trailing bogie:
$$M_b\ddot z_2 - 2c_s\dot z + (4c_p + 2c_s)\dot z_2 - c_p(\dot d_{3r} + \dot d_{3l} + \dot d_{4r} + \dot d_{4l}) - 2k_s z + (4k_p + 2k_s) z_2 - k_p(d_{3r} + d_{3l} + d_{4r} + d_{4l}) = 0, \tag{3}$$

where $z$, $z_1$, and $z_2$ denote the vertical displacements of the carbody, the leading bogie, and the trailing bogie, respectively, while $\dot z$, $\dot z_1$, and $\dot z_2$ are the corresponding velocities. $d_{ij}$ and $\dot d_{ij}$ are the vertical displacements and velocities of the wheels of the wheelsets, where $i \in \{1, 2, 3, 4\}$ denotes the wheelset and $j$ denotes the side ($l$—left or $r$—right). To simplify the problem, it is assumed that the vertical displacements of the wheels are equal to the irregularities/unevenness of the track. The parameters of the vertical vehicle suspension system are listed in Table 1.


Table 1 The parameters of the vehicle suspension system

Symbol | Description | Unit
M | Carbody mass | kg
M_b | Bogie mass | kg
k_p | Spring constant of the primary spring | kN/m
k_s | Spring constant of the air (secondary) spring | kN/m
c_p | Damping constant of the primary damper | kNs/m
c_s | Damping constant of the secondary damper | kNs/m

A state-space representation of the model is then given by
$$\dot{\mathbf z}(t) = A\mathbf z(t) + B\big(\mathbf u(t) + \mathbf u_d(t)\big), \qquad \mathbf y(t) = C\mathbf z(t) + \mathbf f_o(t), \tag{4}$$
where the state vector is defined as $\mathbf z = [z\ \ z_1\ \ z_2\ \ \dot z\ \ \dot z_1\ \ \dot z_2]^\top$, and $\mathbf u_d(t)$ and $\mathbf f_o(t)$ denote external disturbances, assumed to be bounded in magnitude, i.e., $\|\mathbf u_d\|_\infty \le \bar u_d$ and $\|\mathbf f_o\|_\infty \le \bar f_o$. In the model representation (4), the signals $\mathbf u(t)$ and $\mathbf y(t)$ are available; they represent the vertical track displacements and velocities due to track irregularities and the accelerations of the carbody and bogies, respectively, i.e.,
$$\mathbf u^\top = \big[ d_{1r}\ d_{1l}\ \dot d_{1r}\ \dot d_{1l}\ d_{2r}\ d_{2l}\ \dot d_{2r}\ \dot d_{2l}\ d_{3r}\ d_{3l}\ \dot d_{3r}\ \dot d_{3l}\ d_{4r}\ d_{4l}\ \dot d_{4r}\ \dot d_{4l} \big], \qquad \mathbf y^\top = \big[ \ddot z\ \ \ddot z_1\ \ \ddot z_2 \big].$$
The system matrices read as
$$A = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \frac{-4k_s}{M} & \frac{2k_s}{M} & \frac{2k_s}{M} & \frac{-4c_s}{M} & \frac{2c_s}{M} & \frac{2c_s}{M} \\ \frac{2k_s}{M_b} & \frac{-4k_p-2k_s}{M_b} & 0 & \frac{2c_s}{M_b} & \frac{-4c_p-2c_s}{M_b} & 0 \\ \frac{2k_s}{M_b} & 0 & \frac{-4k_p-2k_s}{M_b} & \frac{2c_s}{M_b} & 0 & \frac{-4c_p-2c_s}{M_b} \end{bmatrix},$$
the input matrix $B \in \mathbb{R}^{6\times 16}$, written row-wise for compactness, is
$$B = \begin{bmatrix} 0_{4\times 16} \\ b_1^\top \\ b_2^\top \end{bmatrix}, \quad b_1^\top = \tfrac{1}{M_b}\big[ k_p\ k_p\ c_p\ c_p\ k_p\ k_p\ c_p\ c_p\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0 \big], \quad b_2^\top = \tfrac{1}{M_b}\big[ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ k_p\ k_p\ c_p\ c_p\ k_p\ k_p\ c_p\ c_p \big],$$
and
$$C = \begin{bmatrix} \frac{-4k_s}{M} & \frac{2k_s}{M} & \frac{2k_s}{M} & \frac{-4c_s}{M} & \frac{2c_s}{M} & \frac{2c_s}{M} \\ \frac{2k_s}{M_b} & \frac{-4k_p-2k_s}{M_b} & 0 & \frac{2c_s}{M_b} & \frac{-4c_p-2c_s}{M_b} & 0 \\ \frac{2k_s}{M_b} & 0 & \frac{-4k_p-2k_s}{M_b} & \frac{2c_s}{M_b} & 0 & \frac{-4c_p-2c_s}{M_b} \end{bmatrix}.$$
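For readers who want to experiment with the model, the sketch below assembles the matrices $A$, $B$, and $C$ of (4) numerically. The parameter values are placeholders chosen only for illustration, since Table 1 does not list numerical values; they are not the vehicle data used in the chapter.

```python
import numpy as np

# Illustrative construction of the vertical-suspension state-space matrices (4).
# The parameter values below are hypothetical placeholders, NOT the chapter's vehicle data.
M, Mb = 40000.0, 5000.0          # carbody / bogie mass [kg]       (hypothetical)
kp, ks = 2400.0e3, 700.0e3       # primary / secondary stiffness [N/m]  (hypothetical)
cp, cs = 30.0e3, 60.0e3          # primary / secondary damping [Ns/m]   (hypothetical)

A = np.zeros((6, 6))
A[0:3, 3:6] = np.eye(3)          # kinematic part: derivatives of the displacements
A[3, :] = [-4*ks/M, 2*ks/M, 2*ks/M, -4*cs/M, 2*cs/M, 2*cs/M]
A[4, :] = [2*ks/Mb, -(4*kp + 2*ks)/Mb, 0, 2*cs/Mb, -(4*cp + 2*cs)/Mb, 0]
A[5, :] = [2*ks/Mb, 0, -(4*kp + 2*ks)/Mb, 2*cs/Mb, 0, -(4*cp + 2*cs)/Mb]

b = np.array([kp, kp, cp, cp, kp, kp, cp, cp]) / Mb   # wheel inputs of one bogie
B = np.zeros((6, 16))
B[4, 0:8] = b                    # leading-bogie inputs
B[5, 8:16] = b                   # trailing-bogie inputs

C = A[3:6, :]                    # measured accelerations of carbody and bogies

print("A eigenvalues:", np.round(np.linalg.eigvals(A), 3))
```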

Model (4) is written based on the assumption that all stiffness and damping parameters on the primary and/or secondary suspension level have the same value; these parameters are denoted by $k_p$, $c_p$, $k_s$, and $c_s$. If this assumption does not hold, an extension of the model description needs to be made. For instance, $4k_s$ in the carbody vertical dynamics (1) is replaced with the sum $(k_{s11} + k_{s12} + k_{s21} + k_{s22})$, where $k_{s11}$, $k_{s12}$, $k_{s21}$, and $k_{s22}$ represent the front/right, front/left, rear/right, and rear/left stiffness, respectively. Analogously, all secondary suspension parameters are denoted by $k_{sij}$ for stiffness and $c_{sij}$ for damping, where $i \in \{1, 2\}$ denotes front and rear, while $j \in \{1, 2\}$ denotes the side (1—right or 2—left). For the primary suspension parameters, the same notation is applied, with the difference that $i$ ranges from 1 to 4, i.e., $i \in \{1, 2\}$ for the front and rear of the leading bogie and $i \in \{3, 4\}$ for the front and rear of the trailing bogie.
In order to check to which extent this model captures the dynamics of the whole vehicle, a multibody dynamics simulation (MBD) model of the real vehicle implemented in Simpack [17] is used. Simpack is a general-purpose multibody systems software that is used to simulate the nonlinear motion of a mechanical or mechatronic system. Its main purpose is to solve the (nonlinear) equations of motion numerically and to calculate and visualize the resulting motion of the bodies and the coupling forces. Thus, a model of the real vehicle was used to generate the data for the simulations and the algorithm evaluation, as well as for this model check. Figure 7 shows the analysis of the positions, i.e., displacements, and velocities of the carbody, the leading bogie, and the trailing bogie in the frequency domain. It can be seen how the energy of these signals is distributed over a range of frequencies. It is important to emphasize that the Simpack data are generated with the whole vehicle model, where all degrees of freedom, i.e., vertical, lateral, longitudinal, and rotational movements, as well as system nonlinearities, are taken into account, while the Matlab [12] data are generated with the purely vertical model of the simplified vehicle presented in this section. Nevertheless, it is clear that even this simplified model captures the behavior of the whole vehicle model to a great extent; therefore, it will be used for the further analysis.
When it comes to fault isolation, it is required to detect on which exact component the change has happened. Since the carbody of a railway vehicle is a rigid body and the dynamic coupling between the two bogies via the soft secondary suspensions is small [9], for the isolation task in the secondary suspension one has to take into account not only the purely vertical movement of the center of gravity (CG) but also the movements caused by rotations around the x and y axes. Therefore, the model shown in Fig. 8 is used, where the displacements and velocities of all four carbody corners are considered. The displacements and velocities of the leading and trailing bogie are in this case taken as inputs to the considered submodel.

Fig. 7 Model check—vertical vehicle dynamics (Matlab) versus whole vehicle model (Simpack): position and velocity analysis in the frequency domain

Fig. 8 Carbody as a submodel for the isolation task on the secondary suspension level

Fig. 9 Model check—vertical dynamics of the carbody subsystem (Matlab) versus carbody of the whole vehicle model (Simpack): position and velocity analysis in the frequency domain

The behavior of this submodel is again compared with the behavior of the carbody in the whole vehicle model, and the satisfactory performance over the whole frequency range is shown in Fig. 9.
As defined in (4), y is available for measurement and represents the accelerations of the rigid bodies, i.e., the carbody and the bogies. If displacements and velocities of the centers of gravity of the rigid bodies are to be used as inputs to some submodel or as state variables in the model, the available measurement data, i.e., the accelerations, must be pre-processed. The concept for the pre-processing of the measurement data is shown in Fig. 10. Due to the numerical integration of the acceleration signal, drift appears in the velocity and displacement signals. Therefore, a high-pass filter with an appropriate cutoff frequency $f_c$ is desirable as an addition to the integrator. A series connection of an integrator $G_{\mathrm{int}}(s)$ and a high-pass filter $G_{HP}(s)$ can ideally be interpreted as a first-order low-pass filter $G_{LP}(s)$, which reads as
$$G_{\mathrm{int}}(s)\cdot G_{HP}(s) = \frac{1}{s}\cdot\frac{sT}{1+sT} = G_{LP}(s)\cdot T \quad \text{with } T = \frac{1}{2\pi f_c}.$$

Thus, both the first and the second integration of the acceleration signal are carried out using a low-pass filter (marked in red in Fig. 10).

Fig. 10 Filtering concept for the sensor data, i.e., accelerations


In addition, there is a scaling factor of $T$, which stands for the filter time constant of all high- and low-pass filters that affect the integration. In most applications, high-pass filtering is required before the integration in order to eliminate DC components. This function is also taken into account in the presented concept (marked in blue in Fig. 10), with the cutoff frequency of the low-pass filter corresponding exactly to the cutoff frequency of the high-pass filter required for the integration.
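A minimal numerical sketch of this pre-processing chain is given below. It assumes first-order Butterworth high-pass filters for DC removal and the low-pass-based integration $T\,G_{LP}(s)$ described above; the cutoff frequency and the synthetic test signal are illustrative choices, not values from the chapter.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 200.0                 # sampling frequency [Hz] (as in the chapter's simulations)
fc = 0.5                   # cutoff frequency [Hz] (illustrative choice)
T = 1.0 / (2.0 * np.pi * fc)

# first-order high-pass (DC removal) and low-pass (integration surrogate) filters
b_hp, a_hp = butter(1, fc, btype="high", fs=fs)
b_lp, a_lp = butter(1, fc, btype="low", fs=fs)

def hp(x):
    return lfilter(b_hp, a_hp, x)

def integrate(x):
    # integrator + high-pass  ~  T * first-order low-pass
    return T * lfilter(b_lp, a_lp, x)

# synthetic acceleration: 1 Hz vibration plus a small offset that would cause drift
t = np.arange(0.0, 60.0, 1.0 / fs)
acc = 0.5 * np.sin(2.0 * np.pi * 1.0 * t) + 0.01

vel = integrate(hp(acc))           # drift-free velocity estimate
disp = integrate(hp(vel))          # drift-free displacement estimate
print(vel[-5:], disp[-5:])
```

Cascading hp() and integrate() twice yields drift-free velocity and displacement estimates, at the price of suppressing signal content below $f_c$.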

3 Parameter Identification

The system matrices depend on the system parameters, i.e., the stiffness and damping constants, which can be either known or unknown with respect to the considered test case. If these parameters are considered unknown and have to be identified, the system becomes nonlinear, and identification methods that are suitable for nonlinear system classes are required. The generalization of the super-twisting algorithm introduced in [14] as a nonlinear recursive version of the least-squares algorithm has been employed in [24] to identify constant parameters of a two-mass oscillator, i.e., one eighth of the simplified vehicle model. In this chapter, it is shown how to identify time-varying as well as constant parameters of the whole vehicle in finite time. A discontinuous gradient algorithm [15] is applied and is therefore briefly introduced with regard to the problem statement.

3.1 Discontinuous Gradient Algorithm

If the considered system can be described as
$$\dot\theta(t) = \Theta(\omega t), \qquad r(t) = \Gamma^\top(\omega t)\,\theta(t) + \varepsilon(t), \tag{5}$$

where $r \in \mathbb{R}^m$ is the output available for measurement and $\theta \in \mathbb{R}^n$ is the vector of unknown parameters to be identified, then the unknown parameters, whether constant or time-varying, can be identified in finite time by applying the nonlinear algorithm presented in [15]. The disturbance $\varepsilon(t) \in \mathbb{R}^m$ is assumed to be a Lebesgue-measurable signal, essentially bounded with $\|\varepsilon\|_\infty \le \bar\varepsilon$, where $\bar\varepsilon$ is a known positive constant. The regressor $\Gamma : \mathbb{R} \to \mathbb{R}^{n\times m}$ is a known matrix of bounded piecewise continuous functions, which is assumed to be persistently exciting. The unknown $\Theta : \mathbb{R} \to \mathbb{R}^n$ is a uniformly bounded Lebesgue-measurable signal with $\|\Theta(\omega t)\|_\infty \le g(\omega t) \le \Lambda$, $\forall t$, for a known continuous function $g : \mathbb{R} \to \mathbb{R}_{\ge 0}$ and a known positive constant $\Lambda$, while $\omega \in [-\omega_0, \omega_0]$ is the frequency of the time-varying part, with $\omega_0 > 0$. If all assumptions hold, the estimated parameter vector $\hat\theta$ is obtained through


$$\dot{\hat\theta} = -K\,\Gamma(\omega t)\,|e|^{\gamma}\,\operatorname{sgn}(e), \qquad e = \Gamma^\top(\omega t)\,\hat\theta(t) - r(t), \tag{6}$$

where $\gamma \in [0, 1)$ is set to zero in case the measurement noise is negligible; a value different from zero and less than one is assigned to $\gamma$ if the measurements are subject to noise. A symmetric and positive definite matrix $K \in \mathbb{R}^{n\times n}$, i.e., $K = K^\top > 0$, which can be a diagonal matrix, needs to be selected appropriately. It is noted that the number of observer gains to be tuned can be identical to the number of parameters to be identified. Moreover, a decoupled identification is possible, i.e., each observer gain can be selected according to the corresponding parameter. This contributes significantly to straightforward tuning and to a more precise identification when there is an immense difference between the parameter characteristics. The residual $e$ represents the difference between the measured plant output and the output of the observer. In [14], it is shown that the error dynamics (at steady state) is short-finite-time stable, applying the homogeneity property of the system, and the convergence of $e$ to zero in finite time is guaranteed for $\varepsilon = 0$. Its global short-finite-time stability is proved therein by means of a Lyapunov function. If $\varepsilon \ne 0$, with a proper choice of $K$ and $\gamma$ the error $e$ converges to a neighborhood of the origin. Since the system modeled in Sect. 2.2 can be represented as in (5), the presented algorithm can be used for the identification of the unknown parameters of a simplified railway vehicle. In this application, $\omega = 1$ and the convergence of the estimation error $e$ is guaranteed as proved in [15]. As shown in the previous section in Figs. 7 and 9, the system trajectory is not constant and therefore the persistence-of-excitation condition is satisfied. The secondary suspension parameters $k_{sij}$ and $c_{sij}$ can be identified using (1), i.e., the fourth channel of the system (4). If the measurable output in (5) is taken as $r = \ddot z$ (acceleration of the carbody), then the regressor $\Gamma$ and the parameter vector to be identified $\theta$ read as
$$\Gamma(t) = \big[ z\ \ z_1\ \ z_2\ \ \dot z\ \ \dot z_1\ \ \dot z_2 \big]^\top, \qquad \theta = \Big[ \tfrac{-k_{s11}-k_{s12}-k_{s21}-k_{s22}}{M}\ \ \tfrac{k_{s11}+k_{s12}}{M}\ \ \tfrac{k_{s21}+k_{s22}}{M}\ \ \tfrac{-c_{s11}-c_{s12}-c_{s21}-c_{s22}}{M}\ \ \tfrac{c_{s11}+c_{s12}}{M}\ \ \tfrac{c_{s21}+c_{s22}}{M} \Big]^\top. \tag{7}$$

The regressor vector defined in (7) is sufficiently exciting if it is assumed that all stiffness and damping parameters have the same values, i.e., $k_{s11} = k_{s12} = k_{s21} = k_{s22} = k_s$ and $c_{s11} = c_{s12} = c_{s21} = c_{s22} = c_s$. Otherwise, not only the dynamics of the CG of the rigid bodies but also the movements caused by rotations around the $x$ and $y$ axes have to be considered. In that case, the regressor and its accompanying unknown parameter vector are defined as
$$\Gamma(t) = \big[ z_{Ir}\ z_{Il}\ z_{IIr}\ z_{IIl}\ z_{1r}\ z_{1l}\ z_{2r}\ z_{2l}\ \dot z_{Ir}\ \dot z_{Il}\ \dot z_{IIr}\ \dot z_{IIl}\ \dot z_{1r}\ \dot z_{1l}\ \dot z_{2r}\ \dot z_{2l} \big]^\top,$$
$$\theta = \Big[ \tfrac{-k_{s11}}{M}\ \tfrac{-k_{s12}}{M}\ \tfrac{-k_{s21}}{M}\ \tfrac{-k_{s22}}{M}\ \tfrac{k_{s11}}{M}\ \tfrac{k_{s12}}{M}\ \tfrac{k_{s21}}{M}\ \tfrac{k_{s22}}{M}\ \tfrac{-c_{s11}}{M}\ \tfrac{-c_{s12}}{M}\ \tfrac{-c_{s21}}{M}\ \tfrac{-c_{s22}}{M}\ \tfrac{c_{s11}}{M}\ \tfrac{c_{s12}}{M}\ \tfrac{c_{s21}}{M}\ \tfrac{c_{s22}}{M} \Big]^\top. \tag{8}$$
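As a quick consistency check of the regression form, the short script below verifies symbolically that, when all corner parameters are identical, $\Gamma^\top\theta$ from (7) reduces exactly to the carbody acceleration given by (1). This is only a sanity check added here for illustration.

```python
import sympy as sp

# Check that Gamma^T * theta from (7), with identical corner parameters,
# equals the carbody acceleration obtained from equation (1).
z, z1, z2, zd, z1d, z2d = sp.symbols('z z1 z2 zdot z1dot z2dot')
M, ks, cs = sp.symbols('M k_s c_s', positive=True)

Gamma = sp.Matrix([z, z1, z2, zd, z1d, z2d])
# theta from (7) with k_s11 = ... = k_s22 = k_s and c_s11 = ... = c_s22 = c_s
theta = sp.Matrix([-4*ks/M, 2*ks/M, 2*ks/M, -4*cs/M, 2*cs/M, 2*cs/M])

r = (Gamma.T * theta)[0]                      # predicted carbody acceleration
zdd_from_eq1 = (-4*cs*zd + 2*cs*z1d + 2*cs*z2d - 4*ks*z + 2*ks*z1 + 2*ks*z2) / M
print(sp.simplify(r - zdd_from_eq1))           # -> 0
```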

As aforementioned, the primary suspension parameters can be identified through (2) and (3), i.e., the fifth and sixth channels of the system (4). The identification of the leading-bogie suspension parameters is done using (2), where $r = \ddot z_1$ (acceleration of the leading bogie), while (3) is used for the identification of the trailing-bogie parameters, where $r = \ddot z_2$ (acceleration of the trailing bogie). In this case, the regressor $\Gamma$ is the same as in (7), and the parameter vectors $\theta$ are given in the following form:
—leading bogie:
$$\theta = \Big[ \tfrac{2k_s}{M_b}\ \ \tfrac{-4k_p-2k_s}{M_b}\ \ 0\ \ \tfrac{2c_s}{M_b}\ \ \tfrac{-4c_p-2c_s}{M_b}\ \ 0 \Big]^\top, \tag{9}$$
—trailing bogie:
$$\theta = \Big[ \tfrac{2k_s}{M_b}\ \ 0\ \ \tfrac{-4k_p-2k_s}{M_b}\ \ \tfrac{2c_s}{M_b}\ \ 0\ \ \tfrac{-4c_p-2c_s}{M_b} \Big]^\top. \tag{10}$$

In addition to the decoupled identification of the primary and secondary suspension parameters, it is also possible to identify them simultaneously by combining the channels defined by Eqs. (7) or (8), (9), and (10), which yields a significant improvement of the identification when the measurements are subject to noise.
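To illustrate how the estimator (6) behaves in practice, the following sketch applies it to a toy regression problem with one slowly varying and one constant parameter. The regressor, gains, and signals are hypothetical and unrelated to the railway model; the sketch only demonstrates the update law $\dot{\hat\theta} = -K\,\Gamma\,|e|^\gamma\,\operatorname{sgn}(e)$ with a simple explicit Euler discretization.

```python
import numpy as np

# Toy illustration of the discontinuous gradient estimator (6):
#   theta_hat_dot = -K * Gamma(t) * |e|^gamma * sign(e),  e = Gamma(t)^T theta_hat - r.
# The regressor and the "true" parameters below are illustrative only.
dt, t_end = 1e-3, 20.0
t = np.arange(0.0, t_end, dt)

theta_true = np.vstack((2.0 + 0.3 * np.sin(0.5 * t),   # slowly time-varying parameter
                        -1.5 * np.ones_like(t)))        # constant parameter

def gamma_reg(tk):
    # persistently exciting regressor, shape (n, m) with n = 2, m = 1
    return np.array([[np.sin(tk)], [np.cos(2.0 * tk)]])

K = np.diag([30.0, 30.0])      # one observer gain per parameter (hypothetical values)
gamma = 0.0                     # 0 for (almost) noise-free measurements, (0, 1) otherwise

theta_hat = np.zeros(2)
for k, tk in enumerate(t):
    G = gamma_reg(tk)
    r = G.T @ theta_true[:, k]                        # measured output, shape (1,)
    e = G.T @ theta_hat - r                           # residual
    theta_hat += dt * (-K @ G @ (np.abs(e) ** gamma * np.sign(e)))  # Euler step of (6)

print("estimate:", np.round(theta_hat, 3), " true:", np.round(theta_true[:, -1], 3))
```

With $\gamma = 0$ the update is a pure sign (unit-gradient) step, so the estimate chatters within a band of roughly $K\,\Delta t$ around the true values; choosing $\gamma \in (0, 1)$ softens the update when the measurements are noisy.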

4 Test Cases

Changes in system parameters are caused either by external disturbances or by internal physical damage, and after reaching a certain threshold they are considered faults. Both primary and secondary suspension components are subject to faults; therefore, in this chapter both primary and secondary suspension parameters are identified with respect to the defined scenario. The numerical simulations are carried out in Matlab/Simulink, while the data were generated using Simpack. For the simulation, the constant forward speed is set to 45 m/s and the sampling frequency of the measurements is set to f = 200 Hz.

4.1 Fault-Free Case

The goal of advanced maintenance strategies like predictive and prescriptive maintenance is to define expectations of parameter changes; in reality, without such strategies, the changes in system parameters are unpredictable and therefore cannot be prescribed in the observer gain definition.


Thus, the observer gains are tuned for a nominal, i.e., fault-free, scenario, since satisfactory performance in this scenario is a first step toward the successful detection of possible changes in the system parameters in other, i.e., faulty, scenarios. Nominal parameter values are provided by the vehicle manufacturer, and their normalized values are indicated by dashed lines in all figures. All parameter estimates are also presented as normalized values, i.e., divided by the respective nominal value. A two-stage identification process is applied, meaning that known values of the secondary suspension parameters are used in the test case where the primary suspension parameters are to be identified, and vice versa. Note that the initial condition of the observer is set to 70% of the nominal value, i.e., it is assumed that no exact knowledge of the parameter to be identified is available. It is also assumed that all components on the primary/secondary suspension level have the same stiffness/damping constants, so just one stiffness and one damping value are identified per suspension level. Simulation response curves of the sliding-mode (SM) estimator presented in Sect. 3 for the fault-free scenario are shown in Figs. 11 and 12 and are denoted by the subscript SM. For the purpose of comparison, the estimates obtained with the hybrid extended Kalman filter (denoted by the subscript KF) are shown in the same figures. Mean values, deviations from the nominal, i.e., expected, values, and standard deviations are evaluated over the last fourth of the simulation time. Both estimates, i.e., the SM and KF results, converge in finite time to a neighborhood of the expected value, and a good precision of estimation is achieved. Despite slightly slower convergence, the SM algorithm outperforms the KF in terms of precision of estimation. If all primary and/or secondary suspension parameters are to be estimated separately, the extended version of the system model (4) is considered, where all parameters $k_{pij}$, $c_{pij}$, $k_{sij}$, and $c_{sij}$ are included. Response curves of the sliding-mode estimator are shown in Figs. 13 and 14. If all parameters are estimated separately, the number of observer gains to be tuned corresponds to the number of parameters to be estimated, i.e., 16 for the primary and 8 for the secondary suspension. In KF algorithms, one has to tune several covariance matrices: the process (Q) and measurement noise (R) covariance matrices, where the process covariance matrix is associated with the states (Q_x) and the parameters to be identified (Q_p). Hence, if only diagonal elements are set, there are 28 values to be tuned for the primary suspension and 20 values for the estimation on the secondary suspension level; see Table 2. SM also outperforms KF in the time of estimation, since it is on average 10 times faster. A comparison of the sliding-mode and Kalman filter performance is given in Fig. 15. These methods are compared based on four criteria:

• parametrization effort, i.e., tuning effort measured by the number of values to be set;
• computational effort, i.e., the average time of estimation;
• accuracy of estimation, i.e., the deviation of the estimate from the expected value;
• convergence speed, i.e., how fast the estimate settles in a neighborhood of the expected value.

Fig. 11 Estimation of primary suspension parameters in a fault-free test case; all components on the primary suspension level have the same stiffness/damping constants (annotated statistics: SM mean 1.00, error 0.00%, std 0.000 versus KF mean 1.04, error 4.39%, std 0.001 in the first panel; SM mean 1.00, error 0.00%, std 0.000 versus KF mean 1.00, error 0.41%, std 0.001 in the second panel)

Fig. 12 Estimation of secondary suspension parameters in a fault-free test case; all components on the secondary suspension level have the same stiffness/damping constants (annotated statistics: SM mean 1.00, error 0.03%, std 0.002 versus KF mean 1.00, error 0.35%, std 0.000 in the first panel; SM mean 1.00, error 0.09%, std 0.002 versus KF mean 0.97, error 3.01%, std 0.000 in the second panel)

Fig. 13 Estimation of primary suspension parameters in a fault-free test case; all primary suspension parameters estimated separately

Fig. 14 Estimation of secondary suspension parameters in a fault-free test case; all secondary suspension parameters estimated separately


Table 2 Computational and parametrization effort of the SM and KF algorithms

Algorithm | Average time of est. | Parametrization (prim. susp.) | Parametrization (sec. susp.)
SM | 4.20 s | 16 observer gains | 8 observer gains
KF | 51.06 s | diagonal elements of Q_x (6×6), Q_p (16×16), R (6×6) | diagonal elements of Q_x (6×6), Q_p (8×8), R (6×6)

Fig. 15 Comparison of SM and KF performance (normalized scores for parametrization effort, computational effort, accuracy of estimation, and convergence speed)

Sliding mode outperforms the Kalman filter on the first three criteria—it requires less parametrization (ca. 50%) and computational (ca. 90%) effort and has a slightly higher accuracy of estimation. On the other hand, it converges more slowly (ca. 60%) than the KF to the true value.

4.2 Faulty Case

In the second scenario, a fault is introduced in the stiffness on the secondary suspension level, i.e., a 50% reduction of the secondary stiffness is introduced in the front/right secondary spring starting from t = 0 s. Since the focus is on the estimation and isolation of the parameters $k_{sij}$ and $c_{sij}$, $i \in \{1, 2\}$, $j \in \{1, 2\}$, in the secondary suspension of the vehicle, the carbody submodel presented in Sect. 2 is considered for this test case. As mentioned in Sect. 2.2, the movements caused by rotations around the x and y axes must be taken into consideration. Therefore, the corresponding state $z$ also needs to be replaced with the vector $[z_{Ir}\ \ z_{Il}\ \ z_{IIr}\ \ z_{IIl}]^\top$,


Fig. 16 Estimation of secondary suspension parameters when a fault of 50% reduced secondary stiffness is introduced in the front/right secondary spring starting from t = 0 s

where $I$ stands for the front of the carbody (over the leading bogie), $II$ stands for the rear of the carbody (over the trailing bogie), and the sides are denoted by $r$ (right) and $l$ (left). In Fig. 8, all the aforementioned states and parameters are illustrated. Having fed the data generated through Simpack for the faulty scenario to the estimator of the secondary suspension parameters, the simulation results are depicted in Fig. 16. The initial condition of the observer is set to zero, i.e., it is assumed, as in the fault-free case, that no exact prior knowledge of the parameter values is available. Since eight parameters need to be identified here, the eight observer gains are chosen according to the nominal values of the parameters. In this case, the measurement noise is assumed to be negligible and, therefore, zero is assigned to $\gamma$. A satisfactory performance in terms of fault detection and estimation accuracy can be seen. Due to the introduced fault and the coupling between components, the estimates converge only to a neighborhood of the nominal or expected values. In this section, it is shown that the chosen algorithm can cope with the model uncertainties that were analyzed in Sect. 2 and depicted in Figs. 7 and 9. Also, in Sect. 4.2 an example of fault detection is presented and the sensitivity to faults is demonstrated. Therefore, sliding mode can be considered a promising candidate for real-time application during vehicle operation.


4.3 Outlook

In this book chapter, a sliding-mode-based parameter identification algorithm is presented as a solution for fault detection in a railway suspension system. A general introduction to railway vehicle maintenance and the importance of prompt fault detection is given in Sect. 1. In Sect. 2, an overview of the modeling procedure for the simplified vehicle is presented and the dynamical model of the vertical suspension system is derived. Faults are reflected in changes of the system parameters; therefore, parameters of both the primary and secondary suspension levels are identified using a sliding-mode-based algorithm that is briefly introduced in Sect. 3. A discontinuous gradient algorithm is applied and assessed through fault-free and faulty scenarios, whose details are given in Sect. 4. In that section, a comparison with a hybrid extended Kalman filter (HEKF) is given, and the advantages of sliding-mode techniques are highlighted. It is shown that the presented sliding-mode approach outperforms the HEKF on three out of four criteria, i.e., parametrization effort, computational effort, and accuracy of estimation. Finite convergence times are achieved in both fault-free and faulty scenarios, with no or only a small deviation from the expected value. Sliding-mode techniques are a promising solution for fault detection and isolation, and therefore a valuable contributor to the decision basis for condition-based and predictive maintenance in the railway field.


Flight Evaluation of a Sliding-Mode Fault-Tolerant Control Scheme Halim Alwi, Lejun Chen, Christopher Edwards, Ahmed Khattab, and Masayuki Sato

Abstract This chapter describes the development and application of sliding-mode fault-tolerant control schemes to safety critical aerospace scenarios. There is growing literature exploiting the specific and unique properties of sliding modes in the field of fault-tolerant control, but many of the results are more theoretical in nature, and relatively little work has been published describing real implementations of these ideas. This chapter focuses on the development of fault-tolerant sliding-mode controllers for a class of linear parameter varying systems. This class of systems is commonly employed to model different aerospace systems, and so represents a natural starting point for these developments. The chapter describes the implementation of these ideas on a small, unmanned quadrotor UAV, and also piloted flight tests on a full-scale twin-engine civil aircraft.

©2018 IEEE. Reprinted with permission from C. Edwards, L. Chen, A. Khattab, H. Alwi, M. Sato, "Flight evaluations of sliding mode fault tolerant controllers", Proceedings of the 15th International Workshop on Variable Structure Systems (VSS), pp. 180–185, 2018.

H. Alwi · C. Edwards (B) · A. Khattab: The College of Engineering, Mathematics & Physical Sciences, University of Exeter, Exeter, UK

L. Chen: Department of Electronic and Electrical Engineering, University College London, London, UK

M. Sato: The Faculty of Advanced Science and Technology, Kumamoto University (although the work described here was conducted while he worked for JAXA), Kumamoto, Japan

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_18


1 Introduction

This chapter aims to demonstrate that schemes based on sliding-mode control ideas represent a viable practical approach to designing fault-tolerant controllers for safety-critical systems. It will specifically demonstrate these ideas in the area of flight control, for both manned and unmanned aircraft.

When Otto Lilienthal flew a heavier-than-air vehicle for the first time in 1891, he implicitly created a new challenge: controlled flight. This in particular necessitated lateral stabilization using a rudder [1]. The first "controlled flight" took place in 1914, conducted by Lawrence Sperry [1]. This represented the first autopilot and led eventually to modern fly-by-wire systems utilizing feedback control [2]. However, feedback control schemes are not inherently able to handle sensor/actuator/component failures, which usually result in a degraded system response and can even lead to loss of stability. Currently, in civil aircraft, fault tolerance is achieved using redundant hardware. Typically, critical flight control surfaces utilize three hydraulic actuators, with three associated hydraulic lines and three independent hydraulic pumps [3]. Likewise, critical sensor systems also exploit triple redundancy: i.e., a quantity is measured by three separate devices from which a consolidated (single) signal measurement is obtained, which is then used in the flight control computer [3]. As an alternative, an analytic redundancy paradigm can be pursued, which offers the possibility of mitigating faults and/or failures without replicating hardware [3]. This has advantages in terms of saving weight and space. The main objective of Fault-Tolerant Control (FTC) is to mitigate faults and failures and maintain a level of (possibly degraded) performance [4].

Unmanned Aerial Vehicles (UAVs) are occupying increasingly important and widening roles in many military and civil applications. They often undertake missions regarded as too dull, dirty, or dangerous (so-called 3D) for humans, and are used for reconnaissance in dangerous environments such as nuclear power plants, polluted environments, and forest fires. Recently, UAVs have been considered for package delivery (see Fig. 1) and for providing internet coverage in rural areas (High-Altitude Long-Endurance vehicles). There is also increasing interest in exploiting UAVs for autonomous self-flying passenger transport (taxi drones). The growth of potential applications in civil aerospace has created the problem of ensuring UAVs can operate safely in the absence of human pilots. Often, in this situation, it is the collateral damage to people and property which is more important than the damage or loss of the vehicle itself. Over previous decades, in manned passenger aircraft, catastrophe in emergency situations has often only been avoided by the skill of the pilots [5–7]. In un-piloted UAVs, there is consequently the need for automatic safety features to allow, for example, a UAV to make an emergency landing in a potentially unknown residential environment (i.e., airworthiness-like approval) [8].


Fig. 1 UAV civil applications: cargo transport and delivery

2 Fault-Tolerant Control Flight Tests

Fault-tolerant control has been extensively investigated in an academic setting; however, reported implementation and validation results, particularly for aerospace systems, are relatively rare. It is much more likely that fault-tolerant control will be implemented on UAVs before this technology is adopted for manned civil aircraft, due to the absence of pilots. The following sections discuss the implementation and validation of fault-tolerant control schemes on UAVs and civil aircraft which have appeared in the open literature.

2.1 FTC Hardware Implementation: Civil and Fighter Aircraft

Some of the earliest published descriptions of results from implementing FTC schemes on civil aircraft are from NASA's Propulsion Controlled Aircraft (PCA) project [9]. In this work, a control system was created to maneuver the aircraft using only engine thrust. The project investigated using collective thrust (increasing thrust to facilitate climb and decreasing collective thrust to cause descent) to effect pitch control, while heading (and roll) changes were achieved using the left and right engines in a differential fashion. The schemes were flight tested on a McDonnell Douglas MD-11 transport aircraft (Fig. 2). Other work at NASA Dryden, the Intelligent Flight Control System (IFCS) project [10], aimed to identify online the aircraft aerodynamic and stability parameters in fault or failure conditions during flight. This information was then used to reconfigure the controller to safely land the aircraft. The neural-network-based system was flight tested on a specially modified McDonnell Douglas F-15B Eagle on 6 December 2002.


Fig. 2 McDonnell Douglas MD-11: landing under engine power only. NASA Dryden Flight Research Center Photo Collection, photo by J. Ross [9]

Fig. 3 JAXA MuPAL-α experimental airplane (reproduced with permission from JAXA)

FTC schemes have also been investigated for military aircraft. In the Integrated Resilient Aircraft Control (IRAC) project [11], a modified F-18 was flown using adaptive control algorithms to maintain a safe flight regime despite structural damage, control surface failure, and the associated changes to the aerodynamic characteristics. Outside of the USA, the recent "Validation of Integrated Safety-enhanced Intelligent flight cONtrol (VISION)" project aimed to increase TRLs (Technology Readiness Levels) by implementing FDD and FTC schemes on the Japan Aerospace Exploration Agency (JAXA)'s MuPAL-α experimental aircraft (Fig. 3) [12] and the USOL K-50 UAV (Fig. 4) [13]. JAXA's MuPAL-α aircraft is a piloted passenger aircraft (a Dornier Do228-202) which has been used to flight test flight control systems. These include Simple Adaptive Control [14], H∞ Control [15, 16], Sliding-Mode Control (SMC) [17, 18], and an Indirect Adaptive Control strategy exploiting online parameter estimation [19].


Fig. 4 K50-Advanced UAV platform

2.2 FTC Hardware Implementation: Multirotor UAV

Descriptions of implementations of FTC schemes on UAVs are much more prevalent in the literature, and rapidly expanding. Small lab-based multirotor systems are popular testbeds. Their manageable size allows them to be tested safely in indoor environments.

2.2.1 FTC on Quadrotors

The most popular platform for testing FTC is the quadrotor, and the literature in this area is growing rapidly. Many FTC paradigms have been adopted. In [20], a gain-scheduled PID was evaluated in different fault situations, where the gains were selected based on the tracking error and actuator health; the test platform was a Qball-X4 developed by Quanser Inc., modified from a quadrotor into a hexacopter. Linear Quadratic Regulator (LQR) designs and Model Reference Adaptive Control (MRAC) schemes have also been tested in [20] in the presence of propeller damage. In [21], passive and active sliding-mode FTC was tested experimentally. In [22], active FTC for a quadrotor was reported, where model predictive control was combined with horizon estimation and an unscented Kalman filter for parameter estimation. In [23], an SMC scheme was employed for active FTC, while in [24] an SMC approach was used for passive FTC; it was demonstrated in [24] that similar performance could be achieved by careful tuning. Quadrotors have inherent limitations as platforms to demonstrate FTC because of their lack of redundancy: FTC implementations on quadrotors tend to focus on faults and not total actuator failures. That said, [25, 26] constitute a small body of work considering total rotor failures for a quadrotor. Control strategies cannot deal with the total failure of one rotor if the objective is to retain nominal performance. Typically,


in fault-free situations, the three body rates (roll, pitch, and yaw) plus the collective heave command are tracked by independently manipulating the rotational speeds of the four propellers. If one of the propellers fails, tracking of four independent commands cannot be maintained. In this situation, in [26] the UAV operates with unrestrained rotation, or a constant rotational speed, about the yaw (vertical) axis, allowing the healthy rotors to maintain accurate control of the roll and pitch angles. The implementation results demonstrated that, even in this situation, a safe landing could still be achieved.

2.2.2 Octocopters or Hexarotors for FTC

In these platforms, the redundancy in terms of actuators provides the capacity to deal with total failures [27]. In [28], an octocopter UAV with a co-axial configuration was used to investigate the potential of a Fault Detection and Isolation (FDI) scheme based on a sliding-mode observer, which provides information to a decision-maker. The decision-maker selects which specific PID configuration should be online at any one particular time (from a collection of possible candidates designed a priori based on different potential fault scenarios). In [29], adaptive estimators were used to detect and then isolate sensor faults (including biases) in the accelerometer and gyroscope. In [30], a hexacopter was tested indoors using a Vicon motion-tracking arrangement. The FTC scheme exploits a parametric programming control allocator to automatically redistribute control signals when faults/failures occur.

3 LPV Sliding-Mode Controller Design

The Linear Parameter Varying (LPV) methodology is a popular paradigm within the aerospace community. It originally emanated from the ad hoc gain scheduling technique [31] and first appeared in the thesis of Shamma [31]. The main advantages of the LPV approach are (i) rigorous stability and performance throughout the whole parameter region is guaranteed using linear control theory; and (ii) the interpolation of controllers designed about isolated independent design points is avoided, obviating the need for extensive a posteriori checking of performance via simulation. From the LPV representation, controllers can be designed and analyzed based on natural extensions of well-established linear techniques. Furthermore, they represent a framework which facilitates rigorous stability analysis and helps guarantee robust performance across a wide range of operating values. Compared with general linear time-varying systems, the specific structural properties of LPV systems can be exploited to ease the laborious effort required for designing controllers valid for the whole set of operating conditions. Many different paradigms have been exploited in the literature to create FTC schemes [4]. These include LQR approaches [32], H∞ methods [33], adaptive control [34], and model predictive control [35]. Many of these have been applied to LPV systems (partly due to the acceptance of the LPV approach within the aerospace community).


Perhaps unsurprisingly, given the nature of safety-critical systems, most of the new FTC schemes developed within academia have only been demonstrated in simulation. SMC schemes [3] have attracted attention as candidate FTC schemes because of their inherent ability to reject matched uncertainty—i.e., uncertainty occurring in the “channels” in which the control signals act. Actuator faults and failures, by their very nature, occur in these channels, and thus represent a particular class of matched uncertainty [36]. To widen their capabilities further, SMC elements have been incorporated within Control Allocation (CA) frameworks (for over-actuated systems). This allows the resulting scheme to handle total actuator failures [3, 37]. The schemes described in this chapter will be based on LPV models of aircraft. LPV-based synthesis methodologies [38–42] have been widely considered in aerospace applications, however in terms of the development of SMC schemes for LPV systems, very little established work exists in the open literature. A notable exception is the recent work from [43] which considers a class of quasi-LPV systems scheduled by the states and formulates the sliding surface in a convex framework.

3.1 Problem Formulation

In this section, a sliding-mode FTC scheme with online CA is described from a theoretical standpoint based on an LPV model of the system.¹ Consider the system

$$\begin{aligned} \dot x_p(t) &= A_p(\rho)x_p(t) + B_p(\rho)\big(I_m - K(t)\big)u(t) + D_p(\rho)\,\xi(t,x) \\ y(t) &= C_p x_p(t), \end{aligned} \qquad (1)$$

where $A_p(\rho) \in \mathbb{R}^{n\times n}$, $B_p(\rho) \in \mathbb{R}^{n\times m}$ and $C_p \in \mathbb{R}^{l\times n}$. The states and the control inputs are denoted by $x_p \in \mathbb{R}^n$ and $u \in \mathbb{R}^m$, while $y(t) \in \mathbb{R}^l$ (where l < m) represents the controlled output. This actuator redundancy will be exploited to achieve FTC in the case of a class of failures. The disturbance distribution matrix $D_p(\rho) \in \mathbb{R}^{n\times k}$, and the unknown but bounded signal $\xi(t,x) \in \mathbb{R}^k$, which is assumed to be Lebesgue measurable, denotes a "matched" disturbance (see Assumption 2). The scheduling parameter $\rho \in \mathbb{R}^{n_r}$ is assumed to be differentiable and to lie in a hyper-rectangle $\Omega \subset \mathbb{R}^{n_r}$. In this chapter, it is assumed that ρ is available and

$$\Omega := \big\{(\omega_1, \ldots, \omega_{n_r}) : \omega_i \in \{\underline{\rho}_i, \overline{\rho}_i\}\big\} \qquad (2)$$

are used to represent the $N = 2^{n_r}$ vertices of the hyper-rectangle associated with the bounds on ρ. In (1), the matrix $K(t) := \mathrm{diag}(k_1(t), \ldots, k_m(t))$ and the time-varying scalars $k_1(t), k_2(t), \ldots, k_m(t)$ model the loss of actuator effectiveness [3]. This is a

¹ Adapted from L. Chen, H. Alwi, C. Edwards, and M. Sato, "Flight evaluation of a sliding mode online control allocation scheme for fault tolerant control", Automatica, vol. 114, 2020. Originally published under a CC BY 4.0 license; https://doi.org/10.1016/j.automatica.2020.108829.


specific form of one of the fault models used extensively in the literature [44, 45]. A fault-free actuator is modeled as $k_j(t) = 0$, while for a completely failed actuator $k_j(t) = 1$. When $0 < k_j(t) < 1$, the actuator behaves with reduced effectiveness (i.e., it is faulty but has not failed). The following assumptions are used throughout the chapter.

Assumption 1 Assume the system matrix $A_p(\rho)$ in (1) is affinely dependent on ρ (this assumption can be relaxed for the matrices $B_p(\rho)$ and $D_p(\rho)$), so that

$$A_p(\rho) = A_{p,0} + A_{p,1}\,\rho_1(t) + \ldots + A_{p,n_r}\,\rho_{n_r}(t). \qquad (3)$$

Assumption 2 The uncertainty in (1) is matched: specifically, the range spaces satisfy $\mathcal{R}(D_p(\rho)) \subseteq \mathcal{R}(B_p(\rho))$ for all ρ ∈ Ω.

Assumption 3 The uncertainty ξ(t, x) in (1) satisfies

$$\|\xi(t,x)\| \le \alpha(t,x) \qquad (4)$$

for a known positive function α(t, x) which is assumed to be bounded. Define

$$W(t) := I_m - K(t) \qquad (5)$$

and, as a consequence, the ith diagonal element of W(t) is $w_i(t) = 1 - k_i(t)$, where $w_i(t) \in [0, 1]$.

Assumption 4 Assume the input distribution matrix can be factorized as

$$B_p(\rho) = B_v B_2(\rho) \qquad (6)$$

where $B_v \in \mathbb{R}^{n\times l}$ is a fixed matrix with full column rank, and the time-varying matrix $B_2(\rho) \in \mathbb{R}^{l\times m}$. Furthermore, assume $\mathrm{rank}(B_2(\rho)) = l$ for all ρ ∈ Ω. Variations on this factorization are common in the FTC literature when considering control allocation and systems with redundancy (see, for example, [46, 47]). In this chapter, this factorization allows an ideal form of control allocation to be exploited, and a "virtual" control signal will be created of dimension l (compared to the total number of actuators m). By design, the control allocation mechanism will preferentially distribute the control effort to the healthy actuators.

Assumption 5 It will be assumed that the fault W(t) is such that

$$\det\!\big(B_2(\rho)W(t)B_2^T(\rho)\big) \neq 0. \qquad (7)$$

Remark 1 Because m > l, Assumption 5 allows W (t) to lose rank and yet still satisfy condition (7). This redundancy will be exploited in the CA framework discussed in the sequel.


Since by assumption rank(B_v) = l, there exists a coordinate change of the form $x_p \mapsto T_n x_p = x$ such that

$$T_n B_v = \begin{bmatrix} 0 \\ I_l \end{bmatrix} \qquad (8)$$

In the literature, this decomposition of the input distribution matrix results in so-called regular form [48]. Such a canonical representation is often used as the starting point for the design of SMC. In the new coordinate system, the equations in (1) can be written as

$$\dot x(t) = A(\rho)x(t) + \begin{bmatrix} 0 \\ B_2(\rho) \end{bmatrix} W(t)u(t) + \begin{bmatrix} 0 \\ D_2(\rho) \end{bmatrix}\xi(t,x) \qquad (9)$$

where $A(\rho) = T_n A_p(\rho)T_n^{-1}$ and $D_2(\rho) \in \mathbb{R}^{l\times k}$. This special structure of the uncertainty distribution matrix is due to Assumption 2. Define a virtual control input $v(t) \in \mathbb{R}^l$ according to

$$v(t) := B_2(\rho)W(t)u(t) \qquad (10)$$

The objective is to first design the virtual control v(t) to provide appropriate closed-loop performance, and then to compute the physical control law u(t) so that Eq. (10) is satisfied. Substituting from (10) into (9) yields

$$\dot x(t) = A(\rho)x(t) + \begin{bmatrix} 0 \\ I_l \end{bmatrix} v(t) + \begin{bmatrix} 0 \\ D_2(\rho) \end{bmatrix}\xi(t,x) \qquad (11)$$

Equation (11) will be used as the basis for the design of the virtual control v(t). Here, it is proposed that the control signals sent to the physical actuators are given by

$$u(t) := N(\rho)v(t), \qquad (12)$$

where the allocator matrix is chosen as

$$N(\rho) := \Lambda(t)B_2(\rho)^T\big(B_2(\rho)\Lambda(t)^2 B_2(\rho)^T\big)^{-1}. \qquad (13)$$

In (13), $\Lambda(t) \in \mathbb{R}^{m\times m}$ is any diagonal weighting matrix such that $\det(B_2(\rho)\Lambda(t)^2 B_2(\rho)^T) \neq 0$. Here, Λ(t) is a diagonal matrix and represents an estimate of the true effectiveness matrix W(t). The motivation for the allocation matrix N(ρ) is discussed later in Remark 2; it provides a mechanism for dealing with total failures when the matrix W loses rank and becomes singular. Here, it is assumed the estimate Λ(t) is created by the monitoring scheme. Invariably, Λ(t) will not be identical to W(t), but it is assumed to satisfy the inequality defining the set


$$\mathcal{W}_\varepsilon := \Big\{ \Lambda(t)\;:\; \lambda_{\min}\!\big(B_2(\rho)\Lambda(t)^2 B_2(\rho)^T\big) > \varepsilon \ \text{ for all } \rho \in \Omega \Big\}, \qquad (16)$$

where 0 < ε < 1 is a small design scalar. Then, for any Λ(t) ∈ W_ε, $\det(B_2(\rho)\Lambda(t)^2 B_2(\rho)^T) \neq 0$ and the allocation structure N(ρ) in (13) is well defined.

Remark 3 The larger ε, the more stringent the constraint in (16) and the smaller the allowable set W_ε, in the sense that if two scalars satisfy ε₁ > ε₀ > 0 then $\mathcal{W}_{\varepsilon_1} \subset \mathcal{W}_{\varepsilon_0}$. It follows that N(ρ) is bounded for all Λ(t) ∈ W_ε and ρ ∈ Ω because, from its definition in (13),

$$\|N(\rho)\| \le \|\Lambda B_2(\rho)^T\|\,\big\|\big(B_2(\rho)\Lambda^2 B_2(\rho)^T\big)^{-1}\big\| < \frac{1}{\varepsilon}\,\|B_2(\rho)\|. \qquad (17)$$

To facilitate the control law design, partition the states as $x(t) = \mathrm{col}(x_1(t), x_2(t))$, where $x_1(t) \in \mathbb{R}^{(n-l)}$ and $x_2(t) \in \mathbb{R}^{l}$; then (11) can be written as

$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} A_{11}(\rho) & A_{12}(\rho) \\ A_{21}(\rho) & A_{22}(\rho) \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ I_l \end{bmatrix} v(t) + \begin{bmatrix} 0 \\ D_2(\rho) \end{bmatrix}\xi(t,x) \qquad (18)$$

Note Eq. (18) is in regular form. The first stage in the development of a conventional sliding-mode controller is the choice of the sliding surface. The approach which will be used is a natural extension of the viewpoint in [1, 49].


3.2 Definition of the Switching Function

Define a parameter-dependent switching function as

$$s(t) = S(\rho)x(t) \qquad (19)$$

Exploiting the regular form structure in (18), choose

$$S(\rho) := \begin{bmatrix} M(\rho) & I_l \end{bmatrix}, \qquad (20)$$

where $M(\rho) \in \mathbb{R}^{l\times(n-l)}$ represents the design freedom to be synthesized during the design process. During a sliding motion, s(t) = 0, and therefore, from (19) and (20), the two state components in (18) satisfy

$$x_2(t) = -M(\rho)x_1(t) \qquad (21)$$

Substituting (21) into (18) yields

$$\dot x_1(t) = \underbrace{\big(A_{11}(\rho) - A_{12}(\rho)M(\rho)\big)}_{A_s(\rho)}\, x_1(t). \qquad (22)$$

The dynamics associated with (22) constitute the sliding motion. In the sequel, two methods (an LPV pole placement and an LPV linear quadratic regulator approach) will be proposed to calculate M(ρ). The choice of M(ρ) can be thought of as an LPV state feedback problem for the pair (A11(ρ), A12(ρ)) in (22). Here, the polytopic LPV-based approach is adopted as the basis for the developments (although other options in the literature could be pursued: see, for example, [40]). Let $\bar A_{11,i}$ and $\bar A_{12,i}$ represent the values of A11(ρ) and A12(ρ) at the ith vertex of Ω, so that

$$A_{11}(\rho) = \sum_{i=1}^{N} p_i(\rho)\,\bar A_{11,i}, \qquad A_{12}(\rho) = \sum_{i=1}^{N} p_i(\rho)\,\bar A_{12,i} \qquad (23)$$

and

$$M(\rho) = \sum_{j=1}^{N} p_j(\rho)\,\bar M_j, \qquad (24)$$

where $N = 2^{n_r}$ represents the number of vertices of Ω, $\bar M_j$ represents the jth vertex of M(ρ), and the (time-)differentiable scalars $p_i(\rho)$ satisfy the unit simplex conditions

$$\sum_{i=1}^{N} p_i(\rho) = 1 \quad \text{and} \quad p_i(\rho) \ge 0 \qquad (25)$$
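As a minimal numerical sketch (not taken from the chapter), the snippet below computes the simplex coordinates $p_i(\rho)$ of (25) for a two-parameter hyper-rectangle Ω using tensor-product (bilinear) vertex weights, and interpolates M(ρ) from hypothetical vertex gains $\bar M_j$ as in (24). The parameter ranges and vertex gains are illustrative placeholders.

```python
import numpy as np
from itertools import product

def vertex_weights(rho, rho_min, rho_max):
    """Bilinear (tensor-product) simplex coordinates p_i(rho) for a hyper-rectangle.

    The weights are non-negative and sum to one, as required by (25).
    Vertices are ordered by itertools.product over (lower, upper) per parameter.
    """
    rho, rho_min, rho_max = map(np.asarray, (rho, rho_min, rho_max))
    lam = (rho - rho_min) / (rho_max - rho_min)      # normalized position in [0, 1]
    weights = []
    for corner in product((0, 1), repeat=len(rho)):
        w = 1.0
        for lam_i, c in zip(lam, corner):
            w *= lam_i if c == 1 else (1.0 - lam_i)
        weights.append(w)
    return np.array(weights)

# Hypothetical example: n_r = 2, so N = 4 vertices, each vertex gain of size l x (n-l).
rho_min, rho_max = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
M_bar = [np.array([[1.0, 0.0]]), np.array([[1.5, 0.2]]),
         np.array([[0.8, -0.1]]), np.array([[1.2, 0.3]])]

rho = np.array([0.3, -0.4])                          # current scheduling parameter
p = vertex_weights(rho, rho_min, rho_max)
M_rho = sum(p_j * Mj for p_j, Mj in zip(p, M_bar))   # M(rho) as in (24)
assert np.isclose(p.sum(), 1.0) and np.all(p >= 0)
```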


Define

$$\Psi_{ij} = \bar A_{11,i} P_1 - \bar A_{12,i} Y_j \qquad (26)$$

where $P_1$ is a symmetric positive definite (s.p.d.) matrix and $Y_j$ represents a matrix of appropriate dimension.

Theorem 1 ([17]) If there exist a symmetric positive definite matrix $P_1$ and matrices $Y_j$ such that the LMIs

$$\Psi_{ij} + \Psi_{ij}^T < 0 \qquad (27)$$

are feasible for all $i, j = 1, \ldots, N$, then the system in (22) is quadratically stable if, in (24), the $\bar M_j$ are defined as $\bar M_j = Y_j P_1^{-1}$.

Corollary 2.1 ([17]) Let a real symmetric matrix $L_1$ and a square matrix $L_2$ be appropriately selected a priori. If there exist an s.p.d. matrix $P_1$ and matrices $Y_j$ such that the LMIs

$$L_1 \otimes P_1 + L_2 \otimes \Psi_{ij} + L_2^T \otimes \Psi_{ij}^T < 0 \qquad (28)$$

are feasible for all $i, j = 1, \ldots, N$, then for any frozen ρ, the poles of the system in (22), in which the $\bar M_j$ are defined as $\bar M_j = Y_j P_1^{-1}$, are located in the LMI region

$$\mathcal{D} = \{z \in \mathbb{C} : L_1 + z L_2 + \bar z L_2^T < 0\} \qquad (29)$$

parameterized by the matrices $L_1$ and $L_2$.

Proof The proof is similar to the one in [50].
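To make the vertex conditions concrete, the following sketch checks the feasibility of the LMIs (27) for randomly generated vertex matrices and recovers the vertex gains $\bar M_j = Y_j P_1^{-1}$. It uses the CVXPY package with toy dimensions; neither the solver choice nor the data come from the chapter, and with random data the LMIs may simply be infeasible.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n1, l, N = 3, 1, 2                     # dim(x1), dim(x2), number of vertices (toy sizes)
A11 = [rng.standard_normal((n1, n1)) for _ in range(N)]   # hypothetical vertex data
A12 = [rng.standard_normal((n1, l)) for _ in range(N)]

P1 = cp.Variable((n1, n1), symmetric=True)
Y = [cp.Variable((l, n1)) for _ in range(N)]

eps = 1e-4
constraints = [P1 >> eps * np.eye(n1)]
for i in range(N):
    for j in range(N):
        Psi = A11[i] @ P1 - A12[i] @ Y[j]                   # Psi_ij as in (26)
        constraints.append(Psi + Psi.T << -eps * np.eye(n1))  # strict version of (27)

prob = cp.Problem(cp.Minimize(0), constraints)              # pure feasibility problem
prob.solve()

if prob.status == cp.OPTIMAL:
    P1_val = P1.value
    M_bar = [Yj.value @ np.linalg.inv(P1_val) for Yj in Y]  # M_bar_j = Y_j P1^{-1}
```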



In the sliding-mode literature for linear time-invariant (LTI) systems, one of the early methods for sliding surface design was based on quadratic optimal control; in fact, a particular form of LQR exploiting "cheap control" [48, 51]. Specifically: minimize, by choice of M(ρ), the cost

$$J = \int_{t_s}^{\infty} x_1(\tau)^T Q_{11}\, x_1(\tau) + x_1(\tau)^T M(\rho)^T Q_{22} M(\rho)\, x_1(\tau)\; d\tau \qquad (30)$$

subject to the system (22), where $Q_{11}$ and $Q_{22}$ are s.p.d. and $t_s$ is the time at which sliding occurs. Equation (30) is an extension of Eq. (4.40) in Sect. 4.2.2 of [48] from an LTI setting to an LPV formulation.

Theorem 2 ([17]) Suppose there exist an s.p.d. matrix $P_1$ and matrices $Y_j$ such that the LMIs

$$\begin{bmatrix} \Psi_{ij}+\Psi_{ij}^{T} & Y_j^{T}Q_{22} & P_1 Q_{11} \\ * & -Q_{22} & 0 \\ * & * & -Q_{11} \end{bmatrix} < 0 \qquad (31)$$

[…] is a small positive scalar, a sliding motion takes place on S in finite time.

Remark 4 A super-twisting differentiator [53] can be used to estimate ρ̇ (which is required to obtain Ṁ(ρ) in (34)) in the case it is not measurable. Notice that any estimation error associated with ρ̇ is a "matched" uncertainty, since it appears in the control channels. Hence, through tuning of the modulation gain K, the sliding motion can still be maintained in the presence of imperfect ρ̇ estimates.
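As a hedged illustration of Remark 4 (a generic sketch only; the gains, Lipschitz bound, and discretization are assumptions, not the implementation used in the chapter), an explicit-Euler super-twisting differentiator producing an estimate of ρ̇ from samples of ρ might look as follows.

```python
import numpy as np

def super_twisting_differentiator(rho_samples, dt, L=10.0, lam1=1.5, lam2=1.1):
    """Estimate the derivative of a sampled scalar signal rho(t).

    L is an assumed bound on |rho_ddot| (a tuning choice); lam1, lam2 are
    standard super-twisting gains. Returns the sequence of rho_dot estimates.
    """
    z1 = rho_samples[0]          # estimate of rho
    z2 = 0.0                     # estimate of rho_dot
    rho_dot_hat = np.zeros_like(rho_samples)
    for k, rho_k in enumerate(rho_samples):
        e = z1 - rho_k
        z1_dot = -lam1 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e) + z2
        z2_dot = -lam2 * L * np.sign(e)
        z1 += dt * z1_dot
        z2 += dt * z2_dot
        rho_dot_hat[k] = z2
    return rho_dot_hat

# Example: differentiate a noisy sinusoidal scheduling parameter.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
rho = np.sin(2 * np.pi * 0.5 * t) + 1e-3 * np.random.default_rng(2).standard_normal(t.size)
rho_dot_est = super_twisting_differentiator(rho, dt)
```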

4 Flight Validation Using a Quadrotor UAV

In this section, validation of the flight control scheme described in the previous section is undertaken via implementation on an IRIS+ multirotor UAV² (see Fig. 5). The UAV is manufactured by 3DR [55] and uses Pixhawk [55] as its flight controller. The nonlinear equations of motion of the 3DR IRIS+ quadrotor [55] are given by

$$\begin{bmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \\ \dot u \\ \dot v \\ \dot w \\ \dot p \\ \dot q \\ \dot r \end{bmatrix} = \begin{bmatrix} p + q\, s_\phi \tan\theta + r\, c_\phi \tan\theta \\ q\, c_\phi - r\, s_\phi \\ q\, s_\phi \sec\theta + r\, c_\phi \sec\theta \\ vr - qw - g\, s_\theta \\ pw - ur + g\, c_\theta s_\phi \\ uq - pv + g\, c_\theta c_\phi \\ qr\,(I_{yy} - I_{zz})/I_{xx} \\ pr\,(I_{zz} - I_{xx})/I_{yy} \\ pq\,(I_{xx} - I_{yy})/I_{zz} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ F_z/m \\ L_b/I_{xx} \\ M_b/I_{yy} \\ N_b/I_{zz} \end{bmatrix} \qquad (43)$$

² Reprinted with permission from A. Khattab, based on his PhD thesis "Fault Tolerant Control of Multi-Rotor Unmanned Aerial Vehicles Using Sliding Mode Based Schemes" [54].


Fig. 5 3DR IRIS+ quadrotor [55]

and

$$\begin{bmatrix} \dot x \\ \dot y \\ \dot z \end{bmatrix} = \begin{bmatrix} c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\ c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\ -s_\theta & s_\phi c_\theta & c_\phi c_\theta \end{bmatrix}\begin{bmatrix} u \\ v \\ w \end{bmatrix} \qquad (44)$$

As described in [55], the states are the Euler angles (roll φ, pitch θ, yaw ψ); the body-axis angular rates (p, q, r); the velocities (u, v, w) in the body axes; and the positions (x, y, z) in the inertial axes. The variable $F_z$ represents the total thrust, $L_b$ represents the roll torque, $M_b$ represents the pitch torque, and finally, $N_b$ represents the yaw torque. The system parameters are the mass m and the inertias about the x, y, and z body axes (i.e., $I_{xx}$, $I_{yy}$, and $I_{zz}$), respectively. Finally, c and s are used to represent cos(·) and sin(·), respectively. For implementation, the controlled states are the roll, pitch, and yaw angles. The altitude will be controlled using a separate outer-loop control. For controller synthesis, only the attitude states will be considered, i.e.,

$$X = \begin{bmatrix} \phi & \theta & \psi & p & q & r \end{bmatrix}^T$$

Assuming hover conditions, the equations of motion in (43)–(44) can be written in quasi-LPV form as

$$\dot X(t) = A(\rho)X(t) + Bu(t), \qquad (45)$$

where the time-varying scheduling parameters are

$$\rho(t) := \begin{bmatrix} \rho_1 & \rho_2 & \rho_3 \end{bmatrix} = \begin{bmatrix} p(t) & q(t) & r(t) \end{bmatrix} \qquad (46)$$


The scheduling parameter ρ(t) is assumed to lie in a particular compact set Ω ⊂ R³ chosen to avoid singularities. The system matrices in (45) are given by

$$A(\rho) = \begin{bmatrix} 0_{3\times3} & I_3 \\ 0_{3\times3} & A_{22}(\rho) \end{bmatrix} \qquad (47)$$

where

$$A_{22}(\rho) = \begin{bmatrix} 0 & \tfrac{c_2}{2}\rho_3 & \tfrac{c_2}{2}\rho_2 \\ \tfrac{c_5}{2}\rho_3 & 0 & \tfrac{c_5}{2}\rho_1 \\ \tfrac{c_8}{2}\rho_2 & \tfrac{c_8}{2}\rho_1 & 0 \end{bmatrix}. \qquad (48)$$
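For concreteness, the sketch below assembles A(ρ) from (47)–(48). The inertia values are placeholders, not the identified IRIS+ parameters (which are given in [54]).

```python
import numpy as np

# Hypothetical inertias (kg m^2); the identified IRIS+ values appear in [54].
Ixx, Iyy, Izz = 0.02, 0.02, 0.04
c2 = (Iyy - Izz) / Ixx
c5 = (Izz - Ixx) / Iyy
c8 = (Ixx - Iyy) / Izz

def A_of_rho(rho):
    """Quasi-LPV system matrix A(rho) of (47)-(48), with rho = (p, q, r)."""
    r1, r2, r3 = rho
    A22 = 0.5 * np.array([[0.0,     c2 * r3, c2 * r2],
                          [c5 * r3, 0.0,     c5 * r1],
                          [c8 * r2, c8 * r1, 0.0]])
    top = np.hstack([np.zeros((3, 3)), np.eye(3)])
    bottom = np.hstack([np.zeros((3, 3)), A22])
    return np.vstack([top, bottom])

A = A_of_rho((0.1, -0.05, 0.02))   # body rates (rad/s) used as scheduling parameters
```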

The input distribution matrix is given by

$$B = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ c_4 & 0 & 0 \\ 0 & c_7 & 0 \\ 0 & 0 & c_9 \end{bmatrix}\begin{bmatrix} -b l_1 & b l_2 & b l_1 & -b l_2 \\ b l_3 & -b l_4 & b l_3 & -b l_4 \\ d & d & -d & -d \end{bmatrix} \qquad (49)$$

where the constants

$$\begin{aligned} c_2 &= (I_{yy} - I_{zz})/I_{xx}, & c_4 &= 1/I_{xx}, \\ c_5 &= (I_{zz} - I_{xx})/I_{yy}, & c_7 &= 1/I_{yy}, \\ c_8 &= (I_{xx} - I_{yy})/I_{zz}, & c_9 &= 1/I_{zz}. \end{aligned} \qquad (50)$$

As shown in Fig. 5, the inputs of the system are $u(t) = \mathrm{col}(\omega_1^2, \omega_2^2, \omega_3^2, \omega_4^2)$, which are the squares of the individual motor rotational velocities. The parameters b and d are thrust and drag factors, while $l_1, l_2, l_3, l_4$ are the four moment arm lengths (Fig. 5). Equation (49) can be written as

$$B = \begin{bmatrix} 0 \\ I_3 \end{bmatrix} B_2 \qquad (51)$$

where

$$B_2 = \begin{bmatrix} -c_4 b l_1 & c_4 b l_2 & c_4 b l_1 & -c_4 b l_2 \\ c_7 b l_3 & -c_7 b l_4 & c_7 b l_3 & -c_7 b l_4 \\ c_9 d & c_9 d & -c_9 d & -c_9 d \end{bmatrix} \qquad (52)$$
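The snippet below is a numerical sketch (with hypothetical thrust, drag, geometry, and inertia values, not the identified IRIS+ parameters) that forms B₂ from (52) and the allocation matrix N of (13) for a 50% loss of effectiveness on rotor 1. It illustrates that, when the estimate Λ matches the true effectiveness W, the faulty plant still realizes the virtual demand, i.e., B₂ W N v = v.

```python
import numpy as np

# Hypothetical quadrotor parameters (thrust factor b, drag factor d, arm lengths, inertias).
b, d = 1.0e-5, 2.0e-7
l1 = l2 = l3 = l4 = 0.25
Ixx, Iyy, Izz = 0.02, 0.02, 0.04
c4, c7, c9 = 1 / Ixx, 1 / Iyy, 1 / Izz

B2 = np.array([[-c4 * b * l1,  c4 * b * l2,  c4 * b * l1, -c4 * b * l2],
               [ c7 * b * l3, -c7 * b * l4,  c7 * b * l3, -c7 * b * l4],
               [ c9 * d,       c9 * d,      -c9 * d,      -c9 * d]])

def allocation(B2, Lam):
    """Control allocation matrix N of (13) for a diagonal effectiveness estimate Lam."""
    return Lam @ B2.T @ np.linalg.inv(B2 @ Lam @ Lam @ B2.T)

W = np.diag([0.5, 1.0, 1.0, 1.0])     # rotor 1 at 50% effectiveness (w1 = 0.5)
Lam = W                               # perfect estimate, for illustration only
N = allocation(B2, Lam)

v = np.array([0.1, -0.05, 0.02])      # virtual roll/pitch/yaw demands
u = N @ v                             # physical (squared rotor speed) demands
# With Lam = W, the faulty plant still realizes the virtual demand: B2 W u = v.
assert np.allclose(B2 @ W @ u, v)
```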

The structure in (51) satisfies the conditions in (6) associated with Assumption 4. Also, note the very specific structure of the system matrix in (47), in which only the bottom-right sub-block depends on the scheduling parameters. The controlled output distribution matrix for this problem is

$$C_c = \begin{bmatrix} I_3 & 0_{3\times3} \end{bmatrix} \qquad (53)$$

which implies φ, θ, ψ are the states to be controlled. To create a steady-state reference tracking capability, integrator states are introduced according to

$$\dot x_r(t) = y_c(t) - C_c x_p(t) \qquad (54)$$

where $y_c(t)$ is the (differentiable) command signal. Define the augmented state vector $x_a = \mathrm{col}(x_r, x_p)$ and create, using (1) and (54), the augmented state-space system

$$\dot x_a(t) = A_a(\rho)x_a(t) + B_a(\rho)W(t)u(t) + B_c y_c(t) \qquad (55)$$

where $B_c = [I_l \;\; 0]^T$ and

$$A_a(\rho) = \begin{bmatrix} 0 & -C_c \\ 0 & A(\rho) \end{bmatrix}, \qquad B_a(\rho) = \begin{bmatrix} 0 \\ B \end{bmatrix} \qquad (56)$$
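A small sketch of the augmentation in (54)–(56), using placeholder A, B, and Cc matrices with the stated dimensions, is shown below; it is an illustration of the block structure only.

```python
import numpy as np

def augment_with_integrators(A, B, Cc):
    """Build Aa, Ba, Bc of (55)-(56): integrator states x_r stacked above the plant states."""
    n, m = A.shape[0], B.shape[1]
    l = Cc.shape[0]
    Aa = np.block([[np.zeros((l, l)), -Cc],
                   [np.zeros((n, l)),  A]])
    Ba = np.vstack([np.zeros((l, m)), B])
    Bc = np.vstack([np.eye(l), np.zeros((n, l))])   # distributes the command y_c
    return Aa, Ba, Bc

# Placeholder matrices with the quadrotor dimensions (n = 6 states, m = 4 inputs, l = 3 outputs).
A = np.zeros((6, 6)); A[:3, 3:] = np.eye(3)
B = np.vstack([np.zeros((3, 4)), np.ones((3, 4))])
Cc = np.hstack([np.eye(3), np.zeros((3, 3))])
Aa, Ba, Bc = augment_with_integrators(A, B, Cc)
assert Aa.shape == (9, 9) and Ba.shape == (9, 4) and Bc.shape == (9, 3)
```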

The system in (55) will be used as the basis for the design. In terms of the virtual control law, a small change to $v_l$ in (34) needs to be made to account for the reference signal:

$$v_l(t) = -S(\rho)\big(A(\rho)x(t) + B_c y_c(t)\big) - \dot M(\rho)x_1(t) + \Phi s(t) \qquad (57)$$

4.1 Implementation

The platform used for the implementation of the control scheme is called Pixhawk [56]. Both the hardware and firmware of Pixhawk are available open source, which makes it popular among hobbyists and researchers. The original firmware is written in C and includes a "standard" PID controller which can be tuned using a desktop software application called "Mission Planner", which also acts as the ground control station [57]. Due to the freely available software and hardware, some researchers (see, for example, [58, 59]) have created MATLAB- and Simulink-based support tools for Pixhawk through the ArduPilotMega (APM). Official MathWorks support is now available through the Pixhawk Support Packages (PSP) [60]. This support package allows rapid development of flight controllers directly from Simulink blocks to the Pixhawk hardware without any manual programming in C. The work in [55] has identified the physical parameters of the IRIS+ multirotor UAV (e.g., the mass and moments of inertia). Reference [55] also describes detailed step-by-step procedures on how to use the Simulink support package to implement a PID controller on Pixhawk.

4.1.1 Pixhawk: The Hardware

Pixhawk is currently the standard for open-source drone hardware and software [56]. Not only can it be used for multicopter UAVs, but it can also be used for fixed-wing UAVs, conventional helicopter UAVs, and unmanned ground vehicles (UGVs). It originated at the Computer Vision and Geometry Lab at ETH Zurich, and the main objective of Pixhawk was to provide low-cost, open, and easily available hardware and software, mainly for academics but also for hobbyists and industry [56]. The Pixhawk hardware is relatively small (only 50 × 15.5 × 81.5 mm) and includes a Cortex-M4F processor (168 MHz, 256 KB RAM, and 2 MB flash), a sensor suite (3D accelerometer, gyroscope, magnetometer, and barometer), and a few input/output interfaces [56]. Additional hardware, such as a flow sensor (for non-drift hover capability) and GPS, can be added through the input/output interface.

4.1.2 Gimbal Setup

To avoid safety and regulation concerns, flight tests for the quadrotor were conducted indoors, and, in the absence of an expensive motion capture system, a simple test rig was created. This allows the inner-loop Euler angle control performance to be tested and evaluated over longer durations of roll, pitch, and yaw commands. This is an advantage compared with motion capture systems, which are limited by the room size (to avoid hitting the walls), thus limiting the duration of any sustained Euler angle command. In the literature, the simplest test rig, the so-called "string test" (see, for example, [61]), can be used to tune each of the roll, pitch, and yaw axes independently. While this is simple and allows quick tuning of the controller (typically a PID from Mission Planner [61]), the rig does not allow testing on all axes simultaneously and suffers from severe vibrations, especially when there is slack on the string. In the work described in [62], a gimbal (gyroscopic) test rig was used (see Fig. 6). This allows smooth rotations of all Euler angles while the center of gravity remains fixed in space and is less prone to severe vibrations. Since the test rig becomes part of the drone to be tested, lightweight yet strong materials were considered during the manufacturing process in order to reduce the effect of additional mass and inertia on the quadrotor. As described in [62], carbon fiber tubes were used (see Fig. 6).

4.1.3 Design

The original physical parameters, e.g., the mass and moments of inertia, were identified in [55]. In this chapter, since the quadrotor is attached to a gimbal, the physical properties of the IRIS+ quadrotor together with the gimbal were identified in [54, 62]. To introduce tracking of the Euler angles as discussed earlier, integral action has been included, inflating the order of the state vector to nine. As shown in (47), in this example the matrix sub-blocks A11 and A12 associated with the design of the sliding surface do not depend on the scheduling parameters, and so the synthesis problem does not


Fig. 6 3DR IRIS+ quadrotor in gimbal

require the complexities described in Theorem 2 in the previous section. Details of the physical parameters appear in [54]. In this case, a simpler form of the LQR-like optimal design methodology from Theorem 2 can be deployed, with the design matrix Q chosen as

$$Q = \mathrm{diag}(13,\,13,\,13,\,2,\,2,\,2,\,0.02,\,0.02,\,0.17)$$

Here, the design freedom Φ = diag(−3, −1, −1). Because of the special structure of B in (51) and the diagonal choice of Φ, resulting in a diagonal matrix P₂, the switching surfaces are nominally decoupled. By exploiting this structure, the discontinuous injection signal has been split into individual channels. This aids tuning and minimizes coupling between the controlled states. It also allows the individual modulation gains ρᵢ to be chosen channel-wise, in this case as 5, 1, 1, respectively. Furthermore, a sigmoidal function [48] is used to "smooth" the channel-wise discontinuous terms to avoid chattering. Here, the "smoothing factors" δᵢ have been chosen as 0.12, 0.17, 0.1, respectively.
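To illustrate the channel-wise smoothed injection described above, the sketch below applies a sigmoidal approximation of the sign function to each channel, with the quoted modulation gains and smoothing factors. The particular sigmoid s/(|s| + δ) is a standard choice [48] and is assumed here; the full switching-function computation is not reproduced.

```python
import numpy as np

# Channel-wise modulation gains and smoothing factors quoted in the text.
modulation_gains = np.array([5.0, 1.0, 1.0])     # roll, pitch, yaw channels
smoothing = np.array([0.12, 0.17, 0.10])

def smoothed_injection(s):
    """Channel-wise sigmoidal approximation of -K_i * sign(s_i) to avoid chattering."""
    s = np.asarray(s, dtype=float)
    return -modulation_gains * s / (np.abs(s) + smoothing)

print(smoothed_injection([0.5, -0.01, 0.0]))   # approaches -K_i*sign(s_i) for |s_i| >> delta_i
```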

4.1.4 Implementation and Test Results

As described in [62], during the controller synthesis the control law u(t) corresponds to ω₁², ω₂², ω₃², and ω₄², which are the squares of the four individual motor rotational


speeds. However, for implementation, since the electronic speed controller (ESC) for the brushless motors can only accept pulse width modulation (PWM) signals, the control law u(t), expressed in (rpm)², needs to be converted into a PWM signal. The conversion is achieved using the following equation:

$$PWM_i = (2000 - \mathrm{trim}) \times \frac{u_i}{u_{i,\max}} + \mathrm{trim}. \qquad (58)$$
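A direct transcription of (58) is given below; the saturation to the [1000, 2000] PWM range reflects the limits mentioned in the text, and the numerical example values are illustrative.

```python
import numpy as np

U_MAX = 10_000.0 ** 2      # (rpm)^2, square of the maximum rotational speed
TRIM = 1000.0              # PWM value corresponding to no rotation

def control_to_pwm(u):
    """Convert squared rotor-speed demands u_i (rpm^2) into PWM commands via (58)."""
    u = np.asarray(u, dtype=float)
    pwm = (2000.0 - TRIM) * (u / U_MAX) + TRIM
    return np.clip(pwm, 1000.0, 2000.0)       # ESC accepts PWM in [1000, 2000]

print(control_to_pwm([0.0, 2.5e7, 1.0e8, 1.5e8]))   # -> [1000. 1250. 2000. 2000.]
```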

Note that $u_{i,\max} = (10{,}000\ \mathrm{rpm})^2$ represents the square of the maximum rotational speed, and the PWM signals vary between trim = 1000 (no rotation) and 2000 (maximum rotation). In this chapter, two scenarios are considered: a fault-free scenario and a scenario in which a fault develops in the first motor. Specifically, the fault considered is a 25% reduction in the effectiveness of Motor 1 (i.e., w₁ = 0.75) starting from t = 2 s. As described in [62], the fault is introduced at a software level, without damaging either the propeller or the motor. The controller was implemented on the flight control hardware (Pixhawk) using the official MATLAB Pixhawk Support Packages (PSP) [60]. The fault-free implementation results are shown in Fig. 7, while the faulty case is shown in Fig. 8. For the fault-free case, Fig. 7a shows good tracking performance for the Euler angles and that sliding is maintained throughout the flight. Figure 7b shows the PWM signals for each motor in the fault-free condition. For the faulty scenario, Fig. 8a shows good tracking performance despite the presence of a fault on Motor 1, with only a very small deviation from the sliding surfaces, even during the fault. Finally, Fig. 8b shows the effect of the fault on the PWM signals. Here, it can be seen that the PWM signals remain within the limits from the minimum (trim) value (PWM = 1000) to the maximum rotation (PWM = 2000). The results show the robustness of the implemented SMC scheme, even in the presence of a faulty motor (actuator).

5 HIL and Flight Validation Using MuPAL-α Research Aircraft

The theoretical developments in Sect. 3 will now be used to create a lateral-directional FTC controller for JAXA's MuPAL-α aircraft. The MuPAL-α is a twin-propeller-engine Dornier Do228-202 aircraft, which has been modified to implement an experimental (research) fly-by-wire (FBW) system and Direct Lift Control (DLC) flaps [12, 63] (Fig. 9). MuPAL-α is used for evaluating human-machine interactions and for flight testing new guidance and navigation technologies [42, 63, 64]. The protocol is typically to first test new strategies on the ground in a Hardware-in-the-Loop (HIL) configuration. However, one of its unique assets is its capacity to allow piloted flight evaluation. During any flight tests, the configuration allows a safety pilot to override the FBW system via the original aircraft's direct mechanical linkages from the pilot's wheel,

Fig. 7 Fault-free case (red curves represent the nominal performance): (a) controlled states and switching functions; (b) PWM signals

Fig. 8 Faulty case: w1 = 0.75 at t = 2 s (red curves represent the nominal performance): (a) controlled states and switching functions; (b) PWM signals


Fig. 9 MuPAL-α fly-by-wire configuration (adapted from [12] with permission from JAXA)

column, pedal, and throttle levers to the control devices [12]. This makes MuPAL-α an ideal platform for safely testing new FTC schemes. A high-fidelity nonlinear 6-DOF simulation model is available for developing, tuning, and testing new control strategies before flight testing. Its validity has been established by comparisons between flight test results and offline calculations with the designed controllers [63]. The first stage of testing the proposed FTC scheme involved using the nonlinear simulation.

5.1 Controller Design

In contrast to [42], the dependence of the system on the square of the equivalent airspeed is also considered. Specifically, here, the scheduling parameters have been selected as

$$\rho = \begin{bmatrix} v_{eas} & v_{eas}^2 \end{bmatrix}, \qquad (59)$$

where $v_{eas}$ denotes equivalent airspeed. The second component $v_{eas}^2$ in (59) potentially offers greater fidelity in modeling accuracy, and is treated (conservatively) independently of $v_{eas}$. The controller was tested under the following conditions:

• altitude of 5000 ft at standard atmosphere conditions;
• flaps and gear set to up, and the DLC flaps at 0 deg;
• a weight of 5700 kg, and c.g. at 28% MAC (mean aerodynamic chord);
• equivalent airspeed of around 120 kts.

This chapter will focus on lateral-directional control. The system states are given by


$$x_p = \begin{bmatrix} \phi & \beta & r & p \end{bmatrix}^T \qquad (60)$$

which denote roll angle (rad), sideslip angle (rad), yaw rate (rad/s), and roll rate (rad/s), respectively. The system inputs u are given by

$$u = \begin{bmatrix} \delta_{td} & \delta_a & \delta_r \end{bmatrix}^T \qquad (61)$$

where $\delta_{td}$ represents differential power lever deviation (which translates into differential thrust), while $\delta_a$ and $\delta_r$ represent the aileron (rad) and rudder (rad) surface deflections. The controlled outputs are the sideslip angle and the roll angle. In the underpinning LTI models obtained from different trim points, the control derivatives are given by

$$B(\rho) = \begin{bmatrix} 0 & 0 & 0 \\ 0 & Y_{\delta_a}/U_0 & Y_{\delta_r}/U_0 \\ N_{\delta_t} & N_{\delta_a} & N_{\delta_r} \\ L_{\delta_t} & L_{\delta_a} & L_{\delta_r} \end{bmatrix} \qquad (62)$$

where $U_0$ represents the aircraft's constant (trimmed) forward velocity. A pair of ailerons moves asymmetrically and thus does not produce a yaw force; as a consequence, $Y_{\delta_a}/U_0$ is negligible. On the other hand, the rudder creates a yaw moment as well as a yaw force, and thus, in general, $Y_{\delta_r}/U_0$ is not negligible. However, in MuPAL-α's case, $Y_{\delta_r}/U_0$ is neglected. The validity of this has been verified by the flight test results in [64]. This enables factorization of the input distribution matrix as in (6), where, in this instance,

$$B_v = \begin{bmatrix} 0_{2\times2} \\ I_2 \end{bmatrix} \qquad (63)$$

and

$$B_2(\rho) = \begin{bmatrix} N_{\delta_t} & N_{\delta_a} & N_{\delta_r} \\ L_{\delta_t} & L_{\delta_a} & L_{\delta_r} \end{bmatrix} \qquad (64)$$

This structure facilitates the use of the controller described in the earlier sections. To create a steady-state reference tracking capability, integrator states are introduced according to

$$\dot x_r(t) = y_c(t) - C_p x_p(t), \qquad (65)$$

where $y_c(t)$ is the (differentiable) command signal. Define the augmented state vector $x_a = \mathrm{col}(x_r, x_p)$ and create, using (1) and (65), the augmented state-space system

$$\dot x_a(t) = A_a(\rho)x_a(t) + B_a(\rho)W(t)u(t) + B_c y_c(t) + D_a(\rho)\xi(t,x), \qquad (66)$$

where $B_c = [I_l \;\; 0]^T$ and

$$A_a(\rho) = \begin{bmatrix} 0 & -C_p \\ 0 & A_p(\rho) \end{bmatrix}, \qquad B_a(\rho) = \begin{bmatrix} 0 \\ B(\rho) \end{bmatrix}, \qquad D_a(\rho) = \begin{bmatrix} 0 \\ D_p(\rho) \end{bmatrix} \qquad (67)$$

Define the state transformation matrix (for the augmented system) according to

$$T_a = \begin{bmatrix} I_l & 0 \\ 0 & T_n \end{bmatrix} \qquad (68)$$

This induces regular form in the augmented system. To guarantee a stable sliding motion in (22) with suitable performance, the D-stable region is defined by selecting $L_1 = 2$ and $L_2 = 1$ (where $L_1$ and $L_2$ are defined in Corollary 2.1). For implementation, a sigmoidal approximation of the discontinuous output injection signal from (35) has been used [51]:

$$v_n = -K(t,x)\,\frac{P_2 s(t)}{\|P_2 s(t)\| + \delta} \qquad (69)$$

where the modulation gain K = 0.4 (see Remark 5) and δ = 0.05. The design freedom Φ in (36), which must be Hurwitz, has been chosen as Φ = −0.5 I₂. Then, from (36), P₂ = I₂.

Remark 5 These values were selected after tuning on the HIL setup (described later). Ideally, K(·) should be as small as possible, but large enough to provide robustness against matched uncertainty. Since, in each channel, |v_{n,i}| ≤ K(·), the maximum allowed magnitude of each control signal component gives an initial guess for an upper bound on K(·). From this starting point, after some simple tuning and examination of the closed-loop system performance for a range of faults, the value K = 0.4 was settled on.

5.2 HIL Test on the MuPAL-α Platform

The HIL facility was used to test the controller C-code implementation and to provide a preliminary assessment of the controller performance before the real test flights. Figure 10a shows a schematic of the MuPAL-α HIL test platform, which incorporates the actual aircraft as part of the setup (see Fig. 10b). An external emulation computer provides the flight simulator capabilities based on a high-fidelity simulation model of the aircraft motion. The emulation computer uses the actual positions of the aircraft control surfaces (which move during the HIL tests) as inputs to the motion model of MuPAL-α. The emulation computer then generates realistic "sensor measurements" of the aircraft states. These "measurement" signals are used by the FBW processor for its controller calculations and also for the cockpit displays and panels to give visual feedback to the pilot. The graphic environment display is sent from the emulation computer to a screen located outside of the cockpit to give the pilots information for a visualization of the aircraft's attitude. The tests served as rehearsals for maneuvers


Fig. 10 MuPAL-α HIL test platform: (a) HIL setup (adapted from [53] with permission from JAXA); (b) 'umbilical cords'

and flight conditions likely to be experienced in the flight tests. The control scheme was written in C-code using an input/output template provided by JAXA. During the HIL tests, the actuator faults have been created at a software level: specifically, the controller output is deliberately modified in the software, and it is the modified signal which is sent to the physical actuators. No FDI scheme was incorporated within the flight control computer. To simulate a fault estimation scheme and any computational delays in decision-making, an estimate of the fault is given by

$$\dot\chi(t) = -0.5\,\chi(t) + 0.5\,W(t), \qquad \Lambda(t) = \mathrm{sat}\big(\chi(t) + c\big)$$

which means Λ(t) is related to W(t) through a unity-gain low-pass filter with a time constant of 0.5, where the vector c is a fixed offset and sat(·) represents the (vector) saturation function which "clips" each component to lie in the interval [0, 1].


Consequently, when a fault is injected, Λ(t) ≠ W(t). However, if c = 0, then Λ(t) → W(t) as t → ∞. In all the flight tests, Λ(t) is used in the allocation matrix (13).
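A discrete-time sketch of this fault-estimation emulation (explicit Euler, with an illustrative sample time, initial condition, offset vector, and fault scenario) is given below.

```python
import numpy as np

def emulate_fault_estimate(W_sequence, c, dt=0.01):
    """Emulate the HIL fault-estimation scheme: chi_dot = -0.5*chi + 0.5*W, Lambda = sat(chi + c).

    W_sequence : (T, m) array of true effectiveness levels w_i(t) in [0, 1]
    c          : (m,) fixed offset vector
    Returns the sequence of estimates Lambda(t), each component clipped to [0, 1].
    """
    T, m = W_sequence.shape
    chi = np.ones(m)                 # assume actuators start healthy (an assumption)
    Lam = np.zeros((T, m))
    for k in range(T):
        chi = chi + dt * (-0.5 * chi + 0.5 * W_sequence[k])   # first-order lag
        Lam[k] = np.clip(chi + c, 0.0, 1.0)                   # vector saturation
    return Lam

# Example: rudder effectiveness drops to 0.2 at t = 10 s; zero offset, so Lambda -> W.
dt, T = 0.01, 3000
W_seq = np.ones((T, 3))
W_seq[int(10 / dt):, 2] = 0.2
Lam_seq = emulate_fault_estimate(W_seq, c=np.zeros(3), dt=dt)
```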

5.2.1 Fault Free

Auto-pilot maneuver: The fault-free HIL test results are shown in Fig. 11, where a "doublet" maneuver is created within the FBW system to ensure repeatability of the maneuver in both fault-free and faulty scenarios. The trajectories of the lateral-directional states are shown in Fig. 11a, which shows good roll and sideslip tracking performance during fault-free conditions. Figure 11b shows that a pseudo-sliding motion occurs and can be maintained. The aileron and rudder commands and their surface deflections are shown in Fig. 11c. In all the HIL testing, K = diag(1, k₂, k₃), since differential thrust cannot be used.

5.2.2 Faulty Case

Auto-pilot maneuver: Figure 12 shows the HIL test results when the aileron and the rudder are simultaneously faulty (50% efficiency for the aileron and 80% efficiency for the rudder). For testing, the same "doublet" auto-pilot maneuver with roll angle (15 deg to −15 deg) and sideslip angle (−3 deg to 3 deg) commands was considered. The lateral-directional state trajectories are shown in Fig. 12a and demonstrate that roll angle and sideslip angle tracking performance can be maintained in the presence of faults. This confirms the good potential of the designed controller. The sliding surfaces are plotted in Fig. 12b, which shows that sliding is maintained. The aileron and rudder commands and their surface deflections are shown in Fig. 12c, where it can be observed that both the aileron and the rudder deflections do not follow the commands because of the faults. Once sufficient confidence had been gained by the flight crew based on the HIL testing, the actual flight testing phase was embarked upon.

5.3 Piloted Flight Tests

In this section, results from the actual flight test campaign will be introduced, and selected flight test results will be used to demonstrate the efficacy of the scheme. These results represent the first sliding-mode controller to be flight tested on a manned aircraft and constitute a major milestone in the area of sliding-mode FTC. The results presented here were obtained from flight test campaigns which took place between 16 and 27 January 2017 over Sagami Bay. The weather conditions were clear and satisfied the visual meteorological conditions (VMC) flight category (which allowed the flight tests to take place). During the tests, the maximum recorded wind gust was 11.3 m/s.


Since the flight test campaign concentrated on the evaluation of lateral-directional control, longitudinal control of altitude and speed was manually maintained by the evaluation pilot. The lateral-directional controller commands were created via wheel and pedal manipulations. Detailed descriptions of each evaluation will be provided in the subsequent plots and discussions.

Fig. 11 Fault-free case: states, switching functions and control surface deflections (HIL test with auto-pilot): (a) the trajectories of the system states; (b) switching functions; (c) commands and surface deflections

5.3.1 Fault Free

Fault-free flight test results are shown in Figs. 13 and 14. These provide a baseline for the subsequent evaluations. In the first set (Fig. 13), a doublet maneuver (±20 deg roll and ±2 deg sideslip angle) was created by the evaluation pilot. In the second set (Fig. 14), a steady S-turn with a roll angle of ±20 deg is used as the reference command. The evolution of the aircraft lateral-directional motion states is shown in Figs. 13a and 14a. It is clear from these that good sideslip and roll angle tracking performance is obtained. Figures 13b and 14b show the sliding surfaces and demonstrate that sliding can be maintained during the flight. The aileron and rudder commands and the corresponding surface deflections are shown in Figs. 13c and 14c, from which it is clear the aileron and the rudder are fault free.

Fig. 12 Faulty case, K = diag(1, 0.5, 0.2): states, switching functions and control surface deflections (HIL test with auto-pilot): (a) the trajectories of the system states; (b) switching functions; (c) commands and surface deflections

5.3.2 Aileron Faults Only

This set of results shows the response of the aircraft when faults occur on the aileron (Fig. 15). It is assumed that the aileron operates at 50% efficiency. This corresponds to K = diag(1, 0.5, 0). The trajectories of the lateral motion are shown in Fig. 15a. In these tests, doublet commands are generated by the pilot for both sideslip and roll angles. Clearly, despite these faults, the FTC scheme still maintains good roll and sideslip tracking. Figure 15b shows that, during the flight tests in the presence of faults, sliding is maintained. The aileron and rudder commands and the corresponding surface deflections are given in Fig. 15c. Here, it can be seen that the aileron no longer follows the commands due to the presence of faults.

Fig. 13 Fault-free case: states, switching functions and control surface deflections (flight test). (a) The trajectories of the system states

Fig. 13 (continued). (b) Switching functions. (c) Commands and surface deflections

5.3.3 Simultaneous Aileron and Rudder Faults

This series of flight tests considers the situation when faults exist on both the aileron and the rudder. Now the aileron and rudder work at 50% efficiency and 80% efficiency, respectively. This corresponds to K = diag(1, 0.5, 0.2). The lateral-directional state trajectories are shown in Fig. 16a. Again, despite the presence of faults, the FTC scheme still maintains good roll and sideslip tracking performance. Figure 16b shows that sliding occurs and is maintained despite the presence of simultaneous faults. The aileron and rudder commands and the associated surface deflections are shown in Fig. 16c. This figure shows that the aileron and the rudder no longer follow their respective commands because of the presence of faults.

Fig. 14 Fault-free case: states, switching functions and control surface deflections (flight test for S-turn). (a) The trajectories of the system states

Fig. 14 (continued). (b) Switching functions. (c) Commands and surface deflections

Remark 6 Interestingly, during the flight tests, the evaluation pilot did not notice any changes in the maneuverability and behavior of the aircraft when comparing fault-free and faulty conditions, which highlights the ability of the FTC scheme to mitigate the effects of the faults.

Fig. 15 Aileron faults—K = diag(1, 0.5, 0): states, switching functions and control surface deflections (flight test). (a) The trajectories of the system states

Fig. 15 (continued). (b) Switching functions. (c) Commands and surface deflections

Fig. 16 Aileron and rudder faults—K = diag(1, 0.5, 0.2): states, switching functions and control surface deflections (flight test for S-turn). (a) The trajectories of the system states

Fig. 16 (continued). (b) Switching functions. (c) Commands and surface deflections

6 Conclusion

This chapter considered the development of sliding-mode fault-tolerant controllers and their application to aerospace systems. The chapter has described the complete design process, from development and testing on nonlinear models through to piloted flight tests. A specific class of sliding-mode FTC scheme has been described which includes a control allocation element to address the over-actuation that is usually present in safety-critical systems and which allows total actuator failures to be compensated.

Acknowledgements This work has received funding from the European Union Horizon 2020 research and innovation program under grant agreement No. 690811 and the Japan New Energy and Industrial Technology Development Organization under grant agreement No. 062800, as part of the EU/Japan joint research project entitled “Validation of Integrated Safety-enhanced Intelligent flight cONtrol (VISION)”. We gratefully acknowledge the contributions of T. Hosoya, M. Naruoka, J. Kawaguchi, S. Morokuma, H. Ishii, and Y. Sagara from JAXA and Y. Uetake from Nakanihon Air Service for their support in terms of the implementation and evaluation of the SMC scheme on the MuPAL-α.



Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors

Romeo Falcón, Héctor Ríos, and Alejandro Dzul

Abstract This chapter presents the design of a sliding-mode-based fault diagnosis and a fault-tolerant control for the trajectory tracking problem in quad-rotors. The problem considers external disturbances and two different actuator faulty scenarios: multiple losses of rotor effectiveness or a complete rotor failure. The proposals are based only on the measurable positions and angles. For the fault diagnosis strategy, a finite-time sliding-mode observer is proposed to estimate some state variables, and it provides a set of residuals. These residuals allow the detection, isolation, and identification of multiple actuator faults even in the presence of a class of external disturbances. Using the proposed fault diagnosis, an actuator fault accommodation controller is developed to solve the trajectory tracking problem in quad-rotors under the effects of multiple losses of rotor effectiveness and external disturbances. The fault accommodation partially compensates the actuator faults, allowing the usage of a baseline robust-nominal controller that deals with external disturbances. Additionally, in order to deal with the rotor failure scenario, an active fault-tolerant control is proposed. First, the rotor failure is isolated using the proposed fault diagnosis, and then a combination of a finite-time sliding-mode observer, PID controllers, and continuous high-order sliding-mode controllers is proposed. Such a strategy allows the yaw angular velocity to remain bounded and the position tracking to be achieved even in the presence of some external disturbances. Numerical simulations and experimental results on Quanser's QBall2 show the performance of the proposed strategies.

This chapter contains material reprinted from Romeo Falcón, Héctor Ríos, and Alejandro Dzul, A sliding-mode-based active fault-tolerant control for robust trajectory tracking in quad-rotors under a rotor failure, International Journal of Robust and Nonlinear Control. Copyright ©1999–2023 John Wiley & Sons, Inc. All rights reserved. Section 3 and Figs. 1, 2, 3, 4, 5, 6, 7, and 8 ©2022 IEEE. Reprinted with permission, from R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854. Section 4.1 ©2021 IEEE. Reprinted with permission, from R. Falcón, H. Ríos and A. Dzul, “An Actuator Fault Accommodation Sliding-Mode Control Approach for Trajectory Tracking in Quad-Rotors,” 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 7100–7105, DOI:10.1109/CDC45484.2021.9682845.

R. Falcón · H. Ríos · A. Dzul — Tecnológico Nacional de México/I.T. La Laguna, División de Estudios de Posgrado e Investigación, C.P. 27000 Torreón, Coahuila, México (e-mail: [email protected]; [email protected]). H. Ríos — CONAHCYT IxM, C.P. 03940 Ciudad de México, México (e-mail: [email protected]).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_19

1 Introduction

Among Unmanned Aerial Vehicles (UAVs), quad-rotors have attracted significant attention due to their capability to perform vertical take-off and landing tasks, their high maneuverability, and their low cost. Consequently, they are used in a wide variety of applications, e.g., delivery, agriculture, maintenance, and military applications (see, for instance, [1]). The growing demand for reliability, safety, and an acceptable performance level in these tasks is a priority, and it has become a topic of interest in the field of robust control design for these vehicles (see, for instance, [2, 3]). However, most of these techniques are effective only when the system works in a healthy environment, i.e., under rotor nominal conditions. In the presence of actuator faults, the performance level may be considerably degraded, since the control commands are computed without accounting for the faulty scenario. Moreover, a rotor failure can provoke a dangerous midair collision or a crash into the ground, endangering the surrounding environment. Therefore, to increase safety and robustness, a Fault-Tolerant Control (FTC) must be designed in order to guarantee system stability and an acceptable performance level in the presence of actuator faults, or even under a rotor failure.

The FTC techniques can be classified into two types: passive and active [4]. In passive techniques, similar to the robust control approach, the control law does not change when a fault occurs, since faults are treated as uncertainties. However, the fault tolerance obtained in this way may not be sufficient for severe faults [4]. In the literature, there are several works on passive FTCs applied to quad-rotors. For instance, in [5], an adaptive controller, based on a reference model, is proposed to deal with actuator faults and external disturbances. In [6], a passive FTC is obtained from the inherent robustness of sliding-mode controllers. In [7], the upper bound of the fault is estimated and the gains of an adaptive sliding-mode controller are computed accordingly. In [8], a control allocation strategy is implemented in an adaptive sliding-mode controller for a quad-rotor under actuator faults. In [9], radial basis function neural networks are proposed to deal with actuator faults and external disturbances. In all the previous works, the faults are not detected, and the schemes are suitable only for goals associated with low performance levels. In this sense, the faults may exceed the FTC capability, causing a dangerous collision.


In the case of active techniques, the Fault Diagnosis (FD), i.e., the detection, isolation, and identification of the faults, plays a major role. In this sense, the control law can be modified in faulty situations, making use of the FD, in order to compensate for the faults and accomplish the tracking task or, if necessary, to carry out landing actions. There are also several works related to active FTCs applied to quad-rotors. For instance, in [10], an adaptive finite-time extended state observer is designed to compensate for external disturbances and to accommodate actuator faults. In [11], an active FTC, based on a robust adaptive fault estimation observer, is proposed. In [12], an adaptive sliding-mode control, based on recurrent neural networks, is designed for actuator faults and model uncertainties. However, it is designed only for the rotational subsystem, dealing with a fully actuated system without considering the coupled dynamics. An active FTC that employs a fault accommodation technique is proposed in [13]. That approach is based on a nonlinear adaptive FD which adapts the controller gains. In these works, the active FTC allows much more demanding objectives in the presence of faults. However, knowledge of the fault effect is essential, and it depends mainly on the FD.

Regarding the FD problem for quad-rotors, there are several works reported in the literature. In [14], an FD method is proposed to identify faults in the quad-rotor sensors. A sensor fault reconstruction method in the presence of external disturbances is designed in [15]. In [16], an FD for actuator faults is proposed based on a polynomial observer, without considering disturbances. In the same context, [17] deals with the design of a model-based FD, for sensor and actuator faults, based on a set of residuals. However, such residuals are not suitable for the isolation and identification of the faults. In [18], a residual is generated by the parity space method to identify actuator faults. In [19], a two-stage Kalman filter is designed to diagnose actuator faults. In [20], an FD, based on an adaptive augmented-state Kalman filter, is designed for actuator faults and external disturbances. Such a scheme is designed only for the attitude subsystem and is developed for the linearized model of the quad-rotor. In [21], an FD based on an adaptive Thau observer is developed to build a set of residuals that identify actuator faults. In [22], an adaptive observer-based FD is designed for actuator faults without considering external disturbances. External disturbances can degrade the performance of most of the previous schemes, since they do not deal with faults and external disturbances simultaneously. In this context, Sliding-Mode Observers (SMOs) are frequently adopted to deal with disturbance/fault estimation because of their robustness and finite-time convergence. For instance, in [23], an SMO is designed for a quad-rotor actuator fault reconstruction method. In [24], SMOs are designed to generate a set of residual signals for a 3-DOF helicopter. In [25], an FD scheme is developed based on a bank of SMOs to identify disturbances and faults. In [26], an FTC, based on SMOs, is proposed to deal with actuator faults and external disturbances in a quad-rotor. However, in the previous works, the SMOs do not distinguish between the effect of faults and that of disturbances, identifying only the total uncertainty. It is difficult to separate the effect of the faults from that of external disturbances because both act in the same input channels. The literature related to the development of FD methods dealing with the problem of


distinguishing between the effects of faults and external disturbances in quad-rotors is practically scarce (see, for instance, [20]).

Regarding the rotor failure scenario, in [27], after the failure of an actuator, a transformation of the quad-rotor system into a tri-rotor system is proposed. In this approach, external disturbances are not considered, and the quad-rotor weight has to be redistributed to shift its center of gravity toward the rear rotor. In [28], the quad-rotor system is transformed into a bi-rotor system, neglecting the effect of external disturbances. Such an approach turns off the propeller that is aligned on the same axis as the faulty rotor, losing control of the pitch or roll angle, depending on the pair of rotors that stopped working, as well as of the yaw angle. In [29], an FTC based on periodic solutions is proposed for a quad-rotor to maintain a regulation task after the loss of one, two, or three propellers. However, external disturbances are not taken into account, the stability analysis is not addressed, and only regulation tasks are considered. In [30], once the yaw rate is stabilized, a cyclic reference is provided for the roll and pitch command angles in order to achieve a regulation goal, but external disturbances are not considered. In [31], a multi-loop hybrid nonlinear controller is experimentally implemented for a quad-rotor under a rotor failure. However, such a scheme lacks a formal closed-loop stability proof. In [32], an FTC is proposed for a quad-rotor under a rotor failure and considering external disturbances. Such a strategy exploits the robustness of a nonsingular terminal sliding-mode controller, but the yaw dynamics is totally ignored and the generation of positive thrusts cannot be guaranteed. In the aforementioned works, the effect of a rotor failure on the angular moments exists but is not taken into account.

The contribution of this chapter is threefold:
• A Finite-Time Sliding-Mode Observer (FT-SMO) is used to design an FD method which estimates some states and also provides a set of residuals, using only the measurements of the angles and the position. These residuals allow the detection, isolation, and identification of faults expressed as a partial Loss Of Effectiveness (LOE) of the rotors, despite the presence of external disturbances.
• Making use of the proposed fault diagnosis, an Actuator Fault Accommodation Controller (FAC) is proposed, with which all actuator faults are partially compensated. The control loop and our strategy are independent; hence, a baseline robust-nominal controller can easily be coupled in order to handle the external disturbances. The proposed controller is based on PIDs and Continuous High-Order Sliding-Mode Controllers (HOSMCs) [33].
• In order to deal with the rotor failure scenario, an active FTC is proposed. The rotor failure is isolated using the proposed FD, and then a combination of an FT-SMO, PID controllers, and HOSMCs is proposed. Such a strategy allows the yaw angular velocity to remain bounded and the position tracking to be achieved even in the presence of some external disturbances.
Numerical simulations and experimental results on Quanser's QBall2 platform show the performance of the proposed strategies.

The chapter is organized as follows. Section 2 gives the problem statements. The proposed FD method is described in Sect. 3. The active FAC is designed in Sect. 4.


An active FTC is proposed to deal with the rotor failure scenario in Sect. 5. Some concluding remarks are given in Sect. 6. Finally, the proofs of the proposed results are given in the Appendix.

Notation: Denote $s(\cdot) := \sin(\cdot)$ and $c(\cdot) := \cos(\cdot)$, and $\mathbb{R}_+ := \{x \in \mathbb{R} : x \ge 0\}$. For a Lebesgue measurable function $d : \mathbb{R}_+ \to \mathbb{R}^m$, define the norm $\|d\|_{[t_0,t_1)} = \operatorname{ess\,sup}_{t \in [t_0,t_1)} \|d(t)\|$; then $\|d\|_f = \|d\|_{[t_f,+\infty)}$ and $\|d\|_\infty = \|d\|_{[0,+\infty)}$. The set of $d(t)$ with the property $\|d\|_\infty < +\infty$ is denoted as $\mathcal{L}_\infty$, and $\mathcal{L}_D = \{d \in \mathcal{L}_\infty : \|d\|_\infty \le D\}$, for any $D > 0$. The sequence of integers $1, \ldots, n$ is denoted as $\overline{1,n}$. The function $\lceil s \rfloor^\beta := |s|^\beta \operatorname{sign}(s)$, for any $s \in \mathbb{R}$ and $\beta \in \mathbb{R}_{\ge 0}$, and $\lceil s \rfloor^\beta := (\lceil s_1 \rfloor^\beta, \ldots, \lceil s_n \rfloor^\beta)^T$, for any $s \in \mathbb{R}^n$.
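For readers who wish to prototype the algorithms below, the signed power ⌈s⌋^β has a one-line implementation. The following minimal Python helper is only an illustration of the definition above (it is not part of the original text); the same idea is reused in the later code sketches.

```python
import numpy as np

def spow(s, beta):
    """Signed power: |s|^beta * sign(s), applied element-wise."""
    s = np.asarray(s, dtype=float)
    return np.sign(s) * np.abs(s) ** beta
```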

2 Problem Statements

Consider the simplified quad-rotor dynamics (see Fig. 1; for modeling details, see [34]), given by
$$\dot{\xi}_1 = \xi_2, \tag{1a}$$
$$\dot{\xi}_2 = g_\xi(\eta_1)\,u_m - G - A_\xi\,\xi_2 + d_\xi(t), \tag{1b}$$
$$\dot{\eta}_1 = \eta_2, \tag{1c}$$
$$\dot{\eta}_2 = J\tau + B\,w_\eta(\eta_2) - A_\eta\,\eta_2 + d_\eta(t), \tag{1d}$$
where $\xi_1 := (x, y, z)^T \in \mathbb{R}^3$, $\xi_2 := (\dot{x}, \dot{y}, \dot{z})^T \in \mathbb{R}^3$, $\eta_1 := (\phi, \theta, \psi)^T \in \mathbb{R}^3$ and $\eta_2 := (\dot{\phi}, \dot{\theta}, \dot{\psi})^T \in \mathbb{R}^3$, with $x$, $y$ and $z \in \mathbb{R}$ the translational coordinates, and $\phi$, $\theta$ and $\psi \in \mathbb{R}$ the roll, pitch and yaw angles, respectively. $d_\xi := (d_x, d_y, d_z)^T \in \mathbb{R}^3$ and $d_\eta := (d_\phi, d_\theta, d_\psi)^T \in \mathbb{R}^3$ represent uncertainties and external disturbances. $G := (0, 0, g)^T \in \mathbb{R}^3$ is the gravity vector, with $g$ the gravitational acceleration. $J := \operatorname{diag}(J_x^{-1}, J_y^{-1}, J_z^{-1}) \in \mathbb{R}^{3\times3}$ is the inertia matrix, with $J_x$, $J_y$

Fig. 1 Representation of the Quad-rotor variables. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022. 3156854


and $J_z$ the moments of inertia about the $X$-, $Y$- and $Z$-axes. $\tau = (\tau_\phi, \tau_\theta, \tau_\psi)^T \in \mathbb{R}^3$ represents the angular moment vector, with $\tau_\phi$, $\tau_\theta$ and $\tau_\psi \in \mathbb{R}$ the roll, pitch and yaw angular moments, respectively. $u_m := u_z/m$, with $u_z \in \mathbb{R}$ the main thrust and $m \in \mathbb{R}_+$ the mass of the quad-rotor. $A_\xi := \operatorname{diag}(a_x, a_y, a_z) \in \mathbb{R}^{3\times3}$ contains the aerodynamic damping coefficients, and $A_\eta := \operatorname{diag}(a_\phi/J_x, a_\theta/J_y, a_\psi/J_z) \in \mathbb{R}^{3\times3}$ the rotational resistance moment coefficients. $B := \operatorname{diag}(b_\phi, b_\theta, b_\psi) \in \mathbb{R}^{3\times3}$ is given by the inertial coefficients $b_\phi := (J_y - J_z)/J_x$, $b_\theta := (J_z - J_x)/J_y$ and $b_\psi := (J_x - J_y)/J_z$. The functions $g_\xi : \mathbb{R}^3 \to \mathbb{R}^3$ and $w_\eta : \mathbb{R}^3 \to \mathbb{R}^3$ are given as $g_\xi(\eta_1) := (c\phi\,s\theta\,c\psi + s\phi\,s\psi,\; c\phi\,s\theta\,s\psi - s\phi\,c\psi,\; c\phi\,c\theta)^T$ and $w_\eta(\eta_2) := (\dot{\theta}\dot{\psi},\; \dot{\phi}\dot{\psi},\; \dot{\phi}\dot{\theta})^T$, respectively.

The relation between the control inputs $u_z$, $\tau_\phi$, $\tau_\theta$, $\tau_\psi$ and the thrusts $T_i$, generated by the $i$th rotor, is given by
$$\underbrace{\begin{pmatrix} u_z \\ \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{pmatrix}}_{u} = \underbrace{\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 0 & L & -L \\ L & -L & 0 & 0 \\ K_\tau & K_\tau & -K_\tau & -K_\tau \end{pmatrix}}_{M} \underbrace{\begin{pmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{pmatrix}}_{T}, \tag{2}$$
where $u$ is the control input vector, $T$ is the thrust vector and $M$ represents the linear relation between the control signals and the thrusts. $L$ is the distance between the rotors and the center of mass of the quad-rotor, while $K_\tau$ is a thrust coefficient.

The actuator faults are represented by an LOE in the rotor thrust, as in [13, 20, 21]. For instance, an unexpected change in the rotor physical parameters or a propeller structural damage would result in an LOE on the thrust generated by the respective rotor. Therefore, in the presence of faults, the current command thrust $\bar{T}$ is given as
$$\bar{T}(t) = \big(I_4 - \Gamma(t)\big)T(t) = T(t) - f(t), \tag{3}$$
where $\Gamma(t) := \operatorname{diag}(\gamma_1(t), \gamma_2(t), \gamma_3(t), \gamma_4(t)) \in \mathbb{R}^{4\times4}$ and $f(t) = \Gamma(t)T(t) := (f_1(t), f_2(t), f_3(t), f_4(t))^T \in \mathbb{R}^4$. The term $\gamma_i(t) \in (0, 1)$ represents an LOE fault in the $i$th rotor, $\gamma_i(t) = 0$ corresponds to the nominal condition of the $i$th rotor, and $\gamma_i(t) = 1$ corresponds to the loss of the $i$th rotor. It is worth mentioning that the current thrust $\bar{T}$ is not measured and is therefore unknown. A numerical sketch of this thrust mapping and fault injection is given after the list of goals below.

The goals are as follows:
1. To design an FD method for the detection, isolation, and identification of multiple LOEs.
2. To design an actuator FAC for the tracking task under the influence of possible multiple LOEs.
3. To design an active FTC to achieve the tracking task under the influence of a rotor failure.
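To make the thrust mapping (2) and the LOE fault model (3) concrete, the following Python sketch builds the mixing matrix M, allocates rotor thrusts from a control input vector, and applies a loss-of-effectiveness vector. It is a minimal illustration only, not the authors' implementation; the default values of L and K_τ are the QBall2 parameters reported later in Sect. 3.5, and the function names are assumptions.

```python
import numpy as np

def mixing_matrix(L=0.2, K_tau=0.0057):
    """Mixing matrix M of Eq. (2): u = (u_z, tau_phi, tau_theta, tau_psi)^T = M T."""
    return np.array([
        [1.0,    1.0,    1.0,    1.0   ],
        [0.0,    0.0,    L,     -L     ],
        [L,     -L,      0.0,    0.0   ],
        [K_tau,  K_tau, -K_tau, -K_tau ],
    ])

def allocate_thrusts(u, M):
    """Commanded rotor thrusts T = M^{-1} u (M is invertible in the fault-free case)."""
    return np.linalg.solve(M, u)

def apply_loe(T, gamma):
    """Fault model of Eq. (3): T_bar = (I_4 - Gamma) T = T - f, with f = Gamma T."""
    f = np.diag(gamma) @ T
    return T - f, f

if __name__ == "__main__":
    M = mixing_matrix()
    u = np.array([17.6, 0.1, -0.05, 0.02])               # (u_z, tau_phi, tau_theta, tau_psi)
    T = allocate_thrusts(u, M)
    T_bar, f = apply_loe(T, gamma=[0.2, 0.0, 0.0, 0.0])   # 20% LOE on rotor 1
    print(T, T_bar, f)
```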


It is assumed that only the positions and angles of the vehicle are measurable. Before proceeding with the actuator FD method, the following assumptions are introduced.

Assumption 1 The disturbances are uniformly bounded and Lipschitz, i.e., $d_x \in \mathcal{L}_{D_1}$, $d_y \in \mathcal{L}_{D_2}$, $d_z \in \mathcal{L}_{D_3}$, $d_\phi \in \mathcal{L}_{D_4}$, $d_\theta \in \mathcal{L}_{D_5}$, $d_\psi \in \mathcal{L}_{D_6}$, $\dot{d}_x \in \mathcal{L}_{\bar{D}_1}$, $\dot{d}_y \in \mathcal{L}_{\bar{D}_2}$, $\dot{d}_z \in \mathcal{L}_{\bar{D}_3}$, $\dot{d}_\phi \in \mathcal{L}_{\bar{D}_4}$, $\dot{d}_\theta \in \mathcal{L}_{\bar{D}_5}$, $\dot{d}_\psi \in \mathcal{L}_{\bar{D}_6}$, with known positive constants $D_q$ and $\bar{D}_q$, $q = \overline{1,6}$.

Assumption 2 The faults are uniformly bounded and Lipschitz, i.e., $f \in \mathcal{L}_{F_1}$, $\dot{f} \in \mathcal{L}_{F_2}$, with known positive constants $F_1$ and $F_2$.

It is worth noting that the price to be paid for the continuity of the proposed approach is the class of disturbances that these methods are able to deal with, i.e., Lipschitz-continuous disturbances. However, external disturbances such as wind gusts are generally modeled as sinusoidal signals, which fall into this class of perturbations [35].

3 Fault Diagnosis Strategy

In this section, an FD strategy is proposed that estimates some state variables and provides a set of residuals in order to diagnose the LOEs, only by means of the measurable positions and angles, and despite the presence of external disturbances. For this purpose, define $\chi_1 := (z, \phi, \theta, \psi)^T \in \mathbb{R}^4$ and $\chi_2 := (\dot{z}, \dot{\phi}, \dot{\theta}, \dot{\psi})^T \in \mathbb{R}^4$. The reason for taking only these states is that the faults directly affect the control signals, which appear in the dynamics of $\chi_1$ and $\chi_2$. Then, considering (2) and (3), the dynamics of $\chi_1$ and $\chi_2$ are given as
$$\dot{\chi}_1 = \chi_2, \tag{4a}$$
$$\dot{\chi}_2 = \zeta(\chi_1)\,M\,\big(I_4 - \Gamma(t)\big)T(t) + \vartheta(\chi_2) + d(t), \tag{4b}$$
where $\zeta(\chi_1) := \operatorname{diag}(c\phi\,c\theta/m,\, J) \in \mathbb{R}^{4\times4}$, $\vartheta(\chi_2) := \big(-a_z\dot{z} - g,\; (B\,w_\eta(\eta_2) - A_\eta\,\eta_2)^T\big)^T \in \mathbb{R}^4$ and $d := (d_z,\, d_\eta^T)^T \in \mathbb{R}^4$.

3.1 Reduced Finite-Time Sliding-Mode Observer

Let us introduce the following constraint.

Assumption 3 The term $\vartheta(\chi_2)$ is Lipschitz, i.e., $\|\vartheta(\chi_2) - \vartheta(\hat{\chi}_2)\|_\infty \le L_\vartheta \|\chi_2 - \hat{\chi}_2\|_\infty$, for any $\chi_2, \hat{\chi}_2 \in \mathbb{R}^4$.

Note that Assumption 3 is fulfilled for any trajectory for which the roll and pitch angles do not exceed $\pm\pi/2$ [rad] and whose angular velocities do not grow faster than a linear rate.


In this sense, Assumption 3 implies that the pitch and roll angles must satisfy $|\phi| < \pi/2$ and $|\theta| < \pi/2$, respectively; this is possible if the desired trajectories are Lipschitz, so that the angular velocities do not grow faster than a linear rate.

The FT-SMO has the following structure [36]:
$$\dot{\hat{\chi}}_1 = \hat{\chi}_2 + K_1\,\varphi_1(e_\chi), \tag{5a}$$
$$\dot{\hat{\chi}}_2 = \zeta(\chi_1)\,M\,T + \vartheta(\hat{\chi}_2) + \hat{\chi}_3 + K_2\,\varphi_2(e_\chi), \tag{5b}$$
$$\dot{\hat{\chi}}_3 = K_3\,\varphi_3(e_\chi), \tag{5c}$$
where $e_\chi := \chi_1 - \hat{\chi}_1 \in \mathbb{R}^4$ is the output error, the nonlinear output injections $\varphi_1, \varphi_2, \varphi_3 : \mathbb{R}^4 \to \mathbb{R}^4$ are given as $\varphi_1(s) := \lceil s \rfloor^{2/3}$, $\varphi_2(s) := \lceil s \rfloor^{1/3}$ and $\varphi_3(s) := \lceil s \rfloor^{0}$, and $K_j = \operatorname{diag}(k_{j1}, k_{j2}, k_{j3}, k_{j4}) \in \mathbb{R}^{4\times4}$, with $j = \overline{1,3}$, are some matrix gains to be designed. Define the state estimation error as $\hat{e}_r := (e_\chi, \varepsilon_\chi, \varsigma_\chi) \in \mathbb{R}^{12}$, where $\varepsilon_\chi := \chi_2 - \hat{\chi}_2 \in \mathbb{R}^4$ is the estimation error of the velocities, and $\varsigma_\chi := \Theta(t) - \hat{\chi}_3 \in \mathbb{R}^4$ is the estimation error of the total uncertainty given by $\Theta(t) = \Psi(t) + \vartheta(\chi_2) - \vartheta(\hat{\chi}_2)$, with $\Psi(t) = d(t) - \zeta(\chi_1)Mf(t)$. Due to Assumptions 1–3, such an uncertainty is bounded and Lipschitz, i.e., $\Theta \in \mathcal{L}_\delta$ and $\dot{\Theta} \in \mathcal{L}_{\bar{\delta}}$, with known positive constants $\delta$ and $\bar{\delta}$, respectively. Let us introduce the following theorem.

Theorem 1 [36]. Let the observer (5) be applied to system (4). Suppose that Assumptions 1–3 hold, and that the observer gains are selected as $K_1 = 2\bar{\delta}^{1/3} I_4$, $K_2 = 1.5\bar{\delta}^{1/2} I_4$, $K_3 = 1.1\bar{\delta}\,I_4$; then, $\hat{e}_r = 0$ is Uniformly Finite-Time Stable (UFTS).

According to Theorem 1, $\hat{e}_r = 0$ is UFTS, which implies that $\hat{\chi}_1(t) = \chi_1(t)$, $\hat{\chi}_2(t) = \chi_2(t)$ and $\hat{\chi}_3(t) = \Theta(t)$, for all $t \ge T_1 > 0$. Therefore, if Assumptions 1–3 hold, the FT-SMO (5) provides the following residuals
$$\hat{\chi}_3(t) = \big(\hat{\chi}_z(t),\, \hat{\chi}_\phi(t),\, \hat{\chi}_\theta(t),\, \hat{\chi}_\psi(t)\big)^T = \Theta(t), \quad \forall t \ge T_1 > 0. \tag{6}$$
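As an illustration, the following Python sketch implements an explicit-Euler discretization of the FT-SMO (5) for the reduced state (z, φ, θ, ψ). It is a minimal sketch under the stated assumptions, not the authors' implementation: the gain formulas of Theorem 1 are used, and `zeta_fn` and `theta_fn` are assumed callables that evaluate ζ(·) and ϑ(·) from the model parameters.

```python
import numpy as np

def spow(s, beta):
    """Signed power: |s|^beta * sign(s), applied element-wise."""
    return np.sign(s) * np.abs(s) ** beta

class FTSMO:
    """Euler-discretized finite-time sliding-mode observer, Eq. (5)."""
    def __init__(self, zeta_fn, theta_fn, M, delta_bar, dt):
        self.zeta_fn, self.theta_fn, self.M, self.dt = zeta_fn, theta_fn, M, dt
        # Scalar gains from Theorem 1, replicated on the four channels.
        self.K1 = 2.0 * delta_bar ** (1.0 / 3.0)
        self.K2 = 1.5 * delta_bar ** 0.5
        self.K3 = 1.1 * delta_bar
        self.x1 = np.zeros(4)   # estimate of chi_1 = (z, phi, theta, psi)
        self.x2 = np.zeros(4)   # estimate of chi_2 (velocities)
        self.x3 = np.zeros(4)   # estimate of the total uncertainty (residuals)

    def step(self, chi1_meas, T_cmd):
        e = chi1_meas - self.x1                       # output error e_chi
        zeta = self.zeta_fn(chi1_meas)                # diag(c(phi)c(theta)/m, 1/Jx, 1/Jy, 1/Jz)
        dx1 = self.x2 + self.K1 * spow(e, 2.0 / 3.0)
        dx2 = (zeta @ self.M @ T_cmd + self.theta_fn(self.x2)
               + self.x3 + self.K2 * spow(e, 1.0 / 3.0))
        dx3 = self.K3 * np.sign(e)
        self.x1 += self.dt * dx1
        self.x2 += self.dt * dx2
        self.x3 += self.dt * dx3
        return self.x3                                # residual vector, Eq. (6)
```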

3.2 Fault Detection

In order to detect whether there is an actuator fault, the following proposition is introduced.

Proposition 1 [37]. Suppose that Assumptions 1–3 hold, and assume that the fault $f$ satisfies
$$\|f\|_f > 2\,\|M^{-1}\zeta^{-1}(\chi_1)D\|_f, \tag{7}$$
with $D := (D_3, D_4, D_5, D_6)^T$. Then, if
$$\|\hat{\chi}_3\|_\infty > \|D\|, \tag{8}$$
the fault $f$ is strongly detectable.

Faults that do not satisfy (7) are not detectable, which means that their effect cannot be distinguished from the effect of the external disturbances. However, it is always possible to design robust controllers to compensate for such non-detectable faults (see, e.g., [2, 3]). The matrix $\zeta^{-1}$ always exists as long as the roll and pitch angles do not exceed $\pm\pi/2$ [rad], which is fulfilled since the quad-rotor does not perform aggressive maneuvers. For fault detection, the inequality (8) is verified.
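A hedged sketch of the detection test (8): the instantaneous residual norm delivered by the observer is compared against the disturbance bound, with a persistence counter similar in spirit to the 100-sample threshold used later in Sect. 3.5. This is an illustrative reading of (8), not the authors' code.

```python
import numpy as np

def fault_detected(chi3_hat, D, counter, persistence=100):
    """Detection test of Eq. (8) with a persistence counter.

    chi3_hat : current residual vector (estimate of the total uncertainty)
    D        : vector of disturbance bounds (D3, D4, D5, D6)
    counter  : consecutive samples for which the inequality has held so far
    Returns (detected, updated_counter).
    """
    if np.linalg.norm(chi3_hat, np.inf) > np.linalg.norm(D):
        counter += 1
    else:
        counter = 0
    return counter >= persistence, counter
```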

3.3 Fault Isolation

The next step is to isolate the faults. Based on the result of Theorem 1 and on Assumption 1, one can establish the following constraints for all $t \ge 0$:
$$\underline{d}_z \le d_z(t), \qquad \underline{d}_\phi \le d_\phi(t) \le \bar{d}_\phi, \tag{9a}$$
$$\underline{d}_\theta \le d_\theta(t) \le \bar{d}_\theta, \qquad \underline{d}_\psi \le d_\psi(t) \le \bar{d}_\psi, \tag{9b}$$
where $\underline{d}_z, \underline{d}_\phi, \bar{d}_\phi, \underline{d}_\theta, \bar{d}_\theta, \underline{d}_\psi, \bar{d}_\psi \in \mathbb{R}$ are known constants. These constants can be estimated by means of the FT-SMO through experimental tests under nominal, i.e., fault-free, conditions. Then, the following variables are introduced:
$$\sigma_z(t) = \begin{cases} 0, & \text{if } \|\hat{\chi}_z\|_\infty \le D_3, \\ \underline{d}_z, & \text{if } \|\hat{\chi}_z\|_\infty > D_3, \end{cases} \tag{10a}$$
$$\sigma_\phi(t) = \begin{cases} 0, & \text{if } \|\hat{\chi}_\phi\|_\infty \le D_4, \\ \bar{d}_\phi, & \text{if } \|\hat{\chi}_\phi\|_\infty > D_4 \wedge \hat{\chi}_\phi(t) > 0, \\ \underline{d}_\phi, & \text{if } \|\hat{\chi}_\phi\|_\infty > D_4 \wedge \hat{\chi}_\phi(t) < 0, \end{cases} \tag{10b}$$
$$\sigma_\theta(t) = \begin{cases} 0, & \text{if } \|\hat{\chi}_\theta\|_\infty \le D_5, \\ \bar{d}_\theta, & \text{if } \|\hat{\chi}_\theta\|_\infty > D_5 \wedge \hat{\chi}_\theta(t) > 0, \\ \underline{d}_\theta, & \text{if } \|\hat{\chi}_\theta\|_\infty > D_5 \wedge \hat{\chi}_\theta(t) < 0, \end{cases} \tag{10c}$$
$$\sigma_\psi(t) = \begin{cases} 0, & \text{if } \|\hat{\chi}_\psi\|_\infty \le D_6, \\ \bar{d}_\psi, & \text{if } \|\hat{\chi}_\psi\|_\infty > D_6 \wedge \hat{\chi}_\psi(t) > 0, \\ \underline{d}_\psi, & \text{if } \|\hat{\chi}_\psi\|_\infty > D_6 \wedge \hat{\chi}_\psi(t) < 0. \end{cases} \tag{10d}$$
These functions provide an approximation of the disturbances, taking into account the sign and magnitude of the residual signals. Consider the following functions:


$$\rho_1(t) = \frac{\sigma_z(t) - \hat{\chi}_z(t)}{4m^{-1}c\phi(t)\,c\theta(t)} + \frac{\sigma_\theta(t) - \hat{\chi}_\theta(t)}{2J_y^{-1}L} + \frac{\sigma_\psi(t) - \hat{\chi}_\psi(t)}{4J_z^{-1}K_\tau},$$
$$\rho_2(t) = \frac{\sigma_z(t) - \hat{\chi}_z(t)}{4m^{-1}c\phi(t)\,c\theta(t)} - \frac{\sigma_\theta(t) - \hat{\chi}_\theta(t)}{2J_y^{-1}L} + \frac{\sigma_\psi(t) - \hat{\chi}_\psi(t)}{4J_z^{-1}K_\tau},$$
$$\rho_3(t) = \frac{\sigma_z(t) - \hat{\chi}_z(t)}{4m^{-1}c\phi(t)\,c\theta(t)} + \frac{\sigma_\phi(t) - \hat{\chi}_\phi(t)}{2J_x^{-1}L} - \frac{\sigma_\psi(t) - \hat{\chi}_\psi(t)}{4J_z^{-1}K_\tau},$$
$$\rho_4(t) = \frac{\sigma_z(t) - \hat{\chi}_z(t)}{4m^{-1}c\phi(t)\,c\theta(t)} - \frac{\sigma_\phi(t) - \hat{\chi}_\phi(t)}{2J_x^{-1}L} - \frac{\sigma_\psi(t) - \hat{\chi}_\psi(t)}{4J_z^{-1}K_\tau},$$
$$\lambda_1(t) = \frac{\sigma_z(t) - \hat{\chi}_z(t) - D_z}{2m^{-1}c\phi(t)\,c\theta(t)} + \frac{\sigma_\psi(t) - \hat{\chi}_\psi(t) - D_\psi}{2J_z^{-1}K_\tau},$$
$$\lambda_2(t) = \frac{\sigma_z(t) - \hat{\chi}_z(t) - D_z}{2m^{-1}c\phi(t)\,c\theta(t)} + \frac{\hat{\chi}_\psi(t) - \sigma_\psi(t) - D_\psi}{2J_z^{-1}K_\tau},$$

where all these functions depend on known or estimated variables only. Then, the following proposition is introduced.

Proposition 2 [37]. Suppose that Assumptions 1–3 hold, and assume that the fault $f_i$ is strongly detectable, i.e.,
$$\|f_1\|_{f_1} > 2Q_1(\chi_1)\|I_\theta D\|, \tag{11a}$$
$$\|f_2\|_{f_2} > 2Q_2(\chi_1)\|I_\theta D\|, \tag{11b}$$
$$\|f_3\|_{f_3} > 2Q_3(\chi_1)\|I_\phi D\|, \tag{11c}$$
$$\|f_4\|_{f_4} > 2Q_4(\chi_1)\|I_\phi D\|, \tag{11d}$$
where $Q_1(\chi_1) = \|I_\theta M^{-1}\zeta^{-1}(\chi_1)\|_{f_1}$, $Q_2(\chi_1) = \|I_\theta M^{-1}\zeta^{-1}(\chi_1)\|_{f_2}$, $Q_3(\chi_1) = \|I_\phi M^{-1}\zeta^{-1}(\chi_1)\|_{f_3}$ and $Q_4(\chi_1) = \|I_\phi M^{-1}\zeta^{-1}(\chi_1)\|_{f_4}$, with $I_\theta := \operatorname{diag}(1, 0, 1, 1) \in \mathbb{R}^{4\times4}$ and $I_\phi := \operatorname{diag}(1, 1, 0, 1) \in \mathbb{R}^{4\times4}$. Then, Algorithms 1 and 2 provide the isolation of the $i$th fault through the warning signals $A_i$, with $A_i = 1$ indicating the occurrence of a fault in the $i$th rotor and $A_i = 0$ indicating that the rotor is healthy.

Algorithm 1: Rotors 1 and 2.
Input: $\hat{\chi}_\theta$, $D_5$, $\rho_1$, $\rho_2$, $\lambda_1$; Output: $A_1$, $A_2$;
1: if $\|\hat{\chi}_\theta\|_\infty > D_5 \wedge \hat{\chi}_\theta < 0$ → $A_1 = 1$;
2:   if $\rho_2 > 0$ → $A_2 = 1$;
3:   else → $A_2 = 0$;
4:   end
5: elseif $\|\hat{\chi}_\theta\|_\infty > D_5 \wedge \hat{\chi}_\theta > 0$ → $A_2 = 1$;
6:   if $\rho_1 > 0$ → $A_1 = 1$;
7:   else → $A_1 = 0$;
8:   end
9: elseif $\|\hat{\chi}_\theta\|_\infty < D_5 \wedge \lambda_1 > 0$ → $A_1 = A_2 = 1$;
10: else → $A_1 = A_2 = 0$;
11: end

Algorithm 2: Rotors 3 and 4.
Input: $\hat{\chi}_\phi$, $D_4$, $\rho_3$, $\rho_4$, $\lambda_2$; Output: $A_3$, $A_4$;
1: if $\|\hat{\chi}_\phi\|_\infty > D_4 \wedge \hat{\chi}_\phi < 0$ → $A_3 = 1$;
2:   if $\rho_4 > 0$ → $A_4 = 1$;
3:   else → $A_4 = 0$;
4:   end
5: elseif $\|\hat{\chi}_\phi\|_\infty > D_4 \wedge \hat{\chi}_\phi > 0$ → $A_4 = 1$;
6:   if $\rho_3 > 0$ → $A_3 = 1$;
7:   else → $A_3 = 0$;
8:   end
9: elseif $\|\hat{\chi}_\phi\|_\infty < D_4 \wedge \lambda_2 > 0$ → $A_3 = A_4 = 1$;
10: else → $A_3 = A_4 = 0$;
11: end
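The isolation logic of Algorithms 1 and 2 translates almost directly into code. The sketch below is a plain re-statement of those rules, assuming the residuals χ̂_θ, χ̂_φ and the auxiliary functions ρ_i, λ_j have already been evaluated; the variable names are illustrative only.

```python
def isolate_rotors_1_2(chi_theta_norm, chi_theta, D5, rho1, rho2, lam1):
    """Algorithm 1: warning signals (A1, A2) for rotors 1 and 2."""
    if chi_theta_norm > D5 and chi_theta < 0:
        return 1, (1 if rho2 > 0 else 0)
    if chi_theta_norm > D5 and chi_theta > 0:
        return (1 if rho1 > 0 else 0), 1
    if chi_theta_norm < D5 and lam1 > 0:
        return 1, 1
    return 0, 0

def isolate_rotors_3_4(chi_phi_norm, chi_phi, D4, rho3, rho4, lam2):
    """Algorithm 2: warning signals (A3, A4) for rotors 3 and 4."""
    if chi_phi_norm > D4 and chi_phi < 0:
        return 1, (1 if rho4 > 0 else 0)
    if chi_phi_norm > D4 and chi_phi > 0:
        return (1 if rho3 > 0 else 0), 1
    if chi_phi_norm < D4 and lam2 > 0:
        return 1, 1
    return 0, 0
```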


Note that each $Q_i$ depends on the system parameters and on the pitch and roll angles; therefore, each $Q_i$ depends on the intensity of the vehicle maneuvers. However, since the quad-rotor does not perform aggressive maneuvers, these quantities are clearly bounded. The warning signal $A_i$ indicates a fault in the $i$th rotor and prevents a wrong fault identification.

3.4 Fault Identification

The last step in the proposed fault diagnosis is to identify the magnitude of each fault. For this purpose, consider $A(t) := \operatorname{diag}(A_1(t), A_2(t), A_3(t), A_4(t)) \in \mathbb{R}^{4\times4}$ and the vector $\sigma(t) := (\sigma_z(t), \sigma_\phi(t), \sigma_\theta(t), \sigma_\psi(t))^T \in \mathbb{R}^4$. Then, the following proposition introduces the fault identification module.

Proposition 3 [37]. Suppose that Assumptions 1–3 hold, and assume that the fault $f_i$ is strongly detectable and has been isolated. Then, the identification of the fault vector $f$ is given by
$$\hat{f}(t) = A(t)\,M^{-1}\zeta^{-1}(\chi_1)\big(\sigma(t) - \hat{\chi}_3(t)\big). \tag{12}$$
Moreover, the identification errors satisfy
$$Q_1(\chi_1)\Big[\|I_\theta D\| - \big\|(d_z, d_\theta, d_\psi)^T\big\|\Big] \le \|f_1 - \hat{f}_1\|_{f_1} \le Q_1(\chi_1)\,\|I_\theta D\|, \tag{13a}$$
$$Q_2(\chi_1)\Big[\|I_\theta D\| - \big\|(d_z, d_\theta, d_\psi)^T\big\|\Big] \le \|f_2 - \hat{f}_2\|_{f_2} \le Q_2(\chi_1)\,\|I_\theta D\|, \tag{13b}$$
$$Q_3(\chi_1)\Big[\|I_\phi D\| - \big\|(d_z, d_\phi, d_\psi)^T\big\|\Big] \le \|f_3 - \hat{f}_3\|_{f_3} \le Q_3(\chi_1)\,\|I_\phi D\|, \tag{13c}$$
$$Q_4(\chi_1)\Big[\|I_\phi D\| - \big\|(d_z, d_\phi, d_\psi)^T\big\|\Big] \le \|f_4 - \hat{f}_4\|_{f_4} \le Q_4(\chi_1)\,\|I_\phi D\|. \tag{13d}$$

The proposed FD approach allows the detection, isolation, and identification of any faults satisfying Assumption 2, i.e., any strongly detectable, bounded, and Lipschitz faults, even in the presence of a compound fault given by bias or frozen control signals together with LOE faults. Nevertheless, the strategy is not capable of distinguishing the individual effect of each of them.
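Combining the isolation flags with (12), the identification step reduces to one matrix solve. The sketch below assumes the same `mixing_matrix` helper as before and uses NumPy; it is illustrative and not the authors' code.

```python
import numpy as np

def identify_faults(A_flags, M, zeta, sigma, chi3_hat):
    """Fault identification of Eq. (12): f_hat = A M^{-1} zeta^{-1} (sigma - chi3_hat).

    A_flags  : warning signals (A1, A2, A3, A4) from Algorithms 1 and 2
    M        : 4x4 mixing matrix of Eq. (2)
    zeta     : 4x4 matrix zeta(chi_1) = diag(c(phi)c(theta)/m, 1/Jx, 1/Jy, 1/Jz)
    sigma    : disturbance approximation vector of Eq. (10)
    chi3_hat : residual vector of Eq. (6)
    """
    A = np.diag(A_flags)
    rhs = np.linalg.solve(zeta, sigma - chi3_hat)   # zeta^{-1} (sigma - chi3_hat)
    return A @ np.linalg.solve(M, rhs)              # A M^{-1} (...)
```

The estimated LOE fraction can then be recovered channel-wise as γ̂_i(t) = f̂_i(t)/T_i(t), the relation used later in Sect. 4.2.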


3.5 Fault Diagnosis Implementation

In order to validate the performance of the proposed FD, experimental results on the QBall2 by Quanser are presented. This platform consists of a ground control station with the real-time control software QUARC, which generates real-time code directly from MATLAB/Simulink for the onboard computer via WiFi, allowing the user to configure the system parameters and to observe sensor measurements in real time. Using the OptiTrack™ camera system, with six synchronized infrared cameras connected to the ground control station, the position and attitude of the QBall2 are accurately measured [38]. Additionally, the experimental platform includes an industrial fan used to generate wind gusts (see Fig. 2).

The QBall2 parameters are given as m = 1.79 [kg], Jx = Jy = 0.03 [Ns²/rad], Jz = 0.04 [Ns²/rad], ax = ay = az = 0.021 [Ns/kgm], aφ = aθ = 0.009 [Ns/rad], aψ = 0.012 [Ns/rad], L = 0.2 [m] and Kτ = 0.0057 [Ns/rad²]. In order to obtain the upper and lower bounds of the external disturbances given in (9), multiple experimental tests have been carried out under nominal conditions. Two faults are injected by software in the command thrusts before sending them to the vehicle. Such faults are given as
• A fault in rotor 1 with an LOE of 20%, starting at t_{f1} = 15 [s] and finishing at 60 [s].
• A fault in rotor 2 with an LOE of 30%, starting at t_{f2} = 45 [s].
In order to fulfill a trajectory tracking task, a continuous singular terminal sliding-mode control is designed as in [2]. In order to verify the robustness of the method, wind gusts of 4 [m/s] are generated by means of an industrial fan over a diagonal of the X–Y axes (see Fig. 2).

Fig. 2 Quanser Unmanned Vehicle System. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854


The desired trajectory is given as
$$x_d(t) = 0.2\big(\arctan(15) + \arctan(t - 15)\big)\cos(\pi t/6), \tag{14a}$$
$$y_d(t) = 0.2\big(\arctan(15) + \arctan(t - 15)\big)\sin(\pi t/6), \tag{14b}$$
$$z_d(t) = 0.63 + 0.25\tanh(t - 7.5) + 0.1\tanh(t/3 - 12), \tag{14c}$$
$$\psi_d(t) = 0, \tag{14d}$$

and the gains of the FT-SMO are given as $K_1 = 6.463 I_4$, $K_2 = 4.743 I_4$ and $K_3 = 11 I_4$, with $\bar{\delta} = 10$. The experiments have been implemented with a sampling time of $h_s = 0.002$ [s]. The following figures show the results corresponding to five tests. The residual signals $\hat{\chi}_n$, with $n = z, \phi, \theta, \psi$, are evaluated through the following test function [4]
$$\mu(\hat{\chi}_n(t)) = \frac{1}{T}\int_{t-T}^{t} \hat{\chi}_n(\tau)\,d\tau,$$
which provides a filtered version of each residual signal. The behavior of these signals, with $T = 1$, is shown in Fig. 3. In order to improve the performance of the proposed FD, a threshold of 100 samples, i.e., 0.2 [s], is used to set the estimated mean time between false alarms and potential faults (for more details, see Chap. 6 in [4]).
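The test function μ is a sliding-window average of each residual. A minimal discrete-time version, assuming the sampling time h_s and window length T reported above, could look as follows (an illustrative sketch only):

```python
from collections import deque

class ResidualFilter:
    """Moving-average test function mu over a window of T seconds."""
    def __init__(self, T=1.0, h_s=0.002):
        self.buf = deque(maxlen=int(round(T / h_s)))

    def update(self, residual):
        self.buf.append(residual)
        return sum(self.buf) / len(self.buf)
```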


Fig. 3 Residual signals. The solid line represents the average of all the experimental tests. The shaded light gray area depicts the time when fault f 1 is active, the gray area when fault f 2 is active, and the dark gray when both of them are active. © 2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854


Fig. 4 Fault detection. The solid line represents the average of all the experimental tests. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854

Once the fault f₁ occurs, at t_{f1} = 15 [s], the residual signals χ̂_z and χ̂_θ are affected. When f₂ occurs, at t_{f2} = 45 [s], the magnitude of the residual signal χ̂_z becomes even larger and χ̂_ψ is affected, while the effects of both faults offset each other in the residual χ̂_θ. Once the first fault disappears at 60 [s], the residuals χ̂_z and χ̂_ψ decrease, while the magnitude of χ̂_θ grows. Throughout the entire test, the residual χ̂_φ remains within its thresholds, since the roll angle φ is not affected by faults on rotors 1 and 2. The detection of the fault is shown in Fig. 4, where ‖χ̂₃‖∞ exceeds the value of ‖D‖, as described in (8), in an average time t̄_d = 15.42 [s], indicating the existence of an actuator fault. The isolation of the faults is shown in Fig. 5, where the alert signals A₃ and A₄ remain deactivated during all the tests, A₁ is activated in an average time t̄₁ = 15.42 [s] and deactivated at 60.52 [s], while A₂ is activated in an average time t̄₂ = 45.53 [s]. The fault identification scheme is implemented through (12), and its performance is shown in Fig. 6, for the LOE on rotor 1, and in Fig. 7, for the LOE on rotor 2. In order to provide performance indexes, the root mean square values of the identification errors e₁(t) = γ₁(t) − γ̂₁(t) and e₂(t) = γ₂(t) − γ̂₂(t) are obtained. In addition, the minimum, the mean value, and the mean percentage error are computed and reported in Table 1. These experiments show that the proposed FD scheme successfully detects, isolates, and identifies the faults. Using the information provided by the FD, it is possible to apply corrective actions for an active FTC. For illustrative purposes, the control objective, i.e., the trajectory tracking of x, y, z and ψ, is shown in Fig. 8: the quad-rotor fulfills the tracking task thanks to the passive fault tolerance provided by the robustness of the controller; however, the tracking error is noticeably larger due to the faults present in the actuators.

Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors 2

2

1

1

0

0

-1 0

20

40

60

80

100

517

-1 0

2

2

1

1

0

0

20

40

60

80

100

20

40

60

80

100

-1

-1 0

20

40

60

80

100

0

Fig. 5 Fault isolation. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854


Fig. 6 Fault identification on rotor 1. The solid line represents the average of all the experimental tests. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854

4 Active Fault Accommodation Control Design

In this section, an active FTC, proposed in [39] and based on actuator fault accommodation, is designed to partially compensate the effect of possible multiple losses of rotor effectiveness. The proposed FAC injects the FD information into a baseline robust-nominal controller to increase its fault tolerance, i.e.,
$$T(t) = M^{-1}u(t) + \hat{f}(t), \tag{15}$$


Fig. 7 Fault identification on rotor 2. The solid line represents the average of all the experimental tests. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854 Table 1 Properties of the performance indexes e¯1R M S (t) and e¯2R M S (t) LOE Mean–e R M S Mean–e% γˆ1 γˆ2

0.018 0.021

9% 7%

min–e R M S 0.011 0.010

2 0 -2

0

20

40

60

80

100

-1.6 0 1

20

40

60

80

100

20

40

60

80

100

20

40

60

80

100

1.6 0

0.5 0 0 10 0 -10

0

Fig. 8 Quad-rotor position and attitude. The solid line represents the average of all the experimental tests. The shaded light gray area depicts the time when fault f 1 is active, the gray area when fault f 2 is active, and the dark gray when both of them are active. ©2022 IEEE. Reprinted with permission, from, R. Falcón, H. Ríos and A. Dzul, “A Robust Fault Diagnosis for Quad-Rotors: A Sliding-Mode Observer Approach,” in IEEE/ASME Transactions on Mechatronics, vol. 27, no. 6, pp. 4487–4496, Dec. 2022, DOI:10.1109/TMECH.2022.3156854


where $u$ represents the nominal control input. Hence, based on (3), the current command thrust is given as $\bar{T}(t) = M^{-1}u(t) - \tilde{f}(t)$, with $\tilde{f} = f - \hat{f}$. A baseline robust-nominal controller is now designed in order to apply the proposed FAC.
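A compact sketch of the accommodation step (15), reusing the earlier helpers: it simply adds the identified fault estimate to the nominal thrust allocation (illustrative code, not the authors' implementation).

```python
import numpy as np

def fac_thrust_command(u_nominal, f_hat, M):
    """Fault accommodation of Eq. (15): T = M^{-1} u + f_hat."""
    return np.linalg.solve(M, u_nominal) + f_hat
```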

4.1 Baseline Robust-Nominal Control Design

In order to highlight the fault accommodation properties, a baseline robust-nominal controller is proposed based on a combination of PID controllers (for the position subsystem) and Continuous Twisting Controllers (CTCs) (for the attitude subsystem); for more details, please refer to [2]. It is possible to apply different HOSMCs for the design of the attitude controllers, such as continuous singular/nonsingular terminal sliding-mode controllers, or an HOSMC based on a discontinuous integral action (see, e.g., [40, 41]).

Consider the desired position and attitude vectors given by $\xi_d := (x_d, y_d, z_d)^T \in \mathbb{R}^3$ and $\eta_d := (\phi_d, \theta_d, \psi_d)^T \in \mathbb{R}^3$, respectively. Define the tracking error vectors as $e_\xi := (e_x, e_y, e_z)^T = \xi_1 - \xi_d$, $\varepsilon_\xi := (\varepsilon_x, \varepsilon_y, \varepsilon_z)^T = \xi_2 - \dot{\xi}_d$, $e_\eta := (e_\phi, e_\theta, e_\psi)^T = \eta_1 - \eta_d$, and $\varepsilon_\eta := (\varepsilon_\phi, \varepsilon_\theta, \varepsilon_\psi)^T = \eta_2 - \dot{\eta}_d$. The reference signals $\phi_d$ and $\theta_d$ must be properly designed in order to achieve the trajectory tracking task. To do this, a virtual control $\nu := (\nu_x, \nu_y, \nu_z)^T \in \mathbb{R}^3$ is introduced in the position error dynamics, i.e.,
$$\dot{e}_\xi = \varepsilon_\xi, \tag{16a}$$
$$\dot{\varepsilon}_\xi = \nu + w_\xi(\eta_1, u_z, \nu) - A_\xi\,\xi_2 + d_\xi(t) - \ddot{\xi}_d, \tag{16b}$$
where the disturbance term $w_\xi(\eta_1, u_z, \nu) := u_m\,g_\xi(\eta_1) - G - \nu$. Thus, selecting the virtual control $\nu$ appropriately, the command signals $u_z$, $\phi_d$ and $\theta_d$ can be computed as
$$u_z = m\sqrt{\nu_x^2 + \nu_y^2 + (\nu_z + g)^2}, \tag{17a}$$
$$\phi_d = \arcsin\!\big[u_m^{-1}\big(\nu_x\,s\psi_d - \nu_y\,c\psi_d\big)\big], \tag{17b}$$
$$\theta_d = \arctan\!\big[(\nu_z + g)^{-1}\big(\nu_x\,c\psi_d + \nu_y\,s\psi_d\big)\big]. \tag{17c}$$
Then, the virtual control $\nu$ and the angular moment vector $\tau$ are designed as
$$\nu = \nu^\star + \frac{g_\xi(\eta_1)}{m}\sum_{i=1}^{4}\hat{f}_i(t), \tag{18a}$$
$$\tau = \tau^\star + I_\eta\,M\,\hat{f}(t), \tag{18b}$$


where $I_\eta := (0_{3\times1},\, I_3) \in \mathbb{R}^{3\times4}$. The nominal controllers are given as
$$\nu^\star = \bar{\nu} + A_\xi\,\xi_2 + \ddot{\xi}_d, \tag{19a}$$
$$\tau^\star = J^{-1}\big(\bar{\tau} - B\,w_\eta(\eta_2) + A_\eta\,\eta_2 + \ddot{\eta}_d\big). \tag{19b}$$
The term $\bar{\nu}$ is designed as a PID controller, i.e.,
$$\bar{\nu} = K_{i\xi}\,\bar{e}_\xi + K_{p\xi}\,e_\xi + K_{d\xi}\,\varepsilon_\xi, \tag{20}$$
where $K_{i\xi} = \operatorname{diag}(k_{x1}, k_{y1}, k_{z1}) \in \mathbb{R}^{3\times3}$, $K_{p\xi} = \operatorname{diag}(k_{x2}, k_{y2}, k_{z2}) \in \mathbb{R}^{3\times3}$, $K_{d\xi} = \operatorname{diag}(k_{x3}, k_{y3}, k_{z3}) \in \mathbb{R}^{3\times3}$ and $\bar{e}_\xi := (\bar{e}_x, \bar{e}_y, \bar{e}_z)^T \in \mathbb{R}^3$, with $\bar{e}_p := \int_0^t e_p(\tau)\,d\tau$, $p = x, y, z$. Additionally, each term of $\bar{\tau} := (\bar{\tau}_\phi, \bar{\tau}_\theta, \bar{\tau}_\psi) \in \mathbb{R}^3$ is designed as a CTC [42], i.e.,
$$\bar{\tau}_p = v_p - k_{p1}\lceil e_p \rfloor^{1/3} - k_{p2}\lceil \varepsilon_p \rfloor^{1/2}, \tag{21a}$$
$$\dot{v}_p = -k_{p3}\lceil e_p \rfloor^{0} - k_{p4}\lceil \varepsilon_p \rfloor^{0}, \quad \forall p = \phi, \theta, \psi, \tag{21b}$$
where a possible selection for the gains is given in [42] as $k_{p1} = 25\Delta^{2/3}$, $k_{p2} = 15\Delta^{1/2}$, $k_{p3} = 2.3\Delta$ and $k_{p4} = 1.1\Delta$, with any $\Delta > 0$. The baseline robust-nominal controller (19)–(21) has an intrinsic passive fault tolerance due to its robustness properties [2]. Then, the following result can be established.

Theorem 2 Let the baseline control (19)–(21), together with the fault accommodation (18), be applied to system (1). Suppose that Assumptions 1–3 hold. Then, the position tracking error dynamics is Input-to-State Stable (ISS) with respect to $d_\xi$ and $\tilde{f}$, while the attitude tracking error $(e_\eta, \varepsilon_\eta) = 0$ is UFTS.

To summarize, the proposed strategy allows a quad-rotor to track a desired trajectory under the possible effect of multiple LOEs on the rotors and some external disturbances.
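The attitude part of the baseline controller is easy to prototype. The following Python sketch discretizes one CTC channel of (21) with explicit Euler and shows how the outer-loop conversion (17) turns a virtual control ν into the main thrust and the desired roll/pitch commands. The gains and the constant Δ are placeholders; this is an illustrative sketch under the stated assumptions, not the flight code.

```python
import numpy as np

def spow(s, beta):
    """Signed power |s|^beta * sign(s)."""
    return np.sign(s) * np.abs(s) ** beta

class CTCChannel:
    """One channel of the Continuous Twisting Controller, Eq. (21)."""
    def __init__(self, Delta, dt):
        self.k1, self.k2 = 25 * Delta ** (2 / 3), 15 * Delta ** 0.5
        self.k3, self.k4 = 2.3 * Delta, 1.1 * Delta
        self.v, self.dt = 0.0, dt

    def update(self, e, eps):
        tau_bar = self.v - self.k1 * spow(e, 1 / 3) - self.k2 * spow(eps, 1 / 2)
        self.v += self.dt * (-self.k3 * np.sign(e) - self.k4 * np.sign(eps))
        return tau_bar

def outer_loop_commands(nu, psi_d, m, g=9.81):
    """Conversion (17): virtual control -> main thrust and desired roll/pitch."""
    nx, ny, nz = nu
    u_z = m * np.sqrt(nx**2 + ny**2 + (nz + g)**2)
    u_m = u_z / m
    phi_d = np.arcsin((nx * np.sin(psi_d) - ny * np.cos(psi_d)) / u_m)
    theta_d = np.arctan((nx * np.cos(psi_d) + ny * np.sin(psi_d)) / (nz + g))
    return u_z, phi_d, theta_d
```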

4.2 Active Fault Accommodation Implementation

In order to show the performance of the proposed FAC, experimental tests on the QBall2 by Quanser are presented, implemented through MATLAB/Simulink. In this section, a comparison between the proposed FAC and the baseline robust-nominal controller is presented. Both controllers are subject to the same faults and are designed using the same gains. The desired tracking task is the helical trajectory given in (14). The rotor LOE faults are considered as



Fig. 9 Quad-rotor position. The shaded light gray area depicts the time when fault f_1 is active, and the dark gray when both faults are active

Fig. 10 Quad-rotor yaw angle. The shaded light gray area depicts the time when fault f 1 is active, and the dark gray when both faults are active


γ_1(t) = 0, ∀t < 15,   γ_1(t) = 0.2, ∀t ≥ 15,
γ_3(t) = 0, ∀t < 45,   γ_3(t) = 0.2, ∀t ≥ 45,

while rotors 2 and 4 are considered healthy. The control objective, given by the tracking trajectory, is shown in Figs. 9 and 10. Even though the quad-rotor is under LOE faults, the two controllers achieve different performances in the tracking task; when the proposed FAC strategy is added to the baseline robust-nominal controller, the tracking performance is evidently improved. The relation γ̂_i(t) = f̂_i(t)/T_i(t) can be used to obtain the magnitude of the LOE of each fault. The fault identification is shown in Figs. 11 and 12. After the occurrence of each fault, the proposed strategy approximates the magnitude of each LOE. Finally, the control inputs of each rotor are shown in Fig. 13, where the control effort of the FAC method is more aggressive.


Fig. 11 Fault identification on rotor 1


Fig. 12 Fault identification on rotor 3


Fig. 13 Rotor thrusts


Table 2 Tracking performance indices e_x,RMS, e_y,RMS, e_z,RMS and ē_ψ,RMS

Controller   x        y        z        ψ
FAC          0.0683   0.0750   0.0754   0.7092
CTC          0.1158   0.1323   0.1695   1.3391

In order to better illustrate the performance of the controllers, the root mean square (RMS) of each position error signal and of the yaw angle error is computed and reported in Table 2. The first 5 s of each experimental test are not taken into account, in order to avoid the effect of the initial conditions. Note that, using the same controller structure and gains, and under the effect of the same faults, the FAC improves the performance in each coordinate.
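As a side note, the performance index of Table 2 can be reproduced from logged error signals along the lines of the sketch below; the 5 s cut-off follows the text, while the signal names, sampling time, and the synthetic trace are placeholders.

```python
import numpy as np

def rms_after(err, t, t_skip=5.0):
    """Root-mean-square of an error signal, discarding the first t_skip seconds."""
    err = np.asarray(err, dtype=float)
    mask = np.asarray(t) >= t_skip
    return float(np.sqrt(np.mean(err[mask] ** 2)))

# toy example with a synthetic trace (placeholder for the logged e_x(t))
t = np.arange(0.0, 100.0, 0.01)
e_x = 0.5 * np.exp(-t) + 0.07 * np.sin(0.2 * t)   # initial transient + residual error
print(f"e_x RMS (t >= 5 s): {rms_after(e_x, t):.4f}")
```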

5 Fault-Tolerant Control for a Rotor Failure

In a rotor failure scenario, relation (2) ceases to be bijective. In the case of a failure on T_1, the relation between the control inputs and the remaining thrusts is given by

⎛u_z⎞   ⎛  1     1     1  ⎞ ⎛T_2⎞
⎜τ_φ⎟ = ⎜  0     L    −L  ⎟ ⎜T_3⎟ .    (22)
⎜τ_θ⎟   ⎜ −L     0     0  ⎟ ⎝T_4⎠
⎝τ_ψ⎠   ⎝ K_τ  −K_τ  −K_τ ⎠

From (22), it is easy to see that the control input τ_ψ is now linearly dependent on u_z and τ_θ, i.e.,

τ_ψ(t) = K_τ[−u_z(t) − 2 τ_θ(t)/L], ∀t ≥ t_f1,    (23)

where t_f1 represents the instant at which the rotor 1 failure has occurred. Similar expressions can be obtained by considering a failure on rotors 2, 3, or 4, respectively, as

τ_ψ(t) = K_τ[−u_z(t) + 2 τ_θ(t)/L], ∀t ≥ t_f2,    (24a)
τ_ψ(t) = K_τ[ u_z(t) + 2 τ_φ(t)/L], ∀t ≥ t_f3,    (24b)
τ_ψ(t) = K_τ[ u_z(t) − 2 τ_φ(t)/L], ∀t ≥ t_f4,    (24c)


where t_fi represents the instant at which the failure of the i-th rotor has occurred. Therefore, the FTC design is carried out taking into account the linear dependence caused by a single rotor failure. The strategy is composed of a full-state FT-SMO, which provides a full state estimation from the measurable output and identifies some types of disturbances, together with a combination of PID controllers (to generate the main thrust and the desired pitch and roll angles) and continuous HOSMCs (for the angular moments). Firstly, the design of the FT-SMO is presented.
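The linear dependence (23)-(24) is easy to encode; the helper below (a sketch, with K_tau and L as placeholder parameter values) returns the yaw moment that is implicitly fixed once u_z and the remaining pitch or roll moment are chosen after the i-th rotor fails.

```python
def dependent_tau_psi(i, u_z, tau_phi, tau_theta, K_tau=0.03, L=0.2):
    """Yaw moment imposed by (23)-(24) after a complete failure of rotor i (i = 1..4).
    K_tau and L are placeholder values for the drag/thrust ratio and the arm length."""
    if i == 1:
        return K_tau * (-u_z - 2.0 * tau_theta / L)   # (23)
    if i == 2:
        return K_tau * (-u_z + 2.0 * tau_theta / L)   # (24a)
    if i == 3:
        return K_tau * (u_z + 2.0 * tau_phi / L)      # (24b)
    if i == 4:
        return K_tau * (u_z - 2.0 * tau_phi / L)      # (24c)
    raise ValueError("rotor index must be 1, 2, 3 or 4")

print(dependent_tau_psi(1, u_z=14.0, tau_phi=0.0, tau_theta=-0.4))
```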

5.1 Full Finite-Time Sliding-Mode Observer

An additional full-state FT-SMO, based on the FD scheme, is proposed as [36]

ξ̂̇_1 = ξ̂_2 + K̂_1 φ_1(ê_ξ),    (25a)
ξ̂̇_2 = (Σ_{i=1}^{4} A_i(t) T_i(t)) g_ξ(η_1) − G − A_ξ ξ̂_2 + ξ̂_3 + K̂_2 φ_2(ê_ξ),    (25b)
ξ̂̇_3 = K̂_3 φ_3(ê_ξ),    (25c)
η̂̇_1 = η̂_2 + K̂_4 φ_1(ê_η),    (25d)
η̂̇_2 = J I_η A(t) M T(t) + w_η(η̂_2) − A_η η̂_2 + η̂_3 + K̂_5 φ_2(ê_η),    (25e)
η̂̇_3 = K̂_6 φ_3(ê_η),    (25f)

where I_η := (0_{3×1}, I_3) ∈ R^{3×4}, ê_ξ = ξ_1 − ξ̂_1 ∈ R^3 and ê_η = η_1 − η̂_1 ∈ R^3 are the output errors, respectively; the nonlinear output injections φ_1, φ_2, φ_3 : R^3 → R^3 are given as φ_1(s) := ⌈s⌋^{2/3}, φ_2(s) := ⌈s⌋^{1/3} and φ_3(s) := ⌈s⌋^0; and the diagonal gain matrices are K̂_q = diag(k̂_q1, k̂_q2, k̂_q3) ∈ R^{3×3}, with q = 1, …, 6. Define the state estimation error as ê := (ê_ξ, ε̂_ξ, ê_η, ε̂_η, ς̂_ξ, ς̂_η)^T ∈ R^{18}, where ε̂_ξ := ξ_2 − ξ̂_2 ∈ R^3 and ε̂_η := η_2 − η̂_2 ∈ R^3 are the linear and angular velocity estimation errors, respectively, while ς̂_ξ := d_ξ(t) − ξ̂_3 and ς̂_η := d_η(t) − η̂_3 are the estimation errors of the external disturbances. The following lemma describes the finite-time convergence properties of the FT-SMO (25).

Lemma 1 [36]. Let the observer (25) be applied to system (1). Suppose that Assumptions 1–3 hold, assume that the failure has been isolated, and let the observer gains be selected as k̂_q1 = 2 D̄_q^{1/3}, k̂_q2 = 1.5 D̄_q^{1/2} and k̂_q3 = 1.1 D̄_q, with q = 1, …, 6; then, ê = 0 is UFTS.

Remark 1 The control inputs T_i, used in the observer (25), are the real ones and are expressed as functions of the rotor thrusts, i.e., considering the expressions given in (2), (23) and (24), depending on the rotor failure. In this sense, aided by the fault


isolation matrix A, the proposed FT-SMO is sensitive to the disturbances and insensitive to the failure effect.

According to Lemma 1, ê = 0 is UFTS, which implies that ξ̂_1(t) = ξ_1(t), ξ̂_2(t) = ξ_2(t), ξ̂_3(t) = d_ξ(t), η̂_1(t) = η_1(t), η̂_2(t) = η_2(t) and η̂_3(t) = d_η(t), for all t ≥ T, where T is the observer convergence time. Therefore, if Assumptions 1 and 3 hold, the FT-SMO (25) provides the following disturbance identifications for all t ≥ T:

ξ̂_3(t) = (ξ̂_x(t), ξ̂_y(t), ξ̂_z(t))^T = d_ξ(t),    (26a)
η̂_3(t) = (η̂_φ(t), η̂_θ(t), η̂_ψ(t))^T = d_η(t).    (26b)
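A minimal scalar sketch of the homogeneous output injections used in (25) is given below: it estimates one state, its rate, and a slowly varying disturbance from a single measured output, with the gain scaling of Lemma 1. The plant, the Lipschitz bound D̄, and the disturbance are illustrative assumptions, not the quad-rotor model.

```python
import numpy as np

def phi(s, a):
    """Homogeneous injection |s|^a * sign(s); a = 0 gives the sign function."""
    return np.sign(s) * np.abs(s) ** a

Dbar = 4.0                                   # assumed bound related to the disturbance derivative
k1, k2, k3 = 2 * Dbar ** (1 / 3), 1.5 * Dbar ** (1 / 2), 1.1 * Dbar

dt = 1e-4
x1, x2 = 1.0, 0.0                            # plant: x1' = x2, x2' = -x1 + d(t)
x1h, x2h, x3h = 0.0, 0.0, 0.0                # estimates of the state, its rate, and d
for k in range(int(10.0 / dt)):
    t = k * dt
    d = 1.5 + 0.5 * np.sin(0.5 * t)          # disturbance to be identified
    e = x1 - x1h                             # output estimation error (only x1 measured)
    # the observer copies the known dynamics and adds the injections, mimicking (25)
    x1h += dt * (x2h + k1 * phi(e, 2 / 3))
    x2h += dt * (-x1h + x3h + k2 * phi(e, 1 / 3))
    x3h += dt * (k3 * phi(e, 0))
    # plant integration
    x1, x2 = x1 + dt * x2, x2 + dt * (-x1 + d)
print(f"disturbance estimation error: {abs(d - x3h):.3f}")
```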

5.2 Control Strategy

Due to the under-actuated nature of the quad-rotor and the failure of one of the rotors, it is not possible to control all the positions and angles independently. Moreover, based on (23) and (24), it is not possible to stabilize the yaw angle due to the effect of the failure. Hence, the proposed control strategy is designed so as to keep the yaw angular velocity bounded and to achieve the position tracking even in the presence of some external disturbances.

The proposed strategy is based on two cascaded loops. The internal loop controls the attitude of the vehicle, while the external loop contains the controllers of the translational coordinates. In this way, the external loop generates the desired roll and pitch angles that the internal loop uses for the attitude control. Define the tracking error vectors as e_ξ = (e_x, e_y, e_z)^T = ξ_1 − ξ_d, ε_ξ = (ε_x, ε_y, ε_z)^T = ξ_2 − ξ̇_d, e_η = (e_φ, e_θ)^T = H η_1 − η_d, ε_η = (ε_φ, ε_θ)^T = H η_2 − η̇_d, with H = (I_2, 0_{2×1}), and where ξ_d = (x_d, y_d, z_d)^T ∈ R^3 and η_d = (φ*, θ*)^T ∈ R^2 are the desired position and attitude vectors, respectively, with φ* and θ* being the reference signals designed as in (17). Therefore, the objective now is to design the virtual controller ν and the angular moments τ_φ and τ_θ such that the tracking error vector e := (e_ξ, ε_ξ, e_η, ε_η) ∈ R^{10} converges to zero despite the complete loss of a rotor and the external disturbances. Let us propose the following PID virtual controller

ν = K_iξ ē_ξ + K_pξ e_ξ + K_dξ ε_ξ + A_ξ ξ_2 − ξ̂_3 + ξ̈_d,    (27)

where K_iξ = diag(k_x1, k_y1, k_z1) ∈ R^{3×3}, K_pξ = diag(k_x2, k_y2, k_z2) ∈ R^{3×3}, K_dξ = diag(k_x3, k_y3, k_z3) ∈ R^{3×3}, and ē_ξ = (ē_x, ē_y, ē_z)^T ∈ R^3 with ē_x = ∫_0^t e_x(τ)dτ, ē_y = ∫_0^t e_y(τ)dτ and ē_z = ∫_0^t e_z(τ)dτ. Then, the following result can be established.

Lemma 2 [2]. Let the control (27) be applied to system (1), assume that Assumptions 1–3 hold, and assume that the failure has been isolated. Then, (e_ξ, ε_ξ) is ISS with respect to w_ξ = u_m g_ξ(η_1) − G − ν.

The attitude tracking error dynamics, i.e.,


ė_η = ε_η,    (28a)
ε̇_η = H(J τ + w_η(η_2) − A_η η_2 + d_η(t)) − η̈_d,    (28b)

can be viewed as a decoupled dynamics with two independent control inputs. Since the rotor thrust T_i is proportional to the square of the corresponding rotor speed, with a positive proportionality constant k_i, it follows that T_i ≥ 0. Therefore, after the complete loss of a single rotor, the angular moments τ_θ = L(T_1 − T_2) and τ_φ = L(T_3 − T_4) are severely affected. In the occurrence of a failure in the first rotor, it follows that τ_θ(t) = −L T_2(t) ≤ 0, for all t ≥ t_f1; consequently, such a control must be negative. A failure in the second rotor requires a positive control, since τ_θ(t) = L T_1(t) ≥ 0, for all t ≥ t_f2. In the same way, a failure in the third rotor requires a negative control, since τ_φ(t) = −L T_4(t) ≤ 0, for all t ≥ t_f3, while a failure in the fourth rotor requires a positive control, since τ_φ(t) = L T_3(t) ≥ 0, for all t ≥ t_f4. Therefore, such sign constraints must be taken into account for the attitude control design. Then, in order to satisfy the sign conditions, and considering f_φ(η_2, η̂_φ, φ̈*) := −b_φ θ̇ ψ̇ + (a_φ/J_x) φ̇ − η̂_φ + φ̈* and f_θ(η_2, η̂_θ, θ̈*) := −b_θ φ̇ ψ̇ + (a_θ/J_y) θ̇ − η̂_θ + θ̈*, the angular moments τ_φ and τ_θ are designed according to Algorithm 3.

Algorithm 3: Pitch and Roll Control Design
Input: τ̄_φ, τ̄_θ, f_φ, f_θ
Output: τ_φ, τ_θ
1:  if γ_1 = 0
2:    τ_φ = J_x (τ̄_φ + f_φ(η_2, η̂_3, φ̈*))
3:    τ_θ = (J_y/2)(τ̄_θ + f_θ(η_2, η̂_3, θ̈*) − |τ̄_θ + f_θ(η_2, η̂_3, θ̈*)|)
4:  elseif γ_2 = 0
5:    τ_φ = J_x (τ̄_φ + f_φ(η_2, η̂_3, φ̈*))
6:    τ_θ = (J_y/2)(τ̄_θ + f_θ(η_2, η̂_3, θ̈*) + |τ̄_θ + f_θ(η_2, η̂_3, θ̈*)|)
7:  elseif γ_3 = 0
8:    τ_φ = (J_x/2)(τ̄_φ + f_φ(η_2, η̂_3, φ̈*) − |τ̄_φ + f_φ(η_2, η̂_3, φ̈*)|)
9:    τ_θ = J_y (τ̄_θ + f_θ(η_2, η̂_3, θ̈*))
10: elseif γ_4 = 0
11:   τ_φ = (J_x/2)(τ̄_φ + f_φ(η_2, η̂_3, φ̈*) + |τ̄_φ + f_φ(η_2, η̂_3, φ̈*)|)
12:   τ_θ = J_y (τ̄_θ + f_θ(η_2, η̂_3, θ̈*))
13: end
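The sign constraints in Algorithm 3 amount to clipping the continuous control to one sign by subtracting (or adding) its absolute value. A small sketch is given below for the pitch channel (failure of rotor 1 or 2); the inertia value J_y is a placeholder.

```python
def tau_theta_algorithm3(tau_bar_theta, f_theta, failed_rotor, Jy=0.03):
    """Pitch moment of Algorithm 3 for a failure of rotor 1 (line 3) or rotor 2 (line 6).
    Jy is a placeholder inertia value; the same clipping pattern applies to tau_phi."""
    s = tau_bar_theta + f_theta
    if failed_rotor == 1:        # keep only the negative part: tau_theta <= 0
        return 0.5 * Jy * (s - abs(s))
    if failed_rotor == 2:        # keep only the positive part: tau_theta >= 0
        return 0.5 * Jy * (s + abs(s))
    return Jy * s                # other cases: plain compensation, as in lines 9 and 12

# the clipping never produces the forbidden sign:
print(tau_theta_algorithm3(0.4, 0.1, failed_rotor=1))    # 0.0 (positive demand suppressed)
print(tau_theta_algorithm3(-0.4, 0.1, failed_rotor=1))   # -0.009 (negative demand kept)
```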

The terms τ̄_φ and τ̄_θ are designed by the CTC [42] as

τ̄_ℓ = v_ℓ − k_ℓ0 ⌈e_ℓ⌋^{1/3} − k_ℓ1 ⌈ε_ℓ⌋^{1/2},    (29a)
v̇_ℓ = −k_ℓ2 ⌈e_ℓ⌋^0 − k_ℓ3 ⌈ε_ℓ⌋^0,    (29b)

with ℓ = φ, θ; and a possible selection for the gains is given as k_ℓ0 = 25ζ^{2/3}, k_ℓ1 = 15ζ^{1/2}, k_ℓ2 = 2.3ζ and k_ℓ3 = 1.1ζ, with any ζ > 0. Then, the following result can be established.


Lemma 3 [43]. Let τ_φ and τ_θ, given in Algorithm 3, be applied to system (1), assume that Assumptions 1–3 hold, and assume that the failure has been isolated. Suppose that the disturbances d_θ and d_φ satisfy the constraints

d_θ(t) ≥ (a_θ/J_y) θ̇(t) − b_θ φ̇(t)ψ̇(t) + θ̈*(t), ∀t ≥ t_f1,    (30a)
d_θ(t) ≤ (a_θ/J_y) θ̇(t) − b_θ φ̇(t)ψ̇(t) + θ̈*(t), ∀t ≥ t_f2,    (30b)
d_φ(t) ≥ (a_φ/J_x) φ̇(t) − b_φ θ̇(t)ψ̇(t) + φ̈*(t), ∀t ≥ t_f3,    (30c)
d_φ(t) ≤ (a_φ/J_x) φ̇(t) − b_φ θ̇(t)ψ̇(t) + φ̈*(t), ∀t ≥ t_f4,    (30d)

for the particular rotor failure. Then, (e_η, ε_η) = 0 is UFTS.

The disturbance constraints (30) are required to ensure that the thrusts on the rotors are always positive. In order to verify such constraints, the information of the finite-time disturbance estimation, given by η̂_3 in (26b), is used. For the fault-free case, τ_φ and τ_θ are designed according to lines 2 and 9 of Algorithm 3, as in [2].
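In practice, inequality (30a) can be monitored online from the disturbance estimate η̂_θ; the sketch below (with purely illustrative parameter and signal values) evaluates the margin whose positivity Fig. 15 later displays for the simulated failure.

```python
def margin_30a(d_theta_hat, theta_dot, phi_dot, psi_dot, theta_dd_ref,
               a_theta=0.01, b_theta=1.0, Jy=0.03):
    """Margin of constraint (30a): nonnegative while the constraint holds.
    The parameter values a_theta, b_theta, Jy are placeholders."""
    return d_theta_hat - (a_theta / Jy) * theta_dot + b_theta * phi_dot * psi_dot - theta_dd_ref

print(margin_30a(d_theta_hat=3.0, theta_dot=0.2, phi_dot=0.1, psi_dot=-1.0, theta_dd_ref=0.5))
```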

5.3 Virtual Control Disturbance Term

According to Lemma 2, the dynamics of e_ξ and ε_ξ is ISS with respect to w_ξ. Now, let us recall that such a disturbance term can be written as

w_ξ(η_1, u_z, ν) = u_m [ (c_φ s_θ c_ψ + s_φ s_ψ,  c_φ s_θ s_ψ − s_φ c_ψ,  c_φ c_θ)^T − (c_φ* s_θ* c_ψ + s_φ* s_ψ,  c_φ* s_θ* s_ψ − s_φ* c_ψ,  c_φ* c_θ*)^T ].

Such a function is Lipschitz in η_1 and continuous in u_z; then, it follows that ||w_ξ|| ≤ L_w ||e_η||, for all η_1, ν ∈ R^3 and u_z ∈ R, for some constant L_w > 0. Then, as a result of Lemma 3, it follows that e_η(t) = 0, for all t ≥ T_η > 0, with T_η the convergence time of the attitude errors. Therefore, the disturbance term w_ξ vanishes when e_η = 0. Moreover, the previous statement implies that (e_ξ, ε_ξ) = 0 will be Uniformly Exponentially Stable (UES).

5.4 Yaw Dynamics

The yaw dynamics, which cannot be controlled in the occurrence of a rotor failure due to the linear dependence of τ_ψ with respect to u_z, τ_φ and τ_θ, as shown in (23) and (24), is given by


ψ̈ = τ_ψ/J_z + b_ψ φ̇ θ̇ − (a_ψ/J_z) ψ̇ + d_ψ.    (31)

Based on the results given by Lemmas 1–3, the angular velocities φ̇ and θ̇, the desired angular accelerations φ̈* and θ̈*, as well as the disturbance identification η̂_3, are bounded, i.e., ||φ̇||_∞ ≤ δ_φ, ||θ̇||_∞ ≤ δ_θ, ||φ̈*||_∞ ≤ δ_φ*, ||θ̈*||_∞ ≤ δ_θ*, ||η̂_φ||_∞ ≤ D_φ and ||η̂_θ||_∞ ≤ D_θ, with positive constants δ_φ, δ_θ, δ_φ*, δ_θ*, D_φ and D_θ. Furthermore, at steady state (e_ξ, ε_ξ) = 0, the virtual control inputs satisfy ν_x = a_x ẋ − ξ̂_x + ẍ_d, ν_y = a_y ẏ − ξ̂_y + ÿ_d and ν_z = a_z ż − ξ̂_z + z̈_d, which are bounded, and therefore

lim_{t→∞} √(ν_x²(t) + ν_y²(t) + (ν_z(t) + g)²) ≤ L_ν,

with a positive constant L_ν. Consider the following expressions

L_ψ1 = b_ψ δ_φ δ_θ + D_ψ + (m K_τ/J_z) L_ν + (2K_τ/(L J_z)) (a_θ δ_θ + J_y (D_θ + δ_θ*)),
L_ψ2 = b_ψ δ_φ δ_θ + D_ψ + (m K_τ/J_z) L_ν + (2K_τ/(L J_z)) (a_φ δ_φ + J_x (D_φ + δ_φ*)).

Then, let us introduce the following result.

Lemma 4 [43]. Let the position control (27) and the attitude control signals given in Algorithm 3 be applied to system (1), assume that Assumptions 1–3 hold, and assume that the failure has been isolated. If the angular velocity bounds δ_φ and δ_θ satisfy

δ_φ < L a_ψ/(2K_τ J_y b_θ),    δ_θ < L a_ψ/(2K_τ J_x b_φ),    (32)

then, after a single rotor failure, the yaw angular velocity satisfies

lim_{t→∞} ||ψ̇||_{f_l} ≤ L_ψ1,   or   lim_{t→∞} ||ψ̇||_{f_k} ≤ L_ψ2,    (33)

in the occurrence of a failure in rotors l = 1, 2, or in rotors k = 3, 4, respectively.

The constraint (32) limits the admissible pitch and roll angular velocities under the corresponding rotor failure. However, since these bounds on the angular velocities, i.e., δ_φ and δ_θ, depend on the desired trajectory, it is possible to ensure the fulfillment of (32). Moreover, since the vehicle does not perform aggressive maneuvers, the pitch and roll angular velocities remain bounded. Based on all the previous statements, the main result of this section is established by the following theorem.

Theorem 3 [43]. Let the position control (27) and the attitude control given in Algorithm 3 be applied to system (1). If Assumptions 1–3 hold, the failure has been isolated, and constraints (30) and (32) are satisfied, then, after the


complete loss of a single rotor, at steady state (eξ , εξ , eη , εη ) = 0, the tracking error dynamics is UES and the yaw angular velocity is bounded. In the following section, some numerical simulations are depicted.
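Before moving to the implementation, the sketch below evaluates the admissible roll-rate bound of (32) and the asymptotic yaw-rate bound L_ψ1 of (33) for a set of illustrative parameters; all numerical values are assumptions, not the QBall2 data.

```python
# illustrative parameters (assumptions): arm length, drag/thrust ratio, inertias, mass
L, K_tau, Jx, Jy, Jz, m = 0.2, 0.03, 0.03, 0.03, 0.04, 1.4
a_theta, a_psi, b_psi, b_theta = 0.01, 0.01, 0.5, 1.0
delta_phi, delta_theta = 0.3, 0.3          # assumed roll/pitch rate bounds
delta_theta_ref = 0.2                      # assumed bound on the reference pitch acceleration
D_theta, D_psi, L_nu = 3.5, 2.0, 12.0      # assumed disturbance and virtual-control bounds

# constraint (32): admissible roll-rate bound after a failure of rotor 1 or 2
delta_phi_max = L * a_psi / (2 * K_tau * Jy * b_theta)
print(f"(32): delta_phi = {delta_phi} must be < {delta_phi_max:.3f}")

# asymptotic yaw-rate bound L_psi1 appearing in (33)
L_psi1 = (b_psi * delta_phi * delta_theta + D_psi + m * K_tau / Jz * L_nu
          + 2 * K_tau / (L * Jz) * (a_theta * delta_theta + Jy * (D_theta + delta_theta_ref)))
print(f"L_psi1 = {L_psi1:.2f} rad/s")
```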

5.5 Fault-Tolerant Control Implementation

In order to show the performance of the proposed FTC strategy, numerical simulations, carried out in MATLAB/Simulink, are presented. For this purpose, the failure of the first rotor is considered at t_f = 50 [s]. The desired trajectory is given as

x_d(t) = r [arctan(ϕ) + arctan(t − ϕ)] cos(ωt),    (34a)
y_d(t) = r [arctan(ϕ) + arctan(t − ϕ)] sin(ωt),    (34b)
z_d(t) = 1.5 [1 + tanh(t − ϕ/2)],    (34c)

with r = 0.4 [m], ϕ = 15 [rad] and ω = π/10 [rad/s], while the external disturbances are given as

d_x(t) = 0.2 + 0.1 [sin(ω_1 t) + cos(ω_6 t)],    (35a)
d_y(t) = −0.2 + 0.1 [sin(ω_5 t) − cos(ω_2 t)],    (35b)
d_z(t) = −2 − sin(ω_3 t) + cos(ω_4 t),    (35c)
d_φ(t) = −2 − 0.5 [cos(ω_3 t) + sin(ω_6 t)],    (35d)
d_θ(t) = 3 − 0.5 [sin(ω_1 t) − cos(ω_5 t)],    (35e)
d_ψ(t) = 1.5 + 0.5 [sin(ω_2 t) + cos(ω_4 t)],    (35f)

where ω_1 = 0.2 [rad/s], ω_2 = 0.3 [rad/s], ω_3 = 0.4 [rad/s], ω_4 = 2 [rad/s], ω_5 = 3 [rad/s] and ω_6 = 4 [rad/s]. The position tracking task is illustrated in Fig. 14: despite the failure of the first rotor and the external disturbances, the quad-rotor tracks the desired trajectory. The pitch, roll, and yaw angles, as well as the yaw velocity, are depicted in Fig. 16. After the occurrence of the failure, the yaw angle ψ begins to increase, while the angular velocity ψ̇ remains bounded. Then, using the information given by η̂_3, the disturbance constraint (30a), as well as the angular velocity constraint (32), are shown in Fig. 15; these constraints hold, since the corresponding curves never cross throughout the whole simulation. Finally, the thrusts of each rotor are shown in Fig. 17. The FTC forces the thrust of rotor 2 to be close to zero, while the thrusts of rotors 3 and 4 are slightly increased.
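For completeness, the reference trajectory (34) and the disturbance profiles (35) are straightforward to reproduce for simulation purposes; the sketch below is a direct transcription with the parameter values quoted above (only the function names are ours).

```python
import numpy as np

r, varphi, w = 0.4, 15.0, np.pi / 10
w1, w2, w3, w4, w5, w6 = 0.2, 0.3, 0.4, 2.0, 3.0, 4.0

def reference(t):
    """Desired position (34) at time t."""
    amp = r * (np.arctan(varphi) + np.arctan(t - varphi))
    return amp * np.cos(w * t), amp * np.sin(w * t), 1.5 * (1 + np.tanh(t - varphi / 2))

def disturbances(t):
    """External disturbances (35) at time t, ordered as (dx, dy, dz, dphi, dtheta, dpsi)."""
    return (0.2 + 0.1 * (np.sin(w1 * t) + np.cos(w6 * t)),
            -0.2 + 0.1 * (np.sin(w5 * t) - np.cos(w2 * t)),
            -2.0 - np.sin(w3 * t) + np.cos(w4 * t),
            -2.0 - 0.5 * (np.cos(w3 * t) + np.sin(w6 * t)),
            3.0 - 0.5 * (np.sin(w1 * t) - np.cos(w5 * t)),
            1.5 + 0.5 * (np.sin(w2 * t) + np.cos(w4 * t)))

print(reference(50.0))
print(disturbances(50.0))
```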


Fig. 14 Trajectories of the quad-rotor. Position


Fig. 15 Disturbance and angular velocity constraints

6 Conclusions

This chapter addresses three important problems. First, an FD method, based on an FT-SMO, is designed for the detection, isolation, and identification of multiple LOE faults despite the presence of external disturbances. Second, an active FTC, given by an FAC, is designed based on the proposed FD to partially compensate the actuator faults, ensuring the trajectory-tracking task. Finally, an active FTC is designed to deal with the effects of a rotor failure, where the rotor failure is isolated using the proposed FD. Such a strategy is composed of an FT-SMO, PID controllers, and HOSMCs that allow the position tracking to be achieved even in the presence of some external



Fig. 16 Trajectories of the quad-rotor. Orientation


Fig. 17 Rotor thrusts

disturbances. Numerical simulations and experimental results on Quanser's QBall2 platform show the performance of the proposed strategies.

Acknowledgements This work was supported in part by the SEP-CONACYT-ANUIES-ECOS NORD Project 315597. The authors gratefully acknowledge the financial support from TecNM projects, CONAHCYT CVU 270504 project 922 and CONAHCYT CVU 785635.


Appendix

Proof of Theorem 2 The attitude-tracking error dynamics is given as

ė_η = ε_η,
ε̇_η = J τ − I_η M f(t) + w_η(η_2) − A_η η_2 + d_η(t) − η̈_d.

Let us prove that the tracking error dynamics for the roll angle φ is UFTS when the controller (18b) is applied. The closed-loop tracking error dynamics for φ is then given as

ė_φ = ε_φ,

(36a) 1 3

1 2

ε˙ φ = ν¯ φ − kφ1 eφ  − kφ2 εφ  , ¯ φ (t), ν¯˙ φ = −kφ3 eφ 0 − kφ4 εφ 0 + 

(36b) (36c)

¯ φ (t) = −Jx−1 L( f˙˜3 (t) − f˙˜4 (t)) + where ν¯ φ = νφ − Jx−1 L( f˜3 (t) − f˜4 (t)) + dφ (t) and  ˙ + M −1 ζ˙ −1 (η1 )d(t), and thus d˙φ (t). It is given that f˙˜(t) = M −1 ζ −1 (η1 )d(t) f˙˜1 (t) =

m ˙ dz (t) + 4cφcθ m ˙ f˙˜2 (t) = dz (t) + 4cφcθ m ˙ f˙˜3 (t) = dz (t) − 4cφcθ m ˙ f˙˜4 (t) = dz (t) − 4cφcθ

Jz ˙ dψ (t) + 4K τ Jz ˙ dψ (t) − 4K τ Jz ˙ dψ (t) + 4K τ Jz ˙ dψ (t) − 4K τ

Jy ˙ dθ (t) + ρ(t), 2L Jy ˙ dθ (t) + ρ(t), 2L Jx ˙ dφ (t) + ρ(t), 2L Jx ˙ dφ (t) + ρ(t), 2L

˙ ˙ where ρ(t) = m(θcφsθ + φsφcθ )dz (t)/4c2 φc2 θ . Then, based on the previous equal˙ ˙ ities, it follows that f˜3 (t) − f˜4 (t) = Jx L −1 d˙φ (t), and hence, (36c) can be rewritten 2 as ν˙¯ φ = −kφ3 eφ 0 − kφ4 εφ 0 . Hence, if the gains are selected as kφ1 = 25 3 , 1 kφ2 = 15 2 , kφ3 = 2.3 and kφ4 = 1.1 with any  > 0, the finite-time convergence to zero is ensured for the tracking error dynamics (36) [42]. The same procedure can be followed to prove the finite-time convergence to zero for the tracking error dynamics of pitch and yaw angles, i.e., (eη ; εη ) = 0 is UFTS. The position tracking error dynamics, taking into account the virtual control (18a), is given as

Sliding-Mode-Based Fault Diagnosis and Fault-Tolerant Control for Quad-Rotors

"

˙ξ = A k

ξ

# 4 gξ (η1 )  ˜ + B dξ (t) − f i (t) + wξ (η1 , u z , ν) , m i=1


A=

! 06×3 I6 , 03 03×6

B=

(37)

! 06×3 , I3

where Ak := A + B K ξ ∈ R9×9 , ξ := (e¯ξT , eξT , εξT ) ∈ R9 and K ξ = (1K iξ , 1K pξ , 1K dξ ) ∈ R1×9 , with 1 := (1, 1, 1) ∈ R1×3 . The nonlinear decoupling term wξ = u m gξ (η1 ) − G − ν is Lipschitz in η1 and continuous in u m , then it follows that ||wξ ||∞ ≤ L η ||eη ||∞ , for all η1 , ν ∈ R3 and u m ∈ R, for some positive L η > 0. This implies that wξ vanishes when eη = 0. Hence, the closed-loop tracking error dynamics (37), considering wξ ≡ 0, can be rewritten as "

˙ξ = A k

ξ

# 4 gξ (η1 )  ˜ + B dξ (t) − f i (t) . m i=1

(38)

Let us propose a candidate Lyapunov function V :∈ R9 → R as V ( ξ ) = ξT P ξ , with P = P T > 0. The time derivative of V , along the trajectories of the system (38), satisfies "

V˙ ( ξ ) =

T ξ (P Ak

+

AkT

P)

ξ

+2

T ξ

# 4 gξ (η1 )  ˜ P B dξ (t) − f i (t) . m i=1

Since the pair (A, B) is controllable, there always exists K ξ such that Ak is Hurwitz; and thus, it holds that P Ak + AkT P = −Q, with Q = Q T > 0. Then, the time derivative of V is upper bounded as V˙ ( ξ ) ≤ −(1 − μ)λmin {Q}|| ξ ||2 , % $ 4 2λmax {P} 1  ˜ ∀|| ξ || ≥ || f i ||∞ , ||dξ ||∞ + μλmin {Q} m i=1 for any μ ∈ (0, 1). Then, it is proved that the position tracking error dynamics is ISS  with respect to dξ and f˜.

Proof of Lemma 3 The convergence to zero of the attitude tracking error dynamics, when the Continuous Twisting Control is active (Algorithm 3—lines 2, 5, 9, and 12), is given in [2]. Then, only the convergence to zero of the attitude tracking error dynamics, when the positive



(Algorithm 3—lines 6 and 11) and negative controllers (Algorithm 3—lines 3 and 8) are active, will be proven. Let us assume that the first rotor has failed. Then, according to Algorithm 3—line 3, the angular moment τθ must be negative, and thus, it is designed as τθ =

Jy (τ¯θ + f θ (η2 , ηˆ θ , θ¨ ) − |τ¯θ + f θ (η2 , ηˆ θ , θ¨ )|), 2

where f θ (η2 , ηˆ θ , θ¨ ) = −bθ φ˙ ψ˙ +

(39)

aθ ˙ θ Jy

− ηˆ θ + θ¨ . Note that if τ¯θ + f θ (η2 , ηˆ θ , θ¨ ) ≤ 0, the angular moment τθ given in (39) is rewritten as τθ = Jy (τ¯θ + f θ (η2 , ηˆ θ , θ¨ )), where the Continuous Twisting Control is active, just as in Algorithm 3—lines 9 and 12. On the other hand, if τ¯θ + f θ (η2 , ηˆ θ , θ¨ ) > 0, the control signal (39) is given by τθ = 0, where the control effort is null, in order to avoid negative thrusts. The closed-loop tracking error dynamics for θ , taking into account (39), is written as e˙θ = εθ , ε˙ θ =

τ¯θ + f θ (η2 , ηˆ θ , θ¨ ) − |τ¯θ + f θ (η2 , ηˆ θ , θ¨ )| − f θ (η2 , ηˆ θ , θ¨ ). 2

Such a dynamics can be viewed as a state-dependent switched system where the switching surface is given by S := {(τ¯θ , η2 , ηˆ θ , θ¨ ) ∈ R × R2 × R : τ¯θ + f θ (η2 , ηˆ θ , θ¨ ) = 0}, i.e., e˙θ = εθ ,

(40a)

ε˙ θ = gσ (t) (τ¯θ , η2 , ηˆ θ , θ¨ ),  1, if f θ (η2 , ηˆ θ , θ¨ ) ≤ −τ¯θ , σ (t) = 2, if f θ (η2 , ηˆ θ , θ¨ ) > −τ¯θ ,

(40b) (40c)

where g1 (τ¯θ , η2 , ηˆ θ , θ¨ ) = τ¯θ and g2 (τ¯θ , η2 , ηˆ θ , θ¨ ) = bθ φ˙ ψ˙ − aJθy θ˙ + ηˆ θ − θ¨ . In order to provide the convergence properties of the tracking error dynamics (40), the analysis is carried out for each operating mode, i.e., for each σ = 1, 2. (1) Case σ = 1: In this case it holds that f θ (η2 , ηˆ θ , θ¨ ) ≤ −τ¯θ , implying that g1 (τ¯θ , η2 , ηˆ θ , θ¨ ) = τ¯θ , and the tracking error dynamics (40) is rewritten as e˙θ = εθ ,

(41a) 1 3

1 2

ε˙ θ = vθ − kθ0 eθ  − kθ1 εθ  ,

(41b)

v˙θ = −kθ2 eθ  − kθ3 εθ  .

(41c)

0

0

Then, at steady state (eθ , εθ , vθ ) = 0, according to [42], system (41) is UFTS. 1 1 Thus, it follows that τ¯θ (t) = vθ − kθ0 eθ  3 − kθ1 εθ  2 = 0, for all t ≥ Tθ ; and hence, the switching condition turns into f θ (η2 , ηˆ θ , θ¨ ) ≤ 0, implying that (39) is



rewritten as τθ = Jy f θ (η2 , ηˆ θ , θ¨ ) = Jy (−bθ φ˙ ψ˙ +

aθ θ˙ − ηˆ θ + θ¨ ). Jy

(42)

Since f θ (η2 , ηˆ θ , θ¨ ) ≤ 0, then the control law (42) is negative. Recalling that ˙ ψ(t) ˙ ηˆ θ (t) = dθ (t), for all t ≥ T with T < t f 1 , if ηˆ θ (t) = dθ (t) ≥ aJθy θ˙ (t) − bθ φ(t) + ¨ ¨ θ (t), holds for all t ≥ t f 1 , i.e., the constraint (30a), then f θ (η2 (t), ηˆ θ (t), θ (t)) ≤ 0, for all t ≥ t f 1 ; and hence, system (41) never switches to the case σ = 2 and (eθ , εθ , vθ ) = 0 is UFTS. (2) Case σ = 2: In this case it holds that f θ (η2 , ηˆ θ , θ¨ ) > −τ¯θ , implying that g2 (τ¯θ , η2 , ηˆ θ , θ¨ ) = bθ φ˙ ψ˙ − aJθy θ˙ + ηˆ θ − θ¨ , and the tracking error dynamics (40) is rewritten as e˙θ = εθ ,

(43a)

aθ ε˙ θ = bθ φ˙ ψ˙ − θ˙ + dθ − θ¨ . Jy

(43b)

If constraint (30a), i.e., ηˆ θ (t) = dθ (t) ≥

aθ ˙ θ (t) Jy

˙ ψ(t) ˙ − bθ φ(t) + θ¨ (t), holds for

all t ≥ t f 1 , then it follows that f θ (η2 (t), ηˆ θ (t), θ¨ (t)) ≤ 0, for all t ≥ t f 1 . Taking into account that εθ = θ˙ − θ˙ , (43b) can be written as follows ε˙ θ = bθ φ˙ ψ˙ −

aθ aθ θ˙ − εθ + dθ . Jy Jy

The solution of the previous differential equation is given by  +

εθ (t) = εθ (t f 1 )e t

a

e

− Jθy (t−τ )

˙ )ψ(τ ˙ )− (bθ φ(τ

tf1

a

− Jθy (t−t f 1 )

aθ θ˙ (τ ) + dθ (τ ) − θ¨ (τ ))dτ, Jy

˙ ψ(t) ˙ and, since f θ (η2 (t), ηˆ θ (t), θ¨ (t)) ≤ 0 implies that bθ φ(t) − ¨ θ (t) ≥ 0, for all t ≥ t f 1 , it is clear that

aθ ˙ θ (t) Jy

lim εθ (t) > 0 ⇒ lim eθ (t) > 0.

t→∞

t→∞

Therefore, the previous statements imply that lim vθ (t) < 0,

t→∞

and thus

1

1

lim τ¯θ (t) = lim (vθ − kθ0 eθ  3 − kθ1 εθ  2 ) = −∞.

t→∞

t→∞

+ dθ (t) −



On the other hand, note that, due to the convergence properties of the CTC, the roll angular velocity φ̇ converges to a bounded reference φ̇*; the yaw angular velocity ψ̇ is bounded, as shown later by Lemma 4; and, due to Assumption 1, the disturbance term d_θ is also bounded. Thus, it follows that f_θ(η_2, η̂_θ, θ̈*) = −b_θ φ̇ ψ̇ + (a_θ/J_y) θ̇ − d_θ(t) + θ̈* is bounded and f_θ(η_2(t), η̂_θ(t), θ̈*(t)) ≤ 0, for all t ≥ t_f1. Therefore, there always exists a finite time t_σ1 such that f_θ(η_2(t), η̂_θ(t), θ̈*(t)) ≤ −τ̄_θ(t) holds for all t ≥ t_σ1, and hence, system (41) always switches to the case σ = 1, for which (e_θ, ε_θ, v_θ) = 0 is UFTS. The previous analysis, together with the fact that (e_φ, ε_φ) = 0 is UFTS, implies that, at steady state (e_η, ε_η) = 0, the attitude tracking error dynamics is UFTS. The same procedure can be followed to analyze the convergence properties of Algorithm 3 when other rotors have failed. This concludes the proof. ∎

Proof of Lemma 4 In this proof, the yaw dynamics is analyzed in the occurrence of a rotor failure. With this aim, the results given by Lemmas 1–3 are considered; then, it is demonstrated that the yaw angular acceleration can be bounded at steady state. Next, the conditions to ensure the boundedness of the yaw angular velocity are obtained by means of the acceleration upper bound. Consider the loss of the first rotor. Then, using (23), the yaw dynamics (31) can be rewritten as aψ τθ Kτ ψ˙ + dψ , (44) ψ¨ = − (u z + 2 ) + bψ φ˙ θ˙ − Jz L Jz and by substituting u z , given in (17a), one obtains ψ¨ = −

Kτ  2 aψ τθ ψ˙ + dψ . (m νx + ν y2 + (νz + g)2 + 2 ) + bψ φ˙ θ˙ − Jz L Jz

As it was shown by Lemma 3, at steady state, τθ = Jy (−bθ φ˙ ψ˙ + as in (42). Then, at steady state, (45) satisfies m Kτ  2 νx + ν y2 + (νz + g)2 + bψ φ˙ θ˙ Jz 2K τ Jy aθ aψ − ψ˙ + dψ − ( θ˙ − bθ φ˙ ψ˙ − ηˆ θ + θ¨ ). Jz L Jz Jy

aθ ˙ θ Jy

(45)

− ηˆ θ + θ¨ )

ψ¨ = −

(46)

Taking into account that the angular velocities and the disturbances are bounded, (46) satisfies ! 2K τ Jy bθ δφ aψ ˙ ¨ ψ, (47) − ||ψ|| f1 ≤ L ψ1 + L Jz Jz


with L ψ1 = bψ δφ δθ + Dψ − the solution of (47) satisfies ˙ f1 ≤ ||ψ(t ˙ f1 )|| f1 e ||ψ||

&


m Kτ 2K τ Lν + (aθ δθ + Jy (Dθ + δθ )). Subsequently, Jz L Jz

2K τ J y L Jz

bθ δφ −

aψ Jz

'

(t−t f1 ) + L

& ψ1

1−e

2K τ J y L Jz

bθ δφ −

aψ Jz

! (t−t f1 ) ,

'

LJ a

for all t ≥ t f1 . Therefore, if (32) holds, i.e., δφ < 2K τ Jzy Jψz bθ , for all t ≥ t f1 , then it is clear that ˙ f1 ≤ L ψ1 . lim ||ψ|| t→∞

The same result is obtained if the second rotor fails and similar conclusions can be obtained considering a failure in the other rotors. This concludes the proof. 

References 1. Palunko, I., Cruz, P., Fierro, R.: Agile load transportation. IEEE Robot. Autom. Mag. 19(3), 69–79 (2012) 2. Ríos, H., Falcón, R., González, O.A., Dzul, A.: Continuous sliding-modes control strategies for quad-rotor robust tracking: real-time application. IEEE Trans. Ind. Electron. 66, 1264–1272 (2019) 3. Falcón, R., Ríos, H., Mera, M., Dzul, A.: Attractive ellipsoid-based robust control for quadrotor tracking. IEEE Trans. Ind. Electron. 67(9), 7851–7860 (2020) 4. Blanke, M., Kinnaert, M., Lunze, J., Staroswiecki, M.: Diagnosis and Fault Tolerant Control. Springer, New York (2003) 5. Dydek, Z.T., Annaswamy, A.M., Lavretsky, E.: Adaptive control of quadrotor UAVs: a design trade study with flight evaluations. IEEE Trans. Control Syst. Technol. 21(4), 1400–1406 (2013) 6. Merheb, A.R., Noura, H., Bateman, F.: Design of passive fault-tolerant controllers of a quadrotor based on sliding mode theory. Int. J. Appl. Math. Comput. Sci. 25(3), 561–576 (2015) 7. Xiao, B., Hu, Q., Zhang, Y.: Adaptive sliding mode fault tolerant attitude tracking control for flexible spacecraft under actuator saturation. IEEE Trans. Control Syst. Technol. 20(6), 1063–6536 (2012) 8. Wang, B., Zhang, Y.: An adaptive fault-tolerant sliding mode control allocation scheme for multirotor helicopter subject to simultaneous actuator faults. IEEE Trans. Ind. Electron. 65(5), 4227–4236 (2018) 9. Song, Y., He, L., Zhang, D., Qian, J., Fu, J.: Neuroadaptive fault-tolerant control of quadrotor UAVs: a more affordable solution. IEEE Trans. Neural Netw. Learn. Syst. 30(7), 1975–1983 (2019) 10. Tang, P., Lin, D., Zheng, D., Fan, S., Ye, J.: Observer based finite-time fault tolerant quadrotor attitude control with actuator faults. Aerosp. Sci. Technol. 105968 (2020) 11. Nian, X., Chen, W., Chu, X., Xu, Z.: Robust adaptive fault estimation and fault tolerant control for quadrotor attitude systems. Int. J. Control 93(3), 725–737 (2020) 12. Wang, B., Shen, Y., Zhang, Y.: Active fault-tolerant control for a quadrotor helicopter against actuator faults and model uncertainties. Aerosp. Sci. Technol. 99, 105745 (2020) 13. Avram, R.C., Zhang, X., Muse, J.: Quadrotor actuator fault diagnosis and accommodation using nonlinear adaptive estimators. IEEE Trans. Control Syst. Technol. 25(6), 2219–2226 (2017) 14. Lyu, P., Liu, S., Lai, J., Liu, J.: An analytical fault diagnosis method for yaw estimation of quadrotors. Control Eng. Pract. 86, 118–128 (2019)



15. Chang, J., Cieslak, J., Dávila, J., Zhou, J., Zolghadri, A., Guo, Z.: A two-step approach for an enhanced quadrotor attitude estimation via IMU data. IEEE Trans. Control Syst. Technol. 26(3), 1140–1148 (2018) 16. Aguilar-Sierra, H., Flores, G., Salazar, S., Lozano, R.: Fault estimation for a quad-rotor MAV using a polynomial observer. J. Intell. Robot. Syst. 73, 455–468 (2013) 17. Freddi, A., Longhi, S., Monteriù, A.: A diagnostic Thau observer for a class of unmanned vehicles. J. Intell. Robot. Syst. 67(1), 61–73 (2012) 18. Han, W., Wang, Z., Shen, Y.: Fault estimation for a quadrotor unmanned aerial vehicle by integrating the parity space approach with recursive least squares. Proc. Inst. Mech. Eng., Part G: J. Aerosp. Eng. 232(4), 783–796 (2018) 19. Amoozgar, M.H., Chamseddine, A., Zhang, Y.: Experimental test of a two-stage Kalman filter for actuator fault detection and diagnosis of an unmanned quadrotor helicopter. J. Intell. Robot. Syst. 70(1–4), 107–117 (2013) 20. Zhong, Y., Zhang, Y., Zhang, W., Zuo, J., Zhan, H.: Robust actuator fault detection and diagnosis for a quadrotor UAV with external disturbances. IEEE Access 6, 48169–48180 (2018) 21. Cen, Z., Noura, H., Susilo, T.B., Al Younes, Y.: Robust fault diagnosis for quadrotor UAVs using adaptive Thau observer. J. Intell. Robot. Syst. 73(1–4), 573–588 (2014) 22. Chen, F., Jiang, R., Zhang, K., Jiang, B., Tao, G.: Robust backstepping sliding-mode control and observer-based fault estimation for a quadrotor UAV. IEEE Trans. Ind. Electron. 63(8), 5044–5056 (2016) 23. Chandra, K.P.B., Alwi, H., Edwards, C.: Fault reconstruction for a quadrotor using an LPV sliding mode observer. IFAC-PapersOnLine 48(21), 374–379 (2015) 24. Capello, E., Punta, E., Fridman, L.: Strategies for control, fault detection and isolation via sliding mode techniques for a 3-DOF helicopter. In: The 55th Conference on Decision and Control, pp. 6464–6469 (2016) 25. Liu, J., Jiang, B., Zhang, Y.: Sliding mode observer-based fault detection and isolation in flight control systems. In: The IEEE International Conference on Control Applications, pp. 1049–1054 (2007) 26. Wang, X., Sun, S., van Kampen, E.-J., Chu, Q.: Quadrotor fault tolerant incremental sliding mode control driven by sliding mode disturbance observers. Aerosp. Sci. Technol. 87, 417–430 (2019) 27. Merheb, A.-R., Noura, H., Bateman, F.: Emergency control of AR drone quadrotor UAV suffering a total loss of one rotor. IEEE/ASME Trans. Mechatron. 22(2), 961–971 (2017) 28. Lippiello, V., Ruggiero, F., Serra, D.: Emergency landing for a quadrotor in case of a propeller failure: a backstepping approach. In: The 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4782–4788 (2014) 29. Mueller, M.W., D’Andrea, R.: Stability and control of a quadrocopter despite the complete loss of one, two, or three propellers. In: The 2014 IEEE International Conference on Robotics and Automation, pp. 45–52 (2014) 30. Lanzon, A., Freddi, A., Longhi, S.: Flight control of a quadrotor vehicle subsequent to a rotor failure. J. Guid. Control Dyn. 37, 580–591 (2014) 31. Sun, S., Sijbers, L., Wang, X., De Visser, C.: High-speed flight of quadrotor despite loss of single rotor. IEEE Robot. Autom. Lett. 3, 3201–3207 (2018) 32. Hou, Z., Lu, P., Tu, Z.: Nonsingular terminal sliding mode control for a quadrotor UAV with a total rotor failure. Aerosp. Sci. Technol. 98(105716), 1–18 (2020) 33. 
Falcón, R., Ríos, H., Dzul, A.: Comparative analysis of continuous sliding-modes control strategies for quad-rotor robust tracking. Control Eng. Pract. 90, 241–256 (2019) 34. García-Carrillo, L.R., Dzul-López, A.E., Lozano, R., Pégard, C.: Quad rotorcraft control. In: Advances in Industrial Control. Springer, London, Heidelberg, New York, Dordrecht (2013) 35. Khan, W., Nahon, M.: A propeller model for general forward flight conditions. Int. J. Intell. Unmanned Syst. 3(2/3), 72–92 (2015) 36. Levant, A.: High-order sliding modes: differentiation and output-feedback control. Int. J. Control 76(9–10), 924–941 (2003)



37. Falcón, R., Ríos, H., Dzul, A.: A robust fault diagnosis for quad-rotors: a sliding-mode observer approach. IEEE/ASME Trans. Mechatron. 27(6), 4487–4496 (2022) 38. Quanser Consulting Inc.: Quanser QBall2, User Manual. http://www.quanser.com, pp. 1–41 (2014) 39. Falcón, R., Ríos, H., Dzul, A.: An actuator fault accommodation sliding-mode control approach for trajectory tracking in quad-rotors. In: 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, pp. 7100–7105 (2021) 40. Mercado-Uribe, Á., Moreno, J.A.: Discontinuous integral action for arbitrary relative degree in sliding-mode control. Automatica 118, 109018 (2020) 41. Cruz-Zavala, E., Moreno, J.A.: Higher order sliding mode control using discontinuous integral action. IEEE Trans. Autom. Control 65(10), 4316–4323 (2019) 42. Torres-González, V., Sanchez, T., Fridman, L.M., Moreno, J.A.: Design of continuous twisting algorithm. Automatica 80, 119–126 (2017) 43. Falcón, R., Ríos, H., Dzul, A.: A sliding-mode-based active fault-tolerant control for robust trajectory tracking in quad-rotors under a rotor failure. Int. J. Robust Nonlinear Control 32(15), 8451–8469 (2022)

Second-Order Sliding-Mode Leader-Follower Consensus for Networked Uncertain Diffusion PDEs with Spatially Varying Diffusivity Alessandro Pilloni, Alessandro Pisano, Elio Usai, and Yury Orlov

Abstract The primary concern of the present chapter is to address the distributed leader-following consensus tracking problem for a network of agents governed by uncertain diffusion PDEs with the Neumann-type boundary actuation and uncertain spatially varying diffusivity. Except for the “leader” agent that generates the reference profile to be tracked, all remaining agents, called “followers,” are required to asymptotically track the infinite-dimensional time-varying leader state. The dynamics of the follower agents are affected by smooth boundary disturbances unbounded in magnitude and with a bounded derivative. The proposed local interaction rule is developed by assuming that only neighboring collocated boundary sensing is available, and it consists of a nonlinear sliding-mode-based protocol. The performance and stability properties of the resulting infinite-dimensional networked system are then formally studied by means of Lyapunov analysis. The analysis demonstrates the global exponential stability of the resulting error boundary-value problem in a suitable Sobolev space. The effectiveness of the developed control scheme is supported by simulation results.

1 Introduction Since its first formulation, the problem of understanding when individual actions of interacting dynamical systems give rise to a coordinated emergent behavior continues receiving considerable attention in academic research; see, for instance, [15, 38] and references therein. In the most typical setup, a local interaction rule is found such that A. Pilloni (B) · A. Pisano · E. Usai Department of Electrical and Electronic Engineering (DIEE), University of Cagliari, Cagliari, Italy e-mail: [email protected] A. Pisano e-mail: [email protected]; [email protected] Y. Orlov CICESE Research Center, Ensenada, Mexico e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. R. Oliveira et al. (eds.), Sliding-Mode Control and Variable-Structure Systems, Studies in Systems, Decision and Control 490, https://doi.org/10.1007/978-3-031-37089-2_20




a number of dynamical agents reach an agreement condition among the respective state values that is called consensus. The local interaction rule is distributed in the sense that it can only use information from the local agent and from its neighbors. The application of the consensus concept can also be found in the distributed optimization setting; see, e.g., [30]. A particular class of consensus problems is the leader-following distributed tracking, where a specific agent in the network plays the role of a leader, and all remaining agents, called followers, are wanted to synchronize their state with that of the leader; see for details [4, 15]. Relevant applications of the leader-following setting to real problems can be found, e.g., in [9, 35] and [21] respectively dealing with flocking, coordinated tracking of mechanical systems, and opinion dynamics. The reader can refer to [5] for an overview of consensus-based coordination control problems within the finite-dimensional setting, namely in the case where agents are modeled by means of lumped-parameter dynamical models. It can be noticed that deep connections between the consensus problem and certain partial differential equations (PDEs), the diffusion equation in particular, exist. Indeed, discretizing in the spatial domain the 1D diffusion equation yields a highdimensional system of networked first-order continuous-time integrators interacting through a linear consensus protocol. Owing to this fact, some authors, see for instance [7, 32], have exploited (discretized forms of) different distributed parameter systems, such as advection and diffusion–advection equations, to derive more effective consensus protocols with improved convergence features. In spite of this intimate relationship between consensus algorithms and certain discretized PDEs, the consensus problem for networks of coupled infinite-dimensional dynamical agents has not received yet the same level of attention than its corresponding finite-dimensional counterpart. The leaderless consensus (i.e., synchronization) problem for coupled hyperbolic processes governed by the wave equation was studied firstly in [19], then in [20] and [1]. In the parabolic PDE setting, it is worth mentioning the seminal work [10] where the consensus-based control for a network of agents, each modeled by a certain class of parabolic PDEs, was studied first. Afterwards, in [29] the leaderless coordination problem for a network of agents, each governed by a diffusion equation and coupled at one boundary, is investigated. There, it was interestingly shown that the use a linear boundary local interaction rule of the agents’ state forces the network states to converges toward the spatial average of their initial conditions, thereby establishing an infinite-dimensional counterpart of the average consensus algorithm for single-integrator finite-dimensional agents. In addition, the same paper proposed a sliding-mode-based boundary local interaction rule, based on boundary sensing, yielding consensus in the presence of matched boundary perturbations. The distributed leader-following tracking problem for multi-agent network agents governed by parabolic PDEs subject to external disturbances was dealt with in [14] and [25]. In [14], the goal was achieved by means of an adaptive unit vector slidingmode local interaction rule and by exploiting distributed state measurements taken on the entire spatial domain. On the contrary, in [25] the same goal was achieved

Second-Order Sliding-Mode Leader-Follower Consensus …


using collocated measurement only along with a second-order sliding-mode-based local interaction rule. Motivated by the previous state-of-the-art analysis, in this chapter we take a step beyond the existing literature by addressing the leader-following consensus problem with reference to a more general class of agents’ dynamics as compared to that considered in earlier works, in particular [25]. More precisely, we are going to consider a network of infinite-dimensional agents governed by the diffusion PDE having an uncertain and spatially varying diffusion coefficient. This extended class of agents’ dynamics was never considered in previous works on the leaderless or leader-following consensus for parabolic PDEs. The presence of a spatially varying diffusivity characterizes several relevant physical applications; as an example, it is worth mentioning the heat diffusion problem on non-homogeneous media, as well as the groundwater flow on unsaturated soils modeled by means of the so-called Richards equation. For further details, the reader can refer to the classic book [6], where the known exact solutions of most of the heat flow boundary-value control problems of interest is discussed in detail. Moreover, it is also worth mentioning the recent study [23], where a non-collocated robust sliding-mode controller is developed for the soil irrigation at a specified depth and under a spatially varying diffusivity. We propose a sliding-mode-based local interaction rule similar to that considered in [25], and we present here a novel Lyapunov analysis demonstrating the global exponential achievement of the leader-following consensus for this wider class of agent’s dynamics that also includes possibly unbounded external boundary perturbations. The proposed design is accompanied by a set of simple tuning rules for setting the control parameters. The provided analysis demonstrates the global exponential stability of the synchronization error boundary-value problem in the H2 (0, 1) Sobolev space, which in turn implies the point-wise achievement of leader-following consensus. The correct functioning of the proposed local interaction rule is finally supported by numerical simulations.

1.1 Chapter Organization In Sect. 2, the used mathematical notation along with some preliminaries notions on Sobolev spaces, spatial norms, and graph theory is illustrated. Then, in Sect. 3 the considered distributed leader-following consensus tracking problem is formulated, the proposed second-order sliding-mode local interaction protocol is presented, and the convergence and stability features are investigated by means of Lyapunov analysis in Sect. 4. Simulation supporting the proposed theoretical results is then discussed in Sect. 5. Finally, the last Sect. 6 reports some concluding remarks and perspectives for the next research.



2 Preliminaries and Notations 2.1 Mathematical Notation and Norm Properties The sets of natural and real numbers are denoted by N and R. For n ∈ N, a real column vector is denoted by X = (x1 , . . . , xn ) ∈ Rn . The transpose of X is denoted by X  ∈ R1×n . Let 1n = (1, . . . , 1) ∈ Rn and 0n = (0, . . . , 0) ∈ Rn be the n-dimensional allone and all-zero column vectors, whereas I n = diag(1n ) ∈ Rn×n denotes the ndimensional identity matrix, and the operator diag(X ) returns a square diagonal matrix with the elements of vector X on the main diagonal. The 1-norm, 2-norm, and ∞-norm of X are defined as follows: X 1 =

n  i=1

|xi |, X 2 =

 n

 21 xi2

i=1

  , X ∞ = max xi . 1≤i≤n

(1)

Moreover, for the norms in (1) the next chain of inequalities hold [16] X ∞ ≤ X 2 ≤ X 1 ≤



n X 2 , with X ∈ Rn .

(2)

Consider an n-dimensional positive definite matrix M = [m i j ]  0, with entries m i j ∈ R, and let 0 < m1 ≤ m2 ≤ · · · ≤ mn ⊂ R+ be the collection of its ordered eigenvalues, then the 2-norm of the vector M X ∈ Rn satisfies the next relation m1 X 2 ≤ M X 2 ≤ mn X 2 , where mn ≥ m1 > 0.

(3)

Let X p and Y q be the p- and q-norms of X and Y ∈ Rn , and let p ≥ 1 and q ≥ 1 be given such that 1/p + 1/q = 1. Then, the next chain of inequalities is in force [16] Y 2q X 2p |X  Y | ≤ X p Y q ≤ + . (4) p q The Sign (X ) operator with argument X ∈ Rn denotes the set-valued componentwise operator ⎧ ⎪ if xi > 0 ⎨1 sign(xi ) = [−1, 1] if xi = 0 . (5) ⎪ ⎩ −1 if xi < 0

Second-Order Sliding-Mode Leader-Follower Consensus …

545

2.2 Preliminaries on Sobolev Spaces and Integral Norms The space of absolutely continuous scalar functions z(ς ) with square integrable derivatives z (k) (ς ) on (0, 1) up to a given order  ∈ N, and k ≤ , is called Sobolev space of order , and here it is denoted by H (0, 1). It follows that the space H0 (0, 1) corresponds to the well-known Hilbert space, which is commonly denoted also by L 2 (0, 1). Let us now consider a function z(ζ ) ∈ H (0, 1); the corresponding H -norm is defined as follows: z(·)H (0,1) =

0

1

 

z (k) (ξ )

2

 21 dξ

.

(6)

k=0

In the remainder of the chapter, the first- and second-order derivatives z (1) (ς ) and z (2) (ς ) will be also denoted as z (1) (ς ) = z ς (ς ) and z (2) (ς ) = z ςς (ς ). For later use, we further introduce the explicit expression of z(·)H2 (0,1) z(·)H2 (0,1) =



z(·)2H0 (0,1) + z ξ (·)2H0 (0,1) + z ξ ξ (·)2H0 (0,1) ,

(7)

and that of the “weighted” H2,ϕ (0, 1)-norm defined as follows: z(·)H2,ϕ (0,1) =



z(·)2H0 (0,1) + z ξ (·)2H0 (0,1) + [ϕ(·)z ξ (·)]ξ 2H0 (0,1) ,

(8)

where ϕ(ξ ) ∈ H1 (0, 1) is a sufficiently smooth weighting function. Then, we introduce the notation n

 H (0, 1) = H (0, 1) × H (0, 1) × · · · × H (0, 1)   

(9)

n -times

and Z (·)2[H (0,1)]n =

n 

z i (·)2H (0,1)

(10)

i=1

n for the norm of Z (ς ) = [z 1 (ς ), . . . , z n (ς )] ∈ H (0, 1) . We further define Z (·)2[H2 (0,1)]n = Z (·)2[H0 (0,1)]n + Z ξ (·)2[H0 (0,1)]n + Z ξ ξ (·)2H0 (0,1) ,

(11)

and Z (·)2[H2,ϕ (0,1)]n = Z (·)2[H0 (0,1)]n + Z ξ (·)2[H0 (0,1)]n + [ϕ(·)Z ξ (·)]ξ 2H0 (0,1) . (12)

546

A. Pilloni et al.

Lemma 1 Let Z (ς ) ∈ [H1 (0, 1)]n . Then, the next inequality is verified as   Z (·)2[H0 (0,1)]n ≤ 2 Z (k)22 + Z ς (·)2[H0 (0,1)]n , ∀ k = 0, 1.

(13)

Proof of Lemma 1: Following [31, Lemma 1], it results the generic scalar function z(ς ) ∈ H1 (0, 1) satisfies z(·)2H0 (0,1) ≤ 2(z(k)2 + z ς (·)2H0 (0,1) ), ∀ k = 0, 1. Thus, by invoking definitions (6) and (10), and by virtue of (14) specified with z(·) = z i (·), the following chain of inequalities can be further derived: Z (·)2[H0 (0,1)]n =

n 

z i (·)2H0 (0,1) ≤ 2

i=1

n  

z i (k)2 + Z i,ς (·)2H0 (0,1)

i=1

  = 2 Z (k)22 + Z ς (·)2[H0 (0,1)]n , ∀ k = 0, 1. Lemma 1 is proved.



(14) 

2.3 Algebraic Graph Theory Consider a network of n dynamical subsystems, referred to as agents. Let V = {1, . . . , n} be the agent set. The interaction among these subsystems can be described by means of a weighted graph G(V, E, A), where V is referred to as the node set and E ⊆ {V × V} denotes the so-called edge set collecting all the enabled oriented interactions among pairs of nodes, namely (i, j) ∈ E if node (agent) i can receive information from node (agent) j. Finally, the real square matrix A = [ai j ] ∈ Rn×n denotes the so-called weighted adjacency matrix associated with G, whose entry ai j is positive if (i, j) ∈ E and zero otherwise. The neighbor set of node i is denoted by N i = { j ∈ V : (i, j) ∈ E}. A directed path over G denotes a finite or infinite sequence of edges joining two distinct nodes. A node is called root, if any other vertex of the digraph can be reached by one and only one directed path starting at the root. A weighted graph is called undirected if and only if ai j = a ji ∀ i = j ∈ V, which implies that, if (i, j) ∈ E, then also ( j, i) ∈ E. Moreover, if an undirected graph is such that there exists a path between every pair of nodes (i, j) ∈ V × V, then this graph is said to be also connected. The Matrix associated with G is denoted by L = [li j ] ∈ Rn×n , where Laplacian n lii = j=1,i = j ai j and li j = −ai j , i = j, such that L1n = 0n . If G is undirected then  L is positive semi-definite and symmetric, thus satisfying also 1n L = 0n . Finally, if G is also connected then L has a simple zero eigenvalue whose corresponding eigenvector is the all-one vector 1n .

Second-Order Sliding-Mode Leader-Follower Consensus …

547

3 Leader-Following Consensus for Diffusion PDEs 3.1 Problem Formulation Consider a set of n infinite-dimensional dynamical systems, each identified as a follower agent, and communicating to each other throughout a static connected and undirected network whose topology is described by the graph G f (V f , E f , A f ), where V f = {1, 2, . . . , n}. The state of each follower is the space- and time-dependent function qi (ς, t) with i = 1, 2, . . . , n, which is governed by the diffusion equation

 qi,t (ς, t) = θ (ς ) · qi,ς (ς, t) ς ,

(15)

where the subscripts “t” and “ς ” denote, respectively, the temporal and spatial derivatives, whereas θ (ς ) ∈ C 1 is a positive and, possibly, uncertain, smooth function known as diffusivity. The initial condition for the state of each follower is qi0 (ς ) = qi (ς, 0) ∈ H4 (0, 1).

(16)

Neumann-type boundary conditions are considered of the form qi,ς (0, t) = 0, qi,ς (1, t) = u i (t) + ψi (t),

(17) (18)

where u i (t) ∈ R is the manipulable boundary control input of the i − th follower agent, and ψi (t) is a persistent time-varying matching disturbance. By stacking all the follower agents’ states into the vector Q(ς, t) = [q1 (ς, t), q2 (ς, t), . . . , qn (ς, t)] ∈ [H4 (0, 1)]n , the following compact representation of the followers’ boundary-value problem (15)–(18) is derived:

 Q t (ς, t) = θ (ς )Q ς (ς, t) ς

(19)

Q(ς, 0) ∈ [H (0, 1)]

(20)

4

n

Q ς (0, t) = 0n Q ς (1, t) = Ψ (t) + U (t) where

U (ς, t) = [u 1 (t), u 2 (t), . . . , u n (t)] ∈ Rn

(21) (22)

548

A. Pilloni et al.

is the vector collecting all the local control inputs to be designed, and Ψ (ς, t) = [ψ1 (t), ψ2 (t), . . . , ψn (t)] ∈ Rn collects all the boundary disturbances. In addition to the n followers, a leader agent is also considered. The leader state q0 (ς, t) ∈ H4 (0, 1), which without loss of generality can possibly be virtually generated, is governed by the following boundary-value problem:

 q0,t (ς, t) = θ (ς )q0,ς (ς, t) ς

(23)

q0 (ς, 0) ∈ H4 (0, 1) q0,ς (0, t) = 0 q0,ς (1, t) = u 0 (t)

(24) (25) (26)

where u 0 (t) is the leader control input.

3.2 Control Objective and Operating Assumptions The control objective consists of designing, for each follower agent, a local boundary control input u i (t) such that the local states qi (ς, t) will asymptotically track the leader state q0 (ς, t) point-wisely in space, i.e., such that the next relation holds lim |qi (ς, t) − q0 (ς, t)| = 0, ∀ t ≥ 0, ς ∈ (0, 1), i ∈ V.

t→∞

(27)

Let us note that the above point-wise synchronization requirement can be ensured provided that the deviation error E(ς, t) = Q(ς, t) − 1n · q0 (ς, t)

(28)

n

is asymptotically stable in the space H1 (0, 1) , i.e., that lim E(ς, t)[H1 (0,1)]n = 0.

t→∞

(29)

Note also that the achievement of (29) is equivalent to the following conditions: lim E(ς, t)H0 (0,1) = 0,

t→∞

lim E ς (ς, t)H0 (0,1) = 0.

t→∞

(30)

In the remainder of the chapter, the following set of assumptions is made. Assumption 1 The network initial condition Q(ς, 0) ∈ [H4 (0, 1)]n in (20) is compatible with the given perturbed boundary conditions, i.e.,

Second-Order Sliding-Mode Leader-Follower Consensus …

Q ς (0, 0) = 0,

Q ς (1, 0) = Ψ (0).

549

(31)

Assumption 2 The spatially varying diffusivity coefficient is uncertain, and there exist some known a priori constants m and M satisfying 0 < m ≤ θ (ς ) ≤ M , ∀

ς ∈ (0, 1).

(32)

Assumption 3 There exist known positive constants , such that   Ψ˙ (t)



≤ < ∞, | u˙ 0 (t) | ≤ < ∞.

(33)

Remark 1 Assumptions 1–3 guarantee the collective motion of the overall (leaderplus-followers) network can be studied in the [H2 (0, 1)]n+1 Sobolev space. As a consequence of that, the domain D(A ) of the operator A = ∂ 2 /∂ς 2 in (19)–(22) and (23)–(26) is clearly confined to [H4 (0, 1)]n+1 Sobolev space. For details, the reader may refer to [8].

4 Main Result 4.1 Control Synthesis To construct the local interaction law, it is assumed that the leader state is available only to a non-empty subset of followers such that node 0 is a root node of the overall communication graph G(V, E, A) with V = {0} ∪ V f . Moreover, the leader input u 0 (ς, t) is assumed to be independent of the state of the followers. Under the given communication constraints, the overall communication network comprising both the leader and the followers is thus characterized by the following adjacency matrix: ⎤ ⎡ 0 0 0 ··· 0 ⎥ ⎢ a10 ⎥ ⎢ ⎥ ⎢ a20 (34) A=⎢ ⎥ Af ⎥ ⎢ .. ⎦ ⎣ . an0 where ai0 > 0 if the leader position is available to the i-th follower, and ai0 = 0 otherwise. It is further assumed that each follower can only access its own state and the boundary measurement qh (1, t) of its neighboring follower agents, i.e., h ∈ { i } ∪ Ni.

550

A. Pilloni et al.

To achieve the control goal (29), or equivalently to guarantee (30), the state of each follower is augmented by one by inserting an integrator at its controlled boundary input u_i(t). By means of this dynamic input extension, u̇_i(t) can be regarded as the new control signal to be designed so as to achieve (29). Specifically, in the remainder the following local interaction mechanism is considered (a numerical sketch of this protocol is reported after (39)):

    u̇_i(t) = −a · sign( Σ_{j=0}^{n} a_ij [q_i(1, t) − q_j(1, t)] )
              − b · sign( Σ_{j=0}^{n} a_ij [q_{i,t}(1, t) − q_{j,t}(1, t)] )
              − c · Σ_{j=0}^{n} a_ij [q_i(1, t) − q_j(1, t)]
              − d · Σ_{j=0}^{n} a_ij [q_{i,t}(1, t) − q_{j,t}(1, t)]    (35)

where i = 1, 2, . . . , n, and whose initial values u_i(0) are all set to zero, compatibly with Assumption 1. The coefficient a_ij denotes the (i, j) entry of the adjacency matrix associated with the overall communication graph G. It is thus apparent that each local controller only requires information from the neighboring agents. Finally, a, b, c, and d are nonnegative design parameters, which will be subject to the tuning inequalities derived in the sequel.

The following auxiliary boundary-value problem, governing the error variable E(ς, t), is thus derived:

    E_tt(ς, t) = [θ(ς) E_ςt(ς, t)]_ς    (36)
    E_ςt(0, t) = 0_n    (37)
    E_ςt(1, t) = Ψ_t(t) − 1_n · u̇_0(t) + U_t(t)    (38)

The boundary input vector U_t(t) = [u̇_1(t), u̇_2(t), . . . , u̇_n(t)]^T can be rewritten in compact form as

    U_t(t) = −a Sign(M E(1, t)) − b Sign(M E_t(1, t)) − c M E(1, t) − d M E_t(1, t),    (39)

where M = L_f + diag(a_10, a_20, . . . , a_n0) ∈ R^{n×n}, and L_f is the Laplacian matrix associated with G_f. Note that, since the leader is a root node of G, M = M^T ≻ 0 is a positive definite matrix (see [4]).

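A minimal Python sketch of the local protocol (35) and of the matrix M entering its compact form (39) is reported below (illustrative only; the function names and argument layout are assumptions, and M is assembled as a Laplacian-plus-leader-access matrix consistent with the positive-definiteness property recalled above). The ordered eigenvalues m_1 ≤ … ≤ m_n of M are the constants used in the estimates of Sect. 4.2.

import numpy as np

def u_i_dot(i, q1, q1_t, A, a, b, c, d):
    """Right-hand side of (35) for follower i (i = 1, ..., n).

    q1, q1_t : arrays of length n+1 with q_j(1,t) and q_{j,t}(1,t), index 0 = leader;
    A        : (n+1) x (n+1) adjacency matrix of (34).
    u_i(t) is then obtained by integrating this signal in time from u_i(0) = 0."""
    s  = sum(A[i, j] * (q1[i]   - q1[j])   for j in range(len(q1)))
    st = sum(A[i, j] * (q1_t[i] - q1_t[j]) for j in range(len(q1)))
    return -a * np.sign(s) - b * np.sign(st) - c * s - d * st

def follower_laplacian(A_f):
    """Graph Laplacian L_f = D_f - A_f of the undirected follower graph G_f."""
    return np.diag(A_f.sum(axis=1)) - A_f

def coupling_matrix(A_f, a0):
    """M = L_f + diag(a_10, ..., a_n0), the matrix entering the compact protocol (39)."""
    return follower_laplacian(A_f) + np.diag(a0)

def ordered_eigenvalues(M):
    """Ordered eigenvalues m_1 <= ... <= m_n; positivity of m_1 certifies M = M^T > 0."""
    assert np.allclose(M, M.T), "G_f undirected => M must be symmetric"
    m = np.linalg.eigvalsh(M)
    assert m[0] > 1e-12, "M is not positive definite: the leader is not a root of G"
    return m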

Since the dynamic control vector U(t) is governed by the ordinary differential equation (39), whose right-hand side is discontinuous, the solutions of the resulting closed-loop boundary-value problem (36)–(38), driven by the dynamic controller (39), need to be specified in the so-called Filippov sense. As established in [18], such a generalized solution for an infinite-dimensional system is a mild solution of the closed-loop system (36)–(38) with the multi-valued discontinuous feedback (39). In the remainder, the solutions of the auxiliary boundary-value problem (36)–(39) will therefore be understood in the so-called mild sense; see [8] for details. That said, and in accordance with [2], it should further be noticed that the mild solutions of a boundary-value problem coincide with the weak solutions, in the sense of distributions, of the corresponding standardizing PDE, reported below for clarity's sake:

    E_t(ς, t) = W(ς, t)    (40)
    W_t(ς, t) = [θ(ς) W_ς(ς, t)]_ς + θ(1) ( Ψ_t(t) − 1_n · u̇_0(t) + U_t(t) ) δ(ς − 1)    (41)
    E_ς(0, t) = E_ς(1, t) = 0_n    (42)
    W_ς(0, t) = W_ς(1, t) = 0_n    (43)

where

    E(ς, t) = Q(ς, t) − 1_n · q_0(ς, t),    (44)
    W(ς, t) = E_t(ς, t) = Q_t(ς, t) − 1_n · q_{0,t}(ς, t).    (45)

It is also worth mentioning that, as in the finite-dimensional case, a motion along the discontinuity manifolds E(1, t) = 0_n and W(1, t) = 0_n generates a sliding motion on them. A rigorous proof of the well-posedness of the networked closed-loop dynamics is beyond the scope of this chapter. Nevertheless, such well-posedness can be verified, following [8, Theorem 3.3.3], by noticing that the effective control vector U(t) is twice piece-wise differentiable along the solutions E(ς, t). By virtue of (36), the relation

    ‖W(·, t)‖_{H^0(0,1)} = ‖E_t(·, t)‖_{H^0(0,1)} = ‖[θ(ς) E_ς(ς, t)]_ς‖_{H^0(0,1)},    (46)

which will be instrumental in the next derivations, is finally introduced.

4.2 Convergence Analysis

The main result of the chapter and the corresponding convergence analysis will now be discussed in detail. The following preliminary instrumental result is demonstrated first.


Lemma 2 Let Assumptions 1, 2, and 3 be satisfied, and let W(ς, t) be defined as in (40). Then, the functional V : [H^2(0,1)]^n × [H^0(0,1)]^n → R,

    V(E, W) = a θ(1) ‖M E(1, t)‖_1 + (1/2) c θ(1) ‖M E(1, t)‖²_2 + (1/2) ∫_0^1 W(ζ, t)^T M W(ζ, t) dζ,    (47)

computed on the mild solutions (E(ς, t), W(ς, t)) of the closed-loop boundary-value problem (40)–(43), is nonnegative, and it upper-estimates the weighted [H^{2,θ}(0,1)]^n × [H^0(0,1)]^n-norm of these solutions, namely

    ∃ λ > 0 :  V(E, W) ≥ λ ( ‖E(·, t)‖²_{[H^{2,θ}(0,1)]^n} + ‖W(·, t)‖²_{[H^0(0,1)]^n} )   ∀ t ≥ 0.    (48)

Proof of Lemma 2: Let 0 < m_1 < m_2 < · · · < m_n be the ordered sequence of the real positive eigenvalues of the symmetric positive definite matrix M. By applying (2) and (3), it results in

    ‖M E(1, t)‖_1 ≥ (m_1/√n) ‖E(1, t)‖_1,    (49)
    ‖M E(1, t)‖²_2 ≥ m_1² ‖E(1, t)‖²_2,    (50)
    W(ς, t)^T M W(ς, t) ≥ m_1 ‖W(ς, t)‖²_2.    (51)

Spatial integration of both sides of (51) yields

    ∫_0^1 W(ς, t)^T M W(ς, t) dς ≥ m_1 ‖W(·, t)‖²_{[H^0(0,1)]^n}.    (52)

Thus, from (49)–(51) and (52), one concludes that

    V(E, W) ≥ a θ(1) (m_1/√n) ‖E(1, t)‖_1 + (1/2) c θ(1) m_1² ‖E(1, t)‖²_2 + (1/2) m_1 ‖W(·, t)‖²_{[H^0(0,1)]^n}.    (53)

Successively applying relation (13), specified with i = 1, to the mild solutions Z(ς, t) = E(ς, t) and Z(ς, t) = θ(ς) E_ς(ς, t), and exploiting the homogeneous boundary conditions (42)–(43), one derives that

    ‖E(·, t)‖²_{[H^0(0,1)]^n} ≤ 2 ( ‖E(1, t)‖²_2 + ‖E_ς(·, t)‖²_{[H^0(0,1)]^n} ),    (54)

    ‖θ(·) E_ς(·, t)‖²_{[H^0(0,1)]^n} ≤ 2 ( θ²(0) ‖E_ς(0, t)‖²_2 + ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n} )
                                    = 2 ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n}.    (55)

Moreover, by exploiting Assumption 2, and specifically (32), which is reported below for clarity's sake,

    0 < θ_m ≤ θ(ς) ≤ θ_M,   ∀ ς ∈ (0, 1),    (56)

the next chain of inequalities holds:

    θ_m² ‖E_ς(·, t)‖²_{[H^0(0,1)]^n} ≤ ‖θ(·) E_ς(·, t)‖²_{[H^0(0,1)]^n} ≤ θ_M² ‖E_ς(·, t)‖²_{[H^0(0,1)]^n}.    (57)

Thus, it follows that relation (55) can be further manipulated so as to obtain

    ‖E_ς(·, t)‖²_{[H^0(0,1)]^n} ≤ β ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n},   with β = 2/θ_m².    (58)

Now, by exploiting relation (12), and by means of (46), (54), and (58), it further results in

    ‖E(·, t)‖²_{[H^{2,θ}(0,1)]^n} = ‖E(·, t)‖²_{[H^0(0,1)]^n} + ‖E_ς(·, t)‖²_{[H^0(0,1)]^n} + ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n}
        ≤ 2 ‖E(1, t)‖²_2 + 2 ‖E_ς(·, t)‖²_{[H^0(0,1)]^n} + ‖E_ς(·, t)‖²_{[H^0(0,1)]^n} + ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n}
        ≤ 2 ‖E(1, t)‖²_2 + 3β ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n} + ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n}
        ≤ 2 ‖E(1, t)‖²_2 + (3β + 1) ‖[θ(·) E_ς(·, t)]_ς‖²_{[H^0(0,1)]^n}
        = 2 ‖E(1, t)‖²_2 + (3β + 1) ‖W(·, t)‖²_{[H^0(0,1)]^n}.    (59)

Now, taking into account (53) and (59), the validity of (48) follows for all t ≥ 0 and for some positive constant λ, which concludes the proof of Lemma 2. ∎

We are now in a position to state the main result of the chapter.


Theorem 1 Consider the perturbed network of followers (19)–(22) communicating along the undirected topology G_f(V_f, E_f, A_f), with V_f = {1, 2, . . . , n}. Let Assumptions 1, 2, and 3 be in force. If the leader node "0" is a root node of the overall communication graph G(V, E, A), with V = {0} ∪ V_f, then the boundary second-order sliding-mode consensus protocol (35), with the controller gains selected as

    a > b + Π + Π_0,   b > θ_M (Π + Π_0)/θ_m,   c > 0,   d > 0,    (60)

ensures the global exponential stability of the mild solutions (E(ς, t), W(ς, t)) of the error boundary-value problem (40)–(43) in the space [H^2(0,1)]^n × [H^0(0,1)]^n. ∎
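The tuning rules (60) are easy to automate. The short Python sketch below (not from the chapter; the margin factor and default values of c and d are arbitrary illustrative choices) returns a gain set satisfying (60), given the disturbance bounds Π, Π_0 of Assumption 3 and the diffusivity bounds θ_m, θ_M of Assumption 2.

import numpy as np

def tune_gains(Pi, Pi0, theta_m, theta_M, margin=2.0, c=5.0, d=5.0):
    """Gains (a, b, c, d) satisfying the tuning rules (60) with a safety margin."""
    b = margin * theta_M * (Pi + Pi0) / theta_m   # enforces b > theta_M*(Pi + Pi0)/theta_m
    a = margin * (b + Pi + Pi0)                   # enforces a > b + Pi + Pi0
    return a, b, c, d

# With the bounds used in Sect. 5 (Pi = 2*pi, Pi0 = pi, theta_m = 1, theta_M = 1.5):
print(tune_gains(2 * np.pi, np.pi, 1.0, 1.5))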

4.3 Proof of Theorem 1

The proof is divided into two separate steps. First, by means of Lyapunov arguments, the uniform stability of the solutions of the boundary-value problem (40)–(43) is demonstrated. Then, relying on this preliminary result, the exponential stability of the corresponding solutions in the space [H^2(0,1)]^n × [H^0(0,1)]^n is proven.

4.3.1 Uniform Stability

By Lemma 2, the functional (47) is positive definite along the mild solutions (E, W) ∈ [H^2(0,1)]^n × [H^0(0,1)]^n of the boundary-value problem (40)–(43). The time derivative of V(t) along these solutions is

    V̇(t) = a θ(1) W(1, t)^T M Sign(M E(1, t)) + c θ(1) E(1, t)^T M² W(1, t)
            + ∫_0^1 W(ς, t)^T M [θ(ς) W_ς(ς, t)]_ς dς
            + θ(1) W(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) + U_t(t) ).    (61)

The integral term of (61), integrated by parts and taking into account the boundary conditions (43), can be expressed as follows:

    ∫_0^1 W(ς, t)^T M [θ(ς) W_ς(ς, t)]_ς dς
        = [ W(1, t)^T M θ(1) W_ς(1, t) − W(0, t)^T M θ(0) W_ς(0, t) ] − ∫_0^1 θ(ς) W_ς(ς, t)^T M W_ς(ς, t) dς
        = − ∫_0^1 θ(ς) W_ς(ς, t)^T M W_ς(ς, t) dς.    (62)

In accordance with (51), the negative definite term (62) can further be upper-estimated as follows:

    − ∫_0^1 θ(ς) W_ς(ς, t)^T M W_ς(ς, t) dς ≤ −m_1 θ_m ‖W_ς(·, t)‖²_{[H^0(0,1)]^n}.    (63)

By substituting (62) and (63) into (61), one obtains

    V̇(t) ≤ a θ(1) W(1, t)^T M Sign(M E(1, t)) + c θ(1) E(1, t)^T M² W(1, t)
            − m_1 θ_m ‖W_ς(·, t)‖²_{[H^0(0,1)]^n} + θ(1) W(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) + U_t(t) ).    (64)

Now, by substituting (39) into (64), and making simple manipulations, one obtains

    V̇(t) ≤ −m_1 θ_m ‖W_ς(·, t)‖²_{[H^0(0,1)]^n} − b θ(1) ‖M W(1, t)‖_1
            − d θ(1) ‖M W(1, t)‖²_2 + θ(1) W(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) ).    (65)

By (4), and taking into account Assumption 3 and relation (33), the last term on the right-hand side of (65) is upper-estimated as follows:

    | W(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) ) | ≤ ( ‖Ψ_t(t)‖_∞ + ‖1_n · u̇_0(t)‖_∞ ) ‖M W(1, t)‖_1 ≤ (Π + Π_0) ‖M W(1, t)‖_1.    (66)

Finally, substituting (66) into (65), and taking into account (50), one derives, after some manipulations, the next expression:

    V̇(t) ≤ −[ b θ_m − θ_M (Π + Π_0) ] ‖M W(1, t)‖_1 − m_1 θ_m ‖W_ς(·, t)‖²_{[H^0(0,1)]^n} − d θ_m m_1² ‖W(1, t)‖²_2.    (67)

Thus, due to (67) and (60), the derivative of V(t), computed along the mild solutions of the boundary-value problem (40)–(43), is negative semidefinite. This implies that V(t) is non-increasing and that the compact domain

    D_V^R = { (E, W) ∈ [H^2(0,1)]^n × [H^0(0,1)]^n : V ≤ R },    (68)


specified for an arbitrarily large radius R ≥ V(0), is also invariant. It then follows that the closed-loop solutions remain confined in the invariant domain D_V^R for all times. Due to the invariance of D_V^R and to (47), the inequality V(t) ≤ R yields the next estimates:

    ‖M E(1, t)‖_1 ≤ R/(a θ_m),   ‖E(1, t)‖²_2 ≤ 2R/(c θ_m m_1²),   ‖W(·, t)‖²_{[H^0(0,1)]^n} ≤ 2R/m_1.    (69)

4.3.2 Global Exponential Stability

Let us now consider the following augmented functional:

    V_R(t) = V(t) + κ_R V̄(t),   κ_R > 0,    (70)

where κ_R > 0 will be specified later, and V̄(t) is the following sign-indefinite functional:

    V̄(t) = (1/2) d θ(1) ‖M E(1, t)‖²_2 + ∫_0^1 E(1, t)^T M W(ς, t) dς.    (71)

By invoking inequalities (2) and (4), and the first relation in (69), it results that, within the domain D_V^R in (68), the second addend of (71) can be lower-estimated as follows:

    ∫_0^1 E(1, t)^T M W(ς, t) dς ≥ −(1/2) ( ‖M E(1, t)‖²_2 + ‖W(·, t)‖²_{[H^0(0,1)]^n} )
        ≥ −(1/2) ( ‖M E(1, t)‖²_1 + ‖W(·, t)‖²_{[H^0(0,1)]^n} )
        ≥ −(1/2) ( (R/(a θ_m)) ‖M E(1, t)‖_1 + ‖W(·, t)‖²_{[H^0(0,1)]^n} ).    (72)

Considering (71), (72), and (50), along with (47), the right-hand side of (70) can be lower-estimated as follows:

    V_R(t) ≥ ( a θ_m − κ_R R/(2 a θ_m) ) ‖M E(1, t)‖_1 + (1/2)(m_1 − κ_R) ‖W(·, t)‖²_{[H^0(0,1)]^n}
             + (1/2) θ_m m_1² (c + κ_R d) ‖E(1, t)‖²_2.    (73)

Thus, the positiveness of V_R(t) can be guaranteed by selecting κ_R > 0 small enough that

    0 < κ_R ≤ min{ 2 a² θ_m² / R,  m_1 }    (74)


is verified. As a consequence, within the invariant domain (68), the functional V_R(t) can further be lower-estimated by V(t) as follows:

    V_R(t) ≥ γ V(t) ≥ 0,    (75)

with

    γ = min{ 1 − κ_R R/(2 a² θ_m²),  m_1 − κ_R,  c/(c + κ_R d) }.    (76)

Let us now evaluate the time derivative of (70) along the trajectories of the closed-loop system (40)–(43). By simple manipulations, it results in

    V̇_R(t) = V̇(t) + κ_R d θ(1) W(1, t)^T M² E(1, t) + κ_R (d/dt) ∫_0^1 E(1, t)^T M W(ς, t) dς
            = V̇(t) + κ_R d θ(1) W(1, t)^T M² E(1, t) + κ_R ∫_0^1 E(1, t)^T M W_t(ς, t) dς
              + κ_R ∫_0^1 W(1, t)^T M W(ς, t) dς.    (77)

By solving the first integral term of (77), and thanks to the homogeneous boundary conditions (42)–(43) and to (39), it yields

    ∫_0^1 E(1, t)^T M W_t(ς, t) dς = E(1, t)^T M ∫_0^1 [θ(ς) W_ς(ς, t)]_ς dς + θ(1) E(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) + U_t(t) )
        = E(1, t)^T M [ θ(1) W_ς(1, t) − θ(0) W_ς(0, t) ] + θ(1) E(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) + U_t(t) )
        = −a θ(1) ‖M E(1, t)‖_1 − b θ(1) E(1, t)^T M Sign(M W(1, t))
          − c θ(1) ‖M E(1, t)‖²_2 − d θ(1) E(1, t)^T M² W(1, t)
          + θ(1) E(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) ).    (78)

Then, by invoking (4), (33), and (69), the sign-indefinite terms of (78) can be upper-estimated as follows:

    | −b E(1, t)^T M Sign(M W(1, t)) | ≤ b ‖M E(1, t)‖_1,    (79)
    | E(1, t)^T M ( Ψ_t(t) − 1_n · u̇_0(t) ) | ≤ (Π + Π_0) ‖M E(1, t)‖_1.    (80)

Let m_ij be the (i, j) entry of the matrix M. Then, the last term of (77) is estimated by the next chain of inequalities, constructed on the basis of (69) and by exploiting the well-known Hölder integral inequality:

    | κ_R ∫_0^1 W(1, t)^T M W(ς, t) dς | = κ_R | Σ_{i=1}^{n} Σ_{j=1}^{n} ∫_0^1 W_i(1, t) m_ij W_j(ς, t) dς |
        ≤ κ_R Σ_{j=1}^{n} ( ∫_0^1 | W_j(ς, t) | dς ) ( Σ_{i=1}^{n} | W_i(1, t) m_ij | )
        ≤ κ_R Σ_{j=1}^{n} ‖W_j(·, t)‖_{H^0(0,1)} ( Σ_{i=1}^{n} | W_i(1, t) m_ij | )
        ≤ κ_R Σ_{i,j} √(2R/m_1) | m_ij W_i(1, t) | = κ_R √(2R/m_1) ‖M W(1, t)‖_1,    (81)

where the positive constant κ_R satisfies the constraint given in (74). Then, by (78)–(81) and (67), relation (77) can further be manipulated as follows:

    V̇_R(t) ≤ −[ b θ_m − θ_M (Π + Π_0) − κ_R √(2R/m_1) ] ‖M W(1, t)‖_1
             − κ_R θ_m (a − b − Π − Π_0) ‖M E(1, t)‖_1 − m_1² θ_m d ‖W(1, t)‖²_2
             − m_1 θ_m ‖W_ς(·, t)‖²_{[H^0(0,1)]^n} − κ_R m_1² θ_m c ‖E(1, t)‖²_2.    (82)

Let us now consider Lemma 1, specialized for k = 1 and Z(ς, t) = W(ς, t). From the resulting relation, the next inequality is derived:

    ‖W(1, t)‖²_2 + ‖W_ς(·, t)‖²_{[H^0(0,1)]^n} ≥ (1/2) ‖W(·, t)‖²_{[H^0(0,1)]^n}.    (83)

Then, in light of the above relation, one further obtains that

    −m_1² θ_m d ‖W(1, t)‖²_2 − m_1 θ_m ‖W_ς(·, t)‖²_{[H^0(0,1)]^n} ≤ −υ_4 ‖W(·, t)‖²_{[H^0(0,1)]^n},    (84)

where υ_4 = (1/2) min{ m_1 θ_m, m_1² θ_m d }. Now, by substituting (84) into the right-hand side of (82),

    V̇_R(t) ≤ −υ_1 ‖M E(1, t)‖_1 − υ_2 ‖M W(1, t)‖_1 − υ_3 ‖E(1, t)‖²_2 − υ_4 ‖W(·, t)‖²_{[H^0(0,1)]^n}    (85)

is obtained, with

    υ_1 = κ_R θ_m (a − b − Π − Π_0),    (86)
    υ_2 = b θ_m − θ_M (Π + Π_0) − κ_R √(2R/m_1),    (87)
    υ_3 = κ_R m_1² θ_m c,    (88)
    υ_4 = (1/2) min{ m_1 θ_m, m_1² θ_m d }.    (89)

Thus, all the right-hand-side terms of (85) are nonpositive provided that the tuning rules (60) on the controller gains are verified and that, in place of (74), the next more restrictive condition on κ_R > 0 is additionally satisfied:

    0 < κ_R ≤ min{ 2 a² θ_m² / R,  m_1,  ( b θ_m − θ_M (Π + Π_0) ) / √(2R/m_1) }.    (90)

From (85)–(90), and by neglecting the strictly negative term proportional to υ_2 in (87), relation (85) can be further upper-estimated as follows:

    V̇_R(t) ≤ −υ_1 ‖M E(1, t)‖_1 − υ_2 ‖M W(1, t)‖_1 − υ_3 ‖E(1, t)‖²_2 − υ_4 ‖W(·, t)‖²_{[H^0(0,1)]^n}
            ≤ −υ_1 ‖M E(1, t)‖_1 − υ_3 ‖E(1, t)‖²_2 − υ_4 ‖W(·, t)‖²_{[H^0(0,1)]^n}
            ≤ −γ_1 ( ‖M E(1, t)‖_1 + ‖E(1, t)‖²_2 + ‖W(·, t)‖²_{[H^0(0,1)]^n} ),    (91)

where γ_1 = min{υ_1, υ_3, υ_4} ∈ R_+. Consider now V_R(t) = V(t) + κ_R V̄(t) in (70). By combining (47) with the second inequality of (3), it follows that V(t) satisfies

    V(t) ≤ a θ_M ‖M E(1, t)‖_1 + (1/2) c θ_M m_n² ‖E(1, t)‖²_2 + (1/2) m_n ‖W(·, t)‖²_{[H^0(0,1)]^n}.    (92)

On the other hand, by means of (3), (4), and (72), V̄(t) can be estimated as follows:

    V̄(t) ≤ (1/2) m_n² θ_M d ‖E(1, t)‖²_2 + ( R/(2 a θ_m) ) ‖M E(1, t)‖_1 + (1/2) m_n ‖W(·, t)‖²_{[H^0(0,1)]^n}.    (93)

Thus, by means of (92) and (93), one finally obtains that

    V_R(t) = V(t) + κ_R V̄(t) ≤ γ_2 ( ‖M E(1, t)‖_1 + ‖E(1, t)‖²_2 + ‖W(·, t)‖²_{[H^0(0,1)]^n} ),    (94)

where γ_2 = max{ a θ_M + κ_R R/(2 a θ_m),  (1/2) m_n² θ_M (c + κ_R d),  (1/2) m_n (1 + κ_R) } > 0.


Thus, from (91) and (94), one can finally conclude that

    V̇_R(t) ≤ −ρ_R V_R(t),   with ρ_R = γ_1/γ_2.    (95)

Hence, the exponential decay of V_R(t) is guaranteed whenever V_R(0) is initialized within D_V^R in (68). To complete the proof, note that, due to the estimate (75), which bounds V(t) from above by V_R(t)/γ, the functional V(t), computed on the mild solutions (E(ς, t), W(ς, t)) of the boundary-value problem (40)–(43), decays exponentially as well. By virtue of Lemma 2, the exponential stability of (40)–(43), with the augmented state (E(ς, t), W(ς, t)), in the [H^2(0,1)]^n × [H^0(0,1)]^n-space is then established for any initial point within the set D_V^R defined in (68). Since D_V^R can be specified with an arbitrarily large radius R > 0, and the tuning rules (60) do not depend on R, condition (30) is thus guaranteed. The proof of Theorem 1 is concluded. ∎
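For completeness, the constants entering the decay estimate (95) can be evaluated numerically. The Python sketch below is illustrative only: it follows the expressions (86)–(90) as reconstructed above, with the Lyapunov level R and the safety factor chosen arbitrarily, and it returns an admissible κ_R together with υ_1, …, υ_4 and γ_1.

import numpy as np

def proof_constants(a, b, c, d, Pi, Pi0, theta_m, theta_M, m1, R, safety=0.9):
    """Admissible kappa_R per (90) and the constants (86)-(89), for a Lyapunov level R."""
    kappa = safety * min(2 * a**2 * theta_m**2 / R,
                         m1,
                         (b * theta_m - theta_M * (Pi + Pi0)) / np.sqrt(2 * R / m1))
    u1 = kappa * theta_m * (a - b - Pi - Pi0)
    u2 = b * theta_m - theta_M * (Pi + Pi0) - kappa * np.sqrt(2 * R / m1)
    u3 = kappa * m1**2 * theta_m * c
    u4 = 0.5 * min(m1 * theta_m, m1**2 * theta_m * d)
    return kappa, (u1, u2, u3, u4), min(u1, u3, u4)   # the last value plays the role of gamma_1 in (91)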

5 Simulation Results

Let us consider a network of dynamical agents consisting of n = 9 followers, indexed from "1" to "9," whose states q_i(ς, t) are solutions of the boundary-value problem (15)–(18), plus a leader agent, identified by the index "0," whose state q_0(ς, t) is the solution of (23)–(26). Hereinafter, we show that the proposed local control protocol (35) allows each follower to track the leader state asymptotically and point-wise in space, by exploiting only local boundary actuation and collocated boundary sensing, in accordance with a given communication network. Specifically, each agent i is enabled to share its own state, measured at ς = 1, with its neighborhood N_i. The interaction topology associated with the considered case studies is encoded by the graph G depicted in Fig. 1, where, for the sake of simplicity, all the edges have unitary weight. In the remainder, the spatially varying uncertain diffusivity entering the agent dynamics in (19) and (23) is taken as

    θ(ς) = θ_0 + θ̄ · ς,   θ_0 = 1,   θ̄ = 0.5,   ς ∈ [0, 1],    (96)

and the corresponding lower and upper bounds in (32) are evaluated as θ_m = 1 and θ_M = 1.5. The Neumann-type boundary conditions of the leader agent are set, in accordance with (26), as

    q_{0,ς}(1, t) = u_0(t),   q_{0,ς}(0, t) = 0,

where the leader input signal is taken as


Fig. 1 Considered interaction graph G. Node 0 denotes the leader (23)–(26), whereas nodes 1 to 9 play the role of followers (15)–(18), whose aim is to track the leader state q_0(ς, t) by exploiting only the boundary information q_j(1, t), j ∈ N_i, of their neighbors

u_0(t) = sin(π t), such that its time derivative satisfies the constraint |u̇_0(t)| ≤ Π_0 = π, as required by Assumption 3. On the other hand, the followers' boundary conditions are selected as

    q_{i,ς}(1, t) = ψ_i(t) + u_i(t),   q_{i,ς}(0, t) = 0,

where u_i(t) is the local control input, and

    ψ_i(t) = sin(φ_i t),   φ_i ∈ (π, 2π),   i = 1, 2, . . . , 9,    (97)

play the role of disturbances, with the coefficients φ_i randomly selected, for each follower, within the range (π, 2π). Figure 2 shows the temporal behavior of each boundary disturbance ψ_i(t), i = 1, 2, . . . , 9. Let us further note that the disturbance vector Ψ(t) satisfies Assumption 3. Specifically, from (97), it has a bounded derivative, such that

    ‖Ψ̇(t)‖_∞ ≤ Π = 2π.    (98)
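For reference, the exogenous signals of this scenario can be generated as in the following Python snippet (illustrative; the random seed is an arbitrary choice made here for reproducibility).

import numpy as np

rng = np.random.default_rng(0)              # arbitrary seed, for reproducibility
n = 9
phi = rng.uniform(np.pi, 2 * np.pi, n)      # phi_i in (pi, 2*pi), one per follower, as in (97)

def u0(t):
    """Leader boundary input u_0(t) = sin(pi*t), with |u0'(t)| <= Pi_0 = pi."""
    return np.sin(np.pi * t)

def psi(t):
    """Boundary disturbances psi_i(t) = sin(phi_i*t), with ||Psi'(t)||_inf <= Pi = 2*pi."""
    return np.sin(phi * t)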

The initial condition of the leader agent is set, for simplicity, as

    q_0(ς, 0) = 10,    (99)


Fig. 2 Temporal profile of the boundary local disturbances ψi (t), with i = 1, 2, . . . , 9

Fig. 3 Spatio-temporal profile of the leader state q0 (ς, t)

whereas two sets of initial conditions are considered for the follower agents. Specifically, the following two case studies are considered:

• Case 1: q_i(ς, 0) = 10, i = 1, 2, . . . , 9, where all the followers have the same initial condition, and
• Case 2: q_i(ς, 0) = (10 + i) + (1 + i) cos(2π ς · i), i = 1, 2, . . . , 9, where each follower has a markedly different initial condition.

The control gains of the local interaction rule (39) are finally selected, in accordance with (60), as follows:

    a = 70 > b + Π + Π_0,   b = 35 > θ_M (Π + Π_0)/θ_m,   c = 5 > 0,   d = 5 > 0.    (100)

The considered network of diffusion PDEs is spatially discretized by means of the backward finite-difference approximation method, with N = 30 uniformly spaced solution nodes. The resulting system of ordinary differential equations, of order N × (n + 1) = 300, is then solved by means of the fixed-step Euler method with a step size T = 5 × 10⁻⁵ seconds.


We are now in a position to discuss the numerical results. Figure 3 shows the spatio-temporal profile of the leader state q_0(ς, t), whereas Figs. 4 and 5 show the closed-loop state response of the second follower agent, q_2(ς, t), under the initial conditions of Case 1 and of Case 2, respectively. Figures 6 and 7 depict the corresponding spatio-temporal tracking error profiles, namely e_2(ς, t) = q_2(ς, t) − q_0(ς, t). It is worth noting that in Case 1 the followers and the leader start from the same initial condition q_i(ς, 0) = 10, so that the discrepancy between the leader and the follower dynamics is limited to the presence of the boundary disturbances ψ_i(t) at the actuated boundary ς = 1.

Fig. 4 Case 1: Spatio-temporal profile of the 2nd follower state q_2(ς, t)

Fig. 5 Case 2: Spatio-temporal profile of the 2nd follower state q_2(ς, t)

Fig. 6 Case 1: Spatio-temporal tracking error e2 (ς, t) = q2 (ς, t) − q0 (ς, t)


Fig. 7 Case 2: Spatio-temporal tracking error e_2(ς, t) = q_2(ς, t) − q_0(ς, t)

Fig. 8 Case 1: Temporal profile of the boundary control inputs u i (t) with i = 1, 2, . . . , 9

As a consequence, the tracking error e_2(ς, t) remains very close to zero from the very first time instants, because in Case 1 each local controller exhibits a sliding motion from the beginning. In Fig. 6, a small numerical chattering is also evident at ς = 1, since the discontinuous control protocol u_i(t) is applied at that coordinate. On the contrary, in Case 2, and specifically in Fig. 7, since each follower has a different initial condition, it is also possible to appreciate the reaching phase of the proposed control protocol. The same conclusion can be drawn by observing the larger overshoot of the control signals u_i(t) in Case 2, shown in Fig. 9, with respect to the control signals of Case 1, depicted in Fig. 8. Finally, Fig. 10 shows the temporal profile of the [H^2(0,1)]^n-norm of the synchronization error vector E(ς, t) = Q(ς, t) − 1_n · q_0(ς, t) for Case 2. This norm tends exponentially to zero, as stated in Theorem 1, thus confirming the exponential, point-wise-in-space tracking of the leader state by the whole network of follower agents.
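An error-norm profile of the type shown in Fig. 10 can be reproduced from simulation data as in the following sketch (illustrative; it assumes the grid x and diffusivity theta of the earlier simulation snippet, plus a list hist of (t, E) snapshots with E = q[1:] - q[0] stored during the run), which also extracts a rough exponential decay-rate estimate by a log-linear fit.

import numpy as np

def h2_theta_norm(E, x, theta):
    """Weighted H^2-type norm as in (59): sqrt(||E||^2 + ||E_x||^2 + ||(theta E_x)_x||^2),
    summed over the n follower components (rows of E), on the spatial grid x."""
    Ex = np.gradient(E, x, axis=1)
    Fx = np.gradient(theta(x) * Ex, x, axis=1)
    sq = lambda Z: np.trapz(Z**2, x, axis=1).sum()
    return np.sqrt(sq(E) + sq(Ex) + sq(Fx))

# hist is assumed to collect (t_k, E_k) pairs during the simulation loop:
# ts, norms = zip(*[(t, h2_theta_norm(E, x, theta)) for t, E in hist])
# rate = -np.polyfit(ts, np.log(norms), 1)[0]    # rough exponential decay-rate estimate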


Fig. 9 Case 2: Temporal profile of the boundary control inputs u i (t) with i = 1, 2, . . . , 9

Fig. 10 Temporal profile of the [H2 (0, 1)]n -norm of the tracking error vector E(ς, t)

6 Conclusion

In this work, the leader-follower distributed tracking problem for a network of heat PDEs with spatially varying, uncertain diffusivity, subject to persistent smooth boundary disturbances, has been addressed and solved by means of a dynamic, collocated, second-order sliding-mode local interaction rule. Only boundary sensing and collocated Neumann-type boundary actuation are used. A formal Lyapunov analysis confirms that consensus between the leader and the followers' trajectories is achieved exponentially and point-wise in space.


Future activities will be targeted at extending the chapter results to more general classes of boundary-value problems and to non-collocated and/or sampled-in-space control applications, such as the networked control of irrigators for precision agriculture [23]. In addition, relaxing the requirement of an undirected and static communication network for the followers, by covering directed and/or switching communication topologies, is another interesting line of activity for future investigations.

Acknowledgements The research leading to these results has been partially funded by the Sardinian Regional Government, under project POR SARDEGNA FSE 2014-2020-Asse III, Azione 10.5.12, "Avviso di chiamata per il finanziamento di Progetti di ricerca Anno 2017," and by the Fondazione di Sardegna under the projects "IQSS, Information Quality aware and Secure Sensor networks for smart cities" and "SISCO - ITC methodologies for the security of complex systems" (cup F74I19001060007).

References

1. Aguilar, L., Orlov, Y., Pisano, A.: Leader-follower synchronization and ISS analysis for a network of boundary-controlled wave PDEs. IEEE Control Syst. Lett. 5(2), 683–688 (2021)
2. Butkovskiy, A.G.: Green's Functions and Transfer Functions Handbook, p. 236. John Wiley & Sons (1982)
3. Boskovic, D.M., Krstic, M.: Backstepping control of chemical tubular reactors. Comput. Chem. Eng. 26(7–8), 1077–1085 (2002)
4. Cao, Y., Ren, W.: Distributed coordinated tracking with reduced interaction via a variable structure approach. IEEE Trans. Autom. Control 57(1), 33–48 (2012)
5. Cao, Y., Yu, W., Ren, W., Chen, G.: An overview of recent progress in the study of distributed multi-agent coordination. IEEE Trans. Ind. Inf. 9(1), 427–438 (2013)
6. Carslaw, H.S., Jaeger, J.C.: Conduction of Heat in Solids. Clarendon Press, Oxford (1959)
7. Chapman, A., Mesbahi, M.: Advection on graphs. In: IEEE Conference on Decision and Control, pp. 1461–1466 (2011)
8. Curtain, R.F., Zwart, H.: An Introduction to Infinite-Dimensional Linear Systems Theory, vol. 21. Springer (1995)
9. Davila, J.: Distributed tracking for mechanical systems using second-order sliding-modes. In: 13th IEEE International Workshop on Variable Structure Systems (VSS), pp. 1–6 (2014)
10. Demetriou, M.A.: Synchronization and consensus controllers for a class of parabolic distributed parameter systems. Syst. Control Lett. 62(1), 70–76 (2013)
11. Di Meglio, F., Vazquez, R., Krstic, M.: Stabilization of a system of coupled first-order hyperbolic linear PDEs with a single boundary input. IEEE Trans. Autom. Control 58(12), 3097–3111 (2013)
12. Filippov, A.F.: Differential Equations with Discontinuous Righthand Sides: Control Systems, vol. 18. Springer (1988)
13. Fu, Q., Du, L., Xu, G., Wu, J., Yu, P.: Consensus control for multi-agent systems with distributed parameter model. Neurocomputing 308, 58–64 (2018)
14. He, P.: Consensus of uncertain parabolic PDE agents via adaptive unit-vector control scheme. IET Control Theory Appl. 12(18), 2488–2494 (2018)
15. Huang, J., Wang, W., Wen, C., Zhou, J., Li, G.: Distributed adaptive leader-follower and leaderless consensus control of a class of strict-feedback nonlinear systems: a unified approach. Automatica 118, 109021 (2020)
16. Khalil, H.: Nonlinear Systems. Prentice Hall (2002)
17. Krstic, M., Smyshlyaev, A.: Boundary Control of PDEs: A Course on Backstepping Designs, vol. 16. SIAM (2008)
18. Levaggi, L.: Sliding modes in Banach spaces. Diff. Integr. Equ. 15(2), 167–189 (2002)
19. Li, T., Rao, B.: Exact synchronization for a coupled system of wave equations with Dirichlet boundary controls. In: Partial Differential Equations: Theory, Control and Approximation, pp. 295–321. Springer (2014)
20. Li, T., Lu, X., Rao, B.: Exact boundary synchronization for a coupled system of wave equations with Neumann boundary controls. Chinese Ann. Math. 38B(2), 473–488 (2017)
21. Li, Z., Li, M., Ji, W.: Modelling the public opinion transmission on social networks under opinion leaders. IOP Conf. Ser.: Earth Environ. Sci. 69(1), 012125 (2017)
22. Li, Y., Mei, R., Xu, Y., Kurths, J., Duan, J., Metzler, R.: Particle dynamics and transport enhancement in a confined channel with position-dependent diffusivity. New J. Phys. 22(5), 053016 (2020)
23. Molina, N.I.C., Cunha, J.P.V.S.: Non-collocated sliding mode control of partial differential equations for soil irrigation. J. Process Control 73, 1–8 (2019)
24. Orlov, Y., Pisano, A.: Boundary second-order sliding-mode control of an uncertain heat process with spatially varying diffusivity. In: IEEE Conference on Decision and Control & European Control Conference, pp. 1323–1328 (2011)
25. Orlov, Y., Pilloni, A., Pisano, A., Usai, E.: Consensus-based leader-follower tracking for a network of perturbed diffusion PDEs via local boundary interaction. In: Proceedings of the 2nd IFAC Workshop on Control of Systems Governed by Partial Differential Equations (CPDE), pp. 228–233 (2011)
26. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations, vol. 44. Springer Science & Business Media (1992)
27. Pilloni, A., Pisano, A., Franceschelli, M., Usai, E.: Finite-time consensus for a network of perturbed double integrators by second-order sliding mode technique. In: IEEE Conference on Decision and Control, pp. 2145–2150 (2013)
28. Pilloni, A., Franceschelli, M., Pisano, A., Usai, E.: Recent advances in sliding-mode based consensus strategies. In: 13th IEEE International Workshop on Variable Structure Systems (VSS), pp. 1–6 (2014)
29. Pilloni, A., Pisano, A., Orlov, Y., Usai, E.: Consensus-based control for a network of diffusion PDEs with boundary local interaction. IEEE Trans. Autom. Control 61(9), 2708–2713 (2015)
30. Pilloni, A., Franceschelli, M., Pisano, A., Usai, E.: Sliding mode-based robustification of consensus and distributed optimization control protocols. IEEE Trans. Autom. Control 63(3), 1207–1214 (2020)
31. Pisano, A., Orlov, Y.: Boundary second-order sliding-mode control of an uncertain heat process with unbounded matched perturbation. Automatica 48(8), 1768–1775 (2012)
32. Sardellitti, S., Giona, M., Barbarossa, S.: Fast distributed average consensus algorithms based on advection-diffusion processes. IEEE Trans. Signal Process. 58(2), 826–842 (2010)
33. Smyshlyaev, A., Krstic, M.: On control design for PDEs with space-dependent diffusivity or time-dependent reactivity. Automatica 41(9), 1601–1608 (2005)
34. Siranosian, A.A., Krstic, M., Smyshlyaev, A., Bement, M.: Motion planning and tracking for tip displacement and deflection angle for flexible beams. J. Dyn. Syst. Meas. Control 131(3), 031009 (2009)
35. Shao, J., Zheng, W.X., Shi, L., Cheng, Y.: Leader-follower flocking for discrete-time Cucker-Smale models with lossy links and general weight function. IEEE Trans. Autom. Control 66(10), 4945–4951 (2020)
36. Vazquez, R., Krstic, M.: Boundary observer for output-feedback stabilization of thermal-fluid convection loop. IEEE Trans. Control Syst. Technol. 18(4), 789–797 (2010)
37. Wang, J., Krstic, M.: Output feedback boundary control of a heat PDE sandwiched between two ODEs. IEEE Trans. Autom. Control 64(11), 4653–4660 (2019)
38. Zampieri, S.: Trends in networked control systems. In: 17th IFAC World Congress, pp. 2886–2894 (2008)

Index

A Acceleration, 446, 449 Accuracy, 229–231, 235, 241–244, 246, 247, 250, 252, 253, 258, 259, 261, 262 Actuator, 340, 343–345, 347, 352, 353, 355 Actuator effectiveness, 467 Adaptation, 308 Adaptive higher-order sliding-mode control, 289, 294 in observers/differentiators, 268 in the dual layer structure, 290 using integral sliding-mode concept, 292 second-order-discrete-sliding-mode control, 286 second-order sliding modes, 276, 277, 284, 288, 294 sliding modes, 270, 273, 276, 289, 300 super-twisting sliding-mode control of the cable-driven manipulators, 288 third-order sliding modes, 294–296 twisting sliding mode control of a hypersonic reentry vehicle, 286 of a UAV, 284 of the electropneumatic actuator, 288 of the perturbed mass-spring-damper, 287 Adaptive sliding-mode control of a quadrotor, 276 the electropneumatic actuator, 274 Aggregation matrix, 422 Aging, 439 Airworthiness, 462

Allocator matrix, 469 Allowable set, 470 Amplitude of chattering, 338, 340, 345–347, 353, 354 Applications of adaptive sliding-mode control, 273 higher-order sliding-mode control, 294 to hypersonic missiles, 294 second-order sliding-mode control, 284 Arbitrary order differentiator, 76, 77 ArdupilotMega, 477 Availability, 437

B Barrier function, 308, 309, 313, 314, 318, 319, 325, 332 Bergman minimal model, 388 Bias function, 338, 347, 348, 350, 355 Bi-homogeneous differentiator, 71, 75 Bogie, 441, 443 Boundary layer approximation, 371–373, 375–377 Bounded, 467, 468, 470, 474

C Canonical, 469 Carbody, 441, 444, 447 Cascade observer, 139 Chattering, 229–231, 235, 240, 243, 244, 247, 248, 251, 254, 258, 259, 337– 340, 342–347, 352, 354, 355, 357– 365, 367, 368, 370–372, 376–379


Compact set, 476 Computational effort, 453 Conservation of energy, 342 Continuous controller, 345, 346, 350, 352, 353 Continuous twisting controller, 519, 520, 526, 536 Control allocation, 467, 468, 499 Convergence, 442, 451 Convergence time estimation, 85 Coordinate system, 469 Cost functional, 21, 22, 31, 34 Coulomb friction, 351 Coupling, 447 Cutoff frequency, 449

D Damping, 443, 447 Data, 439, 447 Decision basis, 440, 443 Decoupled identification, 452 Degree Of Freedom (DOF), 483 Describing function, 357, 359, 361, 363, 364, 366, 368, 371, 373–375 Differential observability, 136 Differentiation, 239, 240, 243, 245, 257 Differentiator design, 183 Direct lift control, 480, 483 Discontinuous gradient algorithm, 450 Discretization, 205, 206, 208, 211, 212, 214– 217, 221–226, 229–231, 234–237, 243–254, 256–259 Displacement, 445, 447 Disturbances, 446, 450 Disturbance-tailored super-twisting convergence time estimation, 18 coordinate transformation, 14 equations, 7 finite-time convergence, 16 global robust stability, 15 initial bound pair, 10 key property, 7 linear framework, 14 tuning, 9 Dominant mode, 421 Doublet, 487 Downtimes, 438, 439 Drift, 449 Dry friction, 339, 351–355 DTST, see Disturbance-tailored supertwisting Dynamics, 444

Index E Electronic speed controller, 480 Electro-pneumatic system, 178 Equivalent airspeed, 483 Equivalent gain, 347, 348, 350 Error dynamics, 451 Estimation accuracy, 457 External disturbances, 338 F Factorization, 468, 484 Factorized, 468 Fault, 440, 441, 452 Fault accommodation control, 506, 517, 520, 521, 523 Fault detection, 510, 511, 516 Fault detection and diagnosis, 464 Fault detection and isolation, 466, 486 Fault diagnosis, 503, 504, 506, 507, 513, 517, 518 Fault-free scenario, 453 Fault identification, 513, 516, 521, 522 Fault isolation, 444, 447, 511, 517 Fault tolerant control, 461–468, 474, 483, 487, 493, 495, 499 Faulty scenarios, 453 Finite time, 450 Finite-time convergence, 338, 355 Finite–time sliding–mode observer, 503, 504, 506, 509–511, 524, 525, 530 First-order Sliding Mode(SM), 338, 339 Fixed-time, 308, 327, 332 Fixed-time convergence, 77 Fixed-time sliding-mode observer, 97, 105, 116 Fixed-time stability, 42 Fly By Wire (FBW), 480, 485, 487 Food-intake, 385, 386, 389, 396–398, 400, 402, 404, 408, 411 Forward speed, 452 Fractal dynamics, 343 Frequency-domain, 447 G Gain calculation, 85 Gain non-overestimation, 277, 291 Generalized Second-Order Algorithm (GSOA), 3, 7 Generalized Super-Twisting Algorithm (GSTA), 3 with linear terms, 7, 8 Gimbal, 478



H Hamel locus, 337 Harmonic balance, 340, 344, 353 Harmonic balance equation, 359 Health status, 443 Healthy actuator, 468 Hexarotor, 466 High altitude long endurance, 462 Higher-order Sliding Mode(SM), 338, 339 High–order sliding–modes controllers, 506, 519, 524, 530 High-order sliding-mode differentiator, 138 High-order sliding modes, 205, 206 High-pass filter, 449 Homogeneity in the bi-limit, 185, 187 Homogeneous in the 0-limit, 75 Homogeneous in the ∞-limit, 75 Homogeneous observers, 97–102, 109, 112, 119, 120 Homogeneous systems, 206, 207 Hybrid controller for predefined-time convergence, 43

I Ideal sliding mode, 338, 343 Identification, 440, 442 Implicit discretization, 165–170, 174, 178 In silico, 385, 387, 397, 401, 403, 408, 411 Infinite-time (asymptotic) convergence, 338 Inherent gain margin, 347, 350 Integral metrics, 340 Integrator, 449 Intelligent flight control system, 464, 499 Intensive care unit, 385–387, 397, 405, 408, 411 Inter-event time, 423 Interpatient variability, 385, 386, 397, 398, 400, 402, 404 Intravenous route, 388, 411 Inverse optimality, 21, 22, 34 IRIS+, 474, 477, 478 Irregularities, 446

J Japan

Aerospace Exploration Agency (JAXA), 464, 480, 486, 499

L Leader-follower consensus, 541, 543, 547 Leading bogie, 445

Linear parameter varying, 461, 466, 467, 471, 472 Linear quadratic regulator, 465, 471, 472, 479 Linear time invariant, 472 Lipschitz-discontinuous, 337, 339, 343, 355 Locus of a Perturbed Relay System (LPRS), 337 Low-pass filter, 449 Lyapunov function homogeneous in the bilimit, 74 Lyapunov transformation, 150

M Maintenance, 438, 439 Maintenance strategies, 438 Matched and unmatched disturbances, 308 Matched disturbances, 467 Matched uncertainties, 467, 474, 485 Mean value, 453 Measurement/Quatization noise, 59, 63 Measurements, 444, 457 Mechanical systems, 337–340, 342, 345, 351, 354 Mechanical vibrations, 337 Metabolic simulator, 385, 387, 395 Metrics of chattering, 338 Minimum energy prescribed-time control, 46 Mission planner, 477, 478 Model, 443 Model-based methods, 442 Model reference adaptive control, 465 Model uncertainties, 457 Modulation gain, 474, 479, 485 Monitoring function, 307–309, 311–313, 317, 318, 325, 327, 332 Multirotor, 465, 474, 477 MuPAL-α, 464, 480, 483–485, 499 MuPAL-α.eps, 486

N Noise, 230, 231, 239–244, 247, 258, 259, 451 Non-dominant mode, 421 Nonlinearities, 337–340, 345, 347, 348, 350, 355 Nonlinear output injection design, 102, 107 Nonlinear system, 450 Numerical integration, 449 Numerical simulations, 452

O Observability indices, 136 Observability number, 136 Observer gain, 451 Observer in dynamic compensation form, 154 Octocopter, 466 Oscillations, 337, 338, 340, 342, 343, 345, 351–355 Over-actuation, 499

P Parameter identification, 442 Parametrization effort, 456 Partial differential equations, 542 Performance indexes, 387, 396 Perturbation bounds admissible, 11 initial, 7 initial bound pair, 10 preadmissible, 11 Phase deficit, 338, 340 Pixhawk support package, 477, 480 Plant maintenance, 438 Poincare map, 337 Pointwise stability, 548, 550 Portland protocol, 397, 405, 408–411 Positive definiteness, 451 Practical sliding mode, 423 Precision of the differentiator, 82 Predefined-time controller, 43 Predictive maintenance, 439 Pre-processing, 449 Prescribed performance, 308, 314, 332 Prescribed-time controller, 43 Prescribed-time stabilization, 21–23, 30, 34 Prognostic models, 440 Propulsion controlled aircraft, 463 Pseudo inverse, 473, 474 Pulse Width Modulation (PWM), 480

Q Quadrotor, 461, 465, 474, 478, 503, 504, 506, 507, 511, 514, 516–518, 520, 521, 525, 529, 530 Quasi-Linear Parameter Varying (LPV), 467, 475

R Railroad transportation, 437 Railway vehicle, 440, 442

Reaching phase, 308, 312, 313, 327, 332 Reduced-order model, 421 Redundancy, 462, 465–468 Regressor, 450 Regular form, 469–471, 485 Relay controller, 345 Residual, 451 Residual phase, 308, 312–314, 318, 325, 327, 332 Robustness, 229, 235 Rotor failure, 503, 504, 506, 508, 523, 524, 527, 528, 536

S Sampling, 230, 231, 234, 236, 239, 241–244, 247, 250–252, 256–258, 261, 262 Sampling frequency, 452 Saturation function, 357, 358, 360, 362, 370, 373, 378 Scaled quadratic Lyapunov functions, 21, 22, 27, 34 Scaling of the gains, 86, 87 Scenario, 452 Scheduling parameters, 467, 475, 476, 478, 483 Second-order Sliding Mode (SM), 339 Semi-implicit discretization, 181 Sensitivity to faults, 457 Settling-time function, 42 Sigmoidal approximation, 485 Sigmoidal function, 479 Simulations, 447 Singular perturbation, 338 Sliding Mode (SM), 337–339, 344, 355, 453 Sliding-mode control, 462, 464, 465, 467, 469, 480, 499 Sliding-mode observer, 466 Sobolev spaces, 541, 545, 549 Spatially varying diffusivity, 543 Spring, 456 Standard deviation, 453 State, 442, 446 Stiffness, 443, 447 Strict-feedback nonlinear systems, 21, 34 Strict lyapunov function, 16, 17 Strong observability, 137 Super-twist, 474 Super-twisting algorithm, 450 Super-twisting differentiator, 165–170, 174, 178 Suspension, 441 Switching function, 471, 473, 492, 494, 496

Synchronization, 542, 543, 548, 564 System matrices, 450 System parameters, 441 T Taxi drone, 462 Technology readiness level, 464 Threshold, 452 Time-scale transformation for predefinedtime convergence, 43 Time-varying gain, 43 Time-varying parameters, 450 Trailing bogie, 445 Transient phase, 308 Tsypkin locus, 337 Type 1 Diabetes Mellitus, 385–389, 391, 394, 411 U Uniform differential observability, 149 Uniform Lyapunov stability, 61

Unit vector control, 307–309, 313, 319, 320 Unknown input identification, 97, 99, 109, 111, 120, 121 Unknown parameter, 451 Unmanned Aerial Vehicle (UAV), 461–463, 465, 466, 474, 477, 478 Unmanned ground vehicle, 478 UVA/Padova, 385, 387, 395, 411

V Variable-structure differentiator, 165, 167, 178 Vehicle parameters, 444 Velocity, 447 Virtual control, 469, 473, 477 Viscous friction, 341, 342 Visual meteorological condition, 487

W Wheelsets, 441