Design and Analysis of Control Systems: Driving the Fourth Industrial Revolution [2 ed.] 1032718803, 9781032718804

Written to inspire and cultivate the ability to design and analyse feasible control algorithms for a wide range of engineering applications…


English | 794 pages | 2024


Table of contents:
Cover
Half Title
Title Page
Copyright Page
Dedication
Table of Contents
Preface
Key Features of the Book
Key Benefits for the User
Acknowledgements
Author
Chapter 1 An Introduction to Control Systems
1.1 Introduction
1.1.1 Background
1.2 A Recent History of Control Systems
1.2.1 Automatic Control
1.2.2 Multivariable Control
1.3 What Is a Control System?
1.4 Open-Loop Control vs. Closed-Loop Control
1.4.1 Open-Loop Control
1.4.2 Closed-Loop Control
1.4.3 Advantages of Closed-Loop Systems
1.4.4 Disadvantages of Closed-Loop Systems
1.5 Examples of Simple Control Systems
1.5.1 Manual Car Direction of Travel Control
1.5.2 Cruise Control for a Car
1.5.3 Automatic Water Level Control
1.5.4 Manual Water Level Control
1.5.5 Feedback in Social, Economic, and Political Systems
1.6 Classification of Control Systems
1.6.1 Linear vs. Nonlinear Control Systems
1.6.2 Time-Invariant vs. Time-Variant Control Systems
1.6.3 Continuous-Data vs. Discrete-Data Control Systems
1.6.4 Regulator vs. Tracking Control Systems
1.7 Control System Design
1.8 Advanced Applications of Control Systems
1.8.1 Autonomous SpaceX Rocket Landing
1.8.2 Boston Dynamics’ Atlas Robot
1.8.3 Honda’s ASIMO Robot
1.8.4 Sophia the Robot
1.9 Computer-Aided Design and Analysis
1.9.1 Overview of Automated Computational Tools
1.9.2 MATLAB
1.9.3 MATLAB Control System Toolbox
1.9.4 Simulink Control Design
1.9.5 LabVIEW
1.9.6 SPICE-Based Simulation Software
1.9.7 SimPowerSystems
1.9.8 Stateflow
1.9.9 Mathcad
1.10 Control Systems in the 4IR
1.10.1 The Fourth Industrial Revolution
1.10.2 Control System Innovations in the 4IR
1.10.3 Neural Networks
1.10.4 Fuzzy Logic
1.10.5 Intelligent Control System Structure
1.10.6 Examples of Intelligent Control Systems
1.10.7 Challenges of Intelligent Control Systems
1.11 Book Outline
Chapter 2 Modelling of Dynamic Systems
2.1 Introduction
2.1.1 Chapter Objectives
2.2 Dynamic Systems
2.3 Dynamic System Models
2.3.1 Modelling Concepts
2.3.2 Summary of Model Derivation Procedure
2.3.3 Different Dynamic System Models
2.4 Overview of Different Dynamic Systems
2.4.1 Translational Mechanical Systems
2.5 Key Dynamic System Models
2.6 Input-Output Differential Equation Form Model
2.7 State-Variable Matrix Form (State-Space) Model
2.7.1 Choice of State Variables
2.7.2 Summary of the State-Variable Form Modelling
2.7.3 Obtaining the State-Variable Matrix Model
2.7.4 State-Variable Matrix Models for Nonlinear Systems
2.7.5 Characteristics of State-Variable Models
2.7.6 Comparison with the Input-Output Model
2.8 Transfer Function Form Model
2.8.1 Obtaining the Transfer Function Model
2.8.2 The Laplace Transform
2.8.3 Properties of Laplace Transforms
2.8.4 Laplace Transform of Some Key Functions
2.8.5 Determination of the Transfer Function Model
2.8.6 The s-Operator Method
2.8.7 The Component Transfer Function Method
2.8.8 The Transfer Function in Pole-Zero Factored Form
2.9 Block Diagram Form Model
2.9.1 Networks of Blocks
2.9.2 Negative Feedback in Op-Amp Circuits
2.9.3 Positive Feedback in Op-Amp Circuits
2.9.4 Simplifying Block Diagram Models
2.9.5 Worked Examples of Simplifying Block Diagrams
2.10 Examples of Dynamic System Modelling
2.10.1 Translational Mechanical Systems
2.10.2 Rotational Mechanical Systems
2.10.3 Electrical Systems (RLC Networks)
2.10.4 Electromechanical Systems
2.11 Switching between Different System Models
2.12 Input-Output Model to a Transfer Function Model
2.13 Transfer Function Model to an Input-Output Model
2.14 Block Diagram Model to a Transfer Function Model
2.15 Transfer Function Model to a State-Variable Matrix Model
2.15.1 General Controllable Canonical Form
2.15.2 Special Controllable Canonical Form
2.15.3 General Conversion to a State-Variable Matrix Model
2.16 State-Variable Matrix Model to a Transfer Function Model
2.16.1 Inverse of a General n × n Matrix
2.16.2 Model Conversion Using Matrix Inversion
2.16.3 State-Space to Transfer Function Using Laplace Transforms
2.17 Linearisation of Nonlinear Models
2.17.1 Small Signal Linearisation
2.17.2 Linearisation of Element Laws
2.17.3 Linearisation of Models
2.17.4 Linearisation Concepts
2.17.5 Examples of Nonlinear Dynamic Systems
2.17.6 Modelling of Dynamic Systems with Time Delays
Chapter 3 Dynamic System Response
3.1 Introduction
3.1.1 Chapter Objectives
3.2 Time Domain Solution of System Models
3.2.1 Homogeneous Input-Output Equations
3.2.2 Nonhomogeneous Input-Output Equations
3.2.3 First-Order Systems
3.2.4 Second-Order Systems
3.2.5 Analogous Mechanical and Electrical Systems
3.2.6 Solution of the State-Variable Matrix Model
3.3 Frequency Domain Solution of System Models
3.3.1 The Inverse Laplace Transform
3.3.2 Partial Fractions
3.3.3 General Second-Order Laplace Function
3.4 Determination of the System Response
3.4.1 Using the Input-Output Model
3.4.2 Using the Transfer Function Model
3.4.3 Impulse Response (Natural Response)
3.4.4 Unit Step Response
3.4.5 Impulse and Unit Step Responses: The Relationship
3.4.6 Final Value Theorem (FVT)
3.4.7 Initial Value Theorem
3.4.8 System DC Gain
3.5 First-Order Systems
3.6 Second-Order Systems
3.6.1 Impulse and Step Responses
3.6.2 Stability of Second-Order Systems
3.6.3 Response Characteristics of Second-Order Systems
3.6.4 Effects of Pole-Zero Location on System Response
3.6.5 Impact of Zeros on System Response
3.6.6 Nonminimum Phase Systems
3.6.7 Impact of Nonlinearities on System Response
3.6.8 Impact of Time Delay on System Response
3.7 Worked Examples of Dynamic System Response
Chapter 4 Characteristics of Feedback Control Systems
4.1 Introduction
4.2 Open-Loop vs. Closed-Loop Control: Analysis
4.2.1 Open-Loop Control
4.2.2 Closed-Loop Control
4.2.3 Advantages of Closed-Loop Systems
4.2.4 Disadvantages of Closed-Loop Systems
4.2.5 Open-Loop Control Design
4.2.6 The Case for Closed-Loop Control
4.2.7 Closed-Loop Control Design
4.2.8 Cascade and Feedback Controllers
4.2.9 Control System Design Requirements
4.2.10 Control Objectives and Specifications
4.3 Feedback Control Systems
4.3.1 Special Case I
4.3.2 Special Case II
4.3.3 Positive Feedback vs. Negative Feedback
4.4 Steady-State Performance
4.4.1 Expressions for Steady-State Error
4.5 Disturbance Modelling
4.5.1 Effective Disturbance Rejection
4.5.2 Examples of Open- and Closed-Loop Systems
4.6 Car Cruise Control System (Open-Loop)
4.6.1 Input-Output Model
4.6.2 Transfer Function Model
4.6.3 Block Diagram Model
4.6.4 State-Variable Matrix Model
4.7 Car Cruise Control System (Closed-Loop)
4.7.1 Input-Output Model
4.7.2 Transfer Function Model
4.7.3 Block Diagram Model
4.7.4 State-Variable Matrix Model
4.8 DC Motor Speed Control (Open-Loop)
4.8.1 Input-Output Model
4.8.2 Transfer Function Model
4.8.3 Block Diagram Model (Open-Loop)
4.8.4 State-Variable Matrix Model
4.9 DC Motor Position Control (Open-Loop)
4.9.1 Input-Output Model
4.9.2 Transfer Function Model
4.9.3 State-Variable Model
4.10 DC Motor Speed Control (Closed-Loop)
4.10.1 Input-Output Model
4.10.2 Transfer Function Model
4.10.3 Block Diagram Model (Closed-Loop)
4.10.4 State-Variable Model
4.11 Modelling PID Controllers
4.11.1 Proportional Controller (P)
4.11.2 Proportional and Integral Controller (PI)
4.11.3 Proportional and Derivative Controller (PD)
4.11.4 The General PID Controller
4.11.5 The Standard PID Controller
4.11.6 Summary of PID Controller Characteristics
4.11.7 Performance of PID-Type Controllers
4.12 MATLAB Implementation
4.12.1 State-Variable Matrix Model
4.12.2 Transfer Function Model
4.12.3 Sample MATLAB Code: Motor Speed PID Control
4.13 Further Analysis of PID-Type Controllers
4.13.1 Poles and Zeros of PID-Type Controllers
4.14 Tuning the General PID Controller
4.14.1 Trial and Error Method
4.14.2 Quarter Decay Ratio Method
4.14.3 Stability Limit Method
4.15 The Standard PID Controller
4.15.1 Structure and Advantages
4.15.2 Standard PI and PD Controllers
4.15.3 Standard PID Controller: Performance Analysis
4.15.4 Tuning the Standard PID Controller
4.16 PID Controller Design and Implementation
4.16.1 Implementation of PID Controllers
4.16.2 Realisation Using an Operational Amplifier
4.16.3 Windscreen Wiper Control System
4.17 System Steady-State Tracking
4.17.1 Steady-State Errors and System Type
4.17.2 Illustrative Examples
4.18 Sensitivity
4.18.1 Definition of Sensitivity
4.18.2 Open- and Closed-Loop Sensitivity
4.19 System Stability
4.19.1 Bounded Input-Bounded Output Stability
4.19.2 Asymptotic Internal Stability
4.19.3 Routh-Hurwitz Stability Criterion
4.20 Worked Examples of System Stability
4.21 System Type Based on Stability
4.21.1 Absolutely Stable System
4.21.2 Marginally Stable System
4.21.3 Conditionally Stable System
4.22 Time Delays in Control Systems
4.22.1 Impact of Time Delay in Control Systems
4.22.2 Time Delay and the Smith Predictor Controller
Chapter 5 Root Locus Design Methods
5.1 Introduction
5.2 Root Locus
5.2.1 Background
5.2.2 Definition
5.2.3 Magnitude and Angle Criteria
5.2.4 Breakpoint, Departure and Arrival Angles
5.3 Constructing the Root Locus
5.3.1 Summary of the Root Locus Steps
5.3.2 Simple Rules for Constructing the Root Locus
5.3.3 Details of the Root Locus Steps
5.3.4 Determining the Root Locus Gain (Control Gain)
5.3.5 Root Locus for Second-Order Systems
5.4 Worked Examples of Root Locus Design
5.5 Dynamic Compensation: Lead and Lag
5.6 Extensions of Root Locus Method
5.6.1 Time Delay
5.6.2 Nonlinear Systems
5.7 Computer-Aided Determination of the Root Locus
5.7.1 MATLAB
Chapter 6 Frequency-Response Design Methods
6.1 Introduction
6.2 Definition of the Frequency Response
6.2.1 Magnitude and Phase Angle
6.2.2 Combining Magnitudes and Phase Angles
6.3 Bode Plots
6.3.1 Definition
6.3.2 Background
6.3.3 Advantages of Bode Plots
6.3.4 Bode Plot Techniques
6.3.5 Four Classes of Basic Factors
6.4 Constant Factors (Gain)
6.4.1 Magnitude
6.4.2 Phase Angle
6.5 A Simple Zero Factor
6.5.1 Magnitude
6.5.2 Phase Angle
6.6 A Simple Pole Factor
6.6.1 Magnitude
6.6.2 Phase Angle
6.7 An Integrator Factor
6.7.1 Magnitude
6.7.2 Phase Angle
6.8 A Derivative Factor
6.8.1 Magnitude
6.8.2 Phase Angle
6.9 A Complex Pole Factor
6.9.1 Magnitude
6.9.2 Phase Angle
6.10 A Complex Zero Factor
6.10.1 Magnitude
6.10.2 Phase Angle
6.11 Drawing Bode Plots of Complex Systems
6.12 Worked Examples of Bode Plots
6.13 Nonminimum Phase Systems
6.13.1 Magnitude and Phase
6.14 Impact of Time Delay on Bode Plots
6.14.1 Bode Plots of a Time Delay
6.14.2 Impact of Time Delay on Bode Plots
6.15 Bode Plots Using MATLAB
6.15.1 A Single Plot
6.15.2 Several Plots on the Same Curve
6.15.3 System in State-Space
6.16 Models from Experimental Frequency Data
6.17 Compensation
6.17.1 PD and Lead Compensators
6.17.2 PI and Lag Compensators
6.17.3 Generic Compensator
6.17.4 PID Compensator
6.17.5 Lead-Lag Compensator
6.17.6 Summary of Compensation Characteristics
Chapter 7 State-Space Design Methods
7.1 Introduction
7.2 Definitions
7.3 Block Diagram and the Transfer Function
7.3.1 State-Space Description and the Block Diagram
7.3.2 Transfer Function Model: A Revisit
7.4 System Response by State-Transition Matrix
7.4.1 Direct Solution of the Differential Equation
7.4.2 Direct State-Transition Matrix Method
7.4.3 Diagonalisation
7.4.4 System Response by Laplace Transform Method
7.5 System Controllability and Observability
7.5.1 Summary of Definitions
7.5.2 Implication of Pole-Zero Cancellation
7.5.3 Worked Examples of Controllability & Observability
7.6 Canonical State-Space Models
7.6.1 Controllable Canonical State-Space Model
7.6.2 Observable Canonical State-Space Model
7.6.3 Diagonal Canonical State-Space Model
7.6.4 Jordan Canonical State-Space Model
7.6.5 Matrix Eigenvalues and Eigenvectors
7.6.6 Matrix Diagonalisation
7.7 Similarity Transformations
7.8 Canonical State-Space Models: Revisited
7.8.1 Controllable Canonical State-Space Model
7.8.2 Observable Canonical State-Space Model
7.8.3 Jordan Canonical State-Space Model
7.9 Transfer Function Direct Decomposition
7.9.1 Decomposition to Controllable Canonical Form
7.9.2 Decomposition to Observable Canonical Form
7.10 Full State Feedback Control
7.10.1 Pole Placement Design Method
7.10.2 Pole Placement Using Ackermann’s Formula
7.11 Introduction to Optimal Control
7.11.1 Overview of Optimisation Theory
7.11.2 Basic Optimal Control Problem
7.12 Estimator Design
7.12.1 Full-Order State Estimator
7.12.2 Duality of Estimation and Control
7.12.3 Reduced-Order Estimator
7.12.4 Compensator Design: Control Law and Estimator
7.12.5 A Robust Tracking Control System
Chapter 8 Digital Control Systems
8.1 Introduction
8.2 Digitisation: Sampled Data Systems
8.2.1 General Structure
8.2.2 Data Sampling
8.2.3 Characteristics of Discrete Time Signals
8.2.4 The Z-Transform
8.2.5 Z-Transform in Digital Control: A Summary
8.2.6 Use of the System DC Gain
8.3 Key Digital Control Design Methods
8.3.1 Equivalent Digital Control Design
8.3.2 Assessment and Comparison of EDCD Methods
8.3.3 Direct Digital Control Design
8.3.4 State-Space Analysis
8.3.5 System Transfer Function
8.3.6 Controllability and Observability
8.3.7 Stability of Digital Control Systems
8.4 Worked Examples of Digital Control Systems
8.5 MATLAB Implementation of Digital Systems
8.5.1 Mass-Spring-Damper System
8.5.2 Ball and Beam Control System
8.5.3 Digitising a PID Controller and the Plant
8.5.4 The Digital PID Controller
8.5.5 Time Delay in Digital Control System
8.5.6 Implementation of Digital Control Systems
Chapter 9 Advanced Control Systems
9.1 Introduction
9.2 State-Space Estimation
9.2.1 System Description
9.2.2 Kalman Filter Algorithm
9.3 The Information Filter
9.3.1 Information Space
9.3.2 Information Filter Derivation
9.3.3 Filter Characteristics
9.4 The Extended Kalman Filter (EKF)
9.4.1 Nonlinear State-Space
9.4.2 EKF Derivation
9.4.3 Summary of the EKF Algorithm
9.5 The Extended Information Filter (EIF)
9.5.1 Nonlinear Information Space
9.5.2 EIF Derivation
9.5.3 Summary of the EIF Algorithm
9.5.4 Filter Characteristics
9.5.5 Decentralised Estimation
9.6 Optimal Stochastic Control
9.6.1 Stochastic Control Problem
9.6.2 Optimal Stochastic Solution
9.6.3 Nonlinear Stochastic Control
9.6.4 Centralised Control
9.7 Nonlinear Control Systems
9.7.1 Nonlinear Dynamic Systems
9.8 Analysis of Nonlinear Control Systems
9.8.1 Describing Function Analysis
9.8.2 Phase Plane Methods
9.9 Design of Nonlinear Control Systems
9.9.1 Linearisation Methods
9.9.2 Introduction to Adaptive Control
9.9.3 Introduction to Robust Control
9.9.4 Nonlinear Control for a Solar PV Power System
Chapter 10 AI-Based Design and Analysis of Control Systems
10.1 Introduction
10.2 Data-Driven Dynamic System Modelling
10.2.1 AI-Based Approaches to Control Systems
10.3 Introduction to Artificial Intelligence
10.3.1 Traditional (Single-Task) Artificial Intelligence
10.3.2 Artificial General Intelligence
10.3.3 Machine Learning
10.3.4 Deep Learning
10.3.5 Deep Reinforcement Learning
10.3.6 Generative AI
10.3.7 Machine Learning Workflow
10.4 Applying AI Techniques to Control Systems
10.4.1 Classification of PID Control Systems
10.4.2 Machine Learning and PID Control
10.4.3 Deep Reinforcement Learning Agents
10.4.4 MATLAB Reinforcement Learning Environments
10.5 Fuzzy Logic Control Systems
10.5.1 Fuzzy PID Control System
10.5.2 Tank Water Level Control System
10.5.3 Advanced Fuzzy Logic Control Systems
10.6 Artificial Neural Networks-Driven Control Systems
10.7 The Fourth Industrial Revolution
10.7.1 History of Industrial Revolutions
10.7.2 Key 4IR Technologies
10.8 A 4IR Example: An Intelligent Fleet Management System
10.8.1 System Components
10.8.2 Summary of Intelligent Capabilities
10.9 Design and Analysis of a Drone’s Control System
10.9.1 Components of the Control System
10.9.2 Designing the Drone Control System
10.9.3 Advanced Drone Control Systems
10.10 Design and Analysis of a Driverless Car’s Control System
10.10.1 Automation Levels for a Driverless Car
10.10.2 Designing the Control System
10.10.3 Objective Functions for a Driverless Car
10.11 Artificial Intelligence and Robotics: Great Expectations and Daunting Existential Risks
10.11.1 Definitions
10.11.2 Opportunities and Challenges in Equal Measure
10.11.3 AI Risk Mitigation and Management
10.11.4 The Way Forward
Appendix A Laplace and Z-Transforms
A.1 Properties of Laplace Transforms
A.2 Table of Laplace Transforms
A.3 Properties of Z-Transforms
A.4 Table of Z-Transforms
Appendix B MATLAB: Basics and Exercises
B.1 Getting Started
B.2 Creating MATLAB Files
B.3 Commands
B.3.1 Vectors
B.3.2 Functions
B.3.3 Plotting
B.3.4 Polynomials
B.3.5 Matrices
B.4 Printing
B.4.1 Macintosh
B.4.2 Windows
B.4.3 Unix
B.4.4 Plots
B.5 Using M-files in MATLAB
B.5.1 Macintosh
B.5.2 PC Windows
B.5.3 Unix
B.6 Saving Workspace
B.7 Getting Help in MATLAB
B.8 Control Functions
B.8.1 Step
B.8.2 Impulse
B.9 More Commands
B.10 LABWORK I
B.11 LABWORK II
B.12 LABWORK III
B.13 LABWORK IV
B.14 LABWORK V
B.15 LABWORK VI
B.16 LABWORK VII
B.17 LABWORK VIII
B.18 LABWORK IX
B.19 LABWORK X
B.19.1 Control Using Artificial Intelligence and MATLAB
B.19.2 DC Motor Control Using Arduino
Bibliography
Index


Design and Analysis of Control Systems

Written to inspire and cultivate the ability to design and analyse feasible control algorithms for a wide range of engineering applications, this comprehensive text covers the theoretical and practical principles involved in the design and analysis of control systems. This second edition introduces 4IR adoption strategies for traditional intelligent control, including new techniques of implementing control systems. It provides improved coverage of the characteristics of feedback control, root-locus analysis, frequency-response analysis, state space methods, digital control systems and advanced controls, including updated worked examples and problems.

Features:

• Describes very timely applications and contains a good mix of theory, application, and computer simulation.
• Covers all the fundamentals of control systems.
• Takes a transdisciplinary and cross-disciplinary approach.
• Explores updates for 4IR (Industry 4.0) and includes better experiments and illustrations for nonlinear control systems.
• Includes homework problems, case studies, examples, and a solutions manual.

This book is aimed at senior undergraduate and graduate students, professional engineers, and academic researchers in interrelated engineering disciplines such as electrical, mechanical, aerospace, mechatronics, robotics and other AI-based systems.

Design and Analysis of Control Systems: Driving the Fourth Industrial Revolution, Second Edition

Arthur G.O. Mutambara

Designed cover image: Arthur G.O. Mutambara

MATLAB® and Simulink® are trademarks of The MathWorks, Inc. and are used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® or Simulink® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® and Simulink® software Second edition published 2024 by CRC Press 2385 NW Executive Center Drive, Suite 320, Boca Raton FL 33431 and by CRC Press 4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN CRC Press is an imprint of Taylor & Francis Group, LLC © 2024 Arthur G.O. Mutambara First edition published by CRC Press 1999 Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. 
For works that are not available on CCC please contact [email protected] Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Library of Congress Cataloging-in-Publication Data Names: Mutambara, Arthur G. O., author. Title: Design and analysis of control systems : driving the fourth industrial revolution / Arthur G.O. Mutambara. Description: Second edition. | Boca Raton, FL : CRC Press, 2024. | Includes bibliographical references and index. Identifiers: LCCN 2023043910 (print) | LCCN 2023043911 (ebook) | ISBN 9781032718804 (hbk) | ISBN 9781032718866 (pbk) | ISBN 9781032718880 (ebk) Subjects: LCSH: Automatic control. | System design. Classification: LCC TJ213 .M85 2024 (print) | LCC TJ213 (ebook) | DDC 629.8—dc23/eng/20231220 LC record available at https://lccn.loc.gov/2023043910 LC ebook record available at https://lccn.loc.gov/2023043911 ISBN: 9781032718804 (hbk) ISBN: 9781032718866 (pbk) ISBN: 9781032718880 (ebk) DOI: 10.1201/9781032718880 Typeset in Times by codeMantra

Dedication This book is dedicated to oppressed people throughout the world and their struggle for social justice and egalitarianism. Defeat is not on the agenda.

Contents Preface...........................................................................................................................................xviii Key Features of the Book................................................................................................................. xx Key Benefits for the User................................................................................................................xxi Acknowledgements........................................................................................................................xxii Author ...........................................................................................................................................xxiii Chapter 1

An Introduction to Control Systems........................................................................ 1 1.1 1.2

1.3 1.4

1.5

1.6

1.7 1.8

1.9

Introduction..................................................................................................... 1 1.1.1 Background ........................................................................................ 1 A Recent History of Control Systems ............................................................ 2 1.2.1 Automatic Control ............................................................................. 3 1.2.2 Multivariable Control......................................................................... 3 What Is a Control System? ............................................................................. 3 Open-Loop Control vs. Closed-Loop Control ................................................ 5 1.4.1 Open-Loop Control............................................................................ 6 1.4.2 Closed-Loop Control ......................................................................... 6 1.4.3 Advantages of Closed-Loop Systems ................................................ 6 1.4.4 Disadvantages of Closed-Loop Systems............................................ 7 Examples of Simple Control Systems ............................................................ 7 1.5.1 Manual Car Direction of Travel Control............................................ 7 1.5.2 Cruise Control for a Car..................................................................... 8 1.5.3 Automatic Water Level Control ......................................................... 9 1.5.4 Manual Water Level Control.............................................................. 9 1.5.5 Feedback in Social, Economic, and Political Systems .................... 10 Classification of Control Systems................................................................. 11 1.6.1 Linear vs. Nonlinear Control Systems ............................................. 12 1.6.2 Time-Invariant vs. Time-Variant Control Systems .......................... 12 1.6.3 Continuous-Data vs. 
Discrete-Data Control Systems...................... 13 1.6.4 Regulator vs. Tracking Control Systems ......................................... 13 Control System Design ................................................................................. 13 Advanced Applications of Control Systems................................................. 15 1.8.1 Autonomous SpaceX Rocket Landing............................................. 15 1.8.2 Boston Dynamics’ Atlas Robot ....................................................... 16 1.8.3 Honda’s ASIMO Robot.................................................................... 17 1.8.4 Sophia the Robot.............................................................................. 19 Computer-Aided Design and Analysis ......................................................... 21 1.9.1 Overview of Automated Computational Tools ................................ 21 1.9.2 MATLAB ......................................................................................... 22 1.9.3 MATLAB Control System Toolbox................................................. 22 1.9.4 Simulink Control Design ................................................................. 24 1.9.5 LabVIEW......................................................................................... 28 1.9.6 SPICE-Based Simulation Software.................................................. 31 vii

viii

Contents

1.9.7 SimPowerSystems............................................................................ 31 1.9.8 Stateflow .......................................................................................... 32 1.9.9 Mathcad............................................................................................ 32 1.10 Control Systems in the 4IR........................................................................... 32 1.10.1 The Fourth Industrial Revolution..................................................... 32 1.10.2 Control System Innovations in the 4IR............................................ 33 1.10.3 Neural Networks .............................................................................. 34 1.10.4 Fuzzy Logic ..................................................................................... 34 1.10.5 Intelligent Control System Structure ............................................... 35 1.10.6 Examples of Intelligent Control Systems ........................................ 36 1.10.7 Challenges of Intelligent Control Systems ...................................... 38 1.11 Book Outline................................................................................................. 40 Chapter 2

Modelling of Dynamic Systems ............................................................................ 44 2.1

Introduction................................................................................................... 44 2.1.1 Chapter Objectives........................................................................... 44 2.2 Dynamic Systems ......................................................................................... 45 2.3 Dynamic System Models.............................................................................. 45 2.3.1 Modelling Concepts ......................................................................... 46 2.3.2 Summary of Model Derivation Procedure ....................................... 47 2.3.3 Different Dynamic System Models ................................................. 47 2.4 Overview of Different Dynamic Systems..................................................... 47 2.4.1 Translational Mechanical Systems................................................... 47 2.5 Key Dynamic System Models ...................................................................... 49 2.6 Input-Output Differential Equation Form Model ......................................... 49 2.7 State-Variable Matrix Form (State-Space) Model ........................................ 51 2.7.1 Choice of State Variables ................................................................. 52 2.7.2 Summary of the State-Variable Form Modelling............................. 53 2.7.3 Obtaining the State-Variable Matrix Model..................................... 53 2.7.4 State-Variable Matrix Models for Nonlinear Systems ..................... 56 2.7.5 Characteristics of State-Variable Models......................................... 57 2.7.6 Comparison with the Input-Output Model....................................... 57 2.8 Transfer Function Form Model..................................................................... 57 2.8.1 Obtaining the Transfer Function Model........................................... 
58 2.8.2 The Laplace Transform.................................................................... 58 2.8.3 Properties of Laplace Transforms .................................................... 59 2.8.4 Laplace Transform of Some Key Functions .................................... 60 2.8.5 Determination of the Transfer Function Model ............................... 65 2.8.6 The s-Operator Method.................................................................... 70 2.8.7 The Component Transfer Function Method..................................... 72 2.8.8 The Transfer Function in Pole-Zero Factored Form ........................ 74 2.9 Block Diagram Form Model......................................................................... 75 2.9.1 Networks of Blocks.......................................................................... 76 2.9.2 Negative Feedback in Op-Amp Circuits .......................................... 80 2.9.3 Positive Feedback in Op-Amp Circuits ........................................... 81 2.9.4 Simplifying Block Diagram Models................................................ 82 2.9.5 Worked Examples of Simplifying Block Diagrams......................... 85 2.10 Examples of Dynamic System Modelling .................................................... 90 2.10.1 Translational Mechanical Systems................................................... 91 2.10.2 Rotational Mechanical Systems..................................................... 100

ix

Contents

        2.10.3  Electrical Systems (RLC Networks) ............................. 106
        2.10.4  Electromechanical Systems ..................................... 128
    2.11  Switching between Different System Models ........................... 131
    2.12  Input-Output Model to a Transfer Function Model ..................... 134
    2.13  Transfer Function Model to an Input-Output Model .................... 134
    2.14  Block Diagram Model to a Transfer Function Model .................... 134
    2.15  Transfer Function Model to a State-Variable Matrix Model ............ 135
        2.15.1  General Controllable Canonical Form ........................... 135
        2.15.2  Special Controllable Canonical Form ........................... 136
        2.15.3  General Conversion to a State-Variable Matrix Model ........... 142
    2.16  State-Variable Matrix Model to a Transfer Function Model ............ 148
        2.16.1  Inverse of a General n × n Matrix ............................. 149
        2.16.2  Model Conversion Using Matrix Inversion ....................... 152
        2.16.3  State-Space to Transfer Function Using Laplace Transforms ..... 154
    2.17  Linearisation of Nonlinear Models ................................... 162
        2.17.1  Small Signal Linearisation .................................... 163
        2.17.2  Linearisation of Element Laws ................................. 163
        2.17.3  Linearisation of Models ....................................... 163
        2.17.4  Linearisation Concepts ........................................ 164
        2.17.5  Examples of Nonlinear Dynamic Systems ......................... 166
        2.17.6  Modelling of Dynamic Systems with Time Delays ................. 172

Chapter 3  Dynamic System Response ............................................ 185
    3.1   Introduction ........................................................ 185
        3.1.1  Chapter Objectives ............................................. 185
    3.2   Time Domain Solution of System Models ............................... 186
        3.2.1  Homogeneous Input-Output Equations ............................. 186
        3.2.2  Nonhomogeneous Input-Output Equations .......................... 187
        3.2.3  First-Order Systems ............................................ 190
        3.2.4  Second-Order Systems ........................................... 196
        3.2.5  Analogous Mechanical and Electrical Systems .................... 198
        3.2.6  Solution of the State-Variable Matrix Model .................... 199
    3.3   Frequency Domain Solution of System Models .......................... 204
        3.3.1  The Inverse Laplace Transform .................................. 204
        3.3.2  Partial Fractions .............................................. 205
        3.3.3  General Second-Order Laplace Function .......................... 210
    3.4   Determination of the System Response ................................ 214
        3.4.1  Using the Input-Output Model ................................... 214
        3.4.2  Using the Transfer Function Model .............................. 221
        3.4.3  Impulse Response (Natural Response) ............................ 223
        3.4.4  Unit Step Response ............................................. 223
        3.4.5  Impulse and Unit Step Responses: The Relationship .............. 224
        3.4.6  Final Value Theorem (FVT) ...................................... 225
        3.4.7  Initial Value Theorem .......................................... 225
        3.4.8  System DC Gain ................................................. 227
    3.5   First-Order Systems ................................................. 228
    3.6   Second-Order Systems ................................................ 231
        3.6.1  Impulse and Step Responses ..................................... 234
        3.6.2  Stability of Second-Order Systems .............................. 235
        3.6.3  Response Characteristics of Second-Order Systems ............... 239


        3.6.4  Effects of Pole-Zero Location on System Response ............... 243
        3.6.5  Impact of Zeros on System Response ............................. 245
        3.6.6  Nonminimum Phase Systems ....................................... 245
        3.6.7  Impact of Nonlinearities on System Response .................... 248
        3.6.8  Impact of Time Delay on System Response ........................ 250
    3.7   Worked Examples of Dynamic System Response .......................... 251

Chapter 4  Characteristics of Feedback Control Systems ........................ 275
    4.1   Introduction ........................................................ 275
    4.2   Open-Loop vs. Closed-Loop Control: Analysis ......................... 275
        4.2.1   Open-Loop Control ............................................. 275
        4.2.2   Closed-Loop Control ........................................... 276
        4.2.3   Advantages of Closed-Loop Systems ............................. 277
        4.2.4   Disadvantages of Closed-Loop Systems .......................... 277
        4.2.5   Open-Loop Control Design ...................................... 277
        4.2.6   The Case for Closed-Loop Control .............................. 280
        4.2.7   Closed-Loop Control Design .................................... 282
        4.2.8   Cascade and Feedback Controllers .............................. 282
        4.2.9   Control System Design Requirements ............................ 283
        4.2.10  Control Objectives and Specifications ......................... 284
    4.3   Feedback Control Systems ............................................ 285
        4.3.1  Special Case I ................................................. 287
        4.3.2  Special Case II ................................................ 287
        4.3.3  Positive Feedback vs. Negative Feedback ........................ 288
    4.4   Steady-State Performance ............................................ 288
        4.4.1  Expressions for Steady-State Error ............................. 289
    4.5   Disturbance Modelling ............................................... 290
        4.5.1  Effective Disturbance Rejection ................................ 292
        4.5.2  Examples of Open- and Closed-Loop Systems ...................... 293
    4.6   Car Cruise Control System (Open-Loop) ............................... 293
        4.6.1  Input-Output Model ............................................. 294
        4.6.2  Transfer Function Model ........................................ 294
        4.6.3  Block Diagram Model ............................................ 294
        4.6.4  State-Variable Matrix Model .................................... 295
    4.7   Car Cruise Control System (Closed-Loop) ............................. 297
        4.7.1  Input-Output Model ............................................. 297
        4.7.2  Transfer Function Model ........................................ 298
        4.7.3  Block Diagram Model ............................................ 298
        4.7.4  State-Variable Matrix Model .................................... 299
    4.8   DC Motor Speed Control (Open-Loop) .................................. 300
        4.8.1  Input-Output Model ............................................. 302
        4.8.2  Transfer Function Model ........................................ 303
        4.8.3  Block Diagram Model (Open-Loop) ................................ 304
        4.8.4  State-Variable Matrix Model .................................... 305
    4.9   DC Motor Position Control (Open-Loop) ............................... 306
        4.9.1  Input-Output Model ............................................. 306
        4.9.2  Transfer Function Model ........................................ 306
        4.9.3  State-Variable Model ........................................... 307
    4.10  DC Motor Speed Control (Closed-Loop) ................................ 308
        4.10.1  Input-Output Model ............................................ 308


        4.10.2  Transfer Function Model ....................................... 309
        4.10.3  Block Diagram Model (Closed-Loop) ............................. 309
        4.10.4  State-Variable Model .......................................... 311
    4.11  Modelling PID Controllers ........................................... 312
        4.11.1  Proportional Controller (P) ................................... 314
        4.11.2  Proportional and Integral Controller (PI) ..................... 315
        4.11.3  Proportional and Derivative Controller (PD) ................... 316
        4.11.4  The General PID Controller .................................... 317
        4.11.5  The Standard PID Controller ................................... 319
        4.11.6  Summary of PID Controller Characteristics ..................... 319
        4.11.7  Performance of PID-Type Controllers ........................... 322
    4.12  MATLAB Implementation ............................................... 323
        4.12.1  State-Variable Matrix Model ................................... 323
        4.12.2  Transfer Function Model ....................................... 323
        4.12.3  Sample MATLAB Code: Motor Speed PID Control ................... 324
    4.13  Further Analysis of PID-Type Controllers ............................ 324
        4.13.1  Poles and Zeros of PID-Type Controllers ....................... 325
    4.14  Tuning the General PID Controller ................................... 329
        4.14.1  Trial and Error Method ........................................ 329
        4.14.2  Quarter Decay Ratio Method .................................... 329
        4.14.3  Stability Limit Method ........................................ 330
    4.15  The Standard PID Controller ......................................... 331
        4.15.1  Structure and Advantages ...................................... 332
        4.15.2  Standard PI and PD Controllers ................................ 334
        4.15.3  Standard PID Controller: Performance Analysis ................. 335
        4.15.4  Tuning the Standard PID Controller ............................ 337
    4.16  PID Controller Design and Implementation ............................ 337
        4.16.1  Implementation of PID Controllers ............................. 338
        4.16.2  Realisation Using an Operational Amplifier .................... 339
        4.16.3  Windscreen Wiper Control System ............................... 342
    4.17  System Steady-State Tracking ........................................ 346
        4.17.1  Steady-State Errors and System Type ........................... 346
        4.17.2  Illustrative Examples ......................................... 348
    4.18  Sensitivity ......................................................... 356
        4.18.1  Definition of Sensitivity ..................................... 356
        4.18.2  Open- and Closed-Loop Sensitivity ............................. 356
    4.19  System Stability .................................................... 364
        4.19.1  Bounded Input-Bounded Output Stability ........................ 364
        4.19.2  Asymptotic Internal Stability ................................. 364
        4.19.3  Routh-Hurwitz Stability Criterion ............................. 368
    4.20  Worked Examples of System Stability ................................. 371
    4.21  System Type Based on Stability ...................................... 378
        4.21.1  Absolutely Stable System ...................................... 378
        4.21.2  Marginally Stable System ...................................... 378
        4.21.3  Conditionally Stable System ................................... 378
    4.22  Time Delays in Control Systems ...................................... 380
        4.22.1  Impact of Time Delay in Control Systems ....................... 380
        4.22.2  Time Delay and the Smith Predictor Controller ................. 382

Chapter 5  Root Locus Design Methods .......................................... 397
    5.1   Introduction ........................................................ 397
    5.2   Root Locus .......................................................... 397
        5.2.1  Background ..................................................... 397
        5.2.2  Definition ..................................................... 398
        5.2.3  Magnitude and Angle Criteria ................................... 399
        5.2.4  Breakpoint, Departure and Arrival Angles ....................... 400
    5.3   Constructing the Root Locus ......................................... 400
        5.3.1  Summary of the Root Locus Steps ................................ 400
        5.3.2  Simple Rules for Constructing the Root Locus ................... 400
        5.3.3  Details of the Root Locus Steps ................................ 406
        5.3.4  Determining the Root Locus Gain (Control Gain) ................. 410
        5.3.5  Root Locus for Second-Order Systems ............................ 410
    5.4   Worked Examples of Root Locus Design ................................ 411
    5.5   Dynamic Compensation: Lead and Lag .................................. 433
    5.6   Extensions of Root Locus Method ..................................... 435
        5.6.1  Time Delay ..................................................... 435
        5.6.2  Nonlinear Systems .............................................. 439
    5.7   Computer-Aided Determination of the Root Locus ...................... 440
        5.7.1  MATLAB ......................................................... 440

Chapter 6  Frequency-Response Design Methods .................................. 445
    6.1   Introduction ........................................................ 445
    6.2   Definition of the Frequency Response ................................ 445
        6.2.1  Magnitude and Phase Angle ...................................... 446
        6.2.2  Combining Magnitudes and Phase Angles .......................... 447
    6.3   Bode Plots .......................................................... 451
        6.3.1  Definition ..................................................... 451
        6.3.2  Background ..................................................... 451
        6.3.3  Advantages of Bode Plots ....................................... 452
        6.3.4  Bode Plot Techniques ........................................... 453
        6.3.5  Four Classes of Basic Factors .................................. 455
    6.4   Constant Factors (Gain) ............................................. 456
        6.4.1  Magnitude ...................................................... 456
        6.4.2  Phase Angle .................................................... 456
    6.5   A Simple Zero Factor ................................................ 458
        6.5.1  Magnitude ...................................................... 458
        6.5.2  Phase Angle .................................................... 459
    6.6   A Simple Pole Factor ................................................ 460
        6.6.1  Magnitude ...................................................... 460
        6.6.2  Phase Angle .................................................... 461
    6.7   An Integrator Factor ................................................ 462
        6.7.1  Magnitude ...................................................... 462
        6.7.2  Phase Angle .................................................... 463
    6.8   A Derivative Factor ................................................. 463
        6.8.1  Magnitude ...................................................... 463
        6.8.2  Phase Angle .................................................... 463
    6.9   A Complex Pole Factor ............................................... 464
        6.9.1  Magnitude ...................................................... 465
        6.9.2  Phase Angle .................................................... 466


    6.10  A Complex Zero Factor ............................................... 467
        6.10.1  Magnitude ..................................................... 467
        6.10.2  Phase Angle ................................................... 467
    6.11  Drawing Bode Plots of Complex Systems ............................... 468
    6.12  Worked Examples of Bode Plots ....................................... 468
    6.13  Nonminimum Phase Systems ............................................ 480
        6.13.1  Magnitude and Phase ........................................... 480
    6.14  Impact of Time Delay on Bode Plots .................................. 481
        6.14.1  Bode Plots of a Time Delay .................................... 482
        6.14.2  Impact of Time Delay on Bode Plots ............................ 484
    6.15  Bode Plots Using MATLAB ............................................. 492
        6.15.1  A Single Plot ................................................. 492
        6.15.2  Several Plots on the Same Curve ............................... 492
        6.15.3  System in State-Space ......................................... 494
    6.16  Models from Experimental Frequency Data ............................. 495
    6.17  Compensation ........................................................ 496
        6.17.1  PD and Lead Compensators ...................................... 496
        6.17.2  PI and Lag Compensators ....................................... 498
        6.17.3  Generic Compensator ........................................... 499
        6.17.4  PID Compensator ............................................... 500
        6.17.5  Lead-Lag Compensator .......................................... 502
        6.17.6  Summary of Compensation Characteristics ....................... 502

Chapter 7  State-Space Design Methods ......................................... 508
    7.1   Introduction ........................................................ 508
    7.2   Definitions ......................................................... 508
    7.3   Block Diagram and the Transfer Function ............................. 509
        7.3.1  State-Space Description and the Block Diagram .................. 509
        7.3.2  Transfer Function Model: A Revisit ............................. 511
    7.4   System Response by State-Transition Matrix .......................... 515
        7.4.1  Direct Solution of the Differential Equation ................... 515
        7.4.2  Direct State-Transition Matrix Method .......................... 516
        7.4.3  Diagonalisation ................................................ 518
        7.4.4  System Response by Laplace Transform Method .................... 520
    7.5   System Controllability and Observability ............................ 524
        7.5.1  Summary of Definitions ......................................... 524
        7.5.2  Implication of Pole-Zero Cancellation .......................... 525
        7.5.3  Worked Examples of Controllability & Observability ............. 526
    7.6   Canonical State-Space Models ........................................ 548
        7.6.1  Controllable Canonical State-Space Model ....................... 548
        7.6.2  Observable Canonical State-Space Model ......................... 549
        7.6.3  Diagonal Canonical State-Space Model ........................... 550
        7.6.4  Jordan Canonical State-Space Model ............................. 550
        7.6.5  Matrix Eigenvalues and Eigenvectors ............................ 552
        7.6.6  Matrix Diagonalisation ......................................... 553
    7.7   Similarity Transformations .......................................... 555
    7.8   Canonical State-Space Models: Revisited ............................. 556
        7.8.1  Controllable Canonical State-Space Model ....................... 556
        7.8.2  Observable Canonical State-Space Model ......................... 559
        7.8.3  Jordan Canonical State-Space Model ............................. 562


    7.9   Transfer Function Direct Decomposition .............................. 563
        7.9.1  Decomposition to Controllable Canonical Form ................... 563
        7.9.2  Decomposition to Observable Canonical Form ..................... 564
    7.10  Full State Feedback Control ......................................... 564
        7.10.1  Pole Placement Design Method .................................. 566
        7.10.2  Pole Placement Using Ackermann’s Formula ...................... 570
    7.11  Introduction to Optimal Control ..................................... 574
        7.11.1  Overview of Optimisation Theory ............................... 574
        7.11.2  Basic Optimal Control Problem ................................. 577
    7.12  Estimator Design .................................................... 582
        7.12.1  Full-Order State Estimator .................................... 583
        7.12.2  Duality of Estimation and Control ............................. 586
        7.12.3  Reduced-Order Estimator ....................................... 587
        7.12.4  Compensator Design: Control Law and Estimator ................. 594
        7.12.5  A Robust Tracking Control System .............................. 597

Chapter 8  Digital Control Systems ............................................ 604
    8.1   Introduction ........................................................ 604
    8.2   Digitisation: Sampled Data Systems .................................. 604
        8.2.1  General Structure .............................................. 604
        8.2.2  Data Sampling .................................................. 606
        8.2.3  Characteristics of Discrete Time Signals ....................... 610
        8.2.4  The Z-Transform ................................................ 612
        8.2.5  Z-Transform in Digital Control: A Summary ...................... 615
        8.2.6  Use of the System DC Gain ...................................... 616
    8.3   Key Digital Control Design Methods .................................. 617
        8.3.1  Equivalent Digital Control Design .............................. 617
        8.3.2  Assessment and Comparison of EDCD Methods ...................... 620
        8.3.3  Direct Digital Control Design .................................. 621
        8.3.4  State-Space Analysis ........................................... 623
        8.3.5  System Transfer Function ....................................... 626
        8.3.6  Controllability and Observability .............................. 626
        8.3.7  Stability of Digital Control Systems ........................... 627
    8.4   Worked Examples of Digital Control Systems .......................... 627
    8.5   MATLAB Implementation of Digital Systems ............................ 642
        8.5.1  Mass-Spring-Damper System ...................................... 642
        8.5.2  Ball and Beam Control System ................................... 644
        8.5.3  Digitising a PID Controller and the Plant ...................... 645
        8.5.4  The Digital PID Controller ..................................... 647
        8.5.5  Time Delay in Digital Control System ........................... 654
        8.5.6  Implementation of Digital Control Systems ...................... 654

Chapter 9  Advanced Control Systems ........................................... 660
    9.1   Introduction ........................................................ 660
    9.2   State-Space Estimation .............................................. 660
        9.2.1  System Description ............................................. 661
        9.2.2  Kalman Filter Algorithm ........................................ 661
    9.3   The Information Filter .............................................. 662
        9.3.1  Information Space .............................................. 662
        9.3.2  Information Filter Derivation .................................. 665


        9.3.3  Filter Characteristics ......................................... 667
    9.4   The Extended Kalman Filter (EKF) .................................... 667
        9.4.1  Nonlinear State-Space .......................................... 667
        9.4.2  EKF Derivation ................................................. 668
        9.4.3  Summary of the EKF Algorithm ................................... 671
    9.5   The Extended Information Filter (EIF) ............................... 672
        9.5.1  Nonlinear Information Space .................................... 672
        9.5.2  EIF Derivation ................................................. 672
        9.5.3  Summary of the EIF Algorithm ................................... 674
        9.5.4  Filter Characteristics ......................................... 675
        9.5.5  Decentralised Estimation ....................................... 675
    9.6   Optimal Stochastic Control .......................................... 676
        9.6.1  Stochastic Control Problem ..................................... 676
        9.6.2  Optimal Stochastic Solution .................................... 677
        9.6.3  Nonlinear Stochastic Control ................................... 679
        9.6.4  Centralised Control ............................................ 680
    9.7   Nonlinear Control Systems ........................................... 680
        9.7.1  Nonlinear Dynamic Systems ...................................... 681
    9.8   Analysis of Nonlinear Control Systems ............................... 682
        9.8.1  Describing Function Analysis ................................... 683
        9.8.2  Phase Plane Methods ............................................ 683
    9.9   Design of Nonlinear Control Systems ................................. 684
        9.9.1  Linearisation Methods .......................................... 684
        9.9.2  Introduction to Adaptive Control ............................... 684
        9.9.3  Introduction to Robust Control ................................. 685
        9.9.4  Nonlinear Control for a Solar PV Power System .................. 687

Chapter 10  AI-Based Design and Analysis of Control Systems ................... 692
    10.1  Introduction ........................................................ 692
    10.2  Data-Driven Dynamic System Modelling ................................ 692
        10.2.1  AI-Based Approaches to Control Systems ........................ 693
    10.3  Introduction to Artificial Intelligence ............................. 694
        10.3.1  Traditional (Single-Task) Artificial Intelligence ............. 694
        10.3.2  Artificial General Intelligence ............................... 694
        10.3.3  Machine Learning .............................................. 695
        10.3.4  Deep Learning ................................................. 696
        10.3.5  Deep Reinforcement Learning ................................... 696
        10.3.6  Generative AI ................................................. 697
        10.3.7  Machine Learning Workflow ..................................... 698
    10.4  Applying AI Techniques to Control Systems ........................... 699
        10.4.1  Classification of PID Control Systems ......................... 700
        10.4.2  Machine Learning and PID Control .............................. 703
        10.4.3  Deep Reinforcement Learning Agents ............................ 703
        10.4.4  MATLAB Reinforcement Learning Environments .................... 704
    10.5  Fuzzy Logic Control Systems ......................................... 705
        10.5.1  Fuzzy PID Control System ...................................... 706
        10.5.2  Tank Water Level Control System ............................... 707
        10.5.3  Advanced Fuzzy Logic Control Systems .......................... 709
    10.6  Artificial Neural Networks-Driven Control Systems ................... 712
    10.7  The Fourth Industrial Revolution .................................... 714


        10.7.1  History of Industrial Revolutions ............................. 715
        10.7.2  Key 4IR Technologies .......................................... 717
    10.8  A 4IR Example: An Intelligent Fleet Management System ............... 721
        10.8.1  System Components ............................................. 721
        10.8.2  Summary of Intelligent Capabilities ........................... 721
    10.9  Design and Analysis of a Drone’s Control System ..................... 722
        10.9.1  Components of the Control System .............................. 722
        10.9.2  Designing the Drone Control System ............................ 723
        10.9.3  Advanced Drone Control Systems ................................ 724
    10.10 Design and Analysis of a Driverless Car’s Control System ............ 725
        10.10.1  Automation Levels for a Driverless Car ....................... 726
        10.10.2  Designing the Control System ................................. 727
        10.10.3  Objective Functions for a Driverless Car ..................... 728
    10.11 Artificial Intelligence and Robotics: Great Expectations and
          Daunting Existential Risks .......................................... 729
        10.11.1  Definitions .................................................. 729
        10.11.2  Opportunities and Challenges in Equal Measure ................ 730
        10.11.3  AI Risk Mitigation and Management ............................ 732
        10.11.4  The Way Forward .............................................. 732

Appendix A  Laplace and Z-Transforms .......................................... 735
    A.1   Properties of Laplace Transforms .................................... 735
    A.2   Table of Laplace Transforms ......................................... 736
    A.3   Properties of Z-Transforms .......................................... 737
    A.4   Table of Z-Transforms ............................................... 737

Appendix B  MATLAB: Basics and Exercises ...................................... 739
    B.1   Getting Started ..................................................... 739
    B.2   Creating MATLAB Files ............................................... 739
    B.3   Commands ............................................................ 740
        B.3.1  Vectors ........................................................ 740
        B.3.2  Functions ...................................................... 740
        B.3.3  Plotting ....................................................... 741
        B.3.4  Polynomials .................................................... 742
        B.3.5  Matrices ....................................................... 743
    B.4   Printing ............................................................ 744
        B.4.1  Macintosh ...................................................... 744
        B.4.2  Windows ........................................................ 744
        B.4.3  Unix ........................................................... 744
        B.4.4  Plots .......................................................... 744
    B.5   Using M-files in MATLAB ............................................. 745
        B.5.1  Macintosh ...................................................... 745
        B.5.2  PC Windows ..................................................... 745
        B.5.3  Unix ...........................................................
745 B.6 Saving Workspace....................................................................................... 745 B.7 Getting Help in MATLAB .......................................................................... 745 B.8 Control Functions ....................................................................................... 746 B.8.1 Step ................................................................................................ 746 B.8.2 Impulse........................................................................................... 746 B.9 More Commands ........................................................................................ 748

xvii

Contents

B.10 B.11 B.12 B.13 B.14 B.15 B.16 B.17 B.18 B.19

LABWORK I.............................................................................................. 749 LABWORK II ............................................................................................ 750 LABWORK III ........................................................................................... 750 LABWORK IV ........................................................................................... 753 LABWORK V ............................................................................................ 754 LABWORK VI ........................................................................................... 754 LABWORK VII.......................................................................................... 756 LABWORK VIII ........................................................................................ 757 LABWORK IX ........................................................................................... 758 LABWORK X ............................................................................................ 760 B.19.1 Control Using Artificial Intelligence and MATLAB ..................... 760 B.19.2 DC Motor Control Using Arduino................................................. 762

Bibliography .................................................................................................................................. 764 Index .............................................................................................................................................. 765

Preface

This book covers the principles and challenges that characterise the design and analysis of control systems within the context of the Fourth Industrial Revolution (4IR) – an all-embracing technological transformation that has radically altered the way individuals, institutions, businesses, communities and countries operate, organise, relate, and innovate. The 4IR is characterised by a range of technologies fusing the physical, digital, and biological worlds, impacting all disciplines, economies, and industries. The revolution’s fascinating and interrelated drivers include Artificial Intelligence, Artificial General Intelligence, Machine Learning, Augmented Reality, Human Augmentation, Big Data, the Internet of Things, the Internet of Everything, Autonomous Systems (military vehicles, drones, and driverless cars), Chatbot Systems such as ChatGPT, 3D Printing, Robotics and Mechatronics, Quantum Computing, Cloud Computing, the Metaverse, Blockchain Technology, and Cryptocurrency. Most of these technologies and their applications require some type of control system or mechanism. Moreover, the 4IR’s challenges and solutions are transdisciplinary in nature, hence the deployment of techniques of such a cross-disciplinary subject as control systems engineering becomes natural and efficacious.

A control system is an interconnection of interacting components forming a configuration that provides a desired system response. Feedback is part of the control system and entails measuring the controlled variable and using that information to influence the controlled variable. This book is novel because it provides insight into the fundamental nature of control systems and enables the reader to engineer effective controllers within the context of the 4IR. This book aims to imbue the reader with skills to design and analyse implementable control algorithms for a wide range of real engineering applications.
With the advent of the 4IR, the book seeks to provide control systems solutions to the various 4IR technologies and their applications. This book is divided into ten chapters which can be organised into four transdisciplinary courses (key drivers of the 4IR) that can be offered in Engineering departments worldwide such as Electrical and Electronic, Mechanical, Electrical and Computer Science, Mechatronics and Robotics, Industrial, Chemical, and Metallurgy.

The first part of this book (Chapters 1–3) covers new control technologies, introduces the principles involved in control systems, discusses various applications (including new 4IR cases), develops mathematical models for dynamic systems, and shows how the models are used to obtain and study the system response (behaviour). This material is useful in a Dynamic Systems II class, which serves as a basis for two standard courses, Control Systems III and IV, whose key material is covered in the second part of this book (Chapters 4–6, and the first part of Chapter 7). This part of the book covers the basic characteristics of control systems, root locus design methods, frequency-response design methods, and an introduction to state-space design methods. The last part of this book (second part of Chapter 7, and Chapters 8–10) presents subjects suitable for a graduate course – Advanced Control Systems. This material examines advanced state-space design methods, digital control systems, state and information space estimation, and nonlinear control systems. More significantly, this last section covers comprehensively the new 4IR field of AI-based design and analysis of control systems. The way the material is presented (just-in-time learning) is such that these four transdisciplinary, integrated and overlapping courses can be pursued successfully by students without prior background in the field.
Control systems practice is a cross-disciplinary subject that cuts across aerospace, chemical, electrical, industrial, and mechanical engineering. Therefore, this book is useful in all these multiple but interrelated fields. Although written primarily as an innovative transdisciplinary 4IR classroom textbook, this book is an indispensable tool for research, self-study, and reference for engineering professionals in the 4IR. Given the application of control system concepts to other fields such as organisational theory, economics, and medicine, the transdisciplinarity of this book makes it valuable in these various fields. Furthermore, with the advent of the 4IR, it is clear that the design and analysis of control systems constitute a key driver of this brave new world. The 4IR requires special skills, capabilities, and competencies such as critical thinking, mastering how to think, problem-solving, structured thinking, learning how to learn, blended learning, an interdisciplinary approach, and ecosystem thinking. This book is written to advance these attributes in the design and analysis of control systems.

Key Features of the Book

• A transdisciplinary textbook that addresses control systems in the context of the 4IR with applications beyond engineering.
• Covers the theoretical and practical principles involved in the design and analysis of control systems applicable to a broad spectrum of engineering disciplines, with a focus on the cross-disciplinarity occasioned by the 4IR.
• Develops mathematical models for different types of dynamic systems such as mechanical, electrical, electromechanical, mechatronic, thermal-fluid, chemical, hydraulic, and pneumatic systems.
• Illustrates how system models are used to study the system response (behaviour), and how the models are employed to achieve system control.
• Presents standard control systems material in a systematic and easily accessible way.
• Introduces and develops advanced control systems concepts on a well-established foundation of standard control system principles.
• Employs just-in-time learning where concepts are introduced and developed as they become relevant. This fosters a better understanding of the concepts and their engineering applications.
• Uses many worked examples, design problems, and a wide range of illustrative applications from aerospace, robotics, mechatronics, and manufacturing.
• Makes extensive use of MATLAB and its Control System Toolbox to demonstrate control system concepts and their implementation.
• Introduces and leverages newer Computer-Aided Design (CAD) control system techniques such as Simulink Control Design, LabVIEW, SPICE-Based Simulation Software, SimPowerSystems, Stateflow, and Mathcad.
• Includes descriptions of several demonstrative modern control systems laboratory experiments such as Magnetic Levitation, the Inverted Pendulum, the 3-DOF Helicopter, the 2-DOF Robot, and the Pendubot.
• Contains many detailed examples and solved extended problems throughout every chapter. This enhances both teaching and learning.
• Presents and discusses examples of advanced and transdisciplinary applications and innovations of control systems in the 4IR.


Key Benefits for the User

• A control systems textbook written specifically for the 4IR and its various technologies, complexities, and opportunities.
• One textbook usable for four different but overlapping courses: Dynamic Systems II, Control Systems III, Control Systems IV, and Advanced Control Systems. It offers a more affordable option than using three different textbooks.
• A textbook that emphasises and embraces the multidisciplinary and transdisciplinary nature of control system engineering while employing just-in-time learning and teaching. The user follows and learns the material better compared to a conventional control system textbook.
• All the relevant principles and concepts (standard and advanced) involved in the control of dynamic systems are covered in one textbook.
• Detailed descriptions of a range of new control systems laboratory experiments that can be incorporated into the control systems curriculum.
• The extensive use of worked examples and illustrations, anchored by the book’s structure (just-in-time philosophy), makes the book easy to read and understand, dramatically enhancing both teaching and learning. Practical examples from different engineering fields are used to demonstrate concepts and their applications. This promotes holistic engineering education and transdisciplinarity in general.
• The use of MATLAB throughout the textbook helps to strengthen the grasp of concepts and enhance thorough understanding of computer implementation of control systems.
• A comprehensive appendix containing the essential basics of MATLAB, including laboratory exercises.
• In the 4IR, there is a move towards a new engineering curriculum (emphasising the multidisciplinary and transdisciplinary nature of engineering practice and the need for just-in-time learning) sweeping throughout engineering schools in Africa, the USA, Europe, and Japan. Most current control system textbooks do not adequately satisfy the requirements of blended learning, structured thinking, learning how to learn, and the cross-disciplinarity that effective engineering education in the 4IR demands. This book is written with the specific objective of enabling and facilitating the achievement of all these new attributes and imperatives of education in the 4IR.


Acknowledgements

This book is an outgrowth of the efforts of the Institute for the Future of Knowledge at the University of Johannesburg to produce transdisciplinary teaching material for the Fourth Industrial Revolution. Control systems is a typical cross-disciplinary subject with wide applications in multiple engineering and non-engineering fields. I have used some of the material in this book to teach integrated classes in dynamic systems, control systems and mechatronics in the Mechanical, Electrical, and Electronic Engineering departments at the University of Johannesburg, and other institutions in the United States. The examples and illustrations were derived from collaborative research that I have been involved in at various institutions in the United States, Europe, China, and South Africa. I greatly appreciate the comradeship, support, and suggestions provided by my colleagues in these institutions.

I would like to acknowledge all my colleagues at the Massachusetts Institute of Technology, NASA, and the University of Johannesburg for reviewing the book, classroom-testing the material, and providing insightful comments. In particular, I would like to thank Professors Edward Crawley (MIT), Chuchu Fan (MIT), Dexter Johnson (NASA Lewis), Rodney Roberts (FAMU-FSU), and Edward Chikuni (UZ) for their detailed remarks and suggestions. Professor Mark Spong of the University of Illinois, Dr Simon Julier of the US Naval Research Laboratory, Professor Paul Mason of the University of Florida and Mohammad Reza Maghami (UJ) also offered useful comments. I am deeply indebted to all my undergraduate and graduate students, in South Africa and the United States, for acting as an effective testbed for the book and providing a robust critique of the material. Their questions, comments, and extensive feedback helped me make the material more readable.


Author

Prof. Dr. Eng. Arthur G.O. Mutambara is Director and Full Professor at the University of Johannesburg’s (UJ) Institute for the Future of Knowledge (IFK) in South Africa, where he also serves as the Head of the Decentralised Artificial Intelligence and Control Systems (DAICS) Research Group. He is a world-renowned control systems specialist, roboticist, academic, author and technology strategist.

Professor Mutambara teaches Control Systems in two departments at UJ: Mechanical Engineering Science and Electrical and Electronic Engineering Science. He previously taught and conducted research in Control Systems, Robotics and Mechatronics in the Aeronautics and Astronautics Department at the Massachusetts Institute of Technology (MIT), and in the Mechanical Engineering Department at the Florida Agricultural and Mechanical University-Florida State University (FAMU-FSU), both in the United States. He was also a research scientist at the National Aeronautics and Space Administration (NASA), a Visiting Professor at MIT, and a Visiting Research Professor at Carnegie Mellon University’s Robotics Institute. In his academic research and teaching, he has received outstanding reviews and praise from students and peers worldwide. He is a Chartered Engineer, a Fellow of the Institution of Engineering and Technology (IET), a Professional Engineer, a Fellow of the Zimbabwe Institution of Engineers (ZIE), a Fellow of the Zimbabwe Academy of Sciences (ZAS), and a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).

Prof. Mutambara has written two electrical engineering books widely used in undergraduate and graduate engineering programmes in the United States, Europe, China, Japan and Africa: Decentralised Estimation and Control for Multisensor Systems (1998), and Design and Analysis of Control Systems (1999). He holds a PhD in Robotics and Mechatronics, and an MSc in Computer Engineering, both from the University of Oxford.
Mutambara was there as a Rhodes Scholar from September 1991 to March 1995. He graduated with a BSc (Honours) in Electrical Engineering from the University of Zimbabwe in 1990.

Beyond Engineering and Science, Professor Mutambara is a distinguished Pan-Africanist and a prolific public intellectual across the African continent and globally. He has conducted outstanding presentations on leadership, African unity, regional integration, management, and business strategy throughout the world, in addition to motivational speeches and seminars. He has authored three books on Thought Leadership: Volume I: The Formative Years and the Big Wide World (1983–2002); Volume II: The Path to Power (2003–2009); Volume III: Ideas & Solutions: Deputy Prime Minister and Beyond (2009–2023). In 2007, Mutambara was accorded the World Economic Forum (WEF) Young Global Leader status and subsequently attended WEF events from 2007 to 2013 in Davos (Switzerland), China, India and Africa.

Professor Mutambara is the Former Deputy Prime Minister (DPM) of Zimbabwe. He was one of the three Principals who created and led the Government of National Unity (GNU) in Zimbabwe from 2009 to 2013. The other two were the late former Prime Minister Morgan Tsvangirai and the late former President Robert Mugabe. As the Deputy Prime Minister, his key functions included assisting the Prime Minister in policy formulation by the Cabinet and supervision of policy implementation by the Council of Ministers. He also specifically supervised the Ministries under the Infrastructure Cluster, such as Energy and Power Development, Transport and Infrastructure, Information Communication Technologies, Water Resources and Development, and Public Works. Professor Mutambara coordinated regional policy formulation within these sectors across SADC and COMESA. Furthermore, DPM Mutambara drove three critical national efforts of the GNU: the development of a Shared National Vision, the Rebranding of Zimbabwe and the formulation of a National Infrastructure Master Plan. Professor Mutambara was also a management consultant with McKinsey & Company in the United States (Chicago Office) and the Director of Electronic Payments at Standard Bank in South Africa.

1 An Introduction to Control Systems

1.1 INTRODUCTION

This chapter provides the background and motivation for the material presented in this book. The general principles and issues involved in the design and analysis of control systems are outlined and illustrated by using examples. Basic control system terms are introduced and defined. Open- and closed-loop control system configurations are introduced and appraised. Classes of control systems are discussed, and a broad spectrum of control system examples from various applications are presented. Advanced applications of control systems are outlined in addition to a discussion about the nature of the control design process. Brief descriptions of a variety of basic experiments that can be used to illustrate the material presented in the book are also included. This chapter introduces newer Computer-Aided Design (CAD) control system techniques such as Simulink Control Design, LabVIEW, SPICE-Based Simulation Software, SimPowerSystems, Stateflow, and Mathcad. It includes descriptions of several demonstrative modern control systems laboratory experiments such as Magnetic Levitation, the Inverted Pendulum, the 3-DOF Helicopter, the 2-DOF Robot and the Pendubot. More significantly, this chapter presents and discusses examples of advanced and transdisciplinary applications and innovations of control systems in the 4IR. Beyond all this, the chapter provides a book outline as a road map through the contents of the manuscript.

Control is the process of causing a system variable to conform to some desired value or reference value. A system is any collection of interacting components for which there are cause-and-effect relationships among the variables. The components are connected to form a whole entity that has properties that are not present in the separate entities. Within this context, a control system is then defined as an interconnection of interacting components forming a system configuration that will provide a desired system response.
Feedback is the process of measuring the controlled variable and using that information to influence the controlled variable. To understand the purpose of a control system, it is useful to examine examples of simple control systems. These simple systems incorporate the same ideas of control and feedback that are used in complex and advanced applications.

Modern control engineering practice includes the use of control design strategies for improving manufacturing processes, efficient energy use and advanced automobile technology. These control principles are also used in rapid transit systems, advanced robotic systems, and in the emerging fields of mechatronics and micro-electromechanical systems (MEMS). Mechatronics involves the synergistic integration of mechanics, electronics, and computer science to produce optimum products and systems.
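The feedback idea just described can be made concrete with a small numerical sketch. The snippet below is illustrative only and is not from the book: the plant model, gains, and set-point are all hypothetical, and Python is used here purely for demonstration (the book itself works in MATLAB). A proportional controller repeatedly measures a car's speed, compares it with a desired speed, and applies a corrective force:

```python
def simulate_cruise(v_ref=25.0, kp=800.0, m=1200.0, b=60.0,
                    dt=0.05, steps=600):
    """Proportional cruise control: u = kp * (v_ref - v).

    Hypothetical first-order car model m*dv/dt = u - b*v,
    integrated with the simple Euler method.
    """
    v = 0.0
    history = []
    for _ in range(steps):
        error = v_ref - v          # feedback: desired minus measured speed
        u = kp * error             # controller: proportional control law
        v += dt * (u - b * v) / m  # plant: m * dv/dt = u - b*v (drag b)
        history.append(v)
    return history

speeds = simulate_cruise()
print(f"speed after 30 s: {speeds[-1]:.2f} m/s")
```

Note that this purely proportional loop settles slightly below the 25 m/s set-point, at kp·v_ref/(kp + b) ≈ 23.3 m/s; removing such steady-state offsets is one motivation for the more sophisticated controllers developed in later chapters.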
1.1.1 BACKGROUND

The field of engineering is concerned with the understanding, usage, and control of natural materials and forces for the benefit of humankind. Control system engineering is a sub-aspect of engineering that is concerned with understanding and controlling segments of the environment, often called systems, to provide useful economic products for society. The twin goals of understanding and control are complementary because effective systems control requires that the systems be understood and modelled. Furthermore, control engineering must often consider the control of poorly understood systems such as chemical process systems. The present challenge to control engineers is the modelling and control of modern, complex, multidisciplinary systems such as traffic control systems, chemical processes, robotic systems, mechatronic systems and MEMS. At the same time, the engineer has the opportunity to control many useful and intellectually challenging industrial automation systems. Perhaps the most characteristic quality of control engineering is the opportunity to control machines, industrial systems, and economic processes for the benefit of society.

Control engineering is based on the foundations of feedback theory and linear system analysis, and it integrates the concepts of network theory and communication theory. Control engineering is inherently multidisciplinary: it is equally applicable to aeronautical, chemical, mechanical, environmental, civil, and electrical engineering. Feedback controllers are used in many different systems, from aeroplanes and rockets to chemical processing plants and semiconductor manufacturing. Quite often a single control system includes electrical, mechanical, and chemical components. The theory and practice of control systems apply to many disciplines other than engineering. As the understanding of the dynamics of business, social, and political systems increases, the ability to model and control these systems will also improve.

DOI: 10.1201/9781032718880-1

1.2 A RECENT HISTORY OF CONTROL SYSTEMS

The first significant work in automatic control was carried out in the 18th century by James Watt with his centrifugal governor for the speed control of a steam engine. Other notable efforts in the early stages of control theory were by Minorsky, Hazen, and Nyquist. In 1922, Minorsky worked on automatic controllers for steering ships. He showed how stability could be achieved from the differential equations describing the system.

During the 1980s, the utilisation of digital computers for control components became routine. The technology that allows these new control elements to perform accurate and rapid calculations was previously unavailable to control engineers. These computers are employed especially for process control systems in which many variables are measured and controlled simultaneously by the computer.

With the advent of Sputnik (the first space vehicle) and the space age, another new impetus was imparted to control engineering. It became necessary to design complex, highly accurate control systems for missiles and space probes. Furthermore, the necessity to minimise the weight of satellites and to control them very accurately has spawned the important field of optimal control. Due to these requirements, the time-domain methods developed by Lyapunov, Minorsky, and others have been embraced with great interest in the past two decades. Recent theories of optimal control developed by L. S. Pontryagin in the former Soviet Union and R. Bellman in the United States, and recent studies of robust systems, have also contributed to the interest in time-domain methods. Control engineering must consider both the time-domain and the frequency-domain approaches simultaneously in the analysis and design of control systems. Classical control theory, which deals only with single-input single-output (SISO) systems, is ineffectual for multiple-input multiple-output (MIMO) systems.
The availability of digital computers, from around 1965, made time-domain analysis of complex systems possible, leading to the development of modern control theory based on time-domain analysis and synthesis. This theory uses state variables to cope with the increased complexity of modern plants. These new methods also meet the stringent requirements on accuracy, weight, and cost in civilian, defence, space, and industrial applications. From 1965 to 1985, optimal control of both deterministic and stochastic systems, as well as adaptive and learning control of complex systems, was fully investigated. From 1980 to the present, developments in modern control theory have centred around robust control, H∞ control, multisensor-based control, robust estimation, and associated topics. Now that digital computers have become cheaper and more compact, they are used as integral parts of control systems. Modern control theory has also started to find its way into such non-engineering systems as biological, biomedical, social, economic, and political systems.

1.2.1 AUTOMATIC CONTROL

The control of an industrial process (manufacturing, production, and processing) by automatic rather than manual means is often called automation. Automation is the automatic operation or control of a process, device, or system. It is prevalent in chemical, electric power, paper, automotive, and steel industries, among others. The concept of automation is central to an industrial society. Automatic machines can be used to increase the productivity of a plant and to obtain high-quality products. Automatic control of machines and processes is utilised to create a product within specified tolerances and to achieve high precision.

Automatic control has played a vital role in the advancement of engineering and science. In addition to its extreme importance in space-vehicle, missile-guidance, robotic and mechatronic systems, automatic control has become an important and integral part of modern manufacturing and industrial processes. For example, automatic control is essential in the numerical control of machine tools in the manufacturing industries, in the design of autopilot systems in aerospace industries, and in the design of vehicles in automobile industries. It is also essential in industrial operations that require the control of pressure, temperature, humidity, viscosity, and fluid flow.

Due to the importance of automatic control as a means of attaining optimal performance of dynamic systems, improving productivity and relieving the drudgery of many routine repetitive manual operations, most engineers and scientists must now have a good understanding of this field. The current revolution in computer and information technology is causing an equally momentous social change: the expansion of information gathering and information processing as computers extend the reach of the human brain. Control systems are used to achieve increased productivity and improved performance of a device or system.

1.2.2 MULTIVARIABLE CONTROL

Due to the increasing complexity of the systems that must be controlled and the interest in achieving optimum performance, the importance of control system engineering has grown in the past decade. As the systems become more complex, the interrelationship of many controlled variables must be considered in the control scheme. This leads to control systems that have more than one feedback loop, i.e., multi-loop control systems as opposed to single-loop control systems (one feedback loop). Such systems are nontrivial and are much more challenging than single-loop control systems. Most of the standard control system theory applies to single-loop control only. Multiple control loops are needed whenever a plant has multiple sensors or multiple actuators. In this case, the interaction of every feedback loop with every other feedback loop must be accounted for. While many single-loop concepts hold in principle in the multi-loop case, the technicalities are much more involved. The performance benefits of multi-loop control, however, are often far more than one would expect from a collection of single-loop controllers. Such multivariable control is essential in multi-input multi-output (MIMO) systems, whereas single-loop control is sufficient for single-input single-output (SISO) systems.
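The loop interaction described above can be illustrated with a toy simulation. The two-input, two-output plant below is invented for illustration (the model, gains, and references are all hypothetical, and Python stands in for the book's MATLAB): each output is given its own single-loop proportional controller, yet commanding a change in the first output disturbs the second, because each input leaks into the other output.

```python
def simulate_two_loops(r1=1.0, r2=0.0, k=5.0, dt=0.01, steps=500):
    """Two independent proportional loops on a cross-coupled 2x2 plant.

    Hypothetical dynamics (Euler-integrated):
        dx1/dt = -x1 + u1 + 0.5*u2
        dx2/dt = -x2 + u2 + 0.5*u1
    """
    x1 = x2 = 0.0
    for _ in range(steps):
        u1 = k * (r1 - x1)                # loop 1 ignores loop 2
        u2 = k * (r2 - x2)                # loop 2 ignores loop 1
        x1 += dt * (-x1 + u1 + 0.5 * u2)  # input 2 leaks into output 1
        x2 += dt * (-x2 + u2 + 0.5 * u1)  # input 1 leaks into output 2
    return x1, x2

x1, x2 = simulate_two_loops()
print(f"x1 = {x1:.3f}, x2 = {x2:.3f}")
```

Although loop 2's reference is zero, x2 settles near 0.08 because loop 1's control action couples into it; a genuinely multivariable design would account for this interaction rather than treating the loops independently.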

1.3 WHAT IS A CONTROL SYSTEM?

A control system is a network that seeks to influence the behaviour of a dynamic system in pursuit of specified objectives or the desired response. It is an interconnection of components forming a configuration that will facilitate the achievement of those specified outcomes for a dynamic system.


The basis for control system analysis is provided by linear system theory, which assumes a cause-and-effect relationship for the system’s components. As illustrated in Figures 1.1 and 1.2, basic control systems have three common physical components (plant, actuator, and controller) and belong to either of two structures, open-loop or closed-loop. The presence of a sensor is what distinguishes a closed-loop control system from an open-loop one. The full details (components and signals) of these two control system topologies are shown in Figures 1.3 and 1.4.

Figure 1.1 Open-loop control system.

Figure 1.2 Closed-loop control system.

Figure 1.3 The elements of an open-loop control system.

An Introduction to Control Systems


Figure 1.4 The elements of a closed-loop control system.

• Process (or plant): This is the main physical component of a control system, as it is the component (a dynamic system) whose output is to be controlled. Usually, a mathematical model of the process is required for its behaviour to be understood and then controlled.
• Actuator: A device used to physically influence the process. It is the muscle that receives the control signal from the controller and forces the plant to produce the desired output.
• Controller: An algorithm or mechanism that takes the error signal and generates the control signal required to drive the actuator. The controller is the "brain" of the control system, and its design and analysis are the central motivation for the material presented in this book.
• Sensor: A device that measures the actual system output and produces the measured output. Sensors are not perfect; they are often affected by sensor errors and associated uncertainties. Hence, there is always at least a slight difference between the actual and measured outputs.
• Desired output: The desired value for the output of the process being controlled; achieving this desired output is the objective of a control system. It is also called the reference signal.
• Actual output: The actual state of the process that is to be controlled or influenced. It must be measured by a sensor and then compared with the desired output.
• Feedback signal: A measurement of the actual output, which is added to (positive feedback) or subtracted from (negative feedback) the desired output (reference signal), producing the system's error signal to be fed into the controller.
• Comparator: This component takes the desired output and the measured output as inputs and generates an error signal that is the difference between the two. This error signal is sent to the controller.
• Disturbance signals or noise: Signals that are external to the control system but affect the process. Examples of disturbance signals include heat losses, electromagnetic radiation, and vibrations. A good controller eliminates or minimises the effects of disturbances, i.e., it manifests effective disturbance rejection.

These are the components common to all control systems, irrespective of system complexity, nature of the application, and type of dynamic system. However, depending on the type of control system, some components are not relevant. For example, an open-loop control system requires neither the sensor nor the comparator.
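The interaction of these components can be sketched numerically. The following minimal Python illustration (all numbers are hypothetical, and the first-order plant dy/dt = −y + u is chosen purely for simplicity) steps through the comparator–controller–actuator/plant–sensor loop, and shows the output settling near, but not exactly at, the desired value because of sensor bias and proportional control:

```python
# A minimal sketch of the closed-loop components above, with hypothetical
# numbers: a first-order plant, a proportional controller, and a biased sensor.

def simulate_closed_loop(desired=1.0, kp=2.0, sensor_bias=0.01, steps=200, dt=0.05):
    """Step the loop: sensor -> comparator -> controller -> actuator/plant."""
    y = 0.0  # actual output of the process (plant state)
    for _ in range(steps):
        measured = y + sensor_bias  # sensor: imperfect measurement
        error = desired - measured  # comparator: error signal
        u = kp * error              # controller: control signal
        y += dt * (-y + u)          # plant: first-order dynamics dy/dt = -y + u
    return y

print(round(simulate_closed_loop(), 3))  # → 0.66, i.e. kp*(r - bias)/(1 + kp)
```

The loop settles at kp·(r − bias)/(1 + kp) rather than exactly at r, which previews two themes of later chapters: sensor errors propagate into the output, and pure proportional control leaves a steady-state error.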

1.4 OPEN-LOOP CONTROL VS. CLOSED-LOOP CONTROL

There are two main configurations for control systems:

• Open-loop
• Closed-loop

All control systems can be placed into these two general categories. In this section, the general structures of these two configurations are presented and their applications, advantages, and disadvantages are discussed.


1.4.1 OPEN-LOOP CONTROL

An open-loop control system utilises a controller and actuator to obtain the desired response without using any measurement of the actual system response (the controlled variable) to influence the system. Thus, the objective of an open-loop control system is to achieve the desired output by utilising an actuating device to control the process directly, without the use of feedback. The elements of an open-loop control system are shown in Figure 1.3.

There are many applications where it is sufficient to use an open-loop control system. In particular, closed-loop control methods are unnecessary when the system inputs and the plant model are known with total certainty and there are no external disturbances. Open-loop systems are also more convenient when obtaining the output measurement is either difficult or not economically feasible. For example, in a washing machine, it would be quite expensive to provide a device that measures the quality of the output, i.e., the cleanliness of the clothes. However, open-loop systems have several limitations, which include: slow system response, poor disturbance rejection, poor tracking under uncertainties, high sensitivity to system parameter errors (e.g., errors in plant or controller gains), and high sensitivity to calibration errors (hence recalibration is usually necessary). A thorough comparison of the benefits and limitations of open- and closed-loop systems is presented later.
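The sensitivity of open-loop control to plant modelling errors can be illustrated with a small sketch (hypothetical gains throughout): the controller inverts an assumed plant gain, so any mismatch between the assumed and actual gain appears directly, and uncorrected, in the output:

```python
# Illustrative sketch (hypothetical numbers): an open-loop controller inverts a
# nominal plant gain; any mismatch between nominal and actual gain appears
# directly in the output, with no mechanism to correct it.

def open_loop_output(desired, nominal_gain, actual_gain):
    u = desired / nominal_gain  # control computed from the assumed model
    return actual_gain * u      # plant responds with its true gain

print(open_loop_output(10.0, 2.0, 2.0))           # 10.0: perfect model
print(round(open_loop_output(10.0, 2.0, 2.2), 2)) # 11.0: 10% gain error
                                                  # becomes 10% output error
```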

1.4.2 CLOSED-LOOP CONTROL

In contrast to an open-loop control system, a closed-loop control system utilises a measurement of the actual output to compare the actual output with the desired output response (the reference or command input). The measured output is called the feedback signal. The difference between the measured output and the desired output (the error signal) is then used to drive the output closer to the reference input through a controller and an actuator. The elements of a general closed-loop feedback control system are shown in Figure 1.4. Thus, a feedback control system is a control system that tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control. Often the difference between the output of the process under control and the reference input is amplified and used to control the process so that the difference is continually reduced. The feedback concept is the basis of control system analysis and design.
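This error-driven structure can be sketched numerically (hypothetical first-order plant dynamics and gains): the controller never knows the true plant gain, yet feeding back the measured output drives the system close to the reference anyway:

```python
# Illustrative sketch (hypothetical numbers): a proportional feedback loop
# around a first-order plant whose gain is unknown to the controller.

def closed_loop_output(desired, actual_gain, kp=50.0, steps=500, dt=0.01):
    y = 0.0
    for _ in range(steps):
        error = desired - y              # comparator uses the measured output
        u = kp * error                   # proportional controller
        y += dt * (-y + actual_gain * u) # first-order plant with unknown gain
    return y

print(round(closed_loop_output(10.0, 2.2), 2))  # → 9.91: within ~1% of 10.0
                                                # despite the 10% gain error
```

Compare this with open-loop control of the same mismatched plant, where the gain error would pass straight through to the output.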

1.4.3 ADVANTAGES OF CLOSED-LOOP SYSTEMS

Closed-loop systems have the following advantages:

• Faster response to an input signal
• Effective disturbance rejection
• Better tracking of reference signals
• Low sensitivity to system parameter errors (e.g., errors in plant or controller gains)
• Low sensitivity to changes in calibration errors (recalibration is unnecessary)
• More accurate control of the plant under disturbances and internal variations
• Effective and flexible control tuning by varying the control gain
• Can be used to stabilise systems that are inherently unstable in the open-loop form

1.4.4 DISADVANTAGES OF CLOSED-LOOP SYSTEMS

The following are some of the disadvantages of closed-loop systems:

• They require sensors, which increases the system cost
• They involve more components, which leads to higher cost and complexity
• The power costs (due to high gains) are high
• The designs are more complex and harder to build
• Sometimes obtaining the output measurement is either difficult or not economically feasible
• Initial tuning is more difficult, especially if the bandwidth is narrow
• There is always a steady-state error (with proportional controllers)
• The system tends to become unstable as the gain is increased beyond certain limits
• Closed-loop control methods are unnecessary when the system inputs and the plant model are known with total certainty and there are no external disturbances
• Closed-loop systems are not always controllable

Essentially, the advantages of closed-loop systems are the disadvantages of open-loop systems, and the disadvantages of closed-loop systems are the advantages of open-loop systems. The introduction of feedback enables the engineer to control a desired output and improve accuracy, but it requires attention to the stability of the response. Feedback is used to reduce the error between the reference input and the system output. However, the significance of the effects of feedback in control systems is more complex; the reduction of system error is merely one of the many important effects that feedback may have upon a system. The effects of feedback on system performance characteristics such as stability, bandwidth, gain, disturbance rejection, and sensitivity will be shown in Chapter 4. To understand the effects of feedback on a control system, it is essential that the phenomenon be examined in a broad sense. When feedback is deliberately introduced for the purpose of control, its existence is easily identified. However, there are numerous situations where a physical system that is normally recognised as an inherently non-feedback system turns out to have feedback when observed from a certain perspective.
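Two of these disadvantages, the steady-state error of proportional control and instability at high gain, can be sketched numerically using the hypothetical first-order plant dy/dt = −y + u (all numbers illustrative):

```python
# Sketch of two closed-loop limitations, with hypothetical parameters:
# (a) proportional control leaves a steady-state error r/(1 + kp);
# (b) in a discrete implementation, too large a gain destabilises the loop.

def steady_state_error(kp, r=1.0):
    # At equilibrium: 0 = -y + kp*(r - y)  =>  y = kp*r/(1 + kp)
    return r - kp * r / (1.0 + kp)

print(round(steady_state_error(4.0), 3))   # 0.2  (20% error)
print(round(steady_state_error(99.0), 3))  # 0.01 (1% error: smaller, never zero)

def simulate(kp, dt=0.1, steps=300, r=1.0):
    y = 0.0
    for _ in range(steps):
        y += dt * (-y + kp * (r - y))  # Euler step of the closed loop
    return y

print(abs(simulate(25.0)) > 1e6)  # True: dt*(1 + kp) > 2, the loop diverges
```

Raising the gain shrinks the steady-state error but, past a limit set by the loop dynamics (here the step size), drives the system unstable, which is exactly the trade-off analysed quantitatively in later chapters.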

1.5 EXAMPLES OF SIMPLE CONTROL SYSTEMS

There are several examples of control systems (open- and closed-loop) in biological systems, daily activities, and industrial operations. These systems could be manual, automatic, or semi-automatic, and they are presented here to show the pervasiveness and usefulness of control system principles in general, and feedback control system principles in particular.

1.5.1 MANUAL CAR DIRECTION OF TRAVEL CONTROL

Driving an automobile provides a good example of both manual and automatic control. The driving activity is a pleasant task when the automobile responds rapidly to the driver's commands. Many cars have power steering and brakes, which utilise hydraulic amplifiers to amplify the forces used for braking and steering. An example of manual control occurs when the driver has a desired path (direction) for the car, observes the actual path of the car, and then forces the car, using the steering wheel, to follow the desired path as closely as possible. Figure 1.5 shows a simple sketch of a car whose direction of travel is under manual control. An annotated block diagram of the car's manual steering control system is shown in Figure 1.6, illustrating the sensor, process, and actuator. The driver's eyes are the sensors that obtain a measurement of the car's actual direction. The driver then compares the desired course with the measured course and generates an error signal. This error signal is used by the driver, who is the manual controller, to determine how to move the steering wheel (the actuator) so that the car (the process) moves in the desired direction.


Figure 1.5 Control of a car’s direction of travel.

Figure 1.6 A manual closed-loop control system for a car.

A typical result of manual direction-of-travel control is shown in Figure 1.7: the actual path tracks the desired path.

1.5.2 CRUISE CONTROL FOR A CAR

Automobile cruise control provides a good example of automatic control. Most modern cars have a cruise control facility where a desired car speed is set, a speedometer measures the actual speed of the vehicle, and the difference between the two is used to drive an automatic controller (a programmed processor). This controller sends a signal to the engine throttle so that more (or less) fuel is burnt for the car to travel at the desired speed. Figure 1.8 shows a general car cruise control system where the car is the process, the speedometer is the sensor, and the engine is the actuator.
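The cruise-control loop can be sketched numerically. The following illustration uses a hypothetical first-order car model m dv/dt = −bv + u under proportional speed control; all parameter values are illustrative rather than taken from any particular vehicle:

```python
# A minimal numerical sketch of the cruise-control loop (all parameters are
# hypothetical): the speedometer reading is compared with the set speed, and
# the error drives the engine force command.

def cruise_control(v_set=30.0, m=1000.0, b=50.0, kp=800.0, dt=0.1, steps=2000):
    """First-order car model m*dv/dt = -b*v + u, with proportional control."""
    v = 0.0  # car starts at rest (m/s)
    for _ in range(steps):
        error = v_set - v           # comparator: speedometer vs set speed
        u = kp * error              # controller output = engine force command
        v += dt * (-b * v + u) / m  # car (process) dynamics
    return v

print(round(cruise_control(), 2))  # → 28.24: near 30 m/s, with the
                                   # steady-state error expected of P control
```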


Figure 1.7 Control of a car’s path (direction of travel).

Figure 1.8 A car cruise closed-loop control system.

The car's cruise control system is very simple but very illustrative of control system phenomena. In Chapter 4, it is extensively modelled and quantitatively analysed in both its open- and closed-loop forms.

1.5.3 AUTOMATIC WATER LEVEL CONTROL

Water level control provides another simple platform to illustrate both manual and automatic control. Figure 1.9 shows a schematic diagram of a liquid level control system. The automatic controller maintains the water level by comparing the measured level with the desired level and correcting any error by adjusting the opening of the pneumatic valve. Figure 1.10 shows the corresponding block diagram of the water level control system, where the water tank and its dynamics constitute the process, the float is the sensor, and the valve is the actuator. The (float) sensor measures the actual level of the water, and this is compared with the desired water level. The difference (error signal) is used to open or close the valve (actuator). Thus, the mechanism automatically adjusts the water level until the desired level is achieved. Automatic water level control is used in many manufacturing plants where Programmable Logic Controllers (PLCs) are deployed.
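The float-valve mechanism is essentially an on-off (bang-bang) controller, which can be sketched as follows (all flow rates and levels are hypothetical):

```python
# A sketch of the automatic water-level loop with hypothetical numbers and a
# simple on-off (bang-bang) valve, as a float mechanism effectively provides.

def water_level(desired=1.0, outflow=0.02, inflow=0.05, dt=1.0, steps=500):
    level = 0.0
    for _ in range(steps):
        valve_open = level < desired          # float sensor + comparator
        q_in = inflow if valve_open else 0.0  # pneumatic valve: fully open/closed
        level = max(0.0, level + dt * (q_in - outflow))
    return level

print(0.9 < water_level() < 1.1)  # True: the level is held near the set point
```

The level does not settle exactly at the set point; it oscillates in a small band around it, which is characteristic of on-off control.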

1.5.4 MANUAL WATER LEVEL CONTROL

Figure 1.9 Automatic water level control mechanism.

Figure 1.10 Block diagram of an automatic water level control system.

Figure 1.11 A manual water level closed-loop control system.

For the water level control system, a manual control system can be used instead of an automatic one. In this case, the human operator knows the desired water level, observes the actual water level, and uses the difference between the two to determine how to turn the pneumatic valve so that the desired water level is achieved. The control system block diagram for such a manual control system is shown in Figure 1.11. The eyes of the human operator now constitute the sensor, the operator's brain is the controller, and the actuator consists of the operator's muscles together with the pneumatic valve.

1.5.5 FEEDBACK IN SOCIAL, ECONOMIC, AND POLITICAL SYSTEMS

The principles of feedback and control are not restricted to engineering and biological systems. There are many applications and occurrences of feedback control concepts in the socio-economic and political arena. Society is composed of many feedback systems and regulatory bodies that act as controllers, exerting the forces on society necessary to maintain a desired output. A simplified model of the national income feedback control system is shown in Figure 1.12 [10]. Such a control network fosters an understanding of government control and expenditure. Other loops in the system involve consumers and taxes. Although social, political, and economic feedback systems are not very rigorous, they do provide useful information and system understanding.

Figure 1.12 Application of the principles of feedback to economic systems.

A control system that became popular during the COVID-19 pandemic is the chatbot customer service agent used in social media. Such chatbots use a Natural Language Processing (NLP) control architecture and have many applications in various industries.

1.6 CLASSIFICATION OF CONTROL SYSTEMS

Feedback control systems can be classified in a number of ways, depending on the purpose of the classification. For instance, based on the method of analysis and design, control systems are classified as linear or nonlinear, and time-varying or time-invariant. When considering the type of signals used in the system, reference is often made to continuous-data and discrete-data systems, or modulated and unmodulated systems. Control systems are also often classified according to the purpose of the system. For instance, a position-control system and a velocity-control system control the output variables according to ways implied by the names. The classification of a control system can also be defined according to the form of the open-loop transfer function. In general, there are many other ways of identifying control systems according to some special features of the system. It is important that some of these more common ways of classifying control systems are appreciated so that a proper perspective is gained before embarking on the analysis and design of these systems. The different classification strategies can be summarised as follows:

• Linear vs. nonlinear
• Time-variant vs. time-invariant
• Continuous-data vs. discrete-data
• Single-input single-output (SISO) vs. multiple-input multiple-output (MIMO)
• Regulator vs. tracking
• Purpose of control (e.g., position control or velocity control)
• Form of open-loop transfer function

These classes are summarised in Figure 1.13 and some are briefly discussed in the next sections.


Figure 1.13 Classification of control systems.

1.6.1 LINEAR VS. NONLINEAR CONTROL SYSTEMS

This classification is made according to the methods of analysis and design. Strictly speaking, linear systems do not exist in practice, since all physical systems are nonlinear to some extent. Linear feedback control systems are idealised models fabricated by the engineer purely for simplicity of analysis and design. When the magnitudes of signals in a control system are limited to ranges in which the system components exhibit linear characteristics (i.e., the principle of superposition applies), the system is essentially linear. But when the magnitudes of signals are extended beyond the range of linear operation, depending on the severity of the nonlinearity, the system should no longer be considered linear. For instance, amplifiers used in control systems often exhibit a saturation effect when their input signals become large, and the magnetic field of a motor usually has saturation properties. Other common nonlinear effects found in control systems include the backlash or dead play between coupled gear members, nonlinear spring characteristics, and nonlinear friction or torque between moving members.

Quite often, nonlinear characteristics are intentionally introduced in a control system to improve its performance or provide more effective control. For instance, to achieve minimum-time control, an on-off relay is used in many missile or spacecraft control systems. Typically in these systems, jets are mounted on the sides of the vehicle to provide reaction torque for attitude control. These jets are often controlled in a full-on or full-off fashion, so a fixed amount of air is applied from a given jet for a certain time period to control the attitude of the space vehicle. For linear systems, there exists a wealth of analytical and graphical techniques for design and analysis purposes. A majority of the material in this book is devoted to the analysis and design of linear systems.
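The superposition test mentioned above is easy to state in code. In this sketch (hypothetical gains and limits), a pure gain satisfies superposition, while the same gain followed by saturation, as in an overdriven amplifier, does not:

```python
# A small check of the superposition principle: for a linear gain, the response
# to the sum of two inputs equals the sum of the responses; a saturating
# element (e.g., an amplifier driven too hard) breaks this.

def linear(u):
    return 2.0 * u  # ideal linear gain

def saturating(u, limit=5.0):
    return max(-limit, min(limit, 2.0 * u))  # same gain, but output clipped

u1, u2 = 2.0, 3.0
print(linear(u1 + u2) == linear(u1) + linear(u2))              # True
print(saturating(u1 + u2) == saturating(u1) + saturating(u2))  # False
```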
Nonlinear systems, on the other hand, are usually difficult to treat mathematically, and there are no general methods available for solving a wide class of nonlinear systems. In the design of control systems, it is practical first to design the controller based on the linear-system model by neglecting the nonlinearities of the system. The designed controller is then applied to the nonlinear system model for evaluation or redesign by computer simulation.

1.6.2 TIME-INVARIANT VS. TIME-VARIANT CONTROL SYSTEMS

When the parameters of a control system are constant with respect to time during the operation of the system, the system is called a time-invariant system. A time-variant system is one whose output response depends on the moment of observation and the moment of input signal application. In practice, most physical systems contain elements that drift or vary with time. For example, the winding resistance of an electric motor varies when the motor is first excited and its temperature is rising. Another example of a time-varying system is a guided-missile control system in which the mass of the missile decreases as the fuel on board is consumed during flight. Although a time-varying system without nonlinearity is still a linear system, the analysis and design of this class of systems are usually much more complex than those of linear time-invariant systems.
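The distinction can be sketched with a hypothetical time-varying gain g(t): the same input applied at different times produces different outputs, which cannot happen in a time-invariant system:

```python
# A sketch of time variance (hypothetical system): the output of y(t) = g(t)*u(t)
# observed one second after a unit input depends on when the input was applied,
# because the gain g(t) drifts with time.

def response(t_apply, g=lambda t: 1.0 + 0.1 * t):
    """Output one second after a unit input applied at time t_apply."""
    t = t_apply + 1.0
    return g(t) * 1.0

print(round(response(0.0), 2))   # 1.1
print(round(response(10.0), 2))  # 2.1: same input, later application,
                                 # different output
```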

1.6.3 CONTINUOUS-DATA VS. DISCRETE-DATA CONTROL SYSTEMS

A continuous-data system is one in which the signals are functions of the continuous-time variable t. Among continuous-data control systems, the signals may be further classified as alternating current (AC) or direct current (DC). Unlike the general definitions of these terms in electrical engineering, in an AC control system the signals are modulated by some form of modulation scheme, whereas in a DC control system they are unmodulated. A DC control system does not mean that all the signals in the system are unidirectional; otherwise there would be no corrective control action. It simply implies that the signals are unmodulated, although they are still AC signals according to the conventional definition. Typical components of a DC control system include potentiometers, DC amplifiers, DC motors, and DC tachometers.

AC control systems are used extensively in aircraft and missile control systems in which noise and disturbance often create problems. By using modulated AC control systems with carrier frequencies of 400 Hz or higher, the system is made less susceptible to low-frequency noise. Typical components of an AC control system include synchros, AC amplifiers, AC motors, gyroscopes, and accelerometers. In practice, not all control systems are strictly of the AC or DC type. A system may incorporate a mixture of AC and DC components, using modulators and demodulators to match the signals at various points in the system.

Discrete-data control systems differ from continuous-data systems in that the signals at one or more points of the system are in the form of either a pulse train or a digital code. Usually, discrete-data control systems are subdivided into sampled-data and digital control systems. Sampled-data control systems refer to a more general class of discrete-data systems in which the signals are in the form of pulse data. A digital control system refers to the use of a digital computer or controller in the system so that the signals are digitally coded, such as in binary code. In general, a sampled-data system receives data or information only intermittently at specific instants of time. A sampled-data system can also be classified as an AC system, since the signal of the system is pulse-modulated.
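The sampling idea can be sketched with a zero-order hold, which is how a digital controller typically sees a continuous signal between sampling instants (the signal and sampling period here are hypothetical):

```python
# A sketch of sampled data (hypothetical signal and rate): a continuous signal
# is observed only at sampling instants and held constant between them.

import math

def sample_and_hold(signal, t, period=0.1):
    """Zero-order hold: return the most recent sample of `signal` before t."""
    n = math.floor(t / period)  # index of the last sampling instant
    return signal(n * period)   # value held since that instant

sig = lambda t: math.sin(2 * math.pi * t)
print(round(sample_and_hold(sig, 0.27), 4))  # 0.9511 — the sample taken at t = 0.2
print(round(sig(0.2), 4))                    # 0.9511 — matches the held value
```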

1.6.4 REGULATOR VS. TRACKING CONTROL SYSTEMS

A regulator control system is a control system whose desired output (the reference) is constant. Most control systems in industry, particularly the process industry, are of the regulator type. However, the desired output in many other systems is time-varying, i.e., the reference signal is time-variant and the controlled output is required to follow this signal as closely as possible. This situation characterises the tracking problem.
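The two problem types can be sketched with the same proportional loop on a hypothetical first-order plant: held at a constant set point it acts as a regulator, and given a slowly varying reference it acts as a tracker (all gains illustrative):

```python
# A sketch of the regulator vs. tracking distinction (hypothetical plant and
# gains): the same loop holds a constant set point or follows a moving one.

import math

def run(reference, kp=20.0, dt=0.01, steps=1000):
    y = 0.0
    for k in range(steps):
        r = reference(k * dt)
        y += dt * (-y + kp * (r - y))  # first-order plant under P control
    return y, reference(steps * dt)

y_reg, r_reg = run(lambda t: 1.0)                 # regulator: constant reference
y_trk, r_trk = run(lambda t: math.sin(0.5 * t))  # tracking: moving reference

print(abs(y_reg - r_reg) < 0.1)  # True: holds near the fixed set point
print(abs(y_trk - r_trk) < 0.1)  # True: follows the slowly varying reference
```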

1.7 CONTROL SYSTEM DESIGN

Engineering design is both an art and a science that together constitute one of the main functions of an engineer. It is a complex process in which both creativity and analysis play major roles. Design is the process of conceiving or inventing the forms, parts, and details of a system to achieve a specified purpose. Design activity can be thought of as planning for the emergence of a particular product or system. Design is an innovative act whereby the engineer creatively uses knowledge and materials to specify the shape, function, and material content of a system. The steps can be summarised as follows:

• Determination of the need for a product or system
• Development of the specifications of the solution to the need
• Development and evaluation of various alternative solutions that meet these specifications
• Determination of the best solution to be designed in detail and fabricated
• Verification and validation of the design

Any design process starts with the establishment of specifications. Specifications are statements that explicitly state the desired characteristics of a device or product and clearly outline its functions. The design of technical systems aims to achieve appropriate design specifications and rests on four characteristics: complexity, trade-offs, design gaps, and risk. Complexity of design results from the wide range of tools, issues, and knowledge used in the process. The large number of factors to be considered illustrates the complexity of the design specification activity, not only in assigning these factors their relative importance in a particular design, but also in giving them substance, either in numerical or written form, or both.

The design of control systems is a specific example of engineering design. The goal of control engineering design is to obtain the configuration, specifications, and identification of the key parameters of a proposed system to meet an actual need. The first step in the design process is to establish the system goals. For example, the goal may be to control the velocity of a motor accurately. The second step is to identify the variables that are to be controlled, for example, the velocity of the motor. The third step is to write the specifications in terms of the accuracy that must be attained. This required accuracy of control then leads to the identification of a sensor to measure the controlled variable. The strategy is then to make a first attempt at configuring a system that will result in the desired control performance. This system configuration normally consists of a sensor, the process under control, an actuator, and a controller. The next step is to identify a candidate for the actuator.
This depends on the process, but the actuator chosen must be capable of effectively adjusting the performance of the process. For example, if the objective is to control the speed of a rotating flywheel, a motor is selected as the actuator. The sensor, in this case, needs to be capable of accurately measuring the speed. The next step is to obtain models for the actuator and sensor. This is followed by the design and selection of a controller, which often consists of a summing amplifier that compares the desired response with the actual response and then forwards this error-measurement signal to an amplifier. The final step in the design process is the adjustment of the parameters of the system to achieve the desired performance. The control system design process can be summarised as follows:

• Establishment of control goals
• Identification of the variables to be controlled
• Development of the specifications for the variables
• Establishment of the system configuration and identification of the actuator
• Design of the controller and selection of parameters to be adjusted
• Parameter optimisation and analysis of performance
• Verification and validation of the design

If the performance does not meet the specifications, the configuration and the actuator are iterated. If the performance meets the specifications, the design is finalised, the parameters are adjusted accordingly, and the results are documented. If the performance still does not meet the specifications after iteration, there is a need to establish an improved system configuration and perhaps select an enhanced actuator and sensor. The design process is then repeated until the specifications are met, or until a decision is made that the specifications are too demanding and should be relaxed. As an illustration, the performance specifications describing how a closed-loop system should perform will include:

• Effective disturbance rejection
• Fast and desirable system response
• Realistic actuator signals
• Low sensitivity to parameter variations
• Robustness

The control design process has been dramatically affected by the advent of powerful and inexpensive computers that characterise the information age, and by the availability of effective control design and analysis software. For example, the Boeing 777, which incorporates the most advanced flight avionics of any US commercial aircraft, was almost entirely computer-designed. Verification of final designs is essential in safety- and security-critical systems (such as nuclear plants, ambulance systems, and surgery/emergency room systems), and high-fidelity computer simulations are central to this verification, as in advanced flight avionics. In many applications, the certification of the control system in realistic simulations represents a significant cost in money and time. The Boeing 777 test pilots flew about 2,400 flights in high-fidelity simulations before the first aircraft was even built. Thus, system verification and validation constitute essential aspects of the design and analysis of control systems.

1.8 ADVANCED APPLICATIONS OF CONTROL SYSTEMS

Control systems have advanced applications in the general areas of large-scale systems, multisensor systems, space structures, manufacturing, and flexible structures. Of particular interest are the specific fields of robotics, mechatronics, MEMS, and aerospace. In 2023, with advances in Artificial Intelligence and related fields, there is a broad range of products and systems driven by sophisticated and innovative control methods. These include drones, driverless cars, autonomous space vehicles, intelligent manufacturing plants, and advanced robots. In the following subsections, several examples are discussed.

1.8.1 AUTONOMOUS SPACEX ROCKET LANDING

The company Space Exploration Technologies Corporation (SpaceX) is a US aerospace manufacturer and space transportation services provider. It was set up in 2002 by Elon Musk to revolutionise space technology by reducing space transportation costs, with the goal of enabling the colonisation of Mars. SpaceX is involved in the design, manufacture, and launching of advanced rockets and spacecraft. Since 2002, it has developed several launch vehicles, the Starlink satellite constellation, and the Dragon cargo spacecraft. It has flown humans to the International Space Station on the Crew Dragon Demo-2 mission. NASA astronauts Bob Behnken and Doug Hurley became the first people to launch into space on a SpaceX-built capsule and rocket. According to them: 'The ride experience was smoother than on the NASA space shuttle.'

SpaceX has had many outstanding achievements. It developed the first privately funded liquid-propellant rocket to reach orbit (Falcon 1 in 2008). It was the first private company to successfully launch, orbit, and recover a spacecraft (Dragon in 2010), and the first private entity to send a spacecraft to the International Space Station (Dragon in 2012). SpaceX conducted the first vertical take-off and vertical propulsive landing of an orbital rocket (Falcon 9 in 2015). Furthermore, it was the first to reuse an orbital rocket, and the first private organisation to send an object into orbit around the Sun (the Falcon Heavy's payload of a Tesla Roadster in 2018). It was also the first private entity to send astronauts to orbit and to the International Space Station (the SpaceX Crew Dragon Demo-2 mission in 2020). Under a partnership with NASA, SpaceX has flown 20 cargo resupply missions to the International Space Station (ISS), as well as an uncrewed demonstration flight of the human-rated Dragon 2 spacecraft (Crew Demo-1 on 2 March 2019) and the first crewed Dragon 2 flight (on 30 May 2020).
The reusable launch system championed by SpaceX is a development programme, privately driven and funded, to create a set of new technological innovations for an orbital launch system that may be reused many times, as is the case with aircraft. The company has been developing technologies to facilitate the rapid and full reusability of space launch vehicles. The long-term objectives include returning a launch vehicle first stage to the launch site within minutes and, after that, returning a second stage to the launch pad following orbital realignment with the launch site and atmospheric re-entry, in up to 24 hours. SpaceX's long-term goal is that both stages of its orbital launch vehicle will be designed to allow reuse a few hours after return.

Figure 1.14 SpaceX Falcon 9 rocket. (Photo courtesy of SpaceX.)

On 5 August 2019, a Falcon 9 rocket precisely landed on a SpaceX autonomous spaceport drone ship (ASDS). An ASDS is an ocean-going vessel derived from a deck barge, outfitted with station-keeping engines and a large landing platform, and controlled by an autonomous robot. Construction of such ships was commissioned by SpaceX to allow the recovery of rocket first stages at sea for missions that do not carry enough fuel to return to the launch site after boosting spacecraft onto an orbital or transplanetary trajectory. Figure 1.14 shows the SpaceX Falcon 9 rocket, and Figure 1.15 displays its autonomous landing on the ASDS. The principles of the control system required for the precise landing of the rocket on the mobile floating platform are similar to those deployed in designing a control system for an inverted pendulum.
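The inverted-pendulum analogy can be sketched numerically. In this illustration (all gains and parameters hypothetical), the linearised pendulum, with angular acceleration (g/l)·θ + u, is unstable on its own, and a proportional-derivative state feedback u = −kp·θ − kd·ω stabilises it:

```python
# A sketch of stabilising a linearised inverted pendulum with PD state
# feedback (hypothetical gains): without control the angle diverges; with
# control it is driven back to the upright position.

def simulate_pendulum(theta0=0.2, kp=30.0, kd=10.0, g_over_l=9.81,
                      dt=0.001, steps=5000):
    """Linearised pendulum: theta'' = (g/l)*theta + u, u = -kp*theta - kd*omega."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u = -kp * theta - kd * omega  # PD feedback control torque
        alpha = g_over_l * theta + u  # angular acceleration
        omega += dt * alpha
        theta += dt * omega
    return theta

print(abs(simulate_pendulum()) < 1e-3)               # True: angle driven to zero
print(abs(simulate_pendulum(kp=0.0, kd=0.0)) > 1.0)  # True: uncontrolled, it falls
```

A rocket balancing on its engine thrust poses the same qualitative problem, though the real landing controller must also handle nonlinear dynamics, actuator limits, and a moving platform.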

BOSTON DYNAMICS’ ATLAS ROBOT

The robot Atlas is a bipedal humanoid robot developed by the US engineering and robotics design company Boston Dynamics. This was done with funding and oversight from the US Defense Advanced Research Projects Agency (DARPA). Boston Dynamics was founded as a spin-off from the Massachusetts Institute of Technology in 1992. Its headquarters is in Waltham, Massachusetts. Boston Dynamics is now wholly owned by the Japanese conglomerate SoftBank Group. The company has excelled in the development of a series of dynamic highly mobile robots, such as Atlas, Handle, BigDog, and Spot. Spot has been made commercially available since 2019. It is the first commercially available robot from Boston Dynamics. The company has ambitions to commercialise other robots as well, including Handle. The 1.8m Atlas robot is designed for a variety of search and rescue tasks. It was unveiled on 11 July 2013. Atlas is the world’s most dynamic humanoid robot, and as a research platform, it is designed to push the limits of whole-body mobility robotics. Atlas’s advanced control system

An Introduction to Control Systems

17

Figure 1.15 Autonomous SpaceX rocket landing. (Photo courtesy of SpaceX.)

and state-of-the-art hardware give the robot the power and balance to demonstrate human-level agility. The latest footage from Boston Dynamics is, unsurprisingly, both impressive and terrifying. In recent years, Atlas has been able to navigate uneven terrain and even jump around a parkour course. This is unprecedented in robotics. The bipedal robot does a handstand, rolls around and even does a few jumping twists. All this is accomplished without losing its balance. The 2019 version of Atlas, shown in Figure 1.16, is designed to operate both outdoors and inside buildings. It is specialised for mobile manipulation and is very adept at walking over a wide range of terrain, including snow. It can do backflips and cartwheels. Atlas is electrically powered and hydraulically actuated while using sensors in its body and legs to balance. Furthermore, it uses LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, execute navigation, and manipulate objects, even when the objects are being moved. This version of Atlas is about 1.75 m tall, about a head shorter than the original Atlas. It weighs 182 kg. 1.8.3

HONDA’S ASIMO ROBOT

The Advanced Step in Innovative Mobility (ASIMO) robot is a human-like (humanoid) robot designed and built by Honda. The company started to develop humanoid robots in the 1980s, including several prototypes that preceded ASIMO. Honda’s goal was to create a walking robot. The prototype E0 was the first bipedal (two-legged) model produced as part of the Honda E series. It was an early experimental line of self-regulating, walking humanoid robot with wireless movements created between 1986 and 1993. Next was the Honda P series of robots which were developed from 1993 to 1997. The outstanding research made on the E- and P-series led to the creation of ASIMO with dramatic continuous improvements right up to 2018. The result was the most advanced version of ASIMO shown in Figure 1.17.

18

Design and Analysis of Control Systems

Figure 1.16 Boston dynamics’ Atlas robot. (Photo courtesy of Boston Dynamics.)

Figure 1.17 Honda’s ASIMO robot. (Photo courtesy of Honda.)

An Introduction to Control Systems

19

ASIMO stands 1.30 m tall and weighs 54 kg. Honda research established that the ideal height for a mobility assistant robot was between 1.20 m and the height of an average adult. This is ideal for operating switches and doorknobs. The robot is powered by a rechargeable 51.8 V lithium-ion battery with an operating time of one hour. It has a three-dimensional computer processor that was created by Honda. It is composed of a processor, a signal converter, memory and three stacked dies. The computer that controls ASIMO’s movement is located in the robot’s waist area and can be controlled by a tablet, wireless controller, or voice commands. ASIMO can recognise moving objects, postures, gestures, its surrounding environment, sounds and faces. This enables it to interact with human beings. It can detect the movements of multiple objects by using visual information captured by two camera ‘eyes’ in its head. It can also determine distance and direction. This feature enables ASIMO to face or follow a person when approached. It interprets voice commands and human gestures, enabling it to recognise when a handshake is offered or when a person waves or points. It then responds accordingly. ASIMO’s capability to distinguish between voices and other sounds enables it to identify its companions. It can respond to its name and recognises sounds associated with a collision or a falling object. This enables the robot to face a person when spoken to and look towards a sound. The robot responds to questions by nodding or providing a verbal answer in different languages and can recognise approximately 10 different faces and address them by name. ASIMO has multiple sensors to enable autonomous robot navigation, where the two cameras inside its head are used as visual sensors for obstacle detection. The lower portion of the torso has a ground sensor which comprises one infrared sensor and one laser sensor. 
The laser sensor is deployed to detect the ground surface, while the infrared sensor (with automatic shutter adjustment based on brightness) serves to detect pairs of floor markings thus confirming the navigable paths of the inbuilt planned map. Effectively, the pre-loaded map and the detection of floor markings help the robot to identify its present location and continuously adjust its position precisely. There are rear and front ultrasonic sensors to sense the obstacles. The front sensor is located at the lower portion of the torso together with the ground sensor. ASIMO is arguably one of the most advanced robots in the world, in the same league as Boston Dynamics’ Atlas and Hanson Robotics’s Sophia. 1.8.4

SOPHIA THE ROBOT

Sophia is a female social humanoid robot developed by Hong Kong-based company Hanson Robotics. She was first turned on on 14 February 2016 and made her first public appearance at the South by Southwest Festival in mid-March 2016 in Austin, Texas, United States. An advanced human-like robot, she personifies the dreams for the future of AI. As a unique combination of artistry, science, engineering, and control system design, Sophia is both a humanoid robot depicting the future of AI and robotics and a platform for advanced robotics and AI research. The nature of Sophia has captured the imagination of audiences globally. She is the world’s first robot to gain citizenship of a country, and the first robot Innovation Ambassador for the United Nations. With appearances on Good Morning Britain and the Tonight Show, in addition to speaking at many conferences around the world, Sophia is now a household name. She has been covered by media around the world and has participated in many high-profile interviews. In October 2017, Sophia ‘became’ a Saudi Arabian citizen. Sophia was accorded the United Nations Development Programme’s first-ever Innovation Champion, in November 2017. Hence, she is the first non-human to be given any United Nations title. Figures 1.18 and 1.19 show the front and back of Sophia, respectively. The cameras in Sophia’s eyes, together with computer algorithms, enable her to see. Thus, she can follow faces, sustain eye contact, and recognise individuals. She can process speech and have conversations by deploying a natural language subsystem. In January 2018, Sophia was upgraded with functional legs and the ability to walk. Sophia’s lifelike skin and her ability to emulate more than 60 facial expressions is more than impressive.

20

Design and Analysis of Control Systems

Figure 1.18 Sophia the robot. (Photo courtesy of Hanson Robotics.)

Figure 1.19 Sophia the robot. (Photo courtesy of Hanson Robotics.)

An Introduction to Control Systems

21

Conceptually, Sophia is similar to the computer programme ELIZA, which was one of the first efforts to simulate human conversation. The software is programmed to give pre-written responses to specific questions or phrases. These responses create the illusion that the robot can understand a conversation, including stock answers to questions like ‘Is the door open or shut?’ Hanson Robotics, 2017, announced ambitious plans to open Sophia to a cloud environment using a decentralised blockchain marketplace. Sophia would ultimately be a good fit for robotics applications in healthcare, education, customer service, and therapy. In 2019, Sophia displayed the capabilities and capacity to create drawings and portraits. As indicated earlier, she also provides a platform for advanced robotics and AI research. Of particular interest is understanding human-robot interactions and their potential applications in education, entertainment and service. She has been used for research in the Loving AI Project. This work seeks to understand how robots can adapt to users’ needs through intra- and inter-personal development. Sophia combines sophisticated work in expert systems, machine perception, conversational natural language processing, symbolic AI, neural networks, adaptive motor control and cognitive architecture, among others. The underlying AI components can be combined in different ways. Sophia’s responses can be unique to any given situation or interaction. She utilises advanced machine perception that allows her to recognise human faces, see emotional expressions, and recognise various hand gestures. She also has inverse kinematics algorithms and path planning for controlling her hands, gaze, and locomotion strategy. Her body performs dynamic stabilisation for adaptive walking over various terrains.

1.9

COMPUTER-AIDED DESIGN AND ANALYSIS

With the advent of the Fourth Industrial Revolution (4IR) characterised by the extensive use of high technology techniques, the computer now plays a significant role in automating the design and analysis of modern control systems. Previously, most of the control system tools and techniques were implemented through hand calculations and graphical aids. The results were not always accurate, reliable or consistent. Moreover, the process was both labour-intensive and slow. At some stage, cumbersome mainframe computers and inefficient personal computers were then used to design, analyse and simulate control systems. In the age of the 4IR, with the use of sophisticated (desk and hand-held) computers, AI, and intelligent algorithms and incredibly fast computing power, analysis, design, and simulation of control systems can be easily, efficaciously, and efficiently accomplished. Since the simulation of a design can be done rapidly, adjustments can be introduced easily, and the new design can be re-evaluated. Different scenarios can be explored so that results are optimised with respect to design specifications such as stability, sensitivity, and model accuracy. 1.9.1

OVERVIEW OF AUTOMATED COMPUTATIONAL TOOLS

The design and analysis of control systems require tools to enable and speed up the process. There are a number of such computer-based computational tools that can be used. MATLAB and its MATLAB Control System Toolbox are the most popular tools, followed by Simulink Control Design. There is also LabVIEW which is a graphical tool and an alternative programming method to MATLAB. Other tools include SPICE-Based Simulation Software, SimPowerSystems, StateFlow, and Mathcad. MATLAB Control System Toolbox expands traditional MATLAB to include control systemspecific commands. Furthermore, several MATLAB enhancements give added efficacy and functionality to MATLAB and its Control System Toolbox. These include Simulink, Linear-Time

22

Design and Analysis of Control Systems

Invariant (LTI) Viewer, SISO Design Tool and Symbolic Math Toolbox. Simulink is a graphical programming environment that facilitates modelling, analysis and simulation of dynamic systems through the use of a graphical user interface (GUI). The LTI Viewer is also a GUI platform that simplifies the analysis of linear, time-invariant systems. It is used to view and compare the response plots of SISO and MIMO systems, or several linear models at the same time. The LTI Viewer also permits measurements to be made directly from time and frequency response curves. The SISO Design Tool is a convenient and intuitive design and analysis tool, while the Symbolic Math Toolbox saves labour when making symbolic calculations required in control system design and analysis. Obviously, some of these advanced products may require the purchase of additional software. Most of these control system tools are available from MathWorks. 1.9.2

MATLAB

MATLAB is a high-level technical computing environment ideal for solving mathematical, scientific and engineering problems. It is a multi-paradigm programming language and numerical computing environment developed by MathWorks. The software allows matrix algebra, plotting of data and functions, algorithms development and execution, user interface creation, and integration with programs written in other languages. In engineering, it is an alternative method of solving control system problems, in place of manual manipulation of mathematical representations such as models, equations and diagrams. However, in this book, readers are encouraged to first solve problems in design and analysis of control systems by hand and then deploy MATLAB. This is essential so that understanding and mastery of the concepts are not undermined by automation of design, analysis and solution derivation. Consequently, many examples throughout the book are solved by hand, followed by the suggested use of MATLAB or other tools that automate the design and analysis of control systems. MATLAB and its associated module, the MATLAB Control System Toolbox, are used to solve, analyse, design and simulate control systems. Both systems are commercial software products available from MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098. Phone: (508) 647-7000. Email: [email protected]. URL: www.mathworks.com. It is recommended that users of the book install MATLAB and the Control System Toolbox. 1.9.3

MATLAB CONTROL SYSTEM TOOLBOX

The MATLAB Control System Toolbox provides algorithms and application software for systematic and structured design, analysis, tuning and simulation of linear time-invariant control systems. In using it, the dynamic system model can be specified in the Transfer Function, State-Space, ZeroPole-Gain or Frequency-Response form. Functions such as Step Response plot, Root Locus plot and Bode plot enable the analysis and visualisation of system behaviour in the time and frequency domains. Interactive control system techniques such as Bode Loop Shaping and the Root Locus method are used for tuning compensator parameters. The MATLAB Control System Toolbox tunes both SISO and MIMO compensators automatically, including all configurations of PID-type controllers (P, PI, PD and PID). It handles a broad range of compensators, including multiple tunable blocks spanning several feedback loops. One can tune gain-scheduled controllers and specify multiple tuning objectives, such as reference tracking, sensitivity, disturbance rejection and stability. Design validation can be achieved by verifying rise time, overshoot, settling time, gain and phase margins, and other design requirements. Thus, the MATLAB Control System Toolbox is a highly effective tool used in the design and analysis of control systems. It effectively enhances and expands MATLAB by including control

An Introduction to Control Systems

23

Figure 1.20 SISO design for SISO design task.

system specific commands and routines. The toolbox is the number one software of choice if the transfer function of the plant is known. In the newer version of MATLAB, there is the SISO Tool incorporated within the MATLAB Control System Toolbox. The plant transfer function can be exported to the SISO Tool to start controller optimisation based on that particular plant. In the MATLAB workspace, one can process the plant transfer function from the z domain to s domain or vice versa. The SISO tool windows appear when the SISO tool is loaded, and this is depicted in Figure 1.20. 1.9.3.1

SISO Design for SISO Design Task

The SISO Design for SISO Design Task is empty at the beginning. It is filled when a transfer function is imported from the workspace. There are four sections of the SISO Design for SISO Design Task: • Root locus editor for open-loop: This is in the upper left area, where one can tune a particular controller. • Bode editor for closed-loop: The bottom left windows are the closed-loop gain and phase response. One can see the response of the entire system here. • Open-loop bode editor for open loop: This window will display the open-loop response. • Control and estimation tools manager: In this window, one can choose the preferred control architecture. This is demonstrated in Figure 1.21.

24

Design and Analysis of Control Systems

Figure 1.21 Control and estimation tools manager.

1.9.4

SIMULINK CONTROL DESIGN

As already stated, Simulink is a MATLAB-based graphical programming environment which facilitates modelling, analysis and simulation of dynamic systems. A graphical block diagramming tool linked to a customisable set of block libraries constitutes its primary interface. Simulink Control Design, a toolbox of Simulink, enables the design and analysis of control systems modelled in Simulink. Arbitrary SISO and MIMO control architectures can be tuned automatically, including PID controllers. Furthermore, PID autotuning can be deployed to embedded software for automatically computing PID gains in real time. Effectively, this Simulink toolbox is a part of MATLAB whereby one can perform drag-and-drop model-based simulations. The designer can find operating points and compute exact linearisation of Simulink models at various operating conditions. Simulink Control Design provides tools that enable the computation of simulation-based frequency responses without model modification. The software uses a graphical user interface (GUI) to enable interaction with blocks that represent subsystems. Blocks can be positioned, resized, and labelled, while block parameters are specified. The user can interconnect blocks to form complete systems from which simulations can be run. Simulink offers functionality and convenience. However, before using Simulink, mastery of MATLAB is highly recommended. Both Simulink and MATLAB software products are required to use Simulink tools. Furthermore, to design, analyse, and tune PID controllers, the Simulink Control Design software is required. The MATLAB installation guide provides the minimum system hardware required to use this software. There are Simulink block libraries from which subsystems, sources or function generators, and sinks or scopes can be obtained. For representing linear,

An Introduction to Control Systems

25

nonlinear and discrete systems, subsystem blocks are available. Linear time-invariant objects can be generated if the Control System Toolbox is installed. In using Simulink, there are five key activities to be executed. 1.9.4.1

Simulink Access

The tool is accessed through the Simulink Library Browser by typing ‘simulink’ in the MATLAB Command Window. Alternatively, one can click on the Simulink Library button. MATLAB displays the Simulink Library Browser in response to these actions. Figure 1.22 depicts the browser. This is the window that opens up first when the Simulink environment is opened. An untitled window is created by clicking on the New model button on the toolbar of the Simulink Library Browser. The system can be built in this window. Clicking on the Open model button on the Simulink Library Browser toolbar opens existing models. It can be observed that the button is immediately to the right of the New model button. Existing models may also be opened from the File section of the Home tab on the MATLAB window. Figures 1.23 and 1.24 display examples of basic Simulink models. 1.9.4.2

Select Blocks

After accessing Simulink, the next stage is selecting blocks. Figure 1.22 shows the Simulink Library Browser. From this framework, all blocks can be accessed. It can be observed that the left-hand side of the browser shows major libraries, such as Simulink, together with underlying block libraries, such as Continuous. The underlying block libraries are on the right-hand side of Figure 1.22. To reveal a block library’s underlying blocks, the block library on the left-hand side is selected. Alternatively, one can double-click on the block library on the right-hand side.

Figure 1.22 Simulink library browser.

26

Design and Analysis of Control Systems

Figure 1.23 Example of a simulink model.

Figure 1.24 Basic simulink model.

1.9.4.3

Assemble and Label Subsystems

The required subsystems (blocks) are dragged to the model window from the browser. Also, one may access the blocks by double-clicking on the libraries. One can position, resize, and rename the blocks. Drag with the mouse to position the block. Resizing is accomplished with a click on the subsystem and dragging of the handles. Finally, to rename, click on the existing name, select the existing text, and type the new name.

An Introduction to Control Systems

1.9.4.4

27

Interconnect Subsystems and Label Signals

To do this, position the pointer on the small arrow on the side of a subsystem and press the mouse button. Next drag the resulting cross-hair pointer to the small arrow of the next subsystem, which leads to a line being drawn between the two subsystems. Blocks may also be interconnected by single-clicking the first block followed by single-clicking the second block while holding down the control key. One can move line segments by positioning the pointer on the line, pressing the mouse button, and dragging. 1.9.4.5

Choose Subsystem Parameters

This is achieved by double-clicking on a subsystem in the model window and typing in the desired parameters. Some explanations are provided in the block. The Simulink Control Design Tool is very popular with controls and systems engineers. If using the SISO tool is simple when the transfer function is known, using Simulink is much simpler and more elegant. Although Simulink is a model-based environment within MATLAB, when one buys only a MATLAB licence, this toolbox is not included. One needs to buy the Simulink Control Design Toolbox, separately. The Simulink Library Browser contains blocks, and when you click on these blocks, there are further sub-blocks inside that can be dragged to the new model and used in simulations. These blocks can also be obtained in the Libraries on the left side. With Simulink, tuning of PID controllers is easy, unlike with the SISO tool tuning, which uses the root locus approach, which is tedious. All that is required is moving the tuning bar. Once the desired response is obtained, tuning is accomplished. Pure and simple. Once the automatically update block parameters are selected, and the OK button is clicked, the model is updated automatically. The controller gain will also be changed simultaneously. To view these margins, one can drag a Gain and Phase Margin Plot from a Simulink Control Design Library under Linear Analysis Plots. Figure 1.25 shows an example of a Simulink-based control system design.

Figure 1.25 An example of simulink control design.

28

Design and Analysis of Control Systems

1.9.5

LABVIEW

Laboratory Virtual Instrument Engineering Workbench (LabVIEW) is a system-design platform and development environment for a visual programming language. It is a programming environment presented as an alternative to MATLAB. This graphical method produces front panels of virtual instruments that are pictorial reproductions of hardware instruments, such as waveform generators or oscilloscopes. Underlying the front panels are block diagrams. The blocks contain underlying code for the controls and indicators on the front panel. Thus, knowledge of coding is not required. Also, parameters can be quickly passed or viewed from the front panel. Ideally, the reader is encouraged to become acquainted with MATLAB before proceeding with LabVIEW. Familiarity with MATLAB can enhance the understanding of the relationship between textual (MATLAB) and graphical (LabVIEW) programming languages. This way, LabVIEW will be more useful to the reader or controls engineer. Its functionality and utility will be enhanced. LabVIEW is used effectively and extensively to analyse and design control systems, and simulate control systems. It supports Windows, Linux, macOS and cross-platforms, while MATLAB supports Windows, Linux and macOS. MATLAB mainly provides a mathematical and numerical computing environment, whereas LabVIEW is a system-design platform that allows data acquisition, test automation, instrument control and embedded system design. As already indicated, LabVIEW is a graphical programming environment that produces virtual instruments (VIs). A VI is a pictorial reproduction and display of a hardware device such as an oscilloscope or waveform generator on a computer screen. The VI can consist of various indicators and controls, which become inputs and outputs, respectively, to the computer programme under construction. An associated block of code that defines its operation underpins each control and indicator. 
The LabVIEW model thus consists of two windows: • Front panel, which is a replica of the hardware front interface showing the controls and indicators, and • Block diagram, which contains the underlying code for the controls and indicators on the Front Panel. Figure 1.26 illustrates a LabVIEW block diagram window, while Figure 1.27 depicts the LabVIEW Front Panel. Linked to the Front Panel window is a Controls Palette window which is depicted in Figure 1.28. It contains numerous icons representing both indicators and controls. To create that control or indicator, the icons can be dragged onto a Front Panel window. Hence, simultaneously, the associated code block is formed on the Block Diagram window. Alternatively, the Block Diagram can be formed first and then the Front Panel is created from the Block Diagram. Linked to the Block Diagram window is a Functions palette window that contains numerous icons representing a broad range of functions. To create that code block, icons can be dragged onto a Block Diagram window. As already indicated LabVIEW can be used as an alternative to or in conjunction with MATLAB to analyse, design, simulate, build, and deploy control systems. In addition to the general LabVIEW software, the LabVIEW Control Design and Simulation module is a pre-requisite. Some applications might require the installation of the MathScript RT Module. With LabVIEW, analysis and design are similar to writing MATLAB code, while the simulation is comparable to Simulink. In LabVIEW, design and analysis, as opposed to simulation, are handled from different subpalettes of the Functions window. The process of using LabVIEW for control systems analysis, design, and simulation starts with the window where one can select a New VI or open an existing VI from the File menu. Figure 1.29 shows an example of a LabVIEW block diagram.

An Introduction to Control Systems

Figure 1.26 LabVIEW block diagram window.

Figure 1.27 LabVIEW front panel.

29

30

Figure 1.28 LabVIEW controls palette.

Figure 1.29 Example of a LabVIEW block diagram.

Design and Analysis of Control Systems

An Introduction to Control Systems

31

Figure 1.30 Loss calculation using SimPowerSystems.

1.9.6

SPICE-BASED SIMULATION SOFTWARE

The Simulation Program with Integrated Circuit Emphasis (SPICE) tool is an advanced generalpurpose, open-source analogue electronic circuit simulator. It is an automated method used in the design of circuit boards and integrated circuits to ascertain circuit design integrity and to predict circuit behaviour. SPICE-Based Simulation Software is a SPICE-based simulator for control systems with extensive use of electronic circuits. It has a speed advantage with simulation involving analogue devices. Moreover, with this method, one does not need to derive the transfer function. However, one still needs to model the elements of the desired controller. Sometimes that model is already available on the internet, and this is a key advantage. This SPICE-Based Simulation Software tool also allows a trial-and-error approach until the right gain and phase margins are achieved. 1.9.7

SIMPOWERSYSTEMS

This automated tool is an extension of Simulink by incorporating tools for modelling and simulating electrical grid activities such as generation, transmission, distribution, and consumption of electrical power. Harmonic analysis, calculation of total harmonic distortion, load flow, and other key power system concepts are automated. SimPowerSystems is particularly useful where the control system involves power systems, power electronics and other analogue-related circuits. It is also a MATLAB

32

Design and Analysis of Control Systems

Toolbox under MATLAB software which (as explained earlier) is composed of many toolboxes that have specific functions. The SimPowerSystems tool will also run under the Simulink environment wherein the user can drag and drop blocks and form a circuit. Here, one can draw a circuit just as with SPICE- Based Simulation Software. 1.9.8

STATEFLOW

An emerging automated computational method for control system design is the Stateflow MATLAB toolbox. It is a control logic tool devised to model reactive systems through state machines and flow charts within a traditional Simulink model. Stateflow deploys a variant of the finite-state machine notation enabling the representation of hierarchy, parallelism and history within a statechart. It is used to design and simulate sophisticated control systems. Stateflow is clearly a more complicated approach than the previous methods. It can be used to design systems containing control, supervisory, mode logic, and AI algorithms. Effectively, it is an extension of the Simulink design environment to develop statecharts and flow diagrams. It can also provide mode logic, fault management, scheduling, and AI enhancements for control systems. 1.9.9

MATHCAD

Another tool that can be used in the design and analysis of control systems is a Mathcad template. Mathcad is computer software for the documentation, verification, validation and reuse of engineering calculations. It involves live editing of typeset mathematical notation, combined with its automatic computations. The use of the Mathcad template for the design and analysis of control systems is a tedious design approach. Moreover, suppose the user is starting the task; in that case, the tool requires advanced knowledge in control systems such as poles and zeros, system bandwidth, gain and phase margin derivations. It also requires control system formula derivation, in some applications. However, if the Mathcad template is already proven to work, one may only change some values on it, and the desired result is obtained.

1.10

CONTROL SYSTEMS IN THE 4IR

The Fourth Industrial Revolution (4IR) is introduced briefly, and the role of control systems within it is contextualised. After that, the innovations driving control systems in the 4IR are discussed. 1.10.1

THE FOURTH INDUSTRIAL REVOLUTION

The Fourth Industrial Revolution (4IR) is a global, all-embracing, technological transformation that will radically and fundamentally alter the way individuals, institutions, businesses, communities and countries operate, work, organise, relate and innovate; primarily because of the scale, scope, speed, and complexity of the changes. It is characterised by a range of new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even redefining the very meaning of humanity. Consequently, this revolution demands an interdisciplinary, comprehensive, holistic and integrated ecosystem approach involving all stakeholders across the globe such as multilateral institutions, national governments, the public sector, civil society, the private sector, academia and the media. The 4IR requires new and different skills such as critical thinking, how to think, problem-solving, structured thinking, blended learning, interdisciplinary approaches, ecosystem thinking, emotional intelligence (EQ), cultural intelligence (CQ), judgement, acquisition of key competencies, negotiation, cognitive capacity, flexibility, knowledge production, knowledge management, and knowledge ownership.

An Introduction to Control Systems

1.10.1.1 Key Drivers of the 4IR

The central pillars of the 4IR include the following innovative technologies: (a) Artificial Intelligence (AI), (b) Intelligent Algorithms (IA), (c) Augmented Reality (AR), (d) Human Augmentation (Human 2.0), (e) Big Data, (f) Internet of Things (IoT), (g) Internet of Everything (IoE), (h) Nanotechnology, (i) Autonomous Vehicles, Drones and Driverless Cars, (j) 3D Printing, (k) Biotechnology (the Human Genome Project), (l) Materials Science, (m) Renewable Energy, (n) Energy Storage, (o) Robotics and Mechatronics, (p) Quantum Computing, (q) Cloud Computing, (r) Blockchain Technology, (s) Artificial General Intelligence, (t) Chatbot systems such as ChatGPT, and (u) Virtual Classrooms, Laboratories, Libraries, and Teachers (augmentation of the educational experience). With the outbreak of the global COVID-19 pandemic in February 2020, these pillars of the 4IR became front and centre in the fight for survival by organisations, individuals and countries. The existential threat posed by the disease has accelerated the 4IR globally. COVID-19-era activities such as chatbots for service delivery, remote working, online learning, webinars, e-conferencing, vaccine development, pivoting, ventilator technology development, and PPE manufacturing and distribution are all dependent on tools of the 4IR.

1.10.2 CONTROL SYSTEM INNOVATIONS IN THE 4IR

Given the pivotal role of the design and analysis of control systems in industrial automation, the field has been a central pillar and driver of the Second, Third, and Fourth Industrial Revolutions. However, in the 4IR, unlike in the previous revolutions, the control systems are intelligent, flexible, and more efficacious. Intelligent control is a class of control methods that use various artificial intelligence computing techniques. These include neural networks, Bayesian probability, fuzzy logic, Machine Learning, Deep Learning, reinforcement learning, genetic algorithms, and quantum and evolutionary computation. By using these approaches, intelligent control systems are designed to leverage concepts from biological systems and human intelligence to solve control system problems. A few terms need some formal definitions:

• Artificial Intelligence (AI) or Machine Intelligence refers to intelligence programmed into and demonstrated by machines. This capability is in contrast to the natural intelligence manifested by humans and other animals.

• Artificial General Intelligence (AGI) is the intelligence of a machine that can learn, understand, and execute any intellectual task that a human being can. The idea is to have in a machine the ability to perform multiple complex tasks, as opposed to specific task competence.

• Machine Learning is the study of algorithms and statistical models that computer systems use to perform tasks effectively without using explicit instructions, relying instead on patterns and inference. Machine Learning is a subset of AI. Learning can be supervised, semi-supervised or unsupervised.

• Deep Learning is part of a broader family of Machine Learning methods. It is about learning data representations, as opposed to task-specific algorithms.

• Robotics refers to the design, construction and operation of machines that carry out activities normally ascribed to human beings, such as vision, path planning, navigation, and decision-making. The field is about the intelligent connection of perception to action.

• Mechatronics is the synergistic integration of mechanics, electronics and computer science to produce optimum products.

• Intelligent Algorithms refers to intelligent sets of rules (step-by-step procedures) for solving complex and sophisticated problems or tasks that would typically require a human expert. Intelligent Algorithms make systems smart.
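To make the Machine Learning definition concrete, here is a minimal, hypothetical Python sketch (illustrative only, not from the book): a single perceptron is never given the classification rule explicitly; it infers the rule from example patterns.

```python
import numpy as np

# Training examples for a rule we never code explicitly: the AND pattern
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)
b = 0.0
for _ in range(20):                        # perceptron learning rule
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        w += (yi - pred) * xi              # adjust weights from the error
        b += (yi - pred)

preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]
print(preds)   # the AND pattern, learned from examples: [0.0, 0.0, 0.0, 1.0]
```

The program is not told "output 1 only when both inputs are 1"; that behaviour emerges from patterns and inference, which is exactly the distinction the definition draws.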

Design and Analysis of Control Systems

1.10.3 NEURAL NETWORKS

Artificial neural networks (ANNs), or just neural networks, are computing systems inspired by the human brain's biological neural networks. Hence, an ANN consists of a collection of connected nodes called artificial neurons, which loosely model the neurons in a biological brain. A basic neural network is depicted in Figure 1.31, while an extended version is illustrated in Figure 1.32.
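As a companion to these figures, the following Python fragment (an illustrative sketch with arbitrarily chosen weights, not taken from the book) shows the arithmetic such a network performs: each artificial neuron forms a weighted sum of its inputs, adds a bias, and passes the result through an activation function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented weights for a 2-input, 2-hidden-neuron, 1-output network
W1 = np.array([[0.5, -0.4],
               [0.3,  0.8]])    # input -> hidden-layer weights
b1 = np.array([0.1, -0.2])
W2 = np.array([0.7, -0.6])      # hidden -> output weights
b2 = 0.05

def forward(x):
    """One forward pass: weighted sums plus biases, squashed by sigmoids."""
    h = sigmoid(W1 @ x + b1)    # hidden-layer activations
    return sigmoid(W2 @ h + b2) # single network output in (0, 1)

y = forward(np.array([1.0, 0.0]))
print(round(float(y), 3))   # 0.547
```

Training (for instance by backpropagation) would adjust W1, b1, W2 and b2 from data; the fixed numbers here only illustrate the structure.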

1.10.4 FUZZY LOGIC

Fuzzy logic is a form of many-valued logic in which the variables' truth values can be any real number between 0 and 1, inclusive. It is used to handle the idea of partial truth, where the truth value may range between completely true and completely false. Hence, fuzzy logic is a computing strategy based on degrees of truth rather than the usual true-or-false (1 or 0) Boolean logic on which the classic computer is modelled. An example of a fuzzy logic system is shown in Figure 1.33, while the general fuzzy logic architecture is shown in Figure 1.34. A typical fuzzy logic control system is then illustrated in Figure 1.35.

Figure 1.31 A basic neural network.

Figure 1.32 An extended neural network.

Figure 1.33 Fuzzy logic example.

Figure 1.34 Fuzzy logic architecture.
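The fuzzification, rule evaluation, and defuzzification stages of such an architecture can be sketched in a few lines of Python. All membership functions, rules, and numeric ranges below are invented for illustration (a simple temperature-to-fan-speed controller):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: degree of truth in [0, 1]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fan_speed(temp):
    """Tiny Mamdani-style inference: IF temp is cold THEN speed is low;
    IF temp is hot THEN speed is high (invented rule base)."""
    cold = tri(temp, -15.0, 5.0, 25.0)        # fuzzification of the input
    hot  = tri(temp, 15.0, 35.0, 55.0)
    speed = np.linspace(0.0, 100.0, 1001)     # output universe (% speed)
    low   = tri(speed, -50.0, 0.0, 50.0)
    high  = tri(speed, 50.0, 100.0, 150.0)
    # Rule implication (min) and aggregation (max)
    agg = np.maximum(np.minimum(cold, low), np.minimum(hot, high))
    return float(np.sum(agg * speed) / np.sum(agg))  # centroid defuzzification

print(fan_speed(18.0) < fan_speed(28.0))   # True: warmer air, faster fan
```

Note that at 18 degrees both rules fire partially (the temperature is somewhat cold and slightly hot at the same time); the output blends the two conclusions, which is precisely the "partial truth" idea described above.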

1.10.5 INTELLIGENT CONTROL SYSTEM STRUCTURE

Recent progress in the areas of artificial intelligence, Machine Learning, microprocessor technology, and actuator and sensor technology has provided a strong groundwork for the development and realisation of intelligent control system structures. Such a structure has broad applications in robotics, manufacturing, mechatronics, and aerospace. A generic intelligent control system structure has the following capabilities:


Figure 1.35 Fuzzy logic control system.

• A data acquisition process that recognises the present dynamic state of the structure, monitors and deduces critical structural properties, and evaluates the functional performance of the structure.

• An intelligent multisensor system that identifies functional descriptions of external and internal disturbances, detects changes in structural properties and in external and internal disturbances, and feeds this information into an intelligent controller.

• An intelligent model that predicts or infers possible future changes in structural properties and in external and internal disturbances, makes intelligent decisions regarding compensation for disturbances, and adequately generates actuation forces using feedback control.

• An intelligent controller that identifies potential future failures, identifies the cause of current failures and the failed components, remedies failures and/or informs the appropriate operators. It learns from past performance to improve future actions.

The above capabilities require the structure to be embedded with a closed-loop controller equipped with intelligent sensors, intelligent actuators, and an intelligent control algorithm. Figure 1.36 illustrates the closed-loop control of intelligent structures. As indicated previously, there are two main approaches related to intelligent control that are widely used: (i) artificial neural networks and (ii) fuzzy logic. As already explained, artificial neural networks are envisaged as a methodology that emulates the biology of the human brain, resulting in systems that learn by experience, while fuzzy logic is conceptualised as a means of processing imprecise and vague linguistic information. The techniques can be applied separately or jointly, depending on the nature and complexity of the intelligent control system being proffered. Illustrations of neural network and fuzzy logic architectures were given in Sections 1.10.3 and 1.10.4.
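In the spirit of the capabilities listed above, the following toy Python sketch (hypothetical plant and numbers, not from the book) shows a closed loop whose controller adjusts its own gain from measured performance, a crude stand-in for the learning behaviour described:

```python
# Minimal sketch: a proportional loop around a first-order plant whose
# controller "learns" by raising its gain while tracking error persists.

def simulate(adapt=True, steps=400, dt=0.05):
    x, k, target = 0.0, 0.5, 1.0
    for _ in range(steps):
        error = target - x                  # sensing: current deviation
        u = k * error                       # actuation: proportional effort
        x += dt * (-x + u)                  # first-order plant dynamics
        if adapt:
            k += 2.0 * dt * error * error   # adjust gain from past performance
    return abs(target - x)

print(simulate(adapt=True) < simulate(adapt=False))   # True
```

With a fixed gain the loop settles with a large offset from the target; the self-adjusting version steadily reduces that offset, illustrating (in miniature) a controller that "learns from past performance to improve future actions".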
An example of an intelligent closed-loop control system structure is given in Figure 1.36.

1.10.6 EXAMPLES OF INTELLIGENT CONTROL SYSTEMS

Here a few examples are presented to practically showcase the efficacy of intelligent control systems.

1. Figure 1.37 shows a block diagram of an intelligent control system for dynamic storage optimisation. It mimics the simple thinking process of a skilled grower, applied here to achieve optimal control of the rate of water loss. It involves both a decision system and a feedback control system. The decision system, which consists of neural networks and genetic algorithms, determines the optimal set-point trajectory of the temperature. The neural network first identifies the rate of water loss (as affected by temperature). Next, a simulation search finds the optimal combination of temperature step set points that minimises the objective function. It is thus an intelligent control system for realising optimal control of the storage process.

Figure 1.36 Intelligent closed-loop control system.

Figure 1.37 Intelligent control system: optimal control of storage.

2. Robotics, mechatronics, space vehicles, manufacturing plants, cars, and wheelchairs are among the beneficiaries of artificial intelligence, which makes control loops smarter, adaptive, and able to change behaviour. Clearly, AI can enable and automate control engineering. For example, AI tools can capacitate sensor systems by making them smart in terms of information acquisition and data fusion.

3. Consider a case study of a central aircraft system that can tolerate control surface failures using semantic control theory. In this application, the semantic control system uses system identification and the control mixer concept to exploit the analytical or control redundancy among the control surfaces and thus accommodate the failures. System identification is performed using artificial intelligence techniques. With this approach, unpredictable faults can also be accommodated, which is usually not achievable using conventional techniques.

4. This example concerns the application of artificial intelligence techniques to control system design. A self-organising system changes its basic structure as a function of its experience and/or environment. The general aim of a self-organising system is to evolve towards some desired output state or mode of behaviour, despite some degree of ignorance of process, inputs, or controls. Since its structure changes as a function of experience, the self-organising system can be said to learn. A controller that is also a self-organising system is called a self-organising controller. Such a controller contains three primary subsystems: (i) sensors, (ii) a learning network, and (iii) a goal circuit. The sensors (accelerometers, rate gyros and horizon scanners) observe the local environment and provide descriptive data to the learning network and the goal circuit. The learning network consists of decision elements that operate on data input from the sensors and render a desirable output response.

5. In the 4IR, intelligent control systems have rapidly replaced classical control methods to solve problems in many fields, in particular industrial engineering, aerospace engineering, manufacturing, and academic research. The advantages of intelligent control systems over classical methods can be demonstrated in various problems, such as greenhouse temperature control, quality control in a manufacturing plant, advanced robotics, driverless cars, and drones. This is mostly done using intelligent control methods powered by neural network and fuzzy logic techniques.
The simulation results are compared with those of classical methodologies such as open-loop controllers in general and PID-type controllers specifically. In all these examples, the results show that intelligent control methods give better overall performance, with better accuracy and smaller parameter variance. Furthermore, neural networks have the benefit of independence from models: they do not require extensive knowledge of the problem and its environment. However, classical methods are easier to conceptualise and implement; they do not require a complex selection of training data or sophisticated algorithms. Intelligent control systems are exceptionally advantageous in highly nonlinear problems, where linearisation followed by the use of classical control produces suboptimal results. The same applies to cases where a model of the system is difficult or impossible to obtain. For example, the field of robotics (which includes humanoids, driverless cars, unmanned space vehicles, and drones) ideally lends itself to intelligent control because the systems involved are difficult, and sometimes impossible, to model completely. Because of the non-deterministic nature of robotics and the accompanying prevalence of extensive uncertainty, there is inevitably always a need to deploy some form of intelligent control.
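A toy numerical illustration of this point (all models and numbers invented, in Python): on a plant whose effective gain falls as the state grows, a fixed linear control law settles far from the target, while a controller that reschedules its gain with the operating point, a crude stand-in for an intelligent scheme, does markedly better.

```python
def run(controller, steps=600, dt=0.01):
    """Track target = 2 on a nonlinear plant whose effective input gain
    drops as x grows: dx/dt = -x + u / (1 + x^2). Returns final |error|."""
    x, target = 0.0, 2.0
    for _ in range(steps):
        e = target - x
        u = controller(e, x)
        x += dt * (-x + u / (1.0 + x * x))
    return abs(target - x)

fixed = lambda e, x: 4.0 * e                      # classical fixed-gain law
scheduled = lambda e, x: 4.0 * e * (1.0 + x * x)  # gain rescheduled with state
print(run(scheduled) < run(fixed))                # True
```

The scheduled law here cheats by knowing the nonlinearity exactly; a neural or fuzzy controller would instead learn or encode an approximation of it, but the comparison conveys why fixed linear designs struggle away from their design point.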

1.10.7 CHALLENGES OF INTELLIGENT CONTROL SYSTEMS

There are several drawbacks of intelligent control systems that must be mitigated in the process of adopting such systems. Some of them have already been flagged in the preceding discussions. Here a broad summary is outlined.

1.10.7.1 Instability

Sometimes the intelligence of control systems makes them vulnerable to stability problems. The failure of the automated control system of the Boeing 737 MAX 8, which led to its crash on 10 March 2019, is a dramatic illustration. Although the plane's in-flight control system was smart and automated, it was not stable. The instructive lesson is that there is a need to creatively incorporate, reinforce and embed the techniques of robust control and stability analysis as part of the intelligent control system infrastructure.

1.10.7.2 Complexity, Cost, and Human Capital

Implementing an intelligent control system and building its corresponding architecture entail major financial and human capital investment, given the system's complexity. Furthermore, there are additional expenses for the repair and maintenance of such a control system. Classical control systems involve less complexity, cost, and human capital outlay.

1.10.7.3 Static Performance

While it delivers excellent performance, a standard intelligent control system cannot improve that performance with experience or time. It cannot automatically alter its behaviour or responses when there are dramatic changes in the environment, conditions, or circumstances. The intelligent control system would need reassessment, retraining and reconstruction. All this contrasts with a human-centred classical control system, whose performance improves with time (as the human in the loop gains experience) and which can adapt to changes. In fact, over time, a complex, intelligent control architecture becomes a victim of wear and tear. In the special case where the control system incorporates learning algorithms, together with a self-assessment module, a self-organising module and self-improvement capability, its performance can improve with experience. Obviously, for such an elaborately sophisticated system, the cost, complexity, and intellectual capital requirements become astronomical.

1.10.7.4 Lack of Intrinsic Creativity

AI-based systems can assist in the design, creation, management, and control of a wide range of sophisticated engineering systems and solutions. However, on their own, these intelligent systems cannot create or reproduce themselves. An intelligent control system cannot demonstrate human intentionality and initiative. It can show neither human emotion nor sensitivity. Indeed, as part of the pursuit of Artificial General Intelligence (AGI), there is research towards the achievement of intentionality and human emotions in machines. This is a work in progress. We are not there yet!

1.10.7.5 Implications for Jobs and Livelihoods

In the Second and Third Industrial Revolutions, the control system was a key part of automation, i.e., the use of technology in a manufacturing plant or in industrial equipment (such as PLCs) in place of manual operations. In the 4IR, with the advent of intelligent control techniques, automation is now pervasive and ubiquitous across all economic sectors. In a recent study published by the McKinsey Global Institute, there is a prediction that intelligent agents and robots could replace about 30% of the world's current human labour by the year 2030. The report further states that 'Automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely.' Clearly, the adoption of intelligent control systems has a potentially devastating impact on jobs and livelihoods. The future of work cannot be assessed with a business-as-usual attitude. However, there is a silver lining, a method in the madness of automation. Designing, developing and building automation systems and equipment can be used to create new industries. The maintenance and service of the new technology will produce new jobs. Of course, reskilling, upskilling and retooling will be critical, in addition to the development of completely new skills, competencies and capabilities. Automation can also be used to address dirty, difficult and dangerous jobs such as arc-welding, spray-painting, sewer management, underground mining, space-shuttle control, and the management of teams of drones or driverless cars.

1.10.7.6 Conclusion

In the preceding section, some of the major challenges and potential drawbacks of intelligent control systems have been articulated. These problems are not insurmountable. They can be mitigated. The benefits of intelligent control systems outweigh the potential pitfalls.

1.11 BOOK OUTLINE

This book is organised into ten chapters. Chapter 1 introduces the concepts of control systems, discusses examples of control systems, and thus provides the motivation for the material covered in the book.

To control a plant (or process), a mathematical model (description) of the plant is required. Chapter 2 develops different techniques of obtaining and expressing mathematical descriptions for various physical systems, which include mechanical, electrical, electromechanical, thermal, chemical, and hydraulic systems. A system is any collection of interacting elements for which there are cause-and-effect relationships among the variables. The activity of mathematically capturing the behaviour of physical systems is called system modelling. This chapter concentrates on dynamic systems, that is, systems whose variables are time-dependent. In most of the cases considered, not only will the excitations and responses vary with time, but at any instant the derivatives of one or more variables will depend on the values of the system variables at that instant. Four forms of dynamic system models are presented: the State-Variable Matrix model, the input-output differential equation form, the transfer function form, and the block diagram form. Methods for approximating a nonlinear system by a linear time-invariant model are developed. For time-varying or nonlinear systems that cannot be approximated by a linear and time-invariant model, computer solutions are employed. Derivation of models, conversion between models, and analysis of models using MATLAB are presented.

Chapter 3 develops techniques for finding system responses for the dynamic systems modelled in Chapter 2. This activity is also called solving the model and involves using the mathematical model to determine certain features of the system's cause-and-effect relationships. To design a control system, it is essential that the behaviour of the plant (or process) is analysed and understood.
Three main mathematical approaches are used to obtain the system response: direct solution of differential equations in the time domain, the use of the Laplace transform to solve differential equations in the frequency domain, and the deduction of system behaviour from the system transfer function. Computer generation and analysis of system responses using MATLAB are presented. The use of numerical methods, experimental time response data, and frequency-response data, instead of analytical methods, to determine the system response in circumstances where this is more feasible is also explored.

Chapter 4 introduces, develops, and analyses the principles of feedback control systems and illustrates their characteristics and advantages. In several applications, there is a need for automatic regulation and tracking. Quantities such as pressure, temperature, velocity, thickness, torque, and acceleration have to be maintained at desired levels. Feedback control is a convenient way in which these tasks can be accomplished. Two case studies, the cruise control system and the DC motor (both position and speed) control system, are used as illustrative running examples throughout this chapter. The different types of controllers, Proportional (P), Proportional and Integral (PI), Proportional and Derivative (PD), and Proportional and Integral and Derivative (PID), are discussed, together with their advantages and disadvantages. The concepts of system errors, tracking, disturbance rejection, and system type are covered. The notions of sensitivity, bounded input-bounded output (BIBO) stability, asymptotic internal stability, and Routh-Hurwitz stability are discussed and illustrated using examples.

Chapter 5 deals with the root locus design techniques, explains the procedure of creating root loci, and outlines their uses.
Definitions of the necessary terms are provided, including a step-by-step guide to constructing a root locus, and details of how to design and evaluate controllers using the root locus method. Given a feedback control system, the root locus illustrates how the poles of the closed-loop system vary with system parameters, in particular the closed-loop gain. The root locus is a powerful graphical method for the analysis and design of control systems. Although this method is commonly used to study the effect of control gain variations, it can also be used to plot the roots of any polynomial expressed in the Evans root locus form. Most control systems work by regulating the system they are controlling around a desired operating point. The root locus method helps the designer of a control system to understand the stability and robustness properties of the controller at an operating point. Material presented in this chapter enables the reader to create a root locus and use it to understand the closed-loop system behaviour for any given open-loop system and feedback controller. Case studies and examples that illustrate how to use the root locus in designing a control system are presented.

Chapter 6 investigates the steady-state response of a dynamic system to sinusoidal inputs as the frequency varies. The design of feedback control systems in industry is accomplished by using frequency-response methods more often than any other method. Frequency-response design is popular primarily because it provides good designs in the face of uncertainty in the plant model. For example, for systems with poorly known or changing high-frequency resonances, the feedback compensation can be modified to alleviate the effects of those uncertainties. This modification is carried out more easily using frequency-response design than any other method. Another advantage of frequency response is the ease with which experimental information can be used for design purposes. The frequency-response design methods discussed in this chapter offer practical and important alternative approaches to the analysis and design of control systems.
The main technique covered involves Bode plots. The material presented empowers the reader with skills to hand-draw these plots for a broad range of systems. Time-domain performance measures are developed in terms of the frequency response and then applied in system performance evaluation. These measures include gain margin, phase margin, and relative stability. The principles of compensation, lead and lag, are introduced and their applications discussed.

Chapter 7 discusses state-space methods of analysis and design. The theoretical basics of the methods are discussed and followed by the actual methods. State-space methods are analysis and design methods that use state variables, i.e., the analysis and the design are carried out in the state space. State-space methods are somewhat simpler because they deal directly with the system states, which are governed by first-order differential equations. Another advantage of these methods is their ability to handle multi-input, multi-output (MIMO) systems. This chapter develops and demonstrates these advantages. The concepts of similarity transformations, observability and controllability, transfer function decomposition, and full state feedback control are discussed and illustrated using examples. Brief introductions to optimal control and estimator design are provided.

Chapter 8 addresses the issues involved in the design, analysis, and implementation of digital controllers. The rationale for digitisation and using digital control is presented. The objective is to design and implement digital controllers such that the digitisation and discretisation effects of continuous-time analogue signals are either eliminated or minimised. First, the general characteristics of sampled data systems are introduced, and then an analysis of discrete-time systems is presented, including stability, root locus and frequency-response analysis. The design and implementation of discrete-time controllers are then discussed in detail.
In particular, the discrete proportional and integral and derivative (PID) controller is developed and appraised. An overview of hardware, software, and system integration issues is outlined.

In Chapter 9, advanced topics and issues involved in the design and analysis of control systems are addressed. In particular, the subjects of discrete-time estimation (both state space and information space), optimal stochastic control, and nonlinear control systems are presented. Adaptive control systems and robust control are briefly introduced. Multisensor fusion is the process by which information from a multitude of sensors is combined to yield a coherent description of the system under observation. General recursive estimation is presented and, in particular, the Kalman filter is discussed. A Bayesian approach to probabilistic information fusion is outlined, and the notion and measures of information are defined. This leads to the derivation of the algebraic equivalent of the Kalman filter, the (linear) Information filter. State estimation for systems with nonlinearities is considered, and the extended Kalman filter is treated. Linear information space is then extended to nonlinear information space by deriving the extended Information filter. The estimation techniques are then extended to LQG stochastic control problems, including systems involving nonlinearities, that is, nonlinear stochastic control systems. This chapter also introduces the issues and concepts involved in the analysis and design of control systems for nonlinear dynamic systems.

In Chapter 10, details of the latest control system techniques are presented, followed by a discussion of AI-based approaches to control systems. A detailed treatment of 4IR control system applications is then outlined. The modelling and control of a drone and an autonomous vehicle are then presented.

Appendix A contains summaries of the properties of Laplace and Z-transforms, including tables of key transforms. In Appendix B, the basic principles and syntax of MATLAB are introduced and illustrated. Sample laboratory exercises are also provided in Appendix B for most of the chapters.

PROBLEMS

Problem 1.1 The student-teacher learning activity is inherently a feedback exercise intended to reduce the system error to a minimum. The desired output is the knowledge being studied, and the student is the process. Construct a feedback control system block diagram and identify the control blocks (sensor, actuator, process, controller, actual output, and desired output) of the system.

Problem 1.2 An engineering organisational system is composed of major groups such as management, research and development, preliminary design, experiments, product design and drafting, fabrication and assembling, and testing. These groups are interconnected to make up the whole operation. The system can be analysed by reducing it to the most elementary set of components necessary to provide the analytical detail required, and by representing the dynamic characteristics of each component by a set of simple equations. The dynamic performance of such a system can be determined from the relationship between progressive accomplishment and time. Draw a functional block diagram showing an engineering organisational system.

Problem 1.3 Many closed-loop and open-loop control systems can be found in homes. (a) List six such examples (three open-loop and three closed-loop). (b) Construct feedback control system block diagrams for the six examples, and identify the control blocks (sensor, actuator, process, controller, actual output, and desired output).

Problem 1.4 Give two examples of feedback control systems in which a human acts as a controller.

Problem 1.5 Figure 1.10 shows an automatic water level control system. (a) Explain how the automatic control system works, indicating the roles of key components. (b) If the automatic water level control system is replaced by a manual control system, draw a diagram representing this new arrangement, and explain how it will work. (c) State four advantages of closed-loop control systems over open-loop ones. (d) Explain how this control system works. (e) If the automatic controller is replaced by manual control, explain how the new system will function. (f) Construct a feedback control system block diagram and identify the control blocks (sensor, actuator, process, controller, actual output, and desired output) of the new system.


Problem 1.6 The diagram depicted in Figure 1.38 is a closed-loop temperature control system. (a) Explain how this control system works. (b) If the automatic controller is replaced by manual control, explain how the new system will function. (c) Construct a feedback control system block diagram and identify the control blocks (sensor, actuator, process, controller, actual output, and desired output) of the new system.

Problem 1.7 List the major advantages and disadvantages of closed-loop control systems with respect to open-loop systems.

Problem 1.8 The diagram depicted in Figure 1.39 is an automatic closed-loop system for paper moisture level control.

Problem 1.9 The principles of feedback and control are not restricted to engineering and biological systems. There are many applications and occurrences of feedback control concepts in the socio-economic and political arena. Society is composed of many feedback systems and regulatory bodies that act as controllers, exerting the forces on society necessary to maintain a desired output. Explain how a simplified model of the national income feedback control system can be represented as shown in Figure 1.12 [10].

Figure 1.38 Room temperature control using a thermostat.

Figure 1.39 A closed-loop control system for paper moisture.

2 Modelling of Dynamic Systems

2.1 INTRODUCTION

To control a plant (or process), a mathematical model of the plant is required. This chapter develops different techniques of obtaining and expressing such mathematical descriptions for various physical systems, which include mechanical, electrical, electromechanical, thermal, chemical, and hydraulic systems. More advanced systems such as mechatronic and robotic systems are also considered. A system is any collection of interacting elements for which there are cause-and-effect relationships among the variables. This definition is necessarily general, in order to encompass a broad range of systems. The most important feature of the definition is that it indicates that the interactions among the variables must be taken into account in system modelling and analysis, rather than treating individual elements separately. A system can also be understood as a set of physical components connected so as to form a whole entity that has properties that are not present in the separate components. The activity of capturing in mathematical terms the behaviour of physical systems is called system modelling. This chapter concentrates on dynamic systems, that is, systems whose variables are time-dependent. In most of the cases considered, not only will the excitations and responses vary with time, but at any instant the derivatives of one or more variables will depend on the values of the system variables at that instant. The basis for constructing a model of a system is rooted in physical laws such as the conservation of energy, Newton's laws of motion and Kirchhoff's laws. These are the laws that the system elements and their interconnections are known to obey. The type of model sought will depend on both the objective of the engineer and the tools for analysis. If a pencil-and-paper analysis with parameters expressed in literal rather than numerical form is to be performed, a relatively simple model will be needed.
To achieve this simplicity, the engineer should be prepared to neglect elements that do not play a dominant role in the system. On the other hand, if a computer is available for carrying out simulations of specific cases with parameters expressed in numerical form, a comprehensive mathematical model that includes descriptions of both primary and secondary effects might be appropriate. The modelling is restricted to lumped, continuous, non-quantized systems that can be described by sets of ordinary differential equations, because well-developed analytical techniques are available for solving linear ordinary differential equations with constant coefficients. Most of the examples will involve systems that are both linear and time-invariant. A method for approximating a nonlinear system by a linear time-invariant model will be developed. For time-varying or nonlinear systems that cannot be approximated by a linear and time-invariant model, one can resort to computer solutions. Four forms of dynamic system models are presented: the State-Variable Matrix model, the Input-Output (differential equation) model, the Transfer Function model and the Block Diagram model.

2.1.1 CHAPTER OBJECTIVES

Upon completion of this chapter, the reader should be able to carry out the following tasks for a wide variety of dynamic systems such as mechanical (translational and rotational), electrical, thermal, chemical, and hydraulic systems:

• From the mathematical description of the system, construct a simplified version using idealised elements and define a suitable set of variables.
• Use the appropriate element and interconnection laws to obtain a mathematical model generally consisting of ordinary differential equations (the Input-Output model).


DOI: 10.1201/9781032718880-2


• Introduce the Laplace transform and use it to determine the Transfer Function model.
• Arrange the differential equations that describe the dynamic system into readily usable forms, such as the Input-Output (differential equation) model, State-Variable Matrix model, Transfer Function model and the Block Diagram model.
• Compare and contrast the four key dynamic system models and be able to convert from one form to the other.
• Use the Block Diagram model and Signal Flow Graph to determine the Transfer Function model.
• If the model is nonlinear, determine the equilibrium conditions and, where appropriate, obtain a linearised model in terms of incremental variables.
• Obtain system models from experimental data such as frequency response, transient response and stochastic steady-state data.

2.2 DYNAMIC SYSTEMS

A dynamic system is a system whose variables are time-dependent, which means the inputs and outputs vary with time, and the derivatives of one or more variables will depend on the values of the system variables. There is a broad range of dynamic systems from different engineering disciplines. Such dynamic systems include the following:

• Mechanical (translational and rotational)
• Electrical
• Electromechanical
• Mechatronic
• Robotic
• Thermal and fluid
• Hydraulic and pneumatic
• Chemical and processing
• Civil and structural
• Manufacturing

Although these dynamic systems come from a wide spectrum of engineering fields, their modelling follows the same principles. What is specific to each type are the dynamic elements, the element laws and interconnection laws. Also, certain forms of the mathematical models are more useful in certain fields than others.

2.3 DYNAMIC SYSTEM MODELS

The activity of capturing in mathematical terms the behaviour of physical systems is called dynamic system modelling. The basis for constructing a system model consists of the physical laws, such as the conservation of energy, Newton’s law and Kirchhoff’s laws, which the system elements and their interconnections are known to obey. A dynamic system model is defined as the mathematical representation of the behaviour of a dynamic system. The process of its construction is summarised in Figure 2.1, where the starting point in deriving such models is understanding the physics of the plant (the system being modelled). This means understanding what the dynamic elements are, the laws that govern them, how they affect each other and how the dynamic behaviour is physically accomplished. Once this is mastered, a simplified diagrammatic representation of the system is constructed using idealised elements. In this way, a complex dynamic system can be represented by a simplified (albeit approximate) mathematical description. The representation is further broken into free-body diagrams or circuits to which element laws and interconnection laws


Figure 2.1 Dynamic system modelling.

are applied. The differential equations are then established using dynamic system laws such as Newton's laws (balance of forces or torques), Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL) on the free-body diagrams and circuits. As shown in Figure 2.1, once the dynamic system model is established it can be used to determine the system response - the subject matter of Chapter 3.

2.3.1 MODELLING CONCEPTS

The issues involved in developing dynamic system models can be grouped as follows:

• Elements
• Element laws
• Interconnection laws
• Inputs and outputs
• State variables
• Free-body diagrams
• System model

2.3.2 SUMMARY OF MODEL DERIVATION PROCEDURE

The procedure of deriving models is similar for the different dynamic systems, and can be summarised as follows:

• Understand the physics and interaction of the elements of the dynamic system.
• Construct a simplified diagrammatic representation of the system using idealised elements.
• Apply element laws and interconnection laws.
• Draw the free-body or circuit diagrams.
• Identify or define the state variables, inputs, and outputs.
• Establish the system equations by using dynamic system laws such as D'Alembert's law, KCL and KVL.
• Obtain the desired form of the system model by manipulating the equations.
• If the model is nonlinear, determine the equilibrium conditions and obtain a linearised model.

2.3.3 DIFFERENT DYNAMIC SYSTEM MODELS

There are several ways of expressing the differential equations that describe the dynamic behaviour of a system. Put differently, the mathematical representation of a dynamic system can take different forms. The form of the dynamic system model employed depends on both the objective of the modelling and the available tools for analysis. Also, certain forms are more useful or practical for particular types of dynamic systems than others. There are four general forms of dynamic system models discussed in this book:

• State-Variable Matrix Form
• Input-Output Differential Equation Form
• Transfer Function Form
• Block Diagram Form

Before these four models are developed and explored in detail, a review of the nature and primary elements of different dynamic systems must be carried out.

2.4 OVERVIEW OF DIFFERENT DYNAMIC SYSTEMS

In this section, an overview of different dynamic systems is carried out. The motivation is to identify the physical elements involved, the element laws, the interconnection laws, and the typical state variables in these systems. The following types of systems are outlined and reviewed: translational mechanical, rotational mechanical, electrical, electromechanical, hydraulic (fluid), and thermal systems.

2.4.1 TRANSLATIONAL MECHANICAL SYSTEMS

In this section, the variables, element laws, and interconnection laws for linear (translational mechanical) dynamic systems are presented. Translational systems are systems that can have only horizontal and/or vertical motion. For masses that can move vertically, the gravitational forces must be considered. Either force or displacement inputs can be applied to any part of the system. An applied force is a known function of time, but the motion of the body to which it is applied is not


known at the beginning of a problem. Conversely, a displacement input moves some part of the system with a specified motion, but the force exerted by the external mechanism moving that part is normally not known. Displacements can be measured with respect to fixed reference positions or with respect to some other moving body. When relative displacements are used, it is important to keep in mind that the inertial force of a mass is always proportional to its absolute acceleration, not to its relative acceleration. To model a system, free-body diagrams are drawn, and the forces acting on every mass and junction point whose motion is unknown are summed up. The free-body diagram for a massless junction is drawn in the usual way, except that there is no inertial force [7]. The modelling issues specific to translational systems are presented in this section. After introducing the variables to be used, the laws for the individual elements are presented in addition to the laws governing the interconnections of the elements. Next, the use of free-body diagrams as an aid in formulating the equations of the model is presented. Inputs consist of either the application of a known external force or the movement of a body with a known displacement. If the system contains an ideal pulley, then some parts can move horizontally and other parts vertically. Special situations, such as free-body diagrams for massless junctions and rules for the series or parallel combination of similar elements, are also treated. The modelling process can sometimes be simplified by replacing a series-parallel combination of stiffness or friction elements with a single equivalent element. Special attention is given to linear systems that involve vertical motion. If displacements are measured from positions where the springs are neither stretched nor compressed, the gravitational forces must be included in the free-body diagrams for any masses that can move vertically.

2.4.1.1 State Variables

State variables of translational mechanical systems are the displacement x(t), measured in metres (m), the velocity v(t), which is measured in metres per second (m/s), the acceleration a(t), measured in metres per second squared (m/s²), and the force f(t), which is measured in newtons (N).

$$v = \frac{dx}{dt}, \qquad a = \frac{dv}{dt} = \frac{d^2x}{dt^2}.$$

In addition to the variables mentioned above, the energy w in joules (J) and power p in watts (W) are also defined variables:

$$p = fv, \qquad p = \frac{dw}{dt}.$$

The energy supplied between time $t_0$ and $t_1$ is given by

$$\int_{t_0}^{t_1} p(\tau)\, d\tau,$$

and the total energy supplied is obtained from

$$w(t) = w(t_0) + \int_{t_0}^{t} p(\tau)\, d\tau,$$

where $w(t_0)$ is the energy supplied up to time $t_0$.
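These definitions can be checked numerically. The following sketch (in Python, with an assumed constant force and an assumed velocity profile, chosen purely for illustration) approximates the energy integral with the trapezoidal rule:

```python
import numpy as np

# A minimal numerical check of w(t) = w(t0) + integral of p(tau) d(tau).
# Assumed scenario (illustration only): a constant force f = 2 N acting on a
# body whose velocity is v(t) = 3t m/s, so the power is p = f*v = 6t W and the
# exact energy supplied over [0, 1] s is 3 J.
f = 2.0                                  # applied force in newtons (assumed)
t = np.linspace(0.0, 1.0, 1001)          # time grid over [t0, t1] = [0, 1]
v = 3.0 * t                              # assumed velocity profile in m/s
p = f * v                                # instantaneous power p = f*v

# Trapezoidal approximation of the energy integral.
w = float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))
print(w)
```

Because p(t) is linear here, the trapezoidal rule recovers the exact value of 3 J.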


2.5 KEY DYNAMIC SYSTEM MODELS

Now that there has been extensive coverage of different types of dynamic systems and the corresponding equations that are used to represent them, attention is explicitly shifted to the subject of dynamic system modelling. There are several ways of expressing the differential equations that describe the dynamic behaviour of a system. Put differently, the mathematical representation of a dynamic system can take different forms. The form of the dynamic system model employed depends on both the objective of the modelling and the available tools for analysis. Also, certain forms are more useful or practical for particular types of dynamic systems than others. As stated in Section 2.3.3, there are four major types of dynamic system models discussed in this book:

• Input-Output Differential Equation Form
• State-Variable Matrix (State-Space) Form
• Transfer Function Form
• Block Diagram Form

2.6 INPUT-OUTPUT DIFFERENTIAL EQUATION FORM MODEL

This form of the dynamic model is developed by expressing the system equations in terms of the inputs and the outputs, and their respective derivatives, while eliminating all other variables. This representation is thus in the form of input-output differential equations representing the behaviour of the system. It is called the Input-Output Differential Equation Form model, shortened as the Input-Output model. The model consists of differential equations of the input and output variables only, and it is the most basic and intuitive representation of a dynamic system. For any given system, the Input-Output model is unique, and all the other three dynamic system models can be derived from it. For a system with one input u(t) and one output y(t), the model has the general form

$$a_0 y + a_1 \dot{y} + a_2 \ddot{y} + \cdots + a_n y^{(n)} = b_0 u + b_1 \dot{u} + b_2 \ddot{u} + \cdots + b_m u^{(m)}, \tag{2.1}$$

where

$$y^{(n)} = \frac{d^n y}{dt^n} \quad \text{and} \quad u^{(m)} = \frac{d^m u}{dt^m}.$$

For linear and time-invariant systems all the coefficients in Equation 2.1 are constants. Equation 2.2 shows a simple input-output differential equation with one input and one output:

$$3y + \dot{y} + 2\ddot{y} = 4u + 7\dot{u}. \tag{2.2}$$
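Equation 2.2 can also be simulated numerically. In the following Python sketch the equation is entered as a ratio of polynomials in the Laplace variable s (anticipating the Transfer Function model of Section 2.8), because the input derivative makes direct integration awkward; the unit-step input is an assumption chosen for illustration:

```python
import numpy as np
from scipy.signal import TransferFunction, step

# A sketch simulating Equation 2.2, 3y + y' + 2y'' = 4u + 7u', for an assumed
# unit-step input u. Rearranged in the Laplace variable s:
#   (2s^2 + s + 3) Y(s) = (7s + 4) U(s)
sys = TransferFunction([7.0, 4.0], [2.0, 1.0, 3.0])
t, y = step(sys, T=np.linspace(0.0, 40.0, 4000))
print(y[-1])   # steady state: the derivatives vanish, so 3y = 4u and y -> 4/3
```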

For systems with more than one input, the right side of the input-output differential equation will include additional input terms. If there are several outputs, there is a need for a separate but similar equation for each output. For example, the following pair of equations represents a system with two outputs and three inputs:

$$3y_1 + \dot{y}_1 + 2\ddot{y}_1 = 4u_1 + u_2 + 7\dot{u}_3 \tag{2.3}$$
$$y_2 + \dot{y}_2 + 2\ddot{y}_2 = 4u_1 + 7u_2 + u_3. \tag{2.4}$$

Assuming the form of the input(s) is known, in the general case, each of the input-output differential equations involves only one unknown variable and its derivatives. Thus, unlike state-variable equations, each equation can be solved independently of the others. An input-output differential equation can be obtained by combining the equations from a State-Variable model or more directly from a free-body diagram or circuit by labelling the variables in terms of the input and output variables.


In summary, it can be restated that the Input-Output Differential Equation Form model (Input-Output model) is the most basic and unique representation of a dynamic system, which consists of differential equations of the input and output variables only.

Example 2.1 Consider the dynamic system, a mass, spring and damper structure, shown in Figure 2.2.

(a) Draw a free-body diagram for the dynamic system.
(b) Derive the Input-Output model of the dynamic system.

Solution 2.1 (a) The first step is drawing the free-body diagram as shown in Figure 2.3. All forces felt by the mass are then indicated. The mass is travelling toward the right. Hence, only the applied force points to the right. All other forces impede the motion and act to oppose it. Hence, the spring force, frictional force (viscous damper), and the force due to acceleration point to the left.

(b) The equation of motion for the dynamic system is then established by using Newton's law (the total of the forces going to the left is equal to the total of those going to the right) in Figure 2.3, such that:

$$m\ddot{x}(t) + b\dot{x}(t) + kx(t) = f(t) \tag{2.5}$$

This differential equation (Equation 2.5) is the Input-Output model of the dynamic system, where the input is f(t) and the output is x(t).

As explained, in electrical networks the applications of electrical phenomena can be modelled using ordinary differential equations. An RLC circuit is an electrical circuit consisting of a resistor (R), an inductor (L), and a capacitor (C), connected in series or in parallel. The name of the circuit is

Figure 2.2 Mass, spring and damper system.

Figure 2.3 Mass, spring and damper system: free body diagram.


Figure 2.4 RLC series network.

derived from the letters that are used to denote the constituent components of this circuit, where the sequence of the components may vary from RLC.

Example 2.2 Consider an RLC circuit consisting of a resistor (R), an inductor (L), and a capacitor (C), connected in series, as depicted in Figure 2.4. Derive the Input-Output model of the network.

Solution 2.2 When dealing with a circuit that has a resistor, an inductor or a capacitor, it is prudent to immediately write down the circuit equations of these three elements, respectively, as:

$$v_R(t) = Ri(t), \qquad v_L(t) = L\dot{i}_L(t), \qquad i_C(t) = C\dot{v}_C(t)$$

The Input-Output model is then derived by using these circuit element equations, Kirchhoff's Current Law (KCL) and Kirchhoff's Voltage Law (KVL) as follows:

$$v(t) = v_R(t) + v_C(t) + v_L(t)$$
$$= Ri(t) + \frac{1}{C}\int i(t)\,dt + L\dot{i}(t) \tag{2.6}$$

But $i(t) = C\dot{v}_C(t)$

$$\implies v(t) = RC\dot{v}_C(t) + v_C(t) + LC\ddot{v}_C(t) \tag{2.7}$$
$$\implies LC\ddot{v}_C(t) + RC\dot{v}_C(t) + v_C(t) = v(t) \tag{2.8}$$

Equation 2.8 is the input and output differential equation representing the dynamics of the RLC series network. It is called the Input-Output model of the dynamic system, where v(t) is the input and vC(t) is the output of the circuit. Ostensibly, the Input-Output model of a dynamic system consists of differential equations of the inputs and outputs of the dynamic system. It is the most basic mathematical representation of the behaviour of the system.
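The Input-Output model of Equation 2.8 can be integrated numerically. A minimal Python sketch, with assumed component values and an assumed unit-step source voltage:

```python
from scipy.integrate import solve_ivp

# A sketch integrating Equation 2.8, LC*vC'' + RC*vC' + vC = v(t), for a
# unit-step source voltage. Component values are assumptions for illustration.
R, L, C = 1.0, 0.5, 0.1               # ohms, henries, farads (assumed)

def v_in(t):
    return 1.0                        # assumed unit-step source voltage v(t)

def rhs(t, z):
    vC, vC_dot = z                    # state: capacitor voltage and its derivative
    return [vC_dot, (v_in(t) - R * C * vC_dot - vC) / (L * C)]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], max_step=0.005)
vC_final = float(sol.y[0, -1])
print(vC_final)                       # the capacitor voltage settles at the source voltage
```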

2.7 STATE-VARIABLE MATRIX FORM (STATE-SPACE) MODEL

This form of the dynamic system model is achieved by first considering the system equations, choosing variables of interest and identifying the inputs and outputs. The system is then modelled by finding equations of the derivatives of the variables in terms of the variables and the inputs [7]. The outputs are also expressed in terms of variables and inputs. The State-Variable Matrix model is then developed by pulling out the coefficients of the state variables and inputs to obtain the matrices A, B, C, and D such that

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t) \tag{2.9}$$
$$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t), \tag{2.10}$$


where x(t) is a vector of state variables, ẋ(t) is a vector of the derivatives of the state variables, u(t) is a vector of inputs and y(t) is a vector of outputs. A, B, C, and D are matrices of coefficients of the state variables and inputs, and the general matrix elements are represented by a_ij, b_ij, c_ij, and d_ij. For linear time-invariant systems the matrices A, B, C, and D are fixed, which means that their elements are constants. This is the state-space representation of a dynamic system, where it is described directly from the differential equations without using transformations. The framework can be used for Large Scale Systems (LSS), Multi-Input Multi-Output (MIMO) systems and nonlinear systems. It is appropriately structured for computer implementation and computer-based design due to its extensive use of matrices. Details of state-space design methods are presented in Chapter 7. Consequently, the State-Variable Matrix Form model of a dynamic system is interchangeably called the State-Variable Matrix model or the State-Space model. It is important to note that this representation of a dynamic system is not unique. Different State-Variable Matrix models can be derived for the same dynamic system. This will be illustrated and explored in detail later in the book.

CHOICE OF STATE VARIABLES

The procedure of formulating the state variable system model begins with the selection of a set of state variables. This set of variables must completely describe the effect of the past history of the system and its response in the future. Although the choice of state variables is not unique, the state variables for dynamic systems are usually related to the energy stored in each of the system's energy-storing elements [7]. Since any energy that is initially stored in these elements can affect the response of the system at a later time, one state variable is normally associated with each of the independent energy-storing elements. Hence, the number of independent state variables that can be chosen is equal to the number of independent energy-storing elements in the system. Figure 2.5 summarises the key energy-storing elements of dynamic systems. The fact that the state variables are independent means it is impossible to express any state variable as an algebraic function of the remaining state variables and the inputs. A system with state variables chosen with this constraint is said to be minimal (or minimised) and the matrix A is full rank, and thus invertible. In some systems, the number of state variables is larger than the number of energy-storing elements because a particular interconnection of elements causes redundant variables or because there is a need for a state variable that is not related to the storage of energy. The latter might occur when a particular variable or parameter in a system has to be monitored or controlled where this can only be achieved by using an extra state (redundant) variable. In such a State-Variable Matrix model the system is not minimal and matrix A is not full rank (i.e., not invertible). It is not possible to model a system using fewer state variables than the number of independent energy-storing elements in the system.

Figure 2.5 Key energy-storing elements of dynamic systems.


2.7.2 SUMMARY OF THE STATE-VARIABLE FORM MODELLING

The general approach employed in deriving the mathematical model in state-variable matrix form can be summarised as follows:

• Determine the number of energy-storing elements in the dynamic system (n).
• Choose the n state variables and identify the inputs and outputs.
• Draw free-body diagrams or circuit diagrams for the system elements.
• Obtain a set of differential equations by using dynamic system laws (such as mechanical, electrical, chemical and hydraulic).
• Manipulate the differential equations into a State-Variable Matrix model by expressing the derivative of each state variable as an algebraic function of the state variables and inputs. Take note of simple cases such as ẋ = v, where both x and v are state variables.
• Express the output variables as algebraic functions of the state variables and the inputs.
• Write the state variable and output equations in matrix form by pulling out the coefficients of the state variables and the inputs to obtain the matrices A, B, C and D.

2.7.3 OBTAINING THE STATE-VARIABLE MATRIX MODEL

As already explained, once an Input-Output model is available, the State-Variable Matrix model can be obtained by manipulating this equation. Consider the dynamic system, a mass, spring and damper structure discussed in Example 2.1. There are two energy-storing elements, the spring and the mass. Hence, the state variables are chosen as the displacement of the spring x(t) and the speed of the mass v(t), where v(t) = ẋ(t). The system output can be any variable of interest. For example, it can be chosen as y(t) = x(t). Using the Input-Output model obtained from the free-body diagram presented in Example 2.1, it follows that

$$m\ddot{x}(t) + b\dot{x}(t) + kx(t) = f(t)$$
$$\implies m\dot{v}(t) + bv(t) + kx(t) = f(t)$$
$$\implies \dot{v}(t) = -\frac{k}{m}x(t) - \frac{b}{m}v(t) + \frac{f(t)}{m}$$

Hence, the State-Variable Matrix model is then extracted as follows:

$$\begin{bmatrix} \dot{x}(t) \\ \dot{v}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\dfrac{k}{m} & -\dfrac{b}{m} \end{bmatrix} \begin{bmatrix} x(t) \\ v(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{m} \end{bmatrix} f(t)$$

$$y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x(t) \\ v(t) \end{bmatrix} + \begin{bmatrix} 0 \end{bmatrix} f(t)$$

Thus, the state-variable matrices are given by

$$A = \begin{bmatrix} 0 & 1 \\ -\dfrac{k}{m} & -\dfrac{b}{m} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \dfrac{1}{m} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 \end{bmatrix}$$
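These matrices can be used directly for simulation. A minimal Python sketch, with numeric values for m, b and k assumed purely for illustration, computes the response to a unit-step applied force:

```python
import numpy as np
from scipy.signal import StateSpace, step

# A sketch assembling the A, B, C, D matrices extracted above and computing
# the unit-step force response. The values of m, b and k are assumptions.
m, b, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])            # output y = x(t), the displacement
D = np.array([[0.0]])

t, y = step(StateSpace(A, B, C, D), T=np.linspace(0.0, 20.0, 2000))
print(y[-1])                          # settles near f/k = 0.5 m for a unit force
```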

Similarly, the State-Variable Matrix model of the RLC circuit discussed in Example 2.2 can be obtained from the Input-Output model. There are two energy-storing elements, the capacitor and


the inductor. Hence, the state variables are chosen as the voltage across the capacitor vC(t) and the current through the inductor i(t), where i(t) = C v̇C(t). The system output can be any variable of interest. For example, it can be chosen as y(t) = vC(t). Using the Input-Output model obtained by applying Kirchhoff's Current and Voltage Laws as demonstrated in Example 2.2, it follows that

$$LC\ddot{v}_C(t) + RC\dot{v}_C(t) + v_C(t) = v(t)$$
$$\implies L\dot{i}(t) + Ri(t) + v_C(t) = v(t)$$
$$\implies \dot{i}(t) = -\frac{1}{L}v_C(t) - \frac{R}{L}i(t) + \frac{1}{L}v(t).$$

Hence, the State-Variable Matrix model is then extracted as follows:

$$\begin{bmatrix} \dot{v}_C(t) \\ \dot{i}(t) \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{C} \\ -\dfrac{1}{L} & -\dfrac{R}{L} \end{bmatrix} \begin{bmatrix} v_C(t) \\ i(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{L} \end{bmatrix} v(t)$$

$$y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} v_C(t) \\ i(t) \end{bmatrix} + \begin{bmatrix} 0 \end{bmatrix} v(t)$$

Thus, the state-variable matrices are given by

$$A = \begin{bmatrix} 0 & \dfrac{1}{C} \\ -\dfrac{1}{L} & -\dfrac{R}{L} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ \dfrac{1}{L} \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 \end{bmatrix}$$

Example 2.3 Two connected cars with an applied input force u(t) and negligible rolling friction can be represented by a translational mechanical system as shown below.

Translational Mechanical System

(a) Draw the free-body diagrams of this mechanical system.
(b) Write down the differential equations that represent the system.
(c) Explain why the vector of state variables should be chosen as $\mathbf{x}(t) = [x_1\; v_1\; x_2\; v_2]^T$.
(d) Express these differential equations in the State-Variable Matrix model, where the output is the frictional force in b1.


Solution 2.3 (a) The free-body diagrams for the translational mechanical system are drawn as shown below.

Free-Body Diagram

(b) Balancing the forces (D'Alembert's law) on the free-body diagrams of masses m1 and m2, respectively, gives

$$m_1\ddot{x}_1 + b_1(\dot{x}_1 - \dot{x}_2) + k_1(x_1 - x_2) = u(t) \tag{2.11}$$
$$m_2\ddot{x}_2 + k_2 x_2 + b_2\dot{x}_2 = b_1(\dot{x}_1 - \dot{x}_2) + k_1(x_1 - x_2) \tag{2.12}$$

(c) There are four energy-storing elements, two springs and two masses. The potential energy stored in a spring is a function of the displacement x(t), and the kinetic energy stored in a moving mass is a function of the velocity v(t) (which is equal to ẋ(t)). Hence, the state variables corresponding to the energy-storing elements are given by $\mathbf{x}(t) = [x_1\; v_1\; x_2\; v_2]^T$.

(d) The State-Variable Matrix model is achieved by obtaining expressions of the derivatives of these variables in terms of the variables and the input. Two of these expressions are simple:

$$\dot{x}_1 = v_1, \qquad \dot{x}_2 = v_2.$$

The other two expressions, for $\dot{v}_1$ and $\dot{v}_2$, are obtained by rearranging Equations 2.11 and 2.12:

$$m_1\ddot{x}_1 + b_1(\dot{x}_1 - \dot{x}_2) + k_1(x_1 - x_2) = u(t)$$
$$m_1\dot{v}_1 + b_1(v_1 - v_2) + k_1(x_1 - x_2) = u(t)$$
$$\dot{v}_1 = -\frac{b_1}{m_1}v_1 + \frac{b_1}{m_1}v_2 - \frac{k_1}{m_1}x_1 + \frac{k_1}{m_1}x_2 + \frac{1}{m_1}u(t)$$

$$m_2\ddot{x}_2 + k_2 x_2 + b_2\dot{x}_2 = b_1(\dot{x}_1 - \dot{x}_2) + k_1(x_1 - x_2)$$
$$m_2\dot{v}_2 + k_2 x_2 + b_2 v_2 = b_1(v_1 - v_2) + k_1(x_1 - x_2)$$
$$\dot{v}_2 = \frac{b_1}{m_2}v_1 - \left(\frac{b_1 + b_2}{m_2}\right)v_2 + \frac{k_1}{m_2}x_1 - \left(\frac{k_1 + k_2}{m_2}\right)x_2.$$


The results can be summarised as follows:

$$\dot{x}_1 = v_1$$
$$\dot{v}_1 = -\frac{k_1}{m_1}x_1 - \frac{b_1}{m_1}v_1 + \frac{k_1}{m_1}x_2 + \frac{b_1}{m_1}v_2 + \frac{1}{m_1}u(t)$$
$$\dot{x}_2 = v_2$$
$$\dot{v}_2 = \frac{k_1}{m_2}x_1 + \frac{b_1}{m_2}v_1 - \left(\frac{k_1 + k_2}{m_2}\right)x_2 - \left(\frac{b_1 + b_2}{m_2}\right)v_2.$$

The output, the frictional force in b1, is given by

$$y(t) = b_1(v_1 - v_2) = b_1 v_1 - b_1 v_2.$$

By extracting the coefficients of the variables and inputs, the state-variable matrix system is obtained:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{v}_1 \\ \dot{x}_2 \\ \dot{v}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\dfrac{k_1}{m_1} & -\dfrac{b_1}{m_1} & \dfrac{k_1}{m_1} & \dfrac{b_1}{m_1} \\ 0 & 0 & 0 & 1 \\ \dfrac{k_1}{m_2} & \dfrac{b_1}{m_2} & -\dfrac{(k_1 + k_2)}{m_2} & -\dfrac{(b_1 + b_2)}{m_2} \end{bmatrix} \begin{bmatrix} x_1 \\ v_1 \\ x_2 \\ v_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{m_1} \\ 0 \\ 0 \end{bmatrix} u(t)$$

$$y(t) = \begin{bmatrix} 0 & b_1 & 0 & -b_1 \end{bmatrix} \begin{bmatrix} x_1 \\ v_1 \\ x_2 \\ v_2 \end{bmatrix} + \begin{bmatrix} 0 \end{bmatrix} u(t).$$

These two matrix equations constitute the State-Variable Matrix model of the dynamic system.
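The model can be checked by simulation. A minimal Python sketch, with all parameter values assumed purely for illustration, applies a unit-step input force:

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# A sketch assembling the 4x4 State-Variable Matrix model of the two-car
# system and simulating a unit-step input force. All numeric values are
# assumptions; the output is the damper force y = b1*(v1 - v2).
m1, m2, b1, b2, k1, k2 = 1.0, 1.0, 0.5, 0.5, 1.0, 1.0

A = np.array([
    [0.0,       1.0,      0.0,              0.0],
    [-k1 / m1, -b1 / m1,  k1 / m1,          b1 / m1],
    [0.0,       0.0,      0.0,              1.0],
    [k1 / m2,   b1 / m2, -(k1 + k2) / m2,  -(b1 + b2) / m2],
])
B = np.array([[0.0], [1.0 / m1], [0.0], [0.0]])
C = np.array([[0.0, b1, 0.0, -b1]])
D = np.array([[0.0]])

t = np.linspace(0.0, 100.0, 5000)
_, y, x = lsim(StateSpace(A, B, C, D), U=np.ones_like(t), T=t)
print(y[-1], x[-1])   # velocities decay to zero, so the damper force tends to 0;
                      # displacements settle at x2 = u/k2 and x1 = x2 + u/k1
```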

2.7.4 STATE-VARIABLE MATRIX MODELS FOR NONLINEAR SYSTEMS

For linear time-invariant systems, all of the elements of A, B, C, and D represented by a_ij, b_ij, c_ij, and d_ij (coefficients of state variables and inputs) are constants. For linear systems whose parameters vary with time, some of these elements are functions of time. For nonlinear elements, some of the coefficients are functions of the state variables or inputs. Hence, in the general case of time-varying, nonlinear systems, the right-hand sides of the state variable and output equations are nonlinear functions of the state variables, the inputs, and time such that

$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}, \mathbf{u}, t) \tag{2.13}$$
$$\mathbf{y}(t) = \mathbf{g}(\mathbf{x}, \mathbf{u}, t). \tag{2.14}$$

This means that for nonlinear systems the matrices A, B, C, and D cannot be extracted, and the State-Variable Matrix model takes the general structure shown in Equations 2.13 and 2.14. Two other situations where the matrices cannot be extracted occur when derivatives of two independent state variables appear in the same state-variable equation, and when derivatives of the inputs appear in


the state-variable equation. While these more general and more complicated systems are addressed in this chapter, emphasis is placed on linear time-invariant systems where the state-variable matrices can be easily extracted. Examples of linearisation of nonlinear models are presented in Section 2.17. Once a model is linearised, all the classical control techniques are easily applied.

2.7.5 CHARACTERISTICS OF STATE-VARIABLE MODELS

One of the characteristics of state-variable models is that they are easily captured by matrix notation and are thus amenable to the techniques of linear algebra. For example, any number of first-order state-variable equations can be represented by a single matrix differential equation merely by making the appropriate definitions. Furthermore, when one is dealing with complex multi-input, multi-output systems, using matrix concepts and properties leads to an understanding of system behaviour that would be difficult to achieve otherwise. An additional advantage of using matrices is that one can apply many of the theoretical properties of matrices, such as multiplication, evaluation of determinants, and inversion, to the study of dynamic systems once their models have been put into matrix form. The State-Variable Matrix model is obtained by pulling out the coefficients of the state variables and the inputs to obtain the coefficient matrices A, B, C, and D. These matrices can then be used in MATLAB (Appendix B) simulation to obtain the system response and design controllers to control the system response. State-variable equations are particularly convenient for complex multi-input, multi-output systems. They are often written in matrix form, and, in addition to their computational advantages, they can be used to obtain considerable insight into system behaviour. The state variable concept has formed the basis for many of the theoretical developments in system analysis, in particular the state-space methods discussed in Chapter 7.

2.7.6 COMPARISON WITH THE INPUT-OUTPUT MODEL

For a first-order system, both forms of the system model involve a single first-order differential equation and are essentially identical. For higher-order systems, they are quite different. A set of n first-order differential equations in a State-Variable Matrix model must be solved as a group, and the initial value of each state variable must be known to solve the set of n equations. An input-output differential equation of order n contains only one dependent variable, but there is a need to know the initial values of that variable and its first (n − 1) derivatives. In practice, finding the input-output differential equation and the associated initial conditions may require more effort than finding the information needed for a state variable solution. Using the state-variable equations has significant computational advantages when a computer solution is to be found. In fact, standard methods for solving a high-order, possibly nonlinear input-output differential equation numerically (using computers) usually require decomposition into a set of simultaneous first-order equations. The analytical solution of input-output differential equations and of sets of state-variable equations is considered in Chapter 3. The State-Variable Matrix models are particularly convenient for complex multi-input, multi-output systems, while the Input-Output models are very convenient when the Transfer Function model is to be readily derived analytically.

2.8 TRANSFER FUNCTION FORM MODEL

The models in the previous sections are either ordinary differential equations or algebraic equations of time-dependent quantities. They are called time-domain models. Another method of capturing the dynamic behaviour of linear systems is through their frequency responses. Such models are called frequency-domain models. Laplace transformation is the key to frequency-domain modelling. The Transfer Function Form model (shortened as the Transfer Function model) of a dynamic system with an input u(t) and output y(t) is defined from the Laplace transforms of the input and output such that


Design and Analysis of Control Systems

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b_0 s^m + b_1 s^{m-1} + b_2 s^{m-2} + \cdots + b_m}{a_0 s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_n},$$

where $\{b_j\}$ and $\{a_i\}$ are constants, and $n \ge m$. The variable $s$ is called the complex variable or the Laplace variable. Hence, the Transfer Function model is the ratio of the Laplace transforms of the output and input of a dynamic system. It is easily obtained by taking the Laplace transforms of the Input-Output model of the system. In this book, both time-domain and frequency-domain models are employed, and it is important to be able to convert freely from one form to the other. In addition to modelling linear systems in the frequency domain, Laplace transformation is also used in engineering to solve the ordinary differential equations of dynamic systems, i.e., to obtain the system response. This is the subject matter of Chapter 3.

2.8.1 OBTAINING THE TRANSFER FUNCTION MODEL

There are four ways of establishing the transfer function of a dynamic system:

• Directly taking the Laplace transform of time-domain models (State-Variable form or Input-Output form)
• Using the s-operator
• Using transfer functions (obtained by employing Laplace transforms) of the components of the system to establish the overall system transfer function
• Using frequency response experimental data

As previously indicated, the Transfer Function model can be obtained directly from the time-domain models (when all the initial conditions are known) by taking the Laplace transforms throughout the differential equations. Before that can be accomplished, the Laplace transform must be briefly introduced.

2.8.2 THE LAPLACE TRANSFORM

Difficult mathematical problems can be transformed into equivalent problems that are easier to solve, after which the solution is transformed back into the original framework. Such transformation techniques include logarithms, Fourier series, the Fourier transform, the Laplace transform, and the Z-transform. The Laplace transform is a mathematical tool for transforming differential equations into an easier-to-manipulate algebraic form. In this domain, the differential equations are easily solved, and the solutions are converted back into the time domain to give the system response. In this way, the Laplace transform is very useful in establishing and analysing the system responses of linear dynamic systems by using relatively easily solved algebraic equations instead of the more difficult differential equations. The use of Laplace transforms to solve dynamic system models proceeds from two forms of the models: the input-output differential equation or the system transfer function.

2.8.2.1 Definitions

The Laplace transform converts a time function into a function of a complex variable denoted by $s$. The transform of a function of time $y(t)$ is represented by either $\mathcal{L}[y(t)]$ or $Y(s)$, such that

$$Y(s) = \mathcal{L}[y(t)],$$

Figure 2.6 Laplace transformation: time domain to s-plane (frequency domain) via $\mathcal{L}$, and back via the inverse transform $\mathcal{L}^{-1}$.

where the symbol $\mathcal{L}$ stands for "the Laplace transform of". One can think of the Laplace transform as providing a means of transforming a given problem from the time domain, where all variables are functions of $t$, to the complex-frequency domain, where all variables are functions of $s$. The Laplace transformation is defined as follows:

$$Y(s) = \mathcal{L}[y(t)] = \int_{-\infty}^{\infty} y(t)e^{-st}\,dt.$$

In most applications, only the one-sided Laplace transform is essential. This transform uses $0^-$ (a value just before $t = 0$) as the lower limit, such that

$$Y(s) = \int_{0^-}^{\infty} y(t)e^{-st}\,dt.$$

Once the system model has been solved in the s-domain, it is converted back into the time domain to give the system response using the inverse Laplace transform:

$$y(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} Y(s)e^{st}\,ds.$$

In practice, this inverse Laplace transform equation is seldom used because it is too complex. Instead, tables of simple inverse Laplace transforms are used. Such tables are in Appendix A. Figure 2.6 shows a summary of the Laplace transformation between the time and frequency domains.

2.8.3 PROPERTIES OF LAPLACE TRANSFORMS

Two attributes of the analytical techniques for linear time-invariant systems form the basis for their application to dynamic systems:

• A linear system response obeys the principle of superposition.
• The response of a linear constant system can be expressed as the convolution of the input with the unit impulse response of the system.

The principle of superposition states that if the system has an input that can be expressed as a sum of signals, then the response of the system can be expressed as the sum of the individual responses to the respective input signals. From the second property, it follows immediately that the response of a linear time-invariant system to an exponential input is also exponential. This result is the principal reason for the usefulness of Fourier and Laplace transforms in the study of linear constant systems. The properties of Laplace transforms can be summarised as follows:

• Superposition principle: $\mathcal{L}\{\alpha y_1(t) + \beta y_2(t)\} = \alpha Y_1(s) + \beta Y_2(s)$. Laplace transforms are applied to linear systems, and hence they obey the principle of superposition.

• Convolution:
$$y(t) = \int_{-\infty}^{\infty} u(\tau)h(t-\tau)\,d\tau,$$
where $u(t)$ is the input and $h(t)$ is the impulse response.

• Scaling property: $\mathcal{L}\{\alpha y(t)\} = \alpha Y(s)$. This is a special case of the superposition principle.

• Differentiation: the variable $s$ acts as the differential operator, $s \equiv \dfrac{d}{dt}$, and
$$\mathcal{L}\left[\frac{dy(t)}{dt}\right] = sY(s) - y(0)$$
$$\mathcal{L}\left[y^{(m)}(t)\right] = s^{m}Y(s) - s^{m-1}y(0) - s^{m-2}\dot{y}(0) - \cdots - y^{(m-1)}(0).$$

• Integration: $\dfrac{1}{s}$ acts as the integral operator, $\dfrac{1}{s} \equiv \int dt$, and
$$\mathcal{L}\left[\int y(t)\,dt\right] = \frac{Y(s)}{s}.$$

• Multiplication by time:
$$\mathcal{L}[t\,y(t)] = -\frac{dY(s)}{ds}.$$
Multiplication of a function $y(t)$ by time corresponds to differentiating the negative of the Laplace transform, $-Y(s)$.
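The last property can be spot-checked numerically with a known transform pair. The following is a minimal Python sketch, not from the text, with illustrative values of $a$ and $s$: it compares a central-difference estimate of $-dY/ds$ for $Y(s) = 1/(s+a)$ against the known transform of $te^{-at}$, which is $1/(s+a)^2$.

```python
# Spot-check of L[t y(t)] = -dY(s)/ds using y(t) = e^{-at}, for which
# Y(s) = 1/(s + a) and L[t e^{-at}] = 1/(s + a)^2 (values are illustrative).
a, s, d = 1.0, 2.0, 1e-6

def Y(s_):
    return 1.0 / (s_ + a)

minus_dY_ds = -(Y(s + d) - Y(s - d)) / (2 * d)  # central-difference estimate
table_value = 1.0 / (s + a) ** 2                # transform of t e^{-at}
print(minus_dY_ds, table_value)
```

Both values come out as $1/(s+a)^2 = 1/9$, confirming the property at this sample point.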

2.8.4 LAPLACE TRANSFORM OF SOME KEY FUNCTIONS

To build an understanding of Laplace transforms, the transforms of simple functions are derived in this section. Laplace transforms of some key functions are shown in the following table. Complete tables of the transforms and their properties are in Appendix A.

| Number | $Y(s)$ | $y(t),\; t \ge 0$ |
|---|---|---|
| 1 | $1$ | $\delta(t)$ |
| 2 | $\dfrac{1}{s}$ | $1(t)$ |
| 3 | $\dfrac{1}{s^2}$ | $t$ |
| 4 | $\dfrac{m!}{s^{m+1}}$ | $t^m$ |
| 5 | $\dfrac{1}{s+a}$ | $e^{-at}$ |
| 6 | $\dfrac{1}{(s+a)^2}$ | $te^{-at}$ |
| 7 | $\dfrac{1}{(s+a)^m}$ | $\dfrac{1}{(m-1)!}\,t^{m-1}e^{-at}$ |
| 8 | $\dfrac{s+a}{(s+a)^2+b^2}$ | $e^{-at}\cos bt$ |
| 9 | $\dfrac{b}{(s+a)^2+b^2}$ | $e^{-at}\sin bt$ |
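Entries 8 and 9 can be recovered from entry 5 with complex algebra, since $e^{-at}\cos bt$ and $e^{-at}\sin bt$ are the real and imaginary parts of $e^{-(a-jb)t}$. A short Python check, with illustrative parameter values:

```python
# e^{-at} cos bt = Re{e^{-(a - jb)t}} and e^{-at} sin bt = Im{e^{-(a - jb)t}},
# so entries 8 and 9 should be the real and imaginary parts of 1/(s + a - jb).
a, b, s = 0.5, 2.0, 1.5

entry8 = (s + a) / ((s + a) ** 2 + b ** 2)   # transform of e^{-at} cos bt
entry9 = b / ((s + a) ** 2 + b ** 2)         # transform of e^{-at} sin bt
via_entry5 = 1 / (s + a - 1j * b)            # entry 5 with the complex "a - jb"
print(entry8, entry9, via_entry5)
```

At this sample point both table entries equal 0.25, matching the real and imaginary parts of $1/(2 - 2j) = 0.25 + 0.25j$.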

2.8.4.1 Step Function

This is a function that is constant for $t \ge 0$, as shown in Figures 2.7 and 2.8b:

$$y(t) = a$$
$$\Rightarrow Y(s) = \int_0^{\infty} a\,e^{-st}\,dt = \left.\frac{-ae^{-st}}{s}\right|_0^{\infty} = \frac{a}{s}.$$

2.8.4.2 Ramp Function

This is represented as a line of constant gradient for $t \ge 0$, as shown in Figure 2.8c:

$$y(t) = bt$$

Figure 2.7 A step function.

Figure 2.8 Impulse (a), step (b), and ramp (c) inputs.

$$\Rightarrow Y(s) = \int_0^{\infty} bt\,e^{-st}\,dt = \left[-\frac{bte^{-st}}{s} - \frac{be^{-st}}{s^2}\right]_0^{\infty} = \frac{b}{s^2}.$$

2.8.4.3 Exponential Function

This is represented by the expression

$$y(t) = e^{-at}$$
$$\Rightarrow Y(s) = \int_0^{\infty} e^{-at}e^{-st}\,dt = \int_0^{\infty} e^{-(s+a)t}\,dt = \left.-\frac{e^{-(s+a)t}}{s+a}\right|_0^{\infty} = \frac{1}{s+a}.$$


2.8.4.4 Impulse Function

This is a spike at the origin, from $0^-$ to $0^+$, as shown in Figure 2.8a:

$$y(t) = \delta(t)$$
$$\Rightarrow Y(s) = \int_{0^-}^{\infty} \delta(t)e^{-st}\,dt = \int_{0^-}^{0^+} \delta(t)\,dt = 1.$$

2.8.4.5 Sinusoid Function

This is represented by the expression

$$y(t) = \sin\omega t$$
$$\Rightarrow Y(s) = \int_0^{\infty} (\sin\omega t)\,e^{-st}\,dt.$$

Using $\sin\omega t = \dfrac{e^{j\omega t} - e^{-j\omega t}}{2j}$ gives

$$Y(s) = \int_0^{\infty}\left(\frac{e^{j\omega t} - e^{-j\omega t}}{2j}\right)e^{-st}\,dt = \frac{1}{2j}\int_0^{\infty}\left(e^{(j\omega - s)t} - e^{-(j\omega + s)t}\right)dt = \frac{\omega}{s^2+\omega^2}.$$

2.8.4.6 Pure-Time Delay

Consider a signal $y(t)$ delayed by time $\tau$, such that the delayed signal is $y_1(t) = y(t-\tau)$. Taking Laplace transforms leads to

$$\mathcal{L}[y_1(t)] = \mathcal{L}[y(t-\tau)] \Rightarrow Y_1(s) = e^{-s\tau}Y(s).$$

The function $e^{-s\tau}$ represents the Laplace transform of a pure time delay.

2.8.4.7 Differentiation

$$\mathcal{L}\left[\frac{dy(t)}{dt}\right] = sY(s) - y(0)$$

For example, consider $y(t) = \sin\omega t$:

$$\mathcal{L}\left[\frac{dy(t)}{dt}\right] = s\mathcal{L}(\sin\omega t) - \sin 0 = s\left(\frac{\omega}{s^2+\omega^2}\right) - 0 = \frac{s\omega}{s^2+\omega^2}.$$

The result can be checked by taking the derivative first, and then taking the Laplace transform:

$$\frac{dy(t)}{dt} = \frac{d}{dt}(\sin\omega t) = \omega\cos\omega t$$
$$\Rightarrow \mathcal{L}\left[\frac{dy(t)}{dt}\right] = \omega\mathcal{L}[\cos\omega t] = \frac{\omega s}{s^2+\omega^2} \quad \text{(same result as before)}.$$

2.8.4.8 Integration

$$\mathcal{L}\left[\int_0^t y(\tau)\,d\tau\right] = \frac{1}{s}Y(s)$$

For example, consider $y(t) = \cos\omega t$, whose transform is $Y(s) = \dfrac{s}{s^2+\omega^2}$:

$$\mathcal{L}\left[\int_0^t y(\tau)\,d\tau\right] = \frac{1}{s}\mathcal{L}(\cos\omega t) = \frac{1}{s}\left(\frac{s}{s^2+\omega^2}\right) = \frac{1}{s^2+\omega^2}.$$

The result can be checked by taking the integral first and then taking the Laplace transform as follows:

$$\int_0^t \cos\omega\tau\,d\tau = \frac{1}{\omega}\sin\omega t$$
$$\Rightarrow \mathcal{L}\left[\frac{1}{\omega}\sin\omega t\right] = \frac{1}{\omega}\mathcal{L}(\sin\omega t) = \frac{1}{\omega}\left(\frac{\omega}{s^2+\omega^2}\right) = \frac{1}{s^2+\omega^2} \quad \text{(same result as before)}.$$

2.8.4.9 The Differential Operator

The s-operator can be considered as the differential operator:

$$s = \frac{d}{dt} \Rightarrow sY(s) = \mathcal{L}\left[\frac{dy(t)}{dt}\right] \quad \text{(for zero initial conditions)}.$$

2.8.4.10 The Integral Operator

The operator $\dfrac{1}{s}$ can be interpreted as the integral operator:

$$\frac{1}{s} = \int_0^t d\tau \Rightarrow \frac{Y(s)}{s} = \mathcal{L}\left[\int_0^t y(\tau)\,d\tau\right].$$

Example 2.4 Find the Laplace transform of the function $y(t) = \alpha\sin^2 t + \beta\cos^2 t$.

Solution 2.4 Using the half-angle formulae and rewriting the result leads to

$$y(t) = \alpha\sin^2 t + \beta\cos^2 t = \alpha\left(\frac{1}{2} - \frac{\cos 2t}{2}\right) + \beta\left(\frac{1}{2} + \frac{\cos 2t}{2}\right).$$

Therefore,

$$Y(s) = \alpha\mathcal{L}\left[\frac{1}{2} - \frac{\cos 2t}{2}\right] + \beta\mathcal{L}\left[\frac{1}{2} + \frac{\cos 2t}{2}\right].$$

From the Laplace tables it follows that

$$Y(s) = \alpha\left[\frac{1}{2s} - \frac{s}{2(s^2+4)}\right] + \beta\left[\frac{1}{2s} + \frac{s}{2(s^2+4)}\right].$$

Rearranging the terms gives

$$Y(s) = \frac{\alpha+\beta}{2s} + \left(\frac{\beta-\alpha}{2}\right)\frac{s}{s^2+4}.$$

The Laplace transform is very useful in the modelling of dynamic systems and the determination of the system response. Its use in solving dynamic system models, which is covered in Chapter 3, proceeds from two forms of the models: the Input-Output Differential Equation Form or the Transfer Function Form. In this chapter, the focus is on the use of the Laplace transform for the modelling of dynamic systems.
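The closed form found in Example 2.4 can be verified by evaluating the defining one-sided integral numerically. A rough Python sketch (the trapezoidal step, truncation point, and sample values of $\alpha$, $\beta$ and $s$ are arbitrary choices, not from the text):

```python
import math

def laplace_num(y, s, T=40.0, n=400000):
    # One-sided Laplace transform via the trapezoidal rule on [0, T];
    # with s = 1 the truncated tail beyond T = 40 is negligible.
    h = T / n
    acc = 0.5 * (y(0.0) + y(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        acc += y(t) * math.exp(-s * t)
    return acc * h

alpha, beta, s = 3.0, 5.0, 1.0
numeric = laplace_num(lambda t: alpha * math.sin(t) ** 2 + beta * math.cos(t) ** 2, s)
closed = (alpha + beta) / (2 * s) + ((beta - alpha) / 2) * s / (s ** 2 + 4)
print(numeric, closed)
```

For these values the closed form evaluates to 4.2, and the quadrature result agrees to several decimal places.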

2.8.5 DETERMINATION OF THE TRANSFER FUNCTION MODEL

With clarity on the nature of Laplace transforms, the Transfer Function Form model can be obtained directly from the time-domain models (when all the initial conditions are known) by taking the Laplace transforms throughout the differential equations. The time-domain model can either be the State-Variable Matrix model or the Input-Output model (input-output differential equation form). It is easier to start with the input-output differential equation form, hence it is advisable to convert the State-Variable Matrix model to the input-output differential equation form first, and then proceed to obtain the transfer function. The procedure can be summarised as follows:

• Consider the input-output differential equation form, or convert the State-Variable Matrix form to an input-output differential equation form.
• Take the Laplace transforms of the input-output differential equation.
• Apply the initial conditions (usually set to zero).
• Obtain the Laplace transform of the output divided by the Laplace transform of the input.


Consider a system that has the input-output differential equation

$$a_1 y + a_2\dot{y} + a_3\ddot{y} = b_1 u + b_2\dot{u},$$

where the input is $u(t)$ and the output is $y(t)$. Taking Laplace transforms throughout the differential equation gives

$$a_1 Y(s) + a_2\left[sY(s) - y(0)\right] + a_3\left[s^2Y(s) - sy(0) - \dot{y}(0)\right] = b_1 U(s) + b_2\left[sU(s) - u(0)\right].$$

Setting all initial conditions to zero leads to

$$a_1 Y(s) + a_2 sY(s) + a_3 s^2 Y(s) = b_1 U(s) + b_2 sU(s)$$
$$Y(s)\left[a_1 + a_2 s + a_3 s^2\right] = U(s)\left[b_1 + b_2 s\right].$$

The system transfer function $T(s)$, defined as the Laplace transform of the output $Y(s)$ divided by the Laplace transform of the input $U(s)$, is then obtained as follows:

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b_1 + b_2 s}{a_1 + a_2 s + a_3 s^2}.$$

This is the Transfer Function model of the dynamic system.

Example 2.5 Consider the dynamic system, a mass, spring and damper structure, discussed in Example 2.1 and shown in Figure 2.9. Find the Transfer Function model of the dynamic system, where f(t) is the input and x(t) is the output.

Solution 2.5 As explained in Example 2.1, the first step is drawing the free-body diagram, from which the Input-Output model is obtained as

$$m\ddot{x}(t) + b\dot{x}(t) + kx(t) = f(t). \tag{2.15}$$

The Transfer Function model is obtained by taking the Laplace transforms of this Input-Output model.

Figure 2.9 Mass, spring and damper system.


$$ms^2X(s) + bsX(s) + kX(s) = F(s)$$
$$\Rightarrow X(s)\left[ms^2 + bs + k\right] = F(s)$$
$$\Rightarrow T(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^2 + bs + k} = \frac{\dfrac{1}{m}}{s^2 + \dfrac{b}{m}s + \dfrac{k}{m}}.$$
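As a quick sanity check of this result, the DC gain $T(0) = 1/k$ can be compared against a direct time-domain simulation of the mass, spring and damper equation. A Python sketch using semi-implicit Euler integration (the parameter values are illustrative, not from the text):

```python
# Simulate m x'' + b x' + k x = f for a constant force f and confirm that
# x settles at the DC gain times the input, T(0) * f = f / k.
m, b, k, f = 1.0, 4.0, 9.0, 18.0
x, v, dt = 0.0, 0.0, 1e-4

for _ in range(200000):           # 20 s of simulated time
    a = (f - b * v - k * x) / m   # acceleration from the equation of motion
    v += a * dt                   # semi-implicit Euler update
    x += v * dt

print(x)  # should be close to f / k = 2.0
```

With these values the poles are at $-2 \pm j\sqrt{5}$, so the transient has fully decayed after 20 s and the displacement settles at $f/k$.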

This is the Transfer Function model of the mass, spring and damper system in Figure 2.9.

Example 2.6 Consider the RLC series circuit in Figure 2.10. Find the Transfer Function model of the dynamic system, where v(t) is the input and v_C(t) is the output.

Solution 2.6 As discussed in Example 2.2, the Input-Output model of this RLC series network is derived as follows:

$$v(t) = v_R(t) + v_C(t) + v_L(t) = Ri(t) + \frac{1}{C}\int i(t)\,dt + L\dot{i}(t).$$

But $i(t) = C\dot{v}_C(t)$, so

$$v(t) = RC\dot{v}_C(t) + v_C(t) + LC\ddot{v}_C(t)$$
$$\Rightarrow v(t) = LC\ddot{v}_C(t) + RC\dot{v}_C(t) + v_C(t).$$

The Transfer Function model is obtained by taking the Laplace transforms of this Input-Output model:

$$V(s) = s^2LCV_C(s) + sRCV_C(s) + V_C(s)$$
$$\Rightarrow T(s) = \frac{V_C(s)}{V(s)} = \frac{1}{LCs^2 + sRC + 1} = \frac{\dfrac{1}{LC}}{s^2 + \dfrac{R}{L}s + \dfrac{1}{LC}}.$$

Figure 2.10 RLC series network in time domain.


This is the Transfer Function model of the RLC series network. It can also be obtained by considering the impedances of the network in the Laplace domain. For the RLC series network in Figure 2.10, the impedance of each circuit element (resistor, capacitor and inductor) is derived as follows:

$$v_R(t) = Ri(t) \Rightarrow V_R(s) = RI(s) \Rightarrow Z_R(s) = R$$
$$v_C(t) = \frac{1}{C}\int i(t)\,dt \Rightarrow V_C(s) = \frac{1}{sC}I(s) \Rightarrow Z_C(s) = \frac{1}{sC}$$
$$v_L(t) = L\dot{i}(t) \Rightarrow V_L(s) = sLI(s) \Rightarrow Z_L(s) = sL$$

Hence the overall impedance of the network is obtained from Figure 2.11 as follows:

$$V(s) = I(s)\left[R + \frac{1}{sC} + sL\right]$$
$$\Rightarrow Z(s) = \frac{V(s)}{I(s)} = Z_R(s) + Z_C(s) + Z_L(s) = R + \frac{1}{sC} + sL.$$

This is the overall impedance of the RLC series network. If the output of the network is the voltage across the capacitor, then the Transfer Function model of the network is given by the ratio of the output and overall impedances as follows:

Figure 2.11 RLC series network (impedances).

$$T(s) = \frac{V_C(s)}{V(s)} = \frac{Z_C(s)I(s)}{Z(s)I(s)} = \frac{Z_C(s)}{Z(s)} = \frac{\dfrac{1}{sC}}{R + \dfrac{1}{sC} + sL}$$
$$\Rightarrow T(s) = \frac{\dfrac{1}{LC}}{s^2 + \dfrac{R}{L}s + \dfrac{1}{LC}}.$$
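The impedance route can be exercised numerically by evaluating the voltage divider at a sample complex frequency and comparing it with the polynomial form $1/(LCs^2 + RCs + 1)$. A Python sketch with illustrative component values:

```python
import cmath

# Illustrative component values (not from the text)
R, L, C = 100.0, 0.5, 1e-4
s = 1j * 2 * cmath.pi * 50                           # sample point s = j*2*pi*50

Z_R, Z_L, Z_C = R, s * L, 1 / (s * C)
T_divider = Z_C / (Z_R + Z_C + Z_L)                  # ratio of impedances
T_polynomial = 1 / (L * C * s ** 2 + R * C * s + 1)  # closed form derived above
print(T_divider, T_polynomial)
```

The two evaluations agree to machine precision, since multiplying the divider's numerator and denominator by $sC$ yields the polynomial form.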

As already established, this is the Transfer Function model of the RLC series network. Here it has been obtained from impedance analysis. It is important to note that extracting the Transfer Function model from the Input-Output model is more intuitive. This is quite clear from the preceding derivations of the Transfer Function models from the Input-Output models of the two basic examples – the mass, spring and damper system and the RLC series circuit.

Example 2.7 Consider a system with the following State-Variable Matrix model:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} a_1 & a_2 \\ a_3 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}u \tag{2.16}$$

$$y = \begin{bmatrix} c_1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + [0]\,u, \tag{2.17}$$

where $x_1(t)$ and $x_2(t)$ are the state variables, $u(t)$ is the input and $y(t)$ is the output. Find the Transfer Function model of the dynamic system.

Solution 2.7 To obtain the system's Transfer Function model, the Input-Output model must be established first. This is done by eliminating all the state variables from the state-variable equations except for the output $y(t)$ (and its derivatives) and the input $u(t)$ (and its derivatives). From the matrix form, the state-variable equations are given by

$$\dot{x}_1 = a_1x_1 + a_2x_2 + b_1u$$
$$\dot{x}_2 = a_3x_1 + b_2u \tag{2.18}$$
$$y = c_1x_1.$$

Elimination of the state variables proceeds as follows:

$$\dot{x}_1 = a_1x_1 + a_2x_2 + b_1u \Rightarrow \ddot{x}_1 = a_1\dot{x}_1 + a_2\dot{x}_2 + b_1\dot{u} \tag{2.19}$$
$$y = c_1x_1 \Rightarrow x_1 = \frac{y}{c_1}.$$

Substituting for $\dot{x}_2$ (Equation 2.18) into Equation 2.19 gives

$$\ddot{x}_1 = a_1\dot{x}_1 + a_2(a_3x_1 + b_2u) + b_1\dot{u}. \tag{2.20}$$


Replacing $x_1$ by $\dfrac{y}{c_1}$ in Equation 2.20 leads to

$$\frac{\ddot{y}}{c_1} = a_1\frac{\dot{y}}{c_1} + a_2\left(a_3\frac{y}{c_1} + b_2u\right) + b_1\dot{u}$$
$$\ddot{y} = a_1\dot{y} + a_2a_3y + c_1a_2b_2u + c_1b_1\dot{u}$$
$$\ddot{y} - a_1\dot{y} - a_2a_3y = c_1a_2b_2u + c_1b_1\dot{u}. \tag{2.21}$$

Equation 2.21 is the input-output differential equation for the system, which is the Input-Output model. Therefore, the Transfer Function model of the system can be obtained by taking Laplace transforms of this Input-Output model as follows:

$$s^2Y(s) - sy(0) - \dot{y}(0) - a_1\left[sY(s) - y(0)\right] - a_2a_3Y(s) = c_1a_2b_2U(s) + c_1b_1\left[sU(s) - u(0)\right].$$

Assuming zero initial conditions,

$$s^2Y(s) - a_1sY(s) - a_2a_3Y(s) = c_1a_2b_2U(s) + c_1b_1sU(s)$$
$$Y(s)\left[s^2 - a_1s - a_2a_3\right] = U(s)\left[c_1b_1s + c_1a_2b_2\right].$$

Hence, the Transfer Function model of the dynamic system is given by

$$T(s) = \frac{Y(s)}{U(s)} = \frac{c_1b_1s + c_1a_2b_2}{s^2 - a_1s - a_2a_3}.$$
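The same transfer function also follows from the standard state-space formula $T(s) = C(sI - A)^{-1}B + D$ (not derived in this section). A Python spot-check at one complex frequency, with illustrative coefficient values:

```python
# A = [[a1, a2], [a3, 0]], B = [b1, b2]^T, C = [c1, 0], D = 0 (Equations 2.16-2.17).
# Coefficient values below are illustrative, not from the text.
a1, a2, a3, b1, b2, c1 = -3.0, 1.0, -2.0, 1.0, 4.0, 2.0
s = 1.0 + 2.0j

# (sI - A) and its inverse via the 2x2 adjugate
M11, M12, M21, M22 = s - a1, -a2, -a3, s - 0.0
det = M11 * M22 - M12 * M21            # = s^2 - a1*s - a2*a3
x1 = (M22 * b1 - M12 * b2) / det       # first entry of (sI - A)^{-1} B
T_matrix = c1 * x1                     # y = C x with C = [c1, 0]

T_derived = (c1 * b1 * s + c1 * a2 * b2) / (s ** 2 - a1 * s - a2 * a3)
print(T_matrix, T_derived)
```

Both routes give the same complex value, as expected from the elimination carried out above.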

2.8.6 THE s-OPERATOR METHOD

Starting with the time-domain models (State-Variable Matrix model or input-output differential equation form), use the differential operator identity

$$s \equiv \frac{d}{dt}.$$

This operator is used to convert the time-domain models into the frequency domain, thus facilitating the derivation of the transfer function. The s-operator has also been called the p-operator. It is important to note that if the Transfer Function model of the system is given, then the input-output differential equation is easily obtained using the same identity; however, the initial conditions will be lacking.

The s-operator is also useful in the general reduction of simultaneous differential equations. It is often necessary to combine a set of differential equations involving more than one dependent variable into a single differential equation with a single dependent variable. An example of such an equation is the input-output differential equation model. This is very handy when it is not obvious how to eliminate the unwanted variables easily. This algebraic method provides a useful means of manipulating sets of differential equations with constant coefficients. It is important to note that s must operate on the variable or expression that follows it, and that it is not a variable or algebraic quantity itself. Consider the state-variable system in Equations 2.16 and 2.17, whose state-variable equations are given by

$$\dot{x}_1 = a_1x_1 + a_2x_2 + b_1u$$
$$\dot{x}_2 = a_3x_1 + b_2u$$
$$y = c_1x_1.$$


The three equations can be expressed in terms of the s-operator as follows:

$$sX_1 = a_1X_1 + a_2X_2 + b_1U \tag{2.22}$$
$$sX_2 = a_3X_1 + b_2U \tag{2.23}$$
$$Y = c_1X_1 \Rightarrow X_1 = \frac{Y}{c_1}. \tag{2.24}$$

Pre-multiplying Equation 2.22 by s gives

$$s^2X_1 = a_1sX_1 + a_2sX_2 + b_1sU. \tag{2.25}$$

Substituting for $sX_2$ (Equation 2.23) and $X_1$ (Equation 2.24) in Equation 2.25 leads to

$$s^2X_1 = a_1sX_1 + a_2\left[a_3X_1 + b_2U\right] + b_1sU$$
$$s^2\frac{Y}{c_1} = a_1s\frac{Y}{c_1} + a_2\left[a_3\frac{Y}{c_1} + b_2U\right] + b_1sU$$
$$Y\left[s^2 - a_1s - a_2a_3\right] = U\left[c_1b_1s + c_1a_2b_2\right]. \tag{2.26}$$

Hence, the system transfer function is given by

$$T(s) = \frac{Y(s)}{U(s)} = \frac{c_1b_1s + c_1a_2b_2}{s^2 - a_1s - a_2a_3},$$

which is the same result as obtained by taking the direct Laplace transforms of the input-output Equation 2.21. Although the s-operator can be used to find the transfer function without first finding the input-output differential equation, it can also be used to get simplified time-domain equations such as input-output differential equations. An expression of the input-output differential equation (Equation 2.21) can be obtained from Equation 2.26 by reversing the s-operator:

$$Y\left[s^2 - a_1s - a_2a_3\right] = U\left[c_1b_1s + c_1a_2b_2\right]$$
$$s^2Y - a_1sY - a_2a_3Y = c_1b_1sU + c_1a_2b_2U$$
$$\ddot{y} - a_1\dot{y} - a_2a_3y = c_1b_1\dot{u} + c_1a_2b_2u.$$

When the s-operator is used in this way, i.e., in the general reduction of simultaneous differential equations into a single differential equation with a single dependent variable, it is sometimes called the p-operator. Consider a system represented by the following pair of equations, where $y(t)$ is the output and $u(t)$ is the input:

$$\dot{x} + 2x + y = 3u \tag{2.27}$$
$$2\dot{x} + 5x - 2\dot{y} + 2y = 0. \tag{2.28}$$

The input-output differential equation and the system transfer function can be found by applying the s-operator. In terms of the s-operator, Equations 2.27 and 2.28 become

$$(s+2)X + Y = 3U$$
$$(2s+5)X + (-2s+2)Y = 0.$$

Pre-multiplying the first equation by $(2s+5)$ and pre-multiplying the second by $(s+2)$ gives

$$(2s+5)\left[(s+2)X + Y\right] = 3U(2s+5) \tag{2.29}$$
$$(s+2)\left[(2s+5)X + (-2s+2)Y\right] = 0. \tag{2.30}$$

Subtracting Equation 2.30 from Equation 2.29 gives

$$\left[(2s+5) - (s+2)(-2s+2)\right]Y = 3U(2s+5)$$
$$(2s^2 + 4s + 1)Y = U(6s+15). \tag{2.31}$$

The input-output differential equation is obtained by reversing the s-operator in Equation 2.31. Hence,

$$2s^2Y + 4sY + Y = 6sU + 15U$$
$$2\ddot{y} + 4\dot{y} + y = 6\dot{u} + 15u.$$

The transfer function is obtained from Equation 2.31, thus

$$T(s) = \frac{Y(s)}{U(s)} = \frac{6s+15}{2s^2+4s+1}.$$

It is important to note that the s-operator (or p-operator) has the same effect as taking the Laplace transform of a system.
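The coefficient algebra in this reduction can be automated with simple polynomial arithmetic on coefficient lists (descending powers of s). A Python sketch reproducing the step from Equations 2.29–2.31:

```python
def poly_mul(p, q):
    # Multiply two polynomials given as coefficient lists (descending powers).
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_sub(p, q):
    # Subtract q from p, padding the shorter list with leading zeros.
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [pi - qi for pi, qi in zip(p, q)]

# Denominator: (2s + 5) - (s + 2)(-2s + 2); numerator: 3(2s + 5)
den = poly_sub([2.0, 5.0], poly_mul([1.0, 2.0], [-2.0, 2.0]))
num = poly_mul([3.0], [2.0, 5.0])
print(den, num)  # [2.0, 4.0, 1.0] [6.0, 15.0] -> T(s) = (6s+15)/(2s^2+4s+1)
```

The coefficient lists match the transfer function derived above by hand.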

2.8.7 THE COMPONENT TRANSFER FUNCTION METHOD

In this approach, the overall transfer function is obtained by finding the transfer functions of each component of the system and then using the relevant physical interconnection relationships or laws to form a set of algebraic equations involving the component transfer functions. These frequency-domain equations are then solved to obtain the overall transfer function relating the output in Laplace transforms to the input in Laplace transforms. The block diagram or the signal-flow diagram can also be used to establish the overall system transfer function from component transfer functions.

As an example, consider a car's cruise control system, where a component transfer function $T_1(s)$ from the speed error $e(t)$ to the control signal $u(t)$ is given by

$$U(s) = KE(s) \Rightarrow T_1(s) = \frac{U(s)}{E(s)} = K.$$

Another component transfer function $T_2(s)$ from the control signal $u(t)$ to the output speed $v(t)$ is given by

$$V(s) = U(s)\left[\frac{b}{ms+b}\right] \Rightarrow T_2(s) = \frac{V(s)}{U(s)} = \frac{b}{ms+b}.$$

Therefore, the overall transfer function $T(s)$ from the speed error $e(t)$ to the output speed $v(t)$ is obtained as

$$T(s) = \frac{V(s)}{E(s)} = \frac{V(s)}{U(s)} \times \frac{U(s)}{E(s)} = T_1(s)T_2(s) = \frac{bK}{ms+b}.$$
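This cascade can be checked numerically at a sample frequency; the values of K, m and b below are illustrative, not from the text.

```python
# T(s) = T1(s) T2(s) for the cruise-control cascade, checked at one point.
K, m, b = 50.0, 1000.0, 70.0
s = 0.05j

T1 = K                         # from E(s) to U(s)
T2 = b / (m * s + b)           # from U(s) to V(s)
T_cascade = T1 * T2
T_closed = b * K / (m * s + b)  # bK/(ms + b)
print(T_cascade, T_closed)
```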


Hence, in this case, the overall transfer function is obtained as a product of the component transfer functions. The following electrical circuit problem further illustrates the use of the component transfer function method.

Example 2.8 In the RLC electrical circuit shown below, assuming zero initial conditions, find the following:

An RLC Electrical Circuit

(a) The time-domain equation relating $i(t)$ and $v_1(t)$.
(b) The time-domain equation relating $i(t)$ and $v_2(t)$.
(c) The component transfer functions $T_1(s) = \dfrac{V_1(s)}{I(s)}$ and $T_2(s) = \dfrac{V_2(s)}{I(s)}$.
(d) The overall system transfer function $T(s) = \dfrac{V_2(s)}{V_1(s)}$.

Solution 2.8 (a) and (b): KVL is used to get the expressions for $v_1(t)$ and $v_2(t)$ as follows:

$$v_1(t) = v_L + v_R + v_C = L\frac{di}{dt} + Ri(t) + \frac{1}{C}\int i(t)\,dt \tag{2.32}$$
$$v_2(t) = v_C = \frac{1}{C}\int i(t)\,dt. \tag{2.33}$$

(c) The transfer function $T_1(s)$ is obtained by taking the Laplace transforms (assuming zero initial conditions) of Equation 2.32:

$$V_1(s) = sLI(s) + RI(s) + \frac{I(s)}{sC} = I(s)\left[sL + R + \frac{1}{sC}\right]$$
$$T_1(s) = \frac{V_1(s)}{I(s)} = sL + R + \frac{1}{sC}.$$


Similarly, the transfer function $T_2(s)$ is obtained by taking the Laplace transforms of Equation 2.33:

$$V_2(s) = \frac{I(s)}{sC}$$
$$T_2(s) = \frac{V_2(s)}{I(s)} = \frac{1}{sC}.$$

(d) The overall transfer function $T(s)$ is obtained from $T_1(s)$ and $T_2(s)$ as follows:

$$T(s) = \frac{V_2(s)}{V_1(s)} = \frac{V_2(s)}{I(s)} \times \frac{I(s)}{V_1(s)} \quad \text{(multiplication by } I(s)/I(s) = 1\text{)}$$
$$= \frac{T_2(s)}{T_1(s)} = \frac{\dfrac{1}{sC}}{sL + R + \dfrac{1}{sC}}$$
$$\Rightarrow T(s) = \frac{\dfrac{1}{LC}}{s^2 + \dfrac{R}{L}s + \dfrac{1}{LC}}.$$

This is the Transfer Function model of the RLC series network. In this case, the overall transfer function is a ratio of the two component transfer functions. It is prudent to note that this result is the same as that obtained by getting the Input-Output model first or by using impedance analysis, as demonstrated in Example 2.6 for an identical RLC series network.

2.8.8 THE TRANSFER FUNCTION IN POLE-ZERO FACTORED FORM

A special form of the transfer function of a dynamic system is the pole-zero factored form, where the numerator is expressed as a product of zero factors and the denominator as a product of pole factors, as opposed to being expressed as polynomials of s. The pole-zero form is derived by rewriting the transfer function as

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b_0s^m + b_1s^{m-1} + b_2s^{m-2} + \cdots + b_m}{a_0s^n + a_1s^{n-1} + a_2s^{n-2} + \cdots + a_n}$$
$$= \frac{b_0}{a_0}\left[\frac{s^m + \dfrac{b_1}{b_0}s^{m-1} + \dfrac{b_2}{b_0}s^{m-2} + \cdots + \dfrac{b_m}{b_0}}{s^n + \dfrac{a_1}{a_0}s^{n-1} + \dfrac{a_2}{a_0}s^{n-2} + \cdots + \dfrac{a_n}{a_0}}\right]$$
$$= \frac{b_0}{a_0}\left[\frac{(s-z_1)(s-z_2)\cdots(s-z_m)}{(s-p_1)(s-p_2)\cdots(s-p_n)}\right] = K\,\frac{\Pi_{i=1}^{m}(s-z_i)}{\Pi_{j=1}^{n}(s-p_j)}, \tag{2.34}$$

where $K = \dfrac{b_0}{a_0}$. Equation 2.34 represents the pole-zero factored form of the transfer function and is useful because it clearly indicates the zeros ($z_i$) and poles ($p_j$) of the system. This form is very useful in determining system characteristics such as stability, and will be extensively used in later chapters.
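For a concrete case, the transfer function $T(s) = (6s+15)/(2s^2+4s+1)$ obtained with the s-operator method earlier can be put into pole-zero factored form. A Python sketch using the quadratic formula:

```python
import math

b0, b1 = 6.0, 15.0            # numerator coefficients
a0, a1, a2 = 2.0, 4.0, 1.0    # denominator coefficients

K = b0 / a0                   # gain b0/a0
z1 = -b1 / b0                 # the single zero
disc = math.sqrt(a1 ** 2 - 4 * a0 * a2)
p1 = (-a1 + disc) / (2 * a0)  # poles from the quadratic formula
p2 = (-a1 - disc) / (2 * a0)
print(K, z1, p1, p2)

# Both forms should agree at an arbitrary sample point.
s = 0.7
polynomial_form = (b0 * s + b1) / (a0 * s ** 2 + a1 * s + a2)
factored_form = K * (s - z1) / ((s - p1) * (s - p2))
print(polynomial_form, factored_form)
```

Here $K = 3$, the zero is at $-2.5$, and the poles are the real roots of $2s^2+4s+1$; the factored and polynomial forms evaluate identically.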

2.9 BLOCK DIAGRAM FORM MODEL

A block diagram is an interconnection of symbols representing certain basic mathematical operations in such a way that the overall diagram obeys the system's mathematical model. In the diagram, the lines interconnecting the blocks represent the variables describing the system behaviour, such as the input and state variables. This is the Block Diagram Form model (Block Diagram model) of a dynamic system. Inspecting this model may provide new insight into the system's structure and behaviour beyond that available from the differential equations themselves. The block diagram can be simplified into one block with an input and an output, both in Laplace form. This final block is unique for a dynamic system, whereas several different block diagrams can be designed for the same system [11].

To obtain the transfer function of a system, it is required to find the Laplace transform of the equations of motion and solve the resulting algebraic equations for the relationship between the input and the output. In many control systems, the system equations can be written so that their components do not interact except by having the input of one part be the output of another part. In these cases, it is easy to draw a block diagram that represents the mathematical relationships. The transfer function of each component is placed in a box, and the input-output relationships between components are indicated by lines and arrows. It is important to note that for the Block Diagram model, T(s) is reserved for the overall transfer function of the network of blocks, while G(s) is used to denote the transfer function of one block, usually the plant. Hence, a single general block diagram for a system with an input u(t) and an output y(t) has a transfer function G(s), as shown in Figure 2.12. The transfer function, the ratio (in Laplace transforms) of the output to the input, is given by

$$G(s) = \frac{Y(s)}{U(s)}.$$

Thus, the block diagram can represent a dynamic system with the transfer function of the system printed inside the block. This is the final (non-reducible) form of the Block Diagram model of a dynamic system. It is unique for any dynamic system, since the Transfer Function model G(s) is unique. For illustration, consider the Transfer Function model of the RLC series network derived in Examples 2.6 and 2.8.

Figure 2.12 Block diagram model (final form).


Figure 2.13 Block diagram model for an RLC series network.

$$G(s) = \frac{\dfrac{1}{LC}}{s^2 + \dfrac{R}{L}s + \dfrac{1}{LC}}$$

This Transfer Function model can be presented in the form of a Block Diagram model as shown in Figure 2.13. This is the Block Diagram model of the RLC series network. Similarly, the Block Diagram model of the mass, spring and damper dynamic system in Example 2.1 (Figure 2.2) can be deduced from the Transfer Function model as follows:

$$m\ddot{x}(t) + b\dot{x}(t) + kx(t) = f(t)$$
$$\Rightarrow G(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^2 + bs + k}.$$

Hence, Figure 2.14 is the Block Diagram model of the mass, spring and damper system which was presented in Example 2.1 (Figure 2.2).

2.9.1 NETWORKS OF BLOCKS

The blocks in a block diagram can be connected in different basic forms that include series, parallel, negative feedback and positive feedback networks. These basic forms constitute the building structures of complex block diagrams and it is essential to understand how their transfer functions are obtained. This knowledge can then be used to simplify complex block diagrams and determine their transfer functions. Two blocks in series are represented in Figure 2.15. Hence, the overall transfer function is obtained as follows:

Figure 2.14 Block diagram model for a mass, spring and damper system.

Figure 2.15 Blocks in series.


$$T(s) = \frac{Y_2(s)}{U(s)} = \frac{G_2(s)G_1(s)U(s)}{U(s)} = G_2(s)G_1(s).$$

The final block diagram of two blocks in series is shown in Figure 2.16. Two blocks in parallel are represented in Figure 2.17. Hence, the overall transfer function is obtained as follows:

$$T(s) = \frac{Y(s)}{U(s)} = \frac{G_1(s)U(s) + G_2(s)U(s)}{U(s)} = \frac{\left[G_1(s) + G_2(s)\right]U(s)}{U(s)} = G_1(s) + G_2(s).$$

The final block diagram of two blocks in parallel is shown in Figure 2.18.
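These two combination rules can be exercised numerically with two illustrative first-order blocks, evaluated at a sample complex frequency:

```python
# Series and parallel combinations, checked at a sample point s.
s = 1.0 + 1.0j
G1 = 1 / (s + 1)
G2 = 2 / (s + 3)

T_series = G1 * G2        # G2(s) G1(s)
T_parallel = G1 + G2      # G1(s) + G2(s)

# Closed forms over the common denominator (s + 1)(s + 3):
series_closed = 2 / ((s + 1) * (s + 3))
parallel_closed = (3 * s + 5) / ((s + 1) * (s + 3))
print(T_series, T_parallel)
```

The direct combinations agree with the single rational functions obtained by multiplying out or placing over a common denominator.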

Figure 2.16 Series: final block diagram.

Figure 2.17 Blocks in parallel.

Figure 2.18 Parallel: final block diagram.


Figure 2.19 Negative feedback block diagram.

For negative feedback the block diagram takes the form shown in Figure 2.19. The closed-loop transfer function for the negative feedback system is then obtained as follows:

$$U_2(s) = G_1(s)U_1(s)$$
$$Y_2(s) = G_2(s)U_2(s) \Rightarrow Y_2(s) = G_2(s)G_1(s)U_1(s)$$
$$U_1(s) = R(s) - Y_2(s)$$
$$Y_1(s) = G_1(s)U_1(s)$$
$$\Rightarrow Y_1(s) = G_1(s)\left[R(s) - Y_2(s)\right] = G_1(s)\left[R(s) - G_2(s)G_1(s)U_1(s)\right]$$
$$Y_1(s) = G_1(s)\left[R(s) - G_2(s)G_1(s)\frac{Y_1(s)}{G_1(s)}\right]$$
$$\Rightarrow Y_1(s) = G_1(s)R(s) - G_1(s)G_2(s)Y_1(s)$$
$$\Rightarrow Y_1(s)\left[1 + G_1(s)G_2(s)\right] = G_1(s)R(s)$$
$$\Rightarrow T(s) = \frac{Y_1(s)}{R(s)} = \frac{G_1(s)}{1 + G_1(s)G_2(s)}.$$

The final block diagram for a negative feedback system is shown in Figure 2.20. The positive feedback block diagram is similar to the negative feedback one, except that the feedback signal is added and not subtracted. The closed-loop transfer function for the positive feedback block diagram is

Figure 2.20 Negative feedback: final block diagram.


Figure 2.21 Positive feedback block diagram.

obtained in the same way as for the negative feedback one. This is illustrated as follows (using Figure 2.21):

$$U_2(s) = G_1(s)U_1(s)$$
$$Y_2(s) = G_2(s)U_2(s) \Rightarrow Y_2(s) = G_2(s)G_1(s)U_1(s)$$
$$U_1(s) = R(s) + Y_2(s)$$
$$Y_1(s) = G_1(s)U_1(s)$$
$$\Rightarrow Y_1(s) = G_1(s)\left[R(s) + Y_2(s)\right] = G_1(s)\left[R(s) + G_2(s)G_1(s)\frac{Y_1(s)}{G_1(s)}\right]$$
$$\Rightarrow Y_1(s) = G_1(s)R(s) + G_1(s)G_2(s)Y_1(s)$$
$$\Rightarrow Y_1(s)\left[1 - G_1(s)G_2(s)\right] = G_1(s)R(s)$$
$$\Rightarrow T(s) = \frac{Y_1(s)}{R(s)} = \frac{G_1(s)}{1 - G_1(s)G_2(s)}.$$

It is instructive to note that for the positive and negative feedback block diagrams, the closed-loop transfer function has the same form,

$$T(s) = \frac{G_1(s)}{1 \pm G_1(s)G_2(s)},$$

where $T(s)$ is the Transfer Function model (or overall system gain) of the negative or positive feedback control system, $G_1(s)$ is the open-loop gain, which is a function of frequency, and $G_2(s)$ is the gain of the feedback path, which is a function of frequency. The + sign applies to the negative feedback system and the − sign to the positive feedback system. The corresponding final Block Diagram model (for both the negative and positive feedback systems) is shown in Figure 2.22. Feedback control systems will be discussed in detail in Chapter 4.
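The ± formula can be cross-checked by iterating the loop signals directly until they settle. A Python sketch with illustrative blocks; the iteration converges because $|G_1(s)G_2(s)| < 1$ at the chosen sample point.

```python
# Closed-loop gain by signal tracing vs the formula G1/(1 +/- G1 G2).
s = 2.0j
G1 = 10 / (s + 1)
G2 = 0.1
results = {}

for sign, name in ((+1, "negative"), (-1, "positive")):
    T_formula = G1 / (1 + sign * G1 * G2)
    y = 0.0            # iterate y = G1 (r - sign*G2*y) with r = 1
    for _ in range(200):
        y = G1 * (1 - sign * G2 * y)
    results[name] = (T_formula, y)
    print(name, T_formula, y)
```

For both feedback signs the fixed point of the loop iteration matches the closed-form expression, since the fixed-point equation $y = G_1(1 \mp G_2 y)$ rearranges to exactly that formula.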


Figure 2.22 Negative or positive feedback: final block diagram.

2.9.2 NEGATIVE FEEDBACK IN OP-AMP CIRCUITS

An operational amplifier (op-amp) is a widely used component in analogue electronics. Only a few external components are required to configure it to perform a broad spectrum of tasks such as amplification, addition, subtraction, multiplication and integration. The op-amp's versatile functionality is due to its use of both negative and positive feedback. This is achieved by sampling part of the output (or the entire output) and then subtracting it from, or adding it to, the input to accomplish the desired outcome. Consider the op-amp circuit with negative feedback shown in Figure 2.23, where K is the internal differential voltage gain of the op-amp. This is a very large number, such as 250,000. The Transfer Function model of the negative feedback circuit is obtained as follows:

K[Vi(s) − Vo(s)] = Vo(s)

KVi(s) − KVo(s) = Vo(s)

=⇒ Vo(s)[1 + K] = KVi(s)

=⇒ T(s) = Vo(s)/Vi(s) = K/(1 + K)

This is the Transfer Function model of the negative feedback op-amp in Figure 2.23. However, since K is very large, this leads to

T(s) = 1/(1 + 1/K) ≈ 1

This means the output voltage is always equal to the input voltage, which means this negative feedback op-amp circuit is a 'follower' or a 'repeater'. The key advantage of using an op-amp with negative feedback is that the actual differential voltage gain of the op-amp is immaterial, as long as it is very large. Consequently, op-amp differential gains do not have to be precisely calibrated for the circuit builder to design an amplifier circuit with a precise Transfer Function model. Negative feedback makes the system self-correcting and self-stabilising. This is in keeping with the general properties of negative feedback control, which is insensitive to modelling errors and effectively rejects disturbances.
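This insensitivity to the exact value of K can be illustrated numerically. A minimal Python sketch (the gain value 250,000 is from the text; the other gain values are illustrative):

```python
# Closed-loop gain of the op-amp follower: T = K/(1 + K).
# With negative feedback, large changes in the open-loop gain K
# barely move the closed-loop gain, which stays close to 1.

def follower_gain(K):
    return K / (1 + K)

for K in (100_000, 250_000, 500_000):
    print(f"K = {K:>7}: T = {follower_gain(K):.8f}")
```

Doubling or halving K changes T only far beyond the fifth decimal place, which is why the op-amp's exact differential gain need not be calibrated.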

Figure 2.23 Negative feedback in an op-amp circuit: a follower.

81

Modelling of Dynamic Systems

Figure 2.24 Negative feedback in an op-amp circuit: a multiplier.

Consider the op-amp negative feedback circuit in Figure 2.24. The Transfer Function model of the negative feedback circuit is obtained as follows:

K[Vi(s) − (Z1(s)/(Z1(s) + Z2(s)))Vo(s)] = Vo(s)

KVi(s) − K(Z1(s)/(Z1(s) + Z2(s)))Vo(s) = Vo(s)

=⇒ Vo(s)[1 + K(Z1(s)/(Z1(s) + Z2(s)))] = KVi(s)

=⇒ T(s) = Vo(s)/Vi(s) = K/[1 + K(Z1(s)/(Z1(s) + Z2(s)))]

However, since K is very large, this leads to

T(s) = 1/[(1/K) + Z1(s)/(Z1(s) + Z2(s))] ≈ (Z1(s) + Z2(s))/Z1(s).

This is the general Transfer Function model of the negative feedback op-amp in Figure 2.24. For example, if Z1(s) = 1,000 ohms and Z2(s) = 1,000 ohms, then the Transfer Function model becomes

T(s) = (1,000 + 1,000)/1,000 = 2.

This means the output voltage is always double the input voltage, which means this negative feedback op-amp circuit becomes a multiplier or an amplifier. By giving different values to the impedances Z1(s) and Z2(s) in the op-amp circuit in Figure 2.24, or by reconfiguring the negative feedback differently, negative feedback in op-amp circuits can be used to perform various functions. Negative feedback in op-amp circuits is also extensively used to implement PID controllers. This is discussed in detail in Chapter 4.
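The finite-gain and ideal expressions can be compared numerically. A Python sketch (Z1 = Z2 = 1,000 ohms and K = 250,000 are from the text, treating the impedances as pure resistances):

```python
# Op-amp circuit of Figure 2.24 with a feedback voltage divider:
#   exact:  T = K / (1 + K*Z1/(Z1 + Z2))
#   ideal:  T = (Z1 + Z2)/Z1   (limit as K -> infinity)

def exact_gain(K, Z1, Z2):
    return K / (1 + K * Z1 / (Z1 + Z2))

def ideal_gain(Z1, Z2):
    return (Z1 + Z2) / Z1

Z1 = Z2 = 1_000                     # ohms
print(exact_gain(250_000, Z1, Z2))  # very close to 2
print(ideal_gain(Z1, Z2))           # exactly 2.0
```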

2.9.3 POSITIVE FEEDBACK IN OP-AMP CIRCUITS

The negative feedback op-amp circuit in Figure 2.24 can be modified slightly to create an op-amp circuit that has positive feedback. The inverting and the non-inverting inputs of the op-amp are


Figure 2.25 Positive feedback in an op-amp circuit.

Figure 2.26 Positive feedback in an op-amp circuit: an oscillator.

switched such that the signal input now comes through the negative terminal, and the positive terminal becomes the pin that receives feedback from the output through the impedance-driven voltage divider. This op-amp circuit with positive feedback is shown in Figure 2.25. The output of this circuit does not maintain equilibrium with the input; it saturates to either of two states in a nonlinear fashion. Positive feedback drastically increases the gain of the system, and the system is not stable. This means positive feedback in an op-amp circuit cannot be used to create a multiplier, since the outcome is highly nonlinear. If the circuit in Figure 2.25 is modified by adding a capacitor between the negative terminal and ground and a resistor between the negative terminal and output, a simple relaxation oscillator is produced. This positive feedback oscillator, illustrated in Figure 2.26, is an effective way of demonstrating an op-amp circuit with positive feedback.

2.9.4 SIMPLIFYING BLOCK DIAGRAM MODELS

When a dynamic system is represented by a Block Diagram model (with a network of blocks and variables), the structure can be reduced to one block, thus providing the overall Transfer Function model of the system. For the simple series, parallel and feedback networks, this was achieved by algebraic manipulation in the previous section. However, the overall Transfer Function model can also be obtained by using block diagram simplification, which is often easier and more informative than algebraic manipulation, even though the methods are in every way equivalent. The central motivation behind simplifying a Block Diagram model is to reduce the complexity of the diagram and obtain the overall transfer function while maintaining the same relationship among the remaining variables. There are two main techniques that can be used to achieve this objective. The first method is direct block diagram reduction using block diagram algebra, and the other approach is based on signal flow diagram analysis.


Figure 2.27 Summary of block diagram algebra.

Once the block diagrams have been simplified, the overall system transfer function can be obtained. If the block diagram is not available, its structure can be established from component transfer functions. It is important to note that blocks can be connected in series only if the output of one block is not affected by the following block. If there are any loading effects between the components, it is necessary to combine these components into a single block. Any number of cascaded blocks representing non-loading components can be replaced by a single block, whose transfer function is simply the product of the individual transfer functions. The most important block diagram rules are summarised in Figure 2.27.
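These combination rules can be mechanised on transfer functions represented as ratios of polynomials. A Python sketch (the helper names and the sample transfer function are mine, not the book's; coefficients are in ascending powers of s):

```python
# Transfer functions as (num, den) pairs of polynomial coefficient
# lists in ascending powers of s. Three block diagram rules:
#   series:   G1*G2
#   parallel: G1 + G2
#   feedback: G/(1 + G*H)   (negative feedback)

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

def series(g1, g2):
    return poly_mul(g1[0], g2[0]), poly_mul(g1[1], g2[1])

def parallel(g1, g2):
    num = poly_add(poly_mul(g1[0], g2[1]), poly_mul(g2[0], g1[1]))
    return num, poly_mul(g1[1], g2[1])

def feedback(g, h):
    # G/(1 + G*H) = (Ng*Dh) / (Dg*Dh + Ng*Nh)
    num = poly_mul(g[0], h[1])
    den = poly_add(poly_mul(g[1], h[1]), poly_mul(g[0], h[0]))
    return num, den

# Unity feedback around G(s) = 1/(s + 1) gives 1/(s + 2).
G = ([1.0], [1.0, 1.0])  # 1/(1 + s)
H = ([1.0], [1.0])       # unity feedback
print(feedback(G, H))    # ([1.0], [2.0, 1.0])
```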

2.9.4.1 Direct Block Diagram Reduction

A complicated block diagram involving many feedback loops can be simplified by a step-by-step rearrangement, using rules of block diagram algebra. This is called Direct Block Diagram Reduction. This algebra depends on equivalence between simple block diagram configurations, as shown in the following illustrations in Figure 2.28.

Example 2.9 Consider the block diagram in Figure 2.29. (a) Find the system Transfer Function model. (b) Choose system gains to make the transfer function unity.

Solution 2.9 The block diagram is reduced in three stages. First, the pick-off point before G2 is moved to after G2, creating Figure 2.30. This network is then reduced to Figure 2.31 by replacing the positive feedback loop with one block. Thereafter, the two blocks in series are combined and the resultant negative feedback loop is replaced with one block, leading to Figure 2.32. Hence, the final Transfer Function model is obtained by simplifying the content of Figure 2.32 as follows:

T(s) = Y(s)/U(s)

     = [G1G2/(1 − G2H2)] / [1 + (H1/G2)(G1G2/(1 − G2H2))]

=⇒ T(s) = G1G2/(1 − G2H2 + H1G1).


Figure 2.28 Examples of block diagram algebra.

Figure 2.29 Block diagram.

(b) The design problem is to choose gains such that

T(s) = Y(s)/U(s) = G1G2/(1 − G2H2 + H1G1) = 1.

One set of gains that will satisfy this requirement is obtained by setting the gain products to 1, i.e., G1 G2 = G2 H2 = H1 G1 = 1.


Figure 2.30 Block diagram.

Figure 2.31 Block diagram.

Figure 2.32 Block diagram.

For example, if G1 is chosen to be 100, it then follows that

G1 = 100 =⇒ G2 = 1/100 and H1 = 1/100

G2 = 1/100 =⇒ H2 = 100.
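This choice of gains can be checked numerically against the closed-loop formula of Example 2.9. A Python sketch (direct evaluation of the derived expression):

```python
# Closed-loop transfer function of Example 2.9, evaluated with the
# gains chosen above to make the overall transfer function unity.

def T(G1, G2, H1, H2):
    return G1 * G2 / (1 - G2 * H2 + H1 * G1)

print(T(100, 1 / 100, 1 / 100, 100))  # ≈ 1.0
```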

2.9.5 WORKED EXAMPLES OF SIMPLIFYING BLOCK DIAGRAMS

Example 2.10 Determine the Transfer Function model linking R(s) and Y(s) in the Block Diagram model in Figure 2.33.

Solution 2.10 In Figure 2.33, simplifying the first internal negative feedback loop produces the diagram in Figure 2.34. In Figure 2.34, simplifying the resultant negative feedback loop C, and then combining the outcome with block G2 in series, leads to Figure 2.35. In Figure 2.35, moving the middle block N to the left side of (i.e., before) the left summer produces Figure 2.36. (Alternatively, the middle block N can be moved to the right of (i.e., after) the right summer.) Now, in Figure 2.36, reversing the order of the summers, by moving the positive signal from G3 (from the second summer) to the first summer and moving the negative feedback signal from the first summer to the second summer, produces Figure 2.37. In Figure 2.37, separately simplifying the parallel and negative


Figure 2.33 Block diagram.

Figure 2.34 Block diagram.

Figure 2.35 Block diagram.

feedback networks, and then combining the resultant two blocks in series, leads to the derivation of the Transfer Function model as follows:

T(s) = Y(s)/R(s) = (N + G3) × [1/(1 + NH3)]

=⇒ T(s) = [G1G2 + G3 + G3G1H1 + G3G1H2]/[1 + G1H1 + G1H2 + G1G2H3].

This is the Transfer Function model of the dynamic system in Figure 2.33.


Figure 2.36 Block diagram.

Figure 2.37 Block diagram.

Alternatively, in Figure 2.35, the block N can be moved to after the second summer, producing Figure 2.38. Now, reversing the order of the summers, by moving the positive signal from block N (from the second summer) to the first summer and moving the negative feedback signal from the first summer to the second summer, produces Figure 2.39. Separately simplifying the parallel and negative feedback networks in Figure 2.39, and then combining the resultant two blocks in series, leads to the derivation of the Transfer Function model as follows:

Figure 2.38 Block diagram.


Figure 2.39 Block diagram.

T(s) = Y(s)/R(s) = [1 + G3/N] × [N/(1 + NH3)]

     = (N + G3)/(1 + NH3)

=⇒ T(s) = [G1G2 + G3 + G3G1H1 + G3G1H2]/[1 + G1H1 + G1H2 + G1G2H3].

Clearly, this is the same Transfer Function model as that already established. Example 2.11 Find the Transfer Function model of the system in the following Block Diagram model.

Block Diagram


Solution 2.11 The block diagram is simplified as follows:

Therefore, the overall Transfer Function model is given by

T(s) = Y(s)/U(s) = G1G2G3/[1 − G1G2H1 − G2G3H2 − G1G2G3].

Example 2.12 Obtain a State-Variable Matrix model (State-Space model) for the chemical processing plant represented by the following Block Diagram model.

A Chemical Processing Plant


Solution 2.12 The system involves one integrator and two delayed integrators. The output of each integrator or delayed integrator can be a state variable. Let the output of the plant be defined as x1, the output of the controller as x2, and the output of the sensor as x3. It then follows that

X1(s)/X2(s) = 10/(s + 5)

X2(s)/[U(s) − X3(s)] = 1/s

X3(s)/X1(s) = 1/(s + 1)

Y(s) = X1(s),

which can be rewritten as

sX1(s) = −5X1(s) + 10X2(s)

sX2(s) = −X3(s) + U(s)

sX3(s) = X1(s) − X3(s)

Y(s) = X1(s).

By taking the inverse Laplace transforms of the preceding four equations, the state-variable equations are obtained as

x˙1 = −5x1 + 10x2
x˙2 = −x3 + u
x˙3 = x1 − x3
y = x1.

Thus, a State-Space model of the system in the standard form is given by

[x˙1]   [−5  10   0] [x1]   [0]
[x˙2] = [ 0   0  −1] [x2] + [1] u
[x˙3]   [ 1   0  −1] [x3]   [0]

y = [1 0 0][x1 x2 x3]^T.

It is important to note that this is not the only state-space representation of the system, i.e., it is not a unique representation. Many different State-Space models can be derived for the same dynamic system. However, the minimum number of state variables is the same in any state-space representation of the same system. In the present system, the number of state variables is three, regardless of what variables are chosen as state variables.
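One way to validate a State-Space model is to integrate it numerically and inspect the steady state. A Python sketch for Example 2.12 using forward-Euler integration (the step size, horizon and unit step input are my choices): setting the derivatives to zero with u = 1 gives the equilibrium x1 = x3 = 1 and x2 = 0.5, so the output y = x1 should settle at 1.

```python
# Forward-Euler simulation of the State-Space model of Example 2.12:
#   x1' = -5*x1 + 10*x2,  x2' = -x3 + u,  x3' = x1 - x3,  y = x1.
dt, t_end = 0.0005, 60.0
x1 = x2 = x3 = 0.0
u = 1.0                       # unit step input
for _ in range(int(t_end / dt)):
    dx1 = -5.0 * x1 + 10.0 * x2
    dx2 = -x3 + u
    dx3 = x1 - x3
    x1 += dt * dx1
    x2 += dt * dx2
    x3 += dt * dx3
# Steady state: x1 = 1, x2 = 0.5, x3 = 1, so y = x1 settles at 1.
print(round(x1, 3), round(x2, 3), round(x3, 3))
```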

2.10 EXAMPLES OF DYNAMIC SYSTEM MODELLING

In this section, a number of detailed examples are presented to clearly demonstrate the concepts of modelling. They are classified into translational, rotational, electrical, electromechanical, mechatronic and robotic dynamic systems.


2.10.1 TRANSLATIONAL MECHANICAL SYSTEMS

Mechanical systems can be translational or rotational, and can be modelled using free-body diagrams together with element and interconnection laws. Here the focus is on translational mechanical systems.

Example 2.13 The input to the translational mechanical system shown in the following diagram is the displacement x3(t) of the right end of the spring k1. The displacement of m2 relative to m1 is x2. The forces exerted by the springs are zero when x1 = x2 = x3 = 0.

Translational Mechanical System (a) Draw the free-body diagrams of the system (b) Obtain the equations of motion. Solution 2.13 (a) The free-body diagrams are drawn as follows:

(b) The equations of motion are then obtained from the free-body diagrams:

m1x¨1 + bx˙2 − k1[x3(t) − x1] = 0

m1(x¨1 − x¨2) + k2(x1 − x2) − bx˙2 = 0


(a) Draw the free-body diagrams for the system.
(b) Obtain the equations of motion.
(c) Choose a minimum set of variables for the system and justify the choice.
(d) Derive the State-Variable Matrix model by expressing the equations of motion in a State-Variable Matrix model (i.e., obtain A, B, C, D).

Solution 2.14 (a) The forces acting on the masses are shown in the following free-body diagrams.

Translational Mechanical System: Free-body Diagram

(b) The equations of motion are obtained as:

m1x¨1 + b1x˙1 + k1x1 + b2(x˙1 − x˙2) + k2(x1 − x2) = 0  (2.35)

m2x¨2 + b2(x˙2 − x˙1) + k2(x2 − x1) + k3(x2 − y2(t)) = 0  (2.36)

(c) There are five energy-storing elements (two masses and three springs), but only four are independent, since m2 and k3 are dependent. Therefore, the state variables are chosen as

x = [x1 x˙1 x2 x˙2]^T,

where the input is y2(t).

(d) Two of the state-variable derivative equations are trivial identities: the derivative of x1 is x˙1 and the derivative of x2 is x˙2. The other two equations are obtained by rearranging Equations 2.35 and 2.36:

x¨1 = −[(k1 + k2)/m1]x1 − [(b1 + b2)/m1]x˙1 + (k2/m1)x2 + (b2/m1)x˙2

x¨2 = (k2/m2)x1 + (b2/m2)x˙1 − [(k2 + k3)/m2]x2 − (b2/m2)x˙2 + (k3/m2)y2(t).

The output is the spring force in the spring k3:

y = k3(x2 − y2(t)) = k3x2 − k3y2(t).


Therefore, the state-variable system is given by

[x˙1]   [      0             1           0          0    ] [x1]    [  0  ]
[x¨1] = [−(k1+k2)/m1   −(b1+b2)/m1    k2/m1       b2/m1  ] [x˙1] + [  0  ] y2(t),
[x˙2]   [      0             0           0          1    ] [x2]    [  0  ]
[x¨2]   [   k2/m2         b2/m2    −(k2+k3)/m2   −b2/m2  ] [x˙2]   [k3/m2]

where the output is given by

y = [0 0 k3 0][x1 x˙1 x2 x˙2]^T + [−k3][y2(t)].

This is the State-Variable Matrix model of the dynamic system.

Example 2.15 Consider the mechanical dynamic system in Figure 2.40, where the movement of a large object is controlled by manipulating a much smaller object that is mechanically coupled with it. A force u(t) is applied to the small mass m in order to position the larger mass M. The coupling between the objects is modelled by a spring constant k with a damping coefficient b. Find the State-Variable Matrix model of the system.

Solution 2.15 Two free-body diagrams can be established, as shown in Figure 2.41. From the first free-body diagram it follows that:

Figure 2.40 Mass, spring and damper system.

Figure 2.41 Free-body diagrams.


u(t) = mx¨ + b(x˙ − y˙) + k(x − y)

mx¨ = −kx − bx˙ + ky + by˙ + u(t)

x¨ = −(k/m)x − (b/m)x˙ + (k/m)y + (b/m)y˙ + u(t)/m

From the second free-body diagram:

My¨ = b(x˙ − y˙) + k(x − y)

y¨ = (k/M)x + (b/M)x˙ − (k/M)y − (b/M)y˙

Choosing the state variables as [x x˙ y y˙]^T, the state-variable equation is

[x˙]   [  0     1     0     0  ] [x ]   [ 0 ]
[x¨] = [−k/m  −b/m   k/m   b/m ] [x˙] + [1/m] u
[y˙]   [  0     0     0     1  ] [y ]   [ 0 ]
[y¨]   [ k/M   b/M  −k/M  −b/M ] [y˙]   [ 0 ]

The output, which is the displacement of the second mass, is given by

y = [0 0 1 0][x x˙ y y˙]^T + [0]u

This is the complete State-Variable Matrix model for the system.

Example 2.16 Find the State-Variable Matrix model for the mechanical dynamic system in Figure 2.42.

Figure 2.42 Mass, spring and damper system.


Figure 2.43 Mass, spring and damper system.

Solution 2.16 This is an example of a dynamic mechanical system. To obtain its State-Variable Matrix model, two free-body diagrams must be established, as shown in Figure 2.43. They are drawn together, showing the common forces between them, which only differ in direction. From the first free-body diagram, it follows that:

u(t) = mx¨ + b(x˙ − y˙) + k(x − y)

mx¨ = −kx − bx˙ + ky + by˙ + u(t)

x¨ = −(k/m)x − (b/m)x˙ + (k/m)y + (b/m)y˙ + u(t)/m

From the second free-body diagram, we have:

b(x˙ − y˙) + k(x − y) = My¨

y¨ = (k/M)x + (b/M)x˙ − (k/M)y − (b/M)y˙

The vector of state variables can be chosen as [x x˙ y y˙]^T. Now we can obtain the state-variable equation as follows:

[x˙]   [  0     1     0     0  ] [x ]   [ 0 ]
[x¨] = [−k/m  −b/m   k/m   b/m ] [x˙] + [1/m] u(t)
[y˙]   [  0     0     0     1  ] [y ]   [ 0 ]
[y¨]   [ k/M   b/M  −k/M  −b/M ] [y˙]   [ 0 ]

The output is represented as:

y = [0 0 1 0][x x˙ y y˙]^T + [0]u(t)

These two preceding matrix equations provide the complete State-Variable Matrix model of the dynamic system.
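The two-mass model of Examples 2.15 and 2.16 can be sanity-checked by numerical integration. A Python sketch using forward-Euler integration (the parameter values m = 1, M = 2, k = 10, b = 2 and the constant force u = 3 are illustrative, not from the text): under a constant force the assembly settles to a common acceleration u/(m + M), with the spring deflection x − y settling at Mu/(k(m + M)) = 0.2.

```python
# Forward-Euler simulation of the coupled two-mass model:
#   x'' = (-k*x - b*x' + k*y + b*y' + u)/m
#   y'' = ( k*x + b*x' - k*y - b*y')/M
m, M, k, b, u = 1.0, 2.0, 10.0, 2.0, 3.0   # illustrative values
x = xd = y = yd = 0.0
dt = 0.0002
for _ in range(int(20.0 / dt)):
    xdd = (-k * x - b * xd + k * y + b * yd + u) / m
    ydd = (k * x + b * xd - k * y - b * yd) / M
    x += dt * xd
    xd += dt * xdd
    y += dt * yd
    yd += dt * ydd
rel = x - y                    # spring deflection
print(round(rel, 3))           # settles at M*u/(k*(m + M)) = 0.2
print(round(abs(xd - yd), 3))  # relative velocity dies out
```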


Example 2.17 For the translational dynamic system in Figure 2.44, obtain the system Transfer Function model T(s) = X1(s)/F(s).

Solution 2.17 To solve the problem, two free-body diagrams must be drawn, as shown in Figures 2.45 and 2.46. It is very important to ensure that, for each free-body diagram, all the forces at play are represented, with their directions carefully determined. The directions of the arrows must reflect what is physically happening to the free body, as shown in Figure 2.45. An additional free-body diagram for the system is represented in Figure 2.46. The forces that are common to the two free-body diagrams must be the same, and only differ in direction. In fact, the two free-body diagrams can be drawn in one diagram where the common forces are only indicated once. Summing the various forces from the first free-body diagram leads to:

bx˙1 + Mx¨1 = k(x2 − x1)

=⇒ bx˙1 + Mx¨1 + kx1 = kx2

Similarly, we can obtain the following from the second free-body diagram:

k(x2 − x1) = f(t)

x2 = [f(t) + kx1]/k

Now we have an expression for x2. Substituting this into the differential equation above leads to:

bx˙1 + Mx¨1 + kx1 = k[f(t) + kx1]/k

As stated before, the objective is to obtain the system transfer function given by T(s) = X1(s)/F(s).

This is achieved by simplifying the aforementioned expression and taking Laplace transforms as follows:

Figure 2.44 Mass, spring and damper system.

Figure 2.45 First free-body diagram.

Figure 2.46 Second free-body diagram.


bx˙1 + Mx¨1 + kx1 − kx1 = f(t)

Mx¨1 + bx˙1 = f(t)

X1(s)(Ms² + bs) = F(s)

=⇒ T(s) = X1(s)/F(s) = 1/[s(Ms + b)]

This is the general Transfer Function model of the dynamic system. It is important to note that the model is independent of the spring constant k. For the specific case where M = 2, b = 4 and k = 5, we get the following specific model:

T(s) = 1/[s(2s + 4)] = 1/[2s(s + 2)]

This is the specific Transfer Function model of the dynamic system.

Example 2.18 Find the Transfer Function model of the dynamic system in Figure 2.47.

Solution 2.18 The two free-body diagrams of the system are determined as shown in Figure 2.48. From the two free-body diagrams the equations of motion are obtained:

f(t) = k(x1 − x2) + b(x˙1 − x˙2) + M1x¨1

M2x¨2 = k(x1 − x2) + b(x˙1 − x˙2)

These are the differential equations that describe the dynamic system. The Transfer Function model is obtained by taking Laplace transforms of these equations and eliminating X1(s) as follows:

Figure 2.47 Dynamic system.

Figure 2.48 Free-body diagrams.


F(s) = k(X1(s) − X2(s)) + b(sX1(s) − sX2(s)) + s²M1X1(s)

=⇒ F(s) = X1(s)[k + bs + M1s²] − X2(s)[k + bs]

M2s²X2(s) = k[X1(s) − X2(s)] + b[sX1(s) − sX2(s)]

=⇒ X1(s)[k + bs] = X2(s)[M2s² + bs + k]

=⇒ X1(s) = X2(s)[M2s² + bs + k]/(k + sb)

=⇒ F(s) = X2(s){[M2s² + bs + k]/(k + sb)}[k + bs + M1s²] − X2(s)[k + bs]

=⇒ T(s) = X2(s)/F(s)

        = (k + bs)/{[M2s² + bs + k][k + bs + M1s²] − (k + bs)²}

=⇒ T(s) = (k + bs)/[s²(k + bs)(M1 + M2) + M1M2s⁴]

This is the general Transfer Function model of the dynamic system. For the specific case where the parameter values are given as k = 1 N/m, b = 1 Ns/m and M1 = M2 = 1 kg, the specific model can be deduced as

T(s) = X2(s)/F(s) = (1 + s)/(2s² + 2s³ + s⁴)

This is the specific Transfer Function model of the dynamic system.

Example 2.19 Find the Transfer Function model of the dynamic system in Figure 2.49.

Solution 2.19 The free-body diagrams for the system are derived as shown in Figure 2.50. Given these free-body diagrams, the objective is to obtain the Transfer Function model given by

T(s) = X1(s)/F(s)

The first step is to draw the two free-body diagrams as shown in Figure 2.50. The next step is establishing the differential equations for the dynamic system and then expressing them in the Laplace

Figure 2.49 Dynamic system.


Figure 2.50 Free-body diagrams.

domain. Thereafter, X2(s) is eliminated and the transfer function is obtained. From the free-body diagrams it follows that:

k1x1 + b1x˙1 + M1x¨1 = k2(x2 − x1) + b2(x˙2 − x˙1)

f(t) = M2x¨2 + b3x˙2 + k2(x2 − x1) + b2(x˙2 − x˙1)

Taking the Laplace transform of these equations leads to:

X1(s)[(k1 + k2) + s(b1 + b2) + s²M1] = X2(s)[k2 + sb2]

=⇒ X2(s) = X1(s)[(k1 + k2) + s(b1 + b2) + s²M1]/(k2 + sb2)

F(s) = X2(s)[s²M2 + s(b3 + b2) + k2] − X1(s)[k2 + sb2]

=⇒ F(s) = X1(s){[(k1 + k2) + s(b1 + b2) + s²M1]/(k2 + sb2)}[s²M2 + s(b3 + b2) + k2] − X1(s)[k2 + sb2]

=⇒ T(s) = X1(s)/F(s)

        = (k2 + sb2)/{[(k1 + k2) + s(b1 + b2) + s²M1][s²M2 + s(b3 + b2) + k2] − (k2 + sb2)²}

This is the general Transfer Function model of the dynamic system. The model for the specific case, where the parameters are k1 = 4 N/m, k2 = 5 N/m, b1 = 3 Ns/m, b2 = 3 Ns/m, b3 = 2 Ns/m and M1 = M2 = 2 kg, is then given by

T(s) = X1(s)/F(s)

     = (3s + 5)/[(2s² + 6s + 9)(2s² + 5s + 5) − (3s + 5)(3s + 5)]

     = (3s + 5)/(4s⁴ + 22s³ + 49s² + 45s + 20)

This is the specific Transfer Function model of the dynamic system.

2.10.2 ROTATIONAL MECHANICAL SYSTEMS

In this section, examples illustrating the modelling of rotational mechanical systems are presented. The procedure is similar to that used in modelling translational mechanical systems. Example 2.20 Consider the rotational mechanical system shown below (Figure 2.51). (a) Draw the free-body diagrams of the system. (b) Find the dynamic system model in the form of an input-output differential equation, where the input is angular speed ω(t) and the output is angular speed ω1 (t). Solution 2.20 The free-body diagram for disk J is shown below.

Free-Body Diagram for Example 2.20

From the free-body diagram of disk J, the dynamic equation can be deduced:

b1[ω(t) − ω1] = Jω˙1 + b2ω1

Jω˙1 + (b1 + b2)ω1 = b1ω(t)

ω˙1 + [(b1 + b2)/J]ω1 = [b1/J]ω(t).
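The first-order model can be checked numerically: for a constant input speed ω, the disk speed ω1 should settle at b1ω/(b1 + b2). A Python sketch with forward-Euler integration (all parameter values are illustrative, not from the text):

```python
# Forward-Euler integration of J*w1' + (b1 + b2)*w1 = b1*w.
J, b1, b2 = 2.0, 3.0, 1.0   # illustrative values
w = 5.0                     # constant input angular speed
w1 = 0.0
dt = 0.001
for _ in range(int(20.0 / dt)):
    w1 += dt * (b1 * w - (b1 + b2) * w1) / J
print(round(w1, 4))  # settles at b1*w/(b1 + b2) = 3.75
```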

Example 2.21 (a) Draw the free-body diagrams for the rotational mechanical system shown below (Figure 2.52). (b) Explain why the state-variable vector should be chosen as x(t) = [θ1 θ3 ω1 ω2 ]T . (c) Express the dynamic equations in the State-Variable Matrix model (i.e. obtain A, B, C, D) where the input is the applied torque T (t) and the output is the viscous frictional torque on disk J2 .

Figure 2.51 Diagram for Example 2.20.


Figure 2.52 Rotational mechanical system.

Solution 2.21 (a) The free-body diagrams are given by

Free-Body Diagrams

(b) There are four independent energy-storing elements: two shafts and two disks (with moments of inertia J1 and J2). The potential energy stored in a shaft depends on the angular displacement θ(t), and the kinetic energy stored in a rotating disk depends on the angular speed ω(t). Hence, the minimum set of state variables can be represented by x(t) = [θ1 θ3 ω1 ω2]^T.

(c) From the free-body diagrams the equations of motion are given by

J1ω˙1 + (k1 + k2)θ1 − k2θ3 = 0

J2ω˙2 + bω2 − bθ˙3 = T(t)

−k2θ1 − bω2 + bθ˙3 + k2θ3 = 0.

Adding the second and third equations, the following equation is obtained:

−k2θ1 + J2ω˙2 + k2θ3 = T(t).

Rewriting these equations in terms of the state variables gives

θ˙1 = ω1

ω˙1 = (1/J1)[−(k1 + k2)θ1 + k2θ3]

ω˙2 = (1/J2)[k2θ1 − k2θ3 + T(t)]

θ˙3 = (1/b)[k2θ1 + bω2 − k2θ3].


The output equation is given by

Tb = b(ω2 − ω3) = b(ω2 − θ˙3)

   = bω2 − [k2θ1 + bω2 − k2θ3]

   = −k2θ1 + k2θ3.

Putting the aforementioned equations in matrix form produces

[θ˙1]   [     0         1   0     0    ] [θ1]   [  0  ]
[ω˙1] = [−(k1+k2)/J1   0   0   k2/J1  ] [ω1] + [  0  ] [T]
[ω˙2]   [   k2/J2      0   0  −k2/J2  ] [ω2]   [1/J2 ]
[θ˙3]   [   k2/b       0   1   −k2/b  ] [θ3]   [  0  ]

Tb = [−k2 0 0 k2][θ1 ω1 ω2 θ3]^T + [0][T].

This is the State-Variable Matrix model of the dynamic system.

Example 2.22 (a) Draw the free-body diagram of the gear and shaft mechanism shown below (Figure 2.53). (b) Find expressions for the gear ratio, N, and the angular displacements, θa(t) and θb(t). (c) Find the equivalent stiffness constant keq such that the algebraic model of the system is given by

θ1 = T(t)/keq,

where keq is a function of k1, k2, ra, and rb.

Figure 2.53 A gear and shaft mechanism.


Solution 2.22 (a) The free-body diagrams are shown here.

Free-Body Diagrams

(b) The gear ratio is given by

N = rb/ra = θa/θb =⇒ θa = Nθb = (rb/ra)θb.  (2.37)

From the free-body diagrams:

T = k1(θ1 − θa) =⇒ θ1 = T/k1 + θa  (first free-body diagram)  (2.38)

T = k1(θ1 − θa) = ra fc =⇒ fc = T/ra  (second free-body diagram)  (2.39)

rb fc = k2θb  (third free-body diagram)

=⇒ θb = (rb/k2)fc = (rb/k2)(T/ra)  (using Equation 2.39)

=⇒ θb = (rb/ra)(T/k2).  (2.40)

Substituting this expression for θb(t) (Equation 2.40) into Equation 2.37 leads to the expression for θa(t):

θa = (rb/ra)[(rb/ra)(T/k2)] = (rb/ra)²(T/k2).  (2.41)

Substituting this expression for θa(t) into Equation 2.38 leads to

θ1 = T/k1 + (rb/ra)²(T/k2)

   = [1/k1 + (rb/ra)²/k2]T ≡ T/keq

=⇒ 1/keq = 1/k1 + (rb/ra)²/k2

=⇒ keq = k1k2/[k2 + k1(rb/ra)²].

This is the equivalent torsional stiffness keq for the gear and shaft system. It can also be expressed in terms of the gear ratio as

keq = k1k2/(k2 + k1N²).

Example 2.23 In the following rotational mechanical system, there is a driving torque T(t) exerted on disk J1 and a load torque TL(t) exerted on disk J2 (Figure 2.54). (a) Draw the free-body diagrams for the system. (b) Choose the minimum number of state variables for this system and give a rationale for the choice. (c) Express the dynamic equations in a State-Variable Matrix model (i.e., obtain A, B, C, D) where both the applied torque T(t) and the load torque TL(t) are considered as inputs. The output is the torque in the shaft k.

Solution 2.23 (a) The free-body diagrams are shown as follows.

Free-Body Diagrams

Figure 2.54 Rotational mechanical system.


(b) There are three independent energy-storing elements: one shaft and two disks (with moments of inertia J1 and J2). The potential energy stored in a shaft depends on the angular displacement θ(t), and the kinetic energy stored in a rotating disk depends on the angular speed ω(t). Hence, the minimum set of state variables can be represented by x(t) = [θr ω1 ω3]^T, where θr(t) is the relative angular displacement between the two ends of the shaft,

θr(t) = θ2 − θ3  (2.42)

=⇒ θ˙r(t) = ω2 − ω3.  (2.43)

(c) From the free-body diagrams the equations of motion are determined:

J1ω˙1 + b(ω1 − ω2) = T

b(ω1 − ω2) = k(θ2 − θ3)

k(θ2 − θ3) = TL + J2ω˙3.

Replacing (θ2 − θ3) by θr in these equations gives

J1ω˙1 + b(ω1 − ω2) = T  (2.44)

b(ω1 − ω2) = kθr  (2.45)

kθr = TL + J2ω˙3.  (2.46)

Substituting Equation 2.45 into Equation 2.44 gives

J1ω˙1 + kθr = T.  (2.47)

Also, from Equation 2.45, it follows that

ω2 = ω1 − (k/b)θr.  (2.48)

Substituting Equation 2.48 into Equation 2.43 gives

θ˙r = ω1 − (k/b)θr − ω3.  (2.49)

Rearranging Equations 2.46, 2.47 and 2.49 produces the state-variable equations:

θ˙r = −(k/b)θr + ω1 − ω3

ω˙1 = −(k/J1)θr + T/J1

ω˙3 = (k/J2)θr − TL/J2.

The output equation is given by

Tk = k(θ2 − θ3) = kθr.


The State-Variable Matrix model is obtained by extracting the coefficients of the variables and inputs:

[θ˙r]   [−k/b    1   −1] [θr]   [  0      0   ]
[ω˙1] = [−k/J1   0    0] [ω1] + [1/J1     0   ] [T ]
[ω˙3]   [ k/J2   0    0] [ω3]   [  0    −1/J2 ] [TL]

Tk = [k 0 0][θr ω1 ω3]^T + [0 0][T TL]^T.

This is the State-Variable Matrix model of the dynamic system.
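The model can be exercised numerically. A Python sketch using forward-Euler integration (the values J1 = J2 = 1, k = 2, b = 1 and the constant, equal torques T = TL = 2 are illustrative, not from the text): with the load torque equal to the drive torque, the shaft twist settles at θr = T/k, so the shaft torque Tk = kθr settles at T.

```python
# Forward-Euler simulation of Example 2.23 with equal drive and
# load torques:
#   thr' = -(k/b)*thr + w1 - w3
#   w1'  = (-k*thr + T)/J1
#   w3'  = ( k*thr - TL)/J2
J1, J2, k, b = 1.0, 1.0, 2.0, 1.0   # illustrative values
T, TL = 2.0, 2.0                    # equal constant torques
thr = w1 = w3 = 0.0
dt = 0.0005
for _ in range(int(30.0 / dt)):
    dthr = -(k / b) * thr + w1 - w3
    dw1 = (-k * thr + T) / J1
    dw3 = (k * thr - TL) / J2
    thr += dt * dthr
    w1 += dt * dw1
    w3 += dt * dw3
Tk = k * thr
print(round(Tk, 3))       # shaft torque settles at T = 2.0
print(round(w1 - w3, 3))  # speed difference settles at (k/b)*thr = 2.0
```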

2.10.3 ELECTRICAL SYSTEMS (RLC NETWORKS)

In this section, examples illustrating the modelling of electrical systems are presented. These circuits consist of voltage sources, current sources, resistors, capacitors, inductors and operational amplifiers in various configurations. Most of them take the form of RLC networks or a combination thereof. As discussed earlier, an RLC circuit is an electrical circuit consisting of a resistor (R), an inductor (L), and a capacitor (C), connected in series or parallel. Techniques for modelling a series RLC circuit have already been covered in previous sections. The full modelling is repeated for completeness and comparison with a parallel RLC circuit.

Example 2.24 (a) For an RLC series network, derive the Input-Output, Transfer Function, and Block Diagram models. (b) Consider an RLC parallel network where the output is the voltage across the capacitor, and the input is a source current. Derive the Input-Output and Transfer Function models of the system. (c) Consider the same RLC parallel network in (b). However, the output is the current flowing through the capacitor and not the voltage. Derive the Input-Output and Transfer Function models of the system.

Solution 2.24 (a) Consider the RLC series circuit in Figure 2.55, where the output is the voltage across the capacitor, vC(t), and the input is a source voltage, v(t).

Figure 2.55 RLC series network in time domain.


The Input-Output model is derived as follows:

v(t) = vR(t) + vC(t) + vL(t)

     = Ri(t) + (1/C)∫i(t)dt + L(di/dt)

But i(t) = Cv˙C(t)

=⇒ v(t) = LCv¨C(t) + RCv˙C(t) + vC(t)

This is the input-output differential equation of the RLC series network, which is also called the Input-Output model of the system. The Transfer Function model is obtained by taking the Laplace transforms of this Input-Output model:

V(s) = s²LCVC(s) + sRCVC(s) + VC(s)

=⇒ T(s) = VC(s)/V(s) = 1/(LCs² + sRC + 1) = (1/LC)/[s² + (R/L)s + 1/LC]

This is the Transfer Function model of the RLC series network. It can also be obtained by considering the impedance of the network in the Laplace domain. For the RLC series network in Figure 2.55, the impedance across each circuit element (resistor, capacitor, and inductor) is derived as follows:

vR(t) = Ri(t) =⇒ VR(s) = RI(s) =⇒ ZR(s) = R

vC(t) = (1/C)∫i(t)dt =⇒ VC(s) = [1/(sC)]I(s) =⇒ ZC(s) = 1/(sC)

vL(t) = L(di/dt) =⇒ VL(s) = sLI(s) =⇒ ZL(s) = sL

Hence the overall impedance of the network is obtained from Figure 2.11 as follows:

V(s) = I(s)[R + 1/(sC) + sL]

=⇒ Z(s) = V(s)/I(s) = ZR(s) + ZC(s) + ZL(s) = R + 1/(sC) + sL

This is the overall RLC series network impedance.
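The equivalence of the impedance route and the algebraic route can be confirmed numerically at sample points s = jω using complex arithmetic. A Python sketch (the component values are illustrative):

```python
# Check that ZC/(ZR + ZC + ZL) equals (1/LC)/(s^2 + (R/L)s + 1/LC)
# for the RLC series network, at sample points s = j*omega.
R, L, C = 10.0, 0.1, 1e-4   # illustrative component values

def T_impedance(s):
    ZR, ZC, ZL = R, 1 / (s * C), s * L
    return ZC / (ZR + ZC + ZL)

def T_standard(s):
    return (1 / (L * C)) / (s**2 + (R / L) * s + 1 / (L * C))

for omega in (10.0, 100.0, 1000.0):
    s = 1j * omega
    assert abs(T_impedance(s) - T_standard(s)) < 1e-9
print("impedance and algebraic forms agree")
```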


Design and Analysis of Control Systems

Figure 2.56 RLC network block diagram and transfer function.

If the output of the network is the voltage across the capacitor, then the Transfer Function model of the network is obtained as the ratio of the output impedance to the overall (input) impedance as follows (Figure 2.56):

$$T(s) = \frac{V_C(s)}{V(s)} = \frac{Z_C(s)I(s)}{Z(s)I(s)} = \frac{Z_C(s)}{Z(s)} = \frac{\dfrac{1}{sC}}{R + \dfrac{1}{sC} + sL} = \frac{\dfrac{1}{LC}}{s^2 + \dfrac{R}{L}s + \dfrac{1}{LC}}$$

As already established, this is the Transfer Function model of the RLC series network. It is presented as a Block Diagram model in Figure 2.56.

(b) The modelling of the RLC parallel circuit follows the methods adopted for the series one. For a parallel network, the input is a source current, i(t), and the output is the voltage across the capacitor, vC(t), as depicted in Figure 2.57. The Input-Output model of the RLC parallel network can be obtained by using KCL as follows:

$$i(t) = i_R(t) + i_L(t) + i_C(t) = \frac{v_R(t)}{R} + \frac{1}{L}\int v_L(t)\,dt + C\dot{v}_C(t)$$

But $v_R(t) = v_L(t) = v_C(t)$.

Figure 2.57 RLC parallel network.


$$\Rightarrow i(t) = \frac{v_C(t)}{R} + \frac{1}{L}\int v_C(t)\,dt + C\dot{v}_C(t)$$

Differentiating throughout gives

$$\Rightarrow \dot{i}(t) = C\ddot{v}_C + \frac{\dot{v}_C}{R} + \frac{v_C}{L} \qquad (2.50)$$

This is the Input-Output model for the RLC parallel circuit. The corresponding Transfer Function model can be obtained by taking Laplace transforms of this equation:

$$sI(s) = Cs^2V_C(s) + \frac{sV_C(s)}{R} + \frac{V_C(s)}{L}$$

$$\Rightarrow T(s) = \frac{V_C(s)}{I(s)} = \frac{s}{Cs^2 + \dfrac{s}{R} + \dfrac{1}{L}} = \frac{\dfrac{s}{C}}{s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}}$$

This is the Transfer Function model of the RLC parallel network with a current source as an input. It can also be derived from impedance analysis. The overall impedance of the RLC parallel network is obtained as follows:

$$I(s) = V(s)\left[\frac{1}{R} + \frac{1}{sL} + sC\right]$$

$$\Rightarrow \frac{1}{Z(s)} = \frac{I(s)}{V(s)} = \frac{1}{Z_R(s)} + \frac{1}{Z_L(s)} + \frac{1}{Z_C(s)} = \frac{1}{R} + \frac{1}{sL} + sC = \frac{C\left(s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}\right)}{s}$$

$$\Rightarrow Z(s) = \frac{\dfrac{s}{C}}{s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}}$$

This is the overall RLC parallel network impedance. Since the output of the RLC parallel network is the voltage across the capacitor and the input is a source current, the Transfer Function model of the network is determined as follows:


$$T(s) = \frac{V_C(s)}{I(s)} = \frac{V(s)}{\dfrac{V(s)}{Z(s)}} = Z(s)$$

$$\Rightarrow T(s) = Z(s) = \frac{\dfrac{s}{C}}{s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}}$$

This is the Transfer Function model of the RLC parallel network, where the output is the voltage across the capacitor and the input is a source current. It is important to note that this transfer function is the same as the impedance of the parallel RLC network.

(c) If the output of the RLC parallel network is the current flowing through the capacitor and the input is a source current, then the Input-Output and Transfer Function models can also be determined. The Input-Output model is obtained by expressing Equation 2.50 in terms of the input current and the capacitor current. Since

$$i_C(t) = C\dot{v}_C(t) \Rightarrow \dot{v}_C(t) = \frac{i_C(t)}{C},$$

substituting in $\dot{i}(t) = C\ddot{v}_C + \dfrac{\dot{v}_C}{R} + \dfrac{v_C}{L}$ (Equation 2.50) gives

$$\dot{i}(t) = \dot{i}_C(t) + \frac{i_C(t)}{RC} + \frac{1}{LC}\int i_C(t)\,dt$$

$$\Rightarrow \ddot{i}(t) = \ddot{i}_C(t) + \frac{\dot{i}_C(t)}{RC} + \frac{i_C(t)}{LC}$$

This is the Input-Output model for the case where the output of the RLC parallel network is the current flowing through the capacitor and the input is a source current. The Transfer Function model can be obtained by taking Laplace transforms of this equation:

$$s^2I(s) = s^2I_C(s) + \frac{sI_C(s)}{RC} + \frac{I_C(s)}{LC} = I_C(s)\left(s^2 + \frac{s}{RC} + \frac{1}{LC}\right)$$

$$\Rightarrow T(s) = \frac{I_C(s)}{I(s)} = \frac{s^2}{s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}}$$

This is the Transfer Function model of this parallel network, where the input is a source current, and the output is the current in the capacitor. It can also be obtained by impedance analysis as follows:


$$T(s) = \frac{I_C(s)}{I(s)} = \frac{\dfrac{V_C(s)}{Z_C(s)}}{\dfrac{V(s)}{Z(s)}} = \frac{Z(s)}{Z_C(s)} \quad (\text{since } V_C(s) = V(s))$$

$$= \frac{\dfrac{s}{C}}{s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}} \times sC = \frac{s^2}{s^2 + \dfrac{s}{RC} + \dfrac{1}{LC}}$$

As already established, this is the Transfer Function model of the RLC parallel network where the output is the current flowing through the capacitor and the input is a source current.

Example 2.25 Find the Transfer Function model of the circuit in Figure 2.58, where there are two resistors and an inductor.

Solution 2.25 Whenever there is an inductor, it is imperative to note (in addition to realising that the impedances of a capacitor and an inductor are given by 1/(sC) and sL, respectively) that:

$$v_L = L\frac{di_L}{dt}, \qquad i_L = \frac{1}{L}\int_a^b v_L\,dt$$

Applying Kirchhoff's Current Law (KCL), it follows that

$$\frac{v_o - v_i}{1} + i_L + \frac{v_o}{2} = 0$$

Figure 2.58 Electrical circuit.


$$\frac{v_o - v_i}{1} + \frac{v_o}{2} + \frac{1}{L}\int v_L\,dt = 0$$

Taking Laplace transforms throughout (with L = 1 H, so that the inductor branch current is $V_o(s)/s$), the transfer function can be derived as follows:

$$V_o(s) - V_i(s) + \frac{V_o(s)}{s} + \frac{V_o(s)}{2} = 0$$

$$\Rightarrow V_o(s)\left[1 + \frac{1}{s} + \frac{1}{2}\right] = V_i(s)$$

$$\Rightarrow T(s) = \frac{V_o(s)}{V_i(s)} = \frac{1}{1 + \dfrac{1}{s} + \dfrac{1}{2}} = \frac{2s}{3s + 2}$$

This is the Transfer Function model of the electrical circuit, a dynamic system.

Example 2.26 Find the Transfer Function model of the circuit in Figure 2.59, where there are two resistors, an inductor, and a capacitor.

Solution 2.26 According to Kirchhoff's Current Law (KCL), the sum of all currents at a node or junction is zero. Hence, applying KCL at node $V_T$ gives:

$$i_{R1} + i_{R2} + i_L = 0$$

Writing this in terms of voltages:

$$\frac{V_T(s) - V_i(s)}{1} + \frac{V_T(s)}{1} + \frac{V_T(s) - V_o(s)}{s} = 0$$

This simplifies to

$$V_T(s)\left(2 + \frac{1}{s}\right) = V_i(s) + \frac{V_o(s)}{s} \qquad (2.51)$$

Note that $i_L = i_C$, since elements in series share the same current.

Figure 2.59 Electrical circuit.


For an inductor:

$$v_L = L\frac{di_L}{dt}$$

It is important to note that the impedances of a capacitor and an inductor are given by 1/(sC) and sL, respectively. Also, in this circuit the current through the inductor is the same as that through the capacitor, $I_L = I_C$. It follows that (with the element values used in this working, L = 1 H and C = 2 F):

$$V_T(s) - V_o(s) = V_L(s) = sL\,I_L(s) = s\,I_L(s)$$

$$\Rightarrow I_L(s) = \frac{V_T(s) - V_o(s)}{s} = I_C(s)$$

But $I_C(s) = sC\,V_o(s) = 2sV_o(s)$, hence

$$\frac{V_T(s) - V_o(s)}{s} = 2sV_o(s) \Rightarrow V_T(s) - V_o(s) = 2s^2V_o(s) \Rightarrow V_T(s) = V_o(s)[2s^2 + 1]$$

Substituting this expression for $V_T(s)$ into Equation 2.51 gives:

$$V_o(s)[2s^2 + 1]\left[2 + \frac{1}{s}\right] = V_i(s) + \frac{V_o(s)}{s}$$

$$V_o(s)\left(4s^2 + 2s + 2 + \frac{1}{s}\right) = V_i(s) + \frac{V_o(s)}{s}$$

$$V_o(s)(4s^2 + 2s + 2) = V_i(s)$$

$$\Rightarrow T(s) = \frac{V_o(s)}{V_i(s)} = \frac{1}{4s^2 + 2s + 2}$$

This is the Transfer Function model of the electrical circuit, an example of a dynamic system.

Example 2.27 Find the Transfer Function model of the circuit in Figure 2.60.

Figure 2.60 Electrical circuit.


Solution 2.27 Consider solving this question with the use of two loops. Applying KVL in Loop 1:

$$V_i(s) = 2s(I_1 + I_2) + 2I_1 = I_1(2s + 2) + 2sI_2$$

Applying KVL in Loop 2:

$$2I_1 = 2I_2 + 2sI_2 = 2I_2(1 + s) \Rightarrow I_1 = I_2(1 + s)$$

Substituting this outcome into the expression for $V_i(s)$ leads to:

$$V_i(s) = I_2(1 + s)(2s + 2) + 2sI_2 = I_2(2s^2 + 4s + 2 + 2s) = I_2(2s^2 + 6s + 2)$$

From the circuit, the output voltage is the same as the voltage across the inductor: $V_o = V_L(s) = 2sI_2$. Thus the transfer function is obtained as follows:

$$T(s) = \frac{V_o(s)}{V_i(s)} = \frac{V_L(s)}{V_i(s)} = \frac{2sI_2}{I_2(2s^2 + 6s + 2)} = \frac{2s}{2s^2 + 6s + 2}$$

$$\Rightarrow T(s) = \frac{s}{s^2 + 3s + 1}$$

This is the Transfer Function model of the electrical circuit, an example of a dynamic system.
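The mesh analysis in Example 2.27 can be spot-checked by solving the two loop equations numerically at sample values of s and comparing with the closed-form result T(s) = s/(s² + 3s + 1). This is a Python/NumPy sketch; the loop equations are taken from the worked solution:

```python
import numpy as np

def T_from_mesh(s, Vi=1.0):
    """Solve the two mesh equations for [I1, I2] and return Vo/Vi = 2s*I2/Vi.

    KVL, loop 1: (2s + 2) I1 + 2s I2 = Vi
    KVL, loop 2: 2 I1 - (2 + 2s) I2 = 0
    """
    A = np.array([[2 * s + 2, 2 * s],
                  [2.0, -(2 + 2 * s)]], dtype=complex)
    b = np.array([Vi, 0.0], dtype=complex)
    I1, I2 = np.linalg.solve(A, b)
    return 2 * s * I2 / Vi          # Vo = 2s * I2 (inductor voltage)

# The numerical mesh solution must agree with T(s) = s/(s^2 + 3s + 1).
for s in [1.0 + 0.0j, 0.5 + 1.0j, 2.0 - 0.5j]:
    assert abs(T_from_mesh(s) - s / (s**2 + 3 * s + 1)) < 1e-12
```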

Example 2.28 Find the Transfer Function model of the electrical circuit in Figure 2.61.

Figure 2.61 Electrical circuit.

Solution 2.28 Consider the definitions of the impedance of a capacitor (1/(sC)) and that of an inductor (sL). Using these definitions, applying KVL in Loop 1 leads to:

$$V_i(s) = 2(I_1 + I_2) + \frac{I_1}{s} + 2I_1 + \frac{I_1 + I_2}{s} = I_1\left(4 + \frac{2}{s}\right) + I_2\left(2 + \frac{1}{s}\right)$$

Applying KVL in Loop 2 gives:

$$2I_1 + \frac{I_1}{s} = 2I_2 + 2sI_2 \Rightarrow I_1\left(\frac{1 + 2s}{s}\right) = I_2(2 + 2s) \Rightarrow I_1 = I_2\left(\frac{2s^2 + 2s}{2s + 1}\right)$$

Now substituting this result into the equation obtained from Loop 1 leads to:

$$V_i(s) = I_2\left[\left(\frac{2s^2 + 2s}{2s + 1}\right)\left(4 + \frac{2}{s}\right) + \left(2 + \frac{1}{s}\right)\right] = I_2\left[\frac{4s^2 + 4s}{s} + \frac{2s + 1}{s}\right]$$

$$\Rightarrow V_i(s) = \frac{I_2(4s^2 + 6s + 1)}{s}$$

From the circuit we can see that the output voltage is equal to the voltage across the inductor: $V_o = V_L = 2sI_2$. Thus, we can obtain the transfer function as follows:

$$T(s) = \frac{V_o(s)}{V_i(s)} = \frac{2sI_2}{\dfrac{I_2(4s^2 + 6s + 1)}{s}}$$

$$\Rightarrow T(s) = \frac{2s^2}{4s^2 + 6s + 1}$$

This is the Transfer Function model of the electrical circuit, an example of a dynamic system.

Example 2.29 Find the Transfer Function model of the electrical circuit in Figure 2.62.

Figure 2.62 Electrical circuit.

Solution 2.29 It is important to note the general equations for the impedance and voltage across an inductor, and the corresponding ones for a capacitor:

$$Z_L(s) = sL \Rightarrow V_L(s) = sL\,I_L(s)$$

$$Z_C(s) = \frac{1}{sC} \Rightarrow V_C(s) = \frac{I_C(s)}{sC}$$

When dealing with an inductor or capacitor it is prudent to start by stating either of these two equations. Applying KVL in Loop 1 leads to:

$$V_i(s) = 2s(I_1 + I_2) + 1 \times I_1 = I_1(2s + 1) + 2sI_2$$

Applying KVL in Loop 2:

$$1 \times I_1 = 3sI_2 + \frac{I_2}{s/2} = 3sI_2 + \frac{2I_2}{s} \Rightarrow I_1 = I_2\left(3s + \frac{2}{s}\right)$$

Substituting in the expression for $V_i(s)$ leads to:

$$V_i(s) = I_1(2s + 1) + 2sI_2 = I_2\left[(2s + 1)\left(3s + \frac{2}{s}\right) + 2s\right]$$

But $V_o(s) = 3sI_2$, hence the system transfer function is given by

$$T(s) = \frac{V_o(s)}{V_i(s)} = \frac{3sI_2}{I_2\left[(2s + 1)\left(3s + \dfrac{2}{s}\right) + 2s\right]} = \frac{3s^2}{s(2s + 1)\left(3s + \dfrac{2}{s}\right) + 2s^2} = \frac{3s^2}{(2s + 1)(3s^2 + 2) + 2s^2} = \frac{3s^2}{6s^3 + 5s^2 + 4s + 2}$$

This is the Transfer Function model of the electrical circuit, which is a dynamic system.

Example 2.30 Consider the electrical circuit shown in Figure 2.63, where the input u(t) is a current and the output y(t) is a voltage. Find the State-Variable Matrix model of the system.

Figure 2.63 Electrical circuit.

Solution 2.30 The capacitor and the inductor are the two energy-storing elements in the system, and the state vector contains the corresponding variables:

$$x = \begin{bmatrix} i_L \\ v_C \end{bmatrix}$$

The generic equations for any capacitor and any inductor are respectively given by

$$i_C = C\frac{dv_C}{dt} \Rightarrow \dot{v}_C = \frac{i_C}{C}, \qquad v_L = L\frac{di_L}{dt} \Rightarrow \dot{i}_L = \frac{v_L}{L}$$

From the circuit, the necessary equations are established using KCL and KVL. From KVL:

$$L\dot{i}_L + Ri_L = v_C + RC\dot{v}_C$$

From KCL:

$$u = i_L + i_C = i_L + C\dot{v}_C$$

Also from KCL, the output is

$$y = v_R = RC\dot{v}_C$$

The next step is to find expressions for the derivatives of the chosen state variables, $[\dot{i}_L\ \dot{v}_C]^T$. These are obtained as:

$$\dot{i}_L = -\frac{2R}{L}i_L + \frac{1}{L}v_C + \frac{R}{L}u, \qquad \dot{v}_C = -\frac{i_L}{C} + \frac{u}{C}$$

The output is given by $y = -Ri_L + Ru$. Hence, the state-variable matrices are obtained from the following state-space model:

$$\begin{bmatrix} \dot{i}_L \\ \dot{v}_C \end{bmatrix} = \begin{bmatrix} -2R/L & 1/L \\ -1/C & 0 \end{bmatrix}\begin{bmatrix} i_L \\ v_C \end{bmatrix} + \begin{bmatrix} R/L \\ 1/C \end{bmatrix}u$$

$$y = \begin{bmatrix} -R & 0 \end{bmatrix}\begin{bmatrix} i_L \\ v_C \end{bmatrix} + [R]u$$

This is the State-Variable Matrix model of the electrical circuit, which is a dynamic system.

Example 2.31 Find the State-Variable Matrix model of the circuit in Figure 2.64.

Solution 2.31 It is important to note that there are three energy-storage elements present: two inductors and a capacitor. The vector of state variables is given by:

$$x = \begin{bmatrix} i_{L1} & i_{L2} & v_C \end{bmatrix}^T$$

Figure 2.64 Electrical circuit.


The task is to find expressions for the derivatives of these state variables, represented by the vector $[\dot{i}_{L1}\ \dot{i}_{L2}\ \dot{v}_C]^T$. The general equations for an inductor and a capacitor are respectively given by:

$$v_L = L\frac{di_L}{dt}, \qquad i_C = C\frac{dv_C}{dt}$$

Each of the three energy-storing elements can then be analysed. For the two inductors (each with L = 1 H):

$$\dot{i}_{L1} = v_{L1}, \qquad \dot{i}_{L2} = v_{L2}$$

For the capacitor (C = 2 F, with $v_C = v_o$):

$$i_C = C\frac{dv_C}{dt} \Rightarrow \dot{v}_C = \frac{i_C}{2} \Rightarrow i_C = 2\dot{v}_o$$

But $i_C = i_5$, hence $i_5 = 2\dot{v}_o$. Applying KVL around the outer loop (with 1 Ω resistors) leads to

$$-v_i + i_4 + i_3 + i_5 + v_o = 0, \qquad v_o(t) = v_C(t)$$

KCL at nodes $v_1$ and $v_2$ gives

$$i_3 = i_4 - i_{L2}, \qquad i_5 = i_3 - i_{L1}$$

Hence we obtain

$$-v_i(t) + i_4 + (i_4 - i_{L2}) + (i_4 - i_{L2} - i_{L1}) + v_o(t) = 0 \Rightarrow i_4 = \frac{v_i - v_o + 2i_{L2} + i_{L1}}{3}$$

Considering the voltage drop across $R_1$, $i_4 = (v_i - v_1)/1$, so that

$$v_1 = v_i - i_4 = -\frac{2i_{L2}}{3} - \frac{i_{L1}}{3} + \frac{v_o}{3} + \frac{2v_i}{3}$$

Using the equations derived from nodes $v_1$ and $v_2$ above, we obtain

$$i_3 = \frac{1}{3}i_{L1} - \frac{1}{3}i_{L2} - \frac{1}{3}v_C + \frac{1}{3}v_i$$

$$i_5 = -\frac{2}{3}i_{L1} - \frac{1}{3}i_{L2} - \frac{1}{3}v_C + \frac{1}{3}v_i$$

We can now collect the coefficients of the state variables and develop the State-Variable Matrix model:

$$\begin{bmatrix} \dot{i}_{L1} \\ \dot{i}_{L2} \\ \dot{v}_C \end{bmatrix} = \begin{bmatrix} -\frac{2}{3} & -\frac{1}{3} & \frac{2}{3} \\ -\frac{1}{3} & -\frac{2}{3} & \frac{1}{3} \\ -\frac{1}{3} & -\frac{1}{6} & -\frac{1}{6} \end{bmatrix}\begin{bmatrix} i_{L1} \\ i_{L2} \\ v_C \end{bmatrix} + \begin{bmatrix} \frac{1}{3} \\ \frac{2}{3} \\ \frac{1}{6} \end{bmatrix}v_i$$

Also, the output is given by:

$$y = v_o = v_C \Rightarrow y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} i_{L1} \\ i_{L2} \\ v_C \end{bmatrix} + [0]v_i$$

This gives the complete State-Variable Matrix model of the system, where the state-space matrices A, B, C, and D are identifiable. Example 2.32 In the electrical circuit in Figure 2.65 the input is a current ii (t) and the output is a voltage vo (t).


Figure 2.65 Electrical circuit.

(a) Choose the state variables for the system. (b) Derive the State-Variable Matrix model (i.e. obtain A, B, C, D). (c) Find the input-output differential equation model for the circuit.

Solution 2.32 (a) There are two independent energy-storing elements: a capacitor and an inductor. The energy stored in the capacitor is a function of the voltage vC(t), and that stored in the inductor depends on the current iL(t). Hence, the minimum set of variables is chosen as $x(t) = [i_L\ v_C]^T$. (b) The annotated circuit diagram with current directions is shown in Figure 2.32.

From analysing the inductor and capacitor, it follows that

$$v_L = L\frac{di_L}{dt} \Rightarrow \frac{di_L}{dt} = \frac{1}{L}v_L, \qquad i_C = C\frac{dv_C}{dt} \Rightarrow \frac{dv_C}{dt} = \frac{1}{C}i_C$$


Using KCL and KVL for the circuit:

$$v_L = R_1(i_i - i_L) + v_C \Rightarrow \frac{di_L}{dt} = \frac{1}{L}\left[R_1(i_i - i_L) + v_C\right] \qquad (2.52)$$

$$i_C = i_{R1} - i_o = (i_i - i_L) - \frac{v_C}{R_2} \Rightarrow \frac{dv_C}{dt} = \frac{1}{C}\left[(i_i - i_L) - \frac{v_C}{R_2}\right] \qquad (2.53)$$

The output is given by $v_o = v_C$. Writing the aforementioned equations in the State-Variable Matrix model produces

$$\begin{bmatrix} \dot{i}_L \\ \dot{v}_C \end{bmatrix} = \begin{bmatrix} -\dfrac{R_1}{L} & \dfrac{1}{L} \\ -\dfrac{1}{C} & -\dfrac{1}{R_2C} \end{bmatrix}\begin{bmatrix} i_L \\ v_C \end{bmatrix} + \begin{bmatrix} \dfrac{R_1}{L} \\ \dfrac{1}{C} \end{bmatrix}[i_i(t)]$$

$$[v_o] = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} i_L \\ v_C \end{bmatrix} + [0][i_i(t)]$$

This is the State-Variable Matrix model of the circuit.

(c) From Equations 2.52 and 2.53, the variables $i_L(t)$ and $di_L/dt$ should be eliminated to obtain the input-output differential equation:

$$\ddot{v}_o = \frac{1}{C}\left[\dot{i}_i - \dot{i}_L - \frac{\dot{v}_o}{R_2}\right]$$

$$= \frac{1}{C}\left[\dot{i}_i - \frac{1}{L}\{R_1(i_i - i_L) + v_o\} - \frac{\dot{v}_o}{R_2}\right] \quad \left(\text{using } \dot{i}_L = \frac{1}{L}\{R_1(i_i - i_L) + v_o\}\right)$$

$$= \frac{1}{C}\left[\dot{i}_i - \frac{1}{L}\{R_1(i_C + i_{R2}) + v_o\} - \frac{\dot{v}_o}{R_2}\right] \quad (\text{using } i_i - i_L = i_C + i_{R2})$$

$$= \frac{1}{C}\left[\dot{i}_i - \frac{1}{L}\left\{R_1\left(C\dot{v}_o + \frac{v_o}{R_2}\right) + v_o\right\} - \frac{\dot{v}_o}{R_2}\right]$$

$$= \frac{1}{C}\left[\dot{i}_i - \frac{R_1C}{L}\dot{v}_o - \left(\frac{R_1 + R_2}{LR_2}\right)v_o - \frac{\dot{v}_o}{R_2}\right]$$

Rearranging this equation gives the input-output differential equation for the electrical circuit as

$$\ddot{v}_o + \left(\frac{R_1}{L} + \frac{1}{R_2C}\right)\dot{v}_o + \left(\frac{R_1 + R_2}{R_2CL}\right)v_o = \frac{\dot{i}_i}{C}$$

This is the Input-Output model of the electrical circuit.
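The state-space and input-output forms derived above describe the same system, so they must share a transfer function. A Python/NumPy sketch with illustrative values (R1 = 1 Ω, R2 = 2 Ω, L = 1 H, C = 1 F, not from the text) evaluates C(sI − A)⁻¹B at sample points and compares it with the transfer function implied by the input-output equation:

```python
import numpy as np

# Illustrative component values (not from the text):
R1, R2, L, C = 1.0, 2.0, 1.0, 1.0

# State-space matrices from the worked solution.
A = np.array([[-R1 / L, 1 / L],
              [-1 / C, -1 / (R2 * C)]])
B = np.array([[R1 / L],
              [1 / C]])
Cm = np.array([[0.0, 1.0]])       # output is vo = vC; D = 0

for s in [1.0 + 0.0j, 0.3 + 2.0j]:
    # Transfer function from the state-space model: C (sI - A)^-1 B.
    tf_ss = (Cm @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]
    # Transfer function from the input-output equation:
    # Vo/Ii = (s/C) / (s^2 + (R1/L + 1/(R2 C)) s + (R1 + R2)/(R2 C L)).
    tf_io = (s / C) / (s**2 + (R1 / L + 1 / (R2 * C)) * s
                       + (R1 + R2) / (R2 * C * L))
    assert abs(tf_ss - tf_io) < 1e-12
```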


Example 2.33 (a) Write the dynamic equations for the circuit shown below.

Circuit Diagram

(b) Put the equations in the State-Variable Matrix model where the voltages va(t) and vb(t) are the inputs, and both v1(t) and v2(t) are the outputs of interest (i.e. obtain A, B, C, D). Choose the state variables as v1(t) and v2(t).

Solution 2.33 (a) The dynamic equations are obtained by applying KCL and manipulating the result. Employing KCL at node 1 gives

$$i_{C1} = i_{R1} + i_{R2}$$

$$C_1\dot{v}_1 = \frac{v_a - v_1}{R_1} + \frac{v_2 - v_1}{R_2}$$

$$\Rightarrow \dot{v}_1 = -\left(\frac{1}{R_1C_1} + \frac{1}{R_2C_1}\right)v_1 + \frac{1}{R_2C_1}v_2 + \frac{1}{R_1C_1}v_a$$

Applying KCL at node 2 gives

$$i_{R3} = i_{C2} + i_{R2} \Rightarrow i_{C2} = i_{R3} - i_{R2}$$

$$C_2\dot{v}_2 = \frac{v_b - v_2}{R_3} - \frac{v_2 - v_1}{R_2}$$

$$\Rightarrow \dot{v}_2 = \frac{1}{R_2C_2}v_1 - \left(\frac{1}{R_2C_2} + \frac{1}{R_3C_2}\right)v_2 + \frac{1}{R_3C_2}v_b$$

(b) Expressing these equations in the State-Variable Matrix model is achieved by pulling out the coefficients of the variables and inputs:

$$\begin{bmatrix} \dot{v}_1 \\ \dot{v}_2 \end{bmatrix} = \begin{bmatrix} -\left(\dfrac{1}{R_1C_1} + \dfrac{1}{R_2C_1}\right) & \dfrac{1}{R_2C_1} \\ \dfrac{1}{R_2C_2} & -\left(\dfrac{1}{R_2C_2} + \dfrac{1}{R_3C_2}\right) \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} + \begin{bmatrix} \dfrac{1}{R_1C_1} & 0 \\ 0 & \dfrac{1}{R_3C_2} \end{bmatrix}\begin{bmatrix} v_a \\ v_b \end{bmatrix}$$


 

$$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} v_a \\ v_b \end{bmatrix}$$

This is the State-Variable Matrix model of the circuit.

Example 2.34 (a) Obtain the dynamic equations of the circuit shown below.

Circuit Diagram

(b) Explain why the state variables should be chosen as vC1(t), vC2(t), and vC3(t). (c) Express the equations in the State-Variable Matrix model where the input is voltage u(t) and the output is voltage vo(t).

Solution 2.34 (a) The output is given by using KVL:

$$u + v_{C1} + v_{C2} - v_o = 0 \Rightarrow v_o = v_{C1} + v_{C2} + u$$

Applying KCL at node 2, $-i_{C2} = i_{R2}$:

$$-C_2\dot{v}_{C2} = \frac{v_o - v_{C3}}{R_2} = \frac{u + v_{C1} + v_{C2} - v_{C3}}{R_2}$$

$$\Rightarrow \dot{v}_{C2} = \frac{-u - v_{C1} - v_{C2} + v_{C3}}{R_2C_2}$$

Applying KCL at node 1, $i_{C1} + i_{R3} + (-i_{C2}) = 0$, where $-i_{C2} = i_{R2}$:

$$C_1\dot{v}_{C1} + \frac{u + v_{C1}}{R_3} + \frac{u + v_{C1} + v_{C2} - v_{C3}}{R_2} = 0$$

$$\Rightarrow \dot{v}_{C1} = \frac{-u - v_{C1}}{R_3C_1} + \frac{-u - v_{C1} - v_{C2} + v_{C3}}{R_2C_1}$$

Applying KCL at node 3, $i_{R1} + i_{R2} = i_{C3}$:

$$\frac{u - v_{C3}}{R_1} + \frac{u + v_{C1} + v_{C2} - v_{C3}}{R_2} = C_3\dot{v}_{C3}$$

$$\Rightarrow \dot{v}_{C3} = \frac{u - v_{C3}}{R_1C_3} + \frac{u + v_{C1} + v_{C2} - v_{C3}}{R_2C_3}$$


(b) There are three independent energy-storing elements: three capacitors. The energy stored in each capacitor is a function of its voltage vC(t). Hence, the minimum set of variables is chosen as $x(t) = [v_{C1}\ v_{C2}\ v_{C3}]^T$. (c) The State-Variable Matrix model is then obtained by extracting the coefficients of the variables vC1, vC2, and vC3, and the input u.

Example 2.35 In the figure shown below, determine the input-output differential equation of the circuit.

Solution 2.35 Using KCL at the inverting input (a virtual ground, so $v_A = 0$) and summing all the currents flowing from the sources leads to $i_{R4} = i_{R1} + i_{R2} + i_{R3}$:

$$\frac{v_A - v_o}{R_4} = \frac{v_1 - v_A}{R_1} + \frac{v_2 - v_A}{R_2} + \frac{v_3 - v_A}{R_3}$$

$$\frac{0 - v_o}{R_4} = \frac{v_1 - 0}{R_1} + \frac{v_2 - 0}{R_2} + \frac{v_3 - 0}{R_3}$$

$$\Rightarrow v_o = -R_4\left(\frac{v_1}{R_1} + \frac{v_2}{R_2} + \frac{v_3}{R_3}\right)$$

This means the circuit is a summer (it performs a summing function) since it adds up the weighted input voltages.

Example 2.36 Find the input-output differential equation describing the circuit shown in the following diagram.

Op-Amp Circuit: Virtual-Short Concept


Solution 2.36 Applying KCL at B gives $i_C = i_{R3}$:

$$C(\dot{v}_i - \dot{v}_B) = \frac{v_B}{R_3}, \quad \text{i.e.} \quad C(\dot{v}_B - \dot{v}_i) + \frac{1}{R_3}v_B = 0$$

By the virtual-short concept, $v_B = v_A$, which according to the voltage-divider rule can be written as $v_B = [R_1/(R_1 + R_2)]v_o$. Substituting this into the previous equation gives

$$\frac{R_1C}{R_1 + R_2}\dot{v}_o - C\dot{v}_i + \frac{R_1}{(R_1 + R_2)R_3}v_o = 0$$

from which

$$R_1R_3C\dot{v}_o + R_1v_o = (R_1 + R_2)R_3C\dot{v}_i$$

or

$$\dot{v}_o + \frac{1}{R_3C}v_o = \left(1 + \frac{R_2}{R_1}\right)\dot{v}_i$$

This is the Input-Output model for the op-amp circuit.

2.10.3.1 Analogous Dynamic Systems

Translational, rotational and electrical systems can be shown to manifest the same dynamic behaviour, and hence their models can be used interchangeably. Consider the four second-order systems shown in Figures 2.66–2.69. Using free-body diagrams and circuit laws (current and voltage), it can be shown that the input-output differential equations for the four systems are given by:

Figure 2.66 A translational mechanical system.

Figure 2.67 A rotational mechanical system.


Figure 2.68 An RLC series circuit.

Figure 2.69 An RLC parallel circuit.

$$\ddot{x} + \frac{b_1}{m}\dot{x} + \frac{k_1}{m}x = \frac{1}{m}f(t)$$

$$\ddot{\theta} + \frac{b_2}{J}\dot{\theta} + \frac{k_2}{J}\theta = \frac{1}{J}T(t)$$

$$\ddot{i} + \frac{R_1}{L_1}\dot{i} + \frac{1}{L_1C_1}i = \frac{1}{L_1}\dot{v}_i(t)$$

$$\ddot{v}_o + \frac{1}{R_2C_2}\dot{v}_o + \frac{1}{L_2C_2}v_o = \frac{1}{C_2}\dot{i}(t)$$

The solutions of these four differential equations (Input-Output models) take the same form. Therefore, the four dynamic systems in Figures 2.66–2.69 are analogous to each other. There are two important types of analogous dynamic systems: the force-voltage analogy and the force-current analogy. These two sets of analogous systems are summarised in Figure 2.70.

Figure 2.70 Analogous dynamic systems.
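The force-voltage analogy can be illustrated numerically: when the coefficients are matched (L ↔ m, R ↔ b₁, 1/C ↔ k₁), the mechanical and electrical systems share the same natural frequency and damping ratio. A small Python sketch with illustrative values (not from the text):

```python
import math

# Mass-spring-damper: x" + (b1/m)x' + (k1/m)x = f/m
# Series RLC:         i" + (R1/L1)i' + i/(L1 C1) = v'/L1
# Matched coefficients give identical characteristic polynomials
# s^2 + 2*zeta*wn*s + wn^2.
m, b1, k1 = 2.0, 4.0, 8.0       # mechanical: mass, damper, spring
L1, R1 = 2.0, 4.0               # electrical analogue: L <-> m, R <-> b1
C1 = 1.0 / k1                   # 1/C <-> k1

wn_mech = math.sqrt(k1 / m)
wn_elec = math.sqrt(1.0 / (L1 * C1))
zeta_mech = b1 / (2.0 * m * wn_mech)
zeta_elec = R1 / (2.0 * L1 * wn_elec)

assert abs(wn_mech - wn_elec) < 1e-12
assert abs(zeta_mech - zeta_elec) < 1e-12
```

Because both systems reduce to the same second-order form, their step responses, overshoot, and settling times are identical when the coefficients correspond.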

2.10.4 ELECTROMECHANICAL SYSTEMS

Figure 2.71 shows a simplified model of a typical electromechanical system, that is, a system containing both moving mechanical parts and electrical components. The system represents a capacitor microphone and consists of a parallel plate capacitor connected into an electrical circuit. Capacitor plate 1 is rigidly fastened to the microphone frame. Sound waves pass through the mouthpiece and exert a force fs(t) on plate 2, which has mass m and is connected to the frame by a spring (k) and a damper (b). The capacitance C(x) is a function of the distance x(t) between the plates:

$$C(x) = \frac{\varepsilon A}{x}$$

where ε is the dielectric constant of the material between the plates, and A is the surface area of the plates. The charge q and the voltage e across the plates are related by $q = C(x)e$. The electric field in turn produces a force $f_e$, which opposes motion of the movable plate. This force is given by

$$f_e = \frac{q^2}{2\varepsilon A}$$

Example 2.37 Consider the electromechanical system in Figure 2.71. (a) Draw the electrical circuit and the free-body diagram that can be used to represent this system. (b) Justify the choice of the vector of state variables as $x = [q\ \dot{q}\ x\ \dot{x}]^T$. (c) Derive the State-Variable model for this electromechanical system, where the output is the force fe on the movable plate and the inputs are the voltage v(t) and the sound force fs(t).

Solution 2.37 (a) From Figure 2.71 the electrical circuit of the system can be deduced to be an RLC circuit with a variable capacitor C(x), as shown in Figure 2.72. The free-body diagram is obtained by considering the forces acting on capacitor plate 2.

Figure 2.71 A simple electromechanical system.


Figure 2.72 RLC circuit for electromechanical system.

(b) There are four energy-storing elements in the electromechanical system: a capacitor, an inductor, a spring, and a moving mass. The electrical energy stored in the capacitor is a function of the charge q(t); the electrical energy stored in the inductor is a function of the current q̇(t); the potential energy stored in the spring is a function of the displacement x(t); and the kinetic energy stored in the moving mass is a function of the velocity ẋ(t). Hence, the minimum set of variables is given by $x = [q\ \dot{q}\ x\ \dot{x}]^T$.

(c) Employing KVL in Figure 2.72 gives

$$v_C + v_L + v_R - v(t) = 0$$

$$v(t) = v_R + v_L + v_C = Ri + L\frac{di}{dt} + \frac{1}{C(x)}\int i(t)\,dt$$

$$= R\frac{dq}{dt} + L\frac{d^2q}{dt^2} + \frac{1}{C(x)}\int \frac{dq}{dt}\,dt \quad \left(\text{using } i(t) = \frac{dq}{dt}\right)$$

$$= R\frac{dq}{dt} + L\frac{d^2q}{dt^2} + \frac{qx}{\varepsilon A} \quad \left(\text{using } C(x) = \frac{\varepsilon A}{x}\right)$$

Balancing the forces in the free-body diagram in Figure 2.73 gives

$$f_s(t) = m\ddot{x} + b\dot{x} + kx + f_e = m\ddot{x} + b\dot{x} + kx + \frac{1}{2\varepsilon A}q^2 \quad \left(\text{using } f_e = \frac{q^2}{2\varepsilon A}\right)$$

This is a nonlinear system because of the terms qx and q2 , and hence, the matrices A and C cannot be extracted. The state-variable system can be represented as follows:

Figure 2.73 Free-body diagram: electromechanical system.


 





$$\begin{bmatrix} \dot{q} \\ \ddot{q} \\ \dot{x} \\ \ddot{x} \end{bmatrix} = \begin{bmatrix} \dot{q} \\ -\dfrac{1}{L}\left(R\dot{q} + \dfrac{qx}{\varepsilon A} - v\right) \\ \dot{x} \\ -\dfrac{1}{m}\left(b\dot{x} + kx + \dfrac{1}{2\varepsilon A}q^2 - f_s\right) \end{bmatrix}$$

$$y = \frac{1}{2\varepsilon A}q^2$$

These equations are in the format of the general State-Variable Matrix model for nonlinear systems, which is given by

$$\dot{x}(t) = f(x, u, t), \qquad y(t) = g(x, u, t)$$

where the state variable and input vectors are given by $x(t) = [q\ \dot{q}\ x\ \dot{x}]^T$ and $u(t) = [v\ f_s]^T$.

Example 2.38 The pulley system shown below is assumed to be ideal. Draw the free-body diagrams and obtain the modelling equations.

Fig. 5 Solution 2.38 The free-body diagrams are shown in the following figures.

Note the inclusion of gravity (g) in the equations for the vertical system.


Example 2.39 (a) Write the equations describing the series combination of the translational mechanical elements shown below.

(b) Find the expressions for Keq and Beq (see the figure below) such that the motions of the ends of the combination are the same as those shown above.

Solution 2.39 (a) The solution is obtained from analysing the series as follows:

From (1-2): $f(t) = k_1(x_2 - x_1)$

From (1-3): $f(t) = k_1(x_3 - x_1) + b_1(\dot{x}_3 - \dot{x}_1)$

From (1-4): $f(t) = \dfrac{k_1k_2}{k_1 + k_2}(x_4 - x_1) + b_1(\dot{x}_4 - \dot{x}_1)$

From (1-5): $f(t) = \dfrac{k_1k_2}{k_1 + k_2}(x_5 - x_1) + \dfrac{b_1b_2}{b_1 + b_2}(\dot{x}_5 - \dot{x}_1)$

(b)

From (1-5): $k_{eq} = \dfrac{k_1k_2}{k_1 + k_2}, \qquad b_{eq} = \dfrac{b_1b_2}{b_1 + b_2}$
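The series-combination results keq = k₁k₂/(k₁ + k₂) and beq = b₁b₂/(b₁ + b₂) reflect the fact that compliances in series add. A quick numerical check in Python (values are illustrative, not from the text):

```python
# For two springs (or dampers) in series, the equivalent element is
# c1*c2/(c1 + c2) because the elements share the same force while their
# deflections (or velocities) add.
def series_equivalent(c1, c2):
    return c1 * c2 / (c1 + c2)

k1, k2, f = 3.0, 6.0, 12.0
keq = series_equivalent(k1, k2)
# Under a common force f the springs stretch f/k1 and f/k2;
# the total deflection must equal f/keq.
assert abs(f / k1 + f / k2 - f / keq) < 1e-12
```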

2.11 SWITCHING BETWEEN DIFFERENT SYSTEM MODELS

In this book, emphasis is placed on both time-domain (State-Variable Matrix and Input-Output Differential Equation) and frequency-domain models (Transfer Function [including pole-zero format] and Block Diagram). It is essential to be able to convert from one model form to another as this enhances system understanding. Furthermore, depending on the system analysis, design of interest or specific application, one form may be more useful and informative than another. Consequently, the ability to analytically derive one form of the dynamic system model from another form is imperative. There are functions in MATLAB (Appendix B) that directly transform from one model form to another.


• ss2tf: This function converts from the State-Variable Matrix model to the Transfer Function model: [num, den] = ss2tf(A, B, C, D), where num and den are the coefficients of the numerator and denominator polynomials of the system transfer function, i.e., for

$$H(s) = \frac{as^2 + bs + c}{ds^2 + es + f}$$

num = [a b c] and den = [d e f]. For example, the following State-Variable Matrix model can be converted to a Transfer Function model as follows:



$$\begin{bmatrix} \dot{x} \\ \dot{v} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -\dfrac{1}{20} \end{bmatrix}\begin{bmatrix} x \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{20} \end{bmatrix}[v_r]$$

$$y = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x \\ v \end{bmatrix} + [0]v_r$$

[num, den] = ss2tf(A, B, C, D)

$$\text{num} = \begin{bmatrix} 0 & 0 & \dfrac{1}{20} \end{bmatrix} \quad \text{and} \quad \text{den} = \begin{bmatrix} 1 & \dfrac{1}{20} & 0 \end{bmatrix}$$

$$H(s) = \frac{\dfrac{1}{20}}{s^2 + \dfrac{s}{20}} = \frac{\dfrac{1}{20}}{s\left(s + \dfrac{1}{20}\right)} = \frac{1}{20s^2 + s}$$
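SciPy's signal module provides direct counterparts to these MATLAB functions. As a sketch (assuming SciPy is available), scipy.signal.ss2tf reproduces the worked example:

```python
import numpy as np
from scipy.signal import ss2tf

# State-space model from the worked example above.
A = np.array([[0.0, 1.0],
              [0.0, -1.0 / 20]])
B = np.array([[0.0],
              [1.0 / 20]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# Expect H(s) = (1/20) / (s^2 + s/20) = 1/(20 s^2 + s).
assert np.allclose(num, [[0.0, 0.0, 1.0 / 20]])
assert np.allclose(den, [1.0, 1.0 / 20, 0.0])
```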

• tf2ss: This function converts from the Transfer Function model to the State-Variable Matrix model: [A, B, C, D] = tf2ss(num, den). For example, MATLAB can be used to find the State-Variable Matrix model of the following input-output differential equation (Input-Output model), where zero initial conditions are assumed:

$$\ddot{y} + 6\dot{y} + 25y = 9u + 3\dot{u}$$


First, the Transfer Function model is obtained as follows:

$$\mathcal{L}[\ddot{y} + 6\dot{y} + 25y] = \mathcal{L}[9u + 3\dot{u}]$$

$$s^2Y(s) + 6sY(s) + 25Y(s) = 9U(s) + 3sU(s)$$

$$\Rightarrow H(s) = \frac{3s + 9}{s^2 + 6s + 25} \qquad (2.54)$$

num = [0 3 9] and den = [1 6 25]

[A, B, C, D] = tf2ss(num, den)

$$A = \begin{bmatrix} -6 & -25 \\ 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \qquad (2.55)$$

$$C = \begin{bmatrix} 3 & 9 \end{bmatrix}, \quad D = \begin{bmatrix} 0 \end{bmatrix} \qquad (2.56)$$
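The same conversion can be sketched in Python: scipy.signal.tf2ss returns the controller canonical form, matching the matrices in Equations 2.55 and 2.56 (assuming SciPy is available):

```python
import numpy as np
from scipy.signal import tf2ss

# H(s) = (3s + 9)/(s^2 + 6s + 25) from the worked example.
A, B, C, D = tf2ss([3.0, 9.0], [1.0, 6.0, 25.0])

assert np.allclose(A, [[-6.0, -25.0], [1.0, 0.0]])
assert np.allclose(B, [[1.0], [0.0]])
assert np.allclose(C, [[3.0, 9.0]])
assert np.allclose(D, [[0.0]])
```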

• ss2zp: This function converts from the State-Variable Matrix model to the zero-pole form of the Transfer Function model: [z, p, k] = ss2zp(A, B, C, D).

• zp2ss: This function converts from the zero-pole form of the Transfer Function model to the State-Variable Matrix model: [A, B, C, D] = zp2ss(z, p, k).

These last two functions can be illustrated by using the matrices in Equations 2.55 and 2.56:

[z, p, k] = ss2zp(A, B, C, D)

$$z = -3, \quad p = \begin{bmatrix} -3 + 4j \\ -3 - 4j \end{bmatrix}, \quad k = 3$$

From these parameters, the Transfer Function model can be deduced:

$$H(s) = \frac{Y(s)}{U(s)} = 3\,\frac{s - (-3)}{[s - (-3 + 4j)][s - (-3 - 4j)]} = \frac{3(s + 3)}{(s + 3 - 4j)(s + 3 + 4j)} = \frac{3s + 9}{s^2 + 6s + 25}$$

Note that this equation is the same as Equation 2.54, as expected. Now that switching between models has been explored using MATLAB functions and exercises, this conversion from one type of model to another is discussed and implemented using analytical methods for real dynamic systems.
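SciPy's counterpart of this zero-pole factorisation is signal.tf2zpk (with zpk2ss for the reverse direction). A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.signal import tf2zpk

# Factor H(s) = (3s + 9)/(s^2 + 6s + 25) into zeros, poles, and gain.
z, p, k = tf2zpk([3.0, 9.0], [1.0, 6.0, 25.0])

assert np.allclose(z, [-3.0])
# Poles of s^2 + 6s + 25 are -3 +/- 4j (sorted by imaginary part here).
assert np.allclose(sorted(p, key=lambda x: x.imag), [-3.0 - 4.0j, -3.0 + 4.0j])
assert abs(k - 3.0) < 1e-12
```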

2.12 INPUT-OUTPUT MODEL TO A TRANSFER FUNCTION MODEL

As explained in Chapter 1, the four dynamic system models (Input-Output Differential Equation, Transfer Function, State-Variable Matrix, and Block Diagram) are interchangeable: one can convert from one model to another and back. Taking Laplace transforms of the Input-Output model is the most common way of deriving a Transfer Function model from the differential equations representing the dynamics of the system. As discussed in Section 2.8.5, this is achieved by taking Laplace transforms of the Input-Output model or by applying the s-operator. Hence, examples of the conversion from the Input-Output model to the Transfer Function model have already been presented and demonstrated in that section, which covers the determination of Transfer Function models.

2.13 TRANSFER FUNCTION MODEL TO AN INPUT-OUTPUT MODEL

Given a Transfer Function model of a system, understanding the differential equations that describe the dynamics of the system might be needed. The Input-Output model can be obtained by transformation from the Laplace domain back to the time domain, also called taking inverse Laplace transforms. For example, given a dynamic system's Transfer Function model as

$$T(s) = \frac{X(s)}{F(s)} = \frac{6}{s^2 + 4s + 9}$$

$$\Rightarrow X(s)\left(s^2 + 4s + 9\right) = 6F(s) \Rightarrow \ddot{x}(t) + 4\dot{x}(t) + 9x(t) = 6f(t)$$

Thus, the Input-Output model is obtained. Given the following two further Transfer Function models, Input-Output models can be similarly obtained:

$$T(s) = \frac{X(s)}{F(s)} = \frac{11}{s^2 + 17s + 80} \Rightarrow \ddot{x}(t) + 17\dot{x}(t) + 80x(t) = 11f(t)$$

This is the Input-Output model of the dynamic system.

$$T(s) = \frac{X(s)}{F(s)} = \frac{s + 2}{s^3 + 7s^2 + 6s + 19} \Rightarrow \dddot{x}(t) + 7\ddot{x}(t) + 6\dot{x}(t) + 19x(t) = \dot{f}(t) + 2f(t)$$

This is the Input-Output model of the dynamic system.
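As a consistency check, an Input-Output model and the Transfer Function model it came from must produce identical responses. The sketch below (Python with SciPy, assuming it is available) integrates ẍ + 4ẋ + 9x = 6f(t) for a unit-step input and compares the result with the step response of 6/(s² + 4s + 9):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import lti, step

t = np.linspace(0.0, 5.0, 200)

def ode(t_, y):
    """State y = [x, x']; f(t) = 1 (unit step), so x" = 6 - 4x' - 9x."""
    x, xdot = y
    return [xdot, 6.0 - 4.0 * xdot - 9.0 * x]

sol = solve_ivp(ode, (0.0, 5.0), [0.0, 0.0], t_eval=t,
                rtol=1e-9, atol=1e-12)
_, x_tf = step(lti([6.0], [1.0, 4.0, 9.0]), T=t)

# The ODE integration and the transfer-function step response must agree.
assert np.allclose(sol.y[0], x_tf, atol=1e-5)
```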

2.14 BLOCK DIAGRAM MODEL TO A TRANSFER FUNCTION MODEL

Block diagram algebra is used to simplify complex block diagrams through a process called block diagram reduction. The result is one block with an overall transfer function (the Transfer Function model) inside it, a ratio of the Laplace domain output and the Laplace domain input. Hence, this block diagram reduction is, effectively, conversion from the Block Diagram model to the Transfer Function model. This is demonstrated in Section 2.9.5 titled Worked Examples of Simplifying Block Diagrams.


2.15 TRANSFER FUNCTION MODEL TO A STATE-VARIABLE MATRIX MODEL

Sometimes it is necessary to convert a transfer function of a dynamic system to a state-space representation. This effectively converts from a Transfer Function model to a State-Variable Matrix (State-Space) model. Such a transformation is essential so that widely used state-space methods of control system analysis, design and implementation can be applied to the dynamic system. However, as explained earlier, it is essential to emphasise that the State-Variable Matrix model is not unique.

2.15.1 GENERAL CONTROLLABLE CANONICAL FORM

One method of converting from the Transfer Function model to the State-Variable Matrix model is called the controllable canonical form. The starting point is to consider the Transfer Function model of a generic dynamic system expressed as a ratio of two polynomials:

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} \qquad (2.57)$$

$$= \frac{b_0s^n + b_1s^{n-1} + b_2s^{n-2} + \cdots + b_n}{s^n + a_1s^{n-1} + a_2s^{n-2} + \cdots + a_n} \qquad (2.58)$$

The controllable canonical form is a system description in which the control input can modify all states. This is in keeping with the definition of controllability (a concept to be discussed in detail in Chapter 7), which states that: If an input to the system can be found that takes every state variable from a desired initial state to a desired final state, the system is said to be controllable. Hence, the controllable canonical form is very convenient in the design of system controllers. The controllable canonical form arranges the coefficients of the transfer function denominator across one row of the A matrix. It is the most direct method for converting the Transfer Function model to a State-Space model. From Equation 2.58, the State-Variable Matrix model (A, B, C, D) for the controllable canonical form is then extracted as follows:

A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}   (2.59)

B = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}   (2.60)

C = \begin{bmatrix} b_n - a_n b_0 & b_{n-1} - a_{n-1} b_0 & b_{n-2} - a_{n-2} b_0 & \cdots & b_1 - a_1 b_0 \end{bmatrix}   (2.61)

D = [b_0]   (2.62)

This is the General Controllable Canonical State-Variable Matrix model for a generic dynamic system. It is prudent to note that the stacking of variables in the state-variable vector, x(t), is in ascending order, that is:




x(t) = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{n-2} \\ x_{n-1} \\ x_n \end{bmatrix}

The General Controllable Canonical form will be fully developed and discussed in Chapter 7, together with other types of canonical forms such as observable, diagonal and Jordan.
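As a concrete illustration of Equations 2.59 to 2.62, the extraction of (A, B, C, D) can be sketched in code. This is a minimal pure-Python sketch (the function name tf2gccf is invented for illustration); it assumes the denominator has already been normalised so that its leading coefficient is 1:

```python
def tf2gccf(b, a):
    """General Controllable Canonical Form per Equations 2.59-2.62.

    b: numerator coefficients [b0, b1, ..., bn] (descending powers of s)
    a: denominator coefficients [1, a1, ..., an] (descending powers, a0 = 1)
    Returns (A, B, C, D) with the states stacked in ascending order.
    """
    n = len(a) - 1
    assert a[0] == 1, "denominator must be normalised so that a0 = 1"
    assert len(b) == n + 1, "pad the numerator with leading zeros to length n+1"
    # Equation 2.59: shifted-identity rows, then the negated, reversed a-coefficients.
    A = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n - 1)]
    A.append([-a[n - j] for j in range(n)])
    # Equation 2.60: the input enters only through the last state equation.
    B = [[0]] * (n - 1) + [[1]]
    # Equation 2.61: k-th entry is b_{n-k} - a_{n-k} * b0 (ascending state order).
    C = [[b[n - j] - a[n - j] * b[0] for j in range(n)]]
    # Equation 2.62.
    D = [[b[0]]]
    return A, B, C, D

# For T(s) = (s + 2)/(s^2 + 7s + 12): b0 = 0, b1 = 1, b2 = 2.
A, B, C, D = tf2gccf([0, 1, 2], [1, 7, 12])
```

Because b0 = 0 in this case, D = [0] and the C row reduces to the plain numerator coefficients in ascending order.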

2.15.2 SPECIAL CONTROLLABLE CANONICAL FORM

A special case is obtained if, in the General Controllable Canonical form, two conditions are met:

1. b_0 = 0, which means the order of the numerator polynomial is at least one less than that of the denominator; and
2. the stacking order of the state variables is descending, such that:

x(t) = \begin{bmatrix} x_n \\ x_{n-1} \\ x_{n-2} \\ \vdots \\ x_3 \\ x_2 \\ x_1 \end{bmatrix}

In this case, the transfer function of the system, consisting of two polynomials, is adjusted as follows:

T(s) = \frac{N(s)}{D(s)} = \frac{b(s)}{a(s)} = \frac{b_1 s^{n-1} + b_2 s^{n-2} + \cdots + b_n}{s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_n}

In this Special Controllable Canonical form, it is essential to note that the coefficient of s^n is 1; that the coefficients a_i and b_i are real numbers and must be present (in canonical analysis) even if they are equal to 0; and that the degree of the numerator is n - 1, while that of the denominator is n. Before deploying the controllable canonical form analysis, it is essential to establish these three conditions. The state-space matrices (A, B, C, D) for the controllable canonical form are extracted as follows:

A = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_n \\ 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & 1 & 0 \end{bmatrix}

B = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}


C = \begin{bmatrix} b_1 & b_2 & \cdots & b_n \end{bmatrix}

D = 0

This is the Special Controllable Canonical State-Variable Matrix model, which is obtained by the Special Controllable Canonical method for a generic dynamic system model where: (i) the coefficients of the numerator polynomial appear (with the same signs and order) as the row matrix, C; and (ii) the coefficients of the denominator polynomial appear (with opposite signs but the same order) as the first row of the system matrix, A. As indicated earlier, it is important to note that the stacking of variables in the state-variable vector, x(t), is in descending order.

It is essential to emphasise that MATLAB uses the General Controllable Canonical form in its function tf2ss. However, it organises the state variables in descending order (as with the special form). Hence the model obtained using the Special Controllable Canonical method is the same as that produced by deploying the MATLAB function. This is the rationale behind the use of the Special Controllable Canonical form.

num = [b1 b2 ... bn]
den = [1 a1 a2 ... an]
[A, B, C, D] = tf2ss(num, den)

In the preceding MATLAB operation, the State-Variable Matrix model (A, B, C, D) generated is the same as that obtained by hand calculation using the Special Controllable Canonical form.

Example 2.40 Consider the following Transfer Function model:

T(s) = \frac{s + 2}{s^2 + 7s + 12}

(a) Find its corresponding State-Variable Matrix (State-Space) model.
(b) Draw a block diagram representing the Transfer Function model in controllable canonical form.
(c) What is the nature of the relationship between each state variable and the control input?

Solution 2.40 (a) Assess the coefficients of the numerator and denominator of the transfer function

T(s) = \frac{s + 2}{s^2 + 7s + 12}

The degree of the system is n = 2. The coefficient of s^n is 1; the coefficients a_i and b_i are real numbers; and the degree of the numerator is n - 1, while that of the denominator is n. Hence all three controllable canonical form conditions are satisfied, and the state-space matrices can be extracted without any manipulation:

A = \begin{bmatrix} -7 & -12 \\ 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 2 \end{bmatrix}, \quad D = [0]

This is the State-Variable Matrix model obtained from the Transfer Function model by the Special Controllable Canonical form approach, where: (a) the coefficients (1 and 2) of the numerator polynomial appear (with the same signs and order) as the row matrix, C; and (b) the coefficients (7 and 12) of the denominator polynomial appear (with opposite signs but the same order) as the first row of the system matrix, A.

Figure 2.74 Controllable canonical form block diagram.

(b) Figure 2.74 shows a block diagram representing the Transfer Function model in controllable canonical form.

(c) It is important to note that each state variable feeds back to the control input, u(t), through the coefficients of the system matrix A, obtained in (a).

Example 2.41 Consider the following Transfer Function models and find their corresponding State-Variable Matrix (State-Space) models.

(a) T(s) = \frac{s + 3}{\frac{1}{2}s^3 + 2s + 4}

(b) T(s) = \frac{2s + 3}{(s + 1)(3s^2 + 5s + 12)}

(c) T(s) = \frac{s + 9}{\frac{1}{4}s^3 + 3s + 7}

(d) T(s) = \frac{s + 7}{s^3 + 2s^2 + 2s}

(e) T(s) = \frac{s + 9}{s(s^2 + 3s + 2)}

(f) T(s) = \frac{(s + 10)(s^2 + s + 25)}{s^2(s + 3)(s^2 + s + 36)}
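Hand conversions such as these are easy to cross-check in code. The following is a minimal pure-Python sketch of the Special Controllable Canonical extraction (the function name special_ccf is invented for illustration, in place of MATLAB's tf2ss); applied to Example 2.40 it reproduces the matrices derived above:

```python
def special_ccf(num, den):
    """Special Controllable Canonical Form.

    num: [b1, ..., bn] (degree n-1 numerator, descending powers of s)
    den: [1, a1, ..., an] (monic degree-n denominator, descending powers)
    Returns (A, B, C, D) with the states stacked in descending order.
    """
    n = len(den) - 1
    assert den[0] == 1 and len(num) == n, "the three canonical conditions must hold"
    a = den[1:]
    # First row carries the negated denominator coefficients;
    # the remaining rows form a shifted identity.
    A = [[-ai for ai in a]] + [[1 if j == i else 0 for j in range(n)]
                               for i in range(n - 1)]
    B = [[1]] + [[0]] * (n - 1)
    C = [list(num)]   # numerator coefficients, same signs and order
    D = [[0]]
    return A, B, C, D

# Example 2.40: T(s) = (s + 2)/(s^2 + 7s + 12)
A, B, C, D = special_ccf([1, 2], [1, 7, 12])
```

Because both this form and MATLAB's tf2ss use descending state ordering, the two layouts coincide.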


Solution 2.41 (a) Consider the transfer function

T(s) = \frac{s + 3}{\frac{1}{2}s^3 + 2s + 4}

This transfer function has to be rewritten so that the coefficient of s^3 in the denominator is 1 and the coefficient (0) of s^2 is present in both the numerator and the denominator:

T(s) = \frac{s + 3}{\frac{1}{2}s^3 + 2s + 4} = \frac{2s + 6}{s^3 + 4s + 8} = \frac{0 \times s^2 + 2s + 6}{s^3 + 0 \times s^2 + 4s + 8}

Now that the transfer function is in controllable canonical form, the state-space matrices can be extracted:

A = \begin{bmatrix} 0 & -4 & -8 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 2 & 6 \end{bmatrix}, \quad D = 0

This is the State-Variable Matrix model obtained by the controllable canonical form approach.

(b) From the Transfer Function model, the State-Variable Matrix model can be obtained by expressing the Transfer Function model in controllable canonical form and thus deriving the canonical State-Variable Matrix model:

T(s) = \frac{2s + 3}{(s + 1)(3s^2 + 5s + 12)} = \frac{2s + 3}{3s^3 + 8s^2 + 17s + 12} = \frac{(2/3)s + 1}{s^3 + (8/3)s^2 + (17/3)s + 4} = \frac{0 \times s^2 + (2/3)s + 1}{s^3 + (8/3)s^2 + (17/3)s + 4}

A = \begin{bmatrix} -8/3 & -17/3 & -4 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}


C = \begin{bmatrix} 0 & 2/3 & 1 \end{bmatrix}, \quad D = [0]

This is the State-Variable Matrix model obtained by the controllable canonical form approach.

(c) From the Transfer Function model, the State-Variable Matrix model can be obtained by expressing the Transfer Function model in controllable canonical form and thus deriving the canonical State-Variable Matrix model:

T(s) = \frac{s + 9}{\frac{1}{4}s^3 + 3s + 7} = \frac{4(s + 9)}{4\left(\frac{1}{4}s^3 + 3s + 7\right)} = \frac{4s + 36}{s^3 + 12s + 28} = \frac{0 \times s^2 + 4s + 36}{s^3 + 0 \times s^2 + 12s + 28}

A = \begin{bmatrix} 0 & -12 & -28 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 4 & 36 \end{bmatrix}, \quad D = [0]

This is the State-Variable Matrix model obtained by the controllable canonical form approach.

(d) Consider the transfer function

T(s) = \frac{s + 7}{s^3 + 2s^2 + 2s}

Expressing the transfer function in controllable canonical form and extracting the state-space matrices is carried out as follows:

T(s) = \frac{0 \times s^2 + s + 7}{s^3 + 2s^2 + 2s + 0 \times s^0}

A = \begin{bmatrix} -2 & -2 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}


C = \begin{bmatrix} 0 & 1 & 7 \end{bmatrix}, \quad D = [0]

This is the State-Variable Matrix model obtained by the controllable canonical form approach.

(e) Consider the transfer function

T(s) = \frac{s + 9}{s(s^2 + 3s + 2)} = \frac{s + 9}{s^3 + 3s^2 + 2s} = \frac{0 \times s^2 + s + 9}{s^3 + 3s^2 + 2s + 0 \times s^0}

A = \begin{bmatrix} -3 & -2 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 & 9 \end{bmatrix}, \quad D = [0]

This is the State-Variable Matrix model obtained by the controllable canonical form approach.

(f) Consider the transfer function

T(s) = \frac{(s + 10)(s^2 + s + 25)}{s^2(s + 3)(s^2 + s + 36)}

The transfer function is expanded out, arranged in canonical form, and the state-space matrices are then extracted. This is carried out as follows:

T(s) = \frac{(s + 10)(s^2 + s + 25)}{s^2(s + 3)(s^2 + s + 36)} = \frac{s^3 + 11s^2 + 35s + 250}{s^5 + 4s^4 + 39s^3 + 108s^2} = \frac{0 \times s^4 + s^3 + 11s^2 + 35s + 250}{s^5 + 4s^4 + 39s^3 + 108s^2 + 0 \times s + 0}

A = \begin{bmatrix} -4 & -39 & -108 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 & 11 & 35 & 250 \end{bmatrix}, \quad D = [0]

This is the State-Variable Matrix model obtained by the controllable canonical form approach.

2.15.3 GENERAL CONVERSION TO A STATE-VARIABLE MATRIX MODEL

There is a technique that can be used to convert any Transfer Function model of a general dynamic system to a State-Variable Matrix model. In this approach, no conditions are imposed on the format of the transfer function, and it does not need to be adjusted into any particular structure. Consider a generic system transfer function, which is a ratio of two polynomials:

T(s) = \frac{N(s)}{D(s)} = \frac{b(s)}{a(s)} = \frac{b_0 s^n + b_1 s^{n-1} + b_2 s^{n-2} + \cdots + b_n}{a_0 s^n + a_1 s^{n-1} + a_2 s^{n-2} + \cdots + a_n}

It is instructive to note that there are no conditions on the coefficients of the numerator and denominator polynomials, unlike in the Special Controllable Canonical form. A quick technique for obtaining the State-Variable Matrix model from any given Transfer Function model can be developed in the following way. The transfer function is broken into two transfer functions by introducing a dummy variable X(s) such that:

T(s) = \frac{C(s)}{R(s)} = \frac{C(s)}{X(s)} \times \frac{X(s)}{R(s)} = \frac{N(s)}{D(s)} \implies \frac{C(s)}{X(s)} \times \frac{X(s)}{R(s)} = \frac{N(s)}{1} \times \frac{1}{D(s)}

This gives us two transfer functions as follows:

\frac{C(s)}{X(s)} = \frac{N(s)}{1}   (2.63)

\frac{X(s)}{R(s)} = \frac{1}{D(s)}   (2.64)

These two transfer functions can be used to convert any Transfer Function model T(s) into a State-Variable Matrix model (A, B, C, D). Equation 2.64 is used to determine the matrices A and B, while Equation 2.63 produces the matrices C and D.
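The dummy-variable procedure can be sketched as a short routine for the strictly proper case. This is a hedged pure-Python illustration (the helper name xs_conversion is invented): Equation 2.64 supplies the state equation, keeping a_0 as in Examples 2.42 to 2.44, and Equation 2.63 supplies the output row:

```python
def xs_conversion(num, den):
    """Convert T(s) = N(s)/D(s) to (A, B, C, D) via the dummy variable X(s).

    num: numerator coefficients, descending powers, degree < n
    den: [a0, a1, ..., an], descending powers; a0 need not be 1
    States: x1 = x, x2 = x', ..., xn = x^(n-1) (ascending order).
    """
    n = len(den) - 1
    a0 = den[0]
    assert len(num) <= n, "strictly proper transfer function assumed (D = 0)"
    # Equation 2.64: a0 x^(n) + a1 x^(n-1) + ... + an x = r
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    A.append([-den[n - j] / a0 for j in range(n)])
    B = [[0.0]] * (n - 1) + [[1.0 / a0]]
    # Equation 2.63: c is formed from the numerator coefficients,
    # with C_k multiplying the (k-1)-th derivative of x.
    padded = [0.0] * (n - len(num)) + list(num)   # length n, descending powers
    C = [[padded[n - 1 - j] for j in range(n)]]
    D = [[0.0]]
    return A, B, C, D

# Example 2.42 below: T(s) = (7s + 6)/(13s^3 + 10s^2)
A, B, C, D = xs_conversion([7, 6], [13, 10, 0, 0])
```

For this input the routine reproduces the hand result of Example 2.42: the last row of A is [0, 0, -10/13], B ends in 1/13, and C = [6, 7, 0].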


Figure 2.75 Transfer function model.

Solution 2.42 Given the system represented in Figure 2.75, the objective is to transform from the Transfer Function model to the State-Variable Matrix model. A dummy variable X(s) is introduced such that:

T(s) = \frac{C(s)}{R(s)} = \frac{C(s)}{X(s)} \times \frac{X(s)}{R(s)} = \frac{7s + 6}{13s^3 + 10s^2}

\implies \frac{X(s)}{R(s)} = \frac{1}{13s^3 + 10s^2} \quad \text{and} \quad \frac{C(s)}{X(s)} = \frac{7s + 6}{1}

Hence, the block diagram can be redrawn as follows:

From the first part of the new block diagram it follows that:

\frac{X(s)}{R(s)} = \frac{1}{13s^3 + 10s^2}

Cross-multiplying gives

13\dddot{x} + 10\ddot{x} = r

Introducing three new variables leads to

x_1 = x, \quad x_2 = \dot{x}, \quad x_3 = \ddot{x}

Differentiating each of the new equations gives the state-variable equations:

\dot{x}_1 = x_2
\dot{x}_2 = x_3
\dot{x}_3 = \dddot{x} = \frac{1}{13}(r - 10x_3) = -\frac{10}{13}x_3 + \frac{r}{13}


  0 x˙1 0 x˙2  =   x˙3 0

    0 0 x1 1  x  +  0  r  2   −10 1 x3 0 13 13 Evaluating the second part of the new block diagram gives the output equation, thus completing the State-Variable Matrix model: 1 0

C(s) 7s + 6 = X(s) 1 =⇒ C(s) = (7s + 6)X(s) =⇒ c = 7x˙ + 6x = 7x2 + 6x1

 =⇒ c = 6

7

   x1 0 x2  + [0]r x3

This example shows how, for a dynamic system, you can transform from the Transfer Function model to the State-Variable Matrix model. Example 2.43 Consider the following Transfer Function model and derive the State-Variable Matrix model.
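Before tackling Example 2.43, note that the model derived in Example 2.42 can be sanity-checked numerically: evaluating C(sI - A)^{-1}B + D at a few sample values of s should reproduce T(s) = (7s + 6)/(13s^3 + 10s^2). A sketch assuming NumPy is available:

```python
import numpy as np

# State-space model derived in Example 2.42.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, -10/13]])
B = np.array([[0], [0], [1/13]])
C = np.array([[6, 7, 0]])

def tf_value(s):
    # Evaluate T(s) = C (sI - A)^{-1} B for the model above (D = 0).
    return (C @ np.linalg.solve(s * np.eye(3) - A, B))[0, 0]

for s in (1.0, 2.0, 5.0):
    expected = (7*s + 6) / (13*s**3 + 10*s**2)
    assert abs(tf_value(s) - expected) < 1e-9
```

The check passing at several frequencies is strong (though not conclusive) evidence that the realisation is correct.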

Solution 2.43 Given the system represented above, the objective is to transform from the Transfer Function model to the State-Variable Matrix model, where the Transfer Function model is given by:

T(s) = \frac{C(s)}{R(s)} = \frac{8s + 10}{s^2 + 5s + 13}

The aforementioned diagram can be redrawn using the dummy variable X(s). Rewrite the equation from the first block diagram as follows:

\frac{X(s)}{R(s)} = \frac{1}{s^2 + 5s + 13}

After cross-multiplying, we obtain

\ddot{x} + 5\dot{x} + 13x = r

The next step is to introduce a set of new variables as follows:

x_1 = x, \quad x_2 = \dot{x}


And then differentiate each of the defined equations:

\dot{x}_1 = x_2
\dot{x}_2 = \ddot{x} = r - 5x_2 - 13x_1

This leads to the state-variable equation

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -13 & -5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} r

Now evaluating the second part of the block diagram gives

C(s) = (8s + 10)X(s) \implies c = 8\dot{x} + 10x = 8x_2 + 10x_1

Hence the output equation is obtained, thus completing the State-Variable Matrix model:

c = \begin{bmatrix} 10 & 8 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + [0]r

This example shows how, for a dynamic system, you can transform from the Transfer Function model to the State-Variable Matrix model.

Example 2.44 Consider the Transfer Function model in the following diagram and derive the State-Variable Matrix model.
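A quick cross-check of Example 2.43: for a model in this companion form, the characteristic polynomial of A equals the transfer function denominator. A sketch assuming NumPy (np.poly returns the characteristic polynomial of a square matrix):

```python
import numpy as np

# System matrix from Example 2.43.
A = np.array([[0, 1],
              [-13, -5]])

# Its characteristic polynomial should match s^2 + 5s + 13,
# the denominator of T(s) = (8s + 10)/(s^2 + 5s + 13).
char_poly = np.poly(A)
assert np.allclose(char_poly, [1, 5, 13])
```

The eigenvalues of A are complex conjugates here, so the system poles are a lightly damped complex pair; the real coefficients [1, 5, 13] are recovered exactly (up to rounding).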

Solution 2.44 Rewrite the equation from the block diagram as follows:

T(s) = \frac{C(s)}{R(s)} = \frac{C(s)}{X(s)} \times \frac{X(s)}{R(s)} = \frac{100}{s^4 + 20s^3 + 10s^2 + 7s + 100}

\implies \frac{X(s)}{R(s)} = \frac{1}{s^4 + 20s^3 + 10s^2 + 7s + 100} \quad \text{and} \quad \frac{C(s)}{X(s)} = \frac{100}{1}

Consider the equation

\frac{X(s)}{R(s)} = \frac{1}{s^4 + 20s^3 + 10s^2 + 7s + 100}


After cross-multiplying, we obtain

\ddddot{x} + 20\dddot{x} + 10\ddot{x} + 7\dot{x} + 100x = r

To complete the conversion, we introduce a new set of variables defined as follows:

x_1 = x, \quad x_2 = \dot{x}, \quad x_3 = \ddot{x}, \quad x_4 = \dddot{x}

We now differentiate each of the variables to obtain the State-Variable Matrix model:

\dot{x}_1 = x_2
\dot{x}_2 = x_3
\dot{x}_3 = x_4
\dot{x}_4 = r - 20x_4 - 10x_3 - 7x_2 - 100x_1

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -100 & -7 & -10 & -20 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} r

Also, consider the second block equation given by

\frac{C(s)}{X(s)} = \frac{100}{1} \implies C(s) = 100X(s) \implies c = 100x = 100x_1

\implies c = \begin{bmatrix} 100 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + [0]r

This gives the complete State-Variable Matrix model of the system. It is instructive to note that using a dummy variable X(s) will produce a State-Variable Matrix model for any Transfer Function model. For the case where, in the denominator polynomial of the Transfer Function model, the coefficient of s^n (the highest power of s) is 1, this method produces the same result as the General Controllable Canonical method. This can be discerned from Examples 2.43 and 2.44. Since every transfer function can be expressed with 1 as the coefficient of s^n in its denominator polynomial, it follows that this technique will always give the same result as the General Controllable Canonical method if the coefficient of s^n is made 1 before the application of the technique. Redoing Example 2.42 with this adjustment demonstrates the point.

Example 2.45 Consider the Transfer Function model

T(s) = \frac{C(s)}{R(s)} = \frac{7s + 6}{13s^3 + 10s^2}


First make the coefficient of s^n in the denominator polynomial 1, and then derive the State-Variable Matrix model.

Solution 2.45

T(s) = \frac{C(s)}{R(s)} = \frac{C(s)}{X(s)} \times \frac{X(s)}{R(s)} = \frac{7s + 6}{13s^3 + 10s^2} = \frac{(7/13)s + 6/13}{s^3 + (10/13)s^2}

\implies \frac{X(s)}{R(s)} = \frac{1}{s^3 + (10/13)s^2} \quad \text{and} \quad \frac{C(s)}{X(s)} = \frac{(7/13)s + 6/13}{1}

From the first part of the new block diagram it follows that:

\frac{X(s)}{R(s)} = \frac{1}{s^3 + (10/13)s^2}

Cross-multiplying gives

\dddot{x} + (10/13)\ddot{x} = r

Introducing three new variables leads to

x_1 = x, \quad x_2 = \dot{x}, \quad x_3 = \ddot{x}

Differentiating each of the new equations gives the state-variable equations:

\dot{x}_1 = x_2
\dot{x}_2 = x_3
\dot{x}_3 = \dddot{x} = r - (10/13)x_3 = -\frac{10}{13}x_3 + r

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -\frac{10}{13} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} r


Evaluating the second part of the new block diagram gives the output equation as follows:

\frac{C(s)}{X(s)} = \frac{(7/13)s + 6/13}{1} \implies C(s) = \left[(7/13)s + 6/13\right]X(s) \implies c = \frac{7}{13}\dot{x} + \frac{6}{13}x = \frac{7}{13}x_2 + \frac{6}{13}x_1

\implies c = \begin{bmatrix} \frac{6}{13} & \frac{7}{13} & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + [0]r

This completes the State-Variable Matrix model of the system. Now using the General Controllable Canonical method, as stipulated by Equations 2.59 to 2.62, the State-Variable Matrix model matrices are obtained as:

A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -\frac{10}{13} \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} \frac{6}{13} & \frac{7}{13} & 0 \end{bmatrix}, \quad D = [0]

Clearly, this State-Variable Matrix model is the same as that obtained in the preceding derivation. This demonstrates that using a dummy variable X(s) will produce a State-Variable Matrix model which is the same as that obtained by the General Controllable Canonical method, provided the coefficient of s^n is made 1 before the application of the technique. It is important to emphasise that there are various techniques and methods of obtaining the State-Variable Matrix model, with different outcomes, for the same dynamic system, because the State-Variable Matrix model is not unique for any given system.

2.16 STATE-VARIABLE MATRIX MODEL TO A TRANSFER FUNCTION MODEL

Sometimes it is necessary to move from the state-space domain to the frequency domain (Laplace domain). To achieve this, the State-Variable Matrix (State-Space) model is converted into the Transfer Function model. It is important to note that the State-Space model is not unique, whereas the Transfer Function model of a system is unique. From the state-variable matrices, the Transfer Function model is obtained. Consider a dynamic system with the following State-Variable Matrix model:

\dot{x}(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

Taking Laplace transforms of both these equations leads to

sX(s) = AX(s) + BU(s)   (2.65)
Y(s) = CX(s) + DU(s)   (2.66)

From Equation 2.65, it can be seen that

[sI - A]X(s) = BU(s) \implies X(s) = (sI - A)^{-1}BU(s)


which, when used in Equation 2.66, leads to

Y(s) = C(sI - A)^{-1}BU(s) + DU(s) = \left[C(sI - A)^{-1}B + D\right]U(s)

T(s) = \frac{Y(s)}{U(s)} \implies T(s) = C(sI - A)^{-1}B + D

This is the transfer function matrix of a dynamic (continuous-time) system. For SISO systems, where B is a column vector and C is a row vector, this transfer function matrix becomes a scalar function, known as the Transfer Function model of the dynamic system. This model is used extensively in Chapter 7, which covers State-Space Design Methods. The most tedious part of this derivation of the Transfer Function model is obtaining the inverse matrix (sI - A)^{-1}. This matrix is called the system Resolvent Matrix, such that

\Phi(s) = (sI - A)^{-1}

The import and impact of the Resolvent Matrix will be explored in more detail in Chapter 7. This chapter focusses on obtaining this inverse matrix and using it to establish the Transfer Function model. Hence, the process of calculating the inverse of an n \times n matrix is reviewed in the following section.

2.16.1 INVERSE OF A GENERAL n \times n MATRIX

There is extensive use of inverses of matrices in the design and analysis of control systems. For a general n \times n matrix A, the inverse, A^{-1}, is given by:

A^{-1} = \frac{1}{\det A} \times \operatorname{adj} A, \quad \text{where} \quad \operatorname{adj} A = C^T

The entity C is the matrix of the cofactors of matrix A, while \det A is the determinant of matrix A. The determination of the inverse of a generic n \times n matrix can be illustrated by using a 3 by 3 matrix. Consider a generic 3 by 3 matrix given by

A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}

Hence the determinant of A is obtained as follows:

\det A = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix}

In doing this, you can choose any row or column; the determinant obtained is the same. However, it is important to remember the use of the alternating signs (+ - +). Here we have chosen the row (a b c) and applied the alternating signs.


For the matrix A, the matrix of minors M is given by

M = \begin{bmatrix} \begin{vmatrix} e & f \\ h & i \end{vmatrix} & \begin{vmatrix} d & f \\ g & i \end{vmatrix} & \begin{vmatrix} d & e \\ g & h \end{vmatrix} \\ \begin{vmatrix} b & c \\ h & i \end{vmatrix} & \begin{vmatrix} a & c \\ g & i \end{vmatrix} & \begin{vmatrix} a & b \\ g & h \end{vmatrix} \\ \begin{vmatrix} b & c \\ e & f \end{vmatrix} & \begin{vmatrix} a & c \\ d & f \end{vmatrix} & \begin{vmatrix} a & b \\ d & e \end{vmatrix} \end{bmatrix}

Each element in a square matrix has a place sign. The place sign of the element in the first row and the first column is always +. The place signs then alternate from + to - as we move across the rows and down the columns. The matrix of place signs is given by

\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}

Each element in a square matrix has its own cofactor. The cofactor is the product of the element's place sign and minor. Hence the matrix of cofactors C is given by:

C = \begin{bmatrix} +\begin{vmatrix} e & f \\ h & i \end{vmatrix} & -\begin{vmatrix} d & f \\ g & i \end{vmatrix} & +\begin{vmatrix} d & e \\ g & h \end{vmatrix} \\ -\begin{vmatrix} b & c \\ h & i \end{vmatrix} & +\begin{vmatrix} a & c \\ g & i \end{vmatrix} & -\begin{vmatrix} a & b \\ g & h \end{vmatrix} \\ +\begin{vmatrix} b & c \\ e & f \end{vmatrix} & -\begin{vmatrix} a & c \\ d & f \end{vmatrix} & +\begin{vmatrix} a & b \\ d & e \end{vmatrix} \end{bmatrix}

From this matrix of cofactors C, the adjoint matrix, \operatorname{adj} A, is obtained by making the rows of C become columns, that is:

\operatorname{adj} A = C^T

This is because the adjoint of a matrix is the transpose of its cofactor matrix. Thus the inverse of a 3 by 3 matrix is established in conformity with the general formula:

A^{-1} = \frac{1}{\det A} \times \operatorname{adj} A

Consider the following 3 by 3 matrix:

A = \begin{bmatrix} 7 & 2 & 1 \\ 0 & 3 & -1 \\ -3 & 4 & -2 \end{bmatrix}

If the row (7 2 1) is chosen, the determinant of A is obtained as follows:

\det A = +7\begin{vmatrix} 3 & -1 \\ 4 & -2 \end{vmatrix} - 2\begin{vmatrix} 0 & -1 \\ -3 & -2 \end{vmatrix} + 1\begin{vmatrix} 0 & 3 \\ -3 & 4 \end{vmatrix} = 1

If the column (7\ 0\ {-3})^T is chosen, the determinant of A is obtained as follows:

\det A = +7\begin{vmatrix} 3 & -1 \\ 4 & -2 \end{vmatrix} - 0\begin{vmatrix} 2 & 1 \\ 4 & -2 \end{vmatrix} + (-3)\begin{vmatrix} 2 & 1 \\ 3 & -1 \end{vmatrix} = 1


These two exercises show that, in computing the determinant \det A, it does not matter which row or column is used; the same result is obtained. From the definition of minors and cofactors, it can be shown that:

M = \begin{bmatrix} -2 & -3 & 9 \\ -8 & -11 & 34 \\ -5 & -7 & 21 \end{bmatrix}, \quad C = \begin{bmatrix} -2 & 3 & 9 \\ 8 & -11 & -34 \\ -5 & 7 & 21 \end{bmatrix} \implies C^T = \begin{bmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{bmatrix}

Therefore the inverse of the 3 by 3 matrix is given by

A^{-1} = \frac{1}{\det A} \times \operatorname{adj} A = \frac{1}{\det A} \times C^T = \frac{1}{1} \times \begin{bmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{bmatrix} = \begin{bmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{bmatrix}

This is the inverse of matrix A. The result can be checked and proved by showing that A^{-1}A = AA^{-1} = I:

\begin{bmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{bmatrix} \times \begin{bmatrix} 7 & 2 & 1 \\ 0 & 3 & -1 \\ -3 & 4 & -2 \end{bmatrix} = \begin{bmatrix} 7 & 2 & 1 \\ 0 & 3 & -1 \\ -3 & 4 & -2 \end{bmatrix} \times \begin{bmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Although we have demonstrated how to find the inverse of a 3 by 3 matrix, the generic method that uses minors and cofactors is applicable to any square matrix of arbitrary dimensions n by n. Obtaining the inverse of a 2 by 2 matrix is a relatively easy task; however, its derivation is just a special case of the general cofactor-based formula. Consider the following matrix:

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}

The inverse is obtained from the general formula:

A^{-1} = \frac{1}{\det A} \times \operatorname{adj} A


\implies A^{-1} = \frac{1}{\det A} \times C^T = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}

2.16.2 MODEL CONVERSION USING MATRIX INVERSION
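In practice such inverses are computed numerically rather than by hand. As a quick cross-check of the 3 by 3 inverse worked out in Section 2.16.1, a sketch assuming NumPy:

```python
import numpy as np

# The 3 by 3 matrix inverted by the cofactor method in Section 2.16.1.
A = np.array([[7, 2, 1],
              [0, 3, -1],
              [-3, 4, -2]])

# det A = 1, so the inverse has integer entries.
assert np.isclose(np.linalg.det(A), 1.0)

A_inv = np.linalg.inv(A)
expected = np.array([[-2, 8, -5],
                     [3, -11, 7],
                     [9, -34, 21]])
assert np.allclose(A_inv, expected)
assert np.allclose(A @ A_inv, np.eye(3))
```

The same np.linalg.inv (or, better, np.linalg.solve) call handles the (sI - A) inversions used in the model conversions that follow, evaluated at numeric values of s.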

Example 2.46 Find the Transfer Function model for a dynamic system with the following State-Variable Matrix model:

A = \begin{bmatrix} -2 & -2 \\ -3 & -5 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 5 \end{bmatrix}, \quad C = \begin{bmatrix} 3 & 2 \end{bmatrix}, \quad D = [0]

Solution 2.46 The objective is to obtain the Transfer Function model. This is achieved by using the following equation:

T(s) = C(sI - A)^{-1}B + D

where its components are established as follows:

sI - A = \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} -2 & -2 \\ -3 & -5 \end{bmatrix} = \begin{bmatrix} s + 2 & 2 \\ 3 & s + 5 \end{bmatrix}

\det(sI - A) = (s + 2)(s + 5) - 6 = s^2 + 7s + 4

It is important to note that the system determinant, \det(sI - A), can also be used to find the poles of the dynamic system. In this case, the poles are:

p_1, p_2 = -\frac{7}{2} \pm \frac{\sqrt{33}}{2} = -0.628 \text{ and } -6.372

(sI - A)^{-1} = \frac{\operatorname{adj}(sI - A)}{\det(sI - A)} \implies (sI - A)^{-1} = \frac{1}{s^2 + 7s + 4}\begin{bmatrix} s + 5 & -2 \\ -3 & s + 2 \end{bmatrix}


Thus, we obtain the system transfer function as follows:

T(s) = C(sI - A)^{-1}B + D

\implies T(s) = \begin{bmatrix} 3 & 2 \end{bmatrix}\frac{1}{s^2 + 7s + 4}\begin{bmatrix} s + 5 & -2 \\ -3 & s + 2 \end{bmatrix}\begin{bmatrix} 1 \\ 5 \end{bmatrix} + 0 = \begin{bmatrix} 3 & 2 \end{bmatrix}\frac{1}{s^2 + 7s + 4}\begin{bmatrix} s - 5 \\ 5s + 7 \end{bmatrix} = \frac{3s - 15 + 10s + 14}{s^2 + 7s + 4} = \frac{13s - 1}{s^2 + 7s + 4}

This is the Transfer Function model of the dynamic system. This example shows how we can transform from a State-Variable Matrix model to the Transfer Function model. For any given dynamic system, the Transfer Function model is unique.

Example 2.47 Consider the following State-Variable Matrix model of a dynamic system, and find the Transfer Function model:

\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -3 & -2 & -5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix} r

y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}

Solution 2.47 Given this model, the unique Transfer Function model can be derived as follows:

\dot{x} = Ax + Bu, \quad y = Cx + Du \implies T(s) = C(sI - A)^{-1}B + D

Evaluate (sI - A):

sI - A = \begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ 3 & 2 & s + 5 \end{bmatrix}

The task is to obtain (sI - A)^{-1}. We can start by finding the determinant:

\det(sI - A) = s\begin{vmatrix} s & -1 \\ 2 & s + 5 \end{vmatrix} + 1\begin{vmatrix} 0 & -1 \\ 3 & s + 5 \end{vmatrix} + 0\begin{vmatrix} 0 & s \\ 3 & 2 \end{vmatrix}

\implies \det(sI - A) = s^3 + 5s^2 + 2s + 3


Using the cofactor method to find the adjoint matrix of (sI - A) leads to

\operatorname{adj}(sI - A) = \begin{bmatrix} s^2 + 5s + 2 & s + 5 & 1 \\ -3 & s^2 + 5s & s \\ -3s & -3 - 2s & s^2 \end{bmatrix}

We can now compute the transfer function:

T(s) = C(sI - A)^{-1}B + D = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\frac{1}{s^3 + 5s^2 + 2s + 3}\begin{bmatrix} s^2 + 5s + 2 & s + 5 & 1 \\ -3 & s^2 + 5s & s \\ -3s & -3 - 2s & s^2 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 10 \end{bmatrix} + 0 = \frac{10}{s^3 + 5s^2 + 2s + 3}

This is the Transfer Function model of the system, which is unique.

Given any State-Space model:

\dot{x} = Ax + Bu
y = Cx + Du

2.16.3 STATE-SPACE TO TRANSFER FUNCTION USING LAPLACE TRANSFORMS

There are two ways of obtaining the Transfer Function model from the State-Space model:

1. Matrix algebra (as already discussed), using the equation T(s) = C(sI - A)^{-1}B + D.
2. Taking Laplace transforms of the state-variable equations and then deriving T(s).

The second method is summarised as follows:

• Work backwards from the State-Space model to obtain the state-variable equations.
• Take the Laplace transforms of these equations.
• Eliminate the Laplace transforms of the state variables, leaving only those of the input and output, U(s) and Y(s).
• Obtain the Transfer Function model from the equation:

T(s) = \frac{Y(s)}{U(s)}

2.16.3.1 Illustration of Model Conversion Using Laplace Transforms

Consider a dynamic system represented by the following State-Space model:

A = \begin{bmatrix} -10 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 5 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} -5 & 0 \end{bmatrix}, \quad D = [5]

Working backwards, the state-variable equations are obtained as follows:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -10 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 5 \\ 1 \end{bmatrix}u

y = \begin{bmatrix} -5 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + [5]u


This means the state-variable equations are given by

\dot{x}_1 = -10x_1 + x_2 + 5u
\dot{x}_2 = -x_1 + u
y = -5x_1 + 5u

The next step is taking the Laplace transforms of these equations and eliminating X_1(s) and X_2(s), leaving only Y(s) and U(s):

sX_1(s) = -10X_1(s) + X_2(s) + 5U(s)   (i)
sX_2(s) = -X_1(s) + U(s)   (ii)
Y(s) = -5X_1(s) + 5U(s)   (iii)

From (i) and (ii) it follows that

X_2(s) = (s + 10)X_1(s) - 5U(s)
\implies s\left[(s + 10)X_1(s) - 5U(s)\right] = -X_1(s) + U(s)
\implies X_1(s)\left[s(s + 10) + 1\right] = U(s)\left[5s + 1\right]
\implies X_1(s) = \left[\frac{5s + 1}{s(s + 10) + 1}\right]U(s)

Substituting this outcome into (iii) yields

Y(s) = -5\left[\frac{5s + 1}{s(s + 10) + 1}\right]U(s) + 5U(s) = \left[\frac{-5(5s + 1)}{s(s + 10) + 1} + 5\right]U(s)

Given that T(s) = \frac{Y(s)}{U(s)}, it follows that

T(s) = \frac{-5(5s + 1)}{s(s + 10) + 1} + 5 = \frac{-25s - 5 + 5s^2 + 50s + 5}{s^2 + 10s + 1} = \frac{5s^2 + 25s}{s^2 + 10s + 1}

Since the Transfer Function model of any dynamic system is unique, this result is the same as that obtained using the matrix algebra expression T(s) = C(sI - A)^{-1}B + D. Proof of this is left as an exercise for the reader.

Example 2.48 Consider the dynamic system represented by the Block Diagram model in Figure 2.76.

(a) Find the Transfer Function model of the system.
(b) Express this Transfer Function model in controllable canonical form and deduce the State-Variable Matrix model.
(c) By directly analysing the block diagram in Figure 2.76, develop the state equations in terms of the given state variables. Deduce the State-Variable Matrix model.


Figure 2.76 Block diagram model.

(d) Explain why the two models obtained from (b) and (c) are different.
(e) From the State-Variable Matrix model in (c), use matrix algebra to find the Transfer Function model of the system.
(f) Is the Transfer Function model obtained in (e) the same as that from (a)? Explain why this is the case.
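Incidentally, the proof left as an exercise at the end of Section 2.16.3.1 can be checked numerically: the Laplace-transform result T(s) = (5s^2 + 25s)/(s^2 + 10s + 1) should agree with C(sI - A)^{-1}B + D for the given matrices. A sketch assuming NumPy:

```python
import numpy as np

# State-space model from the illustration in Section 2.16.3.1.
A = np.array([[-10, 1], [-1, 0]])
B = np.array([[5], [1]])
C = np.array([[-5, 0]])
D = 5.0

def tf_value(s):
    # T(s) = C (sI - A)^{-1} B + D.
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0] + D

for s in (1.0, 2.0, 4.0):
    assert abs(tf_value(s) - (5*s**2 + 25*s) / (s**2 + 10*s + 1)) < 1e-9
```

Agreement at several sample frequencies is a quick (though not exhaustive) confirmation that the two derivations coincide.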

Solution 2.48 (a) For the given diagram the objective is to find the system Transfer Function model, which is given by

T(s) = \frac{Y(s)}{U(s)}

There are several methods to accomplish this. The first method involves deriving this transfer function from the system's input and output differential equations, that is, obtaining the Transfer Function model from the Input-Output model using Laplace transforms. This is achieved as follows:

y = x_1 + x_2 + x_4

\implies Y(s) = X_1(s) + X_2(s) + X_4(s) = X_1(s) + \frac{X_1(s)}{s} + 2sX_1(s) = X_1(s)\left[1 + 2s + \frac{1}{s}\right]

\frac{U(s) - X_3(s)}{s + 4} = X_4(s) = 2sX_1(s)

\implies U(s) = (s + 4)2sX_1(s) + X_3(s) = (s + 4)2sX_1(s) + \frac{X_4(s) + X_2(s)}{s} = (s + 4)2sX_1(s) + \frac{2sX_1(s) + X_1(s)/s}{s}

\implies U(s) = X_1(s)\left[2 + \frac{1}{s^2} + 2s^2 + 8s\right]


The transfer function T(s) is obtained as follows:

T(s) = \frac{Y(s)}{U(s)} = \frac{X_1(s)\left[1 + 2s + \dfrac{1}{s}\right]}{X_1(s)\left[2 + \dfrac{1}{s^2} + 2s^2 + 8s\right]} = \frac{2s^3 + s^2 + s}{2s^4 + 8s^3 + 2s^2 + 1}

This is the Transfer Function model for the system, and there is only one such representation for any system; it is unique for every system.

(b) One way of obtaining the State-Variable Matrix model for a system is expressing its Transfer Function model in controllable canonical form as follows:

T(s) = \frac{2s^3 + s^2 + s}{2s^4 + 8s^3 + 2s^2 + 1} = \frac{s^3 + 0.5s^2 + 0.5s}{s^4 + 4s^3 + s^2 + 0.5} = \frac{s^3 + 0.5s^2 + 0.5s + 0 \times s^0}{s^4 + 4s^3 + s^2 + 0 \times s^1 + 0.5}

Thus, the canonical state-space matrices are obtained as:

A = \begin{bmatrix} -4 & -1 & 0 & -0.5 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0.5 & 0.5 & 0 \end{bmatrix}, \quad D = [0]

This is the complete canonical State-Variable Matrix model of the system. However, it is important to state that this representation is not unique. As indicated earlier, the State-Variable Matrix model of a dynamic system is not unique.

(c) The State-Variable Matrix model can also be determined by establishing the state-variable equations of the system (i.e., finding the derivatives of the state variables). From the information in the given diagram, the following can be deduced:


\frac{X_4(s)}{2s} = X_1(s) \implies \dot{x}_1 = \frac{x_4}{2}

\frac{X_1(s)}{s} = X_2(s) \implies sX_2(s) = X_1(s) \implies \dot{x}_2 = x_1

\frac{U(s) - X_3(s)}{s + 4} = X_4(s) \implies U(s) - X_3(s) = sX_4(s) + 4X_4(s) \implies \dot{x}_4 = u - x_3 - 4x_4

X_3(s) = \frac{X_4(s) + X_2(s)}{s} \implies sX_3(s) = X_4(s) + X_2(s) \implies \dot{x}_3 = x_4 + x_2

The summary of the derivatives of the four state variables (the state-variable equations) is thus given as follows:

\dot{x}_1 = \frac{x_4}{2}
\dot{x}_2 = x_1
\dot{x}_3 = x_2 + x_4
\dot{x}_4 = -x_3 - 4x_4 + u

The system output equation is given by:

y = x_1 + x_2 + x_4

The state-space matrices (State-Variable Matrix model) can then be obtained from these five equations by extracting the coefficients with respect to the state variables and the input as follows:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = A\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + Bu, \quad y = C\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + Du

A = \begin{bmatrix} 0 & 0 & 0 & 0.5 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & -1 & -4 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}


C = \begin{bmatrix} 1 & 1 & 0 & 1 \end{bmatrix}, \quad D = [0]

This is the complete State-Variable Matrix model for the system. It is prudent to note that it is different from the one obtained in (b).

(d) The State-Variable Matrix model obtained in (b) is different from the one in (c). This is because, for any system, the choice of the State-Variable Matrix model matrices A, B, C, D is not unique; an infinite set of options exists. However, the system transfer function, that is, the Transfer Function model, is unique.

(e) In addition to deriving the Transfer Function model from the Input-Output model as carried out in (a), there is another way of achieving the same outcome. Starting with a State-Variable Matrix model, the Transfer Function model can be obtained by using the following expression:

T(s) = C(sI - A)^{-1}B + D

Using this formula, the time-consuming part is always getting the inverse of the matrix (sI - A). In this case, it is a 4 by 4 matrix!

s 0 sI − A =  0 0

  0 0 0 0 1 s 0 0 − 0 s 0 0 0 0 0 s

0 0 1 0

  s 0 0.5 −1 0 0 = 0 1 0 0 −1 −4

 0 0 −0.5 s 0 0   −1 s −1  0 1 s+4

The inverse, (sI − A)⁻¹, is obtained using the general cofactor method discussed in previous sections. Since it is a 4 by 4 matrix, the exercise is tedious but doable. Alternatively, we can use the row operations method, where the aim is to convert [(sI − A)|I] to [I|(sI − A)⁻¹]. This way, the inverse, (sI − A)⁻¹, is obtained. Nonetheless, here we proceed to use the general cofactor method as follows:

(sI − A)⁻¹ = (1/det(sI − A)) × adj(sI − A)

where adj(sI − A) = Cᵀ (the transpose of the cofactor matrix of (sI − A)). Establishing the cofactor matrices:

         [ s    0    0   −0.5]
sI − A = [−1    s    0    0  ]
         [ 0   −1    s   −1  ]
         [ 0    0    1   s+4 ]

Expanding along the first row:

det(sI − A) = s·det[s 0 0; −1 s −1; 0 1 s+4] + 0.5·det[−1 s 0; 0 −1 s; 0 0 1]

= s²(s² + 4s + 1) + 0.5(1 − s × 0)

⇒ det(sI − A) = s⁴ + 4s³ + s² + 0.5


Design and Analysis of Control Systems

Using the obtained cofactor matrix and determinant of (sI − A), the inverse of (sI − A) can be established. Hence the system transfer function is obtained as follows:

T(s) = C(sI − A)⁻¹B + D

⇒ T(s) = [C adj(sI − A) B]/det(sI − A) + 0

= (s³ + 0.5s² + 0.5s)/(s⁴ + 4s³ + s² + 0.5)

= (2s³ + s² + s)/(2s⁴ + 8s³ + 2s² + 1)
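The derivation in part (e) can be cross-checked without any symbolic matrix inversion. The following pure-Python sketch (illustrative, not from the book) mirrors MATLAB's ss2tf by running the Faddeev–LeVerrier recursion, which generates det(sI − A) and adj(sI − A) together:

```python
# Pure-Python sketch of ss2tf via the Faddeev-LeVerrier recursion:
# M_1 = I; c = -tr(A M_k)/k; M_{k+1} = A M_k + c I.
# det(sI - A) = s^n + c_{n-1} s^{n-1} + ... and adj(sI - A) = sum M_k s^{n-k}.
from fractions import Fraction


def mat_mul(X, Y):
    """Multiply two square matrices stored as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]


def ss2tf(A, B, C, D):
    """Transfer function C(sI - A)^-1 B + D as (num, den) coefficient
    lists in descending powers of s."""
    n = len(A)
    I = [[Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    M, den, num = I, [Fraction(1)], []
    for k in range(1, n + 1):
        # coefficient of s^(n-k) in C adj(sI - A) B
        num.append(sum(C[i] * M[i][j] * B[j]
                       for i in range(n) for j in range(n)))
        AM = mat_mul(A, M)
        c = -sum(AM[i][i] for i in range(n)) * Fraction(1, k)
        den.append(c)
        M = [[AM[i][j] + c * I[i][j] for j in range(n)] for i in range(n)]
    # fold in the direct feed-through D (zero for this system)
    num = [D * den[0]] + [D * den[k] + num[k - 1] for k in range(1, n + 1)]
    return num, den


A = [[0, 0, 0, Fraction(1, 2)],
     [1, 0, 0, 0],
     [0, 1, 0, 1],
     [0, 0, -1, -4]]
B = [0, 0, 0, 1]
C = [1, 1, 0, 1]

num, den = ss2tf(A, B, C, 0)
# den -> s^4 + 4s^3 + s^2 + 0.5;  num -> s^3 + 0.5s^2 + 0.5s
```

With exact Fraction arithmetic the recursion reproduces T(s) = (s³ + 0.5s² + 0.5s)/(s⁴ + 4s³ + s² + 0.5), the same transfer function obtained above.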

This is the Transfer Function model of the system. It is clearly the same as that obtained in question (a). This demonstrates that the Transfer Function model of a system is unique. It is important to note that the MATLAB function ss2tf can be used to get the transfer function T(s), and MATLAB can also be used to obtain the inverse of the 4 by 4 matrix, (sI − A).

(f) A third alternative approach used to obtain the Transfer Function model T(s) is through Block Diagram Algebra. This is performed by using block diagram reduction as follows:

STEP 1: In the original diagram, Figure 2.76, move G1 forward to produce Figure 2.77.
STEP 2: In Figure 2.77, move G1G2 forward to produce Figure 2.78.
STEP 3: In Figure 2.78, combine G1 and G1G2H1 to produce Figure 2.79.
STEP 4: In Figure 2.79, move G1 + G1G2H1 forward to produce Figure 2.80.

Figure 2.77 Block diagram reduction.

Figure 2.78 Block diagram reduction.

161

Modelling of Dynamic Systems

Figure 2.79 Block diagram reduction.

Figure 2.80 Block diagram reduction.

Figure 2.81 Block diagram reduction.

STEP 5: In Figure 2.80, combine the remaining blocks into two blocks as shown in Figure 2.81. We can then combine these two blocks and deduce the following:

T(s) = Y(s)/U(s)

     = (G1G2 + G1 + G1G2H1)/(1 + H2(G1 + G1G2H1))

     = G1(G2 + 1 + G2H1)/(1 + G1H2(1 + G2H1))

     = [1/(s + 4)](0.5/s + 1 + 0.5/s²) / [1 + (1/(s(s + 4)))(1 + 0.5/s²)]

     = (s³ + 0.5s² + 0.5s)/(s⁴ + 4s³ + s² + 0.5)

     = (2s³ + s² + s)/(2s⁴ + 8s³ + 2s² + 1)
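The reduction can also be checked with exact rational-function arithmetic. The sketch below is illustrative, not from the book; the block values G1 = 1/(s + 4), G2 = 0.5/s, H1 = 1/s, H2 = 1/s are read off the substitution step and are an assumption, since Figure 2.76 is not reproduced here:

```python
# Rational-function arithmetic to verify the block-diagram reduction.
# A polynomial is a coefficient list in descending powers of s; a transfer
# function is a (num, den) pair of such lists.
from fractions import Fraction

half = Fraction(1, 2)


def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out


def padd(p, q):
    n = max(len(p), len(q))
    p = [Fraction(0)] * (n - len(p)) + [Fraction(c) for c in p]
    q = [Fraction(0)] * (n - len(q)) + [Fraction(c) for c in q]
    return [a + b for a, b in zip(p, q)]


def rmul(f, g):
    return pmul(f[0], g[0]), pmul(f[1], g[1])


def radd(f, g):
    return padd(pmul(f[0], g[1]), pmul(g[0], f[1])), pmul(f[1], g[1])


def rdiv(f, g):
    return pmul(f[0], g[1]), pmul(f[1], g[0])


def requal(f, g):
    """Cross-multiply and compare, ignoring any common polynomial factor."""
    def strip(p):
        p = list(p)
        while p and p[0] == 0:
            p.pop(0)
        return p
    return strip(pmul(f[0], g[1])) == strip(pmul(g[0], f[1]))


one = ([1], [1])
G1 = ([1], [1, 4])        # 1/(s + 4)   (assumed block values)
G2 = ([half], [1, 0])     # 0.5/s
H1 = ([1], [1, 0])        # 1/s
H2 = ([1], [1, 0])        # 1/s

inner = radd(one, rmul(G2, H1))                        # 1 + G2*H1
T = rdiv(rmul(G1, radd(radd(G2, one), rmul(G2, H1))),  # G1(G2 + 1 + G2H1)
         radd(one, rmul(rmul(G1, H2), inner)))         # 1 + G1*H2*(1 + G2H1)

expected = ([1, half, half, 0], [1, 4, 1, 0, half])    # result from (a), (e)
```

Under these assumed block values, requal(T, expected) confirms the reduced diagram gives the same unique Transfer Function model.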


This is the system Transfer Function model. Clearly, this result is the same as that obtained in question (a) and also that achieved the hard way by inverting a 4 by 4 matrix in question (e). Once again this confirms the uniqueness of the Transfer Function model of any system. (g) All the Transfer Function models obtained for the dynamic system in the preceding example are the same because the transfer function of a system is unique. There is only one transfer function for a system. This contrasts with the fact that there is no unique State-Variable Matrix (State-Space) model for a system. It is important to note that this example demonstrates three ways of obtaining the Transfer Function model for a dynamic system. It can be extracted from the other three mathematical representations (models) of a dynamic system: (1) Input-Output model, (2) State-Variable Matrix model, and (3) Block Diagram model. All three methods produce the same unique Transfer Function model.

2.17 LINEARISATION OF NONLINEAR MODELS

The differential equations describing the dynamic behaviour of most practical systems are nonlinear. This is because, in practice, many elements of a dynamic system are inherently nonlinear and are only linear over a limited range of operating conditions. A nonlinear differential equation is one where the derivatives of the state variables have a nonlinear relationship to the states themselves and/or the inputs, such that the differential equations cannot be written in the form

x˙(t) = Ax + Bu
y(t) = Cx + Du,

but rather in the general form

x˙(t) = f(x, u, t)
y(t) = g(x, u, t).

This is the generic mathematical representation of a nonlinear dynamic system. It is the nonlinear model of the system. Modelling, analysis and control design are far easier for linear systems than nonlinear ones. However, real systems have all kinds of nonlinearities such as deadband, backlash, Coulomb friction, hysteresis, quantisation, saturation, and kinematic nonlinearities. Thus, a controller designed for a linear system model to satisfy performance specifications may perform poorly when applied to the actual system. The trade-off here is between the mathematical tractability of the linearised model and the greater validity of a nonlinear model. When confronted with a mathematical model that contains nonlinearities, there are four approaches that can be used:

• Solve the nonlinear differential equations directly.
• Small signal linearisation (about an operating point).
• Linearisation by feedback.
• Obtain computer solutions of the response for specific cases of interest.

The first alternative is possible only in specialised cases and will not be pursued. Linearisation by feedback involves using part of the control effort to cancel the nonlinear terms and designing the remainder of the control based on linear theory. This technique is accomplished by subtracting the nonlinear terms out of the equations of motion and adding them to the control equation. The result is a linear system, provided the computer implementation of the control system has enough capability to compute the nonlinear terms fast enough. Linearisation by feedback is popular in robotics, where it is called the computed torque method. It is also a research topic for the control of aircraft and space shuttles.
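The cancellation idea can be made concrete with the simple pendulum treated later in this section. The sketch below is illustrative, with assumed parameter values (J = 1 kg·m², MgL/2 = 4.905 N·m, PD gains kp = kd = 4), not an example from the book: the control torque subtracts the gravity term and then applies a PD law to the resulting linear double integrator.

```python
import math

# Pendulum dynamics: J*theta'' + (M*g*L/2)*sin(theta) = T
J, G = 1.0, 4.905      # assumed illustrative values


def computed_torque(theta, omega, theta_d):
    """Feedback linearisation: cancel the gravity torque, then close a PD
    loop on the resulting double integrator theta'' = v."""
    kp, kd = 4.0, 4.0                      # s^2 + 4s + 4: critically damped
    v = -kp * (theta - theta_d) - kd * omega
    return G * math.sin(theta) + J * v     # nonlinear term moved to control


def simulate(theta0, theta_d, dt=1e-3, t_end=5.0):
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        T = computed_torque(theta, omega, theta_d)
        alpha = (T - G * math.sin(theta)) / J   # exact cancellation
        omega += alpha * dt                     # semi-implicit Euler
        theta += omega * dt
    return theta


final = simulate(theta0=2.0, theta_d=0.5)   # far outside small-angle region
```

Because the nonlinear term is cancelled exactly, the closed loop behaves like a linear second-order system even for the large initial angle of 2 rad.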

2.17.1 SMALL SIGNAL LINEARISATION

The most popular and general approach employed in modelling nonlinear systems is small signal linearisation. It is applicable to a broad range of systems and is extensively used in industry. Linearisation is finding a linear model that approximates a nonlinear one. If a small signal linear model is valid near an equilibrium and is stable, then there is a region (usually very small) containing the equilibrium within which the nonlinear system is stable. Thus one can make a linear model and design a linear control for it such that, in the neighbourhood of the equilibrium, the design will be stable. Since feedback control plays a vital role in maintaining the process variables near equilibrium, such small-signal linear models are a frequent starting point for control models.

The strategy is to determine the equilibrium points and then linearise the system. An equilibrium point is a state in which the system would remain if it were unperturbed by external disturbances. An equilibrium point can be unstable (an egg standing on its end), neutrally stable (a ball on a table), or stable (a book lying on a table). An equilibrium point is called an operating point for a system under feedback control. A linearised model can approximate a nonlinear system near an equilibrium point of the nonlinear system by a procedure called small signal linearisation. The resulting linear system has an equilibrium point at zero that corresponds to the equilibrium point of the nonlinear system. While linearised models are only an approximation of the nonlinear system, they are convenient to analyse and give considerable insight into the behaviour of the nonlinear system near the equilibrium point. For example, if the zero equilibrium point of the linear system is stable, then the equilibrium of the nonlinear system is locally stable. The approximate linearised model of the system will be considered.

2.17.2 LINEARISATION OF ELEMENT LAWS

A method is developed for linearising an element law where the two variables, such as force and displacement, are not directly proportional. Next, the linearised element law is incorporated into the system model. Mechanical systems with nonlinear stiffness or friction elements are considered, then nonlinear electrical systems are discussed. The object of linearisation is to derive a linear model whose response will agree closely with that of the nonlinear model. Although the responses of the linear and nonlinear models will not match exactly, and may differ significantly under some conditions, there will be a set of inputs and initial conditions for which the agreement is satisfactory. In this section, the following are considered: the linearisation of a single element law, a nonlinear operating point, nominal values and incremental variables, a graphical approach, and a series-expansion approach.

2.17.3 LINEARISATION OF MODELS

The linearised element laws can now be incorporated into a system model, thus producing a linearised model from a nonlinear one. Starting with a nonlinear model, the procedure can be summarised as follows:

• Determine the operating point of the model by writing and solving the appropriate nonlinear algebraic equations. Select the proper operating point value if extraneous solutions also appear.
• Rewrite all linear terms in the mathematical model as the sum of their nominal and incremental variables, noting that the derivatives of constant terms are zero.
• Replace all nonlinear terms with the first two terms of their Taylor-series expansions, that is, the constant and linear terms.
• Using the algebraic equations defining the operating point, cancel the constant terms in the differential equations, leaving only linear terms involving incremental variables.


• Determine the initial conditions of all incremental variables in terms of the initial conditions of the variables in the nonlinear model.

The system's operating point will be a condition of equilibrium in which each variable will be constant and equal to its nominal value and in which all derivatives will be zero. Inputs will take on their nominal values, typically selected as their average values. For example, if a system input is u(t) = A + B sin ωt, its nominal value would be taken as A, the average value. Under these conditions, the differential equations reduce to algebraic equations that one can solve for the operating point, using a computer if necessary. In general, the coefficients involved in those terms that came from the expansion of nonlinear terms depend on the equilibrium conditions. Hence, a specific operating point must be found before the linearised model can be expressed in numerical form. An example will illustrate the entire procedure.

2.17.4 LINEARISATION CONCEPTS

Linearisation of an element law is carried out with respect to an operating point, as shown in Figure 2.82, which depicts the nonlinear spring characteristic (force f(t) vs. displacement x(t) curve). The operating point is a specific point on this curve and is denoted by the coordinates (x̃, f̃). Thus the displacement x(t) can be expressed as

x(t) = x̃ + x̂(t),

where x̃ is a constant term called the nominal value of x, and x̂(t) is the incremental variable corresponding to x. Similarly, the force f(t) is represented in the same way:

f(t) = f̃ + f̂(t), where f̃ = f(x̃).

The figure shows that element laws can be linearised graphically with the necessary terms defined. The tangent to the nonlinear law f(x) is a good approximation to the nonlinear curve around the operating point. The slope of the tangent is given by

k = df/dx |x=x̃.

Figure 2.82 Nonlinear spring characteristic: linear approximation.


Hence, the equation of the tangent is given by

f = f̃ + k(x − x̃)
f − f̃ = k(x − x̃)
f̂ = kx̂.

Thus, the linearised element law can be expressed in terms of the incremental variables f̂(t) and x̂(t). The results are graphically displayed in the following diagram.

Nonlinear Spring Characteristic: Incremental Variable Coordinates

Example 2.49 A nonlinear translational spring obeys the force-displacement law f(x) = |x|x. Determine the linearised element law in numerical form for each of the following operating points: x̃1 = −1, x̃2 = 0, x̃3 = 1, x̃4 = 2.

Solution 2.49 The nonlinear law can be expressed as

f(x) = −x² for x < 0
f(x) = x²  for x ≥ 0.

The slope of the tangent at the operating point is given by

k = df/dx |x=x̃ = −2x̃ for x̃ < 0
               = 2x̃  for x̃ ≥ 0

= 2|x̃| for all x̃.

Hence, the linearised element law in terms of incremental variables is given by

f̂ = 2|x̃| x̂,

where k = 2|x̃| describes the variable spring constant. For the four values of x̃ the results are summarised in the following figure and table:


Nonlinear Spring Characteristic: Linear Approximations for the Four Values of x̃.

 i    x̃i    f̃i    ki
 1    −1    −1     2
 2     0     0     0
 3     1     1     2
 4     2     4     4

In terms of accuracy, the approximation is quite good for deviations up to 0.30 from the operating point.
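The four operating points can be reproduced numerically; the short sketch below (illustrative, not from the book) also quantifies the tangent-line error for a 0.3 deviation about x̃ = 1:

```python
# Example 2.49 numerically: f(x) = |x| x with linearised constant k = 2|x~|.
def f(x):
    return abs(x) * x


def k(x_op):
    return 2 * abs(x_op)


# rows of the table above: (x~, f~, k) at each operating point
table = [(x, f(x), k(x)) for x in (-1, 0, 1, 2)]

# tangent-line approximation about x~ = 1 for a deviation of 0.3
x_op, dx = 1, 0.3
approx = f(x_op) + k(x_op) * dx    # f~ + k * x^
exact = f(x_op + dx)
```

The error at x = 1.3 is |1.69 − 1.6| = 0.09, consistent with the accuracy remark for deviations up to 0.30.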

2.17.5 EXAMPLES OF NONLINEAR DYNAMIC SYSTEMS

Example 2.50 Most interesting and practical systems have nonlinear dynamics. Consider the spring and damper system presented in Figure 2.83. The spring is nonlinear, such that the spring force and displacement are related as follows:

fs(t) = 2xs²(t)

The applied force is given by

f(t) = 14 + δf(t)

Find the linearised State-Variable Matrix model of the nonlinear system.

Figure 2.83 Nonlinear dynamic system.


Figure 2.84 Free-body diagram.

Solution 2.50 The first step is to draw the free-body diagram as depicted in Figure 2.84. From the diagram in Figure 2.84, it follows that

f(t) = 2xs² + Mẍ
f(o) = 2xo² + M × 0

Solving for xo from the second equation produces

xo = √(fo/2) = √7, where fo is 14 N.

Starting with the general form of the nonlinear input and output equation, and given that M = 1, it follows that

14 + δf(t) = 2(xo + δx)² + d²(δx)/dt²
14 + δf(t) = 2(√7 + δx)² + d²(δx)/dt²
14 + δf(t) = 14 + 4√7δx + 2(δx)² + d²(δx)/dt²

The term 2(δx)² is discarded in the first-order Taylor approximation, and this leads to:

δf(t) = 4√7δx + d²(δx)/dt²
⇒ d²(δx)/dt² = δf(t) − 4√7δx(t)

The vector of state variables is chosen as:

[x1; x2] = [δx; δx˙]

x˙1 = δx˙ = x2
x˙2 = δẍ = −4√7δx(t) + δf(t) = −4√7x1 + δf(t)


Hence the state-variable representation is given by

[x˙1]   [  0    1] [x1]   [0]
[x˙2] = [−4√7   0] [x2] + [1] δf(t)

The output is given by

y = δx = x1

This means

y = [1  0][x1; x2] + [0]δf(t)
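The quality of this linearisation can be checked by simulating both models side by side. The sketch below (illustrative, not from the book) integrates the nonlinear equation ẍ = f(t) − 2x² and the linearised model with RK4 for a small force perturbation δf = 0.1 N:

```python
import math

# Nonlinear plant (M = 1): x'' = f(t) - 2x^2, with f(t) = 14 + df(t).
# Linearised model:        dx'' = df - 4*sqrt(7)*dx.
f0 = 14.0
x0 = math.sqrt(f0 / 2.0)       # equilibrium: 2*x0^2 = 14  =>  x0 = sqrt(7)
w2 = 4.0 * math.sqrt(7.0)      # linearised stiffness


def rhs_nl(x, v, df):
    return v, (f0 + df) - 2.0 * x * x


def rhs_lin(dx, dv, df):
    return dv, df - w2 * dx


def rk4_step(rhs, a, b, df, dt):
    k1 = rhs(a, b, df)
    k2 = rhs(a + dt / 2 * k1[0], b + dt / 2 * k1[1], df)
    k3 = rhs(a + dt / 2 * k2[0], b + dt / 2 * k2[1], df)
    k4 = rhs(a + dt * k3[0], b + dt * k3[1], df)
    return (a + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            b + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))


df, dt = 0.1, 1e-3             # small perturbation of the applied force
x, v = x0, 0.0                 # nonlinear state, started at equilibrium
dx, dv = 0.0, 0.0              # linearised (incremental) state
worst, ampl = 0.0, 0.0
for _ in range(2000):          # simulate 2 s
    x, v = rk4_step(rhs_nl, x, v, df, dt)
    dx, dv = rk4_step(rhs_lin, dx, dv, df, dt)
    worst = max(worst, abs((x - x0) - dx))   # model disagreement
    ampl = max(ampl, abs(x - x0))            # incremental motion
```

Over 2 s the incremental displacement reaches roughly 0.02 m while the two models disagree by well under 10⁻³ m, as expected for a small-signal model near the operating point.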

This completes the State-Variable Matrix model, which is the linearised model for the nonlinear dynamic system. Once linearised, the classical control techniques can be applied to the dynamic system.

Example 2.51 The simple pendulum is another good example of a nonlinear system. Derive its State-Variable Matrix model.

Solution 2.51 The starting point is to draw an annotated diagram of the pendulum and its free-body diagram as shown in Figure 2.85. From the free-body diagram, the torques can be summed up as follows:

Jθ¨ + (MgL/2) sinθ = T

This is a nonlinear system, and the state variables can be chosen as follows:

x1 = θ
x2 = θ˙

The derivatives of these variables are then obtained as follows:

x˙1 = x2
x˙2 = θ¨ = −(MgL/2J) sinx1 + T/J
y = θ = x1 (the output)

Figure 2.85 Simple pendulum and its free-body diagram.


This is the nonlinear State-Variable Matrix model of the simple pendulum, which is a nonlinear dynamic system. Although it is nonlinear, it is a valid and complete model of the dynamic system. However, if we are to obtain the state-variable matrices (A, B, C, and D) so that classical control system techniques can be applied, we must linearise the system. Linearisation about the equilibrium point θ = 0, θ˙ = 0 (i.e., x1 = 0, x2 = 0) leads to

x1 = 0 + δx1
x2 = 0 + δx2

Using the linearisation equations, specifically the first-order Taylor expansion approximation:

f(x) ≈ f(xo) + ma(x − xo) ⇒ ma = (f(x) − f(xo))/(x − xo)

In the limit as x approaches xo, this gives:

ma = df(x)/dx

For the nonlinear element sinx1, we can then evaluate the expression for ma at x1 = 0, as follows:

ma = df(x)/dx = d(sinx1)/dx1 = cosx1 = cos0 = 1

Using the first-order Taylor expansion approximation, a linearised equation for the pendulum is obtained:

f(x) ≈ f(xo) + ma(x − xo)
⇒ sinx1 ≈ sin0 + 1 × δx1
⇒ sinx1 ≈ δx1
⇒ x1 ≈ δx1

Deploying this linearisation outcome in the original nonlinear model for the pendulum, the linearised model is obtained as follows:

x1 = δx1
x2 = δx2
δx˙1 = δx2
δx˙2 = −(MgL/2J)δx1 + T/J

[δx˙1]   [    0       1] [δx1]   [ 0 ]
[δx˙2] = [−MgL/2J     0] [δx2] + [1/J] T          (2.67)

The output is given by

y(t) = θ(t) = x1 = δx1


This means that

y(t) = [1  0][δx1; δx2] + [0]T          (2.68)

This completes the linearised State-Variable Matrix model for the simple pendulum. In this format, the state-space matrices A, B, C, and D are clear from Equations 2.67 and 2.68. Now, classical control techniques can be applied to the simple pendulum, which is an example of a nonlinear dynamic system.

Example 2.52 Consider the nonlinear system in Figure 2.86. Derive its linearised State-Variable Matrix model.

Solution 2.52 The starting point is establishing the free-body diagrams as shown in Figure 2.87. The forces operating on the free-body diagrams must balance. For example, from the free-body diagram of M2, we obtain

M2ẍ2 + b(ẋ2 − ẋ1) = f(t)
⇒ M2 d²x2/dt² + b dx2/dt − b dx1/dt = f(t)

From the free-body diagram of M1, we can deduce

b(ẋ2 − ẋ1) = kx1² + M1ẍ1
⇒ M1 d²x1/dt² + b dx1/dt − b dx2/dt + kx1² = 0

Choosing the state variables as [x1, x2, v1, v2]ᵀ.

Figure 2.86 Nonlinear spring, mass and damper system.

Figure 2.87 Free-body diagrams.


Now obtaining the derivatives of the state variables, after substituting the given values for the constants, leads to:

x˙1 = v1
x˙2 = v2
v˙1 = −v1 − 2x1² + v2
v˙2 = v1/2 − v2/2 + f(t)/2

The linearisation process is summarised as follows. The operating point is x1 = 1, so that

x1 = 1 + δx1 and x˙1 = δx˙1

Also, it is imperative to note that:

2x1² = 2(1 + δx1)² = 2(1 + 2δx1 + (δx1)²) ≈ 2(1 + 2δx1) = 2 + 4δx1

where the (δx1)² term is ignored in the first-order Taylor approximation. Therefore, the state and output equations are

δx˙1 = v1
x˙2 = v2
v˙1 = −v1 − 2(1 + 2δx1) + v2
v˙2 = v1/2 − v2/2 + f(t)/2
y = x2

[δx˙1]   [ 0   0    1     0  ] [δx1]   [ 0 ]   [ 0 ]
[x˙2 ] = [ 0   0    0     1  ] [x2 ] + [ 0 ] + [ 0 ] f(t)
[v˙1 ]   [−4   0   −1     1  ] [v1 ]   [−2 ]   [ 0 ]
[v˙2 ]   [ 0   0   1/2  −1/2 ] [v2 ]   [ 0 ]   [1/2]

y = [0  1  0  0][δx1; x2; v1; v2] + [0]f(t)

where f(t) = 2 + δf(t), since the force in the nonlinear spring is 2 N and must be balanced by a 2 N force on the damper. This is the linearised State-Variable Matrix model, and classical control techniques can now be applied to the nonlinear dynamic system.
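Linearisations like those in Examples 2.50–2.52 can also be obtained, or checked, with a finite-difference Jacobian. The sketch below (illustrative, not from the book) applies it to the pendulum of Example 2.51 with the assumed values MgL/(2J) = 1 and J = 1:

```python
import math


def jacobian(f, x, u, eps=1e-6):
    """Forward-difference A = df/dx and B = df/du at the point (x, u)."""
    n, m = len(x), len(u)
    fx = f(x, u)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        xp = list(x)
        xp[j] += eps
        fj = f(xp, u)
        for i in range(n):
            A[i][j] = (fj[i] - fx[i]) / eps
    for j in range(m):
        up = list(u)
        up[j] += eps
        fj = f(x, up)
        for i in range(n):
            B[i][j] = (fj[i] - fx[i]) / eps
    return A, B


# Pendulum with MgL/(2J) = 1, J = 1 (assumed): x1' = x2, x2' = -sin(x1) + u
def pend(x, u):
    return [x[1], -math.sin(x[0]) + u[0]]


A, B = jacobian(pend, [0.0, 0.0], [0.0])
# expected near the origin: A = [[0, 1], [-1, 0]],  B = [[0], [1]]
```

Up to O(ε) finite-difference error this recovers the small-angle linearisation of the pendulum, matching the analytical A and B matrices.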

2.17.6 MODELLING OF DYNAMIC SYSTEMS WITH TIME DELAYS

Time delays or dead-times often occur in the signals that flow between components of a dynamic system. Such time delays exist in thermal, hydraulic, and pneumatic systems. For example, when a tap is turned on at one end of a long hose pipe, it takes some time before the water appears at the far end. Also, when the feed to a conveyor belt is changed, it takes some time to observe the change at the other end. These are two examples of the phenomenon of pure time delay, also called transportation lag or dead time.

A pure time delay function can be modelled by considering a general system g(t) delayed by time τ, such that the system is represented by g(t − τ). Taking Laplace transforms of this function leads to

L{g(t − τ)} = e^(−sτ)G(s) = Gτ(s)G(s).

Hence, the Laplace transform of a pure time delay is given by

Gτ(s) = e^(−sτ).

Figures 2.88 and 2.89 illustrate the Block Diagram models of a time delay function and a plant with a time delay function, respectively. For the specific case shown in Figure 2.90, the system with a time delay is represented as follows:

GD = e^(−sτ)G(s) = e^(−3.4s)[5/(s + 1)] = 5e^(−3.4s)/(s + 1).

Figure 2.88 Time delay function.

Figure 2.89 A dynamic system with a time delay function.

Figure 2.90 A specific dynamic system with a time delay function.
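A pure delay is easy to handle in simulation: the delayed input is just a buffered copy of the input signal. The sketch below (illustrative, not from the book) steps the plant of Figure 2.90 and checks the response against the analytical delayed first-order step response:

```python
import math
from collections import deque

# Step response of the delayed plant G_D(s) = 5 e^{-3.4 s} / (s + 1):
# the delay acts as a pure transport buffer on the input signal.
tau, K, Tc = 3.4, 5.0, 1.0       # delay, gain, time constant
dt, t_end = 1e-3, 8.0
n_delay = int(round(tau / dt))

buf = deque([0.0] * n_delay)     # holds u(t - tau)
y, ys = 0.0, {}
for step in range(int(t_end / dt)):
    buf.append(1.0)              # unit step input applied at t = 0
    u_delayed = buf.popleft()
    # first-order plant Tc*y' + y = K*u_delayed, forward Euler
    y += dt * (K * u_delayed - y) / Tc
    if (step + 1) % 1000 == 0:   # sample once per simulated second
        ys[round((step + 1) * dt, 3)] = y
# analytically: y(t) = 0 for t < 3.4, then 5*(1 - exp(-(t - 3.4)))
```

The output stays at exactly zero until t = τ = 3.4 s and then follows 5(1 − e^(−(t−3.4))), illustrating why dead time is also called transportation lag.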


Figure 2.91 Mass, spring and damper system.

Figure 2.92 RLC series network.

PROBLEMS

Problem 2.1 Consider the dynamic system, a mass, spring and damper structure in Figure 2.91, where the input is f(t) and the output is x(t).

(a) Derive the Input-Output model.
(b) Derive the State-Variable Matrix model.
(c) Derive the Transfer Function model.

Problem 2.2 Consider a series RLC circuit as depicted in Figure 2.92, where the input is v(t) and the output is vC(t).

(a) Derive the Input-Output model.
(b) Derive the State-Variable Matrix model.
(c) Derive the Transfer Function model.

Problem 2.3 For the following translational mechanical system, the springs are undeflected when x1 = x2 = 0.

Translational Mechanical System

(a) Draw the free-body diagrams for the system.
(b) Write down the dynamic equations of motion.
(c) Choose the minimum set of state variables for the system and justify your choice.
(d) Develop the system State-Variable Matrix model (i.e. find A, B, C, D), where the input is force f(t) and the output is the force in spring k2.

Problem 2.4 For the system shown below, the springs are undeflected when x1 = x2 = 0, and the input is force f (t).

Translational Mechanical System

(a) Draw the free-body diagrams for the system.
(b) Write down the dynamic equations of motion.
(c) Choose the minimum set of state variables for the system and justify your choice.
(d) Develop the state-variable equations in matrix form (i.e. find A, B, C, D).

Problem 2.5 For designing an automobile suspension, a two-mass system can be used for modelling as shown in the following diagram.

The Quarter-Car Model

This is called a quarter-car model because it comprises one of the four-wheel suspensions. The car and wheel positions are denoted by y(t) and x(t), respectively. These displacements are from static equilibrium, corresponding to no inputs except gravity.

(a) Draw the free-body diagram of this system, assuming one-dimensional vertical motion of the mass above the wheel.


(b) Write down the equations of motion for the automobile.
(c) Express these equations in a State-Variable Matrix model (A, B, C, D) using the following state-variable vector, x(t) = [x x˙ y y˙]ᵀ, and justify this choice of state variables. Note that the car and wheel positions, y(t) and x(t), are the two outputs of the car system while the input is the unit step bump r(t).
(d) Plot the position of the car and the wheel after the car hits a “unit bump” (i.e. r(t) is a unit step) using MATLAB. Assume m1 = 10 kg, m2 = 250 kg, kw = 500,000 N/m, ks = 10,000 N/m. Find the value of b that you would prefer if you were a passenger in the car.

Translational Mechanical System (a) Draw the free-body diagrams of the system. (b) Obtain the system dynamic equations of motion. Problem 2.7 Consider the mechanical and the electrical systems shown in the following diagrams.


(a) Obtain the transfer function of the mechanical system, where xi(t) is the input and xo(t) is the output.
(b) Obtain the transfer function of the electrical system, where vi(t) is the input and vo(t) is the output.
(c) Show that the transfer functions of the two systems have an identical form and are thus analogous.

Problem 2.8 In the following mechanical system the input is a force f(t) and the output is the spring force in k2.

Translational Mechanical System

(a) Draw the free-body diagrams for the mass m and the massless point A.
(b) Write down the equations of motion.
(c) Choose the minimum number of state variables for the system and develop the state-variable matrix model (i.e. obtain A, B, C, D).
(d) If the dashpot b2 is removed, show that

x3 = (1/k3)[(k2 + k3)x2 − k2x1].

Choose a new set of independent state variables for this new system and develop the state-variable matrix model (i.e. obtain A, B, C, D).

Problem 2.9 In the following translational mechanical system the input is f(t) and the output is the force in spring k3.

Translational Mechanical System (a) Draw the free-body diagrams for the mass m and the massless point A. (b) Write down the system equations of motion. (c) Choose the minimum set of state variables for the system and develop the state-variable matrix model (i.e. obtain A, B,C, and D).


Problem 2.10 For a certain dynamic system the state-variable equations for the position x1 and velocity v1 are given below:

x˙1 = v1
v˙1 = (1/m1)[−(k1 + k2)x1 − bv1 + bx˙2 + k2x2(t)],

where x2(t) is the input and the output is x1. Use the s-operator to obtain the input-output differential equation which relates the output x1 and the input x2(t).

Problem 2.11 Consider the following mechanical system.

Translational Mechanical System

The forces exerted by the springs are zero when x1 = x2 = x3 = 0. The input force is f(t) and the absolute displacements of m1, m2, and m3 are x1, x2, and x3, respectively.

(a) Draw the free-body diagrams of the system.
(b) Obtain the system dynamic equations of motion.
(c) Choose a suitable vector of state variables and justify your choice.
(d) Obtain the State-Variable Matrix model, by expressing the dynamic equations in the State-Variable Matrix model (A, B, C, D), where the output is the spring force in k2.

Problem 2.12 The following data is provided for the translational mechanical system considered in Problem 2.11:

m1 = 5 kg, m2 = 15 kg, m3 = 50 kg
b1 = 500 N/(m/s), b2 = 600 N/(m/s)
k1 = 5,000 N/m, k2 = 10,000 N/m

Use MATLAB to find the following:

(a) The output response (y) to an input force of 50 N.
(b) The response to an impulse force input.
(c) If the values of b1 and b2 were variables, choose values of b1 and b2 that will reduce oscillations, give practical settling time, and produce smooth responses.


If the output was the platform displacement x3, obtain the new matrices C and D. Then:

(a) Find the output response (y) to an input force of 50 N.
(b) Find the response to an impulse force input.
(c) If the values of b1 and b2 were variables, choose values of b1 and b2 that will reduce oscillations, and give practical settling time and smooth responses.

Problem 2.13 In some mechanical positioning systems, the movement of a large object is controlled by manipulating a much smaller object that is mechanically coupled with it. The following diagram depicts such a system, where a force u(t) is applied to a small mass m to position a larger mass M. The coupling between the objects is modelled by a spring with constant k and a damping coefficient b.

(a) Draw the free-body diagrams of the system.
(b) Write the equations of motion governing this system.
(c) Identify the appropriate state variables and express the equations of motion in the State-Variable Matrix model (A, B, C, D).

Problem 2.14 In the following rotational mechanical system, there is a driving torque T(t) exerted on disk J1 and a load torque TL(t) exerted on disk J2. Both the applied torque T(t) and the load torque TL(t) are considered as inputs and the output is the angular velocity ω3(t) (Figure 2.93).

Figure 2.93 Rotational mechanical system.

(a) Draw the free-body diagrams for the system.
(b) Explain why the vector of state variables can be chosen as x(t) = [θ2 θ3 ω1 ω3]ᵀ.
(c) Express the dynamic equations in the State-Variable Matrix model (i.e. obtain A, B, C, D).
(d) Use the s-operator method to express the dynamic equations in the form of an input-output differential equation.

Problem 2.15 The diagram below shows a double pendulum system. Assume the displacement angles of the pendulums are small enough to ensure that the spring is always horizontal. The pendulum rods are taken to be massless, of length l, and the springs are attached 2/3 of the way down.


Double Pendulum

Derive two differential equations that model the motion of the double pendulum.

Problem 2.16 Consider the RLC electrical circuit given below, where the input is current i1(t) and the output is voltage vo(t).

An RLC Electrical Circuit

(a) Find the State-Variable model (A, B, C, D) for the circuit.
(b) Show that the input-output differential equation is given by

Cv¨o + (1/R)v˙o + (1/L)vo = i˙(t)

Problem 2.17 In the figure below, determine the input-output differential equation of the circuit.

Problem 2.18 Find the input-output differential equation relating vo and vi (t) for the circuit shown below.


Problem 2.19 (a) For the following circuit find the State-Variable Matrix model (A, B, C, D) where vo is the output voltage and vi is the input voltage.

(b) Also, find the input-output differential equation for the system.

Problem 2.20 Consider the following circuit where ii(t) is the input current and io(t) is the output current.

(a) Obtain the State-Variable Matrix model (A, B, C, D) for the circuit.
(b) Obtain the input-output differential equation for the circuit.

Problem 2.21 For the following op-amp circuit, derive the algebraic expression for the output voltage vo in terms of the two input voltages v1(t) and v2(t). Indicate what mathematical operation the circuit performs.


Problem 2.22 (a) For the following op-amp circuit derive the input-output differential equation relating the output voltage vo and the input voltage vi (t).

(b) Derive the circuit’s State-Variable Matrix model (A, B, C, D), where the input is voltage vi(t) and there are two outputs: the voltage output vo, and the current through R3 (with positive sense to the right).

Problem 2.23 Consider the following op-amp circuit where the output is voltage vo(t) and the input is voltage vi(t).


(a) Choose a set of state variables and justify your choice.
(b) Obtain the State-Variable Matrix model (A, B, C, D) for the system.

Problem 2.24 Simplify the following block diagram and obtain its overall transfer function, Y(s)/R(s).

Block Diagram Problem 2.25 Show that the transfer functions obtained by using block diagram algebra and Mason’s rule for the following block diagram are the same.

Block Diagram Problem 2.26 Simplify the following block diagram and obtain the closed-loop transfer function Y (s)/R(s).

Block Diagram


Problem 2.27 Simplify the following block diagram and obtain the closed-loop transfer function Y (s)/R(s).

Block Diagram Problem 2.28 Simplify the following block diagram and obtain the closed-loop transfer function Y (s)/R(s).

Block Diagram Problem 2.29 Derive the transfer function, the input-output differential equation and the StateVariable Matrix model for the following block diagram.

Block Diagram


Problem 2.30 Use block diagram algebra to determine the transfer function Y (s)/R(s) for the following block diagram.

Block Diagram

3 Dynamic System Response

3.1 INTRODUCTION

To design a control system, it is essential that the behaviour of the plant (or process) be analysed and understood. Two tasks must be performed in analysing a system: modelling the system and obtaining the dynamic system response. The first task was accomplished in Chapter 2; the dynamic system response is obtained by solving the differential equations that constitute the system model, which is the mission of this chapter. Once the system response is obtained, the function of a controller in a control system is then to influence the system response or behaviour of the plant.

The objective of this chapter is to develop techniques for finding the system responses for the dynamic systems modelled in Chapter 2. This activity is also called solving the model and involves using the mathematical model to determine certain features of the system cause-and-effect relationships. Three main mathematical approaches are used to obtain the system response: direct solution of differential equations in the time domain, the use of Laplace transforms to solve differential equations in the frequency domain, and the deduction of system behaviour from the system transfer function.

The Laplace transform is a mathematical tool for transforming linear differential equations into an easier-to-manipulate algebraic form. The differential equations are easily solved in this domain, and the solutions are converted back into the time domain to give the system response.

The transfer function was introduced in Chapter 2 as a modelling tool. The significant advantage of this form of the dynamic system model is that the system response can be easily obtained from it. From the transfer function, the system poles and zeros can be identified, and these provide information about the characteristics of the system response. The location of the poles and zeros can then be manipulated to achieve certain desired characteristics or eliminate undesirable ones.
In addition to the direct mathematical derivation of the transfer function, two visual tools can be employed to derive it. The first is the block diagram introduced in Chapter 2, and the other is the signal flow graph. The latter method characterises the system by a network of directed branches and associated gains (transfer functions) connected to nodes. Mason’s rule is used to relate the graph to the algebra of the system simultaneous equations, thus determining the system transfer function. Numerical methods are more feasible than analytic methods to determine the system response in certain circumstances. The type of equation involved in the model strongly influences the extent to which analytical methods can be used. For example, nonlinear differential equations are not often solved in closed form, and the solution of partial differential equations is far more laborious than that of ordinary differential equations. Computers can generate responses to specific numerical cases for complex models. However, using a computer to solve a complex model has its limitations. These computer models should be relatively insensitive to system parameters with uncertain values or subject to change. When an analytical model is not available or is too complex to formulate the system behaviour, it can be established from experimental time-response and frequency-response data.

3.1.1 CHAPTER OBJECTIVES

After completing this chapter, the reader should be able to accomplish several tasks. Given a mathematical model (or after deriving a model) for a dynamic system, the reader should be able to do the following:

DOI: 10.1201/9781032718880-3


Design and Analysis of Control Systems

• For the first- or second-order system model, solve the differential equations (the Input-Output model) directly to determine the system response
• For a dynamic model, use the Laplace transform to: (a) find the complete time response; (b) determine the Transfer Function model, its poles and zeros; (c) analyse stability, and evaluate time constants, damping ratios, and undamped natural frequencies
• Use the Transfer Function model to determine system response
• Find and analyse the impulse response, step response, and sinusoidal response
• In addition to using analytical methods, use numerical methods to obtain the system response of a linear or nonlinear model in numerical form
• Determine the system response from experimental data

3.2 TIME DOMAIN SOLUTION OF SYSTEM MODELS

The dynamic response of a system can be determined directly in the time domain without any transformation. The starting point is expressing the system model in the input-output differential equation form (the Input-Output model), with all other variables eliminated. The system response is then obtained by solving this differential equation. Consider the general n-th order model

a0 y + a1 ẏ + a2 ÿ + · · · + an y^(n) = b0 u + b1 u̇ + b2 ü + · · · + bm u^(m),    (3.1)

where

y^(n) = d^n y/dt^n and u^(m) = d^m u/dt^m.

The terms on the right-hand side that involve the input and its derivatives constitute the forcing function

f(t) = b0 u + b1 u̇ + b2 ü + · · · + bm u^(m).

With this definition, the input-output differential equation model can be expressed as

a0 y + a1 ẏ + a2 ÿ + · · · + an y^(n) = f(t).    (3.2)

The desired solution y(t) for t ≥ 0 must satisfy the input-output differential Equation 3.2.

3.2.1 HOMOGENEOUS INPUT-OUTPUT EQUATIONS

If f(t) = 0 in the input-output Equation 3.2, i.e., neither inputs nor their derivatives are present, then the equation is called a homogeneous differential equation,

a0 yh + a1 ẏh + a2 ÿh + · · · + an yh^(n) = 0.    (3.3)

The corresponding characteristic equation for this system is given by

a0 + a1 r + a2 r² + · · · + an rⁿ = 0,    (3.4)

which has roots r1, r2, r3, . . . , rn. Hence, the solution of Equation 3.3 takes the form

yh(t) = K1 e^(r1 t) + K2 e^(r2 t) + · · · + Kn e^(rn t)
      = Σ (i = 1 to n) Ki e^(ri t),    (3.5)

where the terms Ki are real constants.

187

Dynamic System Response

If two or more of the roots are identical (repeated roots), then Equation 3.5 is modified as follows:

yh(t) = K1 e^(r1 t) + K2 t e^(r1 t) + K3 e^(r3 t) + · · · + Kn e^(rn t),    (3.6)

for r1 = r2. If any of the roots are complex conjugates (r1 = α + jβ and r2 = α − jβ), the corresponding part of the solution can be expressed in three related forms:

yh(t) = K1 e^((α + jβ)t) + K2 e^((α − jβ)t)
      = e^(αt) (K3 cos βt + K4 sin βt)
      = K e^(αt) cos(βt + φ),

where φ is a constant angle. Hence, to find the solution for any given homogeneous input-output differential equation with complex roots, the constants K1, K2, α and β must be determined (K3, K4, K and φ depend on the first four constants).

Example 3.1 Consider a first-order system with no input such that the input-output differential equation is given by

ẏ + 3y = 0.

Find its system response.

Solution 3.1 The characteristic equation is given by

r + 3 = 0 ⇒ r = −3 ⇒ yh(t) = K e^(−3t).

This is the system response, where K is a real constant, and thus the response represents a family of curves.

3.2.2 NONHOMOGENEOUS INPUT-OUTPUT EQUATIONS

When the forcing function is not zero, i.e., f(t) ≠ 0, the equation is called a nonhomogeneous input-output differential equation. Its solution y(t) is called the complete solution (or general solution), and it consists of two parts: the homogeneous solution yh(t) and the particular solution yp(t), such that

y(t) = yh(t) + yp(t).

The homogeneous solution must satisfy the homogeneous equation,

a0 yh + a1 ẏh + a2 ÿh + · · · + an yh^(n) = 0,

while the particular solution must satisfy the entire differential equation,

a0 y + a1 ẏ + a2 ÿ + · · · + an y^(n) = f(t).

The procedure for obtaining the homogeneous solution yh(t) has already been explained. A general method of obtaining yp(t) involves the variation of parameters procedure. When the forcing function f(t) has only a finite number of different derivatives, the method of undetermined coefficients is sufficient. This method assumes that the form of yp(t) consists of terms similar to those in f(t) and their derivatives, each multiplied by coefficients that must be determined. Some common forms of particular solutions are listed in the following table.

f(t)              yp(t)
β                 b
α1 t + α0         a t + b
e^(αt)            a e^(αt)
cos ωt            a cos ωt + b sin ωt
sin ωt            a cos ωt + b sin ωt

If f(t) or one of its derivatives contains a term identical to a term in the homogeneous solution yh(t), the corresponding terms in the right-hand column of the table above should be multiplied by t. For example, if yh(t) = K1 e^(−4t) + K2 e^(−3t) and f(t) = 2e^(−3t), then

yp(t) = a t e^(−3t)

should be used. If a term in f(t) corresponds to a double root of the characteristic equation, the normal form for yp(t) is multiplied by t². For example, if yh(t) = K1 e^(−3t) + K2 t e^(−3t) and f(t) = 2e^(−3t), then

yp(t) = a t² e^(−3t)

should be used. It is important to note that when solving for the complete or general solution y(t) = yh(t) + yp(t), the arbitrary constants K1 through Kn cannot be evaluated until both yh(t) and yp(t) have been found. The two parts of the complete solution have physical significance. The homogeneous solution yh(t) represents the natural behaviour of the system when there are no external inputs, i.e., f(t) = 0:

yh(t) ≡ Free response.

The particular solution depends on the form of the forcing function:

yp(t) ≡ Forced response.

Example 3.2 Consider a first-order system with a forcing function f(t) such that the input-output differential equation is given by

ẏ + 3y = f(t),    (3.7)

where y(0) = 1. Find the dynamic system response y(t) for the following forcing functions: (a) f(t) = 5 (b) f(t) = 5 cos 2t.


Solution 3.2 The characteristic equation is obtained by inspecting the LHS of Equation 3.7:

r + 3 = 0 ⇒ r = −3 ⇒ yh(t) = K e^(−3t).

The value of K depends on f(t) as well as y(0), and hence it will be different for the two cases.

(a) For f(t) = 5, let the particular solution be given by yp(t) = b. Substituting f(t) = 5 and yp(t) = b in Equation 3.7 leads to

ẏ + 3y = f(t)
⇒ db/dt + 3b = 5
⇒ 0 + 3b = 5
⇒ b = 5/3.

The complete solution then becomes

y(t) = yh(t) + yp(t) = K e^(−3t) + 5/3.    (3.8)

The value of K is then obtained by using the initial condition, y(0) = 1, in Equation 3.8:

y(0) = K e^0 + 5/3 = 1
⇒ K + 5/3 = 1
⇒ K = −2/3.

Therefore, the system response is given by

y(t) = −(2/3) e^(−3t) + 5/3.

(b) For the second case, f(t) = 5 cos 2t; hence, the particular solution is of the form

yp(t) = a cos 2t + b sin 2t.

Substituting f(t) = 5 cos 2t and yp(t) = a cos 2t + b sin 2t in Equation 3.7 leads to

ẏ + 3y = f(t)
d/dt (a cos 2t + b sin 2t) + 3(a cos 2t + b sin 2t) = 5 cos 2t
−2a sin 2t + 2b cos 2t + 3a cos 2t + 3b sin 2t = 5 cos 2t
(3b − 2a) sin 2t + (2b + 3a) cos 2t = 5 cos 2t.

Collecting coefficients gives two simultaneous equations:

3b − 2a = 0
2b + 3a = 5
⇒ b = 10/13 and a = 15/13.

The complete solution then becomes

y(t) = yh(t) + yp(t) = K e^(−3t) + (15/13) cos 2t + (10/13) sin 2t.

The value of K is then obtained by using the initial condition, y(0) = 1. Hence,

y(0) = K e^0 + (15/13) cos 0 + (10/13) sin 0 = 1
⇒ K + 15/13 = 1
⇒ K = −2/13.

Therefore, the system response is given by

y(t) = −(2/13) e^(−3t) + (15/13) cos 2t + (10/13) sin 2t
     = −(1/13) (2e^(−3t) − 15 cos 2t − 10 sin 2t).
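The solution of Example 3.2(b) can be checked numerically. The following sketch (illustrative, not from the text) verifies that y(t) satisfies ẏ + 3y = 5 cos 2t and the initial condition y(0) = 1, using a central-difference approximation for the derivative.

```python
import math

def y(t):
    # Complete solution of y' + 3y = 5*cos(2t), y(0) = 1  (Example 3.2(b))
    return (-2.0 * math.exp(-3.0 * t) + 15.0 * math.cos(2.0 * t)
            + 10.0 * math.sin(2.0 * t)) / 13.0

def residual(t, h=1e-6):
    # Central-difference estimate of y' + 3y - 5*cos(2t); should be ~0
    ydot = (y(t + h) - y(t - h)) / (2.0 * h)
    return ydot + 3.0 * y(t) - 5.0 * math.cos(2.0 * t)

print(y(0.0))  # initial condition: 1.0
print(max(abs(residual(0.1 * k)) for k in range(1, 50)))
```

The residual stays at numerical noise level over the whole time range, confirming both the particular-solution coefficients and the constant K.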

3.2.3 FIRST-ORDER SYSTEMS

Consider a general first-order differential equation

ẏ + (1/τ) y = f(t),

where τ is a real non-zero constant called the time constant. All first-order differential equations can be expressed in this form. The characteristic equation is obtained by inspection from the homogeneous differential equation,

ẏ + (1/τ) y = 0
⇒ r + 1/τ = 0
⇒ r = −1/τ.

The homogeneous response (which is called the transient response for a dynamic system) is thus given by a family of curves of the form

yh(t) = K e^(−t/τ),

where K is a real constant.

3.2.3.1 Stability Analysis

The stability of the family of curves can be established as follows:

τ > 0 ⇒ a stable system.
τ < 0 ⇒ an unstable system.
τ → ∞ ⇒ a marginally stable system.


The stability conditions of the transient response for a first-order system are illustrated in Figure 3.1. The complete dynamic system response can be interpreted as consisting of a transient response and a steady-state response. For stable systems, the transient part consists of terms that decay to zero as t approaches ∞. The steady-state response is the part that remains after the transient terms have decayed to zero. For the first-order system, as discussed before, the complete response is given by

y(t) = yp(t) + yh(t),

where yh(t) = K e^(−t/τ). As already indicated, the homogeneous response yh(t) is also called the transient response. If τ is positive, the system is stable. The particular solution yp(t) then represents the steady-state response, where yp(t) can be a variety of functions (e.g., β (a constant), a cos ωt + b sin ωt, or at + b). However, if yp(t) is of the form a t e^(−t) + b e^(−t), then it becomes part of the transient response. For stable systems whose yp(t) terms do not decay to zero, the parts of the complete response are summarised as follows:

yh(t) ≡ Transient response ≡ Free response.
yp(t) ≡ Steady-state response ≡ Forced response.

Figure 3.1 First-order system (transient response) stability analysis.

3.2.3.2 The Complete Response to a Constant Input

The system response for the special case where the input to a first-order system is a constant can be simplified into a form that is easily obtained without explicitly finding yh(t) and yp(t) separately. Consider a first-order system with a constant input f(t) = β,

ẏ + (1/τ) y = β.    (3.9)

The homogeneous and particular solutions are given by

yh(t) = K e^(−t/τ) and yp(t) = τβ,

such that

y(t) = yp(t) + yh(t) = τβ + K e^(−t/τ).

The particular solution yp(t) can be taken as the steady-state response, such that yp(t) = yss:

y(t) = yss + K e^(−t/τ).    (3.10)

If the initial value of the response y(0) is known, the constant K can be expressed in terms of it:

y(0) = yss + K e^0 ⇒ K = y(0) − yss.

Substituting this expression of K in Equation 3.10 leads to

y(t) = yss + [y(0) − yss] e^(−t/τ).    (3.11)

Hence, for any first-order system with a constant input (or a constant forcing function), the system response can be obtained from Equation 3.11. The task is thus reduced to finding the steady-state value yss and the time constant τ, where the initial value y(0) is known. The term yss is obtained by setting the derivative ẏ to zero, and τ is the inverse of the coefficient of y(t) in the first-order Equation 3.9. Equation 3.11 is essential because it allows the direct determination of the complete system response without establishing the homogeneous and particular solutions separately.

There are two special cases of Equation 3.11: when the system input is a unit step or a unit impulse. For the unit-step input, y(0) = 0 and yss = 1,

⇒ y(t) = 1 + [0 − 1] e^(−t/τ) = 1 − e^(−t/τ).

For the unit-impulse input, y(0) = 1 and yss = 0,

⇒ y(t) = 0 + [1 − 0] e^(−t/τ) = e^(−t/τ).
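Equation 3.11 translates directly into code. The sketch below (a hypothetical helper, not from the text; the time constant value is assumed) evaluates y(t) = yss + [y(0) − yss]e^(−t/τ) and reproduces the unit-step and unit-impulse special cases.

```python
import math

def first_order_response(t, y0, yss, tau):
    """Equation 3.11: response of a first-order system to a constant input."""
    return yss + (y0 - yss) * math.exp(-t / tau)

tau = 2.0  # example time constant (assumed value)

# Unit-step input: y(0) = 0, yss = 1  ->  y(t) = 1 - exp(-t/tau)
step = first_order_response(1.0, y0=0.0, yss=1.0, tau=tau)

# Unit-impulse input: y(0) = 1, yss = 0  ->  y(t) = exp(-t/tau)
impulse = first_order_response(1.0, y0=1.0, yss=0.0, tau=tau)

print(step, impulse)
```

Evaluating the step case at t = 5τ or beyond shows the response within about 1% of yss, the usual settling-time rule of thumb for first-order systems.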


Figure 3.2 A first-order translational mechanical system.

The efficacy of the general solution depicted by Equation 3.11 is demonstrated by exploring examples of three first-order systems: a translational mechanical system, a rotational mechanical system and an electrical circuit.

Example 3.3 Find the dynamic system response for the spring mass damping system shown in Figure 3.2, where f (t) = β , x(0) = 0 (i.e. there is no initial energy stored).

Solution 3.3 From the system free-body diagram, the Input-Output model is obtained as follows:

b ẋ + k x = f(t)
ẋ + (k/b) x = (1/b) f(t)
ẋ + (k/b) x = β/b.    (3.12)

From Equation 3.12 the time constant τ and steady-state value xss can be obtained:

τ = b/k
0 + (k/b) xss = β/b ⇒ xss = β/k.

Figure 3.3 First-order rotational mechanical system.


Therefore, the system response can be obtained from Equation 3.11:

x(t) = xss + [x(0) − xss] e^(−t/τ)
     = β/k + (0 − β/k) e^(−kt/b)
     = β/k − (β/k) e^(−kt/b)    (≡ yh + yp)
     = (β/k) (1 − e^(−kt/b)).    (3.13)
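For illustration, Equation 3.13 can be evaluated for assumed numerical values of b, k and β (these numbers are not from the text); the response rises monotonically from 0 to its steady-state value β/k with time constant b/k.

```python
import math

# Assumed illustrative values (not from the text)
b, k, beta = 4.0, 2.0, 10.0

tau = b / k       # time constant from Equation 3.12
x_ss = beta / k   # steady-state displacement

def x(t):
    # Equation 3.13: x(t) = (beta/k) * (1 - exp(-k*t/b))
    return x_ss * (1.0 - math.exp(-t / tau))

print(x(0.0))         # starts at rest: 0.0
print(x(10.0 * tau))  # essentially settled at beta/k
```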

Example 3.4 Find the dynamic system response for the rotational mechanical system shown in Figure 3.3. There is a constant input angular speed, ω(t) = β, and zero initial angular speed, ω1(0) = 0 (i.e., there is no initial energy stored).

Solution 3.4 From the system free-body diagrams, the Input-Output model is given by

J dω1/dt + (b1 + b2) ω1 = b1 ω(t)
dω1/dt + ((b1 + b2)/J) ω1 = b1 β / J,  where ω(t) = β.

From this equation the time constant τ and steady-state value ω1ss can be obtained as follows:

τ = J/(b1 + b2)
0 + ((b1 + b2)/J) ω1ss = b1 β / J ⇒ ω1ss = b1 β/(b1 + b2).

Therefore, the system response can be obtained from Equation 3.11 as follows:

ω1(t) = ω1ss + [ω1(0) − ω1ss] e^(−t/τ)
      = b1 β/(b1 + b2) + (0 − b1 β/(b1 + b2)) e^(−t/τ)
      = b1 β/(b1 + b2) − (b1 β/(b1 + b2)) e^(−t/τ)    (≡ yh + yp)
      = (b1 β/(b1 + b2)) (1 − e^(−t/τ)),  where τ = J/(b1 + b2).

This is the system response of the rotational mechanical system.


Example 3.5 Consider the electrical circuit shown below, where vi(t) = β for t ≥ 0. Find the dynamic system response vo(t).

A First-Order Electrical System

Solution 3.5 The starting point is determining the Input-Output model. This is done by using the KVL and KCL.

ic(t) + iR1 = iRT,  where RT = R2 R3/(R2 + R3) (parallel resistors)

C(v̇i − v̇o) + (1/R1)[vi − vo] = (1/RT) vo

C v̇o + (1/R1 + 1/RT) vo = C v̇i + (1/R1) vi

v̇o + (1/(C R1) + 1/(C RT)) vo = β/(C R1)    (vi = β ⇒ v̇i = 0)    (3.14)

This is a first-order system with a constant input, and hence the system response is of the form

vo(t) = voss + [vo(0+) − voss] e^(−t/τ),    (3.15)

where the time constant τ is the inverse of the coefficient of vo(t) in Equation 3.14 and voss is the steady-state value obtained by setting the derivative v̇o(t) to zero in the same equation. The term vo(0+) represents the initial value of the output voltage approached from positive time.

τ = R1 RT C/(R1 + RT)
voss = RT β/(R1 + RT).

The initial value vo(0+) is obtained by using the KVL, the initial input voltage and the initial voltage across the capacitor:

vi(t) − vC(t) − vo(t) = 0
⇒ vo(t) = vi(t) − vC(t)
⇒ vo(0+) = vi(0+) − vC(0+),  but vC(0+) = vC(0−) = 0 and vi(0+) = β
⇒ vo(0+) = β − 0 = β.


The dynamic system response is then obtained by substituting the determined expressions of τ, voss and vo(0+) in Equation 3.15.
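The expressions derived in Solution 3.5 can be evaluated for assumed component values (illustrative only, not from the text):

```python
import math

# Assumed illustrative component values (not from the text)
R1, R2, R3 = 1000.0, 2000.0, 2000.0  # ohms
C = 1e-6                             # farads
beta = 5.0                           # input step, volts

RT = (R2 * R3) / (R2 + R3)        # parallel combination of R2 and R3
tau = (R1 * RT * C) / (R1 + RT)   # time constant from Equation 3.14
v_ss = (RT * beta) / (R1 + RT)    # steady-state output voltage
v0 = beta                         # vo(0+) = beta (capacitor initially uncharged)

def vo(t):
    # Equation 3.15: vo(t) = v_ss + (vo(0+) - v_ss) * exp(-t/tau)
    return v_ss + (v0 - v_ss) * math.exp(-t / tau)

print(tau, v_ss, vo(0.0))
```

With these values the output starts at the full input voltage β and decays to the resistive-divider value RT β/(R1 + RT) with a sub-millisecond time constant.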

3.2.4 SECOND-ORDER SYSTEMS

A general second-order system can be represented as follows:

ÿ + a1 ẏ + a0 y = f(t)
ÿ + 2ξωn ẏ + ωn² y = f(t),

where ωn is the undamped natural frequency and ξ is the damping ratio. For a second-order dynamic system, this is the generic form of its Input-Output model. By inspection of these equations, the characteristic equation can be determined:

r² + a1 r + a0 = 0
r² + 2ξωn r + ωn² = 0.

(a) If ξ > 1 ⇒ the roots are distinct and negative, and yh has two decaying exponential components.
(b) If ξ = 1 ⇒ there is a repeated root, r = −ωn, and yh(t) consists of terms of the form e^(−ωn t) and t e^(−ωn t).
(c) If 0 < ξ < 1 ⇒ the roots are complex.
(d) The complex roots are given by

r1, r2 = −ξωn ± jωn √(1 − ξ²).

(e) With the complex roots, the homogeneous solution (system response) is given by

yh = K1 e^(r1 t) + K2 e^(r2 t)
   = e^(αt) [K1 cos βt + K2 sin βt]
   = K e^(αt) cos(βt + φ).

(f) If ξ < 0 ⇒ the system is unstable.

The specific expressions for the system responses of second-order systems when the input is an impulse (natural response) or a step function, respectively, are derived and analysed later in Section 3.6.1.

Example 3.6 The system shown below obeys the following differential equation (i.e., Input-Output model):

(m1 + m2) ẍ + (b1 + b2) ẋ + k2 x = f(t) − m2 g,  where x1 = x2 = x.


A Second-Order Translational Mechanical System

(a) Verify that the differential equation is correct.
(b) Find the expressions for the damping ratio ξ and the undamped natural frequency ωn.
(c) Find the steady-state response when the force is a unit-step function.

Solution 3.6 (a) Verification of the differential equation from the free-body diagrams:

m1 ẍ + b1 ẋ + fc = f(t)
m2 ẍ + b2 ẋ + k2 x + m2 g − fc = 0
(m1 + m2) ẍ + (b1 + b2) ẋ + k2 x = f(t) − m2 g    (by adding the two equations).

(b) Dividing through by (m1 + m2),

ẍ + ((b1 + b2)/(m1 + m2)) ẋ + (k2/(m1 + m2)) x = (1/(m1 + m2)) f(t) − (m2/(m1 + m2)) g.

Comparing with the standard second-order form gives

ωn = √(k2/(m1 + m2))

ξ = (1/(2ωn)) × (b1 + b2)/(m1 + m2) = (b1 + b2)/(2 √((m1 + m2) k2)).

(c) With a unit-step force, setting the derivatives to zero gives

xss = ((m1 + m2)/k2) × (1/(m1 + m2) − m2 g/(m1 + m2)) = (1/k2)(1 − m2 g).

3.2.5 ANALOGOUS MECHANICAL AND ELECTRICAL SYSTEMS

As discussed in Section 2.14, translational, rotational, and electrical systems can be shown to manifest the same dynamic behaviour (i.e., identical system response), and hence, their models can be used interchangeably. Such systems are described as analogous. Consider the four second-order systems shown in Figures 3.4–3.7.

Figure 3.4 A translational mechanical system.

Figure 3.5 A rotational mechanical system.

Figure 3.6 An RLC series circuit.

Figure 3.7 An RLC parallel circuit.


Using free-body diagrams and circuit laws (current and voltage), it can be shown that the input-output differential equations (i.e., the Input-Output models) for the four systems are given by:

ẍ + (b1/m) ẋ + (k1/m) x = (1/m) f(t)

d²θ/dt² + (b2/J) dθ/dt + (k2/J) θ = (1/J) T(t)

d²i/dt² + (R1/L1) di/dt + (1/(L1 C1)) i = (1/L1) dvi/dt

d²vo/dt² + (1/(R2 C2)) dvo/dt + (1/(L2 C2)) vo = (1/C2) di/dt.

The derivation of these models is left as an exercise for the reader. By comparing these models with the standard second-order model

ÿ + 2ξωn ẏ + ωn² y = f(t),

the expressions for the respective natural frequency and damping ratio can be determined for each of the four analogous systems:

ωn = √(k1/m) and ξ = b1/(2 √(m k1))

ωn = √(k2/J) and ξ = b2/(2 √(J k2))

ωn = √(1/(L1 C1)) and ξ = (R1/2) √(C1/L1)

ωn = √(1/(L2 C2)) and ξ = (1/(2R2)) √(L2/C2).

The four mechanical and electrical systems are analogous because they satisfy the same general second-order differential equation, which means they have the same Input-Output model. Hence, the four dynamic systems manifest similar dynamic properties. They have identical system responses, where the natural frequency and damping ratio are as derived above.
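These formulas are easy to cross-check in code. The sketch below (with assumed parameter values, not from the text) normalises each model to ÿ + a1 ẏ + a0 y = f(t) and extracts ωn and ξ, for the translational system and the series RLC circuit.

```python
import math

def second_order_params(a1, a0):
    """For y'' + a1*y' + a0*y = f(t): return (wn, zeta) from a0 = wn^2, a1 = 2*zeta*wn."""
    wn = math.sqrt(a0)
    zeta = a1 / (2.0 * wn)
    return wn, zeta

# Translational system (assumed values): m = 2 kg, b1 = 4 N*s/m, k1 = 8 N/m
m, b1, k1 = 2.0, 4.0, 8.0
wn_mech, zeta_mech = second_order_params(b1 / m, k1 / m)

# Series RLC circuit (assumed values): L1 = 0.5 H, R1 = 1 ohm, C1 = 0.25 F
L1, R1, C1 = 0.5, 1.0, 0.25
wn_elec, zeta_elec = second_order_params(R1 / L1, 1.0 / (L1 * C1))

print(wn_mech, zeta_mech)  # matches sqrt(k1/m) and b1/(2*sqrt(m*k1))
print(wn_elec, zeta_elec)  # matches 1/sqrt(L1*C1) and (R1/2)*sqrt(C1/L1)
```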

3.2.6 SOLUTION OF THE STATE-VARIABLE MATRIX MODEL

The State-Variable Matrix model of a dynamic system can be solved in the time domain to produce the system responses of the individual state variables and the output of the model in one calculation. A more detailed treatment of this subject is presented in Chapter 7. Consider the general State-Variable Matrix model

ẋ = Ax + Bu
y = Cx + Du.

The homogeneous (zero-input) response is obtained by setting the inputs to zero such that

ẋ = Ax
y = Cx.


The solutions or responses of the state variables are obtained from the equation

x(t) = Φ(t) x(0).

The function Φ(t) is called the State-Transition Matrix, defined by

Φ(t) = e^(At) = α0 I + α1 A + α2 A² + · · · + α(n−1) A^(n−1),

where α0 to α(n−1) are scalar coefficients and x(0) is a vector of the initial values of the state variables. The characteristic values (or eigenvalues) of A are the values of λ for which

|λI − A| = 0.

This is called the system characteristic equation. More details about eigenvalues and eigenvectors are presented in Section 7.6.5. The solution of the State-Variable Matrix model to obtain the system responses of the individual variables and the output is best illustrated by considering a couple of cases.

Example 3.7 For a system that has two state variables such that x(t) = [x1 x2]^T, x(0) = [1 1]^T and

A = [0 1; −2 −3],

find the state variable responses x1(t) and x2(t).

Solution 3.7 First, the eigenvalues are determined:

|λI − A| = 0
⇒ |[λ −1; 2 λ + 3]| = 0
⇒ λ(λ + 3) − (−1 × 2) = 0
⇒ λ² + 3λ + 2 = 0
⇒ (λ + 1)(λ + 2) = 0
⇒ λ1 = −1, λ2 = −2.

The order of A is 2, which means the power series for e^(At) contains only two terms, such that

e^(At) = α0 I + α1 A.    (3.16)

Replacing A in Equation 3.16 by the eigenvalues λ1 and λ2 produces two scalar equations:

e^(−t) = α0 − α1
e^(−2t) = α0 − 2α1.

Solving for α0 and α1 gives

α0 = 2e^(−t) − e^(−2t)
α1 = e^(−t) − e^(−2t).

Substituting in the expression for e^(At) leads to

e^(At) = α0 I + α1 A
       = [α0 α1; −2α1 α0 − 3α1]
       = [2e^(−t) − e^(−2t)   e^(−t) − e^(−2t); −2(e^(−t) − e^(−2t))   −e^(−t) + 2e^(−2t)].

Therefore, the system response (for the two states) is obtained by

x(t) = e^(At) x(0)
     = [2e^(−t) − e^(−2t)   e^(−t) − e^(−2t); −2(e^(−t) − e^(−2t))   −e^(−t) + 2e^(−2t)] [1; 1]
     = [3e^(−t) − 2e^(−2t); −3e^(−t) + 4e^(−2t)].


Therefore, the system responses of the two state variables are given by

x1(t) = 3e^(−t) − 2e^(−2t)
x2(t) = −3e^(−t) + 4e^(−2t).

Example 3.8 A dynamic system is described by the following State-Variable Matrix model such that ẋ = Ax and y = Cx, where

A = [0 1; 0 −2],   C = [3 −1],

and x(0) = [1 2]^T.

(a) Obtain the State-Transition Matrix Φ(t).
(b) Find the state variable responses x1(t) and x2(t).
(c) Find the output response y(t).
(d) For this system verify that Φ(0) = I and Φ^(−1)(t) = Φ(−t).

Solution 3.8 (a) The State-Transition Matrix is obtained as follows:

Φ(t) = e^(At) = α0 I + α1 A,

where α0 and α1 are constants obtained from the eigenvalues as follows:

|λI − A| = 0
λ(λ + 2) = 0 ⇒ λ1 = 0 and λ2 = −2

e^(0t) = α0 + α1 (0)
e^(−2t) = α0 + α1 (−2)

⇒ α0 = 1 and α1 = (1 − e^(−2t))/2.

Using these values of α0 and α1, the State-Transition Matrix Φ(t) can now be determined:

Φ(t) = e^(At) = α0 I + α1 A
     = [1 0; 0 1] + ((1 − e^(−2t))/2) [0 1; 0 −2]
     = [1   (1 − e^(−2t))/2; 0   e^(−2t)].


(b) The two state variable responses are obtained as follows:

x(t) = Φ(t) x(0)
     = [1   (1 − e^(−2t))/2; 0   e^(−2t)] [1; 2]

⇒ [x1(t); x2(t)] = [1 + (1 − e^(−2t)); 2e^(−2t)] = [2 − e^(−2t); 2e^(−2t)].

(c) Now for the output response,

y(t) = C x(t)
     = [3 −1] [2 − e^(−2t); 2e^(−2t)]
     = 6 − 3e^(−2t) − 2e^(−2t)
⇒ y(t) = 6 − 5e^(−2t).

(d) The identities are verified by using the determined expression of Φ(t).

(i) Φ(0) = I:

LHS = Φ(0)
    = [1   (1 − e^(−2t))/2; 0   e^(−2t)] evaluated at t = 0
    = [1   (1 − 1)/2; 0   1]
    = [1 0; 0 1]

204

Design and Analysis of Control Systems

    = I = RHS.

(ii) Φ^(−1)(t) = Φ(−t):

LHS = Φ^(−1)(t)
    = (1/(e^(−2t) − 0)) [e^(−2t)   −(1 − e^(−2t))/2; 0   1]
    = [1   (1 − e^(2t))/2; 0   e^(2t)]

    = Φ(−t) = RHS.

Although these two properties have been verified for a specific system, they are valid for any State-Transition Matrix Φ(t).
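Both worked examples can be verified numerically. The sketch below (illustrative, not from the text) implements the 2×2 state-transition matrices derived above and checks the state responses of Examples 3.7 and 3.8, the output y(t), and the Φ(0) = I property.

```python
import math

def phi_ex38(t):
    """State-Transition Matrix of Example 3.8: A = [0 1; 0 -2]."""
    return [[1.0, (1.0 - math.exp(-2.0 * t)) / 2.0],
            [0.0, math.exp(-2.0 * t)]]

def eAt_ex37(t):
    """State-Transition Matrix of Example 3.7: A = [0 1; -2 -3]."""
    a, b = math.exp(-t), math.exp(-2.0 * t)
    return [[2.0 * a - b, a - b],
            [-2.0 * (a - b), -a + 2.0 * b]]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

C = [3.0, -1.0]

def y(t):
    # Output response of Example 3.8 with x(0) = [1, 2]
    x = mat_vec(phi_ex38(t), [1.0, 2.0])
    return C[0] * x[0] + C[1] * x[1]

def y_closed(t):
    return 6.0 - 5.0 * math.exp(-2.0 * t)  # closed form derived in the text

x37 = mat_vec(eAt_ex37(0.7), [1.0, 1.0])  # should match 3e^-t - 2e^-2t, -3e^-t + 4e^-2t

print(phi_ex38(0.0))  # identity matrix: Phi(0) = I
print(y(1.0), y_closed(1.0))
```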

3.3 FREQUENCY DOMAIN SOLUTION OF SYSTEM MODELS

3.3.1 THE INVERSE LAPLACE TRANSFORM

The dynamic system response can also be obtained by frequency domain techniques, in particular the Laplace transform, which was introduced in Chapter 2. The Laplace transform is essential in determining the system response in the frequency domain. As explained in Chapter 2, the Laplace transform is a mathematical tool for transforming differential equations into an easier-to-manipulate algebraic form. The differential equations are easily solved in this domain, and the solutions are converted back into the time domain to give the system response. This reverse process is called taking the inverse Laplace transform, and it produces a time-domain expression that captures the system's dynamic behaviour, that is, the system response. In this way, the Laplace transform is very useful in establishing and analysing the system responses of linear dynamic systems by using relatively easily solved algebraic equations instead of the more difficult differential equations. The use of Laplace transforms to solve dynamic system models proceeds from two forms of the models: the Input-Output Differential Equation or the Transfer Function. The one-sided Laplace transform was defined as

Y(s) = ∫ (from 0− to ∞) y(t) e^(−st) dt.


Once the system model has been solved in the s-domain, it is converted back into the time domain to give the system response as follows:

y(t) = (1/(2πj)) ∫ (from σ − j∞ to σ + j∞) Y(s) e^(st) ds.

In practice, this inverse Laplace transform equation is seldom used because it is too complex. Instead, tables of simple inverse Laplace transforms are used. Such tables are in Appendix A. As before, Figure 2.6 shows a summary of the Laplace transformation between the time and frequency domains. In Chapter 2, the Laplace transforms of simple functions were derived, and those of some key functions were discussed: step, impulse, ramp, time delay, differentiation, and integration. It is important to review all this material in Section 2.8 in preparation for the use of the Laplace transform in the derivation of the system response. The complete tables of the Laplace transforms and their properties are in Appendix A. The use of Laplace transforms to solve dynamic system models proceeds from two forms of the models: the Input-Output Differential Equation Form or the Transfer Function Form. The two approaches are related because the transfer function is defined as the output in Laplace divided by the input in Laplace. Both require taking inverse Laplace transforms of partial fractions. In the following subsection, the techniques used to deal with partial fractions are discussed, and then the determination of the system response by the two methods is developed.

3.3.2 PARTIAL FRACTIONS

The system response of a dynamic system can be obtained from the Transfer Function model by first expressing this model in terms of partial fractions, whose inverse Laplace transforms can be easily obtained from the Laplace transform tables. It is important to note that in establishing the system response, the Transfer Function model is represented by H(s), and not T(s), so that the corresponding time function is h(t) rather than t(t). The partial fractions of a general Transfer Function model are obtained as follows:

H(s) = Y(s)/R(s)

     = (b0 s^m + b1 s^(m−1) + b2 s^(m−2) + · · · + bm) / (a0 s^n + a1 s^(n−1) + a2 s^(n−2) + · · · + an)

     = K Π(i = 1 to m)(s − zi) / Π(j = 1 to n)(s − pj)

     = C1/(s − p1) + C2/(s − p2) + · · · + Cn/(s − pn),

where {Ci} are coefficients obtained by using the cover-up method:

Ci = (s − pi) H(s) |s=pi.    (3.17)

The nature of the partial fractions depends on the type of roots. There are three general categories of roots, and they will be discussed through examples: distinct real roots, distinct complex roots, and repeated roots.


(a) Distinct Real Roots

These roots consist of real numbers, and they are all different, i.e., there are no repeated roots.

Example 3.9 Express the following Transfer Function model in terms of partial fractions:

H(s) = (s + 7)(s + 4) / (s(s + 1)(s + 3)).

Solution 3.9 The Transfer Function model can be expressed in terms of three partial fractions:

H(s) = (s + 7)(s + 4) / (s(s + 1)(s + 3))
     = C1/s + C2/(s + 1) + C3/(s + 3).

The coefficients are obtained using the cover-up method:

C1 = (s − p1) H(s) |s=p1
   = [s × (s + 7)(s + 4) / (s(s + 1)(s + 3))] |s=0
   = [(s + 7)(s + 4) / ((s + 1)(s + 3))] |s=0
   = 28/3.

C2 = (s − p2) H(s) |s=p2
   = [(s + 1) × (s + 7)(s + 4) / (s(s + 1)(s + 3))] |s=−1
   = [(s + 7)(s + 4) / (s(s + 3))] |s=−1
   = (6 × 3)/(−1 × 2)
   = −9.

C3 = (s − p3) H(s) |s=p3
   = [(s + 3) × (s + 7)(s + 4) / (s(s + 1)(s + 3))] |s=−3
   = [(s + 7)(s + 4) / (s(s + 1))] |s=−3


   = (4 × 1)/(−3 × (−2))
   = 2/3.

Therefore, the Transfer Function model in partial fractions is given by

H(s) = (28/3)/s − 9/(s + 1) + (2/3)/(s + 3).
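The cover-up method lends itself to a numerical check: each residue Ci equals the limit of (s − pi)H(s) as s approaches the pole pi, so evaluating that product just off each pole reproduces the hand-computed constants of Example 3.9. An illustrative sketch (not from the text):

```python
def H(s):
    # Transfer function of Example 3.9
    return (s + 7.0) * (s + 4.0) / (s * (s + 1.0) * (s + 3.0))

def residue(pole, eps=1e-7):
    # Cover-up method, Equation 3.17, evaluated numerically just off the pole
    s = pole + eps
    return (s - pole) * H(s)

C1 = residue(0.0)   # ~ 28/3
C2 = residue(-1.0)  # ~ -9
C3 = residue(-3.0)  # ~ 2/3
print(C1, C2, C3)
```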

(b) Distinct Complex Roots

These occur when the roots consist of complex numbers, and they are all different, i.e., there are no repeated roots.

Example 3.10 Express the following Transfer Function model in terms of partial fractions:

H(s) = 2 / (s(s² + s + 2)).

Solution 3.10 The Transfer Function model can be expressed in terms of partial fractions as follows:

H(s) = 2 / (s(s² + s + 2))
     = C1/s + (C2 s + C3)/(s² + s + 2),

where C1 is obtained by using the cover-up method:

C1 = (s − p1) H(s) |s=p1
   = [s × 2 / (s(s² + s + 2))] |s=0
   = [2 / (s² + s + 2)] |s=0
   = 1.

The constants C2 and C3 are obtained by equating the original transfer function to its partial fraction form (with the computed value of C1) and then comparing coefficients of the powers of s:

2 / (s(s² + s + 2)) = 1/s + (C2 s + C3)/(s² + s + 2)

2 / (s(s² + s + 2)) = [(s² + s + 2) + C2 s² + C3 s] / (s(s² + s + 2))

2 = (1 + C2) s² + (1 + C3) s + 2

⇒ C2 = −1 and C3 = −1.


Therefore, the transfer function in partial fractions is given by

H(s) = 1/s − (s + 1)/(s² + s + 2).

Complex roots occur in conjugate pairs, p1 = a + jb and p1* = a − jb. Hence, the above Transfer Function model (which has a conjugate pair of poles) can be expressed in terms of complex partial fractions as follows:

H(s) = 2 / (s(s² + s + 2))
     = 2 / (s(s − p1)(s − p1*))
     = C1/s + C2/(s − p1) + C3/(s − p1*),

where

p1 = −1/2 + j√7/2 and p1* = −1/2 − j√7/2.

This is an alternative way of expressing a Laplace function that has complex roots in partial fractions. The constants C1, C2, and C3 are then determined by the cover-up method. For any function with a complex conjugate pair of poles, it can be shown that the constants C3 and C2 are also complex conjugates, i.e., C3 = C2*. Hence, only one of them needs to be determined.

(c) Repeated Roots

A repeated root p1 occurs when there are multiples of the factor (s − p1). The constant for a factor (s − p1) with multiplicity k is given by

C(k−i) = (1/i!) (d^i/ds^i) [(s − p1)^k Y(s)] |s=p1,

where i = 0, 1, . . . , k − 1.

Example 3.11 Express the following Transfer Function model in terms of partial fractions:

H(s) = (s + 4) / ((s + 1)(s + 2)²).

Solution 3.11 The Transfer Function model can be expressed in terms of three partial fractions H(s) = =

s+4 (s + 1)(s + 2)2 C1 C2 C3 + + s + 1 s + 2 (s + 2)2


The coefficients are obtained as follows:

C1 = (s − p1)H(s)|s=p1
   = (s + 1) × (s + 4)/[(s + 1)(s + 2)²]|s=−1
   = (s + 4)/(s + 2)²|s=−1
   = 3.

C2 = d/ds [(s + 2)²H(s)]|s=−2
   = d/ds [(s + 2)² × (s + 4)/((s + 1)(s + 2)²)]|s=−2
   = d/ds [(s + 4)/(s + 1)]|s=−2
   = [(s + 1) − (s + 4)]/(s + 1)²|s=−2
   = −3/(s + 1)²|s=−2
   = −3.

C3 = (s − p2)²H(s)|s=p2
   = (s + 2)² × (s + 4)/[(s + 1)(s + 2)²]|s=−2
   = (s + 4)/(s + 1)|s=−2
   = −2.

Therefore, the Transfer Function model in partial fractions is given by

H(s) = 3/(s + 1) − 3/(s + 2) − 2/(s + 2)².
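The decompositions in Examples 3.10 and 3.11 can be cross-checked symbolically. The sketch below uses Python's SymPy rather than the hand method (or the MATLAB tools used elsewhere in this book); it simply confirms that each partial fraction form equals the original transfer function.

```python
import sympy as sp

s = sp.symbols('s')

# Example 3.10: 2/[s(s^2+s+2)] = 1/s - (s+1)/(s^2+s+2)
H1 = 2/(s*(s**2 + s + 2))
H1_pf = 1/s - (s + 1)/(s**2 + s + 2)
assert sp.simplify(H1 - H1_pf) == 0

# Example 3.11: (s+4)/[(s+1)(s+2)^2] = 3/(s+1) - 3/(s+2) - 2/(s+2)^2
H2 = (s + 4)/((s + 1)*(s + 2)**2)
H2_pf = 3/(s + 1) - 3/(s + 2) - 2/(s + 2)**2
assert sp.simplify(H2 - H2_pf) == 0

# sp.apart reproduces the repeated-root expansion directly
assert sp.simplify(sp.apart(H2, s) - H2_pf) == 0
```

The `sp.apart` call performs exactly the cover-up/differentiation bookkeeping done by hand above.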

From these partial fraction forms of the Laplace transforms, the inverse Laplace transforms are obtained by comparing the partial fractions with those in the standard Laplace transform tables. Thus, for the partial fraction forms in Examples 3.9 and 3.11, the inverse Laplace transforms are obtained as follows:

H(s) = (28/3)/s − 9/(s + 1) + (2/3)/(s + 3)
h(t) = 28/3 − 9e^(−t) + (2/3)e^(−3t)

H(s) = 3/(s + 1) − 3/(s + 2) − 2/(s + 2)²
h(t) = 3e^(−t) − 3e^(−2t) − 2te^(−2t).

For the partial fractions obtained in Example 3.10 (first approach), a special technique of completing the square is required to put the second partial fraction into a form where the inverse Laplace transform can be deduced from the Laplace transform tables.

3.3.2.1 Completing the Square

When the denominator of a Laplace transform function is expressed as a sum of two squares, the following equations (from Laplace Tables) can be used to get its inverse Laplace transform:

L⁻¹[(s + a)/((s + a)² + b²)] = e^(−at) cos bt    (3.18)

L⁻¹[b/((s + a)² + b²)] = e^(−at) sin bt    (3.19)

L⁻¹[(a² + b²)/(s[(s + a)² + b²])] = 1 − e^(−at)[cos bt + (a/b) sin bt].    (3.20)

The method of completing the square is a useful technique employed to express the denominator of a Laplace algebraic function as a sum of two squares. This method involves the two techniques of adding zero to an expression and multiplying an expression by one.

3.3.3 GENERAL SECOND-ORDER LAPLACE FUNCTION

In Section 3.2.4, analysis of second-order systems was carried out in the time domain. Here it is conducted in the frequency domain. The technique of completing the square is very useful when dealing with second-order systems or higher-order systems that can be broken down into lower-order functions that include second-order ones. A general second-order system has two poles and at most one zero, such that its general Transfer Function model is given by

H(s) = (ds + c)/(s² + as + b),    (3.21)

where a, b, c, and d are constants. The denominator can be expressed as a sum of two squares as follows:

s² + as + b = (s² + as + a²/4) − a²/4 + b    (adding zero)
            = (s + a/2)² + (b − a²/4)
            = (s + a/2)² + [√(b − a²/4)]².    (3.22)

This form of the denominator (Equation 3.22) is then used in Equation 3.21, and the numerator of the result is further manipulated to put the equation in the structure of the left-hand side of any of the Equations 3.18, 3.19, and 3.20.

H(s) = (ds + c)/(s² + as + b)

     = (ds + c)/[(s + a/2)² + (√(b − a²/4))²]    (completing the square)

     = [ds + c + ad/2 − ad/2]/[(s + a/2)² + (√(b − a²/4))²]    (adding zero to the numerator)

     = [d(s + a/2) + (c − ad/2)]/[(s + a/2)² + (√(b − a²/4))²]

     = d(s + a/2)/[(s + a/2)² + (√(b − a²/4))²]
       + [(c − ad/2)/√(b − a²/4)] × √(b − a²/4)/[(s + a/2)² + (√(b − a²/4))²].    (multiplication by one)

The function is now in the form of Equations 3.18 and 3.19, and the inverse Laplace transform can be found as follows:

h(t) = L⁻¹[(ds + c)/(s² + as + b)]    (3.23)

     = de^(−(a/2)t) cos[√(b − a²/4) t] + [(c − ad/2)/√(b − a²/4)] e^(−(a/2)t) sin[√(b − a²/4) t].

This is the inverse Laplace transform of a general second-order Laplace function. Understanding the principles involved in this derivation is very important, as these principles are very useful in determining the inverse Laplace transforms of second-order dynamic systems, in particular, those that have complex roots.

Example 3.12 Using partial fractions, find the time function y(t) of the following Laplace functions:

(a) Y(s) = 2/[s(s + 2)]

(b) Y(s) = 10/[s(s + 1)(s + 10)]

(c) Y(s) = (2s² + s + 1)/(s³ − 1)

(d) Y(s) = (3s + 2)/(s² + 4s + 20).

Solution 3.12 In all four cases, the given function is split into partial fractions and then inverse Laplace transformation is applied to get y(t).

(a)

Y(s) = 2/[s(s + 2)]
     = C1/s + C2/(s + 2)
     = 1/s − 1/(s + 2)

y(t) = 1 − e^(−2t).


(b)

Y(s) = 10/[s(s + 1)(s + 10)]
     = C1/s + C2/(s + 1) + C3/(s + 10)
     = 1/s − (10/9)/(s + 1) + (1/9)/(s + 10)

y(t) = 1 − (10/9)e^(−t) + (1/9)e^(−10t).

(c)

Y(s) = (2s² + s + 1)/(s³ − 1)
     = C1/(s − 1) + (C2s + C3)/(s² + s + 1)
     = (4/3)/(s − 1) + [(2/3)s + 1/3]/(s² + s + 1).

The next step is completing the square of the denominator of the second term:

Y(s) = (4/3)/(s − 1) + [(2/3)s + 1/3]/[(s² + s + 1/4) − 1/4 + 1]
     = (4/3)/(s − 1) + [(2/3)s + 1/3]/[(s + 1/2)² + (√(3/4))²].

The last step is to express the numerator of the second term as a function of (s + 1/2) so that Equation 3.18 can be used to get the inverse Laplace transform:

Y(s) = (4/3)/(s − 1) + (2/3)(s + 1/2)/[(s + 1/2)² + (√(3/4))²]

y(t) = (4/3)e^t + (2/3)e^(−t/2) cos[(√3/2)t].


(d) The first step is completing the square of the denominator:

Y(s) = (3s + 2)/(s² + 4s + 20)
     = (3s + 2)/(s² + 4s + 2² − 2² + 20)    (adding zero)
     = (3s + 2)/[(s + 2)² − 4 + 20]
     = (3s + 2)/[(s + 2)² + 4²].

The next step is rearranging the numerator so that Equations 3.18 and 3.19 can be used to get the inverse Laplace transforms. This is done by expressing the numerator in terms of (s + 2) and 4, by using the technique of adding a zero:

Y(s) = [3s + 2 + (4 − 4)]/[(s + 2)² + 4²]    (adding zero)
     = [3(s + 2) − 4]/[(s + 2)² + 4²]
     = 3(s + 2)/[(s + 2)² + 4²] − 4/[(s + 2)² + 4²]

y(t) = 3e^(−2t) cos 4t − e^(−2t) sin 4t
     = e^(−2t)[3 cos 4t − sin 4t].
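The results of Example 3.12 can be verified independently by transforming the time responses back into the s-domain. This SymPy sketch (not the book's MATLAB workflow) checks parts (a), (b), and (d); part (c) can be checked the same way, its unstable e^t term simply requiring Re(s) > 1.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def L(f):
    # forward Laplace transform, discarding convergence conditions
    return sp.laplace_transform(f, t, s, noconds=True)

# (a) y(t) = 1 - e^(-2t)  <->  2/[s(s+2)]
assert sp.simplify(L(1 - sp.exp(-2*t)) - 2/(s*(s + 2))) == 0

# (b) y(t) = 1 - (10/9)e^(-t) + (1/9)e^(-10t)  <->  10/[s(s+1)(s+10)]
yb = 1 - sp.Rational(10, 9)*sp.exp(-t) + sp.Rational(1, 9)*sp.exp(-10*t)
assert sp.simplify(L(yb) - 10/(s*(s + 1)*(s + 10))) == 0

# (d) y(t) = e^(-2t)(3cos4t - sin4t)  <->  (3s+2)/(s^2+4s+20)
yd = sp.exp(-2*t)*(3*sp.cos(4*t) - sp.sin(4*t))
assert sp.simplify(L(yd) - (3*s + 2)/(s**2 + 4*s + 20)) == 0
```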

3.4 DETERMINATION OF THE SYSTEM RESPONSE

Now that techniques (such as partial fractions and completing the square) of obtaining inverse Laplace transforms of various functions have been discussed, it is clear that these methods can be used to determine the dynamic system response. As already explained, this solution of dynamic system models proceeds from two forms of the models: the Input-Output Differential Equation or the Transfer Function.

3.4.1 USING THE INPUT-OUTPUT MODEL

The procedure of obtaining the system response starting from the input-output differential equation can be summarised as follows:

• Obtain the input-output differential equation model.
• Apply Laplace transformation to this model.
• Solve the resulting algebraic equations for the Laplace transform of the output.
• Take the inverse Laplace transform of this output.
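The four-step procedure can be mechanised symbolically. The following SymPy sketch applies it to the simple first-order model y' + 2y = u(t) with y(0) = 0 and u(t) = 2 (the same system revisited in Example 3.19); the book itself works by hand and in MATLAB.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s, Y = sp.symbols('s Y')

y0 = 0          # given initial condition
U = 2/s         # Laplace transform of the input u(t) = 2

# Step 2: Laplace-transform y' + 2y = u(t) term by term: (sY - y0) + 2Y = U.
# Step 3: solve the resulting algebraic equation for Y(s).
Ysol = sp.solve(sp.Eq((s*Y - y0) + 2*Y, U), Y)[0]

# Step 4: take the inverse Laplace transform of the output.
y = sp.inverse_laplace_transform(Ysol, s, t)
assert sp.simplify(y - (1 - sp.exp(-2*t))) == 0
```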

Example 3.13 Consider the homogeneous input-output differential equation (Input-Output model)

y¨ + y = 0,    (3.24)

where y(0) = α and y˙(0) = β.


(a) Find the system response y(t).
(b) How can this response be verified?

Solution 3.13 (a) The system response is determined by taking the Laplace transforms of the equation:

L[y¨ + y] = L[0]
[s²Y(s) − sy(0) − y˙(0)] + Y(s) = 0
s²Y(s) − αs − β + Y(s) = 0
Y(s)[s² + 1] = αs + β

Y(s) = (αs + β)/(s² + 1)
     = αs/(s² + 1) + β/(s² + 1)

y(t) = L⁻¹[Y(s)] = α cos t + β sin t.

(b) This result can be verified by showing that it satisfies Equation 3.24, i.e.,

LHS = y¨ + y
    = d²/dt²[α cos t + β sin t] + [α cos t + β sin t]
    = −α cos t − β sin t + α cos t + β sin t
    = 0 = RHS.

Example 3.14 The car speed cruise control Input-Output model is given by

v˙ + (b/m)v = (b/m)vr,

for a step input vr(t). The car speed response was obtained by the direct time domain determination as

v(t) = vr[1 − e^(−(b/m)t)].

Show that the Laplace transforms method produces the same result.

Solution 3.14 The same response can be obtained using Laplace transforms as follows:

L[v˙ + (b/m)v] = L[(b/m)vr]
sV(s) − v(0) + (b/m)V(s) = (b/m)(vr/s)

V(s)[s + b/m] = (b/m)(vr/s)    (where v(0) = 0)

V(s) = (bvr/m) × 1/[s(s + b/m)]
     = (bvr/m)[C1/s + C2/(s + b/m)],    (3.25)

where

C1 = 1/(s + b/m)|s=0 = m/b

and

C2 = 1/s|s=−b/m = −m/b.

Substituting these coefficients in Equation 3.25 leads to

V(s) = (bvr/m)[(m/b)/s − (m/b)/(s + b/m)]
     = vr[1/s − 1/(s + b/m)]

v(t) = L⁻¹[V(s)]
     = vr[1 − e^(−(b/m)t)].

Thus, the same result is obtained.

Example 3.15 Consider a spring mass damper system with the following Input-Output model:

my¨ + f y˙ + ky = u(t),

where u(t) is a constant and y˙(0) = 0.

(a) Find Y(s), the Laplace transform of the output, y(t).
(b) For the specific case with the following data: m = 1, k = 2, f = 3, y(0) = 1 and u(t) = 0, find the system response, y(t).


Solution 3.15 (a) The Laplace transform of the output is obtained by taking the Laplace transforms of both sides of the equation:

L(my¨ + f y˙ + ky) = L(u(t))
m[s²Y(s) − sy(0) − y˙(0)] + f[sY(s) − y(0)] + kY(s) = U(s)
ms²Y(s) − msy(0) + fsY(s) − fy(0) + kY(s) = u/s
Y(s)[ms² + fs + k] = (ms + f)y(0) + u/s.

Therefore,

Y(s) = [(ms + f)y(0) + u/s]/(ms² + fs + k)
     = [ms²y(0) + sfy(0) + u]/[s(ms² + fs + k)].

This is the Laplace transform of the system output, y(t).

(b) With the given data, the Laplace transform of the output then reduces to

Y(s) = (s² + 3s)/[s(s² + 3s + 2)]
     = (s + 3)/[(s + 1)(s + 2)]
     = C1/(s + 1) + C2/(s + 2),    (3.26)

where

C1 = (s + 3)/(s + 2)|s=−1 = 2
C2 = (s + 3)/(s + 1)|s=−2 = −1.

Substituting these coefficients in Equation 3.26 leads to

Y(s) = 2/(s + 1) − 1/(s + 2)

y(t) = L⁻¹[Y(s)]
     = L⁻¹[2/(s + 1)] − L⁻¹[1/(s + 2)]
     = 2e^(−t) − e^(−2t).

This is the system response of the dynamic system.

Example 3.16 For the RC electrical circuit shown in Figure 3.8, where R = 2Ω, C = 2F, and u(t) is a unit step input, find the system output vC(t).

Figure 3.8 An RC circuit.

Solution 3.16 The input-output differential equation (Input-Output model) is obtained by using the KVL:

5u(t) − vR − vC(t) = 0
5u(t) − RCv˙C − vC(t) = 0
4v˙C + vC(t) = 5u(t).

The output is obtained by taking the Laplace transforms:

L[4v˙C + vC(t)] = L[5u(t)]
4[sVC(s) − vC(0)] + VC(s) = 5/s
VC(s)[1 + 4s] = 5/s    (where vC(0) = 0)

⟹ VC(s) = (5/4)/[s(s + 1/4)]
         = C1/s + C2/(s + 1/4)
         = 5/s − 5/(s + 1/4)

⟹ vC(t) = L⁻¹[5/s − 5/(s + 1/4)]
         = 5[1 − e^(−t/4)].

Example 3.17 For the following nonhomogeneous input-output differential equations (Input-Output models), find their respective system responses:

(a) y¨ + 4y˙ + 3y = u(t), where y(0) = 1, y˙(0) = 0 and u(t) = 2


(b) y¨ + 5y˙ + 4y = u(t), where y(0) = α, y˙(0) = β and u(t) = 3

(c) y¨ + 5y˙ + 4y = u(t), where y(0) = 0, y˙(0) = 0 and u(t) = 2e^(−2t)

Solution 3.17 The system responses are obtained by taking the Laplace transforms of the input-output differential equations.

(a)

L[2] = L[y¨ + 4y˙ + 3y]
2/s = [s²Y(s) − sy(0) − y˙(0)] + 4[sY(s) − y(0)] + 3Y(s)
2/s = [s²Y(s) − s] + 4[sY(s) − 1] + 3Y(s)
2/s = Y(s)[s² + 4s + 3] − (s + 4)

Y(s) = (s + 4)/(s² + 4s + 3) + 2/[s(s² + 4s + 3)]
     = [C1/(s + 1) + C2/(s + 3)] + [C3/(s + 1) + C4/(s + 3) + C5/s].

The coefficients are found by cover-up methods such that

Y(s) = [(3/2)/(s + 1) + (−1/2)/(s + 3)] + [(−1)/(s + 1) + (1/3)/(s + 3) + (2/3)/s]

y(t) = L⁻¹[Y(s)]
     = [(3/2)e^(−t) − (1/2)e^(−3t)] + [−e^(−t) + (1/3)e^(−3t) + 2/3]
     = (1/2)e^(−t) − (1/6)e^(−3t) + 2/3.

(b)

L[3] = L[y¨ + 5y˙ + 4y]
3/s = [s²Y(s) − sα − β] + 5[sY(s) − α] + 4Y(s)
3/s = Y(s)[s² + 5s + 4] − [sα + β + 5α]


Y(s) = [s(sα + β + 5α) + 3]/[s(s + 1)(s + 4)]
     = C1/s + C2/(s + 1) + C3/(s + 4).

The coefficients are then obtained and used to determine the system response:

Y(s) = (3/4)/s − [(3 − β − 4α)/3]/(s + 1) + [(3 − 4α − 4β)/12]/(s + 4)

y(t) = 3/4 − [(3 − β − 4α)/3]e^(−t) + [(3 − 4α − 4β)/12]e^(−4t).

(c)

L[y¨ + 5y˙ + 4y] = L[2e^(−2t)]
s²Y(s) + 5sY(s) + 4Y(s) = 2/(s + 2)

Y(s) = 2/[(s + 2)(s + 1)(s + 4)]
     = −1/(s + 2) + (2/3)/(s + 1) + (1/3)/(s + 4)

y(t) = L⁻¹[Y(s)]
     = −e^(−2t) + (2/3)e^(−t) + (1/3)e^(−4t).

Example 3.18 Consider a system that has the input-output differential equation (Input-Output model)

a1y + a2y˙ + a3y¨ = b1u + b2u˙,

where the input is u(t), the output is y(t), and all initial conditions are zero. Find the system response y(t).

Solution 3.18 Taking Laplace transforms throughout the differential equation gives

a1Y(s) + a2[sY(s) − y(0)] + a3[s²Y(s) − sy(0) − y˙(0)] = b1U(s) + b2[sU(s) − u(0)].

Setting all initial conditions to zero leads to

a1Y(s) + a2sY(s) + a3s²Y(s) = b1U(s) + b2sU(s)
Y(s)[a1 + a2s + a3s²] = U(s)[b1 + b2s]

⟹ Y(s) = [(b1 + b2s)/(a1 + a2s + a3s²)]U(s)
⟹ y(t) = L⁻¹{[(b1 + b2s)/(a1 + a2s + a3s²)]U(s)}.


Given the specific form of the input u(t), the actual expression of the response y(t) can be determined.

3.4.2 USING THE TRANSFER FUNCTION MODEL

The procedure of obtaining the system response using the Transfer Function model is very similar to that used when the model is in the form of an input-output differential equation. The procedure can be summarised as follows:

• Obtain the system Transfer Function model H(s) (in Laplace transforms).
• Express the Laplace transform of the output Y(s) in terms of the transfer function H(s) and the Laplace transform of the input U(s).
• Take the inverse Laplace transform of this output Y(s).

These steps can be expressed in terms of equations as follows:

H(s) = Y(s)/U(s)
Y(s) = H(s)U(s)
y(t) = L⁻¹[H(s)U(s)].    (3.27)

This is the general system response of any system in terms of the transfer function and the input in Laplace transforms.

Example 3.19 Consider the dynamic system whose input-output differential equation is given by

y˙ + 2y = u(t),

where y(0) = 0, y˙(0) = 0 and u(t) = 2.

(a) Find the system Transfer Function model H(s).
(b) Find the system response y(t).

Solution 3.19 (a) The system Transfer Function model is determined by taking the Laplace transforms of the equation:

L[y˙ + 2y] = L[u(t)]
[sY(s) − y(0)] + 2Y(s) = U(s)
sY(s) + 2Y(s) = U(s)
Y(s)[s + 2] = U(s).

Therefore,

H(s) = Y(s)/U(s) = 1/(s + 2).

(b) The system response is obtained from the Transfer Function model and the Laplace transform of the input:

y(t) = L⁻¹[H(s)U(s)]

     = L⁻¹[(1/(s + 2))(2/s)]
     = L⁻¹[2/(s(s + 2))]
     = L⁻¹[1/s − 1/(s + 2)]
     = 1 − e^(−2t).

Example 3.20 The car speed cruise control input-output differential equation is given by

v˙ + (b/m)v = (b/m)vr,

for a step input vr.

(a) Determine the system Transfer Function model H(s).
(b) Find the dynamic system response v(t).

Solution 3.20 (a) The system transfer function is obtained by taking the Laplace transforms:

L[v˙ + (b/m)v] = L[(b/m)vr]
[sV(s) − v(0)] + (b/m)V(s) = (b/m)Vr(s)
V(s)[s + b/m] = (b/m)Vr(s)    (where v(0) = 0)

Therefore,

H(s) = V(s)/Vr(s) = (b/m)/(s + b/m).

(b) The system response is obtained from the transfer function and the Laplace transform of the input:

v(t) = L⁻¹[H(s)Vr(s)]
     = L⁻¹[(bvr/m)/(s(s + b/m))]
     = (bvr/m) L⁻¹[1/(s(s + b/m))]

     = (bvr/m) L⁻¹[C1/s + C2/(s + b/m)].

The coefficients are obtained by the cover-up method as

C1 = m/b and C2 = −m/b.

Hence,

v(t) = (bvr/m) L⁻¹[(m/b)/s − (m/b)/(s + b/m)]
     = vr L⁻¹[1/s − 1/(s + b/m)]
     = vr[1 − e^(−(b/m)t)].

These last two examples illustrate how the system response can be determined from the system Transfer Function model. Two simple but common inputs that are useful in the study of the response of any dynamic system are the impulse and step inputs.

3.4.3 IMPULSE RESPONSE (NATURAL RESPONSE)

The impulse response h(t) is the output when the input is an impulse δ(t). Such a response for any system with a general Transfer Function model H(s) is derived from the general system response in Equation 3.27 by substituting for U(s) with the Laplace transform of an impulse input, given by U(s) = 1, such that

y(t) = L⁻¹[Y(s)] = L⁻¹[H(s)U(s)]
y(t) = L⁻¹[H(s)] = h(t)
⟹ h(t) = L⁻¹[H(s)].

This is the impulse response of any dynamic system with a Transfer Function model H(s). It is obtained by taking the inverse Laplace transform of the system's Transfer Function model. This is a very important response and is also called the natural response. It is very significant because it is the simplest type of response that can be analysed for any system to deduce system characteristics.

3.4.4 UNIT STEP RESPONSE

The unit step response yu(t) of a system is the output when the input is a unit step function. Such a response for any system with a general Transfer Function model H(s) is derived from the general system response in Equation 3.27 by substituting for U(s) with the Laplace transform of a unit step input, given by U(s) = 1/s, such that

y(t) = L⁻¹[H(s)U(s)]
y(t) = L⁻¹[H(s)/s]
⟹ yu(t) = L⁻¹[H(s)/s].

This is the general expression for the unit step response of any dynamic system with a Transfer Function model H(s).

3.4.5 IMPULSE AND UNIT STEP RESPONSES: THE RELATIONSHIP

A relationship can be established between the impulse and step responses by comparing their expressions,

h(t) = L⁻¹[H(s)] and yu(t) = L⁻¹[H(s)/s],

and using the properties that s and 1/s represent the differentiator and integrator operators, respectively. Integrating the natural response leads to

∫h(t)dt = L⁻¹[(1/s) × H(s)]
        = L⁻¹[H(s)/s]
        = yu(t).

Alternatively, taking the derivative of the step response leads to

dyu(t)/dt = L⁻¹[s × H(s)/s]
          = L⁻¹[H(s)]
          = h(t).

Thus, the relationship between the responses can be summarised as follows:

yu(t) = ∫h(t)dt
h(t) = dyu(t)/dt.

This means the unit step response is the integral of the natural response, such that the natural response is the derivative of the unit step response. The relationship can also be expressed in the frequency domain in terms of Laplace transforms as follows:

Yu(s) = H(s)/s
H(s) = sYu(s).

It is important to note that H(s), the system Transfer Function model, is also the Laplace transform of the impulse response, whereas Yu(s) is the Laplace transform of the unit step response.
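The integral/derivative relationship can be demonstrated numerically. This SciPy/NumPy sketch (a cross-check, not the book's method) simulates both responses of H(s) = 1/(s + 2) and confirms that the running integral of the impulse response reproduces the step response:

```python
import numpy as np
from scipy import signal

sys = signal.lti([1], [1, 2])          # H(s) = 1/(s+2)
T = np.linspace(0, 8, 2000)
_, h = signal.impulse(sys, T=T)        # natural response h(t) = e^(-2t)
_, yu = signal.step(sys, T=T)          # unit step response

# running trapezoidal integral of h(t) should reproduce yu(t)
integral_h = np.concatenate(([0.0],
                             np.cumsum((h[1:] + h[:-1])/2 * np.diff(T))))
assert np.max(np.abs(yu - integral_h)) < 1e-4
```

The small residual is purely the trapezoidal quadrature error.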

3.4.6 FINAL VALUE THEOREM (FVT)

The response of a dynamic system has a steady-state value, or final value, which it settles to as time approaches infinity. Clearly, this final value is only finite and defined for stable dynamic systems. The Final Value Theorem (FVT) is used to compute this steady-state value yss of a generic time function y(t) as follows:

yss = lim(t→∞) y(t)
    = lim(s→0) sY(s)
    = lim(s→0) s[H(s)U(s)],

where H(s) is the system transfer function and U(s) is the Laplace transform of the input. This theorem only applies to stable systems since the final value of an unstable system is not defined. Consider two systems whose outputs in Laplace transforms are given by

Y1(s) = 4(s + 2)/[s(s² + 2s + 20)]
Y2(s) = 7/[s(s − 4)].

The first system's steady-state value is obtained by using the FVT as follows:

lim(t→∞) y1(t) = lim(s→0) sY1(s)
               = lim(s→0) s[4(s + 2)/(s(s² + 2s + 20))]
               = lim(s→0) 4(s + 2)/(s² + 2s + 20)
               = (4 × 2)/20
               = 8/20 = 2/5.

For the second system, from the denominator, it is clear that the system has a root in the right-hand plane (RHP), which means it is unstable. Hence the FVT is not defined. If the FVT is naively applied, a steady-state value of −7/4 is obtained, which is incorrect as the response grows without bound. In fact, y2(t) = (7/4)e^(4t) − 7/4, which clearly grows without bound. System stability is discussed in detail in Chapters 4 and 5.

3.4.7 INITIAL VALUE THEOREM

Most dynamic systems have an initial value, which is the value of the system response at the beginning, i.e., when time is zero. The Initial Value Theorem (IVT) is used to compute the initial value y0 of a time function y(t) as follows:

y0 = y(0)
   = lim(s→∞) sY(s)
   = lim(s→∞) s[H(s)U(s)],

where H(s) is the system transfer function and U(s) is the Laplace transform of the input. The theorem is valid for all functions Y(s) where the degree m of the numerator polynomial is less than the degree n of the denominator. Consider the two examples explored for the FVT in the previous section. The first system's initial value is obtained by using the IVT as follows:

lim(t→0) y1(t) = lim(s→∞) sY1(s)
               = lim(s→∞) s[4(s + 2)/(s(s² + 2s + 20))]
               = lim(s→∞) 4(s + 2)/(s² + 2s + 20)
               = lim(s→∞) 4(1 + 2/s)/(s + 2 + 20/s)
               = 0.

Similarly, for the second system, the initial value is calculated as follows:

lim(t→0) y2(t) = lim(s→∞) sY2(s)
               = lim(s→∞) s[7/(s(s − 4))]
               = lim(s→∞) 7/(s − 4)
               = 0.

It is prudent to note that although a system might be unstable, its initial value can be defined, as illustrated in the second case. However, there are cases where the IVT does not apply. Consider a system whose response is represented by the following Laplace function:

Y(s) = (s³ + 2s² + 4s + 7)/(s³ + 2s).

Attempting to apply the IVT leads to

y(0) = lim(s→∞) sY(s)
     = lim(s→∞) s(s³ + 2s² + 4s + 7)/(s³ + 2s)
     = lim(s→∞) s(1 + 2/s + 4/s² + 7/s³)/(1 + 2/s²)
     → ∞ (undefined).

The reason why the IVT is not defined for this system is that m = n = 3 in the Laplace function.

3.4.8 SYSTEM DC GAIN

The system DC gain of a general time-invariant and continuous-time system, with a Transfer Function model H(s), is defined as the final value of the system response to a unit step input. It is obtained by employing the FVT when the input is a unit step function, i.e., U(s) = 1/s. Hence the DC gain is calculated as follows:

DC gain = yss = lim(s→0) sY(s)
              = lim(s→0) s[H(s)U(s)]
              = lim(s→0) s[H(s)(1/s)].

Therefore,

DC gain = lim(s→0) H(s),

where H(s) is the system Transfer Function model as indicated. This is the DC gain of a continuous-time system. It is important to note that since it is defined in terms of the FVT, when this theorem is not valid (e.g., for an unstable system), the DC gain is not defined. Consider two systems whose Transfer Function models are given by

H1(s) = (s + 5)/[(s + 5)(s + 7)(s + 1)]
H2(s) = (3s + 5)/[(s + 5)(s − 7)].

Find the DC gain for each of these systems. For the first system,

DC gain = lim(s→0) H1(s)
        = lim(s→0) (s + 5)/[(s + 5)(s + 7)(s + 1)]
        = 5/(5 × 7 × 1)
        = 1/7.

The second system is unstable because the characteristic equation has a root in the RHP. Hence, the FVT is not valid, making the DC gain undefined.
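The FVT, IVT, and DC-gain computations above are just limits, so they can be reproduced with SymPy (a cross-check on the worked numbers; remember the FVT result is only meaningful because Y1 is stable):

```python
import sympy as sp

s = sp.symbols('s')

Y1 = 4*(s + 2)/(s*(s**2 + 2*s + 20))
assert sp.limit(s*Y1, s, 0) == sp.Rational(2, 5)   # FVT: 8/20 = 2/5
assert sp.limit(s*Y1, s, sp.oo) == 0               # IVT

H1 = (s + 5)/((s + 5)*(s + 7)*(s + 1))
assert sp.limit(H1, s, 0) == sp.Rational(1, 7)     # DC gain
```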

3.5 FIRST-ORDER SYSTEMS

Most of the dynamic systems considered in this book are either first-order or second-order. This is because all the control techniques developed can be demonstrated using such systems. Moreover, most engineering problems can be reduced to first-order or second-order systems. The properties and characteristics of the system responses for these two types of systems are established by analysing the outputs to such simple inputs as the unit-step, unit-ramp, and unit-impulse functions. In this section, this analysis is carried out in the frequency domain, whereas in Section 3.2.3, it was done in the time domain.

A general first-order system with one pole and no zeros is represented as shown in Figure 3.9. Hence this first-order system can be depicted by a Transfer Function model H(s) and a corresponding natural (impulse) response h(t) derived as follows:

H(s) = a/(s + a)
R(s) = 1    (impulse input in Laplace)
Y(s) = R(s)H(s) = H(s)    (output due to impulse in Laplace)
⟹ y(t) = h(t) = ae^(−at).

The natural response ae^(−at) decays if a > 0, which means the pole is located at s < 0 and the natural response is stable. If a < 0, then s > 0 and the response grows exponentially, which means the system is unstable. The time constant is τ = 1/a, the time at which the response has decayed to 1/e times its initial value a.

This response h(t) to a unit-impulse input by a stable first-order system is illustrated in Figure 3.10. It is prudent to note that the Transfer Function model of a first-order system can thus be represented by

H(s) = (1/τ)/(s + 1/τ),

where τ is the system time constant (a non-zero constant) and −1/τ is the system's pole. Clearly the characteristic equation is the same as in the previously discussed time domain representation of a first-order system, as illustrated below:

Figure 3.9 A first-order system and its pole plot.


Figure 3.10 A stable first-order system.

y˙ + (1/τ)y = f(t)
⟹ sY(s) + (1/τ)Y(s) = F(s)
⟹ H(s) = Y(s)/F(s) = 1/(s + 1/τ).

Furthermore, the characteristics of the response of a first-order system can be investigated further by studying its response to a unit-step input. Consider the already introduced first-order system with a Transfer Function model given by

H(s) = a/(s + a),

where a = 1/τ. If this system is subjected to a unit-step input, the response is derived as follows:

R(s) = 1/s
Y(s) = R(s)H(s)
⟹ Y(s) = a/[s(s + a)]
⟹ y(t) = 1 − e^(−at).

The resultant characteristics of a first-order system can be derived and summarised from Figure 3.11 as follows:

• Time constant: 1 − 1/e = 1 − e^(−aτ) ⟹ −1 = −aτ ⟹ τ = 1/a
• Rise time: tr = ln(0.1)/(−a) − ln(0.9)/(−a) = 2.31/a − 0.11/a ≈ 2.2/a
• Settling time: ts = ln(0.02)/(−a) ≈ 4/a
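The constants 2.2 and 4 come directly from solving 1 − e^(−at) = p for t, i.e., t = −ln(1 − p)/a. A minimal Python check of that arithmetic:

```python
import math

a = 1.0
t10 = -math.log(0.9)/a        # 10% point, ~0.11/a
t90 = -math.log(0.1)/a        # 90% point, ~2.31/a
tr = t90 - t10                # 10-90% rise time, ~2.2/a
ts = -math.log(0.02)/a        # 2% settling time, ~3.91/a, rounded to 4/a

assert abs(tr - 2.2) < 0.01
assert abs(ts - 3.91) < 0.01
```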


Figure 3.11 Response specifications for a first-order system.

Example 3.21 Given the Transfer Function model of a first-order system as

G(s) = a/(s + a),

(a) Calculate the time constant, settling time, rise time and pole location for the three first-order systems where the values of a are 1, 2 and 3, respectively.
(b) Using MATLAB, plot the unit-step responses of the three systems in one diagram.
(c) Compare the calculated and experimentally determined (MATLAB) time constant, settling time and rise time for each of the three systems.

Solution 3.21 (a) The time constant, settling time, rise time and pole location for the three first-order systems are calculated using the following formulae:

• Time constant: τ = 1/a
• Settling time: ts = 4/a
• Rise time: tr = 2.2/a
• Pole: s = −a

The results are summarised in Figure 3.12.

Figure 3.12 Calculated parameters for three first-order systems.

(b) The plots of the three unit-step responses are obtained by using the following MATLAB code:

% 3 First-Order Systems (Unit-Step Responses)
G1 = tf(1, [1 1]);
G2 = tf(2, [1 2]);
G3 = tf(3, [1 3]);
stepplot(G1,G2,G3)
legend('1/(s+1)','2/(s+2)','3/(s+3)')

The resultant MATLAB plots for the three first-order systems are displayed in Figure 3.13.

Figure 3.13 Unit-step responses of three first-order systems.

(c) From the plots, the experimentally determined parameters are similar to the hand-calculated ones in Figure 3.12. The extraction of the parameters from the plots is left as an exercise for the reader.
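One way to do the part (c) extraction programmatically is to simulate each step response and read off the threshold crossings. A Python/SciPy sketch (standing in for the MATLAB session; threshold search works here because the responses are monotonic):

```python
import numpy as np
from scipy import signal

for a in (1, 2, 3):
    T = np.linspace(0, 10, 20001)
    _, y = signal.step(signal.lti([a], [1, a]), T=T)
    tau = T[np.argmax(y >= 1 - np.exp(-1))]            # 63.2% point
    tr = T[np.argmax(y >= 0.9)] - T[np.argmax(y >= 0.1)]
    ts = T[np.argmax(y >= 0.98)]                       # 2% settling time
    print(f"a={a}: tau~{tau:.3f}, tr~{tr:.3f}, ts~{ts:.3f}")
```

The printed values match the 1/a, 2.2/a and 4/a formulae of Figure 3.12.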

3.6 SECOND-ORDER SYSTEMS

As discussed earlier, in Section 3.3.3, a general second-order system has two poles and at most one zero, such that its general Transfer Function model is given by

H(s) = (ds + c)/(s² + as + b) ≡ Y(s)/F(s),    (3.28)

where a, b, c, and d are constants, while F(s) and Y(s) are the system input and output in Laplace, respectively. Hence, the Transfer Function model of a second-order dynamic system without any zeros takes the general form

H(s) = b/(s² + as + b)    (3.29)
     = ωn²/[s² + (2ξωn)s + ωn²],    (3.30)

where ωn is the undamped natural frequency and ξ is the damping ratio, with ωn = √b and ξ = a/(2√b). This frequency domain representation is consistent with the time-domain format of a second-order system discussed previously. This is illustrated below:

y¨ + 2ξωny˙ + ωn²y = f(t)
⟹ s²Y(s) + 2ξωnsY(s) + ωn²Y(s) = F(s)
⟹ H(s) = Y(s)/F(s) = ωn²/[s² + (2ξωn)s + ωn²].

The complex poles of this second-order system are given by s = −σ ± jωd, where −σ is the real part of the pole and ±ωd is the imaginary part. The relationship between the pole variables (σ and ωd) and the system parameters (ξ and ωn) can be established by expressing the characteristic equation in terms of the poles:

a(s) = (s − s1)(s − s2)
     = [s − (−σ + jωd)][s − (−σ − jωd)]
     = (s + σ − jωd)(s + σ + jωd)
     = (s + σ)² + ωd²
     = s² + (2σ)s + (σ² + ωd²).

Comparing the coefficients of this characteristic equation with those of the standard one in Equation 3.30 leads to

2σ = 2ξωn ⟹ σ = ξωn    (3.31)

σ² + ωd² = ωn² ⟹ (ξωn)² + ωd² = ωn²
⟹ ωd² = ωn² − (ξωn)²
⟹ ωd = ωn√(1 − ξ²),    (3.32)

where ξ is the damping ratio and ωn is the undamped natural frequency.


Hence, the relationship between the pole variables (σ and ωd) and the system parameters (ξ and ωn) can be summarised as

σ = ξωn and ωd = ωn√(1 − ξ²).

Therefore, the poles are located at a radius ωn in the s-plane and at an angle θ as shown in Figure 3.14, where

tan θ = ωd/σ = ωn√(1 − ξ²)/(ξωn)

⟹ tan θ = √(1 − ξ²)/ξ = sin θ/cos θ

⟹ cos θ = ξ and sin θ = √(1 − ξ²)

⟹ θ = cos⁻¹ξ.

It is important to note that ξ is the ratio of the exponential decay frequency to the natural frequency:

ξ = σ/ωn = (1/τ)/(2πfn) = (1/2π)(Tn/τ),

where Tn is the natural period and τ is the exponential time constant.

where Tn is the natural period and τ is the exponential time constant.

Figure 3.14 A pole in the S-plane.

3.6.1 IMPULSE AND STEP RESPONSES

In this section, the impulse and step responses for a general second-order system are derived, plotted and compared. Consider the general second-order transfer function

H(s) = ωn²/[s² + (2ξωn)s + ωn²].

The general second-order system impulse response h(t) is determined as follows:

h(t) = L⁻¹[Y(s)] = L⁻¹[H(s)U(s)] = L⁻¹[H(s)]    (where U(s) = 1)

     = L⁻¹{ωn²/[s² + (2ξωn)s + ωn²]}

     = L⁻¹{ωn²/[(s + ξωn)² + (ωn√(1 − ξ²))²]}    (completing the square)

     = L⁻¹{[ωn/√(1 − ξ²)] × ωn√(1 − ξ²)/[(s + ξωn)² + (ωn√(1 − ξ²))²]}    (multiplication by one)

     = [ωn/√(1 − ξ²)] e^(−ξωnt) sin[ωn√(1 − ξ²) t]    (using Equation 3.19)

     = [ωn/√(1 − ξ²)] e^(−σt) sin ωdt.

This is the natural response for a general second-order system. It is important to note that this response is a special case of the more generalised response given in Equation 3.23, where d = 0, c = b = ωn² and a = 2ξωn. The step response yu(t) of a general second-order system is obtained in a similar fashion:

yu(t) = L⁻¹[Y(s)] = L⁻¹[H(s)U(s)]
      = L⁻¹[H(s)/s]
      = L⁻¹{ωn²/[s(s² + (2ξωn)s + ωn²)]}

      = L⁻¹{ωn²/[s((s + ξωn)² + (ωn√(1 − ξ²))²)]}    (completing the square)

      = L⁻¹{[ωn² + (ξωn)² − (ξωn)²]/[s((s + ξωn)² + (ωn√(1 − ξ²))²)]}    (adding zero)

      = L⁻¹{[(ξωn)² + (ωn√(1 − ξ²))²]/[s((s + ξωn)² + (ωn√(1 − ξ²))²)]}    (rearranging)

      = 1 − e^(−ξωnt){cos[ωn√(1 − ξ²) t] + [ξωn/(ωn√(1 − ξ²))] sin[ωn√(1 − ξ²) t]}    (by using Equation 3.20)

      = 1 − e^(−σt)[cos ωdt + (σ/ωd) sin ωdt].

This is the unit step response of a general second-order system. The impulse and step responses are compared by analysing their two expressions:

h(t) = [ωn/√(1 − ξ²)] e^(−σt) sin ωdt
yu(t) = 1 − e^(−σt)[cos ωdt + (σ/ωd) sin ωdt].

Plots of the impulse response and the step response for different values of ξ are shown in Figures 3.15 and 3.16. The transient characteristics of the two responses are similar and depend on the value of ξ:

(a) ξ = 0 ⟹ oscillations with a constant amplitude of 1.
(b) ξ = 1 ⟹ least overshoot and fastest settling time.
(c) The steady-state value or final value is 1 for the step response and 0 for the impulse response.

3.6.2 STABILITY OF SECOND-ORDER SYSTEMS

Whether a system is stable or not can be established by studying its natural response. As already established, for a general second-order system this response is of the form

h(t) = (ωn/√(1 − ξ²)) e^(−σt) sin ωd t.

• The response envelope is defined by e^(−σt); hence σ, the negative of the pole's real part, determines the decay rate of the natural response (Figure 3.17).
• If σ < 0, then the pole is in the RHP and the natural response grows without bound, which means the system is unstable.


Figure 3.15 The impulse response for different values of ξ .

Figure 3.16 Step response for different values of ξ.


Figure 3.17 Second-order system response: different damping levels.

Figure 3.18 A stable second-order system.

• If σ > 0, then the pole is in the LHP and the natural response decays, which means the system is stable (Figure 3.18).
• If σ = 0, then the pole is on the imaginary axis. The natural response oscillates with a fixed amplitude, which means the system is marginally stable or has indeterminate stability.


Example 3.22 (a) Discuss the relationship between the poles of the following system transfer function

H(s) = (2s + 1)/(s² + 2s + 5),

and the corresponding impulse response. (b) Find the exact response.

Solution 3.22 (a) Comparing the characteristic equation of the system with the standard second-order equation gives

s² + 2ξωn s + ωn² = s² + 2s + 5.

Equating the coefficients gives

ωn² = 5 ⇒ ωn = √5

2ξωn = 2 ⇒ ξ = 1/√5.

The poles are obtained directly from the characteristic equation as (−1 ± 2j) or as follows:

s₁, s₂ = −σ ± jωd, where σ = ξωn and ωd = ωn√(1 − ξ²)

=⇒ σ = 1 and ωd = 2

=⇒ s₁, s₂ = −1 ± 2j.

(b) The exact impulse response is obtained by breaking the transfer function into partial fractions and using the method of completing the square.

H(s) = (2s + 1)/(s² + 2s + 5)

     = (2s + 1)/(s² + 2s + (2/2)² − (2/2)² + 5)

     = (2s + 1)/(s² + 2s + 1 − 1 + 5)

     = (2s + 1)/((s + 1)² + 2²)

     = 2(s + 1)/((s + 1)² + 2²) − (1/2)·2/((s + 1)² + 2²)

h(t) = 2e⁻ᵗ cos 2t − (1/2)e⁻ᵗ sin 2t.

The impulse response can be plotted using MATLAB as follows:

num = [2 1];
den = [1 2 5];
impulse(num, den)
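As a quick cross-check of Example 3.22 (in Python rather than MATLAB), the quadratic formula reproduces the poles at −1 ± 2j, and the partial-fraction split can be verified numerically at a sample point:

```python
import cmath, math

# Characteristic polynomial s^2 + 2s + 5: find the poles with the quadratic formula.
a, b, c = 1, 2, 5
disc = cmath.sqrt(b**2 - 4*a*c)
poles = [(-b + disc) / (2*a), (-b - disc) / (2*a)]

wn = math.sqrt(c)   # natural frequency, from wn^2 = 5
xi = b / (2 * wn)   # damping ratio, from 2*xi*wn = 2

# The closed-form impulse response found above.
def h(t):
    return 2 * math.exp(-t) * math.cos(2*t) - 0.5 * math.exp(-t) * math.sin(2*t)

# Cross-check the partial-fraction split at a sample point s:
# (2s+1)/(s^2+2s+5) should equal 2(s+1)/((s+1)^2+4) - (1/2)*2/((s+1)^2+4).
s = 3.0
lhs = (2*s + 1) / (s**2 + 2*s + 5)
rhs = 2*(s + 1) / ((s + 1)**2 + 4) - 0.5 * 2 / ((s + 1)**2 + 4)
assert abs(lhs - rhs) < 1e-12
```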


3.6.3

RESPONSE CHARACTERISTICS OF SECOND-ORDER SYSTEMS

Consider the Transfer Function model of a second-order dynamic system without any zeros, which is given by

H(s) = b/(s² + as + b)

     = ωn²/(s² + 2ξωn s + ωn²),

where ωn is the undamped natural frequency and ξ is the damping ratio, with ωn = √b and ξ = a/(2√b). The characteristics of a general unit step response of this second-order system are shown in Figure 3.19 and are defined as follows:

• Rise time (tr): The time from 10% of the final value to 90% of the final value.
• Settling time (ts): The time for system transients to decay to within ±1% of the final value.
• Mp (% overshoot): The maximum overshoot as a percentage of the final value.
• Peak time (tp): The time to reach the maximum value, i.e., the overshoot.

The values of the parameters are summarised as follows:

tr = 1.8/ωn

tp = π/(ωn√(1 − ξ²)) = π/ωd

Figure 3.19 Response specifications for an underdamped second-order system.


Mp = [(h(tp) − h(∞))/h(∞)] × 100% = e^(−πξ/√(1 − ξ²)) × 100%

ts = 4.6/(ξωn) = 4.6/σ   (for ±1% transient decay)

ξ = −ln(Mp/100)/√(π² + ln²(Mp/100))

Note that ts lies in the range 3.91/(ξωn) ↔ 4.74/(ξωn), depending on the decay criterion. For ±2% transient decay,

ts = 4/(ξωn).

There is no precise relationship between tr and ξ. Hence, tr is obtained from experimental data. This is done by plotting normalised rise time versus the damping ratio. The effect of an extra pole is to increase the rise time. It is important to note that the settling time ts is related to the largest time constant of the control system. Which percentage error criterion to use may be determined from the objectives of the system design in question. The time-domain specifications are quite important since most control systems are time-domain systems and must exhibit acceptable time responses. It is important to note that the maximum overshoot as a percentage of the final value, Mp, is a function only of the damping ratio ξ. The plot of Mp versus the damping ratio ξ is shown in Figure 3.20. This maximum overshoot directly indicates the relative stability of the system. From Figure 3.21, the poles of the underdamped system are given by

s = −σd ± jωd = −ξωn ± jωn√(1 − ξ²)

Figure 3.20 Mp (% overshoot) versus damping ratio ξ.
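The overshoot formula and its inverse can be sketched in Python (a cross-check, not part of the original text); the round trip ξ → Mp → ξ recovers the damping ratio exactly:

```python
import math

def overshoot_pct(xi):
    """Percent overshoot Mp for a damping ratio xi (0 < xi < 1)."""
    return math.exp(-math.pi * xi / math.sqrt(1 - xi**2)) * 100

def damping_from_overshoot(mp_pct):
    """Invert the overshoot formula: xi = -ln(Mp/100)/sqrt(pi^2 + ln^2(Mp/100))."""
    ln_mp = math.log(mp_pct / 100)
    return -ln_mp / math.sqrt(math.pi**2 + ln_mp**2)

# Round trip: xi -> Mp -> xi recovers the original damping ratio.
for xi in (0.3, 0.5, 0.7):
    assert abs(damping_from_overshoot(overshoot_pct(xi)) - xi) < 1e-12
```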


Figure 3.21 Pole location for an underdamped second-order system.

Also, it is clear that:

ωn² = σd² + ωd²  and  cos θ = ξ

These equations are critical in the analysis of second-order systems.

Example 3.23 Given the Transfer Function model of a second-order system as

H(s) = b/(s² + as + b),    (3.33)

(a) Calculate the per cent overshoot, settling time, peak time, rise time and pole locations for the following values: a = 4 and b = 25. Also, plot the poles.
(b) Repeat (a) for another two sets of values: a = 8 and b = 37; a = 2 and b = 22.
(c) Using MATLAB, plot the unit step responses of the three systems in one diagram.
(d) Compare the hand-calculated and the experimentally determined (MATLAB) results for the following: overshoot, settling time, peak time and rise time for each of the three systems.

Solution 3.23 (a) and (b) The per cent overshoot, settling time, peak time, rise time and pole locations for the three systems are calculated using the following formulae:

ωn = √b  and  ξ = a/(2√b)

tr = 1.8/ωn

tp = π/(ωn√(1 − ξ²))

Mp = e^(−πξ/√(1 − ξ²)) × 100%

ts = 4/(ξωn)

The results are summarised in Figure 3.22.


Figure 3.22 Calculated parameters of three second-order systems.

(c) The plots of the three unit-step responses are obtained by using the following MATLAB code:

G1 = tf(25, [1 4 25])
G2 = tf(37, [1 8 37])
G3 = tf(22, [1 2 22])
stepplot(G1, G2, G3)
legend('25/(s^2 + 4s + 25)', '37/(s^2 + 8s + 37)', '22/(s^2 + 2s + 22)')

The resultant MATLAB plots for the three second-order systems are displayed in Figure 3.23. (d) From the plots, the experimentally determined parameters are similar to the hand-calculated ones in Figure 3.22. The extraction of the parameters from the plots is left as an exercise for the reader.
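A Python version of the hand calculation (using the underdamped formulae above, not the book's MATLAB code) computes the same specification table for the three systems:

```python
import math

def second_order_specs(a, b):
    """Step-response specs for H(s) = b/(s^2 + a s + b), underdamped case."""
    wn = math.sqrt(b)
    xi = a / (2 * math.sqrt(b))
    wd = wn * math.sqrt(1 - xi**2)
    return {
        "wn": wn,
        "xi": xi,
        "tr": 1.8 / wn,                                               # rise time
        "tp": math.pi / wd,                                           # peak time
        "Mp": math.exp(-math.pi * xi / math.sqrt(1 - xi**2)) * 100,   # % overshoot
        "ts": 4 / (xi * wn),                                          # settling time (±2%)
        "poles": (complex(-xi * wn, wd), complex(-xi * wn, -wd)),
    }

for a, b in [(4, 25), (8, 37), (2, 22)]:
    s = second_order_specs(a, b)
    print(f"a={a}, b={b}: xi={s['xi']:.3f}, Mp={s['Mp']:.1f}%, "
          f"tp={s['tp']:.3f}s, ts={s['ts']:.2f}s, tr={s['tr']:.3f}s")
```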


Figure 3.23 Unit-step responses for three second-order systems.



3.6.4

EFFECTS OF POLE-ZERO LOCATION ON SYSTEM RESPONSE

The transfer function is used to analyse the response of the system it represents.

H(s) = b(s)/a(s) ≡ (a polynomial in s)/(a polynomial in s).

Poles are the values of s such that a(s) = 0; zeros are the values of s such that b(s) = 0. Assuming no common factors, the roots of a(s) = 0, i.e., the values of s at which H(s) is infinite, are called the poles of H(s). The roots of the polynomial b(s) = 0 are called the zeros of the system. Systems with poles in the right-hand plane (RHP) are said to be unstable, while systems with zeros in the RHP are called nonminimum phase systems. The stability of dynamic systems and the significance of nonminimum phase systems will be discussed in detail in later chapters. The effect of poles on the response of a system can be briefly established by considering the following transfer function,

H(s) = b(s)/a(s) = (2s + 1)/(s² + 3s + 2),

where

b(s) = 2(s + 1/2)

a(s) = s² + 3s + 2 = (s + 1)(s + 2).

This means that there are two poles at s = −1 and s = −2, and one zero at s = −1/2. The locations of the poles (×) and zero (◦) are illustrated in Figure 3.24. Using partial fractions, the natural response can be obtained as follows:

H(s) = 2(s + 1/2)/((s + 1)(s + 2))

     = −1/(s + 1) + 3/(s + 2)

h(t) = −e⁻ᵗ + 3e⁻²ᵗ for t ≥ 0
     = 0 for t < 0.

Figure 3.24 Location of zero and poles.


Thus, the natural response is determined by the location of the poles. The fast pole is at s = −2 and the slow pole is at s = −1, as determined by the relative rate of signal decay: 3e⁻²ᵗ dominates the early part of the response, while −e⁻ᵗ dominates the later part. Sketches of the natural system response can be obtained by considering the numerator (2s + 1) and denominator (s² + 3s + 2) in MATLAB as follows:

num = [2 1];
den = [1 3 2];
impulse(num, den)

Zeros exert their influence by modifying the coefficients of the exponential terms whose shape is decided by the poles. This is illustrated by considering a transfer function and its corresponding natural response as follows:

H₁ = 2/((s + 1)(s + 2)) = 2/(s + 1) − 2/(s + 2)

=⇒ h(t) = 2e⁻ᵗ − 2e⁻²ᵗ.    (3.34)

If a zero is introduced near the pole at s = −1 (say at s = −1.1), the new transfer function and its corresponding natural response are given by

H₂ = 2[(s/1.1) + 1]/((s + 1)(s + 2))

   = 2(s + 1.1)/(1.1(s + 1)(s + 2)) = 0.18/(s + 1) + 1.64/(s + 2)

=⇒ h(t) = 0.18e⁻ᵗ + 1.64e⁻²ᵗ.    (3.35)

Comparing the magnitudes of the coefficients of the exponential functions in Equations 3.34 and 3.35, it can be observed that the introduction of a zero at s = −1.1 dramatically reduces the effect of the pole at s = −1. In fact, if a zero is placed exactly at s = −1, the term due to the pole at −1 (i.e., 2e⁻ᵗ) vanishes completely. In general, a zero near a pole reduces the impact of that term in the total response. Some of the effects of zero and pole patterns on the general step response presented in Figure 3.19 can be summarised as follows:

• In a second-order system with no finite zeros:

  tr = 1.8/ωn

  Mp = 5% for ξ = 0.7; 16% for ξ = 0.5; 35% for ξ = 0.3

  ts = 4.6/σ.

• An additional LHP zero will increase the overshoot.
• An additional RHP zero will depress the overshoot (the response starts in the opposite direction to that of the final steady-state value).
• An additional pole in the LHP will increase the rise time.


3.6.5

IMPACT OF ZEROS ON SYSTEM RESPONSE

Adding a zero to a second-order system affects the system response in various ways, depending on the zero. In Section 3.6.4, it was observed that the zeros of a system affect the residue or amplitude of a response component but do not affect the nature of the response, be it an exponential, a damped sinusoid, or a pure sinusoid. In the LHP, the zero acts as a simple gain factor and does not change the relative amplitudes of the response components. There is another, more general approach to assessing the effect of a zero on the response of a dynamic system. Let Yo(s) be the response of a system with a transfer function G(s), which has 1 as the numerator and a polynomial b(s) as the denominator. If a zero is added to the transfer function, yielding (s + a)G(s), the Laplace transform Y(s) of the response of this system with a zero is obtained as follows:

G(s) = 1/b(s)

Yo(s) = G(s)U(s) =⇒ Yo(s) = U(s)/b(s)

Y(s) = [(s + a)/b(s)]U(s) = (s + a)U(s)/b(s)

=⇒ Y(s) = (s + a)Yo(s)

=⇒ Y(s) = sYo(s) + aYo(s)    (3.36)

where −a is the zero, yo(t) is the system response with no zeros and Yo(s) is its Laplace transform; y(t) is the system response with a zero and Y(s) is its Laplace transform. From Equation 3.36, it is evident that, when there is a zero, the time response has two parts: a scaled component of the original response without zeros, and a derivative component of the original response without zeros. From analysis of Equation 3.36, the following can be established:

• If a is very large, the Laplace transform of the response is approximately equal to aYo(s), i.e., a scaled version of the original response without zeros.
• If a is not very large, the response has an additional component sYo(s), consisting of the derivative of the original response without zeros.
• As a becomes smaller, the derivative term contributes more to the response and has a greater effect. The derivative term is typically positive at the start of a step response.
• Thus, for small values of a, more overshoot is expected in second-order systems, because the derivative term is additive around the first overshoot.

3.6.6

NONMINIMUM PHASE SYSTEMS

When a is negative, the zero −a is positive and hence lies in the RHP. This leads to what is called a nonminimum phase system. For this case, it is clear from Equation 3.36 that the derivative term, which is typically positive initially, will be of the opposite sign from the scaled response term. Thus, if the derivative term sYo(s) is larger than the scaled response aYo(s), the response will initially follow the derivative in the opposite direction from the scaled response. This means the system response initially goes in the opposite direction to its final direction or settling point. This is illustrated in Figure 3.25, where the system response y(t) is plotted against time. For illustration, if a wheeled mobile robot were a nonminimum phase system, it would initially go backwards when commanded to go forward.


Figure 3.25 Step response of a nonminimum phase system.

Example 3.24 RHP zeros in a second-order system have an interesting impact on the system response. Consider the transfer function of an operational amplifier circuit, with a zero equal to 2 in the RHP, as depicted by the following equation:

G(s) = Y(s)/R(s) = −(s − 2)/(s + 2).

For a unit step input r(t), find:

(a) Y(s), the Laplace transform of the system response.
(b) Yo(s), the Laplace transform of the system response without the zero.
(c) Show that Y(s) = sYo(s) − 2Yo(s).
(d) Explain the impact of zeros in second-order systems in general. What is the implication of the case where the zero is in the RHP?
(e) Determine the system responses y(t) and yo(t).
(f) Plot y(t) and −2yo(t) and explain what the two graphs illustrate.

Solution 3.24 (a) The Laplace transform Y(s) of the system response for a unit step input r(t) is obtained as follows:

Y(s) = R(s)G(s) = −(1/s)·(s − 2)/(s + 2)

(b) The Laplace transform Yo(s) of the system response without the zero, for a unit step input r(t), is given by removing the zero at s = 2 such that:

Yo(s) = −(1/s)·1/(s + 2)




(c) Manipulating the expression for Y(s) obtained in (a) leads to

Y(s) = −1/(s + 2) + 2·1/(s(s + 2))

     = s[−(1/s)·1/(s + 2)] − 2[−(1/s)·1/(s + 2)]

     = sYo(s) − 2Yo(s)

Thus the expression is proven:

Y(s) = sYo(s) − 2Yo(s)    Q.E.D.

where the first term sYo(s) is the derivative component of the system without a zero, while the second term −2Yo(s) is the scaled component of the system without a zero.

(d) See Sections 3.6.5 and 3.6.6.

(e) The time response y(t) of the system with a zero in the RHP is obtained by taking inverse Laplace transforms as follows:

Y(s) = −1/(s + 2) + 2/(s(s + 2))

y(t) = −e⁻²ᵗ + 2·(1/2)(1 − e⁻²ᵗ)

     = −e⁻²ᵗ + 1 − e⁻²ᵗ

=⇒ y(t) = 1 − 2e⁻²ᵗ

The time response yo(t) of the system without a zero is obtained by taking inverse Laplace transforms as follows:

Yo(s) = −1/(s(s + 2))

yo(t) = −(1/2)(1 − e⁻²ᵗ)

=⇒ yo(t) = −1/2 + (1/2)e⁻²ᵗ

(f) Plot the following curves on the same graph:

• −2yo(t), which is given by 1 − e⁻²ᵗ (curve 1)
• y(t), which is given by 1 − 2e⁻²ᵗ (curve 2)

In Figure 3.26, curve 1 is the scaled response of the system without a zero and curve 2 is the response of the system with a zero in the RHP. The system's response with the zero in the RHP starts by going in the opposite direction to its final response. It is important to emphasise that nonminimum phase systems are characterised by at least one zero in the RHP. Such a zero can be approximated by a time delay function, since e^(−sτ) ≈ 1 − sτ by Taylor's approximation, where the RHP zero is given by s = 1/τ. In fact, a dynamic system with a time delay is a special type of nonminimum phase system. The impact of time delays on various aspects of control systems is explored and discussed in a number of chapters in this book.
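The two closed-form responses from part (f) can be tabulated in Python to show the initial undershoot of the nonminimum phase response:

```python
import math

def y(t):
    """Step response with the RHP zero: y(t) = 1 - 2 e^{-2t}."""
    return 1 - 2 * math.exp(-2 * t)

def minus_2yo(t):
    """Scaled response without the zero: -2*yo(t) = 1 - e^{-2t}."""
    return 1 - math.exp(-2 * t)

# The nonminimum phase response starts at -1 (the opposite direction) before
# settling to the same final value of +1 as the scaled no-zero response.
for t in [0.0, 0.1, 0.3, 1.0, 5.0]:
    print(f"t={t:4.1f}  y={y(t):+.3f}  -2yo={minus_2yo(t):+.3f}")
```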


Figure 3.26 Step response of a nonminimum phase system.

3.6.7

IMPACT OF NONLINEARITIES ON SYSTEM RESPONSE

The effects of nonlinearities on the system response can be examined qualitatively. When physical systems have nonlinearities, such as saturation, dead zone, and backlash, the system response is affected as shown in Figures 3.27–3.29 for the case of a DC motor. The responses were obtained

Figure 3.27 Effect of saturation on system response.


Figure 3.28 Effect of deadzone on system response.

Figure 3.29 Effect of backlash on system response.


using Simulink, a simulation software package integrated with MATLAB to provide a graphical user interface (GUI). For example, the effect of backlash on the system response is obtained by simulation with and without the backlash nonlinearity.

3.6.8

IMPACT OF TIME DELAY ON SYSTEM RESPONSE

As explained in Chapter 2, time delays or dead-times often occur in the signals that flow between components of a dynamic system. Such time delays exist in thermal, hydraulic, and pneumatic systems. For example, when a tap is turned on at one end of a long hose pipe, it takes some time before the water appears at the far end. Also, when the feed to a conveyor belt is changed, it takes some time to observe the change at the other end. As outlined in Chapter 2, a pure time delay function can be modelled by considering a general system g(t) delayed by a time τ, such that the system is represented by g(t − τ). Taking Laplace transforms of this function leads to

L[g(t − τ)] = e^(−sτ)G(s) = Gτ(s)G(s).

Hence, the Laplace transform of a pure time delay is given by Gτ(s) = e^(−sτ). Figure 3.30 illustrates the Block Diagram model of a general plant with a time delay function. A specific case is shown in Figure 3.31, and is represented as follows:

GD = e^(−sτ)G(s)

   = e^(−3.4s)·5/(s + 1)

   = 5e^(−3.4s)/(s + 1).

The unit step responses for the system without the delay and the system with the delay are shown in Figure 3.32. These curves are consistent with the curve obtained by taking the inverse Laplace transform of the original plant and shifting it by 3.4 seconds to the right for the delayed response:

L⁻¹[5/(s + 1)] = 5e⁻ᵗ
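A minimal Python sketch of the delayed plant's unit step response (using the fact that the step response of 5/(s + 1) is 5(1 − e⁻ᵗ)) illustrates the 3.4-second dead time:

```python
import math

TAU = 3.4  # dead time in seconds

def step_no_delay(t):
    """Unit step response of G(s) = 5/(s+1): y(t) = 5(1 - e^{-t})."""
    return 5 * (1 - math.exp(-t))

def step_with_delay(t):
    """Unit step response of GD(s) = e^{-3.4s} * 5/(s+1): same curve shifted right."""
    return 0.0 if t < TAU else step_no_delay(t - TAU)

# The delayed output is identically zero until t = 3.4 s, then tracks the
# undelayed response shifted by the dead time.
for t in [0.0, 2.0, 3.4, 4.4, 10.0]:
    print(f"t={t:5.1f}  no delay: {step_no_delay(t):5.2f}  with delay: {step_with_delay(t):5.2f}")
```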

Figure 3.30 A dynamic system with a time delay function.

Figure 3.31 A specific dynamic system with a time delay function.


Figure 3.32 Unit step responses of the system with and without a time delay function.

3.7

WORKED EXAMPLES OF DYNAMIC SYSTEM RESPONSE

Example 3.25 Consider the following first-order systems, each with a step input.

Find the output response c(t) for each of the systems. Also find the time constant τ, rise time tr and settling time ts for each case.


Solution 3.25 (a) From the first block diagram, with a unit step input, it follows that

C(s) = (1/s)·4/(s + 4)

     = 4/(s(s + 4))

=⇒ C(s) = 1/s − 1/(s + 4)    (using partial fractions)

Taking the inverse Laplace transform of C(s) produces the time response

c(t) = 1 − e⁻⁴ᵗ

The following parameters can also be deduced:

τ = 1/a = 1/4

tr = 2.2/a = 2.2/4

ts = 4/a = 4/4 = 1

(b) From the second block diagram, with a unit step input, it follows that

C(s) = 15/(s(s + 15)) = 1/s − 1/(s + 15)

Taking the inverse Laplace transform leads to

c(t) = 1 − e⁻¹⁵ᵗ

Also,

τ = 1/a = 1/15

tr = 2.2/a = 2.2/15

ts = 4/a = 4/15
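The first-order constants follow directly from the pole location a; a short Python check (a cross-check, not part of the original solution) confirms the 63.2% and 98% interpretations of τ and ts:

```python
import math

def first_order_specs(a):
    """For C(s)/R(s) = a/(s + a): time constant, rise time and settling time."""
    return {"tau": 1 / a, "tr": 2.2 / a, "ts": 4 / a}

def step_response(t, a):
    """Unit step response c(t) = 1 - e^{-a t}."""
    return 1 - math.exp(-a * t)

# At one time constant the response reaches 1 - 1/e ≈ 0.632 of the final value,
# and at ts = 4/a it is within 2% of the final value.
for a in (4, 15):
    spec = first_order_specs(a)
    assert abs(step_response(spec["tau"], a) - (1 - 1 / math.e)) < 1e-12
    assert step_response(spec["ts"], a) > 0.98
```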

Example 3.26 The cruise control system for a car can be modelled as a single translational mass, as shown in Figure 3.33. The input to the car system is the applied engine force f(t), which produces horizontal motion x(t) at speed v(t) and acceleration v̇(t). The frictional force bv(t), which is proportional to the car's speed, opposes the car's motion.

(a) Show that the car speed cruise control Input-Output model is given by

v̇(t) + (b/m)v(t) = (b/m)vr(t),

where the reference or desired constant speed is given by

vr(t) = f(t)/b.


Figure 3.33 Car cruise control: a single mass model.

(b) Determine the Transfer Function model of the car cruise control system, and explain why the model is considered a first-order system.
(c) If b = 2Ns/m and m = 1kg, show that the car speed response is given by

v(t) = vr(t)(1 − e⁻²ᵗ).

(d) If vr(t) = 1m/s, draw a sketch of the car's speed response and deduce the following system parameters: time constant τ, rise time tr and settling time ts.

Solution 3.26 (a) The cruise control system for a car can be modelled as a single translational mass, as shown in Figure 3.33. The input to the car system is the applied engine force f(t), which produces horizontal motion x(t) at speed v(t) and acceleration v̇(t). The frictional force bv(t), which is proportional to the car's speed, opposes the car's motion. Using Newton's second law, the car's equation of motion is given by

mv̇ + bv = f

v̇ + (b/m)v = f/m.    (3.37)

The applied force f(t) can be related to the desired (or reference) speed vr(t) by considering the steady-state conditions, i.e., when all derivatives are equal to zero. The reference speed is equal to the steady-state speed vss(t), which is also the maximum attainable speed when a force f(t) is applied to the car.

(b/m)vss = f/m

⇔ vss = f/b = vr.

The input-output Equation 3.37 can be rewritten with the reference speed vr(t) as the input as follows:

v̇ + (b/m)v = (b/m)vr

⇔ mv̇ + bv = bvr.    (3.38)

This is a much more intuitive representation of the input-output differential equation, as it clearly shows the relationship between the desired output vr(t) and the actual output v(t), where the two quantities are of the same type and dimension.

(b) The Transfer Function model is obtained by taking the Laplace transform of the input-output Equation 3.38, assuming zero initial conditions.

v̇ + (b/m)v = (b/m)vr


=⇒ sV(s) + (b/m)V(s) = (b/m)Vr(s)

=⇒ V(s)(s + b/m) = (b/m)Vr(s)

=⇒ H(s) = V(s)/Vr(s) = (b/m)/(s + b/m).    (3.39)

This is the Transfer Function model of the system with respect to the reference or desired car speed. It is a first-order system because its characteristic equation has only one pole (root), which is located at s = −b/m. Also, the system's Input-Output model is a first-order differential equation.

(c) The system response (with b = 2Ns/m and m = 1kg) is obtained from the transfer function and the Laplace transform of the input:

v(t) = L⁻¹[H(s)Vr(s)]

     = L⁻¹[ (b/m)vr / (s(s + b/m)) ]

     = (bvr/m) L⁻¹[ 1/(s(s + b/m)) ]

     = vr L⁻¹[ 2/(s(s + 2)) ]

     = vr L⁻¹[ C1/s + C2/(s + 2) ].

The coefficients are obtained by the cover-up method as C1 = 1 and C2 = −1. Hence,

v(t) = vr L⁻¹[ 1/s − 1/(s + 2) ]

     = vr(1 − e⁻²ᵗ).

(d) The car cruise speed system response (with vr = 1m/s) can be sketched as illustrated in Figure 3.34. The system characteristics are derived as follows:

• Time constant: 1 − 1/e = 1 − e^(−2τ) =⇒ −1 = −2τ·(−1)⁻¹ wording aside, −2τ = −1 =⇒ τ = 1/2


Figure 3.34 Response specifications for the first-order system.



• Rise time: tr = ln(0.1)/(−2) − ln(0.9)/(−2) = 2.31/2 − 0.11/2 = 2.2/2 = 1.1
• Settling time: ts = ln(0.02)/(−2) ≈ 4/2 = 2
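The same three characteristics can be obtained in Python by inverting 1 − e^(−2t) analytically (an editorial cross-check, not part of the original solution):

```python
import math

def v(t, vr=1.0):
    """Car speed response v(t) = vr(1 - e^{-2t}) for b = 2 Ns/m, m = 1 kg."""
    return vr * (1 - math.exp(-2 * t))

def time_to_reach(level):
    """Invert 1 - e^{-2t} = level analytically: t = -ln(1 - level)/2."""
    return -math.log(1 - level) / 2

tau = time_to_reach(1 - 1 / math.e)              # time constant: 63.2% point
tr = time_to_reach(0.9) - time_to_reach(0.1)     # 10%-90% rise time
ts = time_to_reach(0.98)                         # ±2% settling time

print(f"tau = {tau:.3f} s, tr = {tr:.3f} s, ts = {ts:.3f} s")
```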

Example 3.27 Consider the spring, mass and damper system depicted in Figure 3.35.

Figure 3.35 Spring, mass and damper system.


(a) Find the Input-Output model of the system.
(b) Find the Transfer Function model of the system and explain why it is called a second-order system.
(c) For the specific case where m = 5kg, k = 10N/m and b = 5Ns/m:
(i) Find the specific Transfer Function model.
(ii) Find ζ, ωn, Mp (%OS), ts, tp and tr.

Solution 3.27 (a) The free-body diagram of the system is given in Figure 3.36. From the free-body diagram of the system, it follows that

mẍ(t) + bẋ(t) + kx(t) = f(t)    (3.40)

This differential equation (Equation 3.40) is the Input-Output model of the dynamic system, where the input is f(t) and the output is x(t).

(b) The Transfer Function model is determined by taking Laplace transforms of the Input-Output model as follows:

mẍ(t) + bẋ(t) + kx(t) = f(t)

=⇒ ms²X(s) + bsX(s) + kX(s) = F(s)

=⇒ X(s)(ms² + bs + k) = F(s)

G(s) = X(s)/F(s)

=⇒ G(s) = 1/(ms² + bs + k)

This is the general Transfer Function model of the dynamic system.

(c)(i) For the specific case where m = 5kg, k = 10N/m and b = 5Ns/m, the specific Transfer Function model becomes

G(s) = 1/(5s² + 5s + 10)

     = (1/5)/(s² + s + 2)

Figure 3.36 Mass, spring and damper system: free body diagram.


This presents the system’s Transfer Function model in the standard second-order form. It is called a second-order system because, clearly, its characteristic equation has two poles (roots) and also the Input-Output model is a second-order differential equation. (c)(ii) The coefficients of the system’s characteristic equation are compared with those of the characteristic equation of a general second-order system which is given by s2 + 2ζ ωn + ωn 2 = 0. This leads to the following results 2ζ ωn = 1 and ωn2 = 2 It therefore follows that: √ 2 = 1.41 1 = 0.3536 ζ= 2ωn 4 ts = = 8.10s ζ ωn

ωn =

tp = ωn

π p = 2.37s 1−ζ2

e−ζ π M p (%OS) = p × 100 = 30.49% 1−ζ2 ωn × tr = 1.76ζ 3 − 0.417ζ 2 + 1.039ζ + 1 =⇒ tr = 0.99s Example 3.28 If one of the two general poles of an underdamped second-order system is given by s = −σ d + jω d (a) Show that the natural frequency ω n is given by q ωn = (ωn 2 + σ 2d ). (b) Also show that cosθ = ζ π tp = ωd 4 ts = , σd where ζ is the damping ratio, t p is the peak time and ts is the settling time. (c) For such a second-order system, consider the case where the plant transfer function is given by G(s) =

1 . s(s + 6)

Design the controller K so that the system responds with 15% overshoot.


Solution 3.28 (a) The general pole of an underdamped second-order system is located in the complex plane as shown in Figure 3.37. It then follows that:

s = −σd + jωd

ωd = ωn√(1 − ζ²)  =⇒ ωd² = (1 − ζ²)ωn²

σd = ζωn          =⇒ σd² = ζ²ωn²

=⇒ tan θ = ωd/σd

ωd² + σd² = (1 − ζ² + ζ²)ωn² = ωn²

=⇒ ωn = √(ωd² + σd²)    Q.E.D.

(b) From Figure 3.37, the following can be deduced:

tan θ = ωd/σd = ωn√(1 − ζ²)/(ζωn) = √(1 − ζ²)/ζ

=⇒ tan²θ = (1 − ζ²)/ζ²

Furthermore, from trigonometric identities, it follows that:

tan²θ = (1 − cos²θ)/cos²θ

=⇒ cos²θ = ζ²

=⇒ cos θ = ζ    Q.E.D.

Figure 3.37 General pole in complex plane.


This result can also be obtained from Figure 3.37:

cos θ = σd/√(σd² + ωd²) = σd/ωn = ζωn/ωn = ζ    Q.E.D.

The peak time (tp) and settling time (ts) are, respectively, obtained as follows:

tp = π/(ωn√(1 − ζ²)) = π/ωd    Q.E.D.

ts = 4/(ωnζ) = 4/σd    (for ±2% transient decay)    Q.E.D.

(c) The value of K to give an Mp (%OS) of 15% is determined by first finding ζ and then ωn as follows:

15% = e^(−ζπ/√(1 − ζ²)) × 100%

=⇒ 0.15 = e^(−ζπ/√(1 − ζ²))

=⇒ ln(0.15) = −ζπ/√(1 − ζ²)

=⇒ ln²(0.15) = ζ²π²/(1 − ζ²)

=⇒ ζ²(π² + ln²(0.15)) = ln²(0.15)

=⇒ ζ² = ln²(0.15)/(π² + ln²(0.15)) = 0.2673

=⇒ ζ = 0.517.

The natural frequency ωn is then obtained from the given plant transfer function by considering the characteristic equation of the second-order system, given by

s² + 6s + K = 0.

This equation is then compared with the standard second-order characteristic equation

s² + 2ζωn s + ωn² = 0.


From equating the coefficients of s, it follows that

2ζωn = 6

=⇒ ωn = 6/(2 × 0.517) = 5.803.

Thereafter, equating the coefficients of s⁰ (the constants), it follows that

K = ωn² = 33.68.

This is the value of the controller K such that the system responds with 15% overshoot.

Example 3.29 Consider the following RLC circuit in Figure 3.38 and find:

(a) The time domain equation relating i(t) and v1(t).
(b) The time domain equation relating i(t) and v2(t).
(c) The Transfer Function model V2(s)/V1(s), assuming zero initial conditions.
(d) The damping ratio ξ and the natural frequency ωn.
(e) The value of R such that the overshoot Mp = 25%, where v1(t) is a unit step input, L = 10mH and C = 4µF.

Solution 3.29 The KVL and KCL are used to set up the equations.

(a) From applying the KVL in the input loop, it follows that:

v1(t) = L di/dt + Ri(t) + (1/C)∫i(t)dt.

(b) From applying the KVL in the output loop, it follows that:

v2(t) = (1/C)∫i(t)dt.

(c) The transfer function V2(s)/V1(s) is obtained by first taking the Laplace transforms of the expressions for v1(t) and v2(t):

V1(s) = (sL + R + 1/(sC))I(s)

V2(s) = (1/(sC))I(s).

Figure 3.38 An RLC electrical circuit.


The transfer function is the ratio of the transforms:

H(s) = V2(s)/V1(s)

     = [(1/(sC))I(s)] / [(sL + R + 1/(sC))I(s)]

     = (1/(LC)) / (s² + (R/L)s + 1/(LC))    (3.41)

     ≡ ωn²/(s² + 2ξωn s + ωn²).    (3.42)

This is the Transfer Function model of the system and its standard form, respectively.

(d) The damping ratio ξ and the natural frequency ωn are obtained by comparing the coefficients of Equation 3.41 with those of the standard form of a general second-order system shown in Equation 3.42:

ωn² = 1/(LC)  ⇒ ωn = √(1/(LC))

2ξωn = R/L  ⇒ ξ = (R/L)·(1/(2ωn)) = (R/2L)√(LC)

⇒ ξ = (R/2)√(C/L).    (3.43)

(e) Given that the overshoot Mp = 25%, a value of the damping ratio ξ can be obtained from the expression for the overshoot as follows:

25% = e^(−ξπ/√(1 − ξ²)) × 100%

=⇒ 0.25 = e^(−ξπ/√(1 − ξ²))

=⇒ ln(0.25) = −ξπ/√(1 − ξ²)

=⇒ ln²(0.25) = ξ²π²/(1 − ξ²)

=⇒ ξ²(π² + ln²(0.25)) = ln²(0.25)


=⇒ ξ² = ln²(0.25)/(π² + ln²(0.25))

=⇒ ξ² = 0.16298

=⇒ ξ = 0.4037.

The value of R is then determined by substituting this value of the damping ratio into Equation 3.43. Hence, it follows that:

ξ = (R/2)√(C/L)

⇔ R = 2ξ√(L/C) = (2)(0.4037)√(10 × 10⁻³ / 4 × 10⁻⁶)

⇒ R = 40.37Ω.
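The design chain Mp → ξ → R can be reproduced in Python with the given component values (a numerical cross-check of Solution 3.29):

```python
import math

# Component values from Example 3.29 (L = 10 mH, C = 4 uF) and the 25% overshoot target.
L, C = 10e-3, 4e-6
Mp = 25.0

# Damping ratio from the overshoot formula, then R from xi = (R/2)*sqrt(C/L).
ln_mp = math.log(Mp / 100)
xi = -ln_mp / math.sqrt(math.pi**2 + ln_mp**2)
R = 2 * xi * math.sqrt(L / C)

wn = 1 / math.sqrt(L * C)   # natural frequency in rad/s

print(f"xi = {xi:.4f}, R = {R:.2f} ohms, wn = {wn:.0f} rad/s")
```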

Example 3.30 (a) Derive the State-Variable Matrix model of the electrical circuit in Figure 3.39. (b) Derive the Input-Output model (a second-order differential equation) of the same circuit. (c) Given the values L = 1, R = 2, C = 1, find the specific State-Variable Matrix and Input-Output models. (d) Find the dynamic system response of the system assuming: u(t) = δ(t) (a unit impulse function), y(0) = 1 and ẏ(0) = 0. Plot this response.

Solution 3.30 (a) Choose the voltage across the capacitor y(t) and the current flowing through the inductor i(t) as the state variables. For the capacitor, it follows that:

i(t) = C dy/dt =⇒ ẏ = i(t)/C.

Applying the KVL for the circuit:

u − L di/dt − Ri − y = 0

di/dt = u/L − (R/L)i − y/L.

The State-Variable Matrix model takes the form

[i̇]   [−R/L  −1/L] [i]   [1/L]
[ẏ] = [ 1/C    0  ] [y] + [ 0 ] u

Figure 3.39 An RLC electrical circuit.

y = [0  1][i  y]ᵀ + [0]u.

This is the State-Variable Matrix model.

(b) The input-output differential equation is given by

di/dt = u/L − (R/L)i − y/L

=⇒ d(Cẏ)/dt = u/L − (R/L)(Cẏ) − y/L,  where i(t) = Cẏ

=⇒ C(dẏ/dt) + (CR/L)ẏ + y/L = u/L

=⇒ dẏ/dt + (R/L)ẏ + y/(CL) = u/(CL).

This is the Input-Output model for the circuit.

(c) Given the values L = 1, R = 2, C = 1, the specific models can be derived as follows:

[i̇]   [−2  −1] [i]   [1]
[ẏ] = [ 1   0 ] [y] + [0] u    (3.44)

y = [0  1][i  y]ᵀ + [0]u.    (3.45)

This is the specific State-Variable Matrix model. The input-output differential equation reduces to

di/dt = −2i − y + u,  where i = dy/dt

=⇒ ÿ = −2ẏ − y + u

=⇒ ÿ + 2ẏ + y − u = 0.

This is the specific Input-Output model of the circuit.

(d) Applying the Laplace transform, assuming u(t) = δ(t) (a unit impulse function), y(0) = 1 and ẏ(0) = 0, leads to

s²Y(s) − sy(0) − ẏ(0) + 2[sY(s) − y(0)] + Y(s) − 1 = 0

s²Y(s) + 2sY(s) + Y(s) = sy(0) + ẏ(0) + 2y(0) + 1

(s² + 2s + 1)Y(s) = s + 3


Therefore,

Y(s) = (s + 3)/(s² + 2s + 1)
     = (s + 3)/(s + 1)²
     = [(s + 1) + 2]/(s + 1)²
     = 1/(s + 1) + 2/(s + 1)².    (3.46)

The time domain system response is obtained by taking the inverse Laplace transform of the function in Equation 3.46, and then plotting the result:

y(t) = L⁻¹[1/(s + 1) + 2/(s + 1)²]
     = e^(−t) + 2te^(−t)
     = e^(−t)(1 + 2t).

Using MATLAB's initial command the same result can be obtained. The initial state vector is

[i₀  y₀]ᵀ = [Cẏ₀  y₀]ᵀ = [0  1]ᵀ.

The syntax for the MATLAB function is initial(A, B, C, D, X0), where X0 is the vector of initial values [0 1]ᵀ, and the State-Variable Matrix model is depicted in Equations 3.44 and 3.45. The resultant plot is shown in Figure 3.40.

Example 3.31 Find the Transfer Function model if the system input and output are respectively given by the following equations:

u(t) = e^(−t)

y(t) = 2 − 3e^(−t) + e^(−2t) cos 2t

Solution 3.31 Taking Laplace transforms of the input and output equations separately leads to

U(s) = 1/(s + 1)

Y(s) = 2/s − 3/(s + 1) + (s + 2)/[(s + 2)² + 4]

Hence the Transfer Function model is given by

T(s) = Y(s)/U(s)

= [2/s − 3/(s + 1) + (s + 2)/((s + 2)² + 4)] ÷ [1/(s + 1)]

⇒ T(s) = (s² + 2s + 16)/[s(s² + 4s + 8)].
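The algebra behind T(s) can be cross-checked with a short script. The following Python sketch (Python rather than the MATLAB used elsewhere in this text; poly_mul, poly_add and poly_trim are ad hoc helpers, not library functions) collects Y(s) over the common denominator s(s + 1)(s² + 4s + 8) with exact rational arithmetic; multiplying by (s + 1) then cancels that factor, leaving T(s) = (s² + 2s + 16)/[s(s² + 4s + 8)]:

```python
from fractions import Fraction

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += Fraction(ai) * Fraction(bj)
    return out

def poly_add(a, b):
    """Add two polynomials, padding the shorter coefficient list with zeros."""
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + [Fraction(x) for x in a]
    b = [Fraction(0)] * (n - len(b)) + [Fraction(x) for x in b]
    return [x + y for x, y in zip(a, b)]

def poly_trim(a):
    """Drop leading zero coefficients."""
    i = 0
    while i < len(a) - 1 and a[i] == 0:
        i += 1
    return a[i:]

# Y(s) = 2/s - 3/(s+1) + (s+2)/((s+2)^2 + 4) over the common denominator
# s(s+1)(s^2+4s+8); collect the numerator.
quad = [1, 4, 8]  # (s+2)^2 + 4 = s^2 + 4s + 8
num_Y = poly_add(
    poly_add(poly_mul([2], poly_mul([1, 1], quad)),    # 2(s+1)(s^2+4s+8)
             poly_mul([-3], poly_mul([1, 0], quad))),  # -3s(s^2+4s+8)
    poly_mul([1, 2], poly_mul([1, 0], [1, 1])))        # (s+2)*s*(s+1)

# T(s) = Y(s)/U(s) = Y(s)(s+1): the (s+1) factor cancels in the denominator,
# leaving T(s) = num_T / [s(s^2+4s+8)].
num_T = poly_trim(num_Y)          # expect s^2 + 2s + 16
den_T = poly_mul([1, 0], quad)    # s(s^2+4s+8) = s^3 + 4s^2 + 8s
```

Running this reproduces the numerator [1, 2, 16] and denominator [1, 4, 8, 0] of the transfer function derived above.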

Figure 3.40 MATLAB plot for the RLC circuit.

This is the Transfer Function model of the system.

Example 3.32 Consider the following Transfer Function models for three dynamic systems:

(a) H1(s) = 5s/(s² + 2s + 5)

(b) H2(s) = 5/(s² + 2s + 5)

(c) H3(s) = ωn²/(s² + 2ξωns + ωn²).

Find the output for each system if the input is a sawtooth signal as shown in Figure 3.41.

Solution 3.32 (a) The sawtooth input signal can be written as the sum of a ramp function and delayed ramp functions, using the superposition principle, as follows:

u(t) = r(t) − 2r(t − 1) + 2r(t − 2) − 2r(t − 3),

where r(t) = t is the ramp function. Thus, only the response to a ramp function, yr(t), has to be determined; the total response y(t) is then expressed in terms of the ramp response using the superposition principle such that:

y(t) = yr(t) − 2yr(t − 1) + 2yr(t − 2) − 2yr(t − 3).
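The decomposition can be checked pointwise with a few lines of code. In this Python sketch (illustrative, not part of the text's MATLAB examples), u(t) is built from shifted unit ramps; evaluating it confirms the waveform of Figure 3.41, which rises to 1 at t = 1, returns to 0 at t = 2, and repeats:

```python
def r(t):
    """Unit ramp: r(t) = t for t >= 0, zero before the ramp starts."""
    return t if t > 0 else 0.0

def u(t):
    """Sawtooth of Figure 3.41 as a superposition of shifted ramps."""
    return r(t) - 2*r(t - 1) + 2*r(t - 2) - 2*r(t - 3)
```

For example, u(1) = 1, u(2) = 0 and u(3) = 1, matching the peaks and zero crossings of the plotted signal on [0, 4].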


Figure 3.41 Sawtooth input signal.

To determine the ramp response yr(t), consider the general transfer function equation:

H(s) = Y(s)/U(s) ⇔ Y(s) = H(s)U(s)

⇒ Yr(s) = H(s)R(s)

⇒ Yr(s) = H(s) × 1/s²

⇔ yr(t) = L⁻¹[H(s) × 1/s²].

This is true for all three transfer functions (a), (b), and (c); hence the corresponding ramp responses are given by the following equations, respectively:

yr(t) = L⁻¹[5s/(s² + 2s + 5) × 1/s²]    (3.47)

yr(t) = L⁻¹[5/(s² + 2s + 5) × 1/s²]    (3.48)

yr(t) = L⁻¹[ωn²/(s² + 2ξωns + ωn²) × 1/s²].    (3.49)

Thus, the remaining task is determining the three inverse Laplace transforms given in Equations 3.47, 3.48, and 3.49.

(a) Consider Equation 3.47 and express the Laplace function in partial fractions:

5s/(s² + 2s + 5) × 1/s² = 5/[s(s² + 2s + 5)]
                        = A/s + (Bs + C)/(s² + 2s + 5).    (3.50)

The following coefficients are obtained: A = 1, B = −1, C = −2.


Using these coefficients and then taking the inverse Laplace transform of Equation 3.50 leads to

yr(t) = L⁻¹[5s/(s² + 2s + 5) × 1/s²]
      = L⁻¹[1/s − (s + 2)/(s² + 2s + 5)]
      = L⁻¹[1/s] − L⁻¹[(s + 1)/(s² + 2s + 5)] − L⁻¹[1/(s² + 2s + 5)]
      = L⁻¹[1/s] − L⁻¹[(s + 1)/((s + 1)² + 4)] − L⁻¹[(1/2) × 2/((s + 1)² + 4)]
      = 1 − e^(−t) cos 2t − (1/2)e^(−t) sin 2t.

As explained before, the complete response y(t) is then expressed in terms of this ramp response as follows:

y(t) = yr(t) − 2yr(t − 1) + 2yr(t − 2) − 2yr(t − 3).

(b) Consider Equation 3.48 and express the Laplace function in partial fractions:

5/(s² + 2s + 5) × 1/s² = 5/[s²(s² + 2s + 5)]
                       = C1/s + C2/s² + (C3s + C4)/(s² + 2s + 5).    (3.51)

The following coefficients are obtained:

C1 = −2/5, C2 = 1, C3 = 2/5, C4 = −1/5.

These coefficients and the inverse Laplace transform of Equation 3.51 can be used to obtain the response yr(t). This is carried out as follows:

yr(t) = L⁻¹[−(2/5)/s + 1/s² + ((2/5)s − 1/5)/((s + 1)² + 4)]
      = L⁻¹[−(2/5)/s + 1/s² + (2/5)(s + 1)/((s + 1)² + 4) − (3/10) × 2/((s + 1)² + 4)]




      = −2/5 + t + (2/5)e^(−t) cos 2t − (3/10)e^(−t) sin 2t.

As done before, the complete response y(t) is then expressed in terms of this ramp response as follows:

y(t) = yr(t) − 2yr(t − 1) + 2yr(t − 2) − 2yr(t − 3).

(c) Consider Equation 3.49 and express the Laplace function in partial fractions:

ωn²/(s² + 2ξωns + ωn²) × 1/s² = ωn²/[s²(s² + 2ξωns + ωn²)]
                              = C1/s + C2/s² + (C3s + C4)/(s² + 2ξωns + ωn²).    (3.52)

The following coefficients are obtained:

C1 = −2ξ/ωn, C2 = 1, C3 = 2ξ/ωn, C4 = 4ξ² − 1.

Using these coefficients and taking the inverse Laplace transform of Equation 3.52 gives the response yr(t). Completing the square in the quadratic factor, s² + 2ξωns + ωn² = (s + ξωn)² + [ωn√(1 − ξ²)]², and writing the numerator in terms of (s + ξωn),

(2ξ/ωn)s + (4ξ² − 1) = (2ξ/ωn)(s + ξωn) + (2ξ² − 1),

leads to

yr(t) = L⁻¹[−(2ξ/ωn)/s + 1/s² + {(2ξ/ωn)(s + ξωn) + (2ξ² − 1)}/{(s + ξωn)² + [ωn√(1 − ξ²)]²}]

= −2ξ/ωn + t + (2ξ/ωn)e^(−ξωnt) cos(ωn√(1 − ξ²))t

+ [(2ξ² − 1)/(ωn√(1 − ξ²))]e^(−ξωnt) sin(ωn√(1 − ξ²))t.    (3.53)

This is the response of a general second-order system to a ramp input. As done before, the complete response y(t) is then expressed in terms of this ramp response as follows: y(t) = yr (t) − 2yr (t − 1) + 2yr (t − 2) − 2yr (t − 3), where yr (t) is given in Equation 3.53.
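The ramp response derived in part (a) can be verified numerically. The Python sketch below (an illustrative stand-alone script, separate from the book's MATLAB examples; rk4, yr_sim and y_total are ad hoc names) integrates the equivalent differential equation x'' + 2x' + 5x = 5 with a classical Runge-Kutta routine, compares it with the closed form yr(t) = 1 − e^(−t)cos 2t − (1/2)e^(−t)sin 2t, and assembles the total sawtooth response by superposition:

```python
import math

def rk4(f, x0, t1, h=1e-3):
    """Integrate x' = f(t, x) from t = 0 to t1 with classical RK4; returns x(t1)."""
    t, x = 0.0, list(x0)
    for _ in range(int(round(t1 / h))):
        k1 = f(t, x)
        k2 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k1)])
        k3 = f(t + h/2, [xi + h/2*ki for xi, ki in zip(x, k2)])
        k4 = f(t + h, [xi + h*ki for xi, ki in zip(x, k3)])
        x = [xi + h/6*(a + 2*b + 2*c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        t += h
    return x

# Yr(s) = H1(s)/s^2 = (1/s) * 5/(s^2 + 2s + 5), i.e. the unit step response of
# 5/(s^2 + 2s + 5): simulate x'' + 2x' + 5x = 5 with zero initial conditions.
def plant(t, x):
    y, yd = x
    return [yd, 5.0 - 2.0*yd - 5.0*y]

def yr_sim(t):
    return rk4(plant, [0.0, 0.0], t)[0]

def yr_closed(t):
    # Closed form from part (a): 1 - e^(-t)cos(2t) - (1/2)e^(-t)sin(2t)
    return 1 - math.exp(-t)*math.cos(2*t) - 0.5*math.exp(-t)*math.sin(2*t)

def y_total(t, yr=yr_closed):
    """Sawtooth response assembled by superposition of shifted ramp responses."""
    out = yr(t)
    for shift, gain in ((1, -2), (2, 2), (3, -2)):
        if t > shift:
            out += gain * yr(t - shift)
    return out
```

The simulated and analytical ramp responses agree to well within the integration tolerance, confirming the partial fraction work above.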

PROBLEMS

Problem 3.1 Find and sketch the response of a dynamic system described by the equation

6ẏ + y = f(t),

with the initial condition y(0) = 4 when the input is:

(a) f(t) = 10
(b) f(t) = 5e^(−t/2)
(c) f(t) = 10 + 5e^(−t/2)
(d) f(t) = sin t + cos t

Problem 3.2 Use the definition and properties of Laplace transforms to find the Laplace transform of each of the following functions:

(a) y1(t) = te^(−at) + e^(−at) cos ωt

(b) y2(t) = te^(−2t) cos 3t + t² sin 2t

(c) y3(t) = d(t²e^(−t))/dt + ∫₀ᵗ λ²e^(−λ) dλ

Problem 3.3 Use partial fraction expansions to find the inverse Laplace transforms (the time system responses) of the following functions:

(a) Y1(s) = 5/[s(s + 1)(s + 10)]

(b) Y2(s) = 2s/(s² + 8s + 16)

(c) Y3(s) = (s³ + 2s + 4)/(s⁴ − 16)


Figure 3.42 Rotational mechanical system.

Problem 3.4 (a) Find the inverse Laplace transform of the function

Y(s) = (3s² + 2s + 2)/[(s + 2)(s² + 2s + 5)],

by using a complete partial fraction expansion. (b) Find the inverse Laplace transform of the same function by using the method of completing the square.

Problem 3.5 (a) Show that the following rotational mechanical system is represented by the differential equation (Figure 3.42)

ω̇1 + [(b1 + b2)/J]ω1 = (b1/J)ω(t).

(b) Given the following information: ω(t) = 10, b1 = b2 = 1, ω1(0) = 0 and J = 2, find τ, the time constant, and ω1ss, the steady-state speed. (c) Deduce the system response (solution to the differential equation). (d) Solve the same differential equation in (a) with the data in (b) by using Laplace transforms. (e) Plot the system response and confirm your plot using the IVT and FVT.

Problem 3.6 In the following mechanical system a velocity input, v(t) = 6 for t ≥ 0, is applied as illustrated.

A Translational Mechanical System

(a) Write the system's differential equation in terms of the velocity v1(t).
(b) What is the time constant τ and the steady-state velocity v1ss for the system?
(c) Given the information: v1(0) = 10, b1 = b2 = b3 = 1 and m = 1, obtain the system response.
(d) Sketch the system response.
(e) Solve the same differential equation in (a) with the data in (c) by using Laplace transforms.

Problem 3.7 (a) Derive the transfer function H(s) for the general first-order system given by

ẏ + σy = ku,


where y(t) is the output, u(t) is the input, and all initial conditions are assumed to be zero. (b) Find the natural response of the first-order system. (c) When is this system stable? (d) Obtain the unit step response from the natural response in (b).

Problem 3.8 (a) Derive the transfer function H(s) for the general second-order system given by

ÿ + 2ξωnẏ + ωn²y = ωn²u,

where y(t) is the output, u(t) is the input, and all initial conditions are assumed to be zero. (b) Find the natural response of the second-order system. (c) When is this system stable? (d) Explain how the unit step response can be obtained from the natural response in (b).

Problem 3.9 Consider the RLC electrical circuit given below, where the input is current ii(t) and the output is voltage vo(t).

An RLC Electrical Circuit

(a) The input-output differential equation for the circuit is given by

Cv̈o + (1/R)v̇o + (1/L)vo = i̇(t).

Given that R = 1/4 Ω, L = 1/3 H, C = 1 F and assuming zero initial conditions, find the system transfer function H(s). (b) Find the impulse (natural) response of the system. (c) Find the unit step response of the system. (d) What is the relationship between the natural and unit step responses?

Problem 3.10 Consider the following diagram where the input is force f (t).


Translational Mechanical System

(a) Obtain a differential equation in x that describes the system's behaviour. (b) Given the information x(0) = 1, ẋ(0) = 0, m = 1, b = 4, k = 3 and f(t) = 9, use Laplace transforms to show that

X(s) = (s² + 4s + 9)/[s(s² + 4s + 3)].

(c) Find the system response x(t) and plot it. (d) Use the IVT and FVT to check your plot. (e) Given the following new information: x(0) = 0, ẋ(0) = 0, m = 1, b = 2, k = 5 and f(t) ≡ unit impulse, show that

X(s) = 1/(s² + 2s + 5).

Use the method of completing the square to find the response x(t), and plot the system response.

Problem 3.11 The input-output differential equation of an electrical circuit, whose input is a voltage vi(t) and the output is a voltage vo(t), is given by

Cv̈o + (1/R1 + 1/R2)v̇o + (1/L)vo = (1/R1)v̇i.

(a) Find the expressions for the damping ratio ξ and the undamped natural frequency ωn. (b) Given the information: R1 = 1 Ω, R2 = 1/2 Ω, C = 1 F, L = 1/2 H, vi(t) = 2t and assuming zero initial conditions, use Laplace transforms to solve the differential equation, i.e., obtain the system response. (c) Plot the system response. (d) Illustrate that the initial and final values of the response are the same as those obtained by using the IVT and FVT.

Problem 3.12 Verify that the transfer function for the circuit shown below is given by

H(s) = Vo(s)/Vi(s) = (s² + 2s + 1)/(s² + 4s + 4).


(b) Find the unit impulse response for the circuit. (c) Find the unit step response for the circuit.

Problem 3.13 Apply the Initial and Final Value Theorems to find y(0⁺) and y(∞) for each of the following transforms. If a theorem does not apply to a particular transform, explain why this is so.

(a) Y(s) = (s³ + 2s + 4)/[s(s + 1)²(s + 2)]

(b) Y(s) = (4s² + 10s + 10)/(s³ + 2s² + 5s)

(c) Y(s) = 3(s³ + 2s² + 4s + 1)/[s(s + 3)²]

(d) Y(s) = (s³ − 4s)/[(s + 1)(s² + 4s + 4)]

Problem 3.14 (a) Assuming zero initial conditions, find the transfer function for a system that obeys the equation

ÿ + 4ẏ + 4y = u(t).

(b) From the transfer function, obtain the unit step response of the system. (c) From the transfer function, obtain the system's impulse response. (d) Differentiate the answer to part (b) and compare the result with the impulse response obtained in (c).

Problem 3.15 A dynamic system is described by the state-variable equations ẋ = Ax and y = Cx, where

A = [1  1; 0  −2],  C = [2  −1]

and x(0) = [1  2]ᵀ.

(a) Obtain the State-Transition Matrix Φ(t).
(b) Find the state variable responses x1(t) and x2(t).
(c) Find the output response y(t).
(d) For this system verify that:

Φ(0) = I

Φ⁻¹(t) = Φ(−t)

Φ(1)Φ(2) = Φ(3).

Problem 3.16 The following diagram shows a control system with conditional feedback. The transfer function G(s) denotes the controlled process, and D(s) and H(s) are the controller transfer functions.


Conditional Feedback Control System

(a) Derive the transfer functions Y(s)/R(s)|N=0 and Y(s)/N(s)|R=0. (b) Find Y(s)/R(s)|N=0 when D(s) = G(s). (c) Let

G(s) = D(s) = 100/[(s + 1)(s + 5)],

and find the output response y(t) when N(s) = 0 and r(t) = 1(t) (a unit step function). (d) With G(s) and D(s) as given in part (c), select among the following choices of H(s) such that when n(t) = 1(t) and r(t) = 0, the steady-state value of y(t) is equal to zero. (There may be multiple answers.)

H(s) = 10/[s(s + 1)]

H(s) = 10(s + 1)/(s + 2)

H(s) = 10/[(s + 1)(s + 2)]

H(s) = K/sⁿ (n = positive integer, select n)

It is important to note that the poles of the closed-loop system must all be in the left half s-plane for the Final Value Theorem to be valid.

Problem 3.17 (a) Draw a state diagram for the following state equations:

dx1(t)/dt = −2x1(t) + 3x2(t)

dx2(t)/dt = −5x1(t) − 5x2(t) + 2r(t)

(b) Find the characteristic equation of the system. (c) Find the transfer functions X1(s)/R(s) and X2(s)/R(s).

4 Characteristics of Feedback Control Systems

4.1 INTRODUCTION

This chapter introduces the principles of feedback control systems and illustrates their characteristics and advantages. There is a need to have automatic regulation or tracking in several applications. Quantities such as pressure, temperature, velocity, thickness, torque, and acceleration must be maintained at desired levels. Feedback control is a convenient way in which these tasks can be accomplished. Control is the process of causing a system variable to conform to some desired value or reference value. A system is any collection of interacting components for which there are cause-and-effect relationships among the variables. The components are connected to form a whole entity with properties not present in the separate entities. Within this context, a control system is then defined as an interconnection of interacting components forming a system configuration that will provide a desired system response. Chapter 2 discussed the modelling of dynamic systems, while Chapter 3 dealt with obtaining the system response from the models. This chapter aims to influence the dynamic system response by using feedback control. Feedback is the process of measuring the controlled variable and using that information to influence the controlled variable. Modern control engineering practice includes using control design strategies to improve manufacturing processes, the efficiency of energy use, advanced automobile control, and rapid transit systems. Feedback controllers are used in many different systems, from aeroplanes and rockets to chemical processing plants and semiconductor manufacturing. A feedback controller can be used to stabilise a system that is unstable in an openloop configuration. In this chapter, two case studies, the car cruise control system and the DC motor (both position and speed) control system, are used to study, compare, and contrast the chief characteristics of open- and closed-loop control systems. 
The different types of controllers, Proportional (P), Proportional and Integral (PI), Proportional and Derivative (PD), Proportional and Integral and Derivative (PID), are discussed, together with their advantages and limitations. The concepts of system error, tracking, disturbance rejection and system type are covered. The notions of sensitivity, bounded input-bounded output stability, asymptotic internal stability, and Routh-Hurwitz stability are discussed and illustrated using practical examples.

4.2 OPEN-LOOP VS. CLOSED-LOOP CONTROL: ANALYSIS

It is important to review the open-loop and closed-loop control systems introduced in Chapter 1. A control system is defined as a network that seeks to influence the behaviour of a dynamic system in pursuit of specified objectives or a desired response. There are two main configurations for control systems: open-loop and closed-loop. All control systems can be placed into these two general categories.

4.2.1 OPEN-LOOP CONTROL

An open-loop control system utilises a controller and actuator to obtain the desired response without monitoring the actual system response (controlled variable). This means that the objective of an open-loop control system is to achieve the desired output by utilising an actuating device to control the process directly without the use of feedback. The elements of an open-loop control system are shown in Figure 4.1, and the block diagram representation is shown in Figure 4.2. R(s) is the Laplace transform of the input r(t), Y(s) is the Laplace transform of the output y(t), D(s) is the controller transfer function, G(s) is the plant transfer function, and U(s) is the Laplace transform of the control signal u(t).

Figure 4.1 The elements of an open-loop control system.

Figure 4.2 General open-loop control block diagram.

DOI: 10.1201/9781032718880-4

4.2.2 CLOSED-LOOP CONTROL

In contrast to an open-loop control system, a closed-loop control system utilises an additional measurement of the actual output to compare the actual output with the desired output response. The measurement of the output is called the feedback signal. The elements of a general closed-loop feedback control system are shown in Figure 4.3. A closed-loop control system compares the output measurement with the desired input (reference or command input). The difference between the two quantities is then used to drive the output closer to the reference input through the controller and actuator. The general Block Diagram model of a closed-loop control system is shown in Figure 4.4. R(s) is the Laplace transform of the input r(t), Y(s) is the Laplace transform of the output y(t), D(s) is the controller transfer function, G(s) is the plant transfer function, U(s) is the Laplace transform of the control signal u(t), E(s) is the Laplace transform of the error signal e(t), and β is the sensor gain (or transfer function). For unity feedback control systems, β = 1.

Figure 4.3 The elements of a closed-loop control system.

Figure 4.4 General closed-loop control block diagram.

4.2.3 ADVANTAGES OF CLOSED-LOOP SYSTEMS

As stated in Chapter 1, closed-loop systems have the following advantages:

• Faster response to an input signal
• Effective disturbance rejection
• Better tracking of reference signals
• Low sensitivity to system parameter errors (e.g., errors in plant or controller gains)
• Low sensitivity to changes in calibration errors (recalibration is unnecessary)
• More accurate control of plant under disturbances and internal variations
• Effective and flexible control tuning by varying the control gain
• Used to stabilise systems that are inherently unstable in the open-loop form

4.2.4 DISADVANTAGES OF CLOSED-LOOP SYSTEMS

The following disadvantages of closed-loop systems are also discussed in Chapter 1:

• Require the use of sensors which increase the system costs
• Involve more components which leads to more costs and complexity
• The power costs (gain) are high
• More complex design, more complicated to build
• Less convenient when obtaining the output measurement is either complex or not economically feasible
• Initial tuning is more difficult, in particular, if the bandwidth is narrow
• There is always a steady-state error (with proportional controllers)
• Tend to become unstable as the gain is increased beyond certain limits
• Unnecessary when system inputs and the plant model are known with total certainty and there are no external disturbances
• Not always controllable

4.2.5 OPEN-LOOP CONTROL DESIGN

As already explained, an open-loop control system utilises a controller and actuator to obtain the desired response without using any measurement of the actual system response (controlled variable) to influence the system. Thus the objective of an open-loop control system is to achieve the desired output by utilising an actuating device to control the process directly without the use of feedback. To analyse the efficacy of open-loop control, consider a system where the following are provided:

• Objectives or specifications for the desired future of the system: the output y(t) should match the reference signal r(t)
• Means of changing the state of the system: an actuator
• Means of predicting the future effect of applying an input signal to the actuator: a plant model or transfer function G(s)

The open-loop control problem is to find the controller to make the output match the reference, as depicted in Figure 4.5. Hence the task is to find a controller C(s) that makes the output match the reference by inverting the plant dynamics such that (as depicted in Figure 4.6):

C(s) = 1/G(s)


Figure 4.5 Open-loop control problem.

Figure 4.6 Open-loop control solution.

Hence the control input U(s) and system output Y(s) are, respectively, given by

U(s) = C(s)R(s) = R(s)/G(s)

Y(s) = G(s)U(s) = G(s)C(s)R(s) = G(s) × [1/G(s)] × R(s)

⇒ Y(s) = R(s)

Thus, the open-loop control system can track a reference signal if the plant dynamics, represented by the plant model G(s), are perfectly known. This is illustrated by considering a plant whose model is given by

G(s) = (s² + s + 2)/(s² − s + 4)

Clearly, with perfect knowledge of the plant model, a perfect open-loop controller can be designed, and the open-loop control system will work perfectly by reproducing the reference signal. This is illustrated in Figure 4.7, where the open-loop controller is given by

C(s) = 1/G(s) = (s² − s + 4)/(s² + s + 2).

4.2.5.1 Open-Loop Control: The Drawback

Consider the case where there is some modelling error, and the plant model G(s) is almost, but not exactly, given by

G(s) = (s² + s + 2)/(s² − s + 4).

There is an error of 0.25% in an element of the plant model such that

G(s) = (s² + s + 2)/(s² − s + 3.99).
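The resulting instability can also be reproduced outside MATLAB. With the mismatched plant, the cascade is C(s)G(s) = (s² − s + 4)/(s² − s + 3.99) = 1 + 0.01/(s² − s + 3.99), whose poles 0.5 ± 1.93j lie in the right half s-plane. The Python sketch below (illustrative, pure standard library; max_deviation is an ad hoc helper) integrates the equivalent state equations for a unit step input and shows the output drifting away from the reference:

```python
# Open-loop cascade with the mismatched plant:
# C(s)G(s) = (s^2 - s + 4)/(s^2 - s + 3.99) = 1 + 0.01/(s^2 - s + 3.99).
# For a unit step input, y(t) = 1 + x(t), where x'' - x' + 3.99x = 0.01 with
# zero initial conditions; the right-half-plane poles make x(t) grow unbounded.

def max_deviation(T, h=1e-3):
    """Largest |y(t) - 1| = |x(t)| over [0, T], classical RK4 on the 2-state model."""
    def f(x, v):
        return v, 0.01 + v - 3.99 * x
    x, v, worst = 0.0, 0.0, 0.0
    for _ in range(int(round(T / h))):
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + h/2*k1x, v + h/2*k1v)
        k3x, k3v = f(x + h/2*k2x, v + h/2*k2v)
        k4x, k4v = f(x + h*k3x, v + h*k3v)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        worst = max(worst, abs(x))
    return worst
```

Over the first couple of seconds the deviation from the reference stays tiny, but by t = 25 s it has grown by several orders of magnitude, which is the divergence seen in Figure 4.8.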


Figure 4.7 Open-loop control with a perfect model [13].

In an open-loop control system, this small error will produce the unstable system response shown in Figure 4.8. It is important to note that the open-loop controller C(s) is the same as that used when the plant transfer function G(s) was perfectly known. Clearly, the small error in an element of the plant transfer function leads to instability. This demonstrates that open-loop control is too sensitive to plant model errors. The curves in Figure 4.8 are easily established by using the following MATLAB code:

% Open-loop control
sys1 = tf([1 -1 4],[1 1 2]);     % controller C(s) = (s^2-s+4)/(s^2+s+2)
sys2 = tf([1 1 2],[1 -1 3.99]);  % plant with the 0.25% model error
SymC = sys1*sys2;
figure
subplot(2,2,1)
step(sys1,10)
title('Open Loop')
subplot(2,2,2)
step(SymC,10)
title('Open Loop')

It is important to emphasise that the open-loop controller C(s) is the “inverse” of the plant model or system transfer function in an open-loop control system. This controller is often not causal as it


Figure 4.8 Open-loop control with a small model error [13].

requires information about future reference signals. The major problem is that it requires a perfect model of the plant. Hence open-loop control will not work if there are modelling errors, model changes over time and unknown disturbances. Consequently, open-loop control is hardly used in practical engineering problems [13].

4.2.6 THE CASE FOR CLOSED-LOOP CONTROL

In contrast to an open-loop control system, a closed-loop control system utilises a measure of the actual output to compare the actual output with the desired output response. The measure of the output is called the feedback signal. The elements of a general closed-loop feedback control system are shown in Figure 4.3. A closed-loop control system compares a measurement of the output with the desired output (reference or command input). The difference between the two quantities (the error signal) is then used to drive the output closer to the reference input through the controller and actuator. The problems and ineffectiveness of open-loop control are resolved by using closed-loop control. There are two major reasons why we need feedback control: (i) to make the system output resemble as much as possible a given input (“tracking”), for example, in target-tracking missiles. (ii) to reduce the effect of disturbances in the system output (regulation), for example, in minute adjustments in a steering wheel while we drive. As already explained, we cannot just use open-loop control where we invert the plant gain since we might not have perfect knowledge of the transfer function, and there could be unknown disturbances. The efficacy of closed-loop control can be demonstrated by resolving the same problem that open-loop control could not address. We can even use an error of 90% (not just 0.25%) in the same element of the plant transfer function, such that


G(s) = (s² + s + 2)/(s² − s + 0.4).

From Figure 4.9, it is evident that a closed-loop control system can produce the desired outcome despite the modelling error. That is why closed-loop control is more widely applied than open-loop control. In this specific case, a closed-loop control system with a PI controller (to be introduced and discussed in detail later in this chapter) is used. The curves in Figure 4.9 are easily established by using the following MATLAB code:

% Closed-loop control
Gd = tf([10 200],[1 0]);        % PI controller Gd(s) = 10 + 200/s
Gs1 = tf([1 1 2],[1 -1 4]);     % perfect plant model
Gs2 = tf([1 1 2],[1 -1 0.4]);   % plant model with the 90% error
num = (Gs1*Gd);
den = (1+Gs1*Gd);
num2 = (Gs2*Gd);
den2 = (1+Gs2*Gd);
subplot(2,2,3)
step((num/den),10)
title('Closed Loop')
hold on
step((num2/den2),10)
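Why the closed loop stays well behaved even with the 90% plant error can be confirmed algebraically. The Python sketch below (illustrative; conv and add are ad hoc helpers, and the Routh test used is the standard condition for a cubic) forms the closed-loop characteristic polynomial s(s² − s + 0.4) + (10s + 200)(s² + s + 2) and checks stability and the DC gain:

```python
def conv(a, b):
    """Polynomial product, coefficients listed highest power first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def add(a, b):
    """Polynomial sum with zero-padding of the shorter coefficient list."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

# PI controller Gd(s) = (10s+200)/s with the 90%-error plant (s^2+s+2)/(s^2-s+0.4)
loop_num = conv([10, 200], [1, 1, 2])   # numerator of Gd(s)G(s)
loop_den = conv([1, 0], [1, -1, 0.4])   # denominator of Gd(s)G(s)
char_poly = add(loop_den, loop_num)     # closed-loop characteristic polynomial

# Routh-Hurwitz test for a cubic a3 s^3 + a2 s^2 + a1 s + a0:
# all coefficients positive and a2*a1 > a3*a0.
a3, a2, a1, a0 = char_poly
stable = all(c > 0 for c in char_poly) and a2 * a1 > a3 * a0

# Closed-loop gain at s = 0: the integrator forces it to exactly 1,
# i.e. zero steady-state error to a step, despite the model error.
dc_gain = loop_num[-1] / char_poly[-1]
```

The characteristic polynomial works out to 11s³ + 209s² + 220.4s + 400, which passes the Routh test, and the DC gain is exactly 1, matching the step responses in Figure 4.9.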

[Figure 4.9: block diagram of the closed-loop system (PI controller 10(1 + 20/s) in cascade with the plant G(s)) and the unit step responses for the perfect model G(s) = (s² + s + 2)/(s² − s + 4) and the model with error G(s) = (s² + s + 2)/(s² − s + 0.4); both responses stay close to the unit step reference.]

Figure 4.9 Closed-loop control is insensitive to model error [13].



The advantages of closed-loop control can be restated as follows:

• Faster response to an input signal
• Effective disturbance rejection
• Better tracking of reference signals
• Low sensitivity to system parameter errors (e.g., errors in plant or controller gains)
• Low sensitivity to changes in calibration errors (recalibration is unnecessary)
• More accurate control of plant under disturbances and internal variations
• Effective and flexible control tuning by varying the control gain
• Used to stabilise systems that are inherently unstable in the open-loop form

4.2.7 CLOSED-LOOP CONTROL DESIGN

In summary, the goal for both open- and closed-loop control is to make the system output y(t) follow a reference r(t) automatically. The controller is a dynamic component, and the task is to design the controller transfer function C(s). In an open-loop control system, the controller is the inverse of the plant model. This will not work if there are modelling errors, model changes over time and unknown disturbances. Hence in this section, we concentrate on closed-loop control design. A general closed-loop system is depicted in Figure 4.10. It uses a sensor to measure the actual output y(t). It generates an error signal e(t) as the difference between the reference and output signals. The controller C(s) takes the error signal as its input and uses it to generate the control input u(t) to reduce the difference between y(t) and r(t). There are also external disturbances to the system and errors in sensor measurements (sensor noise). The task is to design C(s) so as to manipulate u(t) to make the error e(t) as small as possible. Applying negative feedback means the measured signal is subtracted from the reference signal. The measurement is usually corrupted by sensor noise.

4.2.8 CASCADE AND FEEDBACK CONTROLLERS

There are two types of controller structures: cascade and feedback. For the cascade configuration, the controller is in series with the plant, and there is unit gain in the feedback path as shown in Figure 4.11. It is also called a unity feedback control system. For the feedback configuration, the controller is placed in the feedback path as shown in Figure 4.12. The feedback controller can be remodelled as a cascade controller with an input transducer by using block diagram algebra as illustrated in Figure 4.13. Hence if a control system has both a cascade controller and a feedback one, the two controllers can be replaced by one cascade controller.

Figure 4.10 A general closed-loop control system.


Figure 4.11 Cascade controller.

Figure 4.12 Feedback controller.

Figure 4.13 Remodelled feedback controller.

4.2.9 CONTROL SYSTEM DESIGN REQUIREMENTS

To design a control system, there has to be clarity on the system requirements. The question is: How do we want the system to behave, what performance characteristics do we desire for it, and how is the physical control system going to operate? Before embarking on the design process, these requirements must be clearly outlined and understood. They are categorised as follows:

• The objectives or specifications for desired future behaviour of the system. This prescribes how we would like the system to behave and the level of performance we want.
• Means of changing the system's state, that is, an actuator which is a physical mechanism to move the system as desired, such as by opening a valve. An actuator responds by converting the source's energy into mechanical motion when it receives a control signal.
• A systematic way of predicting the future effect of applying an input signal to the actuator. This takes the form of a plant model G(s), which is the mathematical representation of the dynamic system's behaviour.


• A way of measuring the output of the process – a sensor or sensors. This measurement or fusion of sensor data is then used to compare the output y(t) with the reference r(t) to produce the error signal e(t), which is fed into the controller. The controller then generates the control signal, which drives the actuator into action.

4.2.10 CONTROL OBJECTIVES AND SPECIFICATIONS

Consider the general control system depicted in Figure 4.14. As introduced in Chapter 1, controller design is usually viewed as one of two related problems:

• Tracking (servo) problems: Design C(s) so that the output y(t) follows a time-varying reference r(t). Assume d(t) = 0.
• Regulation problems (disturbance rejection): Design C(s) to hold y(t) at a constant reference value, r(t) = r, in the presence of an external disturbance d(t).

To design a controller, we must prescribe how we want the system to behave in the future. These control specifications are mainly defined in terms of one or more of the following:

• Stability
• Steady-state accuracy
• Dynamic response
• Sensitivity to modelling errors
• Disturbance rejection effectiveness
• Sensitivity to sensor noise

Unfortunately, some of these control design requirements are quite intractable and often conflicting. The design of the controller C(s) must seek to address all these performance metrics and balance the conflicting requirements. Control specifications are often defined with standard signals as references for tracking problems, and with standard signals as disturbances in regulation cases. There are four main standard signals used to determine system performance: three in the time domain and one in the frequency domain. For a reference input and a disturbance signal, these four are represented as follows:

• Unit impulse: r(t) = δ(t) or d(t) = δ(t) ⇒ R(s) = 1 or D(s) = 1

• Unit step: r(t) = 1 or d(t) = 1 ⇒ R(s) = 1/s or D(s) = 1/s

Figure 4.14 Controller design: tracking and regulation problems.

Characteristics of Feedback Control Systems

285

• Unit ramp: r(t) = t or d(t) = t ⇒ R(s) = 1/s² or D(s) = 1/s²
• Sinusoidal signal of known angular frequency ω: r(t) = A cos(ωt + α) or d(t) = A cos(ωt + α)

Most of the other useful signals are essentially combinations of these basic test signals. For linear time-invariant systems, it follows that satisfactory and acceptable performance with respect to these individual basic test signals will result in similar performance for inputs that consist of combinations of these basic signals. This is also the case for even more complex inputs. Hence it is sufficient to investigate the performance of control systems using the basic signals listed above.
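The superposition claim for linear time-invariant systems can be checked numerically. The sketch below is a minimal illustration, not taken from the text: it simulates a hypothetical first-order system (pole at a = 2, an assumed value) with forward-Euler integration and confirms that the response to (step + ramp) equals the sum of the individual responses.

```python
# Superposition check for an LTI system: the response to (step + ramp)
# must equal the sum of the individual step and ramp responses.
def simulate(u, a=2.0, dt=0.001, t_final=5.0):
    """Forward-Euler simulation of the first-order system y' = -a*y + u(t)."""
    y, ys = 0.0, []
    for k in range(int(t_final / dt)):
        y += dt * (-a * y + u(k * dt))
        ys.append(y)
    return ys

step = lambda t: 1.0          # unit step input
ramp = lambda t: t            # unit ramp input
both = lambda t: 1.0 + t      # combination of the two basic signals

y_step, y_ramp, y_both = simulate(step), simulate(ramp), simulate(both)

# Largest deviation between the combined response and the summed responses.
err = max(abs(s + r - c) for s, r, c in zip(y_step, y_ramp, y_both))
assert err < 1e-9
```

Because the simulation update is linear in the state and input, the deviation is only floating-point rounding; for a nonlinear system this check would fail, which is precisely why the argument above is restricted to linear time-invariant systems.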

4.3 FEEDBACK CONTROL SYSTEMS

The block diagram, depicted in Figure 4.15, represents the most generalised feedback control system with all the components that define the essence of such a system. All the other representations are special cases of this block diagram. As explained earlier, the plant is the system we want to control in this general feedback control system. For example, for an elevator plant, the input is voltage and the output is the elevator position. The controller is the apparatus that produces the input to the plant, the voltage to the elevator's motor. Transducers convert physical quantities into signals the system can use. For example, in an elevator plant, the input transducer converts the pushed floor button into a voltage, and the output transducer converts the current elevator position into a voltage signal. Feedback is the apparatus that feeds the current system state back to generate the error signal. In the elevator system, the error signal equals the voltage representing the desired position minus the voltage representing the current position.

Given this generic Block Diagram model depicted in Figure 4.15, to obtain the corresponding Transfer Function model, we proceed by using block diagram algebra. First, move the block G1 in front of the summing junction, which leads to the first diagram in Figure 4.16. Further simplification leads to the second diagram in Figure 4.16. The Transfer Function model of the system represented by the block diagram is then obtained:

E(s) = R(s) ± H2(s)H1(s)C(s)/G1(s)
C(s) = [G1(s)G2(s)G3(s)]E(s)
⇒ C(s) = [G1(s)G2(s)G3(s)][R(s) ± H2(s)H1(s)C(s)/G1(s)]

Figure 4.15 Block diagram of a general feedback control system.


Figure 4.16 Closed-loop control block diagram reduction.

⇒ [G1(s)G2(s)G3(s)]R(s) = C(s)[1 ± G1(s)G2(s)G3(s)H2(s)H1(s)/G1(s)]
⇒ [G1(s)G2(s)G3(s)]R(s) = C(s)[1 ± G2(s)G3(s)H2(s)H1(s)]

T(s) = C(s)/R(s)
⇒ T(s) = G1(s)G2(s)G3(s)/[1 ± G2(s)G3(s)H2(s)H1(s)]

This is the Transfer Function model of the most generic feedback control system. When it is negative feedback, the (+) sign is used, and when it is positive feedback, the (−) sign is used. This Transfer Function model can also be obtained by a different block diagram reduction approach – first finding the transfer function Tfeed(s) of the ± feedback loop in Figure 4.15 and then considering a series block diagram of this outcome and G1(s) as shown in Figure 4.17. The Transfer Function model of the general feedback control system is then derived as follows:

Tfeed(s) = G2(s)G3(s)/[1 ± G2(s)G3(s)H2(s)H1(s)]
T(s) = G1(s)Tfeed(s)
⇒ T(s) = G1(s){G2(s)G3(s)/[1 ± G2(s)G3(s)H2(s)H1(s)]}
= G1(s)G2(s)G3(s)/[1 ± G2(s)G3(s)H2(s)H1(s)]

Figure 4.17 Resultant series block diagram.
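The closed-form result can be sanity-checked numerically by iterating the feedback loop itself as a fixed-point computation and comparing the converged value with the formula. The block transfer functions below are hypothetical choices (not from the text), evaluated at a test frequency where the loop-gain magnitude is below one so the iteration converges; negative feedback is assumed.

```python
# Numerical check of T(s) = G1*G2*G3 / (1 + G2*G3*H2*H1) (negative feedback).
# All block transfer functions below are illustrative assumptions.
G1 = lambda s: 2.0                  # controller
G2 = lambda s: 1.0 / (s + 1.0)     # first plant section
G3 = lambda s: 3.0 / (s + 2.0)     # second plant section
H1 = lambda s: 1.0                 # output transducer
H2 = lambda s: 0.5                 # feedback sensor

def T_formula(s):
    return G1(s) * G2(s) * G3(s) / (1.0 + G2(s) * G3(s) * H2(s) * H1(s))

# Iterate the raw loop equations C = G1*G2*G3*E and E = R - H1*H2*C/G1
# with R = 1, so the converged C equals T(s) directly.
s0 = 1j
C = 0.0
for _ in range(200):
    E = 1.0 - H1(s0) * H2(s0) * C / G1(s0)
    C = G1(s0) * G2(s0) * G3(s0) * E

assert abs(C - T_formula(s0)) < 1e-9
```

The iteration converges here because |G2(s0)G3(s0)H1(s0)H2(s0)| is about 0.47 at s0 = j; the fixed point it converges to is exactly the closed-loop transfer function derived above.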

Clearly, this outcome is the same as the one already established in the preceding section. All Transfer Function models of single-input, single-output feedback control systems take this format or are simply special cases of this generic Transfer Function model.

4.3.1 SPECIAL CASE I

In this case, it is a negative feedback control system, and there is neither an input nor an output transducer, which means the input and output are fed in directly, as shown in the first diagram in Figure 4.18. There is also a Proportional controller:

G1(s) = H1(s) = 1
G2(s) = K
G3(s) = G(s)
H2(s) = H(s)

T(s) = G1(s)G2(s)G3(s)/[1 + G2(s)G3(s)H2(s)H1(s)]
⇒ T(s) = KG(s)/[1 + KG(s)H(s)]

This is the standard negative feedback control system. The standard positive feedback control system has the same structure but with a minus sign in the denominator.

4.3.2 SPECIAL CASE II

In this case, in addition to the preceding conditions, the output is compared directly with the input (there is no feedback gain) as shown in the second diagram in Figure 4.18:

Figure 4.18 (i) Standard negative feedback (ii) unity negative feedback.


H(s) = 1
T(s) = KG(s)/[1 + KG(s)H(s)]
⇒ T(s) = KG(s)/[1 + KG(s)]

This is called a unity negative feedback control system. In the case of positive feedback, there will be a minus sign in the denominator. It is then called a unity positive feedback control system.

4.3.3 POSITIVE FEEDBACK VS. NEGATIVE FEEDBACK

As illustrated in the preceding figures, a change in a given direction in a positive feedback loop causes an additional change in the same direction. For example, an increase in the concentration of a substance causes feedback that produces continued increases in concentration. In a negative feedback loop, a change in a given direction causes a change in the opposite direction. For example, an increase in the concentration of a substance causes feedback that ultimately causes the concentration of the substance to decrease.

Positive feedback loops are inherently unstable systems. This is because a change in an input causes responses that produce continued changes in the same direction, leading to instability. Positive feedback increases the system gain and may lead to self-oscillation. Although often considered undesirable, this behaviour is used in electronics to obtain a swift switching response to a condition or signal. One example of positive feedback is hysteresis, in which a logic device or system maintains a given state until some input crosses a preset threshold. This type of behaviour is called "bi-stability" and is often associated with logic gates and digital switching devices such as multivibrators. Hence, positive feedback is widely used in oscillator and timing circuits.

Negative feedback loops are inherently stable systems. In conjunction with the various stimuli that can affect a variable, negative feedback loops typically produce a condition in which the variable oscillates around (or settles at) the set point. For example, in biology, negative feedback loops involving insulin and glucagon help keep blood glucose levels within a narrow concentration range. If glucose levels get too high, the body releases insulin into the bloodstream. Insulin causes the body's cells to take in and store glucose, lowering the blood glucose concentration. If blood glucose gets too low, the body releases glucagon, which causes the release of glucose from some of the body's cells. This is a classic example of the efficacy of negative feedback in biology.

Most engineering problems involve tracking (following a time-varying defined trajectory or target) and regulation (maintaining a certain variable at a specific value or within a particular range). Thus, negative feedback is highly applicable in solving these engineering tasks. Consequently, most control system theory and practice concentrate on negative feedback control systems.

4.4 STEADY-STATE PERFORMANCE

The steady-state response of a dynamic system is defined as its behaviour in the limit as time goes to infinity. Hence, the steady-state error is the difference between the reference and the system’s output when the response has reached steady state. Put differently, the steady-state error is defined as the difference between the desired value and the actual value of a system output in the limit as time goes to infinity (i.e. when the response of the control system has reached steady state). The steady-state error will depend on the type of reference (e.g., step or ramp) and type of disturbance (e.g., step or ramp). Steady-state performance is only attainable for stable systems.


Figure 4.19 Defining steady-state response.

Consider the general unity feedback control system in Figure 4.19. The steady-state response is usually defined in terms of the response to a unit step input as a reference r(t) or disturbance d(t). The objective is for the steady-state error signal to decay to zero (after the transient response has dissipated) as time approaches infinity. In some cases, the steady-state error cannot be reduced to zero. In such cases, the requirement is for the steady-state error to be less than a specified value.

4.4.1 EXPRESSIONS FOR STEADY-STATE ERROR

A simpler version of Figure 4.19 can be represented by the unity feedback control system in Figure 4.20, where G(s) is the plant gain (or plant transfer function) and T(s) is the closed-loop transfer function. The system error signal can be expressed as a function of G(s) or T(s). From Figure 4.20 it follows that:

C(s) = E(s)G(s)
E(s) = R(s) − C(s)
E(s) = R(s) − E(s)G(s)
⇒ E(s) = R(s)/[1 + G(s)]

This is the expression of the system error signal in terms of the plant transfer function G(s) and the reference signal R(s). The steady-state error is obtained by using the Final Value Theorem (FVT):

ess(t) = lim s→0 sR(s)/[1 + G(s)]

This is the expression of the system steady-state error in terms of the plant transfer function G(s). The closed-loop transfer function is given by

T(s) = C(s)/R(s)
⇒ C(s) = R(s)T(s)

But the error signal is

E(s) = R(s) − C(s)

Figure 4.20 Unity feedback control system.


⇒ E(s) = R(s) − R(s)T(s)
⇒ E(s) = R(s)[1 − T(s)]

This is the expression of the system error signal in terms of the closed-loop transfer function T(s) and the reference signal R(s). The steady-state error is obtained by using the Final Value Theorem (FVT):

ess(t) = lim s→0 sR(s)[1 − T(s)]

This is the expression of the system steady-state error in terms of the closed-loop transfer function T(s). For illustration, consider the case where the input is a unit step function and the plant transfer function is given by

G(s) = 25/[s(s + 5)]

⇒ T(s) = G(s)/[1 + G(s)] = [25/(s(s + 5))]/[1 + 25/(s(s + 5))]
⇒ T(s) = 25/(s² + 5s + 25)

Obtaining the steady-state error by using the plant transfer function G(s) leads to

ess1(t) = lim s→0 sR(s)/[1 + G(s)]
= lim s→0 s(1/s)/[1 + 25/(s(s + 5))]   where R(s) = 1/s
= 1/∞ = 0.

This is the steady-state error using the plant transfer function G(s). Deploying the Transfer Function model T(s) leads to

ess2(t) = lim s→0 sR(s)[1 − T(s)]
= lim s→0 s(1/s)[1 − 25/(s² + 5s + 25)]   where R(s) = 1/s
= 0.

This demonstrates that the steady-state error ess(t) obtained from the two formulae is the same.
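The two limits can be approximated numerically by evaluating the expressions at a value of s close to zero, a quick plausibility check rather than a formal proof:

```python
# Numerical Final Value Theorem check for G(s) = 25/(s(s+5)) with a unit step.
G = lambda s: 25.0 / (s * (s + 5.0))
T = lambda s: G(s) / (1.0 + G(s))

s = 1e-6                       # a small s approximates the limit s -> 0
R = 1.0 / s                    # unit step reference, R(s) = 1/s

e1 = s * R / (1.0 + G(s))      # steady-state error via the plant G(s)
e2 = s * R * (1.0 - T(s))      # steady-state error via the closed loop T(s)

assert e1 < 1e-5 and e2 < 1e-5    # both tend to zero, as derived
assert abs(e1 - e2) < 1e-12       # and they agree with each other
```

The zero error arises because G(s) contains a free integrator (the 1/s factor), so its DC gain is infinite; for a plant without an integrator, the same script would return a finite nonzero value.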

4.5 DISTURBANCE MODELLING

Disturbance signals represent unwanted inputs which affect the control system's output, resulting in an increase in the system error. The control system must be properly designed to completely or partially eliminate the effects of disturbances on the output and system error. The exact point where disturbances enter is usually unknown. There are several types of disturbances: input disturbance di(t), output disturbance do(t) and sensor disturbance n(t), as depicted in Figure 4.21. In some industrial control problems, the input disturbance is generated in the actuator, and the output disturbance is produced in


Figure 4.21 Modelling sensor noise, input and output disturbances.

the sensor. The main causes of these disturbances are abnormal behaviour of the actuator (in the case of a motor, its magnetic field might become weak, or there might be a fault in the system in which the motor is used) and environmental effects on a sensor's performance. The input disturbance and the output disturbance can be combined into a single disturbance using block diagram manipulation, as shown in Figure 4.22. The result is the combined disturbance model shown in Figure 4.23. Alternatively, one can model either the input disturbance or the output disturbance, but not both.

Figure 4.22 Combining input and output disturbances.

Figure 4.23 A combined disturbance model.


As discussed earlier, controller design should allow for changes in the reference signal r(t), unknown disturbances d(t), uncertainties and errors in the model of the dynamic system G(s), and sensor noise n(t). In practice, actuators and sensors have their own dynamics. One can combine the responses of actuators and sensors into the system dynamics such that G(s) then describes the dynamic response of the actuator, sensor and system.

4.5.1 EFFECTIVE DISTURBANCE REJECTION

Consider a closed-loop control system with a combined disturbance input D(s) as shown in Figure 4.24. The objective is to determine ways to minimise the impact of this disturbance on the steady-state error of the system. The error due to the disturbance signal is established and analysed from Figure 4.24 as follows: C(s) = E(s)G1 (s)G2 (s) + D(s)G2 (s)

(4.1)

E(s) = R(s) −C(s)

(4.2)

=⇒ C(s) = R(s) − E(s)

(4.3)

Substituting Equation 4.3 into Equation 4.1 yields

R(s) − E(s) = E(s)G1(s)G2(s) + D(s)G2(s)
⇒ E(s)[1 + G1(s)G2(s)] = R(s) − D(s)G2(s)
⇒ E(s) = [1/(1 + G1(s)G2(s))]R(s) − [G2(s)/(1 + G1(s)G2(s))]D(s)
⇒ E(s) = [1/(1 + G1(s)G2(s))]R(s) + ED(s)

where ED(s) = −G2(s)D(s)/(1 + G1(s)G2(s))

The error signal eD(t), whose Laplace transform is ED(s), is the error due to the combined disturbance D(s). Ways of rejecting this error can be explored by considering a unit step disturbance D(s) = 1/s. The steady-state error due to the disturbance is then given by

eDss = lim s→0 s[G2(s)/(1 + G1(s)G2(s))](1/s)
= 1/[1/lim s→0 G2(s) + lim s→0 G1(s)]

Figure 4.24 Effective disturbance rejection.


This means the steady-state error eDss can be reduced to zero by increasing the DC gain of G1(s) or by decreasing the DC gain of G2(s). Since the plant transfer function G2(s) is usually fixed, the more practical option is to increase the DC gain of the controller G1(s).
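This effect can be illustrated numerically with a proportional controller of increasing DC gain; the plant G2(s) below is an assumed example, not one from the text.

```python
# Steady-state error due to a unit step disturbance:
# eDss = lim_{s->0} G2(s) / (1 + G1(s)*G2(s)), with G1(s) = K (proportional).
G2 = lambda s: 10.0 / (s + 2.0)        # hypothetical plant, DC gain of 5

def e_dss(K, s=1e-9):
    """Approximate the s -> 0 limit by evaluating at a very small s."""
    return G2(s) / (1.0 + K * G2(s))

errors = [e_dss(K) for K in (1.0, 10.0, 100.0, 1000.0)]

# The disturbance error shrinks monotonically as the controller DC gain grows.
assert all(a > b for a, b in zip(errors, errors[1:]))
assert errors[-1] < 0.01
```

With K = 1 the residual error is about 0.83, while K = 1000 drives it below 0.001, matching the conclusion that raising the controller's DC gain is the practical route to disturbance rejection (at the cost, in practice, of actuator effort and possible stability problems).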

4.5.2 EXAMPLES OF OPEN- AND CLOSED-LOOP SYSTEMS

A variety of control systems (open- and closed-loop) were outlined in Chapter 1. These include blood glucose control, manual car direction control, turntable speed control, automatic water level control, and room temperature control. Two running examples are used in the following sections to illustrate and quantitatively compare open- and closed-loop systems: Speed cruise control design for a car and control of a DC motor.

4.6 CAR CRUISE CONTROL SYSTEM (OPEN-LOOP)

The model of the cruise control system for a car can be established in a relatively simple fashion. The car is modelled as a single translational mass where the rotational inertia of the wheels is neglected. The input to the car system is the applied engine force f(t), which produces horizontal motion x(t) at speed v(t) and acceleration v̇(t). The frictional force bv(t), which is proportional to the car's speed, opposes the car's motion. The model of the car's cruise control system is thus reduced to a simple mass and damper system as shown in Figure 4.25. Using Newton's second law, the car's equation of motion is given by

mv̇ + bv = f
v̇ + (b/m)v = f/m.   (4.4)

The applied force f(t) can be related to the desired speed (or reference speed) vr(t) by considering the steady-state conditions, i.e., when all derivatives are equal to zero. The reference speed is equal to the steady-state speed vss(t), which is also the maximum attainable speed when a force f(t) is applied to the car.

(b/m)vss = f/m
⇔ vss = f/b = vr.

For example, an applied force of 500 N, where b = 50 N sec/m, would mean that the reference speed vr is 10 m/sec.

Figure 4.25 Car cruise control: a single mass model.

4.6.1 INPUT-OUTPUT MODEL

The input-output Equation 4.4 can be rewritten with the reference speed vr(t) as the input as follows:

v̇ + (b/m)v = (b/m)vr
⇔ mv̇ + bv = bvr.   (4.5)

This is a much more intuitive representation of the input-output differential equation as it clearly relates the desired output vr(t) and the actual output v(t), where the two quantities are of the same type and dimension. Most textbooks consider Equation 4.4 as the input-output differential equation, but this does not make much sense when one has to design a car's cruise control system (open-loop or closed-loop). This is because, under such a representation, the system will have to be designed in terms of a reference force input f(t) (as the desired output) while the variable being controlled (the actual output) is the car speed v(t). Although this is technically correct, it is very unintuitive.

4.6.2 TRANSFER FUNCTION MODEL

The transfer function is obtained by taking the Laplace transform of the input-output Equation 4.5. When finding the transfer function, zero initial conditions are assumed.

v̇ + (b/m)v = (b/m)vr
⇒ sV(s) + (b/m)V(s) = (b/m)Vr(s)
⇒ V(s)[s + b/m] = (b/m)Vr(s)
⇒ V(s)/Vr(s) = (b/m)/(s + b/m).   (4.6)

This is the open-loop transfer function of the system with respect to the reference or desired car speed.

4.6.3 BLOCK DIAGRAM MODEL

The Block Diagram model of a system (open-loop or closed-loop) can be derived by analysing segments of the transfer function. Any general open-loop system can be represented as shown in Figure 4.26.

Figure 4.26 General open-loop control system.


Figure 4.27 Open-loop cruise control.

The specific block diagram form is determined by pairing up elements of this generic diagram with the corresponding segments of the specific transfer function being considered. In the car's cruise control problem, the input is the reference car speed vr(t), and the actual car speed v(t) is the output. The plant gain G(s) is obtained from the transfer function in Equation 4.6 as

G(s) = (b/m)/(s + b/m).

The controller is a unit proportional controller, i.e., D(s) = K = 1, where U(s) = D(s)Vr(s). Hence, the Block Diagram model for the open-loop car's cruise control system takes the form shown in Figure 4.27.

4.6.4 STATE-VARIABLE MATRIX MODEL

The system has one independent energy-storing element (the mass). Therefore, the minimum number of state variables is one, and this variable is chosen as v(t), the car speed.

ẋ = v
v̇ = −(b/m)v + (b/m)vr.

The State-Variable Matrix model (with the output as the car velocity) is then given by

v̇ = [−b/m]v + [b/m]vr
y = [1]v + [0]vr.

If the car position x(t) is also of interest, for example in a situation where car position control is an objective, then an extra redundant variable x(t) is required to model the system. The State-Variable Matrix model (with both position and velocity as outputs) becomes

[ẋ; v̇] = [0 1; 0 −b/m][x; v] + [0; b/m][vr]   (4.7)

[y1; y2] = [1 0; 0 1][x; v] + [0; 0]vr.   (4.8)


Example 4.1 Open-Loop Cruise Control System Design: Consider a car whose mass is m = 1,000 kg and b = 50 N sec/m, where the reference (input) speed of interest is 10 m/sec. Design requirements: The desired speed of the car is 10 m/sec (22 mph). An automobile should be able to accelerate up to that speed in less than 5 seconds. Since this is only a cruise control system, a 10% overshoot on the velocity will not do much damage. A 2% steady-state error is also acceptable for the same reason. Keeping the above in mind, the following design criteria for this problem are proposed: rise time < 5 sec, overshoot < 10%, and steady-state error < 2%. The step function in MATLAB calculates the response to a step input (1 m/sec). Hence, the input matrix B will have to be multiplied by the magnitude of the reference speed, 10 m/sec. As a result, the MATLAB command is given by step(A,vr*B,C,D,1,t).

Solution 4.1 The state-variable matrices are obtained from Equations 4.7 and 4.8, and t is the time of observation.

A = [0 1; 0 −b/m], B = [0; b/m], C = [1 0; 0 1], D = [0; 0].   (4.9)

Figures 4.28 and 4.29 show the MATLAB plots of the position and speed responses of the car with respect to time. Figure 4.28 shows the open-loop car speed response. The car achieves the desired speed of 10 m/sec in about 110 sec (the settling time) without any steady-state error or overshoot. The settling time is too large and does not satisfy the rise time criterion of < 5 seconds. However, the overshoot requirement (< 10%) and the steady-state error criterion (< 2%) are satisfied. Figure 4.29 shows the car position response, which, after the settling time of 110 sec, is a linear curve of gradient 10 m/sec.

Figure 4.28 Car cruise control (open-loop system).


Figure 4.29 Car cruise control (open-loop system).
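The open-loop behaviour reported in Example 4.1 can also be reproduced without MATLAB; the sketch below integrates mv̇ + bv = bvr with a plain forward-Euler loop in Python (the step size and horizon are arbitrary choices, not from the text).

```python
# Open-loop cruise control: m*v' + b*v = b*vr, starting from rest.
m, b, vr = 1000.0, 50.0, 10.0      # parameter values from Example 4.1
dt, t_final = 0.01, 200.0

v, t, v_at_5s, t_settle = 0.0, 0.0, None, None
while t < t_final:
    v += dt * (b * vr - b * v) / m     # forward-Euler integration step
    t += dt
    if v_at_5s is None and t >= 5.0:
        v_at_5s = v                    # speed when the rise-time deadline passes
    if t_settle is None and v >= 0.98 * vr:
        t_settle = t                   # time to reach 98% of the target speed

assert abs(v - vr) < 0.05      # no steady-state error in the open-loop system
assert v_at_5s < 3.0           # barely above 2 m/s at t = 5 s: rise time fails
assert t_settle > 50.0         # settling takes on the order of a minute or more
```

The time constant m/b = 20 sec dominates the response, so the speed crawls toward 10 m/sec over tens of seconds, consistent with the slow response shown in Figure 4.28 while meeting the overshoot and steady-state criteria.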

4.7 CAR CRUISE CONTROL SYSTEM (CLOSED-LOOP)

The car's cruise control system considered so far has been open-loop. There is no use of a sensor, and hence there is no feedback of sensor information to correct errors in the car speed. A closed-loop car cruise control system engages a speed-measuring sensor, the speedometer, to measure the actual car speed. This measured speed is then compared with the desired car speed, and the difference between them is sent to a controller that seeks to reduce this system error. Consider the open-loop cruise control model

v̇ + (b/m)v = (b/m)vr.

In the closed-loop system the speed error, [vr(t) − v(t)], is used as the input into the proportional controller K, instead of the reference speed vr. Thus, the closed-loop model takes the following form:

v̇ + (b/m)v = (b/m)K(vr − v).   (4.10)

4.7.1 INPUT-OUTPUT MODEL

The input to the whole system is still vr(t) while the output is still v(t), and hence the input-output differential equation form is obtained by rearranging Equation 4.10.

v̇ + (b/m)v + (bK/m)v = (b/m)Kvr
v̇ + (b/m)(1 + K)v = (b/m)Kvr.   (4.11)

4.7.2 TRANSFER FUNCTION MODEL

The transfer function form is obtained by taking the Laplace transform of the input-output Equation 4.11, while assuming zero initial conditions.

mv̇ + (b + bK)v = bKvr.

Taking Laplace transforms it follows that

smV(s) + (b + bK)V(s) = bKVr(s)
⇒ V(s)/Vr(s) = bK/[ms + (b + bK)].   (4.12)

This is the transfer function form of the closed-loop car's cruise control system with respect to the desired car speed.

4.7.3 BLOCK DIAGRAM MODEL

The block diagram form of the closed-loop system can be obtained in the same way as for the open-loop system, that is, by comparing segments of the transfer function with a generic closed-loop block diagram. Any closed-loop (negative feedback) system can be represented by Figure 4.30, where β represents a sensor that measures the actual output. The measured output is compared with the desired output, and the difference between the two is used to drive the controller D(s). From the closed-loop equation

v̇ + (b/m)v = (b/m)K(vr − v)
⇒ (m/b)v̇ + v = K(vr − v).

The block diagram is established by taking the Laplace transforms of this equation and then comparing the elements of the resulting transfer function with the generic block diagram in Figure 4.30.

(m/b)sV(s) + V(s) = K[Vr(s) − V(s)]
⇒ V(s)[(ms + b)/b] = K[Vr(s) − V(s)],

where the speed error and the control signals in Laplace transforms are, respectively, given by

E(s) = Vr(s) − V(s)   (4.13)
U(s) = KE(s).   (4.14)

Figure 4.30 General closed-loop block diagram.


Figure 4.31 Closed-loop cruise control system.

The output speed is thus obtained from the control signal by the equation

V(s) = [b/(ms + b)]U(s)
⇒ V(s) = [(b/m)/(s + b/m)]U(s).   (4.15)

Putting Equations 4.13–4.15 in block diagram form produces Figure 4.31. Comparing this closed-loop block diagram for the cruise control system with that of the corresponding open-loop system in Figure 4.27, it can be seen that the only difference between them is the unity negative feedback loop. In such a control system, the actual output is compared with the desired output, and the difference between the two is used to drive the controller. The assumption is that a perfect sensor (β = 1) is used to measure the output. From the closed-loop block diagram the closed-loop transfer function can be deduced directly.

V(s)/Vr(s) = K[(b/m)/(s + b/m)] / [1 + K(b/m)/(s + b/m)]
⇒ V(s)/Vr(s) = bK/[ms + (b + bK)].

This equation is the same as Equation 4.12, thus effectively validating the block diagram shown in Figure 4.31.
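The equivalence can also be confirmed numerically by evaluating both expressions at a few sample points in the s-plane, using the parameter values of the running example:

```python
# Check that K*G(s)/(1 + K*G(s)) with G(s) = (b/m)/(s + b/m)
# equals the closed-loop form bK/(ms + b + bK) from Equation 4.12.
m, b, K = 1000.0, 50.0, 20.0

G = lambda s: (b / m) / (s + b / m)
T_block = lambda s: K * G(s) / (1.0 + K * G(s))     # from the block diagram
T_eq = lambda s: b * K / (m * s + b + b * K)        # Equation 4.12

for s in (0.0, 0.1 + 0.0j, 1.0 + 1.0j, 5.0 - 2.0j):
    assert abs(T_block(s) - T_eq(s)) < 1e-9
```

At s = 0 both expressions give the DC gain K/(1 + K) = 20/21, which is the source of the roughly 5% steady-state error discussed in Example 4.2.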

4.7.4 STATE-VARIABLE MATRIX MODEL

Considering a single state variable, the car speed v(t), the state-variable form is obtained from the closed-loop input-output differential equation,

v̇ + (b/m)(1 + K)v = (b/m)Kvr
[v̇] = [−b(1 + K)/m][v] + [bK/m]vr
y = [1]v + [0]vr.


If car position monitoring is also required, then both x(t) and v(t) are chosen as state variables. In this case, there is a redundant state variable x(t). Thus the State-Variable Matrix model of the position control system (where both position and velocity are outputs) is represented as follows:

[ẋ; v̇] = [0 1; 0 −b(1 + K)/m][x; v] + [0; bK/m][vr]

[y1; y2] = [1 0; 0 1][x; v] + [0; 0]vr

Example 4.2 Consider a car of similar characteristics and the same design requirements as those used for the open-loop system, i.e., m = 1,000 kg and b = 50 N sec/m, where the reference (input) speed of interest is 10 m/sec. The step function in MATLAB calculates the response to a step input (1 m/sec). Hence, matrix B will correspondingly have to be multiplied by the magnitude of the reference speed, 10 m/sec. The MATLAB command is the same as that in the open-loop system, but the system matrices are different, as shown below.

Solution 4.2 step(A,vr*B,C,D,1,t), where

A = [0 1; 0 −b(1 + K)/m], B = [0; bK/m], C = [1 0; 0 1], D = [0; 0].   (4.16)

Figures 4.32 and 4.33 show the MATLAB plots of the position and speed responses of the car with respect to time. Figure 4.32 shows the car speed response for a proportional control gain of K = 20. There is a steady-state error of 5% from the desired speed of 10 m/sec, a settling time of 5 sec, and no overshoot. There is a dramatic improvement over the open-loop system in settling time (110 sec to 5 sec). The disadvantage of the proportional closed-loop control system with respect to the open-loop one is that there is always a steady-state error, whereas there is no steady-state error in the open-loop system. Although large values of K give low steady-state errors, they involve high energy consumption and large (impractical) car accelerations, and might lead to instabilities. Although reducing the steady-state error is desirable, there has to be a trade-off between this objective and these negative tendencies. The plot shows that the closed-loop system satisfies all three design requirements (rise time, overshoot and steady-state error). Figure 4.33 shows the corresponding car position response, which, after the settling time (5 sec), is a linear curve of gradient 9.5 m/sec.
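The closed-loop figures can likewise be reproduced with a plain forward-Euler loop; the sketch below uses the values of Example 4.2 (step size and horizon are arbitrary choices).

```python
# Closed-loop cruise control: m*v' + b*v = b*K*(vr - v), starting from rest.
m, b, K, vr = 1000.0, 50.0, 20.0, 10.0   # values from Example 4.2
dt = 0.001

v, t, t_rise = 0.0, 0.0, None
for _ in range(int(10.0 / dt)):          # simulate 10 seconds
    v += dt * (b * K * (vr - v) - b * v) / m
    t += dt
    if t_rise is None and v >= 0.9 * vr:
        t_rise = t                       # time to reach 90% of the target

v_ss = K / (1.0 + K) * vr                # predicted steady state, about 9.52 m/s

assert abs(v - v_ss) < 0.01              # settles roughly 5% below 10 m/s
assert t_rise < 5.0                      # the rise-time requirement is now met
```

The closed-loop time constant m/[b(1 + K)] is under a second, so the speed reaches 90% of the target in a few seconds while carrying the K/(1 + K) steady-state offset discussed above.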

4.8 DC MOTOR SPEED CONTROL (OPEN-LOOP)

A DC motor is a good example of an electromechanical system, that is, a system that contains mechanical and electrical components interconnected to provide a composite function. It is a common actuator in various engineering systems, including mechatronics and robotics. The DC motor directly provides rotary motion and can provide translational motion when coupled with cylinders and cables. The electrical circuit of the armature and the free-body diagram of the rotor are shown in Figure 4.34.


Figure 4.32 Car cruise control (closed-loop system).

Figure 4.33 Car cruise control (closed-loop control).


Figure 4.34 Circuit of the armature and the rotor free-body diagram [11].

The modelling equations are obtained by using electrical circuit laws (KVL, KCL, etc.) and basic mechanics (Newton's laws). The armature is driven by the circuit shown in Figure 4.34, and the motor torque T is related to the armature current i(t) by a constant factor K, while the rotor and shaft are assumed to be rigid. Hence, by summing up the torques on the rotor's free-body diagram in Figure 4.34, an expression for the current i(t) in terms of the motor angular speed and acceleration is obtained as follows:

Jθ̈ + bθ̇ = T = Ki
⇔ i = (Jθ̈ + bθ̇)/K.   (4.17)

The back emf is related to the angular speed θ̇(t) such that

L di/dt + Ri = v − Kθ̇,   (4.18)

where v(t) is the supply voltage. Equations 4.17 and 4.18 are the core equations that can be used to produce different forms of the mathematical model for a DC motor.

4.8.1 INPUT-OUTPUT MODEL

In this form of the model, the system is expressed in terms of the input voltage v(t) and its derivatives, and the output angular speed ω(t) and its derivatives, while eliminating all other variables.

L d/dt[(Jθ̈ + bθ̇)/K] + R[(Jθ̈ + bθ̇)/K] = v − Kθ̇
(JL/K)d³θ/dt³ + (Lb/K)θ̈ + (RJ/K)θ̈ + (Rb/K)θ̇ = v − Kθ̇
(JL/K)ω̈ + (Lb/K)ω̇ + (RJ/K)ω̇ + (Rb/K)ω = v − Kω
(JL/K)ω̈ + (Lb/K)ω̇ + (RJ/K)ω̇ + (Rb/K)ω + Kω = v
JLω̈ + Lbω̇ + RJω̇ + Rbω + K²ω = Kv
(JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)ω = Kv.   (4.19)

Equation 4.19 is the input-output differential equation model where the input is the voltage v(t) and the output is the angular speed ω(t). Now, since the objective is to design a motor speed control system, it will be more useful to express the input-output differential equation with the desired (reference) motor speed ωr(t) as the input. This reference speed (maximum attainable speed) can be deduced from Equation 4.19 as the steady-state speed, which is a function of the input voltage, where all the derivatives are set to zero, i.e., ω̈ = ω̇ = 0.

(Rb + K²)ωss = Kv
⇔ ωr = ωss = Kv/(Rb + K²)
⇔ v = (Rb + K²)ωr/K.   (4.20)

Substituting this expression of v(t) in Equation 4.19 gives the input-output differential equation model with respect to the desired DC motor speed:

(JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)ω = (Rb + K²)ωr.   (4.21)

This is a much more useful form, as it depicts the desired motor speed ωr(t) as the input, where the motor speed ω(t) is the variable being controlled or monitored. Most textbooks consider Equation 4.19 as the input-output differential equation, which does not make much sense when one has to design a motor speed control system (open-loop or closed-loop). This is because, under such a representation (Equation 4.19), the system will have to be designed in terms of a reference voltage input v(t) (as the desired output) while the variable being controlled (the actual output) is the motor angular speed ω(t). While this is not incorrect and can be done, it is counter-intuitive and unnecessary.

4.8.2 TRANSFER FUNCTION MODEL

Two transfer functions can be obtained, one with respect to the input voltage and the other with respect to the desired motor speed. The transfer functions are obtained by taking the Laplace transforms of the input-output Equations 4.19 and 4.21. First, the transfer function with respect to the input voltage is considered.

KV(s) = [s²(JL) + s(Lb + RJ) + (Rb + K²)]Ω(s)
⇒ Ω(s)/V(s) = K/[(JL)s² + (Lb + RJ)s + (Rb + K²)].

Alternatively, this transfer function can be obtained by first finding the Laplace transforms of the angular speed Ω(s) and the voltage V(s), and then finding the ratio,

Ω(s)/V(s) = sΘ(s)/V(s)   (from ω = θ̇).   (4.22)

The expression for V (s) in terms of Θ(s) is obtained by using the Laplace transforms of the motor circuit Equation 4.18 and then substituting for the Laplace transform of the motor current I(s). The Laplace transform of Equation 4.18 gives

304

Design and Analysis of Control Systems

(Ls + R)I(s) = V(s) − KsΘ(s) ⇒ V(s) = (sL + R)I(s) + KsΘ(s).     (4.23)

The Laplace transform of the motor current I(s) is obtained from Equation 4.17 as follows:

s(Js + b)Θ(s) = KI(s) ⟹ I(s) = s(Js + b)Θ(s)/K.

Substituting this expression of I(s) in Equation 4.23 leads to

V(s) = (sL + R)[s(Js + b)Θ(s)/K] + KsΘ(s)

     = sΘ(s)[(sL + R)(Js + b)/K + K]

     = sΘ(s)[(sL + R)(Js + b) + K²]/K

⟹ Ω(s)/V(s) = sΘ(s)/V(s) = K/[(sL + R)(Js + b) + K²]

⇔ Ω(s)/V(s) = K/[(JL)s² + (Lb + RJ)s + (Rb + K²)].     (4.24)

Similarly, the transfer function with respect to the desired motor speed ωr(t) can be obtained by the two methods employed above. Taking Laplace transforms of the input-output Equation 4.21 gives

Ω(s)/Ωr(s) = (Rb + K²)/[(JL)s² + (Lb + RJ)s + (Rb + K²)].     (4.25)

This transfer function is more intuitive and relevant to the DC motor speed control than that given in Equation 4.24. However, it is essential to note that Equations 4.24 and 4.25 have the same characteristic equation, which means that they are essentially describing the same dynamic system. From Equation 4.25, the block diagram forms of both open-loop and closed-loop speed control systems with the desired motor speed as the reference can be easily derived.
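The claim that Equations 4.24 and 4.25 share the same characteristic equation is easy to confirm numerically. A sketch (in Python with scipy.signal rather than the chapter's MATLAB, using the Example 4.3 parameter values as an assumption) is:

```python
import numpy as np
from scipy import signal

# Illustrative motor parameters (taken from Example 4.3).
J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5
den = [J*L, L*b + R*J, R*b + K**2]     # common characteristic polynomial

G_v  = signal.TransferFunction([K], den)           # Eq. 4.24: Omega(s)/V(s)
G_wr = signal.TransferFunction([R*b + K**2], den)  # Eq. 4.25: Omega(s)/Omega_r(s)

# Same denominator, hence the same poles: the two transfer functions
# describe the same dynamic system, differing only in input scaling.
print(np.allclose(G_v.den, G_wr.den))                        # True
print(np.allclose(np.sort(G_v.poles), np.sort(G_wr.poles)))  # True
```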

4.8.3 BLOCK DIAGRAM MODEL (OPEN-LOOP)

The Block Diagram model of a system can be derived by analysing segments of the transfer function in Equation 4.25. Any general open-loop system can be represented as shown in Figure 4.26. The specific block diagram form is determined by identifying what the elements of this generic diagram correspond to in the particular system under consideration. In the motor speed control problem, the input is the reference motor speed ωr(t) and the actual motor speed ω(t) is the output. The plant transfer function G(s) is obtained from the transfer function in Equation 4.25. The controller is a unit proportional controller, i.e., D(s) = KP = 1. Thus, the block diagram form for the motor speed control system is obtained as illustrated in Figure 4.35.

305

Characteristics of Feedback Control Systems

Figure 4.35 Open-loop motor speed control.

4.8.4 STATE-VARIABLE MATRIX MODEL

There are two independent energy-storing elements, and hence two state variables, the armature current i(t) and the motor angular speed ω(t), are chosen. From the motor Equations 4.17 and 4.18 it follows that

θ̈ = −(b/J)θ̇ + (K/J)i ⇔ ω̇ = −(b/J)ω + (K/J)i

i̇ = −(K/L)θ̇ − (R/L)i + (1/L)v ⇔ i̇ = −(K/L)ω − (R/L)i + (1/L)v.

Extracting the coefficients of the state variables and the input v(t) produces the State-Variable Matrix model

[ω̇; i̇] = [−b/J, K/J; −K/L, −R/L] [ω; i] + [0; 1/L] [v]

y = [1, 0] [ω; i] + [0] [v].

This is the State-Variable Matrix model with respect to the input voltage v(t). The state-variable system using the desired motor speed as the input is obtained by substituting for v(t) in the above matrix system, i.e., replacing it with the expression

v = (Rb + K²)ωr/K.

The State-Variable Matrix model, which is now more meaningful, becomes

[ω̇; i̇] = [−b/J, K/J; −K/L, −R/L] [ω; i] + [0; (Rb + K²)/(LK)] [ωr]     (4.26)

y = [1, 0] [ω; i] + [0] [ωr].     (4.27)

This is a much more intuitive and practical State-Variable Matrix model of the motor speed control.
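A useful sanity check on Equations 4.26 and 4.27 is that the DC gain from ωr to ω is exactly one, i.e., the steady-state motor speed equals the desired speed. A Python sketch (the parameter values are the Example 4.3 assumptions; the book itself works in MATLAB):

```python
import numpy as np

J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5   # Example 4.3 values

# Equations 4.26 and 4.27: states x = [omega, i], input omega_r.
A = np.array([[-b/J,  K/J],
              [-K/L, -R/L]])
B = np.array([[0.0],
              [(R*b + K**2)/(L*K)]])
C = np.array([[1.0, 0.0]])

# DC gain = -C A^{-1} B; it should be exactly 1 for this model.
dc_gain = float(-C @ np.linalg.inv(A) @ B)
print(round(dc_gain, 6))   # 1.0
```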

4.9 DC MOTOR POSITION CONTROL (OPEN-LOOP)

To achieve motor position control, an extra state variable θ is required in addition to the variables ω and i. From the motor Equations 4.17 and 4.18 it follows that

J θ̈ + bθ̇ = Ki

L di/dt + Ri = v − Kθ̇.

4.9.1 INPUT-OUTPUT MODEL

This form is obtained by proceeding just as in the speed control case but expressing the equations in terms of the angular displacement θ (and its derivatives) and not ω (and its derivatives). In particular, from Equation 4.19 it follows that

(JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)ω = Kv

⇔ (JL) d³θ/dt³ + (Lb + RJ)θ̈ + (Rb + K²)θ̇ = Kv.     (4.28)

Similarly, with respect to the desired speed, the input-output differential equation is given by

(JL) d³θ/dt³ + (Lb + RJ)θ̈ + (Rb + K²)θ̇ = (Rb + K²)ωr.     (4.29)

4.9.2 TRANSFER FUNCTION MODEL

The transfer function can be obtained from that of the speed control system. From Equation 4.22 it follows that

Ω(s)/V(s) = sΘ(s)/V(s)

⇔ Θ(s)/V(s) = Ω(s)/(sV(s)) = (1/s) × (Speed control transfer function)

= K/{s[(sL + R)(Js + b) + K²]}

= K/{s[(JL)s² + (Lb + RJ)s + (Rb + K²)]}.

The transfer function with respect to the desired speed is obtained in a similar fashion.

Θ(s)/Ωr(s) = Ω(s)/(sΩr(s)) = (1/s) × (Speed control transfer function)

= (Rb + K²)/{s[(sL + R)(Js + b) + K²]}

= (Rb + K²)/{s[(JL)s² + (Lb + RJ)s + (Rb + K²)]}.


4.9.3 STATE-VARIABLE MODEL

To achieve motor position control, an extra state variable θ(t) is required in addition to the variables ω(t) and i(t).

θ̇ = ω

θ̈ = ω̇ = −(b/J)ω + (K/J)i

i̇ = −(K/L)ω − (R/L)i + ((Rb + K²)/(LK))ωr.     (4.30)

[θ̇; ω̇; i̇] = [0, 1, 0; 0, −b/J, K/J; 0, −K/L, −R/L] [θ; ω; i] + [0; 0; (Rb + K²)/(LK)] [ωr]     (4.31)

y = [1, 0, 0] [θ; ω; i] + [0] [ωr].     (4.32)

If it is desired to control both the motor angular speed and its position, then the output Equation 4.32 is expanded to include the speed as follows:

[y1; y2] = [1, 0, 0; 0, 1, 0] [θ; ω; i] + [0; 0] [ωr].     (4.33)

It is important to note that the DC motor system has two independent energy-storing elements: the inductor (electrical) and the rotor (mechanical). Hence, when motor position control is required, the number of variables is greater than the number of independent energy-storing elements. Thus, the third variable θ is a redundant state variable, which means the number of variables is not minimised. This fact is amply manifested by observing that the rows of matrix A in Equation 4.31 are not independent, whereas those of A in Equation 4.26 are independent. This means that for a system involving redundant states, the rank of A is less than the number of variables, whereas for a system without redundant states, the rank of A is the same as the number of variables. Put differently, A is not invertible for a system with redundant variables.

Example 4.3 The system matrices for speed control and position control are different, as shown below. In this example, the following values for the physical parameters are assumed. These values were derived by experiment from an actual motor.

moment of inertia of the rotor: J = 0.01 kg m²/s²
damping ratio of the mechanical system: b = 0.1 N m s
electromotive force constant: K = 0.01 N m/A
electric resistance: R = 1 Ω
electric inductance: L = 0.5 H
input (ωr): desired motor angular speed
output (θ): position of shaft
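The rank and invertibility remarks above can be checked numerically with the example's parameter values (a Python sketch in place of the book's MATLAB):

```python
import numpy as np

J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5   # Example 4.3 values

# Two-state speed-control A matrix (Equation 4.26): no redundant states.
A2 = np.array([[-b/J,  K/J],
               [-K/L, -R/L]])

# Three-state position-control A matrix (Equation 4.31): the column of
# zeros (theta feeds back into nothing) makes it rank-deficient.
A3 = np.array([[0.0,  1.0,  0.0],
               [0.0, -b/J,  K/J],
               [0.0, -K/L, -R/L]])

print(np.linalg.matrix_rank(A2))        # 2: rank equals number of states
print(np.linalg.matrix_rank(A3))        # 2: rank below number of states
print(abs(np.linalg.det(A3)) < 1e-12)   # True: A3 is not invertible
```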


Figure 4.36 DC motor speed control (open-loop).

Solution 4.3 The system matrices for both speed and position control are given below:

A = [0, 1, 0; 0, −b/J, K/J; 0, −K/L, −R/L],  B = [0; 0; (Rb + K²)/(LK)],  C = [1, 0, 0; 0, 1, 0],  D = [0; 0].

(4.34)

Figures 4.36 and 4.37 show the MATLAB plots of the speed and position responses of the motor with respect to time. Figure 4.36 shows the open-loop motor speed response. The motor achieves the desired speed of 1 rad/sec in about 5 sec (the settling time) without any steady-state error or overshoot. The settling time is too large and does not satisfy the rise time criterion of < 1 sec. However, the overshoot requirement (< 20%) and the steady-state error criterion (< 5%) are satisfied. Figure 4.37 shows the motor position response, which after the settling time of 5 sec is a linear curve of gradient 1 rad/sec.
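The open-loop behaviour described above can be reproduced with a short simulation. The book's plots are generated in MATLAB; an equivalent sketch in Python with scipy.signal (same assumed parameters) is:

```python
import numpy as np
from scipy import signal

J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5   # Example 4.3 values

# Open-loop speed transfer function, Equation 4.25.
G = signal.TransferFunction([R*b + K**2], [J*L, L*b + R*J, R*b + K**2])

t, w = signal.step(G, T=np.linspace(0, 8, 2000))

# Both poles are real and negative, so the response creeps up to the
# desired 1 rad/s with no overshoot and no steady-state error.
print(np.all(G.poles.real < 0))   # True
print(round(float(w[-1]), 3))     # 1.0
```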

4.10 DC MOTOR SPEED CONTROL (CLOSED-LOOP)

4.10.1 INPUT-OUTPUT MODEL

Consider the open-loop motor speed control input-output differential equation model,

(JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)ω = (Rb + K²)ωr.

In the closed-loop system, the motor speed error, ωr(t) − ω(t), is used as the input into the proportional controller KP, instead of the reference speed, ωr(t). The input to the system is still ωr(t)


Figure 4.37 DC motor position control (open-loop).

while the output is still ω(t). Thus the closed-loop input-output differential equation model takes the form

(JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)ω = (Rb + K²)KP(ωr − ω)

⇔ (JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)(1 + KP)ω = (Rb + K²)KPωr.     (4.35)

4.10.2 TRANSFER FUNCTION MODEL

The transfer function form is obtained by taking the Laplace transform of the input-output Equation 4.35 while assuming zero initial conditions.

Ω(s)/Ωr(s) = KP(Rb + K²)/[(JL)s² + (Lb + RJ)s + (Rb + K²)(1 + KP)].     (4.36)
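With a numerical value for KP, Equation 4.36 makes the proportional controller's steady-state error explicit: the DC gain is KP/(1 + KP). The gain KP = 10 below is an assumed illustrative value (the book does not state the gain used for its plots), and Python stands in for the book's MATLAB:

```python
from scipy import signal

J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5
Kp = 10.0                     # assumed proportional gain

q = R*b + K**2
# Closed-loop transfer function, Equation 4.36.
T = signal.TransferFunction([Kp*q], [J*L, L*b + R*J, q*(1 + Kp)])

# DC gain = Kp*q / (q*(1 + Kp)) = Kp/(1 + Kp) < 1: a steady-state
# error of 1/(1 + Kp) always remains with proportional control alone.
dc = float(T.num[-1] / T.den[-1])
print(round(dc, 4), round(1 - dc, 4))   # 0.9091 0.0909
```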

From this transfer function equation, the block diagram of the closed-loop speed control system with the desired motor speed as a reference can be derived.

4.10.3 BLOCK DIAGRAM MODEL (CLOSED-LOOP)

The block diagram form of the closed-loop system can be obtained in the same way as was done for the open-loop system, by comparing segments of the transfer function with a generic closed-loop block diagram. The input is the reference motor speed ωr(t) and the output is the actual motor speed ω(t). Any closed-loop motor speed control system can be represented by Figure 4.38. From the closed-loop input-output differential equation,

(JL)ω̈ + (Lb + RJ)ω̇ + (Rb + K²)ω = (Rb + K²)KP(ωr − ω).


Figure 4.38 General closed-loop control block diagram.

The block diagram is established by taking the Laplace transform of this equation and then comparing the elements of the resulting transfer function with the generic block diagram in Figure 4.38.

(JL)s²Ω(s) + (Lb + RJ)sΩ(s) + (Rb + K²)Ω(s) = (Rb + K²)KP[Ωr(s) − Ω(s)]

⇔ Ω(s)[(JL)s² + (Lb + RJ)s + (Rb + K²)] = KP(Rb + K²)[Ωr(s) − Ω(s)],

where the speed error and the control signals in Laplace transforms are given, respectively, as

E(s) = Ωr(s) − Ω(s)     (4.37)

U(s) = KP E(s).     (4.38)

The output speed is thus obtained from the control signal by the equation

Ω(s) = [(Rb + K²)/((JL)s² + (Lb + RJ)s + (Rb + K²))] U(s).     (4.39)

Putting Equations 4.37–4.39 in block diagram form (with unit negative feedback, i.e., β = 1) produces Figure 4.39. Compare this closed-loop block diagram for the motor speed control system with that of the corresponding open-loop system in Figure 4.35. It can be seen that the only difference between them is the unit negative feedback loop. The closed-loop block diagram can be deduced directly from the closed-loop transfer function.

Ω(s)/Ωr(s) = KP G(s)/(1 + KP G(s)), where G(s) = (Rb + K²)/[(JL)s² + (Lb + RJ)s + (Rb + K²)]

⇒ Ω(s)/Ωr(s) = KP(Rb + K²)/[(JL)s² + (Lb + RJ)s + (Rb + K²)(1 + KP)].

Figure 4.39 Closed-loop control block diagram (proportional).


Comparing this outcome from the Block Diagram model with the closed-loop Transfer Function in Equation 4.36, it is clear that the two are the same. This validates the Block Diagram model.

4.10.4 STATE-VARIABLE MODEL

The State-Variable Matrix model for the closed-loop control is established in the same way as that for the open-loop system. The key difference is that the desired speed ωr(t) in the state variable Equation 4.30 is replaced by KP[ωr(t) − ω(t)] such that

i̇ = −(K/L)ω − (R/L)i + ((Rb + K²)/(LK))KP(ωr − ω)

⇔ i̇ = −(K/L + KP(Rb + K²)/(LK))ω − (R/L)i + (KP(Rb + K²)/(LK))ωr.

The other two state-variable equations are not affected:

θ̇ = ω

ω̇ = −(b/J)ω + (K/J)i.

Hence the closed-loop control system State-Variable model, with both the motor angular position θ(t) and angular speed ω(t) as outputs, and the desired speed as the input, is given by the following two equations:

[θ̇; ω̇; i̇] = [0, 1, 0; 0, −b/J, K/J; 0, −(K/L + KP(Rb + K²)/(LK)), −R/L] [θ; ω; i] + [0; 0; KP(Rb + K²)/(LK)] [ωr]

[y1; y2] = [1, 0, 0; 0, 1, 0] [θ; ω; i] + [0; 0] [ωr].

This is the State-Variable model for the closed-loop control system.

Example 4.4 The system matrices are different from those of the open-loop system, as shown below. In this example, the following values for the physical parameters are assumed. These values were derived by experiment from an actual motor.

moment of inertia of the rotor: J = 0.01 kg m²/s²
damping ratio of the mechanical system: b = 0.1 N m s


electromotive force constant: K = 0.01 N m/A
electric resistance: R = 1 Ω
electric inductance: L = 0.5 H
input (ωr): desired motor angular speed
output (θ): position of shaft

Solution 4.4 The system matrices for both speed and position control can be obtained from the previous section. They are given by the following matrices:

A = [0, 1, 0; 0, −b/J, K/J; 0, −(K/L + KP(Rb + K²)/(LK)), −R/L],  B = [0; 0; KP(Rb + K²)/(LK)],  C = [1, 0, 0; 0, 1, 0],  D = [0; 0].

Figure 4.40 shows the MATLAB plot of the closed-loop speed response of the motor with respect to time. There is a steady-state error of 10% from the desired motor speed of 1 rad/sec, an overshoot of about 20% and a settling time of 1 sec. A dramatic improvement is obtained over the open-loop system on settling time (from 5 to 1 sec). The disadvantage of the proportional closed-loop control system with respect to the open-loop one is that there is always a steady-state error, whereas there is no steady-state error in the open-loop system. As the value of the proportional gain is increased, the steady-state error is reduced, but there are practical limits to the increase of KP. If this gain is too big, it may lead to instability and an impractical settling time (too short). Also, a large gain means excessive energy consumption. The motor position response is a curve which, after the settling time of 1 sec, is a linear curve of gradient 0.9 rad/sec (Figure 4.40).
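The closed-loop state-variable model can be simulated directly. The sketch below (Python/scipy instead of the book's MATLAB) uses the Example 4.4 parameters with an assumed gain KP = 10, which reproduces a steady-state speed of KP/(1 + KP) ≈ 0.91 rad/s, i.e., roughly the 10% steady-state error quoted above:

```python
import numpy as np
from scipy import signal

J, b, K, R, L = 0.01, 0.1, 0.01, 1.0, 0.5   # Example 4.4 values
Kp = 10.0                                    # assumed proportional gain

g = Kp*(R*b + K**2)/(L*K)    # input gain Kp(Rb + K^2)/(LK)
A = np.array([[0.0,  1.0,        0.0],
              [0.0, -b/J,        K/J],
              [0.0, -(K/L + g), -R/L]])
B = np.array([[0.0], [0.0], [g]])
C = np.array([[0.0, 1.0, 0.0]])              # output: motor speed omega
D = np.array([[0.0]])

t = np.linspace(0, 3, 3000)
t, w = signal.step((A, B, C, D), T=t)

print(round(float(w[-1]), 3))     # 0.909: steady-state error of ~9%
print(bool(np.max(w) > 1.0))      # True: the response overshoots
```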

4.11 MODELLING PID CONTROLLERS

Most industrial controllers use electricity or pressurised fluid such as oil or air as power sources. Consequently, controllers may also be classified according to the kind of power employed in the operation, such as pneumatic controllers, hydraulic controllers, or electronic controllers. The choice of controller is based on the nature of the plant and operational considerations such as safety, cost, availability, reliability, accuracy, weight and size. However, in terms of the control action, industrial controllers may be classified as:

• Two-position or on-off
• Proportional (P)
• Integral (I)
• Proportional-plus-integral (PI)
• Proportional-plus-derivative (PD)
• Proportional-plus-integral-plus-derivative (PID)


Figure 4.40 DC motor speed closed-loop control (proportional).

The Proportional-Integral-Derivative (PID) controller is the most widely used in engineering and scientific applications. It is used in both the continuous-time and discrete-time formats. In this chapter, the continuous-time model is dealt with, while the discrete-time one is presented in Chapter 8. Hence, in the following sections, the details of different formats of various continuous-time PID-type controllers are investigated and discussed. These controllers’ impact, drawbacks, tuning, performance, physical realisation, and industrial implementation are covered extensively. Emphasis is on illustrative examples and worked problems. Proportional feedback control can reduce error responses to disturbances. However, it still allows a non-zero steady-state error. In addition, proportional feedback increases the speed of response but has a much larger transient overshoot. The steady-state error can be eliminated when the controller includes a term proportional to the integral of the error. However, this comes at the expense of the further deterioration in the dynamic response. Finally, adding a term proportional to the derivative of the error can dampen the dynamic response. Combined, these three kinds of control form the classical PID controller, which is widely used in industrial applications involving different types of dynamic systems such as assembly plants, mobile vehicles, robots, and process plants. The rules for tuning the PID-type controllers have an extensive and fascinating background. To investigate the various PID-type controllers, in the DC motor closed-loop system already discussed, the proportional controller KP is replaced by a general controller D(s) as shown in Figure 4.41. D(s) can be a P, PI, PD or PID controller. Similarly, for a general feedback control system as shown in Figure 4.42, the controller D(s) is any of the following: P, PI, PD or PID. 
Thus, the individual control components (P, I and D) and the different PID-type controllers (PI, PD and PID) can be analysed and their mathematical representation discussed. The establishment of the three-term PID controller is considered term by term.


Figure 4.41 Closed-loop motor speed control (PID).

Figure 4.42 A general unity feedback controller.

4.11.1 PROPORTIONAL CONTROLLER (P)

The P component, being the first ingredient in the control loop, is generally the most impactful and most crucial of the three. It is responsible for adjusting the control variable proportionally to the amount of calculated error. Adjusting the P-gain value increases or decreases the rate of system response. Some processes employ a P-only control scheme. However, if the P controller is the only element engaged in a system, stable operation can occur, but a steady-state error might persist, and oscillation may become inherent in the process. The proportional (P) controller has already been developed in previous sections. The derivation of its model is presented here for completeness. When the feedback control signal is made to be linearly proportional to the error in the measured output, it is called proportional feedback. The general form of the proportional control model is obtained as follows:

u(t) = KP[r(t) − y(t)] = KP e(t)

U(s) = KP E(s), where E(s) = R(s) − Y(s)

D(s) = U(s)/E(s) = KP.

This is the Transfer Function model of a proportional controller. It is important to note that r(t) is the system reference input, e(t) is the error signal, u(t) is the control signal, while the system output is represented interchangeably by y(t) or c(t). The corresponding Laplace transforms of these signals are R(s), E(s), U(s), Y(s) and C(s), respectively. Figure 4.43 shows the Block Diagram model of a control system with a proportional controller. Such a system can be viewed as an amplifier with a "knob" to adjust the gain up or down. As already indicated, the configuration with only proportional control may have a steady-state offset (or droop) in response to a constant reference input. It may not be entirely capable of rejecting a continuous disturbance. For higher-order systems, large values of the proportional feedback gain will typically lead to instability. There is an upper limit on the proportional feedback gain for most systems to


Figure 4.43 The proportional controller (P).

achieve a well-damped stable response, and this limit may still have an unacceptable steady-state error. Therefore, there is a limit on how much the errors can be reduced by using proportional feedback only. One way to improve the steady-state accuracy of control without adding extremely high proportional gains is to introduce integral control, discussed in the following section.
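The steady-state offset of proportional-only control is easy to see on a toy example. The first-order plant and the gain below are hypothetical illustrations (not the motor model), sketched in Python rather than the book's MATLAB:

```python
from scipy import signal

# Hypothetical plant G(s) = 1/(s + 1) under pure proportional control.
Kp = 4.0

# Unity feedback: T(s) = Kp*G/(1 + Kp*G) = Kp/(s + 1 + Kp).
T = signal.TransferFunction([Kp], [1.0, 1.0 + Kp])

# DC gain Kp/(1 + Kp) = 0.8: a constant 20% offset remains no matter
# how long the loop runs; a larger Kp shrinks it but never removes it.
dc = float(T.num[-1] / T.den[-1])
print(dc)   # 0.8
```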

4.11.2 PROPORTIONAL AND INTEGRAL CONTROLLER (PI)

The primary reason for integral control is to reduce or eliminate constant steady-state errors, but this benefit typically comes at the cost of a worse transient response. Integral feedback has the form

u(t) = KP[r(t) − y(t)] + KI ∫₀ᵗ [r(t) − y(t)] dt

     = KP e(t) + KI ∫₀ᵗ e(t) dt

U(s) = KP E(s) + (KI/s)E(s) = E(s)(KP + KI/s)

D(s) = U(s)/E(s) = KP + KI/s     (4.40)

                 = (KP s + KI)/s.     (4.41)

This demonstrates the modelling of a generic PI controller, where the two parameters or gains KP and KI are chosen to give the desired system dynamics. These gains are normally known as the proportional and integral gains, respectively. Put differently, Equation 4.41 is the Transfer Function model of a PI controller, while Figure 4.44 shows the Block Diagram model of a control system with such a controller. This feedback controller has the primary virtue that it can provide a finite value of the control signal with no error signal input e(t). This comes about because u(t) is a function of all past values of e(t) rather than just the current value, as in the proportional case. Therefore, past errors e(t) "charge up" the integrator to some value that will remain, even if the error becomes zero and stays there. Several limitations of proportional control are resolved by integral control. The steady-state error in response to this class of load disturbance is completely eliminated. Thus, as long as the system remains stable, the system output equals the desired output regardless of the value of KP. The final concern is with the dynamic response. If the designer wishes to increase the dynamic speed of response with a large integral gain, then the response becomes very oscillatory. In some cases, a way to avoid this behaviour is to use both proportional and integral control simultaneously.


Figure 4.44 The PI controller.

In general, even though integral control improves the steady-state tracking response, it has the effect of slowing down the response while the overshoot is kept unchanged. With both proportional and integral control, it can be seen that by choosing KP and KI, the designer has independent control over two of the three terms in the characteristic equation and can provide a better transient response than can be achieved with integral control alone.
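Repeating the earlier toy experiment with an integral term shows the offset disappearing. The plant G(s) = 1/(s + 1) and the gains below are hypothetical illustrations, sketched in Python:

```python
from scipy import signal

# Hypothetical plant G(s) = 1/(s + 1) with PI control D(s) = Kp + Ki/s.
Kp, Ki = 4.0, 3.0

# Open loop D*G = (Kp*s + Ki)/(s*(s + 1)); closing with unity feedback:
# T(s) = (Kp*s + Ki)/(s^2 + (1 + Kp)*s + Ki).
T = signal.TransferFunction([Kp, Ki], [1.0, 1.0 + Kp, Ki])

# DC gain Ki/Ki = 1: zero steady-state error, whatever the gain values,
# as long as the closed loop remains stable.
dc = float(T.num[-1] / T.den[-1])
print(dc)   # 1.0
```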

4.11.3 PROPORTIONAL AND DERIVATIVE CONTROLLER (PD)

Derivative feedback (also called rate feedback) is obtained by taking the derivative of the error signal and hence has the form

u(t) = KD (d/dt)[r(t) − y(t)].

It is used in conjunction with proportional and/or integral feedback to increase the damping and improve the system's stability. In practice, pure derivative feedback is not practical to implement; however, its approximations can be implemented. Another reason derivative feedback is not used by itself is that if the error signal e(t) remains constant, then the output of the derivative controller would be zero. A proportional or integral term would be needed to provide a control signal at this time. In some cases, proportional and derivative control are combined to yield the Proportional and Derivative (PD) controller. The PD controller Transfer Function model is obtained as follows:

u(t) = KP[r(t) − y(t)] + KD (d/dt)[r(t) − y(t)]

     = KP e(t) + KD de(t)/dt

U(s) = KP E(s) + KD sE(s) = E(s)(KP + KD s)

D(s) = U(s)/E(s) = KP + KD s.

This is the Transfer Function model of a PD controller. Figure 4.45 shows a Block Diagram model of a control system with a PD controller. In the derivative control, the correction depends on the rate


Figure 4.45 The PD controller.

of change of error. As a result, a controller with derivative control exhibits an anticipatory response: proportional-derivative behaviour leads the proportional-only action by 1/KD seconds. Derivative control may be introduced into the feedback loop in two ways: applied to the tachometer in a DC motor, or as part of a dynamic compensator in the forward loop. In both cases, the closed-loop characteristic equation is the same, but the zeros from r(t) to y(t) are, of course, different; also, with the derivative in feedback, the reference is not differentiated, which may be a desirable result.

4.11.4 THE GENERAL PID CONTROLLER

For control over both steady-state and transient errors, all three control components discussed so far should be combined to get Proportional-Integral-Derivative (PID) control. Here the control signal is a linear combination of the error, the time integral of the error, and the time rate of change of the error. All three gain constants are adjustable. The PID controller contains all three control components (proportional, derivative, and integral). Its Transfer Function model can be derived as follows:

u(t) = KP[r(t) − y(t)] + KD (d/dt)[r(t) − y(t)] + KI ∫₀ᵗ [r(t) − y(t)] dt

     = KP e(t) + KD de(t)/dt + KI ∫₀ᵗ e(t) dt

U(s) = KP E(s) + KD sE(s) + (KI/s)E(s) = E(s)(KP + KI/s + KD s)

Therefore,

D(s) = U(s)/E(s) = KP + KI/s + KD s     (4.42)

                 = (KD s² + KP s + KI)/s.     (4.43)


These two representations (Equations 4.42 and 4.43) are different but equivalent forms of the same Transfer Function model of a PID controller in the generic format. It is also referred to as the general PID controller. Its corresponding reduced Block Diagram model is shown in Figure 4.46. This is how the general PID controller is modelled, where the three parameters or gains KP, KI, and KD are chosen to give the desired system dynamics. These gains are normally known, respectively, as the proportional, integral and derivative gains. Figure 4.47 shows a Block Diagram model of a control system with a general PID controller. To design a particular control loop, the engineer merely has to adjust the constants KP, KI, and KD in the aforementioned equation to arrive at acceptable performance. This adjustment process is called tuning the controller. Increasing KP and KI tends to reduce system errors but may not be capable of also producing adequate stability, while increasing KD tends to improve stability. For example, the characteristic equation will have three coefficients and three parameters (KP, KI, and KD), and thus, in theory, the poles of such a system can be set wherever desired. Combining the three control components in this system yields complete control over the system dynamics. The PID controller provides acceptable error reduction and sufficient stability and damping. PID controllers are so effective that PID control is widely used in processing industries such as petroleum refining, papermaking, and metalworking. It is prudent to note that when KI = 0, the PID controller is reduced to a general PD controller. Similarly, when KD = 0, the general PID controller becomes a general PI controller. Furthermore, the general PID controller is reduced to a Proportional controller when KI = 0 and KD = 0. These results can be easily deduced from the Block Diagram and Transfer Function models of these four general PID-type controllers.
Consequently, to design, implement or simulate the four general PID-type controllers, only the general PID controller needs to be considered; the other three controllers are obtained by assigning the value zero to the parameters KI and/or KD.

Figure 4.46 The general PID controller: reduced Block Diagram model.

Figure 4.47 The general PID controller.
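The reduction of the general PID controller to its PI, PD and P special cases can be captured in a small helper. This is an illustrative sketch (Python with scipy.signal, not from the book; the function name `pid_tf` and the gain values are assumptions):

```python
from scipy import signal

def pid_tf(Kp, Ki=0.0, Kd=0.0):
    """General PID controller D(s) = (Kd*s^2 + Kp*s + Ki)/s (Eq. 4.43).

    Setting Kd = 0 gives a PI controller, Ki = 0 a PD controller, and
    Ki = Kd = 0 a plain proportional controller.
    """
    if Ki == 0.0 and Kd == 0.0:
        return signal.TransferFunction([Kp], [1.0])   # pure P gain
    num = [Kd, Kp, Ki] if Kd != 0.0 else [Kp, Ki]
    return signal.TransferFunction(num, [1.0, 0.0])   # .../s

D_pid = pid_tf(2.0, 1.0, 0.5)   # full PID (illustrative gains)
D_pi  = pid_tf(2.0, 1.0)        # Kd = 0 -> PI
D_p   = pid_tf(2.0)             # Ki = Kd = 0 -> P

print(D_pid.num.tolist(), D_pid.den.tolist())   # [0.5, 2.0, 1.0] [1.0, 0.0]
print(D_pi.num.tolist())                        # [2.0, 1.0]
```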


4.11.5 THE STANDARD PID CONTROLLER

An alternative and convenient form of the general PID controller is obtained by expressing the three controller gains (KP, KI, and KD) in terms of one controller gain and two time constants (KP, TI, and TD) as follows:

D(s) = KP + KI/s + KD s     (4.44)

     = KP(1 + 1/(TI s) + TD s),     (4.45)

where the three parameters (KP, TI and TD) now give a complete description of the PID controller. KP is the proportional gain (as before), TI is the integral (reset) time constant, and TD is the derivative time constant. The two PID controller models (general and standard) given in Equations 4.44 and 4.45, respectively, are used interchangeably, where

TI = KP/KI and TD = KD/KP.

This alternative form of the PID controller given by Equation 4.45 is called the standard PID controller, while the traditional form given by Equation 4.44 is referred to as the general PID controller, as indicated earlier. The standard PID controller is discussed and explored in more detail in Section 4.15. The alternative models for the PI and PD controllers are easily deduced from the traditional general PI and PD controllers as follows:

D(s) = KP + KI/s = KP(1 + 1/(TI s))     (PI)     (4.46)

D(s) = KP + KD s = KP(1 + TD s).     (PD)     (4.47)
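The conversions TI = KP/KI and TD = KD/KP can be wrapped in two small helper functions (illustrative Python; the function names and gain values are assumptions, not from the book):

```python
# Convert between the general PID gains (Kp, Ki, Kd) and the standard
# parameters (Kp, Ti, Td) of Equations 4.44 and 4.45.
def general_to_standard(Kp, Ki, Kd):
    return Kp, Kp / Ki, Kd / Kp        # (Kp, Ti, Td)

def standard_to_general(Kp, Ti, Td):
    return Kp, Kp / Ti, Kp * Td        # (Kp, Ki, Kd)

Kp, Ti, Td = general_to_standard(2.0, 4.0, 0.5)
print(Kp, Ti, Td)                        # 2.0 0.5 0.25
print(standard_to_general(Kp, Ti, Td))   # (2.0, 4.0, 0.5)
```

The round trip returning the original gains confirms that the two parameterisations carry the same information.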

These alternative forms of the PI and PD controllers are called the standard PI and PD controllers, respectively. They are discussed in greater detail in Section 4.15.2. Using the models of PID controllers that have been exhaustively developed in this section, the characteristics of the different types of these controllers (P, PI, PD, and PID) can be studied, illustrated and verified. This is achieved by implementing the controllers in MATLAB for systems with simple reference inputs and disturbances such as impulse, step and ramp functions.

4.11.6 SUMMARY OF PID CONTROLLER CHARACTERISTICS

The advantages or impact of the different PID-type controllers can be summarised as follows:

• P controller works but has poor transient behaviour (high overshoot and extended oscillations) and a (constant) steady-state error.
• PI controller manifests improved performance (compared to the P controller) by eliminating the steady-state error, but the poor transient behaviour (high overshoot and extended oscillations) persists.


• PD controller manifests improved performance (compared to the P controller) by eliminating the poor transient behaviour (high overshoot and extended oscillations), but the constant steady-state error persists.
• PID controller manifests the best performance by eliminating both the poor transient behaviour (high overshoot and extended oscillations) and the constant steady-state error.

These characteristics of the P, PI, PD, and PID controllers are illustrated in Figure 4.48, which shows the responses to a unit step input. The benefits and limitations of the three components of a PID controller (P, I, and D) can be explored further by studying the responses of a system to a unit step disturbance (Figure 4.49) and a unit step reference input (Figure 4.50), when three controllers, P, PI and PID, are used. The effects of the three components are deduced from the plots in Figures 4.49 and 4.50 and summarised in Table 4.1.

Figure 4.48 Comparison of P, PI, PD and PID controllers.


Figure 4.49 Transient responses to a step disturbance.

Figure 4.50 Transient responses to a step reference input.

Example 4.5 Consider the DC closed-loop motor control system considered in previous examples. Instead of the proportional controller KP, use D(s) as a PI, PD or PID controller. The MATLAB code is shown in a later section (Figure 4.51).

Solution 4.5 The overshoot is reduced to 0%, the response time to 1 sec, and the steady-state error to 0%. There is also improved stability, damping, and general system response. The motor position response is a curve which, after the settling time of 1 sec, is a linear curve of gradient 1 rad/sec (the desired motor speed). These results demonstrably illustrate the benefits of the three elements of the PID controller and how they compensate for each other's limitations. The integrator (I) eliminates the steady-state error introduced by the proportional controller (P). The poor system transient response and poor stability introduced by the integrator (I) are resolved by the derivative controller (D), which improves system damping and stability.


Design and Analysis of Control Systems

Table 4.1 Summary of PID Controller Characteristics

Controller   Benefits                    Drawbacks
P (KP)       Simple                      Steady-state error
             Cheap                       Large overshoot & settling time
             Fast response               Poor transient response
I (KI)       Eliminates steady-state     Prone to instability (large gains)
             error                       Poor damping
                                         Large overshoot & settling time
                                         Poor transient response
                                         Reduces stability
D (KD)       Increases damping           Steady-state error
             Reduces overshoot           Cannot be used alone
             Reduces settling time
             Improved transients
             Improves stability

Figure 4.51 DC motor speed closed-loop control (PID).

4.11.7 PERFORMANCE OF PID-TYPE CONTROLLERS

As explained earlier, and illustrated in Figure 4.48 and Table 4.1, the effect of the four PID-type controllers can be summarised as follows: The P controller does not adequately address both the transient system behaviour and steady-state error. The PI controller reduces the steady-state error to zero, but it has no impact on transient behaviour. The PD controller addresses transient behaviour but has no impact on the steady-state error. The PID controller addresses both the steady-state error and the transient system behaviour.
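The qualitative claims above can be checked numerically. The following is a plain-Python sketch (not the book's MATLAB code; the first-order plant G(s) = 1/(s + 1) and the gains are illustrative assumptions) comparing the closed-loop unit step responses under P, PI and PID control:

```python
# Closed-loop unit step responses of P, PI and PID controllers on an assumed
# first-order plant G(s) = 1/(s + 1), integrated with a simple Euler scheme.

def simulate(kp, ki, kd, t_end=20.0, dt=0.001):
    y = 0.0          # plant output
    integ = 0.0      # integral of the error
    prev_e = 1.0     # e(0) = r - y(0) = 1; avoids a derivative kick at t = 0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        y += (-y + u) * dt               # plant dynamics: y' = -y + u
    return y                             # (approximate) steady-state output

y_p   = simulate(4.0, 0.0, 0.0)   # P  : settles near 4/5 = 0.8 (constant offset)
y_pi  = simulate(4.0, 2.0, 0.0)   # PI : integral action removes the offset
y_pid = simulate(4.0, 2.0, 0.5)   # PID: offset removed and damping improved
```

For the P controller the final value 0.8 follows from the closed-loop DC gain KP/(1 + KP) = 4/5, so the steady-state error persists; the integral term in PI and PID drives the error to zero, as stated in the text.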


4.12 MATLAB IMPLEMENTATION

There are various MATLAB tools that can be used to implement PID-type controllers. These include standard MATLAB code, the Control System Toolbox, and Simulink Control Design. These techniques were introduced and discussed extensively in Chapter 1. MATLAB basics and exercises are presented in Appendix B.

4.12.1 STATE-VARIABLE MATRIX MODEL

If the state-variable matrices (A, B, C, D) are available for the open-loop and closed-loop systems, then the response of the system to different input functions such as step, impulse, and ramp is easily obtained in MATLAB (Appendix B). For example, with a step reference input, wr(t) = constant, the system responses with respect to time t are obtained as follows:

yol = step(Aol, wr*Bol, Col, Dol, 1, t);
ycl = step(Acl, wr*Bcl, Ccl, Dcl, 1, t);

where the number 1 indicates that there is one input. In most complicated closed-loop systems the state-variable matrices are not readily available. In these cases, it is easier to use the Transfer Function model.

4.12.2 TRANSFER FUNCTION MODEL

From the open-loop or closed-loop transfer function, which is expressed as a ratio of two polynomials in s, the numerator and denominator coefficient vectors of these polynomials are obtained: numol and denol for open-loop systems, numcl and dencl for closed-loop systems. These are then used to simulate the response of the system to different input functions such as step, impulse and ramp. For example, for a step input, the open-loop system response is obtained as follows:

Tol = D(s)G(s) = n(s)/m(s) ⇛ numol/denol

yol = step(numol, denol, t).

The closed-loop system response for a step input is similarly computed using the closed-loop transfer function:

Tcl = D(s)G(s)/[1 + βD(s)G(s)] = q(s)/p(s) ⇛ numcl/dencl

ycl = step(numcl, dencl, t).

It is not necessary to completely establish the open-loop or closed-loop transfer function before MATLAB implementation. MATLAB can be used to compute these from the controller D(s) and the plant gain G(s) by using the principle of convolution. Unity feedback is assumed (β = 1). The plant gain G(s) is represented by

G(s) = b(s)/a(s) ⇛ num/den.

The controller D(s) is represented by the block diagram in Figure 4.46 and hence its transfer function can be expressed as follows:

D(s) = (KD s² + KP s + KI)/s = g(s)/f(s) ⇛ numcon/dencon.


Therefore, [numol, denol] can be calculated from [num, den] and [numcon, dencon], and then [numcl, dencl] from [numol, denol]. Hence the general sequence of MATLAB commands will be as follows:

num=[- - -];
den=[- - -];
numcon=[KD, KP, KI];
dencon=[1 0];
numol=conv(num,numcon);
denol=conv(den,dencon);
[numcl,dencl]=cloop(numol,denol);
yol=step(numol,denol,t);
ycl=step(numcl,dencl,t);

4.12.3 SAMPLE MATLAB CODE: MOTOR SPEED PID CONTROL

The sample MATLAB code below is that of the closed-loop control system for the DC motor discussed in the section using a PID controller. The results of this implementation have already been presented. This code helps to illustrate how control systems can be implemented using MATLAB.

% DC Motor Speed Closed-Loop Control (PID)
t=input('Input the amount of time to look at in sec: ');
J=0.01; b=0.1; K=0.01; R=1; L=2;
KP=10; KI=5; KD=1;
num=[(R*b)+(K^2)];
den=[(J*L) ((J*R)+(L*b)) ((b*R)+K^2)];
numcon=[KD, KP, KI];
dencon=[1 0];
numol=conv(num,numcon);
denol=conv(den,dencon);
[numcl,dencl]=cloop(numol,denol);
t=0:0.01:t;
y=step(numcl,dencl,t);
plot(t,y), grid
title('DC Motor Speed Closed-Loop Control (PID)')
xlabel('Time [sec]')
ylabel('Motor Speed [rad/sec]')
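What the legacy conv() and cloop() calls in the script above do can be mirrored in plain Python (an illustrative sketch, not the book's code): conv() multiplies polynomial coefficient vectors, and cloop() closes a unity-feedback loop. With the same motor constants, the closed-loop DC gain confirms the zero steady-state error reported for this design:

```python
# Mirror of MATLAB's conv()/cloop() for the DC motor speed PID example,
# followed by a DC-gain check of the closed-loop transfer function.

def conv(a, b):
    """Polynomial multiplication; coefficients listed highest power first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def cloop(num, den):
    """Unity-feedback closure: numcl = num, dencl = den + num
    (coefficients added with the constant terms aligned)."""
    pad = [0.0] * (len(den) - len(num)) + num
    return num, [d + n for d, n in zip(den, pad)]

J, b, K, R, L = 0.01, 0.1, 0.01, 1, 2
KP, KI, KD = 10, 5, 1
num = [R * b + K**2]
den = [J * L, J * R + L * b, b * R + K**2]
numol = conv(num, [KD, KP, KI])
denol = conv(den, [1, 0])
numcl, dencl = cloop(numol, denol)

# DC gain = numcl(0)/dencl(0); the integrator makes it exactly 1,
# i.e. zero steady-state error for a step reference.
dc_gain = numcl[-1] / dencl[-1]
```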

4.13 FURTHER ANALYSIS OF PID-TYPE CONTROLLERS

As shown earlier, the expression for the general PID controller can be derived from first principles as follows:

u(t) = KP [r(t) − y(t)] + KI ∫ [r(t) − y(t)] dt + KD d/dt [r(t) − y(t)]

u(t) = KP e(t) + KI ∫ e(t) dt + KD de(t)/dt


Taking the Laplace transform leads to

U(s) = KP E(s) + (KI/s) E(s) + sKD E(s)

=⇒ U(s) = E(s) [KP + KI/s + sKD]

=⇒ D(s) = U(s)/E(s) = KP + KI/s + KD s = (KD s² + KP s + KI)/s

As discussed earlier, this is the PID controller, where KP refers to the proportional component, KI refers to the integrator component, and KD refers to the derivative component. It is the three-component controller popular in industrial control systems. Its parallel form, depicted in Figure 4.52, where the parameters are treated as simple gains, is the most general and flexible. It is referred to as the general PID controller. There is a pole at the origin and two zeros, which are the roots of a quadratic equation in s. Put differently, the controller has a pole at s = 0, which provides integral action, together with two zeros that occur at the roots of a quadratic equation obtained as follows:

(KD s² + KP s + KI)/s = 0

=⇒ s² + (KP/KD)s + KI/KD = 0

As explained in previous sections, the PID controller is the best of the PID-type controllers: it improves the system transient behaviour and leads to a steady-state error of zero.

4.13.1 POLES AND ZEROS OF PID-TYPE CONTROLLERS

The poles and zeros of PID-type controllers determine their behaviour and hence their control impact on dynamic systems. They are analysed in sequence in this section.

Figure 4.52 The general PID controller.

4.13.1.1 Proportional Controller (P)

Consider the general control system depicted in Figure 4.53. If the controller D(s) is a proportional controller (P), as derived earlier, it follows that:

D(s) = KP

This means that the controller has neither poles nor zeros.

4.13.1.2 PI Controller

The expression of the PI controller, derived earlier, is given by

D(s) = KP + KI/s = KP (s + KI/KP)/s

This means that the PI controller has a pole at s = 0 and a zero at

s = −KI/KP

The PI controller reduces the steady-state error to zero, but it has no impact on transient behaviour. The complete PID controller is the best as it improves transient behaviour (damping), reduces the steady-state error to zero and improves system stability.

4.13.1.3 PD Controller

The expression of the PD controller, derived earlier, is given by

D(s) = KP + KD s = KD (s + KP/KD),

where KP refers to the proportional component and KD denotes the derivative component. This final expression of D(s) means that the PD controller has no poles but a zero at

s = −KP/KD.

The PD Controller addresses transient system behaviour but has no impact on the steady-state error.

Figure 4.53 A general unity feedback controller.

4.13.1.4 The General PID Controller

As done for the P, PI, and PD controllers, the poles and zeros of the general PID controller can be similarly explored. It has already been established that the general PID controller has two zeros which are solutions to the quadratic equation given by:

s² + (KP/KD)s + KI/KD = 0

The general solution to a quadratic equation is stated as follows:

as² + bs + c = 0

=⇒ s = [−b ± √(b² − 4ac)]/2a

This means the general PID controller has two zeros defined by:

s = [−KP/KD ± √((KP/KD)² − 4KI/KD)]/2

= −KP/(2KD) ± (1/(2KD)) √(KP² − 4KI KD)

From analysing this equation of the zeros it is clear that there are three classes of zeros for the PID controller:

1. Two real simple zeros occur when KP² − 4KI KD > 0 =⇒ KP² > 4KI KD
2. Two complex zeros occur when KP² − 4KI KD < 0 =⇒ KP² < 4KI KD
3. One repeated real simple zero occurs at s = −KP/(2KD), when KP² − 4KI KD = 0

The importance of the analysis of poles and zeros of PID-type controllers is that, depending on the action required of the controller, its poles and zeros can be placed at desired positions by giving the parameters KP, KD, and KI specific values, ranges or conditions. The PID controller addresses both the transient system behaviour and steady-state performance. It produces a system response that settles quickly and smoothly while having a minimised (if not zero) steady-state error.

4.13.1.5 How the General PID Controller Works

In this section, the operational steps of the general PID controller are summarised for emphasis. The controller was derived earlier, and is given by the following Transfer Function model:

D(s) = KP + KI/s + KD s = KD [s² + (KP/KD)s + KI/KD]/s


Hence, the PID controller is a "three-term" controller that has three components: proportional (P), integral (I) and derivative (D), hence the name PID. It is a control loop mechanism employing feedback, widely used in industrial control systems and various other applications requiring continuously modulated control. The controller continuously calculates an error signal e(t) as the difference between a desired reference function r(t) and a measured process variable c(t), and generates a corrective control signal u(t) based on proportional, integral, and derivative terms. Figure 4.54 depicts the role and location of a PID controller in a control system, while Figure 4.55 demonstrates how the PID works. As explained earlier, this is the three-component controller popular in industrial control systems. As already indicated, there is a pole at the origin and two zeros, which are the roots of a quadratic equation in s. The pole at the origin and one of the two zeros can be designed as an ideal integrator, while the other zero is used to provide ideal derivative action. As already established, the PID controller has a pole at s = 0, which provides integral action, together with two zeros that occur at the roots of the quadratic equation given by

s² + (KP/KD)s + KI/KD = 0
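The control law described above, u(t) = KP e(t) + KI ∫e(t)dt + KD de(t)/dt, can be sketched in discrete time as follows (an illustrative plain-Python sketch, not the book's implementation; the sample gains and time step are assumptions):

```python
# Minimal discrete-time PID: the integral is a running sum and the
# derivative a backward difference of the error e = r - c.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, reference, measurement):
        error = reference - measurement          # e(t) = r(t) - c(t)
        self.integral += error * self.dt         # accumulates past errors
        if self.prev_error is None:
            derivative = 0.0                     # no slope on the first sample
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.1)
u1 = pid.update(1.0, 0.0)   # e = 1.0:  u = 2*1.0 + 1*0.1 + 0       =  2.1
u2 = pid.update(1.0, 0.5)   # e = 0.5:  u = 2*0.5 + 1*0.15 + 0.5*(-5) = -1.35
```

Note how the three terms act on the present (P), the accumulated past (I) and the predicted trend (D) of the same error signal.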

Figure 4.54 Locating the general PID controller in a control system.

Figure 4.55 How the general PID controller works.


The s² term implies a quadratic equation which produces the two zeros, while the s in the denominator implies one pole at the origin. The PID controller is the best of the PID-type controllers: it improves the transient system behaviour and leads to zero steady-state error.
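The pole-zero picture of the general PID controller can be computed directly. The following plain-Python sketch (not from the book; the gain values are arbitrary illustrations) returns the two zeros of D(s) = (KD s² + KP s + KI)/s and classifies them according to the sign of KP² − 4KI KD:

```python
# Zeros of the PID controller: roots of s^2 + (KP/KD)s + KI/KD = 0.
import cmath

def pid_zeros(kp, ki, kd):
    """Return the two controller zeros and a label for their class."""
    disc = kp**2 - 4.0 * ki * kd
    root = cmath.sqrt(disc)                 # works for both signs of disc
    z1 = (-kp + root) / (2.0 * kd)
    z2 = (-kp - root) / (2.0 * kd)
    if disc > 0:
        kind = "two real zeros"
    elif disc < 0:
        kind = "two complex zeros"
    else:
        kind = "one repeated real zero"
    return z1, z2, kind

# KP^2 > 4 KI KD  -> real zeros;  KP^2 < 4 KI KD  -> complex zeros
z1, z2, kind_real = pid_zeros(kp=5.0, ki=1.0, kd=1.0)   # 25 > 4
_, _, kind_cplx   = pid_zeros(kp=1.0, ki=5.0, kd=1.0)   # 1 < 20
```

The product of the two zeros always equals KI/KD, which is a quick consistency check on any computed pair.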

4.14 TUNING THE GENERAL PID CONTROLLER

Tuning techniques are used to develop a PID controller that will meet steady-state and transient specifications for tracking input references and rejecting disturbances. This is called 'tuning' the controller and entails determining the appropriate parameters KP, KI, and KD. These techniques do not require complete dynamic models of the process in the form of equations of motion or transfer functions. There are three basic manual control tuning approaches: (i) Trial and Error Method, (ii) Quarter Decay Ratio Method (also called Open-Loop Method); and (iii) Stability Limit Method (also called Closed-Loop Method). Ziegler and Nichols developed the last two approaches.

4.14.1 TRIAL AND ERROR METHOD

This strategy entails finding suitable values for the parameters KP, KI, and KD by conducting a manual search, where a series of values are experimented with. The controller parameters are set to an appropriate set of initial values, which are usually low gain, long integral time and short derivative time. A closed-loop step test is performed to see if the closed-loop behaviour is acceptable or not. If the outcome is not satisfactory, the parameters are adjusted, one at a time, until the behaviour is satisfactory and thus acceptable. The Trial and Error Method is the simplest approach to tuning PID controllers. However, it has several drawbacks:

• It consumes a lot of energy and time in the determination of the control gains.
• The execution of a step test might take a long time, particularly if the plant's settling time and system dynamics are lengthy.
• For multi-dimensional systems, the test searches are quite time-consuming because the optimum value of one parameter is a function of other parameters.
• The use of repetition can be ineffective and redundant for higher classes of control systems.
• It is losing popularity to more advanced and AI-driven techniques.

4.14.2 QUARTER DECAY RATIO METHOD

In the second method, the choice of controller parameters is based on a decay ratio of approximately 0.25. This means that a dominant transient decays to a quarter of its value after one period of oscillation, as shown in Figure 4.56. A quarter decay corresponds to ξ = 0.21 and is a good compromise between quick response and adequate stability margins. A large number of process control systems exhibit a process reaction curve. R represents the slope of the curve, and the intersection of the line with the time axis identifies the time delay L = td. The equations are simulated for the system on an analogue computer, and the control parameters are adjusted until the transients show a decay of 25% in one period. The regulator parameters suggested by Ziegler and Nichols for the common controller types are shown in Table 4.2. It is important to note that the general expression for the PID controller used is given by

D(s) = KP + KI/s + KD s.

Figure 4.56 Quarter decay ratio.

Table 4.2 PID Controller Tuning: Quarter Decay Ratio

Type of Controller    Optimum Gain
P                     KP = 1/RL
PI                    KP = 0.9/RL,  KI = 0.27/RL²
PID                   KP = 1.2/RL,  KI = 0.6/RL²,  KD = 0.6/R

4.14.3 STABILITY LIMIT METHOD

In the third method, the criteria for adjusting the parameters are based on evaluating the system at the limit of stability rather than on taking a step response. The proportional gain is increased until continuous oscillations are observed, that is, until the system becomes marginally stable. The corresponding gain Ku (also called the ultimate gain) and the period of oscillation Tu (also called the ultimate period) are determined as shown in Figure 4.57. Figure 4.58 shows a marginally stable system, and Tu should be measured when the amplitude of oscillation is relatively small. The final parameters are then chosen as shown in Table 4.3. Experience has shown that the controller settings, according to the Ziegler-Nichols rules, provide a good closed-loop response for many systems. The process operator can do the final tuning of the controller iteratively to yield satisfactory control.
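Both Ziegler-Nichols rule sets (the quarter decay rules of Table 4.2 and the stability limit rules of Table 4.3) are simple enough to encode directly. The following is a plain-Python sketch (not from the book; the example measurements Ku = 8 and Tu = 3.63 are assumed values), with gains expressed for D(s) = KP + KI/s + KD s:

```python
# Ziegler-Nichols tuning rules. R and L are the reaction-curve slope and time
# delay (open-loop method); Ku and Tu are the ultimate gain and period
# (closed-loop method).

def zn_quarter_decay(R, L, controller="PID"):
    if controller == "P":
        return {"KP": 1.0 / (R * L)}
    if controller == "PI":
        return {"KP": 0.9 / (R * L), "KI": 0.27 / (R * L**2)}
    return {"KP": 1.2 / (R * L), "KI": 0.6 / (R * L**2), "KD": 0.6 / R}

def zn_stability_limit(Ku, Tu, controller="PID"):
    if controller == "P":
        return {"KP": 0.5 * Ku}
    if controller == "PI":
        return {"KP": 0.45 * Ku, "KI": 0.54 * Ku / Tu}
    return {"KP": 0.6 * Ku, "KI": 1.2 * Ku / Tu, "KD": 0.075 * Ku * Tu}

# Example: assumed measurements Ku = 8 and Tu = 3.63 from a relay/gain test.
gains = zn_stability_limit(8.0, 3.63)   # {'KP': 4.8, 'KI': ..., 'KD': ...}
```

In practice these gains are starting points; the operator then fine-tunes iteratively, as noted above.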

Figure 4.57 Determination of ultimate gain and period.

331

Characteristics of Feedback Control Systems

Figure 4.58 Marginally stable system.

Table 4.3 PID Controller Tuning: Stability Method

Type of Controller    Optimum Gain
P                     KP = 0.5Ku
PI                    KP = 0.45Ku,  KI = 0.54Ku/Tu
PID                   KP = 0.6Ku,  KI = 1.2Ku/Tu,  KD = 0.075KuTu

4.15 THE STANDARD PID CONTROLLER

The standard PID controller, introduced in Section 4.11.5, is a convenient format of the controller which is popular in many industrial applications. It is obtained by expressing the three PID controller gains (KP, KI, and KD) in terms of one controller gain and two time constants. As has already been developed, the general PID controller is represented by the Transfer Function model D(s) such that

D(s) = U(s)/E(s) = KP + KI/s + KD s = (KD s² + KP s + KI)/s

4.15.1 STRUCTURE AND ADVANTAGES

An alternative and convenient form of the PID controller is obtained by expressing the three controller gains (KP, KI, and KD) in terms of one controller gain and two time constants (KP, TI, and TD) as follows:

D(s) = KP + KI/s + KD s                  (4.48)
     = KP [1 + 1/(TI s) + TD s]          (4.49)

The three parameters (KP, TI, and TD) now give a complete description of the PID controller. KP is the proportional gain as before, while TI is the integral (reset) time constant, and TD is the derivative time constant. This is called the standard PID controller, which is popular in industry. The two PID controller models (general and standard) given by Equations 4.48 and 4.49, respectively, are used interchangeably, where:

TI = KP/KI,  TD = KD/KP
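The interchangeability of the two forms is easy to verify numerically. The following plain-Python sketch (illustrative, with arbitrary assumed gains) evaluates both forms of D(s) at a test point and confirms they coincide once TI = KP/KI and TD = KD/KP:

```python
# General form KP + KI/s + KD*s versus standard form KP*(1 + 1/(TI*s) + TD*s).

def d_general(s, kp, ki, kd):
    return kp + ki / s + kd * s

def d_standard(s, kp, ti, td):
    return kp * (1.0 + 1.0 / (ti * s) + td * s)

kp, ki, kd = 4.0, 2.0, 0.5
ti = kp / ki          # integral (reset) time constant
td = kd / kp          # derivative time constant

s = 3.0               # arbitrary real test point
g = d_general(s, kp, ki, kd)
st = d_standard(s, kp, ti, td)
```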

The most common and flexible structure is the general PID controller that uses KP , KD , and KI , where the parameters are treated as simple gains. However, it is also a form where the parameters have the least physical interpretation and are mostly reserved for the PID controller’s theoretical treatment. On the other hand, the standard PID controller is more common in industry despite being slightly more complex mathematically. The general form is also called the parallel form because the three controller components (P, I, D) are parallel to each other, as seen in Figure 4.59. In the standard

Figure 4.59 The general PID controller.


Figure 4.60 The standard PID controller [13].

PID controller, the proportional term (P) is applied to both the integral term (I) and the derivative term (D), as shown in Figure 4.60. It is important to emphasise that the general and the standard PID controllers are equivalent. They represent precisely the same controller. However, comparing the two Block Diagram models (Figures 4.59 and 4.60), the structural differences between the two are stark and clear. In the standard PID controller, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which compensates for future and past errors. The proportional term acts on the current error. The derivative term attempts to predict the error value at TD seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral term adjusts the error value to compensate for the sum of all past errors, to completely eliminate them in TI seconds (or samples). The resulting compensated single error value is then scaled by the single gain KP to compute the control variable. The block diagram for the standard PID controller is shown in Figure 4.60. Adjusting KP, TI and TD changes the response of the closed-loop system. As already explained, the standard PID controller is structured such that the gain KP multiplies all components in the controller, changing the integration and differentiation times appropriately.
The standard PID controller's transfer function can be analysed further as follows:

D(s) = KP [1 + 1/(TI s) + TD s]
     = KP (TD TI s² + TI s + 1)/(TI s)

This means the standard PID controller has a pole at s = 0, which provides integral action, together with two zeros that occur at the roots of the quadratic equation:

TD TI s² + TI s + 1 = 0

=⇒ s² + (1/TD)s + 1/(TD TI) = 0

=⇒ s² + (KP/KD)s + KI/KD = 0

This is the same quadratic equation obtained for the general PID controller, clearly demonstrating that the two representations of the PID controller, the general and the standard forms, are analytically equivalent, where:

TI = KP/KI,  TD = KD/KP

4.15.2 STANDARD PI AND PD CONTROLLERS

All the other reduced forms of the standard PID controller can be obtained from the standard PID controller as follows:

D(s) = KP + KI/s + KD s = KP [1 + 1/(TI s) + TD s]

From this expression, the alternative standard models for the PI and PD controllers are easily deduced. For the PI controller, this is done as follows:

D(s) = KP + KI/s = KP (s + 1/TI)/s

This means the controller has a pole at s = 0 and a zero at

s = −1/TI = −KI/KP

It is prudent to note that the zero is at the same location as that of the general PI controller. For the PD controller, it follows that:

D(s) = KP + KD s = KP TD (s + 1/TD)

This means the controller has no poles, but a zero at

s = −1/TD = −KP/KD

It is prudent to note that the zero is at the same location as that of the general PD controller. As discussed earlier, the parameters of these standard forms of PI and PD controllers are related to those of the general ones as follows:

TI = KP/KI,  TD = KD/KP


It is important to note that when 1/TI = 0, the standard PID controller is reduced to a standard PD controller. Similarly, when TD = 0, the standard PID controller becomes a standard PI controller. Furthermore, the standard PID controller is reduced to a Proportional controller when 1/TI = 0 and TD = 0. All these results can be easily deduced from Figure 4.60 (the Block Diagram model of the standard PID controller) and also from the Transfer Function models of these four standard PID-type controllers, as illustrated in the preceding section. Consequently, to design, implement or simulate the four standard PID-type controllers, only the standard PID controller needs to be considered, and the other three controllers (P, PI and PD) are obtained by assigning the value zero to the parameters 1/TI and/or TD.

4.15.3 STANDARD PID CONTROLLER: PERFORMANCE ANALYSIS

As explained earlier, an alternative and convenient form of the PID controller is obtained by expressing the three controller gains (KP, KI and KD) in terms of one controller gain and two time constants (KP, TI and TD) as follows:

D(s) = KP + KI/s + KD s = KP [1 + 1/(TI s) + TD s]

where the three parameters (KP, TI and TD) now give a complete description of the PID controller. KP is the proportional gain (as before), TI is the integral (reset) time constant, and TD is the derivative time constant. This is the standard PID controller which is popular in industry. The general and standard PID controller models are used interchangeably. For the standard PID controller, to demonstrate the effect of each of the controller components (KP, TI and TD), consider the problem of controlling an underdamped second-order system given by

G(s) = 1/(s² + 1.5s + 6)

The starting point is including integral action only. This is often the first step, particularly if integral action must be included to achieve zero steady-state error when a step is applied to the reference signal or the disturbance. The Transfer Function model for an integral controller is given by

D(s) = KP/(TI s).

With KP = 5 and TI = 1.5, the response of the closed-loop system to a unit step applied to the reference signal r(t) is shown in Figure 4.61. Increasing the gain of the controller KP/TI in an attempt to increase the speed of the step response results in a highly oscillatory response, as shown in Figure 4.62, where KP has been increased to KP = 10. Rather than increasing the gain, the dynamic response can be improved by adding a proportional term, so that the Transfer Function model of the controller becomes

D(s) = KP [1 + 1/(TI s)]

with KP = 5 and TI = 1.5. The step response is much faster and there is no overshoot (Figure 4.63), although the response is still oscillatory. Adding a proportional term to the integrator therefore improves the dynamic response in this case. The dynamic response can be improved further by adding a derivative term such that

D(s) = KP [1 + 1/(TI s) + TD s].
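The closed-loop behaviour of this plant under the full standard PID controller can be reproduced with a simple time-domain simulation. The following plain-Python sketch (an Euler-integration approximation, not the book's MATLAB code) uses the section's values KP = 5, TI = 1.5 and TD = 4, and confirms that the integral action drives the step response to the reference:

```python
# Unit step response of G(s) = 1/(s^2 + 1.5s + 6) under the standard PID
# controller D(s) = KP(1 + 1/(TI*s) + TD*s), integrated with Euler steps.

def step_response(kp, ti, td, t_end=40.0, dt=0.0005):
    y, ydot = 0.0, 0.0    # plant states: y'' = u - 1.5*y' - 6*y
    integ = 0.0
    prev_e = 1.0          # e(0) = 1 for a unit step; avoids a derivative kick
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * (e + integ / ti + td * deriv)
        ydd = u - 1.5 * ydot - 6.0 * y
        y += ydot * dt
        ydot += ydd * dt
    return y

y_final = step_response(5.0, 1.5, 4.0)   # integral action drives y toward 1
```

The derivative term (TD = 4) supplies the damping discussed next; removing it (td = 0) leaves the response oscillatory, matching Figures 4.63 and 4.64.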


Figure 4.61 Integral (I) controller.

Figure 4.62 Integral (I) controller (with increased KP ).

Figure 4.63 PI (proportional + integral) controller.


Figure 4.64 PID (proportional + integral + derivative) controller.

This has the effect of damping out the oscillations. It has the same effect as adding a viscous damper to a mechanical system. If TD = 4 (with KP = 5, TI = 1.5), then the oscillations are removed from the closed-loop step response, as shown in Figure 4.64.

4.15.4 TUNING THE STANDARD PID CONTROLLER

The tuning exercise for the standard PID controller follows the same steps as those for the general PID controller, which were discussed in Section 4.14. A standard PID controller has three parameters, KP, TI and TD, whose values must be adjusted, that is, tuned, so that the behaviour of the closed-loop control system is as desired or specified. As discussed in Section 4.14, there are three basic manual control tuning techniques: (i) Trial and Error Method, (ii) Quarter Decay Ratio Method (also called Open-Loop Method); and (iii) Stability Limit Method (also called Closed-Loop Method). Ziegler and Nichols developed the last two approaches [13]. These same methods are used for the standard PID controller. Although there are challenges associated with the three PID controller tuning techniques, their key strength is that they do not require knowledge of the plant G(s). Hence, the Trial and Error, Quarter Decay Ratio (Open-Loop), and Stability Limit (Closed-Loop) tuning approaches are widely used in industry and academic research. However, if the plant G(s) is known, then the parameters of the standard PID controller can be obtained easily by using analytical or graphical techniques.

4.16 PID CONTROLLER DESIGN AND IMPLEMENTATION

Various methods are used to design PID-type controllers in industry and research. These approaches apply to both the general and the standard forms of the PID-type controller. The design steps for both the general and the standard PID controllers are outlined as follows:

• Evaluate the performance of the uncompensated system, which is given by such parameters as Tp, Ts, OS and ζ. Determine how much improvement is required.
• Design a PD controller to meet transient response specifications (zero location and loop gain).
• Simulate the system to ensure all requirements are met.
• Redesign if requirements are not met.
• Design a PI controller to yield the required steady-state error.
• Determine KP, KI (or TI), and KD (or TD).
• Simulate the system to be sure all requirements have been met.
• Redesign if the requirements are not met.


As an illustration, consider the design process for a particular case that leads to a specific PID controller, where the values of the control gains KP, KI (or TI) and KD (or TD) are obtained such that the Transfer Function models of the general and standard PID controllers are given by:

D(s) = KP + KI/s + KD s
     = 56.42 + 27.96/s + s

=⇒ D(s) = KP [1 + 1/(TI s) + TD s]
        = 56.42 [1 + 1/(2.02s) + 0.02s].

This means the control gains are given by

KP = 56.42
KI = 27.96 or TI = 2.02
KD = 1 or TD = 0.02

The design process for the PD and PI controllers uses the same framework as applied for the PID one [26]. For the PD controller, the objective is to determine KP and KD (or TD), while for the PI controller, the idea is to determine KP and KI (or TI).

4.16.1 IMPLEMENTATION OF PID CONTROLLERS
There are several techniques for simulating or implementing PID-type controllers, which apply to all the variants (P, PI, PD and PID) in their general or standard forms. The framework for the implementation or simulation of the PID controller depends on the format (general or standard) of the controller, that is, whether the Block Diagram model is represented by Figure 4.65 or 4.66. Considering these two diagrams in the implementation process, it follows that:

Figure 4.65 The general PID controller.


Figure 4.66 The standard PID controller [13].

• For the P controller, set KD and KI (or 1/TI) to zero, and then tune the controller for the value of KP.
• For the PD controller, set KI (or 1/TI) to zero, and then tune the controller for the values of KP and KD (or TD).
• For the PI controller, set KD (or TD) to zero, and then tune the controller for the values of KP and KI (or TI).
• For the PID controller, tune the controller for the values of KP, KI (or TI) and KD (or TD).

4.16.2 REALISATION USING AN OPERATIONAL AMPLIFIER
In Chapter 2 it was demonstrated how negative feedback in operational amplifier (op-amp) circuits can be used to accomplish various functions. Such circuits can be used to achieve the physical realisation of different controllers and compensators. Specifically, the op-amp can be used to implement P, PI, PD and PID controllers and the various forms of lead and lag compensators. The configurations of the impedances (consisting of resistors, capacitors and inductors) used with the Operational Amplifier to achieve the different controllers or compensators are shown in Table 4.4, together with the Transfer Function models for each realisation. These models can be easily derived from first principles. Figure 4.67 shows an operational amplifier circuit that can be used to realise any controller or compensator. Figure 4.68 shows an operational amplifier circuit which can be used to realise a generic PID controller [26].

Figure 4.67 Operational amplifier realisation of controllers and compensators [26].


Figure 4.68 Operational amplifier realisation of a PID controller [26].

4.16.2.1 Transfer Function Model of an Operational Amplifier

In the derivation of the Transfer Function model for an Operational Amplifier being used to implement the general PID controller, the task is to find the ratio of the Laplace transforms of the output and input voltages as follows:

T(s) = Vo(s)/Vi(s)

This Transfer Function model is then obtained by impedance analysis and Kirchhoff's Current Law (KCL). The input impedance Z1 is the parallel combination of Za = R1 and Zb = 1/(sC1), while the feedback impedance Z2 is the series combination of Zc = R2 and Zd = 1/(sC2):

Z1 = Za Zb/(Za + Zb) = [R1 (1/(sC1))]/[R1 + 1/(sC1)]

=⇒ Z1 = R1/(1 + sR1C1)

Z2 = Zc + Zd = R2 + 1/(sC2)

=⇒ Z2 = (sR2C2 + 1)/(sC2)

According to KCL, i1 = i2. Since v1 = 0, this means

[Vi(s) − V1(s)]/Z1 = [V1(s) − Vo(s)]/Z2

=⇒ [Vi(s) − 0]/Z1 = [0 − Vo(s)]/Z2

=⇒ Vi(s)/Z1 = −Vo(s)/Z2

=⇒ Vo(s)/Vi(s) = −Z2/Z1

=⇒ T(s) = −Z2/Z1

= −[(sR2C2 + 1)/(sC2)] [(1 + sR1C1)/R1]

= −[s²(R2R1C1C2) + s(R2C2 + R1C1) + 1]/(sR1C2)

=⇒ T(s) = −[(R2/R1 + C1/C2) + 1/(R1C2 s) + (R2C1)s]                    (4.50)

For a given a specific PID controller, the values of the impedances can be calculated and thus the Operational Amplifier circuit can be established. Consider the specific controller presented in Section 4.16 where [26], KP = 56.42 KI = 27.96 KD = 1.0 =⇒ D(s) = 56.42 +

27.96 +s s
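The identity between the circuit gain −Z2/Z1 and the PID expression of Equation 4.50 can be verified numerically. The following plain-Python sketch (illustrative only) uses the component values of this specific design (R1 = 358 kΩ, R2 = 179 kΩ, C1 = 5.6 µF, C2 = 0.1 µF) and evaluates both forms at an arbitrary test point:

```python
# Op-amp PID realisation check: T(s) = -Z2/Z1 should equal
# -(KP + KI/s + KD*s) with KP = R2/R1 + C1/C2, KI = 1/(R1*C2), KD = R2*C1.

R1, R2 = 358e3, 179e3        # resistors [Ohm]
C1, C2 = 5.6e-6, 0.1e-6      # capacitors [F]

KP = R2 / R1 + C1 / C2       # ~56.5, close to the design value 56.42
KI = 1.0 / (R1 * C2)         # ~27.93, close to 27.96
KD = R2 * C1                 # ~1.00

s = 2.0                      # arbitrary real test point
Z1 = R1 / (1.0 + s * R1 * C1)
Z2 = (s * R2 * C2 + 1.0) / (s * C2)

T_circuit = -Z2 / Z1                  # circuit transfer function
T_pid = -(KP + KI / s + KD * s)       # Equation 4.50 form
```

The small differences between the recovered gains and the design targets reflect the rounding of the component values to standard sizes.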

This expression of the Transfer Function model of a specific PID controller is compared with the equation for the general case depicted in Equation 4.50 in terms of impedances as follows:


D(s) = 56.42 + 27.96/s + s

= (R2/R1 + C1/C2) + 1/(R1C2 s) + (R2C1)s

From which we obtain (by equating corresponding coefficients) the following equations:

R2/R1 + C1/C2 = 56.42

1/(R1C2) = 27.96

R2C1 = 1

Since we have four unknowns and three equations, we arbitrarily select a practical value for one of the elements. Selecting C2 = 0.1 µF, the remaining values are determined as R1 = 358 kΩ, R2 = 179 kΩ, and C1 = 5.6 µF. This completes the active-circuit realisation of this specific PID controller, and the outcome is illustrated in Figure 4.69 [26]. Operational amplifier realisation of various generic controllers and compensators is shown in Table 4.4. This is achieved by giving different impedance elements to Z1(s) and Z2(s) in the general operational amplifier circuit in Figure 4.67. For example, a P controller is obtained by having Z1(s) = R1 and Z2(s) = R2, while a PI controller is formed by having Z1(s) = R1 and Z2(s) = (R2 + 1/sC), as shown in Table 4.4.

4.16.3 WINDSCREEN WIPER CONTROL SYSTEM

The primary purpose of the wiper system is to clean the windscreen so that adequate visibility is achieved for the driver. A wiper motor on a modern vehicle should be a quiet, low-power unit operating on a current of about 2–4 A. In older cars, shunt-wound motors were used; however, permanent-magnet motors are now commonly used. The layout of a typical wiper system is shown in Figure 4.70. The worm on the armature shaft drives a worm wheel connected to a crank, thus providing the reciprocating action needed to oscillate the wiper blades. The speed reduction and the torque increase required to drive the wiper blades are provided by a gear mechanism. A control model for a windscreen wiper motor typically involves a feedback control system that regulates the speed and position of the wiper blades. The sensor detects the current position and speed of the wiper blades and feeds that information to the control algorithm. This control program

Figure 4.69 Operational amplifier realisation of a specific PID controller.

Characteristics of Feedback Control Systems

343

Table 4.4 Operational Amplifier Realisation of Various Generic Controllers and Compensators [26]

adjusts the motor speed and position based on the sensor feedback and sends commands to the motor driver. This mechanism then drives the wiper motor to achieve the desired speed and position of the wiper blades. The power supply provides energy to the motor and the control electronics.

4.16.3.1 System Components

The components of the windscreen wiper system can be summarised as follows:

• Sensor: A sensor is used to detect the current position and speed of the wiper blades. This could be a simple optical sensor that detects the position of a rotating cam on the wiper motor shaft.


Figure 4.70 Components of a windscreen wiper system.

• Control algorithm: A control algorithm is used to adjust the speed and position of the wiper blades based on the sensor feedback. The algorithm may use a PID control loop, which changes the motor's speed based on the error between the desired position and the actual position of the wiper blades. The PID loop can be fine-tuned to ensure smooth and accurate wiper blade movement.
• Motor driver: The motor driver receives signals from the control algorithm and drives the motor accordingly. The driver may use pulse-width modulation (PWM) to control the motor's speed and direction.
• Power supply: A power supply provides the necessary voltage and current to drive the motor and the control electronics.

4.16.3.2 Control System Block Diagram

The block diagram of the windscreen wiper control system can be represented using the traditional blocks, arrows and lines to illustrate the flow of signals and the different components of the system. It visually represents the system's behaviour and allows designers to analyse the system's stability, performance and robustness. The wiper motors in Figure 4.70 primarily consist of permanent magnets (three-brush types) and are driven through a worm gear to increase torque and reduce speed. The three brushes permit two-speed operation. The normal speed is obtained through two brushes placed opposite each other. A faster speed is achieved by installing the third brush closer to the earth brush. This design reduces the number of armature windings between them, reducing resistance and thus increasing current and speed. Typical values for wiper motor speed, and hence wipe frequency, are 45 and 65 rpm at normal and fast speeds, respectively. The motor must run at a minimum speed of about 5 rpm to overcome the starting friction for each blade. A basic block diagram for a windscreen wiper control system is displayed in Figure 4.71. The key components include the following:

• Input signal: The input signal to the control system is usually a command signal representing the wiper motor's desired position or speed, such as a voltage or a digital signal.
• Controller: The controller receives the input signal and generates an output signal based on the control algorithm. The controller can be a P, PI, PD or PID controller.
• Actuator: The actuator is the wiper motor, which converts the electrical signal into mechanical motion. The controller's output is connected to the actuator and controls the motor speed or position.


Figure 4.71 Windscreen wiper control system.

• Feedback: The feedback loop measures the actual position or speed of the wiper motor and compares it to the desired position or speed. The difference between the two signals is the error signal, which is fed back to the controller to adjust the output signal.

The Transfer Function model for the windscreen wiper system is a mathematical representation of the relationship between the input signal (usually a voltage or current) and the output signal (usually the wiper motor speed or position). It describes how the system responds to different input signals, and it can be derived by analysing the system dynamics and electrical characteristics. The Transfer Function model can be represented as

G(s) = K/(sτ + 1),   (4.51)

where K is the gain and τ is the time constant. Hence the output signal Y(s) is obtained from the input U(s) by

Y(s) = G(s)U(s)   (4.52)

This completes the modelling of a car windscreen wiper system, which is basically a feedback control system that regulates the speed and position of the wiper blades. The transfer function is used to analyse the stability and performance of the system, design control algorithms, and simulate the system's behaviour under different conditions. The torque T required by the motor is given by the expression

T = F µ f_s f_l l (ω_o/ω_m)[R_h/(R_e e)],

where µ is the maximum dry coefficient of friction, F is the force of the blade onto the screen, f_s is the multiplier for joint friction, f_l is the tolerance factor, l is the wiper arm length, ω_o is the maximum angular velocity of the arm, ω_m is the mean angular velocity of the motor crank, e is the efficiency of the motor gear unit, and R_h is the motor winding resistance.
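To see how the first-order model of Equation 4.51 responds, its unit-step response y(t) = K(1 − e^(−t/τ)) can be evaluated directly. The gain and time constant below are illustrative values of our own, not parameters given in the text:

```python
import math

# First-order wiper model G(s) = K/(tau*s + 1): the unit-step response is
# y(t) = K*(1 - exp(-t/tau)). K and tau are assumed illustrative values.
K, tau = 1.0, 0.5          # gain and time constant (seconds)

def step_response(t):
    return K * (1.0 - math.exp(-t / tau))

# After one time constant the output reaches ~63.2% of its final value K,
# and after about 5*tau it is essentially at steady state.
print(round(step_response(tau), 3), round(step_response(5 * tau), 3))
```

This is the standard behaviour a designer would verify before closing the loop around the wiper motor.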

4.17

SYSTEM STEADY-STATE TRACKING

A stable dynamic system achieves steady-state tracking in the presence of a reference input, with or without a disturbance signal. The steady-state error is the difference between the reference and the system's output in the limit as time goes to infinity (i.e., when the response has reached steady state). The steady-state error depends on the type of reference (e.g., step or ramp) and the type of disturbance (e.g., step or ramp). Steady-state error analysis is only of value for stable systems, so system stability must be checked before performing it. A tracking control system is designed to have a low steady-state error; a steady-state error of zero means perfect system tracking, which is the ideal objective. Control system types (Type 0, I, II or III) are defined with respect to a reference r(t) or a disturbance w(t) as a measure of how capable the system is of attaining perfect tracking of step, ramp or parabolic reference signals with zero steady-state position, velocity or acceleration error. For example, Type II control systems are excellent (produce zero steady-state error) for step and ramp reference signals but give a constant steady-state error for parabolic inputs. Type III control systems can perfectly track step, ramp and parabolic reference signals with zero steady-state position, velocity and acceleration error.

4.17.1 STEADY-STATE ERRORS AND SYSTEM TYPE

As already indicated, system types can be defined with respect to a reference r(t) or a disturbance w(t). Consider the generic feedback system shown in Figure 4.72. Let the transfer function from the reference r(t) to the output y(t) be Tr(s), and the transfer function from the disturbance w(t) to the output y(t) be Tw(s). For a generic reference, r(t) = tᵏ/k!, it will be shown that the steady-state error with respect to the reference is given by

e^r_ss = lim_{s→0} (1 − Tr)/sᵏ.

Similarly, for a generic disturbance, w(t) = tᵏ/k!, it will be shown that the steady-state error with respect to the disturbance is given by

e^w_ss = lim_{s→0} (−Tw)/sᵏ.

The system type can then be defined in terms of these steady-state errors e^r_ss and e^w_ss. The general transfer function with respect to the reference input, Tr(s), is obtained from Figure 4.72 by setting the disturbance to zero, i.e., w(t) = 0.

Figure 4.72 A general control system.


Figure 4.73 Obtaining the transfer function w.r.t the disturbance.

Tr(s) = D(s)G(s)/[1 + βD(s)G(s)].

The general transfer function with respect to the disturbance, Tw(s), is obtained from Figure 4.72 by setting the reference input to zero, i.e., r(t) = 0. Redrawing Figure 4.72 leads to the diagram in Figure 4.73, from which Tw(s) is easily obtained:

Tw(s) = G(s)/[1 + βD(s)G(s)].

It is important to note that the same system characteristic equation, 1 + βD(s)G(s) = 0, is obtained from both Tr(s) and Tw(s). This is because a system has only one characteristic equation, which portrays the system dynamics. The starting point in deriving the steady-state error e^r_ss is the Laplace transform of the reference error:

Er(s) = R(s) − Y(s) = R(s) − Tr R(s) = R(s)(1 − Tr)

e^r_ss = lim_{s→0} sEr(s)   (Final Value Theorem)
      = lim_{s→0} sR(s)(1 − Tr)
      = lim_{s→0} s(1 − Tr)/sᵏ⁺¹   where R(s) = L[tᵏ/k!] = 1/sᵏ⁺¹
      = lim_{s→0} (1 − Tr)/sᵏ.

For the special case of a unit step input, it follows that:

Er(s) = R(s) − Y(s) = R(s) − Tr R(s) = R(s)(1 − Tr)

e^r_ss = lim_{s→0} sEr(s)   (Final Value Theorem)
      = lim_{s→0} sR(s)(1 − Tr)
      = lim_{s→0} s(1 − Tr)/s   where R(s) = L[1] = 1/s
      = lim_{s→0} (1 − Tr).

The steady-state error e^w_ss, due to a disturbance function, is derived by considering the Laplace transform of the error due to the disturbance (with no reference input):

Ew(s) = 0 − Y(s) = −Tw W(s)

e^w_ss = lim_{s→0} sEw(s)   (Final Value Theorem)
      = −lim_{s→0} sTw W(s)
      = lim_{s→0} −sTw/sᵏ⁺¹   where W(s) = L[tᵏ/k!] = 1/sᵏ⁺¹
      = lim_{s→0} (−Tw)/sᵏ.

For the special case of a unit step disturbance, it follows that:

Ew(s) = 0 − Y(s) = −Tw W(s)

e^w_ss = lim_{s→0} sEw(s)   (Final Value Theorem)
      = −lim_{s→0} sTw W(s)
      = lim_{s→0} −sTw/s   where W(s) = L[1] = 1/s
      = lim_{s→0} (−Tw).
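The limits derived above can be approximated numerically by evaluating them at a small value of s. A minimal Python sketch follows; the plant G(s) = 1/(s(s + 1)) with proportional gain D(s) = K is our own illustrative choice:

```python
def e_ss_ref(Tr, k, s=1e-6):
    # e_ss^r = lim_{s->0} (1 - Tr(s)) / s^k, approximated at a small s
    return (1.0 - Tr(s)) / s ** k

K = 10.0
Tr = lambda s: K / (s * s + s + K)   # closed loop of D(s)=K, G(s)=1/(s(s+1))

print(e_ss_ref(Tr, 0))   # step reference: ~0 (zero position error)
print(e_ss_ref(Tr, 1))   # ramp reference: ~1/K (nonzero constant => Type I)
```

The smallest k giving a nonzero constant is the system type; here the ramp error 1/K identifies a Type I system with respect to the reference.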

The system type (Type 0, I, II or III) is the degree k of the polynomial tᵏ/k! that makes the steady-state errors e^r_ss and e^w_ss nonzero constants. Thus, there are two categories of system type: one with respect to the reference and the other with respect to the disturbance.

4.17.2 ILLUSTRATIVE EXAMPLES

Example 4.6 Consider the closed-loop control system shown in Figure 4.74.

Figure 4.74 PID control system problem.


(a) Use proportional control, D(s) = KP. Does this controller provide additional damping?
(b) Use PD control, D(s) = KP + KD s. Determine the tracking and disturbance-rejection properties for step inputs on θr(t) and w(t).
(c) Use PI control, D(s) = KP + KI/s. Discuss the effect of this controller on the stability of the system.
(d) Use PID control, D(s) = KP + KI/s + KD s. Discuss the effect of this controller on the stability and steady-state errors of the system. What are the two system types when the PID is employed?

Solution 4.6 The transfer function with respect to the reference, Tr(s), is obtained from the block diagram by setting the disturbance to zero:

Tr(s) = Θ(s)/Θr(s) = [D(s)/(Js²)]/[1 + D(s)/(Js²)] = D(s)/[D(s) + Js²].

The transfer function with respect to the disturbance is obtained by setting the reference to zero, and then redrawing the diagram as shown in the following figure.

Obtaining the Transfer Function w.r.t. the Disturbance

Tw(s) is then obtained as follows:

Tw(s) = Θ(s)/W(s) = [1/(Js²)]/[1 + D(s)/(Js²)] = 1/[D(s) + Js²].

Thus, the general system characteristic equation (it is the same from both Tw(s) and Tr(s)) is given by D(s) + Js² = 0.

a) Proportional (P) controller, D(s) = KP. The specific characteristic equation becomes

Js² + KP = 0 ⟹ s = ±j√(KP/J) ⟹ ξ = 0.

This means that the proportional controller does not provide any damping.


b) Proportional and Derivative (PD) controller, D(s) = KP + KD s. The transfer function with respect to the reference is given by

Tr = D(s)/[D(s) + Js²] = (KP + KD s)/(Js² + KP + KD s).

For tracking properties:

e^r_ss = lim_{s→0} (1 − Tr)/sᵏ
      = lim_{s→0} [1 − (KP + KD s)/(Js² + KD s + KP)]/sᵏ
      = lim_{s→0} [Js²/(Js² + KD s + KP)]/sᵏ
      = lim_{s→0} Js²⁻ᵏ/(Js² + KD s + KP).   (4.53)

For a step reference input θr(t) = 1, which means Θr(s) = 1/s ⟺ k = 0. From Equation 4.53,

k = 0 ⟹ e^r_ss = 0/KP = 0.

This means there is zero steady-state error for a step input. For disturbance rejection, the transfer function with respect to the disturbance is given by

Tw = 1/[D(s) + Js²] = 1/(Js² + KP + KD s).

The corresponding steady-state error is given by

e^w_ss = lim_{s→0} (−Tw)/sᵏ
      = lim_{s→0} −[1/(Js² + KP + KD s)]/sᵏ
      = lim_{s→0} −s⁻ᵏ/(Js² + KP + KD s).   (4.54)

For a step disturbance w(t) = 1, which means W(s) = 1/s ⟺ k = 0. From Equation 4.54,

k = 0 ⟹ e^w_ss = −1/KP.


This means there is always a steady-state error due to a step disturbance. The error decreases as KP is increased.

c) Proportional and Integral (PI) controller, D(s) = KP + KI/s.

The characteristic equation takes the form

1 + D(s)/(Js²) = 0 ⟹ 1 + (KP + KI/s)/(Js²) = 0 ⟹ Js³ + KP s + KI = 0.

The coefficient of s² is missing, which means (by the Routh-Hurwitz stability criterion) at least one pole is not in the LHP. This implies that PI is an unstable control strategy for this system.

d) Proportional, Integral and Derivative (PID) controller, D(s) = KP + KI/s + KD s.

The characteristic equation of the system becomes

1 + (1/(Js²))[KP + KI/s + KD s] = 0 ⟹ Js³ + KD s² + KP s + KI = 0.

There is control over all the poles, and the system can be made stable by choosing KD, KP and KI such that all the poles are strictly in the LHP. The transfer function with respect to the reference is given by

Tr = [D(s)/(Js²)]/[1 + D(s)/(Js²)]
   = [(1/(Js²))(KP + KI/s + KD s)]/[1 + (1/(Js²))(KP + KI/s + KD s)]
   = (KD s² + KP s + KI)/(Js³ + KD s² + KP s + KI).

The steady-state error with respect to the reference is given by

e^r_ss = lim_{s→0} (1 − Tr)/sᵏ = lim_{s→0} (1 − Tr)/s⁰ = lim_{s→0} (1 − Tr)   from r(t) = 1 ⟺ R(s) = 1/s ⟹ k = 0


= lim_{s→0} [1 − (KD s² + KP s + KI)/(Js³ + KD s² + KP s + KI)]
= 1 − KI/KI
= 0.

Hence, there is no steady-state error with respect to a step input, which means that with a PID controller the system can effectively track step inputs with no steady-state errors. The transfer function with respect to the disturbance is given by

Tw = Y(s)/W(s) = G(s)/[1 + βD(s)G(s)]
   = [1/(Js²)]/[1 + D(s)/(Js²)]
   = [1/(Js²)]/[1 + (1/(Js²))(KP + KI/s + KD s)]
   = s/(Js³ + KD s² + KP s + KI).

The corresponding steady-state error is given by

e^w_ss = lim_{s→0} (−Tw)/sᵏ = lim_{s→0} (−Tw)/s⁰   from w(t) = 1 ⟺ W(s) = 1/s ⟹ k = 0
      = lim_{s→0} (−Tw)
      = lim_{s→0} −s/(Js³ + KD s² + KP s + KI)
      = −0/KI
      = 0.

Hence, there is no steady-state error with respect to a step disturbance, which means that with a PID controller the system effectively rejects step disturbances. In general, the system type with respect to the reference is the value of k such that

e^r_ss = lim_{s→0} (1 − Tr)/sᵏ ≡ nonzero constant.

For the specific unity feedback control system under discussion, the expression for e^r_ss can be simplified as follows:

e^r_ss = lim_{s→0} (1 − Tr)/sᵏ


= lim_{s→0} [1 − (KD s² + KP s + KI)/(Js³ + KD s² + KP s + KI)]/sᵏ
= lim_{s→0} [Js³/(Js³ + KD s² + KP s + KI)]/sᵏ
⟹ e^r_ss = lim_{s→0} Js³⁻ᵏ/(Js³ + KD s² + KP s + KI).   (4.55)

To obtain the system type, try increasing values of k starting from 0 in Equation 4.55 until a nonzero constant is obtained:

k = 0 ⟹ e^r_ss = 0/KI = 0
k = 1 ⟹ e^r_ss = 0/KI = 0
k = 2 ⟹ e^r_ss = 0/KI = 0
k = 3 ⟹ e^r_ss = J/KI (a nonzero constant)

⟹ the system type with respect to the reference is Type III. This means that with a PID controller, this system can effectively track step, ramp and parabolic inputs without any steady-state errors. For cubic inputs there is a constant nonzero error, and for higher-order inputs the reference error grows without bound, i.e., the system is not able to track these higher-order inputs. In general, the system type with respect to the disturbance is the value of k such that

e^w_ss = lim_{s→0} (−Tw)/sᵏ ≡ nonzero constant.

For the specific unity feedback control system under discussion, the expression for e^w_ss can be simplified as follows:

e^w_ss = lim_{s→0} (−Tw)/sᵏ
      = −lim_{s→0} [s/(Js³ + KD s² + KP s + KI)]/sᵏ
      = −lim_{s→0} s¹⁻ᵏ/(Js³ + KD s² + KP s + KI).   (4.56)

To obtain the system type, try increasing values of k starting from 0 in Equation 4.56 until a nonzero constant is obtained:

k = 0 ⟹ e^w_ss = −0/KI = 0
k = 1 ⟹ e^w_ss = −1/KI ⟹ the system type with respect to the disturbance is Type I.


This means that with a PID controller, this system can effectively reject step disturbances without any steady-state errors. For ramp disturbances there is a constant nonzero error, and for higher-order disturbances the errors grow without bound, i.e., the system is not able to handle them.

Example 4.7 Consider a dynamic system whose plant transfer function is given by

G(s) = 1/(20s²)

The plant is in a unity feedback arrangement with a controller D(s) (a cascade controller) as depicted in Figure 4.75.

(a) Use a P controller, D(s) = 80. Does this controller provide any damping?
(b) Use a PD controller, D(s) = 80 + 10s. Determine the tracking property for a unit step input.
(c) Use a PI controller, D(s) = 80 + 5/s. Discuss the effect of this controller on the stability of the system.
(d) Use a PID controller, D(s) = 80 + 5/s + 10s. Discuss the effect of this controller on the stability and steady-state error (for a unit step input) of the system.

Solution 4.7 (a) D(s) is the general controller, a cascade compensator. For the case of a proportional controller D(s) = 80, it follows that:

T(s) = [D(s)/(20s²)]/[1 + D(s)/(20s²)] = D(s)/[20s² + D(s)]

⟹ T(s) = 80/(20s² + 80)

⟹ 2ζωn = 0 ⟹ ζ = 0

This means a proportional controller does not provide any damping for the system.

(b) Consider the proportional and derivative (PD) controller D(s) = 80 + 10s.

Figure 4.75 Unity feedback system.


The transfer function with respect to the reference is given by

Tr = D(s)/[D(s) + 20s²] = (80 + 10s)/(20s² + 10s + 80).

For tracking properties (for a unit step input), the steady-state error is investigated:

Er(s) = R(s) − Y(s) = R(s)[1 − Tr(s)]

e^r_ss = lim_{s→0} sR(s)[1 − Tr(s)]   (Final Value Theorem)
      = lim_{s→0} s(1/s)[1 − Tr(s)]   (for a unit step input)
      = lim_{s→0} [1 − (80 + 10s)/(20s² + 10s + 80)]
      = 1 − 80/80
⟹ e^r_ss = 0

A zero steady-state error for a step reference input implies that a PD controller provides perfect tracking for a unit step reference input.

(c) Using the proportional and integral (PI) controller,

D(s) = 80 + 5/s.

The characteristic equation takes the form

1 + D(s)/(20s²) = 0 ⟹ 1 + (80 + 5/s)/(20s²) = 0 ⟹ 20s³ + 80s + 5 = 0.

The coefficient of s² is missing, which means (by the Routh-Hurwitz stability criterion) at least one pole is not in the LHP. This implies that PI is an unstable control strategy for this system.

(d) Next, consider the PID controller

D(s) = 80 + 5/s + 10s

⟹ T(s) = [(10s² + 80s + 5)/(20s³)]/[1 + (10s² + 80s + 5)/(20s³)]

= (10s² + 80s + 5)/(20s³ + 10s² + 80s + 5)


This PID controller makes the system potentially stable, because now all the coefficients of the characteristic equation are present and positive. In terms of tracking, the steady-state error is given by

e_ss = lim_{s→0} s(1/s)[1 − (10s² + 80s + 5)/(20s³ + 10s² + 80s + 5)] = 0

This means use of the PID controller produces a system that tracks a reference input perfectly, with a steady-state error equal to zero.
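The Routh-Hurwitz reasoning used in parts (c) and (d) can be automated. Below is a minimal Python implementation of our own (it covers only the regular case, with no zero-pivot or row-of-zeros special rules):

```python
def routh_stable(coeffs):
    # coeffs: highest power first, e.g. 20s^3 + 10s^2 + 80s + 5 -> [20, 10, 80, 5]
    # Returns True when all roots lie strictly in the LHP (regular case only).
    if any(c <= 0 for c in coeffs):          # necessary condition: all present, positive
        return False
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    for _ in range(len(coeffs) - 2):         # build the remaining rows of the array
        a, b = rows[-2], rows[-1]
        b = b + [0.0] * (len(a) - len(b))    # pad the shorter row with zeros
        if b[0] == 0:
            return False                     # zero pivot: not strictly stable
        rows[-1] = b
        rows.append([(b[0] * a[i + 1] - a[0] * b[i + 1]) / b[0]
                     for i in range(len(a) - 1)])
    return all(r[0] > 0 for r in rows)       # no sign change in the first column

print(routh_stable([20, 10, 80, 5]))   # PID case of Example 4.7: True
print(routh_stable([20, 0, 80, 5]))    # PI case: False (missing s^2 term)
```

Note that all-positive coefficients are necessary but not sufficient; the first-column test catches cases such as s³ + s² + s + 10, which has right-half-plane roots.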

4.18

SENSITIVITY

The sensitivity of a control system is concerned with how the controlled variable varies with respect to variations in system parameters. These parameters could be plant gains, controller gains or any other parameter on which the system depends. Although the modelling carried out so far has assumed that these parameters are constant, in practical systems they could vary with changes in operating conditions such as temperature and pressure.

4.18.1 DEFINITION OF SENSITIVITY

Consider a system with a general (open- or closed-loop) transfer function H(s), which depends on a parameter K. Suppose that a change in operating conditions causes the parameter K to drift from its original value to K + δK. This parameter variation in turn forces the transfer function to change from H(s) to H(s) + δH(s). The sensitivity S_K^H of the transfer function with respect to variation in the parameter K is defined as the ratio of δH/H to δK/K, such that

S_K^H = (δH/H)/(δK/K) ≈ (dH/H)/(dK/K) = (K/H)(dH/dK).

4.18.2

OPEN- AND CLOSED-LOOP SENSITIVITY

Consider the general open- and closed-loop transfer functions for a system with proportional control K and plant gain G(s):

H_ol(s) = KG(s)   (4.57)

H_cl(s) = KG(s)/[1 + KG(s)].   (4.58)

The open-loop sensitivity S_K^Hol with respect to the parameter K (the gain) is obtained as follows:

S_K^Hol = (dH_ol/H_ol)/(dK/K) = (K/H_ol)(dH_ol/dK).


From Equation 4.57,

dH_ol/dK = G(s).

Hence, the sensitivity function is given by

S_K^Hol = (K/H_ol)(dH_ol/dK) = [K/(KG(s))]G(s) = 1.

This means that

S_K^Hol = (δH_ol/H_ol)/(δK/K) = 1 ⟹ δH_ol/H_ol = δK/K,   (4.59)

which means that a 5% error in K would yield a 5% error in H_ol(s), and hence an error of 5% in the output (the variable being controlled). Put differently, the system sensitivity with respect to proportional controller gain changes is 100% for open-loop control systems.

The closed-loop sensitivity S_K^Hcl with respect to the parameter K (the gain) is obtained as follows:

S_K^Hcl = (dH_cl/H_cl)/(dK/K) = (K/H_cl)(dH_cl/dK).

From Equation 4.58,

dH_cl/dK = d/dK {KG(s)/[1 + KG(s)]}
         = {[1 + KG(s)]G(s) − KG(s)G(s)}/[1 + KG(s)]²
         = G(s)/[1 + KG(s)]².

Hence the closed-loop sensitivity function is given by

S_K^Hcl = (K/H_cl)(dH_cl/dK)
        = {KG(s)/[1 + KG(s)]²} × {[1 + KG(s)]/(KG(s))}
        = 1/[1 + KG(s)].


This means that

S_K^Hcl = (δH_cl/H_cl)/(δK/K) = 1/[1 + KG(s)] ⟹ δH_cl/H_cl = {1/[1 + KG(s)]}(δK/K).

Comparing this equation with the open-loop sensitivity in Equation 4.59, it is clear that the error in the controlled quantity is less sensitive to variations in the proportional controller gain K by a factor 1 + KG(s). This is a major advantage of closed-loop systems over open-loop control systems. The term 1 + KG(s) is called the return difference of the feedback path.

Example 4.8 A unity feedback control system has the following transfer function:

G(s) = K/[s(s + a)]

a) Compute the sensitivity of the closed-loop transfer function to changes in the parameter K.
b) Compute the sensitivity of the closed-loop transfer function to changes in the parameter a.
c) If the unity gain in the feedback changes to a value β ≠ 1, compute the sensitivity of the closed-loop transfer function with respect to β.

Solution 4.8 a) The transfer function of the system is given by

H(s) = G(s)/[1 + G(s)] = {K/[s(s + a)]}/{1 + K/[s(s + a)]} = K/(s² + as + K).

From the general definition of sensitivity,

S_K^H = (dH/H)/(dK/K) = (K/H)(dH/dK).

For the specific transfer function,

dH/dK = [(s² + as + K) − K]/(s² + as + K)² = (s² + as)/(s² + as + K)².

Hence, the sensitivity function is given by

S_K^H = (K/H)(dH/dK) = [K(s² + as + K)/K] × [(s² + as)/(s² + as + K)²] = (s² + as)/(s² + as + K).
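The closed-form sensitivity in part (a) can be spot-checked with a finite-difference approximation of dH/dK. This is our own numeric sketch; the evaluation point (s, K, a) is arbitrary:

```python
# Spot-check of Example 4.8(a): S_K^H should equal (s^2 + a*s)/(s^2 + a*s + K).
def H(s, K, a):
    return K / (s * s + a * s + K)

def sensitivity_K(s, K, a, h=1e-6):
    # S = (K/H) * dH/dK, with dH/dK approximated by a central difference
    dHdK = (H(s, K + h, a) - H(s, K - h, a)) / (2 * h)
    return (K / H(s, K, a)) * dHdK

s, K, a = 2.0, 5.0, 3.0
closed_form = (s * s + a * s) / (s * s + a * s + K)
print(round(sensitivity_K(s, K, a), 6), round(closed_form, 6))  # both 0.666667
```

Agreement between the two numbers confirms the algebra at this test point.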


b) From the general definition of sensitivity,

S_a^H = (dH/H)/(da/a) = (a/H)(dH/da).

For the specific transfer function,

dH/da = −sK/(s² + as + K)².

Hence, the sensitivity function is given by

S_a^H = (a/H)(dH/da) = [a(s² + as + K)/K] × [−sK/(s² + as + K)²] = −as/(s² + as + K).

c) The transfer function of the system is given by

H(s) = G(s)/[1 + βG(s)].

From the general definition of sensitivity,

S_β^H = (dH/H)/(dβ/β) = (β/H)(dH/dβ).

For the specific transfer function,

dH/dβ = −G²(s)/[1 + βG(s)]².

Hence the sensitivity function is given by

S_β^H = (β/H)(dH/dβ) = [β(1 + βG)/G] × {−G²/(1 + βG)²} = −βG/(1 + βG)
      = −{βK/[s(s + a)]}/{1 + βK/[s(s + a)]} = −βK/[s(s + a) + βK].


Example 4.9 Consider the car's cruise control problem discussed in previous sections. The open-loop and closed-loop transfer functions were found to be

H_ol(s) = KG(s) = bK/(ms + b)   (4.60)

H_cl(s) = KG(s)/[1 + KG(s)] = [bK/(ms + b)]/[1 + bK/(ms + b)] = bK/(ms + b + bK).   (4.61)

Compare the sensitivity of the open-loop and closed-loop systems with respect to the controller gain K.

Solution 4.9 Consider the open-loop and closed-loop transfer functions for the car's cruise control system. The open-loop sensitivity S_K^Hol with respect to the parameter K (the gain) is obtained as follows:

S_K^Hol = (dH_ol/H_ol)/(dK/K) = (K/H_ol)(dH_ol/dK).

From Equation 4.60,

dH_ol/dK = b/(ms + b).

Hence the sensitivity function is given by

S_K^Hol = (K/H_ol)(dH_ol/dK) = {K/[bK/(ms + b)]} × [b/(ms + b)] = 1.

This means that

S_K^Hol = (δH_ol/H_ol)/(δK/K) = 1 ⟹ δH_ol/H_ol = δK/K,

which means that a 5% error in K would yield a 5% error in H_ol(s). This in turn means that an error of 5% in the controller gain K causes an error of 5% in the car speed (the variable or output being controlled).

The closed-loop sensitivity S_K^Hcl with respect to the parameter K (the gain) is obtained as follows:

S_K^Hcl = (dH_cl/H_cl)/(dK/K) = (K/H_cl)(dH_cl/dK).


From Equation 4.61,

dH_cl/dK = d/dK {[bK/(ms + b)]/[1 + bK/(ms + b)]}
         = [(ms + b + bK)b − bKb]/(ms + b + bK)²
         = b(ms + b)/(ms + b + bK)².

Hence, the closed-loop sensitivity function is given by

S_K^Hcl = (K/H_cl)(dH_cl/dK)
        = [K(ms + b + bK)/(bK)] × [b(ms + b)/(ms + b + bK)²]
        = (ms + b)/(ms + b + bK)
        = 1/[1 + bK/(ms + b)].

This means that

S_K^Hcl = (δH_cl/H_cl)/(δK/K) = 1/[1 + bK/(ms + b)] ⟹ δH_cl/H_cl = {1/[1 + bK/(ms + b)]}(δK/K).

Hence, compared with the open-loop system, the error in the controlled quantity is less sensitive to variations in the controller gain K by a factor

1 + bK/(ms + b) ≡ 1 + KG(s).

If there is an error of 5% in the controller gain K, the error in the car speed will be

5%/[1 + bK/(ms + b)].


Figure 4.76 Defining system sensitivity.

Thus the percentage error will be far less than 5%. Hence this example illustrates a key advantage of closed-loop systems over open-loop systems.

Example 4.10 (a) Explain why sensitivity is an essential concept in control systems.
(b) Consider the unity feedback control system depicted in Figure 4.76. Define and derive the sensitivity function for both the open-loop and closed-loop systems.
(c) Explain why open-loop control is not such a great idea and why closed-loop control is more desirable.

Solution 4.10 (a) The sensitivity of a control system is concerned with how the controlled variable varies with respect to variations in system parameters. These parameters could be plant gains, controller gains or any other parameter on which the system depends. Although the modelling carried out so far has assumed that these parameters are constant, in practical systems they could vary with changes in operating conditions such as temperature and pressure.

(b) With respect to the unity feedback control system depicted in Figure 4.76, the objective is to find the impact of changes in system parameters on the response of the system. In general, the sensitivity of a function F with respect to changes in a parameter P is given by

S_{F:P} = lim_{ΔP→0} (ΔF/F)/(ΔP/P) = lim_{ΔP→0} (P/F)(ΔF/ΔP) ≈ (P/F)(dF/dP)

This is the general definition of sensitivity. Using this formulation, the sensitivity functions of the open-loop and closed-loop systems in Figure 4.76 can be determined, starting with the respective transfer functions:

T_OL = KG(s)

T_CL = KG(s)/[1 + KG(s)]

The sensitivity of the open-loop system is obtained as follows:

T_OL = KG(s)
⟹ S_T_OL = (K/T_OL)(dT_OL/dK) = [K/(KG(s))]G(s)
⟹ S_T_OL = 1


The sensitivity function of the closed-loop system is obtained as follows:

T_CL = KG(s)/[1 + KG(s)]
⟹ S_T_CL = (K/T_CL)(dT_CL/dK) = {K[1 + KG(s)]/(KG(s))} × G(s)/[1 + KG(s)]²
⟹ S_T_CL = 1/[1 + KG(s)]

(c) It is important to note that the sensitivity of an open-loop system is 1, that is, 100%. This means that if there is an error of 5% in the controller gain K, there will be an error of 5% in the open-loop response. The error is reproduced without any attenuation. This is the case for any plant G(s): the open-loop sensitivity does not depend on the plant or the controller. Hence, for all plants or dynamic systems, the open-loop control system reproduces the errors due to control gain inaccuracies, modelling imperfections and external disturbances. This is the main reason why an open-loop control system is not such a great idea. It only works if the model is known perfectly, the control design is perfect and there are no external disturbances to the dynamic system being controlled. On the other hand, the closed-loop sensitivity depends on the control gain K and the plant G(s). For example, by choosing a very large value for K, the sensitivity of the system can be dramatically reduced. This means a closed-loop system can be designed to reduce or eliminate the errors due to modelling and control gain inaccuracies, including those due to external disturbances. This is the main reason why a closed-loop control system is more desirable than an open-loop one. Despite modelling errors, control design imperfections and external disturbances, it works well. This is not the case with an open-loop control system.

Example 4.11 Consider a plant, G(s), which is subjected to a control gain, K, such that

G(s) = 1/[s(s + 10)]

Investigate the sensitivity of the plant's open-loop and closed-loop systems.

Solution 4.11 From the (open-loop and closed-loop control system) sensitivity function derivations in Example 4.10, it follows that

S_T_OL = 1 (always; it does not depend on plant or controller)

S_T_CL = 1/[1 + KG(s)] = 1/{1 + K/[s(s + 10)]} = s(s + 10)/[s(s + 10) + K]

⟹ S_T_CL = s(s + 10)/(s² + 10s + K)

These are the open- and closed-loop sensitivity functions for the system. Clearly, the open-loop sensitivity is always 100%; it does not depend on the plant or the controller, while the closed-loop sensitivity does. The open-loop system reproduces (100% propagation without attenuation) controller and modelling errors, while the closed-loop system can be designed to reject these by increasing K.

4.19

SYSTEM STABILITY

Stability is the most important system specification. If a system is unstable, transient response and steady-state errors are moot points: an unstable system cannot be designed for a specific transient response or steady-state error requirement. What, then, is stability? There are many definitions of stability, depending on the kind of system or the point of view. In this section, we limit ourselves to linear, time-invariant systems, for which a stable system always gives responses appropriate to the stimulus. For linear time-invariant systems, there are three ways of understanding and quantifying stability:

• Bounded Input, Bounded Output (BIBO) stability.
• Asymptotic internal stability.
• Routh-Hurwitz stability.

These three methods are consistent with each other and are used interchangeably.

4.19.1 BOUNDED INPUT-BOUNDED OUTPUT STABILITY

A system is said to be bounded input-bounded output stable if every bounded input produces a bounded output. Consider a general system with an input r(t), output y(t), and impulse response h(t). By convolution,

y(t) = ∫_{−∞}^{∞} h(τ)r(t − τ) dτ.

If the input r(t) is bounded, there exists a constant M such that |r(t)| ≤ M < ∞. It follows that

|y(t)| = |∫_{−∞}^{∞} h(τ)r(t − τ) dτ|
       ≤ ∫_{−∞}^{∞} |h(τ)||r(t − τ)| dτ
       ≤ M ∫_{−∞}^{∞} |h(τ)| dτ.

Therefore, the output is bounded if and only if ∫_{−∞}^{∞} |h(τ)| dτ is bounded. Hence a system is BIBO stable if and only if its impulse response h(t) is absolutely integrable, that is,

∫_{−∞}^{∞} |h(τ)| dτ < ∞.
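The absolute-integrability condition can be illustrated numerically. The sketch below is plain Python written for this discussion (not from the text): it approximates ∫|h(τ)|dτ over a growing interval for a decaying impulse response h(t) = e^(−2t), where the integral converges (BIBO stable), and for h(t) = sin t, where it grows without bound.

```python
import math

def abs_integral(h, t_end, dt=1e-3):
    """Riemann approximation of the integral of |h| over [0, t_end]."""
    n = int(t_end / dt)
    return sum(abs(h(i * dt)) for i in range(n)) * dt

decaying = lambda t: math.exp(-2.0 * t)  # integral converges to 1/2
oscillatory = lambda t: math.sin(t)      # integral grows like (2/pi)*t

for t_end in (10.0, 50.0, 100.0):
    print(t_end, abs_integral(decaying, t_end), abs_integral(oscillatory, t_end))
```

As the interval grows, the first integral settles at 0.5 while the second keeps increasing, which is why h(t) = sin t corresponds to a marginally stable, not BIBO stable, system.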

4.19.2 ASYMPTOTIC INTERNAL STABILITY

A general system transfer function is given by

H(s) = Y(s)/U(s)
     = (b0·s^m + b1·s^(m−1) + b2·s^(m−2) + ··· + bm)/(a0·s^n + a1·s^(n−1) + a2·s^(n−2) + ··· + an)   (4.62)
     = K·Π_{j=1..m}(s − z_j)/Π_{i=1..n}(s − p_i).   (4.63)

The solution to the input-output (homogeneous) differential equation that corresponds to the characteristic equation of the transfer function in Equation 4.62 is given by

y(t) = Σ_{i=1..n} K_i·e^(p_i·t),

where {p_i} are the roots of the characteristic equation and the K_i depend on the initial conditions. The roots are the system's poles and have special significance with respect to the stability of any linear time-invariant system. The location of these poles in the complex plane determines whether a system is stable or not. A system is stable if and only if e^(p_i·t) decays to zero for all poles {p_i} as t is increased to ∞, i.e., e^(p_i·t) → 0 as t → ∞ for all {p_i}. This happens if and only if all the poles {p_i} are strictly in the LHP, that is, Re{p_i} < 0. Hence, stability is guaranteed only when all system poles are strictly in the left-hand plane (LHP). Poles strictly in the right-hand plane (RHP) lead to instability, while non-repeated poles on the imaginary axis lead to marginal stability. This is the asymptotic internal stability criterion, and it can be applied by computing the location of the roots of the characteristic equation and checking whether the real parts of the roots are strictly less than zero. These three exhaustive scenarios for system stability are demonstrated in Figures 4.77 and 4.78. Figures 4.79 and 4.80 summarise the different types of stability for dynamic systems and the associated behaviours and characteristics. It is important to note that repeated poles on the imaginary axis lead to instability and not marginal stability. In a nutshell, as already stated, stable systems have closed-loop transfer functions with poles strictly in the left half-plane (LHP). Unstable systems have closed-loop transfer functions with at least one pole in the right half-plane (RHP). Marginally stable systems have closed-loop transfer functions with non-repeated poles on the imaginary axis.

Example 4.12 By locating their poles in the s-plane, determine the stability of systems with the following Transfer Function models.

(a) T(s) = 1/(s + 2)
(b) T(s) = 1/(s − 3)
(c) T(s) = 1/(s^2 + 1)
(d) T(s) = 1/s^2


Figure 4.77 Impact of pole location on stability (asymptotic internal).

Figure 4.78 Impact of pole location on stability [13].


Figure 4.79 Stability in dynamic systems: a summary.

Figure 4.80 Stability in dynamic systems: a summary (cont.).
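The pole-location rules summarised in these figures can be condensed into a short classifier. The sketch below is plain Python written for this discussion (not from the text): given a system's poles as complex numbers, it applies the LHP/RHP/imaginary-axis rules, treating repeated imaginary-axis poles as unstable.

```python
def classify(poles, tol=1e-9):
    """Classify an LTI system from its pole locations.

    All poles strictly in the LHP -> 'stable'; any RHP pole or a
    repeated imaginary-axis pole -> 'unstable'; otherwise
    non-repeated imaginary-axis poles -> 'marginally stable'.
    """
    if any(p.real > tol for p in poles):
        return "unstable"
    axis = [p for p in poles if abs(p.real) <= tol]
    for i, p in enumerate(axis):
        if any(abs(p - q) <= tol for q in axis[i + 1:]):
            return "unstable"  # repeated pole on the imaginary axis
    return "marginally stable" if axis else "stable"

print(classify([-2]))                # stable
print(classify([3]))                 # unstable
print(classify([1j, -1j]))           # marginally stable
print(classify([0j, 0j]))            # unstable (double pole at the origin)
print(classify([-1 + 1j, -1 - 1j]))  # stable
```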


(e) T(s) = 1/[(s + 1)(s − 3)]
(f) T(s) = 1/(s^2 + 2s + 2)

Solution 4.12 Locating the poles in the s-plane and deducing stability:

(a) T(s) = 1/(s + 2): the pole is at −2 (pole in the LHP), hence the system is stable.

(b) T(s) = 1/(s − 3): the pole is at +3 (pole in the RHP), hence the system is unstable.

(c) T(s) = 1/(s^2 + 1): the poles are at −j and +j (non-repeated poles on the imaginary axis), hence the system is marginally stable.

(d) T(s) = 1/s^2: there is a repeated (double) pole at the origin (repeated poles on the imaginary axis), hence the system is unstable.

(e) T(s) = 1/[(s + 1)(s − 3)]: the poles are at −1 and 3 (at least one pole in the RHP), hence the system is unstable.

(f) T(s) = 1/(s^2 + 2s + 2): the poles are at −1 + j and −1 − j (all poles in the LHP), hence the system is stable.

4.19.3 ROUTH-HURWITZ STABILITY CRITERION

It is not always easy to explicitly determine the roots of high-order polynomial functions (characteristic equations). The Routh-Hurwitz stability criterion allows the determination of stability without solving for the roots of the characteristic equation. This is achieved by analysing the coefficients of the characteristic equation and expressions derived from these coefficients. Consider the general characteristic equation

a0·s^n + a1·s^(n−1) + a2·s^(n−2) + ··· + an = 0.   (4.64)

From the definition of asymptotic stability, all roots of the characteristic equation must have negative real parts, which implies that a necessary but not sufficient condition for stability is that all the coefficients {a_i} must be present and positive. It is instructive to restate that the stability of any linear time-invariant system can be determined from the location of its poles, and this is consistent with the Routh-Hurwitz stability criterion discussed in this section.


4.19.3.1 Summary of the Routh-Hurwitz Stability Procedure

• Inspect the characteristic equation; if any coefficient is missing (zero) or negative, then the system is unstable.
• If all the coefficients are present, then construct a triangular array that is a function of the coefficients {a_i}.
• For stability, all the elements in the first column of the array must be positive. This is the necessary and sufficient condition for stability.

4.19.3.2 The Array

Arrange the coefficients of the characteristic equation in two rows, beginning with the first and second coefficients and followed by the even-numbered and odd-numbered coefficients, respectively. Subsequent rows are then added. The complete general Routh-Hurwitz array is represented as follows:

s^n       a0    a2    a4    ···
s^(n−1)   a1    a3    a5    ···
s^(n−2)   b1    b2    b3    ···
s^(n−3)   c1    c2    c3    ···
  ⋮        ⋮     ⋮     ⋮
s^2       *     *
s^1       *
s^0       *

The b and c terms are computed as follows:

b1 = −det[a0 a2; a1 a3]/a1 = (a1·a2 − a0·a3)/a1
b2 = −det[a0 a4; a1 a5]/a1 = (a1·a4 − a0·a5)/a1
c1 = −det[a1 a3; b1 b2]/b1 = (b1·a3 − a1·b2)/b1
c2 = −det[a1 a5; b1 b3]/b1 = (b1·a5 − a1·b3)/b1

If at least one element of the first column is not positive, this means that there are some roots in the RHP, which in turn means that the system is unstable. The number of roots in the RHP is equal to the number of sign changes. For example, if the first column consists of the following five elements:


2, −1, 7, 0.5, 13 =⇒ 2 sign changes =⇒ 2 roots in the RHP.

Put differently, given the general system's characteristic equation in Equation 4.64, the procedure to construct the complete Routh-Hurwitz array (shown in the preceding section) is as follows. Write the coefficients of the characteristic equation in two rows:

• The first row begins with a0.
• The second row begins with a1.
• The other coefficients then alternate between the two rows.
• Continue with the process until no coefficients are left.
• The two rows should be of the same length; add a zero as the last coefficient if necessary.
• There should be n + 1 rows (of the same length), including the first row.
• The last entries of the lower rows are mostly zeroes.
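The two-row construction and the b/c recurrences above translate directly into code. The sketch below is plain Python written for this discussion (not from the text): it builds the Routh-Hurwitz array for numeric coefficients and counts the sign changes in the first column, assuming no first-column entry works out to exactly zero (that case needs the ε treatment described later).

```python
def routh_array(coeffs):
    """Routh-Hurwitz array for a0*s^n + a1*s^(n-1) + ... + an.

    Returns n + 1 rows (first row for s^n). Assumes no first-column
    entry is exactly zero (the epsilon special case is not handled).
    """
    width = (len(coeffs) + 1) // 2
    rows = [
        (list(coeffs[0::2]) + [0.0] * width)[:width],  # s^n row
        (list(coeffs[1::2]) + [0.0] * width)[:width],  # s^(n-1) row
    ]
    for _ in range(len(coeffs) - 2):
        top, above = rows[-2], rows[-1]
        # column-by-column: b1 = (a1*a2 - a0*a3)/a1, and so on
        new = [(above[0] * top[j + 1] - top[0] * above[j + 1]) / above[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    return rows

def rhp_count(coeffs):
    """Sign changes in the first column = number of RHP roots."""
    col = [row[0] for row in routh_array(coeffs)]
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

# s^3 + 10s^2 + 31s + 1030: first column 1, 10, -72, 1030 -> 2 RHP roots
print([row[0] for row in routh_array([1, 10, 31, 1030])])
print(rhp_count([1, 10, 31, 1030]))  # 2
```

The polynomial used here is the characteristic equation of Example 4.13, and the two sign changes (1 to −72, then −72 to 1,030) match the conclusion reached there.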

Example 4.13 Consider the closed-loop unity feedback control system given in Figure 4.81. Using the Routh-Hurwitz criterion, determine the stability of a control system with K = 1,030 and a plant transfer function G(s) given by

G(s) = 1/(s^3 + 10s^2 + 31s).

(Hint: First find the Transfer Function model T(s).)

Solution 4.13 The closed-loop Transfer Function model is given by

T(s) = [1,030/(s^3 + 10s^2 + 31s)]/[1 + 1,030/(s^3 + 10s^2 + 31s)]
     = 1,030/(s^3 + 10s^2 + 31s + 1,030).

Therefore the closed-loop characteristic equation is given by

s^3 + 10s^2 + 31s + 1,030 = 0.

The completed Routh-Hurwitz array is then given in Figure 4.82. There are four rows with three elements each, where the last two entries of the last two rows are zeroes. In Figure 4.82 there are two sign changes in the first column (1 to −72, and then −72 to 1,030). This means there are two poles in the RHP, hence the system is unstable. It is essential to note that any row can be multiplied by a positive constant without changing the values of the rows below. In this example, 10 and 1,030 can be multiplied by 0.1 without affecting the values of rows three and four. This is illustrated in Figure 4.83. As before, there are two sign changes in the first column (1 to −72, and then −72 to 103). This means there are two poles in the RHP, hence the system is unstable, as already established.

Figure 4.81 Closed-loop unity feedback block diagram.


Figure 4.82 Routh-Hurwitz array.

Figure 4.83 Routh-Hurwitz array.

4.19.3.3 A Special Case: First Term in a Row Is Zero

If the first term in one of the rows is zero, replace the zero with ε, proceed with the Routh-Hurwitz stability procedure, and then apply the stability criterion as ε → 0.

4.20 WORKED EXAMPLES OF SYSTEM STABILITY

Example 4.14 Consider a linear and time-invariant system with a Transfer Function model given by

T(s) = N(s)/D(s).

(a) If a unit impulse signal is applied, demonstrate the implication of the location of the system's poles for the stability of the system.
(b) Extend the analysis and establish the conditions for stability for a general unity feedback closed-loop system depicted in Figure 4.84.

Solution 4.14 (a) Given a linear and time-invariant system described by a Transfer Function model

T(s) = N(s)/D(s),


Figure 4.84 A general unity feedback control system.

if a unit impulse signal is applied, then R(s) = 1 and it follows that

Y(s) = T(s)R(s) = N(s)/D(s) = A1/(s − p1) + A2/(s − p2) + ··· + An/(s − pn).

The roots of the denominator (p1, p2, ..., pi, ..., pn) are the poles of the system, and have special significance with respect to the stability of a linear time-invariant system. The location of these poles determines whether a system is stable or not. Taking the inverse Laplace transform of Y(s) gives the output signal

y(t) = A1·e^(p1·t) + A2·e^(p2·t) + ··· + An·e^(pn·t).

This is the system's time response. Clearly, from the expression for y(t), the response grows without bound if Re{pi} > 0 for any pole pi, implying system instability. Hence, stability is guaranteed only when all poles are strictly in the left-hand plane (LHP). Poles strictly in the right-hand plane (RHP) lead to instability, while non-repeated poles on the imaginary axis lead to marginal stability.

(b) The analysis of the impact of pole location on stability can be extended to general closed-loop control systems. Given the general unity feedback control system depicted in Figure 4.84, the objective is to design a controller C(s) such that the closed-loop transfer function is given by

T(s) = Y(s)/R(s) = G(s)C(s)/(1 + G(s)C(s)).

With respect to this system, for stability, all poles must lie in the left half-plane (LHP). This means all roots of the equation 1 + G(s)C(s) = 0 must lie in the left half-plane (LHP). Any poles in the right-hand plane (RHP) imply instability, and any non-repeated poles on the imaginary axis lead to marginal stability. Hence, it can be concluded that the stability of any linear time-invariant closed-loop control system can be determined from the location of its closed-loop poles. This is the asymptotic stability criterion and is consistent with the Routh-Hurwitz stability criterion.


Example 4.15 Consider the process plant control system shown below.

A Process Plant Control System

Find the range of K that will make the system stable. (Hint: First find the Transfer Function model T(s).)

Solution 4.15 The closed-loop Transfer Function model is obtained as follows:

T(s) = KG(s)/(1 + KG(s))
     = [K/(s(s^2 + s + 1)(s + 2))]/[1 + K/(s(s^2 + s + 1)(s + 2))]
     = K/[s(s^2 + s + 1)(s + 2) + K]
     = K/(s^4 + 3s^3 + 3s^2 + 2s + K).

Therefore, the characteristic equation is given by

s^4 + 3s^3 + 3s^2 + 2s + K = 0.

Using the Routh-Hurwitz procedure, an array of coefficients can be constructed:

s^4    1            3    K
s^3    3            2    0
s^2    7/3          K    0
s^1    2 − 9K/7     0    0
s^0    K            0    0

For stability, K must be positive, and all coefficients in the first column must be positive. Therefore,

K > 0 and 2 − 9K/7 > 0
=⇒ 0 < K < 14/9.

This is the range of K that will permit system stability. When K = 14/9, the system becomes oscillatory with a constant amplitude. This is called marginal stability or indeterminate stability.
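The marginal value K = 14/9 can be verified by substituting s = jω into the characteristic equation: on the stability boundary both the real and imaginary parts must vanish. The sketch below is plain Python written for this discussion (not from the text); the frequency ω = sqrt(2/3) comes from setting the imaginary part 2ω − 3ω^3 to zero.

```python
import math

def p(s, K):
    """Characteristic polynomial s^4 + 3s^3 + 3s^2 + 2s + K."""
    return s**4 + 3 * s**3 + 3 * s**2 + 2 * s + K

K_marginal = 14.0 / 9.0
omega = math.sqrt(2.0 / 3.0)  # root of the imaginary part 2w - 3w^3 = 0

print(abs(p(1j * omega, K_marginal)))        # ~0: jw is a closed-loop pole
print(abs(p(1j * omega, K_marginal + 0.1)))  # nonzero just above the boundary
```

At K = 14/9 the polynomial has a purely imaginary root pair, which is the sustained constant-amplitude oscillation noted above.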


Example 4.16 Consider the closed-loop control system shown below.

Closed-Loop Control System

(a) Determine the transfer function from r to y.
(b) Determine the transfer function from w to y.
(c) Find the range of (K1, K2) for which the system is stable.
(d) What is the system type with respect to r and w?

Solution 4.16 The solution is outlined below.

(a)

Tr(s) = Y(s)/R(s) = 10(K1 + K2·s)/(s[s(s + 1) + 20] + 10(K1 + K2·s)).

(b)

Tw(s) = Y(s)/W(s) = 10s/(s[s(s + 1) + 20] + 10(K1 + K2·s)).

(c) The characteristic equation is given by

s^3 + s^2 + (10K2 + 20)s + 10K1 = 0.

The Routh-Hurwitz array:

s^3    1                       10K2 + 20
s^2    1                       10K1
s^1    10K2 + 20 − 10K1        0
s^0    10K1                    0

For stability,

10K1 > 0 =⇒ K1 > 0
10K2 + 20 − 10K1 > 0 =⇒ K2 > K1 − 2.

Therefore, the conditions for stability are given by K1 > 0 and K2 > K1 − 2.

(d) System type

(i) The expression for the steady-state error with respect to the reference can be simplified as follows:


erss = lim_{s→0} (1 − Tr)/s^k
     = lim_{s→0} [1 − 10(K1 + K2·s)/(s[s(s + 1) + 20] + 10(K1 + K2·s))]/s^k
     = lim_{s→0} (s[s(s + 1) + 20]/(s[s(s + 1) + 20] + 10(K1 + K2·s)))/s^k
     = lim_{s→0} s^(1−k)·[s(s + 1) + 20]/(s[s(s + 1) + 20] + 10(K1 + K2·s)).   (4.65)

To obtain the system type, try increasing values of k starting from 0 in the expression in Equation 4.65 until a nonzero constant is obtained:

k = 0 =⇒ erss = 0/(10K1) = 0
k = 1 =⇒ erss = 20/(10K1) = 2/K1 (a nonzero constant)

=⇒ System type with respect to the reference is I.

(ii) The expression for the steady-state error with respect to the disturbance can be simplified as follows:

ewss = lim_{s→0} (−Tw)/s^k
     = lim_{s→0} [−10s/(s[s(s + 1) + 20] + 10(K1 + K2·s))]/s^k
     = lim_{s→0} −10s^(1−k)/(s[s(s + 1) + 20] + 10(K1 + K2·s)).   (4.66)

To obtain the system type, try increasing values of k starting from 0 in the expression in Equation 4.66 until a nonzero constant is obtained:

k = 0 =⇒ ewss = −0/(10K1) = 0
k = 1 =⇒ ewss = −10/(10K1) = −1/K1 (a nonzero constant)

=⇒ System type with respect to the disturbance is I.
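These limits can be sanity-checked by evaluating the error expressions at a very small value of s. The sketch below is plain Python written for this discussion (not from the text); the gains K1 = 2, K2 = 1 are arbitrary values satisfying the stability conditions found in part (c).

```python
K1, K2 = 2.0, 1.0  # arbitrary gains with K1 > 0 and K2 > K1 - 2

def den(s):
    return s * (s * (s + 1) + 20) + 10 * (K1 + K2 * s)

def e_ref(s, k):
    """(1 - Tr)/s^k, the reference error expression of Equation 4.65."""
    Tr = 10 * (K1 + K2 * s) / den(s)
    return (1 - Tr) / s**k

def e_dist(s, k):
    """(-Tw)/s^k, the disturbance error expression of Equation 4.66."""
    Tw = 10 * s / den(s)
    return -Tw / s**k

s = 1e-6  # approximate the limit s -> 0
print(e_ref(s, 0), e_dist(s, 0))  # both ~0: zero steady-state step errors
print(e_ref(s, 1))                # ~2/K1 = 1.0 for a ramp reference
print(e_dist(s, 1))               # ~-1/K1 = -0.5 for a ramp disturbance
```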


Example 4.17 The following figure shows the speed control of an assembly plant.

Speed Control of an Assembly Plant

Determine and plot the range of K and p that permits stable operation.

Solution 4.17 The transfer function for the system is given by

T(s) = K/(s^3 + (p + 1)s^2 + ps + K).

The stability condition is established by using the following Routh-Hurwitz array:

s^3    1        p
s^2    1 + p    K
s^1    b        0
s^0    K        0

From this array, the conditions for stability are established as follows:

K > 0

(1 + p) > 0 =⇒ p > −1, but p > 0 (coefficient of s) =⇒ p > 0

b = (p^2 + p − K)/(1 + p) > 0 =⇒ K < p^2 + p

=⇒ 0 < K < p^2 + p and p > 0.

The curve of K vs. p, where K = p^2 + p, is then plotted in Figure 4.85. For the system to be stable, the desired region is the shaded area, as illustrated in Figure 4.85.

Example 4.18 The feedback control system of a chemical plant is given in Figure 4.86. How many of the system's poles are not in the LHP?


Figure 4.85 Stability region for assembly plant.

Figure 4.86 Chemical plant control system.

Solution 4.18 The system transfer function is given by

T(s) = KG(s)/(1 + KG(s))
     = [4.5 × 2/(s^5 + 3s^4 + 2s^3 + 6s^2 + 6s)]/[1 + 4.5 × 2/(s^5 + 3s^4 + 2s^3 + 6s^2 + 6s)]
     = 9/(s^5 + 3s^4 + 2s^3 + 6s^2 + 6s + 9).

The next step is constructing the Routh-Hurwitz array:

s^5    1                      2    6
s^4    3                      6    9
s^3    ε                      3    0
s^2    (6ε − 9)/ε             9    0
s^1    3 − 3ε^2/(2ε − 3)      0    0
s^0    9                      0    0

The 0 that is the first element of the s^3 row has been replaced by ε. The stability criterion is applied by taking the limit as ε → 0. There are two sign changes in the first column of the array (from 0 to −∞ and then from −∞ to 3), which means that there are two poles that are not in the LHP.

4.21 SYSTEM TYPE BASED ON STABILITY

Systems can be put into categories based on the type of stability they exhibit: absolute, marginal, and conditional.

4.21.1 ABSOLUTELY STABLE SYSTEM

An absolutely stable control system is one that is stable for the whole range of system parameter values. This occurs if all the poles of the control system's transfer function are strictly in the LHP.

4.21.2 MARGINALLY STABLE SYSTEM

A marginally stable system produces an output signal with constant amplitude and constant frequency of oscillations for a bounded input. This occurs if any non-repeated pole of the control system's transfer function lies on the imaginary axis.

4.21.3 CONDITIONALLY STABLE SYSTEM

A conditionally stable control system is one that is only stable for a specific range of system parameter values. Outside that range, the system is unstable. An example of a control system exhibiting conditional stability is given in Figure 4.87. The system Transfer Function model is given by

T(s) = G(s)/(1 + G(s))
     = [K/(s(s + 3)^2)]/[1 + K/(s(s + 3)^2)]
     = K/(s(s + 3)^2 + K)
     = K/(s^3 + 6s^2 + 9s + K)

=⇒ s^3 + 6s^2 + 9s + K = 0 (characteristic equation).   (4.67)

Figure 4.87 Control system with conditional stability.

The Routh-Hurwitz array is obtained as:

s^3    1               9
s^2    6               K
s^1    (54 − K)/6      0
s^0    K               0

For stability, the elements of the first column must be greater than zero. This means (54 − K)/6 > 0 and K > 0, which gives 0 < K < 54. For instability the condition is K > 54 or K < 0, and marginal stability is obtained when K = 54 or K = 0. These results can also be obtained by directly solving the characteristic equation (Equation 4.67) as follows:

s^3 + 6s^2 + 9s + K = 0 with s = jω
=⇒ (K − 6ω^2) + j(9ω − ω^3) = 0
=⇒ ω = 0, ±3 and K = 6ω^2
=⇒ K = 54 or K = 0.

The root locus (MATLAB plot) of this control system is given in Figure 4.88 and confirms the analytically established results. From this MATLAB plot, the system is unstable when K > 54 or K < 0, stable for 0 < K < 54, and marginally stable for K = 54 or K = 0. This validates the outcomes of the Routh-Hurwitz method and graphically illustrates conditional stability. Root locus techniques are introduced and developed in detail in Chapter 5.

Figure 4.88 Root locus showing conditional stability.
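The boundary value K = 54 can also be confirmed with a direct substitution: a purely imaginary pole s = j3 must make the characteristic polynomial vanish. A quick check in plain Python (written for this discussion, not from the text):

```python
def char_poly(s, K):
    """Characteristic polynomial s^3 + 6s^2 + 9s + K of Equation 4.67."""
    return s**3 + 6 * s**2 + 9 * s + K

print(abs(char_poly(3j, 54)))  # ~0: s = j3 is a pole when K = 54
print(abs(char_poly(3j, 40)))  # nonzero: no imaginary-axis pole at K = 40
```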

4.22 TIME DELAYS IN CONTROL SYSTEMS

Time delays or dead-times often occur in the signals that flow between components of a control system. Such time delays exist in thermal, hydraulic, and pneumatic systems. For example, when a tap is turned on at one end of a long hose pipe, it takes some time before the water appears at the far end. Also, when the feed to a conveyor belt is changed, it takes some time to observe the change at the other end. In fact, time delays are inevitable in networked control systems. Their occurrence may degrade the performance of such systems and might even lead to system instability. It is prudent to identify the maximum allowable time delay for maintaining system stability once the structure of a networked control system is established. This will also enhance the controller design process. Time delays between inputs and outputs are quite common in industrial, engineering, economic, and biological systems. Lags due to transportation, measurement, analysis time, computation, and communication introduce time delays into control system loops. This complicates the analysis and design of feedback control systems and makes satisfactory control more difficult to achieve.

4.22.1 IMPACT OF TIME DELAY IN CONTROL SYSTEMS

Various issues arise because of time delays. Given the high prevalence of time delays in most practical control systems, we must outline some of these matters.

• Time delays produce poor performance and sluggish responses in control systems.
• The unbounded negative phase angle introduced by a time delay may lead to instability in feedback systems.
• The overshoot and the settling time are lengthened when a time delay occurs. The closed-loop control system may become unstable when the time delay exceeds a particular value.
• An increase in time delay moves a control system closer to instability unless it is retuned. Time delay slows down the setpoint response.
• To avoid compromising the performance of the closed-loop system, time delays must be explicitly accounted for. A good example is the use of the Smith Predictor.
• Instability in feedback control systems results from an imbalance between system dynamic lags and the strength of the corrective action.
• When time delays are present in the control loop, controller gains have to be reduced to maintain stability.
• The larger the time delay is relative to the time scale of the process dynamics, the greater the reduction required.

As explained in Chapter 2, a pure time delay function can be modelled by considering a general system g(t) delayed by time τ, such that the system is represented by g(t − τ). Taking Laplace transforms of this function leads to

L{g(t − τ)} = e^(−sτ)·G(s) = Gτ(s)·G(s).

Hence, the Laplace transform of a pure time delay is given by Gτ(s) = e^(−sτ). Figure 4.89 shows the Block Diagram model of a general closed-loop system with a time delay, while Figure 4.90 shows a specific example of such a system. From Figure 4.89, the Transfer Function model of the delayed system is obtained as follows:


Figure 4.89 Time delay in a general closed-loop control system.

Figure 4.90 Time delay in a specific closed-loop control system.

TD(s) = D(s)G(s)e^(−sτ)/(1 + D(s)G(s)e^(−sτ))
      = D(s)G(s)/(D(s)G(s) + e^(sτ)).

Hence the characteristic equation of the system with a time delay is given by

D(s)G(s) + e^(sτ) = 0.

This expression is not algebraic, which indicates the source of instability and poor performance. On the other hand, the system without a time delay has the following Transfer Function model and characteristic equation:

T(s) = D(s)G(s)/(1 + D(s)G(s))
=⇒ 1 + D(s)G(s) = 0.

Clearly, its characteristic equation is algebraic. The specific closed-loop control system with a time delay shown in Figure 4.90 can be used to demonstrate the impact of time delays in closed-loop systems. The Transfer Function models (their detailed derivations are left as an exercise for the reader) for the delayed and un-delayed systems are given by

TD(s) = (2.5s + 0.5)/(2.5s + 0.5 + 5s^2·e^(3.4s) + 5s·e^(3.4s))
T(s) = (2.5s + 0.5)/(5s^2 + 7.5s + 0.5).

The characteristic equation of the system with a time delay is not algebraic, while that of the one without such a delay is. The unit step responses illustrating the impact of the time delay in this specific closed-loop control system are shown in Figure 4.91. A special class of LTI models with time delays are state-space equations with delayed terms, which lead to time delays in the state-space matrices (A, B, C, D). Figure 4.92 shows unit step responses demonstrating the impact of such state-space model time delays. When the characteristic equation of a control system is not an algebraic polynomial, several classical control techniques cannot be used, for example, traditional root locus analysis. There is a need to replace the time delay function e^(−sτ) with a rational function using approximation methods such as the Padé approximation. This is explored and detailed in Chapter 5.
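A first-order Padé approximation replaces e^(−sτ) with the rational function (1 − sτ/2)/(1 + sτ/2), restoring an algebraic characteristic equation. The sketch below is plain Python written for this discussion (not from the text): it compares the approximation with the exact delay on the imaginary axis, using τ = 3.4 as in the example above.

```python
import cmath

def delay_exact(s, tau):
    return cmath.exp(-s * tau)

def delay_pade1(s, tau):
    """First-order Pade approximation of e^(-s*tau)."""
    return (1 - s * tau / 2) / (1 + s * tau / 2)

tau = 3.4
for w in (0.05, 0.1, 0.5, 1.0):
    s = 1j * w
    err = abs(delay_exact(s, tau) - delay_pade1(s, tau))
    print(f"w = {w}: |approx| = {abs(delay_pade1(s, tau)):.3f}, error = {err:.4f}")
```

Like the exact delay, the approximation has unit magnitude on the imaginary axis, and its phase matches well at low frequency; the mismatch grows with ωτ, which is why higher-order Padé approximations are used for larger delays.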


Figure 4.91 Unit step response to a closed-loop system with a time delay.

Figure 4.92 Step response: closed-loop system with time delays in model.

4.22.2 TIME DELAY AND THE SMITH PREDICTOR CONTROLLER

The Smith Predictor Controller is another technique for removing the time delay function from the characteristic equation. Its best-known application is in control systems with time delays. Consider the generic control system with a time delay depicted in Figure 4.89. As already indicated, the characteristic equation is given by

D(s)G(s) + e^(sτ) = 0.


Expressing the controller and the plant transfer functions in terms of numerators and denominators, such that D(s) = nD(s)/dD(s) and G(s) = nG(s)/dG(s), it follows that the characteristic equation can also be represented by

nD(s)nG(s) + e^(sτ)·dD(s)dG(s) = 0.

This is clearly not an algebraic polynomial, which complicates the system's stability and root locus analysis. Figure 4.93 shows a Smith Predictor Controller scheme for a system with a time delay. A key feature is a minor loop around the controller D(s). This diagram can be rearranged into Figure 4.94. If the plant transfer function and the time delay are known perfectly, then the Smith prediction error Ep(s) = 0, and hence the closed-loop system reduces to that in Figure 4.95. From this diagram, the overall Transfer Function model for the control system with a time delay (after transformation by the Smith Predictor Controller) is given by

TSP(s) = D(s)G(s)e^(−sτ)/(1 + D(s)G(s))
=⇒ 1 + D(s)G(s) = 0.

Figure 4.93 Smith predictor controller for time delay systems.

Figure 4.94 Rearranging the smith predictor diagram.

Figure 4.95 Final smith predictor closed-loop diagram.


Figure 4.96 Proportional controller.

Clearly, the time delay function e^(−sτ) has been removed from the characteristic equation of the closed-loop system. This characteristic equation can also be expressed as

nD(s)nG(s) + dD(s)dG(s) = 0.

Again, it is clear that the time delay function has been eliminated. This means the stability and root locus analysis of a Smith Predictor controlled time-delay system are independent of the time delay. In fact, the Smith Predictor Controller is widely used in the broader class of nonminimum-phase systems. It also has applications in the control of nonlinear systems.

PROBLEMS

Problem 4.1 Consider the unity feedback control system depicted in Figure 4.96.

(a) Explain all the labelled signals.
(b) Explain the terms KP and G(s).
(c) Describe how the control system works.
(d) Find the Transfer Function model of the control system.
(e) Deduce one advantage and one disadvantage of a proportional controller.

Problem 4.2 Consider the following generic feedback control system.

A General Control System

(a) Find the transfer function from the reference r(t) to the output y(t), i.e., Tr(s).
(b) Find the transfer function from the disturbance w(t) to the output y(t), i.e., Tw(s).
(c) For a generic reference, r(t) = t^k/k!, show that the steady-state error with respect to the reference is given by

erss = lim_{s→0} (1 − Tr)/s^k.

(d) For a generic disturbance, w(t) = t^k/k!, show that the steady-state error with respect to the disturbance is given by

ewss = lim_{s→0} (−Tw)/s^k.


(e) Explain the meaning of system type in terms of the steady-state errors erss and ewss.

Problem 4.3 In the diagram for Problem 4.1, a PID controller is used in a unity feedback system such that

D(s) = KP + KI/s + KD·s
G(s) = 1/(J·s^2)
r(t) = 1
w(t) = 1.

(a) What is the effect of the PID on the stability of this specific system?
(b) Find the steady-state errors erss, ewss for the system.
(c) What are the system types with respect to the reference and disturbance, respectively?

Problem 4.4 Design Project: The closed-loop control system of the DC motor can be derived and shown to be given by Figure 4.97. The terms Ω(s), Ωr(s) are the Laplace transforms of the actual angular speed w(t) and the desired angular speed wr(t), respectively. E(s) and U(s) are the Laplace transforms of the error and the control signals. The controller D(s) can take the form of any one of the four controllers P, PI, PD, and PID. A PID controller contains all three control components (proportional, derivative, and integral), and hence its transfer function can be represented as follows:

D(s) = KP + KI/s + KD·s
     = (KD·s^2 + KP·s + KI)/s.

Thus, the corresponding block diagram representation takes the form shown in Figure 4.98. This is the generic PID controller, and the other controllers P, PI, and PD can be modelled in a similar fashion.

(a) Characteristics of PID Controllers
By employing MATLAB, implement four closed-loop control systems for the DC motor by using the four controllers (P, PI, PD, and PID) and the data provided. The objective is to satisfy the given

Figure 4.97 Closed-loop motor speed control (PID).

Figure 4.98 A PID controller: block diagram form.


design requirements. From the results, discuss the benefits and drawbacks of the controller components: Proportional (P), Integral (I), and Derivative (D). Also, show and explain how the three components compensate for each other's drawbacks.

(b) Tuning of PID Controllers
From the MATLAB implementation, explain how you choose the gains (KP, KI, KD) for the controllers (P, PI, PD and PID).

DATA
Assume the following values for the physical parameters, which were derived by experiment from an actual motor:
Moment of inertia of the rotor J = 0.012 kg m^2
Damping ratio of the mechanical system b = 0.105 Nm s
Electromotive force constant K = 0.01 Nm/A
Electric resistance R = 1 Ω
Electric inductance L = 0.505 H

DESIGN REQUIREMENTS
Desired angular speed wr(t) = 1 rad/sec
Settling time ts = 1 sec
Overshoot Mp ≤ 20%
Steady-state error ess(t) ≤ 5%

Problem 4.5 (a) Implement the illustrative example of standard PID controller performance presented in Section 4.15.3. Demonstrate that the graphs presented in that section are correct.
(b) Repeat this practical implementation for the second-order system given in Equation 4.68, i.e., determine the various control gains (KP, TI and TD) which will produce similar graphs for the different cases discussed in Section 4.15.3.

G(s) = 1/(s^2 + 1.6s + 8)   (4.68)

Problem 4.6 Consider a dynamic system whose plant transfer function is given by

G(s) = 1/(20s^2).

The plant is in a unity feedback arrangement with a controller D(s) (a cascade controller) as depicted in Figure 4.99. Consider the following four PID-type controllers:

P controller: D(s) = 80
PD controller: D(s) = 80 + 10s
PI controller: D(s) = 80 + 5/s
PID controller: D(s) = 80 + 5/s + 10s

Figure 4.99 Unity feedback system.


(a) For each of the four controllers determine:
(i) the steady-state error for a unit step input;
(ii) the effect of the controller on the stability of the system.
(b) Of the P and PD controllers, which one provides damping? Justify your answer.

Problem 4.7 Consider the control system shown in the following diagram.

A Control System

The controller D(s) is given by

D(s) = K(s + α)/(s^2 + ω^2).

(a) Prove that the system is capable of tracking a sinusoidal reference input, r(t) = sin(ωt), with a zero steady-state error.
(b) Find the range of K and α such that the closed-loop system remains stable. For the case where α = 1, is the closed-loop system asymptotically stable for some set of values of K?

Problem 4.8 Consider the helicopter altitude control problem shown in the following diagram,

Helicopter Altitude Control Problem

where J = helicopter inertia, θr(t) = reference helicopter attitude, θ(t) = actual helicopter attitude, Ks = sensor gain, Kr = reference scaling gain, and ω(t) = disturbance torque.

(a) Use proportional control, D(s) = KP. Does this controller provide additional damping?
(b) Use PD control, D(s) = KP + KD·s. Determine the tracking and disturbance-rejection properties (i.e., find erss and ewss) for step inputs on θr(t) and ω(t).
(c) Use PI control, D(s) = KP + KI/s. Discuss the effect of this controller on the stability of the system.
(d) Use PID control, D(s) = KP + KI/s + KD·s. Discuss the effect of this controller on the stability and steady-state errors of the system.
(e) What are the two system types (one with respect to the reference and the other with respect to the disturbance) when the PID controller is employed?


Design and Analysis of Control Systems

Problem 4.9 The following differential equation describes a DC motor speed control:
ẏ + 60y = 600va − 1,500ω,
where y(t) is the motor speed, va(t) is the armature voltage, and ω(t) is the load torque. Assume the armature voltage is computed using the PI control law
va(t) = KP[r(t) − y(t)] + KI ∫₀ᵗ [r(t) − y(t)] dt,

where r(t) is the reference motor speed.
(a) Derive a block diagram representation of the system.
(b) (i) Compute the transfer function from r(t) to y(t) when ω(t) = 0. (ii) What is the steady-state error due to a ramp reference motor speed, r(t) = t? (iii) Deduce the system type with respect to the reference input r(t).
(c) (i) Compute the transfer function from ω(t) to y(t) when r(t) = 0. (ii) What is the steady-state error due to a disturbance input of the form ω(t) = t? (iii) Deduce the system type with respect to the disturbance ω(t).
(d) For the transfer function in (b)(i), compute the values for KP and KI so that the characteristic equation of the closed-loop system will have roots at −60 ± 60j.
(e) Using the computed values of KP and KI, derive the system's natural response due to the transfer function from ω(t) to y(t).
(f) Verify the solutions to parts (b)(ii), (c)(ii), and (e) using MATLAB.
Problem 4.10 Consider the following unity feedback control system.

Control System Problem
(a) Discuss the effect of different values of (a, b, K) on the shape of the step response of the system.
(b) Determine the system type with respect to r(t).
Problem 4.11 Consider the following control system where the feedback gain β is subject to variations.

Control System with Variations in β The objective is to design a controller for this system so that the output y(t) accurately tracks the reference input r(t).


(a) Let β = 1, and the following three options for the controller Di(s) are available:
D1(s) = K, D2(s) = K/s, D3(s) = K/s²,
where K is a constant. Choose the controller (including a particular value for K) that will result in a type 1 system with a steady-state error of less than 1/10.
(b) Next, suppose that, due to harsh conditions, there is some attenuation in the feedback path modelled by β = 0.9. Find the steady-state error due to a ramp input for the choice of Di(s) in part (a).
(c) If β = 0.9, what is the system type for part (b)?
Problem 4.12 A control system has the structure shown in the following diagram.

Control System
Determine the gain at which the system will become unstable. (Ans: 0 < K < 1.5)
Problem 4.13 Designers have developed a small, fast, vertical-takeoff fighter aircraft that is invisible to radar (stealth aircraft). This aircraft concept uses quickly turning jet nozzles to steer the airplane. The control system for the heading or direction control is shown in the following diagram.

Vertical-Takeoff Aircraft
Determine the maximum gain of the system for stable operation.
Problem 4.14 A system is represented by the equation ẋ = Ax where
A = [  0   1   0
       0   0   1
      −1  −c  −2 ]
Find the range of c for which the system is stable.
Problem 4.15 Determine the stability of systems with the following Transfer Function models by locating their poles in the s-plane.
(a) T(s) = 1/[(s + 1)(s + 2)(s − 3)]
(b) T(s) = (s + 1)/(s + 3)³


(c) T(s) = 1/(s² + 1)²
(d) T(s) = 1/(s² − 2s + 1)

Problem 4.16 Utilising the Routh-Hurwitz criterion, determine the stability of the following polynomials:
(a) s² + 5s + 2
(b) s⁴ + s³ + 3s² + 2s + K
(c) s⁵ + s⁴ + 2s³ + s + 5
(d) s⁵ + s⁴ + 2s³ + s² + s + K
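The Routh array needed for Problem 4.16 can be built mechanically. The book's verification tool is MATLAB; the sketch below does the same bookkeeping in Python/NumPy, and handles only the regular case (it assumes no zero ever appears in the first column while the array is being built, which holds for parts (a) and (c); parts (b) and (d) depend on K):

```python
import numpy as np

def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by its
    coefficients (highest power first). Regular case only: assumes no
    zero appears in the first column during construction."""
    n = len(coeffs) - 1                           # polynomial order
    cols = (n // 2) + 1
    rows = np.zeros((n + 1, cols))
    rows[0, : len(coeffs[0::2])] = coeffs[0::2]   # s^n row
    rows[1, : len(coeffs[1::2])] = coeffs[1::2]   # s^(n-1) row
    for i in range(2, n + 1):
        for j in range(cols - 1):
            rows[i, j] = (rows[i - 1, 0] * rows[i - 2, j + 1]
                          - rows[i - 2, 0] * rows[i - 1, j + 1]) / rows[i - 1, 0]
    return rows[:, 0]

# Part (a): s^2 + 5s + 2 -- all first-column entries positive => stable
print(routh_first_column([1, 5, 2]))
# Part (c): s^5 + s^4 + 2s^3 + s + 5 -- two sign changes => two RHP roots
print(routh_first_column([1, 1, 2, 0, 1, 5]))
```

The number of sign changes in the first column equals the number of right-half-plane roots, so part (c) is unstable.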

Problem 4.17 Arc welding is one of the most important areas of application for industrial robots. In most manufacturing welding situations, uncertainties in the part’s dimensions, the geometry of the joint, and the welding process require sensors to maintain the weld quality. Several systems use a vision system to measure the geometry of the puddle of melted metal, as shown in the following diagram. The system uses a constant rate of feeding the wire to be melted.

Arc Welding Control System
(a) Calculate the maximum value of K for the system that will result in a stable system.
(b) For 1/2 of the maximum value found in part (a), determine the roots of the characteristic equation.
(c) Estimate the overshoot of the system in part (b) when it is subjected to a step input.
Problem 4.18 A feedback control system is shown in the following diagram.

Feedback Control System
The process transfer function is
G(s) = K(s + 40)/[s(s + 10)],

and the feedback transfer function is H(s) = 1/(s + 20).


(a) Determine the limiting value of gain K for a stable system.
(b) For the gain that results in marginal stability, determine the magnitude of the imaginary roots.
(c) Reduce the gain to 1/2 of the magnitude of the marginal value and determine the relative stability of the system (1) by shifting the axis and using the Routh-Hurwitz criterion and (2) by determining the root locations. Show that the roots are between −1 and −2.
Problem 4.19 A unity feedback control system is shown in the following diagram.

Unity Feedback Control System
Determine the relative stability of the system with the following transfer functions by locating the complex roots in the s-plane.
(a) G(s) = (65 + 33s)/[s²(s + 9)]
(b) G(s) = 24/[s(s³ + 10s² + 35s + 50)]
(c) G(s) = 3(s + 4)(s + 8)/[s(s + 5)²]

Problem 4.20 On July 16, 1993, the elevator in Yokohama's 70-storey Landmark Tower, operating at a peak speed of 45 km/h (28 mph), was inaugurated as the fastest super-fast elevator. To reach such a speed without leaving passengers' stomachs on the ground floor, the lift accelerates gradually over long stretches rather than precipitously. Going up, it reaches full speed only on the 27th floor; it begins to decelerate 15 floors later. The result is a peak acceleration similar to that of other skyscraper elevators – a bit less than a tenth of the force of gravity. Admirable ingenuity has gone into making this safe and comfortable. Special ceramic brakes had to be developed; iron ones would melt. Computer-controlled systems damp out vibrations. The lift has been streamlined to reduce wind noise as it hurtles up and down. One proposed system for the elevator's vertical position is shown in the following diagram.

Elevator Vertical Position Control Determine the range of K for a stable system. Problem 4.21 Consider the case of a navy pilot landing an aircraft on an aircraft carrier. The pilot has three basic tasks, the first of which is to guide the aircraft’s approach to the ship along the extended centreline of the runway. The second task is maintaining the aircraft on the correct glideslope; the third task is maintaining the correct speed. A model of the lateral position control system is shown in the following diagram.


Lateral Position Control System
Determine the range of stability for K ≥ 0.
Problem 4.22 A chemical control system is shown in the following diagram. It is desired that the system be stable and the steady-state error for a unit step input be less than or equal to 0.05 (5%).

A Chemical Control System
(a) Determine the range of α that satisfies the error requirement.
(b) Determine the range of α that satisfies the stability requirement.
(c) Select an α that meets both requirements.
Problem 4.23 The control of the spark ignition of an automotive engine requires constant performance over a wide range of parameters. The control system is shown in the following diagram, where a controller gain K is to be selected.

Spark Ignition Control System
The parameter p is equal to 2 for many autos but can equal zero for those with high performance. Select a gain K that will result in a stable system for both values of p.


Problem 4.24 An automatically guided vehicle on Mars is represented by the following figure.

Mars Vehicle Control System
The system has a steerable wheel in both the front and back of the vehicle, and the design requires that H(s) = Ks + 1. Determine
(a) the value of K required for stability,
(b) the value of K when one root of the characteristic equation is equal to s = −5,
(c) the value of the two remaining roots for the gain in part (b),
(d) the system's response to a step command for the gain selected in part (b).

Problem 4.25 A traffic control signal is designed to control the distance between vehicles, as shown in the following diagram.

A Traffic Control Signal Control
(a) Determine the range of gain K for which the system is stable.
(b) If Km is the maximum value of K so that the characteristic roots are on the jω-axis, then K = Km/N where 6 < N < 7. We desire that the peak time be less than 2 seconds and the percentage overshoot be less than 18%. Determine an appropriate value of N.
Problem 4.26 Consider a unity-feedback control system whose open-loop transfer function is
G(s) = K/[s(Js + B)]

Discuss the effects that varying the values of K and B has on the steady-state error of the unit-ramp response. Sketch typical unit-ramp response curves for small, medium, and large values of K.
Problem 4.27 The following diagram shows three systems:
System I is a positional servo system.
System II is a positional servo system with PD control.
System III is a positional servo system with velocity feedback.


Three Control Systems
Compare the unit-step, unit-impulse, and unit-ramp responses of the three systems. Which system is best with respect to the speed of response and maximum overshoot in the step response?
Problem 4.28 Consider the position control system shown in the following figure.

Control System: Multiple Loop Control
Write a MATLAB programme to obtain a unit-step response and a unit-ramp response of the system. Plot curves x1(t) vs. t, x2(t) vs. t, and e(t) vs. t [where e(t) = r(t) − x1(t)] for both the unit-step and unit-ramp responses.
Problem 4.29 Determine the range of K for the stability of a unity-feedback control system whose open-loop transfer function is
G(s) = K/[s(s + 1)(s + 2)]
Problem 4.30 Consider the unity-feedback control system with the following open-loop transfer function:
G(s) = 10/[s(s − 1)(2s + 3)]
Is this system stable?
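Problem 4.30 can be checked directly by computing the closed-loop poles (here in Python/NumPy rather than the book's MATLAB):

```python
import numpy as np

# With G(s) = 10/(s(s-1)(2s+3)) and unity feedback, the closed-loop
# characteristic equation is
#   s(s-1)(2s+3) + 10 = 2s^3 + s^2 - 3s + 10 = 0.
# The sign change in the coefficients already signals right-half-plane roots.
poles = np.roots([2, 1, -3, 10])
print(np.round(poles, 3))
print("stable" if np.all(poles.real < 0) else "unstable")
```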


Problem 4.31 Consider the system ẋ = Ax where A is given by
A = [  0    1    0
      −b3   0    1
       0   −b2  −b1 ]

(A is called the Schwarz matrix.) Show that the first column of the Routh's array of the characteristic equation |sI − A| = 0 consists of 1, b1, b2, and b1b3.
Problem 4.32 Without using the Routh-Hurwitz criterion, determine if the following systems are asymptotically stable, marginally stable, or unstable. In each case, the closed-loop transfer function is given.
(a) M(s) = 10(s + 2)/(s³ + 3s² + 5s)
(b) M(s) = K/(s³ + 5s + 5)
(c) M(s) = 100/(s³ − 2s² + 3s + 10)
(d) M(s) = (s − 1)/[(s + 5)(s² + 2)]
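For Problem 4.32, the classification can be cross-checked numerically by locating the closed-loop poles. The helper below is a hypothetical sketch in Python/NumPy (the book's tool is MATLAB) and handles only simple jω-axis poles, which suffices for cases (a), (c) and (d); case (b) depends on K:

```python
import numpy as np

# Hypothetical helper: classify stability from closed-loop pole locations.
def classify(den):
    poles = np.roots(den)
    re = poles.real
    if np.any(re > 1e-9):
        return "unstable"
    on_axis = poles[np.abs(re) <= 1e-9]
    # repeated jω-axis poles would make the system unstable; the cases
    # below only have simple ones, so that check is omitted here
    return "marginally stable" if len(on_axis) else "asymptotically stable"

print(classify([1, 3, 5, 0]))    # (a) simple pole at s = 0
print(classify([1, -2, 3, 10]))  # (c) right-half-plane poles
print(classify([1, 5, 2, 10]))   # (d) poles at -5 and ±j√2
```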

Problem 4.33 Using the Routh-Hurwitz criterion, determine the stability of the closed-loop system that has the following characteristic equations. Determine the number of roots of each equation that are in the right half s-plane and on the jω-axis.
(a) s³ + 25s² + 10s + 450 = 0
(b) s³ + 25s² + 10s + 50 = 0
(c) s³ + 25s² + 250s + 10 = 0
Problem 4.34 Given the system in state equation form,
dx(t)/dt = Ax(t) + Bu(t)
where
(a) A = [ 1  0  0
          0 −3  0
          0  0 −2 ],  B = [ 1
                            0
                            1 ]
(b) A = [ 1  0  0
          0 −2  0
          0  0  3 ],  B = [ 0
                            1
                            1 ]

Problem 4.35 Robot control: Let us consider the control of a robot arm. It is predicted that there will be about 100,000 robots in service throughout the world by 2000. The robot is a six-legged micro robot system using highly flexible legs with high-gain controllers that may become unstable and oscillate. Under this condition, the characteristic polynomial is given by
q(s) = s⁵ + s⁴ + 4s³ + 24s² + 3s + 63


Problem 4.36 Welding Control: Large welding robots are used in today’s auto plants. The welding head is moved to different positions on the auto body, and rapid, accurate response is required. A block diagram of a welding head positioning system is shown in the following diagram.

Welding Robot Control System
It is desired to determine the range of K and a for which the system is stable. The characteristic equation is
1 + G(s) = 1 + K(s + a)/[s(s + 1)(s + 2)(s + 3)] = 0
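The (K, a) stability region of this system can be probed numerically by testing the roots of the characteristic polynomial at sample parameter values. The sketch below is in Python/NumPy rather than the book's MATLAB, and the sample points (10, 1) and (70, 1) are illustrative choices, not values from the text:

```python
import numpy as np

# Clearing the denominator of the characteristic equation gives
#   s(s+1)(s+2)(s+3) + K(s+a) = s^4 + 6s^3 + 11s^2 + (6+K)s + Ka = 0.
def is_stable(K, a):
    char = [1, 6, 11, 6 + K, K * a]
    return bool(np.all(np.roots(char).real < 0))

print(is_stable(10, 1), is_stable(70, 1))   # a point inside and a point outside
```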

5 Root Locus Design Methods
5.1 INTRODUCTION

This chapter introduces and discusses the notion of root locus design, explains the procedure for creating root loci, and outlines their uses. Definitions of the necessary terms are provided, including a step-by-step guide to constructing a root locus and details of designing and evaluating controllers using the root locus method. Given a feedback control system, the root locus illustrates how the poles of the closed-loop system vary with system parameters, in particular the closed-loop gain. It is a method that shows how changes in the system's feedback characteristics and other parameters influence the pole locations. The root locus is a powerful graphical method for the analysis and design of control systems. Although it is commonly used to study the effect of control gain variations, it can also be used to plot the roots of any polynomial expressed in the Evans root locus form. Most control systems work by regulating the system they are controlling around a desired operating point. In practice, control systems must be able to regulate around an operating point, reject disturbances, and be robust to changes in their environment. The root locus method helps the designer of a control system understand the controller's stability and robustness properties at an operating point. The material in this chapter enables the reader to create a root locus and use it to understand the closed-loop system behaviour, given an open-loop system and a feedback controller. Case studies and examples that illustrate how to use the root locus for designing a control system are presented.

5.2 ROOT LOCUS
5.2.1 BACKGROUND

The root locus is a powerful graphical method used to analyse and design control systems. Given a feedback control system, the root locus illustrates how the poles of the closed-loop system vary with system parameters, particularly the closed-loop gain. It is a graph of the location of the roots in the s-plane as the system parameters vary. The study of control systems with respect to system parameters assumes importance in light of the following issues:
• How changes in the system's feedback characteristics and other parameters influence the pole locations.
• Identifying the locations of the closed-loop poles in the s-plane as the parameter changes (this produces the root locus).
• Use of the root locus to design and analyse feedback control systems.
• Use of computers (MATLAB) to generate root loci.
• Root locus design when two or more parameters are varying, e.g., PID control where there are three adjustable parameters.
• Determination of control system stability from the intersection (or lack of intersection) of the root loci with the imaginary axis.
• The addition of poles and zeros (compensation) to the open-loop transfer function to influence the root locus, thus satisfying design specifications (compensator design by the root locus method).
• Stability can be improved by the addition of a zero and worsened by the addition of a pole.

DOI: 10.1201/9781032718880-5


5.2.2 DEFINITION

Root locus can be understood as a common technique involving iterating on a design by manipulating the compensator gain, poles, and zeros in the root locus diagram. As a system parameter K varies over a range of values, the root locus diagram shows the trajectories of the closed-loop poles of the feedback system. Consider the block diagram of a general closed-loop system with a proportional controller gain K shown in Figure 5.1. The closed-loop transfer function is given by

T(s) = Y(s)/R(s)
     = KG(s)/[1 + KG(s)]     (5.1)
     ≡ b(s)/a(s),            (5.2)

where a(s) and b(s) are polynomials in s of order n and m, respectively. The characteristic equation of a system is based on the transfer function that models the system. There is only one characteristic equation for a given system. The characteristic equation is defined by equating the transfer function's denominator to zero. Hence, for the system in Figure 5.1 the characteristic equation is given by a(s) = 0 and is thus obtained from

1 + KG(s) = 0.     (5.3)

This is called the Evans root locus form. The root locus is defined as the set of roots that satisfy this equation or as the path traced by the location of the roots of the characteristic equation (the poles of the closed-loop system) in the s-plane as the closed-loop control gain K is varied from zero to infinity, i.e., K ≥ 0. Graphically, the locus is the set of paths in the complex plane traced by the closed-loop poles as the gain is varied. The characteristic equation defines where the poles will be located for any control gain value, K. In other words, it defines the characteristics of the system behaviour for various values of controller gain. On the root locus, the characteristic equation is always satisfied. This means the root locus consists of all values of s that satisfy the system characteristic equation given by Equation 5.3, as the loop control gain K varies from zero to infinity. If the system is a non-unity feedback control system, then Equation 5.3 (the Evans root locus form) takes the structure 1 + KG(s)H(s) = 0. The root locus analysis of unity and non-unity feedback control systems is exactly the same, hence we will concentrate on unity feedback control systems. In the context of root locus design methods, the control gain K is also called the root locus gain. As the gain is varied, the corresponding variations in the poles of the closed-loop system determine the root locus. As the gain increases from zero to infinity, the poles move from the open-loop poles

Figure 5.1 Closed-loop unity feedback block diagram.
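Equation 5.3 can also be explored numerically. The book generates root loci with MATLAB; the short Python/NumPy sketch below illustrates the same idea under an assumed open-loop transfer function G(s) = 1/(s(s+1)) (an illustrative choice, not one of the text's examples): writing G(s) = b(s)/a(s), the closed-loop poles are the roots of a(s) + K·b(s), swept over K.

```python
import numpy as np

# a(s) = s^2 + s is the open-loop denominator, b(s) = 1 the numerator
# (padded to the same length so the arrays can be added).
a = np.array([1.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0])

# Sweeping K from zero upward traces the locus: the poles start at the
# open-loop poles 0 and -1, meet at s = -0.5 (K = 0.25), then split into
# a complex pair.
for K in [0.0, 0.25, 1.0]:
    poles = np.roots(a + K * b)
    print(f"K = {K}: poles = {np.round(poles, 3)}")
```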


along the locus toward open-loop zeros or toward infinity. The root locus gain, K, appears in both the numerator and the denominator of the closed-loop transfer function in Equation 5.1. The root locus is created using only the denominator (the characteristic equation) of the closed-loop transfer function. The varying parameter does not have to be the controller gain, but rather any system parameter. Hence, in general, the root locus illustrates how the poles of the closed-loop system vary with changes in any system parameter. However, before the root locus is determined, the system characteristic equation must be written in the form of Equation 5.3. As indicated earlier, this format is called the Evans root locus form, and it is generalised for any system parameter q by

1 + qP(s) = 0,     (5.4)

where P(s) is a function of s. In this book, emphasis is placed on the root loci with respect to the control gain K (also referred to as the root locus gain), in which case Equation 5.4 reduces to Equation 5.3, that is, q = K and P(s) = G(s). However, all the principles and techniques that are discussed can be applied to root loci with respect to any varying system parameter by starting with Equation 5.4.
5.2.3 MAGNITUDE AND ANGLE CRITERIA

The magnitude and angle criteria are direct results of the definition of the root locus. They constitute another way of expressing the root locus requirements (magnitude and phase angle). The root locus characteristic Equation 5.3 can be expressed in terms of a magnitude and a phase angle as follows:

1 + KG(s) = 0
KG(s) = −1
KG(s) = −1 + j0.

The magnitude of this expression defines the magnitude criterion

|KG(s)| = 1.     (5.5)

Every point on the root locus must satisfy this criterion. The magnitude criterion is used to determine the locations of a set of roots in the s-plane for a given value of K. The phase angle of KG(s) defines the angle criterion

∠KG(s) = tan⁻¹(0/−1) =⇒ ∠KG(s) = (2k + 1)180°,     (5.6)

where k is an integer. Every point on the root locus must satisfy the angle criterion. This criterion is used to determine the departure angles for the parts of the root locus near the open-loop poles and the arrival angles for the parts of the root locus near the open-loop zeros. Equations 5.5 and 5.6 (magnitude criterion and angle criterion) describe what is called the explicit Evans root locus form. For a non-unity feedback control system, the explicit Evans root locus form takes the following format:

|KG(s)H(s)| = 1     (5.7)
∠KG(s)H(s) = (2k + 1)180°.     (5.8)

When used together, the magnitude and angle criteria can be deployed to determine whether a point in the s-plane is on the root locus. On the root locus, the characteristic equation is always satisfied, and hence both the magnitude and angle criteria are also always satisfied.
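The two criteria can be checked numerically at a candidate point. The sketch below, in Python rather than the book's MATLAB, uses an assumed open-loop transfer function G(s) = 1/(s(s+2)) and the candidate point s0 = −1 + j; both are illustrative choices, not examples from the text:

```python
import cmath

# Angle criterion (Equation 5.6): a point s0 on the locus must give
# angle(G(s0)) = (2k+1)*180 degrees. The gain that places a closed-loop
# pole there then follows from the magnitude criterion |K G(s0)| = 1.
def G(s):
    return 1 / (s * (s + 2))

s0 = -1 + 1j                 # candidate point
ang = cmath.phase(G(s0))     # radians; pi means the criterion is met
K = 1 / abs(G(s0))           # magnitude criterion
print(round(ang, 6), round(K, 6))
```

Here G(s0) = −0.5, so the angle is 180° and K = 2, confirming that s0 lies on the locus (indeed s² + 2s + 2 = 0 has roots −1 ± j).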


5.2.4 BREAKPOINT, DEPARTURE AND ARRIVAL ANGLES

The angle of departure is the angle at which the locus leaves a pole in the s-plane. The angle of arrival is the angle at which the locus arrives at a zero in the s-plane. By convention, both types of angles are measured relative to a ray starting at the origin and extending to the right along the real axis in the s-plane. Both arrival and departure angles are found using the angle criterion. When there are multiple poles or zeros at a point in the complex plane, the angles are evenly distributed about the point. Breakpoints (break-away or break-in points) occur on the locus where two or more loci converge (in) or diverge (out). Breakpoints often occur on the real axis, but they may appear anywhere in the s-plane. The loci that approach/diverge from a breakpoint do so at angles spaced equally about the breakpoint. The angles at which they arrive/leave are a function of the number of loci that approach/diverge from the breakpoint. Breakpoints indicate places on the locus where multiple roots exist for some value of the root locus gain. A breakpoint may have more than two loci leading to/from it. The breakpoint indicates a point where a third- or higher-order root exists for some value of K.

5.3 CONSTRUCTING THE ROOT LOCUS

This section outlines the steps to creating a root locus and illustrates the important properties of each step in the process. By the end of this section, the reader should be able to sketch a root locus given the open-loop poles and zeros of a system. Using these steps, the locus will be detailed enough to evaluate the stability and robustness properties of the closed-loop controller. In many cases, the designer of a control system needs a quick estimate of the behaviour of the resulting closed-loop system, and a root locus provides this kind of information.
5.3.1 SUMMARY OF THE ROOT LOCUS STEPS

The procedure of drawing a root locus can be summarised into eight steps:
Step 1: Express the characteristic equation in the Evans root locus form
Step 2: Obtain and draw the open-loop poles (×) and zeros (◦)
Step 3: Draw the part of the locus that lies on the real axis
Step 4: Locate the centroid and sketch the asymptotes (if any)
Step 5: Determine the breakpoint locations (if any)
Step 6: Determine the angles of arrival/departure
Step 7: Calculate the imaginary axis crossings (if any)
Step 8: Draw the rest of the locus by connecting the poles with the breakpoints, axis crossings, asymptotes, and arrival angles. It is important to note that only the locus in the upper or lower half-plane has to be drawn since the root locus is always symmetric about the real axis.
5.3.2 SIMPLE RULES FOR CONSTRUCTING THE ROOT LOCUS

Before the details of establishing the root locus are presented, it is prudent to outline the basic rules involved. This enables the construction of rough sketches of root loci.
• Number of finite open-loop zeros (◦) is denoted by m
• Number of finite open-loop poles (×), denoted by n, is equal to the number of branches
• Root locus starts at open-loop poles (×) (finite ones or those at infinity) and ends at open-loop zeros (◦) (finite ones or those at infinity)
• If KG(s → ∞) → 0 then there are open-loop zeros at infinity
• If KG(s → ∞) → ∞ then there are open-loop poles at infinity
• Number of finite open-loop poles (n) minus number of finite open-loop zeros (m), that is, (n − m), gives the number of zeros at infinity
• Root locus is symmetric about the real axis
• Real-axis segments of the root locus are to the left of an odd number of real-axis finite poles/zeros
With these simple rules, rough sketches of root loci for various control systems can be obtained as illustrated in the following six examples.
Example 5.1 Consider the general unity feedback control system given in Figure 5.1, where the plant Transfer Function model is given by
G(s) = 1/[s(s + 4)]
Find the root locus of the system.
Solution 5.1 The simple rules for constructing the root locus are applied as follows:
• Number of finite open-loop zeros is 0
• Number of finite open-loop poles (0, −4) is 2, hence the number of branches is 2
• KG(s → ∞) → 0, hence there are open-loop zeros at infinity
• Number of open-loop zeros at infinity is given by (2 − 0) = 2
• Given that (i) the root locus starts at poles and ends at zeros, (ii) the root locus is symmetric about the real axis, and (iii) the real-axis segments of the root locus are to the left of one real-axis finite pole, the root locus is easily sketched as shown in Figure 5.2.
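Since the closed-loop characteristic equation of Example 5.1 is s² + 4s + K = 0, the sketch in Figure 5.2 can be cross-checked numerically (here in Python/NumPy rather than the book's MATLAB):

```python
import numpy as np

# The two branches start at the open-loop poles 0 and -4 (K = 0), meet at
# the break-away point s = -2 (K = 4), then head toward the two zeros at
# infinity along the vertical line s = -2 ± jw.
for K in [0, 4, 8]:
    print(K, np.roots([1, 4, K]))
```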

Figure 5.2 Sketch of root locus.
Example 5.2 Consider the general unity feedback control system given in Figure 5.1, where the plant Transfer Function model is given by
G(s) = 1/[s(s + 1)(s + 2)]
Find the root locus of the system.


Solution 5.2 The simple rules for constructing the root locus are applied as follows:
• Number of finite open-loop zeros is 0
• Number of finite open-loop poles (0, −1, −2) is 3, hence the number of branches is 3
• KG(s → ∞) → 0, hence there are open-loop zeros at infinity
• Number of open-loop zeros at infinity is given by (3 − 0) = 3
• Given that (i) the root locus starts at poles and ends at zeros, (ii) the root locus is symmetric about the real axis, and (iii) there is a break-away point between −1 and 0, the root locus is easily sketched as shown in Figure 5.3.
Figure 5.3 Sketch of root locus.
Example 5.3 Consider the general unity feedback control system given in Figure 5.1, where the plant Transfer Function model is given by
G(s) = (s + 3)/[s(s + 1)(s + 2)(s + 4)]
Find the root locus of the system.
Solution 5.3 The simple rules for constructing the root locus are applied as follows:
• Number of finite open-loop zeros (−3) is 1
• Number of finite open-loop poles (0, −1, −2, −4) is 4, hence the number of branches is 4
• KG(s → ∞) → 0, hence there are open-loop zeros at infinity
• Number of open-loop zeros at infinity is given by (4 − 1) = 3
• Given that (i) the root locus starts at poles and ends at zeros, (ii) the root locus is symmetric about the real axis, and (iii) there is a break-away point between −1 and 0, the root locus is easily sketched as shown in Figure 5.4.


Figure 5.4 Sketch of root locus.

Example 5.4 Consider the general unity feedback control system given in Figure 5.1, where the plant Transfer Function model is given by
G(s) = (s + 3)/[(s + 1)(s − 2)]
Find the root locus of the system.
Solution 5.4 The simple rules for constructing the root locus are applied as follows:
• Number of finite open-loop zeros (−3) is 1
• Number of finite open-loop poles (−1, 2) is 2, hence the number of branches is 2
• KG(s → ∞) → 0, hence there are open-loop zeros at infinity
• Number of open-loop zeros at infinity is given by (2 − 1) = 1
• Given that (i) the root locus starts at poles and ends at zeros, (ii) the root locus is symmetric about the real axis, (iii) there is a break-away point between −1 and 2, and (iv) there is a break-in point after −3, the root locus is easily sketched as shown in Figure 5.5. Although this root locus looks complicated, it only has two branches. This is the case because a branch must start at a pole and end at a zero.
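For Example 5.4 the breakpoints can be cross-checked by hand: solving 1 + KG(s) = 0 for K gives K(s) = −(s + 1)(s − 2)/(s + 3), and setting dK/ds = 0 leads (after clearing the denominator) to s² + 6s − 1 = 0. A quick numerical check, in Python/NumPy rather than the book's MATLAB:

```python
import numpy as np

# Breakpoints of Example 5.4: roots of s^2 + 6s - 1 = 0, i.e. s = -3 ± sqrt(10).
bp = np.roots([1, 6, -1])
print(np.round(bp, 3))   # break-away near 0.162, break-in near -6.162
```

Both values are consistent with the sketch: the break-away point lies between −1 and 2, and the break-in point lies to the left of the zero at −3.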

Example 5.5 Consider the general unity feedback control system given in Figure 5.1, where the plant Transfer Function model is given by
G(s) = (s + 1)/[(s + 2)(s² + 2s + 2)]
Find the root locus of the system.


Figure 5.5 Sketch of root locus.

Solution 5.5 The simple rules for constructing the root locus are applied as follows:
• Number of finite open-loop zeros (−1) is 1
• Number of finite open-loop poles (−2, −1 ± j) is 3, hence the number of branches is 3
• KG(s → ∞) → 0, hence there are open-loop zeros at infinity
• Number of open-loop zeros at infinity is given by (3 − 1) = 2
• Given that (i) the root locus starts at poles and ends at zeros, and (ii) the root locus is symmetric about the real axis, the root locus is easily sketched as shown in Figure 5.6.

Figure 5.6 Sketch of root locus.


Example 5.6 Consider the general unity feedback control system given in Figure 5.1, where the plant Transfer Function model is given by
G(s) = (s² + 2s + 2)/(s⁵ + 9s⁴ + 33s³ + 51s² + 26s)
Find the root locus of the system.
Solution 5.6 The simple rules for constructing the root locus are applied as follows:
• Number of finite open-loop zeros (−1 ± j) is 2
• Number of finite open-loop poles (0, −1, −2, −3 ± 2j) is 5, hence the number of branches is 5
• KG(s → ∞) → 0, hence there are open-loop zeros at infinity
• Number of open-loop zeros at infinity is given by (5 − 2) = 3
• Given that (i) the root locus starts at poles and ends at zeros, (ii) the root locus is symmetric about the real axis, and (iii) there is a break-away point and a break-in point at the same spot between −1 and 0, the root locus is easily sketched as shown in Figure 5.7.
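The denominator quoted in Example 5.6 can be cross-checked against the pole list given in the solution (in Python/NumPy rather than the book's MATLAB):

```python
import numpy as np

# The open-loop poles {0, -1, -2, -3 ± 2j} should multiply out to the
# stated denominator s^5 + 9s^4 + 33s^3 + 51s^2 + 26s.
poles = [0, -1, -2, -3 + 2j, -3 - 2j]
den = np.real(np.poly(poles))   # coefficients, highest power first
print(den)
```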

It is important to note that while the sketches presented in the preceding six examples have the correct shape of the root loci of the control systems under consideration, they are not fully accurate root loci. For example, the imaginary axis crossings, the asymptotes and the breakpoints are not accurately determined in these sketches. For more accurate root locus sketches it is important that the root locus steps are explored in full and implemented in the construction of root loci. Alternatively, MATLAB can be used to obtain the accurate root locus of any dynamic system.

Figure 5.7 Sketch of root locus.


5.3.3 DETAILS OF THE ROOT LOCUS STEPS

All the eight root locus steps outlined in Section 5.3.1 are now covered in detail. The important properties of each step are then illustrated. Using these eight steps, the root locus obtained is detailed enough to accurately determine all aspects of the locus such as the imaginary axis crossings, the asymptotes and the breakpoints. Furthermore, the detailed locus enables the evaluation of the stability and robustness properties of the closed-loop controller.
5.3.3.1 Step 1: Evans Root Locus Form

Write the system characteristic equation in the Evans root locus form 1 + KG(s) = 0, where K is the system parameter of interest and G(s) is a function of s.
5.3.3.2 Step 2: Open-Loop Zeros and Poles

Locate the open-loop poles and zeros and denote them by × and ◦, respectively, on the s-plane. Factor G(s) into poles and zeros and rewrite the characteristic equation as follows: 1 + KG(s) = 0 1+K

Πm i=1 (s − zi ) =0 Πnj=1 (s − p j )

Πnj=1 (s − p j ) + KΠm i=1 (s − zi ) = 0. Since the locus represents the path of the roots (specifically, paths of the closed-loop poles) as the root locus gain is varied, the starting point is the location of the roots when the gain of the closedloop system is 0. Each locus starts at an open-loop pole and ends at an open-loop zero. If the system has more poles than zeros, then some of the loci end at zeros located infinitely from the poles. Draw the poles and zeros exactly as they appear in the open-loop system. Include all of the poles and zeros, i.e., poles and zeros of both the controller and the uncontrolled system. The poles will be the starting points of the loci, and the zeros will be the ending points. When K = 0, then the roots of the characteristic equation give the poles of G(s) When K = ∞, then the roots of the characteristic equation give the zeros of G(s) The loci of characteristic equation roots begin at the poles and end at the zeros of G(s) as K increases from 0 to ∞. Note: most functions G(s) have zeros that lie at ∞ because they have more poles than zeros i.e. n > m. (n is the number of poles and m is the number of zeros) (n − m) branches of the root locus approaching the (n − m) zeros at infinity. 5.3.3.3

5.3.3.3 Step 3: Real Axis Crossings

Draw the part of the locus that lies on the real axis. Locate the segments of the real axis that are part of the root loci. The root locus on the real axis lies in a section to the left of an odd number of poles and zeros. Many root loci have paths on the real axis. The real axis portion of the locus is determined by applying the following rule: if an odd number of open-loop poles and open-loop zeros lie to the right of a point on the real axis, that point belongs to the root locus. Note that the real axis section of the root locus is determined entirely by the number of open-loop poles and zeros and their relative locations. Since the final root locus is always symmetric about the real axis, the real axis part is fairly easy to construct. Start at positive infinity on the real axis and move toward the origin until a pole or zero is encountered on the real axis. Draw a line from this pole/zero until the next pole or zero on the real axis is reached. If there are no more poles/zeros, the locus extends to negative infinity on the real axis. Otherwise, the locus starts again at the next pole/zero and continues to its successor, and so on. If there are no poles or zeros on the real axis, then there will be no real axis component to the root locus. Some systems have more than one pole or zero at the same location (this indicates a double, triple, or even higher-order root of the characteristic equation). If there is an odd number of poles or zeros at the same location, the real axis part of the locus continues past that location; if the number is even, the real axis part of the locus stops there. In summary: pick any point on the real axis; if there is an odd number of roots to the right of that point, that point is part of the locus. If there are multiple roots at the same point, the real axis part depends on whether the number of roots there is even or odd.
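The odd-count rule lends itself to a one-line test; a minimal sketch (plain Python, with an illustrative pole set rather than a system from the text):

```python
# A real-axis point belongs to the locus if an odd number of real open-loop
# poles and zeros (counted with multiplicity) lie to its right.
def on_real_axis_locus(x, poles, zeros):
    real_roots = [r.real for r in list(poles) + list(zeros) if abs(r.imag) < 1e-12]
    return sum(1 for r in real_roots if r > x) % 2 == 1

# Illustrative example: poles at 0 and -1, no zeros, i.e. G(s) = 1/(s(s+1)).
print(on_real_axis_locus(-0.5, [0j, -1 + 0j], []))  # True: one pole to the right
print(on_real_axis_locus(-2.0, [0j, -1 + 0j], []))  # False: two poles to the right
```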

5.3.3.4 Step 4: Centroid and Asymptotes

The asymptotes indicate where the poles will go as the gain approaches infinity. For systems with more poles than zeros, the number of asymptotes is equal to the number of poles minus the number of zeros. In some systems, there are no asymptotes; when the number of poles is equal to the number of zeros, each locus terminates at a zero rather than asymptotically at infinity. The asymptotes are symmetric about the real axis, and they stem from a point defined by the relative magnitudes of the open-loop roots. This point is called the centroid. Note that it is possible to draw a root locus for systems with more zeros than poles, but such systems do not represent physical systems; some of the poles can be thought of as being located at infinity in these cases. First, determine how many poles, n, and how many zeros, m, are in the system, then locate the centroid. The number of asymptotes is equal to (n − m), and the location of the centroid σ on the real axis is given by

σ = [∑_{i=1}^{n} p_i − ∑_{j=1}^{m} z_j] / (n − m),

where p_i and z_j are the poles and zeros, respectively. Since the p_i and z_j are symmetric about the real axis, their imaginary parts cancel out and σ is real. Once the centroid has been located, the next step is to draw the asymptotes at the proper angles. The asymptotes leave the centroid at angles defined by

±180°(2q + 1)/(n − m), where q = 0, 1, 2, . . . , (n − m − 1).
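Both formulas are easy to evaluate; a short sketch (Python with NumPy; the pole set 0, −1, −2 is illustrative, and the angles are reported in the equivalent 0°–360° range):

```python
import numpy as np

# Centroid and asymptote angles for an illustrative system with
# poles at 0, -1, -2 and no zeros, i.e. G(s) = 1/(s(s+1)(s+2)).
poles = np.array([0.0, -1.0, -2.0])
zeros = np.array([])

n, m = len(poles), len(zeros)
centroid = (poles.sum() - zeros.sum()) / (n - m)
angles = [180.0 * (2 * q + 1) / (n - m) for q in range(n - m)]

print(centroid)  # -1.0
print(angles)    # [60.0, 180.0, 300.0]
```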

5.3.3.5 Step 5: Breakpoints

Breakpoints occur where two or more loci join and then diverge. Although they are most commonly encountered on the real axis, they can also occur elsewhere in the complex plane. Each breakpoint is a point where a double (or higher-order) root exists for some value of K. Mathematically, from the root locus equation,

1 + KG(s) = 0

KG(s) = −1

K = −1/G(s) = −1/(b(s)/a(s)) = −a(s)/b(s),

where the transfer function G(s) consists of a numerator b(s) and a denominator a(s). The breakpoints can then be determined from the roots of

dK/ds = −[b(s)(da(s)/ds) − a(s)(db(s)/ds)] / [b(s)]² = 0.

If K is real and positive at a value of s that satisfies this equation, then the point is a breakpoint. There will always be an even number of loci around any breakpoint because, for each locus that enters the breakpoint, there must be one that leaves. Perhaps the easiest way to find breakpoints is by trial and error: first determine the characteristic equation of the system in terms of K, then substitute values of s in the vicinity of the suspected breakpoint. A breakpoint occurs where the value of K obtained from the characteristic equation reaches a local maximum or minimum. To calculate the breakpoint explicitly requires taking the derivative of K with respect to s, equating it to zero, and then solving the resulting equation for s and K.
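The derivative condition reduces to polynomial arithmetic. A sketch (Python with NumPy) for the simple plant G(s) = 1/(s(s + 1)), so a(s) = s² + s and b(s) = 1:

```python
import numpy as np

def polyder(p):
    # Derivative of a coefficient array (highest power first).
    n = len(p) - 1
    return np.array([0.0]) if n == 0 else p[:-1] * np.arange(n, 0, -1)

# Breakpoints are roots of b(s)a'(s) - a(s)b'(s) = 0 at which K = -a(s)/b(s)
# is real and positive. Example: G(s) = 1/(s(s+1)), a = s^2 + s, b = 1.
a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0])

num = np.polysub(np.polymul(b, polyder(a)), np.polymul(a, polyder(b)))
candidates = np.roots(num)

for s in candidates:
    K = -np.polyval(a, s) / np.polyval(b, s)
    if abs(K.imag) < 1e-9 and K.real > 0:
        print("breakpoint s =", s, " K =", K.real)  # s = -0.5, K = 0.25
```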

5.3.3.6 Step 6: Angles of Arrival/Departure

The angle criterion determines the direction in which the roots move as the gain goes from zero (angles of departure, at open-loop poles) to infinity (angles of arrival, at open-loop zeros). An angle of departure/arrival is calculated at each complex open-loop pole and zero.

5.3.3.6.1 Angle of Departure

At each complex pole, add up the angles from the zeros to the current pole, then subtract the angles from the other poles to the current pole. In mathematical terms, for a given pole, the angle of departure is

θdep = 180° − ∑_{i=1}^{n} θ_i + ∑_{j=1}^{m} φ_j,

where θ_i is the angle between the ith pole and the given pole, and φ_j is the angle between the jth zero and the given pole. These angles can be calculated using trigonometry.

5.3.3.6.2 Angle of Arrival

At each zero, add up the angles from the poles to the current zero, then subtract the angles from the other zeros to the current zero. In mathematical terms, for a given zero, the angle of arrival is

θarr = 180° + ∑_{i=1}^{n} θ_i − ∑_{j=1}^{m} φ_j,

where θ_i is the angle between the ith pole and the given zero, and φ_j is the angle between the jth zero and the given zero. By convention, the arrival and departure angles are measured relative to the real axis, so that the positive real axis is 0°.


Note that single poles and zeros on the real axis will always have arrival/departure angles equal to 0° or 180° due to the symmetry of the complex conjugates.
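The departure formula translates directly into code. A sketch (plain Python; the pole set −1 ± 2j, −3 with no zeros is illustrative, not a system from the text):

```python
import cmath

# Angle of departure at a complex pole: 180 degrees, minus the angles from the
# other poles to it, plus the angles from the zeros to it.
def departure_angle(pole, other_poles, zeros):
    deg = 180.0 / cmath.pi
    ang = 180.0
    ang -= sum(cmath.phase(pole - p) for p in other_poles) * deg
    ang += sum(cmath.phase(pole - z) for z in zeros) * deg
    return ang % 360.0

# Illustrative system: poles at -1 + 2j, -1 - 2j, -3 and no zeros.
print(departure_angle(-1 + 2j, [-1 - 2j, -3 + 0j], []))  # 45 degrees
```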

5.3.3.7 Step 7: Axis Crossings

The points where the root locus intersects the imaginary axis indicate the values of K at which the closed-loop system is marginally stable. The closed-loop system will be unstable for any gain for which the locus is in the right half-plane of the complex plane. If the root locus crosses the imaginary axis from left to right at a point where K = K0 and then stays completely in the right half-plane, then the closed-loop system is unstable for all K > K0. Therefore, knowing the value of K0 is very useful. Some systems are particularly complex, with a locus that dips back and forth across the imaginary axis; in these systems, increasing the root locus gain causes the system to go unstable initially and then become stable again. Not every locus will have imaginary axis crossings. First, determine whether the locus will definitely cross the imaginary axis (e.g., if there are more than two asymptotes), or whether there is a good chance that it does (for example, if there are poles or zeros close to the imaginary axis, and/or the arrival/departure angles lead to the conclusion that the locus may cross the axis). There are three ways to find the points where the locus intersects the imaginary axis: (1) trial and error (bracketing), (2) the Routh-Hurwitz stability criterion, and (3) solving for ωd and K. The method used depends on the accuracy required for the locations of the axis crossings.

5.3.3.7.1 Trial and Error

Start at the origin in the complex plane. Move up the imaginary axis in discrete steps and calculate the phase of the forward loop transfer function at each step. If the phase passes through 180° between two consecutive points (below 180° at one point and above it at the next, or vice versa), then an axis crossing lies between the two points. If the phase is exactly equal to 180°, then the point is on the locus and is an imaginary axis crossing point. By bracketing regions on the imaginary axis, the axis crossings can be quickly determined. Rather than working up from the origin in regular steps, bracketing uses a binary search approach in which two points are tested, and then another point is chosen based on whether there was a phase change between the two points. If there was a phase change, the third point is chosen between the two; if not, it is chosen outside the two.

5.3.3.7.2 Routh-Hurwitz Stability Criterion

From the characteristic equation, the matrix of coefficients is created, as is done when determining the stability of the system (as developed in Chapter 4). Then, from the matrix of coefficients, solve for K such that the stability criterion is met, and then solve for s to determine where on the imaginary axis the gain K is in effect. Note that this method can be very difficult to use, especially for systems with many poles and zeros.

5.3.3.7.3 Solving for ωd and K

Let s = jωd in the characteristic equation, equate both the real and imaginary parts to zero, then solve for ωd and K. The values of ωd are the frequencies at which the root loci cross the imaginary axis, and the value of K is the root locus gain at which the crossing occurs.
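Method (3) reduces to polynomial root-finding. A sketch (Python with NumPy) for the illustrative characteristic equation s³ + 3s² + 2s + K = 0, where s = jω gives real part K − 3ω² and imaginary part 2ω − ω³:

```python
import numpy as np

# Imaginary part: 2w - w^3 = 0 fixes the crossing frequencies; the real part
# then gives the gain K = 3w^2 at each crossing.
# Expected: w = +/- sqrt(2) with K = 6, plus the trivial w = 0, K = 0.
w_candidates = np.sort(np.roots([-1.0, 0.0, 2.0, 0.0]).real)
for w in w_candidates:
    print("w =", round(w, 4), " K =", round(3 * w**2, 4))
```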

5.3.3.8 Step 8: Sketch the Complete Root Locus

The complete root locus can be drawn by starting from the open-loop poles, connecting the real axis sections, breakpoints, and axis crossings, and then ending at either the open-loop zeros or along the asymptotes to infinity. If the hand-drawn locus is not detailed enough to determine the behaviour of the system, then MATLAB or some other computer tool can be used to calculate the locus exactly. Now sketch in the rest of the locus, using the asymptotes, arrival angles, departure angles, breakpoints, and axis crossings to guide the sketch. The final locus will include these points and will connect them smoothly. The shapes of the locus parts depend on the proximity of the open-loop roots to each other: in general, zeros tend to repel the locus, whereas poles tend to attract it, and one locus segment tends to attract another locus segment until a breakpoint forms. Typically, the exact locus shape only needs to be determined when the locus is near the imaginary axis, or in regions where a detailed understanding of the system behaviour in the time domain is required. In these cases, if the previous steps did not yield locus details that are sufficiently accurate for the specific purposes, a computer tool can be used to generate the locus exactly. Root locus plots for a variety of systems, including the construction marks for arrival/departure angles, asymptotes, breakpoints, and axis crossings, illustrate these rules. Note that in some cases a slight change in pole or zero coordinates can result in a markedly different locus; note also, however, that such small changes to the roots will not change more general locus characteristics, such as the number of asymptotes.

5.3.4 DETERMINING THE ROOT LOCUS GAIN (CONTROL GAIN)

The root locus shows graphically how the system roots will move as the root locus gain is changed. Often, however, one must determine the gain at critical points on the locus, such as points where the locus crosses the imaginary axis. The magnitude criterion is used to determine the value of the root locus gain K at any point on the root locus: the gain is calculated by taking the product of the distances from each pole to the point and dividing it by the product of the distances from each zero to the point. Note that a linear change in position on the locus usually does not correspond to a linear change in the root locus gain.
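A sketch of this computation (Python with NumPy; the plant G(s) = 1/(s(s + 1)) and the chosen locus point are illustrative):

```python
import numpy as np

# Magnitude criterion: K = (product of distances from the poles to the point)
#                          / (product of distances from the zeros to the point).
def locus_gain(s0, poles, zeros):
    num = np.prod([abs(s0 - p) for p in poles]) if poles else 1.0
    den = np.prod([abs(s0 - z) for z in zeros]) if zeros else 1.0
    return num / den

# For G(s) = 1/(s(s+1)) at the locus point s0 = -1/2 + j*sqrt(3)/2:
s0 = complex(-0.5, np.sqrt(3) / 2)
print(locus_gain(s0, [0.0, -1.0], []))  # ~1.0
```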

5.3.5 ROOT LOCUS FOR SECOND-ORDER SYSTEMS

In Chapter 3, the s-plane or complex plane was introduced as a two-dimensional space defined by two orthogonal axes, the real number axis and the imaginary number axis. A complex pole (a complex number) has both a real component (−σ) and an imaginary component (ωd) such that s ≡ −σ + jωd. As discussed in Chapter 3, any second-order system's characteristic equation can be represented by the general equation

s² + 2ξωn s + ωn² = 0.

The corresponding general roots of this equation are given by

s = [−b ± √(b² − 4ac)] / (2a)

= [−2ξωn ± √(4ξ²ωn² − 4ωn²)] / 2

= −ξωn ± ωn√(ξ² − 1)

= −ξωn ± jωn√(1 − ξ²)

≡ −σ ± jωd, where σ = ξωn and ωd = ωn√(1 − ξ²).

The following figure compares the generic root with a general point in the s-plane.

S-plane

From this figure it follows that

tan θ = σ/ωd = ξωn / (ωn√(1 − ξ²)) = ξ/√(1 − ξ²).

Since tan θ = sin θ / cos θ, it follows that

sin θ = ξ (and cos θ = √(1 − ξ²))

⇒ θ = sin⁻¹ ξ.

Since the roots (−σ ± jωd) and the control gain (root locus gain) K are related by the characteristic equation or the magnitude criterion,

1 + KG(s) = 0

|KG(s)| = 1,

it follows that the gain K required to produce a particular damping ratio ξ or damping angle θ can be determined. Similarly, given the root locus gain (control gain) K, the system damping ratio ξ or damping angle θ can be obtained. These properties will be illustrated later.
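A quick numerical sketch of these relationships (Python with NumPy; σ = 0.5 is an illustrative value):

```python
import numpy as np

# theta = asin(xi) links the damping ratio to the pole's damping angle;
# with the real part sigma known, the pole s = -sigma + j*wd follows.
xi = 0.5
sigma = 0.5
theta = np.arcsin(xi)        # damping angle, in radians
wd = sigma / np.tan(theta)   # imaginary part of the pole
wn = np.hypot(sigma, wd)     # natural frequency

print(np.degrees(theta))  # ~30 degrees
print(wd)                 # ~0.8660, i.e. sqrt(3)/2
print(wn)                 # ~1.0
```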

5.4 WORKED EXAMPLES OF ROOT LOCUS DESIGN

Now that the complete set of rules for drawing the root locus has been presented, some accurate and detailed examples of root loci are presented in this section.


Figure 5.8 Root locus diagram.

Example 5.7 Consider the feedback control system shown below, where K ≥ 0 (Figure 5.8).

(a) Find the root locus of the system with respect to the controller gain K.
(b) For what range of K is the system stable?
(c) Find the value of the gain K that will allow the system to have a root at s = −1/2 + j√3/2.
(d) Find the controller gain K that will be required to give the system a damping ratio of ξ = 0.5.

Solution 5.7 (a) The closed-loop transfer function of the system is given by

T = [K/(s(s + 1))] / [1 + K/(s(s + 1))].

Hence, the characteristic equation (in Evans root locus form) is given by

1 + K/(s(s + 1)) = 0

1 + K/((s + 0)(s + 1)) = 0.

The system has two poles, one at s = 0 and another at s = −1, and no zeros. Rewriting the equation gives

s(s + 1) + K = 0.

For K = 0, the roots of the characteristic equation are at 0 and −1. The two roots are given by the solution of the quadratic equation

s² + s + K = 0

r1, r2 = −1/2 ± (1/2)√(1 − 4K).

The root locus is thus described by the following equations:

r1, r2 = −1 and 0                   for K = 0
r1, r2 = −1/2 ± (1/2)√(1 − 4K)      for 0 ≤ K ≤ 1/4
r1, r2 = −1/2 ± (1/2)j√(4K − 1)     for K > 1/4.


For 0 ≤ K ≤ 1/4, the roots are real (note that K is not negative) and lie between −1 and 0. For K = 0, the roots are −1/2 ± 1/2 ⇒ −1 and 0; when K > 1/4, the roots are complex. The two complex roots are of the form −1/2 ± aj, where a is a real positive constant. Thus, the root locus starts from the poles at 0 and −1, and the roots move toward each other until both are equal to −1/2. Then, as K continues to increase, the roots become complex and the locus moves from the point (−1/2, 0) toward positive and negative infinity, parallel to the imaginary axis, describing the line −1/2 ± aj. The root locus is shown in Figure 5.9.

(b) To find the range of K for which the system is stable, the objective is to exclude any roots that are not in the LHP. Such a root is at s = 0 (where K = 0). Therefore, the range of K for which the system is stable is given by K > 0.

(c) The value of the gain K that will allow the system to have a root at s = −1/2 + j√3/2 is determined by using the magnitude criterion:

|KG(s)| = 1.

Figure 5.9 Root locus for G(s) = 1/(s(s + 1)).


|K/(s(s + 1))| = 1

|K| / (|s||s + 1|) = 1.

Substituting for the root and solving for K leads to

K / (|−1/2 + j√3/2| |−1/2 + j√3/2 + 1|) = 1

⇒ K / (√(1/4 + 3/4) √(1/4 + 3/4)) = 1

⇒ K / (1 × 1) = 1

⇒ K = 1.

If the root is at s = −0.5, K is similarly obtained from the magnitude criterion:

|K/(s(s + 1))| = 1

K / (|−1/2| |−1/2 + 1|) = 1

K / (1/2 × 1/2) = 1

K = 1/4.

If the root is at s = 0, K is similarly obtained from the magnitude criterion:

K = |s| |s + 1| = |0| |0 + 1| = 0 ⇒ K = 0.

(d) The general pole is of the form s = −σ + jωd, where

σ = 1/2 and tan θ = σ/ωd ⇒ ωd = σ/tan θ.

If the damping ratio is ξ = 0.5, then the damping angle is given by

θ = sin⁻¹ ξ = 30°

⇒ ωd = (1/2)/tan 30° = √3/2.


Therefore, the root for ξ = 0.5 is given by

s = −1/2 + j√3/2.

Hence, the value of K that allows the system to have this root is then obtained from the magnitude criterion; in fact, this problem is now the same as (c). Substituting for the root in the expression for the magnitude criterion and solving for K leads to

|KG(s)| = 1

K / (|−1/2 + j√3/2| |−1/2 + j√3/2 + 1|) = 1

K / (√(1/4 + 3/4) √(1/4 + 3/4)) = 1

K / (1 × 1) = 1

K = 1.

Example 5.8 Find the root locus for the system in the following block diagram.

Feedback System

(a) Find the root locus of the system with respect to the controller gain K. (b) For what range of K is the system stable? (c) Find the value of the gain K that will allow the system to have a root at s = −1.

Solution 5.8 (a) The starting point is deriving the characteristic equation:

1 + KG = 0

1 + K(2s + 4)/(s² + 4s) = 0 ⇒ 1 + 2K(s + 2)/(s² + 4s) = 0.

The characteristic equation is

s(s + 4) + 2K(s + 2) = 0.

K = 0 gives poles at 0 and −4, and K = ∞ gives a zero at −2. Expanding,

s² + s(4 + 2K) + 4K = 0

s = −(2 + K) ± √((2 + K)² − 4K)

= −(2 + K) ± √(4 + K²).


The expression under the square root, (4 + K²), is always positive, and moreover

(2 + K) ≥ √(4 + K²) for 0 ≤ K ≤ ∞.

This implies that s is negative. As K → ∞, s1 → −2 and s2 ≈ −2(K + 1) → −∞. The root locus (shown in Figure 5.10) lies on the negative real axis between 0 and −2 and between −4 and −∞ as K goes from 0 to infinity.

(b) To find the range of K for which the system is stable, the objective is to exclude any roots that are not in the LHP. Such a root is at s = 0 (where K = 0). Therefore, the range of K for which the system is stable is given by K > 0.

(c) Finding the gain at a specific root location, say s = −1: substituting s = −1 in the magnitude criterion,

K|2s + 4| / (|s||s + 4|) = 1

K = |−1||−1 + 4| / (2|−1 + 2|) = 3/2.

Figure 5.10 Root locus for G(s) = (2s + 4)/(s² + 4s).


Example 5.9 Consider the block diagram of a feedback control system shown below, where the controller gain is such that K ≥ 0.

Root Locus System

(a) Find the system root locus with respect to the controller gain K. (b) For what range of K is the system stable? (c) Find the value of the gain K that will allow the system to have a root at s = −1 + 3j.

Solution 5.9 (a) The closed-loop transfer function of the system is given by

T = [K/(s² + 2s − 3)] / [1 + K/(s² + 2s − 3)].

Hence, the characteristic equation (in Evans root locus form) is given by

1 + K/(s² + 2s − 3) = 0

1 + K/((s − 1)(s + 3)) = 0.

The system has two poles, one at s = −3 and another at s = 1, and no zeros. Rewriting the equation gives

s² + 2s + (K − 3) = 0.

For K = 0, the roots of the characteristic equation are at −3 and 1. The two roots are given by the solution of the quadratic equation

s² + 2s + (K − 3) = 0

s = −1 ± √(4 − K).

For 0 ≤ K ≤ 4, the roots are real (note that K is non-negative) and lie between −3 and 1. For K = 0, the roots are −1 ± 2 ⇒ −3 and 1; when K > 4, the roots are complex. The two complex roots are of the form −1 ± aj, where a is a real positive constant. Thus, the root locus starts from the poles at 1 and −3, and the roots move toward each other until both are equal to −1. Then, as K continues to increase, the roots become complex and the locus moves from the point (−1, 0) toward positive and negative infinity, parallel to the imaginary axis, describing −1 ± aj. All points on the root locus must satisfy the magnitude and angle criteria. The root locus is shown in Figure 5.11.

(b) For the range of K that permits system stability, the objective is to exclude any roots that are not in the LHP. Such roots are in the range 0 ≤ s ≤ 1. The critical point is where the root locus crosses the imaginary axis into the RHP. Stability means the roots should be strictly in the LHP:

−1 ± √(4 − K) < 0

⇒ √(4 − K) < 1

⇒ K > 3.

Figure 5.11 Root locus for G(s) = 1/(s² + 2s − 3).

Therefore, the range of K for which the system is stable is given by K > 3. It is important to note that at K = 3, the system is marginally stable (it has a root at the origin). The same result can be obtained by using the Routh-Hurwitz criterion, as shown below.

s² + 2s + (K − 3) = 0

s²   1         K − 3
s¹   2         0
s⁰   (K − 3)

(K − 3) > 0 ⇒ K > 3.

The advantage of determining stability conditions from the Routh-Hurwitz array instead of root locus plots is that it is unnecessary to know the roots' location. This advantage is essential in higher-order systems, where it is not easy to obtain the roots.

(c) The gain at the specific root location, s = −1 + 3j, is obtained by substituting this root in the expression for the magnitude criterion:

|K/(s² + 2s − 3)| = 1

K/|s² + 2s − 3| = 1


K / |(−1 + 3j)² + 2(−1 + 3j) − 3| = 1

K / |(1 − 6j − 9) + (−2 + 6j) − 3| = 1

K / |−13| = 1

K = 13.

Example 5.10 Consider the following block diagram, where K has a fixed value (e.g., K = 1) and the parameter of interest is c.

Feedback System

(a) Find the root locus of the system with respect to the parameter c. (b) For what range of c is the system stable? (c) Find the value of the gain c that will allow the system to have a root at s = −1/2.

Solution 5.10 (a) The characteristic equation of the system is given by

1 + KG(s) = 0

1 + K/(s(s + c)) = 0

s(s + c) + K = 0.

The Evans root locus form is obtained as follows:

s(s + c) + K = 0

s² + cs + K = 0

(s² + K) + cs = 0.

Dividing throughout by (s² + K) gives the Evans root locus form

(s² + K)/(s² + K) + cs/(s² + K) = 0

1 + cs/(s² + K) = 0.

For K = 1, the Evans root locus form reduces to

1 + cs/(s² + 1) = 0.

The roots of the characteristic equation

s² + cs + 1 = 0

are of the general form

s = −c/2 ± √(c² − 4)/2

⇒ s = −c/2 ± j√(4 − c²)/2.

For c = 0 the roots are at s = ±j1. Using the general form of the roots, let the real part be x = −c/2 and the imaginary part

y = ±√(4 − c²)/2.

Consider the following function:

x² + y² = (−c/2)² + (±√(4 − c²)/2)²

= c²/4 + (4 − c²)/4

= 1.

Figure 5.12 Root locus for G(s) =

cs . s2 + 1

421

Root Locus Design Methods

(b) To determine the range of c for which the system is stable, the objective is to exclude any roots that are not in the LHP. Such roots are at s = ± j1 (where c = 0) and at s = 0 (where c = ∞). Therefore, the range of c for which the system is stable is given by 0 < c < ∞. (c) The value of the parameter c that will allow the system to have a root at 1 s=− , 2 is obtained using the magnitude criterion. |cP(s)| = 1 cs s2 + 1 = 1 c |s| =1 2 |s + 1| 1 c − 2 =1 =⇒ 1 + 1 4 5 5 =⇒ c = × 2 = . 4 2 Example 5.11 Consider a system whose open-loop transfer function is given by TOL = KG(s) =

K s(s + 1)(s + 2)

(a) Find the range of K for which the unity feedback closed-loop system is stable. (b) Sketch the root locus of the unity feedback closed-loop system. (c) Use a Routh-Hurwitz table to prove that your answer in (a) is correct. Solution 5.11 (a) It is important to note that this control system is exactly the same as that in Example 5.2. The closed-loop transfer function is given by TCL =

KG(s) 1 + KG(s) 

 K s(s + 1)(s + 2)   =⇒ TCL = K 1+ s(s + 1)(s + 2) =⇒ TCL =

K s3 + 3s + 2s + K

From analysing the characteristic equation, the system is stable if all coefficients are present and positive. This means K > 0 for stability to be achieved. In terms of imaginary axis crossing at s = jω

422

Design and Analysis of Control Systems

s3 + 3s2 + 2s + K = 0 −ω 3 j + 3(−ω 2 ) + 2 jω + K = 0 j(−ω 3 + 2ω) + (K − 3ω 2 ) = 0 =⇒ (−ω 3 + 2ω) = 0 and (K − 3ω 2 ) = 0 =⇒ ω 2 = 2 and K = 3ω 2 √ =⇒ ω = ± 2 and K = 6 This means K = 6 marks marginal stability, and therefore the stability range for the closed-loop system is given by: 0 0 and

=⇒ 0 < K < 6 This is the condition for stability obtained by using the Routh-Hurwitz table. It is exactly the same as that obtained in (a) using the root locus method. Example 5.12 The following figure shows the speed control system of an assembly plant (Figure 5.15). (a) Determine and plot the range of K and p that permits stable operation. (b) In the assembly plant, if p = 2, sketch the root locus of the system as K varies from 0 to ∞.

424

Design and Analysis of Control Systems

Figure 5.15 Control system for an assembly plant.

Solution 5.12 (a) The transfer function for the system is given by T (s) =

K . s3 + (p + 1)s2 + ps + K

Stability condition is established by using the Routh-Hurwitz array. s3 s2 s1 s0

1 1+p b K

p K 0 0

where the conditions f or stability are established as f ollows :

K>0 (1 + p) > 0 =⇒ p > −1 but p > 0

(coefficient of s)

=⇒ p > 0 b=

p2 + p − K >0 1+ p

⇒ K < p2 + p =⇒ 0 < K < p2 + p and p > 0. (b) For p = 2, the characteristic equation, expressed in Evans root locus form, is given by 1+

K = 0. s3 + 3s2 + 2s

With the substitution (p = 2), this representation now describes a control system identical to that in Examples 5.2 and 5.11. A rough sketch of the identical root locus is presented in Figure 5.16 for completeness. Example 5.13 Consider a chemical processing plant represented by the following system, KG(s) =

K . s(s + 1)(s + 5)

(a) Draw the root locus of the system. (b) For what value of K are the roots on the imaginary axis?

425

Root Locus Design Methods

Figure 5.16 Root locus for G(s) =

1 . s3 + 3s2 + 2s

Solution 5.13 (a) The starting point is expressing the transfer in the Evans root locus form. KG(s) =

K s(s + 1)(s + 5)

1 + KG(s) = 0 1+

K =0 s(s + 1)(s + 5)

s(s + 1)(s + 5) + K =0 s(s + 1)(s + 5) The root locus is shown in the following diagram (Figure 5.17). (b) The characteristic equation is given by 1 + KG(s) = 0 K 1+ =0 s(s + 1)(s + 5) s(s + 1)(s + 5) + K = 0 s3 + 6s2 + 5s + K = 0 The Routh-Hurwitz array is obtained as s3 1 5 s2 6 K 30 − K s1 6 6 0 s K 0

426

Design and Analysis of Control Systems

Figure 5.17 Root locus.

For stability, the coefficients in the first column should be greater than zero. On the imaginary axis the coefficients are equal to zero. 30 − K = 0 or K = 0 6 =⇒ K = 30 or 0.

=⇒

Alternatively, the same results can be obtained by directly solving the characteristic equation. This is done as follows: s3 + 6s2 + 5s + K|s= jω = 0 =⇒ (−6ω 2 + K) + (−ω 3 + 5ω) j = 0 √ =⇒ ω = ± 5, 0 K = 30, 30, 0 =⇒ K = 30 or 0 Example 5.14 Consider the unity feedback system in Figure 5.18. Sketch the root locus for the system.

Figure 5.18 Unity feedback system.

427

Root Locus Design Methods

Solution 5.14 The derivation of the root locus proceeds as follows: Open-loop finite poles 0, −1, −2, −4 Open-loop finite zeros −3 K For very large values of s, the expression 3 goes to zero which means that there are 3 zeros at s infinity for the open loop system Break-away point is obtained from the equation

0=

1 1 1 1 + + + σ σ +1 σ +2 σ +4

The asymptotes are obtained as follows: ∑ f inite poles − ∑ f inite zeros Number o f poles − number o f zeros (−4 − 2 − 1 + 0) − (−3 + 0 σa = 4−1 −4 σa = 3 = −1.333

σa =

(2m + 1)π Number o f poles − number o f zeros = π/3 for m=0

θa =

=π = 5π/3

for m=1 for m=2

All the preceding information is used to draw the complete root locus diagram given in Figure 5.19. It is instructive to note this root locus is a much more accurate version of that obtained in Example 5.3 and depicted in Figure 5.4. Example 5.15 Draw root locus plots for the following systems (a) KG(s) = (b) KG(s) = (c) KG(s) =

K(s2 +2s+12) s(s2 +2s+10) K(s+3) s(s+1)(s2 +4s+5) K(s+2) s4

Solution 5.15 (a) K(s2 + 2s + 12) s(s2 + 2s + 10) √ √ K(s + 1 + j 11)(s + 1 − j 11) = s(s + 1 + j3)(s + 1 − j3) n−m = 3−2 = 1

KG(s) =

φasy = 180 Angle of departure: φ = −18.4 Angle of arrival: φ = 16.7◦ Imaginery axis crossing ( jω crossing): none (by Routh criterion)

428

Design and Analysis of Control Systems

Figure 5.19 Root locus diagram.

Figure 5.20 Root locus (a).

Hence, the root locus is of the form shown in Figure 5.20. (b) K(s + 3) s(s + 1)(s2 + 4s + 5) K(s + 3) KG(s) = s(s + 1)(s + 2 + j)(s + 2 − j)

KG(s) =

429

Root Locus Design Methods

n−m = 4−1 = 3 φasy = ±60◦ , 180 2 α =− 3 Angle of departure from: φ = −153.4◦ Imaginary axis crossing ( jω crossing): s = ± j1.38 Break-away point: s = −0.44 Break-in point: s = −3.65 Hence, the root locus takes the form shown in the following in Figure 5.21. (c) K(s + 2) s4 n−m = 4−1 = 3

KG(s) =

φasy = ±60◦ , 180 2 α= 3 Angle of departure from s = 0 : φ = ±45◦ , 135◦ Imaginary axis crossing ( jω crossing): s = 0, K = 0 Break-away point: s = 0 Break-in point: s = − Thus, the root locus takes the structure shown in Figure 5.22.

Figure 5.21 Root locus (b).

8 3

430

Design and Analysis of Control Systems

Figure 5.22 Root locus (c).

Example 5.16 Consider a modular robotic system that is described as follows: KG(s) =

K(s + 2) . s(s − 1)(s + 6)2

(a) Draw the root locus for the system. (b) Verify that the root locus does not cross the imaginary axis by using the Routh-Hurwitz array. Solution 5.16 (a) K(s + 2) s(s − 1)(s + 6)2 n−m = 4−1 = 3

KG(s) =

φasy = ±60◦ , 180 α = −3 Breakaway point: s = 0.488 Imaginary axis crossing ( jω crossing): none Hence, the root locus is of the form shown in Figure 5.23. (b) The characteristic equation is given by 1 + KG(s) = 0 1+

K(s + 2) =0 s(s − 1)(s + 6)2

s(s − 1)(s + 6)2 + K(s + 2) = 0 s4 + 11s3 + 24s2 + (K − 36)s + 2K = 0

431

Root Locus Design Methods

Figure 5.23 Root locus.

The Routh-Hurwitz array is obtained as s4 s3 s2 s1 s0

1 11 300 − K 11 b1 2K

24 2K K − 36 0 2K

0

0 0

0 0

For stability, all the coefficients in the first column must be greater than zero. At the point of crossing the imaginary axis the coefficient b1 is zero. b1 = 0 ⇒ K 2 − 94K + 10, 800 = 0 √ −b ± b2 − 4ac K= p2a 94 ± 942 − 4 × 10, 800 = 2 p = 47 ± −8, 591 = 47 ± j92.69 ⇒ K is not a real number since it has an imaginary component. Hence, there is no imaginary axis (s = jω) crossing for K > 0. Example 5.17 Consider a dynamic system whose state-variable matrix equations are expressed as follows: x˙ = Ax + Bu y = Cx + Du u = r − y.

432

Design and Analysis of Control Systems

The matrices A, B, C, and D are given by   0 1 0 0 1 , A= 0 −160 −56 −14 C=



1

0

0



,



 0 B = 1  −14

D = [0].

(a) Use MATLAB to find the transfer function of the system. (b) Draw the system block diagram, indicating the matrices A, B, C, and D, the vectors x˙ and x, and the variables r(t), u(t) and y(t). (c) Compare the root locus obtained by using the State-Variable Matrix model and that obtained using the transfer function form. Solution 5.17 (a) The transfer function is obtained by using the following MATLAB command: [num, den] = ss2tf(A, B, C, D). num =



0

0

den =



1

14


0 56




Therefore, transfer function is given by T (s) =

s s3 + 14s2 + 56s + 160

(b) The system block diagram is given by

State-Space System (c) The root locus plot for this system can be obtained with MATLAB by use of the following command: rlocus(A,B,C,D) This command will produce the same root locus plot as can be obtained by use of the rlocus(num,den) command, where num and den are obtained from [num, den] = ss2tf(A, B, C, D). The resultant root locus diagram is shown in Figure 5.24. There is a SISO Design Tool in MATLAB, that is a graphical-user interface that allows the user to tune the control parameters from root locus design and system response simulation.

433

Root Locus Design Methods

Figure 5.24 Root locus for a system defined in state-space.

5.5 DYNAMIC COMPENSATION: LEAD AND LAG

The addition of poles and zeros (compensation) to the open-loop transfer function to reshape the root locus diagram to satisfy design specifications constitutes compensator design by the root locus method. For example, stability can be improved by adding a zero and worsened by adding a pole. Lead compensation acts mainly to lower rise time and decrease the transient overshoot. Generally, lead compensation approximates derivative control. Lag compensation is usually used to improve the steady-state accuracy of the system, and it approximates integral control. Compensation with a transfer function of the form

D(s) = (s + z) / (s + p),

is called lead compensation if z < p and lag compensation if z > p. The closed-loop characteristic equation becomes

1 + KD(s)G(s) = 0.

To explain the basic stabilising effect of lead compensation of a system, a simplified D(s) = s + z is considered. This is the same as proportional-derivative (PD) control. This compensation is applied to the case of a second-order system with the transfer function

KG(s) = K / (s(s + 1)).

The uncompensated system G(s) has the root locus shown in Figure 5.25. Shown in Figure 5.26 is the root locus of the compensated system produced by D(s)G(s), with the added zero marked by a circle. The effect of the zero


Figure 5.25 Root locus before compensation.

Figure 5.26 Root locus after compensation.

is to move the root locus to the left, toward the more stable part of the s-plane. The root locus of the compensated case was produced by using D(s) = s + 2, such that

KG(s) = K(s + 2) / (s(s + 1)).

It is important to note that this root locus is similar to that of Example 5.4; the only difference is the location of the open-loop zero and two poles. The trouble with choosing D(s) based on only a zero is that the physical realisation would contain a differentiator that would greatly amplify the inevitable high-frequency noise from the sensor signal.
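The stabilising effect of the added zero can be seen numerically by comparing closed-loop poles before and after compensation (a Python sketch; the gain K = 10 is an arbitrary illustrative value, not taken from the text):

```python
import numpy as np

K = 10.0   # illustrative gain, chosen arbitrarily

# Uncompensated: 1 + K/(s(s+1)) = 0  =>  s^2 + s + K = 0
poles_unc = np.roots([1, 1, K])

# Compensated with D(s) = s + 2:  s(s+1) + K(s+2) = 0  =>  s^2 + (1+K)s + 2K = 0
poles_lead = np.roots([1, 1 + K, 2 * K])

print(poles_unc)    # lightly damped pair with real part -0.5
print(poles_lead)   # poles pulled further into the left half-plane
```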


5.6 EXTENSIONS OF ROOT LOCUS METHOD

The root locus technique is a graphical scheme to show the locations of possible roots of an equation as a real parameter varies. So far, only polynomial equations for linear systems have been considered. Two additional types of systems are considered: systems with (i) time delay and (ii) nonlinear elements.

5.6.1 TIME DELAY

As discussed in Chapter 4, time delays often arise in control systems, both from delays in the process itself and delays in the processing of sensed signals. Figure 5.27 illustrates the Block Diagram model of a generic plant with a time delay function. Such a function presents a challenge to the use of classical root locus methods. Chemical plants often have processes with a time delay representing the time material takes to flow through the pipes. In measuring the altitude of a spacecraft en route to Mars, there is a significant time delay for the sensed quantity to arrive back on Earth because of the speed of light. There is also a small time delay in any digital control system due to the cycle time of the computer and the fact that data is being processed at discrete intervals. Time delays always reduce the stability of a system; therefore, it is important to be able to analyse their effect. In this section, the use of the root locus for such analysis is presented, though frequency-response methods are easier. As an example of a system with time delay, consider the system transfer function

G(s) = e^(−5s) / ((10s + 1)(60s + 1))

where the term e^(−5s) arises from the time delay. The root locus equations are

1 + KG(s) = 0

1 + K e^(−5s) / ((10s + 1)(60s + 1)) = 0

600s² + 70s + 1 + Ke^(−5s) = 0.

Since it is not a polynomial, the previous methods cannot be used to plot the root locus.

5.6.1.1 The Pade Approximation

One approach is to find an approximate rational function for the non-rational function e^(−5s). It consists of matching the series expansion of the transcendental function e^(−5s) with the series expansion of a rational function whose numerator is a polynomial of degree p and whose denominator is a polynomial of degree q. The result is called the (p, q) Pade approximation to e^(−5s). Initially, the approximant to e^(−s) is computed, and in the final result, sτ is substituted for s to allow for any delay. To illustrate the process, start with the (1, 1) approximation. In this case, the objective is to select b₀, b₁, and a₀ so that the error

e^(−s) − (b₀s + b₁)/(a₀s + 1) = ε,

Figure 5.27 A dynamic system with a time delay function.


is small. Expand the rational function using a Maclaurin series and match as many of the initial terms as possible. The two series are

e^(−s) = 1 − s + s²/2! − s³/3! + s⁴/4! − ···

(b₀s + b₁)/(a₀s + 1) = b₁ + (b₀ − a₀b₁)s − a₀(b₀ − a₀b₁)s² + a₀²(b₀ − a₀b₁)s³ + ···

Matching the first three coefficients gives

b₁ = 1
(b₀ − a₀b₁) = −1
−a₀(b₀ − a₀b₁) = 1/2,

from which a₀ = 1/2, b₀ = −1/2, and b₁ = 1. Substituting sτ for s, the resulting approximant is

Gτ(s) = e^(−sτ) = e^(−sτ/2)/e^(sτ/2) ≈ (1 − sτ/2)/(1 + sτ/2) = (−sτ + 2)/(sτ + 2).     (5.9)

This is the first-order Pade approximation. If it is assumed that p = q = 2, then five parameters are obtained, and a better match is possible. In this case, the (2, 2) approximation is given by

Gτ(s) = e^(−sτ) = e^(−sτ/2)/e^(sτ/2) ≈ (1 − sτ/2 + (sτ)²/12)/(1 + sτ/2 + (sτ)²/12) = (s² − 6s/τ + 12/τ²)/(s² + 6s/τ + 12/τ²).     (5.10)

This is the second-order Pade approximation. In some cases, a very crude approximation is acceptable, and the (0, 1) approximation can be used, which is a first-order lag given by

e^(−sτ) ≈ 1/(1 + sτ).     (5.11)
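The relative accuracy of the three approximants can be checked on the imaginary axis by comparing Equations 5.9-5.11 against e^(−jωτ). A Python sketch at an arbitrary test frequency:

```python
import numpy as np

tau = 1.0
w = 0.5            # arbitrary test frequency (rad/s)
s = 1j * w

exact = np.exp(-s * tau)
pade11 = (2 - s * tau) / (2 + s * tau)                                   # Eq. (5.9)
pade22 = (s**2 - 6*s/tau + 12/tau**2) / (s**2 + 6*s/tau + 12/tau**2)     # Eq. (5.10)
lag01 = 1 / (1 + s * tau)                                                # Eq. (5.11)

print(abs(pade11 - exact))   # small error
print(abs(pade22 - exact))   # smallest error
print(abs(lag01 - exact))    # crudest of the three
```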

It is important to note that by using these three approximations, the exponential function is eliminated and the characteristic equation of the system becomes a polynomial (an algebraic expression). Hence the classic root locus techniques can then be applied. This allows the stability of a control system with a time delay function to be determined and the root locus of such a system to be drawn.

5.6.1.2 Stability and Root Locus through Pade Approximation

Example 5.18 Consider a dynamic system whose plant transfer function and proportional controller are given (respectively) by

G(s) = 1/(20s)
D(s) = K = 20

The plant is in a unity feedback arrangement with the proportional controller K as depicted in Figure 5.28.


Figure 5.28 Closed-loop unity feedback block diagram.

(a) If the plant is delayed by one second, draw the new block diagram. (b) Find the exact Transfer Function model of the delayed system. (c) Using the first-order Pade approximation, find the approximate Transfer Function model of the system. (d) Hence, determine whether the delayed system is stable.

Solution 5.18 (a) The new block diagram is established as shown in Figure 5.29. (b) The Transfer Function model is derived as follows:

T(s) = KG(s)e^(−s) / (1 + KG(s)e^(−s))

     = (20e^(−s)/20s) / (1 + 20e^(−s)/20s)

     = e^(−s) / (s + e^(−s))

⇒ T(s) = 1 / (se^s + 1)

(c) From the first-order Pade approximation (for τ = 1) it follows that:

Gτ(s) = e^(−sτ) ≈ (1 − sτ/2)/(1 + sτ/2) = (−sτ + 2)/(sτ + 2)

⇒ e^(−s) ≈ (−s + 2)/(s + 2).

Figure 5.29 Unity feedback system.


Hence the approximate Transfer Function model is given by

T(s) = 1/(se^s + 1)

     ≈ 1 / ( s(s + 2)/(−s + 2) + 1 )

⇒ T(s) = (−s + 2)/(s² + s + 2)

(d) To determine the stability of the system the characteristic equation is considered:

s² + s + 2 = 0

The roots (poles) of the characteristic equation are

s = −1/2 ± j√7/2.

These are two distinct poles in the LHP; hence the delayed system is stable. It is important to note that by using the Pade approximation, the exponential function is eliminated and the characteristic equation of the system becomes a simple polynomial (an algebraic expression). Thus the stability of the system is then easily determined by directly locating the system poles or by Routh-Hurwitz techniques. The efficacy of the Pade approximation method can be further illustrated by considering the closed-loop control system with a time delay presented in Chapter 4, shown in Figure 5.30. The Transfer Function model is given by

T_D(s) = (2.5s + 0.5) / (5s²e^(3.4s) + 5se^(3.4s) + 2.5s + 0.5)

Clearly, the characteristic equation of the control system is not an algebraic polynomial, and hence traditional root locus analysis cannot be applied. Using a first-order Pade approximation, the time delay function is eliminated and an approximant control system in Figure 5.31 is obtained. The new Transfer Function model (details of the derivation are left as an exercise for the reader) is then given by Figure 5.32, which means:

T_Pade(s) = (−2.5s² + 0.9705s + 0.2941) / (5s³ + 5.441s² + 3.9115s + 0.2941)

The characteristic equation is an algebraic polynomial, hence classical root locus can now be applied.
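Both characteristic polynomials above can be checked with a numerical root finder (a Python sketch; MATLAB's roots command does the same job):

```python
import numpy as np

# Example 5.18(d): s^2 + s + 2 = 0
print(np.roots([1, 1, 2]))          # -0.5 +/- j*sqrt(7)/2, both in the LHP

# Denominator of T_Pade(s): 5s^3 + 5.441s^2 + 3.9115s + 0.2941
poles = np.roots([5, 5.441, 3.9115, 0.2941])
print(poles)
print(max(poles.real) < 0)          # all poles in the LHP => stable
```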

Figure 5.30 A closed-loop control system with a time delay.


Figure 5.31 Closed-loop system with a time delay: Pade approximation.

Figure 5.32 Transfer function from Pade approximation.

5.6.2 NONLINEAR SYSTEMS

Real control systems are inherently nonlinear, and the linear analysis and design methods that have been described so far use linear approximations to the real models. There is one important category of nonlinear systems for which some significant analysis (and design) can be done. This comprises the systems in which the nonlinearity has no dynamics and is well approximated as a gain that varies as the size of its input signal varies. Some of these systems are shown in Figure 5.33.

Figure 5.33 Examples of nonlinear elements.

5.7 COMPUTER-AIDED DETERMINATION OF THE ROOT LOCUS

To use the root locus as a design tool and verify computer-generated loci, it is very important to be able to sketch root loci. The control engineer can then quickly predict, for design purposes, the effect of an added pole, or even several of them, or can quickly confirm computer output in a qualitative sense. For this reason, it is important to understand the guidelines for sketching the loci and be able to plot numerous example loci by hand. The computer can be used to determine accurate loci and to establish exact values of the parameters. It is especially useful in computing the closed-loop pole sensitivity to those parameters because their values may be known only to a certain tolerance at the time of the design and may be subject to drift over the life of the system. There are two basic approaches to machine computation of the root locus. In the first approach, the root locus problem is formulated as a polynomial in the form a(s) + Kb(s) = 0. For a sequence of values of K varying from near to zero to a large value, the polynomial for its n roots is solved by any of the many available numerical techniques. An advantage of this method is that it computes all the roots for each value of K, guaranteeing that a specific value, if required, is also included. One of the disadvantages of the method is that the resulting root locations are very unevenly distributed in the s-plane. For example, near a point of multiple roots, the sensitivity of the root locations to the K-value is very great, and the roots just “fly through” such points; the plots appear to be coarse, so it is easy to miss important features. (On the other hand, the method has the advantage that near a zero, the root moves very slowly, since it takes an infinite value of the parameter to push the root all the way into the zero.) A second disadvantage of this method is that the equation must be a polynomial. In cases of time delay, it involves a transcendental equation. 
Hence an approximation such as the Pade method must be used to reduce the given problem to an equivalent polynomial. Such approximations limit the range of values for which the results are accurate, and checking the accuracy is difficult unless a means is available to solve the true equation at critical points. A final disadvantage is that not many algorithms can solve polynomials easily at points of multiple roots. This problem is related to the great sensitivity of the roots to the parameter at these points, as mentioned earlier. A method related to factoring is possible when a State-Variable Matrix model is available. The alternative to polynomial factoring is a method based on curve tracing. It hinges on the fact that a point on the positive root locus is a point where the phase of G(s) is 180°. Thus, given a point on the locus at s₀ with gain K₀, a circle of radius δ around s₀ can be drawn, and the circle is searched for a new point where the angle condition is met and the new gain is larger than K₀. This method can be easily arranged to include a delay term such as e^(−λs); the resulting points will be spaced a distance δ apart, a value that the designer can specify. A disadvantage of this method is that only one branch of the locus is generated at a time (although computer logic can be easily set up to run through each of the open-loop poles to produce a complete graph). A second disadvantage is that the designer needs to monitor the selection of δ to ensure that the search for 180° converges on some points and avoids wasting too much time with a small value of δ at less critical points.

5.7.1 MATLAB

The issues involved in using MATLAB to determine the root locus of a dynamic system can be summarised as follows:

• Closed-loop poles
• Plotting the root locus of a transfer function


• Choosing a value of K from the root locus
• Closed-loop response
• Key MATLAB commands used: cloop, rlocfind, rlocus, sgrid, step

5.7.1.1 Open-Loop System

The following example illustrates the implementation of an open-loop system:

num = [1 7];
den = conv(conv([1 0],[1 5]),conv([1 15],[1 20]));
rlocus(num,den)
axis([-22 3 -15 15])

5.7.1.2 Choosing a Value of K from the Root Locus

The plot above shows all possible closed-loop pole locations for a pure proportional controller. Obviously, not all of those closed-loop poles will satisfy our design criteria. To determine what part of the locus is acceptable, the command sgrid(ξ, ωn) can be used to plot lines of constant damping ratio and natural frequency. Its two arguments are the damping ratio ξ and natural frequency ωn (these may be vectors if the objective is to look at a range of acceptable values). In this problem, an overshoot requirement (a damping ratio ξ > 0.7) and a rise time of 1 second (a natural frequency ωn > 1.8) are required. Enter in the MATLAB command window:

xi = 0.7;
wn = 1.8;
sgrid(xi,wn)

From the resulting plot, there is part of the root locus inside the desired region. So, in this case, only a proportional controller is required to move the poles to the desired region. The rlocfind command in MATLAB can be used to choose the desired poles on the locus:

[kd, poles] = rlocfind(num, den)

5.7.1.3 Closed-Loop Response

To find the step response, the closed-loop transfer function has to be known. This can be computed using the rules of block diagrams, or MATLAB can be used to implement the closed-loop transfer function as follows:

[numcl, dencl] = cloop((kd)*num, den)
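Note that cloop comes from an older version of the Control System Toolbox (current MATLAB uses feedback). For unity feedback the closed-loop polynomials can also be formed directly, since T(s) = kd·num/(den + kd·num); a Python sketch using the open-loop system of Section 5.7.1.1 and a placeholder gain:

```python
import numpy as np

kd = 2.0                                         # placeholder gain (rlocfind would supply this)
num = np.array([1.0, 7.0])                       # s + 7
den = np.array([1.0, 40.0, 475.0, 1500.0, 0.0])  # s(s + 5)(s + 15)(s + 20)

numcl = kd * num
dencl = den + np.pad(numcl, (len(den) - len(numcl), 0))  # den + kd*num, degree-aligned

print(numcl)   # [2, 14]
print(dencl)   # [1, 40, 475, 1502, 14]
```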

PROBLEMS

Problem 5.1 Consider a general closed-loop unity feedback control system shown in Figure 5.34, where K is the controller gain and G(s) is the plant Transfer Function model. Using simple rules, sketch the root loci for the two systems with the following plant Transfer Function models:

G(s) = 1 / (s(s + 2)(s + 3))

G(s) = (s + 4) / (s(s + 2)(s + 3)(s + 5))


Figure 5.34 Closed-loop unity feedback block diagram.

Problem 5.2 Consider a general closed-loop unity feedback control system shown in Figure 5.34, where K is the controller gain and G(s) is the plant Transfer Function model. (a) For the two systems with the following plant Transfer Function models

G(s) = (3s − 4)/(s² − 2s + 2)

G(s) = −(3s − 4)/(s² − 2s + 2).

(i) Obtain the root loci for the two systems. (ii) Is there any value of controller gain K that will stabilise either of these plants? (b) Sketch the root loci for the dynamic systems with the following plant Transfer Function models:

(i) G(s) = (s + 2)(s + 6)/(s² + 8s + 25)
(ii) G(s) = (s² + 4)/(s² + 1)
(iii) G(s) = (s² + 1)/s²
(iv) G(s) = 1/((s + 1)³(s + 4)).

Problem 5.3 Consider a general closed-loop unity feedback control system shown in Figure 5.34, where K = 1 is the controller gain and G(s) is the plant Transfer Function model given by

G(s) = (s² + s + a) / (s(s + 1)(s² + s + 1.25)(s² + s + 4.25))

(a) Plot the root locus for the plant as a varies from 1.25 to 4.25. (b) Find the exact value of a for which the pair of poles that asymptotically approach zero changes.

Problem 5.4 Determine the system type for the following unity feedback systems whose forward path transfer functions are given below.

(a) KG(s) = K/((1 + s)(1 + 10s)(1 + 20s))
(b) KG(s) = 10(s + 1)/(s³(s² + 5s + 5))
(c) KG(s) = 100/(s³(s + 2)²)


Problem 5.5 Determine the step and ramp error constants of unity feedback systems whose forward path transfer functions are given by:

(a) KG(s) = 1,000/((1 + 0.1s)(1 + 10s))
(b) KG(s) = K(1 + 2s)(1 + 4s)/(s²(s² + s + 1))
(c) KG(s) = 100/(s(s² + 10s + 100))

Problem 5.6 The forward path transfer functions of unity feedback control systems are given below.

(a) KG(s) = K(s + 4)/(s(s² + 4s + 4)(s + 5)(s + 6))
(b) KG(s) = K/(s(s + 2)(s + 5)(s + 10))
(c) KG(s) = K(s² + 2s + 10)/(s(s + 5)(s + 10))
(d) KG(s) = K(s² + 4)/((s + 2)²(s + 5)(s + 6))

Construct the root loci for K ≥ 0. Problem 5.7 The characteristic equation of the liquid level control system is written as 0.006s(s + 12.5)(As + K) + 250N = 0. (a) For A = K = 5, construct the root loci of the characteristic equation as N varies from 0 to ∞. (b) For N = 10 and K = 50, construct the root loci of the characteristic equation for A ≥ 0. (c) For A = 50 and N = 20, construct root loci for K ≥ 0. Problem 5.8 The block diagram of a control system with feedback is shown in the following diagram.

(a) Construct the root loci of the characteristic equation for K ≥ 0 when Kt = 0. (b) Set K = 10, construct the root loci of the characteristic equation for Kt ≥ 0.

Problem 5.9 Consider a dynamic system whose plant transfer function and proportional controller are given (respectively) by

G(s) = 1/(20s)
D(s) = K = 40

The plant is in a unity feedback arrangement with the proportional controller K as depicted in Figure 5.35.


Figure 5.35 Closed-loop unity feedback block diagram.

(a) If the plant is delayed by two seconds, draw the new block diagram. (b) Find the exact Transfer Function model of the delayed system. (c) Using the first-order Pade approximation, find the approximate Transfer Function model of the system. (d) Hence, determine whether the delayed system is stable. (e) Use MATLAB to draw the root locus of the delayed system using the exact Transfer Function model (where the Pade approximation is implemented in MATLAB). Then use MATLAB to draw the root locus using the hand-calculated approximate Transfer Function model. Compare the two results.

Problem 5.10 Consider a dynamic system whose plant transfer function and PI controller are given (respectively) by

G(s) = 5/(s + 1)
D(s) = 0.1(1 + 1/(5s))

The plant is in a unity feedback arrangement with the PI controller D(s) as depicted in Figure 5.36. (a) If the plant is delayed by 3.4 seconds, draw the new block diagram. (b) Find the exact Transfer Function model of the delayed system. (c) Using the first-order Pade approximation, find the approximate Transfer Function model of the system. (d) Hence, determine whether the delayed system is stable. (e) Use MATLAB to draw the root locus of the delayed system using the exact Transfer Function model (where the Pade approximation is implemented in MATLAB). Then use MATLAB to draw the root locus using the hand-calculated approximate Transfer Function model. Compare the two results.

Figure 5.36 Closed-loop unity feedback block diagram.

6 Frequency-Response Design Methods

6.1 INTRODUCTION

The frequency-response design method is a popular approach in industry because it provides solutions when there are significant uncertainties in the plant model. These frequency-based techniques are also intuitive and use experimental information for design purposes. More specifically, a plant’s raw phase and magnitude outputs under sinusoidal excitation can be used to design control systems. With frequency-response methods, it is also not necessary to obtain the system transfer function to design a controller [11]. In most of the work in previous chapters, the input signals used were functions such as an impulse, a step, and a ramp function. This chapter considers the steady-state response of a system to a sinusoidal input signal (sinusoid). It will be observed that the response of a linear time-invariant system to a sinusoidal input signal is an output sinusoidal signal at the same frequency as the input. However, the magnitude and phase of the output signal differ from those of the input sinusoidal signal, and the amount of difference is a function of the input frequency. Hence, the frequency response of a system is defined as the steady-state response of the system to a sinusoidal input signal. The sinusoid is a unique input signal, and the resulting output signal for a linear system is sinusoidal in the steady state. This chapter’s objective is to investigate the steady-state response of the system to sinusoidal inputs as the frequency varies. The design of feedback control systems in industry is accomplished using frequency-response methods more often than any other. Frequency-response design is popular primarily because it provides good designs in the face of uncertainty in the plant model. For systems with poorly known or changing high-frequency resonances, their feedback compensation can be modified to alleviate the effects of those uncertainties. This modification is carried out more efficiently using frequency-response design than any other method. 
As already explained, another advantage of using frequency response is the ease with which experimental information can be used for design purposes. Raw measurements of the output amplitude and phase of a plant undergoing a sinusoidal input excitation are sufficient to design a suitable feedback control system. No intermediate processing of the data (such as finding poles and zeros or determining system matrices) is required to arrive at the system model. The wide availability of computers and the use of AI have rendered this advantage less critical now than it was years ago; however, frequency response is often still the most cost-effective design method for relatively simple systems. Hence, the frequency-response design method offers a practical and important alternative approach to the analysis and design of control systems. The frequency response of a system is defined as the steady-state response of the system to a sinusoidal input signal. The sinusoid is a unique input signal. The resulting output signal for a linear system is sinusoidal in the steady state. It differs from the input waveform only in amplitude and phase angle. The transfer function G(s) is analysed when s = jω and methods for graphically displaying the complex number G( jw) as ω varies are developed. The main technique covered involves Bode plots. Time-domain performance measures are developed in terms of the frequency response and then applied in system performance evaluation.

6.2 DEFINITION OF THE FREQUENCY RESPONSE

DOI: 10.1201/9781032718880-6

The frequency response of a system with a general transfer function G(s) is defined as the steady-state response of the system to a sinusoidal input signal u(t). The sinusoid is a unique input signal


with amplitude U₀ and frequency ω, and the resulting output signal y(t) is also sinusoidal, and it differs from the input waveform only in amplitude and phase angle.

u(t) = U₀ sin ωt     (6.1)

U(s) = U₀ω / (s² + ω²).     (6.2)

From the definition of a transfer function, the output signal y(t) can be established as shown below:

G(s) = Y(s)/U(s)

Y(s) = G(s)U(s) = G(s) · U₀ω/(s² + ω²)

y(t) = L⁻¹[ G(s) · U₀ω/(s² + ω²) ]

y(t) = AU₀ sin(ωt + φ),     (6.3)

where A and φ are the magnitude and phase angle of the transfer function G(s). Thus, the sinusoidal output y(t) differs from the input u(t) in amplitude by a constant factor A, and in phase by angle φ.

6.2.1 MAGNITUDE AND PHASE ANGLE

To establish the magnitude A and the phase angle φ, the substitution s = jω is effected in G(s) such that

G(jω) = G(s)|s=jω = Re G(jω) + j Im G(jω) = X(ω) + jY(ω) = Ae^(jφ).

A is the magnitude of the transfer function G(jω):

A = |G(jω)| = √([Re G(jω)]² + [Im G(jω)]²) = √(X(ω)² + Y(ω)²).

φ is the phase angle of the function G(jω):

φ = arg G(jω) = ∠G(jω) = tan⁻¹(Im G(jω)/Re G(jω)) = tan⁻¹(Y(ω)/X(ω)).
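For a numerical value of G(jω) these quantities are one-liners; a Python sketch with an arbitrary sample value:

```python
import numpy as np

Gjw = 3 + 4j                  # arbitrary sample value of G(jw) at some frequency

X, Y = Gjw.real, Gjw.imag     # real and imaginary parts
A = np.sqrt(X**2 + Y**2)      # magnitude
phi = np.arctan2(Y, X)        # phase angle (arctan2 resolves the quadrant)

print(A)                              # 5.0
print(np.isclose(A, abs(Gjw)))        # matches the built-in modulus
print(np.isclose(phi, np.angle(Gjw))) # matches the built-in argument
```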


The term arg G(jω) is the shorthand form of the expression "the argument of G(jω)", which means the phase angle of the complex function G(jω).

Example 6.1 Find the magnitude and phase angle of the transfer function

G(s) = 5/(3s + 4).

Solution 6.1 The solution proceeds as follows:

G(jω) = 5/(3(jω) + 4)

      = 5(4 − j3ω)/((4 + j3ω)(4 − j3ω))

      = (20 − j15ω)/(16 + 9ω²)

      = 20/(16 + 9ω²) − j15ω/(16 + 9ω²)

⇒ G(jω) = X(ω) + jY(ω)

Therefore, the magnitude is given by

A = |G(jω)| = √(X²(ω) + Y²(ω))

  = √( 400/(16 + 9ω²)² + 225ω²/(16 + 9ω²)² )

  = √(400 + 225ω²)/(16 + 9ω²)

  = 5/√(16 + 9ω²).

The phase angle is given by

φ = arg G(jω) = tan⁻¹(Y(ω)/X(ω))

  = tan⁻¹(−15ω/20)

  = −tan⁻¹(3ω/4).
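This result can also be confirmed in the time domain by simulating G(s) = 5/(3s + 4) with a sinusoidal input and measuring the steady-state output amplitude, which should equal A = 5/√(16 + 9ω²). A Python/SciPy sketch (MATLAB's lsim plays the same role):

```python
import numpy as np
from scipy import signal

G = signal.TransferFunction([5], [3, 4])
w = 2.0                                    # input frequency (rad/s)
t = np.linspace(0, 30, 20000)
u = np.sin(w * t)

_, y, _ = signal.lsim(G, U=u, T=t)

A_sim = np.max(np.abs(y[t > 20]))          # amplitude after transients die out
A_formula = 5 / np.sqrt(16 + 9 * w**2)
print(A_sim, A_formula)                    # the two values agree closely
```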

6.2.2 COMBINING MAGNITUDES AND PHASE ANGLES

Consider a transfer function that is composed of five transfer functions such that

G(s) = G₁(s)G₂(s) / (G₃(s)G₄(s)G₅(s)).


The overall magnitude and phase angle can be expressed in terms of the individual magnitudes and phase angles. The analysis is carried out as follows:

G(jω) = G₁(jω)G₂(jω) / (G₃(jω)G₄(jω)G₅(jω))

      = |G₁(jω)|e^(j arg G₁(jω)) |G₂(jω)|e^(j arg G₂(jω)) / ( |G₃(jω)|e^(j arg G₃(jω)) |G₄(jω)|e^(j arg G₄(jω)) |G₅(jω)|e^(j arg G₅(jω)) )

      = A₁e^(jφ₁)A₂e^(jφ₂) / (A₃e^(jφ₃)A₄e^(jφ₄)A₅e^(jφ₅))

      = (A₁A₂/(A₃A₄A₅)) e^(j(φ₁+φ₂−φ₃−φ₄−φ₅))

      = Ae^(jφ).

Therefore, the composite magnitude and phase angle are given by

A = |G₁(jω)||G₂(jω)| / (|G₃(jω)||G₄(jω)||G₅(jω)|)     (6.4)

φ = φ₁ + φ₂ − φ₃ − φ₄ − φ₅.     (6.5)

This means that the composite magnitude and phase angle can be obtained from the magnitudes and phase angles of the individual transfer functions that constitute the composite transfer function. Equations 6.4 and 6.5 can be generalised for a transfer function with m multiplier transfer functions and n divisor transfer functions such that

G(s) = Π(j=1 to m) Gj(s) / Π(i=1 to n) Gi(s),     (6.6)

where the product symbol Π is defined by the expression Π(j=1 to m) Gj(s) = G₁(s)G₂(s) ... Gm(s). The magnitude and phase angle are obtained by using the substitution s = jω in Equation 6.6:

G(jω) = Π(j=1 to m) Gj(jω) / Π(i=1 to n) Gi(jω)

⇒ A = Π(j=1 to m) |Gj(jω)| / Π(i=1 to n) |Gi(jω)|     (6.7)

⇒ φ = Σ(j=1 to m) φj − Σ(i=1 to n) φi.     (6.8)
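Equations 6.7 and 6.8 are just the polar form of complex multiplication and division, as a quick numerical check confirms (Python sketch with arbitrary sample values):

```python
import numpy as np

# Arbitrary sample values of G1(jw), G2(jw) (numerator) and G3..G5(jw) (denominator)
G1, G2 = 1 + 2j, 3 - 1j
G3, G4, G5 = 2 + 0.5j, -1 + 1j, 0.5 - 2j

G = (G1 * G2) / (G3 * G4 * G5)

A = abs(G1) * abs(G2) / (abs(G3) * abs(G4) * abs(G5))          # Eq. (6.7)
phi = (np.angle(G1) + np.angle(G2)
       - np.angle(G3) - np.angle(G4) - np.angle(G5))            # Eq. (6.8)

print(np.isclose(abs(G), A))    # magnitudes multiply and divide
# Wrap the summed phase into (-pi, pi] before comparing with np.angle
print(np.isclose(np.angle(G), (phi + np.pi) % (2 * np.pi) - np.pi))
```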

These formulae in Equations 6.7 and 6.8 allow the computation of the magnitude and phase angle of a transfer function without first expressing the transfer function in terms of a real part and an imaginary part as done in Example 6.1. Applying Equations 6.7 and 6.8 in Example 6.1 simplifies the solution as follows:

G(jω) = 5/(3(jω) + 4) = G₁(jω)/G₂(jω)

A = |G₁(jω)|/|G₂(jω)| = |5|/|j3ω + 4| = 5/√(16 + 9ω²)

φ = arg G₁(jω) − arg G₂(jω) = φ₁ − φ₂

  = tan⁻¹(0/5) − tan⁻¹(3ω/4)

  = −tan⁻¹(3ω/4).

Example 6.2 Consider the RC filter circuit shown in Figure 6.1. Find the magnitude and phase angle of the circuit's transfer function, which is given by

G(s) = V₂(s)/V₁(s).

Solution 6.2 By using the KCL and KVL, expressions for the input and output voltages are determined:

v₁(t) = i(t)R + (1/C)∫ i(t) dt

v₂(t) = (1/C)∫ i(t) dt.

Taking Laplace transforms of the two equations leads to

V₁(s) = I(s)[R + 1/(sC)]

V₂(s) = I(s)[1/(sC)].

Figure 6.1 A simple RC filter.


Hence, the transfer function is given by

G(s) = V₂(s)/V₁(s)

     = I(s)[1/(sC)] / I(s)[R + 1/(sC)]

     = 1/(1 + sRC)

⇒ G(jω) = 1/(1 + jωRC).

The magnitude and phase angle are obtained using Equations 6.7 and 6.8:

A = |G₁(jω)|/|G₂(jω)| = |1|/|1 + jωRC| = 1/√(1 + (ωRC)²)

φ = φ₁ − φ₂ = tan⁻¹(0/1) − tan⁻¹(ωRC/1) = −tan⁻¹(ωRC).

Example 6.3 Find the magnitude and phase angle of the transfer function

G(s) = s(2s + 1) / ((s + 1)(2s + 3)(s² + 2s + 10)).

Solution 6.3 The magnitude and phase angle are computed by using Equations 6.7 and 6.8:

G(jω) = jω(2jω + 1) / ((jω + 1)(2jω + 3)((jω)² + 2jω + 10)) = G₁(jω)G₂(jω) / (G₃(jω)G₄(jω)G₅(jω))

A = |G₁(jω)||G₂(jω)| / (|G₃(jω)||G₄(jω)||G₅(jω)|)

  = |jω||2jω + 1| / (|jω + 1||2jω + 3||(jω)² + 2jω + 10|)

  = ω√(4ω² + 1) / ( √(ω² + 1) √(4ω² + 9) √(4ω² + (10 − ω²)²) )

φ = φ₁ + φ₂ − φ₃ − φ₄ − φ₅

  = tan⁻¹(ω/0) + tan⁻¹(2ω/1) − tan⁻¹(ω/1) − tan⁻¹(2ω/3) − tan⁻¹(2ω/(10 − ω²)).
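A closed-form result like this is worth spot-checking at a single frequency (Python sketch; ω = 1.5 is an arbitrary choice):

```python
import numpy as np

w = 1.5
s = 1j * w

G = s * (2*s + 1) / ((s + 1) * (2*s + 3) * (s**2 + 2*s + 10))

A = (w * np.sqrt(4*w**2 + 1)
     / (np.sqrt(w**2 + 1) * np.sqrt(4*w**2 + 9)
        * np.sqrt(4*w**2 + (10 - w**2)**2)))

print(abs(G), A)   # the direct evaluation matches the derived magnitude formula
```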

6.3 BODE PLOTS

In this section, the principles behind Bode plots are introduced on the foundation of the previous discussion on polar plots. The advantages of Bode plots and the techniques used to draw them are then presented.

6.3.1 DEFINITION

The Bode plot for a transfer function consists of two parts:

• A plot of the logarithmic magnitude M with respect to frequency ω
• A plot of the phase angle φ with respect to frequency ω

The horizontal scale, ω, is logarithmically calibrated. These two logarithmic plots simplify the determination and graphical portrayal of system frequency response. It is important to note that logarithmic scales are used because they considerably simplify the construction, manipulation, and interpretation of the Bode plots.

6.3.2 BACKGROUND

It is standard practice to measure power gain in decibels, that is, units of one-tenth of a bel, such that

|H(jω)|db = 10 log₁₀(P₂/P₁).

Power can be expressed in terms of voltage and resistance, P = V²/R, so that

|H(jω)|db = 10 log₁₀( (V₂²/R)/(V₁²/R) ) = 10 log₁₀(V₂²/V₁²)

⇒ |H(jω)|db = 20 log₁₀(V₂/V₁).

It is within this context that the Bode magnitude or logarithmic magnitude M is defined for any transfer function G(s):

M = |G(jω)|db = 20 log₁₀|G(jω)| = 20 log₁₀ A.

The Bode plots for a transfer function consist of two figures: a plot of the logarithmic magnitude M with respect to frequency ω, and the phase angle φ with respect to frequency ω. The horizontal scale


ω is logarithmically calibrated. The logarithmic plots are used to simplify the determination of the graphical portrayal of the system frequency response. It is important to note that logarithmic scales are used because they considerably simplify the construction, manipulation, and interpretation of the Bode plots. The set of Bode equations (M and φ) for the RC filter circuit in Example 6.2 is given by

M = 20 log₁₀|G(jω)|

  = 20 log₁₀( 1/√(1 + (ωRC)²) )

  = 20 log₁₀ 1 − 20 log₁₀ √(1 + (ωRC)²)

  = −20 log₁₀ √(1 + (ωRC)²)     (6.9)

φ = −tan⁻¹(ωRC).     (6.10)
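Equations 6.9 and 6.10 can be checked against a direct evaluation of G(jω) = 1/(1 + jωRC) over a log-spaced frequency grid (Python sketch with an illustrative RC = 1):

```python
import numpy as np

RC = 1.0                            # illustrative time constant
w = np.logspace(-2, 2, 200)         # log-spaced frequency axis, as on a Bode plot
G = 1 / (1 + 1j * w * RC)

M = 20 * np.log10(np.abs(G))        # Bode magnitude in dB
phi = np.angle(G)

M_eq = -20 * np.log10(np.sqrt(1 + (w * RC) ** 2))   # Eq. (6.9)
phi_eq = -np.arctan(w * RC)                         # Eq. (6.10)

print(np.allclose(M, M_eq), np.allclose(phi, phi_eq))
```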

The next sections deal with techniques that are employed to draw Bode plots from equations such as Equations 6.9 and 6.10.

6.3.3 ADVANTAGES OF BODE PLOTS

A Bode plot gives the magnitude and phase of a system in response to a sinusoidal input for a range of frequencies. Each point on the Bode plot represents the system response for a given frequency input. Such plots can be used to establish system stability (by finding stability margins), study speed of response, and show how different vibrations affect a control system.

• Bode plots of systems in series can simply be added to produce the composite Bode plots, thus obtaining the combined frequency response.
• For reciprocal factors, the magnitude and phase need only be changed in sign.
• The phase-gain relationship is given in terms of logarithms of phase and gain.
• Low- to high-frequency system behaviour can be displayed on a single Bode plot.
• Gain crossover, phase crossover, gain margin, and phase margin are easily determined from the Bode plots.
• Bode plots can be determined experimentally without explicit knowledge of the transfer function.
• For design purposes, the effects of adding controllers and their parameters are easily visualised on the Bode plot.
• Dynamic compensator design can be based entirely on Bode plots.
• Bode plots can be used to stabilise systems by changing system gains.
• Bode plots can also be used to study the impact of non-frequency inputs.
• The use of the log scale for frequency allows a broader range of frequencies to be displayed on a single plot than is feasible using a linear scale.

It is essential for the control systems engineer to be able to hand-plot frequency responses. This technique enables the engineer to analyse simple systems and check computer results for more complicated examples. Approximations can be used to quickly sketch the frequency response, deduce stability, and determine the form of the needed dynamic compensations. Hand-plotting is also helpful in the interpretation of experimentally generated frequency-response data.


Frequency-Response Design Methods

6.3.4 BODE PLOT TECHNIQUES

A general transfer function can be expressed in terms of a constant, zero factors and pole factors as follows:

G(s) = K \frac{\prod_{j=1}^{m}(s + z_j)}{\prod_{i=1}^{n}(s + p_i)}

G(jw) = K \frac{\prod_{j=1}^{m}(jw + z_j)}{\prod_{i=1}^{n}(jw + p_i)} = Ae^{j\phi},   (6.11)

where A is the magnitude given by

A = K \frac{\prod_{j=1}^{m}|(jw + z_j)|}{\prod_{i=1}^{n}|(jw + p_i)|},

and \phi is the phase angle given by

\phi = \sum_{j=1}^{m} \arg(jw + z_j) + \sum_{i=1}^{n} \arg\frac{1}{(jw + p_i)}.

Expressing the magnitude A in decibels gives the Bode magnitude M,

M = |G(jw)|_{db} = 20\log_{10} A = 20\log_{10}\left(K \frac{\prod_{j=1}^{m}|(jw + z_j)|}{\prod_{i=1}^{n}|(jw + p_i)|}\right) = 20\log_{10} K + \sum_{j=1}^{m} 20\log_{10}|(jw + z_j)| + \sum_{i=1}^{n} 20\log_{10}\frac{1}{|(jw + p_i)|}.

Hence, the magnitude in decibels and phase angle for any transfer function can be obtained as follows:

M = 20\log_{10} K + \sum_{j=1}^{m} 20\log_{10}|(jw + z_j)| + \sum_{i=1}^{n} 20\log_{10}\frac{1}{|(jw + p_i)|}   (6.12)

\phi = \sum_{j=1}^{m} \arg(jw + z_j) + \sum_{i=1}^{n} \arg\frac{1}{(jw + p_i)}.   (6.13)

From these two equations, it can be deduced that any transfer function whose Bode plots are of interest can be first expressed in terms of a constant, zero factors, and pole factors. The Bode plots of the constant and the individual factors are then drawn separately and then simply added to produce the composite Bode plots (\phi and M) for the entire transfer function. There are six unique forms that these zero and pole factors can take: a simple (first-order) zero factor, a simple (first-order) pole factor, an integrator factor, a derivative factor, a complex (second-order) pole factor, and a complex (second-order) zero factor. The form of a simple zero factor (with a real zero at s = -z_j) is established as follows:

G(s) = (s + z_j) = z_j\left(1 + \frac{s}{z_j}\right)

\Longrightarrow z_j \text{ (a constant) and } \left(1 + \frac{s}{z_j}\right) \text{ (a simple zero factor)}.
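Equations 6.12 and 6.13 can be verified numerically. The following sketch (plain stdlib Python; the helper name `bode_point` is illustrative, not from the text) accumulates the magnitude and phase term by term, exactly as the two summations prescribe, and can be compared against a direct evaluation of G(jw):

```python
import cmath
import math

def bode_point(K, zeros, poles, w):
    """Magnitude M (dB) and phase phi (degrees) of
    G(jw) = K * prod(jw + z) / prod(jw + p),
    accumulated term by term as in Equations 6.12 and 6.13."""
    M = 20 * math.log10(K)
    phi = 0.0
    for z in zeros:
        M += 20 * math.log10(abs(1j * w + z))
        phi += cmath.phase(1j * w + z)
    for p in poles:
        M += 20 * math.log10(1.0 / abs(1j * w + p))
        phi += cmath.phase(1.0 / (1j * w + p))
    return M, math.degrees(phi)

# Illustrative system: G(s) = 2(s + 1)/(s(s + 10)) evaluated at w = 1 rad/s
M, phi = bode_point(2.0, zeros=[1.0], poles=[0.0, 10.0], w=1.0)
```

Because the decibel magnitude is a sum of logarithms, the composite Bode plot of a product of factors is simply the sum of the factor plots, which is the whole basis of the hand-plotting method.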


Design and Analysis of Control Systems

Similarly, a simple pole factor (with a real pole at s = -p_i) takes the form

G(s) = \frac{1}{(s + p_i)} = \frac{1}{p_i\left(1 + \dfrac{s}{p_i}\right)}

\Longrightarrow \frac{1}{p_i} \text{ (a constant) and } \frac{1}{\left(1 + \dfrac{s}{p_i}\right)} \text{ (a simple pole factor)}.

The integrator factor (a pole at s = 0) is of the form 1/s and the derivative factor (a zero at s = 0) takes the form of s. A complex (second-order) pole factor occurs when there are two complex poles, -p_1 and -p_2 (a conjugate pair such that p_1 = a + jb and p_2 = a - jb). This can be formulated as follows:

G(s) = \frac{1}{(s + (a + jb))}\cdot\frac{1}{(s + (a - jb))} = \frac{1}{s^2 + 2as + (a^2 + b^2)}

\Longrightarrow \frac{1}{s^2 + 2\xi w_n s + w_n^2} \quad \text{(expressing the result in standard form)}

Therefore,

G(s) = \frac{1}{w_n^2\left[1 + 2\xi\left(\dfrac{s}{w_n}\right) + \left(\dfrac{s}{w_n}\right)^2\right]}   (6.14)

\Longrightarrow \frac{1}{w_n^2} \text{ (a constant gain) and } \frac{1}{\left[1 + 2\xi\left(\dfrac{s}{w_n}\right) + \left(\dfrac{s}{w_n}\right)^2\right]} \text{ (a complex pole factor)}.

A complex (second-order) zero factor occurs when there are two complex zeros -z_1 and -z_2 (a conjugate pair such that z_1 = a + jb and z_2 = a - jb).

G(s) = (s + (a + jb))(s + (a - jb)) = s^2 + 2as + (a^2 + b^2)

\Longrightarrow s^2 + 2\xi w_n s + w_n^2 = w_n^2\left[1 + 2\xi\left(\dfrac{s}{w_n}\right) + \left(\dfrac{s}{w_n}\right)^2\right]   (6.15)

\Longrightarrow w_n^2 \text{ (a constant gain) and } \left[1 + 2\xi\left(\dfrac{s}{w_n}\right) + \left(\dfrac{s}{w_n}\right)^2\right] \text{ (a complex zero factor)}.


These are the six unique zero and pole factors that a transfer function can be broken into. Hence, including the two forms of the constant function, there are eight exhaustive elementary functions that one has to be able to draw in order to be in a position to establish the Bode plots of any transfer function. An advantage of the Bode representation is that the log-magnitude and phase need only be changed in sign for reciprocal factors. Hence, the log-magnitude and phase of the simple pole factor are the negatives of those of the simple zero factor, the log-magnitude and phase of the integrator factor are the negatives of those of the derivative factor, and the log-magnitude and phase of the complex pole factor are the negatives of those of the complex zero factor. This reduces the eight elementary functions into four categories. Hence, a careful study and construction of Bode plots for four classes of factors are sufficient and exhaustive.

6.3.5 FOUR CLASSES OF BASIC FACTORS

The four categories of basic factors are a constant, a simple zero or pole, a derivative or an integrator, and a complex zero or pole.

• Constant Factors

G(s) = K \text{ and } G(s) = \frac{1}{K}.

• Simple (First-Order) Zero and Pole Factors

G(s) = 1 + \frac{s}{z_j} \text{ and } G(s) = \frac{1}{\left(1 + \dfrac{s}{p_i}\right)}.

• Integrator and Derivative Factors

G(s) = \frac{1}{s} \text{ and } G(s) = s.

• Complex (Second-Order) Pole and Zero Factors

G(s) = \frac{1}{\left[1 + 2\xi\left(\dfrac{s}{w_n}\right) + \left(\dfrac{s}{w_n}\right)^2\right]} \text{ and } G(s) = 1 + 2\xi\left(\frac{s}{w_n}\right) + \left(\frac{s}{w_n}\right)^2.

The approach takes any given transfer function and expresses it in terms of the basic factors. The next step is to draw the Bode plots corresponding to these functions using the method of asymptotes, then add them up to get the Bode plot for the whole system. In most analyses, it is sufficient to draw the asymptotes of the Bode plots for each relevant basic factor and then combine these to get the asymptotes of the composite Bode plots. The actual Bode plots can be deduced from the asymptotes but are usually unnecessary for analysis and interpretation. MATLAB always produces the actual Bode plots. The Bode plots of these four classes of basic factors are established by analysis and drawn by hand in the following sections. For each pair in a category, the Bode plots of one factor are the reflection across the horizontal axis of those of the other factor. Put differently, in each class, the Bode plots of one factor are negatives (flip images about the horizontal axis) of those of the other. In the diagrams of the Bode plots, the straight lines are the asymptotes, whereas the curved lines are the actual Bode plots.

6.4 CONSTANT FACTORS (GAIN)

The simplest factor is a positive constant function (greater or less than one) depicted by

G(s) = K \text{ or } G(s) = \frac{1}{K}

\Longrightarrow G(jw) = K \text{ or } G(jw) = \frac{1}{K}.

6.4.1 MAGNITUDE

The magnitude is expressed by

M = 20\log_{10}|G(jw)| = 20\log_{10}|K|.

Similarly, for the reciprocal \frac{1}{K},

M = 20\log_{10}|G(jw)| = 20\log_{10}\left|\frac{1}{K}\right| = -20\log_{10}|K|.

For example,

G(s) = 10 \Longrightarrow G(jw) = 10 \Longrightarrow M = 20\log_{10} 10 = 20

G(s) = \frac{1}{10} \Longrightarrow M = 20\log_{10}\left(\frac{1}{10}\right) = 0 - 20\log_{10} 10 = -20.

6.4.2 PHASE ANGLE

The phase angles of the constant factors K and \frac{1}{K} are obtained as follows:

\phi = \arg K = \tan^{-1}\left(\frac{0}{K}\right) = 0

\phi = \arg\frac{1}{K} = \tan^{-1}\left(\frac{0}{1}\right) - \tan^{-1}\left(\frac{0}{K}\right) = 0.


The magnitudes and phase angles can then be plotted as shown in Figures 6.2 and 6.3. It is important to note that the Bode plots of K are the reflection across the horizontal axis of those of \frac{1}{K}. Put differently, the Bode plots of K are negatives (flip images about the horizontal axis) of those of \frac{1}{K}.

Figure 6.2 Bode plots for a constant factor G(s) = K = 10.

Figure 6.3 Bode plots for a constant factor G(s) = \frac{1}{K} = \frac{1}{10}.


If K is negative, the Bode magnitude is the same as that for positive K, but the phase angle is −180°. It is important to note that −180° and 180° represent the same angle. The choice of −180° is simply a convention adopted in most literature and computer packages such as MATLAB.
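These constant-factor results are easy to confirm numerically. The short stdlib-Python sketch below (the helper `constant_bode` is illustrative, not from the text) returns the decibel magnitude and the phase angle of a constant gain:

```python
import cmath
import math

def constant_bode(K):
    """Bode magnitude (dB) and phase (degrees) of a constant gain G(s) = K."""
    M = 20 * math.log10(abs(K))
    # cmath.phase returns +180 degrees for a negative real K; the -180 degrees
    # used by MATLAB and most texts is the same angle.
    phi = math.degrees(cmath.phase(complex(K)))
    return M, phi

print(constant_bode(10))    # 20 dB, 0 degrees
print(constant_bode(0.1))   # -20 dB, 0 degrees
```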

6.5 A SIMPLE ZERO FACTOR

In the simple zero factor form established above, let the zero at s = -z_j be represented by s = -w_c, where w_c is called the cut-off, breakaway or corner frequency. This is the frequency at which two Bode asymptotes meet, and its importance will become clear as the Bode plots are constructed.

G(s) = 1 + \frac{s}{w_c} \Longrightarrow G(jw) = 1 + \frac{jw}{w_c}.

6.5.1 MAGNITUDE

M = 20\log_{10}\sqrt{1 + \left(\frac{w}{w_c}\right)^2}.

For low frequencies, that is, w \ll w_c, it follows that

M = 20\log_{10}\sqrt{1 + \left(\frac{w}{w_c}\right)^2}
\Longrightarrow M \approx 20\log_{10}\sqrt{1}
\Longrightarrow M \approx 0.

Hence, the asymptote for w \ll w_c is M = 0. For high frequencies, that is, w \gg w_c, the expression for the magnitude can be simplified:

M = 20\log_{10}\sqrt{1 + \left(\frac{w}{w_c}\right)^2}
\Longrightarrow M \approx 20\log_{10}\left(\frac{w}{w_c}\right) \quad \left(\text{because } \left(\frac{w}{w_c}\right)^2 \gg 1 \text{ for } w \gg w_c\right)
\Longrightarrow M \approx 20\log_{10} w - 20\log_{10} w_c
\Longrightarrow \text{Gradient of } M = 20 \text{ db/decade and } M = 0 \text{ at } w = w_c.

Hence, the asymptote for w \gg w_c is M = 20\log_{10}(w/w_c). At the cut-off frequency (w = w_c) the two asymptotes (w \ll w_c and w \gg w_c) meet, with the value of the magnitude being M = 0. With the two asymptotes (w \ll w_c and w \gg w_c) established, including the value of M at the cut-off frequency (w = w_c), an asymptote Bode plot of M can be drawn. From the asymptote curve, the actual plot can then be sketched, as illustrated in Figure 6.4.


Figure 6.4 Bode plots for a simple zero factor G(s) = 1 + \frac{s}{w_c}; where w_c = 10.

6.5.2 PHASE ANGLE

\phi = \tan^{-1}\left(\frac{w/w_c}{1}\right) = \tan^{-1}\left(\frac{w}{w_c}\right).   (6.16)

Evaluating this expression at limit frequencies leads to

w = 0 \Longrightarrow \phi = 0   (6.17)

w \longrightarrow \infty \Longrightarrow \phi \longrightarrow 90^{\circ}.   (6.18)

At the cut-off frequency, the phase angle is

\phi = \tan^{-1}\left(\frac{w_c}{w_c}\right) = \tan^{-1} 1 = 45^{\circ}.

With the limits in Equations 6.17 and 6.18 and the cut-off phase angle (45°), the phase angle plot can be sketched as shown in Figure 6.4 using the method of asymptotes. For any simple zero factor, the only parameter that needs to be specified is the breakaway frequency (e.g., w_c = 10).

6.6 A SIMPLE POLE FACTOR

In the simple pole factor form established above, let the pole at s = -p_i be represented by s = -w_c.

G(s) = \frac{1}{1 + \dfrac{s}{w_c}} \Longrightarrow G(jw) = \frac{1}{1 + \dfrac{jw}{w_c}}.

6.6.1 MAGNITUDE

M = 20\log_{10}\frac{1}{\sqrt{1 + \left(\dfrac{w}{w_c}\right)^2}} = -20\log_{10}\sqrt{1 + \left(\frac{w}{w_c}\right)^2} \equiv -\text{The magnitude for the zero factor } \left(1 + \frac{s}{w_c}\right).

For low frequencies, that is, w \ll w_c,

M = -20\log_{10}\sqrt{1 + \left(\frac{w}{w_c}\right)^2}
\Longrightarrow M \approx -20\log_{10}\sqrt{1}
\Longrightarrow M \approx 0.

Hence, the asymptote for w \ll w_c is M = 0. For high frequencies, w \gg w_c,

M = -20\log_{10}\sqrt{1 + \left(\frac{w}{w_c}\right)^2}
\Longrightarrow M \approx -20\log_{10}\left(\frac{w}{w_c}\right) \quad \left(\text{because } \left(\frac{w}{w_c}\right)^2 \gg 1 \text{ for } w \gg w_c\right)
\Longrightarrow M \approx -20\log_{10} w + 20\log_{10} w_c   (6.19)
\Longrightarrow \text{Gradient of } M = -20 \text{ db/decade and } M = 0 \text{ at } w = w_c.

Hence, the asymptote for w \gg w_c is M = -20\log_{10}(w/w_c). At the breakaway frequency (w = w_c), the two asymptotes (w \ll w_c and w \gg w_c) meet, with the value of the magnitude being M = 0. With the two asymptotes (w \ll w_c and w \gg w_c) established, including the value of M at the cut-off frequency, an asymptote Bode plot of M can be drawn. From the asymptote curve, the actual plot can then be sketched as illustrated in Figure 6.5.


Figure 6.5 Bode plots for a simple pole factor G(s) = 1/\left(1 + \dfrac{s}{w_c}\right); w_c = 10.

6.6.2 PHASE ANGLE

\phi = \tan^{-1}(0/1) - \tan^{-1}\left(\frac{w/w_c}{1}\right)
\Longrightarrow \phi = -\tan^{-1}\left(\frac{w}{w_c}\right)   (6.20)
\Longrightarrow \phi = -\text{Phase angle for } \left(1 + \frac{s}{w_c}\right).

Taking limits in this expression leads to

w = 0 \Longrightarrow \phi = 0   (6.21)

w \longrightarrow \infty \Longrightarrow \phi \longrightarrow -90^{\circ}.   (6.22)

At the cut-off frequency, the phase angle is

\phi = -\tan^{-1}\left(\frac{w_c}{w_c}\right) = -\tan^{-1} 1 = -45^{\circ}.

With the limits in Equations 6.21 and 6.22 and the breakaway phase angle (−45°), the phase angle plot can be sketched as shown in Figure 6.5. Note that for any simple pole factor, the only parameter that needs to be specified is the breakaway frequency (e.g., w_c = 10).


It is important to note that the Bode plots of a simple pole are the reflection across the horizontal axis of those of a simple zero. Put differently, the Bode plots in Figure 6.5 are simply the negatives (flip images about the horizontal axis) of those for the simple zero factor shown in Figure 6.4, thus confirming the Bode property for reciprocal factors. A practical example of a system containing a simple pole factor is the RC filter circuit shown in Example 6.2. Its Bode plots will be similar to those in Figure 6.5, with a cut-off frequency w_c = \frac{1}{RC}.
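The RC filter observation can be made concrete with a short stdlib-Python sketch. The component values below are assumptions chosen for illustration (they are not from the text); the check confirms the exact response sits about 3 dB below the asymptote at the corner and rolls off at roughly 20 db/decade beyond it:

```python
import math

# Illustrative RC low-pass filter: G(s) = 1/(1 + sRC), a simple pole factor
# with corner frequency wc = 1/(RC). Component values are assumed.
R = 1.0e3        # 1 kOhm
C = 1.0e-6       # 1 uF
wc = 1.0 / (R * C)   # 1000 rad/s

def rc_gain_db(w):
    """Exact Bode magnitude (dB) of the RC filter at s = jw."""
    return -20 * math.log10(math.hypot(1.0, w / wc))
```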

6.7 AN INTEGRATOR FACTOR

6.7.1 MAGNITUDE

G(s) = \frac{1}{s} \Longrightarrow G(jw) = \frac{1}{jw}

M = 20\log_{10}\left(\frac{1}{w}\right) = -20\log_{10} w
\Longrightarrow \text{Gradient of } M = -20 \text{ db/decade}.

To plot this straight line with the given negative gradient, only one point needs to be located. For example, w = 1 \Longrightarrow M = 0. With one point and a gradient, the magnitude plot can be drawn as shown in Figure 6.6.

Figure 6.6 Bode plots for an integrator factor G(s) = \frac{1}{s}.


6.7.2 PHASE ANGLE

\phi = \tan^{-1}(0/1) - \tan^{-1}(w/0) = -\tan^{-1}\infty
\Longrightarrow \phi = -90^{\circ}.   (6.23)

The Bode plot for the integrator can thus be drawn as shown in Figure 6.6. There is no need for asymptotes. For a cascade of integrators such that

G(s) = \frac{1}{s^k},

the magnitude and phase are obtained as in the case of a single integrator.

M = 20\log_{10}\left|\frac{1}{(jw)^k}\right| = -20k\log_{10} w
\Longrightarrow \text{Gradient of } M = -20k \text{ db/decade}

\phi = \arg\frac{1}{(jw)^k} = \tan^{-1}(0/1) - k\tan^{-1}(w/0) = -k\tan^{-1}\infty
\Longrightarrow \phi = -90k^{\circ}.
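The cascade result can be sketched directly, since both curves are exact (no asymptotes involved). The helper below (name illustrative, stdlib Python) evaluates a cascade of k integrators; a double integrator gives 0 dB at w = 1, a slope of −40 db/decade, and a flat −180° phase:

```python
import math

def integrator_bode(w, k=1):
    """Bode magnitude (dB) and phase (degrees) of a cascade of k integrators,
    G(s) = 1/s**k: a straight line of slope -20k dB/decade and a constant
    phase of -90k degrees."""
    M = -20 * k * math.log10(w)
    phi = -90.0 * k
    return M, phi

M1, phi = integrator_bode(1.0, k=2)    # 0 dB at w = 1
M10, _ = integrator_bode(10.0, k=2)    # one decade higher: -40 dB
```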

6.8 A DERIVATIVE FACTOR

The magnitude and phase Bode plots for the derivative factor are obtained similarly to the integrator factor. In fact, they are the negatives (flip images about the horizontal axis) of the magnitude and phase Bode plots of the integrator factor, respectively.

6.8.1 MAGNITUDE

G(s) = s \Longrightarrow G(jw) = jw

M = 20\log_{10} w
\Longrightarrow \text{Gradient of } M = 20 \text{ db/decade}.

To plot this straight line with the given positive gradient, only one point has to be located. For example, w = 1 \Longrightarrow M = 0. With one point and a gradient, the magnitude plot can be drawn as shown in Figure 6.7.

6.8.2 PHASE ANGLE

\phi = \tan^{-1}(w/0) = \tan^{-1}\infty
\Longrightarrow \phi = 90^{\circ}   (6.24)


Figure 6.7 Bode plots for a derivative factor G(s) = s.

The Bode plot for the derivative factor can thus be drawn as shown in Figure 6.7. There is no need for asymptotes. It is important to note that the Bode plots of the derivative factor are the reflection across the horizontal axis of those of an integrator factor. Put differently, the Bode plots of a derivative factor are the negatives (flip images about the horizontal axis) of the integrator Bode plots. For a cascade of derivative factors such that G(s) = s^k, the magnitude and phase are obtained as in the case of a single derivative factor.

M = 20\log_{10} w^k = 20k\log_{10} w
\Longrightarrow \text{Gradient of } M = 20k \text{ db/decade}

\phi = k\tan^{-1}(w/0) = k\tan^{-1}\infty
\Longrightarrow \phi = 90k^{\circ}.

6.9 A COMPLEX POLE FACTOR

As has already been shown, the complex pole factor is established by considering the general transfer function of a standard second-order system,

H(s) = \frac{b(s)}{s^2 + 2\xi w_n s + w_n^2}


Figure 6.8 A complex pole factor (wn = 10) with values of ξ (0.1, 0.3, 1.0).

such that the complex pole factor is given by

G(s) = \frac{1}{\left[1 + 2\xi\left(\dfrac{s}{w_n}\right) + \left(\dfrac{s}{w_n}\right)^2\right]}

\Longrightarrow G(jw) = \frac{1}{\left[1 + 2\xi\left(\dfrac{jw}{w_n}\right) + \left(\dfrac{jw}{w_n}\right)^2\right]}.   (6.25)

6.9.1 MAGNITUDE

M = 20\log_{10}\left|\frac{1}{\left[1 + 2\xi\left(\dfrac{jw}{w_n}\right) + \left(\dfrac{jw}{w_n}\right)^2\right]}\right|
= -20\log_{10}\left|1 + 2\xi\left(\frac{jw}{w_n}\right) + \left(\frac{jw}{w_n}\right)^2\right|
= -20\log_{10}\sqrt{\left(1 - \frac{w^2}{w_n^2}\right)^2 + \left(2\xi\frac{w}{w_n}\right)^2}   (6.26)


For low frequencies, i.e., w \ll w_n, the magnitude becomes

M \approx -20\log_{10}\sqrt{1} \approx 0.

Hence, the asymptote for w \ll w_n is M = 0. For high frequencies, w \gg w_n,

M = -20\log_{10}\sqrt{\left(1 - \frac{w^2}{w_n^2}\right)^2 + \left(2\xi\frac{w}{w_n}\right)^2}
\Longrightarrow M \approx -40\log_{10}\left(\frac{w}{w_n}\right)   (6.27)
\Longrightarrow M \approx -40\log_{10} w + 40\log_{10} w_n
\Longrightarrow \text{Gradient of } M = -40 \text{ db/decade and } M = 0 \text{ at } w = w_n.

Hence, the asymptote for w \gg w_n is M = -40\log_{10}(w/w_n). At the breakaway frequency (w = w_n), the two asymptotes (w \ll w_n and w \gg w_n) meet, with the value of the magnitude being M = 0. The two asymptotes (w \ll w_n and w \gg w_n) are independent of the value of the damping ratio \xi. In the actual Bode plot derived from the asymptotes, near the frequency w = w_n a resonant peak M_{peak} occurs, as can be expected from Equation 6.26. The resonant peak is of the form

M_{peak} = -20\log_{10} 2\xi.

This clearly shows that the damping ratio \xi determines the magnitude of the resonant peak. There are large peaks for small values of \xi. The magnitude Bode plots are shown in Figure 6.8 for various values of \xi. For example, for \xi = 0.1 the magnitude plot has a resonant peak at

M_{peak} = -20\log_{10}(2 \times 0.1) = 13.98.

It is important to note that the resonant peak for a complex pole factor is a maximum of the Bode magnitude function on the logarithmic scale.

6.9.2 PHASE ANGLE

\phi = -\tan^{-1}\left(\frac{2\xi\dfrac{w}{w_n}}{1 - \dfrac{w^2}{w_n^2}}\right).   (6.28)

This means the phase angle is a function of both w and \xi. Analysing this expression leads to

w = 0 \Longrightarrow \phi = 0^{\circ}
w = w_n \Longrightarrow \phi = -90^{\circ}
w = \infty \Longrightarrow \phi = -180^{\circ}.

The phase angle curve is skew-symmetric about the inflection point, where \phi = -90^{\circ}, with 0^{\circ} and -180^{\circ} as asymptotes, as shown in Figure 6.8. This diagram shows the phase Bode plots for various values of \xi.
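The resonant-peak values quoted above can be checked against the exact magnitude of Equation 6.26 evaluated at w = w_n. The stdlib-Python sketch below (helper name illustrative) reproduces the 13.98 dB peak for ξ = 0.1 and the 0 dB value for ξ = 0.5:

```python
import math

def complex_pole_mag_db(w, wn, zeta):
    """Exact Bode magnitude (dB) of 1/(1 + 2*zeta*(s/wn) + (s/wn)**2) at s = jw,
    i.e. Equation 6.26 evaluated directly."""
    r = w / wn
    return -10 * math.log10((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)

# At w = wn the magnitude collapses to -20*log10(2*zeta).
peak_light = complex_pole_mag_db(10.0, 10.0, 0.1)   # about 13.98 dB
peak_half = complex_pole_mag_db(10.0, 10.0, 0.5)    # 0 dB
```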


6.10 A COMPLEX ZERO FACTOR

The Bode plots for a complex zero factor are obtained by simply reversing the sign of the magnitude M and phase angle \phi of the complex pole factor. They can also be derived from first principles and analysed just as was done for the complex pole factor, and the Bode plots are similarly drawn.

6.10.1 MAGNITUDE

M = 20\log_{10}\sqrt{\left(1 - \frac{w^2}{w_n^2}\right)^2 + \left(2\xi\frac{w}{w_n}\right)^2}   (6.29)

For low frequencies, i.e., w \ll w_n, the magnitude becomes

M \approx 20\log_{10}\sqrt{1} \approx 0.

Hence, the asymptote for w \ll w_n is M = 0. For high frequencies, w \gg w_n,

M = 20\log_{10}\sqrt{\left(1 - \frac{w^2}{w_n^2}\right)^2 + \left(2\xi\frac{w}{w_n}\right)^2}
\Longrightarrow M \approx 40\log_{10}\left(\frac{w}{w_n}\right)   (6.30)
\Longrightarrow M \approx 40\log_{10} w - 40\log_{10} w_n
\Longrightarrow \text{Gradient of } M = 40 \text{ db/decade and } M = 0 \text{ at } w = w_n.

Hence, the asymptote for w \gg w_n is M = 40\log_{10}(w/w_n). At the breakaway frequency (w = w_n), the two asymptotes (w \ll w_n and w \gg w_n) meet, with the value of the magnitude being M = 0. The two asymptotes (w \ll w_n and w \gg w_n) are independent of the value of the damping ratio \xi. In the actual Bode plot derived from the asymptotes, near the frequency w = w_n a resonant peak M_{peak} occurs, as can be expected from Equation 6.29. The resonant peak is of the form

M_{peak} = 20\log_{10} 2\xi.

This clearly shows that the damping ratio \xi determines the magnitude of the resonant peak. There are large peaks for small values of \xi. The magnitude Bode plots for various values of \xi are just flip images (reflection across the horizontal axis) of those of the complex pole factor shown in Figure 6.8. For example, for \xi = 0.1 the magnitude plot has a resonant peak at

M_{peak} = 20\log_{10}(2 \times 0.1) = -13.98.

It is important to note that the resonant peak for a complex zero factor is a minimum of the Bode magnitude function on the logarithmic scale.

6.10.2 PHASE ANGLE

\phi = \tan^{-1}\left(\frac{2\xi\dfrac{w}{w_n}}{1 - \dfrac{w^2}{w_n^2}}\right).   (6.31)


This means the phase angle is a function of both w and \xi. Analysing this expression leads to

w = 0 \Longrightarrow \phi = 0^{\circ}
w = w_n \Longrightarrow \phi = 90^{\circ}
w = \infty \Longrightarrow \phi = 180^{\circ}.

The phase angle curve is skew-symmetric about the inflection point, where \phi = 90^{\circ}, with 0^{\circ} and 180^{\circ} as asymptotes. The phase Bode plots for various values of \xi are just flip images (reflection across the horizontal axis) of those of the complex pole factor shown in Figure 6.8. The relationship (reflection across the horizontal axis) between the Bode plots of a complex zero factor and those of a complex pole factor can be summarised as follows:

\text{Complex zero factor} = \frac{1}{\text{Complex pole factor}} = -\text{Complex pole factor (logarithmic scale)}

Put differently, the Bode plots of a complex zero factor are the negatives (flip images about the horizontal axis) of those of a complex pole factor. The drawing of the Bode plots of a complex zero factor is left as an exercise for the reader.

6.11 DRAWING BODE PLOTS OF COMPLEX SYSTEMS

Given the transfer function of a complex dynamic system, express and break it up into the basic factors (functions) discussed in the preceding sections. Identify the cut-off frequencies w_c or w_n and draw the Bode plots corresponding to the established basic factors using the method of asymptotes described in the preceding sections. The next step is adding up the asymptotes of these basic factors to get the composite asymptotes of the Bode plot for the whole system. If necessary, the actual Bode plot can then be fitted around the asymptotes. The benefits of hand-plotting Bode plots include the following [11]:

• Enables the handling of simple control problems without using a computer.
• Facilitates the performance of sanity checks on computer results for more complex systems.
• The hand-drawn Bode plots can be used to determine the stability of a system and help ascertain the form of the compensator required.
• Hand-plotting of Bode plots can be used to interpret experimentally generated frequency-response data.

Examples presented in the next section illustrate the techniques of hand-drawing Bode plots for complex systems using the basic factor approach. Of prime importance are the asymptotes of the Bode plots and not the actual Bode plots. The latter are easily fitted around the established asymptotes and can also be produced using MATLAB.

6.12 WORKED EXAMPLES OF BODE PLOTS

Example 6.4 Consider the following system transfer function,

G(s) = \frac{2s + 2}{s^2 + 10s}.


(a) Show that the function can be broken into constant, zero, pole, and integrator factors.
(b) Obtain the overall system Bode plots.

Solution 6.4 (a) The transfer function can be broken into basic transfer functions as follows:

G(s) = \frac{2s + 2}{s^2 + 10s} = \frac{\dfrac{1}{5}\left(1 + \dfrac{s}{1}\right)}{s\left(1 + \dfrac{s}{10}\right)}
\Longrightarrow \frac{1}{5},\; \left(1 + \frac{s}{1}\right),\; \frac{1}{\left(1 + \dfrac{s}{10}\right)},\; \frac{1}{s}.

The four basic factors are a constant, a zero, a pole and an integrator. The cut-off frequency for the zero function is given by w_c = 1 and that for the pole function by w_c = 10.

(b) The four basic plots are then separately drawn as discussed above, and then added together to produce the composite Bode plots. The results are shown in Figures 6.9–6.14.

Figure 6.9 Bode plots for the constant factor 1/5.
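The additivity that underlies this solution can be confirmed numerically: at any frequency, the decibel magnitudes of the four factors sum to the magnitude of the full transfer function. A minimal stdlib-Python sketch (the helper `db` is illustrative):

```python
import math

def db(x):
    """Magnitude of a complex value in decibels."""
    return 20 * math.log10(abs(x))

# Example 6.4: G(s) = (2s + 2)/(s^2 + 10s), checked at w = 3 rad/s.
w = 3.0
s = 1j * w
factors = [1 / 5, 1 + s, 1 / (1 + s / 10), 1 / s]
M_sum = sum(db(f) for f in factors)
M_full = db((2 * s + 2) / (s ** 2 + 10 * s))
```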


Figure 6.10 Bode plots for the zero factor \left(1 + \dfrac{s}{1}\right).

Example 6.5 Consider the following transfer function:

G(s) = \frac{1}{s^2 + 10s + 100}.

(a) Express the transfer function in terms of a constant factor and a complex pole factor.
(b) Draw the Bode plots for the two factors.
(c) Derive the overall Bode plots for the system.

Solution 6.5 (a) The starting point is expressing the transfer function in terms of basic factors. This is achieved as follows:

G(s) = \frac{1}{s^2 + 10s + 100} = \frac{1}{100\left[1 + \dfrac{s}{10} + \left(\dfrac{s}{10}\right)^2\right]}


Figure 6.11 Bode plots for the pole factor 1/(1 + s/10).

G(jw) = \frac{1}{100\left[1 + \dfrac{jw}{10} + \left(\dfrac{jw}{10}\right)^2\right]}
\Longrightarrow \frac{1}{w_n^2\left[1 + 2\xi\left(\dfrac{jw}{w_n}\right) + \left(\dfrac{jw}{w_n}\right)^2\right]}

\Longrightarrow \frac{1}{100} \text{ (a constant gain) and } \frac{1}{\left[1 + \dfrac{jw}{10} + \left(\dfrac{jw}{10}\right)^2\right]} \text{ (a complex pole factor)}

where w_n = 10 and \xi = 0.5 \Longrightarrow M_{peak} = -20\log_{10} 2\xi = 0.


Figure 6.12 Bode plots for the integrator factor 1/s.

(b), (c) The two functions, a constant gain and a complex pole factor, are then sketched separately and then added together to produce the asymptotes of the overall Bode plots (magnitude and phase angle). The system's actual Bode plots can be obtained by using MATLAB.

Example 6.6 (a) Show that the following transfer function can be broken into zero, pole, integrator, and complex factors.

G(s) = \frac{1{,}000(1 + s)}{s(s + 2)(s^2 + 8s + 64)}.

(b) Sketch the Bode plots of these basic factors.
(c) Deduce the overall Bode plots for the system.

Solution 6.6 (a) The transfer function can be expressed in terms of basic factors as follows:

G(s) = \frac{1{,}000(1 + s)}{s(s + 2)(s^2 + 8s + 64)}


Figure 6.13 Overall magnitude Bode plot for (2s + 2)/{s(s + 10)}.

= \frac{1{,}000(1 + s)}{s \cdot 2\left(1 + \dfrac{s}{2}\right) \cdot 64\left[1 + \dfrac{s}{8} + \left(\dfrac{s}{8}\right)^2\right]}
= \frac{\dfrac{125}{16}(1 + s)}{s\left(1 + \dfrac{s}{2}\right)\left[1 + \dfrac{s}{8} + \left(\dfrac{s}{8}\right)^2\right]}
\Longrightarrow \frac{125}{16},\; \left(1 + \frac{s}{1}\right),\; \frac{1}{s},\; \frac{1}{\left(1 + \dfrac{s}{2}\right)},\; \frac{1}{\left[1 + \dfrac{s}{8} + \left(\dfrac{s}{8}\right)^2\right]}.

Thus there are five basic factors: a constant, a zero, a pole, an integrator, and a complex pole factor. The cut-off frequency for the zero factor is given by w_c = 1 and that for the pole factor by w_c = 2. For the complex pole, the natural frequency w_n and the damping ratio \xi are found by comparing it to the standard form, i.e.


Figure 6.14 Overall phase angle Bode plot for (2s + 2)/{s(s + 10)}.

\frac{1}{1 + \dfrac{s}{8} + \left(\dfrac{s}{8}\right)^2} = \frac{1}{1 + \dfrac{jw}{8} + \left(\dfrac{jw}{8}\right)^2}
\Longrightarrow \frac{1}{\left[1 + 2\xi\left(\dfrac{jw}{w_n}\right) + \left(\dfrac{jw}{w_n}\right)^2\right]}

\Longrightarrow w_n = 8 \text{ and } \xi = 0.5 \Longrightarrow M_{peak} = -20\log_{10} 2\xi = 0.
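The factorisation in Solution 6.6 can be verified numerically: the factored form with the constant 125/16 must reproduce the original transfer function exactly at any test frequency. A minimal stdlib-Python check:

```python
# Example 6.6: G(s) = 1000(1 + s)/(s(s + 2)(s^2 + 8s + 64)) versus its
# factored form: constant 125/16, zero (1 + s), integrator 1/s,
# pole 1/(1 + s/2), and complex pole with wn = 8, zeta = 0.5.
w = 5.0
s = 1j * w
G_direct = 1000 * (1 + s) / (s * (s + 2) * (s ** 2 + 8 * s + 64))
G_factored = (125 / 16) * (1 + s) / (s * (1 + s / 2) * (1 + s / 8 + (s / 8) ** 2))
```

The constant arises as 1000/(2 × 64) = 125/16, which is why pulling every corner frequency out of each factor is the first step of the method.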


(b), (c) The five basic plots can now be drawn separately and then added to produce the composite Bode plots. The system's actual Bode plots can be obtained by using MATLAB.

Example 6.7 Consider a dynamic system with the following transfer function,

G(s) = \frac{5s + 5}{2.5s^2 + 25s}.

Draw the Bode plots of the system.

Solution 6.7 To draw the Bode plot of a system, its transfer function must be broken down into factors that can be easily drawn.

G(s) = \frac{5s + 5}{2.5s^2 + 25s} = \frac{5(1 + s)}{2.5s(s + 10)} = \frac{1}{5}\left(1 + \frac{s}{1}\right)\left(\frac{1}{s}\right)\frac{1}{\left(1 + \dfrac{s}{10}\right)}

There are four factors:

• \frac{1}{5} (a constant)
• 1 + \frac{s}{1} (a simple zero)
• \frac{1}{s} (an integrator)
• \frac{1}{1 + \dfrac{s}{10}} (a simple pole)

The next step is to draw the Bode plots of each of the four factors, and then add up the curves to get the composite magnitude and phase angle curves (M and \phi) of the dynamic system:

G(s) = \frac{5s + 5}{2.5s^2 + 25s}

Example 6.8 Consider a dynamic system with the following transfer function,

G(s) = \frac{s + 2}{(s + 3)(s + 4)}.

Draw the Bode plots of the system.

Solution 6.8 To draw the Bode plot of a system, its transfer function must be broken down into factors that can be easily drawn.

G(s) = \frac{s + 2}{(s + 3)(s + 4)} = \frac{2\left(1 + \dfrac{s}{2}\right)}{12\left(1 + \dfrac{s}{3}\right)\left(1 + \dfrac{s}{4}\right)} = \frac{1}{6}\cdot\frac{\left(1 + \dfrac{s}{2}\right)}{\left(1 + \dfrac{s}{3}\right)\left(1 + \dfrac{s}{4}\right)}


There are four basic factors:

• \frac{1}{6} (a constant)
• 1 + \frac{s}{2} (a simple zero)
• \frac{1}{1 + \dfrac{s}{3}} (a simple pole)
• \frac{1}{1 + \dfrac{s}{4}} (a simple pole)

The next step is to draw the Bode plots of each of the four factors and then add up the curves to get the composite magnitude and phase angle curves (M and \phi) of the system:

G(s) = \frac{s + 2}{(s + 3)(s + 4)}

Example 6.9 Consider a dynamic system with the following transfer function:

G(s) = \frac{1}{s^2 + 100s + 10{,}000}

Draw the Bode plots of the system.

Solution 6.9 To draw the Bode plot of a system, its transfer function must be broken down into factors that can be easily drawn. Consider the given transfer function:

G(s) = \frac{1}{s^2 + 100s + 10{,}000}

It is then compared with the standard form of a second-order system given by

G(s) = \frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2}

Manipulating the given transfer function leads to:

G(s) = \frac{1}{s^2 + 100s + 10{,}000} = \frac{1}{10{,}000\left[1 + \dfrac{s}{100} + \left(\dfrac{s}{100}\right)^2\right]} = \frac{1}{100^2\left[1 + \dfrac{j\omega}{100} + \left(\dfrac{j\omega}{100}\right)^2\right]}

This outcome is compared with a rearranged standard second-order system.

G(s) = \frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2} = \frac{1}{\omega_n^2\left[1 + 2\zeta\left(\dfrac{j\omega}{\omega_n}\right) + \left(\dfrac{j\omega}{\omega_n}\right)^2\right]}


Hence, there are two factors:

• \frac{1}{10{,}000} (a constant)
• \frac{1}{1 + \dfrac{j\omega}{100} + \left(\dfrac{j\omega}{100}\right)^2} (a complex pole)

where:

\omega_n = 100
2\zeta = 1 \Longrightarrow \zeta = 0.5
M_{peak} = -20\log(2\zeta) = -20\log(1) = 0

The next step is drawing the Bode plots of each of the two factors separately, and then adding up the curves to get the composite magnitude and phase angle curves (M and \phi) of the system:

G(s) = \frac{1}{s^2 + 100s + 10{,}000}

Example 6.10 Consider a dynamic system with the following transfer function,

G(s) = \frac{s + 1}{s^2 + 1{,}000s}

Draw the Bode plots of the system.

Solution 6.10 To draw the Bode plot of a system, its transfer function must be broken down into factors that can be easily drawn.

G(s) = \frac{s + 1}{s^2 + 1{,}000s} = \frac{s + 1}{s(s + 1{,}000)} = \frac{1}{1{,}000}\left(1 + \frac{s}{1}\right)\left(\frac{1}{s}\right)\frac{1}{\left(1 + \dfrac{s}{1{,}000}\right)}

There are four factors:

• \frac{1}{1{,}000} (a constant)
• 1 + \frac{s}{1} (a simple zero)


• \frac{1}{1 + \dfrac{s}{1{,}000}} (a simple pole)
• \frac{1}{s} (an integrator)

Draw the Bode plots of these four factors and add up the plots to get the composite Bode plots.

Example 6.11 Consider a dynamic system with the following transfer function:

G(s) = \frac{5s + 5}{2.5s^2 + 250s}

Draw the Bode plots of the system.

Solution 6.11 To draw the Bode plot of a system, its transfer function must be broken down into factors that can be easily drawn.

G(s) = \frac{5s + 5}{2.5s^2 + 250s} = \frac{5(s + 1)}{2.5 \times 100 \times s\left(1 + \dfrac{s}{100}\right)} = \frac{1}{50}\left(1 + \frac{s}{1}\right)\left(\frac{1}{s}\right)\frac{1}{\left(1 + \dfrac{s}{100}\right)}

There are four factors:

• \frac{1}{50} (a constant)
• 1 + \frac{s}{1} (a simple zero)
• \frac{1}{1 + \dfrac{s}{100}} (a simple pole)
• \frac{1}{s} (an integrator)

Draw the Bode plots of these four factors and add up the plots to get the composite Bode plots.

Example 6.12 Consider a dynamic system with the following transfer function:

G(s) = \frac{s + 5}{(s + 2)(s + 4)}

Draw the Bode plots of the system.


Solution 6.12 To draw the Bode plot of a system, its transfer function must be broken down into factors that can be easily drawn.

G(s) = \frac{s + 5}{(s + 2)(s + 4)} = \frac{5\left(1 + \dfrac{s}{5}\right)}{2\left(1 + \dfrac{s}{2}\right)\times 4\left(1 + \dfrac{s}{4}\right)} = \frac{5}{8}\cdot\frac{\left(1 + \dfrac{s}{5}\right)}{\left(1 + \dfrac{s}{2}\right)\left(1 + \dfrac{s}{4}\right)}

There are four factors:

• \frac{5}{8} (a constant)
• 1 + \frac{s}{5} (a simple zero)
• \frac{1}{1 + \dfrac{s}{2}} (a simple pole)
• \frac{1}{1 + \dfrac{s}{4}} (a simple pole)

Draw the Bode plots of these four factors and add up the plots to get the composite Bode plots.

Example 6.13 Consider a dynamic system with the following transfer function:

G(s) = \frac{1}{s^2 + 10s + 100}

Draw the Bode plots of the system.

Solution 6.13 Compare this system with the standard form of a second-order system given by

G(s) = \frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2}

It follows that:

G(s) = \frac{1}{100\left[1 + \dfrac{s}{10} + \left(\dfrac{s}{10}\right)^2\right]} = \frac{1}{10^2\left[1 + \dfrac{j\omega}{10} + \left(\dfrac{j\omega}{10}\right)^2\right]}


Compare this outcome with a generic complex pole extracted as follows:

G(s) = \frac{1}{\omega_n^2\left[1 + 2\zeta\left(\dfrac{j\omega}{\omega_n}\right) + \left(\dfrac{j\omega}{\omega_n}\right)^2\right]}

Hence, there are two factors:

• \frac{1}{100} (a constant)
• \frac{1}{1 + \dfrac{j\omega}{10} + \left(\dfrac{j\omega}{10}\right)^2} (a complex pole)

where:

\omega_n = 10
2\zeta = 1 \Longrightarrow \zeta = 0.5
M_{peak} = -20\log(2\zeta) = -20\log(1) = 0

Next, draw the Bode plots of the two factors and then add up the curves to get the composite magnitude and phase angle curves (M and \phi) of the system:

G(s) = \frac{1}{s^2 + 10s + 100}

6.13 NONMINIMUM PHASE SYSTEMS

As explained in Chapter 3, a nonminimum phase system has at least one zero in the right-hand plane (RHP). Such a configuration arises when the system includes a nonminimum phase element or when there is an unstable minor loop. Consider two transfer functions that have the same structure except that one system has a zero at s = -w_z while the other has a zero at s = w_z, such that

G_1(s) = \frac{1 + \dfrac{s}{w_z}}{1 + \dfrac{s}{w_p}}, \quad G_2(s) = \frac{1 - \dfrac{s}{w_z}}{1 + \dfrac{s}{w_p}}, \quad 0 \le w_z \le w_p.

6.13.1 MAGNITUDE AND PHASE

The two systems have the same magnitude (M_1 = M_2) but different phase angles (\phi_1 and \phi_2). This can be demonstrated as follows. The two complex quantities G_1(jw) and G_2(jw) differ by a factor

G(jw) = \frac{G_2(jw)}{G_1(jw)} = \frac{1 - \dfrac{jw}{w_z}}{1 + \dfrac{jw}{w_z}}.


Figure 6.15 The effect of a nonminimum phase system: Bode plots of G1(s) and G2(s).

The magnitude of this factor is always unity, which means the magnitudes of the two systems are equal. On the other hand, the phase angle of this factor is given by

\phi = \phi_2 - \phi_1 = \arg G(jw) = \tan^{-1}\left(-\frac{w}{w_z}\right) - \tan^{-1}\left(\frac{w}{w_z}\right) = -2\tan^{-1}\left(\frac{w}{w_z}\right)

\Longrightarrow \phi \text{ varies from } 0 \text{ to } -180^{\circ} \text{ as } w \text{ varies from } 0 \text{ to } \infty.

This is the difference between the phase angles \phi_1 and \phi_2. The Bode plots for the two systems G_1(s) and G_2(s) are shown in Figure 6.15, where w_z = 100 and w_p = 10. Nonminimum phase systems arise in two different ways. One way is when a system includes a nonminimum phase element or elements. The other situation may occur when a minor loop is unstable. Nonminimum phase systems are slow in response because of their faulty behaviour at the start of the response. In most practical systems, excessive phase lag should be carefully avoided. In particular, in system design where speed of response is of primary importance, nonminimum phase elements should be avoided.
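Both properties of the RHP zero can be confirmed numerically: identical magnitudes, but an extra phase lag of −2 tan⁻¹(w/wz) for the nonminimum phase system. A minimal stdlib-Python check (function names illustrative, using the wz and wp values from Figure 6.15):

```python
import cmath
import math

# G1 has its zero at s = -wz (minimum phase); G2 has it at s = +wz (nonminimum phase).
wz, wp = 100.0, 10.0

def G1(s):
    return (1 + s / wz) / (1 + s / wp)

def G2(s):
    return (1 - s / wz) / (1 + s / wp)

w = 30.0
s = 1j * w
dphi = cmath.phase(G2(s)) - cmath.phase(G1(s))   # equals -2*atan(w/wz)
```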

6.14 IMPACT OF TIME DELAY ON BODE PLOTS

As already discussed in Chapters 4 and 5, time delays often occur in the signals that flow between components of a control system. The time delay function or transport lag is a nonminimum phase system with an excessive phase lag and no attenuation at high frequencies. Such transport delays exist in thermal, hydraulic, and pneumatic systems. For example, when a tap is turned on at one end of a long hose pipe, it takes some time before the water appears at the far end. Also, when the feed


to a conveyor belt is changed, it takes some time to observe the change at the other end. A pure time delay function can be modelled by considering a general system g(t) delayed by time τ, such that the system is represented by g(t − τ). Taking Laplace transforms of this function leads to L g(t − τ) = e−sτ G(s) = Gτ (s)G(s). Hence, the Laplace transform of a pure time delay is given by Gτ (s) = e−sτ . Figure 6.16 illustrates the Block Diagram model of a generic plant with a time delay function. To deal with the time delay function, the Pade approximation was required in root locus analysis. The first- and second-order Pade approximations are as follows, respectively:

Gτ(s) = e^(−sτ) = e^(−sτ/2)/e^(sτ/2) ≈ (1 − sτ/2)/(1 + sτ/2) = (−sτ + 2)/(sτ + 2).

Gτ(s) = e^(−sτ) = e^(−sτ/2)/e^(sτ/2) ≈ (1 − sτ/2 + s²τ²/12)/(1 + sτ/2 + s²τ²/12) = (s² − 6s/τ + 12/τ²)/(s² + 6s/τ + 12/τ²).

In frequency-response methods (Bode and Nyquist), an exact analysis of the delay function is possible. The Pade approximation is not necessary.
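Since the two approximations above are easy to evaluate numerically, the quality of the fit can be checked in a few lines. The Python sketch below (an illustration, not from the text; the delay value is assumed) confirms that both approximations are all-pass on the jω axis and that the second-order one tracks the exact phase of e^(−sτ) more closely:

```python
import numpy as np

def pade1(s, tau):
    # First-order Pade approximation of e^(-s*tau)
    return (2 - s * tau) / (2 + s * tau)

def pade2(s, tau):
    # Second-order Pade approximation of e^(-s*tau)
    x = s * tau
    return (1 - x / 2 + x ** 2 / 12) / (1 + x / 2 + x ** 2 / 12)

tau = 1.0
s = 1j * 1.0  # evaluate on the jw axis at w = 1 rad/s
exact = np.exp(-s * tau)

err1 = abs(pade1(s, tau) - exact)
err2 = abs(pade2(s, tau) - exact)
# Both approximations have unit magnitude on the jw axis (all-pass), so the
# error above is purely a phase error; err2 is much smaller than err1.
```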

6.14.1 BODE PLOTS OF A TIME DELAY

The frequency response of the time delay function is given by the magnitude and phase of Gτ(jω), where:

Gτ(jω) = e^(−sτ)|s=jω = e^(−jωτ) = cos ωτ − j sin ωτ (from Euler's equation).

The Bode plots (magnitude and phase angle) of the time delay function can then be derived easily.

Figure 6.16 A dynamic system with a time delay function.


Frequency-Response Design Methods

6.14.1.1 Magnitude

The magnitude is expressed by

Mτ = 20 log10 |Gτ(jω)| = 20 log10 √(cos² ωτ + sin² ωτ) = 20 log10 1 = 0.

This means the magnitude, Mτ, of a pure time delay function is 0.

6.14.1.2 Phase Angle

The phase angle of a time delay function is obtained as follows:

φτ = arg Gτ(jω) = tan⁻¹(−sin ωτ / cos ωτ) = −ωτ (radians) = −57.3ωτ (degrees).

This means the phase angle varies linearly with the frequency ω. The phase angle (φτ = −ωτ) is shown in Figure 6.17. The plot of φτ versus ω is a straight line with a gradient of −τ (or −57.3τ in degrees). However, for Bode plots, φτ is plotted against the common logarithm of ω, that is, φτ versus log ω, which is why the curve in Figure 6.17 is not a straight line. The complete set of Bode plots for the time delay function is presented in Figure 6.18.
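The zero magnitude and linear phase can be verified numerically. The following Python sketch (an illustration, not part of the original text) evaluates Gτ(jω) = e^(−jωτ) directly:

```python
import numpy as np

tau = 1.0  # delay in seconds (illustrative value)
w = np.array([0.1, 1.0, 3.0])  # rad/s, kept below pi/tau to avoid phase wrapping
G = np.exp(-1j * w * tau)      # G_tau(jw) = e^(-jw*tau)

mag_db = 20 * np.log10(np.abs(G))    # identically 0 dB at every frequency
phase_deg = np.degrees(np.angle(G))  # equals -57.3*w*tau degrees
```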

Figure 6.17 Phase angle for a pure time delay function G(s) = e−sτ .


Figure 6.18 Bode plots for a time delay function.

6.14.2 IMPACT OF TIME DELAY ON BODE PLOTS

As has already been established, the Laplace transform of a pure time delay function is given by Gτ (s) = e−sτ .


The magnitude and phase angle of such a function were derived to be:

Mτ = 0
φτ = −ωτ (in radians) = −57.3ωτ (in degrees)

This means that if there is a time delay, τ, in a plant, G(s), whose Bode plots are M and φ, the new Bode plots (MD and φD) are obtained as follows:

MD = M + 0 = M (the magnitude is not affected)
φD = φ − ωτ (in radians) = φ − 57.3ωτ (in degrees)
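These rules are easy to apply numerically. The Python/SciPy sketch below (a hedged equivalent of the hand calculation, using the plant G(s) = (s + 1)/(s² + 100s) from the example that follows and an assumed delay of τ = 1 s) shifts the phase curve by −57.3ωτ while leaving the magnitude untouched:

```python
import numpy as np
from scipy import signal

# Plant G(s) = (s + 1)/(s^2 + 100s) with a 1-second pure time delay
G = signal.TransferFunction([1, 1], [1, 100, 0])
tau = 1.0
w = np.array([0.1, 1.0, 10.0, 100.0])

w, mag_db, phase_deg = signal.bode(G, w)          # magnitude in dB, phase in degrees
mag_delayed = mag_db                              # M_D = M (unchanged)
phase_delayed = phase_deg - np.degrees(w * tau)   # phi_D = phi - 57.3*w*tau
```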

The following example illustrates the impact of a pure time delay on the Bode plots of a dynamic system.

Example 6.14 For a dynamic system whose plant transfer function is given by

G(s) = (s + 1)/(s² + 100s)

(a) Draw the two Bode plots (magnitude and phase angle) for the system. (b) Draw the Bode plots for a pure time delay function represented by g(t − τ). (c) If the above system has a time delay of one second, how does this affect its Bode plots?

Solution 6.14 To draw the Bode plots of a system, its transfer function must be broken down into factors that can be easily drawn:

G(s) = (s + 1)/(s² + 100s)
     = (1 + s)/[100s(1 + s/100)]
     = (1/100)(1 + s)(1/s)[1/(1 + s/100)]

There are four factors:

• 1/100, a constant
• (1 + s), a simple zero
• 1/s, an integrator
• 1/(1 + s/100), a simple pole

The next step is drawing the Bode plots of the four factors separately, and then adding up the curves to get the composite magnitude and phase angle curves (M and φ) of the system.


(a) The results are shown in Figures 6.19–6.24.

(b) A pure time delay function can be modelled by considering a general system g(t) delayed by time τ, such that the system is represented by g(t − τ). Taking Laplace transforms of this function leads to

L{g(t − τ)} = e^(−sτ)G(s) = Gτ(s)G(s), where Gτ(s) = e^(−sτ).

Figure 6.19 Bode plots for the constant factor 1/100.


Figure 6.20 Bode plots for the zero function (1 + 1s ).

This is the Laplace transform of a pure time delay function. The magnitude and phase angle of such a function are as derived earlier in Section 6.14:

Gτ(s) = e^(−sτ) ⟹ Gτ(jω) = e^(−jωτ) ⟹ Mτ = 0
Gτ(jω) = e^(−jωτ) ⟹ φτ = −ωτ (in radians) = −57.3ωτ (in degrees)

The plot of Mτ versus ω is just the ω axis (Mτ = 0). The plot of φτ versus ω is a straight line with a gradient of −τ or −57.3τ. However, for Bode plots we plot φτ versus the common logarithm of ω, that is, φτ versus log10 ω. This produces the curve in Figure 6.25.


Figure 6.21 Bode plots for the pole factor 1/(1 + s/100).

(c) The magnitude and phase angle of a time delay function for the specific case, where τ = 1 second, are given by:

Mτ = 0
φτ = −ω (in radians) = −57.3ω (in degrees)

The Bode plots of this specific time delay function, where τ = 1 second, have the same structures as those of the general pure time delay in (b). The impact of a pure time delay on any plant G(s) (whose original Bode plots are M and φ) can be established from the derivations in (a). Let MD be the magnitude for the system with time delay, and φD the phase angle for the same system with time delay. Since Mτ = 0, the time delay function does not affect the magnitude of the original system, while the phase angle is delayed by ωτ or 57.3ωτ. Hence, the new magnitude (MD) and new phase angle (φD) of a system G(s),


Figure 6.22 Bode plots for the integrator factor 1/s.

which is subjected to a time delay, are then derived as follows:

GD(s) = Gτ(s)G(s) = e^(−τs)G(s)
⟹ MD = M + 0 = M (no change)
⟹ φD = φ + φτ = φ − ωτ (in radians) = φ − 57.3ωτ (in degrees)


Figure 6.23 Overall magnitude Bode plot for (s + 1)/{s(s + 100)}.

For the specific case where τ = 1 second it follows that:

GD(s) = Gτ(s)G(s) = e^(−s)G(s)
⟹ MD = M + 0 = M (no change)
⟹ φD = φ + φτ = φ − ω (in radians) = φ − 57.3ω (in degrees)


Figure 6.24 Overall phase angle Bode plot for (s + 1)/{s(s + 100)}.

Thus, the new Bode plots of the plant G(s), which has been subjected to a pure time delay function where τ = 1 second, are obtained by adjusting the original Bode plots of G(s) obtained in (a). Only the phase angle is affected by the pure time delay; the magnitude is not. The phase angles of the original system and of the pure time delay function are drawn separately and then added up to obtain the new phase angle. It is important to note that the phase angle (and magnitude) curves are drawn with respect to the common logarithm of the angular frequency (log10 ω). The Bode plots of the time delay function and its impact on the system under consideration are summarised in Figures 6.26–6.30. Similarly, if a time delay function of one second is applied to the system in Example 6.4, the impact of the time delay will be as illustrated in the following diagrams:


Figure 6.25 Phase angle for a pure time delay function G(s) = e−sτ .

6.15 BODE PLOTS USING MATLAB

As indicated earlier, the actual (not asymptotic) Bode plots of systems can be drawn using MATLAB software. The key command is "bode(.)", and it is applied to the system Transfer Function model or the State-Variable Matrix model. A few sample algorithms are presented here to illustrate the procedure involved.

6.15.1 A SINGLE PLOT

Consider the transfer function

G(s) = (2s² + 3s)/(s³ + 12s² + 7s + 11).

The MATLAB code is:

num = [2 3 0];
den = [1 12 7 11];
bode(num,den)
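For readers without MATLAB, SciPy offers a comparable workflow. The sketch below (a Python equivalent, not from the text) computes the same frequency response; note that scipy.signal.bode returns the magnitude in dB and the phase in degrees:

```python
import numpy as np
from scipy import signal

# G(s) = (2s^2 + 3s)/(s^3 + 12s^2 + 7s + 11), as in the MATLAB example above
num = [2, 3, 0]
den = [1, 12, 7, 11]
G = signal.TransferFunction(num, den)
w, mag_db, phase_deg = signal.bode(G, np.array([0.01, 1.0, 100.0]))
```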

6.15.2 SEVERAL PLOTS ON THE SAME CURVE

Consider the transfer function

G(s) = 1 / [100 s (1 + s/10 + s²/100)],

which consists of the three factors 1/100, 1/(1 + s/10 + s²/100), and 1/s. The MATLAB code is:

num1 = [1]; den1 = [100];
num2 = [1]; den2 = [1/100 1/10 1];
num3 = [1]; den3 = [1 0];
sys1 = tf(num1,den1);
sys2 = tf(num2,den2);
sys3 = tf(num3,den3);
bode(sys1,sys2,sys3)

Figure 6.26 Bode plots for a time delay function.


Figure 6.27 Impact of time delay on the phase Bode plot for (s + 1)/{s(s + 100)}.

6.15.3 SYSTEM IN STATE-SPACE

Consider a system with the following state-variable matrices:

A = [0 1; −24.5 −4.5], B = [0; 24.5], C = [1 0], D = [0].

The MATLAB code is:

A = [0 1;-24.5 -4.5];
B = [0;24.5];
C = [1 0];
D = [0];
bode(A,B,C,D)
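The state-space case has a direct SciPy analogue as well (again a Python equivalent, not part of the original text):

```python
import numpy as np
from scipy import signal

# Same A, B, C, D matrices as the MATLAB example above
sys = signal.StateSpace([[0, 1], [-24.5, -4.5]], [[0], [24.5]], [[1, 0]], [[0]])
w, mag_db, phase_deg = signal.bode(sys, np.logspace(-2, 2, 200))
# The DC gain of this system is C(-A)^(-1)B = 1, i.e. 0 dB at low frequency.
```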


Figure 6.28 Final (with time delay) phase Bode plot for (s + 1)/{s(s + 100)}.

6.16 MODELS FROM EXPERIMENTAL FREQUENCY DATA

Bode plots can be used to derive system models, or to verify models obtained by other methods, using experimental frequency data. The model obtained is in the form of a transfer function deduced directly from the frequency response. The issues and methods involved in the derivation of dynamic models from experimental data were covered in Chapter 2. Sometimes the only practical or feasible way to obtain a model is to construct an approximate one from frequency-response data. It is relatively easy to obtain the frequency response of a system experimentally. The procedure is to introduce a sinusoidal input and then measure the gain (logarithm of the amplitude ratio of output to input) and the phase difference between output and input. The curves plotted from this data then constitute the system model. Using the methods given in previous sections, one can derive the model directly from this information. Model verification using experimental data is accomplished by extracting an approximate transfer function from the plots by fitting straight lines to the data and estimating breakpoints (cut-off frequencies), that is, finding the poles and zeros. General plots of complex pole factors with different damping ratios (Figure 6.8) are used to estimate the damping ratios of complex factors from the frequency overshoot. The model obtained from the data is then compared with the model derived by other methods for the purpose of model validation.
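As a minimal illustration of the fitting step (synthetic gain data and an assumed first-order structure are used here, not taken from the text), the low-frequency asymptote gives the gain and the −3 dB point gives the breakpoint:

```python
import numpy as np

# Synthetic "measured" gain data for an assumed first-order plant
# G(s) = K/(1 + s/wb); in practice the gain comes from sine-sweep tests.
K_true, wb_true = 2.0, 10.0
w = np.logspace(-1, 3, 400)
gain_db = 20 * np.log10(K_true / np.sqrt(1 + (w / wb_true) ** 2))

# Fit the asymptotes: the low-frequency level gives K, the -3 dB point gives wb
K_est = 10 ** (gain_db[0] / 20)
wb_est = w[np.argmin(np.abs(gain_db - (gain_db[0] - 3.0)))]
```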


Figure 6.29 Impact of time delay on the phase Bode plot for (s + 1)/{s(s + 10)}.

6.17 COMPENSATION

Compensation is the addition of dynamic elements to a feedback control system to improve its stability and error characteristics. The basic elements of such feedback include proportional (P), derivative (D), and integral (I) terms. There are two kinds of dynamic compensation: the lead network, which approximates proportional-derivative (PD) feedback, and the lag network, which approximates proportional-integral (PI) control. In this section, compensation is discussed in terms of frequency-response characteristics. In many cases, the compensation will be implemented using a microprocessor. Techniques for converting the continuous compensation D(s) into a form that can be coded in the computer will be discussed under Digital Control Systems, in Chapter 8. The frequency-response stability analysis to this point has considered the closed-loop system to have the characteristic equation 1 + KG(s) = 0. With the introduction of compensation, the closed-loop characteristic equation becomes 1 + D(s)G(s) = 0, where D(s) is the transfer function of a PI, PD or PID controller.

6.17.1 PD AND LEAD COMPENSATORS

The PD and lead compensators are closely related: the latter is the product of a simple modification of the former.


Figure 6.30 Final (with time delay) phase Bode plot for (s + 1)/{s(s + 10)}.

6.17.1.1 The PD Compensator

The PD compensator is derived from the general PD controller as follows:

D(s) = KP + KD s
     = KP(1 + (KD/KP)s)
     = KP(1 + TD s)
     = KP(1 + s/(1/TD))

This is the standard and useful form of the PD compensator, where 1/TD is the cut-off frequency. It is important to note that the time constant of the PD compensator is given by TD = KD/KP. It is instructive to note that the PD compensator has the same structure as the standard PD controller.
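The breakpoint behaviour of this form is easy to tabulate. The Python sketch below uses assumed illustrative gains KP and KD (not values from the text):

```python
import numpy as np

# PD compensator D(s) = KP*(1 + TD*s); the gains below are illustrative only
KP, KD = 2.0, 0.5
TD = KD / KP       # compensator time constant
wc = 1.0 / TD      # cut-off (breakpoint) frequency

w = np.array([wc / 100, wc, wc * 100])
D = KP * (1 + 1j * w * TD)
mag_db = 20 * np.log10(np.abs(D))
phase_deg = np.degrees(np.angle(D))
# Phase is near 0 well below wc, +45 deg at wc, and approaches +90 deg above it,
# while the magnitude keeps rising at +20 dB/decade (the noise-amplification issue).
```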


The PD compensator has a stabilising effect on the root locus. This is supported by the increase in phase and the +20 dB/decade slope at frequencies above the breakpoint 1/TD. The gain keeps increasing with higher frequencies. This is undesirable as it will amplify high-frequency noise (a typical differentiation effect). The PD controller increases the system bandwidth, which makes its response faster so that it can perform the corrective actions needed to stabilise the system [11].

6.17.1.2 The Lead Compensator

For the PD compensator, the increases in phase (bandwidth) and in gain at the crossover frequency are desirable. However, the increased gain at higher frequencies is not. To reduce the high-frequency (HF) amplification of the PD controller, a first-order pole is added at a frequency much higher than the breakpoint of the PD controller, 1/TD. This produces the lead compensator, which is obtained by modifying the PD compensator to alleviate the problem of high-frequency amplification. This is done by adding a first-order pole at s = −1/(αTD) as follows:

D(s) = KP(1 + TD s) = KP(1 + s/(1/TD))

Adding the pole gives

D(s) = KP(1 + s/(1/TD)) / (1 + s/(1/(αTD)))
     = KP(1 + TD s)/(1 + αTD s)
     = KP (TD s + 1)/(αTD s + 1)

This is the lead compensator. It is important to note that α is the ratio between the zero and the pole; put differently, it is the ratio between the zero and pole breakpoint frequencies.
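A useful design fact for lead networks (standard, though not derived in the text) is that the maximum phase lead occurs at ω = 1/(TD√α), with sin φmax = (1 − α)/(1 + α). The sketch below checks this numerically for assumed illustrative values:

```python
import numpy as np

# Lead compensator D(s) = KP*(TD*s + 1)/(alpha*TD*s + 1) with alpha < 1
KP, TD, alpha = 1.0, 1.0, 0.1   # illustrative values, not from the text
w = np.logspace(-2, 2, 2000)
D = KP * (1j * w * TD + 1) / (1j * w * alpha * TD + 1)
phase_deg = np.degrees(np.angle(D))

wm = 1.0 / (TD * np.sqrt(alpha))  # frequency of maximum phase lead
phi_max = np.degrees(np.arcsin((1 - alpha) / (1 + alpha)))  # ~54.9 deg for alpha = 0.1
```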

6.17.2 PI AND LAG COMPENSATORS

The PI and lag compensators are closely related: the latter is the product of a simple modification of the former.

6.17.2.1 The PI Compensator

For many applications, it is essential to keep the bandwidth low and reduce the steady-state error. This is done by using a PI or lag compensator. The PI compensator is derived from the general PI controller as follows:


D(s) = KP + KI/s                 (6.32)
     = KP(1 + 1/(TI s))          (6.33)
     = (KP/s)(s + 1/TI)          (6.34)

This is the standard and useful form of the PI compensator, where 1/TI is the cut-off frequency. It is important to note that the PI compensator time constant is given by TI = KP/KI. It is instructive to note that the structure of the PI compensator is similar to that of the standard PI controller. The PI compensator provides infinite gain at low frequency. This is desirable as it reduces the steady-state error. The problem is the phase decrease at frequencies below the breakpoint, which tends to destabilise the system. This leads to the case for a lag compensator [11].

6.17.2.2 The Lag Compensator

The lag compensator is obtained by modifying the PI compensator to ensure a finite, increased low-frequency gain with enough phase margin for stability. This is done by replacing the pole at s = 0 with a low-frequency finite pole at s = −1/(αTI) as follows:

D(s) = KP + KI/s
     = KP(1 + 1/(TI s))
     = (KP/s)(s + 1/TI)

Replacing the pole at s = 0 gives

D(s) = KP(s + 1/TI)/(s + 1/(αTI))
     = αKP (TI s + 1)/(αTI s + 1)

This is the lag compensator. It is important to note that α is the ratio between the zero and the pole; put differently, it is the ratio between the zero and pole breakpoint frequencies. The key benefit of a lag compensator is the increased low-frequency gain, now finite, for better steady-state behaviour. The lag compensator can also be viewed as reducing the crossover frequency (bandwidth) and thus providing a better phase margin (PM).

6.17.3 GENERIC COMPENSATOR

Both the lead and lag compensators derived above satisfy the format

D(s) = K (s + z)/(s + p)

where z < p for a lead compensator and z > p for a lag compensator. It can be shown that for the derived compensators to satisfy this general format, α < 1 for a lead compensator and α > 1


for a lag compensator. This can be demonstrated as follows. For the lead compensator, it is clear that z = 1/TD and p = 1/(αTD). The condition z < p means:

1/TD < 1/(αTD) ⟹ α < 1 (as expected)

For the lag compensator, it is clear that z = 1/TI and p = 1/(αTI). The condition z > p means:

1/TI > 1/(αTI) ⟹ α > 1 (as expected)

In summary, it is important to note that a PD compensator is converted into a lead compensator by adding a first-order pole at s = −1/(αTD), while a PI compensator is converted into a lag compensator by replacing the pole at s = 0 with a finite low-frequency pole at s = −1/(αTI).

6.17.4 PID COMPENSATOR

The PID compensator is derived from the general PID controller as follows:

D(s) = KP + KI/s + KD s
     = KP(1 + (KI/KP)(1/s) + (KD/KP)s)
     = KP(1 + 1/(TI s) + TD s),

where TD = KD/KP and TI = KP/KI.

Hence, the PID compensator has the same format as what is called the standard PID controller, which is popular in industry. The standard PID controller was discussed in detail in Chapter 4. It can be shown that cascading the PD and PI compensators produces a PID compensator. This is done by starting with the expressions of the PI and PD compensators and then combining them as follows:

PI = KP + KI/s = KP(1 + 1/(TI s)) = (KP/s)(s + 1/TI)

PD = KP + KD s = KP(1 + sTD)


Cascading the two expressions of the PD and PI compensators (assuming different proportional control gains for each) leads to:

D(s) = (KP1/s)(s + 1/TI) × KP2(1 + sTD)
     = KP1KP2 (1 + 1/(TI s))(1 + sTD)
     = KP1KP2 [1 + TD/TI + sTD + 1/(sTI)]
     = KP1KP2 ((TI + TD)/TI) [1 + s(TD TI/(TI + TD)) + 1/(s(TI + TD))]
     = KP′ (1 + 1/(TI′ s) + TD′ s), which is a PID compensator,

where

KP′ = KP1KP2 (TI + TD)/TI
TD′ = TD TI/(TI + TD)
TI′ = TI + TD

Thus the PID compensator has two equivalent forms:

D(s) = (KP1KP2/s)(s + 1/TI)(1 + sTD)      (6.35)
     = KP′ (1 + 1/(TI′ s) + TD′ s),       (6.36)

The first form in Equation 6.35 is more useful and easier to implement as it allows the separate design of the PD and PI compensation effects. It is important to note that these two expressions are equivalent and can be derived from the traditional PI, PD and PID controllers, where the traditional and industrial forms of the PID controller are respectively given by

D(s) = KP + KI/s + KD s
     = KP(1 + 1/(TI s) + TD s),

where TD = KD/KP and TI = KP/KI.

In fact, as indicated earlier, the standard PID controller is the same as the PID compensator. The PID compensator simultaneously improves both the transient behaviour and the steady-state response. The compensation obtained by using the PID compensator approximates the combination of lead and lag compensators, and hence the PID compensator is sometimes referred to as a lead-lag compensator. It provides simultaneous enhancement in transient and steady-state responses.
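The equivalence of the cascade form and the primed PID parameters can be verified numerically. The short Python check below uses assumed example gains (not values from the text):

```python
# Numerical check that cascading the PI and PD compensators reproduces the
# PID form with the primed parameters KP', TD', TI' derived above.
KP1, KP2, TI, TD = 2.0, 3.0, 1.5, 0.4   # assumed illustrative gains

KPp = KP1 * KP2 * (TI + TD) / TI   # KP'
TDp = TD * TI / (TI + TD)          # TD'
TIp = TI + TD                      # TI'

s = 1j * 2.0  # an arbitrary test frequency on the jw axis
cascade = (KP1 / s) * (s + 1.0 / TI) * KP2 * (1.0 + s * TD)  # Equation 6.35 form
pid = KPp * (1.0 + 1.0 / (TIp * s) + TDp * s)                # Equation 6.36 form
```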

6.17.5 LEAD-LAG COMPENSATOR

A proper lead-lag compensator is obtained by combining the lead and lag compensators into a single compensator. This structure seeks to improve the low-frequency gain (reduce steady-state error) and the phase margin at the crossover frequency (transient response). In most cases, rather than designing it in one go, one can design the lead compensator to address one set of requirements and then design a lag compensator to address another set. A proper lead-lag compensator is obtained by considering the lead and lag compensators derived in the preceding sections [11]:

Dlead(s) = KP (TD s + 1)/(αTD s + 1)

Dlag(s) = αKP (TI s + 1)/(αTI s + 1)

Hence a proper lead-lag compensator is obtained by cascading the lead and lag compensators, assuming different proportional control gains and pole-zero ratios for the two. This is achieved as follows:

D(s) = [KP1 (TD s + 1)/(α1 TD s + 1)] × [α2 KP2 (TI s + 1)/(α2 TI s + 1)]
     = α2 KP1 KP2 [(TD s + 1)/(α1 TD s + 1)][(TI s + 1)/(α2 TI s + 1)]

This is the fully fledged generic lead-lag compensator, the ultimate compensator, which simultaneously enhances the transient and steady-state responses.

6.17.6 SUMMARY OF COMPENSATION CHARACTERISTICS

• PD control adds phase lead at all frequencies above the breakpoint. If there is no change in gain on the low-frequency asymptote, PD compensation will increase the crossover frequency and the speed of response. The increase in magnitude of the frequency response at the higher frequencies will increase the system's sensitivity to noise.

• Lead compensation adds phase lead at a frequency band between the two breakpoints, usually selected to bracket the crossover frequency. If there is no change in gain on the low-frequency asymptote, lead compensation will increase both the crossover frequency and the speed of response over the uncompensated system. If K is selected so that the low-frequency magnitude is unchanged, then the steady-state errors of the system will increase.

• PI control increases the frequency-response magnitude at frequencies below the breakpoint, decreasing steady-state errors. It also contributes phase lag below the breakpoint, which must be kept at a low frequency to avoid excessively degrading the stability.

• Lag compensation increases the frequency-response magnitude at frequencies below the two breakpoints, decreasing steady-state errors. Alternatively, with suitable adjustments in K, lag compensation can be used to decrease the frequency-response magnitude at frequencies above the two breakpoints so that the crossover frequency ωc yields an acceptable phase margin. Lag compensation also contributes phase lag between the two breakpoints, which must be kept at low frequencies to keep the phase decrease from degrading the PM excessively.

PROBLEMS

Problem 6.1 Draw Bode plots for a sixth-order Pade approximation to a pure delay of 1 second.


Problem 6.2 Sketch the polar plot of the frequency response for the following transfer functions:

(a) KG(s) = 1/[(1 + 0.5s)(1 + 2s)]
(b) KG(s) = (1 + 0.5s)/s²
(c) KG(s) = (s + 10)/(s² + 6s + 10)
(d) KG(s) = 30(s + 8)/[s(s + 2)(s + 4)]

Problem 6.3 A rejection network that can be utilised instead of the twin-T network is the bridged-T network shown below.

Bridged-T network

The transfer function of this network is given by

G(s) = (s² + ωn²)/(s² + 2(ωn/Q)s + ωn²),

where ωn² = 2/(LC), Q = ωn L/R1, and R2 is adjusted so that R2 = (ωn L)²/(4R1). Show that the expression given for G(s) is correct.

Problem 6.4 A control system for controlling the pressure in a closed chamber is shown below, together with its flow graph model.

Pressure Controller


Flow Graph Model

The transfer function for the measuring element is

G(s) = 100/(s² + 15s + 100),

and the transfer function for the valve is

D1(s) = 1/[(0.1s + 1)(s/15 + 1)].

The controller function is given by Dc(s) = s + 1. Obtain the frequency response characteristics for the loop transfer function

(1/s) Dc(s)D1(s)G(s).

Problem 6.5 Autonomous, self-driven vehicles can be used in warehouses, airports, and many other applications. These vehicles follow a wire embedded in the floor and adjust the steerable front wheel to maintain proper direction. The sensing coils, mounted on the front wheel assembly, detect an error in the direction of travel and adjust the steering. The overall control system open-loop transfer function is

DG(s) = K/[s(s + π)²] = Kv/[s(s/π + 1)²].

It is desired that the bandwidth of the closed-loop system exceed 2π rad/s.

(a) Set Kv = 2π and sketch the Bode diagram.
(b) Using the Bode diagram, obtain the logarithmic-magnitude vs. phase angle curve.

Problem 6.6 (a) Calculate the magnitude and phase of

G(s) = 1/(s + 1)

for ω = 0.1, 0.2, 0.5, 1, 2, 5, and 10 rad/sec.

(b) Sketch the asymptotes for G(s), and compare these with the computed results from part (a).


Problem 6.7 Draw the Bode plots for each of the following systems. Compare the sketches with the plots obtained using MATLAB.

(a) G(s) = 1/[(s + 1)²(s² + s + 4)]
(b) G(s) = s/[(s + 1)(s + 10)(s² + 5s + 2500)]
(c) G(s) = 4s(s + 10)/[(s + 50)(4s² + 5s + 4)]
(d) G(s) = 10(s + 4)/[s(s + 1)(s² + 2s + 5)]
(e) G(s) = 1,000(s + 1)/[s(s + 2)(s² + 8s + 64)]
(f) G(s) = (s + 5)(s + 3)/[s(s + 1)(s² + s + 4)]
(g) G(s) = 4,000/[s(s + 40)]
(h) G(s) = 100/[s(1 + 0.1s)(1 + 0.5s)]
(i) G(s) = 1/[s(1 + s)(1 + 0.02s)]

Problem 6.8 Consider the following system transfer function:

G(s) = [(3/2)s + (12/8)] / [(6/4)s² + 150s].

(a) Show that the function can be broken into four simple factors: a constant, a zero, a pole, and an integrator.
(b) Write down the cut-off frequencies of the zero and pole factors. What is the significance of these frequencies?
(c) Draw the Bode plots of these four factors separately.
(d) Obtain the two overall Bode plots (magnitude and phase angle) for the system.
(e) Derive and draw the Bode plots for a pure time delay function represented by f(t − τ).
(f) If the above system has a time delay of one second, how does this affect its Bode plots? Illustrate your answer.

Problem 6.9 A certain system is represented by the asymptotic Bode diagram shown below. Find and sketch the response of this system to a unit step input (assuming zero initial conditions).


Problem 6.10 Prove that the magnitude slope of −1 in the Bode plot corresponds to −20 dB per decade.

(a) Sketch the polar plot for an open-loop system with transfer function 1/s², that is, sketch 1/s² evaluated along s = C1, where C1 is a contour enclosing the entire RHP. (Hint: assume C1 takes a small detour around the poles at s = 0.)
(b) Repeat part (a) for an open-loop system whose transfer function is G(s) = 1/(s² + ωo²).

Problem 6.11 Draw a Nyquist diagram for each of the following systems and compare the result with that obtained using the MATLAB command nyquist.

(a) KG(s) = K(s + 2)/(s + 10)
(b) KG(s) = K/[(s + 10)(s + 2)²]
(c) KG(s) = K(s + 10)(s + 1)/[(s + 100)(s + 2)³]

Using the plots, estimate the range of K for which each system is stable, and qualitatively verify the result using a root locus plot (generated by hand or using MATLAB).

Problem 6.12 Draw a Nyquist diagram for each of the following systems, and compare the result with that obtained using the MATLAB command nyquist.

(a) KG(s) = K(s + 1)/(s + 2)
(b) KG(s) = K/[(s + 1)(s + 2)²]
(c) KG(s) = K(s + 4)(s + 1)/[s(s + 2)⁴]


Problem 6.13 The forward-path transfer function of a unity feedback control system is

G(s) = K/[s(s + 6.54)].

Analytically, find the resonant peak Mpeak, resonant frequency ωr, and bandwidth BW of the closed-loop system for the following values of K:

(a) K = 5
(b) K = 21.39
(c) K = 100

Problem 6.14 Use MATLAB to find the resonant peak Mpeak, resonant frequency ωr, and bandwidth BW of the following unity feedback control systems. Make sure that the systems are stable.

(a) G(s) = 5/[s(1 + 0.5s)(1 + 0.1s)]
(b) G(s) = 10(s + 1)/[s(s + 2)(s + 10)]
(c) G(s) = 0.5/[s(s² + s + 1)]
(d) G(s) = 100e^(−s)/[s(s² + 5s + 5)]

7 State-Space Design Methods

7.1 INTRODUCTION

The state-space approach has often been referred to as modern control system design, as opposed to classical control system design (transfer-function-based techniques such as root locus and frequency-response methods). This chapter discusses state-space analysis and design methods for a broad range of dynamic systems. Chapter 2 introduced state-space analysis by modelling dynamic systems using matrices of coefficients of state variables. This produced the State-Variable Matrix model, or State-Space model, which lays the basis of state-space design and analysis. As explained in Chapter 2, a state-space representation is one in which a dynamic system is described directly from its differential equations. The framework can be used effectively for modelling Large Scale Systems (LSS), Multi-Input Multi-Output (MIMO) systems and nonlinear systems. Because of the approach's extensive use of matrices, it lends itself to easy computer implementation and computer-based design. State-space methods use state variables, i.e., the analysis and the design are carried out in the state-space. The techniques involve designing dynamic compensation by working directly with the state-variable descriptions of dynamic systems. These methods are simpler because they deal directly with the system states, which appear in simple first-order differential equations. Further advantages of state-space methods include the ability to study more general models, the use of ideas from geometry in differential equations, and connections between internal and external descriptions. As already indicated, state-space methods are easily applied to MIMO systems and implemented using computer software such as MATLAB.

7.2 DEFINITIONS

The state-space description of a linear system was previously discussed under the modelling of dynamic systems in Chapter 2, where the representation was called the State-Variable Matrix Form model. Hence this description is also called the State-Space model. As outlined in Chapter 2, it involves four system matrices (or vectors): the input control signal vector (or scalar) u(t), the state vector x(t) as well as its time derivative ẋ(t), and the output vector y(t). Hence the State-Space model of a dynamic system is represented as follows:

ẋ = Ax + Bu
y = Cx + Du

The parameters and variables are defined as follows: A represents the system matrix, B the input matrix, C the output matrix, and D the feed-forward matrix; x(t) represents the state vector, u(t) the input or control vector, and y(t) the output vector.

DOI: 10.1201/9781032718880-7


7.3 BLOCK DIAGRAM AND THE TRANSFER FUNCTION

7.3.1 STATE-SPACE DESCRIPTION AND THE BLOCK DIAGRAM

The state-space description of a linear system was previously discussed under the modelling of dynamic systems (Chapter 2). It involves four system matrices (or vectors): the input control signal vector (or scalar) u, the state vector x as well as its time derivative x˙ , and the output vector y. The general form is given by x˙ = Ax + Bu

(7.1)

y = Cx + Du.

(7.2)

The system state x is an n-vector, where n is the number of states in the system, and u is an m-vector of control inputs. The output of the system is represented by an l-vector y. The system matrices A, B, C and D are known, respectively, as the plant (or, generally, system) matrix, input matrix, output matrix and feed-through matrix. The plant matrix A is an n × n square matrix, while the input matrix B is an n × m matrix. Matrices C and D have dimensions l × n and l × m, respectively. At present, only linear time-invariant (LTI) systems are considered; hence, these system matrices are constant. The corresponding block diagram for this system is shown in Figure 7.1. As illustrated in Figure 7.1, the physical significance of the different matrices can easily be seen: the feed-through matrix D feeds part of the input control signal u forward to the output signal y; the output matrix C transforms the system states x into the output signals y; and the input matrix B transforms the input control signal u into the derivative of the system states ẋ. Finally, the plant matrix A (or system matrix), which represents the plant dynamics, transforms the state vector x into its time derivative ẋ. Figure 7.1 does not include the reference input that is necessary in a control system. Control systems in which there is no reference input r(t) are known as regulators. Usually, the state, control input, and output vectors are functions of time. Therefore, the state-space representation for a single-input single-output (SISO) system can be given as

ẋ(t) = Ax(t) + Bu(t) (7.3)
y(t) = Cx(t) + Du(t), (7.4)

where in this case (SISO) the input and output are the scalar quantities u(t) and y(t), respectively. The input matrix B and the feedforward matrix D become a column matrix (an n-vector) and a scalar constant, respectively, whereas the output matrix C is a row matrix (the transpose of an n-vector). In MATLAB, the system matrices A, B, C, D can be obtained from the system transfer function by the command "tf2ss". However, as will be discussed later, MATLAB returns system matrices

Figure 7.1 Block diagram of a system in state-space.


Design and Analysis of Control Systems

in one of the canonical forms, which might differ from the result of a direct transformation of the transfer function to the State-Variable Matrix model. The MATLAB syntax for “tf2ss” is [A, B, C, D] = tf2ss(num, den), where num is a row matrix of the coefficients of the numerator of the transfer function in descending powers of s. For MIMO systems, num has as many rows as there are outputs in the output vector y; for a SISO system it is a row vector whose elements are the coefficients of the descending powers of s in the numerator of G(s). The vector den contains the coefficients of the denominator of the transfer function, also in descending powers of s.

Example 7.1 The dynamics of a hypothetical system can be presented by a third-order linear ordinary differential equation as

d³x/dt³ + 4 d²x/dt² + 3 dx/dt + 5x − 9 = 0.

If the system output is linearly dependent on the parameter x such that y = x, transform the system differential equation into the corresponding State-Variable Matrix model. Do the same problem using MATLAB. (Hint: Assume a control step input of strength 9.)

Solution 7.1 Letting x1 = x, x2 = x˙, x3 = x¨, it follows that

x˙1 = x2
x˙2 = x3
x˙3 = d³x/dt³ = −5x1 − 3x2 − 4x3 + 9
y = x1.

In State-Variable Matrix model, this becomes

[x˙1; x˙2; x˙3] = [0 1 0; 0 0 1; −5 −3 −4][x1; x2; x3] + [0; 0; 1] 9

y = [1 0 0][x1; x2; x3] + [0] 9.

Therefore, the system matrices become

A = [0 1 0; 0 0 1; −5 −3 −4], B = [0; 0; 1], C = [1 0 0], D = [0].

To get the state-space matrices using MATLAB, the system transfer function is required. The transfer function of this system for a step input of strength 9 can be derived from the equation of the Transfer Function model as discussed in Chapter 2:

T(s) = C(sI − A)⁻¹B + D = 1/(s³ + 4s² + 3s + 5).


State-Space Design Methods

As such, the MATLAB code for this problem becomes

den = [1 4 3 5];
num = [0 0 1];
[A, B, C, D] = tf2ss(num, den)

The resulting system matrices are

A = [−4 −3 −5; 1 0 0; 0 1 0], B = [1; 0; 0], C = [0 0 1], D = [0].

Although structurally there is a difference between the MATLAB results and the results presented above, the two represent the same information. The structural difference arises because MATLAB presents the state vector starting with the higher-order element of the state, in the form

x = [x3; x2; x1],

whereas the results of the computation above are presented starting with the lower-order element of the state vector, resulting in

x = [x1; x2; x3].

Since these differences occur, the reader is reminded to exercise caution in arranging the state vector when using MATLAB in cases like the one discussed above.
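Since MATLAB may not be to hand, the same conversion can be cross-checked with SciPy's `tf2ss`, used here as an assumed Python stand-in for the MATLAB command; like MATLAB, it returns the controller canonical (companion) form with the state ordered from the higher-order element downwards:

```python
import numpy as np
from scipy.signal import tf2ss

# Transfer function from Example 7.1: T(s) = 1 / (s^3 + 4s^2 + 3s + 5)
num = [0, 0, 1]
den = [1, 4, 3, 5]

A, B, C, D = tf2ss(num, den)

# Companion form: first row of A carries the denominator coefficients.
print(A)             # [[-4. -3. -5.]
                     #  [ 1.  0.  0.]
                     #  [ 0.  1.  0.]]
print(B.ravel(), C)  # [1. 0. 0.] [[0. 0. 1.]]
```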

7.3.2 TRANSFER FUNCTION MODEL: A REVISIT

As explained in Chapter 2, sometimes it is necessary to move from the state space to the frequency domain (Laplace domain). This is accomplished by converting the State-Space model into the Transfer Function model. The transfer function of the continuous dynamic system was derived in Chapter 2. That derivation is reviewed here for clarity and completeness. Consider a dynamic system with the following State-Space model:

x˙(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t).

Taking Laplace transforms of both these equations (with zero initial conditions) leads to

sX(s) = AX(s) + BU(s)    (7.5)
Y(s) = CX(s) + DU(s).    (7.6)

From Equation 7.5 it can be seen that

[sI − A]X(s) = BU(s) =⇒ X(s) = (sI − A)⁻¹BU(s),

which, when used in Equation 7.6, leads to

Y(s) = C(sI − A)⁻¹BU(s) + DU(s) = [C(sI − A)⁻¹B + D]U(s)

T(s) = Y(s)/U(s) =⇒ T(s) = C(sI − A)⁻¹B + D.


This is the transfer matrix of a dynamic (continuous-time) system. For SISO systems, where B is a column vector and C is a row vector, this transfer matrix becomes a scalar function, known as the Transfer Function model of the dynamic system. It is important to note that the transfer function does not depend on the initial condition x(to); hence it can easily be extracted by setting x(to) = 0, as has just been done while applying Laplace transforms. Furthermore, for any such SISO system, the transfer function is unique: while the State-Space model of a dynamic system is not unique, there is only one transfer function. In the expression for the transfer function, the matrix (sI − A)⁻¹ has special importance in state-space analysis and is known as the system Resolvent Matrix Φ(s). It will be shown later that the Resolvent Matrix is the Laplace transform of the Transition Matrix Φ(t), which represents the natural dynamics of the system. As explained in Chapter 2, this inverse matrix (Resolvent Matrix) is given by

Φ(s) = (sI − A)⁻¹ = adj(sI − A)/|sI − A|,    (7.7)

where adj(sI − A) refers to the adjoint (or adjugate) of matrix (sI − A), which means the transpose of the matrix of the cofactors of (sI − A). Hence, the transfer function can be written as

T(s) = Y(s)/U(s) = (C[adj(sI − A)]B + D|sI − A|)/|sI − A|.    (7.8)

From this transfer function (Equation 7.8), the characteristic equation is obtained by setting the denominator equal to zero, i.e.,

|sI − A| = 0.    (7.9)

7.3.2.1 System Poles and Zeros

As can be seen, the characteristic equation of the system equals the algebraic characteristic equation of the matrix A, whose roots are the eigenvalues of A. Therefore, since the poles of a system are the roots of its characteristic equation, it follows that the poles of the system in question are just the eigenvalues of the plant (system) matrix. The zeros of the system are determined from the numerator of the transfer function. Since for a SISO system B is a column matrix, C is a row matrix and D is a scalar, the numerator is a scalar polynomial in s, the roots of which are the zeros of the system in question. Thus, to get the zeros of the system, the solution of the following equation must be determined:

C[adj(sI − A)]B + D|sI − A| = 0.    (7.10)
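The pole–eigenvalue equivalence is easy to confirm numerically; a small sketch using NumPy (an assumed substitute for the MATLAB workflow), with the companion matrix from Example 7.1:

```python
import numpy as np

# Plant matrix from Example 7.1 (characteristic polynomial s^3 + 4s^2 + 3s + 5)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-5, -3, -4]])

char_poly = np.poly(A)          # coefficients of |sI - A|, highest power first
poles = np.roots(char_poly)     # roots of the characteristic equation
eigs = np.linalg.eigvals(A)     # eigenvalues of the plant matrix

# The system poles coincide with the eigenvalues of A.
print(np.sort_complex(poles))
print(np.sort_complex(eigs))
```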

In MATLAB, the transfer function of a system can be derived from its state-variable presentation by the command “ss2tf.” This returns two vectors that contain the coefficients of the numerator and denominator of the transfer function, respectively. Since the command has been designed to handle both MIMO and SISO systems, it becomes necessary to specify the input for which the transfer function is required. However, if the input is not specified, MATLAB sets the default input. The syntax is [num, den] = ss2tf(A, B, C, D, iu), where A, B, C and D are the system matrices and iu is an integer between 1 and n specifying the input for which the transfer function is required. The poles and zeros of the system can also be determined in MATLAB by using the command “ss2zp.” This returns a matrix of the zeros zi of the transfer functions, where each column contains the zeros corresponding to one output yi, together with two vectors: one containing the poles pi of the system, and the other containing the gains ki of each numerator transfer function. For SISO systems, however, the zeros will be returned in a column vector instead of a matrix. The transfer function in terms of zeros and poles is given by

T(s) = k (s − z1)(s − z2) · · · (s − zn−1)(s − zn) / (s − p1)(s − p2) · · · (s − pm−1)(s − pm).    (7.11)

The syntax for the command “ss2zp” is [z, p, k] = ss2zp(A, B, C, D, iu), where iu represents the single input in the system for which the poles and zeros are sought. In general, the use of this command is the same as that of “ss2tf” discussed previously. In Chapter 2, several examples demonstrating the calculation of the Transfer Function model from state space (using Equation 7.8) were discussed. A few more are presented here.

Example 7.2 Given the following State-Space model, derive the Transfer Function model.

x˙ = [2 −3 −8; 0 5 3; −3 −5 −4] x + [1; 4; 6] r

y = [1 3 6] x

Solution 7.2 It is imperative to note that the Transfer Function model is obtained on the premise that:

x˙ = Ax + Bu
y = Cx + Du
=⇒ T(s) = C(sI − A)⁻¹B + D

A couple of steps then become necessary:

sI − A = [s−2 3 8; 0 s−5 −3; 3 5 s+4]

The key task is to find (sI − A)⁻¹. The determinant is given by

det(sI − A) = (s−2) det[s−5 −3; 5 s+4] − 3 det[0 −3; 3 s+4] + 8 det[0 s−5; 3 5]

=⇒ det(sI − A) = s³ − 3s² − 27s + 103

Using cofactors to find the adjoint of (sI − A), it follows that:

adj(sI − A) = [s² − s − 5, 28 − 3s, 31 − 8s; −9, s² + 2s − 32, 3s − 6; 15 − 3s, 19 − 5s, s² − 7s + 10]

We can now compute the transfer function as follows:

T(s) = C(sI − A)⁻¹B + D

= [1 3 6] × (1/(s³ − 3s² − 27s + 103)) × [s² − s − 5, 28 − 3s, 31 − 8s; −9, s² + 2s − 32, 3s − 6; 15 − 3s, 19 − 5s, s² − 7s + 10] × [1; 4; 6] + 0

= (49s² − 373s + 680)/(s³ − 3s² − 27s + 103)
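Because this kind of adjoint algebra is error-prone by hand, it is worth cross-checking with SciPy's `ss2tf` (the Python counterpart of the MATLAB command described above; NumPy/SciPy are assumed stand-ins here):

```python
import numpy as np
from scipy.signal import ss2tf

# State-Space model from Example 7.2
A = np.array([[2, -3, -8],
              [0, 5, 3],
              [-3, -5, -4]])
B = np.array([[1], [4], [6]])
C = np.array([[1, 3, 6]])
D = np.array([[0]])

num, den = ss2tf(A, B, C, D)
# num: coefficients of 49s^2 - 373s + 680 (with a leading zero pad)
# den: coefficients of s^3 - 3s^2 - 27s + 103
print(np.round(num, 6))
print(np.round(den, 6))
```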

This is the Transfer Function model of the dynamic system.

Example 7.3 Given the following State-Space model, derive the Transfer Function model.

x˙ = [3 −5 2; 1 −8 7; −3 −6 2] x + [5; −3; 2] r

y = [1 −4 3] x

Solution 7.3 The following is imperative to note:

x˙ = Ax + Bu
y = Cx + Du
T(s) = C(sI − A)⁻¹B + D

Evaluate (sI − A):

sI − A = [s−3 5 −2; −1 s+8 −7; 3 6 s−2]

Now find (sI − A)⁻¹. We can start with finding the determinant:

det(sI − A) = (s−3) det[s+8 −7; 6 s−2] − 5 det[−1 −7; 3 s−2] + (−2) det[−1 s+8; 3 6]

This simplifies to

det(sI − A) = s³ + 3s² + 19s − 133

Finding the adjoint using cofactors, the following is obtained:

adj(sI − A) = [s² + 6s + 26, −2 − 5s, 2s − 19; s − 23, s² − 5s + 12, 7s − 19; −30 − 3s, 33 − 6s, s² + 5s − 19]


We can now compute the following:

T(s) = C(sI − A)⁻¹B + D

= [1 −4 3] × (1/(s³ + 3s² + 19s − 133)) × [s² + 6s + 26, −2 − 5s, 2s − 19; s − 23, s² − 5s + 12, 7s − 19; −30 − 3s, 33 − 6s, s² + 5s − 19] × [5; −3; 2] + 0

This produces

T(s) = (23s² − 48s − 7)/(s³ + 3s² + 19s − 133)

7.4 SYSTEM RESPONSE BY STATE-TRANSITION MATRIX

Having modelled a dynamic system in state space, the system response must be obtained. As discussed in Chapter 3, this is also called solving the model, and it involves using the mathematical model to determine certain features of the system's cause-and-effect relationships. Once the system response is obtained, the function of a controller in a control system is to influence the system response or behaviour of the plant. There are two techniques for finding the system response of dynamic systems modelled in state space – direct calculation of the State-Transition Matrix, and use of Laplace transforms to obtain the State-Transition Matrix. Direct calculation of the State-Transition Matrix can be achieved in three ways – by directly solving the differential equations involved in the State-Space model, by diagonalisation, or by using the Cayley-Hamilton theorem. More broadly, three main mathematical approaches are used to obtain the system response: direct solution of the differential equations in the time domain, use of the Laplace transform to solve the differential equations in the frequency domain, and deduction of the system behaviour from the system transfer function. For linear time-invariant (LTI) systems, the state-space description given by Equation 7.3,

x˙(t) = Ax(t) + Bu(t),

is a system of first-order linear ordinary differential equations in the system states, in which the control input is regarded as a forcing function. The time response of such a system (without feed-through) depends on the output matrix C and the system state vector x(t) such that

y(t) = Cx(t).    (7.12)

Since for LTI (linear time-invariant) systems C is constant, the system response depends entirely on the state vector. Therefore, in the analysis of the time response of such a system, one is interested in obtaining the time function of the state vector. There are several analytical techniques for determining the system time response, but this chapter will only focus on two methods, as described in the following sections.

7.4.1 DIRECT SOLUTION OF THE DIFFERENTIAL EQUATION

A system of equations in state-space form has an analytical solution analogous to that of a first-order differential equation of the form

dx(t)/dt = ax(t) + bu(t).    (7.13)

This equation can be rewritten as

dx(t)/dt − ax(t) = bu(t),


where u(t) is known as the forcing function. In the state-space case, the description matrices replace the coefficients in the solution form. The solution of such an equation has two parts: the homogeneous part, which assumes that the forcing function u(t) is zero, and the nonhomogeneous part, which takes into account the effect of the forcing function. The general solution is obtained using the superposition principle to sum the two parts of the solution. The particular solution is then obtained by considering the given initial conditions. The general form of the homogeneous part is

xhom = Ce^(at).    (7.14)

The nonhomogeneous part can be obtained by the method of variation of parameters, where

xnonhom = C(t)e^(at)    (7.15)

so that

dx(t)/dt = C′(t)e^(at) + aC(t)e^(at),

which gives

C(t) = ∫₀ᵗ e^(−aτ) bu(τ) dτ.    (7.16)

The independent time variable t in the integral of Equation 7.16 has been changed to τ to distinguish it from the t in the upper limit of the integration. Combining these results gives the nonhomogeneous solution as

xnonhom = e^(at) ∫₀ᵗ e^(−aτ) bu(τ) dτ = ∫₀ᵗ e^(a(t−τ)) bu(τ) dτ.    (7.17)

The general solution to the given first-order equation, which is the sum of the homogeneous and nonhomogeneous parts of the solution, becomes

x(t) = Ce^(at) + ∫₀ᵗ e^(a(t−τ)) bu(τ) dτ.    (7.18)

If the initial condition at t = 0 is x(to), then the constant C is found to be C = x(to). This gives the particular solution as

x(t) = x(to)e^(at) + ∫₀ᵗ e^(a(t−τ)) bu(τ) dτ.    (7.19)

This is the scalar solution to the differential equation, and it combines the homogeneous and nonhomogeneous parts.
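The scalar solution x(t) = x(to)e^(at) + ∫₀ᵗ e^(a(t−τ))bu(τ)dτ can be sanity-checked against a numerical integration; a minimal sketch with assumed values a = −2, b = 1, x(0) = 1 and a unit-step input:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, x0, u = -2.0, 1.0, 1.0, 1.0     # assumed illustrative values

def closed_form(t):
    # For a constant input u the convolution integral evaluates to
    # (b*u/(-a)) * (1 - e^{at}), giving the full particular solution.
    return x0 * np.exp(a * t) + (b * u / -a) * (1 - np.exp(a * t))

# Integrate dx/dt = a x + b u numerically from the same initial condition
sol = solve_ivp(lambda t, x: a * x + b * u, (0.0, 3.0), [x0],
                t_eval=[3.0], rtol=1e-10, atol=1e-12)

print(closed_form(3.0), sol.y[0, -1])  # the two values agree closely
```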

7.4.2 DIRECT STATE-TRANSITION MATRIX METHOD

The state-space solution can also be obtained in a form similar to Equation 7.19. Before proceeding, it is worth recalling that the exponential e^(at) can be expressed as the power series

e^(at) = 1 + at + (1/2!)(at)² + (1/3!)(at)³ + (1/4!)(at)⁴ + (1/5!)(at)⁵ + · · ·    (7.20)

Analogous to the scalar expansion is the matrix exponential e^(At), which is defined as

e^(At) = I + At + (1/2!)(At)² + (1/3!)(At)³ + (1/4!)(At)⁴ + (1/5!)(At)⁵ + · · · ,    (7.21)


where A is a square matrix and I is an identity matrix. With this information in mind, attention is paid to the state Equation 7.3,

x˙(t) = Ax(t) + Bu(t),

whose homogeneous form is

dx(t)/dt = Ax(t)    (7.22)

or dx(t) = Ax(t) dt, which gives

x(t)hom = x(to) + A ∫₀ᵗ x(τ) dτ.    (7.23)

Since x(t) appears on both sides of Equation 7.23, the equation can be substituted into itself repeatedly (disregarding the subscript hom), resulting in an infinite power series as follows:

x(t)hom = x(to) + A ∫₀ᵗ x(τ) dτ
= x(to) + A ∫₀ᵗ [x(to) + A ∫₀ᵗ x(τ) dτ] dτ
= x(to) + A ∫₀ᵗ [x(to) + A ∫₀ᵗ (x(to) + A ∫₀ᵗ x(τ) dτ) dτ] dτ . . .

Expansion of this series and simplification gives

x(t)hom = [I + At + (1/2!)(At)² + (1/3!)(At)³ + (1/4!)(At)⁴ + (1/5!)(At)⁵ + · · ·] x(to).    (7.24)

As has been shown above, the bracketed power series represents the matrix exponential e^(At); therefore, the homogeneous solution of this state equation can be written as

x(t)hom = e^(At) x(to),    (7.25)

which is analogous to the scalar homogeneous solution shown before. The matrix exponential e^(At) is very important in control theory and is given a special name, the Fundamental Matrix, or more commonly, the State-Transition Matrix, expressed by

Φ(t) = e^(At) = I + At + (1/2!)(At)² + (1/3!)(At)³ + (1/4!)(At)⁴ + (1/5!)(At)⁵ + · · ·    (7.26)

Physically, the State-Transition Matrix transforms the state from the given initial condition at to to another condition at t in the absence of the control effort. It represents the natural dynamics or free response of the system excited by initial conditions only. Normally it is presented with two time limits, as Φ(t, to), which indicates that it transforms the system state at to to another state at t; however, at this stage it is sufficient to show it with only one time argument, as Φ(t). The nonhomogeneous part is again obtained using the method of variation of parameters by letting

x(t)nonhom = e^(At) C(t)    (7.27)

so that

x˙(t)nonhom = Ae^(At) C(t) + e^(At) C′(t),


which, when used in the nonhomogeneous equation, gives the coefficient of the nonhomogeneous solution as

C′(t) = e^(−At) Bu(t)

or

C(t) = ∫₀ᵗ e^(−Aτ) Bu(τ) dτ.    (7.28)

Hence, the nonhomogeneous solution becomes

x(t)nonhom = e^(At) ∫₀ᵗ e^(−Aτ) Bu(τ) dτ = ∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ.    (7.29)

As discussed earlier, the general solution x(t) is a linear combination of the nonhomogeneous and the homogeneous solutions, thus

x(t) = e^(At) x(to) + ∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ    (7.30)

or, by replacing e^(At) with Φ(t), Equation 7.30 becomes

x(t) = Φ(t)x(to) + ∫₀ᵗ Φ(t − τ)Bu(τ) dτ.    (7.31)

The similarity between the previously derived scalar solution and this matrix solution is self-evident, where the former is given by Equation 7.19 as follows:

x(t) = e^(at) x(to) + ∫₀ᵗ e^(a(t−τ)) bu(τ) dτ.

7.4.3 DIAGONALISATION

Since Equation 7.26 for the State-Transition Matrix (Fundamental Matrix) is the sum of an infinite series, it can be used directly only by truncation after a number of terms, which yields just an approximation of the Transition Matrix. The exact computation of e^(At) can be time-consuming, particularly if the dimension of A is very large. One of the simplest ways of computing e^(At) is by diagonalisation of matrix A, so that the exponential can be taken of a diagonal matrix. It is known from linear algebra that if matrix A has real distinct eigenvalues, then there exists a Transformation Matrix P such that

P⁻¹AP = Λ.    (7.32)

More details about eigenvalues and eigenvectors are presented in Section 7.6.5, while matrix diagonalisation is covered in Section 7.6.6. Λ is the diagonal matrix of the eigenvalues of A and satisfies the equation

Λ = diag[λ1(A), λ2(A), λ3(A), . . . , λn(A)],    (7.33)

where λi(A), i = 1, 2, . . . , n, are the eigenvalues of A. Therefore,

A = PΛP⁻¹    (7.34)

so that

e^(At) = e^(PΛP⁻¹t),    (7.35)


which, by the definition of the State-Transition Matrix in Equation 7.26, becomes

e^(At) = I + PΛP⁻¹t + (1/2!)(PΛP⁻¹)²t² + · · · + (1/n!)(PΛP⁻¹)ⁿtⁿ + · · ·
= Σ_{n=0}^{∞} (1/n!)(PΛP⁻¹)ⁿ tⁿ.    (7.36)

But since

(PΛP⁻¹)ⁿ = (PΛP⁻¹)(PΛP⁻¹)(PΛP⁻¹) · · · (PΛP⁻¹) = PΛⁿP⁻¹,

then

e^(At) = P [Σ_{n=0}^{∞} (1/n!)Λⁿtⁿ] P⁻¹,    (7.37)

where

Σ_{n=0}^{∞} (1/n!)Λⁿtⁿ = e^(Λt).    (7.38)

This means the State-Transition Matrix is then given by

Φ(t) = e^(At) = Pe^(Λt)P⁻¹.    (7.39)

Now, if Λ is a diagonal matrix, it can be shown that the matrix exponential e^(Λt) is also a diagonal matrix given by

e^(Λt) = diag[e^(λ1(A)t), e^(λ2(A)t), e^(λ3(A)t), . . . , e^(λn(A)t)].    (7.40)
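The identity of Equation 7.39 can be exercised numerically; a brief sketch with an assumed 2 × 2 matrix having real distinct eigenvalues, comparing Pe^(Λt)P⁻¹ against SciPy's direct matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Assumed illustrative matrix with real distinct eigenvalues -1 and -2
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

lam, P = np.linalg.eig(A)   # eigenvalues and modal matrix (eigenvector columns)
t = 0.7

Phi_diag = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)  # P e^{Lambda t} P^{-1}
Phi_direct = expm(A * t)                                    # series-based e^{At}

print(np.allclose(Phi_diag, Phi_direct))  # True
```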

Example 7.4 Use the diagonalisation method to determine the State-Transition Matrix for the system matrix A given by

A = [6 −2 −7; −2 −3 2; 1 −2 −2]

Solution 7.4 The eigenvalues for this matrix are −1, −3 and 5, with corresponding eigenvectors

v(λ=−1) = [1; 0; 1], v(λ=−3) = [1; 1; 1], v(λ=5) = [−5; 1; −1],

therefore, the diagonal matrix Λ is given by

Λ = [−1 0 0; 0 −3 0; 0 0 5],

and the corresponding Modal Matrix P is given by

P = [1 1 −5; 0 1 1; 1 1 −1].

By using Equation 7.39, it follows that the State-Transition Matrix is given by

Φ(t) = e^(At) = [1 1 −5; 0 1 1; 1 1 −1] [e^(−t) 0 0; 0 e^(−3t) 0; 0 0 e^(5t)] [1 1 −5; 0 1 1; 1 1 −1]⁻¹

= [ (1/4)(−2e^(−t) + e^(−3t) + 5e^(5t)),  −e^(−t) + e^(−3t),  (1/4)(6e^(−t) − e^(−3t) − 5e^(5t));
    (1/4)(e^(−3t) − e^(5t)),  e^(−3t),  (1/4)(−e^(−3t) + e^(5t));
    (1/4)(−2e^(−t) + e^(−3t) + e^(5t)),  −e^(−t) + e^(−3t),  (1/4)(6e^(−t) − e^(−3t) − e^(5t)) ]

7.4.4 SYSTEM RESPONSE BY LAPLACE TRANSFORM METHOD

The approaches discussed in the preceding sections, where the State-Transition Matrix is computed directly (by direct solution of the differential equations or by diagonalisation), are somewhat mechanical and are in some respects termed the classical approach. A simpler approach uses Laplace transforms. This method proceeds by taking Laplace transforms of the standard system state-space equation

x˙(t) = Ax(t) + Bu(t).

This leads to

sx(s) − x(to) = Ax(s) + Bu(s).    (7.41)

This can be rearranged as

(sI − A)x(s) = x(to) + Bu(s)

or

x(s) = (sI − A)⁻¹x(to) + (sI − A)⁻¹Bu(s).    (7.42)

The matrix (sI − A)⁻¹ is the system Resolvent Matrix defined in Equation 7.7, which is the Laplace transform of the State-Transition Matrix Φ(t), Φ(s) = (sI − A)⁻¹. Therefore, the inverse Laplace transform of the Resolvent Matrix gives the State-Transition Matrix as

Φ(t) = L⁻¹[(sI − A)⁻¹].    (7.43)

With this definition, the inverse Laplace transform of Equation 7.42 becomes

x(t) = Φ(t)x(to) + L⁻¹[(sI − A)⁻¹Bu(s)].    (7.44)

Applying the convolution theorem to L⁻¹[(sI − A)⁻¹Bu(s)] produces the complete solution of the equation as

x(t) = Φ(t)x(to) + ∫₀ᵗ Φ(t − τ)Bu(τ) dτ.    (7.45)

This is the same as Equation 7.31, which was obtained by direct computation of the State-Transition Matrix. It is clear that the Laplace transform approach is simpler and more elegant. Since the convolution theorem applies equally well to the two components of the convolution, it follows that

L⁻¹[(sI − A)⁻¹Bu(s)] = ∫₀ᵗ Φ(t − τ)Bu(τ) dτ = ∫₀ᵗ Φ(τ)Bu(t − τ) dτ.    (7.46)


The choice of which term between Φ(t) and u(t) should be subjected to the time delay in the integration depends on the overall computational advantage offered.

Example 7.5 For the system described by the state-space equation

[x˙1(t); x˙2(t); x˙3(t)] = [0 1 0; 0 −2 −5; 0 1 2][x1(t); x2(t); x3(t)] + [0; 1; 0] u(t)

determine its Resolvent Matrix and the State-Transition Matrix using the Laplace transform. If the initial state is

x1(0) = 2, x2(0) = 0, x3(0) = 2,

determine the time response of the states to a step function u(t) = 2, and, if the output equation is

y(t) = [1 1 0][x1(t); x2(t); x3(t)],

determine the output y(t) under these conditions.

Solution 7.5 The Resolvent Matrix was defined in Equation 7.7 as Φ(s) = (sI − A)⁻¹. Therefore, for this system,

Φ(s) = [s −1 0; 0 s+2 5; 0 −1 s−2]⁻¹

= [ 1/s,  (s−2)/(s(s²+1)),  −5/(s(s²+1));
    0,    (s−2)/(s²+1),     −5/(s²+1);
    0,    1/(s²+1),         (s+2)/(s²+1) ].

From the Resolvent Matrix, the State-Transition Matrix can be determined using Equation 7.43 as Φ(t) = L⁻¹[(sI − A)⁻¹], which requires the following element-wise inverse Laplace transforms:


L⁻¹[1/s] = 1

L⁻¹[(s−2)/(s(s²+1))] = L⁻¹[−2/s + (2s + 1)/(s²+1)] = −2 + 2 cos t + sin t

L⁻¹[−5/(s(s²+1))] = L⁻¹[−5/s + 5s/(s²+1)] = −5 + 5 cos t

L⁻¹[(s−2)/(s²+1)] = L⁻¹[s/(s²+1) − 2/(s²+1)] = cos t − 2 sin t

L⁻¹[−5/(s²+1)] = −5 sin t

L⁻¹[1/(s²+1)] = sin t

L⁻¹[(s+2)/(s²+1)] = L⁻¹[s/(s²+1) + 2/(s²+1)] = cos t + 2 sin t.

Therefore, the State-Transition Matrix becomes

Φ(t) = e^(At) = [ 1,  −2 + 2 cos t + sin t,  −5 + 5 cos t;
                  0,  cos t − 2 sin t,        −5 sin t;
                  0,  sin t,                  cos t + 2 sin t ].

This is the State-Transition Matrix. The free response of the system (homogeneous solution) for the given initial condition is given by x(t) = e^(At)x(0). This means

[x1(t); x2(t); x3(t)]hom = [ 1, −2 + 2 cos t + sin t, −5 + 5 cos t;
                             0, cos t − 2 sin t, −5 sin t;
                             0, sin t, cos t + 2 sin t ] [2; 0; 2]

= [−8 + 10 cos t; −10 sin t; 2 cos t + 4 sin t],

and the nonhomogeneous solution is given by

x(t)nonhom = L⁻¹[(sI − A)⁻¹Bu(s)],

where for the step input u(t) = 2 it follows that u(s) = 2/s.


Therefore, the following expression is obtained:

(sI − A)⁻¹Bu(s) = [ 1/s, (s−2)/(s(s²+1)), −5/(s(s²+1));
                    0, (s−2)/(s²+1), −5/(s²+1);
                    0, 1/(s²+1), (s+2)/(s²+1) ] [0; 1; 0] (2/s)

= [ 2(s−2)/(s²(s²+1)); 2(s−2)/(s(s²+1)); 2/(s(s²+1)) ].

Thus, the time response becomes

[x1(t); x2(t); x3(t)]nonhom = [ L⁻¹[2(s−2)/(s²(s²+1))]; L⁻¹[2(s−2)/(s(s²+1))]; L⁻¹[2/(s(s²+1))] ]

= [ 2 − 4t − 2 cos t + 4 sin t; −4 + 2 sin t + 4 cos t; 2 − 2 cos t ].

Therefore, the time response x(t) = x(t)hom + x(t)nonhom becomes

[x1(t); x2(t); x3(t)] = [−8 + 10 cos t; −10 sin t; 2 cos t + 4 sin t] + [2 − 4t − 2 cos t + 4 sin t; −4 + 2 sin t + 4 cos t; 2 − 2 cos t]

= [−6 + 8 cos t − 4t + 4 sin t; −8 sin t − 4 + 4 cos t; 4 sin t + 2].


The output y(t) is measured through the output matrix as

y(t) = [1 1 0][x1(t); x2(t); x3(t)]

= [1 1 0][−6 + 8 cos t − 4t + 4 sin t; −8 sin t − 4 + 4 cos t; 4 sin t + 2]

= −10 + 12 cos t − 4t − 4 sin t.
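The closed-form response derived above can be verified by numerical integration; a sketch using SciPy's `solve_ivp` (an assumed substitute for checking the algebra by hand):

```python
import numpy as np
from scipy.integrate import solve_ivp

# System, step input and initial state from Example 7.5
A = np.array([[0.0, 1.0, 0.0],
              [0.0, -2.0, -5.0],
              [0.0, 1.0, 2.0]])
B = np.array([0.0, 1.0, 0.0])
x0 = [2.0, 0.0, 2.0]
u = 2.0                      # step input of strength 2

t_end = 2.0
sol = solve_ivp(lambda t, x: A @ x + B * u, (0.0, t_end), x0,
                t_eval=[t_end], rtol=1e-10, atol=1e-12)

# Closed-form output y = x1 + x2 obtained above
y_closed = -10 + 12 * np.cos(t_end) - 4 * t_end - 4 * np.sin(t_end)
y_numeric = sol.y[0, -1] + sol.y[1, -1]
print(y_closed, y_numeric)   # the two values agree closely
```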

7.5 SYSTEM CONTROLLABILITY AND OBSERVABILITY

This section summarises the definitions of controllability and observability, and presents examples of the concepts. The implication of pole-zero cancellation (in the derivation of the transfer function) for controllability and observability is also stated and explored through examples.

7.5.1 SUMMARY OF DEFINITIONS

The definitions of controllability and observability of any linear time-invariant control system are summarised as follows:

7.5.1.1 Controllability

If an input to the system can be found that takes every state variable from a desired initial state to a desired final state in finite time, the system is said to be controllable. Put differently, a system is controllable if and only if its closed-loop pole locations can be arbitrarily assigned using state feedback. Controllability is determined by deriving the Controllability Matrix CM:

CM = [B  AB  A²B  A³B  · · ·  Aⁿ⁻¹B]    (7.47)

A system is controllable if the determinant of CM is non-zero. If the system is state-controllable, then a state feedback controller can be designed. Hence, the controllability property plays a critical role in many control problems, such as the stabilisation of unstable systems by feedback or optimal control.

7.5.1.2 Observability

This is the ability to deduce all the state variables from knowledge of the input u(t) and output y(t). Put differently, a system is observable if and only if its outputs are representative of all its modes. Observability is determined by deriving the Observability Matrix OM:

OM = [C; CA; CA²; CA³; . . . ; CAⁿ⁻¹]    (7.48)

A system is observable if det OM is non-zero. If a system is completely state-observable, then a state-space observer can be designed for it. Hence, the observability property is essential in the estimation problem addressed later in this chapter (Section 7.12) and in Chapter 9.
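Both tests reduce to assembling a matrix from the definitions above and checking its rank; a generic sketch in NumPy (the helper names `ctrb` and `obsv` are illustrative, echoing the MATLAB commands of the same name, and the system values are assumed):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Assumed illustrative second-order system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(ctrb(A, B)))  # 2 -> controllable
print(np.linalg.matrix_rank(obsv(A, C)))  # 2 -> observable
```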


In summary, a system is controllable if the input can be designed to take the system from any initial state to any final state, and it is observable if its state can be recovered from its outputs. Hence, controllability and observability are critical in the design of control systems, since together they guarantee the existence of a stabilising observer-based controller. This leads to the definitions of two other important concepts – stabilisability and detectability.

• A linear system (continuous or discrete) is stabilisable if all unstable modes are controllable.
• A linear system (continuous or discrete) is detectable if all unstable modes are observable.

Stabilisability and detectability play very central roles in optimal control theory, covered in Section 7.11.

7.5.2 IMPLICATION OF POLE-ZERO CANCELLATION

In the derivation of the Transfer Function model of any linear time-invariant control system, any pole-zero cancellation has implications for controllability and observability. Consider the general equation of the Transfer Function model:

T(s) = C(sI − A)⁻¹B + D

7.5.2.1 Controllability

A necessary and sufficient condition for controllability is that no pole of the system is cancelled by a zero in all of the elements of the transfer-function matrix [sI − A]⁻¹B. If such cancellation occurs, the system cannot be controlled in the direction of the cancelled mode.

7.5.2.2 Observability

A necessary and sufficient condition for observability is that no pole of the system is cancelled by a zero in all of the elements of the matrix C[sI − A]⁻¹. If such cancellation occurs, the cancelled mode cannot be observed in the output. These two conditions mean that pole-zero cancellation can lead to a system being uncontrollable or unobservable, or both, depending on the location of the cancellation. A system is both unobservable and uncontrollable if there is at least one pole-zero cancellation in the elements of the transfer-function matrix [sI − A]⁻¹B and at least one pole-zero cancellation in the elements of the matrix C[sI − A]⁻¹. Put differently, all this can be summarised as follows: if a pole-zero cancellation occurs in a system, then the mode (pole) that was cancelled cannot be controlled or observed. For example, if, in the process of deriving the system Transfer Function model, the following transpires:

T(s) = C(sI − A)⁻¹B = 4(s + 7)/[(s + 7)(s² + 13s + 1)] = 4/(s² + 13s + 1),

the pole-zero cancellation involving the two (s + 7) terms implies that the system is not observable. This can be confirmed by calculating the system Observability Matrix, OM, whose determinant will be obtained as zero.
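To make the idea concrete, consider a hypothetical diagonal system whose output reads only one of its two modes; the matrices below are illustrative (not from the text). The mode at s = −7 cancels out of the transfer function, and the observability test fails accordingly:

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical system: two decoupled modes at -7 and -1,
# but the output C sees only the second state.
A = np.array([[-7.0, 0.0], [0.0, -1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(np.round(num, 6))  # numerator carries the factor (s + 7)
print(np.round(den, 6))  # denominator (s + 7)(s + 1): the mode cancels

# The cancelled mode is unobservable: the observability matrix loses rank.
O = np.vstack([C, C @ A])
print(np.linalg.matrix_rank(O))  # 1 < 2 -> unobservable
```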

7.5.3 WORKED EXAMPLES OF CONTROLLABILITY & OBSERVABILITY

Worked examples illustrating controllability and observability concepts are presented here.

Example 7.6 Consider the following three State-Space models. Define the terms A, B, C, D, x and u. Find the observability and controllability matrices for each of the three systems and determine whether each system is controllable or observable.

(a)
A = [0 1 0; 0 0 1; −4 −3 −2], B = [0; 0; 1], C = [0 5 1], D = [0]

(b)
A = [0 1; −5 −21/4], B = [0; 1], C = [5 4], D = [0]

(c)
A = [−2 −1 −3; 0 −2 1; −7 −8 −9], B = [2; 1; 2], C = [4 6 8], D = [0]

Solution 7.6 (a) Consider the State-Space model given by

x˙ = Ax + Bu
y = Cx + Du

The definitions of the terms A, B, C, D, x and u are as follows:

A represents the system matrix.
B represents the input matrix.
C represents the output matrix.
D represents the feed-through matrix.
x(t) represents the state vector.
u(t) represents the input or control vector.


Given that:

A = [0 1 0; 0 0 1; −4 −3 −2], B = [0; 0; 1], C = [0 5 1], D = [0]

For controllability:

CM = [B  AB  · · ·  Aⁿ⁻¹B] = [B  AB  A²B]

AB = [0 1 0; 0 0 1; −4 −3 −2][0; 0; 1] = [0; 1; −2]

A² = [0 1 0; 0 0 1; −4 −3 −2][0 1 0; 0 0 1; −4 −3 −2] = [0 0 1; −4 −3 −2; 8 2 1]

A²B = [0 0 1; −4 −3 −2; 8 2 1][0; 0; 1] = [1; −2; 1]

=⇒ CM = [0 0 1; 0 1 −2; 1 −2 1]

det CM = 0 × det[1 −2; −2 1] − 0 × det[0 −2; 1 1] + 1 × det[0 1; 1 −2] = −1

This is a non-zero quantity, which means CM is full rank, and hence the system is controllable.

For observability:

OM = [C; CA; CA²; . . . ; CAⁿ⁻¹]

For the specific case under consideration:

OM = [C; CA; CA²]

CA = [0 5 1][0 1 0; 0 0 1; −4 −3 −2] = [−4 −3 3]

CA² = [0 5 1][0 0 1; −4 −3 −2; 8 2 1] = [−12 −13 −9]

=⇒ OM = [0 5 1; −4 −3 3; −12 −13 −9]

det OM = 0 × det[−3 3; −13 −9] − 5 × det[−4 3; −12 −9] + 1 × det[−4 −3; −12 −13]
= −5(36 + 36) + (52 − 36) = −360 + 16 = −344

This is a non-zero quantity, which means OM is full rank, and hence the system is observable.

(b) Given that:

A = [0 1; −5 −21/4], B = [0; 1], C = [5 4], D = [0]

CM = [B  AB]

AB = [0 1; −5 −21/4][0; 1] = [1; −21/4]

=⇒ CM = [0 1; 1 −21/4]

=⇒ det CM = (0 − 1) = −1

This is a non-zero quantity, which means CM is full rank, and hence the system is controllable.

For observability:

OM = [C; CA]

CA = [5 4][0 1; −5 −21/4] = [−20 −16]

=⇒ OM = [5 4; −20 −16]

=⇒ det OM = −80 − (−80) = 0

This means O_M is not full rank and hence the system is unobservable.

(c) Given that:

$$A = \begin{bmatrix} -2 & -1 & -3 \\ 0 & -2 & 1 \\ -7 & -8 & -9 \end{bmatrix}, \quad B = \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}, \quad C = \begin{bmatrix} 4 & 6 & 8 \end{bmatrix}, \quad D = [0]$$

For controllability:

$$C_M = \begin{bmatrix} B & AB & A^2B \end{bmatrix}$$

$$AB = \begin{bmatrix} -2 & -1 & -3 \\ 0 & -2 & 1 \\ -7 & -8 & -9 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -11 \\ 0 \\ -40 \end{bmatrix}$$

$$A^2 = \begin{bmatrix} -2 & -1 & -3 \\ 0 & -2 & 1 \\ -7 & -8 & -9 \end{bmatrix} \begin{bmatrix} -2 & -1 & -3 \\ 0 & -2 & 1 \\ -7 & -8 & -9 \end{bmatrix} = \begin{bmatrix} 25 & 28 & 32 \\ -7 & -4 & -11 \\ 77 & 95 & 94 \end{bmatrix}$$

$$\Longrightarrow A^2B = \begin{bmatrix} 25 & 28 & 32 \\ -7 & -4 & -11 \\ 77 & 95 & 94 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 142 \\ -40 \\ 437 \end{bmatrix}$$

$$\Longrightarrow C_M = \begin{bmatrix} 2 & -11 & 142 \\ 1 & 0 & -40 \\ 2 & -40 & 437 \end{bmatrix}$$

$$\det C_M = 2 \begin{vmatrix} 0 & -40 \\ -40 & 437 \end{vmatrix} - (-11) \begin{vmatrix} 1 & -40 \\ 2 & 437 \end{vmatrix} + 142 \begin{vmatrix} 1 & 0 \\ 2 & -40 \end{vmatrix} = -3200 + 5687 - 5680 = -3193$$

This is a non-zero quantity, which means C_M is full rank, and hence the system is controllable.

For observability:

$$O_M = \begin{bmatrix} C \\ CA \\ CA^2 \end{bmatrix}$$

$$CA = \begin{bmatrix} 4 & 6 & 8 \end{bmatrix} \begin{bmatrix} -2 & -1 & -3 \\ 0 & -2 & 1 \\ -7 & -8 & -9 \end{bmatrix} = \begin{bmatrix} -64 & -80 & -78 \end{bmatrix}$$

$$CA^2 = \begin{bmatrix} 4 & 6 & 8 \end{bmatrix} \begin{bmatrix} 25 & 28 & 32 \\ -7 & -4 & -11 \\ 77 & 95 & 94 \end{bmatrix} = \begin{bmatrix} 674 & 848 & 814 \end{bmatrix}$$

$$\Longrightarrow O_M = \begin{bmatrix} C \\ CA \\ CA^2 \end{bmatrix} = \begin{bmatrix} 4 & 6 & 8 \\ -64 & -80 & -78 \\ 674 & 848 & 814 \end{bmatrix}$$

$$\det O_M = 4 \begin{vmatrix} -80 & -78 \\ 848 & 814 \end{vmatrix} - 6 \begin{vmatrix} -64 & -78 \\ 674 & 814 \end{vmatrix} + 8 \begin{vmatrix} -64 & -80 \\ 674 & 848 \end{vmatrix} = 4096 - 2856 - 2816 = -1576$$

This is a non-zero quantity, which means O_M is full rank and hence the system is observable.


Example 7.7 Consider a system with the following State-Variable Matrix model:

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 6 & 2 & 0 \end{bmatrix}, \quad D = [0]$$

(a) Explain the concepts of controllability and observability.
(b) Evaluate the controllability and observability of the system.
(c) Show that the system's transfer function is given by

$$T(s) = \frac{2}{s^2 + 3s + 2}$$

Solution 7.7 (a) Controllability: If an input to the system can be found that takes every state variable from a desired initial state to a desired final state, the system is said to be controllable. This is determined by deriving the Controllability Matrix C_M.

$$C_M = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}$$

A system is controllable if det C_M is not equal to 0.

Observability: This is the ability to deduce the state variables from a knowledge of the input u(t) and the output y(t). It is determined by deriving the Observability Matrix O_M.

$$O_M = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$$

A system is observable if det O_M is not equal to 0.

(b) The Controllability Matrix is given by

$$C_M = \begin{bmatrix} B & AB & A^2B \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & -6 \\ 1 & -6 & 25 \end{bmatrix}$$

To determine controllability it is necessary to compute det C_M as follows:

$$\det C_M = 0 \begin{vmatrix} 1 & -6 \\ -6 & 25 \end{vmatrix} - 0 \begin{vmatrix} 0 & -6 \\ 1 & 25 \end{vmatrix} + 1 \begin{vmatrix} 0 & 1 \\ 1 & -6 \end{vmatrix} = -1$$

Since this is a non-zero number, the system is controllable. Next, focusing on the observability, it follows that:

$$O_M = \begin{bmatrix} C \\ CA \\ CA^2 \end{bmatrix}$$

$$CA = \begin{bmatrix} 6 & 2 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix} = \begin{bmatrix} 0 & 6 & 2 \end{bmatrix}$$

$$CA^2 = \begin{bmatrix} 0 & 6 & 2 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix} = \begin{bmatrix} -12 & -22 & -6 \end{bmatrix}$$

This means:

$$O_M = \begin{bmatrix} C \\ CA \\ CA^2 \end{bmatrix} = \begin{bmatrix} 6 & 2 & 0 \\ 0 & 6 & 2 \\ -12 & -22 & -6 \end{bmatrix}$$

Hence the determinant of O_M is obtained as follows:

$$\det O_M = 6 \begin{vmatrix} 6 & 2 \\ -22 & -6 \end{vmatrix} - 2 \begin{vmatrix} 0 & 2 \\ -12 & -6 \end{vmatrix} + 0 \begin{vmatrix} 0 & 6 \\ -12 & -22 \end{vmatrix} = 6(8) - 2(24) = 0$$

Since the determinant of O_M is zero, O_M is not a full rank matrix, and hence the system is not observable.

(c) The Transfer Function model is obtained from the following expression:

$$T(s) = C(sI - A)^{-1}B + D$$

Hence, to get it, a number of steps must be followed.

$$(sI - A) = \begin{bmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & s \end{bmatrix} - \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix} = \begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ 6 & 11 & s + 6 \end{bmatrix}$$

$$\det(sI - A) = s \begin{vmatrix} s & -1 \\ 11 & s + 6 \end{vmatrix} - (-1) \begin{vmatrix} 0 & -1 \\ 6 & s + 6 \end{vmatrix} + 0 \begin{vmatrix} 0 & s \\ 6 & 11 \end{vmatrix} = s(s[s + 6] + 11) + 1(6)$$

$$\Longrightarrow \det(sI - A) = s^3 + 6s^2 + 11s + 6$$

Using the cofactor method, the adjoint matrix of (sI - A) is obtained as:

$$\operatorname{adj}(sI - A) = \begin{bmatrix} s^2 + 6s + 11 & s + 6 & 1 \\ -6 & s^2 + 6s & s \\ -6s & -11s - 6 & s^2 \end{bmatrix}$$

Hence the transfer function is obtained as follows:

$$T(s) = C(sI - A)^{-1}B + D, \quad \text{where} \quad (sI - A)^{-1} = \frac{\operatorname{adj}(sI - A)}{\det(sI - A)}$$


$$\Longrightarrow T(s) = \frac{\begin{bmatrix} 6 & 2 & 0 \end{bmatrix} \begin{bmatrix} s^2 + 6s + 11 & s + 6 & 1 \\ -6 & s^2 + 6s & s \\ -6s & -11s - 6 & s^2 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}}{s^3 + 6s^2 + 11s + 6} + [0]$$

$$= \frac{2s + 6}{s^3 + 6s^2 + 11s + 6} = \frac{2(s + 3)}{(s + 3)(s + 2)(s + 1)} = \frac{2}{(s + 2)(s + 1)} = \frac{2}{s^2 + 3s + 2} \quad Q.E.D.$$

This is the transfer function of the system. The pole-zero cancellation involving the (s + 3) terms implies that the system is not observable, which means the Observability Matrix, O_M, is not full rank, as already established in (b), where the determinant of O_M was found to be zero.

There is a shorter method of obtaining the same transfer function. Observing that the given state-space matrices represent the FLIP of the controllable canonical form, we flip back to get the controllable canonical form as follows:

$$A = \begin{bmatrix} -6 & -11 & -6 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 2 & 6 \end{bmatrix}, \quad D = [0]$$

This is the complete canonical State-Space model of the given system. Working backwards, we can then extract the transfer function as follows:

$$T(s) = \frac{2s + 6}{s^3 + 6s^2 + 11s + 6} = \frac{2(s + 3)}{(s + 3)(s^2 + 3s + 2)} = \frac{2}{s^2 + 3s + 2} \quad Q.E.D.$$
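The result can be checked numerically by evaluating C(sI - A)^(-1)B + D at a few complex frequencies and comparing against the reduced form (a plain-Python sketch, not from the text; solve is an ad hoc Gaussian-elimination helper):

```python
# Numerical check of T(s) = C(sI - A)^(-1)B + D for Example 7.7.

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            M[r] = [M[r][j] - f * M[k][j] for j in range(n + 1)]
    x = [0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

A = [[0, 1, 0], [0, 0, 1], [-6, -11, -6]]
B = [0, 0, 1]
C = [6, 2, 0]
D = 0

def T(s):
    """Evaluate C (sI - A)^(-1) B + D at the complex frequency s."""
    sIA = [[s * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    x = solve(sIA, B)            # x = (sI - A)^(-1) B
    return sum(c * xi for c, xi in zip(C, x)) + D

for s in (1.0, 0.5 + 2j, -10.0):
    assert abs(T(s) - 2 / (s * s + 3 * s + 2)) < 1e-9   # reduced form
```

The agreement at arbitrary test points confirms that the (s + 3) factor cancels exactly.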

Example 7.8 Consider the electrical circuit shown in Figure 7.2, where the input u(t) is a current and the output y(t) is a voltage.

(a) Explain why the state variables should be chosen as x_1 = i_L and x_2 = v_C.
(b) Write down state-variable equations for the system, and deduce the system's State-Space model (A, B, C, D).


Figure 7.2 Electrical circuit.

(c) Explain the concepts of controllability and observability. How are these properties determined for a system whose model is given by matrices (A, B, C, D)?
(d) In Figure 7.2, what conditions on R, L and C will guarantee that the system is: (i) controllable? (ii) observable?
(e) For the State-Space model derived in (b), if R = 5 Ω, L = 1 H and C = 1 F, design a block diagram for the circuit with one integrator for each state variable.
(f) Derive the circuit's transfer function using matrix algebra.

Solution 7.8 (a) The capacitor and the inductor are the two energy-storing elements in the system, and the corresponding variables are represented by the vector:

$$\begin{bmatrix} i_L \\ v_C \end{bmatrix}$$

(b) The generic equations for any capacitor and any inductor are respectively given by

$$i_C = C\frac{dv_C}{dt} \Longrightarrow \dot{v}_C = \frac{i_C}{C}$$

$$v_L = L\frac{di_L}{dt} \Longrightarrow \dot{i}_L = \frac{v_L}{L}$$

From the circuit, the necessary equations are established using KCL and KVL.

From KVL:

$$L\dot{i}_L + Ri_L = v_C + RC\dot{v}_C$$

From KCL:

$$u = i_L + i_C = i_L + C\dot{v}_C$$

Also from KCL:

$$y = v_R = RC\dot{v}_C = R(u - i_L)$$

The next step is to find the expressions for the derivatives of the chosen state variables, that is:

$$\begin{bmatrix} \dot{i}_L \\ \dot{v}_C \end{bmatrix}$$

These are obtained as:

$$\dot{i}_L = \frac{-2R}{L}i_L + \frac{1}{L}v_C + \frac{R}{L}u$$

$$\dot{v}_C = \frac{-i_L}{C} + \frac{u}{C}$$

The output is given by

$$y = -Ri_L + Ru$$

Hence, the state-variable matrices are obtained from the following State-Space model:

$$\begin{bmatrix} \dot{i}_L \\ \dot{v}_C \end{bmatrix} = \begin{bmatrix} -2R/L & 1/L \\ -1/C & 0 \end{bmatrix} \begin{bmatrix} i_L \\ v_C \end{bmatrix} + \begin{bmatrix} R/L \\ 1/C \end{bmatrix} u$$

$$y = \begin{bmatrix} -R & 0 \end{bmatrix} \begin{bmatrix} i_L \\ v_C \end{bmatrix} + [R]u$$

This is the State-Variable Matrix model of the electrical circuit, which is a dynamic system.

(c) Controllability: If an input to the system can be found that takes every state variable from a desired initial state to a desired final state, the system is said to be controllable. This is determined by deriving the Controllability Matrix C_M.

$$C_M = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}$$

A system is controllable if det C_M is not equal to 0.

Observability: This is the ability to deduce the state variables from a knowledge of the input u(t) and the output y(t). It is determined by deriving the Observability Matrix O_M.

$$O_M = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$$
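The State-Space model derived in (b) can be sanity-checked by simulation (a sketch, not from the text). Setting the derivatives to zero in the state equations gives the step-input equilibrium i_L = u and v_C = Ru, so the output y = -R i_L + Ru must settle to zero:

```python
# Forward-Euler simulation of the circuit's state equations for R = 5,
# L = 1, C = 1 and a unit-step input, starting from zero initial conditions.

R, L, C_cap = 5.0, 1.0, 1.0   # C_cap avoids clashing with the output matrix C
dt, u = 1e-3, 1.0
iL, vC = 0.0, 0.0

for _ in range(100_000):      # 100 s of simulated time
    diL = (-2 * R / L) * iL + (1 / L) * vC + (R / L) * u
    dvC = (-1 / C_cap) * iL + (1 / C_cap) * u
    iL += dt * diL
    vC += dt * dvC

y = -R * iL + R * u           # settles towards 0 as iL -> u
```

Forward Euler is the simplest possible integrator; it is adequate here because the time step is far smaller than the fastest time constant of the circuit.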

A system is observable if det O_M is not equal to 0.

(d) For the given system, the Controllability Matrix is obtained as

$$C_M = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} \dfrac{R}{L} & \dfrac{-2R^2}{L^2} + \dfrac{1}{LC} \\[2mm] \dfrac{1}{C} & \dfrac{-R}{LC} \end{bmatrix}$$

$$\det C_M = \frac{R^2}{L^2C} - \frac{1}{LC^2}$$

The system is controllable if det C_M ≠ 0. Thus, the system is controllable if

$$R^2 \neq \frac{L}{C}$$

In terms of observability, the Observability Matrix is given by

$$O_M = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} -R & 0 \\[1mm] \dfrac{2R^2}{L} & \dfrac{-R}{L} \end{bmatrix} \Longrightarrow \det O_M = \frac{R^2}{L}$$

For the system to be observable, R²/L ≠ 0. This means the system is unobservable for R = 0 or L = ∞. Alternatively, the system is observable for R ≠ 0 and L ≠ ∞.

(e) Consider the general state-variable matrices for the system (A, B, C, D) and substitute the values R = 5 Ω, C = 1 F and L = 1 H. The following specific state-variable matrices are produced:

$$A = \begin{bmatrix} -10 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 5 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} -5 & 0 \end{bmatrix}, \quad D = [5]$$

Using these matrices, and working backwards, the following state-variable equations are obtained:

$$\dot{x}_1 = -10x_1 + x_2 + 5u$$
$$\dot{x}_2 = -x_1 + u$$
$$y = -5x_1 + 5u$$

From these state-variable equations, a system block diagram can be obtained by using an integrator for each state variable, as illustrated in Figure 7.3. The final block diagram is shown in Figure 7.4. Using the derived specific state-variable matrices, the Transfer Function model can be obtained as follows:

Figure 7.3 Process of deriving block diagram.

536

Design and Analysis of Control Systems

Figure 7.4 Derived block diagram.

$$T(s) = C(sI - A)^{-1}B + D = \begin{bmatrix} -5 & 0 \end{bmatrix} \left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} -10 & 1 \\ -1 & 0 \end{bmatrix} \right)^{-1} \begin{bmatrix} 5 \\ 1 \end{bmatrix} + [5]$$

$$= \begin{bmatrix} -5 & 0 \end{bmatrix} \frac{1}{s^2 + 10s + 1} \begin{bmatrix} 5s + 1 \\ s + 5 \end{bmatrix} + 5$$

$$= \frac{-25s - 5}{s^2 + 10s + 1} + 5 = \frac{(-25s - 5) + 5(s^2 + 10s + 1)}{s^2 + 10s + 1} = \frac{5s^2 + 25s}{s^2 + 10s + 1}$$

This Transfer Function model can also be obtained by a second method discussed in Chapter 2: taking Laplace transforms of the state-variable equations and then deriving T(s). As indicated above, the state-variable equations are given by

$$\dot{x}_1 = -10x_1 + x_2 + 5u$$
$$\dot{x}_2 = -x_1 + u$$
$$y = -5x_1 + 5u$$

Taking Laplace transforms of these equations, then eliminating X_1(s) and X_2(s) while leaving only Y(s) and U(s), is carried out as follows:

$$sX_1(s) = -10X_1(s) + X_2(s) + 5U(s) \quad \text{(i)}$$
$$sX_2(s) = -X_1(s) + U(s) \quad \text{(ii)}$$
$$Y(s) = -5X_1(s) + 5U(s) \quad \text{(iii)}$$


From (i) and (ii) it follows that

$$X_2(s) = X_1(s)(s + 10) - 5U(s)$$
$$\Longrightarrow s[X_1(s)(s + 10) - 5U(s)] = -X_1(s) + U(s)$$
$$\Longrightarrow X_1(s)[s(s + 10) + 1] = U(s)[5s + 1]$$
$$\Longrightarrow X_1(s) = U(s)\left[\frac{5s + 1}{s(s + 10) + 1}\right]$$

Substituting this outcome into (iii) yields

$$Y(s) = -5U(s)\left[\frac{5s + 1}{s(s + 10) + 1}\right] + 5U(s) = U(s)\left[\frac{-5(5s + 1)}{s(s + 10) + 1} + 5\right]$$

Given that T(s) = Y(s)/U(s), it follows that

$$T(s) = \frac{-5(5s + 1)}{s(s + 10) + 1} + 5 = \frac{-25s - 5 + 5s^2 + 50s + 5}{s^2 + 10s + 1} = \frac{5s^2 + 25s}{s^2 + 10s + 1}$$

Clearly, this result is the same as that obtained in the preceding section using the matrix algebra expression:

$$T(s) = C(sI - A)^{-1}B + D \quad (7.49)$$

This should be the case because the Transfer Function model of any dynamic system is unique, irrespective of the method used to obtain it. There is no pole-zero cancellation involved in establishing this Transfer Function model using the two methods. This means the system is both controllable and observable. The work covered in this section illustrates the interchangeability of the different system models: the Transfer Function, State-Variable Matrix (State-Space), Input-Output, and Block Diagram models. In terms of controllability and observability for the specific case where R = 5 and C = L = 1, the following can be asserted:

$$\frac{R^2}{L} = 25 \Longrightarrow \frac{R^2}{L} \neq 0 \Longrightarrow \text{System is observable}$$

$$R^2 = 25 \ \text{and} \ \frac{L}{C} = 1 \Longrightarrow R^2 \neq \frac{L}{C} \Longrightarrow \text{System is controllable}$$
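A quick numerical cross-check (not from the text) confirms that the two derivations agree, by evaluating C(sI - A)^(-1)B + D with the 2x2 adjugate formula at a few sample points:

```python
# Evaluate T(s) for R = 5, L = C = 1 and compare against
# (5s^2 + 25s)/(s^2 + 10s + 1).

A = [[-10.0, 1.0], [-1.0, 0.0]]
B = [5.0, 1.0]
C = [-5.0, 0.0]
D = 5.0

def T(s):
    a, b = s - A[0][0], -A[0][1]          # sI - A = [[a, b], [c, d]]
    c, d = -A[1][0], s - A[1][1]
    det = a * d - b * c                   # = s^2 + 10s + 1
    x0 = (d * B[0] - b * B[1]) / det      # (sI - A)^(-1) B via the adjugate
    x1 = (-c * B[0] + a * B[1]) / det
    return C[0] * x0 + C[1] * x1 + D

for s in (1.0, 2.0, 0.5 + 1j):
    assert abs(T(s) - (5 * s * s + 25 * s) / (s * s + 10 * s + 1)) < 1e-9
```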


The system is both controllable and observable, which is consistent with the absence of any pole-zero cancellation in the derivation of the Transfer Function model. The controllability and observability can also be established by directly determining C_M and O_M and showing that their determinants are non-zero. However, this would be repetitive because the conditions R²/L ≠ 0 and R² ≠ L/C were derived from the same premise.

The second set of specific circuit parameters: once again, considering the general state-variable matrices for the given circuit, (A, B, C, D), but now substituting the values R = 1 Ω, C = 1 F and L = 1 H, the following specific state-variable matrices are produced:

$$A = \begin{bmatrix} -2 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 0 \end{bmatrix}, \quad D = [1]$$

Using these matrices, and working backwards, the following state-variable equations are obtained:

$$\dot{x}_1 = -2x_1 + x_2 + u$$
$$\dot{x}_2 = -x_1 + u$$
$$y = -x_1 + u$$

From these state-variable equations, the system block diagram in Figure 7.5 is obtained by using an integrator for each state variable.

Figure 7.5 Derived block diagram.

(f) Using the derived specific state-variable matrices, the Transfer Function model can be obtained as follows:

$$T(s) = C(sI - A)^{-1}B + D = \begin{bmatrix} -1 & 0 \end{bmatrix} \left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} -2 & 1 \\ -1 & 0 \end{bmatrix} \right)^{-1} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + [1]$$

$$= \begin{bmatrix} -1 & 0 \end{bmatrix} \frac{1}{s^2 + 2s + 1} \begin{bmatrix} s + 1 \\ s + 1 \end{bmatrix} + 1$$

$$= \frac{-s - 1 + s^2 + 2s + 1}{s^2 + 2s + 1} = \frac{s^2 + s}{s^2 + 2s + 1} = \frac{s(s + 1)}{(s + 1)(s + 1)} = \frac{s}{s + 1}$$

This Transfer Function model can also be established by working backwards from the State-Space model to obtain the state-variable equations, as already illustrated:

$$\dot{x}_1 = -2x_1 + x_2 + u$$
$$\dot{x}_2 = -x_1 + u$$
$$y = -x_1 + u = \dot{x}_2$$

The Transfer Function model is obtained by taking Laplace transforms of these equations, eliminating the Laplace transforms of the two state variables, X_1(s) and X_2(s), and leaving only Y(s) and U(s), as follows:

$$sX_1(s) = -2X_1(s) + X_2(s) + U(s) \Longrightarrow X_1(s)[s + 2] = X_2(s) + U(s) \Longrightarrow X_1(s) = \frac{X_2(s) + U(s)}{s + 2}$$

$$sX_2(s) = -X_1(s) + U(s) \Longrightarrow sX_2(s) = -\frac{X_2(s) + U(s)}{s + 2} + U(s)$$

$$\Longrightarrow X_2(s)\left[s + \frac{1}{s + 2}\right] = U(s)\left[1 - \frac{1}{s + 2}\right] \Longrightarrow X_2(s) = U(s)\left[\frac{s + 1}{s^2 + 2s + 1}\right]$$

$$Y(s) = -X_1(s) + U(s) = sX_2(s) \Longrightarrow Y(s) = sU(s)\left[\frac{s + 1}{s^2 + 2s + 1}\right]$$

$$T(s) = \frac{Y(s)}{U(s)} \Longrightarrow T(s) = \frac{s(s + 1)}{s^2 + 2s + 1}$$


$$= \frac{s(s + 1)}{(s + 1)(s + 1)} = \frac{s}{s + 1}$$

Clearly, the Transfer Function model obtained by this method is the same as that obtained by matrix algebra. This is the case because, for any dynamic system, the Transfer Function model is unique. In establishing this Transfer Function model (using both methods) there is a pole-zero cancellation involving the term (s + 1). This means the system is uncontrollable. The work covered in this section illustrates the interchangeability of the different system models: the Transfer Function, State-Variable Matrix (State-Space), Input-Output, and Block Diagram models. In terms of controllability and observability for R = 1 and C = L = 1, the following can be asserted:

$$\frac{R^2}{L} = 1 \Longrightarrow \frac{R^2}{L} \neq 0 \Longrightarrow \text{System is observable}$$

$$R^2 = 1 \ \text{and} \ \frac{L}{C} = 1 \Longrightarrow R^2 = \frac{L}{C}$$
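For this second parameter set, the determinants can be written out directly in exact integer arithmetic (a sketch, not from the text):

```python
# Controllability/observability determinants for R = 1, L = C = 1.

A = [[-2, 1], [-1, 0]]
B = [1, 1]
C = [-1, 0]

AB = [A[0][0] * B[0] + A[0][1] * B[1], A[1][0] * B[0] + A[1][1] * B[1]]
CA = [C[0] * A[0][0] + C[1] * A[1][0], C[0] * A[0][1] + C[1] * A[1][1]]

det_CM = B[0] * AB[1] - AB[0] * B[1]   # det [B  AB]  -> 0 (uncontrollable)
det_OM = C[0] * CA[1] - C[1] * CA[0]   # det [C; CA]  -> 1 (observable)
```

The vanishing controllability determinant, alongside a non-zero observability determinant, matches the pole-zero cancellation argument below.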

$$\Longrightarrow \text{System is uncontrollable}$$

This shows that the system is uncontrollable, as already established by the pole-zero cancellation. However, the system is still observable. This means that a pole-zero cancellation implies that a system is uncontrollable but not necessarily unobservable.

Example 7.9 Consider the electrical circuit in Figure 7.6, where the input u(t) is a voltage and the output y(t) is the total current flowing through the capacitor and inductor as indicated.

(a) Explain why the state variables should be chosen as x_1 = v_C and x_2 = i_L.
(b) Write down state-variable equations for the system, and deduce the system's State-Space model (A, B, C, D).

Figure 7.6 Electrical circuit.


(c) Explain the concepts of controllability and observability. How are these properties determined for a system whose model is given by matrices (A, B, C, D)?
(d) What conditions relating R_1, R_2, L and C will render the system uncontrollable?
(e) What conditions relating R_1, R_2, L and C will render the system unobservable?
(f) For the State-Space model derived in (b), if R_1 = R_2 = 5 and L = C = 1, design a block diagram for the circuit with one integrator for each state variable.
(g) Derive the circuit's transfer function using matrix algebra.

Solution 7.9 (a) The variables v_C and i_L are chosen because the capacitor and inductor are the two energy-storing elements in the system. Whenever we are dealing with a capacitor or an inductor, the starting point is always considering the following equations:

$$i_C = C\frac{dv_C}{dt} \Longrightarrow \dot{v}_C = \frac{i_C}{C}$$

$$v_L = L\frac{di_L}{dt} \Longrightarrow \dot{i}_L = \frac{v_L}{L}$$

(b) The chosen state variables v_C and i_L can be represented as follows:

$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} v_C \\ i_L \end{bmatrix}$$

The aim is to find the State-Variable Matrix model (State-Space model) in the following format:

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = A \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + Bu$$

$$y = C \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + Du$$

From the circuit, the state-variable equations are developed as follows:

$$\dot{x}_1 = \dot{v}_C = \frac{i_C}{C} = \frac{-x_1}{R_1C} + \frac{u}{R_1C}$$

$$\dot{x}_2 = \dot{i}_L = \frac{v_L}{L} = \frac{1}{L}[u - R_2x_2] = \frac{-R_2x_2}{L} + \frac{u}{L}$$

$$y = x_2 + \frac{(u - x_1)}{R_1}$$


From these equations, the state-space matrices are then obtained as:

$$A = \begin{bmatrix} \dfrac{-1}{R_1C} & 0 \\[2mm] 0 & \dfrac{-R_2}{L} \end{bmatrix}, \quad B = \begin{bmatrix} \dfrac{1}{R_1C} \\[2mm] \dfrac{1}{L} \end{bmatrix}, \quad C = \begin{bmatrix} \dfrac{-1}{R_1} & 1 \end{bmatrix}, \quad D = \left[\dfrac{1}{R_1}\right]$$

This is the State-Space model of the electrical circuit.

(c) Controllability: If an input to the system can be found that takes every state variable from a desired initial state to a desired final state, the system is said to be controllable. It is determined by deriving the Controllability Matrix:

$$C_M = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}$$

A system is controllable if det C_M is not equal to zero.

Observability: This is the ability to deduce the state variables from a knowledge of the input u(t) and the output y(t). It is determined by deriving the Observability Matrix O_M:

$$O_M = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$$

A system is observable if det O_M is not equal to zero.

(d) To establish the condition for the system to be uncontrollable, we consider the Controllability Matrix:

$$C_M = \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} \dfrac{1}{R_1C} & \dfrac{-1}{(R_1C)^2} \\[2mm] \dfrac{1}{L} & \dfrac{-R_2}{L^2} \end{bmatrix}$$

Obtaining the determinant of this matrix gives

$$\det C_M = \frac{-R_2}{R_1CL^2} + \frac{1}{L(R_1C)^2}$$


For the system to be uncontrollable, this determinant must be equal to zero. This means:

$$\frac{-R_2}{R_1CL^2} + \frac{1}{L(R_1C)^2} = 0 \Longrightarrow R_1R_2C = L$$

(e) To establish the condition for unobservability, we consider the Observability Matrix:

$$O_M = \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} \dfrac{-1}{R_1} & 1 \\[2mm] \dfrac{1}{R_1^2C} & \dfrac{-R_2}{L} \end{bmatrix}$$

Setting the determinant of this matrix to zero produces:

$$\frac{R_2}{R_1L} - \frac{1}{R_1^2C} = 0 \Longrightarrow R_1R_2C = L$$

Clearly, the condition that makes the system uncontrollable is the same as that which makes it unobservable.

(f) For the specific case where R_1 = R_2 = 5 Ω, L = 1 H and C = 1 F, we substitute these values in the state-space matrices A, B, C and D. This produces the following specific state-space matrices:

$$A = \begin{bmatrix} \dfrac{-1}{5} & 0 \\[2mm] 0 & -5 \end{bmatrix}, \quad B = \begin{bmatrix} \dfrac{1}{5} \\[2mm] 1 \end{bmatrix}, \quad C = \begin{bmatrix} \dfrac{-1}{5} & 1 \end{bmatrix}, \quad D = \left[\dfrac{1}{5}\right]$$

From these matrices, we can work backwards and establish the corresponding specific state-variable equations:

$$\dot{x}_1 = \frac{-x_1}{5} + \frac{u}{5}$$
$$\dot{x}_2 = -5x_2 + u$$
$$y = \frac{-x_1}{5} + x_2 + \frac{u}{5}$$


Figure 7.7 Derived block diagram.

From these state-variable equations, the block diagram of this specific control system can be drawn as illustrated in Figure 7.7.

(g) The transfer function T(s) is obtained from the expression:

$$T(s) = C(sI - A)^{-1}B + D$$

Applying the specific state matrices obtained previously leads to:

$$T(s) = \begin{bmatrix} \dfrac{-1}{5} & 1 \end{bmatrix} \left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} \dfrac{-1}{5} & 0 \\ 0 & -5 \end{bmatrix} \right)^{-1} \begin{bmatrix} \dfrac{1}{5} \\ 1 \end{bmatrix} + \left[\dfrac{1}{5}\right]$$

$$= \begin{bmatrix} \dfrac{-1}{5} & 1 \end{bmatrix} \begin{bmatrix} \dfrac{1}{s + \frac{1}{5}} & 0 \\[2mm] 0 & \dfrac{1}{s + 5} \end{bmatrix} \begin{bmatrix} \dfrac{1}{5} \\ 1 \end{bmatrix} + \frac{1}{5}$$

$$= \frac{-1}{5(5s + 1)} + \frac{1}{s + 5} + \frac{1}{5} = \frac{-(s + 5) + 5(5s + 1) + (5s + 1)(s + 5)}{5(5s + 1)(s + 5)}$$

$$= \frac{5s^2 + 50s + 5}{5(5s^2 + 26s + 5)} = \frac{s^2 + 10s + 1}{5s^2 + 26s + 5} = \frac{s^2 + 10s + 1}{(5s + 1)(s + 5)}$$

This is the Transfer Function model of the circuit. It can also be obtained using the Laplace transform method discussed earlier. This activity is left as an exercise for the reader. The DC gain is calculated assuming a step input, such that:

$$\text{DC gain} = \lim_{s \to 0} T(s) = \frac{0^2 + 10 \times 0 + 1}{(5 \times 0 + 1)(0 + 5)} = \frac{1}{5}$$

There is no pole-zero cancellation in the derivation of the Transfer Function model, and hence the system is controllable and observable. This is also confirmed by considering the condition for controllability, R_1R_2C ≠ L, which is clearly satisfied by the given circuit parameters: 5 × 5 × 1 = 25 ≠ 1. It is important to note that this system's controllability condition is the same as its observability condition, so for this case the system is both controllable and observable.

The second case of specific circuit parameters is R_1 = R_2 = 1 Ω, L = 1 H and C = 1 F. A new set of specific state-space matrices is obtained:

$$A = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 1 \end{bmatrix}, \quad D = [1]$$

From these matrices, we can work backwards and establish the corresponding specific state-variable equations:

$$\dot{x}_1 = -x_1 + u$$
$$\dot{x}_2 = -x_2 + u$$
$$y = -x_1 + x_2 + u$$

From these state-variable equations, the block diagram of this specific control system can be drawn by using an integrator for each state variable, as indicated before.


The transfer function of this new specific system is obtained as follows:

$$T(s) = C(sI - A)^{-1}B + D = \begin{bmatrix} -1 & 1 \end{bmatrix} \left( \begin{bmatrix} s & 0 \\ 0 & s \end{bmatrix} - \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \right)^{-1} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 1$$

$$= \begin{bmatrix} -1 & 1 \end{bmatrix} \begin{bmatrix} s + 1 & 0 \\ 0 & s + 1 \end{bmatrix}^{-1} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + 1 = \begin{bmatrix} -1 & 1 \end{bmatrix} \frac{1}{(s + 1)^2} \begin{bmatrix} s + 1 \\ s + 1 \end{bmatrix} + 1$$

$$= \frac{(-s - 1 + s + 1) + (s + 1)^2}{(s + 1)^2} = \frac{(s + 1)^2}{(s + 1)^2} \Longrightarrow T(s) = 1$$

This is the Transfer Function model of the specific circuit, which means

$$T(s) = \frac{Y(s)}{U(s)} = 1$$

The model can also be obtained using the Laplace transform method discussed earlier. This activity is left as an exercise for the reader.

In the derivation (by both methods) of the Transfer Function model, there is a repeated pole-zero cancellation, (s + 1)²/(s + 1)². This implies an uncontrollable and/or unobservable system. This is confirmed by considering the condition for uncontrollability for the system: R_1R_2C = L. Substituting the specific values of the circuit parameters shows that this condition is satisfied: 1 × 1 × 1 = 1. All this attests to the uncontrollability of the system. Since the condition for controllability is the same as that for observability, the system is both uncontrollable and unobservable.

The result that the transfer function of the system is given by Y(s)/U(s) = 1 can also be deduced from the block diagram, where it is clear that x_1(t) = x_2(t) and hence y(t) = u(t). This assertion can be systematically derived from the block diagram as follows:

$$\dot{x}_1(t) = u(t) - x_1(t)$$
$$\dot{x}_2(t) = u(t) - x_2(t)$$
$$sX_1(s) = U(s) - X_1(s)$$
$$sX_2(s) = U(s) - X_2(s)$$


Equating the two expressions for U(s) leads to:

$$X_1(s)[s + 1] = X_2(s)[s + 1] \Longrightarrow X_1(s) = X_2(s) \Longrightarrow x_1(t) = x_2(t)$$

Substituting this in the expression for the output:

$$y(t) = -x_1(t) + x_2(t) + u(t) = -x_1(t) + x_1(t) + u(t) = u(t)$$

Hence the transfer function is given by

$$T(s) = \frac{Y(s)}{U(s)} = 1$$

Hence the transfer function obtained from the block diagram is the same as that derived using matrix algebra. This specific circuit is a repeater, which reproduces the input as an output.

CONTROLLABILITY FOR THE TWO CASES

1. R_1 = R_2 = 5 Ω, L = 1 H and C = 1 F
2. R_1 = R_2 = 1 Ω, C = 1 F and L = 1 H

Uncontrollability condition: R_1R_2C = L

Case 2: 1 × 1 × 1 = 1. For case 2, this confirms that the system is uncontrollable, as implied by the pole-zero cancellation in the derivation of the transfer function:

$$T(s) = \frac{(s + 1)^2}{(s + 1)^2} = 1$$

Case 1: 5 × 5 × 1 = 25 ≠ 1. For case 1, the system is controllable, which is consistent with the absence of pole-zero cancellation in the derivation of the transfer function:

$$T(s) = \frac{s^2 + 10s + 1}{(5s + 1)(s + 5)}$$

OBSERVABILITY FOR THE TWO CASES

1. R_1 = R_2 = 5 Ω, L = 1 H and C = 1 F
2. R_1 = R_2 = 1 Ω, C = 1 F and L = 1 H

Unobservability condition: R_1R_2C = L

Case 2: 1 × 1 × 1 = 1. For case 2, the system is unobservable.

Case 1: 5 × 5 × 1 = 25 ≠ 1. For case 1, the system is observable.


The results for controllability and observability are the same because, for this particular dynamic system, the condition for observability is the same as that for controllability. It is important to note that the occurrence of a pole-zero cancellation in the derivation of a system’s Transfer Function model means the system is uncontrollable but not necessarily unobservable.
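The shared condition R_1R_2C = L from parts (d) and (e) can be verified for both parameter sets with exact rational arithmetic (a sketch, not from the text; dets is an ad hoc helper):

```python
# Exact check of det CM and det OM for the circuit of Example 7.9.

from fractions import Fraction as F

def dets(R1, R2, L, Cc):
    """Return (det CM, det OM) for the circuit's State-Space model."""
    a11, a22 = -1 / (R1 * Cc), -R2 / L           # A = diag(a11, a22)
    b1, b2 = 1 / (R1 * Cc), 1 / L                # B = [b1; b2]
    c1, c2 = -1 / R1, F(1)                       # C = [c1, c2]
    det_cm = b1 * (a22 * b2) - b2 * (a11 * b1)   # det [B  AB]
    det_om = c1 * (c2 * a22) - c2 * (c1 * a11)   # det [C; CA]
    return det_cm, det_om

cm1, om1 = dets(F(5), F(5), F(1), F(1))   # R1*R2*C = 25 != L: both non-zero
cm2, om2 = dets(F(1), F(1), F(1), F(1))   # R1*R2*C = 1 == L: both vanish

assert cm1 != 0 and om1 != 0
assert cm2 == 0 and om2 == 0
```

For case 1 the determinants work out to -24/25 and 24/25 respectively, matching the symbolic formulas derived above; for case 2 both collapse to zero.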

7.6 CANONICAL STATE-SPACE MODELS

There are techniques available for obtaining State-Space models from Transfer Function ones, as discussed in Chapter 2. This section presents Canonical State-Space models in the controllable, observable, diagonal and Jordan forms. For any given system, there is essentially an infinite number of possible State-Space models that will give the identical Input-Output model. Thus, it is desirable to have certain standardised State-Space model structures: these are called canonical forms. Given a system Transfer Function model, it is possible to obtain each of the canonical models. Furthermore, given any particular canonical form, it is possible to transform it into another form. For all these State-Space models, the starting point is to consider the Transfer Function model of a generic dynamic system expressed as a ratio of two polynomials as follows:

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} = \frac{b_0s^n + b_1s^{n-1} + b_2s^{n-2} + \cdots + b_n}{s^n + a_1s^{n-1} + a_2s^{n-2} + \cdots + a_n} \quad (7.52)$$

7.6.1 CONTROLLABLE CANONICAL STATE-SPACE MODEL

The generalised controllable canonical form was introduced in Chapter 2. The controllable canonical form is a system description in which all states can be modified by the control input. This is in keeping with the definition of controllability: if an input to the system can be found that takes every state variable from a desired initial state to a desired final state, the system is said to be controllable. Hence, the controllable canonical form is very convenient in the design of system controllers.

The controllable canonical form arranges the coefficients of the transfer function denominator across one row of the A matrix. It is the most direct method for converting from the Transfer Function model of a system to a State-Space model. From Equation 7.52, the State-Space model (A, B, C, D) for the controllable canonical form is extracted as follows:

$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$$

$$C = \begin{bmatrix} b_n - a_nb_0 & b_{n-1} - a_{n-1}b_0 & b_{n-2} - a_{n-2}b_0 & \cdots & b_1 - a_1b_0 \end{bmatrix}$$




D = [b0 ] This is the Controllable Canonical State-Space model for a generic dynamic system. It is prudent to note that the stacking of variables in the state-variable vector, x(t), is in ascending order, that is:   x1  x2     x3      x(t) =  ...    xn−2    xn−1  xn The controllable canonical form is important in discussing the pole-placement approach to control systems design. In fact, as stated in Chapter 2, the MATLAB function tf2ss, which converts from the Transfer Function model to the State-Space model, uses the controllable canonical method. [A, B,C, D] = tf2ss (num, den) . The State-Space model (A, B, C, D) obtained from MATLAB is in the controllable canonical form. The only difference is that MATLAB uses a descending order for the state variables, whereas the generalised Controllable Canonical form uses an ascending order. Hence the order of the rows and that of the elements of the rows are reversed, as demonstrated in Example 7.1. Nonetheless, the MATLAB outcome and the results of the Controllable Canonical method contain the same information. 7.6.2

OBSERVABLE CANONICAL STATE-SPACE MODEL

A system is said to be observable if all the state variables can be deduced from knowledge of the inputs and outputs. The observable canonical form is so named because of its use in the design of observers. From Equation 7.52, the State-Space model (A, B, C, D) for the observable canonical form is extracted as follows:

$$A = \begin{bmatrix} 0 & 0 & 0 & \cdots & -a_n \\ 1 & 0 & 0 & \cdots & -a_{n-1} \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 & \cdots & -a_2 \\ 0 & 0 & 1 & \cdots & -a_1 \end{bmatrix}, \quad B = \begin{bmatrix} b_n - a_nb_0 \\ b_{n-1} - a_{n-1}b_0 \\ b_{n-2} - a_{n-2}b_0 \\ \vdots \\ b_1 - a_1b_0 \end{bmatrix}$$

$$C = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}, \quad D = [b_0]$$


This is the Observable Canonical State-Space model for a generic dynamic system. The observable canonical form is important in designing observers. It is instructive to note the following relationships between the observable and controllable canonical forms:

$$A_{obsv} = A_{cont}^T$$
$$B_{obsv} = C_{cont}^T$$
$$C_{obsv} = B_{cont}^T$$
$$D_{obsv} = D_{cont}$$

7.6.3 DIAGONAL CANONICAL STATE-SPACE MODEL

Consider the case where, in the Transfer Function model of Equation 7.52, the denominator polynomial involves only distinct roots. The transfer function can be re-written as:

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} = \frac{b_0s^n + b_1s^{n-1} + b_2s^{n-2} + \cdots + b_n}{(s + p_1)(s + p_2)\cdots(s + p_n)}$$

$$= b_0 + \frac{c_1}{s + p_1} + \frac{c_2}{s + p_2} + \cdots + \frac{c_n}{s + p_n} \quad (7.53)$$

From Equation 7.53, the State-Space model (A, B, C, D) for the diagonal canonical form is then extracted as follows:

$$A = \begin{bmatrix} -p_1 & 0 & 0 & \cdots & 0 \\ 0 & -p_2 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & -p_n \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$

$$C = \begin{bmatrix} c_1 & c_2 & \cdots & c_{n-1} & c_n \end{bmatrix}, \quad D = [b_0]$$

This is the Diagonal Canonical State-Space model for a generic dynamic system. The diagonal canonical form is also known as the modal canonical form. It plays an important role in the analysis of MIMO systems, where decoupling of the states becomes crucial.

7.6.4 JORDAN CANONICAL STATE-SPACE MODEL

The case where the denominator polynomial of Equation 7.52 involves multiple roots leads to the derivation of the Jordan canonical form. For this situation, the preceding diagonal canonical form is simply adjusted. Suppose, for example, that the poles p_i are different from one another, except that the first three poles are equal, that is, p_1 = p_2 = p_3. Then the factored form of the transfer function becomes

$$T(s) = \frac{Y(s)}{U(s)} = \frac{b(s)}{a(s)} = \frac{b_0s^n + b_1s^{n-1} + b_2s^{n-2} + \cdots + b_n}{(s + p_1)^3(s + p_4)(s + p_5)\cdots(s + p_n)}$$

$$= b_0 + \frac{c_1}{(s + p_1)^3} + \frac{c_2}{(s + p_1)^2} + \frac{c_3}{s + p_1} + \frac{c_4}{s + p_4} + \frac{c_5}{s + p_5} + \cdots + \frac{c_n}{s + p_n}$$

The State-Space model (A, B, C, D) for the Jordan canonical form is then extracted as follows:

$$A = \begin{bmatrix} -p_1 & 1 & 0 & 0 & \cdots & 0 \\ 0 & -p_1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & -p_1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & -p_4 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -p_n \end{bmatrix} \quad (7.54)$$

$$B = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} c_1 & c_2 & \cdots & c_{n-1} & c_n \end{bmatrix}, \quad D = [b_0]$$

This is the Jordan Canonical State-Space model for a generic dynamic system. It is an alternative to the diagonal canonical form and applies to asymmetrical systems with multiple-order eigenvalues, for which diagonal transformation is impossible.

Example 7.10 Consider the system given by the Transfer Function model

$$T(s) = \frac{s + 5}{s^2 + 5s + 4}$$

Develop the State-Space model in the (a) controllable canonical form, (b) observable canonical form, and (c) diagonal canonical form.


Solution 7.10 (a) Controllable Canonical Form:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -4 & -5 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)$$

$$y(t) = \begin{bmatrix} 5 & 1 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + [0]u(t)$$

(b) Observable Canonical Form:

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & -4 \\ 1 & -5 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 5 \\ 1 \end{bmatrix} u(t)$$

$$y(t) = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + [0]u(t)$$

(c) Diagonal Canonical Form: the partial-fraction expansion gives (s + 5)/[(s + 1)(s + 4)] = (4/3)/(s + 1) - (1/3)/(s + 4), so that

$$\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -4 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u(t)$$

$$y(t) = \begin{bmatrix} \dfrac{4}{3} & -\dfrac{1}{3} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} + [0]u(t)$$
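All three canonical models should realise the same transfer function. A plain-Python spot check (a sketch, not from the text) evaluates C(sI - A)^(-1)B + D for each realisation via the 2x2 adjugate formula:

```python
# Verify that the three canonical forms of Solution 7.10 all realise
# T(s) = (s + 5)/(s^2 + 5s + 4).

def T(A, B, C, D, s):
    a, b = s - A[0][0], -A[0][1]
    c, d = -A[1][0], s - A[1][1]
    det = a * d - b * c
    x0 = (d * B[0] - b * B[1]) / det      # (sI - A)^(-1) B
    x1 = (-c * B[0] + a * B[1]) / det
    return C[0] * x0 + C[1] * x1 + D

controllable = ([[0, 1], [-4, -5]], [0, 1], [5, 1], 0)
observable   = ([[0, -4], [1, -5]], [5, 1], [0, 1], 0)
diagonal     = ([[-1, 0], [0, -4]], [1, 1], [4 / 3, -1 / 3], 0)

for s in (1.0, 2.0, 0.3 + 1j):
    ref = (s + 5) / (s * s + 5 * s + 4)
    for model in (controllable, observable, diagonal):
        assert abs(T(*model, s) - ref) < 1e-9
```

Agreement at arbitrary sample points confirms that the three realisations are input-output equivalent.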

7.6.5 MATRIX EIGENVALUES AND EIGENVECTORS

The eigenvalues of an n × n matrix A are the roots of the characteristic equation

$$
|\lambda I - A| = 0.
$$

The eigenvalues are also called the characteristic roots. An eigenvector is a vector v that satisfies the matrix equation

$$
(\lambda I - A)\mathbf{v} = 0.
$$

It is instructive to note that only square matrices have eigenvalues and eigenvectors associated with them. To obtain both the eigenvalues and the eigenvectors, the eigenvalues must be determined first. Consider the following matrix:

$$
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}.
$$

The characteristic equation is obtained as follows:

$$
|\lambda I - A| = \begin{vmatrix} \lambda & -1 & 0 \\ 0 & \lambda & -1 \\ 6 & 11 & \lambda + 6 \end{vmatrix}
= \lambda^3 + 6\lambda^2 + 11\lambda + 6 = (\lambda+1)(\lambda+2)(\lambda+3).
$$

553

State-Space Design Methods

This means the characteristic equation is given by |λI − A| = 0, which implies (λ + 1)(λ + 2)(λ + 3) = 0. Hence, the eigenvalues of A, which are the roots of the characteristic equation, are −1, −2, and −3. To get the corresponding three eigenvectors vᵢ, consider the equation (λᵢI − A)vᵢ = 0, that is,

$$
\begin{bmatrix} \lambda_i & -1 & 0 \\ 0 & \lambda_i & -1 \\ 6 & 11 & \lambda_i + 6 \end{bmatrix}
\begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.
$$

Substituting λᵢ = −1, −2, and −3 separately into the preceding equation and solving for vᵢ in each case leads to the following eigenvectors:

$$
\mathbf{v}_1 = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}, \quad
\mathbf{v}_2 = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}, \quad
\mathbf{v}_3 = \begin{bmatrix} 1 \\ -3 \\ 9 \end{bmatrix}.
$$

Eigenvalues and eigenvectors have a significant role in the design and analysis of control systems. Specifically, the eigenvalues and eigenvectors of the system matrix A are of fundamental importance in establishing the system's response. They determine the relationship between the individual system state variables (the elements of the state-variable vector x(t)) and the stability of the system. Furthermore, the eigenvalues and eigenvectors can be used to calculate the matrix exponential of the system matrix through spectral decomposition.
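The hand computation above can be reproduced with NumPy (an illustrative sketch, not from the book):

```python
import numpy as np

# System matrix from the example above
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-6, -11, -6]])

# Roots of |lambda*I - A| = (lambda+1)(lambda+2)(lambda+3) = 0
eigvals, eigvecs = np.linalg.eig(A)
assert np.allclose(sorted(eigvals.real), [-3, -2, -1])
assert np.allclose(eigvals.imag, 0)

# Each eigenvector satisfies (lambda*I - A) v = 0, i.e. A v = lambda v.
# Check the hand-computed eigenvectors (scaled so the first entry is 1):
for lam, v in [(-1, [1, -1, 1]), (-2, [1, -2, 4]), (-3, [1, -3, 9])]:
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, lam * v)
```

Note that `np.linalg.eig` returns unit-norm eigenvectors; they differ from the hand-computed ones only by a scale factor, which is immaterial since eigenvectors are defined up to scaling.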

7.6.6 MATRIX DIAGONALISATION

Consider the n × n matrix A of the controllable canonical State-Space model derived in the preceding section:

$$
A = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1
\end{bmatrix}.
$$

If this matrix has distinct eigenvalues, then a Transformation Matrix P consisting of these eigenvalues is defined as follows:

$$
P = \begin{bmatrix}
1 & 1 & \cdots & 1 \\
\lambda_1 & \lambda_2 & \cdots & \lambda_n \\
\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_n^2 \\
\vdots & \vdots & & \vdots \\
\lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1}
\end{bmatrix}
$$

This Transformation Matrix is called the Vandermonde matrix. It then follows that the transformation x = Pz leads to a diagonal matrix A*, as follows:

$$
A^* = P^{-1}AP = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix} \tag{7.55}
$$

This means matrix diagonalisation can be used to transform any State-Space model into the Diagonal Canonical State-Space model. In this case, the Controllable Canonical State-Space model was converted to the Diagonal Canonical State-Space model. The rest of the model matrices are transformed as follows:

$$
B^* = P^{-1}B \tag{7.56}
$$
$$
C^* = CP \tag{7.57}
$$
$$
D^* = D \tag{7.58}
$$

Hence, the new transformed system takes the following form:

$$
\dot{\mathbf{z}}(t) = A^*\mathbf{z}(t) + B^*\mathbf{u}(t) \tag{7.59}
$$
$$
\mathbf{y}(t) = C^*\mathbf{z}(t) + D^*\mathbf{u}(t). \tag{7.60}
$$

The Transformation Matrix P is called a similarity Transformation Matrix. The dynamics of the system are preserved by similarity transformation; hence the eigenvalues, eigenvectors, poles, zeros, and the system characteristic equation are all preserved.

Example 7.11 For the following system ẋ(t) = Ax(t) + Bu(t), the plant matrix A is given as

$$
A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}.
$$

Find the Transformation Matrix P which will transform it to a Diagonal Canonical State-Space model. If the input matrix is

$$
B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},
$$

find the corresponding input matrix for the Diagonal Canonical State-Space model.

Solution 7.11 The given system matrix is already in controllable canonical form; therefore, to transform it to diagonal canonical form, the Transformation Matrix must be the Vandermonde matrix, which requires prior knowledge of the system eigenvalues. For the system matrix A, the eigenvalues are computed from its characteristic equation:

$$
|sI - A| = \begin{vmatrix} s & -1 & 0 \\ 0 & s & -1 \\ 6 & 11 & s+6 \end{vmatrix}
= s^3 + 6s^2 + 11s + 6 = (s+3)(s+2)(s+1).
$$


Therefore, the eigenvalues are λ₁ = −1, λ₂ = −2, λ₃ = −3, so, according to Equation 7.55, the Vandermonde matrix is

$$
P = \begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{bmatrix}.
$$

This transforms the system to diagonal canonical form according to Equations 7.55–7.58 as

$$
A^* = P^{-1}AP
= \begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{bmatrix}^{-1}
\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{bmatrix}
= \begin{bmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix}.
$$

Also, the input matrix B transforms to

$$
B^* = P^{-1}B
= \begin{bmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{bmatrix}^{-1}
\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} \tfrac{1}{2} \\ -1 \\ \tfrac{1}{2} \end{bmatrix}.
$$

The matrices A∗ , B∗ , C∗ , and D∗ constitute the Diagonal Canonical State-Space model of the system.

7.7 SIMILARITY TRANSFORMATIONS

The idea of similarity transformation has already been introduced in the preceding sections; here it is developed further in more detail. The state-space description of a system can be expressed in terms of another state variable without losing the system's input-output relationship. The transformation redefines the state variables and the system matrices while leaving the control and output signals unaltered. Since the transformation involves only the system matrices and states, preserving the input-output relationship of the system, system properties such as the transfer function, poles, zeros, and the characteristic equation remain unchanged. Such a transformation is known as a similarity transformation, and it requires pre-definition of the state Transformation Matrix P, which transforms any other state variable z(t) to x(t):

$$
\mathbf{x}(t) = P\mathbf{z}(t), \tag{7.61}
$$

so that

$$
\dot{\mathbf{x}}(t) = P\dot{\mathbf{z}}(t).
$$

Consider the general dynamic system State-Space model given by

$$
\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t) \tag{7.62}
$$
$$
\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t). \tag{7.63}
$$


Under similarity transformation, the dynamic state Equations 7.62 and 7.63 become

$$
P\dot{\mathbf{z}}(t) = AP\mathbf{z}(t) + B\mathbf{u}(t) \tag{7.64}
$$
$$
\mathbf{y}(t) = CP\mathbf{z}(t) + D\mathbf{u}(t), \tag{7.65}
$$

which can be rearranged as

$$
\dot{\mathbf{z}}(t) = P^{-1}AP\mathbf{z}(t) + P^{-1}B\mathbf{u}(t) \tag{7.66}
$$
$$
\mathbf{y}(t) = CP\mathbf{z}(t) + D\mathbf{u}(t). \tag{7.67}
$$

This state-space dynamic equation can be simplified by redefining the transformed system matrices as

$$
A^* = P^{-1}AP \tag{7.68}
$$
$$
B^* = P^{-1}B \tag{7.69}
$$
$$
C^* = CP \tag{7.70}
$$
$$
D^* = D, \tag{7.71}
$$

hence giving a system of the following form:

$$
\dot{\mathbf{z}}(t) = A^*\mathbf{z}(t) + B^*\mathbf{u}(t) \tag{7.72}
$$
$$
\mathbf{y}(t) = C^*\mathbf{z}(t) + D^*\mathbf{u}(t). \tag{7.73}
$$

This is the Transformed State-Space model for a general dynamic system. As stated before, since the dynamics of the system are preserved by similarity transformation, the eigenvalues, eigenvectors, poles, zeros, and the system characteristic equation are all preserved. Using such transformations, several State-Space models of the system can be defined and derived. In fact, all the Canonical State-Space models discussed in previous sections can be derived using similarity transformation. The case of the Diagonal Canonical State-Space model has already been demonstrated; the use of similarity transformation to derive the other three (controllable, observable, and Jordan) canonical forms will be briefly outlined in the next section. Knowing the Transformation Matrix P, the similarity transformation can be carried out in MATLAB using the command "ss2ss", whose syntax is [At, Bt, Ct, Dt] = ss2ss(A, B, C, D, P). For some of the special canonical forms discussed in the following sections, where the transformation matrices are well established, special commands besides "ss2ss" do exist, as will be seen next. For other forms, it remains for the user to determine the Transformation Matrix P in advance and use the "ss2ss" command.
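For readers working outside MATLAB, the similarity transformation is straightforward to sketch in NumPy (the function name below mirrors MATLAB's "ss2ss" for familiarity, but this is my illustrative version; note that MATLAB's own convention z = Tx corresponds to T = P⁻¹ here):

```python
import numpy as np

def ss2ss(A, B, C, D, P):
    """Similarity transformation x = P z: returns (A*, B*, C*, D*)
    per Equations 7.68-7.71."""
    Pinv = np.linalg.inv(P)
    return Pinv @ A @ P, Pinv @ B, C @ P, D

# Any invertible P preserves the input-output behaviour: check that the
# transfer function C (sI - A)^-1 B + D is unchanged at a test frequency.
A = np.array([[0.0, 1.0], [-4.0, -5.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[5.0, 1.0]])
D = np.array([[0.0]])
P = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary invertible matrix

At, Bt, Ct, Dt = ss2ss(A, B, C, D, P)
s = 2.0
g  = C  @ np.linalg.solve(s*np.eye(2) - A,  B) + D
gt = Ct @ np.linalg.solve(s*np.eye(2) - At, Bt) + Dt
assert np.allclose(g, gt)

# Eigenvalues (system poles) are also preserved, as stated in the text.
assert np.allclose(sorted(np.linalg.eigvals(At)),
                   sorted(np.linalg.eigvals(A)))
```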

7.8 CANONICAL STATE-SPACE MODELS: REVISITED

As already indicated, all the Canonical State-Space models discussed in this book can be derived using similarity transformation. This is briefly outlined here for the Controllable Canonical State-Space, Observable Canonical State-Space, and Jordan Canonical State-Space models.

7.8.1 CONTROLLABLE CANONICAL STATE-SPACE MODEL

As explained earlier, this form is very useful in the design of system controllers. For any given system representation A, B, C, and D, the similarity Transformation Matrix P, which puts the system in controllable canonical form, is given as the product of the Controllability Matrix

$$
C_M = \begin{bmatrix} B & AB & A^2B & A^3B & \cdots & A^{n-1}B \end{bmatrix}
$$

and the triangular matrix Q of the coefficients of the characteristic equation

$$
|sI - A| = s^n + a_n s^{n-1} + a_{n-1}s^{n-2} + \cdots + a_3 s^2 + a_2 s + a_1 = 0,
$$

where Q is an upper triangular matrix given by

$$
Q = \begin{bmatrix}
a_2 & a_3 & a_4 & \cdots & a_{n-1} & a_n & 1 \\
a_3 & a_4 & a_5 & \cdots & a_n & 1 & 0 \\
a_4 & a_5 & a_6 & \cdots & 1 & 0 & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\
a_{n-1} & a_n & 1 & \cdots & 0 & 0 & 0 \\
a_n & 1 & 0 & \cdots & 0 & 0 & 0 \\
1 & 0 & 0 & \cdots & 0 & 0 & 0
\end{bmatrix} \tag{7.74}
$$

so that

$$
P = C_M Q. \tag{7.75}
$$

The transformed system will then be in controllable canonical form, in which

$$
A^* = P^{-1}AP = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-a_1 & -a_2 & -a_3 & \cdots & -a_n
\end{bmatrix} \tag{7.76}
$$

Other system matrices are transformed according to Equations 7.68–7.71, and the states are transformed according to Equation 7.61. It is important to note that for SISO systems, the input matrix B* in the controllable canonical form takes the form

$$
B^* = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{7.77}
$$

while the output matrix C* becomes

$$
C^* = \begin{bmatrix} b_1 & b_2 & \cdots & b_{n-1} & b_n \end{bmatrix}. \tag{7.78}
$$

The block diagram corresponding to the controllable canonical form is shown in Figure 7.8. It can easily be seen from this figure that the control signal u(t) has some effect on each of the system states xᵢ, and hence the meaning of controllability. In MATLAB, there is no direct command for transforming a system to the controllable canonical form as discussed here. However, a number of tools are available in MATLAB, and their combination can lead to the controllable canonical form of a system. This includes the matrix manipulation power

Figure 7.8 System in controllable canonical form.

of MATLAB and other computational tools. The other tools available include the "canon" command, which, in companion form, returns the system matrix in companion (observable) canonical form, whose transpose is the controllable canonical form. Also, the "ctrbf" command returns a decomposition of the system matrices into the controllable and uncontrollable subspaces. It is highly recommended to consult the MATLAB users' manual on the use of the "canon" and "ctrbf" commands for system transformation to controllable canonical form.

Example 7.12 The coefficient matrices of the state equations in the dynamic system given by Equation 7.4 are

$$
A = \begin{bmatrix} 1 & 2 & 2 \\ 1 & 4 & 3 \\ 3 & 1 & 3 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad
C = \begin{bmatrix} 2 & 1 & 1 \end{bmatrix}.
$$

Transform the system to state controllable canonical form.

Solution 7.12 The characteristic equation of A is

$$
|sI - A| = \begin{vmatrix} s-1 & -2 & -2 \\ -1 & s-4 & -3 \\ -3 & -1 & s-3 \end{vmatrix}
= s^3 - 8s^2 + 8s + 1.
$$

Thus, the coefficients of the characteristic equation are a₁ = 1, a₂ = 8, and a₃ = −8. The matrix of the coefficients is

$$
Q = \begin{bmatrix} 8 & -8 & 1 \\ -8 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}.
$$

The Controllability Matrix of the system is

$$
C_M = \begin{bmatrix} B & AB & A^2B \end{bmatrix}
= \begin{bmatrix} 1 & 3 & 23 \\ 0 & 4 & 37 \\ 1 & 6 & 31 \end{bmatrix}.
$$

Therefore, from Equation 7.75, the Transformation Matrix P becomes

$$
P = C_M Q
= \begin{bmatrix} 1 & 3 & 23 \\ 0 & 4 & 37 \\ 1 & 6 & 31 \end{bmatrix}
\begin{bmatrix} 8 & -8 & 1 \\ -8 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}
= \begin{bmatrix} 7 & -5 & 1 \\ 5 & 4 & 0 \\ -9 & -2 & 1 \end{bmatrix}.
$$

Thus, from Equations 7.68–7.71, the transformed matrices A*, B*, and C* can be obtained for the controllable canonical form. This is left as an exercise for the reader.
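The exercise can be completed numerically; the following NumPy sketch (mine, not the book's) verifies P = CM·Q and that A* = P⁻¹AP lands in controllable canonical form:

```python
import numpy as np

A = np.array([[1, 2, 2], [1, 4, 3], [3, 1, 3]], dtype=float)
B = np.array([[1], [0], [1]], dtype=float)

# Controllability matrix CM = [B AB A^2B] and coefficient matrix Q
CM = np.hstack([B, A @ B, A @ A @ B])
Q = np.array([[8, -8, 1], [-8, 1, 0], [1, 0, 0]], dtype=float)

P = CM @ Q
assert np.allclose(P, [[7, -5, 1], [5, 4, 0], [-9, -2, 1]])

# A* should be the companion (controllable canonical) form whose last
# row is [-a1, -a2, -a3] = [-1, -8, 8], and B* should become [0, 0, 1]^T.
A_star = np.linalg.inv(P) @ A @ P
B_star = np.linalg.inv(P) @ B
assert np.allclose(A_star, [[0, 1, 0], [0, 0, 1], [-1, -8, 8]])
assert np.allclose(B_star.ravel(), [0, 0, 1])
```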

7.8.2 OBSERVABLE CANONICAL STATE-SPACE MODEL

This is a form of system presentation in which the system is completely observable, as defined in the preceding sections. The similarity Transformation Matrix that brings the system into observable canonical form is given as the inverse of the product of the characteristic-coefficient matrix Q and the Observability Matrix O_M, both of which have been defined in the preceding sections, with

$$
O_M = \begin{bmatrix} C \\ CA \\ CA^2 \\ CA^3 \\ \vdots \\ CA^{n-2} \\ CA^{n-1} \end{bmatrix}
$$

and Q the same upper triangular matrix of characteristic-equation coefficients given in Equation 7.74. The observable canonical form Transformation Matrix is then given as

$$
P = (QO_M)^{-1}. \tag{7.79}
$$

Hence, the transformed system can be deduced using the previous transformation formulae. Of particular interest is the plant matrix A* = P⁻¹AP, which takes the following general form:

$$
A^* = P^{-1}AP = \begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & -a_1 \\
1 & 0 & 0 & \cdots & 0 & -a_2 \\
0 & 1 & 0 & \cdots & 0 & -a_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & -a_{n-1} \\
0 & 0 & 0 & \cdots & 1 & -a_n
\end{bmatrix}. \tag{7.80}
$$

The input matrix becomes

$$
B^* = P^{-1}B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{bmatrix}, \tag{7.81}
$$

and the output matrix becomes

$$
C^* = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix}. \tag{7.82}
$$

The corresponding block diagram for the observable canonical form is shown in Figure 7.9. Note that the output y(t) is connected to each of the system states, which makes it possible to estimate the system states from the output measurements.

Figure 7.9 System in observable canonical form.

Example 7.13 Transform the system

$$
\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{u}, \quad \mathbf{y} = C\mathbf{x}
$$

into observable canonical form, given that the system matrices are

$$
A = \begin{bmatrix} 1 & 2 & 2 \\ 1 & 4 & 3 \\ 3 & 1 & 3 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad
C = \begin{bmatrix} 2 & 1 & 1 \end{bmatrix}.
$$

Solution 7.13 The Observability Matrix for this system is

$$
O_M = \begin{bmatrix} 2 & 1 & 1 \\ 6 & 9 & 10 \\ 45 & 58 & 69 \end{bmatrix}.
$$

From the previous example, for this system,

$$
Q = \begin{bmatrix} 8 & -8 & 1 \\ -8 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}.
$$

Thus, the observable Transformation Matrix P is

$$
P = (QO_M)^{-1} = \frac{1}{61}\begin{bmatrix} 1 & -3 & 9 \\ -14 & -19 & -4 \\ 12 & 25 & 47 \end{bmatrix}.
$$

Thus, the observable canonical form becomes

$$
A^* = P^{-1}AP = \begin{bmatrix} 0 & 0 & -1 \\ 1 & 0 & -8 \\ 0 & 1 & 8 \end{bmatrix}, \quad
B^* = P^{-1}B = \begin{bmatrix} 10 \\ -8 \\ 3 \end{bmatrix}, \quad
C^* = CP = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}.
$$

Example 7.14 In this example, the objective is to demonstrate the effect of trying to transform an unobservable system to observable canonical form. Consider the system

$$
A = \begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 2 \end{bmatrix},
$$

which has been identified before as unobservable.

Solution 7.14 The characteristic equation of A is

$$
\psi(s) = s^2 - 6s + 5,
$$


so that

$$
Q = \begin{bmatrix} -6 & 1 \\ 1 & 0 \end{bmatrix}.
$$

The Observability Matrix has been determined before and is given by

$$
O_M = \begin{bmatrix} 1 & 2 \\ 5 & 10 \end{bmatrix};
$$

therefore, the observable Transformation Matrix becomes

$$
P = (QO_M)^{-1}
= \left( \begin{bmatrix} -6 & 1 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 5 & 10 \end{bmatrix} \right)^{-1}
= \begin{bmatrix} -1 & -2 \\ 1 & 2 \end{bmatrix}^{-1}.
$$

From here, it is clear that the matrix QO_M is singular; therefore, it is not invertible, in which case no Transformation Matrix P for the observable canonical form can be formed. This is why the system is said to be unobservable.
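The contrast between Examples 7.13 and 7.14 can be checked numerically; a NumPy sketch (the helper name obsv mirrors MATLAB's command of the same name, but this version is mine):

```python
import numpy as np

def obsv(A, C):
    """Observability matrix OM = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Example 7.13 system: OM has full rank, so P = (Q OM)^-1 exists.
A1 = np.array([[1, 2, 2], [1, 4, 3], [3, 1, 3]], dtype=float)
C1 = np.array([[2, 1, 1]], dtype=float)
assert np.linalg.matrix_rank(obsv(A1, C1)) == 3

# Example 7.14 system: OM is rank-deficient, so Q*OM is singular and no
# observable canonical form exists.
A2 = np.array([[3, 4], [1, 3]], dtype=float)
C2 = np.array([[1, 2]], dtype=float)
assert np.linalg.matrix_rank(obsv(A2, C2)) == 1
```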

7.8.3 JORDAN CANONICAL STATE-SPACE MODEL

This is an alternative to the diagonal canonical form for asymmetrical systems with multiple-order eigenvalues, for which diagonal transformation is impossible. The Jordan canonical form (JCF) is a block diagonal matrix in which each diagonal block corresponds to a repeated eigenvalue, with the block dimension equal to that eigenvalue's multiplicity. The main diagonal elements of the JCF are the eigenvalues (with their multiplicities); for the repeated eigenvalues, the entry above the main diagonal is a 1, and the rest of the entries are 0s. A typical structure of the Jordan canonical form is

$$
\begin{bmatrix}
-p_1 & 1 & 0 & 0 & 0 & \cdots & 0 \\
0 & -p_1 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & -p_1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 0 & -p_4 & 0 & \cdots & 0 \\
0 & 0 & 0 & 0 & -p_5 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & 0 & \cdots & -p_n
\end{bmatrix} \tag{7.83}
$$

The transformation to Jordan canonical form is done by a matrix formed by augmentation of the system of generalised eigenvectors p₁, p₂, p₃, ..., pₙ, so that

$$
T = \begin{bmatrix} p_1 & p_2 & p_3 & \cdots & p_n \end{bmatrix}. \tag{7.84}
$$


Since the idea of generalised eigenvectors can be confusing, it is worth recalling some ideas from linear algebra. It is known that if A is an n × n matrix with n eigenvalues λᵢ, i = 1, 2, 3, ..., n, its non-zero eigenvectors pᵢ must satisfy the equation

$$
(\lambda_i I - A)p_i = 0. \tag{7.85}
$$

This results in n eigenvectors, each corresponding to one eigenvalue. If, however, A has multiple-order eigenvalues and is non-symmetric, not all the eigenvectors can be found from Equation 7.85: it yields fewer eigenvectors than the n required for a matrix of order n. The remaining eigenvectors corresponding to each λⱼ of m-order multiplicity are found by using the m − 1 generalised eigenvector equations

$$
(\lambda_j I - A)p_{j+q} = p_{j+q-1}, \quad q = 1, 2, \ldots, (m-1). \tag{7.86}
$$

The chain in Equation 7.86 is seeded by the ordinary eigenvector pⱼ obtained from Equation 7.85.
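A small NumPy illustration of a generalised eigenvector chain (mine, not the book's; it uses the equivalent sign convention (A − λI)p₂ = p₁, which yields the standard +1 on the superdiagonal):

```python
import numpy as np

# Defective matrix: eigenvalue 2 with multiplicity 2 but only one
# independent eigenvector, so a generalised eigenvector is needed.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0

# Ordinary eigenvector: (A - lam*I) p1 = 0  ->  p1 = [1, 0]
p1 = np.array([1.0, 0.0])
assert np.allclose((A - lam*np.eye(2)) @ p1, 0)

# Generalised eigenvector from the chain (A - lam*I) p2 = p1:
p2, *_ = np.linalg.lstsq(A - lam*np.eye(2), p1, rcond=None)
assert np.allclose((A - lam*np.eye(2)) @ p2, p1)

# Augmenting the chain gives T = [p1 p2]; T^-1 A T is the Jordan block.
T = np.column_stack([p1, p2])
J = np.linalg.inv(T) @ A @ T
assert np.allclose(J, [[2.0, 1.0], [0.0, 2.0]])
```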

7.9 TRANSFER FUNCTION DIRECT DECOMPOSITION

When a transfer function is given in the classical rational form, one may wish to decompose it to one of the state-space canonical forms without having to go back to the time domain. Three decomposition approaches are possible, all of which require the preparation of signal flow charts, depending on how the transfer function is given. These are direct decomposition, cascade decomposition, and parallel decomposition. This section presents the principles of the Direct Decomposition method. Direct decomposition takes advantage of the fact that the coefficients of the numerator and denominator of the transfer function can, by inspection, be used to form the observable or controllable canonical form. The method constructs the signal flow chart from the transfer function, from which the appropriate canonical form can be extracted. The method for constructing the signal flow chart for the controllable canonical form differs from that for the observable canonical form. The two methods are discussed here.

7.9.1 DECOMPOSITION TO CONTROLLABLE CANONICAL FORM

For a transfer function

$$
G(s) = \frac{Y(s)}{U(s)} = \frac{N(s)}{D(s)},
$$

this goes through four stages:

(1) Express G(s) in negative powers of s by multiplying the numerator and denominator by s⁻ⁿ:

$$
\frac{Y(s)}{U(s)} = \frac{N(s)s^{-n}}{D(s)s^{-n}}
$$

(2) Multiply N(s) and D(s) by a dummy variable x(s) representing the state variables:

$$
\frac{Y(s)}{U(s)} = \frac{N(s)s^{-n}x(s)}{D(s)s^{-n}x(s)}
$$

(3) Equate the numerators and the denominators of both sides, thereby forming two equations:

$$
Y(s) = N(s)s^{-n}x(s), \quad U(s) = D(s)s^{-n}x(s)
$$

(4) Construct the signal flow chart using the two equations, forming the states using xᵢ = s⁻ⁱx(s), where i = 1, 2, ..., n.
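The "by inspection" outcome of this decomposition can be sketched in code. The following is a NumPy sketch under the assumption of a strictly proper transfer function with a monic denominator (the helper name tf2ccf is mine, not the book's); it reproduces Example 7.10(a):

```python
import numpy as np

def tf2ccf(num, den):
    """Controllable canonical form from T(s) = num(s)/den(s).
    num, den: coefficients in descending powers of s; den is monic and
    deg(num) < deg(den), so there is no direct feedthrough (D = 0)."""
    n = len(den) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)           # superdiagonal of ones
    A[-1, :] = -np.array(den[1:][::-1])  # last row: -a1, -a2, ..., -an
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n))
    C[0, :len(num)] = np.array(num[::-1])  # ascending-power numerator
    D = np.array([[0.0]])
    return A, B, C, D

# T(s) = (s + 5)/(s^2 + 5s + 4), as in Example 7.10(a)
A, B, C, D = tf2ccf([1, 5], [1, 5, 4])
assert np.allclose(A, [[0, 1], [-4, -5]])
assert np.allclose(B, [[0], [1]])
assert np.allclose(C, [[5, 1]])
```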

7.9.2 DECOMPOSITION TO OBSERVABLE CANONICAL FORM

This goes through three stages:

(1) Express G(s) in negative powers of s:

$$
\frac{Y(s)}{U(s)} = \frac{N(s)s^{-n}}{D(s)s^{-n}}
$$

(2) Cross-multiply N(s) and D(s) by U(s) and Y(s), respectively, and express Y(s) in terms of ascending negative powers of s, Y(s) itself, and U(s).

(3) Draw the corresponding signal flow chart.

Cascade decomposition is a special form of direct decomposition that applies to a transfer function presented in pole-zero form. A separate signal flow chart is made for each factor of the pole-zero transfer function, and the charts are then connected in cascade to complete the signal flow chart for the whole system. Parallel decomposition applies when the transfer function is expressed as a sum of its partial fractions. As such, it can be applied to any system for which the denominator of the transfer function has been factored. A separate signal flow graph is made for each of the partial fractions, and the overall signal flow graph for the system is a parallel combination of the individual graphs.

7.10 FULL STATE FEEDBACK CONTROL

State feedback control has been widely applied in control systems, not only because of its simplicity but also because of its flexibility in handling system dynamics for disturbance rejection and stability. Under state feedback control, the control action is achieved by feeding back a linear combination of the system states to the reference input r(t) through a feedback matrix K, as shown in Figure 7.10, to produce the control input u(t). In this way, the control signal u(t) becomes

$$
\mathbf{u}(t) = -K\mathbf{x}(t) + \mathbf{r}(t). \tag{7.87}
$$

As discussed in Chapter 4, there are two types of control systems: a tracking system and a regulator system. The preceding equation is for a tracking system, where there is a reference input. For the case of regulators, where r(t) = 0, the control signal will be

$$
\mathbf{u}(t) = -K\mathbf{x}(t). \tag{7.88}
$$

Figure 7.10 Block diagram for the state feedback control.


The full state feedback control scheme assumes that all the system states xᵢ(t) in the state vector x(t) are available for feedback. However, it may not be possible to have all the system states in real practice because of the large number or the cost of the sensors needed to measure them; some of these states might require costly and specialised sensors. Even when all states are available for measurement, there is still the problem of measurement noise. Furthermore, in some cases it is desirable to carry out state transformations to enable design insight, but the new states may have no physical meaning and cannot be measured. All these issues lead to state estimation, in which the whole state vector can be estimated using data obtained from measurements of only a few states of the state vector, or, more usually, from the output. As such, the control design is normally carried out in two stages:

(a) Control law design, which assumes that all states of the system are available for feedback.

(b) Observer (or estimator) design, which estimates the entire state vector from measurements of the output y(t) = Cx(t) + Du(t).

Together, the estimator and the control law provide the full state feedback control, and the structure of such control is shown in Figure 7.11. This section addresses the first stage of the full state feedback control design with the assumption that the entire state vector is available for feedback. Estimator design is discussed in later sections. With the control input u(t) given by Equation 7.87, the state equation of the system in the closed loop becomes

$$
\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B[-K\mathbf{x}(t) + \mathbf{r}(t)] = (A - BK)\mathbf{x}(t) + B\mathbf{r}(t). \tag{7.89}
$$

If the system is completely controllable, then a feedback matrix K exists for which the characteristic equation of the closed-loop system becomes

$$
|sI - A + BK| = 0 \tag{7.90}
$$

(recall Equation 7.9). The task of the control design becomes that of determining the elements of the matrix K. The most popular design technique for the determination of K for SISO systems is the pole placement method, which is discussed in the next section. It should be noted that for SISO systems, B is a column vector while K is a row vector; therefore, the product BK is an outer product of the vectors B and K, which results in a matrix, so that matrix algebra rules are still observed in Equations 7.89 and 7.90.
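To make the dimensional remark concrete, a short NumPy sketch (with an arbitrary illustrative gain, not a designed one):

```python
import numpy as np

# For a SISO system, B is a column vector (n x 1) and K a row vector
# (1 x n), so BK is an outer product giving an n x n matrix, and
# A - BK in Equations 7.89 and 7.90 is dimensionally consistent.
A = np.array([[2.0, 1.0, 1.0], [2.0, 3.0, 4.0], [-1.0, -1.0, -2.0]])
B = np.array([[1.0], [2.0], [1.0]])
K = np.array([[1.0, 0.0, 2.0]])   # arbitrary gain, for shape-checking only

BK = B @ K
assert BK.shape == (3, 3)
assert (A - BK).shape == (3, 3)
```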

Figure 7.11 Combined estimator and control law in full state feedback control.

7.10.1 POLE PLACEMENT DESIGN METHOD

As has already been explained, the placement of the closed-loop poles is called the pole-placement problem. Pole placement is a method that seeks to place the poles of the closed-loop system at predetermined locations. However, the poles can only be placed if the system is completely state controllable. This is the major significance of controllability analysis. Although this method has some drawbacks in handling complex systems, it is still sufficient for most small control systems, and it gives the best introduction to the design of complex systems. The basic concept behind the method is to find the K that satisfies the closed-loop characteristic equation in Equation 7.90 at desired pole locations sᵢ, i = 1, 2, ..., n. In principle, the poles can be placed anywhere. However, the positions of the poles are directly related to the system's bandwidth, rise time, and settling time. Hence, if a pole is placed such that excessive bandwidth or settling time is demanded, the system requires excessive control effort, which might be neither practical nor achievable. Implementation of the pole placement method is described here through an illustrative regulator case, i.e., with no reference input. (The reference input will be added after discussing state estimators.) Suppose the system

$$
\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)
$$

is to be controlled by full state feedback such that u(t) = −Kx(t), where the closed-loop poles are placed at locations p₁, p₂, p₃, ..., pₙ. This means that the required closed-loop characteristic equation of the controlled system is given by

$$
\psi(s) = (s - p_1)(s - p_2)(s - p_3)\cdots(s - p_n) = 0, \tag{7.91}
$$

which can be expanded as

$$
\psi(s) = s^n + q_n s^{n-1} + q_{n-1}s^{n-2} + \cdots + q_3 s^2 + q_2 s + q_1 = 0.
$$

Let the system matrix A and the input matrix B, respectively, be

$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
a_{31} & a_{32} & \cdots & a_{3n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}, \quad
B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{bmatrix}.
$$

Therefore, if the feedback matrix K is

$$
K = \begin{bmatrix} k_1 & k_2 & k_3 & \cdots & k_n \end{bmatrix}, \tag{7.92}
$$


then the closed-loop system has the following system matrix:

$$
A - BK = \begin{bmatrix}
a_{11} - b_1k_1 & a_{12} - b_1k_2 & \cdots & a_{1n} - b_1k_n \\
a_{21} - b_2k_1 & a_{22} - b_2k_2 & \cdots & a_{2n} - b_2k_n \\
a_{31} - b_3k_1 & a_{32} - b_3k_2 & \cdots & a_{3n} - b_3k_n \\
\vdots & \vdots & & \vdots \\
a_{n1} - b_nk_1 & a_{n2} - b_nk_2 & \cdots & a_{nn} - b_nk_n
\end{bmatrix},
$$

whose characteristic function is

$$
\psi(s) = |sI - A + BK| = \begin{vmatrix}
s - a_{11} + b_1k_1 & -a_{12} + b_1k_2 & \cdots & -a_{1n} + b_1k_n \\
-a_{21} + b_2k_1 & s - a_{22} + b_2k_2 & \cdots & -a_{2n} + b_2k_n \\
\vdots & \vdots & & \vdots \\
-a_{n1} + b_nk_1 & -a_{n2} + b_nk_2 & \cdots & s - a_{nn} + b_nk_n
\end{vmatrix} = 0. \tag{7.93}
$$

Comparison of this characteristic equation with the demanded one in Equation 7.91 can lead to the determination of the values of kᵢ and hence the matrix K. Clearly, the algebra behind such a problem is very cumbersome and might in some cases be insoluble. If, on the other hand, the system (A, B) is controllable, the closed-loop system can be expressed in its controllable canonical form as

$$
A^* - B^*K^* = \begin{bmatrix}
a^*_{11} - b^*_1k^*_1 & a^*_{12} - b^*_1k^*_2 & \cdots & a^*_{1n} - b^*_1k^*_n \\
a^*_{21} - b^*_2k^*_1 & a^*_{22} - b^*_2k^*_2 & \cdots & a^*_{2n} - b^*_2k^*_n \\
\vdots & \vdots & & \vdots \\
a^*_{n1} - b^*_nk^*_1 & a^*_{n2} - b^*_nk^*_2 & \cdots & a^*_{nn} - b^*_nk^*_n
\end{bmatrix}.
$$

Matrices A* and B* are calculated using Equations 7.68 and 7.69. For SISO systems, they have been found to be as given in Equations 7.76 and 7.77. In this case, the closed-loop characteristic function becomes

$$
|sI - A^* + B^*K^*| = \begin{vmatrix}
s & -1 & 0 & \cdots & 0 \\
0 & s & -1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & -1 \\
a_1 + k^*_1 & a_2 + k^*_2 & a_3 + k^*_3 & \cdots & s + a_n + k^*_n
\end{vmatrix},
$$

whose expansion can easily be determined to be

$$
\psi(s) = s^n + (a_n + k^*_n)s^{n-1} + (a_{n-1} + k^*_{n-1})s^{n-2} + \cdots + (a_3 + k^*_3)s^2 + (a_2 + k^*_2)s + (a_1 + k^*_1).
$$

A comparison of this equation with the demanded one in Equation 7.91 shows that

$$
a_i + k^*_i = q_i, \quad i = 1, 2, 3, \ldots, n,
$$

from which the elements of the feedback matrix can be computed as

$$
k^*_i = q_i - a_i, \quad i = 1, 2, 3, \ldots, n \tag{7.94}
$$


or, in vector form,

$$
K^* = q - a \tag{7.95}
$$

where

$$
q = \begin{bmatrix} q_1 & q_2 & \cdots & q_n \end{bmatrix} \tag{7.96}
$$
$$
a = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}. \tag{7.97}
$$

It is emphasised again that this procedure applies only to SISO systems in controllable canonical form, and that the order of the elements in the vectors a, q, and K* is as shown above; improper ordering of the elements will give wrong results. This matrix K* is the feedback gain for the system in controllable canonical form, i.e., the control effort is such that

$$
\mathbf{u}(t) = -K^*\mathbf{x}^*(t), \tag{7.98}
$$

where, from Equation 7.61, x*(t) = P⁻¹x(t). Therefore, for the original system (not in controllable canonical form), this control effort becomes

$$
\mathbf{u}(t) = -K^*P^{-1}\mathbf{x}(t) \tag{7.99}
$$

so that the corresponding feedback gain matrix K is

$$
K = K^*P^{-1}. \tag{7.100}
$$

In MATLAB, pole placement design is accomplished using the command "place(A,B,P)", which computes the state feedback matrix K such that the eigenvalues of A − BK are those specified in the vector P. Whenever it is used, the matrix algebra rules must be observed regarding the dimensions of A, B, and P, so that the expression A − BK remains valid. This means that P and B must be n-dimensional vectors, where A is an n × n square matrix. If complex eigenvalues are required, they must appear in consecutive complex conjugate pairs in P. The "place" syntax is: K = place(A, B, P).

Example 7.15 For the system matrices given as

$$
A = \begin{bmatrix} 2 & 1 & 1 \\ 2 & 3 & 4 \\ -1 & -1 & -2 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix},
$$

design a state feedback controller K to place the poles of the system at s = −2, −5, −6. Confirm the results using MATLAB.

Solution 7.15 The given system is not in controllable canonical form. To transform it into controllable canonical form using the methods discussed previously, the Transformation Matrix P is needed; with this Transformation Matrix, the whole exercise of explicitly transforming the system can be skipped, as will be seen in this example. First, get the characteristic equation of the uncontrolled system:

$$
|sI - A| = 0 \implies s^3 - 3s^2 - s + 3 = 0,
$$

from which the vector a of the coefficients is found to be

$$
a = \begin{bmatrix} 3 & -1 & -3 \end{bmatrix}.
$$


Now the controlled system needs to have its poles at −2, −5, and −6; thus the characteristic equation of the closed-loop system becomes

$$
(s+2)(s+5)(s+6) = 0,
$$

which expands to

$$
s^3 + 13s^2 + 52s + 60 = 0.
$$

Therefore, the vector q of the coefficients of the controlled system becomes

$$
q = \begin{bmatrix} 60 & 52 & 13 \end{bmatrix},
$$

which, according to Equation 7.95, gives the gain matrix K* for the system in canonical form as

$$
K^* = q - a = \begin{bmatrix} 60 & 52 & 13 \end{bmatrix} - \begin{bmatrix} 3 & -1 & -3 \end{bmatrix}
= \begin{bmatrix} 57 & 53 & 16 \end{bmatrix}.
$$

This is the canonical gain matrix. To convert it into the gain matrix K of the original state-space system, the Transformation Matrix P is needed, where P = C_M Q, so that K can be calculated using Equation 7.100. Now, from the characteristic equation of the uncontrolled system, the coefficient matrix is

$$
Q = \begin{bmatrix} -1 & -3 & 1 \\ -3 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix},
$$

and the Controllability Matrix of the system is

$$
C_M = \begin{bmatrix} B & AB & A^2B \end{bmatrix}
= \begin{bmatrix} 1 & 5 & 17 \\ 2 & 12 & 26 \\ 1 & -5 & -7 \end{bmatrix}.
$$

Therefore, the Transformation Matrix P is

$$
P = C_M Q
= \begin{bmatrix} 1 & 5 & 17 \\ 2 & 12 & 26 \\ 1 & -5 & -7 \end{bmatrix}
\begin{bmatrix} -1 & -3 & 1 \\ -3 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & 2 & 1 \\ -12 & 6 & 2 \\ 7 & -8 & 1 \end{bmatrix},
$$


and the inverse is

$$
P^{-1} = \frac{1}{64}\begin{bmatrix} 11 & -5 & -1 \\ 13 & -3 & -7 \\ 27 & 11 & 15 \end{bmatrix}.
$$

Therefore, the required feedback gain is

$$
K = K^*P^{-1} = \begin{bmatrix} 57 & 53 & 16 \end{bmatrix}
\cdot \frac{1}{64}\begin{bmatrix} 11 & -5 & -1 \\ 13 & -3 & -7 \\ 27 & 11 & 15 \end{bmatrix},
$$

which gives

$$
K = \begin{bmatrix} 27.3125 & -4.1875 & -2.9375 \end{bmatrix}.
$$

Note that the system matrix A in controllable canonical form becomes

$$
A^* = P^{-1}AP = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -3 & 1 & 3 \end{bmatrix}.
$$

In MATLAB, this problem can be solved as follows:

A = [2 1 1; 2 3 4; -1 -1 -2];
B = [1; 2; 1];
P = [-2 -5 -6];
K = place(A, B, P)
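The hand computation can also be verified without MATLAB; a NumPy sketch (mine, not the book's) reproducing K via Equations 7.95 and 7.100 and confirming the closed-loop poles:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0], [2.0, 3.0, 4.0], [-1.0, -1.0, -2.0]])
B = np.array([[1.0], [2.0], [1.0]])

# K* = q - a (coefficients in ascending order, as in Equation 7.95)
a = np.array([3.0, -1.0, -3.0])    # from s^3 - 3s^2 - s + 3
q = np.array([60.0, 52.0, 13.0])   # from s^3 + 13s^2 + 52s + 60
K_star = q - a                     # [57, 53, 16]

# P = CM Q, then K = K* P^-1 (Equation 7.100)
CM = np.hstack([B, A @ B, A @ A @ B])
Q = np.array([[-1.0, -3.0, 1.0], [-3.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
K = (K_star @ np.linalg.inv(CM @ Q)).reshape(1, -1)
assert np.allclose(K, [[27.3125, -4.1875, -2.9375]])

# The closed-loop poles of A - BK are the requested -2, -5, -6.
poles = np.linalg.eigvals(A - B @ K)
assert np.allclose(np.sort(poles.real), [-6.0, -5.0, -2.0])
assert np.allclose(poles.imag, 0)
```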

7.10.2 POLE PLACEMENT USING ACKERMANN'S FORMULA

The pole placement method described in the previous section is cumbersome in that it requires the system to be converted to controllable canonical form, and the gain matrix K* is computed element by element. Ackermann's formula offers an elegant way of determining the feedback matrix K for pole placement using information from the required closed-loop characteristic equation, without converting the system to controllable canonical form. It gives a direct relationship between the closed-loop characteristic equation

$$
\psi(s) = s^n + q_n s^{n-1} + q_{n-1}s^{n-2} + q_{n-2}s^{n-3} + \cdots + q_3 s^2 + q_2 s + q_1 = 0 \tag{7.101}
$$

and the system matrix A (not in controllable canonical form), so that, using the earlier relationship between kᵢ, aᵢ, and qᵢ, the matrix K is obtained directly, where

$$
K^* = q - a. \tag{7.102}
$$


State-Space Design Methods

The derivation of Ackermann's formula originates from the Cayley-Hamilton theorem, which states that every square matrix satisfies its own characteristic equation. For matrix A, whose characteristic equation is given by

ψ(s) = sⁿ + aₙsⁿ⁻¹ + aₙ₋₁sⁿ⁻² + aₙ₋₂sⁿ⁻³ + ··· + a₃s² + a₂s + a₁ = 0,    (7.103)

then according to the Cayley-Hamilton theorem

ψ(A) = Aⁿ + aₙAⁿ⁻¹ + aₙ₋₁Aⁿ⁻² + aₙ₋₂Aⁿ⁻³ + ··· + a₃A² + a₂A + a₁I = 0.

It is assumed that A is not in controllable canonical form; therefore, the first task would be to convert it to controllable canonical form (A*, B*) by the similarity transformations

A* = P⁻¹AP    (7.104)

B* = P⁻¹B.    (7.105)

The state vector will also be transformed to

x* = P⁻¹x.    (7.106)

Therefore

ψ(A*) = A*ⁿ + aₙA*ⁿ⁻¹ + aₙ₋₁A*ⁿ⁻² + ··· + a₃A*² + a₂A* + a₁I = 0.

Similarly, for the controlled closed-loop system, whose characteristic equation is given in Equation 7.101, the theorem will be satisfied as follows:

ψ(A*−B*K*) = (A*−B*K*)ⁿ + qₙ(A*−B*K*)ⁿ⁻¹ + ··· + q₃(A*−B*K*)² + q₂(A*−B*K*) + q₁I = 0.    (7.107)

However, since there is no a priori information about the matrix A*−B*K*, this equation is not immediately useful. Instead, the method is simplified by letting B*K* = 0, which results in

ψ(A*) = A*ⁿ + qₙA*ⁿ⁻¹ + qₙ₋₁A*ⁿ⁻² + ··· + q₂A* + q₁I.    (7.108)

This step is done for mathematical convenience; in fact, Equation 7.108 is no longer equal to zero. From the Cayley-Hamilton relation ψ(A*) = 0 above, A*ⁿ can be written as

A*ⁿ = −(aₙA*ⁿ⁻¹ + aₙ₋₁A*ⁿ⁻² + ··· + a₃A*² + a₂A* + a₁I),    (7.109)

which, when used in Equation 7.108 above, gives

ψ(A*) = (qₙ − aₙ)A*ⁿ⁻¹ + (qₙ₋₁ − aₙ₋₁)A*ⁿ⁻² + ··· + (q₂ − a₂)A* + (q₁ − a₁)I.    (7.110)

If e₁ is the vector of the first column of an identity matrix, i.e.,

e₁ = [1  0  0  ···  0]ᵀ,


then, because of the structure of A*,

e₁ᵀA*    = [0  1  0  ···  0] = e₂ᵀ
e₁ᵀA*²   = e₂ᵀA* = [0  0  1  ···  0] = e₃ᵀ
e₁ᵀA*³   = e₃ᵀA* = e₄ᵀ
   ⋮
e₁ᵀA*ⁿ⁻¹ = eₙ₋₁ᵀA* = eₙᵀ.    (7.111)

Using this result and multiplying Equation 7.110 by e₁ᵀ gives

e₁ᵀψ(A*) = (qₙ − aₙ)e₁ᵀA*ⁿ⁻¹ + (qₙ₋₁ − aₙ₋₁)e₁ᵀA*ⁿ⁻² + ··· + (q₂ − a₂)e₁ᵀA* + (q₁ − a₁)e₁ᵀI
         = (qₙ − aₙ)eₙᵀ + (qₙ₋₁ − aₙ₋₁)eₙ₋₁ᵀ + ··· + (q₂ − a₂)e₂ᵀ + (q₁ − a₁)e₁ᵀ
         = [(q₁ − a₁)  (q₂ − a₂)  ···  (qₙ₋₁ − aₙ₋₁)  (qₙ − aₙ)].    (7.112)

Since it was shown in the previous section that q − a = K*, then

e₁ᵀψ(A*) = [k₁  k₂  ···  kₙ] = K*,    (7.113)

where K* is defined for the system in controllable canonical form. Hence, the control input u(t) becomes

u(t) = −K*x* = −K*P⁻¹x.    (7.114)

Therefore, since the controller is configured in terms of the original state vector x while K* is in canonical form, the required system gain is

K = K*P⁻¹ = e₁ᵀψ(A*)P⁻¹.    (7.115)

Since A* = P⁻¹AP (a similarity transformation), then A*ⁿ = P⁻¹AⁿP, so that

ψ(A*) = P⁻¹ψ(A)P.    (7.116)

Hence, Equation 7.115 becomes

K = e₁ᵀP⁻¹ψ(A).    (7.117)

From the relation P = CM Q it follows that

P⁻¹ = (CM Q)⁻¹ = Q⁻¹(CM)⁻¹,    (7.118)

which, when used in Equation 7.117 above, gives for K

K = e₁ᵀQ⁻¹(CM)⁻¹ψ(A).    (7.119)


It is interesting to note that e₁ᵀQ⁻¹ = eₙᵀ (the reader may verify this), so that Ackermann's formula takes the following form:

K = eₙᵀ(CM)⁻¹ψ(A)    (7.120)

or, by letting eₙᵀ(CM)⁻¹ = bᵀ, this formula can be written as

K = bᵀψ(A).    (7.121)

Though attractive, the application of Ackermann's formula is limited to SISO systems only, where the assumptions for A* and B* hold. MATLAB implements Ackermann's formula through the command "acker(A,B,P)". This command is similar in all respects to the command "place" discussed in the previous section. However, it is known to be numerically unreliable, particularly for problems of order greater than 10 or for weakly controllable systems. As such, it should be avoided for problems of that kind. Its syntax is K = acker(A, B, P).

Example 7.16 Consider the system described by

    [  2   1   1 ]        [ 1 ]
A = [  2   3   4 ] ,  B = [ 2 ] .
    [ −1  −1  −2 ]        [ 1 ]

Use Ackermann's formula to design a state feedback controller K so that the characteristic equation of the system becomes s³ + 13s² + 52s + 60 = 0.

Solution 7.16 The Controllability Matrix for the given system is

CM = [ B | AB | A²B ]

     [ 1    5   17 ]
   = [ 2   12   26 ] ,
     [ 1   −5   −7 ]

and hence, the inverse is

         [ −0.3594   0.3906   0.5781 ]
(CM)⁻¹ = [ −0.3125   0.1875  −0.0625 ] .
         [  0.1719  −0.0781  −0.0156 ]

The Cayley-Hamilton matrix of A for the closed-loop system is given by

ψ(A) = A³ + 13A² + 52A + 60I

       [ 243  117  117 ]
     = [ 202  328  308 ] .
       [ −85  −85  −65 ]

Therefore, direct application of Ackermann's formula, K = eₙᵀ(CM)⁻¹ψ(A), gives

                [ 1    5   17 ]⁻¹ [ 243  117  117 ]
K = [0  0  1] × [ 2   12   26 ]   [ 202  328  308 ] ,
                [ 1   −5   −7 ]   [ −85  −85  −65 ]

which gives the gain matrix as

K = [27.313  −4.1875  −2.9375].

It will be realised that this is the same problem as in the previous example. To minimise the computational round-off error, the inverse of CM was used in its exact form to show the exactness of the results for the two methods. However, using the computed (rounded) inverse of CM gives the same results with some round-off error, as shown below:

                [ −0.3594   0.3906   0.5781 ] [ 243  117  117 ]
K = [0  0  1] × [ −0.3125   0.1875  −0.0625 ] [ 202  328  308 ]
                [  0.1719  −0.0781  −0.0156 ] [ −85  −85  −65 ]

  = [27.322  −4.1785  −2.9285].

In MATLAB, Ackermann's formula could be used to solve this problem as shown below, giving the same results:

A = [2 1 1; 2 3 4; -1 -1 -2];
B = [1; 2; 1];
P = roots([1 13 52 60]);
K = acker(A, B, P)
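Ackermann's formula (Equation 7.120) is also easy to implement directly. The following NumPy sketch reproduces the gain of Example 7.16 without any control toolbox:

```python
import numpy as np

# System from Example 7.16
A = np.array([[2., 1., 1.],
              [2., 3., 4.],
              [-1., -1., -2.]])
B = np.array([[1.], [2.], [1.]])

# Controllability matrix CM = [B | AB | A^2 B]
CM = np.hstack([B, A @ B, A @ A @ B])

# psi(A): the desired closed-loop polynomial s^3 + 13s^2 + 52s + 60 evaluated at A
psiA = (np.linalg.matrix_power(A, 3) + 13 * np.linalg.matrix_power(A, 2)
        + 52 * A + 60 * np.eye(3))

# Ackermann's formula, Eq. (7.120): K = e_n' CM^{-1} psi(A)
en = np.array([0., 0., 1.])
K = en @ np.linalg.solve(CM, psiA)
print(K)   # approx [27.3125 -4.1875 -2.9375]
```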

7.11 INTRODUCTION TO OPTIMAL CONTROL

The pole placement method and the consequent Ackermann's formula apply to SISO systems only. For MIMO systems, there are many conflicting elements to be selected under some constraints to put the system under optimal conditions. The controller design turns out to be an optimisation problem in which some form of performance index or cost function has to be optimised. This is called optimal control. In its general form, optimal control handles all states with appropriate weights. As stated before, some states may not be measurable, so they must be estimated from noisy measurements. The performance index, which has to be optimised under optimal control, is defined as a combination of the estimated state vector, the control vector and/or the output vector. Several optimal control strategies can be defined depending on the form of the performance index, the state vector and estimation policy, and the underlying assumptions. It is important to note that optimal control can be placed into three broad categories: (i) nonlinear time-variant, (ii) nonlinear time-invariant, and (iii) linear time-variant. In this section, the basic principles of optimal control methods are discussed, but first, a brief overview of optimisation theory is given.

7.11.1 OVERVIEW OF OPTIMISATION THEORY

7.11.1.1 The Optimisation Problem and Optimality Conditions

An optimisation problem is a mathematical problem that involves finding the best (optimal) solution of a given problem under some constraints. The general structure of an optimisation problem can be posed as follows: "Minimise (or maximise) a function f(x) subject to the condition that x ∈ Ω." In this optimisation problem, the function f: Rⁿ → R to be optimised is a real-valued function called the objective function or the performance (cost) index. The vector x is an n-vector of n independent variables, i.e.,

x = [x₁  x₂  ···  xₙ]ᵀ ∈ Rⁿ


whose elements are called decision variables. The set Ω ⊂ Rⁿ is called the constraint set or feasible set that captures the optimisation constraints. Often the (equality and inequality) constraint set takes the form

Ω = {x : h(x) = 0; g(x) ≤ 0},    (7.122)

where h and g are given vector functions. Depending on the presence or absence of the constraints, the optimisation may be referred to as constrained or unconstrained optimisation. Under unconstrained optimisation of a real-valued function f(x), the first and necessary condition for it to be optimal at a point x* is that the gradient of f(x*) must be zero,

∇f(x*) = [∂f(x*)/∂xᵢ]ᵀ = 0.    (7.123)

When this condition is met, the point x* is called a stationary point. The stationary point may be a maximum, a minimum, or an inflection point. In most control problems, the interest is in minimising the given objective function; therefore, the minimum point has to be sought. The minimum of f(x) at x* is found when the second derivative, or Hessian, of f is positive definite:

∇²f(x*) = [∂²f(x*)/∂xᵢ∂xⱼ] > 0.    (7.124)

Note that the same conditions must be satisfied for maximisation problems but with opposite polarity for the second condition. In fact, a maximisation problem may be turned into a minimisation problem by sign inversion.

Example 7.17 Determine the vector x = [x₁  x₂]ᵀ which minimises the function f(x) = 2x₁⁴ + x₂² − 4x₁x₂ + 3, and the minimal value of f(x).

Solution 7.17 The first condition for a stationary point is ∇f(x) = 0, and for the given function it becomes

∇f(x) = [∂f(x)/∂x₁;  ∂f(x)/∂x₂] = [8x₁³ − 4x₂;  2x₂ − 4x₁].

Applying the first condition for a stationary point gives two equations in x₁ and x₂:

8x₁³ − 4x₂ = 0
2x₂ − 4x₁ = 0,

whose simultaneous solution gives

x = [0; 0],  [1; 2]  and  [−1; −2].


The condition for a minimum is ∇²f(x) > 0, and for the given function

          [ ∂²f/∂x₁²     ∂²f/∂x₁∂x₂ ]   [ 24x₁²  −4 ]
∇²f(x) =  [ ∂²f/∂x₁∂x₂   ∂²f/∂x₂²   ] = [  −4     2 ] ,

and for this to be positive definite, all its eigenvalues must be positive. Thus, expressions for the eigenvalues have to be determined from the characteristic equation

|λI − ∇²f(x)| = (λ − 24x₁²)(λ − 2) − 16 = λ² − (2 + 24x₁²)λ + (48x₁² − 16) = 0,

which gives the eigenvalues as

λ₁, λ₂ = (1 + 12x₁²) ± √[(1 + 12x₁²)² + (16 − 48x₁²)].

The conditions for all eigenvalues to be positive are

(1 + 12x₁²) > 0
(1 + 12x₁²) > √[(1 + 12x₁²)² + (16 − 48x₁²)],

which gives the limiting values of x₁ as x₁ > 1/√3 or x₁ < −1/√3. Of the stationary points found above, the only points that satisfy these conditions are

x = [1; 2]  and  [−1; −2].

Hence, these are the values of x that minimise f(x). The functional value at these points is f(x) = 1. Note that at x = [0  0]ᵀ the functional value is f(x) = 3, and hence it is not a minimum.
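The stationary-point analysis above is easy to verify numerically. A small NumPy sketch using the objective of this example, f(x) = 2x₁⁴ + x₂² − 4x₁x₂ + 3:

```python
import numpy as np

def f(x1, x2):
    # Objective from Example 7.17 (constant term 3, consistent with f(0,0) = 3)
    return 2 * x1**4 + x2**2 - 4 * x1 * x2 + 3

def hessian(x1):
    # Hessian of f; it depends on x1 only
    return np.array([[24 * x1**2, -4.0], [-4.0, 2.0]])

stationary = [(0.0, 0.0), (1.0, 2.0), (-1.0, -2.0)]
for x1, x2 in stationary:
    grad = np.array([8 * x1**3 - 4 * x2, 2 * x2 - 4 * x1])
    assert np.allclose(grad, 0)   # all three points satisfy the gradient condition

print([f(x1, x2) for x1, x2 in stationary])   # [3.0, 1.0, 1.0]
# The Hessian is positive definite only at the two minima (x1 = +/- 1)
print([bool(np.all(np.linalg.eigvals(hessian(x1)) > 0)) for x1, _ in stationary])
```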

7.11.1.2 Constrained Optimisation: The Lagrangian and the Lagrange Multipliers

When the constraint set is defined, the problem becomes a constrained optimisation problem. The constraints may be equality or inequality constraints. For this chapter, only equality constraints are considered. Suppose the objective function f(x) is to be minimised subject to the constraints h(x) = 0. The standard procedure for solving such an optimisation problem is to combine both the objective function f(x) and the constraint equation h(x), using Lagrange multipliers λ, into one equation known as the Lagrangian L(x, λ), where

L(x, λ) = f(x) + λᵀh(x)    (7.125)

and λ is a vector of Lagrange multipliers, which are determined in the course of getting the solution to the given optimisation problem. Thus, the necessary conditions that minimise f(x) and yet satisfy the constraint equation h(x) are contained in the Lagrangian L(x, λ) and are

∂L(x, λ)/∂λ = h(x) = 0    (7.126)

∂L(x, λ)/∂x = ∇f(x) + λᵀ∇h(x) = 0.    (7.127)

Although in most applications the values of the Lagrange multipliers themselves are not needed, they must nevertheless be determined as intermediate quantities that allow complete determination of the optimal quantities of interest, i.e., the vector x* and possibly the minimum value of the objective function f(x*). By introducing the Lagrange multipliers, the constrained problem of minimising the objective function f(x) is reduced to the unconstrained problem of minimising the Lagrangian L(x, λ).
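As a minimal illustration of this procedure (with a hypothetical objective and constraint, not from the text), consider minimising f(x) = x₁² + x₂² subject to h(x) = x₁ + x₂ − 1 = 0. The stationarity conditions 7.126 and 7.127 are then linear in (x₁, x₂, λ) and can be solved directly:

```python
import numpy as np

# Hypothetical problem: minimise x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
# Conditions (7.126)-(7.127) of L = f + lam*h give the linear system:
#   2*x1 + lam = 0,  2*x2 + lam = 0,  x1 + x2 = 1
KKT = np.array([[2., 0., 1.],
                [0., 2., 1.],
                [1., 1., 0.]])
rhs = np.array([0., 0., 1.])
x1, x2, lam = np.linalg.solve(KKT, rhs)
print(x1, x2, lam)   # approx 0.5 0.5 -1.0
```

Note that the multiplier λ = −1 falls out as an intermediate value even though only x* = (0.5, 0.5) is of interest.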

7.11.1.3 Objective Functions

In control systems design, the objective function is normally chosen by the control designer. The aim might be to minimise the absolute error, the mean square error, or anything that fits the problem at hand. The objective function can be linear, quadratic or polynomial. However, in most applications the quadratic objective function is used, i.e.,

f(x)Q = ½ xᵀWx,    (7.128)

though in some cases the linear function

f(x)L = Wx    (7.129)

is also used, subject to some constraints. In the quadratic objective function, the matrix W, known as the weight matrix, is chosen to be symmetric and positive definite. It expresses the relative importance of the various decision variables x in optimising the problem.
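A quadratic objective of the form of Equation 7.128 is a one-line computation with NumPy. The weight matrix below is a hypothetical choice (not from the text), checked for positive definiteness through its eigenvalues:

```python
import numpy as np

W = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # hypothetical symmetric positive definite weight
x = np.array([1.0, -2.0])

f_quad = 0.5 * x @ W @ x        # f(x) = (1/2) x' W x, Eq. (7.128)
print(f_quad)                   # 2.0

# W must be symmetric positive definite: all eigenvalues strictly positive
assert np.all(np.linalg.eigvals(W) > 0)
```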

7.11.2 BASIC OPTIMAL CONTROL PROBLEM

In designing optimal feedback controllers, the basic problem faced is that of selecting the elements of the feedback matrix K that optimise the various criteria imposed on the system states x(t) and the control signals u(t). These criteria are altogether contained in a quadratic objective function that is normally expressed as

f(x) = ½ xᵀ(T)W₀x(T) + ½ ∫₀ᵀ [xᵀ(t)W₁x(t) + uᵀ(t)W₂u(t)] dt,    (7.130)

where the weight matrices W₁ and W₂ are positive definite, expressing the relative importance of the different states and controls in the controlled system as a whole. The term ½xᵀ(T)W₀x(T) is a penalty term that dictates the required final state x(T), in which the weight matrix W₀ is also symmetric positive definite. Therefore, it is required to minimise this performance function subject to the conditions that

ẋ(t) = Ax(t) + Bu(t)    (7.131)

where

u(t) = −Kx(t).    (7.132)

As can be seen, this is a constrained optimisation problem with equality constraints that can then be solved using the method of Lagrange multipliers. Important here is how correctly the weight matrices in the objective function are defined. Since the constraint Equation 7.131 is time-dependent and must be satisfied at all times, the corresponding Lagrangian becomes

L(x, λ) = ½ xᵀ(T)W₀x(T) + ∫₀ᵀ { ½ [xᵀ(t)W₁x(t) + uᵀ(t)W₂u(t)] + λᵀ(t)[dx(t)/dt − Ax(t) − Bu(t)] } dt    (7.133)

        = φ(x(T)) + ∫₀ᵀ H(x, u, λ) dt.    (7.134)

Since the final state is fixed, unconstrained minimisation of this Lagrangian over the time interval [0, T] can be seen as just the unconstrained minimisation of the Hamiltonian H(x, u, λ). The conditions for optimisation are obtained when the partial derivative of the Hamiltonian with respect to each of its variables is equal to zero. That is,

∂H(x, u, λ)/∂λ = dx(t)/dt − Ax(t) − Bu(t) = 0    (7.135)

∂H(x, u, λ)/∂x = W₁x(t) + Aᵀλ(t) + dλ(t)/dt = 0    (7.136)

∂H(x, u, λ)/∂u = W₂u(t) + Bᵀλ(t) = 0.    (7.137)

Equation 7.135 is the original state equation that was introduced in the Hamiltonian as a constraint, and Equation 7.136 is an equation resembling the state equation but in λ(t). These are known respectively as the state and costate equations, while Equation 7.137 is the control equation. The solution of the control equation gives

u(t) = −W₂⁻¹Bᵀλ(t).    (7.138)

Comparing this equation with Equation 7.132, it is evident that if the Lagrange multiplier λ(t) can be determined as a function of x(t), then the optimal feedback gain matrix K will be determined as a function of the weight matrix W₂ and the input matrix B. This can be done by combining the three equations as follows. First, the control equation and the state equation are combined to leave only two equations, which can then be solved (though at a cost). The combination of the state equation and the control equation gives

0 = ẋ(t) − Ax(t) + BW₂⁻¹Bᵀλ(t).    (7.139)

Thus, Equation 7.139 and the costate equation can be presented in a matrix form known as the Hamiltonian system, as follows:

[ẋ(t); λ̇(t)] = [A  −BW₂⁻¹Bᵀ; −W₁  −Aᵀ][x(t); λ(t)],    (7.140)

where the coefficient matrix is known as the Hamiltonian matrix. To find the optimal control, the Hamiltonian system must be solved, taking into account the boundary conditions x(t) and λ(t) and the inputs over the interval [0, T]. However, normally λ(0) and λ(T) are unknown, and this further complicates the problem. For this introductory material, a free final state x(T) given an initial state x(0) will be assumed over the control interval [0, T]. Notice that this becomes a two-point boundary value problem, which remains difficult to solve. A popular method that has been very successful in handling this problem is the sweep method, suggested by Bryson and Ho in 1975. This method assumes that x(t) and λ(t) are linearly related such that

λ(t) = S(t)x(t).    (7.141)

This reduces the two-point boundary value problem in x(t) and λ(t) to a single-point problem in S(t), which can thus be solved with ease. Before dwelling on the solution of the Hamiltonian system in Equation 7.140, consider the time derivative of Equation 7.141. This becomes

λ̇(t) = Ṡ(t)x(t) + S(t)ẋ(t).    (7.142)

Replacing ẋ(t) by the state Equation 7.139 and using the relation in 7.141 gives

λ̇(t) = Ṡ(t)x(t) + S(t)[Ax(t) − BW₂⁻¹BᵀS(t)x(t)].    (7.143)

Also, using Equation 7.141 in the costate equation gives λ̇(t) as

λ̇(t) = −W₁x(t) − AᵀS(t)x(t).    (7.144)

Now, equating Equations 7.143 and 7.144 gives

−W₁x(t) − AᵀS(t)x(t) = Ṡ(t)x(t) + S(t)[Ax(t) − BW₂⁻¹BᵀS(t)x(t)],    (7.145)

so that, on elimination of x(t) and some rearrangement, the problem reduces to

−Ṡ(t) = AᵀS(t) + S(t)A − S(t)BW₂⁻¹BᵀS(t) + W₁,    (7.146)

for all t < T. This is a matrix differential equation, known as the Riccati equation, which must be solved for S(t) with the final condition S(T), so that λ(t) in Equation 7.141 can be determined; hence the control input in Equation 7.138 becomes

u(t) = −W₂⁻¹BᵀS(t)x(t),    (7.147)

which gives the feedback gain matrix K as

K = W₂⁻¹BᵀS(t).    (7.148)

Normally, the Riccati equation in 7.146 is solved by a backward-in-time approach. The gain obtained this way is normally time-varying, as it varies with S(t), even if the system is time-invariant. For most practical applications, a time-varying K is not desired; therefore, a steady-state solution for S is required. This steady-state solution is obtained from Equation 7.146 by setting Ṡ(t) = 0, so that the steady-state Riccati equation, known as the Algebraic Riccati Equation (ARE), becomes

0 = AᵀS + SA − SBW₂⁻¹BᵀS + W₁.    (7.149)

If the steady-state condition x(T) or S(T) is known (which is usually the case), the common approach to solving the Riccati equation is backward propagation in time from time T to t, giving x(t) and S(t) respectively; this is usually done off-line. However, in this section it has been assumed that the initial condition x(0) is known. Therefore, the equation will be solved by the forward approach, which seems simpler than the backward approach, though both approaches are similar. A brief discussion of the backward approach is given at the end of this section. The analytical solution of the ARE is determined from the Hamiltonian Equation 7.140 and the sweep Equation 7.141:

[ẋ(t); λ̇(t)] = [A  −BW₂⁻¹Bᵀ; −W₁  −Aᵀ][x(t); λ(t)]

λ(t) = S(t)x(t).

This is a system of linear time-invariant ordinary differential equations which, as has been stated before, can be solved either by forward or backward propagation in time using an appropriate transition matrix of the Hamiltonian matrix H, where

H = [A  −BW₂⁻¹Bᵀ; −W₁  −Aᵀ].    (7.150)

The solution S(t) of the Riccati equation can then be determined analytically in terms of the eigenvalues and eigenvectors of this Hamiltonian matrix. Due to the special structure of this matrix, if H has distinct eigenvalues, then it can be diagonalised, and its eigenvalues occur in pairs ±λᵢ, i.e., if λᵢ is an eigenvalue of H, then so is −λᵢ. Consequently, a left modal Transformation Matrix E of H can be defined as

E = [E₁₁  E₁₂; E₂₁  E₂₂]    (7.151)

such that

H = EΛ̂E⁻¹,    (7.152)

where Λ̂ is a diagonal matrix of the eigenvalues of H, arranged so that the fast (stable) eigenvalues are in the upper left block −Λ:

Λ̂ = [−Λ  0; 0  Λ].    (7.153)

Associated with the left modal Transformation Matrix E is the right modal Transformation Matrix Dᵀ, defined as

Dᵀ = E⁻¹,    (7.154)

which can be used in Equation 7.152 to give

H = EΛ̂Dᵀ.

In this case, Dᵀ is partitioned as

Dᵀ = [D₁₁ᵀ  D₂₁ᵀ; D₁₂ᵀ  D₂₂ᵀ].

The right modal Transformation Matrix D is related to the left modal Transformation Matrix E as

Dᵀ = [E₂₂ᵀ  −E₁₂ᵀ; −E₂₁ᵀ  E₁₁ᵀ].

Therefore, the Hamiltonian system can then be written as

[ẋ(t); λ̇(t)] = [E₁₁  E₁₂; E₂₁  E₂₂][−Λ  0; 0  Λ][D₁₁ᵀ  D₂₁ᵀ; D₁₂ᵀ  D₂₂ᵀ][x(t); λ(t)].

The normal state vector q(t) = [q₁(t)  q₂(t)]ᵀ is defined as

q(t) = [q₁(t); q₂(t)] = [D₁₁ᵀ  D₂₁ᵀ; D₁₂ᵀ  D₂₂ᵀ][x(t); λ(t)]    (7.155)

or equivalently

[x(t); λ(t)] = [E₁₁  E₁₂; E₂₁  E₂₂][q₁(t); q₂(t)].    (7.156)

Some mathematical operations show that this normal state vector must satisfy the state dynamics

[q̇₁(t); q̇₂(t)] = [−Λ  0; 0  Λ][q₁(t); q₂(t)],

whose solution is

[q₁(t); q₂(t)] = [e^(−Λt)  0; 0  e^(Λt)][q₁(0); q₂(0)],    (7.157)

where q(0) is the initial value of the normal state at t = 0. This shows that the normal state q₁(t) is completely stable while q₂(t) is completely unstable. If the state vector at t = 0 is x(0), then the initial normal states can be determined as

[q₁(0); q₂(0)] = [D₁₁ᵀ  D₂₁ᵀ; D₁₂ᵀ  D₂₂ᵀ][x(0); λ(0)],    (7.158)

which, together with Equation 7.157, gives

[q₁(t); q₂(t)] = [e^(−Λt)  0; 0  e^(Λt)][D₁₁ᵀ  D₂₁ᵀ; D₁₂ᵀ  D₂₂ᵀ][x(0); λ(0)].

On employing the sweep method, it follows that λ(0) = S(0)x(0), so that

[q₁(t); q₂(t)] = [e^(−Λt)(D₁₁ᵀ + D₂₁ᵀS(0))x(0); e^(Λt)(D₁₂ᵀ + D₂₂ᵀS(0))x(0)].

Using this in Equation 7.156 gives

[x(t); λ(t)] = [E₁₁  E₁₂; E₂₁  E₂₂][e^(−Λt)(D₁₁ᵀ + D₂₁ᵀS(0))x(0); e^(Λt)(D₁₂ᵀ + D₂₂ᵀS(0))x(0)].

Now, since x(t) must have a stable response, it must be generated by the stable normal state q₁(t) alone; the unstable normal state

q₂(t) = e^(Λt)(D₁₂ᵀ + D₂₂ᵀS(0))x(0)    (7.159)

must make no contribution to x(t). This calls for

E₁₂e^(Λt)(D₁₂ᵀ + D₂₂ᵀS(0))x(0) = 0.

Since E₁₂ is not necessarily a null matrix, this means

D₁₂ᵀ + D₂₂ᵀS(0) = 0,

which gives

S(0) = −(D₂₂ᵀ)⁻¹D₁₂ᵀ.

Since D₂₂ᵀ = E₁₁ᵀ and D₁₂ᵀ = −E₂₁ᵀ, then

S(0) = E₁₁⁻ᵀE₂₁ᵀ.

Knowing S(0) together with x(0), it becomes possible to solve the linear system by forward propagation until the steady state is reached. Alternatively, the solution of the Riccati equation can be determined from the boundary condition S(T) by the backward approach, if the steady-state boundary condition S(T) of the Riccati equation is known. In this case, the Riccati solution can be determined by using Equation 7.159 with some slight modifications for backward-in-time propagation, where the steady-state normal state of the stabilising solution becomes

q₁(T) = −(E₂₂ − S(T)E₁₂)⁻¹(E₂₁ − S(T)E₁₁).

Since by the backward method q₁(t) = e^(−Λ(T−t))q₁(T)e^(−Λ(T−t)), the analytical solution to the equation becomes

S(t) = (E₂₁ + E₂₂q₁(t))(E₁₁ + E₁₂q₁(t))⁻¹.

Currently, almost all Computer-Aided Control System Design (CACSD) packages include routines that solve both the steady-state (algebraic) and the differential Riccati equations. In MATLAB, the routines "are" and "ric" can be used to solve this equation.
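The stable-eigenvector construction above (S = E₁₁⁻ᵀE₂₁ᵀ which, because S is symmetric, equals E₂₁E₁₁⁻¹) can be checked numerically against a standard ARE solver. A minimal NumPy/SciPy sketch, using a hypothetical double-integrator plant with W₁ = I and W₂ = 1 (not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant with unit weights (not from the text)
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
W1, W2 = np.eye(2), np.array([[1.]])

# Hamiltonian matrix, Eq. (7.150)
H = np.block([[A, -B @ np.linalg.solve(W2, B.T)],
              [-W1, -A.T]])
eigvals, eigvecs = np.linalg.eig(H)

# Eigenvalues come in +/- pairs; the stable eigenvectors span [E11; E21]
E = eigvecs[:, eigvals.real < 0]
E11, E21 = E[:2, :], E[2:, :]
S = np.real(E21 @ np.linalg.inv(E11))   # steady-state Riccati solution

# Cross-check against SciPy's ARE solver, then form K = W2^{-1} B' S, Eq. (7.148)
assert np.allclose(S, solve_continuous_are(A, B, W1, W2))
K = np.linalg.solve(W2, B.T @ S)
print(K)   # approx [[1. 1.732]], i.e. [1, sqrt(3)] for this plant
```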

7.12 ESTIMATOR DESIGN

So far, in the discussion of full state feedback control, it has been assumed that all states are available for feedback. In practice, this is not always the case, not only because of the cost of the sensors that would be required to measure all the states, but also because some of the states are physically inaccessible. The common approach used by most control systems is to take measurements of the outputs, which are assumed to carry sufficient information about the dynamics of the system. Based on these measurements, the entire state vector can be estimated. It is this estimated state vector x̂ that is fed back into the control loop to achieve the required control action,

u(t) = −Kx̂(t).    (7.160)

In summary, it can be stated that it is more practical to reconstruct the states of a system from a few measurements. The entity that does this state estimation is called the observer or simply the estimator. In this section, the general principles of state estimation involved are presented. There are two methods available for the estimation task. These are the full-order estimators and the reduced-order


estimator. The full-order estimator reconstructs the entire state vector from a set of measurements. In contrast, the reduced-order estimator reconstructs only those states that are not contained in the measurements (the unmeasurable states). Both have their advantages and disadvantages, as discussed later. In this context, the terms observer and estimator are used interchangeably. Before embarking on observer design, it is prudent to restate the concept of observability. It refers to the ability to deduce information about a system's state variables by monitoring only its outputs. If a system is observable, its states can be reconstructed with a state observer or estimator. Hence, it is essential to check the observability of a system before trying to design an observer. For a non-observable system, the estimator will not converge, giving wrong state information.

7.12.1 FULL-ORDER STATE ESTIMATOR

As indicated earlier, a full-order estimator reconstructs the entire state vector from a set of measurements. Consider an idealised noiseless LTI system (in this case, a general MIMO system may be assumed, which will assist further in the discussion of reduced-order estimators in the next section) whose dynamics may be presented as

ẋ(t) = Ax(t) + Bu(t),    (7.161)

where the output is measured using

y(t) = Cx(t) + Du(t).    (7.162)

Here the description "idealised noiseless" has been stressed because real systems will have additive noise in both the measurement and the state equations. There is a need to estimate the vector x̂ from the measurements y so that the error between these estimates and the true state is minimal. There may be several ways of defining the minimal error, such as the minimum square error or the minimum absolute error, some of which will be discussed later. If the estimation error is defined as

x̃(t) = x(t) − x̂(t)    (7.163)

and since the estimate x̂(t) must satisfy the model of Equation 7.161, i.e.,

dx̂(t)/dt = Ax̂(t) + Bu(t),    (7.164)

then this estimation error vector x̃(t) will satisfy

dx̃(t)/dt = ẋ(t) − dx̂(t)/dt
          = Ax(t) + Bu(t) − Ax̂(t) − Bu(t)
          = Ax(t) − Ax̂(t)
          = Ax̃(t).    (7.165)

This shows that the error follows the same dynamics as the true states, which means that the error at any time t will be given by

x̃(t) = e^(At)x̃(0),    (7.166)

where x̃(0) is the initial error. Now, if the initial estimate is very close to the true state and all the eigenvalues of the system matrix A represent stable poles, then the error will keep decreasing, and hence the state estimate will converge to the true state vector. However, in most cases the system matrix A is not stable (which is one of the reasons that the controller is to be designed). Therefore, means must be provided to ensure that the poles of the estimation error dynamics allow the error to decay to zero. To accomplish this sort of state estimation, a model is designed to mimic the plant dynamics as

dx̂(t)/dt = Ax̂(t) + Bu(t),    (7.167)

and is connected in parallel with the plant. The outputs of the model and the plant are compared, and the error is then fed back through some estimation gain L in such a way that the closed-loop estimation model matrix has fast eigenvalues. The actual output measurement y(t) is due to the true state x(t), and the model output ŷ(t) is due to the estimated state x̂(t). The difference between them gives the measurement error. In fact, the term "measurement error" as used here is misleading, as there are no errors in the measurements insofar as an ideal noiseless system has been assumed; the correct term is the measurement residual or innovation:

ỹ(t) = y(t) − ŷ(t).    (7.168)

Since

y(t) = Cx(t) + Du(t)
ŷ(t) = Cx̂(t) + Du(t),

then

ỹ(t) = Cx(t) − Cx̂(t) = Cx̃(t).    (7.169)

All the estimation equations established up to this stage produce the general structure of a full-order estimator, as depicted in Figure 7.12.

Figure 7.12 Full-order state estimator.

The next step is investigating the effect of feeding back the residual ỹ(t) through a gain matrix L. This action changes the estimation model dynamics such that they can be expressed mathematically as

dx̂(t)/dt = Ax̂(t) + Bu(t) + Lỹ(t)
          = Ax̂(t) + Bu(t) + LCx̃(t).    (7.170)


Now, since x̃(t) = x(t) − x̂(t), then

dx̂(t)/dt = Ax̂(t) + Bu(t) + LC[x(t) − x̂(t)]
          = (A − LC)x̂(t) + Bu(t) + LCx(t).    (7.171)

Hence, on differentiating Equation 7.163 and using Equations 7.161 and 7.171, it follows that

dx̃(t)/dt = ẋ(t) − dx̂(t)/dt
          = Ax(t) + Bu(t) − (A − LC)x̂(t) − Bu(t) − LCx(t)
          = (A − LC)x(t) − (A − LC)x̂(t)
          = (A − LC)x̃(t).    (7.172)

This is the modified state estimation error dynamics, whose solution is

x̃(t) = e^((A−LC)t)x̃(0).    (7.173)
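The decay of the estimation error predicted by Equation 7.173 is easy to illustrate numerically. The sketch below uses a hypothetical double-integrator plant with a hand-placed gain L (the eigenvalues of A − LC sit at s = −4, −4); it is not a system from the text:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical double-integrator plant with position measurement (not from the text)
A = np.array([[0., 1.], [0., 0.]])
C = np.array([[1., 0.]])
L = np.array([[8.], [16.]])   # places the eigenvalues of A - LC at s = -4, -4

Ae = A - L @ C                # error dynamics matrix, Eq. (7.172)
e0 = np.array([1.0, -2.0])    # initial estimation error x~(0)

# Error norm at a few instants, via x~(t) = expm((A-LC) t) x~(0), Eq. (7.173)
errors = [np.linalg.norm(expm(Ae * t) @ e0) for t in (0.0, 0.5, 1.0, 2.0)]
print(errors)   # values shrink monotonically toward zero
```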

As can be seen, the error dynamics depend only on the initial estimate x̃(0) and the three system model matrices A, L, and C. The error has no relationship with the control input u(t). For this error to converge to zero quickly, the matrix exponential e^((A−LC)t) must decay, which requires the eigenvalues of A − LC to be large and negative (i.e., fast eigenvalues). This implies that the observation gain matrix should be made large enough. In fact, if the observation closed-loop poles are made large and negative, the state estimation error will converge to zero irrespective of the initial condition. The selection of the estimator gain matrix L, which results in fast eigenvalues of the closed-loop system A − LC, can be done in just the same way as the determination of the controller gain matrix K, through assignment of the eigenvalues of the estimation closed-loop model A − LC. Figure 7.13 shows an integrated full-order state estimator.

Figure 7.13 An integrated full-order state estimator.

Since the characteristic equation of this estimator model is given by

α(s) ≜ det[sI − (A − LC)] = 0,    (7.174)

by using polynomial expansion it is then possible to get the values of the gain matrix L. Suppose that it is required to place the poles of the closed-loop observer model at

sᵢ = β₁, β₂, β₃, ..., βₙ.    (7.175)

Essentially, this means that the polynomial form of the characteristic Equation 7.174 becomes

α(s) = (s − β₁)(s − β₂)(s − β₃) ··· (s − βₙ).    (7.176)

Now, since the model matrix A and the output matrix C are both known, by comparison of the coefficients of Equation 7.174 with those of Equation 7.176 it becomes possible to get the elements


of L. However, as was shown in the controller design, Equation 7.174 might have more unknown elements than can be computed through this comparison. To be able to get all the elements of L, the estimator model (A, C) must be in observable canonical form, just as it was necessary for the control system (A, B) to be in controllable canonical form. Alternatively, one can use Ackermann's formula in almost the same way as it was used in the determination of the controller gain matrix K. The derivation of the observer Ackermann formula will not be given here, as it follows similar steps and reasoning as for the controller design. For the observer design it is given as

L = α(A)O⁻¹eₙ,    (7.177)

where α(A) is the Cayley-Hamilton matrix characteristic equation of the system A, O is the observability matrix of the system (A, C), and eₙ is the last column of an identity matrix. This is how L is calculated. However, it is prudent to emphasise the following:

• The L matrix serves as a correction signal to the plant model.
• If there is a lot of uncertainty, disturbance and inaccuracy, then a large L is required to provide extensive corrective action.
• The estimator poles should be two to five times faster than the controller poles. This means further to the left in the LHP than those of the controller, so that the estimator can eliminate the differences between the estimator and the plant.
• Since the estimator is much faster than the controller, the controller's response will be visible in the system response and not that of the estimator.

7.12.2 DUALITY OF ESTIMATION AND CONTROL

As explained in Section 7.10, state feedback control involves both estimation and control. Consequently, the requisite system development involves the design of both a control law and an estimator. The result is a combined estimator and control system, as depicted in Figure 7.14. The estimation and control problems are mathematically equivalent. The control problem is that of determining the gain matrix K so that the poles of the closed-loop system are fast eigenvalues of the matrix A − BK, and the estimation problem is that of determining the estimator gain L that gives fast eigenvalues of the observation closed-loop system A − LC. Now, since the eigenvalues of (A − LC) are the same as those of its transpose (A − LC)ᵀ, where

(A − LC)ᵀ = Aᵀ − CᵀLᵀ,   (7.178)

it follows that the method of determining the control gain matrix K in the system A − BK is the same as that of determining the transpose of the estimation gain matrix Lᵀ in the system Aᵀ − CᵀLᵀ,

Figure 7.14 Combined estimator and control system.

587

State-Space Design Methods

where the matrices A and B in the control equations are replaced by Aᵀ and Cᵀ. This equivalence is known as the duality of estimation and control and is summarised in the following table:

    Controller    Estimator (Observer)
    A             Aᵀ
    B             Cᵀ
    C             Bᵀ
    D             D

This duality property allows the same design tools to be used for estimation problems as for control problems, by making the substitutions shown in the table above. For this reason, the MATLAB commands "acker" and "place", which are used in controller design, are also applicable to estimator design, where the syntax becomes

L = acker(A′, C′, Pe)′   (7.179)

and

L = place(A′, C′, Pe)′.   (7.180)
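The same duality substitution can be exercised in Python with SciPy's place_poles (a sketch, mirroring L = place(A′, C′, Pe)′; the plant matrices and desired estimator poles below are illustrative assumptions, not values from the text):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical third-order plant in companion form (illustrative values only).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Desired estimator error poles Pe, chosen faster than the plant poles.
Pe = np.array([-8.0, -9.0, -10.0])

# Duality: place the poles of the dual system (A', C') and transpose the
# resulting gain -- the Python analogue of L = place(A', C', Pe)'.
L = place_poles(A.T, C.T, Pe).gain_matrix.T

# The eigenvalues of A - LC are then the requested estimator poles.
print(np.sort(np.linalg.eigvals(A - L @ C).real))
```

The sorted eigenvalues of A − LC should match the requested pole set {−10, −9, −8}, confirming the duality-based design.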

Here, Pe is the vector that contains the desired estimator error poles.

7.12.3 REDUCED-ORDER ESTIMATOR

The full-order estimator reconstructs the entire state vector using measurements of some of the state variables. Some of its estimates are redundant, because some of the states are available directly from the output measurements. The only advantage of this type of estimator is that, when the measurements are corrupted with noise, the full-state estimator smoothens the measured states by filtering them while reconstructing the unmeasured states. On the other hand, it carries an unnecessary computational load, because it computes even those states that are available from measurements, for example, when the output matrix contains rows in which all entries are zero except one. The need to alleviate this load leads to the idea of a reduced-order estimator, which separates out the states that are available directly from the measurements when estimating the unknown states. This reduces the order of the estimator by the number of measured outputs; hence the name "reduced-order estimator". Effectively, this estimator reconstructs only those states that are not contained in the measurement (the unmeasurable states). Suppose that for the linear system

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t),

the state vector is

x(t) = [x1(t) x2(t) x3(t) x4(t) x5(t)]ᵀ,   (7.181)

and the measurement vector is

y(t) = [y1(t) y2(t) y3(t)]ᵀ.   (7.182)

If the output and feed-through matrices C and D are

C = [1 0 0 0 0; 0 1 0 0 0; 0 1 1 1 1],  D = [0; 0; 1],


then, substituting these matrices into y(t) = Cx(t) + Du(t), the measurements will contain

y1(t) = x1(t)
y2(t) = x2(t)
y3(t) = x2(t) + x3(t) + x4(t) + x5(t) + u(t).   (7.183)

Clearly, from this measurement vector, the states x1(t) and x2(t) are directly available as the measurements y1(t) and y2(t), respectively. However, this same measurement vector does not carry sufficient information for the determination of the states x3(t), x4(t), and x5(t). Therefore, an estimator that estimates only these unknown states will be a reduced estimator of order 3, i.e., the state vector of order 5 has been reduced by the order of the states that are directly available from the measurements. Generally, if the measurement vector y is such that part of it, ym, represents the states of the system directly, while the rest, yR, does not (e.g., an underdetermined linear combination of the states as in Equation 7.183), it is possible to partition it as

y = [ym; yR].   (7.184)

Corresponding to this partitioning, the output and feed-through matrices can also be partitioned as

C = [Cm; CR],  D = [0; DR],   (7.185)

so that the measurement equation becomes

y = [ym; yR] = [Cm; CR] x + [0; DR] u.   (7.186)

Since the aim is to separate the states that are directly obtainable from these measurements from the rest of the state elements, a coordinate transformation P, which transforms x to a new variable z, is required, so that the new state variable z can be partitioned into two components: one corresponding to the measurements ym, and the rest, zR, which is unknown (to be estimated). The modal (diagonal) canonical form may be the best choice for this purpose. Thus x = Pz, where

z = [ym; zR].


The transformation matrix P can also be partitioned as

P = [Pm  PR],

so that the state vector x can be expressed as

x = [Pm  PR][ym; zR].   (7.187)

The transformation matrix P must be associated with a left inverse T, where

T = [Tm; TR],

such that

TP = [I 0; 0 I].   (7.188)

Using this inverse on Equation 7.187 gives

[Tm; TR] x = [ym; zR].   (7.189)

Therefore, the similarity transformation from the x variable to the z variable gives

ż = P⁻¹APz + P⁻¹Bu
y = CPz + Du,

which can be expressed as

[ẏm; żR] = [Tm; TR] A [Pm  PR][ym; zR] + [Tm; TR] Bu
[ym; yR] = [Cm; CR][Pm  PR][ym; zR] + [0; DR] u.

This structure can be simplified further and expressed in block matrix form as follows:

[ẏm; żR] = [Amm AmR; ARm ARR][ym; zR] + [Bm; BR] u   (7.190)

[ym; yR] = [Cmm CmR; CRm CRR][ym; zR] + [0; DR] u,   (7.191)


where

Amm = Tm A Pm,  AmR = Tm A PR,
ARm = TR A Pm,  ARR = TR A PR,
Cmm = Cm Pm,  CmR = Cm PR,
CRm = CR Pm,  CRR = CR PR,
Bm = Tm B,  BR = TR B.   (7.192)

Notice that from Equations 7.186 and 7.189 it follows that

Cm = Tm,   (7.193)

and, using the partitions of T and P, the product TP in Equation 7.188 can be expanded as

TP = [Tm; TR][Pm  PR] = [Tm Pm  Tm PR; TR Pm  TR PR].

Therefore, according to Equation 7.188,

Tm Pm = Cm Pm = Cmm = I
Tm PR = Cm PR = CmR = 0
TR Pm = 0
TR PR = I.   (7.194)

Since ym is available as a measurement, only the unknown state vector zR needs to be determined. Expansion of Equation 7.190 gives the state equation for this unknown state vector as

żR = ARm ym + ARR zR + BR u,   (7.195)

and the measurements are

ym = Cmm ym + CmR zR = ym
yR = CRm ym + CRR zR + DR u,

which can be written as

yR − CRm ym = CRR zR + DR u,   (7.196)

where ym is known to represent some of the untransformed states xm. Now, suppose the unknown state vector xR, which has been transformed by P to zR, is estimated as ẑR. This estimation affects the measurement yR only, and the corresponding measurement residual ỹr becomes

ỹr = (yR − CRm ym) − CRR ẑR.   (7.197)

As done before, the estimation process requires that this residual be fed back to the estimation model through the estimator gain LR, which alters the model dynamics to

dẑR/dt = ARm ym + ARR ẑR + BR u + LR ỹr
       = ARm ym + ARR ẑR + BR u + LR[(yR − CRm ym) − CRR ẑR]
       = (ARR − LR CRR)ẑR + (ARm − LR CRm)ym + BR u + LR yR.   (7.198)


The transformed reduced state estimate ẑR is thus the solution of this equation. The estimator design is thus that of selecting the gain LR which will ensure that the estimated vector ẑR gives a fair representation of the true states by minimising the estimation error z̃R. Defining the estimation error as

z̃R = zR − ẑR,   (7.199)

it can be shown, by taking the time derivative of this error and using Equations 7.195 and 7.198, that the error dynamics are

dz̃R/dt = ARR zR − (ARR − LR CRR)ẑR + LR CRm ym − LR yR,   (7.200)

which, on replacing yR and after some simplification and rearrangement, becomes

dz̃R/dt = (ARR − LR CRR)zR − (ARR − LR CRR)ẑR − LR DR u
        = (ARR − LR CRR)z̃R − LR DR u.   (7.201)

Since u is a control forcing function, the homogeneous solution of this error dynamics equation is

z̃R(t) = e^[(ARR − LR CRR)t] z̃R(0).   (7.202)

The estimate ẑR becomes more acceptable as a true estimate of the state zR as the error z̃R is minimised. For this error to decay to zero over a short time t, the closed-loop system ARR − LR CRR must have eigenvalues that are large and negative, i.e., the roots of the characteristic equation

αr(s) ≜ det[sI − (ARR − LR CRR)] = 0   (7.203)

should be large and negative. This calls for a large gain LR. The design of the reduced-order estimator thus becomes the selection of the gain matrix LR in Equation 7.203, where the reduced submatrices are

ARR = TR A PR
CRR = CR PR.

The transformation matrices PR and TR and the submatrix CR are all predetermined as explained. Normally, Equation 7.203 is written as

αr(s) ≜ det[sI − (TR A − LR CR)PR] = 0,   (7.204)

which represents essentially the same thing. After this point, all that remains is the same as for a full-order estimator, as explained previously. All methods that apply to the full-order estimator can also be applied to this reduced-order estimator. The estimator output in this case is ẑR; however, it is required that the output be x̂R. The estimator output is therefore re-transformed back to the original state x̂R by

x̂R = PR ẑR.   (7.205)

Notice that if none of the measurements gives the states directly, then Cm, and hence Cmm, will be dimensionless null matrices. Thus, Tm and Pm will also be dimensionless null matrices, which results in dimensionless matrices Amm, ARm, AmR, Cmm, CmR, and CRm. This leaves matrix ARR with the same dimension as A, while matrix CRR will have the dimension of C. The reduced-order estimator then becomes the full-order estimator. The main disadvantage of the estimator just described is that it requires that there be some measurements yR that do not provide sufficient information for the determination of some of


the elements in the state vector. If the whole measurement vector contains information that leads to the direct determination of some elements of the state vector, but not the whole state vector, in which case yR and hence CR become zero, this method fails. Consider, for example, for the state and measurement vectors described by Equations 7.181 and 7.182, the output (measurement) matrices

C = [2 0 0 0 0; 0 1 0 0 0; 0 0 3 0 0],  D = [0],

and hence the measurement vector is given by

y1(t) = 2x1(t)
y2(t) = x2(t)
y3(t) = 3x3(t).   (7.206)

As can be seen, the whole of this measurement vector can be used to determine the states x1(t), x2(t), and x3(t) directly. There is no measurement component that is not directly linked (in a one-to-one way) to the state elements; as such, yR is zero, which then leads to CR and CRR being zero. This means the characteristic equation given in Equation 7.203 reduces to

αr(s) ≜ det[sI − ARR] = 0,   (7.207)

which does not help at all toward the estimator design. To resolve this problem, a different approach is used, although the fundamental principle remains that of transforming the system so that the state vector can be shown as an augmented vector of the states that are directly available as measurements, ym, and those that are not available directly from the measurements, zR. Since CR is a null matrix of zero dimension, both TR and PR are null matrices of zero dimension, so that

T = Tm and P = Pm.

Then

Amm = Tm A Pm = TAP

and

Cmm = Cm Pm = CP,

which can be partitioned as

Cmm = [Im  0],   (7.208)

where Im is an identity matrix whose dimension is the same as that of the measurement vector. The transformed system matrix Amm can be partitioned arbitrarily to conform with the partitioning of


the transformed state vector z as

A = [A11 A12; A21 A22]   (7.209)

and also

B = [B1; B2].   (7.210)

Therefore, the complete state description of the transformed system becomes

[ẏm; żR] = [A11 A12; A21 A22][ym; zR] + [B1; B2] u   (7.211)

y = [Im  0][ym; zR].   (7.212)

Expansion of these equations gives

ẏm = A11 ym + A12 zR + B1 u   (7.213)
żR = A21 ym + A22 zR + B2 u   (7.214)
y = ym.   (7.215)

Now, if ẑR is a vector of the estimates of the unknown states, then the measurement residual yr can be calculated directly from the measured-state Equation 7.213, instead of the measurement Equation 7.215, as

yr = (ẏm − A11 ym − B1 u) − A12 ẑR.   (7.216)

This is the basic difference between this approach and the previous approach. As usual, this residual is fed to the unknown-state equation through the estimator gain LR in the estimated dynamics of the following form:

dẑR/dt = A21 ym + A22 ẑR + B2 u + LR yr.   (7.217)

On replacing yr from Equation 7.216, this equation can be expanded and rearranged as

dẑR/dt = (A22 − LR A12)ẑR + (A21 − LR A11)ym + (B2 − LR B1)u + LR ẏm.   (7.218)

On replacing ẏm by Equation 7.213, and taking similar approaches to those shown before for the other methods, the error dynamics can be expressed using Equations 7.214 and 7.218 as

dz̃R/dt = (A22 − LR A12)z̃R,   (7.219)

which gives

z̃R(t) = e^[(A22 − LR A12)t] z̃R(0).   (7.220)

According to the principle of estimation, which requires this error to decay to almost zero, there is a need to select LR large enough so that the roots of the equation

α(s) ≜ det[sI − (A22 − LR A12)] = 0   (7.221)


are large and negative. The steps that follow are the same as those discussed before. The estimator output will be ẑR, and it has to be re-transformed back to the original state of the system x̂R by the transformation

x̂R = P ẑR,   (7.222)

thus completing the estimation process.

7.12.4 COMPENSATOR DESIGN: CONTROL LAW AND ESTIMATOR

So far, the discussion on control design and estimator design has treated the two problems as independent of one another. No explicit mention has been made of the effect of the dynamics of the estimator on those of the controller, or vice versa. In both cases, it was assumed that there is no input reference signal r(t) for the system to track, in which case the control system was a regulator. This section examines the combined system dynamics under the effect of both the controller and the estimator in the presence of the reference signal r(t). Such a system is shown in Figure 7.15. Consider the plant given by

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t),

controlled by a full state feedback controller K in the presence of a reference signal r(t), such that the control signal becomes

u(t) = −Kx(t) + r(t).

Since the actual state x(t) of the system is not available for feedback, as mentioned in the previous sections, the estimated state x̂(t) is the one that is fed back, so that

u(t) = −Kx̂(t) + r(t),   (7.223)

where this estimated state is generated by an estimator that has the general form

dx̂(t)/dt = Ae x̂(t) + Bu(t) + Ly(t),   (7.224)

and Ae is the estimator closed-loop matrix. Thus the compensator for the dynamic system has two components, a controller and an estimator, as illustrated in Figure 7.16. Recall that for a full-order estimator

Ae = A − LC,

Figure 7.15 Combined control law and estimator.


Figure 7.16 A combined compensator for a tracking control system.

and for the reduced-order estimator there are two forms used to represent Ae, as discussed in the preceding section. A popular design criterion for estimators is the reduction of the estimation error x̃(t), whose dynamics have been found to be

dx̃(t)/dt = ẋ(t) − dx̂(t)/dt = Ae[x(t) − x̂(t)] = Ae x̃(t).   (7.225)

The closed-loop dynamics of the plant in the presence of both the controller and the estimator become

ẋ(t) = Ax(t) + B[−Kx̂(t) + r(t)]
     = Ax(t) − BKx̂(t) + Br(t),   (7.226)

which, on eliminating x̂(t) using the relation x̃(t) = x(t) − x̂(t), gives

ẋ(t) = (A − BK)x(t) + BKx̃(t) + Br(t).   (7.227)

Similarly, the output equation becomes

y(t) = Cx(t) + D[−Kx̂(t) + r(t)]
     = Cx(t) − DKx̂(t) + Dr(t)
     = (C − DK)x(t) + DKx̃(t) + Dr(t).   (7.228)

Combining Equations 7.225, 7.227, and 7.228 into matrix equations gives

[ẋ(t); dx̃(t)/dt] = [A − BK  BK; 0  Ae][x(t); x̃(t)] + [B; 0] r(t)   (7.229)

y(t) = [C − DK  DK][x(t); x̃(t)] + [D] r(t).   (7.230)


In most cases, the systems of interest are causal, in which case the feed-forward matrix D is zero; hence, the output equation can be written as

y(t) = [C  0][x(t); x̃(t)].   (7.231)

The characteristic equation of this system is

ψ(s) = det[sI − (A − BK)  −BK; 0  sI − Ae] = 0.

Since the augmented system is block triangular, the characteristic equation can be written as

ψ(s) = det[sI − (A − BK)] · det[sI − Ae] = 0.

This indicates that the closed-loop poles of the whole system are just the poles of the plant that result from the selection of the feedback gain K, together with the desired estimator poles as dictated by the estimator closed-loop system matrix Ae, which depends on the choice of the estimator gain L. Thus, the state feedback gain K and the estimator gain L can be selected separately for the desired closed-loop behaviour. This means that the design of the feedback controller and that of the state estimator can be carried out separately. This is a special case of what is known as the separation principle, which holds even when noise and other additive disturbances are included in the system. The closed-loop transfer function GCL(s) from the reference input signal r(t) to the output measurement y(t) can be determined using the same principles as discussed before. Define

A* = [A − BK  BK; 0  Ae],  B* = [B; 0],   (7.232)

C* = [C  0],  X* = [x; x̃].   (7.233)

The system and output Equations 7.229 and 7.231 can then be written in compact form as

Ẋ* = A*X* + B*r
y = C*X*,

so that the closed-loop transfer function becomes

GCL(s) = C*(sI − A*)⁻¹B*.

Using the original matrices given in Equations 7.232 and 7.233 gives

GCL(s) = [C  0][sI − (A − BK)  −BK; 0  sI − Ae]⁻¹[B; 0],

which, owing to the block-triangular structure of the augmented matrix A*, reduces to

GCL(s) = C[sI − (A − BK)]⁻¹B.   (7.234)

This is precisely the same result as that obtained when full state feedback is applied. For the estimator error to vanish quickly in the augmented system, it is required that the estimator poles be chosen to be faster than those of the closed-loop poles, at least twice as much.
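The separation property above can be verified numerically. In this sketch (Python/NumPy; the plant and gain values are assumptions chosen so that the pole sets are known in advance), the block-triangular augmented matrix of Equation 7.229, with Ae = A − LC, has eigenvalues equal to the union of the controller poles and the estimator poles:

```python
import numpy as np

# Illustrative second-order plant (all values are assumptions for this sketch).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Gains chosen so that A - BK has poles {-2, -3} and A - LC has poles {-8, -9}.
K = np.array([[4.0, 2.0]])
L = np.array([[14.0],
              [28.0]])

# Block-triangular augmented matrix of Equation 7.229, with Ae = A - LC.
A_star = np.block([[A - B @ K,        B @ K],
                   [np.zeros((2, 2)), A - L @ C]])

# Because A_star is block triangular, its eigenvalues are exactly the
# controller poles together with the estimator poles.
print(np.sort(np.linalg.eigvals(A_star).real))
```

The sorted eigenvalues should be the union {−9, −8, −3, −2}, illustrating that the controller and estimator pole sets do not interact.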

7.12.5 A ROBUST TRACKING CONTROL SYSTEM

In the combined compensator, the gains can be chosen to produce zero steady-state error to a reference input, but the result is not robust: any change in the plant parameters will produce a non-zero steady-state error. Integral control is needed to achieve a robust tracking control system. For the state-feedback (control and estimator) design, an integrator can be inserted into the system in the forward path between the summing point and the plant, as illustrated in Figure 7.17. There are other techniques that can be used to give a control system the ability to track, with zero steady-state error, a non-decaying input and to reject a non-decaying disturbance such as a sinusoidal signal, step function, or ramp. The strategy is to include the dynamic mathematical models of these external signals in the formulation of the combined control and estimation problem, which is then solved in error space. This is because the problem of tracking a reference input and rejecting a disturbance can be conceptualised as an effort to design a control law that provides regulation of the error. As time progresses, that error must converge to zero. The control must also be structurally stable, or robust. This is the essence of robust control – an approach to controller design that

Figure 7.17 Integral control: in pursuit of robust tracking.

Figure 7.18 A robust tracking control system.


Figure 7.19 Electrical circuit.

explicitly deals with uncertainty. The strategy concentrates on the trade-offs between performance and stability in the presence of uncertainty in the system model, as well as the exogenous inputs to which it is subjected. Typically, the uncertainty is due to modelling errors, such as parameter inaccuracies, model order errors, unmodelled nonlinearities, and neglected disturbances. Robust control methods are designed to function effectively provided the uncertain parameters or disturbances are modelled and found to lie within a certain range. Figure 7.18 shows a robust tracking control system.

PROBLEMS

Problem 7.1 Consider the electrical circuit shown in Figure 7.19, where the input u(t) is a current and the output y(t) is a voltage.

(a) Explain why the state variables should be chosen as x1 = iL and x2 = vC.
(b) Write down state-variable equations for the system, and deduce the system's State-Space model (A, B, C, D).
(c) Explain the concepts of controllability and observability. How are these properties determined for a system whose model is given by the matrices (A, B, C, D)?
(d) In Figure 7.19, what conditions on R, L, and C will guarantee that the system is (i) controllable? (ii) observable?
(e) For the State-Space model derived in (b), if R = 1 Ohm, L = 1 H, and C = 1 F:
  (i) Determine the controllability and observability of the system.
  (ii) Derive the circuit's Transfer Function model using matrix algebra.
  (iii) What is the significance of the pole-zero cancellation in the derivation of the Transfer Function model?
  (iv) Derive the circuit's Transfer Function model using the state-variable equations of the system, obtained by working backwards from the State-Space model (A, B, C, D). Why is this result the same as that obtained in (ii)?
  (v) Design a block diagram for the circuit with one integrator for each state variable.

Problem 7.2 A linear system

ẋ = Ax + Bu
y = Cx + Du


has D = 0, where the plant and input matrices (A, B) are given by

A = [3 −2 0; 0 0 1; 4 −3 0],  B = [1; 0; 1].

(a) Discuss the controllability and stability of this system.
(b) Determine the feedback matrix K that will place the poles of the system at −1, −1 ± j.

Problem 7.3 For the linear system

ẋ = Ax + Bu
y = Cx + Du,

A = [0 0 −1; 1 0 −3; 0 1 −3],  B = [0; 0; 1],  C = [1 0 0; 0 1 0],  D = [0].

(a) Design a full-order state estimator for the system.
(b) Find an optimal control that minimises the cost function

∫₀^∞ (xᵀQx + uᵀRu) dt,

where

R = I,  Q = [3 0 0; 0 2 0; 0 0 1].

Problem 7.4 A linear system is described by

ẋ = [−2 0 1; 0 −2 1; 0 −3 −2] x + [1; 2; 2] u
y = [−1 1 0] x.

Using the standard notation of this chapter:
(a) Find the transfer function of this system and establish its controllability, observability, and stability.
(b) Find a non-singular transformation T such that T⁻¹AT is diagonal.
(c) Determine the State-Transition Matrix Φ(s).

Problem 7.5 The transfer function G(s) is given by

G(s) = (s² + 7s + 10)/(s³ + 8s² + 19s + 122).

(a) Find the controllable and observable canonical forms of the system.
(b) Draw the signal flow graph for the canonical form in (a).

Problem 7.6 Optimisation theory is a broad subject that was introduced in this chapter as a route toward optimal control theory. Use the given introduction to establish the minimum of f(x) = x1,

subject to the conditions

(x1 − 1)² + x2² = 1
(x1 + 1)² + x2² = 1,

where x = [x1 x2]ᵀ.

Problem 7.7 Obtain a state-space representation for a system whose differential equation is given by

d³x/dt³ + 3ẍ + 3ẋ + x = u̇ + u,

where the output is y = x.

(a) Use this result to determine the system transition matrices φ(t) and φ(s).
(b) Use Ackermann's formula to determine the controller K that places the roots of this system at −1, −2 ± 2j.

Problem 7.8 (a) Check the controllability and observability of the following two systems:

ẋ = [1 4 3; 0 2 16; 0 −25 −20] x + [−1; 0; 0] u,  y = [−1 3 0] x

ẋ = [1 0 0; 0 0 1; −2 −4 −3] x + [0; 0; −1] u,  y = [1 0 0] x.

(b) For each of the systems of part (a), find the controllable modes, uncontrollable modes, observable modes, and unobservable modes. (Hint: use the MATLAB commands ctrbf and obsvf.)
(c) For each of the systems of part (a), assuming the states are available for feedback, determine whether a state feedback controller can be designed to stabilise the system.
(d) For each of the systems of part (a), assuming the states are not available for feedback, determine whether a controller and an observer can be designed to stabilise the system.

Problem 7.9 A certain system with state x is described by the state matrices

A = [−2 1; −2 0],  B = [1; 3],  C = [1 0],  D = [0].

Find the transformation T so that, if x = Tz, the state matrices describing the dynamics of z are in controllable canonical form. Compute the new matrices Az, Bz, Cz, and Dz.

Problem 7.10 Consider the control system shown below.

(a) Find the transfer function from U(s) to Y(s).
(b) Write the state equations for the system using the state variables indicated.


Problem 7.11 Using the indicated state variables, obtain the state equations for the two systems shown below. Find the transfer function for each system using both block-diagram manipulation and matrix algebra.

[Two block diagrams, not reproduced here; the extraction residue indicates summing junctions, integrators 1/s, and blocks including G1 = (s+2)/(s+3), G2, H1 = 1/(2s), H2 = 1/(s+4), 1/(s+10), 1/(s+2), 4/s, 1/(s+3), and a gain of 5, with state variables x1 to x4.]

Problem 7.12 For each of the transfer functions below, write the state equations in both control and observer canonical form. In each case, draw a block diagram and give the appropriate expressions for A, B, and C.

(a) (s² + 1) / (s²(s² − 1))   (control of an inverted pendulum)

(b) (3s + 4) / (s² + 2s + 2)

Problem 7.13 The linearised equations of motion for a satellite are

ẋ = Ax + Bu
y = Cx + Du,

where

A = [0 1 0 0; 3ω² 0 0 2ω; 0 0 0 1; 0 −2ω 0 0],  B = [0 0; 1 0; 0 0; 0 1],

C = [1 0 0 0; 0 0 1 0],  D = [0],

u = [u1; u2],  y = [y1; y2].


The inputs u1 and u2 are the radial and tangential thrusts, the state variables x1 and x3 are the radial and angular deviations from the reference (circular) orbit, and the outputs y1 and y2 are the radial and angular measurements, respectively.

(a) Show that the system is controllable using both control inputs.
(b) Show that the system is controllable using only a single input. Which one is it?
(c) Show that the system is observable using both measurements.
(d) Show that the system is observable using only one measurement. Which one is it?

Problem 7.14 An unstable robot system is described by the vector differential equation

d/dt [x1; x2] = [1 0; −1 2][x1; x2] + [1; 1] u(t).

Both state variables are measurable, and so the control signal is set as u(t) = −K(x1 + x2).

(a) Design the gain K to minimise the performance index.
(b) Evaluate the minimum value of the performance index.
(c) Determine the sensitivity of the performance to a change in K. Assume that the initial conditions are x(0) = [1; 1].
(d) Is the system stable without the feedback signals due to u(t)?

Problem 7.15 A feedback system has a plant transfer function

G(s) = Y(s)/R(s) = K/(s(s + 70)).

It is desired that the velocity error constant Kv be 35 and the overshoot to a step be approximately 4%, so that ζ = 1/√2. The desired settling time (2% criterion) is 0.11 sec. Design an appropriate state-variable feedback system.

Problem 7.16 The following differential equations represent linear time-invariant systems. Write the dynamic equations (state equations and output equations) in vector form.

(a) d²y(t)/dt² + 4 dy(t)/dt + y(t) = 5r(t)

(b) 2 d³y(t)/dt³ + 3 d²y(t)/dt² + 5 dy(t)/dt + 2y(t) = r(t)

(c) d³y(t)/dt³ + 5 d²y(t)/dt² + 3 dy(t)/dt + y(t) + ∫₀ᵗ y(τ) dτ = r(t)

(d) d⁴y(t)/dt⁴ + (3/2) d³y(t)/dt³ + (5/2) dy(t)/dt + y(t) = 2r(t).


Problem 7.17 A linear time-invariant system is described by the differential equation

d³y(t)/dt³ + 3 d²y(t)/dt² + 3 dy(t)/dt + y(t) = r(t).

(a) Let the state variables be defined as x1 = y, x2 = dy/dt, x3 = d²y/dt². Write the state equations of the system in vector-matrix form.
(b) Find the State-Transition Matrix φ(t) of A.
(c) Let y(0) = 1, dy(0)/dt = 0, d²y(0)/dt² = 0, and r(t) = us(t). Find the state transition equation of the system.
(d) Find the characteristic equation and the eigenvalues of the system.

Note 1: Direct dependence of the variables on time t will not be shown; for example, y will be written instead of y(t) (and similarly x instead of x(t), z instead of z(t), and u instead of u(t)), unless the context requires otherwise.

8 Digital Control Systems

8.1 INTRODUCTION

In the Fourth Industrial Revolution, with the breakthrough in computer technology (both hardware and software), computers have been used as controllers for a broad range of dynamic systems. Most control systems that are implemented today are based on computer control where digital computers (usually microprocessors) are used. Under digital control, the controller is normally a digital computer, whereas the environment being controlled is usually characterised by continuous-time analogue signals. This creates a system with a hybrid of signals, i.e., some points have analogue signals while others have digital signals. Such systems are generally known as sampled data systems, meaning that analogue signals are sampled to represent discrete-time digital signals and vice versa. Some means of interfacing the digital computer to the analogue environment must be provided. In most applications, the interfacing is provided through the Digital to Analogue (D/A) and the Analogue to Digital (A/D) converters. However, a computer has three main features that distinguish its performance as a controller from classical analogue controllers. In the first place, a computer is a sequential machine that executes one instruction at a time; as such, the signals on which it works are discrete-time instead of continuous-time signals. On the other hand, the signals found in most physical systems are naturally continuous in time. Therefore, a need arises to discretise the system signals so that the computer in digital control can handle them. Second, the computer works on finite digital signals (numbers) instead of analogue signals, usually found in real-life systems. Again, this calls for signal digitisation, which, together with the discretisation, if not correctly handled, will have some undesirable effects on the system being controlled. Furthermore, unlike analogue electronics, digital computers cannot integrate. 
Therefore, the differential equations describing system compensation must be approximated by reducing them to algebraic equations. This chapter introduces the principles involved in digital control systems and discusses the issues that arise in the design and implementation of digital controllers. In particular, the motivation is to develop effective methods so that the digitisation and discretisation effects of continuous-time analogue signals are eliminated or, instead, minimised. This chapter proceeds by covering the digitisation process, sampled-data systems, the Z-transform, discrete-time systems, design and comparison of digital controllers, and digital state-space analysis. The discrete-time PID controller is derived, and the physical implementation of digital controllers is discussed. Extensive examples, including MATLAB exercises, are used to illustrate the material in this chapter.

8.2 DIGITISATION: SAMPLED DATA SYSTEMS

8.2.1 GENERAL STRUCTURE

The traditional way to implement control systems is through analogue implementations, typically using operational amplifiers, as illustrated in Chapter 4. This chapter explores the design of control systems that can be implemented on a digital computer. The term digital computer is used explicitly to distinguish it from an analogue computer. To digitise, signals must be sampled and quantised. The first step is to sample a signal and convert it to a quantised digital number with an A/D converter, thus obtaining a new digital control value. The next step is to convert the digital value to a voltage and keep it constant until the next control value has been computed. This is accomplished using a D/A converter with a ZOH (Zero-Order Hold) circuit.
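The sample-quantise-hold pipeline described above can be sketched in a few lines of Python (an illustrative sketch; the converter range, resolution, and test signal are assumptions, not values from the text):

```python
import math

def quantise(x, levels=256, lo=-1.0, hi=1.0):
    """Map an analogue value to the nearest of `levels` uniformly spaced codes (A/D)."""
    x = min(max(x, lo), hi)                      # clip to the converter range
    step = (hi - lo) / (levels - 1)
    code = round((x - lo) / step)                # integer code produced by the A/D
    return code, lo + code * step                # code and its analogue equivalent (D/A)

def sample_and_hold(signal, T, t_end):
    """Sample signal(t) every T seconds and hold each quantised value (ZOH)."""
    out = []
    t = 0.0
    while t < t_end:
        _, held = quantise(signal(t))
        out.append(held)                         # held constant until the next sample
        t += T
    return out

samples = sample_and_hold(lambda t: math.sin(t), T=0.5, t_end=2.0)
print(len(samples))   # -> 4
```

Each stored value is the quantised sample held over one sampling interval, which is exactly the staircase output a D/A converter with a ZOH produces.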


DOI: 10.1201/9781032718880-8


Under digital control, the controller is a digital computer or device, whereas the environment being controlled is usually characterised by continuous-time analogue signals. This creates a system with a hybrid of signals, i.e., some points have analogue signals while other points have digital signals. Such systems are generally known as sampled data systems, meaning that analogue signals are sampled to represent discrete-time digital signals and vice versa. Some means of interfacing the digital computer to the analogue environment must be provided. In most applications, the interfacing is supplied through the Digital to Analogue (D/A) and the Analogue to Digital (A/D) converters. The A/D converter takes analogue measurements from analogue environments and converts them to digital signals, which the computer can then use. Conversely, the D/A converter converts digital signals into equivalent analogue signals, which can then be applied to the controlled analogue environment. Figures 8.1 and 8.2 show the basic structures of the digital control system. The general digitisation process can also be represented as in Figure 8.3.

Figure 8.1 Digital control system: analogue input and analogue output.

Figure 8.2 Control systems structure: analogue output and digital input.

Figure 8.3 General digitisation process.


8.2.2 DATA SAMPLING

The process of data sampling can be viewed as an on-off switching process that picks up the analogue signal at specific intervals, resulting in a train of impulses. At the end of each switching pulse, the sampled signal is held at that level until the next sampling interval. In digital control terminology, the interface that takes in analogue signals and converts them to digital signals in this fashion is called the sample and hold. There are two elements in the sample-and-hold unit. The first is called the sampler, which does the switching, and the second is called the hold, which, as the name implies, holds the sampled signal at the same level until the next sampling instant. Figures 8.4 and 8.5 demonstrate how a digital signal is obtained by data sampling, while Figure 8.6 shows a digital signal representing a unit step response [12]. This section discusses the principles of the sample-and-hold process, which is divided into two parts, the "sampler" and the "hold."

8.2.2.1 Analysis of the Data Sampler

As noted before, the data sampler can be viewed as an on-off switch that takes in analogue signals, resulting in a train of impulses. A simple schematic diagram of the data sampler is shown in Figure 8.7. The sampling process can be expressed as the product of the analogue signal being sampled, r(t), and the train of impulses Σδ(t − kT), where T is the sampling interval and k is an integer index, so that the sampled signal r*(t), the discrete signal, can be expressed as

Figure 8.4 Sampling to get a digital signal.

Figure 8.5 A digital signal representing a continuous signal.


Figure 8.6 A digital signal representing a unit step response.

Figure 8.7 Data sampler.

r*(t) = Σ_{k=−∞}^{∞} r(t) δ(t − kT),   (8.1)

where δ(t) is the Dirac delta function. This mathematical representation is technically known as impulse modulation and, in the analysis, it is represented as a sample-and-hold process. The time interval between consecutive sampling instants is known as the sampling interval, and for periodic sampling it is constant. The sampling frequency, or simply the sample rate, expressed in samples per second (or Hertz), is the reciprocal of the sampling interval:

fs = 1/T (Hz),

where fs is the sampling frequency. Note that if Ns is the number of discrete samples per unit time (say per second), then there will be Ns sampling intervals T in this unit time. Thus

Ns T = 1,   (8.2)

which shows that the frequency fs equals the number of discrete-time samples Ns per unit time. Most often, it becomes necessary to express the sampling frequency in radians per second. Mathematically,

ωs = 2π fs = 2π/T,

where ωs is the circular (angular) sampling frequency in radians per second. Mathematical analysis of the sampled signal can be carried out using Laplace transform analysis, Fourier analysis or Z-transform analysis. The Fourier transform and the Z-transform analysis will be described later; at this stage, the Laplace transform (a more familiar technique) analysis is presented.


If the Laplace transform of the sampled signal in Equation 8.1 is taken, then

L[r*(t)] = R*(s) = ∫_{−∞}^{∞} r*(τ) e^{−sτ} dτ,

which gives

R*(s) = ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} r(τ) δ(τ − kT) e^{−sτ} dτ.

Interchanging the summation and the integration gives

R*(s) = Σ_{k=−∞}^{∞} ∫_{−∞}^{∞} r(τ) e^{−sτ} δ(τ − kT) dτ.   (8.3)

Since the impulse δ(τ − kT) is nonzero only at τ = kT, the exponential can be evaluated there and taken outside the integral, so that

R*(s) = Σ_{k=−∞}^{∞} e^{−skT} ∫_{−∞}^{∞} r(τ) δ(τ − kT) dτ.   (8.4)

For all functions f(t) which are continuous at ξ, the impulse function δ(t) has a shifting property given by

∫_{−∞}^{∞} f(t) δ(t − ξ) dt = f(ξ).

This property, when used in Equation 8.4, gives

R*(s) = Σ_{k=−∞}^{∞} e^{−skT} r(kT).   (8.5)

The sampled signal is normally defined for positive integers k. Hence, the range of integers from −∞ to 0 (the two-sided Laplace transform given in this equation) is rarely used. In most cases, it is the one-sided Laplace transform, defined as

R*(s) = Σ_{k=0}^{∞} e^{−skT} r(kT),   (8.6)

that is used. This transform, Equation 8.6, gives the continuous-time model of the sampled-data signal and a mathematical model of the sampler.

8.2.2.2 Analysis of the Hold Operation

The hold operation takes the impulses produced by the sampler and produces the piecewise-constant output of the sample-and-hold device. It is normally presented as a linear filter. Depending on the degree to which the sampled signal is held, several forms of hold units can be defined. However, the most common forms of hold are the zero-order hold (ZOH) and the first-order hold (FOH). The ZOH maintains the sampled signal at a constant level, while the FOH interpolates the signal linearly between the sampling instants. Since, in sampling processes, the next sampling value is never known, this type of hold (FOH) is never used in the sample-and-hold operation, though it might be used in the signal recovery (data extrapolation) process, discussed in the next sections. The only feasible hold used in this case is the ZOH, which is modelled as shown in Figure 8.8. The sampling process is illustrated in Figure 8.4, while the relationship between digital samples and the continuous signal is compared in Figure 8.5. For any sampled signal input r*(t), the output of the ZOH is defined as

rH(t) = r(kT),   kT ≤ t < (k + 1)T.


Figure 8.8 Zero-order hold (ZOH).

Therefore, its transfer function can be defined as

ZOH(s) = L[rH(t)] / L[r*(t)].   (8.7)

Considering a single sampling instant, the ZOH receives only one impulse from the sampler, not the whole train of impulses; therefore, r*(t) = δ(t). If this impulse is of unit strength, then rH(t) will be a pulse of unit height and duration T (equal to the sampling interval). Using the time delay function, the impulse response of the hold can be shown to be

rH(t) = 1(t) − 1(t − T),

where 1(t) is the unit step function. Therefore, the transfer function in Equation 8.7 can be derived as follows:

ZOH(s) = L[rH(t)] / L[r*(t)] = L[1(t) − 1(t − T)] / L[δ(t)] = (1/s − e^{−sT}/s) / 1,

so that

ZOH(s) = (1 − e^{−sT}) / s.   (8.8)

Thus, the whole A/D converter, which acts as a sample-and-hold unit, can be modelled as a combination of a sampler and a ZOH, whose diagrammatic representation is shown in Figure 8.9. In real practice, there is no interest in the intermediate signal r*(t). Normally, it is the output of the hold unit that is of interest because, even if one wanted r*(t), there is no way it could be obtained from the A/D converter. For this reason, the transfer function of the whole A/D converter is regarded as just the transfer function of the ZOH unit. For notational convenience, the sampled signal r(kT), where T is the sampling period, will be presented simply as r(k), except where the context makes it necessary to present it as r(kT). Generally, signals of the form r(kT ± nT) for any k, n = 1, 2, 3, 4, . . . will be presented simply as r(k ± n).

Figure 8.9 A/D converter (sampler and ZOH).


Example 8.1 A continuous-time signal r(t) = 2 sin 4t + cos 2t is sampled at a sampling rate of 10 rad/sec using a ZOH. If the sampling starts at time t = 0, determine the sampling interval T, the sample rate in samples per second, and the sampled value when t = 4 sec.

Solution 8.1 Since ωs = 2π/T, the sampling interval can be calculated as

T = 2π/ωs = 2π/10 = 0.628 sec.

Also, the sampling rate in samples per second is

Ns = fs = 1/T = 1.59 samples/sec.

At t = 4 sec, the number of complete samples covered is

N = Trunc(t/T) = Trunc(4/0.628) = 6,

i.e., because of the ZOH operation, the most recent sample is k = 6, taken at t = kT = 6 × 0.628 = 3.770 sec. Since the sample value at any instant k is r(kT) = 2 sin(4kT) + cos(2kT), the value held at k = 6 is

r(6T) = 2 sin(4.8π) + cos(2.4π) = 2 sin(0.8π) + cos(0.4π) = 2(0.5878) + 0.3090 = 1.48.
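The computation in Example 8.1 can be checked numerically (a sketch; note that the value held by the ZOH at t = 4 s is the sample taken at t = kT, not at t = k):

```python
import math

def zoh_sample(r, t, T):
    """Value held by a ZOH at time t for a signal sampled with period T."""
    k = math.floor(t / T)      # index of the most recent sampling instant
    return k, r(k * T)         # the sample is taken at t = k*T

r = lambda t: 2 * math.sin(4 * t) + math.cos(2 * t)
T = 2 * math.pi / 10           # sampling interval for omega_s = 10 rad/s

k, held = zoh_sample(r, 4.0, T)
print(k, round(held, 3))       # -> 6 1.485
```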

8.2.3 CHARACTERISTICS OF DISCRETE-TIME SIGNALS

8.2.3.1 Fourier Series Representation of Discrete-Time Signals

It is well known that any periodic signal has the property of repeating itself. For continuous-time signals, the periodic property is expressed as

r(t) = r(t + T),   (8.9)

where T is the period, in time units, after which the signal repeats the value it had in the previous period. For discrete-time signals, periodicity is measured in terms of the number of samples. Normally the periodic property of discrete-time signals is given as

r(k) = r(k + N),   (8.10)


where N is the period, given as the number of samples after which the discrete samples assume their previous values cyclically. By Fourier theory, any periodic signal can be represented as a sum of the harmonics of the fundamental frequency ω = 2π/N. (The reader should not confuse this period N with the sampling rate Ns given in Equation 8.2.) This is done for periodic discrete-time signals as

r(k) = Σ_{n=−∞}^{∞} a_n e^{2πj(n/N)k},   (8.11)

where j = √−1. The Fourier series equations are given by

r(k) = Σ_{n=0}^{N−1} a_n e^{2πj(n/N)k}

a_n = (1/N) Σ_{k=0}^{N−1} r(k) e^{−j(2πn/N)k},   (8.12)

which together form the discrete-time Fourier series pair expressing the discrete-time signal. Using the fact that r(k) = r(k + N), it can also be shown that a_n = a_{n+N}.
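The discrete-time Fourier series pair of Equation 8.12 can be verified numerically (an illustrative sketch with an assumed period-4 sequence):

```python
import cmath

def fourier_coeffs(r):
    """a_n = (1/N) * sum_k r(k) e^{-j 2 pi n k / N}  (analysis half of Eq. 8.12)."""
    N = len(r)
    return [sum(r[k] * cmath.exp(-2j * cmath.pi * n * k / N)
                for k in range(N)) / N for n in range(N)]

def reconstruct(a):
    """r(k) = sum_n a_n e^{+j 2 pi n k / N}  (synthesis half of Eq. 8.12)."""
    N = len(a)
    return [sum(a[n] * cmath.exp(2j * cmath.pi * n * k / N)
                for n in range(N)) for k in range(N)]

r = [1.0, 2.0, 0.0, -1.0]                         # one period, N = 4
a = fourier_coeffs(r)
r_back = reconstruct(a)
print([round(x.real, 6) + 0.0 for x in r_back])   # -> [1.0, 2.0, 0.0, -1.0]
```

Analysing one period with the second equation and resynthesising with the first recovers the original samples, confirming that the two equations form a transform pair.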

8.2.3.2 Spectrum of Sampled Signal

The spectrum of the sampled signal refers to the frequency components found in a sampled data signal. To study the spectrum of the sampled data signal, a deep understanding of the Discrete Fourier Transform (DFT) is required. Recall that the sampled signal is expressed as

r*(t) = Σ_{k=−∞}^{∞} r(t) δ(t − kT) = r(t) Σ_{k=−∞}^{∞} δ(t − kT).   (8.13)

The Fourier series of the train of impulses becomes

Σ_{k=−∞}^{∞} δ(t − kT) = (1/T) Σ_{n=−∞}^{∞} e^{2πj(n/T)t}.   (8.14)

On introducing the sampling frequency ωs = 2π/T in radians per second, Equation 8.14 becomes

Σ_{k=−∞}^{∞} δ(t − kT) = (1/T) Σ_{n=−∞}^{∞} e^{jnωs t}.   (8.15)

Using this equation, the sampled signal in Equation 8.13 is expressed as

r*(t) = (r(t)/T) Σ_{n=−∞}^{∞} e^{jnωs t}.   (8.16)


The Laplace transform of Equation 8.16 is given by

R*(s) = L[r*(t)] = ∫_{−∞}^{∞} (r(t)/T) Σ_{n=−∞}^{∞} e^{jnωs t} e^{−st} dt
      = (1/T) Σ_{n=−∞}^{∞} ∫_{−∞}^{∞} r(t) e^{−(s − jnωs)t} dt.

Using the shift-in-frequency theorem for Laplace transforms, it can be noted that

∫_{−∞}^{∞} r(t) e^{−(s − jnωs)t} dt = R(s − jnωs),   (8.17)

thus

R*(s) = (1/T) Σ_{n=−∞}^{∞} R(s − jnωs)
      = (1/T) Σ_{n=0}^{∞} R(s ± jnωs)
      = (1/T) Σ_{n=0}^{∞} R[j(ωo ± nωs)].   (8.18)

This indicates that the spectrum of the sampled signal contains an infinite number of components, replicated at all integral multiples of the sampling frequency ωs. This situation can be illustrated graphically.
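One practical consequence of this spectrum replication is aliasing: two sinusoids whose frequencies differ by an integral multiple of ωs produce identical samples. A small sketch (the frequencies and sampling interval are illustrative assumptions):

```python
import math

T = 0.1                        # sampling interval -> omega_s = 2*pi/T rad/s
ws = 2 * math.pi / T
w1 = 5.0                       # base frequency (rad/s)
w2 = w1 + ws                   # differs from w1 by exactly omega_s

s1 = [math.sin(w1 * k * T) for k in range(8)]
s2 = [math.sin(w2 * k * T) for k in range(8)]

# The two sample trains are indistinguishable, since sin(w2*k*T) = sin(w1*k*T + 2*pi*k)
diff = max(abs(a - b) for a, b in zip(s1, s2))
print(diff < 1e-9)   # -> True
```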

8.2.4 THE Z-TRANSFORM

8.2.4.1 Definition of the Z-Transform

The shift-in-time (q-operator) and the difference (δ-operator) are transformations that enable a system to be analysed in the time domain using algebraic rules. Most often in control engineering, the analysis must be carried out in both the time and the frequency domains. Just as the Laplace transform converts a continuous-time system from the time domain to the frequency domain, the Z-transform converts a discrete-time system from the time domain to the frequency domain. That is why the Z-transform is regarded as the discrete-time equivalent of the Laplace transform. In this section, the basics of the Z-transform and its properties are discussed. The application of the Z-transform in the analysis of discrete-time systems will become apparent in the following sections. Appendix A contains Z-transforms and their properties. Given discrete-time signal values {r(k)}_{k=−∞}^{∞}, the Z-transform is defined as follows:

R(z) ≜ Z{r(k)} = Σ_{k=−∞}^{∞} r(k) z^{−k},   (8.19)

where z is a complex variable with bounded magnitude such that, for some values ro and Ro,

ro < |z| < Ro,

so that the series Σ_{k=−∞}^{∞} r(k) z^{−k} converges.


The definition is general and holds for all values of k, but since the sequence {r(k)} is normally defined for positive integers k, the one-sided Z-transform is commonly used. This is defined as

R(z) = Σ_{k=0}^{∞} r(k) z^{−k},   (8.20)

where the z-variable is bounded such that, for some value ro, ro < |z|.

8.2.4.2 The Z-Transform and the Laplace Transform

To establish the relationship between the Z-transform and the Laplace transform, consider a continuous-time signal r(t) sampled at even intervals T of time, giving discrete-time signals {r(k)}_{k=−∞}^{∞}. It was shown that the Laplace transform of such discrete-time signals is

R*(s) = Σ_{k=−∞}^{∞} r(k) e^{−skT}.

From the definition of the Z-transform, the Z-transform of this signal is given by

R(z) = Σ_{k=−∞}^{∞} r(k) z^{−k}.

Since both the Laplace transform and the Z-transform map the same sampled signal from the time space to the complex frequency space, they are equal, i.e., R*(s) = R(z), or

Σ_{k=−∞}^{∞} r(k) e^{−skT} = Σ_{k=−∞}^{∞} r(k) z^{−k}.   (8.21)

From this equality, it follows that e^{−skT} = z^{−k}, or simply

z = e^{sT}.   (8.22)

This is the key equation that transforms a system between the Laplace domain (continuous-time space) and the Z-domain (discrete-time space). It shows how the Z-transform and the Laplace transform are related. Using this relationship, it is possible to analyse a system in the continuous-time domain using Laplace transforms and then convert the results into the discrete-time domain, and vice versa. The relationship is widely used in control design via the pole-zero matching technique, which will be discussed in later sections. The most obvious conclusion that can be drawn from this relationship is that if the system has a pole or zero at s = −a in the s-plane, then in the z-plane this pole or zero will be at e^{−aT}. Similarly, a zero at b in the z-plane implies a zero at (1/T) ln b in the s-plane. The relationship between the two transforms can be studied further from the table of Laplace and Z-transforms for various continuous-time signals using a sampling interval of T.
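The mapping z = e^{sT} and its inverse s = (1/T) ln z can be checked numerically (the sampling interval and pole location are illustrative assumptions):

```python
import math

T = 0.5                        # sampling interval (illustrative)
s_pole = -2.0                  # pole at s = -a in the s-plane
z_pole = math.exp(s_pole * T)  # maps to z = e^{sT}

# Mapping back: s = (1/T) ln z
s_back = math.log(z_pole) / T
print(round(z_pole, 4), round(s_back, 6))   # -> 0.3679 -2.0
```

A stable s-plane pole (negative real part) lands inside the unit circle (|z| < 1), which foreshadows the digital stability boundary discussed later in the chapter.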

8.2.4.3 Properties of the Z-Transforms

The Z-transforms along with their counterpart Laplace transforms for different discrete-time signals are shown in Appendix A. This table is helpful in most digital control systems analysis and design applications. The reader is encouraged to become familiar with it. In this section, the fundamental properties of the Z-transform are discussed. These help simplify most transformation problems as dictated by the property itself.

8.2.4.3.1 Linearity

The Z-transform is a linear operator. This means that if two discrete-time signals f(k) and g(k) combine such that the principle of superposition holds, i.e., if for some scalars α and β

r(k) = α f(k) + β g(k), which implies r(k + n) = α f(k + n) + β g(k + n),

then the Z-transform becomes

Z{r(k)} = Z{α f(k) + β g(k)} = α Z{f(k)} + β Z{g(k)},

or simply

Z{α f(k) + β g(k)} = α F(z) + β G(z),   (8.23)

where F(z) = Z{f(k)} and G(z) = Z{g(k)}.

8.2.4.3.2 The Convolution Theorem

It has been shown before that in continuous-time systems for which the principle of superposition holds, the signal u(t) is said to be a convolution of two signals r(t) and h(t) if

u(t) = ∫_{−∞}^{∞} r(τ) h(t − τ) dτ,

and the time scale is valid for both positive and negative values. It was also shown that, since in most applications time is counted only on the positive side up to some time limit t, the convolution then becomes

u(t) = ∫_{0}^{t} r(τ) h(t − τ) dτ.

In either of these cases, the convolution is expressed symbolically as u(t) = r(t) ∗ h(t). The discrete-time equivalent of this convolution is the convolution summation. If the discrete-time signals are available for any instant k in the range (−∞, ∞), then the discrete-time signal u(k) is said to be the convolution of two discrete-time signals r(k) and h(k) if

u(k) = Σ_{j=−∞}^{∞} r(j) h(k − j).

Since the sampled signal is normally only available at positive finite k instants of time, the convolution summation becomes

u(k) = Σ_{j=0}^{k} r(j) h(k − j).

This is also expressed as u(k) = r(k) ∗ h(k). Now, if the Z-transform is taken on both sides of this convolution summation, the result is

U(z) = Z{u(k)} = Σ_{k=0}^{∞} [ Σ_{j=0}^{k} r(j) h(k − j) ] z^{−k}.   (8.24)


Rearranging the terms in this expression (Equation 8.24) gives

U(z) = Σ_{j=0}^{∞} r(j) Σ_{k=0}^{∞} h(k − j) z^{−k}.   (8.25)

Letting (k − j) = m and using this substitution in Equation 8.25 leads to

U(z) = Σ_{j=0}^{∞} r(j) Σ_{m=−j}^{∞} h(m) z^{−m−j}
     = Σ_{j=0}^{∞} r(j) z^{−j} [ Σ_{m=−j}^{−1} h(m) z^{−m} + Σ_{m=0}^{∞} h(m) z^{−m} ].   (8.26)

Considering the one-sided Z-transforms, where it is assumed that signals at negative instants are not defined, then

Σ_{m=−j}^{−1} h(m) z^{−m} = 0,

which reduces Equation 8.26 to

U(z) = Σ_{j=0}^{∞} r(j) z^{−j} Σ_{m=0}^{∞} h(m) z^{−m}.   (8.27)

From the definition of the Z-transform it follows that

Σ_{j=0}^{∞} r(j) z^{−j} = Z{r(k)} = R(z)

Σ_{m=0}^{∞} h(m) z^{−m} = Z{h(k)} = H(z).   (8.28)

Thus, the Z-transform of the convolution summation becomes

U(z) = R(z) H(z),

which means that the Z-transform of a discrete-time signal that is a convolution of two discrete-time signals is just the product of the Z-transforms of the individual signals.
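The convolution theorem can be verified numerically for finite sequences, whose one-sided Z-transforms are polynomials in z^{−1} (an illustrative sketch with assumed sequences and test point):

```python
def convolve(r, h):
    """Convolution summation u(k) = sum_j r(j) h(k - j) for finite sequences."""
    u = [0.0] * (len(r) + len(h) - 1)
    for j, rj in enumerate(r):
        for m, hm in enumerate(h):
            u[j + m] += rj * hm
    return u

def ztrans(seq, z):
    """Evaluate the one-sided Z-transform of a finite sequence at a point z."""
    return sum(x * z ** (-k) for k, x in enumerate(seq))

r = [1.0, 2.0, 3.0]
h = [4.0, 5.0]
u = convolve(r, h)
z = 1.5   # arbitrary test point with |z| large enough for convergence

# U(z) equals R(z) * H(z), as the convolution theorem states
print(u, abs(ztrans(u, z) - ztrans(r, z) * ztrans(h, z)) < 1e-9)
```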

8.2.5 Z-TRANSFORM IN DIGITAL CONTROL: A SUMMARY

As detailed previously, the Z-transform is the mathematical tool for the analysis of discrete systems; its role is equivalent to that of the Laplace transform in continuous systems. Its key property is that

Z{f(k − 1)} = z^{−1} F(z).   (8.29)

With this relationship, the transfer function of a system can be found from its difference equation. The inverse Z-transform yields either a sampled time series of the system response or a general closed-form expression for it. The relationship between s and z, as established in Equation 8.22, is given by

z = e^{sT},   (8.30)


where T is the sampling time. This equation effectively transforms a system from the s-plane to the z-plane by mapping the system poles from the s-plane to the z-plane. Hence we have the Table of Z-transforms, which is analogous to the Table of Laplace transforms. The Z-transform of a transfer function G(s) preceded by a ZOH is given by

G(z) = (1 − z^{−1}) Z{ G(s)/s }.   (8.31)

It is important to note that the capital Z above denotes the Z-transform operator, a function and not a variable. After taking the Z-transform from the tables, the Z is not kept as a variable. For example, from the Z-transform table,

Z{ 1/s³ } = T² z(z + 1) / (2(z − 1)³).

Hence, after this Z-transform has been obtained, in the expression

Z[ T² z(z + 1) / (2(z − 1)³) ]

the Z that appears outside the brackets should not be there.
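As a check of Equation 8.31, take the illustrative first-order plant G(s) = 1/(s + 1). Then G(s)/s = 1/s − 1/(s + 1), whose tabulated Z-transforms give z/(z − 1) − z/(z − e^{−T}), and Equation 8.31 simplifies to G(z) = (1 − e^{−T})/(z − e^{−T}). A numerical sketch comparing the two forms:

```python
import math

T = 0.2  # illustrative sampling time

def G_z_from_table(z):
    # (1 - z^-1) * [ z/(z - 1) - z/(z - e^{-T}) ], i.e., Eq. 8.31 via the tables
    zt = z / (z - 1) - z / (z - math.exp(-T))
    return (1 - 1 / z) * zt

def G_z_closed(z):
    # Simplified closed form: (1 - e^{-T}) / (z - e^{-T})
    return (1 - math.exp(-T)) / (z - math.exp(-T))

# The two forms agree at any test point away from the poles
for z in (1.5, 2.0, -3.0):
    assert abs(G_z_from_table(z) - G_z_closed(z)) < 1e-12
print("ZOH equivalents match")
```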

8.2.6 USE OF THE SYSTEM DC GAIN

A key property used in digitisation approximation methods is the equality between the DC gains of the continuous and discrete systems.

8.2.6.1 Continuous-Time System DC Gain

As already explained in Chapter 2, for a continuous system, the DC gain is obtained by using the Final Value Theorem (FVT) while assuming a unit step input, U(s) = 1/s:

D(s) = Y(s)/U(s) ⟹ Y(s) = D(s)U(s)

DC gain = yss = lim_{s→0} s Y(s)   (applying the FVT)
        = lim_{s→0} s D(s) U(s)
        = lim_{s→0} s D(s) (1/s)

⟹ DC gain = lim_{s→0} D(s).   (8.32)

This is the DC gain for a continuous-time system.


8.2.6.2 Discrete-Time System DC Gain

For a discrete-time system, the DC gain is obtained by using the Final Value Theorem while assuming a unit step input, i.e., U(z) = z/(z − 1):

D(z) = Y(z)/U(z) ⟹ Y(z) = D(z)U(z)

DC gain = yss = lim_{z→1} (z − 1) Y(z)   (applying the FVT)
        = lim_{z→1} (z − 1) D(z) U(z)
        = lim_{z→1} (z − 1) D(z) [z/(z − 1)]

⟹ DC gain = lim_{z→1} z D(z).   (8.33)

This is the DC gain of a discrete-time system. In digitisation approximation methods, the equality between this discrete DC gain and that of the continuous-time system is of significant use and importance. This is demonstrated in the following sections.

8.3 KEY DIGITAL CONTROL DESIGN METHODS

There are two methods used in designing digital controllers: (i) Equivalent Digital Control Design (EDCD) and (ii) Direct Digital Control Design (DDCD). EDCD involves designing an analogue controller and then converting it into the digital domain via a mapping process. DDCD involves designing a digital controller directly in the digital domain, using discrete transfer functions or state-space formulations.

8.3.1 EQUIVALENT DIGITAL CONTROL DESIGN

There are three key EDCD methods: (i) Matched Pole-Zero, (ii) Modified Matched Pole-Zero, and (iii) Tustin (Bilinear Approximation). The general EDCD process is illustrated in Figure 8.10.

8.3.1.1 Matched Pole-Zero (MPZ)

The Matched Pole-Zero (MPZ) method is an equivalent digital control design approach where the controller is converted to the z-plane by matching the zeros and poles of a continuous system with those of its corresponding discrete system. This mapping is achieved by using the relationship defined by the Z-transform:

z = e^{sT}.   (8.34)

This equation relates the s-plane poles or zeros to the z-plane poles or zeros, respectively. To derive the MPZ digital controller, consider a general continuous controller given by

D(s) = Kc (s + a)/(s + b),   (8.35)

where Kc is the continuous controller gain, −a is the zero and −b is the pole. From the s to z transformation of zeros and poles given by Equation 8.34, the z-plane zero and z-plane pole are e^{−aT} and e^{−bT}, respectively. Hence, the equivalent MPZ digital controller is given by

D(z) = Kd (z − e^{−aT}) / (z − e^{−bT}),   (8.36)


Figure 8.10 Equivalent digital control design (EDCD) [11].

where the expression for the controller gain Kd is derived by considering the general continuous controller in Equation 8.35. Kd is obtained by using the fact that the DC gains of the two systems are equal. Using the expressions for the continuous and discrete DC gains (obtained from the Final Value Theorem) in Section 8.2.6, it follows that:

DC gain (continuous) = lim_{s→0} D(s) = lim_{s→0} Kc (s + a)/(s + b) = Kc (a/b)

DC gain (digital) = lim_{z→1} z D(z) = lim_{z→1} z Kd (z − e^{−aT})/(z − e^{−bT}) = Kd (1 − e^{−aT})/(1 − e^{−bT})

⟹ Kc (a/b) = Kd (1 − e^{−aT})/(1 − e^{−bT})

⟹ Kd = Kc (a/b) (1 − e^{−bT})/(1 − e^{−aT}).   (8.37)

Now, with an expression for Kd, the definition of the MPZ digital controller is complete. The controller can also be expressed interchangeably as:

D(z) = Kd (z − e^{−aT}) / (z − e^{−bT})   (8.38)

⟹ D(z) = Kc (a/b) [(1 − e^{−bT})/(1 − e^{−aT})] (z − e^{−aT}) / (z − e^{−bT}).   (8.39)

However, the first expression, Equation 8.38, is easier to use and manipulate. This is the MPZ equivalent digital controller for any continuous controller of the form in Equation 8.35 and any sampling rate. Its structure and properties will be further illustrated through worked examples in later sections. At this stage, it is prudent to note from its general equation that the DC gain (obtained from the FVT for discrete systems) of the MPZ controller is independent of the sampling rate. It is obtained as follows:

DC gain (MPZ) = lim_{z→1} z D(z) = lim_{z→1} z Kc (a/b) [(1 − e^{−bT})/(1 − e^{−aT})] (z − e^{−aT})/(z − e^{−bT}) = Kc (a/b).

This is the same as the DC gain of the continuous system, as expected [11].
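The MPZ conversion of Equations 8.36 and 8.37 can be sketched directly (the controller parameters and sampling time below are illustrative assumptions):

```python
import math

def mpz(Kc, a, b, T):
    """Matched Pole-Zero: map D(s) = Kc (s+a)/(s+b) to D(z) = Kd (z-zd)/(z-pd)."""
    zd = math.exp(-a * T)                      # z-plane zero, from z = e^{sT}
    pd = math.exp(-b * T)                      # z-plane pole
    Kd = Kc * (a / b) * (1 - pd) / (1 - zd)    # DC-gain matching (Eq. 8.37)
    return Kd, zd, pd

Kc, a, b, T = 5.0, 2.0, 10.0, 0.05
Kd, zd, pd = mpz(Kc, a, b, T)

# The discrete DC gain lim_{z->1} z D(z) equals the continuous DC gain Kc*a/b
dc_discrete = Kd * (1 - zd) / (1 - pd)
print(abs(dc_discrete - Kc * a / b) < 1e-12)   # -> True
```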

8.3.1.2 Modified Matched Pole-Zero (MMPZ)

Modified Matched Pole-Zero (MMPZ) is an equivalent digital control design approach that matches zeros and poles like MPZ but takes the computational time delay into account. With MPZ, the control signal u(k) depends on the error signal e(k); that is, the current output depends on the current sample, which implies zero computational time. This is not a realistic assumption. If the sampling is fast enough, this should be close to being realisable; however, in most cases, the computational time is significant. In that scenario, the MPZ is inadequate, creating the motivation for the MMPZ method. The MMPZ controller is expressed as a rational function with one less power of z in the numerator than in the denominator. This is a more practical and realistic approximation of the continuous controller than the MPZ method. However, MMPZ's performance is poorer than MPZ's because it uses data from instant (k − 1) at instant k. Mathematically, the continuous and MMPZ controllers are represented together as follows:

D(s) = Kc (s + a)/(s + b)

D(z) = Kd (z − e^{−aT}) / [z (z − e^{−bT})],   (8.40)

where Kd, the digital controller gain, is as defined for the MPZ in Equation 8.37. This is the MMPZ equivalent controller, and its derivation and properties will be illustrated through worked examples in later sections [11].

8.3.1.3 Tustin (Bilinear Approximation)

The Tustin equivalent control design method involves transforming a controller from the s-domain to the z-domain by performing the following substitution in the original continuous controller given by Equation 8.35:

s = (2/T) (z − 1)/(z + 1)

⟹ D(z) = Kc [ (2/T)(z − 1)/(z + 1) + a ] / [ (2/T)(z − 1)/(z + 1) + b ]   (8.41)

= Kc [ z(aT + 2) + aT − 2 ] / [ z(bT + 2) + bT − 2 ]   (8.42)

⟹ D(z) = Kc [(aT + 2)/(bT + 2)] [ z + (aT − 2)/(aT + 2) ] / [ z + (bT − 2)/(bT + 2) ].   (8.43)

These three general equations are different but equivalent ways of representing the Tustin equivalent controller for any continuous controller of the structure in Equation 8.35, using any sampling time T. It is instructive to note that the DC gain of a Tustin controller is independent of the sampling rate. Using the Final Value Theorem for discrete systems, it can be shown that this DC gain is given by

DC gain (Tustin) = lim_{z→1} z D(z) = lim_{z→1} z Kc [ (2/T)(z − 1)/(z + 1) + a ] / [ (2/T)(z − 1)/(z + 1) + b ] = Kc (a/b).

This is the same as the DC gain of the continuous system and the MPZ digital controller. Furthermore, at very high sampling rates, that is, as T → 0 (high-frequency sampling), the Tustin (Bilinear) controller gain Kt approaches the continuous controller gain Kc and becomes independent of z. This is illustrated as follows:

Kt = lim_{T→0} D(z) = lim_{T→0} Kc [ z(aT + 2) + aT − 2 ] / [ z(bT + 2) + bT − 2 ] = Kc.

The derivation of the Tustin controller and its properties will be explored and demonstrated through worked examples in later sections.
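The Tustin controller of Equation 8.43 can likewise be sketched, with a numerical check that its DC gain equals Kc(a/b) (the parameter values are illustrative assumptions):

```python
def tustin(Kc, a, b, T):
    """Tustin map of D(s) = Kc (s+a)/(s+b): gain, zero, pole of D(z) per Eq. 8.43."""
    Kt = Kc * (a * T + 2) / (b * T + 2)
    zero = -(a * T - 2) / (a * T + 2)
    pole = -(b * T - 2) / (b * T + 2)
    return Kt, zero, pole

Kc, a, b, T = 5.0, 2.0, 10.0, 0.05
Kt, zero, pole = tustin(Kc, a, b, T)

# DC gain lim_{z->1} z D(z) = Kt (1 - zero)/(1 - pole) equals Kc*a/b
dc = Kt * (1 - zero) / (1 - pole)
print(abs(dc - Kc * a / b) < 1e-9)   # -> True
```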

8.3.2 ASSESSMENT AND COMPARISON OF EDCD METHODS

EDCD methods (MPZ, MMPZ, Tustin) are only effective at high sampling rates, at least 20ωn (20 times the natural frequency). At 10ωn (10 times the natural frequency) or lower, distortions and instability set in.


Figure 8.11 Impact of sampling on EDCD methods [11].

This is one reason for using Direct Digital Control Design (DDCD), which does not depend on the sampling rate. Furthermore, the approximation methods involve back-and-forth A/D and D/A conversions. At high sampling rates, the structures of the MPZ and Tustin digital controllers (gains, poles, and zeros) become the same. This is because, with high sampling rates, there is more data and less distortion, so the digital system becomes closer to the continuous one. As the sampling frequency increases from 1 to 100 Hz, performance improves towards that of the continuous system, and the controllers converge into the same controller, producing the same good performance. The impact of sampling on EDCD methods is shown in Figure 8.11. In fact, at very high frequencies, the Tustin digital controller is identical to the MPZ controller: the poles, zeros and controller gains are precisely the same. This is the impact of using a higher sampling rate (100 Hz instead of 1 Hz). The performance of the controllers dramatically improves as the outputs of the system they are controlling become exactly the same as those of the continuous control system. Hence the MPZ and Tustin controllers converge to become identical controllers, which produce the same outstanding performance. As already established, the DC gains of the continuous controller, the MPZ digital controller and the Tustin digital controller are the same and independent of sampling time. All these assessments and comparisons of equivalent digital control methods are shown in Figure 8.11.

8.3.3 DIRECT DIGITAL CONTROL DESIGN

As explained earlier, with the DDCD method, the digital controller is designed directly in the digital domain. Everything is digital: the input R(z), output Y(z), plant G(z), sensor signal H(z), and controller D(z), thus producing a direct digital controller. Why DDCD? Equivalent Digital Control Design (EDCD) methods (MPZ, MMPZ, Tustin) are only effective at high sampling rates, at least 20ωn (20 times the natural frequency). At 10ωn (10 times the natural frequency) or lower, distortions and instability set in. This is why Direct Digital Control Design (DDCD), which does not depend on the sampling rate, is used. Furthermore, the approximation methods involve back-and-forth A/D and D/A conversions. All this is avoided by using DDCD, which involves digitising both the controller and the plant model.


Design and Analysis of Control Systems

For the DDCD controller, a digital plant model is required. To compute the digital plant model, the impact of the ZOH circuit on the plant dynamics needs to be considered. The digitisation of the plant by the ZOH is shown in Figure 8.12. This results in an exact system description, as the ZOH circuit describes accurately what is occurring between samples. As discussed earlier, for a plant G(s) preceded by a ZOH, the discrete transfer function is given by

G(z) = \left(1 - z^{-1}\right) \mathcal{Z}\left\{\frac{G(s)}{s}\right\}

The complete generic DDCD system (input R(z), output Y(z), plant G(z), sensor signal H(z), and controller D(z)) is shown in Figure 8.13. In most analyses unity feedback is used, which means H(z) = 1. This simplified version of the DDCD system is depicted in Figure 8.14. As explained in the motivation for the DDCD system, its performance is markedly and distinctively better than that of the EDCD configuration. Figure 8.15 compares the two systems with respect to the continuous-time system. For the negative unity feedback direct digital control system shown in Figure 8.14, it follows that the Transfer Function model of the closed-loop system is given by

T_D(z) = \frac{Y(z)}{R(z)} = \frac{D(z)G(z)}{1 + D(z)G(z)}. \quad (8.44)

Hence the digital characteristic equation is given by

1 + D(z)G(z) = 0. \quad (8.45)
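The characteristic equation above can be solved numerically. The following Python sketch uses made-up first-order controller and plant polynomials (not a system from the book) purely to illustrate the mechanics.

```python
# Sketch of solving the digital characteristic equation 1 + D(z)G(z) = 0.
# D(z) and G(z) below are assumed illustrative examples.
import numpy as np

Dnum, Dden = np.poly1d([1.0, -0.5]), np.poly1d([1.0, -0.2])   # D(z) = (z-0.5)/(z-0.2)
Gnum, Gden = np.poly1d([0.1]), np.poly1d([1.0, -0.9])         # G(z) = 0.1/(z-0.9)

# 1 + D(z)G(z) = 0  <=>  Dden*Gden + Dnum*Gnum = 0
char_poly = Dden * Gden + Dnum * Gnum
poles = np.roots(char_poly.coeffs)
print(poles)                              # closed-loop poles in the z-plane
print(all(abs(p) < 1 for p in poles))     # stable if all lie inside |z| < 1
```

For these assumed polynomials the characteristic polynomial is z² − z + 0.13, whose roots lie inside the unit circle, so the closed loop is stable.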

Thus the closed-loop poles or roots can be obtained, and the continuous-time root locus analysis and techniques will apply without modification to the digital system. The Root Locus design methods in

Figure 8.12 Digitising the plant: G(s) → G(z).

Figure 8.13 Generic direct digital control design (DDCD).

Figure 8.14 Direct digital control design (DDCD).


Digital Control Systems

Figure 8.15 Comparison: DDCD, EDCD and continuous design methods [11].

z-plane are similar to those of continuous systems. Similarly, the analysis and design of digital PID-type controllers (P, PD, PI and PID) follow the same process as for continuous systems. The same applies to digital compensators (lead, lag, and lead-lag). However, it is prudent to note that the significant difference is that the stability boundary for a digital system is now the unit circle and not the imaginary axis: poles inside the unit circle imply system stability, poles outside the unit circle indicate instability, and poles lying on the unit circle suggest marginal stability. These matters are discussed in more detail later [11].

8.3.4 STATE-SPACE ANALYSIS

Digital state-space representation is also called Discrete State-Variable Matrix modelling. The Digital State-Space model involves four system matrices together with the associated vectors: the input (control) vector u(k), the state vector x(k) and its successor state x(k + 1), and the digital output vector y(k). Hence the Digital State-Space model of a dynamic system is represented as follows:

x(k + 1) = Fx(k) + Gu(k)
y(k) = Hx(k) + Ju(k)

The parameters and variables are defined as follows: F represents the system matrix, G the input matrix, H the output matrix, J the feed-forward matrix, x(k) the digitised state vector, and u(k) the digital input or digital control vector.


The four matrices F, G, H, and J are the counterparts of the continuous-time matrices A, B, C, and D [12]. The discrete state-space system matrix F for a digital system is given by

F = e^{AT} = I + AT + \frac{A^2T^2}{2!} + \frac{A^3T^3}{3!} + \cdots + \frac{A^nT^n}{n!} + \cdots = \sum_{n=0}^{\infty} \frac{A^nT^n}{n!}

A practical approximation truncates this series after three terms:

F \approx I + AT + \frac{A^2T^2}{2!}

This expression with three terms is sufficiently accurate for small sampling times T. The discrete state-space input matrix G for a digital system is given by



G = \sum_{n=0}^{\infty} \frac{A^nT^{n+1}}{(n+1)!}\,B = \left( IT + \frac{AT^2}{2} + \frac{A^2T^3}{3!} + \cdots + \frac{A^nT^{n+1}}{(n+1)!} + \cdots \right) B

A practical version is hence:

G \approx \left( IT + \frac{AT^2}{2} + \frac{A^2T^3}{3!} \right) B

This expression with three terms is again sufficient for small T. Together with the digital output matrix H and the digital feed-forward matrix J, the matrices F and G complete the Digital State-Space model. Hence state-space analysis can be carried out in the same way as for continuous-time systems.

Example 8.2 The continuous-time state-space description of a system is given by

\dot{x}(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t),

where

A = \begin{bmatrix} 2 & 4 \\ 1 & 5 \end{bmatrix} \qquad B = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 1 \end{bmatrix} \qquad D = 0

Give the corresponding discrete-time state-space description of this system using a sampling time of 0.001 seconds (a sampling rate of 1,000 Hz).

Solution 8.2 The solution is obtained by direct application of the general state matrix equation

F = \sum_{n=0}^{\infty} \frac{A^nT^n}{n!}.


With the given sampling interval, two terms of the series are sufficient, so that

F \approx I + AT
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 2 & 4 \\ 1 & 5 \end{bmatrix} T
= \begin{bmatrix} 1 + 2T & 4T \\ T & 1 + 5T \end{bmatrix}
= \begin{bmatrix} 1.002 & 0.004 \\ 0.001 & 1.005 \end{bmatrix}

For the matrix G, even one term is enough:

G = \sum_{n=0}^{\infty} \frac{A^nT^{n+1}}{(n+1)!}\,B \approx IT\,B
= \begin{bmatrix} T & 0 \\ 0 & T \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix}
= \begin{bmatrix} T \\ 2T \end{bmatrix}
= \begin{bmatrix} 0.001 \\ 0.002 \end{bmatrix}

The matrices H and J remain the same as the C and D of the continuous-time system:

H = \begin{bmatrix} 1 & 1 \end{bmatrix} \qquad J = [0]

With the four matrices (F, G, H and J) obtained, they can now be used in the discrete-time state-space system described by

x(k + 1) = Fx(k) + Gu(k) \quad (8.46)
y(k) = Hx(k) + Ju(k). \quad (8.47)

It can be shown that the higher the sampling rate, the closer the system is to the continuous-time system. For example, if the sampling time is reduced to 10^{-6} seconds (an increase of the sampling rate to 10^6 Hz), the two discrete system matrices approach the following outcomes:

F = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad G = \begin{bmatrix} 0 \\ 0 \end{bmatrix}

Hence the discrete-time State-Space model becomes

x(k + 1) = x(k) \quad (8.48)
y(k) = Hx(k) + Ju(k). \quad (8.49)

This is precisely the same as the continuous-time system. The higher the sampling rate, the closer the digital system gets to the continuous-time system.
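The truncations used in Solution 8.2 can be checked against an exact zero-order-hold discretisation. The following is a Python/SciPy sketch (the book's own tooling is MATLAB):

```python
# Numerical check of Example 8.2: the truncated series F ~ I + AT and G ~ TB
# are compared with SciPy's exact zero-order-hold discretisation.
import numpy as np
from scipy.signal import cont2discrete

A = np.array([[2.0, 4.0], [1.0, 5.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
T = 0.001

F_series = np.eye(2) + A * T     # two-term truncation: [[1.002, 0.004], [0.001, 1.005]]
G_series = T * B                 # one-term truncation: [[0.001], [0.002]]

F, G, H, J, _ = cont2discrete((A, B, C, D), T, method='zoh')
print(np.max(np.abs(F - F_series)))   # truncation error, of order A^2 T^2 / 2
print(np.max(np.abs(G - G_series)))
```

The truncation errors are of the order of 10⁻⁵ at T = 0.001, confirming that two terms for F and one for G are adequate at this sampling rate.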

8.3.5 SYSTEM TRANSFER FUNCTION

When the system is presented in discrete-time form using state variables as shown above, the analysis of the transfer function, controllability, observability, similarity transformation and canonical forms is done in exactly the same way as was shown for continuous-time systems in Chapter 7. For the transfer function, consider the discrete-time state-space system given by

x(k + 1) = Fx(k) + Gu(k)
y(k) = Hx(k) + Ju(k).

Taking the Z-transform of both these equations leads to

zX(z) = FX(z) + GU(z) \quad (8.50)
Y(z) = HX(z) + JU(z). \quad (8.51)

From Equation 8.50 it can be seen that

(zI - F)X(z) = GU(z) \implies X(z) = (zI - F)^{-1}GU(z),

which, when used in Equation 8.51, leads to

Y(z) = H(zI - F)^{-1}GU(z) + JU(z) = \left[ H(zI - F)^{-1}G + J \right] U(z)

T(z) = \frac{Y(z)}{U(z)} = H(zI - F)^{-1}G + J

This is the transfer matrix of a digital system. It is similar in form to the one obtained for continuous-time systems. For SISO systems, where G is a column vector and H is a row vector, this transfer matrix becomes a scalar function, known as the transfer function of the digital system.

8.3.6 CONTROLLABILITY AND OBSERVABILITY

As already stated, the controllability and observability of digital systems follow the same analysis as for continuous systems; the digital state-space matrices simply replace the continuous ones [12]. Consider the general discrete-time state-space system given by

x(k + 1) = Fx(k) + Gu(k)
y(k) = Hx(k) + Ju(k).

8.3.6.1 Controllability

If an input to the digital system can be found that takes every discrete state from a desired initial discrete state to a desired final discrete state, the system is controllable. Mathematically, this is established by deriving the Controllability Matrix, which is given by

C_M = \begin{bmatrix} G & FG & F^2G & \cdots & F^{n-1}G \end{bmatrix}

A system is controllable if det C_M is non-zero, that is:

\det C_M \neq 0 \quad (8.52)
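As a minimal sketch, the controllability test can be coded directly. The matrices F and G below are assumed illustrative values, not a system from the book.

```python
# Sketch of the controllability test: build C_M = [G FG ... F^(n-1)G] and
# check that its determinant is non-zero. F and G are assumed examples.
import numpy as np

F = np.array([[1.0, 0.01], [0.0, 1.0]])
G = np.array([[5e-5], [0.01]])

n = F.shape[0]
CM = np.hstack([np.linalg.matrix_power(F, k) @ G for k in range(n)])
print(np.linalg.det(CM))        # non-zero, so this pair (F, G) is controllable
```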


8.3.6.2

Observability

This refers to the ability to deduce the discrete state variables from a knowledge of the discrete inputs u(k) and discrete outputs y(k). Mathematically, it is determined by deriving the Observability Matrix:

O_M = \begin{bmatrix} H \\ HF \\ \vdots \\ HF^{n-1} \end{bmatrix}

A system is observable if det O_M is non-zero, that is:

\det O_M \neq 0 \quad (8.53)

8.3.7 STABILITY OF DIGITAL CONTROL SYSTEMS

The location of the poles of the digital transfer function determines whether a digital control system is stable or not. For stability, the poles must lie strictly inside the unit circle, that is, |z| < 1. Poles on the circumference of the unit circle, that is, |z| = 1, imply marginal stability. Poles outside the unit circle, that is, |z| > 1, indicate instability of the digital control system. Figure 8.16 illustrates the determination of the stability of digital control systems by the location of the poles of the system. This is compared with the same analysis for continuous-time systems in Figure 8.17, where for stability the system poles must lie strictly in the LHP, for marginal stability the poles must be on the imaginary axis, and for instability the poles must lie strictly in the RHP.
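The unit-circle criterion can be captured in a few lines; a Python sketch:

```python
# Sketch: classifying a digital system's stability from its pole moduli,
# following the unit-circle criterion described above.
import numpy as np

def classify(poles, tol=1e-9):
    mags = np.abs(np.asarray(poles, dtype=complex))
    if np.all(mags < 1 - tol):
        return "stable"
    if np.any(mags > 1 + tol):
        return "unstable"
    return "marginally stable"   # at least one pole on the unit circle

print(classify([0.5, -0.3]))     # stable
print(classify([1.0, 0.2]))      # marginally stable
print(classify([0.5, 1.2]))      # unstable
```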

8.4

WORKED EXAMPLES OF DIGITAL CONTROL SYSTEMS

Example 8.3 A compensator for a heat exchange control system is given by

D(s) = \frac{10s + 20}{0.1s + 0.5}.

Figure 8.16 Digital control: stability by pole location.


Figure 8.17 Continuous-time control: stability by pole location.

(a) Use the Matched Pole Zero (MPZ) method to design an equivalent digital compensator D(z). Use a sampling time of 1 second.
(b) What is the Modified Matched Pole Zero (MMPZ) method? What is its advantage? What is its drawback?
(c) For the same continuous-time compensator D(s), use the Tustin approximation method to design a digital compensator D(z). Use a sampling time of 1 second.
(d) Compare the two compensators obtained in (a) and (c).
(e) If we use a much lower sampling time, say 0.01 seconds, what will happen to the performance of the two compensators? Why is this the case?
(f) What is the critical limitation of digital approximation methods? Describe the direct discrete design of digital systems.

Solution 8.3 (a) Given the continuous controller, an equivalent digital controller must be designed using the MPZ method. The first step is expressing D(s) in terms of a controller gain, a zero factor and a pole factor. Thereafter, the general structure of the MPZ digital controller is derived:

D(s) = \frac{10s + 20}{0.1s + 0.5} = \frac{100(s + 2)}{s + 5}

D(z) = K_d \left( \frac{z - e^{-2T}}{z - e^{-5T}} \right)

For T = 1, using the FVT, the continuous-time DC gain is obtained as follows:

\lim_{s \to 0} D(s) = \lim_{s \to 0} \frac{100(s + 2)}{s + 5} = 40


Similarly, the DC gain of the discrete system is given by:

\lim_{z \to 1} zD(z) = \lim_{z \to 1} zK_d \left( \frac{z - e^{-2}}{z - e^{-5}} \right) = K_d \left( \frac{1 - e^{-2}}{1 - e^{-5}} \right)

Equating the two DC gains (continuous and discrete) leads to:

40 = K_d \left( \frac{1 - e^{-2}}{1 - e^{-5}} \right) \implies K_d = 45.95

The final expression of the digital controller is then given by:

D(z) = 45.95 \left( \frac{z - e^{-2}}{z - e^{-5}} \right) \implies D(z) = 45.95 \left( \frac{z - 0.1353}{z - 0.0067} \right)

This is the MPZ controller. It is an equivalent digital controller to the continuous controller D(s).

(b) The MMPZ is an equivalent digital control design method that takes into account computational time delay (the MPZ does not). It is expressed as a rational function with one less power of z in the numerator than in the denominator. It has the advantage of being a more practical and realistic representation of the digital controller than the MPZ. However, its drawback is poorer performance compared to the MPZ, because at time k it uses data from (k − 1).

(c) For the same continuous controller, the task is to design an equivalent digital controller using the Tustin method. Consider the continuous controller:

D(s) = 100 \left( \frac{s + 2}{s + 5} \right)

The Tustin approximation method entails the following substitution:

s = \frac{2}{T} \left( \frac{z - 1}{z + 1} \right)

D(z) = 100 \left[ \frac{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 2}{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 5} \right]

For T = 1 we get:

D(z) = 100 \left( \frac{4z}{7z + 3} \right) = \frac{400z}{7z + 3} \implies D(z) = \frac{57.14286z}{z + 0.42857}


This is the Tustin equivalent digital controller. Its DC gain is computed as explained before:

\lim_{z \to 1} zD(z) = \lim_{z \to 1} z \left( \frac{57.14286z}{z + 0.42857} \right) = 40.00

This shows that the DC gains of the continuous controller, the MPZ controller and the Tustin controller are exactly the same. In fact, this equality is independent of the sampling rate, that is, it does not depend on the sampling time T. This can be demonstrated by using an expression of the Tustin controller with an arbitrary sampling time T as follows:

\lim_{z \to 1} zD(z) = \lim_{z \to 1} z(100) \left[ \frac{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 2}{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 5} \right] = 100 \left[ \frac{\dfrac{2}{T}\left(\dfrac{1 - 1}{1 + 1}\right) + 2}{\dfrac{2}{T}\left(\dfrac{1 - 1}{1 + 1}\right) + 5} \right] = 100 \left( \frac{2}{5} \right) = 40.00

(d)(i) The structures of the controllers (the controller gains, poles, and zeros) are different.
(d)(ii) The two controllers have the same DC gain of 40.
(d)(iii) The MPZ method matches the poles and zeros of the continuous system with those of the discrete one, while the Tustin approach converts the continuous function into the discrete one.
(d)(iv) The performance of both the MPZ and Tustin controllers depends on the sampling rate: the higher the sampling rate, the better the performance.
(d)(v) The roots of both controllers are strictly inside the unit circle |z| < 1, and hence both control systems are stable.
(d)(vi) Although different, their system outputs converge to that of the continuous system as time approaches infinity.

(e)(i) A lower sampling time gives a higher sampling rate, which leads to better performance for both controllers. The two controllers converge towards the same structure (controller gains, zeros, poles) as their performance converges to that of the continuous system. This happens when the sampling time is changed from T = 1 to T = 0.01, thus increasing the sampling rate. What follows is an illustration of the convergence of the two methods for the case where T = 0.01.

K_d = 40 \left( \frac{1 - e^{-5(0.01)}}{1 - e^{-2(0.01)}} \right) = 40 \left( \frac{1 - 0.951225}{1 - 0.98020} \right) = 40 \left( \frac{0.048775}{0.01980} \right) = 98.54

\implies D(z) = 98.54 \left( \frac{z - e^{-0.02}}{z - e^{-0.05}} \right) = 98.54 \left( \frac{z - 0.9802}{z - 0.9512} \right)

This is the MPZ digital controller at the higher sampling rate. The many decimal places are necessary to capture the controller's accuracy correctly. A digital controller with a higher sampling rate can also be designed using the Tustin approximation method. This entails the following substitution in the continuous controller:

s = \frac{2}{T} \left( \frac{z - 1}{z + 1} \right)

This leads to the derivation of the discrete controller as follows:

D(z) = 100 \left[ \frac{z(1 + T) + (T - 1)}{z\left(1 + \dfrac{5T}{2}\right) + \left(\dfrac{5T}{2} - 1\right)} \right] = 100 \left[ \frac{1.01\left(z - \dfrac{0.99}{1.01}\right)}{1.025\left(z - \dfrac{0.975}{1.025}\right)} \right] \implies D(z) = 98.54 \left( \frac{z - 0.9802}{z - 0.9512} \right)

This is the Tustin digital controller at the higher sampling rate. It is identical to the one obtained using the MPZ method at the same sampling rate: the poles, zeros and controller gains are exactly the same. This is the impact of using a higher sampling rate (100 Hz as opposed to 1 Hz). The performance of the controllers dramatically improves as the outputs of the system they are controlling become exactly the same as that of the continuous control system. Hence the MPZ and Tustin controllers converge to become identical controllers which produce the same excellent performance.

As already established, the DC gain for the continuous controller and that of the MPZ digital controller are the same at 40. We need to find the DC gain of the Tustin digital controller. As explained earlier, the DC gain of any digital control system can be obtained using the Final Value Theorem while assuming a unit step input, i.e., U(z) = z/(z − 1):

D(z) = \frac{Y(z)}{U(z)} \implies Y(z) = D(z)U(z)

\text{DC gain} = y_{ss} = \lim_{z \to 1} (z - 1)Y(z) \quad \text{(applying the FVT)}
= \lim_{z \to 1} (z - 1)D(z)U(z)
= \lim_{z \to 1} (z - 1)D(z) \left( \frac{z}{z - 1} \right)
= \lim_{z \to 1} zD(z)

This is the DC gain for any discrete controller. Hence, for the specific Tustin digital controller in this exercise, it follows that:


\text{DC gain} = \lim_{z \to 1} zD(z) = \lim_{z \to 1} z \left[ 98.54 \left( \frac{z - 0.9802}{z - 0.9512} \right) \right] = 98.54 \left( \frac{0.0198}{0.0488} \right) = 40.0

This is as expected. The DC gains of the three controllers (continuous, MPZ, and Tustin) are the same, since the outputs of the systems they are controlling all converge to the same value as time approaches infinity. As shown earlier, this equality of DC gains for the three controllers always holds and is independent of the sampling rate; that is, it does not depend on the sampling time T.

(f) The MPZ, MMPZ, and Tustin digitisation (EDCD) methods depend on the sampling rate. They are good at sampling rates ≥ 20ω_n (20 times the natural frequency); at 10ω_n (10 times the natural frequency) or lower, distortion and instability set in. They also involve the back-and-forth use of A/D and D/A converters. The Direct Digital Control Design (DDCD) method, on the other hand, does not depend on the sampling rate. DDCD involves digitising all the components of a closed-loop control system. A digital plant model is required, and everything is in the digital domain (input, plant, output, controller and sensor). Digitising the plant results in an exact system description, as the ZOH circuit accurately describes what is occurring between samples. Figures 8.18 and 8.19 illustrate the DDCD process, where the digital sensor H(z) = 1.

G(z) = \left(1 - z^{-1}\right) \mathcal{Z}\left\{\frac{G(s)}{s}\right\}

It is important to note that the capital \mathcal{Z} above denotes the Z-transform operator, a function that is different from the variable z.

Example 8.4 In a robotic arm control system, the angle θ(t) represents the robotic arm orientation, ω(t) is the robot's angular speed, u(t) is the input force, and the output y(t) is the angle θ(t). The robotic arm control model can be represented by the following equations:

\dot{\theta}(t) = \omega(t)
\dot{\omega}(t) = u(t)

Figure 8.18 Digitising the plant: G(s) → G(z).

Figure 8.19 Direct digital control design.


(a) Show that the State-Space model of the system can be represented by

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix} \qquad D = [0]

(b) Show that the plant Transfer Function model of the system is given by

G(s) = \frac{1}{s^2}

(c) From the continuous State-Space model (A, B, C, D) obtained in (a), derive the digital State-Space model (F, G, H, J).
(d) From the results in (c), find the discrete plant Transfer Function G(z).
(e) Using the direct discrete design method, develop the discrete plant transfer function G(z) from the continuous plant transfer function in (b). Is the G(z) in (e) the same as that obtained in (d)?
(f) Explain the significance of the poles of the continuous and discrete Transfer Functions, G(s) and G(z).
(g) Show that the digital system is controllable and observable for all sample times (T > 0).

Solution 8.4 (a) The robotic arm control model can be represented by the following equations:

\dot{\theta}(t) = \omega(t)
\dot{\omega}(t) = u(t)

Choose the state variables as the angular displacement θ and the angular speed ω. The equations of motion are expressed as follows:

\dot{\theta}(t) = 0 \times \theta(t) + \omega(t) + 0 \times u(t)
\dot{\omega}(t) = 0 \times \theta(t) + 0 \times \omega(t) + u(t)
y(t) = \theta(t) + 0 \times \omega(t) + 0 \times u(t)

From these equations, the State-Variable Matrix model is then extracted as follows:

\begin{bmatrix} \dot{\theta}(t) \\ \dot{\omega}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \theta(t) \\ \omega(t) \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)

y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \theta(t) \\ \omega(t) \end{bmatrix} + [0]\,u(t)

Hence the state-space matrices are obtained as follows:

A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}


B = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 & 0 \end{bmatrix} \qquad D = [0]

(b) The Transfer Function model of the system is obtained by finding the ratio of the Laplace transforms of the input u(t) and the output θ(t):

\dot{\omega}(t) = u(t) \implies \ddot{\theta}(t) = u(t) \implies s^2\Theta(s) = U(s)

This leads to:

G(s) = \frac{\Theta(s)}{U(s)} = \frac{\Theta(s)}{s^2\Theta(s)} = \frac{1}{s^2}

Alternatively, the transfer function is derived from the State-Variable Matrix model as follows:

G(s) = C(sI - A)^{-1}B + D
= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s & -1 \\ 0 & s \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{s^2} \begin{bmatrix} s & 1 \\ 0 & s \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix}
= \frac{1}{s^2}

The result is the same as that obtained previously.

(c) The discrete state-space matrices are obtained as follows, using three-term approximations:

F = I + AT + \frac{A^2T^2}{2} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}

G = \left( IT + \frac{AT^2}{2} + \frac{A^2T^3}{3!} \right) B = \left( \begin{bmatrix} T & 0 \\ 0 & T \end{bmatrix} + \begin{bmatrix} 0 & \dfrac{T^2}{2} \\ 0 & 0 \end{bmatrix} \right) \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{T^2}{2} \\ T \end{bmatrix}


H = \begin{bmatrix} 1 & 0 \end{bmatrix} \qquad J = [0]

It is important to note that H and J are the same as the continuous-time matrices C and D.

(d) Using the preceding discrete state-space matrices (the discrete State-Space model), the discrete Transfer Function model is obtained as follows:

G(z) = H(zI - F)^{-1}G + J
= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} z - 1 & -T \\ 0 & z - 1 \end{bmatrix}^{-1} \begin{bmatrix} \dfrac{T^2}{2} \\ T \end{bmatrix}
= \begin{bmatrix} 1 & 0 \end{bmatrix} \frac{1}{(z - 1)^2} \begin{bmatrix} z - 1 & T \\ 0 & z - 1 \end{bmatrix} \begin{bmatrix} \dfrac{T^2}{2} \\ T \end{bmatrix}
= \frac{1}{(z - 1)^2} \left[ (z - 1)\frac{T^2}{2} + T^2 \right]

\implies G(z) = \frac{T^2}{2} \frac{(z + 1)}{(z - 1)^2}

(e) The Transfer Function model can also be obtained using the Z-transform method as follows:

G(s) = \frac{1}{s^2}

G(z) = \left(1 - z^{-1}\right) \mathcal{Z}\left\{\frac{G(s)}{s}\right\} = \left(1 - z^{-1}\right) \mathcal{Z}\left\{\frac{1}{s^3}\right\}

It is important to note that the \mathcal{Z} above is a function and not a variable. Applying the Z-transform function leads to:

G(z) = \left(1 - z^{-1}\right) \left[ \frac{T^2 z(z + 1)}{2(z - 1)^3} \right] = \left( \frac{z - 1}{z} \right) \left[ \frac{T^2 z(z + 1)}{2(z - 1)^3} \right] = \frac{T^2}{2} \frac{(z + 1)}{(z - 1)^2}

Clearly, this Transfer Function model is the same as that obtained in question (d).

(f) In general, the location of the poles of the digital transfer function determines whether the system is stable or not. For stability, the poles must lie strictly inside the unit circle, |z| < 1, while poles on the unit circle imply marginal stability. Here there are two poles at z = 1; hence the discrete system is marginally stable. These two poles correspond to the two integrators in the plant's continuous model given by the transfer function:

G(s) = \frac{1}{s^2}

This means there are two poles at s = 0 for the continuous system. As explained in Chapter 5, poles lying strictly in the LHP indicate system stability for a continuous system, while those strictly in the RHP imply instability, and poles lying on the imaginary axis indicate marginal stability. With two poles at the origin, which lie on the imaginary axis, the continuous system is marginally stable. This example shows that digitisation does not affect the stability properties of a system.

(g) The discrete system's controllability matrix is determined as follows:

C_M = \begin{bmatrix} G & FG \end{bmatrix} = \begin{bmatrix} \dfrac{T^2}{2} & \dfrac{T^2}{2} + T^2 \\ T & T \end{bmatrix} = \begin{bmatrix} \dfrac{T^2}{2} & \dfrac{3T^2}{2} \\ T & T \end{bmatrix}

The determinant of C_M is then obtained as follows:

\det C_M = \frac{T^3}{2} - \frac{3T^3}{2} = -T^3

As this is non-zero, the system is controllable for all sampling times T ≠ 0. Since a negative sampling time has no physical meaning, T > 0 is the tighter and more accurate condition. Next, observability is considered:

O_M = \begin{bmatrix} H \\ HF \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 1 & T \end{bmatrix}

It follows that det O_M = (1 × T) − 0 = T. As this is non-zero, the system is observable for all sampling times T ≠ 0; again, since a negative sampling time has no physical meaning, T > 0 is the tighter and more accurate condition.

Example 8.5 (a) Compare and contrast the Matched Pole Zero (MPZ) and the Modified Matched Pole Zero (MMPZ) methods. Explain the advantages and drawbacks involved.

(b) A compensator for an electrical power control system is given by

D(s) = \frac{10s + 30}{0.1s + 0.4}

Use the Matched Pole Zero (MPZ) method to design an equivalent digital compensator D(z). Use a sampling time of 1 second.


(c) For the same continuous-time compensator D(s) in (b), use the Tustin approximation method to design a digital compensator D(z). Use a sampling time of 1 second.
(d) Compare the DC gains of the two compensators obtained in (b) and (c), and explain the significance of your answer.
(e) If we use a much higher sampling rate, say 100 Hz, what will happen to the structure and performance of the two compensators? Give reasons for these outcomes.
(f) Why do we use the direct discrete design of digital systems? Outline the processes involved in carrying out such a design.

Solution 8.5 (a) i. MPZ: The MPZ method is an equivalent digital control design approach where a designed continuous controller is converted to the z-plane by matching the zeros and poles of the continuous system with those of the corresponding discrete system:

D(s) = K_c \frac{(s + a)}{(s + b)} \qquad D_d(z) = K_d \frac{\left(z - e^{-aT}\right)}{\left(z - e^{-bT}\right)}

ii. MMPZ: The MMPZ is an equivalent digital control design approach which matches zeros and poles like the MPZ but takes into account computational time delay. It is expressed as a rational function with one less power of z in the numerator than in the denominator, which makes it a more practical and realistic representation of the dynamic system than the MPZ. However, MMPZ performance is poorer than that of the MPZ because at time k it uses data from (k − 1). Mathematically, the continuous and MMPZ controllers are represented as follows:

D(s) = K_c \frac{(s + a)}{(s + b)} \qquad D(z) = K_d \frac{\left(z - e^{-aT}\right)}{(z - 1)\left(z - e^{-bT}\right)}

(b) The objective is to design the MPZ equivalent of the continuous controller represented by the equation:

D(s) = \frac{10s + 30}{0.1s + 0.4}

The continuous controller can be simplified into a controller gain, a zero factor, and a pole factor as follows:

D(s) = \frac{10s + 30}{0.1s + 0.4} = 100 \left( \frac{s + 3}{s + 4} \right)

For T = 1, the MPZ digital controller is obtained by mapping the zero (−3) and the pole (−4) into the z-plane as follows:

D(z) = K_d \frac{\left(z - e^{-3}\right)}{\left(z - e^{-4}\right)}


Using the FVT, the DC gain of the continuous system is given by

\lim_{s \to 0} D(s) = \lim_{s \to 0} 100 \left( \frac{s + 3}{s + 4} \right) = 75.

Similarly, the DC gain of the MPZ digital controller is given by

\lim_{z \to 1} zD(z) = \lim_{z \to 1} zK_d \frac{\left(z - e^{-3}\right)}{\left(z - e^{-4}\right)} = K_d \frac{\left(1 - e^{-3}\right)}{\left(1 - e^{-4}\right)}.

Equating the two DC gains leads to

75 = K_d \frac{\left(1 - e^{-3}\right)}{\left(1 - e^{-4}\right)}.

Making K_d the subject, we obtain the following:

K_d = 77.49

\implies D(z) = 77.49 \frac{\left(z - e^{-3}\right)}{\left(z - e^{-4}\right)}

This can also be expressed as:

D(z) = 77.49 \frac{(z - 0.0498)}{(z - 0.0183)}

This is the MPZ digital controller. It is equivalent to the continuous controller.

(c) For the Tustin method, consider the simplified continuous controller:

D(s) = 100 \left( \frac{s + 3}{s + 4} \right)

Using the Tustin approximation means the following substitution in the continuous controller:

s = \frac{2}{T} \left( \frac{z - 1}{z + 1} \right)

This leads to the derivation of the discrete controller as follows:

D(z) = 100 \left[ \frac{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 3}{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 4} \right] = 100 \left[ \frac{z\left(1 + \dfrac{3T}{2}\right) + \left(\dfrac{3T}{2} - 1\right)}{z(1 + 2T) + (2T - 1)} \right]

Substituting T = 1 into this equation leads to the following:

D(z) = 50 \left( \frac{5z + 1}{3z + 1} \right) = \frac{250}{3} \left[ \frac{z + \dfrac{1}{5}}{z + \dfrac{1}{3}} \right]

\implies D(z) = 83.33 \frac{(z + 0.2)}{(z + 0.333)}

This is the Tustin digital controller, which is equivalent to the continuous controller.

(d) The DC gains of the continuous system, the MPZ and the Tustin controllers can be derived and shown to be equal. The starting point is finding the DC gain of the MPZ digital controller:

\text{DC gain} = \lim_{z \to 1} zD(z) = \lim_{z \to 1} z(77.49) \left( \frac{z - 0.0498}{z - 0.0183} \right) = 75

Hence the DC gain obtained using the MPZ method is 75. The next step is finding the DC gain of the Tustin digital controller:

\text{DC gain} = \lim_{z \to 1} zD(z) = \lim_{z \to 1} z \left( \frac{250}{3} \right) \left[ \frac{z + \dfrac{1}{5}}{z + \dfrac{1}{3}} \right] = 75

Hence the DC gain obtained using the Tustin method is also 75. Clearly, the DC gains of the MPZ and Tustin methods are the same, and they are exactly equal to that of the continuous system. This means that as time (t) approaches infinity, the two controllers converge and produce outputs exactly like those of the continuous system. In fact, this equality of DC gains for the three controllers is independent of the sampling rate, that is, it does not depend on the sampling time T. This can be demonstrated as follows:

\lim_{z \to 1} zD(z) = \lim_{z \to 1} z(100) \left[ \frac{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 3}{\dfrac{2}{T}\left(\dfrac{z - 1}{z + 1}\right) + 4} \right] = 100 \left[ \frac{\dfrac{2}{T}\left(\dfrac{1 - 1}{1 + 1}\right) + 3}{\dfrac{2}{T}\left(\dfrac{1 - 1}{1 + 1}\right) + 4} \right] = 100 \left( \frac{3}{4} \right) = 75


(e)(i) A higher sampling rate of 100 Hz, as opposed to 1 Hz, will lead to better performance by both the MPZ and Tustin controllers.
(e)(ii) The structures of the controllers (gains, poles and zeros) become the same. This is because, with high sampling rates, there is more data and less distortion, and the digital system becomes closer to the continuous one. Furthermore, as the frequency increases from 1 to 100 Hz and performance improves towards that of the continuous system, the controllers converge into the same controller producing the same good performance. What follows is an illustration of the case where the sampling frequency is 100 Hz. The digital controllers D(z) obtained by using the MPZ and Tustin methods at 100 Hz (T = 0.01) are derived as follows.

MPZ method: As before, the MPZ controller is of the form

D(z) = K_d \frac{\left(z - e^{-0.03}\right)}{\left(z - e^{-0.04}\right)},

where K_d is obtained by equating the DC gains of the continuous and MPZ controllers as follows:

\lim_{z \to 1} zD(z) = \lim_{z \to 1} zK_d \frac{\left(z - e^{-0.03}\right)}{\left(z - e^{-0.04}\right)} = K_d \frac{\left(1 - e^{-0.03}\right)}{\left(1 - e^{-0.04}\right)}

\lim_{s \to 0} D(s) = \lim_{s \to 0} 100 \left( \frac{s + 3}{s + 4} \right) = 75

Equating the two DC gains leads to:

75 = K_d \frac{\left(1 - e^{-0.03}\right)}{\left(1 - e^{-0.04}\right)}

\implies K_d = 75 \left( \frac{1 - e^{-0.04}}{1 - e^{-0.03}} \right) = 75 \left( \frac{1 - 0.96078}{1 - 0.97044} \right) = 75 \left( \frac{0.03922}{0.02956} \right) = 99.51

Hence the MPZ controller is given by

D(z) = 99.51 \frac{\left(z - e^{-0.03}\right)}{\left(z - e^{-0.04}\right)} = 99.51 \left( \frac{z - 0.97044}{z - 0.96078} \right)

The high decimal places are necessary to fully capture the accuracy and behaviour of the controller.
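As a quick numerical check of the gain computed above, the following Python sketch repeats the DC-gain matching at T = 0.01:

```python
# Checking the MPZ gain above: K_d is chosen so that the discrete DC gain
# matches the continuous DC gain of 75, here at T = 0.01.
import math

T = 0.01
dc_continuous = 100 * 3 / 4            # lim_{s->0} 100(s+3)/(s+4) = 75
Kd = dc_continuous * (1 - math.exp(-4 * T)) / (1 - math.exp(-3 * T))
print(Kd)   # about 99.5; the 99.51 above comes from rounded intermediate values
```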


Tustin method:

D(z) = 100 \left[ \frac{z\left(1 + \dfrac{3T}{2}\right) + \left(\dfrac{3T}{2} - 1\right)}{z(1 + 2T) + (2T - 1)} \right]

Substituting T = 0.01:

D(z) = 100 \left( \frac{1.015z - 0.985}{1.02z - 0.98} \right) = 100 \left( \frac{1.015}{1.02} \right) \left( \frac{z - 0.985/1.015}{z - 0.98/1.02} \right)

\implies D(z) = 99.51 \left( \frac{z - 0.97044}{z - 0.96078} \right)

This is the Tustin digital controller. It is exactly the same as that obtained using the MPZ method: the poles, zeros and controller gains have the same values. This is the impact of using a higher sampling rate (100 Hz as opposed to 1 Hz). The performance of the controllers dramatically improves as the outputs of the system they are controlling become exactly the same as that of the continuous control system. Hence the MPZ and Tustin controllers converge to become identical controllers which produce the same excellent performance.

As already established, the DC gain for the continuous controller and that of the MPZ digital controller are the same at 75. We need to find the DC gain of the Tustin digital controller. As explained earlier, the DC gain of any digital control system can be obtained using the Final Value Theorem while assuming a unit step input, i.e., U(z) = z/(z − 1):

D(z) = \frac{Y(z)}{U(z)} \implies Y(z) = D(z)U(z)

\text{DC gain} = y_{ss} = \lim_{z \to 1} (z - 1)Y(z) \quad \text{(applying the FVT)}
= \lim_{z \to 1} (z - 1)D(z)U(z)
= \lim_{z \to 1} (z - 1)D(z) \left( \frac{z}{z - 1} \right)
= \lim_{z \to 1} zD(z)

This is the DC gain for any discrete controller. Hence, for the specific Tustin digital controller in this exercise, it follows that:

\text{DC gain} = \lim_{z \to 1} zD(z) = \lim_{z \to 1} z \left[ 99.51 \left( \frac{z - 0.97044}{z - 0.96078} \right) \right] = 99.51 \left( \frac{0.02956}{0.03922} \right) = 75.00

This is as expected. So, the DC gains of the three controllers (continuous, MPZ and Tustin) are exactly the same, since the outputs of the systems they are controlling all converge to the same value as time approaches infinity.
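The DC gains of the two digital controllers derived above can also be evaluated in a few lines; a Python sketch:

```python
# Sketch: evaluating the DC gain lim_{z->1} z D(z) for the MPZ/Tustin
# controller at T = 0.01 and the Tustin controller at T = 1 derived above;
# both should match the continuous DC gain of 75.
def dc_gain(num, den, z=1.0):
    """Evaluate a z-domain transfer function at z (coefficients high power first)."""
    n = sum(c * z**(len(num) - 1 - i) for i, c in enumerate(num))
    d = sum(c * z**(len(den) - 1 - i) for i, c in enumerate(den))
    return n / d

mpz    = dc_gain([99.51, -99.51 * 0.97044], [1.0, -0.96078])   # T = 0.01 controller
tustin = dc_gain([83.33, 83.33 * 0.2], [1.0, 0.333])           # T = 1 controller
print(round(mpz, 1), round(tustin, 1))
```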


Figure 8.20 Digitising the plant: G(s) → G(z).

Figure 8.21 Direct digital control design.

(f) Equivalent Digital Control Design (EDCD) methods (MPZ, MMPZ, Tustin) are only effective at high sampling rates, of the order of 20ωn (20 times the natural frequency) or more. At 10ωn (10 times the natural frequency) or lower, distortion and instability set in. This is why we use Direct Digital Control Design (DDCD), which does not depend on the sampling rate. Furthermore, the approximation methods involve back-and-forth A/D and D/A conversions. This is avoided by using DDCD, which involves digitising both the controller and the plant model. To compute the digital plant model, the impact of the ZOH circuit on the plant dynamics needs to be considered as follows:

G(z) = (1 − z⁻¹) Z[G(s)/s]
This will result in an exact system description since the ZOH circuit describes accurately what is happening between samples. Figures 8.20 and 8.21 demonstrate the DDCD process.
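As a sanity check on this formula, consider a first-order plant G(s) = a/(s + a) (an illustrative example, not from the text). Working through G(z) = (1 − z⁻¹)Z[G(s)/s] gives G(z) = (1 − e^(−aT))/(z − e^(−aT)), and because the ZOH describes exactly what happens between samples, the discrete model reproduces the continuous step response exactly at the sample instants:

```python
import math

# ZOH discretisation of G(s) = a/(s + a):
# G(z) = (1 - z^{-1}) * Z{G(s)/s} = (1 - e^{-aT}) / (z - e^{-aT}).
a, T = 2.0, 0.05
p = math.exp(-a * T)          # discrete pole: s = -a maps to z = e^{-aT}

y, samples = 0.0, []
for k in range(50):           # difference equation y(k+1) = p*y(k) + (1-p)*u(k)
    samples.append(y)
    y = p * y + (1 - p) * 1.0  # unit step input

exact = [1 - math.exp(-a * k * T) for k in range(50)]
err = max(abs(u - v) for u, v in zip(samples, exact))
print(err < 1e-12)   # → True
```

The agreement is exact (to machine precision), which is precisely the claim made above for the ZOH equivalent.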

8.5 MATLAB IMPLEMENTATION OF DIGITAL SYSTEMS

Octave, MATLAB, the Control Systems Toolbox (a MATLAB module) or Simulink (a MATLAB-based graphical programming environment) can be used to implement digital control systems. Some of the relevant basic MATLAB commands include c2d, pzmap, zgrid, 'zoh', step, and rlocus. Simulink is used for more complex and elaborate control systems. To compare the continuous-time system with the digital versions, the continuous-time system can easily be obtained from the MATLAB code for the EDCD versions by setting the sampling time to zero (T = 0.0).

8.5.1 MASS-SPRING-DAMPER SYSTEM

Consider the simple mass-spring-damper system in Figure 8.22. The Transfer Function model (derived in Chapter 2) is given by:

X(s)/F(s) = 1/(ms² + bs + k)

Assuming the closed-loop bandwidth frequency is greater than 1 rad/sec, we will choose the sampling time T equal to 0.01 sec (a frequency of 100 Hz). We can create a MATLAB m-file and enter the following commands:

Digital Control Systems

Figure 8.22 Mass, spring and damper system.

% Generating a discrete-time model
m = 1; b = 10; k = 20;
s = tf('s');
sys = 1/(m*s^2 + b*s + k);
T = 1/100;
sysd = c2d(sys,T,'zoh')

sysd =

  (4.837e-05 z + 4.678e-05) / (z^2 - 1.903 z + 0.9048)

This is the digital Transfer Function model of the simple mass-spring-damper system obtained using MATLAB. The reader can demonstrate that this outcome is the same as obtained by manual digitisation calculations. Similarly, the continuous-time State-Space model of the simple mass-spring-damper system can be digitised in MATLAB. The continuous State-Space model was derived in Chapter 2 as:

[ẋ; v̇] = [0 1; −k/m −b/m][x; v] + [0; 1/m] f(t)

y = [1 0][x; v]

With all constants assuming the same values as before, the following MATLAB m-file converts the above continuous-time State-Space model to a discrete-time State-Space model:

% Generating state-space matrices
A = [0 1; -k/m -b/m];
B = [0; 1/m];
C = [1 0];
D = [0];
T = 1/100;
sys = ss(A,B,C,D);
sysd = c2d(sys,T,'zoh')
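The MATLAB result can be cross-checked without any toolbox: under ZOH sampling each continuous pole sᵢ maps to a discrete pole e^(sᵢT), so the discrete denominator follows directly from the continuous poles. A quick Python check using the plant values above (m = 1, b = 10, k = 20, T = 0.01):

```python
import math

# Continuous poles of 1/(s^2 + 10s + 20): s = -5 +/- sqrt(5).
m, b, k, T = 1.0, 10.0, 20.0, 0.01
s1 = (-b + math.sqrt(b * b - 4 * m * k)) / (2 * m)
s2 = (-b - math.sqrt(b * b - 4 * m * k)) / (2 * m)

# ZOH maps each pole to z = e^{sT}; the discrete denominator is
# z^2 - (z1 + z2) z + z1*z2, which should match z^2 - 1.903 z + 0.9048.
z1, z2 = math.exp(s1 * T), math.exp(s2 * T)
print(round(z1 + z2, 3), round(z1 * z2, 4))   # → 1.903 0.9048
```

This confirms the denominator returned by c2d above; the numerator requires the full ZOH computation, but the pole mapping alone already fixes the system's stability and speed of response.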

The MATLAB function sysd will return the discrete-time state-space matrices such that the discrete State-Space model is given by

[x(k); v(k)] = [0.9990 0.0095; −0.1903 0.9039][x(k − 1); v(k − 1)] + [0; 0.0095] f(k − 1)

y(k − 1) = [1 0][x(k − 1); v(k − 1)].

This discrete-time State-Space system can then be simulated in MATLAB and compared with the continuous-time one. This is left as an exercise to the reader, in addition to showing that the discrete-time matrices obtained in MATLAB are the same as those obtained using manual digitisation calculations.

8.5.2 BALL AND BEAM CONTROL SYSTEM

Consider the classical ball and beam problem presented in Figure 8.23. The system consists of a beam with a guide within which the ball can roll along the beam. A motor and an angular sensor are mounted at the pivot point of the beam. The angle between the beam and the reference line is adjusted by changing the motor orientation angle at the pivot point. Some mechanism (usually a resistive sensor) exists that is used to determine the ball's position on the beam. The ball travels along the beam with only one degree of freedom (it only moves in the plane of the page and is constrained to move along the beam). The objective of the ball and beam system is to adjust the angle θ(t) to move the ball to a different reference point x(t) on the beam. The transfer function for this system is given by:

G(s) = X(s)/Θ(s) = 5g/(7s²)

Using MATLAB the following can be done:
• Design a continuous controller D(s) for the system.
• Design and implement a digital controller D(z) using EDCD to convert the controller D(s) from (a) to the digital domain.
• Design and implement a digital controller using DDCD, making use of either the discrete root locus or the discrete pole placement methods.

Figure 8.23 Ball and beam model.

The EDCD and DDCD can be implemented in MATLAB using the following code:

%Implementation of EDCD
g = 9.8; TD = 1; T = 0.1; alpha = 0.106;
s = tf('s');
plant_system = (5*g)/(7*s^2);
D_plant_system = c2d(plant_system,T,'zoh');
lead_comp = (TD*s + 1)/(alpha*TD*s + 1);
D_lead_comp = c2d(lead_comp,0.1,'tustin');
system = feedback(D_lead_comp*D_plant_system,1);
system1 = feedback(plant_system*lead_comp,1);
figure(1)
step(system1)
[y1,t] = step(system,4);
figure(2)
stairs(t,y1);
stepinfo(system)

% Implementation of DDCD
T = 6.7; alpha = 0.106; g = 9.8;
s = tf('s');
z = tf('z',T);
plant_system = (5*g)/(7*s^2);
D_lead_comp2 = 0.28*(z - alpha)/(z*(z - 0.78));
D_plant_system2 = c2d(plant_system,T,'zoh');
figure(3)
system3 = feedback(D_lead_comp2*D_plant_system2,1);
step(system3,1000)
stepinfo(system3)
figure(4)
rlocus(system3)

It is important to note that the continuous-time system is simply obtained by using a sampling time of T = 0.0 in the EDCD MATLAB code. Figure 8.24 shows a comparison between the continuous-time and discrete-time unit step responses. The reader should tackle the full MATLAB exercise for this ball and beam experiment, which is in MATLAB Appendix B.18.
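A property worth verifying by hand is that Tustin's method preserves the DC gain: the substitution s = (2/T)(z − 1)/(z + 1) maps z = 1 to s = 0. A small Python sketch using the lead compensator values above (TD = 1, α = 0.106, T = 0.1):

```python
# Tustin substitution s = (2/T)(z-1)/(z+1) applied to the lead compensator
# D(s) = (TD*s + 1)/(alpha*TD*s + 1). At z = 1 the substitution gives s = 0,
# so the digital compensator inherits the continuous DC gain exactly.
TD, alpha, T = 1.0, 0.106, 0.1

def D_continuous(s):
    return (TD * s + 1) / (alpha * TD * s + 1)

def D_tustin(z):
    s = (2 / T) * (z - 1) / (z + 1)
    return D_continuous(s)

print(D_tustin(1.0) == D_continuous(0.0))   # → True
```

This is the same DC-gain-matching behaviour observed for the Tustin controller in the worked example earlier in the chapter.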

8.5.3 DIGITISING A PID CONTROLLER AND THE PLANT

Consider the PID control system shown in Figure 8.25. The first step is finding the values for K, TD, and TI so that the closed-loop poles satisfy given criteria, say ζ > 0.5 and ωn > 1 rad/sec. The obtained PID controller is then digitised in MATLAB using Tustin's, MPZ and MMPZ methods. MATLAB can be used to simulate the unit step response of each of these digital implementations for sample times of T = 1, 0.1, and 0.01 sec, thus demonstrating the impact of sampling time. Part of the MATLAB code to digitise the PID controller D(s) and the plant transfer function G(s) can be written as follows:

% Digitising PID and plant transfer function
T = 0.01; K = 1.817; TI = 10; TD = 0.912;
s = tf('s');
D = K*(1 + TD*s + 1/(TI*s));
G = 1/(s*(s+1));
Dd = c2d(D,T,'matched');

Figure 8.24 Digital and continuous unit step responses.

Figure 8.25 A PID control system.

Gd = c2d(G,T,'zoh');
sys = feedback(Dd*Gd,1);
step(sys)

The reader should tackle the full MATLAB exercise of digitising the PID controller and plant transfer function, which is in MATLAB Appendix B.18.

Example 8.6 Consider a plant with a Transfer Function model given by

G(s) = 8(s + 4)/(s(s + 1)(s + 10))

Using MATLAB do the following:

(a) Find the discrete-time plant Transfer Function model.
(b) Design the digital PID controller.
(c) Generate the unit-step response of the open-loop discrete-time system.
(d) Generate the unit-step response of the discrete-time feedback system.
(e) Generate the unit-step response of the discrete-time feedback system with a discrete-time PID controller.
(f) Generate the root locus of the discrete-time feedback system.
(g) Generate the root locus of the discrete-time feedback system with a discrete-time PID controller.
(h) Investigate the impact of parameter changes on the digital control system.

Solution 8.6 The MATLAB code to answer this question consists of the following:

% Inputting the Continuous-time Transfer Function Model
s = tf('s');
G = 8*(s+4)/(s*(s+1)*(s+10))
% (a) Convert Model to Discrete Time
DG = c2d(G,0.1)
DGf = feedback(DG,1);
% (b) Design Digital PID Controller
[C, info] = pidtune(DG,'pidf')
[Kp,Ki,Kd,Tf] = piddata(C)
H = feedback(C*DG,1)
% (c) - (e) Step Response
figure;
subplot(3,1,1)
step(DG,15,'k'), title('Step Response for discrete open-loop system');
subplot(3,1,2)
step(DGf,10,'b'), title('Step Response for discrete feedback system');
subplot(3,1,3)
step(H,15,'r'), title('Step Response for discrete feedback system with discrete PID controller');
stepinfo(H)
% (f) - (h) Impact of Parameter Changes
G2 = 8*(s+4)/(s*(s+0.001)*(s+1));
DG2 = c2d(G2,0.1);
H2 = feedback(C*DG2,1);
figure;
subplot(2,2,1);
step(H2,15), grid, title('Step Response discrete system and PID controller with parameter change');
stepinfo(H2)
subplot(2,2,2);
rlocus(H2), title('rlocus discrete system and PID controller with parameter change');
subplot(2,2,3);
step(H,4), grid, legend('H'), title('Step Response primitive system and PID controller');
stepinfo(H)
subplot(2,2,4);
rlocus(H), title('rlocus primitive system and PID controller')

The MATLAB charts obtained from this code are shown in Figures 8.26–8.28.

8.5.4 THE DIGITAL PID CONTROLLER

The general performance of the discrete-time PID controller is the same as the continuous-time PID controller. All characteristics of the PID controller discussed in the continuous-time systems apply well to discrete-time systems, subject to some adjustments of the discrete-time version. Ways of implementing the discrete-time PID controller are discussed.

Figure 8.26 MATLAB plots.

It was shown that the continuous-time three-term PID controller has the general form

D(s) = KP + KI/s + KD s,    (8.54)

where the three parameters or gains KP, KI, and KD are chosen to give the desired system dynamics. These gains are normally known respectively as the proportional, integral, and derivative gain. As a matter of convenience, this controller is normally presented as

D(s) = KP (1 + 1/(TI s) + TD s),    (8.55)

where the three parameters KP, TI, and TD give a complete description of the controller. While KP is the proportional gain, TI and TD are known as the integral (or reset) time and the derivative time, respectively. The function of each of these terms in controlling the system has been well explained in Chapter 4. It was shown that the proportional gain improves the system responsiveness but at the cost of introducing steady-state error and some degree of instability (oscillations). The derivative term introduces some damping into the system dynamics to curb the oscillations created by the proportional gain. In contrast, the integral term reduces the steady-state error produced by the proportional gain. While it is possible to employ only the proportional controller, the derivative and the integral

Figure 8.27 MATLAB plots.

controllers cannot stand alone in controlling the system. The methods of tuning these gains have also been discussed, particularly the Ziegler-Nichols method. In this section, ideas presented for continuous-time systems will be extended to cover discrete-time systems. Recall that the transfer function for the continuous-time system is defined as

D(s) = U(s)/E(s),    (8.56)

where U(s) and E(s) are the Laplace transforms of the control signal and the error signal, respectively.

8.5.4.1 Proportional Controller (P)

The continuous-time proportional controller is just an amplification of the control error signal. Proportional control implementation involves multiplication of the error signal by an appropriate amplification factor as the design may require. In general, it has been shown that the transfer function of the continuous-time proportional controller is D(s) = KP, which gives the control signal as U(s) = D(s)E(s) = KP E(s). The corresponding time domain representation is simply u(t) = KP e(t). Therefore, at any sampling instant k, the control signal in the discrete-time environment becomes

u(k) = KP e(k).    (8.57)

To derive the discrete-time equivalent of the proportional controller D(z), first take the Z-transform of Equation 8.57, which gives

U(z) = KP E(z),

from which the proportional controller is derived as

D(z) = U(z)/E(z) = KP.

This shows that the discrete-time proportional controller has the same gain as that of the continuous-time system.

8.5.4.2 Integral Controller (I)

Implementation of the integral digital controller involves numerical integration of the error signal or numerical differentiation of the control signal. The common numerical differentiation and integration methods, which include the trapezoidal rule (also known as Tustin's rule or the bilinear transformation) and Euler's forward and backward rectangular rules, have been described. If the continuous-time integral controller were to stand alone, its transfer function would have been

D(s) = KP/(TI s),

so that

U(s) = (KP/(TI s)) E(s).    (8.58)

This can also be written as

s U(s) = (KP/TI) E(s).    (8.59)

In the time domain, Equations 8.58 and 8.59 can be written respectively as

u(t) = (KP/TI) ∫₀ᵗ e(τ)dτ

and

du(t)/dt = (KP/TI) e(t).    (8.60)

Employing different numerical methods on these two equations leads to different versions of the discrete-time integral controller; however, the most popular approach requires that the control signal be a function of all past values of the control and error signals as well as the current error signal. Derivation of such a controller stems from the application of Euler's backward approximation to the differential equation, which gives

(u(k) − u(k − 1))/T = (KP/TI) e(k),

such that

u(k) = u(k − 1) + (KP T/TI) e(k),    (8.61)

where T is the sampling interval. Notice that, since all past values of e(·) up to (k − 1), as well as the past values of u(·) up to (k − 2), are embedded in u(k − 1), this controller employs information from all past values of u(·) up to (k − 1) as well as e(·) up to k. In implementation, this poses a problem known as the integral wind-up effect, or simply reset wind-up, in which the total error at time k grows beyond some allowable limits.
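A common remedy for the reset wind-up noted above is to clamp the integral term at the actuator limits. The sketch below implements the incremental law of Equation 8.61 with saturation (the gains and limits are illustrative, not from the text):

```python
# Incremental integral controller u(k) = u(k-1) + (KP*T/TI)*e(k),
# with the output clamped to actuator limits to mitigate reset wind-up.
KP, TI, T = 2.0, 5.0, 0.1
U_MIN, U_MAX = -1.0, 1.0      # hypothetical actuator limits

def make_integral_controller():
    state = {"u": 0.0}
    def step(e):
        u = state["u"] + (KP * T / TI) * e
        u = max(U_MIN, min(U_MAX, u))   # clamp: stops the integral growing
        state["u"] = u
        return u
    return step

ctrl = make_integral_controller()
outputs = [ctrl(10.0) for _ in range(10)]   # large sustained error
print(max(outputs) <= U_MAX)                # → True
```

Because the clamped value is stored back as u(k − 1), the accumulated term never exceeds the limit, so the controller recovers immediately once the error changes sign.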

The Z-transform then gives

U(z) = z⁻¹U(z) + (KP T/TI) E(z),

whose rearrangement gives

U(z)(1 − z⁻¹) = (KP T/TI) E(z),

so that

D(z) = U(z)/E(z) = KP T/(TI(1 − z⁻¹)) = KP T z/(TI(z − 1)).    (8.62)

8.5.4.3 Derivative Controller (D)

If the continuous-time derivative controller were to stand alone, its transfer function would have been

D(s) = U(s)/E(s) = KP TD s.

The inverse Laplace transform gives this control signal in the time domain as

u(t) = KP TD de(t)/dt.    (8.63)

Again, the intention here is to derive the control signal as a function of past error signals up to time k. However, in this case not all past values are needed; only the current and the previous error signals are sufficient, as shown below. By employing Euler's backward rule of numerical differentiation, the error signal at time k gives

de(t)/dt |ₜ₌ₖ = (e(k) − e(k − 1))/T,

where T is the sampling interval. Therefore, by using this approximation at any sampling instant k, it follows that

u(k) = KP TD (e(k) − e(k − 1))/T = (KP TD/T)[e(k) − e(k − 1)].

Now the Z-transform of this expression gives

U(z) = (KP TD/T)[E(z) − z⁻¹E(z)] = (KP TD/T) E(z)(1 − z⁻¹),

so that

D(z) = U(z)/E(z) = (KP TD/T)(1 − z⁻¹) = KP TD(z − 1)/(Tz).

Notice that, although by using Euler's forward rule one may be tempted to carry out the same derivation, such a controller would have practical limits, as it would require error signals that are ahead of time.

8.5.4.4 The Complete PID Controller

Using the results in the previous sections, a complete PID controller can now be constructed as follows:

D(z) = KP [1 + Tz/(TI(z − 1)) + TD(z − 1)/(Tz)]

= KP [TI(z − 1)Tz + T²z² + TD TI(z − 1)²] / [TI(z − 1)Tz]

= KP [(TI T + T² + TD TI)z² + (−TI T − 2TD TI)z + TD TI] / (TI T z² − TI T z)

or

D(z) = KP [(TI T + T² + TD TI) + (−TI T − 2TD TI)z⁻¹ + TD TI z⁻²] / (TI T(1 − z⁻¹)).    (8.64)

The control difference equation can then be evaluated from

D(z) = U(z)/E(z),

which gives

(1 − z⁻¹)U(z) = (KP/(TI T))[(TI T + T² + TD TI) + (−TI T − 2TD TI)z⁻¹ + TD TI z⁻²] E(z).

Hence, the inverse Z-transform gives

u(k) − u(k − 1) = (KP/(TI T))[(TI T + T² + TD TI)e(k) + (−TI T − 2TD TI)e(k − 1) + TD TI e(k − 2)].

Hence, the control signal becomes

u(k) = u(k − 1) + (KP/(TI T))[(TI T + T² + TD TI)e(k) + (−TI T − 2TD TI)e(k − 1) + TD TI e(k − 2)]

= u(k − 1) + Ae(k) + Be(k − 1) + Ce(k − 2),    (8.65)

where

A = (KP/(TI T))(TI T + T² + TD TI)

B = (KP/(TI T))(−TI T − 2TD TI)

C = KP TD/T.
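Equation 8.65 translates directly into code: the controller only has to store the previous control value and the last two errors. A minimal Python sketch of the incremental PID law (the gains are arbitrary illustrative values):

```python
# Incremental PID: u(k) = u(k-1) + A*e(k) + B*e(k-1) + C*e(k-2),
# with A, B, C computed from KP, TI, TD and the sample time T (Eq. 8.65).
KP, TI, TD, T = 1.5, 8.0, 0.5, 0.1

A = KP * (TI * T + T**2 + TD * TI) / (TI * T)
B = KP * (-TI * T - 2 * TD * TI) / (TI * T)
C = KP * TD / T

u_prev, e_prev1, e_prev2 = 0.0, 0.0, 0.0

def pid_step(e):
    global u_prev, e_prev1, e_prev2
    u = u_prev + A * e + B * e_prev1 + C * e_prev2
    u_prev, e_prev1, e_prev2 = u, e, e_prev1
    return u

# For a constant error the per-step increment settles to A + B + C,
# which algebraically reduces to KP*T/TI (the pure integral contribution).
print(round(A + B + C, 6) == round(KP * T / TI, 6))   # → True
```

The identity A + B + C = KP T/TI is a useful consistency check on any hand implementation of Equation 8.65.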

8.5.5 TIME DELAY IN DIGITAL CONTROL SYSTEM

As already discussed in previous chapters, time delay (dead-time or transport lag) does occur in control systems. Figure 8.29 illustrates a digital control system with time delay. The analysis and compensation of time delays in digital systems are carried out in a similar fashion to continuous-time control systems. The time delays are handled in the same way in the two spaces. However, a few matters are peculiar to digital systems.

• Whereas a time delay τ is modelled as e^(−sτ) in continuous control systems, it is depicted by powers of z⁻¹ (a delay of d = τ/T samples is z⁻ᵈ) in digital ones.
• The discrete-time delay τ is always a multiple of the sampling period T.
• Discrete-time delays are equivalent to poles at z = 0. Consequently, it is always possible to absorb these delays into the digital model dynamics.
• Keeping delays separate is better for performance, especially for systems with long delays compared to the sampling period.
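In code, an integer delay of d samples (a factor z⁻ᵈ) is simply a d-element buffer on the signal path. A minimal Python sketch (the delay length and input sequence are illustrative):

```python
from collections import deque

# A d-sample delay z^{-d}: equivalent to d poles at z = 0.
d = 3
buffer = deque([0.0] * d, maxlen=d)   # holds the last d inputs

def delay_step(u):
    y = buffer[0]          # oldest sample leaves the buffer
    buffer.append(u)       # newest sample enters
    return y

out = [delay_step(u) for u in [1.0, 2.0, 3.0, 4.0, 5.0]]
print(out)   # → [0.0, 0.0, 0.0, 1.0, 2.0]
```

Keeping the delay as an explicit buffer like this, rather than folding the z⁻ᵈ poles into the model, is exactly the "keep delays separate" recommendation above.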

IMPLEMENTATION OF DIGITAL CONTROL SYSTEMS

In implementing digital control systems, two distinct sets of issues have to be addressed: hardware and software. This section gives a brief overview of such issues in a real application. The hardware used in the construction of digital control systems needs to be fast enough to keep up with the real system dynamics. The qualities of the hardware for control are discussed in this section. There must be a cost balance between the hardware and the controlled system. Memory requirements and single-board computers (microcontrollers) are discussed. Transformation of the controller difference equations into software is explained. Software for control must be fast, to match the system dynamics. Hence, items of interest in implementing software for control are discussed. Real-time computing techniques should be addressed, with emphasis placed on multitasking policies. Techniques for handling integral and reset wind-up effects are essential. Usually, the two will have to be integrated after designing the software and the hardware. Before the system is put into operation, it must be tested and debugged where necessary. The software and the hardware must be compatible. All such issues should be discussed.

Figure 8.29 Time delay in a digital control system.

PROBLEMS

Problem 8.1 Assess the controllability, observability, and stability of the following digital system:

x(k + 1) = [−1 0.8; 0.5 −1.6] x(k) + [1; 2] u(k)

y(k) = [1 0] x(k) + [0] u(k).

Problem 8.2 Represent the SISO system with the following transfer function using state-space:

Y(z)/U(z) = (z + 0.3)/(z² − 0.6z − 0.16).

Problem 8.3 The following open-loop continuous-time transfer function represents a second-order system that is to be controlled using a digital computer with ZOH:

G(s) = 1/((s + 1)(s + 10)).

Choose a suitable sampling interval T and design a digital PID controller that ensures that the settling time ts ≤ 1 sec, the damping ratio ξ ≥ 0.5, and there is zero steady-state error to a step input.

Problem 8.4 A unity feedback digital control system operates at a sampling interval of 0.05 sec. If the plant transfer function is given by

G(z) = (z + 0.8)/((z − 1)(z − 0.8)),

design the controller D(z) using root locus methods so that the closed-loop system satisfies the following time domain specifications:

Settling time ≤ 0.4 sec
Damping ratio ≥ 0.7
Steady-state error = zero (step input).

Problem 8.5 Find the time function y(kT) corresponding to the following closed-form Z-transforms Y(z):

(a) Y(z) = 4z/(z² − 1)

(b) Y(z) = 2z/(z² − 0.5z − 0.5)

(c) Y(z) = (0.522z² + 0.361z − 0.203)/(z³ − 2.347z² + 1.797z − 0.449)

For each of these, use the following methods:

(i) Inverse transform method
(ii) Partial fraction expansion and use of the Z-transform table
(iii) Power series expansion

Problem 8.6 Suppose that the following continuous-time lead compensator

D(s) = 10(s + 1)/(s + 10)

is to be used in controlling the plant

G(s) = 20/(s(s + 1)(s + 10)).

If this system is to be implemented using a digital computer at a sampling interval of 0.1 sec, determine the transfer function D(z) of the controller using:

(a) Pole-zero matching method
(b) Tustin's conversion method
(c) Bilinear transformation method

Problem 8.7 (a) Compute the poles and zeros of the discrete-time systems that result from discretising the continuous-time system with transfer function

G(s) = 10(s² + 0.2s + 2)/((s² + 0.5s + 1)(s + 10)),

for sampling intervals ranging from Ts = 1 sec to Ts = 0.01 sec. (b) Create a root locus plot showing the evolution of the poles and zeros computed in part (a) as a function of Ts . (c) Repeat parts (a) and (b) for G(s) =

(s + 0.1 + 2i)(s + 0.1 − 2i) / [(s + 0.1 + i)(s + 0.1 − i)(s + 0.1 + 0.5i)(s + 0.1 − 0.5i)].

(d) Repeat parts (a), (b), and (c) for different discretisation methods. Problem 8.8 (a) Transform G(z) =

(z + 0.3)(z − 0.3) / [(z − 0.1)(z − 0.5 + 0.5i)(z − 0.5 − 0.5i)]

into a continuous-time equivalent system by means of the ZOH method, for sampling intervals ranging from Ts = 0.01 sec to Ts = 1 sec.
(b) Determine the poles and zeros of the resulting continuous-time systems.
(c) Plot the root locus for each of the continuous-time systems.

Problem 8.9 The Z-transform of a discrete-time filter h(k) at a 1 Hz sample rate is

H(z) = [1 + (1/2)z⁻¹] / {[1 − (1/2)z⁻¹][1 + (1/2)z⁻¹]}.

(a) Let u(k) and y(k) be the discrete input and output of this filter. Find a difference equation relating u(k) and y(k).
(b) Find the natural frequency and the damping coefficient of the filter's poles.
(c) Is the filter stable?

Problem 8.10 Use the Z-transform to solve the difference equation

y(k) − 3y(k − 1) + 2y(k − 2) = 2u(k − 1) − 2u(k − 2),

where

u(k) = k, k ≥ 0; u(k) = 0, k < 0

y(k) = 0, k < 0.

Problem 8.11 A unity feedback system has an open-loop transfer function given by

G(s) = 250/(s[(s/10) + 1]).

The following lag compensator, added in series with the plant, yields a phase margin of 50°:

Dc(s) = [(s/1.25) + 1]/(50s + 1).

(a) Using the matched pole-zero approximation, determine an equivalent digital realisation of this compensator.
(b) The following transfer function is a lead network designed to add about 60° of phase at ω₁ = 3 rad/sec:

H(s) = (s + 1)/(0.1s + 1).

Assume a sampling period of T = 0.25 sec, and compute and plot in the z-plane the pole and zero locations of the digital implementations of H(s) obtained using (i) Tustin's method and (ii) pole-zero mapping. For each case, compute the amount of phase lead provided by the network at z₁ = e^(jω₁T).
(c) Using a log scale for the frequency range ω = 0.1 to ω = 100 rad/sec, plot the magnitude Bode plots for each of the equivalent digital systems found in part (a), and compare with H(s). (Hint: the magnitude Bode plot is given by |H(z)| = |H(e^(jωT))|.)

Problem 8.12 (a) The following transfer function is a lag network designed to introduce a gain attenuation of 10 (−20 dB) at ω = 3 rad/sec:

H(s) = (10s + 1)/(100s + 1).

Assume a sampling period of T = 0.25 sec, and compute and plot in the z-plane the pole and zero locations of the digital implementations of H(s) obtained using (i) Tustin's method and (ii) pole-zero mapping. For each case, compute the amount of gain attenuation provided by the network at z₁ = e^(jωT).
(b) For each of the equivalent digital systems in part (a), plot the Bode magnitude curves over the frequency range ω = 0.01 to 10 rad/sec.

Problem 8.13 Write a computer programme to compute Φ and Γ from A, B, and the sample period T. Use the programme to compute Φ and Γ when:

(a) A = [−1 0; 0 −2], B = [1; 1], T = 0.2 sec.

(b) A = [−3 1; −2 0], B = [1; 0], T = 0.2 sec.

Problem 8.14 Consider the following discrete-time system in state-space form:

[x₁(k + 1); x₂(k + 1)] = [0 −1; 0 −1][x₁(k); x₂(k)] + [0; 10] u(k).

Use state feedback to relocate all of the system's poles to 0.5.

Problem 8.15 The characteristic equation of a sampled system is

z² + (K − 1.5)z + 0.5 = 0.

Find the range of K so that the system is stable. (Answer: 0 < K < 3)

Problem 8.16 A unit ramp r(t) = t, t > 0 is used as an input to a process where

G(s) = 1/(s + 1),

as shown in the diagram below.

[Diagram: unit ramp r(t) → sampler → r*(t) → G(s) → y(t)]

Determine the output y(kT) for the first four sampling instants.

Problem 8.17 A closed-loop system has a hold circuit, as shown in Problem 8.16. Determine G(z) when T = 1 sec and

Gp(s) = 2/(s + 2).

Problem 8.18 Determine which of the following digital transfer functions are physically realisable.

(a) Gc(z) = 10(1 + 0.2z⁻¹ + 0.5z⁻²)/(z⁻¹ + z⁻² + 1.5z⁻³)

(b) Gc(z) = (1.5z⁻¹ − z⁻²)/(1 + z⁻¹ + 2z⁻²)

(c) Gc(z) = (z + 1.5)/(z³ + z² + z + 1)

(d) Gc(z) = (z⁻¹ + 2z⁻² + 0.5z⁻³)/(z⁻¹ + z⁻²)

(e) Gc(z) = 0.1z + 1 + z⁻¹

(f) Gc(z) = z⁻¹ + z⁻²

Problem 8.19 Consider the digital control system

x[(k + 1)T] = Ax(kT) + Bu(kT),

where

A = [0 −1; −1 −1], B = [0; 1].

The state feedback control is described by u(kT) = −Kx(kT), where K = [k₁ k₂]. Find the values of k₁ and k₂ so that the roots of the characteristic equation of the closed-loop system are at 0.5 and 0.7.

Problem 8.20 (a) Compare and contrast the Matched Pole-Zero (MPZ) and the Modified Matched Pole-Zero (MMPZ) methods. Explain the advantages and drawbacks involved.
(b) A compensator for an electrical power control system is given by

D(s) = (50s + 150)/(0.5s + 2).

Use the Matched Pole-Zero (MPZ) method to design an equivalent digital compensator D(z). Use a sampling time of 1 second.
(c) For the same continuous-time compensator D(s) in (b), use the Tustin approximation method to design a digital compensator D(z). Use a sampling time of 1 second.
(d) Compare the DC gains of the compensators obtained in (b) and (c). Explain the significance of your answer.
(e) If we use a much higher sampling rate, say 100 Hz, redo (b) and (c). What will happen to the structure and performance of the two compensators? Give reasons for these outcomes.
(f) Why do we use the direct discrete design of digital systems? Outline the processes involved in carrying out such a design.

Problem 8.21 In a robotic arm's control system, the angle θ(t) represents the robotic arm orientation, ω(t) is the robot's angular speed, u(t) is the input force, and the output y(t) is the angle θ(t). The robotic arm control model can be represented by the following equations:

θ̇(t) = ω(t)

ω̇(t) = 2u(t)

(a) Find the plant State-Space model (A, B, C, D) for the system.
(b) Use two different methods to show that the plant Transfer Function model of the system is given by

G(s) = 2/s²

(c) From the continuous-time State-Space model (A, B, C, D) obtained in (a), derive the digital State-Space model (F, G, H, J).
(d) From the results in (c), find the discrete plant Transfer Function model G(z).
(e) Using the direct discrete design method, develop the discrete plant Transfer Function model G(z) from the continuous plant transfer function in (b). Is the G(z) in (e) the same as that obtained in (d)? Why is this the case?
(f) Explain the significance of the poles of the continuous and digital Transfer Function models of the robot arm. What is the impact of digitisation on the stability of a dynamic system?
(g) Show that the digital system is controllable and observable for all sample times (T > 0).
(h) Use two different values of T to demonstrate the correctness of (g).

9 Advanced Control Systems

9.1 INTRODUCTION

In this chapter, advanced topics and issues involved in the design and analysis of control systems are addressed. In particular, the subjects of discrete-time estimation (both state-space and information space), optimal stochastic control, and nonlinear control systems are presented. Adaptive control systems and robust control are briefly introduced.

A multisensor system may employ a range of different sensors, with different characteristics, to obtain information about an environment. The diverse and sometimes conflicting information obtained from multiple sensors gives rise to the problem of how the information can be combined in a consistent and coherent manner. This is the data fusion problem. Multisensor fusion is the process by which information from a multitude of sensors is combined to yield a coherent description of the system under observation. All data fusion problems involve an estimation process. An estimator is a decision rule that takes as an argument a sequence of observations and computes a value for the parameter or state of interest. General recursive estimation is presented and, in particular, the Kalman filter is discussed. A Bayesian approach to probabilistic information fusion is outlined and the notion and measures of information are defined. This leads to the derivation of the algebraic equivalent of the Kalman filter, the (linear) Information filter.

State estimation for systems with nonlinearities is considered and the extended Kalman filter treated. Linear information space is then extended to nonlinear information space by deriving the extended Information filter. This filter forms the basis of decentralised estimation and control methods for nonlinear systems. The estimation techniques are then extended to LQG stochastic control problems including systems involving nonlinearities, that is, the nonlinear stochastic control systems.
In most of the work in the previous eight chapters, it has been assumed that the dynamics of systems to be controlled can be described completely by a set of linear differential equations and that the principle of superposition holds. Such systems are known as linear dynamic systems. However, in most applications, these assumptions are not valid, and the systems are termed nonlinear dynamic systems. The nonlinearity of dynamic systems can be inherent or deliberately added to improve the control action. This chapter addresses the challenge of nonlinear systems, their analysis and control design. Various nonlinear control system techniques, such as gain scheduling, feedback linearisation, adaptive and robust control, are presented, compared, and contrasted. An extended example of nonlinear control of a solar PV power system is considered.

9.2 STATE-SPACE ESTIMATION

In this section, the principles and concepts of estimation are introduced. An estimator is a decision rule that takes as an argument a sequence of observations and computes a value for the parameter or state of interest. The Kalman filter is a recursive linear estimator that successively calculates a minimum variance estimate for a state that evolves over time, on the basis of periodic observations that are linearly related to this state. The Kalman filter estimator minimises the mean squared estimation error and is optimal with respect to a variety of important criteria under specific assumptions about process and observation noise. The development of linear estimators can be extended to the problem of estimation for nonlinear systems. The Kalman filter has found extensive applications in such fields as aerospace navigation, robotics, and process control.

DOI: 10.1201/9781032718880-9

9.2.1 SYSTEM DESCRIPTION

A very specific notation is adopted to describe systems throughout this chapter [4]. The state of nature is described by an n-dimensional vector x = [x₁, x₂, . . . , xₙ]ᵀ. Measurements or observations are made of the state of x. These are described by an m-dimensional observation vector z. A linear discrete-time system is described as follows:

x(k) = F(k)x(k − 1) + B(k)u(k − 1) + w(k − 1),    (9.1)

where x(k) is the state of interest at time k, F(k) is the State-Transition Matrix from time (k − 1) to k, while u(k) and B(k) are the input control vector and matrix, respectively. The vector w(k) ∼ N(0, Q(k)) is the associated process noise, modelled as an uncorrelated, zero-mean, white sequence with process noise covariance E[w(i)wᵀ(j)] = δᵢⱼQ(i). The system is observed according to the linear discrete equation

z(k) = H(k)x(k) + v(k),    (9.2)

where z(k) is the vector of observations made at time k. H(k) is the observation matrix or model and v(k) ∼ N(0, R(k)) is the associated observation noise modelled as an uncorrelated white sequence with measurement noise covariance, E[v(i)vT ( j)] = δi j R(i). It is assumed that the process and observation noises are uncorrelated, i.e., E[v(i)wT ( j)] = 0. The notation due to Bar-Shalom [4] is used to denote the vector of estimates of the states x( j) at time i given information up to and including time j by xˆ (i | j) = E [x(i) | z(1), . . . , z( j)] . This is the conditional mean, the minimum mean square error estimate. This estimate has a corresponding variance given by h i P(i | j) = E (x(i) − xˆ (i | j)) (x(i) − xˆ (i | j))T | z(1), . . . , z( j) .
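To make the notation concrete, the model of Equations 9.1 and 9.2 can be simulated directly. The sketch below uses NumPy; the constant-velocity model and all numerical values are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constant-velocity model: x = [position, velocity], unit time-step
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # state-transition matrix F(k)
B = np.array([[0.5],
              [1.0]])               # control matrix B(k)
H = np.array([[1.0, 0.0]])          # observation matrix H(k): position only
Q = 0.01 * np.eye(2)                # process noise covariance Q(k)
R = np.array([[0.25]])              # observation noise covariance R(k)

def step(x, u):
    """x(k) = F(k)x(k-1) + B(k)u(k-1) + w(k-1), Equation 9.1."""
    w = rng.multivariate_normal(np.zeros(2), Q)
    return F @ x + B @ u + w

def observe(x):
    """z(k) = H(k)x(k) + v(k), Equation 9.2."""
    v = rng.multivariate_normal(np.zeros(1), R)
    return H @ x + v

x = np.array([0.0, 1.0])            # initial state of nature
zs = []
for k in range(5):
    x = step(x, np.zeros(1))        # zero control input
    zs.append(observe(x))
```

Here n = 2 and m = 1: the state is two-dimensional, but only its first component is observed, through noise.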

9.2.2 KALMAN FILTER ALGORITHM

A great deal has been written about the Kalman filter and estimation theory in general [4, 5, 20]. An outline of the Kalman filter algorithm is presented here without derivation. Figure 9.1 summarises its main functional stages. For a system described by Equation 9.1 and observed according to Equation 9.2, the Kalman filter provides a recursive estimate xˆ(k | k) for the state x(k) at time k, given all information up to time k, in terms of the predicted state xˆ(k | k − 1) and the new observation z(k). The one-step-ahead prediction, xˆ(k | k − 1), is the estimate of the state at time k given only information up to time (k − 1). The Kalman filter algorithm can be summarised in two stages:

9.2.2.1 Prediction

xˆ(k | k − 1) = F(k)xˆ(k − 1 | k − 1) + B(k)u(k)   (9.3)

P(k | k − 1) = F(k)P(k − 1 | k − 1)FT(k) + Q(k).   (9.4)


Design and Analysis of Control Systems

Figure 9.1 The Kalman filter algorithm.

9.2.2.2 Estimation

xˆ(k | k) = [1 − W(k)H(k)] xˆ(k | k − 1) + W(k)z(k)   (9.5)

P(k | k) = P(k | k − 1) − W(k)S(k)WT(k),   (9.6)

where W(k) and S(k), known as the gain and innovation covariance matrices, respectively, are given by

W(k) = P(k | k − 1)HT(k)S−1(k),   (9.7)

S(k) = H(k)P(k | k − 1)HT(k) + R(k).   (9.8)

The matrix 1 represents the identity matrix. From Equation 9.5, the Kalman filter state estimate can be interpreted as a linear weighted sum of the state prediction and the observation. The weights in this averaging process are {1 − W(k)H(k)}, associated with the prediction, and W(k), associated with the observation. The values of the weights depend on the balance of confidence in the prediction and the observation, as specified by the process and observation noise covariances.
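The two stages above translate almost line for line into code. The following sketch (NumPy; the tracking model and all numbers are illustrative assumptions) implements Equations 9.3 through 9.8 and runs the filter on simulated data:

```python
import numpy as np

def kalman_step(xhat, P, z, u, F, B, H, Q, R):
    """One Kalman filter cycle: prediction (9.3)-(9.4), then estimation (9.5)-(9.8)."""
    # Prediction
    xpred = F @ xhat + B @ u                        # (9.3)
    Ppred = F @ P @ F.T + Q                         # (9.4)
    # Gain and innovation covariance
    S = H @ Ppred @ H.T + R                         # (9.8)
    W = Ppred @ H.T @ np.linalg.inv(S)              # (9.7)
    # Estimation: weighted sum of prediction and observation
    xhat = xpred + W @ (z - H @ xpred)              # (9.5), rearranged
    P = Ppred - W @ S @ W.T                         # (9.6)
    return xhat, P

# Illustrative constant-velocity target observed in position only
F = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.zeros((2, 1))
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])
xhat, P = np.zeros(2), 10.0 * np.eye(2)             # deliberately poor start
for k in range(50):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    xhat, P = kalman_step(xhat, P, z, np.zeros(1), F, B, H, Q, R)
```

Writing the update as xˆ(k | k − 1) + W(k)[z(k) − H(k)xˆ(k | k − 1)] is the standard rearrangement of Equation 9.5; expanding it recovers the weighted-sum form with weights [1 − W(k)H(k)] and W(k).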

9.3 THE INFORMATION FILTER

The Information filter is essentially a Kalman filter expressed in terms of measures of information about the parameters (states) of interest, rather than direct state estimates and their associated covariances [23]. This filter has also been called the inverse covariance form of the Kalman filter [5, 20]. In this section, the contextual meaning of information is explained, and the Information filter is derived.

9.3.1 INFORMATION SPACE

9.3.1.1 Bayesian Theory

The probabilistic information contained in z about x is described by the probability distribution function, p(z|x), known as the likelihood function. Such information is considered objective because it is based on observations. The likelihood function contains all the relevant information from


the observation z required in order to make inferences about the true state x. This leads to the formulation of the likelihood principle, which states that all that is known about the unknown state is what is obtained through experimentation. Thus, the likelihood function contains all the information needed to construct an estimate for x. However, the likelihood function does not give the complete picture if, before measurement, information about the state x is made available exogenously. Such a priori information about the state is encapsulated in the prior distribution function p(x) and is regarded as subjective because it is not based on any observed data. How such prior information and the likelihood information interact to provide a posteriori (combined prior and observed) information is resolved by Bayes' theorem, which gives the posterior conditional distribution of x given z,

p(x, z) = p(x|z)p(z) = p(z|x)p(x) ⇔ p(x|z) = p(z|x)p(x)/p(z),   (9.9)

where p(z) is the marginal distribution. To reduce uncertainty, several measurements can be taken over time before constructing the posterior. The set of all observations up to time k is defined as

Zk ≜ {z(1), z(2), . . ., z(k)}.   (9.10)

The corresponding likelihood function is given by

Λk(x) ≜ p(Zk | x).   (9.11)

This is a measure of how "likely" a parameter value x is, given that all the observations in Zk are made. Thus, the likelihood function serves as a measure of evidence from data. The posterior distribution of x, given the set of observations Zk, is now computed as

p(x|Zk) = p(Zk|x)p(x)/p(Zk).   (9.12)

It can also be computed recursively after each observation z(k) as follows:

p(x|Zk) = p(z(k)|x)p(x|Zk−1)/p(z(k)|Zk−1).   (9.13)

In this recursive form, there is no need to store all the observations; only the current observation z(k) at step k is required. The recursive form has reduced memory requirements, and hence it is the most commonly implemented form of Bayes' theorem.
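The batch form of Equation 9.12 and the recursive form of Equation 9.13 produce identical posteriors, which is easy to verify numerically. Below is a minimal sketch (NumPy) for a two-hypothesis static state observed through an invented binary sensor model; all probabilities are illustrative.

```python
import numpy as np

prior = np.array([0.5, 0.5])             # p(x) over the two hypotheses x = 0, 1
lik_z1 = np.array([0.2, 0.8])            # p(z = 1 | x), an assumed sensor model
lik_z0 = 1.0 - lik_z1                    # p(z = 0 | x)

def update(posterior, z):
    """One recursive Bayes step, Equation 9.13: multiply the running posterior
    by the likelihood of the new observation, then renormalise (the division
    by p(z(k) | Zk-1))."""
    lik = lik_z1 if z == 1 else lik_z0
    unnorm = lik * posterior
    return unnorm / unnorm.sum()

Zk = [1, 1, 0, 1, 1]                     # observation sequence

# Recursive form (9.13): fold in one observation at a time
p_rec = prior.copy()
for z in Zk:
    p_rec = update(p_rec, z)

# Batch form (9.12): likelihood of the whole set times the prior
lik_all = np.prod([lik_z1 if z == 1 else lik_z0 for z in Zk], axis=0)
p_batch = lik_all * prior
p_batch /= p_batch.sum()
```

Both routes give the same posterior, but the recursive route never stores more than the current observation.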

9.3.1.2 Measures of Information

The term information is employed here in the Fisher sense, that is, as a measure of the amount of information about a random state x present in the set of observations Zk up to time k. The score function, sk(x), is defined as the gradient of the log-likelihood function,

sk(x) ≜ ∇x ln p(Zk, x) = ∇x p(Zk, x)/p(Zk, x).

By considering sk(x) as a random variable, its mean is obtained from

E[sk(x)] = ∫ [∇x p(Zk, x)/p(Zk, x)] p(Zk, x) dz = ∇x ∫ p(Zk, x) dz = 0.   (9.14)


The Fisher information matrix J(k) is then defined as the covariance of the score function,

J(k) ≜ E[{∇x ln p(Zk, x)}{∇x ln p(Zk, x)}T].   (9.15)

Expressing this result as the negative expectation of the Hessian of the log-likelihood gives

J(k) = −E[∇x∇xT ln p(Zk, x)].   (9.16)

For a non-random state x, the expression for the Fisher information matrix becomes

J(k) = −E[∇x∇xT ln p(Zk|x)].   (9.17)

The notion of Fisher information is useful in estimation and control. It is consistent with information in the sense of the Cramer-Rao lower bound (CRLB) [5]. According to the CRLB, the mean squared error corresponding to the estimator of a parameter cannot be smaller than a certain quantity related to the likelihood function. Thus, the CRLB bounds the mean squared error of any unbiased estimator xˆ(k | k) for a state vector x(k) modelled as random:

E[{x(k) − xˆ(k | k)}{x(k) − xˆ(k | k)}T | Zk] ≥ J−1(k).   (9.18)

In this way, the covariance matrix of an unbiased estimator is bounded from below. It follows from Equation 9.18 that the CRLB is the inverse of the Fisher information matrix, J(k). This is a very important relationship. A necessary condition for an estimator to be consistent in the mean square sense is that there must be an increasing amount of information (in the sense of Fisher) about the parameter in the measurements, i.e., the Fisher information has to tend to infinity as k → ∞. The CRLB then converges to zero as k → ∞, and thus the variance can also converge to zero. Furthermore, if an estimator's variance is equal to the CRLB, then such an estimator is called efficient.

Consider the expression for the Fisher information matrix in Equation 9.15 or 9.16. In the particular case where the likelihood function, Λk(x), is Gaussian, it can be shown that the Fisher information matrix, J(k), is equal to the inverse of the covariance matrix P(k | k); that is, the CRLB is the covariance matrix. This is done by considering the probability distribution function of a Gaussian random vector x(k) whose mean and associated covariance matrix are xˆ(k | k) and P(k | k), respectively. In particular,

p(x(k)|Zk) = N(x(k), xˆ(k | k), P(k | k))
           ≜ (1/A) exp{−[x(k) − xˆ(k | k)]T P−1(k | k) [x(k) − xˆ(k | k)]/2},

where A = √det(2πP(k | k)). Substituting this distribution into Equation 9.16 leads to

J(k) = −E[∇x∇xT ln p(x(k)|Zk)]
     = E[∇x∇xT ([x(k) − xˆ(k | k)]T P−1(k | k) [x(k) − xˆ(k | k)]/2 + ln A)]
     = E[P−1(k | k) {[x(k) − xˆ(k | k)][x(k) − xˆ(k | k)]T} P−1(k | k)]
     = P−1(k | k) P(k | k) P−1(k | k)
     = P−1(k | k)   (9.19)
     = (CRLB)−1.   (9.20)


Thus, assuming Gaussian noise and minimum mean squared error estimation, the Fisher information matrix is equal to the inverse of the covariance matrix. This information matrix is central to the filtering techniques employed in this chapter. Although the filter constructed from this information space is algebraically equivalent to the Kalman filter, it has been shown to have advantages over the Kalman filter in multisensor data fusion applications. These include reduced computation, algorithmic simplicity, and easy initialisation. In particular, these attributes make the Information filter easier to decouple, decentralise, and distribute. These are important filter characteristics in multisensor data fusion systems.

9.3.2 INFORMATION FILTER DERIVATION

The two key information-analytic variables are the information matrix and the information state vector. The information matrix has already been derived above as the inverse of the covariance matrix,

Y(i | j) ≜ P−1(i | j).   (9.21)

The information state vector is the product of the information matrix (the inverse of the covariance matrix) and the state estimate,

yˆ(i | j) ≜ P−1(i | j)xˆ(i | j) = Y(i | j)xˆ(i | j).   (9.22)

The variables Y(i | j) and yˆ(i | j) form the basis of the information space ideas that are central to the material presented in this chapter. The Information filter is derived from the Kalman filter algorithm by post-multiplying the term {1 − W(k)H(k)} from Equation 9.5 by the term {P(k | k − 1)P−1(k | k − 1)} (i.e., post-multiplication by the identity matrix 1):

1 − W(k)H(k) = [P(k | k − 1) − W(k)H(k)P(k | k − 1)] P−1(k | k − 1)
             = [P(k | k − 1) − W(k)S(k)S−1(k)H(k)P(k | k − 1)] P−1(k | k − 1)
             = [P(k | k − 1) − W(k)S(k)WT(k)] P−1(k | k − 1)
             = P(k | k)P−1(k | k − 1).   (9.23)

Substituting the expression for the innovation covariance S(k), given in Equation 9.8, into the expression for the filter gain matrix W(k), from Equation 9.7, gives

W(k) = P(k | k − 1)HT(k)[H(k)P(k | k − 1)HT(k) + R(k)]−1
⇔ W(k)[H(k)P(k | k − 1)HT(k) + R(k)] = P(k | k − 1)HT(k)
⇔ W(k)R(k) = [1 − W(k)H(k)]P(k | k − 1)HT(k)
⇔ W(k) = [1 − W(k)H(k)]P(k | k − 1)HT(k)R−1(k).   (9.24)

Substituting Equation 9.23 into Equation 9.24 gives

W(k) = P(k | k)HT(k)R−1(k).   (9.25)

Substituting Equations 9.23 and 9.25 into Equation 9.5 and pre-multiplying through by P−1(k | k) gives the update equation for the information state vector as

P−1(k | k)xˆ(k | k) = P−1(k | k − 1)xˆ(k | k − 1) + HT(k)R−1(k)z(k),


or

yˆ(k | k) = yˆ(k | k − 1) + HT(k)R−1(k)z(k).   (9.26)

A similar expression can be found for the information matrix associated with this estimate. From Equations 9.6, 9.7, and 9.23 it follows that

P(k | k) = [1 − W(k)H(k)] P(k | k − 1)[1 − W(k)H(k)]T + W(k)R(k)WT(k).   (9.27)

Substituting in Equations 9.23 and 9.25 gives

P(k | k) = [P(k | k)P−1(k | k − 1)] P(k | k − 1) [P(k | k)P−1(k | k − 1)]T
         + [P(k | k)HT(k)R−1(k)] R(k) [P(k | k)HT(k)R−1(k)]T.   (9.28)

Pre- and post-multiplying by P−1(k | k), then simplifying, gives the information matrix update equation as

P−1(k | k) = P−1(k | k − 1) + HT(k)R−1(k)H(k)   (9.29)

or

Y(k | k) = Y(k | k − 1) + HT(k)R−1(k)H(k).   (9.30)

The information state contribution i(k) from an observation z(k), and its associated information matrix I(k), are defined, respectively, as follows:

i(k) ≜ HT(k)R−1(k)z(k),   (9.31)

I(k) ≜ HT(k)R−1(k)H(k).   (9.32)

The information propagation coefficient L(k | k − 1), which is independent of the observations made, is given by the expression

L(k | k − 1) = Y(k | k − 1)F(k)Y−1(k − 1 | k − 1).   (9.33)

With these information quantities well defined, the linear Kalman filter can now be written in terms of the information state vector and the information matrix.

9.3.2.1 Prediction

yˆ(k | k − 1) = L(k | k − 1)yˆ(k − 1 | k − 1)   (9.34)

Y(k | k − 1) = [F(k)Y−1(k − 1 | k − 1)FT(k) + Q(k)]−1.   (9.35)

9.3.2.2 Estimation

yˆ(k | k) = yˆ(k | k − 1) + i(k)   (9.36)

Y(k | k) = Y(k | k − 1) + I(k).   (9.37)

This is the information form of the Kalman filter [23]. Despite its potential applications, it is not widely used, and it is only thinly covered in the literature. Bar-Shalom [5] and Maybeck [20] briefly discuss the idea of information estimation, but they neither explicitly derive the algorithm in terms of information, as done above, nor use it as a principal filtering method.
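This algebraic equivalence is easy to confirm numerically. The sketch below (NumPy; the predicted state, covariance, and observation are invented for illustration) carries out one estimation step in covariance form, Equations 9.5 through 9.8, and once more in information form, Equations 9.31, 9.32, 9.36, and 9.37:

```python
import numpy as np

H = np.array([[1.0, 0.0]])               # observation matrix
R = np.array([[1.0]])                    # observation noise covariance

xpred = np.array([2.0, 1.0])             # assumed prediction xhat(k | k-1)
Ppred = np.array([[2.0, 0.5],
                  [0.5, 1.0]])           # assumed prediction covariance
z = np.array([2.4])                      # assumed observation

# Covariance (Kalman) form, Equations 9.5-9.8
S = H @ Ppred @ H.T + R
W = Ppred @ H.T @ np.linalg.inv(S)
x_kf = xpred + W @ (z - H @ xpred)
P_kf = Ppred - W @ S @ W.T

# Information form: Y = P^-1, y = P^-1 x
Ypred = np.linalg.inv(Ppred)
ypred = Ypred @ xpred
i_k = H.T @ np.linalg.inv(R) @ z         # information state contribution (9.31)
I_k = H.T @ np.linalg.inv(R) @ H         # information matrix contribution (9.32)
y = ypred + i_k                          # (9.36)
Y = Ypred + I_k                          # (9.37)
x_if = np.linalg.solve(Y, y)             # recover the state estimate
```

Note that the information form inverts matrices of at most the state dimension (plus R), whereas the Kalman form inverts the innovation covariance S; this size difference is the advantage noted in Section 9.3.3.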


9.3.3 FILTER CHARACTERISTICS

By comparing the implementation requirements and performance of the Kalman and Information filters, a number of attractive features of the latter are identified:

• The information estimation Equations 9.36 and 9.37 are computationally simpler than the state estimation Equations 9.5 and 9.6. This can be exploited by partitioning these equations for decentralised multisensor estimation.
• Although the information prediction Equations 9.34 and 9.35 are more complex than Equations 9.3 and 9.4, prediction depends on a propagation coefficient that is independent of the observations. It is thus again easy to decouple and decentralise.
• There are no gain or innovation covariance matrices, and the maximum dimension of a matrix to be inverted is the state dimension. In multisensor systems the state dimension is generally smaller than the observation dimension, so it is preferable to employ the Information filter and invert smaller information matrices than to use the Kalman filter and invert larger innovation covariance matrices.
• Initialising the Information filter is much easier than initialising the Kalman filter, because the information estimates (matrix and state) are easily initialised to zero information. However, to implement the Information filter, a start-up procedure is required in which the information matrix is set with small non-zero diagonal elements to make it invertible.

These characteristics are useful in the development of decentralised data fusion and control systems. Consequently, this chapter employs information space estimation as the principal filtering technique.

9.4 THE EXTENDED KALMAN FILTER (EKF)

In almost all real data fusion problems, the state or environment of interest does not evolve linearly. Consequently, simple linear models are not adequate to describe the system. Furthermore, the sensor observations may not depend linearly on the states that describe the environment. A popular approach to solving nonlinear estimation problems has been to use the extended Kalman filter (EKF) [4, 23]. This is a linear estimator for a nonlinear system, obtained by linearisation of the nonlinear state and observation equations. For any nonlinear system, the EKF is the best linear unbiased estimator with respect to minimum mean squared error criteria. The EKF is conceptually simple, and its derivation follows from arguments of linearisation and the Kalman filter algorithm. The difficulty arises in implementation. It can be made to work well, but it may perform badly or even become unstable, with diverging estimates. This is most often due to a lack of careful modelling of sensors and environment; failure to understand the limitations of the algorithm exacerbates the problem.

9.4.1 NONLINEAR STATE-SPACE

The system of interest is described by a nonlinear discrete-time state transition equation of the form

x(k) = f (x(k − 1), u(k − 1), (k − 1)) + w(k),   (9.38)

where x(k − 1) is the state vector and u(k − 1) is a known input vector, both at time (k − 1). The vectors x(k) and w(k) represent the state vector and some additive process noise vector, respectively, both at time-step k. The nonlinear function f(·, ·, k) is the state transition function mapping the previous state and current control input to the current state. It is assumed that observations of the state of this system are made according to a nonlinear observation equation of the form

z(k) = h (x(k), k) + v(k),   (9.39)


where z(k) is the observation made at time k, x(k) is the state at time k, v(k) is some additive observation noise, and h(·, k) is a nonlinear observation model mapping the current state to observations. It is assumed that both noise vectors v(k) and w(k) are linearly additive Gaussian and temporally uncorrelated with zero mean, which means

E[w(k)] = E[v(k)] = 0,   ∀k,

with the corresponding covariances given by

E[w(i)wT(j)] = δij Q(i),   E[v(i)vT(j)] = δij R(i).

It is assumed that the process and observation noises are uncorrelated, i.e.,

E[w(i)vT(j)] = 0,   ∀i, j.

9.4.2 EKF DERIVATION

The derivation of the EKF follows from that of the linear Kalman filter, by linearising the state and observation models using Taylor series expansion [5, 20].

9.4.2.1 State Prediction

It is assumed that there exists an estimate at time (k − 1) that is approximately equal to the conditional mean,

xˆ(k − 1 | k − 1) ≈ E[x(k − 1) | Zk−1].   (9.40)

The objective is to find a prediction xˆ(k | k − 1) for the state at the next time k based only on the information available up to time (k − 1). Expanding Equation 9.38 as a Taylor series about the estimate xˆ(k − 1 | k − 1) gives

x(k) = f (xˆ(k − 1 | k − 1), u(k − 1), (k − 1)) + ∇fx(k) [x(k − 1) − xˆ(k − 1 | k − 1)]
     + O([x(k − 1) − xˆ(k − 1 | k − 1)]2) + w(k),   (9.41)

where ∇fx(k) is the Jacobian of f evaluated at x(k − 1) = xˆ(k − 1 | k − 1). Truncating Equation 9.41 at first order and taking expectations conditioned on the first (k − 1) observations gives an equation for the state prediction:

xˆ(k | k − 1) = E[x(k) | Zk−1]
             ≈ E[f (xˆ(k − 1 | k − 1), u(k − 1), (k − 1)) + ∇fx(k) [x(k − 1) − xˆ(k − 1 | k − 1)] + w(k) | Zk−1]
             = f (xˆ(k − 1 | k − 1), u(k − 1), (k − 1)).   (9.42)

This follows from the assumption that the estimate xˆ(k − 1 | k − 1) is approximately equal to the conditional mean (Equation 9.40) and that the process noise w(k) has zero mean. The state estimate error at time i, given all observations up to time j, is defined as

x˜(i | j) ≜ x(i) − xˆ(i | j),   (9.43)

and the state covariance is defined as the outer product of this error with itself, conditioned on the observations made:

P(i | j) ≜ E[x˜(i | j)x˜T(i | j) | Zj].   (9.44)


In particular, the prediction error x˜(k | k − 1) can be found by subtracting the true state x(k), given in Equation 9.41, from the prediction given in Equation 9.42:

x˜(k | k − 1) = x(k) − xˆ(k | k − 1)
             = f (xˆ(k − 1 | k − 1), u(k − 1), (k − 1)) + ∇fx(k) [x(k − 1) − xˆ(k − 1 | k − 1)]
             + O([x(k − 1) − xˆ(k − 1 | k − 1)]2) + w(k) − f (xˆ(k − 1 | k − 1), u(k − 1), (k − 1))
             ≈ ∇fx(k) [x(k − 1) − xˆ(k − 1 | k − 1)] + w(k)
             = ∇fx(k)x˜(k − 1 | k − 1) + w(k).   (9.45)

The prediction is unbiased when the previous estimate is unbiased and the noise sequences are zero mean and white. Taking expectations, conditioned on the observations made up to time (k − 1), of the outer product of the prediction error gives an expression for the prediction covariance in terms of the covariance of the previous estimate:

P(k | k − 1) ≜ E[x˜(k | k − 1)x˜T(k | k − 1) | Zk−1]
             ≈ E[(∇fx(k)x˜(k − 1 | k − 1) + w(k))(∇fx(k)x˜(k − 1 | k − 1) + w(k))T | Zk−1]
             = ∇fx(k)E[x˜(k − 1 | k − 1)x˜T(k − 1 | k − 1) | Zk−1]∇fxT(k) + E[w(k)wT(k)]
             = ∇fx(k)P(k − 1 | k − 1)∇fxT(k) + Q(k).   (9.46)

The last two lines follow from the fact that the estimate and true state at time (k − 1) are statistically dependent only on the noise terms v(j) and w(j), j ≤ (k − 1). Hence, by assumption, they are uncorrelated with the current process noise w(k).
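Because the EKF replaces F(k) and H(k) with the Jacobians ∇fx(k) and ∇hx(k), an incorrectly derived Jacobian is a common source of filter divergence. A cheap safeguard is to compare the analytic Jacobian against central finite differences. The model below is an invented range-bearing observation function, used purely for illustration:

```python
import numpy as np

def h(x):
    """Illustrative nonlinear observation: range and bearing from the origin."""
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def H_jac(x):
    """Analytic Jacobian of h."""
    r2 = x[0]**2 + x[1]**2
    r = np.sqrt(r2)
    return np.array([[x[0] / r, x[1] / r],
                     [-x[1] / r2, x[0] / r2]])

def num_jac(fun, x, eps=1e-6):
    """Central-difference Jacobian, built one column at a time."""
    cols = []
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        cols.append((fun(x + dx) - fun(x - dx)) / (2.0 * eps))
    return np.column_stack(cols)

x0 = np.array([3.0, 4.0])
assert np.allclose(H_jac(x0), num_jac(h, x0), atol=1e-6)
```

The same check applies verbatim to ∇fx(k) by passing the state transition function f instead of h.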

9.4.2.2 Observation Prediction and Innovation

The next objective is to obtain a predicted observation and its corresponding innovation, to be used in updating the predicted state. This is achieved by expanding Equation 9.39, describing the observations made, as a Taylor series about the state prediction xˆ(k | k − 1):

z(k) = h (x(k)) + v(k)
     = h (xˆ(k | k − 1)) + ∇hx(k) [xˆ(k | k − 1) − x(k)] + O([xˆ(k | k − 1) − x(k)]2) + v(k),   (9.47)

where ∇hx(k) is the Jacobian of h evaluated at x(k) = xˆ(k | k − 1). Again, ignoring second- and higher-order terms and taking expectations conditioned on the first (k − 1) observations gives an equation for the predicted observation:

zˆ(k | k − 1) ≜ E[z(k) | Zk−1]
             ≈ E[h (xˆ(k | k − 1)) + ∇hx(k) [xˆ(k | k − 1) − x(k)] + v(k) | Zk−1]
             = h (xˆ(k | k − 1)),   (9.48)


where the last two lines follow from the fact that the state prediction error and the observation noise both have zero mean. After taking an observation z(k), the innovation is found by subtracting the predicted observation:

ν(k) = z(k) − h (xˆ(k | k − 1)).   (9.49)

The innovation covariance can now be found from the mean squared error in the predicted observation. The error in the predicted observation can be approximated by subtracting this prediction from the Taylor series expansion of the observation in Equation 9.47:

z˜(k | k − 1) ≜ z(k) − zˆ(k | k − 1)
             = h (xˆ(k | k − 1)) + ∇hx(k) [xˆ(k | k − 1) − x(k)] + O([xˆ(k | k − 1) − x(k)]2) + v(k) − h (xˆ(k | k − 1))
             ≈ ∇hx(k) [xˆ(k | k − 1) − x(k)] + v(k).   (9.50)

A clear distinction is made between the "estimated" observation error z˜(k | k − 1) and the measured observation error, the innovation ν(k). Squaring the expression for the estimated observation error and taking expectations conditioned on the first (k − 1) measurements gives an equation for the innovation covariance:

S(k) = E[z˜(k | k − 1)z˜T(k | k − 1)]
     = E[(∇hx(k) [xˆ(k | k − 1) − x(k)] + v(k))(∇hx(k) [xˆ(k | k − 1) − x(k)] + v(k))T]
     = ∇hx(k)P(k | k − 1)∇hxT(k) + R(k).   (9.51)

This follows from the fact that the state prediction depends only on the noise terms v(j) and w(j), j ≤ (k − 1). Consequently, by assumption, it is statistically uncorrelated with the current observation noise v(k).

9.4.2.3 Update Equations

Equipped with the prediction and innovation equations, a linearised estimator can be proposed. It gives a state estimate vector xˆ(k | k) for the state vector x(k), which is described by the nonlinear state transition of Equation 9.38 and observed according to the nonlinear observation Equation 9.39. It is assumed that a prediction xˆ(k | k − 1) for the state at time k has been made on the basis of the first (k − 1) observations Zk−1, according to Equation 9.42, and that the current observation is z(k). The estimator is assumed to be in the form of a linear unbiased average of the prediction and innovation, so that

xˆ(k | k) = xˆ(k | k − 1) + W(k) [z(k) − h(xˆ(k | k − 1))].   (9.52)

The gain matrix W(k) is chosen so as to minimise the conditional mean squared estimation error. This error is equal to the trace of the estimate covariance P(k | k), which itself is simply the expected value of the state error x˜(k | k) squared. From Equation 9.52 and the approximate observation error given in Equation 9.50, the state error becomes

x˜(k | k) = xˆ(k | k) − x(k)
         = [xˆ(k | k − 1) − x(k)] + W(k) [h(x(k)) − h(xˆ(k | k − 1))] + W(k)v(k)
         ≈ [xˆ(k | k − 1) − x(k)] − W(k)∇hx(k) [xˆ(k | k − 1) − x(k)] + W(k)v(k)
         = [1 − W(k)∇hx(k)] x˜(k | k − 1) + W(k)v(k).   (9.53)

The estimate is unbiased when the prediction is unbiased and the noise sequences are zero mean and white. Taking expectations, conditioned on the observations made up to time k, of the outer product of the state error gives an expression for the state covariance in terms of the prediction covariance:

P(k | k) ≜ E[x˜(k | k)x˜T(k | k) | Zk]
         ≈ [1 − W(k)∇hx(k)] E[x˜(k | k − 1)x˜T(k | k − 1) | Zk−1] [1 − W(k)∇hx(k)]T + W(k)E[v(k)vT(k)]WT(k)
         = [1 − W(k)∇hx(k)] P(k | k − 1) [1 − W(k)∇hx(k)]T + W(k)R(k)WT(k).   (9.54)

The gain matrix W(k) is now chosen to minimise the mean squared estimation error L(k), defined as

L(k) = E[x˜T(k | k)x˜(k | k)] = trace[P(k | k)].   (9.55)

Minimisation of this error requires

∂L/∂W(k) = −2(1 − W(k)∇hx(k))P(k | k − 1)∇hxT(k) + 2W(k)R(k) = 0,   (9.56)

which, on simplification and rearrangement, provides an expression for the gain matrix:

W(k) = P(k | k − 1)∇hxT(k) [∇hx(k)P(k | k − 1)∇hxT(k) + R(k)]−1
     = P(k | k − 1)∇hxT(k)S−1(k).   (9.57)

With this gain matrix, Equation 9.52 becomes the best (minimum mean squared error) linear unbiased estimator for the state x(k) under the stated conditions. This completes the derivation of the extended Kalman filter.

9.4.3 SUMMARY OF THE EKF ALGORITHM

9.4.3.1 Prediction

xˆ(k | k − 1) = f (xˆ(k − 1 | k − 1), u(k − 1), (k − 1))   (9.58)

P(k | k − 1) = ∇fx(k)P(k − 1 | k − 1)∇fxT(k) + Q(k − 1).   (9.59)

9.4.3.2 Estimation

xˆ(k | k) = xˆ(k | k − 1) + W(k) [z(k) − h(xˆ(k | k − 1))]   (9.60)

P(k | k) = P(k | k − 1) − W(k)S(k)WT(k).   (9.61)

The gain and innovation covariance matrices are given by

W(k) = P(k | k − 1)∇hxT(k)S−1(k)   (9.62)

S(k) = ∇hx(k)P(k | k − 1)∇hxT(k) + R(k).   (9.63)

The Jacobians ∇fx(k) and ∇hx(k) are typically not constant, being functions of both the state and the time-step. It is clearly evident that the EKF is very similar to the Kalman filter algorithm, with the substitutions F → ∇fx(k) and H → ∇hx(k) made in the equations for the variance and gain propagation. It is prudent to note a number of problematic issues specific to the EKF. Unlike the linear filter, the covariances and gain matrix must be computed online as estimates and predictions become available, and they will not, in general, tend to constant values. This significantly increases the amount of computation that must be performed online by the algorithm. Also, if the nominal (predicted) trajectory is too far away from the true trajectory, the true covariance will be much larger than the estimated covariance, and the filter will become poorly matched; this can lead to severe filter instabilities. Finally, the EKF employs a linearised model that must be computed from approximate knowledge of the state. Unlike the linear algorithm, this means that the filter must be accurately initialised at the start of operation to ensure that the linearised models obtained are valid. All these issues must be taken into account in order to achieve acceptable performance from the EKF.
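Putting the summary together, the sketch below (NumPy) runs the EKF of Equations 9.58 through 9.63 on an invented problem: a stationary target observed in range and bearing from the origin. All models and numbers are illustrative assumptions.

```python
import numpy as np

def ekf_step(xhat, P, z, f, F_jac, h, H_jac, Q, R):
    """One EKF cycle, Equations 9.58-9.63: the linear F(k) and H(k) are
    replaced by the Jacobians of f and h evaluated at the latest estimate."""
    # Prediction (9.58)-(9.59)
    xpred = f(xhat)
    Fk = F_jac(xhat)
    Ppred = Fk @ P @ Fk.T + Q
    # Estimation (9.60)-(9.63)
    Hk = H_jac(xpred)
    S = Hk @ Ppred @ Hk.T + R
    W = Ppred @ Hk.T @ np.linalg.inv(S)
    xhat = xpred + W @ (z - h(xpred))
    P = Ppred - W @ S @ W.T
    return xhat, P

# Invented problem: stationary target at (3, 4), range-bearing sensor at origin
f = lambda x: x                                   # static state transition
F_jac = lambda x: np.eye(2)
h = lambda x: np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def H_jac(x):
    r2 = x[0]**2 + x[1]**2
    r = np.sqrt(r2)
    return np.array([[x[0] / r, x[1] / r],
                     [-x[1] / r2, x[0] / r2]])

Q = 1e-6 * np.eye(2)                              # small process noise
R = np.diag([0.01, 0.0001])                       # range and bearing noise
rng = np.random.default_rng(2)

x_true = np.array([3.0, 4.0])
xhat, P = np.array([2.0, 5.0]), np.eye(2)         # deliberately offset start
for k in range(30):
    z = h(x_true) + rng.multivariate_normal(np.zeros(2), R)
    xhat, P = ekf_step(xhat, P, z, f, F_jac, h, H_jac, Q, R)
```

Note that H_jac is evaluated at the prediction, not at the true state; when the starting estimate is poor, this is exactly where the mismatched-covariance problem described above can arise.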

9.5 THE EXTENDED INFORMATION FILTER (EIF)

9.5.1 NONLINEAR INFORMATION SPACE

In this section, the linear Information filter is extended to a linearised estimation algorithm for nonlinear systems. The general approach is to apply the principles of the EKF and those of the linear Information filter to construct a new estimation method for nonlinear systems. This generates a filter that predicts and estimates information about nonlinear state parameters given nonlinear observations and nonlinear system dynamics. All computation and tracking is in information space. The new filter is termed the extended Information filter (EIF) [23]. In addition to providing a solution to the nonlinear estimation problem, the EIF has all the advantages of the Information filter and resolves some of the problems associated with the EKF. In particular, information space allows easy initialisation of filter matrices and vectors. Given the importance of accurate initialisation when using linearised models, this is an extremely desirable characteristic. As discussed before, a major drawback of the EKF is its excessive computational burden. Carrying out the prediction and estimation processes in terms of information significantly reduces this load by simplifying the prediction and estimation equations. These equations are then easily partitioned and decentralised. It is proposed that estimation for nonlinear systems, in particular multisensor systems, is best carried out using information variables rather than state variables.

9.5.2 EIF DERIVATION

The derivation of the extended Information filter uses principles from the derivations of both the Information filter and the EKF. The EIF cannot be extrapolated from these two filters in an obvious manner. This is because, in the nonlinear case, the function operator h cannot be separated from x(k) in the nonlinear observation equation

z(k) = h (x(k), k) + v(k),   (9.64)

and yet the derivation of the Information filter depends on this separation, which is possible only in the linear observation equation. The derivation of the EIF therefore proceeds by considering equations from the derivation of the EKF algorithm. Post-multiplying the term {1 − W(k)∇hx(k)} from Equation 9.53 by the term {P(k | k − 1)P−1(k | k − 1)} (i.e., post-multiplication by the identity matrix 1) leads to

1 − W(k)∇hx(k) = [P(k | k − 1) − W(k)∇hx(k)P(k | k − 1)] P−1(k | k − 1)
               = [P(k | k − 1) − W(k){S(k)S−1(k)}∇hx(k)P(k | k − 1)] P−1(k | k − 1)
               = [P(k | k − 1) − W(k)S(k)WT(k)] P−1(k | k − 1)
⇔ 1 − W(k)∇hx(k) = P(k | k)P−1(k | k − 1).   (9.65)

Substituting the expression for the EKF innovation covariance matrix from Equation 9.63 into the EKF gain matrix given in Equation 9.62 produces

W(k) = P(k | k − 1)∇hxT(k)[∇hx(k)P(k | k − 1)∇hxT(k) + R(k)]−1
⇔ W(k)[∇hx(k)P(k | k − 1)∇hxT(k) + R(k)] = P(k | k − 1)∇hxT(k)
⇔ W(k)R(k) = [1 − W(k)∇hx(k)]P(k | k − 1)∇hxT(k).

Now, substituting Equation 9.65 into this expression gives

W(k) = P(k | k)∇hxT(k)R−1(k).   (9.66)

To use Equations 9.65 and 9.66 to derive the EIF, the EKF state estimation Equation 9.60 must be expressed in the same form as that of the conventional Kalman filter. This is done by adding and subtracting the term W(k)∇hx(k)xˆ(k | k − 1) on the right-hand side of Equation 9.60 (addition of zero):

xˆ(k | k) = xˆ(k | k − 1) + W(k) [z(k) − h(xˆ(k | k − 1))]
         = xˆ(k | k − 1) − W(k)∇hx(k)xˆ(k | k − 1) + W(k) [z(k) − h(xˆ(k | k − 1))] + W(k)∇hx(k)xˆ(k | k − 1)
         = [1 − W(k)∇hx(k)]xˆ(k | k − 1) + W(k) [ν(k) + ∇hx(k)xˆ(k | k − 1)]
         = [1 − W(k)∇hx(k)]xˆ(k | k − 1) + W(k)z′(k),   (9.67)

where z′(k) is the "equivalent" linearised observation vector,

z′(k) = ν(k) + ∇hx(k)xˆ(k | k − 1),   (9.68)

and the innovation vector is given by ν(k) = z(k) − h(xˆ(k | k − 1)). Equation 9.67 is now in a form similar to that of the linear Kalman filter. The derivation of the EIF then proceeds by substituting Equations 9.65 and 9.66 into Equation 9.67:

xˆ(k | k) = [1 − W(k)∇hx(k)]xˆ(k | k − 1) + W(k)z′(k)
         = [P(k | k)P−1(k | k − 1)]xˆ(k | k − 1) + [P(k | k)∇hxT(k)R−1(k)]z′(k).

Pre-multiplying both sides by P−1(k | k) gives

P−1(k | k)xˆ(k | k) = P−1(k | k − 1)xˆ(k | k − 1) + ∇hxT(k)R−1(k)z′(k)
⇔ yˆ(k | k) = yˆ(k | k − 1) + i(k).   (9.69)


This is the extended information estimation equation, where the information contribution from the nonlinear observation z(k) is given by

i(k) = ∇hxT(k)R−1(k)z′(k),

where z′(k) is the "equivalent" linearised observation given by Equation 9.68. The vector z′(k) expresses the system observations as if the nonlinear system were replaced by an equivalent linearised system; it depends on the innovation, the state prediction, and the Jacobian evaluated at this prediction. To compute the information matrix update, Equations 9.65 and 9.66 are substituted into the EKF variance update equation:

P(k | k) = [1 − W(k)∇hx(k)]P(k | k − 1)[1 − W(k)∇hx(k)]T + W(k)R(k)WT(k)
         = [P(k | k)P−1(k | k − 1)]P(k | k − 1)[P(k | k)P−1(k | k − 1)]T
         + [P(k | k)∇hxT(k)R−1(k)]R(k)[P(k | k)∇hxT(k)R−1(k)]T.   (9.70)

Pre- and post-multiplying both sides by P−1(k | k) gives

P−1(k | k) = P−1(k | k − 1) + ∇hxT(k)R−1(k)∇hx(k)   (9.71)
⇔ Y(k | k) = Y(k | k − 1) + I(k).   (9.72)

This is the linearised information matrix update equation, where the associated matrix contribution is given by

I(k) = ∇hxᵀ(k)R⁻¹(k)∇hx(k).        (9.73)

To obtain the corresponding prediction equations, consider the EKF state and variance prediction equations. Pre-multiplying the state prediction Equation 9.58 by P⁻¹(k | k − 1) and inverting the variance prediction Equation 9.59 gives the information vector prediction as

x̂(k | k − 1) = f(x̂(k − 1 | k − 1), u(k − 1), (k − 1))
⇔ P⁻¹(k | k − 1)x̂(k | k − 1) = P⁻¹(k | k − 1) f(x̂(k − 1 | k − 1), u(k − 1), (k − 1))
⇔ ŷ(k | k − 1) = Y(k | k − 1) f(x̂(k − 1 | k − 1), u(k − 1), (k − 1)).

The linearised information matrix prediction is obtained as follows:

P(k | k − 1) = ∇fx(k)P(k − 1 | k − 1)∇fxᵀ(k) + Q(k)
⇔ Y(k | k − 1) = [∇fx(k)Y⁻¹(k − 1 | k − 1)∇fxᵀ(k) + Q(k)]⁻¹.

This completes the derivation of the EIF; the entire algorithm can be summarised as follows:

9.5.3 SUMMARY OF THE EIF ALGORITHM

9.5.3.1 Prediction

ŷ(k | k − 1) = Y(k | k − 1) f(k, x̂(k − 1 | k − 1), u(k − 1), (k − 1))        (9.74)
Y(k | k − 1) = [∇fx(k)Y⁻¹(k − 1 | k − 1)∇fxᵀ(k) + Q(k)]⁻¹.        (9.75)

675

Advanced Control Systems

9.5.3.2 Estimation

ŷ(k | k) = ŷ(k | k − 1) + i(k)        (9.76)
Y(k | k) = Y(k | k − 1) + I(k).        (9.77)

The information state contribution and its associated information matrix are given, respectively, as

I(k) = ∇hxᵀ(k)R⁻¹(k)∇hx(k)        (9.78)
i(k) = ∇hxᵀ(k)R⁻¹(k)[ν(k) + ∇hx(k)x̂(k | k − 1)],        (9.79)

where ν(k) is the innovation, given by

ν(k) = z(k) − h(x̂(k | k − 1)).        (9.80)
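As a concrete illustration, one complete EIF recursion (Equations 9.74 to 9.80) can be sketched in NumPy. The scalar models f and h and their Jacobians below are hypothetical stand-ins, not taken from the text; the point of the sketch is that the estimate recovered from information space agrees exactly with a direct EKF update, since the two filters are algebraically equivalent.

```python
import numpy as np

def eif_cycle(x_prev, Y_prev, u, z, f, F_jac, h, H_jac, Q, R):
    """One prediction/estimation cycle of the extended Information filter.
    The state is carried as the information pair (y, Y) = (P^-1 x, P^-1)."""
    # Prediction (Equations 9.74 and 9.75)
    x_pred = f(x_prev, u)
    Fx = F_jac(x_prev, u)
    Y_pred = np.linalg.inv(Fx @ np.linalg.inv(Y_prev) @ Fx.T + Q)
    y_pred = Y_pred @ x_pred
    # Information contributions (Equations 9.78 to 9.80)
    Hx = H_jac(x_pred)
    nu = z - h(x_pred)                       # innovation
    Rinv = np.linalg.inv(R)
    I_k = Hx.T @ Rinv @ Hx
    i_k = Hx.T @ Rinv @ (nu + Hx @ x_pred)
    # Estimation (Equations 9.76 and 9.77)
    Y_est = Y_pred + I_k
    y_est = y_pred + i_k
    x_est = np.linalg.solve(Y_est, y_est)    # recover the state estimate
    return x_est, Y_est

# hypothetical scalar models: linear dynamics, quadratic observation
f = lambda x, u: 0.9 * x + u
F_jac = lambda x, u: np.array([[0.9]])
h = lambda x: x ** 2
H_jac = lambda x: 2.0 * x
```

Running one cycle from x̂ = 1, P = 1 and comparing against the EKF form x̂(k | k) = x̂(k | k − 1) + W(k)ν(k) reproduces the same estimate and covariance, which is the equivalence exploited throughout this section.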

9.5.4 FILTER CHARACTERISTICS

This filter has several attractive practical features. In particular:

• The filter solves, in information space, the estimation problem for systems with both nonlinear dynamics and nonlinear observations. In addition to having all the attributes of the Information filter, it is a more practical and general filter.
• The information estimation Equations 9.76 and 9.77 are computationally simpler than the EKF estimation equations. This makes the partitioning of these equations for decentralised systems easy.
• Although the information prediction Equations 9.74 and 9.75 are of the same apparent complexity as the EKF ones, they are easier to distribute and fuse because of the orthonormality properties of information space parameters.
• Since the EIF is expressed in terms of information matrices and vectors, it is more easily initialised than the EKF. Accurate initialisation is important where linearised models are employed.

Some of the drawbacks inherent in the EKF still affect the EIF. These include the nontrivial nature of Jacobian matrix derivation (and computation) and linearisation instability.

9.5.5 DECENTRALISED ESTIMATION

The estimation techniques developed and discussed above form the basis of decentralised estimation. The notation and system description have been introduced and explained. Estimation theory and its use were discussed; in particular, the Kalman filter algorithm was outlined. The Information filter was then derived as an algebraic equivalent of the traditional Kalman filter. Its attributes were outlined and discussed. The extended Kalman filter was then presented as a state-space solution to the estimation problem for a system characterised by both nonlinear system evolution and nonlinear measurements. The original and novel contribution of this chapter is the extended Information filter (EIF). This algorithm provides an estimation technique in extended information space for nonlinear systems. It was derived from first principles, explained, and appraised. It has all the attributes of the linear Information filter and fewer of the problems associated with the EKF. The simulated examples of estimation in linear and nonlinear systems validated the Information filter and EIF algorithms with respect to those of the Kalman filter and EKF. For the EIF and EKF, examples involving nonlinearities in both system evolution and observations were considered. The key benefit of information estimation theory is that it makes fully decentralised estimation for multisensor systems attainable [23].

9.6 OPTIMAL STOCHASTIC CONTROL

This section extends the estimation algorithms to the problem of sensor-based control by introducing stochastic control ideas. In particular, the LQG control problem and its solution are outlined. For systems involving nonlinearities, the nonlinear stochastic control problem is discussed. The practical design of stochastic controllers for problems satisfying the LQG assumptions (a Linear system model, a Quadratic cost criterion for optimality, and Gaussian white noise inputs) is briefly discussed. Problems involving nonlinear models are then considered.

9.6.1 STOCHASTIC CONTROL PROBLEM

Most control problems of interest can be described by the general system configuration in Figure 9.2. There is some dynamic system of interest whose behaviour is to be affected by an applied control input vector u(k) in such a way that the controlled state vector x(k) exhibits desirable characteristics. These characteristics are prescribed, in part, as the controlled state vector x(k) matching a reference state vector xr(k) as closely and quickly as possible. The simplest control problem is in the LQG form, and hence it is important to understand the meaning of the LQG assumptions.

• Linear system model: Linearity is assumed: a linear system obeys the principle of superposition, and its response is the convolution of the input with the system impulse response.
• Quadratic cost function: A quadratic cost criterion for optimality is assumed, such that the control is optimal in the sense of minimising the expected value of a quadratic performance index associated with the control problem.
• Gaussian noise model: White Gaussian noise process corruption is assumed.

Figure 9.2 Stochastic control system configuration.

9.6.1.1 Problem Statement

Let the system of interest be described by the n-dimensional stochastic discrete-time difference equation

x(k) = F(k)x(k − 1) + B(k)u(k − 1) + D(k)w(k),        (9.81)

where u(k) is the r-dimensional control input to be applied and w(k) is the zero mean white Gaussian discrete-time noise. The objective is to determine the control vector u(k) that minimises the quadratic cost function

J(N) = E{ ∑_{k=1}^{N} [ erᵀ(k)X(k)er(k) + (u(k) − ur(k))ᵀU(k)(u(k) − ur(k)) ] },

where er(k) = [x(k) − xr(k)], and X(k) is an n-by-n real and positive semi-definite cost weighting matrix, reflecting the relative importance of maintaining individual state component deviations at small values. U(k) is an r-by-r real, symmetric, and positive definite cost weighting matrix reflecting the relative importance of maintaining individual control component deviations at small values [21]. There are several reasons for the use of a quadratic cost function of states and control:

• Quadratics are a good description of many control objectives, such as minimising mean squared error or energy.
• Inherently, such a function enhances the adequacy of the linear perturbation model.
• This combination of modelling assumptions yields a tractable problem whose solution is in the form of a readily synthesised, efficiently implemented, feedback control law.
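For a single realisation of the noise, the bracketed quantity inside the expectation in J(N) is straightforward to evaluate. The short NumPy sketch below does exactly that; the two-step scalar trajectory is a hypothetical illustration, not data from the text.

```python
import numpy as np

def quadratic_cost(xs, us, xr, ur, X, U):
    """Sample-path value of the LQG cost: the sum over k of
    er^T X er + (u - ur)^T U (u - ur), with er = x - xr.
    The expectation in J(N) averages this over noise realisations."""
    J = 0.0
    for x, u in zip(xs, us):
        e = x - xr
        du = u - ur
        J += float(e.T @ X @ e + du.T @ U @ du)
    return J

# hypothetical two-step scalar trajectory with zero references:
# state costs 1 and 4, control costs 1 and 0, so J = 6
xs = [np.array([[1.0]]), np.array([[2.0]])]
us = [np.array([[1.0]]), np.array([[0.0]])]
zero = np.array([[0.0]])
J = quadratic_cost(xs, us, zero, zero, np.eye(1), np.eye(1))
```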

9.6.2 OPTIMAL STOCHASTIC SOLUTION

In this subsection, the solution to the LQG control problem outlined above is presented. Deterministic methods cannot be used to solve for an optimal control vector u(k) from the function J(N) because of the stochastic nature of the problem [21]. The dynamic driving noise term w(k) prevents perfect, ahead-of-time knowledge of where the system will be at time (k + 1). There is no single optimal history of states and controls, but an entire family of trajectories. Two closely related techniques are employed in determining an optimal stochastic control solution [21].

• Optimality principle: An optimal policy has the property that, for any initial states and decision (control law), all remaining decisions must constitute an optimal policy with regard to the state that results from the first decision.
• Stochastic dynamic programming: This is a technique of stepping backward in time to obtain the optimal control. It is dependent on the Markov nature of the discrete-time process.

Two further structural properties are essential for the solution to be realised. These are the separation and certainty equivalence principles. A control problem is said to be separable if its optimal control depends only on an estimate x̂(k | k) of the state x(k) and not at all on the accuracy of the estimate. It is also said to be certainty equivalent if, being separable, the control is exactly the same as it would be in a related deterministic problem. The two principles imply that the problem of seeking a linear control law for a linear dynamical system with Gaussian measurement noise subject to a quadratic performance index can be cast in terms of two separate problems:

• Optimal deterministic control
• Optimal stochastic estimation

These two problems can be solved separately to yield an optimal solution to the combined problem. The optimal stochastic estimation problem has been solved in previous sections.
The basis of these algorithms is the Kalman filter and its algebraic equivalent, the Information filter. Although only


the information space algorithms are extended to stochastic control algorithms in this section, the state-space estimation algorithms can be similarly extended. The cost-minimising control function is given by

u(k) = −G(k)[x̂(k | k) − xr(k)],        (9.82)

where G(k) is the associated optimal deterministic control gain. Its value is generated from the solution of the Backward Riccati recursion [21],

G(k) = [U(k) + Bᵀ(k)K(k)B(k)]⁻¹[Bᵀ(k)K(k)F(k)],        (9.83)

where K(k) is the n-by-n symmetric matrix satisfying the Backward Riccati difference equation [21],

K(k) = X(k) + Fᵀ(k)K(k + 1)F(k) − Fᵀ(k)K(k + 1)B(k)G(k)
     = X(k) + Fᵀ(k)K(k + 1)[F(k) − B(k)G(k)].        (9.84)

This equation is solved backward from the terminal condition K(N + 1) = Xf. The untracked state estimate x̂(k | k) is reconstructed from the tracked information estimate and the information matrix,

x̂(k | k) = Y⁻¹(k | k)ŷ(k | k).        (9.85)

9.6.2.1 Solution Statement

The optimal stochastic control for a problem described by linear system models driven by white Gaussian noise, subject to a quadratic cost criterion, consists of an optimal linear Information filter cascaded with the optimal feedback gain matrix of the corresponding deterministic optimal control problem. This means the optimal stochastic control function is equivalent to the associated optimal deterministic control function with the true state replaced by the conditional mean of the state given the measurements. Illustration of this stochastic control solution is shown in Figure 9.3. The importance of this result is the synthesis capability it yields. Under the LQG assumptions, the design of the optimal stochastic controller can be completely separated into the design of the appropriate Information filter and the design of an optimal deterministic controller associated with the

Figure 9.3 Optimal stochastic control.


original problem. The feedback control gain matrix is independent of all uncertainty, so a controller can be designed assuming that x(k) is known perfectly all the time. Similarly, the filter is independent of the matrices that define the controller performance measures. The estimation algorithm can thus be developed ignoring the fact that a control problem is under consideration.

9.6.2.2 Algorithm Summary

Estimation is carried out according to the Information filter Equations 9.36 and 9.37. The information estimate ŷ(k | k) is used to generate the state estimate and then the control signal:

x̂(k | k) = Y⁻¹(k | k)ŷ(k | k)        (9.86)
u(k) = −G(k)[x̂(k | k) − x̂r(k | k)].        (9.87)

The control law is generated as follows:

G(k) = [U(k) + Bᵀ(k)K(k)B(k)]⁻¹[Bᵀ(k)K(k)F(k)]        (9.88)
K(k) = X(k) + Fᵀ(k)K(k + 1)[F(k) − B(k)G(k)].        (9.89)

This is the optimal stochastic LQG control solution for a single-sensor, single-actuator system. Before extending it to multisensor and multiactuator systems, the case of stochastic control problems with nonlinearities is considered.
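The backward Riccati recursion and the resulting gain schedule (Equations 9.88 and 9.89) are easy to sketch numerically. The discrete double-integrator model and cost weights below are hypothetical, chosen only to exercise the recursion; in the stochastic setting the state x in the control law would be replaced by the reconstructed estimate x̂(k | k) of Equation 9.86.

```python
import numpy as np

def backward_riccati(F, B, X, U, Xf, N):
    """Solve the Backward Riccati difference equation from the terminal
    condition K(N+1) = Xf and return the gains G(0..N-1)
    (Equations 9.88 and 9.89)."""
    K = Xf.copy()
    gains = [None] * N
    for k in range(N - 1, -1, -1):
        G = np.linalg.solve(U + B.T @ K @ B, B.T @ K @ F)   # Eq. 9.88
        K = X + F.T @ K @ (F - B @ G)                       # Eq. 9.89
        gains[k] = G
    return gains

# hypothetical discrete double integrator, sampling time 0.1 s
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
X, U, Xf = np.eye(2), np.array([[1.0]]), np.eye(2)
gains = backward_riccati(F, B, X, U, Xf, N=50)

# regulate toward xr = 0 with u(k) = -G(k) x(k) (Eq. 9.87, perfect state)
x = np.array([[1.0], [0.0]])
for k in range(50):
    x = F @ x + B @ (-gains[k] @ x)
```

Because the gains are computed backward from the terminal condition while the state is propagated forward, the entire gain schedule can be precomputed offline, which is the synthesis capability noted above.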

9.6.3 NONLINEAR STOCHASTIC CONTROL

The separation and certainty equivalence principles do not hold for nonlinear systems. Several methods have been employed in the literature to attempt to solve this problem [21]. These include linear perturbation control (LQG direct synthesis), the closed-loop controller (“dual control” approximation), and stochastic adaptive control.

9.6.3.1 Solution Statement

In this discussion, assumed certainty equivalence design is invoked. This is a synthesis technique that separates the stochastic controller into the cascade of an estimator and a deterministic optimal control function, even when the optimal stochastic controller does not have the certainty equivalence property. It must be emphasised that, by definition, certainty equivalence assumes that the separation principle holds. Thus, the first objective is to solve the associated deterministic optimal control problem, ignoring the uncertainties and assuming perfect access to the entire state. Deterministic dynamic programming is used to generate the control law as a feedback law. The second objective is to solve the nonlinear estimation problem. This has already been done by deriving the EKF and EIF. To utilise the advantages of information space, the EIF is used. Finally, the assumed certainty equivalence control law is computed by substituting the linearised information estimate from the EIF into the deterministic control law. This is the assumed certainty equivalence nonlinear stochastic control algorithm, illustrated in Figure 9.4. One important special case of this design methodology is the cascading of an EIF equivalent of a constant-gain EKF with a constant-gain linear quadratic state feedback controller. The constant-gain EKF has the basic structure of an EKF, except that the constant gain is precomputed based on linearisation about the nominal trajectory. This filter is robust against divergence. However, there is no fundamental reason to limit attention to constant-gain designs other than the computational burden of the resulting algorithms. Equipped with both single-sensor LQG and nonlinear stochastic control algorithms, the next step is to extend them to multisensor and multiactuator control systems.


Figure 9.4 Nonlinear stochastic control.

9.6.4 CENTRALISED CONTROL

Just as in the multisensor data fusion case, there are three broad categories of multiactuator control architectures: centralised, hierarchical, and decentralised. A centralised control system consists of multiple sensors forming a decentralised observer. The control realisation remains centrally placed, whereby information from the sensors is globally fused to generate a control law. Figure 9.5 shows such a system. Only observations are taken locally and sent to a centre, where estimation and control occur centrally. The information prediction equations are the same as those of a single-sensor system, Equations 9.34 and 9.35.

9.6.4.1 Control Generation

Global information estimates are centrally computed from global information predictions and observations generated by the different sensors. The state estimate is reconstructed from the tracked central information vector and matrix. The control vector is then computed from the state error and globally generated control law. The entire algorithm is illustrated in Figure 9.5. The main feature of this arrangement is the ability to employ several sensors while retaining a single central actuator. In addition to using multiple sensors, it would be even more beneficial if multiple actuators could be used, such that control achieved locally is the same as that achieved with a centralised controller. This is the motivation behind decentralised multisensor-based control. The approach adopted is to initially derive a fully connected decentralised control system, and then proceed to eliminate the full connection constraint to produce a scalable decentralised control system.
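The additive structure of information space makes this central fusion step trivial to express. The snippet below is a hypothetical sketch of the global update: each sensor j supplies its contribution pair (Iⱼ, iⱼ), and the centre simply sums them before recovering the state estimate; the scalar sensors used in the demonstration are illustrative assumptions.

```python
import numpy as np

def central_fusion(Y_pred, y_pred, contributions):
    """Fuse local information contributions (I_j, i_j) from n sensors:
    in information space, the global estimation update is a plain sum."""
    Y = Y_pred + sum(I for I, _ in contributions)
    y = y_pred + sum(i for _, i in contributions)
    x = np.linalg.solve(Y, y)       # reconstruct the central state estimate
    return x, Y, y

# two hypothetical scalar sensors observing x directly (H = 1), so that
# I_j = 1/R_j and i_j = z_j/R_j
Y_pred, y_pred = np.array([[1.0]]), np.array([[0.5]])   # prediction x̂ = 0.5
R1, R2 = 0.5, 2.0
z1, z2 = np.array([[1.0]]), np.array([[0.2]])
contribs = [(np.array([[1.0 / R1]]), z1 / R1),
            (np.array([[1.0 / R2]]), z2 / R2)]
x, Y, y = central_fusion(Y_pred, y_pred, contribs)
```

The fused estimate is the variance-weighted average of the prediction and the two observations, which is exactly what a centralised Kalman filter processing both measurements would produce.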

Figure 9.5 Centralised control: a central processor fuses the information contributions (I₁, i₁), ..., (Iₙ, iₙ) computed from the sensor observations z₁(k), ..., zₙ(k) into I(k) and i(k), updates ŷ(k | k) and x̂(k | k), and generates the control u(k).

9.7 NONLINEAR CONTROL SYSTEMS

In most of the material covered in the previous eight chapters, it has been assumed that the dynamics of systems to be controlled can be described completely by a set of linear differential equations and that the principle of superposition holds. Such systems are known as linear dynamic systems. However, in most applications, these assumptions are not valid, and the systems are termed nonlinear dynamic systems. The nonlinearity of dynamic systems can be inherent or deliberately added to improve the control action. The material in the next sections addresses the whole concept of nonlinear systems, their analysis, and control design.

9.7.1 NONLINEAR DYNAMIC SYSTEMS

The modelling of nonlinear dynamic systems was introduced and explored in Chapter 2, through the process of linearisation. With this approach, a linearised State-Variable Matrix model is obtained, and thereafter classical linear control techniques are applicable. As explained in Chapter 2, the main characteristic of nonlinear dynamic systems is their failure to follow the principle of superposition. This means the combined output of two or more inputs is not necessarily equal to the resultant of the outputs due to the individual inputs. There are two reasons for this behaviour to manifest. The foremost reason is that the dynamics of such a system are described by a set of nonlinear differential equations, and the other reason is the energy losses, response delays, and size limitations in the system itself. Nonlinear problems usually arise because the structure or the fixed elements of the system are inherently nonlinear. Another source of nonlinearities is nonlinear compensation introduced into the system for purposes of improving system behaviour. Figure 9.6 shows some examples of nonlinear elements, whereas Figure 9.7 illustrates a nonlinear element in a control system. Nonlinear dynamic behaviour that follows nonlinear differential equations can be easy to handle if it can be modelled completely. However, the nonlinearities due to energy losses, delays, and size limitations are normally difficult to model precisely, and hence, the control is quite difficult in some situations. In general, any nonlinear differential equation which describes nonlinear dynamics can be decomposed into a linear differential equation and a hard nonlinear element due to either energy loss, size limitation, or response delays. In this section, different hard nonlinearities are discussed. Mathematical models for such nonlinearities will be given.
Like linear systems, the analysis of nonlinear systems is mainly concerned with the study of the dynamics of such systems, thereby identifying stability zones and other parameters of interest


Figure 9.6 Examples of nonlinear elements.

Figure 9.7 A nonlinear element in a control system.

for control purposes, depending on the problem. In this section, three methods for the analysis of nonlinear systems will be discussed. These are phase plane methods, describing function analysis methods, and the Lyapunov stability method.

9.8 ANALYSIS OF NONLINEAR CONTROL SYSTEMS

There are a number of effective techniques used for analysing nonlinear feedback systems. They include the following:

• Describing Function Method
• Phase Plane Method
• Lyapunov Stability Analysis
• Singular Perturbation Method
• The Popov Criterion and the Circle Criterion for Absolute Stability
• Centre Manifold Theorem
• Small-Gain Theorem
• Passivity Analysis


Control design techniques for nonlinear systems can be subdivided into three broad categories:

• Gain scheduling: These are techniques which treat the nonlinear system as a linear one in a limited range of operation and use linear control system design techniques for each region. The linearisation of nonlinear dynamic systems was presented in Chapter 2. The strategy is to obtain a family of linear controllers, each of which provides satisfactory control for a different operating point of the system.
• Feedback linearisation: These methods introduce auxiliary nonlinear feedback such that the system can be treated as linear for purposes of control design. The idea is to algebraically transform the nonlinear system dynamics into a fully linearised system so that linear feedback control techniques can be applied. This is achieved by determining a feedback law and a change of coordinates that transform a nonlinear system into a linear and controllable one.
• Lyapunov-based methods: These techniques are anchored in the concept of asymptotic stability, which was introduced in Chapter 4. They include the following: Lyapunov redesign, Control-Lyapunov functions, nonlinear damping, backstepping, and sliding mode control. These methods will be discussed in detail in later sections.

9.8.1 DESCRIBING FUNCTION ANALYSIS

When studying the frequency response of linear dynamic systems in Chapter 6, the input was treated as a sinusoid. In that case, the output was also a sinusoid with the same frequency as the input signal, but differing in amplitude and phase. For nonlinear systems subjected to a sinusoidal input, the output will be periodic but not sinusoidal. Fourier expansion of such an output gives an infinite number of sinusoids with frequencies that are harmonics of the input frequency. Taking advantage of the complex representation of the signals, the describing function is defined as the ratio of the fundamental component of the nonlinear output to the input signal. The describing-functions technique is an extension of frequency-response methods to nonlinear systems. Classically, it is applicable only to nonlinear elements whose outputs in response to sinusoidal inputs with period T (i.e., of the form A sin 2πt/T) are also periodic with period T. The output can then be written as a Fourier series:

y(t) = ∑_{n=1}^{∞} Bₙ sin(nωt + φₙ).

The describing function is the ratio of the complex Fourier coefficient B₁e^{jφ₁} to the input amplitude A, which is essentially a frequency response function of an approximation of the nonlinear element. In general, B₁ and φ₁ are functions of both the input frequency ω = 2π/T and the input amplitude A. Therefore, the following can be written: B₁ = B₁(A, ω) and φ₁ = φ₁(A, ω). To apply the method, the describing functions must first replace the nonlinearities in the block diagram. Then the frequency domain techniques can be used to analyse the system, with some modification to account for the dependence of B₁ and φ₁ on A.
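This computation is easy to carry out numerically. The sketch below estimates the describing function of a static nonlinearity by extracting the fundamental Fourier coefficients of the output over one period; the ideal-relay example and the sample count are illustrative assumptions, not from the text. For an ideal relay with output levels ±M, the classical result is N(A) = 4M/(πA) with zero phase.

```python
import numpy as np

def describing_function(nonlinearity, A, n=4096):
    """Estimate N(A) for a static nonlinearity driven by A*sin(wt):
    the complex ratio of the fundamental output component to the input
    amplitude, from the first Fourier coefficients over one period."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = nonlinearity(A * np.sin(t))
    b1 = (2.0 / n) * np.sum(y * np.sin(t))   # in-phase fundamental
    a1 = (2.0 / n) * np.sum(y * np.cos(t))   # quadrature fundamental
    return (b1 + 1j * a1) / A

relay = lambda e: np.sign(e)            # ideal relay with output levels ±1
N = describing_function(relay, A=2.0)   # theory: 4/(pi*A), zero phase
```

For a static (memoryless) nonlinearity, the quadrature coefficient a₁ vanishes and N(A) is real, which is why the relay's describing function has zero phase.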

9.8.2 PHASE PLANE METHODS

For any second-order system, there are two states x1 and x2 . A plane formed by these states is called the phase plane. Given some initial conditions of the system dynamics, one can plot the variation of the two states in this phase plane. This plot is known as the phase motion trajectory of the system for the given initial condition. Different initial conditions result in different motion trajectories forming system phase portraits. Phase plane analysis is a graphical method that uses the information from the phase portrait of the system to analyse the dynamics of the system in question. It enables one to visualise the dynamics of the system without having to solve nonlinear differential equations analytically. One of the major advantages is that it can handle all types of


nonlinearities, i.e., smooth, strong, and hard nonlinearities. On the other hand, however, it has the limitation that it cannot handle higher-order systems because of the computational complexity, as well as the complexity associated with the graphical presentation of higher-order systems. The only way phase plane methods can be applied to higher-order systems is by approximating these systems with second-order equivalents. This section discusses the fundamentals of phase plane analysis, presenting the theoretical basics of the method that eventually lays a strong groundwork for understanding the stability of systems. The methods for plotting and interpreting system phase plots are presented.
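A phase motion trajectory is generated by integrating the state equations from a chosen initial condition and recording (x₁, x₂) pairs. The sketch below uses classical RK4 integration and a damped-pendulum model; both are illustrative assumptions rather than examples from the text.

```python
import numpy as np

def phase_trajectory(f, x0, dt=0.01, steps=2000):
    """Integrate x' = f(x) for a second-order system with classical RK4
    and return the phase-plane trajectory as an array of (x1, x2) points."""
    xs = np.empty((steps + 1, 2))
    x = np.asarray(x0, dtype=float)
    xs[0] = x
    for k in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[k + 1] = x
    return xs

# damped pendulum: x1' = x2, x2' = -sin(x1) - 0.5*x2 (nonlinear in x1)
pendulum = lambda x: np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])
traj = phase_trajectory(pendulum, [1.0, 0.0])
# the trajectory spirals in the phase plane toward the equilibrium (0, 0)
```

Plotting column 1 of `traj` against column 0 for a grid of initial conditions produces the phase portrait described above; the spiral toward the origin identifies it as a stable focus.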

9.9 DESIGN OF NONLINEAR CONTROL SYSTEMS

In this section, techniques for designing control systems for dynamic systems with nonlinearities are outlined and assessed. A detailed practical example of a nonlinear control system, a Solar PV Power System, is presented.

9.9.1 LINEARISATION METHODS

These techniques are an extension of the linear feedback control methods. As already explained, there are two key approaches: gain scheduling and feedback linearisation. Gain scheduling involves treating the nonlinear system as a linear one in a limited range of operation and then using linear control system design techniques for each region. The idea is to linearise the plant's dynamics about equilibrium points so that linear feedback control techniques can be applied. On the other hand, feedback linearisation introduces auxiliary nonlinear feedback such that the system can be treated as linear for purposes of control design. The idea is to algebraically transform the nonlinear system dynamics into a fully linearised system so that linear feedback control methods can be applied. These two linearisation approaches are quite popular in the design and analysis of nonlinear control systems.

9.9.2 INTRODUCTION TO ADAPTIVE CONTROL

In some control systems, certain parameters are either not constant or vary in an unknown manner. One way of minimising the effects of such contingencies is by designing for minimum sensitivity. If, however, parameter variations are large or very rapid, it may be desirable to design for the capability of continuously measuring them and changing the compensation so that the system performance criteria are always satisfied. This is called adaptive control design. Put differently, an adaptive control system automatically compensates for variations in system dynamics by (on-the-fly) adjustment of the controller so that optimum system performance is preserved. Figure 9.8 shows a general block diagram of an adaptive control system. It consists of a

Figure 9.8 Adaptive control system design.


normal feedback loop with the plant and the controller, and a second loop with a controller adjustment mechanism. Some nonlinear systems are difficult to model for several technical reasons. Modern approaches to such systems, and even to those that can only be modelled with uncertainty, use adaptive control strategies. Under such strategies, the controller is made to tune itself to the nonlinear and even time-varying plant dynamics, producing the required control action. The most popular methods under this scheme are the Self-Tuning Regulator (STR) and the Model Reference Adaptive Controller (MRAC). This section discusses the basics of these adaptive control strategies. The Model Reference Adaptive Controller requires the presence of a model that represents the required dynamics. The controller works by adjusting its parameters according to the difference between the actual system dynamics and those of the reference model.

9.9.3 INTRODUCTION TO ROBUST CONTROL

A control system designed using the methods and concepts of the earlier chapters assumes knowledge of the models of the plant and controller, with constant parameters. The plant model will always be an inaccurate representation of the actual physical system because of the following issues:

• Parameter changes
• Unmodelled dynamics
• Unmodelled time delays
• Changes in the equilibrium point (operating point)
• Sensor noise
• Unpredicted disturbance inputs

Figure 9.9 shows a control system with three types of uncertainty. To start with, there is uncertainty in the model of the plant. Next, there are disturbances that occur in the plant system, and finally, there is sensor noise. Each of these uncertainties can have an additive or multiplicative component, thus complicating the control design process. Robust control is an approach to handling model uncertainties associated with time-varying systems, both linear and nonlinear. The goal of robust systems design is to retain assurance of system performance in spite of model inaccuracies and changes. A system is robust when it has acceptable changes in performance due to model changes or inaccuracies. A robust control system exhibits the desired performance despite the presence of significant plant (process) uncertainty. Put differently,

Figure 9.9 A control system with three types of uncertainty.


robust control is an approach to controller design that explicitly deals with uncertainty, thus ensuring desired performance. It is instructive to note the difference between robust and adaptive control systems. Robust control design involves fixed compensators with satisfactory performance over a specified range of plant parameter variations, while adaptive control design extracts knowledge of the plant parameters on the fly and automatically redesigns the control law. Hence, adaptive control is different from robust control in that it does not need a priori information about the bounds on the uncertainties. Robust control guarantees that if the parameter variations are within given ranges, the control law need not be modified, while adaptive control involves changing the control law to maintain optimum performance in the presence of uncertain or time-varying parameters. The robust control system structure that incorporates potential system uncertainties is shown in Figure 9.10. This model includes the sensor noise N(s), the unpredicted disturbance input D(s), and a plant G(s) with potentially unmodelled dynamics or parameter changes. The unmodelled dynamics and parameter changes may be significant or very large, and for these systems, the challenge is to create a design that retains the desired performance. Designing highly accurate systems in the presence of significant plant uncertainty is a classical feedback design problem. The theoretical basis for the solution of this problem dates back to the work of H.S. Black and H.W. Bode in the early 1930s, when this problem was referred to as the sensitivities design problem. A significant amount of literature has been published since then regarding the design of systems subject to large plant uncertainty. The designer seeks to obtain a system that performs adequately over a large range of uncertain parameters. A system is said to be robust when it is durable, hardy, and resilient.
A control system is robust when (i) it has low sensitivities, (ii) it is stable over the range of parameter variations, and (iii) its performance continues to meet the specifications in the presence of a set of changes in the system parameters. Robustness concerns the sensitivity to effects that are not considered in the analysis and design phase, for example, disturbances, measurement noise, and unmodelled dynamics. The system should be able to withstand these neglected effects when performing the tasks for which it was designed. For small parameter perturbations, the differential sensitivities and the root sensitivity are used as measures of robustness. System sensitivity is defined as

SαT = (∂T/T)/(∂α/α),

where α is the parameter and T the transfer function of the system. Root sensitivity is defined as

Sαri = ∂ri/(∂α/α).

Figure 9.10 Robust control system design.


Advanced Control Systems

Figure 9.11 Closed-loop system (variable parameter).

When the zeros of T(s) are independent of the parameter α, it can be shown that, for an nth-order system,

S_α^T = − Σ_{i=1}^{n} S_α^{r_i} · 1/(s + r_i).

For example, consider the closed-loop system shown in Figure 9.11, where the variable parameter is α. Here T(s) = 1/[s + (α + 1)], and

S_α^T = −α / (s + α + 1).

Furthermore, the root is r_1 = +(α + 1), and

−S_α^{r_1} = −α,

which is consistent with the summation formula, since S_α^T = −S_α^{r_1}/(s + r_1).
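This example can be checked numerically. The short Python sketch below is not from the text; the evaluation point s = 2, α = 0.5 is an arbitrary illustrative choice. It approximates S_α^T by finite differences and compares it with the closed form derived above.

```python
def T(s, alpha):
    """Closed-loop transfer function of the Figure 9.11 example."""
    return 1.0 / (s + alpha + 1.0)

def sensitivity(s, alpha, d=1e-7):
    # S_alpha^T = (dT/T) / (dalpha/alpha), by a finite-difference approximation
    dT = T(s, alpha + d) - T(s, alpha)
    return (dT / T(s, alpha)) / (d / alpha)

s0, a0 = 2.0, 0.5                  # arbitrary evaluation point
approx = sensitivity(s0, a0)
exact = -a0 / (s0 + a0 + 1.0)      # closed form: -alpha/(s + alpha + 1)
```

The finite-difference estimate agrees with the closed form to several decimal places, as expected for a smooth sensitivity function.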

9.9.4 NONLINEAR CONTROL FOR A SOLAR PV POWER SYSTEM

A simple example of a nonlinear control system in the energy sector is a photovoltaic (PV) power system. PV power systems convert sunlight into electrical energy and consist of PV panels, a DC-AC inverter, and a maximum power point tracking (MPPT) controller. The PV panels produce a nonlinear current-voltage (I-V) characteristic, which varies with changing sunlight conditions. The DC-AC inverter converts the DC power produced by the PV panels into AC power suitable for use in the electrical grid. The MPPT controller adjusts the operating point of the PV panels to maximise their power output (Figure 9.12).

Figure 9.12 Solar PV nonlinear power-voltage characteristic (P-V curves for irradiance levels of 1000, 800, and 600 W/m²; PV panel power in W against PV panel voltage in V).


Design and Analysis of Control Systems

The Transfer Function model for a photovoltaic module describes the relationship between the input (solar irradiance and temperature) and the output (current and voltage) of the module. The Transfer Function model can be expressed as:

Ppv(t) = Rpv Dpv (GT(t)/GT,STC) (1 + αp [Tcell(t) − Tcell,STC]),

where Ppv(t) is the output power of the Photovoltaic generator, Rpv is the rated capacity, Dpv is the Photovoltaic derating factor, GT is the incident solar radiation, GT,STC is the incident radiation at STC (standard test conditions), αp is the power temperature coefficient, and Tcell and Tcell,STC are the cell temperatures at operating and STC conditions, respectively. The output current I is given by

I = Isc + (Ipv − Isc)(G/Gref)[1 + α(T − Tref)],

where Isc is the short-circuit current, Ipv is the current at the maximum power point, G is the solar irradiance, Gref is the reference irradiance, α is the temperature coefficient of the module, T is the cell temperature, and Tref is the reference temperature. The transfer function describes the effect of changes in solar irradiance and temperature on the output current of the PV module (Figure 9.13). It is essential to note that the Transfer Function model for a Photovoltaic module is nonlinear and depends on a number of variables, such as the type of module, the operating conditions, and the load connected to the module. As a result, accurate modelling and characterisation of the PV module are crucial for designing and analysing PV systems. A nonlinear control algorithm, such as a perturb and observe or incremental conductance algorithm, can be used for the MPPT controller to control the PV power system.
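As a rough numerical illustration, the sketch below evaluates the power model above in plain Python. All parameter values (rated capacity, derating factor, temperature coefficient) are hypothetical placeholders, not datasheet values.

```python
def pv_power(G_T, T_cell, R_pv=250.0, D_pv=0.95,
             G_T_STC=1000.0, alpha_p=-0.004, T_cell_STC=25.0):
    """Output power (W) of a PV generator from the module model above.
    All default parameter values are illustrative, not from a datasheet."""
    return R_pv * D_pv * (G_T / G_T_STC) * (1.0 + alpha_p * (T_cell - T_cell_STC))

# 800 W/m^2 irradiance with a hot cell at 45 degC: output drops well
# below the rated capacity, as the nonlinear model predicts.
p = pv_power(G_T=800.0, T_cell=45.0)
```

At standard test conditions (1000 W/m², 25 °C) the same function returns the derated capacity Rpv·Dpv, which is a quick sanity check on the implementation.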

9.9.4.1 Nonlinear Control System

These algorithms use nonlinear mathematical models to estimate the maximum power point of the PV panels and adjust the operating point to track it. The control system for the PV power system requires sensors to measure the DC voltage and current of the PV panels, as well as an actuator to adjust the duty cycle of the DC-AC inverter. The control algorithm calculates the necessary duty cycle based on the measured values and the estimated maximum power point.

Figure 9.13 Solar PV control system.

Figure 9.14 Solar PV current-voltage curve at different operating points.

The PV power system is an illustrative example of a nonlinear control system in the energy sector. It illustrates the need for advanced control techniques to optimise energy production and the efficiency of renewable energy sources, where nonlinearities are involved (Figure 9.14).

9.9.4.2 Maximum Power Point Tracking (MPPT)

The intersection between the I-V curve and the load line of the photovoltaic system is called the operating point. This intersection can be at any point of the I-V and P-V curves. In the P-V curve of the photovoltaic system, there is a unique point of maximum power: the MPP. This point is near the knee of the I-V curve, and at this point the photovoltaic system operates at its maximum power with maximum efficiency. Figure 9.14 shows the I-V curve with different operating points. As seen, the point marked as MPP has the maximum power value. Moreover, the largest rectangular area belongs to the MPP; this area represents the power produced by the PV. The values VMPP and IMPP are the voltage and current at the MPP, respectively. By changing the irradiance, the I-V and P-V curves are changed. Thus, the intersection point between the I-V curves and the load lines is also changed. To solve this problem, the MPPT is implemented as a controller to adjust the duty cycle of the power electronic converter, which is an interface between the PV system and the load, to extract the maximum available power from the PV and transfer it to the load. Adjusting the duty cycle changes the impedance seen by the source to match the operating point at the MPP, allowing maximum available power to be transferred to the load.
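A minimal sketch of the perturb and observe idea follows. It assumes a toy P-V curve rather than a real panel model (the curve shape, voltages, and step size are illustrative): the controller nudges the operating voltage, observes the resulting power, and reverses the perturbation direction whenever the power falls.

```python
def pv_power(v, i_sc=8.0, v_oc=40.0, m=9.0):
    """Toy P-V curve: current collapses near the open-circuit voltage.
    Parameter values are illustrative, not from a real panel."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** m)

def perturb_and_observe(v0=20.0, step=0.5, iters=200):
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step      # perturb the operating voltage
        p = pv_power(v)            # observe the resulting power
        if p < p_prev:             # power fell, so reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
# For this curve the true MPP is at v_oc * (1/(m+1))**(1/m), about 31 V;
# the algorithm settles into a small oscillation around it.
```

The characteristic steady-state oscillation around the MPP (of the order of the perturbation step) is the main drawback of perturb and observe, which incremental conductance methods attempt to reduce.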

PROBLEMS

Problem 9.1 A nonlinear system is described by the following equations:

ẋ1 = 2x1 + 3x1² − 2x2 + u
ẋ2 = x2 − 0.25(sin x1 + cos x2) + u.


(a) With u = 0, plot the state-space trajectories in the neighbourhood of the origin defined by |x1| ≤ 1, |x2| ≤ 2.
(b) Using Lyapunov's first method, linearise the above system about x1 = x2 = u = 0, and assess the stability of this linearised system.

Problem 9.2 Investigate whether the following functions V(x) are positive definite.

(a) V(x) = 2x1² + 2x1x2 + 4x2² − 4x1x3 + 5x3²

(b) V(x) = x1² + 4x1x2 + 5x2² − 6x1x3 − 8x2x3 + 8x3²

Problem 9.3 Construct a Lyapunov function for the system

ẋ1 = x1² + x2² − x1
ẋ2 = x1² − x2² − x2

and use it to investigate the stability at the origin. State the domain of attraction if the system is asymptotically stable.

Problem 9.4 A nonlinear system is described by

ẋ1 = x1 + x2 − (x1³ + x1x2²)
ẋ2 = x2 − x1 − (x2³ + x1²x2).

If r² = x1² + x2², show that the system has a limit cycle at r = 1.

Problem 9.5 For the following system, determine the equilibrium points and discuss the stability of each equilibrium using Lyapunov's (indirect) linearisation method.

ẋ1 = x1 − x1² − (1/2)x1x2
ẋ2 = x2 − x2² + (1/2)x1x2.

Sketch the flow of trajectories in the (x1, x2) phase plane and classify each equilibrium point (for example, node or saddle).

Problem 9.6 Sketch the isoclines for the equations:

(a) dy/dx = xy(y − 2)

(b) ẍ + ωn²x + µx³ = 0

(c) ẍ − ẋ(1 − x²) + x = 0.

Problem 9.7 The equations describing the motion of an Earth satellite in the orbit plane are

r(d²θ/dt²) + 2(dr/dt)(dθ/dt) = 0
d²r/dt² = r(dθ/dt)² − k²/(pr²).


A satellite is nearly in a circular orbit determined by r and dθ/dt ≡ ω. An exactly circular orbit is defined by r = r_o = constant, ω = ω_o = constant. Since dr_o/dt = 0 and dω_o/dt = 0, the first differential equation is eliminated for a circular orbit, and the second equation reduces to r_o ω_o² = k²/(p r_o²). Find a set of linear equations that approximately describe the differences.

Problem 9.8 Show that the describing function for the saturation element in the previous problem is given by

(B1/A) e^(jφ1) = (2/π) [ sin⁻¹(1/A) + (1/A) cos(sin⁻¹(1/A)) ].

Problem 9.9 Show that the equation d²x/dt² = f(x, dx/dt) can be equivalently described by a pair of first-order differential equations.

Problem 9.10 Show that the phase plane trajectory of the solution of the differential equation

d²x/dt² + x = 0,

with initial conditions x(0) = 0 and ẋ(0) = 1, is a circle of unit radius centred at the origin.

Problem 9.11 Determine the equation of the phase plane trajectory of the differential equation

d²x/dt² + dx/dt = 0,

with the initial conditions x(0) = 0 and ẋ(0) = 1.

10 AI-Based Design and Analysis of Control Systems

10.1 INTRODUCTION

With the advent of the Fourth Industrial Revolution (4IR) and the extensive use of Artificial Intelligence (AI), there are new advanced methods of designing and analysing control systems. Most of these techniques are data-driven, and more specifically AI-based. In this chapter, these latest control system methods are presented. To start with, concepts such as Data-Driven Modelling, Artificial Intelligence, Machine Learning, Deep Learning, Fuzzy Logic, and Artificial Neural Networks are introduced and discussed in detail. AI-based design and analysis of control systems are then presented and evaluated. The 4IR, the history of industrial revolutions, and key 4IR technologies and their impact are then discussed. Large-scale examples of 4IR applications are presented; specifically, an Intelligent Fleet Management System is studied. The details of the design and analysis of the control systems for an unmanned autonomous vehicle (a drone) and a driverless car are outlined, discussed, and evaluated.

10.2 DATA-DRIVEN DYNAMIC SYSTEM MODELLING

Modelling of a dynamic system can be accomplished by deriving a mathematical representation of the physical behaviour of the system, as presented in Chapter 2. An alternative strategy is to obtain the model using an empirical approach where the model mathematical equations are obtained from time series data analysis. This is achieved by inputting data into the system and observing the system output, which is then compared with the predicted output. The model is then obtained iteratively by training aimed at minimising the error between the two. This is illustrated in Figure 10.1. The training or calibration is carried out using Computational Intelligence, AI or Machine Learning methods. Recent developments in Computational Intelligence, particularly in the area of Machine Learning, have greatly expanded empirical modelling capabilities. Dynamic system models derived using these new approaches are called data-driven models.
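As a toy illustration of this idea, the following Python sketch (a hypothetical first-order plant, not an example from the book) fits the parameters of a discrete-time model y[k+1] = a·y[k] + b·u[k] to recorded input/output data by minimising the squared prediction error, which is the "training" step of Figure 10.1 reduced to linear least squares.

```python
# "True" plant, unknown to the modeller; used only to generate data.
a_true, b_true = 0.8, 0.5
u = [1.0 if k % 3 else -1.0 for k in range(30)]   # excitation input
y = [0.0]
for k in range(29):
    y.append(a_true * y[k] + b_true * u[k])       # observed output data

# Training: choose (a, b) minimising sum_k (a*y[k] + b*u[k] - y[k+1])^2.
# For this linear-in-parameters model the normal equations solve it exactly.
Syy = sum(y[k] * y[k] for k in range(29))
Suu = sum(u[k] * u[k] for k in range(29))
Syu = sum(y[k] * u[k] for k in range(29))
c1 = sum(y[k] * y[k + 1] for k in range(29))
c2 = sum(u[k] * y[k + 1] for k in range(29))
det = Syy * Suu - Syu * Syu
a_hat = (c1 * Suu - c2 * Syu) / det
b_hat = (Syy * c2 - Syu * c1) / det
```

Because the data here are noiseless and the input is persistently exciting, the identified parameters recover the plant exactly; with real measurements the same procedure returns the best least-squares fit instead.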

Figure 10.1 Data-driven modelling of dynamic systems.

DOI: 10.1201/9781032718880-10

AI-Based Design and Analysis of Control Systems


Put differently, data-driven modelling is based on analysing the data about a system by establishing the relationship between the system state variables (input, internal, and output) without explicit knowledge of the physical behaviour of the system. As explained, data-driven modelling is based on Computational Intelligence, AI, and Machine Learning methods. The main techniques include neural networks, fuzzy rule-based systems, genetic algorithms, evolutionary computing, or a combination of these. As already indicated, the methods used to analyse the data leading to the derivation of the system model involve contributions from the following overlapping fields:

• AI: the overarching study of how human intelligence can be incorporated into computers.
• Computational Intelligence: includes neural networks, fuzzy systems, evolutionary computing, and other areas within AI and Machine Learning.
• Soft Computing: close to Computational Intelligence, but with particular emphasis on fuzzy rule-based systems induced from data.
• Machine Learning: a subset of AI that concentrates on the theoretical foundations used by Computational Intelligence and Soft Computing.
• Data Mining and Knowledge Discovery: often focused on huge databases and associated with applications in banking, financial services, and customer resources management.
• Intelligent Data Analysis: tends to focus on data analysis in medicine and research, and incorporates methods from statistics and Machine Learning.

Data-driven modelling is therefore focused on Computational Intelligence and Machine Learning methods that can be used to build models for complementing or replacing physically based models. A Machine Learning algorithm is used to determine the relationship between a system's inputs and outputs using a training data set that is representative of all the system behaviours. This process is illustrated in Figure 10.1.
Once the model is refined through training, it can be evaluated using an independent data set to establish its effectiveness with respect to general input data. The most popular Computational Intelligence techniques include neural networks, fuzzy rule-based systems, genetic algorithms, and model integration approaches. Data-driven models rely on Computational Intelligence and Machine Learning methods and thus assume the availability of large amounts of data describing the modelled system's physics.

10.2.1 AI-BASED APPROACHES TO CONTROL SYSTEMS

As already explained, the use of AI is now dominating the latest control methods. AI-based control systems use artificial intelligence techniques, such as Machine Learning and neural networks, to control the behaviour of a plant or process. These systems can be used in a variety of applications, from industrial control systems to autonomous vehicles and robotics. An illustration of an AI-based control system is a self-driving car. The car’s control system uses Machine Learning algorithms to analyse data from sensors, cameras, and other sources to determine the car’s position, speed, and direction. Based on this data, the system makes decisions about steering, acceleration, and braking to navigate the road safely. A predictive maintenance system for an industrial plant is another example of an AI-based control system. It uses Machine Learning algorithms to analyse data from sensors and other sources to detect patterns that indicate when equipment is likely to fail. Based on this data, the system can predict when maintenance is needed, allowing operators to schedule maintenance before equipment fails, minimising downtime and reducing maintenance costs. AI-based control systems can also be used in home automation applications, such as heating, ventilation, and air-conditioning, where the behaviour of a system needs to be controlled in response to changing conditions. AI-based control systems provide more efficient and effective control than traditional control systems because they can intelligently adapt to changing conditions and optimise performance


over time. Consequently, AI is now widely used to design, analyse, and optimise control systems. As already articulated, AI-based control systems use Machine Learning algorithms to model and predict system behaviour, allowing them to make real-time adjustments and optimise system performance. Hence, the starting point in embracing such systems is a solid understanding of AI, Machine Learning, and related fields of study.

10.3 INTRODUCTION TO ARTIFICIAL INTELLIGENCE

As introduced in Chapter 1, Artificial Intelligence refers to intelligence programmed into and demonstrated by machines. This capability is in contrast to the natural intelligence manifested by humans and other animals. Put more succinctly, AI refers to the development of computer systems that can perform tasks that ordinarily require human intelligence, such as perception, reasoning, learning, decision-making, obstacle avoidance, and path planning. These AI computer programs can process and analyse data, learn from it, and make decisions based on that learning. Techniques used in AI include Machine Learning, Deep Learning, reinforcement learning, and Neural Networks. AI has a wide range of applications, from speech recognition and natural language processing to image and video analysis, mechatronics, autonomous vehicles, and robotics. AI is also used in various industries, such as healthcare, finance, education, and manufacturing, to improve efficiency, reduce costs, and develop new products and services. AI is currently transforming society in many ways as an effective tool in solving human challenges. It is still a rapidly evolving field with great opportunities and potential dangers. There are concerns about the ethical and social implications of AI, including issues related to privacy, bias, fake news/data/images, autonomous weapons, accountability, terrorism, and job displacement.

10.3.1 TRADITIONAL (SINGLE-TASK) ARTIFICIAL INTELLIGENCE

Single-task AI systems are designed to perform a specific task or set of tasks, rather than being able to perform a wide range of tasks. They are also called traditional AI systems. Single-task AI is typically developed using Machine Learning algorithms that are trained on a specific data set to perform a particular function. Examples of single-task AI systems include speech recognition software, image recognition software, obstacle avoidance, path planning, and fraud detection systems. Most robots, drones, and driverless cars are driven by traditional AI systems. These systems are highly specialised and are typically designed to perform their task at a very high level of accuracy. Overall, single-task AI systems are highly effective for performing specific tasks, but they are not capable of the same level of flexibility and adaptability as more general-purpose AI systems.

10.3.2 ARTIFICIAL GENERAL INTELLIGENCE

General-purpose AI systems are designed to perform a wide range of tasks. The ultimate and most advanced form of such capability is called Artificial General Intelligence (AGI). This is the intelligence of a machine that can learn, understand and execute any intellectual task that a human being can. The idea is to have in a machine the ability to perform multiple complex tasks as opposed to specific task competence. Unlike narrow AI systems that are designed to perform specific functions, such as image recognition, language translation, obstacle avoidance and path planning, AGI is able to reason, learn, and adapt to new situations while accomplishing multiple complex tasks in much the same way as a human being. The concept of AGI is sometimes associated with the idea of strong AI, which suggests that it is possible to create machines that are not only capable of mimicking human intelligence, but are actually sentient, conscious, and self-aware, that is, machines that manifest intentionality, consciousness, and feelings. However, it is important to note that AGI and strong AI remain largely areas of current active research – inconclusive work in progress. Whether and when such systems will be fully developed are open questions. Ethical considerations, religious beliefs or


fear of the unknown cannot be allowed to deter the pursuit of both AGI and strong AI. The research and experimentation must continue.

10.3.3 MACHINE LEARNING

A special subset of AI is Machine Learning – the study of algorithms and statistical models that computer systems use to perform tasks effectively, without using explicit instructions, but relying on patterns and inference. This class of AI involves training machines to learn from data, rather than explicitly programming them to perform a specific function. In ML, algorithms use statistical models and mathematical techniques to automatically improve their performance on a given task based on the data they are trained on. The goal is to develop models that can generalise well to new, unseen data and make accurate predictions or decisions. Machine Learning has a wide range of applications, including natural language processing, computer vision, robotics, and predictive analytics. There are four ways of implementing Machine Learning: supervised, unsupervised, semi-supervised, and reinforcement learning.

10.3.3.1 Supervised Learning

In supervised learning, the algorithm is trained on labelled data, where the correct output is provided for each input. The aim is to construct a model that makes effective predictions based on information and evidence in the presence of various uncertainties. This means a supervised learning program considers a known set of input data and known output data and trains the model. The objective is to generate acceptable or reasonable predictions for the response to new input data. Supervised learning uses techniques such as classification and regression to develop predictive models. Specifically, classification methods predict categorical responses. For example, they can be used to establish whether an electronic message is bona fide or whether a growth is cancerous. Classification models are applied in areas such as medical imaging, image and speech recognition, and credit scoring. On the other hand, regression techniques predict continuous responses, such as temperature changes or power demand fluctuations. Applications of such methods include electricity load forecasting and algorithmic trading [22].
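A minimal sketch of classification in this spirit follows, using hypothetical two-dimensional features and a simple nearest-centroid rule (one of many possible classifiers, not a specific algorithm from the text): the labelled training data define a mean point per class, and a new input receives the label of the nearest class mean.

```python
# Labelled training data: (feature vector, label) pairs, e.g. crude
# numeric features extracted from electronic messages (hypothetical).
train = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
         ((4.0, 4.2), "genuine"), ((3.8, 4.0), "genuine")]

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# "Training": compute one centroid per class label.
centroids = {label: centroid([x for x, l in train if l == label])
             for label in {"spam", "genuine"}}

def predict(x):
    # Assign the label whose class mean is nearest (squared distance).
    return min(centroids,
               key=lambda c: (x[0] - centroids[c][0]) ** 2
                           + (x[1] - centroids[c][1]) ** 2)
```

A new point near the "spam" cluster is labelled spam, and one near the "genuine" cluster is labelled genuine, illustrating the predict-on-unseen-data objective described above.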

10.3.3.2 Backpropagation

A popular supervised learning method is called Backpropagation. It is a widely used algorithm for training artificial neural networks. The algorithm works by adjusting the weights of the connections between neurons in a neural network to minimise the difference between the network's output and the desired output for a given input. Thus, supervised learning is achieved by propagating the error backwards through the network from the output layer to the input layer. This is done by computing the gradient of the error with respect to each weight in the network, using the chain rule of calculus to recursively calculate the gradients for each layer. These gradients are then used to update the weights of the network using an optimisation algorithm such as stochastic gradient descent. By iteratively adjusting the weights of the network using the backpropagation algorithm, the network can learn to approximate complex input-output mappings, such as recognising images or translating languages.
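The update rule just described can be sketched in a few lines of plain Python. The 2-2-1 network, XOR training set, learning rate, and iteration count below are illustrative choices, not from the text; the point is the chain-rule error terms and weight updates, not state-of-the-art accuracy.

```python
import math
import random

random.seed(0)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-input, 2-hidden, 1-output network with random initial weights.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sig(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, o

def total_loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

lr = 0.5
loss_before = total_loss()
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Output-layer error term (chain rule through the sigmoid).
        d_o = (o - y) * o * (1 - o)
        # Hidden-layer error terms, propagated backwards through w2.
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
        b2 -= lr * d_o
loss_after = total_loss()
```

After training, the squared-error loss over the four patterns is lower than at initialisation, which is the essential property of the gradient-descent weight updates.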

10.3.3.3 Unsupervised Learning

In unsupervised learning, the algorithm is trained on unlabelled data, and the goal is to discover patterns or structure in the data. Hence, unsupervised learning identifies hidden patterns or intrinsic structures in provided information. This approach is used to attain inferences from datasets composed of input data without labelled responses. The technique of clustering is the most widely used unsupervised learning strategy. Among other functions, the method is used for exploratory data


analysis to find hidden patterns in data sets. Clustering applications include object recognition, gene sequence analysis, and market research. Semi-supervised learning combines supervised and unsupervised techniques.
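A minimal sketch of clustering on hypothetical one-dimensional data follows, using the classic k-means update (assign each point to the nearest centroid, then recompute each centroid as the mean of its group); all numbers are illustrative.

```python
data = [0.9, 1.0, 1.1, 7.8, 8.0, 8.2]   # unlabelled measurements
c = [0.0, 10.0]                          # initial centroid guesses

for _ in range(10):
    groups = [[], []]
    for x in data:                       # assignment step: nearest centroid
        groups[0 if abs(x - c[0]) <= abs(x - c[1]) else 1].append(x)
    c = [sum(g) / len(g) if g else c[i]  # update step: recompute centroids
         for i, g in enumerate(groups)]
```

The algorithm discovers the two groups hidden in the data without any labels: the centroids converge to roughly 1.0 and 8.0.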

10.3.3.4 Reinforcement Learning

In reinforcement learning, the algorithm learns by trial and error, receiving feedback in the form of rewards or punishments as it interacts with an environment. It is a way of implementing Machine Learning by training an agent to make decisions in an environment by receiving feedback in the form of rewards or punishments. The agent learns through trial and error by interacting with the environment, with the goal of maximising its cumulative reward over time. It takes actions based on its current state and the rewards it expects to receive from those actions. The environment then responds with a new state and a reward, which the agent uses to update its policy for selecting actions in the future. By repeating this process many times, the agent gradually learns to make better decisions and maximise its reward. Reinforcement learning has been successfully applied to a variety of domains, including robotics, game playing, and control systems. One of the most notable examples is AlphaGo, an AI program developed by DeepMind that defeated a world champion at the game of Go using a combination of reinforcement learning and neural networks. Other applications of reinforcement learning include autonomous driving, recommendation systems, and supply chain optimisation. 10.3.4
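The trial-and-error loop described above can be sketched with tabular Q-learning on a toy corridor environment. All states, rewards, and hyperparameters below are illustrative choices, not from the text.

```python
import random

random.seed(1)

# Corridor of states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 ends the episode with reward 1; all other rewards are 0.
n_states = 5
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != 4:
        if random.random() < eps:
            a = random.randrange(2)               # explore
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1     # exploit (ties go right)
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
```

After training, the learned action values prefer moving right in every state, i.e. the agent has discovered the reward-maximising policy purely from interaction.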

DEEP LEARNING

Deep Learning is a subset of Machine Learning. It is part of a broader family of Machine Learning methods. Deep Learning is about learning data representations, as opposed to task-specific algorithms. Put succinctly, Deep Learning is a special class of Machine Learning that involves training neural networks with multiple layers to learn representations of data that can be used for classification, prediction, or other tasks. In Deep Learning, the neural network architecture is designed to automatically learn hierarchical representations of the data, with each layer learning more abstract features than the previous layer. This makes Deep Learning particularly effective for tasks involving complex data, such as images, speech, and natural language processing. Deep Learning algorithms use backpropagation to adjust the weights of the neural network during training, based on the error between the predicted output and the actual output. The weights of the neural network are adjusted in a way that minimises this error, allowing the network to learn to make more accurate predictions over time. In fact, Deep Learning uses complex algorithms modelled on the human brain. This enables the processing of unstructured data such as documents, images, and text. As a subset of Machine Learning, Deep Learning can also be implemented using supervised, unsupervised, semi-supervised or reinforcement learning techniques. Deep Learning has seen significant advances in recent years, due in part to the availability of large amounts of data and advances in computing power. Deep Learning has been used to achieve state-of-the-art results in a variety of applications, including image and speech recognition, natural language processing, and autonomous driving. In summary, Deep Learning is an advanced and specialised subset of Machine Learning, and Machine Learning is a subset of Artificial Intelligence.
Put differently, Deep Learning is Machine Learning, and Machine Learning is Artificial Intelligence. This is illustrated in Figure 10.2. Clearly, Machine Learning is a category of Artificial Intelligence, while Deep Learning is an advanced and more complex aspect of Machine Learning.

10.3.5 DEEP REINFORCEMENT LEARNING

Deep Reinforcement Learning is achieved by implementing Deep Learning using reinforcement learning techniques, where deep neural networks are used to approximate the action-value or policy


Figure 10.2 Artificial intelligence, machine learning and deep learning: the relationships.

function in reinforcement learning. In Deep Reinforcement Learning, the agent uses a deep neural network to learn a policy for selecting actions in an environment, based on the current state. The neural network takes the state as input and outputs a probability distribution over actions. The agent then selects an action based on this distribution, and the environment responds with a new state and a reward. The neural network is trained using backpropagation to adjust the weights based on the error between the predicted reward and the actual reward received. The use of deep neural networks in reinforcement learning allows for more complex representations of the state-action space, which can lead to better performance on tasks with high-dimensional or continuous state and action spaces. Deep Reinforcement Learning has been used to achieve state-of-the-art results in a variety of applications, including game playing, robotics, and natural language processing. One of the most notable examples of Deep Reinforcement Learning is AlphaGo Zero, a version of the AlphaGo program that learned to play Go without any human expert knowledge, using only reinforcement learning with a deep neural network. Other examples include Deep Reinforcement Learning for drones, driverless cars, and service/industrial robots.

10.3.6 GENERATIVE AI

This is a special category of AI that involves the use of algorithms and models to create new content or generate novel ideas. This can include the creation of images, music, text, or even entire virtual environments. Unlike traditional AI systems, which are designed to perform specific tasks or solve specific problems, Generative AI is designed to create something new that did not exist before. It uses complex algorithms and models that are trained on large amounts of data to create new content


that is often indistinguishable from content created by humans. The goal is to be able to replicate or mimic the creative abilities of humans, and to be able to produce new content that is original and creative. There are a number of different techniques that can be used to create generative systems, including Machine Learning, Deep Learning, and probabilistic modelling. Generative AI has many potential applications in fields such as art, music, and design, as well as in areas such as drug discovery and materials science. However, it also raises important ethical questions about the use of AI-generated content and the potential for misuse or abuse of these technologies.

10.3.6.1 ChatGPT and Google Bard

Two popular examples of Generative AI are ChatGPT and Google Bard. ChatGPT is a conversational agent or chatbot powered by a language model based on the GPT (Generative Pre-trained Transformer) architecture. It is one example of a large language model trained by OpenAI, which is designed to generate human-like language in response to a given prompt or context. ChatGPT is designed to hold a natural language conversation with humans, and it can respond to a wide range of topics and questions. It can generate text that resembles human speech and can provide answers, explanations, and suggestions based on the input it receives. This Generative AI application is trained on vast amounts of text data, including books, articles, and websites, using unsupervised learning techniques. This allows ChatGPT to generate coherent and contextually appropriate responses to a wide range of inputs. It is an example of how Generative AI technology can be used to create conversational interfaces that can interact with humans in a natural way. ChatGPT has a wide range of potential applications in such fields as customer service, management (personal assistant), education, news and media, healthcare, language translation, and creative writing. However, it is prudent to state that ChatGPT can produce incorrect, biased and harmful responses or answers. Furthermore, its ability to produce authoritative text, even when the facts are not correct, can easily be used by people with bad motives. This chatbot also poses potential ethical and social dangers such as violation of privacy, phishing emails, plagiarism, and creation of malware. Bard is Google's experimental, conversational, AI chat service. It is meant to function in a similar way to ChatGPT. The key difference is that Bard obtains its information from the web.
Bard is powered by Google’s Language Model for Dialogue Applications (LaMDA), which was built on Transformer, Google’s neural network architecture that it developed and open-sourced in 2017. It is important to note that GPT-3, the language model ChatGPT functions on, was also built on Transformer. The potential applications and challenges of Google Bard are similar to those outlined for ChatGPT. However, ChatGPT has had an exponential uptake (first mover advantage) while Bard has largely underwhelmed. 10.3.7

MACHINE LEARNING WORKFLOW

As explained earlier, Machine Learning involves developing algorithms and models that enable machines to learn from data, recognise patterns, make predictions, and improve performance over time. It is used in a broad range of applications, including image recognition, natural language processing, speech recognition, recommender systems, fraud detection and autonomous vehicles. Machine Learning and control systems can be combined to create Intelligent Control Systems. These systems use Machine Learning algorithms to learn from data and optimise the control strategies used to regulate physical systems. This approach can lead to more efficient and effective control of complex systems with dynamic and uncertain environments. A Machine Learning system structure can be summarised as follows: • Data collection: The first exercise in any Machine Learning task is to gather relevant data. Depending on the application, this data could be in various forms, such as images, text, audio, or sensor readings.

AI-Based Design and Analysis of Control Systems


Figure 10.3 Machine learning workflow.

• Data pre-processing: After data collection, the next step is processing and cleaning the data to remove noise, missing values or outliers. This stage may also involve transforming the data into an appropriate format for Machine Learning algorithms to analyse.
• Feature engineering: This step involves selecting and extracting relevant features from the data that will be used to train the Machine Learning models. Feature engineering requires domain expertise and creativity to identify the most informative features.
• Model selection: After selecting the features, the next step is to choose an appropriate Machine Learning algorithm to train on the data. This step involves considering the characteristics of the data and the specific problem to be solved.
• Model training: This step involves using the selected Machine Learning algorithm to learn the underlying patterns in the data. The algorithm is applied and trained on a subset of the information called the training data set.
• Model evaluation: After training the model, it is evaluated on a separate section of the data which is designated as the test set. The aim is to assess the model's performance and determine whether it can generalise well to new data.
• Model deployment: Finally, the trained model can be deployed to make predictions on new data. This could sometimes involve integrating the developed model into a broader system or application.

In summary, the structure of a Machine Learning system, shown in Figure 10.3, involves a cyclical process of collecting data, processing and engineering features, selecting and training models, evaluating performance, and deploying the model. This process is often iterative, with each iteration aiming to improve the model's performance [6, 29].
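As a concrete (and deliberately simplified) illustration of this workflow, the following Python sketch runs every stage on an invented toy data set, with a least-squares line fit standing in for the model; a real project would substitute real data, proper feature engineering, and an ML library.

```python
import random

# 1. Data collection: a toy "sensor log" sampled from y = 2x + 1 plus noise
random.seed(0)
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(20)]

# 2. Pre-processing / splitting: shuffle, then hold out 25% as a test set
random.shuffle(data)
split = int(0.75 * len(data))
train, test = data[:split], data[split:]

# 3. Model selection and training: fit y = a*x + b by ordinary least squares
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# 4. Model evaluation: mean squared error on the held-out test set
mse = sum((y - (a * x + b)) ** 2 for x, y in test) / len(test)
print(f"a = {a:.2f}, b = {b:.2f}, test MSE = {mse:.4f}")

# 5. Deployment: the trained model is now just a function applied to new inputs
def predict(x):
    return a * x + b
```

Note how evaluation uses only the held-out test set; measuring performance on the training data would give a misleadingly optimistic picture.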

10.3.7.1 Selecting the Suitable Algorithm

Selecting the ideal or optimum algorithm can seem daunting. There are many supervised and unsupervised Machine Learning algorithms, and each takes a different strategy to the learning process. Consequently, a one-size-fits-all approach is inapplicable, and establishing a suitable algorithm is partly based on trial and error. Even well-trained data engineers and scientists can only assert the efficacy of an algorithm after testing it. It is instructive to note that highly flexible models might overfit the data by modelling minor variations that could be due to noise, whereas basic models are easier to interpret but might produce inaccurate results. Choosing a suitable algorithm therefore involves trade-offs between such characteristics as model speed, accuracy and complexity. A trial-and-error approach is at the core of Machine Learning: if a technique or algorithm does not work, another strategy is pursued. MATLAB provides tools to help engineers and scientists test various Machine Learning models and then select the most efficacious. The broad classification of ML methods is shown in Figure 10.4.
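The overfitting trade-off described above can be demonstrated numerically. In this invented toy experiment, a one-nearest-neighbour model (highly flexible) memorises noisy training data perfectly, while a constant-mean model (very basic) does not; comparing training and test errors exposes the difference.

```python
import random

random.seed(1)

def make_data(n):
    """Noisy samples of the underlying relationship y = x."""
    return [(x, x + random.gauss(0, 0.3)) for x in (random.random() for _ in range(n))]

train, test = make_data(30), make_data(30)

# Flexible model: 1-nearest-neighbour memorises every training point, noise included
def knn1(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Basic model: predict the global training mean, ignoring x entirely
mean_y = sum(y for _, y in train) / len(train)
def mean_model(x):
    return mean_y

def mse(model, data):
    return sum((y - model(x)) ** 2 for x, y in data) / len(data)

# The flexible model scores a perfect 0.0 on training data but worse on unseen data
print("1-NN train/test MSE:", mse(knn1, train), mse(knn1, test))
print("Mean train/test MSE:", mse(mean_model, train), mse(mean_model, test))
```

A zero training error paired with a markedly higher test error is the classic signature of overfitting; the basic model's errors are similar on both sets but uniformly mediocre.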

10.4 APPLYING AI TECHNIQUES TO CONTROL SYSTEMS

Figure 10.4 Machine learning classification [6, 22, 29].

As explained earlier, AI-based control systems use Artificial Intelligence techniques to design, analyse, and optimise control systems. These methods can be applied to several types of control systems, including industrial plants, robotics, and general automation. AI-based control systems use Machine Learning algorithms to model and predict system behaviour, allowing them to make real-time adjustments and optimise system performance. These algorithms can be trained using data collected from sensors, feedback loops and other sources, allowing them to adapt to changing conditions and improve over time. Some common AI-based approaches to control systems include fuzzy logic control, neural network control, and reinforcement learning. Fuzzy logic control uses fuzzy sets and rules to make decisions based on inaccurate or uncertain data. Neural network control uses artificial neural networks to model system behaviour and make predictions. As discussed earlier, reinforcement learning involves training an agent to make decisions based on feedback and rewards.

10.4.1 CLASSIFICATION OF PID CONTROL SYSTEMS

PID control systems can be classified into various categories based on their application, implementation and methodology. A broad PID classification framework, indicating the role of AI, is shown in Figure 10.5 [18].


Figure 10.5 Classification of PID controllers.

10.4.1.1 Based on Application

• Industrial control: PID control is widely used in industrial applications, such as process control, robotics, and automation. AI-based control, such as fuzzy logic and neural networks, is also used in industrial control to handle complex and nonlinear systems. In industrial applications, AI techniques such as Machine Learning, Artificial Neural Networks, and Fuzzy Logic can be used to enhance the performance of PID control. These techniques can optimise the PID controller parameters, adapt the controller to changes in the process dynamics, and improve the accuracy and reliability of the sensor measurements. AI can also be used for fault detection and diagnosis, which is essential for maintaining the safety and reliability of the system.
• Automotive control: PID control is commonly used in automotive applications, such as engine control, cruise control, and anti-lock braking systems. AI-based control, such as reinforcement learning and Deep Learning, is also being explored for advanced automotive control. In automotive control, AI techniques such as reinforcement learning, Deep Learning, and evolutionary algorithms can be used to enhance the autonomy and intelligence of the control system. Reinforcement learning can train the control system to learn from its environment and optimise its behaviour over time. Deep Learning can be used for perception tasks such as object recognition and tracking. Evolutionary algorithms can be used to optimise the control system parameters. AI can also be used for fault detection and diagnosis, which is essential for maintaining the safety and reliability of the system.
• Household automation: PID control can be used in home automation applications, such as temperature control, lighting control, and security systems. AI-based control, such as voice recognition and natural language processing, is also used in smart home systems. AI techniques such as Deep Learning, natural language processing, and computer vision can be used to enhance the system's comfort, energy efficiency and security. Deep Learning can predict user preferences and adapt the control system to their needs. Natural language processing can be used for voice control and human-machine interaction. Computer vision can be used for gesture recognition and facial recognition, which can enhance the security of the system. AI can also be used for fault detection and diagnosis, which is essential for maintaining the safety and reliability of the in-home system.

10.4.1.2 Based on Implementation

• Analog control: PID control can be implemented using analogue circuits, such as operational amplifiers and resistors. Analog PID controllers are still used in some applications, such as heating and cooling systems. AI can be used to optimise the controller parameters for better control performance using techniques such as gradient descent, evolutionary algorithms, or Machine Learning. By optimising the controller parameters, AI can reduce the control system’s steady-state error, overshoot and settling time. This improves accuracy and responsiveness. AI can also adapt the controller to changes in the process dynamics using techniques such as online parameter estimation, adaptive control and model-based control. By adapting the controller, AI can compensate for variations in the process behaviour, which can result from changes in the operating conditions, wear and tear of the equipment, or other factors. This makes the control system more robust and reliable, thus reducing the need for manual tuning and adjustment [18]. In addition, AI can be used to implement advanced control strategies such as fuzzy logic, gain scheduling, and model predictive control, which can improve the performance of the PID controller. Fuzzy logic can be used to handle nonlinearities and uncertainties in the process by defining membership functions that capture the degree of truth of different control actions. Gain scheduling can be used to adjust the controller parameters based on the operating conditions by defining lookup tables that map the controller parameters to the process variables. Model predictive control can be used to optimise the control inputs over a finite horizon by predicting the future behaviour of the process using a dynamic model and solving an optimisation problem. • Digital control: PID control can be implemented using digital circuits, such as microcontrollers and digital signal processors (DSPs). 
Digital PID controllers are widely used in modern control systems, as they offer better accuracy and flexibility. As was the case with analogue control, AI can be used to optimise the controller parameters and adapt the digital controller to changes in the process dynamics. One of the main advantages of digital PID control is the ability to implement complex control algorithms with high accuracy and speed using advanced digital signal processing techniques. This allows for more precise control of the process variables, which can improve the performance and reliability of the control system.

AI can be used to optimise the controller parameters using techniques such as gradient descent, evolutionary algorithms, or Machine Learning. By optimising the controller parameters, AI can reduce the control system's steady-state error, overshoot, and settling time, improving its accuracy and responsiveness. In addition, AI can be used to adapt the controller to changes in the process dynamics using techniques such as online parameter estimation, adaptive control, or model-based control. By adapting the controller, AI can compensate for variations in the process behaviour, which can result from changes in the operating conditions, wear and tear of the equipment, or other factors. AI can also be used to implement advanced control strategies such as fuzzy logic, gain scheduling, and model predictive control in digital PID control systems. Fuzzy logic can be used to handle nonlinearities and uncertainties in the process by defining membership functions that capture the degree of truth of different control actions. Digital PID control combined with AI can provide a powerful and flexible control system that can satisfy the requirements of a wide range of applications, from industrial processes to home automation systems. By leveraging the benefits of digital processing and AI, efficient and optimum control systems can be designed that operate in real time.
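To make the digital implementation concrete, here is a minimal discrete-time PID controller in Python (an illustrative sketch, not code from this book; the first-order plant model and the gains are invented for the example and would normally be tuned).

```python
class DiscretePID:
    """Textbook positional PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # rectangular integration
        derivative = (error - self.prev_error) / self.dt  # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Simulate a first-order plant x' = -x + u with simple Euler steps
pid = DiscretePID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):           # 20 seconds of simulated time
    u = pid.update(1.0, x)      # track a unit setpoint
    x += (-x + u) * 0.01
print(f"final output: {x:.3f}")  # settles near the setpoint 1.0
```

In a real digital controller the same `update` logic would run inside a fixed-rate interrupt on a microcontroller or DSP, usually with extras such as integral anti-windup and derivative filtering.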

10.4.1.3 AI-Based Control System Methodologies

Intelligent Control is achieved by using AI-based control methods, such as fuzzy logic, neural networks, and evolutionary algorithms. These are intelligent control methods that use data-driven models and decision-making algorithms to regulate the output variable. Intelligent control methods are based on nonlinear systems theory and can handle complex and uncertain systems.

• Fuzzy Logic Control (FLC) is a rule-based control system that is particularly useful for systems that are difficult to model mathematically or have uncertain or imprecise information. FLC uses fuzzy sets and rules to map inputs to outputs and can handle nonlinear systems with multiple inputs and outputs. FLC is easy to implement and can be effective in controlling complex systems.
• Neural Network Control (NNC) is a form of model-free control that uses artificial neural networks to map inputs to outputs. NNC can learn the control behaviour from input-output data and handle complex systems with nonlinearities and uncertainties. NNC requires substantial amounts of data for training but can adapt to changes in the system and improve control performance over time.
• Genetic Algorithm Control (GAC) is a form of optimisation-based control that uses evolutionary algorithms to find the optimal control parameters based on a fitness function that evaluates the control performance. GAC can handle systems with multiple and conflicting objectives and find solutions that are impossible with traditional optimisation techniques.
• Reinforcement Learning (RL), as explained earlier, is a form of learning-based control that uses a trial-and-error approach to learn the optimal control policy. RL can handle complex systems with unknown dynamics and learn from feedback signals. RL requires substantial data for training and can be slow to converge. However, it can achieve optimal control performance.
• Hybrid Control (HC) methods combine classical and intelligent control methods to achieve better performance and robustness. Two examples of such approaches are fuzzy PID control and neural network PID control. These are hybrid control strategies combining the advantages of classical and intelligent control.

10.4.2 MACHINE LEARNING AND PID CONTROL

Machine Learning and PID control can be used together to create intelligent control systems. One way to combine Machine Learning and PID control is to use Machine Learning to learn the optimal PID parameters based on historical data. This approach is known as adaptive PID control. Instead of manually tuning the PID parameters, Machine Learning algorithms can learn from data to adjust the parameters in real time, leading to more efficient and robust control. Another way is to replace the PID controller with a Machine Learning model that directly controls the system. In this approach, the Machine Learning model takes the error signal as input and outputs the control signal directly, without needing a PID controller. This approach is known as model-based control and can lead to more accurate and adaptive control in complex systems.

Once the Machine Learning model is trained, it can be used to adjust the PID parameters in real time based on the current conditions. For example, suppose the Machine Learning model predicts that a higher proportional gain will lead to better control under current conditions. In that case, it can adjust the proportional gain accordingly without manual tuning. Adaptive PID control thus combines the benefits of traditional PID control, such as stability and robustness, with the adaptability of Machine Learning, leading to more efficient and effective control in dynamic environments.
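The idea of learning gains from data can be reduced to its simplest possible form: simulate the closed loop for several candidate proportional gains and keep the one with the lowest accumulated error. This brute-force search below is only a stand-in for a genuine Machine Learning tuner (which would adjust the gains continuously online), and the plant, gain candidates and cost are invented for the illustration.

```python
def closed_loop_cost(kp, ki=0.5, steps=500, dt=0.01):
    """Integrated squared error of a PI loop around the toy plant x' = -x + u."""
    x, integral, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x                  # unit-setpoint tracking error
        integral += e * dt
        u = kp * e + ki * integral   # PI control law
        x += (-x + u) * dt           # Euler step of the plant
        cost += e * e * dt           # accumulate squared error
    return cost

# "Learning" reduced to its crudest form: evaluate candidates, keep the best
candidates = [0.5, 1.0, 2.0, 4.0, 8.0]
best_kp = min(candidates, key=closed_loop_cost)
print("best Kp:", best_kp)
```

For this simple first-order plant the largest candidate gain wins because it drives the error down fastest while remaining stable; on a real plant the search space, cost function, and stability constraints would all be richer, which is exactly where ML-based tuners earn their keep.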

10.4.3 DEEP REINFORCEMENT LEARNING AGENTS

As explained earlier, Deep Reinforcement Learning enables the implementation of controllers and decision-making systems for complex systems such as robots and autonomous systems. In principle, Deep Reinforcement Learning involves implementing deep neural networks that can learn complex behaviours by training them with data generated automatically from physical or simulated systems.


Figure 10.6 Deep reinforcement learning agents.

Unlike other Machine Learning techniques, there is no need for predefined training datasets, labelled or unlabelled. Usually, what is required is a simulation model that represents the relevant environment. Deep Reinforcement Learning can be used to control complex and high-dimensional systems, such as drones, by learning control policies that map sensor inputs to control actions. An example of such a control strategy is the autonomous navigation of drones. In this application, a drone must learn to navigate an environment while avoiding obstacles and reaching a goal location.

Deep Reinforcement Learning agents, illustrated in Figure 10.6, consist of a deep neural network policy that maps an input state to an output action, together with an algorithm responsible for adjusting and updating that policy. Widely used examples of such algorithms include Deep Q-Network (DQN), Deep Deterministic Policy Gradient (DDPG), Soft Actor-Critic (SAC), and Proximal Policy Optimisation (PPO). The algorithm adjusts and updates the policy based on the observations and rewards collected from the environment. The objective is to maximise the expected long-term reward. The MATLAB Reinforcement Learning Toolbox assists in creating Deep Reinforcement Learning agents programmatically, and it also works interactively through the Reinforcement Learning Designer application.

It is important to emphasise that training with Deep Reinforcement Learning algorithms is dynamic: the agent interacts with the environment around it. For robotics and autonomous systems applications, performing this training with actual hardware can be costly and hazardous. Consequently, virtual models of the environment that generate data through simulations are preferred for Deep Reinforcement Learning. A model of the environment that describes the system dynamics can be built in MATLAB and Simulink.
This modelling includes how such dynamics are affected by the actions taken by the agent, together with a reward that evaluates the efficacy of the action executed. These can be continuous or discrete models, and they can represent the system at varying levels of fidelity and accuracy. In addition, the simulations can be conducted in parallel to speed up training. Existing MATLAB and Simulink models can be reused for Deep Reinforcement Learning with minor or no modifications.
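Deep Reinforcement Learning replaces a lookup table with a deep neural network, but the underlying update rule is easiest to see in tabular form. The following sketch (a toy invented for illustration, not MATLAB Toolbox code) trains a tabular Q-learning agent to walk right along a five-state corridor towards a reward.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4            # corridor of states 0..4, reward at the right end
actions = (-1, +1)               # move left or move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection, with random tie-breaking
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: (Q[(s, b)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # the learned policy moves right in every state
```

A DQN agent performs exactly this target-and-update computation, but with a neural network approximating `Q` so that high-dimensional states (camera images, lidar scans) become tractable.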

10.4.4 MATLAB REINFORCEMENT LEARNING ENVIRONMENTS

Figure 10.7 DRL control system.

In a reinforcement learning scenario, an agent is trained to complete a task, where the environment models the external system (the world) with which the agent interacts. In control systems applications, this external system is often called the plant. As shown in Figure 10.7, the environment has the following functions:

• Reception of actions from the agent.
• Returning observations in response to the agent's actions.
• Generation of a reward that measures the action's effectiveness in achieving the task.

Consider again the autonomous navigation of a drone. The drone can be equipped with a set of sensors, such as cameras, lidar, and inertial measurement units (IMUs), to perceive its environment and estimate its position and velocity. The DRL algorithm aims to learn a policy that maps the drone's sensor inputs to its control actions, such as the drone's linear and angular velocities. To train the DRL algorithm, the drone can be placed in a simulated environment, such as a 3D simulator, where it can learn how to navigate to the goal location through trial and error. The drone can receive rewards for reaching the goal location and penalties for colliding with obstacles or deviating from the desired trajectory. The DRL algorithm uses these rewards and penalties to update the policy parameters and improve the drone's performance.

During training, the DRL algorithm can use a deep neural network, such as a convolutional neural network (CNN) or, alternatively, a recurrent neural network (RNN), to represent the policy. The neural network takes the drone's sensor inputs as input and outputs the drone's control actions. The neural network is trained using a variant of the Q-learning algorithm, such as deep Q-learning (DQN), to learn an optimal policy. After training, the learned policy can be deployed on the drone's onboard computer, allowing it to navigate autonomously in real-world environments. The DRL algorithm can adapt to changes in the environment or system dynamics and continue to improve the drone's performance over time.

10.5 FUZZY LOGIC CONTROL SYSTEMS

Fuzzy logic control is a branch of control systems engineering that uses fuzzy logic to control complex systems. Fuzzy logic is a mathematical framework that can represent uncertainty and approximate reasoning. It allows control systems to make decisions based on imprecise, ambiguous or incomplete information. Fuzzy logic control systems have been successfully applied in various fields, including automotive control, robotics, and process control. Here we present an overview of fuzzy logic control systems, including their advantages, disadvantages, and applications. The controlled system can be anything from a simple household appliance to a complex manufacturing process, and a fuzzy logic control system aims to achieve a desired output by adjusting the inputs. A fuzzy logic control system consists of four main components:


Figure 10.8 Fuzzy logic system.

• Fuzzifier: converts crisp inputs into fuzzy inputs by mapping them to the membership functions of fuzzy sets.
• Inference engine: uses fuzzy logic rules to determine the output based on the fuzzy inputs.
• Rule base: contains a set of fuzzy logic rules that define how the system should behave under different input conditions.
• Defuzzifier: converts the fuzzy output into a crisp output.

The inputs and outputs of a fuzzy logic control system are often represented by linguistic variables, which are variables that can be described using natural language terms. For example, a linguistic variable for a heating system may be "room temperature," which can take values like "cold," "warm," and "hot." Figure 10.8 shows a basic Fuzzy Logic System.

BENEFITS OF FUZZY LOGIC CONTROL SYSTEMS

Fuzzy logic control systems have several advantages over traditional control systems:

• Robustness: Fuzzy logic control systems are more robust than traditional ones because they can handle imprecise or incomplete information. This makes them particularly useful in systems with uncertain or unpredictable environments.
• Flexibility: Fuzzy logic control systems are more flexible than traditional ones because they can handle nonlinear relationships between inputs and outputs. This makes them particularly useful in complex systems.
• Transparency: Fuzzy logic control systems are more transparent than traditional ones because they use natural language terms to describe inputs and outputs. This makes them easier to understand and interpret.

CHALLENGES OF FUZZY LOGIC CONTROL SYSTEMS

Fuzzy logic control systems also have some disadvantages:

• Complexity: Fuzzy logic control systems can be more complex than traditional ones because they require more rules and membership functions.
• Subjectivity: Fuzzy logic control systems can be more subjective than traditional ones. The membership functions and rules are based on human knowledge and expertise.
• Computational requirements: Fuzzy logic control systems can require more computational resources than traditional ones because they involve more complex calculations.
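The fuzzifier, rule base, inference engine and defuzzifier described above can be made concrete with a deliberately tiny sketch: a one-input heater controller with triangular membership functions and weighted-average defuzzification over singleton outputs. All membership ranges and power levels here are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_heater(temp):
    # Fuzzifier: map the crisp temperature to memberships of three fuzzy sets
    cold = tri(temp, -10, 5, 20)
    warm = tri(temp, 15, 22, 29)
    hot  = tri(temp, 24, 35, 50)
    # Rule base + inference: IF cold THEN power high; IF warm THEN low; IF hot THEN off
    # (each rule's firing strength weights a singleton output power level)
    rules = [(cold, 100.0), (warm, 30.0), (hot, 0.0)]
    # Defuzzifier: weighted average (centroid of singletons) -> crisp heater power in %
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(fuzzy_heater(5))    # fully "cold"  -> 100.0
print(fuzzy_heater(22))   # fully "warm"  -> 30.0
print(fuzzy_heater(18))   # partly cold, partly warm -> a blend between 30 and 100
```

The key behaviour is the last call: because 18 °C is partially "cold" and partially "warm", the rules fire simultaneously and the crisp output interpolates smoothly between the two power levels, which is what gives fuzzy controllers their graceful behaviour between regimes.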

10.5.1 FUZZY PID CONTROL SYSTEM

Fuzzy PID control combines the strengths of both techniques to achieve better control performance. In a fuzzy PID control system, the PID controller parameters (proportional gain, integral gain, and derivative gain) are adjusted based on fuzzy rules determined by the fuzzy inference engine.


Figure 10.9 Fuzzy PID control system.

The fuzzy inference engine takes the error and error rate as inputs and produces the change in controller parameters as output. The fuzzy rules and membership functions are defined based on expert knowledge or through a trial-and-error process. The fuzzy PID control system has several advantages over traditional PID control. Fuzzy logic can handle nonlinearities and uncertainties in the system, which are difficult to model mathematically. Fuzzy logic can also adjust the PID controller parameters in real time, resulting in better control performance. In addition, fuzzy logic can provide a more intuitive and understandable representation of the control system, which can be useful in designing and optimising the control system. Figure 10.9 displays a Fuzzy PID Control System.

10.5.2 TANK WATER LEVEL CONTROL SYSTEM

The control system of the water level in a tank has nonlinear characteristics. To overcome these, a Fuzzy PID control system can be employed. This is a variant of traditional PID control that utilises fuzzy logic to improve control accuracy and response time. In a Fuzzy PID control system, the control rules are defined using linguistic variables such as "large," "medium," and "small" to represent the inputs and outputs of the system. The membership functions are used to define the degree of membership of an input or output in each linguistic variable.

For the water level control system, the Fuzzy PID controller would take input from sensors that measure the water level in the tank and the water flow rate into the tank. The controller would then use fuzzy logic to determine the optimal valve opening to achieve the desired water level in the tank. The fuzzy logic controller would first fuzzify the input data by mapping the sensor readings to linguistic variables such as "high," "medium," and "low." The controller would then apply a set of fuzzy control rules to determine the optimal valve opening. The output of the fuzzy controller is a linguistic variable that is defuzzified to a crisp value that can be used to control the valve. The defuzzification process involves aggregating the fuzzy control signals from each rule and determining the most appropriate crisp control signal based on each output value's overall degree of membership. In the case of the water tank system, the crisp control signal is used to adjust the valve controlling the water flow into the tank. The results for the Fuzzy PID control system for water level control are shown in Figures 10.10–10.13.

Fuzzy PID control has several advantages over traditional PID control. One benefit is that it can handle nonlinear systems more effectively. Traditional PID control is designed to handle linear systems and can become unstable in nonlinear systems.
Fuzzy PID control can handle nonlinear systems because it uses linguistic variables to represent the system’s behaviour. Another advantage of fuzzy PID control is that it can handle systems with large uncertainties. Traditional PID control is sensitive to uncertainties in the system’s behaviour. Fuzzy PID control can handle uncertainties because it uses linguistic variables to represent the uncertainties.


Figure 10.10 Level of membership functions.

Figure 10.11 Fuzzy PID: rate of membership functions.

Figure 10.12 Fuzzy PID control system for water level.


Figure 10.13 Fuzzy PID water level control surface rule.

10.5.3 ADVANCED FUZZY LOGIC CONTROL SYSTEMS

In recent years, there have been several advancements in the application of fuzzy logic in PID control theory. The key improvements include Adaptive Fuzzy PID Control, Fuzzy Gain Scheduling, Fuzzy Model Predictive Control and Fuzzy Fractional Order PID Control.

10.5.3.1 Adaptive Fuzzy PID Control

Adaptive fuzzy PID control is a method of control that adapts the rules of a fuzzy PID controller in real time based on the current system behaviour. The goal of adaptive fuzzy PID control is to improve the controller's performance and to handle changes in the system's behaviour. The adaptive fuzzy PID controller consists of a fuzzy inference system, a PID controller, and an adaptation mechanism. This is illustrated in Figure 10.14. The fuzzy inference system is used to generate the control output based on the current error and the change in error. The PID controller is used to adjust the control output based on the proportional, integral, and derivative terms. The adaptation mechanism is used to adjust the fuzzy rules in real time based on the system's behaviour.

Figure 10.14 Adaptive fuzzy PID control system.

The adaptive fuzzy PID controller has three inputs: the error (e), the change in error (de/dt), and the output of the PID controller (u). These inputs are used to generate the control output using a set of fuzzy rules. The fuzzy rules define how the inputs are mapped to the output using linguistic variables and fuzzy membership functions, and they are defined based on expert knowledge or experimental data. The fuzzy rules are adjusted in real time by an adaptation mechanism that uses a set of performance indices to evaluate the controller's performance. The performance indices are calculated based on the current error, the change in error, and the output of the PID controller.

The adaptive fuzzy PID controller has several advantages over traditional PID control. It can handle complex and dynamic systems more effectively by adjusting the fuzzy rules in real time. It can improve the controller's performance by adapting the fuzzy rules to the system's behaviour. It can handle nonlinear systems more effectively using fuzzy logic to generate the control output. It can handle systems with uncertain or varying parameters by adapting the fuzzy rules to changes in the system's behaviour.

10.5.3.2 Fuzzy Gain Scheduling

Fuzzy gain scheduling is a PID control method that uses fuzzy logic to schedule the PID gains based on the current system behaviour. The basic idea is to use fuzzy logic to determine the appropriate PID gains for different operating conditions rather than using a fixed set of gains for all operating conditions. This approach can improve the controller's performance by adapting the gains to system behaviour changes.

The fuzzy gain scheduling approach to PID control has four components: a fuzzy inference system, a PID controller, a gain scheduler, and a rule base. The fuzzy inference system generates the gains for the PID controller based on the current system behaviour. The PID controller adjusts the control output based on the error signal and the gains generated by the fuzzy inference system. The gain scheduler selects the appropriate gains for the current operating condition based on the inputs from the fuzzy inference system. The rule base contains a set of fuzzy rules that define how the inputs are mapped to the output using linguistic variables and fuzzy membership functions.

The scheme has three inputs: the error (e), the change in error (de/dt), and the output of the PID controller (u). These inputs are used to generate the gains for the PID controller using a set of fuzzy rules. The fuzzy rules define how the inputs are mapped to the gains using linguistic variables and fuzzy membership functions, and they are defined based on expert knowledge or experimental data. The gain scheduler uses a set of membership functions to partition the input space into different operating regions. Each operating region corresponds to a set of PID gains that are appropriate for that region. The gain scheduler then uses a set of fuzzy rules to determine the appropriate gains for the current operating condition.
The fuzzy rules are defined based on the inputs from the fuzzy inference system and the membership functions. The fuzzy gain scheduling PID control (illustrated in Figure 10.15 for position control of a drone) has several advantages over traditional PID control. It can handle complex and dynamic systems more effectively by adapting the gains to system behaviour changes. It can improve the controller's performance by selecting the appropriate gains for different operating conditions. It can handle systems with uncertain or varying parameters by adapting the gains to system behaviour changes. In conclusion, fuzzy gain scheduling is a powerful PID control method that uses fuzzy logic to schedule the PID gains based on the current system behaviour. It can improve the controller's performance by adapting the gains to changes in the system's behaviour and selecting the appropriate gains for different operating conditions.
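A minimal sketch of the scheduling idea (with invented membership functions and gain sets): the firing strengths of a "small error" rule and a "large error" rule blend two regional PID gain sets into the gains actually applied at each instant.

```python
def mu_small(e):
    """Membership of |error| in "small": 1 at zero error, 0 beyond 1.0."""
    return max(0.0, 1.0 - abs(e))

def mu_large(e):
    """Membership of |error| in "large": 0 at zero error, 1 beyond 1.0."""
    return min(1.0, abs(e))

# Two operating regions, each with its own PID gain set (values are illustrative)
GAINS_SMALL = {"kp": 1.0, "ki": 0.5, "kd": 0.05}   # gentle gains near the setpoint
GAINS_LARGE = {"kp": 4.0, "ki": 0.1, "kd": 0.20}   # aggressive gains far from it

def scheduled_gains(e):
    """Fuzzy gain scheduling: blend the regional gain sets by rule firing strength."""
    ws, wl = mu_small(e), mu_large(e)
    total = ws + wl
    return {k: (ws * GAINS_SMALL[k] + wl * GAINS_LARGE[k]) / total
            for k in GAINS_SMALL}

print(scheduled_gains(0.0))   # pure "small error" region gains
print(scheduled_gains(2.0))   # pure "large error" region gains
print(scheduled_gains(0.5))   # a 50/50 blend of the two gain sets
```

Because the memberships overlap, the applied gains vary continuously with the error rather than switching abruptly at a region boundary, which avoids the bumps that hard gain switching can inject into the control signal.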

Fuzzy Model Predictive Control

AI-Based Design and Analysis of Control Systems

711

Figure 10.15 Fuzzy gain scheduling PID control (drone position).

Fuzzy model predictive control (FMPC) is an advanced control technique that combines fuzzy logic and Model Predictive Control (MPC) for controlling complex systems. FMPC extends traditional MPC by using a fuzzy logic-based model to predict the system's behaviour and generate control actions. In FMPC for PID control, the controller uses a fuzzy system model to predict the system's future behaviour and generate optimal control actions. The fuzzy model is constructed from fuzzy rules and membership functions, which are defined from expert knowledge or experimental data and use linguistic variables to describe the relationships between the system inputs and outputs; the model is then used to predict the system's behaviour over a finite time horizon. FMPC for PID control has three components: a fuzzy model, a predictive controller and an optimisation algorithm. The predictive controller uses the predicted behaviour of the system to generate optimal control actions by solving an optimisation problem that minimises a cost function over the prediction horizon. The cost function is typically a function of the error between the predicted and desired outputs, and the control actions that minimise it are selected as the optimal actions. The optimisation algorithm solves this problem using a combination of numerical methods and heuristics.
FMPC for PID control has several advantages over traditional PID control. It handles complex and nonlinear systems more effectively by using a fuzzy model to predict the system's behaviour; it generates optimal control actions from that prediction rather than relying on a fixed set of gains; and it accommodates uncertain or varying parameters by adapting the fuzzy model and the optimisation algorithm as the system changes. In summary, FMPC for PID control combines fuzzy logic and Model Predictive Control into a powerful method for controlling complex systems, using a fuzzy model to predict behaviour and generate optimal control actions.
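The receding-horizon idea can be illustrated with a toy example in which a piecewise model (a crude stand-in for a fuzzy blended model, with a slow and a fast regime) predicts the state over a short horizon, and the candidate action with the lowest predicted cost is chosen. The model, the candidate set, the horizon, and the cost weights are all illustrative assumptions, and exhaustive search stands in for a real optimisation algorithm.

```python
def predict(x, u, horizon=5):
    """Roll a toy first-order model forward over the horizon. The gain
    switches between regimes with the state, a crude stand-in for a
    fuzzy blended model."""
    trajectory = []
    for _ in range(horizon):
        gain = 0.5 if abs(x) > 1.0 else 1.0
        x = x + gain * u
        trajectory.append(x)
    return trajectory

def fmpc_step(x, ref, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Return the candidate action minimising predicted tracking error
    over the horizon plus a small control-effort penalty."""
    def cost(u):
        return sum((xi - ref) ** 2 for xi in predict(x, u)) + 0.1 * u ** 2
    return min(candidates, key=cost)
```

Only the first action of the optimised sequence would be applied before re-solving at the next step, which is the defining receding-horizon feature of MPC.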

712

Design and Analysis of Control Systems

Figure 10.16 Fuzzy fractional order PID control system.

10.5.3.4

Fuzzy Fractional Order PID Control

Fuzzy fractional order PID control is a method that uses fractional order calculus to design the fuzzy PID controller. Fractional order calculus is a mathematical tool for handling systems with non-integer order dynamics, and the approach deals with systems exhibiting long memory and slow dynamics more effectively than traditional PID control. It is an advanced control technique that combines the benefits of fuzzy logic control and fractional order PID control. Fractional order control has gained significant attention in recent years because it can provide better control performance than integer-order control. In this technique, the PID controller parameters, namely the proportional, integral, and derivative gains, are fractional values that can be tuned with an optimisation algorithm to achieve the desired control performance. The fuzzy logic controller determines the control actions based on the system's error and error rate: fuzzy rules and membership functions, designed from expert knowledge or experimental data, map these input variables to the output variable, the control action. A conversion algorithm then transforms the fuzzy controller output into fractional order PID controller parameters. The fractional order PID controller uses fractional order derivatives and integrals to calculate the control action; these can capture more complex dynamics than their integer-order counterparts.
A Fuzzy Fractional Order PID Control System is shown in Figure 10.16, while Figure 10.17 displays a comparison of different Fuzzy PID control systems.
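A fractional derivative has no single-sample formula; a standard discrete realisation is the Grünwald–Letnikov approximation, sketched below for a uniformly sampled signal. This is a generic illustration of that definition (the function names and the truncated history length are assumptions); for α = 1 it reduces to the ordinary first difference.

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_j = (-1)^j * binom(alpha, j),
    generated by the recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1 - (alpha + 1) / j))
    return w

def frac_deriv(samples, alpha, h):
    """Approximate the order-alpha derivative at the newest sample of a
    uniformly spaced history (oldest first, newest last, spacing h)."""
    w = gl_weights(alpha, len(samples) - 1)
    return sum(wj * samples[-1 - j] for j, wj in enumerate(w)) / h ** alpha

# Sanity check: alpha = 1 gives the usual backward difference, (5 - 2) / 1.
d = frac_deriv([0.0, 2.0, 5.0], alpha=1.0, h=1.0)
```

For non-integer α, every past sample receives a non-zero weight, which is exactly the long-memory property the text attributes to fractional order controllers.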

10.6

ARTIFICIAL NEURAL NETWORKS-DRIVEN CONTROL SYSTEMS

Artificial Neural Networks (ANNs) are computational methods that simulate the human brain's intelligent and parallel information-handling features. With extensively interconnected and powerful processing elements, they are emerging as the adaptive controllers of the 4IR. An ANN comprises many neurons acting as interconnected processing units and is structurally similar to the configuration of the biological brain. The main attractive properties of ANNs are their self-learning and distributed memory capabilities. The self-learning features are applied to learn the system dynamics, thereby tuning the controller parameters automatically. As shown in Figure 10.18, an ANN consists of three layers: input, hidden (middle), and output; more than one hidden layer can be used. Several input signals go to each neuron. Using the activation function, each neuron computes its activation level and sends it to the neurons in the next layer.
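The layer computation just described (weighted sum, bias, activation) can be written out directly. This is a generic sketch using a sigmoid activation; the layer sizes and weight values are arbitrary illustrative choices, not a network from the text.

```python
import math

def sigmoid(z):
    """Common activation function: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    """One layer: each neuron forms a weighted sum of all inputs,
    adds its bias, and applies the activation function."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Two inputs -> two hidden neurons -> one output neuron.
hidden = layer_forward([0.5, -0.2], [[1.0, 0.5], [-0.3, 0.8]], [0.0, 0.1])
output = layer_forward(hidden, [[0.7, -0.4]], [0.2])
```

Stacking such layers, with the output of one feeding the next, gives the input-hidden-output structure of Figure 10.18.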


Figure 10.17 Comparison of different Fuzzy PID control systems.

Figure 10.18 A typical artificial neural network.

Figure 10.19 An MPPT-based artificial neural network.

Figure 10.20 An ANN-based PID control system for a drone.

Consider the Solar PV Power System discussed in Chapter 9. A simple structure of the ANN-based Solar PV MPPT is shown in Figure 10.19. The irradiance, temperature, VOC, and ISC are usually used at the input layer of the ANN, while the output layer usually contains the duty cycle, voltage, or current. The user defines the number of nodes in each layer, which can be varied as necessary. The hidden layer and the training process are the most influential factors in an ANN's performance. The training process can take months before the network responds properly under different climate conditions. When an ANN is trained for a specific PV system or climate, a change in conditions may mean the implemented ANN no longer responds accurately, which is one of the main disadvantages of this method. Moreover, the number of hidden nodes can be increased to improve accuracy, but this increases the computational time, so an MPPT-based ANN may not respond quickly under rapidly changing climate conditions [25]. ANNs can be used to model and control nonlinear and complex systems by learning the underlying system dynamics from data. ANN-based control can handle uncertainties and disturbances in the system and can achieve more precise and accurate control. ANNs can also be used to implement PID control: an ANN-based PID controller can be trained to perform the same function as a conventional PID controller. The ANN takes as input the error signal (i.e., the difference between the setpoint and the process variable) and generates a control output that adjusts the process variable. The ANN is trained using a set of input-output pairs; training involves changing the weights of the connections between the neurons to minimise the error between the desired output and the network output. One advantage of an ANN-based PID controller is that it can adapt to changes in the process dynamics over time, because the ANN can be retrained with new data to update its parameters. A further advantage is that it can be used for nonlinear systems that are difficult to model using conventional control techniques. However, ANN-based PID controllers can be more complex than conventional PID controllers, and the training process can be time-consuming. In addition, the performance of an ANN-based PID controller depends on the quality and quantity of the training data. Figure 10.20 shows an ANN-based PID control system for a drone.
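As a minimal illustration of that training loop, the sketch below fits a single linear "neuron" u = w * e to input-output pairs recorded from a purely proportional controller. Everything here is an assumption for illustration (the gain Kp = 2.5, the learning rate, the data points); a practical ANN-based PID controller would use a multilayer network with the error, error rate and error sum as inputs.

```python
def train_to_mimic_controller(samples, epochs=200, lr=0.1):
    """Gradient descent on squared error for a one-weight 'network'
    u = w * e: the simplest instance of fitting a control law to data."""
    w = 0.0
    for _ in range(epochs):
        for e, u_target in samples:
            u = w * e                       # forward pass
            w -= lr * (u - u_target) * e    # gradient of (u - u_target)**2 / 2
    return w

# Input-output pairs recorded from a proportional controller with Kp = 2.5.
data = [(e, 2.5 * e) for e in (-1.0, -0.5, 0.5, 1.0)]
w = train_to_mimic_controller(data)
```

After training, w has converged to the gain that generated the data; retraining on fresh data is exactly the adaptation mechanism described above.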

10.7

THE FOURTH INDUSTRIAL REVOLUTION

As introduced in Chapter 1, the Fourth Industrial Revolution (4IR) is a global, all-embracing technological transformation that will radically and fundamentally alter the way individuals, institutions, businesses, communities and countries operate, work, organise, relate and innovate, primarily because of the scale, scope, speed, and complexity of the changes. It is characterised by a range of innovative technologies that are combining the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even redefining the very meaning of humanity. Consequently, this revolution demands an interdisciplinary, comprehensive, holistic and integrated ecosystem approach involving all stakeholders across the globe, such as multilateral institutions, national governments, the public sector, civil society, the private sector, academia and the media. The 4IR requires new and different skills, such as critical thinking, learning how to think, problem-solving, structured thinking, blended learning, interdisciplinary approaches, ecosystem thinking, emotional intelligence (EQ), cultural intelligence (CQ), judgement, acquisition of key competencies, negotiation, cognitive capacity, flexibility, knowledge production, knowledge management and knowledge ownership. The 4IR can also be interpreted as the digital transformation of industrial value chains, from manufacturing and value creation processes to the smartisation of plants, suppliers, customers, and distribution channels [31].

10.7.1

HISTORY OF INDUSTRIAL REVOLUTIONS

Globally, the development and growth of industry, commerce, and society have undergone four major disruptive phases. These stages of industrialisation are illustrated in Figure 10.21.

Figure 10.21 The four industrial revolutions.

10.7.1.1

The First Industrial Revolution

The First Industrial Revolution (1IR) occurred from about 1760 to 1840, primarily in Europe and the United States, through mechanisation and manufacturing driven by water and steam power. It marked the transition of many agricultural societies to industrial and urban economies. The development of the steam engine drove the textile and iron industries, and the invention of other machine tools, such as the screw-cutting lathe, cylinder boring machine, and milling machine, further enhanced the revolution. In summary, the 1IR took place in the late 18th and early 19th centuries, dramatically transforming factories through the advent of steam engines. This revolution enabled further mechanisation of physically demanding and highly repetitive tasks. Productivity increased, and production costs decreased. In turn, living standards improved, and cities grew around factories. Steam engines also enhanced the development of the printing press and the railway network.

10.7.1.2

The Second Industrial Revolution

Around the late 19th and into the early 20th century, the Second Industrial Revolution (2IR) took shape. The introduction of electric power led to unprecedented economic growth in pre-existing industries and the proliferation of new ones, such as telecommunications, electricity and steel. This led to the division of labour and enhanced mass production. In the early 20th century, the 2IR further unleashed the chemical, petroleum refining, distribution, and automotive sectors. Huge industrial corporations such as US Steel, General Electric, Standard Oil and Bayer AG came into existence during this period. In summary, the 2IR was a period of tremendous industrial transformation, ushered in during the late 19th century with the introduction of industrial assembly lines powered by electrical energy from oil and gas. This enabled considerable gains in efficiency, leading to mass production.

10.7.1.3

The Third Industrial Revolution

The 1960s ushered in the Third Industrial Revolution (3IR). This was the computer and digital revolution, whose key manifestation became the Information Communication Technologies (ICTs) and their extensive application across all industries. The development of semiconductors, mainframe computing, personal computing, hand-held devices, mobile telephony, and the Internet drove the 3IR. Digital transformation from analogue electronics and mechanically driven plants to ICTs, production automation, and advanced engineering techniques (including advanced control systems) was the hallmark of this period. Put differently, the 3IR was a significant global industrial transformation driven by the widespread application of electronics and basic ICTs. This was particularly so in manufacturing, leading to new opportunities for engineering automation and ICT-driven commerce in general. This revolution led to novel technological advances and further increased productivity worldwide.

10.7.1.4

The Fourth Industrial Revolution

Indeed, the Fourth Industrial Revolution (4IR) is upon us. It started around 2011 and is anchored by and built upon the 3IR. It represents emerging technology breakthroughs in several fields, including big data, robotics, Artificial Intelligence, advanced analytics, virtual and augmented reality, the Metaverse, nanotechnology, high-performance computing, biotechnology, the Internet of Things (IoT), the Internet of Everything (IoE), 3D printing and autonomous vehicles. These 4IR technologies have dramatically improved private and public institutions' efficiency, productivity, quality, impact and scale. It is instructive to emphasise that the 4IR is built on the foundation of the 3IR drivers – electronics and ICTs. The physical world of industrial production is merging with the digital world of information. The 4IR is characterised by connected supply chains, digitised industrial production processes, and connected workers, machine components, materials, to-be-manufactured objects, and even logistics, with continuous data exchange. This brave new world is characterised by self-organising factories, sophisticated Chatbot systems such as ChatGPT, drones, driverless cars, and autonomous warfare (Figure 10.22).

Figure 10.22 4IR smart, digital and interconnected production.

10.7.2

KEY 4IR TECHNOLOGIES

The key technologies of the 4IR include Artificial Intelligence (AI), Artificial General Intelligence (AGI), Intelligent Algorithms (IA), Augmented Reality (AR), Human Augmentation (Human 2.0), Big Data, the Internet of Things (IoT), the Internet of Everything (IoE), Nanotechnology, Advanced Analytics, Autonomous Vehicles (drones and driverless cars), 3D Printing/Additive Manufacturing, Biotechnology (the Human Genome Project), Materials Science, Renewable Energy, Energy Storage, Robotics and Mechatronics, Quantum Computing, Cloud Computing, Blockchain Technology, Chatbot Systems such as ChatGPT, and Virtual Classrooms, Laboratories, Libraries, and Teachers (augmentation of the educational experience).

10.7.2.1

The Internet of Things (IoT)

The Internet of Things is the network of physical objects with embedded technology that interact, communicate and sense their internal states and external environment, and it forms the heart of the Fourth Industrial Revolution. It is an Internet ecosystem of connections between people and things, between things and things, and between people and people. Communication between cyber-physical systems through the Internet permeates everything, at least 25.6 billion things. This has dramatically advanced the quality of life and the productivity of society, individuals, and enterprises. IoT is the underlying pillar of smart living, cities, energy, transport, industry, health, buildings, and homes – the smartisation of everything.

10.7.2.2

The Internet of Everything (IoE)

The Internet of Everything is a broader concept that refers to devices and consumer products connected to and controlled through the Internet, linked to expanded digital features. IoE is a philosophy in which technology's future is characterised by many diverse types of appliances and devices connected to the global Internet.

10.7.2.3

Big Data Analytics

Big data are large sets of complex structured and unstructured data of high volume, high velocity, and wide variety. Such information assets cannot be handled by traditional processing techniques and algorithms; they demand cost-effective, innovative forms of information processing that enhance insight, decision-making, and process automation. Big data analytics improves and increases the competitive advantage of manufacturing companies. It promotes data collection from multiple reliable sources and comprehensive data analytics for real-time decision-making, monitors manufacturing processes, and helps detect failures.

10.7.2.4

Cloud Computing

Cloud computing entails storing and computing vast amounts of data on remote servers. This promotes manufacturing activities, enabling modularisation and reliable service delivery. Cloud computing enables on-demand, remotely and conveniently accessed networks of shared configurable digital resources, including networks, servers, services, storage, and applications. Such resources can be provisioned and released with minimal management effort or service-provider interaction. It is Internet-based computing, where all information is digitalised in the cloud (remote storage and processing), and users can access it anytime.

10.7.2.5

3D Printing/Additive Manufacturing

3D printing, or Additive Manufacturing, is a digital fabrication technology that produces physical objects from a geometrical representation in an automated process of building three-dimensional (3D) shapes. This helps optimise smart manufacturing and lean manufacturing.

10.7.2.6

Mechatronics

Mechatronics is the synergistic integration of mechanics, electronics and computer science to produce optimum products such as robots, drones and advanced space vehicles. These entities are dynamic systems that can be described as mechatronic systems. Their modelling and control are more complex but follow the same methodology as for electromechanical systems. The definition and extensive impact of mechatronics are illustrated in Figure 10.23. A Modular and Scalable Wheeled Mobile Robot is a great example of a mechatronic system [23].

10.7.2.7

Robotics

Robotics refers to the design, construction and operation of machines (robots) that carry out activities normally ascribed to human beings, such as vision, path planning, navigation, and decision-making. The field is about the intelligent connection of perception to action. Put differently, robotics is an interdisciplinary field spanning computer science, electro-mechatronics, and information engineering, in which sensory feedback and data are computed and controlled to support activities and applications that substitute for human actions. The objective is the performance of a variety of tasks in which the robot acquires information from its environment and responds intelligently. Robots constitute an advanced class of dynamic systems called robotic systems. Such systems are a subset of mechatronic systems and are similarly modelled and then controlled. The modelling and control system design of Modular and Scalable Wheeled Mobile Robots is detailed in Ref. [23].


Figure 10.23 Mechatronics: definition and impact.

10.7.2.8

Augmented Reality

Augmented reality (AR) is a technology between virtual reality and telepresence: the real world is observed but supplemented with a virtual experience. Virtual reality refers to computer-generated 3D environments that allow a person to enter and interact with a synthetic environment to varying degrees in the artificial computer world; this is the simulation of some reality or of a complex phenomenon. Telepresence entails extending a user's sensory-motor facilities and problem-solving abilities within a confined or remote environment. AR is defined by three aspects: combining real and virtual worlds, interactivity in real time, and registration in 3D.

10.7.2.9

Human Augmentation (Human 2.0)

Human augmentation comprises technologies that enhance human productivity or capability and add to the human body's capacity. The task is the creation of cognitive and physical improvements as an integral part of the human body and its capabilities. For example, active control systems create limb prosthetics with characteristics that exceed the highest natural human performance. Human augmentation entails machines and humans working together. It is about improving human performance using machine intelligence – the robot and the human being as one.

10.7.2.10

Artificial Intelligence

As already articulated, Artificial Intelligence (AI) or Machine Intelligence refers to intelligence programmed into and demonstrated by machines. This capability is in contrast to the natural intelligence manifested by humans and other animals. Artificial General Intelligence (AGI) is the intelligence of a machine that can learn, understand and execute any intellectual task that a person can. Artificial Intelligence is the ability of computer or technological applications to perform complex functions associated with human intelligence.

10.7.2.11

Superintelligence

A superintelligent agent's intelligence far surpasses that of the brightest and most gifted person. The fear is that such agents would replace humans as the dominant life form on Earth. Such capability in a machine also raises the debate about having human emotions (empathy, shame, embarrassment, love, sympathy) in a machine. Can we have emotional intelligence, cultural intelligence, existential intelligence and intentionality in a machine? Can a machine say, "I think, therefore I am," as postulated by René Descartes?

10.7.2.12

Blockchain Technology

Blockchain is a shared, secure and immutable ledger that facilitates the recording of transactions, the tracking of assets and the building of trust in a business network. In this regard, an asset is any tangible (land, house, car, cash) or intangible (patents, intellectual property, branding, copyrights) entity. A blockchain is a secure series of blocks, each containing information; the kind of information stored inside a block depends on the type of blockchain, and the first block in the chain is called the Genesis Block. Blockchain is a storage technology for saving different kinds of data on decentralised networks, and it provides a way to create a tamper-proof log of sensitive activity securely and efficiently. Put differently, Blockchain Technology creates a decentralised technological environment where cryptographically validated transactions and data are not within the control of any third-party organisation: completed transactions are recorded in an immutable ledger in a verifiable, transparent, secure and permanent way, with a timestamp and other details. A blockchain is a chronological chain of blocks; each block can be considered a page in a ledger, and the chain grows continuously as miners discover new blocks that are appended to the existing blockchain.

10.7.2.13

Cryptocurrency

A cryptocurrency is a digital money system developed to work as a medium of exchange via a computer network that is not controlled by or reliant on any central regulatory authority: no government, central bank or ordinary bank is required to uphold or maintain it. Like the US dollar, it is a medium of exchange, and all cryptocurrencies have a monetary value. Cryptography is the study and practice of efficient, secure communications techniques that enable only the intended recipient and the sender of a message to view its contents; crypto means concealed, secret, secure, and protected. Bitcoin is an example of a cryptocurrency and an application of Blockchain. It is a decentralised digital currency that can be transferred on the Bitcoin network, peer to peer. Bitcoin transactions are formally verified by network nodes through cryptography and stored in a publicly distributed ledger, a blockchain. A Bitcoin block contains information about the sender, the receiver, and the number of bitcoins to be transferred. The Bitcoin ecosystem is a public network where anyone, including a malicious participant, can participate without restriction. Even though no central authority organises it, it functions smoothly!

10.7.2.14

The Metaverse (Digital Twins)

This is a novel innovation where virtual spaces are created in which users can interact with a computer-generated environment and other users. For example, "Meta (Facebook) is a way to bring the metaverse to life and help people connect, find communities and grow businesses," as Mark Zuckerberg puts it. A digital twin of an industrial plant is a virtual plant designed to be an exact digital replica of the plant (with all its physical assets) to allow rapid and off-plant monitoring and testing. Digital twin technology is simulation technology that integrates multidisciplinary, multiphysics, multi-probability and multiscale modelling, making full use of physical models, operational history, sensor updates, and other data. A digital twin is a virtual model that completely corresponds to, and is consistent with, a physical entity in the real world, simulating its behaviour and performance in a real-time environment. It is the foundation of an intelligent manufacturing system in which, for optimality, feedback is realised from the physical to the virtual world and vice versa.

10.8

A 4IR EXAMPLE: AN INTELLIGENT FLEET MANAGEMENT SYSTEM

Fleet management systems provide the ability to manage the operational vehicles owned by companies or organisations. At the heart of these systems is the GPS device, which determines the location of the vehicles by receiving signals from GPS satellites orbiting the Earth. This location data can then be used to track the movements of the vehicles in real time, monitor driver behaviour and performance, optimise routes, and improve fuel efficiency. Additionally, fleet management systems often incorporate other features such as maintenance scheduling, vehicle diagnostics, and driver communication. By providing a comprehensive view of the fleet, these systems enable organisations to increase efficiency, reduce costs, and improve overall productivity.

10.8.1

SYSTEM COMPONENTS

A fleet management system typically consists of hardware and software components. The hardware component includes GPS devices installed in vehicles to track their location and other sensors such as accelerometers and temperature sensors. The software component includes a central server that receives data from the GPS devices and provides a user interface for fleet managers to view the data and perform various tasks. The GPS devices installed in the vehicles continuously send data to the central server, including the vehicle's location, speed, and direction of travel. This data is then processed by the software, which can provide real-time information on the location and status of each vehicle in the fleet. Fleet managers can use this information to optimise routing, improve fuel efficiency, and ensure that vehicles are being used in the most cost-effective manner possible. In addition to tracking the location of vehicles, a fleet management system can also monitor other aspects of vehicle performance, such as fuel consumption, engine performance, and maintenance schedules. This allows fleet managers to proactively identify and address issues before they become more serious problems. The details of a typical Intelligent Fleet Management System are illustrated in Figure 10.24.

10.8.2

SUMMARY OF INTELLIGENT CAPABILITIES

The capabilities of an Intelligent Fleet Management System include the following:

• Real-time monitoring and control of vehicle speed, with alerts in case of overspeeding.
• Calculation of vehicle usage, fuel consumption, distance travelled, time the vehicle was turned on, and many other metrics.
• Ability to constrain the vehicle to a specific route, receive alerts in case of deviation from the authorised path, and turn off the vehicle remotely.
• Two-way communication with the driver and receiving messages.
• Ability to install and connect various vehicle sensors and send their data to the centre.


Figure 10.24 A 4IR fleet management system.

• Customisation of hardware and software according to the needs of organisations and companies.
• Provision of practical and management reports, including analysis thereof.
• Ability to generate reports on fuel alerts received from vehicles.
• Reports on geographical entry and exit, speed violations, vehicle stops, and actual usage of vehicles, with appropriate analysis to increase fleet efficiency.
• Ability to monitor and evaluate vehicle performance, leading to planned maintenance and repair services for the fleet.
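As a taste of the server-side logic behind the overspeeding capability, the sketch below scans (vehicle, speed) telemetry records and returns the vehicles that exceeded a configured limit. The limit, the record format and the names are illustrative assumptions, not details of any particular fleet product.

```python
SPEED_LIMIT_KMH = 80.0   # illustrative fleet-policy value

def overspeed_alerts(telemetry):
    """Return the sorted ids of vehicles with at least one report above
    the speed limit; a rule like this runs on each incoming GPS report."""
    return sorted({vid for vid, speed_kmh in telemetry
                   if speed_kmh > SPEED_LIMIT_KMH})

records = [("truck-1", 72.0), ("truck-2", 95.5), ("truck-1", 83.0)]
alerts = overspeed_alerts(records)   # ["truck-1", "truck-2"]
```

In a deployed system, the same pattern generalises to geofence entry/exit and route-deviation checks, with alerts pushed back to the fleet manager's interface.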

10.9

DESIGN AND ANALYSIS OF A DRONE’S CONTROL SYSTEM

An Unmanned Aerial Vehicle (UAV) designed to fly autonomously or be piloted remotely is called a drone. It is typically equipped with a range of sensors and cameras that allow it to gather data and capture images or videos from the air. Drones are used in a variety of applications, including agriculture, construction, surveying, event coverage, military attacks, and film-making. Depending on their intended use, drones can be simple, small and lightweight, or complex, large and heavy-duty.

10.9.1

COMPONENTS OF THE CONTROL SYSTEM

A control system for a drone typically includes the following components:

• Sensors: The drone has various sensors such as accelerometers, gyroscopes, magnetometers, GPS, and cameras. These sensors provide real-time data about the drone's orientation, speed, altitude, location, and surroundings.
• Flight controller: The flight controller is the drone's brain. It processes the data from the sensors and sends commands to the motors to adjust the drone's orientation, speed, and altitude. The flight controller uses a PID algorithm to maintain stability and control the drone's movements.


• Communication system: The drone is equipped with a communication system that allows it to receive commands from the ground control station and send back telemetry data. The communication system can use different technologies such as Wi-Fi, Bluetooth, or radio frequency.
• Power system: The drone has a power system consisting of one or more batteries and a power distribution board. The power system provides electricity to the motors, flight controller, and other electronic components.
• Motors and propellers: The drone has four or more motors and propellers that generate the thrust needed to lift and manoeuvre it. The motors and propellers can be controlled independently to adjust the drone's orientation, speed, and altitude.

Figure 10.25 shows the drone communication framework.
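Before the flight controller acts on them, the raw gyroscope and accelerometer readings listed above are typically fused. A common lightweight technique is the complementary filter, sketched generically below; the blend constant 0.98 is a typical illustrative value, not one taken from the text.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse one attitude axis: the gyro-integrated angle is smooth but
    drifts, the accelerometer angle is noisy but drift-free, so blend
    mostly the former with a little of the latter."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# With the drone held still, the estimate converges to the accelerometer angle.
est = 0.0
for _ in range(200):
    est = complementary_filter(est, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

More capable flight controllers replace this with a Kalman filter, but the principle of weighting fast and slow sensors against each other is the same.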

10.9.2 DESIGNING THE DRONE CONTROL SYSTEM

In a drone system, the PID controller uses feedback from various sensors, such as accelerometers, gyroscopes, and GPS, to adjust the drone’s position, orientation and altitude. The PID controller works by continuously measuring the drone’s current state, comparing it to the desired state, and then adjusting its control signals to minimise the difference between the two states. For example, if the drone is not maintaining a constant altitude, the PID controller will use feedback from the drone’s altitude sensor to adjust the motor speed until the desired altitude is achieved. The Proportional term (P) produces a control signal proportional to the difference between the desired and current altitudes. The Integral term (I) produces a control signal proportional to the error accumulated over time. The Derivative term (D) produces a control signal proportional to the rate of change of the error. By continuously adjusting the drone’s control signals in this way, the PID controller enables the drone to maintain a stable flight and follow a pre-planned flight path (Figure 10.26). The control model can be further enhanced by incorporating additional capabilities such as obstacle avoidance, autonomous flight modes, and computer vision, implemented using advanced algorithms and Machine Learning techniques.
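The altitude-hold loop described above can be sketched as a discrete PID update. The following Python fragment is illustrative only: the gains, time step, and the single-axis "thrust minus gravity" plant are assumptions for demonstration, not parameters of any particular drone.

```python
# Illustrative discrete PID altitude controller for a toy drone model.
# All gains and the plant model are assumptions, not from the text.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_meas = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement            # P: current error
        self.integral += error * self.dt          # I: accumulated error
        if self.prev_meas is None:
            deriv = 0.0
        else:
            # Derivative on the measurement avoids the "derivative kick"
            # when the setpoint changes abruptly.
            deriv = -(measurement - self.prev_meas) / self.dt
        self.prev_meas = measurement
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def simulate(pid, z_ref=10.0, steps=3000, dt=0.01):
    """Toy vertical dynamics: thrust u accelerates a unit mass against gravity."""
    z, vz = 0.0, 0.0                  # altitude (m), vertical speed (m/s)
    for _ in range(steps):
        u = pid.update(z_ref, z)      # thrust command from the PID loop
        az = u - 9.81                 # net vertical acceleration
        vz += az * dt
        z += vz * dt
    return z

pid = PID(kp=8.0, ki=2.0, kd=6.0, dt=0.01)
final_altitude = simulate(pid)        # settles near the 10 m reference
```

Note how the integral term is what removes the steady-state offset caused by gravity: a pure PD loop would hover slightly below the reference.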

Figure 10.25 Drone communication framework.


Figure 10.26 A simple drone control system.

The Transfer Function model of a drone depends on the specific model and design of the drone. It can be represented as a set of linear equations that describe the relationship between the input and output signals of the system. The transfer function of a drone captures the dynamics of the drone’s various subsystems, such as the motors, propellers, control surfaces, and sensors. The model can be derived from the system’s equations of motion, which describe the drone’s translational and rotational dynamics. These equations take into account the drone’s mass, inertia, aerodynamics, and control inputs. In general, the model of a drone can be represented as a set of linear, time-invariant equations in the form of a State-Space model – the State-Variable Matrix form. The state vector contains the drone’s position, velocity, attitude, and angular rates, and the input vector contains the motor commands or control signals. Due to the nonlinearity and complexity of the drone system, the transfer function is an approximation, obtained by linearisation around a specific operating point. The derivation of the linear State-Variable Matrix model for the drone is left as an exercise for the reader. A more complex Transfer Function for a drone system can be derived by considering the drone’s translational and rotational motion dynamics.
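As a minimal illustration of the State-Variable Matrix form (the full drone model is left as an exercise), the sketch below considers only the linearised vertical (heave) channel. The mass and matrices are assumed for demonstration, not taken from any particular drone.

```python
# Minimal state-space sketch: linearised vertical (heave) channel only.
# State x = [altitude z, vertical speed vz]; input u = net thrust
# deviation from hover. The mass and matrices are illustrative.

m = 1.2                      # assumed drone mass (kg)
A = [[0.0, 1.0],             # z_dot  = vz
     [0.0, 0.0]]             # vz_dot = u / m  (after linearising about hover)
B = [0.0, 1.0 / m]

def step(x, u, dt=0.01):
    """One forward-Euler step of x_dot = A x + B u."""
    dx = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
          A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    return [x[0] + dx[0] * dt, x[1] + dx[1] * dt]

x = [0.0, 0.0]
for _ in range(100):         # 1 s of a constant 0.6 N extra thrust
    x = step(x, 0.6)
# x[1] is now 0.5 m/s (a = 0.6/1.2 m/s^2 for 1 s); x[0] is roughly 0.25 m.
```

The full drone model follows the same pattern with a 12-dimensional state (position, velocity, attitude, angular rates) and four motor inputs.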

10.9.3 ADVANCED DRONE CONTROL SYSTEMS

Several advanced control strategies can be used to improve the performance and capabilities of drones. Here are some examples of those sophisticated methods:

• Model Predictive Control (MPC): This is a control strategy that uses a model of the drone’s dynamics to predict its future behaviour and generate optimal control actions that satisfy a set of constraints. MPC can handle nonlinearities and uncertainties in the system and can be used to achieve more advanced flight behaviours, such as trajectory tracking, obstacle avoidance, and formation flying.
• Adaptive Control (AC): Adaptive control is a control strategy that adjusts the control parameters of the drone in real time based on the changes in the system’s dynamics or the environment. Adaptive control can improve the drone’s robustness to uncertainties and disturbances and can be used to achieve more precise and accurate flight behaviours.
• Sliding Mode Control (SMC): This is a nonlinear control strategy that generates control actions that keep the drone on a sliding surface defined by a reference trajectory. SMC can handle uncertainties and disturbances in the system and can achieve fast and precise tracking of reference trajectories.
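To make the sliding-surface idea concrete, here is a toy sketch of SMC on a one-dimensional double integrator. The surface slope, the gain, and the tanh smoothing (used in place of a hard sign function to limit chattering) are all illustrative assumptions, not a drone design.

```python
import math

# Toy sliding mode controller for a 1-D double integrator (position
# regulation to x_ref = 0). Parameters are illustrative assumptions.

def smc(e, e_dot, lam=2.0, k=15.0):
    """Sliding surface s = e_dot + lam*e; the control drives s to zero,
    after which the error decays along the surface as e_dot = -lam*e."""
    s = e_dot + lam * e
    return -k * math.tanh(s / 0.1)   # smoothed sign(s) reduces chattering

x, v = 5.0, 0.0                      # start 5 m from the reference
dt = 0.001
for _ in range(10000):               # simulate 10 s
    u = smc(x, v)                    # error e = x - 0
    v += u * dt                      # plant: x_ddot = u (unit mass)
    x += v * dt
# Both x and v end up very close to zero.
```

The reaching phase (driving s to zero) is fast and robust to bounded disturbances; the sliding phase then gives exponential error decay at the rate set by lam.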


Figure 10.27 Fuzzy gain scheduling PID drone control system.

• Reinforcement Learning (RL): This is a Machine Learning technique that can be used to train a drone to learn optimal control policies from interaction with the environment. RL can handle complex and nonlinear dynamics and can be used to achieve more advanced flight behaviours, such as autonomous navigation, search and rescue, and precision agriculture.
• Fuzzy gain scheduling PID control: This technique, discussed in Section 10.5.3, is a control method that uses fuzzy logic to schedule the PID gains based on the current system behaviour. The basic idea is to use fuzzy logic to determine the appropriate PID gains for different operating conditions rather than using a fixed set of gains for all operating conditions. Figure 10.27 displays a typical Fuzzy Gain Scheduling PID drone control system.

These advanced control strategies require more sophisticated algorithms and sensors than traditional PID control, but they can significantly improve the performance and capabilities of drones. The choice of control strategy depends on the specific application and requirements of the drone system, as well as the computational resources and sensors available.
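A minimal sketch of the fuzzy gain-scheduling idea, for the proportional gain only: the membership shapes, linguistic labels, and gain values below are illustrative assumptions, not the design of Section 10.5.3.

```python
# Fuzzy gain scheduling sketch for Kp only. Membership functions,
# labels, and gain values are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def schedule_kp(abs_error):
    # Fuzzify |error| into SMALL / MEDIUM / LARGE (assumed range 0..10).
    mu = {
        "SMALL":  tri(abs_error, -4.0, 0.0, 4.0),
        "MEDIUM": tri(abs_error,  2.0, 5.0, 8.0),
        "LARGE":  tri(abs_error,  6.0, 10.0, 14.0),
    }
    # Rule base: SMALL error -> gentle gain, LARGE error -> aggressive gain.
    kp_for = {"SMALL": 2.0, "MEDIUM": 5.0, "LARGE": 9.0}
    # Defuzzify by weighted average over singleton consequents.
    total = sum(mu.values())
    if total == 0.0:
        return kp_for["LARGE"]       # fallback outside the modelled range
    return sum(mu[k] * kp_for[k] for k in mu) / total

# schedule_kp(0.0) -> 2.0, schedule_kp(5.0) -> 5.0, schedule_kp(10.0) -> 9.0,
# with smooth blending at intermediate errors.
```

In a full design the same machinery would also schedule Ki and Kd, and the rule base would typically use both the error and its rate of change as inputs.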

10.10 DESIGN AND ANALYSIS OF A DRIVERLESS CAR’S CONTROL SYSTEM

Driverless cars are autonomous vehicles that use a combination of sensors, cameras, and Artificial Intelligence (AI) algorithms to navigate roads and make driving decisions without human input. They use various technologies to control their movement without human intervention. These technologies may include sensors, cameras, the Global Positioning System (GPS), radar, and computer algorithms that allow the vehicle to perceive its surroundings and decide how to navigate them. The primary goal of autonomous cars is to increase safety, efficiency, and convenience by reducing human error and eliminating the need for a human driver. However, the development and implementation of autonomous vehicles also raise ethical, legal, and social concerns. In summary, a self-driving or driverless car is a vehicle that uses various sensors – such as cameras, lidar, and radar – to perceive its environment, detect obstacles, and navigate roads to its destination without any human input. Several companies, including Tesla, Google, and Uber, are developing autonomous vehicles. They have the potential to revolutionise transportation by reducing accidents and congestion, thus making transportation more efficient and accessible. However, there are still challenges to overcome before they can be widely adopted, including safety concerns and regulatory issues.


There is a range of levels of autonomy for driverless cars, starting from level 0 (no automation) and ending at level 5 (full automation). At level 0, the driver has complete control of the vehicle, while at level 5, the car is capable of fully autonomous driving with no need for human input. At the lower levels of autonomy, such as levels 1 and 2, the driver may still have some control over the vehicle, such as overriding the autonomous systems in certain situations. However, as the level of autonomy increases, the driver’s role becomes less significant, and the vehicle’s systems take on more responsibility. Regardless of the level of autonomy, safety is a top priority for driverless car manufacturers. The software and hardware systems are designed to handle a broad range of driving scenarios and hazards. However, it is essential to note that driverless cars are still a developing technology, and there are ongoing efforts to improve their safety, performance and regulatory framework.

10.10.1 AUTOMATION LEVELS FOR A DRIVERLESS CAR

The five automation levels for a driverless car are illustrated in Figure 10.28, and the details are presented in the following section [14].

• Level 0 autonomous driving is a stage where no automation is involved in the vehicle’s operation. The driver has complete control over the car. All driving activities, such as steering, accelerating and braking, are entirely manual. At this level, the vehicle has no assistance systems or driver aids, and the driver is solely responsible for the vehicle’s safe operation. Level 0 is considered the baseline for autonomous driving, and all other levels build on this foundation by gradually introducing more advanced automation technologies. While no automation systems are involved at Level 0, it is still vital for drivers to exercise caution, obey traffic laws, and maintain focus while driving.
• Level 1 autonomous driving is a stage where minimal automation is involved in the vehicle’s operation. At this stage, the car is equipped with basic driver assistance systems that provide partial control over certain aspects of driving, such as steering or acceleration/deceleration. However, the driver maintains full responsibility for the vehicle’s operation, and their attention is always required. Examples of Level 1 driver assistance systems include lane departure warning systems, adaptive cruise control, and automatic emergency braking. Lane departure warning systems use cameras and other sensors to detect when a vehicle is drifting out of its lane and provide an alert to the driver. Adaptive cruise control adjusts the vehicle’s speed to maintain a safe distance from the vehicle in front. The automatic emergency braking system applies the brakes if a collision is imminent.

Figure 10.28 Automation levels for a driverless car.


• Level 2 autonomous driving is a stage in which the vehicle has more advanced automation features than Level 1 and can control some driving aspects under specific circumstances. It is considered partial automation, as the driver must still monitor the vehicle’s operation while being ready to take control at any time. Level 2 autonomous driving involves advanced driver assistance systems that help the driver with accelerating, braking and steering tasks. The driver must always remain alert and aware of the vehicle’s operation and keep their hands on the wheel. The automation system has a limited operational design domain and may issue warnings if the driver becomes distracted or disengaged. Level 2 autonomous driving represents a significant step towards fully autonomous behaviour. Still, the driver remains responsible for the vehicle’s safe operation, and the automation system is designed to assist the driver rather than replace them.
• Level 3 autonomous driving refers to a stage where the vehicle can handle most driving tasks under specific conditions, and the driver can disengage from driving under those conditions. The car can control all aspects of driving. Still, the driver must be ready to take control of the vehicle if the automation system encounters a situation it cannot handle or if the driving conditions change. Level 3 autonomous driving is only available in limited scenarios, and the driver remains responsible for the vehicle’s safe operation. The driver must stay alert and aware of the vehicle’s operation and be able to take control of the car within a few seconds if necessary. Level 3 autonomous driving represents a significant advancement in autonomous vehicle technology, but it is still far from complete autonomy.
• Level 4 autonomous driving refers to a stage where the vehicle is fully autonomous and can operate without human input in specific conditions. The car can control all aspects of driving, including navigating through traffic, making turns, and avoiding obstacles. The driver does not need to monitor the vehicle actively, although it may occasionally be necessary to take control. Level 4 autonomous driving is designed to work in specific conditions, such as on highways or in areas with well-defined routes. The vehicle’s automation system can handle most situations; however, there may be limitations to the technology, such as extreme weather, which require human intervention. Level 4 autonomous driving represents a significant advancement in autonomous vehicle technology, as it allows the vehicle to operate without human input in many situations. Still, it is not yet fully capable of working in all conditions.
• Level 5 autonomous driving is the ultimate stage, where the vehicle is fully autonomous and can operate without human input in all conditions. The vehicle’s automation system can handle all driving tasks, including accelerating, braking, and steering. The driver is not required to monitor or control the car at any time. Level 5 autonomous driving represents the ultimate goal of autonomous vehicle technology, potentially reducing road accidents and congestion. However, achieving Level 5 autonomy requires significant technological advancements, and several technical, regulatory, and ethical challenges still need to be addressed before fully autonomous vehicles become widespread.

10.10.2 DESIGNING THE CONTROL SYSTEM

A control model in autonomous driving refers to the algorithms and software that control the vehicle’s motion and behaviour. The system is responsible for interpreting sensor data, making decisions about the vehicle’s actions, and sending commands to the vehicle’s actuators, such as the steering, braking, and acceleration mechanisms. There are several approaches to designing control models for autonomous vehicles, including rule-based systems, Machine Learning, and Reinforcement Learning. Rule-based systems rely on predefined rules that determine the vehicle’s behaviour in different situations, while Machine Learning and Reinforcement Learning models use data to train the vehicle’s behaviour over time. The control model must be robust and able to handle unexpected situations, such as road obstacles or traffic conditions. It must also balance safety, efficiency and comfort, making decisions that prioritise the safety of passengers and other road users while providing a smooth and comfortable ride. The control model is a critical component of autonomous driving technology. It enables the vehicle to operate safely and efficiently in various conditions. The control model of a driverless car involves the following key components:

• Perception system: This component consists of sensors such as cameras, lidar, and radar that provide real-time data about the vehicle’s surroundings, including traffic, road conditions and obstacles.
• Localisation system: This component uses the data from the perception system to determine the vehicle’s precise location relative to its environment. This is often done using GPS, other sensors and algorithms that can triangulate the vehicle’s position.
• Mapping system: The mapping system creates a digital map of the vehicle’s environment, including roadways, traffic signals, and other landmarks. This helps the car understand its location and how it should move through the environment to a desired destination.
• Planning system: The planning system takes the data from the perception, localisation, and mapping systems and uses it to plan a safe and efficient route for the vehicle. This includes making decisions about speed, lane changes, and other driving manoeuvres.
• Control system: The control system (the brain) is responsible for executing the decisions made by the planning system, controlling the car’s acceleration, braking and steering to keep it on the planned route and avoid obstacles or other hazards.

All these components work together seamlessly, enabling the driverless car to safely navigate its environment without human input. Advanced AI algorithms are often used to integrate and optimise these capabilities, making the vehicle more efficient and safer over time as it collects and analyses more data. The elements of the control system of a driverless car are displayed in Figure 10.29.
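The interaction of these components can be sketched as a single tick of a perception–localisation–planning–control loop. Every class, threshold, and sensor field below is a hypothetical placeholder for illustration, not a production autonomy stack.

```python
# Skeleton of the perception -> localisation -> planning -> control
# pipeline. All names, fields, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class State:
    x: float          # position along the route (m)
    speed: float      # m/s

def perceive(raw_sensor_data):
    """Stub perception: extract the distance to the nearest obstacle."""
    return {"obstacle_distance": raw_sensor_data.get("lidar_min_range", 1e9)}

def localise(state, gps_fix):
    """Stub localisation: trust the GPS fix directly."""
    return State(x=gps_fix, speed=state.speed)

def plan(percepts, speed_limit=15.0):
    """Stub planner: slow down when an obstacle is close."""
    if percepts["obstacle_distance"] < 20.0:
        return {"target_speed": 5.0}
    return {"target_speed": speed_limit}

def control(state, plan_out, kp=0.5):
    """Stub controller: proportional speed tracking -> acceleration command."""
    return kp * (plan_out["target_speed"] - state.speed)

# One tick of the loop with made-up sensor readings.
state = State(x=100.0, speed=12.0)
state = localise(state, gps_fix=101.5)
percepts = perceive({"lidar_min_range": 15.0})
accel = control(state, plan(percepts))   # negative: brake for the obstacle
```

In a real vehicle each stage is a substantial subsystem running concurrently at its own rate; the point here is only the data flow between them.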

10.10.3 OBJECTIVE FUNCTIONS FOR A DRIVERLESS CAR

In a driverless car system, an objective function is a mathematical function that defines a specific goal or objective that the system is designed to achieve. The system’s control algorithm uses the objective function to optimise the car’s performance and behaviour. The specific objective function used will depend on the desired behaviour and performance of the system. Some common objective functions in driverless car systems include:

• Minimising travel time: The objective function is to minimise the time it takes from the car’s starting point to its destination, subject to speed limits and traffic conditions.
• Maximising safety: The objective function is to maximise the car’s safety and that of its passengers by avoiding collisions, following traffic rules, and responding to unexpected situations.
• Maximising energy efficiency: The objective function is to maximise the car’s energy efficiency by minimising fuel consumption or electricity usage, subject to constraints such as battery life.
• Maximising passenger comfort: The objective function is to maximise passenger comfort by minimising the car’s acceleration, deceleration, and jerky movements.
• Minimising operational costs: The objective function is to minimise the operating costs of the driverless car system by optimising the use of resources such as fuel, electricity, and maintenance.

The specific objective function used will depend on the priorities and requirements of the system’s designers and operators. The control algorithm of the driverless car system uses the objective function to make decisions about the car’s actions and behaviour, optimising the system’s performance to achieve its objectives.
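In practice, several of these goals are often combined into a single weighted cost that the planner minimises over candidate plans. The weights, field names, and penalty value in this sketch are hypothetical, chosen only to illustrate the trade-off.

```python
# Illustrative weighted objective combining the goals listed above.
# Weights, plan fields, and the penalty value are hypothetical.

def trajectory_cost(plan, w_time=1.0, w_energy=0.2, w_comfort=0.5,
                    collision_penalty=1e6):
    """Lower is better; a planner would minimise this over candidate plans."""
    cost = (w_time * plan["travel_time_s"]          # minimise travel time
            + w_energy * plan["energy_kwh"]         # maximise energy efficiency
            + w_comfort * plan["max_jerk"])         # passenger comfort proxy
    if plan["collision_risk"] > 0.01:               # safety as a hard constraint
        cost += collision_penalty
    return cost

fast_but_risky = {"travel_time_s": 300, "energy_kwh": 4.0,
                  "max_jerk": 2.5, "collision_risk": 0.05}
slower_safe    = {"travel_time_s": 360, "energy_kwh": 3.0,
                  "max_jerk": 0.8, "collision_risk": 0.0}

best = min([fast_but_risky, slower_safe], key=trajectory_cost)
# The large penalty makes the safe plan win despite its longer travel time.
```

Treating safety as a hard penalty rather than just another weighted term reflects the priority ordering described above: no amount of saved time should buy an unsafe plan.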


Figure 10.29 Control system for a driverless car.

10.11 ARTIFICIAL INTELLIGENCE AND ROBOTICS: GREAT EXPECTATIONS AND DAUNTING EXISTENTIAL RISKS

Artificial Intelligence (AI) is both exciting and challenging. There are opportunities and dangers, fascinating possibilities and existential risks. The starting point is to understand the basic definitions of key concepts such as Robotics, AI, Machine Learning, Deep Learning, Single-Task AI, Artificial General Intelligence (AGI), Strong AI, and Generative AI (e.g., OpenAI’s ChatGPT, Google’s Bard, and Stability AI’s Stable Diffusion).

10.11.1 DEFINITIONS

AI refers to intelligence programmed into and demonstrated by machines. This capability is in contrast to the natural intelligence manifested by humans and other animals. Put more succinctly, AI refers to the development of computer systems that can perform tasks that ordinarily require human intelligence, such as perception, reasoning, learning, decision-making, obstacle avoidance, and path planning. Robotics is the design, construction and operation of physical machines that autonomously carry out functions ordinarily ascribed to human beings, such as obstacle avoidance, path planning, language processing, perception, reasoning, learning, and decision-making. Put differently, Robotics is the intelligent connection of perception to action in a machine that operates autonomously. With the advent and growth of AI, most robots are driven by AI-based automation.

Single-task AI systems are designed to perform a specific task or set of functions rather than being able to perform a wide range of tasks. They are also called traditional AI systems. Single-task AI is typically developed using Machine Learning algorithms trained on a specific data set to perform a particular function. Examples of single-task AI systems include speech recognition software, image recognition software, obstacle avoidance, path planning, and fraud detection systems. Traditional AI systems drive most robots, drones and driverless cars. These highly specialised systems are typically designed to perform their task at a very exacting level of accuracy. Overall, single-task AI systems are highly effective for performing specific tasks. However, they lack the flexibility and adaptability of more general-purpose AI systems.

General-purpose AI systems are designed to perform a wide range of tasks. The ultimate and most advanced form of such capability is called Artificial General Intelligence (AGI). This is the intelligence of a machine that can learn, understand, and execute any intellectual task that a human being can. The idea is to have in a device, machine or computer system the ability to perform multiple complex tasks as opposed to specific task competence. Unlike narrow AI systems designed to perform specific functions, such as image recognition, language translation, obstacle avoidance and path planning, AGI can reason, learn, and adapt to new situations while accomplishing multiple complex tasks in much the same way as a human being.
The concept of AGI is sometimes associated with the idea of Strong AI, which suggests that it is possible to create machines that are not only capable of mimicking human intelligence but are sentient, conscious and self-aware, that is, machines that manifest intentionality, consciousness and feelings.

A special subset of AI is Machine Learning – the study of algorithms and statistical models that computer systems use to perform tasks effectively without explicit instructions, relying on patterns and inference. This category of AI involves training machines to learn from data rather than explicitly programming them to perform a specific function. In Machine Learning, algorithms use statistical models and mathematical techniques to automatically improve their performance on a given task based on the data they are trained on. The goal is to develop models that can generalise well to new, unseen data and make accurate predictions or decisions.

Deep Learning is a subset of Machine Learning. It involves training neural networks with multiple layers to learn representations of data that can be used for classification, prediction, or other tasks. In Deep Learning, the neural network architecture is designed to automatically learn hierarchical representations of the data, with each layer learning more abstract features than the previous layer. This makes Deep Learning particularly effective for tasks involving complex data, such as images, speech, and natural language processing.

Generative AI (Gen AI) is a special type of AI that involves using algorithms and models to create new content or generate novel ideas. This can include the creation of images, music, text, or even entire virtual environments. Unlike traditional AI systems designed to perform specific tasks or solve specific problems, Generative AI is designed to create something novel that did not exist before – new content and knowledge.
Recent highly popular examples of Generative AI are OpenAI’s ChatGPT, Google’s Bard and Stability AI’s Stable Diffusion.

10.11.2 OPPORTUNITIES AND CHALLENGES IN EQUAL MEASURE

Why the excitement about AI now? Why the concerns and fears about AI now? What is going on? While AI has been around for a while, the early stages of the field witnessed sluggish growth. Now there is exponential growth of AI capabilities through various levels: basic intelligence, millipede level, mouse level, monkey level, average Joe/Jane level, Albert Einstein level, and beyond Albert Einstein (i.e., Super Intelligence). Of course, AI will not stop at human-level intelligence! It is baseless human arrogance to think otherwise. To dramatise the possibility of Super Intelligence, ChatGPT 4’s mastery of general knowledge already far surpasses that of any living human being.

The spectacular and unprecedented exponential growth in AI capabilities has set the world ablaze with great expectations and acute worries from ordinary folks and experts alike. Prof Geoffrey Hinton, a winner of the Turing Award (considered the Nobel Prize for Computing) and a man regarded as the godfather of AI for his pioneering work on the Supervised Learning algorithm called Backpropagation, quit his job at Google on 1 May 2023, warning about the growing dangers from developments in the AI field. Elon Musk, CEO of SpaceX, Tesla and Twitter, has also widely expressed concerns about AI, saying on 17 April 2023: “It has the potential of civilisation destruction.” What are the issues? Once one highly intelligent agent is developed, it can be replicated effortlessly worldwide. Hence, an unlimited number of these agents can be created, which then learn together simultaneously. This can lead to a multiplicity (large numbers or swarms) of equally Super-Intelligent Agents. All these developments are unprecedented; hence the excitement and fear, in equal measure.

With this exponential growth and the broad nature of AI capabilities, it is clear that AI has applications in every industry, sector and society, ranging from automated manufacturing, autonomous mining, speech recognition, and natural language processing to image and video analysis, mechatronics, robotics, and autonomous weapons. AI is thus used in various industries, such as healthcare, finance, education, and manufacturing, to improve efficiency, reduce costs, and develop new products and services.
Consequently, AI is transforming society in many ways as an effective tool for solving human challenges. The impact is ubiquitous. This is the source of the excitement. As a recent phenomenon, Generative AI has taken the world by storm. For example, ChatGPT has over 100 million users worldwide, while its website now generates 1 billion monthly visitors. This user and traffic growth was achieved in a record-breaking period between December 2022 and April 2023. These are fascinating statistics demonstrating the dramatic upsurge in the AI revolution. Applications of AI are in every industry, sector and society. In 2024, Gen AI is projected to add $4.2 trillion annually to the global economy. This is in addition to the $11.5 trillion that non-generative AI and its 4IR technologies are already contributing. This dramatic impact is due to the unprecedented increase in productivity occasioned by the adoption of AI in all sectors of the economy. To dramatise the impact of AI, and Gen AI in particular, it is prudent to state the GDPs of the key world economies: USA $25 trillion, China $19 trillion, Japan $5 trillion, Germany $4 trillion, India $3.5 trillion, and Africa (collective GDP) $3.1 trillion. This means that the economy produced by Gen AI alone would be the fourth largest after the USA, China and Japan. This is staggering! It is imperative to stress that the exponential growth of AI is due to two factors: the availability of vast amounts of data (Big Data) for Machine Learning, and the upsurge in the computational power of computers. While the AI opportunities are immense, it is instructive and prudent that the broad range of threats and risks is appreciated, understood and appraised. These dangers include privacy violation, bias, discrimination, copyright violation, lack of accountability, insecurity, job losses, autonomous weapons, cyber-attacks, terrorism, over-reliance on AI, and unchecked limitations of AI.
In particular, a critical danger occurs when AI capabilities get into the hands of bad actors such as thieves, cheats, terrorists and warmongers. Furthermore, there are also existential risks – concerns that AGI could eventually become so powerful that it poses an existential threat to humanity by accident or design. What happens when we have a million Super-Intelligent Agents whose capabilities surpass those of Albert Einstein, Isaac Newton, Elon Musk, Prof Edward Witten (winner of a Fields Medal – the Nobel Prize for Mathematics) or Prof Andrew Wiles (the mathematician who in 1994 proved the 350-year-old Fermat’s Last Theorem)? How can humans control a society or world with a million such Super-Intelligent Agents? Food for Thought!

10.11.3 AI RISK MITIGATION AND MANAGEMENT

Clearly, there is a need for careful and systematic mitigation of all these dangers and risks. First and foremost, many of these AI threats and challenges are not inherent to AI itself. They arise from how AI is developed, deployed, and regulated. There is a lot that can be done. For example, we must have diverse (including women, Africans, Blacks, Asians, Indians, different cultures/languages, etc.) teams of AI developers and regulators.

With respect to the impact of AI on jobs, there are three broad categories. Clearly, there will be job displacement (outright job losses). This reality must be accepted. The second class is that of modified jobs (human augmentation). Third, there will be new jobs created by the upsurge of AI adoption. The challenge is to ensure that the modified jobs plus the new jobs are more than the destroyed jobs. That is the hopeful and aspirational disposition that we must convert into reality. However, this net increase in employment is not guaranteed for every country. To achieve it, there must be active and deliberate intervention by policy makers, regulators and the private sector. For a start, AI technology must be owned by the majority, with a primary focus on solving global challenges such that the quality of life is improved for all citizens of the planet. Specifically, on the matter of jobs, there must be reskilling of job losers, retooling and preparation for the modified jobs. More importantly, there must be deliberate and systemic development of new skills, capabilities and competencies to take advantage of the new jobs. In fact, there must be continuous reskilling and education as AI evolves.

AI must be developed ethically and responsibly, with appropriate safeguards to mitigate these risks. However, the dangers persist. How do you have universal ethical or moral “rules of the road” with bad actors such as thieves, cheats, terrorists and warmongers?
Unlike nuclear weapons, which are hard to access or develop, AI is easy to access and develop by anyone. One only needs a good brain, computing power and high-speed, reliable connectivity. Therein lies the unprecedented threat presented by AI. More significantly, it is imperative to note that the existential risks posed by the possibility of Super Intelligence remain unresolved! We must accept this reality. AGI and Strong AI remain ostensibly areas of current active research – inconclusive work in progress. Whether and when such systems will be fully developed are open questions. Let us all keep an open mind. More specifically, ethical considerations, religious beliefs or fear of the unknown should not be allowed to deter the pursuit of AGI and Strong AI. The research and experimentation must continue. Once a technology has been invented, it cannot be uninvented. Nobody can put it back once the genie is out of the bottle. We must brace ourselves for a brave AI-driven new world fraught with opportunities and threats.

10.11.4 THE WAY FORWARD

As a way forward, we must concentrate on developing responsible AI for solving global problems and improving the quality of life of all people worldwide. Specifically, Africans must be key players (not just consumers) in the development of AI. They must proactively use AI to solve African socio-economic problems. Across the world, people must understand the strengths and limitations of AI. More specifically, there must be effective AI risk mitigation and management, with careful attention to the design, development, and deployment of AI systems. Africa and the Global South must be key and integral participants in these AI endeavours. Globally, all marginalised and emerging economies must proactively participate in the development of guidelines and regulations on the use of AI. Thereafter, these inclusively derived guidelines and regulations must be diligently and effectively implemented. Dynamic Law Commissions must be set up at national, regional, continental and global levels with the mandate to develop new but harmonised laws and regulations to enable and facilitate the adoption, management and mitigation of all new technologies, including AI. Yes, the AI revolution must continue unhindered, globally. However, Africans and those from the Global South must be key players in developing and constructing AI systems – research, coding, testing, deployment and regulations – and not just passive consumers of AI products. Indeed, Artificial Intelligence and Robotics present great expectations and daunting existential risks.

PROBLEMS

Problem 10.1
(a) Outline the process of designing a PID controller in MATLAB Simulink to control the temperature of an industrial plant.
(b) Discuss the steps and the Simulink library used.
(c) Draw the block diagram of the designed temperature control system.

Problem 10.2
(a) Design a simple PID control system for an air conditioner that regulates room temperature at 24°C.
(b) Use MATLAB Simulink to design the PID controller.
(c) Discuss PID auto-tuning.
(d) Extract the results of the PID controller.

Problem 10.3
(a) How can Fuzzy Logic be used to tune a PID controller for effective temperature control? (Use information from Problem 10.2.)
(b) What are the Fuzzy inputs and outputs, and how do they integrate with the PID controller?
(c) Write the Fuzzy Logic membership functions of the system.
(d) Write the Fuzzy Logic rules used to create this controller.
(e) Extract the Fuzzy Logic rule surface for the temperature control system.

Problem 10.4
(a) How can a PID controller be designed to achieve maximum power point tracking (MPPT) in a solar photovoltaic (PV) system?
(b) What are the key steps involved in selecting the controller gains and tuning the PID controller for optimal performance?
(c) Design a simple multi-loop PID controller for a water temperature control system in a thermal water mixing process. Consider the diagram in Figure 10.30.

Problem 10.5
(a) How can intelligent algorithms such as Genetic Algorithms (GA), Particle Swarm Optimisation (PSO), and Evolutionary Algorithms (EA) be utilised to automatically determine the optimal PID controller parameters for effective temperature control?
(b) Draw a flow chart for one of the optimisation techniques you would use to determine the PID parameters.
(c) Use one of the algorithms to determine the PID parameters using MATLAB.
(d) Compare the result of the optimisation with that of a Fuzzy-PID controller.

Problem 10.6
(a) How can a Machine Learning algorithm automatically tune the PID parameters of a temperature control system, based on real-time sensor data and performance feedback, to optimise control and minimise error? (You may use the MATLAB Machine Learning toolbox and the Kaggle website to get data for analysis.)
(b) Discuss how Machine Learning forecasts the model and what the criteria are.
(c) Compare the result with a Fuzzy Logic-based algorithm.


Figure 10.30 Control system for a driverless car.

Problem 10.7
(a) What is an intelligent temperature controller, and how does it differ from traditional temperature control systems?
(b) Briefly discuss the advantages and disadvantages of each type.

Problem 10.8
(a) How can Fourth Industrial Revolution technologies, such as the Internet of Things (IoT) and Artificial Intelligence (AI), be integrated with industrial temperature control systems to enhance efficiency and performance?
(b) Discuss a 4IR industry that uses temperature as a controlled variable, and how 4IR manages the controller.
(c) What is the advanced approach to building an intelligent temperature controller?

Problem 10.9
How can data science techniques, such as statistical modelling and Machine Learning algorithms, be applied to temperature control systems to analyse data and determine effective control strategies for maintaining desired temperatures?

A Laplace and Z-Transforms

A.1 PROPERTIES OF LAPLACE TRANSFORMS

Number  Laplace Transform                                            Time Function        Comment
-       F(s)                                                         f(t)                 Transform pair
1       αF1(s) + βF2(s)                                              αf1(t) + βf2(t)      Superposition
2       F(s)e^(-sλ)                                                  f(t - λ)             Time delay
3       (1/|a|)F(s/a)                                                f(at)                Time scaling
4       F(s + a)                                                     e^(-at)f(t)          Frequency shift
5       s^m F(s) - s^(m-1)f(0) - s^(m-2)f'(0) - ... - f^(m-1)(0)     f^(m)(t)             Differentiation
6       (1/s)F(s)                                                    ∫[0,t] f(ξ)dξ        Integration
7       F1(s)F2(s)                                                   f1(t) * f2(t)        Convolution
8       lim(s→∞) sF(s)                                               f(0+)                IVT
9       lim(s→0) sF(s)                                               lim(t→∞) f(t)        FVT
10      (1/(2πj)) ∫[c-j∞, c+j∞] F1(ζ)F2(s - ζ)dζ                     f1(t)f2(t)           Time product
11      -(d/ds)F(s)                                                  t f(t)               Multiply by time

A.2 TABLE OF LAPLACE TRANSFORMS

Number  F(s)                           f(t), t ≥ 0
1       1                              δ(t)
2       1/s                            1(t)
3       1/s^2                          t
4       2!/s^3                         t^2
5       3!/s^4                         t^3
6       m!/s^(m+1)                     t^m
7       1/(s + a)                      e^(-at)
8       1/(s + a)^2                    te^(-at)
9       1/(s + a)^3                    (1/2!)t^2 e^(-at)
10      1/(s + a)^m                    (1/(m - 1)!)t^(m-1) e^(-at)
11      a/[s(s + a)]                   1 - e^(-at)
12      a/[s^2(s + a)]                 (1/a)(at - 1 + e^(-at))
13      (b - a)/[(s + a)(s + b)]       e^(-at) - e^(-bt)
14      s/(s + a)^2                    (1 - at)e^(-at)
15      a^2/[s(s + a)^2]               1 - e^(-at)(1 + at)
16      (b - a)s/[(s + a)(s + b)]      be^(-bt) - ae^(-at)
17      a/(s^2 + a^2)                  sin at
18      s/(s^2 + a^2)                  cos at
19      (s + a)/[(s + a)^2 + b^2]      e^(-at) cos bt
20      b/[(s + a)^2 + b^2]            e^(-at) sin bt
21      (a^2 + b^2)/{s[(s + a)^2 + b^2]}   1 - e^(-at)[cos bt + (a/b) sin bt]

A.3 PROPERTIES OF Z-TRANSFORMS

1. Definition
   F(z) = Σ(k=0 to ∞) f(kh) z^(-k)

2. Inversion
   f(kh) = (1/(2πi)) ∮ F(z) z^(k-1) dz

3. Linearity
   Z{αf + βg} = αZf + βZg

4. Time shift
   Z{q^(-n) f} = z^(-n)F
   Z{q^n f} = z^n(F - F1), where F1(z) = Σ(j=0 to n-1) f(jh) z^(-j)

5. Initial-value theorem
   f(0) = lim(z→∞) F(z)

6. Final-value theorem
   If (1 - z^(-1))F(z) does not have any poles on or outside the unit circle, then
   lim(k→∞) f(kh) = lim(z→1) (1 - z^(-1))F(z)

7. Convolution
   Z{f * g} = Z{Σ(n=0 to k) f(n)g(k - n)} = (Zf)(Zg)
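As a numerical illustration of the final-value theorem, the short MATLAB sketch below evaluates a truncated version of the defining sum just outside z = 1. The signal f(k) = 1 - 0.5^k, the value of z and the truncation length are illustrative choices, not taken from the text:

```matlab
% Numerical check of the final-value theorem (illustrative example).
% f(k) = 1 - 0.5^k satisfies the pole condition, and its final value is 1.
k = 0:20000;                 % truncation of the infinite sum (h = 1)
f = 1 - 0.5.^k;              % sampled signal
z = 1.001;                   % evaluate slightly outside z = 1
F = sum(f .* z.^(-k));       % truncated F(z) = sum of f(k) z^(-k)
fvt = (1 - 1/z) * F          % approximately 1, the final value of f(k)
```

Moving z closer to 1 (with a correspondingly longer sum) drives the result towards the true final value.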

A.4 TABLE OF Z-TRANSFORMS

Number  F(s)          f(kT)                                               F(z)
1       -             1 for k = 0; 0 for k ≠ 0                            1
2       -             1 for k = k0; 0 for k ≠ k0                          z^(-k0)
3       1/s           1(kT)                                               z/(z - 1)
4       1/s^2         kT                                                  Tz/(z - 1)^2
5       1/s^3         (1/2!)(kT)^2                                        (T^2/2)[z(z + 1)/(z - 1)^3]
6       1/s^4         (1/3!)(kT)^3                                        (T^3/6)[z(z^2 + 4z + 1)/(z - 1)^4]
7       1/s^m         lim(a→0) [(-1)^(m-1)/(m-1)!] ∂^(m-1)/∂a^(m-1) e^(-akT)   lim(a→0) [(-1)^(m-1)/(m-1)!] ∂^(m-1)/∂a^(m-1) [z/(z - e^(-aT))]
8       1/(s + a)     e^(-akT)                                            z/(z - e^(-aT))
9       1/(s + a)^2   kTe^(-akT)                                          Tze^(-aT)/(z - e^(-aT))^2

A.4 TABLE OF Z-TRANSFORMS (CONT.)

No.  F(s)                          f(kT)                                    F(z)
10   1/(s + a)^3                   (1/2)(kT)^2 e^(-akT)                     (T^2/2)e^(-aT)[z(z + e^(-aT))/(z - e^(-aT))^3]
11   1/(s + a)^m                   [(-1)^(m-1)/(m-1)!] ∂^(m-1)/∂a^(m-1) e^(-akT)   [(-1)^(m-1)/(m-1)!] ∂^(m-1)/∂a^(m-1) [z/(z - e^(-aT))]
12   a/[s(s + a)]                  1 - e^(-akT)                             z(1 - e^(-aT))/[(z - 1)(z - e^(-aT))]
13   a/[s^2(s + a)]                (1/a)(akT - 1 + e^(-akT))                z[Az + B]/[a(z - 1)^2(z - e^(-aT))], where A = aT - 1 + e^(-aT), B = 1 - e^(-aT) - aTe^(-aT)
14   (b - a)/[(s + a)(s + b)]      e^(-akT) - e^(-bkT)                      (e^(-aT) - e^(-bT))z/[(z - e^(-aT))(z - e^(-bT))]
15   s/(s + a)^2                   (1 - akT)e^(-akT)                        z[z - e^(-aT)(1 + aT)]/(z - e^(-aT))^2
16   a^2/[s(s + a)^2]              1 - e^(-akT)(1 + akT)                    z[Az + B]/[(z - 1)(z - e^(-aT))^2], where A = 1 - e^(-aT) - aTe^(-aT), B = e^(-2aT) - e^(-aT) + aTe^(-aT)
17   (b - a)s/[(s + a)(s + b)]     be^(-bkT) - ae^(-akT)                    z[z(b - a) - (be^(-aT) - ae^(-bT))]/[(z - e^(-aT))(z - e^(-bT))]
18   a/(s^2 + a^2)                 sin akT                                  z sin aT/[z^2 - (2 cos aT)z + 1]
19   s/(s^2 + a^2)                 cos akT                                  z(z - cos aT)/[z^2 - (2 cos aT)z + 1]
20   (s + a)/[(s + a)^2 + b^2]     e^(-akT) cos bkT                         z(z - e^(-aT) cos bT)/[z^2 - 2e^(-aT)(cos bT)z + e^(-2aT)]
21   b/[(s + a)^2 + b^2]           e^(-akT) sin bkT                         ze^(-aT) sin bT/[z^2 - 2e^(-aT)(cos bT)z + e^(-2aT)]
22   (a^2 + b^2)/{s[(s + a)^2 + b^2]}   1 - e^(-akT)[cos bkT + (a/b) sin bkT]   z(Az + B)/{(z - 1)[z^2 - Cz + e^(-2aT)]}, where A = 1 - e^(-aT) cos bT - (a/b)e^(-aT) sin bT, B = e^(-2aT) + (a/b)e^(-aT) sin bT - e^(-aT) cos bT, C = 2e^(-aT) cos bT

B MATLAB: Basics and Exercises

MATLAB is an interactive high-level programming language for numerical computation and data visualisation. It is used extensively for the design and analysis of control systems. There are many different toolboxes available that extend the basic functions of MATLAB into different application areas. In this Appendix, the basic commands are introduced and the reader is familiarised with MATLAB. In addition, a number of exercises involving dynamic systems are provided. MATLAB is supported on Unix, Macintosh, and Windows environments. There are a number of MATLAB websites that can be used to supplement this appendix. The University of Michigan and the main MathWorks site provide effective MATLAB tutorials through the following websites:
https://ctms.engin.umich.edu/CTMS/
https://www.mathworks.com/support/learn-with-matlab-tutorials.html

B.1 GETTING STARTED

MATLAB is invoked by entering the command “MATLAB” at the computer system prompt or by clicking on the MATLAB icon, depending on the type of machine being used. Once started, MATLAB will clear the screen, provide some introductory remarks, and produce the MATLAB command prompt >>. For the most part, MATLAB commands are independent of the type of machine and operating system being used. However, the way that MATLAB interfaces with the computing environment varies dramatically from machine to machine. As a result, use of features such as printing and command line editing are machine dependent. To exit from MATLAB type “quit” or “exit” at the MATLAB prompt, followed by the return or enter key.

B.2 CREATING MATLAB FILES

It is much more convenient to use MATLAB script files than to enter commands line by line at the MATLAB command prompt. A script file is an ASCII file (regular text file) that contains a series of commands written just as they would be entered at the MATLAB command prompt. Statements beginning with % are considered to be comments and are hence ignored during execution. Each script file should have a name that ends with a “.m” extension. The script file (MATLAB programme file) is then executed by typing the name of the script file without the “.m” extension at the MATLAB command prompt. For example, suppose the script file lab1.m contains the following commands used to plot a sine curve:

% MATLAB (Exercise 1)
% Plotting a simple sine curve
t = 0:0.1:10;
y = sin(2*t);
plot(t, y)

Typing lab1 at the MATLAB command prompt will plot the sine curve. The file should be in the same directory as the MATLAB prompt. To verify, type ls at the MATLAB prompt to see that the file exists in the same directory. If not, then type cd directory_name to go to the right directory. Once the MATLAB prompt is set, open another window to edit the text (script) file, i.e., enter the commands of the programme that needs to be executed. The text file should be saved before running it at the MATLAB prompt.


B.3 COMMANDS

MATLAB has many commands and a few are listed below along with their syntax. They are supplemented with examples and illustrations.

B.3.1 VECTORS

For a row vector, enter each element of the vector (separated by a space) between square brackets, and set it equal to a variable. For example, to create a row vector a, enter the following in a MATLAB command window:

>> a = [1 2 3 4 5 6 7]
a =
1 2 3 4 5 6 7

For a column vector d, enter each element of the vector separated by a semicolon:

>> d = [1; 2; 3]
d =
1
2
3

Alternatively, the following transpose command can be used to obtain a column vector:

>> d = [1 2 3]'

To create a row vector with elements between 0 and 20 evenly spaced in increments of 2, the following command can be used (this method is frequently used to create a time vector):

>> t = 0:2:20
t =
0 2 4 6 8 10 12 14 16 18 20

Manipulating vectors is as easy as creating them. To add 2 to each element of vector a, the command takes the form:

>> b = a + 2
b =
3 4 5 6 7 8 9

Now suppose the sum of two vectors is required. The two vectors have to be of the same length, and the corresponding elements are simply added as shown below:

>> c = a + b
c =
4 6 8 10 12 14 16

Subtraction of vectors of the same length works in exactly the same way.

B.3.2 FUNCTIONS

MATLAB includes many standard functions. Each function is a block of code that accomplishes a specific task. Such MATLAB functions include the following: sin, cos, log, exp, and sqrt. Commonly used constants such as pi, and i or j for the square root of -1, are also incorporated.

>> sin(pi/4)
ans =
0.7071

To determine the usage of any function, type help function_name at the MATLAB command window. The function command facilitates the creation of new functions by the user.
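For example, a new function can be created by placing a function definition in its own m-file whose name matches the function name (the name circle_area here is purely illustrative):

```matlab
% File: circle_area.m (the file name must match the function name)
function A = circle_area(r)
% CIRCLE_AREA  Returns the area of a circle of radius r.
A = pi * r.^2;
```

Typing circle_area(2) at the MATLAB prompt then returns 4π ≈ 12.5664.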

B.3.3 PLOTTING

It is easy to create plots in MATLAB. Suppose the task involves plotting a sine curve as a function of time. The task involves making a time vector and then computing the value of the sine function at each time value. It is important to note that a semicolon (;) after a command instructs the MATLAB engine not to display the results of that particular command. The sine curve is plotted as follows:

>> t = 0:0.25:7;
>> y = sin(t);
>> plot(t,y)

MATLAB will return the plot in Figure B.1. The plot command has extensive add-on capabilities. The title of the plot, labels of its axes, text and grids can be included to make referencing easy. The following commands can be used inside a MATLAB script file (file_name.m), and when the file is run, the curve and its new features are produced (Figure B.2):

plot(t,y)
title('Plot name, e.g., System Response')
xlabel('Time [sec]')
ylabel('y [m]')
grid
gtext('Put text on graph, e.g., sin(t)')

The grid command produces grid lines on the graph, and these can be used to compare plots. The command gtext allows text to be put on the graph. MATLAB provides crosswires in the graphics window that can be moved by the computer mouse to the desired location for the text label. It is important to note that the opening apostrophe in MATLAB commands must be like a 9 and not a 6 as in normal sentence construction. This is part of MATLAB syntax.

Figure B.1 MATLAB sine plot.


Figure B.2 MATLAB sine plot with labels.

B.3.4 POLYNOMIALS

In MATLAB, a polynomial is represented by a row vector. To create a polynomial in MATLAB, each coefficient of the polynomial is entered into the vector in descending order. For instance, consider the polynomial s^4 + 3s^3 - 15s^2 - 2s + 9. The polynomial is entered in the following manner:

>> x = [1 3 -15 -2 9]
x =
1 3 -15 -2 9

MATLAB interprets a vector of length (n + 1) as an nth-order polynomial. Thus, if the polynomial is missing any coefficients, zeros must be entered at the appropriate places in the vector. For example, s^4 + 1 would be represented as:

>> y = [1 0 0 0 1]

The value of a polynomial can be found by using the polyval function. For example, to find the value of the above polynomial at s = 2, the following command is used:

>> z = polyval([1 0 0 0 1], 2)
z =
17

The roots of a polynomial can also be extracted. This is particularly useful for high-order polynomials such as s^4 + 3s^3 - 15s^2 - 2s + 9.


The roots are obtained by entering the command:

>> roots([1 3 -15 -2 9])
ans =
-5.5745
2.5836
0.7860
-0.7951

The product of two polynomials is found by taking the convolution of their coefficients. The function conv is engaged as follows:

>> x = [1 2];
>> y = [1 4 8];
>> z = conv(x,y)
z =
1 6 16 16

Dividing two polynomials is achieved by using the function deconv. It will return the result including the remainder. For example, dividing z by y from above leads to:

>> [xx, R] = deconv(z,y)
xx =
1 2
R =
0 0 0 0

As expected, xx = x, and the remainder is 0. Addition of two polynomials that have the same order is achieved by adding their corresponding row vectors. For example, if two polynomials x and y have the same order, then their sum can be obtained as follows:

>> z = x + y

B.3.5 MATRICES

Entering matrices into MATLAB is similar to entering vectors except that each row of elements is separated by a semicolon (;) or a return key.

>> A = [1 2; 3 4]

yields

A =
1 2
3 4

>> A = [1,2
3,4]

produces the same result. Matrices in MATLAB can be manipulated in many ways. The transpose of a matrix is obtained using the apostrophe key.

>> C = A'
C =
1 3
2 4

It should be noted that if A had been complex, the apostrophe would have produced the complex conjugate transpose. The transpose can also be obtained by just typing A' at the MATLAB prompt. Two matrices B and C can be multiplied together if the number of columns in B is equal to the number of rows in C. It is also essential to note that the order of multiplication for matrices matters.


>> B = [1 0; 0 1];
>> A*B
ans =
1 2
3 4

A square matrix can be multiplied by itself many times by raising it to the necessary power. For example,

>> A = [1 2; 3 4];
>> A^3
ans =
37 54
81 118

The inverse of a matrix is obtained as follows:

>> inv(A)
ans =
-2.0000 1.0000
1.5000 -0.5000
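For solving linear systems of equations, the backslash operator is normally preferred to an explicit inverse, since x = A\b is faster and numerically more reliable than x = inv(A)*b:

```matlab
A = [1 2; 3 4];
b = [5; 6];
x = A\b            % solves Ax = b directly
% The same solution is obtained (less efficiently) with inv(A)*b
```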

B.4 PRINTING

Printing in MATLAB is machine-dependent.

B.4.1 MACINTOSH

To print a plot or an m-file from a Macintosh, just click on the plot or m-file, select Print under the File menu, and hit return.

B.4.2 WINDOWS

To print a plot or an m-file from a computer running Windows, just select Print from the File menu in the window of the plot or m-file, and hit return.

B.4.3 UNIX

To print a file on a Unix workstation the following command can be entered:

> lp -Pprinter_name file_name

For example, if the name of the printer is lexlab2 and the file is test.m, then,

> lp -Plexlab2 test.m

B.4.4 PLOTS

Plots can be printed by going to the File menu of the plot and clicking the Print option. Another window will pop up, and the print option can be selected. The file will then be sent to the default printer. If the objective is to save the plot and print it later, either of the following two commands can be used in the MATLAB command window soon after producing the plot:

>> print plot_name.ps
>> print -deps plot_name.eps

Once saved, the plot can be printed later or included as part of a text document. For example, in Unix the plot can be printed as follows:

> lp -Pprinter_name plot_name.ps

B.5 USING M-FILES IN MATLAB

In handling MATLAB files (m-files), there are slight differences between the machine platforms.

B.5.1 MACINTOSH

There is a built-in editor for m-files: the “New M-file” option can be chosen from the File menu. Other editors can be used, but it is important to save the files in text format and load them when MATLAB is invoked.

B.5.2 PC WINDOWS

Running MATLAB from Windows is very similar to running it on a Macintosh. However, it is important to note that the m-file will be saved in the clipboard. Therefore, it must be saved as file_name.m.

B.5.3 UNIX

Two Unix windows must be opened: the MATLAB command window and the file editing window. Both windows must be operating from the same directory. All the MATLAB commands are written into the script file “file_name.m” opened from the editing window. Unix editors include such programs as emacs and pico. For example, a file can be opened and edited by first typing the following command in the editing Unix window:

> emacs file_name.m

When all the MATLAB commands have been written into file_name.m, the file can then be executed by typing the following command from the MATLAB command window:

>> file_name

B.6 SAVING WORKSPACE

As has been discussed already, MATLAB commands can be typed directly in a MATLAB command window, or they can be edited into an m-file and then the file executed afterwards. When typing commands directly in the MATLAB window, the work being carried out can be saved into a file file_name.m by using the diary command as follows:

>> diary file_name.m
>> A = [1 2; 3 4]
A =
1 2
3 4
>> B = [1 0; 0 1]
B =
1 0
0 1
>> diary

The command diary file_name.m causes all subsequent prompt inputs and their outputs to be written into file_name.m. The command diary off suspends that facility.
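The diary command records a transcript; the variables themselves can instead be stored in a binary .mat file with the save command and recovered with load (the file name work_vars.mat is illustrative):

```matlab
A = [1 2; 3 4];
save work_vars.mat A      % write the variable A to work_vars.mat
clear A                   % remove A from the workspace
load work_vars.mat        % restore A from the file
```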

B.7 GETTING HELP IN MATLAB

MATLAB has fairly good online help that can be accessed as follows:

>> help command_name


It is also important to notice that the value of a particular variable can be obtained at any time by typing its name. For example, >> A A= 1 2 3 4 Also more than one command statement can be placed on a single line, by separating the statements by a semicolon or comma. If a variable is not assigned to a specific operation or result, MATLAB will store it in a temporary variable called “ans.”

B.8 CONTROL FUNCTIONS

MATLAB has built-in functions to help in the design and analysis of control systems. Given below are some of these functions:

B.8.1 STEP

This function produces the step response of a system, i.e., the system output due to a step input. It takes as arguments the state-variable matrices (A, B, C, D) from the linear system representation

ẋ = Ax + Bu
y = Cx + Du.

The syntax takes the form

>> step(A, u*B, C, u*D, iu, t);

The term u is the size of the step input (a constant), and iu indicates the number of inputs. The quantity t is a user-supplied time vector that specifies the time interval over which the step response should be calculated. If it is not specified, a default time interval is used. Given below is the MATLAB programme that can be used to produce the step response for a car's cruise-control system. The input is a step force, u = 500 N.

A = [0 1; 0 -0.05];
B = [0; 0.001];
C = [0 1];
D = 0;
step(A, 500*B, C, D, 1)
title('Cruise Control Step Response')

This returns the plot shown in Figure B.3.

B.8.2 IMPULSE

This function produces the impulse response of a system, i.e., the system output due to an impulse input. It takes as arguments the state-variable matrices (A, B, C, D) from the linear system representation

ẋ = Ax + Bu
y = Cx + Du.

The syntax takes the form

>> impulse(A, B, C, D, iu, t);

where the input is an impulse and iu indicates the number of inputs. The vector t is a user-supplied time vector that specifies the time interval over which the impulse response should be calculated. Given below is an example of a MATLAB programme that produces an impulse response:


Figure B.3 MATLAB plot: step response for cruise control.

A = [-3 -2; 1 0];
B = [1; 0];
C = [2 1];
D = 0;
impulse(A,B,C,D,1)
title('Impulse Response')

The plot in Figure B.4 is obtained. As shown in the preceding examples, the state-variable matrices must be entered before the step and impulse functions are used. Alternatively, the system transfer functions can be used to obtain system responses. MATLAB also has built-in programs to convert between different models of dynamic systems, such as the State-Variable Matrix form and the Transfer Function form. For example,

>> [num,den] = ss2tf(A,B,C,D)

converts the system from the State-Variable Matrix model to the Transfer Function form. The impulse and step responses are then obtained as follows:

>> step(num,den)
>> impulse(num,den)

Other commands that convert between models include the following:

>> [A,B,C,D] = tf2ss(num,den)
>> [z,p,k] = ss2zp(A,B,C,D)
>> [A,B,C,D] = zp2ss(z,p,k)
>> [A1,B1,C1,D1] = ss2ss(A,B,C,D)
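As an illustration, applying ss2tf to the cruise-control model used in Section B.8.1 recovers its transfer function; the result quoted in the comments follows from G(s) = C(sI - A)^(-1)B for these matrices:

```matlab
A = [0 1; 0 -0.05];
B = [0; 0.001];
C = [0 1];
D = 0;
[num, den] = ss2tf(A, B, C, D)
% num = [0 0.001 0] and den = [1 0.05 0],
% i.e., G(s) = 0.001s/(s^2 + 0.05s) = 0.001/(s + 0.05)
step(num, den)             % same response as step(A, B, C, D, 1)
```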


Figure B.4 MATLAB plot: impulse response.

B.9 MORE COMMANDS

1. inv(A) produces the inverse of a square matrix A. A warning message is printed if A is singular or badly scaled.
2. Loops repeat statements a specific number of times. The general form of a for statement is:

for variable = expression
    statement
    ...
    statement
end

The columns of the expression are stored one at a time in the variable and then the following statements, up to the end, are executed. The expression is often of the form X:Y, in which case its columns are simply scalars. For example:

N = 10;
for I = 1:N
    for J = 1:N
        A(I,J) = 1/(I+J-1);
    end
end

3. eye (identity matrices): eye(N) produces an N-by-N identity matrix. eye(M,N) or eye([M,N]) produces an M-by-N matrix with 1's on the diagonal and zeros elsewhere. eye(size(A)) produces an identity matrix with the same size as A.
4. rank (linear independence): If X is a matrix, then k = rank(X) gives the number of linearly independent rows or columns of the matrix.


B.10 LABWORK I

1. Consider

A = [-5 1 0; 0 -2 1; 1 0 0], b = [0; 0; 1], c = [-1 1 0].

(a) Suppose Ax = b; find x.
(b) Suppose yA = c; find y.
(c) G(s) = c(sI - A)^(-1)b; find G(0) and G(1).
(d) Define CM = [b Ab A^2b]. Find the rank of CM.
(e) Now consider an arbitrary n × n matrix A and n × 1 vector b. Let CM = [b Ab A^2b ... A^(n-1)b]. Write a script file that computes the rank of CM.

2. Consider the function

H(s) = η(s)/d(s)

where

η(s) = s^3 + 6.4s^2 + 11.29s + 6.76
d(s) = s^4 + 14s^3 + 46s^2 + 64s + 40

(a) Find η(-12), η(-10), η(-8).
(b) Find d(-12), d(-10), d(-8).
(c) Find H(-12), H(-10), H(-8).

3. Let A1 be any n × m matrix and A2 be any p × q matrix. Create a function block(A1,A2) that generates the (n + p) × (m + q) block diagonal matrix

[A1 0
 0  A2],

where the off-diagonal blocks have all elements equal to zero. Use the zeros command.

4. For designing an automobile suspension, a two-mass system can be used for modelling as shown in the following Figure B.5.

Figure B.5 The quarter-car model.


This is called a quarter-car model because it comprises one of the four wheel suspensions. The car and wheel positions are denoted by y(t) and x(t), respectively. These displacements are from static equilibrium, which corresponds to no inputs except gravity.
(a) Draw the free-body diagram of this system, assuming one-dimensional vertical motion of the mass above the wheel.
(b) Write down the equations of motion for the automobile.
(c) Express these equations in a State-Variable Matrix model (A, B, C, D) using the state-variable vector x(t) = [x ẋ y ẏ]^T, and justify this choice of state variables. Note that the car and wheel positions, y(t) and x(t), are the two outputs of the car system, while the input is the unit step bump r(t).
(d) Plot the position of the car and the wheel after the car hits a “unit bump” (i.e., r(t) is a unit step) using MATLAB. Assume m1 = 10 kg, m2 = 250 kg, kw = 500,000 N/m, ks = 10,000 N/m. Find the value of b that you would prefer if you were a passenger in the car.
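A possible starting point for part (d), assuming the standard quarter-car equations of motion m1ẍ = kw(r - x) - ks(x - y) - b(ẋ - ẏ) and m2ÿ = ks(x - y) + b(ẋ - ẏ); the damper value b below is an arbitrary trial value to be varied, not a given:

```matlab
m1 = 10; m2 = 250;            % wheel and car masses [kg]
kw = 500000; ks = 10000;      % tyre and suspension stiffnesses [N/m]
b = 2000;                     % trial damper value [Ns/m]; vary this
% States: [x xdot y ydot]'
A = [ 0            1      0       0;
     -(kw+ks)/m1  -b/m1   ks/m1   b/m1;
      0            0      0       1;
      ks/m2        b/m2  -ks/m2  -b/m2];
B = [0; kw/m1; 0; 0];         % unit step bump r(t) enters through the tyre
C = [1 0 0 0; 0 0 1 0];       % outputs: wheel position x and car position y
D = [0; 0];
step(A, B, C, D, 1)           % responses of x and y to a unit bump
```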

B.11 LABWORK II

1. Given the matrices

A = [1 0 2 1; 0 5 3 2; 2 1 1 4; 4 2 2 3], B = [1 2; 2 3; 4 6; 3 0], C = [5 0; 3 2; 1 3; 1 3],

x = [1 3 5], y = [2 3 6], z = [7 8 9];

evaluate
(a) AB + AC
(b) A(B + C) and verify that A(B + C) = AB + AC
(c) x + y
(d) 3x + 4y + z

2. Plot the following curves for the specified ranges:
(a) y = sin 3t for t = 0:10
(b) y = cos 2t for t = 0:10
(c) y = 3 sin t + 4 cos t for t = -5:15
(d) y = e^(-3t) for t = -1:5

B.12 LABWORK III

(1) Consider the mechanical system shown below (Figure B.6). The forces exerted by the springs are zero when x1 = x2 = x3 = 0. The input force is f(t) and the absolute displacements of m1, m2 and m3 are x1, x2 and x3, respectively. The output is the spring force in k2. The differential equations which represent the system can be expressed in the State-Variable Matrix model (A, B, C, D). This can be done by choosing the vector of state variables as

x = [x1 v1 x2 v2 x3 v3]^T.


Figure B.6 Mechanical system.

(a) Show that the State-Variable Matrix model is then given by

[ẋ1]   [ 0             1        0        0        0        0          ] [x1]   [ 0    ]
[v̇1]   [-(k1+k2)/m1  -b1/m1    k2/m1    0        k1/m1    b1/m1      ] [v1]   [ 1/m1 ]
[ẋ2] = [ 0             0        0        1        0        0          ] [x2] + [ 0    ] f(t)
[v̇2]   [ k2/m2         0       -k2/m2   -b2/m2    0        b2/m2      ] [v2]   [ 0    ]
[ẋ3]   [ 0             0        0        0        0        1          ] [x3]   [ 0    ]
[v̇3]   [ k1/m3        b1/m3     0        b2/m3   -k1/m3   -(b1+b2)/m3 ] [v3]   [ 0    ]

y = [k2 0 -k2 0 0 0] [x1 v1 x2 v2 x3 v3]^T + [0] f(t)

For the following data:

m1 = 0.5 kg
m2 = 0.75 kg
m3 = 2.0 kg
b1 = 1,000 Ns/m
b2 = 5 Ns/m
k1 = 50 N/m
k2 = 100 N/m


(b) Find the output response y(t) to an input force of 50 N.
(c) Find the response to an impulse force input.
(d) Choose values of b1 and b2 that will reduce oscillations, give a practical settling time, and produce smooth responses.
(e) Explain why the steady-state values of the MATLAB plots are in agreement with what is expected from analysing the mechanical system.
(f) Is the matrix A invertible, and why is this the case?
(Suggestion: For the responses, specify the time for which the plot is to be graphed, e.g., from 0 to 3 seconds.)

(2) If the output is the platform displacement x3, then the matrices C and D are obtained as follows:

y = [0 0 0 0 1 0] [x1 v1 x2 v2 x3 v3]^T + [0] f(t).

(a) Find the output response y(t) to an input force of 50 N.
(b) Find the response to an impulse force input.
(c) Explain why these MATLAB plots are in agreement with what is expected from analysing the mechanical system.

(3) For the translational mechanical system under consideration, the vector of independent variables (minimum number of state variables) can be chosen as

x(t) = [xR1 v1 xR2 v2 v3]^T,

where

xR1 = x1 - x3
xR2 = x1 - x2

(a) Show that the State-Variable Matrix model is then given by

[ẋR1]   [ 0            1        0        0       -1          ] [xR1]   [ 0    ]
[v̇1 ]   [-k1/m1      -b1/m1   -k2/m1    0        b1/m1      ] [v1 ]   [ 1/m1 ]
[ẋR2] = [ 0            1        0       -1        0          ] [xR2] + [ 0    ] f(t)
[v̇2 ]   [ 0            0        k2/m2  -b2/m2    b2/m2      ] [v2 ]   [ 0    ]
[v̇3 ]   [ k1/m3       b1/m3     0       b2/m3   -(b1+b2)/m3 ] [v3 ]   [ 0    ]

y = [0 0 k2 0 0] [xR1 v1 xR2 v2 v3]^T + [0] f(t).


(b) Show that this representation produces the same results as the six State-Variable Matrix system by repeating 1(a), (b) and (c) using the new five State-Variable Matrix system. (c) Is the matrix A invertible, and why is this the case?

B.13 LABWORK IV

1. Consider the 6 × 6 matrix state-space system described in LABWORK III. The same system can be represented in transfer function form. In MATLAB one can convert from state-space to transfer function form by using the following command:

[num, den] = ss2tf(A, B, C, D)

(a) Use the above command to get the pair (num, den) for the above system. Plot the step and impulse responses using num, den. The syntax for plotting the step and impulse responses using a transfer function is given as

step(num, den, t)
impulse(num, den, t)

Compare these plots with those obtained earlier using the state-space matrices (A, B, C, D) and explain why the responses are in agreement with what is expected.
(b) Similar to the ss2tf function, MATLAB also has a tf2ss function which converts the transfer function to the state-space form. The syntax for this function is given by

[A1, B1, C1, D1] = tf2ss(num, den)

Using this function to transform the (num, den) obtained in part (a) to state-space matrices (A1, B1, C1, D1), compare these matrices (A1, B1, C1, D1) with the state-space matrices above (A, B, C, D) and comment on the results so obtained. Is the matrix A1 invertible?

2. Repeat problem 1 for the 5 × 5 matrix state-space system described in LABWORK III. Compare the denominator of the transfer function obtained for these matrices with the denominator obtained in problem 1.

3. Apart from the step and impulse inputs, there are many other kinds of inputs, e.g., sinusoidal, triangular wave, etc. To obtain the response for such inputs MATLAB has a built-in function lsim. The usage for lsim is shown below:

lsim(num, den, u, t)
lsim(A, B, C, D, u, t)

where u is the input. Find the response of the system described in problem 1 (using both A, B, C, D and num, den) to the following inputs:
(a) u = 2t
(b) u = sin 5t
What would happen to the response if the frequency in part (b) is increased from 5 to 10? Use subplot to plot the four curves on one sheet and compare them.

4. Repeat problem 3 for the system described in problem 2 (using both A, B, C, D and num, den). Data:

m1 = 0.5 kg
m2 = 0.75 kg
m3 = 2 kg
b1 = 1,000 Ns/m
b2 = 5 Ns/m
k1 = 50 N/m
k2 = 100 N/m

B.14 LABWORK V

1. Consider the system described by the following state-space matrices:

A = [0 1; 0 -1/20], B = [0; 1/20],
C = [1 0], D = [0]

Find the transfer function of the system using the MATLAB function ss2tf.

2. The input-output differential equation of a system with zero initial conditions is given below:

ÿ + 6ẏ + 25y = 9u + 3u̇

(a) Find the transfer function of the system by using Laplace transforms.
(b) Using the transfer function (num, den) obtained in part (a), find the state-variable matrices (A, B, C and D). (Use the MATLAB function tf2ss.)

3. The MATLAB function [z, p, k] = ss2zp(A, B, C, D) finds the zeros and poles of the system described by (A, B, C, and D). Find the zeros and poles of the system described in problem 2 using the state-space description (A, B, C, and D). Also construct the transfer function using the zeros and poles, and compare with the transfer function obtained in problem 2(a).

B.15 LABWORK VI

Translational, rotational and electrical systems can be shown to manifest the same dynamic behaviour, and hence their models can be used interchangeably. Consider the four second-order systems shown in Figures B.7–B.10:

Figure B.7 A translational mechanical system.
Figure B.8 A rotational mechanical system.
Figure B.9 An RLC series circuit.
Figure B.10 An RLC parallel circuit.

Using free-body diagrams and circuit laws (current and voltage), it can be shown that the input-output differential equations for the four systems are given by:

ẍ + (b1/m)ẋ + (k1/m)x = (1/m)f(t)

θ̈ + (b2/J)θ̇ + (k2/J)θ = (1/J)T(t)

ï + (R1/L1)i̇ + (1/(L1C1))i = (1/L1)v̇i(t)

v̈o + (1/(R2C2))v̇o + (1/(L2C2))vo = (1/C2)i̇(t)

1. By comparing these models with the standard second-order model ÿ + 2ξωnẏ + ωn²y = f(t), find the expressions for the respective natural frequency (ωn) and damping ratio (ξ) for each of the four systems.
2. Use the following data for the four dynamic systems for the rest of the problems:

b1 = 15 Ns/m
b2 = 15 Nms
k1 = 100 N/m
k2 = 100 Nm
m = 1 kg
J = 1 kgm²
C1 = C2 = 0.1 F
L1 = L2 = 0.1 H
R1 = 1.5 Ω
R2 = 2/3 Ω

3. (a) Find the differential equation for each of the four systems (hand calculations).
(b) Find the transfer function for each of the four systems (hand calculations).
(c) Compare the four characteristic equations.
(d) Compare the four transfer functions.
4. Use MATLAB to obtain the following:
(a) the State-Variable Matrix models (A, B, C, and D) for each of the four systems.
(b) the unit step response for each of the four systems.
(c) the impulse response for each of the four systems.
5. Compare the system responses (behaviour) of:
(a) the two mechanical systems.
(b) the two electrical systems.
6. Use the FVT and IVT to confirm the eight MATLAB plots. Find the DC gain for each of the four systems.
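With the data above, all four systems share the same natural frequency and damping ratio, which the following sketch confirms numerically (R2 = 2/3 Ω is taken from the data list):

```matlab
% All four LABWORK VI systems share wn = 10 rad/s and xi = 0.75 with this data.
m = 1;    b1 = 15;  k1 = 100;       % translational
J = 1;    b2 = 15;  k2 = 100;       % rotational
L1 = 0.1; C1 = 0.1; R1 = 1.5;       % RLC series
L2 = 0.1; C2 = 0.1; R2 = 2/3;       % RLC parallel

wn = [sqrt(k1/m), sqrt(k2/J), 1/sqrt(L1*C1), 1/sqrt(L2*C2)]   % all equal 10
two_xi_wn = [b1/m, b2/J, R1/L1, 1/(R2*C2)]                    % all equal 15
xi = two_xi_wn ./ (2*wn)                                      % all equal 0.75

sys = tf(1/m, [1 b1/m k1/m]);       % translational transfer function
subplot(1, 2, 1); step(sys);    title('Unit step response');
subplot(1, 2, 2); impulse(sys); title('Impulse response');
```

Since the four characteristic polynomials coincide, one pair of plots represents all four systems; for the translational system the FVT gives a step-response final value (DC gain) of 1/k1 = 0.01.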

B.16 LABWORK VII

Consider the inverted pendulum in Figure B.11. From the free-body diagram of this dynamic system, the equation of motion can be found by a torque balance between the torque due to gravity, mg × l; the inertial torque, ml² × θ̈(t); and the externally applied torque T(t), such that the nonlinear Input-Output model of the system is given by:

Figure B.11 Inverted pendulum.


ml² θ̈(t) = mgl sin θ(t) + T(t)
⇒ θ̈(t) − (g/l) sin θ(t) = T(t)/(ml²)

1. Find the linearised Input-Output model of the dynamic system.
2. With the angle θ as output and torque T as input, and using the following numerical values: l = 1.09 m, m = 0.8417 kg, and g = 9.81 m/s², show that the transfer function of the inverted pendulum system is given by:

Θ(s)/T(s) = 1/(s² − 9) = 1/((s + 3)(s − 3))

3. Deduce from the transfer function that the inverted pendulum system is unstable. Describe an experiment you could do with everyday objects to verify this result.
4. Perform a sequence of attempts to stabilise and control the inverted pendulum. First, consider proportional (P) control. Draw the block diagram of the proportional control system, indicating clearly the signals flowing over the arrows and the transfer functions of the blocks in the diagram.
5. Using the root locus technique, argue that with proportional control the system cannot be stabilised; at best, it can be made marginally stable. Demonstrate this.
6. Next, consider proportional-derivative (PD) control. Draw the modified block diagram and root locus, and argue that the inverted pendulum system can now be stabilised with an appropriate choice of the controller gain.
7. What additional information does the PD controller account for that allows it to stabilise the system where the P controller failed?
8. Now design the PD controller for the following transient specifications: peak time Tp = 1.61 seconds and overshoot 20%. Interpret these specifications as the locations of the desired closed-loop dominant poles on the s-plane. What is the natural frequency of the desired (compensated) system?
9. Find the location of the PD controller's zero and the gain K that meet these transient specifications.
10. Now get MATLAB to display the step response of the PD-compensated inverted pendulum. Explain why the overshoot is slightly higher and the peak time slightly shorter than the design targets.
11. What is the steady-state error? Calculate it directly from the closed-loop transfer function of your PD-compensated inverted pendulum, and then verify your result using MATLAB.
12. As a final effort to correct this system, attempt to eliminate the steady-state error by designing a complete PID controller (with individual gains for the P, I and D components). Now interpret the peak time and overshoot requirements as upper bounds: achieve a peak time of less than Tp,max = 1.61 seconds and an overshoot of no more than 20%. Calculate the final PID controller's gains KP, KD and KI. Use the MATLAB results to explain the advantages of a PID controller over the P, PI and PD controllers.
13. Using the results from your PID controller in (12), design an Operational Amplifier circuit to implement this PID controller.
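As a check on steps 8–9, the transient specifications translate into desired dominant pole locations as sketched below; the PD zero and gain themselves still have to be placed per step 9.

```matlab
% Translate Tp = 1.61 s and 20% overshoot into desired dominant poles.
OS = 0.20;  Tp = 1.61;
zeta = -log(OS)/sqrt(pi^2 + log(OS)^2)   % damping ratio, approx. 0.456
wd = pi/Tp;                              % damped frequency, approx. 1.95 rad/s
wn = wd/sqrt(1 - zeta^2)                 % natural frequency, approx. 2.19 rad/s
s_des = -zeta*wn + 1i*wd                 % desired dominant pole, approx. -1 + 1.95j

G = tf(1, [1 0 -9]);    % inverted pendulum, 1/(s^2 - 9)
rlocus(G)               % under P control the closed-loop poles satisfy
                        % s^2 + (K - 9) = 0, i.e. s = +/- j*sqrt(K - 9) for
                        % K > 9: the locus never enters the open left half
                        % plane, so the system is at best marginally stable
```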

B.17 LABWORK VIII

An inverted pendulum mounted on a motor-driven cart is shown in Figure B.12. This is a model of the attitude control for a space booster on take-off. The objective of the control system is to keep


Figure B.12 Inverted pendulum system (space booster model).

the space booster in a vertical position. The same model can represent the SpaceX rocket that landed on a floating platform at sea on 9 April 2016, and on land on 30 June 2021. The objective is to keep the pendulum in the vertical position, from which it falls under any slight disturbance; for this reason, an effective control system is needed to balance the pendulum. The analysis assumes only small deviations in the pendulum angle, δθ, and in the cart displacement, δx.

1. Implement the (linear) State-Space model of the inverted pendulum system as presented in Figure B.12. Use the following physical parameters: M = 10 kg, m = 0.5 kg and L = 1.5 m.
2. Design and implement a full state feedback controller, together with the introduction of a reference input, to control the cart's position while maintaining the pendulum in the upright position. Your controller should have a settling time of 0.5 seconds and ωn > 1 rad/sec.

B.18 LABWORK IX

2. Discretise the PID controller designed in (a) using:
• Tustin's method.
• Matched Pole-Zero (MPZ) method.
Use MATLAB to simulate the unit step response of each of these digital implementations for sample times of T = 1, 0.1, and 0.01 seconds.
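A sketch of the state feedback design under a standard small-angle linearisation (point-mass pendulum on a frictionless cart — an assumed textbook model, not necessarily the book's exact derivation), followed by c2d calls illustrating the Tustin and matched pole-zero discretisations for a hypothetical PID:

```matlab
% Linearised cart-pendulum: state x = [x; xdot; theta; thetadot], input u = force.
M = 10; m = 0.5; L = 1.5; g = 9.81;        % given physical parameters
A = [0 1 0             0;
     0 0 -m*g/M        0;
     0 0 0             1;
     0 0 (M+m)*g/(M*L) 0];
B = [0; 1/M; 0; -1/(M*L)];
C = [1 0 0 0];  D = 0;

% Full state feedback: a dominant real part of -8 targets Ts ~ 4/8 = 0.5 s.
p = [-8+3i, -8-3i, -20, -21];              % example closed-loop poles (a choice)
K = place(A, B, p);

% Discretising a hypothetical (filtered) PID by Tustin and matched pole-zero:
Cpid = pid(2, 1, 0.5, 0.01);               % Kp, Ki, Kd, filter Tf -- assumed gains
Cd_tustin = c2d(Cpid, 0.1, 'tustin');
Cd_mpz = c2d(Cpid, 0.1, 'matched');
```

Repeating the c2d calls with T = 1, 0.1 and 0.01 s and plotting step responses shows how the digital approximations converge to the continuous design as the sample time shrinks.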

B.19 LABWORK X
B.19.1 CONTROL USING ARTIFICIAL INTELLIGENCE AND MATLAB

Design and implement a DC motor control system using Artificial Intelligence and MATLAB Simulink. The objective of this exercise is to provide students and practitioners with hands-on experience in controlling a DC motor using Machine Learning techniques in MATLAB Simulink.


B.19.1.1 System Modeling

1. Start MATLAB and open Simulink.
2. Create a new Simulink model.
3. Search for and add the appropriate DC motor block to represent the motor in the model.
4. Connect input and output blocks (e.g., voltage source, encoder) to simulate motor operation.

B.19.1.2 Fuzzy Sets and Membership Functions Design

1. Determine the input and output variables required for motor control (e.g., error, change in error).
2. Define fuzzy sets and design appropriate membership functions for the input and output variables.
3. Configure linguistic terms and their corresponding membership functions using the Fuzzy Logic Toolbox.

B.19.1.3 Fuzzy Rule Base Design

1. Identify the rules that govern the behaviour of the motor control system.
2. Develop a set of fuzzy rules based on the input variables and desired output behaviour.
3. Define the fuzzy logic inference system by mapping the rules to appropriate membership function operations.

B.19.1.4 Implementing the Fuzzy Logic Controller

1. Return to the Simulink model.
2. Add a “Fuzzy Logic Controller” block from the Simulink Library Browser.
3. Configure the Fuzzy Logic Controller block with the defined fuzzy sets, membership functions, and rule base.
4. Connect the input and output blocks of the DC motor with the Fuzzy Logic Controller block.

B.19.1.5 Simulation and Evaluation

1. Start the simulation in Simulink.
2. Monitor the motor’s response and observe the control performance based on the fuzzy logic controller.
3. Analyse the simulation results, such as tracking accuracy, response time, and stability.
4. Discuss the performance of the fuzzy logic control system and compare it with traditional control strategies.

B.19.1.6 Fine-Tuning and Optimisation

1. Modify the fuzzy sets, membership functions, or rule base to improve the control performance if necessary.
2. Discuss strategies for optimising the fuzzy logic controller, such as adjusting the scaling factors or adding additional rules.
3. Re-simulate the system and evaluate the impact of the changes on the control performance.

B.19.1.7 Conclusion and Discussion

1. Summarise the key findings and outcomes of the exercise.
2. Reflect on the advantages and limitations of using fuzzy logic control for motor control.
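The fuzzy controller of B.19.1.2–B.19.1.4 can be prototyped at the command line before it is wired into Simulink. The sketch below uses the current Fuzzy Logic Toolbox API (R2018b or later); the variable ranges and membership functions are illustrative assumptions, not prescribed values.

```matlab
% Minimal Mamdani FIS for DC motor control: one input (speed error, assumed
% normalised to [-1, 1]) and one output (voltage command) -- all values assumed.
fis = mamfis('Name', 'dcmotor');
fis = addInput(fis, [-1 1], 'Name', 'error');
fis = addMF(fis, 'error', 'trimf', [-2 -1 0], 'Name', 'neg');
fis = addMF(fis, 'error', 'trimf', [-1  0 1], 'Name', 'zero');
fis = addMF(fis, 'error', 'trimf', [ 0  1 2], 'Name', 'pos');
fis = addOutput(fis, [-12 12], 'Name', 'voltage');
fis = addMF(fis, 'voltage', 'trimf', [-24 -12 0], 'Name', 'decrease');
fis = addMF(fis, 'voltage', 'trimf', [-12  0 12], 'Name', 'hold');
fis = addMF(fis, 'voltage', 'trimf', [  0 12 24], 'Name', 'increase');
fis = addRule(fis, ["error==neg => voltage=decrease"; ...
                    "error==zero => voltage=hold"; ...
                    "error==pos => voltage=increase"]);
u = evalfis(fis, 0.5)    % voltage command for a positive speed error
```

The same fis object can then be referenced from the Simulink “Fuzzy Logic Controller” block in B.19.1.4.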


B.19.1.8 Training Data Generation

1. Determine the input and output variables required for motor control (e.g., input voltage and desired speed).
2. Write a MATLAB script to generate a dataset with various input-output combinations, covering a range of operating conditions.
3. Run the script to generate the training dataset and save it to a file.

B.19.1.9 Training the Machine Learning Model

1. Import the training dataset into MATLAB.
2. Preprocess the data if necessary, such as normalising input and output variables.
3. Create an ANN model using the Deep Learning Toolbox in MATLAB. Define the network architecture and hyper-parameters.
4. Split the dataset into training and validation sets.
5. Train the ANN model using the training dataset and monitor the validation performance.
6. Save the trained model for later use.

B.19.1.10 Implementing the Control System

1. Return to the Simulink model.
2. Add a “MATLAB Function” block to implement the control logic.
3. Write MATLAB code inside the “MATLAB Function” block to load the trained ANN model and perform control calculations based on measured inputs.
4. Connect the input and output blocks of the DC motor with the “MATLAB Function” block.
5. Configure appropriate parameters, such as sampling time and simulation duration, in the Simulink model.

B.19.1.11 Simulation and Evaluation

1. Start the simulation in Simulink.
2. Monitor the motor’s response and observe the control performance.
3. Analyse the simulation results, such as tracking accuracy, response time, and stability.
4. Discuss the performance of the control system and compare it with traditional control strategies.
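The data-generation and training steps of B.19.1.8–B.19.1.9 can be sketched with a shallow network from the Deep Learning Toolbox; the motor map used to synthesise the data below is a hypothetical static characteristic, purely for illustration.

```matlab
% Generate training data from an assumed static motor map (hypothetical).
v = linspace(0, 12, 400);                  % input voltage samples
w = 15*v - 0.1*v.^2 + 0.5*randn(size(v)); % noisy "measured" speed

net = feedforwardnet(10);                  % one hidden layer of 10 neurons
net.divideParam.trainRatio = 0.7;          % train/validation/test split
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net = train(net, v, w);                    % supervised training with validation

w_hat = net(6.0)                           % predicted speed at 6 V
```

The trained network can be saved for the “MATLAB Function” block of B.19.1.10, or converted to a Simulink block directly with gensim(net).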

B.19.1.12 Conclusion and Discussion

1. Summarise the key findings and outcomes of the exercise.
2. Reflect on the advantages and limitations of using machine learning for motor control.
3. Encourage students to ask questions, share their experiences, and discuss possible improvements.

B.19.2 DC MOTOR CONTROL USING ARDUINO

1. Upload the Arduino sketch to the Arduino board.
2. Test the motor control by running the Arduino sketch and observing the motor’s behaviour.
3. Discuss the observations and make any necessary adjustments to the code or hardware connections.
4. Demonstrate different motor control scenarios, such as varying the speed or direction of the motor.


B.19.2.1 Closed-Loop Control

1. Introduce the concept of closed-loop control using sensors (e.g., potentiometer, optical encoder).
2. Guide students in connecting a sensor to the Arduino board to provide feedback for closed-loop control.
3. Modify the Arduino sketch to implement closed-loop control based on the sensor feedback.
4. Test the closed-loop control system and evaluate its performance.

B.19.2.2 Performance Evaluation and Optimisation

1. Analyse the motor control performance, including speed accuracy, response time, and stability.
2. Discuss possible improvements and optimisations to the control system, such as tuning control parameters or implementing PID control.
3. Experiment with different control strategies and observe their impact on the motor’s behaviour.

B.19.2.3 Conclusion and Discussion

1. Summarise the key findings and outcomes of the laboratory exercise.
2. Reflect on the advantages and limitations of using Arduino for motor control.
3. Encourage students to ask questions, share their experiences, and discuss possible improvements.


